The scientific approach of the project can be schematized as shown in the graph and explained in more detail below. In an initial phase, the semantic and geometric datasets are prepared, followed by the development and integration of the software components that compose the platform. This process results in the Re-Unification, Re-Association and potential Re-Assembly of cultural heritage objects.


Preparation of Semantic data. Semantic data is based on a corpus of texts (archaeological publications, papers, archival data, excavation reports and catalogues). Through Natural Language Processing (NLP) this corpus is analysed, a vocabulary is created and meaningful relationships between single terms are established. In parallel, a Cultural Heritage Artifact Partonomy (CHAP) is created to describe a cultural heritage artifact, dividing it into components and visual and morphological features, with hierarchical relationships between them. In the end, CHAP is combined with the outcome of the NLP analysis in a complex system of relationships, structured in CIDOC CRM and then codified as RDF annotations.
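As a rough illustration of how such partonomy relationships might be stored, here is a minimal sketch of RDF-style subject-predicate-object triples. The `chap:` identifiers are invented for the example; `crm:P46_is_composed_of` echoes CIDOC CRM's part-whole property, but a real implementation would use a proper triple store and the full CRM model.

```python
# Minimal RDF-style triple store (illustrative only; a real system
# would use an actual RDF library and the full CIDOC CRM ontology).
class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def objects(self, subject, predicate):
        # All objects o such that (subject, predicate, o) is asserted.
        return {o for s, p, o in self.triples if s == subject and p == predicate}

store = TripleStore()
# Hypothetical CHAP partonomy: a head has a face; the face has eyes and a nose.
store.add("chap:Head", "crm:P46_is_composed_of", "chap:Face")
store.add("chap:Face", "crm:P46_is_composed_of", "chap:Eye")
store.add("chap:Face", "crm:P46_is_composed_of", "chap:Nose")

parts_of_face = store.objects("chap:Face", "crm:P46_is_composed_of")
```

Queries over such triples are what the semantic search and annotation steps below build on.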



Preparation of Geometric data. The cultural heritage artifacts are acquired digitally through 3D scanning or photogrammetry. The raw data is post-processed and 3D models of each object are created at different resolutions, according to the needs of the various uses of the models within the platform (e.g. visualization, 3D matching etc.). The colour information for each model is stored on the vertices.
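The decimation into several resolutions would normally be done with a dedicated mesh-processing tool; purely as a sketch of the data layout (colour stored per vertex) and of one simple simplification scheme (vertex clustering, which here ignores face connectivity), consider:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Mesh:
    vertices: list  # [(x, y, z), ...]
    colours: list   # per-vertex (r, g, b), parallel to vertices

def cluster_downsample(mesh, cell=1.0):
    """Vertex-clustering simplification: snap vertices to a regular grid
    and average positions and colours within each occupied cell."""
    cells = defaultdict(list)
    for v, c in zip(mesh.vertices, mesh.colours):
        key = tuple(int(coord // cell) for coord in v)
        cells[key].append((v, c))
    verts, cols = [], []
    for group in cells.values():
        n = len(group)
        verts.append(tuple(sum(v[i] for v, _ in group) / n for i in range(3)))
        cols.append(tuple(sum(c[i] for _, c in group) / n for i in range(3)))
    return Mesh(verts, cols)

full = Mesh(
    vertices=[(0.0, 0.0, 0.0), (0.2, 0.1, 0.0), (5.0, 0.0, 0.0)],
    colours=[(255, 0, 0), (255, 0, 0), (0, 0, 255)],
)
low = cluster_downsample(full, cell=1.0)  # two nearby vertices merge into one
```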

The 3D models can be Faceted, i.e. the principal sides of each fragment are recognised: the fracture surfaces, the inside skin and the outside skin.
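One simple heuristic for such faceting, sketched below under the assumption that an outward direction for the fragment has already been estimated, compares each face normal with that direction; the threshold value is illustrative, not the project's actual parameter:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def classify_facet(normal, outward, threshold=0.7):
    """Heuristic faceting: compare a unit face normal with the estimated
    outward direction of the fragment (also a unit vector).
    Normals roughly parallel to it -> outside skin, anti-parallel ->
    inside skin, everything else -> fracture."""
    d = dot(normal, outward)
    if d > threshold:
        return "outside skin"
    if d < -threshold:
        return "inside skin"
    return "fracture"
```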


Quantification. Each fragment is analysed in terms of its geometric properties, such as thickness, dimensions and facets' curvature, and this data enriches the semantic description of the object.
A feature extraction technique (the Hough transform) is applied to detect features such as eyes, mouths, flower decorations etc. The result is a library of algorithms that quantify and identify specific features. The identified features can be enriched with additional parameters and attributes, such as their dimensions, symmetry, etc.
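As an illustration of the Hough principle (a from-scratch line detector here, not the project's actual feature library), each edge point votes for all (θ, ρ) line parameters passing through it, and the strongest accumulator bin reveals the dominant feature:

```python
import math
from collections import Counter

def hough_lines(points, n_theta=180):
    """Minimal Hough transform: every edge point (x, y) votes for all
    discretised (theta, rho) pairs of lines through it, using the
    normal form rho = x*cos(theta) + y*sin(theta). The bin with the
    most votes corresponds to the dominant line."""
    acc = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] += 1
    return acc.most_common(1)[0]  # ((theta_index, rho), votes)

# Ten points on the vertical line x = 3, plus one outlier.
pts = [(3, y) for y in range(10)] + [(7, 1)]
(best_theta, best_rho), votes = hough_lines(pts)
```

The same voting idea generalises to circles and other parametric shapes used for eyes or flower decorations, with a higher-dimensional accumulator.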

Semantic search. Through the preparation of the metadata, comprehensive semantic searches across various data types are enabled. Standard graph-similarity algorithms and NLP methods are used to optimize query results.
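A minimal stand-in for the graph-similarity step could be a Jaccard score over each fragment's set of semantic annotations (the tags below are invented for the example):

```python
def jaccard(a, b):
    """Jaccard similarity between two annotation sets: a simple
    stand-in for the graph-similarity ranking of query results."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

frag_a = {"head", "eye", "terracotta", "archaic"}
frag_b = {"head", "nose", "terracotta", "archaic"}
score = jaccard(frag_a, frag_b)  # 3 shared tags out of 5 distinct ones
```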

Geometric search. The information gained from quantifying the geometry is stored and used for content-based queries. Queries combine heterogeneous algorithms that describe geometric features and properties, as well as the colours analysed in the quantification process.

The integration of semantic and geometric search makes it possible to formulate novel queries that combine semantic and geometric values (e.g. find all fragments with a thickness of less than 2 cm), as well as queries about an entire fragment or about a specific characteristic of an item (e.g. find a fragment with a nose shaped like this one).
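Such a combined query might, in its simplest form, chain a geometric predicate with a semantic one over the fragment metadata (the records and field names below are hypothetical):

```python
# Hypothetical fragment metadata combining geometric and semantic values.
fragments = [
    {"id": "F1", "thickness_cm": 1.4, "tags": {"head", "terracotta"}},
    {"id": "F2", "thickness_cm": 2.6, "tags": {"head", "terracotta"}},
    {"id": "F3", "thickness_cm": 1.1, "tags": {"torso", "terracotta"}},
]

def query(frags, max_thickness=None, required_tags=()):
    """Combined query: a geometric predicate (thickness) chained with
    a semantic one (required annotation tags)."""
    hits = []
    for f in frags:
        if max_thickness is not None and f["thickness_cm"] >= max_thickness:
            continue
        if not set(required_tags) <= f["tags"]:
            continue
        hits.append(f["id"])
    return hits

# "Find all head fragments with thickness less than 2 cm."
ids = query(fragments, max_thickness=2.0, required_tags={"head"})
```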


Annotations. Features of the model are annotated according to CHAP concepts and related to each other with structural properties (e.g. the eyes are part of the face and above the nose).
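These structural properties can be traversed like any graph; for instance, a transitive part_of closure answers "what is this feature part of?" (the relation names below mirror the example in the text, not the actual CHAP vocabulary):

```python
# Structural annotations as (feature, relation, feature) facts.
relations = [
    ("eye", "part_of", "face"),
    ("face", "part_of", "head"),
    ("eye", "above", "nose"),
]

def part_of_closure(part, relations):
    """Transitive closure of part_of: every whole that `part`
    directly or indirectly belongs to."""
    wholes, frontier = set(), {part}
    while frontier:
        nxt = {w for p, r, w in relations if r == "part_of" and p in frontier}
        nxt -= wholes
        wholes |= nxt
        frontier = nxt
    return wholes

wholes = part_of_closure("eye", relations)  # face, and through it, head
```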

CHAP classes and relationships are also used to create a 3D template, which is used to semi-automatically arrange fragments of the same object. Fragments are automatically positioned on the template according to their annotations and can then be adjusted manually.




Pre-Reassembly and Reassembly. A selection of potentially complementary fragments is made on the basis of their geometry and their associated semantic information. A Pre-Reassembly phase provides a coarse initial alignment, based on localized contour information (shape and colour) and on semantics (a foot will not be aligned with a neck).
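The semantic part of this pre-filter might, in a much simplified form, consult an adjacency table over annotated part classes (the table below is invented for illustration):

```python
# Hypothetical adjacency of body parts derived from the partonomy:
# only fragments annotated with the same or adjacent parts are
# considered candidate matches.
ADJACENT = {
    ("head", "neck"), ("neck", "torso"),
    ("torso", "leg"), ("leg", "foot"),
}

def compatible(part_a, part_b):
    """Semantic pre-filter: a foot is never matched against a neck."""
    return part_a == part_b \
        or (part_a, part_b) in ADJACENT \
        or (part_b, part_a) in ADJACENT

candidates = [p for p in ["neck", "torso", "foot"] if compatible("head", p)]
```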

Initial alignment based on shape. Fragments are filtered by matching curvature and then rotated according to the information provided by the Faceting process.
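A toy version of the curvature filter: two fracture contours can mate when one curvature profile is approximately the sign-flipped reverse of the other (sampling, tolerance and orientation handling are all greatly simplified here):

```python
def curvature_match(profile_a, profile_b, tol=0.1):
    """Candidate filter: a convex stretch on one fracture contour must
    meet a matching concave stretch on the other, so profile_b,
    traversed in the opposite direction with curvature negated,
    should approximate profile_a."""
    if len(profile_a) != len(profile_b):
        return False
    flipped = [-k for k in reversed(profile_b)]
    return all(abs(a - b) <= tol for a, b in zip(profile_a, flipped))
```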


Initial alignment based on colour. Assuming continuity in pattern and colour, each fragment's surface is extrapolated from 2D images. The extrapolated images are then compared with other fragments for similarity and overlap in colour and pattern.
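One generic way to compare colour information between an extrapolated strip and a candidate neighbour is a coarse colour histogram with histogram intersection as the similarity score; this illustrates the idea rather than the project's actual method:

```python
def colour_histogram(pixels, bins=4):
    """Coarse RGB histogram (`bins` bins per channel) of a list of
    (r, g, b) pixels, normalised to sum to 1."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = len(pixels)
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical colour distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Reddish extrapolated strip vs. a reddish candidate fragment.
h1 = colour_histogram([(200, 50, 50)] * 4)
h2 = colour_histogram([(200, 50, 50), (210, 60, 40), (195, 55, 45), (205, 50, 50)])
sim = histogram_intersection(h1, h2)
```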

During Reassembly, a fine alignment is computed and assembly loops are closed. The matched fragments can then be saved into the system as a single object.
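Fine alignment is typically an iterative 3D procedure (e.g. ICP); purely as a self-contained sketch of the least-squares principle behind it, here is the closed-form rigid alignment of matched 2D contour points, written with complex numbers:

```python
import cmath

def fine_align_2d(source, target):
    """Least-squares rigid 2D alignment (rotation + translation) of
    point pairs with known correspondences: the 2D Procrustes closed
    form, with points as complex numbers."""
    a = [complex(x, y) for x, y in source]
    b = [complex(x, y) for x, y in target]
    ca = sum(a) / len(a)
    cb = sum(b) / len(b)
    # Optimal rotation angle from the cross-covariance of centred points.
    m = sum((q - cb) * (p - ca).conjugate() for p, q in zip(a, b))
    rot = cmath.exp(1j * cmath.phase(m))
    aligned = [rot * (p - ca) + cb for p in a]
    return [(z.real, z.imag) for z in aligned]

# Target is the source rotated 90 degrees and translated by (2, 3).
aligned = fine_align_2d(
    [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)],
    [(2.0, 3.0), (2.0, 4.0), (1.0, 4.0)],
)
```

In 3D the same idea uses an SVD of the cross-covariance matrix, and ICP iterates it while re-estimating correspondences.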