Entity Deduplication on ScholarlyData

ScholarlyData is the new, and currently the largest, reference linked dataset of the Semantic Web community, covering papers, people, organisations, and events related to its academic conferences. Originally derived from the Semantic Web Dog Food (SWDF), it addresses multiple issues in data representation and maintenance by (i) adopting a novel data model and (ii) establishing an open-source workflow to support the addition of new data from the community. Nevertheless, the major remaining issue with the current dataset is the presence of multiple URIs for the same entities, typically persons and organisations. In this work we: (i) perform entity deduplication on the whole dataset using supervised classification methods; (ii) devise a protocol for choosing the most representative URI for an entity and deprecating the duplicate ones, while ensuring backward compatibility for them; (iii) incorporate the automatic deduplication step into the general workflow to reduce the creation of duplicate URIs when new data is added. Our early experiments focused on person and organisation URIs, and the results show a significant improvement over state-of-the-art solutions. On the entire dataset, we consolidated over 100 pairs of duplicate person URIs and over 800 pairs of duplicate organisation URIs, together with their associated triples (over 1,800 and 5,000 respectively), significantly improving the overall quality and connectivity of the data graph. Integrated into the ScholarlyData data publishing workflow, we believe this is a major step towards the creation of clean, high-quality scholarly linked data on the Semantic Web.
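The supervised deduplication step summarised in the abstract can be pictured with the minimal Python sketch below: candidate URI pairs are turned into similarity features, and a binary classifier decides whether the two URIs denote the same person. The feature set, the logistic-regression model, and the toy records are illustrative assumptions for this example, not the actual features or classifier used in the paper.

# Hypothetical sketch of supervised pairwise entity deduplication.
# Feature design and model choice are assumptions, not the paper's method.
from difflib import SequenceMatcher

from sklearn.linear_model import LogisticRegression

def features(a: dict, b: dict) -> list[float]:
    """Similarity features for a candidate pair of person records."""
    name_sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    same_affiliation = float(a.get("affiliation") == b.get("affiliation"))
    return [name_sim, same_affiliation]

# Tiny hand-made training set of (record_a, record_b, is_duplicate) pairs.
pairs = [
    ({"name": "A. Smith", "affiliation": "CNR"},
     {"name": "Alice Smith", "affiliation": "CNR"}, 1),
    ({"name": "A. Smith", "affiliation": "CNR"},
     {"name": "Bob Jones", "affiliation": "KMi"}, 0),
    ({"name": "C. Rossi", "affiliation": "UniBo"},
     {"name": "Carla Rossi", "affiliation": "UniBo"}, 1),
    ({"name": "Carla Rossi", "affiliation": "UniBo"},
     {"name": "Dana Lee", "affiliation": "IBM"}, 0),
]
X = [features(a, b) for a, b, _ in pairs]
y = [label for _, _, label in pairs]

clf = LogisticRegression().fit(X, y)

# A prediction of 1 means "same entity": the pair becomes a candidate
# for URI consolidation in the publishing workflow.
cand = features({"name": "A. Rossi", "affiliation": "UniBo"},
                {"name": "Carla Rossi", "affiliation": "UniBo"})
print(clf.predict([cand])[0])

Pairs classified as duplicates would then feed the URI-selection and deprecation protocol described in the abstract, with deprecated URIs kept dereferenceable (for instance via owl:sameAs links to the representative URI) so that existing references remain backward compatible.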
Publication type:
Conference proceedings contribution
Publisher:
Springer, Berlin, Germany
Source:
14th International Conference, ESWC 2017, Portorož, Slovenia, 28/05/2017–01/06/2017, pp. 85–100
Date:
2017
Resource Identifier:
http://www.cnr.it/prodotto/i/371993
https://dx.doi.org/10.1007/978-3-319-58068-5_6
info:doi:10.1007/978-3-319-58068-5_6
urn:isbn:978-3-319-58068-5
Language:
English