Information extraction from Wikipedia using pattern learning

Bibliographic Details
Main Author: Miháltz Márton
Corporate Author: 7th Conference on Hungarian Computational Linguistics (2010, Szeged)
Format: Article
Published: 2010
Series: Acta Cybernetica 19, No. 4
Keywords: Computer science, Linguistics - computer applications
Online Access: http://acta.bibl.u-szeged.hu/12888
Summary: In this paper we present solutions for the crucial task of extracting structured information from massive free-text resources, such as Wikipedia, for the sake of semantic databases serving upcoming Semantic Web technologies. We demonstrate both a verb frame-based approach using deep natural language processing techniques with extraction patterns developed by human knowledge experts, and machine learning methods using shallow linguistic processing. We also propose a method for learning verb frame-based extraction patterns automatically from labeled data. We show that labeled training data can be produced with only minimal human effort by utilizing existing semantic resources and the special characteristics of Wikipedia. Custom solutions for named entity recognition are also possible in this scenario. We present an evaluation and comparison of the different approaches for several relations.
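To give a flavor of the pattern-based extraction the abstract describes, here is a minimal illustrative sketch. The patterns, relation names, and matching strategy below are hypothetical assumptions for demonstration only; the paper's actual verb frames use deep linguistic processing rather than surface regular expressions.

```python
import re

# Hypothetical surface patterns standing in for verb frames (illustrative
# only; not the patterns from the paper). Each pattern captures a subject
# entity and an object entity around a relation-bearing verb.
PATTERNS = {
    "born_in": re.compile(r"^(?P<subj>[A-Z][\w ]+?) was born in (?P<obj>[A-Z][\w ]+?)\.$"),
    "founded": re.compile(r"^(?P<subj>[A-Z][\w ]+?) founded (?P<obj>[A-Z][\w ]+?)\.$"),
}

def extract_relations(sentence):
    """Return (relation, subject, object) triples matched in a sentence."""
    triples = []
    for relation, pattern in PATTERNS.items():
        match = pattern.match(sentence)
        if match:
            triples.append((relation, match.group("subj"), match.group("obj")))
    return triples

print(extract_relations("Albert Einstein was born in Ulm."))
```

In the approach the abstract outlines, such patterns would instead be derived from verb frames, learned automatically from labeled data, and applied to Wikipedia text with custom named entity recognition.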
Physical Description: pp. 677-694
ISSN: 0324-721X