SciELO - Scientific Electronic Library Online

 
Computación y Sistemas

On-line version ISSN 2007-9737; Print version ISSN 1405-5546

Abstract

MAJUMDER, Goutam; PAKRAY, Partha; GELBUKH, Alexander and PINTO, David. Semantic Textual Similarity Methods, Tools, and Applications: A Survey. Comp. y Sist. [online]. 2016, vol.20, n.4, pp.647-665. ISSN 2007-9737. https://doi.org/10.13053/cys-20-4-2506.

Measuring Semantic Textual Similarity (STS) between words or terms, sentences, paragraphs, and documents plays an important role in computer science and computational linguistics. It also has applications in several fields, such as biomedical informatics and geoinformation. In this paper, we present a survey of different methods of textual similarity, and we also report on the availability of software and tools useful for STS. In natural language processing (NLP), STS is an important component of many tasks, such as document summarization, word sense disambiguation, short answer grading, and information retrieval and extraction. We divide the measures of semantic similarity into three broad categories: (i) topological/knowledge-based, (ii) statistical/corpus-based, and (iii) string-based. More emphasis is given to the methods related to the WordNet taxonomy, because topological methods play an important role in understanding the intended meaning of an ambiguous word, which is very difficult to process computationally. We also propose a new method for measuring semantic similarity between sentences. The proposed method exploits the advantages of taxonomy-based methods and merges this information into a language model: it uses WordNet synsets for lexical relationships between nodes (words), and a uni-gram language model built over a large corpus assigns the information content value between two nodes of different classes.

Keywords: WordNet taxonomy; natural language processing; semantic textual similarity; information content; random walk; statistical similarity; cosine similarity; term-based similarity; character-based similarity; n-gram; Jaccard similarity; WordNet similarity.
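As a rough illustration of the measure families the abstract names, the sketch below implements two string/term-based measures (Jaccard and cosine, category iii) and a Resnik-style knowledge-based measure (category i), where similarity is the information content of the most informative common subsumer. The hypernym links, corpus counts, and function names are invented for illustration only; they stand in for the real WordNet taxonomy and the large corpus the survey's methods rely on.

```python
import math
from collections import Counter

# --- String/term-based measures (category iii) ---

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity over word sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def cosine(a: str, b: str) -> float:
    """Cosine similarity over term-frequency vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# --- Knowledge-based measure (category i): Resnik similarity on a toy taxonomy ---
# Hypothetical hypernym links and corpus counts, standing in for WordNet
# and a real corpus.
HYPERNYM = {"cat": "feline", "feline": "animal",
            "dog": "canine", "canine": "animal", "animal": "entity"}
CORPUS_COUNT = {"cat": 8, "dog": 10, "feline": 2, "canine": 2,
                "animal": 5, "entity": 1}

def ancestors(w: str) -> list:
    """A concept plus all of its hypernyms up to the root."""
    chain = [w]
    while w in HYPERNYM:
        w = HYPERNYM[w]
        chain.append(w)
    return chain

def subsumed_count(c: str) -> int:
    """Corpus frequency of a concept and everything it subsumes."""
    return sum(n for w, n in CORPUS_COUNT.items() if c in ancestors(w))

TOTAL = subsumed_count("entity")  # the root subsumes the whole corpus

def ic(c: str) -> float:
    """Information content: IC(c) = -log P(c)."""
    return -math.log(subsumed_count(c) / TOTAL)

def resnik(w1: str, w2: str) -> float:
    """Similarity = IC of the most informative common subsumer."""
    common = set(ancestors(w1)) & set(ancestors(w2))
    return max(ic(c) for c in common) if common else 0.0
```

For example, `jaccard("the cat sat", "the cat ran")` returns 0.5, and `resnik("cat", "dog")` selects "animal" as the most informative common subsumer; the root concept has IC 0, so similarity decreases as the shared ancestor becomes more generic, which is what makes taxonomy-based measures useful for disambiguating word senses.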

        · text in English