Concept search
A concept search (or conceptual search) is an automated information retrieval method that is used to search electronically stored unstructured text (for example, digital archives, email, scientific literature, etc.) for information that is conceptually similar to the information provided in a search query. In other words, the ideas expressed in the information retrieved in response to a concept search query are relevant to the ideas contained in the text of the query.

Development

Concept search techniques were developed because of the limitations imposed by classical Boolean keyword search technologies when dealing with large, unstructured digital collections of text. Keyword searches often return results that include many non-relevant items (false positives) or that exclude too many relevant items (false negatives) because of the effects of synonymy and polysemy. Synonymy means that two or more different words have the same meaning, and polysemy means that a single word has more than one meaning. Polysemy is a major obstacle for all computer systems that attempt to deal with human language. In English, the most frequently used terms have several common meanings. For example, the word fire can mean: a combustion activity; to terminate employment; to launch; or to excite (as in fire up). Among the 200 most polysemous terms in English, the typical verb has more than twelve common meanings, or senses, and the typical noun has more than eight. Among the 2,000 most polysemous terms, the typical verb has more than eight common senses and the typical noun more than five.[1] In addition to the problems of polysemy and synonymy, keyword searches can exclude inadvertently misspelled words as well as variations on the stems (or roots) of words (for example, strike vs. striking).
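As a minimal illustration of these failure modes, the sketch below (the documents and query are invented for illustration) shows a naive Boolean keyword matcher missing both a stem variant and a synonym:

```python
# A naive Boolean keyword matcher. The documents and query are
# made-up examples of the synonymy and word-stem problems.
documents = [
    "The union voted to strike over pay.",
    "Workers are striking at the plant.",      # stem variant: "striking"
    "Employees staged a walkout yesterday.",   # synonym: "walkout"
]

def keyword_match(query, docs):
    """Return documents containing every query word as an exact token."""
    terms = query.lower().split()
    return [d for d in docs if all(t in d.lower().split() for t in terms)]

# Only the first document matches; the stem variant and the synonym
# are both missed (false negatives).
print(keyword_match("strike", documents))
```

A concept search aims to return all three documents, since they express the same underlying idea.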
Keyword searches are also susceptible to errors introduced by optical character recognition (OCR) scanning processes, which can introduce random errors (often referred to as noisy text) into the text of documents during scanning. A concept search can overcome these challenges by employing word sense disambiguation (WSD)[2] and other techniques to derive the actual meanings of words, and their underlying concepts, rather than simply matching character strings as keyword search technologies do.

Approaches

In general, information retrieval research and technology can be divided into two broad categories: semantic and statistical. Information retrieval systems that fall into the semantic category attempt some degree of syntactic and semantic analysis of the natural language text that a human user would provide (see also computational linguistics). Systems that fall into the statistical category find results based on statistical measures of how closely they match the query. However, systems in the semantic category also often rely on statistical methods to help them find and retrieve information.[3] Efforts to provide information retrieval systems with semantic processing capabilities have used three main approaches:
Auxiliary structures

A variety of techniques based on artificial intelligence (AI) and natural language processing (NLP) have been applied to semantic processing, and most of them rely on auxiliary structures such as controlled vocabularies and ontologies. Controlled vocabularies (dictionaries and thesauri) and ontologies allow broader terms, narrower terms, and related terms to be incorporated into queries.[4] Controlled vocabularies are one way to overcome some of the most severe constraints of Boolean keyword queries. Over the years, additional auxiliary structures of general interest, such as the large synonym sets of WordNet, have been constructed.[5] It has been shown that concept search based on auxiliary structures such as WordNet can be implemented efficiently by reusing the retrieval models and data structures of classical information retrieval.[6] Later approaches have implemented grammars to expand the range of semantic constructs. The creation of data models that represent sets of concepts within a specific domain (domain ontologies), and which can incorporate the relationships among terms, has also been implemented in recent years. Handcrafted controlled vocabularies contribute to the efficiency and comprehensiveness of information retrieval and related text analysis operations, but they work best when topics are narrowly defined and the terminology is standardized. Controlled vocabularies require extensive human input and oversight to keep up with the rapid evolution of language. They are also not well suited to the growing volumes of unstructured text covering an unlimited number of topics and containing thousands of unique terms, because new terms and topics need to be introduced constantly.
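As a sketch of how such auxiliary structures are used at query time, the following toy example expands each query term with its synonyms. The thesaurus entries here are invented stand-ins for WordNet-style synonym sets, not real WordNet data:

```python
# Query expansion with an auxiliary structure: a toy thesaurus
# standing in for WordNet-style synonym sets (entries are illustrative).
thesaurus = {
    "car": {"automobile", "auto"},
    "fast": {"quick", "rapid"},
}

def expand_query(query):
    """Append the thesaurus synonyms of each query term to the query."""
    expanded = []
    for term in query.lower().split():
        expanded.append(term)
        expanded.extend(sorted(thesaurus.get(term, ())))
    return expanded

print(expand_query("fast car"))
# ['fast', 'quick', 'rapid', 'car', 'auto', 'automobile']
```

The expanded term list is then run as an ordinary keyword query, so documents using any of the synonyms can be retrieved.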
Controlled vocabularies are also prone to capturing a particular world view at a specific point in time, which makes them difficult to modify if concepts in a certain topic area change.[7]

Local co-occurrence statistics

Information retrieval systems incorporating this approach count the number of times that groups of terms appear together (co-occur) within a sliding window of terms or sentences (for example, ± 5 sentences or ± 50 words) within a document. It is based on the idea that words that occur in similar contexts have similar meanings. It is local in the sense that the sliding window used to determine the co-occurrence of terms is relatively small. This approach is simple, but it captures only a small portion of the semantic information contained in a collection of text. At the most basic level, numerous experiments have shown that only approximately a quarter of the information contained in text is local in nature.[8] In addition, to be most effective, this method requires prior knowledge about the content of the text, which can be difficult with large, unstructured document collections.[7]

Transform techniques

Some of the most powerful approaches to semantic processing are based on the use of mathematical transform techniques. Matrix decomposition techniques have been the most successful. Widely used matrix decomposition techniques include independent component analysis, semi-discrete decomposition, non-negative matrix factorization, and singular value decomposition.[9]
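The sliding-window counting that the local co-occurrence approach described earlier relies on can be sketched as follows; the window size and sample text are arbitrary choices for illustration:

```python
from collections import Counter

def cooccurrence_counts(tokens, window=5):
    """Count unordered term pairs co-occurring within +/- `window` tokens."""
    counts = Counter()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            counts[tuple(sorted((w, tokens[j])))] += 1
    return counts

tokens = "the cat sat on the mat the cat slept".split()
counts = cooccurrence_counts(tokens, window=2)
print(counts[("cat", "sat")])   # pairs are stored in sorted order
```

In practice the counts (or association scores derived from them, such as pointwise mutual information) are used to judge which terms occur in similar contexts.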
Matrix decomposition techniques are data-driven, which avoids many of the drawbacks associated with auxiliary structures. They are also global in nature, which means they are capable of much more robust information extraction and representation of semantic information than techniques based on local co-occurrence statistics.[7] Independent component analysis is a technique that creates sparse representations in an automated fashion,[10] and the semi-discrete and non-negative matrix approaches sacrifice accuracy of representation in order to reduce computational complexity.[7] Singular value decomposition (SVD) was first applied to text at Bell Labs in the late 1980s. It was used as the foundation for a technique called latent semantic indexing (LSI) because of its ability to find the semantic meaning that is latent in a collection of text. At first, SVD was slow to be adopted because of the resource requirements needed to work with large datasets. However, the use of LSI has expanded significantly in recent years as earlier challenges in scalability and performance have been overcome,[11] and implementations have even been open-sourced.[12] LSI is used in a variety of information retrieval and text processing applications, although its primary applications have been concept searching and automated document categorization.[13]

Uses
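One common use is concept-based document comparison. As a concrete sketch of the LSI construction described above, the following example applies a truncated SVD to a tiny term-document matrix (all counts are invented) and compares documents in the resulting latent space:

```python
import numpy as np

# Toy term-document matrix (rows = terms, columns = documents);
# the counts are made up for illustration.
terms = ["fire", "blaze", "hire", "employ"]
A = np.array([
    [1, 1, 0, 0, 0],   # fire
    [0, 1, 1, 0, 0],   # blaze
    [0, 0, 0, 1, 1],   # hire
    [0, 0, 0, 1, 0],   # employ
], dtype=float)

# Truncated SVD, as in LSI: keep only the k largest singular values.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T   # one row per document

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Documents 0 and 2 share no terms at all (fire vs. blaze), yet they are
# nearly parallel in the latent space because document 1 links them.
print(round(cosine(doc_vectors[0], doc_vectors[2]), 3))
```

This is the effect the text describes: the latent dimensions capture the combustion concept, so documents with no keyword overlap can still be judged similar.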
Effective searching

The effectiveness of a concept search can depend on a variety of elements, including the dataset being searched and the search engine that is used to process queries and display results. However, most concept search engines work best for certain kinds of queries:
As with all search strategies, experienced searchers generally refine their queries through multiple searches, starting with an initial seed query to obtain conceptually relevant results that can then be used to compose and/or refine additional queries for increasingly relevant results. Depending on the search engine, using query concepts found in result documents can be as easy as selecting a document and performing a find-similar function. Changing a query by adding terms and concepts to improve result relevance is called query expansion.[19] The use of ontologies such as WordNet has been studied to expand queries with conceptually related words.[20]

Relevance feedback

Relevance feedback is a feature that helps users determine if the results returned for their queries meet their information needs. In other words, relevance is assessed relative to an information need, not a query. A document is relevant if it addresses the stated information need, not merely because it happens to contain all the words in the query.[21] It is a way to involve users in the retrieval process in order to improve the final result set.[21] Users can refine their queries based on their initial results to improve the quality of their final results. In general, concept search relevance refers to the degree of similarity between the concepts expressed in the query and the concepts contained in the results returned for the query. The more similar the concepts in the results are to the concepts contained in the query, the more relevant the results are considered to be. Results are usually ranked and sorted by relevance so that the most relevant results are at the top of the list and the least relevant are at the bottom.
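A classic way to implement relevance feedback in the vector space model is the Rocchio algorithm (covered in Manning et al.[21]). The sketch below uses the common textbook weights; the vectors are made up for illustration:

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio update: move the query vector toward the centroid of
    relevant documents and away from non-relevant ones."""
    q = alpha * query
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return np.clip(q, 0, None)   # negative term weights are usually dropped

# Toy three-term vector space.
query = np.array([1.0, 0.0, 0.0])
relevant = np.array([[1.0, 1.0, 0.0]])      # documents the user marked relevant
nonrelevant = np.array([[0.0, 0.0, 1.0]])   # documents marked non-relevant
print(rocchio(query, relevant, nonrelevant))
```

The updated query now gives weight to the second term, which appeared only in the relevant document, so a re-run of the search favors conceptually similar results.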
Relevance feedback has been shown to be very effective at improving the relevance of results.[21] A concept search decreases the risk of missing important result items because all of the items that are related to the concepts in the query will be returned whether or not they contain the same words used in the query.[15] Ranking will continue to be a part of any modern information retrieval system. However, the problems of heterogeneous data, scale, and non-traditional discourse types reflected in the text, along with the fact that search engines will increasingly be integrated components of complex information management processes rather than stand-alone systems, will require new kinds of system responses to queries. For example, one of the problems with ranked lists is that they might not reveal relations that exist among some of the result items.[22]

Guidelines for evaluating a concept search engine
Conferences and forums

Formalized search engine evaluation has been ongoing for many years. For example, the Text REtrieval Conference (TREC) was started in 1992 to support research within the information retrieval community by providing the infrastructure necessary for large-scale evaluation of text retrieval methodologies. Most of today's commercial search engines include technology first developed in TREC.[24] In 1997, a Japanese counterpart of TREC was launched, called the National Institute of Informatics Test Collection for IR Systems (NTCIR). NTCIR conducts a series of evaluation workshops for research in information retrieval, question answering, text summarization, and related areas. A European series of workshops called the Cross Language Evaluation Forum (CLEF) was started in 2001 to aid research in multilingual information access. In 2002, the Initiative for the Evaluation of XML Retrieval (INEX) was established for the evaluation of content-oriented XML retrieval systems. Precision and recall have been two of the traditional performance measures for evaluating information retrieval systems. Precision is the fraction of the retrieved result documents that are relevant to the user's information need. Recall is the fraction of the relevant documents in the entire collection that are returned as result documents.[21] Although the workshops and publicly available test collections used for search engine testing and evaluation have provided substantial insights into how information is managed and retrieved, the field has only scratched the surface of the challenges people and organizations face in finding, managing, and using information now that so much information is available.[22] Scientific data about how people use the information tools available to them today is still incomplete because experimental research methodologies have not been able to keep up with the rapid pace of change.
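The precision and recall measures defined above can be computed directly from a result list; the document identifiers below are hypothetical:

```python
def precision_recall(retrieved, relevant):
    """Precision = relevant retrieved / all retrieved;
    recall = relevant retrieved / all relevant in the collection."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# 3 of the 4 retrieved documents are relevant, out of 6 relevant
# documents in the whole collection.
p, r = precision_recall(
    ["d1", "d2", "d3", "d4"],
    ["d1", "d2", "d3", "d5", "d6", "d7"],
)
print(p, r)   # 0.75 0.5
```

The two measures trade off against each other: returning more documents tends to raise recall while lowering precision.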
Many challenges, such as contextualized search, personal information management, information integration, and task support, still need to be addressed.[22]
References

1. Bradford, R. B., Word Sense Disambiguation, Content Analyst Company, LLC, U.S. Patent 7415462, 2008.
2. Navigli, R., Word Sense Disambiguation: A Survey, ACM Computing Surveys, 41(2), 2009.
3. Greengrass, E., Information Retrieval: A Survey, 2000.
4. Dubois, C., The Use of Thesauri in Online Retrieval, Journal of Information Science, 8(2), March 1984, pp. 63-66.
5. Miller, G., Special Issue, WordNet: An On-line Lexical Database, International Journal of Lexicography, 3(4), 1990.
6. Giunchiglia, F., Kharkevich, U., and Zaihrayeu, I., Concept Search, in Proceedings of the European Semantic Web Conference, 2009.
7. Bradford, R. B., Why LSI? Latent Semantic Indexing and Information Retrieval, White Paper, Content Analyst Company, LLC, 2008.
8. Landauer, T., and Dumais, S., A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction, and Representation of Knowledge, Psychological Review, 104(2), 1997, pp. 211-240.
9. Skillicorn, D., Understanding Complex Datasets: Data Mining with Matrix Decompositions, CRC Publishing, 2007.
10. Honkela, T., Hyvarinen, A., and Vayrynen, J., WordICA - Emergence of Linguistic Representations for Words by Independent Component Analysis, Natural Language Engineering, 16(3), 2010, pp. 277-308.
11. Řehůřek, R., Scalability of Semantic Analysis in Natural Language Processing, PhD thesis, 2011.
12. Gensim open source software.
13. Dumais, S., Latent Semantic Analysis, ARIST Review of Information Science and Technology, vol. 38, Chapter 4, 2004.
14. Disability Rights Council v. Washington Metropolitan Transit Authority, 242 FRD 139 (D.D.C. 2007) (Facciola, M.J., U.S. District Court for the District of Columbia), citing George L. Paul and Jason R. Baron, "Information Inflation: Can the Legal System Adapt?", 13 Rich. J.L. & Tech. 10 (2007).
15. Laplanche, R., Delgado, J., Turck, M., Concept Search Technology Goes Beyond Keywords, Information Outlook, July 2004.
16. Lew, M. S., Sebe, N., Djeraba, C., Jain, R., Content-based Multimedia Information Retrieval: State of the Art and Challenges, ACM Transactions on Multimedia Computing, Communications, and Applications, February 2006.
17. Datta, R., Joshi, D., Li, J., Wang, J. Z., Image Retrieval: Ideas, Influences, and Trends of the New Age, ACM Computing Surveys, 40(2), April 2008.
18. https://web.archive.org/web/20140307134534/http://www.liacs.nl/~mir/
19. Robertson, S. E., Spärck Jones, K., Simple, Proven Approaches to Text Retrieval, Technical Report, University of Cambridge Computer Laboratory, December 1994.
20. Navigli, R., Velardi, P., An Analysis of Ontology-based Query Expansion Strategies, Proceedings of the Workshop on Adaptive Text Extraction and Mining (ATEM 2003), at the 14th European Conference on Machine Learning (ECML 2003), Cavtat-Dubrovnik, Croatia, September 22-26, 2003, pp. 42-49.
21. Manning, C. D., Raghavan, P., Schütze, H., Introduction to Information Retrieval, Cambridge University Press, 2008.
22. Callan, J., Allan, J., Clarke, C. L. A., Dumais, S., Evans, D. A., Sanderson, M., Zhai, C., Meeting of the MINDS: An Information Retrieval Research Agenda, ACM SIGIR Forum, 41(2), December 2007.
23. Rehurek, R., A Combined System for Vector Similarity Search Based on the Inverted Fulltext Index, ScaleText Search Engine, pending U.S. Patent 15726803, 2017.
24. Croft, B., Metzler, D., Strohman, T., Search Engines: Information Retrieval in Practice, Addison Wesley, 2009.