Information retrieval
Information retrieval is the area of study concerned with searching for documents, for information within documents, and for metadata about documents, as well as that of searching structured storage, relational databases, and the World Wide Web. There is overlap in the usage of the terms data retrieval, document retrieval, information retrieval, and text retrieval, but each also has its own body of literature, theory, praxis, and technologies. IR is interdisciplinary, based on computer science, mathematics, library science, information science, information architecture, cognitive psychology, linguistics, and statistics.

Automated information retrieval systems are used to reduce what has been called "information overload". Many universities and public libraries use IR systems to provide access to books, journals and other documents. Web search engines are the most visible IR applications.
History
The idea of using computers to search for relevant pieces of information was popularized in the article As We May Think by Vannevar Bush in 1945. The first automated information retrieval systems were introduced in the 1950s and 1960s. By 1970 several different techniques had been shown to perform well on small text corpora such as the Cranfield collection (several thousand documents). Large-scale retrieval systems, such as the Lockheed Dialog system, came into use early in the 1970s.

In 1992, the US Department of Defense, along with the National Institute of Standards and Technology (NIST), cosponsored the Text Retrieval Conference (TREC) as part of the TIPSTER text program. The aim of this was to support research in the information retrieval community by supplying the infrastructure needed for evaluation of text retrieval methodologies on a very large text collection. This catalyzed research on methods that scale to huge corpora. The introduction of web search engines has boosted the need for very large scale retrieval systems even further.

The use of digital methods for storing and retrieving information has led to the phenomenon of digital obsolescence, where a digital resource ceases to be readable because the physical media, the reader required to read the media, the hardware, or the software that runs on it, is no longer available. The information is initially easier to retrieve than if it were on paper, but is then effectively lost.
Timeline
- Before the 1900s
- 1880s: Herman Hollerith invents the recording of data on a machine readable medium.
- 1890: Hollerith cards, keypunches and tabulators used to process the 1890 US Census data.
- 1940s–1950s
- late 1940s: The US military confronted problems of indexing and retrieval of wartime scientific research documents captured from Germans.
- 1945: Vannevar Bush's As We May Think appeared in Atlantic Monthly.
- 1947: Hans Peter Luhn (research engineer at IBM since 1941) began work on a mechanized punch card-based system for searching chemical compounds.
- 1950s: Growing concern in the US about a "science gap" with the USSR motivated and encouraged funding, and provided a backdrop for mechanized literature searching systems (Allen Kent et al.) and the invention of citation indexing (Eugene Garfield).
- 1950: The term "information retrieval" appears to have been coined by Calvin Mooers.
- 1951: Philip Bagley conducted the earliest experiment in computerized document retrieval in a master's thesis at MIT.
- 1955: Allen Kent joined Case Western Reserve University, and eventually became associate director of the Center for Documentation and Communications Research. That same year, Kent and colleagues published a paper in American Documentation describing the precision and recall measures, as well as detailing a proposed "framework" for evaluating an IR system, which included statistical sampling methods for determining the number of relevant documents not retrieved.
- 1958: The International Conference on Scientific Information in Washington DC included consideration of IR systems as a solution to problems identified. See: Proceedings of the International Conference on Scientific Information, 1958 (National Academy of Sciences, Washington, DC, 1959).
- 1959: Hans Peter Luhn published "Auto-encoding of documents for information retrieval."
- 1960s:
- early 1960s: Gerard Salton began work on IR at Harvard, later moved to Cornell.
- 1960: Melvin Earl (Bill) Maron and John Lary Kuhns published "On relevance, probabilistic indexing, and information retrieval" in the Journal of the ACM 7(3):216–244, July 1960.
- 1962:
- Cyril W. Cleverdon published early findings of the Cranfield studies, developing a model for IR system evaluation. See: Cyril W. Cleverdon, "Report on the Testing and Analysis of an Investigation into the Comparative Efficiency of Indexing Systems". Cranfield Collection of Aeronautics, Cranfield, England, 1962.
- Kent published Information Analysis and Retrieval.
- 1963:
- Weinberg report "Science, Government and Information" gave a full articulation of the idea of a "crisis of scientific information." The report was named after Dr. Alvin Weinberg.
- Joseph Becker and Robert M. Hayes published a text on information retrieval. Becker, Joseph; Hayes, Robert Mayo. Information storage and retrieval: tools, elements, theories. New York, Wiley (1963).
- 1964:
- Karen Spärck Jones finished her thesis at Cambridge, Synonymy and Semantic Classification, and continued work on computational linguistics as it applies to IR.
- The National Bureau of Standards sponsored a symposium titled "Statistical Association Methods for Mechanized Documentation." Several highly significant papers, including G. Salton's first published reference (we believe) to the SMART system.
- mid-1960s:
- National Library of Medicine developed MEDLARS Medical Literature Analysis and Retrieval System, the first major machine-readable database and batch-retrieval system.
- Project Intrex at MIT.
- 1965: J. C. R. Licklider published Libraries of the Future.
- 1966: Don Swanson was involved in studies at University of Chicago on Requirements for Future Catalogs.
- late 1960s: F. Wilfrid Lancaster completed evaluation studies of the MEDLARS system and published the first edition of his text on information retrieval.
- 1968:
- Gerard Salton published Automatic Information Organization and Retrieval.
- John W. Sammon, Jr.'s RADC Tech report "Some Mathematics of Information Storage and Retrieval..." outlined the vector model.
- 1969: Sammon's "A nonlinear mapping for data structure analysis" (IEEE Transactions on Computers) was the first proposal for a visualization interface to an IR system.
- 1970s
- early 1970s:
- First online systems: NLM's AIM-TWX, MEDLINE; Lockheed's Dialog; SDC's ORBIT.
- Theodor Nelson, promoting the concept of hypertext, published Computer Lib/Dream Machines.
- 1971: Nicholas Jardine and Cornelis J. van Rijsbergen published "The use of hierarchic clustering in information retrieval", which articulated the "cluster hypothesis." (Information Storage and Retrieval, 7(5), pp. 217–240, December 1971)
- 1975: Three highly influential publications by Salton fully articulated his vector processing framework and term discrimination model:
- A Theory of Indexing (Society for Industrial and Applied Mathematics)
- A Theory of Term Importance in Automatic Text Analysis (JASIS v. 26)
- A Vector Space Model for Automatic Indexing (CACM 18:11)
- 1978: The first ACM SIGIR conference.
- 1979: C. J. van Rijsbergen published Information Retrieval (Butterworths). Heavy emphasis on probabilistic models.
- 1980s
- 1980: First international ACM SIGIR conference, joint with British Computer Society IR group in Cambridge.
- 1982: Nicholas J. Belkin, Robert N. Oddy, and Helen M. Brooks proposed the ASK (Anomalous State of Knowledge) viewpoint for information retrieval. This was an important concept, though their automated analysis tool proved ultimately disappointing.
- 1983: Salton (and Michael J. McGill) published Introduction to Modern Information Retrieval (McGraw-Hill), with heavy emphasis on vector models.
- 1985: Blair and Maron publish: An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System
- mid-1980s: Efforts to develop end-user versions of commercial IR systems.
- 1985–1993: Key papers on and experimental systems for visualization interfaces.
- Work by Donald B. Crouch, Robert R. Korfhage, Matthew Chalmers, Anselm Spoerri and others.
- 1989: First World Wide Web proposals by Tim Berners-Lee at CERN.
- 1990s
- 1992: First TREC conference.
- 1997: Publication of Korfhage's Information Storage and Retrieval, with emphasis on visualization and multi-reference point systems.
- late 1990s: Web search engines implement many features formerly found only in experimental IR systems. Search engines become the most common and maybe best instantiation of IR models, research, and implementation.
Overview
An information retrieval process begins when a user enters a query into the system. Queries are formal statements of information needs, for example search strings in web search engines. In information retrieval a query does not uniquely identify a single object in the collection. Instead, several objects may match the query, perhaps with different degrees of relevancy.

An object is an entity that is represented by information in a database. User queries are matched against the database information. Depending on the application, the data objects may be, for example, text documents, images, audio, mind maps or videos. Often the documents themselves are not kept or stored directly in the IR system, but are instead represented in the system by document surrogates or metadata.

Most IR systems compute a numeric score for how well each object in the database matches the query, and rank the objects according to this value. The top-ranking objects are then shown to the user. The process may then be iterated if the user wishes to refine the query.
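To make this score-and-rank loop concrete, here is a minimal illustrative sketch in Python. The term-overlap scorer and the `score`/`search` names are stand-ins invented for this example, not part of any particular IR system; a real system would substitute one of the retrieval models described below.

```python
# Minimal sketch of the score-then-rank loop described above.
# The term-overlap scorer is a toy stand-in for a real retrieval model.

def score(query_terms: set[str], document: str) -> float:
    """Toy relevance score: fraction of query terms present in the document."""
    doc_terms = set(document.lower().split())
    return len(query_terms & doc_terms) / len(query_terms)

def search(query: str, collection: dict[str, str], top_k: int = 10) -> list[tuple[str, float]]:
    """Score every object in the collection and return the top-ranking ones."""
    query_terms = set(query.lower().split())
    ranked = sorted(
        ((doc_id, score(query_terms, text)) for doc_id, text in collection.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return ranked[:top_k]

collection = {
    "d1": "information retrieval systems rank documents",
    "d2": "relational databases store structured data",
}
print(search("information retrieval", collection))
```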
Performance and correctness measures
Many different measures for evaluating the performance of information retrieval systems have been proposed. The measures require a collection of documents and a query. All common measures described here assume a ground truth notion of relevancy: every document is known to be either relevant or non-relevant to a particular query. In practice queries may be ill-posed and there may be different shades of relevancy.

Precision

Precision is the fraction of the documents retrieved that are relevant to the user's information need:

$$ \mathrm{precision} = \frac{|\{\text{relevant documents}\} \cap \{\text{retrieved documents}\}|}{|\{\text{retrieved documents}\}|} $$
In binary classification, precision is analogous to positive predictive value. Precision takes all retrieved documents into account. It can also be evaluated at a given cut-off rank, considering only the topmost results returned by the system. This measure is called precision at n or P@n.
Note that the meaning and usage of "precision" in the field of information retrieval differs from the definition of accuracy and precision within other branches of science and technology.
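The following illustrative snippet computes precision and P@n from a ranked result list and a set of ground-truth relevance judgments; the document identifiers are made up for the example.

```python
def precision(retrieved: list[str], relevant: set[str]) -> float:
    """Fraction of retrieved documents that are relevant."""
    if not retrieved:
        return 0.0
    return sum(1 for d in retrieved if d in relevant) / len(retrieved)

def precision_at_n(retrieved: list[str], relevant: set[str], n: int) -> float:
    """Precision considering only the topmost n results (P@n)."""
    return precision(retrieved[:n], relevant)

retrieved = ["d3", "d1", "d7", "d2"]   # ranked system output
relevant = {"d1", "d2", "d5"}          # ground-truth judgments
print(precision(retrieved, relevant))          # 2 of 4 retrieved are relevant -> 0.5
print(precision_at_n(retrieved, relevant, 2))  # 1 of top 2 is relevant -> 0.5
```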
Recall
Recall is the fraction of the documents that are relevant to the query that are successfully retrieved:

$$ \mathrm{recall} = \frac{|\{\text{relevant documents}\} \cap \{\text{retrieved documents}\}|}{|\{\text{relevant documents}\}|} $$

In binary classification, recall is called sensitivity, so it can be looked at as the probability that a relevant document is retrieved by the query.

It is trivial to achieve recall of 100% by returning all documents in response to any query. Therefore recall alone is not enough; one also needs to measure the number of non-relevant documents, for example by computing the precision.
Fall-Out
The proportion of non-relevant documents that are retrieved, out of all non-relevant documents available:

$$ \mathrm{fall\text{-}out} = \frac{|\{\text{non-relevant documents}\} \cap \{\text{retrieved documents}\}|}{|\{\text{non-relevant documents}\}|} $$

In binary classification, fall-out is closely related to specificity and is equal to $1 - \text{specificity}$. It can be looked at as the probability that a non-relevant document is retrieved by the query.

It is trivial to achieve fall-out of 0% by returning zero documents in response to any query.
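Recall and fall-out can be computed the same way as precision above. In this sketch, the collection size is passed in so the number of non-relevant documents can be derived; the identifiers are again hypothetical.

```python
def recall(retrieved: list[str], relevant: set[str]) -> float:
    """Fraction of relevant documents that were successfully retrieved."""
    if not relevant:
        return 0.0
    return sum(1 for d in retrieved if d in relevant) / len(relevant)

def fall_out(retrieved: list[str], relevant: set[str], collection_size: int) -> float:
    """Fraction of non-relevant documents in the collection that were retrieved."""
    non_relevant_total = collection_size - len(relevant)
    if non_relevant_total == 0:
        return 0.0
    non_relevant_retrieved = sum(1 for d in retrieved if d not in relevant)
    return non_relevant_retrieved / non_relevant_total

retrieved = ["d3", "d1", "d7", "d2"]
relevant = {"d1", "d2", "d5"}
print(recall(retrieved, relevant))        # 2 of 3 relevant docs retrieved -> 0.667
print(fall_out(retrieved, relevant, 10))  # 2 of 7 non-relevant docs retrieved -> 0.286
```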
F-measure
The weighted harmonic mean of precision and recall, the traditional F-measure or balanced F-score, is:

$$ F = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} $$

This is also known as the $F_1$ measure, because recall and precision are evenly weighted.

The general formula for non-negative real $\beta$ is:

$$ F_\beta = \frac{(1 + \beta^2) \cdot (\mathrm{precision} \cdot \mathrm{recall})}{\beta^2 \cdot \mathrm{precision} + \mathrm{recall}} $$

Two other commonly used F measures are the $F_2$ measure, which weights recall twice as much as precision, and the $F_{0.5}$ measure, which weights precision twice as much as recall.

The F-measure was derived by van Rijsbergen (1979) so that $F_\beta$ "measures the effectiveness of retrieval with respect to a user who attaches $\beta$ times as much importance to recall as precision". It is based on van Rijsbergen's effectiveness measure $E = 1 - \frac{1}{\frac{\alpha}{P} + \frac{1 - \alpha}{R}}$. Their relationship is $F_\beta = 1 - E$ where $\alpha = \frac{1}{1 + \beta^2}$.
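A direct translation of the $F_\beta$ formula above into Python; returning 0 when both precision and recall are 0 is a convention of this sketch, since the formula is undefined there.

```python
def f_beta(precision: float, recall: float, beta: float = 1.0) -> float:
    """General F-measure; beta > 1 favours recall, beta < 1 favours precision."""
    if precision == 0.0 and recall == 0.0:
        return 0.0  # convention: formula is undefined at (0, 0)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f_beta(0.5, 0.25))            # balanced F1 = 1/3
print(f_beta(0.5, 0.25, beta=2.0))  # F2: recall weighted more
print(f_beta(0.5, 0.25, beta=0.5))  # F0.5: precision weighted more
```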
Average precision
Precision and recall are single-value metrics based on the whole list of documents returned by the system. For systems that return a ranked sequence of documents, it is desirable to also consider the order in which the returned documents are presented. By computing a precision and recall at every position in the ranked sequence of documents, one can plot a precision-recall curve, plotting precision $p(r)$ as a function of recall $r$. Average precision computes the average value of $p(r)$ over the interval from $r = 0$ to $r = 1$:

$$ \mathrm{AveP} = \int_0^1 p(r)\,dr $$

This integral is in practice replaced with a finite sum over every position in the ranked sequence of documents:

$$ \mathrm{AveP} = \sum_{k=1}^{n} P(k)\,\Delta r(k) $$

where $k$ is the rank in the sequence of retrieved documents, $n$ is the number of retrieved documents, $P(k)$ is the precision at cut-off $k$ in the list, and $\Delta r(k)$ is the change in recall from items $k-1$ to $k$.

This finite sum is equivalent to:

$$ \mathrm{AveP} = \frac{\sum_{k=1}^{n} P(k) \cdot \mathrm{rel}(k)}{|\{\text{relevant documents}\}|} $$

where $\mathrm{rel}(k)$ is an indicator function equaling 1 if the item at rank $k$ is a relevant document, zero otherwise.

Some authors choose to interpolate the $p(r)$ function to reduce the impact of "wiggles" in the curve. For example, the PASCAL Visual Object Classes challenge (a benchmark for computer vision object detection) computes average precision by averaging the precision over a set of evenly spaced recall levels {0, 0.1, 0.2, ..., 1.0}:

$$ \mathrm{AveP} = \frac{1}{11} \sum_{r \in \{0,\, 0.1,\, \ldots,\, 1.0\}} p_{\mathrm{interp}}(r) $$

where $p_{\mathrm{interp}}(r)$ is an interpolated precision that takes the maximum precision over all recalls greater than or equal to $r$:

$$ p_{\mathrm{interp}}(r) = \max_{\tilde{r} \ge r} p(\tilde{r}) $$

Average precision is also sometimes referred to geometrically as the area under the precision-recall curve.
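The $\mathrm{rel}(k)$ form of the sum translates directly into code. This sketch assumes every relevant document appears somewhere in the ranking, so the denominator can be taken as the number of 1s in the list.

```python
def average_precision(rel: list[int]) -> float:
    """AveP from a ranked 0/1 relevance list, using the rel(k) indicator form.

    rel[k-1] is 1 if the document at rank k is relevant, else 0. Assumes every
    relevant document appears in the ranking, so sum(rel) is the total number
    of relevant documents.
    """
    total_relevant = sum(rel)
    if total_relevant == 0:
        return 0.0
    hits, ap = 0, 0.0
    for k, r in enumerate(rel, start=1):
        if r:
            hits += 1
            ap += hits / k  # P(k): precision at each relevant rank
    return ap / total_relevant

print(average_precision([1, 0, 1, 0, 0]))  # (1/1 + 2/3) / 2 = 0.833...
```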
R-Precision
Precision at the R-th position in the ranking of results for a query that has R relevant documents. This measure is highly correlated to average precision.

Mean average precision

Mean average precision for a set of queries is the mean of the average precision scores for each query:

$$ \mathrm{MAP} = \frac{\sum_{q=1}^{Q} \mathrm{AveP}(q)}{Q} $$

where Q is the number of queries.
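Mean average precision then just averages the per-query scores. The sketch below inlines the same rel(k)-indicator computation shown in the average precision example, under the same assumptions.

```python
def average_precision(rel: list[int]) -> float:
    """Same rel(k)-indicator computation as in the sketch above."""
    hits, ap = 0, 0.0
    for k, r in enumerate(rel, start=1):
        if r:
            hits += 1
            ap += hits / k
    return ap / sum(rel) if sum(rel) else 0.0

def mean_average_precision(rankings: list[list[int]]) -> float:
    """MAP: the mean of the per-query average precision scores over Q queries."""
    if not rankings:
        return 0.0
    return sum(average_precision(rel) for rel in rankings) / len(rankings)

queries = [[1, 0, 1, 0, 0], [0, 1, 0, 0, 1]]  # one 0/1 relevance list per query
print(mean_average_precision(queries))        # mean of 0.833 and 0.45
```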
Discounted cumulative gain
DCG uses a graded relevance scale of documents from the result set to evaluate the usefulness, or gain, of a document based on its position in the result list. The premise of DCG is that highly relevant documents appearing lower in a search result list should be penalized, as the graded relevance value is reduced logarithmically proportional to the position of the result.

The DCG accumulated at a particular rank position $p$ is defined as:

$$ \mathrm{DCG}_p = \mathrm{rel}_1 + \sum_{i=2}^{p} \frac{\mathrm{rel}_i}{\log_2 i} $$

Since result sets may vary in size among different queries or systems, to compare performances the normalised version of DCG uses an ideal DCG. To this end, it sorts documents of a result list by relevance, producing an ideal DCG at position $p$ ($\mathrm{IDCG}_p$), which normalizes the score:

$$ \mathrm{nDCG}_p = \frac{\mathrm{DCG}_p}{\mathrm{IDCG}_p} $$

The nDCG values for all queries can be averaged to obtain a measure of the average performance of a ranking algorithm. Note that for a perfect ranking algorithm, $\mathrm{DCG}_p$ will be the same as $\mathrm{IDCG}_p$, producing an nDCG of 1.0. All nDCG calculations are then relative values on the interval 0.0 to 1.0 and so are cross-query comparable.
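A short sketch of DCG and nDCG using the $\mathrm{rel}_1 + \sum_{i \ge 2} \mathrm{rel}_i / \log_2 i$ formulation given above; the ideal DCG is obtained by re-sorting the same gains in decreasing order of relevance.

```python
import math

def dcg(gains: list[float]) -> float:
    """DCG_p = rel_1 + sum over i=2..p of rel_i / log2(i), per the formula above."""
    return sum(g if i == 1 else g / math.log2(i)
               for i, g in enumerate(gains, start=1))

def ndcg(gains: list[float]) -> float:
    """Normalise DCG by the ideal DCG: the same gains sorted by relevance."""
    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal > 0 else 0.0

print(ndcg([3, 2, 3, 0, 1, 2]))  # graded relevance judgments, e.g. on a 0-3 scale
```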
Model types
For information retrieval to be efficient, the documents are typically transformed into a suitable representation. There are several representations. Common models can be categorized according to two dimensions: the mathematical basis and the properties of the model.

First dimension: mathematical basis
- Set-theoretic models represent documents as sets of words or phrases. Similarities are usually derived from set-theoretic operations on those sets. Common models are:
- Standard Boolean model
- Extended Boolean model
- Fuzzy retrieval
- Algebraic models represent documents and queries usually as vectors, matrices, or tuples. The similarity of the query vector and document vector is represented as a scalar value (a minimal vector space sketch appears after this list). Common models are:
- Vector space model
- Generalized vector space model
- (Enhanced) Topic-based Vector Space Model
- Extended Boolean model
- Latent semantic indexing, aka latent semantic analysis
- Probabilistic models treat the process of document retrieval as a probabilistic inference. Similarities are computed as probabilities that a document is relevant for a given query. Probabilistic theorems like Bayes' theorem are often used in these models. Common models are:
- Binary Independence Model
- Probabilistic relevance model, on which the Okapi (BM25) relevance function is based
- Uncertain inference
- Language models
- Divergence-from-randomness model
- Latent Dirichlet allocation
- Feature-based retrieval models view documents as vectors of values of feature functions (or just features) and seek the best way to combine these features into a single relevance score, typically by learning to rank methods. Feature functions are arbitrary functions of document and query, and as such can easily incorporate almost any other retrieval model as just yet another feature.
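As a concrete, deliberately tiny example of an algebraic model, the following sketch builds TF-IDF vectors for a toy corpus and ranks documents by cosine similarity to a query. The +1 smoothing on the IDF weight is an assumption of this example (it keeps terms that occur in every document from vanishing), not a canonical choice, and the corpus and identifiers are invented.

```python
import math
from collections import Counter

def build_index(docs: dict[str, str]):
    """Compute IDF weights and TF-IDF document vectors for a small corpus."""
    tokens = {d: text.lower().split() for d, text in docs.items()}
    n = len(tokens)
    df = Counter(t for words in tokens.values() for t in set(words))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}  # +1: smoothing assumption
    vecs = {d: {t: c * idf[t] for t, c in Counter(words).items()}
            for d, words in tokens.items()}
    return idf, vecs

def cosine(u: dict[str, float], v: dict[str, float]) -> float:
    """Cosine of the angle between two sparse term vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = {
    "d1": "the vector space model represents documents as vectors",
    "d2": "boolean retrieval matches documents as sets of words",
    "d3": "probabilistic models estimate relevance as a probability",
}
idf, vecs = build_index(docs)
query = {t: idf.get(t, 0.0) for t in "vector space retrieval".split()}
print(sorted(((cosine(query, v), d) for d, v in vecs.items()), reverse=True))
```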
Second dimension: properties of the model
- Models without term-interdependencies treat different terms/words as independent. This fact is usually represented in vector space models by the orthogonality assumption of term vectors, or in probabilistic models by an independency assumption for term variables.
- Models with immanent term interdependencies allow a representation of interdependencies between terms. However, the degree of the interdependency between two terms is defined by the model itself. It is usually directly or indirectly derived (e.g. by dimensional reduction) from the co-occurrence of those terms in the whole set of documents.
- Models with transcendent term interdependencies allow a representation of interdependencies between terms, but they do not allege how the interdependency between two terms is defined. They rely on an external source for the degree of interdependency between two terms. (For example, a human or sophisticated algorithms.)
Major figures
- Thomas Bayes
- Claude E. Shannon
- Gerard Salton
- Hans Peter Luhn
- Cyril Cleverdon
- W. Bruce Croft
- Karen Spärck Jones
- Calvin Mooers
- C. J. van Rijsbergen
- Stephen E. Robertson
- Amit Singhal
See also
- Adversarial information retrieval
- Collaborative information seeking
- Controlled vocabulary
- Cross-language information retrieval
- Data mining
- European Summer School in Information Retrieval
- Human–computer information retrieval
- Information extraction
- Information Retrieval Facility
- Information science
- Information seeking
- Knowledge visualization
- Multimedia Information Retrieval
- Personal information management
- Relevance (information retrieval)
- Relevance feedback
- Rocchio classification
- Search index
- Social information seeking
- Subject indexing
- Tf-idf
- XML retrieval
External links
- ACM SIGIR: Information Retrieval Special Interest Group
- BCS IRSG: British Computer Society - Information Retrieval Specialist Group
- Text Retrieval Conference (TREC)
- Forum for Information Retrieval Evaluation (FIRE)
- Chinese Web Information Retrieval Forum (CWIRF)
- Information Retrieval (online book) by C. J. van Rijsbergen
- Information Retrieval Wiki
- Information Retrieval Facility
- Information Retrieval @ DUTH
- Introduction to Information Retrieval (online book) by Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze, Cambridge University Press. 2008.