Learning to rank

  1. Applications
       In information retrieval
       In other areas
  2. Feature vectors
  3. Evaluation measures
  4. Approaches
       Pointwise approach
       Pairwise approach
       Listwise approach
       List of methods
  5. History
       Practical usage by search engines
  6. References
  7. External links


Learning to rank[1] or machine-learned ranking (MLR) is the application of machine learning, typically supervised, semi-supervised or reinforcement learning, in the construction of ranking models for information retrieval systems.[2] Training data consists of lists of items with some partial order specified between items in each list. This order is typically induced by giving a numerical or ordinal score or a binary judgment (e.g. "relevant" or "not relevant") for each item. The ranking model's purpose is to rank, i.e. produce a permutation of items in new, unseen lists in a way which is "similar" to rankings in the training data in some sense.

Applications

In information retrieval

Ranking is a central part of many information retrieval problems, such as document retrieval, collaborative filtering, sentiment analysis, and online advertising.

A possible architecture of a machine-learned search engine is shown in the accompanying figure.

Training data consists of queries and documents matching them, together with the relevance degree of each match. It may be prepared manually by human assessors (or raters, as Google calls them), who check results for some queries and determine the relevance of each result. It is not feasible to check the relevance of all documents, so typically a technique called pooling is used: only the top few documents, retrieved by some existing ranking models, are checked. Alternatively, training data may be derived automatically by analyzing clickthrough logs (i.e. search results which got clicks from users),[3] query chains,[4] or search engine features such as Google's SearchWiki.

Training data is used by a learning algorithm to produce a ranking model which computes the relevance of documents for actual queries.

Typically, users expect a search query to complete in a short time (such as a few hundred milliseconds for web search), which makes it impossible to evaluate a complex ranking model on each document in the corpus, so a two-phase scheme is used.[5] First, a small number of potentially relevant documents are identified using simpler retrieval models which permit fast query evaluation, such as the vector space model, the boolean model, weighted AND,[6] or BM25. This phase is called top-k document retrieval, and many heuristics have been proposed in the literature to accelerate it, such as using a document's static quality score and tiered indexes.[7] In the second phase, a more accurate but computationally expensive machine-learned model is used to re-rank these documents.
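
The division of labour between the two phases can be pictured with a short sketch. The snippet below is a minimal illustration rather than a production design; bm25_score and reranker are hypothetical stand-ins for a fast first-phase scorer and a trained machine-learned model.

    def two_phase_search(query, corpus, bm25_score, reranker, k=1000, n=10):
        # Phase 1: cheap first-pass scoring, keep only the top-k candidates.
        # (A real engine would use an inverted index here rather than scoring
        # every document in the corpus.)
        candidates = sorted(corpus, key=lambda doc: bm25_score(query, doc),
                            reverse=True)[:k]
        # Phase 2: expensive machine-learned re-ranking of the k candidates only.
        rescored = sorted(candidates,
                          key=lambda doc: reranker.predict(query, doc),
                          reverse=True)
        return rescored[:n]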

In other areas

Learning to rank algorithms have been applied in areas other than information retrieval:

  • In machine translation, for ranking a set of hypothesized translations;[8]
  • In computational biology, for ranking candidate 3-D structures in the protein structure prediction problem;[8]
  • In recommender systems, for identifying a ranked list of related news articles to recommend to a user after he or she has read a current news article;[9]
  • In software engineering, where learning-to-rank methods have been used for fault localization.[10]

Feature vectors

For the convenience of MLR algorithms, query-document pairs are usually represented by numerical vectors, which are called feature vectors. Such an approach is sometimes called bag of features and is analogous to the bag of words model and vector space model used in information retrieval for representation of documents.

Components of such vectors are called features, factors or ranking signals. They may be divided into three groups (features from document retrieval are shown as examples):

  • Query-independent or static features: features which depend only on the document, but not on the query. For example, PageRank or the document's length. Such features can be precomputed offline during indexing. They may be used to compute a document's static quality score (or static rank), which is often used to speed up search query evaluation.[7][11]
  • Query-dependent or dynamic features: features which depend both on the contents of the document and on the query, such as the TF-IDF score or other non-machine-learned ranking functions.
  • Query-level features or query features, which depend only on the query. For example, the number of words in a query.

Some examples of features which were used in the well-known LETOR dataset:[12]

  • TF, TF-IDF, BM25, and language modeling scores of the document's zones (title, body, anchor text, URL) for a given query;
  • Lengths and IDF sums of the document's zones;
  • The document's PageRank, HITS ranks and their variants.

Selecting and designing good features is an important area in machine learning, which is called feature engineering.
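
As a concrete (and deliberately tiny) illustration of such a feature vector, the sketch below assembles a handful of LETOR-style signals for a single query-document pair. The doc dictionary with its "body" and "pagerank" fields and the precomputed idf table are assumptions made for the example, not part of any particular dataset.

    import math

    def feature_vector(query, doc, idf):
        # Toy LETOR-style feature vector for one query-document pair.
        # `doc` is assumed to be a dict holding the document "body" text and a
        # precomputed "pagerank"; `idf` maps terms to inverse document frequencies.
        q_terms = query.lower().split()
        body = doc["body"].lower().split()
        tf = sum(body.count(t) for t in q_terms)
        tfidf = sum(body.count(t) * idf.get(t, 0.0) for t in q_terms)
        return [
            doc["pagerank"],           # query-independent (static) feature
            math.log(1 + len(body)),   # static feature: document length
            tf,                        # dynamic feature: raw term frequency
            tfidf,                     # dynamic feature: TF-IDF score
            len(q_terms),              # query-level feature: query length
        ]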

Evaluation measures

There are several measures (metrics) which are commonly used to judge how well an algorithm is doing on training data and to compare the performance of different MLR algorithms. Often a learning-to-rank problem is reformulated as an optimization problem with respect to one of these metrics.

Examples of ranking quality measures:

  • Mean average precision (MAP);
  • DCG and NDCG;
  • Precision@n and NDCG@n, where "@n" denotes that the metrics are evaluated only on the top n documents;
  • Mean reciprocal rank;
  • Kendall's tau;
  • Spearman's rho.

DCG and its normalized variant NDCG are usually preferred in academic research when multiple levels of relevance are used.[13] Other metrics, such as MAP, MRR and precision, are defined only for binary judgments.
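
For concreteness, the following sketch computes DCG@k and NDCG@k for a list of graded relevance labels given in ranked order, using the common exponential gain 2^rel - 1 and a logarithmic position discount; other gain and discount choices exist.

    import math

    def dcg_at_k(relevances, k):
        # DCG@k with the exponential gain (2^rel - 1) and log2 position discount.
        return sum((2 ** rel - 1) / math.log2(i + 2)
                   for i, rel in enumerate(relevances[:k]))

    def ndcg_at_k(relevances, k):
        # NDCG@k: DCG of the given ranking divided by DCG of the ideal ordering.
        ideal = dcg_at_k(sorted(relevances, reverse=True), k)
        return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

    # Graded relevance labels of documents, listed in the order the model ranked them.
    print(ndcg_at_k([3, 2, 3, 0, 1, 2], k=6))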

More recently, several new evaluation metrics have been proposed which claim to model the user's satisfaction with search results better than the DCG metric:

  • Expected reciprocal rank (ERR);[14]
  • Yandex's pfound.[15]

Both of these metrics are based on the assumption that the user is more likely to stop looking at search results after examining a more relevant document than after a less relevant one.
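
Under that cascade assumption, ERR can be written down in a few lines. The sketch below follows the formulation in the ERR paper,[14] where a document with grade rel makes the user stop with probability (2^rel - 1) / 2^max_grade; the choice max_grade=4 is an assumption for the example.

    def err(relevances, max_grade=4):
        # Expected reciprocal rank under the cascade model: the user scans the
        # list top-down and stops at a document with probability that grows
        # with its graded relevance.
        score, p_continue = 0.0, 1.0
        for rank, rel in enumerate(relevances, start=1):
            p_stop = (2 ** rel - 1) / (2 ** max_grade)  # stop probability at this rank
            score += p_continue * p_stop / rank
            p_continue *= 1 - p_stop
        return score

    print(err([3, 2, 0, 1]))  # graded labels in ranked order, grades 0..max_grade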

Approaches


Tie-Yan Liu of Microsoft Research Asia has analyzed existing algorithms for learning-to-rank problems in his paper "Learning to Rank for Information Retrieval".[1] He categorized them into three groups by their input representation and loss function: the pointwise, pairwise, and listwise approaches. In practice, listwise approaches often outperform pairwise and pointwise approaches. This was further supported by a large-scale experiment on the performance of different learning-to-rank methods on a large collection of benchmark data sets.[16]

Pointwise approach

In this case, it is assumed that each query-document pair in the training data has a numerical or ordinal score. Then the learning-to-rank problem can be approximated by a regression problem — given a single query-document pair, predict its score.

A number of existing supervised machine learning algorithms can readily be used for this purpose. Ordinal regression and classification algorithms can also be used in the pointwise approach when they are used to predict the score of a single query-document pair and the score takes a small, finite number of values.
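
As a minimal illustration of the pointwise approach, the sketch below fits a linear regression to (feature vector, relevance label) pairs and then ranks unseen documents by their predicted scores; the feature values and labels are invented for the example.

    import numpy as np

    # Each row of X is the feature vector of one query-document pair and y holds
    # its graded relevance label (made-up numbers).
    X = np.array([[0.9, 3.0], [0.2, 1.0], [0.5, 0.0], [0.7, 2.0]])
    y = np.array([3.0, 1.0, 0.0, 2.0])

    # Least-squares fit of a linear scoring function with a bias term.
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

    def score(x):
        return float(np.dot(w[:-1], x) + w[-1])

    # Rank new documents by predicted relevance, best first.
    new_docs = np.array([[0.8, 2.5], [0.1, 0.5]])
    print(sorted(range(len(new_docs)), key=lambda i: -score(new_docs[i])))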

Pairwise approach

In this case, the learning-to-rank problem is approximated by a classification problem — learning a binary classifier that can tell which document is better in a given pair of documents. The goal is to minimize the average number of inversions in ranking.
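
A minimal sketch of the pairwise idea, in the spirit of Ranking SVM but with regularization and other practical details omitted, is shown below: it builds all within-query document pairs with different labels and runs stochastic gradient descent on the pairwise hinge loss.

    import numpy as np

    def train_pairwise(X, y, qids, lr=0.1, epochs=100):
        # Learn a linear scorer w so that, within each query, the document with
        # the higher label also gets the higher score.  Gradient descent on the
        # pairwise hinge loss max(0, 1 - w.(x_i - x_j)).
        w = np.zeros(X.shape[1])
        pairs = [(i, j) for i in range(len(y)) for j in range(len(y))
                 if qids[i] == qids[j] and y[i] > y[j]]
        for _ in range(epochs):
            for i, j in pairs:
                diff = X[i] - X[j]
                if 1.0 - np.dot(w, diff) > 0:   # pair mis-ordered or inside the margin
                    w += lr * diff              # hinge-loss gradient step
        return w

    # Toy data: two queries with three documents each and graded labels.
    X = np.array([[1.0, 0.2], [0.4, 0.1], [0.1, 0.9],
                  [0.8, 0.3], [0.3, 0.5], [0.2, 0.2]])
    y = np.array([2, 1, 0, 2, 1, 0])
    qids = np.array([1, 1, 1, 2, 2, 2])
    print(train_pairwise(X, y, qids))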

Listwise approach

These algorithms try to directly optimize the value of one of the above evaluation measures, averaged over all queries in the training data. This is difficult because most evaluation measures are not continuous functions with respect to the ranking model's parameters, so continuous approximations or bounds on evaluation measures have to be used.
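
One common way to obtain such a smooth surrogate is the ListNet-style top-one probability loss; the sketch below is a simplified single-list version, with the scores and labels invented for the example.

    import numpy as np

    def listnet_loss(scores, labels):
        # ListNet-style listwise surrogate: cross entropy between the top-one
        # probability distributions induced by the true labels and by the model
        # scores for one query's result list.
        def softmax(v):
            e = np.exp(v - np.max(v))
            return e / e.sum()
        p_true = softmax(np.asarray(labels, dtype=float))
        p_pred = softmax(np.asarray(scores, dtype=float))
        return float(-np.sum(p_true * np.log(p_pred + 1e-12)))

    # Model scores and graded labels for one query's documents.
    print(listnet_loss(scores=[2.0, 1.0, 0.1], labels=[3, 1, 0]))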

List of methods

A partial list of published learning-to-rank algorithms is shown below with years of first publication of each method:

Year Name Type Notes
1989 OPRF[17] pointwise Polynomial regression (the work speaks of pattern recognition rather than machine learning, but the idea is the same)
1992 SLR[18] pointwise Staged logistic regression
1999 MART (Multiple Additive Regression Trees) pairwise
2000 Ranking SVM (RankSVM) pairwise A more recent exposition is given in [3], which describes an application to ranking using clickthrough logs.
2002 Pranking[19] pointwise Ordinal regression.
2003 RankBoost pairwise
2005 [https://www.microsoft.com/en-us/research/wp-content/uploads/2005/08/icml_ranking.pdf RankNet] pairwise
2006 IR-SVM pairwise Ranking SVM with query-level normalization in the loss function.
2006 LambdaRank pairwise/listwise RankNet in which the pairwise loss function is multiplied by the change in the IR metric caused by a swap.
2007 AdaRank listwise
2007 FRank pairwise Based on RankNet, but uses a different loss function: fidelity loss.
2007 GBRank pairwise
2007 ListNet listwise
2007 McRank pointwise
2007 [https://web.archive.org/web/20100807162456/http://www.stat.rutgers.edu/~tzhang/papers/nips07-ranking.pdf QBRank] pairwise
2007 RankCosine listwise
2007 RankGP[20] listwise
2007 RankRLS pairwise Regularized least-squares based ranking. The work is extended in [21] to learning to rank from general preference graphs.
2007 SVMmap listwise
2008 LambdaSMART/LambdaMART pairwise/listwise The winning entry in the 2010 Yahoo! Learning to Rank Challenge used an ensemble of LambdaMART models. Based on MART (1999).[22] "LambdaSMART" stands for Lambda-submodel-MART; LambdaMART is the variant with no submodel ([https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-2008-109.pdf]).
2008 ListMLE listwise Based on ListNet.
2008 PermuRank listwise
2008 SoftRank listwise
2008 Ranking Refinement[23] pairwise A semi-supervised approach to learning to rank that uses boosting.
2008 [https://web.archive.org/web/20100723152841/http://www-connex.lip6.fr/~amini/SSRankBoost/ SSRankBoost][24] pairwise An extension of RankBoost to learn with partially labeled data (semi-supervised learning to rank).
2008 SortNet[25] pairwise SortNet, an adaptive ranking algorithm which orders objects using a neural network as a comparator.
2009 [https://web.archive.org/web/20101122085504/http://itcs.tsinghua.edu.cn/papers/2009/2009031.pdf MPBoost] pairwise Magnitude-preserving variant of RankBoost. The idea is that the more unequal the labels of a pair of documents are, the harder the algorithm should try to rank them correctly.
2009 [https://web.archive.org/web/20130620070239/http://machinelearning.org/archive/icml2009/papers/498.pdf BoltzRank] listwise Unlike earlier methods, BoltzRank produces a ranking model that looks during query time not just at a single document, but also at pairs of documents.
2009 BayesRank listwise A method that combines the Plackett-Luce model and a neural network to minimize the expected Bayes risk, related to NDCG, from the decision-making aspect.
2010 [https://people.cs.pitt.edu/~valizadegan/Publications/NDCG_Boost.pdf NDCG Boost][26] listwise A boosting approach to optimize NDCG.
2010 [https://arxiv.org/abs/1001.4597 GBlend] pairwise Extends GBRank to the learning-to-blend problem of jointly solving multiple learning-to-rank problems with some shared features.
2010 [https://web.archive.org/web/20100601205607/http://wume.cse.lehigh.edu/~ovd209/wsdm/proceedings/docs/p151.pdf IntervalRank] pairwise & listwise
2010 CRR pointwise & pairwise Combined Regression and Ranking. Uses stochastic gradient descent to optimize a linear combination of a pointwise quadratic loss and a pairwise hinge loss from Ranking SVM.
2017 ES-Rank listwise Evolutionary strategy learning-to-rank technique with seven fitness evaluation metrics.

Note: as most supervised learning algorithms can be applied to the pointwise case, only methods specifically designed with ranking in mind are shown above.

History

Norbert Fuhr introduced the general idea of MLR in 1992, describing learning approaches in information retrieval as a generalization of parameter estimation;[27] a specific variant of this approach (using polynomial regression) had been published by him three years earlier.[17] Bill Cooper proposed logistic regression for the same purpose in 1992 [18] and used it with his Berkeley research group to train a successful ranking function for TREC. Manning et al.[28] suggest that these early works achieved limited results in their time due to little available training data and poor machine learning techniques.

Several conferences, such as NIPS, SIGIR and ICML, have held workshops devoted to the learning-to-rank problem since the mid-2000s.

Practical usage by search engines

Commercial web search engines began using machine-learned ranking systems in the 2000s. One of the first search engines to do so was AltaVista (its technology was later acquired by Overture, and then Yahoo), which launched a gradient-boosting-trained ranking function in April 2003.[29][30]

Bing's search is said to be powered by the [https://www.microsoft.com/en-us/research/wp-content/uploads/2005/08/icml_ranking.pdf RankNet] algorithm,[31] which was invented at Microsoft Research in 2005.

In November 2009 the Russian search engine Yandex announced[32] that it had significantly increased its search quality due to the deployment of a new proprietary MatrixNet algorithm, a variant of gradient boosting that uses oblivious decision trees.[33] Yandex has also sponsored a machine-learned ranking competition, "Internet Mathematics 2009",[34] based on its own search engine's production data. Yahoo announced a similar competition in 2010.[35]

As of 2008, Google's Peter Norvig denied that their search engine relies exclusively on machine-learned ranking.[36] Cuil's CEO, Tom Costello, suggested that they prefer hand-built models because these can outperform machine-learned models when measured against metrics like click-through rate or time on landing page, since machine-learned models "learn what people say they like, not what people actually like".[37]

In January 2017 the technology was included in the open-source search engine Apache Solr,[38] making machine-learned search ranking widely accessible for enterprise search as well.

References

1. ^{{citation|author=Tie-Yan Liu|title=Learning to Rank for Information Retrieval|series=Foundations and Trends in Information Retrieval|year=2009|isbn=978-1-60198-244-5|doi=10.1561/1500000016|pages=225–331|volume=3|issue=3}}. Slides from Tie-Yan Liu's talk at WWW 2009 conference are available online
2. ^Mehryar Mohri, Afshin Rostamizadeh, Ameet Talwalkar (2012) Foundations of Machine Learning, The MIT Press {{ISBN|9780262018258}}.
3. ^{{citation | author=Joachims, T. | journal=Proceedings of the ACM Conference on Knowledge Discovery and Data Mining | url=http://www.cs.cornell.edu/people/tj/publications/joachims_02c.pdf | title=Optimizing Search Engines using Clickthrough Data | year=2002}}
4. ^{{citation |author1=Joachims T. |author2=Radlinski F. | title=Query Chains: Learning to Rank from Implicit Feedback | url=http://radlinski.org/papers/Radlinski05QueryChains.pdf | year=2005 | journal=Proceedings of the ACM Conference on Knowledge Discovery and Data Mining}}
5. ^{{citation |author1=B. Cambazoglu |author2=H. Zaragoza |author3=O. Chapelle |author4=J. Chen |author5=C. Liao |author6=Z. Zheng |author7=J. Degenhardt. | title=Early exit optimizations for additive machine learned ranking systems | journal=WSDM '10: Proceedings of the Third ACM International Conference on Web Search and Data Mining, 2010. | url=http://olivier.chapelle.cc/pub/wsdm2010.pdf}}
6. ^{{citation |author1=Broder A. |author2=Carmel D. |author3=Herscovici M. |author4=Soffer A. |author5=Zien J. | title=Efficient query evaluation using a two-level retrieval process | journal=Proceedings of the Twelfth International Conference on Information and Knowledge Management | year=2003 | pages=426–434 | isbn=978-1-58113-723-1 | url=http://cis.poly.edu/westlab/papers/cntdstrb/p426-broder.pdf }}
7. ^{{citation |author1=Manning C. |author2=Raghavan P. |author3=Schütze H. | title=Introduction to Information Retrieval | publisher=Cambridge University Press | year=2008}}. Section 7.1
8. ^{{citation | author=Kevin K. Duh | title=Learning to Rank with {{sic|hide=y|Partially|-}}Labeled Data | year=2009 | url=http://ssli.ee.washington.edu/people/duh/thesis/uwthesis.pdf}}
9. ^Yuanhua Lv, Taesup Moon, Pranam Kolari, Zhaohui Zheng, Xuanhui Wang, and Yi Chang, Learning to Model Relatedness for News Recommendation {{Webarchive|url=https://web.archive.org/web/20110827065356/http://sifaka.cs.uiuc.edu/~ylv2/pub/www11-relatedness.pdf |date=2011-08-27 }}, in International Conference on World Wide Web (WWW), 2011.
10. ^{{Cite book|doi = 10.1109/ICSME.2014.41|chapter = Learning to Combine Multiple Ranking Metrics for Fault Localization|title = 2014 IEEE International Conference on Software Maintenance and Evolution|pages = 191–200|year = 2014|last1 = Xuan|first1 = Jifeng|last2 = Monperrus|first2 = Martin|citeseerx = 10.1.1.496.6829|chapter-url=https://hal.archives-ouvertes.fr/hal-01018935/document|isbn = 978-1-4799-6146-7}}
11. ^{{cite conference | first=M. |last=Richardson |author2=Prakash, A. |author3=Brill, E. | title=Beyond PageRank: Machine Learning for Static Ranking | booktitle=Proceedings of the 15th International World Wide Web Conference | pages=707–715 | publisher= | year=2006 | url=http://research.microsoft.com/en-us/um/people/mattri/papers/www2006/staticrank.pdf | accessdate= }}
12. ^LETOR 3.0. A Benchmark Collection for Learning to Rank for Information Retrieval
13. ^http://www.stanford.edu/class/cs276/handouts/lecture15-learning-ranking.ppt
14. ^{{citation|author1=Olivier Chapelle |author2=Donald Metzler |author3=Ya Zhang |author4=Pierre Grinspan |title=Expected Reciprocal Rank for Graded Relevance|url=http://research.yahoo.com/files/err.pdf |archive-url=https://web.archive.org/web/20120224053008/http://research.yahoo.com/files/err.pdf |dead-url=yes |archive-date=2012-02-24 |journal=CIKM|year=2009|pages=}}
15. ^{{citation|author1=Gulin A. |author2=Karpovich P. |author3=Raskovalov D. |author4=Segalovich I. |title=Yandex at ROMIP'2009: optimization of ranking algorithms by machine learning methods|url=http://romip.ru/romip2009/15_yandex.pdf|journal=Proceedings of ROMIP'2009|year=2009|pages=163–168}} (in Russian)
16. ^{{citation |author1=Tax, Niek |author2=Bockting, Sander |author3=Hiemstra, Djoerd | journal=Information Processing & Management |volume=51 |issue=6 | title=A cross-benchmark comparison of 87 learning to rank methods | pages=757–772 | year=2015 | url=http://wwwhome.cs.utwente.nl/~hiemstra/papers/ipm2015.pdf | doi=10.1016/j.ipm.2015.07.002}}
17. ^{{citation | last=Fuhr | first=Norbert | journal=ACM Transactions on Information Systems | title=Optimum polynomial retrieval functions based on the probability ranking principle | volume=7 | number=3 | pages=183–204 | year=1989 | doi=10.1145/65943.65944}}
18. ^{{citation |author1=Cooper, William S. |author2=Gey, Frederic C. |author3=Dabney, Daniel P. | journal=SIGIR '92 Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval | title=Probabilistic retrieval based on staged logistic regression | pages=198–210 | year=1992 | doi=10.1145/133160.133199|isbn=978-0897915236 }}
19. ^{{cite journal | citeseerx = 10.1.1.20.378 | title = Pranking }}
20. ^{{cite journal | citeseerx = 10.1.1.90.220 | title = RankGP }}
21. ^{{Citation|last=Pahikkala|first=Tapio |author2=Tsivtsivadze, Evgeni |author3=Airola, Antti |author4=Järvinen, Jouni |author5=Boberg, Jorma |title=An efficient algorithm for learning to rank from preference graphs|journal=Machine Learning|year=2009|volume=75|issue=1|pages=129–165|doi=10.1007/s10994-008-5097-z|postscript=.}}
22. ^C. Burges. (2010). [https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/MSR-TR-2010-82.pdf From RankNet to LambdaRank to LambdaMART: An Overview].
23. ^Rong Jin, Hamed Valizadegan, Hang Li, Ranking Refinement and Its Application for Information Retrieval, in International Conference on World Wide Web (WWW), 2008.
24. ^Massih-Reza Amini, Vinh Truong, Cyril Goutte, A Boosting Algorithm for Learning Bipartite Ranking Functions with Partially Labeled Data {{Webarchive|url=https://web.archive.org/web/20100802093049/http://www-connex.lip6.fr/~amini/Publis/SemiSupRanking_sigir08.pdf |date=2010-08-02 }}, International ACM SIGIR conference, 2008. The code {{Webarchive|url=https://web.archive.org/web/20100723152841/http://www-connex.lip6.fr/~amini/SSRankBoost/ |date=2010-07-23 }} is available for research purposes.
25. ^Leonardo Rigutini, Tiziano Papini, Marco Maggini, Franco Scarselli, "SortNet: learning to rank by a neural-based sorting algorithm", SIGIR 2008 workshop: Learning to Rank for Information Retrieval, 2008
26. ^Hamed Valizadegan, Rong Jin, Ruofei Zhang, Jianchang Mao, Learning to Rank by Optimizing NDCG Measure, in Proceeding of Neural Information Processing Systems (NIPS), 2010.
27. ^{{citation | last=Fuhr | first=Norbert | journal=Computer Journal | title=Probabilistic Models in Information Retrieval | volume=35 | number=3 | pages=243–255 | year=1992 | doi=10.1093/comjnl/35.3.243}}
28. ^{{citation |author1=Manning C. |author2=Raghavan P. |author3=Schütze H. |title=Introduction to Information Retrieval |publisher=Cambridge University Press |year=2008}}. Sections 7.4 and 15.5
29. ^Jan O. Pedersen. The MLR Story {{Webarchive|url=https://web.archive.org/web/20110713120113/http://jopedersen.com/Presentations/The_MLR_Story.pdf |date=2011-07-13 }}
30. ^{{US Patent|7197497}}
31. ^Bing Search Blog: User Needs, Features and the Science behind Bing
32. ^Yandex corporate blog entry about new ranking model "Snezhinsk" (in Russian)
33. ^The algorithm was not disclosed, but a few details were made public.
34. ^Yandex's Internet Mathematics 2009 competition page
35. ^{{Cite web |url=http://learningtorankchallenge.yahoo.com/ |title=Yahoo Learning to Rank Challenge |access-date=2010-02-26 |archive-url=https://web.archive.org/web/20100301011649/http://learningtorankchallenge.yahoo.com/ |archive-date=2010-03-01 |dead-url=yes |df= }}
36. ^{{cite web |url = http://anand.typepad.com/datawocky/2008/05/are-human-experts-less-prone-to-catastrophic-errors-than-machine-learned-models.html |archiveurl = https://www.webcitation.org/5sq8irWNM?url=http://anand.typepad.com/datawocky/2008/05/are-human-experts-less-prone-to-catastrophic-errors-than-machine-learned-models.html |archivedate = 2010-09-18 |title = Are Machine-Learned Models Prone to Catastrophic Errors? |date = 2008-05-24 |last = Rajaraman |first = Anand |authorlink = Anand Rajaraman |access-date = 2009-11-11 |dead-url = no |df = }}
37. ^{{cite web | url = http://www.cuil.com/info/blog/2009/06/26/so-how-is-bing-doing | archiveurl = https://archive.is/20090627213358/http://www.cuil.com/info/blog/2009/06/26/so-how-is-bing-doing | archivedate = 2009-06-27 | title = Cuil Blog: So how is Bing doing? | date = 2009-06-26 | last = Costello | first = Tom}}
38. ^{{Cite news|url=https://www.techatbloomberg.com/blog/bloomberg-integrated-learning-rank-apache-solr/|title=How Bloomberg Integrated Learning-to-Rank into Apache Solr {{!}} Tech at Bloomberg|date=2017-01-23|work=Tech at Bloomberg|access-date=2017-02-28|language=en-US}}

External links

Competitions and public datasets
  • [https://www.microsoft.com/en-us/research/project/letor-learning-rank-information-retrieval/ LETOR: A Benchmark Collection for Research on Learning to Rank for Information Retrieval]
  • Yandex's Internet Mathematics 2009
  • [https://web.archive.org/web/20100301011649/http://learningtorankchallenge.yahoo.com/ Yahoo! Learning to Rank Challenge]
  • Microsoft Learning to Rank Datasets
Open Source code
  • [https://mloss.org/software/view/332/ Parallel C++/MPI implementation of Gradient Boosted Regression Trees for ranking, released September 2011]
  • [https://sites.google.com/site/rtranking/ C++ implementation of Gradient Boosted Regression Trees and Random Forests for ranking]
  • C++ and Python tools for using the SVM-Rank algorithm
  • [https://github.com/apache/lucene-solr/tree/master/solr/contrib/ltr Java implementation in the Apache Solr search engine]

Categories: Information retrieval techniques | Machine learning | Ranking functions
