Entry | Domain adaptation |
Definition |
Domain adaptation[1][2] is a field associated with machine learning and transfer learning. The scenario arises when we aim to learn, from a source data distribution, a model that performs well on a different (but related) target data distribution. For instance, one of the tasks of the common spam filtering problem consists in adapting a model from one user (the source distribution) to a new user who receives significantly different emails (the target distribution). Domain adaptation has also been shown to be beneficial for learning from unrelated sources[3]. Note that, when more than one source distribution is available, the problem is referred to as multi-source domain adaptation.[4]

Formalization

Let $X$ be the input space (or description space) and let $Y$ be the output space (or label space). The objective of a machine learning algorithm is to learn a mathematical model (a hypothesis) $h : X \to Y$ able to assign a label from $Y$ to an example from $X$. This model is learned from a learning sample $S = \{(x_i, y_i)\}_{i=1}^{m}$. Usually in supervised learning (without domain adaptation), we suppose that the examples are drawn i.i.d. from a distribution $D_S$ with support $X \times Y$ (unknown and fixed). The objective is then to learn $h$ (from $S$) such that it commits as little error as possible when labelling new examples drawn from $D_S$.

The main difference between supervised learning and domain adaptation is that in the latter situation we study two different (but related) distributions $D_S$ and $D_T$ on $X \times Y$. The domain adaptation task then consists in transferring knowledge from the source domain $D_S$ to the target domain $D_T$. The goal is to learn $h$ (from labeled or unlabeled samples coming from the two domains) such that it commits as little error as possible on the target domain $D_T$. The major issue is the following: if a model is learned from a source domain, how well can it label data coming from the target domain?

The different types of domain adaptation

There are several contexts of domain adaptation. They differ in the information available for the target task: in unsupervised domain adaptation, only labeled source examples and unlabeled target examples are available; in semi-supervised domain adaptation, a small set of labeled target examples is also available; in supervised domain adaptation, all the examples considered are labeled.

Three algorithmic principles

Reweighting algorithms

The objective is to reweight the source labeled sample so that it "looks like" the target sample (in terms of the error measure considered)[5][6].
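As an illustration, here is a minimal sketch of one common reweighting strategy: estimating importance weights $p_T(x)/p_S(x)$ with a logistic-regression domain classifier and then training a weighted source model. The function names and the discriminative density-ratio trick are illustrative assumptions; this is not the kernel mean matching method of [5] nor the exact covariate-shift weighting of [6].

```python
# Hypothetical illustration: importance weighting via a domain classifier.
# Assumes NumPy arrays as inputs; not the method of references [5] or [6].
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_importance_weights(X_source, X_target):
    """Train a classifier to separate source (0) from target (1) examples,
    then convert its probabilities into density-ratio weights."""
    X = np.vstack([X_source, X_target])
    d = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
    domain_clf = LogisticRegression(max_iter=1000).fit(X, d)
    p_target = domain_clf.predict_proba(X_source)[:, 1]
    # p_T(x)/p_S(x) is proportional to p(target | x) / p(source | x).
    weights = p_target / np.clip(1.0 - p_target, 1e-6, None)
    return weights / weights.mean()  # normalize for numerical stability

def fit_reweighted_model(X_source, y_source, X_target):
    """Fit a source classifier whose training loss is reweighted toward the target."""
    w = estimate_importance_weights(X_source, X_target)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_source, y_source, sample_weight=w)
    return clf
```

The intuition is that source examples resembling the target data receive larger weights, so the weighted empirical error on the source approximates the error on the target.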
Iterative algorithms

A method for adapting consists in iteratively "auto-labeling" the target examples. The principle is simple:

1. a model $h$ is learned from the labeled examples;
2. $h$ automatically labels some target examples;
3. a new model is learned from the enlarged set of labeled examples.
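A minimal sketch of this auto-labeling loop (often called self-training), assuming a scikit-learn-style classifier with predict_proba; the classifier choice, confidence threshold, and number of rounds are illustrative assumptions rather than values prescribed by the references.

```python
# Hypothetical self-training loop; assumes NumPy arrays as inputs.
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def self_train(X_source, y_source, X_target, rounds=5, confidence=0.9):
    X_lab, y_lab = X_source.copy(), y_source.copy()
    remaining = X_target.copy()
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model = clone(model).fit(X_lab, y_lab)            # (1) learn from the labeled examples
        if len(remaining) == 0:
            break
        proba = model.predict_proba(remaining)
        confident = proba.max(axis=1) >= confidence       # (2) auto-label the most confident targets
        if not confident.any():
            break
        pseudo_labels = model.classes_[proba[confident].argmax(axis=1)]
        X_lab = np.vstack([X_lab, remaining[confident]])  # (3) retrain on the enlarged sample
        y_lab = np.concatenate([y_lab, pseudo_labels])
        remaining = remaining[~confident]
    return model
```

Each round moves the most confidently labeled target examples into the training set, which is the simplest instance of the auto-labeling principle described above.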
Note that other iterative approaches exist, but they usually require labeled target examples.[7][8]

Search of a common representation space

The goal is to find or construct a common representation space for the two domains. The objective is to obtain a space in which the domains are close to each other while good performance is preserved on the source labeling task. This can be achieved through adversarial machine learning techniques, in which feature representations of samples from the different domains are encouraged to be indistinguishable[9][10].
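A heavily simplified sketch in the spirit of domain-adversarial training[9], using a gradient-reversal layer so that the feature extractor learns representations the domain classifier cannot separate. The layer sizes, the $\lambda$ coefficient, and the PyTorch formulation are illustrative assumptions, not the exact architecture of the cited work.

```python
# Hypothetical sketch of domain-adversarial feature alignment with gradient reversal.
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient's sign so the feature extractor learns to
        # *confuse* the domain classifier instead of helping it.
        return -ctx.lam * grad_output, None

feature_extractor = nn.Sequential(nn.Linear(20, 64), nn.ReLU())  # input dim 20 is an assumption
label_classifier = nn.Linear(64, 2)    # predicts task labels (trained on source only)
domain_classifier = nn.Linear(64, 2)   # predicts source vs. target domain

def losses(x_source, y_source, x_target, lam=0.1):
    ce = nn.CrossEntropyLoss()
    f_s = feature_extractor(x_source)
    f_t = feature_extractor(x_target)
    task_loss = ce(label_classifier(f_s), y_source)
    feats = torch.cat([f_s, f_t])
    domains = torch.cat([torch.zeros(len(f_s), dtype=torch.long),
                         torch.ones(len(f_t), dtype=torch.long)])
    domain_loss = ce(domain_classifier(GradientReversal.apply(feats, lam)), domains)
    return task_loss + domain_loss
```

Minimizing the combined loss trains the label classifier on source data while the reversed gradient pushes source and target features toward a common representation.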
Hierarchical Bayesian Model

The goal is to construct a Bayesian hierarchical model, essentially a factorization model for counts, in order to derive domain-dependent latent representations that allow both domain-specific and globally shared latent factors[3].

References

1. Bridle, John S.; Cox, Stephen J. (1990). "RecNorm: Simultaneous normalisation and classification applied to speech recognition". Conference on Neural Information Processing Systems (NIPS), pp. 234–240. http://papers.nips.cc/paper/328-recnorm-simultaneous-normalisation-and-classification-applied-to-speech-recognition.pdf
2. Ben-David, Shai; Blitzer, John; Crammer, Koby; Kulesza, Alex; Pereira, Fernando; Wortman Vaughan, Jennifer (2010). "A theory of learning from different domains". Machine Learning, 79(1–2), pp. 151–175. doi:10.1007/s10994-009-5152-4. https://link.springer.com/content/pdf/10.1007/s10994-009-5152-4.pdf
3. Hajiramezanali, E.; Dadaneh, S. Z.; Karbalayghareh, A.; Zhou, Z.; Qian, X. (2018). "Bayesian multi-domain learning for cancer subtype discovery from next-generation sequencing count data". 32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, Canada. https://arxiv.org/pdf/1810.09433.pdf
4. Crammer, Koby; Kearns, Michael; Wortman, Jennifer (2008). "Learning from Multiple Sources". Journal of Machine Learning Research, 9, pp. 1757–1774. http://www.jmlr.org/papers/volume9/crammer08a/crammer08a.pdf
5. Huang, Jiayuan; Smola, Alexander J.; Gretton, Arthur; Borgwardt, Karsten M.; Schölkopf, Bernhard (2006). "Correcting Sample Selection Bias by Unlabeled Data". Conference on Neural Information Processing Systems (NIPS), pp. 601–608. http://papers.nips.cc/paper/3075-correcting-sample-selection-bias-by-unlabeled-data.pdf
6. Shimodaira, Hidetoshi (2000). "Improving predictive inference under covariate shift by weighting the log-likelihood function". Journal of Statistical Planning and Inference, pp. 227–244. https://www.researchgate.net/publication/230710850
7. Arief-Ang, I.B.; Salim, F.D.; Hamilton, M. (2017). "DA-HOC: semi-supervised domain adaptation for room occupancy prediction using CO2 sensor data". 4th ACM International Conference on Systems for Energy-Efficient Built Environments (BuildSys), Delft, Netherlands, pp. 1–10. doi:10.1145/3137133.3137146. ISBN 978-1-4503-5544-5. https://dl.acm.org/citation.cfm?id=3137146
8. Arief-Ang, I.B.; Hamilton, M.; Salim, F.D. (2018). "A Scalable Room Occupancy Prediction with Transferable Time Series Decomposition of CO2 Sensor Data". ACM Transactions on Sensor Networks (TOSN), 14(3–4), pp. 21:1–21:28. doi:10.1145/3217214.
9. Ganin, Yaroslav; Ustinova, Evgeniya; Ajakan, Hana; Germain, Pascal; Larochelle, Hugo; Laviolette, François; Marchand, Mario; Lempitsky, Victor (2016). "Domain-Adversarial Training of Neural Networks". Journal of Machine Learning Research, 17, pp. 1–35.
10. Wulfmeier, Markus; Bewley, Alex; Posner, Ingmar (2017). "Addressing Appearance Change in Outdoor Robotics with Adversarial Domain Adaptation". International Conference on Intelligent Robots and Systems (IROS). https://arxiv.org/pdf/1703.01461

Category: Machine learning |