Entry | Synthetic data |
Definition |
The creation of synthetic data is an involved process of data anonymization; that is to say, synthetic data is a subset of anonymized data.[3] Synthetic data is used in a variety of fields as a filter for information that would otherwise compromise the confidentiality of particular aspects of the data. These aspects often take the form of personal information (e.g. name, home address, IP address, telephone number, social security number, credit card number, etc.).

Usefulness

Synthetic data are generated to meet specific needs or certain conditions that may not be found in the original, real data. This can be useful when designing any type of system, because the synthetic data serve as a simulation or as a theoretical value, situation, etc. This makes it possible to account for unexpected results and to have a basic solution or remedy if the results prove unsatisfactory. Synthetic data are often generated to represent the authentic data and allow a baseline to be set.[4] Another use of synthetic data is to protect the privacy and confidentiality of authentic data. As stated previously, synthetic data is used in testing and creating many different types of systems; the following quote, from the abstract of an article describing software that generates synthetic data for testing fraud detection systems, further explains its use and importance: "This enables us to create realistic behavior profiles for users and attackers. The data is used to train the fraud detection system itself, thus creating the necessary adaptation of the system to a specific environment."[4]

History

The history of the generation of synthetic data dates back to 1993, when the idea of original fully synthetic data was introduced by Rubin.[5] Rubin originally designed this to synthesize the Decennial Census long form responses for the short form households. He then released samples that did not include any actual long form records, thereby preserving the anonymity of the households.[6] Later that year, the idea of original partially synthetic data was created by Little, who used it to synthesize the sensitive values on the public use file.[7] In 1994, Fienberg came up with the idea of critical refinement, in which he used a parametric posterior predictive distribution (instead of a Bayes bootstrap) to do the sampling.[6] Later, other important contributors to the development of synthetic data generation were Trivellore Raghunathan, Jerry Reiter, Donald Rubin, John M. Abowd, and Jim Woodcock. Collectively they came up with a solution for how to treat partially synthetic data with missing data, as well as the technique of Sequential Regression Multivariate Imputation.[6]

Applications

Synthetic data are used in the process of data mining. Fraud detection systems, confidentiality systems, and many other types of systems are tested and trained using synthetic data. As described previously, synthetic data may seem to be just a compilation of "made up" data, but there are specific algorithms and generators designed to create realistic data.[8] Such synthetic data assist in teaching a system how to react to certain situations or criteria. Researchers conducting clinical trials or other studies may generate synthetic data to aid in creating a baseline for future studies and testing. For example, intrusion detection software is tested using synthetic data.
This data is a representation of the authentic data and may include intrusion instances that are not found in the authentic data. The synthetic data allows the software to recognize these situations and react accordingly. If synthetic data were not used, the software would only be trained to react to the situations provided by the authentic data, and it might not recognize another type of intrusion.[4] Synthetic data is also used to protect the privacy and confidentiality of a set of data. Real data contains personal, private, or confidential information that a programmer, software creator, or research project may not want disclosed.[9] Synthetic data holds no personal information and cannot be traced back to any individual; therefore, using synthetic data reduces confidentiality and privacy issues.

Calculations

Researchers test their frameworks on synthetic data, which is "the only source of ground truth on which they can objectively assess the performance of their algorithms".[10] Synthetic data can be generated through the use of random lines, having different orientations and starting positions.[11] Datasets can get fairly complicated. A more complicated dataset can be generated by using a synthesizer build. To create a synthesizer build, first use the original data to create a model or equation that fits the data best; this model or equation is called a synthesizer build, and it can be used to generate more data.[12] Constructing a synthesizer build involves constructing a statistical model. In a linear regression example, the original data can be plotted and a best-fit line created from them. This line is a synthesizer created from the original data. The next step is to generate more synthetic data from the synthesizer build, that is, from this linear equation. In this way, the new data can be used for studies and research, while the confidentiality of the original data is protected.[12] A sketch of this idea appears in the code example below.

David Jensen from the Knowledge Discovery Laboratory explains how to generate synthetic data: "Researchers frequently need to explore the effects of certain data characteristics on their data model."[12] To help construct datasets exhibiting specific properties, such as auto-correlation or degree disparity, Proximity can generate synthetic data having one of several types of graph structure: random graphs that are generated by some random process; lattice graphs having a ring structure; lattice graphs having a grid structure; etc.[12] In all cases, the data generation process follows the same steps. Since the attribute values of one object may depend on the attribute values of related objects, the attribute generation process assigns values collectively.[12]
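The synthesizer-build idea above can be illustrated with a minimal, hypothetical sketch. The following Python example (using NumPy; the dataset, the linear model, and all parameter values are invented for illustration and are not taken from the cited tutorial) fits a best-fit line to an "original" dataset and then draws new synthetic points from the fitted line plus its estimated residual noise.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# "Original" data: a small confidential dataset (simulated here for illustration).
x_real = rng.uniform(0, 10, size=200)
y_real = 3.0 * x_real + 5.0 + rng.normal(0, 2.0, size=200)

# Step 1: fit a statistical model to the original data (the "synthesizer build").
# For a linear regression synthesizer this is the best-fit line plus the
# residual spread, estimated by ordinary least squares.
slope, intercept = np.polyfit(x_real, y_real, deg=1)
residual_std = np.std(y_real - (slope * x_real + intercept))

# Step 2: generate new, synthetic records from the fitted model rather than
# from the original records themselves.
x_synth = rng.uniform(x_real.min(), x_real.max(), size=500)
y_synth = slope * x_synth + intercept + rng.normal(0, residual_std, size=500)

# The synthetic pairs (x_synth, y_synth) follow the same overall relationship
# as the original data but contain none of the original records.
print(f"fitted line: y = {slope:.2f} * x + {intercept:.2f}")
```

Only the fitted coefficients and the noise estimate are carried over to the generation step, so no original record appears in the synthetic output.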
Synthetic data in machine learning

Synthetic data is increasingly being used for machine learning applications: a model is trained on a synthetically generated dataset with the intention of transferring what it learns to real data. Efforts have been made to construct general-purpose synthetic data generators to enable data science experiments.[13] In general, synthetic data has several natural advantages. A minimal sketch of the train-on-synthetic, evaluate-on-real workflow appears below.
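As a rough illustration of training on synthetic data and evaluating on real data, the following Python sketch (using NumPy and scikit-learn; the two-class generator, the class means, and the shift parameter are invented stand-ins, not a real benchmark) trains a classifier purely on synthetic samples and scores it on a small "real" set drawn from a slightly shifted distribution.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=1)

def make_data(n, shift=0.0):
    """Two-class, two-feature data; `shift` mimics the synthetic-to-real gap."""
    x0 = rng.normal([0.0 + shift, 0.0], 1.0, size=(n, 2))
    x1 = rng.normal([2.0 + shift, 2.0], 1.0, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return X, y

# Train entirely on synthetic data, then evaluate on a smaller "real" set
# whose distribution is slightly different (the domain gap).
X_synth, y_synth = make_data(1000, shift=0.0)
X_real, y_real = make_data(200, shift=0.3)

model = LogisticRegression().fit(X_synth, y_synth)
print("accuracy on real data:", model.score(X_real, y_real))
```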
Training on synthetic data has been proposed for computer vision applications, in particular object detection, where the synthetic environment is a 3D model of the object,[14] and for learning to navigate environments by visual information. At the same time, transfer learning remains a nontrivial problem, and synthetic data has not yet become ubiquitous. Research results indicate that adding a small amount of real data significantly improves transfer learning with synthetic data. Advances in generative models, in particular generative adversarial networks (GANs), lead to the natural idea that one can produce data and then use it for training. This fully synthetic approach has not yet materialized,[15] although GANs and adversarial training in general are already successfully used to improve synthetic data generation.[16] Currently, synthetic data is used in practice for emulated environments for training self-driving cars (in particular, using realistic computer games for synthetic environments[17]), point tracking,[18] and retail applications,[19] with techniques such as domain randomization for transfer learning.[20] A sketch of the domain randomization idea appears below.
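The domain randomization technique mentioned above can be sketched in a simplified, non-vision setting. In the hypothetical Python example below (NumPy and scikit-learn; the nuisance parameters and their ranges are invented for illustration), each synthetic batch is generated under randomly drawn conditions, so that an unseen target domain is likely to fall within the range of variation seen during training.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=2)

def synth_batch(n):
    """Generate one synthetic batch under randomly drawn 'nuisance' conditions."""
    noise = rng.uniform(0.5, 2.0)      # randomized noise level (invented range)
    offset = rng.uniform(-1.0, 1.0)    # randomized global shift (invented range)
    x0 = rng.normal([0.0 + offset, 0.0], noise, size=(n, 2))
    x1 = rng.normal([2.0 + offset, 2.0], noise, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return X, y

# Training set: many batches, each with its own randomized conditions.
batches = [synth_batch(100) for _ in range(20)]
X_train = np.vstack([X for X, _ in batches])
y_train = np.concatenate([y for _, y in batches])

model = LogisticRegression().fit(X_train, y_train)

# A batch drawn under yet another random condition stands in for the unseen
# "real" domain; the hope is that it lies inside the randomized training range.
X_test, y_test = synth_batch(200)
print("accuracy on held-out domain:", model.score(X_test, y_test))
```

In practice the randomized parameters would be rendering properties such as textures, lighting, and camera pose rather than the toy noise level and offset used here.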
References

1. "Synthetic data". McGraw-Hill Dictionary of Scientific and Technical Terms. http://www.answers.com/topic/synthetic-data. Retrieved November 29, 2009.
2. Mullins, Craig S. (February 5, 2009). "What is Production Data?". NEON Enterprise Software, Inc. Archived 2009-07-21 at https://web.archive.org/web/20090721111006/http://www.neon.com/blog/blogs/cmullins/archive/2009/02/05/What-is-Production-Data_3F00_.aspx.
3. Machanavajjhala, Ashwin; Kifer, Daniel; Abowd, John; Gehrke, Johannes; Vilhuber, Lars (2008). "Privacy: Theory meets Practice on the Map". 2008 IEEE 24th International Conference on Data Engineering, pp. 277–286. doi:10.1109/ICDE.2008.4497436. ISBN 978-1-4244-1836-7.
4. Barse, E. L.; Kvarnström, H.; Jonsson, E. (2003). "Synthesizing test data for fraud detection systems". Proceedings of the 19th Annual Computer Security Applications Conference. IEEE. doi:10.1109/CSAC.2003.1254343.
5. Rubin, Donald B. (1993). "Discussion: Statistical Disclosure Limitation". Journal of Official Statistics 9: 461–468.
6. Abowd, John M. "Confidentiality Protection of Social Science Micro Data: Synthetic Data and Related Methods" [PowerPoint slides]. http://www.idre.ucla.edu/events/PPT/2006_01_30_abowd_UCLA_synthetic_data_presentation.ppt. Retrieved 17 February 2011.
7. Little, Rod (1993). "Statistical Analysis of Masked Data". Journal of Official Statistics 9: 407–426.
8. Deng, Robert H.; Bao, Feng; Zhou, Jianying (December 2002). Information and Communications Security: Proceedings of the 4th International Conference, ICICS 2002, Singapore. https://books.google.com/books?id=GL1sCQAAQBAJ.
9. Abowd, John M.; Lane, Julia (June 9–11, 2004). "New Approaches to Confidentiality Protection: Synthetic Data, Remote Access and Research Data Centers". Privacy in Statistical Databases: CASC Project Final Conference, Proceedings. Barcelona, Spain. doi:10.1007/978-3-540-25955-8_22.
10. Jackson, Charles; Murphy, Robert F.; Kovačević, Jelena (September 2009). "Intelligent Acquisition and Learning of Fluorescence Microscope Data Models" 18 (9). http://murphylab.web.cmu.edu/publications/161-jackson2009.pdf.
11. Wang, Aiqi; Qiu, Tianshuang; Shao, Longtan (July 2009). "A Simple Method of Radial Distortion Correction with Centre of Distortion Estimation". Journal of Mathematical Imaging and Vision 35 (3): 165–172. doi:10.1007/s10851-009-0162-1.
12. Jensen, David (2004). "6. Using Scripts". Proximity 4.3 Tutorial. http://kdl.cs.umass.edu/proximity/documentation/tutorial/ch06s09.html.
13. Patki, Neha; Wedge, Roy; Veeramachaneni, Kalyan (2016). "The Synthetic Data Vault". Data Science and Advanced Analytics (DSAA) 2016. IEEE. doi:10.1109/DSAA.2016.49.
14. Peng, Xingchao; Sun, Baochen; Ali, Karim; Saenko, Kate (2015). "Learning Deep Object Detectors from 3D Models". arXiv:1412.7122 [cs.CV].
15. Sanchez, Cassie. "At a Glance: Generative Models & Synthetic Data". https://mty.ai/blog/at-a-glance-generative-models-synthetic-data/. Retrieved 5 September 2017.
16. Shrivastava, Ashish; Pfister, Tomas; Tuzel, Oncel; Susskind, Josh; Wang, Wenda; Webb, Russ (2016). "Learning from Simulated and Unsupervised Images through Adversarial Training". arXiv:1612.07828 [cs.CV].
17. Knight, Will. "Self-Driving Cars Can Learn a Lot by Playing Grand Theft Auto". https://www.technologyreview.com/s/602317/self-driving-cars-can-learn-a-lot-by-playing-grand-theft-auto/. Retrieved 5 September 2017.
18. DeTone, Daniel; Malisiewicz, Tomasz; Rabinovich, Andrew (2017). "Toward Geometric Deep SLAM". arXiv:1707.07410 [cs.CV].
19. "Neuromation has signed the letter of intent with the OSA Hybrid Platform for introducing a visual recognition service into the largest retail chains of Eastern Europe". https://neuromation.io/en/neuromation-signed-letter-intent-osa-hybrid-platform-introducing-visual-recognition-service-largest-retail-chains-eastern-europe/.
20. Tobin, Josh; Fong, Rachel; Ray, Alex; Schneider, Jonas; Zaremba, Wojciech; Abbeel, Pieter (2017). "Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World". arXiv:1703.06907 [cs.RO].
Categories | Data | Computer data | Data management | Statistical data types |