Glossary of artificial intelligence

{{Use dmy dates|date=September 2017}}
Most of the terms listed in Wikipedia glossaries are already defined and explained within Wikipedia itself. However, glossaries like this one are useful for looking up, comparing and reviewing large numbers of terms together. You can help enhance this page by adding new terms or writing definitions for existing ones.

This glossary of artificial intelligence terms is about artificial intelligence, its sub-disciplines, and related fields.

{{compact ToC|side=yes|center=yes|nobreak=yes|seealso=yes|refs=yes|}}{{Artificial intelligence}}

A

  • Abductive logic programming – Abductive logic programming (ALP) is a high-level knowledge-representation framework that can be used to solve problems declaratively based on abductive reasoning. It extends normal logic programming by allowing some predicates to be incompletely defined, declared as abducible predicates.
  • Abductive reasoning – (also called abduction,[1] abductive inference,[1] or retroduction[2]) is a form of logical inference which starts with an observation or set of observations and then seeks to find the simplest and most likely explanation. This process, unlike deductive reasoning, yields a plausible conclusion but does not positively verify it.
  • Abstract data type – is a mathematical model for data types, where a data type is defined by its behavior (semantics) from the point of view of a user of the data, specifically in terms of possible values, possible operations on data of this type, and the behavior of these operations.
  • Abstraction – is the process of removing physical, spatial, or temporal details[3] or attributes in the study of objects or systems in order to more closely attend to other details of interest.[4]
  • Accelerating change – is a perceived increase in the rate of technological change throughout history, which may suggest faster and more profound change in the future and may or may not be accompanied by equally profound social and cultural change.
  • Action language – is a language for specifying state transition systems, and is commonly used to create formal models of the effects of actions on the world.[5] Action languages are commonly used in the artificial intelligence and robotics domains, where they describe how actions affect the states of systems over time, and may be used for automated planning.
  • Action model learning – is an area of machine learning concerned with creation and modification of software agent's knowledge about effects and preconditions of the actions that can be executed within its environment. This knowledge is usually represented in logic-based action description language and used as the input for automated planners.
  • Action selection – is a way of characterizing the most basic problem of intelligent systems: what to do next. In artificial intelligence and computational cognitive science, "the action selection problem" is typically associated with intelligent agents and animats—artificial systems that exhibit complex behaviour in an agent environment.
  • Activation function – In artificial neural networks, the activation function of a node defines the output of that node, or "neuron," given an input or set of inputs. This output is then used as input for the next node and so on until a desired solution to the original problem is found.[6] A minimal sketch of a single node's computation appears after this list.
  • Adaptive algorithm – an algorithm that changes its behavior at the time it is run, based on an a priori defined reward mechanism or criterion.
  • Adaptive neuro fuzzy inference system – or adaptive network-based fuzzy inference system (ANFIS) is a kind of artificial neural network that is based on the Takagi–Sugeno fuzzy inference system. The technique was developed in the early 1990s.[7][8] Since it integrates both neural networks and fuzzy logic principles, it has the potential to capture the benefits of both in a single framework. Its inference system corresponds to a set of fuzzy IF–THEN rules that have learning capability to approximate nonlinear functions.[9] Hence, ANFIS is considered to be a universal estimator.[10] To use ANFIS more efficiently and optimally, one can use the best parameters obtained by a genetic algorithm.[11][12]
  • Admissible heuristic – In computer science, specifically in algorithms related to pathfinding, a heuristic function is said to be admissible if it never overestimates the cost of reaching the goal, i.e. the cost it estimates to reach the goal is not higher than the lowest possible cost from the current point in the path.[13]
  • Affective computing – (sometimes called artificial emotional intelligence, or emotion AI)[14] is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer science, psychology, and cognitive science.[15]
  • Agent architecture – in computer science is a blueprint for software agents and intelligent control systems, depicting the arrangement of components. The architectures implemented by intelligent agents are referred to as cognitive architectures.[16]
  • AI accelerator – is a class of microprocessor[17] or computer system[18] designed as hardware acceleration for artificial intelligence applications, especially artificial neural networks, machine vision and machine learning.
  • AI-complete – In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI.[19] To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm.
  • Algorithm – is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing and automated reasoning tasks.
  • Algorithmic efficiency – is a property of an algorithm which relates to the number of computational resources used by the algorithm. An algorithm must be analyzed to determine its resource usage, and the efficiency of an algorithm can be measured based on usage of different resources. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process.
  • Algorithmic probability – In algorithmic information theory, algorithmic probability, also known as Solomonoff probability, is a mathematical method of assigning a prior probability to a given observation. It was invented by Ray Solomonoff in the 1960s.[20]
  • AlphaGo – is a computer program that plays the board game Go.[21] It was developed by Alphabet Inc.'s Google DeepMind in London. AlphaGo has several versions including AlphaGo Zero, AlphaGo Master, AlphaGo Lee, etc.[22] In October 2015, AlphaGo became the first computer Go program to beat a human professional Go player without handicaps on a full-sized 19×19 board.[23][24]
  • Ambient intelligence – (AmI) refers to electronic environments that are sensitive and responsive to the presence of people.
  • Analysis of algorithms – is the determination of the computational complexity of algorithms, that is the amount of time, storage and/or other resources necessary to execute them. Usually, this involves determining a function that relates the length of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity).
  • Analytics – the discovery, interpretation, and communication of meaningful patterns in data.
  • Answer set programming – (ASP) is a form of declarative programming oriented towards difficult (primarily NP-hard) search problems. It is based on the stable model (answer set) semantics of logic programming. In ASP, search problems are reduced to computing stable models, and answer set solvers—programs for generating stable models—are used to perform search.
  • Anytime algorithm – an algorithm that can return a valid solution to a problem even if it is interrupted before it ends.
  • Application programming interface – (API) is a set of subroutine definitions, communication protocols, and tools for building software. In general terms, it is a set of clearly defined methods of communication among various components. A good API makes it easier to develop a computer program by providing all the building blocks, which are then put together by the programmer. An API may be for a web-based system, operating system, database system, computer hardware, or software library.
  • Approximate string matching – (often colloquially referred to as fuzzy string searching) is the technique of finding strings that match a pattern approximately (rather than exactly). The problem of approximate string matching is typically divided into two sub-problems: finding approximate substring matches inside a given string and finding dictionary strings that match the pattern approximately.
  • Approximation error – The approximation error in some data is the discrepancy between an exact value and some approximation to it.
  • Argumentation framework – or argumentation system, is a way to deal with contentious information and draw conclusions from it. In an abstract argumentation framework,[25] entry-level information is a set of abstract arguments that, for instance, represent data or a proposition. Conflicts between arguments are represented by a binary relation on the set of arguments. In concrete terms, an argumentation framework is represented as a directed graph such that the nodes are the arguments and the arrows represent the attack relation. There exist some extensions of Dung's framework, like the logic-based argumentation frameworks[26] or the value-based argumentation frameworks.[27]
  • Artificial immune system – Artificial immune systems (AIS) are a class of computationally intelligent, rule-based machine learning systems inspired by the principles and processes of the vertebrate immune system. The algorithms are typically modeled after the immune system's characteristics of learning and memory for use in problem-solving.
  • Artificial intelligence – (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science AI research is defined as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[28] Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".{{sfn|Russell|Norvig|2009|p=2}}
  • Artificial Intelligence Markup Language – is an XML dialect for creating natural language software agents.
  • Artificial neural network – (ANN) or connectionist systems are computing systems vaguely inspired by the biological neural networks that constitute animal brains.[29] The neural network itself is not an algorithm, but rather a framework for many different machine learning algorithms to work together and process complex data inputs.[30]
  • Association for the Advancement of Artificial Intelligence – (AAAI) is an international, nonprofit, scientific society devoted to promoting research in, and responsible use of, artificial intelligence. AAAI also aims to increase public understanding of artificial intelligence (AI), improve the teaching and training of AI practitioners, and provide guidance for research planners and funders concerning the importance and potential of current AI developments and future directions.[31]
  • Asymptotic computational complexity – In computational complexity theory, asymptotic computational complexity is the usage of asymptotic analysis for the estimation of computational complexity of algorithms and computational problems, commonly associated with the usage of the big O notation.
  • Attributional calculus – is a logic and representation system defined by Ryszard S. Michalski. It combines elements of predicate logic, propositional calculus, and multi-valued logic. Attributional calculus provides a formal language for natural induction, an inductive learning process whose results are in forms natural to people.
  • Augmented reality – (AR) is an interactive experience of a real-world environment where the objects that reside in the real-world are "augmented" by computer-generated perceptual information, sometimes across multiple sensory modalities, including visual, auditory, haptic, somatosensory, and olfactory.[32][33]
  • Automata theory – is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science and discrete mathematics (a subject of study in both mathematics and computer science).
  • Automated planning and scheduling – sometimes denoted as simply AI Planning,[34] is a branch of artificial intelligence that concerns the realization of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems, the solutions are complex and must be discovered and optimized in multidimensional space. Planning is also related to decision theory.
  • Automated reasoning – is an area of computer science and mathematical logic dedicated to understanding different aspects of reasoning. The study of automated reasoning helps produce computer programs that allow computers to reason completely, or nearly completely, automatically. Although automated reasoning is considered a sub-field of artificial intelligence, it also has connections with theoretical computer science, and even philosophy.
  • Autonomic computing – (also known as AC) refers to the self-managing characteristics of distributed computing resources, adapting to unpredictable changes while hiding intrinsic complexity to operators and users. Initiated by IBM in 2001, this initiative ultimately aimed to develop computer systems capable of self-management, to overcome the rapidly growing complexity of computing systems management, and to reduce the barrier that complexity poses to further growth.[35]
  • Autonomous car – A self-driving car, also known as a robot car, autonomous car, auto, or driverless car,[36][37] is a vehicle that is capable of sensing its environment and moving with little or no human input.[38]
  • Autonomous robot – is a robot that performs behaviors or tasks with a high degree of autonomy. Autonomous robotics is usually considered to be a subfield of artificial intelligence, robotics, and information engineering.[39]
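The following is a minimal, illustrative sketch of the activation function entry above: a single artificial neuron that applies an activation function to the weighted sum of its inputs. The function names, the example weights, and the choice of sigmoid and ReLU activations are assumptions chosen only for illustration, not drawn from any particular library.

<syntaxhighlight lang="python">
import math

def sigmoid(x):
    # Logistic sigmoid activation: squashes any real input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Rectified linear unit: passes positive inputs through, clamps negatives to 0.
    return max(0.0, x)

def neuron_output(inputs, weights, bias, activation=sigmoid):
    # A node's output: the activation function applied to the weighted sum plus bias.
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return activation(weighted_sum)

if __name__ == "__main__":
    inputs = [0.5, -1.2, 3.0]      # arbitrary example inputs
    weights = [0.4, 0.1, -0.6]     # arbitrary example weights
    print(neuron_output(inputs, weights, bias=0.2, activation=sigmoid))
    print(neuron_output(inputs, weights, bias=0.2, activation=relu))
</syntaxhighlight>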
{{Compact ToC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}

B

  • Backpropagation – is a method used in artificial neural networks to calculate a gradient that is needed in the calculation of the weights to be used in the network.[40] Backpropagation is shorthand for "the backward propagation of errors," since an error is computed at the output and distributed backwards throughout the network’s layers.[41] It is commonly used to train deep neural networks,[42] a term referring to neural networks with more than one hidden layer.[43]
  • Backpropagation through time – (BPTT) is a gradient-based technique for training certain types of recurrent neural networks. It can be used to train Elman networks. The algorithm was independently derived by numerous researchers.[44][45][46]
  • Backward chaining – (or backward reasoning) is an inference method described colloquially as working backward from the goal. It is used in automated theorem provers, inference engines, proof assistants, and other artificial intelligence applications.[47]
  • Bag-of-words model – is a simplifying representation used in natural language processing and information retrieval (IR). In this model, a text (such as a sentence or a document) is represented as the bag (multiset) of its words, disregarding grammar and even word order but keeping multiplicity. The bag-of-words model has also been used for computer vision.[48] The bag-of-words model is commonly used in methods of document classification where the (frequency of) occurrence of each word is used as a feature for training a classifier.[49] A minimal sketch appears after this list.
  • Bag-of-words model in computer vision – In computer vision, the bag-of-words model (BoW model) can be applied to image classification, by treating image features as words. In document classification, a bag of words is a sparse vector of occurrence counts of words; that is, a sparse histogram over the vocabulary. In computer vision, a bag of visual words is a vector of occurrence counts of a vocabulary of local image features.
  • Batch normalization – is a technique for improving the performance and stability of artificial neural networks. It is a technique to provide any layer in a neural network with inputs that are zero mean/unit variance.[50] Batch normalization was introduced in a 2015 paper.[51][52] It is used to normalize the input layer by adjusting and scaling the activations.[53]
  • Bayesian programming – is a formalism and a methodology for specifying probabilistic models and solving problems when less than the necessary information is available. Bayes' theorem, which states that the probability of a future event can be inferred from past conditions related to that event, is the central concept behind this programming approach.[54]
  • Bees algorithm – is a population-based search algorithm which was developed by Pham, Ghanbarzadeh et al. in 2005.[55] It mimics the food foraging behaviour of honey bee colonies. In its basic version the algorithm performs a kind of neighbourhood search combined with global search, and can be used for both combinatorial optimization and continuous optimization. The only condition for the application of the bees algorithm is that some measure of distance between the solutions is defined. The effectiveness and specific abilities of the bees algorithm have been proven in a number of studies.[56][57][58][59]
  • Behavior informatics – (BI) is the informatics of behaviors so as to obtain behavior intelligence and behavior insights.[60]
  • Behavior tree – A behavior tree (BT) is a mathematical model of plan execution used in computer science, robotics, control systems and video games. BTs describe switching between a finite set of tasks in a modular fashion. Their strength comes from their ability to create very complex tasks composed of simple tasks, without worrying how the simple tasks are implemented. BTs present some similarities to hierarchical state machines, with the key difference that the main building block of a behavior is a task rather than a state. Their ease of human understanding makes BTs less error-prone and very popular in the game developer community. BTs have been shown to generalize several other control architectures.[61][62]
  • Belief-desire-intention software model
  • Bias–variance tradeoff
  • Big data – is a term used to refer to data sets that are too large or complex for traditional data-processing application software to adequately deal with. Data with many cases (rows) offer greater statistical power, while data with higher complexity (more attributes or columns) may lead to a higher false discovery rate.[63]
  • Big O notation – is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann,[64] Edmund Landau,[65] and others, collectively called Bachmann–Landau notation or asymptotic notation.
  • Binary tree – is a tree data structure in which each node has at most two children, which are referred to as the {{visible anchor|left child}} and the {{visible anchor|right child}}. A recursive definition using just set theory notions is that a (non-empty) binary tree is a tuple (L, S, R), where L and R are binary trees or the empty set and S is a singleton set.[66] Some authors allow the binary tree to be the empty set as well.[67]
  • Bio-inspired computing
  • Blackboard system – is an artificial intelligence approach based on the blackboard architectural model,[68][69][70][71] where a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem.
  • Boltzmann machine
  • Boolean satisfiability problem
  • Brain technology – or self-learning know-how systems, defines a technology that employs the latest findings in neuroscience. The term was first introduced by the Artificial Intelligence Laboratory in Zurich, Switzerland, in the context of the Roboy project.[72] Brain technology can be employed in robots,[73] know-how management systems[74] and any other application with self-learning capabilities. In particular, brain technology applications allow the visualization of the underlying learning architecture, often referred to as “know-how maps”.
  • Branching factor – In computing, tree data structures, and game theory, the branching factor is the number of children at each node, the outdegree. If this value is not uniform, an average branching factor can be calculated.
  • Brute-force search – or exhaustive search, also known as generate and test, is a very general problem-solving technique and algorithmic paradigm that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement.
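A minimal, illustrative sketch of the bag-of-words model entry above, using only the Python standard library: word order and grammar are discarded and only the multiplicity of each word is kept, which can then be turned into a feature vector over a fixed vocabulary. The function name and the example sentence are assumptions for illustration.

<syntaxhighlight lang="python">
from collections import Counter

def bag_of_words(text):
    # Tokenize naively on whitespace and count occurrences:
    # grammar and word order are discarded, multiplicity is kept.
    return Counter(text.lower().split())

if __name__ == "__main__":
    bow = bag_of_words("the cat sat on the mat")
    print(bow)                                 # Counter({'the': 2, 'cat': 1, ...})
    # With a fixed vocabulary, the bag becomes a feature vector for a classifier.
    vocabulary = sorted(bow)
    print(vocabulary, [bow[word] for word in vocabulary])
</syntaxhighlight>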
{{Compact ToC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}

C

  • Capsule neural network – A capsule neural network (CapsNet) is a type of artificial neural network (ANN) that can be used to better model hierarchical relationships. The approach is an attempt to more closely mimic biological neural organization.[75]
  • Case-based reasoning – (CBR), broadly construed, is the process of solving new problems based on the solutions of similar past problems.
  • Chatbot – (also known as a smartbot, talkbot, chatterbot, bot, IM bot, interactive agent, conversational interface or artificial conversational entity) is a computer program or an artificial intelligence which conducts a conversation via auditory or textual methods.[76]
  • Cloud robotics – is a field of robotics that attempts to invoke cloud technologies such as cloud computing, cloud storage, and other Internet technologies centred on the benefits of converged infrastructure and shared services for robotics. When connected to the cloud, robots can benefit from the powerful computation, storage, and communication resources of modern data centres in the cloud, which can process and share information from various robots or agents (other machines, smart objects, humans, etc.). Humans can also delegate tasks to robots remotely through networks. Cloud computing technologies enable robot systems to be endowed with powerful capability whilst reducing costs, making it possible to build lightweight, low-cost, smarter robots with an intelligent "brain" in the cloud. The "brain" consists of a data centre, knowledge base, task planners, deep learning, information processing, environment models, communication support, etc.[77][78][79][80]
  • Cluster analysis – or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters). It is a main task of exploratory data mining, and a common technique for statistical data analysis, used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, bioinformatics, data compression, and computer graphics.
  • Cobweb – is an incremental system for hierarchical conceptual clustering. COBWEB was invented by Professor Douglas H. Fisher, currently at Vanderbilt University.[81][82] COBWEB incrementally organizes observations into a classification tree. Each node in a classification tree represents a class (concept) and is labeled by a probabilistic concept that summarizes the attribute-value distributions of objects classified under the node. This classification tree can be used to predict missing attributes or the class of a new object.[83]
  • Cognitive architecture – The Institute of Creative Technologies defines cognitive architecture as: "hypothesis about the fixed structures that provide a mind, whether in natural or artificial systems, and how they work together – in conjunction with knowledge and skills embodied within the architecture – to yield intelligent behavior in a diversity of complex environments."[84]
  • Cognitive computing – In general, the term cognitive computing has been used to refer to new hardware and/or software that mimics the functioning of the human brain[85][86][87][88][89][90] and helps to improve human decision-making.[91][92] In this sense, CC is a new type of computing with the goal of more accurate models of how the human brain/mind senses, reasons, and responds to stimulus.
  • Cognitive science – is the interdisciplinary, scientific study of the mind and its processes.[93]
  • Combinatorial optimization – In Operations Research, applied mathematics and theoretical computer science, combinatorial optimization is a topic that consists of finding an optimal object from a finite set of objects.[94]
  • Committee machine – is a type of artificial neural network using a divide and conquer strategy in which the responses of multiple neural networks (experts) are combined into a single response.[95] The combined response of the committee machine is supposed to be superior to those of its constituent experts. Compare with ensembles of classifiers.
  • Commonsense knowledge – In artificial intelligence research, commonsense knowledge consists of facts about the everyday world, such as "Lemons are sour", that all humans are expected to know. The first AI program to address common sense knowledge was Advice Taker in 1959 by John McCarthy.[96]
  • Commonsense reasoning – is one of the branches of artificial intelligence that is concerned with simulating the human ability to make presumptions about the type and essence of ordinary situations they encounter every day.[97]
  • Computational chemistry – is a branch of chemistry that uses computer simulation to assist in solving chemical problems.
  • Computational complexity theory – focuses on classifying computational problems according to their inherent difficulty, and relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by mechanical application of mathematical steps, such as an algorithm.
  • Computational creativity – (also known as artificial creativity, mechanical creativity, creative computing or creative computation) is a multidisciplinary endeavour that includes the fields of artificial intelligence, cognitive psychology, philosophy, and the arts.
  • Computational cybernetics – is the integration of cybernetics and computational intelligence techniques.
  • Computational humor – is a branch of computational linguistics and artificial intelligence which uses computers in humor research.[98]
  • Computational intelligence – (CI), usually refers to the ability of a computer to learn a specific task from data or experimental observation.
  • Computational learning theory – In computer science, computational learning theory (or just learning theory) is a subfield of artificial intelligence devoted to studying the design and analysis of machine learning algorithms.[99]
  • Computational linguistics – is an interdisciplinary field concerned with the statistical or rule-based modeling of natural language from a computational perspective, as well as the study of appropriate computational approaches to linguistic questions.
  • Computational mathematics – the mathematical research in areas of science where computing plays an essential role.
  • Computational neuroscience – (also known as theoretical neuroscience or mathematical neuroscience) is a branch of neuroscience which employs mathematical models, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology and cognitive abilities of the nervous system.[100][101][102][103]
  • Computational number theory – also known as algorithmic number theory, it is the study of algorithms for performing number theoretic computations.
  • Computational problem – In theoretical computer science, a computational problem is a mathematical object representing a collection of questions that computers might be able to solve.
  • Computational statistics – or statistical computing, is the interface between statistics and computer science.
  • Computer-automated design
  • Computer science – is the theory, experimentation, and engineering that form the basis for the design and use of computers. It involves the study of algorithms that process, store, and communicate digital information. A computer scientist specializes in the theory of computation and the design of computational systems.[104]
  • Computer vision – is an interdisciplinary scientific field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do.[105][106][107]
  • Concept drift – In predictive analytics and machine learning, concept drift means that the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems because the predictions become less accurate as time passes.
  • Connectionism – is an approach in the field of cognitive science that hopes to explain mental phenomena using artificial neural networks (ANNs).[108]
  • Consistent heuristic – In the study of path-finding problems in artificial intelligence, a heuristic function is said to be consistent, or monotone, if its estimate is always less than or equal to the estimated distance from any neighboring vertex to the goal, plus the cost of reaching that neighbor.
  • Constrained conditional model
  • Constraint logic programming – is a form of constraint programming, in which logic programming is extended to include concepts from constraint satisfaction. A constraint logic program is a logic program that contains constraints in the body of clauses. An example of a clause including a constraint is {{code|2=prolog|A(X,Y) :- X+Y>0, B(X), C(Y)}}. In this clause, {{code|2=prolog|X+Y>0}} is a constraint; A(X,Y), B(X), and C(Y) are literals as in regular logic programming. This clause states one condition under which the statement A(X,Y) holds: X+Y is greater than zero and both B(X) and C(Y) are true.
  • Constraint programming – is a programming paradigm wherein relations between variables are stated in the form of constraints. Constraints differ from the common primitives of imperative programming languages in that they do not specify a step or sequence of steps to execute, but rather the properties of a solution to be found.
  • Constructed language – (sometimes called a conlang) is a language whose phonology, grammar, and vocabulary are, instead of having developed naturally, consciously devised. Constructed languages may also be referred to as artificial, planned or invented languages.[109]
  • Control theory – in control systems engineering is a subfield of mathematics that deals with the control of continuously operating dynamical systems in engineered processes and machines. The objective is to develop a control model for controlling such systems using a control action in an optimum manner without delay or overshoot and ensuring control stability.
  • Convolutional neural network – In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks, most commonly applied to analyzing visual imagery. CNNs use a variation of multilayer perceptrons designed to require minimal preprocessing.[110] They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics.[111][112]
  • Crossover – In genetic algorithms and evolutionary computation, crossover, also called recombination, is a genetic operator used to combine the genetic information of two parents to generate new offspring. It is one way to stochastically generate new solutions from an existing population, and analogous to the crossover that happens during sexual reproduction in biology. Solutions can also be generated by cloning an existing solution, which is analogous to asexual reproduction. Newly generated solutions are typically mutated before being added to the population.
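A minimal, illustrative sketch of the crossover entry above: single-point crossover of two equal-length bit-string parents. The parent values and function names are assumptions for illustration only.

<syntaxhighlight lang="python">
import random

def single_point_crossover(parent_a, parent_b):
    # Swap the tails of two equal-length genomes after a random cut point,
    # producing two offspring that mix genetic information from both parents.
    assert len(parent_a) == len(parent_b)
    point = random.randint(1, len(parent_a) - 1)   # cut strictly inside the genome
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

if __name__ == "__main__":
    random.seed(0)
    print(single_point_crossover([1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0]))
</syntaxhighlight>
In a full genetic algorithm, the offspring would then typically be mutated and reinserted into the population, as described in the entry.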
{{Compact ToC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}

D

  • Darkforest – is a computer Go program developed by Facebook, based on deep learning techniques using a convolutional neural network. Its updated version Darkfores2 combines the techniques of its predecessor with Monte Carlo tree search.[113][114] The MCTS effectively takes tree search methods commonly seen in computer chess programs and randomizes them.[115] With the update, the system is known as Darkfmcts3.[116]
  • Dartmouth workshop – The Dartmouth Summer Research Project on Artificial Intelligence was the name of a 1956 summer workshop now considered by many[117][118] (though not all[119]) to be the seminal event for artificial intelligence as a field.
  • Data fusion – is the process of integrating multiple data sources to produce more consistent, accurate, and useful information than that provided by any individual data source.[120]
  • Data integration – involves combining data residing in different sources and providing users with a unified view of them.[121] This process becomes significant in a variety of situations, which include both commercial (such as when two similar companies need to merge their databases) and scientific (combining research results from different bioinformatics repositories, for example) domains. Data integration appears with increasing frequency as the volume (that is, big data[122]) and the need to share existing data explodes.[123] It has become the focus of extensive theoretical work, and numerous open problems remain unsolved.
  • Data mining – is the process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems.
  • Data science – is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from data in various forms, both structured and unstructured,[124][125] similar to data mining. Data science is a "concept to unify statistics, data analysis, machine learning and their related methods" in order to "understand and analyze actual phenomena" with data.[126] It employs techniques and theories drawn from many fields within the context of mathematics, statistics, information science, and computer science.
  • Data set – (or dataset) is a collection of data. Most commonly a data set corresponds to the contents of a single database table, or a single statistical data matrix, where every column of the table represents a particular variable, and each row corresponds to a given member of the data set in question. The data set lists values for each of the variables, such as height and weight of an object, for each member of the data set. Each value is known as a datum. The data set may comprise data for one or more members, corresponding to the number of rows.
  • Data warehouse – (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis.[127] DWs are central repositories of integrated data from one or more disparate sources. They store current and historical data in one single place[128]
  • Datalog – is a declarative logic programming language that syntactically is a subset of Prolog. It is often used as a query language for deductive databases. In recent years, Datalog has found new application in data integration, information extraction, networking, program analysis, security, and cloud computing.[129]
  • Decision boundary – In the case of backpropagation-based artificial neural networks or perceptrons, the type of decision boundary that the network can learn is determined by the number of hidden layers the network has. If it has no hidden layers, then it can only learn linear problems. If it has one hidden layer, then it can learn any continuous function on compact subsets of R<sup>n</sup>, as shown by the universal approximation theorem, and thus it can have an arbitrary decision boundary.
  • Decision support system – (DSS), is an information system that supports business or organizational decision-making activities. DSSs serve the management, operations and planning levels of an organization (usually mid and higher management) and help people make decisions about problems that may be rapidly changing and not easily specified in advance—i.e. unstructured and semi-structured decision problems. Decision support systems can be either fully computerized or human-powered, or a combination of both.
  • Decision theory – (or the theory of choice) is the study of the reasoning underlying an agent's choices.[130] Decision theory can be broken into two branches: normative decision theory, which gives advice on how to make the best decisions given a set of uncertain beliefs and a set of values, and descriptive decision theory which analyzes how existing, possibly irrational agents actually make decisions.
  • Decision tree learning – uses a decision tree (as a predictive model) to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining and machine learning. A minimal induction sketch appears after this list.
  • Declarative programming – is a programming paradigm—a style of building the structure and elements of computer programs—that expresses the logic of a computation without describing its control flow.[131]
  • Deductive classifier – is a type of artificial intelligence inference engine. It takes as input a set of declarations in a frame language about a domain such as medical research or molecular biology: for example, the names of classes, sub-classes, properties, and restrictions on allowable values. Compared to rule-based inference engines, which fire IF-THEN rules when their conditions are met, these classifiers seek to mimic human deductive logic.[132]
  • Deep Blue – was a chess-playing computer developed by IBM. It is known for being the first computer chess-playing system to win both a chess game and a chess match against a reigning world champion under regular time controls.
  • Deep learning – (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised or unsupervised.[133][134][135]
  • DeepMind – DeepMind Technologies is a British artificial intelligence company founded in September 2010, currently owned by Alphabet Inc. The company is based in London, with research centres in Canada,[136] France,[137] and the United States. Acquired by Google in 2014, the company has created a neural network that learns how to play video games in a fashion similar to that of humans,[138] as well as a Neural Turing machine,[139] a neural network that may be able to access an external memory like a conventional Turing machine, resulting in a computer that mimics the short-term memory of the human brain.[140][141] The company made headlines in 2016 after its AlphaGo program beat human professional Go player Lee Sedol, the world champion, in a five-game match, which was the subject of a documentary film.[142] A more general program, AlphaZero, beat the most powerful programs playing Go, chess and shogi (Japanese chess) after a few days of play against itself using reinforcement learning.[143]
  • Default logic – is a non-monotonic logic proposed by Raymond Reiter to formalize reasoning with default assumptions.
  • Description logic – Description logics (DL) are a family of formal knowledge representation languages. Many DLs are more expressive than propositional logic but less expressive than first-order logic. In contrast to the latter, the core reasoning problems for DLs are (usually) decidable, and efficient decision procedures have been designed and implemented for these problems. There are general, spatial, temporal, spatiotemporal, and fuzzy description logics, and each description logic features a different balance between DL expressivity and reasoning complexity by supporting different sets of mathematical constructors.[144]
  • Developmental robotics – (DevRob), sometimes called epigenetic robotics, is a scientific field which aims at studying the developmental mechanisms, architectures and constraints that allow lifelong and open-ended learning of new skills and new knowledge in embodied machines.
  • Diagnosis – is concerned with the development of algorithms and techniques that are able to determine whether the behaviour of a system is correct. If the system is not functioning correctly, the algorithm should be able to determine, as accurately as possible, which part of the system is failing, and which kind of fault it is facing. The computation is based on observations, which provide information on the current behaviour.
  • Dialogue system – or conversational agent (CA), is a computer system intended to converse with a human with a coherent structure. Dialogue systems have employed text, speech, graphics, haptics, gestures, and other modes for communication on both the input and output channel.
  • Dimensionality reduction – or dimension reduction, is the process of reducing the number of random variables under consideration[145] by obtaining a set of principal variables. It can be divided into feature selection and feature extraction.[146]
  • Discrete system – is a system with a countable number of states. Discrete systems may be contrasted with continuous systems, which may also be called analog systems. A finite discrete system is often modeled with a directed graph and is analyzed for correctness and complexity according to computational theory. Because discrete systems have a countable number of states, they may be described in precise mathematical models. A computer is a finite state machine that may be viewed as a discrete system. Because computers are often used to model not only other discrete systems but continuous systems as well, methods have been developed to represent real-world continuous systems as discrete systems. One such method involves sampling a continuous signal at discrete time intervals.
  • Distributed artificial intelligence – (DAI), also called Decentralized Artificial Intelligence,[147] is a subfield of artificial intelligence research dedicated to the development of distributed solutions for problems. DAI is closely related to and a predecessor of the field of Multi-Agent Systems.
  • Dynamic epistemic logic – (DEL), is a logical framework dealing with knowledge and information change. Typically, DEL focuses on situations involving multiple agents and studies how their knowledge changes when events occur.
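As referenced in the decision tree learning entry above, the following is a minimal, illustrative induction sketch: it greedily grows a tree over categorical attributes and then walks the branches to predict a label for a new observation. The splitting criterion (fewest misclassifications), the toy weather data, and all names are simplifying assumptions; practical learners such as ID3 or CART use more refined criteria.

<syntaxhighlight lang="python">
from collections import Counter

def majority(labels):
    # Most common class label in a list of labels.
    return Counter(labels).most_common(1)[0][0]

def learn_tree(rows, labels, attributes):
    # Greedy, minimal decision-tree induction: stop when the labels are pure
    # or no attributes remain; otherwise split on the attribute whose
    # single-attribute prediction makes the fewest errors.
    if len(set(labels)) == 1:
        return labels[0]                     # pure leaf
    if not attributes:
        return majority(labels)              # mixed leaf -> majority class

    def errors(attr):
        mistakes = 0
        for value in {row[attr] for row in rows}:
            subset = [lab for row, lab in zip(rows, labels) if row[attr] == value]
            mistakes += len(subset) - subset.count(majority(subset))
        return mistakes

    best = min(attributes, key=errors)
    tree = {"attribute": best, "branches": {}}
    for value in {row[best] for row in rows}:
        sub_rows = [row for row in rows if row[best] == value]
        sub_labels = [lab for row, lab in zip(rows, labels) if row[best] == value]
        remaining = [a for a in attributes if a != best]
        tree["branches"][value] = learn_tree(sub_rows, sub_labels, remaining)
    return tree

def predict(tree, row):
    # Walk the branches by the row's attribute values until a leaf label is reached.
    while isinstance(tree, dict):
        tree = tree["branches"][row[tree["attribute"]]]
    return tree

if __name__ == "__main__":
    # Toy data: decide whether to play outside.
    rows = [{"sky": "sunny", "wind": "weak"}, {"sky": "sunny", "wind": "strong"},
            {"sky": "rainy", "wind": "weak"}, {"sky": "rainy", "wind": "strong"}]
    labels = ["yes", "yes", "yes", "no"]
    tree = learn_tree(rows, labels, ["sky", "wind"])
    print(tree)
    print(predict(tree, {"sky": "rainy", "wind": "strong"}))   # -> "no"
</syntaxhighlight>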
{{Compact ToC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}

E

  • Eager learning – is a learning method in which the system tries to construct a general, input-independent target function during training of the system, as opposed to lazy learning, where generalization beyond the training data is delayed until a query is made to the system.[148]
  • Ebert test – gauges whether a computer-based synthesized voice[149][150] can tell a joke with sufficient skill to cause people to laugh.[150] It was proposed by film critic Roger Ebert at the 2011 TED conference as a challenge to software developers to have a computerized voice master the inflections, delivery, timing, and intonations of a speaking human.[151] The test is similar to the Turing test proposed by Alan Turing in 1950 as a way to gauge a computer's ability to exhibit intelligent behavior by generating performance indistinguishable from a human being.[152]
  • Echo state network – The echo state network (ESN)[153][154] is a recurrent neural network with a sparsely connected hidden layer (with typically 1% connectivity). The connectivity and weights of hidden neurons are fixed and randomly assigned. The weights of output neurons can be learned so that the network can (re)produce specific temporal patterns. The main interest of this network is that although its behaviour is non-linear, the only weights that are modified during training are for the synapses that connect the hidden neurons to output neurons. Thus, the error function is quadratic with respect to the parameter vector and can be minimized easily by solving a linear system.
  • Embodied agent
  • Embodied cognitive science – is an interdisciplinary field of research, the aim of which is to explain the mechanisms underlying intelligent behavior. It comprises three main methodologies: 1) the modeling of psychological and biological systems in a holistic manner that considers the mind and body as a single entity, 2) the formation of a common set of general principles of intelligent behavior, and 3) the experimental use of robotic agents in controlled environments.
  • Error-driven learning – is a sub-area of machine learning concerned with how an agent ought to take actions in an environment so as to minimize some error feedback. It is a type of reinforcement learning.
  • Ensemble averaging – In machine learning, particularly in the creation of artificial neural networks, ensemble averaging is the process of creating multiple models and combining them to produce a desired output, as opposed to creating just one model.
  • Ethics of artificial intelligence
  • Evolutionary algorithm – (EA), is a subset of evolutionary computation,[155] a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions (see also loss function). Evolution of the population then takes place after the repeated application of the above operators. A minimal sketch appears after this list.
  • Evolutionary computation – is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character.
  • Evolving classification function – (ECF), evolving classifier functions or evolving classifiers are used for classifying and clustering in the field of machine learning and artificial intelligence, typically employed for data stream mining tasks in dynamic and changing environments.
  • Existential risk from artificial general intelligence – is the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe.[156][157][158]
  • Expert system – is a computer system that emulates the decision-making ability of a human expert.[159] Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code.[160]
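As referenced in the evolutionary algorithm entry above, the following is a minimal, illustrative sketch of one such algorithm over bit-string individuals, combining selection, recombination (crossover) and mutation. The tournament selection scheme, the "OneMax" fitness function (count of 1 bits) and all parameter values are assumptions chosen only to keep the example small.

<syntaxhighlight lang="python">
import random

def evolve(fitness, genome_length=20, population_size=30, generations=60,
           mutation_rate=0.05):
    # Minimal evolutionary algorithm over bit-string individuals:
    # tournament selection, single-point crossover, bit-flip mutation.
    population = [[random.randint(0, 1) for _ in range(genome_length)]
                  for _ in range(population_size)]

    def tournament():
        # Selection: keep the fitter of two randomly chosen individuals.
        a, b = random.sample(population, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        next_population = []
        while len(next_population) < population_size:
            parent_1, parent_2 = tournament(), tournament()
            point = random.randint(1, genome_length - 1)        # recombination
            child = parent_1[:point] + parent_2[point:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]                           # mutation
            next_population.append(child)
        population = next_population
    return max(population, key=fitness)

if __name__ == "__main__":
    random.seed(1)
    # Toy "OneMax" fitness: the number of 1 bits; the optimum is all ones.
    best = evolve(fitness=sum)
    print(best, sum(best))
</syntaxhighlight>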

F

  • Fast-and-frugal trees – is a type of classification tree. Fast-and-frugal trees can be used as decision-making tools which operate as lexicographic classifiers, and, if required, associate an action (decision) to each class or category.[161]
  • Feature extraction – In machine learning, pattern recognition and in image processing, feature extraction starts from an initial set of measured data and builds derived values (features) intended to be informative and non-redundant, facilitating the subsequent learning and generalization steps, and in some cases leading to better human interpretations. Feature extraction is a dimensionality reduction process, where an initial set of raw variables is reduced to more manageable groups (features) for processing, while still accurately and completely describing the original data set.[162]
  • Feature learning – In machine learning, feature learning or representation learning[163] is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task.
  • Feature selection – In machine learning and statistics, feature selection, also known as variable selection, attribute selection or variable subset selection, is the process of selecting a subset of relevant features (variables, predictors) for use in model construction.
  • First-order logic – also known as first-order predicate calculus and predicate logic, is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier and x is a variable.[164] This distinguishes it from propositional logic, which does not use quantifiers or relations.[165]
  • Fluent – is a condition that can change over time. In logical approaches to reasoning about actions, fluents can be represented in first-order logic by predicates having an argument that depends on time.
  • Formal language
  • Forward chaining – (or forward reasoning) is one of the two main methods of reasoning when using an inference engine and can be described logically as repeated application of modus ponens. Forward chaining is a popular implementation strategy for expert systems, business and production rule systems. The opposite of forward chaining is backward chaining. Forward chaining starts with the available data and uses inference rules to extract more data (from an end user, for example) until a goal is reached. An inference engine using forward chaining searches the inference rules until it finds one where the antecedent (If clause) is known to be true. When such a rule is found, the engine can conclude, or infer, the consequent (Then clause), resulting in the addition of new information to its data.[166]
  • Frame – is an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations." Frames are the primary data structure used in artificial intelligence frame language.
  • Frame language – is a technology used for knowledge representation in artificial intelligence. Frames are stored as ontologies of sets and subsets of the frame concepts. They are similar to class hierarchies in object-oriented languages although their fundamental design goals are different. Frames are focused on explicit and intuitive representation of knowledge whereas objects focus on encapsulation and information hiding. Frames originated in AI research and objects primarily in software engineering. However, in practice the techniques and capabilities of frame and object-oriented languages overlap significantly.
  • Frame problem – is the problem of finding adequate collections of axioms for a viable description of a robot environment.[167]
  • Friendly artificial intelligence – (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained.
  • Futures studies – is the study of postulating possible, probable, and preferable futures and the worldviews and myths that underlie them.[168]
  • Fuzzy control system – is a control system based on fuzzy logic—a mathematical system that analyzes analog input values in terms of logical variables that take on continuous values between 0 and 1, in contrast to classical or digital logic, which operates on discrete values of either 1 or 0 (true or false, respectively).[169][170]
  • Fuzzy logic – is a form of many-valued logic in which the truth values of variables may be any real number between 0 (completely false) and 1 (completely true) inclusive, representing any degree of truthfulness. It is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false. In contrast, in Boolean logic the truth values of variables may only be the integer values 0 or 1. A minimal sketch appears after this list.
  • Fuzzy rule – Fuzzy rules are used within fuzzy logic systems to infer an output based on input variables.
  • Fuzzy set
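As referenced in the fuzzy logic entry above, the following is a minimal, illustrative sketch using the common min/max/complement (Zadeh) fuzzy operators; the propositions and their degrees of truth are made-up example values.

<syntaxhighlight lang="python">
def fuzzy_not(a):
    # Standard fuzzy negation.
    return 1.0 - a

def fuzzy_and(a, b):
    # A common fuzzy conjunction (the minimum t-norm).
    return min(a, b)

def fuzzy_or(a, b):
    # A common fuzzy disjunction (the maximum t-conorm).
    return max(a, b)

if __name__ == "__main__":
    # Degrees of truth rather than strict True/False:
    warm = 0.7       # "the room is warm" is 0.7 true
    humid = 0.4      # "the room is humid" is 0.4 true
    print(fuzzy_and(warm, humid))   # 0.4  (warm AND humid)
    print(fuzzy_or(warm, humid))    # 0.7  (warm OR humid)
    print(fuzzy_not(warm))          # 0.3  (NOT warm)
</syntaxhighlight>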
{{Compact ToC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}

G

  • Game theory – is the study of mathematical models of strategic interaction between rational decision-makers.[171]
  • Generative adversarial network – (GAN), is a class of machine learning systems. Two neural networks contest with each other in a zero-sum game framework.
  • Genetic algorithm – (GA), is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on bio-inspired operators such as mutation, crossover and selection.{{sfn|Mitchell|1996|p=2}}
  • Genetic operator – is an operator used in genetic algorithms to guide the algorithm towards a solution to a given problem. There are three main types of operators (mutation, crossover and selection), which must work in conjunction with one another in order for the algorithm to be successful.
  • Glowworm swarm optimization – is a swarm intelligence optimization algorithm developed based on the behaviour of glowworms (also known as fireflies or lightning bugs).
  • Graph (abstract data type) – In computer science, a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from mathematics; specifically, the field of graph theory.
  • Graph (discrete mathematics) – In mathematics, and more specifically in graph theory, a graph is a structure amounting to a set of objects in which some pairs of the objects are in some sense "related". The objects correspond to mathematical abstractions called vertices (also called nodes or points) and each of the related pairs of vertices is called an edge (also called an arc or line).[172]
  • Graph database – (GDB[173]), is a database that uses graph structures for semantic queries with nodes, edges and properties to represent and store data. A key concept of the system is the graph (or edge or relationship), which directly relates data items in the store: a collection of nodes of data and edges representing the relationships between the nodes. The relationships allow data in the store to be linked together directly, and in many cases retrieved with one operation. Graph databases hold the relationships between data as a priority. Querying relationships within a graph database is fast because they are perpetually stored within the database itself. Relationships can be intuitively visualized using graph databases, making them useful for heavily inter-connected data.[174]
  • Graph theory – is the study of graphs, which are mathematical structures used to model pairwise relations between objects.
  • Graph traversal – (also known as graph search) refers to the process of visiting (checking and/or updating) each vertex in a graph. Such traversals are classified by the order in which the vertices are visited. Tree traversal is a special case of graph traversal.
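
As a rough illustration of the genetic algorithm and genetic operator entries above, the sketch below evolves bitstrings toward the toy "OneMax" objective (maximise the number of 1-bits) using tournament selection, single-point crossover and bit-flip mutation. The population size, rates and fitness function are arbitrary assumptions for demonstration, not part of any canonical algorithm.

<syntaxhighlight lang="python">
import random

# Illustrative genetic algorithm on the toy "OneMax" problem.
LENGTH, POP_SIZE, GENERATIONS = 30, 40, 60
CROSSOVER_RATE, MUTATION_RATE = 0.9, 1.0 / LENGTH

def fitness(bits):
    return sum(bits)

def tournament(population, k=3):
    """Selection operator: pick the fittest of k randomly chosen individuals."""
    return max(random.sample(population, k), key=fitness)

def crossover(a, b):
    """Single-point crossover operator."""
    if random.random() > CROSSOVER_RATE:
        return a[:], b[:]
    point = random.randint(1, LENGTH - 1)
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(bits):
    """Bit-flip mutation operator."""
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in bits]

def run():
    population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        next_gen = []
        while len(next_gen) < POP_SIZE:
            child1, child2 = crossover(tournament(population), tournament(population))
            next_gen.extend([mutate(child1), mutate(child2)])
        population = next_gen[:POP_SIZE]
    best = max(population, key=fitness)
    print("best fitness:", fitness(best), "of", LENGTH)

if __name__ == "__main__":
    run()
</syntaxhighlight>
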

H

  • Heuristic – is a technique designed for solving a problem more quickly when classic methods are too slow, or for finding an approximate solution when classic methods fail to find any exact solution. This is achieved by trading optimality, completeness, accuracy, or precision for speed. In a way, it can be considered a shortcut. A heuristic function, also called simply a heuristic, is a function that ranks alternatives in search algorithms at each branching step based on available information to decide which branch to follow. For example, it may approximate the exact solution.[175] (A minimal heuristic-guided search sketch follows this list.)
  • {{anchor|hidden layer}}Hidden layer – an internal layer of neurons in an artificial neural network, not dedicated to input or output
  • {{anchor|hidden unit}}Hidden unit – a neuron in a hidden layer in an artificial neural network
  • Hyper-heuristic – is a heuristic search method that seeks to automate, often by the incorporation of machine learning techniques, the process of selecting, combining, generating or adapting several simpler heuristics (or components of such heuristics) to efficiently solve computational search problems. One of the motivations for studying hyper-heuristics is to build systems which can handle classes of problems rather than solving just one problem.[176][177][178]
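
To make the heuristic function entry above concrete, here is a small, hypothetical example: a Manhattan-distance heuristic ranking frontier cells in a greedy best-first search over a toy grid. The grid, start and goal positions are invented for illustration only.

<syntaxhighlight lang="python">
import heapq

# Illustrative heuristic-guided search: greedy best-first on a toy grid.
# '#' cells are walls; the grid and positions are hypothetical.
GRID = [
    "....#...",
    ".##.#.#.",
    ".#..#.#.",
    ".#.##.#.",
    "........",
]
START, GOAL = (0, 0), (4, 7)

def heuristic(cell, goal=GOAL):
    """Manhattan distance: an estimate of the remaining cost to the goal."""
    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

def neighbours(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield nr, nc

def greedy_best_first(start=START, goal=GOAL):
    """Always expand the frontier cell the heuristic ranks as most promising."""
    frontier, seen, expanded = [(heuristic(start), start)], {start}, 0
    while frontier:
        _, cell = heapq.heappop(frontier)
        expanded += 1
        if cell == goal:
            return expanded
        for nxt in neighbours(cell):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), nxt))
    return None

if __name__ == "__main__":
    print("cells expanded before reaching the goal:", greedy_best_first())
</syntaxhighlight>
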

I

  • IEEE Computational Intelligence Society – is a professional society of the Institute of Electrical and Electronics Engineers (IEEE) focussing on "the theory, design, application, and development of biologically and linguistically motivated computational paradigms emphasizing neural networks, connectionist systems, genetic algorithms, evolutionary programming, fuzzy systems, and hybrid intelligent systems in which these paradigms are contained".[179]
  • Incremental learning – is a method of machine learning in which input data is continuously used to extend the existing model's knowledge, i.e. to further train the model. It represents a dynamic technique of supervised learning and unsupervised learning that can be applied when training data becomes available gradually over time or when its size exceeds system memory limits. Algorithms that can facilitate incremental learning are known as incremental machine learning algorithms. (A minimal online-learning sketch follows this list.)
  • Inference engine – is a component of an expert system that applies logical rules to the knowledge base to deduce new information.
  • Information integration – (II), is the merging of information from heterogeneous sources with differing conceptual, contextual and typographical representations. It is used in data mining and consolidation of data from unstructured or semi-structured resources. Typically, information integration refers to textual representations of knowledge but is sometimes applied to rich-media content. Information fusion, which is a related term, involves the combination of information into a new set of information towards reducing redundancy and uncertainty.[180]
  • Information Processing Language – (IPL), is a programming language that includes features intended to help with programs that perform simple problem-solving actions; these features include lists, dynamic memory allocation, data types, recursion, functions as arguments, generators, and cooperative multitasking. IPL invented the concept of list processing, albeit in an assembly-language style.
  • Intelligence amplification – (IA), (also referred to as cognitive augmentation, machine augmented intelligence and enhanced intelligence), refers to the effective use of information technology in augmenting human intelligence.
  • Intelligence explosion – is a possible outcome of humanity building artificial general intelligence (AGI). AGI would be capable of recursive self-improvement leading to rapid emergence of ASI (artificial superintelligence), the limits of which are unknown, at the time of the technological singularity.
  • Intelligent agent – (IA), is an autonomous entity which directs its activity towards achieving goals (i.e. it is an agent), observing its environment through sensors and acting upon it through actuators (i.e. it is intelligent). Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex.
  • Intelligent control – is a class of control techniques that use various artificial intelligence computing approaches like neural networks, Bayesian probability, fuzzy logic, machine learning, reinforcement learning, evolutionary computation and genetic algorithms.[181]
  • Intelligent personal assistant – A virtual assistant or intelligent personal assistant is a software agent that can perform tasks or services for an individual based on verbal commands. Sometimes the term "chatbot" is used to refer to virtual assistants generally or specifically accessed by online chat (or in some cases online chat programs that are exclusively for entertainment purposes). Some virtual assistants are able to interpret human speech and respond via synthesized voices. Users can ask their assistants questions, control home automation devices and media playback via voice, and manage other basic tasks such as email, to-do lists, and calendars with verbal commands.[182]
  • Interpretation
  • Issue trees
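
The sketch below illustrates the incremental learning entry above with a simple online perceptron that updates its weights one example at a time, so the full data set never has to be held in memory. The synthetic data stream, learning rate, class boundary, and method names are assumptions made purely for demonstration.

<syntaxhighlight lang="python">
import random

# Illustrative incremental (online) learning: a perceptron updated from each
# new example as it arrives, without storing the whole data set.

def data_stream(n, seed=0):
    """Yield (features, label) pairs one at a time; label is 1 iff x + y > 1."""
    rng = random.Random(seed)
    for _ in range(n):
        x, y = rng.random(), rng.random()
        yield (x, y), 1 if x + y > 1.0 else 0

class OnlinePerceptron:
    def __init__(self, n_features, learning_rate=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = learning_rate

    def predict(self, features):
        score = sum(wi * xi for wi, xi in zip(self.w, features)) + self.b
        return 1 if score > 0 else 0

    def partial_fit(self, features, label):
        """Update the model from a single new example (the incremental step)."""
        error = label - self.predict(features)
        if error:
            self.w = [wi + self.lr * error * xi for wi, xi in zip(self.w, features)]
            self.b += self.lr * error

if __name__ == "__main__":
    model = OnlinePerceptron(n_features=2)
    correct = 0
    for i, (features, label) in enumerate(data_stream(2000), start=1):
        correct += model.predict(features) == label   # test-then-train evaluation
        model.partial_fit(features, label)
        if i % 500 == 0:
            print(f"after {i} examples: running accuracy {correct / i:.2f}")
</syntaxhighlight>
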
{{Compact ToC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}

J

  • Junction tree algorithm – (also known as 'Clique Tree') is a method used in machine learning to perform marginalization in general graphs. In essence, it entails performing belief propagation on a modified graph called a junction tree, a tree whose nodes are clusters (cliques) of variables from the original graph.[183] (A minimal chain example follows.)
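
As a minimal illustration of the marginalization that the junction tree algorithm computes, the sketch below runs message passing (belief propagation) along a three-variable chain A-B-C, where the junction tree is simply the chain of cliques {A,B} and {B,C}. The potential tables are arbitrary illustrative numbers, and the example deliberately omits the general clique-tree construction.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative message passing on a chain A - B - C of binary variables.
phi_AB = np.array([[0.9, 0.1],       # phi_AB[a, b]
                   [0.2, 0.8]])
phi_BC = np.array([[0.7, 0.3],       # phi_BC[b, c]
                   [0.4, 0.6]])
prior_A = np.array([0.6, 0.4])       # p(A)

# Eliminate A, then B, by passing messages along the chain.
msg_A_to_B = prior_A @ phi_AB        # sum_a p(a) * phi_AB[a, b]
msg_B_to_C = msg_A_to_B @ phi_BC     # sum_b m(b) * phi_BC[b, c]

marginal_C = msg_B_to_C / msg_B_to_C.sum()   # normalise to obtain p(C)
print("p(C) =", marginal_C)
</syntaxhighlight>
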

K

  • Kernel method
  • KL-ONE
  • Knowledge acquisition
  • Knowledge-based systems
  • Knowledge engineering
  • Knowledge extraction
  • Knowledge Interchange Format
  • Knowledge representation and reasoning

L

  • Linked data
  • Lisp (programming language)
  • Logic programming
  • Long short-term memory

M

  • Machine vision
  • Markov chain
  • Markov decision process
  • Mathematical optimization
  • Machine learning
  • Machine listening
  • Machine perception
  • Mechanism design
  • Mechatronics
  • Metabolic network modelling
  • Metaheuristic
  • Model checking
  • Modus ponens
  • Modus tollens
  • Monte Carlo tree search
  • Multi-agent system
  • Multi-swarm optimization
  • Mutation
  • Mycin
{{Compact ToC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}

N

  • Naive Bayes classifier
  • Naive semantics
  • Name binding
  • Named-entity recognition
  • Named graph
  • Natural language generation
  • Natural language processing
  • Natural language programming
  • Network motif
  • Neural machine translation
  • Neural Turing machine
  • Neuro-fuzzy
  • Neurocybernetics
  • Neuromorphic engineering
  • Node
  • Nondeterministic algorithm
  • Nouvelle AI
  • NP
  • NP-completeness
  • NP-hardness

O

  • Occam's razor
  • Offline learning
  • Online learning
  • Ontology engineering
  • Ontology learning
  • OpenAI
  • OpenCog
  • Open Mind Common Sense
  • Open-source software

P

  • Partial order reduction
  • Partially observable Markov decision process
  • Particle swarm optimization
  • Pathfinding
  • Pattern recognition
  • Planner
  • Predicate logic
  • Predictive analytics
  • Principal component analysis
  • Principle of rationality
  • Probabilistic programming language
  • Production Rule Representation
  • Production system
  • Programming language
  • Prolog
  • Propositional calculus
  • Python
{{Compact ToC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}

Q

  • Qualification problem
  • Quantifier
  • Quantum computing
  • Query language

R

  • R programming language
  • Radial basis function network
  • Random forest
  • Reasoning system
  • Recurrent neural network
  • Region connection calculus
  • Reinforcement learning
  • Reservoir computing
  • Resource Description Framework
  • Restricted Boltzmann machine
  • Rete algorithm
  • Robotics
  • Rule-based system

S

  • Satisfiability
  • Search algorithm
  • Selection
  • Self-management
  • Semantic network
  • Semantic reasoner
  • Semantic query
  • Semantics
  • Sensor fusion
  • Separation logic
  • Similarity learning
  • Simulated annealing
  • Situated approach
  • Situation calculus
  • SLD resolution
  • Soft computing
  • Software
  • Software engineering
  • Spatial-temporal reasoning
  • SPARQL
  • Speech recognition
  • Spiking neural network
  • State
  • Statistical classification
  • Statistical relational learning
  • Stochastic optimization
  • Stochastic semantic analysis
  • STRIPS
  • Subject-matter expert
  • Superintelligence
  • Supervised learning
  • Swarm intelligence
  • Symbolic artificial intelligence
  • Synthetic intelligence
  • Systems neuroscience
{{Compact ToC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}

T

  • Technological singularity
  • Temporal difference learning
  • Tensor network theory
  • TensorFlow
  • Theoretical computer science
  • Theory of computation
  • Thompson sampling
  • Time complexity
  • Transhumanism
  • Transition system
  • Tree traversal
  • True quantified Boolean formula
  • Turing test
  • Type system

U

  • Unsupervised learning

V

  • Vision processing unit

W

  • Watson
  • Weak AI
  • World Wide Web Consortium

X

Y

Z

{{Compact ToC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}

See also

  • Artificial intelligence
  • Glossary of machine vision

References and notes

1. ^For example: {{cite book |editor1-last=Josephson |editor1-first=John R. |editor2-last=Josephson |editor2-first=Susan G. |date=1994 |title=Abductive Inference: Computation, Philosophy, Technology |location=Cambridge, UK; New York |publisher=Cambridge University Press |isbn=978-0521434614 |oclc=28149683 |doi=10.1017/CBO9780511530128}}
2. ^{{Cite web|url = http://www.commens.org/dictionary/term/retroduction|title = Retroduction | Dictionary | Commens|date = |accessdate = 2014-08-24|website = Commens – Digital Companion to C. S. Peirce|publisher = Mats Bergman, Sami Paavola & João Queiroz}}
3. ^{{Cite journal|last=Colburn|first=Timothy|last2=Shute|first2=Gary|date=2007-06-05|title=Abstraction in Computer Science|journal=Minds and Machines|language=en|volume=17|issue=2|pages=169–184|doi=10.1007/s11023-007-9061-7|issn=0924-6495}}
4. ^{{Cite journal|last=Kramer|first=Jeff|date=2007-04-01|title=Is abstraction the key to computing?|journal=Communications of the ACM|volume=50|issue=4|pages=36–42|doi=10.1145/1232743.1232745|issn=0001-0782|citeseerx=10.1.1.120.6776}}
5. ^Michael Gelfond, Vladimir Lifschitz (1998) "Action Languages", Linköping Electronic Articles in Computer and Information Science, vol 3, nr 16.
6. ^{{Cite web|url=https://deepai.org/machine-learning-glossary-and-terms/activation-function|title=What is an Activation Function?|last=|first=|date=|website=deepai.org|archive-url=|archive-date=|dead-url=|access-date=}}
7. ^{{cite conference |last=Jang |first=Jyh-Shing R |title=Fuzzy Modeling Using Generalized Neural Networks and Kalman Filter Algorithm |year=1991 |conference=Proceedings of the 9th National Conference on Artificial Intelligence, Anaheim, CA, USA, July 14–19 |volume=2 |pages=762–767 |url=http://www.aaai.org/Papers/AAAI/1991/AAAI91-119.pdf }}
8. ^{{cite journal |last=Jang |first=J.-S.R. |year=1993 |title=ANFIS: adaptive-network-based fuzzy inference system |journal=IEEE Transactions on Systems, Man and Cybernetics |volume=23 |issue=3 |pages=665–685 |doi=10.1109/21.256541 }}
9. ^{{Citation |chapter=Adaptation of Fuzzy Inference System Using Neural Learning |title=Fuzzy Systems Engineering: Theory and Practice |first=A. |last=Abraham |year=2005 |editor-first=Nadia |editor1-last=Nedjah |editor2-first=Luiza |editor2-last=de Macedo Mourelle |series=Studies in Fuzziness and Soft Computing |volume=181 |publisher=Springer Verlag |location=Germany |doi=10.1007/11339366_3 |pages=53–83 |isbn=978-3-540-25322-8 |citeseerx=10.1.1.161.6135 }}
10. ^Jang, Sun, Mizutani (1997) – Neuro-Fuzzy and Soft Computing – Prentice Hall, pp 335–368, {{ISBN|0-13-261066-3}}
11. ^{{cite paper|last=Tahmasebi|first=P. |title=A hybrid neural networks-fuzzy logic-genetic algorithm for grade estimation |year=2012 |journal=Computers & Geosciences |volume=42 |pages=18–27 |url=http://www.sciencedirect.com/science/article/pii/S0098300412000398/pdfft?md5=cb070472e2eaa79fed8a2edf48943992&pid=1-s2.0-S0098300412000398-main.pdf |doi=10.1016/j.cageo.2012.02.004 |pmid=25540468 |pmc=4268588 }}
12. ^{{cite paper|last=Tahmasebi|first=P. |title=Comparison of optimized neural network with fuzzy logic for ore grade estimation |year=2010 |journal=Australian Journal of Basic and Applied Sciences |volume=4 |pages=764–772 |url=https://www.researchgate.net/publication/266881168 }}
13. ^{{cite book | author = Russell, S.J.|author2= Norvig, P. | year = 2002 | title = Artificial Intelligence: A Modern Approach | publisher = Prentice Hall | isbn = 978-0-13-790395-5|title-link= Artificial Intelligence: A Modern Approach }}
14. ^{{cite article| title=We Need Computers with Empathy |magazine=Technology Review |author=Rana el Kaliouby |volume=120 |number=6 |page=8 |date=Nov–Dec 2017|url=https://www.technologyreview.com/s/609071/we-need-computers-with-empathy/}}
15. ^{{cite conference |first=Jianhua |last=Tao |author2=Tieniu Tan |title=Affective Computing: A Review |booktitle=Affective Computing and Intelligent Interaction |volume=LNCS 3784 |pages=981–995 |publisher=Springer |year=2005 |doi=10.1007/11573548 }}
16. ^Comparison of Agent Architectures {{webarchive |url=https://web.archive.org/web/20080827222057/http://hri.cogs.indiana.edu/publications/aaai04ws.pdf |date=August 27, 2008 }}
17. ^{{cite web|url=https://www.v3.co.uk/v3-uk/news/3014293/intel-unveils-movidius-compute-stick-usb-ai-accelerator|title=Intel unveils Movidius Compute Stick USB AI Accelerator|date=2017-07-21}}
18. ^{{cite web|url=https://insidehpc.com/2017/06/inspurs-unveils-gx4-ai-accelerator/|title=Inspurs unveils GX4 AI Accelerator|date=2017-06-21}}
19. ^Shapiro, Stuart C. (1992). Artificial Intelligence In Stuart C. Shapiro (Ed.), Encyclopedia of Artificial Intelligence (Second Edition, pp. 54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)
20. ^Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference", Report V-131, Zator Co., Cambridge, Ma. (Nov. 1960 revision of the Feb. 4, 1960 report).
21. ^{{cite news|url=https://www.bbc.com/news/technology-35785875|title=Artificial intelligence: Google's AlphaGo beats Go master Lee Se-dol|newspaper=BBC News|accessdate=17 March 2016|date=2016-03-12}}
22. ^{{cite web |title=AlphaGo {{!}} DeepMind |url=https://deepmind.com/research/alphago/ |website=DeepMind}}
23. ^{{cite web |url=http://googleresearch.blogspot.com/2016/01/alphago-mastering-ancient-game-of-go.html|title=Research Blog: AlphaGo: Mastering the ancient game of Go with Machine Learning |date=27 January 2016 |work=Google Research Blog}}
24. ^{{cite news |url=https://www.bbc.com/news/technology-35420579 |title=Google achieves AI 'breakthrough' by beating Go champion |date=27 January 2016 |work=BBC News}}
25. ^See Dung (1995)
26. ^See Besnard and Hunter (2001)
27. ^see Bench-Capon (2002)
28. ^Definition of AI as the study of intelligent agents* {{Harvnb|Poole|Mackworth|Goebel|1998|loc=p. 1}}, which provides the version that is used in this article. Note that they use the term "computational intelligence" as a synonym for artificial intelligence.* {{Harvtxt|Russell|Norvig|2003}} (who prefer the term "rational agent") and write "The whole-agent view is now widely accepted in the field" {{Harv|Russell|Norvig|2003|p=55}}.* {{Harvnb|Nilsson|1998}}* {{Harvnb|Legg|Hutter|2007}}.
29. ^{{Cite web|url=https://www.frontiersin.org/research-topics/4817/artificial-neural-networks-as-models-of-neural-information-processing|title=Artificial Neural Networks as Models of Neural Information Processing {{!}} Frontiers Research Topic|language=en|access-date=2018-02-20}}
30. ^{{Cite web|url=https://deepai.org/machine-learning-glossary-and-terms/neural-network|title=Build with AI {{!}} DeepAI|website=DeepAI|access-date=2018-10-06}}
31. ^{{cite web|url=http://www.aaai.org/Organization/bylaws.php|title=AAAI Corporate Bylaws|publisher=}}
32. ^{{Cite news|url=http://images.huffingtonpost.com/2016-05-13-1463155843-8474094-AR_history_timeline.jpg|title=The Lengthy History of Augmented Reality|last=|first=|date=May 15, 2016|work=Huffington Post|access-date=}}
33. ^{{Cite book|url=http://www.heg-fr.ch/EN/School-of-Management/Communication-and-Events/events/Pages/EventViewer.aspx?Event=patrick-schuffel.aspx|title=The Concise Fintech Compendium|last=Schueffel|first=Patrick|publisher=School of Management Fribourg/Switzerland|year=2017|isbn=|location=Fribourg|pages=}}
34. ^{{Citation | last1=Ghallab | first=Malik | last2=Nau | first2=Dana S. | last3=Traverso | first3=Paolo | title=Automated Planning: Theory and Practice | publisher=Morgan Kaufmann | year=2004 | url=http://www.laas.fr/planning/ | isbn=978-1-55860-856-6}}
35. ^{{Citation|doi=10.1109/MC.2003.1160055|title=The vision of autonomic computing|year=2003|last1=Kephart|first1=J.O.|last2=Chess|first2=D.M.|journal=Computer|volume=36|pages=41–52|citeseerx=10.1.1.70.613}}
36. ^[https://www.reuters.com/article/us-autos-selfdriving-uber/self-driving-uber-car-kills-arizona-woman-crossing-street-idUSKBN1GV296 ]
37. ^{{cite journal|last=Thrun|first=Sebastian|year=2010|title=Toward Robotic Cars|journal=Communications of the ACM|volume=53|issue=4|pages=99–106|doi=10.1145/1721654.1721679}}
38. ^{{cite conference|last1=Gehrig|first1=Stefan K.|last2=Stein|first2=Fridtjof J.|year=1999|title=Dead reckoning and cartography using stereo vision for an automated car|conference=IEEE/RSJ International Conference on Intelligent Robots and Systems|location=Kyongju|volume=3|pages=1507–1512|doi=10.1109/IROS.1999.811692|isbn=0-7803-5184-3}}
39. ^{{Cite web|url=http://www.robots.ox.ac.uk/|title=Information Engineering Main/Home Page|website=www.robots.ox.ac.uk|language=en|access-date=2018-10-03}}
40. ^Goodfellow, Ian; Bengio, Yoshua; Courville, Aaaron (2016) Deep Learning. MIT Press. p. 196. {{ISBN|9780262035613}}
41. ^{{Cite web|url=https://deepai.org/machine-learning-glossary-and-terms/backpropagation|title=What is Backpropagation?|last=|first=|date=|website=deepai.org|archive-url=|archive-date=|dead-url=|access-date=}}
42. ^{{Cite journal |last=Nielsen|first=Michael A.|date=2015|journal=Neural Networks and Deep Learning|url=http://neuralnetworksanddeeplearning.com/chap6.html|title=Chapter 6|volume=|pages=|via=}}
43. ^{{Cite web|url=http://ufldl.stanford.edu/wiki/index.php/Deep_Networks:_Overview|title=Deep Networks: Overview - Ufldl|website=ufldl.stanford.edu|access-date=2017-08-04}}
44. ^{{Cite book|chapter-url=https://www.researchgate.net/publication/243781476|chapter=A Focused Backpropagation Algorithm for Temporal Pattern Recognition|website=ResearchGate|language=en|access-date=2017-08-21 |last=Mozer |first=M. C. |editor1-first=Y. |editor1-last=Chauvin |editor2-first=D. |editor2-last=Rumelhart |title= Backpropagation: Theory, architectures, and applications |publisher= Hillsdale, NJ: Lawrence Erlbaum Associates |pages=137–169 |year=1995}}
45. ^{{cite techreport|title=The utility driven dynamic error propagation network|author1=Robinson, A. J.|author2=Fallside, F.|lastauthoramp=yes|institution=Cambridge University, Engineering Department|number=CUED/F-INFENG/TR.1|year=1987|url=https://www.bibsonomy.org/bibtex/269a88ecbac9a51cbf0b4be189c412820/idsia}}
46. ^{{Cite journal|last=Werbos|first=Paul J.|title=Generalization of backpropagation with application to a recurrent gas market model|journal=Neural Networks|volume=1|issue=4|pages=339–356|doi=10.1016/0893-6080(88)90007-x|year=1988}}
47. ^{{cite book|last=Feigenbaum|first=Edward|title=The Rise of the Expert Company|year=1988|publisher=Times Books|isbn=978-0-8129-1731-4|page=317}}
48. ^{{cite journal | first = Josef | last = Sivic | title = Efficient visual search of videos cast as text retrieval | journal = IEEE Transactions on Pattern Analysis and Machine Intelligence | volume=31 |issue=4 | pages = 591–605 | date = April 2009 | url = http://www.di.ens.fr/~josef/publications/sivic09a.pdf}}
49. ^McTear et al 2016, p. 167.
50. ^{{cite web|title=Understanding the backward pass through Batch Normalization Layer|url=https://kratzert.github.io/2016/02/12/understanding-the-gradient-flow-through-the-batch-normalization-layer.html|website=kratzert.github.io|accessdate=24 April 2018}}
51. ^{{cite journal|last1=Ioffe|first1=Sergey|last2=Szegedy|first2=Christian|title=Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift|arxiv=1502.03167|year=2015}}
52. ^{{cite web|title=Glossary of Deep Learning: Batch Normalisation|url=https://medium.com/deeper-learning/glossary-of-deep-learning-batch-normalisation-8266dcd2fa82|website=medium.com|accessdate=24 April 2018|date=2017-06-27}}
53. ^{{cite web|title=Batch normalization in Neural Networks|url=https://towardsdatascience.com/batch-normalization-in-neural-networks-1ac91516821c|website=towardsdatascience.com|accessdate=24 April 2018|date=2017-10-20}}
54. ^{{Cite web|url=https://deepai.org/machine-learning-glossary-and-terms/bayesian-programming|title=Bayesian versus Frequentist Probability|last=|first=|date=|website=deepai.org|archive-url=|archive-date=|dead-url=|access-date=}}
55. ^Pham DT, Ghanbarzadeh A, Koc E, Otri S, Rahim S and Zaidi M. The Bees Algorithm. Technical Note, Manufacturing Engineering Centre, Cardiff University, UK, 2005.
56. ^Pham, D.T., Castellani, M. (2009), The Bees Algorithm – Modelling Foraging Behaviour to Solve Continuous Optimisation Problems. Proc. ImechE, Part C, 223(12), 2919-2938.
57. ^{{Cite journal | doi=10.1007/s00500-013-1104-9|title = Benchmarking and comparison of nature-inspired population-based continuous optimisation algorithms| journal=Soft Computing| volume=18| issue=5| pages=871–903|year = 2014|last1 = Pham|first1 = D. T.| last2=Castellani| first2=M.}}
58. ^{{Cite journal | doi=10.1080/23311916.2015.1091540|title = A comparative study of the Bees Algorithm as a tool for function optimisation| journal=Cogent Engineering| volume=2|year = 2015|last1 = Pham|first1 = Duc Truong| last2=Castellani| first2=Marco}}
59. ^Nasrinpour, H. R., Massah Bavani, A., Teshnehlab, M., (2017), Grouped Bees Algorithm: A Grouped Version of the Bees Algorithm, Computers 2017, 6(1), 5; (doi: 10.3390/computers6010005)
60. ^{{Cite journal|last=Cao|first=Longbing|year=2010|title=In-depth Behavior Understanding and Use: the Behavior Informatics Approach|url=|journal=Information Science|volume=180|issue=17|pages=3067–3085|doi=10.1016/j.ins.2010.03.025|pmid=|access-date=}}
61. ^Colledanchise Michele, and Ögren Petter 2016. How Behavior Trees Modularize Hybrid Control Systems and Generalize Sequential Behavior Compositions, the Subsumption Architecture, and Decision Trees. In IEEE Transactions on Robotics vol.PP, no.99, pp.1-18 (2016)
62. ^Colledanchise Michele, and Ögren Petter 2017. [https://arxiv.org/abs/1709.00084 Behavior Trees in Robotics and AI: An Introduction.]
63. ^{{Cite journal|last=Breur|first=Tom|date=July 2016|title=Statistical Power Analysis and the contemporary "crisis" in social sciences|journal=Journal of Marketing Analytics|language=en|volume=4|issue=2–3|pages=61–65|doi=10.1057/s41270-016-0001-3|issn=2050-3318}}
64. ^{{cite book |first=Paul |last=Bachmann |authorlink=Paul Bachmann |title=Analytische Zahlentheorie |trans-title=Analytic Number Theory |language=de |volume=2 |location=Leipzig |publisher=Teubner |date=1894 |url=https://archive.org/stream/dieanalytischeza00bachuoft#page/402/mode/2up}}
65. ^{{cite book |first=Edmund |last=Landau |authorlink=Edmund Landau |title=Handbuch der Lehre von der Verteilung der Primzahlen |publisher=B. G. Teubner |date=1909 |location=Leipzig |trans-title=Handbook on the theory of the distribution of the primes |language=de |page=883 | url=https://archive.org/details/handbuchderlehre01landuoft}}
66. ^{{cite book|author1=Rowan Garnier|author2=John Taylor|title=Discrete Mathematics: Proofs, Structures and Applications, Third Edition|url=https://books.google.com/books?id=WnkZSSc4IkoC&pg=PA620|year=2009|publisher=CRC Press|isbn=978-1-4398-1280-8|page=620}}
67. ^{{cite book|author=Steven S Skiena|title=The Algorithm Design Manual|url=https://books.google.com/books?id=7XUSn0IKQEgC&pg=PA77|year=2009|publisher=Springer Science & Business Media|isbn=978-1-84800-070-4|page=77}}
68. ^{{Cite journal|last1=Erman|first1=L. D.|last2=Hayes-Roth|first2=F.|last3=Lesser|first3=V. R.|last4=Reddy|first4=D. R.|year=1980|title=The Hearsay-II Speech-Understanding System: Integrating Knowledge to Resolve Uncertainty|journal=ACM Computing Surveys|volume=12|issue=2|pages=213|doi=10.1145/356810.356816|pmc=|pmid=}}
69. ^{{cite journal |last1 = Corkill |first1 = Daniel D. |title = Blackboard Systems |journal = AI Expert |volume = 6 |issue = 9 |date=September 1991 |pages = 40–47 |url = http://bbtech.com/papers/ai-expert.pdf }}
70. ^* {{cite techreport |first = H. Yenny |last = Nii |title = Blackboard Systems |number = STAN-CS-86-1123 |institution = Department of Computer Science, Stanford University |year = 1986 |url = http://i.stanford.edu/pub/cstr/reports/cs/tr/86/1123/CS-TR-86-1123.pdf |accessdate = 2013-04-12 }}
71. ^{{Cite journal|last1=Hayes-Roth|first1=B.|year=1985|title=A blackboard architecture for control|journal=Artificial Intelligence|volume=26|issue=3|pages=251–321|doi=10.1016/0004-3702(85)90063-3|pmc=|pmid=}}
72. ^NZZ- Die Zangengeburt eines möglichen Stammvaters. Website Neue Zürcher Zeitung. Seen 16. August 2013.
73. ^Official Homepage Roboy {{Webarchive|url=https://web.archive.org/web/20130803052035/http://www.roboy.org/mediaundnews.html# |date=2013-08-03 }}. Website Roboy. Seen 16. August 2013.
74. ^Official Homepage Starmind. Website Starmind. Seen 16. August 2013.
75. ^{{Cite arxiv|last=Sabour|first=Sara|last2=Frosst|first2=Nicholas|last3=Hinton|first3=Geoffrey E.|date=2017-10-26|title=Dynamic Routing Between Capsules|eprint=1710.09829|class=cs.CV}}
76. ^{{cite web|url=http://searchdomino.techtarget.com/sDefinition/0,,sid4_gci935566,00.html|title=What is a chatbot?|website=techtarget.com|accessdate=30 January 2017}}
77. ^{{cite web|title=Cloud Robotics and Automation A special issue of the IEEE Transactions on Automation Science and Engineering.|url=http://www.ieee-ras.org/publications/t-ase/special-issues/special-issue-on-cloud-robotics-and-automation|publisher=IEEE|accessdate=7 December 2014}}
78. ^{{cite web|title=RoboEarth|url=http://roboearth.org/cloud_robotics/}}
79. ^{{cite web|last1=Goldberg|first1=Ken|title=Cloud Robotics and Automation|url=http://goldberg.berkeley.edu/cloud-robotics}}
80. ^{{cite web|last1=Li|first1=R|title=Cloud Robotics-Enable cloud computing for robots|url=https://sites.google.com/site/ruijiaoli/blogs/page|accessdate=7 December 2014}}
81. ^{{cite journal|last=Fisher|first=Douglas|title=Knowledge acquisition via incremental conceptual clustering|journal=Machine Learning|year=1987|volume=2|issue=2|pages=139–172|doi=10.1007/BF00114265|url = http://www.springerlink.com/content/qj16212n7537n6p3/fulltext.pdf}}
82. ^{{cite conference|author = Fisher, Douglas H.|title = Improving inference through conceptual clustering|booktitle = Proceedings of the 1987 AAAI Conferences|conference = AAAI Conference|location = Seattle Washington|pages = 461–465|date = July 1987}}
83. ^{{cite book|title=Formal approaches in categorization|publisher=Cambridge University Press|location=Cambridge|isbn=9780521190480|pages=253–273|author=William Iba and Pat Langley|editor=Emmanuel M. Pothos and Andy J. Wills|chapter=Cobweb models of categorization and probabilistic concept formation|date=2011-01-27}}
84. ^Refer to the ICT website: http://cogarch.ict.usc.edu/
85. ^{{cite web|url=http://labs.hpe.com/research/next-next/brain/|title=Hewlett Packard Labs|publisher=}}
86. ^Terdiman, Daniel (2014) .IBM's TrueNorth processor mimics the human brain.http://www.cnet.com/news/ibms-truenorth-processor-mimics-the-human-brain/
87. ^Knight, Shawn (2011). IBM unveils cognitive computing chips that mimic human brain TechSpot: August 18, 2011, 12:00 PM
88. ^Hamill, Jasper (2013). [https://www.theregister.co.uk/2013/08/08/ibm_unveils_computer_architecture_based_upon_your_brain/ Cognitive computing: IBM unveils software for its brain-like SyNAPSE chips] The Register: August 8, 2013
89. ^{{cite journal|year=2014|title=Surfing Toward the Future|journal=Communications of the ACM|volume=57|issue=3|pages=26–29|doi=10.1145/2566967|author=Denning. P.J.}}
90. ^{{cite paper | author = Dr. Lars Ludwig | title = Extended Artificial Memory. Toward an integral cognitive theory of memory and technology. | publisher = Technical University of Kaiserslautern | year = 2013 | url = https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3662 | format = pdf | accessdate = 2017-02-07}}
91. ^{{cite web|url=http://www.hpl.hp.com/research/|title=Research at HP Labs|publisher=}}
92. ^{{Cite web|url=http://thesiliconreview.com/magazines/automate-complex-workflows-using-tactical-cognitive-computing-coseer/|title=Automate Complex Workflows Using Tactical Cognitive Computing: Coseer|website=thesiliconreview.com|access-date=2017-07-31}}
93. ^Cognitive science is an interdisciplinary field of researchers from Linguistics, psychology, neuroscience, philosophy, computer science, and anthropology that seek to understand the mind. How We Learn: Ask the Cognitive Scientist
94. ^Schrijver, Alexander (February 1, 2006). A Course in Combinatorial Optimization (PDF), page 1.
95. ^HAYKIN, S. Neural Networks - A Comprehensive Foundation. Second edition. Pearson Prentice Hall: 1999.
96. ^{{Cite web|url=http://www-formal.stanford.edu/jmc/mcc59/mcc59.html|title=PROGRAMS WITH COMMON SENSE|website=www-formal.stanford.edu|access-date=2018-04-11}}
97. ^{{Cite magazine|last=|author1=Ernest Davis|author2=Gary Marcus|date=2015|title=Commonsense reasoning|url=http://cacm.acm.org/magazines/2015/9/191169-commonsense-reasoning-and-commonsense-knowledge-in-artificial-intelligence/fulltext|magazine=Communications of the ACM|publisher=|volume=58|pages=92–103|doi=10.1145/2701413|accessdate=|number=9}}
98. ^Hulstijn, J, and Nijholt, A. (eds.). Proceedings of the International Workshop on Computational Humor. Number 12 in Twente Workshops on Language Technology, Enschede, Netherlands. University of Twente, 1996.
99. ^{{Cite web | url=http://www.learningtheory.org/ | title=ACL - Association for Computational Learning}}
100. ^ Trappenberg, Thomas P. (2002). Fundamentals of Computational Neuroscience. United States: Oxford University Press Inc. p. 1. {{ISBN|978-0-19-851582-1}}.
101. ^What is computational neuroscience? Patricia S. Churchland, Christof Koch, Terrence J. Sejnowski. in Computational Neuroscience pp.46-55. Edited by Eric L. Schwartz. 1993. MIT Press {{cite web |url=http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=7195 |title=Archived copy |accessdate=2009-06-11 |deadurl=yes |archiveurl=https://web.archive.org/web/20110604124206/http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=7195 |archivedate=2011-06-04 |df= }}
102. ^{{Cite web|url=https://mitpress.mit.edu/books/theoretical-neuroscience|title=Theoretical Neuroscience|last=Press|first=The MIT|website=The MIT Press|language=en|access-date=2018-05-24}}
103. ^{{ cite book | author1= Gerstner, W. | author2 = Kistler, W. | author3 = Naud, R. | author4 = Paninski, L.| title = Neuronal Dynamics | publisher = Cambridge University Press | location = Cambridge, UK | year = 2014 | isbn = 9781107447615}}
104. ^{{cite web |url=http://wordnetweb.princeton.edu/perl/webwn?s=computer%20scientist |title=WordNet Search—3.1 |publisher=Wordnetweb.princeton.edu |accessdate=14 May 2012}}
105. ^Dana H. Ballard; Christopher M. Brown (1982). Computer Vision. Prentice Hall. {{ISBN|0-13-165316-4}}.
106. ^ Huang, T. (1996-11-19). Vandoni, Carlo, E, ed. Computer Vision : Evolution And Promise (PDF). 19th CERN School of Computing. Geneva: CERN. pp. 21–25. doi:10.5170/CERN-1996-008.21. {{ISBN|978-9290830955}}.
107. ^ Milan Sonka; Vaclav Hlavac; Roger Boyle (2008). Image Processing, Analysis, and Machine Vision. Thomson. {{ISBN|0-495-08252-X}}.
108. ^{{cite book|url=https://plato.stanford.edu/archives/fall2018/entries/connectionism/|title=The Stanford Encyclopedia of Philosophy|first=James|last=Garson|editor-first=Edward N.|editor-last=Zalta|date=27 November 2018|publisher=Metaphysics Research Lab, Stanford University|via=Stanford Encyclopedia of Philosophy}}
109. ^{{cite web|title=Ishtar for Belgium to Belgrade |url=http://www.eurovision.tv/page/news?id=554&_t=ishtar_for_belgium_to_belgrade |publisher= European Broadcasting Union|accessdate=19 May 2013}}
110. ^{{cite web|url=http://yann.lecun.com/exdb/lenet/|title=LeNet-5, convolutional neural networks|last=LeCun|first=Yann|accessdate=16 November 2013}}
111. ^Zhang, Wei (1988). "Shift-invariant pattern recognition neural network and its optical architecture". Proceedings of annual conference of the Japan Society of Applied Physics.
112. ^Zhang, Wei (1990). "Parallel distributed processing model with local space-invariant interconnections and its optical architecture". Applied Optics. 29 (32): 4790–7. Bibcode:1990ApOpt..29.4790Z. doi:10.1364/AO.29.004790. {{PMID|20577468}}.,
113. ^{{Cite arXiv|eprint=1511.06410v1|last1=Tian|first1=Yuandong|title=Better Computer Go Player with Neural Network and Long-term Prediction|last2=Zhu|first2=Yan|class=cs.LG|year=2015}}
114. ^{{Cite web|url=https://www.technologyreview.com/s/544181/how-facebooks-ai-researchers-built-a-game-changing-go-engine/|title=How Facebook's AI Researchers Built a Game-Changing Go Engine|date=December 4, 2015|website=MIT Technology Review|access-date=2016-02-03}}
115. ^{{Cite web|url=http://www.techtimes.com/articles/128636/20160128/facebook-ai-go-player-gets-smarter-with-neural-network-and-long-term-prediction-to-master-worlds-hardest-game.htm|title=Facebook AI Go Player Gets Smarter With Neural Network And Long-Term Prediction To Master World's Hardest Game|date=2016-01-28|website=Tech Times|access-date=2016-04-24}}
116. ^{{Cite web|url=https://venturebeat.com/2016/01/26/facebooks-artificially-intelligent-go-player-is-getting-smarter/|title=Facebook's artificially intelligent Go player is getting smarter|website=VentureBeat|access-date=2016-04-24|date=2016-01-27}}
117. ^Solomonoff, R.J.The Time Scale of Artificial Intelligence; Reflections on Social Effects, Human Systems Management, Vol 5 1985, Pp 149-153
118. ^Moor, J., The Dartmouth College Artificial Intelligence Conference: The Next Fifty years, AI Magazine, Vol 27, No., 4, Pp. 87-9, 2006
119. ^Kline, Ronald R., Cybernetics, Automata Studies and the Dartmouth Conference on Artificial Intelligence, IEEE Annals of the History of Computing, October–December, 2011, IEEE Computer Society
120. ^{{Cite journal |doi = 10.1109/TIFS.2016.2569061|title = Discriminant Correlation Analysis: Real-Time Feature Level Fusion for Multimodal Biometric Recognition|journal = IEEE Transactions on Information Forensics and Security|volume = 11|issue = 9|pages = 1984–1996|year = 2016|last1 = Haghighat|first1 = Mohammad|last2 = Abdel-Mottaleb|first2 = Mohamed|last3 = Alhalabi|first3 = Wadee}}
121. ^{{cite conference | author=Maurizio Lenzerini | title=Data Integration: A Theoretical Perspective | booktitle=PODS 2002 | year=2002 | pages=233–246 | url=http://www.dis.uniroma1.it/~lenzerin/homepagine/talks/TutorialPODS02.pdf}}
122. ^[https://www.talend.com/resource/big-data-integration/ Big Data Integration]
123. ^{{cite news | author=Frederick Lane | title=IDC: World Created 161 Billion Gigs of Data in 2006 | year=2006 | url=http://www.toptechnews.com/story.xhtml?story_id=01300000E3D0&full_skip=1 }}
124. ^{{Cite journal | last1 = Dhar | first1 = V. | title = Data science and prediction | doi = 10.1145/2500499 | journal = Communications of the ACM | volume = 56 | issue = 12 | pages = 64–73 | year = 2013 | pmid = | pmc = | url = http://cacm.acm.org/magazines/2013/12/169933-data-science-and-prediction/fulltext}}
125. ^{{cite web | url=http://simplystatistics.org/2013/12/12/the-key-word-in-data-science-is-not-data-it-is-science/ | title=The key word in "Data Science" is not Data, it is Science | publisher=Simply Statistics | date=2013-12-12 | author=Jeff Leek }}
126. ^{{Cite book|chapter-url=https://link.springer.com/chapter/10.1007/978-4-431-65950-1_3|url=https://www.springer.com/book/9784431702085|title=Data Science, Classification, and Related Methods|last=Hayashi|first=Chikio|date=1998-01-01|publisher=Springer Japan|isbn=9784431702085|editor-last=Hayashi|editor-first=Chikio|series=Studies in Classification, Data Analysis, and Knowledge Organization|location=|pages=40–51|language=en|chapter=What is Data Science? Fundamental Concepts and a Heuristic Example|doi=10.1007/978-4-431-65950-1_3|editor-last2=Yajima|editor-first2=Keiji|editor-last3=Bock|editor-first3=Hans-Hermann|editor-last4=Ohsumi|editor-first4=Noboru|editor-last5=Tanaka|editor-first5=Yutaka|editor-last6=Baba|editor-first6=Yasumasa}}
127. ^{{cite conference|last1=Dedić|first1=Nedim|last2=Stanier|first2=Clare|year=2016|editor1-last=Hammoudi|editor1-first=Slimane|editor2-last=Maciaszek|editor2-first=Leszek|editor3-last=Missikoff|editor3-first=Michele M. Missikoff|editor4-last=Camp|editor4-first=Olivier|editor5-last=Cordeiro|editor5-first=José|title=An Evaluation of the Challenges of Multilingualism in Data Warehouse Development|url=http://eprints.staffs.ac.uk/2770/|journal=Proceedings of the 18th International Conference on Enterprise Information Systems (ICEIS 2016)|publisher=SciTePress|volume=1|pages=196–206|conference=International Conference on Enterprise Information Systems, 25–28 April 2016, Rome, Italy|conferenceurl=https://eprints.staffs.ac.uk/2770/1/ICEIS_2016_Volume_1.pdf|doi=10.5220/0005858401960206|isbn=978-989-758-187-8}}
128. ^{{cite web|url=https://blog.rjmetrics.com/2014/12/04/10-common-mistakes-when-building-a-data-warehouse/|publisher=blog.rjmetrics.com|title=9 Reasons Data Warehouse Projects Fail|accessdate=2017-04-30}}
129. ^{{Citation | url = http://www.cs.ucdavis.edu/~green/papers/sigmod906t-huang.pdf | title = SIGMOD 2011 | contribution = Datalog and Emerging applications | last = Huang, Green, and Loo | format = PDF | publisher = UC Davis}}.
130. ^Steele, Katie and Stefánsson, H. Orri, "Decision Theory", The Stanford Encyclopedia of Philosophy (Winter 2015 Edition), Edward N. Zalta (ed.), URL =  
131. ^{{citation|last=Lloyd|first=J.W.|title=Practical Advantages of Declarative Programming}}
132. ^{{Cite web|url=https://deepai.org/machine-learning-glossary-and-terms/deductive-classifier|title=What is a Deductive Classifier?|last=|first=|date=|website=deepai.org|archive-url=|archive-date=|dead-url=|access-date=}}
133. ^Bengio, Y.; Courville, A.; Vincent, P. (2013). "Representation Learning: A Review and New Perspectives". IEEE Transactions on Pattern Analysis and Machine Intelligence. 35 (8): 1798–1828. arXiv:1206.5538. doi:10.1109/tpami.2013.50.
134. ^Schmidhuber, J. (2015). "Deep Learning in Neural Networks: An Overview". Neural Networks. 61: 85–117. arXiv:1404.7828. doi:10.1016/j.neunet.2014.09.003. {{PMID|25462637}}.
135. ^Bengio, Yoshua; LeCun, Yann; Hinton, Geoffrey (2015). "Deep Learning". Nature. 521 (7553): 436–444. Bibcode:2015Natur.521..436L. doi:10.1038/nature14539. {{PMID|26017442}}.
136. ^{{cite web |title=About Us {{!}} DeepMind |url=https://deepmind.com/about/ |website=DeepMind}}
137. ^{{cite web |title=A return to Paris {{!}} DeepMind |url=https://deepmind.com/blog/a-return-to-paris/ |website=DeepMind}}
138. ^{{cite web |title=The Last AI Breakthrough DeepMind Made Before Google Bought It |publisher= The Physics arXiv Blog |url= https://medium.com/the-physics-arxiv-blog/the-last-ai-breakthrough-deepmind-made-before-google-bought-it-for-400m-7952031ee5e1 |accessdate= 12 October 2014|date= 2014-01-29 }}
139. ^{{cite arXiv |eprint= 1410.5401 |title= Neural Turing Machines |newspaper= |last1= Graves |first1= Alex |last2= Wayne |first2= Greg |last3= Danihelka |first3= Ivo |class= cs.NE |year= 2014 }}
140. ^Best of 2014: Google's Secretive DeepMind Startup Unveils a "Neural Turing Machine", MIT Technology Review
141. ^{{Cite journal |authorlink= Alex Graves (computer scientist) |last=Graves |first=Alex |last2=Wayne |first2= Greg |last3=Reynolds |first3=Malcolm |last4= Harley |first4=Tim |last5=Danihelka |first5= Ivo |last6=Grabska-Barwińska |first6= Agnieszka |last7=Colmenarejo |first7=Sergio Gómez |last8= Grefenstette |first8=Edward |last9=Ramalho |first9=Tiago |date=12 October 2016 |title= Hybrid computing using a neural network with dynamic external memory |journal=Nature |language=en |volume=538 |issue=7626 |doi= 10.1038/nature20101 |issn= 1476-4687 |pages=471–476 |pmid= 27732574 |bibcode= 2016Natur.538..471G}}
142. ^{{Citation|last=Kohs|first=Greg|title=AlphaGo|date=29 September 2017|url=https://www.imdb.com/title/tt6700846/|others=Ioannis Antonoglou, Lucas Baker, Nick Bostrom|accessdate=9 January 2018}}
143. ^{{Cite arXiv|author-link1=David Silver (programmer)|first1=David|last1= Silver|first2=Thomas|last2= Hubert|first3= Julian|last3=Schrittwieser|first4= Ioannis|last4=Antonoglou |first5= Matthew|last5= Lai|first6= Arthur|last6= Guez|first7= Marc|last7= Lanctot|first8= Laurent|last8= Sifre|first9= Dharshan|last9= Kumaran|first10= Thore|last10= Graepel|first11= Timothy|last11= Lillicrap|first12= Karen|last12= Simonyan|first13=Demis |last13=Hassabis|author-link13=Demis Hassabis |eprint=1712.01815|title=Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm|class=cs.AI|date=5 December 2017}}
144. ^{{cite book |last=Sikos |first=Leslie F. |date=2017 |title=Description Logics in Multimedia Reasoning |url=https://www.springer.com/us/book/9783319540658 |location=Cham |publisher=Springer International Publishing |isbn=978-3-319-54066-5 |doi=10.1007/978-3-319-54066-5 }}
145. ^{{Cite journal | last1 = Roweis | first1 = S. T. | last2 = Saul | first2 = L. K. | title = Nonlinear Dimensionality Reduction by Locally Linear Embedding | doi = 10.1126/science.290.5500.2323 | journal = Science | volume = 290 | issue = 5500 | pages = 2323–2326 | year = 2000 | pmid = 11125150| pmc = | bibcode = 2000Sci...290.2323R | citeseerx = 10.1.1.111.3313 }}
146. ^{{Cite book | last1 = Pudil | first1 = P.| last2 = Novovičová | first2 = J.| editor1-first = Huan | editor1-last = Liu| editor2-first = Hiroshi | editor2-last = Motoda| doi = 10.1007/978-1-4615-5725-8_7 | chapter = Novel Methods for Feature Subset Selection with Respect to Problem Knowledge | title = Feature Extraction, Construction and Selection | pages = 101 | year = 1998 | isbn = 978-1-4613-7622-4 | pmid = | pmc = }}
147. ^Demazeau, Yves, and J-P. Müller, eds. Decentralized Ai. Vol. 2. Elsevier, 1990.
148. ^{{cite conference|url=https://books.google.com/books?id=GtcevX7n90wC&pg=PA158 |title=Hybrid algorithms with Instance-Based Classification|author=Hendrickx, Iris|author2=Van den Bosch, Antal|author-link2=Antal van den Bosch|date=October 2005|publisher=Springer|booktitle=Machine Learning: ECML2005|pages=158–169}}
149. ^{{cite news |author= JENNIFER 8. LEE |title= Roger Ebert Tests His Vocal Cords, and Comedic Delivery |publisher= The New York Times |quote= Now perhaps, there is the Ebert Test, a way to see if a synthesized voice can deliver humor with the timing to make an audience laugh.... He proposed the Ebert Test as a way to gauge the humanness of a synthesized voice. |date= March 7, 2011 |url= http://bits.blogs.nytimes.com/2011/03/07/roger-ebert-tests-his-vocal-cords-and-comedic-delivery/?src=me |accessdate= 2011-09-12}}
150. ^{{cite news |title= Roger Ebert’s Inspiring Digital Transformation |publisher= Tech News |quote= Meanwhile, the technology that enables Ebert to “speak” continues to see improvements – for example, adding more realistic inflection for question marks and exclamation points. In a test of that, which Ebert called the “Ebert test” for computerized voices, |date= March 5, 2011 |url= http://www.tips-tricks.co.in/2011/03/roger-eberts-inspiring-digital.html |accessdate= 2011-09-12}}
151. ^{{cite news |author= Adam Ostrow |title= Roger Ebert’s Inspiring Digital Transformation |publisher= Mashable Entertainment |quote= With the help of his wife, two colleagues and the Alex-equipped MacBook that he uses to generate his computerized voice, famed film critic Roger Ebert delivered the final talk at the TED conference on Friday in Long Beach, California.... |date= March 5, 2011 |url= http://mashable.com/2011/03/05/roger-ebert-ted-talk/ |accessdate= 2011-09-12}}
152. ^{{cite news |author= Alex_Pasternack |title= A MacBook May Have Given Roger Ebert His Voice, But An iPod Saved His Life (Video) |publisher= Motherboard |quote= He calls it the “Ebert Test,” after Turing’s AI standard...|date= Apr 18, 2011 |url= http://www.motherboard.tv/2011/4/18/a-macbook-may-have-given-roger-ebert-his-voice-but-an-ipod-saved-his-life-video|accessdate= 2011-09-12}}
153. ^Herbert Jaeger and Harald Haas. Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication. Science 2 April 2004: Vol. 304. no. 5667, pp. 78 – 80 {{doi|10.1126/science.1091277}} PDF
154. ^Herbert Jaeger (2007) Echo State Network. Scholarpedia.
155. ^{{cite article|last=Vikhar|first=P. A.|title=Evolutionary algorithms: A critical review and its future prospects|url= http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7955308&isnumber=7955253|journal=Proceedings of the 2016 International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC)|publisher= Jalgaon, 2016, pp. 261-265|isbn=978-1-5090-0467-6}}
156. ^{{cite book |last1=Russell |first1=Stuart |author1-link=Stuart J. Russell |last2=Norvig |first2=Peter |author2-link=Peter Norvig |date=2009 |title=A Modern Approach |url= |location= |publisher=Prentice Hall |page= |chapter=26.3: The Ethics and Risks of Developing Artificial Intelligence|isbn=978-0-13-604259-4}}
157. ^{{cite journal|first=Nick | last=Bostrom|author-link=Nick Bostrom|title=Existential risks|journal=Journal of Evolution and Technology|date=2002|issue=9.1|pages=1–31}}
158. ^{{cite news|title=Your Artificial Intelligence Cheat Sheet|url=http://www.slate.com/articles/technology/future_tense/2016/04/killer_a_i_101_a_cheat_sheet_to_the_terminology_the_ethical_debates_the.html|accessdate=16 May 2016|work=Slate|date=1 April 2016}}
159. ^{{Citation |title= Introduction To Expert Systems |year= 1998 |publisher= Addison Wesley |edition= 3 |last= Jackson |first= Peter |isbn= 978-0-201-87686-4 |page= 2}}
160. ^{{cite web |url=https://www.pcmag.com/encyclopedia_term/0,2542,t=conventional+programming&i=40325,00.asp |title=Conventional programming |publisher=Pcmag.com |access-date=2013-09-15}}
161. ^Martignon, Laura; Vitouch, Oliver; Takezawa, Masanori; Forster, Malcolm. [https://www.researchgate.net/publication/27278577_Naive_and_Yet_Enlightened_From_Natural_Frequencies_to_Fast_and_Frugal_Decision_Trees "Naive and Yet Enlightened: From Natural Frequencies to Fast and Frugal Decision Trees"], published in Thinking : Psychological perspectives on reasoning, judgement and decision making (David Hardman and Laura Macchi; editors), Chichester: John Wiley & Sons, 2003.
162. ^{{Cite web|url=https://deepai.org/machine-learning-glossary-and-terms/feature-extraction|title=What is Feature Extraction?|last=|first=|date=|website=deepai.org|archive-url=|archive-date=|dead-url=|access-date=}}
163. ^{{cite journal |author1=Y. Bengio |author2=A. Courville |author3=P. Vincent |title=Representation Learning: A Review and New Perspectives |journal=IEEE Trans. PAMI, special issue Learning Deep Architectures |year=2013|doi=10.1109/tpami.2013.50 |volume=35 |pages=1798–1828|arxiv=1206.5538 }}
164. ^Hodgson, Dr. J. P. E., "First Order Logic", Saint Joseph's University, Philadelphia, 1995.
165. ^Hughes, G. E., & Cresswell, M. J., A New Introduction to Modal Logic (London: Routledge, 1996), [https://books.google.cz/books?id=_CB5wiBeaA4C&pg=PA161#v=onepage&q&f=false p.161].
166. ^{{cite book|last=Feigenbaum|first=Edward|title=The Rise of the Expert Company|year=1988|publisher=Times Books|isbn=978-0-8129-1731-4|page=318}}
167. ^{{cite journal|last=Hayes|first=Patrick|title=The Frame Problem and Related Problems in Artificial Intelligence|journal=University of Edinburgh|url=http://aitopics.org/sites/default/files/classic/Webber-Nilsson-Readings/Rdgs-NW-Hayes-FrameProblem.pdf}}
168. ^Sardar, Z. (2010) The Namesake: Futures; futures studies; futurology; futuristic; Foresight -- What’s in a name? Futures, 42 (3), pp. 177–184.
169. ^{{cite book|last1=Pedrycz|first1=Witold|title=Fuzzy control and fuzzy systems|year=1993|publisher=Research Studies Press Ltd.|edition=2}}
170. ^{{cite book|last1=Hájek|first1=Petr|title=Metamathematics of fuzzy logic|year=1998|publisher=Springer Science & Business Media|edition=4}}
171. ^Myerson, Roger B. (1991). Game Theory: Analysis of Conflict, Harvard University Press, p. [https://books.google.com/books?id=E8WQFRCsNr0C&printsec=find&pg=PA1 1]. Chapter-preview links, pp. [https://books.google.com/books?id=E8WQFRCsNr0C&printsec=find&pg=PR7 vii–xi].
172. ^{{cite book|last=Trudeau|first=Richard J.|title=Introduction to Graph Theory|year=1993|publisher=Dover Pub.|location=New York|isbn=978-0-486-67870-2|pages=19|url=http://store.doverpublications.com/0486678709.html|edition=Corrected, enlarged republication.|accessdate=8 August 2012|quote=A graph is an object consisting of two sets called its vertex set and its edge set.}}
173. ^{{cite book|url=https://books.google.com/?id=mV3wxKLHlnwC&pg=PA381&lpg=PA381&dq=%22gdb%22+%22graph+database%22#v=onepage&q=%22gdb%22%20%22graph%20database%22&f=false|title=Artificial Intelligence and Automation|author=Nikolaos G. Bourbakis|accessdate=2018-04-20|publisher=World Scientific|page=381|isbn=9789810226374|year=1998}}
174. ^{{cite journal|last=Yoon|first=Byoung-Ha|last2=Kim|first2=Seon-Kyu|last3=Kim|first3=Seon-Young|date=March 2017|title=Use of Graph Database for the Integration of Heterogeneous Biological Data|journal=Genomics & Informatics|volume=15|issue=1|pages=19–27|doi=10.5808/GI.2017.15.1.19|issn=1598-866X|pmc=5389944|pmid=28416946}}
175. ^{{cite book |last=Pearl |first=Judea |title=Heuristics: intelligent search strategies for computer problem solving |accessdate=June 13, 2017 |year=1984 |publisher=Addison-Wesley Pub. Co., Inc., Reading, MA |location=United States |page=3|url=https://www.osti.gov/scitech/biblio/5127296}}
176. ^E. K. Burke, E. Hart, G. Kendall, J. Newall, P. Ross, and S. Schulenburg, Hyper-heuristics: An emerging direction in modern search technology, Handbook of Metaheuristics (F. Glover and G. Kochenberger, eds.), Kluwer, 2003, pp. 457–474.
177. ^P. Ross, Hyper-heuristics, Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques (E. K. Burke and G. Kendall, eds.), Springer, 2005, pp. 529-556.
178. ^E. Ozcan, B. Bilgin, E. E. Korkmaz, A Comprehensive Analysis of Hyper-heuristics, Intelligent Data Analysis, 12:1, pp. 3-23, 2008.
179. ^IEEE CIS Scope
180. ^{{Cite journal |doi = 10.1109/TIFS.2016.2569061|title = Discriminant Correlation Analysis: Real-Time Feature Level Fusion for Multimodal Biometric Recognition|journal = IEEE Transactions on Information Forensics and Security|volume = 11|issue = 9|pages = 1984–1996|year = 2016|last1 = Haghighat|first1 = Mohammad|last2 = Abdel-Mottaleb|first2 = Mohamed|last3 = Alhalabi|first3 = Wadee}}
181. ^{{cite web|url= https://engineering.purdue.edu/ManLab/control/intell_control.htm|title= Intelligent control}}
182. ^{{Cite journal|doi=10.1080/02763869.2018.1404391|pmid = 29327988|title = Alexa, Siri, Cortana, and More: An Introduction to Voice Assistants|journal = Medical Reference Services Quarterly|volume = 37|issue = 1|pages = 81–88|year = 2018|last1 = Hoy|first1 = Matthew B.}}
183. ^{{Cite web|url=http://ai.stanford.edu/~paskin/gm-short-course/lec3.pdf|title=A Short Course on Graphical Models|last=Paskin|first=Mark|date=|website=Stanford|archive-url=|archive-date=|dead-url=|access-date=}}
{{Software engineering}}{{Computer science}}{{Evolutionary computation}}{{Emerging technologies}}{{Robotics}}{{Glossaries of science and engineering}}

Categories: Glossaries of science | Artificial intelligence | Machine learning | Wikipedia glossaries
