Feature integration theory
Feature integration theory is a theory of attention developed in 1980 by Anne Treisman and Garry Gelade. It proposes that when a stimulus is perceived, its features are "registered early, automatically, and in parallel, while objects are identified separately" at a later stage of processing. The theory has been one of the most influential psychological models of human visual attention.

Stages

According to Treisman, the first stage of feature integration theory is the preattentive stage. During this stage, different parts of the brain automatically gather information about the basic features (color, shape, movement) present in the visual field. The idea that features are registered separately and automatically may seem counterintuitive, but we are not aware of this process because it occurs early in perceptual processing, before we become conscious of the object.

The second stage is the focused attention stage, in which the individual features of an object are combined so that the whole object is perceived. Combining the features of an object requires attention, and selection of that object occurs within a "master map" of locations. The master map contains every location at which features have been detected, and each location in it has access to the multiple feature maps. When attention is focused on a particular location in the map, the features currently at that position are attended to and stored in "object files". If the object is familiar, associations are made between the object and prior knowledge, resulting in identification of the object (a schematic sketch of these two stages is given at the end of this section). In support of this stage, researchers often point to patients with Bálint's syndrome. Because of damage to the parietal lobe, these patients are unable to focus attention on individual objects; given a stimulus that requires combining features, they cannot sustain attention long enough to bind the features, which supports this stage of the theory.

Treisman distinguishes between two kinds of visual search task, "feature search" and "conjunction search". Feature searches can be performed quickly and preattentively for targets defined by a single feature, such as color, shape, perceived direction of lighting, movement, or orientation; such targets should "pop out" during search and should be able to form illusory conjunctions. Conjunction searches, by contrast, involve targets defined by a combination of two or more features and proceed serially; they are much slower than feature searches and require conscious attention and effort. Across multiple experiments, some referenced in this article, Treisman concluded that color, orientation, and intensity are features for which feature searches can be performed.

As a reaction to feature integration theory, Wolfe (1994) proposed the Guided Search Model 2.0. According to this model, attention is directed to an object or location through a preattentive process that operates in both a bottom-up and a top-down way. Information acquired through bottom-up and top-down processing is ranked according to priority, and this priority ranking guides visual search and makes it more efficient, as illustrated in the second sketch below. Whether the Guided Search Model 2.0 or feature integration theory is the more accurate account of visual search remains a debated question.
Experiments

To test the claim that attention plays a vital role in visual perception, Treisman and Schmidt (1982) designed an experiment to show that features may exist independently of one another early in processing. Participants were shown a display of four objects flanked by two black numbers. The display was flashed for one-fifth of a second and followed by a random-dot masking field that eliminated "any residual perception that might remain after the stimuli were turned off".[1] Participants were asked to report the black numbers and then the objects they had seen at each of the four locations. The results supported Treisman and Schmidt's hypothesis: in 18% of trials, participants reported seeing shapes "made up of a combination of features from two different stimuli",[2] even when the stimuli differed greatly. This is referred to as an illusory conjunction.

Illusory conjunctions occur in a variety of situations. For example, you may identify a passing person as wearing a red shirt and a yellow hat and, moments later, remember them as wearing a yellow shirt and a red hat. Feature integration theory explains illusory conjunctions: because features exist independently of one another during early processing and are not yet associated with a specific object, they can easily be combined incorrectly, both in the laboratory and in everyday situations[3] (a toy simulation of such binding errors appears after the Reading section below).

As mentioned above, patients with Bálint's syndrome have provided support for feature integration theory. In particular, research participant R.M., a Bálint's syndrome patient who was unable to focus attention on individual objects, experienced illusory conjunctions when presented with simple stimuli such as a "blue O" or a "red T": in 23% of trials, even when able to view the stimulus for as long as 10 seconds, R.M. reported seeing a "red O" or a "blue T".[4] This finding is consistent with feature integration theory's prediction of how someone lacking focused attention would erroneously combine features.

If people use prior knowledge or experience to perceive an object, they are less likely to make such mistakes. To examine this, Treisman and Souther (1986) presented participants with three shapes under conditions in which illusory conjunctions could occur. When participants were told they were being shown a carrot, a lake, and a tire (in place of the orange triangle, blue oval, and black circle, respectively), illusory conjunctions did not occur.[5] Treisman maintained that prior knowledge plays an important role in proper perception: bottom-up processing is normally used to identify novel objects, but once prior knowledge is recalled, top-down processing takes over. This explains why people are better at identifying familiar objects than unfamiliar ones.

Reading

When letters are identified during reading, not only their shapes are picked up but also other features, such as their colors and surrounding elements. Individual letters are processed serially when they are spatially conjoined with other letters. The locations of a letter's features are not known in advance, even while the letter is in front of the reader, so feature interchanges can occur if the reader is not attentively focused.
This is known as lateral masking, which in this case refers to the difficulty of separating a letter from its background.[6]
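The binding errors described in the Experiments and Reading sections above (colors or letter features migrating between items when attention cannot bind them) can be illustrated with a toy simulation. The display, the per-item slip probability, and the report function below are invented for the example and are not taken from Treisman and Schmidt's procedure; the sketch only shows that sampling features from independent maps, rather than from bound object files, produces illusory conjunctions on a sizable fraction of trials.

```python
import random

def report(display, attention_available, p_slip=0.3):
    """Report (color, shape) for each item; without attention, colors may migrate."""
    if attention_available:
        return list(display)                 # correctly bound object files
    colors = [color for color, _ in display]
    reported = []
    for color, shape in display:
        if random.random() < p_slip:         # bind a color sampled from another item
            color = random.choice(colors)
        reported.append((color, shape))
    return reported

display = [("red", "T"), ("blue", "O"), ("green", "S"), ("yellow", "X")]
trials = 10_000
errors_unattended = sum(report(display, False) != display for _ in range(trials))
errors_attended = sum(report(display, True) != display for _ in range(trials))
print(f"illusory conjunctions without focused attention: {errors_unattended / trials:.0%}")
print(f"illusory conjunctions with focused attention: {errors_attended / trials:.0%}")
```

With attention the report always matches the display, while without it a substantial share of trials contain at least one recombined color and shape, which is the qualitative pattern the 18% result above reflects.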
Notes
1. Goldstein, E. Bruce. Cognitive Psychology, p. 105.
2. Goldstein, E. Bruce. Cognitive Psychology, p. 105.
3. Treisman, A.; Gelade, G. (1980). "A feature-integration theory of attention." Cognitive Psychology, 12(1), 97–136.
4. Friedman-Hill et al., 1995; Robertson et al., 1997.
5. Treisman, A.; Souther, J. (1986). "Illusory words: The roles of attention and of top-down constraints in conjoining letters to form words." Journal of Experimental Psychology: Human Perception and Performance, 12(1), 3–17.
6. Treisman, A.; Gelade, G. (1980). "A feature-integration theory of attention." Cognitive Psychology, 12(1), 97–136.