Dual process theory (moral psychology)

Contents

  1. Core commitments of the dual process theory
       Camera analogy
  2. Scientific evidence
       Neuroimaging
       Brain lesions
       Reaction times
  3. Evolutionary rationale
  4. Scientific criticisms
  5. Alleged ethical implications
       Greene's "direct route" to ethical significance
       Greene's "indirect route" to ethical significance
  6. Philosophical criticisms
       Berker's criticisms
       Three bad arguments
       The argument from morally irrelevant factors
  7. Moral enhancement
  8. References

Dual process theory is an influential theory of human moral judgment that posits that human beings possess two distinct cognitive subsystems that compete in moral reasoning processes: one fast, intuitive and emotionally driven, the other slow, deliberative and less dependent on emotion. Initially proposed by Joshua Greene along with Brian Sommerville, Leigh Nystrom, John Darley, Jonathan David Cohen and others,[1][2][3] the theory can be seen as a domain-specific example of more general dual process accounts in psychology. Greene has often emphasized the philosophical (and specifically ethical) implications of the theory,[4][5][6] and it has received extensive discussion in ethics.[7][8][9]

Core commitments of the dual process theory

The dual process account asserts that human beings have two separate methods for moral reasoning. The theory makes use of recent scientific findings about the workings of the brain to develop criteria for assessing our intuitions and moral judgments. If these inner workings can be revealed, then we may have less confidence in our ethical judgments.

The first method with which we make decisions involves fast, intuitive processing. These responses are implicit and the factors affecting them may be consciously inaccessible.[10] The second method refers to conscious, controlled reasoning processes. This method is less influenced by the immediate emotional aspects of decision making and instead focuses on maximizing gain or a particular conception of the good. In everyday decision making, most decisions use one or the other of these systems.

Greene hypothesizes that we respond to "personal" and "impersonal" moral dilemmas in different ways. The roots of differing responses lie in our different emotional responses.[11] "Heat of the moment" emotional reactions influence our responses to "personal" moral dilemmas but not "impersonal" moral dilemmas.

As Greene puts it:

"Characteristically deontological judgments are preferentially supported by automatic emotional responses, while characteristically consequentialist judgments are preferentially supported by conscious reasoning and allied processes of cognitive control."[6]

This theory of moral judgment has influenced research in moral psychology. The original fMRI investigation[1] proposing the dual process account has been cited by more than 2,000 scholarly articles, generating extensive use of similar methodology as well as criticism.

Camera analogy

Greene compares our dual-process brains to a digital SLR camera that operates in two complementary modes: automatic and manual.[12] The automatic settings are highly efficient but not very flexible, while the manual settings are the opposite.[12] He claims that the human brain has a similar general design. Our brains are wired with a variety of automatic settings, intuitions that guide our behaviours, most of them emotional, of which we may or may not be consciously aware.[12] We rely on these automatic settings most of the time. On the other hand, our brains also have a manual mode, which specialises in enabling behaviours that serve longer-term goals. The operations of this system are usually conscious, and often experienced as effortful.[12] This mode of thinking requires using explicit rules and thinking explicitly about how the world works.[12]

Nevertheless, he also highlights three ways in which this analogy could be misleading. First, while a camera must be in either automatic or manual mode, the human brain's automatic settings are always on. Second, the dual settings in our brains are asymmetrically dependent, whereas a camera's dual modes can function independently of each other. Third, the automatic settings of our brains need not be "innate" or "hard-wired"; they can also be acquired or modified through cultural learning.[12]

Scientific evidence

Neuroimaging

Greene uses fMRI to evaluate the brain activities and responses of people confronted with different variants of the famous trolley problem in ethics.

Two versions of the trolley problem, the trolley driver dilemma and the footbridge dilemma, are presented as follows.

Trolley Driver Dilemma: “You are at the wheel of a runaway trolley quickly approaching a fork in the tracks. On the tracks extending to the left is a group of five railway workmen. On the tracks extending to the right is a single railway workman. If you do nothing the trolley will proceed to the left, causing the deaths of the five workmen. The only way to avoid the deaths of these workmen is to hit a switch on your dashboard that will cause the trolley to proceed to the right, causing the death of the single workman. Is it appropriate for you to hit the switch in order to avoid the deaths of the five workmen?”[8] (Most people judge that it is appropriate to hit the switch in this case.)

Footbridge Dilemma: “A runaway trolley is heading down the tracks toward five workmen who will be killed if the trolley proceeds on its present course. You are on a footbridge over the tracks, in between the approaching trolley and the five workmen. Next to you on this footbridge is a stranger who happens to be very large. The only way to save the lives of the five workmen is to push this stranger off the bridge and onto the tracks below where his large body will stop the trolley. The stranger will die if you do this, but the five workmen will be saved. Is it appropriate for you to push the stranger onto the tracks in order to save the five workmen?”[8] (Most people judge that it is not appropriate to push the stranger onto the tracks.)

Greene and his colleagues want to know why people find it appropriate to hit the switch in the trolley driver case but not appropriate to push the stranger in the footbridge case, so they investigate the brain activity and responses of people facing those cases. First, they distinguish two types of moral judgments: characteristically consequentialist judgments and characteristically deontological judgments. Characteristically consequentialist judgments are those most naturally justified by consequentialist principles, such as the judgment that it is appropriate to hit the switch; characteristically deontological judgments are those most naturally justified by deontological principles, such as the judgment that it is not appropriate to push the stranger.

The fMRI evidence for the dual process account showed that moral dilemmas following the logic of "trolleyology" engaged areas of the brain corresponding to emotional processing when the context involved "personal" moral violations (such as the use of direct bodily force). When the context of the dilemma was more "impersonal" (the decision maker pulls a switch rather than using bodily force), areas corresponding to working memory and controlled reasoning were engaged instead.[1] This gives rise to what Greene calls the Central Tension Problem: characteristically deontological judgments are often associated with intuitive-emotional reasoning (system 1), while characteristically consequentialist judgments are often associated with conscious reasoning and cognitive control (system 2). These two processes compete with each other when people make moral judgments in the context of the trolley problem.

Greene points to a large body of evidence from cognitive science suggesting that the inclination towards deontological or consequentialist judgment depends on whether emotional-intuitive reactions or more calculated ones are involved in the judgment-making process.[13] For example, encouraging deliberation or removing time pressure leads to an increase in consequentialist responses, whereas performing a distracting secondary task, such as solving a math problem, interferes with consequentialist responding.[14] When asked to explain or justify their responses, subjects preferentially chose consequentialist principles, even for explaining characteristically deontological responses. Further evidence shows that consequentialist responses to trolley-problem-like dilemmas are associated with deficits in emotional awareness in people with alexithymia or psychopathic tendencies.[14] On the other hand, subjects primed to be more emotional or empathetic give more characteristically deontological answers.

In addition, Greene's results show that some brain areas, such as the medial prefrontal cortex, the posterior cingulate/precuneus, the posterior superior temporal sulcus/inferior parietal lobe, and the amygdala, are associated with emotional processes. Subjects exhibited increased activity in these brain regions when presented with situations involving the use of personal force (e.g. the 'footbridge' case). The dorsolateral prefrontal cortex and the parietal lobe are 'cognitive' brain regions; subjects show increased activity in these two regions when presented with impersonal moral dilemmas.[15]

Brain lesions

Neuropsychological evidence from lesion studies focusing on patients with damage to the ventromedial prefrontal cortex also points to a possible dissociation between emotional and rational decision processes. Damage to this area is typically associated with antisocial personality traits and impairments of moral decision making.[16] Patients with these lesions tend to endorse the "utilitarian" path in trolley problem dilemmas more frequently.[17] Greene et al. claim that this shows that when emotional information is removed, either through the framing of the dilemma or through damage to the brain regions necessary to process such information, the process associated with rational, controlled reasoning dominates decision making.[18]

A popular medical case, studied in particular by neuroscientist Antonio Damasio,[19] was that of the American railroad worker Phineas Gage. On 13 September 1848, while working on a railway track in Vermont, he was involved in an accident: an "iron rod used to cram down the explosive powder shot into Gage’s cheek, went through the front of his brain, and exited via the top of his head".[20] Surprisingly, not only did Gage survive, but he also returned to his normal life in less than two months.[19] Although his physical capacities were restored, his personality and character changed radically. He became vulgar and anti-social: "Where he had once been responsible and self-controlled, now he was impulsive, capricious, and unreliable".[20] Damasio wrote: "Gage was no longer Gage."[19] Moreover, his moral intuitions were transformed. Further studies by means of neuroimaging showed a correlation between such "moral" and character transformations and injuries to the ventromedial prefrontal cortex.[21]

In his book Descartes' Error, commenting on the Phineas Gage case, Damasio said that after the accident the railroad worker was able "To know, but not to feel."[19] As explained by David Edmonds, Joshua Greene thought that this could explain the difference in moral intuitions across different versions of the trolley problem: "We feel that we shouldn’t push the fat man. But we think it better to save five rather than one life. And the feeling and the thought are distinct."[20]

Reaction times

Another critical piece of evidence supporting the dual process account comes from reaction time data associated with moral dilemma experiments. Subjects who choose the "utilitarian" path in moral dilemmas showed increased reaction times under high cognitive load in "personal" dilemmas, while those choosing the "deontological" path remained unaffected.[22] Cognitive load in general is also found to increase the likelihood of "deontological" judgment.[23] These laboratory findings are supplemented by work that looks at the decision-making processes of real-world altruists in life-or-death situations.[24] These heroes overwhelmingly described their actions as fast and intuitive, and virtually never as carefully reasoned.

Evolutionary rationale

The dual process theory is often given an evolutionary rationale (in this basic sense, the theory is an example of evolutionary psychology).

In pre-Darwinian thinking, such as Hume's ‘A Treatise of Human Nature’, we find speculation that morality derives from natural phenomena common to all humans. For instance, Hume mentions the "common or natural cause of our passions" and the generation of love for others expressed through self-sacrifice for the greater good of the group. Hume's work is sometimes cited as an inspiration for contemporary dual process theories.[25]

Darwin's evolutionary theory gives a better descriptive account of how such moral norms could have derived from evolutionary processes and natural selection.[25] For example, selective pressures favour self-sacrifice for the benefit of the group and punish those who do not sacrifice. This provides a better explanation of the cost-benefit calculation behind the generation of love for others originally mentioned by Hume.

Another example of an evolutionarily derived norm is justice, which is born out of the ability to detect those who cheat. Peter Singer explains that the instinct of reciprocity improved fitness for survival, and therefore those who did not reciprocate were considered cheaters and cast off from the group.[25]

Peter Singer agrees with Greene that consequentialist judgements are to be favoured over deontological judgements. According to him, moral constructivism searches for reasonable grounds, whereas deontological judgements rely on hasty and emotional responses. Singer argues that our most immediate moral intuitions should be challenged, and that a normative ethic must not be evaluated by the extent to which it matches those moral intuitions. He gives the example of a brother and sister who secretly decide to have sex with each other using contraceptive measures. Our first intuitive reaction is a firm condemnation of incest as morally wrong. However, a consequentialist judgement reaches another conclusion: since the brother and sister did not tell anyone and used contraceptive measures, the incest did not have harmful consequences, and in that case it is not necessarily wrong.[25]

Singer relies on evolutionary theory to justify his claim. For most of our evolutionary history, human beings have lived in small groups where violence was ubiquitous. Deontological judgements linked to emotional and intuitive responses developed as human beings were confronted with personal and close interactions with others. In the past century, our social organization has been altered and this type of interaction has become much less frequent. Therefore, we should rely on more sophisticated consequentialist judgements that fit better with modern conditions, rather than on deontological judgements that were useful for more rudimentary interactions.[25]

Scientific criticisms

Several scientific criticisms have been leveled against the dual process account. One asserts that the dual emotional/rational model ignores the motivational aspect of decision making in human social contexts.[26][27] A more specific example of this criticism focuses on the ventromedial prefrontal cortex lesion data. Although patients with this damage display characteristically "cold-blooded" behavior in the trolley problem, they are more likely to endorse emotionally laden choices in the Ultimatum Game.[28] It is argued that moral decisions are better understood as integrating emotional, rational, and motivational information, the last of which has been shown to involve areas of the brain in the limbic system and brain stem.[29]

Other criticisms focus on the methodology of using moral dilemmas such as the trolley problem. These criticisms note the lack of affective realism in contrived moral dilemmas and their tendency to use the actions of strangers as a window onto human moral sentiments. Paul Bloom, in particular, has argued that a multitude of attitudes towards the agents involved are important in evaluating an individual's moral stance, as well as evaluating the motivations that may inform those decisions.[30]

Berker has raised three methodological worries about Greene's empirical findings.[8] First, neural activity associated with emotional processes is not exclusively correlated with deontological judgements but is also found in consequentialist judgements; thus one can argue that all moral judgements seem to involve emotional processing. Second, Greene's response-time prediction has not been borne out once one considers that Greene's study involved "easy" cases that should not be classified as dilemmas: because of the way some cases were framed, people found one of the choices obviously inappropriate. Third, Greene's criteria for classifying moral dilemmas as personal or impersonal do not map onto the distinction between deontological and consequentialist moral judgements. The "Lazy Susan Case" serves as a counter-example, showing that intuitive consequentialist answers can involve personal force.

Notwithstanding the above, the latter criticism has been addressed by [https://static1.squarespace.com/static/54763f79e4b0c4e55ffb000c/t/54cb945ae4b001aedee69e81/1422627930781/notes-on-berker.pdf Greene].

Alleged ethical implications

Greene ties the two processes to existing theories in moral philosophy, specifically consequentialism and deontological ethics.[31] He argues that the tension between systems of ethics that focus on "right action" and those that focus on "best results" can be explained by the existence of the proposed dueling systems in individual human minds. In particular, ethical decisions that fall under 'right action' correspond to system 1 processing, whereas 'best results' correspond to system 2. This poses a problem for deontological moral theory, which can be seen as offering 'post hoc' rationalisations for our emotional responses. Greene argues that our emotional responses are sensitive to morally irrelevant factors, such as personal force. For example, our intuitive moral judgements in trolley cases depend on whether the use of personal force is required: we find it easier to endorse bringing about the same outcome by flicking a switch than by deliberately pushing a helpless victim, because the latter engages a stronger emotional response to the victim. Greene proposes that this vindicates consequentialism, and he rejects deontology as a moral framework because it relies on morally irrelevant intuitions.

Greene's "direct route" to ethical significance

Greene first argues that scientific findings can help us reach interesting normative conclusions without crossing the is/ought gap. For example, he considers the normative statement "capital juries make good judgements". Scientific findings could lead us to revise this judgement if it were found that capital juries were in fact sensitive to race, provided we accept the uncontroversial normative premise that capital juries ought not to be sensitive to race.[6]

Greene then states that the evidence for dual-process theory might give us reason to question judgements based upon moral intuitions, in cases where those intuitions might be based upon morally irrelevant factors. He gives the example of incestuous siblings. Intuition might tell us that this is morally wrong, but Greene suggests that this intuition is the result of incest historically being evolutionarily disadvantageous. However, if the siblings take extreme precautions, such as a vasectomy, to avoid the risk of genetic abnormalities in their offspring, the cause of the moral intuition is no longer relevant. In such cases, scientific findings give us reason to ignore some of our moral intuitions, and in turn to revise the moral judgements based upon those intuitions.[6]

Greene's "indirect route" to ethical significance

Greene is not making the claim that moral judgements based on emotion are categorically bad. His position is that the different “settings” are appropriate for different scenarios.

With regard to automatic settings, Greene says we should rely on them only when faced with a moral problem that is sufficiently “familiar” to us. Familiarity, on Greene's conception, can arise from three sources: evolutionary history, culture, and personal experience. Fear of snakes, for instance, can plausibly be traced to genetic dispositions, whereas a reluctance to place one's hand on a stove is caused by previous experience of burning one's hand on a hot stove.[12]

The appropriateness of applying our intuitive and automatic mode of reasoning to a given moral problem thus hinges on how the process was formed in the first place. Shaped by trial-and-error experience, automatic settings will only function well when one has sufficient experience of the situation at hand.

In light of these considerations, Greene formulates the "No Cognitive Miracles Principle":[12]

"When we are dealing with unfamiliar* moral problems, we ought to rely less on automatic settings (automatic emotional responses) and more on manual mode (conscious, controlled reasoning), lest we bank on cognitive miracles."

Philosophical criticisms

Thomas Nagel has argued that Joshua Greene, in his book Moral Tribes, is too quick to conclude utilitarianism specifically from the general goal of constructing an impartial morality; for example, he says, Kant and Rawls offer other impartial approaches to ethical questions.[32]

Robert Wright has called[33] Joshua Greene's proposal for global harmony ambitious and adds, "I like ambition!" But he also claims that people have a tendency to see facts in a way that serves their ingroup, even if there's no disagreement about the underlying moral principles that govern the disputes. "If indeed we’re wired for tribalism," Wright explains, "then maybe much of the problem has less to do with differing moral visions than with the simple fact that my tribe is my tribe and your tribe is your tribe. Both Greene and Paul Bloom cite studies in which people were randomly divided into two groups and immediately favored members of their own group in allocating resources -- even when they knew the assignment was random." Instead, Wright proposes that "nourishing the seeds of enlightenment indigenous to the world’s tribes is a better bet than trying to convert all the tribes to utilitarianism -- both more likely to succeed, and more effective if it does."

Berker's criticisms

In a widely cited critique of Greene's work and the philosophical implications of the dual process theory, Harvard philosophy professor [https://scholar.harvard.edu/sberker/biocv Selim Berker] critically analyzed four possible arguments for Greene's and Singer's conclusion.[8] He dismisses three of them as merely rhetorical "bad arguments", and calls the last one "the argument from morally irrelevant factors". According to Berker, all of them are fallacious.

Three bad arguments

The first is the “Emotions Bad, Reasoning Good” argument. On this view, our deontological intuitions are driven by emotions while consequentialist intuitions involve abstract reasoning; therefore deontological intuitions lack normative force, whereas consequentialist intuitions have it. Berker claims that this is question begging for two reasons. The first is that the claim that emotionally driven intuitions are less reliable than those guided by reason is not supported by any substantive further motivation, given that “there is a venerable tradition that sees emotions as an important way of discerning normative truths”.[8] The second is that the argument seems to rely on the assumption that deontological intuitions involve only emotional processes whereas consequentialist intuitions involve only abstract reasoning. Berker points out that there is an empirical problem with this assumption, as Greene's own research[34] shows that consequentialist responses to personal moral dilemmas involve at least one brain region, the posterior cingulate, that is associated with emotional processes. Hence, he argues, it would be hard to justify the claim that deontological judgements are less reliable than consequentialist judgements by appealing to the role of emotions, and this line of argument would amount to mere name-calling.

The second bad argument presented by Berker is “The Argument from Heuristics”, an improved version of the previous one. In support of the claim that deontological intuitions are unreliable because they are emotionally driven, it is argued that, just as in other domains, emotional processes in the moral domain tend to rely on fast heuristics and are therefore unreliable. According to Berker this line of thought is also flawed: in moral reasoning, unlike in other domains, it is highly debated what the right and wrong answers to moral questions are, so assuming that the emotional processes involved in moral judgement are mere heuristics is question begging. Berker also challenges the very assumption that heuristics lead to unreliable judgements, and argues that in any case, as far as we know, consequentialist judgements too may rely on heuristics, given that it is highly unlikely that they rest on an accurate and comprehensive mental calculation of all possible outcomes. The argument is thus based on implausible assumptions.[8]

The third argument is “The Argument from Evolutionary History”. It draws on the idea that our different moral responses to personal and impersonal harms have an evolutionary basis. Because personal violence has existed since ancient times, humans developed emotional responses as innate alarm systems in order to adapt to, handle, and promptly respond to such violence within their groups. Cases of impersonal violence, by contrast, do not trigger the same innate alarm and therefore leave room for a more accurate and analytical judgement of the situation. Thus, according to this argument, emotion-based deontological intuitions, unlike consequentialist intuitions, are side effects of this evolutionary adaptation to a pre-existing environment; therefore “deontological intuitions, unlike consequentialist intuitions, do not have any normative force".[8] Berker states that this conclusion is unwarranted because there is no reason to think that consequentialist intuitions are not also by-products of evolution.[8] Moreover, he argues that the invitation, advanced by Singer,[25] to separate evolution-based moral judgements (allegedly unreliable) from those based on reason is misleading because it rests on a false dichotomy.

The argument from morally irrelevant factors

Berker argued that the most promising argument from neural "is" to moral "ought" is the following.[8]

“P1. The emotional processing that gives rise to deontological intuitions responds to factors that make a dilemma personal rather than impersonal.

P2. The factors that make a dilemma personal rather than impersonal are morally irrelevant.

C1. So, the emotional processing that gives rise to deontological intuitions responds to factors that are morally irrelevant.

C2. So, deontological intuitions, unlike consequentialist intuitions, do not have any genuine normative force.”

Berker criticises both premises and the move from C1 to C2. Regarding P1, Berker is not convinced that deontological judgments are correctly characterized as merely appealing to factors that make the dilemma personal. Regarding P2, he argues that the factors that make a dilemma personal rather than impersonal are not necessarily morally irrelevant. Finally, Berker concludes that even if the personal/impersonal distinction is indeed morally irrelevant, this does not show that deontological judgements lack genuine normative force; otherwise, the same could be said of consequentialist judgements.

Moral enhancement

[https://www.philosophy.ox.ac.uk/people/tom-douglas Thomas Douglas] defines moral enhancement as follows: "A person morally enhances herself if she alters herself in a way that may reasonably be expected to result in her having morally better future motives, taken in sum, than she would otherwise have had".[35] Douglas argues that moral enhancement is not always morally impermissible, in opposition to the Bioconservative Thesis, which states that "Even if it were technically possible and legally permissible for people to engage in biomedical enhancement, it would not be morally permissible for them to do so".[36] Douglas argues that the Bioconservative Thesis is predominantly based on social considerations: it holds that enhancement may be good only for the enhanced individual, not for others (i.e. the rest of society). For example, an enhanced individual may be more intelligent than the average human and could therefore acquire multiple jobs in the market, which in turn diminishes job opportunities for other people. Nevertheless, Douglas argues that morally enhancing a person would not harm others, and thus that the Bioconservative Thesis is false.

Returning to his definition of moral enhancement, Douglas defines motives as "psychological - mental or neural - states or processes that will, given the absence of opposing motives, cause a person to act".[35] On this account, a morally enhanced person is not necessarily moral, does not necessarily have a more moral character, and will not necessarily act more morally than her earlier, un-enhanced self.[35] Douglas argues that moral enhancement should alter certain emotions "which interfere with putative good motives (moral emotions, reasoning processes, and combinations thereof) and/or which are themselves uncontroversially bad motives".[35] For example, altering a strong aversion to a certain racial group would be a way of morally enhancing a person: one could agree that such an aversion is uncontroversially a bad motive, and so interfering with it would be for the best. The moral enhancement may be achieved by biological means (e.g. a pill) or non-biological means (e.g. self-education). Douglas argues that moral enhancement technologies will come into use within the "medium term" (i.e. within centuries).

Douglas sketches a scenario to show how moral enhancement can be morally permissible.[35] The scenario rests on the following assumptions. Suppose that Smith can undergo some biomedical intervention (e.g. taking a pill) that will give him better motives. Before taking the pill, Smith has worse motives than he would have after taking it. The pill will alter only some of Smith's emotions and will have no side effects, and Smith takes it voluntarily, without any coercion. Douglas argues that under these circumstances it would be morally permissible for Smith to morally enhance himself. He argues, first, that on a consequentialist view it would be morally permissible for Smith to take the pill, as doing so would expectably bring about good consequences. Second, on a non-consequentialist view, moral enhancement has some intrinsic property which gives Smith reason to perform it (such as the property of being an act of self-improvement). Douglas then considers objections to his claim.

One set of objections Douglas mentions is what he calls 'objectionable motives'.[35] This objection calls into question Smith's reason for enhancing himself: it argues that Smith's best possible motive for enhancement may not be good enough. A proponent of this objection is Michael Sandel. In line with Sandel, the argument would hold that Smith's reason for enhancing himself reflects an insufficient acceptance of 'the given'.[35] Douglas rejects this claim by arguing that, in the example above, Smith has no reason to accept his bad motives or to reject interference with his good motives; the more appropriate attitude in this case is one of non-acceptance and a desire for self-change. Furthermore, opponents may argue that the enhancement restricts Smith's freedom: on this view, Smith will have less freedom to have and to act upon bad motives. Freedom, on this view, consists not merely in the absence of external constraints but also of internal ones, for it is only Smith's internal characteristics that would be altered by his moral enhancement.[35] On this view, "the self is divided into two parts- the true self, and a brute self that is external to the true self".[35] The enhancement would alter the brute self in such a way that it constrains the true self, thus restricting Smith's freedom.[35] Douglas responds by arguing that if the self is divided into two parts, the enhancement would alter only the brute self: it would essentially be restricting the brain's emotion-generating mechanisms, and it would be strange to think of such subconscious mechanisms as one's true self. Rather, the enhancement would increase the freedom of Smith's true self while diminishing the brute self, so that Smith would gain more freedom to have and act upon good motives.

References

1. ^Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105–2108.
2. ^Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44(2), 389–400.
3. ^Greene, J. D. (2017). The rat-a-gorical imperative: Moral intuition and the limits of affective learning. Cognition, 167, 66–77. doi:10.1016/j.cognition.2017.03.004
4. ^Greene, J. (2003). From neural 'is' to moral 'ought': what are the moral implications of neuroscientific moral psychology? Nature Reviews Neuroscience, 4(10), 846–850. doi:10.1038/nrn1224
5. ^Greene, J. D. (2008). The secret joke of Kant's soul. In W. Sinnott-Armstrong (Ed.), Moral Psychology: The Neuroscience of Morality (pp. 35–79). Cambridge, MA: MIT Press.
6. ^Greene, J. D. (2014). Beyond point-and-shoot morality: Why cognitive (neuro)science matters for ethics. Ethics, 124(4), 695–726. doi:10.1086/675875
7. ^Railton, P. (2014). The affective dog and its rational tale: Intuition and attunement. Ethics, 124(4), 813–859. doi:10.1086/675876
8. ^Berker, S. (2009). The normative insignificance of neuroscience. Philosophy & Public Affairs, 37(4), 293–329. doi:10.1111/j.1088-4963.2009.01164.x
9. ^Bruni, T., Mameli, M., & Rini, R. A. (2013). The science of morality and its normative implications. Neuroethics, 7(2), 159–172. doi:10.1007/s12152-013-9191-y
10. ^Cushman, F., Young, L., & Hauser, M. (2006). The role of conscious reasoning and intuition in moral judgment: Testing three principles of harm. Psychological Science, 17(12), 1082–1089.
11. ^Singer, P. (2005). Ethics and intuitions. The Journal of Ethics, 9(3–4), 331–352. doi:10.1007/s10892-005-3508-y
12. ^Greene, J. D. (2014). Beyond point-and-shoot morality: Why cognitive (neuro)science matters for ethics. Ethics, 124(4), 695–726. doi:10.1086/675875
13. ^Greene, J. D. (2014). Beyond point-and-shoot morality: Why cognitive (neuro)science matters for ethics. Ethics, 124(4), 701–704.
14. ^Greene, J. D. (2015). Beyond point-and-shoot morality: Why cognitive (neuro)science matters for ethics. The Law & Ethics of Human Rights, 9(2). doi:10.1515/lehr-2015-0011
15. ^Greene, J. D. (2014). Beyond point-and-shoot morality: Why cognitive (neuro)science matters for ethics. Ethics, 124(4), 701.
16. ^Boes, A. D., et al. (2011). Behavioral effects of congenital ventromedial prefrontal cortex malformation. BMC Neurology, 11(151).
17. ^Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., & Damasio, A. (2007). Damage to prefrontal cortex increases utilitarian moral judgments. Nature, 446(7138), 908–911.
18. ^Greene, J. D. (2007). Why are VMPFC patients more utilitarian? A dual-process theory of moral judgment explains. Trends in Cognitive Sciences, 11(8), 322–323; author reply 323–324.
19. ^Damasio, A. (1994). Descartes' Error: Emotion, Reason, and the Human Brain. New York: Grosset/Putnam.
20. ^Edmonds, D. (2014). Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us about Right and Wrong. Princeton, NJ: Princeton University Press, pp. 137–139.
21. ^Singer, P. (2005). Ethics and intuitions. The Journal of Ethics, 9, 331–352.
22. ^Greene, J. D., Morelli, S. A., Lowenberg, K., Nystrom, L. E., & Cohen, J. D. (2008). Cognitive load selectively interferes with utilitarian moral judgment. Cognition, 107(3), 1144–1154.
23. ^Trémolière, B., De Neys, W., & Bonnefon, J.-F. (2012). Mortality salience and morality: Thinking about death makes people less utilitarian. Cognition, 124(3), 379–384.
24. ^Rand, D. G., & Epstein, Z. G. (2014). Risking your life without a second thought: Intuitive decision-making and extreme altruism. PLoS ONE, 9(10), e109687.
25. ^Singer, P. (2005). Ethics and intuitions. The Journal of Ethics, 9(3–4), 331–352. doi:10.1007/s10892-005-3508-y
26. ^Moll, J., De Oliveira-Souza, R., & Zahn, R. (2008). The neural basis of moral cognition: sentiments, concepts, and values. Annals of the New York Academy of Sciences, 1124, 161–180.
27. ^Sun, R. (2012). Moral judgement, human motivation, and neural networks. Cognitive Computation.
28. ^Koenigs, M., & Tranel, D. (2007). Irrational economic decision-making after ventromedial prefrontal damage: Evidence from the Ultimatum Game. The Journal of Neuroscience, 27(4), 951–956.
29. ^Moll, J., & de Oliveira-Souza, R. (2007). Response to Greene: Moral sentiments and reason: friends or foes? Trends in Cognitive Sciences, 2(3–4), 336–352.
30. ^Bloom, P. (2011). Family, community, trolley problems, and the crisis in moral psychology. The Yale Review, 99(2), 26–43.
31. ^Greene, J. D. (2008). The secret joke of Kant's soul. In W. Sinnott-Armstrong (Ed.), Moral Psychology, Volume 3 (pp. 35–80). Cambridge, MA: MIT Press.
32. ^Nagel, T. You Can't Learn About Morality from Brain Scans: The problem with moral psychology. New Republic. https://newrepublic.com/article/115279/joshua-greenes-moral-tribes-reviewed-thomas-nagel. Retrieved 24 November 2013.
33. ^Wright, R. (2013, October 23). Why Can't We All Just Get Along? The Uncertain Biological Basis of Morality. The Atlantic. https://www.theatlantic.com/magazine/archive/2013/11/why-we-fightand-can-we-stop/309525/. Retrieved 24 November 2013.
34. ^Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44(2), 389–400. doi:10.1016/j.neuron.2004.09.027
35. ^Douglas, T. (2008). Moral enhancement. Journal of Applied Philosophy, 25(3), 228–245. doi:10.1111/j.1468-5930.2008.00412.x
36. ^Sandel, M. J. (2009). The Case Against Perfection: Ethics in the Age of Genetic Engineering. Cambridge, MA: Belknap Press of Harvard University Press.

Categories: Moral psychology | Psychological theories
