Future of Humanity Institute

  1. History

  2. Existential risk

  3. Anthropic reasoning

  4. Human enhancement and rationality

  5. Selected publications

  6. See also

  7. References

  8. External links

{{Confused|Future of Life Institute}}{{Infobox organization
|name = Future of Humanity Institute
|image = FHI-Logo.png
|alt = Future of Humanity Institute logo
|caption = Future of Humanity Institute logo
|formation = {{start date and age|2005}}
|headquarters = Oxford, England
|leader_title = Director
|purpose = Research big-picture questions about humanity and its prospects
|leader_name = Nick Bostrom
|parent_organization = Faculty of Philosophy, University of Oxford
|homepage = {{URL|http://www.fhi.ox.ac.uk|fhi.ox.ac.uk}}
}}

The Future of Humanity Institute (FHI) is an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School.[1] Its director is philosopher Nick Bostrom, and its research staff and associates include futurist Anders Sandberg, engineer K. Eric Drexler, economist Robin Hanson, and Giving What We Can founder Toby Ord.[2]

The Institute shares an office and works closely with the Centre for Effective Altruism, and its stated objective is to focus research where it can make the greatest positive difference for humanity in the long term.[3][4] It engages in a mix of academic and outreach activities, seeking to promote informed discussion and public engagement in government, businesses, universities, and other organizations.

History

Nick Bostrom established the Institute in November 2005 as part of the Oxford Martin School, then known as the James Martin 21st Century School.[1] Between 2008 and 2010, FHI hosted the Global Catastrophic Risks conference and published 22 academic journal articles and 34 chapters in academic volumes. FHI researchers have been mentioned over 5,000 times in the media[5] and have given policy advice at the World Economic Forum, to the private and non-profit sectors (such as the MacArthur Foundation and the World Health Organization), and to governmental bodies in Sweden, Singapore, Belgium, the United Kingdom, and the United States. Bostrom and bioethicist Julian Savulescu also published the book Human Enhancement in March 2009.[6] Most recently, FHI has focused on the dangers of advanced artificial intelligence (AI). In 2014, its researchers published several books on AI risk, including Stuart Armstrong's Smarter Than Us and Bostrom's Superintelligence: Paths, Dangers, Strategies.[7][8]

Existential risk

{{main article|Existential risk}}

The topic FHI has spent the most time exploring is global catastrophic risk, and in particular existential risk. In a 2002 paper, Bostrom defined an "existential risk" as one "where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential".[9] This includes scenarios in which humanity is not directly harmed but fails to colonize space and make use of the observable universe's available resources in humanly valuable projects, as discussed in Bostrom's 2003 paper, "Astronomical Waste: The Opportunity Cost of Delayed Technological Development".[10]

Bostrom and Milan Ćirković's 2008 book Global Catastrophic Risks collects essays on a variety of such risks, both natural and anthropogenic. Possible catastrophic risks from nature include super-volcanism, impact events, and energetic astronomical events such as gamma-ray bursts, cosmic rays, solar flares, and supernovae. These dangers are characterized as relatively small and relatively well understood, though pandemics may be an exception, being both more common and more closely intertwined with technological trends.[11][4]

FHI gives more attention to synthetic pandemics caused by weaponized biological agents. Technological outcomes the Institute is particularly interested in include anthropogenic climate change, nuclear warfare and nuclear terrorism, molecular nanotechnology, and artificial general intelligence. In expecting the largest risks to humanity to stem from future technologies, and from advanced artificial intelligence in particular, FHI agrees with other existential risk reduction organizations, such as the Centre for the Study of Existential Risk and the Machine Intelligence Research Institute.[12][13] FHI researchers have also studied the impact of technological progress on social and institutional risks, such as totalitarianism, automation-driven unemployment, and information hazards.[14]

Anthropic reasoning

{{main article|Anthropic principle}}

FHI devotes much of its attention to exotic threats that have been little explored by other organizations, and to methodological considerations that inform existential risk reduction and forecasting. The Institute has particularly emphasized anthropic reasoning in its research, as an under-explored area with general epistemological implications.

Anthropic arguments FHI has studied include the doomsday argument, which claims that humanity is likely to go extinct soon because it is unlikely that one is observing a point in human history that is extremely early; instead, present-day humans are likely to be near the middle of the distribution of humans that will ever live.[11] Bostrom has also popularized the simulation argument, which suggests that if humanity is likely to avoid existential risks, then it is not unlikely that humanity and the world around us are a simulation.[15]
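The quantitative core of the doomsday argument can be stated compactly. The following is a minimal sketch of one version, the Gott-style formulation, under the assumption that one's birth rank r among all N humans who will ever live is uniformly distributed; the numerical figures are illustrative rather than taken from FHI's publications:

<math>P\!\left(\frac{r}{N} > 0.05\right) = 0.95 \quad\Longrightarrow\quad P\!\left(N < 20r\right) = 0.95</math>

Taking r to be roughly 60 billion, a commonly cited estimate of the number of humans born to date, this bounds the total number of humans who will ever live at about 1.2 trillion, with 95% confidence.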

A recurring theme in FHI's research is the Fermi paradox, the surprising absence of observable alien civilizations. Robin Hanson has argued that, to account for the paradox, there must be a "Great Filter" preventing widespread space colonization. That filter may lie in the past, if intelligence is much rarer than current biology would predict, or it may lie in the future, if existential risks are even larger than is currently recognized.

Human enhancement and rationality

Closely linked to FHI's work on risk assessment, astronomical waste, and the dangers of future technologies is its work on the promise and risks of human enhancement. The modifications in question may be biological, digital, or sociological, and an emphasis is placed on the most radical hypothesized changes, rather than on the likeliest short-term innovations. FHI's bioethics research focuses on the potential consequences of gene therapy, life extension, brain implants and brain–computer interfaces, and mind uploading.[16]

FHI has also focused on methods for assessing and enhancing human intelligence and rationality, as a way of shaping the speed and direction of technological and social progress. Its work on human irrationality, as exemplified in cognitive heuristics and biases, includes an ongoing collaboration with the insurer Amlin to study the systemic risk arising from biases in modelling.[17][18]

Selected publications

  • Nick Bostrom: [https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111 Superintelligence: Paths, Dangers, Strategies] {{ISBN|978-0-19-967811-2}}
  • [https://www.amazon.com/Global-Catastrophic-Risks-Nick-Bostrom/dp/0199606501/ref=sr_1_1?s=books&ie=UTF8&qid=1325524767&sr=1-1 Nick Bostrom and Milan Ćirković: Global Catastrophic Risks] {{ISBN|978-0-19-857050-9}}
  • Anders Sandberg and Nick Bostrom: Whole Brain Emulation: A Roadmap
  • [https://www.amazon.com/Human-Enhancement-Julian-Savulescu/dp/0199299722/ref=sr_1_6?ie=UTF8&s=books&qid=1233825563&sr=8-6 Nick Bostrom and Julian Savulescu: Human Enhancement] {{ISBN|0-19-929972-2}}
  • Nick Bostrom: [https://www.amazon.com/Anthropic-Bias-Observation-Selection-Philosophy/dp/0415883946/ Anthropic Bias: Observation Selection Effects in Science and Philosophy] {{ISBN|0-415-93858-9}}

See also

{{div col|colwidth=30em}}
  • Future of Life Institute
  • Global catastrophic risks
  • Human enhancement
  • Leverhulme Centre for the Future of Intelligence
  • Nick Bostrom
  • Anders Sandberg
  • K. Eric Drexler
  • Robin Hanson
  • Toby Ord
  • Effective altruism
  • Paths, Dangers, Strategies
{{div col end}}

References

1. ^{{cite web |url=http://www.oxfordmartin.ox.ac.uk/institutes/future_humanity |title=Humanity's Future: Future of Humanity Institute |author= |date= |website=Oxford Martin School |publisher= |accessdate=28 March 2014}}
2. ^{{cite web |url=http://www.fhi.ox.ac.uk/about/staff/ |title=Staff |author= |date= |website=Future of Humanity Institute |publisher= |accessdate=28 March 2014}}
3. ^{{cite web |url=http://www.fhi.ox.ac.uk/about/mission/ |title=About FHI |author= |date= |website=Future of Humanity Institute |publisher= |accessdate=28 March 2014}}
4. ^{{cite web |url=http://aeon.co/magazine/world-views/ross-andersen-human-extinction |title=Omens |author=Ross Andersen |date=25 February 2013 |website=Aeon Magazine |publisher= |accessdate=28 March 2014}}
5. ^{{cite news |last= |first= |date= |title=Google News |url=https://www.google.com/search?hl=en&gl=uk&tbm=nws&authuser=0&q=%22nick+bostrom%22+OR+%22anders+sandberg%22+OR+%22Toby+ord%22+OR+%22eric+drexler%22&oq=%22nick+bostrom%22+OR+%22anders+sandberg%22+OR+%22Toby+ord%22+OR+%22eric+drexler%22&gs_l=news-cc.3..43j43i53.1575.17127.0.17432.83.23.6.53.0.0.158.1361.20j3.23.0...0.0...1ac.1.cmZ5I2Aaf_s&gws_rd=ssl#hl=en&gl=uk&authuser=0&tbm=nws&q=%22nick+bostrom%22+OR+%22anders+sandberg%22+OR+%22Toby+ord%22+OR+%22eric+drexler%22 |newspaper=Google News |location= |publisher= |accessdate=30 March 2015 }}
6. ^{{cite report |author=Nick Bostrom |date=18 July 2007 |archivedate=21 December 2012 |title=Achievements Report: 2008-2010 |url=http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0019/19900/Achievements_Report_2008-2010.pdf |archiveurl=https://web.archive.org/web/20121221144029/http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0019/19900/Achievements_Report_2008-2010.pdf |publisher=Future of Humanity Institute |page= |docket= |accessdate=31 March 2014 |quote= }}
7. ^{{cite news |author=Mark Piesing |date=17 May 2012 |title=AI uprising: humans will be outsourced, not obliterated |url=https://www.wired.co.uk/news/archive/2012-05/17/the-dangers-of-an-ai-smarter-than-us |newspaper=Wired |location= |publisher= |accessdate=31 March 2014 }}
8. ^{{cite news |last=Coughlan |first=Sean |date=24 April 2013 |title=How are humans going to become extinct? |url=https://www.bbc.com/news/business-22002530 |newspaper=BBC News |location= |publisher= |accessdate=29 March 2014 }}
9. ^{{cite journal |author=Nick Bostrom |date=March 2002 |title=Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards |url=http://www.jetpress.org/volume9/risks.html |journal=Journal of Evolution and Technology |volume=9 |accessdate=31 March 2014}}
10. ^{{cite journal |author=Nick Bostrom |date=November 2003 |title=Astronomical Waste: The Opportunity Cost of Delayed Technological Development |url=http://www.nickbostrom.com/astronomical/waste.html |journal=Utilitas |publisher= |volume=15 |issue=3 |pages=308–314 |doi= 10.1017/s0953820800004076|accessdate=31 March 2014|citeseerx=10.1.1.429.2849 }}
11. ^{{cite news |author=Ross Andersen |date=6 March 2012 |title=We're Underestimating the Risk of Human Extinction |url=https://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/ |newspaper=The Atlantic |location= |publisher= |accessdate=29 March 2014 }}
12. ^{{cite news |author=Kate Whitehead |date=16 March 2014 |title=Cambridge University study centre focuses on risks that could annihilate mankind |url=http://www.scmp.com/lifestyle/technology/article/1449541/cambridge-university-study-centre-focuses-risks-could |newspaper=South China Morning Post |location= |publisher= |accessdate=29 March 2014 }}
13. ^{{cite news |author=Jenny Hollander |date=September 2012 |title=Oxford Future of Humanity Institute knows what will make us extinct |url=http://www.bustle.com/articles/4682-oxford-future-of-humanity-institute-knows-what-will-make-us-extinct |newspaper=Bustle |location= |publisher= |accessdate=31 March 2014 }}
14. ^{{cite web |url=http://www.nickbostrom.com/information-hazards.pdf |title=Information Hazards: A Typology of Potential Harms from Knowledge |author=Nick Bostrom |date= |website=Future of Humanity Institute |publisher= |accessdate=31 March 2014}}
15. ^{{cite news |author=John Tierney |date=13 August 2007 |title=Even if Life Is a Computer Simulation... |url=http://tierneylab.blogs.nytimes.com/2007/08/13/even-if-life-is-but-a-computer-simulation/?_php=true&_type=blogs&_r=0 |newspaper=The New York Times |location= |publisher= |accessdate=31 March 2014 }}
16. ^{{cite web |url=http://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf |title=Whole Brain Emulation: A Roadmap |author=Anders Sandberg and Nick Bostrom |date= |website=Future of Humanity Institute |publisher= |accessdate=31 March 2014}}
17. ^{{cite press release |author= |title=Amlin and Oxford University launch major research project into the Systemic Risk of Modelling |url=http://www.amlin.com/media/press-releases/2014/11-2-2014.aspx |location= |publisher=Amlin |agency= |date=11 February 2014 |accessdate=2014-03-31}}
18. ^{{cite news |author= |title=Amlin and Oxford University to collaborate on modelling risk study |url=http://www.cirmagazine.com/cir/Amlin-and-Oxford-University-to-collaborate-on-modelling-risk-study.php |newspaper= |location= |work=Continuity, Insurance & Risk Magazine |date=11 February 2014 |accessdate=31 March 2014 }}

External links

  • {{Official website|http://www.fhi.ox.ac.uk/|FHI official website}}
{{Future of Humanity Institute|state=expanded}}{{Effective altruism|state=expanded}}{{LessWrong}}{{Existential risk from artificial intelligence}}{{Molecular nanotechnology footer}}

[[Category:Departments of the University of Oxford]]
[[Category:Futurology]]
[[Category:Research institutes in Oxford]]
[[Category:Research institutes established in 2005]]
[[Category:2005 establishments in England]]
[[Category:Transhumanist organizations]]
[[Category:Existential risk organizations]]
