Future of Life Institute

  1. Background

  2. Events

  3. Global research program

  4. In the media

  5. See also

  6. References

  7. External links

{{Confused|Future of Humanity Institute}}{{coord|42.3736158|-71.1097335|display=title}}{{Infobox organization
| name = Future of Life Institute
| native_name =
| native_name_lang =
| named_after =
| image =
| image_size =
| alt =
| caption =
| logo = Future of Life Institute logo.svg
| logo_size =
| logo_alt =
| logo_caption =
| map =
| map_size =
| map_alt =
| map_caption =
| map2 =
| map2_size =
| map2_alt =
| map2_caption =
| abbreviation =
| motto =
| predecessor =
| merged =
| successor =
| formation = {{start date and age|2014|03}}
| founders = {{ubl|Jaan Tallinn|Max Tegmark|Viktoriya Krakovna|Anthony Aguirre|Meia Chita-Tegmark}}
| founding_location =
| extinction =
| merger =
| type =
| tax_id = 47-1052538
| registration_id =
| status = Active
| purpose = Mitigation of existential risk
| headquarters =
| location = Cambridge, Massachusetts, U.S.
| coords =
| region =
| services =
| products =
| methods =
| fields =
| membership =
| membership_year =
| language =
| owner =
| sec_gen =
| leader_title =
| leader_name =
| leader_title2 =
| leader_name2 =
| leader_title3 =
| leader_name3 =
| leader_title4 =
| leader_name4 =
| board_of_directors =
| key_people =
| main_organ =
| parent_organization =
| subsidiaries =
| secessions =
| affiliations =
| budget =
| budget_year =
| revenue =
| revenue_year =
| disbursements =
| expenses =
| expenses_year =
| endowment =
| staff =
| staff_year =
| volunteers =
| volunteers_year =
| slogan =
| website = {{URL|futureoflife.org|FutureOfLife.org}}
| remarks =
| formerly =
| footnotes =
}}

The Future of Life Institute (FLI) is a volunteer-run research and outreach organization in the Boston area that works to mitigate existential risks facing humanity, particularly existential risk from advanced artificial intelligence (AI). Its founders include MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn, and its board of advisors includes entrepreneur Elon Musk and, prior to his death in 2018, cosmologist Stephen Hawking.

Background

The FLI mission is to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course in light of new technologies and challenges.[1][2] FLI is particularly focused on the potential risks to humanity from the development of human-level artificial intelligence.[3]

The Institute was founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, Harvard graduate student and IMO medalist Viktoriya Krakovna, BU graduate student Meia Chita-Tegmark (Tegmark's wife), and UCSC physicist Anthony Aguirre. As of 2017, the Institute's 14-person Scientific Advisory Board comprises 13 men and one woman, and includes computer scientist Stuart J. Russell, biologist George Church, cosmologists Stephen Hawking and Saul Perlmutter, theoretical physicist Frank Wilczek, entrepreneur Elon Musk, and actors and science communicators Alan Alda and Morgan Freeman.[4][5][6]

FLI operates in a grassroots style, recruiting volunteers and younger scholars from the local community in the Boston area.[3]

Events

On May 24, 2014, FLI held a panel discussion on "The Future of Technology: Benefits and Risks" at MIT, moderated by Alan Alda.[3][7][8] The panelists were synthetic biologist George Church, geneticist Ting Wu, economist Andrew McAfee, physicist and Nobel laureate Frank Wilczek and Skype co-founder Jaan Tallinn.[9][10] The discussion covered a broad range of topics from the future of bioengineering and personal genetics to autonomous weapons, AI ethics and the Singularity.[3][11][12]

From January 2 to January 5, 2015, the Future of Life Institute organized and hosted "The Future of AI: Opportunities and Challenges" conference, which brought together the world's leading AI builders from academia and industry to engage with each other and with experts in economics, law, and ethics. The goal was to identify promising research directions that could help maximize the future benefits of AI.[13] At the conference, the Institute circulated an open letter on AI safety, which was subsequently signed by Stephen Hawking, Elon Musk, and many artificial intelligence experts.[14]

In January 2017, FLI organized a private gathering of what The New York Times called "heavy hitters of A.I." (including Yann LeCun, Elon Musk, and Nick Bostrom) to discuss concerns that A.I. research could lead to dangerous results.[15]

Global research program

On January 15, 2015, the Future of Life Institute announced that Elon Musk had donated $10 million to fund a global AI research endeavor.[16][17][18] On January 22, 2015, the FLI released a request for proposals from researchers in academic and other non-profit institutions.[19] Unlike typical AI research, this program is focused on making AI safer or more beneficial to society, rather than just more powerful.[20] On July 1, 2015, a total of $7 million was awarded to 37 research projects.[21]

In the media

  • "United States and Allies Protest U.N. Talks to Ban Nuclear Weapons" in The New York Times[22]
  • "Is Artificial Intelligence a Threat?" in The Chronicle of Higher Education, including interviews with FLI founders Max Tegmark, Jaan Tallinn and Viktoriya Krakovna.[3]
  • "But What Would the End of Humanity Mean for Me?", an interview with Max Tegmark on the ideas behind FLI in The Atlantic.[4]
  • "Transcending Complacency on Superintelligent Machines", an op-ed in the Huffington Post by Max Tegmark, Stephen Hawking, Frank Wilczek and Stuart J. Russell on the movie Transcendence.[1]
  • "Top 23 One-liners From a Panel Discussion That Gave Me a Crazy Idea" in Diana Crow Science.[11]
  • "An Open Letter to Everyone Tricked into Fearing Artificial Intelligence", includes "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter" by the FLI [23]
  • {{cite web | title=Startup branding doesn’t hide apocalyptic undertones of letter signed by Elon Musk | date=15 January 2015 | url=http://upstart.bizjournals.com/entrepreneurs/hot-shots/2015/01/15/startup-branding-doesn-t-hide-apocalyptic.html | author=Michael del Castillo | work=Upstart Business Journal }}
  • {{cite web | title=Ex Machina movie asks: is AI research in safe hands? | date=21 January 2015 | url=http://eandt.theiet.org/magazine/2015/02/ex-machina-ai-robots.cfm | author=Edd Gent | work=Engineering & Technology }}
  • "Creating Artificial Intelligence" on PBS[24]

See also

  • Future of Humanity Institute
  • Centre for the Study of Existential Risk
  • Global catastrophic risk
  • Leverhulme Centre for the Future of Intelligence
  • Machine Intelligence Research Institute
  • Vasili Arkhipov, "the man who saved the world"

References

1. ^{{cite web | url=http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html | title=Transcending Complacency on Superintelligent Machines | publisher=Huffington Post | date= 19 April 2014 | accessdate = 26 June 2014}}
2. ^{{cite web | url=http://cser.org/fli/ | title=CSER News: 'A new existential risk reduction organisation has launched in Cambridge, Massachusetts' | publisher=Centre for the Study of Existential Risk | date = 31 May 2014 | accessdate = 19 June 2014}}
3. ^{{cite web | url=http://chronicle.com/article/Is-Artificial-Intelligence-a/148763/ | title = Is Artificial Intelligence a Threat? | publisher = Chronicle of Higher Education | accessdate = 18 Sep 2014}}
4. ^{{cite web |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |publisher=The Atlantic | date = 9 May 2014 | accessdate = 11 June 2014}}
5. ^{{cite web | url=http://futureoflife.org/who | title = Who we are | publisher= Future of Life Institute | accessdate = 11 June 2014}}
6. ^{{cite web | url=http://www.salon.com/2014/10/05/our_science_fiction_apocalypse_meet_the_scientists_trying_to_predict_the_end_of_the_world/ |title = Our science-fiction apocalypse: Meet the scientists trying to predict the end of the world | publisher = Salon | accessdate = 8 Oct 2014}}
7. ^{{cite web | url=http://www.futureoflife.org/events | title = Events | publisher= Future of Life Institute | accessdate = 11 June 2014}}
8. ^{{cite web | url=http://intelligence.org/2014/06/01/miris-june-2014-newsletter/ | title= Machine Intelligence Research Institute - June 2014 Newsletter | accessdate = 19 June 2014}}
9. ^{{cite web | url=http://www.fhi.ox.ac.uk/fli-mit/ | title = FHI News: 'Future of Life Institute hosts opening event at MIT' | publisher=Future of Humanity Institute | date = 20 May 2014 | accessdate = 19 June 2014}}
10. ^{{cite web | url=http://www.pged.org/event/the-future-of-technology-benefits-and-risks/ | title= The Future of Technology: Benefits and Risks | publisher= Personal Genetics Education Project | date = 9 May 2014 | accessdate = 19 June 2014}}
11. ^{{cite web | url=http://dianacrowscience.com/fsi-risk-benefits-top-23/ | title = Top 23 One-liners From a Panel Discussion That Gave Me a Crazy Idea | publisher=Diana Crow Science | accessdate = 11 June 2014 | date = 29 May 2014}}
12. ^{{cite web | url=http://techtv.mit.edu/videos/29155-the-future-of-technology-benefits-and-risks | title = The Future of Technology: Benefits and Risks | publisher = MIT Tech TV | date=24 May 2014 | accessdate = 11 June 2014}}
13. ^{{cite web | url=http://futureoflife.org/misc/ai_conference |title=The Future of AI: Opportunities and Challenges |publisher=Future of Life Institute| accessdate = 19 January 2015}}
14. ^{{cite web|title=Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter|url=http://futureoflife.org/misc/open_letter|publisher=Future of Life Institute}}
15. ^{{Cite web |url=https://www.nytimes.com/2018/06/09/technology/elon-musk-mark-zuckerberg-artificial-intelligence.html |title=Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots |last=Metz |first=Cade |work=The New York Times |quote=The private gathering at the Asilomar Hotel was organized by the Future of Life Institute, a think tank built to discuss the existential risks of A.I. and other technologies. |date=June 9, 2018 |accessdate=June 10, 2018}}
16. ^{{cite web|url=http://futureoflife.org/misc/AI |title=Elon Musk donates $10M to keep AI beneficial |publisher=Future of Life Institute |date=15 January 2015 |deadurl=yes |archiveurl=https://web.archive.org/web/20150424194538/http://futureoflife.org/misc/AI |archivedate=2015-04-24 |df= }}
17. ^{{cite web|url=http://www.slashgear.com/elon-musk-donates-10m-to-artificial-intelligence-research-15364795/| title=Elon Musk donates $10M to Artificial Intelligence research|publisher=SlashGear|date=15 January 2015}}
18. ^{{cite web | url=http://www.fastcompany.com/3041007/fast-feed/elon-musk-is-donating-10m-of-his-own-money-to-artificial-intelligence-research | title= Elon Musk is Donating $10M of his own Money to Artificial Intelligence Research | publisher= Fast Company | date = 15 January 2015}}
19. ^{{cite web|url=http://futureoflife.org/grants-timeline|title=An International Request for Proposals - Timeline|publisher=Future of Life Institute|date=22 January 2015}}
20. ^{{cite web|url=http://futureoflife.org/grants-rfp|title=2015 INTERNATIONAL GRANTS COMPETITION|publisher=Future of Life Institute}}
21. ^{{cite web|url=http://futureoflife.org/AI/2015selection |title=New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial |publisher=Future of Life Institute |deadurl=yes |archiveurl=https://web.archive.org/web/20150717025809/http://futureoflife.org/AI/2015selection |archivedate=2015-07-17 |df= }}
22. ^{{cite web|url=https://www.nytimes.com/2017/03/27/world/americas/un-nuclear-weapons-talks.html |title=United States and Allies Protest U.N. Talks to Ban Nuclear Weapons |work=New York Times |author1=Somini Sengupta |author2=Rick Gladstone |date=March 27, 2017}}
23. ^{{cite web | url=http://www.popsci.com/open-letter-everyone-tricked-fearing-ai | title= An Open Letter to Everyone Tricked into Fearing Artificial Intelligence | publisher= Popular Science | date = 14 January 2015 | accessdate = 19 January 2015}}
24. ^{{cite web|url=https://www.pbs.org/wnet/religionandethics/2015/04/17/april-17-2015-creating-artificial-intelligence/25770/|title=Creating Artificial Intelligence|publisher=PBS|date=17 April 2015}}

External links

  • {{Official website|http://futureoflife.org}}
  • The Future of Technology: Benefits and Risks - MIT Tech TV
{{Existential risk from artificial intelligence}}

Categories: Futurology | 2014 establishments in Massachusetts | Research institutes established in 2014 | Artificial intelligence associations | Transhumanist organizations | Existential risk organizations | Artificial Intelligence existential risk
