Abbreviation | FLI
---|---
Formation | March 2014
Founders | Max Tegmark, Jaan Tallinn, Viktoriya Krakovna, Meia Chita-Tegmark, Anthony Aguirre
Type | Non-profit research institute
Tax ID no. | 47-1052538
Legal status | Active
Purpose | Reduction of existential risk, particularly from advanced artificial intelligence
Location | Cambridge, Massachusetts, U.S.
Coordinates | 42°22′25″N 71°06′35″W
President | Max Tegmark
Website | futureoflife.org
The Future of Life Institute (FLI) is a nonprofit organization that works to reduce global catastrophic and existential risks facing humanity, particularly existential risk from advanced artificial intelligence (AI). The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions. Its founders include MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn, and its advisors include entrepreneur Elon Musk.
FLI's mission is to reduce global catastrophic and existential risk from powerful technologies.[1] The Institute focuses particularly on the potential risks to humanity from the development of human-level or superintelligent artificial general intelligence (AGI), but also works on risks from biotechnology, nuclear weapons and climate change.[2] Its work comprises grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government and European Union institutions.[3]
The Institute was founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, DeepMind research scientist Viktoriya Krakovna, Tufts University postdoctoral scholar Meia Chita-Tegmark, and UCSC physicist Anthony Aguirre. The Institute's advisors include computer scientists Stuart J. Russell and Francesca Rossi, biologist George Church, cosmologist Saul Perlmutter, astrophysicist Sandra Faber, theoretical physicist Frank Wilczek, entrepreneur Elon Musk, and actors and science communicators Alan Alda and Morgan Freeman (as well as cosmologist Stephen Hawking prior to his death in 2018).[4][5][6]
Since 2017, FLI has offered an annual Future of Life Award; the first awardee was Vasili Arkhipov. Also in 2017, FLI released Slaughterbots, a short arms-control advocacy film, followed by a sequel in 2021.[7]
In 2018, FLI drafted a letter calling for "laws against lethal autonomous weapons". Signatories included Elon Musk, Demis Hassabis, Shane Legg, and Mustafa Suleyman.[8]
In March 2023, FLI drafted a letter calling on major AI developers to agree on a verifiable six-month pause of any systems "more powerful than GPT-4" and to use that time to institute a framework for ensuring safety; or, failing that, for governments to step in with a moratorium. The letter said: "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control".[9] The letter referred to the possibility of "a profound change in the history of life on Earth" as well as potential risks of AI-generated propaganda, loss of jobs, human obsolescence, and society-wide loss of control.[10][11]
Prominent signatories included Elon Musk, Steve Wozniak, Evan Sharp, Chris Larsen, and Gary Marcus; AI lab CEOs Connor Leahy and Emad Mostaque; politician Andrew Yang; deep-learning pioneer Yoshua Bengio; and intellectual Yuval Noah Harari.[12] Marcus stated "the letter isn't perfect, but the spirit is right." Mostaque stated "I don't think a six month pause is the best idea or agree with everything but there are some interesting things in that letter." In contrast, Bengio explicitly endorsed the six-month pause in a press conference.[13][14] Musk stated that "Leading AGI developers will not heed this warning, but at least it was said."[15] Some signatories, including Musk, were motivated by fears of existential risk from artificial general intelligence.[16] Some of the other signatories, such as Marcus, instead signed out of concern about mundane risks such as AI-generated propaganda.[17]
The letter cited the influential paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜"[18] whose authors include Emily M. Bender, Timnit Gebru, and Margaret Mitchell, all of whom criticised the letter.[19] Mitchell said: “By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI. Ignoring active harms right now is a privilege that some of us don’t have.”[19]
In 2014, the Future of Life Institute held its opening event at MIT: a panel discussion on "The Future of Technology: Benefits and Risks", moderated by Alan Alda.[20][21] The panelists were synthetic biologist George Church, geneticist Ting Wu, economist Andrew McAfee, physicist and Nobel laureate Frank Wilczek and Skype co-founder Jaan Tallinn.[22][23]
Since 2015, FLI has organised biennial conferences that bring together leading AI builders from academia and industry. To date, the following conferences have taken place:
The FLI research program started in 2015 with an initial donation of $10 million from Elon Musk.[32][33][34] Unlike typical AI research, this program is focused on making AI safer or more beneficial to society, rather than just more powerful.[35] In this initial round, a total of $7 million was awarded to 37 research projects.[36] In July 2021, FLI announced that it would launch a new $25 million grant program with funding from the Russian–Canadian programmer Vitalik Buterin.[37]