OpenAI

From Wikipedia, the free encyclopedia
Revision as of 07:28, 5 February 2016

OpenAI
Founded: December 11, 2015
Founders: Elon Musk, Sam Altman, and others
Type: 501(c)(3) nonprofit organization[1]
Endowment: US$1 billion pledged (2015)
Website: www.openai.com

[Image: Business magnate Elon Musk, co-chair of OpenAI]

OpenAI is a non-profit artificial intelligence (AI) research company, associated with business magnate Elon Musk, that aims to promote and develop open-source, friendly AI in a way that benefits, rather than harms, humanity as a whole. The organization aims to "freely collaborate" with other institutions[3] and researchers by making its patents and research open to the public.[4] The company is supported by over US$1 billion in commitments, though only a small fraction of that pledge is expected to be spent in the first few years.[5] Many of its employees and board members are motivated by concerns about existential risk from advanced artificial intelligence.

Motives

Some scientists, such as Stephen Hawking and Stuart Russell, believe that if advanced AI someday gains the ability to re-design itself at an ever-increasing rate, an unstoppable "intelligence explosion" could lead to human extinction. Business magnate Elon Musk characterizes AI as humanity's biggest existential threat. OpenAI's founders structured it as a non-profit free of financial stockholder obligations, so that they could focus its research on creating a positive long-term human impact.[5]

Musk poses the question: "what is the best thing we can do to ensure the future is good? We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity." Musk acknowledges that "there is always some risk that in actually trying to advance (friendly) AI we may create the thing we are concerned about";[6] nonetheless, the best defense is "to empower as many people as possible to have AI. If everyone has AI powers, then there's not any one person or a small set of individuals who can have AI superpower."[6]

OpenAI states that "it's hard to fathom how much human-level AI could benefit society," and that it's equally difficult to comprehend "how much it could damage society if built or used incorrectly".[5] Research on safety cannot safely be postponed: "because of AI's surprising history, it's hard to predict when human-level AI might come within reach."[7] OpenAI states that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible..."[5] Co-chair Sam Altman expects the decades-long project to surpass human intelligence.[8]

Vishal Sikka, the CEO of Infosys, stated that an "openness" where the endeavor would "produce results generally in the greater interest of humanity" was a fundamental requirement for his support, and that OpenAI "aligns very nicely with our long-held values" and their "endeavor to do purposeful work".[9] Cade Metz of Wired suggests that corporations such as Amazon may be motivated by a desire to use open-source software and data to level the playing field against corporations like Google and Facebook that own enormous supplies of proprietary data. Altman states that Y Combinator companies will share their data with OpenAI.[8]

Participants

The two co-chairs of the project are Elon Musk and Sam Altman.[10]

Other backers of the project include Peter Thiel and Reid Hoffman.[6]

Notable staff

The group will begin with seven researchers.[6]

See also

  * Future of Humanity Institute
  * Future of Life Institute
  * OpenCog

References

  1. ^ Levy, Steven (December 11, 2015). "How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over". Medium/Backchannel. Retrieved December 11, 2015. "Elon Musk: ...we came to the conclusion that having a 501(c)(3)... would probably be a good thing to do"
  2. ^ Markoff, John (December 11, 2015). "Artificial-Intelligence Research Center Is Founded by Silicon Valley Investors". The New York Times. Retrieved December 12, 2015.
  3. ^ Gershgorn, Dave (December 11, 2015). "New 'OpenAI' Artificial Intelligence Group Formed By Elon Musk, Peter Thiel, And More". Popular Science. Retrieved December 12, 2015.
  4. ^ Lewontin, Max (14 December 2015). "Open AI: Effort to democratize artificial intelligence research?". The Christian Science Monitor. Retrieved 19 December 2015.
  5. ^ a b c d "Tech giants pledge $1bn for 'altruistic AI' venture, OpenAI". BBC News. 12 December 2015. Retrieved 19 December 2015.
  6. ^ a b c d e f "Silicon Valley investors to bankroll artificial-intelligence center". The Seattle Times. 13 December 2015. Retrieved 19 December 2015.
  7. ^ Mendoza, Jessica. "Tech leaders launch nonprofit to save the world from killer robots". The Christian Science Monitor.
  8. ^ a b Metz, Cade (15 December 2015). "Elon Musk's Billion-Dollar AI Plan Is About Far More Than Saving the World". Wired. Retrieved 19 December 2015. Altman said they expect this decades-long project to surpass human intelligence.
  9. ^ Vishal Sikka (14 December 2015). "OpenAI: AI for All". InfyTalk. Infosys. Retrieved 22 December 2015.
  10. ^ Kraft, Amy (14 December 2015). "Elon Musk invests in $1B effort to thwart the dangers of AI". CBS News. Retrieved 19 December 2015.
  11. ^ a b c Liedtke, Michael. "Elon Musk, Peter Thiel, Reid Hoffman, others back $1 billion OpenAI research center". San Jose Mercury News. Retrieved 19 December 2015.

External links