Scientists say not enough research is being done on the effects of artificial intelligence.
By Steve Crowe
May 05, 2014
Stephen Hawking, the world’s most famous physicist, has issued a warning about artificial intelligence (AI), saying it could be “the biggest event in human history,” but also “the last.”
In an op-ed published in The Independent, Hawking and three other scientists write it would be the “worst mistake in history” to dismiss the threat of AI, and that not enough research is being devoted to the possible risks involved.
The op-ed cites several achievements in the field of AI, including self-driving cars, Siri, and the computer that won Jeopardy! But, the scientists warn, "such achievements will probably pale against what the coming decades will bring."
The scientists continue, “The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history.
“Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation.”
The scientists write that there may be nothing to prevent machines with superhuman intelligence from self-improving. “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
Hawking co-wrote the op-ed with Stuart Russell, a computer science professor at the University of California, Berkeley, and physics professors Frank Wilczek and Max Tegmark of the Massachusetts Institute of Technology.
The piece was inspired by the movie Transcendence, starring Johnny Depp and Morgan Freeman, which depicts two opposing possible futures for humanity. In one, AI becomes a strong and crucial part of our existence, taking over many aspects of human life; the other takes an anti-technology perspective. Hawking, however, warns against dismissing this sort of artificial intelligence as mere science fiction.