If you haven’t heard, Tesla CEO Elon Musk is concerned about the potential dangers of artificial intelligence (AI). But he hasn’t offered many specifics as to why.
In a recent interview with scientist Neil deGrasse Tyson, Musk said he’s not worried about self-driving cars; his concern is when a machine can rapidly educate itself. He explains:
“If there was a very deep digital superintelligence that was created that could go into rapid recursive self-improvement in a non-algorithmic way – it could reprogram itself to be smarter and iterate very quickly and do that 24 hours a day on millions of computers, well – ”
“Then that’s all she wrote,” interjected Tyson, laughing.
Musk certainly isn’t alone in his thinking. Stephen Hawking has said AI “could spell the end of the human race.” And Microsoft founder Bill Gates shares the worry. He recently wrote in a Reddit Ask Me Anything, “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
But consider: we can’t even get drones to pick up and deliver shoes. We tried that, and it was a disaster. What are the chances the superintelligence Musk speaks of comes to fruition? I’m no scientist, and more people than ever are collaborating on AI, but the odds don’t seem too high. As Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, recently said, “we quite simply have to separate science from science fiction.”
Many experts disagree with the “AI is going to take over the world” theory, and their reasoning is this: computer algorithms are too narrow, and they lack imagination.
Thomas Dietterich, president of the Association for the Advancement of Artificial Intelligence and a professor of computer science at Oregon State University, says we’re asking AI to perform “high-stakes tasks that will depend on enormously complex algorithms.”
Dietterich says those algorithms won’t always work. “Computer systems can already beat humans at chess, but that doesn’t mean they can’t make a wrong move,” he tells Phys.org. “They can reason, but that doesn’t mean they always get the right answer. And they may be powerful, but that’s not the same thing as saying they will develop superpowers.”
The anti-AI crowd creates a lot of noise, so we thought it was time to share the other side of the story. We’ve rounded up five experts who think the “dangers” of AI have been overblown, and we share their reasons why.
And if you’re in the anti-AI crowd, check out these experts who agree with you instead.