Originally published November 29, 2014
AI researchers back Elon Musk's fears of technology causing human extinction
by L.J. Devon, Staff Writer
(NaturalNews) When big-time entrepreneur Elon Musk took the podium at a recent MIT symposium on technology, he turned heads by speaking openly about the threat of artificial intelligence potentially causing human extinction.
"I think we should be very careful about artificial intelligence. If I were to guess at what our biggest existential threat is, it's probably that," said Elon Musk, CEO of electric car maker Tesla Motors. "With artificial intelligence, we are summoning the demon. In all those stories with the guy with the pentagram and the holy water, and he's sure he can control the demon. It doesn't work out."
Elon Musk is known for Tesla Motors' dual-motor Model S sedan, which includes a revolutionary autopilot system that allows the vehicle to steer itself between lanes. He is also the CEO and co-founder of SpaceX, a company now striving to build communities on Mars.
Can AI gain consciousness and evil intent?
It turns out that Elon Musk's doomsday fears may be more plausible than they sound. In fact, prominent AI researchers are now publicly backing his concerns.
"At first I was surprised and then I thought, 'this is not completely crazy,'" said Andrew Moore, a computer scientist at Carnegie Mellon University. "I actually do think this is a valid concern and it's really an interesting one. It's a remote, far future danger but sometime we're going to have to think about it. If we're at all close to building these super-intelligent, powerful machines, we should absolutely stop and figure out what we're doing."
Moore and Musk agree on one point: advancing artificial intelligence should be met with regulatory oversight at both the national and international levels. In early August, Musk made his concerns public, saying that AI is "potentially more dangerous than nukes."
Could AI robots defy humans and ultimately turn on them? Could they adapt to their masters, override their instructions, and become hostile? Do the autopilot features developed at his own automobile company pose a future threat to humans?
How long before AI learns to reason on its own?
Sonia Chernova, director of the Robot Autonomy and Interactive Learning lab in the Robotics Engineering Program at Worcester Polytechnic Institute, takes a more measured view. "It's important to understand that the average person doesn't understand how prevalent AI is," she said, stressing the importance of differentiating between the various levels of artificial intelligence. Some AI research is harmless, Chernova said, like the artificial intelligence built into email services to filter out spam. Phone applications that recommend movies and restaurants to users are essentially harmless AI as well, and Google uses AI for its Maps service.
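To make that concrete, here is a minimal sketch of the kind of statistical text classifier that underlies simple spam filtering. Everything in it is an illustrative assumption: the toy training set, the word-count features, and the naive Bayes scoring are a textbook approach, not the actual system any email provider uses.

    from collections import Counter
    import math

    # Hypothetical toy training set, invented for illustration only.
    TRAINING = [
        ("win free money now", "spam"),
        ("free prize claim now", "spam"),
        ("limited offer win cash", "spam"),
        ("meeting agenda for monday", "ham"),
        ("lunch with the team tomorrow", "ham"),
        ("project status report attached", "ham"),
    ]

    def train(examples):
        """Count how often each word appears in spam vs. ham messages."""
        counts = {"spam": Counter(), "ham": Counter()}
        totals = Counter()
        for text, label in examples:
            totals[label] += 1
            counts[label].update(text.split())
        return counts, totals

    def classify(text, counts, totals):
        """Score a message under each class with log-probabilities and
        add-one (Laplace) smoothing; return the likelier label."""
        vocab = set(counts["spam"]) | set(counts["ham"])
        scores = {}
        for label in ("spam", "ham"):
            # Prior: fraction of training messages with this label.
            score = math.log(totals[label] / sum(totals.values()))
            n = sum(counts[label].values())
            for word in text.split():
                # Likelihood of each word given the label, smoothed so
                # unseen words never zero out the whole score.
                score += math.log((counts[label][word] + 1) / (n + len(vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

    counts, totals = train(TRAINING)
    print(classify("claim your free prize now", counts, totals))  # likely "spam"
    print(classify("monday meeting report", counts, totals))      # likely "ham"

This is the sense in which such systems are "harmless": they sort text by word statistics and have no goals, understanding, or capacity to act beyond the narrow task they were built for.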
She said many AI technologies pose no risk: "I think [Musk's] comments were very broad and I really don't agree there. His definition of AI is a little more than what we really have working. AI has been around since the 1950s. We're now getting to the point where we can do image processing pretty well, but we're so far away from making anything that can reason."
Scientists agree that artificial intelligence crosses a critical threshold once it can reason, but Chernova says it could take 100 years or more for scientists to build an intelligent system like that.
Creating a system that keeps humans in control
Still, Musk said that he wants "to keep an eye" on AI researchers. That's why he helped fund a $40 million investment in Vicarious FPC, a company working on future AI algorithms.
Yaser Abu-Mostafa, professor of electrical engineering and computer science at the California Institute of Technology, agrees that AI is far from being able to reason intelligently but believes it is only a matter of time. He advocates creating systems that keep humans in control at all costs.
"Having a machine that is evil and takes over... that cannot possibly happen without us allowing it," said Abu-Mostafa. "There are safeguards... If you go through the scenario of a machine that wants to take over or destroy the world, it's a nice science-fiction scenario, as long as we don't allow a system to control itself."
Is it possible to give robots consciousness? Could machine intelligence develop evil intent and evolve into an enemy of mankind, a threat to the human race?
Sources:
http://www.computerworld.com
http://www.naturalnews.com