Is Elon Musk Right About The Danger Of AI?

Artificial Intelligence (AI) has exploded onto the scene, and its rapid growth isn’t expected to slow down anytime soon.

A study by the McKinsey Global Institute reports that companies invested between $26 billion and $39 billion in AI in 2016 alone—representing 3X growth from only 3 years prior. Tractica analysts predict that by 2025, AI’s industry valuation will skyrocket to $36.8 billion.

 

The numbers are more than promising, but not everyone has such a rosy view of AI. Enter Elon Musk, famed serial entrepreneur and self-appointed AI vigilante.

 

For years, the innovator behind Tesla and SpaceX has been warily watching AI’s approaching future—and going public with his concerns. Although he’s dabbled in AI himself with his self-driving Teslas and $1 billion research company, OpenAI, the entrepreneur is determined to keep tabs on AI’s impact on humanity. (Just to be safe, Musk’s SpaceX is working to advance interplanetary colonization, which the CEO says is necessary to save humanity, should AI go rogue—as he fears it will.)

 

The idea of giving control to machines has quickly divided the business world into camps of naysayers and enthusiasts. Who could forget the Twitter beef between Musk and Facebook CEO Mark Zuckerberg? When Zuckerberg argued that Musk was overstating the risks of AI, he responded by calling Zuckerberg’s understanding of AI “limited.”

 

High-profile clashes in the upper echelons of the tech world are propelling the AI debate to the forefront. Meanwhile, new attempts at weaponizing AI are putting all camps on high alert.

 

Is the AI controversy simply fear-mongering, or is there really a basis for growing concern? Here’s a quick look at Elon Musk’s AI criticisms, and an overview of those who side with him and those who don’t.

 

ARE MUSK’S FEARS GROUNDED IN REALITY?

Musk doesn’t believe we should stop developing AI. However, he has gone public with the need for regulations and oversight. Here are some of the innovator’s top AI concerns:

 

  • AI lacks standards and regulations

To Musk, the lack of AI regulations is highly problematic. AI, he believes, is “the biggest risk we face as a civilization.” He has said the government must take an offensive stance to regulate and standardize AI before it’s too late.

 

  • Smart AI will be unstoppable

Deep learning is giving way to robots that develop their own languages and responses. Although companies like DeepMind suggest creating an “abort” button to stop AI if it goes haywire, Musk isn’t convinced. “I’m not sure I’d want to be the one holding the kill switch for some super-powered AI because you’d be the first thing it kills,” he said.

 

  • AI is goal-oriented—at any cost

Musk has likened the development of AI to playing with fire. The argument goes like this: Unlike humans, who use common sense, problem-solving and ethics to guide decisions, machines simply act as they are programmed to (or as they learn to). They’re exceedingly good at what they do, which is getting things done. But in the wrong hands, or through their own goal-driven nature, even the best-intentioned AI projects can go rogue and fulfill a mission without regard to ethics. As Musk put it in one interview, “Let’s say you create a self-improving AI to pick strawberries and it gets better and better at picking strawberries and picks more and more and it is self-improving, so all it really wants to do is pick strawberries. So then it would have all the world be strawberry fields. Strawberry fields forever.”

 

WEAPONIZING AI AND THE ARMS RACE

 

Give a robot free will and a weapon, and takeover is imminent—or so many in the tech industry claim. It’s why more than 100 experts and leaders, from Musk to Steve Wosniak, have co-signed a letter to the U.N. requesting abatement of AI weaponization efforts. The letter stated that automated weapons would “permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. … We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

 

More than 60% of cybersecurity experts believe that AI can and will be weaponized within the next year. Once put into motion, it could be used to launch countless cyberattacks against countries and governments. With innovators developing AI that can learn, respond and predict a move before it happens, Musk is concerned that if AI sees a preemptive strike as the pathway to victory, it will launch its own war—unprovoked. To avoid machine domination, the U.S. Military is focused on creating a safety feature in autonomous weapons that requires human authorization to kill.

 

Over the last few months, Musk’s statements have increased from expressing fears about the potential risk of AI gone wild to claims of WWIII. Now, with weaponization on the horizon, Musk and others see an impending arms race mirroring that of the Cold War. Russian president Vladimir Putin announced that the first global leader in AI would “become the ruler of the world.” The U.S. has thus far been leading the way with Google and Microsoft, but a Goldman Sachs report showed that China may not be far behind.

 

AI ALARM-SOUNDERS

AI isn’t evil in itself. But many fear that it could be programmed to act maliciously, or use destructive methods to achieve goals if it develops a mind of its own. Either way, there’s something about a machine emulating human behavior that doesn’t sit right with many technologists.

 

Back in 2014, Musk claimed that AI was potentially more dangerous than nuclear weapons. But now, even as the North Korean conflict quickly escalates, Musk says that when left to itself, AI is humanity’s biggest existential threat. Musk’s ongoing doomsday warnings earned him the Luddite Award in 2016. Still, he’s not the only one denouncing a machine-driven future.

 

From IBM Watson to Apple’s Siri, there’s no denying AI is getting smarter. And although Bill Gates foresaw a distant future where we should be concerned about AI, it could be a lot closer than he’d ever imagined. Oxford philosophy professor Nick Bostroom believes that once malicious AI enters the scene, there’s no going back. Why? Because superintelligence would prevent humans from reprogramming or recoding it. Tim Berners-Lee, the man behind the Internet, also warns that without ethics and regulations, AI can be dangerous.

 

AI LEADERS

From startup founders and innovators to countries and governments, there are plenty of AI enthusiasts to balance out the naysayers. Russia and China are already in the arms race, and America has been investing sizable amounts into AI weapon development. But the odds are still out on who will lead the way. While China intends to invest billions of dollars into AI, U.S. President Donald Trump recently proposed a 10% ($175 million) reduction in National Science Foundation funding for intelligent systems.

 

Meanwhile, many companies are blazing new trails. Google is quickly becoming a force to be reckoned with, acquiring 12 AI startups since 2012. Apple has acquired 8 startups, both local and global, to keep up with the fast-moving AI tide. Although Gates has announced his own fears about AI, Microsoft takes third place—in a tie with Facebook and Intel—in AI startup acquisitions.

 

BRIDGING THE GAP

Traversing the middle ground are business masterminds like Amazon, which is working to beat the stereotype that robots are taking over the workforce. The online retail giant is quietly creating a job force that blends the skills of both: utilizing machines for laborious tasks, and employing humans to oversee, troubleshoot and keep the robots on task.

 

Musk’s Neuralink venture is working to create a brain-computer interface—a sort of mind-meld of man and machine. The goal? To help humans keep up with AI, and maintain control. Despite his concerns, Musk believes the way forward is to connect neural pathways between us to “achieve a symbiosis between human and machine intelligence [and maybe solve] the control problem and the usefulness problem.”

 

As AI continues to dominate tech funding and innovator circles, there’s no understating its implications on our world. Perhaps famed academic Stephen Hawking said it best: “The rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which.”

Sign up for our Newsletter