Elon Musk’s revolt against futurist-led A.I. Apocalypse

Founder and CEO of Tesla Motors Elon Musk speaks during a media tour of the Tesla Gigafactory, which will produce batteries for the electric carmaker, in Sparks, Nevada, U.S. July 26, 2016. REUTERS/James Glover II

On Sunday, Vanity Fair published a detailed account of SpaceX CEO Elon Musk’s reservations about artificial intelligence.

(VERO BEACH, FLA) In the 7,900-word feature titled “Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse,” Vanity Fair’s Maureen Dowd interviewed leading figures of Silicon Valley, including Demis Hassabis, Peter Thiel, and Sam Altman, to decipher why fellow futurist Elon Musk is no fan of his colleagues’ goal of ubiquitously connected robotic artificial intelligence.

The full article is available here; these are notable nuggets from the piece:

  • Microsoft, Apple, and Google already use A.I. to power their digital services, such as mobile assistants Cortana and Siri and the Google search engine. These are all being used to create flexible, self-teaching A.I. that will mirror human learning.
  • At the 2017 World Government Summit in Dubai, Elon Musk professed his support for cybernetic implants in humans as a deterrent against runaway A.I., and said this advancement could come as soon as 2021.
  • Musk said:

“The way to escape human obsolescence, in the end, may be by having some sort of merger of biological intelligence and machine intelligence. We’re already cyborgs. Your phone and your computer are extensions of you, but the interface is through finger movements or speech, which are very slow. For a meaningful partial-brain interface, I think we’re roughly four or five years away.”

  • Demis Hassabis, a leading creator of advanced artificial intelligence and a co-founder of the mysterious London laboratory DeepMind, told Elon Musk that developing “artificial super-intelligence” is the most important project in the world. Hassabis is regarded as the “Merlin who will likely help conjure our A.I. children.”
     
  • Peter Thiel, who co-founded PayPal with Elon Musk, relayed a story about how an investor in DeepMind once joked “he ought to shoot Hassabis on the spot, because it was the last chance to save the human race.”
     
  • Shane Legg, one of Hassabis’s co-founders at DeepMind, previously stated: “I think human extinction will probably occur, and technology will likely play a part in this.”
     
  • Before DeepMind was bought by Google in 2014 for $650 million, Elon Musk invested in the company “to keep a wary eye on the arc of A.I.”
     
  • Elon Musk told Bloomberg’s Ashlee Vance that he fears Google may produce “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”
     
  • During a speech at MIT in 2014, Elon Musk said A.I. was probably humanity’s “biggest existential threat,” adding: “With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.”
     
  • Following the speech some A.I. engineers jokingly began their lab work by saying: “O.K., let’s get back to work summoning.”
     
  • Musk told Recode’s annual Code Conference in 2016 that “we could already be playthings in a simulated-reality world run by an advanced civilization.”
     
  • Stephen Hawking and Bill Gates have joined Elon Musk in warning against the growth of artificial intelligence.
     
  • Stephen Hawking told the BBC: “I think the development of full artificial intelligence could spell the end of the human race.” 
     
  • Bill Gates told Charlie Rose that A.I. was potentially more dangerous than a nuclear catastrophe. 
     
  • Nick Bostrom, a 43-year-old Oxford philosophy professor, warned in his 2014 book, Superintelligence, that “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.” 
     
  • In 2016, Henry Kissinger held a confidential meeting with top A.I. experts at the Brook, a private club in Manhattan, to discuss concern over how smart robots could cause a rupture in history and unravel the way civilization works.
     
  • Stuart Russell, a computer-science professor at Berkeley, said: “In 50 years, this 18-month period we’re in now will be seen as being crucial for the future of the A.I. community. It’s when the A.I. community finally woke up and took itself seriously and thought about what to do to make the future better.”
     
  • Steve Wozniak said: “Why do we want to set ourselves up as the enemy when they might overpower us someday? It should be a joint partnership. All we can do is seed them with a strong culture where they see humans as their friends.”
     
  • Peter Thiel: “Full-on A.I. is on the order of magnitude of extraterrestrials landing. There are some very deeply tricky questions around this . . . . If you really push on how do we make A.I. safe, I don’t think people have any clue. We don’t even know what A.I. is. It’s very hard to know how it would be controllable.”
     
  • Elon Musk and Y Combinator president Sam Altman have founded a billion-dollar nonprofit company named OpenAI, with the goal of working toward safer artificial intelligence.
     
  • Douglas Adams’s “The Hitchhiker’s Guide to the Galaxy,” a book about aliens destroying the Earth to make way for a hyperspace highway, was a turning point for Elon Musk, who came to see “man’s fate in the galaxy as his personal obligation.”
     
  • Elon Musk’s mission statement: “The only thing that makes sense to do is strive for greater collective enlightenment.”
     
  • Elon Musk believes that it is better to try to get super-A.I. first and distribute the technology to the world than to allow the algorithms to be concealed and concentrated in the hands of tech or government elites.
     
  • Musk said: “I’ve had many conversations with Larry [Page] about A.I. and robotics—many, many. And some of them have gotten quite heated. You know, I think it’s not just Larry, but there are many futurists who feel a certain inevitability or fatalism about robots, where we’d have some sort of peripheral role. The phrase used is ‘We are the biological boot-loader for digital super-intelligence.’”
     
  • Greg Brockman, of OpenAI, believes the next decade will be all about A.I., with everyone “throwing money at the small number of ‘wizards’ who know the A.I. ‘incantations.’”
     
  • Microsoft’s Jaron Lanier, a computer scientist known as the father of virtual reality, said: “It’s saying, ‘Oh, you digital techy people, you’re like gods; you’re creating life; you’re transforming reality.’ There’s a tremendous narcissism in it that we’re the people who can do it. No one else. The Pope can’t do it. The president can’t do it. No one else can do it. We are the masters of it . . . . The software we’re building is our immortality. I read about it once in a story about a golden calf.”
     
  • Eric Schmidt, the executive chairman of Google’s parent company, put it this way: “Robots are invented. Countries arm them. An evil dictator turns the robots on humans, and all humans will be killed. Sounds like a movie to me.”
     
  • Mark Zuckerberg, 32, has developed a reputation for adopting eccentric habits of late, including wearing a tie every day, reading a book every two weeks, learning Mandarin, and eating meat only from animals he has killed with his own hands.
     
  • In 2016, Mark Zuckerberg said: “I think we can build A.I. so it works for us and helps us. Some people fear-monger about how A.I. is a huge danger, but that seems far-fetched to me and much less likely than disasters due to widespread disease, violence, etc.”
     
  • Ray Kurzweil believes A.I. will turn us into cyborgs, with nanobots the size of blood cells connecting us to synthetic neocortices in the cloud, giving us access to virtual reality and augmented reality from within our own nervous systems.
     
  • Kurzweil: “We will be funnier; we will be more musical; we will increase our wisdom.”
     
  • Eliezer Yudkowsky, a highly regarded 37-year-old A.I. researcher, said: “The A.I. doesn’t have to take over the whole Internet. It doesn’t need drones. It’s not dangerous because it has guns. It’s dangerous because it’s smarter than us. Suppose it can solve the science technology of predicting protein structure from DNA information. Then it just needs to send out a few e-mails to the labs that synthesize customized proteins. Soon it has its own molecular machinery, building even more sophisticated molecular machines. If you want a picture of A.I. gone wrong, don’t imagine marching humanoid robots with glowing red eyes. Imagine tiny invisible synthetic bacteria made of diamond, with tiny onboard computers, hiding inside your bloodstream and everyone else’s. And then, simultaneously, they release one microgram of botulinum toxin. Everyone just falls over dead.”

At the 2017 Mobile World Congress (MWC) in Barcelona, Spain, keynote speakers told TRUNEWS host Rick Wiles that the integration of computers into human beings is a goal being pursued by the same architects behind the implementation of an artificially intelligent, ubiquitously connected Global Brain.

On the March 16th and 17th editions of TRUNEWS, host Rick Wiles described what he was told about the development of a Global Brain and how Christians today must learn to evangelize in this coming technocracy.

TRUNEWS copy, TRUNEWS analysis 

Please contact TRUNEWS correspondent Edward Szall with any news tips related to this story.
Email: Edward.Szall@trunews.com | Twitter: @realEdwardSzall 