Stephen Hawking Warns of Killer A.I. and Genetic Superhumans

In an excerpt from his new book of essays, the iconic scientist posthumously warns of rogue artificial intelligence and genetic enhancement.

Stephen Hawking | NASA/Kim Shiflett

Stephen Hawking would like to have a word. The world-famous theoretical physicist and Cambridge University professor, who died in March, is sending warnings from beyond in his new book, Brief Answers to the Big Questions, which was published posthumously this week.

The collection of essays and articles is being touted as a postscript to his 1988 bestseller A Brief History of Time. An excerpt published over the weekend in Britain’s Sunday Times reveals that, in the years before his death, Hawking cultivated some rather foreboding thoughts about the future of our species.

While the excerpt covers just a fraction of the material in the book, it suggests a surprisingly dark vision of the threat posed by rogue artificial intelligence and by the development of genetically enhanced superhumans.

The excerpt, structured in a Q&A format, begins with the question: “Will artificial intelligence outsmart us?” Hawking’s reply is direct.

“It’s tempting to dismiss the notion of highly intelligent machines as mere science fiction,” he writes, “but this would be a mistake, and potentially our worst mistake ever.”


Hawking goes on to fret out loud about an artificial intelligence explosion that will result in machines “whose intelligence exceeds ours by more than ours exceeds that of snails.” (That’s an odd choice of IQ metrics, but this is Stephen Hawking, so we’ll defer to his wisdom.)

He warns of dire outcomes should artificial intelligence start to act independently in the realms of economics and, especially, warfare. Military organizations around the world are already developing autonomous weapons systems that can choose and fire upon human targets.

NASA/Paul Alers

“What is the likely end point of an arms race and is that desirable for the human race?” Hawking asks. “Do we really want cheap A.I. weapons to become the Kalashnikovs of tomorrow, sold to criminals and terrorists on the black market?”

Hawking’s primary concern about A.I. is ultimately about competition. Humans evolve on a biological scale, and Hawking points out that we cannot hope to compete with a lifeform that evolves digitally. And it’s a foolish notion, he says, to assume that machines will have goals and interests that align with our own. Advanced artificial intelligence doesn’t have to be hostile to be dangerous.

“The real risk with A.I. isn’t malice, but competence,” Hawking writes. “A super-intelligent A.I. will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble.”

Hawking famously voiced his concerns in this area well before he died. In the new book, he encourages readers to further the work outlined in the famous 2015 open letter on artificial intelligence published by the Future of Life Institute.


As Hawking addresses other issues in the excerpt, his tone grows darker, especially when arriving at the topic of genetic enhancement.

“We are now entering a new phase of what might be called self-designed evolution, in which we will be able to change and improve our DNA,” Hawking writes. “We have now mapped DNA, which means we have read ‘the book of life,’ so we can start writing in corrections.”

This will almost surely result in a new kind of eugenics, Hawking says — and sooner than anyone thinks.

“I am sure that during this century people will discover how to modify both intelligence and instincts such as aggression,” he writes.

Hawking predicts that even if laws are passed against genetic engineering, some scientists won't be able to resist the temptation to improve human characteristics.

“Once such superhumans appear, there are going to be significant political problems with the unimproved humans, who won’t be able to compete,” he says. “Presumably, they will die out, or become unimportant. Instead, there will be a race of self-designing beings who are improving themselves at an ever-increasing rate.”


So is there any good news in Hawking's vision of the future? Kind of. Throughout the excerpt, Hawking takes pains to note that he considers himself an optimist. He believes, for example, that humankind will inevitably establish colonies beyond Earth, so long as we survive long enough to develop the technology.

But alas, even Hawking’s chin-up message starts with a chilling look straight down.

“One way or another, I regard it as almost inevitable that either a nuclear confrontation or environmental catastrophe will cripple the Earth at some point in the next 1,000 years,” he offers. “By then I hope and believe that our ingenious race will have found a way to slip the surly bonds of Earth and will therefore survive the disaster.”

Book your tickets now.