“What is the likely end point of an arms race and is that desirable for the human race?” Hawking asks. “Do we really want cheap A.I. weapons to become the Kalashnikovs of tomorrow, sold to criminals and terrorists on the black market?”
Hawking’s primary concern about A.I. is ultimately about competition. Humans evolve on a biological timescale, and Hawking points out that we cannot hope to compete with a life-form that evolves digitally. And it’s a foolish notion, he says, to assume that machines will have goals and interests that align with our own. Advanced artificial intelligence doesn’t have to be hostile to be dangerous.
“The real risk with A.I. isn’t malice, but competence,” Hawking writes. “A super-intelligent A.I. will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble.”
Hawking famously voiced his concerns in this area well before he died. In the new book, he encourages readers to further the work outlined in the famous 2015 open letter on artificial intelligence published by the Future of Life Institute.
As Hawking addresses other issues in the excerpt, his tone grows darker, especially when arriving at the topic of genetic enhancement.
“We are now entering a new phase of what might be called self-designed evolution, in which we will be able to change and improve our DNA,” Hawking writes. “We have now mapped DNA, which means we have read ‘the book of life,’ so we can start writing in corrections.”
This will almost surely result in a new kind of eugenics, Hawking says — and sooner than anyone thinks.
“I am sure that during this century people will discover how to modify both intelligence and instincts such as aggression,” he writes.
While laws may be passed against genetic engineering, some scientists won’t be able to resist the temptation to improve human characteristics.
“Once such superhumans appear, there are going to be significant political problems with the unimproved humans, who won’t be able to compete,” he says. “Presumably, they will die out, or become unimportant. Instead, there will be a race of self-designing beings who are improving themselves at an ever-increasing rate.”
So is there any good news in Hawking’s vision of the future? Kind of. Throughout the excerpt, Hawking takes pains to note that he considers himself an optimist. He believes, for example, that humankind will inevitably establish extraplanetary colonies, as long as we survive long enough to develop the technology.
But alas, even Hawking’s chin-up message starts with a chilling look straight down.
“One way or another, I regard it as almost inevitable that either a nuclear confrontation or environmental catastrophe will cripple the Earth at some point in the next 1,000 years,” he offers. “By then I hope and believe that our ingenious race will have found a way to slip the surly bonds of Earth and will therefore survive the disaster.”
Book your tickets now.