There was the psychotic HAL 9000 in "2001: A Space Odyssey," the humanoids that attacked their human masters in "I, Robot" and, of course, "The Terminator," in which a robot is sent into the past to kill a woman whose son will end the tyranny of the machines.
Never far from the surface, a dark, dystopian view of artificial intelligence (AI) has returned to the headlines, thanks to British physicist Stephen Hawking.
"The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race," Hawking told the BBC.
"Once humans develop artificial intelligence it would take off on its own, and re-design itself at an ever increasing rate," he said.
But experts interviewed by AFP were divided.
Some agreed with Hawking, saying that the threat, even if it were distant, should be taken seriously. Others said his warning seemed overblown.
"I'm pleased that a scientist from the 'hard sciences' has spoken out. I've been saying the same thing for years," said Daniela Cerqui, an anthropologist at Switzerland's Lausanne University.
Gains in AI are creating machines that outstrip human performance, Cerqui argued. Humans will eventually delegate responsibility for their lives to machines, she predicted.
"It may seem like science fiction, but it's only a matter of degrees when you see what is happening right now," said Cerqui. "We are heading down the road he talked about, one step at a time."
Nick Bostrom, director of a program on the impacts of future technology at the University of Oxford, said the threat of AI superiority was not immediate.
Bostrom pointed to current and near-future applications of AI that were still clearly in human hands -- things such as military drones, driverless cars, robot factory workers and automated surveillance of the Internet.
But, he said, "I think machine intelligence will eventually surpass biological intelligence -- and, yes, there will be significant existential risks associated with that transition."
Other experts said "true" AI -- loosely defined as a machine that can pass itself off as a human being or think creatively -- was at best decades away, and cautioned against alarmism.
Since the field was launched at a conference in 1956, "predictions that AI will be achieved in the next 15 to 25 years have littered the field," according to Oxford researcher Stuart Armstrong.
"Unless we missed something really spectacular in the news recently, none of them have come to pass," Armstrong writes in his book "Smarter Than Us: The Rise of Machine Intelligence."
Jean-Gabriel Ganascia, an AI expert and moral philosopher at the Pierre and Marie Curie University in Paris, said Hawking's warning was "over the top."
"Many things in AI unleash emotion and worry because it changes our way of life," he said.
"Hawking said there would be autonomous technology which would develop separately from humans. He has no evidence to support that. There is no data to back this opinion."
"It's a little apocalyptic," said Mathieu Lafourcade, an AI language specialist at the University of Montpellier, southern France.
"Machines already do things better than us," he said, pointing to chess-playing software. "That doesn't mean they are more intelligent than us."
Allan Tucker, a senior lecturer in computer science at Britain's Brunel University, took a look at the hurdles facing AI.