We know that computers can look at the world and recognize what they see, via image-recognition software. They can pick out objects and people, and even distinguish individual faces and emotions. But what do they really perceive?

That’s the question behind an interesting new study concerning artificial intelligence and pattern recognition. Researchers from the University of Wyoming and Cornell University found that, perhaps not so surprisingly, computers see the world very differently than we do.


The team started by working with a state-of-the-art image-recognition system called a deep neural network (DNN). By training on the visual data of millions of images, a DNN learns to distinguish, say, a dog from a dolphin.
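For a sense of what that training buys you, here is a minimal sketch of the classification step using a pretrained PyTorch/torchvision model. The specific network (ResNet-18) and the input file name are illustrative stand-ins, not the setup the researchers used.

```python
# Minimal sketch: classify a photo with a pretrained deep neural network.
# Assumes PyTorch and torchvision are installed; ResNet-18 is illustrative.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing: resize, crop, convert, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("photo.jpg").convert("RGB")   # any input photo
batch = preprocess(image).unsqueeze(0)           # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)   # confidence per class

confidence, label = probs.max(dim=1)
print(f"class index {label.item()} with {confidence.item():.1%} confidence")
```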

The team then paired the DNN with a genetic algorithm that evolves new images from old ones. In a technique sometimes called evolutionary art, a human normally guides that process by choosing which mutated images survive each generation: start with any picture, keep picking the most dolphin-like versions, and you eventually wind up with something relatively dolphin-y. In this case, however, the researchers replaced the human selector with the DNN's own judgments and got some odd results.
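The loop below is a rough sketch of that substitution, assuming a PyTorch classifier like the one above: a simple genetic algorithm mutates a population of images, and the DNN's confidence in a chosen target class decides which versions survive. The direct pixel encoding, population size, and mutation scheme here are illustrative simplifications, not the exact method from the study.

```python
# Sketch of "DNN as the selector" in an evolutionary loop.
# Assumes `model` accepts image tensors with pixel values in [0, 1].
import torch

def dnn_confidence(model, images, target_class):
    """Confidence the model assigns to `target_class` for each image."""
    with torch.no_grad():
        probs = torch.softmax(model(images), dim=1)
    return probs[:, target_class]

def evolve_fooling_image(model, target_class, steps=1000,
                         pop_size=32, mutation_scale=0.05):
    # Start from a population of random-noise images (3 x 224 x 224).
    population = torch.rand(pop_size, 3, 224, 224)
    for _ in range(steps):
        # Mutate: add a small random perturbation to every individual.
        children = (population +
                    mutation_scale * torch.randn_like(population)).clamp(0, 1)
        # Select: keep whichever of parent/child the DNN scores higher.
        parent_fit = dnn_confidence(model, population, target_class)
        child_fit = dnn_confidence(model, children, target_class)
        keep_child = (child_fit > parent_fit).view(-1, 1, 1, 1)
        population = torch.where(keep_child, children, population)
    # Return the single image the DNN is most confident about.
    best = dnn_confidence(model, population, target_class).argmax()
    return population[best]
```

Run long enough, a loop like this can produce an image the classifier scores very highly for the target class even though it looks like noise to a person, which is the effect the study describes.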

Image credit: Jason Yosinski, Jeff Clune and Anh Nguyen

“We were expecting that we would get the same thing, a lot of very high-quality recognizable images,” researcher Jeff Clune told New Scientist. “Instead we got these rather bizarre images: a cheetah that looks nothing like a cheetah.”


In fact, the process resulted in images that look like abstract art or visual white noise, but which the DNN would identify with 99 percent certainty as specific objects — a baseball, say, or an electric guitar. They’re optical illusions for computers, essentially, which could pose problems for applications like facial recognition security systems.

At issue is the fact that humans identify an object by taking in the whole picture, whereas a DNN keys on the parts of an object that most distinguish it from other objects, rather like a targeting computer locking onto a few telltale features. There's an interesting (and probably terrifying) term paper in here for some philosophy/computer science double major. You can read the full report here.