'Superhuman' A.I. Can Locate Any Image
(Image credit: Google Research Blog)
A few weeks back, researchers with Google's artificial neural networks team published a blog post about their A.I. system, Deep Dream, which could see pictures in clouds and even (arguably) create original art. Yesterday, a team of four engineering students at Hack Reactor announced, via Popular Science, that they were coming out with an app called Dreamify that uses Deep Dream's source code to create psychedelic art out of ordinary images. It gets technical, and a little existential, but the basic gist is that by running an image recognition process in reverse, the system can generate original images rather than just identify them. After training the system on thousands of images of a particular object -- a starfish, say -- the team discovered that the neural network would identify "starfishy" elements in other, unrelated images. The results are trippy, to say the least. But Deep Dream is not the first computer to generate art. We take a look at it here, along with some other examples of machine-generated art.
(Image credit: Google Research Blog)
Deep Dream can generate surprisingly compelling images, depending on what parameters are established when it first begins to process a picture. Each layer in a neural network builds on the ones beneath it, so running an image through lower layers tends to generate lines and simple patterns. In the higher-level layers, however, the network is looking for more sophisticated features and will tend to generate complex images and entire objects. When the Google team had Deep Dream process an image of a cloudy sky, it began creating images of fantastic hybrid animals like the "pig-snail" and the "camel-bird." Google's name for the process? "Inceptionism." In the image above, a neural network programmed to distinguish architectural and animal elements was cut loose on a landscape. The resulting output is therefore not based on any sample image -- it's purely a result of the A.I.'s "thoughts" on the issue.
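For the technically curious, the core trick is simple to sketch: pick a layer in a pretrained network, then nudge the input image, via gradient ascent, so that layer's activations grow stronger. The snippet below is a minimal illustration of that idea, not Google's actual implementation; it assumes PyTorch and torchvision are installed, and the layer choice ("inception4c"), step count, and input file ("clouds.jpg") are all illustrative.

```python
# Minimal DeepDream-style sketch (assumes PyTorch + torchvision).
# Gradient ascent on the input image amplifies whatever features a
# chosen layer already "sees" in it -- the process described above.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.googlenet(weights="DEFAULT").eval()  # Inception-style net
for p in model.parameters():
    p.requires_grad_(False)                          # only the image changes

# Capture one intermediate layer's activations with a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(feat=output)
)

# Simplified preprocessing: resize and scale to [0, 1].
preprocess = T.Compose([T.Resize(512), T.ToTensor()])
img = preprocess(Image.open("clouds.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

for _ in range(20):                        # a few ascent steps
    model(img)
    loss = activations["feat"].norm()      # "make this layer fire harder"
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        img.clamp_(0, 1)

T.ToPILImage()(img.detach().squeeze(0)).save("dream.jpg")
```

Hooking a lower layer (say, inception3a) tends to amplify edges and simple textures, while a higher one pulls whole objects out of the clouds -- exactly the layer-by-layer behavior described above.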
Google has since published the DeepDream source code, putting A.I. artistry in the hands of the people. Almost immediately after the code was made public, enterprising engineers and hobbyists began creating tools to explore the possibilities of Google's neural network. Dreamscope is one of several Web apps that have popped up in recent days, and it looks like Instagram on powerful alkaloids. While Dreamscope doesn't give access to the full spectrum of Deep Dream's abilities, it does make the process quick and easy. Just upload an image, select one of the 19 provided filters, and you'll get your own A.I. art show within about 15 seconds. (The first wave of "user-friendly" Deep Dream tools took hours or even days to process an image.) Above is one of the world's most famous public domain images -- the 1970 meeting between Richard Nixon and Elvis Presley -- as run through Dreamscope's "demonic" filter. Captures the moment nicely, doesn't it?
The imagery Deep Dream produces is unique in terms of how it's produced, but machine-generated art -- sometimes called digital art or generative art -- has actually been around for quite a while. Probably the most familiar example is fractal art, in which dedicated software turns algorithmic equations into still images and animations. Fractals occur in both mathematics and nature: in a fractal, recursive patterns repeat at different scales, so that a tiny sliver of a fern leaf looks much the same as the larger fern leaf itself. These repeating geometric patterns can be plotted mathematically, in two or three dimensions, then converted into lines, shapes and colors. The resulting images are virtually infinite in variety and complexity, depending on how you tweak each iteration of a fractal.
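To make that concrete, here is a tiny sketch of the most famous fractal of all, the Mandelbrot set. It assumes only NumPy and Pillow are installed; the image size, iteration count and output filename are arbitrary choices.

```python
# Mandelbrot set: iterate z -> z^2 + c for each pixel's complex coordinate c,
# coloring the pixel by how long z takes to escape. Zooming into the result
# reveals the same shapes repeating at ever-smaller scales.
import numpy as np
from PIL import Image

width, height, max_iter = 800, 600, 80
xs = np.linspace(-2.5, 1.0, width)
ys = np.linspace(-1.2, 1.2, height)
c = xs[None, :] + 1j * ys[:, None]          # one complex number per pixel
z = np.zeros_like(c)
escape = np.zeros(c.shape, dtype=np.uint8)  # iteration count at escape

for i in range(max_iter):
    alive = np.abs(z) <= 2.0                # points that haven't escaped yet
    z[alive] = z[alive] ** 2 + c[alive]
    escape[alive] = i

Image.fromarray(escape * (255 // max_iter)).save("mandelbrot.png")
```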
(Image credit: The Painting Fool)
Machine-generated art has been exhibited in galleries all over the world since at least the 1960s. But artists and historians have long disagreed over whether such exhibits are truly created by computers, or whether computers are simply another tool used by the human artist. Another open question: Can you even call a machine-generated image or object "Art"? British computer scientist Simon Colton has been exploring these questions with his A.I. project known as The Painting Fool. The A.I. system, adapted for exhibition in galleries, takes a digital picture of each visitor, then selects from thousands of abstract templates and image filters. The Painting Fool makes its choices depending upon processes that govern the machine's "mood" -- for instance, scanning text from a newspaper. If its mood is dark enough, it might not paint at all. The Painting Fool also learns from its mistakes, and Colton is continually adjusting the A.I.'s algorithms to meet his seven criteria for true creativity: skill, appreciation, imagination, learning, intentionality, reflection and invention. The program has recently branched out to start producing sculptures, animations and poetry.
(Image credit: Computer History Museum)
But there's still that sticky question about Art, with a capital A. Even if a machine does generate original images -- or objects or manuscripts -- do these creations truly constitute artistic expression? The software system known as AARON, for instance, has been creating original artistic images since 1973. Developed by painter and computer scientist Harold Cohen, the program has gone through different stylistic periods in which it has created both highly abstract and highly representational images. AARON's drawings are created through a system of custom printing machines and have been exhibited at the Tate Gallery in London. But while Cohen describes AARON as an A.I., he has officially left the issue of Art as an open question. In an effort to resolve the issue, computer researcher Mark Riedl recently proposed a new variation on the Turing Test, designed to identify true artificial intelligence. His Lovelace 2.0 test would require that an A.I. produce a range of creative work -- paintings, poems, designs -- that expert observers would find indistinguishable from the work of a human artist. Riedl's contention: If a machine can create art that is indistinguishable from human art, then the A.I. has achieved human-level intelligence.
Where on earth was that random photo taken? Google’s freaky new artificial intelligence machine can figure it out.
Computer vision specialist Tobias Weyand and his colleagues at Google created a deep-learning program called PlaNet, and trained it to identify locations where photos were taken based on visual cues.
Imagine the ultimate game of “Where in the World is Carmen Sandiego?” Only way harder. The Googlers started by dividing up the globe into a grid, excluding the oceans and polar regions. Then they created a database for PlaNet that contained 126 million geolocated photos pulled from the Internet, Technology Review reported.
Since PlaNet is an artificial neural network, it can learn. So the team taught the network to identify a photograph's location on the grid using only the information contained in its pixels.
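In spirit (the details below are assumptions, not Google's published setup), that turns geolocation into a giant classification problem: carve the map into cells, label each training photo with the cell it was taken in, and train a standard image classifier to pick the right cell. A rough sketch, using a uniform grid and an off-the-shelf PyTorch backbone in place of PlaNet's own adaptive grid and architecture:

```python
# Hypothetical sketch: geolocation framed as grid-cell classification.
# Assumes PyTorch + torchvision; grid size and backbone are illustrative.
import torch.nn as nn
import torchvision.models as models

CELL_DEG = 5.0                          # hypothetical cell size, in degrees
N_LON = int(360 / CELL_DEG)             # 72 columns of cells
N_LAT = int(180 / CELL_DEG)             # 36 rows of cells

def cell_id(lat: float, lon: float) -> int:
    """Map a latitude/longitude pair to an integer grid-cell label."""
    row = min(int((lat + 90.0) // CELL_DEG), N_LAT - 1)
    col = min(int((lon + 180.0) // CELL_DEG), N_LON - 1)
    return row * N_LON + col

# Any image classifier works; the "geography" lives entirely in the labels.
model = models.resnet18(weights="DEFAULT")
model.fc = nn.Linear(model.fc.in_features, N_LAT * N_LON)

# Training then looks like ordinary classification, e.g.:
#   loss = nn.CrossEntropyLoss()(model(photos), cell_labels)
```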
To test PlaNet’s accuracy, Weyand and his team fed it 2.3 million geotagged Flickr images. From there, PlaNet narrowed down 48 percent of them to the right continent, 28.4 percent to the right country, 10.1 percent to the right city, and 3.6 percent to the actual street.
OK, so maybe it can’t accurately locate every single random image on a map, but consider everything in that Flickr mixed bag: building interiors, pets, food. And while the results might not seem all that great at first, they became remarkable when the Google team pitted their machine against 10 smart, well-traveled humans.
The machine won more than half the rounds — and had better accuracy.
“PlaNet outperforms previous approaches and even attains superhuman levels of accuracy in some cases,” the team wrote in their abstract about the machine. You can test your own abilities with online games like GeoGuessr. Might want to set an alarm first, though, because that one is kind of addictive.
The team says that PlaNet doesn’t need much memory, either. Their model only uses 377 MB, which means it could go into a smartphone, Technology Review reported.
I remember trying out Google Goggles several years ago and quickly realized it was mostly limited to displaying info about well-known places. PlaNet has different potential. The technology could end up being like a Shazam for photo locations. You can run, Carmen Sandiego, but you can't hide from the Google machine.