
A.I. Assistant Lives In Your Ear

The Xperia Ear is standing by to give you hands-free phone, Internet, navigation and scheduling help.

In the 2013 movie "Her," the main character, Theodore (Joaquin Phoenix), falls in love with his operating system and digital assistant, Samantha (voiced by Scarlett Johansson). Throughout the movie, Theodore interacts with Samantha through a device he wears in his ear.

Sony's new Xperia Ear, due out this summer, looks a lot like Theodore's earpiece. It uses NFC or Bluetooth to communicate with a smartphone and responds to the wearer's verbal commands. You can ask it to make a call, search the Internet or help you navigate, as well as ask for information about your schedule, the weather and the news.


According to the press release, the device, which is made from soft silicone, is lightweight, water resistant and made for continuous wear. The case doubles as a charger and automatically charges the device when it's stored.

Sony introduced the Xperia Ear today, the first day of the Mobile World Congress technology show in Barcelona, saying that it wanted to expand the Xperia brand beyond tablets and phones as a way to bolster its market share.

But as a BBC report points out, people have resisted Bluetooth headsets up until now, and it's not clear they will suddenly latch onto one just because of some extra bells and whistles.


Maybe they would if the artificially intelligent voice sounded like Scarlett Johansson. Maybe.

In my own experience, anything worn in the ear tends to pop out, and anything with an ear clip is uncomfortable. So I don't think I'll be falling in love with an A.I. assistant anytime soon.

Here's a video with a little more detail.


A few weeks back, researchers with Google's artificial neural networks team issued a blog post about its A.I. system, Deep Dream, which could see pictures in clouds and even (arguably) create original art. Yesterday, a team of four engineering students at Hack Reactor announced via Popular Science that they were coming out with an app called Dreamify that used Deep Dream's source code to create psychedelic art out of ordinary images. It gets technical, and a little existential, but the basic gist is that by running an image recognition process in reverse, the system was able to generate original images rather than just identify them. After training the system with thousands of images of a particular object -- a starfish, say -- the team discovered that the neural network would identify "starfishy" elements in other, unrelated images. The results are trippy, to say the least. But Deep Dream is not the first computer to generate art. We take a look at it here, along with some other examples of machine-generated art.

See the Dreams of an Artificial Brain: Photos

Deep Dream can generate surprisingly compelling images, depending on what parameters are established when it first begins to process a picture. Each layer in a neural network builds on the ones beneath it, so running an image through lower layers tends to generate lines and simple patterns. In the higher-level layers, however, the network is looking for more sophisticated features and will tend to generate complex images and entire objects. When the Google team had Deep Dream process an image of a cloudy sky, it began creating images of fantastic hybrid animals like the "pig-snail" and the "camel-bird." Google's name for the process? "Inceptionism." In the image above, a neural network programmed to distinguish architectural and animal elements was cut loose on a landscape. The resulting output is therefore not based on any sample image -- it's purely a result of the A.I.'s "thoughts" on the issue.
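For the technically inclined, here is a minimal sketch of that layer-by-layer amplification in Python. It assumes PyTorch and a pretrained torchvision network rather than Google's original setup, and the layer index, step count and step size are illustrative choices, not Google's actual parameters.

```python
# A minimal sketch of the Deep Dream idea described above, using PyTorch and
# torchvision (an assumption -- not Google's original code or settings).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

def deep_dream(image_path, layer_index=20, steps=20, lr=0.05):
    # Pretrained image-recognition network; we only use its feature layers.
    features = models.vgg16(pretrained=True).features.eval()
    sub_net = features[:layer_index]  # lower layers amplify lines and textures,
                                      # higher layers amplify whole objects

    img = Image.open(image_path).convert("RGB")
    x = T.Compose([T.Resize(512), T.ToTensor()])(img).unsqueeze(0)
    x.requires_grad_(True)

    for _ in range(steps):
        activations = sub_net(x)
        # "Run recognition in reverse": gradient ascent on the *image* so the
        # chosen layer's activations grow, exaggerating what it already sees.
        loss = activations.norm()
        loss.backward()
        with torch.no_grad():
            x += lr * x.grad / (x.grad.abs().mean() + 1e-8)
            x.grad.zero_()
            x.clamp_(0, 1)  # keep pixel values in a displayable range
    return x.detach().squeeze(0)
```

Pointing the loop at an early layer tends to etch stripes and swirls into the picture, while a deeper layer is what conjures the pig-snails and camel-birds.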


Google has since published the DeepDream source code, putting A.I. artistry in the hands of the people. Almost immediately after the code was made public, enterprising engineers and hobbyists began creating tools to explore the possibilities of Google's neural network. Dreamscope is one of several Web apps that have popped up in recent days, and it looks like Instagram on powerful alkaloids. While Dreamscope doesn't give access to the full spectrum of Deep Dream's abilities, it does make the process quick and easy. Just upload an image, select one of the 19 provided filters, and you'll get your own A.I. art show within about 15 seconds. (The first wave of "user-friendly" Deep Dream tools took hours or even days to process an image.) Above is one of the world's most famous public domain images -- the 1970 meeting between Richard Nixon and Elvis Presley -- as run through Dreamscope's "demonic" filter. Captures the moment nicely, doesn't it?


The imagery Deep Dream produces is unique in terms of how it's produced, but machine-generated art -- sometimes called digital art or generative art -- has actually been around for quite a while. Probably the most familiar example is fractal art, in which dedicated software turns iterative equations into still images and animations. Fractals occur both in mathematics and in nature: recursive patterns repeat at different scales, so that a tiny sliver of a fern leaf looks much the same as the larger leaf itself. These repeating geometric patterns can be plotted mathematically, in two or three dimensions, then converted into lines, shapes and colors. The resulting images are virtually infinite in variety and complexity, depending on how you tweak each iteration of a fractal.
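If you're curious how fractal software actually does this, here is a tiny Python sketch (assuming NumPy and Matplotlib) that plots one famous example, the Mandelbrot set: iterate a simple equation for every point of the plane and color each point by how quickly it escapes. The resolution, bounds and color map below are arbitrary illustrative choices.

```python
# A minimal sketch of fractal rendering: iterate z -> z*z + c over a grid of
# complex points and color each point by its "escape time."
import numpy as np
import matplotlib.pyplot as plt

def mandelbrot(width=800, height=600, max_iter=100):
    xs = np.linspace(-2.5, 1.0, width)
    ys = np.linspace(-1.2, 1.2, height)
    c = xs[None, :] + 1j * ys[:, None]    # one complex number per pixel
    z = np.zeros_like(c)
    escape = np.zeros(c.shape, dtype=int)
    for i in range(max_iter):
        alive = np.abs(z) <= 2.0          # points that have not escaped yet
        z[alive] = z[alive] ** 2 + c[alive]
        escape[alive] = i                 # record how long each point survived
    return escape

plt.imshow(mandelbrot(), cmap="twilight", extent=(-2.5, 1.0, -1.2, 1.2))
plt.axis("off")
plt.savefig("mandelbrot.png", dpi=150, bbox_inches="tight")
```

Zoom into any edge of the resulting image and the same filigree reappears at ever smaller scales, which is exactly the self-similarity described above.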

Machine-generated art has been exhibited in galleries all over the world since at least the 1960s. But artists and historians have long disagreed over whether such exhibits are truly created by computers, or whether computers are simply another tool used by the human artist. Another open question: Can a machine-generated image or object even be called "Art"? British computer scientist Simon Colton has been exploring these questions with his A.I. project known as The Painting Fool. The A.I. system, adapted for exhibition in galleries, takes a digital picture of each visitor, then selects from thousands of abstract templates and image filters. The Painting Fool makes its choices depending upon processes that govern the machine's "mood" -- for instance, scanning text from a newspaper. If its mood is dark enough, it might not paint at all. The Painting Fool also learns from its mistakes, and Colton is continually adjusting the A.I.'s algorithms to meet his seven criteria for true creativity: skill, appreciation, imagination, learning, intentionality, reflection and invention. The program has recently branched out to start producing sculptures, animations and poetry.


But there's still that sticky question about Art, with a capital A. Even if a machine does generate original images -- or objects or manuscripts -- do these creations truly constitute artistic expression? The software system known as AARON, for instance, has been creating original artistic images since 1973. Developed by painter and computer scientist Harold Cohen, the program has gone through different stylistic periods in which it has created both highly abstract and highly representational images. AARON's drawings are created through a system of custom printing machines and have been exhibited at the Tate Gallery in London. But while Cohen describes AARON as an A.I., he has officially left the issue of Art as an open question. In an effort to resolve the issue, computer researcher Mark Riedl recently proposed a new variation on the Turing Test, designed to identify true artificial intelligence. His Lovelace 2.0 test would require that an A.I. produce a range of creative work -- paintings, poems, designs -- that expert observers would find indistinguishable from the work of a human artist. Riedl's contention: If a machine can create art that is indistinguishable from human art, then the A.I. has achieved human-level intelligence.
