One of the busiest research areas in artificial intelligence concerns teaching machines to truly see and comprehend their surroundings, and researchers in labs all over the planet are attacking the problem with massively complex neural network systems.

Researchers at the University of Cambridge this week unveiled two newly developed programs in this area that could have a significant impact on the development of driverless cars. The complementary systems can analyze visual information from a passenger’s smartphone or an onboard vehicle camera, then use that data to help a car “see” and make decisions about its immediate surroundings.

The visual data system could augment or in some instances even replace existing GPS and laser sensor systems, researchers say. The key is in the neural network technology, which is designed to create a task-specific artificial intelligence for vehicle navigation.

The first system, known as SegNet, analyzes live video feed from a vehicle camera and instantly sorts objects from the field of view into 12 different categories — such as road, building, vehicle, pedestrian, bike, tree or sign. The developers say this new system currently labels more than 90 percent of pixels correctly, and is more accurate than expensive laser or radar-based systems.
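To make that pixel-accuracy figure concrete, here is a minimal sketch in plain Python of how a per-pixel labeling is scored against hand-labeled ground truth. The class list echoes the categories named above, but the toy label maps are made-up examples, not actual SegNet output:

```python
# Toy illustration of per-pixel accuracy scoring for semantic segmentation.
# Each pixel holds an index into CLASSES; accuracy is simply the fraction
# of pixels whose predicted class matches the hand-labeled class.

CLASSES = ["road", "building", "vehicle", "pedestrian", "bike", "tree",
           "sign", "sky", "fence", "pole", "sidewalk", "marking"]

def pixel_accuracy(predicted, ground_truth):
    """Fraction of pixels whose predicted class matches the hand label."""
    total = correct = 0
    for pred_row, true_row in zip(predicted, ground_truth):
        for p, t in zip(pred_row, true_row):
            total += 1
            correct += (p == t)
    return correct / total

# 3x4 toy label maps (indices into CLASSES) with one mislabeled pixel.
truth = [[0, 0, 2, 2],
         [0, 0, 2, 2],
         [1, 1, 1, 6]]
pred  = [[0, 0, 2, 2],
         [0, 0, 2, 3],   # one "vehicle" pixel mislabeled as "pedestrian"
         [1, 1, 1, 6]]

print(pixel_accuracy(pred, truth))  # 11 of 12 pixels correct, about 0.917
```

A real system computes the same statistic, just over millions of camera pixels per frame rather than a 3-by-4 grid.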

SegNet’s image recognition features are the result of an intensive machine learning process. Undergraduates at Cambridge trained the neural network system by manually labeling every pixel in 5,000 example images.

“It’s remarkably good at recognizing things in an image, because it’s had so much practice,” says researcher Alex Kendall, in press materials regarding the announcement.

The second system is built on a similar architecture to SegNet, but is designed to ascertain a vehicle’s location and orientation. This system uses the precise colors and geometry of incoming imagery to determine where the vehicle is, relative to the objects it sees.
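A "location and orientation" estimate like this is often represented as a 3-D position plus a unit quaternion for heading. The sketch below, in plain Python with made-up example values, shows how such an estimate can be compared against a known ground-truth pose (the quaternion-angle formula is standard; the specific numbers are hypothetical):

```python
import math

# A 6-degree-of-freedom camera pose: position (x, y, z) in meters plus
# orientation as a unit quaternion (w, x, y, z).

def pose_error(p1, q1, p2, q2):
    """Return (translation error in meters, rotation error in degrees)."""
    trans = math.dist(p1, p2)
    # Angle between two unit quaternions: theta = 2 * acos(|<q1, q2>|)
    dot = abs(sum(a * b for a, b in zip(q1, q2)))
    rot = math.degrees(2 * math.acos(min(1.0, dot)))
    return trans, rot

# Hypothetical example: the estimate is 22 cm off in position and
# rotated slightly about the vertical axis relative to ground truth.
estimate = ((1.0, 2.0, 0.5), (0.995, 0.0, 0.0998, 0.0))
truth    = ((1.2, 2.1, 0.5), (1.0, 0.0, 0.0, 0.0))

t_err, r_err = pose_error(*estimate, *truth)
print(round(t_err, 3), round(r_err, 1))  # 0.224 11.5
```

Keeping both errors small simultaneously is what makes visual localization hard: a system can know roughly where it is yet still face the wrong way, and vice versa.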

The localization system also works in places where GPS doesn’t — for example, inside tunnels or in dense urban areas where GPS is unreliable. Details are a little fuzzy on exactly how this would work as a navigational tool, but the developers have posted an interactive online demo of the technology, if you’re curious.

A second interactive demo for the SegNet system can also be accessed through the University of Cambridge website. Meanwhile, feel free to check out the YouTube demo video below. The research team is big on demos, clearly.