As the car drove through the environment collecting video and images, software identified the objects and conditions and annotated them.
Until now, images used to train artificial intelligence programs have been collected under real-world conditions and annotated manually.
Accurately annotating video and images is tedious work. What's more, infrequent events, such as a bus pulling out in front of you or a cyclist veering into your lane unexpectedly, are not always captured in real-world video or images, so artificial intelligence programs get little training on these conditions.
After collecting more than 213,000 images and video sequences in the virtual world, Ros and his team analyzed whether the synthetic data really improved an A.I.'s ability to recognize similar events in the real world. It turns out it did: the A.I.'s success rate rose from about 45 percent to around 55 percent.
To improve the software even more, Ros and his team are releasing all of the data produced by Synthia for non-commercial use, in the hope that other research teams will build on it.
That's good news for future self-driving vehicles, because safety is no game.