Intuitive, fluid motions are hard to get robots to perform on their own...up until now. And get this--the robots are teaching themselves. A new robotic demonstration from a company called OpenAI uses something called machine learning, specifically a neural network, to allow this robotic hand to perform a complicated series of independent object manipulations. That means the motions you're seeing here? The robot is doing them by itself, without any input from or control by a human, and without any direct programming to perform each action.
Machine learning is a subset of artificial intelligence. It's getting computers to perform tasks without being explicitly programmed to do them. Take one of the most advanced robots we have today, a robot that helps us perform surgeries. This robot is traditionally programmed, meaning it has to be explicitly told what to do every time. The programmer has to write, "if this happens, the machine does that," for every step of that robot's action. For tasks where that would be prohibitively time-intensive, machine learning algorithms can be used instead. These are algorithms that you can expose to vast quantities of data, from which they can 'learn' certain criteria and identify patterns.
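To make that contrast concrete, here is a toy sketch in Python. The sensor readings, thresholds, and actions are all made-up illustrations, not taken from any real surgical robot or from OpenAI's system; the point is just the difference between a rule the programmer writes by hand and a rule the machine picks out of labeled example data.

```python
def explicit_controller(sensor_reading):
    """Traditional programming: every case is spelled out by hand.
    (Hypothetical thresholds, for illustration only.)"""
    if sensor_reading < 0.3:
        return "advance"
    elif sensor_reading < 0.7:
        return "hold"
    else:
        return "retract"


def learn_threshold(examples):
    """A minimal 'learning' routine: instead of hand-writing the rule,
    search for the cutoff that best separates the labeled examples."""
    best_cut, best_correct = 0.0, -1
    for cut in (i / 100 for i in range(101)):
        correct = sum(
            (reading < cut) == (label == "advance")
            for reading, label in examples
        )
        if correct > best_correct:
            best_cut, best_correct = cut, correct
    return best_cut


# These labeled examples stand in for the "vast quantities of data".
data = [(0.1, "advance"), (0.2, "advance"), (0.6, "hold"), (0.8, "hold")]
cut = learn_threshold(data)
print(f"learned cutoff: {cut}")  # lands somewhere between 0.2 and 0.6
```

The explicit controller only ever does what was typed in; the learned cutoff comes from the data, so feeding it different examples would produce a different rule without anyone rewriting the code.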
So how is this applied in something like the robotic hand from OpenAI? In this situation the main data sets are all the different positions of the hand and the block. But the combination of all of those possibilities gives us far too many states for the robot to practice in real life and 'learn' each one.
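A quick back-of-the-envelope count shows why practicing every state physically is hopeless. The discretization below is hypothetical, the joint count is only roughly that of a dexterous robot hand, and none of these numbers come from OpenAI; the sketch just shows how fast the combinations explode.

```python
# Rough count of hand/block configurations (illustrative numbers only).
num_joints = 24          # roughly the joints in a dexterous robot hand
levels_per_joint = 10    # hypothetical coarse discretization per joint
block_orientations = 24  # the 24 rotational symmetries of a cube

hand_states = levels_per_joint ** num_joints
total_states = hand_states * block_orientations
print(f"{total_states:.2e} combined states")  # astronomically many
```

Even with only ten positions per joint, that's on the order of 10^25 combinations, which is why this kind of learning happens in simulation rather than through real-world trial and error.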