Such thoughtfulness is the result of patient parenting. The machine was taught using a database of 120 3D videos featuring people performing various household activities.
By translating these activities into robot language (i.e., mathematical models), the server robot can identify drinking, eating, cleaning and putting items away. It also associates certain objects with particular activities.
When faced with a new situation, the robot uses its Microsoft Kinect 3D camera to compare what it observes in the real world to what it learned from the videos.
Even if the actions it sees are slightly different from the model it has on file, the robot still understands what it's observing and can predict what will happen next. This makes it more useful than robots that blindly carry out a preprogrammed plan.
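The article doesn't publish the robot's actual algorithm, but the basic idea of matching a slightly-off observation to a learned activity model, then predicting the likely next step, can be sketched roughly like this. All the feature names, numbers and activity labels below are hypothetical, chosen only for illustration:

```python
# Illustrative sketch only: match an observed motion, described as a feature
# vector, to the closest learned activity model, then look up the sub-action
# that typically follows. All values here are made up for the example.
import math

# "Learned" activity models: one feature vector per activity (say, hand
# height, distance of hand to mouth, proximity to a held object), plus the
# sub-action that usually comes next.
ACTIVITY_MODELS = {
    "drinking": {"features": [0.9, 0.1, 0.8], "next": "put cup down"},
    "eating":   {"features": [0.7, 0.2, 0.9], "next": "reach for plate"},
    "cleaning": {"features": [0.2, 0.9, 0.3], "next": "wipe surface"},
}

def classify_and_predict(observed):
    """Return the closest stored activity and its predicted next step,
    even when the observation doesn't match any model exactly."""
    best, best_dist = None, float("inf")
    for name, model in ACTIVITY_MODELS.items():
        dist = math.dist(observed, model["features"])  # Euclidean distance
        if dist < best_dist:
            best, best_dist = name, dist
    return best, ACTIVITY_MODELS[best]["next"]

# A slightly noisy observation of someone drinking still matches "drinking".
activity, prediction = classify_and_predict([0.85, 0.15, 0.75])
print(activity, "->", prediction)  # → drinking -> put cup down
```

A real system would, of course, work from continuous 3D skeleton and object tracks rather than a single hand-built vector, but the nearest-match-then-predict structure is the same idea the article describes.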
If, for example, the bot sees you drinking a cup of coffee, it can wait until the appropriate time to refill your cup. The robot knows that when you reach for the cup, you will most likely move it to your lips and then put it back down. This prediction helps it avoid making lap-scalding mistakes.