The future is here: Cornell University's Personal Robotics Lab has a robot who "has learned to foresee human action in order to step in and offer a helping hand, or more accurately, roll in and offer a helping claw." In fact, watch this robot pour a beer:

Of course, hip New Yorkers have been ordering drinks from robot mixologists since 2008, but it takes time for these things to catch on in the provinces. From the Cornell press release:

Understanding when and where to pour a beer or knowing when to offer assistance opening a refrigerator door can be difficult for a robot because of the many variables it encounters while assessing the situation. A team from Cornell has created a solution.

Gazing intently with a Microsoft Kinect 3-D camera and using a database of 3-D videos, the Cornell robot identifies the activities it sees, considers what uses are possible with the objects in the scene and determines how those uses fit with the activities. It then generates a set of possible continuations into the future (such as eating, drinking, cleaning, or putting away) and finally chooses the most probable. As the action continues, the robot constantly updates and refines its predictions.
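The press release is light on details, but the loop it describes (propose candidate continuations, score them against what has been observed so far, pick the most probable, then revise as the action unfolds) can be sketched in a few lines. To be clear, this is a toy illustration under our own assumptions, not Cornell's actual model: the activity names, sub-activity labels, and scoring rule below are all made up.

```python
# Toy sketch of an anticipation loop: score candidate "continuations"
# against observed sub-activities, pick the most probable, and refine
# as new observations arrive. Illustrative only.

CANDIDATES = {
    "drinking":     ["reach_cup", "lift_cup", "move_to_mouth"],
    "pouring":      ["reach_bottle", "lift_bottle", "tilt_over_cup"],
    "cleaning":     ["reach_cloth", "wipe_surface"],
    "putting_away": ["reach_object", "carry_to_shelf", "release"],
}

def score(candidate_steps, observed_steps):
    """Toy likelihood: fraction of observed sub-activities that match the
    candidate's expected sequence, position by position."""
    if not observed_steps:
        return 1.0 / len(CANDIDATES)  # no evidence yet: uniform guess
    matches = sum(1 for o, c in zip(observed_steps, candidate_steps) if o == c)
    return matches / len(observed_steps)

def most_probable(observed_steps):
    """Rank candidate continuations and return the best (score, name) pair."""
    ranked = sorted(
        ((score(steps, observed_steps), name) for name, steps in CANDIDATES.items()),
        reverse=True,
    )
    return ranked[0]

# Simulate the robot updating its prediction as the action unfolds.
observed = []
for sub_activity in ["reach_bottle", "lift_bottle", "tilt_over_cup"]:
    observed.append(sub_activity)
    prob, best = most_probable(observed)
    print(f"after seeing {observed}: predicting '{best}' (score {prob:.2f})")
```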

“We extract the general principles of how people behave,” said Ashutosh Saxena, Cornell professor of computer science and co-author of a new study tied to the research. “Drinking coffee is a big activity, but there are several parts to it.” The robot builds a “vocabulary” of such small parts that it can put together in various ways to recognize a variety of big activities, he explained.
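The "vocabulary" idea is easy to picture: a small set of reusable sub-activities, combined in different orders, defines many bigger activities, which also explains why the robot can only narrow things down early in an action. Again, a minimal sketch with invented part and activity names, not the lab's real representation:

```python
# Toy "vocabulary" of small parts, reused across several big activities.

PARTS = {"reach", "grasp", "lift", "tilt", "move_to_mouth", "place", "release"}

ACTIVITIES = {
    "drinking_coffee": ["reach", "grasp", "lift", "move_to_mouth"],
    "pouring":         ["reach", "grasp", "lift", "tilt"],
    "putting_away":    ["reach", "grasp", "lift", "place", "release"],
}

def activities_containing(part):
    """Which big activities could an observed sub-activity belong to?"""
    assert part in PARTS, f"unknown part: {part}"
    return [name for name, parts in ACTIVITIES.items() if part in parts]

# The same small part appears in several big activities, so early
# observations only narrow the possibilities rather than decide.
print(activities_containing("lift"))  # all three activities
print(activities_containing("tilt"))  # only 'pouring'
```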

The findings will be presented in Atlanta and Berlin next month. Cornell adds that the "research was supported by the U.S. Army Research Office, the Alfred P. Sloan Foundation and Microsoft." While we're sure those parties and Cornell are interested in noble pursuits like helping the disabled, having a robot predict when you need a refill doesn't sound bad either.

Cornell also has a robot who can "organize" a room: