



ROBO SPACE
Robots learn to use tools by watching YouTube videos
by Staff Writers
College Park MD (SPX) Jan 15, 2015


University of Maryland computer scientist Yiannis Aloimonos (center) is developing robotic systems able to visually recognize objects and generate new behavior based on those observations. Image courtesy John T. Consoli.

Imagine having a personal robot prepare your breakfast every morning. Now, imagine that this robot didn't need any help figuring out how to make the perfect omelet, because it learned all the necessary steps by watching videos on YouTube. It might sound like science fiction, but a team at the University of Maryland has just made a significant breakthrough that will bring this scenario one step closer to reality.

Researchers at the University of Maryland Institute for Advanced Computer Studies (UMIACS) partnered with a scientist at National ICT Australia (NICTA), the country's Information and Communications Technology Research Centre of Excellence, to develop robotic systems that are able to teach themselves.

Specifically, these robots are able to learn the intricate grasping and manipulation movements required for cooking by watching online cooking videos. The key breakthrough is that the robots can "think" for themselves, determining the best combination of observed motions that will allow them to efficiently accomplish a given task.

The work will be presented on Jan. 29, 2015, at the Association for the Advancement of Artificial Intelligence Conference in Austin, Texas.

The researchers achieved this milestone by combining approaches from three distinct research areas: artificial intelligence, or the design of computers that can make their own decisions; computer vision, or the engineering of systems that can accurately identify shapes and movements; and natural language processing, or the development of robust systems that can understand spoken commands.

Although the underlying work is complex, the team wanted the results to reflect something practical and relatable to people's daily lives.

"We chose cooking videos because everyone has done it and understands it," said Yiannis Aloimonos, UMD professor of computer science and director of the Computer Vision Lab, one of 16 labs and centers in UMIACS. "But cooking is complex in terms of manipulation, the steps involved and the tools you use. If you want to cut a cucumber, for example, you need to grab the knife, move it into place, make the cut and observe the results to make sure you did them properly."

One key challenge was devising a way for the robots to parse individual steps appropriately, while gathering information from videos that varied in quality and consistency. The robots needed to be able to recognize each distinct step, assign it to a "rule" that dictates a certain behavior, and then string together these behaviors in the proper order.
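The parse-and-compose idea described above can be sketched roughly as follows. The step names, the rule table, and the noisy "wave_to_camera" detection are all invented for illustration; they are not taken from the researchers' actual system.

```python
# Hypothetical sketch: map recognized video steps to behavior "rules"
# and string them together in order, ignoring unrecognized steps.
RULES = {
    "grasp_knife":    "power_grasp(knife)",
    "position_knife": "move_to(cucumber)",
    "cut":            "slice_action()",
    "check":          "inspect_result()",
}

def compose_behavior(detected_steps):
    """Translate recognized steps into an ordered behavior program,
    skipping anything the vision system could not match to a rule."""
    program = []
    for step in detected_steps:
        rule = RULES.get(step)
        if rule is not None:
            program.append(rule)
    return program

# Steps as they might be extracted from a noisy cooking video:
steps = ["grasp_knife", "wave_to_camera", "position_knife", "cut", "check"]
print(compose_behavior(steps))
# → ['power_grasp(knife)', 'move_to(cucumber)', 'slice_action()', 'inspect_result()']
```

The point of the sketch is the two-stage structure: recognition assigns each observed step to a rule, and composition orders the resulting behaviors, so imperfect video input still yields a usable program.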

"We are trying to create a technology so that robots eventually can interact with humans," said Cornelia Fermuller, an associate research scientist at UMIACS.

"So they need to understand what humans are doing. For that, we need tools so that the robots can pick up a human's actions and track them in real time. We are interested in understanding all of these components. How is an action performed by humans? How is it perceived by humans? What are the cognitive processes behind it?"

Aloimonos and Fermuller compare these individual actions to words in a sentence. Once a robot has learned a "vocabulary" of actions, it can string them together in a way that achieves a given goal. In fact, this is precisely what distinguishes their work from previous efforts.

"Others have tried to copy the movements. Instead, we try to copy the goals. This is the breakthrough," Aloimonos explained. This approach allows the robots to decide for themselves how best to combine various actions, rather than reproducing a predetermined series of actions.

The work also relies on a specialized software architecture known as deep-learning neural networks. While this approach is not new, it requires lots of processing power to work well, and it took a while for computing technology to catch up. Similar versions of neural networks are responsible for the voice recognition capabilities in smartphones and the facial recognition software used by Facebook and other websites.

While robots have been used to carry out complicated tasks for decades--think automobile assembly lines--such machines must be carefully programmed and calibrated by human technicians. Self-learning robots could instead gather the necessary information by watching others, which is the same way humans learn. Aloimonos and Fermuller envision a future in which robots tend to the mundane chores of daily life while humans are freed to pursue more stimulating tasks.

"By having flexible robots, we're contributing to the next phase of automation. This will be the next industrial revolution," said Aloimonos. "We will have smart manufacturing environments and completely automated warehouses. It would be great to use autonomous robots for dangerous work--to defuse bombs and clean up nuclear disasters such as the Fukushima event. We have demonstrated that it is possible for humanoid robots to do our human jobs."




Related Links
University of Maryland
All about the robots on Earth and beyond!





