UCSD Researchers Give Computers Common Sense

Looking at the photo above, you see a person on a tennis court, wielding a tennis racket and chasing a...lemon. Right? Wrong. You don't think it's a lemon. You know it's a tennis ball. Computers with the latest image labeling algorithms don't have the contextual wits to know a lemon is very unlikely in this scene. UCSD computer scientists are looking to change that. Image credit: UC San Diego
by Staff Writers
San Diego CA (SPX) Oct 18, 2007
Using a little-known Google Labs widget, computer scientists from UC San Diego and UCLA have brought common sense to an automated image labeling system. This common sense is the ability to use context to help identify objects in photographs. For example, if a conventional automated object identifier has labeled a person, a tennis racket, a tennis court and a lemon in a photo, the new post-processing context check will re-label the lemon as a tennis ball.

"We think our paper is the first to bring external semantic context to the problem of object recognition," said computer science professor Serge Belongie from UC San Diego.

The researchers show that the Google Labs tool called Google Sets can be used to provide external contextual information to automated object identifiers. The paper will be presented on Thursday 18 October 2007 at ICCV 2007 - the 11th IEEE International Conference on Computer Vision in Rio de Janeiro, Brazil.

Google Sets generates lists of related items or objects from just a few examples. If you type in John, Paul and George, it will return the words Ringo, Beatles and John Lennon. If you type "neon" and "argon," it will give you the rest of the noble gases.

"In some ways, Google Sets is a proxy for common sense. In our paper, we showed that you can use this common sense to provide contextual information that improves the accuracy of automated image labeling systems," said Belongie.

The image labeling system is a three-step process. First, an automated system splits the image up into different regions through the process of image segmentation. In the photo above, image segmentation separates the person, the court, the racket and the yellow sphere.

Next, an automated system provides a ranked list of probable labels for each of these image regions.

Finally, the system adds a dose of context by evaluating all the possible combinations of labels within the image and choosing the combination that maximizes the contextual agreement among the labeled objects.

It is during this step that Google Sets can be used as a source of context that helps the system turn a lemon into a tennis ball. In this case, these "semantic context constraints" helped the system disambiguate between visually similar objects.
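As an illustration of this final step (a minimal sketch, not the authors' actual algorithm), the snippet below enumerates every combination of candidate labels across the segmented regions and keeps the combination with the best combined appearance and context score. The candidate lists, scores and the context_score table are made-up placeholders; a real system would draw them from the object identifier and from a Google Sets-style source of related items.

```python
from itertools import product

# Hypothetical inputs: for each segmented region, a ranked list of
# (label, classifier_score) candidates. Region 4 is the ambiguous
# yellow sphere from the tennis-court example.
candidates = {
    "region_1": [("person", 0.95)],
    "region_2": [("tennis racket", 0.90)],
    "region_3": [("tennis court", 0.88)],
    "region_4": [("lemon", 0.60), ("tennis ball", 0.55)],
}

def context_score(label_a, label_b):
    """Toy compatibility score; a real system would look this up in a
    co-occurrence table or a Google Sets-style related-items list."""
    related = {
        frozenset({"tennis racket", "tennis ball"}): 1.0,
        frozenset({"tennis court", "tennis ball"}): 1.0,
        frozenset({"person", "tennis ball"}): 0.8,
        frozenset({"tennis racket", "lemon"}): 0.1,
        frozenset({"tennis court", "lemon"}): 0.1,
        frozenset({"person", "lemon"}): 0.3,
    }
    return related.get(frozenset({label_a, label_b}), 0.5)

def best_labeling(candidates):
    """Pick the label combination with the best appearance + context score."""
    regions = list(candidates)
    best, best_total = None, float("-inf")
    # Enumerate every combination of candidate labels across regions.
    for combo in product(*(candidates[r] for r in regions)):
        labels = [label for label, _ in combo]
        appearance = sum(score for _, score in combo)
        context = sum(
            context_score(labels[i], labels[j])
            for i in range(len(labels))
            for j in range(i + 1, len(labels))
        )
        total = appearance + context
        if total > best_total:
            best, best_total = dict(zip(regions, labels)), total
    return best

print(best_labeling(candidates))  # region_4 comes out as "tennis ball"
```

With these toy numbers the "lemon" hypothesis wins on appearance alone, but loses once the pairwise context terms are added in, which is the behavior the article describes.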

In another example, the researchers show that an object originally labeled as a cow is (correctly) re-labeled as a boat when the other objects in the image - sky, tree, building and water - are considered during the post-processing context step. In this case, the semantic context constraints helped to correct an entirely wrong image label. The context information came from the co-occurrence of object labels in the training sets rather than from Google Sets.
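To make the idea of co-occurrence context concrete, here is a small sketch that assumes only that training images come with lists of the object labels they contain (the label sets below are invented for illustration). Counting how often pairs of labels appear together gives the kind of signal that pulls an ambiguous region toward "boat" in a scene full of sky, water and buildings.

```python
from collections import Counter
from itertools import combinations

# Hypothetical training annotations: the set of object labels present
# in each training image.
training_labels = [
    {"sky", "tree", "building", "water", "boat"},
    {"sky", "water", "boat"},
    {"sky", "tree", "cow", "grass"},
    {"grass", "cow", "tree"},
]

cooccurrence = Counter()
for labels in training_labels:
    for a, b in combinations(sorted(labels), 2):
        cooccurrence[(a, b)] += 1

def cooccur(a, b):
    """How often two labels appeared together in a training image."""
    a, b = sorted((a, b))
    return cooccurrence[(a, b)]

# "boat" co-occurs with sky, water and building; "cow" barely does,
# so a scene containing those labels pulls the ambiguous region to "boat".
scene = ["sky", "tree", "building", "water"]
for candidate in ("cow", "boat"):
    print(candidate, sum(cooccur(candidate, other) for other in scene))
```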

The computer scientists also highlight other advances they bring to automated object identification. First, instead of doing just one image segmentation, the researchers generated a collection of image segmentations and put together a shortlist of stable image segmentations. This increases the accuracy of the segmentation process and provides an implicit shape description for each of the image regions.
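The paper's specific stability criterion is not spelled out in this article, so the following is only a sketch of the general idea, assuming segmentations are label maps over the same pixel grid: each candidate segmentation is scored by how well its regions are reproduced by the other segmentations in the collection, and the most consistent ones form the shortlist.

```python
import numpy as np

def best_overlap(seg_a, seg_b):
    """Average, over the regions of seg_a, of the best Jaccard overlap
    with any region of seg_b."""
    scores = []
    for label_a in np.unique(seg_a):
        mask_a = seg_a == label_a
        best = 0.0
        for label_b in np.unique(seg_b):
            mask_b = seg_b == label_b
            inter = np.logical_and(mask_a, mask_b).sum()
            union = np.logical_or(mask_a, mask_b).sum()
            best = max(best, inter / union)
        scores.append(best)
    return float(np.mean(scores))

def shortlist(segmentations, k=3):
    """Rank each segmentation by its mean agreement with the others
    and keep the k most stable ones (a placeholder stability measure,
    not necessarily the one used in the paper)."""
    stability = [
        np.mean([best_overlap(s, t) for j, t in enumerate(segmentations) if j != i])
        for i, s in enumerate(segmentations)
    ]
    order = np.argsort(stability)[::-1]
    return [segmentations[i] for i in order[:k]]
```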

Second, the researchers ran their object categorization model on each of the segmentations, rather than on individual pixels. This dramatically reduced the computational demands on the object categorization model.
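A minimal sketch of this design choice, with placeholder names and an unspecified classifier: pixel features are pooled over each segment so the categorization model runs once per region rather than once per pixel.

```python
import numpy as np

def classify_segments(features, segmentation, classifier):
    """features: (H, W, D) per-pixel features; segmentation: (H, W) region ids;
    classifier: any callable mapping a pooled descriptor to a label."""
    labels = {}
    for region_id in np.unique(segmentation):
        mask = segmentation == region_id
        pooled = features[mask].mean(axis=0)     # one descriptor per region
        labels[region_id] = classifier(pooled)   # one model call per region,
                                                 # not one per pixel
    return labels
```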

In the two sets of images the researchers tested, the categorization results improved considerably with the inclusion of context. For one image dataset, the average categorization accuracy increased by more than 10 percent using the semantic context provided by Google Sets. In a second dataset, the average categorization accuracy improved by about 2 percent using the semantic context provided by Google Sets. The improvements were larger when the researchers drew the context information from the co-occurrence of object labels in the object identifier's training data set.

Right now, the researchers are exploring ways to extend context beyond the mere presence of objects in the same image. For example, they want to make explicit use of absolute and relative geometric relationships between objects in an image, such as "above" or "inside" relationships. This would mean that if a person were sitting on top of an animal, the system would consider the animal more likely to be a horse than a dog.
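As a hypothetical illustration of such a geometric constraint (not code from the paper), the sketch below tests a simple "above" relationship between two bounding boxes and uses it to favor one label hypothesis over another.

```python
def is_above(box_top, box_bottom):
    """True if box_top sits vertically above box_bottom (y grows downward)
    and the two boxes overlap horizontally."""
    x0a, y0a, x1a, y1a = box_top
    x0b, y0b, x1b, y1b = box_bottom
    horizontal_overlap = min(x1a, x1b) > max(x0a, x0b)
    return y1a <= y0b and horizontal_overlap

# Placeholder boxes in (x0, y0, x1, y1) pixel coordinates.
person_box = (100, 40, 160, 120)
animal_box = (90, 120, 180, 220)

if is_above(person_box, animal_box):
    # A person sitting on top of an animal makes "horse" more plausible
    # than "dog"; a real system would fold this into the context score.
    print("boost the 'horse' hypothesis for the lower region")
```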
