Researchers Tighten Focus on Robot Vision

Machine vision has long been expensive and difficult to implement, but researchers in Switzerland, Australia, and the U.S. are working on different methods to enable robots to see for themselves.

By John Edwards | May 17, 2016

As robots are designed to handle increasingly complex and precise tasks, the fields of machine vision, image processing, and pattern recognition are growing in importance. Researchers worldwide are investigating promising robot vision technologies, with the goal of creating robots that can navigate varied surroundings and recognize different types of objects with minimal or no human intervention.

Acoustic imaging reduces processor load

Researchers at ETH Zurich, for example, have created an acoustic imaging device designed to show only the contours and edges of an object, offering an alternative to generating a more complex and resource-draining photorealistic image. According to project leader Chiara Daraio, an ETH professor of mechanics and materials, the technique is intended for situations in which critical information about an object must be recorded quickly rather than captured as a fully detailed image.

A 3D-printed polymer structure with five resonance chambers. Microphones in the side holes on the left systematically scan a surface and generate an outline image from the measured sound data.

At the technology’s heart is a unique pipe-shaped polymer structure that’s produced on a 3D printer. The structure features a square cross-section interior that is divided into five adjoining resonance chambers linked via a series of small windows.

To create an outline image, the scientists bounce sound tuned to a specific frequency off an object. In recent tests, they attached the polymer structure, embedded with microphones, to a robot positioned close to the object’s surface. This allowed them to systematically scan the object’s entire surface and generate an outline image from the measured sound data.

The technology takes advantage of the fact that the acoustic field near an object’s edges is dominated by so-called evanescent waves. The ETH researchers devised a method that intensifies the evanescent waves and differentiates them from the larger sound waves that are reflected normally.

The tuned resonance supplied by the polymer structure intensifies the evanescent waves, while the adjoining chambers filter out the longer waves, enabling an object’s edges to be rapidly and precisely imaged.
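As a rough illustration of how such a scan might be turned into an outline, the short Python sketch below flags scan positions whose measured amplitude stands out from its local surroundings. It is a conceptual example only; the grid, window size, and threshold are assumptions for illustration, not details of the ETH processing chain.

    # Conceptual sketch (not ETH Zurich's actual pipeline): turn a grid of
    # microphone amplitudes, recorded as the resonator scans a surface, into a
    # binary outline image. Assumes the intensified evanescent field near edges
    # shows up as locally elevated amplitude at the tuned frequency.
    import numpy as np

    def outline_from_scan(amplitudes, percentile=90.0):
        """amplitudes: 2D array of measured sound amplitude, one value per scan position."""
        k = 5  # assumed size of the local averaging window
        padded = np.pad(amplitudes, k // 2, mode="edge")
        background = np.zeros_like(amplitudes)
        for i in range(amplitudes.shape[0]):
            for j in range(amplitudes.shape[1]):
                # Local mean approximates the slowly varying, normally reflected sound.
                background[i, j] = padded[i:i + k, j:j + k].mean()
        detail = np.abs(amplitudes - background)   # short-range variation near edges
        threshold = np.percentile(detail, percentile)
        return detail > threshold                  # True where an edge is likely

    # Synthetic example: a square object whose edges return intensified amplitude.
    scan = np.ones((64, 64))
    scan[20:44, 20:44] += 0.2
    scan[20, 20:44] = scan[43, 20:44] = 2.0
    scan[20:44, 20] = scan[20:44, 43] = 2.0
    print(outline_from_scan(scan).sum(), "scan positions flagged as edges")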
The robot vision project is currently at the proof-of-concept stage. According to Miguel Moleron, a postdoctoral researcher in Daraio’s group, the method still needs to be refined before it can be used in real-world robotic applications. Although the researchers used sound at an audible frequency in their tests, the technology might someday operate at ultrasonic frequencies.

Robot vision mimics insects

Researchers at Australia’s University of Adelaide are taking a cue from the way insects see and track their prey to develop enhanced robot vision systems.

“Detecting and tracking small objects against complex backgrounds is a highly challenging task,” observed research team member Zahra Bagheri, a mechanical engineering doctoral student. “Robotics engineers still dream of providing robots with the combination of sharp eyes, quick reflexes, and flexible muscles.”

Research conducted in the laboratory of Steven Wiederman, a neuroscientist at the University of Adelaide’s School of Medical Sciences, has shown that flying insects such as dragonflies exhibit a remarkable level of visually guided behavior when chasing mates or prey.

“They perform this task despite their low visual acuity and a tiny brain, around the size of a grain of rice,” Bagheri said. “The dragonfly chases prey at speeds up to 60 km/h, capturing them with a success rate over 97 percent.”

University of Adelaide Ph.D. student Zahra Bagheri and supervisor Prof. Benjamin Cazzolato with a mobile robot featuring a vision system that uses algorithms based on insect vision.

The researchers created a unique algorithm designed to emulate insect visual tracking. “Instead of just trying to keep the target perfectly centered in its field of view, our system locks on to the background and lets the target move against it,” Bagheri said.

The approach reduces background distraction while providing time for the underlying brain-like motion processing to work. In virtual reality tests, the researchers found that the algorithm works just as well as state-of-the-art target-tracking algorithms while running up to 20 times faster.

“This type of performance can allow for real-time applications using quite simple processors,” Wiederman said. “We are currently transferring the algorithm to a hardware platform, a bio-inspired, autonomous robot.”
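A minimal Python sketch of that “lock onto the background” idea, using a brute-force shift search and synthetic frames rather than anything from the Adelaide implementation, might look like this:

    import numpy as np

    def estimate_background_shift(prev, curr, max_shift=5):
        """Find the whole-pixel translation that best aligns curr's background with prev."""
        best, best_err = (0, 0), np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                err = np.mean((np.roll(curr, (-dy, -dx), axis=(0, 1)) - prev) ** 2)
                if err < best_err:
                    best, best_err = (dy, dx), err
        return best

    def detect_target(prev, curr):
        """Lock onto the background, then find the small object still moving against it."""
        dy, dx = estimate_background_shift(prev, curr)
        stabilized = np.roll(curr, (-dy, -dx), axis=(0, 1))   # cancel the background's motion
        residual = np.abs(stabilized - prev)                  # only the target's motion remains
        return np.unravel_index(np.argmax(residual), residual.shape)

    # Synthetic example: the background drifts by (2, 3) pixels while a small
    # bright target appears elsewhere in the new frame.
    rng = np.random.default_rng(1)
    prev = rng.uniform(size=(120, 160))
    curr = np.roll(prev, (2, 3), axis=(0, 1))
    curr[70:72, 90:92] += 2.0
    print(detect_target(prev, curr))   # roughly (68, 87) in background-stabilized coordinates

Because the background dominates the scene, the best-fitting shift locks onto the background rather than the target, which is precisely what lets the small moving object stand out in the stabilized residual.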

Robot vision gains multiple perspectives

Enabling household robots to recognize things faster and more accurately by imaging objects from multiple perspectives is the goal of researchers in the MIT Computer Science and Artificial Intelligence Laboratory. The researchers began their machine vision investigation by using a common algorithm that can combine different perspectives to recognize four times as many objects as an algorithm that uses a single perspective.

They then turned to a new algorithm that is just as accurate but can be up to 10 times as fast, making it much more practical for use in household robots.

“If you just took the output of looking at things from one viewpoint, there’s a lot of stuff that might be missing, or it might be the angle of illumination or something blocking the object that causes a systematic error in the detector,” said Lawson Wong, a graduate student in electrical engineering and computer science and lead author on the new paper. “One way around that is just to move around and go to a different viewpoint.”

Wong, working with thesis advisors Leslie Kaelbling, a computer science and engineering professor at the Massachusetts Institute of Technology, and Tomás Lozano-Pérez, a professor of teaching excellence in the MIT School of Engineering, created scenarios in which 20 to 30 different images of household objects were placed near one another on a table. The first algorithm, developed years ago for various types of tracking systems, was used to analyze pairs of successive images and then create multiple hypotheses about which objects in one image correspond to objects in the other.

As new perspectives were added, the number of hypotheses rose. The drawback to the approach is that the algorithm must reject all but the most likely hypotheses at each step, a time-consuming task that is far from ideal for real-world robotic applications.
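The toy Python sketch below, which is illustrative only and not the MIT code, shows the flavor of that baseline: it enumerates every possible correspondence between equal-sized detection lists from two views, scores each hypothesis, and keeps only the most likely few.

    from itertools import permutations
    import numpy as np

    def pairwise_hypotheses(view_a, view_b, keep=5):
        """view_a, view_b: one feature vector per detected object (equal counts assumed).
        Returns the `keep` most likely assignments of view_b objects to view_a objects."""
        hypotheses = []
        for perm in permutations(range(len(view_b))):
            # Score a hypothesis by how similar its matched feature vectors are.
            score = -sum(np.linalg.norm(view_a[i] - view_b[j]) for i, j in enumerate(perm))
            hypotheses.append((score, perm))
        hypotheses.sort(reverse=True)        # prune: keep only the top hypotheses
        return hypotheses[:keep]

    # Example: three objects seen from two viewpoints, shuffled and slightly noisy.
    rng = np.random.default_rng(0)
    view_a = rng.normal(size=(3, 8))
    view_b = view_a[[2, 0, 1]] + rng.normal(scale=0.05, size=(3, 8))
    print(pairwise_hypotheses(view_a, view_b)[0])   # best hypothesis recovers the shuffle (1, 2, 0)

With n detections there are n! hypotheses per image pair, and the survivors multiply again with every added viewpoint, which is why this baseline has to prune so aggressively at each step.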

Addressing this issue, the researchers developed an algorithm that doesn’t reject any of the hypotheses generated over successive images. The algorithm also doesn’t attempt to fully evaluate every hypothesis, the step primarily responsible for slowing down the final output.

Instead, the algorithm samples the hypotheses at random, taking advantage of the considerable overlap between them. Enough samples, the researchers believe, should supply a consensus on the correspondences between the objects in any two successive images.

To keep the required number of samples low, the researchers adopted a simplified technique for evaluating hypotheses. In testing, the new algorithm cut the 304 comparisons performed by the first algorithm down to just 20.

On the downside, however, the shortcut can produce nonsensical results, with the algorithm inadvertently mapping two objects onto the same counterpart. To guard against such false results, the new algorithm automatically searches for potential double mappings and re-evaluates them. The check demands additional time, yet the new algorithm is still far more efficient than its older counterpart: with the safety process in place, it performed 32 comparisons, more than 20 but still far fewer than 304.
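A hedged Python sketch of that sampling strategy, with the details assumed for illustration rather than taken from the MIT paper, might look like the following: randomly drawn correspondence hypotheses vote on per-object matches, and any object that ends up claimed twice is re-evaluated directly.

    import random
    from collections import Counter

    def sampled_consensus(score, n_objects, n_samples=200, seed=0):
        """score(i, j) -> cheap similarity between object i in one view and object j in the next."""
        rng = random.Random(seed)
        votes = Counter()
        objects = list(range(n_objects))
        for _ in range(n_samples):
            perm = objects[:]
            rng.shuffle(perm)                  # one randomly sampled correspondence hypothesis
            for i, j in enumerate(perm):
                votes[(i, j)] += score(i, j)   # overlapping hypotheses reinforce shared matches
        # Consensus match for each object in the first view.
        match = {i: max(objects, key=lambda j: votes[(i, j)]) for i in objects}
        # Guard against nonsensical double mappings: two objects claiming one counterpart.
        claimed = Counter(match.values())
        for i in [k for k, j in match.items() if claimed[j] > 1]:
            match[i] = max(objects, key=lambda j: score(i, j))   # re-evaluate contested objects
        return match

    # Toy example: object i truly corresponds to object (i + 1) % 4 in the next view.
    truth = {0: 1, 1: 2, 2: 3, 3: 0}
    print(sampled_consensus(lambda i, j: 1.0 if truth[i] == j else 0.1, n_objects=4))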

Going deep for robot vision

Depth-sensing cameras, like the type included in the popular Microsoft Kinect video game controller, are widely used as 3D sensors in a variety of applications, including machine vision. A new imaging technology developed by researchers at the Carnegie Mellon University Robotics Institute and the University of Toronto Institute for Robotics and Mechatronics aims to resolve an important drawback found in such cameras: an inability to work in bright light, particularly sunlight.

A new depth-sensing camera technology developed by CMU and the University of Toronto can capture 3D information, such as this face, in full sunlight. Conventional depth cameras are typically blinded by bright light.

The researchers have created a mathematical model that helps the camera and its light source work together more efficiently, removing unwanted light that only serves to wash out the signals needed to detect an object’s contours.

“We have a way of choosing the light rays we want to capture and only those rays,” said Srinivasa Narasimhan, a CMU associate professor of robotics. “We don’t need new image-processing algorithms, and we don’t need extra processing to eliminate the noise, because we don’t collect the noise.” All of the work is performed by the sensor.

Depth-sensing cameras function by projecting a dot pattern over a specific area. The researchers’ goal was to record only the light from a specific spot while it is being illuminated, rather than attempting to pick out the spot from an entire brightly lit space.

A prototype system, based on the new concept, automatically synchronizes a small laser projector with an ordinary rolling-shutter camera, the type of camera found in nearly all smartphones and tablets. The camera picks up light only from the points that are illuminated by the laser as it scans across the area, making it possible for the camera to work under extremely bright, reflected, or diffused light.

The researchers say their approach is also extremely energy-efficient: the mathematical framework can compute codes that optimize the amount of energy that reaches the camera.
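The back-of-the-envelope Python calculation below illustrates why synchronizing each camera row with the sweeping laser suppresses sunlight so effectively; the photon rates are assumed values chosen for illustration, not figures from the CMU and University of Toronto prototype.

    # Assumed illumination model: the laser sweeps all camera rows once per frame,
    # dwelling on each row for an equal slice of the frame time.
    rows = 480                      # camera rows swept by the laser once per frame
    frame_time = 1.0 / 30           # full-frame exposure time in seconds
    row_time = frame_time / rows    # time the laser line dwells on any one row

    laser_rate = 2.0e4              # photons per second returned from the laser line (assumed)
    ambient_rate = 1.0e6            # photons per second from direct sunlight (assumed)

    # The laser illuminates a given row only while sweeping past it, so the laser
    # photons collected by that row are the same in both capture modes.
    laser_photons = laser_rate * row_time

    # Conventional capture: every row also integrates sunlight for the whole frame.
    ambient_conventional = ambient_rate * frame_time
    # Synchronized capture: a row is exposed only while the laser is on it.
    ambient_synchronized = ambient_rate * row_time

    print("signal-to-ambient, conventional :", laser_photons / ambient_conventional)
    print("signal-to-ambient, synchronized :", laser_photons / ambient_synchronized)
    print("improvement factor              :", ambient_conventional / ambient_synchronized)  # equals rows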
Autonomous cars, gliding down roads in bright sunshine, could use the new depth-sensing camera technology to detect upcoming obstacles and maintain spacing with other self-driving vehicles. Since depth cameras actively illuminate scenes, the devices could also be a big help for robots operating in near-total darkness, such as inside mines, caves, and craters.
