October 14, 2011      

Most people would say that picking a pepper off a vine is simpler than assembling a Harley-Davidson engine. Yet for robots, the opposite is true. To build the engine, a robot would start with a programmed order of operations. It would know the location of each part on a tray, how to grasp it, where it goes, and how to attach it to the motor. All the parts (cylinders, rings, tappets, rods) would match their required specifications and fit where they belonged.

Now consider picking a pepper. Unlike engine parts, peppers come in a variety of shapes, sizes, locations, and colors. Their color appears to change with clouds, rain, dusk, and other lighting conditions. Not every pepper is ripe for picking. Some may even be diseased.

And as if those weren’t enough variables for the robot to deal with, once it successfully identified a pepper, the robot would still have to grasp and pick it, posing more problems. Nature’s presentation is messy. Peppers grow in different orientations. Leaves obscure some from view. Branches block others. A robot must calculate each pick, then remove the fruit with enough force to dislodge it, but not so much that it causes damage.

And of course, to be economically viable as a commercial pepper picker, the robot would need to make all these calculations in seconds or less.

Classic challenge

“Agriculture is a classical challenge for roboticists because the tasks are really difficult,” Sigal Berman says. Berman is a lecturer and researcher in the Department of Industrial Engineering at Ben-Gurion University of the Negev, Beer Sheva, Israel. She is part of a multidisciplinary agricultural robot team led by Yael Edan. Other members include Helman Stern, Amir Shapiro, and Ohad Ben-Shahar.

The team is developing sensing and grasping algorithms for Clever Robots for Crops (CROPS, www.crops-robots.eu), a European consortium tasked with creating an autonomous modular robot with two configurable manipulators. The device would be able to spray agricultural chemicals while also harvesting such high-value crops as greenhouse vegetables, orchard fruits, and wine grapes.

Edan also leads a second European collaboration, the Interactive Robotics Research Network (INTRO, http://introbotics.eu), which aims to develop intelligent robots with enough cognitive and multimodal interaction abilities to cooperate with humans. Plus, she works on Israeli government-sponsored projects aimed at creating robots able to work in greenhouses, fields, and vineyards.

Edan began working on agricultural robots as a graduate student at the Technion-Israel Institute of Technology and at Purdue University in the late 1980s. Her prototype melon picker was one of the first robots to use a vision system to select targets. “It was way too expensive, and the old computers were too slow, but it had 88 percent detection performance,” she recalls.

Cost and speed remained issues for nearly 20 years. “In the last three to five years, things have been picking up. Computers are powerful and cheap, and the sensing technology is out there. This time, we’re going to put our agricultural expertise to use, get our algorithms to work, and then implement,” she says.

Spraying

The resulting cheaper robots could wind up saving farmers money and helping the environment. Robotic spraying, for example, could reduce what farmers spend on agricultural chemicals.

Today, farmers apply these chemicals with a continuous spray. An automated system, on the other hand, would spray only leaves and fruits and skip the empty spots in between, resulting in an 80 percent decrease in pesticide use, Edan estimates. The technology would thus reduce both the amount of pesticide applied to crops and the amount getting into groundwater.

Edan has several spray projects under way. She is part of a large collaboration that hopes to develop a tractor-pulled smart sprayer for Case New Holland as part of the CROPS project.

Her team is also building an autonomous sprayer. The prototype is a small four-wheeled tractor. It has a vertical pipe with seven spray heads and a pan/tilt nozzle on one side. The heads turn on or off, depending on whether the robot detects a target.
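In rough outline, that on/off logic can be expressed in a few lines. The sketch below is a minimal illustration, assuming a binary detection mask from the vision system; the image geometry and the 30 percent coverage threshold are assumptions for illustration, not details of the CROPS sprayer itself.

```python
# Minimal sketch: map a binary "target" mask from the vision system to
# on/off commands for a vertical stack of spray heads. NUM_HEADS and the
# coverage threshold are illustrative assumptions.
import numpy as np

NUM_HEADS = 7             # nozzles stacked on the sprayer's vertical pipe
COVERAGE_THRESHOLD = 0.3  # fraction of a band that must contain plant material

def head_commands(target_mask: np.ndarray) -> list:
    """Each head covers one horizontal band of the camera image; a head
    fires only if enough of its band is covered by detected foliage/fruit."""
    bands = np.array_split(target_mask, NUM_HEADS, axis=0)
    return [bool(band.mean() >= COVERAGE_THRESHOLD) for band in bands]

# Example: foliage detected in the middle of the frame -> middle heads fire.
mask = np.zeros((70, 100))
mask[25:45, :] = 1.0
print(head_commands(mask))  # [False, False, True, True, True, False, False]
```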

Edan’s team builds on existing autonomous driving programs, like the one John Deere is testing with an autonomous farming combine. As part of that effort, the researchers are developing adaptive algorithms so visual systems can compensate for clouds, rain, shadows, and other changes in visibility, with the goal of improving positioning accuracy.

Sprayers are likely to find early application in the Negev Desert’s large greenhouses. Inside these big enclosures, navigation is easier, since the robots can simply run along tracks or use embedded devices to determine their position. Vineyards, a more complex challenge, would come next.

Vision

Besides navigating the uneven terrain typical of vineyards, agricultural robots need a sophisticated vision system to distinguish between crops, leaves, and empty space. This would enable them, for example, to apply herbicides to weeds, pesticides to leaves and fruit, and growth promoters only to fruit.
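The decision rule itself is simple once the vision system has done its work; a sketch of the per-target dispatch might look like the following, where the class labels and chemical assignments are illustrative assumptions.

```python
# Sketch: map a detected region class to the chemical to apply.
# Labels and assignments are assumptions drawn from the article's example.
TREATMENT = {
    "weed":  "herbicide",
    "leaf":  "pesticide",
    "fruit": "pesticide + growth promoter",
    "empty": None,  # skip the gaps between plants entirely
}

def select_treatment(detected_class):
    """Return the chemical(s) for a detected region, or None to skip."""
    return TREATMENT.get(detected_class)

print(select_treatment("weed"))   # herbicide
print(select_treatment("empty"))  # None
```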

Since the 1990s, Edan notes, researchers have developed many sensing algorithms based on shape, surface texture, and color. Edan’s group plans to complement them with two tweaks of its own.

The first is the use of what’s known as hyperspectral imaging. This passive imaging technique uses advanced sensors and powerful computers to slice the spectrum, from infrared through visible to ultraviolet, into dozens or even hundreds of contiguous regions.

The system then creates images from each of those slices and recombines the data into three-dimensional “hyperspectral data cubes” for analysis.
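A data cube is easiest to picture as an array with two spatial axes and one spectral axis. The sketch below illustrates the structure with synthetic data; the 120-band, 400-1000 nm range is an assumption for illustration, since real sensors and band counts vary.

```python
# Minimal sketch of a hyperspectral "data cube": height x width x bands.
# The band count and wavelength range are illustrative assumptions.
import numpy as np

H, W, BANDS = 64, 64, 120
wavelengths = np.linspace(400, 1000, BANDS)  # nm, visible through near-IR
cube = np.random.rand(H, W, BANDS)           # stand-in for real sensor data

# One slice of the cube is an ordinary grayscale image at a single band:
idx_650 = int(np.argmin(np.abs(wavelengths - 650)))
red_image = cube[:, :, idx_650]              # reflectance near 650 nm

# One pixel, read across all bands, is a full reflectance spectrum:
spectrum = cube[10, 20, :]                   # the chemical "fingerprint"
print(red_image.shape, spectrum.shape)       # (64, 64) (120,)
```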

The data cubes will enable robots to check for chemical “tells” that differentiate a pepper from an apple or a leaf. Edan believes this will improve detection accuracy when fused with data from algorithms that detect fruit in images partially obscured by leaves and branches. “We can look at parts of the spectrum and see if a fruit is ripe,” Edan says. The same technology could also detect early signs of plant disease.
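One common way to check such a spectral tell is a normalized-difference index between two bands. The following is a hedged sketch of that idea; the specific bands (680 nm chlorophyll absorption versus an 800 nm near-infrared plateau) and the 0.6 threshold are illustrative assumptions, not values from Edan’s work.

```python
# Sketch: a per-pixel normalized-difference index as a ripeness "tell".
# Band choices and threshold are illustrative assumptions.
import numpy as np

def band(cube, wavelengths, target_nm):
    """Return the image slice closest to a target wavelength."""
    return cube[:, :, int(np.argmin(np.abs(wavelengths - target_nm)))]

def ripeness_index(cube, wavelengths):
    r680 = band(cube, wavelengths, 680.0)  # strong chlorophyll absorption
    r800 = band(cube, wavelengths, 800.0)  # near-infrared plateau
    return (r800 - r680) / (r800 + r680 + 1e-9)  # per-pixel, roughly [-1, 1]

# Using a synthetic cube like the one in the previous sketch:
cube = np.random.rand(64, 64, 120)
wavelengths = np.linspace(400, 1000, 120)
ripe_mask = ripeness_index(cube, wavelengths) > 0.6  # hypothetical threshold
print(ripe_mask.mean())  # fraction of pixels flagged as ripe
```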

The second tweak involves learning algorithms. These will enable robots to learn how different light conditions subtly alter sensor data. This will take massive computing power. Edan’s goal is to make the vision system faster and more accurate. “Current research reaches 85 to 88 percent detection, but that’s not enough. Farmers need more than 95 percent,” she says.
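The article does not detail the team’s algorithms, but the basic learning step can be sketched: train a classifier on spectra gathered under many lighting conditions so it picks up illumination-invariant cues. The synthetic data, class shapes, and the logistic-regression choice below are all assumptions for illustration.

```python
# Hedged sketch: learn fruit-vs-leaf discrimination across lighting levels.
# The simulated spectra and model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
BANDS = 120

def simulate(n, is_fruit, brightness):
    """Fake spectra: different base shapes per class, scaled by illumination."""
    base = np.linspace(0.2, 0.8, BANDS) if is_fruit else np.linspace(0.8, 0.2, BANDS)
    return brightness * (base + 0.05 * rng.standard_normal((n, BANDS)))

# Training set spans sunny, overcast, and dusk-like brightness levels.
X = np.vstack([simulate(200, f, b) for f in (True, False) for b in (1.0, 0.6, 0.3)])
y = np.array([1] * 600 + [0] * 600)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Test on an unseen, darker condition to gauge illumination robustness.
X_test = np.vstack([simulate(100, True, 0.45), simulate(100, False, 0.45)])
y_test = np.array([1] * 100 + [0] * 100)
print("detection accuracy:", clf.score(X_test, y_test))
```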

Grasping

Some robots will be charged with actually picking crops. This involves calculating how to approach, grasp, and remove fruits, vegetables, and the like from the plants. The calculations needed to do this are complex, but robots must make them rapidly in order to be economically viable.

Grasping is Berman’s field, and she is starting by studying humans. “We want to understand what humans define as a good grasp as they grasp peppers and apples. We’re trying to learn from the experts,” she explains.

Those experts are farm workers wearing gloves equipped with speed and position sensors. Berman captures and models their movements. “Our models show that people use not only vision, but also touch when they grasp. After they touch the fruit, they slide their fingers into position,” she says.
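One simple way to analyze such recordings is to segment each grasp into phases, echoing the touch-then-slide pattern Berman describes. The sketch below is an illustration only; the thresholds, labels, and synthetic trace are assumptions, not Berman’s actual models.

```python
# Hedged sketch: label glove samples as reach / slide / hold phases from
# fingertip speed and contact. Thresholds and data are illustrative.
import numpy as np

def segment_phases(speed, contact, slow=0.05):
    """'reach': no contact yet; 'slide': touching but still repositioning;
    'hold': touching and nearly stopped."""
    labels = []
    for v, c in zip(speed, contact):
        if not c:
            labels.append("reach")
        elif v > slow:
            labels.append("slide")  # fingers sliding into position after touch
        else:
            labels.append("hold")
    return labels

# Synthetic trace: approach quickly, touch, slide fingers, settle.
speed   = np.array([0.8, 0.7, 0.6, 0.3, 0.2, 0.1, 0.02, 0.01])
contact = np.array([0, 0, 0, 1, 1, 1, 1, 1], dtype=bool)
print(segment_phases(speed, contact))
```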

Berman originally planned to apply her findings to a three-fingered electromechanical grasper from Cambridge, Mass.-based Barrett Technology Inc., which is capable of eight degrees of freedom.

The CROPS consortium is also looking at a pneumatic hand developed by member Festo AG. The Festo hand, made from plastic on a three-dimensional printer, is based on the flexible structure of a fish’s tail. Unlike a mechanical hand, which will keep squeezing until told to stop, the Festo device stops and maintains its position automatically when it meets a preset resistance.

Festo introduced the technology by demonstrating a three-fingered hand screwing and unscrewing light bulbs. That gentle, self-limiting grip makes the hand ideal for manipulating fragile fruits and vegetables. It responds to only two commands, open and close, which makes it very simple to control, without a lot of computing overhead.

It also calls for new approaches to grasping. “Once you tell the hand to grasp the fruit, it’s out of your control. The problem then becomes bringing the hand to the right place and the right orientation,” Berman says.
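In other words, all of the intelligence moves upstream of the hand. The sketch below illustrates that control flow; the robot interface and function names are hypothetical placeholders, not Festo’s or CROPS’ actual API.

```python
# Hedged sketch: with a two-command (open/close) hand, the controller's job
# reduces to choosing and reaching the grasp pose. The `robot` interface
# below is a hypothetical placeholder.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float      # approach point, meters
    y: float
    z: float
    roll: float   # orientation, radians
    pitch: float
    yaw: float

def pick(robot, fruit_pose: Pose) -> bool:
    robot.move_to(fruit_pose)  # the hard part: right place, right orientation
    robot.hand("close")        # the hand conforms and stops on its own
    robot.retract()            # pull back along the planned removal path
    return robot.holding()     # did the fruit come free?
```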

While the Festo hand is less likely to bruise crops, problems could arise if the robot does not pick an optimal path for fruit removal. Also, some crops require careful cutting. Robots must sever peppers, for example, at the point where the stem swells near the branch.

In both cases, robots must learn from experience. Fruits have many possible orientations and grasp points, Berman explains. The robot might choose one that looks good, but base its choice on sensing errors or fail to account for obstructions. Her algorithms will enable robots to learn from these mistakes.
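One simple form such learning can take is keeping success statistics per grasp candidate and preferring candidates that have worked before. The sketch below is an illustrative stand-in, not Berman’s algorithm; the candidate features, prior, and success rates are assumptions.

```python
# Hedged sketch: greedy learning from failed picks via per-candidate
# success ratios. All numbers here are illustrative assumptions.
from collections import defaultdict
import random

stats = defaultdict(lambda: [1, 2])  # feature -> [successes, attempts], optimistic prior

def choose(candidates):
    """Pick the candidate with the best estimated success rate so far."""
    return max(candidates, key=lambda c: stats[c][0] / stats[c][1])

def record(candidate, succeeded):
    stats[candidate][1] += 1
    stats[candidate][0] += int(succeeded)

# Simulated trials: 'side' approaches succeed more often than 'top' ones,
# so the robot gradually shifts toward side grasps.
for _ in range(200):
    c = choose(["top", "side"])
    record(c, random.random() < (0.8 if c == "side" else 0.4))
print({k: round(v[0] / v[1], 2) for k, v in stats.items()})
```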

Robots must also learn to interact with farmers. Ideally, Berman says, a farmer would demonstrate what he or she wanted the robot to do. The robot would then analyze the motion to develop its own routines.

Edan and Berman have ambitious goals, but they can draw upon lots of talent. Ben-Gurion’s Paul Ivanier Center for Robotics has 60 faculty members, including Edan and Berman, and broad experience with military robots.

It may take all that talent to build a robot capable of a task as simple as picking a ripe pepper off a vine.