Georgia Tech Center Probes Robotic Frontiers - Robotics Business Review

Georgia Tech Center Probes Robotic Frontiers
Biologically inspired research stimulates new technologies in automated neuroscience and vision
By John Edwards

At Georgia Tech’s Center for Robotics and Intelligent Machines, researchers are working to improve science’s understanding of how the human brain functions and to apply this and other research to projects aimed at creating advanced robotic systems for medical and other applications.

The Atlanta-based center was launched in 2006 with three basic goals: to conduct world-class research into robotics and related technologies, to provide an integrated PhD program that creates new generations of robotics engineers and scientists, and to usher newly created robotic technologies and systems into the business world.

The center’s core research activities include bio-robotics, human systems modeling, cognitive robotics, healthcare robotics, human-automation systems, humanoid robotics and intelligent machine dynamics. The center’s researchers are also active in mobile robotics, robotics and intelligent systems, machine vision, socially intelligent machines, underwater robotics and unmanned aerial vehicles (UAVs).

Currently, one of the center’s most complex and ambitious initiatives is developing a robotic technology to find and retrieve information located inside brain neurons. A pair of Georgia Tech scientists, working in cooperation with a colleague at the Massachusetts Institute of Technology (MIT), recently proved that a robotic arm guided by a cell-detecting computer algorithm can identify and record from neurons in a living mouse brain with better accuracy and speed than a human researcher. The technology is expected to lead to improvements in human brain research and disease treatment.

Understanding Cells

The new automated process aims to eliminate the need for months of training while providing long-sought information about brain cell activities. Using the new approach, scientists will be able to classify the thousands of different types of brain cells, map how they connect to each other and determine how diseased cells differ from normal cells.

“Here we have converted a skill that is regarded as an art form into an automated process.”
Craig Forest, assistant professor, George W. Woodruff School of Mechanical Engineering at Georgia Tech

The project marks a collaboration between the labs of Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, and Craig Forest, an assistant professor in the George W. Woodruff School of Mechanical Engineering at Georgia Tech. Georgia Tech graduate student Suhasa Kodandaramaiah, who has spent the past two years as a visiting student at MIT, also worked on the project.

The new method could be particularly useful in studying brain disorders, including Parkinson’s disease, schizophrenia, autism and epilepsy. “The brain is very complicated,” Forest says. “Other complicated fields, like genomics and molecular engineering, have been revolutionized by the influx of robotics, but for the living brain no such thing has happened.”

The three researchers set out to automate a 30-year-old technique known as whole-cell patch clamping, which involves bringing a tiny hollow glass pipette in contact with the cell membrane of a neuron and then opening a small hole in the membrane to record electrical activity within the cell. This skill usually takes several months to learn.

The researchers decided to automate the process by building a robotic arm that can lower a glass pipette into the brain of an anesthetized mouse with micrometer accuracy. As it moves, the pipette monitors electrical impedance. If there are no cells nearby, electricity flows and impedance is low. When the tip hits a cell, electricity can’t flow as easily and impedance goes up.

The pipette progresses in two-micrometer increments, measuring electrical impedance 10 times per second. Once it detects a cell, the pipette immediately stops to avoid poking through the membrane, then applies suction to form a seal with the cell’s membrane. The electrode can then break through the membrane to record the cell’s internal electrical activity. The robotic system can detect cells with 90 percent accuracy, establishing a connection about 40 percent of the time.
The researchers are now working on extracting a cell’s contents to read its genetic profile.
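The detection procedure described above (two-micrometer steps, impedance sampled 10 times per second, an immediate stop when impedance jumps) can be sketched as a simple control loop. This is an illustrative sketch, not the team’s code: the impedance values, the threshold ratio, and the function names (`measure_impedance`, `descend_until_cell`) are invented for the example.

```python
import random

STEP_UM = 2.0          # the pipette advances in 2-micrometer increments
SAMPLES_PER_SEC = 10   # impedance is measured 10 times per second
BASELINE_MOHM = 5.0    # hypothetical open-pipette impedance (megaohms)
CELL_THRESHOLD = 1.3   # hypothetical rise over baseline that signals a cell

def measure_impedance(depth_um, cell_depth_um):
    """Toy impedance model: low while the tip is in open tissue,
    sharply higher once the tip presses against a cell membrane."""
    if depth_um >= cell_depth_um:
        return BASELINE_MOHM * 2.0  # membrane blocks current flow
    return BASELINE_MOHM + random.uniform(-0.2, 0.2)  # open-tip noise

def descend_until_cell(cell_depth_um, max_depth_um=1000.0):
    """Advance in 2-um steps, stopping as soon as impedance jumps,
    so the tip does not poke through the membrane."""
    depth = 0.0
    while depth < max_depth_um:
        if measure_impedance(depth, cell_depth_um) > BASELINE_MOHM * CELL_THRESHOLD:
            return depth  # stop immediately; suction/sealing comes next
        depth += STEP_UM
    return None  # no cell found within the allowed depth

print(descend_until_cell(cell_depth_um=250.0))
```

In the real system, the rise in electrical impedance at a membrane is what halts the actuator; here a toy model stands in for the measurement electronics.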

“Clinically speaking, this [process] could be used in neurosurgical settings to do integrative measurements on single cells, say in an epileptic brain,” Forest says. “This [process] could also be used to analyze biopsy samples, such as from tumors, or from surgical resections of brain at the single cell resolution, potentially revealing tissue heterogeneity.”

In the years ahead, a robot similar to the type developed by the researchers could be employed to infuse drugs at targeted points in the brain, or to deliver gene therapy. The researchers hope that the technology will also inspire neuroscientists to pursue other kinds of robotic automation, including optogenetics, the use of light to perturb targeted neural circuits and determine the causal role that neurons play in brain functions.

“Neuroscience is currently considered to be an art form, learned through years of practice,” Forest says. “Here we have converted a skill that is regarded as an art form into an automated process.”

The researchers feel that their technology will open the door to users who don’t have the time available to study pipette manipulation.

“Someone can tune the robot through software now, not through years of practice,” Forest says. “This sets a new kind of precedent, and may lead to the further automation of neuroscience.”

Biologically Inspired Robot Eye

Another Georgia Tech project that promises to lead to breakthroughs in medical, industrial, surveillance and other major areas is a biologically inspired robot eye. Using piezoelectric materials, which convert mechanical actions into electrical charges, Georgia Tech researchers have replicated the human eye’s muscle motion to precisely control camera systems, improving accuracy and performance. At the heart of the new control system is a piezoelectric cellular actuator that enables a robot eye to move like a human eye. The research is being conducted by Ph.D. candidate Joshua Schultz under the direction of assistant professor Jun Ueda.

The researchers used a piezoelectric ceramic to create a device that can contract about as much as a human muscle. Previous researchers have used deformable rhomboid-shaped mechanisms to amplify the stroke length, but the devices can only achieve up to 1 percent of human muscle-level contraction.

“Some people had talked about having several stages: one rhomboid amplifies the piezo output, another amplifies the output of that rhomboid, another amplifies the second and so on, but our research, as far as we know, is the first to really describe this mathematically in such a way that a roboticist can design a muscle without a whole lot of guesswork,” Schultz says. “We’ve also demonstrated a couple of methods, inspired by the motion of the human eye, to move robots that use this technology smoothly.” If the turn-on times for the various active units are not chosen carefully, the motion will not be smooth but will oscillate back and forth.
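Two of the quantitative ideas in this passage, stroke amplification through cascaded rhomboid stages and staggered turn-on times for discrete on-off units, can be sketched with a toy model. The gain values, step sizes, and function names below are illustrative assumptions, not figures from the Georgia Tech research.

```python
def stacked_stroke(base_stroke_um, gain_per_stage, stages):
    """Total stroke when each rhomboid stage amplifies the output of
    the previous one, i.e. geometric growth across stages."""
    return base_stroke_um * gain_per_stage ** stages

# e.g. a 1 um piezo stroke through three hypothetical 5x stages -> 125 um
print(stacked_stroke(1.0, 5.0, 3))

def displacement(turn_on_times, unit_step, t):
    """Displacement at time t from discrete on-off units: each unit
    contributes a fixed step once its turn-on time has passed."""
    return unit_step * sum(1 for t_on in turn_on_times if t >= t_on)

# Staggered turn-on times give a gradual staircase ramp toward the target;
# turning every unit on at once would give a single abrupt jump instead.
staggered = [0.0, 0.1, 0.2, 0.3]
print([displacement(staggered, 1.0, t / 10) for t in range(5)])
```

This captures only the bookkeeping of the design problem; the actual smoothness criterion involves the dynamics of the flexible interface between units, which the researchers describe mathematically.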

“This research is important because it explores and provides a foundation for using the motion paradigm in robotics that we experience in our human bodies, that is small discrete on-off units that work together through a flexible interface to move something in the environment,” Schultz says. “Not only can we create new devices that can continue to function in spite of an active unit’s failure, but this serves as a platform to investigate the neural mechanisms that underlie human movement in a controlled way using an external device.”

Schultz says there will be numerous real world applications for the camera positioning mechanism. “Imagine if your smartphone had one of these baked in,” he says. “You could get some great panoramic shots.” He adds that the system’s actuator technology could be used in rehabilitation robotics and, with a few minor modifications, MRI-guided surgery. “I expect that the security, defense, hobbyist, medical and neuroscience communities will all benefit,” Schultz says. “Not to mention the fact that there are lots of people who want to be able to point a camera remotely.”

The researchers will next focus on developing a vision design framework for highly integrated robotic systems. They hope to create a highly adaptable platform that will accommodate everything from industrial robots to medical and rehabilitation robots to intelligent assistive robots.

“Based on my product design experience, a forward-thinking, lean company that’s willing to take risks could probably get this product to market in 18 months if they have some good engineers,” Schultz says.
