10 Most Memorable Robots of 2017
December 30, 2017      

We have already shared the top AI and robot stories of 2017, but we’d be remiss if we didn’t weigh in with a list of the most memorable robots of 2017. There was a lot to choose from, and many worthy candidates didn’t make our list.

But when we look back on 2017, we’ll remember a back-flipping robot, social robots, brain-controlled robots, major developments for autonomous vehicles, the rise of mobile manipulators, and a certain security robot that drove into a water fountain. What robots will you most remember from 2017? Here’s our list.

Boston Dynamics’ back-flipping Atlas robot

No surprise that Boston Dynamics created the most memorable robot of 2017. It released a video in November showing its Atlas humanoid doing a backflip that would make Aly Raisman proud. Of course, humanoids aren’t supposed to do backflips, but Atlas pulls it off in amazing fashion – even after leaping from platform to platform.

The video of the back-flipping Atlas has been viewed nearly 13 million times. Oliver Mitchell, founding partner of Autonomy Ventures and creator of the popular Robot Rabbi Blog, wrote a great piece about what this feat means for the future of robot agility.

Waymo’s Level 4 self-driving cars

2017 was a great year for the development of autonomous vehicles, but no company had a better year than Google’s Waymo. Eight years after launching its self-driving “moon shot,” Waymo in mid-October started testing its Level 4 autonomous minivans inside a 100-square-mile area of Chandler, Arizona, a suburb of Phoenix.

This, of course, was a major milestone for autonomous vehicles. There’s still a long, long way to go before autonomous vehicles are officially rolled out and become mainstream. But the next step for Waymo is a big one: an Uber- or Lyft-style ride-hailing service using its autonomous minivans.

Boston Dynamics Handle robot

Before showing off Atlas doing backflips, Boston Dynamics had another early candidate for most memorable robot of 2017. In February, it officially unveiled Handle — a video was leaked a couple weeks before — a research robot that combines the efficiency of wheels with the versatility of legs.

Handle stands 6.5 ft tall, travels at 9 mph and jumps 4 feet vertically. It uses electric power to operate both electric and hydraulic actuators, with a range of about 15 miles on one battery charge. Handle uses many of the same dynamics, balance and mobile manipulation principles found in the quadruped and biped robots the company builds, but with only about 10 actuated joints.

Cassie bipedal robot

Agility Robotics, a spin-off company from Oregon State University, debuted with a bang in February 2017 with its bipedal robot Cassie, a dynamic walker that tries to imitate how humans move. Cassie’s hips have 3 degrees of freedom, just like humans, so that it can move its legs forward and backward, side to side, and rotate them at the same time.

Cassie also has powered ankles that allow it to stand in place without constantly moving its feet, as many other bipedal robots must. Agility is now working on a full humanoid robot that builds on Cassie’s impressive legs. Check out this short video of Cassie wowing the crowd at RoboBusiness 2017.

IAM Robotics Swift mobile manipulator

2018 might be the year of the mobile manipulator, and Pittsburgh-based IAM Robotics has positioned itself to take advantage of this developing market. IAM demonstrated its Swift mobile picking robot at RoboBusiness. Swift autonomously navigates aisles and can pick and transport products at human-level speeds.

At RoboBusiness, IAM Robotics CEO Tom Galluzzo discussed how improvements in computer vision have made mobile piece-picking robots quicker and better at accurately identifying objects. Humans rely almost entirely on vision to navigate warehouse spaces, and giving robots these same abilities reduces the need to install navigation infrastructure or to design entire warehouses around the robots.

MIT’s brain-controlled Baxter robot

MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University developed a way to correct a robot’s actions by using a person’s electroencephalography (EEG) brain signals. The human needs to wear an EEG cap that measures their brain signals. The system looks for brain signals called “error-related potentials” (ErrPs) that are generated when the brain notices a mistake has been made.

As the robot indicates which choice it plans to make, the system uses ErrPs to determine if the human agrees with the decision. The system works in real time, classifying brain waves in 10-30 milliseconds. In addition to monitoring ErrPs, the system also detects “secondary errors” that occur when the system doesn’t notice the human’s original correction.
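The correction loop described above can be sketched in a few lines. This is a toy illustration, not MIT/BU’s actual code: the classifier here is a stand-in threshold, and all names (`classify_errp`, `correct_choice`) are hypothetical. The idea is simply that when the ErrP classifier flags the EEG window recorded as the robot announces its choice, the robot switches to the alternative target.

```python
def classify_errp(eeg_window):
    """Toy stand-in for a real ErrP classifier: threshold the mean
    amplitude of the (simulated) EEG window captured as the robot
    announces its choice. True means an error signal was detected."""
    return sum(eeg_window) / len(eeg_window) > 0.5

def correct_choice(robot_choice, eeg_window, alternatives):
    """If an ErrP fires, switch to the other candidate target;
    otherwise keep the robot's original choice."""
    if classify_errp(eeg_window):
        return next(a for a in alternatives if a != robot_choice)
    return robot_choice

# Example: the robot picks "left," but the human's brain flags a mistake.
choice = correct_choice("left", [0.9, 0.8, 0.7], ["left", "right"])
print(choice)  # prints "right"
```

A real system would replace the threshold with a trained EEG classifier and would also watch for the “secondary error” signal when its first correction misreads the human.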

Toyota’s next-gen T-HR3 humanoid

Toyota has a lot of work to do to catch up to Boston Dynamics’ humanoid robots, but its third-generation T-HR3 is still pretty impressive. It can’t do backflips, but it uses virtual reality (VR) to allow humans to remotely control the robot. The T-HR3 features a “Master Maneuvering System” that combines force feedback, an HTC Vive VR headset, a “data glove” and torque servos. This not only helps the robot mimic the human user’s movements in real time, but the VR headset also shows the user exactly what the robot sees.

Toyota says the T-HR3 has a total of 16 controls that command 29 individual robot body parts, making for “a smooth, synchronized user experience.” The T-HR3 stands 5 feet, 1 inch and weighs 165 pounds. According to Toyota’s Partner Robot Division, which built the robot, the T-HR3 was developed to explore the possibility of assisting humans in the home, medical facilities, construction sites, disaster areas, and even in space.

Jibo social robot
Three-plus years after raising more than $3.6 million in a crowdfunding campaign, social robot Jibo finally started shipping to backers in Sept. 2017. Social robots have been in the works for a long time, but Jibo is a first-of-its-kind robot to start living with people. Reviews have been mixed, and Jibo doesn’t have as many skills as Amazon Echo, Google Home or other smart speakers. But Jibo is trying to be more than a smart speaker.

Jibo recently laid off an undisclosed number of employees “to focus our resources on getting more content onto the robot.” The company said it’ll be at CES 2018 showing off some of Jibo’s new skills. We’ll be on hand in Vegas hosting our “Artificial Intelligence: Insights into Our Future” conference, so we’ll make sure to check in on Jibo while there.

Shortly after Jibo started shipping, Mayfield Robotics also started shipping its Kuri robot to early customers. Kuri is much different from Jibo, with a mobile base that lets it move around its environment, but it’s another social robot the industry will learn from as it begins interacting with consumers in their homes. And Sony is bringing back its Aibo robot dog, so 2018 will be vital for social robots.

Androidol U

Japan loves it some androids, and Androidol U is the latest notable robot from Osaka University professor Hiroshi Ishiguro. The lifelike female android was developed for Niconico Live, a popular Japanese video-sharing and broadcasting platform. Androidol U has a more compact air servo system, better voice and body movement coordination, and softer body materials than previous androids.

Knightscope security robot

Sadly, Knightscope makes the list for all the wrong reasons. In July 2017, one of its security robots drove itself into a water fountain outside of an office building in Washington, D.C. Surely it’s equipped with a vision system, but this incident shows the limitations of today’s robots and how difficult it is for robots to navigate a world built for humans.

As David Bruemmer, co-founder and CTO of 5D Robotics, wrote in a column, despite all the hype, AI systems get lost all the time, just like humans. The issue that caused Knightscope’s K5 security robot to fall into the water fountain is a pesky, seemingly trivial sensor problem that has plagued robot autonomy and AI for decades.