This article has been republished with permission from the LinkedIn page of David Bruemmer, co-founder and CTO of 5D Robotics. Follow Bruemmer on LinkedIn for more insights about artificial intelligence and robotics.
Robots have a hard time these days. Everyone expects them to be perfect. They are supposed to do all the jobs we don’t want to and, in theory, perform more reliably than humans. Recently, Knightscope’s K5 security robot has been in the news several times, illustrating that artificial intelligence (AI) is not yet as reliable as some might think.
In Palo Alto, a Knightscope robot ran over a 16-month-old child, injuring his leg. More recently, a K5 “offed itself” by taking an unauthorized dive into a Washington, D.C. fountain. Reporters each came up with their own clever headline explaining why the robot decided to kill itself: the pressure of the job, the feeling that no one seemed to care, the sheer drudgery.
The reality, though, is not as funny. The real issue was a pesky, seemingly trivial sensor problem that has plagued robot autonomy and AI for decades: wheel slippage. So what’s the real story here? Is this just an isolated bug? Is Knightscope just not good at developing AI?

The real story here is that when it comes to using robots in dynamic, unstructured environments, AI reliability continues to be a challenge. Knightscope is a pioneering robotics company with an excellent team of roboticists. Reliability is a problem not only for Knightscope, but for the entire robotics community.
Here is the dirty truth: despite all the recent hype, AI systems get lost all the time, just like we do (and largely for the same basic reason). When robots depend on optics such as cameras or lasers, they invariably end up in situations where they lose track of their position. This is especially true in low light, shadows, or glare; around moving obstacles and crowds that block the line of sight to visual features; in dynamic environments where things are changing; and in dust, rain, fog, or snow.
Better AI algorithms that cope with uncertainty can help up to a point, but the reality is that no matter how smart the system is, a camera is no help in the dark, and a laser is no help for localization if the features around it have changed. Both humans and robots can get confused, but humans still perform much better than robots in the face of uncertainty.
Even when conditions are generally ideal, most robotic systems still depend heavily on old-school sensors like gyros, GPS, and wheel encoders to keep track of movement. When the wheels slip or the GPS signal degrades, everything can easily go haywire, resulting in a big splash or, worse, a big crash. Most AI proponents respond by pointing out that mistakes are bound to happen. This is absolutely true, but it does nothing to address the problem. To be clear, I don’t think robots should have to be perfect, and I don’t hold mistakes against the researchers or the executives. What I advocate for is honesty about the real problem and how to solve it.
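To make the failure mode concrete, here is a minimal sketch of differential-drive dead reckoning in Python. The wheel base, tolerances, numbers, and function names are all illustrative inventions of mine, not Knightscope’s actual software. The pose update trusts the wheel encoders completely, so a slipping wheel feeds phantom motion straight into the position estimate; cross-checking against a gyro is a common first line of defense:

```python
import math

WHEEL_BASE = 0.5  # meters between wheels (illustrative value)

def dead_reckon(pose, d_left, d_right):
    """Update (x, y, heading) from wheel encoder deltas.

    Trusts the encoders completely: if a wheel spins in place
    (slip), the robot 'moves' in the estimate but not in reality.
    """
    x, y, theta = pose
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)

def slip_detected(d_left, d_right, gyro_d_theta, tol=0.05):
    """Flag slip when encoder-implied rotation disagrees with the gyro."""
    encoder_d_theta = (d_right - d_left) / WHEEL_BASE
    return abs(encoder_d_theta - gyro_d_theta) > tol

# A wheel spinning on a slick surface: the encoders report motion,
# the gyro reports almost none.
pose = (0.0, 0.0, 0.0)
pose = dead_reckon(pose, d_left=0.30, d_right=0.10)   # phantom arc
print(pose)                                            # estimate has moved
print(slip_detected(0.30, 0.10, gyro_d_theta=0.0))     # True: distrust encoders
```

In practice the disagreement signal would feed a filter that down-weights the encoders, but the underlying point stands: dead reckoning alone has no way to know the wheels are lying.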
AI has come a long way; Cruise and Waymo, for example, have self-driving cars with commendable reliability under the right conditions. If things have been carefully mapped and the sensors can see permanent features, everything works well. The problems emerge when this is not the case. When it comes to driving cars, the challenge is that the “edge cases” are far more common than you might think, and mistakes can be deadly. As if that were not bad enough, many of these systems require a connection to the cloud, which brings connectivity, latency, and security issues.
Let’s say, for the sake of discussion, that all of these issues can be solved if we throw enough money at the problem. Even then, fundamental limitations of optics remain, and they are the same ones human drivers face. Optics cannot see around blind corners, cannot see far ahead on the road, cannot see through dense obscurants, and will always struggle in poor lighting or inclement weather. So what does this mean for the multi-billion-dollar effort to create self-driving cars? Is all of that work a waste?
I think optics-based AI is a worthy pursuit, and I have great optimism for how the technology can improve our world. The key is that we must recognize and compensate for the limitations of the optics approach rather than tout the overly optimistic concept that AI will somehow “figure everything out.”
If we want to know what the limitations of optics-based AI will ultimately be, all we have to do is take a hard look at ourselves. Despite excellent eyesight and a reasonably functional brain, I have a terrible sense of direction. Given a fifty-fifty choice to turn left or right, it seems I will choose the wrong one ninety percent of the time.
Robot wheel slippage is the same basic phenomenon. I would argue that the problem is not with my intelligence, but rather that, given the complexity of the world, I need some help from my environment. Humans are smart, but we still put up railings. Humans have eyes, but we still put up lighting. Humans can read maps, but we still put up roadway signs. No matter how smart we are, we benefit from guidance embedded in the environment. Despite my intelligence and good optical perception, what I really want is a “readable world.”
The same thing is true for robots, drones, and self-driving cars. We spent billions putting in lane markers, cat’s eyes, traffic lights, and road signs. All of this was necessary to create a human-readable world. If we want robots and self-driving cars to be reliable, we need to give them a robot-readable world.
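What might a robot-readable world look like in code? As a hypothetical illustration (the landmark names, coordinates, and measurement below are all invented), a surveyed fiducial or beacon lets a robot discard its accumulated dead-reckoning drift the moment one comes into view:

```python
# Surveyed landmarks embedded in the environment: ID -> known (x, y).
# The IDs and coordinates are invented for illustration.
LANDMARKS = {
    "dock_marker": (0.0, 0.0),
    "hallway_tag_7": (12.5, 3.0),
    "fountain_rail": (20.0, -4.0),
}

def correct_position(estimate, marker_id, offset):
    """Snap a drifting estimate back to ground truth.

    estimate:  (x, y) from dead reckoning, possibly far off after slip
    marker_id: which surveyed landmark the robot just observed
    offset:    (dx, dy) of the robot relative to the marker, as measured
    """
    lx, ly = LANDMARKS[marker_id]
    return (lx + offset[0], ly + offset[1])

# After heavy wheel slip the estimate says (14.1, 2.2), but the robot
# sees hallway_tag_7 one meter dead ahead: the environment, not the
# algorithm, resolves the ambiguity.
estimate = (14.1, 2.2)
estimate = correct_position(estimate, "hallway_tag_7", offset=(-1.0, 0.0))
print(estimate)  # (11.5, 3.0): drift discarded, absolute fix restored
```

The intelligence here is not in the algorithm, which is trivial, but in the surveyed infrastructure. That is exactly the trade that lane markers and road signs made for human drivers.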
Rod Brooks, a colleague I worked with on the DARPA Mobile Autonomous Robot Software (MARS) program back in the late ’90s, recently made the same point in a public setting, explaining to the media that the current hype around AI obscures the reality of working with robots for those of us trying to build practical AI solutions. His point is that AI is really not that mysterious or dangerous. Despite the AI bubble, he thinks we will not see truly intelligent robots in the next thirty years.
The effort by the media (as well as many within the AI community) to spin a story about burgeoning robot intelligence is confusing the public, along with many decision makers and investors.
AI is a very useful tool in the toolkit, but it should be viewed from a practical, performance-based perspective as a means to an end. The end could be reducing congestion, routing packages through a FedEx center, or lowering your commute time. The means to accomplish these goals is not just smarter cars and robots, but smarter roads and IoT solutions in factories. We need to embed intelligence into our roads, factories, and homes to create a “robot-readable world.” If we do it right, this effort can help both humans and robots stay out of fountains.