Robot companies and researchers continue to make robots more autonomous, adding tasks and features that allow them to solve bigger problems and create more applications. One of the leading startups in this space, Exyn Technologies, is developing autonomy software to help robots know their location in environments where GPS signals and other communications capabilities are sparse or nonexistent.
This technology development has led the company to go underground, literally, as it works on developing applications and processes for the mining industry. In caves, mines, and tunnels, aerial and ground robots have limited communications and location data, driving the need for the robots to operate more autonomously.
Exyn, which began as a spinout from the University of Pennsylvania’s GRASP Laboratory, has been working on its core technology to improve robot autonomy – not just for aerial robotics, but for other robots as well. The company today announced new funding, raising $16 million in a Series A round, bringing its total funding to $20 million. The latest round was led by Centricus, with participation from Yamaha Motors Ventures, In-Q-Tel, Corecam Family Office, and Red and Blue Ventures. Existing investor IP Group, Inc., also participated in this round.
Exyn, one of the “companies we’re watching” in this year’s RBR50 2019 report, said it would use the new funding to focus on commercial growth by increasing its customer base, expanding its global reach, and developing offerings for customers in new industries. In addition, the company plans to accelerate its technology development to bring advanced swarming capabilities to market and extend its autonomy intelligence to ground-based robots.
No pilot, map, or communications
The company’s A3R aerial robotics system can operate without a pilot, prior information about the environment, or communication during flight. With the A3R, companies can acquire previously inaccessible data – including point clouds, imagery, gas readings, and more – in GPS-denied environments such as underground mines and indoor buildings.
“Exyn is changing the game in terms of what true autonomous robotics technology can deliver to the world,” said Michael Burychka, CEO North America for IP Group. “As founding investor and early advisor, we are proud of what Nader [Elm, CEO of Exyn] and the Exyn team have accomplished so far, and are excited to see them continue to scale the business.”
The company is also collaborating with the University of Pennsylvania and Ghost Robotics as one of the teams in the DARPA Subterranean Challenge, which aims to find new approaches to rapidly map, navigate, search and exploit complex underground environments, including human-made tunnel systems, urban underground, and natural cave networks.
Robotics Business Review recently spoke with Elm about the growth of the company, its plans beyond the mining industry, and how the team plans to stay focused on improving autonomy for all robots.
Requirements of autonomy
Q: What makes your autonomy software so unique compared with other companies’ approaches?
Elm: The unfortunate thing is autonomy is a very ill-defined word, so everyone claims to have it. For us, autonomy has to have three things that are absolute requirements if you’re looking at executing missions robustly, successfully, and safely in commercial and industrial environments.
The first is that our autonomy requires no infrastructure. Broadly, we mean we don’t require anything external to help the robot figure out where it is. GPS is one of those things. We don’t use GPS, we don’t use motion-capture cameras, we don’t require any markers in the environment, and we don’t require beacons or any other kind of technology in the place. You should be able to drop the robot anywhere, and the robot has to figure out where it is in relation to the environment by itself.
The second thing is we don’t require any prior information about the space and the environment. So we don’t require a prior map – the robot figures out the map as it flies in real time and executes the mission as it learns more about the environment.
The third thing is that we do not require communications in mission. The only time we need communication is when you issue the mission to the robot. Once you press play and the mission starts executing, the robot takes over and requires no communication with a ground control station. So we can’t leverage any other assets or infrastructure in the background while it’s executing the mission. Everything has to be on board the robot or vehicle for it to successfully execute the mission.
That’s how we ended up in mining, because they require all of those three things.
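The three requirements above can be summed up as: everything the mission needs lives on the vehicle. The following is a purely illustrative sketch (not Exyn’s actual software, and all names are hypothetical) of an onboard autonomy loop in which sensing, mapping, and pose estimation all run on the robot, with no GPS, no prior map, and no ground-station link in the loop.

```python
# Illustrative sketch of a fully onboard autonomy loop. No external
# infrastructure: the map starts empty (no prior map), the pose comes
# from onboard estimation (no GPS/beacons), and nothing in the loop
# talks to a ground station (no in-mission communication).

class OnboardAutonomy:
    def __init__(self, mission_goal):
        self.map = {}          # map built incrementally in flight
        self.pose = (0, 0, 0)  # estimated onboard, not from GPS
        self.goal = mission_goal

    def sense(self):
        # Stand-in for onboard lidar/IMU readings.
        return {"range": 10.0}

    def step(self):
        reading = self.sense()
        # Stand-in for onboard SLAM: fuse the reading into the map,
        # then advance the pose estimate (toy dead-reckoned motion).
        self.map[self.pose] = reading["range"]
        x, y, z = self.pose
        self.pose = (x + 1, y, z)

# The only communication is issuing the mission goal up front;
# after that, the loop below runs entirely on the vehicle.
mission = OnboardAutonomy(mission_goal=(5, 0, 0))
while mission.pose != mission.goal:
    mission.step()
print(len(mission.map))  # cells mapped during the mission: 5
```

The point of the structure is that removing any external dependency (GPS fix, prior map, radio link) changes nothing in the loop, which is the property the three requirements describe.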
Q: Within the mining environment, what missions are the robots completing? Is it mapping, detection, visual imagery, sensors, etc.?
Elm: It’s all of the above, but it starts with mapping. With underground mining, there are tunnels, and the tunnels are made safe because that’s where the people and vehicles go. At the end of the tunnel is where the ore is. The process hasn’t changed in over a hundred years. You drill the rock, you stick in explosives, you blast it, and then you pull out the rubble and the ore that you process.
But the cavity that was created by the blast is inherently dangerous, and therefore people can’t go in there. Yet it also happens to be where the operations are, and where you have the least information about what’s just happened. Because it’s dangerous, you’re not going to send people in to see whether the blast went the way it was supposed to.
So a lot of this is done largely blindly, and operationally it is not optimized. What we enable is sending in the robot to build a high-fidelity, precise three-dimensional model of the cavity, so you can assess the blast and whether it went the way it was expected to. With that new information about the shape of the cavity, you can then plan the next blast, and so on. So you can start to see opportunities for much greater optimization and efficiency of operations, because now you have information about what’s happening in the cavity.
Then there’s another group in mining companies, the geologists, who want to see videos or imagery, so they’re asking for cameras to be installed on the robot. As we’re flying, we can also capture the colors and textures of the walls so they can figure out where the ore is located. Then they start asking for other data points, such as gas readings, humidity, and temperature, and all of these give you a much richer picture of the environments the operations are in.
Q: So this technology could be used on designs that aren’t necessarily an aerial drone, correct? Could this be applied to a ground vehicle?
Elm: Yes. One thing we did right from the get-go is make no assumptions about the underlying vehicle. That gave us flexibility in two ways. First, we can lift and shift from one class of robot to another, and we’ve done that several times already. Second, we can take our autonomy payload and put it onto other vehicles, whether ground-based – a tractor, wheeled vehicles, quadruped vehicles, etc. We can even take it underwater, onto the surface of the water, or even into space.
Q: In terms of communication, you say it’s not required, but do you have the capability of communicating with the robot so if an operator is watching the feed, they can stop the drone to get a better look?
Elm: Absolutely. We tackle the harder problem, but if there are communications, a prior map, or infrastructure, we can use that to make it even more accurate. There are certain use cases where you need communications. For example, if you’re sending this into a dangerous situation – think first responders – you want to have that constant communication because you want to be able to send the robot and do some exploration, but also intervene as it’s exploring and say, ‘Hey, wait a minute, pause this.’
It’s similar with GPS – we currently don’t have GPS on the robot, but if it’s available, great. And if it falls away – you cross a frontier, go inside a building, and lose GPS – that’s OK too. We built for the harder problem, so everything else becomes so much easier.
In addition, the communication feed that does exist between the operator, the base station, and the robot is just for visualization. The robot doesn’t need that communication to operate, because all of the processing happens on the robot, rather than at the edge or off board.
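The “use it if you have it” idea in the answers above can be sketched in a few lines. This is an illustrative toy, not Exyn’s implementation: a localizer that prefers a GPS fix when one is available but falls back to its onboard estimate the moment the signal drops (e.g., crossing from outdoors into a building).

```python
# Hypothetical fallback: external aids like GPS improve the estimate
# when present, but the system never depends on them.

def fused_position(gps_fix, onboard_estimate):
    """gps_fix is an (x, y, z) tuple, or None when GPS is denied/lost."""
    return gps_fix if gps_fix is not None else onboard_estimate

# Outdoors: GPS fix available, so it is used.
print(fused_position((10.0, 4.0, 1.5), (9.8, 4.1, 1.5)))
# Indoors: GPS lost, onboard estimate carries the mission.
print(fused_position(None, (9.8, 4.1, 1.5)))
```

Because the onboard estimate is always maintained, losing the external signal degrades nothing; it just removes an optional refinement.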
Moving beyond mining
Q: This technology seems to open itself up to applications beyond mining. Does the additional funding help you expand to focus on other markets, or will you stay within mining and make other partnerships in that space?
Elm: Our strategy is to look at commercial applications first. But as you can imagine, the kinds of capabilities we have and the kinds of things we can execute have attracted the attention of U.S. government customers. So whatever we do, we go commercial first, but there’s always the second use. We have partnerships and projects with defense contractors and government customers – we obviously can’t talk too much about it, but you can imagine it leverages what we’re doing in mining: sending robots into unknown environments to gather intelligence, surveillance, and reconnaissance data. So we’re also in that industry.
We’re also looking now at other industry verticals, and we already have a customer doing deployments in a warehouse. We attach a barcode scanner and RFID scanner to the robot, and it flies through the warehouse. Instead of mapping the geometry of the space, it’s mapping the inventory, and feeding that into a warehouse management system.
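The warehouse use case described above amounts to mapping inventory rather than geometry. Below is a hypothetical sketch of that idea: each barcode/RFID scan from the flight is reduced to a SKU-to-location record a warehouse management system could ingest. The function name and record format are illustrative, not from any real WMS.

```python
# Illustrative inventory mapping: the flight produces a stream of scans,
# each tagged with where the robot observed the item; we fold them into
# a location map, keeping the most recent sighting of each SKU.

def build_inventory_map(scans):
    """scans: list of (sku, aisle, bay, level) tuples from one flight."""
    inventory = {}
    for sku, aisle, bay, level in scans:
        # Later scans overwrite earlier ones for the same SKU.
        inventory[sku] = {"aisle": aisle, "bay": bay, "level": level}
    return inventory

flight_scans = [
    ("SKU-1001", "A", 3, 2),
    ("SKU-2042", "A", 4, 1),
    ("SKU-1001", "B", 7, 3),  # item was moved; latest scan wins
]
wms_feed = build_inventory_map(flight_scans)
print(wms_feed["SKU-1001"])  # {'aisle': 'B', 'bay': 7, 'level': 3}
```

The same autonomy stack supplies the robot’s position; only the payload (scanners instead of survey sensors) and the downstream consumer (a WMS instead of a 3D model) change.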
We’re engaged in that and looking at other markets – the use of proceeds for the Series A is to go into new markets.
Q: With all these opportunities, how do you prevent your team from getting distracted with the different use cases and scenarios for the technology?
Elm: That’s the challenge of any startup, because there’s a universe of opportunities and you’ve got to be very disciplined about evaluating and prioritizing them. I wouldn’t say we haven’t fallen into that trap before. There are constantly new opportunities coming up that are very compelling, and sometimes we’ve been pulled into things only to realize it was a bit of a distraction or a red herring.
Nevertheless, we’ve been quite fortunate to have enough discipline to keep to a plan and evaluate the opportunities coming through somewhat effectively.
Q: Do you consider Exyn Technologies to be a robotics company?
Elm: As a company, we do have a robotics focus, but autonomy is what we’ve mainly focused on. Our proposition is fast, automated data acquisition, and from there you can start thinking about all the different interesting applications.
We are primarily a software company, but given that autonomy software has to deeply integrate with the underlying hardware, right now we’re focusing on selling the bundle of hardware and software. Over time, though, we’ve abstracted away from the vehicle, the sensors, and even the compute. So over time we will be moving more and more to software licensing models, enabling other manufacturers’ systems and integrating with solution providers, so we can get some really compelling and remarkable solutions into a variety of different models.