PTC is well-known for design and manufacturing software, but at its LiveWorx user conference and the Smart Manufacturing Experience in Boston this year, it touted the transformational nature of augmented and virtual reality, or AR and VR.
Howard Heppelmann, divisional vice president of connected solutions at PTC, spoke with Robotics Business Review about how AR and VR are making manufacturing safer, enabling faster worker training, and augmenting human capabilities and the industrial Internet of Things (IoT).
Q: How did PTC find its way into the AR and VR space?
Heppelmann: It started with a trend we saw: Innovation is happening at the convergence of the physical and digital worlds. Our traditional customers, who design and make products, were putting more and more software into those products and connecting them to the Internet. But companies lacked the capabilities to take advantage of that connectivity.
We made acquisitions in that space and quickly realized that augmented reality presents a new way for people to digest and use rich industrial information that might otherwise be difficult to understand without a lot of training. It could be managed in the context of what we do every day — PLM [product lifecycle management] and configuration — plus the business logic to bring data together and put it on top of the physical unit.
Digital twins will change how humans interact with the world. Recent research published by Harvard Business Review validates that. Broader research still needs to be done on how much more powerful visual data in the right context is than traditional means of sourcing data.
For example, Boeing has found that front-line operators in the assembly process could improve efficiency by 35% and productivity by 90%.
Q: What are some example use cases for AR and VR?
Heppelmann: We’ve surveyed our 8,000 customers. Two years ago, we launched a program for ThingWorx Studio and Vuforia Studio. Manufacturers could come, use it self-service, experiment with AR and VR, and then deploy it.
Primary use cases, such as enabling front-line workers, decompose into sub-use cases, like helping assembly operators on the line and guiding them through the process of assembling or changing a tool. Another applies to service technicians in the field.
Another use case is anytime you’re creating a very expensive operational environment that you can’t experience beforehand. For example, creating a virtual environment for the Orion Mars vehicle allows people to interact with it before they’re in the physical environment.
You can apply that to building or running a nuclear facility, or, in the event of a catastrophe, to performing maintenance in a hazardous situation. Enabling people in such complex or dangerous environments is not a common use case, but it pops up across industries and remains important.
Q: How else can AR and VR help the workforce?
Heppelmann: How can we prepare tomorrow’s workers? Rich data can accelerate learning, unlocking the potential of workers.
There are also benefits for the aging workforce and for new workers, who have certain expectations of modern technology. AR can deliver visual, context-based information, enabling employees to ramp up more quickly and making the workforce more portable.
For instance, a welding procedure could be dangerous the first time someone performs it. Workers may be students fresh out of a technical school. But in a virtual environment, they can practice as if they were on the factory floor.
It’s a great example of how augmented and virtual reality are coming together to make the workforce more efficient, starting with training and cascading into ongoing support.
Here’s another interesting combination of AR and VR. Think of manufacturing executives at a corporation with up to 200 factories. They can have the same VR experience, scan factories, and create virtual environments.
They can “teleport” in and augment with actual operational data coming off machines. In a few seconds, people can dive into the production metrics they care about and get a rich contextual sense of what’s happening in that space. They could see if a key customer order is stuck or a line is down.
Q: Several of the AR examples you’ve shown have used screens rather than goggles. Why is that?
Heppelmann: It’s an unfair rap that the augmented reality space gets today — everybody seems to equate AR with glasses. No doubt, we’ll someday have glasses or even contact lenses that give us an embedded biological experience.
Everybody’s chasing that today. Look at the smartphone industry, which took keypads away. We communicate visually and verbally. At some point in the future, what’s the need for a smartphone? The hardware goes away.
People are overly enamored with glasses as a way to deliver information. It will come, but an iPad mounted on an arm in an assembly line can guide people through an experience. That’s just as practical a use case for now.
Any opportunity to reduce cognitive load or the distance to the user is where the value lies. There are certainly appropriate uses for glasses now, such as training with the Microsoft HoloLens.
Glasses might not always be practical on the factory floor, but having somebody wear them during training to get a rich cognitive experience is valuable now.
Q: Looking ahead, what’s next for AR, VR, and manufacturing?
Heppelmann: There’s the big opportunity of tomorrow versus the anchor of yesterday. With the introduction of low-cost sensors and connectivity in the Internet of Things, operations can move from latent to real-time.
If we apply analytics to the data, we can identify problems in real time or use predictive analytics and AI to see the future.
AR is bringing superhuman capabilities to workers. As companies digitize, humans must adapt to be fully relevant. We need to understand that we have new tools to contribute to the digital world.
One tool is analytics to find things in data that humans couldn’t identify or process, and the second is augmented reality — a picture is worth 1,000 words.
If you ask people to open their eyes for a second and then close them again, you can see how much they’ve cognitively consumed in that instant. How long would it take them to write that down or convey the scene to someone?
By reducing the cognitive distance, we’re making things easier to use. From a delivery or IoT perspective, we’re filtering down data in real time.