It’s hard to avoid the constant exposure to claims that artificial intelligence is poised to revolutionize the way things are done in manufacturing, transportation, mining, automotive and many other industries. As a result, organizations often develop AI strategies and invest in AI-enabled solutions, yet much of that work ultimately falls short of the original expectations.
For example, according to PwC, only 4% of executives reported being able to successfully implement AI solutions, despite 46% of those surveyed stating their organizations had attempted adopting AI.
Despite the AI hype, when it comes to developing an AI strategy, organizations need to remember that their decisions need to be driven by the expected return on investment. The key question to ask while developing any AI strategy is what type of benefit – whether it’s a reduction in costs, improvement in performance, or the ability to unlock new growth opportunities – the company aims to achieve.
Often, it might turn out that the best possible decision is to wait and postpone the implementation. This can be especially true if there are technology blockers that might be eliminated in the near future through the natural development of the technology, or the cost reduction of the technology itself. In other cases, it might make the most sense to go full steam ahead.
While the opportunities and challenges companies are likely to face when implementing AI solutions can depend to some extent on the specifics of a particular industry or task, the overall approach to evaluating opportunities to leverage AI is quite universal.
In this post, we’ll focus on the predictive maintenance scenario as one example to discuss the best practices of crafting an AI strategy, with concrete examples and actionable insights.
Step 1: Audit your data
One of the biggest challenges with successfully leveraging AI to enable predictive maintenance is related to data. Organizations that generate a lot of data are often convinced they already have plenty of data to build predictive models, and believe they are collecting enough input data of sufficient quality for these models to be accurate. In reality, it often turns out to be far from the truth.
The first step is for the organization to conduct a data audit and identify what types of data it currently has in its possession, where this data resides, and how it is collected. This then determines whether the quantity, quality, and organization of the data are acceptable for the target task.
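A basic version of such an audit can be automated. The sketch below is a minimal, illustrative example: it checks what fraction of records actually carry a value for each required field. The field names (`machine_id`, `temp_c`, `timestamp`) and the sample records are hypothetical, not taken from any real schema; a production audit would also check value ranges, units, and timestamp continuity.

```python
# Hypothetical sensor readings pulled from a few data sources; field names
# and values are illustrative assumptions, not a real schema.
readings = [
    {"machine_id": "M1", "temp_c": 71.2, "timestamp": "2024-01-01T00:00:00Z"},
    {"machine_id": "M1", "temp_c": None, "timestamp": "2024-01-01T01:00:00Z"},
    {"machine_id": "M2", "temp_c": 69.8, "timestamp": None},
]

def audit(records, required_fields):
    """Report, per field, what fraction of records carry a non-null value."""
    total = len(records)
    coverage = {}
    for field in required_fields:
        present = sum(1 for r in records if r.get(field) is not None)
        coverage[field] = present / total if total else 0.0
    return coverage

report = audit(readings, ["machine_id", "temp_c", "timestamp"])
# machine_id is fully covered, while temp_c and timestamp both have gaps —
# exactly the kind of finding that determines whether the data is usable.
```

Even a report this simple often surfaces the gap between “we collect a lot of data” and “we collect the data a predictive model needs.”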
Step 2: Create a roadmap
The next step is to develop a project roadmap, taking into account both the current architecture of the data collection and storage systems and the architecture that would be optimal given the objectives (the two can often differ significantly). Only after all of those questions have been addressed can the actual work begin, starting with connecting to the various data sources that will later serve as the knowledge base for the machine learning model.
To use logistics and transportation industry applications as one example, one might look at:
- The data that’s being captured by sensors located on various parts of machines;
- Technical documentation related to those machines;
- Technical specifications for the produced parts;
- Documentation for the assembly process;
- Transportation schedules;
- Weather forecasts;
- Equipment purchase orders;
- Various other datasets that stand to improve model prediction quality, as long as the necessary data can be collected consistently and at a reasonable cost.
In some cases, it might turn out that both the existing architecture and the datasets have to be completely reorganized, while in other instances, the work will be focused around assembling the existing pieces into a new, more robust setup. In the end, the decision around the approach that has to be taken should always rely on a thorough value-based analysis applicable to a particular project.
Warning: Don’t look at past investments
It’s critical, however, to avoid over-focusing on the investments that were previously made into the existing systems and processes. All too often, organizations become biased and rely heavily on their current data architecture, and are resistant to significant changes.
This is one of the major blockers to building top-notch predictive solutions — for AI, the optimal set-up can be very different from the one that was developed decades ago and built for completely different purposes.
For example, what CIOs often describe as the data lake their companies run on oftentimes isn’t actually a data lake at all (at least, not by today’s definition of the term), and therefore isn’t an appropriate setup for a future-oriented, AI-infused platform. Instead, the decision-making process should treat past requirements as sunk costs, and be driven by the new requirements and objectives necessary to generate the right ROI.
Step 3: Implement the structure
Once the data sources have been defined and the right data flows established, there comes the time to implement the structure of the data lake that will be used as a centralized repository. All the data needs to be standardized, mapped, and cleaned up accordingly, ensuring that the historical and live feed data adhere to the same format and logic, and are being captured, processed and then retained in accordance with the policies that were established as part of the previous steps.
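The standardization step above is where historical exports and live feeds get mapped onto one shared schema. The sketch below is a minimal illustration of that idea, under assumed conventions: historical records report temperature in Fahrenheit while the live feed already uses Celsius, and machine IDs arrive as mixed types. None of these field names come from a real system.

```python
def normalize(record):
    """Map a raw record into the shared schema: string IDs, Celsius temps.
    Field names and the Fahrenheit/Celsius split are illustrative assumptions."""
    out = {"machine_id": str(record["machine_id"])}
    if "temp_f" in record:
        # Historical exports used Fahrenheit; convert to the common unit.
        out["temp_c"] = round((record["temp_f"] - 32) * 5 / 9, 2)
    else:
        # Live feed already reports Celsius.
        out["temp_c"] = record["temp_c"]
    return out

historical = {"machine_id": 7, "temp_f": 160.0}
live = {"machine_id": "7", "temp_c": 71.1}
normalized = [normalize(historical), normalize(live)]
```

The payoff is that every downstream consumer, including the model training pipeline, can assume one format and one set of units, regardless of where a record originated.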
Step 4: Enable predictions
Once the dataset is ready, the work to identify the reasons for equipment breakdowns can begin. At this point, the goal is to leverage data to uncover patterns that occur when certain parts of the equipment approach the end of their service life or require maintenance, and then determine the optimal times to conduct maintenance and replace parts that need to be retired.
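To make the pattern-detection idea concrete, here is a deliberately naive sketch: it compares the recent rolling average of a vibration signal against an early-life baseline, so a rising score hints that a part is drifting toward failure. A real deployment would use a trained survival or classification model over many signals; this rule-based score, along with the window size and sample series, is only an illustrative placeholder.

```python
def wear_score(vibration_series, window=3):
    """Naive drift detector: ratio of the recent rolling average to the
    early-life baseline. A score well above 1.0 suggests vibration is
    trending up and the part may be approaching maintenance."""
    baseline = sum(vibration_series[:window]) / window
    recent = sum(vibration_series[-window:]) / window
    return recent / baseline

# Hypothetical vibration readings for one part, oldest first.
series = [1.0, 1.1, 0.9, 1.0, 1.4, 1.6, 1.8]
score = wear_score(series)  # noticeably above 1.0 for this series
```

Even this toy version captures the core pattern-mining task: learn what “normal” looks like for a part, then flag sustained deviations early enough to schedule maintenance.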
Once again, it’s crucial to keep in mind that the ultimate goal is to maximize ROI, taking into account the cost of conducting maintenance and the possibility of the breakdowns, and the associated costs.
There are quite a few opportunities for cost savings that can be made possible by leveraging AI-enabled predictive maintenance modeling, among them:
- Reducing the instances of overspending related to replacing parts that still have useful life left;
- Optimizing maintenance schedules, through identifying and then reducing overhead spending on reserve equipment;
- Identifying the optimal times for preventive maintenance, and thus reducing losses from unexpected equipment breakdowns caused by systemic issues;
- In some cases (for example, in transportation), finding the optimal operating cadence for equipment, which reduces wear and improves fuel efficiency.
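The ROI framing behind these savings can be reduced to a simple expected-cost comparison: maintain now if the certain cost of servicing is lower than the expected cost of a breakdown in the coming period. The probabilities and dollar figures below are purely illustrative assumptions.

```python
def expected_breakdown_cost(p_fail, breakdown_cost):
    """Expected cost of doing nothing for one decision period."""
    return p_fail * breakdown_cost

def should_maintain(p_fail, maintenance_cost, breakdown_cost):
    # Maintain now if the certain maintenance cost is lower than the
    # expected cost of a breakdown in the coming period.
    return maintenance_cost < expected_breakdown_cost(p_fail, breakdown_cost)

# Illustrative numbers: a $4k service vs. a $50k breakdown.
# At a 10% failure probability, servicing wins (4k < 0.10 * 50k = 5k);
# at 5%, waiting wins (4k > 0.05 * 50k = 2.5k).
decide_high_risk = should_maintain(0.10, 4_000, 50_000)
decide_low_risk = should_maintain(0.05, 4_000, 50_000)
```

In practice the failure probability would come from the predictive model itself, which is what ties the modeling work in Step 4 back to a concrete ROI decision.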
Final step: Generate insights
The final step in the effort to introduce AI into the organization’s business processes should be centered around creating a set of services to provide actionable insights for the organization’s leadership team and/or shareholders.
The critical thing to keep in mind here is that AI should serve the goal of simplifying operations, rather than further complicating things. Organizations should pay specific attention to the manner in which they leverage those insights and integrate them into their current workflows and processes so they can extract the maximum benefit from their AI strategy.
Despite the huge potential of AI for improving the existing processes and generating new insights for businesses, it’s crucial to note that AI shouldn’t be treated as a sort of universal multi-tool capable of solving all kinds of problems.
For one, there will always be instances where humans can still outperform AI, especially when factoring in ROI. In addition, some use cases come with natural limitations that diminish the predictive power of AI, simply because there isn’t enough data to test hypotheses. In predictive maintenance, one example would be attempting to use AI to predict breakdowns for equipment that, on average, breaks down only once every 20 years.
Other situations where AI-enabled systems can produce less than perfect results include instances where installing sensors for data collection proves to be impossible or impractical, or the equipment has to operate in ever-changing environments.
Even simply going through the preliminary audit necessary for implementing any AI solution can deliver a lot of value. Remember that the ultimate goal of any organization isn’t to implement an AI-powered predictive engine for its own sake, but rather to solve a specific business challenge. Therefore, spend time defining and understanding the limitations of the proposed AI-based solution, compare it to a human-driven process, and then make an informed choice about the best north-star process going forward.