Staying ahead with predictive analytics

Optimised maintenance programmes and workflow systems can be significantly enhanced with cutting-edge technology, writes Bryan Betts


Once upon a time, asset management software was all about the workflow involved in keeping tabs on things – tracking their location, when they were last serviced or updated, their current value and so on. All that is now changing, thanks to technologies such as big data analytics and machine-to-machine communications.

The result is a new breed of holistic asset management tools that make it possible to look forward and manage risk and performance. They use predictive analytics that can tap and correlate reservoirs of data from right across an organisation and beyond – not only to schedule preventative maintenance, but also to predict the possible and likely consequences of failures and assess how best to minimise them.

“There’s a lot of drive to put sensors on things and generate real-time data,” says Steve Ehrlich, senior vice president of marketing at asset visualisation specialist Space-Time Insight. “The problem is that generates huge amounts of data – you used to have one ping a day, saying ‘Here I am’, but now it can be at sub-second intervals. Then the question is, I’ve collected all this data, now what do I do with it? How do I visualise it and what do I do then?

“The key with big data is to convert it to ‘little data’ so, for example, you don’t want all the normal conditions, you want the abnormal ones. It’s the unread meters, the planes approaching capacity.

“So the drive now is to simplify it on to a single pane of glass or dashboard, so you can see what asset it is, what it’s doing, how that’s different from what it did before and what factors are involved, such as weather, temperature, an accident. Then, and most importantly, it’s looking forward to see what will it be doing tomorrow or a year from now, and what else might be affected.”
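Mr Ehrlich's "big data to little data" idea – discard the normal conditions and keep only the abnormal ones – can be illustrated with a minimal sketch. This is not Space-Time Insight's method, just a simple z-score filter under the assumption that readings far from the mean are the ones worth surfacing; the transformer temperatures here are invented:

```python
from statistics import mean, stdev

def abnormal_readings(readings, threshold=2.0):
    """Keep only readings more than `threshold` standard deviations
    from the mean - converting 'big data' into 'little data' by
    discarding the normal conditions."""
    mu = mean(readings)
    sigma = stdev(readings)
    if sigma == 0:
        return []  # all readings identical: nothing abnormal
    return [r for r in readings if abs(r - mu) / sigma > threshold]

# Mostly-normal transformer temperatures with one spike
temps = [60.1, 59.8, 60.3, 60.0, 59.9, 95.0, 60.2, 60.1]
print(abnormal_readings(temps))  # -> [95.0]
```

A production system would use a far more sophisticated detector, but the principle is the same: the dashboard only ever sees the handful of readings that matter.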


For example, if an electrical transformer fails suddenly, correlating your asset data with weather reports could guide your response by showing it was in the path of a thunderstorm. Then, analysing your network will show which customers are affected, and if your analytical tools also support what-ifs, you can look at the most likely outcomes and see how adjusting your response could change them.
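The correlation step in the transformer example can be sketched as two simple look-ups: find the weather at the time of the failure, then walk the network map to the affected customers. All the names and records below are hypothetical illustrations, not any vendor's data model:

```python
from datetime import datetime

# Hypothetical records: a transformer failure, hourly weather
# reports for its location, and a map of which customers it serves.
failure = {"asset": "TX-17", "time": datetime(2014, 6, 3, 14, 42)}

weather = [
    {"time": datetime(2014, 6, 3, 13, 0), "condition": "clear"},
    {"time": datetime(2014, 6, 3, 14, 0), "condition": "thunderstorm"},
    {"time": datetime(2014, 6, 3, 15, 0), "condition": "clear"},
]

network = {"TX-17": ["customer-A", "customer-B", "customer-C"]}

def weather_at(reports, when):
    """Return the condition from the most recent report at or before `when`."""
    past = [r for r in reports if r["time"] <= when]
    return max(past, key=lambda r: r["time"])["condition"] if past else None

cause_hint = weather_at(weather, failure["time"])  # likely cause
affected = network.get(failure["asset"], [])       # customers to notify
print(cause_hint, affected)
```

What-if analysis builds on exactly this kind of join: re-run the network look-up under alternative responses and compare the sets of affected customers.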

“Analytics is very important, and it’s only going to become more important – it is essential for predictive maintenance, for example,” agrees Reid Paquin, a senior analyst with the Aberdeen Group. However, while companies, big and small, want to sell you software to deal with this challenge, he warns that it is much more than just a software problem.

“Some other things have to be in place,” he says. “For example, you have to have the right organisational structure, with buy-in at all levels from management to the employees – especially the employees because, if they don’t trust the system, they won’t use it and it’s going to be wasted. You also need employees with an analytical background – that kind of talent in the organisation is often overlooked.

“Then there’s the fact that so much data is being collected, and a lot of the time the systems are not interconnected. It really is about the data – it must be correct and up to date, and you must have access to it when you need it.”

Collecting, cleaning and correlating the data is a major part of what is a classic big data operation. As well as different systems having different formats, the data may also arrive at different times and in different contexts – for example, sub-second machine performance data versus hourly weather reports versus ad-hoc feedback from field service teams.
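One routine part of that correlation work is aligning data captured at different frequencies. A minimal sketch, assuming sub-second sensor readings need down-sampling to hourly averages before they can be joined against hourly weather reports (the timestamps and values are invented):

```python
from collections import defaultdict
from datetime import datetime

def hourly_average(readings):
    """Down-sample (timestamp, value) readings to hourly means so they
    can be joined against hourly data such as weather reports."""
    buckets = defaultdict(list)
    for ts, value in readings:
        # Truncate each timestamp to the top of its hour
        buckets[ts.replace(minute=0, second=0, microsecond=0)].append(value)
    return {hour: sum(vals) / len(vals) for hour, vals in buckets.items()}

readings = [
    (datetime(2014, 6, 3, 14, 0, 1), 60.0),
    (datetime(2014, 6, 3, 14, 30, 5), 62.0),
    (datetime(2014, 6, 3, 15, 2, 9), 61.0),
]
print(hourly_average(readings))
# the 14:00 bucket averages to 61.0; the 15:00 bucket holds 61.0
```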

“The technology is all out there. The aerospace industry has been doing this for decades, for instance, because the cost of not doing it in that industry is enormous,” Mr Paquin says. “Centralising data has been getting easier because the technology vendors realise they need to integrate with other applications. The market has realised how critical this is.”

He adds that applying predictive analytics to assets is “not a new technology, it’s a new application. The company may already be doing analytics, for example in its supply chain network, production planning or on the CRM [customer relationship management] side”. As a result, the organisation may already have the necessary talent and understanding, capable of being redeployed in this new area.

Also vital is getting these predictive insights up to the board in a comprehensible form, says Dr Achim Krueger, vice president for operational excellence solutions at software provider SAP. “Your assets are much more intelligent now and are producing much more information, and your board needs sight of that,” he says.

“Technological forecasting of failure was done years ago,” he adds. “Now you have to put that into a business context – how it affects the level of spare parts you need or your negotiations on maintenance contracts – and present it to decision-makers in an understandable form.” For instance, 3D models could replace lists, with parts of the plant coloured by risk level.
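Colour-coding a plant model by risk, as Dr Krueger suggests, ultimately reduces to mapping each asset's predicted failure risk to a display colour. A trivial hypothetical sketch, with invented thresholds rather than any standard scheme:

```python
def risk_colour(failure_probability):
    """Map a predicted probability of failure to a traffic-light
    colour for rendering on a 3D plant model or dashboard."""
    if failure_probability >= 0.7:
        return "red"
    if failure_probability >= 0.3:
        return "amber"
    return "green"

# Hypothetical per-asset risk scores from a predictive model
plant = {"pump-1": 0.82, "valve-9": 0.41, "motor-4": 0.05}
print({asset: risk_colour(p) for asset, p in plant.items()})
```

The value for decision-makers is that a wall of part numbers becomes a glance at where the red is.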

The importance of executive insight is underscored by the advent of ISO 55000, the international standard for asset management. Derived from PAS 55, a specification for the optimised management of physical assets, ISO 55000 was approved in January. “It’s still the same paradigm – holistically managing assets, with a focus on risk and performance, but ISO 55000 raises the importance to board level,” Mr Krueger says.

He adds: “Even more important is that this goes hand in hand with changing business models from product to service orientation. For example, Rolls-Royce now sells flight hours, not engines, but the aircraft operator is still liable. So there is a need for much more horizontal integration and information sharing.”