Reducing downtime is a huge financial opportunity


In creating a world that doesn’t break down, a $20 billion opportunity is on the table for the process industry to address. Astute manufacturers should focus on reducing unplanned downtime and increasing asset utilization, as these represent the biggest opportunities for financial improvement in production operations.

Albert Einstein could well have been alluding to smart manufacturing when he said: "If I had an hour to solve a problem and my life depended on it, I would use the first 55 minutes determining the proper question to ask, for once I know the proper question, I could solve the problem in less than five minutes."

The evolution of maintenance

Over the past five decades, maintenance as a practice has evolved to better serve the manufacturing sector in the areas of reliability and availability. However, further change is needed: the current approaches, such as run-to-failure, calendar-based, usage-based, condition-based and reliability-centered maintenance (RCM), remain less than ideal.

Two key challenges remain. First, despite the increasing complexity of these maintenance initiatives, deciding exactly when to inspect and service machines is still more art than science. Second, the current slew of maintenance methodologies treats wear and tear as the root cause of failure, sidestepping the fact that 80 percent of degradation and failure in mechanical equipment is process driven.

This view is further reinforced by Boeing, a company present at the birth of both RCM and the modern aircraft industry. Boeing acknowledges that up to 85 percent of all equipment failures occur at random intervals, no matter how often equipment is inspected and serviced. ARC analyst Peter Reynolds gives a useful indication of what works: "A useful prognostics solution is implemented when there is sound knowledge of the failure mechanisms that are likely to cause the degradations leading to eventual failures in the system."

However, the industry reality today is that to maximize profitability, processes tend to be operated as close to key limits as possible. This can be detrimental, as process excursions can quickly push an asset into an undesirable operating point where damage or excessive wear occurs. Maintenance decisions therefore need to be informed by a better understanding of their impact on both asset and process. A new generation of analytical capabilities is required to provide deeper insight into the asset, the process and the interaction between them. While operators need predictive solutions to red-flag impending trouble, the software must also be able to steer them away from trouble with prescriptive guidance. This requires the preferred solution provider to have deep domain and process expertise, along with the ability to extract data from design, production and maintenance systems.

The next generation of Asset Performance Management (APM)

In the broader scheme of things, McKinsey & Co has observed: "[...]entirely new and more affordable manufacturing analytics methods and solutions -- which provide easier access to data from multiple data sources, along with advanced modeling algorithms and easy-to-use visualization approaches -- could finally give manufacturers new ways to control and optimize all processes throughout their entire operations."

ARC Research Group further crystallizes this view by saying: "With a good APM strategy, operations and maintenance groups become more collaborative, exchanging information to manage critical issues and operational constraints, while improving overall operating performance. Combining the information from traditionally separate operations and maintenance solutions improves the effectiveness of both areas, and offers new opportunities for managing risk and optimizing performance."

In taking a step into the future of manufacturing, APM 2.0 incorporates advanced analytics that predict issues and prescribe operator actions. With a holistic view of process and asset, the Aspen APM software suite combines asset analytics, reliability modeling and machine learning to analyze, understand and guide. Principles of data analytics and data science underpin the reliability strategy. Although machine learning is a dominant predictive analytics technology in information technology today, applying it to manufacturing assets requires domain-specific knowledge of chemical processes, mechanical assets, maintenance practices and more.

To be useful in an industrial setting, machine learning must interpret and manage complex, often problematic sensor and maintenance event data. By capturing the patterns of process operation and merging them with failure information, it can determine the operating conditions and patterns that have a deleterious impact on the asset.
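The idea of merging process data with failure history can be illustrated with a toy sketch. Everything here is hypothetical (the readings, the failure log, the simple averaging rule) and bears no resemblance to a production APM system; it only shows the principle of learning a "degrading" condition from the sensor values that preceded past failures, then flagging operation that matches it.

```python
# Toy illustration only: real APM systems use far richer models and domain knowledge.
from statistics import mean

# Hypothetical hourly sensor readings (e.g. a discharge temperature) plus a
# maintenance log of the hours at which failures were recorded.
readings = [71, 70, 72, 71, 88, 90, 93, 70, 71, 69, 87, 91, 95, 72]
failure_hours = {7, 13}  # each failure followed a run of elevated readings

def pre_failure_level(readings, failure_hours, lead=3):
    """Merge process data with failure events: average the readings in the
    `lead` hours before each failure to characterize a degrading condition."""
    window = [readings[h - k] for h in failure_hours for k in range(1, lead + 1)]
    return mean(window)

threshold = pre_failure_level(readings, failure_hours)

# Flag hours whose readings match the pattern seen before past failures.
alerts = [hour for hour, value in enumerate(readings) if value >= threshold]
```

In this contrived data, the hours of sustained high readings that preceded each failure are flagged, while normal operation is not; a real system would of course learn multivariate patterns rather than a single threshold.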

One of the world’s largest plastics, chemical and refining companies, LyondellBasell, agrees that APM can unlock significant value, in saying: "AspenTech’s new Asset Analytics contains a unique set of modeling and data science-based technologies. Utilizing the additional process insight available from this promising new software solution brings with it the potential to operate closer to the true flooding limit on this tower. For a world scale olefins unit, this would be worth millions of dollars per year."

A system of success is long overdue

While predictive analytics can reduce downtime, disruption seldom happens in isolation. Instead, dozens of reliability, process and asset issues occur simultaneously. This presents a systemic problem for RCM, whose static assessments delay the decision-making process.

Dynamic assessment is therefore required: new warnings must be evaluated alongside other active conditions to prioritize issues and allocate resources. Since not everything can be addressed at once, a system of success is needed to rank problems according to the level of risk they represent.

With Aspen APM software, each new alarm triggers a recalculation of risk profiles to ensure that the most current financial and risk probability assessment is used.
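One common way to rank such alarms, sketched below with made-up names and numbers (this is not AspenTech's actual algorithm), is by expected financial risk: the estimated probability of failure multiplied by the estimated cost if it occurs. Re-sorting on every new alarm keeps the priority list current.

```python
# Hypothetical risk-prioritization sketch, not a description of Aspen APM internals.
from dataclasses import dataclass

@dataclass
class Alarm:
    asset: str
    failure_probability: float  # estimated probability of failure (0-1)
    financial_impact: float     # estimated cost of failure, in dollars

def prioritize(alarms):
    """Rank active alarms by expected financial risk (probability x impact)."""
    return sorted(
        alarms,
        key=lambda a: a.failure_probability * a.financial_impact,
        reverse=True,
    )

active = [
    Alarm("compressor-1", 0.05, 2_000_000),
    Alarm("pump-7", 0.40, 150_000),
    Alarm("tower-3", 0.10, 5_000_000),
]

# Each new alarm would trigger a re-sort, so decisions use current risk figures.
ranked = prioritize(active)
```

Here the tower tops the list despite its low failure probability, because its expected loss ($500,000) dwarfs the others; that is the kind of trade-off a purely probability-ordered alarm list would miss.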

However, to be thoroughly successful, companies need to adopt a holistic approach to implementation:

1. Communicate goals clearly; this aids effective problem solving.
2. Genuinely embrace a data-driven world.
3. Differentiate between lagging and leading indicators, and respond accordingly.
4. Assemble the right mix of people, technology, strategy and solution, supported by relevant case studies.
5. Invest the time to master the technology well.
6. Align the analytics program with business goals.
7. Deploy the appropriate software and hardware to solve problems.
8. Execute well, with a keen sense of urgency.

With operational excellence and profitability at stake, it is imperative that organizations succeed in developing the best asset performance strategy.

Indeed, failure is not an option -- in creating a world that doesn’t break down!

From a number of recent surveys, it’s clear that more and more organizations accept that they will suffer a network security breach at some point; it is simply a matter of time until their D-Day arrives. The focus now seems to be on clearing up the mess after the event, which is a shame, since there are various ways to avoid the mess in the first place.

If we think of the networks we’re protecting as fortresses, the unfortunate fact is that most companies rely on increasing the size and scope of perimeter defenses for their security, a bit like digging a wider moat around the fortress or raising and thickening the external walls. It’s a great idea, unless the drawbridge is down and the guards are asleep. Or someone tunnels under the moat and walls. They might even have a friend on the inside who knows where the back door is.

History has taught, and continues to teach, us many lessons, but if we do not learn from them we are doomed to repeat the failures of the past. What were those Trojans thinking anyway? Someone dumps a big wooden horse outside the gates and they cheerfully wheel it in without a second thought, or at least a cursory inspection? No wonder Troy was lost.

Cyber criminals are not attacking organizations for fun; they’re doing it for commercial gain: stealing bank account or identity information they can sell on, or encrypting data and demanding a ransom for its release. Businesses often fail to recognize the cyber criminal as a human threat, yet most cyber criminals are still only human.

Remember the school bully? How they always picked on someone they perceived as being weaker than them? It’s human nature to go for the easy win, yet the more difficult something is to do, the less likely we are to do it. After all it’s a numbers game -- the softer the target, the more they extract and the more money they make. Harden the target and they’ll move on -- it’s just too much effort to break down all the defenses and it takes too long to make it viable.

This is why an "if it’s going to happen it will" attitude to network security can be self-fulfilling. Too many organizations are now being defeatist, thinking it’s only a matter of time before they suffer a network security breach and only focusing on how they will clear up the mess after it happens, rather than carrying on trying to prevent it.

Transforming the digital fortress into a house of horrors for the cyber attacker will keep a business safer, even when the attacker gains access. You can keep the moat and walls, but build additional defenses inside: ramparts, spikes, even bear traps. Section it off, limiting the access to each section to a small number of controlled points, and create "one-way" systems so that no one section can act as a universal jump-off point to anywhere in the fortress.

In IT security terms, we’re talking about security zones, micro-segmentation, network access control, authentication-based firewall policies and SSL visibility; there are multiple options. If the malware can’t go anywhere and you have it locked down in a particular part of your network, it can’t proliferate and the problem is contained. The "serious" bad guys are interested in one thing only: breaching your defenses for their gain. They can devote time and effort to this single purpose, but only up to a point.
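The zoning idea can be sketched as a simple reachability check. The zone names and policy below are invented for illustration (not any real product's configuration): permitted traffic between zones is modeled as a directed graph, and we verify whether any single zone could act as a universal jump-off point.

```python
# Toy sketch of "one-way" network zoning: model allowed traffic directions
# between security zones as a directed graph, then check reachability.
from collections import deque

# Hypothetical zone policy: each edge is a permitted, one-way traffic direction.
policy = {
    "dmz":  ["app"],                 # internet-facing zone may reach app tier only
    "app":  ["db"],                  # app tier may reach the database
    "db":   [],                      # database initiates nothing
    "mgmt": ["dmz", "app", "db"],    # management reaches all tiers, one-way
}

def reachable(policy, start):
    """Return the set of zones an attacker in `start` could pivot to."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in policy[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Zones from which every other zone is reachable are universal jump-off points.
jump_off = [zone for zone in policy if reachable(policy, zone) == set(policy)]
```

In this made-up policy, a compromised DMZ host can never pivot into the management zone, but the check flags "mgmt" itself as a universal jump-off point, exactly the kind of zone to lock down behind strong authentication and a small number of controlled access points.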

Harden the target with multiple layers of defense, create one-way systems and implement access restrictions, and the bad guys will soon realize they’re wasting their time and move on to something softer and easier to penetrate.

It’s easy to get blinkered and focus on new products but, generally, by the time you deploy them, the world and the bad guys have moved on. Sometimes it’s better to step back and take a wider, more considered strategic view. For example, we worked with a video games company that was being constantly hit by DDoS attacks on its live gaming site. Some lateral thinking led the company to route the gaming site through a secondary channel, and the attackers went off and found a softer target.

So be proactive and make it hard for the attackers. Create multiple layers of defense, one-way "streets" and access control systems. They may devote time and effort to breaking down these barriers, but they too have limits to what is and isn’t worthwhile.

And like the bully, they’ll soon move on, attracted by the prospect of an easier win.

Robert Golightly, product marketing, AspenTech.

Published under license from ITProPortal.com, a Future plc Publication. All rights reserved.


