February 2018

Columns

Digital: Fulfilling the promise: The next phase of asset performance management

Hague, J., AspenTech

As asset performance management (APM) continues to evolve, the introduction of new methods and cutting-edge technologies is rapidly increasing its bottom-line value. Cloud computing, data science and machine learning are among the technologies that are now being integrated, along with automated methodologies, directly into APM solutions. As a result, operators and engineers now have advanced analytical capabilities at their fingertips.

While APM has made major advances over the past 20 yr, this is just the beginning. The traditional approach of first principles-based solutions supported by “armies” of consulting engineers and data scientists is becoming a thing of the past. Low-touch machine learning is the key to the future.

The evolution of APM

Technologies must continuously evolve to remain sustainable in today’s world. When APM was first introduced, engineers faced a number of challenges and constraints in the technologies surrounding their models. In response, disparate systems were built to manage and optimize maintenance functions, develop risk assessment capabilities and enable continuous condition monitoring. However, these systems were isolated, with limited connectivity and integration, which hampered workflows. Because of that limited integration, early computers could process only small volumes of the available data, and only in batch mode rather than in real time, when insights are most valuable and actionable. As a result, outputs were delayed by days or even weeks. Limited computational power also constrained the development of new algorithms, leaving models static, low-frequency and unable to adapt to new failure behaviors or incremental operational changes.

It was not until the 2000s that assets became better instrumented for condition-based monitoring and computational power continued to increase. While systems remained isolated, they began providing engineers with something resembling real-time data. However, not all issues were resolved. Although these advances brought additional insights, they also introduced complexities and data quality issues. For example, computing platforms could not scale adequately and were inordinately expensive for processing massive data volumes. Limited data replication offered a workaround for accessibility, but introduced still more data quality issues. As a result, APM software was unable to clearly communicate accurate and reliable diagnoses or recommendations.

Additionally, while mechanically based systems were able to achieve some improved reliability through planned maintenance and usage- and condition-based monitoring techniques, the ability to identify or address the root causes of asset breakdowns remained limited. The results offered very few actionable insights, and too many system alerts delivered false positives: inaccurate declarations of degradation and failure events. These shortcomings overwhelmed engineers and created a negative impression of these approaches.

One way to illustrate this is by examining a single system, such as a standalone pipeline. Sensors that are deployed across multiple assets along a pipeline system report regular readings of variable conditions, such as temperature, pressure and flow. In the 1990s and 2000s, classical, model-based or statistical technologies were designed only for anomaly detection, which is fraught with errors and continually requires expert human intervention to interpret results and separate real alerts from false alerts. This led to major pipeline issues, as engineers were inundated with hundreds of undifferentiated false positive leak alerts. Internal staff could not ignore these alerts and were not equipped to accurately assess their validity. Operational integrity suffered, and trust in the systems eroded.
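
The sketch below makes this failure mode concrete with synthetic data: a fixed-threshold alarm on a pipeline pressure signal fires repeatedly during normal operating swings, producing exactly the flood of undifferentiated false positives described above. All numbers here are illustrative assumptions, not field values or any vendor’s actual detection logic.

```python
# Minimal sketch (synthetic data): why a fixed-threshold leak alarm on a
# pipeline pressure signal floods operators with false positives.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 30)  # one month of hourly readings

# Normal operation: baseline pressure with a daily pumping cycle plus noise.
pressure = 60 + 5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1.5, hours.size)

# A classic static rule: alert whenever pressure drops below a fixed floor.
static_alerts = pressure < 56

# No leak was simulated, so every alert above is a false positive that a
# human must triage -- the failure mode described in the text.
print(f"false alarms in one month: {static_alerts.sum()}")
```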

While APM technology has come a long way since the early 2000s, many of today’s systems still need improvement. Asset integrity alerts are still designed to perform only anomaly detection, meaning that they continue to require a staff of consultants to interpret each alert and determine the correct course of action to avoid failure or mitigate risk. For pipeline leak detection, for example, this approach is overly expensive for customers and results in unnecessary maintenance and large consulting fees.

Moving in the right direction

Fig. 1. Cross-industry initiatives—sponsored by many owner-operator companies—led to the development of open standards for connecting disparate systems and work process inter-operation.

As APM continues to evolve, a number of technology innovations have helped pave the way. For example, in 2006, Amazon Web Services debuted for scalable cloud computing. Advances in structured and unstructured databases and operational data pools were tested and improved at the enterprise level during this period. Around the same time, smart sensors saw a dramatic shift in performance, size, reliability and price. Added to this was a dramatic improvement in the computational and analytical capabilities of machine learning, known as “deep belief networks” or “deep learning.” This breakthrough was pioneered by Geoffrey Hinton at the University of Toronto; Hinton is now a key contributor to Google’s machine learning strategy. The result was a quantum leap in capability that has enabled machine learning to surpass the earlier modeling and statistical techniques that had limited analytical performance. Today, machine learning is the dominant analytical method in the global IT field, and is used for credit card fraud detection; facial recognition by Facebook; voice recognition by Amazon, Apple and Google; autonomous driving; medical diagnosis; and other applications.

Apple made a huge impact with the introduction of the iPhone in 2007. The device greatly advanced computer literacy and brought complex application (“app”) capabilities to the masses. Between 2007 and 2010, culminating with the iPad debut, the process industry workforce moved from experimentation to demands for smart devices and consumer-style applications at work. Industrial software vendors began updating their offerings with low-touch, readily navigable user interfaces, applications and displays, delivering intuitive software that did not require intense skills and experience to be productive. At the same time, cross-industry initiatives—sponsored by many owner-operator companies—led to the development of open standards for connecting disparate systems and work process inter-operation, particularly between operations and maintenance systems (FIG. 1). Such initiatives enabled comprehensive combinations of data to address problems and deliver solutions that were previously unattainable. Blending these methodologies with automated technology approaches set the stage for a major leap in APM performance and value.

While these enhancements were hitting the consumer market, the manufacturing industries were developing a greater need for APM solutions. Emerging techniques for maintenance operations on assets, particularly mechanical assets, were coming under scrutiny. The movement provided incremental improvements: from fail-fix, through calendar-, usage- and condition-based planned maintenance events, to reliability-centered maintenance (RCM) techniques. However, cost, complexity, time and staffing skill set requirements constrained deployments. Additionally, a growing realization emerged that maintenance alone cannot solve the problem of unexpected asset breakdowns. Market-leading companies began to realize that they had gone as far as they could with traditional preventive maintenance techniques.

Modern era of APM technology

Fig. 2. By incorporating historical, present and projected conditions from process sensors, mechanical assets and process events, machine learning systems will enable agile, flexible models that learn and adapt to real-time data conditions that incorporate real asset behavior nuances.

Data-intensive and complex environments in the manufacturing industries are prime candidates for deploying new advances in APM. As tremendous economic pressure to increase performance year-over-year continues, these organizations are—now more than ever—looking to APM for additional return on investment (ROI), especially in avoiding unplanned downtime and preventing damage to equipment from operational issues. If deployed correctly and with appropriate automation, machine learning enables greater agility and flexibility to incorporate historical, present and projected conditions from process sensors, mechanical assets and process events (FIG. 2). As a result, systems will become automatic, moving past traditional, consultant-heavy approaches and enabling agile, flexible models that learn and adapt to real-time data conditions and incorporate the nuances of real asset behavior. Data capacities and computational capabilities also benefit internal staff, enabling them to perform active and accurate management of individual process and mechanical assets. This management capability can scale from individual assets and combinations of assets to plant-wide, system-wide and multi-site deployments.
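
To make the “learn and adapt” idea concrete, the sketch below shows one minimal way a model can be updated incrementally as new readings stream in, rather than being fitted once and frozen. It uses scikit-learn’s partial_fit on a synthetic health indicator; the features, drift pattern and batch size are illustrative assumptions, not a description of any vendor’s implementation.

```python
# A minimal sketch of a model that adapts to real-time conditions:
# partial_fit updates the fit incrementally as each new batch of sensor
# readings arrives, instead of freezing a static model.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(4)
model = SGDRegressor(random_state=4)

for batch in range(50):                  # streaming batches of readings
    drift = 0.01 * batch                 # operating point slowly shifts
    X = rng.normal(drift, 1.0, size=(64, 3))
    y = X @ np.array([0.5, -0.2, 0.1]) + drift + rng.normal(0, 0.05, 64)
    model.partial_fit(X, y)              # the model tracks the change

print("coefficients after streaming updates:", model.coef_.round(2))
```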

Every process industry organization deals with complex systems, fluctuating conditions and a myriad of assets. A spectrum of pressure points exists, in varying degrees, due to diverse market needs, time criticality, staffing levels and skill sets. For example, an oil refiner may struggle with excessive asset breakdowns: its compressors are well-instrumented and receive RCM treatment, with regular inspection and service, but unanticipated failures persist. A chemicals producer with multiple intermediate storage tanks, on the other hand, may suffer intermittent failures on feed pumps, leading to extended downtime and lost product. The aging and complex electric grid is another example, as it requires a sophisticated analytics approach to maintain uptime within fluctuating demand patterns. Scheduling maintenance on average asset lifespan alone amounts to guesswork and precludes modeling the future demand patterns that might cause a rolling blackout; the result is costly overspending on maintenance that still fails to close grid performance gaps.

Low-touch machine learning APM can address all of these issues.

Best practices in low-touch machine learning APM

As we look to the future, low-touch machine learning is fulfilling the promise of APM. It is a disruptive technology, deploying precise failure pattern recognition with very high accuracy and months of advance notice of impending failures, while eliminating the requirement for substantial resources and expertise to realize the application’s value. By implementing low-touch machine learning in the manufacturing industries, organizations can truly deliver reliability through agility, flexibility, adaptability and scale. Five machine learning best practices that are applicable to any asset in any industry at any level, from a single location to a country-wide system, are briefly discussed here.

Data collection and cleansing

Over the last two decades, the analysis of massive volumes of plant data collected from diverse sensors has run into serious issues with collection, timeliness, validation, cleansing, normalization, synchronization and structure. Such data preparation can often consume 50%–80% of the time it takes to execute and repeat data mining and analysis. However, this process is essential to ensure that appropriate and accurate data is provided to end users. New advances in APM have automated the bulk of the data preparation process to ensure trust and to unearth previously undiscovered opportunities with minimal user preparation.
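
As a rough illustration of the preparation steps named above, the sketch below validates readings against physical limits, synchronizes irregular timestamps onto a common grid, fills short gaps and normalizes each tag. The tag names, limits and one-minute grid are hypothetical choices for the example, not a description of any particular APM product.

```python
# A minimal sketch of automated data preparation: validation, cleansing,
# synchronization and normalization of raw sensor data, using pandas.
import numpy as np
import pandas as pd

def prepare(raw: pd.DataFrame, limits: dict[str, tuple[float, float]]) -> pd.DataFrame:
    df = raw.copy()
    # Validation: discard physically impossible readings, per tag.
    for tag, (lo, hi) in limits.items():
        df.loc[(df[tag] < lo) | (df[tag] > hi), tag] = np.nan
    # Synchronization: resample irregular timestamps onto a common grid.
    df = df.resample("1min").mean()
    # Cleansing: fill short gaps; leave long outages as missing.
    df = df.interpolate(limit=5)
    # Normalization: z-score each tag so models see comparable scales.
    return (df - df.mean()) / df.std()

# Usage with a toy frame of two tags on irregular timestamps.
idx = pd.to_datetime(["2018-01-01 00:00:10", "2018-01-01 00:01:40",
                      "2018-01-01 00:02:05", "2018-01-01 00:03:30"])
raw = pd.DataFrame({"pressure": [60.1, 59.8, -999.0, 60.4],  # -999 = bad reading
                    "flow": [410.0, 412.5, 411.0, 409.5]}, index=idx)
clean = prepare(raw, {"pressure": (0, 150), "flow": (0, 1000)})
print(clean)
```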

Condition-based monitoring (CBM)

Once data is trustworthy, CBM can be applied. Plant conditions vary constantly due to the mechanical performance of assets, feedstock quality variations, weather conditions, and production timeline and demand changes. Static models cannot work under such duress. In addition, focusing CBM on mechanical equipment behavior reveals only a small fraction of the true issues causing degradation and failure. Leading vendors recognize that legacy CBM is now inadequate, as it typically ignores the salient process-induced conditions causing the bulk of the breakdowns. New advances in APM deliver comprehensive monitoring of all mechanical and upstream and downstream process conditions that can lead to failure.
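
A minimal sketch of this broader style of CBM appears below: instead of per-tag alarm limits on mechanical signals alone, a multivariate detector is trained on mechanical and upstream process tags together, so abnormal process conditions are flagged even when vibration looks healthy. The tag set, the synthetic data and the choice of scikit-learn’s IsolationForest are assumptions for illustration.

```python
# Sketch: condition monitoring that watches mechanical AND process tags
# together, using a multivariate detector rather than per-tag limits.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Columns: bearing vibration (mechanical), suction pressure and feed
# density (upstream process conditions that also drive degradation).
normal = rng.normal([2.0, 4.5, 0.85], [0.2, 0.3, 0.02], size=(5000, 3))

model = IsolationForest(contamination=0.01, random_state=1).fit(normal)

# A reading with normal vibration but abnormal process conditions is
# still flagged -- the process-induced causes that legacy CBM misses.
reading = np.array([[2.1, 6.8, 0.70]])
print("anomalous" if model.predict(reading)[0] == -1 else "normal")
```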

Work management history

A history of work provides insight into past approaches to failure prevention and remediation. Problem identification, coding and a standard approach to problem resolution provide an important baseline for the exact failure point in an asset’s lifecycle. Original equipment manufacturer (OEM) data that may live in a big data solution can provide insight into process issues and outliers specific to the configuration and engineering within the plant process. Leading vendors recognize the importance of this data and how it contributes to highly accurate predictions of the production degradation that can ultimately lead to asset failure.
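
One concrete way such coded work history feeds the analytics is as training labels: each recorded failure marks the time window that preceded the breakdown. The sketch below shows this labeling step; the work-order schema, asset names and 14-day precursor window are hypothetical assumptions for the example.

```python
# Sketch: coded work-order history supplies the ground-truth labels that
# failure-prediction models train on.
import pandas as pd

# Coded maintenance records: asset, failure code, and when it broke down.
work_orders = pd.DataFrame({
    "asset": ["P-101", "P-101", "C-201"],
    "code": ["SEAL-LEAK", "BRG-WEAR", "SURGE"],
    "failed_at": pd.to_datetime(["2017-03-02", "2017-09-14", "2017-06-20"]),
})

def label_precursor_window(sensor_index: pd.DatetimeIndex,
                           asset: str, lead: str = "14D") -> pd.Series:
    """Mark sensor timestamps that fall inside a pre-failure window."""
    y = pd.Series(0, index=sensor_index)
    for t in work_orders.loc[work_orders["asset"] == asset, "failed_at"]:
        y[(sensor_index >= t - pd.Timedelta(lead)) & (sensor_index < t)] = 1
    return y

idx = pd.date_range("2017-01-01", "2017-12-31", freq="D")
labels = label_precursor_window(idx, "P-101")
print(labels.sum(), "days labeled as failure precursors for P-101")
```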

Predictive and prescriptive analytics

Clean data and CBM enable in-place predictive analytics: a process that interprets past behavior and, based on that analysis, predicts future outcomes. When performed correctly, predictive analytics can accurately portray asset lifecycle and reliability, and focus on the early root causes of degradation rather than on later-stage detection of damage. The insights available from intense multivariate and temporal pattern analysis provide accurate, critical lead times. This allows time for decisions that can prevent damage and unnecessary maintenance or, at the very least, provide preparation time to reduce time-to-repair and mitigate consequences. Best-in-class APM provides prescriptive advice based on established root cause analysis (RCA), presenting both how to proactively avoid the process conditions that cause damage and the precise maintenance required to service the asset.
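
The sketch below illustrates the predictive step in miniature: a classifier is trained on window-level features from healthy periods and from labeled precursor windows like those above, then scores a live window to give early warning. The features, class balance and choice of a gradient-boosted model are illustrative assumptions, not a stated method from the article.

```python
# Sketch: learn the temporal patterns that precede failure, then score
# live data for early warning and lead time.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
# Rolling-window features per day: e.g., vibration trend, pressure
# variance, temperature drift. Healthy days vs. pre-failure days.
healthy = rng.normal(0.0, 1.0, size=(900, 3))
precursor = rng.normal(1.5, 1.0, size=(100, 3))  # degradation shifts the stats
X = np.vstack([healthy, precursor])
y = np.array([0] * 900 + [1] * 100)

clf = GradientBoostingClassifier(random_state=2).fit(X, y)

# A live window scoring high gives operators lead time to plan repairs
# instead of reacting to a breakdown.
live = np.array([[1.2, 1.8, 1.4]])
print(f"failure-precursor probability: {clf.predict_proba(live)[0, 1]:.2f}")
```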

As a result, predictive/prescriptive capabilities enable asset lifecycle reliability and facilitate decisions on when and how to maximize production, while proactively avoiding asset and output risks. Such real-time analytics guide maintenance scheduling and asset optimization, eliminating guesswork on future production and asset issues. For C-level executives, this complete picture of plant or site performance enables more confident risk analysis and performance projections for the board.

Pool and fleet analytics

The next level of analytics allows patterns that have been discovered on one asset in a pool or fleet to be shared with the other members, enabling the same safety and shutdown protection for all equipment. Once deployed, solutions can scale rapidly from a unit to a site, to multiple sites or even an entire corporation. Rolling up information from all local systems at disparate sites into one larger model provides asset performance comparisons across sites and plants, creating common baselines that highlight areas for improvement.
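
As a rough sketch of that pattern sharing, the example below normalizes each asset to its own operating baseline, learns a signature on one asset’s healthy history, and screens a sibling asset that runs at a different operating point. The two-tag feature set and the IsolationForest detector are assumptions for illustration.

```python
# Sketch of fleet transfer: a signature learned on one pump is applied
# to a fleet sibling after normalizing each asset to its own baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)

def baseline_normalize(x: np.ndarray) -> np.ndarray:
    """Express each asset's readings in units of its own normal behavior."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

# Train the detector on asset A's normalized healthy history...
asset_a = rng.normal([3.0, 50.0], [0.3, 2.0], size=(2000, 2))
model = IsolationForest(contamination=0.01, random_state=3)
model.fit(baseline_normalize(asset_a))

# ...then screen asset B, which runs at a different operating point but
# shares the same failure physics, against the shared signature.
asset_b = rng.normal([5.0, 80.0], [0.5, 3.0], size=(2000, 2))
flags = model.predict(baseline_normalize(asset_b))
print(f"asset B windows flagged: {(flags == -1).sum()} of {flags.size}")
```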

Fulfilling the promise of APM

The manufacturing world has changed. With today’s advances, manufacturing facilities can now readily extract value from decades of existing design and operations data to better manage and optimize asset performance. This low-touch machine learning method continuously embraces changes in asset behavior, empowering real-time APM value creation. Vetted and tested across diverse industries, scalable across multiple assets and using cloud and parallel computing, low-touch machine learning APM ushers in a new era of performance and optimization for complex, capital-intensive industries. HP
