Sophisticated analytics build sustainability for renewable diesel projects
Anyone studying the development of the hydrocarbon processing industry over the last century will likely admire how far technologies have advanced, particularly considering the tools that were available in the industry’s early years. Many clever people have made discoveries and incremental advances, driving the industry forward to where it is today. Moreover, the larger petrochemical and chemical industries have advanced in parallel, benefiting each other.
While technologies developed during the last decade represent enormous advances from what was available throughout much of the industry’s history, one cannot help but wonder if developments might have progressed more quickly had today’s tools been available during those formative years.
Spurred by sustainability initiatives in both the public and private sectors, producers around the world are seeking to develop and optimize methods to create diesel fuel without using crude-oil-derived feedstocks. This represents, in effect, a new hydrocarbon industry. However, today’s producers, unlike their predecessors, have a full range of modern resources, including digital tools and hardware that were unimaginable just a decade or two ago.
This article will examine what some of these new tools are, and will show how producers are applying them as they develop new processes for manufacturing renewable diesel. Naturally, the lessons learned here can be adopted far more universally, including in established refining and petrochemical processes.
Taming variability
To begin, we should define what process we are talking about (FIG. 1), since some common terms are not always clear and are often used interchangeably.
FIG. 1. Alternatives to conventional diesel use the same feedstocks but with different approaches and results, each with its own characteristics.
Biodiesel is produced from vegetable oils and animal fats, which are converted to fatty acid methyl esters (FAME) through a transesterification process regulated by ASTM D6751-20a. This process is suitable for large- and small-scale installations, and vehicles can be adapted to run on biodiesel directly. The main drawback is the fuel's tendency to gel at low temperatures, which limits the amounts that can be added to the conventional diesel distribution chain.
Renewable, or green, diesel is the primary area of interest. It can also be made from oils and fats, a feedstock list that extends to crop residues and other biomass, treated using a process similar to conventional oil refining. This process produces fuel that is indistinguishable from conventional diesel. It therefore falls under the ASTM D975-21 standard specification for diesel fuel, so producers can blend it into the conventional diesel distribution system without limitations.
Sustainable aviation fuel (SAF) is similar to renewable diesel, and some processes allow operators to switch between the two relatively easily when market opportunities become available. Byproducts from the refining process (e.g., light hydrocarbons and naphtha) can be used as non-fossil fuels to meet processing needs, sold as chemical feedstocks or added to gasoline blending stocks to reduce the operator's carbon intensity score. Production complexity stems from the variety of feedstocks.
To maintain production volumes and continuous operation, refiners must anticipate changing feedstock types. Operators may be working with crop residues one day and animal fats the next, so process parameters can change frequently. This high degree of variability is the primary differentiator from conventional refining and represents the main production challenge.
Data lost or ignored
The following is a deeper dive into today’s tools, with three main assumptions:
- Producing renewable diesel is subject to more process variability than conventional refining.
- All the necessary tools are at our disposal to overcome the challenges.
- The same tools can have the same positive effects in many other application areas.
All the elements connected with a renewable diesel production unit produce data in many forms, and it is up to each facility to determine how much data gets collected and retained. Critical process variables will be kept in a historian, but the associated storage costs can drive a company to draw a line between the data it considers useful and the data it does not. The challenge is that many companies underestimate the value of their data because they cannot imagine an impactful way to apply everything they have collected.
For the most part, data gets locked in historians that are isolated on operational technology (OT) networks, where the data is stored and processed for release to management in the form of reports. An engineer may occasionally dig into a historian to solve a production problem, but much of the data simply remains in storage, never to be accessed again.
Data lakes bridge communication gaps
Given the range of tools now available, an examination of many companies suggests the problem is usually not a lack of data, but that the data is not in the right place. To start, historians are not the answer. Historians store data in the format in which it is produced, so data that is not directly comparable stays that way, making analysis more difficult. The solution involves either replacing historians or interconnecting them with a data lake.
The data lake concept is not unique to manufacturing, but using it in OT environments is relatively recent. The same approach is used for the same purpose for corporate information technology (IT) applications; so, to many coming from the IT side, the data lake concept will sound like a logical solution. However, some IT providers have encountered difficulties trying to extend these platforms into OT environments for a variety of reasons. An OT-oriented approach is much more effective and still allows easy extension into IT applications. The ability to connect from OT to IT has generally proven to be easier than the reverse.
How is an OT data lake different from a historian? A data lake aggregates operations data (FIG. 2) from manufacturing, process control and IT systems, without disruption, and then contextualizes it and transforms it into actionable information. These results can be presented to every decision-maker on any device, at any time and in any location, thus creating manufacturing intelligence.
FIG. 2. An analytics platform^a provides information in a collaborative environment that is able to unify data, people and systems. This can drive improvements in operational performance and workplace efficiency through automated workflows, advanced analytics and enhanced decision support.
When data is stored in this type of a central repository, users and critical systems can find anything they need in one place for more efficient analytics, trending and reporting. This eliminates OT data silos, while providing IT personnel with a single location to manage, protect and easily integrate OT data with IT tools and cloud applications.
How does a data lake do these things?
A renewable diesel plant or unit has various controllers, valves, process instruments and other equipment types that each create their own specific type of data, using their own format and semantics. Historians can be added to communicate with various devices or classes of devices, but each only communicates with its assigned sources, using its own format and semantics. This leaves data from different sources difficult to compare or integrate, even within a single historian, and multiple historians do not normally communicate with each other. The result is isolated data stored in multiple silos that are cut off from each other.
A data lake takes a different approach. It creates a common data transport layer that supports all OT communications, regardless of the device type. This layer provides a common topology that can bridge gaps, making it far more practical to use digital tools for data analyses.
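To make the idea concrete, the following is a minimal Python sketch of how a common transport layer might normalize payloads from dissimilar sources into one tag schema before they land in the data lake. All payload formats, field names and tag conventions shown here are hypothetical examples, not any vendor's actual interface.

```python
# Sketch: normalize heterogeneous OT payloads into one comparable schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TagRecord:
    """Unified record: every source maps into this single schema."""
    source: str        # originating device, gateway or historian
    tag: str           # contextualized tag name, e.g., "unit1/pump101/vibration"
    value: float
    unit: str
    timestamp: datetime

def normalize_gateway(payload: dict) -> TagRecord:
    # Mapping for a hypothetical wireless gateway payload format.
    return TagRecord(
        source="wireless-gw1",
        tag=f"unit1/{payload['device']}/{payload['variable']}",
        value=float(payload["pv"]),
        unit=payload["units"],
        timestamp=datetime.fromtimestamp(payload["ts"], tz=timezone.utc),
    )

def normalize_historian_row(row: tuple) -> TagRecord:
    # Mapping for a hypothetical legacy historian export row.
    name, val, unit, ts = row
    return TagRecord("historian-A", f"unit1/{name}", float(val), unit,
                     datetime.fromisoformat(ts).replace(tzinfo=timezone.utc))

# Both sources now land in the lake as directly comparable records.
rec = normalize_gateway({"device": "pump101", "variable": "vibration",
                         "pv": 3.2, "units": "mm/s", "ts": 1700000000})
```

Once every source flows through a layer like this, tools downstream can treat a gateway reading and a historian row identically.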
A modeling studio takes historical data and “cleans” it to remove outliers and create a more correlated data set that is suitable for modeling purposes. Next, it uses machine-learning algorithms to build predictive models. Once these are assembled, it is possible to verify a model’s ability to predict anomalies by using test data.
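As a hedged illustration of these modeling studio steps, the short Python sketch below cleans a historical data set by dropping gaps and 3-sigma outliers, fits an anomaly-detection model and verifies it against test data. The column names and values are synthetic stand-ins for historian data, and pandas and scikit-learn are assumed available; a commercial modeling studio would wrap equivalent steps in its own interface.

```python
# Sketch: clean historical data, train an anomaly detector, verify on test data.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Drop gaps and 3-sigma outliers to form a well-correlated training set."""
    df = df.dropna()
    z = (df - df.mean()) / df.std()
    return df[(z.abs() < 3).all(axis=1)]

rng = np.random.default_rng(1)
history = pd.DataFrame({                     # synthetic stand-in for a historian
    "suction_pressure": rng.normal(4.0, 0.1, 1000),
    "discharge_pressure": rng.normal(12.0, 0.2, 1000),
    "vibration": rng.normal(2.0, 0.3, 1000),
})

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(clean(history))

# Verify against held-out test data: -1 flags an anomaly, 1 is normal.
test = pd.DataFrame({"suction_pressure": [4.0, 2.5],
                     "discharge_pressure": [12.0, 6.0],
                     "vibration": [2.0, 8.0]})
print(model.predict(test))                   # typically [ 1 -1 ]
```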
Once validated, the model is deployed in a production environment, using a project studio. This is the area to analyze real-time data and to create rules for complex events. When in operation, if a predictive model finds a new abnormal condition or process anomaly, the model is retrained to consider that condition the next time.
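A complex-event rule layered on the deployed model might look like the following sketch, where an alert is raised only when anomalies persist and the triggering samples are queued for retraining. The window size, threshold and queue are illustrative assumptions.

```python
# Sketch: persistence rule over model verdicts, feeding a retraining queue.
from collections import deque

recent = deque(maxlen=10)   # sliding window of recent model verdicts
retrain_queue = []          # samples representing newly seen conditions

def on_sample(sample: dict, verdict: int) -> None:
    """verdict: 1 = normal, -1 = anomaly (matching the model above)."""
    recent.append(verdict)
    if list(recent).count(-1) >= 3:       # rule: 3 anomalies in last 10 samples
        print("ALERT: persistent process anomaly")
        retrain_queue.append(sample)      # feed back into the next retraining

for v in [1, -1, 1, -1, -1]:              # simulated verdict stream
    on_sample({"vibration": 8.0}, v)
```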
This basic approach is the mechanism used to generate information that informs decision-making for actions in areas such as asset health, efficiency and predictive emissions.
What can a data lake facilitate?
A data lake is not an end, but rather a means for analytics tools to deliver the desired benefits. A larger implementation, with a data lake and applications, unifies data while connecting systems and people. Together, these drive operational performance and workplace efficiency through automated workflows, advanced analytics and enhanced decision support. This is broad conceptually, so we need to focus on three applications:
- Feedstock variability
- Energy management
- Asset health.
First, renewable diesel production can begin with a wide variety of feedstocks, each of which must be treated differently to convert it into a suitable intermediate. While the variety is greater than with conventional refining processes, different fats or biomass can be separated into categories based on their characteristics. A data lake allows operators to compile a detailed catalog of feedstocks, including the process adjustments needed to accommodate each one. Over time, analysis of historical process data from daily operations builds predictive models that simplify the steps operators must take to facilitate a feedstock change, no matter how drastic a shift this change represents. Once the transition is completed, the predictive model can optimize the process for the duration of that feedstock run.
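A feedstock catalog of this kind can be pictured as a simple lookup that maps each feedstock category to its required process adjustments, as in the sketch below. The categories, parameter names and values are purely illustrative and would come from a plant's own operating history.

```python
# Sketch: feedstock catalog with a changeover plan between categories.
FEEDSTOCK_CATALOG = {
    "tallow":           {"pretreat_temp_c": 80, "h2_to_feed_ratio": 1.15},
    "used_cooking_oil": {"pretreat_temp_c": 70, "h2_to_feed_ratio": 1.05},
    "soybean_oil":      {"pretreat_temp_c": 60, "h2_to_feed_ratio": 1.00},
}

def changeover_plan(current: str, incoming: str) -> dict:
    """Return (from, to) pairs for every parameter that must change."""
    a, b = FEEDSTOCK_CATALOG[current], FEEDSTOCK_CATALOG[incoming]
    return {k: (a[k], b[k]) for k in a if a[k] != b[k]}

print(changeover_plan("soybean_oil", "tallow"))
# {'pretreat_temp_c': (60, 80), 'h2_to_feed_ratio': (1.0, 1.15)}
```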
Second, renewable diesel, like its conventional counterpart, requires energy inputs during manufacturing. The same fired heaters, boilers and the like are used, and these processes are under the same environmental regulatory constraints. Optimizing energy consumption is critically important, calling for detailed analysis of all energy consumers to verify their efficiency. This necessary level of detail depends on extensive instrumentation and data analysis for every installation. One beneficial side effect of having all this data is the ability to use predictive emissions techniques. When sufficient statistical information is available via a data lake, regulatory agencies will often allow a plant to base its emissions monitoring on modeling rather than on actual measurements via analyzers, which is far less expensive than maintaining a fleet of analyzers.
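In spirit, a predictive emissions model is a regression trained on historical analyzer data so that routine monitoring can rely on estimates from operating conditions. The sketch below shows the idea with synthetic data and an assumed scikit-learn dependency; a certified predictive emissions monitoring system would be far more rigorous.

```python
# Sketch: estimate NOx from operating conditions instead of an analyzer.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Synthetic history: [firebox temperature degC, excess O2 %] vs. measured NOx ppm
X = rng.uniform([500.0, 2.0], [900.0, 5.0], size=(500, 2))
nox = 0.08 * X[:, 0] + 4.0 * X[:, 1] + rng.normal(0.0, 2.0, 500)

pems = LinearRegression().fit(X, nox)     # trained while analyzers are running
print(pems.predict([[750.0, 3.0]]))       # estimated NOx, ppm
```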
Third, the following will examine operating asset health and a specific centrifugal pump installation (FIG. 3). This hypothetical installation has been outfitted with several WirelessHART condition monitoring instruments, including those for measuring bearing temperature, vibration, motor current draw and flow.
FIG. 3. The centrifugal pump in this example is outfitted with instruments to monitor critical condition indicators.
Using open communication protocols, the data lake takes operations data from these instruments and then centralizes and contextualizes it, even if the different sources represent different data types. Real-time data is delivered to the maintenance and reliability teams via interactive dashboards so they can see what is happening at any moment. This capability alone would be a major advancement for many plants, but it is only a start. As historical data accumulates, it becomes possible to perform operational analytics.
In the scenario involving this pump, two main analytical methodologies are available: principles-driven and data-driven analytics. Each approach has its own capabilities and should be matched to the application. In some cases, both can help.
Principles-driven analytics techniques use laws and principles that have been codified for many years. Engineers understand how a pump works (e.g., pump head, speed, pressure differential) and how, for example, comparing current draw to liquid moved provides a basic measure of efficiency.
However, the concept can be extended to examine other variables (FIG. 4), such as vibration or changes in suction and discharge pressure, both of which can point to abnormal situations indicative of developing problems. With enough data, it is possible to characterize the deterioration by using failure mode and effects analysis. For example, it will be possible to know that when vibration reaches a specific threshold, the installation is nearing failure. This will be based on the specific unit, but will also draw on experiences with other units like it within the plant and perhaps within the larger company.
FIG. 4. Many equipment condition monitoring techniques use principles-driven analytics to determine when problems are developing.
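A minimal principles-driven sketch might compute pump efficiency from first principles and compare vibration against an experience-based threshold, as below. The alarm limit, efficiency floor and fluid density are hypothetical values chosen for illustration.

```python
# Sketch: principles-driven pump health from efficiency and vibration.
G = 9.81  # gravitational acceleration, m/s^2

def pump_efficiency(flow_m3s: float, head_m: float, power_kw: float,
                    density_kgm3: float = 850.0) -> float:
    """Hydraulic power delivered divided by electrical power drawn."""
    hydraulic_kw = density_kgm3 * G * flow_m3s * head_m / 1000.0
    return hydraulic_kw / power_kw

VIBRATION_ALARM_MMS = 7.1   # hypothetical limit, in the spirit of ISO zones
EFFICIENCY_FLOOR = 0.55     # hypothetical unit-specific threshold

def assess(flow_m3s, head_m, power_kw, vibration_mms) -> str:
    eta = pump_efficiency(flow_m3s, head_m, power_kw)
    if vibration_mms >= VIBRATION_ALARM_MMS or eta < EFFICIENCY_FLOOR:
        return f"degrading: eta={eta:.2f}, vibration={vibration_mms} mm/s"
    return f"healthy: eta={eta:.2f}"

print(assess(0.05, 80.0, 55.0, 3.2))   # -> healthy: eta=0.61
```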
Simultaneously, data-driven analytics create statistical models to fit the data. For a pump, analytics can create a graph plotting probability of failure on demand (PFD) against operating hours for a given motor/pump configuration. The larger the number of comparable installations, the more reliable the analysis will be. If the company assigns a PFD value based on criticality of a given installation, it will be a simple matter to determine the remaining useful life of an asset, thus allowing the maintenance team to optimize its activities.
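As a data-driven counterpart, the sketch below fits a Weibull model to failure ages from comparable installations and reads off the remaining useful life at a criticality-based PFD target. The failure ages and target are illustrative, and scipy is assumed to be available.

```python
# Sketch: Weibull fit to fleet failure data, then remaining useful life.
import numpy as np
from scipy.stats import weibull_min

# Failure ages (operating hours) from comparable motor/pump installations.
failure_hours = np.array([8100.0, 9400.0, 10200.0, 11500.0,
                          12100.0, 13000.0, 14800.0])
shape, loc, scale = weibull_min.fit(failure_hours, floc=0)

PFD_TARGET = 0.10   # acceptable failure probability for this criticality
life_at_target = weibull_min.ppf(PFD_TARGET, shape, loc=loc, scale=scale)

run_hours = 6000.0  # hours accumulated on the asset under study
rul = max(life_at_target - run_hours, 0.0)
print(f"remaining useful life at PFD {PFD_TARGET}: ~{rul:.0f} h")
```

The more comparable installations feed the fit, the tighter and more trustworthy the resulting estimate becomes.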
While this pump example is real and valuable, the concepts that it illustrates go beyond this limited application and can be extended universally. We can move from this simple case to a much higher level of complexity, effectively treating an entire plant or production unit as a single asset.
The bigger picture
Using analytics to characterize a component—such as our pump example—is the bottom of the pyramid (FIG. 5). There are many other examples, but these are the least complex, and they offer, individually at least, the lowest payoff. However, when combined, they have a major financial benefit.
FIG. 5. The ability of an analytics platform to work with a data lake’s resources allows for complex problem solving and decision support.
Moving up the pyramid brings in more complex and costly assets (such as large compressors, heat exchangers and fired heaters). Beyond these assets are a complete production unit, a plant and, ultimately, a company with multiple plants. As complex as these assets become, they still follow the same basic methodology of applying data-driven and principles-driven tools.
FIG. 5 illustrates how this works. The left section defines the levels of challenges and opportunities based on their scale and complexity. Moving from the bottom up, the number of items analyzed becomes smaller (a single plant vs. thousands of components), but the complexity and stakes increase. The upper sections are where the importance of analytics supported by data lake technology becomes the most evident. Traditional methods using historians may be sufficient for some components and assets, but the ability to perform useful analysis begins to break down due to data incompatibilities.
The data lake’s ability to centralize and contextualize data makes it possible to apply analytical tools to more complex issues, as illustrated in the central green section of FIG. 5. As a practical matter, moving higher up the pyramid calls for more data-driven analysis due to complexity, whereas principles-driven methodology is more frequently used at the bottom. Solving higher-level problems often assumes that lower-level problems are resolved, so a bottom-up approach may be preferred. Additionally, addressing lower-level problems can sometimes resolve higher-level issues.
Identifying opportunity
The rightmost column in FIG. 5 illustrates the kinds of opportunities available using this methodology. Again, moving from the bottom to the top, one starts with individual pieces of equipment, and then gradually works toward enterprise-wide solutions. Management must look at this range and determine where to work on improvements to deliver the greatest impact.
Take efficiency, for example. Unfortunately, no process unit or plant comes equipped with an efficiency knob that operators can easily turn up through a single action. Improving overall efficiency is a process of making dozens and possibly hundreds of small changes that tighten up the operation to improve feedstock utilization, reaction rates and heat recovery, while also reducing losses, among other parameters.
A data lake environment supports this kind of improvement effort because it makes it possible to recognize the opportunity at the upper levels and then drill down to individual reactors, heat exchangers, boilers, distillation towers and other assets to determine where inefficiencies exist and how they can be fixed. The required actions happen at all levels and are coordinated by the overall analytics platform.
The unique challenge of renewable diesel production
The aforementioned points apply to companies producing renewable diesel, and to virtually every process industry. It is true that renewable diesel production faces particularly severe challenges related to feedstock variability, but those challenges are increasingly common across many industries.
The challenges for renewable diesel are not entirely technical, as environmental sustainability is also a key component. Often, conventional diesel is easier to make and cheaper to produce. A benefit of renewable diesel is that it is not made from fossil feedstocks and, therefore, it helps to reduce carbon emissions.
Maintaining production efficiency despite major feedstock variability requires the right tools to handle refining complexity. These new OT tools, with a data lake and applications, unify data to produce meaningful insights while connecting systems and people. Together, these tools drive operational performance and workplace efficiency through automated workflows, advanced analytics and enhanced decision support. The challenges are significant, but today's technologies make it easier than ever to design and implement optimized processes and work practices. HP
NOTE
^a Emerson PlantWeb Optics platform
The Author
Berutti, M. - Emerson Automation Solutions, St. Louis, Missouri
Mart Berutti is the Vice President of Digital Transformation at Emerson Automation Solutions. He has 30 yr of experience in the fields of dynamic process simulation, control system validation, industrial Ethernet integration and operator training. At Emerson, before accepting his current role, Mr. Berutti served as Vice President of Process Simulation. Prior to joining Emerson, he was the President and Chief Operating Officer of MYNAH Technologies from its founding in 2002 until Emerson acquired the company in May 2017. Mr. Berutti earned a BS degree in chemical engineering from the Missouri University of Science and Technology.