November 2020

Special Focus: Process Controls, Instrumentation and Automation

Virtual analyzers: Shaping the future of product quality

Product quality directly drives price and where hydrocarbons can be delivered. It is a critical factor in optimizing how oil and gas are produced, transported and refined. Despite its importance, quality remains difficult to measure accurately and in real time: product moves quickly and mixes constantly throughout the supply chain. Much of this challenge stems from an inability to perform real-time analyses of specific quality parameters. While sampling programs are widespread in the industry, they have slow turnaround times and high costs.

Given these constraints, the authors’ company began investigating virtual analyzers. The idea was simple and sought to answer the following question: Could low-resolution sampling, combined with knowledge of operational set points, be used to model quality parameters that are currently measured only in the lab? This article explores this question and outlines potential applications, closing with a case study focused on predicting real-time sulfur quality.

The basics of virtual analyzers and why they work

Virtual analyzers make it possible to accurately monitor properties throughout a facility in real time, without the capital expenditure or labor-intensive maintenance associated with traditional analyzers. They can monitor properties that traditional analyzers cannot, and they introduce minimal friction for calibration and maintenance. Virtual analyzers have been used to monitor physical properties where physical analyzers were not economically feasible or could not be installed at the desired monitoring points. They also make it possible to closely monitor quality parameters [such as total acid number (TAN) or distillation cuts] that could not otherwise be monitored by physical analyzers.

Virtual analyzers are models that utilize existing data and physical property relationships. The authors’ company considers three model types when building a virtual analyzer: process models, physical/empirical models and statistical models.

Process models are those that track the trajectory of molecules through facilities. By tracking each hydrocarbon molecule as it enters the facility, this model can hypothetically tell where every molecule is at any timestamp. The buildout includes process conditions (such as shrinkage) and can approximate other parameters (such as transit time through a tank, the effect of linefill and the mixing of different qualities). Model examples include real-time mappings of plant processes that output specific blend ratios. These models help predict real-time density in tanks by using inputs and product transit times. To validate these process models, it is essential that predicted output properties are checked against measured values. Once a proper process model exists, it can be used to build a physical model of the desired quality specification.
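
To make this concrete, below is a minimal sketch of a single-tank process model, not the authors’ production system: it delays each inflow by a fixed pipeline transit time to mimic linefill, then updates the tank’s volume-weighted density. All volumes, densities and the delay are hypothetical.

```python
# Minimal process-model sketch (illustrative only): a single tank fed by a
# pipeline with a fixed transit delay, tracking volume-weighted density.
from collections import deque

def simulate_tank(inflows, transit_steps, v0, rho0):
    """inflows: list of (volume, density) per time step at the pipeline inlet.
    transit_steps: pipeline delay in time steps (linefill effect).
    v0, rho0: initial tank volume and density."""
    linefill = deque([(0.0, rho0)] * transit_steps)  # batches in transit
    volume, density = v0, rho0
    history = []
    for vol_in, rho_in in inflows:
        linefill.append((vol_in, rho_in))
        arr_vol, arr_rho = linefill.popleft()        # batch arriving at tank
        if arr_vol > 0:
            # Ideal (linear) volume-weighted blend of tank contents
            density = (volume * density + arr_vol * arr_rho) / (volume + arr_vol)
            volume += arr_vol
        history.append((volume, density))
    return history

# Example: a denser batch entering at step 3 reaches the tank 2 steps later.
feed = [(100, 820)] * 3 + [(100, 860)] * 3
for step, (v, rho) in enumerate(simulate_tank(feed, transit_steps=2, v0=500, rho0=820)):
    print(f"t={step}: volume={v:.0f} m3, density={rho:.1f} kg/m3")
```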

Empirical models use relationships of known physical properties (i.e., process parameters) to track unknowns. Using known measurements of a product’s physical properties enables the calculation of other properties, such as calculating vapor pressure from composition by using Raoult’s Law.1 This law uses partial pressure and mole fraction to find the resultant vapor pressure of an ideal mixture. Although some of these data points exist at certain facilities, they are not used in conjunction with one another to draw additional conclusions about product quality.
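
As an illustration of the empirical approach, the sketch below applies Raoult’s Law directly: the total vapor pressure of an ideal mixture is the mole-fraction-weighted sum of the pure-component vapor pressures. The component values shown are hypothetical.

```python
# Raoult's Law for an ideal mixture: P_total = sum(x_i * P_i_sat),
# where x_i is the mole fraction and P_i_sat the pure-component vapor pressure.

def raoult_vapor_pressure(mole_fractions, pure_vapor_pressures):
    assert abs(sum(mole_fractions) - 1.0) < 1e-6, "mole fractions must sum to 1"
    return sum(x * p for x, p in zip(mole_fractions, pure_vapor_pressures))

# Hypothetical 3-component mixture (pressures in kPa at a fixed temperature)
x = [0.10, 0.30, 0.60]          # light, intermediate and heavy components
p_sat = [240.0, 68.0, 15.0]     # illustrative pure-component vapor pressures
print(f"Mixture vapor pressure: {raoult_vapor_pressure(x, p_sat):.1f} kPa")
# -> 0.10*240 + 0.30*68 + 0.60*15 = 53.4 kPa
```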

If all molecules in a tank or stream are properly accounted for, along with the required inputs for the property in question, it is possible to calculate the value of that property throughout the facility, as well. This is trivial for linear physical relationships, such as density; however, it becomes increasingly complex for nonlinear properties, such as vapor pressure, viscosity, flash point or, for many refineries, p-value. An example of direct application is the prediction of the output composition from a stabilizer, given knowledge of input properties and operational set points like temperature and pressure.
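
The article does not name a specific method for nonlinear properties; as one common industry example, the sketch below blends kinematic viscosity with the Refutas blending index, a transformation in which blending becomes linear by mass fraction.

```python
import math

# Refutas viscosity blending (a standard industry method, not from the
# article): blend in "viscosity blending number" (VBN) space, which is
# linear by mass fraction, then transform back.

def vbn(nu_cst):
    """Viscosity blending number from kinematic viscosity (cSt)."""
    return 14.534 * math.log(math.log(nu_cst + 0.8)) + 10.975

def vbn_inverse(v):
    """Kinematic viscosity (cSt) from a blending number."""
    return math.exp(math.exp((v - 10.975) / 14.534)) - 0.8

def blend_viscosity(mass_fractions, viscosities_cst):
    v_blend = sum(w * vbn(nu) for w, nu in zip(mass_fractions, viscosities_cst))
    return vbn_inverse(v_blend)

# 60/40 blend of a 3-cSt and a 300-cSt component (illustrative values):
print(f"{blend_viscosity([0.6, 0.4], [3.0, 300.0]):.1f} cSt")
# Roughly 10 cSt: far below the 0.6*3 + 0.4*300 = 121.8 cSt a naive
# linear rule would give, which is the hazard of nonlinear properties.
```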

The third model type, the input prediction model, fills data gaps in process and prediction models by using statistics, historical test results or machine-learning (ML) algorithms. When physical relationships are not apparent, a statistical relationship can still provide entities with increased insight into, and accuracy of, product quality. These models rely heavily on the iteration and examination of large data sets to learn emerging relationships. They are most often used when certain changes in the process flow diagram cannot be explained by physical relationships alone—for example, the buildout of a viscosity model that can estimate the C4- content of every incoming truck, along with its corresponding uncertainty. One such model is a blend of four components: two with recent lab samples that measure viscosity and two with missing data. The viscosity of the missing samples can be estimated from density and the authors’ company’s data set of density-viscosity relationships between different barrels.
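
A minimal sketch of such an input-prediction step is shown below. It fits a simple linear regression in log-viscosity space on synthetic density-viscosity data and reports a residual-based uncertainty; the data and model form are illustrative, not the authors’ ML pipeline.

```python
import numpy as np

# Input-prediction sketch: estimate missing viscosities from density using a
# historical density/viscosity data set, and report residual uncertainty.

rng = np.random.default_rng(0)
density = rng.uniform(800.0, 950.0, 200)                      # kg/m3 (synthetic)
log_visc = 0.02 * (density - 800.0) + rng.normal(0, 0.1, 200)  # synthetic history

# Fit log(viscosity) as a linear function of density
slope, intercept = np.polyfit(density, log_visc, 1)
residual_std = np.std(log_visc - (slope * density + intercept))

def predict_viscosity(rho):
    """Return a viscosity estimate and a ~95% multiplicative error band."""
    mu = slope * rho + intercept
    return np.exp(mu), np.exp(2.0 * residual_std)

visc, factor = predict_viscosity(875.0)
print(f"viscosity ~= {visc:.1f} cSt (multiply/divide by {factor:.2f} for ~95% band)")
```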

In another example, an input prediction model was applied to predict vapor pressure drift from a tank that was tested only once per week. The vapor pressure of this tank was computed using sales line data and tank density. At current operations, the virtual analyzer has an estimated uncertainty of 3.71 kPa with a confidence interval of 95%, which is based on an operator testing approximately once every 7 d.

Generally, entities need to know both the predicted value and its uncertainty if they use virtual analyzers to fine-tune processes. Both should be measured, and the virtual analyzer is often tuned for a client’s control panel to account for individual safety margins. This helps clients capture more value because uncertainty is non-uniform in time. Periods exist (i.e., when inputs are fresh) where uncertainty is low enough to run a very tight blend/process; most often, however, uncertainty is higher. Operators usually set the safety margin at the level of maximum uncertainty (the top end of the uncertainty range). As a result, even without increasing accuracy, virtual analyzer models can improve margins simply by quantifying uncertainty as it changes. Virtual analyzer scope can also be extended on a case-by-case basis, and has included forecasting, recommended operational decisions and virtual composite samples.
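
The sketch below illustrates the point with hypothetical numbers: if prediction uncertainty grows with the age of the last lab sample, a blend target tied to the live uncertainty recovers headroom that a fixed worst-case margin gives away.

```python
# Dynamic vs. fixed safety margins (illustrative numbers, not field data).
# Assume prediction uncertainty grows with the age of the last lab sample.

SPEC_LIMIT = 100.0        # e.g., a vapor pressure spec, kPa
BASE_SIGMA = 1.0          # uncertainty right after a lab sample, kPa
GROWTH = 0.5              # added uncertainty per day since the sample, kPa/d

def sigma(days_since_sample):
    return BASE_SIGMA + GROWTH * days_since_sample

def target(days_since_sample):
    """Blend target with a 2-sigma margin below the spec limit."""
    return SPEC_LIMIT - 2.0 * sigma(days_since_sample)

fixed_target = SPEC_LIMIT - 2.0 * sigma(7)   # worst case: week-old sample
for d in [0, 1, 3, 7]:
    print(f"day {d}: dynamic target {target(d):.1f} kPa "
          f"vs. fixed worst-case target {fixed_target:.1f} kPa")
```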

Real vs. virtual analyzers, based on their respective accuracy and precision

The accuracy and uncertainty of virtual analyzers vary with the frequency of data collection. The factors that limit the accuracy of a virtual analyzer differ from those of a real analyzer. For example, the predictability of the incoming data is directly correlated with virtual analyzer accuracy. Naturally, the more data gathered from incoming streams over time, the more accurate the virtual analyzer will become. Even when there are unpredictable spikes on incoming streams, blending that stream through the tank(s) reduces the prediction’s overall sensitivity to those spikes. Several existing virtual analyzers are as accurate as expensive in-line analyzers. As a benchmark, vapor pressure virtual analyzers have achieved uncertainties of 2.5 kPa–4.5 kPa. With similar expected uncertainties of 2.5 kPa–5 kPa on comparable physical instruments, the absence of capital expenditure makes a virtual analyzer an attractive choice for many entities. Virtual analyzers are easy to implement and maintain, and they are reliable and efficient solutions in the field.
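
The damping effect of a tank can be seen with a simple well-mixed-tank (CSTR) model, sketched below with illustrative parameters: a one-hour spike in incoming quality barely moves the tank-level prediction.

```python
# A well-mixed tank acts as a low-pass filter on incoming quality spikes:
# dC/dt = (Q/V) * (C_in - C). Illustrative parameters only.

V = 1000.0   # tank volume, m3
Q = 50.0     # throughput, m3/h
dt = 1.0     # time step, h

c = 0.5                                  # tank quality (e.g., sulfur wt%)
c_in = [0.5] * 5 + [2.0] + [0.5] * 18    # one-hour inlet spike to 2.0 wt%
for t, ci in enumerate(c_in):
    c += (Q / V) * (ci - c) * dt
    if t in (4, 5, 6, 12, 23):
        print(f"t={t:2d} h: inlet={ci:.2f} wt%, tank={c:.3f} wt%")
# The 2.0 wt% inlet spike moves the tank quality by only ~0.075 wt%.
```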

Both physical and virtual analyzers exhibit the same classic sources of error (such as sample representativeness, sampling error, measurement error and calculation error), and these errors can propagate. Close attention to virtual analyzer data enables efficient error identification and correction. As is true for measurement in general, the accuracy of the overall measurement is most impacted by the least accurate step.
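
For independent error sources, combined uncertainty follows root-sum-square propagation, which is why the least accurate step dominates. A quick illustration with hypothetical error magnitudes:

```python
import math

# Root-sum-square propagation of independent error sources (illustrative):
# the largest single term dominates the combined uncertainty.
errors = {"representativeness": 1.5, "sampling": 0.5,
          "measurement": 0.3, "calculation": 0.1}
combined = math.sqrt(sum(e**2 for e in errors.values()))
print(f"combined uncertainty = {combined:.2f}")
# ~1.61: nearly all of it comes from the 1.5 term, so improving the
# smaller error sources barely moves the overall accuracy.
```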

Comparing the two error profiles, it is generally difficult to predict which type of analyzer will be more accurate. Physical analyzers come with instrument fouling, sampling error and general instrument malfunction, and fouling and sampling errors are independent of stream stability. For predictable streams, virtual analyzers are often more accurate. For highly unpredictable streams, virtual analyzer performance degrades; however, virtual analyzers provide the bonus of being much easier to calibrate and customize. In this case, unpredictable does not mean variable: highly variable streams can be made highly predictable with a small number of regular field measurements.

Maintenance still plays an important role in the installation and calibration of virtual analyzers

Maintaining a virtual analyzer has many parallels to real analyzer maintenance. Virtual analyzers are constantly tested for accuracy and range against actual physical tests to ensure that the virtual models remain sustainable and accurate. Physical tests include occasional spot samples at certain locations or checks for autocorrelation in the difference between predicted and measured values. Once the sources of usable data are defined, it is important to determine which information is the most error prone. The weakest inputs in a virtual analyzer are determined by the potential error range of the input and how sensitive the model is to that input. Maintenance procedures should focus mainly on the most error-prone inputs. As is true for every model, the more error-prone inputs that exist, the more maintenance work will be required.
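
One way to rank error-prone inputs (a sketch of the idea, not the authors’ procedure) is to perturb each input by its plausible error range and observe the output swing; the toy model and ranges below are hypothetical.

```python
# Rank virtual-analyzer inputs by (model sensitivity x input error range),
# estimated by perturbing each input. Model and ranges are hypothetical.

def model(inputs):
    """Toy virtual analyzer: predicted quality from three inputs."""
    return (0.8 * inputs["flow_ratio"] + 0.1 * inputs["temperature"]
            + 2.0 * inputs["feed_sulfur"])

baseline = {"flow_ratio": 0.6, "temperature": 40.0, "feed_sulfur": 1.2}
error_ranges = {"flow_ratio": 0.05, "temperature": 2.0, "feed_sulfur": 0.3}

impacts = {}
for name, delta in error_ranges.items():
    perturbed = dict(baseline, **{name: baseline[name] + delta})
    impacts[name] = abs(model(perturbed) - model(baseline))

for name, impact in sorted(impacts.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} output swing: {impact:.3f}")
# Maintenance effort (extra sampling, meter checks) goes to the top input.
```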

Every property needs a physical model to run a perturbation test against, to ensure that the model is behaving as it should. The authors’ company uses feedback loops to improve models. The most common way to validate models is to take physical tests at the sites where virtual analyzers are installed. Some virtual analyzers also feature built-in alerting to flag when a product goes off-spec and to provide specific follow-up actions to course-correct.

Some companies provide maintenance of virtual analyzers. These responsibilities include locating and deciding on data inputs, building and testing the model, and then monitoring and maintaining it once it is built. These duties require enormous amounts of attention to detail and recalibration, including constant sanity checks against real-time data being collected in the field.

Typical build time for a virtual analyzer depends on complexity and the availability of clean data, but reasonable results can often be achieved within 1 wk. Gathering appropriate data usually takes the bulk of the time; however, this also depends on in-house capabilities.

Once the appropriate data is gathered, the virtual analyzer company can consult on testing schedules and test locations to maximize the visibility of product specifications. This saves time and money on the entity’s end and delivers better results, even without the purchase of a virtual analyzer. If an entity decides to use a virtual analyzer, it will provide operations, engineering and marketing specialists at midstream companies, as well as at exploration and production companies, with valuable insights for optimizing their measurement programs. These insights often unlock unknown operational upsides and mitigate risks such as off-spec delivery and contamination.

Case study: Sulfur prediction at a refinery

This case study focuses on a virtual analyzer used on storage tanks to better predict density, sulfur and p-value. For simplicity, it will focus specifically on sulfur.

The business problem presented was inaccurate flowmeter balances, leading to imprecise product volumes. Heel-and-line fill volume was unaccounted for in the operator’s calculations, and its absence continued to produce inaccurate specification estimates. Companies that utilize blending tanks to mix multiple crude streams must meet specification limits on quality parameters to sell their products. To meet these limits, they must have an accurate sense of the amount of each input stream, along with its quality (e.g., sulfur content). An important—but often overlooked—stream to keep track of is the oil left in the tank and outflow pipe from the previous blend (known as the “heel-and-line fill”). The goal of the project was to accurately predict the blend quality parameters, including these neglected volumes. The first step was to create a process model of the systems and setup of the plant, including flowmeters, tank levels and lab testing with corresponding timing. Once the process model was created (FIG. 1), the empirical and ML models for prediction were created and implemented.

FIG. 1. Process flow diagram for the refiner’s blend skid and virtual analyzer implementation.

The inputs to the model for sulfur prediction were the volumes of the incoming streams, the density of each stream and the sulfur content. Flowmeter and tank level data were available in 5-min increments, whereas the lab data—depending on the stream—was taken every 1 d–3 d. If the lab tests were several days old, this could potentially lead to inaccurate quality data. The data provided by the company prior to building the process model did not include any heel-and-line fill data, only the inflow of crude streams, thus providing an inaccurate picture of actual blend composition.

The model required several components to work accurately. It used the tank level and flow data to determine the amount of heel-and-line fill present for the blend. The timestamp from the beginning of the blend was used to pull the volume data for the heel-and-line fill. Flowmeters were used to determine the total amount of each stream entering the blend from other tanks. Once this volume data was in place, an empirical model was used to determine the final sulfur content of the blend, using the appropriate lab data from the constituent oils. The 2-sigma error for the model with proper heel-and-line fill accounting was 0.1 of the lower sulfur specification limit. This model was more than three times as accurate as the model that did not include the heel-and-line fill.
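
A minimal version of the blend calculation is sketched below with hypothetical volumes and qualities: sulfur content (wt%) blends by mass, so each stream is weighted by volume times density, and omitting the heel-and-line fill biases the estimate.

```python
# Mass-weighted sulfur balance for a blend, with and without the
# heel-and-line fill. All volumes and qualities below are hypothetical.

streams = [
    # (name, volume m3, density kg/m3, sulfur wt%)
    ("crude A",        800.0, 850.0, 0.45),
    ("crude B",        600.0, 910.0, 2.10),
    ("heel/line fill", 150.0, 880.0, 1.60),   # left over from the prior blend
]

def blend_sulfur(rows):
    mass = sum(v * rho for _, v, rho, _ in rows)
    return sum(v * rho * s for _, v, rho, s in rows) / mass

with_heel = blend_sulfur(streams)
without_heel = blend_sulfur(streams[:2])
print(f"sulfur with heel-and-line fill:     {with_heel:.3f} wt%")
print(f"sulfur ignoring heel-and-line fill: {without_heel:.3f} wt%")
```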

As a result, the refiner gained a better understanding of the measurement of heel-and-line fill by using its own system data and was able to more accurately predict the sulfur content of its blends. In this instance, most of the data was already collected and available; however, the full potential to make product specifications more accurate was not being realized. Using the authors’ company’s virtual analyzer, the refiner identified the heel-and-line fill volume data and recognized it as an important component of its process model. These results are presented in FIG. 2.

FIG. 2. Sulfur content of the refiner’s blend with and without using heel-and-line fill.

Takeaway

Through the authors’ company’s process of identifying and applying a virtual analyzer, results can be realized quickly. Identifying areas of improvement requires close attention to detail, combined with industry and scientific knowledge. HP
