November 2021

Special Focus: Process Controls, Instrumentation and Automation

Manage integrated operating limits

Baliga, M. A., Reliance Industries Ltd.

For the safe and smooth operation of petrochemical and refining facilities, it is vital that they are run continuously within defined operating limits, ensuring high reliability of the plant and equipment and delivery at rated capacities throughout their lifecycle. However, operating limits are not cast in stone and must be reviewed regularly to derive maximum benefit with respect to production, quality and cost.

Operating limits are not a single set of limits, but are layered and vary depending on product grades and modes of operation, including startups and shutdowns. Often, undesirable events [e.g., pump seal leaks, overflow of tanks, over-pressurizing of vessels, popping of pressure safety valves (PSVs)] occur during shutdown and startup because the safe operating limits for these modes of operation were either not defined or were ignored. Operating limits around the reactor and catalyst should also be revised between turnarounds, based on reactor performance and economics as the catalyst ages.

This article discusses the development and management of operating limits with an integrated approach between functions that include operations, engineering, technical services and quality assurance (QA).

Aspects of monitoring and control parameters, limits

Plant design documents explain the parameters to be checked and controlled. Key design documents that provide this information include process flow diagrams (PFDs), piping and instrumentation diagrams (P&IDs) and equipment data sheets. The licensor or vendor manuals or the trips and alarm schedule supply the recommended operating limits.

The required details are spread across assorted documents from different sources, and a change to a parameter in one document may have an impact on a parameter in another. Therefore, it is important for the operator (operating company) to combine this information into a single master data set, which should be treated as a controlled document. The plant manager must ensure that this master data set is always kept current and valid. The technical services, reliability and engineering groups must confirm that all relevant information is correct and maintain an audit trail of changes.
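
The audit trail mentioned above can be as simple as an append-only change log. The sketch below is a minimal illustration, not a prescribed implementation; the file name, column names and the management-of-change (MOC) reference format are hypothetical placeholders for whatever the operating company's own systems use.

# Minimal sketch of an audit trail for master data changes (illustrative only).
# The file name, column names and MOC reference are hypothetical placeholders.
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("master_data_change_log.csv")  # hypothetical location

def record_change(tag_id, field, old_value, new_value, changed_by, moc_ref):
    """Append one change record so every revision of the master data set is traceable."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp_utc", "tag_id", "field",
                             "old_value", "new_value", "changed_by", "moc_ref"])
        writer.writerow([datetime.now(timezone.utc).isoformat(), tag_id, field,
                         old_value, new_value, changed_by, moc_ref])

# Example usage: raising the high alarm limit of a hypothetical furnace feed flow tag.
record_change("FI-1001", "limit_high", 95.0, 98.0, "process_engineer", "MOC-2021-042")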

During the lifecycle of the plant, this master data set should be periodically reviewed and updated based on risk analyses, the outcome of failure investigations, regulatory changes, upgrades in engineering standards and customer requirements. Recommendations from process hazard analysis (PHA) studies [hazard and operability (HAZOP), layers of protection analysis (LOPA), etc.] and reliability studies should be considered for updating the master data. The master data must be reviewed after every turnaround and catalyst replacement. All changes must be conducted through the company’s change management process.

The master data of the plant's parameters and their respective limits typically become more stringent as the plant ages. Based on the analysis of failures and inspection data, new parameters may be added or limits may be tightened. Certain parameters apply only to certain operating modes and may not be relevant for other modes.

The master data set should contain all relevant information needed to run the plant safely. Parameter limits should be set as wide as the design safely allows, since any operation outside these limits for a prolonged duration can lead to degradation of the equipment, piping or the product itself. In many cases, limits are set where trade-offs exist [e.g., a lower hydrogen (H2) quench rate in a hydroprocessing reactor helps increase production rates and lower costs, but impacts catalyst life and product quality]. A similar example is the reflux rate in a distillation column, which affects throughput and operating costs as well as product quality. Tightening the limit on one factor may lead to giveaways or losses in other aspects.

Aspects of a master list

A typical master list should contain the following information:

  1. Tag ID: Taken from the P&ID; every tag shown on a P&ID should be in the master list and, in turn, every tag that appears in the master list should appear in the distributed control system (DCS).
  2. Tag description: The unique name of the measurement point.
  3. Associated equipment ID: Every measurement point or tag must be associated with a piece of equipment; the association may be based either on the equipment that most influences the parameter or on the equipment that is most affected by the process value.
  4. Name of the associated equipment: The unique name of the equipment.
  5. Location of measurement: The description of the measurement location, both in the field and on the panel.
  6. P&ID number: The P&ID number where the tag can be found.
  7. Boundary definition: Is the purpose of the tag for monitoring or control? If the purpose is control, then:
    a. Boundary types: Whether the tag influences reliability, integrity, environment, energy, production or quality. These boundaries are not furnished by the licensor or engineering for all tags and must be generated by the operating company itself. A tag may fall into two or more types.
    b. Boundary levels: The type of boundary levels (e.g., H, L, HH, LL), sometimes called standard and critical boundaries. The number of boundary levels and their names may vary based on the company's alarm philosophy. Every tag that has a defined boundary level has an alarm configured in the DCS. This information comes from P&IDs and vendor manuals.
    c. Data type: Continuous or discrete. In most cases this is continuous, except where the data collection interval is long enough that the data may be treated as discrete. Quality parameter values obtained from sampling and laboratory analysis at certain intervals can be considered discrete.
    d. Alarm priority: The alarm priority P1, P2 or P3, as defined in a company’s alarm philosophy document.
    e. Mode of operations: Applicable mode of operation.
    f. Limits: The limiting values and the units of measurement. Depending on the mode of operations and the boundary type, the actual limit values may change.
    g. Transmitter range and least count: The maximum/minimum range and least count of the transmitter or measurement gauge.

Note: Data for items b, d, e and f should come from alarm rationalization work (refer to the section below); data for item c comes from control system engineering documents; and data for item g comes from the maintenance management system or asset register.
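
Captured together, the items above amount to one record per tag in the master list. Purely as an illustrative sketch (the field names, types and example values below are assumptions, not prescribed by any licensor or standard), such a record might be structured as follows.

# Illustrative structure for one master-list record (field names are assumptions).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MasterListRecord:
    tag_id: str                      # 1. Tag ID, as shown on the P&ID and in the DCS
    tag_description: str             # 2. Unique name of the measurement point
    equipment_id: str                # 3. Associated equipment ID
    equipment_name: str              # 4. Name of the associated equipment
    location: str                    # 5. Location of measurement (field/panel)
    pid_number: str                  # 6. P&ID number where the tag appears
    purpose: str                     # 7. "monitoring" or "control"
    boundary_types: list[str] = field(default_factory=list)          # 7a. e.g., reliability, integrity
    boundary_levels: dict[str, float] = field(default_factory=dict)  # 7b/7f. e.g., {"H": 98.0, "L": 60.0}
    data_type: str = "continuous"          # 7c. continuous or discrete
    alarm_priority: Optional[str] = None   # 7d. P1, P2 or P3 per the alarm philosophy
    mode_of_operation: str = "normal"      # 7e. applicable operating mode
    unit: str = ""                         # 7f. unit of measurement
    transmitter_range: tuple[float, float] = (0.0, 0.0)  # 7g. min/max of the transmitter
    least_count: float = 0.0               # 7g. least count of the transmitter or gauge

# Example entry for a hypothetical furnace feed flow tag.
fi_1001 = MasterListRecord(
    tag_id="FI-1001", tag_description="Furnace feed flow", equipment_id="F-101",
    equipment_name="Crude charge heater", location="Field/panel", pid_number="PID-010",
    purpose="control", boundary_types=["reliability", "production"],
    boundary_levels={"HH": 105.0, "H": 98.0, "L": 60.0, "LL": 50.0},
    alarm_priority="P2", mode_of_operation="normal", unit="m3/hr",
    transmitter_range=(0.0, 120.0), least_count=0.1,
)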

It is advantageous for the operator to enrich the master data set with more relevant information, including the level of risk (e.g., high, medium, low) associated with each tag. This should be done in line with the company’s risk assessment process. The licensor or equipment vendor may not provide this information, and it will have to be generated by the company itself. Another set of data that should be included concerns troubleshooting steps, sometimes called the “operator guide,” as follows (a sketch of such an entry is shown after the note below):

  1. Purpose: Why the measurement point is on the P&ID.
  2. Cause: Why the measured value may deviate from the normal allowed operating limits (several reasons may exist for the deviation, even in normal circumstances; all known causes should be listed here).
  3. Consequences: The consequences if operations are continued for a prolonged duration (both the activation of the next level of barrier and the eventual consequence may be described here).
  4. Actions required to normalize: This set of actions is listed in sequential order, along with checkpoints. Operators should progress through these actions in the given order to normalize plant operations. Different causes call for different sets of actions, so operators must first ascertain the cause before proceeding.
  5. Time required to normalize: Minimum time required to complete operator actions and allow the process system to recover.
  6. Time allowed in deviation: The maximum time allowed in deviation. In some cases, this applies to each individual deviation, while in others the cumulative time in deviation since the last outage or inspection is more meaningful. Both the limit itself and the allowed time in deviation depend greatly on equipment metallurgy, design rating, mode of operation, sampling locations and frequency of sampling analysis, inspection locations and frequencies, the age of the plant, etc.
  7. Responsibility: The person or role responsible for maintaining the measured value within range; if the value cannot be restored, the deviation should be escalated for further troubleshooting at the subject matter expert (SME) level.
  8. Owner/accountable party: The SME in the technical hierarchy who makes sure that the limits and all other information remain correct and relevant.

Note: High-quality content is a demonstration of operational excellence and the deployment of best practices. Content is updated each time an abnormal situation is encountered and the normal situation is restored.
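
Continuing the same sketch, an operator-guide entry can be held alongside each tag’s master-list record. The structure below is illustrative only; the field names and example content are assumptions and would be adapted to the company’s own operator-guide format.

# Illustrative structure for an operator-guide entry (field names are assumptions).
from dataclasses import dataclass, field

@dataclass
class OperatorGuideEntry:
    tag_id: str                          # Tag the guidance applies to
    purpose: str                         # 1. Why the measurement point exists
    causes: list[str] = field(default_factory=list)        # 2. Known causes of deviation
    consequences: list[str] = field(default_factory=list)  # 3. Consequences of prolonged deviation
    actions: dict[str, list[str]] = field(default_factory=dict)  # 4. Actions per cause, in sequence
    time_to_normalize_min: float = 0.0      # 5. Minimum time to complete actions and recover
    max_time_in_deviation_min: float = 0.0  # 6. Maximum time allowed in deviation
    responsible: str = ""                   # 7. Role responsible for keeping the value in range
    owner: str = ""                         # 8. SME accountable for keeping the content current

# Example entry for the hypothetical furnace feed flow tag.
guide = OperatorGuideEntry(
    tag_id="FI-1001",
    purpose="Maintain heater duty and prevent coil overheating at low flow",
    causes=["Upstream pump trip", "Control valve malfunction"],
    consequences=["Low-low flow trip of the heater", "Coil damage on prolonged low flow"],
    actions={"Upstream pump trip": ["Start spare pump", "Confirm flow recovery", "Reset alarms"]},
    time_to_normalize_min=10.0, max_time_in_deviation_min=15.0,
    responsible="Shift panel operator", owner="Process SME",
)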

Alarm rationalization

Most of the required information and details should be generated by a cross-functional team in a working session called alarm rationalization (AR). The AR exercise is highly productive if participants include experienced panel operators, engineers from operations, process engineers or technical services engineers, control system engineers and reliability engineers.

A rigorous AR exercise proceeds through the P&IDs one by one and systematically covers all tags. This approach is effective if the AR is carried out for the first time for a grassroots plant. For a plant that has been in operation, where adequate knowledge of plant behavior exists, a system-by-system approach may be taken: e.g., in a crude distillation plant, the crude column (typically 50–60 tags) can be considered and rationalized, then the vacuum column, and so on. As a best practice, the team should yellow-line the rationalized tags on the P&ID sheet so it can be verified that no tag is missed.
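
The completeness check that yellow-lining provides on paper can also be performed digitally. The sketch below simply compares the tags extracted from a P&ID sheet (or from the DCS tag database) against the tags already rationalized; the tag names and data sources are hypothetical.

# Sketch of a rationalization coverage check (tag names and sources are hypothetical).
def unrationalized_tags(pid_tags, rationalized_tags):
    """Return tags that appear on the P&ID but have not yet been rationalized."""
    return sorted(set(pid_tags) - set(rationalized_tags))

# Example: tags taken from one P&ID sheet vs. tags covered in the AR session so far.
pid_tags = ["FI-1001", "TI-1002", "PI-1003", "LI-1004"]
rationalized = ["FI-1001", "PI-1003"]

missing = unrationalized_tags(pid_tags, rationalized)
if missing:
    print("Tags still to be rationalized:", ", ".join(missing))
else:
    print("All tags on this P&ID have been rationalized.")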

Rationalized limit sets may be updated in the DCS using the company’s change management process. Since changes to measurement points (adding/deleting tags) or limits have an impact on the process safety aspects of the plant, the DCS should be updated with rationalized data only after proper communication and training of the relevant operating staff.

Two types of risk must be considered when making the change: the risk associated with implementing the change (i.e., writing new limits over the old limits, which is prone to human error), and the risk of continued operation with the new limits, the actual “toll” on the plant and equipment that can only be determined over time. The update itself may be done in several ways, such as a system-by-system approach in which revised data are updated in stages and operators are allowed to adapt to the new limits. Another approach is more global: rationalization for the entire plant is completed and the entire data set is updated simultaneously. All approaches have their own pros and cons, and plants should manage the changes as per their own convenience and practice.
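
Whichever updating approach is chosen, simple consistency checks before the new limits are written help reduce the human-error risk noted above. The sketch below checks that the rationalized boundary levels are sensibly ordered (LL < L < H < HH) and fall within the transmitter range recorded under item g of the master list; the data format and values are assumptions.

# Sketch of pre-update sanity checks on rationalized limits (data format is assumed).
def check_limits(tag_id, levels, transmitter_range):
    """Return a list of problems found for one tag before its limits are written to the DCS."""
    problems = []
    lo, hi = transmitter_range
    # Boundary levels must be strictly ordered LL < L < H < HH (for the levels that are defined).
    order = [levels[name] for name in ("LL", "L", "H", "HH") if name in levels]
    if any(a >= b for a, b in zip(order, order[1:])):
        problems.append(f"{tag_id}: boundary levels are not in LL < L < H < HH order")
    # Every level must sit inside the transmitter range, or the alarm cannot work as intended.
    for name, value in levels.items():
        if not lo <= value <= hi:
            problems.append(f"{tag_id}: {name} limit {value} is outside transmitter range {lo}-{hi}")
    return problems

# Example usage with the hypothetical furnace feed flow tag.
issues = check_limits("FI-1001", {"LL": 50.0, "L": 60.0, "H": 98.0, "HH": 105.0}, (0.0, 120.0))
print(issues or "Limits are consistent and within range.")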

Limit fixation

FIG. 1 shows a typical consideration for fixing the standard high and low limits for a furnace feed flow. Various considerations (design rates, upstream and downstream system constraints, physical conditions of the coils, etc.) have their own minimum/maximum limits. During the rationalization exercise, all inputs are considered and the most favorable or logical high or low alarm limits are set.

FIG. 1. A typical consideration for fixing the standard high and low limits for a furnace feed flow. Various considerations have their own minimum/maximum limits.
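
One simple way to express the selection shown in FIG. 1 is to take the most restrictive candidate from each side: the standard high limit as the lowest of the maximums imposed by the various considerations, and the standard low limit as the highest of the minimums. The sketch below illustrates this with hypothetical constraint values for a furnace feed flow; in practice, the AR team applies engineering judgment on top of such a calculation.

# Sketch of limit fixation from several constraints (constraint values are hypothetical).
def fix_limits(constraints):
    """Each constraint contributes its own (min, max); the effective limits are the
    highest of the minimums and the lowest of the maximums."""
    low = max(lo for lo, _ in constraints.values())
    high = min(hi for _, hi in constraints.values())
    return low, high

constraints = {
    "design rate":           (40.0, 110.0),   # m3/hr
    "upstream constraint":   (55.0, 105.0),
    "downstream constraint": (50.0, 100.0),
    "coil condition":        (60.0, 98.0),
}

low_alarm, high_alarm = fix_limits(constraints)
print(f"Standard low alarm: {low_alarm} m3/hr, standard high alarm: {high_alarm} m3/hr")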

Ensuring the quality of the master data set

One way to ensure that the master data set has been improved is to compare the alarm key performance indicators (KPIs) before and after the change. One of the simplest alarm KPIs is the average alarm count, which can be benchmarked at approximately 6 alarms per hour per console. However, this only indicates short-term effectiveness at the operator level. To determine long-term effectiveness, close monitoring of process performance across the various product grades and modes of operation is required. An effective KPI to track this is the peak alarm count, which can be benchmarked at approximately 60 alarms per hour per console.
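
Both KPIs can be computed directly from a timestamped alarm log exported from the alarm historian. The sketch below counts alarms per console per hour and reports the average and peak hourly counts; the log format and example entries are assumptions.

# Sketch of alarm KPI computation from a timestamped alarm log (log format is assumed).
from collections import Counter
from datetime import datetime

# Each entry: (timestamp, console). A real log would be exported from the alarm historian.
alarm_log = [
    (datetime(2021, 11, 1, 8, 5), "Console-1"),
    (datetime(2021, 11, 1, 8, 40), "Console-1"),
    (datetime(2021, 11, 1, 9, 10), "Console-1"),
    (datetime(2021, 11, 1, 9, 15), "Console-2"),
]

def hourly_counts(log):
    """Count alarms per (console, hour)."""
    return Counter((console, ts.replace(minute=0, second=0, microsecond=0))
                   for ts, console in log)

def alarm_kpis(log, console):
    # Note: this averages over hours that saw at least one alarm; a full implementation
    # would divide by all hours in the review period, including quiet hours.
    counts = [n for (c, _), n in hourly_counts(log).items() if c == console]
    average = sum(counts) / len(counts) if counts else 0.0
    peak = max(counts, default=0)
    return average, peak

avg, peak = alarm_kpis(alarm_log, "Console-1")
print(f"Console-1: average {avg:.1f} alarms/hr over active hours, peak {peak} alarms/hr")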

Another way to check whether the changes have added value is to test them on an operator simulator, provided the simulator is maintained with up-to-date plant information and replicates the DCS graphics.

Of course, in the medium and long term, the continuous reduction in equipment failures, loss of containment due to internal corrosion and leaks, throughput loss due to fouling, etc., are evidence of a good operating limit set.

Operating companies may set their own leading/lagging KPIs to track the effectiveness of a good limit set.

Note: Alarm KPIs should be defined in the company’s alarm philosophy document; if this is unavailable, petrochemical and refining facilities may refer to International Society of Automation (ISA) standard ISA-18.2.

Change management

As mentioned earlier, all changes should follow the company’s change management process, which should be audited periodically, simplified and made more effective. A best practice is to carry out a complete AR of the whole plant every 5 yr or every time a major turnaround is executed.

Enforcement

Most plants maintain their master data in a standalone system and use it for reference or audits. Operating limit sets are configured directly in the DCS. When a product grade change or operating mode change happens, companies use one of two methods to change the alarm limits in the DCS: allow the operator to change the limit sets through a standard operating procedure (SOP) on the operating panel itself, or ask the control system engineer or shift supervisor to make the changes from the engineering workstation/supervisory system. In both cases, human intervention is involved. It is advisable to follow a doer/checker approach for such critical changes; however, because these changes happen frequently, a risk of laxity can creep in.

An advanced management practice to ensure that the right limit sets are used by the shift operator is to deploy digital tools and empower the operator to enforce the correct limits in the DCS according to the product grade or operating mode. The digital tool should allow the shift panel operator to verify at the beginning of each shift that all alarm setpoints in the DCS are consistent with the master limit set and that no discrepancies exist. In this case, the DCS is interfaced with an external master alarm data set, which requires stringent security architecture to prevent undesirable access to the DCS process control system from external sources.
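
A minimal sketch of such a shift-start check is shown below. It assumes that the current DCS alarm setpoints can be read into a simple dictionary (e.g., through an OPC interface) and compares them against the master limit set for the active product grade; the tag names, grades and values are hypothetical.

# Sketch of a shift-start consistency check between DCS setpoints and the master limit set.
# Tag names, grades and the data-access method are assumptions for illustration.

# Master limit set, keyed by product grade, then tag, then boundary level.
MASTER_LIMITS = {
    "Grade-A": {"FI-1001": {"H": 98.0, "L": 60.0}, "TI-1002": {"H": 385.0, "L": 340.0}},
    "Grade-B": {"FI-1001": {"H": 90.0, "L": 55.0}, "TI-1002": {"H": 370.0, "L": 330.0}},
}

def check_dcs_against_master(dcs_setpoints, grade):
    """Return a list of discrepancies between DCS alarm setpoints and the master set."""
    discrepancies = []
    for tag, levels in MASTER_LIMITS[grade].items():
        for level, master_value in levels.items():
            dcs_value = dcs_setpoints.get(tag, {}).get(level)
            if dcs_value != master_value:
                discrepancies.append(
                    f"{tag} {level}: DCS has {dcs_value}, master set for {grade} is {master_value}")
    return discrepancies

# Example: setpoints as read from the DCS at the start of the shift (hypothetical values).
dcs_now = {"FI-1001": {"H": 98.0, "L": 60.0}, "TI-1002": {"H": 370.0, "L": 340.0}}

for issue in check_dcs_against_master(dcs_now, "Grade-A") or ["No discrepancies found."]:
    print(issue)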

Both methods discussed here have their own inherent advantages and disadvantages—the operating company should determine the best fit. HP
