
November 2025

Digital Technologies

Accelerate the transition to low-carbon fuels through digitalization and co-processing

Parkland Burnaby Refinery: S. Lim  |  A. Aragon
AF Expert Consulting LLC: C. Harclerode
IOTA Software: S. Krivonosova

Parkland’s Burnaby Refinery continues to demonstrate industry leadership in low-carbon fuel production through innovative co-processing strategies, enabled by alignment across its business strategy, operational excellence management system and digital strategy. This article outlines the refinery’s journey in applying layers of advanced analytics, including artificial intelligence (AI)/machine-learning (ML), visualization and operational data governance in a hybrid cloud environment to support the scalable and reliable production of renewable fuels. 

Low-carbon fuels from co-processing. Located near Vancouver, British Columbia, Canada, the Burnaby Refinery has been in operation since 1935 and now supplies approximately one-third of the region’s fuel demand. Over the past decade, the site has focused on co-processing: blending renewable feedstocks such as tallow with conventional hydrocarbons in existing processing units to produce low-carbon fuels. This strategy allows the refinery to leverage existing infrastructure while reducing overall carbon intensity and complying with evolving regulatory frameworks such as the BC Low Carbon Fuel Standard (LCFS), as noted in literature.1 

Burnaby is recognized as the first refinery in North America to implement co-processing at an industrial scale. In 2023, it produced > 91 MM liters (l) of low-carbon fuels, and in December 2024 it produced Canada’s first batch of sustainable aviation fuel (SAF). On average, these co-processed products achieve a carbon intensity roughly one-eighth that of conventional fossil-derived fuels, contributing meaningfully to regional decarbonization objectives. 

Despite these gains, co-processing presents operational challenges. Bio-based feedstocks can increase fouling risks if vaporization conditions are not carefully managed, and quantifying biogenic carbon content in final products remains difficult. Traditional laboratory methods, such as radiocarbon (C-14) testing, are slow and costly. To overcome these limitations, the refinery has begun deploying AI and ML tools to monitor key performance indicators in real time. 

Alignment across business strategy and operational excellence. Parkland’s Burnaby Refinery realigned its operations by embedding business strategy into both its operational excellence (OE) systems and digital roadmap. Business priorities are tracked through performance indicators, operationalized through the OE management system and supported by digital tools that enable effective execution, as illustrated in FIG. 1. 

FIG. 1. Integration of business strategy into OE strategy and digital strategy.  

Like many refineries navigating ownership changes and legacy system fragmentation, Burnaby undertook a strategic reset to position itself for sustained performance. This transformation was guided by the principle that business strategy, not technology, must define how success is measured and achieved. 

Key performance indicators (KPIs) were developed to reflect strategic goals and were integrated into the site’s operational excellence management system (OEMS). This ensured alignment between business priorities and frontline execution, created ownership of outcomes, reinforced operational discipline and enabled continuous improvement initiatives. 

Digital technologies were introduced to support execution, not as a starting point but as an enabler. This reflects a broader industry trend: digital transformation is most effective when it aligns people, processes and systems with strategic goals and evolves in step with changing business needs. 

Integrated, bi-directional layers of analytics and visualization. To enhance operational safety, reliability and profitability, the refinery implemented a five-layer analytics framework. This maturity model begins with strong data contextualization and builds up to advanced analytics and ML capabilities, as described in FIG. 2. 

FIG. 2. Integrated layers of analytics. 

  • Level 1: Data contextualization. Refineries produce millions of sensor data points daily. Without structure, this information becomes noise. Burnaby addressed this by organizing data around physical assets (e.g., tanks, controllers, heat exchangers) using standardized proprietary asset framework (AF)a templates. These templates encode logic and naming conventions, supporting asset-based analytics and industrial master data governance. 
  • Level 2: Descriptive analytics. This layer addresses the question, “What happened?” By combining structured process data with metadata from external systems (e.g., lab results), the refinery calculated standard KPIs such as energy efficiency, equipment health and throughput. These metrics provided a consistent baseline for performance analysis across time and assets. 
  • Level 3: Diagnostic analytics. This layer asks, “Why did it happen?” Event frames in PI AFa are used to detect constraint violations and notify relevant teams in real time. Incidents that were previously buried in raw tag data are now recorded in structured audit trails. Over time, patterns emerge that help engineers identify recurring failures and streamline root cause analysis (RCA). 
  • Level 4: Predictive analytics. Predictive analytics provide foresight into future risks using rule-based logic, statistical trends and first-principles correlations. For example, exchanger fouling can be forecasted through physical models embedded in PI AFa expressions. These models help shift decision-making from reactive to proactive, enabling earlier, more confident interventions. 
  • Level 5: Advanced analytics. At this level, AI/ML models are developed outside the historian but integrated bidirectionally with PI AFa. Contextualized time-series data is used to train models, and results such as condition scores or optimization recommendations are written back into AF tags. These outputs are surfaced through dashboards and other decision-support platforms, ensuring insights lead to operational action. A minimal sketch of how these layers compose is shown after this list. 
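To make the maturity model concrete, the sketch below shows, in Python, how a Level 1-style asset template, a Level 3-style event check and a Level 4-style predictive rule might compose for a single heat exchanger. It is an illustration only: the class, attribute values and limits are hypothetical stand-ins for what PI AFa templates and expressions do natively.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-in for a Level 1 asset template; real PI AF templates
# encode much richer metadata, naming logic and tag references.
@dataclass
class ExchangerAsset:
    name: str
    design_ua: float   # clean-condition UA, MW/degC (assumed value)
    lmtd: float        # log-mean temperature difference, degC (assumed constant)

    def fouling_factor(self, measured_duty_mw: float) -> float:
        """Level 4-style rule: ratio of actual to clean UA (1.0 = clean)."""
        actual_ua = measured_duty_mw / self.lmtd
        return actual_ua / self.design_ua

# Level 3-style diagnostic: flag a constraint violation as a structured event.
def check_fouling(asset: ExchangerAsset, measured_duty_mw: float,
                  limit: float = 0.85) -> Optional[str]:
    ff = asset.fouling_factor(measured_duty_mw)
    if ff < limit:
        return f"{asset.name}: fouling factor {ff:.2f} below limit {limit}"
    return None

if __name__ == "__main__":
    e101 = ExchangerAsset("E-101", design_ua=0.50, lmtd=40.0)
    print(check_fouling(e101, measured_duty_mw=16.0) or "no event")
```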

Visualization strategy: Turning insights into action. Analytics must be actionable to create business value. Burnaby’s three-layer visualization strategy, as shown in FIG. 3, enables role-based access to insights across operations, engineering and leadership, presented through modern tools such as the co-author’s company’s visualization platformb, a proprietary analytics platformc and an interactive data visualization softwared. An important evolution of this approach is the ability to connect all three layers of visualization within a single platform (i.e., the co-author’s company’s visualization platformb), enabling seamless data flow and future collaboration across teams. 

FIG. 3. Layers of visualization strategy. 

Operational layer. This layer delivers real-time visibility into KPIs such as throughput, emissions and energy use, enabling frontline operators to monitor plant conditions and take informed action. Seamless integration with the analytics platformc allows ad hoc trending and short-interval analysis to support timely troubleshooting. 

Analytical layer. Engineers and technical specialists use this layer for deeper analysis. Complex data interrogation using condition-based logic in the analytics platformc or visualization platforms like Spotfire enables optimization studies and root cause investigations across unit operations. 

Strategic layer. This layer links operational performance to enterprise priorities. Curated dashboards in the interactive data visualization softwared present high-level KPIs, baselines and gaps, allowing leadership to track progress, escalate issues and prioritize strategic actions across the organization. 

Reducing operational barriers with self-service analytics and agile development. A major improvement at the Burnaby Refinery is the introduction of real-time decision support tools built on the foundational PI AFa layers. Engineers and subject matter experts (SMEs) are empowered with self-service capabilities to develop their own analytics and views in the co-author’s company’s visualization platformb and the proprietary analytics platformc without relying on traditional IT intermediaries. However, with poor governance, self-service can become chaotic. Governance in the form of security controls, isolated development environments and quality reviews by SMEs ensures that solutions are organized and validated. A core design principle is that domain knowledge must guide analytics. Technology alone does not create performance gains; it enables a digitally literate workforce at the refinery to obtain results faster and make more confident decisions. Several case studies are described below to illustrate this approach. 

Pressure safety valve (PSV) overpressure monitoring. PSVs serve a critical safety function by protecting process equipment from overpressure events. However, many legacy PSVs at Parkland’s Burnaby Refinery lack direct instrumentation to confirm whether they have lifted, creating a long-standing visibility gap in pressure relief monitoring. 

Historically, PSV lifts were only investigated if a field operator heard a pop and reported it to engineering. In practice, this rarely occurred, particularly for valves located in elevated or remote locations where ambient noise and accessibility made detection unreliable. As a result, most relief events went undocumented, leading to incomplete audit trails, missed opportunities for RCA, and potential noncompliance with inspection programs. 

To close this gap without costly infrastructure upgrades, the process engineering team developed a scalable, analytics-based monitoring solution. By modeling each PSV’s set pressure and linking it to the nearest available pressure transmitter, engineers were able to infer potential relief events in real time. When upstream pressure exceeded a defined threshold, the system triggered a notification to flag a suspected lift. 

Initially, the threshold was configured at 90% of the PSV’s set pressure, balancing detection sensitivity with false positive risk. However, several PSVs operated close to this threshold under normal conditions, resulting in nuisance alarms. The team refined the configuration by adjusting limits on a case-by-case basis, informed by operating history and process conditions, ensuring that only meaningful deviations generated alerts. 
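To illustrate the detection logic, the sketch below applies a per-valve threshold to a pressure time series and keeps only sustained excursions, mirroring the case-by-case tuning that suppressed nuisance alarms. Valve setpoints, tag names and the minimum duration are invented for illustration; the production solution runs as PI AFa expressions, not Python.

```python
import pandas as pd

# Illustrative per-valve configuration; setpoints and names are hypothetical.
PSV_CONFIG = {
    "PSV-101": {"set_pressure_kpag": 1200.0, "threshold_frac": 0.90},
    "PSV-205": {"set_pressure_kpag": 850.0, "threshold_frac": 0.95},
}

def suspected_lifts(pressure: pd.Series, set_pressure: float,
                    threshold_frac: float,
                    min_duration: str = "2min") -> pd.DataFrame:
    """Return start/end/peak for periods where pressure stays above threshold.

    Requiring a minimum duration filters momentary spikes that would
    otherwise generate nuisance notifications.
    """
    above = pressure >= threshold_frac * set_pressure
    segments = (above != above.shift()).cumsum()  # label contiguous runs
    events = []
    for _, seg in pressure.groupby(segments):
        duration = seg.index[-1] - seg.index[0]
        if above.loc[seg.index[0]] and duration >= pd.Timedelta(min_duration):
            events.append({"start": seg.index[0], "end": seg.index[-1],
                           "peak_kpag": seg.max()})
    return pd.DataFrame(events)

if __name__ == "__main__":
    idx = pd.date_range("2024-06-01 00:00", periods=60, freq="1min")
    pv = pd.Series(1000.0, index=idx)
    pv.iloc[20:25] = 1150.0  # sustained excursion above 90% of 1,200 kPag
    cfg = PSV_CONFIG["PSV-101"]
    print(suspected_lifts(pv, cfg["set_pressure_kpag"], cfg["threshold_frac"]))
```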

A standardized AF template was developed to capture key metadata (PSV setpoints, pressure tag references and relief destinations), while PI AFa expressions continuously monitored deviations and generated structured event frames. When a potential lift was detected, an automated email notification was sent to the assigned process engineer. 

The notification included all the critical information needed for efficient triage, including: 

  • A hyperlink to the live pressure trend for the associated transmitter 
  • A direct link to the relevant piping and instrumentation diagram (P&ID) 
  • A reference to the PSV datasheet for specification validation. 

This structured format enabled engineers to quickly validate or dismiss events without manually hunting for supporting documentation, streamlining investigations and improving responsiveness, as illustrated in FIG. 4. 

FIG. 4. PSV overpressure monitoring workflow. 

In 2024 alone, the PSV overpressure monitoring tool flagged 102 potential lift events across 30 individual PSVs, providing a level of visibility that was previously unattainable. This grassroots solution now covers approximately one-third of the refinery’s PSVs. It has significantly improved the site’s ability to detect and respond to overpressure events, while eliminating reliance on field-reported data for those assets. 

More importantly, this initiative highlights the power of self-service analytics and cross-functional innovation. By leveraging existing instrumentation and low-code analytics tools, PI AF-trained process engineers and analytics platformc champions at the Burnaby Refinery can build robust, scalable monitoring solutions that enhance process safety, operational transparency and engineering autonomy, without requiring capital investment. 

Advanced analytics in co-processing. Co-processing renewable feedstocks in the fluid catalytic cracking (FCC) unit introduces unique technical challenges. One of the most critical is feed vaporization. Blending renewable components such as tallow or used cooking oil into the conventional FCC feed tends to reduce the bulk temperature of the blended feed. Since the FCC riser relies on near-instantaneous vaporization of feed droplets upon injection, this temperature reduction increases the likelihood of incomplete vaporization. 

When FCC feed droplets are only partially vaporized, unvaporized residues can accumulate in the riser over time. These residues provide surface area that promotes the formation and growth of foulant layers, typically composed of coke and heavy hydrocarbons. As fouling builds up, it increases the pressure drop across the FCC riser, eventually limiting reactor throughput and impacting unit reliability, economics and product yield. FIG. 5 shows radiometric scans of the riser cross-sectional area before and after fouling began to accumulate. 

FIG. 5. Radiometric scanning of FCC riser cross section showing gradual fouling. 

Direct measurement of the vaporization percentage is infeasible in real time, so the Burnaby Refinery developed first-principles soft-sensor models to estimate the percentage of feed vaporized. The refinery partnered with Coanda, a Canadian engineering research firm, to develop advanced analytics models tailored to Burnaby’s FCC configuration. These models incorporate unit-specific parameters such as feed temperature, composition and injection conditions, and account for site-specific factors such as feedstock variability and riser geometry, enabling more accurate prediction of fouling behavior and supporting better-informed operational decisions. 

By modeling vaporization percentage, the operations group can monitor this value continuously and maintain it above a defined minimum threshold. When the estimated vaporization approaches the limit, operators can take corrective action using operational handles (such as adjusting feed preheat, steam rates or injection strategies) to avoid conditions that promote fouling. 
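The Coanda models themselves are proprietary, but the fragment below illustrates the soft-sensor idea with a deliberately simplified mix-zone energy balance. Every property value and temperature here is an assumption chosen only to show the direction of the effect, not refinery data.

```python
# Deliberately simplified flash-fraction estimate. All parameter values are
# assumptions for illustration; the production soft sensor accounts for
# droplet physics, riser geometry, feed composition and more.
def estimated_vaporized_fraction(feed_temp_c: float,
                                 catalyst_temp_c: float,
                                 cat_to_oil: float,
                                 cp_cat: float = 1.1,         # kJ/kg-degC, assumed
                                 cp_oil: float = 2.4,         # kJ/kg-degC, assumed
                                 heat_of_vap: float = 500.0,  # kJ/kg, assumed
                                 mix_target_c: float = 520.0) -> float:
    """Energy balance at the riser mix point, per kg of feed.

    Hot catalyst cools to the mix-zone target temperature; the released heat
    first raises the feed to that temperature, and whatever remains vaporizes
    feed. The result is clipped to [0, 1].
    """
    heat_from_cat = cat_to_oil * cp_cat * (catalyst_temp_c - mix_target_c)
    sensible = cp_oil * (mix_target_c - feed_temp_c)
    latent_available = heat_from_cat - sensible
    return max(0.0, min(1.0, latent_available / heat_of_vap))

# A cooler blended feed consumes more heat as sensible duty, leaving less
# for vaporization, so the estimate drops.
print(estimated_vaporized_fraction(feed_temp_c=210.0, catalyst_temp_c=700.0, cat_to_oil=6.0))
print(estimated_vaporized_fraction(feed_temp_c=180.0, catalyst_temp_c=700.0, cat_to_oil=6.0))
```

Even in this toy form, the estimate moves in the expected direction: the cooler blended feed in the second call yields a lower vaporized fraction.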

Another co-processing challenge is the tracking of biogenic carbon content for regulatory reporting and internal yield accounting. With no commercial C-14 radiocarbon dating facilities in Canada, samples must be sent to U.S.-based labs, creating logistical delays and increased cost. To address this, the refinery collaborated with the University of British Columbia, where then-PhD students Dr. Liang Cao (now a postdoctoral researcher at MIT) and Dr. Jianping Su (now faculty at CUP-Beijing) developed ML models to estimate biogenic carbon content in real time. These models were trained on historical C-14 data, operating parameters and feed composition, providing a predictive tool that reduces dependence on lab testing. 
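A minimal sketch of that modeling approach is shown below, using synthetic data and scikit-learn in place of the published UBC models; the feature set and the data-generating assumption are illustrative only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: in practice, the models were trained on historical
# C-14 lab results, operating parameters and feed composition.
rng = np.random.default_rng(42)
n = 500
renewable_frac = rng.uniform(0.0, 0.15, n)  # renewable share of feed (assumed range)
feed_rate = rng.uniform(300.0, 400.0, n)    # m3/h, illustrative
riser_temp = rng.uniform(510.0, 540.0, n)   # degC, illustrative
X = np.column_stack([renewable_frac, feed_rate, riser_temp])

# Assume biogenic carbon (%) roughly tracks the renewable fraction, with noise
# standing in for measurement scatter in the C-14 lab results.
y = 90.0 * renewable_frac + rng.normal(0.0, 0.3, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
print(f"MAE vs. held-out 'lab' values: {mean_absolute_error(y_test, pred):.2f}%")
```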

The collaboration produced several peer-reviewed publications and stands as a strong example of industrial-academic innovation. These initiatives illustrate how integrated digital tools, grounded in engineering fundamentals and enabled by cross-functional collaboration, can help refiners address the complex operational realities of renewable fuel integration. 

Data-centric AI (DCAI) and advanced process control (APC) monitoring. APC continuously optimizes a refinery to meet high-level business objectives (e.g., by minimizing reboiler steam while maximizing renewable feed rates, subject to plant constraints). APC technology can be challenging to sustain, and performance data is often siloed with restricted visibility due to cybersecurity policies. To address these issues, the Burnaby site leveraged asset-centric analytics by transforming key controller metadata and tag references into a re-usable AF template. 

This scalable, foundational layer is then applied to all controllers, integrating other forms of critical controller data like time series trends and steady-state gain matrices to create a unified “single pane of glass” for APC performance monitoring in the co-author’s company’s visualization platformb. 
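A hypothetical Python analogue of such a template is sketched below. In practice this structure lives in PI AFa rather than code, and the tags and gains shown are invented for illustration.

```python
from dataclasses import dataclass

import pandas as pd

# Hypothetical analogue of the re-usable controller template; attribute
# names, tags and gain values are illustrative only.
@dataclass
class APCControllerTemplate:
    name: str
    mv_tags: list[str]           # manipulated variables
    cv_tags: list[str]           # controlled variables
    gain_matrix: pd.DataFrame    # steady-state gains, CVs (rows) x MVs (columns)
    service_factor_tag: str      # tag tracking % of time the controller is on

ctrl = APCControllerTemplate(
    name="FCC-APC",
    mv_tags=["FEED.SP", "PREHEAT.SP"],
    cv_tags=["RISER.T", "RISER.DP"],
    gain_matrix=pd.DataFrame(
        [[0.8, -0.3], [0.1, 0.5]],
        index=["RISER.T", "RISER.DP"], columns=["FEED.SP", "PREHEAT.SP"]),
    service_factor_tag="FCCAPC.ONPCT",
)
print(ctrl.gain_matrix)
```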

The refinery then shifted its focus from lower-value data plumbing to higher-value analytics. Significant efforts were dedicated to research and development (R&D) for analyzing and interpreting APC controller behavior, in collaboration with UBC and Control Consulting Inc. The resulting work was published in literature.2 

By combining control engineering expertise with data visualization tools like an open-source visualization librarye, the team built novel interactive tools that let users sort, filter and manage large controller datasets to quickly understand relationships between process variables, making it easier to troubleshoot controllers than with legacy approaches based on static spreadsheets. 
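The snippet below mimics, in pandas, the kind of sort-and-filter interaction those tools provide: flattening a steady-state gain matrix and surfacing the strongest MV-CV relationships first. The matrix values and tag names are invented.

```python
import pandas as pd

# Illustrative steady-state gain matrix: controlled variables as rows,
# manipulated variables as columns. Values and tags are invented.
gains = pd.DataFrame(
    [[0.8, -0.3, 0.0],
     [0.1, 0.5, -0.7]],
    index=["RISER.T", "RISER.DP"],
    columns=["FEED.SP", "PREHEAT.SP", "STEAM.SP"])

# Flatten to (cv, mv, gain) rows, drop weak pairings and sort by magnitude,
# so the dominant relationships surface first.
pairs = gains.stack().rename_axis(["cv", "mv"]).reset_index(name="gain")
strong = (pairs[pairs["gain"].abs() >= 0.5]
          .sort_values("gain", key=lambda s: s.abs(), ascending=False))
print(strong)
```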

Critically, digitalization initiatives yield limited returns if the underlying data is bad. At the Burnaby Refinery, the guiding principle around analytics has been consistent: leverage engineering domain knowledge and good data as the starting point. More data is not always good data. Industrial datasets are often described as “data-rich but information-poor”: because plants are designed to operate at steady state, historians capture long stretches of data that carry little new information. Practical experience has shown that meaningful process analytics begins not with models, but with a clear framing of the business problem, a collaborative approach to data cleaning and a shared understanding of the intended application. 

This is where DCAI provides a compelling shift in mindset, bringing the focus in analytics back to a strong foundation of clean, contextualized data. In an article presented at an international congress,3 the Burnaby Refinery led a cross-organizational team of academic and industry practitioners to develop a practical playbook for process analytics, highlighting several common industrial data pitfalls that are not often documented in literature, ranging from inappropriate data retrieval settings to overemphasis on marginal model metrics. 
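One of those pitfalls, inappropriate data retrieval settings, is easy to demonstrate: the same recorded historian points yield different “data” depending on how the gaps between them are filled. A small illustration with invented timestamps and values:

```python
import pandas as pd

# Three recorded historian values; everything between them is unmeasured.
raw = pd.Series(
    [100.0, 180.0, 100.0],
    index=pd.to_datetime(["2024-01-01 00:00", "2024-01-01 01:30",
                          "2024-01-01 02:00"]))

linear = raw.resample("15min").interpolate()  # draws ramps the plant never ran
stepped = raw.resample("15min").ffill()       # holds the last recorded value

# The retrieval choice alone changes downstream statistics such as averages.
print(pd.DataFrame({"linear": linear, "stepped": stepped}))
print(f"mean, linear: {linear.mean():.1f}  |  mean, stepped: {stepped.mean():.1f}")
```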

Takeaways and future plans. Digital decarbonization requires more than deploying new digital technologies; it requires an evolved culture and mindset of continuous improvement from refinery personnel, alongside strong leadership commitment and a bold strategic vision for a digital future. Modernizing legacy systems, building a coherent asset-centric data foundation with strong data quality, and weaving key digital capabilities into operations represent the beginning of an evolving digital roadmap that is guided by business strategy and operational priorities. Along this path, five guiding principles emerge: 

  1. Align business strategies, operational excellence and digital strategy. 
  2. Embrace novel solutions without succumbing to analysis paralysis. Rapid, 80/20-style experimentation reveals high-impact opportunities and accelerates learning cycles. 
  3. Forge deep, collaborative external partnerships. Tight-knit development loops across customers and technology providers ensure solutions remain grounded in real-world needs. 
  4. While IT-OT collaboration is important, empower OT teams to lead OT-centric projects. Solutions are better when the refinery frontlines and the people closest to the process drive innovation with IT enablement. 
  5. Invest in change management from day one. Technology alone cannot bridge the gap between possibility and performance. Transforming culture, mindsets and behaviors is the catalyst for lasting impact. 

ACKNOWLEDGEMENT 

As of November 1, 2025, Sunoco LP acquired Parkland Corp. (and the Burnaby Refinery). 

NOTES 

a AVEVA’s PI AF 

b IOTA VUE 

c Seeq 

d Power BI 

e D3.js 

LITERATURE CITED 

1 Brandt, S., D. Holder and G. Lee, “Maximizing renewable feed co-processing at an FCC,” Digital Refining, July 2023.

2 Elnawawi, S., L. C. Siang, D. L. O’Connor and R. B. Gopaluni, “Interactive visualization for diagnosis of industrial model predictive controllers with steady-state optimizers,” Control Engineering Practice, Vol. 121, April 2022.

3 Siang, L. C., S. Elnawawi, L. D. Rippon, D. L. O’Connor and R. B. Gopaluni, “Data quality over quantity: Pitfalls and guidelines for process analytics,” 2023 IFAC World Congress (International Federation of Automatic Control), Vol. 56, Iss. 2, 2023.
