December 2020

Trends and Resources

Business Trends: Why we value engineering data


In the 21st century, one cannot escape the concept of data. Data is everywhere, and it is an integral part of people’s professional and personal lives. Within the business markets in which we operate, there are plenty of intriguing and fascinating terms—such as Industry 4.0, Construction 4.0, digital twin, artificial intelligence, big data, integrated project delivery (IPD), building information modeling (BIM), common data environment (CDE), virtual design and construction (VDC), Internet of Things (IoT), Internet of Everything (IoE), radio frequency identification (RFID), augmented and virtual reality (AR/VR), point cloud and cloud storage. Fundamentally, all data management serves the same purpose: storing information and then sharing or publishing it to the intended recipients.

However, for successful data management, there are a few important questions that must be answered, including:

  • What data is required?
  • What are the true engineering authoring tools?
  • How can data be managed and used effectively?
  • How can engineers achieve consistent data?

Looking back to ancient Egypt and the pyramids, modern humanity does not fully understand how the Egyptians built them, or how they used and recorded the data that has enabled the pyramids to stand for so many centuries. That precise data and knowledge are lost to the modern age beneath the shifting sands of the passing millennia.

The previous generation of engineers used to say, “Engineering data ended two places after the decimal point; the rest was non-significant noise.” Today, engineers recognize that data is a valuable commodity, with an intrinsic value similar to that of steel, concrete, cables, pipes and equipment. Engineers have historically recognized the value of data; however, in years past, the practice of formatting and storing data on paper meant that retrieving it later could be a challenge. Backup tapes and drives were stored offsite, and retrieving information from a previous project was typically not worth the time and effort. Consequently, the data would be recreated again and again. This rework increased project costs and created a data glass ceiling that engineers could not break through to make significant improvements. The more recent breakthrough of having centralized, retrievable and standardized data is just what the industry needed.

The authors’ company values engineering data and innovation throughout the lifecycle of a project. Digital transformation enables the company to integrate and organize data across the lifecycle of the facility, which creates data harmony. The company’s data-centric execution strategy is based on three pillars: data collection, data quality and data management.

Data is consumed by several entities on a project. Some of the data consumers are internal recipients, including personnel in engineering disciplines and construction. The other major consumer is the end user (i.e., the client). It is imperative to engage and work with the client to ensure that the data developed by the project team aligns with the client’s data requirements. To this end, a company should place great value on matching the client’s expectations by engaging subject matter experts (SMEs) with data management knowledge and experience to ensure that this alignment is achieved.

What data is required?

From a current industry perspective, visible progress has been made since the National Institute of Standards and Technology (NIST) published Cost Analysis of Inadequate Interoperability in the U.S. Capital Facilities Industry in August 2004. Two concepts followed: BIM (2002) and IPD (2007), which focused on the integration of data and on organizing a CDE across all lifecycle phases of a facility. In 2019, the Capital Facilities Information Handover Specification v1.4 standardized information handover requirements between end-user clients and engineering services companies.

For example, specific data is required to engineer and construct a facility, and other data is required to maintain and operate the facility after it is built. Some of this data is utilized in multiple project lifecycle phases, while other data is specific to a single phase. It is important for both the creator and the consumer to know the purpose of each piece of data. If specific data is assigned to the wrong purpose, it can have a significant impact on a phase of the facility’s lifecycle. A company’s data workflow should clearly define which pieces of data must be produced and who will consume them. This understanding is fundamental to how a company should manage data on the projects it executes.
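As a simple illustration of such a workflow, a data requirements register can be sketched in a few lines of Python. The attribute names, lifecycle phases and consumers shown here are assumptions made for illustration only, not the authors’ actual register:

    # Illustrative sketch of a minimal data requirements register.
    # Attribute names, lifecycle phases and consumers are assumptions;
    # a real register would come from the project's handover specification.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DataRequirement:
        attribute: str      # the piece of data to be produced
        purpose: str        # why it is needed
        phases: tuple       # lifecycle phases that consume it
        consumer: str       # who receives it

    REGISTER = [
        DataRequirement("design_pressure", "pipe wall thickness calculation",
                        ("engineering", "construction"), "piping discipline"),
        DataRequirement("equipment_tag", "asset identification",
                        ("engineering", "construction", "operations"), "client"),
        DataRequirement("hydrotest_date", "commissioning record",
                        ("construction", "operations"), "client"),
    ]

    def required_for(phase: str):
        """Return the attributes a given lifecycle phase must receive."""
        return [r.attribute for r in REGISTER if phase in r.phases]

    print(required_for("operations"))   # ['equipment_tag', 'hydrotest_date']

Even a register this small makes explicit which consumer and which phase each piece of data serves, which is the understanding the workflow must capture.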

Various data formats—such as the ISO 10303 Standard for the Exchange of Product model data (STEP), the Industry Foundation Classes (IFC) data format and the CIMSteel Integration Standards (CIS/2) data exchange format—are commonly used in the industry. There is no easy way to build a single, universal data format for all software developers and the industry to utilize. BIM maturity levels describe shared collaboration based on the data that is generated and exchanged, as well as on the environment (federated model, integrated database). Instead of focusing on which data schema or format to exchange, the key is to understand what data—down to the lowest level of detail—is needed. Once that is established, personnel can map between the various data exchange methods.
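To make the mapping idea concrete, the following sketch translates one internally defined attribute into the property names used by different exchange formats. The format-specific names are placeholders, not actual STEP, IFC or CIS/2 identifiers:

    # Sketch of mapping one internally defined attribute to the property
    # names used by different exchange formats. The format-specific names
    # below are placeholders, not real STEP/IFC/CIS2 identifiers.
    ATTRIBUTE_MAP = {
        "design_pressure": {
            "internal": "design_pressure",        # single, agreed definition
            "ifc":      "DesignPressure",         # placeholder property name
            "step":     "design_pressure_spec",   # placeholder
            "cis2":     "DESIGN_PRESSURE",        # placeholder
        },
    }

    def translate(attribute: str, target_format: str) -> str:
        """Look up the target format's name for an internally defined attribute."""
        return ATTRIBUTE_MAP[attribute][target_format]

    print(translate("design_pressure", "ifc"))   # DesignPressure

The point is that the internal definition, not any one exchange schema, is the stable reference from which each format is mapped.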

In lean construction terms, collecting more data than is needed could be called “information overload.” The UK BIM standard PAS 1192 required “eliminating all non-value-adding activities, using an efficient workflow.” The idea of “valuing the data” has been continued and developed in the ISO 19650 series, “Organization and digitization of information about buildings and civil engineering works, including BIM.”

What are the true engineering authoring tools?

Data is used by multiple groups in many areas, but it is not created in all of them. Sometimes, a portion of the data from one group is passed to others so they can complete their work. The true authoring tool is the tool in which the information is maintained. The key to each authoring tool is the single central source (i.e., where the data is maintained and from which it is distributed), commonly known as “the single source of truth.”

Well-maintained, distributed and consistent data—sourced from a single central repository from the beginning of the project and then made available to the construction team—allows for the full implementation of advanced work packaging (AWP) based on construction work packages (CWPs) and, in the next step, the most effective workface planning (WFP) with well-defined installation work packages (IWPs).
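A minimal sketch of this idea, assuming simplified grouping rules (area and discipline for CWPs, crew-sized batches for IWPs), might look as follows; real AWP and WFP rules are considerably richer:

    # Deriving CWPs and IWPs from items pulled out of a single central
    # data source. Grouping keys and batch size are simplifying assumptions.
    from collections import defaultdict
    from itertools import islice

    items = [
        {"tag": "P-101-A", "area": "A01", "discipline": "piping"},
        {"tag": "P-101-B", "area": "A01", "discipline": "piping"},
        {"tag": "E-201",   "area": "A01", "discipline": "electrical"},
        {"tag": "P-305",   "area": "B02", "discipline": "piping"},
    ]

    # A CWP groups scope by construction area and discipline.
    cwps = defaultdict(list)
    for item in items:
        cwps[(item["area"], item["discipline"])].append(item["tag"])

    # An IWP is a crew-sized slice of a CWP (here: at most 2 tags per IWP).
    def iwps(tags, size=2):
        it = iter(tags)
        while batch := list(islice(it, size)):
            yield batch

    for (area, discipline), tags in cwps.items():
        for n, batch in enumerate(iwps(tags), start=1):
            print(f"CWP {area}-{discipline} / IWP-{n}: {batch}")

Because every package is derived from the same source, a change to a tag in the repository flows directly into the affected CWPs and IWPs.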

How can data be managed and used effectively?

It helps to reflect on how data has improved daily activities to see what will work for certain areas. In the past, households were provided paper phone books yearly. The phone numbers were sorted alphabetically by family name. This was a very simple way to look up phone numbers. However, this compilation of data had its limitations, such as requirements for regional phone directories and annual updates. Compiling changes to existing phone numbers and adding new phone numbers required other methods of data collection.

Today, society has the Internet and smartphones, which digitally provide a list of the latest phone numbers. Smartphones also collect recent incoming and outgoing phone numbers, along with caller display information. Synchronization of contacts between family members and friends happens automatically.

If one looks at these fundamental data management concepts, there are important aspects that experts should work toward and leverage. First, there must always be a central repository of master data under revision control. Second, synchronization of the data needs to be fully automatic. Lastly, nowhere in the data replication chain can there be disconnected data that can be lost.
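These three concepts can be illustrated with a toy, in-memory model (not a product recommendation): a single repository of master data under revision control that pushes every change to its subscribed consumers automatically, leaving no disconnected copies:

    # Toy illustration of the three concepts above: one repository of
    # master data under revision control, automatic synchronization to
    # consumers, and no disconnected copies that can drift or be lost.
    class MasterRepository:
        def __init__(self):
            self._revisions = {}     # key -> list of (revision, value)
            self._subscribers = []   # consumers kept in sync automatically

        def subscribe(self, callback):
            self._subscribers.append(callback)

        def update(self, key, value):
            history = self._revisions.setdefault(key, [])
            revision = len(history) + 1
            history.append((revision, value))        # revision control
            for notify in self._subscribers:         # automatic sync, no manual copies
                notify(key, revision, value)

    repo = MasterRepository()
    repo.subscribe(lambda k, rev, v: print(f"piping model received {k} rev {rev}: {v}"))
    repo.subscribe(lambda k, rev, v: print(f"construction team received {k} rev {rev}: {v}"))
    repo.update("P-101-A.design_pressure", "25 bar")
    repo.update("P-101-A.design_pressure", "28 bar")

Every consumer always sees the latest revision, and the history remains available in one place rather than scattered across disconnected files.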

If one looks at the cost of data management, older measurements can be used. For some types of drawings, engineering and design hours could total 100 hr, and at typical billing rates a single drawing could represent $10,000 of effort. Data can be measured in the same way; however, data in the right format can be worth much more if it is entered once and then captured and reused by one or many consumers.
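A back-of-the-envelope calculation, using the rough figures above and assumed values for the number of consumers and the reusable share of the content, illustrates the point:

    # Illustrative arithmetic only: the $10,000-per-drawing figure comes
    # from the estimate above; the consumer count and reuse fraction are
    # assumptions made for the example.
    cost_per_drawing = 10_000   # USD, from the estimate above
    consumers = 4               # disciplines/phases that need the same data (assumed)
    reuse_fraction = 0.8        # share they could reuse instead of recreating (assumed)

    cost_if_recreated = cost_per_drawing * consumers
    cost_if_reused = cost_per_drawing + cost_per_drawing * consumers * (1 - reuse_fraction)

    print(f"Recreated by every consumer: ${cost_if_recreated:,.0f}")   # $40,000
    print(f"Entered once and reused:     ${cost_if_reused:,.0f}")      # $18,000

Under these assumptions, entering the data once and reusing it cuts the total effort by more than half.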

This data creation must be tightly controlled, yet still robust enough for project personnel to do their work. This requires a significant understanding of engineering work processes and of who needs what data. A limitation of standardized data is that some software vendors offer only a very limited data catalog or keep internal control of code lists and attribute assignments. It is key to select a software vendor with an open data structure and the ability to link data to external sources. Proprietary or encrypted vendor data structures will limit the freedom to develop workflows that meet business needs.

Data work processes that require hundreds of individual files or models are inefficient and costly. Any duplicated or manual handling of data invites human error and could require additional back-checking. The result of this disconnected data workflow is inconsistent data, which is harder to capture and store for future use. Effective data management does not always appear impressive from the outside: it takes fewer people, clear instructions and rigid standardization control. Avoid quick solutions and impressive sales demonstrations; focus on how the data is created and managed. A phone book with a fancy cover is still just a phone book.

In the age of digitalization, personnel use data in new ways, such as multi-classification, taxonomy and relational references. Consistent data is not limited to 3D information; it can be used in additional dimensions such as 4D (schedule), 5D (cost), 6D (sustainability) and 7D (facility management), among others. Classification is a kind of multi-parameter code. The code, backed by a data dictionary and reference library, can be used to manage specific data, and parts of the code can be combined to build a new classification required at a specific level of the project. With multi-parameter codes, one can use any required data whenever it is needed. Personnel can analyze data at any stage of the process and can transfer it between processes or project phases through to facility management. The data enables personnel to validate the design through real-time visualization and simulation, including AR/VR, analytical data management and the tracking/checking of data. The collection, monitoring and analysis of data make up the essence of VDC. This builds a paperless data classification method based on standards, parameters and current requirements.
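A multi-parameter classification code can be sketched as follows. The parameter names and codes are illustrative, not drawn from any published classification standard:

    # Sketch of a multi-parameter classification code: each object carries
    # several classification parameters, and any subset of them can be
    # combined to build the view a given phase or "dimension" needs.
    # Parameter names and codes are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class ClassifiedObject:
        tag: str
        discipline: str             # e.g., piping, electrical
        system: str                 # process system code
        area: str                   # construction/work area
        schedule_activity: str = "" # 4D link
        cost_code: str = ""         # 5D link

    objects = [
        ClassifiedObject("P-101-A", "piping", "SYS-010", "A01", "ACT-1200", "CC-310"),
        ClassifiedObject("E-201", "electrical", "SYS-020", "A01", "ACT-1300", "CC-420"),
    ]

    def classify(objs, **criteria):
        """Filter objects on any combination of classification parameters."""
        return [o.tag for o in objs
                if all(getattr(o, k) == v for k, v in criteria.items())]

    print(classify(objects, area="A01", discipline="piping"))   # ['P-101-A']
    print(classify(objects, cost_code="CC-420"))                # ['E-201']

The same objects can thus be sliced by area for construction, by schedule activity for 4D or by cost code for 5D, without restructuring the underlying data.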

How can engineers achieve consistent data?

The authors’ company has done significant groundwork building workflows to standardize engineering data. Data is entered the same way on every project. This consistency means that the captured data can be consolidated and reused to maintain quality and drive consistent improvement.

The secondary benefit of collating data in a consistent manner is that a company can analyze its historical data to forecast how current projects may perform against the metrics and trends generated from that history.
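As a simple illustration, the following sketch benchmarks a current project against a metric derived from consistently captured historical data; the projects, hours and counts are invented solely to show the mechanism:

    # Benchmarking a current project against a historical metric.
    # All figures are invented for illustration.
    historical = [
        {"project": "A", "deliverables": 420, "hours": 39_000},
        {"project": "B", "deliverables": 310, "hours": 30_500},
        {"project": "C", "deliverables": 505, "hours": 47_000},
    ]

    # Historical metric: average hours per deliverable.
    avg_hours = sum(p["hours"] for p in historical) / sum(p["deliverables"] for p in historical)

    current = {"project": "D", "deliverables_done": 120, "hours_spent": 13_200}
    expected = current["deliverables_done"] * avg_hours
    variance = (current["hours_spent"] - expected) / expected

    print(f"Historical benchmark: {avg_hours:.1f} hr/deliverable")
    print(f"Project D is running {variance:+.1%} against the benchmark")

This kind of comparison is only meaningful because the historical data was entered the same way on every project.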

Data comprises the new building blocks for engineering. Users will need to know how and why the data was built. Just like the pyramids, data does not tell you how it was made. Since it will be everywhere, one needs to ensure that the data structure is well documented. HP
