MRO Magazine

Digitalization, Operations Technology, and the Importance of Data Quality

The subject of data quality is not a particularly exciting one, which is likely why it is so often overlooked or pushed off to the side.

July 26, 2021 | By Tim White

Photo: © ra2 studio / Adobe Stock

Its importance is not always understood, and poor data quality is commonly the root cause when end users perceive a new system implementation as unsuccessful. Yet data quality has become more important than ever as companies embark on the journey of using OT and IIoT data to monitor their systems and assets. Having implemented systems such as this, I can assure you that if you do not think you have data quality issues, just wait.
Defining data quality
Data quality is deemed to be high if it correctly represents the “real world” construct to which it refers. High-quality OT data is complete, standards-based, consistent, accurate, and timely. An organization must be able to trust its data and to extract useful information from it easily. ISO 8000 breaks data quality down into three distinct areas: syntactic, semantic, and pragmatic.
Syntactic data quality: Syntactic quality is the degree to which data conforms to its specified syntax. Syntax refers to rules on the arrangement of words. In the OT data arena, this applies not only to how data are described, but also to the correct spelling of words. In a nutshell, what is being referred to here is consistency.
Semantic data quality: Semantic quality is the degree to which the data corresponds to what it represents. Simply put, the data must be meaningful to the user and represent the “real world”. For example, a temperature reading from a control system would have no meaning unless we know what it is measuring. Is it the ambient temperature, or the temperature of a running machine?
Pragmatic data quality: Pragmatic quality is the relevancy of the data being collected. It must be timely and useful to the end user. A measure of this may be somewhat subjective, because different individuals have different requirements. For example, a data scientist may need data collected at a 1 Hz (once per second) rate for analysis, while a five-second rate may be acceptable for a technician’s monitoring consoles.
Managing data quality from OT systems
Raw data from process control systems is normally not useful for analysis without processing. Data tags must be mapped to the assets they relate to, defined in terms of what they are measuring, and often assigned units of measure. As a first step in the OT journey, an organization should develop and document a standard for how this will be completed and then managed going forward.
Start with syntactic quality first. Remember that the goal here is consistency. Areas that should be addressed include the following (a short validation sketch appears after the list):
• Data tag descriptions – Tag descriptions should have a consistent syntax. Have a strategy session to define your approach. Many times, software is built with limitations on character counts, so be sure that this is considered.
• Abbreviations – Nothing is more frustrating than having different abbreviations within a data set. For example, will it be Celsius or C, Motor or MTR?
• Acronyms – Determine which words and phrases will be written out and which will use regularly accepted acronyms. This could apply to site names, units, manufacturing lines, etc. For example, is it Pipestills 4 or PS4?
• Hierarchy – OT data historians normally require data tags to be mapped to the related assets they are monitoring. This usually requires a hierarchy to be built for the information. As a best practice, this should be a carbon copy of the hierarchy within the EAM system.
• Units of Measure (UOM) – It is not uncommon for OT system data to arrive without units of measure associated with it. To make matters worse, units of measure can differ based on the location of the plant. With global companies this can be confusing and can affect data models and their outputs. Temperature is a great example: is the reading in Fahrenheit or Celsius? Which UOM will you standardize on, or will you calculate the missing measurement and provide both?
• Labels and Identifiers – Most OT data historians also provide the ability to use monitoring screens to display the information being collected. Take this opportunity to define how screens and the content they display will be labeled. While this will not affect the use of the data, it will provide for a professional and standardized look for the overall system.
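To make these conventions stick, it helps to check tag names programmatically rather than by eye. The Python sketch below illustrates the idea; the naming pattern, the 60-character limit, and the abbreviation lists are assumptions for illustration, not a standard, and should be replaced with whatever your strategy session settles on.

```python
import re

# Hypothetical convention: SITE_AREA_ASSET_MEASUREMENT, e.g. "PS4_CRUDE_P101_TEMP".
# The pattern, length limit, and word lists are illustrative assumptions only.
TAG_PATTERN = re.compile(r"^[A-Z0-9]+(_[A-Z0-9]+){2,5}$")
MAX_TAG_LENGTH = 60  # many historians cap tag-name length; confirm your system's limit
BANNED_VARIANTS = {"TEMPERATURE": "TEMP", "MOTOR": "MTR", "CELSIUS": "C"}

def check_tag(tag):
    """Return a list of syntactic-quality violations for one tag name."""
    problems = []
    if len(tag) > MAX_TAG_LENGTH:
        problems.append(f"exceeds {MAX_TAG_LENGTH} characters")
    if not TAG_PATTERN.match(tag):
        problems.append("does not match SITE_AREA_ASSET_MEASUREMENT pattern")
    for word in tag.split("_"):
        if word in BANNED_VARIANTS:
            problems.append(f"use '{BANNED_VARIANTS[word]}' instead of '{word}'")
    return problems

for tag in ["PS4_CRUDE_P101_TEMP", "ps4-crude-p101-temperature", "PS4_CRUDE_P101_MOTOR_AMPS"]:
    print(tag, "->", check_tag(tag) or "OK")
```

A check like this can run as a gate whenever new tags are added to the historian, so inconsistencies are caught before they spread.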
This brings us to semantic data quality. It is imperative that you understand what the data is referencing and that the data being recorded is accurate. This can be a very time-consuming initiative, but the data feeding the new systems used by analysts and data scientists must be meaningful and represent the real world.
Most of us are dealing with dated plants and control systems, which often lack the reference information needed to understand what the data represents. The tag identifier in the control system will help determine what a reading is referencing, but it may not carry the unit of measure. All of this will need to be researched, and the additional information created within your historian as new control system data is added.
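As tags are researched, the findings need a home. One lightweight approach is a reference table that ties each raw tag identifier to its asset, its measurement, and its unit of measure. The sketch below is illustrative only, with hypothetical tag and asset names rather than any vendor's schema.

```python
from dataclasses import dataclass

@dataclass
class TagMetadata:
    tag: str          # raw identifier from the control system
    asset_id: str     # the matching asset in the EAM hierarchy
    measurement: str  # what the sensor actually measures
    uom: str          # the researched unit of measure

# Built up entry by entry as control-system tags are researched and added.
TAG_REGISTRY = {
    "TI-4012": TagMetadata("TI-4012", "PS4-P101", "bearing temperature", "degC"),
    "PI-4020": TagMetadata("PI-4020", "PS4-P101", "discharge pressure", "bar"),
}

def describe(tag):
    meta = TAG_REGISTRY.get(tag)
    if meta is None:
        return f"{tag}: UNMAPPED - a semantic quality gap; research required"
    return f"{tag}: {meta.measurement} on {meta.asset_id} ({meta.uom})"

print(describe("TI-4012"))
print(describe("FI-4099"))  # surfaces a tag with no real-world meaning attached yet
```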
Sensors can also send incorrect data, which in turn will cause calculations and models to return inaccurate results, commonly referred to as false positives. There are a couple of different scenarios in which this happens.
Data spikes – Sensors will occasionally send an incorrect reading that would be physically impossible. As an example, a temperature that has been steady at 100 degrees suddenly spikes to 500 degrees and returns to 100 in a matter of seconds. Consider building persistence gates into the models so that these readings are ignored. If spikes become more frequent, that is an indicator that the sensor may be going bad and should be replaced.
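A persistence gate can be as simple as refusing to accept an implausible jump until it has persisted for a few consecutive samples. The Python sketch below shows the idea; the plausibility limit and confirmation count are assumptions that should be set from the physics of each asset.

```python
def persistence_gate(readings, limit, n_confirm=3):
    """Drop readings that jump more than `limit` from the last accepted value,
    unless the excursion persists for n_confirm consecutive samples."""
    accepted = []
    pending = 0
    last_good = None
    for value in readings:
        if last_good is not None and abs(value - last_good) > limit:
            pending += 1
            if pending < n_confirm:
                continue  # treat as a spike until it persists
        pending = 0
        last_good = value
        accepted.append(value)
    return accepted

# A 500-degree blip between steady 100-degree readings is ignored:
print(persistence_gate([100, 100, 500, 100, 100], limit=50))  # [100, 100, 100, 100]
```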
Sensor drift – Over time, sensor readings may start to drift, giving incorrect values. This may point to a need for calibration. A drift check can be included in the data model, with alerts established to automatically indicate that maintenance needs to be performed.
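One simple way to sketch such a check is to compare the rolling mean of recent readings against a known reference, flagging the sensor when the offset exceeds a tolerance. The reference value and tolerance below are assumptions, taken for example from the last calibration certificate or a redundant instrument.

```python
import statistics

def drift_check(recent, reference_mean, tolerance):
    """Flag a possible calibration need when recent readings have wandered
    from a known reference by more than the allowed tolerance."""
    offset = statistics.mean(recent) - reference_mean
    if abs(offset) > tolerance:
        return f"DRIFT ALERT: mean offset {offset:+.2f}; schedule calibration"
    return "within tolerance"

# Twenty-four hourly readings have crept upward against a 100.0 reference:
print(drift_check([101.5 + 0.05 * i for i in range(24)], reference_mean=100.0, tolerance=1.0))
```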
Semantic data quality will require the most effort to resolve. As issues are discovered and addressed, be sure to document them as part of the standards. Successful use of the data and turning it into meaningful information will rely on this work.
Finally, we will discuss pragmatic data quality. Is the data complete and timely? When using OT data for further analysis, this is arguably the most important criterion. One must ensure that all the expected data tags are being collected and that all of the data is arriving with no unintentional gaps between readings. Failures of pragmatic data quality are often hidden, revealing themselves just when one is trying to perform a manual analysis or apply even simple calculations or models. Defined standards and monitoring will help inform administrators of potential problems.
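Gap detection, at least, is easy to automate: scan the timestamps of stored readings for intervals longer than the expected collection rate. The Python sketch below is minimal; the interval threshold is an assumption to be set per tag from your documented standard.

```python
from datetime import datetime, timedelta

def find_gaps(timestamps, max_interval):
    """Return (start, end) pairs where consecutive readings sit further
    apart than max_interval, i.e. unintentional holes in the data."""
    ordered = sorted(timestamps)
    return [(a, b) for a, b in zip(ordered, ordered[1:]) if b - a > max_interval]

# A five-minute hole in what should be a 1 Hz feed:
ts = [datetime(2021, 7, 26, 8, 0, s) for s in (0, 1, 2)]
ts.append(datetime(2021, 7, 26, 8, 5, 0))
for start, end in find_gaps(ts, max_interval=timedelta(seconds=2)):
    print(f"gap from {start} to {end} ({end - start})")
```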
Data collection standards – The OT data from industrial control systems can have many parameters set for how it is collected, but the wrong decisions will affect its usefulness. This is why standards should be developed for data collection, which must be carefully designed in light of how the data will be used. Below are some settings to consider.
Rate of collection – Data communication rates within industrial control systems are described in hertz (Hz), or cycles per second. Modern systems designed to capture OT data will have settings to adjust this rate. The typical rate of collection is 1 Hz; however, there are instances where the desired rate may be higher or lower.
Exceptions – Exceptions can be set in systems to define when data will record (be saved). In the case of analog data tags, values are often carried out to as many as four decimal places. This level of granularity may not be useful, so it is common to set exceptions on the rate of collection. For example, pressures could be set to record only when a change meets a certain threshold, such as two psi or two bar. Additionally, a switch that is normally in the off position would not need to record a value of “0” every second. This data tag could be set to record only on a change of value, so when the switch is moved to the “on” position it would record a “1”, and then record a “0” when it is switched back off.
Minimum rate of collection – As discussed earlier, pragmatic data quality failures can often be hidden. When a data tag is set to record only on a change of value (as previously described), the failure to collect data may not always be obvious. Because of this, it is useful to set minimum rates of collection to ensure that a data tag is still active. Consider setting these tags with a minimum rate of four hours. This instructs the system to record the value whether it has changed or not, as long as the tag is still active.
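The exception (deadband) behaviour and the minimum rate of collection can be combined in one piece of recording logic. The sketch below is illustrative, not any historian's actual API; the two-unit deadband and four-hour heartbeat simply mirror the examples in the text.

```python
from datetime import datetime, timedelta

class ExceptionRecorder:
    """Record a value only when it moves past a deadband, but never let more
    than `heartbeat` elapse without a record, so an inactive tag can be told
    apart from an unchanged one."""

    def __init__(self, deadband=2.0, heartbeat=timedelta(hours=4)):
        self.deadband = deadband
        self.heartbeat = heartbeat
        self.last_value = None
        self.last_time = None

    def should_record(self, value, now):
        record = (
            self.last_value is None                           # first reading
            or abs(value - self.last_value) >= self.deadband  # exception: real change
            or now - self.last_time >= self.heartbeat         # minimum rate: still alive
        )
        if record:
            self.last_value, self.last_time = value, now
        return record

rec = ExceptionRecorder(deadband=2.0)
t0 = datetime(2021, 7, 26, 8, 0)
print(rec.should_record(100.0, t0))                         # True  (first reading)
print(rec.should_record(100.5, t0 + timedelta(seconds=1)))  # False (inside deadband)
print(rec.should_record(103.0, t0 + timedelta(seconds=2)))  # True  (2-unit change)
print(rec.should_record(103.0, t0 + timedelta(hours=5)))    # True  (heartbeat)
```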
Monitoring – Another way of ensuring pragmatic data quality is through monitoring of the systems responsible for collecting the data. There will be multiple components involved, such as interfaces, tunnellers, and gateways, along with processing and storage. Sometimes, due to interface limitations, multiple instances of the interface have to be set up in order to collect all of the desired tags. Any one of these components can fail, often without any indication. A good practice is to add dashboards to your system that monitor the status of each of these tools.
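At the heart of such a dashboard is usually a simple staleness check: each component reports a heartbeat timestamp, and anything silent for too long is flagged. The component names and the ten-minute threshold below are hypothetical.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical heartbeat store; in practice each interface, tunneller, and
# gateway would write its own last-seen timestamp here.
now = datetime.now(timezone.utc)
LAST_SEEN = {
    "interface-01": now - timedelta(seconds=30),
    "interface-02": now - timedelta(hours=3),
    "gateway-east": now - timedelta(seconds=5),
}

def stale_components(last_seen, max_silence=timedelta(minutes=10)):
    """List components that have gone quiet for longer than max_silence,
    candidates for the red tiles on a status dashboard."""
    current = datetime.now(timezone.utc)
    return [name for name, ts in last_seen.items() if current - ts > max_silence]

print(stale_components(LAST_SEEN))  # ['interface-02'] -> raise an alert
```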
Taking the time to define standards of data collection will pay huge dividends long-term. These standards, along with system monitoring, will help to ensure that your pragmatic data quality is acceptable.
Data quality is rarely addressed at the beginning of a digital transformation. The focus is always on “getting the models built” and showing an ROI quickly. This becomes a double-edged sword, because when poor-quality data begins to affect the success of the initiative, rework is required to fix the problem. Take the time up front to address the issues discussed here. If that is done, the rest of the implementation will move smoothly, scalability will increase, and that ROI will become a reality. MRO
___________________
References
1. ISO 8000-8:2015, Data Quality – Part 8: Information and data quality: Concepts and measuring. ISO, 2015.
2. Oliver Adam Mølskov Bech, “Data Quality Management”, DTU, 16 November 2018, accessed 13 April 2021, http://apppm.man.dtu.dk/index.php/Data_Quality_Management
___________________
Tim White is a Senior Manager at T. A. Cook, providing services related to digital asset performance management. Previously he worked in the industry as a global director for asset management, responsible for 83 sites across the globe.
