In manufacturing and process plants, control systems integrate Human Machine Interface (HMI) software, Programmable Logic Controllers (PLCs), Distributed Control Systems (DCSs), computers, and a wide range of automation software over high-speed Ethernet-based communications. In geographically distributed operations, such as oil & gas production and pipelines, control systems look quite different: they combine SCADA software with a more loosely integrated set of field control devices, local HMI software, and wide-area communications that use a mixture of wireless, fibre optic and telephone services.
In operations involving production and pipeline monitoring and control, SCADA and Electronic Flow Measurement (EFM) applications require access to data from a wide variety of automation devices. These devices include PLCs, Remote Terminal Units (RTUs), Flow Computers, and other data sources that are not directly connected to the computers on which the applications reside. The communication bridge between the applications and field devices typically requires radios, cellular networks, satellite links, or other types of wireless technology in multiple combinations. Each of these communication media has bandwidth limitations, and its performance and reliability are easily degraded by the volume of traffic sent over the network—as well as by other factors such as physical obstructions, weather and environmental elements. Depending on who owns the communications backbone, there may be costs associated with the volume of data transferred across the network, so the need for more data drives up operational expenses. Lastly, this information needs to be securely transmitted to ensure that sensitive data cannot be intercepted and used for malicious purposes. Together, these factors result in a complex and expensive architecture for remote communications within an oil & gas operation.
The Current Host-Centric Model
Some form of data collection must exist in order to provide connectivity between the applications consuming the data and the field devices providing it. Historically, this data collection has resided on the same computer as the SCADA host. Data collection may be owned by the SCADA polling engine, which must contain the protocol drivers required to pull data directly from the field devices. In other instances, separate standalone applications that expose a generic interface may be responsible for the data collection between the applications and field devices. Unfortunately, the many types of field devices that originate from a wide variety of vendors do not support a universal protocol. As such, there is a 1:1 correlation between the number of data collectors required to run on the host communication server and the number of vendor-specific device types that are part of the overall operation. Given the bandwidth, cost and security concerns described above, the current Host-Centric Model has several shortcomings.
First, available bandwidth is quickly consumed as more applications and devices are added, each increasing the communications traffic over the network. Under load, this model results in the periodic dropping of data requests that never reach the device. Next, the model does not scale cost-effectively: typically, multiple client applications running on multiple computers are interested in collecting the same data, and each makes its own trip across the network for it. Lastly, many of the vendor-specific protocols were developed with these bandwidth limitations and cost concerns in mind, so vendors have engineered them down to the bare minimum required to access the data in the device.
Distributed Communications Architecture
A feature-rich and properly implemented Distributed Communications Architecture addresses these issues. In this model, data collectors are no longer required to live on the same computer as the client applications. Instead, they can exist on any computer that is tied into the communications network. In this way, a single data collector can service multiple client applications interested in the same data from the same devices. By removing the inefficiency of making repeated requests, less bandwidth is needed to provide the same data set. Multiple data collectors can be spread out across multiple computers that are closer to the field devices, each with their own exclusive connection to the network.
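The bandwidth saving can be sketched in a few lines of Python. This is a hypothetical illustration (the `DataCollector` class, the `FLOW_RATE` tag and the poll interval are all invented for the example, not part of any product): a single collector polls the field device once, caches the result, and serves every interested client application from that cache, so three client reads cost only one trip over the bandwidth-limited field network.

```python
import time


class DataCollector:
    """Sketch of a shared data collector: polls a field device once per
    interval and serves any number of client applications from its cache,
    so repeated client requests do not generate repeated device traffic."""

    def __init__(self, poll_device, max_age_s=5.0):
        self._poll_device = poll_device  # callable performing the real (costly) device poll
        self._max_age_s = max_age_s      # how stale a cached value may be before re-polling
        self._cache = {}                 # tag -> (value, timestamp)
        self.device_polls = 0            # counts actual trips over the field network

    def read(self, tag):
        cached = self._cache.get(tag)
        now = time.monotonic()
        if cached is None or now - cached[1] > self._max_age_s:
            self.device_polls += 1       # only here do we touch the field network
            cached = (self._poll_device(tag), now)
            self._cache[tag] = cached
        return cached[0]


# Three "client applications" requesting the same tag cost one device poll.
collector = DataCollector(poll_device=lambda tag: 42.0)
readings = [collector.read("FLOW_RATE") for _ in range(3)]
print(readings, collector.device_polls)
```

In the Host-Centric Model, by contrast, each of those three clients would have carried its own request across the network.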
Even though communication failures will still occur, this architecture minimises the points of failure within the system. It is intuitive to place the data collector as close to the device as possible; the connection may even be hardwired. This proximity increases the likelihood that data will be retrieved from the device as needed. The data collector may also have the ability to buffer and store the data in the event that the remote client applications are unavailable, enabling it to deliver this data to the applications once connectivity returns and preventing the loss of data across the system. This can be accomplished through a deferred real-time data playback mechanism or, preferably, through a more suitable historical data interface for retrieving the stored data.
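The store-and-forward behaviour described above can be sketched as follows. This is a minimal, hypothetical model (the `BufferedCollector` class, the simulated WAN link and the tag names are invented for illustration): while the upstream link is down, samples accumulate in a bounded local buffer at the collector; when the link returns, the backlog is replayed in order so nothing is lost across the outage.

```python
import collections


class BufferedCollector:
    """Sketch of store-and-forward at the data collector: samples are
    buffered locally while the remote client application is unreachable,
    then delivered oldest-first once the link is restored."""

    def __init__(self, send, buffer_size=10_000):
        self._send = send  # callable delivering one sample upstream; raises on link failure
        self._backlog = collections.deque(maxlen=buffer_size)  # bounded local store

    def publish(self, sample):
        self._backlog.append(sample)
        self.flush()

    def flush(self):
        # Drain the backlog oldest-first; on failure, stop and retain the rest.
        while self._backlog:
            try:
                self._send(self._backlog[0])
            except ConnectionError:
                return False  # link still down; data kept for later
            self._backlog.popleft()
        return True


# Simulate an outage: two samples arrive while the WAN link is down.
delivered, link_up = [], [False]

def send(sample):
    if not link_up[0]:
        raise ConnectionError("WAN link down")
    delivered.append(sample)

bc = BufferedCollector(send)
bc.publish(("FLOW_RATE", 101.5))  # buffered, not lost
bc.publish(("FLOW_RATE", 99.8))   # buffered, not lost
link_up[0] = True
bc.flush()                        # backlog replayed in order
print(delivered)
```

A production implementation would persist the backlog to disk and expose it through a historical data interface, as the article suggests, rather than keeping it in memory.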
By distributing the data collection away from the client applications, we have introduced an abstraction layer between the vendor-specific protocol and the sharing of the information contained within it. Additionally, we can limit the exposure of these insecure vendor-specific protocols over the wide-area network by placing the data collector as close to the device as possible. It now becomes possible to have a single secure protocol connecting each client application to the applicable data collectors, removing concerns about where this data may need to travel in order to reach its destination.
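As a rough sketch of that abstraction layer (every class name, tag and value here is hypothetical, and neither vendor protocol is actually implemented), each vendor-specific protocol sits behind a driver that exposes one generic read interface, so client applications make a uniform call regardless of the device on the other end:

```python
from abc import ABC, abstractmethod


class DeviceDriver(ABC):
    """Generic read interface that hides each vendor-specific protocol."""

    @abstractmethod
    def read(self, tag: str) -> float: ...


class ModbusDriver(DeviceDriver):
    # Stand-in for a real Modbus implementation; the register map is invented.
    def read(self, tag: str) -> float:
        registers = {"FLOW_RATE": 101.5}
        return registers[tag]


class RocDriver(DeviceDriver):
    # Stand-in for a flow-computer protocol; the point values are invented.
    def read(self, tag: str) -> float:
        points = {"FLOW_RATE": 99.8}
        return points[tag]


def collect(drivers: dict, tag: str) -> dict:
    # Clients see one uniform call; the vendor protocol stays behind the driver.
    return {name: drv.read(tag) for name, drv in drivers.items()}


site = {"wellhead_plc": ModbusDriver(), "flow_computer": RocDriver()}
readings = collect(site, "FLOW_RATE")
print(readings)
```

This is exactly the shape of interface that the OPC UA standard, discussed next, defines in a vendor-neutral and secure way.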
Although there are many ways in which this architecture could be implemented, there is one de facto industrial automation standard whose purpose is to allow vendors to solve the very problems previously discussed. This is the OPC Unified Architecture (UA) standard: a multipurpose set of services that a data collector (known as an OPC server) provides to an application (known as an OPC client) that is ready to consume this information. The OPC UA service set generalises the methods used to discover, collect, and manipulate real-time, historical, and alarm and event data by abstracting away the vendor-specific protocols. OPC UA also provides for the secure exchange of data between these components by prescribing well-known and widely adopted IT practices. By building out the Distributed Communications Architecture on an open standard such as OPC UA, one will have a greater chance of interoperability between the applications one is aware of today and those one may need to add in the future—all while securely optimising data throughput across the network.
The technology needed to move from a Host-Centric Model to a Distributed Communications Architecture is available today. The transition requires minimal downtime, as configuration can be accomplished without disrupting established communications. The new architecture provides oil & gas operations with an alternative to the current model that is more secure and cost-effective, and ready to scale to meet the needs of tomorrow.
Article authored by Tony Paine, President & CEO, Kepware Technologies (tony.paine@kepware.com), and Russel Treat, President & CEO, EnerSys Corporation.