
CLOUD COMPUTING IN MANUFACTURING

Real-time AI powered by edge-deployed digital twins

Aug 18, 2020

Cloud-based digital twins deliver significant advantages. However, deploying the cloud-trained digital twin at the edge opens up new opportunities for autonomous systems, such as novel real-time Artificial Intelligence (AI) applications based on self-learning. By connecting software to its electrification, robotics, automation and motion portfolio, ABB energizes the transformation of society and industry to achieve a more productive, sustainable future. This article, along with relevant use cases, explains why edge execution of the digital twin should be seen as edge-cloud synergy rather than edge versus cloud execution. - Mithun P Acharya, ABB Corporate Research

A digital twin is a digital representation of a physical asset (the physical twin) that can be used for various purposes, such as simulating the behaviour of the physical asset. For example, a performance model of a solar inverter is a digital twin that characterises the performance of the inverter. Depending on the specific use case, there may be more than one digital twin for a given asset, process or system. This composability of digital/physical twins gives great flexibility to define, evolve, compose and leverage twins for real-time Internet of Things (IoT) applications, such as edge computing and analytics. Here, the term ‘edge’ refers to software or hardware installed on the customer premises, such as edge gateways, historians and plant devices and controllers.

Edge-deployed digital twin

The digital twin usually resides and is maintained in the cloud, fed with data from field devices or simulations. Digital twins are used to understand past and present operations and to make predictions, leveraging Machine Learning (ML) approaches to condition monitoring, anomaly detection and failure forecasting. One example application of a digital twin is the (predictive) maintenance of the physical asset it represents.
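
As a simple illustration of such analytics, the sketch below flags an anomaly when the twin's prediction diverges from the measured behaviour beyond a threshold; the model interface, signal names and threshold are illustrative assumptions, not a specific ABB implementation.

```python
# Minimal sketch: anomaly detection by comparing a digital twin's
# prediction with the measured value. The model, threshold and names
# are illustrative assumptions.

def detect_anomaly(twin_model, inputs, measured_output, threshold=0.05):
    """Flag an anomaly when the measurement deviates from the twin's
    prediction by more than a relative threshold."""
    expected = twin_model.predict([inputs])[0]    # twin's expected behaviour
    residual = abs(measured_output - expected) / max(abs(expected), 1e-9)
    return residual > threshold, residual
```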

What new applications are possible if one allows the cloud-learned digital twin to travel to the edge and meet its physical twin? While edge devices in the past were used primarily for data acquisition and basic calculations, edge computing delivers advanced computations and cloud-assisted analytics that enable faster and localised decision making at the edge. Moving the digital twin to the edge will enable novel real-time AI applications due to the following:

  • Lower analytics latency: In some cases, round-trip latencies to the cloud are not acceptable. With the digital twin executing at the edge, applications that require sub-second latencies become possible. For example, device protection functions, including shut-off, can be invoked instantly if the digital twin analytics detect or forecast a hazard.

  • Closed-loop integration of analytics and local control: Analytics produced by the digital twin can guide the local control and vice versa. This enables proactive control, resulting in autonomous operations. For example, a forecasted critical anomaly could be mitigated without human intervention.

  • Faster evolution of the digital twin: Using approaches like online ML on streaming data and real-time reinforcement learning, the digital twin can continuously self-learn and evolve. This results in optimised system operations and self-tuning devices. For example, by learning from and evolving its digital twin at the edge, a grid-connected energy device can infer optimal operational or control parameters for maximised output without impacting grid stability (a minimal sketch of such an incremental update follows this list).
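
The sketch below illustrates online ML on streaming data in its simplest form: an edge-resident model updated incrementally with each new observation. The scikit-learn SGDRegressor and the update function are illustrative assumptions, not the article's specific method.

```python
# Minimal sketch: incremental (online) learning at the edge. The model
# choice (SGDRegressor) and function names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import SGDRegressor

twin = SGDRegressor()   # in practice, initialised from the cloud-trained twin

def on_new_sample(features, measured_output):
    """Update the edge-resident twin with one streaming observation."""
    X = np.asarray(features, dtype=float).reshape(1, -1)
    y = np.asarray([measured_output], dtype=float)
    twin.partial_fit(X, y)        # self-learn from the new data point
    return twin.predict(X)[0]     # refreshed estimate for local control
```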

Executing the digital twin at the edge, instead of in the cloud, also has business advantages:

  • Reduced cloud hosting costs: sending all the data to the cloud for storage and analysis can be costly.

  • Reduced data volume: pre-processing at the edge cuts the amount of data transmitted to the cloud.

  • Data privacy: sensitive data need not be sent to the cloud.

  • Increased resilience: analytics can be performed even when the connection to the cloud is lost.

Three use cases illustrate the possibilities opened up by edge-deployed digital twins.

Real-time solar performance tracking for a resilient grid

Smart solar inverters can provide ancillary services to the grid, including voltage and frequency regulation, power-factor correction and reactive power control. These advanced capabilities can help smooth out fluctuations in supply and demand resulting from intermittent renewables and elastic loads. To operationalise smart inverter capabilities in a solar power site, the site’s output power needs to be curtailed (by about 10%) so that there is headroom for these types of on-demand regulations.

The key to maintaining the desired regulation range and curtailment performance is the Available Peak Power (APP) estimation. This quantity needs to be estimated and communicated within each control update cycle (typically under 4 s). The APP estimation algorithm considers solar irradiation, photovoltaic (PV) module current and voltage characteristics, panel temperatures, inverter efficiencies and other variables. The accuracy of the APP estimation is crucial for the regulation accuracy of the PV site, especially concerning up-regulation.
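
As a rough illustration, the sketch below computes an APP estimate from the variables listed above; the simplified derating model and coefficients are illustrative assumptions, not the production algorithm.

```python
# Minimal sketch of an Available Peak Power (APP) estimate. The linear
# irradiance scaling, temperature derating and coefficient values are
# illustrative assumptions; the production algorithm is more detailed.

def estimate_app(irradiance_w_m2, panel_temp_c, rated_power_w,
                 temp_coeff=-0.004, inverter_eff=0.97,
                 stc_irradiance=1000.0, stc_temp=25.0):
    """Estimate available peak power (W) for one inverter."""
    dc_power = rated_power_w * irradiance_w_m2 / stc_irradiance  # irradiance scaling
    dc_power *= 1.0 + temp_coeff * (panel_temp_c - stc_temp)     # temperature derating
    return dc_power * inverter_eff                               # inverter losses
```

A closed-form estimate like this executes in microseconds on an edge device, comfortably within the ~4 s control update cycle.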

ML and physics-based models are built in the cloud to model the APP, using historical data from across the inverter fleet. A digital twin can then be derived from these models and transferred to ABB solar inverters to run edge analytics, on demand and in real time, in synergy with the cloud. To show how the edge-deployed digital twin can be used on streaming inverter data to estimate inverter output in real time, a proof-of-concept demonstrator was realised in a Python environment. This demonstrator used field data from a solar site and an edge computing device similar to a Raspberry Pi.

The subject was a single-phase, 2.5 kW inverter installed on a school property in the United States. A streaming program that feeds the data frames into the estimator and plots the results on a visualiser shows a hypothetical example of setting up the inverter for grid services. In this case, the output of the inverter is regulated constantly at 10% below the APP to leave room for up-regulation. The expected power at any given time is treated as the APP for that time; this estimate is the reference value for the next dispatch or control interval. As the solar input changes, so too does the APP, maintaining a healthy margin for local inverter control.
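
The control logic the demonstrator illustrates can be sketched as a simple loop: each control interval, estimate the APP from the latest data frame and dispatch a setpoint 10% below it. Here read_frame() and send_setpoint() are hypothetical stand-ins for the demonstrator's streaming and inverter-control interfaces, and estimate_app() is the illustrative estimator sketched earlier.

```python
# Minimal sketch of the curtailment loop. read_frame() and
# send_setpoint() are hypothetical interfaces, not ABB's actual API.
import time

CONTROL_INTERVAL_S = 4.0   # control update cycle, typically under 4 s
HEADROOM = 0.10            # curtail 10% below APP for up-regulation room

while True:
    frame = read_frame()                          # latest inverter data frame
    app_w = estimate_app(frame.irradiance,        # available peak power
                         frame.panel_temp,
                         frame.rated_power)
    send_setpoint(app_w * (1.0 - HEADROOM))       # regulate 10% below APP
    time.sleep(CONTROL_INTERVAL_S)
```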

Real-time transformer physical security

In this use case, the physical security of a large power transformer is twinned in the cloud, and the twin is leveraged at the edge to monitor the transformer in real time. Enhancing the physical security of substations and transformers addresses a critical vulnerability that can affect grid reliability. With controlled experiments, using three vibration sensors and one acoustic sensor on the transformer, ML models are first designed and trained in the cloud to classify a physical impact on the transformer as benign or catastrophic.

The trained models, which represent the digital twin of the transformer's physical security, are deployed at the edge, on the transformer, for reporting and acting upon sensor-registered physical impacts in real time.
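
Conceptually, the edge-side deployment reduces to a scoring loop over sensor windows, as sketched below; read_sensor_window(), extract_features(), raise_alert() and the trained impact_model are hypothetical stand-ins for the actual pipeline.

```python
# Minimal sketch of the edge scoring loop for transformer physical
# security. All interfaces named here are hypothetical stand-ins.

def monitor_transformer(impact_model, window_size=52_000):
    """Classify one-second vibro-acoustic windows (52 kS/s) in real time."""
    while True:
        window = read_sensor_window(window_size)   # vibration + acoustic samples
        features = extract_features(window)        # e.g. spectral features
        label = impact_model.predict([features])[0]
        if label == "catastrophic":
            raise_alert(window)                    # report and act locally
```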

Streaming analytics refers to applying ML and other data processing techniques to data in motion. This contrasts with conventional data analytics practices in which data is stored first and analysed later. Streaming analytics can provide an instant reaction to stimuli as well as reduce data volumes.

A key consideration in streaming analytics is the rate at which an edge processor can ingest streaming data and apply an ML model to deliver the intended outcome (a class label or regression). This important performance characteristic was quantified here using the sensor data (at 52,000 samples per second). Streaming data at this rate into a cloud engine is challenging, so the setup provides a good representative case for understanding how fast is fast enough and where bottlenecks might hinder real-time performance. The insight gained from this experiment can inform ‘goodness-of-fit’ decisions, matching analytics performance with application requirements. Model development (K-means clustering and decision tree classifiers) was done in a Python environment. Model deployment was on a Raspberry Pi as a proxy for a single-board-computer edge device.
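
The sketch below shows the shape of such a measurement: time windowed feature extraction plus model scoring against the 52,000 samples-per-second source rate. The synthetic data, window size and features are illustrative assumptions; only the decision tree matches a model family the article names.

```python
# Minimal throughput sketch: can windowed scoring keep up with a
# 52 kS/s sensor stream? Data, features and window size are synthetic
# illustrative assumptions.
import time
import numpy as np
from sklearn.tree import DecisionTreeClassifier

SAMPLE_RATE = 52_000
WINDOW = 1024                                   # samples per scoring window

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))             # 4 features per window
y_train = rng.integers(0, 2, size=500)          # 0 = benign, 1 = catastrophic
model = DecisionTreeClassifier().fit(X_train, y_train)

def features(window):
    return [window.mean(), window.std(), window.min(), window.max()]

stream = rng.normal(size=SAMPLE_RATE * 10)      # 10 s of synthetic sensor data
start = time.perf_counter()
for i in range(0, len(stream) - WINDOW, WINDOW):
    model.predict([features(stream[i:i + WINDOW])])   # ingest + score
elapsed = time.perf_counter() - start
print(f"scored {len(stream) / elapsed:,.0f} samples/s "
      f"against a {SAMPLE_RATE:,} samples/s source")
```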

The objective was to score the machine-learning model against the incoming data stream. The output from the model was the class of impact: benign (e.g. an aircraft flying over) or catastrophic (e.g. a hammer strike). In the experiments on this representative use case, performance varied noticeably between a Linux virtual machine with eight cores (2.1 GHz), serving as a proxy for a high-end edge gateway, and a quad-core (1.2 GHz) Raspberry Pi 3 Model B. Performance also varied notably in the transition from pure data ingestion to model scoring. Performance can likely be improved with code optimisation; however, the order-of-magnitude difference between monitoring and scoring modes would be significant for time-sensitive applications.

Semi-autonomous plant electrical system assessment

In this use case, safety in a food and beverage (F&B) plant was twinned in the cloud using image recognition. Using images (hazardous and non-hazardous scenarios) of the plant from past safety assessments, deep convolutional neural nets were trained in the cloud to automatically detect electrical hazards. These neural nets can then be deployed on the plant operators’ smartphones or video cameras on the plant floor for real-time hazard detection.

Deep learning is the newest addition to the Artificial Intelligence (AI) toolbox; it first entered the hype cycle in 2017, following breakthroughs in training algorithms and error rates. Deep learning can be applied on a digital twin at the edge to recognise electrical and food safety hazards in F&B facilities. While deep learning has long been known and applied extensively in cloud-level applications, such as image processing, natural language processing and speech recognition, its application at the edge is quite novel, especially given the computational requirements.

Corrosion was chosen as the hazard category, as it is the number-one issue in F&B facilities and one for which ABB had a reasonable training data set (with hundreds of thousands of training variables). Google TensorFlow and Keras libraries were used for training various deep-learning networks. As is common in analytics projects, the quantity and quality of training data played a significant role in the accuracy of the classification results. An accuracy of 90% was achieved in the best-case scenario, which demonstrates the potential of machine intelligence. As a reference, the benchmark human recognition rate for images is 95%.
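
The sketch below shows the general shape of such a training pipeline in Keras; the architecture, image size and directory layout are illustrative assumptions, not the networks actually trained (which were larger and ran on a GPU cluster).

```python
# Minimal sketch of a Keras pipeline for binary hazard classification
# (corrosion vs no hazard). Architecture, paths and sizes are
# illustrative assumptions.
from tensorflow import keras

train_ds = keras.utils.image_dataset_from_directory(
    "safety_images/train",           # hypothetical path: corrosion/ and ok/
    image_size=(128, 128), batch_size=32)

model = keras.Sequential([
    keras.layers.Rescaling(1.0 / 255),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),   # hazard probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
model.save("hazard_model.keras")     # cloud-trained twin, ready for the edge
```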

A hardware demonstrator was built using a $35 single-board computer (Raspberry Pi Model B). The deep-learning models were developed on a Graphics Processing Unit (GPU) cluster and deployed on the Pi for real-time scoring. Python libraries were used extensively in this exercise. The experiment proved that deep learning is not necessarily an expensive technique exclusive to cloud computing applications; its application at the edge is quite feasible and can motivate many use cases that rely on images and other high-dimensional data. The ultimate goal was to demonstrate that a trained model could be cost-effectively deployed at the edge, another example of the synergistic use of edge and cloud technologies for analytics and ML.
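
One common route for running a GPU-trained Keras model on a Raspberry Pi, assumed here purely for illustration (the article does not state which runtime was used), is converting it to TensorFlow Lite and scoring frames locally:

```python
# Illustrative assumption: deploy via TensorFlow Lite. The article does
# not specify the runtime used on the Pi.
import numpy as np
import tensorflow as tf

# Convert once, on the development machine.
converter = tf.lite.TFLiteConverter.from_keras_model(
    tf.keras.models.load_model("hazard_model.keras"))
open("hazard_model.tflite", "wb").write(converter.convert())

# On the Pi: load the compact model and score one camera frame.
interpreter = tf.lite.Interpreter(model_path="hazard_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.zeros((1, 128, 128, 3), dtype=np.float32)   # placeholder image
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
print("hazard probability:", float(interpreter.get_tensor(out["index"])[0, 0]))
```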

Cutting edge

Although this article concentrates on edge execution of digital twins, the take-home message should be understood as ‘edge-cloud synergy’ and not ‘edge versus cloud’. As edge computing technologies advance, one should expect edge execution of digital twins to bring benefits in many use cases, including, but not limited to, smart grids, smart cities, smart plants, robotics, the IoT and smart transportation.

Image Gallery

  • Deep-learning framework for safety hazard recognition

  • Streaming analytics on vibro-acoustic sensor data for power transformer physical security

  • The digital twin travels to the edge to meet its physical twin. The digital twin is executed and evolved on the edge gateway or on-device using self-learning approaches.

