A digital twin is a living virtual model of a physical system. This live representation of reality allows us to gather real-time data and introduce new manufacturing practices, ensuring robust product development. A reliable, efficient, and cost-effective traction motor is essential for the success of e-mobility, and robust manufacturing facilities play an important role in realising the design with minimum performance variation, in-house rejection, and warranty claims. This article aims to gather critical data from different stages of the manufacturing machines, measure critical dimensions of the components, compare them with real-time models (digital twins), and predict performance before End of Line (EOL) testing. At EOL, the test results are compared with the digital twin output. The digital twin is also capable of learning and improving the process based on the rejections it records at EOL or in warranty testing.
Digital twin technology
The ‘Digital Twin’ technology was conceived by the aircraft industry in 2011 to predict the structural behaviour of aircraft. NASA further defined the technology as a mirror of its space vehicles’ behaviour, used to make predictions and to update fleet history from sensor data. The ‘Digital Twin’ has since evolved into a much broader concept that simulates the behaviour of manufacturing machines, people, assets, and processes. This is possible thanks to advances in high-performance processors, data acquisition, and machine learning. Digital twin technology is at the forefront of Industry 4.0, facilitated by data analytics and IoT (Internet of Things) connectivity.
The digital twin is defined as a living model of the physical asset, which continually adapts to operational changes based on collected online data and information and can forecast the future of a corresponding physical counterpart. In this article, a traction motor manufacturing line is used as a research model to define the digital twin technology, its application, and benefits.
Digital twin technology is often confused with a digital model or a digital shadow, where no automatic data exchange, or only one-way data exchange, is established between the physical object and the virtual model. A Digital Twin reflects a two-way dynamic mapping between a physical object and its virtual model in cyberspace. This two-way impact, i.e., the impact of data on the virtual model and the impact of decision-making on the physical object, is an essential feature of a digital twin. A Digital Twin can provide an abstract physical representation to support real-time decisions. Figure 1 shows a Digital Twin reference model. The development of a Digital Twin needs three components: (1) an information/mathematical model that abstracts the specifications of the physical object; (2) a communication mechanism to transfer data between the Digital Twin and its physical counterpart; and (3) a data-processing module that can extract information from many sources to construct a real-time representation of the physical object. These three components must work together to construct a Digital Twin.
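The three components above can be sketched as a minimal skeleton. This is a hedged illustration, not the article's reference model: every class and method name (`PhysicalModel`, `CommunicationChannel`, `DataProcessor`, `DigitalTwin`, `step`) is a hypothetical stand-in for the corresponding component.

```python
class PhysicalModel:
    """(1) Information/mathematical model abstracting the physical object."""
    def __init__(self, state: dict):
        self.state = dict(state)

    def predict(self) -> dict:
        # Placeholder physics: echo the current state estimate.
        return dict(self.state)


class CommunicationChannel:
    """(2) Two-way data exchange with the physical counterpart."""
    def __init__(self):
        self.inbox: list = []

    def receive(self) -> list:
        msgs, self.inbox = self.inbox, []
        return msgs

    def send(self, decision: dict) -> None:
        # In a real system this would actuate the physical object.
        pass


class DataProcessor:
    """(3) Fuses raw sensor messages into one state update."""
    @staticmethod
    def fuse(messages: list) -> dict:
        merged = {}
        for m in messages:
            merged.update(m)
        return merged


class DigitalTwin:
    """Wires the three components into the two-way mapping."""
    def __init__(self, model, channel, processor):
        self.model, self.channel, self.processor = model, channel, processor

    def step(self) -> dict:
        update = self.processor.fuse(self.channel.receive())
        self.model.state.update(update)   # impact of data on virtual model
        decision = self.model.predict()   # impact of decision on physical object
        self.channel.send(decision)
        return decision
```

The two-way exchange is visible in `step`: sensor data flows into the model, and the model's decision flows back toward the physical object.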
It is becoming more evident that Digital Twin runs in parallel with Artificial Intelligence (AI) and IoT technology, and the three share common challenges: IT infrastructure, data quality, privacy and security, and inflated expectations. The rapid growth of AI demands high-performance, and therefore expensive, IT infrastructure to process real-time data. High-quality data is needed to produce meaningful results, which makes ‘data cleansing’ important and at times challenging. The final challenge for data analytics is the expectation that it can solve all our problems; careful consideration is vital to identify the right applications, where standard models would not produce the same results. Like other new technologies, these have the potential to work hand-in-hand to strengthen manufacturing, yet potential users often focus only on the benefits and expect instant savings in time and money. The field is still at an intermediate stage, and these challenges need to be kept in mind while conducting data analytics.
Based on application, the Digital Twin can be segregated into three categories, namely Unit level, System level, and SoS (System of Systems) level. The Unit level is a single digital twin model that replicates the behaviour of the unit under consideration. A system-level digital twin is obtained by integrating multiple unit-level digital twins that cooperate with each other. Multiple system-level digital twins constitute the SoS-level digital twin.
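The three levels form a composition hierarchy, which can be sketched as follows. All class and station names here are illustrative assumptions, not part of the article.

```python
class UnitTwin:
    """Unit level: replicates the behaviour of one machine or asset."""
    def __init__(self, name, health=1.0):
        self.name, self.health = name, health

    def status(self):
        return {self.name: self.health}


class SystemTwin:
    """System level: integrates cooperating unit-level twins."""
    def __init__(self, units):
        self.units = units

    def status(self):
        out = {}
        for u in self.units:
            out.update(u.status())
        return out


class SoSTwin:
    """SoS level: composed of multiple system-level twins."""
    def __init__(self, systems):
        self.systems = systems

    def status(self):
        return [s.status() for s in self.systems]


# Hypothetical example: two assembly lines composed into a factory twin.
stator_line = SystemTwin([UnitTwin("winding"), UnitTwin("welding")])
rotor_line = SystemTwin([UnitTwin("balancing")])
factory = SoSTwin([stator_line, rotor_line])
```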
Traction motor manufacturing & Digital Twin
This section’s focus is to apply digital twin technology to the traction motor (TM) manufacturing process to ensure a robust product by consistently meeting performance requirements while reducing in-house rejection and warranty claims. A TM manufacturing line has a stator assembly, a rotor assembly, a motor assembly, and the EOL testing system. At the design stage, one should be aware of the control factors that ultimately determine the ‘Critical to Quality (CTQ)’ factors of the TM. An example is the magnetic field strength, which correlates directly with the maximum torque produced. It is also assumed at this stage that the motor virtual model (digital twin) is available and that good correlation has been proven during prototype testing and validation. Characterisation of the TM and its equivalent circuit parameters will enhance the quality of the TM digital twin. This enhanced model can provide the acceptable limits of the equivalent circuit parameters, which will later be utilised for detection during the process phase. The virtual representation can consist of pure mathematical modelling of the TM, or a hybrid approach combining mathematical and data-driven methods. In both cases, one should be able to achieve a good level of accuracy. A mathematical model is derived from the physics of the system, and non-linearities should be considered for higher accuracy. The data-driven approach starts with understanding and labelling the collected data, which is then used to build the models.
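As a sketch of the mathematical-model route, the standard d-q equivalent-circuit torque equation for a permanent-magnet traction motor can serve as one building block of the TM digital twin. The article does not specify the motor type or parameters; the PMSM formula and all parameter values below are assumptions for illustration only.

```python
def pmsm_torque(pole_pairs, flux_linkage, L_d, L_q, i_d, i_q):
    """Electromagnetic torque of a PMSM in the d-q frame (Nm).

    T = 1.5 * p * (lambda_m * i_q + (L_d - L_q) * i_d * i_q)

    Equivalent-circuit parameters: pole pairs p, magnet flux linkage
    lambda_m (Wb), and d/q-axis inductances L_d, L_q (H).
    """
    return 1.5 * pole_pairs * (flux_linkage * i_q + (L_d - L_q) * i_d * i_q)


# Illustrative parameters, not from any real motor.
torque = pmsm_torque(pole_pairs=4, flux_linkage=0.08,
                     L_d=0.3e-3, L_q=0.5e-3, i_d=0.0, i_q=200.0)
# with these values the model gives roughly 96 Nm
```

Deviations of the measured equivalent-circuit parameters from their characterised limits can then be flagged during the process phase, as described above.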
At the manufacturing line, we need to identify the ‘critical to quality’ factors of each machine and the key factors that affect them. In a stator assembly line, the winding machine is an example: the winding strength is ‘critical to quality’ for TM performance. The factors affecting it can be estimated from the current and voltage characteristics of the machine actuators, since a particular pattern of current and voltage waveforms ensures proper winding strength. Data acquisition for this machine can therefore consist of the current and voltage waveforms. Further characterisation of the machine can help us correlate equivalent circuit parameters and thereby develop a digital twin of the winding machine. The data captured during the winding process is assessed in real time, and any anomaly can be detected before the part reaches the next station. This is an example of a unit-level digital twin for TM manufacturing. Another important consideration at this stage is understanding the acceptable limits of the equivalent circuit parameters for the given TM. This can serve as a starting point for baseline models used for rule-based rejections or anomaly detection algorithms. These thresholds and anomaly-detection algorithms can also provide suitable information for predictive maintenance of the machine.
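A rule-based check on the winding-machine current waveform, as described above, might look like the following sketch. The acceptance band and ripple limit are illustrative placeholders; in practice they would come from machine characterisation, not from this article.

```python
import statistics

def check_winding_current(samples, i_min=2.0, i_max=8.0, max_std=1.5):
    """Flag a winding cycle whose current trace (amperes) leaves the
    accepted band or shows excessive ripple.

    Returns (passed, reasons): reasons is empty when the cycle passes.
    Thresholds are assumed values for illustration.
    """
    reasons = []
    if min(samples) < i_min:
        reasons.append("current below lower limit")
    if max(samples) > i_max:
        reasons.append("current above upper limit")
    if statistics.pstdev(samples) > max_std:
        reasons.append("excessive current ripple")
    return (not reasons, reasons)


# A nominal trace passes; a trace spiking above the band would be flagged
# before the part moves to the next station.
ok, why = check_winding_current([4.8, 5.1, 5.0, 4.9, 5.2])
```

Trends in how often each rule fires can also feed the predictive-maintenance use mentioned above.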
Similarly, we should be able to identify key machines or processes and related parameters which will impact the performance of the TM. Some examples of such processes can be enamel removal, resistance welding, auto gauging, rotor balancing and correction, cover tightening, etc. All the data is captured in real-time and compared at each stage to ensure there are no errors due to machine wear and tear, or any human intervention.
A data-capturing mechanism at the part-inspection stage is also needed to ensure we have the right parts before assembly. An example would be assessing the magnetic field strength of the magnets and the material used. Assume that cleaned-up data is available from the parts-inspection stage and from the key machines and processes identified earlier. Let us represent this data as x1, x2, …, xn and y1, y2, …, yn, where x represents data from the winding machine and y represents data from parts inspection. Similarly, we can have data from the other machines/processes 1, 2, …, n. This set of data represents all the key machines and processes of the TM line.
Once we achieve this relation for all the critical machines, we will have two sets of data: a ‘machine data pass bin’ and a ‘machine data fail bin’. Data for ‘n’ product samples can be collected during sample builds and at pre-production stages. If enough data is not available, a set of production samples can also be used to gather the data. The minimum number of samples ‘n’ can be determined scientifically. Once we have enough sample data, we should be able to arrive at a relation between the TM digital twin and the ‘machine data pass bin’, while the ‘machine data fail bin’ can be utilised to improve our thresholds and anomaly detection algorithms. There should be a map between the input to the TM digital twin and this data set.
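One common, scientific way to pick the minimum sample count ‘n’ is a margin-of-error calculation for estimating a CTQ mean. The article does not say which method it has in mind, so the formula below, and the sigma and error values in the example, are assumptions for illustration.

```python
import math

def min_samples(sigma, margin_of_error, z=1.96):
    """Minimum sample size for estimating a mean CTQ value:

        n = ceil((z * sigma / E)^2)

    sigma: assumed standard deviation of the CTQ (e.g. from prototypes);
    margin_of_error: tolerable error E in the same units;
    z: normal quantile (1.96 gives ~95% confidence).
    """
    return math.ceil((z * sigma / margin_of_error) ** 2)


# Illustrative values: sigma = 0.5, tolerable error 0.1 in CTQ units.
n = min_samples(sigma=0.5, margin_of_error=0.1)
# n = ceil((1.96 * 0.5 / 0.1)^2) = ceil(96.04) = 97 samples
```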
The result of the classification in Figure 2 maps the ‘machine data pass bin’ to the digital twin TM input, as used at the EOL machine. This input data is based on the EOL tests and can be logged to create the mapping. Let us call the outcome of this step the ‘EOL input map’. The ‘EOL input map’ can also help highlight the most important critical-to-quality parameters from the list of CTQs selected for building this model.
At this stage, we are ready to use the ‘EOL input map’ and the digital twin of the TM to check the machines and processes at each stage, ensuring robust manufacturing. This baseline ‘EOL input map’ can be refined further using real EOL data and other continual-learning techniques. In the long term, as experience accumulates, the ‘EOL input map’ becomes more accurate and the dependency on EOL testing can be reduced.
At each machine stage, the gathered data is fed to the ‘EOL input map’ and the digital twin of the TM is executed in real time. If there is any discrepancy at this stage, the assembly process stops. So at each stage, say at machine-1, the actual data x1, x2, …, xn is fed to the ‘EOL input map’ and the digital twin of the TM, while the data for the other machines, say 1…n, is taken from the ‘machine data pass bin’ on the assumption that it is correct.
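The per-station check described above can be sketched as follows: the live data from one station replaces that station’s entry in a baseline drawn from the ‘machine data pass bin’, the product digital twin is executed, and the unit proceeds only if the predicted EOL result stays within tolerance. Every name, the toy twin, and the numbers are illustrative assumptions.

```python
def station_check(live_data, station, pass_bin_baseline, twin_predict,
                  eol_target, tolerance):
    """Return True if the unit may proceed past `station`.

    Data for all other stations is taken from the pass-bin baseline,
    assuming those stages are correct, as described in the text.
    """
    inputs = dict(pass_bin_baseline)
    inputs[station] = live_data          # actual data for this station
    predicted = twin_predict(inputs)     # execute the TM digital twin
    return abs(predicted - eol_target) <= tolerance


# Toy twin for illustration: predicted torque scales with two normalised
# station features (values near 1.0 are nominal).
def twin(d):
    return 96.0 * d["winding"] * d["magnet"]

baseline = {"winding": 1.0, "magnet": 1.0}
proceed = station_check(1.01, "winding", baseline, twin,
                        eol_target=96.0, tolerance=2.0)
```

A False result at any station would stop the assembly process before the next stage, as stated above.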
The smart manufacturing approach using digital twins will ensure a robust product manufacturing process and will also account for the wear and tear of the machines recorded at each stage. Weightages of machine or process parameters can also be derived from warranty data analysis, ensuring the learning from warranty data is fed back into the digital twin models.
The key steps in the approach are as follows:
Develop a digital twin of the product under manufacturing based on its key CTQs.
Identify key machines and processes contributing to the product CTQs.
Identify data to be captured at each stage as identified in step 2.
Classify pass and fail machine/process data for ‘n’ samples based on EOL testing results.
Map the data of step 4 to provide input to the digital twin of the product.
At each machine/process, execute the digital twin of the product and apply the pass/fail criteria.
Maintain the digital twin of the machines/process to capture learning.
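As a sketch of the last step, the machine twins can be maintained by periodically re-deriving acceptance thresholds from the accumulating pass bin, so that learning from EOL and warranty results flows back into the models. The 3-sigma band below is an illustrative choice, not a rule from the article.

```python
import statistics

def update_thresholds(pass_bin_values, k=3.0):
    """Recompute an acceptance band for one machine parameter as
    mean +/- k standard deviations of the values recorded for units
    that passed EOL testing. k = 3.0 is an assumed default."""
    mean = statistics.fmean(pass_bin_values)
    std = statistics.pstdev(pass_bin_values)
    return mean - k * std, mean + k * std


# Illustrative pass-bin values for one winding-machine parameter.
low, high = update_thresholds([5.0, 5.1, 4.9, 5.05, 4.95])
```

Re-running this after each production batch keeps the thresholds tracking slow machine wear, the maintenance aspect named in step 7.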
Careful analysis and study are required to capture the necessary data and filter it to make it useful. This can be planned stage-wise, considering one machine/process at a time and covering all critical stages step by step.