
Figure 1: Mercedes 'Magic Body Control': Road Surface Scan analyses the road contours ahead and feeds the results into the active suspension and body control system to provide a smooth ride

Image: Mercedes Benz

Image Processing: How to turn your Machine Vision System into an Integrated Smart System?

Apr 12, 2017

This article highlights how shifting from a hardware-centric to a software-centric design approach can simplify machine vision system structure by consolidating different tasks into a single, powerful embedded system, thereby turning it into an integrated smart system.

Machine vision has long been used in industrial automation systems to improve production quality and throughput by replacing the manual inspection traditionally conducted by humans. In applications ranging from pick-and-place and object tracking to metrology and defect detection, visual data is used to increase the performance of the entire system by providing simple pass/fail information or closing control loops. The use of vision doesn't stop with industrial automation; we've all witnessed the mass incorporation of cameras into our daily lives, in computers, mobile devices, and especially automobiles. Just a few years ago, cameras were introduced in cars to assist the driver during parking, but companies like Mercedes have taken the idea to the next level. The "Magic Body Control" launched in the Mercedes S-Class uses a set of stereo cameras on the windshield to scan the terrain of the road ahead and provide predictive control to the suspension.
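
To make the predictive idea concrete, here is a deliberately simplified Python sketch, not Mercedes' actual algorithm: a camera-derived road height profile ahead of the vehicle is used to schedule a feedforward suspension command before the wheel reaches a bump, rather than reacting after it. The profile, speed, actuator delay, and proportional law are all illustrative assumptions.

    import numpy as np

    # Hypothetical road height profile (metres) sampled every 0.1 m ahead
    # of the car, as a stereo-camera road-surface scan might produce it.
    road_profile = np.zeros(200)
    road_profile[80:100] = 0.03           # a 3 cm bump 8-10 m ahead

    SPEED_MPS = 14.0                      # vehicle speed (~50 km/h)
    SAMPLE_SPACING_M = 0.1                # spacing of profile samples
    ACTUATOR_DELAY_S = 0.05               # assumed suspension actuator lag

    def feedforward_command(profile, distance_travelled_m):
        """Look up the road height the wheel will meet after the actuator
        delay elapses and pre-compensate with a simple proportional law."""
        preview_m = distance_travelled_m + SPEED_MPS * ACTUATOR_DELAY_S
        idx = min(int(preview_m / SAMPLE_SPACING_M), len(profile) - 1)
        # Extend the suspension by the upcoming bump height (illustrative gain).
        return -1.0 * profile[idx]

    # Simulate the car driving over the scanned stretch of road.
    for step in range(0, 150, 10):
        x = step * SAMPLE_SPACING_M
        print(f"x = {x:5.1f} m  command = {feedforward_command(road_profile, x):+.3f} m")

The key point is that the controller acts on what the camera saw metres ahead, not on what the wheel is currently feeling.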

Opening new opportunities

But perhaps the biggest technological advancement in machine vision has been processing power. With the performance of processors doubling every two years and a continued focus on parallel processing technologies such as multicore CPUs, GPUs, and FPGAs, vision system designers can now apply highly sophisticated algorithms to visual data and create more intelligent systems. This opens new opportunities beyond just smarter or more powerful algorithms.

Consider the use case of adding vision to a manufacturing machine. Such machines are usually built as distributed subsystems working in their own silos and sharing information or signals over a mix of Ethernet and other time-critical fieldbuses. Each subsystem has its own processor running vendor-specific software: one package for vision, a different one for motion, and a separate one for the distributed control system (DCS) and process control. This is especially challenging when a small design team is responsible for many components, because each engineer then needs expertise in all of these packages, as well as the skill to integrate them seamlessly, which drives up development time and cost. Moreover, because the subsystems are connected through Ethernet or other hardware or software mechanisms (such as ActiveX or .NET), there are potential problems with the latency and determinism of the control loop and with test throughput. Processing power used to be the bottleneck for these parameters, but with better parallel processors now on the market, it is these integration challenges that must be solved to ensure productivity.

Software-centric approach

What if we shift our thinking away from a hardware-centric view and use a software-centric approach? With programming tools that let a single design environment implement different tasks, designers can reflect the modularity of the system in their software.

This allows designers to simplify the control system structure by consolidating different automation tasks, including visual inspection, motion control, I/O, and HMIs within a single powerful embedded system. This eliminates the challenges of subsystem communication because now all subsystems are running in the same software environment on a single controller.
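
As a minimal sketch of what running everything in one software environment can look like (all names, rates, and gains here are hypothetical), the following Python program runs a vision loop and a motion loop as two threads in one process, handing measurements over an in-memory queue instead of a network link:

    import queue, threading, time

    measurements = queue.Queue()          # in-process channel: no Ethernet hop

    def vision_loop(stop):
        """Stand-in for image acquisition + processing: publish a part offset."""
        offset = 0.5
        while not stop.is_set():
            offset *= 0.8                 # pretend each image shows less error
            measurements.put(offset)
            time.sleep(0.01)              # ~100 Hz 'camera'

    def motion_loop(stop):
        """Stand-in for the motion controller: consume offsets, move the axis."""
        position = 0.0
        while not stop.is_set():
            try:
                error = measurements.get(timeout=0.1)
            except queue.Empty:
                continue
            position += 0.5 * error       # simple proportional correction
            print(f"axis position: {position:.4f}")

    stop = threading.Event()
    threads = [threading.Thread(target=f, args=(stop,)) for f in (vision_loop, motion_loop)]
    for t in threads: t.start()
    time.sleep(0.2); stop.set()
    for t in threads: t.join()

Because both loops live in the same process, the hand-off costs microseconds and never crosses a cable.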

Nevertheless, a key advantage of the hardware-centric architecture described above is its scalability, due mainly to the Ethernet link between systems. Special attention must be paid to communication across that link, however: Ethernet is nondeterministic, and its bandwidth is limited. For most vision-guided motion tasks, where guidance is given only at the beginning of the task, this is acceptable, but in other situations the variation in latency can be a real challenge.
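
That nondeterminism is easy to observe even on a loopback connection. The toy measurement below (not a fieldbus benchmark; the packet size and count are arbitrary) echoes packets over local TCP and prints the spread of round-trip times; on a loaded plant network the gap between best and worst case grows much larger:

    import socket, statistics, threading, time

    def echo_server(sock):
        conn, _ = sock.accept()
        while data := conn.recv(64):      # echo until the client disconnects
            conn.sendall(data)

    server = socket.socket()
    server.bind(("127.0.0.1", 0)); server.listen(1)
    threading.Thread(target=echo_server, args=(server,), daemon=True).start()

    client = socket.socket()
    client.connect(server.getsockname())

    rtts = []
    for _ in range(1000):
        t0 = time.perf_counter()
        client.sendall(b"x" * 64)
        client.recv(64)
        rtts.append((time.perf_counter() - t0) * 1e6)   # microseconds

    # The max/min spread is the jitter a control loop would have to tolerate.
    print(f"min {min(rtts):.0f} us, mean {statistics.mean(rtts):.0f} us, "
          f"max {max(rtts):.0f} us")
    client.close()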

Moving to a centralised processing architecture for this design has many advantages. First, development complexity is reduced because both the vision and the motion system can be developed using the same software, so the designer doesn't need to be familiar with multiple programming languages or environments. Second, the potential performance bottleneck across the Ethernet network is removed, because data now passes between loops within a single application rather than across a physical layer. This is especially valuable when bringing vision directly into the control loop. Here, the vision system continuously captures images of the actuator and the targeted part during the move until the move is complete, and these captured images provide feedback on the success of the move.
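
Here is a minimal sketch of vision directly in the control loop, with the camera and actuator replaced by stand-in functions (the target, gain, tolerance, and noise level are all assumed values): each iteration "grabs a frame", measures the remaining offset, and commands the next incremental move until the error is within tolerance.

    import random

    target = 10.0                          # desired part position (mm)
    actuator = 0.0                         # current actuator position (mm)

    def grab_frame_and_measure(actual_position):
        """Stand-in for image capture + processing: returns the measured
        offset between part and target, with a little sensor noise."""
        return (target - actual_position) + random.gauss(0.0, 0.02)

    GAIN, TOLERANCE = 0.6, 0.05
    for iteration in range(50):
        error = grab_frame_and_measure(actuator)
        if abs(error) < TOLERANCE:         # vision confirms the move succeeded
            print(f"done after {iteration} frames, error {error:+.3f} mm")
            break
        actuator += GAIN * error           # command the next incremental move
        print(f"frame {iteration}: error {error:+.3f} mm -> actuator {actuator:.3f} mm")

In a distributed design, every one of those iterations would pay the network latency measured above; in a centralised one, the loop rate is limited mainly by the camera and the processing itself.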

Going beyond vision

If such a system is to be designed, it needs processing elements that can execute time-critical control algorithms, such as motion control, with utmost precision and determinism, while also handling non-time-critical supervisory control, data communication, and the HMI. A fitting architecture for an application like this is a heterogeneous processing architecture that combines a processor and an FPGA with the sensors and actuators. There have been many industry investments in this type of architecture, including the Xilinx Zynq All-Programmable SoCs (which combine an ARM processor with Xilinx 7-Series FPGA fabric), the multi-billion-dollar acquisition of Altera by Intel, and the numerous vision systems on the market today that use this architecture. For vision systems specifically, an FPGA is especially beneficial because of its inherent parallelism: algorithms can be split up into pieces that run completely independently. But the benefits go beyond vision; the architecture also serves motion control systems and I/O well. Processors and FPGAs can perform advanced processing, computation, and decision making, and designers can connect to almost any sensor or actuator through analog and digital I/O, industrial protocols, custom protocols, and so on. The architecture also addresses requirements such as timing and synchronisation.
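
To illustrate what splitting an algorithm into completely independent pieces means, the sketch below uses a Python thread pool as a software stand-in for what, on an FPGA, would be truly parallel logic; the frame, threshold, and strip count are arbitrary. Each image strip is thresholded with no dependency on any other strip:

    from concurrent.futures import ThreadPoolExecutor
    import numpy as np

    image = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # fake frame
    THRESHOLD = 128

    def threshold_strip(strip):
        """Binarise one horizontal strip; independent of every other strip."""
        return (strip > THRESHOLD).astype(np.uint8) * 255

    strips = np.array_split(image, 8)      # eight independent work items
    with ThreadPoolExecutor() as pool:     # on an FPGA these would be parallel logic
        result = np.vstack(list(pool.map(threshold_strip, strips)))

    print(result.shape, result.dtype)      # (480, 640) uint8 binary image

Because no strip depends on another, the work scales with however many parallel units, threads here or logic blocks on an FPGA, are available.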

Although this architecture offers a lot of flexibility, developing such a system conventionally demands expertise in diverse fields of programming, particularly writing embedded software for a microprocessor on one side and writing FPGA code in a hardware description language on the other. On top of that, even with the coding expertise, implementing image processing algorithms from scratch in C or VHDL can consume a great deal of time. This introduces significant risk and can make the architecture impractical or even impossible to use. However, with integrated graphical software such as NI LabVIEW and LabVIEW FPGA, along with pre-built libraries for image processing, motion control, signal processing, data communication over protocols, and so on, designers can increase productivity and reduce risk, because the low-level complexity is abstracted away and all the technology they need is integrated into a single, unified development environment.
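
LabVIEW and its vision libraries are graphical, so as a text-based analogy consider OpenCV in Python, a comparable pre-built library; the synthetic frame and the defect-counting task below are made up for illustration. The point is that an inspection step which would take pages of hand-written C or VHDL reduces to a couple of library calls:

    import cv2                             # pip install opencv-python
    import numpy as np

    # Synthetic test frame: dark background with three bright 'defects'.
    frame = np.zeros((200, 200), dtype=np.uint8)
    for x, y in [(40, 40), (120, 60), (80, 150)]:
        cv2.circle(frame, (x, y), 6, 255, -1)

    # The whole inspection is two library calls: threshold, then contours.
    _, binary = cv2.threshold(frame, 128, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    print(f"defects found: {len(contours)}")   # expected: 3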

Putting theory into practice

Now, it's one thing to discuss theory; it's another to see that theory put into practice. Master Machinery is a Taiwanese company that builds semiconductor processing machines like the one seen in Figure 5. This machine uses a combination of machine vision, motion control, and industrial I/O to take chips off a silicon wafer and package them. It is a perfect example of a machine that could use a distributed architecture like the one in Figure 2, with each subsystem developed separately and then integrated through a network. Typical machines of this kind yield approximately 2,000 parts per hour. Master Machinery, however, took a different approach: they designed the machine with a centralised, software-centric architecture, incorporating the main machine controller, machine vision and motion systems, I/O, and HMI into a single controller, all programmed with LabVIEW. In addition to the cost savings from not needing individual subsystems, they saw the performance benefit of this approach: their machine yields approximately 20,000 parts per hour, 10X that of the competition.

A key component of Master Machinery's success was the ability to combine numerous subsystems in a single software stack, specifically the machine vision and motion control systems. This unified approach allowed Master Machinery to simplify not only the way they designed the machine vision system but also how they designed the entire machine.

With the advent of complex vision algorithms and connected systems, machine vision systems are no longer just for visual inspection. They have evolved into sophisticated, powerful processing systems capable not only of running complex image processing algorithms but also of controlling much of the rest of the system, such as motion control and the sensor inputs and actuator outputs. This is achieved by bringing all of this functionality into a single software environment with a common, intuitive programming approach, so that system designers can focus on innovation rather than worrying about implementation.

Image Gallery

  • Figure 2: Systems designed as a network of intelligent subsystems that form a collaborative distributed control system allow for modular design, but taking this hardware-centric approach can cause performance bottlenecks

  • Figure 3: A software-centric design approach allows designers to simplify their control system structure by consolidating different automation tasks, including visual inspection, motion control, I/O, and HMIs within a single powerful embedded system

  • Figure 4: A heterogeneous architecture combining a processor with an FPGA and I/O is an ideal solution for not only designing a high-performance vision system but also integrating motion control, HMIs, and I/O

  • Figure 5: Using a centralised, software-centric approach, Master Machinery incorporated their main machine controller, machine vision and motion system, I/O, and HMI all into a single controller, yielding 10X the performance over their competition
