Lawrence Livermore National Laboratory (LLNL) has already shown off several ways to improve metal printing technology, using X-rays and powder bed ambient gas analysis. Now, the lab is taking a new approach based on machine learning.
For the uninitiated, machine learning uses statistical techniques to let computers improve at a task with experience, “learning” over time and developing better techniques without being explicitly reprogrammed. The field has grown rapidly in recent years, with major tech companies applying it everywhere. Much like Google or Amazon, LLNL have also found uses for it in improving their own production processes.
The lab’s findings are aiding the development of “on-the-fly assessments of laser track welds” using convolutional neural networks (CNNs). LLNL are measuring and collecting data with a video and image analysis algorithm. To develop the neural network, the team used 2,000 video clips of melted laser tracks, varying the laser power and speed in each. They then scanned the part surfaces and generated 3D height maps, using that information to train the algorithms to analyse sections of video frames. The resulting software can assess as little as 10 milliseconds of footage and determine whether the part will be defective.
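As an illustration only, the following is a minimal sketch, in PyTorch, of how a convolutional network of this kind might map a short stack of melt-track video frames to labels derived from height maps. The layer sizes, frame count and output quantities are assumptions made for the example, not LLNL’s published architecture.

```python
# Minimal sketch only: a small CNN that maps a short clip of melt-track
# video frames to height-map-derived labels. Layer sizes, frame counts and
# the use of PyTorch are assumptions, not LLNL's actual implementation.
import torch
import torch.nn as nn

class TrackQualityCNN(nn.Module):
    def __init__(self, n_frames=8, n_outputs=3):
        # n_frames: frames covering roughly 10 ms of video, stacked as channels
        # n_outputs: e.g. mean track width, width standard deviation, break probability
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_frames, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_outputs))

    def forward(self, clips):
        # clips: (batch, n_frames, height, width) greyscale frame stacks
        return self.head(self.features(clips))

# Training sketch: the labels come from 3D height maps of the scanned surfaces.
model = TrackQualityCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(clips, labels):
    optimizer.zero_grad()
    loss = loss_fn(model(clips), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```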
Machine Learning Quality Control
The process has massive benefits, particularly for objects that take days or weeks to build. Imagine waiting several days only to end up with a defective print. The machine learning technology could detect the defect during the build, without the need for post-production analysis.
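To make that idea concrete, here is a hypothetical monitoring loop built around a model like the one sketched above. The camera and printer interfaces (next_clip, abort_build) are stand-ins for illustration, not a real machine API.

```python
# Hypothetical in-process check: score each short clip as it arrives and stop
# a long build early if a defective track is predicted. next_clip and
# abort_build are illustrative stand-ins, not a real printer interface.
import torch

DEFECT_THRESHOLD = 0.5  # assumed cut-off on the predicted break probability

def monitor_build(model, next_clip, abort_build):
    model.eval()
    with torch.no_grad():
        for clip_index, clip in enumerate(next_clip()):
            width, width_std, break_prob = model(clip.unsqueeze(0))[0].tolist()
            if break_prob > DEFECT_THRESHOLD:
                abort_build(clip_index, reason="predicted track break")
                return False  # build stopped early instead of finishing defective
    return True  # all clips passed the check
```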
“This is a revolutionary way to look at the data that you can label video by video, or better yet, frame by frame,” said principal investigator and LLNL researcher Brian Giera. “The advantage is that you can collect video while you’re printing something and ultimately make conclusions as you’re printing it.”
Bodi Yuan, lead author of the paper, enabled the algorithm to automatically analyse the height maps of each build. LLNL then applied the same model to predict the width of the build track, whether the track broke during production, and the standard deviation of that width. As a result, the algorithm gave the researchers insight into the process as it was happening: they could take footage of in-progress builds and accurately judge whether the prints were of acceptable quality, with an estimated accuracy of 93 per cent.
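A figure like that 93 per cent would typically come from a hold-out evaluation along the lines of the sketch below, where the model’s call for each clip is compared against a label derived from the post-build height map. The dataset handling and output ordering here are assumptions for illustration only.

```python
# Sketch of a hold-out accuracy check: compare the model's defect call for
# each clip with the label derived from the scanned height map. Assumes the
# third model output is a break/defect probability, as in the sketch above.
import torch

def evaluate_accuracy(model, clips, labels, threshold=0.5):
    # clips: (N, n_frames, H, W) tensor of held-out video snippets
    # labels: (N,) tensor of 0/1 ground-truth defect flags
    model.eval()
    with torch.no_grad():
        break_prob = model(clips)[:, 2]
        predicted_defect = (break_prob > threshold).float()
        return (predicted_defect == labels).float().mean().item()
```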
Future Research
The LLNL researchers have published their findings here.
The research not only improves LLNL’s own systems, but is also cross-applicable: the authors of the study believe the algorithm could easily work with other systems, and other researchers could follow the same steps, gathering and analysing footage from their own machines. However, the system is not perfect yet. While its accuracy in defect detection is high, it still struggles with voids: pores hidden inside a part do not show up in height map scans, so methods such as ex situ X-ray radiography are still needed to find them.
“Right now, any type of detection is considered a huge win. If we can fix it on the fly, that is the greater end goal,” Giera said. “Given the volumes of data we’re collecting that machine learning algorithms are designed to handle, machine learning is going to play a central role in creating parts right the first time.”
The researchers’ next step is to add more sensing modalities to the system, going beyond video and image data to make the machine learning analysis far more comprehensive.
Featured image courtesy of Jeannette Yusko and Ryan Chen/LLNL.