Adaptive Algorithms – The Double-Edged Sword in Manufacturing

By Nulltek

With the intense focus the world currently has on data, there has been a push towards collating this information to improve the runtime efficiency of historically slow processes. The flow-on effect of this trend is companies pushing for optimisation and efficiency in every facet of their algorithmic programming.

An algorithm, in the programming sense, refers to the method a piece of software uses to arrive at a solution. This post focuses more specifically on finding an input value that satisfies an output condition within a threshold window, e.g. tuning DAC counts to achieve a measured voltage between 1 V and 1.1 V.

Traditionally in the manufacturing space, tuning a value would be done with a static algorithm: pick a start point at one end of a wide range, and increment in linear (or non-linear) steps until you either achieve the result or exceed the operational range. This can be extremely slow, however, and is heavily dependent on step resolution and start-point selection.
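To make that concrete, here is a minimal sketch of the static sweep in Python. The `set_dac` and `measure_voltage` callables, the 12-bit range, and the step size are hypothetical stand-ins for whatever the real test fixture provides; the pass window is the 1-1.1 V example from above.

```python
# A minimal sketch of the traditional static sweep, under assumed fixture
# callables set_dac() and measure_voltage().

DAC_MIN, DAC_MAX = 0, 4095   # assumed operational range (12-bit DAC)
STEP = 4                     # fixed step resolution
V_LOW, V_HIGH = 1.0, 1.1     # pass window from the example above


def static_tune(set_dac, measure_voltage):
    """Linear sweep from a fixed start point: simple, but blind and slow."""
    for counts in range(DAC_MIN, DAC_MAX + 1, STEP):
        set_dac(counts)
        if V_LOW <= measure_voltage() <= V_HIGH:
            return counts    # output condition satisfied
    return None              # exceeded the operational range, i.e. a failure
```

Every step that misses the window is a wasted measurement, which is where the cost of a poor start point and coarse step resolution shows up.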

Nowadays, with the availability of data and the compute power to parse it, these same problems can be approached far more efficiently. Unlike many other fields, manufacturing has surprisingly clear-cut trends in its datasets: historical data readily predicts a median value to use as a start point, and a spread from which to calculate the search space.
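As a rough sketch of that idea, assuming `history` is simply a list of DAC counts that passed on previous units, the prediction step might look like the following; the three-sigma multiplier is an illustrative choice, not a prescription.

```python
import statistics


def derive_search_space(history, sigma_mult=3):
    """Predict (start, low, high) DAC bounds from historical passing values."""
    start = int(statistics.median(history))    # median as the start point
    spread = statistics.pstdev(history)        # spread sizes the search window
    half_window = max(1, round(sigma_mult * spread))
    return start, start - half_window, start + half_window
```

A search that begins at the predicted start point only has to cover the window the spread suggests, rather than the whole operational range.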

The real trick to improving performance in electronics manufacturing is to filter the data and tune the algorithm for the specific unit being tested. This usually means looking back at the historical data that most closely relates to the unit in question, and the further back you look, the more hysteresis you introduce into the trends. Keeping the window focused matters because electronic components see batch-to-batch variation, so long-term trends do not always produce short-term optimisation. Using these methods, you end up with algorithms that constantly tune themselves for quicker execution times and are tailored to the individual product being tested.
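Continuing the sketch, and assuming hypothetical history records carrying `variant` and `dac_counts` fields, the filtering and the outward search from the predicted value might look like this (reusing the pass-window constants from the static example):

```python
def recent_related_history(records, variant, lookback=200):
    """Keep only the newest passing results for the same product variant."""
    related = [r["dac_counts"] for r in records if r["variant"] == variant]
    return related[-lookback:]   # shorter window, less hysteresis in the trend


def adaptive_tune(set_dac, measure_voltage, start, low, high):
    """Try the predicted value first, then step outwards within the window."""
    max_offset = max(high - start, start - low)
    for offset in range(max_offset + 1):
        for counts in sorted({start - offset, start + offset}):
            if low <= counts <= high:
                set_dac(counts)
                if V_LOW <= measure_voltage() <= V_HIGH:
                    return counts
    return None                  # nothing inside the predicted window passed
```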

Now the downside to these methods is that you can't rely on them working 100% of the time. When a reel of components is swapped over, the first units to be tested have no historical data indicating the shift in performance. Your search space may not be wide enough, and unless your algorithm is intelligent enough to fall back on a wider search space, this will cause a production failure. In addition, the more complex you make these routines, the greater the chance of hitting oscillation points or the algorithm becoming unstable.
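One hedged way to express that fall-back, reusing the hypothetical helpers sketched earlier, is to wrap the adaptive search so that a miss degrades to the wide static sweep instead of a hard failure:

```python
def tune_with_fallback(set_dac, measure_voltage, records, variant):
    """Adaptive search first, wide static sweep as the safety net."""
    history = recent_related_history(records, variant)
    if history:
        start, low, high = derive_search_space(history)
        result = adaptive_tune(set_dac, measure_voltage, start, low, high)
        if result is not None:
            return result
    # No relevant history, or the prediction missed (e.g. a fresh reel):
    # fall back to the slow-but-safe sweep of the full operational range.
    return static_tune(set_dac, measure_voltage)
```

The trade-off is that the worst case now costs the wasted adaptive attempt plus the full sweep, which feeds straight into the test-time problem below.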

Another major issue arises when deploying these methods to a contract manufacturer. Contract manufacturers price their work on test times (as well as a range of other things, such as operator complexity), and they treat those test times as constant. In reality, especially with adaptive algorithms, your test durations will vary wildly depending on the length of the historical window and how consistent the unit is with that history. It's incredibly hard to explain to non-technical people why a test may take 5 minutes on one unit and 10 on another, even though they are identical models of product.

Obviously, I'm in favour of moving towards adaptive methods, and manufacturing is an extremely exciting space for making algorithmic improvements. Stay tuned for my opinions on deep neural networks (RNNs & CNNs) in manufacturing.