Finished chips coming in from the foundry are subject to a battery of tests. For those destined for critical systems in cars, those tests are particularly extensive and can add 5 to 10 percent to the cost of a chip. But do you really need to do every single test?
Engineers at NXP have developed a machine-learning algorithm that learns the patterns of test results and figures out which subset of tests is really needed and which ones they could safely do without. The NXP engineers described the approach at the IEEE International Test Conference in San Diego last week.
NXP makes a wide variety of chips with complex circuitry and advanced chip-making technology, including inverters for EV motors, audio chips for consumer electronics, and key-fob transponders to secure your car. These chips are tested with different signals at different voltages and at different temperatures in a test process called continue-on-fail. In that process, chips are tested in groups and are all subjected to the complete battery, even if some parts fail some of the tests along the way.
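As a rough illustration of what continue-on-fail yields (the function and data shapes below are assumptions made for the example, not NXP's actual test harness), each part comes out of the battery with a complete list of every test it failed, rather than a log that stops at the first failure:

```python
def run_battery(part, tests):
    """Continue-on-fail: run every test in the battery and record all
    failures instead of aborting at the first one. `tests` maps a test
    name to a function returning True on pass. (Illustrative only.)"""
    failures = []
    for name, check in tests.items():
        if not check(part):
            failures.append(name)  # log the failure and keep going
    return failures
```

It is these complete per-part failure lists that the algorithm then mines for patterns.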
“We have to ensure stringent quality requirements in the field, so we have to do a lot of testing,” says Mehul Shroff, an NXP Fellow who led the research. But with much of the actual manufacturing and packaging of chips outsourced to other companies, testing is one of the few knobs most chip companies can turn to control costs. “What we were trying to do here is come up with a way to reduce test cost in a way that was statistically rigorous and gave us good results without compromising field quality.”
A Test Recommender System
Shroff says the problem has certain similarities to the machine learning-based recommender systems used in e-commerce. “We took the concept from the retail world, where a data analyst can look at receipts and see what items people are buying together,” he says. “Instead of a transaction receipt, we have a unique part identifier, and instead of the items that a consumer would purchase, we have a list of failing tests.”
The NXP algorithm then discovered which tests fail together. Of course, what’s at stake in whether a buyer of bread will want to buy butter is quite different from whether a test of an automotive part at a particular temperature means other tests don’t need to be done. “We need to have 100 percent or near 100 percent certainty,” Shroff says. “We operate in a different space with respect to statistical rigor compared to the retail world, but it’s borrowing the same concept.”
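To make the analogy concrete, here is a minimal Python sketch of that kind of co-occurrence analysis. The data, the names, and the confidence threshold are illustrative assumptions, not NXP's production algorithm: a test is flagged as a removal candidate only if, every time it fails, some other test in the battery also fails.

```python
from collections import defaultdict

# Illustrative continue-on-fail logs: each unique part identifier maps to
# the set of tests that part failed. (Made-up data, not NXP's.)
fail_logs = {
    "part_001": {"test_A", "test_B", "test_C"},
    "part_002": {"test_A", "test_B"},
    "part_003": {"test_B", "test_C"},
    "part_004": {"test_A", "test_B", "test_C"},
}

def removal_candidates(fail_logs, min_confidence=1.0):
    """Flag test y as a removal candidate if some other test x fails
    whenever y fails, i.e. P(x fails | y fails) >= min_confidence.
    Keeping x then catches every part that y would have caught."""
    fail_count = defaultdict(int)   # how often each test fails
    pair_count = defaultdict(int)   # how often a pair (x, y) fails together
    for failed in fail_logs.values():
        for y in failed:
            fail_count[y] += 1
            for x in failed:
                if x != y:
                    pair_count[(x, y)] += 1

    candidates = defaultdict(set)
    for (x, y), together in pair_count.items():
        if together / fail_count[y] >= min_confidence:
            candidates[y].add(x)    # y's failures are covered by x
    return dict(candidates)

# In this toy data, test_A and test_C only ever fail when test_B also
# fails, so both are flagged as candidates covered by test_B.
print(removal_candidates(fail_logs))
```

Two tests that always fail together would each flag the other, so a real deployment must keep one of each such pair; and, as Shroff notes below, any candidate still needs an engineering review before it is actually dropped.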
As rigorous as the results are, Shroff says they shouldn’t be relied upon on their own. You have to “make sure it makes sense from an engineering perspective and that you can understand it in technical terms,” he says. “Only then, remove the test.”
Shroff and his colleagues analyzed data obtained from testing seven microcontrollers and applications processors built using advanced chipmaking processes. Depending on which chip was involved, they were subject to between 41 and 164 tests, and the algorithm was able to recommend removing 42 to 74 percent of those tests. Extending the analysis to data from other types of chips led to an even wider range of opportunities to trim testing.
The algorithm is a pilot project for now, and the NXP team is looking to expand it to a broader set of parts, reduce the computational overhead, and make it easier to use.