How do you advance Machine Learning using Machine Learning? – The Ultimate Bootstrap!
Well, the title is catchy and is meant to attract you, the reader! We are not going to solve the problem of creating the most awesome Neural Compute Engine by using the coolest Machine Learning algorithm on the most complex semiconductor wafer data set produced by the most advanced photolithography tool, in the space of this article (whose maximum length has been mandated to stay within 1000 words).
However, we are going to speculate, to imagine boldly, if you will, about the realm of possibilities that will open up if every capable and curious semiconductor/electronics designer starts thinking of data as a powerful ingredient with which they can make a beautiful soup of amazing products.
What does the problem landscape look like?
There could be many Data Science/Machine Learning problem statements for the semiconductor industry. A few that come to mind are as follows:
• Function learning problem: Given a set of inputs and outputs (binary states), learn the optimum/smallest circuit that can produce this pattern
• Prediction problem: Given the output of a (limited) set of experiments, involving various wafer process/design parameters, predict the precise output of the process/device/circuit/system for any arbitrary combination of the governing parameters
• Classification problem: Given a set of test results and previous training data, predict/classify a device/circuit/system as functional or faulty
• Classification with estimation problem: Same as above, but when the test results are noisy and subject to measurement errors – need estimation of error and updating the confidence of the classification probability accordingly
• Recommendation problem: Given a set of previous knowledge/preferences of designers (i.e. what worked and what did not), recommend a new combination of design choices which might be aligned with what the designer ‘likes’, i.e. where the probability of success could be highest
• Outlier detection problem: Pick the ‘odd one out’ by looking at a high-dimensional combination of test data – extremely useful for predicting the potential field failure of an IC whose hidden weaknesses standard tests cannot catch
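To make the last item concrete, here is a minimal sketch of outlier detection on parametric test data. Everything here is hypothetical: the data is synthetic (drawn from a normal distribution standing in for, say, leakage currents and threshold voltages of passing parts), and an Isolation Forest from scikit-learn is just one of several algorithms one could use for this job.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic parametric test results for 500 "good" ICs across 8
# hypothetical measurements (leakage, Vth, ring-oscillator frequency, ...)
good = rng.normal(loc=0.0, scale=1.0, size=(500, 8))

# A handful of "maverick" parts: no single parameter is wildly out of
# spec, but the joint combination across dimensions is unusual
mavericks = rng.normal(loc=2.5, scale=1.0, size=(5, 8))

X = np.vstack([good, mavericks])

# contamination is the assumed fraction of outliers in the population
clf = IsolationForest(contamination=0.01, random_state=0)
labels = clf.fit_predict(X)  # +1 = inlier, -1 = flagged outlier

print("indices flagged as outliers:", np.where(labels == -1)[0])
```

In a real fab setting the feature matrix would come from wafer-sort or final-test logs rather than a random number generator, and the contamination rate would be tuned against historical field-return data.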
So, what’s stopping us from writing cool Python code in TensorFlow and solving all of these problems?
An oversimplified answer is that these problems are computationally ‘hard’ (often NP-hard), high-dimensional, and difficult to visualize for making common-sense approximations, and they often lack training data sets of the size required to generate high-accuracy, interpretable, easy-to-apply Machine Learning models (because the experiments are expensive or the measurements infeasible).
With this post, we begin our "AI in Semiconductors" series, the first episode of which is being shared by Dr. Tirthajyoti Sarkar. We cordially invite all of you in the high-tech/electronics domain to share your thoughts in the Comments section and write with us. Stay tuned for the subsequent episodes in this series. Thank you! :-)