ADCS - Automatic Defect Classification System for SSMC


Table of Contents

  1. Table of Contents
  2. ADCS Overview
  3. System Design
  4. Full Program Settings
  5. Abbreviations Guide

ADCS Overview

Automatic Defect Classification System (ADCS) for FS, BS & EN Model Deployment

By: Tam Zher Min
Email: [email protected]

Summary

This is an independently architected sequential system (similar to AXI recipes), threaded alongside a Tkinter GUI. It can automatically classify and sort wafer image scans locally for SSMC and can also train new machine learning models. Total ~2000 LOC (lines of code). Run the ADCS.vbs file to start.

Operator's Guide

Slides for operators can be found in the /ADCS/notes/guides folder or through this link. This guide is for operators who need to check the wafer lots with defects and sort the wafer scans after they are classified by the ADCS.

Demo

Figma Design Mockup

ADCS Demo


System Design

System Logic

This is a full-fledged system that I planned and wrote every single line of myself during my 4-month internship at SSMC (AUG 23 '21 to JAN 07 '22). The system deploys 2 CNN models locally (up to 3 needed) and performs inference on all images found while continuously polling the folder that all the wafer scans are transferred to.

The system also needs to parse a quirky file format to extract the relevant information, and it must edit that same file in place because SSMC's software only understands this format. The parsing is a little hacky and not 100% fool-proof, but since the file does not have a fixed format, there is no easy way around it.
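
As a rough illustration, below is a minimal sketch of this parse-and-edit step. The record and column names (DefectRecordSpec, DEFECTID, CLASSNUMBER) follow the common KLARF convention; the exact layout SSMC uses may differ, so treat every identifier here as an assumption rather than the actual kla_reader.py code.

# Hypothetical sketch of editing CLASSNUMBER values in a KLA-style file.
# Assumes a KLARF-like layout: a "DefectRecordSpec" line listing column
# names, followed by a "DefectList" block with one whitespace-separated
# row per defect, terminated by ";". The real files may deviate.

def rewrite_classnumbers(path, predictions):
    """predictions: dict mapping DEFECTID (str) -> new CLASSNUMBER (int)."""
    with open(path) as f:
        lines = f.readlines()

    cols, in_list = [], False
    for i, line in enumerate(lines):
        tokens = line.replace(";", " ;").split()
        if not tokens:
            continue
        if tokens[0] == "DefectRecordSpec":
            # e.g. "DefectRecordSpec 17 DEFECTID ... CLASSNUMBER ... ;"
            cols = [t for t in tokens[2:] if t != ";"]
        elif tokens[0] == "DefectList":
            in_list = True
        elif in_list:
            row = [t for t in tokens if t != ";"]
            defect_id = row[cols.index("DEFECTID")]
            if defect_id in predictions:
                row[cols.index("CLASSNUMBER")] = str(predictions[defect_id])
                lines[i] = " ".join(row) + (" ;\n" if ";" in tokens else "\n")
            if ";" in tokens:          # ";" marks the last row of the defect list
                in_list = False

    with open(path, "w") as f:
        f.writelines(lines)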

The Tkinter GUI was very challenging to code because UI systems are usually very finicky. However, I managed to make it work: users can change settings from the GUI, log messages reach the GUI through a queue, and threading runs the production or training mode separately from the main Tkinter GUI thread.
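
A minimal sketch of this queue-plus-thread pattern is below. The widget names, messages and poll interval are illustrative assumptions, not the actual ADCS code.

# Minimal sketch: run a worker thread next to a Tkinter GUI and stream
# its log messages into a Text widget via a thread-safe queue.
import queue
import threading
import time
import tkinter as tk

log_queue = queue.Queue()

def worker():
    """Stand-in for the production/training loop."""
    for i in range(5):
        log_queue.put(f"processed lot {i}")
        time.sleep(1)

root = tk.Tk()
text = tk.Text(root, height=10, width=50)
text.pack()

def poll_queue():
    # Drain the queue on the GUI thread; Tkinter widgets are not
    # thread-safe, so the worker never touches them directly.
    while not log_queue.empty():
        text.insert(tk.END, log_queue.get_nowait() + "\n")
    root.after(100, poll_queue)   # re-check every 100 ms

threading.Thread(target=worker, daemon=True).start()
poll_queue()
root.mainloop()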

All system design logic, flow, structure and considerations were my own. That is good in that I managed to produce something of this scale alone, and bad in that I am not sure whether these are best practices or whether I missed any glaring problems; but I did what I could.

Training Mode

Take note: for training mode, the "Balanced no. of Samples per Class" value is important. Derive this number by looking at the number of images you have for each class. It should be at least the size of the largest minority class but lower than the size of the majority class.

For example, if the chipping class has only 30 images while the stain, scratch and whitedot classes have 100 images each and the AOK class has 1000 images, then you should pick a value between a minimum of 100 and a maximum of 1000; 300 might be a good number in this case. You can refer to the table below to get a feel for suitable values.

Hence, the right value depends heavily on the number of samples you have for training. As more images get sorted into the trainval folder for future retraining, this value should increase over time, otherwise you are not fully utilising the images to train the models.

eg.   aok   chipping  scratch  stain  whitedot  # RANGE #  # INPUT #
1.    400   10        20       40     20        40-400     100
2.    1000  30        100      150    200       200-1000   300
3.    800   100       200      150    75        200-800    400
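
The rule of thumb above can be written down directly. The helper below is a hypothetical illustration (not part of the ADCS code) that computes the valid range from per-class image counts.

# Hypothetical helper: derive a sensible range for the balanced
# no. of samples per class from per-class image counts.
def balanced_n_range(class_counts):
    """class_counts: dict of class name -> number of images."""
    counts = sorted(class_counts.values())
    majority = counts[-1]              # largest class (usually aok)
    largest_minority = counts[-2]      # largest of the remaining classes
    return largest_minority, majority

# Row 2 of the table above:
lo, hi = balanced_n_range(
    {"aok": 1000, "chipping": 30, "scratch": 100, "stain": 150, "whitedot": 200}
)
print(lo, hi)   # 200 1000 -> pick something in between, e.g. 300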

Production System Flow

  1. AXI scans wafers and generates FBE images
  2. KLA files and images fed into ADC drive's "new" directory (dir)
  3. ADCS continuously polls "new" dir for KLA files
  4. If KLA files found, start model inference; else, poll again after some wait time (see the sketch after this list)
  5. Model Inference
    1. Reads oldest KLA file and stores relevant information into "wafer" data structures
    2. Checks if filenames referenced in KLA file can be found in the "new" dir
    3. If all found or after timeout, feed FS/BS/EN images into their respective models
    4. FBE models classify images and modify the KLA file's CLASSNUMBERs to the predictions
    5. Results will also be saved to CSV files (openable in Excel) for future reference
    6. Move KLA file and images to ADC drive's "old" directory and also copy them to K drive
    7. Predicted files in "unsorted" folder require manual sorting for future retraining
  6. Repeat
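
For illustration, a minimal sketch of this polling loop follows. The folder path, the .kla file extension, and the process_oldest_kla() helper are assumptions, not the actual ADCS code.

# Minimal sketch of the production polling loop. Paths, pause values
# and helpers are illustrative assumptions.
import glob
import os
import time

ADC_DRIVE_NEW = r"D:\data\new"     # assumed location of the "new" dir
PAUSE_IF_NO_KLA = 30               # seconds (matches the settings below)
PAUSE_IF_KLA = 5

def process_oldest_kla(kla_paths):
    """Placeholder for model inference on the oldest KLA file."""
    oldest = min(kla_paths, key=os.path.getmtime)
    print(f"running inference for {oldest}")

while True:
    kla_files = glob.glob(os.path.join(ADC_DRIVE_NEW, "*.kla"))
    if kla_files:
        process_oldest_kla(kla_files)
        time.sleep(PAUSE_IF_KLA)    # short pause: more lots may be queued
    else:
        time.sleep(PAUSE_IF_NO_KLA) # long pause: nothing to do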

Folder Structure (Critical Files Only)

Do follow this folder structure to ensure reproducibility

[K drive]                   // modified KLA file and images copied here after inference
[ADC drive]                 // houses all wafer data and ADCS application code
│
├── /data                   // stores all KLA files and images from AXI
│   ├── /new                // unpredicted lots
│   └── /old                // predicted lots for backup and retraining
│       ├── /backside       // test/trainval/unsorted folders will have folders for all 5 classes
│       │    ├── /test      // manually sorted images for model testing to simulate new images
│       │    ├── /trainval  // manually sorted images for model training and validation
│       │    └── /unsorted  // predicted images to be sorted into /trainval for future retraining
│       │        ├── /aok
│       │        ├── /chipping
│       │        ├── /scratch
│       │        ├── /stain
│       │        └── /whitedot
│       ├── /edgenormal     // test/trainval/unsorted folders will have folders for both classes
│       │    ├── /test
│       │    ├── /trainval
│       │    └── /unsorted
│       │        ├── /aok
│       │        └── /chipping
│       ├── /frontside      // any frontside scans found will be backed up here
│       └── /unclassified   // all ignored defect codes, eg. edgetop (176) and wafer maps (172)
│
└── /ADCS                   // the Automatic Defect Classification System
    ├── /assets             // miscellaneous files
    │   ├── icon.ico        // wafer icon found online
    │   ├── requirements.txt // necessary python libraries and versions
    │   └── run.bat         // batch file that runs main.py using specified python.exe file
    ├── /models             // trained FBE .h5 tensorflow models
    │   ├── /backside
    │   ├── /edgenormal
    │   └── /frontside
    ├── /results            // FBE predictions in CSV for production and training modes
    │   ├── /production
    │   │   ├── /backside
    │   │   ├── /edgenormal
    │   │   └── /frontside
    │   └── /training
    │       ├── /backside
    │       ├── /edgenormal
    │       └── /frontside
    ├── /src                // helper modules for ADCS in OOP style
    │   ├── adcs_modes.py   // script file with the 2 modes chosen in the GUI
    │   ├── be_trainer.py   // model training code for backside and edgenormal models
    │   ├── kla_reader.py   // code to parse and edit KLA files
    │   └── predictor.py    // model prediction code generic for FBE models
    │
    ├── *ADCS.vbs           // starts the ADCS app
    ├── debug.log           // log file of the latest run of main.py for debugging
    ├── main.py             // python script of the ADCS GUI to START/STOP
    ├── README.md           // this user guide text file you're reading; open in notepad
    └── settings.yaml       // config file for users to easily change settings and modes

Full Program Settings

Below are the descriptions for all of the settings found in the settings.yaml file. They allow users to change advanced settings outside of the GUI, such as the delay times and whether to turn off predictions for front/back/edge, etc.

The descriptions below help users understand what each setting does in a readable manner because the actual settings.yaml file is automatically generated in alphabetical order.

Note that there is technically no need to change anything in the settings.yaml file. Also note that all settings are case-sensitive. You can read more about the YAML syntax here.
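
For background, a minimal sketch of how such a file might be loaded and regenerated with PyYAML is below; the default values and function names are illustrative assumptions. yaml.safe_dump sorts keys alphabetically by default, which would explain why the generated file is in alphabetical order.

# Hypothetical sketch of loading and regenerating settings.yaml with
# PyYAML; the defaults shown are examples, not the full settings list.
import yaml

DEFAULTS = {"adcs_mode": "PRODUCTION", "pause_if_no_kla": 30, "pause_if_kla": 5}

def load_settings(path="settings.yaml"):
    try:
        with open(path) as f:
            settings = yaml.safe_load(f) or {}
    except FileNotFoundError:
        settings = {}
    return {**DEFAULTS, **settings}   # user values override defaults

def save_settings(settings, path="settings.yaml"):
    with open(path, "w") as f:
        yaml.safe_dump(settings, f)   # keys are written alphabetically

save_settings(load_settings())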

Understanding the Descriptions

setting_name: [option A / option B] (default=x)
    # description

The setting's name will be before the colon followed by the available options in square brackets and the recommended default values in round brackets. The next indented line will be a short description of the setting. However, in the actual settings.yaml file, you would just write:

setting_name: setting_value

All Available Settings

ADCS Mode

adcs_mode: [PRODUCTION / TRAINING] (default=PRODUCTION)
    # either production (classification) or training mode

Folder Locations

adc_drive_new:
    # folder where all new AXI scans are transferred to
adc_drive_old:
    # folder where all old predicted wafer lots and images are stored for backup
k_drive:
    # folder where Klarity Defect finds all KLA files and wafer scans

Pause Times

pause_if_no_kla: (default=30)
    # long pause time in seconds in between checking cycles if no KLA files found
pause_if_kla: (default=5)
    # short pause time in seconds in between checking cycles if there are KLA files

times_to_find_imgs: (default=3)
    # no. of times to try and find images referenced in KLA file
pause_to_find_imgs: (default=10)
    # pause time in seconds to try and find the images referenced in KLA file
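
A minimal sketch of how these two settings could drive the image-finding retries (step 5.2 of the production flow) is below; the function and argument names are assumptions, not the actual ADCS code.

# Hypothetical retry loop: give the file transfer time to finish before
# giving up and proceeding anyway (as the production flow allows).
import os
import time

def wait_for_images(folder, filenames, times_to_find_imgs=3, pause_to_find_imgs=10):
    """Return True once every referenced image exists, else False after timeout."""
    for _ in range(times_to_find_imgs):
        missing = [f for f in filenames if not os.path.exists(os.path.join(folder, f))]
        if not missing:
            return True
        time.sleep(pause_to_find_imgs)   # wait before re-checking
    return False                         # timed out; caller proceeds regardless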

Model Configs

BATCH_SIZE: (default=8)
    # no. of images to classify at a time, higher requires more RAM
CONF_THRESHOLD: [0 - 100] (default=95)
    # min. % confidence a prediction must reach for it to be considered confident

BS Predictor Configs

BS Original Code: [174] AVI_Backside Defect

bs_model:
    # specific model to use, leave empty to use latest model
bs_defect_mapping: # correct KLA defect codes for BS defects
    aok: 0         # Unclassified
    chipping: 188  # OQA_Edge Chipping (BS)
    scratch: 190   # OQA_BS-Scratch (Cat Claw)
    stain: 195     # OQA_BS-Stain
    whitedot: 196  # OQA_BS-White Dot
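
To illustrate how CONF_THRESHOLD and bs_defect_mapping might work together, here is a hedged sketch; the class ordering, softmax output shape and return format are assumptions, not the actual predictor code.

# Hypothetical: map a batch of softmax predictions to KLA defect codes
# and flag any prediction below the confidence threshold.
import numpy as np

CLASSES = ["aok", "chipping", "scratch", "stain", "whitedot"]
BS_DEFECT_MAPPING = {"aok": 0, "chipping": 188, "scratch": 190,
                     "stain": 195, "whitedot": 196}
CONF_THRESHOLD = 95  # percent

def to_kla_codes(softmax_batch):
    """softmax_batch: (n_images, 5) array of class probabilities."""
    codes, confident = [], []
    for probs in softmax_batch:
        best = int(np.argmax(probs))
        codes.append(BS_DEFECT_MAPPING[CLASSES[best]])
        confident.append(probs[best] * 100 >= CONF_THRESHOLD)
    return codes, confident   # low-confidence images can go to manual sorting

codes, confident = to_kla_codes(np.array([[0.97, 0.01, 0.01, 0.005, 0.005]]))
print(codes, confident)   # [0] [True]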

EN Predictor Configs

EN Original Code: [173] AVI_Bevel Defect

en_model:
    # specific model to use, leave empty to use latest model
en_defect_mapping: # correct KLA defect codes for EN defects
    aok: 0         # Unclassified
    chipping: 189  # OQA_Edge Chipping (FS)

FS Predictor Configs

FS Original Code: [056] AVI Def

unimplemented

BE Trainer Configs

Basic Trainer Configs

training_runs: (default=5)
    # no. of models to train
training_subdir: [BACKSIDE / EDGENORMAL]
    # to train either backside or edgenormal models
training_n: (default=300)
    # balanced number of samples per class
training_saving_threshold: [0 - 100] (default=95)
    # min. % test accuracy to clear before the trained model is saved

Advanced Hyperparameter Configs

dense_layers: (default=1)
    # no. of dense layers after the layers of the pretrained model
dense_layer_size: (default=16)
    # size of each dense layer, bigger size results in a bigger .h5 model
dropout: (default=0.2)
    # fraction of units dropped randomly during training to mitigate overfitting
patience: (default=10)
    # no. of epochs without improvement to wait before stopping early and restoring the best model
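
For a concrete sense of these hyperparameters, below is a hedged Keras sketch. The MobileNetV2 base, input size and compile settings are assumptions; only dense_layers, dense_layer_size, dropout and patience mirror the settings above.

# Hypothetical transfer-learning model wiring the hyperparameters above.
import tensorflow as tf

def build_model(n_classes, dense_layers=1, dense_layer_size=16, dropout=0.2):
    # Frozen pretrained base; the choice of MobileNetV2 is an assumption
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, pooling="avg", weights="imagenet"
    )
    base.trainable = False
    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = base(inputs)
    for _ in range(dense_layers):               # dense layers after the pretrained model
        x = tf.keras.layers.Dense(dense_layer_size, activation="relu")(x)
        x = tf.keras.layers.Dropout(dropout)(x) # randomly drop units to curb overfitting
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Early stopping: wait `patience` epochs without improvement, keep the best weights
early_stop = tf.keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)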

Custom Testing Mode

training_mode: [true / false] (default=true)
    # false if you want to test a specific model
test_model: (default=empty)
    # test this model name if training_mode is false

Abbreviations Guide

  • SSMC: Systems on Silicon Manufacturing Company (TSMC & NXP JV)
  • Defect Classes (the other classes are self-explanatory)
    • aok: ALL-OK, meaning a normal image with no defect (false positive)
  • Domain
    • FS: Frontside
    • BE: Back & Edge (Backside + Edgenormal)
    • BS: Backside
    • EN: Edge Normal
    • ET: Edge Top (ignored)
    • FBE: Frontside-Backside-EdgeNormal
    • AXI: Advanced 3D X-Ray Inspection
    • KLA: File format used by SSMC's infrastructure
  • System
    • CNN: Convolutional Neural Network, the machine learning model used
    • CLI: Command Line Interface
    • GUI: Graphical User Interface
    • df: Dataframe, think of it as Excel but in code