At Sheertex, I led the development of a machine vision system to automate quality assurance (QA) for our proprietary textile material. The goal was to reduce the cost to produce (CTP) by identifying defects in real time, ensuring only high-quality products reached packaging. I designed and implemented an image processing pipeline using OpenCV, integrated it with Keyence cameras and PLCs, and tuned it for a low false positive rate.
The textile manufacturing process at Sheertex produced materials with several types of defects, including holes, needle lines, and barre (repetitive banding in the knit).
Manual QA to detect these defects was expensive and error-prone.
The machine vision system consisted of several hardware and software components, integrated into a cohesive workflow:
Keyence Camera → Keyence Vision Controller → Ethernet (Keyence SDK) → IPC (OpenCV Program) → Ethernet (Modbus TCP) → PLC → Pneumatic Pusher
Keyence Camera and Lighting: captured images of the textile as it moved along the production line.
Industrial PC (IPC): ran the OpenCV-based image processing program and communicated with both the vision controller and the PLC.
Programmable Logic Controller (PLC): received the defect result over Modbus TCP and actuated the pneumatic pusher to divert defective units.
The image processing pipeline was designed to detect defects with high accuracy while minimizing computational overhead. It consisted of four stages: preprocessing, defect detection, postprocessing, and integration. Sample OpenCV snippets from the final program are included with each stage below.
Preprocessing:
Image Acquisition: The Keyence SDK was used to capture images from the camera.
Image Enhancement: Images were converted to grayscale and normalized to improve contrast, for example:
import cv2
import numpy as np
# Convert the captured frame to grayscale and stretch its intensity range to [0, 255]
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
normalized_image = cv2.normalize(gray_image, None, 0, 255, cv2.NORM_MINMAX)
Defect Detection:
Holes: Detected using contour analysis and circular Hough transforms:
# Edge detection followed by contour extraction; contours larger than a
# minimum area (threshold, tuned on the line) are treated as holes
edges = cv2.Canny(normalized_image, 100, 200)
contours, _ = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
holes = [cnt for cnt in contours if cv2.contourArea(cnt) > threshold]
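The contour check above handles irregular holes; for the circular Hough transform mentioned earlier, a minimal sketch might look like the following (the dp, minDist, param, and radius values are illustrative placeholders, not the tuned production settings):

import cv2

# normalized_image is the grayscale frame from the preprocessing step
circles = cv2.HoughCircles(normalized_image, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=2, maxRadius=50)
if circles is not None:
    print(f"{circles.shape[1]} circular hole candidates found")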
Needle Lines: Detected using line detection algorithms like the Hough Line Transform:
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50, minLineLength=100, maxLineGap=10)
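HoughLinesP returns raw line segments rather than a needle-line verdict; the filter below is only a sketch of one plausible rule, assuming needle lines appear as long, near-vertical segments (the 10-degree tolerance is an illustrative value):

import numpy as np

# lines comes from cv2.HoughLinesP above; each entry is [[x1, y1, x2, y2]]
needle_candidates = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        if abs(abs(angle) - 90) < 10:  # segment is close to vertical
            needle_candidates.append((x1, y1, x2, y2))
needle_line_detected = len(needle_candidates) > 0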
Barre: Detected using Fourier Transform to identify repetitive patterns:
# 2-D FFT of the normalized frame; barre shows up as strong repetitive peaks
# in the magnitude spectrum
f_transform = np.fft.fft2(normalized_image)
f_shift = np.fft.fftshift(f_transform)
magnitude_spectrum = 20 * np.log(np.abs(f_shift))
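The snippet stops at the magnitude spectrum; the decision step is not shown here, so the following is only a sketch of one way repetitive banding could be flagged, assuming barre shows up as strong peaks away from the spectrum's centre (the 3x ratio is an illustrative threshold):

import numpy as np

# magnitude_spectrum comes from the FFT snippet above
h, w = magnitude_spectrum.shape
cy, cx = h // 2, w // 2
spectrum = magnitude_spectrum.copy()
spectrum[cy - 5:cy + 6, cx - 5:cx + 6] = 0   # suppress the DC region
barre_detected = spectrum.max() > 3 * np.median(spectrum)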
Postprocessing:
False Positive Reduction: Applied morphological operations to remove noise:
kernel = np.ones((5, 5), np.uint8)
cleaned_image = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
Defect Classification: Used a rule-based approach to classify defects based on the size, shape, and other metrics captured in the previous steps (sketched below).
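The exact rules are not spelled out here, so the classifier below is only a hedged sketch of how the earlier detection results might map to a defect flag; the function name and rules are hypothetical:

def classify_frame(holes, needle_candidates, barre_detected):
    # Hypothetical rule-based mapping; the rules are illustrative,
    # not the production logic
    if holes:                 # large-enough contours from the hole check
        return 1, "hole"
    if needle_candidates:     # long near-vertical Hough segments
        return 1, "needle_line"
    if barre_detected:        # periodic peaks in the FFT spectrum
        return 1, "barre"
    return 0, "ok"

defect_flag, defect_type = classify_frame(holes, needle_candidates, barre_detected)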
PLC Communication: The IPC sent defect classification results to the PLC using Modbus TCP:
from pyModbusTCP.client import ModbusClient

# auto_open lets the client connect (and reconnect) transparently before each request
client = ModbusClient(host="PLC_IP", port=502, auto_open=True)
client.write_single_register(0, defect_flag)  # register 0: 0 = no defect, 1 = defect
PLC Code:
The PLC was programmed using ladder logic, a graphical programming language commonly used in industrial automation. It received a binary signal from the IPC over Modbus TCP indicating whether a defect was detected (1 for defect, 0 for no defect). The equivalent logic, written here as structured-text pseudocode for readability:
IF DEFECT_FLAG == 1 THEN
ACTIVATE(PNEUMATIC_PUSHER)
ELSE
DEACTIVATE(PNEUMATIC_PUSHER)
END_IF
Latency and Timing Considerations: The PLC was programmed to process the incoming signal and activate the pneumatic pusher within 100 milliseconds to keep up with the production line speed.
End to end, the system took roughly 600 ms per unit on average, comfortably within budget since the line processed about one unit every 2,000 ms, so the inspection step did not delay manufacturing. The rest of the engineering team assisted with the timing analysis of the overall manufacturing process.
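A minimal sketch of how per-frame latency could be logged to verify the ~600 ms average against the ~2,000 ms per-unit budget; process_frame here is a hypothetical wrapper around the stages described above:

import time

def timed_inspection(frame):
    start = time.perf_counter()
    defect_flag, defect_type = process_frame(frame)   # hypothetical pipeline wrapper
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    print(f"{defect_type}: {elapsed_ms:.1f} ms (budget ~2000 ms per unit)")
    return defect_flag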
To evaluate the system’s effectiveness, I used statistical methods to measure the false positive rate (FPR) and false negative rate (FNR), with the manual QA team providing the ground-truth labels:
Calculated statistics:

Confusion matrix:

| | Predicted Defect | Predicted No Defect | Total |
|---|---|---|---|
| Actual Defect | 88 | 4 | 92 |
| Actual No Defect | 3 | 155 | 158 |
| Total | 91 | 159 | 250 |
Precision:
Precision measures how many of the predicted defects are actual defects: 96.70%
Recall (Sensitivity):
Recall measures how many of the actual defects are correctly predicted: 95.65%.
F1-Score:
F1-Score is the harmonic mean of precision and recall, balancing the two metrics: 96.17%.
False Negative Rate (FNR):
FNR measures the proportion of actual defects that were incorrectly predicted as "no defect": 4.35%.
False Positive Rate (FPR):
FPR measures the proportion of actual non-defects that were incorrectly predicted as "defect": 1.90%.
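These percentages follow directly from the confusion matrix above; a quick sketch of the arithmetic:

# Counts from the confusion matrix
tp, fn = 88, 4     # actual defects: predicted defect / predicted no defect
fp, tn = 3, 155    # actual non-defects: predicted defect / predicted no defect

precision = tp / (tp + fp)                            # 88 / 91  = 96.70%
recall = tp / (tp + fn)                               # 88 / 92  = 95.65%
f1 = 2 * precision * recall / (precision + recall)    # 96.17%
fnr = fn / (tp + fn)                                  # 4 / 92   = 4.35%
fpr = fp / (fp + tn)                                  # 3 / 158  = 1.90%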
High Precision (96.70%): 96.70% of the units flagged as defective were true defects, meaning very few non-defective products are flagged as defective.
High Recall (95.65%): The system correctly identifies 95.65% of all defects, which is excellent given the priority on minimizing false negatives.
Very Low FPR (1.90%): The system has a 1.90% false positive rate, meaning a very small percentage of non-defective products are incorrectly flagged as defective. This is an excellent result, as it minimizes the burden on the manual QA team.
Prioritizing FNR over FPR: The system is tuned to minimize false negatives, ensuring that almost all defective products are caught, while the false positive rate is kept low enough to avoid adding unnecessary work for the manual QA team.
One of the biggest challenges was detecting barre defects, which were subtle and often missed by initial algorithms. Through root cause analysis, I identified that the issue was due to low contrast and noise in the images.
Solution: I implemented adaptive thresholding, for example:
# Local (adaptive) thresholding copes with the low contrast and uneven intensity that hid barre
adaptive_thresh = cv2.adaptiveThreshold(normalized_image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)
The machine vision system delivered significant improvements:
75% Reduction in Manual QA Time: Automated defect detection reduced the workload on the QA team.
High Accuracy: Achieved a 95% recall rate for critical defects, ensuring minimal false negatives.
Seamless Integration: The system was fully integrated into the production line, with minimal disruption to existing processes.
This project demonstrated the power of traditional image processing in manufacturing applications and the importance of root cause analysis and statistical evaluation in developing reliable automation systems.