A QUANTITATIVE ASSESSMENT OF SECURE NEURAL NETWORK ARCHITECTURES FOR FAULT DETECTION IN INDUSTRIAL CONTROL SYSTEMS
DOI: https://doi.org/10.63125/3m7gbs97

Keywords: Secure Neural Networks, Industrial Control Systems (ICS), Fault Detection, Adversarial Robustness, Cyber-Physical Security

Abstract
Industrial Control Systems (ICS) form the core infrastructure for critical sectors such as energy, water, manufacturing, and transportation, yet their increasing digital interconnectivity has exposed them to complex fault dynamics and sophisticated cyber-physical threats. Traditional fault detection mechanisms—whether rule-based or model-driven—often fail to cope with the nonlinearity, high dimensionality, and adversarial vulnerabilities prevalent in modern ICS environments. To address these limitations, this study conducts a comprehensive quantitative evaluation of secure neural network architectures tailored for ICS fault detection. Specifically, the research compares standard deep learning models—including Multilayer Perceptrons (MLP), Convolutional Neural Networks (CNN), and Long Short-Term Memory networks (LSTM)—with their security-enhanced counterparts, such as adversarially trained LSTM (AT-LSTM) and autoencoder-based input sanitization models (AE-S). Using two publicly available benchmark datasets—SWaT and WADI—and simulating three distinct adversarial threat scenarios (white-box, black-box, and gray-box), the study systematically measures performance across multiple dimensions, including accuracy, F1-score, robustness accuracy, attack success rate, inference latency, and fault detection delay. The results reveal that secure architectures not only retain over 80% classification accuracy under white-box attacks but also maintain low false positive rates and detection delays under two seconds, validating their suitability for real-time deployment. Furthermore, secure models exhibit superior generalization across rare fault classes and higher consistency in adversarial environments, outperforming baseline models by wide margins across all tested metrics. These findings confirm that integrating adversarial defense mechanisms into neural network designs substantially improves the operational reliability and cybersecurity resilience of ICS fault detection systems. The study provides a validated framework and practical insights to guide the deployment of robust AI-based monitoring in safety-critical industrial domains, highlighting the role of secure neural networks as a foundational component for next-generation intelligent control systems.
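The abstract names several evaluation metrics (accuracy, F1-score, robustness accuracy, attack success rate) without defining them. The following minimal sketch shows one common way these metrics are computed for a binary fault-detection task; the definitions are standard ones and the toy labels are illustrative, not drawn from the SWaT or WADI datasets, so the exact formulations used in the paper may differ.

```python
def accuracy(y_true, y_pred):
    """Fraction of samples classified correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the fault (positive) class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def attack_success_rate(y_true, clean_pred, adv_pred):
    """Fraction of correctly classified samples whose prediction the attack flips."""
    correct = sum(t == c for t, c in zip(y_true, clean_pred))
    flipped = sum(t == c and a != t for t, c, a in zip(y_true, clean_pred, adv_pred))
    return flipped / correct if correct else 0.0

# Toy example: 1 = fault, 0 = normal operation.
y_true     = [1, 1, 0, 0, 1, 0]
clean_pred = [1, 1, 0, 0, 0, 0]   # model output on clean sensor data
adv_pred   = [1, 0, 0, 1, 0, 0]   # model output under a simulated attack

clean_acc  = accuracy(y_true, clean_pred)           # clean accuracy
robust_acc = accuracy(y_true, adv_pred)             # "robustness accuracy"
f1         = f1_score(y_true, clean_pred)
asr        = attack_success_rate(y_true, clean_pred, adv_pred)
```

In this framing, robustness accuracy is simply plain accuracy measured on adversarially perturbed inputs, while the attack success rate conditions on samples the model got right in the first place, so the two quantities move in opposite directions as a defense improves.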