Session: 41-05 Artificial Intelligence Applied to Wind Energy
Paper Number: 154084
Towards Interpretable Autoencoders for Anomaly Detection: An Application of Gradient-Based XAI Methods
As wind energy deployment continues to advance rapidly, effective condition monitoring and maintenance of wind turbines have become increasingly important. Wind turbines are subjected to harsh environmental conditions, making them prone to various faults and failures. Detecting these anomalies early is therefore crucial to prevent costly downtime and to ensure reliable energy production. With the growing complexity of these machines, there is a clear need for better anomaly detection models.
Physics-based models offer transparent reasoning but require substantial domain expertise. They must be adapted to different operating conditions and different wind turbines, making them case-specific. Combined with the vast amounts of data generated in modern wind farms, this has made data-driven models such as deep learning increasingly popular in research. The supervisory control and data acquisition (SCADA) system is commonly used for remote control and status monitoring of wind farms. Most of the collected data stems from healthy turbines, as repairs are carried out as soon as faults or failures are found. The data is also largely unlabelled, which makes it well suited to unsupervised deep learning techniques and avoids the time-consuming and costly task of labelling.
Machine learning, and especially deep learning, is promising due to its strong nonlinear mapping ability. However, the inherent complexity of deep learning models makes them challenging to understand and interpret. This black-box nature hinders trust in a model's reliability and accuracy. To address this, especially in critical applications, this research proposes to extend unsupervised deep learning approaches based on autoencoders (AE) with gradient-based explainable artificial intelligence (XAI) methods, so that the network's predictions can be explained post-hoc.
The method combines a deep autoencoder with support vector data description (SVDD) to analyse sensor correlations in SCADA data, effectively mitigating the influence of environmental noise and operational variations. It employs a dual optimisation scheme, minimising both the reconstruction loss and the volume of the hypersphere enclosing the latent representation of the input. To ensure model transparency, gradient-based XAI techniques are then used to calculate sensitivities with respect to the computed anomaly scores, effectively linearising the deep learning model locally. These sensitivities quantify the relative importance of each sensor value in the model's decision. Several algorithms, namely Gradient, SmoothGrad and Integrated Gradients, are used to compute the corresponding sensitivity maps. In this way, deeper insight is gained into the anomaly detection results and the model becomes more transparent, making the anomaly scores and detection mechanisms comprehensible to humans. Such post-hoc explanations benefit both machine learning developers and domain experts.
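As a concrete illustration of the dual optimisation scheme, the sketch below shows one way the joint objective and anomaly score could be written in PyTorch. It is a minimal sketch, not the paper's implementation: the layer sizes, the weighting factor `lam`, and the use of a fixed hypersphere centre are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class AESVDD(nn.Module):
    """Autoencoder whose latent space is regularised towards a compact
    hypersphere (Deep SVDD style). Layer sizes are placeholders."""

    def __init__(self, n_sensors: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_sensors, 32), nn.ReLU(),
            nn.Linear(32, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(),
            nn.Linear(32, n_sensors),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def dual_loss(x, x_hat, z, center, lam=0.1):
    # Reconstruction term keeps the autoencoder faithful to the SCADA signals;
    # the SVDD term shrinks the hypersphere enclosing healthy latent codes.
    recon = torch.mean((x - x_hat) ** 2)
    svdd = torch.mean(torch.sum((z - center) ** 2, dim=1))
    return recon + lam * svdd

def anomaly_score(model, x, center):
    # Samples that reconstruct poorly or land far from the hypersphere
    # centre receive high anomaly scores.
    x_hat, z = model(x)
    return torch.sum((x - x_hat) ** 2, dim=1) + torch.sum((z - center) ** 2, dim=1)
```

Following common Deep SVDD practice, the centre could be fixed to the mean latent code from an initial forward pass over healthy training data, which avoids the trivial solution in which the encoder collapses all inputs onto a freely learnable centre.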
The methodology is validated on multiple SCADA datasets to ensure a comprehensive evaluation across diverse operational conditions. Using gradient-based XAI methods, the model's reasoning can be visualised and its trustworthiness assessed. Overall, the methodology offers valuable insights for improving maintenance strategies and operational efficiency in wind energy systems. By combining advanced deep learning with an interpretable layer, we not only advance the field of anomaly detection but also contribute to the broader goal of optimising the sustainability and reliability of wind energy production.
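The three gradient-based attribution methods named above differ only in how they aggregate the gradient of the anomaly score with respect to the input sensors. A minimal sketch of all three follows, assuming the PyTorch model sketched earlier; the sample count, noise scale and zero baseline are illustrative defaults, not the paper's settings.

```python
import torch

def gradient_map(score_fn, x):
    # Plain Gradient: d(score)/d(x), the local linearisation of the model.
    x = x.detach().clone().requires_grad_(True)
    (grad,) = torch.autograd.grad(score_fn(x).sum(), x)
    return grad

def smoothgrad_map(score_fn, x, n_samples=25, sigma=0.1):
    # SmoothGrad: average the gradient over Gaussian-perturbed copies of x
    # to suppress noisy attributions.
    grads = torch.zeros_like(x)
    for _ in range(n_samples):
        grads += gradient_map(score_fn, x + sigma * torch.randn_like(x))
    return grads / n_samples

def integrated_gradients_map(score_fn, x, baseline=None, steps=50):
    # Integrated Gradients: accumulate gradients along a straight path from
    # a baseline (here zero) to x, then scale by the input difference.
    if baseline is None:
        baseline = torch.zeros_like(x)
    total = torch.zeros_like(x)
    for k in range(1, steps + 1):
        total += gradient_map(score_fn, baseline + (k / steps) * (x - baseline))
    return (x - baseline) * total / steps

# Hypothetical usage with the AESVDD sketch above:
# score_fn = lambda batch: anomaly_score(model, batch, center)
# sensitivity = smoothgrad_map(score_fn, scada_batch)
```

Each map has the same shape as the input, so every entry can be read directly as the contribution of one sensor channel to the anomaly score of that sample.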
Presenting Author: Jaron Tulleners, KU Leuven
Presenting Author Biography: Jaron Tulleners received a master's degree in mechanical engineering from KU Leuven, Belgium, in 2024. He is now pursuing a PhD at the Department of Mechanical Engineering of KU Leuven. His research interests include condition monitoring, explainable artificial intelligence and diagnostics.
Authors:
Jaron Tulleners, KU Leuven
Dandan Peng, KU Leuven
Ludovico Terzi, ENGIE Italia
Wim Desmet, KU Leuven
Konstantinos Gryllias, KU Leuven
Paper Type: Technical Paper Publication