3D Convolutional and Recurrent Neural Networks for Reactor Perturbation Unfolding Anomaly Detection

Aiden Durrant, Georgios Leontidis* (Corresponding Author), Stefanos Kollias

*Corresponding author for this work

Research output: Contribution to conference › Poster › peer-review



With Europe’s ageing fleet of nuclear reactors operating closer to their safety limits, the monitoring of such reactors through complex models has become of great interest for maintaining high availability and safety. We therefore propose an extended deep learning framework, developed as part of the CORTEX Horizon 2020 EU project, for unfolding reactor transfer functions from induced neutron noise sources. This unfolding enables the identification and localisation of reactor core perturbation sources from neutron detector readings in Pressurised Water Reactors. By monitoring reactor signals at nominal conditions, a detailed baseline can be established for the early detection of anomalies. Many model-driven and data-driven techniques have attempted to provide this insight; deep learning, however, offers state-of-the-art performance with the potential for real-time prediction while remaining robust to variation.

The framework analyses such perturbations in both the time and frequency domains, with data modelled by the SIMULATE-3K and CORE SIM+ simulations respectively. These simulations provide large-scale datasets to train and test the proposed approaches: 261,120 samples in the frequency domain and 509,952 in the time domain. In the frequency domain, 3D Convolutional Neural Networks (3D-CNNs) are employed to analyse spatial relationships within the core volume for a number of perturbations. More specifically, advanced dense connections are implemented, allowing greater flow of information through the network during both the forward pass and backpropagation. In the time domain, Recurrent Neural Networks (RNNs) are employed to learn the temporal dependencies of sequential data signals; the RNN identifies perturbations induced by fuel assembly vibrations and by thermal-hydraulic fluctuations at the core inlet.
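The dense-connection pattern described above can be illustrated with a minimal sketch: each layer in a dense block receives the channel-wise concatenation of the block input and all previous layer outputs, so features (and gradients) have short paths through the network. This is a toy numpy illustration of the connectivity pattern only, not the authors' actual 3D-CNN; the layer shapes, growth rate, and `make_layer` helper are illustrative assumptions.

```python
import numpy as np

def dense_block(x, layers):
    """DenseNet-style connectivity: each layer consumes the
    concatenation (along the channel axis) of the block input
    and every previous layer's output."""
    features = [x]
    for layer in layers:
        out = layer(np.concatenate(features, axis=0))  # channels-first
        features.append(out)
    return np.concatenate(features, axis=0)

def make_layer(in_ch, growth, rng):
    """Toy 'layer': a fixed 1x1x1 linear map over channels plus ReLU,
    emitting `growth` new channels (stand-in for a real 3D convolution)."""
    w = rng.standard_normal((growth, in_ch)) * 0.1
    return lambda f: np.maximum(0.0, np.einsum('oc,c...->o...', w, f))

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8, 8))  # 4 channels over a toy 8x8x8 core volume
growth = 2
layers = [make_layer(4 + i * growth, growth, rng) for i in range(3)]
y = dense_block(x, layers)
print(y.shape)  # (4 + 3*2, 8, 8, 8) -> (10, 8, 8, 8)
```

Because every layer's output is carried forward by concatenation, the channel count grows linearly with depth, which is the property that gives both activations and gradients direct access to earlier features.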
To classify perturbation type, a multi-sigmoid classification layer was implemented to handle the multi-perturbation, multi-label nature of the problem. Both networks share this classification layer: the RNN outputs a 512-dimensional representation, while the CNN outputs a 128-dimensional feature-rich representation. Once a perturbation type has been classified, the network output is fed to a fully connected network that uses the extracted features to regress the coordinates of the induced perturbation source. The results show that perturbation types can be identified successfully, with an accuracy of 96.41% in the time domain under multiple simultaneous perturbations. Similarly, in the frequency domain accuracy remains high, with perturbation types classified at 99.85% accuracy. Localisation of the perturbation source coordinates is also highly effective, with a Mean Absolute Error (MAE) of 0.2954 voxels in the frequency domain. This project contributes to a deepened understanding of the physical processes involved, enabling the early detection of operational problems, improving reliability, and further helping to reduce the carbon footprint and environmental impact.
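The head described above can be sketched as follows. Independent sigmoids (rather than a softmax) let several perturbation types be flagged simultaneously, and the same feature vector then feeds a regression head for the source coordinates. This is a minimal numpy sketch under illustrative assumptions: the weights are random stand-ins, the number of perturbation types (4) and the 0.5 threshold are hypothetical, and only the 128-dimensional CNN branch is shown.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify_and_localise(features, w_cls, w_reg, threshold=0.5):
    """Shared head: one sigmoid per perturbation type (multi-label),
    plus a linear regressor mapping the same features to (x, y, z)."""
    probs = sigmoid(features @ w_cls)   # independent probability per type
    active = probs >= threshold         # several types may fire at once
    coords = features @ w_reg           # coordinate estimate for the source
    return probs, active, coords

rng = np.random.default_rng(1)
feats = rng.standard_normal(128)             # e.g. the CNN's 128-d representation
w_cls = rng.standard_normal((128, 4)) * 0.1  # 4 illustrative perturbation types
w_reg = rng.standard_normal((128, 3)) * 0.1  # 3 spatial coordinates
probs, active, coords = classify_and_localise(feats, w_cls, w_reg)
print(probs.shape, coords.shape)  # (4,) (3,)
```

The design choice worth noting is the sigmoid-per-class output: a softmax would force the class probabilities to compete, which is wrong when multiple perturbations can occur at once.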
Original language: English
Publication status: Published - 4 Jun 2019
Event: 9th European Commission Conference on EURATOM Research and Training in Safety of Reactor Systems - Romania
Duration: 4 Jun 2019 - 7 Jun 2019




  • machine learning
  • nuclear reactors


