
Study Finds ClassBD Outperforms Top Fault Diagnosis Methods in Noisy Scenarios

by Deconvolute Technology, December 23rd, 2024

Too Long; Didn't Read

In computational experiments, ClassBD was evaluated against state-of-the-art fault diagnosis methods, showing improved performance under noisy conditions with varying SNR levels.

Abstract and 1. Introduction

2. Preliminaries and 2.1. Blind deconvolution

2.2. Quadratic neural networks

3. Methodology

3.1. Time domain quadratic convolutional filter

3.2. Superiority of cyclic features extraction by QCNN

3.3. Frequency domain linear filter with envelope spectrum objective function

3.4. Integral optimization with uncertainty-aware weighing scheme

4. Computational experiments

4.1. Experimental configurations

4.2. Case study 1: PU dataset

4.3. Case study 2: JNU dataset

4.4. Case study 3: HIT dataset

5. Computational experiments

5.1. Comparison of BD methods

5.2. Classification results on various noise conditions

5.3. Employing ClassBD to deep learning classifiers

5.4. Employing ClassBD to machine learning classifiers

5.5. Feature extraction ability of quadratic and conventional networks

5.6. Comparison of ClassBD filters

6. Conclusions

Appendix and References

4. Computational experiments

4.1. Experimental configurations

4.1.1. Signal preprocessing


In this experiment, we inject additive Gaussian white noise (AWGN) to simulate scenarios with significant noise and validate the classification performance of our method. The level of noise is determined by the Signal-to-Noise Ratio (SNR), which is defined as follows:

SNR = 10 log10(P_signal / P_noise) (dB),

where P_signal and P_noise denote the power of the clean signal and of the injected noise, respectively.

Under this configuration, we partition the datasets as per [71]. First, the raw signals are segmented in time order to separate the training and test sets, thereby preventing information leakage. Second, the sub-sequences are sampled with or without overlap, depending on the volume of data. Third, we add noise to the datasets at varying SNR levels, chosen according to how severely each model degrades under them. The noisy signals are then normalized using Z-score standardization.
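As a concrete illustration, injecting AWGN at a target SNR can be sketched in NumPy as follows (a minimal sketch; the function name and interface are ours, not from the paper's code):

```python
import numpy as np

def add_awgn(signal: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Inject additive Gaussian white noise so that the result has the
    requested signal-to-noise ratio (in dB) relative to `signal`."""
    rng = np.random.default_rng(rng)
    p_signal = np.mean(signal ** 2)                 # signal power
    p_noise = p_signal / (10 ** (snr_db / 10))      # required noise power
    noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
    return signal + noise
```

A 0 dB setting, for example, makes the noise power equal to the signal power, which is why low-SNR conditions are so challenging for classification.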


It is noteworthy that our setting simulates a more challenging scenario: the noise is generated per sub-sequence, so the noise power varies from one sub-sequence to the next. Compared with computing the noise over the entire signal, this further increases the difficulty of discriminating between similar signals. Finally, the segmentation ratio differs slightly across the chosen datasets and will be elaborated upon subsequently.
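Taken together, the steps above (time-ordered split, fixed-length segmentation, per-sub-sequence AWGN, and Z-score standardization) can be sketched as a single routine. All names and the interface are illustrative; the actual segmentation ratio and overlap policy are dataset-specific, as noted:

```python
import numpy as np

def preprocess(raw: np.ndarray, seg_len: int, train_ratio: float,
               snr_db: float, seed: int = 0):
    """Sketch of the preprocessing pipeline:
    1) split the raw signal by time order to prevent leakage,
    2) segment each part into fixed-length sub-sequences (no overlap here),
    3) add AWGN to each sub-sequence independently (per-segment noise power),
    4) Z-score-standardize each noisy sub-sequence."""
    rng = np.random.default_rng(seed)
    split = int(len(raw) * train_ratio)
    parts = {"train": raw[:split], "test": raw[split:]}
    out = {}
    for name, sig in parts.items():
        n_seg = len(sig) // seg_len
        segs = sig[: n_seg * seg_len].reshape(n_seg, seg_len)
        noisy = np.empty_like(segs, dtype=float)
        for i, s in enumerate(segs):
            # noise power is computed from THIS sub-sequence only
            p_noise = np.mean(s ** 2) / (10 ** (snr_db / 10))
            noisy[i] = s + rng.normal(0.0, np.sqrt(p_noise), seg_len)
        # Z-score standardization per sub-sequence
        noisy = (noisy - noisy.mean(axis=1, keepdims=True)) / \
                (noisy.std(axis=1, keepdims=True) + 1e-12)
        out[name] = noisy
    return out["train"], out["test"]
```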


4.1.2. Baselines and training settings


We adopt several state-of-the-art time domain bearing fault diagnosis methods as the baselines: 1) Deep residual shrinkage neural network for bearing fault diagnosis (DRSN) [72]; 2) A wavelet convolutional neural network using Laplace wavelet kernel (WaveletKernelNet) [73]; 3) An enhanced semi-shrinkage wavelet weight initialization network (EWSNet) [74]; 4) A Gramian time frequency enhancement network (GTFENet) [75]; 5) A time-frequency transform-based neural network (TFN) [76]. Then, for ClassBD, we adopt WDCNN [54] as our classifier.


Furthermore, all methods share an identical hyperparameter configuration: a maximum of 200 training epochs, a batch size of 128, and a learning rate selected from {0.1, 0.3, 0.5, 0.8}. It is noteworthy that we employ SGD [77] as the optimizer and utilize CosineAnnealingLR [78] to dynamically adjust the learning rate throughout the training process. The experiments are executed on an Nvidia RTX 4090 24GB GPU and implemented in Python 3.8 with PyTorch 2.1. All reported results are averages over ten independent runs.
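These training settings translate into a short PyTorch skeleton. The toy data and the stand-in classifier below are ours for illustration (WDCNN itself is not reproduced here); only the optimizer, scheduler, epoch count, and batch size follow the configuration described above:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Toy stand-in data and classifier; WDCNN is not reproduced here.
x = torch.randn(256, 1, 1024)          # 256 signals of length 1024
y = torch.randint(0, 10, (256,))       # 10 hypothetical fault classes
loader = DataLoader(TensorDataset(x, y), batch_size=128, shuffle=True)

model = nn.Sequential(nn.Conv1d(1, 16, 64, stride=16), nn.ReLU(),
                      nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                      nn.Linear(16, 10))

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)
criterion = nn.CrossEntropyLoss()

for epoch in range(200):               # maximum of 200 training epochs
    for xb, yb in loader:
        optimizer.zero_grad()
        criterion(model(xb), yb).backward()
        optimizer.step()
    scheduler.step()                   # cosine-annealed learning rate
```

By the final epoch the cosine schedule has annealed the learning rate to (near) zero, its default minimum.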


4.1.3. Evaluation metrics


We adopt the commonly used false positive rate (FPR) and F1 score to benchmark the performance of all the considered methods. Formally, these two metrics are defined as below:

FPR = FP / (FP + TN),

F1 = 2TP / (2TP + FP + FN),

where TP, TN, FP, and FN stand for the number of true positives, true negatives, false positives, and false negatives, respectively.
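For reference, both metrics follow directly from the confusion-matrix counts (a minimal sketch; note that F1 = 2TP / (2TP + FP + FN) is the harmonic mean of precision and recall):

```python
def fpr_and_f1(tp: int, tn: int, fp: int, fn: int):
    """Compute the false positive rate and F1 score from counts."""
    fpr = fp / (fp + tn)                 # fraction of negatives misclassified
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return fpr, f1
```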


Authors:

(1) Jing-Xiao Liao, Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hong Kong, Special Administrative Region of China and School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China;

(2) Chao He, School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing, China;

(3) Jipu Li, Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hong Kong, Special Administrative Region of China;

(4) Jinwei Sun, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China;

(5) Shiping Zhang (Corresponding author), School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China;

(6) Xiaoge Zhang (Corresponding author), Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hong Kong, Special Administrative Region of China.


This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.