Fault Detection in the Tennessee Eastman Benchmark Process Using Principal Component Difference Based on K-Nearest Neighbors. Zhang, C., Guo, Q., & Li, Y. IEEE Access, 8:49999–50009, 2020. doi: 10.1109/ACCESS.2020.2977421. Abstract: Industrial data usually have nonlinear or multimodal characteristics that do not meet the statistical assumptions underlying principal component analysis (PCA), so PCA achieves a lower fault detection rate in industrial processes. To address this limitation, a fault detection method using principal component difference based on k-nearest neighbors (Diff-PCA) is proposed in this paper. First, find the k-nearest-neighbor set of each sample in the training data set and calculate its mean vector. Second, build an augmented vector from each sample and its corresponding mean vector. Third, calculate the loading matrix and score matrix using PCA. Next, calculate the estimated scores from the mean vector of each sample using a missing-data imputation technique for PCA. Finally, build two new statistics from the difference between the real and estimated scores to detect faults. A fault diagnosis method based on contribution plots of the monitored variables is also proposed. In Diff-PCA, the differencing step eliminates the impact of nonlinear and multimodal structure on fault detection, and the subspaces monitored by the two new statistics differ from those monitored by T2 and SPE in conventional PCA. The effectiveness of the proposed strategy is demonstrated on two numerical cases (nonlinear and multimode) and the Tennessee Eastman (TE) process. The fault detection results indicate that Diff-PCA outperforms conventional PCA, kernel PCA, dynamic PCA, the principal-component-based k-nearest-neighbor rule, and the k-nearest-neighbor rule.
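The abstract walks through the Diff-PCA training steps (neighbor mean vectors, augmented vectors, PCA loadings and scores, score estimation via missing-data imputation, and difference-based statistics). Below is a minimal sketch of that data flow in Python, assuming a standardized NumPy training matrix and scikit-learn's NearestNeighbors and PCA; the choice of k, the number of components, the least-squares score imputation, and the Mahalanobis-type statistic on the score difference are illustrative assumptions, not details taken from the paper.

# A minimal sketch of the Diff-PCA data flow, under the assumptions stated above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def fit_diff_pca(X, k=5, n_components=5):
    """X: standardized training data, shape (n_samples, n_variables).
    k and n_components are illustrative defaults, not values from the paper."""
    n, m = X.shape

    # Step 1: mean vector of each sample's k nearest neighbors (self excluded).
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    X_knn_mean = X[idx[:, 1:]].mean(axis=1)

    # Step 2: augmented vector [x, knn_mean(x)] for each sample.
    X_aug = np.hstack([X, X_knn_mean])

    # Step 3: PCA on the augmented data -> loadings and "real" scores T.
    pca = PCA(n_components=n_components).fit(X_aug)
    T = pca.transform(X_aug)

    # Step 4: estimated scores from the knn-mean half only, treating the
    # original-variable half as missing (least-squares score imputation).
    P2 = pca.components_.T[m:, :]                 # loadings of the knn-mean block
    X2_centered = X_knn_mean - pca.mean_[m:]
    T_hat = X2_centered @ P2 @ np.linalg.pinv(P2.T @ P2)

    # Step 5: a Mahalanobis-type statistic on the score difference (one simple
    # stand-in for the paper's two difference-based statistics).
    D = T - T_hat
    S_inv = np.linalg.pinv(np.cov(D, rowvar=False))
    t2_diff = np.einsum('ij,jk,ik->i', D, S_inv, D)
    return pca, S_inv, t2_diff

The actual definitions of the two statistics and their control limits follow the paper; this sketch only illustrates how the neighbor means, the augmented PCA model, and the score differences fit together. For a new sample, the neighbor mean would typically be computed against the training set before the same projection and differencing are applied.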
@article{zhang_fault_2020,
title = {Fault {Detection} in the {Tennessee} {Eastman} {Benchmark} {Process} {Using} {Principal} {Component} {Difference} {Based} on {K}-{Nearest} {Neighbors}},
volume = {8},
issn = {2169-3536},
doi = {10.1109/ACCESS.2020.2977421},
abstract = {Industrial data usually have nonlinear or multimodal characteristics which do not meet the data assumptions of statistics in principal component analysis (PCA). Therefore, PCA has a lower fault detection rate in industrial processes. Aiming at the above limitations of PCA, a fault detection method using principal component difference based on k-nearest neighbors (Diff-PCA) is proposed in this paper. First, find the k nearest neighbors set of each sample in the training data set and calculate its mean vector. Second, build an augmented vector using each sample and its corresponding mean vector. Third, calculate the loading matrix and score matrix using PCA. Next, calculate the estimated scores using the mean vector of each sample and missing data imputation technique for PCA. At last, build two new statistics using the difference between the real scores and estimated scores to detect faults. In addition, the fault diagnosis method based on contribution plots of monitored variables is also proposed in this paper. In Diff-PCA, the difference skill can eliminate the impact of the nonlinear and multimodal structure on fault detection. Meanwhile, the monitored subspaces by the two new statistics are different from that by T2 and SPE in PCA. The efficiency of the proposed strategy is implemented in two numerical cases (nonlinear and multimode) and the Tennessee Eastman (TE) processes. The fault detection results indicate that Diff-PCA outperforms the conventional PCA, Kernel PCA, dynamic PCA, principal component-based k nearest neighbor rule and k nearest neighbor rule.},
journal = {IEEE Access},
author = {Zhang, Cheng and Guo, Qingxiu and Li, Yuan},
year = {2020},
	keywords = {Covariance matrices, Fault detection, Fault detection and diagnosis, Fault diagnosis, Loading, Monitoring, Principal component analysis, Tennessee Eastman processes, Training data, k nearest neighbors, principal component difference},
pages = {49999--50009},
}
{"_id":"FFdEWE9HMXSYPB2qG","bibbaseid":"zhang-guo-li-faultdetectioninthetennesseeeastmanbenchmarkprocessusingprincipalcomponentdifferencebasedonknearestneighbors-2020","author_short":["Zhang, C.","Guo, Q.","Li, Y."],"bibdata":{"bibtype":"article","type":"article","title":"Fault Detection in the Tennessee Eastman Benchmark Process Using Principal Component Difference Based on K-Nearest Neighbors","volume":"8","issn":"2169-3536","doi":"10.1109/ACCESS.2020.2977421","abstract":"Industrial data usually have nonlinear or multimodal characteristics which do not meet the data assumptions of statistics in principal component analysis (PCA). Therefore, PCA has a lower fault detection rate in industrial processes. Aiming at the above limitations of PCA, a fault detection method using principal component difference based on k-nearest neighbors (Diff-PCA) is proposed in this paper. First, find the k nearest neighbors set of each sample in the training data set and calculate its mean vector. Second, build an augmented vector using each sample and its corresponding mean vector. Third, calculate the loading matrix and score matrix using PCA. Next, calculate the estimated scores using the mean vector of each sample and missing data imputation technique for PCA. At last, build two new statistics using the difference between the real scores and estimated scores to detect faults. In addition, the fault diagnosis method based on contribution plots of monitored variables is also proposed in this paper. In Diff-PCA, the difference skill can eliminate the impact of the nonlinear and multimodal structure on fault detection. Meanwhile, the monitored subspaces by the two new statistics are different from that by T2 and SPE in PCA. The efficiency of the proposed strategy is implemented in two numerical cases (nonlinear and multimode) and the Tennessee Eastman (TE) processes. The fault detection results indicate that Diff-PCA outperforms the conventional PCA, Kernel PCA, dynamic PCA, principal component-based k nearest neighbor rule and k nearest neighbor rule.","journal":"IEEE Access","author":[{"propositions":[],"lastnames":["Zhang"],"firstnames":["Cheng"],"suffixes":[]},{"propositions":[],"lastnames":["Guo"],"firstnames":["Qingxiu"],"suffixes":[]},{"propositions":[],"lastnames":["Li"],"firstnames":["Yuan"],"suffixes":[]}],"year":"2020","note":"Conference Name: IEEE Access","keywords":"Covariance matrices, Fault detection, Fault detection and diagnosis, Fault diagnosis, Loading, Monitoring, Principal component analysis, Tennessee Eastman processes, Training data, k nearest neighbors, principal component analysis, principal component difference","pages":"49999–50009","bibtex":"@article{zhang_fault_2020,\n\ttitle = {Fault {Detection} in the {Tennessee} {Eastman} {Benchmark} {Process} {Using} {Principal} {Component} {Difference} {Based} on {K}-{Nearest} {Neighbors}},\n\tvolume = {8},\n\tissn = {2169-3536},\n\tdoi = {10.1109/ACCESS.2020.2977421},\n\tabstract = {Industrial data usually have nonlinear or multimodal characteristics which do not meet the data assumptions of statistics in principal component analysis (PCA). Therefore, PCA has a lower fault detection rate in industrial processes. Aiming at the above limitations of PCA, a fault detection method using principal component difference based on k-nearest neighbors (Diff-PCA) is proposed in this paper. First, find the k nearest neighbors set of each sample in the training data set and calculate its mean vector. 
Second, build an augmented vector using each sample and its corresponding mean vector. Third, calculate the loading matrix and score matrix using PCA. Next, calculate the estimated scores using the mean vector of each sample and missing data imputation technique for PCA. At last, build two new statistics using the difference between the real scores and estimated scores to detect faults. In addition, the fault diagnosis method based on contribution plots of monitored variables is also proposed in this paper. In Diff-PCA, the difference skill can eliminate the impact of the nonlinear and multimodal structure on fault detection. Meanwhile, the monitored subspaces by the two new statistics are different from that by T2 and SPE in PCA. The efficiency of the proposed strategy is implemented in two numerical cases (nonlinear and multimode) and the Tennessee Eastman (TE) processes. The fault detection results indicate that Diff-PCA outperforms the conventional PCA, Kernel PCA, dynamic PCA, principal component-based k nearest neighbor rule and k nearest neighbor rule.},\n\tjournal = {IEEE Access},\n\tauthor = {Zhang, Cheng and Guo, Qingxiu and Li, Yuan},\n\tyear = {2020},\n\tnote = {Conference Name: IEEE Access},\n\tkeywords = {Covariance matrices, Fault detection, Fault detection and diagnosis, Fault diagnosis, Loading, Monitoring, Principal component analysis, Tennessee Eastman processes, Training data, k nearest neighbors, principal component analysis, principal component difference},\n\tpages = {49999--50009},\n}\n\n\n\n","author_short":["Zhang, C.","Guo, Q.","Li, Y."],"key":"zhang_fault_2020","id":"zhang_fault_2020","bibbaseid":"zhang-guo-li-faultdetectioninthetennesseeeastmanbenchmarkprocessusingprincipalcomponentdifferencebasedonknearestneighbors-2020","role":"author","urls":{},"keyword":["Covariance matrices","Fault detection","Fault detection and diagnosis","Fault diagnosis","Loading","Monitoring","Principal component analysis","Tennessee Eastman processes","Training data","k nearest neighbors","principal component analysis","principal component difference"],"metadata":{"authorlinks":{}},"html":""},"bibtype":"article","biburl":"https://bibbase.org/zotero/mh_lenguyen","dataSources":["iwKepCrWBps7ojhDx"],"keywords":["covariance matrices","fault detection","fault detection and diagnosis","fault diagnosis","loading","monitoring","principal component analysis","tennessee eastman processes","training data","k nearest neighbors","principal component analysis","principal component difference"],"search_terms":["fault","detection","tennessee","eastman","benchmark","process","using","principal","component","difference","based","nearest","neighbors","zhang","guo","li"],"title":"Fault Detection in the Tennessee Eastman Benchmark Process Using Principal Component Difference Based on K-Nearest Neighbors","year":2020}