Research Article
Year 2020, Volume: 8 Issue: 2, 181 - 185, 30.04.2020
https://doi.org/10.17694/bajece.679662

A Stacking-based Ensemble Learning Method for Outlier Detection


Abstract

Outlier detection is considered one of the crucial research areas in data mining. Many methods have been widely studied and applied in the existing literature to achieve better outlier detection results; however, the performance of these approaches remains inadequate. In this paper, a stacking-based ensemble classifier is proposed that combines four base learners (Rotation Forest, Random Forest, Bagging and Boosting) with a meta-learner (Logistic Regression) to improve outlier detection performance. The proposed method is evaluated on five datasets from the ODDS library using five performance criteria. The experimental results demonstrate that the proposed method outperforms the conventional ensemble approaches in terms of accuracy, AUC (Area Under the Curve), precision, recall and F-measure. The method can also be applied to image recognition and other machine learning problems, such as binary classification.
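
The abstract describes a stacking ensemble with Rotation Forest, Random Forest, Bagging and Boosting as base learners and Logistic Regression as the meta-learner; the experiments in the paper were run with the WEKA workbench [11]. The following is only a minimal illustrative sketch in Python with scikit-learn, not the authors' implementation: Rotation Forest is not available in scikit-learn, so ExtraTreesClassifier is used here as a placeholder, AdaBoost stands in for the generic "Boosting" learner, and all hyperparameters, the train/test split and the label convention (1 = outlier) are assumptions for illustration.

```python
# Illustrative sketch of a stacking-based ensemble for outlier detection
# treated as binary classification (1 = outlier, 0 = inlier).
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              ExtraTreesClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split


def build_stacking_classifier():
    # Four base learners combined by a Logistic Regression meta-learner,
    # mirroring the ensemble described in the abstract. ExtraTrees is only
    # a stand-in for Rotation Forest, which scikit-learn does not provide.
    base_learners = [
        ("rotation_forest_standin", ExtraTreesClassifier(n_estimators=100)),
        ("random_forest", RandomForestClassifier(n_estimators=100)),
        ("bagging", BaggingClassifier(n_estimators=100)),
        ("boosting", AdaBoostClassifier(n_estimators=100)),
    ]
    return StackingClassifier(estimators=base_learners,
                              final_estimator=LogisticRegression(max_iter=1000),
                              cv=5)


def evaluate(X, y):
    # Hold out part of the data and report the five criteria used in the paper.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=42)
    clf = build_stacking_classifier().fit(X_tr, y_tr)
    y_pred = clf.predict(X_te)
    y_score = clf.predict_proba(X_te)[:, 1]
    return {
        "accuracy": accuracy_score(y_te, y_pred),
        "auc": roc_auc_score(y_te, y_score),
        "precision": precision_score(y_te, y_pred),
        "recall": recall_score(y_te, y_pred),
        "f_measure": f1_score(y_te, y_pred),
    }
```

In a stacking setup like this, the base learners are trained with internal cross-validation and their out-of-fold predictions form the input features of the meta-learner, which learns how to weight the individual ensembles.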

References

  • [1] Ö. G. Alma, S. Kurt and U. Aybars, “Genetic algorithms for outlier detection in multiple regression with different information criteria,” vol. 9655, 2011.
  • [2] C. Pardo, J. F. Diez-Pastor, C. García-Osorio and J. J. Rodríguez, “Rotation Forests for regression,” Appl. Math. Comput., vol. 219, no. 19, pp. 9914–9924, 2013.
  • [3] L. Chen, S. Gao and X. Cao, “Research on real-time outlier detection over big data streams,” Int. J. Comput. Appl., vol. 7074, pp. 1–9, 2017.
  • [4] N. Simidjievski, “Predicting long-term population dynamics with bagging and boosting of process-based models,” vol. 42, pp. 8484–8496, 2015.
  • [5] C. Zhang and J. Zhang, “RotBoost: A technique for combining Rotation Forest and AdaBoost,” vol. 29, pp. 1524–1536, 2008.
  • [6] A. Bagnall, M. Flynn, J. Large, J. Line, A. Bostrom and G. Cawley, “Is rotation forest the best classifier for problems with continuous features?,” 2018.
  • [7] E. Taşcı, “A Meta-Ensemble Classifier Approach: Random Rotation Forest,” Balk. J. Electr. Comput. Eng., vol. 7, no. 2, pp. 182–187, 2019.
  • [8] P. Du, A. Samat, B. Waske, S. Liu and Z. Li, “Random Forest and Rotation Forest for fully polarized SAR image classification using polarimetric and spatial features,” ISPRS J. Photogramm. Remote Sens., vol. 105, pp. 38–53, 2015.
  • [9] S. Agarwal and C. R. Chowdary, “A-Stacking and A-Bagging: Adaptive versions of ensemble learning algorithms for spoof fingerprint detection,” Expert Syst. Appl., vol. 146, p. 113160, 2020.
  • [10] J.-Z. Feng, Y. Wang, J. Peng, M.-W. Sun, J. Zeng and H. Jiang, “Comparison between logistic regression and machine learning algorithms on survival prediction of traumatic brain injuries,” J. Crit. Care, vol. 54, pp. 110–116, 2019.
  • [11] E. Frank, M. A. Hall and I. H. Witten, The WEKA Workbench. Online Appendix for “Data Mining: Practical Machine Learning Tools and Techniques,” Fourth Edition, Morgan Kaufmann, 2016.
  • [12] T. A. Engel, A. S. Charão, M. Kirsch-Pinheiro and L. A. Steffenel, “Performance improvement of data mining in weka through GPU acceleration,” Procedia Comput. Sci., vol. 32, pp. 93–100, 2014.
  • [13] S. Rayana, ODDS Library [Online]. Available: http://odds.cs.stonybrook.edu. Stony Brook, NY: Stony Brook University, Department of Computer Science, 2016.
  • [14] Y. Zhou and G. Qiu, “Random forest for label ranking,” Expert Syst. Appl., vol. 112, pp. 99–109, 2018.
  • [15] T. Fawcett, “An introduction to ROC analysis,” Pattern Recognit. Lett., vol. 27, no. 8, pp. 861–874, 2006.
  • [16] L. A. Bull, K. Worden, R. Fuentes, G. Manson, E. J. Cross, and N. Dervilis, “Outlier ensembles: A robust method for damage detection and unsupervised feature extraction from high-dimensional data,” J. Sound Vib., vol. 453, pp. 126–150, 2019.

Details

Primary Language English
Subjects Artificial Intelligence
Journal Section Research Articles
Authors

Abdul Ahad Abro 0000-0002-3591-9231

Erdal Taşcı 0000-0001-6754-2187

Aybars Ugur 0000-0003-3622-7672

Publication Date April 30, 2020
Published in Issue Year 2020 Volume: 8 Issue: 2

Cite

APA Abro, A. A., Taşcı, E., & Ugur, A. (2020). A Stacking-based Ensemble Learning Method for Outlier Detection. Balkan Journal of Electrical and Computer Engineering, 8(2), 181-185. https://doi.org/10.17694/bajece.679662

Cited By

Vote-Based: Ensemble Approach
Sakarya Üniversitesi Fen Bilimleri Enstitüsü Dergisi
https://doi.org/10.16984/saufenbilder.901960



All articles published by BAJECE are licensed under the Creative Commons Attribution 4.0 International License. This permits anyone to copy, redistribute, remix, transmit and adapt the work, provided the original work and source are appropriately cited.