Feature selection is a standard term in data mining for reducing inputs to a manageable size for analysis and processing, while identifying and discarding irrelevant information without degrading the accuracy of the classifier. Feature selection (FS) selects a subset of relevant features and removes irrelevant and redundant features from the raw data to build a robust learning model. FS is important not only because of the curse of dimensionality, but also because of the complexity and sheer quantity of the data faced by multiple disciplines, such as machine learning, data mining, statistics, pattern recognition and bioinformatics. In recent years, we have seen extensive research in feature selection, expanding in both depth and breadth: from simple to more advanced techniques, and from supervised to unsupervised and semi-supervised feature selection. This paper presents a state-of-the-art survey of feature selection techniques.
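To make the idea concrete, here is a minimal sketch of one of the simplest FS approaches, a filter method: each feature is scored by its absolute Pearson correlation with the labels, and only the top-k features are kept. The data, the scoring function, and k are illustrative assumptions, not details taken from the survey itself.

```python
# Minimal filter-based feature selection sketch (illustrative, not the
# survey's method): score features by |Pearson correlation| with y, keep top k.

def select_top_k_features(X, y, k):
    """Return indices of the k features most correlated with y.

    X: list of samples, each a list of numeric feature values.
    y: list of numeric labels.
    k: number of features to keep.
    """
    n_samples = len(X)
    n_features = len(X[0])

    def pearson(col):
        xs = [row[col] for row in X]
        mx = sum(xs) / n_samples
        my = sum(y) / n_samples
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, y))
        vx = sum((a - mx) ** 2 for a in xs)
        vy = sum((b - my) ** 2 for b in y)
        if vx == 0 or vy == 0:
            return 0.0  # constant feature carries no information
        return cov / (vx ** 0.5 * vy ** 0.5)

    scores = [(abs(pearson(j)), j) for j in range(n_features)]
    scores.sort(reverse=True)
    return sorted(j for _, j in scores[:k])

# Example: feature 0 tracks the label, feature 1 is noise, feature 2 is constant.
X = [[1.0, 3.0, 5.0],
     [2.0, 1.0, 5.0],
     [3.0, 4.0, 5.0],
     [4.0, 2.0, 5.0]]
y = [1.0, 2.0, 3.0, 4.0]
print(select_top_k_features(X, y, 1))  # → [0]
```

Filter methods like this rank features independently of any learning algorithm; wrapper and embedded methods, also covered by surveys of this kind, instead evaluate feature subsets using the classifier itself.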
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.