Saturday, March 5, 2011

Feature Selection for MLP Neural Network: The Use of Random Permutation of Probabilistic Outputs

Abstract

This paper presents a new wrapper-based feature selection method for multilayer perceptron (MLP) neural networks. It uses a feature ranking criterion that measures the importance of a feature by computing the aggregate difference, over the feature space, between the probabilistic outputs of the MLP with and without that feature. The criterion therefore yields an importance score for every feature. Numerical experiments on several artificial and real-world data sets show that the proposed method generally outperforms several existing feature selection methods for MLPs, particularly when the data set is sparse or contains many redundant features. In addition, as a wrapper-based approach, its computational cost is modest.
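To illustrate the general idea behind the abstract, here is a minimal sketch of permutation-style feature scoring for an MLP, using scikit-learn. This is not the paper's exact criterion; it only demonstrates comparing the network's probabilistic outputs with a feature intact versus randomly permuted, and aggregating the difference into a per-feature score. The data set, network size, and scoring function are all illustrative assumptions.

```python
# Hedged sketch (NOT the paper's exact method): score each feature by how
# much randomly permuting it changes the MLP's probabilistic outputs.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

rng = np.random.default_rng(0)

# Illustrative synthetic data: 5 features, only some informative.
X, y = make_classification(n_samples=300, n_features=5, n_informative=2,
                           n_redundant=1, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
mlp.fit(X, y)
base = mlp.predict_proba(X)  # probabilistic outputs with all features intact

scores = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # disrupt feature j only
    perm = mlp.predict_proba(Xp)
    # Aggregate difference of probabilistic outputs over the data set.
    scores.append(float(np.mean(np.abs(base - perm))))

ranking = np.argsort(scores)[::-1]  # features ordered most to least important
print(scores, ranking)
```

A larger score means the network's output probabilities depend more strongly on that feature; low-scoring features are candidates for removal in a wrapper-style selection loop.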


We don't host any IEEE papers; these are abstracts only.