Boosting Methods

Although neural networks are a powerful tool in many applications, a single network is often insufficient for difficult problems. To overcome this difficulty, neural network ensembles combine different networks so as to build a system capable of solving the problem at hand, while also providing a simpler and easier-to-understand design. Among neural network ensemble methods are "Boosting" and, in particular, the "AdaBoost" algorithm.

The AdaBoost algorithm iteratively trains a number of base classifiers, each new classifier paying more attention to the data misclassified by the previous ones, and combines them to obtain a classifier with improved performance. At each iteration, it trains a new classifier, assigns it an output weight, and adds it to the ensemble, so that the global output of the system is obtained as a weighted linear combination of all the base classifiers.
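This iterative procedure can be illustrated with the following minimal Python sketch of the discrete AdaBoost variant; the base learner interface (a make_learner() factory returning a classifier with fit(X, y, sample_weight) and predict() producing ±1 labels) is an assumption made only for this illustration.

    import numpy as np

    def adaboost_train(X, y, make_learner, n_rounds=50):
        # Sketch of discrete AdaBoost: y takes values in {-1, +1}; make_learner()
        # is assumed to return a classifier with fit(X, y, sample_weight) and predict(X).
        y = np.asarray(y)
        n = len(y)
        w = np.full(n, 1.0 / n)                      # uniform initial sample weights
        learners, alphas = [], []
        for _ in range(n_rounds):
            clf = make_learner()
            clf.fit(X, y, sample_weight=w)
            pred = clf.predict(X)
            err = np.sum(w * (pred != y))            # weighted training error
            if err == 0.0 or err >= 0.5:             # stop if the learner is perfect or too weak
                break
            alpha = 0.5 * np.log((1.0 - err) / err)  # output weight of this classifier
            w *= np.exp(-alpha * y * pred)           # emphasize misclassified samples
            w /= w.sum()
            learners.append(clf)
            alphas.append(alpha)
        return learners, alphas

    def adaboost_predict(X, learners, alphas):
        # Global output: weighted linear combination of all base classifiers.
        F = sum(a * clf.predict(X) for a, clf in zip(alphas, learners))
        return np.sign(F)

For instance, make_learner could return a shallow decision tree; any classifier that accepts sample weights would serve as base learner in this sketch.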

So that each new classifier focuses on the most erroneous data, Real AdaBoost uses a weighting function that emphasizes the importance of each sample during classifier training. A detailed analysis of this emphasis function allows us to decompose it into the product of two terms, one related to the quadratic error of each sample and the other associated with its proximity to the decision border. Consequently, the structure of the AdaBoost emphasis function can be generalized by introducing an adjustable mixing parameter, λ, to control the tradeoff between the two emphasis terms [2].
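As a hedged illustration of this generalized weighting (with the exponential form assumed from [2] and the notation adapted here), the emphasis given to sample i can be computed from the current ensemble output F(x_i) and its label y_i ∈ {-1, +1} as follows; λ = 0.5 would recover the standard Real AdaBoost emphasis up to normalization.

    import numpy as np

    def mixed_emphasis(F, y, lam=0.5):
        # Sketch of the generalized emphasis (form assumed from [2]): one factor
        # grows with the quadratic error (F - y)^2, the other with the proximity
        # of F to the decision border (small |F|); lam in [0, 1] balances the two.
        w = np.exp(lam * (F - y) ** 2 - (1.0 - lam) * F ** 2)
        return w / w.sum()    # normalize to a distribution over the samples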
 
Following this research line, two alternatives have been explored to select an adequate value of the mixing parameter:

  1. The first one, published in [3], considers the edge parameter used by the Real AdaBoost (RA) algorithm (a weighted correlation between the learner outputs and their corresponding labels) and proposes to dynamically adjust the mixing parameter as the ensemble grows.
  2. The second one, given in [4], instead of trying to find the best value of λ, combines the outputs of a number of RA-we ensembles trained with different λ values. It thus takes advantage of the diversity introduced by the mixing parameter to construct a committee of RA-we ensembles (a minimal sketch of this combination is given after this list).
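The committee idea admits a very simple sketch, under the assumption that each RA-we ensemble produces a real-valued output and that the committee decision is taken as the sign of the average of those outputs; the actual combination rule and grid of λ values used in [4] may differ.

    import numpy as np

    def committee_predict(X, ensembles, ensemble_output):
        # Hedged sketch of a committee of RA-we ensembles trained with different
        # lambda values: ensemble_output(e, X) is assumed to return the real-valued
        # output of ensemble e, and the committee averages those outputs.
        outputs = np.stack([ensemble_output(e, X) for e in ensembles])
        return np.sign(outputs.mean(axis=0))

    # Illustrative use with hypothetical helpers train_ra_we and ra_we_output:
    # ensembles = [train_ra_we(X_train, y_train, lam) for lam in (0.2, 0.35, 0.5, 0.65, 0.8)]
    # y_pred = committee_predict(X_test, ensembles, ra_we_output)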

In parallel to this work, and in order to reduce the high computational cost of this type of network during the operation stage, an acceleration scheme has been proposed in [1] that exploits the fact that many of the patterns to be classified are easy and do not require the evaluation of all the subnets to obtain the whole network output.
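A minimal sketch of this kind of accelerated evaluation is given below: the subnets are evaluated sequentially and evaluation stops as soon as the partial ensemble output is confident enough. The simple magnitude test and threshold used here are illustrative assumptions, not the confidence rating defined in [1].

    import numpy as np

    def fast_ensemble_output(x, learners, alphas, threshold=2.0):
        # Hedged sketch: accumulate the weighted outputs of the subnets one by one
        # and stop early once |F| exceeds an (illustrative) confidence threshold,
        # so easy patterns do not require evaluating all subnets.
        F = 0.0
        for clf, a in zip(learners, alphas):
            F += a * clf.predict(np.atleast_2d(x))[0]   # partial ensemble output
            if abs(F) >= threshold:                      # confident enough: stop early
                break
        return np.sign(F)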

Following this research line on the design of neural network ensembles, we have also proposed other "Boosting" schemes which present a more compact structure (with a lower number of elements) and are capable of reducing the classification error in comparison with traditional neural network ensembles [5,6].

 

REFERENCES

 

[1] Arenas-García, J., Gómez-Verdejo, V. and Figueiras-Vidal, A. R. (2007). Fast evaluation of neural networks via confidence rating. Neurocomputing, 70:2775–2782.

[2] Gómez-Verdejo, V., Ortega-Moral, M., Arenas-García, J. and Figueiras-Vidal, A. R. (2006). Boosting by weighting critical and erroneous samples. Neurocomputing, 69:679–685.

[3] Gómez-Verdejo, V., Arenas-García, J. and Figueiras-Vidal, A. R. (2008). A dynamically adjusted mixed emphasis method for building boosting ensembles. IEEE Transactions on Neural Networks, 19:3–17.

[4] Gómez-Verdejo, V., Arenas-García, J. and Figueiras-Vidal, A. R. (2010). Committees of AdaBoost ensembles with modified emphasis functions. Neurocomputing, 73:1289–1292.

[5] Mayhua-López, E., Gómez-Verdejo, V. and Figueiras-Vidal, A. R. (2012). Real AdaBoost with gate controlled fusion. IEEE Transactions on Neural Networks and Learning Systems, 23(12):2003–2009.

[6] Muñoz-Romero, S., Gómez-Verdejo, V. and Arenas-García, J. (2009). Real AdaBoost ensembles with emphasized subsampling. In Proc. 10th Intl. Work-Conference on Artificial Neural Networks, LNCS 5517, pp. 440–447, Salamanca, Spain.