A Normative Examination of Ensemble Learning Algorithms
- David M. Pennock
- Pedrito Maynard-Reid II
- C. Lee Giles
- Eric Horvitz
Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000).
Published by Morgan Kaufmann, San Francisco, 2000.
Ensemble learning algorithms combine the results of several classifiers to yield an aggregate classification. We present a normative evaluation of combination methods, applying and extending existing axiomatizations from social choice theory and statistics. For the case of multiple classes, we show that several seemingly innocuous and desirable properties are mutually satisfied only by a dictatorship. A weaker set of properties admits only the weighted average combination rule. For the case of binary classification, we give axiomatic justifications for majority vote and for weighted majority. We also show that, even when all component algorithms report that an attribute is probabilistically independent of the classification, common ensemble algorithms often destroy this independence information. We exemplify these theoretical results with experiments on stock market data, demonstrating how ensembles of classifiers can exhibit canonical voting paradoxes.
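As a concrete illustration of the two combination rules named in the abstract, the sketch below shows a weighted average of per-classifier class-probability distributions (multi-class case) and a simple majority vote over binary labels. This is a minimal, generic sketch under assumed inputs, not the paper's implementation; the function names, weights, and example distributions are hypothetical.

```python
def weighted_average(distributions, weights):
    """Combine class-probability distributions from several classifiers
    by a convex (weighted average) combination."""
    assert len(distributions) == len(weights) > 0
    total = sum(weights)
    n_classes = len(distributions[0])
    combined = [0.0] * n_classes
    for dist, w in zip(distributions, weights):
        for i, p in enumerate(dist):
            combined[i] += (w / total) * p
    return combined


def majority_vote(labels):
    """Binary majority vote over component classifiers' 0/1 labels."""
    return 1 if 2 * sum(labels) > len(labels) else 0


# Three hypothetical classifiers over three classes, equal weights.
# Each component favors a different class (argmaxes 0, 1, 2), yet the
# aggregate distribution favors class 2.
dists = [[0.6, 0.3, 0.1],
         [0.2, 0.5, 0.3],
         [0.1, 0.2, 0.7]]
combined = weighted_average(dists, [1.0, 1.0, 1.0])
# combined is approximately [0.300, 0.333, 0.367]

vote = majority_vote([1, 1, 0])  # -> 1
```

The weighted average rule keeps the aggregate a valid probability distribution (it sums to 1 whenever the components do), which is one reason it survives the weaker axiom set discussed in the paper; majority vote applies only to the binary-label setting.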