A method for improving model accuracy that is often more powerful than tailoring any single algorithm has emerged: bundling models into ensembles.
Ensemble models, built by methods such as bagging, boosting, and Bayesian model averaging, can appear dauntingly complex, yet they tend to outperform their component models substantially on new data.
This white paper explores how bundling competing models into ensembles almost always improves generalization.
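To make the idea concrete, here is a minimal sketch of one such method, bagging, in pure Python. The dataset, the use of 1-nearest-neighbour regression as the component model, and all function names are illustrative assumptions, not part of this paper: each component model is fit on a bootstrap resample of the data, and the ensemble prediction is the average of the component predictions.

```python
import random
from statistics import mean

def predict_1nn(train, x):
    # Component model: 1-nearest-neighbour regression
    # fit on one bootstrap sample (a deliberately simple,
    # high-variance learner -- the kind bagging helps most).
    return min(train, key=lambda pt: abs(pt[0] - x))[1]

def bagged_predict(data, x, n_models=25, rng=None):
    # Bagging: draw a bootstrap resample (sampling with
    # replacement) for each component model, then average
    # the component predictions.
    rng = rng or random.Random(0)
    preds = []
    for _ in range(n_models):
        boot = [rng.choice(data) for _ in data]
        preds.append(predict_1nn(boot, x))
    return mean(preds)

# Hypothetical noisy samples from y = 2x, for illustration only.
data_rng = random.Random(1)
data = [(i / 10, 2 * i / 10 + data_rng.gauss(0, 0.3)) for i in range(50)]
print(bagged_predict(data, 2.0, rng=random.Random(2)))
```

Averaging over resamples smooths out the noise that any single component model fits, which is one intuition for why the ensemble generalizes better than its parts.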