Adaptation for Regularization Operators in Learning Theory
Massachusetts Institute of Technology, Cambridge, Computer Science and Artificial Intelligence Laboratory
We consider learning algorithms induced by regularization methods in the regression setting. We show that error bounds previously obtained for these algorithms under a-priori choices of the regularization parameter can also be attained using a suitable a-posteriori choice based on validation. In particular, these results prove adaptation of the rate of convergence of the estimators to the minimax rate induced by the effective dimension of the problem. We also show universal consistency for this class of methods.
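The a-posteriori selection described above can be illustrated with a minimal sketch: fit a regularized kernel regression estimator for each candidate regularization parameter on a training split, then pick the parameter minimizing error on a held-out validation split. This is an illustrative hold-out scheme, not the paper's exact procedure; all function names, the Gaussian kernel, and the parameter grid are assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Gaussian (RBF) kernel matrix between two sets of points (an assumed choice).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_ridge_fit(K, y, lam):
    # Tikhonov-regularized solution: solve (K + n*lam*I) c = y for coefficients c.
    n = K.shape[0]
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def holdout_select(X_tr, y_tr, X_val, y_val, lambdas, sigma=1.0):
    # A-posteriori choice of the regularization parameter:
    # pick the lambda whose estimator has the smallest validation error.
    K_tr = gaussian_kernel(X_tr, X_tr, sigma)
    K_val = gaussian_kernel(X_val, X_tr, sigma)
    best_lam, best_err, best_c = None, np.inf, None
    for lam in lambdas:
        c = kernel_ridge_fit(K_tr, y_tr, lam)
        err = np.mean((K_val @ c - y_val) ** 2)  # empirical validation risk
        if err < best_err:
            best_lam, best_err, best_c = lam, err, c
    return best_lam, best_c

# Usage sketch on synthetic regression data.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (80, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=80)
lam, coef = holdout_select(X[:60], y[:60], X[60:], y[60:],
                           lambdas=[1e-4, 1e-3, 1e-2, 1e-1])
```

The point of the adaptation results is that this data-driven choice of lambda can match the error bounds obtained when lambda is tuned with a-priori knowledge of the problem's effective dimension.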
- Statistics and Probability
- Computer Programming and Software