Non-stochastic Best Arm Identification and Hyperparameter Optimization
Kevin Jamieson, Ameet Talwalkar; Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:240-248, 2016.
Abstract
Motivated by the task of hyperparameter optimization, we introduce the _non-stochastic best-arm identification_ problem. We identify an attractive algorithm for this setting that makes no assumptions on the convergence behavior of the arms’ losses, has no free parameters to adjust, provably outperforms the uniform allocation baseline in favorable conditions, and performs comparably (up to log factors) otherwise. Next, by leveraging the iterative nature of many learning algorithms, we cast hyperparameter optimization as an instance of non-stochastic best-arm identification. Our empirical results show that, by allocating more resources to promising hyperparameter settings, our approach achieves comparable test accuracies an order of magnitude faster than the uniform strategy. The robustness and simplicity of our approach make it well suited to ultimately replace the uniform strategy currently used in most machine learning software packages.
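The allocation idea in the abstract can be illustrated with a short sketch. Below is a minimal successive-halving style loop in Python: all surviving hyperparameter settings ("arms") receive equal training within a round, then the worse half is discarded, so promising settings receive more total resources. The `Arm` interface (`step()` and `loss`) and the even per-round budget split are illustrative assumptions, not the paper's exact algorithm or schedule.

```python
import math

def successive_halving(arms, budget):
    """Sketch: train all surviving arms equally, then keep the better half.

    `arms` is a list of objects with a .step() method (one unit of training)
    and a .loss attribute (current validation loss). `budget` is the total
    number of .step() calls we are willing to spend.
    """
    rounds = max(1, math.ceil(math.log2(len(arms))))
    per_round = budget // rounds  # split the total budget evenly across rounds
    while len(arms) > 1:
        # Each surviving arm gets an equal share of this round's budget.
        steps = max(1, per_round // len(arms))
        for arm in arms:
            for _ in range(steps):
                arm.step()
        # Keep the half of the arms with the lowest current loss.
        arms = sorted(arms, key=lambda a: a.loss)[: max(1, len(arms) // 2)]
    return arms[0]

class ToyArm:
    """Hypothetical arm whose loss decays at an arm-specific rate."""
    def __init__(self, rate):
        self.rate, self.t, self.loss = rate, 0, 1.0
    def step(self):
        self.t += 1
        self.loss = 1.0 / (1.0 + self.rate * self.t)

best = successive_halving([ToyArm(r) for r in (0.1, 0.5, 1.0, 2.0)], budget=200)
```

Note that, matching the non-stochastic setting, the loop treats each observed loss as the arm's actual intermediate loss (e.g., validation loss after some number of training iterations) rather than as a noisy sample of a fixed mean.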
Cite this Paper
BibTeX
@InProceedings{pmlr-v51-jamieson16,
  title     = {Non-stochastic Best Arm Identification and Hyperparameter Optimization},
  author    = {Jamieson, Kevin and Talwalkar, Ameet},
  booktitle = {Proceedings of the 19th International Conference on Artificial Intelligence and Statistics},
  pages     = {240--248},
  year      = {2016},
  editor    = {Gretton, Arthur and Robert, Christian C.},
  volume    = {51},
  series    = {Proceedings of Machine Learning Research},
  address   = {Cadiz, Spain},
  month     = {09--11 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v51/jamieson16.pdf},
  url       = {https://proceedings.mlr.press/v51/jamieson16.html},
  abstract  = {Motivated by the task of hyperparameter optimization, we introduce the {\em non-stochastic best-arm identification} problem. We identify an attractive algorithm for this setting that makes no assumptions on the convergence behavior of the arms’ losses, has no free parameters to adjust, provably outperforms the uniform allocation baseline in favorable conditions, and performs comparably (up to \log factors) otherwise. Next, by leveraging the iterative nature of many learning algorithms, we cast hyperparameter optimization as an instance of non-stochastic best-arm identification. Our empirical results show that, by allocating more resources to promising hyperparameter settings, our approach achieves comparable test accuracies an order of magnitude faster than the uniform strategy. The robustness and simplicity of our approach make it well suited to ultimately replace the uniform strategy currently used in most machine learning software packages.}
}
Endnote
%0 Conference Paper
%T Non-stochastic Best Arm Identification and Hyperparameter Optimization
%A Kevin Jamieson
%A Ameet Talwalkar
%B Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2016
%E Arthur Gretton
%E Christian C. Robert
%F pmlr-v51-jamieson16
%I PMLR
%P 240--248
%U https://proceedings.mlr.press/v51/jamieson16.html
%V 51
%X Motivated by the task of hyperparameter optimization, we introduce the non-stochastic best-arm identification problem. We identify an attractive algorithm for this setting that makes no assumptions on the convergence behavior of the arms’ losses, has no free parameters to adjust, provably outperforms the uniform allocation baseline in favorable conditions, and performs comparably (up to log factors) otherwise. Next, by leveraging the iterative nature of many learning algorithms, we cast hyperparameter optimization as an instance of non-stochastic best-arm identification. Our empirical results show that, by allocating more resources to promising hyperparameter settings, our approach achieves comparable test accuracies an order of magnitude faster than the uniform strategy. The robustness and simplicity of our approach make it well suited to ultimately replace the uniform strategy currently used in most machine learning software packages.
RIS
TY - CPAPER
TI - Non-stochastic Best Arm Identification and Hyperparameter Optimization
AU - Kevin Jamieson
AU - Ameet Talwalkar
BT - Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
DA - 2016/05/02
ED - Arthur Gretton
ED - Christian C. Robert
ID - pmlr-v51-jamieson16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 51
SP - 240
EP - 248
L1 - http://proceedings.mlr.press/v51/jamieson16.pdf
UR - https://proceedings.mlr.press/v51/jamieson16.html
AB - Motivated by the task of hyperparameter optimization, we introduce the non-stochastic best-arm identification problem. We identify an attractive algorithm for this setting that makes no assumptions on the convergence behavior of the arms’ losses, has no free parameters to adjust, provably outperforms the uniform allocation baseline in favorable conditions, and performs comparably (up to log factors) otherwise. Next, by leveraging the iterative nature of many learning algorithms, we cast hyperparameter optimization as an instance of non-stochastic best-arm identification. Our empirical results show that, by allocating more resources to promising hyperparameter settings, our approach achieves comparable test accuracies an order of magnitude faster than the uniform strategy. The robustness and simplicity of our approach make it well suited to ultimately replace the uniform strategy currently used in most machine learning software packages.
ER -
APA
Jamieson, K. & Talwalkar, A. (2016). Non-stochastic Best Arm Identification and Hyperparameter Optimization. _Proceedings of the 19th International Conference on Artificial Intelligence and Statistics_, in _Proceedings of Machine Learning Research_ 51:240-248. Available from https://proceedings.mlr.press/v51/jamieson16.html.