Calibration of robust empirical optimization problems
Seminar
Department of Systems Engineering and Engineering Management
The Chinese University of Hong Kong
----------------------------------------------------------------------------------------------------------
Title: Calibration of robust empirical optimization problems
Speaker: Prof. Andrew Lim
Department of Analytics and Operations and the Department of Finance
National University of Singapore
Abstract: In this talk, I will discuss recent results on the out-of-sample properties of robust empirical optimization and develop a theory for data-driven calibration of the “robustness parameter” for worst-case maximization problems with concave reward functions. Building on the intuition that robust optimization reduces the sensitivity to model misspecification by controlling the spread of the reward distribution, I show that the first-order benefit of a “little bit of robustness” is a significant reduction in the variance of the out-of-sample reward, while the corresponding impact on the mean is almost an order of magnitude smaller. One implication is that a substantial reduction in the variance of the out-of-sample reward (i.e., the sensitivity of the expected reward to model misspecification) is possible at little cost if the robustness parameter is properly calibrated. To this end, I introduce the notion of a robust mean-variance frontier to select the robustness parameter and show that it can be approximated using resampling methods such as the bootstrap. Examples show that robust solutions resulting from “open-loop” calibration methods (e.g., selecting a 90% confidence level regardless of the data and objective function) can be very conservative out of sample, while selecting an ambiguity parameter that optimizes an estimate of the out-of-sample expected reward (e.g., via the bootstrap) with no regard for the variance is often insufficiently robust. I also explain why the out-of-sample expected reward generated by the solution of a worst-case problem can sometimes exceed that of a sample-average optimizer. This is joint work with Junya Gotoh (Chuo University) and Michael Kim (University of British Columbia).
Date: Friday, July 10, 2020, 16:30
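----------------------------------------------------------------------------------------------------------
The following is a minimal, illustrative sketch (not taken from the talk or the underlying paper) of how a robustness parameter might be calibrated from a bootstrap estimate of a robust mean-variance frontier. It substitutes a variance-penalized surrogate for the full worst-case problem, reflecting only the first-order intuition described in the abstract, and uses a toy newsvendor reward; the demand model, parameter values, and all function names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
price, cost = 5.0, 3.0  # illustrative newsvendor economics

def reward(x, d):
    # Concave newsvendor reward for order quantity x and demand d (toy example).
    return price * np.minimum(x, d) - cost * x

def robust_decision(demand, delta, x_grid):
    # Maximize mean(r) - (delta/2) * var(r): a variance-penalized surrogate used
    # here in place of the full worst-case problem (first-order intuition only).
    scores = []
    for x in x_grid:
        r = reward(x, demand)
        scores.append(r.mean() - 0.5 * delta * r.var())
    return x_grid[int(np.argmax(scores))]

# Illustrative in-sample demand data and candidate robustness parameters.
demand = rng.lognormal(mean=3.0, sigma=0.5, size=200)
x_grid = np.linspace(0.0, demand.max(), 200)
deltas = [0.0, 0.01, 0.05, 0.1, 0.5]  # delta = 0 is the sample-average optimizer

# Bootstrap: refit the decision on each resample, then score it on the
# out-of-bag observations to estimate the out-of-sample mean and variance.
B, n = 200, len(demand)
print("delta   est. OOS mean   est. OOS variance")
for delta in deltas:
    means, variances = [], []
    for _ in range(B):
        idx = rng.integers(0, n, n)
        oob = np.setdiff1d(np.arange(n), idx)
        if oob.size == 0:
            continue
        x_hat = robust_decision(demand[idx], delta, x_grid)
        r_oob = reward(x_hat, demand[oob])
        means.append(r_oob.mean())
        variances.append(r_oob.var())
    print(f"{delta:5.2f}   {np.mean(means):13.2f}   {np.mean(variances):17.2f}")

Reading down the printed rows gives a rough estimate of the robust mean-variance frontier: the idea is to pick the delta at which the estimated out-of-sample variance drops substantially while the estimated mean reward is only slightly lower, rather than fixing delta by an open-loop rule or by maximizing the estimated mean alone.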