Seminar: High-Order Error Bounds for Markovian LSA with Richardson-Romberg Extrapolation
----------------------------------------------------------------------------------------------------
Department of Systems Engineering and Engineering Management
The Chinese University of Hong Kong
----------------------------------------------------------------------------------------------------
Date: 16:30 - 17:30 on 19 December 2025 (Friday)
Venue: ERB 513, The Chinese University of Hong Kong
Title: High-Order Error Bounds for Markovian LSA with Richardson-Romberg Extrapolation
Speaker: Ilya Levin, Faculty of Computer Science, HSE University
Abstract:
We study the bias and high-order error bounds of the Linear Stochastic Approximation (LSA) algorithm with Polyak-Ruppert (PR) averaging under Markovian noise. We focus on the version of the algorithm with constant step size and propose a novel decomposition of the bias via a linearization technique. We analyze the structure of the bias and show that the leading-order term is linear in the step size and cannot be eliminated by PR averaging. To address this, we apply the Richardson-Romberg (RR) extrapolation procedure, which effectively cancels the leading bias term. We derive high-order moment bounds for the RR iterates and show that the leading error term aligns with the asymptotically optimal covariance matrix of the vanilla averaged LSA iterates.
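The extrapolation step described in the abstract can be sketched in a few lines: run constant-step LSA with Polyak-Ruppert averaging at step sizes α and α/2, then combine the two averages as 2·θ̄(α/2) − θ̄(α), which cancels the bias component that is linear in the step size. The sketch below is a toy illustration only: it uses i.i.d. Gaussian perturbations rather than the Markovian noise treated in the talk, and the matrix `A_bar`, vector `b_bar`, and all constants are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy LSA target: solve A_bar @ theta = b_bar from noisy samples (A_k, b_k).
# (Illustrative values; the talk's setting has Markov-modulated noise.)
A_bar = np.array([[2.0, 0.5], [0.5, 1.5]])
b_bar = np.array([1.0, -1.0])
theta_star = np.linalg.solve(A_bar, b_bar)

def lsa_pr(alpha, n_steps, rng):
    """Constant-step LSA, theta_{k+1} = theta_k - alpha*(A_k theta_k - b_k),
    with Polyak-Ruppert (running-mean) averaging of the iterates."""
    theta = np.zeros(2)
    avg = np.zeros(2)
    for k in range(n_steps):
        A_k = A_bar + 0.3 * rng.standard_normal((2, 2))  # noisy matrix sample
        b_k = b_bar + 0.3 * rng.standard_normal(2)       # noisy vector sample
        theta = theta - alpha * (A_k @ theta - b_k)
        avg += (theta - avg) / (k + 1)                   # running PR average
    return avg

alpha, n = 0.1, 50_000
est_a    = lsa_pr(alpha,     n, rng)   # bias ~ C*alpha     + higher order
est_half = lsa_pr(alpha / 2, n, rng)   # bias ~ C*alpha/2   + higher order
est_rr   = 2.0 * est_half - est_a      # leading linear-in-alpha bias cancels
```

The combination `2*est_half - est_a` is the standard two-point Richardson-Romberg weighting: subtracting the two bias expansions leaves only higher-order terms, at the cost of roughly doubling the sampling budget.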
Bio:
Ilya Levin is a Junior Research Fellow at the International Laboratory of Stochastic Algorithms and High-Dimensional Inference, and a third-year PhD student in the Faculty of Computer Science at HSE University, Moscow, Russia. His research interests include stochastic approximation, federated learning, and reinforcement learning. Specifically, his work involves probabilistic inference for stochastic approximation and federated stochastic approximation.
Everyone is welcome to attend the talk!