Anchored Value Iteration and Its Impact on Bellman Consistency in Reinforcement Learning


by Anchoring, January 14th, 2025

Too Long; Didn't Read

Anc-VI achieves an accelerated convergence rate for value iteration with the Bellman consistency operator, a conceptual stepping stone toward the analysis for the Bellman optimality operator.

Authors:

(1) Jongmin Lee, Department of Mathematical Sciences, Seoul National University;

(2) Ernest K. Ryu, Department of Mathematical Sciences, Seoul National University and Interdisciplinary Program in Artificial Intelligence, Seoul National University.

Abstract and 1 Introduction

1.1 Notations and preliminaries

1.2 Prior works

2 Anchored Value Iteration

2.1 Accelerated rate for Bellman consistency operator

2.2 Accelerated rate for Bellman optimality operator

3 Convergence when γ = 1

4 Complexity lower bound

5 Approximate Anchored Value Iteration

6 Gauss–Seidel Anchored Value Iteration

7 Conclusion, Acknowledgments and Disclosure of Funding and References

A Preliminaries

B Omitted proofs in Section 2

C Omitted proofs in Section 3

D Omitted proofs in Section 4

E Omitted proofs in Section 5

F Omitted proofs in Section 6

G Broader Impacts

H Limitations

2 Anchored Value Iteration


The accelerated rate of Anc-VI for the Bellman optimality operator is more technically challenging and is, in our view, the stronger contribution. However, we start by presenting the result for the Bellman consistency operator because it is commonly studied in the prior RL theory literature on accelerating value iteration [1, 31, 37, 38] and because the analysis in the Bellman consistency setup will serve as a good conceptual stepping stone towards the analysis in the Bellman optimality setup.
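To make the setup concrete, here is a minimal NumPy sketch of anchored value iteration for the Bellman consistency operator T^π V = r^π + γ P^π V. The anchored update V^{k+1} = β_{k+1} V^0 + (1 − β_{k+1}) T^π(V^k) pulls each iterate back toward the starting point V^0; the simple schedule β_k = 1/(k+2) used below is an illustrative assumption, not the γ-dependent schedule the paper derives for its accelerated rate, and the variable names (`r_pi`, `P_pi`) are this sketch's own.

```python
import numpy as np

def bellman_consistency(V, r_pi, P_pi, gamma):
    """Bellman consistency operator for a fixed policy pi:
    (T^pi V) = r^pi + gamma * P^pi V."""
    return r_pi + gamma * P_pi @ V

def anchored_vi(r_pi, P_pi, gamma, num_iters=20000):
    """Anchored value iteration (sketch):
    V^{k+1} = beta * V^0 + (1 - beta) * T^pi(V^k),
    with the illustrative anchoring schedule beta = 1/(k+2).
    The anchor term vanishes as k grows, so the iteration
    approaches plain value iteration in the limit."""
    V0 = np.zeros_like(r_pi)   # anchor point V^0
    V = V0.copy()
    for k in range(num_iters):
        beta = 1.0 / (k + 2)
        V = beta * V0 + (1.0 - beta) * bellman_consistency(V, r_pi, P_pi, gamma)
    return V
```

Since T^π is a γ-contraction in the sup-norm, the iterates approach the fixed point V^π = (I − γP^π)^{-1} r^π, which can be checked directly against `np.linalg.solve` on a small Markov reward process.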


This paper is available on arxiv under CC BY 4.0 DEED license.