10.1 The simplest actor-critic algorithm (QAC)
This section introduces the simplest actor-critic algorithm, which can be obtained directly by extending the policy gradient algorithm in (9.32).
Recall that the idea of the policy gradient method is to search for an optimal policy by maximizing a scalar metric $J(\theta)$. The gradient-ascent algorithm for maximizing $J(\theta)$ is
$$\theta_{t+1} = \theta_t + \alpha\, \mathbb{E}_{S \sim \eta,\, A \sim \pi}\Big[\nabla_\theta \ln \pi(A|S, \theta_t)\, q_\pi(S, A)\Big], \tag{10.1}$$
where $\eta$ is a distribution of the states (see Theorem 9.1 for more information). Since the true gradient is unknown, we can use a stochastic gradient to approximate it:
$$\theta_{t+1} = \theta_t + \alpha \nabla_\theta \ln \pi(a_t|s_t, \theta_t)\, q_t(s_t, a_t). \tag{10.2}$$
This is the algorithm given in (9.32).
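As a concrete illustration, the following Python sketch performs one stochastic update of the form (10.2) for a tabular softmax policy. The problem sizes, the softmax parameterization, and the numerical value of the estimate $q_t$ are placeholders chosen for illustration; they are not specified in the text.

```python
import numpy as np

n_states, n_actions = 5, 3
theta = np.zeros((n_states, n_actions))   # policy parameter theta
alpha = 0.1                               # step size

def pi(s, theta):
    """Softmax policy pi(.|s, theta) over the actions available in state s."""
    prefs = theta[s] - theta[s].max()     # subtract max for numerical stability
    p = np.exp(prefs)
    return p / p.sum()

def grad_log_pi(s, a, theta):
    """Gradient of ln pi(a|s, theta) w.r.t. theta; nonzero only in row s."""
    g = np.zeros_like(theta)
    g[s] = -pi(s, theta)
    g[s, a] += 1.0
    return g

# One stochastic gradient-ascent step: sample (s_t, a_t) and plug in an
# estimate q_t of the action value (an arbitrary placeholder number here).
s_t, a_t, q_t = 2, 1, 0.7
theta = theta + alpha * q_t * grad_log_pi(s_t, a_t, theta)
```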
Equation (10.2) is important because it clearly shows how policy-based and value-based methods can be combined. On the one hand, it is a policy-based algorithm since it directly updates the policy parameter. On the other hand, this equation requires knowing $q_t(s_t, a_t)$, which is an estimate of the action value $q_\pi(s_t, a_t)$. As a result, another value-based algorithm is required to generate $q_t(s_t, a_t)$. So far, we have studied two ways to estimate action values in this book: the first is based on Monte Carlo learning and the second on temporal-difference (TD) learning.
If $q_t(s_t, a_t)$ is estimated by Monte Carlo learning, the corresponding algorithm is called REINFORCE or Monte Carlo policy gradient, which has already been introduced in Chapter 9.
If $q_t(s_t, a_t)$ is estimated by TD learning, the corresponding algorithms are usually called actor-critic. Therefore, actor-critic methods can be obtained by incorporating TD-based value estimation into policy gradient methods.
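To make the distinction concrete, the short sketch below contrasts the two ways of producing $q_t$. The episode data and the tabular value parameterization are illustrative assumptions, not part of the text.

```python
import numpy as np

gamma = 0.9

# (a) Monte Carlo estimate (REINFORCE): the discounted return observed from
#     time t to the end of the episode.
rewards_from_t = [1.0, 0.0, 2.0]                      # r_{t+1}, r_{t+2}, ...
q_t_mc = sum(gamma**k * r for k, r in enumerate(rewards_from_t))

# (b) TD-based estimate (actor-critic): a learned value function q(s, a, w),
#     here a tabular parameterization, evaluated at the sampled pair (s_t, a_t).
w = np.zeros((5, 3))                                  # critic parameter
s_t, a_t, r_next, s_next, a_next = 2, 1, 1.0, 3, 0
q_t_td = w[s_t, a_t]                                  # used in the actor update
td_target = r_next + gamma * w[s_next, a_next]        # drives the critic update
```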
The procedure of the simplest actor-critic algorithm is summarized in Algorithm 10.1. The critic corresponds to the value update step, which follows the Sarsa algorithm presented in (8.35); the action values are represented by a parameterized function $q(s, a, w)$. The actor corresponds to the policy update step in (10.2). This actor-critic algorithm is sometimes called Q actor-critic (QAC). Although it is simple, QAC reveals the core idea of actor-critic methods, and it can be extended to generate the more advanced algorithms presented in the rest of this chapter.
Algorithm 10.1: The simplest actor-critic algorithm (QAC)
Initialization: A policy function $\pi(a|s, \theta_0)$, where $\theta_0$ is the initial parameter. A value function $q(s, a, w_0)$, where $w_0$ is the initial parameter. Step sizes $\alpha_\theta, \alpha_w > 0$.
Goal: Learn an optimal policy to maximize $J(\theta)$.
At time step $t$ in each episode, do
Generate $a_t$ following $\pi(a|s_t, \theta_t)$, observe $r_{t+1}, s_{t+1}$, and then generate $a_{t+1}$ following $\pi(a|s_{t+1}, \theta_t)$.
Actor (policy update): $\theta_{t+1} = \theta_t + \alpha_\theta \nabla_\theta \ln \pi(a_t|s_t, \theta_t)\, q(s_t, a_t, w_t)$
Critic (value update): $w_{t+1} = w_t + \alpha_w \big[ r_{t+1} + \gamma q(s_{t+1}, a_{t+1}, w_t) - q(s_t, a_t, w_t) \big] \nabla_w q(s_t, a_t, w_t)$
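For readers who prefer code, below is a minimal Python sketch of a QAC-style loop in the spirit of Algorithm 10.1. The environment interface (env.reset / env.step), the tabular softmax policy, and the tabular value parameterization $q(s, a, w) = w[s, a]$ are assumptions made for illustration; the algorithm itself only requires a differentiable policy $\pi(a|s, \theta)$ and value function $q(s, a, w)$.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def run_qac(env, n_states, n_actions, episodes=500,
            gamma=0.9, alpha_theta=0.01, alpha_w=0.05):
    """QAC sketch: the actor follows (10.2) with q_t = q(s_t, a_t, w_t),
    and the critic is a Sarsa-style update of the value parameter w."""
    theta = np.zeros((n_states, n_actions))   # actor parameter
    w = np.zeros((n_states, n_actions))       # critic parameter, q(s,a,w) = w[s,a]

    for _ in range(episodes):
        s = env.reset()                       # assumed interface: returns a state index
        a = np.random.choice(n_actions, p=softmax(theta[s]))
        done = False
        while not done:
            s_next, r, done = env.step(a)     # assumed interface: (state, reward, done)
            a_next = np.random.choice(n_actions, p=softmax(theta[s_next]))

            # Actor (policy update): gradient of ln pi is nonzero only in row s.
            grad_log_pi = -softmax(theta[s])
            grad_log_pi[a] += 1.0
            theta[s] += alpha_theta * w[s, a] * grad_log_pi

            # Critic (value update): Sarsa target; for the tabular parameterization
            # the gradient of q w.r.t. w is an indicator, so only w[s, a] changes.
            td_error = r + gamma * w[s_next, a_next] * (not done) - w[s, a]
            w[s, a] += alpha_w * td_error

            s, a = s_next, a_next
    return theta, w
```

In this sketch the actor update precedes the critic update within a time step, matching the order in the algorithm box above, so the actor uses the current critic parameter $w_t$.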