2.3 State values
We mentioned that returns can be used to evaluate policies. However, a single return is inadequate for stochastic systems, because starting from the same state may lead to different returns. Motivated by this problem, we introduce the concept of state values in this section.
First, we need to introduce some necessary notations. Consider a sequence of time steps $t = 0, 1, 2, \dots$. At time $t$, the agent is in state $S_t$, and the action taken following a policy $\pi$ is $A_t$. The next state is $S_{t+1}$, and the immediate reward obtained is $R_{t+1}$. This process can be expressed concisely as

$$S_t \xrightarrow{A_t} S_{t+1}, R_{t+1}.$$
Note that $S_t$, $A_t$, $S_{t+1}$, $R_{t+1}$ are all random variables. Moreover, $S_t, S_{t+1} \in \mathcal{S}$, $A_t \in \mathcal{A}(S_t)$, and $R_{t+1} \in \mathcal{R}(S_t, A_t)$.
Starting from $t$, we can obtain a state-action-reward trajectory:

$$S_t \xrightarrow{A_t} S_{t+1}, R_{t+1} \xrightarrow{A_{t+1}} S_{t+2}, R_{t+2} \xrightarrow{A_{t+2}} S_{t+3}, R_{t+3}, \dots$$
By definition, the discounted return along the trajectory is

$$G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \cdots,$$
where $\gamma \in (0, 1)$ is the discount rate. Note that $G_t$ is a random variable since $R_{t+1}, R_{t+2}, \dots$ are all random variables.
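As a quick numerical illustration (not from the text), the following sketch computes the discounted return of a reward sequence. The reward values and $\gamma = 0.9$ are made-up; the trajectory is truncated to a finite length, which approximates the infinite sum since the weights $\gamma^k$ shrink geometrically.

```python
# A minimal sketch: discounted return of a (truncated) reward sequence.
# The rewards and gamma below are made-up illustrative values.

def discounted_return(rewards, gamma):
    """Compute G_t = R_{t+1} + gamma*R_{t+2} + gamma^2*R_{t+3} + ..."""
    g = 0.0
    # Iterate backwards so each step folds in one more discount factor:
    # G = R_{t+1} + gamma * (R_{t+2} + gamma * (R_{t+3} + ...))
    for r in reversed(rewards):
        g = r + gamma * g
    return g

rewards = [0.0, 0.0, 1.0, 1.0, 1.0]  # R_{t+1}, R_{t+2}, ...
print(discounted_return(rewards, gamma=0.9))  # 0.9^2 + 0.9^3 + 0.9^4 = 2.1951
```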
Since $G_t$ is a random variable, we can calculate its expected value (also called the expectation or mean):

$$v_\pi(s) \doteq \mathbb{E}[G_t \mid S_t = s].$$
Here, $v_\pi(s)$ is called the state-value function or simply the state value of $s$. Some important remarks are given below.
$v_\pi(s)$ depends on $s$. This is because its definition is a conditional expectation with the condition that the agent starts from $S_t = s$.
$v_\pi(s)$ depends on $\pi$. This is because the trajectories are generated by following the policy $\pi$. For a different policy, the state value may be different.
$v_\pi(s)$ does not depend on $t$. If the agent moves in the state space, $t$ represents the current time step. The value of $v_\pi(s)$ is determined once the policy is given.
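To make the definition $v_\pi(s) = \mathbb{E}[G_t \mid S_t = s]$ concrete, here is a minimal Monte Carlo sketch: sample many trajectories from a fixed start state and average their (truncated) returns. The two-state MDP, its transition probabilities and rewards, and the stochastic policy below are all invented purely for illustration.

```python
import random

# A made-up two-state MDP, purely for illustration.
# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    0: {"left":  [(1.0, 0, 0.0)],
        "right": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"left":  [(1.0, 0, 0.0)],
        "right": [(1.0, 1, 1.0)]},
}

# A made-up stochastic policy: policy[s][a] = pi(a | s).
policy = {0: {"left": 0.5, "right": 0.5},
          1: {"left": 0.1, "right": 0.9}}

def sample(dist):
    """Draw from a list of (probability, *outcome) tuples."""
    u, acc = random.random(), 0.0
    for p, *outcome in dist:
        acc += p
        if u <= acc:
            return outcome
    return outcome  # guard against floating-point rounding

def sample_return(start, gamma=0.9, horizon=100):
    """Return G_0 of one sampled trajectory, truncated at `horizon` steps."""
    s, g, discount = start, 0.0, 1.0
    for _ in range(horizon):
        a = sample([(p, a) for a, p in policy[s].items()])[0]
        s, r = sample(transitions[s][a])
        g += discount * r
        discount *= gamma
    return g

# Monte Carlo estimate of v_pi(s): the mean return over many trajectories.
n = 10_000
for s in (0, 1):
    v = sum(sample_return(s) for _ in range(n)) / n
    print(f"estimated v_pi({s}) ~ {v:.3f}")
```

This is only a sampling-based approximation of the expectation; the next section shows how state values can be computed exactly.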
The relationship between state values and returns is further clarified as follows. When both the policy and the system model are deterministic, starting from a state always leads to the same trajectory. In this case, the return obtained starting from a state is equal to the value of that state. By contrast, when either the policy or the system model is stochastic, starting from the same state may generate different trajectories. In this case, the returns of different trajectories may be different, and the state value is the mean of these returns.
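For instance (a made-up example), suppose that starting from a state $s$ a stochastic policy produces one of two trajectories: with probability $0.5$ a trajectory whose return is $G = 1$, and with probability $0.5$ a trajectory whose return is $G = 0$. Then $v_\pi(s) = 0.5 \times 1 + 0.5 \times 0 = 0.5$, which equals neither individual return; no single return can characterize the state in this case.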
Although returns can be used to evaluate policies as shown in Section 2.1, it is more formal to use state values to evaluate policies: policies that generate greater state values are better. Therefore, state values constitute a core concept in reinforcement learning. While state values are important, a question that immediately follows is how to calculate them. This question is answered in the next section.