In this section we show that off-line TD($\lambda$), as defined
mechanistically above, achieves the same weight updates as the off-line $\lambda$-return
algorithm. In this sense we align the forward (theoretical) and
backward (mechanistic) views of
TD($\lambda$). Let $\Delta V^{\lambda}_t(s_t)$ denote the update at time $t$ of $V(s_t)$ according to the
$\lambda$-return algorithm (7.4), and let $\Delta V^{TD}_t(s)$ denote the update
at time $t$ of state $s$ according to the mechanistic definition of TD($\lambda$) as
given by (7.7). Then our goal is to
show that the sum of all the updates over an episode is the same for the two
algorithms:

$$\sum_{t=0}^{T-1} \Delta V^{TD}_t(s) \;=\; \sum_{t=0}^{T-1} \Delta V^{\lambda}_t(s_t)\, I_{ss_t}, \qquad\qquad (7.8)$$

where $I_{ss_t}$ is an identity indicator function, equal to $1$ if $s=s_t$ and equal to $0$ otherwise.
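Before deriving this, it may help to see (7.8) hold on a single hand-made episode. The sketch below is an illustration only: the episode, the initial values, and the parameter settings are all invented. It accumulates the off-line TD($\lambda$) increments through eligibility traces and compares them, state by state, with the summed $\lambda$-return increments.

```python
import numpy as np

gamma, lam, alpha = 1.0, 0.8, 0.1

# A hand-made episode: states[t] is s_t, rewards[t] is r_{t+1}; the episode then terminates
states  = [0, 1, 2, 1, 3]
rewards = [0.0, 1.0, 0.0, -1.0, 2.0]
T = len(states)
V = np.array([0.1, -0.2, 0.3, 0.0])      # arbitrary initial values; the terminal value is 0
n = len(V)

def v(step):
    """Value of the state visited at `step`; the terminal state has value 0."""
    return V[states[step]] if step < T else 0.0

# One-step TD errors; V is held fixed for the whole episode (off-line updating)
delta = [rewards[t] + gamma * v(t + 1) - v(t) for t in range(T)]

# Left-hand side of (7.8): off-line TD(lambda) with accumulating traces
lhs = np.zeros(n)
e = np.zeros(n)
for t in range(T):
    e *= gamma * lam
    e[states[t]] += 1.0                  # accumulating eligibility trace
    lhs += alpha * delta[t] * e

# Right-hand side of (7.8): lambda-return updates, credited to the state visited at time t
def lambda_return(t):
    ret = 0.0
    for m in range(1, T - t):            # n-step returns that bootstrap before termination
        G = sum(gamma**i * rewards[t + i] for i in range(m)) + gamma**m * v(t + m)
        ret += (1 - lam) * lam**(m - 1) * G
    G_full = sum(gamma**i * rewards[t + i] for i in range(T - t))
    return ret + lam**(T - t - 1) * G_full

rhs = np.zeros(n)
for t in range(T):
    rhs[states[t]] += alpha * (lambda_return(t) - v(t))

print("summed off-line TD(lambda) updates:", lhs)
print("summed lambda-return updates:      ", rhs)
assert np.allclose(lhs, rhs)             # (7.8): the two algorithms agree state by state
```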
First note that an accumulating eligibility trace can be written
explicitly (nonrecursively) as

$$e_t(s) \;=\; \sum_{k=0}^{t} (\gamma\lambda)^{t-k}\, I_{ss_k}.$$
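As a quick sanity check, the recursive trace update and this explicit sum can be compared numerically; in the sketch below (an illustration, with an arbitrary invented visit sequence and parameters), they agree at every step.

```python
import numpy as np

gamma, lam = 0.9, 0.7
visits = [2, 0, 1, 0, 2, 2, 3]      # s_0, s_1, ...: an arbitrary state-visit sequence
n_states = 4

e_rec = np.zeros(n_states)          # recursive form: decay all traces, then bump the visited state
for t, s_t in enumerate(visits):
    e_rec = gamma * lam * e_rec
    e_rec[s_t] += 1.0

    # explicit (nonrecursive) form: e_t(s) = sum_{k<=t} (gamma*lam)^(t-k) * 1[s == s_k]
    e_exp = np.array([sum((gamma * lam) ** (t - k)
                          for k in range(t + 1) if visits[k] == s)
                      for s in range(n_states)])
    assert np.allclose(e_rec, e_exp)

print("recursive and explicit traces agree at every step; final trace:", e_rec)
```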
Substituting this explicit form, the left-hand side of (7.8) can be written

$$\sum_{t=0}^{T-1} \Delta V^{TD}_t(s) \;=\; \sum_{t=0}^{T-1} \alpha\,\delta_t \sum_{k=0}^{t} (\gamma\lambda)^{t-k} I_{ss_k} \;=\; \alpha \sum_{t=0}^{T-1} I_{ss_t} \sum_{k=t}^{T-1} (\gamma\lambda)^{k-t}\,\delta_k ,$$

where the second equality follows from exchanging the order of summation and renaming the indices.

Now we turn to the right-hand side of (7.8).
Consider an individual update of the $\lambda$-return algorithm:

$$\frac{1}{\alpha}\Delta V^{\lambda}_t(s_t) \;=\; R^{\lambda}_t - V_t(s_t) \;=\; -V_t(s_t) + (1-\lambda)\lambda^0\big[r_{t+1} + \gamma V_t(s_{t+1})\big] + (1-\lambda)\lambda^1\big[r_{t+1} + \gamma r_{t+2} + \gamma^2 V_t(s_{t+2})\big] + (1-\lambda)\lambda^2\big[r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \gamma^3 V_t(s_{t+3})\big] + \cdots$$

Summing each column of rewards and values (the weights within each column sum to a power of $\gamma\lambda$) and regrouping the result as one-step errors gives

$$\frac{1}{\alpha}\Delta V^{\lambda}_t(s_t) \;=\; \sum_{k=t}^{T-1} (\gamma\lambda)^{k-t}\big[r_{k+1} + \gamma V_t(s_{k+1}) - V_t(s_k)\big] \;\approx\; \sum_{k=t}^{T-1} (\gamma\lambda)^{k-t}\,\delta_k ,$$

where the approximation consists of replacing $V_t$ by $V_k$ in each term. Substituting this into the right-hand side of (7.8) reproduces the left-hand side term for term. The approximation is exact in the case of off-line updating, in which case $V_t$ is the same for all $t$, so (7.8) holds exactly for off-line TD($\lambda$).
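The regrouping step can also be checked numerically, one $t$ at a time. In the sketch below (an illustration with an invented episode and value table, holding $V$ fixed as in the off-line case), the $\lambda$-return computed from its definition and the sum of discounted TD errors agree at every step.

```python
import numpy as np

gamma, lam = 0.95, 0.6
rewards = [1.0, 0.0, -2.0, 3.0, 0.5]        # r_1 .. r_T
values  = [0.2, -0.4, 0.1, 0.3, -0.1]       # V(s_0) .. V(s_{T-1}), held fixed; terminal value 0
T = len(rewards)

def v(t):
    return values[t] if t < T else 0.0      # terminal value is zero

delta = [rewards[t] + gamma * v(t + 1) - v(t) for t in range(T)]

def lambda_return(t):
    """R_t^lambda: (1-lam)-weighted n-step returns, plus lam^(T-t-1) times the full return."""
    ret = 0.0
    for n in range(1, T - t):
        G_n = sum(gamma**i * rewards[t + i] for i in range(n)) + gamma**n * v(t + n)
        ret += (1 - lam) * lam**(n - 1) * G_n
    G_full = sum(gamma**i * rewards[t + i] for i in range(T - t))
    return ret + lam**(T - t - 1) * G_full

for t in range(T):
    forward  = lambda_return(t) - v(t)                                  # forward view
    backward = sum((gamma * lam)**(k - t) * delta[k] for k in range(t, T))  # backward view
    assert abs(forward - backward) < 1e-12
    print(f"t={t}:  R^lambda - V = {forward:+.6f}   sum of discounted TD errors = {backward:+.6f}")
```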
In the case of on-line updating, the approximation made above will be close as long as $\alpha$ is small and thus $V$ changes little during an episode. Even in the on-line case we can expect the updates of TD($\lambda$) and of the $\lambda$-return algorithm to be similar.
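A small experiment makes the dependence on $\alpha$ concrete. In the sketch below (an illustration; the episode, initial values, and $\alpha$ grid are invented), the episode totals of on-line TD($\lambda$), divided by $\alpha$, approach the $\lambda$-return totals as $\alpha$ shrinks, with the gap typically decreasing roughly in proportion to $\alpha$.

```python
import numpy as np

gamma, lam = 1.0, 0.9
states  = [0, 1, 2, 3, 2, 4]                  # s_0 .. s_{T-1}
rewards = [0.0, 0.0, 1.0, 0.0, -1.0, 2.0]     # r_1 .. r_T
T, n = len(states), 5
V0 = np.array([0.5, -0.5, 0.25, 0.0, 0.1])    # initial values

def lambda_return_total(V):
    """lambda-return algorithm: per-state sums of R_t^lambda - V(s_t), with V held fixed."""
    def v(t): return V[states[t]] if t < T else 0.0
    total = np.zeros(n)
    for t in range(T):
        ret = 0.0
        for m in range(1, T - t):
            G = sum(gamma**i * rewards[t + i] for i in range(m)) + gamma**m * v(t + m)
            ret += (1 - lam) * lam**(m - 1) * G
        G_full = sum(gamma**i * rewards[t + i] for i in range(T - t))
        ret += lam**(T - t - 1) * G_full
        total[states[t]] += ret - v(t)
    return total

def online_td_lambda_total(alpha):
    """On-line TD(lambda): V is updated after every step; returns (V_T - V_0) / alpha."""
    V, e = V0.copy(), np.zeros(n)
    for t in range(T):
        v_next = V[states[t + 1]] if t + 1 < T else 0.0
        delta = rewards[t] + gamma * v_next - V[states[t]]
        e = gamma * lam * e
        e[states[t]] += 1.0
        V += alpha * delta * e
    return (V - V0) / alpha

target = lambda_return_total(V0)
for alpha in (0.5, 0.1, 0.01, 0.001):
    gap = np.max(np.abs(online_td_lambda_total(alpha) - target))
    print(f"alpha = {alpha:<6}  max gap between normalized totals = {gap:.6f}")
```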
For the moment let us assume that the increments are small enough during an episode that on-line TD($\lambda$) gives essentially the same update over the course of an episode as does the $\lambda$-return algorithm. There still remain interesting questions about what happens during an episode. Consider the updating of the value of state $s_t$ in midepisode, at time $t+k$. Under on-line TD($\lambda$), the effect at $t+k$ is just as if we had done a $\lambda$-return update treating the last observed state as the terminal state of the episode with a nonzero terminal value equal to its current estimated value. This relationship is maintained from step to step as each new state is observed.
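Under the small-increment idealization just made ($V$ held fixed during the episode), this interpretation can be checked directly. In the sketch below (an illustration with an invented episode and one distinct state per time step), the credit that on-line TD($\lambda$) has assigned to $s_t$ by time $t+k$ equals a $\lambda$-return update whose return is truncated at the most recently observed state, using that state's current value in place of a terminal outcome.

```python
import numpy as np

gamma, lam = 0.9, 0.8
rewards = [0.0, 1.0, -1.0, 2.0, 0.0, 3.0]   # r_1 .. r_T
values  = [0.3, -0.1, 0.4, 0.2, -0.3, 0.1]  # V(s_0) .. V(s_{T-1}), held fixed; one state per step
T = len(rewards)

def v(t): return values[t] if t < T else 0.0
delta = [rewards[t] + gamma * v(t + 1) - v(t) for t in range(T)]

def truncated_lambda_return(t, k):
    """lambda-return truncated after k steps, treating s_{t+k} as 'terminal'
    with a terminal value equal to its current estimate v(t+k)."""
    ret = 0.0
    for m in range(1, k + 1):
        G_m = sum(gamma**i * rewards[t + i] for i in range(m)) + gamma**m * v(t + m)
        weight = (1 - lam) * lam**(m - 1) if m < k else lam**(k - 1)
        ret += weight * G_m
    return ret

t = 1                                   # the state whose value we track
for k in range(1, T - t + 1):           # after k more steps of the episode
    backward = sum((gamma * lam)**(i - t) * delta[i] for i in range(t, t + k))
    forward  = truncated_lambda_return(t, k) - v(t)
    assert abs(backward - forward) < 1e-12
    print(f"k={k}:  TD(lambda) credit so far = {backward:+.6f}   "
          f"truncated lambda-return update = {forward:+.6f}")
```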
Example 7.3: Random Walk with TD($\lambda$)   Because off-line TD($\lambda$) is equivalent to the $\lambda$-return algorithm, we already have the results for off-line TD($\lambda$) on the 19-state random walk task; they are shown in Figure 7.6. The comparable results for on-line TD($\lambda$) are shown in Figure 7.9. Note that the on-line algorithm works better over a broader range of parameters. This is often found to be the case for on-line methods.
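For readers who want to reproduce the spirit of this example, here is a minimal sketch of on-line TD($\lambda$) on the 19-state random walk. The task setup (start in the middle state, rewards of $-1$ and $+1$ on termination at the two ends, true values evenly spaced in between, $\gamma = 1$) follows the random walk described earlier in the chapter, while the number of runs, episodes, and the $(\lambda, \alpha)$ grid are arbitrary choices for illustration and are not meant to reproduce the figure exactly.

```python
import numpy as np

N = 19                                      # nonterminal states 1..19; 0 and 20 are terminal
TRUE_V = np.arange(1, N + 1) / 10.0 - 1.0   # true values: -0.9, -0.8, ..., +0.9
gamma = 1.0

def online_td_lambda_rmse(lam, alpha, n_episodes=10, seed=0):
    """On-line TD(lambda) with accumulating traces on the 19-state random walk.
    Returns the RMS value error averaged over the first n_episodes."""
    rng = np.random.default_rng(seed)
    V = np.zeros(N + 2)                     # indices 0..20; terminal values stay 0
    errors = []
    for _ in range(n_episodes):
        e = np.zeros(N + 2)
        s = 10                              # start in the middle
        while s not in (0, N + 1):
            s_next = s + (1 if rng.random() < 0.5 else -1)
            r = 1.0 if s_next == N + 1 else (-1.0 if s_next == 0 else 0.0)
            delta = r + gamma * V[s_next] - V[s]
            e *= gamma * lam
            e[s] += 1.0                     # accumulating trace
            V += alpha * delta * e          # on-line: the update is applied immediately
            s = s_next
        errors.append(np.sqrt(np.mean((V[1:N + 1] - TRUE_V) ** 2)))
    return float(np.mean(errors))

# Averages over a handful of runs; the (lambda, alpha) grid is purely illustrative.
for lam in (0.0, 0.4, 0.8, 0.9):
    row = []
    for alpha in (0.05, 0.1, 0.2):
        avg = np.mean([online_td_lambda_rmse(lam, alpha, seed=run) for run in range(20)])
        row.append(f"alpha={alpha:.2f}: {avg:.3f}")
    print(f"lambda={lam:.1f}   " + "   ".join(row))
```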
Exercise 7.5   Although TD($\lambda$) only approximates the $\lambda$-return algorithm when done on-line, perhaps there's a slightly different TD method that would maintain the equivalence even in the on-line case. One idea is to define the TD error instead as $\delta_t = r_{t+1} + \gamma V_t(s_{t+1}) - V_{t-1}(s_t)$ and the $n$-step return as $R^{(n)}_t = r_{t+1} + \gamma r_{t+2} + \cdots + \gamma^{n-1} r_{t+n} + \gamma^n V_{t+n-1}(s_{t+n})$. Show that in this case the modified TD($\lambda$) algorithm would then achieve exactly the equivalence (7.8) with the $\lambda$-return algorithm, even in the case of on-line updating.
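One way to build intuition for this exercise, without working out the algebra, is to check the claim numerically. The sketch below is an illustration only: the random episode, the initial values, and the deliberately large $\alpha$ are invented. It runs the modified on-line algorithm while recording the value function before each step, then recomputes the $\lambda$-return totals from those recorded snapshots using the modified $n$-step returns; the two episode totals agree to machine precision.

```python
import numpy as np

rng = np.random.default_rng(3)
gamma, lam, alpha = 1.0, 0.7, 0.5           # deliberately large alpha
n = 6
states  = list(rng.integers(0, n, size=8))  # s_0 .. s_{T-1}: a random episode
rewards = list(rng.normal(size=8))          # r_1 .. r_T
T = len(states)

V = rng.normal(size=n)                      # V_0: arbitrary initial values
V0 = V.copy()
snapshots = [V.copy()]                      # snapshots[t] = V_t, the values before update t

e = np.zeros(n)
for t in range(T):
    v_next = V[states[t + 1]] if t + 1 < T else 0.0
    v_prev = snapshots[t - 1][states[t]] if t >= 1 else V0[states[t]]
    delta = rewards[t] + gamma * v_next - v_prev      # modified TD error: uses V_{t-1}(s_t)
    e *= gamma * lam
    e[states[t]] += 1.0                               # accumulating trace
    V += alpha * delta * e                            # on-line update of all traced states
    snapshots.append(V.copy())                        # snapshots[t+1] = V_{t+1}

def v_at(time_index, step):                 # V_{time_index}(s_step); terminal value is 0
    return snapshots[time_index][states[step]] if step < T else 0.0

def modified_lambda_return(t):
    """lambda-return built from the modified n-step returns, which bootstrap
    with V_{t+n-1}(s_{t+n})."""
    ret = 0.0
    for m in range(1, T - t):
        G = sum(gamma**i * rewards[t + i] for i in range(m)) + gamma**m * v_at(t + m - 1, t + m)
        ret += (1 - lam) * lam**(m - 1) * G
    G_full = sum(gamma**i * rewards[t + i] for i in range(T - t))
    return ret + lam**(T - t - 1) * G_full

forward_total = np.zeros(n)
for t in range(T):
    v_prev = snapshots[t - 1][states[t]] if t >= 1 else V0[states[t]]
    forward_total[states[t]] += alpha * (modified_lambda_return(t) - v_prev)

print("on-line totals (V_T - V_0):  ", V - V0)
print("modified lambda-return sums: ", forward_total)
assert np.allclose(V - V0, forward_total)   # exact agreement, even on-line with large alpha
```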