nip.utils.bugfix.reward2go

nip.utils.bugfix.reward2go(reward, done, gamma, time_dim: int = -2)

Compute the discounted cumulative sum of rewards for multiple trajectories.

This is the fixed version of the function. The original version had a bug where the reward-to-go tensor was reshaped rather than transposed.
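
Concretely, within each episode segment the reward-to-go at step t is the discounted sum of the rewards from step t to the end of that segment, with segment boundaries given by the done flags. In standard notation, with gamma the discount factor, r_k the reward at step k, and T the last step of the segment:

    R_t = \sum_{k=t}^{T} \gamma^{k - t} r_k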

Parameters:
  • reward (torch.Tensor) – A tensor containing the rewards received at each time step over multiple trajectories.

  • done (torch.Tensor) – Boolean flag marking the end of an episode. Differs from truncated, which indicates that the episode was interrupted but did not end.

  • gamma (float) – The discount factor to use for computing the discounted cumulative sum of rewards.

  • time_dim (int, optional) – The dimension along which time is unrolled. Defaults to -2.

Returns:

torch.Tensor – A tensor of shape [B, T] (batch, time) containing the discounted cumulative sum of rewards (reward-to-go) at each time step.

Examples

>>> import torch
>>> from nip.utils.bugfix import reward2go
>>> reward = torch.ones(1, 10)
>>> done = torch.zeros(1, 10, dtype=torch.bool)
>>> done[:, [3, 7]] = True
>>> reward2go(reward, done, 0.99, time_dim=-1)
tensor([[3.9404],
        [2.9701],
        [1.9900],
        [1.0000],
        [3.9404],
        [2.9701],
        [1.9900],
        [1.0000],
        [1.9900],
        [1.0000]])
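
For intuition, the following is a minimal pure-Python sketch of the same computation (reward2go_naive is a hypothetical name, not part of nip): it walks a single 1-D trajectory backwards, resetting the running discounted sum at each done flag. The actual function is vectorised in torch and supports batched input. Note that it reproduces the values in the example above.

>>> def reward2go_naive(reward, done, gamma):
...     # Accumulate the discounted sum backwards through the trajectory,
...     # restarting whenever a done flag marks the end of a segment.
...     out = [0.0] * len(reward)
...     running = 0.0
...     for t in reversed(range(len(reward))):
...         if done[t]:
...             running = 0.0  # step t is terminal, so its reward-to-go is just reward[t]
...         running = reward[t] + gamma * running
...         out[t] = running
...     return out
>>> done = [t in (3, 7) for t in range(10)]
>>> [round(r, 4) for r in reward2go_naive([1.0] * 10, done, 0.99)]
[3.9404, 2.9701, 1.99, 1.0, 3.9404, 2.9701, 1.99, 1.0, 1.99, 1.0]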