nip.rl_objectives.ReinforceLossImproved#
- class nip.rl_objectives.ReinforceLossImproved(*args, **kwargs)[source]#
Reinforce loss which allows multiple action keys and normalises advantages.
The implementation is also tweaked slightly to allow it to work without a critic. In this case reward-to-go is used instead of the advantage.
The __init__ method is copied from the original ReinforceLoss class with some tweaks.
See torchrl.objectives.ReinforceLoss for more details.
- Parameters:
actor_network (ProbabilisticTensorDictSequential) – The policy operator.
critic_network (TensorDictModule, optional) – The value operator, if using a critic.
loss_weighting_type (str, optional) – The type of weighting to use in the loss. Can be one of “advantage” or “reward_to_go”. The former requires a critic network. Defaults to “advantage” when a critic is used, otherwise “reward_to_go”.
delay_value (bool, optional) – If True, a target network is needed for the critic. Defaults to False. Incompatible with functional=False.
loss_critic_type (str, default="smooth_l1") – Loss function for the value discrepancy. Can be one of “l1”, “l2” or “smooth_l1”.
gamma (float, optional) – The discount factor. Required if loss_weighting_type="reward_to_go".
separate_losses (bool, default=False) – If True, shared parameters between policy and critic will only be trained on the policy loss. Defaults to False, i.e. gradients are propagated to shared parameters for both policy and critic losses.
functional (bool, default=True) – Whether modules should be functionalized. Functionalizing permits features like meta-RL, but makes it impossible to use distributed models (DDP, FSDP, …) and comes with a little cost. Defaults to True.
normalize_advantage (bool, default=True) – Whether to normalise the advantage. Defaults to True.
clip_value (float, optional) – If provided, it will be used to compute a clipped version of the value prediction with respect to the input tensordict value estimate and use it to calculate the value loss. The purpose of clipping is to limit the impact of extreme value predictions, helping stabilize training and preventing large updates. However, it will have no impact if the value estimate was done by the current version of the value estimator. Defaults to None.
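A minimal construction sketch, assuming a toy categorical policy built with tensordict.nn; the network sizes and key names are illustrative assumptions, not part of the package:

```python
# A hedged sketch: construct the loss without a critic, in which case
# reward-to-go weighting is used instead of the advantage.
import torch
from tensordict.nn import (
    ProbabilisticTensorDictModule,
    ProbabilisticTensorDictSequential,
    TensorDictModule,
)

from nip.rl_objectives import ReinforceLossImproved

# Toy policy: map an "observation" to categorical logits, then sample an "action".
backbone = TensorDictModule(
    torch.nn.Linear(4, 2), in_keys=["observation"], out_keys=["logits"]
)
head = ProbabilisticTensorDictModule(
    in_keys=["logits"],
    out_keys=["action"],
    distribution_class=torch.distributions.Categorical,
    return_log_prob=True,
)
actor = ProbabilisticTensorDictSequential(backbone, head)

# Without a critic network, reward-to-go weighting is used, so gamma is required.
loss_module = ReinforceLossImproved(
    actor_network=actor,
    loss_weighting_type="reward_to_go",
    gamma=0.99,
)
```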
Methods Summary
__init__(actor_network[, critic_network, ...]) – Initialize internal Module state, shared by both nn.Module and ScriptModule.
_get_advantage(tensordict) – Get the advantage for a tensordict, normalising it if required.
_log_weight(sample) – Compute the log weight for the given TensorDict sample.
_loss_critic(tensordict) – Get the critic loss without the clip fraction.
backward(loss_vals) – Perform the backward pass for the loss.
forward(tensordict) – Compute the loss for the given input TensorDict.
set_keys(**kwargs) – Set the keys of the input TensorDict that are used by this loss.
Attributes
SEP
TARGET_NET_WARNING
T_destination
action_keys
call_super_init
default_keys
default_value_estimator
dump_patches
functional
in_keys
out_keys
out_keys_source
tensor_keys
value_estimator – The value function blends in the reward and value estimate(s) from upcoming state(s)/state-action pair(s) into a target value estimate for the value network.
vmap_randomness
training
Methods
- __init__(actor_network: ProbabilisticTensorDictSequential, critic_network: TensorDictModule | None = None, *, loss_weighting_type: str | None = None, delay_value: bool = False, loss_critic_type: str = 'smooth_l1', gamma: float | None = None, advantage_key: str | None = None, value_target_key: str | None = None, separate_losses: bool = False, functional: bool = True, normalize_advantage: bool = True, clip_value: float | None = None)[source]#
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- _get_advantage(tensordict: TensorDictBase) Tensor [source]#
Get the advantage for a tensordict, normalising it if required.
- Parameters:
tensordict (TensorDictBase) – The input TensorDict.
- Returns:
advantage (torch.Tensor) – The normalised advantage.
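As a hedged illustration of what the normalisation typically means, a per-batch standardisation looks roughly like the sketch below; the exact computation, the “advantage” key name and the epsilon value are assumptions, not the package’s implementation.

```python
# Illustrative only: a common way to standardise an advantage per batch. This is
# not necessarily the exact computation performed by _get_advantage.
import torch
from tensordict import TensorDictBase


def normalise_advantage_sketch(tensordict: TensorDictBase, eps: float = 1e-8) -> torch.Tensor:
    advantage = tensordict.get("advantage")  # the "advantage" key name is an assumption
    if advantage.numel() > 1:
        advantage = (advantage - advantage.mean()) / (advantage.std() + eps)
    return advantage
```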
- _log_weight(sample: TensorDictBase) tuple[Tensor, Tensor, Distribution] [source]#
Compute the log weight for the given TensorDict sample.
- Parameters:
sample (TensorDictBase) – The sample TensorDict.
- Returns:
log_prob (torch.Tensor) – The log probabilities of the sample.
log_weight (torch.Tensor) – The log weight of the sample.
dist (torch.distributions.Distribution) – The distribution used to compute the log weight.
- _loss_critic(tensordict: TensorDictBase) Tensor [source]#
Get the critic loss without the clip fraction.
TorchRL’s loss_critic method returns a tuple with the critic loss and the clip fraction. This method returns only the critic loss.
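A hedged sketch of such a wrapper, assuming the parent method returns a (critic loss, clip fraction) pair as described above; this is illustrative, not the package’s code.

```python
# Illustrative only: drop the clip fraction and keep just the critic loss.
from torchrl.objectives import ReinforceLoss


class CriticLossOnlySketch(ReinforceLoss):
    def _loss_critic(self, tensordict):
        # loss_critic is assumed to return (critic_loss, clip_fraction), per the
        # description above; only the loss tensor is kept.
        critic_loss, _clip_fraction = self.loss_critic(tensordict)
        return critic_loss
```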
- backward(loss_vals: TensorDictBase)[source]#
Perform the backward pass for the loss.
- Parameters:
loss_vals (TensorDictBase) – The loss values.
- forward(tensordict: TensorDictBase) TensorDictBase [source]#
Compute the loss for the given input TensorDict.
- Parameters:
tensordict (TensorDictBase) – The input TensorDict.
- Returns:
TensorDictBase – The output TensorDict containing the loss values.
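A hedged sketch of the overall training step, continuing from the construction example above; the data source (`collector`) and optimiser settings are illustrative assumptions, and the rollout must already contain whatever keys the loss expects (see in_keys and set_keys).

```python
# Illustrative calling pattern only: compute the losses, run the class's own
# backward pass, then step the optimiser.
import torch

optimizer = torch.optim.Adam(loss_module.parameters(), lr=3e-4)

for rollout in collector:               # `collector` is a hypothetical data source
    loss_vals = loss_module(rollout)    # forward: a TensorDict of loss values
    loss_module.backward(loss_vals)     # backward pass defined by this class
    optimizer.step()
    optimizer.zero_grad()
```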
- set_keys(**kwargs)[source]#
Set the keys of the input TensorDict that are used by this loss.
The keyword argument ‘action’ is treated specially. This should be an iterable of action keys. These are not validated against the set of accepted keys for this class. Instead, each is added to the set of accepted keys.
All other keyword arguments should match self._AcceptedKeys.
- Parameters:
**kwargs – The keyword arguments to set.
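For example, registering several action keys (the key names below are hypothetical):

```python
# Illustrative only: each entry in the iterable is added to the set of accepted
# action keys for this loss.
loss_module.set_keys(action=[("agent_0", "action"), ("agent_1", "action")])
```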