nip.utils.torch.DummyOptimizer#
- class nip.utils.torch.DummyOptimizer(*args, **kwargs)[source]#
A dummy optimizer which does nothing.
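The sketch below is a minimal usage example (not taken from the library's own documentation); it assumes, per the class description and signature, that the constructor accepts and ignores arbitrary arguments and that step() and zero_grad() leave parameters and gradients untouched.

>>> import torch
>>> from nip.utils.torch import DummyOptimizer
>>> model = torch.nn.Linear(4, 2)
>>> # Constructor arguments are accepted but ignored, so the class can stand in
>>> # wherever a real optimizer such as torch.optim.Adam would be constructed.
>>> optimizer = DummyOptimizer(model.parameters(), lr=1e-3)
>>> loss = model(torch.randn(8, 4)).sum()
>>> loss.backward()
>>> optimizer.step()       # no-op: parameters are not updated
>>> optimizer.zero_grad()  # no-op: gradients are left as they are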
Methods Summary

- __init__(*args, **kwargs)
- step(*args, **kwargs): Perform a single optimization step to update parameters.
- zero_grad(*args, **kwargs): Reset the gradients of all optimized torch.Tensors.

Attributes

- OptimizerPostHook: alias of Callable[[Self, tuple[Any, ...], dict[str, Any]], None]
- OptimizerPreHook: alias of Callable[[Self, tuple[Any, ...], dict[str, Any]], tuple[tuple[Any, ...], dict[str, Any]] | None]

Methods
- step(*args, **kwargs)[source]#
Perform a single optimization step to update parameters.
- Parameters:
closure (Callable) – A closure that reevaluates the model and returns the loss. Optional for most optimizers.
- zero_grad(*args, **kwargs)[source]#
Reset the gradients of all optimized torch.Tensors.
- Parameters:
set_to_none (bool) – Instead of setting to zero, set the grads to None. This will in general have a lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example: 1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently. 2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grads are guaranteed to be None for params that did not receive a gradient. 3. torch.optim optimizers have a different behavior if the gradient is 0 or None (in one case it does the step with a gradient of 0 and in the other it skips the step altogether).
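The set_to_none distinction is inherited from the standard torch.optim.Optimizer interface; since DummyOptimizer itself does nothing, the sketch below uses torch.optim.SGD purely to illustrate the difference described above (a hypothetical example, not from this library).

>>> import torch
>>> param = torch.nn.Parameter(torch.ones(3))
>>> opt = torch.optim.SGD([param], lr=0.1)
>>> param.sum().backward()
>>> opt.zero_grad(set_to_none=False)
>>> param.grad            # gradient is kept as a tensor of zeros
tensor([0., 0., 0.])
>>> param.sum().backward()
>>> opt.zero_grad(set_to_none=True)
>>> print(param.grad)     # gradient is dropped entirely
None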