nip.timing.timeables.TrainingTimeable#

class nip.timing.timeables.TrainingTimeable(*, param_scale: float = 1.0, wait: int = 2, warmup: int = 1, active: int = 3, repeat: int = 2)[source]#

Base class for timeables that involve some kind of training.

The schedule is as follows:

  1. For the first wait steps of training, do nothing.

  2. For each of the repeat cycles:

    1. For the first warmup steps of the cycle, run the profiler but don’t record.

    2. For the next active steps of the cycle, run the profiler and record.
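This wait/warmup/active/repeat pattern mirrors the semantics of `torch.profiler.schedule`. As a minimal pure-Python sketch of which phase a given training step falls into (the function name and the "done" phase label are illustrative, not part of the class):

```python
def profiler_phase(step, wait=2, warmup=1, active=3, repeat=2):
    """Return the profiling phase of a 0-indexed training step under
    the wait/warmup/active/repeat schedule described above."""
    if step < wait:
        return "wait"  # initial steps: do nothing
    cycle_len = warmup + active
    cycle_index = (step - wait) // cycle_len
    if cycle_index >= repeat:
        return "done"  # all repeat cycles completed
    cycle_step = (step - wait) % cycle_len
    # Within a cycle: first `warmup` steps run the profiler without
    # recording, the next `active` steps record.
    return "warmup" if cycle_step < warmup else "active"
```

With the default parameters, steps 0-1 are skipped, step 2 warms up, steps 3-5 are recorded, and the warmup/active cycle then repeats once more before profiling stops.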

Parameters:
  • param_scale (float, default=1.0) – The default values of key parameters (if any) are scaled by this factor.

  • wait (int, default=2) – The number of training steps to wait before starting to profile.

  • warmup (int, default=1) – The number of warmup steps in each cycle.

  • active (int, default=3) – The number of steps to profile in each cycle.

  • repeat (int, default=2) – The number of cycles to repeat.

Methods Summary

__init__(*[, param_scale, wait, warmup, ...])

_get_profiler_args(log_dir, record_shapes, ...)

Get the arguments for the PyTorch profiler.

run(profiler)

Run the action.

time([log_dir, record_shapes, ...])

Time the action.

Methods

__init__(*, param_scale: float = 1.0, wait: int = 2, warmup: int = 1, active: int = 3, repeat: int = 2)[source]#
_get_profiler_args(log_dir: str | None, record_shapes: bool, profile_memory: bool, with_stack: bool) → dict[source]#

Get the arguments for the PyTorch profiler.

Parameters:
  • log_dir (str, optional) – The directory to save the profiling results to, if any.

  • record_shapes (bool) – Whether to record tensor shapes. This introduces an additional overhead.

  • profile_memory (bool) – Whether to profile memory usage.

  • with_stack (bool) – Whether to record the stack trace. This introduces an additional overhead.

Returns:

profiler_args (dict) – The arguments for the PyTorch profiler.
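A sketch of what assembling these arguments might look like. The keys `record_shapes`, `profile_memory`, `with_stack`, and `on_trace_ready` are real keyword arguments of `torch.profiler.profile`; the exact handling of `log_dir` below is an assumption, not the class's actual implementation:

```python
def get_profiler_args(log_dir, record_shapes, profile_memory, with_stack):
    """Sketch: map the documented parameters onto keyword arguments
    for torch.profiler.profile."""
    profiler_args = {
        "record_shapes": record_shapes,
        "profile_memory": profile_memory,
        "with_stack": with_stack,
    }
    if log_dir is not None:
        # In real code this would wrap the directory in
        # torch.profiler.tensorboard_trace_handler(log_dir);
        # the raw path is stored here only to keep the sketch
        # free of a torch dependency.
        profiler_args["on_trace_ready"] = log_dir
    return profiler_args

args = get_profiler_args("runs/profile", True, True, False)
```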

abstract run(profiler: profile)[source]#

Run the action.

Parameters:

profiler (torch.profiler.profile) – The PyTorch profiler which is being used to time the action.

time(log_dir: str | None = None, record_shapes: bool = True, profile_memory: bool = True, with_stack: bool = False) → profile[source]#

Time the action.

Parameters:
  • log_dir (str, optional) – The directory to save the profiling results to, if any.

  • record_shapes (bool) – Whether to record tensor shapes. This introduces an additional overhead.

  • profile_memory (bool) – Whether to profile memory usage.

  • with_stack (bool) – Whether to record the stack trace. This introduces an additional overhead.

Returns:

profiler (torch.profiler.profile) – The PyTorch profiler containing the timing information.
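Subclasses implement run() to drive the training loop while the profiler advances through its schedule. Since `torch.profiler.profile` is a context manager whose step() method moves the schedule forward, each training iteration inside run() is expected to call profiler.step(). The sketch below illustrates that interaction with a stand-in profiler (StubProfiler and the step count are illustrative, not the real API):

```python
class StubProfiler:
    """Stand-in for torch.profiler.profile, used only to show how
    run() interacts with the profiler; not part of the real API."""

    def __init__(self):
        self.steps = 0

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False

    def step(self):
        # The real profiler advances its wait/warmup/active schedule here.
        self.steps += 1


def run(profiler):
    """A concrete run() implementation: each training iteration
    performs one step and advances the profiler schedule."""
    for _ in range(10):
        # ... forward pass, backward pass, optimizer step ...
        profiler.step()


# time() is expected to construct the profiler, enter its context,
# delegate to run(), and return the profiler; this mirrors that flow.
prof = StubProfiler()
with prof:
    run(prof)
```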