nip.timing.run.RunTimeable#

class nip.timing.run.RunTimeable(*, param_scale: float = 1.0, wait: int = 2, warmup: int = 1, active: int = 3, repeat: int = 2, force_cpu: bool = False, pretrain: bool = False)[source]#

Base class for a timeable that performs a complete experiment run.

Apart from the arguments to the constructor, all other experiment hyper_params take their default values.

The schedule is as follows:

  1. For the first wait steps of training, the profiler does nothing.

  2. For each of the repeat cycles:
    1. For the first warmup steps of the cycle, run the profiler but don’t record.

    2. For the next active steps of the cycle, run the profiler and record.
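This is the standard PyTorch profiler schedule. As a minimal sketch (not taken from the source), the default arguments correspond roughly to the following set-up, where run_training_step is a hypothetical stand-in for one real training step:

    import torch
    from torch.profiler import ProfilerActivity, profile, schedule

    def run_training_step() -> None:
        # Stand-in for one real training step (placeholder work only).
        torch.randn(64, 64) @ torch.randn(64, 64)

    # With the defaults wait=2, warmup=1, active=3, repeat=2, ten steps cover
    # the whole schedule: 2 idle steps, then two cycles of 1 warmup + 3 active.
    with profile(
        activities=[ProfilerActivity.CPU],
        schedule=schedule(wait=2, warmup=1, active=3, repeat=2),
    ) as prof:
        for _ in range(2 + (1 + 3) * 2):
            run_training_step()
            prof.step()  # advance the profiler through its schedule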

To subclass, define the class attributes below.

Parameters:
  • param_scale (float, default=1.0) – Scale factor for key default experiment parameters.

  • wait (int, default=2) – The number of training steps to wait before starting to profile.

  • warmup (int, default=1) – The number of warmup steps in each cycle.

  • active (int, default=3) – The number of steps to profile in each cycle.

  • repeat (int, default=2) – The number of cycles to repeat.

  • force_cpu (bool, default=False) – Whether to force everything to run on the CPU, even if a GPU is available.

  • pretrain (bool, default=False) – When running an RL experiment, whether to pretrain the model.

scenario#

The scenario which defines the model architecture and datasets.

Type:

ClassVar[ScenarioType]

dataset#

The name of the dataset to use.

Type:

ClassVar[str]

agent_name#

The name of the agent to use for the model.

Type:

ClassVar[str]
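As a hedged illustration of the subclassing pattern described above, a subclass might set these class attributes as follows. The names and values here are placeholders, not taken from the library:

    from nip.timing.run import RunTimeable

    class MyRunTimeable(RunTimeable):
        """Times a full experiment run for one (hypothetical) configuration."""

        # Placeholder values: substitute a real ScenarioType member and the
        # dataset/agent names used in your own experiments.
        scenario = ...            # ClassVar[ScenarioType]
        dataset = "my_dataset"    # ClassVar[str]
        agent_name = "my_agent"   # ClassVar[str]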

Methods Summary

__init__(*[, param_scale, wait, warmup, ...])

_get_params()

Get the parameters which define the experiment.

_get_profiler_args(log_dir, record_shapes, ...)

Get the arguments for the PyTorch profiler.

run(profiler)

Run the experiment.

time([log_dir, record_shapes, ...])

Time the action.

Methods

__init__(*, param_scale: float = 1.0, wait: int = 2, warmup: int = 1, active: int = 3, repeat: int = 2, force_cpu: bool = False, pretrain: bool = False)[source]#
_get_params() → HyperParameters[source]#

Get the parameters which define the experiment.

Returns:

hyper_params (HyperParameters) – The parameters of the experiment.

_get_profiler_args(log_dir: str | None, record_shapes: bool, profile_memory: bool, with_stack: bool) → dict[source]#

Get the arguments for the PyTorch profiler.

Parameters:
  • log_dir (str, optional) – The directory to save the profiling results to, if any.

  • record_shapes (bool) – Whether to record tensor shapes. This introduces an additional overhead.

  • profile_memory (bool) – Whether to profile memory usage.

  • with_stack (bool) – Whether to record the stack trace. This introduces an additional overhead.

Returns:

profiler_args (dict) – The arguments for the PyTorch profiler.
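For orientation only, the returned dictionary is plausibly shaped like the keyword arguments of torch.profiler.profile; the exact keys below are an assumption, not a guarantee of what the method actually returns:

    from torch.profiler import schedule, tensorboard_trace_handler

    # Assumed shape only: keys mirror the keyword arguments of
    # torch.profiler.profile, and wait/warmup/active/repeat come from the
    # constructor arguments of the timeable.
    def build_profiler_args(wait, warmup, active, repeat,
                            log_dir, record_shapes, profile_memory, with_stack):
        return {
            "schedule": schedule(wait=wait, warmup=warmup,
                                 active=active, repeat=repeat),
            "on_trace_ready": (tensorboard_trace_handler(log_dir)
                               if log_dir is not None else None),
            "record_shapes": record_shapes,
            "profile_memory": profile_memory,
            "with_stack": with_stack,
        }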

run(profiler: profile)[source]#

Run the experiment.

Parameters:

profiler (torch.profiler.profile) – The profiler to use.

time(log_dir: str | None = None, record_shapes: bool = True, profile_memory: bool = True, with_stack: bool = False) → profile[source]#

Time the action.

Parameters:
  • log_dir (str, optional) – The directory to save the profiling results to, if any.

  • record_shapes (bool) – Whether to record tensor shapes. This introduces an additional overhead.

  • profile_memory (bool) – Whether to profile memory usage.

  • with_stack (bool) – Whether to record the stack trace. This introduces an additional overhead.

Returns:

profiler (torch.profiler.profile) – The PyTorch profiler containing the timing information.
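A short usage sketch, using the hypothetical subclass from earlier; the argument values and the sort key are illustrative choices only:

    timeable = MyRunTimeable(param_scale=0.5, force_cpu=True)
    prof = timeable.time(log_dir="runs/profile", with_stack=False)

    # The returned object is a torch.profiler.profile, so the usual
    # reporting helpers apply.
    print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=20))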