nip.timing.models.ModelTimeable

class nip.timing.models.ModelTimeable(*, param_scale: float = 1.0, force_cpu: bool = False, batch_size: int = 64, num_batches: int = 100)

Base class for a timeable that runs a model.

To subclass, define the class attributes below.

Parameters:
  • param_scale (float, default=1.0) – Scale factor for key default parameters (currently unused).

  • force_cpu (bool, default=False) – Whether to force the model to run on the CPU, even if a GPU is available.

  • batch_size (int, default=64) – The batch size to use for the model.

  • num_batches (int, default=100) – The number of batches to run the model on.

scenario

The scenario which defines the model architecture and datasets.

Type: ClassVar[ScenarioType]

dataset

The name of the dataset to use.

Type: ClassVar[str]

agent_name

The name of the agent to use for the model.

Type: ClassVar[str]
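A minimal sketch of how a subclass might define these attributes. The base class below is a small stand-in so the snippet is self-contained, and the attribute values (`ImageClassifierTimeable`, `"image_classification"`, `"cifar10"`, `"verifier"`) are hypothetical, not names taken from the library.

```python
# Stand-in for nip.timing.models.ModelTimeable so this sketch runs on its
# own; the real base class also wires up the model and datasets.
class ModelTimeable:
    def __init__(self, *, param_scale: float = 1.0, force_cpu: bool = False,
                 batch_size: int = 64, num_batches: int = 100):
        self.param_scale = param_scale
        self.force_cpu = force_cpu
        self.batch_size = batch_size
        self.num_batches = num_batches


class ImageClassifierTimeable(ModelTimeable):
    """Times one agent's model on one dataset (hypothetical values)."""

    scenario = "image_classification"  # stands in for a ScenarioType value
    dataset = "cifar10"                # the name of the dataset to use
    agent_name = "verifier"            # the agent whose model is timed


timeable = ImageClassifierTimeable(batch_size=32, num_batches=10)
```

Constructor arguments control how the timing run is executed, while the class attributes pin down which model and data are timed.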

Methods Summary

__init__(*[, param_scale, force_cpu, ...])

_get_params()

Get the parameters which define the experiment containing the model.

_get_profiler_args(log_dir, record_shapes, ...)

Get the arguments for the PyTorch profiler.

run(profiler)

Run the model.

time([log_dir, record_shapes, ...])

Time the action.


Methods

__init__(*, param_scale: float = 1.0, force_cpu: bool = False, batch_size: int = 64, num_batches: int = 100)
_get_params() → HyperParameters

Get the parameters which define the experiment containing the model.

Returns:

hyper_params (HyperParameters) – The parameters of the experiment.

_get_profiler_args(log_dir: str | None, record_shapes: bool, profile_memory: bool, with_stack: bool) → dict

Get the arguments for the PyTorch profiler.

Parameters:
  • log_dir (str or None) – The directory to save the profiling results to, if any.

  • record_shapes (bool) – Whether to record tensor shapes. This introduces an additional overhead.

  • profile_memory (bool) – Whether to profile memory usage.

  • with_stack (bool) – Whether to record the stack trace. This introduces an additional overhead.

Returns:

profiler_args (dict) – The arguments for the PyTorch profiler.
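The returned dictionary plausibly maps straight onto `torch.profiler.profile` keyword arguments. A pure-Python sketch of that mapping, assuming this correspondence (the real torch hook for a log directory is `torch.profiler.tensorboard_trace_handler`, named here only in a comment so the snippet has no torch dependency):

```python
def get_profiler_args(log_dir, record_shapes, profile_memory, with_stack):
    """Hedged sketch of _get_profiler_args: build profiler kwargs."""
    args = {
        "record_shapes": record_shapes,    # extra overhead when True
        "profile_memory": profile_memory,
        "with_stack": with_stack,          # extra overhead when True
    }
    if log_dir is not None:
        # A real implementation would set this to
        # torch.profiler.tensorboard_trace_handler(log_dir); we store the
        # path itself to keep the sketch dependency-free.
        args["on_trace_ready"] = log_dir
    return args
```

Keeping this logic in one place means `time` only needs to forward its flags here before constructing the profiler.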

run(profiler: profile)

Run the model.

Parameters:

profiler (torch.profiler.profile) – The profiler to run the model with.

time(log_dir: str | None = None, record_shapes: bool = True, profile_memory: bool = True, with_stack: bool = False) → profile

Time the action.

Parameters:
  • log_dir (str, optional) – The directory to save the profiling results to, if any.

  • record_shapes (bool) – Whether to record tensor shapes. This introduces an additional overhead.

  • profile_memory (bool) – Whether to profile memory usage.

  • with_stack (bool) – Whether to record the stack trace. This introduces an additional overhead.

Returns:

profiler (torch.profiler.profile) – The PyTorch profiler containing the timing information.
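A sketch of how `time` and `run` likely fit together: `time` constructs the profiler and hands it to `run`, which steps it once per batch. `DummyProfiler` is a stand-in for `torch.profiler.profile`, assumed only to expose the context-manager protocol and a `step()` method (which the real class does); `run_batch` is a hypothetical callable standing in for one forward pass.

```python
class DummyProfiler:
    """Stand-in for torch.profiler.profile: context manager plus step()."""

    def __init__(self):
        self.steps = 0

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False

    def step(self):
        self.steps += 1


def run(profiler, run_batch, num_batches):
    """Sketch of run(): one forward pass per batch, stepping the profiler."""
    with profiler:
        for _ in range(num_batches):
            run_batch()       # forward pass on one batch of inputs
            profiler.step()   # mark a profiling step boundary


def time_model(run_batch, num_batches=100):
    """Sketch of time(): build a profiler, run the model, return it."""
    profiler = DummyProfiler()
    run(profiler, run_batch, num_batches)
    return profiler
```

With the real class, the returned profiler can then be inspected (for example via `key_averages()`) or viewed in TensorBoard when a `log_dir` was given.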