nip.timing.models.GraphIsomorphismVerifierTimeable

class nip.timing.models.GraphIsomorphismVerifierTimeable(*, param_scale: float = 1.0, force_cpu: bool = False, batch_size: int = 64, num_batches: int = 100)[source]

Timeable to run the graph isomorphism verifier model.

Methods Summary

__init__(*[, param_scale, force_cpu, ...])

_get_params()

Get the parameters which define the experiment containing the model.

_get_profiler_args(log_dir, record_shapes, ...)

Get the arguments for the PyTorch profiler.

run(profiler)

Run the model.

time([log_dir, record_shapes, ...])

Time the action.

Attributes

agent_name

dataset

scenario

Methods

__init__(*, param_scale: float = 1.0, force_cpu: bool = False, batch_size: int = 64, num_batches: int = 100)[source]
_get_params() → HyperParameters[source]

Get the parameters which define the experiment containing the model.

Returns:

hyper_params (HyperParameters) – The parameters of the experiment.
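
The method can also be called directly to inspect the experiment configuration that the timing run is built from. A minimal sketch (the fields of HyperParameters are defined elsewhere in nip and are not documented on this page):

    from nip.timing.models import GraphIsomorphismVerifierTimeable

    timeable = GraphIsomorphismVerifierTimeable(param_scale=0.5)
    hyper_params = timeable._get_params()  # HyperParameters instance for the experiment
    print(hyper_params)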

_get_profiler_args(log_dir: str | None, record_shapes: bool, profile_memory: bool, with_stack: bool) → dict[source]

Get the arguments for the PyTorch profiler.

Parameters:
  • log_dir (str or None) – The directory to save the profiling results to, if any.

  • record_shapes (bool) – Whether to record tensor shapes. This introduces additional overhead.

  • profile_memory (bool) – Whether to profile memory usage.

  • with_stack (bool) – Whether to record the stack trace. This introduces additional overhead.

Returns:

profiler_args (dict) – The arguments for the PyTorch profiler.

run(profiler: profile)[source]

Run the model.

Parameters:

profiler (torch.profiler.profile) – The profiler to run the model with.
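
As a concrete illustration of how _get_profiler_args() and run() fit together, the sketch below builds the profiler arguments and drives the model under a torch.profiler.profile context. This is a minimal sketch only: it assumes the dict returned by _get_profiler_args() can be unpacked directly into torch.profiler.profile (presumably what time() does internally), and the batch_size and num_batches values are arbitrary.

    import torch
    from nip.timing.models import GraphIsomorphismVerifierTimeable

    timeable = GraphIsomorphismVerifierTimeable(batch_size=32, num_batches=10)

    # Assumption: the returned dict maps onto the keyword arguments of
    # torch.profiler.profile (activities, record_shapes, and so on).
    profiler_args = timeable._get_profiler_args(
        log_dir=None,
        record_shapes=True,
        profile_memory=True,
        with_stack=False,
    )

    # Run the verifier model under the profiler, as time() presumably does.
    with torch.profiler.profile(**profiler_args) as prof:
        timeable.run(prof)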

time(log_dir: str | None = None, record_shapes: bool = True, profile_memory: bool = True, with_stack: bool = False) → profile[source]

Time the action.

Parameters:
  • log_dir (str, optional) – The directory to save the profiling results to, if any.

  • record_shapes (bool) – Whether to record tensor shapes. This introduces additional overhead.

  • profile_memory (bool) – Whether to profile memory usage.

  • with_stack (bool) – Whether to record the stack trace. This introduces additional overhead.

Returns:

profiler (torch.profiler.profile) – The PyTorch profiler containing the timing information.
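
For most purposes the single time() call is enough. The sketch below (assuming a working nip installation with PyTorch available) times the verifier model and prints a summary using the standard torch.profiler reporting API; the constructor arguments shown are just the documented defaults.

    from nip.timing.models import GraphIsomorphismVerifierTimeable

    timeable = GraphIsomorphismVerifierTimeable(
        param_scale=1.0,
        force_cpu=False,
        batch_size=64,
        num_batches=100,
    )

    # Build the profiler, run the model and collect timing information.
    prof = timeable.time(record_shapes=True, profile_memory=True)

    # Summarise the recorded events; key_averages() and table() are part of
    # the standard torch.profiler.profile API.
    print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10))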