nip.scenario_instance.ScenarioInstance#
- class nip.scenario_instance.ScenarioInstance(train_dataset: Dataset, test_dataset: Dataset, agents: dict[str, Agent], protocol_handler: ProtocolHandler, message_regressor: MessageRegressor, train_environment: Environment | None = None, test_environment: Environment | None = None, combined_whole: CombinedWhole | None = None, combined_body: CombinedBody | None = None, combined_policy_body: CombinedBody | None = None, combined_value_body: CombinedBody | None = None, combined_policy_head: CombinedPolicyHead | None = None, combined_value_head: CombinedValueHead | None = None, shared_model_groups: dict[str, PureTextSharedModelGroup] | None = None)[source]#
- A dataclass for holding the components of an experiment. The principal aim of this class is to abstract away the details of the particular experiment being run.
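
As a rough orientation, the sketch below constructs a ScenarioInstance with only the required components, following the signature above. The component variables (`my_train_dataset`, `my_agents`, and so on) are hypothetical placeholders; how they are actually built by the experiment set-up code is outside the scope of this reference.

```python
from nip.scenario_instance import ScenarioInstance

# Hypothetical, pre-built components; their construction is not shown here.
# Passing only the required arguments leaves every optional component
# (environments, combined parts, shared model groups) at its default of None.
instance = ScenarioInstance(
    train_dataset=my_train_dataset,          # Dataset
    test_dataset=my_test_dataset,            # Dataset ("test" or "validation" split)
    agents=my_agents,                        # dict[str, Agent]
    protocol_handler=my_protocol_handler,    # ProtocolHandler
    message_regressor=my_message_regressor,  # MessageRegressor
)

assert instance.train_environment is None  # only populated for RL experiments
```
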
- test_dataset#
- The dataset used for testing. Note that this can be either the “test” or “validation” split, depending on the hyperparameters.
- Type: Dataset
 
- protocol_handler#
- The interaction protocol handler for the experiment.
- Type: ProtocolHandler
 
- message_regressor#
- The message regressor for the experiment, which is used to test if the label can be inferred purely from the messages.
- Type: MessageRegressor
 
- agents#
- The agents for the experiment. Each ‘agent’ is a dictionary containing all of the agent parts.
- Type: dict[str, Agent]

- train_environment#
- The training environment for the experiment, if the experiment uses RL.
- Type: Optional[Environment]
 
- test_environment#
- The environment for testing the agents, which uses test_dataset. Note that this can be either the “test” or “validation” split, depending on the hyperparameters.
- Type: Optional[Environment]
 
- combined_whole#
- If the agents are not split into parts, this holds the combination of the whole agents. (A sketch showing how code can branch on the combined components follows the combined_value_head entry below.)
- Type: Optional[CombinedWhole]
 
- combined_body#
- The combined body of the agents, if the agents are combined and the actor and critic share the same body.
- Type: Optional[CombinedBody]
 
- combined_policy_body#
- The combined policy body of the agents, if the agents are combined and the actor and critic have separate bodies.
- Type: Optional[CombinedBody]
 
- combined_value_body#
- The combined value body of the agents, if the agents are combined and the actor and critic have separate bodies.
- Type: Optional[CombinedBody]
 
- combined_policy_head#
- The combined policy head of the agents, if the agents are combined.
- Type: Optional[CombinedPolicyHead]
 
- combined_value_head#
- The combined value head of the agents, if the agents are combined.
- Type: Optional[CombinedValueHead]
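
The combined components above are alternative views of the same agents, so downstream code generally has to check which of them is populated. The helper below is an illustrative sketch only: the attribute names come from this class, but the function itself and its return strings are assumptions, not part of the library.

```python
from nip.scenario_instance import ScenarioInstance


def describe_agent_combination(instance: ScenarioInstance) -> str:
    """Illustrative helper (not part of the library): report which combined
    representation of the agents this scenario instance carries."""
    if instance.combined_whole is not None:
        # The agents are not split into parts at all.
        return "whole agents combined"
    if instance.combined_body is not None:
        # The actor and critic share a single body.
        return "shared body with policy and value heads"
    if instance.combined_policy_body is not None:
        # The actor and critic have separate bodies.
        return "separate policy and value bodies with their heads"
    return "agents are not combined"
```
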
 
- shared_model_groups#
- The shared model groups for pure-text environments. Agents in the same group share the same model. A dictionary with the group name as the key and the shared model group as the value.
- Type: Optional[dict[str, PureTextSharedModelGroup]]
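
Because shared_model_groups maps each group name to its PureTextSharedModelGroup, inspecting it is an ordinary dictionary loop. A minimal sketch, assuming `instance` is an already-built ScenarioInstance for a pure-text environment (the printed format is illustrative, not library behaviour):

```python
if instance.shared_model_groups is not None:
    for group_name, group in instance.shared_model_groups.items():
        # Agents in the same group share the same underlying model.
        print(f"Shared model group {group_name!r}: {group}")
```
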
 
Methods Summary

- __eq__(other): Return self==value.
- __init__(train_dataset, test_dataset, ...[, ...])
- __repr__(): Return repr(self).

Methods

- __eq__(other)#
- Return self==value.

- __init__(train_dataset: Dataset, test_dataset: Dataset, agents: dict[str, Agent], protocol_handler: ProtocolHandler, message_regressor: MessageRegressor, train_environment: Environment | None = None, test_environment: Environment | None = None, combined_whole: CombinedWhole | None = None, combined_body: CombinedBody | None = None, combined_policy_body: CombinedBody | None = None, combined_value_body: CombinedBody | None = None, combined_policy_head: CombinedPolicyHead | None = None, combined_value_head: CombinedValueHead | None = None, shared_model_groups: dict[str, PureTextSharedModelGroup] | None = None) → None#
 - __repr__()#
- Return repr(self).