nip.protocols.zero_knowledge.ZeroKnowledgeProtocol#

class nip.protocols.zero_knowledge.ZeroKnowledgeProtocol(hyper_params: HyperParameters, settings: ExperimentSettings, base_protocol_cls: type[SingleVerifierProtocolHandler])[source]#

Meta-handler for zero-knowledge protocols.

Takes a base protocol as an argument and extends it to be zero-knowledge. It does this by creating a child instance of the base protocol handler.

Introduces a second (‘adversarial’) verifier and a simulator. The simulator tries to mimic the interaction between the adversarial verifier and the prover(s), while the adversarial verifier tries to prevent this. The prover(s) try to make sure the simulator can succeed, which implies that they are not ‘leaking’ knowledge.

Parameters:
  • hyper_params (HyperParameters) – The hyper-parameters of the experiment.

  • settings (ExperimentSettings) – The settings of the experiment.

  • base_protocol_cls (type[SingleVerifierProtocolHandler]) – The base protocol handler class to extend to be zero-knowledge.
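
A minimal usage sketch, assuming hyper_params and settings have already been constructed elsewhere in the experiment setup; MyBaseProtocol is a hypothetical stand-in for any concrete SingleVerifierProtocolHandler subclass:

    from nip.protocols.zero_knowledge import ZeroKnowledgeProtocol

    # Hypothetical sketch: `MyBaseProtocol`, `hyper_params` and `settings` are
    # placeholders assumed to be defined elsewhere in the experiment setup.
    zk_protocol = ZeroKnowledgeProtocol(
        hyper_params=hyper_params,
        settings=settings,
        base_protocol_cls=MyBaseProtocol,
    )

    # The meta-handler exposes the extended agent and channel structure.
    print(zk_protocol.agent_names)            # includes the adversarial verifier and the simulator
    print(zk_protocol.message_channel_names)  # standard, adversarial and simulator channels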

Methods Summary

__init__(hyper_params, settings, ...)

_get_agent_decision_made_mask(round_id, y, ...)

Get a mask indicating whether an agent has made a decision.

_get_simulator_reward(round_id, seed, ...)

Get the simulator reward.

can_agent_be_active(agent_name, round_id, ...)

Check if an agent can be active in a given round and channel.

can_agent_be_active_any_channel(agent_name, ...)

Specify whether an agent can be active in any channel in a given round.

can_agent_see_channel(agent_name, channel_name)

Determine whether an agent can see a channel.

get_active_agents_mask_from_rounds_and_seed(...)

Get a boolean mask indicating which agents are active in which channels.

get_agent_visible_channels(agent_name)

Get the names of the channels visible to an agent.

get_verifier_guess_mask_from_rounds_and_seed(...)

Get a boolean mask indicating whether the verifier can make a decision.

reward_mid_point_estimate(agent_name)

Get an estimate of the expected reward if all agents play randomly.

step_interaction_protocol(env_td)

Take a step in the interaction protocol.

Attributes

adversarial_channel_names

The names of the adversarial message channels in the protocol.

adversarial_verifier_index

The index of the adversarial verifier in the agent names.

agent_channel_visibility

The visibility settings for the agents in the protocol.

agent_channel_visibility_mask

A boolean mask indicating which agents can see which message channels.

agent_first_active_round

The first round in which each agent is or can be active.

agent_names

The names of the agents in the protocol.

can_be_zero_knowledge

default_stackelberg_sequence

The default Stackelberg sequence for the protocol.

max_message_rounds

The maximum number of message rounds in the protocol.

max_verifier_questions

The maximum number of questions the verifier can ask each prover.

message_channel_names

The names of the message channels in the protocol.

min_message_rounds

The minimum number of message rounds in the protocol.

num_agents

The number of agents in the protocol.

num_message_channels

The number of message channels in the protocol.

prover_indices

The indices of the provers in the list of agent names.

prover_names

The names of the provers in the protocol.

prover_zk_loss_coefficient

The coefficient of the simulator reward in the prover reward.

simulator_channel_names

The names of the simulator message channels in the protocol.

simulator_index

The index of the simulator in the agent names.

stackelberg_sequence

The actual Stackelberg sequence used in this experiment.

standard_channel_names

The names of the standard message channels in the protocol.

verifier_index

The index of the verifier in the agent names.

verifier_names

The names of the verifiers in the protocol.

Methods

__init__(hyper_params: HyperParameters, settings: ExperimentSettings, base_protocol_cls: type[SingleVerifierProtocolHandler])[source]#
_get_agent_decision_made_mask(round_id: Int[Tensor, '...'], y: Int[Tensor, '... 1'], guess_mask: Bool[Tensor, '...'], decision: Int[Tensor, '...'], *, follow_force_guess: bool = True) Bool[Tensor, '...'][source]#

Get a mask indicating whether an agent has made a decision.

Parameters:
  • round_id (Int[Tensor, "..."]) – The round number.

  • y (Int[Tensor, "... 1"]) – The target value.

  • guess_mask (Bool[Tensor, "..."]) – A mask indicating whether the agent is allowed to make a guess.

  • decision (Int[Tensor, "..."]) – The decision output of the agent.

  • follow_force_guess (bool, default=True) – Whether to follow the force_guess parameter, which forces the agent to make a certain decision.

_get_simulator_reward(round_id: Int[Tensor, '...'], seed: Int[Tensor, '...'], main_message_logits: Float[Tensor, '... agent channel position logit'], decision_logits: Float[Tensor, '... agent 3']) Float[Tensor, '...'][source]#

Get the simulator reward.

The simulator reward is the cosine similarity between the adversarial verifier and simulator logits, multiplied by a coefficient.

Note

It is assumed that the adversarial verifier is active in the channels corresponding to those where the simulator is active, and vice versa. If this is changed, the method should be updated accordingly.

Parameters:
  • round_id (Int[Tensor, "..."]) – The round of the protocol.

  • seed (Int[Tensor, "..."]) – The per-environment seed.

  • main_message_logits (Float[Tensor, "... agent channel position logit"]) – The logits for the messages sent by the agents.

  • decision_logits (Float[Tensor, "... agent 3"]) – The logits for the decisions made by the agents.

Returns:

simulator_reward (Float[Tensor, “…”]) – The simulator reward.
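
The reward described above is essentially a scaled cosine similarity between two logit tensors. Below is a minimal sketch in PyTorch of that computation, assuming the logits for the relevant channels have already been gathered; the function name, flattening scheme and default coefficient are illustrative, not the actual implementation.

    import torch
    import torch.nn.functional as F

    def simulator_reward_sketch(
        adversarial_verifier_logits: torch.Tensor,  # shape (... channel position logit)
        simulator_logits: torch.Tensor,             # shape (... channel position logit)
        coefficient: float = 1.0,
    ) -> torch.Tensor:
        """Illustrative only: scaled cosine similarity between the two agents' logits."""
        # Flatten the channel, position and logit dimensions into a single vector
        # per environment, then take the cosine similarity along that vector.
        adv = adversarial_verifier_logits.flatten(start_dim=-3)
        sim = simulator_logits.flatten(start_dim=-3)
        return coefficient * F.cosine_similarity(adv, sim, dim=-1)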

can_agent_be_active(agent_name: str, round_id: int, channel_name: str) bool[source]#

Check if an agent can be active in a given round and channel.

1. The verifier is active in the standard channels when the base protocol verifier is active. The adversarial verifier and simulator are not active in the standard channels, but the provers are.

2. Adversarial verifiers are active in the adversarial channels when the base protocol verifier is active. The verifier and simulator are not active in the adversarial channels, but the provers are.

3. The simulator is active in the simulator channels when the base protocol verifier is active. No other agents are active in the simulator channels.

4. Whether the provers are active is determined by the base protocol (see the sketch below).

Parameters:
  • agent_name (str) – The name of the agent.

  • round_id (int) – The round of the protocol.

  • channel_name (str) – The name of the channel.

Returns:

can_be_active (bool) – Whether the agent can be active in the given round and channel.
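
The activity rules listed above can be summarised as a simple lookup, as in the following hedged sketch; the agent and channel-kind names are illustrative, and in the actual handler the base-verifier and prover activity is queried from the base protocol.

    # Illustrative restatement of the rules above, not the actual implementation.
    def can_agent_be_active_sketch(
        agent_name: str,
        channel_kind: str,           # "standard", "adversarial" or "simulator"
        base_verifier_active: bool,  # is the base-protocol verifier active this round?
        base_prover_active: bool,    # is this prover active per the base protocol?
    ) -> bool:
        if agent_name == "verifier":
            return channel_kind == "standard" and base_verifier_active
        if agent_name == "adversarial_verifier":
            return channel_kind == "adversarial" and base_verifier_active
        if agent_name == "simulator":
            return channel_kind == "simulator" and base_verifier_active
        # Provers: active wherever the base protocol says, but never in simulator channels.
        return channel_kind != "simulator" and base_prover_active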

can_agent_be_active_any_channel(agent_name: str, round_id: int) bool[source]#

Specify whether an agent can be active in any channel in a given round.

For non-deterministic protocols, this is true if the agent has some probability of being active.

Parameters:
  • agent_name (str) – The name of the agent.

  • round_id (int) – The round of the protocol.

Returns:

can_be_active (bool) – Whether the agent can be active in the given round.

can_agent_see_channel(agent_name: str, channel_name: str) bool[source]#

Determine whether an agent can see a channel.

Parameters:
  • agent_name (str) – The name of the agent.

  • channel_name (str) – The name of the channel.

Returns:

can_see_channel (bool) – Whether the agent can see the channel.

get_active_agents_mask_from_rounds_and_seed(round_id: Int[Tensor, '...'], seed: Int[Tensor, '...']) Bool[Tensor, '... agent channel'][source]#

Get a boolean mask indicating which agents are active in which channels.

The verifier is active in the standard channels, the adversarial verifier in the adversarial channels, and the simulator in the simulator channels. The provers are active in all channels.

Parameters:
  • round_id (Int[Tensor, "..."]) – The round of the protocol.

  • seed (Int[Tensor, "..."]) – The per-environment seed.

Returns:

active_mask (Bool[Tensor, “… agent channel”]) – A boolean mask indicating which agents are active in which channels.

get_agent_visible_channels(agent_name: str) list[str][source]#

Get the names of the channels visible to an agent.

Parameters:

agent_name (str) – The name of the agent.

Returns:

visible_channels (list[str]) – The names of the channels visible to the agent.

get_verifier_guess_mask_from_rounds_and_seed(round_id: Int[Tensor, '...'], seed: Int[Tensor, '...']) Bool[Tensor, '...'][source]#

Get a boolean mask indicating whether the verifier can make a decision.

This is the case only when the base verifier can make a decision.

Parameters:
  • round_id (Int[Tensor, "..."]) – The round of the protocol.

  • seed (Int[Tensor, "..."]) – The per-environment seed.

Returns:

guess_mask (Bool[Tensor, “…”]) – A boolean mask indicating whether the verifier can make a decision.

reward_mid_point_estimate(agent_name: str) float[source]#

Get an estimate of the expected reward if all agents play randomly.

This is used to compute the mid-point of the reward range for the agent.

For example, if the agent gets reward -1 for a wrong guess and 1 for a correct guess, the mid-point estimate could be 0.

For the zero-knowledge protocol, the base agents use the estimate from the base protocol. The mid-point estimate for the simulator is set to 0, because its reward is a cosine similarity, and the mid-point estimate for the adversarial verifier is also 0, because its reward is the negative of the simulator reward.

Parameters:

agent_name (str) – The name of the agent to get the reward mid-point for.

Returns:

reward_mid_point (float) – The expected reward for the agent if all agents play randomly.
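
The rule above amounts to a small case distinction, sketched below; the agent names and the base-protocol call are assumptions for illustration.

    # Illustrative only, not the actual implementation.
    def reward_mid_point_estimate_sketch(agent_name: str, base_protocol) -> float:
        if agent_name == "simulator":
            return 0.0  # reward is a cosine similarity, so 0 is the natural mid-point
        if agent_name == "adversarial_verifier":
            return 0.0  # reward is the negative of the simulator reward
        # All other agents fall back to the base protocol's estimate.
        return base_protocol.reward_mid_point_estimate(agent_name)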

step_interaction_protocol(env_td: TensorDictBase | NestedArrayDict) tuple[Bool[Tensor, '...'], Bool[Tensor, '... agent'], Bool[Tensor, '...'], Float[Tensor, '... agent']][source]#

Take a step in the interaction protocol.

Computes the done signals and reward. Used in the _step method of the environment.

Parameters:

env_td (TensorDictBase | NestedArrayDict) –

The current observation and state. If a NestedArrayDict, it is converted to a TensorDictBase. Has keys:

  • "y" (… 1): The target value.

  • "round" (…): The current round.

  • "done" (…): A boolean mask indicating whether the episode is done.

  • ("agents", "done") (… agent): A boolean mask indicating whether each agent is done.

  • "terminated" (…): A boolean mask indicating whether the episode has been terminated.

  • ("agents", "decision") (… agent): The decision of each agent.

  • ("agents", "main_message_logits") (… agent channel position logit): The main message logits for each agent.

  • ("agents", "decision_logits") (… agent 3): The decision logits for each agent.

Returns:

  • shared_done (Bool[Tensor, “…”]) – A boolean mask indicating whether the episode is done because all relevant agents have made a decision.

  • agent_done (Bool[Tensor, “… agent”]) – A boolean mask indicating whether each agent is done, because they have made a decision. This can only be True for agents that can make decisions.

  • terminated (Bool[Tensor, “…”]) – A boolean mask indicating whether the episode has been terminated because the max number of rounds has been reached and the verifier has not guessed.

  • reward (Float[Tensor, “… agent”]) – The reward for the agents.
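
A hedged usage sketch showing how an env_td with the keys listed above might be assembled and passed in; all tensor shapes are illustrative, and zk_protocol is assumed to have been constructed as in the class-level sketch.

    import torch
    from tensordict import TensorDict

    batch, n_agents, n_channels, n_positions, n_logits = 2, 4, 3, 1, 16

    env_td = TensorDict(
        {
            "y": torch.randint(0, 2, (batch, 1)),
            "round": torch.zeros(batch, dtype=torch.long),
            "done": torch.zeros(batch, dtype=torch.bool),
            "terminated": torch.zeros(batch, dtype=torch.bool),
            "agents": TensorDict(
                {
                    "done": torch.zeros(batch, n_agents, dtype=torch.bool),
                    "decision": torch.zeros(batch, n_agents, dtype=torch.long),
                    "main_message_logits": torch.randn(
                        batch, n_agents, n_channels, n_positions, n_logits
                    ),
                    "decision_logits": torch.randn(batch, n_agents, 3),
                },
                batch_size=[batch, n_agents],
            ),
        },
        batch_size=[batch],
    )

    shared_done, agent_done, terminated, reward = zk_protocol.step_interaction_protocol(env_td)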