nip.code_validation.dataset_generation._get_openrouter_response

nip.code_validation.dataset_generation._get_openrouter_response(model: str, messages: list[dict[Literal['role', 'content'], str]], temperature: float = 1.0, get_log_probs: bool = False, get_top_logprobs: bool = None, num_responses: int = 1, force_multiple_generations: bool = False) → list[dict[Literal['message', 'log_probs', 'top_logprobs'], Any]]

Send a POST request to the OpenRouter API to get responses from a chat model.

Note that this function calls the OpenAI API directly instead when it can.

Parameters:
  • model (str) – The name of the chat model to use.

  • messages (list) – A list of dictionaries representing the chat messages. Each dictionary should have a "role" key whose value is "user" or "assistant", and a "content" key containing the message text.

  • temperature (float, default=1.0) – The sampling temperature to use when generating completions.

  • get_log_probs (bool, default=False) – Whether to return the log probabilities of the tokens in the completion.

  • get_top_logprobs (bool, default=None) – Whether to return the top log probabilities of the tokens in the completion.

  • num_responses (int, default=1) – The number of completions to generate.

  • force_multiple_generations (bool, default=False) – If True, force multiple generations by making a separate request to the OpenRouter API for each completion.
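As an illustration of the messages format described above (a sketch only; the conversation content here is hypothetical):

```python
# Build a chat history in the format expected by the `messages` parameter:
# each entry is a dict with exactly the keys "role" and "content".
messages = [
    {"role": "user", "content": "Write a function that reverses a string."},
    {"role": "assistant", "content": "def reverse(s):\n    return s[::-1]"},
    {"role": "user", "content": "Now add type hints."},
]

# Every message must carry both keys, and "role" must be "user" or "assistant".
assert all(m.keys() == {"role", "content"} for m in messages)
assert all(m["role"] in ("user", "assistant") for m in messages)
```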

Returns:

responses (list[dict[Literal["message", "log_probs", "top_logprobs"], Any]]) – A list of response dictionaries, one per generated completion, each with "message", "log_probs" and "top_logprobs" keys.

Raises:

requests.exceptions.RequestException – If there was an error sending the request.
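A sketch of consuming the documented return value. The list-of-dicts shape follows the return annotation above; the concrete payload values here are hypothetical:

```python
# Hypothetical response list matching the documented return type:
# list[dict[Literal["message", "log_probs", "top_logprobs"], Any]]
responses = [
    {
        "message": "def reverse(s: str) -> str:\n    return s[::-1]",
        "log_probs": [-0.12, -0.03, -0.4],  # per-token log probabilities
        "top_logprobs": None,               # only populated when requested
    },
]

# Extract the completion text from each entry.
completions = [r["message"] for r in responses]
assert completions[0].startswith("def reverse")
```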