nip.code_validation.dataset_generation._get_openai_response
- nip.code_validation.dataset_generation._get_openai_response(model: str, messages: list[dict[Literal['role', 'content'], str]], temperature: float = 1.0, log_probs: bool = False, top_logprobs: bool = None, num_responses: int = 1) → list[Choice] [source]
Get completions from the OpenAI API for a chat model.
- Parameters:
model (str) – The name of the chat model to use.
messages (list[dict[Literal["role", "content"], str]]) – A list of dictionaries representing the chat messages. Each dictionary should have a “role” key whose value is “user” or “assistant”, and a “content” key containing the text of the message.
temperature (float, default=1.0) – The sampling temperature to use when generating completions.
log_probs (bool, default=False) – Whether to return the log probabilities of the tokens in the completion.
top_logprobs (bool, default=None) – Whether to return the top log probabilities of the tokens in the completion.
num_responses (int, default=1) – The number of completions to generate.
- Returns:
completions (list[Choice]) – A list of completion choices returned by the OpenAI API.
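- Example:
A minimal usage sketch based on the signature above. The model name, the prompt text, and the `choice.message.content` access pattern are assumptions (the latter follows the OpenAI Python SDK's `Choice` objects, which the return annotation suggests are passed through unchanged); they are illustrative only.

```python
# Illustrative sketch only: assumes the OpenAI SDK's Choice objects are
# returned as-is, as the return annotation above suggests.
from nip.code_validation.dataset_generation import _get_openai_response

messages = [
    {"role": "user", "content": "Summarise what a binary search does in one sentence."},
]

# Ask for three independent samples at a moderate temperature.
choices = _get_openai_response(
    model="gpt-4o-mini",   # hypothetical model name; use any chat model available to you
    messages=messages,
    temperature=0.7,
    num_responses=3,
)

for choice in choices:
    # Each Choice carries one sampled assistant message (OpenAI SDK attribute layout).
    print(choice.message.content)
```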