Packages

package model

Type Members

  1. final case class AssistantFileObject(id: String, object: Object, createdAt: Int, assistantId: String) extends Product with Serializable

    AssistantFileObject model

    A list of [Files](/docs/api-reference/files) attached to an assistant.

    id

    The identifier, which can be referenced in API endpoints.

    object

    The object type, which is always assistant.file.

    createdAt

    The Unix timestamp (in seconds) for when the assistant file was created.

    assistantId

    The assistant ID that the file is attached to.

  2. final case class AssistantObject(id: String, object: Object, createdAt: Int, name: Optional[Name], description: Optional[Description], model: String, instructions: Optional[Instructions], tools: Chunk[ToolsItem], fileIds: Chunk[String], metadata: Optional[Metadata]) extends Product with Serializable

    AssistantObject model

    Represents an assistant that can call the model and use tools.

    id

    The identifier, which can be referenced in API endpoints.

    object

    The object type, which is always assistant.

    createdAt

    The Unix timestamp (in seconds) for when the assistant was created.

    name

    The name of the assistant. The maximum length is 256 characters.

    description

    The description of the assistant. The maximum length is 512 characters.

    model

    ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.

    instructions

    The system instructions that the assistant uses. The maximum length is 32768 characters.

    tools

    A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, retrieval, or function.

    fileIds

    A list of [file](/docs/api-reference/files) IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order.

    metadata

    Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

  3. final case class AssistantToolsCode(type: Type) extends Product with Serializable

    AssistantToolsCode model

    type

    The type of tool being defined: code_interpreter

  4. final case class AssistantToolsFunction(type: Type, function: FunctionObject) extends Product with Serializable

    AssistantToolsFunction model

    type

    The type of tool being defined: function

  5. final case class AssistantToolsRetrieval(type: Type) extends Product with Serializable

    AssistantToolsRetrieval model

    type

    The type of tool being defined: retrieval

  6. sealed trait AssistantsListAssistantFilesOrder extends AnyRef

    assistants_listAssistantFiles_order model

  7. sealed trait AssistantsListAssistantsOrder extends AnyRef

    assistants_listAssistants_order model

  8. type CaseType1 = model.CaseType1.Type

    CaseType1 model

  9. final case class ChatCompletionFunctionCallOption(name: String) extends Product with Serializable

    ChatCompletionFunctionCallOption model

    Specifying a particular function via {"name": "my_function"} forces the model to call that function.

    name

    The name of the function to call.

  10. final case class ChatCompletionFunctions(description: Optional[String] = Optional.Absent, name: String, parameters: Optional[FunctionParameters] = Optional.Absent) extends Product with Serializable

    ChatCompletionFunctions model

    description

    A description of what the function does, used by the model to choose when and how to call the function.

    name

    The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
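
    The name rule above can be checked with a simple regular expression. A minimal sketch (the pattern itself is an illustration derived from the documented rule, not part of the library):

```scala
// Illustrative check of the documented function-name rule:
// a-z, A-Z, 0-9, underscores and dashes, at most 64 characters.
val functionNamePattern = "^[a-zA-Z0-9_-]{1,64}$".r

val isValid = functionNamePattern.matches("get_current_weather")
// isValid == true
```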

  11. final case class ChatCompletionMessageToolCall(id: String, type: Type, function: Function) extends Product with Serializable

    ChatCompletionMessageToolCall model

    id

    The ID of the tool call.

    type

    The type of the tool. Currently, only function is supported.

    function

    The function that the model called.

  12. final case class ChatCompletionMessageToolCallChunk(index: Int, id: Optional[String] = Optional.Absent, type: Optional[Type] = Optional.Absent, function: Optional[Function] = Optional.Absent) extends Product with Serializable

    ChatCompletionMessageToolCallChunk model

    id

    The ID of the tool call.

    type

    The type of the tool. Currently, only function is supported.

  13. final case class ChatCompletionNamedToolChoice(type: Type, function: Function) extends Product with Serializable

    ChatCompletionNamedToolChoice model

    Specifies a tool the model should use. Use to force the model to call a specific function.

    type

    The type of the tool. Currently, only function is supported.

  14. final case class ChatCompletionRequestAssistantMessage(content: Optional[String] = Optional.Absent, role: Role, name: Optional[String] = Optional.Absent, toolCalls: Optional[Chunk[ChatCompletionMessageToolCall]] = Optional.Absent, functionCall: Optional[FunctionCall] = Optional.Absent) extends Product with Serializable

    ChatCompletionRequestAssistantMessage model

    content

    The contents of the assistant message. Required unless tool_calls or function_call is specified.

    role

    The role of the message's author, in this case assistant.

    name

    An optional name for the participant. Provides the model information to differentiate between participants of the same role.

    functionCall

    Deprecated and replaced by tool_calls. The name and arguments of a function that should be called, as generated by the model.

  15. final case class ChatCompletionRequestFunctionMessage(role: Role, content: Optional[String], name: String) extends Product with Serializable

    ChatCompletionRequestFunctionMessage model

    role

    The role of the message's author, in this case function.

    content

    The contents of the function message.

    name

    The name of the function to call.

  16. sealed trait ChatCompletionRequestMessage extends AnyRef

    ChatCompletionRequestMessage model

  17. sealed trait ChatCompletionRequestMessageContentPart extends AnyRef

    ChatCompletionRequestMessageContentPart model

  18. final case class ChatCompletionRequestMessageContentPartImage(type: Type, imageUrl: ImageUrl) extends Product with Serializable

    ChatCompletionRequestMessageContentPartImage model

    type

    The type of the content part.

  19. final case class ChatCompletionRequestMessageContentPartText(type: Type, text: String) extends Product with Serializable

    ChatCompletionRequestMessageContentPartText model

    type

    The type of the content part.

    text

    The text content.

  20. final case class ChatCompletionRequestSystemMessage(content: String, role: Role, name: Optional[String] = Optional.Absent) extends Product with Serializable

    ChatCompletionRequestSystemMessage model

    content

    The contents of the system message.

    role

    The role of the message's author, in this case system.

    name

    An optional name for the participant. Provides the model information to differentiate between participants of the same role.

  21. final case class ChatCompletionRequestToolMessage(role: Role, content: String, toolCallId: String) extends Product with Serializable

    ChatCompletionRequestToolMessage model

    role

    The role of the message's author, in this case tool.

    content

    The contents of the tool message.

    toolCallId

    Tool call that this message is responding to.

  22. final case class ChatCompletionRequestUserMessage(content: Content, role: Role, name: Optional[String] = Optional.Absent) extends Product with Serializable

    ChatCompletionRequestUserMessage model

    content

    The contents of the user message.

    role

    The role of the message's author, in this case user.

    name

    An optional name for the participant. Provides the model information to differentiate between participants of the same role.
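
    As a rough sketch, a user message follows this shape. The case class below is a simplified stand-in for illustration only (Content and Role are reduced to plain String, Optional to Option); it is not the library's actual definition:

```scala
// Simplified stand-in mirroring the documented
// ChatCompletionRequestUserMessage shape (illustrative only).
final case class UserMessage(
  content: String,
  role: String,
  name: Option[String] = None
)

val msg = UserMessage(content = "What is the capital of France?", role = "user")
```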

  23. final case class ChatCompletionResponseMessage(content: Optional[String], toolCalls: Optional[Chunk[ChatCompletionMessageToolCall]] = Optional.Absent, role: Role, functionCall: Optional[FunctionCall] = Optional.Absent) extends Product with Serializable

    ChatCompletionResponseMessage model

    A chat completion message generated by the model.

    content

    The contents of the message.

    role

    The role of the author of this message.

    functionCall

    Deprecated and replaced by tool_calls. The name and arguments of a function that should be called, as generated by the model.

  24. sealed trait ChatCompletionRole extends AnyRef

    ChatCompletionRole model

    The role of the author of a message.

  25. final case class ChatCompletionStreamResponseDelta(content: Optional[String] = Optional.Absent, functionCall: Optional[FunctionCall] = Optional.Absent, toolCalls: Optional[Chunk[ChatCompletionMessageToolCallChunk]] = Optional.Absent, role: Optional[Role] = Optional.Absent) extends Product with Serializable

    ChatCompletionStreamResponseDelta model

    A chat completion delta generated by streamed model responses.

    content

    The contents of the chunk message.

    functionCall

    Deprecated and replaced by tool_calls. The name and arguments of a function that should be called, as generated by the model.

    role

    The role of the author of this message.

  26. final case class ChatCompletionTokenLogprob(token: String, logprob: Double, bytes: Optional[Chunk[Int]], topLogprobs: Chunk[TopLogprobsItem]) extends Product with Serializable

    ChatCompletionTokenLogprob model

    token

    The token.

    logprob

    The log probability of this token.

    bytes

    A list of integers representing the UTF-8 byte representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.

    topLogprobs

    A list of the most likely tokens and their log probabilities at this token position. In rare cases, fewer than the requested number of top_logprobs may be returned.
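
    The per-token byte fragments can be reassembled into text by concatenating them across tokens and decoding once as UTF-8. A minimal sketch with illustrative byte values (here "é", whose two UTF-8 bytes are split across two tokens, followed by "!"):

```scala
import java.nio.charset.StandardCharsets

// Per-token UTF-8 byte fragments, as in the bytes field above
// (values are illustrative, not real API output).
val tokenBytes: List[List[Int]] = List(List(0xC3), List(0xA9), List(0x21))

// Concatenate all fragments, then decode the whole sequence as UTF-8.
val combined: String =
  new String(tokenBytes.flatten.map(_.toByte).toArray, StandardCharsets.UTF_8)
// combined == "é!"
```

    Decoding each token's bytes separately would fail here, since the first token holds only half of a multi-byte character.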

  27. final case class ChatCompletionTool(type: Type, function: FunctionObject) extends Product with Serializable

    ChatCompletionTool model

    type

    The type of the tool. Currently, only function is supported.

  28. sealed trait ChatCompletionToolChoiceOption extends AnyRef

    ChatCompletionToolChoiceOption model

    Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"type": "function", "function": {"name": "my_function"}} forces the model to call that function.

    none is the default when no functions are present. auto is the default if functions are present.
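
    The three documented forms can be sketched with a simplified stand-in for the sealed trait (illustrative only; not the library's actual encoding):

```scala
// Simplified stand-in for ChatCompletionToolChoiceOption (illustrative only).
sealed trait ToolChoice
case object NoTool extends ToolChoice
case object Auto   extends ToolChoice
final case class Named(functionName: String) extends ToolChoice

// Render each variant in the wire format described above.
def toJson(choice: ToolChoice): String = choice match {
  case NoTool   => "\"none\""
  case Auto     => "\"auto\""
  case Named(n) => s"""{"type": "function", "function": {"name": "$n"}}"""
}
```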

  29. sealed trait Code extends AnyRef

    Code model

    One of server_error or rate_limit_exceeded.

  30. final case class CompletionUsage(completionTokens: Int, promptTokens: Int, totalTokens: Int) extends Product with Serializable

    CompletionUsage model

    Usage statistics for the completion request.

    completionTokens

    Number of tokens in the generated completion.

    promptTokens

    Number of tokens in the prompt.

    totalTokens

    Total number of tokens used in the request (prompt + completion).
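
    The three counts satisfy the invariant stated above: total = prompt + completion. A stand-in mirroring the documented signature (field values are illustrative):

```scala
// Stand-in mirroring the documented CompletionUsage signature (illustrative).
final case class CompletionUsage(
  completionTokens: Int,
  promptTokens: Int,
  totalTokens: Int
)

val usage = CompletionUsage(completionTokens = 42, promptTokens = 13, totalTokens = 55)

// The documented relationship: total = prompt + completion.
val consistent = usage.totalTokens == usage.promptTokens + usage.completionTokens
// consistent == true
```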

  31. final case class CreateAssistantFileRequest(fileId: String) extends Product with Serializable

    CreateAssistantFileRequest model

    fileId

    A [File](/docs/api-reference/files) ID (with purpose="assistants") that the assistant should use. Useful for tools like retrieval and code_interpreter that can access files.

  32. final case class CreateAssistantRequest(model: CreateAssistantRequest.Model, name: Optional[Name] = Optional.Absent, description: Optional[Description] = Optional.Absent, instructions: Optional[Instructions] = Optional.Absent, tools: Optional[Chunk[ToolsItem]] = Optional.Absent, fileIds: Optional[Chunk[String]] = Optional.Absent, metadata: Optional[Metadata] = Optional.Absent) extends Product with Serializable

    CreateAssistantRequest model

    model

    ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.

    name

    The name of the assistant. The maximum length is 256 characters.

    description

    The description of the assistant. The maximum length is 512 characters.

    instructions

    The system instructions that the assistant uses. The maximum length is 32768 characters.

    tools

    A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, retrieval, or function.

    fileIds

    A list of [file](/docs/api-reference/files) IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order.

    metadata

    Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

  33. final case class CreateChatCompletionFunctionResponse(id: String, choices: Chunk[ChoicesItem], created: Int, model: String, systemFingerprint: Optional[String] = Optional.Absent, object: Object, usage: Optional[CompletionUsage] = Optional.Absent) extends Product with Serializable

    CreateChatCompletionFunctionResponse model

    Represents a chat completion response returned by the model, based on the provided input.

    id

    A unique identifier for the chat completion.

    choices

    A list of chat completion choices. Can be more than one if n is greater than 1.

    created

    The Unix timestamp (in seconds) of when the chat completion was created.

    model

    The model used for the chat completion.

    systemFingerprint

    This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.

    object

    The object type, which is always chat.completion.

  34. final case class CreateChatCompletionImageResponse(values: Map[String, Json]) extends DynamicObject[CreateChatCompletionImageResponse] with Product with Serializable

    CreateChatCompletionImageResponse model

    Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.

    values

    The dynamic list of key-value pairs of the object.

  35. final case class CreateChatCompletionRequest(messages: NonEmptyChunk[ChatCompletionRequestMessage], model: CreateChatCompletionRequest.Model, frequencyPenalty: Optional[FrequencyPenalty] = Optional.Absent, logitBias: Optional[LogitBias] = Optional.Absent, logprobs: Optional[Boolean] = Optional.Absent, topLogprobs: Optional[TopLogprobs] = Optional.Absent, maxTokens: Optional[Int] = Optional.Absent, n: Optional[CreateChatCompletionRequest.N] = Optional.Absent, presencePenalty: Optional[PresencePenalty] = Optional.Absent, responseFormat: Optional[CreateChatCompletionRequest.ResponseFormat] = Optional.Absent, seed: Optional[Seed] = Optional.Absent, stop: Optional[Stop] = Optional.Absent, stream: Optional[Boolean] = Optional.Absent, temperature: Optional[Temperature] = Optional.Absent, topP: Optional[TopP] = Optional.Absent, tools: Optional[Chunk[ChatCompletionTool]] = Optional.Absent, toolChoice: Optional[ChatCompletionToolChoiceOption] = Optional.Absent, user: Optional[String] = Optional.Absent, functionCall: Optional[FunctionCall] = Optional.Absent, functions: Optional[NonEmptyChunk[ChatCompletionFunctions]] = Optional.Absent) extends Product with Serializable

    CreateChatCompletionRequest model

    messages

    A list of messages comprising the conversation so far. [Example Python code](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models).

    model

    ID of the model to use. See the [model endpoint compatibility](/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API.

    frequencyPenalty

    Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details)

    logitBias

    Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.

    logprobs

    Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message. This option is currently not available on the gpt-4-vision-preview model.

    topLogprobs

    An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.

    maxTokens

    The maximum number of [tokens](/tokenizer) that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.

    n

    How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs.

    presencePenalty

    Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details)

    responseFormat

    An object specifying the format that the model must output. Compatible with [GPT-4 Turbo](/docs/models/gpt-4-and-gpt-4-turbo) and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. Setting to { "type": "json_object" } enables JSON mode, which guarantees the message the model generates is valid JSON. **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

    seed

    This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.

    stop

    Up to 4 sequences where the API will stop generating further tokens.

    stream

    If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a data: [DONE] message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).

    temperature

    What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

    topP

    An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.

    tools

    A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for.

    user

    A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).

    functionCall

    Deprecated in favor of tool_choice. Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. none is the default when no functions are present. auto is the default if functions are present.

    functions

    Deprecated in favor of tools. A list of functions the model may generate JSON inputs for.
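
    Putting a request together: a trimmed stand-in keeping only a few of the documented fields (illustrative only; the real case class carries many more Optional parameters and richer types than the plain String and Option used here):

```scala
// Trimmed stand-in for CreateChatCompletionRequest (illustrative only;
// the documented class has ~20 parameters with Optional defaults).
final case class Message(role: String, content: String)
final case class ChatRequest(
  model: String,
  messages: List[Message],
  temperature: Option[Double] = None, // 0 to 2; unset means server default
  maxTokens: Option[Int]      = None,
  stream: Option[Boolean]     = None
)

val request = ChatRequest(
  model = "gpt-3.5-turbo",
  messages = List(
    Message("system", "You are a helpful assistant."),
    Message("user", "Say hello.")
  ),
  temperature = Some(0.2)
)
```

    Only model and messages are required; everything else falls back to Optional.Absent in the real class, so the server applies its own defaults.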

  36. final case class CreateChatCompletionResponse(id: String, choices: Chunk[ChoicesItem], created: Int, model: String, systemFingerprint: Optional[String] = Optional.Absent, object: Object, usage: Optional[CompletionUsage] = Optional.Absent) extends Product with Serializable

    CreateChatCompletionResponse model

    Represents a chat completion response returned by the model, based on the provided input.

    id

    A unique identifier for the chat completion.

    choices

    A list of chat completion choices. Can be more than one if n is greater than 1.

    created

    The Unix timestamp (in seconds) of when the chat completion was created.

    model

    The model used for the chat completion.

    systemFingerprint

    This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.

    object

    The object type, which is always chat.completion.

  37. final case class CreateChatCompletionStreamResponse(id: String, choices: Chunk[ChoicesItem], created: Int, model: String, systemFingerprint: Optional[String] = Optional.Absent, object: Object) extends Product with Serializable

    CreateChatCompletionStreamResponse model

    Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.

    id

    A unique identifier for the chat completion. Each chunk has the same ID.

    choices

    A list of chat completion choices. Can be more than one if n is greater than 1.

    created

    The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.

    model

    The model used to generate the completion.

    systemFingerprint

    This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.

    object

    The object type, which is always chat.completion.chunk.

  38. final case class CreateCompletionRequest(model: CreateCompletionRequest.Model, prompt: Optional[Prompt], bestOf: Optional[BestOf] = Optional.Absent, echo: Optional[Boolean] = Optional.Absent, frequencyPenalty: Optional[FrequencyPenalty] = Optional.Absent, logitBias: Optional[LogitBias] = Optional.Absent, logprobs: Optional[Logprobs] = Optional.Absent, maxTokens: Optional[MaxTokens] = Optional.Absent, n: Optional[CreateCompletionRequest.N] = Optional.Absent, presencePenalty: Optional[PresencePenalty] = Optional.Absent, seed: Optional[Seed] = Optional.Absent, stop: Optional[Stop] = Optional.Absent, stream: Optional[Boolean] = Optional.Absent, suffix: Optional[String] = Optional.Absent, temperature: Optional[Temperature] = Optional.Absent, topP: Optional[TopP] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable

    CreateCompletionRequest model

    model

    ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.

    prompt

    The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays. Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document.

    bestOf

    Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return – best_of must be greater than n. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.

    echo

    Echo back the prompt in addition to the completion.

    frequencyPenalty

    Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details)

    logitBias

    Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view=bpe) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass {"50256": -100} to prevent the <|endoftext|> token from being generated.

    logprobs

    Include the log probabilities on the logprobs most likely output tokens, as well as the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 5.

    maxTokens

    The maximum number of [tokens](/tokenizer) that can be generated in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.

    n

    How many completions to generate for each prompt. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.

    presencePenalty

    Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details)

    seed

    If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.

    stop

    Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

    stream

    Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a data: [DONE] message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).

    suffix

    The suffix that comes after a completion of inserted text.

    temperature

    What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

    topP

    An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.

    user

    A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
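
    The logit_bias example given above, as a value: a map from token ID (as a string key) to a bias in [-100, 100], where -100 effectively bans the token:

```scala
// Illustrative logit_bias map following the documented example:
// ban token ID 50256 (the <|endoftext|> token) by assigning -100.
val logitBias: Map[String, Int] = Map("50256" -> -100)

// Biases outside [-100, 100] are not accepted by the API; a simple
// client-side clamp (illustrative helper, not part of the library):
def clampBias(b: Int): Int = b.max(-100).min(100)
```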

  39. final case class CreateCompletionResponse(id: String, choices: Chunk[ChoicesItem], created: Int, model: String, systemFingerprint: Optional[String] = Optional.Absent, object: Object, usage: Optional[CompletionUsage] = Optional.Absent) extends Product with Serializable

    CreateCompletionResponse model

    CreateCompletionResponse model

    Represents a completion response from the API. Note: both the streamed and non-streamed response objects share the same shape (unlike the chat endpoint).

    id

    A unique identifier for the completion.

    choices

    The list of completion choices the model generated for the input prompt.

    created

    The Unix timestamp (in seconds) of when the completion was created.

    model

    The model used for completion.

    systemFingerprint

    This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.
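    Monitoring for backend changes can be as simple as comparing each response's fingerprint against the first one observed. A hypothetical sketch over plain response dicts:

```python
def fingerprint_changed(responses):
    """Flag responses whose system_fingerprint differs from the first seen value."""
    seen = None
    changes = []
    for r in responses:
        fp = r.get("system_fingerprint")
        if seen is None:
            seen = fp
        changes.append(fp != seen)
    return changes
```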

    object

    The object type, which is always "text_completion".

  40. final case class CreateEmbeddingRequest(input: Input, model: CreateEmbeddingRequest.Model, encodingFormat: Optional[EncodingFormat] = Optional.Absent, dimensions: Optional[Dimensions] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable

    CreateEmbeddingRequest model

    CreateEmbeddingRequest model

    input

    Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for text-embedding-ada-002), cannot be an empty string, and any array must be 2048 dimensions or less. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.

    model

    ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.

    encodingFormat

    The format to return the embeddings in. Can be either float or [base64](https://pypi.org/project/pybase64/).
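    The base64 variant is commonly decoded as packed little-endian float32 values (as in OpenAI's cookbook examples); assuming that layout, a round-trip sketch:

```python
import base64
import struct

def decode_b64_embedding(b64_data):
    """Decode a base64 embedding payload into a list of little-endian float32 values."""
    raw = base64.b64decode(b64_data)
    return list(struct.unpack("<%df" % (len(raw) // 4), raw))

# Round-trip a small vector to show the encoding.
encoded = base64.b64encode(struct.pack("<3f", 0.5, -1.0, 2.0)).decode()
print(decode_b64_embedding(encoded))  # [0.5, -1.0, 2.0]
```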

    dimensions

    The number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models.

    user

    A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).

  41. final case class CreateEmbeddingResponse(data: Chunk[Embedding], model: String, object: Object, usage: Usage) extends Product with Serializable

    CreateEmbeddingResponse model

    CreateEmbeddingResponse model

    data

    The list of embeddings generated by the model.

    model

    The name of the model used to generate the embedding.

    object

    The object type, which is always "list".

    usage

    The usage information for the request.

  42. final case class CreateFileRequest(file: File, purpose: Purpose) extends Product with Serializable

    CreateFileRequest model

    CreateFileRequest model

    file

    The File object (not file name) to be uploaded.

    purpose

    The intended purpose of the uploaded file. Use "fine-tune" for [Fine-tuning](/docs/api-reference/fine-tuning) and "assistants" for [Assistants](/docs/api-reference/assistants) and [Messages](/docs/api-reference/messages). This allows us to validate that the format of the uploaded file is correct for fine-tuning.

  43. final case class CreateFineTuningJobRequest(model: CreateFineTuningJobRequest.Model, trainingFile: String, hyperparameters: Optional[Hyperparameters] = Optional.Absent, suffix: Optional[Suffix] = Optional.Absent, validationFile: Optional[String] = Optional.Absent) extends Product with Serializable

    CreateFineTuningJobRequest model

    CreateFineTuningJobRequest model

    model

    The name of the model to fine-tune. You can select one of the [supported models](/docs/guides/fine-tuning/what-models-can-be-fine-tuned).

    trainingFile

    The ID of an uploaded file that contains training data. See [upload file](/docs/api-reference/files/upload) for how to upload a file. Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose fine-tune. See the [fine-tuning guide](/docs/guides/fine-tuning) for more details.

    hyperparameters

    The hyperparameters used for the fine-tuning job.

    suffix

    A string of up to 18 characters that will be added to your fine-tuned model name. For example, a suffix of "custom-model-name" would produce a model name like ft:gpt-3.5-turbo:openai:custom-model-name:7p4lURel.

    validationFile

    The ID of an uploaded file that contains validation data. If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the fine-tuning results file. The same data should not be present in both train and validation files. Your dataset must be formatted as a JSONL file. You must upload your file with the purpose fine-tune. See the [fine-tuning guide](/docs/guides/fine-tuning) for more details.
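    Both training and validation files use the JSONL format: one standalone JSON object per line. A hypothetical chat-style example (the exact object shape depends on the model family being fine-tuned):

```python
import json

examples = [
    {"messages": [{"role": "user", "content": "Hi"},
                  {"role": "assistant", "content": "Hello!"}]},
]

# Each line of the JSONL file is one standalone JSON object.
jsonl = "\n".join(json.dumps(e) for e in examples)
parsed = [json.loads(line) for line in jsonl.splitlines()]
```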

  44. final case class CreateImageEditRequest(image: File, prompt: String, mask: Optional[File] = Optional.Absent, model: Optional[CreateImageEditRequest.Model] = Optional.Absent, n: Optional[CreateImageEditRequest.N] = Optional.Absent, size: Optional[Size] = Optional.Absent, responseFormat: Optional[ResponseFormat] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable

    CreateImageEditRequest model

    CreateImageEditRequest model

    image

    The image to edit. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask.

    prompt

    A text description of the desired image(s). The maximum length is 1000 characters.

    mask

    An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where image should be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as image.
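    In other words, the editable region is exactly the set of pixels whose alpha channel is zero. A toy sketch over a flat list of RGBA tuples (real code would use an image library such as Pillow):

```python
def editable_pixels(rgba_pixels):
    """Return indices of fully transparent pixels (alpha == 0), i.e. where edits apply."""
    return [i for i, (_, _, _, a) in enumerate(rgba_pixels) if a == 0]

print(editable_pixels([(255, 0, 0, 255), (0, 0, 0, 0), (10, 10, 10, 128)]))  # [1]
```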

    model

    The model to use for image generation. Only dall-e-2 is supported at this time.

    n

    The number of images to generate. Must be between 1 and 10.

    size

    The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.

    responseFormat

    The format in which the generated images are returned. Must be one of url or b64_json.

    user

    A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).

  45. final case class CreateImageRequest(prompt: String, model: Optional[CreateImageRequest.Model] = Optional.Absent, n: Optional[N] = Optional.Absent, quality: Optional[Quality] = Optional.Absent, responseFormat: Optional[ResponseFormat] = Optional.Absent, size: Optional[CreateImageRequest.Size] = Optional.Absent, style: Optional[Style] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable

    CreateImageRequest model

    CreateImageRequest model

    prompt

    A text description of the desired image(s). The maximum length is 1000 characters for dall-e-2 and 4000 characters for dall-e-3.

    model

    The model to use for image generation.

    n

    The number of images to generate. Must be between 1 and 10. For dall-e-3, only n=1 is supported.

    quality

    The quality of the image that will be generated. hd creates images with finer details and greater consistency across the image. This param is only supported for dall-e-3.

    responseFormat

    The format in which the generated images are returned. Must be one of url or b64_json.

    size

    The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024 for dall-e-2. Must be one of 1024x1024, 1792x1024, or 1024x1792 for dall-e-3 models.

    style

    The style of the generated images. Must be one of vivid or natural. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for dall-e-3.

    user

    A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).

  46. final case class CreateImageVariationRequest(image: File, model: Optional[CreateImageVariationRequest.Model] = Optional.Absent, n: Optional[N] = Optional.Absent, responseFormat: Optional[ResponseFormat] = Optional.Absent, size: Optional[Size] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable

    CreateImageVariationRequest model

    CreateImageVariationRequest model

    image

    The image to use as the basis for the variation(s). Must be a valid PNG file, less than 4MB, and square.

    model

    The model to use for image generation. Only dall-e-2 is supported at this time.

    n

    The number of images to generate. Must be between 1 and 10. For dall-e-3, only n=1 is supported.

    responseFormat

    The format in which the generated images are returned. Must be one of url or b64_json.

    size

    The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.

    user

    A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).

  47. final case class CreateMessageRequest(role: Role, content: Content, fileIds: Optional[NonEmptyChunk[String]] = Optional.Absent, metadata: Optional[Metadata] = Optional.Absent) extends Product with Serializable

    CreateMessageRequest model

    CreateMessageRequest model

    role

    The role of the entity that is creating the message. Currently only user is supported.

    content

    The content of the message.

    fileIds

    A list of [File](/docs/api-reference/files) IDs that the message should use. There can be a maximum of 10 files attached to a message. Useful for tools like retrieval and code_interpreter that can access and use files.

    metadata

    Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
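    The documented limits can be checked client-side before sending a request. A hypothetical validator (not part of the API):

```python
def validate_metadata(metadata):
    """Check the documented limits: at most 16 pairs, 64-char keys, 512-char values."""
    if len(metadata) > 16:
        return False
    return all(len(k) <= 64 and len(v) <= 512 for k, v in metadata.items())
```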

  48. final case class CreateModerationRequest(input: Input, model: Optional[CreateModerationRequest.Model] = Optional.Absent) extends Product with Serializable

    CreateModerationRequest model

    CreateModerationRequest model

    input

    The input text to classify.

    model

    Two content moderation models are available: text-moderation-stable and text-moderation-latest. The default is text-moderation-latest, which will be automatically upgraded over time. This ensures you are always using our most accurate model. If you use text-moderation-stable, we will provide advance notice before updating the model. Accuracy of text-moderation-stable may be slightly lower than for text-moderation-latest.

  49. final case class CreateModerationResponse(id: String, model: String, results: Chunk[ResultsItem]) extends Product with Serializable

    CreateModerationResponse model

    CreateModerationResponse model

    Represents a policy compliance report by OpenAI's content moderation model against a given input.

    id

    The unique identifier for the moderation request.

    model

    The model used to generate the moderation results.

    results

    A list of moderation objects.

  50. final case class CreateRunRequest(assistantId: String, model: Optional[String] = Optional.Absent, instructions: Optional[String] = Optional.Absent, additionalInstructions: Optional[String] = Optional.Absent, tools: Optional[Chunk[ToolsItem]] = Optional.Absent, metadata: Optional[Metadata] = Optional.Absent) extends Product with Serializable

    CreateRunRequest model

    CreateRunRequest model

    assistantId

    The ID of the [assistant](/docs/api-reference/assistants) to use to execute this run.

    model

    The ID of the [Model](/docs/api-reference/models) to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used.

    instructions

    Overrides the [instructions](/docs/api-reference/assistants/createAssistant) of the assistant. This is useful for modifying the behavior on a per-run basis.

    additionalInstructions

    Appends additional instructions at the end of the instructions for the run. This is useful for modifying the behavior on a per-run basis without overriding other instructions.

    tools

    Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis.

    metadata

    Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

  51. final case class CreateSpeechRequest(model: CreateSpeechRequest.Model, input: Input, voice: Voice, responseFormat: Optional[CreateSpeechRequest.ResponseFormat] = Optional.Absent, speed: Optional[Speed] = Optional.Absent) extends Product with Serializable

    CreateSpeechRequest model

    CreateSpeechRequest model

    model

    One of the available [TTS models](/docs/models/tts): tts-1 or tts-1-hd.

    input

    The text to generate audio for. The maximum length is 4096 characters.

    voice

    The voice to use when generating the audio. Supported voices are alloy, echo, fable, onyx, nova, and shimmer. Previews of the voices are available in the [Text to speech guide](/docs/guides/text-to-speech/voice-options).

    responseFormat

    The format to return the audio in. Supported formats are mp3, opus, aac, and flac.

    speed

    The speed of the generated audio. Select a value from 0.25 to 4.0. 1.0 is the default.

  52. final case class CreateThreadAndRunRequest(assistantId: String, thread: Optional[CreateThreadRequest] = Optional.Absent, model: Optional[String] = Optional.Absent, instructions: Optional[String] = Optional.Absent, tools: Optional[Chunk[ToolsItem]] = Optional.Absent, metadata: Optional[Metadata] = Optional.Absent) extends Product with Serializable

    CreateThreadAndRunRequest model

    CreateThreadAndRunRequest model

    assistantId

    The ID of the [assistant](/docs/api-reference/assistants) to use to execute this run.

    model

    The ID of the [Model](/docs/api-reference/models) to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used.

    instructions

    Override the default system message of the assistant. This is useful for modifying the behavior on a per-run basis.

    tools

    Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis.

    metadata

    Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

  53. final case class CreateThreadRequest(messages: Optional[Chunk[CreateMessageRequest]] = Optional.Absent, metadata: Optional[Metadata] = Optional.Absent) extends Product with Serializable

    CreateThreadRequest model

    CreateThreadRequest model

    messages

    A list of [messages](/docs/api-reference/messages) to start the thread with.

    metadata

    Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

  54. final case class CreateTranscriptionRequest(file: File, model: CreateTranscriptionRequest.Model, language: Optional[String] = Optional.Absent, prompt: Optional[String] = Optional.Absent, responseFormat: Optional[CreateTranscriptionRequest.ResponseFormat] = Optional.Absent, temperature: Optional[Double] = Optional.Absent, timestampGranularities: Optional[Chunk[TimestampGranularitiesItem]] = Optional.Absent) extends Product with Serializable

    CreateTranscriptionRequest model

    CreateTranscriptionRequest model

    file

    The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.

    model

    ID of the model to use. Only whisper-1 is currently available.

    language

    The language of the input audio. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format will improve accuracy and latency.

    prompt

    An optional text to guide the model's style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.

    responseFormat

    The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.

    temperature

    The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.

    timestampGranularities

    The timestamp granularities to populate for this transcription. Any of these options: word or segment. Note: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency.

  55. final case class CreateTranscriptionResponse(text: String) extends Product with Serializable

    CreateTranscriptionResponse model

  56. final case class CreateTranslationRequest(file: File, model: CreateTranslationRequest.Model, prompt: Optional[String] = Optional.Absent, responseFormat: Optional[String] = Optional.Absent, temperature: Optional[Double] = Optional.Absent) extends Product with Serializable

    CreateTranslationRequest model

    CreateTranslationRequest model

    file

    The audio file object (not file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.

    model

    ID of the model to use. Only whisper-1 is currently available.

    prompt

    An optional text to guide the model's style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.

    responseFormat

    The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.

    temperature

    The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.

  57. final case class CreateTranslationResponse(text: String) extends Product with Serializable

    CreateTranslationResponse model

  58. final case class DeleteAssistantFileResponse(id: String, deleted: Boolean, object: Object) extends Product with Serializable

    DeleteAssistantFileResponse model

    DeleteAssistantFileResponse model

    Deletes the association between the assistant and the file, but does not delete the [File](/docs/api-reference/files) object itself.

  59. final case class DeleteAssistantResponse(id: String, deleted: Boolean, object: Object) extends Product with Serializable

    DeleteAssistantResponse model

  60. final case class DeleteFileResponse(id: String, object: Object, deleted: Boolean) extends Product with Serializable

    DeleteFileResponse model

  61. final case class DeleteMessageResponse(id: String, deleted: Boolean, object: Object) extends Product with Serializable

    DeleteMessageResponse model

  62. final case class DeleteModelResponse(id: String, deleted: Boolean, object: String) extends Product with Serializable

    DeleteModelResponse model

  63. final case class DeleteThreadResponse(id: String, deleted: Boolean, object: Object) extends Product with Serializable

    DeleteThreadResponse model

  64. type Description = model.Description.Type

    description model

    description model

    The description of the assistant. The maximum length is 512 characters.

  65. final case class Embedding(index: Int, embedding: Chunk[Double], object: Object) extends Product with Serializable

    Embedding model

    Embedding model

    Represents an embedding vector returned by the embedding endpoint.

    index

    The index of the embedding in the list of embeddings.

    embedding

    The embedding vector, which is a list of floats. The length of the vector depends on the model as listed in the [embedding guide](/docs/guides/embeddings).

    object

    The object type, which is always "embedding".

  66. type EndIndex = model.EndIndex.Type

    end_index model

  67. final case class Error(code: Optional[String], message: String, param: Optional[String], type: String) extends Product with Serializable

    Error model

  68. final case class ErrorResponse(error: Error) extends Product with Serializable

    ErrorResponse model

  69. final case class File(data: Chunk[Byte], fileName: String) extends Product with Serializable
  70. final case class FineTuningJob(id: String, createdAt: Int, error: Optional[FineTuningJob.Error], fineTunedModel: Optional[String], finishedAt: Optional[Int], hyperparameters: Hyperparameters, model: String, object: Object, organizationId: String, resultFiles: Chunk[String], status: Status, trainedTokens: Optional[Int], trainingFile: String, validationFile: Optional[String]) extends Product with Serializable

    FineTuningJob model

    FineTuningJob model

    The fine_tuning.job object represents a fine-tuning job that has been created through the API.

    id

    The object identifier, which can be referenced in the API endpoints.

    createdAt

    The Unix timestamp (in seconds) for when the fine-tuning job was created.

    error

    For fine-tuning jobs that have failed, this will contain more information on the cause of the failure.

    fineTunedModel

    The name of the fine-tuned model that is being created. The value will be null if the fine-tuning job is still running.

    finishedAt

    The Unix timestamp (in seconds) for when the fine-tuning job was finished. The value will be null if the fine-tuning job is still running.

    hyperparameters

    The hyperparameters used for the fine-tuning job. See the [fine-tuning guide](/docs/guides/fine-tuning) for more details.

    model

    The base model that is being fine-tuned.

    object

    The object type, which is always "fine_tuning.job".

    organizationId

    The organization that owns the fine-tuning job.

    resultFiles

    The compiled results file ID(s) for the fine-tuning job. You can retrieve the results with the [Files API](/docs/api-reference/files/retrieve-contents).

    status

    The current status of the fine-tuning job, which can be either validating_files, queued, running, succeeded, failed, or cancelled.

    trainedTokens

    The total number of billable tokens processed by this fine-tuning job. The value will be null if the fine-tuning job is still running.

    trainingFile

    The file ID used for training. You can retrieve the training data with the [Files API](/docs/api-reference/files/retrieve-contents).

    validationFile

    The file ID used for validation. You can retrieve the validation results with the [Files API](/docs/api-reference/files/retrieve-contents).

  71. final case class FineTuningJobEvent(id: String, createdAt: Int, level: Level, message: String, object: Object) extends Product with Serializable

    FineTuningJobEvent model

    FineTuningJobEvent model

    Fine-tuning job event object

  72. sealed trait FinishReason extends AnyRef

    finish_reason model

    finish_reason model

    The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool, or function_call (deprecated) if the model called a function.

  73. type FrequencyPenalty = model.FrequencyPenalty.Type

    frequency_penalty model

    frequency_penalty model

    Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

    [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details)
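    Per the parameter-details guide, the frequency penalty is typically applied by subtracting penalty times the token's occurrence count from its logit before sampling. A simplified sketch under that assumption:

```python
def apply_frequency_penalty(logits, token_counts, penalty):
    """Subtract penalty * count from each token's logit, discouraging verbatim repeats."""
    return [logit - penalty * token_counts.get(tok, 0)
            for tok, logit in enumerate(logits)]

# Token 1 has appeared 3 times so far, so its logit drops by 1.5.
print(apply_frequency_penalty([1.0, 2.0], {1: 3}, 0.5))  # [1.0, 0.5]
```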

  74. final case class FunctionObject(description: Optional[String] = Optional.Absent, name: String, parameters: Optional[FunctionParameters] = Optional.Absent) extends Product with Serializable

    FunctionObject model

    FunctionObject model

    description

    A description of what the function does, used by the model to choose when and how to call the function.

    name

    The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

  75. final case class FunctionParameters(values: Map[String, Json]) extends DynamicObject[FunctionParameters] with Product with Serializable

    FunctionParameters model

    FunctionParameters model

    The parameters the functions accepts, described as a JSON Schema object. See the [guide](/docs/guides/text-generation/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.

    Omitting parameters defines a function with an empty parameter list.

    values

    The dynamic list of key-value pairs of the object
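    A minimal example of the JSON Schema shape such parameters take; the function and field names here are hypothetical, chosen only for illustration:

```python
# A minimal JSON Schema describing one required string parameter.
weather_params = {
    "type": "object",
    "properties": {
        "location": {
            "type": "string",
            "description": "City and state, e.g. San Francisco, CA",
        },
    },
    "required": ["location"],
}
```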

  76. final case class Image(b64Json: Optional[String] = Optional.Absent, url: Optional[String] = Optional.Absent, revisedPrompt: Optional[String] = Optional.Absent) extends Product with Serializable

    Image model

    Image model

    Represents the URL or the content of an image generated by the OpenAI API.

    b64Json

    The base64-encoded JSON of the generated image, if response_format is b64_json.

    url

    The URL of the generated image, if response_format is url (default).

    revisedPrompt

    The prompt that was used to generate the image, if there was any revision to the prompt.

  77. final case class ImagesResponse(created: Int, data: Chunk[Image]) extends Product with Serializable

    ImagesResponse model

  78. type Instructions = model.Instructions.Type

    instructions model

    instructions model

    The system instructions that the assistant uses. The maximum length is 32768 characters.

  79. final case class ListAssistantFilesResponse(object: String, data: Chunk[AssistantFileObject], firstId: String, lastId: String, hasMore: Boolean) extends Product with Serializable

    ListAssistantFilesResponse model

  80. final case class ListAssistantsResponse(object: String, data: Chunk[AssistantObject], firstId: String, lastId: String, hasMore: Boolean) extends Product with Serializable

    ListAssistantsResponse model

  81. final case class ListFilesResponse(data: Chunk[OpenAIFile], object: Object) extends Product with Serializable

    ListFilesResponse model

  82. final case class ListFineTuningJobEventsResponse(data: Chunk[FineTuningJobEvent], object: Object) extends Product with Serializable

    ListFineTuningJobEventsResponse model

  83. final case class ListMessageFilesResponse(object: String, data: Chunk[MessageFileObject], firstId: String, lastId: String, hasMore: Boolean) extends Product with Serializable

    ListMessageFilesResponse model

  84. final case class ListMessagesResponse(object: String, data: Chunk[MessageObject], firstId: String, lastId: String, hasMore: Boolean) extends Product with Serializable

    ListMessagesResponse model

  85. final case class ListModelsResponse(object: Object, data: Chunk[Model]) extends Product with Serializable

    ListModelsResponse model

  86. final case class ListPaginatedFineTuningJobsResponse(data: Chunk[FineTuningJob], hasMore: Boolean, object: Object) extends Product with Serializable

    ListPaginatedFineTuningJobsResponse model

  87. final case class ListRunStepsResponse(object: String, data: Chunk[RunStepObject], firstId: String, lastId: String, hasMore: Boolean) extends Product with Serializable

    ListRunStepsResponse model

  88. final case class ListRunsResponse(object: String, data: Chunk[RunObject], firstId: String, lastId: String, hasMore: Boolean) extends Product with Serializable

    ListRunsResponse model

  89. final case class ListThreadsResponse(object: String, data: Chunk[ThreadObject], firstId: String, lastId: String, hasMore: Boolean) extends Product with Serializable

    ListThreadsResponse model

  90. final case class MessageContentImageFileObject(type: Type, imageFile: ImageFile) extends Product with Serializable

    MessageContentImageFileObject model

    MessageContentImageFileObject model

    References an image [File](/docs/api-reference/files) in the content of a message.

    type

    Always image_file.

  91. final case class MessageContentTextAnnotationsFileCitationObject(type: Type, text: String, fileCitation: FileCitation, startIndex: StartIndex, endIndex: EndIndex) extends Product with Serializable

    MessageContentTextAnnotationsFileCitationObject model

    MessageContentTextAnnotationsFileCitationObject model

    A citation within the message that points to a specific quote from a specific File associated with the assistant or the message. Generated when the assistant uses the "retrieval" tool to search files.

    type

    Always file_citation.

    text

    The text in the message content that needs to be replaced.

  92. final case class MessageContentTextAnnotationsFilePathObject(type: Type, text: String, filePath: FilePath, startIndex: StartIndex, endIndex: EndIndex) extends Product with Serializable

    MessageContentTextAnnotationsFilePathObject model

    MessageContentTextAnnotationsFilePathObject model

    A URL for the file that's generated when the assistant uses the code_interpreter tool to generate a file.

    type

    Always file_path.

    text

    The text in the message content that needs to be replaced.

  93. final case class MessageContentTextObject(type: Type, text: Text) extends Product with Serializable

    MessageContentTextObject model

    MessageContentTextObject model

    The text content that is part of a message.

    type

    Always text.

  94. final case class MessageFileObject(id: String, object: Object, createdAt: Int, messageId: String) extends Product with Serializable

    MessageFileObject model

    MessageFileObject model

    A list of files attached to a message.

    id

    The identifier, which can be referenced in API endpoints.

    object

    The object type, which is always thread.message.file.

    createdAt

    The Unix timestamp (in seconds) for when the message file was created.

    messageId

    The ID of the [message](/docs/api-reference/messages) that the [File](/docs/api-reference/files) is attached to.

  95. final case class MessageObject(id: String, object: Object, createdAt: Int, threadId: String, role: Role, content: Chunk[ContentItem], assistantId: Optional[String], runId: Optional[String], fileIds: Chunk[String], metadata: Optional[Metadata]) extends Product with Serializable

    MessageObject model

    Represents a message within a [thread](/docs/api-reference/threads).

    id

    The identifier, which can be referenced in API endpoints.

    object

    The object type, which is always thread.message.

    createdAt

    The Unix timestamp (in seconds) for when the message was created.

    threadId

    The [thread](/docs/api-reference/threads) ID that this message belongs to.

    role

    The entity that produced the message. One of user or assistant.

    content

    The content of the message as an array of text and/or images.

    assistantId

    If applicable, the ID of the [assistant](/docs/api-reference/assistants) that authored this message.

    runId

    If applicable, the ID of the [run](/docs/api-reference/runs) associated with the authoring of this message.

    fileIds

    A list of [file](/docs/api-reference/files) IDs that the assistant should use. Useful for tools like retrieval and code_interpreter that can access files. A maximum of 10 files can be attached to a message.

    metadata

    Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

  96. final case class Model(id: String, created: Int, object: Object, ownedBy: String) extends Product with Serializable

    Model model

    Describes an OpenAI model offering that can be used with the API.

    id

    The model identifier, which can be referenced in the API endpoints.

    created

    The Unix timestamp (in seconds) when the model was created.

    object

    The object type, which is always "model".

    ownedBy

    The organization that owns the model.

  97. final case class ModifyAssistantRequest(model: Optional[ModifyAssistantRequest.Model] = Optional.Absent, name: Optional[Name] = Optional.Absent, description: Optional[Description] = Optional.Absent, instructions: Optional[Instructions] = Optional.Absent, tools: Optional[Chunk[ToolsItem]] = Optional.Absent, fileIds: Optional[Chunk[String]] = Optional.Absent, metadata: Optional[Metadata] = Optional.Absent) extends Product with Serializable

    ModifyAssistantRequest model

    model

    ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.

    name

    The name of the assistant. The maximum length is 256 characters.

    description

    The description of the assistant. The maximum length is 512 characters.

    instructions

    The system instructions that the assistant uses. The maximum length is 32768 characters.

    tools

    A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, retrieval, or function.

    fileIds

    A list of [File](/docs/api-reference/files) IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order. If a file was previously attached but no longer appears in the list, it will be deleted from the assistant.

    metadata

    Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

  98. final case class ModifyMessageRequest(metadata: Optional[Metadata] = Optional.Absent) extends Product with Serializable

    ModifyMessageRequest model

    metadata

    Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
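
    The metadata limits above (at most 16 pairs, keys up to 64 characters, values up to 512) can be checked client-side before sending a request. This is an illustrative sketch; MetadataRules and isValid are hypothetical names, not part of the library.

```scala
// Hypothetical client-side validation of the documented metadata limits.
object MetadataRules {
  val MaxPairs       = 16
  val MaxKeyLength   = 64
  val MaxValueLength = 512

  // True if every key/value pair and the overall size fit the limits.
  def isValid(metadata: Map[String, String]): Boolean =
    metadata.size <= MaxPairs &&
      metadata.forall { case (key, value) =>
        key.length <= MaxKeyLength && value.length <= MaxValueLength
      }
}
```

    For example, `MetadataRules.isValid(Map("purpose" -> "demo"))` passes, while a value longer than 512 characters fails.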

  99. final case class ModifyRunRequest(metadata: Optional[Metadata] = Optional.Absent) extends Product with Serializable

    ModifyRunRequest model

    metadata

    Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

  100. final case class ModifyThreadRequest(metadata: Optional[Metadata] = Optional.Absent) extends Product with Serializable

    ModifyThreadRequest model

    metadata

    Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

  101. type N = model.N.Type

    n model

    The number of images to generate. Must be between 1 and 10. For dall-e-3, only n=1 is supported.
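
    The constraint above can be sketched as a small predicate. Assuming the model IDs "dall-e-2" and "dall-e-3"; isValidN is an illustrative helper, not a library function.

```scala
// Sketch of the documented constraint on n: between 1 and 10 in general,
// but only n = 1 for dall-e-3.
def isValidN(model: String, n: Int): Boolean =
  if (model == "dall-e-3") n == 1  // dall-e-3 only supports a single image
  else n >= 1 && n <= 10           // otherwise 1 to 10 images
```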

  102. type Name = model.Name.Type

    name model

    The name of the assistant. The maximum length is 256 characters.

  103. sealed trait OpenAIFailure extends AnyRef
  104. final case class OpenAIFile(id: String, bytes: Int, createdAt: Int, filename: String, object: Object, purpose: Purpose, status: Status, statusDetails: Optional[String] = Optional.Absent) extends Product with Serializable

    OpenAIFile model

    The File object represents a document that has been uploaded to OpenAI.

    id

    The file identifier, which can be referenced in the API endpoints.

    bytes

    The size of the file, in bytes.

    createdAt

    The Unix timestamp (in seconds) for when the file was created.

    filename

    The name of the file.

    object

    The object type, which is always file.

    purpose

    The intended purpose of the file. Supported values are fine-tune, fine-tune-results, assistants, and assistants_output.

    status

    Deprecated. The current status of the file, which can be either uploaded, processed, or error.

    statusDetails

    Deprecated. For details on why a fine-tuning training file failed validation, see the error field on fine_tuning.job.

  105. type PresencePenalty = model.PresencePenalty.Type

    presence_penalty model

    Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

    [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details)

  106. sealed trait ResponseFormat extends AnyRef

    response_format model

    The format in which the generated images are returned. Must be one of url or b64_json.

  107. final case class RunCompletionUsage(completionTokens: Int, promptTokens: Int, totalTokens: Int) extends Product with Serializable

    RunCompletionUsage model

    Usage statistics related to the run. This value will be null if the run is not in a terminal state (e.g. in_progress, queued).

    completionTokens

    Number of completion tokens used over the course of the run.

    promptTokens

    Number of prompt tokens used over the course of the run.

    totalTokens

    Total number of tokens used (prompt + completion).
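
    The fields above satisfy a simple invariant: totalTokens is the sum of promptTokens and completionTokens. A minimal stand-in mirroring the case class signature shown here illustrates it:

```scala
// Stand-in mirroring the RunCompletionUsage fields documented above.
final case class RunCompletionUsage(completionTokens: Int, promptTokens: Int, totalTokens: Int)

val usage = RunCompletionUsage(completionTokens = 120, promptTokens = 80, totalTokens = 200)

// Documented relationship: total = prompt + completion.
assert(usage.totalTokens == usage.promptTokens + usage.completionTokens)
```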

  108. final case class RunObject(id: String, object: Object, createdAt: Int, threadId: String, assistantId: String, status: Status, requiredAction: Optional[RequiredAction], lastError: Optional[LastError], expiresAt: Int, startedAt: Optional[Int], cancelledAt: Optional[Int], failedAt: Optional[Int], completedAt: Optional[Int], model: String, instructions: String, tools: Chunk[ToolsItem], fileIds: Chunk[String], metadata: Optional[Metadata], usage: RunCompletionUsage) extends Product with Serializable

    RunObject model

    Represents an execution run on a [thread](/docs/api-reference/threads).

    id

    The identifier, which can be referenced in API endpoints.

    object

    The object type, which is always thread.run.

    createdAt

    The Unix timestamp (in seconds) for when the run was created.

    threadId

    The ID of the [thread](/docs/api-reference/threads) that was executed on as a part of this run.

    assistantId

    The ID of the [assistant](/docs/api-reference/assistants) used for execution of this run.

    status

    The status of the run, which can be either queued, in_progress, requires_action, cancelling, cancelled, failed, completed, or expired.

    requiredAction

    Details on the action required to continue the run. Will be null if no action is required.

    lastError

    The last error associated with this run. Will be null if there are no errors.

    expiresAt

    The Unix timestamp (in seconds) for when the run will expire.

    startedAt

    The Unix timestamp (in seconds) for when the run was started.

    cancelledAt

    The Unix timestamp (in seconds) for when the run was cancelled.

    failedAt

    The Unix timestamp (in seconds) for when the run failed.

    completedAt

    The Unix timestamp (in seconds) for when the run was completed.

    model

    The model that the [assistant](/docs/api-reference/assistants) used for this run.

    instructions

    The instructions that the [assistant](/docs/api-reference/assistants) used for this run.

    tools

    The list of tools that the [assistant](/docs/api-reference/assistants) used for this run.

    fileIds

    The list of [File](/docs/api-reference/files) IDs the [assistant](/docs/api-reference/assistants) used for this run.

    metadata

    Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
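
    The run statuses listed above form a small state space, four of which are terminal. This is a hypothetical sketch; the library's own Status type may be modeled differently.

```scala
// Illustrative ADT for the documented run statuses: queued, in_progress,
// requires_action, cancelling, cancelled, failed, completed, expired.
sealed trait RunStatus
object RunStatus {
  case object Queued         extends RunStatus
  case object InProgress     extends RunStatus
  case object RequiresAction extends RunStatus
  case object Cancelling     extends RunStatus
  case object Cancelled      extends RunStatus
  case object Failed         extends RunStatus
  case object Completed      extends RunStatus
  case object Expired        extends RunStatus

  // Terminal states: the run will not change status again, and usage
  // statistics are only populated once one of these is reached.
  def isTerminal(status: RunStatus): Boolean = status match {
    case Cancelled | Failed | Completed | Expired => true
    case _                                        => false
  }
}
```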

  109. final case class RunStepCompletionUsage(completionTokens: Int, promptTokens: Int, totalTokens: Int) extends Product with Serializable

    RunStepCompletionUsage model

    Usage statistics related to the run step. This value will be null while the run step's status is in_progress.

    completionTokens

    Number of completion tokens used over the course of the run step.

    promptTokens

    Number of prompt tokens used over the course of the run step.

    totalTokens

    Total number of tokens used (prompt + completion).

  110. final case class RunStepDetailsMessageCreationObject(type: Type, messageCreation: MessageCreation) extends Product with Serializable

    RunStepDetailsMessageCreationObject model

    Details of the message creation by the run step.

    type

    Always message_creation.

  111. final case class RunStepDetailsToolCallsCodeObject(id: String, type: Type, codeInterpreter: CodeInterpreter) extends Product with Serializable

    RunStepDetailsToolCallsCodeObject model

    Details of the Code Interpreter tool call the run step was involved in.

    id

    The ID of the tool call.

    type

    The type of tool call. Always code_interpreter for this type of tool call.

    codeInterpreter

    The Code Interpreter tool call definition.

  112. final case class RunStepDetailsToolCallsCodeOutputImageObject(type: Type, image: RunStepDetailsToolCallsCodeOutputImageObject.Image) extends Product with Serializable

    RunStepDetailsToolCallsCodeOutputImageObject model

    type

    Always image.

  113. final case class RunStepDetailsToolCallsCodeOutputLogsObject(type: Type, logs: String) extends Product with Serializable

    RunStepDetailsToolCallsCodeOutputLogsObject model

    Text output from the Code Interpreter tool call as part of a run step.

    type

    Always logs.

    logs

    The text output from the Code Interpreter tool call.

  114. final case class RunStepDetailsToolCallsFunctionObject(id: String, type: Type, function: Function) extends Product with Serializable

    RunStepDetailsToolCallsFunctionObject model

    id

    The ID of the tool call object.

    type

    The type of tool call. Always function for this type of tool call.

    function

    The definition of the function that was called.

  115. final case class RunStepDetailsToolCallsObject(type: Type, toolCalls: Chunk[ToolCallsItem]) extends Product with Serializable

    RunStepDetailsToolCallsObject model

    Details of the tool call.

    type

    Always tool_calls.

    toolCalls

    An array of tool calls the run step was involved in. These can be associated with one of three types of tools: code_interpreter, retrieval, or function.

  116. final case class RunStepDetailsToolCallsRetrievalObject(id: String, type: Type, retrieval: Retrieval) extends Product with Serializable

    RunStepDetailsToolCallsRetrievalObject model

    id

    The ID of the tool call object.

    type

    The type of tool call. Always retrieval for this type of tool call.

    retrieval

    For now, this is always going to be an empty object.

  117. final case class RunStepObject(id: String, object: Object, createdAt: Int, assistantId: String, threadId: String, runId: String, type: Type, status: Status, stepDetails: StepDetails, lastError: Optional[LastError], expiredAt: Optional[Int], cancelledAt: Optional[Int], failedAt: Optional[Int], completedAt: Optional[Int], metadata: Optional[Metadata], usage: RunStepCompletionUsage) extends Product with Serializable

    RunStepObject model

    Represents a step in execution of a run.

    id

    The identifier of the run step, which can be referenced in API endpoints.

    object

    The object type, which is always thread.run.step.

    createdAt

    The Unix timestamp (in seconds) for when the run step was created.

    assistantId

    The ID of the [assistant](/docs/api-reference/assistants) associated with the run step.

    threadId

    The ID of the [thread](/docs/api-reference/threads) that was run.

    runId

    The ID of the [run](/docs/api-reference/runs) that this run step is a part of.

    type

    The type of run step, which can be either message_creation or tool_calls.

    status

    The status of the run step, which can be either in_progress, cancelled, failed, completed, or expired.

    stepDetails

    The details of the run step.

    lastError

    The last error associated with this run step. Will be null if there are no errors.

    expiredAt

    The Unix timestamp (in seconds) for when the run step expired. A step is considered expired if the parent run is expired.

    cancelledAt

    The Unix timestamp (in seconds) for when the run step was cancelled.

    failedAt

    The Unix timestamp (in seconds) for when the run step failed.

    completedAt

    The Unix timestamp (in seconds) for when the run step completed.

    metadata

    Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

  118. final case class RunToolCallObject(id: String, type: Type, function: Function) extends Product with Serializable

    RunToolCallObject model

    Tool call objects

    id

    The ID of the tool call. This ID must be referenced when you submit the tool outputs using the [Submit tool outputs to run](/docs/api-reference/runs/submitToolOutputs) endpoint.

    type

    The type of tool call the output is required for. For now, this is always function.

    function

    The function definition.

  119. sealed trait Size extends AnyRef

    size model

    The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.

  120. type StartIndex = model.StartIndex.Type

    start_index model

  121. final case class SubmitToolOutputsRunRequest(toolOutputs: Chunk[ToolOutputsItem]) extends Product with Serializable

    SubmitToolOutputsRunRequest model

    toolOutputs

    A list of tools for which the outputs are being submitted.

  122. type Temperature = model.Temperature.Type

    temperature model

    What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

    We generally recommend altering this or top_p but not both.

  123. final case class ThreadObject(id: String, object: Object, createdAt: Int, metadata: Optional[Metadata]) extends Product with Serializable

    ThreadObject model

    Represents a thread that contains [messages](/docs/api-reference/messages).

    id

    The identifier, which can be referenced in API endpoints.

    object

    The object type, which is always thread.

    createdAt

    The Unix timestamp (in seconds) for when the thread was created.

    metadata

    Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

  124. sealed trait ThreadsListMessageFilesOrder extends AnyRef

    threads_listMessageFiles_order model

  125. sealed trait ThreadsListMessagesOrder extends AnyRef

    threads_listMessages_order model

  126. sealed trait ThreadsListRunStepsOrder extends AnyRef

    threads_listRunSteps_order model

  127. sealed trait ThreadsListRunsOrder extends AnyRef

    threads_listRuns_order model

  128. type TopP = model.TopP.Type

    top_p model

    An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

    We generally recommend altering this or temperature but not both.
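
    The idea behind top_p can be sketched over a toy distribution: keep the smallest set of tokens whose cumulative probability reaches p. This is an illustration of nucleus sampling, not the library's API.

```scala
// Illustrative nucleus (top-p) filter: sort tokens by probability and keep
// the smallest prefix whose cumulative mass reaches p.
def topP(probs: Map[String, Double], p: Double): Set[String] = {
  val sorted     = probs.toList.sortBy(-_._2)
  val cumulative = sorted.scanLeft(0.0)((acc, kv) => acc + kv._2).tail
  val cutoff     = cumulative.indexWhere(_ >= p)
  // Guard against floating-point shortfall when p is close to 1.0.
  val keep = if (cutoff < 0) sorted.size else cutoff + 1
  sorted.take(keep).map(_._1).toSet
}
```

    With top_p = 0.1 over probabilities a:0.6, b:0.3, c:0.1, only "a" survives, matching the "top 10% probability mass" description above.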

Value Members

  1. object AssistantFileObject extends Serializable
  2. object AssistantObject extends Serializable
  3. object AssistantToolsCode extends Serializable
  4. object AssistantToolsFunction extends Serializable
  5. object AssistantToolsRetrieval extends Serializable
  6. object AssistantsListAssistantFilesOrder
  7. object AssistantsListAssistantsOrder
  8. object CaseType1 extends Subtype[Int]
  9. object ChatCompletionFunctionCallOption extends Serializable
  10. object ChatCompletionFunctions extends Serializable
  11. object ChatCompletionMessageToolCall extends Serializable
  12. object ChatCompletionMessageToolCallChunk extends Serializable
  13. object ChatCompletionNamedToolChoice extends Serializable
  14. object ChatCompletionRequestAssistantMessage extends Serializable
  15. object ChatCompletionRequestFunctionMessage extends Serializable
  16. object ChatCompletionRequestMessage
  17. object ChatCompletionRequestMessageContentPart
  18. object ChatCompletionRequestMessageContentPartImage extends Serializable
  19. object ChatCompletionRequestMessageContentPartText extends Serializable
  20. object ChatCompletionRequestSystemMessage extends Serializable
  21. object ChatCompletionRequestToolMessage extends Serializable
  22. object ChatCompletionRequestUserMessage extends Serializable
  23. object ChatCompletionResponseMessage extends Serializable
  24. object ChatCompletionRole
  25. object ChatCompletionStreamResponseDelta extends Serializable
  26. object ChatCompletionTokenLogprob extends Serializable
  27. object ChatCompletionTool extends Serializable
  28. object ChatCompletionToolChoiceOption
  29. object Code
  30. object CompletionUsage extends Serializable
  31. object CreateAssistantFileRequest extends Serializable
  32. object CreateAssistantRequest extends Serializable
  33. object CreateChatCompletionFunctionResponse extends Serializable
  34. object CreateChatCompletionImageResponse extends Serializable
  35. object CreateChatCompletionRequest extends Serializable
  36. object CreateChatCompletionResponse extends Serializable
  37. object CreateChatCompletionStreamResponse extends Serializable
  38. object CreateCompletionRequest extends Serializable
  39. object CreateCompletionResponse extends Serializable
  40. object CreateEmbeddingRequest extends Serializable
  41. object CreateEmbeddingResponse extends Serializable
  42. object CreateFileRequest extends Serializable
  43. object CreateFineTuningJobRequest extends Serializable
  44. object CreateImageEditRequest extends Serializable
  45. object CreateImageRequest extends Serializable
  46. object CreateImageVariationRequest extends Serializable
  47. object CreateMessageRequest extends Serializable
  48. object CreateModerationRequest extends Serializable
  49. object CreateModerationResponse extends Serializable
  50. object CreateRunRequest extends Serializable
  51. object CreateSpeechRequest extends Serializable
  52. object CreateThreadAndRunRequest extends Serializable
  53. object CreateThreadRequest extends Serializable
  54. object CreateTranscriptionRequest extends Serializable
  55. object CreateTranscriptionResponse extends Serializable
  56. object CreateTranslationRequest extends Serializable
  57. object CreateTranslationResponse extends Serializable
  58. object DeleteAssistantFileResponse extends Serializable
  59. object DeleteAssistantResponse extends Serializable
  60. object DeleteFileResponse extends Serializable
  61. object DeleteMessageResponse extends Serializable
  62. object DeleteModelResponse extends Serializable
  63. object DeleteThreadResponse extends Serializable
  64. object Description extends Subtype[String]
  65. object Embedding extends Serializable
  66. object EndIndex extends Subtype[Int]
  67. object Error extends Serializable
  68. object ErrorResponse extends Serializable
  69. object File extends Serializable
  70. object FineTuningJob extends Serializable
  71. object FineTuningJobEvent extends Serializable
  72. object FinishReason
  73. object FrequencyPenalty extends Subtype[Double]
  74. object FunctionObject extends Serializable
  75. object FunctionParameters extends Serializable
  76. object Image extends Serializable
  77. object ImagesResponse extends Serializable
  78. object Instructions extends Subtype[String]
  79. object ListAssistantFilesResponse extends Serializable
  80. object ListAssistantsResponse extends Serializable
  81. object ListFilesResponse extends Serializable
  82. object ListFineTuningJobEventsResponse extends Serializable
  83. object ListMessageFilesResponse extends Serializable
  84. object ListMessagesResponse extends Serializable
  85. object ListModelsResponse extends Serializable
  86. object ListPaginatedFineTuningJobsResponse extends Serializable
  87. object ListRunStepsResponse extends Serializable
  88. object ListRunsResponse extends Serializable
  89. object ListThreadsResponse extends Serializable
  90. object MessageContentImageFileObject extends Serializable
  91. object MessageContentTextAnnotationsFileCitationObject extends Serializable
  92. object MessageContentTextAnnotationsFilePathObject extends Serializable
  93. object MessageContentTextObject extends Serializable
  94. object MessageFileObject extends Serializable
  95. object MessageObject extends Serializable
  96. object Model extends Serializable
  97. object ModifyAssistantRequest extends Serializable
  98. object ModifyMessageRequest extends Serializable
  99. object ModifyRunRequest extends Serializable
  100. object ModifyThreadRequest extends Serializable
  101. object N extends Subtype[Int]
  102. object Name extends Subtype[String]
  103. object OpenAIFailure
  104. object OpenAIFile extends Serializable
  105. object PresencePenalty extends Subtype[Double]
  106. object ResponseFormat
  107. object RunCompletionUsage extends Serializable
  108. object RunObject extends Serializable
  109. object RunStepCompletionUsage extends Serializable
  110. object RunStepDetailsMessageCreationObject extends Serializable
  111. object RunStepDetailsToolCallsCodeObject extends Serializable
  112. object RunStepDetailsToolCallsCodeOutputImageObject extends Serializable
  113. object RunStepDetailsToolCallsCodeOutputLogsObject extends Serializable
  114. object RunStepDetailsToolCallsFunctionObject extends Serializable
  115. object RunStepDetailsToolCallsObject extends Serializable
  116. object RunStepDetailsToolCallsRetrievalObject extends Serializable
  117. object RunStepObject extends Serializable
  118. object RunToolCallObject extends Serializable
  119. object Size
  120. object StartIndex extends Subtype[Int]
  121. object SubmitToolOutputsRunRequest extends Serializable
  122. object Temperature extends Subtype[Double]
  123. object ThreadObject extends Serializable
  124. object ThreadsListMessageFilesOrder
  125. object ThreadsListMessagesOrder
  126. object ThreadsListRunStepsOrder
  127. object ThreadsListRunsOrder
  128. object TopP extends Subtype[Double]

Inherited from AnyRef

Inherited from Any

Ungrouped