package model
Type Members
- final case class AssistantFileObject(id: String, object: Object, createdAt: Int, assistantId: String) extends Product with Serializable
AssistantFileObject model
A list of [Files](/docs/api-reference/files) attached to an assistant.
- id
The identifier, which can be referenced in API endpoints.
- object
The object type, which is always `assistant.file`.
- createdAt
The Unix timestamp (in seconds) for when the assistant file was created.
- assistantId
The assistant ID that the file is attached to.
- final case class AssistantObject(id: String, object: Object, createdAt: Int, name: Optional[Name], description: Optional[Description], model: String, instructions: Optional[Instructions], tools: Chunk[ToolsItem], fileIds: Chunk[String], metadata: Optional[Metadata]) extends Product with Serializable
AssistantObject model
Represents an `assistant` that can call the model and use tools.
- id
The identifier, which can be referenced in API endpoints.
- object
The object type, which is always `assistant`.
- createdAt
The Unix timestamp (in seconds) for when the assistant was created.
- name
The name of the assistant. The maximum length is 256 characters.
- description
The description of the assistant. The maximum length is 512 characters.
- model
ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.
- instructions
The system instructions that the assistant uses. The maximum length is 32768 characters.
- tools
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `retrieval`, or `function`.
- fileIds
A list of [file](/docs/api-reference/files) IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order.
- metadata
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
- final case class AssistantToolsCode(type: Type) extends Product with Serializable
AssistantToolsCode model
- type
The type of tool being defined: `code_interpreter`
- final case class AssistantToolsFunction(type: Type, function: FunctionObject) extends Product with Serializable
AssistantToolsFunction model
- type
The type of tool being defined: `function`
- final case class AssistantToolsRetrieval(type: Type) extends Product with Serializable
AssistantToolsRetrieval model
- type
The type of tool being defined: `retrieval`
- sealed trait AssistantsListAssistantFilesOrder extends AnyRef
assistants_listAssistantFiles_order model
- sealed trait AssistantsListAssistantsOrder extends AnyRef
assistants_listAssistants_order model
- type CaseType1 = model.CaseType1.Type
CaseType1 model
- final case class ChatCompletionFunctionCallOption(name: String) extends Product with Serializable
ChatCompletionFunctionCallOption model
Specifying a particular function via `{"name": "my_function"}` forces the model to call that function.
- name
The name of the function to call.
- final case class ChatCompletionFunctions(description: Optional[String] = Optional.Absent, name: String, parameters: Optional[FunctionParameters] = Optional.Absent) extends Product with Serializable
ChatCompletionFunctions model
- description
A description of what the function does, used by the model to choose when and how to call the function.
- name
The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
- final case class ChatCompletionMessageToolCall(id: String, type: Type, function: Function) extends Product with Serializable
ChatCompletionMessageToolCall model
- id
The ID of the tool call.
- type
The type of the tool. Currently, only `function` is supported.
- function
The function that the model called.
- final case class ChatCompletionMessageToolCallChunk(index: Int, id: Optional[String] = Optional.Absent, type: Optional[Type] = Optional.Absent, function: Optional[Function] = Optional.Absent) extends Product with Serializable
ChatCompletionMessageToolCallChunk model
- id
The ID of the tool call.
- type
The type of the tool. Currently, only `function` is supported.
- final case class ChatCompletionNamedToolChoice(type: Type, function: Function) extends Product with Serializable
ChatCompletionNamedToolChoice model
Specifies a tool the model should use. Use to force the model to call a specific function.
- type
The type of the tool. Currently, only `function` is supported.
- final case class ChatCompletionRequestAssistantMessage(content: Optional[String] = Optional.Absent, role: Role, name: Optional[String] = Optional.Absent, toolCalls: Optional[Chunk[ChatCompletionMessageToolCall]] = Optional.Absent, functionCall: Optional[FunctionCall] = Optional.Absent) extends Product with Serializable
ChatCompletionRequestAssistantMessage model
- content
The contents of the assistant message. Required unless `tool_calls` or `function_call` is specified.
- role
The role of the message's author, in this case `assistant`.
- name
An optional name for the participant. Provides the model information to differentiate between participants of the same role.
- functionCall
Deprecated and replaced by `tool_calls`. The name and arguments of a function that should be called, as generated by the model.
- final case class ChatCompletionRequestFunctionMessage(role: Role, content: Optional[String], name: String) extends Product with Serializable
ChatCompletionRequestFunctionMessage model
- role
The role of the message's author, in this case `function`.
- content
The contents of the function message.
- name
The name of the function to call.
- sealed trait ChatCompletionRequestMessage extends AnyRef
ChatCompletionRequestMessage model
- sealed trait ChatCompletionRequestMessageContentPart extends AnyRef
ChatCompletionRequestMessageContentPart model
- final case class ChatCompletionRequestMessageContentPartImage(type: Type, imageUrl: ImageUrl) extends Product with Serializable
ChatCompletionRequestMessageContentPartImage model
- type
The type of the content part.
- final case class ChatCompletionRequestMessageContentPartText(type: Type, text: String) extends Product with Serializable
ChatCompletionRequestMessageContentPartText model
- type
The type of the content part.
- text
The text content.
- final case class ChatCompletionRequestSystemMessage(content: String, role: Role, name: Optional[String] = Optional.Absent) extends Product with Serializable
ChatCompletionRequestSystemMessage model
- content
The contents of the system message.
- role
The role of the message's author, in this case `system`.
- name
An optional name for the participant. Provides the model information to differentiate between participants of the same role.
- final case class ChatCompletionRequestToolMessage(role: Role, content: String, toolCallId: String) extends Product with Serializable
ChatCompletionRequestToolMessage model
- role
The role of the message's author, in this case `tool`.
- content
The contents of the tool message.
- toolCallId
Tool call that this message is responding to.
- final case class ChatCompletionRequestUserMessage(content: Content, role: Role, name: Optional[String] = Optional.Absent) extends Product with Serializable
ChatCompletionRequestUserMessage model
- content
The contents of the user message.
- role
The role of the message's author, in this case `user`.
- name
An optional name for the participant. Provides the model information to differentiate between participants of the same role.
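Taken together, the request message variants above can be sketched as follows. This is a hypothetical illustration only: the concrete `Role` enum cases and the `Content` constructor are assumptions about the generated code, not confirmed by this page.

```scala
// Hypothetical sketch - Role values and the Content wrapper are assumed names.
val systemMsg = ChatCompletionRequestSystemMessage(
  content = "You are a terse assistant.", // required plain String
  role = Role.System                      // assumed enum case
)
val userMsg = ChatCompletionRequestUserMessage(
  content = Content.String("Hello!"),     // assumed Content constructor
  role = Role.User                        // assumed enum case
)
// `name` defaults to Optional.Absent in both, per the signatures above.
```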
- final case class ChatCompletionResponseMessage(content: Optional[String], toolCalls: Optional[Chunk[ChatCompletionMessageToolCall]] = Optional.Absent, role: Role, functionCall: Optional[FunctionCall] = Optional.Absent) extends Product with Serializable
ChatCompletionResponseMessage model
A chat completion message generated by the model.
- content
The contents of the message.
- role
The role of the author of this message.
- functionCall
Deprecated and replaced by `tool_calls`. The name and arguments of a function that should be called, as generated by the model.
- sealed trait ChatCompletionRole extends AnyRef
ChatCompletionRole model
The role of the author of a message
- final case class ChatCompletionStreamResponseDelta(content: Optional[String] = Optional.Absent, functionCall: Optional[FunctionCall] = Optional.Absent, toolCalls: Optional[Chunk[ChatCompletionMessageToolCallChunk]] = Optional.Absent, role: Optional[Role] = Optional.Absent) extends Product with Serializable
ChatCompletionStreamResponseDelta model
A chat completion delta generated by streamed model responses.
- content
The contents of the chunk message.
- functionCall
Deprecated and replaced by `tool_calls`. The name and arguments of a function that should be called, as generated by the model.
- role
The role of the author of this message.
- final case class ChatCompletionTokenLogprob(token: String, logprob: Double, bytes: Optional[Chunk[Int]], topLogprobs: Chunk[TopLogprobsItem]) extends Product with Serializable
ChatCompletionTokenLogprob model
- token
The token.
- logprob
The log probability of this token.
- bytes
A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be `null` if there is no bytes representation for the token.
- topLogprobs
List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested `top_logprobs` returned.
- final case class ChatCompletionTool(type: Type, function: FunctionObject) extends Product with Serializable
ChatCompletionTool model
- type
The type of the tool. Currently, only `function` is supported.
- sealed trait ChatCompletionToolChoiceOption extends AnyRef
ChatCompletionToolChoiceOption model
Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function. `none` is the default when no functions are present. `auto` is the default if functions are present.
- sealed trait Code extends AnyRef
code model
One of `server_error` or `rate_limit_exceeded`.
- final case class CompletionUsage(completionTokens: Int, promptTokens: Int, totalTokens: Int) extends Product with Serializable
CompletionUsage model
Usage statistics for the completion request.
- completionTokens
Number of tokens in the generated completion.
- promptTokens
Number of tokens in the prompt.
- totalTokens
Total number of tokens used in the request (prompt + completion).
- final case class CreateAssistantFileRequest(fileId: String) extends Product with Serializable
CreateAssistantFileRequest model
- fileId
A [File](/docs/api-reference/files) ID (with `purpose="assistants"`) that the assistant should use. Useful for tools like `retrieval` and `code_interpreter` that can access files.
- final case class CreateAssistantRequest(model: CreateAssistantRequest.Model, name: Optional[Name] = Optional.Absent, description: Optional[Description] = Optional.Absent, instructions: Optional[Instructions] = Optional.Absent, tools: Optional[Chunk[ToolsItem]] = Optional.Absent, fileIds: Optional[Chunk[String]] = Optional.Absent, metadata: Optional[Metadata] = Optional.Absent) extends Product with Serializable
CreateAssistantRequest model
- model
ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.
- name
The name of the assistant. The maximum length is 256 characters.
- description
The description of the assistant. The maximum length is 512 characters.
- instructions
The system instructions that the assistant uses. The maximum length is 32768 characters.
- tools
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `retrieval`, or `function`.
- fileIds
A list of [file](/docs/api-reference/files) IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order.
- metadata
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
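A minimal construction sketch for the request above, assuming the generated `Model` companion accepts a raw model ID (the `Custom` constructor below is a hypothetical name); every other field defaults to `Optional.Absent` per the signature.

```scala
// Hypothetical sketch - only `model` is required.
val createAssistant = CreateAssistantRequest(
  model = CreateAssistantRequest.Model.Custom("gpt-4-turbo-preview") // assumed constructor
)
```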
- final case class CreateChatCompletionFunctionResponse(id: String, choices: Chunk[ChoicesItem], created: Int, model: String, systemFingerprint: Optional[String] = Optional.Absent, object: Object, usage: Optional[CompletionUsage] = Optional.Absent) extends Product with Serializable
CreateChatCompletionFunctionResponse model
Represents a chat completion response returned by the model, based on the provided input.
- id
A unique identifier for the chat completion.
- choices
A list of chat completion choices. Can be more than one if `n` is greater than 1.
- created
The Unix timestamp (in seconds) of when the chat completion was created.
- model
The model used for the chat completion.
- systemFingerprint
This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.
- object
The object type, which is always `chat.completion`.
- final case class CreateChatCompletionImageResponse(values: Map[String, Json]) extends DynamicObject[CreateChatCompletionImageResponse] with Product with Serializable
CreateChatCompletionImageResponse model
Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.
- values
The dynamic list of key-value pairs of the object
- final case class CreateChatCompletionRequest(messages: NonEmptyChunk[ChatCompletionRequestMessage], model: CreateChatCompletionRequest.Model, frequencyPenalty: Optional[FrequencyPenalty] = Optional.Absent, logitBias: Optional[LogitBias] = Optional.Absent, logprobs: Optional[Boolean] = Optional.Absent, topLogprobs: Optional[TopLogprobs] = Optional.Absent, maxTokens: Optional[Int] = Optional.Absent, n: Optional[CreateChatCompletionRequest.N] = Optional.Absent, presencePenalty: Optional[PresencePenalty] = Optional.Absent, responseFormat: Optional[CreateChatCompletionRequest.ResponseFormat] = Optional.Absent, seed: Optional[Seed] = Optional.Absent, stop: Optional[Stop] = Optional.Absent, stream: Optional[Boolean] = Optional.Absent, temperature: Optional[Temperature] = Optional.Absent, topP: Optional[TopP] = Optional.Absent, tools: Optional[Chunk[ChatCompletionTool]] = Optional.Absent, toolChoice: Optional[ChatCompletionToolChoiceOption] = Optional.Absent, user: Optional[String] = Optional.Absent, functionCall: Optional[FunctionCall] = Optional.Absent, functions: Optional[NonEmptyChunk[ChatCompletionFunctions]] = Optional.Absent) extends Product with Serializable
CreateChatCompletionRequest model
- messages
A list of messages comprising the conversation so far. [Example Python code](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models).
- model
ID of the model to use. See the [model endpoint compatibility](/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API.
- frequencyPenalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details)
- logitBias
Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
- logprobs
Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`. This option is currently not available on the `gpt-4-vision-preview` model.
- topLogprobs
An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to `true` if this parameter is used.
- maxTokens
The maximum number of [tokens](/tokenizer) that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
- n
How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs.
- presencePenalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details)
- responseFormat
An object specifying the format that the model must output. Compatible with [GPT-4 Turbo](/docs/models/gpt-4-and-gpt-4-turbo) and all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`. Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON. **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
- seed
This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result. Determinism is not guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend.
- stop
Up to 4 sequences where the API will stop generating further tokens.
- stream
If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
- temperature
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both.
- topP
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both.
- tools
A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for.
- user
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
- functionCall
Deprecated in favor of `tool_choice`. Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"name": "my_function"}` forces the model to call that function. `none` is the default when no functions are present. `auto` is the default if functions are present.
- functions
Deprecated in favor of `tools`. A list of functions the model may generate JSON inputs for.
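Since every parameter except `messages` and `model` defaults to `Optional.Absent`, a minimal request can be sketched as below. The `Model.Custom`, `Role`, and `Content` names are assumptions about the generated API, labeled as such.

```scala
import zio.NonEmptyChunk

// Hypothetical sketch of a minimal chat completion request.
val chatRequest = CreateChatCompletionRequest(
  messages = NonEmptyChunk(
    ChatCompletionRequestUserMessage(
      content = Content.String("Say hello"), // assumed Content constructor
      role = Role.User                       // assumed enum case
    )
  ),
  model = CreateChatCompletionRequest.Model.Custom("gpt-3.5-turbo") // assumed constructor
)
// All optional knobs (temperature, topP, tools, ...) stay Optional.Absent.
```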
- final case class CreateChatCompletionResponse(id: String, choices: Chunk[ChoicesItem], created: Int, model: String, systemFingerprint: Optional[String] = Optional.Absent, object: Object, usage: Optional[CompletionUsage] = Optional.Absent) extends Product with Serializable
CreateChatCompletionResponse model
Represents a chat completion response returned by the model, based on the provided input.
- id
A unique identifier for the chat completion.
- choices
A list of chat completion choices. Can be more than one if `n` is greater than 1.
- created
The Unix timestamp (in seconds) of when the chat completion was created.
- model
The model used for the chat completion.
- systemFingerprint
This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.
- object
The object type, which is always `chat.completion`.
- final case class CreateChatCompletionStreamResponse(id: String, choices: Chunk[ChoicesItem], created: Int, model: String, systemFingerprint: Optional[String] = Optional.Absent, object: Object) extends Product with Serializable
CreateChatCompletionStreamResponse model
Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.
- id
A unique identifier for the chat completion. Each chunk has the same ID.
- choices
A list of chat completion choices. Can be more than one if `n` is greater than 1.
- created
The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.
- model
The model used to generate the completion.
- systemFingerprint
This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.
- object
The object type, which is always `chat.completion.chunk`.
- final case class CreateCompletionRequest(model: CreateCompletionRequest.Model, prompt: Optional[Prompt], bestOf: Optional[BestOf] = Optional.Absent, echo: Optional[Boolean] = Optional.Absent, frequencyPenalty: Optional[FrequencyPenalty] = Optional.Absent, logitBias: Optional[LogitBias] = Optional.Absent, logprobs: Optional[Logprobs] = Optional.Absent, maxTokens: Optional[MaxTokens] = Optional.Absent, n: Optional[CreateCompletionRequest.N] = Optional.Absent, presencePenalty: Optional[PresencePenalty] = Optional.Absent, seed: Optional[Seed] = Optional.Absent, stop: Optional[Stop] = Optional.Absent, stream: Optional[Boolean] = Optional.Absent, suffix: Optional[String] = Optional.Absent, temperature: Optional[Temperature] = Optional.Absent, topP: Optional[TopP] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable
CreateCompletionRequest model
- model
ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.
- prompt
The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays. Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document.
- bestOf
Generates `best_of` completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed. When used with `n`, `best_of` controls the number of candidate completions and `n` specifies how many to return – `best_of` must be greater than `n`. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`.
- echo
Echo back the prompt in addition to the completion
- frequencyPenalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details)
- logitBias
Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view=bpe) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token from being generated.
- logprobs
Include the log probabilities on the `logprobs` most likely output tokens, as well as the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5.
- maxTokens
The maximum number of [tokens](/tokenizer) that can be generated in the completion. The token count of your prompt plus `max_tokens` cannot exceed the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
- n
How many completions to generate for each prompt. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`.
- presencePenalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details)
- seed
If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result. Determinism is not guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend.
- stop
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
- stream
Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
- suffix
The suffix that comes after a completion of inserted text.
- temperature
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both.
- topP
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both.
- user
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
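A minimal completion request sketch for the entry above; the `Model.Custom`, `Prompt.String`, and `Optional.Present` names are assumed from the signature conventions on this page, not confirmed constructors.

```scala
// Hypothetical sketch - `model` and `prompt` are the only non-defaulted fields.
val completionRequest = CreateCompletionRequest(
  model = CreateCompletionRequest.Model.Custom("gpt-3.5-turbo-instruct"), // assumed
  prompt = Optional.Present(Prompt.String("Write a haiku about Scala"))   // assumed wrappers
)
```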
- final case class CreateCompletionResponse(id: String, choices: Chunk[ChoicesItem], created: Int, model: String, systemFingerprint: Optional[String] = Optional.Absent, object: Object, usage: Optional[CompletionUsage] = Optional.Absent) extends Product with Serializable
CreateCompletionResponse model
Represents a completion response from the API. Note: both the streamed and non-streamed response objects share the same shape (unlike the chat endpoint).
- id
A unique identifier for the completion.
- choices
The list of completion choices the model generated for the input prompt.
- created
The Unix timestamp (in seconds) of when the completion was created.
- model
The model used for completion.
- systemFingerprint
This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.
- object
The object type, which is always "text_completion"
- final case class CreateEmbeddingRequest(input: Input, model: CreateEmbeddingRequest.Model, encodingFormat: Optional[EncodingFormat] = Optional.Absent, dimensions: Optional[Dimensions] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable
CreateEmbeddingRequest model
- input
Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for `text-embedding-ada-002`), cannot be an empty string, and any array must be 2048 dimensions or less. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
- model
ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.
- encodingFormat
The format to return the embeddings in. Can be either `float` or [base64](https://pypi.org/project/pybase64/).
- dimensions
The number of dimensions the resulting output embeddings should have. Only supported in `text-embedding-3` and later models.
- user
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
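Every optional field in these request models defaults to `Optional.Absent`, so call sites supply only the required fields. A minimal sketch of this pattern, using a hypothetical simplified `Optional` ADT and request shape rather than the library's actual types:

```scala
// Hypothetical stand-in for the library's Optional type: a value is either
// Present or Absent, and Absent is the default for every optional field.
sealed trait Optional[+A]
object Optional {
  final case class Present[A](value: A) extends Optional[A]
  case object Absent extends Optional[Nothing]
}

// Simplified shape of CreateEmbeddingRequest: two required fields,
// the rest defaulted to Absent.
final case class EmbeddingRequestSketch(
  input: String,
  model: String,
  dimensions: Optional[Int] = Optional.Absent,
  user: Optional[String] = Optional.Absent
)

// Only the required fields appear at the call site.
val req = EmbeddingRequestSketch(input = "hello world", model = "text-embedding-3-small")
// req.dimensions and req.user are both Optional.Absent
```

A distinct `Absent` case (rather than `Option`/`None`) lets the encoder omit the field from the JSON payload entirely instead of sending an explicit `null`.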
- final case class CreateEmbeddingResponse(data: Chunk[Embedding], model: String, object: Object, usage: Usage) extends Product with Serializable
CreateEmbeddingResponse model
CreateEmbeddingResponse model
- data
The list of embeddings generated by the model.
- model
The name of the model used to generate the embedding.
- object
The object type, which is always "list".
- usage
The usage information for the request.
- final case class CreateFileRequest(file: File, purpose: Purpose) extends Product with Serializable
CreateFileRequest model
CreateFileRequest model
- file
The File object (not file name) to be uploaded.
- purpose
The intended purpose of the uploaded file. Use "fine-tune" for [Fine-tuning](/docs/api-reference/fine-tuning) and "assistants" for [Assistants](/docs/api-reference/assistants) and [Messages](/docs/api-reference/messages). This allows us to validate that the format of the uploaded file is correct for fine-tuning.
- final case class CreateFineTuningJobRequest(model: CreateFineTuningJobRequest.Model, trainingFile: String, hyperparameters: Optional[Hyperparameters] = Optional.Absent, suffix: Optional[Suffix] = Optional.Absent, validationFile: Optional[String] = Optional.Absent) extends Product with Serializable
CreateFineTuningJobRequest model
CreateFineTuningJobRequest model
- model
The name of the model to fine-tune. You can select one of the [supported models](/docs/guides/fine-tuning/what-models-can-be-fine-tuned).
- trainingFile
The ID of an uploaded file that contains training data. See [upload file](/docs/api-reference/files/upload) for how to upload a file. Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose `fine-tune`. See the [fine-tuning guide](/docs/guides/fine-tuning) for more details.
- hyperparameters
The hyperparameters used for the fine-tuning job.
- suffix
A string of up to 18 characters that will be added to your fine-tuned model name. For example, a `suffix` of "custom-model-name" would produce a model name like `ft:gpt-3.5-turbo:openai:custom-model-name:7p4lURel`.
- validationFile
The ID of an uploaded file that contains validation data. If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the fine-tuning results file. The same data should not be present in both train and validation files. Your dataset must be formatted as a JSONL file. You must upload your file with the purpose `fine-tune`. See the [fine-tuning guide](/docs/guides/fine-tuning) for more details.
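Both `trainingFile` and `validationFile` must point to JSONL files, one JSON object per line. A hedged sketch of assembling such content; the `messages` field layout follows OpenAI's fine-tuning guide for chat models and is an assumption here, not something this reference specifies:

```scala
// Build JSONL training data: one JSON object per line. The chat-message
// field names ("messages", "role", "content") are assumed from the
// fine-tuning guide, not defined by this API reference.
val examples = List(
  ("What is 2+2?", "4"),
  ("Capital of France?", "Paris")
)

// Minimal JSON string escaping for quotes and backslashes.
def escape(s: String): String = s.flatMap {
  case '"'  => "\\\""
  case '\\' => "\\\\"
  case c    => c.toString
}

val jsonl: String = examples.map { case (question, answer) =>
  s"""{"messages":[{"role":"user","content":"${escape(question)}"},{"role":"assistant","content":"${escape(answer)}"}]}"""
}.mkString("\n")
```

In practice a JSON library should produce these lines; the hand-rolled escaping above only illustrates the one-object-per-line shape.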
- final case class CreateImageEditRequest(image: File, prompt: String, mask: Optional[File] = Optional.Absent, model: Optional[CreateImageEditRequest.Model] = Optional.Absent, n: Optional[CreateImageEditRequest.N] = Optional.Absent, size: Optional[Size] = Optional.Absent, responseFormat: Optional[ResponseFormat] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable
CreateImageEditRequest model
CreateImageEditRequest model
- image
The image to edit. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask.
- prompt
A text description of the desired image(s). The maximum length is 1000 characters.
- mask
An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where `image` should be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as `image`.
- model
The model to use for image generation. Only `dall-e-2` is supported at this time.
- n
The number of images to generate. Must be between 1 and 10.
- size
The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024`.
- responseFormat
The format in which the generated images are returned. Must be one of `url` or `b64_json`.
- user
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
- final case class CreateImageRequest(prompt: String, model: Optional[CreateImageRequest.Model] = Optional.Absent, n: Optional[N] = Optional.Absent, quality: Optional[Quality] = Optional.Absent, responseFormat: Optional[ResponseFormat] = Optional.Absent, size: Optional[CreateImageRequest.Size] = Optional.Absent, style: Optional[Style] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable
CreateImageRequest model
CreateImageRequest model
- prompt
A text description of the desired image(s). The maximum length is 1000 characters for `dall-e-2` and 4000 characters for `dall-e-3`.
- model
The model to use for image generation.
- n
The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only `n=1` is supported.
- quality
The quality of the image that will be generated. `hd` creates images with finer details and greater consistency across the image. This param is only supported for `dall-e-3`.
- responseFormat
The format in which the generated images are returned. Must be one of `url` or `b64_json`.
- size
The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`. Must be one of `1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3` models.
- style
The style of the generated images. Must be one of `vivid` or `natural`. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for `dall-e-3`.
- user
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
- final case class CreateImageVariationRequest(image: File, model: Optional[CreateImageVariationRequest.Model] = Optional.Absent, n: Optional[N] = Optional.Absent, responseFormat: Optional[ResponseFormat] = Optional.Absent, size: Optional[Size] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable
CreateImageVariationRequest model
CreateImageVariationRequest model
- image
The image to use as the basis for the variation(s). Must be a valid PNG file, less than 4MB, and square.
- model
The model to use for image generation. Only `dall-e-2` is supported at this time.
- n
The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only `n=1` is supported.
- responseFormat
The format in which the generated images are returned. Must be one of `url` or `b64_json`.
- size
The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024`.
- user
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
- final case class CreateMessageRequest(role: Role, content: Content, fileIds: Optional[NonEmptyChunk[String]] = Optional.Absent, metadata: Optional[Metadata] = Optional.Absent) extends Product with Serializable
CreateMessageRequest model
CreateMessageRequest model
- role
The role of the entity that is creating the message. Currently only `user` is supported.
- content
The content of the message.
- fileIds
A list of [File](/docs/api-reference/files) IDs that the message should use. There can be a maximum of 10 files attached to a message. Useful for tools like `retrieval` and `code_interpreter` that can access and use files.
- metadata
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
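The metadata limits above (16 pairs, 64-character keys, 512-character values) recur on every model that carries a `metadata` field and are enforced server-side. A hedged client-side pre-check, with a hypothetical helper name and plain `Map[String, String]`:

```scala
// Client-side check of the documented metadata limits: at most 16 pairs,
// keys up to 64 characters, values up to 512 characters. validateMetadata
// is a hypothetical helper, not part of the library.
def validateMetadata(metadata: Map[String, String]): Either[String, Map[String, String]] =
  if (metadata.size > 16)
    Left(s"too many metadata pairs: ${metadata.size} (max 16)")
  else
    metadata.collectFirst {
      case (k, _) if k.length > 64  => s"key '${k.take(8)}...' exceeds 64 characters"
      case (_, v) if v.length > 512 => "a metadata value exceeds 512 characters"
    }.toLeft(metadata)
```

Returning `Either` keeps the failure message alongside the validated map, which composes with the error handling these clients typically use.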
- final case class CreateModerationRequest(input: Input, model: Optional[CreateModerationRequest.Model] = Optional.Absent) extends Product with Serializable
CreateModerationRequest model
CreateModerationRequest model
- input
The input text to classify.
- model
Two content moderation models are available: `text-moderation-stable` and `text-moderation-latest`. The default is `text-moderation-latest`, which will be automatically upgraded over time. This ensures you are always using our most accurate model. If you use `text-moderation-stable`, we will provide advance notice before updating the model. Accuracy of `text-moderation-stable` may be slightly lower than for `text-moderation-latest`.
- final case class CreateModerationResponse(id: String, model: String, results: Chunk[ResultsItem]) extends Product with Serializable
CreateModerationResponse model
CreateModerationResponse model
Represents a policy compliance report by OpenAI's content moderation model against a given input.
- id
The unique identifier for the moderation request.
- model
The model used to generate the moderation results.
- results
A list of moderation objects.
- final case class CreateRunRequest(assistantId: String, model: Optional[String] = Optional.Absent, instructions: Optional[String] = Optional.Absent, additionalInstructions: Optional[String] = Optional.Absent, tools: Optional[Chunk[ToolsItem]] = Optional.Absent, metadata: Optional[Metadata] = Optional.Absent) extends Product with Serializable
CreateRunRequest model
CreateRunRequest model
- assistantId
The ID of the [assistant](/docs/api-reference/assistants) to use to execute this run.
- model
The ID of the [Model](/docs/api-reference/models) to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used.
- instructions
Overrides the [instructions](/docs/api-reference/assistants/createAssistant) of the assistant. This is useful for modifying the behavior on a per-run basis.
- additionalInstructions
Appends additional instructions at the end of the instructions for the run. This is useful for modifying the behavior on a per-run basis without overriding other instructions.
- tools
Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis.
- metadata
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
- final case class CreateSpeechRequest(model: CreateSpeechRequest.Model, input: Input, voice: Voice, responseFormat: Optional[CreateSpeechRequest.ResponseFormat] = Optional.Absent, speed: Optional[Speed] = Optional.Absent) extends Product with Serializable
CreateSpeechRequest model
CreateSpeechRequest model
- model
One of the available [TTS models](/docs/models/tts): `tts-1` or `tts-1-hd`.
- input
The text to generate audio for. The maximum length is 4096 characters.
- voice
The voice to use when generating the audio. Supported voices are `alloy`, `echo`, `fable`, `onyx`, `nova`, and `shimmer`. Previews of the voices are available in the [Text to speech guide](/docs/guides/text-to-speech/voice-options).
- responseFormat
The format to return the audio in. Supported formats are `mp3`, `opus`, `aac`, and `flac`.
- speed
The speed of the generated audio. Select a value from `0.25` to `4.0`. `1.0` is the default.
- final case class CreateThreadAndRunRequest(assistantId: String, thread: Optional[CreateThreadRequest] = Optional.Absent, model: Optional[String] = Optional.Absent, instructions: Optional[String] = Optional.Absent, tools: Optional[Chunk[ToolsItem]] = Optional.Absent, metadata: Optional[Metadata] = Optional.Absent) extends Product with Serializable
CreateThreadAndRunRequest model
CreateThreadAndRunRequest model
- assistantId
The ID of the [assistant](/docs/api-reference/assistants) to use to execute this run.
- model
The ID of the [Model](/docs/api-reference/models) to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used.
- instructions
Override the default system message of the assistant. This is useful for modifying the behavior on a per-run basis.
- tools
Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis.
- metadata
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
- final case class CreateThreadRequest(messages: Optional[Chunk[CreateMessageRequest]] = Optional.Absent, metadata: Optional[Metadata] = Optional.Absent) extends Product with Serializable
CreateThreadRequest model
CreateThreadRequest model
- messages
A list of [messages](/docs/api-reference/messages) to start the thread with.
- metadata
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
- final case class CreateTranscriptionRequest(file: File, model: CreateTranscriptionRequest.Model, language: Optional[String] = Optional.Absent, prompt: Optional[String] = Optional.Absent, responseFormat: Optional[CreateTranscriptionRequest.ResponseFormat] = Optional.Absent, temperature: Optional[Double] = Optional.Absent, timestampGranularities[]: Optional[Chunk[TimestampGranularities[]Item]] = Optional.Absent) extends Product with Serializable
CreateTranscriptionRequest model
CreateTranscriptionRequest model
- file
The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
- model
ID of the model to use. Only `whisper-1` is currently available.
- language
The language of the input audio. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format will improve accuracy and latency.
- prompt
An optional text to guide the model's style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
- responseFormat
The format of the transcript output, in one of these options: `json`, `text`, `srt`, `verbose_json`, or `vtt`.
- temperature
The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
- timestampGranularities[]
The timestamp granularities to populate for this transcription. Any of these options: `word` or `segment`. Note: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency.
- final case class CreateTranscriptionResponse(text: String) extends Product with Serializable
CreateTranscriptionResponse model
- final case class CreateTranslationRequest(file: File, model: CreateTranslationRequest.Model, prompt: Optional[String] = Optional.Absent, responseFormat: Optional[String] = Optional.Absent, temperature: Optional[Double] = Optional.Absent) extends Product with Serializable
CreateTranslationRequest model
CreateTranslationRequest model
- file
The audio file object (not file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
- model
ID of the model to use. Only `whisper-1` is currently available.
- prompt
An optional text to guide the model's style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
- responseFormat
The format of the transcript output, in one of these options: `json`, `text`, `srt`, `verbose_json`, or `vtt`.
- temperature
The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
- final case class CreateTranslationResponse(text: String) extends Product with Serializable
CreateTranslationResponse model
- final case class DeleteAssistantFileResponse(id: String, deleted: Boolean, object: Object) extends Product with Serializable
DeleteAssistantFileResponse model
DeleteAssistantFileResponse model
Deletes the association between the assistant and the file, but does not delete the [File](/docs/api-reference/files) object itself.
- final case class DeleteAssistantResponse(id: String, deleted: Boolean, object: Object) extends Product with Serializable
DeleteAssistantResponse model
- final case class DeleteFileResponse(id: String, object: Object, deleted: Boolean) extends Product with Serializable
DeleteFileResponse model
- final case class DeleteMessageResponse(id: String, deleted: Boolean, object: Object) extends Product with Serializable
DeleteMessageResponse model
- final case class DeleteModelResponse(id: String, deleted: Boolean, object: String) extends Product with Serializable
DeleteModelResponse model
- final case class DeleteThreadResponse(id: String, deleted: Boolean, object: Object) extends Product with Serializable
DeleteThreadResponse model
- type Description = model.Description.Type
description model
description model
The description of the assistant. The maximum length is 512 characters.
- final case class Embedding(index: Int, embedding: Chunk[Double], object: Object) extends Product with Serializable
Embedding model
Embedding model
Represents an embedding vector returned by the embedding endpoint.
- index
The index of the embedding in the list of embeddings.
- embedding
The embedding vector, which is a list of floats. The length of the vector depends on the model as listed in the [embedding guide](/docs/guides/embeddings).
- object
The object type, which is always "embedding".
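The `embedding` field is a plain vector of doubles, so downstream similarity math needs no library support. A sketch computing cosine similarity between two such vectors; plain `Vector[Double]` stands in for the `Chunk[Double]` used above:

```scala
// Cosine similarity over two embedding vectors. Vector[Double] is a
// stand-in for the library's Chunk[Double]; the arithmetic is identical.
def cosineSimilarity(a: Vector[Double], b: Vector[Double]): Double = {
  require(a.length == b.length, "embedding vectors must have equal length")
  val dot   = a.zip(b).map { case (x, y) => x * y }.sum
  val normA = math.sqrt(a.map(x => x * x).sum)
  val normB = math.sqrt(b.map(x => x * x).sum)
  dot / (normA * normB)
}

// Identical directions score 1.0; orthogonal directions score 0.0.
```

Semantic search over embeddings is typically just this function applied between a query vector and each stored document vector, keeping the highest-scoring matches.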
- type EndIndex = model.EndIndex.Type
end_index model
- final case class Error(code: Optional[String], message: String, param: Optional[String], type: String) extends Product with Serializable
Error model
- final case class ErrorResponse(error: Error) extends Product with Serializable
ErrorResponse model
- final case class File(data: Chunk[Byte], fileName: String) extends Product with Serializable
- final case class FineTuningJob(id: String, createdAt: Int, error: Optional[FineTuningJob.Error], fineTunedModel: Optional[String], finishedAt: Optional[Int], hyperparameters: Hyperparameters, model: String, object: Object, organizationId: String, resultFiles: Chunk[String], status: Status, trainedTokens: Optional[Int], trainingFile: String, validationFile: Optional[String]) extends Product with Serializable
FineTuningJob model
FineTuningJob model
The `fine_tuning.job` object represents a fine-tuning job that has been created through the API.
- id
The object identifier, which can be referenced in the API endpoints.
- createdAt
The Unix timestamp (in seconds) for when the fine-tuning job was created.
- error
For fine-tuning jobs that have `failed`, this will contain more information on the cause of the failure.
- fineTunedModel
The name of the fine-tuned model that is being created. The value will be null if the fine-tuning job is still running.
- finishedAt
The Unix timestamp (in seconds) for when the fine-tuning job was finished. The value will be null if the fine-tuning job is still running.
- hyperparameters
The hyperparameters used for the fine-tuning job. See the [fine-tuning guide](/docs/guides/fine-tuning) for more details.
- model
The base model that is being fine-tuned.
- object
The object type, which is always "fine_tuning.job".
- organizationId
The organization that owns the fine-tuning job.
- resultFiles
The compiled results file ID(s) for the fine-tuning job. You can retrieve the results with the [Files API](/docs/api-reference/files/retrieve-contents).
- status
The current status of the fine-tuning job, which can be either `validating_files`, `queued`, `running`, `succeeded`, `failed`, or `cancelled`.
- trainedTokens
The total number of billable tokens processed by this fine-tuning job. The value will be null if the fine-tuning job is still running.
- trainingFile
The file ID used for training. You can retrieve the training data with the [Files API](/docs/api-reference/files/retrieve-contents).
- validationFile
The file ID used for validation. You can retrieve the validation results with the [Files API](/docs/api-reference/files/retrieve-contents).
- final case class FineTuningJobEvent(id: String, createdAt: Int, level: Level, message: String, object: Object) extends Product with Serializable
FineTuningJobEvent model
FineTuningJobEvent model
Fine-tuning job event object
- sealed trait FinishReason extends AnyRef
finish_reason model
finish_reason model
The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool, or `function_call` (deprecated) if the model called a function.
- type FrequencyPenalty = model.FrequencyPenalty.Type
frequency_penalty model
frequency_penalty model
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
[See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details)
- final case class FunctionObject(description: Optional[String] = Optional.Absent, name: String, parameters: Optional[FunctionParameters] = Optional.Absent) extends Product with Serializable
FunctionObject model
FunctionObject model
- description
A description of what the function does, used by the model to choose when and how to call the function.
- name
The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
- final case class FunctionParameters(values: Map[String, Json]) extends DynamicObject[FunctionParameters] with Product with Serializable
FunctionParameters model
FunctionParameters model
The parameters the functions accepts, described as a JSON Schema object. See the [guide](/docs/guides/text-generation/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.
Omitting `parameters` defines a function with an empty parameter list.
- values
The dynamic list of key-value pairs of the object
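Since `FunctionParameters` wraps a free-form key-value object, the parameters are typically a JSON Schema describing the function's arguments. A sketch of such a schema, using `Map[String, Any]` as a hypothetical stand-in for the library's `Map[String, Json]`:

```scala
// A JSON-Schema-shaped parameter description for a hypothetical
// get_weather function. Map[String, Any] stands in for Map[String, Json];
// a real client would build this with its JSON library's AST.
val weatherParams: Map[String, Any] = Map(
  "type" -> "object",
  "properties" -> Map(
    "location" -> Map("type" -> "string", "description" -> "City name"),
    "unit"     -> Map("type" -> "string", "enum" -> List("celsius", "fahrenheit"))
  ),
  "required" -> List("location")
)
```

The model reads this schema to decide when to call the function and how to shape the arguments it returns, so field descriptions here act like prompts.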
- final case class Image(b64Json: Optional[String] = Optional.Absent, url: Optional[String] = Optional.Absent, revisedPrompt: Optional[String] = Optional.Absent) extends Product with Serializable
Image model
Image model
Represents the URL or the content of an image generated by the OpenAI API.
- b64Json
The base64-encoded JSON of the generated image, if `response_format` is `b64_json`.
- url
The URL of the generated image, if `response_format` is `url` (default).
- revisedPrompt
The prompt that was used to generate the image, if there was any revision to the prompt.
- final case class ImagesResponse(created: Int, data: Chunk[Image]) extends Product with Serializable
ImagesResponse model
- type Instructions = model.Instructions.Type
instructions model
instructions model
The system instructions that the assistant uses. The maximum length is 32768 characters.
- final case class ListAssistantFilesResponse(object: String, data: Chunk[AssistantFileObject], firstId: String, lastId: String, hasMore: Boolean) extends Product with Serializable
ListAssistantFilesResponse model
- final case class ListAssistantsResponse(object: String, data: Chunk[AssistantObject], firstId: String, lastId: String, hasMore: Boolean) extends Product with Serializable
ListAssistantsResponse model
- final case class ListFilesResponse(data: Chunk[OpenAIFile], object: Object) extends Product with Serializable
ListFilesResponse model
- final case class ListFineTuningJobEventsResponse(data: Chunk[FineTuningJobEvent], object: Object) extends Product with Serializable
ListFineTuningJobEventsResponse model
- final case class ListMessageFilesResponse(object: String, data: Chunk[MessageFileObject], firstId: String, lastId: String, hasMore: Boolean) extends Product with Serializable
ListMessageFilesResponse model
- final case class ListMessagesResponse(object: String, data: Chunk[MessageObject], firstId: String, lastId: String, hasMore: Boolean) extends Product with Serializable
ListMessagesResponse model
- final case class ListModelsResponse(object: Object, data: Chunk[Model]) extends Product with Serializable
ListModelsResponse model
- final case class ListPaginatedFineTuningJobsResponse(data: Chunk[FineTuningJob], hasMore: Boolean, object: Object) extends Product with Serializable
ListPaginatedFineTuningJobsResponse model
- final case class ListRunStepsResponse(object: String, data: Chunk[RunStepObject], firstId: String, lastId: String, hasMore: Boolean) extends Product with Serializable
ListRunStepsResponse model
- final case class ListRunsResponse(object: String, data: Chunk[RunObject], firstId: String, lastId: String, hasMore: Boolean) extends Product with Serializable
ListRunsResponse model
- final case class ListThreadsResponse(object: String, data: Chunk[ThreadObject], firstId: String, lastId: String, hasMore: Boolean) extends Product with Serializable
ListThreadsResponse model
- final case class MessageContentImageFileObject(type: Type, imageFile: ImageFile) extends Product with Serializable
MessageContentImageFileObject model
MessageContentImageFileObject model
References an image [File](/docs/api-reference/files) in the content of a message.
- type
Always `image_file`.
- final case class MessageContentTextAnnotationsFileCitationObject(type: Type, text: String, fileCitation: FileCitation, startIndex: StartIndex, endIndex: EndIndex) extends Product with Serializable
MessageContentTextAnnotationsFileCitationObject model
MessageContentTextAnnotationsFileCitationObject model
A citation within the message that points to a specific quote from a specific File associated with the assistant or the message. Generated when the assistant uses the "retrieval" tool to search files.
- type
Always `file_citation`.
- text
The text in the message content that needs to be replaced.
- final case class MessageContentTextAnnotationsFilePathObject(type: Type, text: String, filePath: FilePath, startIndex: StartIndex, endIndex: EndIndex) extends Product with Serializable
MessageContentTextAnnotationsFilePathObject model
MessageContentTextAnnotationsFilePathObject model
A URL for the file that's generated when the assistant used the `code_interpreter` tool to generate a file.
- type
Always `file_path`.
- text
The text in the message content that needs to be replaced.
- final case class MessageContentTextObject(type: Type, text: Text) extends Product with Serializable
MessageContentTextObject model
MessageContentTextObject model
The text content that is part of a message.
- type
Always `text`.
- final case class MessageFileObject(id: String, object: Object, createdAt: Int, messageId: String) extends Product with Serializable
MessageFileObject model
MessageFileObject model
A list of files attached to a `message`.
- id
The identifier, which can be referenced in API endpoints.
- object
The object type, which is always `thread.message.file`.
- createdAt
The Unix timestamp (in seconds) for when the message file was created.
- messageId
The ID of the [message](/docs/api-reference/messages) that the [File](/docs/api-reference/files) is attached to.
- final case class MessageObject(id: String, object: Object, createdAt: Int, threadId: String, role: Role, content: Chunk[ContentItem], assistantId: Optional[String], runId: Optional[String], fileIds: Chunk[String], metadata: Optional[Metadata]) extends Product with Serializable
MessageObject model
MessageObject model
Represents a message within a [thread](/docs/api-reference/threads).
- id
The identifier, which can be referenced in API endpoints.
- object
The object type, which is always `thread.message`.
- createdAt
The Unix timestamp (in seconds) for when the message was created.
- threadId
The [thread](/docs/api-reference/threads) ID that this message belongs to.
- role
The entity that produced the message. One of `user` or `assistant`.
- content
The content of the message, as an array of text and/or images.
- assistantId
If applicable, the ID of the [assistant](/docs/api-reference/assistants) that authored this message.
- runId
If applicable, the ID of the [run](/docs/api-reference/runs) associated with the authoring of this message.
- fileIds
A list of [file](/docs/api-reference/files) IDs that the assistant should use. Useful for tools like retrieval and code_interpreter that can access files. A maximum of 10 files can be attached to a message.
- metadata
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
- final case class Model(id: String, created: Int, object: Object, ownedBy: String) extends Product with Serializable
Model model
Model model
Describes an OpenAI model offering that can be used with the API.
- id
The model identifier, which can be referenced in the API endpoints.
- created
The Unix timestamp (in seconds) when the model was created.
- object
The object type, which is always "model".
- ownedBy
The organization that owns the model.
- final case class ModifyAssistantRequest(model: Optional[ModifyAssistantRequest.Model] = Optional.Absent, name: Optional[Name] = Optional.Absent, description: Optional[Description] = Optional.Absent, instructions: Optional[Instructions] = Optional.Absent, tools: Optional[Chunk[ToolsItem]] = Optional.Absent, fileIds: Optional[Chunk[String]] = Optional.Absent, metadata: Optional[Metadata] = Optional.Absent) extends Product with Serializable
ModifyAssistantRequest model
ModifyAssistantRequest model
- model
ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.
- name
The name of the assistant. The maximum length is 256 characters.
- description
The description of the assistant. The maximum length is 512 characters.
- instructions
The system instructions that the assistant uses. The maximum length is 32768 characters.
- tools
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `retrieval`, or `function`.
- fileIds
A list of [File](/docs/api-reference/files) IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order. If a file was previously attached to the list but does not show up in the list, it will be deleted from the assistant.
- metadata
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
- final case class ModifyMessageRequest(metadata: Optional[Metadata] = Optional.Absent) extends Product with Serializable
ModifyMessageRequest model
ModifyMessageRequest model
- metadata
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
- final case class ModifyRunRequest(metadata: Optional[Metadata] = Optional.Absent) extends Product with Serializable
ModifyRunRequest model
- metadata
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
- final case class ModifyThreadRequest(metadata: Optional[Metadata] = Optional.Absent) extends Product with Serializable
ModifyThreadRequest model
- metadata
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
- type N = model.N.Type
n model
The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only `n=1` is supported.
- type Name = model.Name.Type
name model
The name of the assistant. The maximum length is 256 characters.
- sealed trait OpenAIFailure extends AnyRef
- final case class OpenAIFile(id: String, bytes: Int, createdAt: Int, filename: String, object: Object, purpose: Purpose, status: Status, statusDetails: Optional[String] = Optional.Absent) extends Product with Serializable
OpenAIFile model
The `File` object represents a document that has been uploaded to OpenAI.
- id
The file identifier, which can be referenced in the API endpoints.
- bytes
The size of the file, in bytes.
- createdAt
The Unix timestamp (in seconds) for when the file was created.
- filename
The name of the file.
- object
The object type, which is always `file`.
- purpose
The intended purpose of the file. Supported values are `fine-tune`, `fine-tune-results`, `assistants`, and `assistants_output`.
- status
Deprecated. The current status of the file, which can be either `uploaded`, `processed`, or `error`.
- statusDetails
Deprecated. For details on why a fine-tuning training file failed validation, see the `error` field on `fine_tuning.job`.
- type PresencePenalty = model.PresencePenalty.Type
presence_penalty model
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
[See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details)
- sealed trait ResponseFormat extends AnyRef
response_format model
The format in which the generated images are returned. Must be one of `url` or `b64_json`.
- final case class RunCompletionUsage(completionTokens: Int, promptTokens: Int, totalTokens: Int) extends Product with Serializable
RunCompletionUsage model
Usage statistics related to the run. This value will be `null` if the run is not in a terminal state (i.e. `in_progress`, `queued`, etc.).
- completionTokens
Number of completion tokens used over the course of the run.
- promptTokens
Number of prompt tokens used over the course of the run.
- totalTokens
Total number of tokens used (prompt + completion).
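As a sketch with made-up numbers, the three fields relate as prompt + completion = total:

```scala
// Hypothetical usage values: totalTokens is the sum of the other two fields.
val usage = RunCompletionUsage(completionTokens = 120, promptTokens = 380, totalTokens = 500)
```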
- final case class RunObject(id: String, object: Object, createdAt: Int, threadId: String, assistantId: String, status: Status, requiredAction: Optional[RequiredAction], lastError: Optional[LastError], expiresAt: Int, startedAt: Optional[Int], cancelledAt: Optional[Int], failedAt: Optional[Int], completedAt: Optional[Int], model: String, instructions: String, tools: Chunk[ToolsItem], fileIds: Chunk[String], metadata: Optional[Metadata], usage: RunCompletionUsage) extends Product with Serializable
RunObject model
Represents an execution run on a [thread](/docs/api-reference/threads).
- id
The identifier, which can be referenced in API endpoints.
- object
The object type, which is always `thread.run`.
- createdAt
The Unix timestamp (in seconds) for when the run was created.
- threadId
The ID of the [thread](/docs/api-reference/threads) that was executed on as a part of this run.
- assistantId
The ID of the [assistant](/docs/api-reference/assistants) used for execution of this run.
- status
The status of the run, which can be either `queued`, `in_progress`, `requires_action`, `cancelling`, `cancelled`, `failed`, `completed`, or `expired`.
- requiredAction
Details on the action required to continue the run. Will be `null` if no action is required.
- lastError
The last error associated with this run. Will be `null` if there are no errors.
- expiresAt
The Unix timestamp (in seconds) for when the run will expire.
- startedAt
The Unix timestamp (in seconds) for when the run was started.
- cancelledAt
The Unix timestamp (in seconds) for when the run was cancelled.
- failedAt
The Unix timestamp (in seconds) for when the run failed.
- completedAt
The Unix timestamp (in seconds) for when the run was completed.
- model
The model that the [assistant](/docs/api-reference/assistants) used for this run.
- instructions
The instructions that the [assistant](/docs/api-reference/assistants) used for this run.
- tools
The list of tools that the [assistant](/docs/api-reference/assistants) used for this run.
- fileIds
The list of [File](/docs/api-reference/files) IDs the [assistant](/docs/api-reference/assistants) used for this run.
- metadata
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
- final case class RunStepCompletionUsage(completionTokens: Int, promptTokens: Int, totalTokens: Int) extends Product with Serializable
RunStepCompletionUsage model
Usage statistics related to the run step. This value will be `null` while the run step's status is `in_progress`.
- completionTokens
Number of completion tokens used over the course of the run step.
- promptTokens
Number of prompt tokens used over the course of the run step.
- totalTokens
Total number of tokens used (prompt + completion).
- final case class RunStepDetailsMessageCreationObject(type: Type, messageCreation: MessageCreation) extends Product with Serializable
RunStepDetailsMessageCreationObject model
Details of the message creation by the run step.
- type
Always `message_creation`.
- final case class RunStepDetailsToolCallsCodeObject(id: String, type: Type, codeInterpreter: CodeInterpreter) extends Product with Serializable
RunStepDetailsToolCallsCodeObject model
Details of the Code Interpreter tool call the run step was involved in.
- id
The ID of the tool call.
- type
The type of tool call. This is always going to be `code_interpreter` for this type of tool call.
- codeInterpreter
The Code Interpreter tool call definition.
- final case class RunStepDetailsToolCallsCodeOutputImageObject(type: Type, image: RunStepDetailsToolCallsCodeOutputImageObject.Image) extends Product with Serializable
RunStepDetailsToolCallsCodeOutputImageObject model
- type
Always `image`.
- final case class RunStepDetailsToolCallsCodeOutputLogsObject(type: Type, logs: String) extends Product with Serializable
RunStepDetailsToolCallsCodeOutputLogsObject model
Text output from the Code Interpreter tool call as part of a run step.
- type
Always `logs`.
- logs
The text output from the Code Interpreter tool call.
- final case class RunStepDetailsToolCallsFunctionObject(id: String, type: Type, function: Function) extends Product with Serializable
RunStepDetailsToolCallsFunctionObject model
- id
The ID of the tool call object.
- type
The type of tool call. This is always going to be `function` for this type of tool call.
- function
The definition of the function that was called.
- final case class RunStepDetailsToolCallsObject(type: Type, toolCalls: Chunk[ToolCallsItem]) extends Product with Serializable
RunStepDetailsToolCallsObject model
Details of the tool call.
- type
Always `tool_calls`.
- toolCalls
An array of tool calls the run step was involved in. These can be associated with one of three types of tools: `code_interpreter`, `retrieval`, or `function`.
- final case class RunStepDetailsToolCallsRetrievalObject(id: String, type: Type, retrieval: Retrieval) extends Product with Serializable
RunStepDetailsToolCallsRetrievalObject model
- id
The ID of the tool call object.
- type
The type of tool call. This is always going to be `retrieval` for this type of tool call.
- retrieval
For now, this is always going to be an empty object.
- final case class RunStepObject(id: String, object: Object, createdAt: Int, assistantId: String, threadId: String, runId: String, type: Type, status: Status, stepDetails: StepDetails, lastError: Optional[LastError], expiredAt: Optional[Int], cancelledAt: Optional[Int], failedAt: Optional[Int], completedAt: Optional[Int], metadata: Optional[Metadata], usage: RunStepCompletionUsage) extends Product with Serializable
RunStepObject model
Represents a step in execution of a run.
- id
The identifier of the run step, which can be referenced in API endpoints.
- object
The object type, which is always `thread.run.step`.
- createdAt
The Unix timestamp (in seconds) for when the run step was created.
- assistantId
The ID of the [assistant](/docs/api-reference/assistants) associated with the run step.
- threadId
The ID of the [thread](/docs/api-reference/threads) that was run.
- runId
The ID of the [run](/docs/api-reference/runs) that this run step is a part of.
- type
The type of run step, which can be either `message_creation` or `tool_calls`.
- status
The status of the run step, which can be either `in_progress`, `cancelled`, `failed`, `completed`, or `expired`.
- stepDetails
The details of the run step.
- lastError
The last error associated with this run step. Will be `null` if there are no errors.
- expiredAt
The Unix timestamp (in seconds) for when the run step expired. A step is considered expired if the parent run is expired.
- cancelledAt
The Unix timestamp (in seconds) for when the run step was cancelled.
- failedAt
The Unix timestamp (in seconds) for when the run step failed.
- completedAt
The Unix timestamp (in seconds) for when the run step completed.
- metadata
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
- final case class RunToolCallObject(id: String, type: Type, function: Function) extends Product with Serializable
RunToolCallObject model
Tool call objects
- id
The ID of the tool call. This ID must be referenced when submitting the tool outputs using the [Submit tool outputs to run](/docs/api-reference/runs/submitToolOutputs) endpoint.
- type
The type of tool call the output is required for. For now, this is always `function`.
- function
The function definition.
- sealed trait Size extends AnyRef
size model
The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024`.
- type StartIndex = model.StartIndex.Type
start_index model
- final case class SubmitToolOutputsRunRequest(toolOutputs: Chunk[ToolOutputsItem]) extends Product with Serializable
SubmitToolOutputsRunRequest model
- toolOutputs
A list of tools for which the outputs are being submitted.
- type Temperature = model.Temperature.Type
temperature model
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or `top_p` but not both.
- final case class ThreadObject(id: String, object: Object, createdAt: Int, metadata: Optional[Metadata]) extends Product with Serializable
ThreadObject model
Represents a thread that contains [messages](/docs/api-reference/messages).
- id
The identifier, which can be referenced in API endpoints.
- object
The object type, which is always `thread`.
- createdAt
The Unix timestamp (in seconds) for when the thread was created.
- metadata
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
- sealed trait ThreadsListMessageFilesOrder extends AnyRef
threads_listMessageFiles_order model
- sealed trait ThreadsListMessagesOrder extends AnyRef
threads_listMessages_order model
- sealed trait ThreadsListRunStepsOrder extends AnyRef
threads_listRunSteps_order model
- sealed trait ThreadsListRunsOrder extends AnyRef
threads_listRuns_order model
- type TopP = model.TopP.Type
top_p model
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or `temperature` but not both.
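A hedged sketch of the recommendation above: set either `Temperature` or `TopP` on a request, not both. The `apply` calls on these `Double` subtypes are an assumption about how the wrappers are constructed.

```scala
// Pick one sampling control per request, not both.
val focused = Temperature(0.2) // lower temperature: more focused, deterministic output
val nucleus = TopP(0.1)        // nucleus sampling: only the top 10% probability mass
```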
Value Members
- object AssistantFileObject extends Serializable
- object AssistantObject extends Serializable
- object AssistantToolsCode extends Serializable
- object AssistantToolsFunction extends Serializable
- object AssistantToolsRetrieval extends Serializable
- object AssistantsListAssistantFilesOrder
- object AssistantsListAssistantsOrder
- object CaseType1 extends Subtype[Int]
- object ChatCompletionFunctionCallOption extends Serializable
- object ChatCompletionFunctions extends Serializable
- object ChatCompletionMessageToolCall extends Serializable
- object ChatCompletionMessageToolCallChunk extends Serializable
- object ChatCompletionNamedToolChoice extends Serializable
- object ChatCompletionRequestAssistantMessage extends Serializable
- object ChatCompletionRequestFunctionMessage extends Serializable
- object ChatCompletionRequestMessage
- object ChatCompletionRequestMessageContentPart
- object ChatCompletionRequestMessageContentPartImage extends Serializable
- object ChatCompletionRequestMessageContentPartText extends Serializable
- object ChatCompletionRequestSystemMessage extends Serializable
- object ChatCompletionRequestToolMessage extends Serializable
- object ChatCompletionRequestUserMessage extends Serializable
- object ChatCompletionResponseMessage extends Serializable
- object ChatCompletionRole
- object ChatCompletionStreamResponseDelta extends Serializable
- object ChatCompletionTokenLogprob extends Serializable
- object ChatCompletionTool extends Serializable
- object ChatCompletionToolChoiceOption
- object Code
- object CompletionUsage extends Serializable
- object CreateAssistantFileRequest extends Serializable
- object CreateAssistantRequest extends Serializable
- object CreateChatCompletionFunctionResponse extends Serializable
- object CreateChatCompletionImageResponse extends Serializable
- object CreateChatCompletionRequest extends Serializable
- object CreateChatCompletionResponse extends Serializable
- object CreateChatCompletionStreamResponse extends Serializable
- object CreateCompletionRequest extends Serializable
- object CreateCompletionResponse extends Serializable
- object CreateEmbeddingRequest extends Serializable
- object CreateEmbeddingResponse extends Serializable
- object CreateFileRequest extends Serializable
- object CreateFineTuningJobRequest extends Serializable
- object CreateImageEditRequest extends Serializable
- object CreateImageRequest extends Serializable
- object CreateImageVariationRequest extends Serializable
- object CreateMessageRequest extends Serializable
- object CreateModerationRequest extends Serializable
- object CreateModerationResponse extends Serializable
- object CreateRunRequest extends Serializable
- object CreateSpeechRequest extends Serializable
- object CreateThreadAndRunRequest extends Serializable
- object CreateThreadRequest extends Serializable
- object CreateTranscriptionRequest extends Serializable
- object CreateTranscriptionResponse extends Serializable
- object CreateTranslationRequest extends Serializable
- object CreateTranslationResponse extends Serializable
- object DeleteAssistantFileResponse extends Serializable
- object DeleteAssistantResponse extends Serializable
- object DeleteFileResponse extends Serializable
- object DeleteMessageResponse extends Serializable
- object DeleteModelResponse extends Serializable
- object DeleteThreadResponse extends Serializable
- object Description extends Subtype[String]
- object Embedding extends Serializable
- object EndIndex extends Subtype[Int]
- object Error extends Serializable
- object ErrorResponse extends Serializable
- object File extends Serializable
- object FineTuningJob extends Serializable
- object FineTuningJobEvent extends Serializable
- object FinishReason
- object FrequencyPenalty extends Subtype[Double]
- object FunctionObject extends Serializable
- object FunctionParameters extends Serializable
- object Image extends Serializable
- object ImagesResponse extends Serializable
- object Instructions extends Subtype[String]
- object ListAssistantFilesResponse extends Serializable
- object ListAssistantsResponse extends Serializable
- object ListFilesResponse extends Serializable
- object ListFineTuningJobEventsResponse extends Serializable
- object ListMessageFilesResponse extends Serializable
- object ListMessagesResponse extends Serializable
- object ListModelsResponse extends Serializable
- object ListPaginatedFineTuningJobsResponse extends Serializable
- object ListRunStepsResponse extends Serializable
- object ListRunsResponse extends Serializable
- object ListThreadsResponse extends Serializable
- object MessageContentImageFileObject extends Serializable
- object MessageContentTextAnnotationsFileCitationObject extends Serializable
- object MessageContentTextAnnotationsFilePathObject extends Serializable
- object MessageContentTextObject extends Serializable
- object MessageFileObject extends Serializable
- object MessageObject extends Serializable
- object Model extends Serializable
- object ModifyAssistantRequest extends Serializable
- object ModifyMessageRequest extends Serializable
- object ModifyRunRequest extends Serializable
- object ModifyThreadRequest extends Serializable
- object N extends Subtype[Int]
- object Name extends Subtype[String]
- object OpenAIFailure
- object OpenAIFile extends Serializable
- object PresencePenalty extends Subtype[Double]
- object ResponseFormat
- object RunCompletionUsage extends Serializable
- object RunObject extends Serializable
- object RunStepCompletionUsage extends Serializable
- object RunStepDetailsMessageCreationObject extends Serializable
- object RunStepDetailsToolCallsCodeObject extends Serializable
- object RunStepDetailsToolCallsCodeOutputImageObject extends Serializable
- object RunStepDetailsToolCallsCodeOutputLogsObject extends Serializable
- object RunStepDetailsToolCallsFunctionObject extends Serializable
- object RunStepDetailsToolCallsObject extends Serializable
- object RunStepDetailsToolCallsRetrievalObject extends Serializable
- object RunStepObject extends Serializable
- object RunToolCallObject extends Serializable
- object Size
- object StartIndex extends Subtype[Int]
- object SubmitToolOutputsRunRequest extends Serializable
- object Temperature extends Subtype[Double]
- object ThreadObject extends Serializable
- object ThreadsListMessageFilesOrder
- object ThreadsListMessagesOrder
- object ThreadsListRunStepsOrder
- object ThreadsListRunsOrder
- object TopP extends Subtype[Double]