package model
Type Members
- type CaseType1 = model.CaseType1.Type
CaseType1 model
- final case class ChatCompletionFunctionCallOption(name: String) extends Product with Serializable
ChatCompletionFunctionCallOption model
- name
The name of the function to call.
- final case class ChatCompletionFunctionParameters(values: Map[String, Json]) extends DynamicObject[ChatCompletionFunctionParameters] with Product with Serializable
ChatCompletionFunctionParameters model
The parameters the function accepts, described as a JSON Schema object. See the [guide](/docs/guides/gpt/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.
To describe a function that accepts no parameters, provide the value
`{"type": "object", "properties": {}}`.
- values
The dynamic list of key-value pairs of the object
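The "no parameters" schema above can be sketched in plain Scala. This is a hypothetical illustration, not the library's API: the real field is a `Map[String, Json]`, but a nested stdlib `Map` is enough to show the required shape.

```scala
// Hypothetical sketch: a plain nested Map standing in for Map[String, Json],
// showing the shape of the "no parameters" schema described above.
val emptySchema: Map[String, Any] = Map(
  "type"       -> "object",
  "properties" -> Map.empty[String, Any]
)

// Checks that a schema describes an object with no properties.
def isEmptyObjectSchema(schema: Map[String, Any]): Boolean =
  schema.get("type").contains("object") &&
    schema.get("properties").exists {
      case m: Map[_, _] => m.isEmpty
      case _            => false
    }
```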
- final case class ChatCompletionFunctions(description: Optional[String] = Optional.Absent, name: String, parameters: ChatCompletionFunctionParameters) extends Product with Serializable
ChatCompletionFunctions model
- description
A description of what the function does, used by the model to choose when and how to call the function.
- name
The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
- final case class ChatCompletionRequestMessage(content: Optional[String], functionCall: Optional[FunctionCall] = Optional.Absent, name: Optional[String] = Optional.Absent, role: Role) extends Product with Serializable
ChatCompletionRequestMessage model
- content
The contents of the message.
`content` is required for all messages, and may be null for assistant messages with function calls.
- functionCall
The name and arguments of a function that should be called, as generated by the model.
- name
The name of the author of this message.
`name` is required if role is `function`, and it should be the name of the function whose response is in the `content`. May contain a-z, A-Z, 0-9, and underscores, with a maximum length of 64 characters.
- role
The role of the message's author. One of `system`, `user`, `assistant`, or `function`.
- final case class ChatCompletionResponseMessage(content: Optional[String], functionCall: Optional[FunctionCall] = Optional.Absent, role: Role) extends Product with Serializable
ChatCompletionResponseMessage model
A chat completion message generated by the model.
- content
The contents of the message.
- functionCall
The name and arguments of a function that should be called, as generated by the model.
- role
The role of the author of this message.
- final case class ChatCompletionStreamResponseDelta(content: Optional[String] = Optional.Absent, functionCall: Optional[FunctionCall] = Optional.Absent, role: Optional[Role] = Optional.Absent) extends Product with Serializable
ChatCompletionStreamResponseDelta model
A chat completion delta generated by streamed model responses.
- content
The contents of the chunk message.
- functionCall
The name and arguments of a function that should be called, as generated by the model.
- role
The role of the author of this message.
- final case class CompletionUsage(completionTokens: Int, promptTokens: Int, totalTokens: Int) extends Product with Serializable
CompletionUsage model
Usage statistics for the completion request.
- completionTokens
Number of tokens in the generated completion.
- promptTokens
Number of tokens in the prompt.
- totalTokens
Total number of tokens used in the request (prompt + completion).
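The relation between the three fields can be stated as a one-line invariant. This is an illustrative sketch (a stand-in case class, not the generated `CompletionUsage` itself):

```scala
// Illustrative stand-in for CompletionUsage: the documented invariant is
// totalTokens = promptTokens + completionTokens.
final case class UsageSketch(completionTokens: Int, promptTokens: Int, totalTokens: Int)

def isConsistent(u: UsageSketch): Boolean =
  u.totalTokens == u.promptTokens + u.completionTokens
```

For example, `UsageSketch(completionTokens = 12, promptTokens = 30, totalTokens = 42)` satisfies the invariant.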
- final case class CreateChatCompletionFunctionResponse(id: String, choices: Chunk[ChoicesItem], created: Int, model: String, object: String, usage: Optional[CompletionUsage] = Optional.Absent) extends Product with Serializable
CreateChatCompletionFunctionResponse model
Represents a chat completion response returned by the model, based on the provided input.
- id
A unique identifier for the chat completion.
- choices
A list of chat completion choices. Can be more than one if
`n` is greater than 1.
- created
The Unix timestamp (in seconds) of when the chat completion was created.
- model
The model used for the chat completion.
- object
The object type, which is always
`chat.completion`.
- final case class CreateChatCompletionRequest(messages: NonEmptyChunk[ChatCompletionRequestMessage], model: CreateChatCompletionRequest.Model, frequencyPenalty: Optional[FrequencyPenalty] = Optional.Absent, functionCall: Optional[FunctionCall] = Optional.Absent, functions: Optional[NonEmptyChunk[ChatCompletionFunctions]] = Optional.Absent, logitBias: Optional[LogitBias] = Optional.Absent, maxTokens: Optional[Int] = Optional.Absent, n: Optional[CreateChatCompletionRequest.N] = Optional.Absent, presencePenalty: Optional[PresencePenalty] = Optional.Absent, stop: Optional[Stop] = Optional.Absent, stream: Optional[Boolean] = Optional.Absent, temperature: Optional[Temperature] = Optional.Absent, topP: Optional[TopP] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable
CreateChatCompletionRequest model
- messages
A list of messages comprising the conversation so far. [Example Python code](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models).
- model
ID of the model to use. See the [model endpoint compatibility](/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API.
- frequencyPenalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/guides/gpt/parameter-details)
- functionCall
Controls how the model calls functions. "none" means the model will not call a function and instead generates a message. "auto" means the model can pick between generating a message or calling a function. Specifying a particular function via
`{"name": "my_function"}` forces the model to call that function. "none" is the default when no functions are present. "auto" is the default if functions are present.
- functions
A list of functions the model may generate JSON inputs for.
- logitBias
Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
- maxTokens
The maximum number of [tokens](/tokenizer) to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
- n
How many chat completion choices to generate for each input message.
- presencePenalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/guides/gpt/parameter-details)
- stop
Up to 4 sequences where the API will stop generating further tokens.
- stream
If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a
`data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
- temperature
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or
`top_p` but not both.
- topP
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or
`temperature` but not both.
- user
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
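The nucleus-sampling rule behind `topP` can be sketched in a few lines. This is an illustration of the documented semantics, not the server's implementation: keep the smallest set of highest-probability tokens whose cumulative mass reaches `p`.

```scala
// Sketch of nucleus (top-p) filtering: sort tokens by descending probability
// and keep the smallest prefix whose cumulative probability mass reaches p.
def topPFilter(probs: Map[String, Double], p: Double): Set[String] = {
  val sorted     = probs.toList.sortBy { case (_, prob) => -prob }
  val cumulative = sorted.scanLeft(0.0) { case (acc, (_, prob)) => acc + prob }.tail
  val cutoff     = cumulative.indexWhere(_ >= p)
  val keep       = if (cutoff < 0) sorted.size else cutoff + 1
  sorted.take(keep).map { case (tok, _) => tok }.toSet
}
```

With probabilities {a: 0.5, b: 0.3, c: 0.15, d: 0.05} and p = 0.75, only `a` and `b` survive; lowering p toward 0 leaves only the single most likely token, which is why small `top_p` values make output more deterministic.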
- final case class CreateChatCompletionResponse(id: String, choices: Chunk[ChoicesItem], created: Int, model: String, object: String, usage: Optional[CompletionUsage] = Optional.Absent) extends Product with Serializable
CreateChatCompletionResponse model
Represents a chat completion response returned by the model, based on the provided input.
- id
A unique identifier for the chat completion.
- choices
A list of chat completion choices. Can be more than one if
`n` is greater than 1.
- created
The Unix timestamp (in seconds) of when the chat completion was created.
- model
The model used for the chat completion.
- object
The object type, which is always
`chat.completion`.
- final case class CreateChatCompletionStreamResponse(id: String, choices: Chunk[ChoicesItem], created: Int, model: String, object: String) extends Product with Serializable
CreateChatCompletionStreamResponse model
Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.
- id
A unique identifier for the chat completion. Each chunk has the same ID.
- choices
A list of chat completion choices. Can be more than one if
`n` is greater than 1.
- created
The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.
- model
The model used to generate the completion.
- object
The object type, which is always
`chat.completion.chunk`.
- final case class CreateCompletionRequest(model: CreateCompletionRequest.Model, prompt: Optional[Prompt], bestOf: Optional[BestOf] = Optional.Absent, echo: Optional[Boolean] = Optional.Absent, frequencyPenalty: Optional[FrequencyPenalty] = Optional.Absent, logitBias: Optional[LogitBias] = Optional.Absent, logprobs: Optional[Logprobs] = Optional.Absent, maxTokens: Optional[MaxTokens] = Optional.Absent, n: Optional[CreateCompletionRequest.N] = Optional.Absent, presencePenalty: Optional[PresencePenalty] = Optional.Absent, stop: Optional[Stop] = Optional.Absent, stream: Optional[Boolean] = Optional.Absent, suffix: Optional[String] = Optional.Absent, temperature: Optional[Temperature] = Optional.Absent, topP: Optional[TopP] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable
CreateCompletionRequest model
- model
ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.
- prompt
The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays. Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document.
- bestOf
Generates
`best_of` completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed. When used with `n`, `best_of` controls the number of candidate completions and `n` specifies how many to return; `best_of` must be greater than `n`. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`.
- echo
Echo back the prompt in addition to the completion.
- frequencyPenalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/guides/gpt/parameter-details)
- logitBias
Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view=bpe) (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass
`{"50256": -100}` to prevent the <|endoftext|> token from being generated.
- logprobs
Include the log probabilities on the
`logprobs` most likely tokens, as well as the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5.
- maxTokens
The maximum number of [tokens](/tokenizer) to generate in the completion. The token count of your prompt plus
`max_tokens` cannot exceed the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
- n
How many completions to generate for each prompt. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for
`max_tokens` and `stop`.
- presencePenalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/guides/gpt/parameter-details)
- stop
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
- stream
Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a
`data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
- suffix
The suffix that comes after a completion of inserted text.
- temperature
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or
`top_p` but not both.
- topP
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or
`temperature` but not both.
- user
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
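The documented `best_of` selection rule ("the one with the highest log probability per token") can be sketched directly. This is an illustration only, with a hypothetical `Candidate` type, not the API's server-side code:

```scala
// Hypothetical candidate completion: its text plus the log probability of
// each sampled token.
final case class Candidate(text: String, tokenLogprobs: List[Double])

// Sketch of best_of selection: keep the candidate with the highest
// average log probability per token.
def pickBest(candidates: List[Candidate]): Candidate =
  candidates.maxBy(c => c.tokenLogprobs.sum / c.tokenLogprobs.size)
```

Since a fluent completion has token log probabilities close to 0 and an unlikely one has strongly negative values, averaging per token keeps the comparison fair across candidates of different lengths.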
- final case class CreateCompletionResponse(id: String, choices: Chunk[ChoicesItem], created: Int, model: String, object: String, usage: Optional[CompletionUsage] = Optional.Absent) extends Product with Serializable
CreateCompletionResponse model
Represents a completion response from the API. Note: both the streamed and non-streamed response objects share the same shape (unlike the chat endpoint).
- id
A unique identifier for the completion.
- choices
The list of completion choices the model generated for the input prompt.
- created
The Unix timestamp (in seconds) of when the completion was created.
- model
The model used for completion.
- object
The object type, which is always "text_completion".
- final case class CreateEditRequest(instruction: String, model: CreateEditRequest.Model, input: Optional[String] = Optional.Absent, n: Optional[CreateEditRequest.N] = Optional.Absent, temperature: Optional[Temperature] = Optional.Absent, topP: Optional[TopP] = Optional.Absent) extends Product with Serializable
CreateEditRequest model
- instruction
The instruction that tells the model how to edit the prompt.
- model
ID of the model to use. You can use the
`text-davinci-edit-001` or `code-davinci-edit-001` model with this endpoint.
- input
The input text to use as a starting point for the edit.
- n
How many edits to generate for the input and instruction.
- temperature
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or
`top_p` but not both.
- topP
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or
`temperature` but not both.
- final case class CreateEditResponse(choices: Chunk[ChoicesItem], object: String, created: Int, usage: CompletionUsage) extends Product with Serializable
CreateEditResponse model
- choices
A list of edit choices. Can be more than one if
`n` is greater than 1.
- object
The object type, which is always
`edit`.
- created
The Unix timestamp (in seconds) of when the edit was created.
- final case class CreateEmbeddingRequest(input: Input, model: CreateEmbeddingRequest.Model, user: Optional[String] = Optional.Absent) extends Product with Serializable
CreateEmbeddingRequest model
- input
Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for
`text-embedding-ada-002`) and cannot be an empty string. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
- model
ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.
- user
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
- final case class CreateEmbeddingResponse(data: Chunk[Embedding], model: String, object: String, usage: Usage) extends Product with Serializable
CreateEmbeddingResponse model
- data
The list of embeddings generated by the model.
- model
The name of the model used to generate the embedding.
- object
The object type, which is always "embedding".
- usage
The usage information for the request.
- final case class CreateFileRequest(file: File, purpose: String) extends Product with Serializable
CreateFileRequest model
- file
The file object (not file name) to be uploaded. If the
`purpose` is set to "fine-tune", the file will be used for fine-tuning.
- purpose
The intended purpose of the uploaded file. Use "fine-tune" for [fine-tuning](/docs/api-reference/fine-tuning). This allows us to validate that the format of the uploaded file is correct for fine-tuning.
- final case class CreateFineTuneRequest(trainingFile: String, batchSize: Optional[Int] = Optional.Absent, classificationBetas: Optional[Chunk[Double]] = Optional.Absent, classificationNClasses: Optional[Int] = Optional.Absent, classificationPositiveClass: Optional[String] = Optional.Absent, computeClassificationMetrics: Optional[Boolean] = Optional.Absent, hyperparameters: Optional[Hyperparameters] = Optional.Absent, learningRateMultiplier: Optional[Double] = Optional.Absent, model: Optional[CreateFineTuneRequest.Model] = Optional.Absent, promptLossWeight: Optional[Double] = Optional.Absent, suffix: Optional[Suffix] = Optional.Absent, validationFile: Optional[String] = Optional.Absent) extends Product with Serializable
CreateFineTuneRequest model
- trainingFile
The ID of an uploaded file that contains training data. See [upload file](/docs/api-reference/files/upload) for how to upload a file. Your dataset must be formatted as a JSONL file, where each training example is a JSON object with the keys "prompt" and "completion". Additionally, you must upload your file with the purpose
`fine-tune`. See the [fine-tuning guide](/docs/guides/legacy-fine-tuning/creating-training-data) for more details.
- batchSize
The batch size to use for training. The batch size is the number of training examples used to train a single forward and backward pass. By default, the batch size will be dynamically configured to be ~0.2% of the number of examples in the training set, capped at 256 - in general, we've found that larger batch sizes tend to work better for larger datasets.
- classificationBetas
If this is provided, we calculate F-beta scores at the specified beta values. The F-beta score is a generalization of F-1 score. This is only used for binary classification. With a beta of 1 (i.e. the F-1 score), precision and recall are given the same weight. A larger beta score puts more weight on recall and less on precision. A smaller beta score puts more weight on precision and less on recall.
- classificationNClasses
The number of classes in a classification task. This parameter is required for multiclass classification.
- classificationPositiveClass
The positive class in binary classification. This parameter is needed to generate precision, recall, and F1 metrics when doing binary classification.
- computeClassificationMetrics
If set, we calculate classification-specific metrics such as accuracy and F-1 score using the validation set at the end of every epoch. These metrics can be viewed in the [results file](/docs/guides/legacy-fine-tuning/analyzing-your-fine-tuned-model). In order to compute classification metrics, you must provide a
`validation_file`. Additionally, you must specify `classification_n_classes` for multiclass classification or `classification_positive_class` for binary classification.
- hyperparameters
The hyperparameters used for the fine-tuning job.
- learningRateMultiplier
The learning rate multiplier to use for training. The fine-tuning learning rate is the original learning rate used for pretraining multiplied by this value. By default, the learning rate multiplier is 0.05, 0.1, or 0.2 depending on the final `batch_size` (larger learning rates tend to perform better with larger batch sizes). We recommend experimenting with values in the range 0.02 to 0.2 to see what produces the best results.
- model
The name of the base model to fine-tune. You can select one of "ada", "babbage", "curie", "davinci", or a fine-tuned model created after 2022-04-21 and before 2023-08-22. To learn more about these models, see the [Models](/docs/models) documentation.
- promptLossWeight
The weight to use for loss on the prompt tokens. This controls how much the model tries to learn to generate the prompt (as compared to the completion which always has a weight of 1.0), and can add a stabilizing effect to training when completions are short. If prompts are extremely long (relative to completions), it may make sense to reduce this weight so as to avoid over-prioritizing learning the prompt.
- suffix
A string of up to 40 characters that will be added to your fine-tuned model name. For example, a
`suffix` of "custom-model-name" would produce a model name like `ada:ft-your-org:custom-model-name-2022-02-15-04-21-04`.
- validationFile
The ID of an uploaded file that contains validation data. If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the [fine-tuning results file](/docs/guides/legacy-fine-tuning/analyzing-your-fine-tuned-model). Your train and validation data should be mutually exclusive. Your dataset must be formatted as a JSONL file, where each validation example is a JSON object with the keys "prompt" and "completion". Additionally, you must upload your file with the purpose
`fine-tune`. See the [fine-tuning guide](/docs/guides/legacy-fine-tuning/creating-training-data) for more details.
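The F-beta score mentioned under `classificationBetas` follows a standard formula. This sketch illustrates the formula itself, not the library's code: beta = 1 gives the F-1 score, beta > 1 weights recall more, and beta < 1 weights precision more.

```scala
// F-beta score: a weighted harmonic mean of precision and recall.
// beta = 1 reduces to F1; larger beta emphasizes recall over precision.
def fBeta(precision: Double, recall: Double, beta: Double): Double = {
  val b2 = beta * beta
  (1 + b2) * precision * recall / (b2 * precision + recall)
}
```

For example, when precision = recall = 0.5 every beta yields 0.5, while with precision 1.0 and recall 0.5, a small beta (favoring precision) scores higher than a large one.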
- final case class CreateFineTuningJobRequest(model: CreateFineTuningJobRequest.Model, trainingFile: String, hyperparameters: Optional[Hyperparameters] = Optional.Absent, suffix: Optional[Suffix] = Optional.Absent, validationFile: Optional[String] = Optional.Absent) extends Product with Serializable
CreateFineTuningJobRequest model
- model
The name of the model to fine-tune. You can select one of the [supported models](/docs/guides/fine-tuning/what-models-can-be-fine-tuned).
- trainingFile
The ID of an uploaded file that contains training data. See [upload file](/docs/api-reference/files/upload) for how to upload a file. Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose
`fine-tune`. See the [fine-tuning guide](/docs/guides/fine-tuning) for more details.
- hyperparameters
The hyperparameters used for the fine-tuning job.
- suffix
A string of up to 18 characters that will be added to your fine-tuned model name. For example, a
`suffix` of "custom-model-name" would produce a model name like `ft:gpt-3.5-turbo:openai:custom-model-name:7p4lURel`.
- validationFile
The ID of an uploaded file that contains validation data. If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the fine-tuning results file. The same data should not be present in both train and validation files. Your dataset must be formatted as a JSONL file. You must upload your file with the purpose
`fine-tune`. See the [fine-tuning guide](/docs/guides/fine-tuning) for more details.
- final case class CreateImageEditRequest(image: File, prompt: String, mask: Optional[File] = Optional.Absent, n: Optional[N] = Optional.Absent, size: Optional[Size] = Optional.Absent, responseFormat: Optional[ResponseFormat] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable
CreateImageEditRequest model
- image
The image to edit. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask.
- prompt
A text description of the desired image(s). The maximum length is 1000 characters.
- mask
An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where
`image` should be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as `image`.
- n
The number of images to generate. Must be between 1 and 10.
- size
The size of the generated images. Must be one of
`256x256`, `512x512`, or `1024x1024`.
- responseFormat
The format in which the generated images are returned. Must be one of
`url` or `b64_json`.
- user
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
- final case class CreateImageRequest(prompt: String, n: Optional[N] = Optional.Absent, responseFormat: Optional[ResponseFormat] = Optional.Absent, size: Optional[Size] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable
CreateImageRequest model
- prompt
A text description of the desired image(s). The maximum length is 1000 characters.
- n
The number of images to generate. Must be between 1 and 10.
- responseFormat
The format in which the generated images are returned. Must be one of
`url` or `b64_json`.
- size
The size of the generated images. Must be one of
`256x256`, `512x512`, or `1024x1024`.
- user
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
- final case class CreateImageVariationRequest(image: File, n: Optional[N] = Optional.Absent, responseFormat: Optional[ResponseFormat] = Optional.Absent, size: Optional[Size] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable
CreateImageVariationRequest model
- image
The image to use as the basis for the variation(s). Must be a valid PNG file, less than 4MB, and square.
- n
The number of images to generate. Must be between 1 and 10.
- responseFormat
The format in which the generated images are returned. Must be one of
`url` or `b64_json`.
- size
The size of the generated images. Must be one of
`256x256`, `512x512`, or `1024x1024`.
- user
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
- final case class CreateModerationRequest(input: Input, model: Optional[CreateModerationRequest.Model] = Optional.Absent) extends Product with Serializable
CreateModerationRequest model
- input
The input text to classify.
- model
Two content moderation models are available: `text-moderation-stable` and `text-moderation-latest`. The default is `text-moderation-latest`, which will be automatically upgraded over time. This ensures you are always using our most accurate model. If you use `text-moderation-stable`, we will provide advance notice before updating the model. Accuracy of `text-moderation-stable` may be slightly lower than for `text-moderation-latest`.
- final case class CreateModerationResponse(id: String, model: String, results: Chunk[ResultsItem]) extends Product with Serializable
CreateModerationResponse model
Represents a policy compliance report by OpenAI's content moderation model against a given input.
- id
The unique identifier for the moderation request.
- model
The model used to generate the moderation results.
- results
A list of moderation objects.
- final case class CreateTranscriptionRequest(file: File, model: CreateTranscriptionRequest.Model, language: Optional[String] = Optional.Absent, prompt: Optional[String] = Optional.Absent, responseFormat: Optional[CreateTranscriptionRequest.ResponseFormat] = Optional.Absent, temperature: Optional[Double] = Optional.Absent) extends Product with Serializable
CreateTranscriptionRequest model
- file
The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
- model
ID of the model to use. Only
`whisper-1` is currently available.
- language
The language of the input audio. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format will improve accuracy and latency.
- prompt
An optional text to guide the model's style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
- responseFormat
The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
- temperature
The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
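As a minimal sketch of how this signature is used, a request can be built from raw audio bytes with only the two required fields; the optional fields all default to `Optional.Absent`. The import path and the name of the `whisper-1` model case are assumptions, not confirmed by this page:

```scala
import java.nio.file.{ Files, Paths }
import zio.Chunk
import zio.openai.model.{ CreateTranscriptionRequest, File }

// Sketch only: language, prompt, responseFormat, and temperature default to
// Optional.Absent, so just `file` and `model` must be supplied.
val audio: Chunk[Byte] =
  Chunk.fromArray(Files.readAllBytes(Paths.get("speech.mp3")))

val request = CreateTranscriptionRequest(
  file = File(data = audio, fileName = "speech.mp3"),
  model = CreateTranscriptionRequest.Model.`Whisper-1` // assumed case name for "whisper-1"
)
```

Note that `file` takes the `File` case class defined below (raw bytes plus a file name), not a filesystem path.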
- final case class CreateTranscriptionResponse(text: String) extends Product with Serializable
CreateTranscriptionResponse model
- final case class CreateTranslationRequest(file: File, model: CreateTranslationRequest.Model, prompt: Optional[String] = Optional.Absent, responseFormat: Optional[String] = Optional.Absent, temperature: Optional[Double] = Optional.Absent) extends Product with Serializable
CreateTranslationRequest model
- file
The audio file object (not file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
- model
ID of the model to use. Only `whisper-1` is currently available.
- prompt
An optional text to guide the model's style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
- responseFormat
The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
- temperature
The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
- final case class CreateTranslationResponse(text: String) extends Product with Serializable
CreateTranslationResponse model
- final case class DeleteFileResponse(id: String, object: String, deleted: Boolean) extends Product with Serializable
DeleteFileResponse model
- final case class DeleteModelResponse(id: String, deleted: Boolean, object: String) extends Product with Serializable
DeleteModelResponse model
- final case class Embedding(index: Int, embedding: Chunk[Double], object: String) extends Product with Serializable
Embedding model
Represents an embedding vector returned by the embedding endpoint.
- index
The index of the embedding in the list of embeddings.
- embedding
The embedding vector, which is a list of floats. The length of the vector depends on the model as listed in the [embedding guide](/docs/guides/embeddings).
- object
The object type, which is always "embedding".
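Since `embedding` is a `Chunk[Double]` (and `Chunk` is a `Seq`), comparing two embeddings by cosine similarity is a common pattern. The helper below is a hypothetical sketch, not part of this API:

```scala
// Hypothetical helper: cosine similarity between two embedding vectors.
// Accepts any Seq[Double], so an Embedding's `embedding: Chunk[Double]`
// field can be passed in directly.
def cosineSimilarity(a: Seq[Double], b: Seq[Double]): Double = {
  require(a.length == b.length, "embedding vectors must have the same length")
  val dot   = a.iterator.zip(b.iterator).map { case (x, y) => x * y }.sum
  val normA = math.sqrt(a.map(x => x * x).sum)
  val normB = math.sqrt(b.map(x => x * x).sum)
  dot / (normA * normB)
}
```

A value of 1.0 means the vectors point in the same direction; values near 0 indicate unrelated inputs.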
- final case class Error(code: Optional[String], message: String, param: Optional[String], type: String) extends Product with Serializable
Error model
- final case class ErrorResponse(error: Error) extends Product with Serializable
ErrorResponse model
- final case class File(data: Chunk[Byte], fileName: String) extends Product with Serializable
- final case class FineTune(id: String, createdAt: Int, events: Optional[Chunk[FineTuneEvent]] = Optional.Absent, fineTunedModel: Optional[String], hyperparams: Hyperparams, model: String, object: String, organizationId: String, resultFiles: Chunk[OpenAIFile], status: String, trainingFiles: Chunk[OpenAIFile], updatedAt: Int, validationFiles: Chunk[OpenAIFile]) extends Product with Serializable
FineTune model
The `FineTune` object represents a legacy fine-tune job that has been created through the API.
- id
The object identifier, which can be referenced in the API endpoints.
- createdAt
The Unix timestamp (in seconds) for when the fine-tuning job was created.
- events
The list of events that have been observed in the lifecycle of the FineTune job.
- fineTunedModel
The name of the fine-tuned model that is being created.
- hyperparams
The hyperparameters used for the fine-tuning job. See the [fine-tuning guide](/docs/guides/legacy-fine-tuning/hyperparameters) for more details.
- model
The base model that is being fine-tuned.
- object
The object type, which is always "fine-tune".
- organizationId
The organization that owns the fine-tuning job.
- resultFiles
The compiled results files for the fine-tuning job.
- status
The current status of the fine-tuning job, which can be either
`created`, `running`, `succeeded`, `failed`, or `cancelled`.
- trainingFiles
The list of files used for training.
- updatedAt
The Unix timestamp (in seconds) for when the fine-tuning job was last updated.
- validationFiles
The list of files used for validation.
- final case class FineTuneEvent(createdAt: Int, level: String, message: String, object: String) extends Product with Serializable
FineTuneEvent model
Fine-tune event object
- final case class FineTuningJob(id: String, createdAt: Int, error: Optional[FineTuningJob.Error], fineTunedModel: Optional[String], finishedAt: Optional[Int], hyperparameters: Hyperparameters, model: String, object: String, organizationId: String, resultFiles: Chunk[String], status: String, trainedTokens: Optional[Int], trainingFile: String, validationFile: Optional[String]) extends Product with Serializable
FineTuningJob model
The `fine_tuning.job` object represents a fine-tuning job that has been created through the API.
- id
The object identifier, which can be referenced in the API endpoints.
- createdAt
The Unix timestamp (in seconds) for when the fine-tuning job was created.
- error
For fine-tuning jobs that have `failed`, this will contain more information on the cause of the failure.
- fineTunedModel
The name of the fine-tuned model that is being created. The value will be null if the fine-tuning job is still running.
- finishedAt
The Unix timestamp (in seconds) for when the fine-tuning job was finished. The value will be null if the fine-tuning job is still running.
- hyperparameters
The hyperparameters used for the fine-tuning job. See the [fine-tuning guide](/docs/guides/fine-tuning) for more details.
- model
The base model that is being fine-tuned.
- object
The object type, which is always "fine_tuning.job".
- organizationId
The organization that owns the fine-tuning job.
- resultFiles
The compiled results file ID(s) for the fine-tuning job. You can retrieve the results with the [Files API](/docs/api-reference/files/retrieve-contents).
- status
The current status of the fine-tuning job, which can be either
`validating_files`, `queued`, `running`, `succeeded`, `failed`, or `cancelled`.
- trainedTokens
The total number of billable tokens processed by this fine-tuning job. The value will be null if the fine-tuning job is still running.
- trainingFile
The file ID used for training. You can retrieve the training data with the [Files API](/docs/api-reference/files/retrieve-contents).
- validationFile
The file ID used for validation. You can retrieve the validation results with the [Files API](/docs/api-reference/files/retrieve-contents).
- final case class FineTuningJobEvent(id: String, createdAt: Int, level: Level, message: String, object: String) extends Product with Serializable
FineTuningJobEvent model
Fine-tuning job event object
- sealed trait FinishReason extends AnyRef
finish_reason model
The reason the model stopped generating tokens. This will be
`stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, or `function_call` if the model called a function.
- type FrequencyPenalty = model.FrequencyPenalty.Type
frequency_penalty model
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
[See more information about frequency and presence penalties.](/docs/guides/gpt/parameter-details)
- final case class Image(b64Json: Optional[String] = Optional.Absent, url: Optional[String] = Optional.Absent) extends Product with Serializable
Image model
Represents the URL or the content of an image generated by the OpenAI API.
- b64Json
The base64-encoded JSON of the generated image, if `response_format` is `b64_json`.
- url
The URL of the generated image, if `response_format` is `url` (default).
- final case class ImagesResponse(created: Int, data: Chunk[Image]) extends Product with Serializable
ImagesResponse model
- final case class ListFilesResponse(data: Chunk[OpenAIFile], object: String) extends Product with Serializable
ListFilesResponse model
- final case class ListFineTuneEventsResponse(data: Chunk[FineTuneEvent], object: String) extends Product with Serializable
ListFineTuneEventsResponse model
- final case class ListFineTunesResponse(data: Chunk[FineTune], object: String) extends Product with Serializable
ListFineTunesResponse model
- final case class ListFineTuningJobEventsResponse(data: Chunk[FineTuningJobEvent], object: String) extends Product with Serializable
ListFineTuningJobEventsResponse model
- final case class ListModelsResponse(object: String, data: Chunk[Model]) extends Product with Serializable
ListModelsResponse model
- final case class ListPaginatedFineTuningJobsResponse(data: Chunk[FineTuningJob], hasMore: Boolean, object: String) extends Product with Serializable
ListPaginatedFineTuningJobsResponse model
- final case class Model(id: String, created: Int, object: String, ownedBy: String) extends Product with Serializable
Model model
Describes an OpenAI model offering that can be used with the API.
- id
The model identifier, which can be referenced in the API endpoints.
- created
The Unix timestamp (in seconds) when the model was created.
- object
The object type, which is always "model".
- ownedBy
The organization that owns the model.
- sealed trait Models extends AnyRef
models model
- type N = model.N.Type
n model
The number of images to generate. Must be between 1 and 10.
- sealed trait OpenAIFailure extends AnyRef
- final case class OpenAIFile(id: String, bytes: Int, createdAt: Int, filename: String, object: String, purpose: String, status: Optional[String] = Optional.Absent, statusDetails: Optional[String] = Optional.Absent) extends Product with Serializable
OpenAIFile model
The `File` object represents a document that has been uploaded to OpenAI.
- id
The file identifier, which can be referenced in the API endpoints.
- bytes
The size of the file in bytes.
- createdAt
The Unix timestamp (in seconds) for when the file was created.
- filename
The name of the file.
- object
The object type, which is always "file".
- purpose
The intended purpose of the file. Currently, only "fine-tune" is supported.
- status
The current status of the file, which can be either
`uploaded`, `processed`, `pending`, `error`, `deleting`, or `deleted`.
- statusDetails
Additional details about the status of the file. If the file is in the `error` state, this will include a message describing the error.
- type PresencePenalty = model.PresencePenalty.Type
presence_penalty model
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
[See more information about frequency and presence penalties.](/docs/guides/gpt/parameter-details)
- sealed trait ResponseFormat extends AnyRef
response_format model
The format in which the generated images are returned. Must be one of
`url` or `b64_json`.
- sealed trait Role extends AnyRef
role model
The role of the author of this message.
- sealed trait Size extends AnyRef
size model
The size of the generated images. Must be one of
`256x256`, `512x512`, or `1024x1024`.
- type Temperature = model.Temperature.Type
temperature model
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or
`top_p` but not both.
- type TopP = model.TopP.Type
top_p model
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or
`temperature` but not both.
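These sampling parameters are newtypes over `Double` (their companion objects extend `Subtype[Double]`, as listed under Value Members), so as a sketch they are constructed through the companion rather than passed as raw doubles. Whether construction validates the documented ranges is an assumption about the `Subtype` machinery:

```scala
import zio.openai.model.{ FrequencyPenalty, Temperature, TopP }

// Sketch: wrap plain doubles in the newtype companions.
val temperature: Temperature           = Temperature(0.2)
val topP: TopP                         = TopP(0.9)
val frequencyPenalty: FrequencyPenalty = FrequencyPenalty(0.5)
```

The same pattern applies to the other `Subtype`-based parameters on this page, such as `PresencePenalty` and `N`.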
Value Members
- object CaseType1 extends Subtype[Int]
- object ChatCompletionFunctionCallOption extends Serializable
- object ChatCompletionFunctionParameters extends Serializable
- object ChatCompletionFunctions extends Serializable
- object ChatCompletionRequestMessage extends Serializable
- object ChatCompletionResponseMessage extends Serializable
- object ChatCompletionStreamResponseDelta extends Serializable
- object CompletionUsage extends Serializable
- object CreateChatCompletionFunctionResponse extends Serializable
- object CreateChatCompletionRequest extends Serializable
- object CreateChatCompletionResponse extends Serializable
- object CreateChatCompletionStreamResponse extends Serializable
- object CreateCompletionRequest extends Serializable
- object CreateCompletionResponse extends Serializable
- object CreateEditRequest extends Serializable
- object CreateEditResponse extends Serializable
- object CreateEmbeddingRequest extends Serializable
- object CreateEmbeddingResponse extends Serializable
- object CreateFileRequest extends Serializable
- object CreateFineTuneRequest extends Serializable
- object CreateFineTuningJobRequest extends Serializable
- object CreateImageEditRequest extends Serializable
- object CreateImageRequest extends Serializable
- object CreateImageVariationRequest extends Serializable
- object CreateModerationRequest extends Serializable
- object CreateModerationResponse extends Serializable
- object CreateTranscriptionRequest extends Serializable
- object CreateTranscriptionResponse extends Serializable
- object CreateTranslationRequest extends Serializable
- object CreateTranslationResponse extends Serializable
- object DeleteFileResponse extends Serializable
- object DeleteModelResponse extends Serializable
- object Embedding extends Serializable
- object Error extends Serializable
- object ErrorResponse extends Serializable
- object File extends Serializable
- object FineTune extends Serializable
- object FineTuneEvent extends Serializable
- object FineTuningJob extends Serializable
- object FineTuningJobEvent extends Serializable
- object FinishReason
- object FrequencyPenalty extends Subtype[Double]
- object Image extends Serializable
- object ImagesResponse extends Serializable
- object ListFilesResponse extends Serializable
- object ListFineTuneEventsResponse extends Serializable
- object ListFineTunesResponse extends Serializable
- object ListFineTuningJobEventsResponse extends Serializable
- object ListModelsResponse extends Serializable
- object ListPaginatedFineTuningJobsResponse extends Serializable
- object Model extends Serializable
- object Models
- object N extends Subtype[Int]
- object OpenAIFailure
- object OpenAIFile extends Serializable
- object PresencePenalty extends Subtype[Double]
- object ResponseFormat
- object Role
- object Size
- object Temperature extends Subtype[Double]
- object TopP extends Subtype[Double]