package model


Type Members

  1. type CaseType1 = model.CaseType1.Type

    CaseType1 model

  2. final case class ChatCompletionFunctionCallOption(name: String) extends Product with Serializable

    ChatCompletionFunctionCallOption model


    name

    The name of the function to call.

  3. final case class ChatCompletionFunctionParameters(values: Map[String, Json]) extends DynamicObject[ChatCompletionFunctionParameters] with Product with Serializable

    ChatCompletionFunctionParameters model


    The parameters the function accepts, described as a JSON Schema object. See the [guide](/docs/guides/gpt/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.

    To describe a function that accepts no parameters, provide the value {"type": "object", "properties": {}}.

    values

    The dynamic key-value pairs of the object.
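    To illustrate, a sketch of constructing these parameters. This assumes `Json` is zio-json's `Json` AST, which the listing does not confirm; the field contents follow the "no parameters" shape quoted above plus a hypothetical single-parameter schema:

    ```scala
    import zio.json.ast.Json

    // A function that accepts no parameters, per the doc above:
    // {"type": "object", "properties": {}}
    val noParams = ChatCompletionFunctionParameters(
      Map(
        "type"       -> Json.Str("object"),
        "properties" -> Json.Obj()
      )
    )

    // A hypothetical schema with one required string parameter:
    val weatherParams = ChatCompletionFunctionParameters(
      Map(
        "type" -> Json.Str("object"),
        "properties" -> Json.Obj(
          "location" -> Json.Obj(
            "type"        -> Json.Str("string"),
            "description" -> Json.Str("The city to look up, e.g. Paris")
          )
        ),
        "required" -> Json.Arr(Json.Str("location"))
      )
    )
    ```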

  4. final case class ChatCompletionFunctions(description: Optional[String] = Optional.Absent, name: String, parameters: ChatCompletionFunctionParameters) extends Product with Serializable

    ChatCompletionFunctions model


    description

    A description of what the function does, used by the model to choose when and how to call the function.

    name

    The name of the function to be called. Must contain only a-z, A-Z, 0-9, underscores, and dashes, with a maximum length of 64 characters.

  5. final case class ChatCompletionRequestMessage(content: Optional[String], functionCall: Optional[FunctionCall] = Optional.Absent, name: Optional[String] = Optional.Absent, role: Role) extends Product with Serializable

    ChatCompletionRequestMessage model


    content

    The contents of the message. content is required for all messages, and may be null for assistant messages with function calls.

    functionCall

    The name and arguments of a function that should be called, as generated by the model.

    name

    The name of the author of this message. name is required if role is function, and it should be the name of the function whose response is in the content. May contain a-z, A-Z, 0-9, and underscores, with a maximum length of 64 characters.

    role

    The role of the messages author. One of system, user, assistant, or function.
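    As a sketch, building a two-message conversation. `Role.System`, `Role.User`, and `Optional.Present` are assumed counterparts of the names shown in this listing, not confirmed by it:

    ```scala
    // System prompt followed by a user turn; functionCall and name default
    // to Optional.Absent.
    val system = ChatCompletionRequestMessage(
      content = Optional.Present("You are a helpful assistant."),
      role    = Role.System
    )

    val user = ChatCompletionRequestMessage(
      content = Optional.Present("What is the capital of France?"),
      role    = Role.User
    )
    ```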

  6. final case class ChatCompletionResponseMessage(content: Optional[String], functionCall: Optional[FunctionCall] = Optional.Absent, role: Role) extends Product with Serializable

    ChatCompletionResponseMessage model


    A chat completion message generated by the model.

    content

    The contents of the message.

    functionCall

    The name and arguments of a function that should be called, as generated by the model.

    role

    The role of the author of this message.

  7. final case class ChatCompletionStreamResponseDelta(content: Optional[String] = Optional.Absent, functionCall: Optional[FunctionCall] = Optional.Absent, role: Optional[Role] = Optional.Absent) extends Product with Serializable

    ChatCompletionStreamResponseDelta model


    A chat completion delta generated by streamed model responses.

    content

    The contents of the chunk message.

    functionCall

    The name and arguments of a function that should be called, as generated by the model.

    role

    The role of the author of this message.
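    Since each streamed chunk carries only a delta, clients typically fold the chunks back into a complete message. A minimal sketch; the `toOption` accessor on `Optional` is an assumption:

    ```scala
    // Concatenate the content fragments of a finished stream into one string.
    // `content.toOption` is assumed to convert Optional[String] to Option[String].
    def accumulate(deltas: List[ChatCompletionStreamResponseDelta]): String =
      deltas.flatMap(_.content.toOption).mkString
    ```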

  8. final case class CompletionUsage(completionTokens: Int, promptTokens: Int, totalTokens: Int) extends Product with Serializable

    CompletionUsage model


    Usage statistics for the completion request.

    completionTokens

    Number of tokens in the generated completion.

    promptTokens

    Number of tokens in the prompt.

    totalTokens

    Total number of tokens used in the request (prompt + completion).
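    The three counters are related by the invariant stated above, e.g.:

    ```scala
    // Hypothetical counts; per the field docs,
    // totalTokens = promptTokens + completionTokens.
    val usage = CompletionUsage(completionTokens = 9, promptTokens = 12, totalTokens = 21)
    assert(usage.totalTokens == usage.promptTokens + usage.completionTokens)
    ```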

  9. final case class CreateChatCompletionFunctionResponse(id: String, choices: Chunk[ChoicesItem], created: Int, model: String, `object`: String, usage: Optional[CompletionUsage] = Optional.Absent) extends Product with Serializable

    CreateChatCompletionFunctionResponse model


    Represents a chat completion response returned by the model, based on the provided input.

    id

    A unique identifier for the chat completion.

    choices

    A list of chat completion choices. Can be more than one if n is greater than 1.

    created

    The Unix timestamp (in seconds) of when the chat completion was created.

    model

    The model used for the chat completion.

    object

    The object type, which is always chat.completion.

  10. final case class CreateChatCompletionRequest(messages: NonEmptyChunk[ChatCompletionRequestMessage], model: CreateChatCompletionRequest.Model, frequencyPenalty: Optional[FrequencyPenalty] = Optional.Absent, functionCall: Optional[FunctionCall] = Optional.Absent, functions: Optional[NonEmptyChunk[ChatCompletionFunctions]] = Optional.Absent, logitBias: Optional[LogitBias] = Optional.Absent, maxTokens: Optional[Int] = Optional.Absent, n: Optional[CreateChatCompletionRequest.N] = Optional.Absent, presencePenalty: Optional[PresencePenalty] = Optional.Absent, stop: Optional[Stop] = Optional.Absent, stream: Optional[Boolean] = Optional.Absent, temperature: Optional[Temperature] = Optional.Absent, topP: Optional[TopP] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable

    CreateChatCompletionRequest model


    messages

    A list of messages comprising the conversation so far. [Example Python code](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models).

    model

    ID of the model to use. See the [model endpoint compatibility](/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API.

    frequencyPenalty

    Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/guides/gpt/parameter-details)

    functionCall

    Controls how the model calls functions. "none" means the model will not call a function and instead generates a message. "auto" means the model can pick between generating a message or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. "none" is the default when no functions are present. "auto" is the default if functions are present.

    functions

    A list of functions the model may generate JSON inputs for.

    logitBias

    Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.

    maxTokens

    The maximum number of [tokens](/tokenizer) to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.

    n

    How many chat completion choices to generate for each input message.

    presencePenalty

    Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/guides/gpt/parameter-details)

    stop

    Up to 4 sequences where the API will stop generating further tokens.

    stream

    If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a data: [DONE] message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).

    temperature

    What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

    topP

    An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.

    user

    A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
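    A sketch of the minimal request: only `messages` and `model` are required, and every other field defaults to `Optional.Absent`. The `Model.Custom` constructor, `Role.User`, `Optional.Present`, and the model id are assumptions not confirmed by this listing:

    ```scala
    import zio.NonEmptyChunk

    // One user message plus a model id; all optional tuning parameters
    // (temperature, topP, stream, ...) are left at Optional.Absent.
    val request = CreateChatCompletionRequest(
      messages = NonEmptyChunk(
        ChatCompletionRequestMessage(
          content = Optional.Present("Summarize this paragraph in one sentence."),
          role    = Role.User
        )
      ),
      model = CreateChatCompletionRequest.Model.Custom("gpt-3.5-turbo")
    )
    ```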

  11. final case class CreateChatCompletionResponse(id: String, choices: Chunk[ChoicesItem], created: Int, model: String, `object`: String, usage: Optional[CompletionUsage] = Optional.Absent) extends Product with Serializable

    CreateChatCompletionResponse model


    Represents a chat completion response returned by the model, based on the provided input.

    id

    A unique identifier for the chat completion.

    choices

    A list of chat completion choices. Can be more than one if n is greater than 1.

    created

    The Unix timestamp (in seconds) of when the chat completion was created.

    model

    The model used for the chat completion.

    object

    The object type, which is always chat.completion.

  12. final case class CreateChatCompletionStreamResponse(id: String, choices: Chunk[ChoicesItem], created: Int, model: String, `object`: String) extends Product with Serializable

    CreateChatCompletionStreamResponse model


    Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.

    id

    A unique identifier for the chat completion. Each chunk has the same ID.

    choices

    A list of chat completion choices. Can be more than one if n is greater than 1.

    created

    The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.

    model

    The model used to generate the completion.

    object

    The object type, which is always chat.completion.chunk.

  13. final case class CreateCompletionRequest(model: CreateCompletionRequest.Model, prompt: Optional[Prompt], bestOf: Optional[BestOf] = Optional.Absent, echo: Optional[Boolean] = Optional.Absent, frequencyPenalty: Optional[FrequencyPenalty] = Optional.Absent, logitBias: Optional[LogitBias] = Optional.Absent, logprobs: Optional[Logprobs] = Optional.Absent, maxTokens: Optional[MaxTokens] = Optional.Absent, n: Optional[CreateCompletionRequest.N] = Optional.Absent, presencePenalty: Optional[PresencePenalty] = Optional.Absent, stop: Optional[Stop] = Optional.Absent, stream: Optional[Boolean] = Optional.Absent, suffix: Optional[String] = Optional.Absent, temperature: Optional[Temperature] = Optional.Absent, topP: Optional[TopP] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable

    CreateCompletionRequest model


    model

    ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.

    prompt

    The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays. Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document.

    bestOf

    Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return – best_of must be greater than n. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.

    echo

    Echo back the prompt in addition to the completion.

    frequencyPenalty

    Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/guides/gpt/parameter-details)

    logitBias

    Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view=bpe) (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass {"50256": -100} to prevent the <|endoftext|> token from being generated.

    logprobs

    Include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 5.

    maxTokens

    The maximum number of [tokens](/tokenizer) to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.

    n

    How many completions to generate for each prompt. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.

    presencePenalty

    Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/guides/gpt/parameter-details)

    stop

    Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

    stream

    Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a data: [DONE] message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).

    suffix

    The suffix that comes after a completion of inserted text.

    temperature

    What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

    topP

    An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.

    user

    A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).

  14. final case class CreateCompletionResponse(id: String, choices: Chunk[ChoicesItem], created: Int, model: String, `object`: String, usage: Optional[CompletionUsage] = Optional.Absent) extends Product with Serializable

    CreateCompletionResponse model


    Represents a completion response from the API. Note: both the streamed and non-streamed response objects share the same shape (unlike the chat endpoint).

    id

    A unique identifier for the completion.

    choices

    The list of completion choices the model generated for the input prompt.

    created

    The Unix timestamp (in seconds) of when the completion was created.

    model

    The model used for completion.

    object

    The object type, which is always "text_completion".

  15. final case class CreateEditRequest(instruction: String, model: CreateEditRequest.Model, input: Optional[String] = Optional.Absent, n: Optional[CreateEditRequest.N] = Optional.Absent, temperature: Optional[Temperature] = Optional.Absent, topP: Optional[TopP] = Optional.Absent) extends Product with Serializable

    CreateEditRequest model


    instruction

    The instruction that tells the model how to edit the prompt.

    model

    ID of the model to use. You can use the text-davinci-edit-001 or code-davinci-edit-001 model with this endpoint.

    input

    The input text to use as a starting point for the edit.

    n

    How many edits to generate for the input and instruction.

    temperature

    What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

    topP

    An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.

  16. final case class CreateEditResponse(choices: Chunk[ChoicesItem], `object`: String, created: Int, usage: CompletionUsage) extends Product with Serializable

    CreateEditResponse model


    choices

    A list of edit choices. Can be more than one if n is greater than 1.

    object

    The object type, which is always edit.

    created

    The Unix timestamp (in seconds) of when the edit was created.

  17. final case class CreateEmbeddingRequest(input: Input, model: CreateEmbeddingRequest.Model, user: Optional[String] = Optional.Absent) extends Product with Serializable

    CreateEmbeddingRequest model


    input

    Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for text-embedding-ada-002) and cannot be an empty string. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.

    model

    ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.

    user

    A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).

  18. final case class CreateEmbeddingResponse(data: Chunk[Embedding], model: String, `object`: String, usage: Usage) extends Product with Serializable

    CreateEmbeddingResponse model


    data

    The list of embeddings generated by the model.

    model

    The name of the model used to generate the embedding.

    object

    The object type, which is always "embedding".

    usage

    The usage information for the request.

  19. final case class CreateFileRequest(file: File, purpose: String) extends Product with Serializable

    CreateFileRequest model


    file

    The file object (not file name) to be uploaded. If the purpose is set to "fine-tune", the file will be used for fine-tuning.

    purpose

    The intended purpose of the uploaded file. Use "fine-tune" for [fine-tuning](/docs/api-reference/fine-tuning). This allows us to validate that the format of the uploaded file is correct for fine-tuning.

  20. final case class CreateFineTuneRequest(trainingFile: String, batchSize: Optional[Int] = Optional.Absent, classificationBetas: Optional[Chunk[Double]] = Optional.Absent, classificationNClasses: Optional[Int] = Optional.Absent, classificationPositiveClass: Optional[String] = Optional.Absent, computeClassificationMetrics: Optional[Boolean] = Optional.Absent, hyperparameters: Optional[Hyperparameters] = Optional.Absent, learningRateMultiplier: Optional[Double] = Optional.Absent, model: Optional[CreateFineTuneRequest.Model] = Optional.Absent, promptLossWeight: Optional[Double] = Optional.Absent, suffix: Optional[Suffix] = Optional.Absent, validationFile: Optional[String] = Optional.Absent) extends Product with Serializable

    CreateFineTuneRequest model


    trainingFile

    The ID of an uploaded file that contains training data. See [upload file](/docs/api-reference/files/upload) for how to upload a file. Your dataset must be formatted as a JSONL file, where each training example is a JSON object with the keys "prompt" and "completion". Additionally, you must upload your file with the purpose fine-tune. See the [fine-tuning guide](/docs/guides/legacy-fine-tuning/creating-training-data) for more details.

    batchSize

    The batch size to use for training. The batch size is the number of training examples used to train a single forward and backward pass. By default, the batch size will be dynamically configured to be ~0.2% of the number of examples in the training set, capped at 256 - in general, we've found that larger batch sizes tend to work better for larger datasets.

    classificationBetas

    If this is provided, we calculate F-beta scores at the specified beta values. The F-beta score is a generalization of F-1 score. This is only used for binary classification. With a beta of 1 (i.e. the F-1 score), precision and recall are given the same weight. A larger beta score puts more weight on recall and less on precision. A smaller beta score puts more weight on precision and less on recall.

    classificationNClasses

    The number of classes in a classification task. This parameter is required for multiclass classification.

    classificationPositiveClass

    The positive class in binary classification. This parameter is needed to generate precision, recall, and F1 metrics when doing binary classification.

    computeClassificationMetrics

    If set, we calculate classification-specific metrics such as accuracy and F-1 score using the validation set at the end of every epoch. These metrics can be viewed in the [results file](/docs/guides/legacy-fine-tuning/analyzing-your-fine-tuned-model). In order to compute classification metrics, you must provide a validation_file. Additionally, you must specify classification_n_classes for multiclass classification or classification_positive_class for binary classification.

    hyperparameters

    The hyperparameters used for the fine-tuning job.

    learningRateMultiplier

    The learning rate multiplier to use for training. The fine-tuning learning rate is the original learning rate used for pretraining multiplied by this value. By default, the learning rate multiplier is 0.05, 0.1, or 0.2 depending on the final batch_size (larger learning rates tend to perform better with larger batch sizes). We recommend experimenting with values in the range 0.02 to 0.2 to see what produces the best results.

    model

    The name of the base model to fine-tune. You can select one of "ada", "babbage", "curie", "davinci", or a fine-tuned model created after 2022-04-21 and before 2023-08-22. To learn more about these models, see the [Models](/docs/models) documentation.

    promptLossWeight

    The weight to use for loss on the prompt tokens. This controls how much the model tries to learn to generate the prompt (as compared to the completion which always has a weight of 1.0), and can add a stabilizing effect to training when completions are short. If prompts are extremely long (relative to completions), it may make sense to reduce this weight so as to avoid over-prioritizing learning the prompt.

    suffix

    A string of up to 40 characters that will be added to your fine-tuned model name. For example, a suffix of "custom-model-name" would produce a model name like ada:ft-your-org:custom-model-name-2022-02-15-04-21-04.

    validationFile

    The ID of an uploaded file that contains validation data. If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the [fine-tuning results file](/docs/guides/legacy-fine-tuning/analyzing-your-fine-tuned-model). Your train and validation data should be mutually exclusive. Your dataset must be formatted as a JSONL file, where each validation example is a JSON object with the keys "prompt" and "completion". Additionally, you must upload your file with the purpose fine-tune. See the [fine-tuning guide](/docs/guides/legacy-fine-tuning/creating-training-data) for more details.

  21. final case class CreateFineTuningJobRequest(model: CreateFineTuningJobRequest.Model, trainingFile: String, hyperparameters: Optional[Hyperparameters] = Optional.Absent, suffix: Optional[Suffix] = Optional.Absent, validationFile: Optional[String] = Optional.Absent) extends Product with Serializable

    CreateFineTuningJobRequest model


    model

    The name of the model to fine-tune. You can select one of the [supported models](/docs/guides/fine-tuning/what-models-can-be-fine-tuned).

    trainingFile

    The ID of an uploaded file that contains training data. See [upload file](/docs/api-reference/files/upload) for how to upload a file. Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose fine-tune. See the [fine-tuning guide](/docs/guides/fine-tuning) for more details.

    hyperparameters

    The hyperparameters used for the fine-tuning job.

    suffix

    A string of up to 18 characters that will be added to your fine-tuned model name. For example, a suffix of "custom-model-name" would produce a model name like ft:gpt-3.5-turbo:openai:custom-model-name:7p4lURel.

    validationFile

    The ID of an uploaded file that contains validation data. If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the fine-tuning results file. The same data should not be present in both train and validation files. Your dataset must be formatted as a JSONL file. You must upload your file with the purpose fine-tune. See the [fine-tuning guide](/docs/guides/fine-tuning) for more details.

  22. final case class CreateImageEditRequest(image: File, prompt: String, mask: Optional[File] = Optional.Absent, n: Optional[N] = Optional.Absent, size: Optional[Size] = Optional.Absent, responseFormat: Optional[ResponseFormat] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable

    CreateImageEditRequest model


    image

    The image to edit. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask.

    prompt

    A text description of the desired image(s). The maximum length is 1000 characters.

    mask

    An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where image should be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as image.

    n

    The number of images to generate. Must be between 1 and 10.

    size

    The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.

    responseFormat

    The format in which the generated images are returned. Must be one of url or b64_json.

    user

    A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).

  23. final case class CreateImageRequest(prompt: String, n: Optional[N] = Optional.Absent, responseFormat: Optional[ResponseFormat] = Optional.Absent, size: Optional[Size] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable

    CreateImageRequest model


    prompt

    A text description of the desired image(s). The maximum length is 1000 characters.

    n

    The number of images to generate. Must be between 1 and 10.

    responseFormat

    The format in which the generated images are returned. Must be one of url or b64_json.

    size

    The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.

    user

    A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).

  24. final case class CreateImageVariationRequest(image: File, n: Optional[N] = Optional.Absent, responseFormat: Optional[ResponseFormat] = Optional.Absent, size: Optional[Size] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable

    CreateImageVariationRequest model


    image

    The image to use as the basis for the variation(s). Must be a valid PNG file, less than 4MB, and square.

    n

    The number of images to generate. Must be between 1 and 10.

    responseFormat

    The format in which the generated images are returned. Must be one of url or b64_json.

    size

    The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.

    user

    A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).

  25. final case class CreateModerationRequest(input: Input, model: Optional[CreateModerationRequest.Model] = Optional.Absent) extends Product with Serializable

    CreateModerationRequest model


    input

    The input text to classify.

    model

    Two content moderations models are available: text-moderation-stable and text-moderation-latest. The default is text-moderation-latest which will be automatically upgraded over time. This ensures you are always using our most accurate model. If you use text-moderation-stable, we will provide advanced notice before updating the model. Accuracy of text-moderation-stable may be slightly lower than for text-moderation-latest.

  26. final case class CreateModerationResponse(id: String, model: String, results: Chunk[ResultsItem]) extends Product with Serializable

    CreateModerationResponse model

    Represents a policy compliance report produced by OpenAI's content moderation model for a given input.

    id

    The unique identifier for the moderation request.

    model

    The model used to generate the moderation results.

    results

    A list of moderation objects.

  27. final case class CreateTranscriptionRequest(file: File, model: CreateTranscriptionRequest.Model, language: Optional[String] = Optional.Absent, prompt: Optional[String] = Optional.Absent, responseFormat: Optional[CreateTranscriptionRequest.ResponseFormat] = Optional.Absent, temperature: Optional[Double] = Optional.Absent) extends Product with Serializable

    CreateTranscriptionRequest model

    file

    The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.

    model

    ID of the model to use. Only whisper-1 is currently available.

    language

    The language of the input audio. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format will improve accuracy and latency.

    prompt

    An optional text to guide the model's style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.

    responseFormat

    The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.

    temperature

    The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
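    Putting the fields together, a transcription request might look like the following sketch. Only the type of the model parameter appears above, so the whisper-1 case name is an assumption:

```scala
import zio.Chunk
import java.nio.file.{Files, Paths}

// Hypothetical sketch: transcribe an mp3, leaving language, prompt,
// responseFormat, and temperature at their Optional.Absent defaults.
val audio: Chunk[Byte] =
  Chunk.fromArray(Files.readAllBytes(Paths.get("meeting.mp3"))) // illustrative path

val request = CreateTranscriptionRequest(
  file = File(data = audio, fileName = "meeting.mp3"),
  model = CreateTranscriptionRequest.Model.`Whisper-1` // assumed case name
)
```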

  28. final case class CreateTranscriptionResponse(text: String) extends Product with Serializable

    CreateTranscriptionResponse model

  29. final case class CreateTranslationRequest(file: File, model: CreateTranslationRequest.Model, prompt: Optional[String] = Optional.Absent, responseFormat: Optional[String] = Optional.Absent, temperature: Optional[Double] = Optional.Absent) extends Product with Serializable

    CreateTranslationRequest model

    file

    The audio file object (not file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.

    model

    ID of the model to use. Only whisper-1 is currently available.

    prompt

    An optional text to guide the model's style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.

    responseFormat

    The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.

    temperature

    The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.

  30. final case class CreateTranslationResponse(text: String) extends Product with Serializable

    CreateTranslationResponse model

  31. final case class DeleteFileResponse(id: String, object: String, deleted: Boolean) extends Product with Serializable

    DeleteFileResponse model

  32. final case class DeleteModelResponse(id: String, deleted: Boolean, object: String) extends Product with Serializable

    DeleteModelResponse model

  33. final case class Embedding(index: Int, embedding: Chunk[Double], object: String) extends Product with Serializable

    Embedding model

    Represents an embedding vector returned by the embedding endpoint.

    index

    The index of the embedding in the list of embeddings.

    embedding

    The embedding vector, which is a list of floats. The length of the vector depends on the model, as listed in the [embedding guide](/docs/guides/embeddings).

    object

    The object type, which is always "embedding".
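    Since the vector is exposed as Chunk[Double], similarity computations can operate on it directly. An illustrative cosine-similarity helper (not part of this API):

```scala
import zio.Chunk

// Cosine similarity between two embedding vectors of equal length, e.g.
// a.embedding and b.embedding for two Embedding values from the same model.
def cosineSimilarity(a: Chunk[Double], b: Chunk[Double]): Double = {
  val dot   = a.zip(b).map { case (x, y) => x * y }.sum
  val normA = math.sqrt(a.map(x => x * x).sum)
  val normB = math.sqrt(b.map(x => x * x).sum)
  dot / (normA * normB)
}
```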

  34. final case class Error(code: Optional[String], message: String, param: Optional[String], type: String) extends Product with Serializable

    Error model

  35. final case class ErrorResponse(error: Error) extends Product with Serializable

    ErrorResponse model

  36. final case class File(data: Chunk[Byte], fileName: String) extends Product with Serializable
  37. final case class FineTune(id: String, createdAt: Int, events: Optional[Chunk[FineTuneEvent]] = Optional.Absent, fineTunedModel: Optional[String], hyperparams: Hyperparams, model: String, object: String, organizationId: String, resultFiles: Chunk[OpenAIFile], status: String, trainingFiles: Chunk[OpenAIFile], updatedAt: Int, validationFiles: Chunk[OpenAIFile]) extends Product with Serializable

    FineTune model

    The FineTune object represents a legacy fine-tune job that has been created through the API.

    id

    The object identifier, which can be referenced in the API endpoints.

    createdAt

    The Unix timestamp (in seconds) for when the fine-tuning job was created.

    events

    The list of events that have been observed in the lifecycle of the FineTune job.

    fineTunedModel

    The name of the fine-tuned model that is being created.

    hyperparams

    The hyperparameters used for the fine-tuning job. See the [fine-tuning guide](/docs/guides/legacy-fine-tuning/hyperparameters) for more details.

    model

    The base model that is being fine-tuned.

    object

    The object type, which is always "fine-tune".

    organizationId

    The organization that owns the fine-tuning job.

    resultFiles

    The compiled results files for the fine-tuning job.

    status

    The current status of the fine-tuning job, which can be either created, running, succeeded, failed, or cancelled.

    trainingFiles

    The list of files used for training.

    updatedAt

    The Unix timestamp (in seconds) for when the fine-tuning job was last updated.

    validationFiles

    The list of files used for validation.

  38. final case class FineTuneEvent(createdAt: Int, level: String, message: String, object: String) extends Product with Serializable

    FineTuneEvent model

    Fine-tune event object

  39. final case class FineTuningJob(id: String, createdAt: Int, error: Optional[FineTuningJob.Error], fineTunedModel: Optional[String], finishedAt: Optional[Int], hyperparameters: Hyperparameters, model: String, object: String, organizationId: String, resultFiles: Chunk[String], status: String, trainedTokens: Optional[Int], trainingFile: String, validationFile: Optional[String]) extends Product with Serializable

    FineTuningJob model

    The fine_tuning.job object represents a fine-tuning job that has been created through the API.

    id

    The object identifier, which can be referenced in the API endpoints.

    createdAt

    The Unix timestamp (in seconds) for when the fine-tuning job was created.

    error

    For fine-tuning jobs that have failed, this will contain more information on the cause of the failure.

    fineTunedModel

    The name of the fine-tuned model that is being created. The value will be null if the fine-tuning job is still running.

    finishedAt

    The Unix timestamp (in seconds) for when the fine-tuning job was finished. The value will be null if the fine-tuning job is still running.

    hyperparameters

    The hyperparameters used for the fine-tuning job. See the [fine-tuning guide](/docs/guides/fine-tuning) for more details.

    model

    The base model that is being fine-tuned.

    object

    The object type, which is always "fine_tuning.job".

    organizationId

    The organization that owns the fine-tuning job.

    resultFiles

    The compiled results file ID(s) for the fine-tuning job. You can retrieve the results with the [Files API](/docs/api-reference/files/retrieve-contents).

    status

    The current status of the fine-tuning job, which can be either validating_files, queued, running, succeeded, failed, or cancelled.

    trainedTokens

    The total number of billable tokens processed by this fine-tuning job. The value will be null if the fine-tuning job is still running.

    trainingFile

    The file ID used for training. You can retrieve the training data with the [Files API](/docs/api-reference/files/retrieve-contents).

    validationFile

    The file ID used for validation. You can retrieve the validation results with the [Files API](/docs/api-reference/files/retrieve-contents).
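    Because status is a plain String with the documented set of values, client code typically branches on it. A small illustrative helper:

```scala
// Sketch: summarize a job's lifecycle state. Unrecognized values fall
// through to a catch-all case in case the API adds new statuses.
def summarize(job: FineTuningJob): String =
  job.status match {
    case "validating_files" | "queued" | "running" =>
      s"Job ${job.id} is still in progress"
    case "succeeded" =>
      s"Job ${job.id} finished; fine-tuned model: ${job.fineTunedModel}"
    case "failed" | "cancelled" =>
      s"Job ${job.id} did not complete"
    case other =>
      s"Job ${job.id} reported unexpected status '$other'"
  }
```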

  40. final case class FineTuningJobEvent(id: String, createdAt: Int, level: Level, message: String, object: String) extends Product with Serializable

    FineTuningJobEvent model

    Fine-tuning job event object

  41. sealed trait FinishReason extends AnyRef

    finish_reason model

    The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, or function_call if the model called a function.

  42. type FrequencyPenalty = model.FrequencyPenalty.Type

    frequency_penalty model

    Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

    [See more information about frequency and presence penalties.](/docs/guides/gpt/parameter-details)
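    Because FrequencyPenalty is a newtype over Double (its companion extends Subtype[Double], listed under Value Members), values go through the companion object rather than being used as raw doubles. A sketch assuming the usual zio-prelude Subtype operations:

```scala
// Wrap a raw Double in the newtype and recover it again. With zio-prelude,
// apply accepts validated values and unwrap exposes the underlying Double.
val penalty: FrequencyPenalty = FrequencyPenalty(0.5)
val raw: Double               = FrequencyPenalty.unwrap(penalty)
```

    The same pattern applies to the other Subtype-based members on this page, such as N, PresencePenalty, Temperature, and TopP.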

  43. final case class Image(b64Json: Optional[String] = Optional.Absent, url: Optional[String] = Optional.Absent) extends Product with Serializable

    Image model

    Represents the URL or the base64-encoded content of an image generated by the OpenAI API.

    b64Json

    The base64-encoded JSON of the generated image, if response_format is b64_json.

    url

    The URL of the generated image, if response_format is url (default).

  44. final case class ImagesResponse(created: Int, data: Chunk[Image]) extends Product with Serializable

    ImagesResponse model

  45. final case class ListFilesResponse(data: Chunk[OpenAIFile], object: String) extends Product with Serializable

    ListFilesResponse model

  46. final case class ListFineTuneEventsResponse(data: Chunk[FineTuneEvent], object: String) extends Product with Serializable

    ListFineTuneEventsResponse model

  47. final case class ListFineTunesResponse(data: Chunk[FineTune], object: String) extends Product with Serializable

    ListFineTunesResponse model

  48. final case class ListFineTuningJobEventsResponse(data: Chunk[FineTuningJobEvent], object: String) extends Product with Serializable

    ListFineTuningJobEventsResponse model

  49. final case class ListModelsResponse(object: String, data: Chunk[Model]) extends Product with Serializable

    ListModelsResponse model

  50. final case class ListPaginatedFineTuningJobsResponse(data: Chunk[FineTuningJob], hasMore: Boolean, object: String) extends Product with Serializable

    ListPaginatedFineTuningJobsResponse model

  51. final case class Model(id: String, created: Int, object: String, ownedBy: String) extends Product with Serializable

    Model model

    Describes an OpenAI model offering that can be used with the API.

    id

    The model identifier, which can be referenced in the API endpoints.

    created

    The Unix timestamp (in seconds) when the model was created.

    object

    The object type, which is always "model".

    ownedBy

    The organization that owns the model.

  52. sealed trait Models extends AnyRef

    models model

  53. type N = model.N.Type

    n model

    The number of images to generate. Must be between 1 and 10.

  54. sealed trait OpenAIFailure extends AnyRef
  55. final case class OpenAIFile(id: String, bytes: Int, createdAt: Int, filename: String, object: String, purpose: String, status: Optional[String] = Optional.Absent, statusDetails: Optional[String] = Optional.Absent) extends Product with Serializable

    OpenAIFile model

    The File object represents a document that has been uploaded to OpenAI.

    id

    The file identifier, which can be referenced in the API endpoints.

    bytes

    The size of the file in bytes.

    createdAt

    The Unix timestamp (in seconds) for when the file was created.

    filename

    The name of the file.

    object

    The object type, which is always "file".

    purpose

    The intended purpose of the file. Currently, only "fine-tune" is supported.

    status

    The current status of the file, which can be either uploaded, processed, pending, error, deleting, or deleted.

    statusDetails

    Additional details about the status of the file. If the file is in the error state, this will include a message describing the error.

  56. type PresencePenalty = model.PresencePenalty.Type

    presence_penalty model

    Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

    [See more information about frequency and presence penalties.](/docs/guides/gpt/parameter-details)

  57. sealed trait ResponseFormat extends AnyRef

    response_format model

    The format in which the generated images are returned. Must be one of url or b64_json.

  58. sealed trait Role extends AnyRef

    role model

    The role of the author of this message.

  59. sealed trait Size extends AnyRef

    size model

    The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.

  60. type Temperature = model.Temperature.Type

    temperature model

    What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

    We generally recommend altering this or top_p but not both.

  61. type TopP = model.TopP.Type

    top_p model

    An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

    We generally recommend altering this or temperature but not both.

Value Members

  1. object CaseType1 extends Subtype[Int]
  2. object ChatCompletionFunctionCallOption extends Serializable
  3. object ChatCompletionFunctionParameters extends Serializable
  4. object ChatCompletionFunctions extends Serializable
  5. object ChatCompletionRequestMessage extends Serializable
  6. object ChatCompletionResponseMessage extends Serializable
  7. object ChatCompletionStreamResponseDelta extends Serializable
  8. object CompletionUsage extends Serializable
  9. object CreateChatCompletionFunctionResponse extends Serializable
  10. object CreateChatCompletionRequest extends Serializable
  11. object CreateChatCompletionResponse extends Serializable
  12. object CreateChatCompletionStreamResponse extends Serializable
  13. object CreateCompletionRequest extends Serializable
  14. object CreateCompletionResponse extends Serializable
  15. object CreateEditRequest extends Serializable
  16. object CreateEditResponse extends Serializable
  17. object CreateEmbeddingRequest extends Serializable
  18. object CreateEmbeddingResponse extends Serializable
  19. object CreateFileRequest extends Serializable
  20. object CreateFineTuneRequest extends Serializable
  21. object CreateFineTuningJobRequest extends Serializable
  22. object CreateImageEditRequest extends Serializable
  23. object CreateImageRequest extends Serializable
  24. object CreateImageVariationRequest extends Serializable
  25. object CreateModerationRequest extends Serializable
  26. object CreateModerationResponse extends Serializable
  27. object CreateTranscriptionRequest extends Serializable
  28. object CreateTranscriptionResponse extends Serializable
  29. object CreateTranslationRequest extends Serializable
  30. object CreateTranslationResponse extends Serializable
  31. object DeleteFileResponse extends Serializable
  32. object DeleteModelResponse extends Serializable
  33. object Embedding extends Serializable
  34. object Error extends Serializable
  35. object ErrorResponse extends Serializable
  36. object File extends Serializable
  37. object FineTune extends Serializable
  38. object FineTuneEvent extends Serializable
  39. object FineTuningJob extends Serializable
  40. object FineTuningJobEvent extends Serializable
  41. object FinishReason
  42. object FrequencyPenalty extends Subtype[Double]
  43. object Image extends Serializable
  44. object ImagesResponse extends Serializable
  45. object ListFilesResponse extends Serializable
  46. object ListFineTuneEventsResponse extends Serializable
  47. object ListFineTunesResponse extends Serializable
  48. object ListFineTuningJobEventsResponse extends Serializable
  49. object ListModelsResponse extends Serializable
  50. object ListPaginatedFineTuningJobsResponse extends Serializable
  51. object Model extends Serializable
  52. object Models
  53. object N extends Subtype[Int]
  54. object OpenAIFailure
  55. object OpenAIFile extends Serializable
  56. object PresencePenalty extends Subtype[Double]
  57. object ResponseFormat
  58. object Role
  59. object Size
  60. object Temperature extends Subtype[Double]
  61. object TopP extends Subtype[Double]

Inherited from AnyRef

Inherited from Any

Ungrouped