object CreateChatCompletionRequest extends Serializable
Type Members
- sealed trait FunctionCall extends AnyRef
function_call model
Deprecated in favor of `tool_choice`. Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"name": "my_function"}` forces the model to call that function. `none` is the default when no functions are present; `auto` is the default if functions are present.
- final case class LogitBias(values: Map[String, Json]) extends DynamicObject[LogitBias] with Product with Serializable
logit_bias model
Modify the likelihood of specified tokens appearing in the completion.
Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
- values
The dynamic list of key-value pairs of the object
- sealed trait Model extends AnyRef
model model
ID of the model to use. See the [model endpoint compatibility](/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API.
- type N = CreateChatCompletionRequest.N.Type
n model
How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs.
- final case class ResponseFormat(type: Optional[Type] = Optional.Absent) extends Product with Serializable
response_format model
An object specifying the format that the model must output. Compatible with [GPT-4 Turbo](/docs/models/gpt-4-and-gpt-4-turbo) and all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.
Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
**Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
- type
Must be one of `text` or `json_object`.
- type Seed = CreateChatCompletionRequest.Seed.Type
seed model
This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result. Determinism is not guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend.
- sealed trait Stop extends AnyRef
stop model
Up to 4 sequences where the API will stop generating further tokens.
- type TopLogprobs = CreateChatCompletionRequest.TopLogprobs.Type
top_logprobs model
An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
`logprobs` must be set to `true` if this parameter is used.
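The `logit_bias` semantics described above (a bias in [-100, 100] added to a token's logit before sampling) can be sketched in plain Scala. This is an illustrative stand-in, not code from this library: the `applyBias` helper and the token IDs and logit values are made up for the example.

```scala
// Illustrative only: a minimal model of how logit_bias affects sampling.
// Bias values are clamped to the documented [-100, 100] range and added
// to the raw logits before the most likely token is selected.
object LogitBiasSketch {
  def applyBias(logits: Map[Int, Double], bias: Map[Int, Double]): Map[Int, Double] =
    logits.map { case (tokenId, logit) =>
      val b = bias.getOrElse(tokenId, 0.0).max(-100.0).min(100.0)
      tokenId -> (logit + b) // bias is added prior to sampling
    }

  def main(args: Array[String]): Unit = {
    val logits = Map(1 -> 0.5, 2 -> 1.2, 3 -> 0.9)
    // A -100 bias effectively bans token 2; +5 strongly favors token 3.
    val biased = applyBias(logits, Map(2 -> -100.0, 3 -> 5.0))
    println(biased.maxBy(_._2)._1) // token 3 is selected after biasing
  }
}
```

A greedy `maxBy` stands in for the model's sampler here; the real model samples from a distribution, so a large positive bias makes a token near-certain rather than guaranteed.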
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- implicit val schema: Schema[CreateChatCompletionRequest]
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- AnyRef → Any
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- object FunctionCall
- object LogitBias extends Serializable
- object Model
- object N extends Subtype[Int]
- object ResponseFormat extends Serializable
- object Seed extends Subtype[Int]
- object Stop
- object TopLogprobs extends Subtype[Int]
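The `N`, `Seed`, and `TopLogprobs` members above are `Subtype[Int]` newtypes. As a rough illustration of the bounded-integer idea behind them (not this library's actual `Subtype` machinery, which comes from its prelude/schema dependencies), a plain-Scala smart constructor might look like:

```scala
// Illustrative only: a plain-Scala stand-in for a bounded Int newtype
// such as TopLogprobs, which the docs constrain to the range 0..5.
object TopLogprobsSketch {
  final case class TopLogprobs(value: Int)

  // Smart constructor enforcing the documented 0..5 range.
  def make(n: Int): Either[String, TopLogprobs] =
    if (n >= 0 && n <= 5) Right(TopLogprobs(n))
    else Left(s"top_logprobs must be between 0 and 5, got $n")

  def main(args: Array[String]): Unit = {
    println(make(3)) // valid: wrapped in Right
    println(make(9)) // out of range: rejected with Left
  }
}
```

Wrapping raw `Int`s this way keeps invalid values (a negative `n`, a `top_logprobs` of 9) out of a request at construction time rather than at the API boundary.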