object CreateCompletionRequest extends Serializable
Type Members
- type BestOf = CreateCompletionRequest.BestOf.Type
best_of model

Generates `best_of` completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed.

When used with `n`, `best_of` controls the number of candidate completions and `n` specifies how many to return; `best_of` must be greater than `n`.

**Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`.
- type FrequencyPenalty = CreateCompletionRequest.FrequencyPenalty.Type
frequency_penalty model
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
[See more information about frequency and presence penalties.](/docs/api-reference/parameter-details)
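Conceptually, both penalties are subtracted from a token's logit before sampling: the frequency penalty scales with how many times the token has already appeared, while the presence penalty applies once if it has appeared at all. A minimal Scala sketch of that adjustment (the helper and its names are illustrative only, not part of this library; the model applies this server-side):

```scala
// Illustrative sketch of how frequency and presence penalties adjust a
// token's logit before sampling; not part of CreateCompletionRequest.
def penalizedLogit(
    logit: Double,            // raw logit for a candidate token
    count: Int,               // how often the token already appeared in the text
    frequencyPenalty: Double, // scales with the repetition count
    presencePenalty: Double   // applied once if the token appeared at all
): Double =
  logit -
    frequencyPenalty * count -
    (if (count > 0) presencePenalty else 0.0)
```

With positive penalties, a token seen three times is penalized more than one seen once, which is why values above 0 reduce verbatim repetition.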
- final case class LogitBias(values: Map[String, Json]) extends DynamicObject[LogitBias] with Product with Serializable
logit_bias model
Modify the likelihood of specified tokens appearing in the completion.
Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view=bpe) (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token from being generated.
- values
  The dynamic list of key-value pairs of the object
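The sampling-time effect described above can be sketched as follows (illustrative helper only, not part of this library; the model applies the bias server-side):

```scala
// Illustrative: add the configured bias to each token's logit before sampling.
// A bias near -100 effectively bans a token; near +100 effectively forces it.
def applyLogitBias(
    logits: Map[Int, Double], // token ID -> raw logit
    bias: Map[Int, Double]    // token ID -> bias in [-100, 100]
): Map[Int, Double] =
  logits.map { case (tokenId, logit) =>
    tokenId -> (logit + bias.getOrElse(tokenId, 0.0))
  }
```

For instance, `applyLogitBias(logits, Map(50256 -> -100.0))` mirrors the `{"50256": -100}` example above.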
- type Logprobs = CreateCompletionRequest.Logprobs.Type
logprobs model

Include the log probabilities on the `logprobs` most likely tokens, as well as the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response.

The maximum value for `logprobs` is 5. If you need more than this, please contact us through our [Help center](https://help.openai.com) and describe your use case.
- type MaxTokens = CreateCompletionRequest.MaxTokens.Type
max_tokens model
The maximum number of [tokens](/tokenizer) to generate in the completion.
The token count of your prompt plus `max_tokens` cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
- type N = CreateCompletionRequest.N.Type
n model
How many completions to generate for each prompt.
**Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`.
- type PresencePenalty = CreateCompletionRequest.PresencePenalty.Type
presence_penalty model
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
[See more information about frequency and presence penalties.](/docs/api-reference/parameter-details)
- sealed trait Prompt extends AnyRef
prompt model
The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.
Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document.
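The four accepted encodings map naturally onto a sum type. The sketch below is a hypothetical ADT for illustration only; the library's actual `Prompt` trait may use different case names:

```scala
// Hypothetical ADT mirroring the four accepted prompt encodings.
sealed trait PromptValue
object PromptValue {
  final case class Text(value: String)                extends PromptValue // a single string
  final case class Texts(values: Seq[String])         extends PromptValue // array of strings
  final case class Tokens(values: Seq[Int])           extends PromptValue // array of token IDs
  final case class TokenArrays(values: Seq[Seq[Int]]) extends PromptValue // array of token arrays
}
```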
- sealed trait Stop extends AnyRef
stop model
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
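The truncation semantics can be sketched as follows (illustrative helper, not part of the library; the API performs this server-side):

```scala
// Illustrative: cut the completion at the earliest occurrence of any stop
// sequence, excluding the stop sequence itself from the returned text.
def truncateAtStop(text: String, stops: Seq[String]): String = {
  val cutoffs = stops.map(s => text.indexOf(s)).filter(_ >= 0)
  if (cutoffs.isEmpty) text else text.take(cutoffs.min)
}
```

Note that the earliest match wins regardless of the order in which the stop sequences are listed.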
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @native()
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable])
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- implicit val schema: Schema[CreateCompletionRequest]
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- AnyRef → Any
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- object BestOf extends Subtype[Int]
- object FrequencyPenalty extends Subtype[Double]
- object LogitBias extends Serializable
- object Logprobs extends Subtype[Int]
- object MaxTokens extends Subtype[Int]
- object N extends Subtype[Int]
- object PresencePenalty extends Subtype[Double]
- object Prompt
- object Stop