final case class CreateAnswerRequest(model: String, question: Question, examples: NonEmptyChunk[Chunk[ExamplesItemItem]], examplesContext: String, documents: Optional[Chunk[String]] = Optional.Absent, file: Optional[String] = Optional.Absent, searchModel: Optional[String] = Optional.Absent, maxRerank: Optional[Int] = Optional.Absent, temperature: Optional[Double] = Optional.Absent, logprobs: Optional[Logprobs] = Optional.Absent, maxTokens: Optional[Int] = Optional.Absent, stop: Optional[Stop] = Optional.Absent, n: Optional[CreateAnswerRequest.N] = Optional.Absent, logitBias: Optional[LogitBias] = Optional.Absent, returnMetadata: Optional[Boolean] = Optional.Absent, returnPrompt: Optional[Boolean] = Optional.Absent, expand: Optional[Chunk[ExpandItem]] = Optional.Absent, user: Optional[String] = Optional.Absent) extends Product with Serializable
CreateAnswerRequest

- model: ID of the model to use for completion. You can select one of `ada`, `babbage`, `curie`, or `davinci`.
- question: Question to get answered.
- examples: List of (question, answer) pairs that will help steer the model towards the tone and answer format you'd like. We recommend adding 2 to 3 examples.
- examplesContext: A text snippet containing the contextual information used to generate the answers for the `examples` you provide.
- documents: List of documents from which the answer for the input `question` should be derived. If this is an empty list, the question will be answered based on the question-answer examples. You should specify either `documents` or a `file`, but not both.
- file: The ID of an uploaded file that contains documents to search over. See [upload file](/docs/api-reference/files/upload) for how to upload a file of the desired format and purpose. You should specify either `documents` or a `file`, but not both.
- searchModel: ID of the model to use for [Search](/docs/api-reference/searches/create). You can select one of `ada`, `babbage`, `curie`, or `davinci`.
- maxRerank: The maximum number of documents to be ranked by [Search](/docs/api-reference/searches/create) when using `file`. Setting it to a higher value leads to improved accuracy but with increased latency and cost.
- temperature: What [sampling temperature](https://towardsdatascience.com/how-to-sample-from-language-models-682bceb97277) to use. Higher values mean the model will take more risks; a value of 0 (argmax sampling) works better for scenarios with a well-defined answer.
- logprobs: Include the log probabilities on the `logprobs` most likely tokens, as well as the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5. If you need more than this, please contact us through our [Help center](https://help.openai.com) and describe your use case. When `logprobs` is set, `completion` will be automatically added into `expand` to get the logprobs.
- maxTokens: The maximum number of tokens allowed for the generated answer.
- stop: Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
- n: How many answers to generate for each question.
- logitBias: Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view=bpe) (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase the likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass `{"50256": -100}` to prevent the `<|endoftext|>` token from being generated.
- returnMetadata: A special boolean flag for showing metadata. If set to `true`, each document entry in the returned JSON will contain a "metadata" field. This flag only takes effect when `file` is set.
- returnPrompt: If set to `true`, the returned JSON will include a "prompt" field containing the final prompt that was used to request a completion. This is mainly useful for debugging purposes.
- expand: If an object name is in the list, we provide the full information of the object; otherwise, we only provide the object ID. Currently we support `completion` and `file` objects for expansion.
- user: A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
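The parameters above can be assembled as in the following rough sketch. It assumes the generated model types (`Question`, `ExamplesItemItem`, `Optional`, etc.) are in scope; the `Question(...)` and `ExamplesItemItem(...)` constructors shown here are illustrative assumptions about the generated wrappers, not verified API:

```scala
import zio.{Chunk, NonEmptyChunk}

// Hypothetical construction of a CreateAnswerRequest. Only model, question,
// examples, and examplesContext are required; every other field defaults to
// Optional.Absent and can be supplied explicitly with Optional.Present.
val request = CreateAnswerRequest(
  model = "curie",
  question = Question("Where is the Valley of Kings?"),
  // Each example is a (question, answer) pair, encoded as an inner Chunk.
  examples = NonEmptyChunk(
    Chunk(
      ExamplesItemItem("What is the capital of Japan?"),
      ExamplesItemItem("Tokyo")
    )
  ),
  examplesContext = "Tokyo is the capital and largest city of Japan.",
  // Inline documents; alternatively set `file`, but never both.
  documents = Optional.Present(Chunk("The Valley of Kings is located in Egypt.")),
  temperature = Optional.Present(0.0), // argmax sampling for a factual answer
  maxTokens = Optional.Present(16)
)
```

Because every optional field defaults to `Optional.Absent`, callers only name the fields they need, mirroring the JSON request body where omitted keys fall back to server-side defaults.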
Linear Supertypes
- CreateAnswerRequest
- Serializable
- Product
- Equals
- AnyRef
- Any
Instance Constructors
- new CreateAnswerRequest(model: String, question: Question, examples: NonEmptyChunk[Chunk[ExamplesItemItem]], examplesContext: String, documents: Optional[Chunk[String]] = Optional.Absent, file: Optional[String] = Optional.Absent, searchModel: Optional[String] = Optional.Absent, maxRerank: Optional[Int] = Optional.Absent, temperature: Optional[Double] = Optional.Absent, logprobs: Optional[Logprobs] = Optional.Absent, maxTokens: Optional[Int] = Optional.Absent, stop: Optional[Stop] = Optional.Absent, n: Optional[CreateAnswerRequest.N] = Optional.Absent, logitBias: Optional[LogitBias] = Optional.Absent, returnMetadata: Optional[Boolean] = Optional.Absent, returnPrompt: Optional[Boolean] = Optional.Absent, expand: Optional[Chunk[ExpandItem]] = Optional.Absent, user: Optional[String] = Optional.Absent)
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @native()
- val documents: Optional[Chunk[String]]
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- val examples: NonEmptyChunk[Chunk[ExamplesItemItem]]
- val examplesContext: String
- val expand: Optional[Chunk[ExpandItem]]
- val file: Optional[String]
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable])
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- val logitBias: Optional[LogitBias]
- val logprobs: Optional[Logprobs]
- val maxRerank: Optional[Int]
- val maxTokens: Optional[Int]
- val model: String
- val n: Optional[CreateAnswerRequest.N]
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- def productElementNames: Iterator[String]
- Definition Classes
- Product
- val question: Question
- val returnMetadata: Optional[Boolean]
- val returnPrompt: Optional[Boolean]
- val searchModel: Optional[String]
- val stop: Optional[Stop]
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- val temperature: Optional[Double]
- val user: Optional[String]
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()