Interface EmbeddingModelConfig
public interface EmbeddingModelConfig
Method Summary
Method | Description
logRequests() | Whether embedding model requests should be logged
logResponses() | Whether embedding model responses should be logged
numPredict() | Maximum number of tokens to predict when generating text
stop() | Sets the stop sequences to use.
temperature() | The temperature of the model.
topK() | Reduces the probability of generating nonsense.
topP() | Works together with top-k.
Method Details
temperature
The temperature of the model. Increasing the temperature makes the model answer with more variability; a lower temperature makes it answer more conservatively.
numPredict
Maximum number of tokens to predict when generating text.
stop
Sets the stop sequences to use. When one of these patterns is encountered, the LLM stops generating text and returns.
topP
Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.
topK
Reduces the probability of generating nonsense. A higher value (e.g., 100) will give more diverse answers, while a lower value (e.g., 10) will be more conservative.
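The sampling options above are typically set through Quarkus configuration properties rather than by implementing this interface. A minimal sketch in application.properties, assuming the property names follow the usual quarkus.langchain4j.ollama.embedding-model.* kebab-case convention (the exact keys are not shown in this page and should be checked against the configuration reference):

```properties
# Assumed property names, derived from the method names of EmbeddingModelConfig
quarkus.langchain4j.ollama.embedding-model.temperature=0.2
quarkus.langchain4j.ollama.embedding-model.num-predict=128
quarkus.langchain4j.ollama.embedding-model.stop=###
quarkus.langchain4j.ollama.embedding-model.top-k=40
quarkus.langchain4j.ollama.embedding-model.top-p=0.9
```

Each method on the interface maps to one such key; values left unset fall back to the model's defaults, which is why the accessors return Optional types.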
logRequests
@ConfigDocDefault("false") @WithDefault("${quarkus.langchain4j.ollama.log-requests}")
Optional<Boolean> logRequests()
Whether embedding model requests should be logged.
logResponses
@ConfigDocDefault("false") @WithDefault("${quarkus.langchain4j.ollama.log-responses}")
Optional<Boolean> logResponses()
Whether embedding model responses should be logged.
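As the @WithDefault annotations show, the logging flags default to the client-wide quarkus.langchain4j.ollama.log-requests and log-responses values. A sketch of overriding them for the embedding model only, again assuming the embedding-model.* key prefix:

```properties
# Client-wide logging stays off; only embedding model traffic is logged
# (the embedding-model.* keys are assumed from the interface's method names)
quarkus.langchain4j.ollama.log-requests=false
quarkus.langchain4j.ollama.embedding-model.log-requests=true
quarkus.langchain4j.ollama.embedding-model.log-responses=true
```

If neither level is set, both flags resolve to false, per @ConfigDocDefault("false").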