Package dev.langchain4j.model.mistralai
Class MistralAiChatModel
java.lang.Object
dev.langchain4j.model.mistralai.MistralAiChatModel
- All Implemented Interfaces:
ChatLanguageModel
Represents a Mistral AI Chat Model with a chat completion interface, such as mistral-tiny and mistral-small.
This model generates chat completions synchronously, based on a list of chat messages.
You can find a description of the parameters here.
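A minimal usage sketch (not part of the original Javadoc): it assumes a valid Mistral AI API key is available in the MISTRAL_AI_API_KEY environment variable, and uses the withApiKey factory and generate method documented below.

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.mistralai.MistralAiChatModel;
import dev.langchain4j.model.output.Response;

import java.util.List;

public class MistralAiQuickStart {
    public static void main(String[] args) {
        // Assumes MISTRAL_AI_API_KEY holds a valid Mistral AI API key.
        MistralAiChatModel model =
                MistralAiChatModel.withApiKey(System.getenv("MISTRAL_AI_API_KEY"));

        // Synchronous chat completion based on a list of chat messages.
        List<ChatMessage> messages =
                List.of(UserMessage.from("What is the capital of France?"));
        Response<AiMessage> response = model.generate(messages);

        System.out.println(response.content().text());
    }
}
```

This requires a network call to the Mistral AI API, so it only runs with a valid key configured.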
Constructor Summary
Constructors
Constructor: MistralAiChatModel(String baseUrl, String apiKey, String modelName, Double temperature, Double topP, Integer maxTokens, Boolean safePrompt, Integer randomSeed, Duration timeout, Boolean logRequests, Boolean logResponses, Integer maxRetries)
Description: Constructs a MistralAiChatModel with the specified parameters.
Method Summary
Modifier and Type | Method | Description
Response<AiMessage> | generate(List<ChatMessage> messages) | Generates a chat response based on the given list of messages.
static MistralAiChatModel | withApiKey(String apiKey) | Creates a MistralAiChatModel with the specified API key.
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface dev.langchain4j.model.chat.ChatLanguageModel: generate, generate, generate, generate
Constructor Details
MistralAiChatModel
public MistralAiChatModel(String baseUrl, String apiKey, String modelName, Double temperature, Double topP, Integer maxTokens, Boolean safePrompt, Integer randomSeed, Duration timeout, Boolean logRequests, Boolean logResponses, Integer maxRetries)
Constructs a MistralAiChatModel with the specified parameters.
Parameters:
baseUrl - the base URL of the Mistral AI API; uses the default value if not specified
apiKey - the API key for authentication
modelName - the name of the Mistral AI model to use
temperature - the temperature parameter for generating chat responses
topP - the top-p parameter for generating chat responses
maxTokens - the maximum number of new tokens to generate in a chat response
safePrompt - a flag indicating whether to use a safe prompt for generating chat responses
randomSeed - the random seed for generating chat responses
timeout - the timeout duration for API requests; the default value is 60 seconds
logRequests - a flag indicating whether to log API requests
logResponses - a flag indicating whether to log API responses
maxRetries - the maximum number of retries for API requests; uses the default value 3 if not specified
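A sketch of invoking this constructor directly (all argument values below are hypothetical and chosen for illustration; the base URL shown is assumed to be the Mistral AI default endpoint):

```java
import dev.langchain4j.model.mistralai.MistralAiChatModel;

import java.time.Duration;

public class ConstructorExample {
    public static void main(String[] args) {
        MistralAiChatModel model = new MistralAiChatModel(
                "https://api.mistral.ai/v1",           // baseUrl (assumed default endpoint)
                System.getenv("MISTRAL_AI_API_KEY"),   // apiKey
                "mistral-tiny",                        // modelName
                0.7,                                   // temperature
                1.0,                                   // topP
                512,                                   // maxTokens
                false,                                 // safePrompt
                42,                                    // randomSeed
                Duration.ofSeconds(60),                // timeout (documented default: 60 seconds)
                true,                                  // logRequests
                false,                                 // logResponses
                3                                      // maxRetries (documented default: 3)
        );
    }
}
```

The argument order follows the constructor signature above; passing null for a parameter falls back to its default where one is documented.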
Method Details
withApiKey
public static MistralAiChatModel withApiKey(String apiKey)
Creates a MistralAiChatModel with the specified API key.
Parameters:
apiKey - the API key for authentication
Returns:
a MistralAiChatModel instance
generate
Generates a chat response based on the given list of messages.
Specified by:
generate in interface ChatLanguageModel
Parameters:
messages - the list of chat messages
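A sketch of calling generate with a multi-message conversation (assumes a valid API key in the MISTRAL_AI_API_KEY environment variable; the response accessors used are the standard langchain4j Response methods):

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.mistralai.MistralAiChatModel;
import dev.langchain4j.model.output.Response;

import java.util.List;

public class GenerateExample {
    public static void main(String[] args) {
        MistralAiChatModel model =
                MistralAiChatModel.withApiKey(System.getenv("MISTRAL_AI_API_KEY"));

        // A system message to steer the model, followed by the user's question.
        List<ChatMessage> messages = List.of(
                SystemMessage.from("You are a concise assistant."),
                UserMessage.from("Explain a chat completion API in one sentence."));

        Response<AiMessage> response = model.generate(messages);

        System.out.println(response.content().text()); // the generated AiMessage text
        System.out.println(response.tokenUsage());     // token accounting for the call
    }
}
```

Because the call is synchronous, generate blocks until the API returns or the configured timeout elapses.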