Package dev.langchain4j.model.mistralai
Class MistralAiStreamingChatModel
java.lang.Object
    dev.langchain4j.model.mistralai.MistralAiStreamingChatModel

All Implemented Interfaces:
StreamingChatLanguageModel
Represents a Mistral AI chat model with a chat completion interface, such as mistral-tiny and mistral-small.
The model's response is streamed token by token and should be handled with a
StreamingResponseHandler.
A description of the parameters is available in the Mistral AI API documentation.
Constructor Summary

MistralAiStreamingChatModel(String baseUrl, String apiKey, String modelName, Double temperature, Double topP, Integer maxTokens, Boolean safePrompt, Integer randomSeed, Boolean logRequests, Boolean logResponses, Duration timeout)
    Constructs a MistralAiStreamingChatModel with the specified parameters.
Method Summary
Modifier and TypeMethodDescriptionvoidgenerate(List<ChatMessage> messages, StreamingResponseHandler<AiMessage> handler) Generates streamed token response based on the given list of messages.static MistralAiStreamingChatModelwithApiKey(String apiKey) Creates a MistralAiStreamingChatModel with the specified API key.Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, waitMethods inherited from interface dev.langchain4j.model.chat.StreamingChatLanguageModel
generate, generate, generate
Constructor Details
MistralAiStreamingChatModel
public MistralAiStreamingChatModel(String baseUrl,
                                   String apiKey,
                                   String modelName,
                                   Double temperature,
                                   Double topP,
                                   Integer maxTokens,
                                   Boolean safePrompt,
                                   Integer randomSeed,
                                   Boolean logRequests,
                                   Boolean logResponses,
                                   Duration timeout)

Constructs a MistralAiStreamingChatModel with the specified parameters.

Parameters:
baseUrl - the base URL of the Mistral AI API; the default value is used if not specified
apiKey - the API key for authentication
modelName - the name of the Mistral AI model to use
temperature - the temperature parameter for generating chat responses
topP - the top-p parameter for generating chat responses
maxTokens - the maximum number of new tokens to generate in a chat response
safePrompt - a flag indicating whether to use a safe prompt for generating chat responses
randomSeed - the random seed for generating chat responses (if not specified, a random number is used)
logRequests - a flag indicating whether to log raw HTTP requests
logResponses - a flag indicating whether to log raw HTTP responses
timeout - the timeout duration for API requests
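The constructor above can be called directly as sketched below. This is an illustrative sketch, not the library's recommended usage: the model name, numeric values, and the environment-variable name are assumptions, and it assumes (based on the baseUrl parameter description) that passing null for baseUrl falls back to the default Mistral AI endpoint.

```java
import dev.langchain4j.model.mistralai.MistralAiStreamingChatModel;
import java.time.Duration;

public class ConstructorExample {
    public static void main(String[] args) {
        // All concrete values below are illustrative; null baseUrl is assumed
        // to fall back to the default endpoint per the parameter description.
        MistralAiStreamingChatModel model = new MistralAiStreamingChatModel(
                null,                                 // baseUrl: use the default Mistral AI endpoint
                System.getenv("MISTRAL_AI_API_KEY"),  // apiKey (env var name is an assumption)
                "mistral-tiny",                       // modelName
                0.7,                                  // temperature
                1.0,                                  // topP
                512,                                  // maxTokens
                false,                                // safePrompt
                null,                                 // randomSeed: let the service pick one
                false,                                // logRequests
                false,                                // logResponses
                Duration.ofSeconds(60)                // timeout for API requests
        );
    }
}
```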
Method Details
withApiKey
public static MistralAiStreamingChatModel withApiKey(String apiKey)

Creates a MistralAiStreamingChatModel with the specified API key.

Parameters:
apiKey - the API key for authentication
Returns:
a MistralAiStreamingChatModel instance
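When only an API key is needed, this factory is the shortest way to obtain a model; all other settings are left at the library's defaults. The environment-variable name below is an assumption for illustration.

```java
import dev.langchain4j.model.mistralai.MistralAiStreamingChatModel;

public class WithApiKeyExample {
    public static void main(String[] args) {
        // Everything except the API key stays at the library's defaults.
        MistralAiStreamingChatModel model =
                MistralAiStreamingChatModel.withApiKey(System.getenv("MISTRAL_AI_API_KEY"));
    }
}
```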
generate
public void generate(List<ChatMessage> messages, StreamingResponseHandler<AiMessage> handler)

Generates a streamed token response based on the given list of messages.

Specified by:
generate in interface StreamingChatLanguageModel
Parameters:
messages - the list of chat messages
handler - the response handler for processing the generated chat chunk responses
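A minimal sketch of consuming the token stream, assuming a StreamingResponseHandler with onNext/onComplete/onError callbacks as in recent langchain4j versions; the callback signatures, package locations, and prompt text are assumptions, and running it requires a valid API key and network access. Since generate returns before the response is finished, a CountDownLatch is used here to keep the main thread alive until streaming ends.

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.mistralai.MistralAiStreamingChatModel;
import dev.langchain4j.model.output.Response;
import java.util.List;
import java.util.concurrent.CountDownLatch;

public class GenerateExample {
    public static void main(String[] args) throws InterruptedException {
        MistralAiStreamingChatModel model =
                MistralAiStreamingChatModel.withApiKey(System.getenv("MISTRAL_AI_API_KEY"));

        List<ChatMessage> messages = List.of(UserMessage.from("Tell me a joke"));

        // generate() returns immediately; tokens arrive on the handler as the
        // model produces them, so block until onComplete or onError fires.
        CountDownLatch done = new CountDownLatch(1);

        model.generate(messages, new StreamingResponseHandler<AiMessage>() {
            @Override
            public void onNext(String token) {
                System.out.print(token); // one chunk of the streamed answer
            }

            @Override
            public void onComplete(Response<AiMessage> response) {
                System.out.println();
                done.countDown();
            }

            @Override
            public void onError(Throwable error) {
                error.printStackTrace();
                done.countDown();
            }
        });

        done.await();
    }
}
```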