Mistral AI announced the launch of Mistral Large, its newest flagship AI model, available through the company's own la Plateforme and through Azure AI, the first external platform to host Mistral's models.
The model's reasoning capabilities suit complex multilingual tasks, including text understanding, text transformation, and code generation.
Mistral Large introduces a range of advanced features and improvements. It is natively proficient in multiple languages, including English, French, Spanish, German, and Italian. This multilingual capability is not just about understanding words; Mistral Large has a deep grasp of grammar and cultural nuances, allowing for more accurate and context-aware translations and interactions. Such linguistic versatility ensures that it can serve a broad audience, offering services and responses that respect the linguistic and cultural contexts of its users, according to Mistral.
Another significant enhancement is Mistral Large’s extended context window of 32,000 tokens. This substantial increase in the amount of text it can consider at once enables the model to pull precise information from larger documents. This capability is crucial for tasks that involve detailed analysis or require synthesizing information from extensive sources.
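To gauge whether a document fits within a 32,000-token window, a rough rule of thumb of about four characters per token is often used. The helper below is a hypothetical sketch based on that heuristic, not Mistral's actual tokenizer; an exact count would require the model's own tokenization.

```python
def fits_in_context(text: str, context_tokens: int = 32_000,
                    chars_per_token: float = 4.0) -> bool:
    """Rough estimate of whether `text` fits in the model's context window.

    Uses the common ~4-characters-per-token heuristic; real token counts
    vary by language and content.
    """
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= context_tokens

# A ~100,000-character document (~25,000 estimated tokens) should fit;
# a ~200,000-character one (~50,000 estimated tokens) should not.
print(fits_in_context("x" * 100_000))
print(fits_in_context("x" * 200_000))
```

In practice, documents that exceed the window would need to be chunked or summarized before being passed to the model.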
Mistral Large also excels at precise instruction following and includes features that let developers craft custom moderation policies; Mistral itself uses this capability to moderate content on its chat interface, le Chat.
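One common way to apply such a policy is to prepend it to the conversation as a system message. The helper below is a hypothetical sketch of that pattern under the usual chat-message conventions; it is not Mistral's built-in moderation mechanism.

```python
def with_moderation_policy(messages: list[dict], policy: str) -> list[dict]:
    """Prepend a custom moderation policy as a system message.

    `messages` is a list of chat messages in the common
    {"role": ..., "content": ...} shape.
    """
    return [{"role": "system", "content": policy}] + list(messages)

chat = [{"role": "user", "content": "Tell me about Mistral Large."}]
moderated = with_moderation_policy(
    chat, "Refuse requests for harmful or unsafe content.")
print(moderated[0]["role"])  # the policy now leads the conversation
```

Because the policy is just another message, developers can swap it per application or per user without changing the rest of the request.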
These capabilities open new avenues for application development, allowing for the creation of more sophisticated, interactive, and tailored software solutions that can meet the evolving needs of businesses and consumers alike.
In addition, its JSON format mode constrains the model's output to valid JSON, allowing developers to extract information in a structured format that can be plugged directly into their applications.
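In practice, a developer might request JSON mode and then parse the guaranteed-valid reply directly. The snippet below is a minimal sketch: the request payload assumes a `response_format` field of `{"type": "json_object"}` (check Mistral's API reference for the exact shape), and the reply string stands in for an actual model response.

```python
import json

# Hypothetical request payload; JSON mode is assumed to be enabled via
# the `response_format` field.
request = {
    "model": "mistral-large-latest",
    "messages": [{
        "role": "user",
        "content": "Extract the name and year from: 'Ada Lovelace, 1815'. "
                   "Reply as JSON with keys 'name' and 'year'.",
    }],
    "response_format": {"type": "json_object"},
}

# Stand-in for the model's reply; JSON mode means it parses cleanly.
raw_reply = '{"name": "Ada Lovelace", "year": 1815}'

record = json.loads(raw_reply)
print(record["name"], record["year"])
```

Because the output is guaranteed to be valid JSON, the parsing step needs no fallback handling for malformed text.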
Alongside Mistral Large, the company also released Mistral Small, a new model optimized for latency and cost.