Is this expected to be released on Ollama?
This is so exciting! Glad to see Mistral at it with more bangers.
Anyone tested it at high context yet? I find all Mistral models peter out after like 16K-24K tokens, no matter what they advertise the context length as.
A GPT-4o-mini-comparable system that you can run on an RTX 4090 isn't going to solve hard problems directly, but it might have enterprise uses. Text generation automation for personal use should be strong, for example - in place of having a third-party API do it.
English version: https://mistral.ai/news/mistral-small-3-1