- cross-posted to:
- [email protected]
- [email protected]
- [email protected]
Another day, another model.
Just one day after Meta released their new frontier models, Mistral AI surprised us with a new model, Mistral Large 2.
It’s quite a big one at 123B parameters, so I’m not sure I’d be able to run it at all. Still, based on their numbers, it seems to come close to GPT-4o, and they claim to be on par with GPT-4o, Claude 3 Opus, and the fresh Llama 3 405B on coding-related tasks.
It’s multilingual, and according to their blog post it was also trained on a large coding dataset covering 80+ programming languages. They further claim it is “trained to acknowledge when it cannot find solutions or does not have sufficient information to provide a confident answer”.
On the licensing side, it’s free for research and non-commercial applications, but you have to pay them for commercial use.
Wake me up when I can run that locally on my potato laptop.
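To put the 123B figure in perspective, here’s a rough back-of-envelope sketch of the memory just the weights would need at common precisions. The byte-per-parameter values are standard assumptions for illustration; real usage adds overhead for activations, KV cache, and runtime buffers on top of this.

```python
PARAMS = 123e9  # Mistral Large 2 parameter count

def model_size_gib(params: float, bytes_per_param: float) -> float:
    """Approximate weight size in GiB for a given precision."""
    return params * bytes_per_param / 1024**3

# Common precisions: FP16 (2 bytes/param), 8-bit and 4-bit quantization
for name, bpp in [("FP16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"{name}: ~{model_size_gib(PARAMS, bpp):.0f} GiB")
```

Even at 4-bit quantization the weights alone land well above what a typical laptop has in RAM, let alone VRAM, which is why "potato laptop" inference isn't happening any time soon.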