Model Comparison
Author: mistralai
Context Length: 33K
With 22 billion parameters, Mistral Small v24.09 offers a convenient mid-point between [Mistral NeMo 12B](/mistralai/mistral-nemo) and [Mistral Large 2](/mistralai/mistral-large), providing a cost-effective solution that can be deployed across various platforms and environments. It offers improved reasoning and a broader set of capabilities, can generate and reason about code, and is multilingual, supporting English, French, German, Italian, and Spanish.
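Most hosting platforms expose models like this through an OpenAI-compatible chat completions API, so a minimal request might look like the sketch below. The endpoint URL, the model slug `mistralai/mistral-small`, and the `OPENROUTER_API_KEY` environment variable are illustrative assumptions, not details confirmed by this page.

```python
# Minimal sketch: querying Mistral Small v24.09 through an
# OpenAI-compatible chat completions endpoint.
# The URL, model slug, and API-key variable are illustrative assumptions.
import os
import requests

API_URL = "https://openrouter.ai/api/v1/chat/completions"  # assumed host
MODEL = "mistralai/mistral-small"                           # assumed slug

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": MODEL,
        "messages": [
            # Multilingual prompt: the model supports EN, FR, DE, IT, ES.
            {"role": "user",
             "content": "Écris une fonction Python qui inverse une chaîne."},
        ],
        "max_tokens": 512,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```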
Pricing
Input: $0.20 / M tokens
Output: $0.60 / M tokens
Images: – –
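As a quick sanity check on the rates above, request cost scales linearly with prompt and completion length. The helper below is a minimal sketch of that arithmetic; the example token counts are made up for illustration.

```python
# Rough cost estimate at the listed rates: $0.20 per million input tokens,
# $0.60 per million output tokens. Token counts below are illustrative only.
INPUT_RATE = 0.20 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.60 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in USD."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 20K-token prompt with a 1K-token reply:
print(f"${estimate_cost(20_000, 1_000):.4f}")  # ≈ $0.0046
```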
Endpoint Features
Quantization: unknown
Max Tokens (input + output): 33K
Max Output Tokens: – –
Stream cancellation: – –
Supports Tools (see the tool-calling sketch after this list)
No Prompt Training
Reasoning: – –
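Because the endpoint advertises tool support, a request can also carry OpenAI-style function definitions. The sketch below shows the general shape of such a payload; the endpoint URL, model slug, and the `get_weather` tool schema are assumptions made for illustration only.

```python
# Sketch of a tool-calling request in the OpenAI-compatible format.
# The endpoint URL, model slug, and get_weather tool are illustrative assumptions.
import os
import requests

payload = {
    "model": "mistralai/mistral-small",  # assumed slug
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",  # assumed host
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
# If the model decides to call the tool, the call appears here:
print(resp.json()["choices"][0]["message"].get("tool_calls"))
```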