Supported Models

1. Meta-Llama 3.1 70B Instruct (Quantized)

"hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4"

Description: A 4-bit quantized version of Meta AI's Llama 3.1 70B Instruct model, optimized for multilingual dialogue applications (a loading sketch follows this entry).

Details:

  • Model Size: 70 billion parameters

  • Quantization: INT4 using AutoAWQ

  • Use Cases: Multilingual dialogue applications

More Information: Meta-Llama 3.1 70B Instruct - Hugging Face
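
The model ID above can be passed directly to the Hugging Face transformers library if you want to run the checkpoint yourself. The snippet below is a minimal sketch, assuming the transformers, accelerate, and autoawq packages are installed and that enough GPU memory is available for the INT4 weights; it is illustrative, not the only way to serve the model.

```python
# Minimal sketch: load the AWQ INT4 checkpoint with Hugging Face transformers.
# Assumes `transformers`, `accelerate`, and `autoawq` are installed and that
# sufficient GPU memory is available; adjust for your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard layers across available GPUs
    torch_dtype="auto",  # keep the dtype stored in the checkpoint
)

# Llama 3.1 Instruct is a chat model, so format the prompt with its chat template.
messages = [{"role": "user", "content": "Summarize the benefits of INT4 quantization."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```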


2. Mistral Small 3.1 24B Instruct

"mrfakename/mistral-small-3.1-24b-instruct-2503-hf"

Description: A 24-billion-parameter instruction-tuned model, focused on text generation tasks (a usage sketch follows this entry).

Details:

  • Model Size: 24 billion parameters

  • Format: Hugging Face Transformers

  • Use Cases: Text generation tasks

More Information: Mistral Small 3.1 24B Instruct - Hugging Face
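
As with the model above, the ID can be used directly with transformers for local experimentation. The sketch below uses the text-generation pipeline and assumes transformers and accelerate are installed with enough GPU memory for the 24B weights; treat it as an illustration rather than an official integration.

```python
# Minimal sketch: run the instruction-tuned checkpoint through the
# transformers text-generation pipeline. Assumes `transformers` and
# `accelerate` are installed and a GPU with enough memory for the weights.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mrfakename/mistral-small-3.1-24b-instruct-2503-hf",
    device_map="auto",
    torch_dtype="auto",
)

# Instruction-tuned models expect chat-style messages; the pipeline applies
# the model's chat template before generation.
messages = [{"role": "user", "content": "Write a one-sentence product description for a solar lantern."}]
result = generator(messages, max_new_tokens=128)

# The pipeline returns the full conversation; the last message is the model's reply.
print(result[0]["generated_text"][-1]["content"])
```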


3. Gemma 3 27B (Upcoming)

Description: A 27-billion-parameter multimodal model from Google, capable of handling both text and image inputs (a sketch of the expected input format follows this entry).

Details:

  • Model Size: 27 billion parameters

  • Capabilities: Multimodal (text and image input, text output)

  • Use Cases: Question answering, summarization, reasoning

More Information: Gemma 3 27B - Hugging Face
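
Because this model is not yet available, the sketch below only illustrates the multimodal message format that Hugging Face's image-text-to-text pipeline expects. The model ID and image URL are placeholders, not identifiers we have published; check back once Gemma 3 27B goes live.

```python
# Minimal sketch of a multimodal request, assuming the model is exposed as a
# Hugging Face image-text-to-text checkpoint. The model ID and image URL are
# placeholders until Gemma 3 27B is available.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="google/gemma-3-27b-it",  # placeholder ID; confirm against the released model card
    device_map="auto",
    torch_dtype="auto",
)

# Multimodal chat messages mix image and text parts in a single user turn.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/chart.png"},  # placeholder image
            {"type": "text", "text": "Summarize what this chart shows."},
        ],
    }
]

result = pipe(text=messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])
```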


To learn more about our language models, please visit our Substack.
