The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction-tuned generative model with 70B parameters (text in/text out). The Llama 3.3 instruction-tuned, text-only model is optimized for multilingual dialogue use cases and outperforms many available open-source and closed chat models on common industry benchmarks.
Model developer: Meta
Model Architecture: Llama 3.3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
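For orientation, a minimal inference sketch follows. It assumes the instruction-tuned checkpoint is served through the Hugging Face `transformers` library under the model ID `meta-llama/Llama-3.3-70B-Instruct` (an assumption, not part of this card) and that the text-generation pipeline applies the model's chat template to a list of messages.

```python
import torch
from transformers import pipeline

# Assumed Hugging Face model ID for the instruction-tuned checkpoint.
model_id = "meta-llama/Llama-3.3-70B-Instruct"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,  # bf16 halves memory vs. fp32 for the 70B weights
    device_map="auto",           # shard the model across available GPUs
)

# The instruct model is dialogue-tuned: pass chat messages and let the
# pipeline apply the model's chat template.
messages = [
    {"role": "system", "content": "You are a helpful multilingual assistant."},
    {"role": "user", "content": "Resume en español: ¿qué es Llama 3.3?"},
]

out = pipe(messages, max_new_tokens=256)
print(out[0]["generated_text"][-1]["content"])
```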
| | Training Data | Params | Input modalities | Output modalities | Context length | GQA | Token count | Knowledge cutoff |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Llama 3.3 (text only) | A new mix of publicly available online data. | 70B | Multilingual Text | Multilingual Text and code | 128k | Yes | 15T+ | December 2023 |
Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
Llama 3.3 model: Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
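To make the GQA column concrete: grouped-query attention lets several query heads share one key/value head, which shrinks the KV cache that dominates memory at a 128k context length. The toy sketch below shows the mechanism; the head counts and dimensions are illustrative assumptions, not Llama 3.3's actual configuration.

```python
import torch

def grouped_query_attention(q, k, v):
    """Toy GQA: n_q_heads query heads share n_kv_heads key/value heads.
    Shapes: q [batch, n_q_heads, seq, d]; k, v [batch, n_kv_heads, seq, d]."""
    group = q.shape[1] // k.shape[1]
    # Repeat each KV head across its query group instead of storing one KV
    # head per query head -- this is what shrinks the KV cache.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    scores = (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v

# 8 query heads sharing 2 KV heads (illustrative sizes only).
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 2, 16, 64)
v = torch.randn(1, 2, 16, 64)
print(grouped_query_attention(q, k, v).shape)  # torch.Size([1, 8, 16, 64])
```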