Meta Llama 3.1 is a collection of multilingual large language models, available as pretrained and instruction-tuned generative models in 8B, 70B, and 405B sizes. The instruction-tuned models are optimized for multilingual dialogue use cases and outperform many available open-source and closed chat models on common industry benchmarks. The models use an optimized transformer architecture; the instruction-tuned versions apply supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) to align with human preferences for helpfulness and safety.