LLaMA is a family of autoregressive, Transformer-based language models developed by Meta AI, released at several parameter scales: 7B, 13B, 33B, and 65B. This repository provides the LLaMA-13B weights converted on April 8, 2023 to the HuggingFace Transformers format; the conversion fixes the EOS-token problem found in earlier conversions, so the model can be used directly with HuggingFace Transformers.
Natural Language Processing
Transformers
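To illustrate why the EOS-token fix matters: text generation stops only when the model emits the id registered as `eos_token_id`, so a conversion that records the wrong id causes outputs that never terminate cleanly. The sketch below is a minimal, self-contained illustration of that stopping logic; the id value 2 reflects LLaMA's SentencePiece vocabulary (`</s>` = 2), and `truncate_at_eos` is a hypothetical helper, not part of any library.

```python
# LLaMA's SentencePiece vocabulary assigns </s> (end of sequence) id 2.
# Early conversions registered a different eos_token_id, so generation
# loops never saw a matching id and ran to the length limit instead.
EOS_TOKEN_ID = 2

def truncate_at_eos(token_ids, eos_id=EOS_TOKEN_ID):
    """Keep tokens up to (and excluding) the first EOS id, mimicking
    how a generation loop terminates on end-of-sequence."""
    out = []
    for tid in token_ids:
        if tid == eos_id:
            break  # correct eos_id -> generation stops here
        out.append(tid)
    return out

# With the right id, output is truncated at end-of-sequence;
# with a wrong id, the EOS token is never matched and nothing stops.
print(truncate_at_eos([523, 1049, 2, 331]))   # stops at id 2
print(truncate_at_eos([523, 1049, 2, 331], eos_id=0))  # wrong id: no stop
```

With the converted checkpoint, loading typically goes through the standard `AutoTokenizer.from_pretrained` / `AutoModelForCausalLM.from_pretrained` calls in HuggingFace Transformers, pointing at this repository's path.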