MaLA-500 is a large language model designed to cover 534 languages. It is built by applying vocabulary extension and continued pretraining to LLaMA 2 on the Glot500-c corpus. Experiments on SIB-200 show that MaLA-500 achieves state-of-the-art in-context learning results, positioning it to improve natural language processing performance for low-resource languages.
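
The vocabulary-extension step can be illustrated with a minimal sketch using Hugging Face Transformers. This is not MaLA-500's exact recipe: the base checkpoint name, the sample new tokens, and the omitted training loop are assumptions for illustration only.

```python
# Minimal sketch of vocabulary extension before continued pretraining.
# Assumptions: access to the LLaMA 2 checkpoint on Hugging Face; the
# new-token list below is hypothetical (in practice it would be mined
# from multilingual corpora such as Glot500-c).
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical subword tokens for languages poorly covered by the base vocabulary.
new_tokens = ["▁mfano", "▁ejemplo_xx"]
num_added = tokenizer.add_tokens(new_tokens)
print(f"Added {num_added} tokens; new vocab size: {len(tokenizer)}")

# Grow the embedding matrix so the new token ids get trainable vectors;
# continued causal-LM pretraining on the multilingual corpus would follow.
model.resize_token_embeddings(len(tokenizer))
```

After resizing, the model would be further pretrained with the standard causal language modeling objective on the multilingual data, so that the newly added embeddings and the existing parameters adapt to the expanded language coverage.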