SignLLM is the first multilingual sign language production model. It is trained on a public sign language dataset covering American Sign Language (ASL) and seven other sign languages. The model generates sign language gestures from text or prompt inputs, and it uses reinforcement learning to accelerate training by steering data sampling toward more informative examples. SignLLM achieves state-of-the-art results on sign language production benchmarks across all eight sign languages.
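The summary above does not spell out how reinforcement learning interacts with data sampling, so the following is only a hypothetical sketch of one common pattern it alludes to: keeping a per-example priority score that is nudged toward an observed reward, then sampling training batches in proportion to those priorities. The class name `PrioritySampler` and all parameters (`alpha`, `lr`) are illustrative assumptions, not taken from the SignLLM paper.

```python
import numpy as np

class PrioritySampler:
    """Hypothetical reward-driven sampler: examples with higher observed
    reward are drawn more often in later batches."""

    def __init__(self, n_examples, alpha=1.0, lr=0.1):
        # Start with uniform priorities; alpha sharpens the distribution.
        self.priorities = np.ones(n_examples)
        self.alpha = alpha
        self.lr = lr

    def sample(self, batch_size, rng):
        # Convert priorities to a probability distribution and draw a batch.
        p = self.priorities ** self.alpha
        p = p / p.sum()
        return rng.choice(len(self.priorities), size=batch_size,
                          replace=False, p=p)

    def update(self, indices, rewards):
        # Move each sampled example's priority toward its observed reward.
        self.priorities[indices] += self.lr * (rewards - self.priorities[indices])
        # Keep priorities strictly positive so every example stays sampleable.
        self.priorities = np.clip(self.priorities, 1e-3, None)
```

In a training loop, the model's reward signal (e.g., a quality score on the generated gesture sequence) would be fed back through `update`, so later epochs concentrate on examples the model still handles poorly or that yield high learning signal.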