Peking University and the Beijing Academy of Artificial Intelligence have released LLaMA-Rider, a training framework that enables large language models to autonomously explore and learn in open-world environments. LLaMA-Rider employs a feedback-modification mechanism for active exploration: the model revises its decisions based on environment feedback, and the collected experience is used to strengthen its multi-task solving ability. Experiments show that LLaMA-Rider achieves high sampling efficiency and low training cost in multi-task settings, and that it generalizes to new tasks, offering new insights into how large language models can learn autonomously in the open world. The framework has broad application prospects and is expected to advance the development of large language models.
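The feedback-modification loop described above can be sketched in miniature as follows. This is a toy illustration only; the function names (`propose_action`, `environment_feedback`, `explore`) are hypothetical stand-ins for the model and environment, not LLaMA-Rider's actual API:

```python
# Toy sketch of a feedback-modification exploration loop.
# All names below are hypothetical stand-ins, not LLaMA-Rider's real code.

def propose_action(task, hint=None):
    """Stand-in for the LLM proposing an action, optionally using feedback."""
    return f"{task}:revised" if hint else f"{task}:initial"

def environment_feedback(action):
    """Stand-in for the environment: fail first attempts, accept revisions."""
    if action.endswith(":revised"):
        return "success", None
    return "failure", "previous attempt failed; revise the plan"

def explore(task, max_revisions=3):
    """Attempt a task, revising the proposed action whenever feedback
    signals failure. The resulting trajectory of (action, outcome) pairs
    is the kind of experience a framework like LLaMA-Rider would later
    use to fine-tune the model."""
    trajectory = []
    hint = None
    for _ in range(max_revisions):
        action = propose_action(task, hint)
        outcome, hint = environment_feedback(action)
        trajectory.append((action, outcome))
        if outcome == "success":
            break
    return trajectory

traj = explore("collect_wood")
print(traj)
# → [('collect_wood:initial', 'failure'), ('collect_wood:revised', 'success')]
```

The key design point this illustrates is that exploration is active: failed attempts are not discarded but drive a revision step, and the full trajectory becomes training data.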