Researchers have open-sourced a project on GitHub called AgentTuning, a novel approach to fine-tuning language models. The method trains language models on interaction trajectories collected from multiple agent tasks, so that the models adapt better to a variety of tasks and scenarios. This can improve both the effectiveness and the generalization ability of language models while reducing manual tuning effort. According to the project, AgentTuning has been validated on several natural language processing tasks, including dialogue generation, question answering, and summarization, and the approach also holds significant potential for other types of models.
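As a rough illustration of training on interaction trajectories, a multi-turn agent episode can be flattened into a chat-format supervised fine-tuning example, with the loss applied only to the agent's own turns. The function and field names below (`trajectory_to_example`, `train_on`) are hypothetical, a minimal sketch of the general idea rather than the project's actual data pipeline.

```python
# Hypothetical sketch: turn one agent interaction trajectory into a
# chat-format supervised fine-tuning example. Roles alternate between
# environment observations ("user") and agent actions ("assistant").

def trajectory_to_example(trajectory):
    """Convert a list of (role, text) turns into a training example.

    Only the agent's own turns are marked for loss computation, which
    is a common convention when fine-tuning on trajectories.
    """
    messages = []
    for role, text in trajectory:
        messages.append({
            "role": role,
            "content": text,
            # Compute loss only on the agent's responses, not observations.
            "train_on": role == "assistant",
        })
    return {"messages": messages}

# Example trajectory from an imagined tool-use task.
trajectory = [
    ("user", "Task: find the cheapest flight from A to B."),
    ("assistant", "Thought: search for flights.\nAction: search(A, B)"),
    ("user", "Observation: [flight1 $120, flight2 $95]"),
    ("assistant", "Answer: flight2 at $95."),
]

example = trajectory_to_example(trajectory)
```

A fine-tuning loop would then mix such trajectory-derived examples with ordinary instruction data, which is the general recipe the project describes for balancing agent ability against general capability.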