ReFT is an open-source research project for fine-tuning large language models with deep reinforcement learning to improve their performance on specific tasks. The repository provides the code and data needed to reproduce the results reported in the papers. Its main advantages are the ability to automatically adjust model parameters through reinforcement learning and the task-specific performance gains obtained from fine-tuning. The project builds on the CodeLlama and Galactica models and is released under the Apache 2.0 license.
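To illustrate the core idea of reward-driven parameter adjustment, the toy sketch below runs a REINFORCE-style policy-gradient update on a softmax policy: sampled outputs that earn reward have their log-probability increased. This is a minimal, self-contained analogy, not ReFT's actual training code; the toy task, function names, and learning rate are all illustrative assumptions.

```python
import math
import random

random.seed(0)

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def reinforce_step(logits, action, reward, lr=0.5):
    """One REINFORCE update: raise the log-prob of an action in
    proportion to the reward it received (illustrative, not ReFT's API)."""
    probs = softmax(logits)
    return [
        l + lr * reward * ((1.0 if i == action else 0.0) - probs[i])
        for i, l in enumerate(logits)
    ]

# Toy task: 4 candidate "answers"; only index 2 earns reward 1.
correct = 2
logits = [0.0, 0.0, 0.0, 0.0]
for _ in range(300):
    probs = softmax(logits)
    action = random.choices(range(4), weights=probs)[0]
    reward = 1.0 if action == correct else 0.0
    logits = reinforce_step(logits, action, reward)

probs = softmax(logits)
print(max(range(4), key=lambda i: probs[i]))
```

In ReFT the same principle operates at a much larger scale: the "actions" are sampled reasoning traces from the language model, and the reward comes from checking the final answer.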