mistral-finetune is a lightweight codebase built on the LoRA training paradigm: most of the original weights are frozen, and only a small set of additional weights, roughly 1-2% of the total, is trained in the form of low-rank matrix perturbations. It is optimized for multi-GPU, single-node training setups; for smaller models, such as the 7B model, a single GPU is sufficient. The codebase aims to provide a simple, guided entry point for fine-tuning, particularly around data formatting, and does not intend to cover a wide range of model architectures or hardware types.
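
To make the "low-rank matrix perturbation" idea concrete, here is a minimal NumPy sketch of a LoRA-style linear layer (not the actual mistral-finetune implementation; the function name `lora_forward` and all shapes are illustrative assumptions). The frozen weight `W` is augmented with a trainable product `B @ A` of rank `r`, so only `r * (d_in + d_out)` parameters are trained instead of `d_in * d_out`:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """Linear layer with a LoRA update: effective weight is W + (alpha/r) * B @ A.

    W: (d_out, d_in) frozen pretrained weight.
    A: (r, d_in) and B: (d_out, r) are the small trainable low-rank matrices.
    """
    r = A.shape[0]
    return x @ (W + (alpha / r) * (B @ A)).T

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4
W = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))  # B starts at zero, so the perturbation is initially a no-op
x = rng.standard_normal((1, d_in))

# With B = 0 the LoRA branch contributes nothing: output matches the base layer.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)

# Fraction of weights that are trainable at this toy scale:
print(r * (d_in + d_out) / (d_in * d_out))  # -> 0.125 here; ~1-2% at 7B scale
```

In practice `r` is small relative to the model dimensions, which is what keeps the trainable fraction in the 1-2% range the paragraph mentions.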