Cappy is a lightweight pretrained scorer designed to improve the performance and efficiency of large multi-task language models. Built on RoBERTa with only 360 million parameters, Cappy takes an instruction and a candidate response as input and outputs a score between 0 and 1 estimating how well the response answers the instruction. It can solve classification tasks on its own, by scoring each candidate label, or act as an auxiliary component that ranks candidate outputs from a larger language model. To adapt to a downstream task, only Cappy needs fine-tuning; this integrates the task's supervision without backpropagating through the language model's parameters, sharply reducing memory requirements. Because it needs no access to the underlying model's weights, Cappy works with both open-source and closed-source (e.g., API-only) language models, offering an efficient alternative to full fine-tuning.
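The score-then-select workflow described above can be sketched as follows. This is a minimal illustration, not Cappy's actual inference code: `select_best_response` captures the generic pattern of ranking candidate responses by a scorer's output, and `dummy_scorer` is a hypothetical stand-in (a simple word-overlap heuristic) for the real RoBERTa-based regression head.

```python
from typing import Callable, List, Tuple

def select_best_response(
    instruction: str,
    candidates: List[str],
    scorer: Callable[[str, str], float],
) -> Tuple[str, float]:
    """Score each (instruction, candidate) pair and return the highest-scoring one.

    In the Cappy setup, `candidates` would be label options for a
    classification task or sampled generations from a large LM, and
    `scorer` would be the Cappy model itself.
    """
    scored = [(cand, scorer(instruction, cand)) for cand in candidates]
    return max(scored, key=lambda pair: pair[1])

# Hypothetical stand-in scorer for illustration only: fraction of
# instruction words that appear in the response. The real Cappy maps an
# (instruction, response) pair to a learned correctness score in [0, 1].
def dummy_scorer(instruction: str, response: str) -> float:
    keywords = set(instruction.lower().split())
    words = set(response.lower().split())
    return len(keywords & words) / max(len(keywords), 1)

best, score = select_best_response(
    "name a primary color",
    ["a primary color is red", "bananas are yellow fruit"],
    dummy_scorer,
)
```

Because the large model is only queried for candidates, the same selection loop works whether its weights are accessible or it sits behind an API.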