Google researchers have introduced Cappy, a lightweight pre-trained scoring model aimed at improving the performance and efficiency of large multitask language models. Cappy's architecture is based on RoBERTa, and it is pre-trained on a diverse collection of datasets, addressing the challenge of effectively applying large language models in multitask scenarios. To satisfy the label-diversity requirements of this pre-training data, the researchers also propose a data construction method that generates a large and effective regression pre-training dataset.
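Conceptually, a scorer of this kind takes an instruction and a candidate response and produces a scalar estimate of how well the response answers the instruction; the highest-scoring candidate is then selected. The sketch below illustrates that pattern with a RoBERTa model carrying a single regression head, assuming a Hugging Face `transformers` setup. The checkpoint name is a placeholder, not Cappy's actual release, and the untrained head here would need Cappy-style regression pre-training to yield meaningful scores.

```python
# Minimal sketch: ranking candidate LLM responses with a RoBERTa-based
# regression scorer, in the style of Cappy. Hypothetical checkpoint name;
# the real Cappy interface may differ.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-large"  # placeholder; a trained Cappy-style checkpoint would go here

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# num_labels=1 gives a regression head producing one scalar score per pair.
scorer = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)
scorer.eval()

def score_candidates(instruction: str, candidates: list[str]) -> list[float]:
    """Score each (instruction, candidate) pair; higher means a better response."""
    inputs = tokenizer(
        [instruction] * len(candidates),  # pair the instruction with every candidate
        candidates,
        padding=True,
        truncation=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = scorer(**inputs).logits.squeeze(-1)
    return logits.tolist()

instruction = "Translate to French: The weather is nice today."
candidates = [
    "Il fait beau aujourd'hui.",
    "Le temps est mauvais.",
]
scores = score_candidates(instruction, candidates)
best = max(zip(candidates, scores), key=lambda pair: pair[1])[0]
print(scores, "->", best)
```

Because the scorer is a small encoder rather than a full generative model, reranking many candidates this way is cheap compared with querying the large language model again, which is the efficiency argument behind this design.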