The University of Science and Technology of China, in collaboration with the Fengshenbang team at the IDEA Research Institute, has released ChiMed-GPT, a large language model for the Chinese medical domain. Built on the Fengshenbang team's Ziya2-13B model with 13 billion parameters, it is adapted to medical text processing through a full training regime of pre-training, supervised fine-tuning, and reinforcement learning from human feedback (RLHF). ChiMed-GPT outperforms open-source models of similar scale on medical information extraction, question answering, and dialogue generation, and surpasses GPT-3.5 on several metrics. Beyond processing medical text, the model can generate responses appropriate for answering patient inquiries.
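As a rough illustration of how such a model might be queried, the Python sketch below loads it with the Hugging Face transformers library and generates an answer to a patient question. The model identifier and the `<human>:`/`<bot>:` prompt template are assumptions (the template follows the convention of the Ziya models ChiMed-GPT is built on), not details confirmed by this announcement.

```python
# A minimal sketch, not an official example. The model id below is an
# assumption; the <human>:/<bot>: prompt format is assumed from the
# Ziya model family that ChiMed-GPT extends.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "SYNLP/ChiMed-GPT-1.0"  # assumed Hugging Face identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # 13B parameters; half precision to fit on one GPU
    device_map="auto",
)

# Ziya-style dialogue prompt: the patient's question follows <human>:
question = "What precautions should I take after a tooth extraction?"
prompt = f"<human>:{question}\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
)

# Decode only the newly generated tokens, skipping the echoed prompt
answer = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(answer)
```

The half-precision weights and `device_map="auto"` setting (which requires the accelerate package) are practical choices for running a 13B-parameter model on a single consumer or workstation GPU, not requirements of the model itself.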