This article discusses how to integrate structural information into large language models to improve their performance on knowledge graph completion. It first examines training-free approaches, such as zero-shot inference and in-context learning, for applying large language models to this downstream task. The article then explains the design principles and workings of the Knowledge Prefix Adapter (KoPA) and its advantages in the triple classification task. Research findings indicate that KoPA effectively incorporates structural information into large language models, improving both their performance and their transferability.
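To make the prefix-adapter idea more concrete, the sketch below illustrates one plausible way such a component could work: pre-trained structural embeddings of a triple's head, relation, and tail are projected into the language model's token-embedding space and prepended as virtual prefix tokens to the embedded text prompt. This is a minimal illustration, not the authors' implementation; names such as `KnowledgePrefixAdapter`, `struct_dim`, and `llm_hidden_dim` are assumptions introduced here.

```python
import torch
import torch.nn as nn


class KnowledgePrefixAdapter(nn.Module):
    """Sketch: projects KG structural embeddings into the LLM embedding space."""

    def __init__(self, struct_dim: int, llm_hidden_dim: int):
        super().__init__()
        # A simple linear projection; the actual adapter in the paper may differ.
        self.proj = nn.Linear(struct_dim, llm_hidden_dim)

    def forward(self, head_emb, rel_emb, tail_emb):
        # Stack the three structural embeddings -> (batch, 3, struct_dim),
        # then map each one to an LLM-sized "virtual token".
        triple = torch.stack([head_emb, rel_emb, tail_emb], dim=1)
        return self.proj(triple)  # (batch, 3, llm_hidden_dim)


# Usage sketch (assumed dimensions): prepend the virtual tokens to the
# embedded text prompt before feeding it to the language model.
struct_dim, llm_hidden_dim = 512, 4096
adapter = KnowledgePrefixAdapter(struct_dim, llm_hidden_dim)

batch = 2
head = torch.randn(batch, struct_dim)   # pre-trained entity embedding
rel = torch.randn(batch, struct_dim)    # pre-trained relation embedding
tail = torch.randn(batch, struct_dim)   # pre-trained entity embedding
prefix = adapter(head, rel, tail)       # (2, 3, 4096)

prompt_embeds = torch.randn(batch, 32, llm_hidden_dim)  # embedded text prompt
inputs_embeds = torch.cat([prefix, prompt_embeds], dim=1)
# `inputs_embeds` would then be passed to the LLM for triple classification.
```

The key design choice suggested by this sketch is that structural knowledge enters the model as a short prefix of continuous embeddings rather than as extra text, so the prompt itself stays unchanged while the model still conditions on graph structure.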