NVIDIA's recent Tied-LoRA technique aims to improve the parameter efficiency of the LoRA method. By combining weight tying with selective training of the low-rank matrices, it seeks a favorable trade-off between performance and the number of trainable parameters. Experiments across a range of tasks and base language models show that Tied-LoRA can match the performance of standard LoRA while using only about 13% of its trainable parameters, giving developers and researchers in natural language processing a more parameter-efficient fine-tuning option.
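To make the parameter savings concrete, here is a minimal NumPy sketch of the weight-tying idea: instead of giving every layer its own low-rank pair (A, B) as standard LoRA does, a single shared pair is reused across all layers, with only small per-layer scaling vectors left trainable. The dimensions, initialization, and the exact per-layer parameterization below are illustrative assumptions, not the paper's precise configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_layers = 1024, 8, 24  # hypothetical hidden size, rank, layer count

# Standard LoRA: each layer owns its own low-rank pair (A_i, B_i),
# so the trainable count scales linearly with the number of layers.
lora_params = n_layers * (r * d + d * r)

# Tied-LoRA (sketch): one (A, B) pair is shared by every layer; each
# layer only trains small scaling vectors u_i (size d) and v_i (size r).
A = rng.standard_normal((r, d)) * 0.01   # shared down-projection
B = np.zeros((d, r))                     # shared up-projection, zero-init
tied_params = (r * d + d * r) + n_layers * (d + r)

def tied_delta(u, v):
    """Per-layer weight update: diag(u) @ B @ diag(v) @ A."""
    return (u[:, None] * B) @ (v[:, None] * A)

u, v = np.ones(d), np.ones(r)            # one layer's scaling vectors
x = rng.standard_normal(d)
h = tied_delta(u, v) @ x                 # low-rank residual on an input
# h is all zeros at init because B starts at zero, matching LoRA's
# convention that the adapter contributes nothing before training.

print(f"standard LoRA params: {lora_params:,}")
print(f"tied-LoRA params:     {tied_params:,} "
      f"({tied_params / lora_params:.1%} of LoRA)")
```

With these toy dimensions the tied variant needs roughly a tenth of standard LoRA's trainable parameters, in the same ballpark as the 13% figure reported for the real method; the exact ratio depends on hidden size, rank, and layer count.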