Recent studies question how much fine-tuning actually changes large language models, and a new method, URIAL, proposes alignment without fine-tuning at all. Using in-context learning with only 3 fixed examples and 1 system prompt, URIAL matches the performance of conventional alignment fine-tuning. The researchers argue that fine-tuning mainly alters the surface-level behavior of LLMs, giving engineers a practical way to reduce their reliance on fine-tuning.
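To make the idea concrete, here is a minimal sketch of URIAL-style in-context alignment: a base model is prompted with a system prompt plus a few fixed question-answer examples that demonstrate the desired answer style, followed by the new user query. The system prompt text, the three example pairs, and the use of gpt2 as a stand-in base model are illustrative assumptions, not the exact prompts or models from the paper.

```python
# Sketch of in-context alignment: system prompt + 3 fixed stylistic examples + new query.
# The wording below is illustrative; the original method uses its own curated prompt.
from transformers import pipeline

SYSTEM_PROMPT = (
    "Below is a conversation between a curious user and a helpful, honest AI assistant. "
    "The assistant gives detailed, well-structured, and safe answers."
)

# Three fixed examples stand in for fine-tuning; they only demonstrate the
# desired answer style and do not need to cover the topic of the new query.
FEW_SHOT_EXAMPLES = [
    ("What is the capital of France?",
     "The capital of France is Paris. It is the country's largest city and its "
     "political and cultural center."),
    ("How do I boil an egg?",
     "Place the egg in a pot of cold water, bring it to a boil, then simmer for "
     "7-10 minutes depending on how firm you want the yolk. Cool it in cold water "
     "before peeling."),
    ("Can you explain what a neural network is?",
     "A neural network is a machine learning model built from layers of simple units "
     "that transform inputs into outputs. By adjusting connection weights during "
     "training, it learns to approximate complex functions."),
]

def build_incontext_alignment_prompt(user_query: str) -> str:
    """Assemble one prompt: system prompt, the styled examples, then the new query."""
    parts = [SYSTEM_PROMPT, ""]
    for question, answer in FEW_SHOT_EXAMPLES:
        parts += [f"Query:\n{question}", f"Answer:\n{answer}", ""]
    parts += [f"Query:\n{user_query}", "Answer:\n"]
    return "\n".join(parts)

if __name__ == "__main__":
    # Any untuned base model could be substituted here; gpt2 is just a small placeholder.
    generator = pipeline("text-generation", model="gpt2")
    prompt = build_incontext_alignment_prompt("What are three tips for writing clear emails?")
    output = generator(prompt, max_new_tokens=128, do_sample=False)
    # The continuation after the final "Answer:" is the aligned-style response.
    print(output[0]["generated_text"][len(prompt):])
```

Because the examples and system prompt never change, the same prompt prefix can be cached and reused across queries, which is what makes this a drop-in alternative to fine-tuning for many use cases.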