Research has shown that large language models (LLMs), such as GPT-3, possess remarkable abilities: they can understand and answer questions posed by humans, assist with coding tasks, and more. Recently, researchers introduced the RAIN method, which enables LLMs to evaluate and correct their own outputs during inference, with no additional data or fine-tuning required. This approach not only improves the performance of LLMs but also lowers the success rate of adversarial attacks, yielding AI responses that are better aligned and more secure. The work thus offers a new way to align LLMs with human preferences without extra information or costly fine-tuning.
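The core idea of "self-assess and rewind" decoding can be illustrated with a minimal sketch. This is not the authors' implementation; the toy `generate_step` and `self_evaluate` functions below are hypothetical stand-ins for a real model's decoding step and its self-judgment of a candidate token, used only to show the accept-or-rewind control flow:

```python
import random

def generate_step(prefix):
    # Hypothetical stand-in for one auto-regressive decoding step of an LLM:
    # sample the next token from a toy vocabulary.
    vocab = ["helpful", "harmless", "rude", "honest"]
    return random.choice(vocab)

def self_evaluate(prefix, token):
    # Hypothetical stand-in for the model scoring its own candidate
    # continuation (the self-assessment idea: no external reward model).
    return 0.0 if token == "rude" else 1.0

def rewind_decode(max_tokens=5, threshold=0.5, seed=0):
    """Sketch of rewindable decoding: generate a token, self-evaluate it,
    and rewind (discard and re-sample) when the score falls below a
    threshold, so undesirable continuations never reach the output."""
    random.seed(seed)
    output = []
    while len(output) < max_tokens:
        token = generate_step(output)
        if self_evaluate(output, token) >= threshold:
            output.append(token)  # accept this step
        # otherwise: rewind -- the rejected token is simply dropped

    return output

print(rewind_decode())
```

In this sketch the "rewind" is a single-token retry; the actual method searches over longer candidate continuations, but the control flow (generate, self-judge, backtrack) is the same in spirit.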