In recent years, the emergence of the Transformer architecture has made generative AI based on large language models possible. This article examines how Transformers improve language processing through the self-attention mechanism, which lets every token in a sequence weigh its relevance to every other token, supporting a wide range of generative tasks. Despite limitations such as "hallucinations" (fluent but factually incorrect output), the technology has already given rise to numerous innovative applications and continues to expand into new fields, reshaping the development of artificial intelligence.
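To make the self-attention mechanism mentioned above concrete, here is a minimal sketch of single-head scaled dot-product attention in NumPy. The weight matrices, dimensions, and random inputs are illustrative assumptions, not taken from any particular model; real Transformers use multiple heads, learned weights, and masking.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax: subtract the row max before exponentiating
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model); project tokens into queries, keys, and values
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # each token scores its relevance to every other token
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # each row sums to 1
    # output: per-token weighted mixture of value vectors
    return weights @ V

# hypothetical toy dimensions and random weights for illustration
rng = np.random.default_rng(0)
d_model, d_k, seq_len = 8, 4, 5
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # one d_k-dimensional vector per input token
```

Because every token attends to every other token in one matrix multiplication, the mechanism captures long-range dependencies without the sequential bottleneck of recurrent networks, which is what enables the large-scale training behind today's generative models.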