The article introduces the essential resources for getting started with large language models, including papers, blogs, and GitHub repositories. The author outlines the core topics in large-model technology, such as the Transformer architecture and instruction tuning, and surveys recent research directions, such as in-context learning and chain-of-thought prompting, along with methods for evaluating large language models. Finally, the author summarizes useful development tools and frameworks, such as LangChain and the Hugging Face libraries.