Chinese scholars have introduced SelfExtend (abbreviated SE), a new method for extending the context window of large language models: with just four lines of code, it can triple a model's window length. SE is a plug-and-play method, compatible with any large model, and has been successfully tested on Mistral and Llama2; after applying SE, model performance on long-text tasks improves significantly. To address the positional-encoding limits that large models hit when processing long texts, SE combines two attention mechanisms: ordinary neighbor attention for nearby tokens, and grouped attention, which remaps the positions of distant tokens back into the range the model saw during training.
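The grouped-position idea can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration, not the authors' released code: within a small neighbor window, exact relative positions are kept; beyond it, positions are floor-divided by a group size (and shifted so the two regimes meet at the window boundary), so distant tokens reuse positions the model was trained on. The function and parameter names (`self_extend_positions`, `group_size`, `neighbor_window`) are assumptions chosen for illustration.

```python
import torch

def self_extend_positions(seq_len: int, group_size: int, neighbor_window: int) -> torch.Tensor:
    """Build a relative-position matrix in the spirit of SelfExtend.

    Nearby key tokens (within `neighbor_window` of the query) keep their
    ordinary relative positions; more distant tokens get floor-divided
    ("grouped") positions, shifted so the mapping is continuous at the
    window boundary. Names and shapes are illustrative assumptions.
    """
    q = torch.arange(seq_len).unsqueeze(1)  # query positions, shape (L, 1)
    k = torch.arange(seq_len).unsqueeze(0)  # key positions,   shape (1, L)
    rel = q - k  # ordinary relative positions (>= 0 under a causal mask)

    # Grouped positions: floor-divide by the group size, then shift so that
    # grouped and neighbor positions line up at the window boundary.
    grouped = rel // group_size + neighbor_window - neighbor_window // group_size

    # Use exact positions inside the neighbor window, grouped ones outside.
    return torch.where(rel <= neighbor_window, rel, grouped)

# Example: an 8192-token sequence never produces a relative position larger
# than what a 4096-token training window covered, for suitable settings.
pos = self_extend_positions(seq_len=8192, group_size=4, neighbor_window=1024)
print(pos.max().item())  # well below 4096 with these (hypothetical) settings
```

Because the remapping only changes the position indices fed to the attention computation, no retraining is needed, which is what makes the method plug-and-play.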