ZhipuAI has announced that GLM-4-Long, a large language model supporting ultra-long context lengths, is now live on its open platform, bigmodel.cn. Designed for extremely long texts, the model can ingest the equivalent of two copies of "Dream of the Red Chamber" or 125 research papers in a single pass, and is suited to scenarios such as translating lengthy documents, comprehensive financial-report analysis, key-information extraction, and building chatbots with ultra-long memory.

GLM-4-Long also offers a significant cost advantage: input and output are priced as low as 0.001 yuan per thousand tokens, an economical and efficient option for businesses and developers. Across its technical iterations, the model's context window has expanded from an initial 2K tokens to the current 1M tokens, incorporating numerous research advances in long-text processing.
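At the announced flat rate, the cost of even a full-context request is easy to estimate. The sketch below is a back-of-the-envelope calculation based only on the "0.001 yuan per thousand tokens for both input and output" figure above; actual billing rules should be confirmed against the platform's pricing page.

```python
# Back-of-the-envelope cost estimate at the announced flat rate
# (0.001 yuan per 1,000 tokens, input and output alike).
PRICE_PER_1K_TOKENS = 0.001  # yuan

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated request cost in yuan at the flat announced rate."""
    return (input_tokens + output_tokens) / 1000 * PRICE_PER_1K_TOKENS

# A request using the full 1M-token context plus a 2K-token answer
# comes to roughly 1 yuan:
print(f"{estimate_cost(1_000_000, 2_000):.3f} yuan")
```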


In the "Needle in a Haystack" evaluation, GLM-4-Long retrieved planted information without loss across the full 1M-token context. It also performed well in practical application tests such as financial-report reading, paper summarization, and novel comprehension, accurately extracting and analyzing key information.

The application of GLM-4-Long brings significant advantages to businesses, including in-depth dialogue understanding, complex document processing, more coherent content generation, and enhanced data analysis capabilities. These capabilities are particularly important in fields such as customer service, law, finance, research, marketing, advertising, and big data analysis.
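For developers, a long-document task like those described above reduces to packing the full text into a single chat request. The sketch below is a minimal, hedged example: the model identifier "glm-4-long" and the `zhipuai` SDK's OpenAI-style `chat.completions.create` interface are assumptions to be verified against the API documentation linked below.

```python
import os

def build_messages(document: str, instruction: str) -> list[dict]:
    """Pack a long document and an instruction into a chat payload."""
    return [
        {"role": "system",
         "content": "You are an assistant for long-document analysis."},
        {"role": "user", "content": f"{instruction}\n\n{document}"},
    ]

def summarize(document: str) -> str:
    """Send the whole document in one request (assumed SDK and model id)."""
    from zhipuai import ZhipuAI  # pip install zhipuai
    client = ZhipuAI(api_key=os.environ["ZHIPUAI_API_KEY"])
    response = client.chat.completions.create(
        model="glm-4-long",  # assumed model identifier; check the API docs
        messages=build_messages(
            document, "Summarize the key points of this document."),
    )
    return response.choices[0].message.content
```

Because the 1M-token window covers the entire document, no chunking or retrieval pipeline is needed for texts within that limit.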

API Documentation:

https://bigmodel.cn/dev/api#glm-4

Experience Center:

https://bigmodel.cn/console/trialcenter