DocLLM

Multimodal Document Understanding Model

DocLLM is a multimodal document understanding model that processes both the text and the spatial layout of enterprise documents, delivering performance superior to existing large language models (LLMs). It is a lightweight extension of standard LLMs that avoids expensive image encoders, relying instead on bounding box information to incorporate spatial layout structure. By decomposing the attention mechanism of classical transformers into a set of disentangled matrices, it captures cross-alignment between the text and spatial modalities. In addition, a pre-training objective that infills text blocks addresses the irregular layouts and heterogeneous content frequently encountered in visual documents. DocLLM outperforms existing LLMs on 14 of 16 datasets across all tasks and generalizes well to 4 of 5 previously unseen datasets.
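The disentangled attention described above can be sketched roughly as follows. This is a minimal illustrative PyTorch module, not the reference implementation: the class name, the linear bounding-box embedding, and the fixed mixing weights (lambda_tt, lambda_ts, lambda_st, lambda_ss) are assumptions made for this example.

```python
import torch
import torch.nn as nn


class DisentangledSpatialAttention(nn.Module):
    """Sketch of DocLLM-style disentangled attention (illustrative only).

    Text tokens and their bounding boxes get separate query/key
    projections; the four cross-modal score matrices are mixed with
    scalar weights. Names and defaults here are assumptions, not the
    paper's reference code.
    """

    def __init__(self, d_model: int, lambdas=(1.0, 1.0, 1.0, 1.0)):
        super().__init__()
        self.d_model = d_model
        # Separate projections for the text and spatial modalities.
        self.q_text = nn.Linear(d_model, d_model)
        self.k_text = nn.Linear(d_model, d_model)
        self.v_text = nn.Linear(d_model, d_model)
        self.q_spatial = nn.Linear(d_model, d_model)
        self.k_spatial = nn.Linear(d_model, d_model)
        # Bounding boxes (x1, y1, x2, y2) -> spatial embedding;
        # a simple linear map is an assumption for this sketch.
        self.box_embed = nn.Linear(4, d_model)
        self.lambdas = lambdas

    def forward(self, text_emb, boxes):
        # text_emb: (batch, seq, d_model); boxes: (batch, seq, 4)
        spatial_emb = self.box_embed(boxes)
        qt, kt = self.q_text(text_emb), self.k_text(text_emb)
        qs, ks = self.q_spatial(spatial_emb), self.k_spatial(spatial_emb)
        l_tt, l_ts, l_st, l_ss = self.lambdas
        # Four disentangled score matrices: text-text, text-spatial,
        # spatial-text, and spatial-spatial.
        scores = (
            l_tt * qt @ kt.transpose(-2, -1)
            + l_ts * qt @ ks.transpose(-2, -1)
            + l_st * qs @ kt.transpose(-2, -1)
            + l_ss * qs @ ks.transpose(-2, -1)
        ) / self.d_model ** 0.5
        attn = scores.softmax(dim=-1)
        return attn @ self.v_text(text_emb)


if __name__ == "__main__":
    layer = DisentangledSpatialAttention(d_model=64)
    tokens = torch.randn(2, 10, 64)    # text embeddings
    boxes = torch.rand(2, 10, 4)       # normalized (x1, y1, x2, y2)
    print(layer(tokens, boxes).shape)  # torch.Size([2, 10, 64])
```

Because the spatial branch consumes only four bounding-box coordinates per token rather than pixels, no image encoder is needed, which is what keeps the extension lightweight.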

DocLLM Visit Over Time

Monthly Visits: 20,899,836
Bounce Rate: 46.04%
Pages per Visit: 5.2
Visit Duration: 00:04:57
