FaceWall Intelligence has announced that, as a contributor to and beneficiary of the open-source community, it has decided, together with the Tsinghua NLP Lab, to make the FaceWall "Cannon" MiniCPM series available for free commercial use. Both the MiniCPM and MiniCPM-V models are now open to academic researchers as well as to businesses and individuals for commercial use. This move is set to drive innovation and development in both the academic and commercial sectors.

Dual Openness for Academia and Business: Promoting Knowledge Sharing and Technological Advancement

The weights of the MiniCPM series models will be fully open to academic researchers, fostering the sharing of knowledge and the advancement of academic research. At the same time, businesses and individuals can start commercial use of these models after completing a simple questionnaire registration. This open strategy not only demonstrates support for academic research but also encourages commercial innovation.

Adherence to License Agreements: Ensuring Compliance in Use

It is important to note that community members using the MiniCPM series models must adhere to the Apache 2.0 license and the "MiniCPM Model Community License Agreement." This ensures compliant use of the models and protects the rights of both developers and users.

MiniCPM-Llama3-V2.5: Outstanding Performance, Multi-Modal Acceleration on the Edge

MiniCPM-Llama3-V2.5 has become a focal point thanks to its outstanding performance. The model surpasses Gemini Pro and GPT-4V in overall multi-modal performance on the edge, and its OCR capability in particular reaches state-of-the-art (SOTA) level, accurately recognizing long passages of text in complex images.
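For readers who want to try the model, below is a minimal sketch of multi-modal inference (here, an OCR-style prompt) using the Hugging Face transformers library. The repository ID openbmb/MiniCPM-Llama3-V-2_5, the chat() interface, and its parameters follow the model's published usage via trust_remote_code and may change, so treat them as assumptions to verify against the official model card.

```python
# Minimal sketch: multi-modal inference with MiniCPM-Llama3-V2.5 via Hugging Face
# transformers. The repository ID and chat() interface follow the model's published
# usage (custom code loaded with trust_remote_code); verify against the model card.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "openbmb/MiniCPM-Llama3-V-2_5"  # assumed Hub repository ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(
    MODEL_ID,
    trust_remote_code=True,      # the model ships custom code for multi-modal chat
    torch_dtype=torch.float16,   # half precision to reduce memory use
).to("cuda").eval()

# Ask the model to transcribe text from a complex image (OCR-style prompt).
image = Image.open("document_photo.jpg").convert("RGB")
msgs = [{"role": "user", "content": "Please transcribe all text visible in this image."}]

answer = model.chat(
    image=image,
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,
    temperature=0.7,
)
print(answer)
```

Because the model relies on trust_remote_code, the exact chat() signature is defined by the repository's custom code rather than by transformers itself.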

High Energy Efficiency, Support for Multiple Languages

Another highlight of MiniCPM-Llama3-V2.5 is its energy efficiency. The model requires only 8GB of VRAM and achieves fast inference on a consumer GPU such as the NVIDIA GeForce RTX 4070. On mobile devices, image encoding speed has been increased 150-fold, marking the first system-level multi-modal acceleration on the edge. Impressively, the model supports over 30 languages, providing strong support for multilingual environments.
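To stay within the roughly 8GB VRAM budget mentioned above, a pre-quantized checkpoint can be used where available, and prompts need not be in English. The sketch below assumes an int4 variant published alongside the main model (openbmb/MiniCPM-Llama3-V-2_5-int4); the exact repository name, quantization details, and device placement should be confirmed on the official model pages.

```python
# Sketch: memory-conscious deployment plus a multilingual prompt. The int4 repository
# ID below is an assumption based on the project's naming convention; confirm it (and
# whether an explicit .to("cuda") is needed) on the official model card.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "openbmb/MiniCPM-Llama3-V-2_5-int4"  # assumed pre-quantized 4-bit variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=True).eval()

# The model advertises support for 30+ languages, so prompts can be non-English.
image = Image.open("street_sign.jpg").convert("RGB")
msgs = [{"role": "user", "content": "Beschreibe bitte, was auf diesem Bild zu sehen ist."}]

answer = model.chat(image=image, msgs=msgs, tokenizer=tokenizer, sampling=True, temperature=0.7)
print(answer)

# Rough check of the GPU memory occupied by the loaded weights.
if torch.cuda.is_available():
    print(f"Allocated VRAM: {torch.cuda.memory_allocated() / 1024**3:.2f} GiB")
```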

MiniCPM Series Models: Drivers of Innovation

With the MiniCPM series models now free for commercial use, we look forward to seeing more innovative applications emerge. Whether in academic research or commercial applications, the MiniCPM series models will serve as drivers of innovation, bringing profound impacts to various industries.