On February 17, the Opera team under Kunlun Wanwei integrated the DeepSeek R1 series of models into Opera Developer, enabling personalized local deployment. The move extends Opera's application of AI technology and gives users more powerful on-device AI capabilities.

In 2024, Opera became the first to introduce built-in local large language models (LLMs) in a web browser, giving users access to more than 50 models. With the DeepSeek R1 series now added, Opera Developer uses the Ollama framework (built on llama.cpp) to run the models entirely on the user's machine, further strengthening the browser's AI capabilities.

Users can run a DeepSeek R1 model on their local device in a few steps. First, download or update to the latest version of Opera Developer, open the browser, and click the Aria icon in the left sidebar. In the Aria interface, click the logo in the upper-left corner to expand the chat history, then open the settings menu and select the "Local AI Model" option. Type "deepseek" in the search box, choose one of the listed models, and download it. Once the download finishes, open a new chat with Aria, select the downloaded model, and start using it.
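For readers who prefer to experiment outside the browser, the same family of distilled DeepSeek R1 models can also be served by a standalone Ollama installation and queried over its local REST API. The sketch below is illustrative only and does not reflect Opera's internal mechanism, which bundles its own runtime; the model tag deepseek-r1:7b and the default port 11434 are assumptions about a typical Ollama setup.

```python
import json
import urllib.request

# Assumed local Ollama endpoint and model tag; adjust to match your own setup.
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL_TAG = "deepseek-r1:7b"  # assumption: one of the distilled R1 tags pulled locally


def ask_local_model(prompt: str) -> str:
    """Send one prompt to a locally running Ollama server and return its reply."""
    payload = json.dumps({
        "model": MODEL_TAG,
        "prompt": prompt,
        "stream": False,  # ask for the full reply in a single JSON object
    }).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.loads(response.read().decode("utf-8"))
    return body["response"]


if __name__ == "__main__":
    print(ask_local_model("In one sentence, what does it mean to run an LLM locally?"))
```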


Opera offers users a variety of DeepSeek R1 model options, allowing them to choose a suitable model based on the performance capabilities of their devices.
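As a rough rule of thumb (not an official Opera recommendation), larger distilled R1 variants need more memory. The sketch below maps free RAM to a candidate model tag; the thresholds and tag names are illustrative assumptions, not published requirements.

```python
# Illustrative only: the RAM thresholds and tag names below are assumptions,
# not Opera or DeepSeek guidance.
ASSUMED_VARIANTS = [
    (48, "deepseek-r1:32b"),   # larger distill, roughly 48 GB+ of free RAM assumed
    (16, "deepseek-r1:14b"),
    (8,  "deepseek-r1:7b"),
    (4,  "deepseek-r1:1.5b"),  # smallest distill, for low-memory machines
]


def suggest_variant(free_ram_gb: float) -> str | None:
    """Return a candidate DeepSeek R1 tag for the given free RAM, or None if too little."""
    for min_gb, tag in ASSUMED_VARIANTS:
        if free_ram_gb >= min_gb:
            return tag
    return None


print(suggest_variant(16))  # -> "deepseek-r1:14b"
```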

By integrating the DeepSeek R1 models, Opera not only gives users stronger local AI capabilities but also pushes the adoption of AI technology in the browser field further ahead. The update brings Opera Developer users a more personalized and efficient AI experience, while underscoring Kunlun Wanwei's continued innovation in applying AI technology.