Amazon is reportedly developing a multimodal large language model that, according to The Information, carries the internal code name "Olympus." The model is expected to be officially announced at next week's AWS re:Invent conference.

In November of last year, Reuters reported that Amazon had invested millions of dollars in training a large language model called "Olympus," said to have as many as 2 trillion parameters. It remains unclear whether the model in The Information's report is that same system, a new version of it, or an entirely new model.

Image Source Note: Image generated by AI, licensed from image service provider Midjourney

The new "Olympus" model is reported to handle not only text but also images and video. Users could, for example, search a video library for specific clips using natural-language queries, and the model is also said to be able to help energy companies analyze geological data, suggesting broad application potential.
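
Nothing about Olympus's interface is public, but natural-language clip search of this kind typically relies on embedding both the query and the indexed clips into a shared vector space and ranking by similarity. The Python sketch below only illustrates that general pattern; the clip names, embeddings, and query are all invented placeholders.

```python
# Hypothetical sketch of natural-language clip search over a video library.
# The embeddings below are placeholders standing in for vectors a multimodal
# model would produce; real vectors would have hundreds of dimensions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Precomputed embeddings for each indexed clip (illustrative values only).
clip_index = {
    "clip_001_goal_celebration.mp4": np.array([0.9, 0.1, 0.0]),
    "clip_002_press_conference.mp4": np.array([0.1, 0.8, 0.2]),
    "clip_003_drone_flyover.mp4":    np.array([0.2, 0.1, 0.9]),
}

# Placeholder embedding for a query such as
# "players celebrating after the winning goal".
query_embedding = np.array([0.85, 0.15, 0.05])

# Rank clips by similarity to the query and return the best match.
ranked = sorted(
    clip_index.items(),
    key=lambda item: cosine_similarity(query_embedding, item[1]),
    reverse=True,
)
print("Best match:", ranked[0][0])
```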

Sources cited by The Information expect Amazon to announce the new model at next week's AWS re:Invent conference. If "Olympus" is indeed unveiled at the event, it will likely be offered through Amazon Web Services (AWS), possibly as part of AWS Bedrock, a managed service Amazon launched in April of last year that gives users access to cutting-edge models hosted in the cloud.
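
If the model does ship through Bedrock, customers would most likely call it the same way they call other Bedrock-hosted models, for example via the boto3 Bedrock Runtime client. The sketch below shows that general calling pattern; the "Olympus" model ID is a made-up placeholder, since Amazon has published no identifier for it.

```python
# A minimal sketch of calling a Bedrock-hosted model with boto3.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="amazon.example-olympus-v1:0",  # hypothetical placeholder ID
    messages=[
        {"role": "user", "content": [{"text": "Summarize this quarter's sales report."}]}
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

# The Converse API returns the assistant message under output.message.content.
print(response["output"]["message"]["content"][0]["text"])
```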

The service currently offers more than half a dozen models developed by Amazon, the most advanced being Amazon Titan Text Premier, which accepts inputs of up to 32,000 tokens and can generate text and code as well as perform step-by-step reasoning.
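
As a rough illustration of what working with the current flagship looks like, the following sketch calls Titan Text Premier through Bedrock's lower-level InvokeModel API. The model ID and request-body fields follow the commonly documented Titan text format, but they should be verified against the current Bedrock documentation before use.

```python
# Hedged sketch: invoking Amazon Titan Text Premier via Bedrock's InvokeModel API.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "inputText": "Write a Python function that reverses a string, then explain it step by step.",
    "textGenerationConfig": {
        "maxTokenCount": 512,  # cap on generated output tokens
        "temperature": 0.3,
        "topP": 0.9,
    },
}

response = bedrock.invoke_model(
    modelId="amazon.titan-text-premier-v1:0",
    body=json.dumps(body),
)

# Titan text models return generated text under results[0].outputText.
result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```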

Bedrock also includes three embedding models developed by Amazon, which convert data into the vector representations that machine learning applications use to store and compare information. One of them supports embeddings for multimodal data, which could make it easier for customers to use the multimodal features of "Olympus."
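
As an illustration, the sketch below requests a multimodal embedding from Amazon's Titan Multimodal Embeddings model on Bedrock, combining a text description with an image such as a video frame. The model ID and body fields are assumptions based on the documented Titan format and may differ in practice.

```python
# Sketch of generating a multimodal (text + image) embedding via Bedrock.
import base64
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

with open("video_frame.png", "rb") as f:  # any representative frame or image
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

body = {
    "inputText": "a drone flyover of a coastal city at sunset",
    "inputImage": image_b64,
}

response = bedrock.invoke_model(
    modelId="amazon.titan-embed-image-v1",
    body=json.dumps(body),
)

# The response carries a single embedding vector for the combined input.
embedding = json.loads(response["body"].read())["embedding"]
print(len(embedding), "dimensions")
```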

In addition to Amazon's own models, Bedrock offers language models from other companies, including Anthropic PBC, which has a close relationship with Amazon. Anthropic has received a total of $8 billion in funding from Amazon, including a $4 billion investment announced just last week.

Releasing "Olympus" could be one way for Amazon to reduce its reliance on Anthropic. Other tech giants are likewise working to bring more AI technology in-house; Meta, for example, is developing its own search engine to lessen its dependence on Microsoft's and Google's search technologies.

Amazon's AI strategy extends beyond software to hardware. The company has developed two chip families optimized for training and inference workloads, respectively: AWS Trainium and AWS Inferentia. Last week, Anthropic also partnered with Amazon to help improve the performance of the Trainium chips.
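
For developers, targeting these chips usually goes through the AWS Neuron SDK. The hedged sketch below compiles a small stand-in PyTorch model with torch-neuronx for Inferentia2/Trainium hardware; it assumes a Neuron-equipped instance (such as inf2 or trn1) with the torch_neuronx package installed, and the model itself is purely illustrative.

```python
# Hedged sketch of compiling a PyTorch model for AWS Neuron devices
# (Inferentia2/Trainium) with torch-neuronx; runs only on Neuron-equipped instances.
import torch
import torch_neuronx  # installed from the AWS Neuron package repositories

# A tiny stand-in model; any traced torch.nn.Module in eval mode works similarly.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 10),
).eval()

example_input = torch.rand(1, 3, 224, 224)

# trace() compiles the model ahead of time for Neuron cores; the returned
# module is then invoked like any TorchScript model.
neuron_model = torch_neuronx.trace(model, example_input)
print(neuron_model(example_input).shape)
```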

Key Points:  

💡 Amazon is developing a multimodal language model called "Olympus," expected to be released at next week's AWS re:Invent conference.  

🎥 The new model can handle text, images, and videos, supporting natural language searches for specific clips in a video library.  

💻 "Olympus" may be offered through the AWS Bedrock service, and Amazon's AI strategy also includes hardware developments.