The research team at the Singapore University of Technology and Design has developed TinyLlama, a compact AI model with a footprint of roughly 550MB, designed for memory-constrained edge devices. The model is being trained on a dataset of 3 trillion tokens using 16 A100-40G GPUs, with training expected to complete within 90 days. If successful, TinyLlama will provide high-performance AI for applications such as real-time machine translation, and it is intended to be part of a suite of smaller language models for building a wide range of applications.
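As a rough sanity check on those figures, the numbers in the announcement (3 trillion tokens, 90 days, 16 GPUs) imply a particular sustained training throughput. The sketch below is purely illustrative arithmetic, assuming uninterrupted round-the-clock training; it is not a figure published by the team.

```python
# Back-of-the-envelope estimate (illustrative, assumes continuous training):
# what sustained throughput does 3 trillion tokens in 90 days on 16 GPUs imply?

TOTAL_TOKENS = 3_000_000_000_000  # 3 trillion tokens, per the announcement
TRAINING_DAYS = 90
NUM_GPUS = 16

seconds = TRAINING_DAYS * 24 * 3600          # 90 days in seconds
tokens_per_second = TOTAL_TOKENS / seconds   # aggregate throughput needed
tokens_per_gpu = tokens_per_second / NUM_GPUS

print(f"aggregate: {tokens_per_second:,.0f} tokens/s")
print(f"per GPU:   {tokens_per_gpu:,.0f} tokens/s")
```

Under these assumptions, the cluster must process roughly 386,000 tokens per second overall, or about 24,000 tokens per second per A100-40G, which gives a sense of why a small model is necessary for this schedule on modest hardware.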