As artificial intelligence becomes more deeply integrated into corporate workflows and products, market demand for machine learning operations (MLOps) platforms is rising. These platforms help businesses build, test, and deploy machine learning models more easily. Despite numerous competitors, from startups such as InfuseAI and Comet to major cloud providers like Google Cloud, Microsoft Azure, and AWS, South Korea's VESSL AI aims to carve out its own niche by focusing on optimizing GPU costs.

Image: a robot counting money (AI-generated; licensed via Midjourney)

VESSL AI recently closed a $12 million Series A funding round to accelerate development of its infrastructure, which primarily serves enterprises that want to build custom large language models (LLMs) and vertical AI agents. The company currently has 50 corporate clients, including Hyundai Motor, LIG Nex1 (a South Korean aerospace and weapons manufacturer), and TMAP Mobility (a joint venture between Uber and SK Telecom). VESSL AI has also established strategic partnerships with US-based companies such as Oracle and Google Cloud.

VESSL AI's founding team consists of Jaeman Kuss An (CEO), Jihwan Jay Chun (CTO), Intae Ryoo (Chief Product Officer), and Yongseon Sean Lee (Technical Lead). Before founding the company, they worked at well-known enterprises such as Google and PUBG, as well as at several AI startups. While developing machine learning models at a previous medical technology job, An found the process cumbersome and resource-intensive, which led the team to build on a hybrid infrastructure to improve efficiency and reduce costs.

VESSL AI's MLOps platform employs a multi-cloud strategy, drawing on GPUs from different cloud service providers to help businesses cut GPU expenditures by up to 80%. This approach not only mitigates GPU shortages but also optimizes the training, deployment, and operation of AI models, especially large language models. An said the system automatically selects the most cost-effective and efficient resources, saving customers money.
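VESSL AI has not published the internals of its scheduler, but the general idea of cost-based resource selection across clouds can be sketched roughly as follows. All names, prices, and the `GpuOffer` type here are invented for illustration and are not VESSL AI's actual API:

```python
from dataclasses import dataclass

@dataclass
class GpuOffer:
    provider: str       # hypothetical cloud or on-prem source
    gpu_type: str       # e.g. "A100"
    hourly_cost: float  # USD per GPU-hour
    available: int      # GPUs currently available

def cheapest_offer(offers, gpu_type, count):
    """Return the lowest-cost offer that can satisfy the request, or None."""
    candidates = [o for o in offers
                  if o.gpu_type == gpu_type and o.available >= count]
    return min(candidates, key=lambda o: o.hourly_cost, default=None)

# Invented example prices: the cheapest source lacks capacity,
# so the scheduler falls back to the next-cheapest one.
offers = [
    GpuOffer("cloud-a", "A100", 3.20, 16),
    GpuOffer("cloud-b", "A100", 2.45, 8),
    GpuOffer("on-prem", "A100", 1.10, 4),
]
best = cheapest_offer(offers, "A100", 8)  # selects "cloud-b"
```

A production scheduler would weigh far more than hourly price (data locality, spot-instance interruption risk, network egress), but the savings claim rests on this kind of automated comparison rather than pinning workloads to a single provider.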

VESSL's platform offers four core features: VESSL Run (automated AI model training), VESSL Serve (real-time deployment), VESSL Pipelines (integration of model training and data preprocessing to streamline workflows), and VESSL Cluster (optimized GPU resource usage in cluster environments). With this round, VESSL AI's total funding has reached $16.8 million; the company employs 35 staff across South Korea and San Mateo, USA.

Key Points:

🌟 VESSL AI secures $12 million in Series A funding, focusing on optimizing corporate GPU costs.

💼 Currently serves 50 enterprise clients, including Hyundai Motor and LIG Nex1.

🚀 Platform reduces GPU costs by up to 80% through a multi-cloud strategy and offers multiple core features.