Hume AI's Empathic Voice Interface (EVI) is an API powered by an empathic large language model (eLLM) that understands and emulates tone of voice, word emphasis, and other vocal cues to optimize human-computer interaction. Built on more than a decade of research, millions of proprietary data points, and over 30 publications in leading journals, EVI aims to give any application a more natural, empathic voice interface that makes interacting with AI feel more human. The technology can be applied in fields such as sales and meeting analysis, health and wellness, AI research services, and social networking.
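As a rough illustration of how an application might talk to an EVI-style voice API, the sketch below streams a short audio clip over a WebSocket and prints the assistant's reply along with any prosody (tone-of-voice) scores. This is a minimal, hypothetical example: the endpoint URL, the query-string authentication, and the message field names (`audio_input`, `assistant_message`, `prosody`, `assistant_end`) are assumptions for illustration only; consult Hume AI's official documentation and SDKs for the actual interface.

```python
import asyncio
import base64
import json
import os

import websockets  # third-party: pip install websockets

async def stream_audio(path: str) -> None:
    # Endpoint and auth scheme are assumed for illustration; check Hume's docs.
    api_key = os.environ["HUME_API_KEY"]
    url = f"wss://api.hume.ai/v0/evi/chat?api_key={api_key}"

    async with websockets.connect(url) as ws:
        # Send one audio clip as a base64-encoded message (format assumed).
        with open(path, "rb") as f:
            audio_b64 = base64.b64encode(f.read()).decode()
        await ws.send(json.dumps({"type": "audio_input", "data": audio_b64}))

        # Read server messages until the assistant finishes its turn.
        # The message schema below is hypothetical, not the real API contract.
        async for raw in ws:
            msg = json.loads(raw)
            if msg.get("type") == "assistant_message":
                print("EVI:", msg.get("message", {}).get("content"))
                print("prosody scores:", msg.get("models", {}).get("prosody"))
            elif msg.get("type") == "assistant_end":
                break

if __name__ == "__main__":
    asyncio.run(stream_audio("sample.wav"))
```

The streaming, event-based shape of this sketch reflects the real-time nature of a voice interface: audio goes up incrementally and emotion-annotated responses come back as they are generated, rather than through a single request-response call.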