Ferret-UI is the first UI-centric multimodal large language model (MLLM), designed to execute referring, grounding, and reasoning tasks on user interfaces. Built on Gemma-2B and Llama-3-8B, it can carry out complex UI understanding tasks. This release follows Apple's research paper and serves as a capable image-to-text model, well suited to dialogue and text generation over UI screenshots.
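As a rough illustration of how such a checkpoint might be used for image-to-text inference, here is a minimal sketch built on the Hugging Face `transformers` pipeline API. The model id, prompt, and file name are placeholders, not the official release names, and the released weights may require their own custom inference code rather than the generic pipeline shown here.

```python
# Minimal sketch: running a UI screenshot through an image-to-text pipeline.
# MODEL_ID is a hypothetical placeholder -- substitute the actual published
# Ferret-UI checkpoint (Gemma-2B or Llama-3-8B variant).
from transformers import pipeline
from PIL import Image

MODEL_ID = "your-org/ferret-ui-llama-3-8b"  # placeholder, not the real id

captioner = pipeline(
    "image-to-text",
    model=MODEL_ID,
    trust_remote_code=True,  # may be needed if the checkpoint ships custom modeling code
)

screenshot = Image.open("screenshot.png")  # a mobile UI screenshot
outputs = captioner(screenshot, prompt="Describe the UI elements on this screen.")
print(outputs[0]["generated_text"])
```

In practice, referring and grounding tasks would pass region coordinates or expect bounding boxes in the output, following the prompt formats described in the paper.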