Researchers from UC Berkeley and Microsoft have developed Gorilla, a LLaMA-based large language model that excels at generating accurate API calls. Gorilla surpasses leading LLMs such as GPT-4 at this task, reducing hallucinated calls and adapting to API documentation changes at test time. The model is trained on large API datasets drawn from Torch Hub, TensorFlow Hub, and Hugging Face. Gorilla's code, models, data, and demos are available on GitHub, with plans to add further domains in the future, such as Kubernetes, GCP, AWS, and OpenAPI.
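To illustrate the kind of interaction Gorilla supports, the sketch below sends a natural-language task description to a Gorilla model and prints the suggested API call. It assumes an OpenAI-compatible inference endpoint; the `base_url`, API key, and model name are illustrative placeholders rather than official values, so consult the GitHub repository for the actual hosted endpoints and model names.

```python
# Minimal sketch: querying a Gorilla model through an OpenAI-compatible
# chat endpoint. The base_url, api_key, and model name are assumptions
# for illustration only, not official values from the Gorilla project.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed: a self-hosted Gorilla server
    api_key="EMPTY",                      # many self-hosted servers ignore the key
)

response = client.chat.completions.create(
    model="gorilla-7b-hf-v1",  # assumed model name
    messages=[
        {
            "role": "user",
            "content": "I want to translate English text to German. "
                       "Which Hugging Face model and API call should I use?",
        }
    ],
    temperature=0.0,
)

# Gorilla is expected to answer with a concrete, callable API invocation,
# e.g. pipeline('translation_en_to_de', model='...') for Hugging Face.
print(response.choices[0].message.content)
```

The point of the example is the task format: the user describes a goal in plain language, and the model returns an executable call against one of the supported model hubs rather than free-form prose.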