LMSYS Org is an open research organization dedicated to democratizing large-model technology and the systems infrastructure behind it. They developed the Vicuna chatbot, released in 7B/13B/33B parameter sizes, which reaches roughly 90% of ChatGPT's quality according to a GPT-4-based evaluation. They also run Chatbot Arena, a crowdsourced platform that evaluates LLMs at scale through anonymous, randomized head-to-head battles and ranks them with Elo ratings. Their other projects include SGLang, an efficient interface and runtime for complex LLM programs; LMSYS-Chat-1M, a large-scale dataset of real-world LLM conversations; FastChat, an open platform for training, serving, and evaluating LLM-based chatbots; and MT-Bench, a set of challenging, multi-turn, open-ended questions for evaluating chatbots.
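To make the Elo ranking idea concrete, here is a minimal sketch of the standard Elo update that underlies Arena-style leaderboards built from pairwise human votes. The function names and the K-factor are illustrative assumptions, not Chatbot Arena's actual implementation, whose production rating pipeline may differ.

```python
# Minimal sketch of a standard Elo update for ranking chatbots from
# pairwise votes. The K-factor of 32 and the starting rating of 1000
# are illustrative choices, not Chatbot Arena's actual settings.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating_a: float, rating_b: float, score_a: float, k: float = 32.0):
    """Update both ratings after one battle.

    score_a is 1.0 if A wins, 0.0 if B wins, 0.5 for a tie.
    """
    e_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - e_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# Example: two models start at 1000; model A wins one crowdsourced vote.
a, b = elo_update(1000.0, 1000.0, score_a=1.0)
print(round(a), round(b))  # 1016 984
```

Because each vote shifts ratings by at most K points, a single battle matters little, but aggregating many crowdsourced battles yields a stable relative ranking of the models.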