DriveVLM is an autonomous driving system that leverages vision-language models (VLMs) to strengthen scene understanding and planning. The system employs a chain of reasoning modules, covering scene description, scene analysis, and hierarchical planning, to improve comprehension of complex and long-tail scenarios. To address VLMs' limitations in spatial reasoning and their heavy computational demands, DriveVLM-Dual was developed as a hybrid system that integrates the strengths of DriveVLM with a traditional autonomous driving pipeline. Experiments on the nuScenes and SUP-AD datasets demonstrate the effectiveness of DriveVLM and DriveVLM-Dual in handling complex and unpredictable driving conditions. DriveVLM-Dual has since been deployed in production vehicles, validating its efficacy in real-world autonomous driving.
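To make the dual-system idea concrete, the sketch below shows one plausible dispatch scheme: a fast classical planner runs every cycle, while a slow VLM reasoning chain (description, analysis, hierarchical plan) is consulted only for long-tail scenes and its coarse trajectory is used to refine the classical output. All function names, data shapes, and the blending rule here are illustrative assumptions, not the authors' actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Trajectory:
    waypoints: list  # (x, y) points in the ego frame
    source: str      # which planner produced this trajectory


def classical_planner(scene):
    # Fast traditional pipeline: always returns a real-time trajectory.
    return Trajectory(waypoints=[(0.0, 0.0), (1.0, 0.5)], source="classical")


def vlm_planner(scene):
    # Slow VLM chain (hypothetical): scene description -> scene analysis
    # -> hierarchical plan producing a coarse set of waypoints.
    critical = [o for o in scene["objects"] if o.get("critical")]
    _ = f"scene with {len(critical)} critical object(s)"  # stand-in for VLM output
    coarse_plan = [(0.0, 0.0), (0.8, 0.3), (1.5, 0.2)]
    return Trajectory(waypoints=coarse_plan, source="vlm")


def dual_plan(scene, long_tail: bool):
    """Run the classical pipeline every cycle; for long-tail scenes,
    refine it toward the VLM's low-frequency coarse trajectory."""
    fast = classical_planner(scene)
    if not long_tail:
        return fast
    slow = vlm_planner(scene)
    # Simple blend (assumption): average the overlapping waypoints.
    blended = [((fx + sx) / 2, (fy + sy) / 2)
               for (fx, fy), (sx, sy) in zip(fast.waypoints, slow.waypoints)]
    return Trajectory(waypoints=blended, source="dual")


scene = {"objects": [{"type": "debris", "critical": True}]}
print(dual_plan(scene, long_tail=False).source)  # classical
print(dual_plan(scene, long_tail=True).source)   # dual
```

The key design point this illustrates is asymmetry: the classical branch guarantees a trajectory at control frequency, so the VLM can run slowly and only needs to contribute when the scene is unusual.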