SpatialVLM is a vision-language model developed by Google DeepMind that can understand and reason about spatial relationships in images. Trained on a massive, automatically generated synthetic dataset of spatial question-answer pairs, it performs quantitative spatial reasoning, such as estimating distances and sizes, much as humans do intuitively. This not only improves its performance on spatial VQA tasks but also opens up new possibilities for downstream applications such as chain-of-thought spatial reasoning and robot control.