Segment Anything Model 2 (SAM 2) is a visual segmentation model released by Meta's AI research division, FAIR. It processes video in real time using a simple transformer architecture combined with a streaming memory design. To train it, the team built a model-in-the-loop data engine: user interactions with the model were used to annotate and collect SA-V, the largest video segmentation dataset to date. Trained on this dataset, SAM 2 delivers strong performance across a wide range of tasks and visual domains.
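The streaming memory idea mentioned above can be illustrated with a small conceptual sketch: the model keeps a bounded bank of features from recent frames and conditions each new frame on that bank, so per-frame cost stays constant no matter how long the video is. The class below is a hypothetical toy for illustration only, not SAM 2's actual implementation; all names and the simplified dot-product "attention" are assumptions.

```python
from collections import deque

class StreamingMemory:
    """Toy streaming memory bank (illustrative only, not SAM 2's code).

    Stores features from the most recent `capacity` frames. Old entries
    are evicted automatically, so memory and compute per frame remain
    bounded -- the property that makes streaming video inference feasible.
    """
    def __init__(self, capacity=3):
        self.bank = deque(maxlen=capacity)  # FIFO eviction beyond capacity

    def attend(self, query):
        # Simplified stand-in for attention: weight each stored feature
        # by its dot-product similarity to the query, then average.
        if not self.bank:
            return query
        weights = [sum(q * k for q, k in zip(query, feat)) for feat in self.bank]
        total = sum(weights) or 1.0  # guard against a zero denominator
        fused = [0.0] * len(query)
        for w, feat in zip(weights, self.bank):
            for i, v in enumerate(feat):
                fused[i] += (w / total) * v
        return fused

    def write(self, feature):
        self.bank.append(feature)

memory = StreamingMemory(capacity=3)
for t in range(10):
    frame_feature = [float(t), 1.0]       # stand-in for encoder output at frame t
    fused = memory.attend(frame_feature)  # condition current frame on the past
    memory.write(frame_feature)           # store this frame for future ones
print(len(memory.bank))  # stays at 3 regardless of video length
```

Because the bank is a fixed-size deque, a ten-frame clip and a ten-hour stream incur the same per-frame cost; only which frames are remembered changes.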