Meta has announced on its official website the launch of two clusters of roughly 24,000 H100 GPUs each, built specifically for training its large-scale model Llama 3. The clusters use an RoCEv2 network fabric, with network storage served via NFS/FUSE on top of Tectonic and Hammerspace. Llama 3 is expected to launch between the end of April and mid-May, potentially as a multimodal model, and Meta plans to continue releasing it as open source. By the end of 2024, Meta aims to have compute equivalent to nearly 600,000 H100s.