In the latest MLPerf inference benchmark results for large models, Moortec Artificial Intelligence's S30 computing card placed first in single-card, 4-card, and 8-card inference performance on the GPT-J model. Using its proprietary sparse computing technology, Moortec achieved 1.8 times the GPT-J inference performance of NVIDIA's H100. The result suggests that sparse computing is a key innovation for closing the gap between the rapid growth of large-model parameter counts and the computational power available to serve them. The MLPerf win is a recognition of Moortec's sparse computing products and strengthens the case that sparsity can deliver breakthrough gains in large-model compute.
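The core idea behind sparse computing is that many weights in a trained model contribute little to its output, so they can be zeroed out and their multiply-accumulate operations skipped on hardware that exploits sparsity. As a rough illustration only (this is generic unstructured magnitude pruning, not Moortec's actual proprietary method), the sketch below prunes a weight matrix to 75% sparsity and reports the theoretical reduction in multiply-accumulates:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (illustrative sketch)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))      # a stand-in weight matrix

W_sparse = magnitude_prune(W, sparsity=0.75)

dense_macs = W.size                       # one MAC per weight in a dense matvec
sparse_macs = int(np.count_nonzero(W_sparse))
print(f"kept {sparse_macs}/{dense_macs} MACs "
      f"({dense_macs / sparse_macs:.1f}x theoretical speedup)")
```

In practice the realized speedup depends on whether the hardware can skip zero-weight operations efficiently; dense accelerators gain nothing from unstructured zeros, which is why sparsity-aware silicon like the S30 is positioned as a distinct approach.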