Meta recently unveiled the Meta LLM Compiler (Meta Large Language Model Compiler), news that sent shockwaves through the programming industry. The fine-tuned version, LLM Compiler FTD, reportedly achieved a 5.24% improvement in code size optimization over the -Oz optimization flag, while GPT-4 Turbo managed only a 0.03% improvement.
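To make that baseline concrete, here is a minimal sketch of what "beating -Oz" means: -Oz is clang/LLVM's most aggressive code-size optimization level, and a model's output is judged against the instruction count it produces. This is an illustration, not the paper's exact methodology; the helper name and `example.c` are hypothetical, and it assumes clang is on your PATH.

```python
# Sketch: measure the -Oz code-size baseline that improvements are compared to.
import subprocess

def instruction_count(source: str, opt_flag: str = "-Oz") -> int:
    """Compile `source` to assembly with the given flag and count instructions."""
    asm = subprocess.run(
        ["clang", opt_flag, "-S", "-o", "-", source],
        capture_output=True, text=True, check=True,
    ).stdout
    # Crude heuristic: count non-empty lines that are neither labels nor directives.
    return sum(
        1 for line in asm.splitlines()
        if line.strip()
        and not line.strip().startswith(".")
        and not line.rstrip().endswith(":")
    )

baseline = instruction_count("example.c")  # hypothetical input file
print(f"-Oz instruction count: {baseline}")
# A model "improves on -Oz by 5.24%" if, averaged across a benchmark suite,
# its emitted code is about 5.24% smaller than this baseline.
```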

The Meta LLM Compiler is a family of models built on Meta's Code Llama, released in 7B and 13B parameter versions. On the disassembly task, LLM Compiler FTD achieved a round-trip BLEU score of 0.96, far surpassing GPT-4 Turbo's 0.43.
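For readers unfamiliar with the metric, round-trip BLEU measures how faithfully a model lifts assembly back to LLVM-IR: the predicted IR is lowered back to assembly and scored against the original with BLEU, so a perfect round trip scores 1.0. The sketch below shows the scoring step only; it assumes the sacrebleu package (`pip install sacrebleu`), and both assembly strings are placeholders rather than real model output.

```python
# Sketch: score a disassembly round trip with BLEU.
import sacrebleu

original_asm = "pushq %rbp\nmovq %rsp, %rbp\npopq %rbp\nretq"   # ground-truth assembly
roundtrip_asm = "pushq %rbp\nmovq %rsp, %rbp\npopq %rbp\nretq"  # assembly recompiled from predicted IR

# sacrebleu reports BLEU on a 0-100 scale; divide by 100 for the 0-1 scale
# quoted in the article. Identical strings, as here, yield a perfect 1.0.
score = sacrebleu.corpus_bleu([roundtrip_asm], [[original_asm]])
print(f"round-trip BLEU: {score.score / 100:.2f}")
```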