Meta has introduced the all-new Meta LLM Compiler (Meta Large Language Model Compiler)! Built on the Code Llama model family, it is available in 7B and 13B versions. The Meta LLM Compiler offers strong code optimization and compiler capabilities: it can emulate compiler behavior, predict the best pass sequences for code optimization, and disassemble code.
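As a rough illustration of how such a model could be queried, here is a minimal sketch using the Hugging Face `transformers` library. The repository name `facebook/llm-compiler-7b-ftd`, the file name, and the prompt wording are assumptions for illustration; the official model card describes the exact repositories and prompt templates.

```python
# Hypothetical sketch: prompting LLM Compiler via Hugging Face transformers.
# The model id and prompt below are assumptions, not the official template.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "facebook/llm-compiler-7b-ftd"  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

llvm_ir = open("example.ll").read()  # some LLVM IR to optimize
prompt = (
    "Give the optimization pass list that minimizes code size for the "
    "following LLVM IR, then emit the optimized IR:\n" + llvm_ir
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```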
It is reported that the compiler can also be fine-tuned for specific optimization and compiler tasks. The LLM Compiler FTD (the fine-tuned version) excels at code size optimization, achieving a 5.24% improvement over the -Oz optimization level, far surpassing GPT-4 Turbo's 0.03%.
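To make the -Oz comparison concrete, the sketch below shows one way such an improvement could be measured: compile a file at clang's size-oriented -Oz level as the baseline, compile a candidate configuration, and report the percentage reduction in the `.text` section. The file names and the candidate flags are placeholders, not the paper's actual evaluation setup.

```python
# Illustrative sketch: measuring code-size improvement relative to -Oz.
# Uses the standard LLVM tools clang and llvm-size; inputs are placeholders.
import subprocess

def text_size(obj_path: str) -> int:
    """Return the .text section size of an object file via llvm-size."""
    out = subprocess.run(["llvm-size", obj_path],
                         capture_output=True, text=True, check=True)
    # llvm-size prints a header line, then "text data bss ..." per file.
    return int(out.stdout.splitlines()[1].split()[0])

# Baseline: clang's own size-oriented optimization level.
subprocess.run(["clang", "-Oz", "-c", "example.c", "-o", "baseline.o"], check=True)

# Candidate: placeholder for a model-suggested pass configuration.
subprocess.run(["clang", "-O1", "-c", "example.c", "-o", "candidate.o"], check=True)

baseline, candidate = text_size("baseline.o"), text_size("candidate.o")
improvement = 100.0 * (baseline - candidate) / baseline
print(f"code size improvement over -Oz: {improvement:.2f}%")
```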
Additionally, in terms of disassembly, the LLM Compiler FTD is similarly strong, achieving a round-trip BLEU score of 0.96, significantly higher than GPT-4 Turbo's 0.43.
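Here, "round-trip" refers to lifting assembly back to LLVM IR with the model, recompiling that IR to assembly, and comparing the result against the original assembly. The sketch below illustrates one way such a score could be computed; `model_lift_to_ir` is a hypothetical stand-in for a call to the model, and the use of the `sacrebleu` package is an assumption for illustration.

```python
# Illustrative sketch of a round-trip BLEU check for disassembly:
# assembly -> model-predicted LLVM IR -> recompiled assembly -> BLEU vs. original.
import subprocess
import sacrebleu

def model_lift_to_ir(asm_text: str) -> str:
    """Hypothetical: ask the model to lift assembly back to LLVM IR."""
    raise NotImplementedError

original_asm = open("original.s").read()
predicted_ir = model_lift_to_ir(original_asm)

with open("predicted.ll", "w") as f:
    f.write(predicted_ir)

# Recompile the predicted IR back down to assembly with clang.
subprocess.run(["clang", "-S", "predicted.ll", "-o", "roundtrip.s"], check=True)
roundtrip_asm = open("roundtrip.s").read()

# BLEU between the round-tripped assembly and the original (scaled to 0-1).
score = sacrebleu.sentence_bleu(roundtrip_asm, [original_asm]).score / 100.0
print(f"round-trip BLEU: {score:.2f}")
```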
This news has garnered widespread attention on the internet. Interested readers can visit the links to the model and the paper for more information.