Tencent Reveals: The More Agents, the Better the Performance of Large Language Models
机器之心

Researchers at Tencent have found that the performance of large language models improves as the number of instantiated agents grows, without requiring a complex multi-LLM collaboration framework. The gains come from a simple sampling-and-voting procedure: the same query is posed to multiple instantiated agents, and the majority answer is taken as the final output. Experimental results show that an ensemble of several smaller LMs can outperform a single larger LM. The paper also analyzes how the performance gain relates to problem difficulty and proposes two optimization strategies: progressive sampling-and-voting and hierarchical sampling-and-voting.
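The core procedure is easy to sketch. The following is a minimal illustration, not the paper's reference implementation: `toy_agent` is a hypothetical stand-in for a real LLM call, and exact-string majority voting is a simplifying assumption (real tasks would need task-specific answer extraction before voting).

```python
import random
from collections import Counter
from typing import Callable


def sample_and_vote(agent: Callable[[str], str], prompt: str, n: int = 10) -> str:
    """Query the same agent n times independently and return the
    majority answer (ties broken by first occurrence)."""
    answers = [agent(prompt) for _ in range(n)]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner


if __name__ == "__main__":
    # Toy stand-in for an LLM agent: correct ("42") 60% of the time.
    # In practice, `agent` would wrap a chat-completion API call.
    def toy_agent(prompt: str) -> str:
        return "42" if random.random() < 0.6 else "17"

    print(sample_and_vote(toy_agent, "What is 6 * 7?", n=15))
```

With a toy agent that is right 60% of the time on its own, the 15-sample majority vote is correct roughly 79% of the time (a straightforward binomial calculation), illustrating at small scale the effect the paper reports: accuracy rises with the number of sampled agents, as long as each individual agent performs better than chance.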