Llama-3-70B-Tool-Use

70B parameter large language model optimized for tool usage

Common Product | Programming | Tags: large language model, tool usage
Llama-3-70B-Tool-Use is a 70B-parameter large language model optimized for advanced tool use and function calling tasks. It achieves an overall accuracy of 90.76% on the Berkeley Function Calling Leaderboard (BFCL), outperforming all other open-source 70B language models. The model uses an optimized transformer architecture and is built on the Llama 3 70B base model with fine-tuning and Direct Preference Optimization (DPO). It takes text as input and produces text as output, with enhanced tool use and function calling capabilities. While its primary use case is tool use and function calling, a general-purpose language model may be a better fit for general knowledge or open-ended tasks. The model may produce inaccurate or biased content in some cases, so users should implement safety measures appropriate to their specific use cases. It is also highly sensitive to temperature and top_p sampling settings, which should be tuned to get the best performance.
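Since hosted versions of the model are commonly exposed through OpenAI-compatible chat endpoints, a quick way to exercise its function calling is with the OpenAI Python client. The sketch below is illustrative only: the base_url, API key placeholder, model identifier, and the get_weather tool are assumptions rather than details from this listing, and the low temperature/top_p values simply reflect the sampling sensitivity noted above.

```python
# Minimal sketch of a function-calling request against an OpenAI-compatible
# endpoint serving Llama-3-70B-Tool-Use. The base_url, api_key, model name,
# and the get_weather tool are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="llama-3-70b-tool-use",  # exact model name varies by provider
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
    # The listing notes strong sensitivity to sampling settings; low values
    # like these are a common starting point for tool-calling workloads.
    temperature=0.2,
    top_p=0.9,
)

# Print any tool calls the model decided to make.
tool_calls = response.choices[0].message.tool_calls or []
for call in tool_calls:
    print(call.function.name, json.loads(call.function.arguments))
```

In a full loop, the returned tool call would be executed locally and its result sent back as a "tool" role message so the model can compose its final answer.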

Llama-3-70B-Tool-Use Visit Over Time

Monthly Visits: 18,200,568
Bounce Rate: 44.11%
Pages per Visit: 5.8
Visit Duration: 00:05:46

[Charts and listings not shown: Llama-3-70B-Tool-Use visit trend, visit geography, traffic sources, and alternatives]