OpenAI and DeepMind take different perspectives and approaches in their research on Scaling Laws, which predict how a large model's loss changes as its parameter count, data volume, and compute budget are scaled. Pre-training a large language model involves a strategic trade-off among model size, data volume, and training cost, and scaling laws help optimize these design decisions: OpenAI's early work (Kaplan et al., 2020) favored putting extra compute mostly into larger models, whereas DeepMind (Hoffmann et al., 2022) argues that model size and data volume should scale roughly in proportion. The two labs' track records frame the dispute: DeepMind's AlphaGo and AlphaFold showcased the potential of deep reinforcement learning and neural networks, while OpenAI's GPT series demonstrated exceptional capabilities in generative models. Research findings indicate that the three factors governing model performance are interrelated, and DeepMind's Chinchilla model, trained under its proportional-scaling recipe, performed exceptionally well for its compute budget. Chinese organizations such as Baichuan Intelligence and the Mingde large model have also contributed to Scaling Laws research, and DeepMind has proposed the "Levels of AGI" classification, which maps out distinct stages of AI development. The competition between the two labs will continue to drive the advancement of artificial intelligence and shape the future of human-machine coexistence.
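The trade-off above can be made concrete. Hoffmann et al. (2022) fit a parametric loss of the form L(N, D) = E + A/N^α + B/D^β over model size N and training tokens D and, from their fits, derive the rule of thumb that compute-optimal training uses roughly 20 tokens per parameter, so that under the common C ≈ 6·N·D cost approximation both N and D grow like √C. The sketch below is a minimal illustration of that recipe, not code from either lab; the constants are the fitted values reported in the Chinchilla paper, and the fixed 20-tokens-per-parameter ratio is a simplifying assumption.

```python
import math

# Fitted constants reported by Hoffmann et al. (2022) ("Chinchilla");
# treat the exact values as illustrative, taken from the published fit.
E, A, B = 1.69, 406.4, 410.7   # irreducible loss and fitted coefficients
ALPHA, BETA = 0.34, 0.28       # fitted exponents for params N and tokens D

TOKENS_PER_PARAM = 20.0        # Chinchilla rule-of-thumb ratio (assumption)

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Parametric scaling law: L(N, D) = E + A/N^alpha + B/D^beta."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

def chinchilla_optimal(flops: float) -> tuple[float, float]:
    """Split a FLOP budget C ~= 6*N*D assuming D ~= 20*N.

    Substituting D = 20*N into C = 6*N*D gives N = sqrt(C / 120),
    so both N and D grow like C**0.5: scale model and data together.
    """
    n = math.sqrt(flops / (6.0 * TOKENS_PER_PARAM))
    return n, TOKENS_PER_PARAM * n

if __name__ == "__main__":
    # Chinchilla's ~5.76e23 FLOP budget recovers ~70B params / ~1.4T tokens.
    for c in (1e21, 1e23, 5.76e23):
        n, d = chinchilla_optimal(c)
        print(f"C={c:.2e} FLOPs -> N={n:.2e} params, D={d:.2e} tokens, "
              f"predicted L={predicted_loss(n, d):.3f}")
```

Under this recipe, doubling the compute budget increases both the model and the dataset by about 1.4x, rather than spending most of the increase on parameters as the earlier OpenAI-style allocation would.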
The Dispute Between OpenAI and DeepMind on Scaling Laws
神州问学
© AIbase 2024. Source: https://www.aibase.com/news/6413