In recent years, the concept of "open" artificial intelligence (AI) has gained significant attention amid the rapid development of AI technologies. However, a recent study argues that the promotion of "open" AI can mislead the public and policymakers by obscuring how concentrated the industry actually is. The study was authored by David Widder, a postdoctoral researcher at Cornell University, and published in the journal Nature.
The study points out that many claims about "open" AI are imprecise, often focusing on a single stage of an AI system's development and deployment lifecycle while neglecting the effects of industry concentration in large-scale AI development and deployment. The research compares "open" AI with free and open-source software, examining the relationships between IBM and Linux, Google and Android, Amazon and MongoDB, and Meta and PyTorch.
The research notes that while open-source software has to some extent democratized software development and helped ensure code integrity and security, "open" AI does not share these characteristics. Powerful tech companies are leveraging the term "open" AI to shape policy, claiming variously that openness promotes innovation and democracy or that it poses security threats, depending on which framing serves them. Clear definitions are therefore crucial in policymaking.
Additionally, the study analyzes the nature of AI systems and what "openness" means across their components: models, data, workforce, frameworks, and computing power. Although "open" AI systems can offer transparency, reusability, and extensibility, Meta's Llama 3 model has been criticized for lacking true openness, since it offers only API access or model downloads under restrictive terms, a practice referred to as "open-washing."
By comparison, EleutherAI's Pythia is regarded as among the most open AI models, providing source code, training data, and comprehensive documentation under license terms consistent with open-source principles. Yet despite this progress toward openness, the market advantages of tech giants remain significant: the data, development time, and computing power required to build large models still pose substantial barriers to entry.
The study concludes that "open" AI alone cannot deliver a more diverse, accountable, or democratized industry. Large companies often invoke "open" AI to solidify their market positions while obscuring monopolistic behavior. Creating a fairer market therefore requires additional measures, such as antitrust enforcement and data privacy protection. The researchers conclude that simply hoping "open" AI will change the status quo is insufficient, and that in a context of corporate concentration it may even make matters worse.
Key Points:
🧩 The study shows that the promotion of "open" AI often rests on blurred definitions, misleading the public about industry concentration.
🔍 "Open" AI operates differently from open-source software, with many large tech companies utilizing this concept to protect their own interests.
⚖️ Achieving diversity and fair competition in the AI industry requires more measures, such as antitrust enforcement and data privacy protection.