The National Telecommunications and Information Administration (NTIA), part of the U.S. Department of Commerce, has released a report expressing support for "open-weight" generative AI models, such as Meta's Llama 3.1. The report highlights that open models make generative AI resources more accessible to small companies, researchers, non-profit organizations, and individual developers. It therefore argues that the government should not impose restrictions on access to open models until it has investigated whether such restrictions would harm the market.


This view aligns with that of Federal Trade Commission (FTC) Chair Lina Khan, who believes that open models can allow more small businesses to bring their ideas to market, thereby promoting healthy competition. Alan Davidson, Assistant Secretary for Communications and Information at the U.S. Department of Commerce, said in a statement: "The openness of the largest AI systems will affect competition, innovation, and the risks of these revolutionary tools." He added that the NTIA's report recognizes the importance of open AI systems and calls for more proactive monitoring of the risks posed by the widespread availability of large AI models.

Currently, regulatory agencies both in the U.S. and abroad are considering rules that may restrict, or impose new requirements on, companies wishing to release open-weight models. For example, California is close to passing SB 1047, which would require any company training a model with more than 10^26 FLOPs of computing power to strengthen its cybersecurity and develop a way to "shut down" copies of the model. Overseas, the EU has recently finalized the compliance deadlines for companies under its AI Act, which sets new rules on copyright, transparency, and AI applications.
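To put the 10^26 FLOP threshold in perspective, here is a minimal sketch of how one might estimate whether a training run is covered, using the common ~6 × parameters × tokens rule of thumb for transformer training compute. The model size and token count below are illustrative assumptions, not figures from the bill or from any specific model.

```python
# SB 1047's covered-model compute threshold, in floating-point operations.
SB1047_THRESHOLD_FLOP = 1e26

def estimated_training_flop(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training compute via the ~6*N*D rule of thumb
    for dense transformer training (a rough estimate, not an exact count)."""
    return 6 * num_parameters * num_tokens

# Hypothetical frontier-scale run: 400B parameters trained on 15T tokens.
flop = estimated_training_flop(4e11, 1.5e13)
print(f"Estimated training compute: {flop:.1e} FLOPs")  # 3.6e+25
print("Covered by SB 1047 threshold:", flop > SB1047_THRESHOLD_FLOP)  # False
```

Under this rough estimate, even a run of that scale lands a factor of a few below 10^26 FLOPs, which is why the threshold is generally read as targeting only the very largest future training runs.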

Meta has said that the EU's AI policy will prevent it from releasing certain open models there in the future. Some startups and major tech companies likewise oppose California's bill, arguing that its requirements are too stringent.

However, the NTIA's model governance philosophy is not entirely laissez-faire. The report calls on the government to establish an ongoing program to collect evidence on the risks and benefits of open models, evaluate that evidence, and act on it, including restricting model availability when necessary. Specifically, the report recommends that the government study the safety of various AI models, support risk-mitigation research, and establish thresholds for "risk-specific" indicators that would signal when a policy change is needed.

These measures align with President Joe Biden's AI executive order, which requires government agencies and companies to set new standards to responsibly drive innovation in the creation, deployment, and use of AI. Commerce Secretary Gina Raimondo stated in a press release: "The Biden-Harris administration is doing everything possible to maximize the potential of AI while minimizing its risks. Today's report provides a path for responsible AI innovation and American leadership, advocating openness and suggesting how the U.S. government can prepare for potential future challenges."

Key Points:

🌟 Open models promote competition among small businesses, and the government should not hastily restrict access to them.

🔍 The government calls for enhanced risk monitoring of open models to ensure their safety.

📅 Regulators in the U.S. and abroad are considering new rules that may impose additional requirements on open models.