Google has updated the terms of use for its generative AI: the latest version of its usage policy now explicitly allows customers to use its generative AI tools for "automated decision-making" in "high-risk" areas such as healthcare and employment, provided there is human oversight.
Under the updated policy, customers may use Google's generative AI, with human supervision, to make automated decisions that could have a "significant adverse impact" on individual rights. The high-risk areas include employment, housing, insurance, and social welfare. The previous terms appeared to impose a blanket ban on high-risk automated decision-making, but Google says it has permitted such decisions under human oversight from the beginning.
A Google spokesperson told the media: "The requirement for human oversight has always been part of our policy and covers all high-risk areas. We have simply recategorized some terms and provided clearer examples to aid user understanding."
In contrast, Google's major competitors OpenAI and Anthropic impose stricter rules on high-risk automated decision-making. OpenAI prohibits the use of its services for automated decisions related to credit, employment, housing, education, social scoring, and insurance. Anthropic allows its AI to make automated decisions in high-risk fields such as law, insurance, and healthcare, but only under the supervision of a "qualified professional", and it requires customers to disclose that they are using AI to make such decisions.
Regulators have voiced concern about AI systems used for automated decision-making, warning that the technology can produce biased outcomes. Research has shown, for example, that AI used to approve loan and mortgage applications can perpetuate historical discrimination.
Non-profit organizations such as Human Rights Watch have called specifically for a ban on "social scoring" systems, arguing that such systems threaten individuals' access to social security benefits, compromise their privacy, and can profile them in biased ways.
In the European Union, the AI Act subjects high-risk AI systems, including those that make decisions about personal credit and employment, to the strictest oversight. Providers of these systems must, among other things, register in a database, implement quality and risk management, employ human supervisors, and report incidents to the relevant authorities.
In the United States, Colorado recently passed a law requiring AI developers to disclose information about "high-risk" AI systems and to publish summaries of those systems' capabilities and limitations. Meanwhile, New York City prohibits employers from using automated tools to screen candidates unless the tool has undergone a bias audit within the past year.
Key Points:
🌟 Google explicitly allows its generative AI to make automated decisions in high-risk areas, but requires human oversight.
🛡️ Other AI companies like OpenAI and Anthropic have stricter limitations on high-risk decisions.
⚖️ Regulatory bodies in various countries are reviewing AI systems for automated decision-making to prevent biased outcomes.