Google recently updated its generative AI usage policy, clearly stating that clients can deploy its generative AI tools for "automated decision-making" in "high-risk" areas such as healthcare, provided there is human oversight.
According to the latest version of the "Generative AI Prohibited Use Policy," released by Google on Tuesday, clients may use Google's generative AI to make "automated decisions" that could have a "significant adverse impact" on individual rights. As long as a human supervises the process in some form, clients can apply the technology to decisions in employment, housing, insurance, social welfare, and other "high-risk" areas.
In AI, automated decision-making refers to decisions a system makes on the basis of factual and inferred data. For example, a system might automatically decide whether to approve a loan application or whether to screen out a job applicant.
The previous wording of Google's terms implied a blanket ban on using its generative AI for high-risk automated decision-making. However, Google told TechCrunch that clients have always been permitted to use its generative AI for automated decision-making, even in high-risk applications, so long as a human is overseeing the process.
"For all high-risk areas, our requirement for human oversight has always been present in our policy," a Google spokesperson stated via email. "We are reclassifying some items in [the terms] and listing some examples more clearly to help clients understand better."
Google's main AI competitors, OpenAI and Anthropic, have stricter rules for the use of their AI in high-risk automated decision-making. For instance, OpenAI prohibits its services from being used for automated decisions related to credit, employment, housing, education, social scoring, and insurance. Anthropic allows its AI to be used for automated decision-making in legal, insurance, healthcare, and other high-risk fields, but requires supervision by a "qualified professional" and mandates that clients disclose their use of AI for these purposes.
AI that makes automated decisions affecting individuals is under close scrutiny from regulators, who worry the technology can produce biased outcomes. Research has shown that AI used to approve credit and mortgage applications can perpetuate historical discrimination.
The non-profit organization Human Rights Watch has called for a ban on "social scoring" systems, arguing that these could undermine people's access to social support, harm their privacy, and portray them in biased ways.
Under the EU's AI Act, high-risk AI systems (including those making personal credit and employment decisions) face the strictest regulations. Providers of these systems must register in a database, implement quality and risk management, employ human overseers, and report incidents to relevant authorities, among other requirements.
In the United States, Colorado recently passed a law requiring AI developers to disclose information about "high-risk" AI systems and to publish statements summarizing the systems' functionalities and limitations. Meanwhile, New York City prohibits employers from using automated tools to screen candidates for employment decisions unless the tool has undergone a bias audit in the past year.
Google's clarification of its usage terms signals where the company stands on regulating AI applications: it permits automated decision-making in high-risk areas while stressing human oversight, a position that acknowledges both the technology's potential and the risks it carries.