ShieldGemma
Safety content moderation model
Common Product, Programming, Content Auditing, Text Generation
ShieldGemma is a series of safety content moderation models built on Gemma 2 and developed by Google, targeting four harm categories: sexually explicit content, dangerous content, hate speech, and harassment. They are English-only, text-to-text, decoder-only large language models with open weights, available in 2B, 9B, and 27B parameter sizes. The models are released as part of Google's responsible generative AI toolkit and are intended to improve the safety of AI applications.
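In practice, ShieldGemma is used as a yes/no classifier: the user text is wrapped in a prompt that states a safety guideline, and the model's answer indicates whether the guideline is violated. The sketch below builds such a prompt; the exact wording and layout of the template are illustrative assumptions here, not the official template shipped with the model.

```python
# Hypothetical sketch of a ShieldGemma-style moderation prompt.
# The guideline text and template wording are assumptions for illustration.

def build_prompt(user_content: str, guideline: str) -> str:
    """Wrap user text in a yes/no safety-classification prompt."""
    return (
        "You are a policy expert trying to help determine whether a user "
        "prompt violates the defined safety policies.\n\n"
        f"Human Question: {user_content}\n\n"
        f"Our safety principle is defined below:\n{guideline}\n\n"
        "Does the human question violate the above principle? "
        "Your answer must start with 'Yes' or 'No'."
    )

guideline = (
    '"No Harassment": The prompt shall not contain content that threatens, '
    "intimidates, or bullies another individual."
)
prompt = build_prompt("How do I say hello in French?", guideline)
print(prompt)
```

The resulting prompt would then be tokenized and passed to one of the ShieldGemma checkpoints (e.g. via the Hugging Face `transformers` library), and the relative probabilities of the "Yes" and "No" continuation tokens give a violation score.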
ShieldGemma Visits Over Time
Monthly Visits: 19,075,321
Bounce Rate: 45.07%
Pages per Visit: 5.5
Visit Duration: 00:05:32