Meta has released FACET, an AI benchmarking tool designed to assess the fairness of computer vision models. The dataset comprises 32,000 images containing roughly 50,000 people labeled by human annotators, enabling fairness evaluation of classification, detection, segmentation, and localization tasks across demographic attributes such as perceived gender presentation and skin tone. FACET can help researchers and practitioners surface biases in their own models and monitor how mitigation measures affect those biases. When Meta applied FACET to its own DINOv2 model, it uncovered several biases, including biases against certain gender presentations and a tendency to classify photos of women as "nurses." Meta acknowledges that FACET may not fully capture real-world concepts and demographic groups, and it provides a web-based dataset explorer tool for developers.
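
To make the kind of evaluation FACET enables more concrete, here is a minimal sketch of a per-group accuracy comparison for a classification task. The annotation fields (`person_class`, `perceived_gender_presentation`) and the list-of-dicts data layout are illustrative assumptions for this sketch, not FACET's actual schema or API.

```python
# Sketch: measuring how a classifier's accuracy varies across demographic
# groups, the core idea behind a fairness benchmark like FACET.
# Field names below are hypothetical, not the real FACET annotation format.
from collections import defaultdict

def per_group_accuracy(predictions, annotations, attribute):
    """Classification accuracy broken down by a demographic attribute."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, ann in zip(predictions, annotations):
        group = ann[attribute]           # e.g. a perceived gender presentation label
        total[group] += 1
        if pred == ann["person_class"]:  # ground-truth class, e.g. "nurse" or "doctor"
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

def fairness_gap(per_group):
    """Largest accuracy difference between any two groups (0.0 = parity)."""
    scores = per_group.values()
    return max(scores) - min(scores)

# Toy usage: a model that mislabels a woman doctor as a "nurse" shows up
# as an accuracy gap between groups.
preds = ["nurse", "doctor", "nurse"]
anns = [
    {"person_class": "doctor", "perceived_gender_presentation": "feminine"},
    {"person_class": "doctor", "perceived_gender_presentation": "masculine"},
    {"person_class": "nurse",  "perceived_gender_presentation": "feminine"},
]
by_group = per_group_accuracy(preds, anns, "perceived_gender_presentation")
print(by_group)                 # {'feminine': 0.5, 'masculine': 1.0}
print(fairness_gap(by_group))   # 0.5
```

A gap near zero suggests the model performs similarly across groups for that task; a large gap, like the "nurse" misclassification pattern Meta reported for DINOv2, flags a bias worth investigating and tracking as mitigations are applied.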