Researchers at Brigham and Women's Hospital evaluated whether GPT-4 exhibits racial and gender bias in clinical decision-making. They found that GPT-4 shows significant bias when generating patient cases, formulating diagnoses and treatment plans, and assessing patient characteristics. The study calls for bias assessments of large language models to ensure that their use in medicine does not exacerbate societal biases. The findings were published in The Lancet Digital Health.