Jess Smith, an Australian former Paralympic swimmer, says ChatGPT’s image generator initially failed to render her accurately as a woman missing part of her left arm, often adding a second arm or a prosthetic. After media inquiries, Smith found the tool could finally produce a realistic image, underscoring how limited training data can erase people with disabilities. OpenAI said it has made “meaningful improvements” and is refining post-training methods and diversifying training examples to reduce bias.

Another user, who has an eye condition, reported similar misrepresentations even after giving the tool explicit instructions, fueling calls for more rigorous training and testing. Experts note that model bias often mirrors societal blind spots and stress the importance of diverse teams labeling and curating data. The issue echoes past findings, including a 2019 study by the U.S. National Institute of Standards and Technology (NIST) that found higher facial-recognition error rates for African-American and Asian faces.

The episode highlights both the speed of model iteration and the persistent need for inclusion, transparency, and oversight, with some critics also flagging AI’s environmental costs.
Related articles:
NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software
NIST AI Risk Management Framework
Energy and Policy Considerations for Deep Learning in NLP
Making AI Less Thirsty: Uncovering and Addressing the Secret Water Footprint of AI Models