In a recent study published by Oxford University Press, researchers from Curtin University scrutinized leading generative AI tools—including Adobe Firefly, Dall-E 3, Meta AI, and Midjourney—for their portrayals of Australian identities. Their findings revealed deeply seated biases: basic prompts like “an Australian father” consistently produced images of white men in stereotypical outdoor settings—one was even pictured cradling an iguana, an animal not native to the continent. Attempts to depict Aboriginal Australians returned images laced with dated and offensive tropes, while images of Australian homes defaulted to suburban affluence for white Australians and primitive huts for Aboriginal families. Although the AI tools have since been updated, initial results show the misrepresentations persist. The study raises red flags about AI’s role in shaping—and distorting—public perceptions, underscoring the need for oversight, transparency, and inclusivity in the development of AI training data.





























