Meta AI’s image generator is facing criticism for consistently producing images of people of the same race, even when prompts explicitly request otherwise. According to The Verge, prompts such as “Asian man and Caucasian friend” or “Asian man and white wife” yielded images of two people of the same race rather than the mixed-race pairs requested. Engadget confirmed these findings through its own testing of Meta’s image generator.
The failure to depict mixed-race pairs is not the only issue. The tool struggles to accurately render diverse groups of people, often defaulting to primarily white individuals. Other signs of bias have been observed as well: Asian men are frequently rendered as older and Asian women as younger, and culturally specific attire is sometimes added to generated images even when the prompt did not request it.
The controversy is reminiscent of Google’s Gemini image generator, which paused its ability to create images of people after facing similar scrutiny over its handling of race and diversity. In Gemini’s case, the tool overcorrected for diversity in response to prompts about historical figures, sparking debate about the ethical implications of AI systems tasked with producing diverse and unbiased representations.
As discussions around AI ethics and inclusivity continue to evolve, it is essential for companies like Meta to address these issues and strive for more accurate, unbiased image generation. In a rapidly advancing technological landscape, ensuring diversity and representation in AI-generated content is crucial for promoting inclusivity and combating bias in the digital realm.