To be fair, I used a Chinese AI picture generator app on my face and it made me look more Asian. It's obvious that every piece of software carries the biases of the people who made and trained it. It's not good, but it's expected, and it's happening everywhere.
Ok, but she asked it to make her look professional and the only thing it changed was her race. Not the background, not her clothes. Last I checked, a university sweatshirt wasn’t exactly professional wear.
And overall, I'd say in 7 out of 10 images from a Google search this is a white woman. So the probability is high that the training data has a bias towards that too.
Someone in the original lemmy.nz post said they did the exact same thing, same image, same prompt, and it turned her Indian. So if you have very wide training data, the results will be rather "random". If you have very narrow training data, the results will always look similar.
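Rough sketch of what I mean (toy numbers in Python, obviously not any real model's training set): with a wide, near-uniform distribution the output demographic looks random from run to run, while a narrow, concentrated one keeps producing the same thing.

```python
import random
from collections import Counter

# Hypothetical demographic distributions of two training sets (made-up numbers).
wide = {"white": 0.25, "indian": 0.25, "asian": 0.25, "black": 0.25}
narrow = {"white": 0.85, "indian": 0.05, "asian": 0.05, "black": 0.05}

def generate(dist, n=10):
    """Draw n 'generations' from a categorical distribution over demographics."""
    labels, weights = zip(*dist.items())
    return Counter(random.choices(labels, weights=weights, k=n))

print("wide:  ", generate(wide))    # mixed, different every run -> feels "random"
print("narrow:", generate(narrow))  # mostly the same label every run
```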
Grab an app with beauty filters aimed at an Asian audience, for example, and it will turn a white person into an Asian one. But no one complains there that the app is racist.
Machine learning is biased towards its training data. If the image generation algorithm (notice I'm not saying AI) is trained on photos of "professionals" who are mostly of a certain demographic, that's what it will prefer when it's generating an image.
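A minimal sketch of that mechanism, assuming a made-up tagged dataset (none of this is Playground AI's actual pipeline): count which demographic co-occurs with "professional" in the captions, and the "model" ends up preferring the majority class by default.

```python
from collections import Counter

# Hypothetical (caption, demographic) pairs standing in for tagged training photos.
training = [
    ("professional headshot", "white"),
    ("professional portrait", "white"),
    ("professional woman", "white"),
    ("professional man", "asian"),
    ("casual selfie", "indian"),
]

# The empirical distribution a model would absorb for the token "professional".
counts = Counter(demo for caption, demo in training if "professional" in caption)
total = sum(counts.values())
learned = {demo: n / total for demo, n in counts.items()}

print(learned)                        # {'white': 0.75, 'asian': 0.25}
print(max(learned, key=learned.get))  # 'white' -- the default "preference"
```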
So these shocking exposés should really just read: "this image generator was trained on biased data." But building biases is the human condition, so we're never really going to get away from that.
Playground AI founder Suhail Doshi said that “models aren’t instructable like that” and will pick “any generic thing based on the prompt.” However, he said in another tweet that Playground AI is “quite displeased with this and hope to solve it.”
So the model wasn't even designed to be used in the way she was trying to use it.
Half of the outrage against AI models can be attributed to users not understanding what they're doing. Like when people complain about ChatGPT giving wrong information, when warnings about exactly that are written right there on the page where users type in their prompts.
"Asian MIT grad who knows exactly what she is doing, pretends to be shocked after intentionally triggering industry known bias that are already acknowledged and being worked on"
This is just a student manufacturing a controversy to make sure she has a great talking point at her interviews.
Sorry, not sure how to format the ! link so the post opens in your instance.
TL;DR
Any result is going to be biased. If it generated a crab wearing Lederhosen, that's obviously a bias towards crabs. You can't have an unbiased output, because the prompt itself is controlling the bias. There's no cause for concern here. The model outputs, by default, the general trend of the data it was trained on. If it had been trained on crabs, it would be generating crab-like images.
I recall a somewhat similar incident when I was showing an in-law of mine how Stable Diffusion worked a while back. She's of Indian descent, and she asked Stable Diffusion to generate a picture of an Indian woman. All of the women it generated wore bindis and other "traditional" Indian cultural garb, and she was initially kind of annoyed by that. But I explained that that's because most of the photos of women in the training set that were explicitly tagged as Indian were dressed that way, whereas the rest of the Indian women in the training set probably weren't explicitly tagged. They were just women.
It was kind of interesting trying to figure out which option was more biased. Realizing that there was an understandable reason behind that helped ease her annoyance.
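You can see the tagging effect with a toy example (invented numbers, just to show the mechanism): if only the traditionally dressed photos carry the explicit "indian" tag, then prompting for "Indian woman" samples from that subset, even though the full training set looks quite different.

```python
# Each record: (dress_style, explicit_tag). Most women in the set are only
# tagged "woman"; only the traditionally dressed ones got tagged "indian".
photos = [
    ("traditional", "indian"),
    ("traditional", "indian"),
    ("traditional", "indian"),
    ("western", "woman"),
    ("western", "woman"),
    ("western", "woman"),
    ("western", "woman"),
]

def traditional_share(tag):
    """Fraction of traditionally dressed photos among those matching a tag."""
    styles = [style for style, t in photos if t == tag]
    return sum(style == "traditional" for style in styles) / len(styles)

print(traditional_share("indian"))  # 1.0 -- the prompt only ever sees the tagged subset
print(sum(s == "traditional" for s, _ in photos) / len(photos))  # ~0.43 in the full set
```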
Yes, but they trained on large amounts of easily accessible data, which actually suggests that the stock photo websites are the biased ones here.
No model can be trained on an equal amount of diverse data for everyone, and it's not supposed to be anyway. I bet it was hardly, if at all, trained on Mongolian goat herders, but you could hardly say it's biased against them; there just wasn't a large, easily accessible set of pictures of them.
Frankly I think we're overlooking the silver lining. She got a picture that resembles her but couldn't possibly be used to identify her in real life. That's exactly what I'd want to use for an online profile.
I mean, wouldn’t this just be due to the sheer number of BS “female professional” stock photos used on the websites of call centers globally, which the AI ingested? Said “professional white person” photos being used especially on non-western websites in order to gain legitimacy in the West?
Given what little I know about how AI ingests and spits out data, it might be correlating the buzzword “professional” with stock photos of white people that were ingested from Asian websites. It might be “wrong”, but the AI doesn’t attempt to be “right”; it just tries to give you what you expect based on the data it has.
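Something like this (a toy co-occurrence count over made-up scraped records, not how any real crawler or model actually works): if non-western business sites keep pairing the word "professional" with white stock photos, that correlation is all the model ever sees.

```python
from collections import Counter

# Hypothetical scraped records: (source_region, caption, pictured_demographic).
scraped = [
    ("asia",   "our professional team",      "white"),
    ("asia",   "professional support staff", "white"),
    ("asia",   "street food vendor",         "asian"),
    ("europe", "professional consultant",    "white"),
    ("asia",   "family celebration",         "asian"),
]

# Which demographic co-occurs with the buzzword "professional"?
pro = Counter(demo for _, caption, demo in scraped if "professional" in caption)
print(pro)  # Counter({'white': 3}) -- the only pairing the data offers
# The model isn't trying to be "right"; it just reproduces this correlation.
```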