An Asian MIT student asked AI to turn an image of her into a professional headshot. It made her white with lighter skin and blue eyes.
Rona Wang, a 24-year-old MIT student, was experimenting with the AI image creator Playground AI to create a professional LinkedIn photo.
Look, I hate racism and inherent bias toward white people, but this is just ignorance of the tech. Willfully or otherwise, it's still misleading clickbait. Upload a picture of an anonymous white chick and ask the same thing. It's going to make a similar image of another white chick. To get it to reliably recreate your facial features, it needs to be trained on your face. It works for celebrities for this reason, not for a random "Asian MIT student". This kind of shit sets us back and makes us look reactionary.
Meanwhile every trained model on Civit.ai produces 12/10 Asian women...
Joking aside, what you feed the model is what you get. A model reflects its training: train it on white people and it's going to create white people; train it on big titty anime girls and it's not going to produce WWII images either.
Then there's a study cited that claims DALL-E has a bias toward producing images of a CEO or director as a cis white male. Think of CEOs that you know. Better yet, google them. It's shit, but it's the world we live in. I think the focus should be on not having so many white privileged people in the real world, not on telling AI to discard the data.
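The "what you feed the model is what you get" point can be sketched with a toy example (purely illustrative, not any real model; all names here are made up):

```python
import random

# A toy "generator" that can only sample from what it was trained on.
def train(dataset):
    def generate():
        return random.choice(dataset)  # samples only from the training distribution
    return generate

# A training set containing only one kind of headshot...
model = train(["white headshot"] * 100)
# ...yields only that kind of output, no matter how many times you sample.
samples = {model() for _ in range(1000)}
print(samples)  # {'white headshot'}
```

A real diffusion model interpolates rather than memorizes, but the same principle holds: it can't generate what its data distribution doesn't contain.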
This is not surprising if you follow the tech, but I think the signal boost from articles like this is important because there are constantly new people just learning about how AI works, and it's very very important to understand the bias embedded into them.
It's also worth actually learning how to use them, too. People expect them to be magic, it seems. They are not magic.
If you're going to try something like this, you should describe yourself as clearly as possible. Describe your eye color, hair color/length/style, age, expression, angle, and obviously race. Basically, describe any feature you want it to retain.
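The "describe every feature you want retained" advice amounts to building an explicit prompt. A minimal sketch, with a hypothetical helper (not part of any real tool):

```python
# Collect the features you want preserved into one explicit prompt,
# instead of relying on the model's training-data defaults.
def build_prompt(**features):
    parts = ["professional LinkedIn headshot of a woman"]
    parts += [f"{name.replace('_', ' ')}: {value}" for name, value in features.items()]
    return ", ".join(parts)

prompt = build_prompt(
    ethnicity="Asian",
    age="mid-20s",
    hair="long straight black hair",
    eyes="dark brown eyes",
    expression="confident smile",
)
print(prompt)
```

Any feature left out of the prompt is a feature the model is free to fill in from its (biased) averages.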
I have not used the specific program mentioned in the article, but the ones I have used simply do not work the way she's trying to use them. The phrase she used, "the girl from the original photo", would have no meaning in Stable Diffusion, for example (which I'd bet Playground AI is based on, though they don't specify). The img2img function makes a new image, with the original as a starting point. It does NOT analyze the content of the original or attempt to retain any features not included in the prompt. There's no connection between the prompt and the input image, so "the girl from the original photo" is garbage input. Garbage in, garbage out.
There are special-purpose programs designed for exactly the task of making photos look professional, which presumably go to the trouble to analyze the original, guess these things, and pass those through to the generator to retain the features. (I haven't tried them, personally, so perhaps I'm giving them too much credit...)
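The img2img mechanics described above can be sketched numerically. This is a toy abstraction, NOT the real Stable Diffusion code: the input image is partially destroyed with noise, then pulled toward whatever the text prompt encodes; nothing ever parses the image's content, so "the girl from the original photo" binds to nothing.

```python
import numpy as np

rng = np.random.default_rng(0)

def img2img(init_image, prompt_embedding, strength=0.75):
    noise = rng.standard_normal(init_image.shape)
    # Higher strength = more of the original destroyed before generation starts.
    latent = (1 - strength) * init_image + strength * noise
    # Stand-in for the denoiser: guided only by the prompt, never by a
    # semantic analysis of the original pixels.
    return 0.5 * latent + 0.5 * prompt_embedding

init = np.ones((8, 8))           # stand-in for the uploaded photo
prompt = np.full((8, 8), -1.0)   # stand-in for what the prompt describes
out = img2img(init, prompt, strength=0.9)
# At high strength the output tracks the prompt, not the original image.
print(abs(out - prompt).mean() < abs(out - init).mean())  # True
```

That is why unspecified features (like ethnicity or eye color) drift toward the model's defaults rather than being "kept" from the upload.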
ML training data sets are only as good as their data, and almost all data is inherently flawed. Biases are just more pronounced in these models because they scale the bias with the size of the model, becoming more and more noticeable.
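One way a model makes a data skew *more* pronounced: mode-seeking generation, sketched here as a toy (illustrative labels only):

```python
from collections import Counter

# A model that always emits the most likely label turns a 70/30 skew
# in its training data into a 100/0 skew in its output.
training_labels = ["white"] * 70 + ["asian"] * 30
majority = Counter(training_labels).most_common(1)[0][0]

outputs = [majority for _ in range(100)]  # always emit the most likely label
print(Counter(outputs))  # Counter({'white': 100})
```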
Can we talk about how a lot of these AI-generated faces have goat pupils? That's some major bias that is often swept under the rug. An AI that thinks only goats can be professionals could cause huge disadvantages for human applicants.
These biases have always existed in the training data used for ML models (society and all that influencing the data we collect, and the inherent biases latent within it), but it's definitely interesting that generative models now make these biases much, much more visible (figuratively, and literally with image models) to the lay person.
She asked the AI to make her photo more like what society stereotypes as professional, and it made her photo more like what society stereotypes as professional.
Did anyone bother to fact-check this? I ran her exact photo and prompt through Playground AI and it pumped out a bad photo of an Indian woman. Are we supposed to play the racial bias card against Indian women now?
This entire article can be summarized as "Playground AI isn't very good, but that's boring news so let's dress it up as something else"
Media: "I don't understand technology," despite having written about the technology multiple times.
AIs are completely based on the training data they use. If one were trained only on professional headshots of Asian people, a white person's photo would come out Asian.
Besides which, you run it multiple times and choose the one you want; I'm sure if you did that, it'd change her eye color across runs.
Really, blame this particular AI, not AI in general. Or blame the media for making clickbait articles in the first place.
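The "run it multiple times" point, sketched as a toy: each seed is a fresh sample, and unspecified details vary across runs (purely illustrative, not a real generator):

```python
import random

def generate_headshot(seed):
    # Each seed draws a different plausible output from the same model.
    rng = random.Random(seed)
    return rng.choice(["brown eyes", "blue eyes", "green eyes", "hazel eyes"])

runs = [generate_headshot(seed) for seed in range(10)]
print(runs)  # a mix of eye colors -- re-roll and keep the one that matches
```

In real tools this is the seed parameter: fixing it reproduces an image, varying it re-rolls the unspecified details.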
Ask AI to generate an image of a basketball player and see what happens.
This isn't some "OMG ThE CoMpUtER Is tHe rAcIsT"... this is using historical data to alter or generate a new image. But our news media will of course try to turn it into some clickbait BS.
Honestly news stories about dumb ideas not working out don't really bother me much. Congrats, the plagiarism machine tried to make you look like you fit in to a world that, to the surprise of nobody but idealists, still has a shitload of racial preferences.
Interestingly, many stable diffusion models are trained on pictures of Asian people and thus often generate people that look more or less Asian if there's no specific input or tuning otherwise. It's all in the training data and tuning.
This is just dumb rage-bait. At worst this shows a bias in training data, probably because the AI was developed in a majority white country that used images of majority white people to train it.
And likely it's not even that. The AI has no concept of race, so it doesn't know to make white people white and Asian people Asian, and would be just as likely to do the reverse.
It reminds me of Google back in the day (probably early 2010s). If you searched for White Women, it returned professional and respectable images. But if you searched for Black Women, it returned explicit images.
Machine learning algorithms are like sponges and learn from existing social biases.
So? There are white people in the world. Ten bucks says she tuned it to make her look white for the clicks. I've seen this in person several times at my local college. People die for attention, and shit like this is an easy-in.
Like some have already said here: it's a commentary on what Anglo-centric societies view as "professional" at the time the model is trained. Why Anglo-centric? By virtue of the US being the center of internet activity.
The real issue here is Business Insider's baiting article.
I assume the MIT student knows the limitations of the technology and probably could have avoided the issue by inputting more criteria for their request to the image processor.
It is known that these image producing systems utilize aggregated data from skewed sources. We surely need to expand the variety of datasets these systems have access to, but we also need to more effectively teach people how to use them to get what they actually want from the system.
The article didn't tell us the exact prompt that was given to the system. I'm sure it was extremely basic, and that is why it yielded an image in line with its average image of "professional". If the MIT student had added "Asian" to the prompt, it probably would have done what they wanted. Again, I'm sure the student knows that and the article just picked this up to bait people for clicks.
About a decade ago I found a website that claimed it could objectively measure how attractive you are. I tested it for way too long, and it had a strong racial bias, favoring not only whites but specifically a belt from Northern Italy to Germany. It hated hair that was curly or black, brown eyes, olive or darker skin tones, a diminished nose bridge, bigger nostrils.
I wrote up my results and sent it to them. A decade later, zero improvement.
Disappointing but not surprising. The world is full of racial bias, and people don't do a good job at all addressing this in their training data. If bias is what you're showing the model, that's exactly what it'll learn, too.
I asked a taxi driver in Bollywood to take me to the home of someone famous. He took me to an Indian person's house. Does he think all famous people are Indian?
Or... and I'm just spitballing here. Don't ask it to do something you knew probably wouldn't give you something you're happy with, and you won't be insulted.