Introduction: While the global medical graduate and student population is approximately 50% female, only 13–15% of cardiologists and 20–27% of training fellows in cardiology are female. The potentially transformative use of text-to-image generative artificial intelligence (AI) could improve promotio...
So people are complaining that the depictions of cardiologists, when you ask an image generator to show you one, are too accurate? That 85% of the time they show you a man in a job where 85% of people are men?
You'd likely see the same thing if you asked one to show you a warehouse worker, another job that's male-dominated.
If people want more women in these roles, push for it at the university level. Push for it in medical posters in hospitals. I don't see how forcing the hundreds of AI models out there to be biased in favour of depicting women, when their training material doesn't contain as many, is an effective way of achieving that goal.
This just seems like "we want to complain about this field being male-dominated, and we're sure to get headlines if we include the AI buzzword."
Because the biases in an AI model will shape the perceptions of people who are thinking about entering those fields more than a poster at a workplace where people have already entered those fields.
Likewise, you can train it out of a bias: just feed it more content showing diverse workforces and it will start weighting them higher.
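To be concrete about what "weighting them higher" usually means in practice: it's mostly just reweighting or oversampling the fine-tuning data so the under-represented examples show up more often per epoch. A toy sketch, purely illustrative and with made-up captions, not anyone's actual pipeline:

```python
import random
from collections import Counter

# Toy stand-in for a fine-tuning caption set; the 85/15 split mirrors
# the gender ratio cited in the paper. Purely illustrative data.
captions = ["a male cardiologist"] * 85 + ["a female cardiologist"] * 15

# Inverse-frequency weights: rarer captions get a proportionally larger
# sampling probability, so a resampled "epoch" comes out roughly balanced.
counts = Counter(captions)
weights = [1.0 / counts[c] for c in captions]

resampled = random.choices(captions, weights=weights, k=10_000)
print(Counter(resampled))  # roughly 50/50 after reweighting
```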
The depiction aligning with reality is not a bias. Artificially altering the algorithm so that it shows more women for this prompt on the other hand is, unquestionably, adding a bias.
If you want to add a bias, fine. Biases aren't always a bad thing, I can certainly see the argument for why you might want a 50/50 gender split for all AI prompts. But don't pretend that what you're actually advocating for here is correcting a bias, because it isn't.
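And to be clear about what "artificially altering the algorithm" tends to look like in practice: often it isn't retraining at all, just a layer that rewrites your prompt before the image model ever sees it. A hypothetical, simplified sketch of that kind of intervention (not how any particular vendor actually does it):

```python
import random

# Hypothetical post-hoc prompt rewriter: if the prompt mentions a
# cardiologist without specifying gender, inject one at a forced 50/50
# rate. This is the kind of intervention being called "adding a bias".
def rewrite_prompt(prompt: str) -> str:
    if "cardiologist" in prompt and not any(w in prompt for w in ("male", "female")):
        attribute = random.choice(["male", "female"])  # forced 50/50 split
        return prompt.replace("cardiologist", f"{attribute} cardiologist")
    return prompt

print(rewrite_prompt("a portrait of a cardiologist in a hospital"))
```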
Likewise, you can train it out of a bias: just feed it more content showing diverse workforces
That is training in a bias. Because it's not representative of reality.
AI is gonna mirror our biases back to us. If the AI has picked up a bias, it's because of our own internal biases. You could force diversity to hide the underlying issue, and then you get PoC Nazis during WW2 like what Google did. Or... call me crazy... but we could try to address the underlying societal issues.