If you train something off the internet it's bound to come out a bit racist. And I like to think that, thanks to me, it's also slightly biased against people who put ranch dressing on pizza.
I hope you get banned for your hateful and bigoted comments. I also hate ranch on pizza, but I care about the underrepresented class of people who do. They are humans and deserve the same rights as you or I. I am appalled that this kind of blatant hatred still exists in 2023. You, sir (or ma'am, or whatever pronoun you prefer), are a loathsome person and I'm ashamed to be in the same species as you.
I completely disagree with you. Maybe it's because I'm old, but I don't want any damned racist robot doctor telling me what to do. I just want my good old human, racist doctor treating me, like God intended.
Yeah, as it stands in the current healthcare paradigm, 90% of doctors are practically useless beyond the most obvious diagnoses. I'd rather not have to wait until that paradigm changes 100 years from now...
doctors be like (examples I've actually seen with friends and family):
"take this Accutane that will fuck up your life forever for something that can be fixed with diet changes"
"It's just stress, take it easy" turns out to be cancer
I can see the case for banning AI in almost every sector, but for medicine the upside is just too great to pass up. Even if it's only used for anamnesis, pointing you and your healthcare providers in the right direction, that alone would be worth it.
Artificial intelligence recommendations are sometimes erroneous and biased. In our research, we hypothesized that people who perform a (simulated) medical diagnostic task assisted by a biased AI system will reproduce the model's bias in their own decisions, even when they move on to a context without AI support. In three experiments, participants completed a medical-themed classification task with or without the help of a biased AI system. The AI's biased recommendations influenced participants' decisions. Moreover, when the participants who had been assisted by the AI went on to perform the task unaided, they made the same errors the AI had made during the previous phase. Thus, participants' responses mimicked the AI's bias even when the AI was no longer making suggestions. These results provide evidence of human inheritance of AI bias.
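The paradigm the abstract describes is easy to picture as a toy simulation. Here's a minimal sketch in Python; every number, the "trust" and "inherited_bias" parameters, the labels, and the direction of the bias are illustrative assumptions of mine, not the study's actual task or measurements:

    import random

    random.seed(0)

    LABELS = ("negative", "positive")

    def biased_ai(true_label, bias_rate=0.3):
        # The AI is usually right but systematically calls some
        # "negative" cases "positive" -- its built-in bias.
        if true_label == "negative" and random.random() < bias_rate:
            return "positive"
        return true_label

    def participant(true_label, ai_suggestion=None, trust=0.8, inherited_bias=0.0):
        # With AI support the participant usually adopts the suggestion;
        # without it, inherited_bias reproduces the AI's error pattern unaided.
        if ai_suggestion is not None and random.random() < trust:
            return ai_suggestion
        if true_label == "negative" and random.random() < inherited_bias:
            return "positive"
        return true_label

    def error_rate(with_ai, inherited_bias=0.0, n=10_000):
        # Fraction of cases the participant gets wrong in one phase.
        errors = 0
        for _ in range(n):
            truth = random.choice(LABELS)
            suggestion = biased_ai(truth) if with_ai else None
            if participant(truth, suggestion, inherited_bias=inherited_bias) != truth:
                errors += 1
        return errors / n

    print("phase 1, assisted by biased AI:     ", error_rate(with_ai=True))
    print("phase 2, unaided, bias carried over:", error_rate(with_ai=False, inherited_bias=0.3))
    print("phase 2, unaided, no carryover:     ", error_rate(with_ai=False))

Only the carryover condition keeps making the AI's characteristic error after the AI is gone, which is the pattern the paper reports; the sketch just shows what "inheriting" the bias would look like in the error counts.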