Whisper is a popular transcription tool powered by artificial intelligence, but it has a major flaw. It makes things up that were never said.
as much as the speech-to-text gets wrong on my phone, I can only imagine what it does with doctors' notes.
one of my million previous jobs was in medical transcription, and it is so easy to misunderstand things even when you have a good grasp of specialty-specific terminology and basic anatomy.
they enunciate the shit they're recording about your case about as well as they write legibly. you really have to get a feel for a doctor's speaking style and common phrases to not turn in a bunch of errors.
But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.
Edit: oh yeah, ✨ innovation ✨
While most developers assume that transcription tools misspell words or make other errors, engineers and researchers said they had never seen another AI-powered transcription tool hallucinate as much as Whisper.
Edit 2: it gets better and better
In an example they uncovered, a speaker said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.”
But the transcription software added: “He took a big piece of a cross, a teeny, small piece ... I’m sure he didn’t have a terror knife so he killed a number of people.”
A speaker in another recording described “two other girls and one lady.” Whisper invented extra commentary on race, adding “two other girls and one lady, um, which were Black.”
In a third transcription, Whisper invented a non-existent medication called “hyperactivated antibiotics.”
Edit 3: wonder if the Organ Procurement Organizations are going to try to use this to shift blame for the extremely fucked up shit that's been happening
I've been using Whisper with TankieTube and I'm curious whether these errors were made with the large-v2 or the large-v3 model. I suspect it was the latter, because v3's training data includes output generated by v2.
The Whisper large-v3 model was trained on 1 million hours of weakly labeled audio and 4 million hours of pseudo-labeled audio collected using Whisper large-v2.
Probably audio quality. I can't imagine the acoustics in a hospital room or the hallway outside are anything close to a YouTube video recorded with a professional mic.
Seems a bit stupid to use a transcription aid that can literally invent things, but when has something completely failing to do what it is supposed to do stopped capitalists from saving a buck?
I discovered that really quickly with casual, unassisted use. People are asleep at the wheel if they're handing important duties to an AI. You don't let a dog drive your car and hope for the best.
At least crypto wasn't this annoying. You could just point and laugh from the outside. AI is being shoved into everything and makes anything it touches significantly worse.
This is fucked; you don't use a black-box approach in anything high-risk without human supervision. Whisper probably could be used to help accelerate transcriptions done by an expert, maybe as some sort of "first pass" that needs to be validated, but even then it might not speed things up and might impact quality (see coding with Copilot). Maybe also use the timestamp information to filter the most egregious hallucinations, or a bespoke fine-tuning setup (assuming it was fine-tuned in the first place)? Just spitballing here; I should probably read the paper to see what the common error cases are.
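A crude version of that timestamp filter is easy to prototype with the openai-whisper Python package: each segment it returns carries start/end times plus avg_logprob, no_speech_prob, and compression_ratio, so you can flag text the decoder wasn't confident in, text that lands on probable silence, or text that's too long for its timespan. Minimal sketch below, assuming an audio file on disk; the first three thresholds are whisper's own decoding defaults, the speaking-rate cap is a made-up number, and none of it is validated against medical audio.

```python
import whisper

# Rough sketch of a post-hoc hallucination filter, not a validated pipeline.
# First three thresholds are whisper's decoding defaults; the rate cap is a guess.
LOGPROB_FLOOR = -1.0    # whisper's default logprob_threshold
NO_SPEECH_CEIL = 0.6    # whisper's default no_speech_threshold
REPETITION_CEIL = 2.4   # whisper's default compression_ratio_threshold
MAX_CHARS_PER_SEC = 25  # more "text" than the timestamps could plausibly hold

def suspicious(seg: dict) -> bool:
    """Flag segments that look invented rather than transcribed."""
    duration = max(seg["end"] - seg["start"], 1e-6)
    chars_per_sec = len(seg["text"].strip()) / duration
    probably_silence = (
        seg["no_speech_prob"] > NO_SPEECH_CEIL   # decoder thinks nobody spoke...
        and seg["avg_logprob"] < LOGPROB_FLOOR   # ...and isn't confident in the text
    )
    return (
        probably_silence
        or seg["compression_ratio"] > REPETITION_CEIL  # looping/repetition
        or chars_per_sec > MAX_CHARS_PER_SEC           # timestamp sanity check
    )

model = whisper.load_model("large-v2")
result = model.transcribe("visit.wav")  # "visit.wav" is a made-up filename

for seg in result["segments"]:
    tag = "SUSPECT" if suspicious(seg) else "ok"
    print(f"[{tag:7}] {seg['start']:7.2f}-{seg['end']:7.2f} {seg['text']}")
```

Anything flagged goes back to a human, which is the whole point: first pass, not final copy.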
It's funny, because this is the OpenAI model I had the least cynicism towards. Did they bazinga it up when I wasn't looking?