Vocaloids were invented in 2000, with commercial release in 2004. Human singers aren't extinct yet.
It may be possible in the future for a synthetic voice to sound fully human with a full range of emotions. But I believe human actors and voice actors will still be used, because 1) it's easier to explain what you want to a human professional, and 2) unions exist, and they will push back against it.
Acting is an art. What world is it where robots do art while humans do the tedious manual labor?
I think you're probably right, but a world where robots do art and humans do the tedious manual labor sounds eerily similar to the world we live in. At least, it is not outside the realm of possibility.
There's quite a lot of work left to achieve a society with universal basic income, especially if even the technologies developed for that purpose are twisted and used against it.
Oh no... I figured it out. Quark never left this timeline when he jumped back to Roswell! We are living in a universe where Quark secretly runs the world! It's the only explanation for this madness!
I also think it will likely be quite some time before AI can accurately reproduce the range of emotions a human can. Simple emotional responses, sure, but I'm not so certain about complex ones in the near future.
Vocaloids are far from perfect, but they can be damn good in the hands of a good producer. Plus, isn't that the original point? "AI VAs were invented, so soon all VAs will be AI"?
And producing the example you provided required a big voice bank from people who are very experienced in voice work. Top Gear/The Grand Tour have over 200 episodes in which the hosts play basically the same characters, spanning something like 20 years. And it still ain't perfect. It's damn good, but there are hiccups here and there.
So to produce a good AI voiceover, you'll need experienced people doing a lot of work. And to get experienced human actors, you need humans acting. Hence my point.
Well, if you're talking about the newest AI-powered UTAU voicebanks, that's because the developers finally thought of crossing the streams: instead of only having the singers pronounce syllables at several pitches, they used that data (expanded to also include several syllable clusters) to train an AI. Most trained AI models use voice samples recorded from live performances, so the samples vary in quality and in how many data points exist for each individual syllable. These voicebanks instead have the full set of voice training data prerecorded by design, so every possible combination of phonemes is covered as cleanly as possible.
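To make the "prerecorded by design" point concrete, here's a minimal, purely illustrative sketch (the syllable and pitch lists are made up, not from any real voicebank): a designed recording plan enumerates every syllable-times-pitch combination up front, so coverage is uniform by construction, whereas a corpus scraped from live performances gives you whatever combinations happen to occur.

```python
from itertools import product

# Hypothetical example data -- not from any actual UTAU voicebank.
syllables = ["ka", "ki", "ku", "ke", "ko", "ka-i"]  # incl. one cluster
pitches = ["C3", "E3", "G3", "C4", "E4"]

# Designed voicebank: record every syllable at every target pitch.
recording_plan = list(product(syllables, pitches))

# Coverage is guaranteed: each combination appears exactly once,
# unlike performance-scraped data where some pairs may be missing.
assert len(recording_plan) == len(syllables) * len(pitches)
assert ("ka-i", "C4") in recording_plan
```

The trade-off is effort: the singer has to sit through every entry in the plan, which is exactly the "experienced people doing a lot of work" mentioned above.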