VLC media player, the popular open-source software developed by the nonprofit VideoLAN, has topped 6 billion downloads worldwide and teased an AI-powered subtitle feature.
Still no live audio encoding without the CLI (unless you stream to yourself), so no plug-and-play with Dolby/DTS
Encoding params still max out at 512 kbps on every codec without the CLI.
Can't switch audio backends live (minor inconvenience, tbh)
Creates a barely usable, non-standard M3A format when saving a playlist.
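For what it's worth, the CLI workaround for the encoding caps looks roughly like this. A sketch, not an official recipe: the `--sout` transcode chain (`acodec`, `ab`, `channels`) is standard VLC syntax, but the exact codec names available depend on your build, and the filenames here are made up.

```shell
# Transcode audio above the GUI's 512 kbps cap via VLC's command line.
# -I dummy: no interface; vlc://quit: exit when done.
vlc -I dummy input.mkv \
  --sout '#transcode{acodec=a52,ab=640,channels=6}:std{access=file,mux=ts,dst=out.ts}' \
  vlc://quit
```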
I think those are about my only complaints about VLC. The default subtitle rendering is solid, especially with multiple text boxes for signs. Playback has been solid for ages. It handles lots of tracks well, and it doesn't just wrap ffmpeg, so it's very useful for testing or debugging your setup against mplayer or mpv.
I know people are gonna freak out about the AI part in this.
But as a person with hearing difficulties this would be revolutionary. So much shit I usually just can’t watch because OpenSubtitles doesn’t have any subtitles for it.
VLC automatic subtitle generation and translation, based on local and open-source AI models running on your machine, working offline and supporting numerous languages!
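VLC hasn't published how the feature works internally, but local transcription models (Whisper-style) typically emit timed text segments, and getting from there to subtitles is mostly formatting. A minimal sketch, assuming hypothetical `(start, end, text)` segments:

```python
# Sketch: turning Whisper-style transcription segments into an SRT document.
# The segment shape (start_seconds, end_seconds, text) is an assumption,
# mirroring what local speech-to-text models commonly produce.

def fmt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """Render (start, end, text) tuples as numbered SRT cue blocks."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(f"{i}\n{fmt_timestamp(start)} --> {fmt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

print(segments_to_srt([(0.0, 2.5, "Hello."), (2.5, 5.0, "World.")]))
```

The interesting (hard) part is obviously the model itself; the point is just that once you have timed segments locally, no network round-trip is needed to show subtitles.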
Oh, so it's basically like YouTube's auto-generated subtitles. Never mind.
I've been waiting for this break-free playback for a long time. Just play Dark Side of the Moon without breaks between tracks. Surely a single thread could look ahead and see that the next track doesn't need a different codec launched: it's technically identical to the current track, so there's no need for a break.
/rant
I know AI has some PR issues at the moment but I can’t see how this could possibly be interpreted as a net negative here.
In most cases, people will go for manually written subtitles rather than autogenerated ones, so the use case here would mostly be where no better, human-created subbing is available.
I just can’t see AI or autogenerated subtitles of any kind taking jobs from humans, because they will always be worse or less accurate in some way.
I don't mind the idea, but I'd be curious where the training data comes from. You can't just train off users' (unsubtitled) videos, because you need subtitles to know whether the output is right or wrong. I checked their Twitter post, but it didn't seem to help.
Fuck no. Leave the subtitles alone. Make people learn something, like searching for and applying subtitle files, or actually writing their own and giving back, for a change.