Give them a break guys, they're just a small indie company that doesn't have the money to pay animators to do an actually good job /s
Aside from how awful the animations are, I'm wondering why they felt they needed them at all. The little chibi portraits would have looked fine as static images
AI images can be shockingly good. AI animation... sucks. That'll change. There's too much training data not to. Every minute of video is over a thousand frames showing the machine what can happen between adjacent frames. But right now, it's either fuzzy and bad, or clean and worse, and I cannot comprehend how anyone saw these and said "that'll do."
Just pick a good frame and wiggle the parts in Live2D or something.
The hilarious part is that hoyo is constantly pushing the boundaries of what can be done with Live2D; it's heavily used in Genshin character teasers, and their otome game uses it extensively. They're really good at this. Why get AI involved?
Each existing frame of video, especially older video, contains a limited amount of information. You can maybe do some static image upscaling -- and AI upscaling is actually pretty remarkable. I was blown away by what Stable Diffusion could do with some old comic book scans.
But more than that...there's a whole video's worth of footage of the characters and scenes. For most of the video, that information can, given the right software and a 3D model, be incorporated back into frames to generate a higher-resolution image.
To say nothing of frame interpolation to generate higher-frame-rate video.
Like, I like Lawrence of Arabia. That movie actually has pretty good-quality footage. But...there's still film grain. And the frame rate is only so high. But there is a whole lot of footage of Lawrence in that movie, enough information to do a pretty good job, if used effectively, of dropping film grain, generating intermediate frames, and increasing the resolution.
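The "lots of footage" point is real math, by the way: if you can align frames of a static shot, uncorrelated grain averages out roughly with the square root of the frame count. A toy numpy sketch, with synthetic Gaussian noise standing in for film grain and the whole alignment/registration problem handwaved away:

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat gray "shot"; 16 frames of it, each with film-grain-like noise
clean = np.full((64, 64), 128.0)
frames = clean + rng.normal(0, 20, (16, 64, 64))

# Noise level of one frame vs. the average of all 16 aligned frames.
# Averaging N frames cuts noise by ~sqrt(N), so ~4x here.
single_noise = frames[0].std()
stacked_noise = frames.mean(axis=0).std()
```

Obviously real footage has motion, so the hard part is the alignment, not the averaging.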
This is possible today, and without much effort. Most Stable Diffusion kits just come with upscalers and, as long as you pick the right ones for the job, the models act like fucking magic. Way way better than any of the "nearest neighbor" algorithms image editors provide.
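For a sense of what "nearest neighbor" versus a smarter classical resampler actually does (and what an AI upscaler is competing against), here's a pure-numpy toy. `bilinear_upscale` is my own throwaway helper for illustration, not any library's API:

```python
import numpy as np

# A 64x64 image with a white diagonal line
img = np.zeros((64, 64), dtype=np.uint8)
np.fill_diagonal(img, 255)

# Nearest neighbor: just repeat each pixel. Fast, blocky, staircased edges.
nearest = np.repeat(np.repeat(img, 4, axis=0), 4, axis=1)

def bilinear_upscale(a, factor):
    """Bilinear: weighted average of the 4 nearest source pixels.
    Smoother than nearest neighbor, but soft -- it can't invent
    detail that isn't in the source, which is where AI models come in."""
    h, w = a.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    a = a.astype(float)
    top = a[y0][:, x0] * (1 - wx) + a[y0][:, x1] * wx
    bot = a[y1][:, x0] * (1 - wx) + a[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

smooth = bilinear_upscale(img, 4)
```

An AI upscaler goes a step further than either: it hallucinates plausible detail learned from training data instead of interpolating what's already there.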
Video editors already have really good tools for interpolating frames for slow motion. They are a bit fiddly in high motion situations, but work well otherwise.
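To see why "fiddly in high motion": the naive way to make an in-between frame is to average the two neighbors, which ghosts anything that moves instead of moving it. A toy numpy sketch of that failure mode (real interpolators estimate optical flow and warp pixels along the motion instead):

```python
import numpy as np

def blend_midframe(f0, f1):
    """Naive interpolation: pixel-wise average of two frames.
    Fine for static scenes; on motion it produces two ghostly
    half-bright copies instead of one object halfway between."""
    return ((f0.astype(np.uint16) + f1.astype(np.uint16)) // 2).astype(np.uint8)

# A white square that jumps 8px to the right between frames
f0 = np.zeros((32, 32), np.uint8); f0[8:16, 8:16] = 255
f1 = np.zeros((32, 32), np.uint8); f1[8:16, 16:24] = 255

mid = blend_midframe(f0, f1)
# mid has two half-bright squares (value 127) at the old and new
# positions, not one full-bright square in the middle
```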