
What is it about AI art that makes it so recognisable? (Beyond obvious artifacts like fucked up hands or writing)

Stumbled onto this shitty game trailer on YouTube and instantly clocked it as AI art

Can't really put into words why exactly they're so instantly recognisable

Something about the weird sharp lines and doll-like faces

38 comments
  • Everything looks "too perfect", like a bunch of elements that are each the platonic ideal or textbook photo of the thing. Everything pops out at you, every last thing is the focus, there isn't really a background built into it.

    Also, perhaps as a result of this, the lighting and shading are often incoherent.

  • I don't think the lines are very sharp at all, if anything the style stands out because it tends to have lots of vague blurry regions, where a human drawn image would have details and texture. Something about AI generated images tends to make them much less detailed, at least with far less recognizable detail, than what humans would draw. And when there are details they're nondescript.

  • It's because the way it's made is the opposite of how digital art is actually made. Things like clothing brushes, symmetry and perspective rulers, and textured brushes obviously just aren't used by AI. Because of that, human-made art has a level of cohesion that AI art cannot replicate. This is especially true in shading: while AI might be able to replicate the style of a textured brush, it ends up looking like they used 5,000 different ones for no reason at all. There's no clear intention behind any of the creative decisions made. Even if you think certain things in digital art are bad because the creative decision being made is "this saves time", it's still a human making a decision.

    so your brain is parsing a garbled mess, and even if you can't quite place what about it is garbled, it's easy to pattern-recognize that sorta nonsense as AI after seeing enough of it

  • It also falls into the same three categories every time

    Bad Pixar, Bad Anime, Bad Corporate Memphis

    And don't get me started on all the weird gibberish text that always shows up

  • Incredible detail for a piece made without any clear artistic vision? idk what i'm talking about but this feels like i'm onto something

  • I'm not as alert to this as other people are, but the above is in the category of "things nobody would ever make", as is a lot of AI stuff. An answer without even the whiff of a question.

    But if someone was using this to make a more obviously-desirable thing I probably couldn't recognize it just from the aesthetics.

  • There's usually going to be a hegemonic style for AI art, since most people making this stuff are just going to put in some vague keywords for the direction of the style and then stuff the rest of the prompt with quality keywords. Oftentimes hosted inference services will actually do the quality-keyword stuffing for you, or train in a house style. Whatever you don't specify is going to be filled in with essentially the model average (which is, of course, not going to be a representative average image; it's going to be the average of the "preferred" set from their preference optimization training). Practically nobody asks for mediocre images (because why would you), and people making models, especially on hosted services, often effectively won't let you.

    Think of what you'd expect to get from requesting an image of "a beautiful woman". There's certainly a lot of different ideas that people have of which women are beautiful and what traits make a woman beautiful, across different individuals and especially across different cultures and time periods. But if you take a set of every picture that someone thought of as having a beautiful woman in it, and look at the mode of that distribution, it's going to settle on conventionally attractive by the standards of whatever group is labeling the images. And the same thing will happen with an AI model, training on those images labeled as "a beautiful woman" will shift its output towards conventionally attractive women. If you consider it as a set of traits contributing to conventional attractiveness, then it's also fairly likely that every "a beautiful woman" image will end up looking like a flawless supermodel, since the mode will be a woman with all of the most common traits in the "a beautiful woman" dataset. That often won't look natural, because we're not used to seeing flawless supermodels all of the time.

    That's more or less what is happening when people make these AI images, but with the whole image and its style. The set of images labeled as "high quality" or whatever quality keyword, or that are in their preference optimization set, have attributes that are more common in those images than they are in other images. Those attributes end up becoming dominant and a lot of them will show up in a generated image stuffed with quality keywords or on a heavily DPO-tuned model, which may look unnatural when a typical good-looking natural image may have only a few of those traits. And the problem is exacerbated by each model having its own default flavor, and people heavily reusing the same sets of quality keywords, and I would honestly fully expect that I could pin part of it on how some text encoders work (CLIP's embeddings are hard to separate distinct concepts from and this does manifest in how images are generated, but a lot of recent popular models don't use CLIP so this doesn't necessarily always apply).
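
A toy sketch of that trait-stacking effect (the trait names and popularities below are all invented, not from any real dataset): if each image in a "preferred" set independently shows a few common quality traits, a generation that stacks every common trait at once is vanishingly rare in the data it was trained on, which is one way to read why stuffed outputs look unnatural.

```python
import random

random.seed(0)

# Hypothetical "quality" traits and how popular each is in a preference set.
popularity = {
    "sharp_focus": 0.6,
    "dramatic_lighting": 0.5,
    "vivid_colors": 0.5,
    "smooth_skin": 0.4,
    "bokeh": 0.3,
    "high_contrast": 0.2,
}

# Each "preferred" image independently shows a few of the traits.
dataset = [
    {t for t, p in popularity.items() if random.random() < p}
    for _ in range(10_000)
]

# A quality-keyword-stuffed generation tends toward ALL the common traits
# at once, i.e. the per-trait modes stacked together.
stacked = set(popularity)

mean_traits = sum(len(img) for img in dataset) / len(dataset)
frac_stacked = sum(img == stacked for img in dataset) / len(dataset)

print(f"avg traits per 'real' image: {mean_traits:.2f}")
print(f"fraction of images showing every common trait: {frac_stacked:.4f}")
```

The typical image has only two or three of the traits, while the all-traits-at-once image almost never occurs in the set it was distilled from.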

  • Because each diffusion image is built from uniform noise, they come out almost perfectly balanced.

    Meaning the average of all the color in the image is near-perfect grey.

    This also applies to shape, where shapes and their distribution tend to be very balanced, symmetrical, and uniform.
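
A quick numerical check of the starting point this claim rests on: the zero-mean Gaussian noise a diffusion sample begins from averages out to mid grey once mapped into display range. (The 0.18 scale factor below is arbitrary, just to keep most values inside [0, 1]; this only demonstrates the statistics of the input noise, which is what the biased early models inherited.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A diffusion sample starts from zero-mean Gaussian noise.
noise = rng.standard_normal((256, 256, 3))

# Map into [0, 1] display range; clipping is symmetric so the mean stays put.
img = np.clip(noise * 0.18 + 0.5, 0.0, 1.0)

print(f"mean pixel value: {img.mean():.3f}")  # close to 0.5, i.e. mid grey
```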

    • This is untrue.

      • Well, it was true for the first big models. The most recent generation of models do not have this problem.

        Earlier models like Stable Diffusion 1.5 worked on noise (ϵ) prediction. All diffusion models work by training to predict where the noise is in an image, given images with differing levels of noise in them; you can then sample from the model using a solver to get a coherent image in a smaller number of steps. So, using ϵ as the prediction target, you're obviously not going to learn anything by trying to predict which part of pure noise is noise, because the entire image is noise. During sampling, the model will (correctly) predict on the first step that the pure noise input is pure noise, and remove all of it, giving you a black image. To prevent this, people trained models with a non-zero SNR at the highest noise timestep. That way, they're telling the model that there is something actually meaningful in the random noise it's given. But since the noise we give it is always uniform, it ends up biasing the model towards making images with average brightness. The parts of the initial noise that it retains (since remember, we're no longer asking it to remove all of the noise; we're lying to it and telling it some of it is actually signal) usually also end up causing unusual artifacting. An easy test for these issues is to prompt for "a solid black background" -- early models will usually output neutral gray squares or grayscale geometric patterns.
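
That degenerate terminal step can be sketched with the standard DDPM forward process (ᾱ, here `alpha_bar`, controls the signal-to-noise ratio; the "image" is just random data for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x0, eps, alpha_bar):
    # DDPM forward process: x_t = sqrt(ab)*x0 + sqrt(1 - ab)*eps,
    # where alpha_bar sets the SNR at timestep t.
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

x0 = rng.uniform(0.0, 1.0, size=(8, 8, 3))  # stand-in "clean image"
eps = rng.standard_normal(x0.shape)

# True zero terminal SNR: alpha_bar = 0, so the input is pure noise, and a
# perfect eps-prediction removes everything, leaving a black image.
x_T = add_noise(x0, eps, alpha_bar=0.0)
residual = x_T - eps  # what survives after subtracting the predicted noise
print(f"max residual at zero SNR: {np.abs(residual).max():.3f}")

# The old workaround: a small non-zero terminal SNR leaves a sliver of
# "signal" inside the noise, which is what biases outputs toward average
# brightness and causes the residual-noise artifacts.
x_T_leaky = add_noise(x0, eps, alpha_bar=1e-3)
```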

        One of the early hacks for the average brightness issue was training with a random channelwise offset added to the noise, and models like Stable Diffusion XL used this method. It allowed models to make very dark and light images, but also often made images come out too dark or too light; it's possible you saw some of these about a year into the AI craze, when this was the latest fad. The proper solution came with Bytedance's paper ( https://arxiv.org/pdf/2305.08891 ) showing a method for training with an SNR of zero at the highest noise timestep. The main change is that instead of predicting noise (ϵ), the model needs to predict velocity (v), which is a weighted combination of predicting the noise and predicting the original sample x0. With that, at the highest noise timestep the model will predict the dataset mean (which manifests as an incredibly blurry mess in the vague shape of whatever you're trying to make an image of). People didn't actually implement this as-is for any new foundation model; most of what I saw of it was independent researchers running finetune projects, apparently because it was taking too much trial and error for larger companies to make it work well. Actually, that isn't entirely true: people working on video models adopted it more quickly, because the artifacts from residual noise get very bad when you add a time dimension. A couple of groups also made SDXL clones using this method.
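
The v-prediction target can be written out in a few lines; the point is that, unlike ϵ-prediction, it stops being degenerate at the noisiest timestep:

```python
import numpy as np

rng = np.random.default_rng(0)

def v_target(x0, eps, alpha_bar):
    # v-prediction target: v = sqrt(ab)*eps - sqrt(1 - ab)*x0,
    # a weighted combination of the noise and the clean sample.
    return np.sqrt(alpha_bar) * eps - np.sqrt(1.0 - alpha_bar) * x0

x0 = rng.uniform(-1.0, 1.0, size=(8, 8, 3))
eps = rng.standard_normal(x0.shape)

# At the highest noise timestep (alpha_bar = 0) the target collapses to
# -x0: the model must predict (the negation of) the clean image itself, so
# "everything is noise" is no longer a valid answer, and a trained model
# outputs the dataset mean there instead.
v_T = v_target(x0, eps, alpha_bar=0.0)
assert np.allclose(v_T, -x0)

# At alpha_bar = 1 (no noise) the target is pure eps, matching the other
# end of the weighting.
assert np.allclose(v_target(x0, eps, alpha_bar=1.0), eps)
```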

        The latest fad is rectified flow, which is a very different process from diffusion. The diffusion process is described by a stochastic differential equation (SDE), which adds some randomness and essentially follows a meandering path from input noise to the resulting image. The rectified flow process is an ordinary differential equation (ODE), which (ideally) follows a straight-line path from the input noise to the image, and can actually be run either forwards or backwards (since it's an ODE). Flux (the model used for Twitter's AI stuff) and Stable Diffusion 3/3.5 both use rectified flow. They don't have the average brightness issue at all, because it makes zero mathematical or practical sense for the end point to be anything but pure noise. I've also heard people say that rectified flow doesn't typically show the same uniform level of detail that a few people in this thread have mentioned; I haven't really looked into that myself, but I would be cautious about using uniform detail as a litmus test for that reason.
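
A minimal sketch of the rectified-flow idea, assuming a hypothetical perfectly trained model whose predicted velocity is exact (real models only approximate it, and samplers take many small steps rather than one):

```python
import numpy as np

rng = np.random.default_rng(0)

x0 = rng.uniform(-1.0, 1.0, size=(8, 8, 3))  # data endpoint
x1 = rng.standard_normal(x0.shape)           # pure-noise endpoint

# Rectified flow trains on straight-line interpolations between data and
# noise, with a constant velocity target along the whole path:
#   x_t = (1 - t) * x0 + t * x1,   v = x1 - x0
v = x1 - x0

# Sampling integrates the ODE dx/dt = v from noise (t=1) back to data
# (t=0). With the exact velocity, one Euler step over the full interval
# recovers the data point:
x_back = x1 - v
assert np.allclose(x_back, x0)

# Because it's an ODE with no injected randomness, the same step also runs
# forwards, mapping the data point back to its noise:
x_fwd = x0 + v
assert np.allclose(x_fwd, x1)
```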

      • My bad, I'm probably working from outdated information. I think it was a Computerphile video that showed how diffusion images tend towards a uniform intensity on all color channels because they start out as noise.

        That all goes out the window, of course, if the input is not pure white noise.

  • if I had a nickel every time I saw knockoff Willy Wonka related AI slop I'd have 2 nickels, which isn't a lot but it's weird that it happened twice
