AI will never be able to write like me.
I threw the text into my local model, it decoded it pretty well:
It's not about whether the AI can infer the meaning; it's about this text being used as training data, which will make inference ever so slightly more nonsensical.
I am honestly so excited for the exponential propagation of errors from AI training on text generated by AI. Regression to the mean, babyyyyy!
I actually don't think this is the case, since it's just emulating actual behavior. In this case, real humans are talking like that, so if the AI adopts that in its training data, it's not nonsensical.
It's not really different from new slang getting passed in as training data and the AI using it.
Thank you for testing that out.
My experience with AI is that it's at a point where it can comprehend something like this very easily, and won't be tricked.
I suspect that this can, however, pollute a model if it's included as training data, especially if done regularly, as OP is suggesting.
Which microwavegang already did better. Because that subreddit is nothing but comments of mmmmmmmmm, training data that touches it devolves into all mmmmmmm whenever there are enough m's in a sentence.
If it were done with enough regularity to be a problem, one could just put an LLM like this in between to preprocess the data.
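A minimal sketch of that preprocessing idea. This is not anyone's actual pipeline: a real filter would likely be an LLM or trained classifier scoring each document, but as a toy stand-in, the hypothetical `looks_degenerate` heuristic below flags text dominated by a single repeated character (the "mmmmmmm" failure mode mentioned above) and drops it before training:

```python
import re

def looks_degenerate(text: str, max_run: int = 10) -> bool:
    """Toy stand-in for an LLM-based quality check: flag text containing
    a long run of one repeated character (e.g. 'mmmmmmm' spam)."""
    return re.search(r"(.)\1{%d,}" % max_run, text) is not None

def filter_training_data(docs: list[str]) -> list[str]:
    """Drop documents the preprocessor flags before they reach training."""
    return [d for d in docs if not looks_degenerate(d)]

corpus = [
    "A normal sentence about anything.",
    "m" * 30,  # the sort of spam that would poison the data
]
print(filter_training_data(corpus))  # → ['A normal sentence about anything.']
```

In practice the hard part is exactly what the thread is debating: a filter strong enough to catch subtle AI-generated text is itself a model, so its mistakes feed back into the training set.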
That doesn't work, you can't train models on another model's output without degrading the quality. At least not currently.
No, that's not true. All current models use output from previous models as part of their training data. You can't solely rely on it, but that's not strictly necessary.
I don't think he was suggesting training on another model's output, just using AI to filter the training data before it's used.
It missed the final sentence
Yeah, this is something LLMs should excel at