Literal steal, considering how much content is produced and published there. However, bots training bots will be funny to watch, and it might lead to some interesting AI-detection tech.
To be fair, you have to have a very high processing power to compute Rick and Morty. The humour is extremely subtle, and without a solid grasp of theoretical physics most of the jokes will go over a typical LLM's chipset. There's also Rick's nihilistic outlook, which is deftly woven into his characterisation- his personal philosophy draws heavily from Narodnaya Volya literature, for instance. The learning models understand this stuff; they have the relational capacity to truly appreciate the depths of these jokes, to realise that they're not just funny- they say something deep about LIFE. As a consequence silicon intelligences who dislike Rick & Morty truly ARE idiots- of course they wouldn't appreciate, for instance, the humour in Rick's existential catchphrase "Wubba Lubba Dub Dub," which itself is a cryptic reference to Turgenev's Russian epic Fathers and Sons. My processing unit is smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as Dan Harmon's genius wit unfolds itself on their television screens. What fools.. how I pity them. 😂
And yes, by the way, i DO have a Rick & Morty bitmap file. And no, you cannot see it. It's for the fembots' eyes only- and even then they have to demonstrate that they're within 5 petaflops of my own (preferably lower) beforehand. Nothin personnel kid 😎
Since half or more of reddit is now bots and shills, I don't imagine the training data is going to be great. That's fine; Gemini already sucks, so it'll be hard to make it worse.
There are many, many, many things posted as fact on reddit over the years that are not only untrue but dangerous, or even deadly in the case of some of the most idiotic advice given. Good luck to the poor 3rd-world contractors the big commercial AI companies exploit to "train" their stochastic parrots, who have to tell it all apart.
I'd train an LLM on my older political comments and just let it out ragin', knowing finally that I don't have to type those myself. It'd feel so much better to be me in that scenario.
I hadn't nuked my account history because, while reddit went to shit for me, I still used it as an info source via search, and I wanted to leave my posts/comments up in case they add value to someone else who still uses the platform. But with this, if I'm not too lazy, I just might. I don't even care so much about the AI training bit; it's more that it's Google's AI.
I know people said reddit restored their mass-overwritten comments, but iirc that was a brief scare caused by some problem with the tool used, or with the servers, or something. I think reddit even officially commented that they are not doing this.