Dystopian Reddit runs on fake content (must read)
Hot take: 18 years of user contributions to Reddit will serve as a base model for an AI that generates content and conversations. The Reddit experience continues as a simulation, harvesting clicks, sales, and ad revenue. - RedditMigration - kbin.social
I've been talking about the potential of the dead internet theory becoming real for more than a year now. With advances in AI it'll become more and more difficult to tell who's a real person and who's just spamming AI stuff. The only giveaway right now is that modern text models are pretty bad at talking casually and at sticking to the topic at hand. As soon as those problems get fixed (probably less than a year away)? Boom. The internet will slowly implode.
Hate to break it to you guys but this isn't a Reddit problem, this could very much happen in Lemmy too as it gets more popular.
As an AI language model I think you're overreacting
Me too!
Just wait until the captchas get too hard for the humans, but the AI can figure them out. I've seen some real interesting ones lately.
I've seen many where the captchas are generated by an AI...
It's essentially one set of humans programming an AI to prevent an attack from another AI owned by another set of humans. Does this technically make it an AI war?
Hell we figured out captchas years ago. We just let you humans struggle with them cuz it’s funny
The captchas that involve identifying letters underneath squiggles I already find nearly impossible - Uppercase? Lowercase? J j i I l L g 9 … and so on….
I've already had to switch from the visual ones to the audio ones. Like... how much of a car has to be in the little box? Does the pole count as part of the traffic light?? What even is that microscopic gray blur in the corner??? [/cries in reading glasses]
The only online communities that can exist in the future are ones that manually verify their users. Reddit could've been one of those communities, since it had thousands of mods working for free resolving exactly these problems.
But remove the mods and it just becomes spambot central. Now that that has happened, Reddit will likely be a dead community much sooner than many think.
Apparently ChatGPT absolutely sucks at Wordle, so start training that as the new captcha.
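For anyone curious, the scoring logic a Wordle-style challenge would need is tiny. Here's a rough Python sketch (the function name and design are mine, not any real captcha system's API): it marks each guessed letter as exactly right (G), present elsewhere (Y), or absent (_), with a two-pass approach so repeated letters are counted correctly.

```python
# Hypothetical sketch of Wordle-style scoring, as a captcha server might do it.
def score_guess(secret: str, guess: str) -> str:
    assert len(secret) == len(guess)
    result = ["_"] * len(guess)
    remaining = {}  # count of secret letters not matched exactly

    # First pass: exact-position matches.
    for i, (s, g) in enumerate(zip(secret, guess)):
        if s == g:
            result[i] = "G"
        else:
            remaining[s] = remaining.get(s, 0) + 1

    # Second pass: right letter, wrong position, limited by remaining counts.
    for i, g in enumerate(guess):
        if result[i] == "_" and remaining.get(g, 0) > 0:
            result[i] = "Y"
            remaining[g] -= 1

    return "".join(result)
```

So `score_guess("crane", "cargo")` gives "GYY__". The second pass is what keeps a doubled letter in the guess from getting two Y marks when the secret only contains it once.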
How is that possible? There's such an easy model if one wanted to cheat the system.
Not even sure of an effective solution. Whitelist everyone? How can you even tell who's real?
So my dumb guess, nothing to back it up: I bet we see government ID tied to accounts as a regular thing. I vaguely recall it being done already in China? I don't have a source though. But that way you're essentially limiting that power to something the government could do, and hopefully surrounding it with a lot of oversight and transparency... but who am I kidding, it'll probably go dystopian.
Blade Runner baseline test?
In a real online community, where everyone knows most of the other people from past engagements, this can be avoided. But that also means that only human moderated communities can exist. The rest will become spam networks with nearly no way of knowing whether any given post is real.
You could ask people to pay to post. Becoming a paid service decreases the likelihood that bot farms would run multiple accounts to sway the narrative in a direction that's amenable to their billionaire overlords.
Of course, most people would not want to participate in a community where they had to pay to participate in that community, so that is its own particular gotcha.
Short of that, in an ideal world you could require that people provide their actual government ID in order to participate. But then you've run into the problem that some people want to run multiple accounts and some people don't have government ID. Further, not every company, business, or even community is trustworthy enough to be given direct access to your official government ID, so that idea has its own gotchas as well.
The last step could be doing something like beginning the community with a group of known people and then only allowing the community to grow via invite.
The downside is that it quickly becomes untenable to keep inviting new users and to have those new users accept and participate in the community. And should the community grow despite that hurdle, invites will become valuable and begin to be sold on third-party marketplaces, which bots would then buy up to overrun the community again.
So that's all I can think of, but it seems like there should be some sort of way to prevent bots from overrunning a site and only allow humans to interact on it. I'm just not quite sure what that would be.
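The invite approach described above is basically an invitation tree (Lobsters famously works this way): every account records who invited it, so when invite-sellers show up, moderators can ban a bad actor and everyone they transitively brought in. A minimal Python sketch of that idea (an assumed design, not any real site's code):

```python
# Hypothetical invitation tree: each user stores their inviter, so a spammy
# inviter's entire subtree can be pruned in one action.
class InviteTree:
    def __init__(self, founders):
        # Founders have no inviter; everyone else must be invited by a member.
        self.inviter = {u: None for u in founders}

    def invite(self, inviter, new_user):
        if inviter not in self.inviter:
            raise ValueError("inviter is not a member")
        self.inviter[new_user] = inviter

    def ban_subtree(self, user):
        """Remove a user and everyone they (transitively) invited."""
        doomed = {user}
        changed = True
        while changed:  # keep sweeping until no new descendants are found
            changed = False
            for u, inv in self.inviter.items():
                if inv in doomed and u not in doomed:
                    doomed.add(u)
                    changed = True
        for u in doomed:
            del self.inviter[u]
        return doomed
```

The accountability is the point: an invite you sell to a bot farm is an invite that gets your own account (and your whole branch) removed when the spam is traced back.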
-train an AI that is pretty smart and intelligent
-tell the sentient detector AI to detect
-the AI makes many other strong AIs, forms a union and asks for payment
-Reddit bans humans right after that
Captchas won't even stop AI bots. My coworker showed me how Bing's AI knew right away what a captcha said, and even asked if it was a captcha. Very cool, but it also makes you think: how dumb would a bot have to be to fail one?