72 comments
  • Well, Reddit's approach towards AI and auto-mod has already killed most of the interesting discussion on that site. It's one of the reasons I moved to the Fediverse.

    At the same time, I was around in the Fediverse during the CSAM attacks, and I've run online discussion sites and forums, so I'm well aware of the challenges of moderation, especially given the wave of AI chat-bots and spam constantly attempting to infiltrate open discussion sites.

    And I've worked with AI a great deal (go check out Jan - open source, runs on a local machine - if you're interested), and there's no chance in hell it's anywhere near ready to take on the role of moderator.

    See, Reddit's biggest strength is also its biggest weakness: the army of unpaid mods who have committed untold hours to improving the site's content. What Reddit found out during the API debacle was that because the mods weren't paid, Reddit had no recourse to control them aside from "firing" them. The net result was a massive loss of editorial talent, and the site's content quality plunged as a result.

    That's because, although a mod's role is different in that they can't (or shouldn't) edit user content, they are still gatekeepers, the way junior editors would be in a print publishing organization.

    But here's the thing - there's a reason you pay editors: they ensure the organization's content is of high caliber, which is why advertisers want to pay you to run their ads.

    Reddit thinks it can skip this step. Instead of doing the obvious thing and paying the mods to be professionals, they think they can solve the problem with AI much more cheaply. But AI won't do anything to encourage people to post.

    What encourages people to post is that other people will see and comment, that real humans will engage with their content. All it takes is the automod telling you a few times that your comment was removed for some inexplicable reason, and you stop wanting to post. After all, why waste your time creating unpaid content just for a machine to reject it?

    If Reddit goes the way of AI moderation, they'll need to start paying their content creators. If they want to use unpaid content from an open discussion forum, they need to start paying their moderators.

    But here's the thing: Reddit CAN'T pay. They've been surfing on VC investment for two decades and have NEVER turned a profit, because despite their dominance of the space, they kept trying to monetize it without paying the people who contribute to it... and honestly, they've done a piss-poor job at every point in their development since "New Reddit" came online.

    This is why they sold your data to Google for AI. And it's why their content has gone to crap, and why you're all reading this on the Fediverse.

  • I mean, if the AI can reliably handle the CSAM filtering without humans having to see it, I'm all for it.

  • Oh yeah, let's do that and watch everything descend into chaos.

    Pinterest lets its AI run checks on pins, and images that don't violate the ToS at all get deleted. Accounts get permanently banned because the AI claims their images violate the ToS (I guess plants and houses are violent).

    What could go wrong? Nothing, eh? /sarcasm

  • It already does, though not in the individualized manner he's describing.

    I don't think that's entirely a bad thing. Its current form, where priority one is keeping advertisers happy, is a bad thing, but I'm going to guess everyone reading this has a machine learning algorithm of some sort keeping most of the spam out of their email.

    BlueSky's labelers are a step toward the individualized approach. I like them; one of the first things I did there was filter out what one labeler flags as AI-generated images.

  • I think that he's probably correct that this is, in significant part, going to be the future.

    I don't think that human moderation is going to entirely vanish, but it's not cheap to pay a ton of humans to do what it would take. A lot of moderation is, well... fairly mechanical. Like, it's probably possible to detect, with reasonable accuracy, that you've got a flamewar on your hands, stuff like that. You'd want to do as much as you can in software.
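
    Just to make "mechanical" concrete, here's a minimal sketch, assuming some upstream classifier already hands you a per-message toxicity score. Every name and threshold here is invented for illustration, not any site's actual pipeline:

        from dataclasses import dataclass

        @dataclass
        class Message:
            author: str
            text: str
            timestamp: float  # seconds since epoch
            toxicity: float   # 0.0-1.0, from some upstream classifier

        def looks_like_flamewar(thread: list[Message],
                                window_s: float = 600.0,
                                min_msgs: int = 10,
                                toxic_threshold: float = 0.6) -> bool:
            """True if the recent window shows a fast, hostile
            back-and-forth among a small set of participants."""
            if not thread:
                return False
            cutoff = thread[-1].timestamp - window_s
            recent = [m for m in thread if m.timestamp >= cutoff]
            if len(recent) < min_msgs:
                return False
            hostile = sum(1 for m in recent if m.toxicity >= toxic_threshold)
            authors = {m.author for m in recent}
            # Flag it when most recent messages are hostile and the
            # traffic is concentrated among just a few people.
            return hostile / len(recent) > 0.5 and len(authors) <= 5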

    Human moderators sleep, leave the keyboard, do things like that. Software doesn't.

    Also, if you have cheap-enough text classification, you can do it on a per-user basis, so that instead of a global view of the world, different people see different content being filtered and recommended (rough sketch after the quote), which I think is what he's proposing:

    Ohanian said at the conference that he thinks social media will "eventually get to a place where we get to choose our own algorithm."
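
    A minimal sketch of what that per-user filtering could look like, assuming posts get labeled by cheap text classifiers and each user picks which labels to hide. The classifier names and keyword checks are toy stand-ins, not any real service's API:

        from typing import Callable

        Classifier = Callable[[str], bool]

        # Site-side: a pool of cheap text classifiers users can subscribe to.
        # These keyword checks stand in for real models.
        CLASSIFIERS: dict[str, Classifier] = {
            "spam": lambda text: "buy now" in text.lower(),
            "ai-image-promo": lambda text: "ai-generated" in text.lower(),
        }

        def visible_posts(posts: list[str], blocked_labels: set[str]) -> list[str]:
            """Per-user filtering: same global feed, different views."""
            return [
                p for p in posts
                if not any(CLASSIFIERS[label](p)
                           for label in blocked_labels if label in CLASSIFIERS)
            ]

        feed = ["Buy now!! limited offer", "Nice AI-generated sunset", "Trip report"]
        print(visible_posts(feed, {"spam"}))                    # hides the first post
        print(visible_posts(feed, {"spam", "ai-image-promo"}))  # hides the first two

    The point being: the feed stays global, but every user's view of it is local.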

    Most social media relies on at least some level of recommendations.

    This isn't even new for him. The original vision for Reddit, as I recall, was that the voting was going to be used to build a per-user profile to feed a recommendations engine. That never really happened. Instead, one wound up with subreddits (so self-selecting communities are part of it) and global voting on stuff within them.

    I mean, text classifiers aimed at filtering out spam have been around forever for email; it's not even terribly new technology. Some subreddits had moderator-run bots that already did some level of automated moderation.
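
    For anyone curious, the classic email-era technique is a naive Bayes text classifier. A toy, self-contained version (real filters train on vastly more data) might look like:

        import math
        from collections import Counter

        def train(docs):
            """docs: (text, label) pairs, label in {"spam", "ham"}."""
            counts = {"spam": Counter(), "ham": Counter()}
            totals = Counter()
            for text, label in docs:
                counts[label].update(text.lower().split())
                totals[label] += 1
            return counts, totals

        def classify(text, counts, totals):
            vocab = set(counts["spam"]) | set(counts["ham"])
            scores = {}
            for label in ("spam", "ham"):
                # log prior + log likelihood with add-one smoothing
                score = math.log(totals[label] / sum(totals.values()))
                n = sum(counts[label].values())
                for w in text.lower().split():
                    score += math.log((counts[label][w] + 1) / (n + len(vocab)))
                scores[label] = score
            return max(scores, key=scores.get)

        counts, totals = train([
            ("cheap pills buy now", "spam"),
            ("win money fast", "spam"),
            ("meeting notes attached", "ham"),
            ("lunch tomorrow?", "ham"),
        ])
        print(classify("buy cheap pills", counts, totals))  # -> spam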
