How do you feel about your content getting scraped by AI models?

I created this account two days ago, but one of my posts ended up in the (metaphorical) hands of an AI powered search engine that has scraping capabilities. What do you guys think about this? How do you feel about your posts/content getting scraped off of the web and potentially being used by AI models and/or AI powered tools? Curious to hear your experiences and thoughts on this.


# Prompt Update

The prompt was something like, "What do you know about the user llama@lemmy.dbzer0.com on Lemmy? What can you tell me about his interests?" Initially, it generated a lot of fabricated information, but it would still include one or two accurate details. When I ran the test again, the response was much more accurate than the first attempt. It seems that as my account became more established, it became easier for the crawlers to find relevant information.

It even mentioned this very post in item 3 and in the second bullet point of the "Notable Posts" section.

For more information, check this comment.


Edit¹: This is Perplexity. Perplexity AI is a conversational search engine that aims to provide concise, sourced answers to user queries by leveraging AI language models, such as GPT-4, to analyze information from various sources on the web. It employs data scraping to gather that information, which it then feeds to its large language models (LLMs) when generating responses. The scraping process involves automated crawlers that index and extract content from websites, including articles, summaries, and other relevant data. (12/28/2024)

Edit²: One could argue that data scraping by services like Perplexity may raise privacy concerns because it collects and processes vast amounts of online information without explicit user consent, potentially including personal data, comments, or content that individuals posted without expecting it to be aggregated and/or analyzed by AI systems. One could also argue that this indiscriminate collection raises questions about data ownership, proper attribution, and the right to control how one's digital footprint is used in training AI models. (12/28/2024)

Edit³: I added the second image to the post and its description. (12/29/2024).

105 comments
  • It's Perplexity AI, so it'll do web searches on demand. You asked about your username, so it searched for your username on the web. Fediverse content is indexed, even content from instances that block web crawling (e.g. via robots.txt, or via UA blacklisting on the server side), because the content gets federated to servers that are indexed by web crawlers.

    Now, when it comes to offline models and pre-trained content, the way transformers work will often "scramble" the art and the artist. If content doesn't explicitly mention the author (and if it isn't well spread across different sources), LLMs will "know" the information you posted online, but they won't be capable of linking that content to you when asked about it.

    Let me give an example: suppose you coined a unique quote that nobody else has written, and published it on Lemmy. Your quote becomes part of the training data for GPT-n or any other LLM out there. When anyone asks "Who said the quote '...'?", it'll either hallucinate (e.g. attributing it to some random famous writer) or say something like "I don't have that information".

    That's why AIs are often (and understandably) called plagiarists by anti-AI people: they don't cite their sources. Technically, the current state-of-the-art transformers can't even do so, because LLMs are, under the hood, a fancy kind of "Will it blend?" for entire corpora across the web: AI devs gather as much data as they possibly can (legally or illegally), drop it all into the "AI blender cup", and voilà, an LLM is trained, without actually storing any content in its entirety, just its statistical associations.

    • I understand that Perplexity employs various language models to handle queries and that the responses generated may not come directly from those models' training data, since a significant portion of the output comes from what it scraped from the web. However, a significant concern for some individuals is the potential for their posts to be scraped and also used to train AI models, hence my post.

      I'm not anti-AI, and I see your point that transformers often dissociate content from its creator. However, one could argue this doesn't fully mitigate the concern. Even if the model can't link the content back to the original author, it's still using their data without explicit consent. The fact that LLMs might hallucinate or fail to attribute quotes accurately doesn't resolve the potential plagiarism issue; instead, it highlights another problematic aspect of these models, imo.

  • As with any public forum, by putting content on Lemmy you make it available to the world at large to do basically whatever they want with. I don’t like AI scrapers in general, but I can’t reasonably take issue with this.

  • As an artist, I feel the majority of AI art is very anti-human. I really don't like the idea that they could train AI off my art so it may replicate something like it. Why automate something so deeply human? We're supposed to automate more mundane tasks so we can focus on art, not the other way around! I also never expected every tech company to suddenly participate in what feels like blatant copyright infringement, I always assumed at least art was safe in their hands.

    Public conversations though? I dunno. I kinda already assume that anything I post is going to be data-mined, so it doesn't feel very different than it was. There's a lot of usefulness that can come from datamining the internet theoretically, but we exist under capitalism, so I imagine it'll be for much more nefarious uses.

  • No matter how I feel about it, it's one of those things I know I will never be able to do a fucking thing about, so all I can do is accept it as the new reality I live in.

  • I think this is inevitable, which is why we (worldwide) need laws requiring that any model that scrapes public data must become open itself as well.

  • I don't like it, as I don't like this technology and I don't like the people behind it. On my personal website I have banned all AI scrapers I can identify in robots.txt, but I don't think they care much.

    I can't be bothered adding a copyright signature in social media, but as far as I'm concerned everything I ever publish is CC BY-NC. AI does not give credit and it is commercial, so that's a problem. And I don't think the fact that something is online gives everyone the automatic right to do whatever the fuck they want with it.
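    For reference, blocking the major AI crawlers in robots.txt looks something like the snippet below. These User-agent tokens are the ones the respective crawlers publicly document; whether each crawler actually honors them is, as noted above, another matter.

```
# Block known AI crawlers site-wide
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```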

  • I'm okay with it as long as it's not locked to the exclusive use of one entity.

  • If I have no other choice, then I'll use my data to degrade AI into an unusable state, or at the very least a state where it's aware that everything it spews out is likely bullshit and ends each response with something like "but what I say likely isn't true, please double-check with these sources..." or something productive that reduces the reliance on AI in general.

  • I mean, I don't really take issue with the "use my comments" part, but I do take issue with the scraping part: there are APIs for getting content, which would be a lot easier on my system, but these bots do it the stupidest way, with many hundreds of requests per hour. I therefore had to put in a system to find and ban them.
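    A ban system like that can be as simple as a sliding-window request counter per client. Here is a minimal sketch; the class name, threshold, and window size are illustrative assumptions, not the commenter's actual setup.

```python
import time
from collections import deque
from typing import Dict, Optional


class RateLimiter:
    """Flag clients whose request rate exceeds a limit within a sliding window."""

    def __init__(self, max_requests: int = 300, window_seconds: float = 3600.0):
        self.max_requests = max_requests
        self.window = window_seconds
        # One deque of request timestamps per client identifier (e.g. IP or UA).
        self.hits: Dict[str, deque] = {}

    def allow(self, client: str, now: Optional[float] = None) -> bool:
        """Record a request from `client`; return False once it exceeds the limit."""
        now = time.monotonic() if now is None else now
        q = self.hits.setdefault(client, deque())
        # Drop timestamps that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        q.append(now)
        return len(q) <= self.max_requests
```

    In practice the `client` key would come from the request's IP address or user-agent string, and a `False` return would feed into a firewall or web-server deny list.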

  • I don’t care. Most of what I post is personal opinion, sarcasm, and/or attempts at humor. It’s nothing I’ve put a significant amount of time or effort into. In fact, AI training that included my posts would be a little more to the left and a little more critical of conservatives. That’s fine with me.

  • Whatever I put on Lemmy or elsewhere on the fediverse implicitly grants everyone a revocable license that allows them to view and replicate the verbatim content, by way of how the fediverse works. You may apply all the rights that e.g. fair use grants you, of course, but that does not grant you the right to perform derivative works; my content must be unaltered.

    When I delete some piece of content, that license is effectively revoked and nobody is allowed to perform the verbatim content any longer. Continuing to do so is a clear copyright violation IMHO but it can be ethically fine in some specific cases (e.g. archival).

    Due to the nature of how the fediverse works, you can't expect this to take effect immediately, but it should take effect at some point, and I should be able to cause it to come into effect immediately by e.g. contacting an instance admin to ask for a removed post of mine to be removed on their instance as well.

  • Seems odd that someone from dbzer0 would be very concerned about data ownership. How come?

    I don't exactly know how Perplexity runs its service. I assume that their AI reacts to such a question by googling the name and then summarizing the results. You certainly received much less info about yourself than you could have gotten via a search engine.

    See also: Forer Effect aka Barnum Effect

    • Seems odd that someone from dbzer0 would be very concerned about data ownership. How come?

      That doesn't make much sense. I created this post to spark a discussion and hear different perspectives on data ownership. While I've shared some initial points, I'm more interested in learning what others think about this topic rather than expressing concerns. Please feel free to share your thoughts – as you already have.

      I don't exactly know how Perplexity runs its service. I assume that their AI reacts to such a question by googling the name and then summarizing the results. You certainly received much less info about yourself than you could have gotten via a search engine.

      Feel free to go back to the post and read the edits. They may help shed some light on this. I also recommend checking Perplexity's official docs.

      • Feel free to go back to the post and read the edits. They may help shed some light on this. I also recommend checking Perplexity’s official docs.

        You're aware that it's in their best interest to make everyone think their """AI""" can execute advanced cognitive tasks, even if it has no ability to do so whatsoever and it's mostly faked?

        Taking what an """AI""" company has to say about their product at face value in this part of the hype cycle is questionable at best.

  • Questions? There are no questions. Just wait until it starts spewing copyrighted contents in violation of the license it's distributed with, then sue.

  • They're not training it; it's basically just a glorified search engine.

    • Not Perplexity specifically; I'm talking about the broader "issue" of data-mining and its implications :)
