A new report from plagiarism detector Copyleaks found that 60% of OpenAI's GPT-3.5 outputs contained some form of plagiarism.
Why it matters: Content creators from authors and songwriters to The New York Times are arguing in court that generative AI trained on copyrighted material ends up spitting out exact copies.
The individual GPT-3.5 output with the highest similarity score was in computer science (100%), followed by physics (92%), and psychology (88%).
And that’s why this claim is mostly bullshit. These use cases are all sciences, where the correct solution is usually the same or highly similar no matter who writes it. Small snippets of computer code cannot be copyrighted anyway.
Not surprisingly, softer subjects like “English” and “Theatre” rank extremely low on this scale.
Not to mention that a response "containing" plagiarism is a pretty poorly defined criterion. The system being used here is proprietary so we don't even know how it works.
I went and looked at how low theater and the like actually scored, and it's dramatic:
The lowest similarity scores appeared in theater (0.9%), humanities (2.8%) and English language (5.4%).
So, if the AI gives you a correct answer to a science question, it's “infringing copyright,” and if it spits out a bullshit answer, it's just giving you wrong, unsupported claims.
Right? No doubt that output can be similar to training data, and I would believe that some of it is plagiarism, but plagiarism detectors are infamous among uni students for being completely unreliable and for flagging pronouns, dates and citations. Until someone can go "here's an example of actual plagiarism" (which is obvious when pointed out), these claims make no sense.
Eh, kinda. It’s not like a science paper is just going to be an equation and nothing else. An author’s synthesis of the results is always going to have unique language. And that is even more true for a social science paper.
You can’t write a paper covering scientific topics without plagiarism? A human would be required to. Generative AI should be held to at least as high a standard.
This looks like an ad. They go on about what their proprietary detection method found without any details about how it came to these conclusions or even how they generated the test data. They give 0 actual examples for any of their claims.
Probably very few. The bias for these companies is in false negatives, not false positives, since false positives create controversy when students appeal a ruling.
ChatGPT itself doesn't know where it got the info from, so it makes up links and names - it's a language model, not a search engine.
On the other hand, if you manage to find a reputable source and give it relevant metadata, it can format a nice citation for you, saving you time on that instead.
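Purely to illustrate that last trick (the metadata and model name below are made up, not anything from the article), "format a citation from details I already verified" looks roughly like this with the OpenAI Python client:

```python
# Ask a chat model to format an APA citation from metadata you already verified yourself.
# Sketch only: the metadata is fake and the model name is just an example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

metadata = {
    "authors": "Smith, J.; Lee, K.",
    "title": "An Example Paper on Language Models",
    "journal": "Journal of Hypothetical Studies",
    "year": 2021,
    "doi": "10.0000/example.doi",
}

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": f"Format this as a single APA citation and output nothing else: {metadata}",
    }],
)
print(resp.choices[0].message.content)
```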
Copilot is GPT under the hood; it just starts with a search step that finds (hopefully) relevant content and then passes that to GPT for summarization.
It depends on how they're using it behind the scenes. Chatbots like ChatGPT can't cite sources, because they are just generating text on the fly. However, some setups use RAG (Retrieval-Augmented Generation): they run a similarity search to find relevant sources first, then use those sources to augment the answer the model generates.
That being said, there are pros and cons to both approaches.
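If it helps, here's a toy sketch of the RAG flow in Python. The tiny DOCS list stands in for a real search index or vector store, and the model name is just a placeholder:

```python
# Toy RAG flow: retrieve relevant snippets first, then let the model answer from them.
# Purely illustrative; DOCS stands in for a real search index or vector database.
from openai import OpenAI

DOCS = [
    "Copyleaks sells a plagiarism/similarity detection service.",
    "RAG systems fetch source documents before generating an answer.",
    "GPT-3.5 is a chat-tuned language model from OpenAI.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Stand-in retriever: rank documents by naive word overlap with the query.
    words = set(query.lower().split())
    return sorted(DOCS, key=lambda d: len(words & set(d.lower().split())), reverse=True)[:k]

def answer(query: str) -> str:
    sources = retrieve(query)
    prompt = (
        "Answer using only these sources, and say which ones you used:\n"
        + "\n".join(f"- {s}" for s in sources)
        + f"\n\nQuestion: {query}"
    )
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(answer("How does RAG find its sources?"))
```

The upside is that the answer is grounded in documents you can actually link to; the downside is that the answer is only as good as whatever the retrieval step happens to dig up.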
No. It's not really clear what LLMs do, but it certainly depends on context.
What they fundamentally do is continue a text. That's what they were originally trained to do. Then they were fine-tuned to continue a chat log or respond to an instruction. To be able to do that, they have learned a lot. Unfortunately, we do not know what.
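If you want to see the "continue a text" behaviour directly, an open base model like GPT-2 makes it obvious. This uses the Hugging Face transformers library and is only an illustration, not what OpenAI actually runs:

```python
# A base language model just predicts a plausible continuation of the prompt.
# Nothing here knows or records where the training text originally came from.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

out = generator("The Copyleaks report claims that", max_new_tokens=25, do_sample=True)
print(out[0]["generated_text"])  # a plausible continuation, not a sourced statement
```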
If you ask for a summary of some text, it will give you one, regardless of whether the text even exists.
The summary could be one written by a human that it has memorized. Or it could be complete nonsense that it's making up on the fly. You never know.
One AI company throwing accusations at another AI company, and the evidence on both sides amounts to pointing fingers at their own black-box LLMs like they're magic...
ok, so? plagiarism is a meaningless, tenuous call that can be avoided simply with some quotation marks and a link. isn't this supposed to unite humanity's knowledge rather than dither around over meaningless technicalities? it's not writing a fucking paper (and if you copy and paste it for a paper, that's already plagiarism anyway)
if it had to remember a citation for everything it knew, it'd only be able to remember half as much information because its memory would be cluttered with useless citations that you could easily find by googling if you really cared to know. most people just want quick facts