Generalization Bias in Large Language Model Summarization of Scientific Research
royalsocietypublishing.org
It took me a bit to understand this, but I think it's about the AI drawing conclusions that the research itself isn't making. For example:
Here DeepSeek changes the summary to say that the surgery is associated with lower cancer rates, while the research isn't making that claim and is just presenting data.
I don't really see the issue; can someone please explain it to me?
A common use case for LLMs is summarizing articles that people don't want to bother reading; the study is showing the dangers of doing that.
Yes, I was wondering why it is so dangerous when the summary is so close to the real article.