Where was all this coming from? Well, I don’t know what Stern or Esquire’s source was. But I know Navarro-Cardenas’, because she had a follow-up message for critics: “Take it up with Chat GPT.”
The absolute gall of this woman to blame her own negligence and incompetence on a tool she grossly misused.
This is why Melon and the AI chud brigade are so obsessed with having a chatbot (sorry, “AI”) that always agrees with them: a stupid number of people think LLMs are search engines, or worse, search engines but better, some diviner of truth.
In general I agree with the sentiment of the article, but I think the broader issue is media literacy. When the Internet came about, people had similar reservations about the quality of information, and most of us learned in school how to find quality information online.
LLMs are a tool, and people need to learn how to use them correctly and responsibly. I’ve been using Perplexity.AI as a search engine for a while now, and I think they’re taking the right approach. It employs LLMs at different stages to parse your query, perform web searches on your behalf, and summarize findings. It provides in-text citations as well, which is an opportunity for a media-literate person to confirm the validity of anything important.
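The staged approach described above (an LLM rewrites the query, a search step fetches sources, and a summarizer attaches in-text citations) can be sketched roughly like this. Everything here is a stand-in — the function names and stub bodies are my own illustration, not Perplexity's actual API or architecture:

```python
# Hypothetical sketch of a citation-backed search pipeline.
# Each stage is stubbed; a real system would call an LLM and a
# search backend where the comments indicate.

def rewrite_query(user_query: str) -> str:
    # Stage 1: an LLM would normalize/expand the query; stubbed as cleanup.
    return user_query.strip().lower()

def web_search(query: str) -> list[dict]:
    # Stage 2: a real system queries a search index; stubbed with canned hits.
    return [
        {"title": "Pancake basics", "url": "https://example.com/a"},
        {"title": "Griddle temperatures", "url": "https://example.com/b"},
    ]

def summarize(query: str, sources: list[dict]) -> str:
    # Stage 3: an LLM would synthesize the sources; here we just attach
    # numbered in-text citations so a reader can verify each claim.
    body = f"Summary for '{query}' " + "".join(
        f"[{i + 1}]" for i in range(len(sources))
    )
    refs = "\n".join(
        f"[{i + 1}] {s['title']} - {s['url']}" for i, s in enumerate(sources)
    )
    return body + "\n" + refs

answer = summarize(rewrite_query("Pancake Recipe"), web_search("pancake recipe"))
print(answer)
```

The point of the citation stage is exactly what the comment above argues: the numbered references give a media-literate reader a direct path to the underlying sources, rather than asking them to trust the summary on faith.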
Google search results are often completely unrelated, so it's not any better. If the thing I'm looking for is obscure, AI often finds some thread that I can follow, but I always double-check that information.
Know your tool's limits. After hundreds of prompts, I've learned pretty well when the AI is spitting out bullshit answers.
Real people on the internet can be just as wrong and biased, so it's best to find multiple independent sources.
Eh... I got it to find a product that met the specs I was looking for on Amazon when no other search worked. It's certainly a last resort, but it worked. Idk why, but whenever I'm looking to buy anything lately, the only criteria I care about are somehow never documented properly...
I've used it for very, very specific cases. I'm on Kagi, so it's a built-in feature (that isn't intrusive), and it typically generates great answers. That is, unless I'm getting into something obscure. I've used it less than five times, all in all.
Generative AI is a tool: sometimes it's useful, sometimes it's not. If you want a recipe for pancakes, you'll get there a lot quicker using ChatGPT than using Google. It's also worth noting that you can ask tools like ChatGPT for their references.
No. Learn to become media literate. Just as glancing at the preview of the first Google result is not enough, blindly trusting LLMs is a bad idea. And given how shitty Google has become lately, ChatGPT might be the lesser of two evils.
If sites (especially news outlets and scientific sites) were more open, maybe people would have the means to research information. But there's a simultaneous phenomenon happening as the Web is flooded with AI outputs: paywalls. Yeah, I know, "the authors need to get money" (hey, look, a bird flew across the sky carrying some dollar bills — all birds are skilled at something useful to bird society, it's obviously how they eat and survive! After all, we all know that "capitalism" and "market" emerged in the first moments of the Big Bang, together with the four fundamental forces of physics).

Curiously, AI engines are, in practice, "free to use" (of course there are daily limits, but those aren't a barrier the way a paywall is), so what's so different here? The costs exist for both; AI platforms may have even higher costs than news and scientific publication websites, for obvious reasons.

So, while the paywalls try to bring dimes to journalism and science (as if everyone had spare dimes for hundreds or thousands of different subscriptions to sites where information would be scattered, especially with the rising costs of rent, groceries and everything else), the web and its users will keep facing fake news and disinformation, no matter how hard rules and laws crack down. AI slop isn't a cause; it's a consequence.