Most people don’t understand that the only thing it does is ‘put words together that usually go together’. It doesn’t know if something is right or wrong, just if it ‘sounds right’.
Now, if you throw in enough data, it’ll kinda sorta make sense with what it writes. But as soon as you try to verify the things it writes, it falls apart.
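The "words that usually go together" idea can be sketched with a toy bigram model. This is nothing like a real transformer (no neural network, no attention, just next-word frequency counts over a made-up mini corpus I invented for illustration), but it shows how text can be chained together purely by "what usually comes next", with no concept of whether any of it is true:

```python
import random
from collections import defaultdict

# Toy corpus (invented for illustration). A real LLM trains on
# billions of words, but the principle sketched here is the same:
# learn which words tend to follow which.
corpus = (
    "the city has a museum . the city has a stadium . "
    "the museum is old . the stadium is new ."
).split()

# Count, for each word, which words have followed it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Chain words purely by 'what usually comes next' in the corpus.

    There is no notion of right or wrong here, only of what
    'sounds right' given the preceding word.
    """
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # dead end: no word ever followed this one
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

Every output reads locally plausible because each adjacent word pair was seen in the corpus, yet the model can happily emit "the museum is new" even though the corpus never said that. Scale that up and you get fluent, confident text with no built-in fact checking.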
I once asked it to write a small article with a bit of history about my city and five interesting things to visit. In the history bit, it confused two people with similar names who lived 200 years apart. In the 'things to visit', it listed two museums by name that are hundreds of miles away. It invented another museum that does not exist. It also happily told me to visit our Olympic stadium. While we do have a stadium, I can assure you we never hosted the Olympics. I'd remember that, as I'm older than said stadium.
The scary bit is: what it wrote was lovely. If you read it, you’d want to visit for sure. You’d have no clue that it was wholly wrong, because it sounds so confident.
AI has its uses. I've used it to rewrite a text that I already had, and it does fine with tasks like that, because you give it the correct info to work with.
Use the tool appropriately and it’s handy. Use it inappropriately and it’s a fucking menace to society.
ChatGPT is a tool under development and it will definitely improve in the long term. There is no reason to shit on it like that.
Instead, focus on the real problems: AI not being open-source, AI being under the control of a few monopolies, and there being little to no regulation to ensure it develops in a healthy direction.
GPT's natural language processing is extremely helpful for simple questions that have historically been difficult to Google because they aren't a concise concept.
The type of thing that is easy to ask but hard to create a search query for, like tip-of-my-tongue questions.
I wonder where people can go. Wikipedia, maybe. ChatGPT is better than Google for answering most questions where getting the answer wrong won't have catastrophic consequences. It is also a good place to get started in researching something. Unfortunately, most people don't know how to assess the potential problems. Those people will also have trouble if they try Googling the answer, as they will choose some biased information source if it's a controversial topic, usually picking a source that matches their leaning. There aren't too many great sources of information on the internet anymore; it's all tainted by partisans or locked behind paywalls. Even if you could get a free source for studies, many are weighted to favor whatever result the researcher wanted. It's a pretty bleak world out there for good information.
This is a story that's been rotating through the media since ChatGPT was first released.
I have an unpopular opinion about this headline, after watching the media cycle repeatedly downplay or ignore what Alphabet has been doing in response to OpenAI: Google the search engine is not in direct competition with ChatGPT, but Gemini is. Alphabet is smart to keep simple, time-tested search functionality central to Google rather than react strongly and scrap the keyword-based search bar that users understand and are comfortable using, especially older users. And I think most people are starting to discover they have a use for both search and LLM chats.
I think there are two product categories here, which first looked like they were going to converge in 2022-2024, but which are now slowly changing course as customers start to comprehend how both are necessary for different purposes.
When I make chats in ChatGPT or Gemini or Claude etc, I am starting to plan them longitudinally so that I can use them over and over for a specific project or query type.
When I turn to a search bar, it's because I really want an index of specific websites, a proxy between me and whatever weird site has the answer to my specific question. It's not that I want a discussion or a chat about it; I just want Google's card-like results with an index I can read, instead of that website's stylized, animated web design, popups, or malware.
Every time I get sucked into a chat with Bing Copilot (ChatGPT) when I really only had a web search query, I regret wasting my time talking to the LLM. Almost as a reflex, I've started avoiding it for most things now.
This is why so much research has been going into AI lately. The trend is already to not read articles or source material and to base opinions on clickbait headlines, so relying on AI summaries and search results will naturally come next. People will start to assume any generated response from a 'trusted search AI' is true, so there is a ton of value in getting an AI to give truthful and correct responses all of the time, and then being able to edit certain responses to inject whatever 'truth' you want. Then you effectively control what truth is, and can selectively shape public opinion by manipulating what people are told is true. Right now we're also being trained that AI may make things up and not be totally accurate, which gives those running the services a plausible excuse if caught manipulating responses.
I am not looking forward to arguing facts with people citing AI responses as their source of truth. I already know that if I present source material contradicting them, they lack the ability to actually read and absorb the material.