"Sum up the previously described situation from the perspective of someone like Robin Williams' character in Good Will Hunting when he explains what life is really like to Will."
I had the intellectual rug pulled out from under me like that, once. Someone I met minutes before saw deep inside and knew exactly where to push to make everything fall apart. It was brutal, took me days to recover.
I've seen things you people wouldn't believe... Attack ships on fire off the shoulder of Orion... I watched C-beams glitter in the dark near the Tannhäuser Gate... All those moments will be lost in time, like tears in rain...
Your complaint about AI is that it doesn't understand nuanced human condition?
Of course it doesn't. It wasn't designed to! At their core, the LLMs we use right now are nothing more than complicated prediction engines that use man-made algorithms to sound presentable enough to collate information for humans to consume!
I bet you couldn't even articulate the correlation between the scene that you have posted and what AI may or may not be.
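For what it's worth, the "complicated prediction engine" claim above can be made concrete with a toy sketch. This is a minimal bigram model (made up here purely for illustration, nothing like a real transformer): given the previous word, it predicts the most frequent next word seen in training text. Real LLMs use neural networks over tokens, but the core task is the same next-token prediction.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent next word, or None if the word is unseen."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Tiny toy corpus, purely illustrative.
corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # -> "cat" ("cat" follows "the" twice, "mat" once)
```

Scale the counts up to billions of parameters learned from web-scale text and condition on long contexts instead of one word, and you get the general shape of the thing being argued about.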
I blame "AI" grifters who are preying on enthusiasm for the next big "sci-fi breakthrough" we're all hanging on the edge of our seats for. AI pushers came out calling it "AI" when everybody already had a conception of what AI is from media, muddling the point, because the layman can't crack into the "black box" that is machine learning.
The thing that pretty accurately predicts what I'm trying to accomplish when I'm coding, and more often than not generates useful code that comes next, is fundamentally shitty because of what Robin Williams says here?
LLM peddlers have turned the term AI into a synonym for LLM, despite LLMs being as far from anything resembling true AI as ELIZA was.
Which is tragic, because funding and research into this evident dead end is siphoning away time and money that could be spent on actual AI research. And once the LLM bubble bursts, it'll poison the public's opinion of AI (they won't know or care that LLMs have nothing to do with AI) and prevent investment or research into real AI for decades.
I don't personally think there's necessarily anything wrong with calling an LLM an AI. After all, autocorrect could be considered an AI as well. The main issue is that people speak of AI when they actually mean AGI or ASI. That's why they have unrealistic expectations of things like GPT-4.
Whether the LLM route is a dead end, I have no idea. It might be, or it might not. LLMs sometimes make mistakes that are so obvious to us humans that many people dismiss them as fundamentally flawed, but I don't think we're giving them the credit they deserve. Humans make mistakes as well, and if there were a person with an equivalent ability to source expert-level knowledge on such an insanely wide range of topics, it would seem insane to describe them as anything but a genius.
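To the autocorrect point above: even the dumbest corrector is doing a small "intelligent" task. Here's a minimal sketch (hypothetical word list, and real autocorrect also weighs word frequency and context): suggest the dictionary word with the smallest Levenshtein (edit) distance to the typo.

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance (one-row version)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev = dp[0]
        dp[0] = i
        for j, cb in enumerate(b, 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,          # deletion
                        dp[j - 1] + 1,      # insertion
                        prev + (ca != cb))  # substitution (free if chars match)
            prev = cur
    return dp[-1]

def suggest(word, vocab):
    """Return the vocab word closest to the typo by edit distance."""
    return min(vocab, key=lambda w: edit_distance(word, w))

vocab = ["genius", "general", "serious"]  # toy dictionary, made up for the example
print(suggest("genious", vocab))  # -> "genius" (one deletion away)
```

Nobody calls this AGI, and nobody should. But it sits on the same spectrum of "machine makes a useful guess," which is why the label AI isn't crazy for LLMs either.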
I don't believe you, and I don't think you have the proof to back up such a bold claim. In fact, I know you don't... And I very seldom seriously speak in absolutes.