AI only does a half-assed job, so you need a real human stepping in to "fix it in post."
We're seeing corporations throw money at AI hand over fist because they want it to replace workers.
We're burning an extra planet's worth of energy for something that still needs human intervention to be usable.
Maybe, just maybe, we could just pay humans a living fucking wage to do the same work to begin with instead of constantly trying to find more and more ways to just not pay people at all.
Like, shocker, if you have a fully staffed human customer service department, you'll actually solve problems for people faster than an AI-staffed one.
The corpos don't care, they're not actually interested in solving our problems. They'll burn the planet to the ground in an effort to avoid paying us a living wage.
From their point of view the goal isn't to abolish human involvement, but to minimise the cost. So if they can do the job at the same quality with a quarter of the personnel through AI assistance for less cost, obviously they're gonna do that.
At the same time, just because humans having crappy jobs is the current way we solve the problem of people getting money, doesn't mean we should keep on doing that. Basic income would be a much nicer solution for that, for example. Try to think a bit less conservatively.
What about the cost to the environment? That cost is just a negative externality to them and you, apparently. Yet I'm the one accused of thinking "conservatively."
Burning ten times as many fossil fuels to "minimise the costs" is literally fucking stupid and short-sighted.
This was pretty clear when observing the output of tldrbot. It would just randomly select paragraphs, ignoring surrounding context, and call it a summary.
The bot demonstrated very well what this article is about. I don't know the internals, but I also can't imagine the bot was using the best and most expensive ways of doing analysis.
It was pretty bad at "getting the point" even when the point was obvious; a better system should be able to manage that. Sometimes the point is more difficult to discern and there has to be some judgement. You can see this in comments sometimes, where people discuss what "the point" was and not just the data. I imagine an AI would have difficulty deciding what is worth summarizing in those situations especially.
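For context, an extractive bot like the one described lifts sentences verbatim instead of paraphrasing. Here's a minimal sketch of that approach (purely hypothetical, since the commenter above notes tldrbot's internals aren't known): score each sentence by word frequency and keep the top few. It has no notion of surrounding context, which produces exactly the "randomly selected paragraphs" failure mode described above.

```python
import re
from collections import Counter

def extractive_summary(text, k=2):
    """Naive extractive summarizer: return the k highest-scoring
    sentences, scored by average word frequency across the text."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    # Average-frequency score so long sentences don't always win;
    # note there is no modeling of discourse or context at all.
    def score(s):
        toks = re.findall(r'[a-z]+', s.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    top = set(sorted(sentences, key=score, reverse=True)[:k])
    # Emit the chosen sentences in their original order.
    return ' '.join(s for s in sentences if s in top)

summary = extractive_summary(
    "Cats sleep a lot. Cats eat fish. Dogs bark loudly.", k=1
)
```

Because every output sentence is copied verbatim, the "summary" can easily skip the actual point of a passage if the key idea is spread across sentences rather than concentrated in one.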
Anecdotally, this was my experience as a student when I tried to use AI to summarize and outline textbook content. The result was almost always incomplete, such that I'd have to have already read the chapter to fill in what the model missed.
I'm not sure how long ago that was, but LLM context sizes have grown exponentially in the past year, from 4k tokens to over a hundred thousand. That doesn't necessarily improve the quality of the output, but you can't expect a model to summarize what it can't hold in memory.