AI computers aren’t selling because users don’t care

One of the mistakes they made with AI was introducing it before it was ready (I’m making a generous assumption by suggesting that “ready” is even possible). It will be extremely difficult for any AI product to shake the reputation that AI is half-baked and makes absurd, nonsensical mistakes.
This is a great example of capitalism working against itself. Investors want a return on their investment now, and advertisers/salespeople made unrealistic claims. AI simply isn’t ready for prime time. Now they’ll be fighting a bad reputation for years. Because of the situation tech companies created for themselves, getting users to trust AI will be an uphill battle.
The battle is easy. Buy out and collude with the competition so the customer has no choice but to purchase an AI device.
Ah, like with the TPM blackbox?
This would only work for a service that customers want or need
Apple Intelligence and the first versions of Gemini are the perfect examples of this.
iOS still doesn’t do what was sold in the ads, almost a full year later.
Edit: also, things like email summaries don’t work, the email categories are awful, notification summaries are straight up unhinged, and I don’t think anyone asked for Image Playground.
Insert 'Full Self Driving' Here.
Also, Outlook's auto alt-text function told me today that a conveyor belt was a picture of someone's screen.
Calling it “Full Self Driving” is such blatant false advertising.
Apple Intelligence and the first versions of Gemini are the perfect examples of this.
Add Amazon's Alexa+ to that list. It's nearly a year overdue and still nowhere in sight.
capitalism working against itself
More like: capitalism reaching its own logical conclusion
I’m making a generous assumption by suggesting that “ready” is even possible
To be honest it feels more and more like this is simply not possible, especially regarding the chatbots. Underneath those are LLMs, which are built by training neural networks, and for the pudding to stick there absolutely needs to be some emergent magic going on where sense spontaneously appears. Because any entity lining up words into sentences will charm unsuspecting folks horribly efficiently, it’s easy to be fooled into believing that’s happened. But whenever in a moment of despair I try to get Copilot to do any sort of task, it becomes abundantly clear it’s unable to reliably respect any form of requirement or directive. It just regurgitates some word soup loosely connected to whatever I’m rambling about. LLMs have been shoehorned into an ill-fitting use case. Their sole proven usefulness so far is fraud.
There was research showing that every linear jump in capabilities needed exponentially more data fed into the models, so it seems likely it isn't going to be possible to get where they want to go.
Do you have any articles on this? I've heard this claim quite a few times, but I'm wondering how they put numbers on the capabilities of those models.
OpenAI admitted as much with o1! They included graphs directly showing gains taking exponential effort.
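To make the shape of that claim concrete, here's a toy sketch (purely illustrative; the log-linear relation and the coefficients are assumptions for readability, not numbers from OpenAI's charts): if a benchmark score grows roughly with the logarithm of the training data, then linear score gains require exponentially more data.

```python
# Toy illustration only: assume score(D) = a * log10(D) + b,
# i.e. the benchmark score grows with the logarithm of training data D.
# Under that assumption, every fixed gain in score multiplies the data needed.

a, b = 10.0, -60.0  # hypothetical coefficients chosen just for readable output

def data_needed(score: float) -> float:
    """Invert score(D) = a*log10(D) + b to get the data required for a target score."""
    return 10 ** ((score - b) / a)

for target in (20, 30, 40, 50):
    print(f"score {target}: ~{data_needed(target):.0e} tokens")
# Each +10 points costs 10x more data: linear gains, exponential cost.
```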
Yeah but first to market is sooooo good for stock price. Then you can sell at the top and gtfo before people find out it's trash
(I’m making a generous assumption by suggesting that “ready” is even possible)
It was ready for some specific purposes, but it is being jammed into everything. The problem is they are marketing it as AGI when it is still in the "random fun, but not expected to be accurate" phase.
Nothing in the foreseeable future is going to live up to the current AI marketing. The desired complexity isn't going to exist in silicon at a reasonable scale.
If they hadn't overpromised, they wouldn't have had mountains of money to burn, so they wouldn't have advanced the technology as much.
Tech giants can't wait decades until the technology is ready; they want their VC money now.
Sure, but if the tech doesn't deliver in the end, all that money is burnt.
If it does deliver it's still oligarchs deciding what tech we get.
Yes. The ones that have power are the ones that decide. And oligarchs by definition have a lot of power.