AI computers aren’t selling because users don’t care
No thanks. I’m perfectly capable of coming up with incorrect answers on my own.
you're right tho
Even non tech people I talk to know AI is bad because the companies are pushing it so hard. They intuit that if the product was good, they wouldn't be giving it away, much less begging you to use it.
You're right - and even if the user is not conscious of this observation, many are subconsciously behaving in accordance with it. Having AI shoved into everything is offputting.
Customers don't want AI; only the corporation heads seem obsessed with it.
It's partly that and partly a mad dash for market share in case they get it to work usefully. Although this is kind of pointless, because AI isn't very sticky. There's not much to keep you from using another company's AI service. And only the early-adopter nerds are figuring out how to run it on their own hardware.
One of the mistakes they made with AI was introducing it before it was ready (I’m making a generous assumption by suggesting that “ready” is even possible). It will be extremely difficult for any AI product to shake the reputation that AI is half-baked and makes absurd, nonsensical mistakes.
This is a great example of capitalism working against itself. Investors want a return on their investment now, and advertisers/salespeople made unrealistic claims. AI simply isn’t ready for prime time. Now they’ll be fighting a bad reputation for years. Because of the situation tech companies created for themselves, getting users to trust AI will be an uphill battle.
Apple Intelligence and the first versions of Gemini are the perfect examples of this.
iOS still doesn’t do what was sold in the ads, almost a full year later.
Edit: also things like email summary don’t work, the email categories are awful, notification summaries are straight up unhinged, and I don’t think anyone asked for image playground.
Insert 'Full Self Driving' Here.
Also, outlook's auto alt text function told me that a conveyor belt was a picture of someone's screen today.
Apple Intelligence and the first versions of Gemini are the perfect examples of this.
Add Amazon's Alexa+ to that list. It's nearly a year overdue and still nowhere in sight.
capitalism working against itself
More like: capitalism reaching its own logical conclusion
(I’m making a generous assumption by suggesting that “ready” is even possible)
It was ready for some specific purposes, but it is being jammed into everything. The problem is they are marketing it as AGI when it is still at the "random fun but not expected to be accurate" phase.
Nothing in the foreseeable future will live up to the current marketing for AI. The desired complexity isn't going to exist in silicon at a reasonable scale.
I’m making a generous assumption by suggesting that “ready” is even possible
To be honest it feels more and more like this is simply not possible, especially regarding the chatbots. Under those are LLMs, which are built by training neural networks, and for the pudding to stick there absolutely needs to be this emergent magic going on where sense spontaneously generates. Because any entity lining up words into sentences will charm unsuspecting folks horribly efficiently, it's easy to be fooled into believing it's happened. But whenever in a moment of despair I try and get Copilot to do any sort of task, it becomes abundantly clear it's unable to reliably respect any form of requirement or directive. It just regurgitates some word soup loosely connected to whatever I'm rambling about. LLMs have been shoehorned into an ill-fitting use case. Their sole proven usefulness so far is fraud.
Yeah but first to market is sooooo good for stock price. Then you can sell at the top and gtfo before people find out it's trash
The battle is easy. Buy out and collude with the competition so the customer has no choice but to purchase an AI device.
This would only work for a service that customers want or need
Ah, like with the TPM blackbox?
If they hadn't over-promised, they wouldn't have had mountains of money to burn, so they wouldn't have advanced the technology as much.
Tech giants can't wait decades until the technology is ready, they want their VC money now.
I think people care.
They care so much they actively avoid them.
Oh we care alright. We care about keeping it OUT of our FUCKING LIVES.
AI is going to be this eras Betamax, HD-Dvd, or 3d TV glasses. It doesn't do what was promised and nobody gives a shit.
Betamax had better image and sound, but was limited by running time and then VHS doubled down with even lower quality to increase how many hours would fit on a tape. VHS was simply more convenient without being that much lower quality for normal tape length.
HD-DVD was comparable to BluRay and just happened to lose out because the industry won't allow two similar technologies to exist at the same time.
Neither failed to do what they promised. They were both perfectly fine technologies that lost in a competition that only allows a single winner.
BluRay was slightly better if I recall correctly. With the rise in higher definition televisions, people wanted to max out the quality possible, even if most people (still) can’t tell the difference
Afaik betamax did not have any porn content, which might have contributed to the sale of VHS systems.
Dude don’t throw Betamax in there, that was a better product than the VHS. AI is just ass.
I was just about to mention porn and how each new format of the past came down to that very same factor.
If AI computers were incredible at making AI porn I bet you they'd be selling a lot better haha
Betamax actually found use in television broadcasting until the switch to HDTV occurred in 2009.
No, I'm sorry. It is very useful and isn't going away. This thread is either full of Luddites or disingenuous people.
nobody asked you to post in this thread. you came and posted this shit in here because the thread is very popular, because lots and lots of people correctly fucking hate generative AI
so I guess please enjoy being the only “non-disingenuous” bootlicker you know outside of work, where everyone’s required (under implicit threat to their livelihood) to love this shitty fucking technology
but most of all: don’t fucking come back, none of us Luddites need your mid ass
@blarth @TheThrillOfTime huh. You could totally name at least one use case then, huh?
You only didn't because it's so blindingly obvious
(It's BS)
Also, learn about Luddites, man
I have friends who are computer engineers and they say that it does a pretty good job of generating code, but that's not a general population use case. For most people, AI is a nearly useless product. It makes Google searches worse. It makes your phone voice assistant worse. It's not as good as human artists. And it's mostly used to create dumbass posts on Reddit to farm engagement. In my life, AI has not made anything better.
Maybe I'm just getting old, but I honestly can't think of any practical use case for AI in my day-to-day routine.
ML algorithms are just fancy statistics machines, and to that end, I can see plenty of research and industry applications where large datasets need to be assessed (weather, medicine, ...) with human oversight.
But for me in my day to day?
I don't need a statistics bot making decisions for me at work, because if it was that easy I wouldn't be getting paid to do it.
I don't need a giant calculator telling me when to eat or sleep or what game to play.
I don't need a Roomba with a graphics card automatically replying to my text messages.
Handing over my entire life's data just so a ML algorithm might be able to tell me what that one website I visited 3 years ago that sold kangaroo testicles was isn't a filing system. There's nothing I care about losing enough to go the effort of setting up copilot, but not enough to just, you know, bookmark it, or save it with a clear enough file name.
Long rant, but really, what does copilot actually do for me?
Our boss all but ordered us to have IT set this shit up on our PCs. So far I've been stalling, but I don't know how long I can keep doing it.
Tell your boss you talked to legal and they caution that all copilot data is potentially discoverable.
Set it up. People have to find out by themselves.
Same here, I mostly don't even use it on the phone. My bro is into it though, thinking AI-generated pictures are good.
Before ChatGPT was invented, everyone kind of liked how you could type in "bird" into Google Photos, and it would show you some of your photos that had birds.
The only feature that actually seems useful for on-device AI is voice to text that doesn't need an Internet connection.
I use it to speed up my work.
For example, I can give it a database schema and ask it for what I need to achieve and most of the time it will throw out a pretty good approximation or even get it right on the first go, depending on complexity and how well I phrase the request. I could write these myself, of course, but not in 2 seconds.
Same with text formatting, for example. I regularly need to format long strings in specific ways, adding brackets and changing upper/lower capitalization. It does it in a second, and really well.
Then there's just convenience things. At what date and time will something end if it starts in two weeks and takes 400h to do? There's tools for that, or I could figure it out myself, but I mean the AI is just there and does it in a sec...
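For what it's worth, the duration question in that last example is a one-liner in any scripting language; a minimal Python sketch (the start date here is made up for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical task: starts two weeks from June 1, 09:00, and takes 400 hours.
start = datetime(2025, 6, 1, 9, 0) + timedelta(weeks=2)
end = start + timedelta(hours=400)
print(end)  # 2025-07-02 01:00:00
```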
it’s really embarrassing when the promptfans come here to brag about how they’re using the technology that’s burning the earth and it’s just basic editor shit they never learned. and then you watch these fuckers “work” and it’s miserably slow cause they’re prompting the piece of shit model in English, waiting for the cloud service to burn enough methane to generate a response, correcting the output and re-prompting, all to do the same task that’s just a fucking key combo.
Same with text formatting, for example. I regularly need to format long strings in specific ways, adding brackets and changing upper/lower capitalization. It does it in a second, and really well.
how in fuck do you work with strings and have this shit not be muscle memory or an editor macro? oh yeah, by giving the fuck up.
presumably everyone who has to work with you spits in your coffee/tea, too?
adding brackets and changing upper/lower capitalization
I have used a system wide service in macOS for that for decades by now.
changing upper/lower capitalization
That's literally a built-in VSCode command my dude, it does it in milliseconds and doesn't require switching a window or even a conscious thought from you
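The formatting task being described really is a few lines in any language; a toy Python sketch (the exact bracket/case convention is assumed, since the commenter didn't specify one):

```python
def fmt(s):
    # Toy version of the "add brackets, fix capitalization" task:
    # uppercase the string and wrap it in square brackets.
    return f"[{s.upper()}]"

print(fmt("order id 42"))  # [ORDER ID 42]
```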
Gotta be real, LLMs for queries makes me uneasy. We're already in a place where data modeling isn't as common and people don't put indexes or relationships between tables (and some tools didn't really support those either), they might be alright at describing tables (Databricks has it baked in for better or worse for example, it's usually pretty good at a quick summary of what a table is for), throwing an LLM on that doesn't really inspire confidence.
If your data model is highly normalised, with fks everywhere, good naming and well documented, yeah totally I could see that helping, but if that's the case you already have good governance practices (which all ML tools benefit from AFAIK). Without that, I'm totally dreading the queries, people already are totally capable of generating stuff that gives DBAs a headache, simple cases yeah maybe, but complex queries idk I'm not sold.
Data understanding is part of the job anyhow, that's largely conceptual which maybe LLMs could work as an extension for, but I really wouldn't trust it to generate full on queries in most of the environments I've seen, data is overwhelmingly super messy and orgs don't love putting effort towards governance.
The first two examples I really like, since you're able to verify them easily before using them, but for the math one, how do you know it gave you the right answer?
I use it to parse log files, compare logs from successful and failed requests and that sort of stuff.
How about real-time subtitles on movies in any language you want that are always synced?
VLC is working on that with the use of LLMs
I tried feeding Japanese audio to an LLM to generate English subs and it started translating silence and music as requests to donate to anime fansubbers.
No, really. Fansubbed anime would put their donation message over the intro music or when there wasn't any speech to sub and the LLM learned that.
We've had speech to text since the 90s. Current iterations have improved, like most technology has improved since the 90s. But, no, I wouldn't buy a new computer with glaring privacy concerns for real time subtitles in movies.
You're thinking too small. AI could automatically dub the entire movie while mimicking the actors' voices and simultaneously moving their lips and mouths to form the words correctly.
It would just take your daily home power usage to do a single 2hr movie.
Apparently it's useful for extracting information out of a text into a format you specify. A friend is using it to extract transactions out of 500-year-old texts. However, to get rid of hallucinations the temperature needs to be 0. So the only way is to self-host.
Setting the temperature to 0 doesn't get rid of hallucinations.
It might slightly increase accuracy, but it's still going to go wrong.
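For the self-hosting route mentioned above, a request to a local Ollama server might look like the sketch below. The model name and prompt are placeholders; and as the reply notes, temperature 0 only makes decoding deterministic, it doesn't make the output correct:

```python
import json

# Request body for Ollama's /api/generate endpoint (default: localhost:11434).
# "llama3" is a placeholder; swap in whatever model you have pulled locally.
payload = {
    "model": "llama3",
    "prompt": "Extract each transaction (date, parties, amount) from the text "
              "below and return them as a JSON list:\n<historical text here>",
    "stream": False,
    "options": {"temperature": 0},  # greedy decoding: reproducible, not infallible
}
body = json.dumps(payload)
```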
Well, LLMs are capable (but hallucination-prone) and cost an absolute fuckton of energy. There have been purpose-trained, efficient ML models that we've used for years. Document understanding and computer vision are great; just don't use an LLM for them.
They're great for document management. You can let it build indices, locally on your machine with no internet connection. Then when you want to find things you can ask it in human terms. I've got a few gb of documents and finding things is a bitch - I'm actually waiting on the miniforums a1 pro whatever the fuck to be released with an option to buy it without windows (because fuck m$) to do exactly this for our home documents.
a local search engine but shitty, stochastic, and needs way too much compute for “a few gb of documents”, got it, thanks for chiming in
Offline indexing has been working just fine for me for years. I don't think I've ever needed to search for something esoteric like "the report with the blue header and the photo of 3 goats having an orgy", if I really can't remember the file name, or what it's associated with in my filing system, I can still remember some key words from the text.
Better indexing / automatic tagging of my photos could be nice, but that's a rare occurrence, not a "I NEED a button for this POS on my keyboard and also want it always listening to everything I do" kind of situation
Reducing computer performance:
Turbo button 🤝 AI button
now that you mention it, kinda surprised I haven't ever seen a spate of custom 3D-printed turbo buttons from overclocker circles
That's not fair! I care! A lot!
Just had to buy a new laptop for new place of employment. It took real time, effort, and care, but I've finally found a recent laptop matching my hardware requirements and sense of aesthetics at a reasonable price, without that hideous copilot button :)
quite annoyed that the Snapdragon laptops are bootlocked cos they'd make great Linux boxes
Which laptop did you buy if you don't mind sharing?
Imagine that: a fledgling new technology ham-fistedly inserted into every part of the user experience, offering meager functionality in exchange for the most aggressive data-privacy invasion ever attempted at this scale, and no one likes it.
WTF is an AI computer? Is that some marketing bullshit?
afaict they're computers with a GPU that has some hardware dedicated to the kind of matrix multiplication common in inference in current neural networks. Pure marketing BS, because most GPUs come with that these days, and some will still not be powerful enough to be useful.
This comment is the most important one in this thread. Laptops already had GPUs. Does the copilot button actually result in you conversing with an LLM locally, or is inference done in the cloud? If the latter, it's even more useless.
@Matriks404 @dgerard got it in one! It's MS's marketing campaign for PCs with a certain amount of "AI" FLOPS
IDK if the double pun was intended, but FLOPS is a measure of how many (floating point) operations a computer can perform per second.
"Y2k ready" vibes.
Yes.
Y'all remember when 3D TVs were going to be revolutionary?
A friend of mine is a streamer. On his discord, the topic of the Switch 2 came up, and one of his fans stated their desire for it to support 3D TV. Rather than saying my gut reaction -- "are you crazy?" -- I simply asked why. I consider it a great moment of personal self control.
I mean the thought of big screen 3ds emulation would be pretty fun, but yeah that technology died a decade ago. Thats like asking why the Switch 2 doesn't have a slot for SNES carts!
It's not care. Its want. We don't want AI.
FR I think more people actively dislike it, which is a form of care.
Depends on the implementation.
Just about everyone I know loves how iPhones can take a picture and readily identify a plant or animal. That's actually delightful. Some AI tech is great.
Now put an LLM chatbox where people expect a search box, and see what happens... yeah that shit sucks.
Whenever I ask random people who are not on IT, they either don't know about it or they love it.
Speak for yourself.
Google, Apple, Microsoft, Nvidia and everyone else are hyping up AI. Consumers aren't really seeing much benefit from making everything AI-ified. Executives are raving over it, but maybe don't realize that people outside of the C-suite aren't that excited? Having it shoved in our faces constantly, or crammed in wherever companies hope they can save money, is not helping either.
It's FOMO amplified by capitalistic competition. No company wants to be the one left behind. I guarantee Google, Meta and even OpenAI know the limitations of their products. They don't care, they just want to be at least as good as their competitors, because they assume at some point one of them will reach "good enough." And at that moment, if they're not in position to grab market share, they'll lose a once-in-a-generation chance for billions or trillions of dollars in value.
We're the casualties, because the people in the middle - companies with no AI but whose C-suite buys into the hype - demand we use unworkable products because they're too willfully ignorant to know they're not panaceas to whatever is bothering those C-suite execs at the moment.
Quarterly Driven Development
My problem is that it's not that fucking useful. I got the Pixel 9 specifically because of its advertised AI chip for the assistant and I swear it's just gotten worse since the Pixel 7. I used to be able to ask Google anything through the assistant, and now 90% of my questions are answered with "can't find the information."
They also advertised (or at least heavily alluded to) the use of the AI chip when you are in low network areas but it works just as good outside of 4g+ coverage as it ever did without the stupid chip.
Whats the point of adding AI branded nonsense if there's no practical use for it. And that doesn't even start to cover the issues with AI's reliability as a source of information. Garbage in = garbage out.
I need to unlock my phone for the Gemini bs to skip or pause music.
I didn't get a Pixel for that reason after my Pixel 5a died. The Exynos chip is significantly weaker than other flagship phones', and they've sacrificed battery capacity/efficiency since the 5a (which was a very defective phone) just to prop up AI.
We know Google was saving money by not using Qualcomm/Snapdragon chips, which most others are using. AI is just their excuse so they can put less effort into making a quality product.
When Gemini can find the information, they've added flowery "social" bullshit before, in the middle of, and after the information I asked for, wasting my time.
I was looking at new phones and basically every one was advertising their AI assistant. Are any of them better than the digital assistants from 2016?
AI on phones peaked with MS Cortana on W10 Mobile circa 2014. "Remind me to jack off when I'm home." And it fucking did what I wanted. I didn't even have to say words, I could type it into a text box... it also worked offline.
Bad news for people who use google: they've removed the same feature, so their assistant is more useless than Cortana a decade ago (only a mild exaggeration)
Seriously missed an opportunity to bring that back as their agent.
Legitimately though, Cortana was pretty great. There was a feature to help plan commutes (before I went more or less full remote); all it really did was watch traffic and adjust a suggested time to depart, but it was pretty nice.
I say it every time someone mentions WP7/8/10: those Lumia devices were fantastic and I totally miss mine. The 1020 had a fantastic camera on it, especially for the early 2010s.
I loved my Lumia. I have the windows phone launcher on my phone currently haha
As technology advanced, humans grew accustomed to relying on the machines.
have you seen the new samsung tech? It literally sucks dick
it’s really weird that this turned into a tantrum where you tried to report other users for their jokes???
the fuck? please go be weird somewhere else
Comment removed for being weird (derogatory). I refrained just barely from hitting the "ban from community" button on the slim chance it was a badly misfired joke from a person who can otherwise behave themself, but I won't object if any other mod goes ahead with the banhammer.
Fuck AI
I would actively avoid the extra hassle of an AI computer.
The only real purpose of AI is to get sweet VC money. Beyond that...
The fuck does Microsoft need VC money for?
They don't, its data mining.
I care. I care enough to crater copilot.
Do they care? No! Will they push more AI? Yes! Will they listen to the consumers? I don't think so.
Same thing happens with lots of products over the years. Companies push new stuff that we don't want, and a year later it becomes a regular thing! They push AI day by day, from website AI chat help to in-app AI assistants. Do consumers like it? No, but you're still gonna find it everywhere! And now they push it in computers, and look what happens! No sales!
Call me crazy, but at some point, they need to look at their data or their consumers and do the right thing.
It's maddening that they did actually take away the headphone jack from all modern phones and there's nothing we can do about it even though it objectively sucks
But there's no space on the new thin phones.
STFU yes there is. Gimme my 3.5mm.
most, Sony still has them
there’s nothing we can do about it
Outright rejection of their shit, I won't buy new smartphones from them. Currently using a dumbphone although the case is breaking and they don't make this one any more. Nokia could work but costs quite a bit tbh. Getting rid of the phone entirely is tempting.
If I ever buy a smartphone again it will be the cheapest second hand thing I can find. Maybe don't even take it out the house, it can stay at home like a landline and will be restricted to at most LAN connections only.
Does anyone sell a case with a built-in USB DAC and 3.5mm jack?
You can still buy phones with a headphone jack. It's just that most people buy the one without because most people have a wireless headset they use with the phone.
It's SUPER simple: if something new comes along and sells, it will become the standard. If it doesn't sell, it won't.
Removing the headphone jack allowed manufacturers to make phones thinner/lighter/cheaper/whatever and people didn't vote against it, therefore it stays.
Microsoft pushing a feature that most users will never use or care about? Never!
Laughs in Window 8 optimized for touchscreens
It's because they're looking at data, and a lot of you forget that. They don't care if realistically everyone hates it, as long as the data says everyone would use and benefit from it. Why is this so much more important? If you look at the marketing behind AI: they faked this entire industry by showing companies the "right" data to get them to back it up, but it's just manipulation from the industry to make something profitable, like NFTs.
The average technical person realises ai is shit.
The average non-technical person doesn't need an ai computer, because chatgpt is free.
Not exactly true on the 2nd one. ChatGPT existing isn't the reason an average person believes they don't need AI. They know AI is shit from the get-go. It doesn't take a programming genius to realize it, when you know that corporations are the only ones obsessed with it and it hasn't produced any valuable product.
And then they've had to deal with many AI systems, like chat boxes or Google searches, especially on Reddit.
If I want AI I have a multitude of options. It's baked into my editors and easily available on the web. I just paste some crap into a text box and we're off to the races.
I don't want it in my OS. I don't want it embedded in my phone. I'll keep disabling it as long as that is an option.
Google, Facebook, etc. have been burning money to gain market share and "good will" from users knowing that when the money faucet stopped or if they found a way to make money, they'd abuse their market share and squeeze their users for profit.
Once interest rates increased and the VC infinite money glitch went away (borrow at low interest rates, gamble on companies, repeat), the masks came off and the screws started turning, hard. Anything they can do to monetize anyone else involved, they're trying.
The same story has been happening with AI but without the infinite money glitch - just investors desperate for a good bet getting hyped to hell and back. They need adoption and they need business to become dependent on their product. Each of these companies are basically billions in the hole on AI.
Users, especially technical users, should know that not only is the product failing to live up to the hype but that embracing AI is basically turning the other cheek for these companies to have their way with your wallet even faster and more aggressively than they already are with everything else they've given away.
Can the NPU be used for practical purposes other than generative AI? If not, I don't need it.
Most features are relabelled years-old shit... Google Now on Tap is now Gemini screen search.
Things like chatbots have gotten better but bleh, I dont want to give up my privacy for this shit
Fuck the ai os wave. Do not force that shit into my life. I'm fine with using ai, but ai is not gonna stare over my shoulder. I decide when I use it. Never going back from Linux. Still stuck with a samsung phone but we'll see
Even if Microsoft, Apple, et al, drop the 'inescapable integrated AI assistant' bullshit in their OS' it's almost a guarantee that they will forever reside in some hidden background service quietly sending off reports. The temptation for them is too great, and the legal consequences are nil.
you are right.... Linux for the win!
I switched to GrapheneOS with zero regrets. Mainly because Google is deeply embedded in the Android ecosystem. There was no way I was going to add Samsung AI crap on top of that mess.
I have a Samsung and it doesn't have AI features (except for Google's, but I can opt out of those).
I don't even want Windows 11 specifically because of AI. It's intrusive, unnecessary, and the average person has no use for it. The only time I have used AI for anything productive was when I needed to ask some very obscure questions for Linux since I'm trying to get rid of Windows entirely.
Year of Linux
I am prepping my laptop to have it installed. I think Mint will be my distro.
Great distro from what I hear. Pop!_OS made it easy for me to play Steam games quickly. You can do the same things on any distro you pick anyway. I just don't mess with Arch. I don't got time for that lol.
Don’t care AND are not stupid.
Nein!
Doch!
Ooh!
They are useless... Copilot isn't worth even $50 as an upgrade.
Oh hey, I got one of those buttons on my new laptop that literally never booted into Windows. Pressing it Linux says it's "Meta + CTRL" (I think), which is pretty useful. Got it for the good price/performance/build-quality ratio.
Didn't yet find a good use for that fancy NPU, the XDNA driver just arrived a month ago or so. Perhaps for use with Upscayl or something actually useful.
Technically it's Meta+Shift+F23
Now it's up to the Linux desktop environments for determining what to do if the new Copilot key is pressed.
launch ELIZA obv
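If a desktop environment exposes that chord, rebinding it is a one-liner. An untested Sway config sketch, assuming the key really arrives as Meta+Shift+F23 (keysym handling for F23 can vary by keyboard firmware and layout; `foot` is just an arbitrary example command):

```
# Sway config fragment: repurpose the Copilot key's Meta+Shift+F23 chord.
bindsym Mod4+Shift+F23 exec foot
```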
These "AI Computers" are a solution looking for a problem. The marketing people naming these "AI" computers think that AI is just some magic fairy dust term you can add to a product and it will increase demand.
What's the "killer features" of these new laptops, and what % price increase is it worth?
What’s the “killer features” of these new laptops
LLM
and what % price increase is it worth?
negative eighty, tops
Gen AI should be private, secure, local, and easier for its users to train to fit their own needs. The closest thing to this at the moment seems to be Kobold.
Nah we're up to running Qwen3 and Deepseek r1 locally with accessible hardware at this point so we have access to what you describe. Ollama is the app.
The problem continues to be that LLMs are not suitable for many applications, and where they are useful, they are sloppy and inconsistent.
My laptop is one of the ones they are talking about in the article. It has an AMD NPU, it's a 780M APU that also runs games about as well as an older budget graphics card. It handles running local models really well for its size and power draw. Running local models is still lame as hell, not how I end up utilizing the hardware. 😑
Does Ollama accept custom parameters now?
I wasn't talking about their effectiveness though. Yeah, they're sloppy as hell, but I'd rather trust a sloppy tool I set up at home and use myself than having someone I don't trust at home using their sloppy tools, tinkering with my property without permission when I'm not looking and changing their terms and prices each day.
But granted, your point is a really good one. These AI-ready laptops don't give the bang for your buck you'd expect. We're all better off taking good care of our older hardware and waiting longer for components that are a true improvement to replace them.
Hell yeah
How did this thread blow up so much?
sometimes a thread breaks containment, the "all" algorithm feeds it to even more people, and we see that Lemmy really does replace Reddit
I imagine it’s paying a premium for issue
What the hell is an AI computer? Like one with a beefy GPU?
"coprocessors, but matrix-math specific"
the various *PUs are "things that help a lot of ML models run faster" sidecar chipset designs
it's actually kinda hard to get concrete details, afaict. I've had a bit of a look around for silicon teardowns and shit, and haven't really found any good ones yet
What is even the point of an AI coprocessor for an end user (excluding ML devs)? Most of the AI features run in the cloud and even if they could run locally, companies are very happy to ask you rent for services and keep you vendor locked in.
Please stop shoving AI into everything; please give us an opt-out from the AI icons and stuff /srs
even better: opt-in, over opt-out
none of these fuckers want to go for that, because it'd make their darling shitpile look oh so bad
but it's also a stark thing: ~20y ago tech actually asked first, and (mostly) fucked off if you told it to
the way a lot of tech companies approach consent (and how users have been primed following that) right now is some mad toxic fucked up bullshit
Yep
Local AI kind of sucks right now, and everything is massively over-branded as "AI ready" these days.
There aren’t a lot of compelling local use cases, and the memory constraints of local mean you end up with fairly weak models.
You need a high-end, high-memory local setup to get decent token rates, and I’m finding right now that 30-70B models are the minimum viable size.
That doesn’t compare with the speed of online models running on GPUs that cost more than luxury cars.
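To put rough numbers on why model size is the sticking point: a quick sketch estimating the memory footprint of a quantized model's weights, assuming weights dominate (KV cache and runtime overhead ignored):

```python
# Rough estimate of the memory a local LLM's weights occupy.
# Assumes weights dominate; KV cache and overhead are ignored.
def model_memory_gib(params_billion: float, bits_per_weight: int) -> float:
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1024**3

# A 70B model at 4-bit quantization: ~32.6 GiB just for weights,
# already past the VRAM of nearly every consumer GPU.
print(round(model_memory_gib(70, 4), 1))
```

Which is why "AI ready" laptops with 16GB of RAM top out at small models, and the interesting 70B-class stuff needs a high-memory setup.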
AI bro here. The reason their shit ain't selling is because it's useless for any actual AI application. AI runs on GPUs; even an "AI CPU" will be so much slower than what an Nvidia GPU can do. Of course no one buys it. Nvidia's GPUs still sell very well, and not just because of the gamers.
ah yes the only way to make LLMs, a technology built on plagiarism with no known use case, “useful for any actual ai application” is to throw a shitload of money at nvidia. weird how that works!
A lot of these systems are silly because they don't have a lot of RAM, and things don't begin to get interesting with LLMs until you can run 70B and above
The Mac Studio has seemed like an affordable way to run 200B+ models, mainly due to the unified memory architecture (compare getting 512GB of RAM in a Mac Studio to building a machine with enough GPU to get there)
If you look, the industry in general is starting to move towards that sort of design now
The Framework Desktop, for instance, can be configured with 128GB of RAM ($2k) and should be good for handling 70B models while maintaining something that looks like efficiency.
You will not train or refine models with these setups (I think you would still benefit from the raw power GPUs offer), but the main sticking point in running local models has been VRAM and how much it costs to get that from AMD / Nvidia
That said, I only care about all of this because I mess around with a lot of RAG things. I am not a typical consumer
I will not go into ethics.
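For context on the RAG tinkering mentioned above: the retrieval step boils down to "find the stored chunk most similar to the query". A minimal sketch using toy bag-of-words vectors instead of real embeddings (hypothetical documents and query; real pipelines use embedding models and a vector store):

```python
# Minimal sketch of RAG's retrieval step: score stored chunks against a
# query by cosine similarity, using toy word-count vectors.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = {
    "npu": "npu coprocessor accelerates matrix math for local models",
    "ram": "unified memory lets large models fit without discrete vram",
}
query = Counter("which hardware runs large local models".split())
best = max(docs, key=lambda k: cosine(Counter(docs[k].split()), query))
print(best)  # prints "npu"
```

The retrieved chunk then gets pasted into the LLM prompt so the model can answer from it instead of from its weights; that retrieval half is cheap and runs fine on any hardware.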
then stop wasting our fucking time
I didn’t read the rest of your post but it’s vaguely LLM-shaped so off you fuck
Math Yes, it is fairly good at math. You can’t trust it,
I can trust a calculator that uses so little electricity that it works with ambient light. Why would I want to use an untrustworthy AI?
Fucking blood diamonds that don't even cut glass.
me: writing software for a calculator I don’t even own yet so I can maybe hopefully have a pocket version of one of the only good products of the first AI boom
them: Math Yes, it is fairly good at math. You can’t trust it,
Your first line is a confession that you are a bad person.
this post has been deleted as a prion
The thing LLMs can effectively replace is Google search (and other search engines). Microsoft is shoving copilot down your throat because shoving Bing up your ass was harder when it was already full of other shit.
we’ve all seen how well LLMs replace Google search and the product’s fucking unusable
good to see the quality posts from dbzero are back tho
The thing LLMs can effectively replace is Google search (and other search engines).
This statement is true on zero known planets.
I hear the youths are using ChatGPT like we used Google when we were their age, but they must be way better than I am, because when I ask it things I don't already know, sometimes it gives me truth and sometimes lies.
It's better than Google for a lot of searches. That is, until whatever AI platform you are using enshittifies.
The only issue here is that there is no really useful, ubiquitous feature yet. Once that comes, people will not care about any security issues or any other argument against it. It's coming for sure. Maybe they need the Recall feature for training data right now; maybe at some point they won't anymore.
Lemmy people do not like your comment, but I think you are spot on. Laypeople just do not care about privacy, like, at all. That's how instagram, facebook and such are popular despite all the privacy infractions.
It's so sad how people 1000% prefer convenience over privacy…
"so true bestie" says the dipshit, brushing under the carpet all the notes about overt efforts and capital expended by monopolist captive operators
Some people are really crazy around here and they will downvote anything and everything AI, especially posts from someone who might not be an anti-ai activist.
I've got my own view. I've worked at a company that does AI research and I've seen how fast the progress is. I'm also old enough to remember vividly a time when people couldn't imagine that anyone would put their entire life online for everyone else to see. Yet here we are.
Edit: You can see already some crazy triggered poor soul commenting here 😂
People expect AI to be a default feature. Image search was once what was "AI". Photo recognition was once what was "AI". Voice recognition was once what was "AI". These all fall under the field of AI/ML. That lasts until the next state of the art comes along; then it's no longer "AI" but a standard feature.
I have no idea why this phenomenon exists, but that's the way it's been. When the field of AI/ML makes its leap to the next frontier, the current "AI" (LLMs) will no longer be "AI" but a standard feature.
Maybe because fictional media has set the goalpost at AGI. So nobody expects to be buying "AI" hardware until they're buying an AGI machine that's a conscious cybernetic lifeform. Otherwise, practical AI as we know it is assumed to be just another software package that runs on any computer.
you know you could just write your own posts, right? it’s free! you don’t need to regurgitate the marketing that’s being pushed by all these vendors. they already have paid marketers to push the bilge
people expect a Cylon level of AI intelligence, or Skynet; both are quite similar fictional concepts. funny thing, in the Cylons' case it was based on the brainwaves of a human girl, replicated into robots, which developed their own AI/consciousness over time.