
If "AI" doesn't attract customers, why are companies hyping it so hard? Are they just dumb, or is there another factor?

I've seen reports and studies showing that products advertised as including or involving AI are off-putting to consumers. And this matches what almost every person I hear irl or online says. Regardless of whether they think AI will be useful, problematic, or apocalyptic in the long term, nobody is impressed by Spotify offering an "AI DJ" or by "AI coffee machines".

I understand that AI tech companies might want to promote their own AI products if they think there's a market for them. And they might even try to create a market by hyping the possibilities of "AI". But rebranding your existing service or algorithms as AI seems like a super dumb move: obviously stupid to tech-literate people and off-putting or scary to others. Have they just completely misjudged the world's enthusiasm for this buzzword? Or is there some other reason?

77 comments
  • They believe that supply and demand in the market is a chicken-and-egg situation, so right now they're force-feeding us the supply while waiting for demand to adapt

  • It's a "don't want to miss the ship" thing where companies have to invest in whatever's trending in case it becomes successful and gives them an advantage. If they wait until it's proven, they might miss a competitive advantage (having to start learning after everyone else). In the case of AI it's even more important, since the promise sounds actually useful (the summarize-anything-quickly bit, at least), unlike, say, NFTs. At least that's kind of how it got explained to me at one of my jobs.

  • Think like a venture investor.

    A small chance of huge growth via new technology can have a big payoff. They expect most companies to fail and are more worried about missing an opportunity than losing money in a single bad investment.

    Nobody is quite sure where AI technology will be in ten years, but if it's big, it's going to make people who got in early very rich. It doesn't matter that it sucks now; the web sucked in 1995, but it made people who got in (and out) at the right time very rich.

  • I personally think people were "burned" by the whole NFT situation. During the NFT "hype" a year or two ago, a lot of companies were slow to get on board with releasing NFT products, and so they missed the bubble entirely. NFTs are, of course, silly, but if they had taken off, companies would have loved to have been part of the boom.

    Fast forward to now and you have AI bros shilling AI the same way cryptobros were shilling NFTs. However, this time it's different! They have results, they have technology. Microsoft is on board! They have fancy tech demos which are not staged at all! If you didn't have experience with the technology and its limitations, you would be led to believe that this looks just like the NFT bubble but is actually going to be a real technology rather than snake oil.

    I think there's also the issue that it takes a long time to bring a product to market. Imagine you've spent millions developing software and hardware for your AI coffee machine or whatever, and it turns out there's no market demand. You can't really turn to your stakeholders and say "oops, we made a mistake and have to cancel this product. Sorry!"; you have to finish the product and try to recoup losses where you can. That's why there are all these weird posts advertising AI products: the companies can't just not release a product, and AI bros might be tempted to buy it.

    I also wonder if the whole AI hate is bias due to us being here on Mastodon/Lemmy... We tend to be fairly cynical people who are critical of new technology and corporations. Maybe actual consumers who aren't online all day and clued into the tech scene are wowed by AI. I've certainly seen people here casually remark that they use ChatGPT and Copilot.

    • I also wonder if the whole AI hate is bias due to us being here on Mastodon/Lemmy…

      Yeah, we're cynical but we have every right to be.

      I use ChatGPT, Copilot, and image generators for different things, and I'm generally not on board with the blind hate, because it's been nice to have an assistant that can handle all these menial things. But honestly, I've gotten mixed results and don't see this tech correcting its obvious problems. The latest GPT-4o release was great with its web browsing, images, and speech, but it still struggles with accuracy to a tangible degree. Or worse, other companies use it for the wrong things as a cash grab, changing perfectly working products. Even the applications that do seem perfect for it are not.

      For example, I can't get Gemini to answer anything but simple questions about Google Docs without it getting confused and repeating the same thing. Copilot will sometimes reach conclusions wildly different from the sources it cites. ChatGPT will give you suboptimal code samples, be subtly wrong about the meaning of words in other languages, or suddenly forget part of your instructions. And now people are adding it to the fucking coffee machine, for crying out loud. I'd have a different opinion if it were more accurate most of the time and genuinely useful, but using it more often only cements it in my mind as a secondary productivity tool rather than the main feature.

      I hope the hype dies down and AI is seen as an afterthought enhancement rather than a stupid selling point. Anybody selling AI now looks clueless to me.

    • You're essentially describing FOMO. The hype bros are telling the CEOs "if you don't offer AI then your competition will, and they'll take all your customers."

    • You can't really turn to your stakeholders and say "oops, we made a mistake and have to cancel this product. Sorry!", you have to finish the product and try to recoup losses where you can.

      You can and should. You're describing the sunk cost fallacy, which is pretty much universally understood as a terrible money vacuum of a flaw in our reasoning. (I might not have made this comment if I hadn't read Quit literally yesterday, but it really is an excellent book about the value of abandoning bad decisions when new information makes it clear that they're bad decisions.) Buying time and raising expectations with a dead-end nonsense tech might leave you better off 6 months from now, but 5 years from now, being the person who saw the writing on the wall and realized that continued investment was lighting money on fire will leave you better off.

      LLMs have limited applications now, and will continue to in the future, but nowhere near enough to warrant the obscene amount of resources companies are burning to get there.

      I also wonder if the whole AI hate is bias due to us being here on Mastodon/Lemmy... We tend towards fairly cynical people who are critical towards new technology and corporations

      Corporations, sure, but tech? Anti-tech people aren't early adopters of new tech products. Early adopters are just generally more aware of the actual shape of the field than people jumping on hype trains once they've already started moving.

      • So firstly, if you were the person running a several-million-dollar project which then gets cancelled, you are absolutely getting fired. If you were acting entirely in your own self-interest, it would be better to make the project last as long as you can. Maybe it ends up succeeding by a fluke and you keep your job?

        Secondly, you're assuming that all that money just vanishes when the product is released. The product is still out there in the market, and there are still some people who will buy it. If you're 80% of the way through the project, it might be worth spending the remaining 20% in order to recoup at least some of your costs.

  • I see two basic reasons.

    1. It gives companies a plausible argument to embed telemetry into their products. Should your TV manufacturer or coffee maker manufacturer be able to monitor every single button you press on your device? Probably not, but they would like to, "because AI"! Now they have an excuse to be as invasive as they want, "to serve you better". The dream, for them, would be total surveillance of your habits to sell you more shit. Remember, it always comes back to money.
    2. The old adage never fails: if it's free, you are the product. Imagine AI being so pervasive that now everywhere you look, everything you interact with can subtly suggest things. It doesn't have to be overt. But if AI can nudge the behavior of the masses to do a thing, like buy more soda or favor one brand over another, then it has succeeded in boosting the company's bottom line. Sure, the AI can do useful shit for you, but the true AI problem companies want to solve is "say or do the right shit to influence this consumer to buy my thing". You are the target the AI is operating on. And with billions of interactions and tremendous amounts of training, it will find the optimal way to influence the masses to buy the thing.
  • AI is the new ad driven model.

    Everything that AI touches ends up as machine-learning content.

    AI DJ? I now have your name, your email address, and every taste you have in music. As you use the app, I gain more insight into more music that you are or might be interested in.

    That Roomba that's running around your house looking for socks and cables not to run over is also running image processing on everything in your house. They know how big your house is; they probably know how big your TV is.

    They're not just farming your email and text messages to figure out what to sell you; they know at a core, intimate level what your interests are.

    They're in for a rude awakening in a few years. All of this AI information gathering is a bubble. You have companies like anovo complaining that they can't afford to host a single website. All this AI training is not cheap, and the return on investment is not great after the initial plunge, right?

  • I wonder how many of the people pushing it believe in some variation of Roko's Basilisk. Either that, or they believe AI is going to enhance their data collection abilities, and that if everyone pushes AI together, there won't be any AI-less options and the consumer will be trapped into giving someone even more data than they already do.

  • AI has some useful applications, but most of them are a bit niche and/or have ethical issues, so while it's worth having the tools and functionality to do these things, no one can do much with them.

    Like, for example, we pretty much have AIs that could generate really good audiobooks using your favourite actor's voice likeness, but it's a legal nightmare, and audiobooks are a niche already.

    In game development, being able to use AI for texture generation, rigging, and animation is pretty good and can save lots of time, but it comes at the cost of jobs.

    Some useful applications for end users are things like noise removal and dynamic audio enhancement AIs, which can make your mic not sound like you're talking from a tunnel under a motorway when you're in meetings, or basic voice activation of certain tools, or even spam filtering.

    The whole idea of using AI to sidestep being creative, or of pretending to collate knowledge in any meaningful way, is a bit out of reach at the moment. Don't get me wrong, it has a good go at it, but it's not actually intelligent; it's just throwing out lots of nonsense and hoping for the best.

  • They want to create some hype and look cool by using AI chatbots. And most normies don't care about privacy or the dangers of AI in the future; they only care about "wow, I can use AI for bla.. bla.."

    But they have no idea that one day AI could take over their jobs.. and rich people like Sam Altman will get richer, while he only pays you with UBI money or some pieces of compute

    https://x.com/tsarnick/status/1789107043825262706

    Also, AI companies aim for government contracts and medium / big corpos.
