242 comments
  • A big issue that a lot of these tech companies seem to have is that they don't understand what people want; they come up with an idea and then shove it into everything. There are services that I have actively stopped using because they started cramming AI into things; for example I stopped dual-booting with Windows and became Linux-only.

    AI is legitimately interesting technology which definitely has specialized use-cases, e.g. sorting large amounts of data, or optimizing strategies within highly constrained circumstances (like chess or go). However, as a member of the general public, 99% of what I see people pushing as AI these days just seems like garbage: bad art, bad translations, and incorrect answers to questions.

    I do not understand all the hype around AI. I can understand the danger: people who don't see how bad it is are using it in place of people who know how to do things. But in my teaching, for example, I've never had any issues with students cheating using ChatGPT; I semi-regularly run the problems I assign through ChatGPT, and it gets enough of them wrong that I can't imagine any student would be inclined to keep using it to cheat after their first grade comes in. (In this sense, it's actually impressive technology - we've had computers that can do advanced math highly accurately for a while, but we've finally developed one that's worse at math than the average undergrad in a gen-ed class!)

    • The answer is that it's all about "growth". The fetishization of shareholders has reached its logical conclusion, and now the only value companies have is in growth. Not profit, not stability, not a reliable customer base or a product people will want. The only thing that matters is if you can make your share price increase faster than the interest on a bond (which is pretty high right now).

      To make share price go up like that, you have to do one of two things: show that you're bringing in new customers, or show that you can make your existing customers pay more.

      For the big tech companies, there are no new customers left. The whole planet is online. Everyone who wants to use their services is using their services. So they have to find new things to sell instead.

      And that's what "AI" looked like it was going to be. LLMs burst onto the scene promising to replace entire industries, entire workforces. Huge new opportunities for growth. Lacking anything else, big tech went in HARD on this, throwing untold billions at partnerships, acquisitions, and infrastructure.

      And now they have to show investors that it was worth it. Which means they have to produce metrics that show people are paying for, or might pay for, AI-flavoured products. That's why they're shoving it into everything they can. If they put AI in Notepad, then they can claim that every time you open Notepad you're "engaging" with one of their AI products. If they put Recall on your PC, every Windows user becomes an AI user. Google can now claim that every search is an AI interaction because of the bad summary that no one reads. The point is to show "engagement" and "interest", which they can then use to promise that down the line huge piles of money will fall out of this piñata.

      The hype is all artificial. They need to hype these products so that people will pay attention to them, because they need to keep pretending that their massive investments got them in on the ground floor of a trillion dollar industry, and weren't just them setting huge piles of money on fire.

      • I know I'm an enthusiast, but can I just say I'm excited about NotebookLM? I think it will be great for documenting application development. Having a shared notebook that knows the environment, configuration, architecture, and standards for an application, and can answer specific questions about it, could be really useful.

        "AI Notepad" is really underselling it. I'm trying to load up massive Markdown documents to feed into NotebookLLM to try it out. I don't know if it'll work as well as I'm hoping because it takes time to put together enough information to be worthwhile in a format the AI can easily digest. But I'm hopeful.

        That's not to take away from your point: the average person probably has little use for this, and wouldn't want to put in the effort to make it worthwhile. But spending way too much time obsessing about nerd things is my calling.
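
        For the curious, this is roughly the batching script - just a sketch. The word budget is a knob I made up, not an official NotebookLM limit, and the paths are examples; tune both to whatever the service actually accepts.

        ```python
        # Gather a tree of Markdown files into a few combined sources.
        # MAX_WORDS is an assumed per-source budget, not a documented limit.
        from pathlib import Path

        MAX_WORDS = 400_000

        def batch_markdown(root: str, out_dir: str) -> None:
            out = Path(out_dir)
            out.mkdir(parents=True, exist_ok=True)
            batch, words, n = [], 0, 0
            for md in sorted(Path(root).rglob("*.md")):
                text = md.read_text(encoding="utf-8", errors="replace")
                w = len(text.split())
                if batch and words + w > MAX_WORDS:
                    n += 1
                    (out / f"combined_{n:02d}.md").write_text("\n\n".join(batch), encoding="utf-8")
                    batch, words = [], 0
                # keep the original path as a heading so answers can cite it
                batch.append(f"# Source: {md}\n\n{text}")
                words += w
            if batch:
                n += 1
                (out / f"combined_{n:02d}.md").write_text("\n\n".join(batch), encoding="utf-8")

        batch_markdown("docs/", "notebooklm_upload/")
        ```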

      • > The answer is that it’s all about “growth”. The fetishization of shareholders has reached its logical conclusion, and now the only value companies have is in growth. Not profit, not stability, not a reliable customer base or a product people will want. The only thing that matters is if you can make your share price increase faster than the interest on a bond (which is pretty high right now).

        As you can see, this can't go on indefinitely. Such unpleasantness is also well known in the aftermath of every huge technological revolution. Every time it eventually resolved itself, and not in favor of those on the quick-buck train.

        It's still not a dead end. The cycle of birth, growth, old age, death, and rebirth from the ashes still works. It's only the competitive, evolutionary, "fast" model that has been killed - temporarily.

        These corporations will still die unless they make themselves effectively part of the state.

        BTW, that's what happened in the Germany Marx described, so despite my distaste for Marxism, some of its core ideas may be locally applicable to the process we're observing.

        It's like a worldwide gold rush IMHO, but not even really worldwide. There are plenty of solutions to be developed and sold in developing countries in place of what fits Americans, Europeans, Chinese, and so on but doesn't fit the rest. Markets are not exhausted for everyone - just for these corporations, because they are unable to evolve.

        > Lacking anything else, big tech went in HARD on this, throwing untold billions at partnerships, acquisitions, and infrastructure.

        If only Sun had survived till now, I feel they would be having good days. What made them fail then would make them more profitable now. They were probably planning too far ahead, and were too careless about actually keeping the company afloat.

        My point is that Sun, unlike these corporations, could function as some kind of "the phone company" or "the construction company" - basically what Microsoft pretended to be in the 00s. They were bad at choosing the right kind of hype, but good at having a comprehensive vision of computing. Except that the vision, and its relation to finances, had schizoaffective traits.

        Same with DEC.

        > The point is to show “engagement”, “interest”, which they can then use to promise that down the line huge piles of money will fall out of this piñata.

        Well, it's not unprecedented for business opportunities to dry up; it's actually normal. More importantly, the investors backing this are the dumber kind, while the investors putting money into more tangible things are the smarter kind. So when these crash (and for a few years hunger will probably become a real issue, not just in developing countries), the people who hold onto power will tend to be the rather insightful ones.

    • I've run some college homework through 4o just to see, and it's remarkably good at generating proofs for math and algorithms. Sometimes it's not quite right, but it's usually on the right track enough to get started.

      In some of the busier classes I'm almost certain students do this, because otherwise my homework grades wouldn't sit below the mean while my exam grades sit well above it.

    • I understand some of the hype. LLMs are pretty amazing nowadays (though closedai is unethical af so don't use them).

      I need to write complex cryptography code for university, and Claude 3.5 Sonnet solves some of the challenges instantly.

      And it's not trivial stuff, but things like "how do I divide polynomials where each coefficient of the polynomial is an element of GF(2^128)?" Given the context (my source code), it adds the code seamlessly, writes unit tests, and it just works. That matters for AES-GCM, the construction TLS relies on most of the time. (A rough sketch of what that arithmetic involves is at the end of this comment.)

      Besides that, LLMs are good at what I call moving words around: writing cute little short stories in fictional worlds given some source material, checking spelling, re-formulating a message into a very diplomatic, nice message, and so on.

      On the other side, it's often complete BS shoehorning LLMs into things, because "AI cool word line go up".
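
      For anyone wondering what that GF(2^128) division actually looks like: below is a minimal Python sketch of the idea. It uses plain little-endian polynomial bits and schoolbook algorithms, whereas real GHASH uses a reflected bit order and much faster math, so treat it as illustration, not an AES-GCM implementation.

      ```python
      # Polynomial division where the coefficients are elements of
      # GF(2^128), the field behind AES-GCM's GHASH.
      R_LOW = (1 << 7) | (1 << 2) | (1 << 1) | 1   # x^128 = x^7 + x^2 + x + 1

      def gf_mul(a: int, b: int) -> int:
          """Multiply two GF(2^128) elements mod x^128 + x^7 + x^2 + x + 1."""
          prod = 0
          for i in range(128):                      # carry-less multiply
              if (b >> i) & 1:
                  prod ^= a << i
          for i in range(254, 127, -1):             # reduce back into 128 bits
              if (prod >> i) & 1:
                  prod ^= (1 << i) | (R_LOW << (i - 128))
          return prod

      def gf_inv(a: int) -> int:
          """Inverse via Fermat's little theorem: a^(2^128 - 2)."""
          result, base, e = 1, a, (1 << 128) - 2
          while e:
              if e & 1:
                  result = gf_mul(result, base)
              base = gf_mul(base, base)
              e >>= 1
          return result

      def poly_deg(p: list[int]) -> int:
          """Degree of a coefficient list (index = power); -1 if zero."""
          for i in range(len(p) - 1, -1, -1):
              if p[i]:
                  return i
          return -1

      def poly_divmod(num: list[int], den: list[int]) -> tuple[list[int], list[int]]:
          """Schoolbook long division; coefficient add/subtract is XOR."""
          num, d = list(num), poly_deg(den)
          assert d >= 0, "division by the zero polynomial"
          inv_lead = gf_inv(den[d])
          quo = [0] * max(poly_deg(num) - d + 1, 1)
          for i in range(poly_deg(num), d - 1, -1):
              c = gf_mul(num[i], inv_lead)
              if c:
                  quo[i - d] = c
                  for j in range(d + 1):            # cancel c * x^(i-d) * den
                      num[i - d + j] ^= gf_mul(c, den[j])
          return quo, num[:d]

      # quick self-check: num == quo * den + rem
      def poly_mul(a: list[int], b: list[int]) -> list[int]:
          out = [0] * (poly_deg(a) + poly_deg(b) + 1)
          for i, x in enumerate(a):
              for j, y in enumerate(b):
                  out[i + j] ^= gf_mul(x, y)
          return out

      num = [5, 7, 0, 3]   # 3x^3 + 7x + 5, coefficients in GF(2^128)
      den = [9, 1]         # x + 9
      quo, rem = poly_divmod(num, den)
      check = poly_mul(quo, den)
      for i, c in enumerate(rem):
          check[i] ^= c
      assert check == num
      ```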

  • oh wow, who would have guessed that business consultancy companies are generally built on bullshitting about things they don't really have a grasp of

  • To have a bubble, you need companies with no clear path to monetization being over-valued to an extreme degree. This leaves me wondering: which company, specifically? Are they talking about Nvidia? OpenAI? Midjourney? Or the slew of LLM-powered SaaS products that have started appearing? How exactly are we defining "over-valuation" here? Are we talking about the tech industry as a whole?

    We often invite the comparison to the DotCom bubble, but that's apples to oranges. Back then you had companies making social networks for dogs or similar bullshit, valued in the billions and getting a ticker at the stock market before making a single dime. Or companies with outlandish promises, such as delivering to any home in the US, in under an hour, for a low price, and building warehouses by the hundreds before having a storefront. What would be the 2024 equivalent? If a bubble is about to deflate, then there should be dozens of comparable examples.

  • Education is one area where GenAI is having a huge impact. Teachers work with text and language all day long, and they have too much to do and not enough time to do it. Ideally, for example, they should "differentiate" for EACH and EVERY student. Of course that almost never happens, but second best is to differentiate for specific groups: students with IEPs (special ed), English Learners, and maybe advanced/gifted students.

    More tech-aware teachers are now using ChatGPT and friends to help them do this. They are (usually) subject-area experts, so they can quickly read through a generated or modified text and fix or remove errors - hallucinations are less of an issue (in my experience) in this situation. Now, instead of one reading that only a few students can actually understand, they have three at different levels, each with its own DOK (Depth of Knowledge) questions. (A rough sketch of that workflow is at the end of this comment.)

    People have started saying "AI won't replace teachers. Teachers who use AI will replace teachers who don't."

    Of course, it will be interesting to see what happens when VC funding dries up and the AI companies can't afford to lose money on every single interaction. Like with everything else in US education, better-off districts may be able to afford AI, and less-well-off (aka black/brown/poor) districts may not be able to.
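
    For what it's worth, the workflow is simple enough to script. This is only a sketch using the OpenAI Python client - the model name, reading levels, and prompt wording are all placeholders, and a subject-matter expert still has to proofread every word before it reaches students.

    ```python
    # Generate one passage at three reading levels, each with DOK questions.
    # Placeholders throughout: model, levels, and prompt are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    LEVELS = ["emerging English Learner", "on grade level", "advanced/gifted"]

    def differentiate(passage: str, grade: int) -> dict[str, str]:
        versions = {}
        for level in LEVELS:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=[
                    {"role": "system",
                     "content": "You are an experienced teacher differentiating texts."},
                    {"role": "user",
                     "content": (
                         f"Rewrite this grade-{grade} passage for a {level} student, "
                         "keeping every fact intact. Then write three DOK level 2-3 "
                         f"comprehension questions about it:\n\n{passage}"
                     )},
                ],
            )
            versions[level] = resp.choices[0].message.content
        return versions
    ```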

  • "Today’s hype will have lasting effects that constrain tomorrow’s possibilities."

    Nope. No it won't. I'd love to have the patience to be more diplomatic but they're just wrong... and dumb.

    I'm getting so sick of these anti-AI cultists, who seem to be made up of grumpy tech nerds behaving like "I was using AI before it was cool" hipsters, plus panicking artists and writers. Everyone needs to calm their tits right down. AI isn't going anywhere. It's giving creative and executive options to millions of people that just weren't there before.

    We're in an adjustment phase right now, and boundaries are being re-drawn around what constitutes creativity. My leading theory at the moment is that we'll all mostly, eventually, settle down to the idea that AI is just a tool. Once we're used to it and less starry-eyed about its output, individual creativity, possibly supported by AI tools, will flourish again. It's going to come down to the question of whether you prefer reading something cogitated, written, drawn, or motion-rendered by AI, or whether you enjoy the perspective of a human being more. Both will be true in different scenarios, I expect.

    Honestly, I've had to nope out of quite a few forums and servers permanently now, because all they do in there is circlejerk about the death of AI. Like this one theory that keeps popping up: that image-generating AI specifically is inevitably going to collapse in on itself and stop producing quality images. The reverse is so obviously true, but they just don't want to see it. Otherwise-smart people are being so stubborn about this, and it's, quite frankly, depressing to see.

    Also, the tech nerds arguing that AI is just a fancy word-and-pixel-regurgitating engine and that we'll never have AGI are probably the same people who were really hoping Data would be classified as a sentient lifeform when Bruce Maddox wanted to disassemble him in "The Measure of a Man".

    How's that for whiplash?
