  • I've been beating this dead horse for a while (since July of last year, AFAIK), but it's clear to me that the AI bubble's done horrendous damage to the public image of artificial intelligence as a whole.

    Right now, using AI at all (or even claiming to use it) will earn you immediate backlash/ridicule under most circumstances, and AI as a concept is viewed with mockery at best and hostility at worst - a trend I expect that'll last for a good while after the bubble pops.

    To beat a slightly younger dead horse, I also anticipate AI as a concept will die thanks to this bubble, with its utterly toxic optics as a major reason why. With relentless slop, nonstop hallucinations and miscellaneous humiliation (re)defining how the public views and conceptualises AI, I expect any future AI systems will be viewed as pale imitations of human intelligence, theft-machines powered by theft, or a combination of the two.

  • New-ish thread from Baldur Bjarnason:

    Wrote this back on the mansplainiverse (mastodon):

    It's understandable that coders feel conflicted about LLMs even if you assume the tech works as promised, because they've just changed jobs from thoughtful problem-solving to babysitting

    In the long run, a babysitter gets paid much less than an expert

    What people don't get is that when it comes to LLMs and software dev, critics like me are the optimists. The future where copilots and coding agents work as promised for programming is one where software development ceases to be a career. This is not the kind of automation that increases employment

    A future where the fundamental issues with LLMs lead them to cause more problems than they solve, resulting in much of it being rolled back after the "AI" financial bubble pops, is the least bad future for dev as a career. It's the one future where that career still exists

    Because monitoring automation is a low-wage activity and an industry dominated by that kind of automation requires much much fewer workers that are all paid much much less than one that's fundamentally built on expertise.

    Anyways, here's my sidenote:

    To continue a train of thought Baldur indirectly started, the rise of LLMs and their impact on coding is likely gonna wipe a significant amount of prestige off of software dev as a profession, no matter how it shakes out:

    • If LLMs worked as advertised, then they'd effectively kill software dev as a profession as Baldur noted, wiping out whatever prestige it had in the process
    • If LLMs didn't work as advertised, then software dev as a profession gets a massive amount of egg on its face, as the widespread costs AI has inflicted on artists, the environment, etcetera end up being all for nothing.
  • To sorta repeat a prediction of mine, shit like this is gonna tank the public image of coding as a profession.

    Inevitable software issues aside, "vibe coding" as a concept undermines any notion of coding as being a difficult/skillful thing, making it sound like coders are doing the equivalent of throwing shit at the wall and seeing what sticks. That the software produced by this method is inevitably derivative, dogshit or derivative dogshit is gonna help damage coding's image, too.

  • New thread from Ed Zitron, focusing on the general trashfire that is CoreWeave. Jumping straight to the money-shot, he noted how the company is losing money selling shovels in the gold rush:

    If you want my off-the-cuff prediction, CoreWeave will probably be treated as the Lehman Brothers of the 2020s, an unofficial mascot of everything wrong with Wall Street (if not the world) during the AI bubble.

  • In other news, a piece from Paris Marx came to my attention, titled "We need an international alliance against the US and its tech industry". Personally gonna point to a specific paragraph which caught my eye:

    The only country to effectively challenge [US] dominance is China, in large part because it rejected US assertions about the internet. The Great Firewall, often solely pegged as an act of censorship, was an important economic policy to protect local competitors until they could reach the scale and develop the technical foundations to properly compete with their American peers. In other industries, it’s long been recognized that trade barriers were an important tool — such that a declining United States is now bringing in its own with the view they’re essential to protect its tech companies and other industries.

    I will say, it does strike me as telling that Paris was able to present the unofficial mascot of Chinese censorship in this light without getting any backlash.

  • New piece from Brian Merchant, focusing on Musk's double-tapping of 18F. In lieu of going deep into the article, here's my personal sidenote:

    I've touched on this before, but I fully expect the coming years will deal a massive blow to tech's public image, with the industry coming to be viewed as "incompetent fools at best and unrepentant fascists at worst" - and with the wanton carnage DOGE is causing (and indirectly crediting to AI), I expect Musk's governmental antics will deal plenty of damage on their own.

    18F's demise in particular will probably also deal a blow on its own - 18F was "a diverse team staffed by people of color and LGBTQ workers, and publicly pushed for humane and inclusive policies", as Merchant put it, and its demise will likely be seen as another sign of tech revealing its nature as a Nazi bar.

  • Starting things off here with a sneer thread from Baldur Bjarnason:

    Keeping up a personal schtick of mine, here's a random prediction:

    If the arts/humanities gain a significant degree of respect in the wake of the AI bubble, it will almost certainly gain that respect at the expense of STEM's public image.

    Focusing on the arts specifically, the rise of generative AI and the resultant slop-nami has likely produced an image of programmers/software engineers as inherently incapable of making or understanding art, given AI slop's soulless nature and inhumanly poor quality - if not as outright hostile to art and artists, thanks to gen-AI's use in killing artists' jobs and livelihoods.

  • New opinion piece from the Guardian: AI is ‘beating’ humans at empathy and creativity. But these games are rigged

    The piece is one lengthy sneer aimed at tests trying to prove humanlike qualities in AI, with a passage at the end publicly skewering techno-optimism:

    Techno-optimism is more accurately described as “human pessimism” when it assumes that the quality of our character is easily reducible to code. We can acknowledge AI as a technical achievement without mistaking its narrow abilities for the richer qualities we treasure in each other.