nightsky @awful.systems · 0 Posts · 135 Comments · Joined 9 mo. ago

I’m heckin’ moving to Switzerland next month holy hell.
Good luck!!
they posted these two videos to TikTok in response to the AI backlash
The cringey "hello, fellow kids" vibe is really unbearable... it's good that people aren't falling for it.
Seeing a lot of talk about OpenAI acquiring a company with Jony Ive, who is supposedly going to design some AI gadget for them.
Calling it now: it will be a huge flop, just like the Humane Pin and that Rabbit thing. Only the size of the marketing campaign, and maybe the endurance that greater funding buys, will make it last a little longer.
It appears that many people think Jony Ive can perform some kind of magic that will make a product successful. I wonder if Sam Altman believes that too, or if he just wants the big name for marketing purposes.
Personally, I haven't been impressed with Ive's design work for many years now. Well, I'm sure the thing is going to look very nice, probably a really pleasingly shaped chunk of aluminium. (Will they do a video with Ive in a featureless white room where he can talk about how "unapologetically honest" the design is?) But IMO Ive lost touch long ago with designing things to be actually useful; at some point he went all in on heavily prioritizing form over function (or maybe he always did, I'm not so sure anymore). Combine that with the AI true believers' overall loss of connection to reality, and I think the resulting product could turn out to be actually hilarious.
The open question is: will the tech press react with ridicule, like it did for the Humane Pin? Or will we have to endure excruciating months of critihype?
I guess Apple can breathe a sigh of relief though. One day there will be listicles for "the biggest gadget flops of the 2020s", and that upcoming OpenAI device might push Vision Pro to second place.
If the companies wanted to produce an LLM that didn’t output toxic waste, they could just not put toxic waste into it.
The article title and that part remind me of this quote from Charles Babbage in 1864:
On two occasions I have been asked, — "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
It feels as if Babbage had already interacted with today's AI pushers.
Re the GitLab marketing: what does it mean, what toolchains are they referring to, and what is "native AI"? Does that even mean anything, or is it just marketing gibberish to impress executives?
scrolls down
GitLab Duo named a Leader in the Gartner® Magic Quadrant™ for AI Code Assistants.
[eternal screaming]
Oh god, so many horror quotes in there.
With a community of 116 million users a month, Duolingo has amassed loads of data about how people learn
...and that's why I try to avoid using smartphone apps as much as possible.
“Ultimately, I’m not sure that there’s anything computers can’t really teach you,”
How about common sense?
“it’s just a lot more scalable to teach with AI than with teachers.”
Ugh. So terrible. Tech's obsession with "scaling" is one of the worst things about tech.
If “it’s one teacher and like 30 students, each teacher cannot give individualized attention to each student,” he said. “But the computer can.
No, it cannot. It's a statistical model; it cannot give attention to anything or anyone. What are you talking about?
Duolingo’s CFO made similar comments last year, saying, “AI helps us replicate what a good teacher does”
Did this person ever have a good teacher in their life?
the company has essentially run 16,000 A/B tests over its existence
Aaaarrgh. Tech's obsession with A/B testing is another one of the worst things about tech.
OK, I'll stop here now; there's more, almost every paragraph contains something horrible.
Maybe this is a bit old woman yells at cloud
Yell at cloud computing instead, that is usually justified.
More seriously: it's not at all that. The AI pushers want to make people feel that way -- "it's inevitable", "it's here to stay", etc. But the threat to learning and maintaining skills is real (although the former worries me more than the latter -- what has been learned before can often be regained rather quickly, but what if learning itself is inhibited?).
My opinion of Microsoft has gone through many stages over time.
In the late 90s I hated them, for some very good reasons but admittedly also some bad and silly reasons.
This carried over into the 2000s, but in the mid-to-late 00s there was a time when I thought they had changed. I used Windows much more again, I bought a student license of Office 2007 and I used it for a lot of uni stuff (Word finally had decent equation entry/rendering!). And I even learned some Win32, and then C#, which I really liked at the time.
In the 2010s I turned away from Windows again to other platforms, mostly for tech-related reasons, but I didn't dislike Microsoft much per se. This changed around the release of Win 10 with its forced privacy-violating spyware telemetry, since I categorically reject such coercion. Suddenly Microsoft was doing one of the very things it had been wrongly accused of doing 15 years earlier.
Now it's the 2020s: they forcefully push GenAI on users, and they align with fascists (see link at the beginning of this comment). I despise them more now than I ever did before; I hope the bursting of the AI bubble will bankrupt them.
“For sure, there are some legitimate uses of AI” or “Of course, I’m not claiming AI is useless” like why are you not claiming that.
Yes, thank you!! I'm frustrated by that as well. Another one I have seen way too often is "Of course, AI is not like cryptocurrency, because it has some real benefits [blah blah blah]"... uhm... no?
As for the "study": due to Brandolini's law, this will continue to be a problem. I wonder whether research about "AI productivity gains" will eventually become like studies about the efficacy of pseudo-medicine, i.e. the proponents will just make baseless claims that an effect is present, and that science simply isn't advanced enough yet to detect or explain it.
Over 1 million, whoooooo!
If this is real, it would be doubly infuriating: not just because of the AI nonsense, but also because just 3 days ago SAP went bootlicker and announced it was ending its diversity programs.
Is Brother still the least worst brand for them?
Can't offer experience with Brother printers, but I'd throw in Canon as another option -- at least I've had a small colour laser from their "i-Sensys" office line for many years now, and it still works exactly as well as on the day I bought it, no complaints at all. It also works nicely on Linux (I did install a Canon thing for it, but IIRC it might even work without it). Although keep in mind, of course, that this is just a single anecdote about one model from many years ago.
That whole plot angle feels dead today
It doesn't have to be IMO, in particular when it's an older work.
I don't mind at all rewatching, e.g., AI-themed episodes of TNG, such as the various episodes with a focus on Data, or the one where the ship computer gains sentience (a great episode, actually).
On the other hand, a while ago I stopped listening to a contemporary audiobook (published in 2022) halfway through; it was a utopian AI sci-fi story. The theme of "AI could be great and save the world" just bugged me too much in relation to the current real-world situation. I couldn't enjoy it at all.
I don't know why I feel so differently about these two examples. Maybe it's simply because TNG is old enough that I don't associate it with current events, and because the first time I saw the episodes was so long ago. Or maybe it's because TNG is set in a far-future scenario, clearly disconnected from today, while the audiobook is set in a present-day scenario. Hm, it's strange.
(and btw queer loneliness is an interesting theme, wonder if I could find an audiobook involving it)
The AI problem is still at an earlier stage at my job, but I've already witnessed a code review where code was flagged as questionable and then justified with what amounted to "the AI generated this, it wasn't me". I really don't like where this is going.
AI will see a sharp decline in usage as a plot device
Today I was looking for some new audiobooks again, scrolling through curated¹ lists for various genres. In the sci-fi genre there is a noticeable uptick in AI-related fiction books. I have noticed this for a while already, and it's getting more intense. Most seem to be about "what if AI, but really powerful and scary" and singularity-related scenarios. While such fiction themes aren't new at all, it appears to me that there's a wave of them now, although it's also possible that I am just more cognisant of it.
I think that's another reason your prediction will come true: sooner or later demand for this sub-genre will peak, as many people eventually become bored with it as a fiction theme as well, just like it happened with e.g. vampires and zombies.
(¹ Not sure when "curation" is even human-sourced these days. The overall state of curation, genre-sorting, tagging and algorithmic "recommendations" in commercial books and audiobooks is so terrible... but that's a different rant for another day.)
If someone creates the world's worst playlist, that would play right after RMS's Free Software Song.
For the part on generative AI skills as a job requirement: just came across this, and it's beautiful. Made even better by the answer post from an audiobook narrator.
Amazon publishes Generative AI Adoption Index and the results are something! And by "something" I mean "annoying".
I don't know how seriously I should take the numbers, because it's Amazon after all and they want to make money with this crap; but on the other hand, they surveyed "senior IT decision-makers"... and my opinion of that crowd isn't the highest either.
Highlights:
- Prioritizing spending on GenAI over spending on security. Yes, that is not going to cause problems at all. I do not see how this could go wrong.
- The junk chart about "job roles with generative AI skills as a requirement". What the fuck does that even mean, what is the skill? Do job interviews now include a section where you have to demonstrate promptfondling "skills"? (Also, the scale of the horizontal axis is wrong, but maybe no one noticed because they were so dazzled by the bars being suitcases for some reason.)
- Cherry on top: one box to the left they list "limited understanding of generative AI skilling needs" as a barrier for "generative AI training". So yeah...
- "CAIO". I hate that I just learned that.
I'm not sure I want to know, but what is the relation of beef tallow to fascism? Is it related to the whole seed oil conspiracy? Or is it one of those imagined ultra-manly masculine-man things for maxxing the intake of meat? (I'm losing track of all the insane bullshit; there's just too much.)
The myth of the "10x programmer" has broken the brains of many people in software. They appear to think that it's all about how much code you can crank out, as fast as possible. Taking some time to think? Hah, that's just a sign of weakness, not necessary for the ultra-brained.
I don't hear artists or writers and such bragging about how many works they can pump out per week. I don't hear about them gluing their hands to the pen of a plotter to increase their drawing speed. How did we end up like this in programming?
Update on my comment from yesterday: it seems I fell for satire (?). (I don't know the people involved, so no idea, but it seems plausible.)