  • Are there any occupations you uniquely oppose the existence of?
  • Gas-filler. There are a couple of states in the US where you aren't allowed to pump your own gas; someone else has to do it for you, and you're then expected to tip them.

    The job is essentially getting me to pay to be inconvenienced. I'd rather pay to be allowed to pump my own gas.

  • AI hallucinations are impossible to eradicate — but a recent, embarrassing malfunction from one of China’s biggest tech firms shows how they can be much more damaging there than in other countries
  • I think to some extent it's a matter of scale, though. If I advertise something as a calculator capable of doing all math, and it can only do one problem, it is so drastically far away from its intended purpose that the meaning kinda breaks down. I don't think it would be wrong to say "it malfunctions in 99.999999% of use cases" but it would be easier to say that it just doesn't work.

    Continuing (and torturing) that analogy: if we did the disgusting work of precomputing all two-number math problems for integers from -1,000,000 to 1,000,000, I think you could say you had a (really shitty and slow) calculator, one which "malfunctions" for numbers outside that range if you don't specify the limitation ahead of time (there's a toy sketch of this at the end of this comment). That's not crazy different from software which has issues with max_int or small buffers.

    If there had only ever been one instance of an LLM hallucinating, I think we could pretty safely call that a malfunction (and we wouldn't be having this conversation). If it happens 0.000001% of the time, I think we could still call it a malfunction and say that it performs better than a lot of software. If it happens 99.999% of the time, it'd be better to say that it just doesn't work. I don't think there is, or even needs to be, some unified understanding of where the line is between them.

    Really, my point is that there are enough things to criticize about LLMs and people's use of them that this seems like a really silly one to try and push.
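
    To make that concrete, here's a toy sketch of the precomputed "calculator" (Python; the tiny range and the addition-only table are my own stand-ins so it actually runs, not anything real):

        # Toy version of the precomputed "calculator": it only knows answers it
        # computed ahead of time and simply breaks outside that range.
        LIMIT = 100  # stand-in for the 1,000,000 in the analogy, so this runs quickly

        # Precompute every two-number sum for integers in [-LIMIT, LIMIT].
        TABLE = {
            (a, b): a + b
            for a in range(-LIMIT, LIMIT + 1)
            for b in range(-LIMIT, LIMIT + 1)
        }

        def add(a: int, b: int) -> int:
            """'Calculates' a + b by looking up the precomputed answer."""
            if (a, b) not in TABLE:
                # Outside the precomputed range the "calculator" falls over, much
                # like software that misbehaves past max_int or a too-small buffer.
                raise ValueError(f"can't add {a} and {b}: outside the supported range")
            return TABLE[(a, b)]

        print(add(7, 35))         # works: within the precomputed range
        print(add(7, 1_000_000))  # "malfunctions": the limitation was never advertised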

  • AI hallucinations are impossible to eradicate — but a recent, embarrassing malfunction from one of China’s biggest tech firms shows how they can be much more damaging there than in other countries
  • We're talking about the meaning of "malfunction" here; we don't need to overthink it and construct a rigorous proof or anything. The creator of the thing can decide what the thing they're creating is supposed to do. You can say

    hey, it did X, was that supposed to happen?

    no, it was not supposed to do that, that's a malfunction.

    We don't need to go to

    Actually you never sufficiently defined its function to cover all cases in an objective manner, so ACTUALLY it's not a malfunction!

    Whatever, it still wasn't supposed to do that

  • AI hallucinations are impossible to eradicate — but a recent, embarrassing malfunction from one of China’s biggest tech firms shows how they can be much more damaging there than in other countries
  • "The purpose of an LLM, at a fundamental level, is to approximate text it was trained on."

    I'd argue that's what an LLM is, not its purpose. Continuing the car analogy, that's like saying a car's purpose is to burn gasoline to spin its wheels. That's what a car does; the purpose of my car is to get me from place to place. The purpose of my friend's car is to look cool and go fast. The purpose of my uncle's car is to carry lumber. (A toy sketch of what that looks like mechanically is at the end of this comment.)

    I think we more or less agree on the fundamentals, and it's really just a difference in whether we're referring to a malfunction in the system they're trying to create, in which an LLM is a key tool/component, or a malfunction in the LLM itself. At the end of the day, I think we can all agree that it did a thing they didn't want it to do, and that an LLM by itself may not be the correct tool for the job.
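
    For anyone who wants the "what an LLM is" part made concrete, here's a toy sketch of chaining tokens through weighted next-token choices (the vocabulary and weights are invented purely for illustration; real models learn theirs from training text and are vastly larger):

        import random

        # Invented next-token weights; a real model learns these from its training text.
        NEXT_TOKEN_WEIGHTS = {
            "the": {"car": 0.5, "engine": 0.3, "lumber": 0.2},
            "car": {"goes": 0.6, "looks": 0.4},
            "engine": {"goes": 1.0},
            "lumber": {"goes": 1.0},
            "goes": {"fast": 0.7, "far": 0.3},
            "looks": {"cool": 1.0},
        }

        def generate(start: str, steps: int) -> list[str]:
            """Chain tokens by repeatedly sampling from the weighted choices."""
            out = [start]
            for _ in range(steps):
                choices = NEXT_TOKEN_WEIGHTS.get(out[-1])
                if not choices:  # no known continuation for this token
                    break
                out.append(random.choices(list(choices), weights=list(choices.values()))[0])
            return out

        print(" ".join(generate("the", 3)))  # e.g. "the car goes fast"

    That's the mechanism; whether the resulting text serves any particular purpose is a separate question, which is really the distinction being drawn above.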

  • AI hallucinations are impossible to eradicate — but a recent, embarrassing malfunction from one of China’s biggest tech firms shows how they can be much more damaging there than in other countries
  • Where I don't think your argument holds up is that it could be applied to anything an LLM currently does. If I have an insufficiently trained model which produces word salad in response to every prompt, one could still say "that's not a malfunction, it's still applying weights."

    The function is having a system that produces useful results. An LLM is just the means of achieving that result, and you could argue it's the wrong tool for the job, and that's fine. If I put gasoline in my diesel car and the engine dies, I can still say the car is malfunctioning. It's my fault, and the engine was never supposed to have gas in it, but the car is now "failing to function in a normal or satisfactory manner," which is the definition of malfunction.

  • AI hallucinations are impossible to eradicate — but a recent, embarrassing malfunction from one of China’s biggest tech firms shows how they can be much more damaging there than in other countries
  • "It implies that, under the hood, the LLM is 'malfunctioning'. It is not - it's doing what it is supposed to do, to chain tokens through weighted probabilities."

    I don't really agree with that argument. By that logic, there's really no such thing as a software bug, since the software is always doing what it's supposed to be doing: executing predefined instructions on a processor that performs some action. It's "supposed to" provide a useful response to prompts; anything other than that is not what it should be doing and could fairly be called a malfunction (see the toy example below).
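
    As a throwaway illustration (hypothetical code, not from anything real): the function below does exactly what its instructions say, yet everyone would still call the result a bug, because "what it was written to do" and "what it was supposed to do" aren't the same thing.

        def average(values: list[float]) -> float:
            """Intended to return the mean of `values`."""
            total = 0.0
            for v in values:
                total += v
            # Off-by-one: divides by one more than the number of values. The
            # processor faithfully executes exactly these instructions, but the
            # function still fails to do what it was supposed to do.
            return total / (len(values) + 1)

        print(average([2.0, 4.0, 6.0]))  # prints 3.0; the intended answer is 4.0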

  • How good is the Steam Deck really? (Not a gamer)
  • I've definitely gone too far with that, but I kind of enjoy it. The sheer number of options, particularly being able to map a button to move the mouse somewhere, click, and move back, has made some games feel like they have native controller support when they don't.

  • How do I convince an AI apologist?
  • Then I would steer away from arguments that are more debatable, stick to ones that are more robust, focus on the present and future rather than the past, and avoid anything that can get mired in debate. I'd focus on what the specific problem is (we will have fewer artists due to competition with AI), why it's a problem (cultural stagnation, lack of new inspiration for new ideas), and why alternatives to regulation wouldn't work (e.g., would socializing artistic fields help, given they'd no longer be subject to market forces?).

  • How do you feel about shopping in stores?
  • I've heard the sentiment that change and convenience are killing society before, and I'm sure I'll hear it again. I prefer to shop online. I get no sense of community from stores where every interaction has a hanging financial incentive around it, I get it from local organized runs, other frequent visitors of the dog park, etc. To me, that line of reasoning feels almost like lamenting how good the pipes in your house are, because you don't need to call a plumber and get to interact with them.

    Shopping online gives me more options, more reviews, and easier ways to look up additional technical details without feeling weird about taking up space in an aisle while researching on my phone. It's also more efficient in terms of total driving; one person making deliveries for everyone in a neighborhood requires less total driving than all of those people making individual trips to a store. And it frees up more time for me to do the things I actually want to do with the people I enjoy.

  • Dragon's Dogma 2 MTX

    So there's obviously been a lot of discourse already on DD2's microtransactions, and I'm curious to get the thoughts of people here.

    I haven't played the game yet, but the consensus I've gotten is that the MTXs are largely meaningless because they're so easy to get in-game, but if they weren't so easy to get they would be outrageous. It seems there's some amount of counter-backlash defending the game saying that those who are upset just don't understand how easy it is to get those things in-game.

    Personally, I don't think Capcom is dumb; my money would be on them wanting to test the waters to see how players respond to these types of transactions, or on them planning to (quietly) adjust how easy those items are to get in-game later on.

  • Yahtzee Best, Worst, and Blandest Games of 2023

    Formerly Zero Punctuation for the Escapist, now Fully Ramblomatic for Second Wind.

  • How outside zone really works

    Long-form, but good video
