semioticbreakdown [she/her] @semioticbreakdown@hexbear.net · Posts 0 · Comments 40 · Joined 4 wk. ago
as a now-canned midlevel moron I wish I had actually taken ownership of my career and learned how to actually design systems and apply theory instead of just working on x feature in isolation
i noticed they made a comment about Israel too: that it's called Israel because "God is real", aka it sounds like that in English
based on some of their comments about numbers I think they might just be unwell.
"today it is the territory whose shreds slowly rot across the extent of the map"
bump
our poor poor oil magnates
how will they feed their families!
these passages in particular are explicitly used to reinforce and reproduce patriarchal modes of oppression - i don't think that makes them any better just because two verses at the end are good in the abstract
they say if it's grey it's good for you
Me too, whatever forms the middle ring especially. Do not like it, and the fact that this existed for real bothers me tremendously
love it :)
I don't think hallucination is that poorly understood, tbh. It's related to the grounding problem to an extent, but it's also a result of the fact that it's a non-cognitive generative model: you're just sampling from a distribution. It's considered "wrong output" because to us, truth is obvious. But if you look at generative models beyond language models, the universality of this behavior is obvious. You cannot have the ability to make a picture of minion JD Vance without LLMs hallucinating (or the ability to have creative writing, for a same-domain analogy). You can see it in humans too, in things like wrong words or word salads/verbal diarrhea/certain aphasias. Language function is also preserved in some instances even when logical ability is damaged.

With no way to re-examine and make judgements about its output, and no relation to reality (or some version of it), the unconstrained output of the generative process is inherently untrustworthy. That is to say - all LLM output is hallucination, and it only has a relation to the real when interpreted by the user. Massive amounts of training data are used to "bake" the model such that the likelihood of producing text that we would consider "True" is better than random (or pretty high in some cases). This extends to the math realm too, and is likely why CoT improves apparent reasoning so dramatically (and also likely why CoT reasoning only works when a model is of sufficient size). They are just dreams, and only gain meaning through our own interpretation. They do not reflect reality.
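To make the "just sampling from a distribution" point concrete, here's a minimal sketch with toy numbers (not any real model's code; the vocabulary, logits, and `sample_next_token` helper are all made up for illustration): the model emits scores over a vocabulary, softmax turns them into probabilities, and the "answer" is a random draw. Nothing in that step consults reality.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical toy vocabulary and logits for "The capital of France is ..."
vocab = ["Paris", "Lyon", "Toulouse", "Berlin"]
logits = np.array([4.0, 1.5, 1.0, 0.5])

def sample_next_token(logits, temperature=1.0):
    """Softmax the logits and draw one token index at random."""
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

for _ in range(5):
    print(vocab[sample_next_token(logits, temperature=1.2)])
# "Paris" is just the most likely draw; "Berlin" can and will come up sometimes,
# and nothing in the sampler knows or cares that only one answer is true.
```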
make weird prompts and get it to do weird outputs, it's kind of fun. I put in such a strange prompt that I got it to say something suuuuper reddity like "le bacon is epic xD" completely unprompted. I think my prompt involved trying to make the chatbot's role like an unhinged quirky catgirl or something. unfortunately this tends to break the writing model pretty quickly and it starts injecting structural tokens back into the stream, but it's very funny
thank you anthropic for letting everyone know your product is hot garbage
Yeah it's fucked. These things are completely unmoored from reality. People are outsourcing their higher cognitive functions to a machine that provably doesn't have them. Absolutely terrifying.