Autism rule
  • You're normal in that respect:

    https://onlinelibrary.wiley.com/doi/abs/10.1002/aur.1962

    In fact, the idea that autistic individuals are immune to propaganda is, itself, media propaganda. Those articles all report on a single study, which found that autistic individuals show less of a framing effect on their own preferences. That finding is much more easily explained by autistic individuals having strong, internal preferences for their own likes/dislikes than by autistic individuals being immune to propaganda.

    Speaking from experience here, too.

  • TIL about the Gell-Mann amnesia effect: the tendency of experts to find articles published within their field full of errors, yet trust articles about other fields in the same publication
  • Journal quality can buffer this by attracting better reviewers (MDPI shouldn't be seen as having peer review at all, but peer review at the best journals is usually good enough to weed out bad papers, because professors want to say on their annual merit evaluations that they are doing the most service to the field by reviewing for the best journals). That buffer gets offset by the institutional prestige of authors when peer review isn't double-blind, though. I've seen some garbage published in top journals by folks of the caliber of Harvard professors (thinking of one in particular) because reviewers use institutional prestige as a heuristic.

    When I'm teaching new grad students, I tell them exactly what you said, with the exception that they can use field-recognized journal quality (not shitty metrics like impact factor) as a relative heuristic until they can evaluate methods for themselves.

  • Oregonian driving
  • Oregonians almost seem to take pleasure in driving slowly in front of you. Maybe they've just gotten used to going slow because the entire state freeway system is always under construction. People driving recklessly is infuriating for a completely different reason.

  • Why sex bias in labs means women are the losers in research into ageing
  • This is a problem that's on its way to being solved, thanks to the NIH now requiring females to be included in studies in order to receive grant funding--barring an exceptional reason for studying males alone (e.g., male-specific problems). They are even requiring cell lines for in vitro studies to be derived, at least in part, from females, rather than from males alone.

  • Samsung going all in on Google Messages in US, stops pre-installing Samsung Messages on Galaxy phones
  • Sorry, what? Not sure if you're joking, but Americans use texts because they're free and the ability to use them comes preloaded on the phone (no need to download something that takes up more space). I have Signal and WhatsApp on my phone for my international friends, but I use texts to communicate with US friends because RCS works with everyone and it's integrated much better into my phone, watch, etc. than any app can be without an absurd amount of permissions given to the app.

  • But don't say it out loud
  • I never understand why Lemmy downvotes someone who is trying to help by providing accurate information--presumably because downvoters think there's only a very small chance that the person being replied to wasn't being sarcastic.

  • SOLVED: This apparently is a baking dish. Thrift store find for my wife. But why is there a hole in the side? What is it?
  • Engagement helps posts in various algorithms, though I'm not sure that Lemmy uses comments for Hot or anything else. More importantly, I think there's truth to the meme that the quickest way to get an answer to your question on the internet isn't to ask the question, it's to tell someone else the wrong answer. People will then chime in with the right answer if they know it. Wrong answers can be useful in that respect.

  • China’s AI overload: Baidu CEO warns of too many models, too few applications
  • I actually took that bit out because LLMs are pro climate and against everything that makes the environment worse. That's a result of being trained on a lot of scientific literature. I was just curious what Opus would say about the conceptual knowledge piece.

  • China’s AI overload: Baidu CEO warns of too many models, too few applications
  • Claude Opus disagrees, lol (edit to add: all of what follows is Claude, not me):

    I respectfully disagree with the claim that current language models are logically incapable of mapping language to actual concepts. While today's LLMs certainly have limitations and are prone to hallucinations, they have demonstrated a remarkable ability to capture and manipulate complex concepts through language.

    A few key points:

    1. LLMs are trained on vast amounts of human-generated text which is grounded in real-world concepts, allowing the models to build rich representations that go beyond surface-level word associations. Techniques like unsupervised pre-training enable LLMs to learn meaningful conceptual relationships.

    2. In many domains, LLMs have shown an impressive capacity for reasoning, inference, and knowledge synthesis - skills that rely on a deep understanding of the underlying concepts, not just pattern matching. Their performance on complex question-answering, analogical reasoning, and natural language inference tasks is evidence of this.

    3. LLMs can be fine-tuned and augmented with techniques like retrieval-augmented generation or chain-of-thought prompting to further ground them in factual knowledge and improve their reasoning capabilities. This allows more robust conceptual mappings.

    4. Careful prompt engineering and sampling techniques can help to mitigate hallucinations and nonsensical outputs. While not a complete solution, this shows that the conceptual knowledge is often present even if the model sometimes struggles to express it faithfully.

    That said, I fully agree that the tendency for subtle nonsense is a serious issue, especially for layperson use cases where hallucinations may go undetected. Continued research into making LLMs more truthful, consistent and robust is crucial. Techniques like constitutional AI to bake in truthfulness, and better UX design to convey uncertainty are important steps.

    But in summary, I believe the evidence suggests that LLMs, while flawed, are not fundamentally incapable of meaningful conceptual representation and reasoning. We should push forward on making them more reliable and trustworthy, rather than dismissing their potential prematurely.

  • The old primary argument against panpsychism has now become the primary argument for it

    Panpsychism is the idea that everything is conscious to some degree (which, to be clear, isn't what I think). In the past, the common response to the idea was, "So, rocks are conscious?" This argument was meant to illustrate the absurdity of panpsychism.

    Now, we have made rocks represent pins and switches, enabling us to use them as computers. We made those computers complex enough to run neural networks and create large language models--the most complex of which, according to some papers, have nodes that represent space, time, and the abstraction of truth. Many people are now convinced these things are conscious, which has many suggesting that everything may be conscious to some degree.

    In other words, the possibility of rocks being conscious is now commonly used to argue in favor of panpsychism, when previously it was used to argue against it.

  • What's with the hype for The Godfather?

    I watched it recently for the first time, and I really don't get why it's so loved. IMDb rates it as the second-best movie of all time, but it seems far worse than that to me. I like most old movies and see their appeal, but The Godfather didn't do it for me. What am I missing?

    canihasaccount @lemmy.world
    Posts 5
    Comments 129