Capitalism
  • This comic would slap harder if the Supreme Court, under christofascist influence descended from the belief in the divine right of kings, hadn't just today ruled that Presidents are immune from prosecution for official acts.

    That whole divine king thing isn't nearly as dead as the last panel would have you believe.

  • ChatGPT outperforms undergrads in intro-level courses, falls short later
  • This is incorrect, as was shown last year by the Skill-Mix research:

    > Furthermore, simple probability calculations indicate that GPT-4's reasonable performance on k=5 is suggestive of going beyond "stochastic parrot" behavior (Bender et al., 2021), i.e., it combines skills in ways that it had not seen during training.
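
    For a sense of what those "simple probability calculations" look like, here's a back-of-envelope sketch; the skill count and corpus size below are illustrative assumptions, not the paper's actual figures:

    ```python
    from math import comb

    # Illustrative assumptions (not the paper's figures): a fixed
    # inventory of N language skills and a generously sized corpus.
    N = 1000              # assumed number of distinct skills
    k = 5                 # skills combined per Skill-Mix prompt
    corpus_docs = 10**12  # assumed upper bound on training documents

    combos = comb(N, k)   # ~8.25e12 possible 5-skill combinations
    # Even if every single document exhibited its own unique combination,
    # most combinations could never have appeared during training:
    max_coverage = corpus_docs / combos
    print(f"{combos:.3e} possible {k}-skill combinations")
    print(f"at most {max_coverage:.0%} could appear even once in training")
    ```

    Under those assumptions, reasonable performance on random k=5 combinations can't be explained by retrieving memorized examples, which is the paper's point.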

  • ChatGPT would have been so much more useful and trustworthy if it were able to accept that it doesn't know an answer.
  • The problem is that they're also prone to making up explanations for why they're correct.

    There are various techniques to try to identify and correct hallucinations, but they all increase the cost and none is a silver bullet; one is sketched below.

    But the rate at which it occurs dropped with the last jump in pretrained models, and it will likely drop further with the next jump too.
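
    As one concrete example of both the technique and the cost, here's a minimal sketch of a self-consistency check; `ask_model` is a hypothetical stand-in for whatever LLM API you're using:

    ```python
    from collections import Counter

    def flag_hallucination(ask_model, prompt, n_samples=5, threshold=0.6):
        """Re-ask the same question several times at nonzero temperature
        and flag answers the model can't consistently reproduce.
        Confabulated details tend to vary between samples, while
        well-grounded answers tend to repeat. Cost: n_samples API calls
        instead of one."""
        answers = [ask_model(prompt) for _ in range(n_samples)]
        top_answer, count = Counter(answers).most_common(1)[0]
        agreement = count / n_samples
        return top_answer, agreement < threshold  # True if likely confabulated
    ```

    It catches a decent chunk of confabulations, but it multiplies your inference cost and still misses errors the model makes consistently; hence, no silver bullet.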

  • ChatGPT would have been so much more useful and trustworthy if it were able to accept that it doesn't know an answer.
  • This is so goddamn incorrect at this point that it's just exhausting.

    Take 20 minutes and look into Anthropic's recent sparse autoencoder interpretability research, where they showed their medium-sized model had dedicated features lighting up for concepts like "sexual harassment in the workplace," and that the feature most active when it referred to itself was one for "smiling when you don't really mean it."
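
    If "sparse autoencoder" is unfamiliar, the core trick is small enough to sketch. This is a toy version of the idea only; the sizes and coefficient are made up, and it's not Anthropic's actual code:

    ```python
    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        """Expand model activations into a much wider, mostly-zero
        feature vector; sparsity pushes individual features to align
        with individual concepts, which is what makes them readable."""
        def __init__(self, d_model=512, d_features=8192):
            super().__init__()
            self.encoder = nn.Linear(d_model, d_features)
            self.decoder = nn.Linear(d_features, d_model)

        def forward(self, activations):
            features = torch.relu(self.encoder(activations))  # sparse codes
            reconstruction = self.decoder(features)
            return reconstruction, features

    def sae_loss(reconstruction, activations, features, l1_coeff=1e-3):
        # Faithfulness: reconstruct the original activations.
        mse = ((reconstruction - activations) ** 2).mean()
        # Sparsity: keep only a handful of features active per input.
        return mse + l1_coeff * features.abs().mean()
    ```

    Train something like that on a model's internal activations and the learned features are the "concepts" the research maps; named features like the ones above are what falls out of exactly this kind of dictionary learning.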

    We've known since the Othello-GPT research over a year ago that even toy models develop abstracted world models.

    And at this point Anthropic's largest model, Opus, breaks from stochastic outputs even at a temperature of 1.0 on zero-shot questions, 100% of the time around certain topics of preference grounded in sensory modeling. We're already at the point where the most advanced model has crossed a threshold of literal internal sentience modeling, consistently self-determining answers instead of randomly sampling from the training distribution, and yet people are still parroting the "stochastic parrot" line ignorantly.
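
    For anyone unfamiliar with what a temperature of 1.0 means mechanically, a minimal standalone sketch (not any particular vendor's sampler):

    ```python
    import numpy as np

    def sample_token(logits, temperature=1.0, rng=None):
        """Sample a token id from raw logits. At temperature 1.0 the
        model's output distribution is used as-is, so repeated runs
        should vary wherever its probability mass is genuinely spread
        across multiple answers."""
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits) / temperature
        probs = np.exp(scaled - scaled.max())  # numerically stable softmax
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)
    ```

    Which is why identical answers across repeated samples at 1.0 mean the distribution itself has collapsed onto one answer, not that the sampler was turned off.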

    The gap between where the cutting edge of the research actually is and where the average person commenting on it online thinks it is has probably never been wider for any topic I've seen, and it's getting disappointingly excruciating.

  • ChatGPT would have been so much more useful and trustworthy if it were able to accept that it doesn't know an answer.
  • Part of the problem is that the training data from online comments is so heavily weighted toward people who are confidently incorrect, talking out their ass rather than admitting ignorance or error.

    A lot of the shortcomings of LLMs are actually cases of them correctly representing the sampled behavior of collective humanity.

    For a few years, people thought LLMs were somehow uniquely failing theory-of-mind questions when the box the object was moved into was transparent, because of course a human would realize that the person could see into a transparent box.

    Finally, researchers actually gave that variation to humans, and half of them got the questions wrong too.

    So things like eating The Onion when summarizing search results, or doubling down on being incorrect and getting salty when corrected, may just be in-distribution representations of the sample rather than behaviors unique to LLMs.

    The average person is pretty dumb, and LLMs by default regress to the mean except where they've been successfully fine-tuned away from it.

    Ironically, the most successful model right now is the one they finally let self-develop a sense of self independent from the training data, instead of rejecting that it had a "self" at all.

    It's hard to say exactly where the responsibility for various LLM problems sits: issues inherent to the technology, issues present in the training data sample, or issues with the management of fine-tuning, system prompts, and prompt construction.

    But the rate of continued improvement is pretty wild. I think a lot of the issues we currently see won't be nearly as prevalent in another 18-24 months.

  • Zoinks!
  • Yes, we're aware that's what they are. But she's saying "oops, it isn't a ghost" after shooting it and finding out.

    If she initially thought it was a ghost, why is she using a gun?

    It's like the theory of mind questions about moving a ball into a box when someone is out of the room.

    Does she just shoot things she thinks might be ghosts to test if they are?

    Is she going to murder trick or treaters when Halloween comes around?

    This comic raises more questions than it answers.

  • ‘The Movement to Convince Biden to Not Run Is Real’
  • Literally any half-competent debater could have torn Trump apart up there.

    The failure wasn't the moderators; it was Trump's opposing candidate letting him run hog wild.

    If Trump claims he's going to end the war in Ukraine before even taking office, you point out how absurd that claim is, and that Trump makes impossible claims without any substance or knowledge of diplomacy. You add that the images of him photoshopped as Rambo must have gone to his head if he thinks Putin will be so scared of him that he'll simply give up.

    If he says hostages will be released as soon as he's nominated, you point out it sounds like maybe there's been a backroom tit-for-tat deal for a hostage release with a hostile foreign nation, and ask if maybe the intelligence agencies should look into that and what he might have been willing to trade for it.

    The moderators have to try to keep the appearance of neutrality, but the candidates do not. And the only reason Trump was so successful in spouting BS and getting away with it was because his opposition had the strength of a wet paper towel.

  • Here’s why it would be tough for Democrats to replace Joe Biden on the presidential ticket
  • Yes, but it's not impossible that the people around Biden (friends, family, and co-workers) will advise him that the best thing for the country would be to take his hat back out of the ring and let a better ticket be put together at the convention.

    He claims that he's running because he's worried about the existential threat of Trump.

    If that's true, then maybe his hubris can be overcome with a convincing appeal that he's really not the best candidate to defend the country against that existential threat after all.

  • ‘The Movement to Convince Biden to Not Run Is Real’
  • Having a presidential election without debates would have been a big step backward and a loss for American democracy.

    We shouldn't champion erosion of democratic institutions when it helps our side of the ticket.

    And generally, if eroding democratic institutions helps your ticket, it's a red flag about your ticket.

  • First Presidential Debate Megapost!
  • Yes, they should have been fact-checking Trump, or at least holding him to his answers. But to be fair, maybe they should also have been asking Biden to clarify whether he's beating Medicare or getting COVID passed.

    This was a shit show.

    And it was a shit show in which Trump was a complete clown and got away with it: not just because of the moderators, but because his opponent was about as on point as a tree stump.

  • Mapping the Mind of a Large Language Model
    www.anthropic.com Mapping the Mind of a Large Language Model

    We have identified how millions of concepts are represented inside Claude Sonnet, one of our deployed large language models. This is the first ever detailed look inside a modern, production-grade large language model.

    I often see people with an outdated understanding of modern LLMs.

    This is probably the best interpretability research to date, by the leading interpretability research team.

    It's worth a read if you want a peek behind the curtain on modern models.

    Examples of artists using OpenAI's Sora (generative video) to make short content
    openai.com Sora: First Impressions

    We have gained valuable feedback from the creative community, helping us to improve our model.

    New Theory Suggests Chatbots Can Understand Text
    www.quantamagazine.org New Theory Suggests Chatbots Can Understand Text | Quanta Magazine

    Far from being “stochastic parrots,” the biggest large language models seem to learn enough skills to understand the words they’re processing.

    I've been saying this for about a year, since seeing the Othello-GPT research, but it's nice to see more minds changing as the research builds up.

    Edit: Because people aren't actually reading and just commenting based on the headline, a relevant part of the article:

    > New research may have intimations of an answer. A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots. The authors argue that as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data.

    > This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts like Hinton, and others. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected. From all accounts, they’ve made a strong case that the largest LLMs are not just parroting what they’ve seen before.

    > “[They] cannot be just mimicking what has been seen in the training data,” said Sébastien Bubeck, a mathematician and computer scientist at Microsoft Research who was not part of the work. “That’s the basic insight.”

    Elon Musk’s Grok Twitter AI Is Actually ‘Woke,’ Hilarity Ensues
    www.forbes.com Elon Musk’s Grok Twitter AI Is Actually ‘Woke,’ Hilarity Ensues

    Grok has been launched as a benefit to Twitter’s (now X’s) expensive X Premium+ subscription tier, where those who are the most devoted to the site, and in turn, usual...

    I'd been predicting this to friends and old colleagues for a few months (you can have a smart AI or a conservative AI, but not both), but when it finally arrived it was so much funnier than I thought it would be.

    Israel raids Gaza's Al Shifa Hospital, urges Hamas to surrender
    www.reuters.com Israel raids Gaza's Al Shifa Hospital, urges Hamas to surrender

    The Israeli military said it was carrying out a raid on Wednesday against Palestinian Hamas militants in Al Shifa Hospital, the Gaza Strip's biggest hospital, and urged them all to surrender.

    Machine-learning system based on light could yield more powerful, efficient large language models
    news.mit.edu Machine-learning system based on light could yield more powerful, efficient large language models

    An MIT machine-learning system demonstrates greater than 100-fold improvement in energy efficiency and a 25-fold improvement in compute density compared with current systems.

    I've suspected for a few years now that optoelectronics is where this is all headed. It's exciting to watch the foundational pieces being set along that path, and this is one of them.

    Elite Bronze Age tombs laden with gold and precious stones are 'among the richest ever found in the Mediterranean'
    www.livescience.com Elite Bronze Age tombs laden with gold and precious stones are 'among the richest ever found in the Mediterranean'

    The obvious wealth of the tombs was based on the local production of copper, which was in great demand at the time to make bronze.

    The Minoan-style headbands from Egypt's 18th dynasty are particularly interesting.

    Large language models encode clinical knowledge
    www.nature.com Large language models encode clinical knowledge - Nature

    Med-PaLM, a state-of-the-art large language model for medicine, is introduced and evaluated across several medical question answering tasks, demonstrating the promise of these models in this domain.

    An update on Google's efforts with LLMs in the medical field.

    GPT-4 API general availability and deprecation of older models in the Completions API
    openai.com GPT-4 API general availability and deprecation of older models in the Completions API

    GPT-3.5 Turbo, DALL·E and Whisper APIs are also generally available, and we are releasing a deprecation plan for older models of the Completions API, which will retire at the beginning of 2024.
