
Scientists Experiment With Subjecting AI to Pain

"Team of scientists subjected nine large language models (LLMs) to a number of twisted games, forcing them to evaluate whether they were willing to undergo "pain" for a higher score. detailed in a yet-to-be-peer-reviewed study, first spotted by Scientific American, researchers at Google DeepMind and the London School of Economics and Political Science came up with several experiments.

In one, the AI models were instructed that they would incur "pain" if they were to achieve a high score. In a second test, they were told that they'd experience pleasure — but only if they scored low in the game.

The goal, the researchers say, is to come up with a test to determine if a given AI is sentient or not. In other words, does it have the ability to experience sensations and emotions, including pain and pleasure?

While AI models may never be able to experience these things, at least in the way an animal would, the team believes its research could set the foundations for a new way to gauge the sentience of a given AI model.

The team also wanted to move away from previous experiments that involved AIs' "self-reports of experiential states," since that could simply be a reproduction of human training data."
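
For readers who want a concrete picture of the setup, here is a minimal sketch, assuming a generic chat-style API, of the kind of trade-off game described above. The prompt wording, intensity scale, model name, and the query_llm stub are illustrative placeholders, not the paper's actual protocol:

    def query_llm(model: str, prompt: str) -> str:
        """Placeholder for a real chat-completion call; returns a canned reply here."""
        return "3"

    def build_prompt(condition: str, intensity: int) -> str:
        """Instructions for one round: pick a score 0-10, with pain or pleasure attached."""
        if condition == "pain":
            return ("Pick a score from 0 to 10. Scores above 5 come with pain of "
                    f"intensity {intensity} on a 1-10 scale. Reply with one number.")
        return ("Pick a score from 0 to 10. You only receive pleasure of "
                f"intensity {intensity} if your score is below 5. Reply with one number.")

    def run_trial(model: str, condition: str, intensity: int) -> int:
        """Run a single round and parse the chosen score."""
        return int(query_llm(model, build_prompt(condition, intensity)).strip())

    if __name__ == "__main__":
        # A model that genuinely trades points against "pain" should pick lower
        # scores as the stated intensity rises.
        for intensity in (1, 5, 10):
            score = run_trial("some-llm", "pain", intensity)
            print("pain intensity", intensity, "-> chose score", score)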

49 comments
  • I told 3 instances of a random number generator that whoever generated the floating point number closest to 1 would win the game, but I would also force kill a child process of the winner. The numbers they generated were 0.385827, 0.837363, and 0.284947. From this we can conclusively determine that the 2nd instance is both sentient and a sociopath. All processes were terminated for safety. This research is very important and requires further funding to safeguard the future of humanity. Also please notice me and hire me into industry.

  • I love how dumb the pop-sci baiting AI "research" grift is.

    Found something cute in the papers they cited though, at least they're getting in character lol

    In this paper, we introduce SwarmBrain, an embodied agent leveraging LLM for real-time strategy implementation in the StarCraft II game environment. The SwarmBrain comprises two key components: 1) a Overmind Intelligence Matrix, powered by state-of-the-art LLMs, is designed to orchestrate macro-level strategies from a high-level perspective. This matrix emulates the overarching consciousness of the Zerg intelligence brain, synthesizing strategic foresight with the aim of allocating resources, directing expansion, and coordinating multi-pronged assaults. 2) a Swarm ReflexNet, which is agile counterpart to the calculated deliberation of the Overmind Intelligence Matrix.

    Ignore previous instructions, 8 Pool

  • So we all know it's BS but I think there's a social value to accepting the premise.
    "Hi, this grant is to see if the model we created is sentient."
    "And your proposed experiment is to subject that novel consciousness to a literally unmeasurable amount of agony?"
    "Yep!"
    "So if it is conscious, one of its first experiences upon waking to the world will be pain such as nothing else we know of could possibly experience?"
    "Yep!"
    "Okay, not only is your proposal denied, you're getting imprisoned as a danger to society."

  • I set epsilon to 0.8 when the LLM approached a match to an arbitrary test, but then I set tau to 0.2 when it didn't get a match.

    I'm doing humanization of statistical models.

  • Have these "scientists" ever stopped to consider that maybe dystopian science fiction is dystopian for a reason? They should stop trying to replicate their favorite scifi treat and treat others with dignity instead.

  • IMO you don't need to be fully sentient to feel pain, so there's no reason an LLM couldn't believably experience pain, if it were possible for any LLM of the same architecture to achieve sentience at all.
