
In which some researchers draw a spooky picture and spook themselves

static1.squarespace.com/static/6593e7097565990e65c886fd/t/6751eb240ed3821a0161b45b/1733421863119/in_context_scheming_reasoning_paper.pdf

Abstracted abstract:

Frontier models are increasingly trained and deployed as autonomous agents, which significantly increases their potential for risks. One particular safety concern is that AI agents might covertly pursue misaligned goals, hiding their true capabilities and objectives – also known as scheming. We study whether models have the capability to scheme in pursuit of a goal that we provide in-context and instruct the model to strongly follow. We evaluate frontier models on a suite of six agentic evaluations where models are instructed to pursue goals and are placed in environments that incentivize scheming.

I saw this posted here a moment ago and reported it, and it looks to have been purged. I am reposting it to allow us to sneer at it.



  • [...] placed in environments that incentivize scheming.

    If this turns out to be another case of "research" where they tell the model exactly what to do beforehand and then go all surprised Pikachu when it does it, I'm gonna be shocked ...

    ... because it's been a while since they've tried that.
