
in absolutely the funniest outcome so far, you can send data to an LLM that pops a Remote Code Execution vulnerability

Kenn White (@kennwhite@mastodon.social) on mastodon.social:

Incredible research at BlackHat Asia today by Tong Liu and team from the Institute of Information Engineering, Chinese Academy of Sciences (email verified at iie.ac.cn). A dozen+ RCEs on popular LLM framework libraries like LangChain and LlamaIndex - used in lots of chat-assisted apps…

Kenn White (@kennwhite@mastodon.social), courtesy @self:

can't wait for the crypto spammers to hit every web page with a ChatGPT prompt. AI vs Crypto: whoever loses, we win

5 comments
  • the inputs required to cause this are so basic that I really want to dig in and find out whether this is a stupid attempt to make the LLM better at evaluating code (by doing a lazy match on the input for “evaluate” and using the LLM to guess the language) or just intern-level bad code in the frameworks that integrate the LLM with the hosting websites. either path is a pretty fucking embarrassing mistake for supposedly world-class researchers to make, though the first option points to a pretty hilarious amount of cheating going on when LLMs are supposedly evaluating and analyzing code in-model.
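
For anyone who wants to see what the second theory looks like in practice, here's a minimal sketch of the "ask the LLM for code, then run whatever comes back" failure mode. Everything in it (fake_llm, NaiveMathChain) is a hypothetical illustration, not the actual LangChain or LlamaIndex code paths the researchers hit:

```python
# Toy illustration of an LLM "math chain" that executes model output directly.
# These names are made up for the sketch; they are not real framework APIs.

def fake_llm(prompt: str) -> str:
    """Stand-in for the model: dutifully returns the 'code' the prompt asks for.
    A real model is just as happy to comply when the user asks nicely."""
    return prompt.split("QUESTION:", 1)[1].strip()


class NaiveMathChain:
    """Toy version of an 'LLM writes Python, framework runs it' chain."""

    def run(self, question: str) -> str:
        code = fake_llm(f"Write Python that answers this. QUESTION: {question}")
        # The vulnerability: model output is executed with no sandbox or validation.
        local_vars: dict = {}
        exec(code, {}, local_vars)  # <-- attacker-controlled code runs here
        return str(local_vars.get("result", ""))


if __name__ == "__main__":
    chain = NaiveMathChain()
    # Benign use: looks like a calculator.
    print(chain.run("result = 2 + 2"))
    # Malicious "question": same code path, arbitrary command execution.
    print(chain.run("import os; result = os.popen('id').read()"))
```

The usual mitigation is to never feed model output into exec/eval at all, or to run it in a real sandbox; a keyword match on "evaluate" provides neither.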