LLM vendors are incredibly bad at responding to security issues
IT consultant Mark Pesce was building an LLM-based similarity finder for a legal client. He discovered a prompt that reliably caused multiple LLMs to go nuts and output complete gibberish: “it desc…
Sloppy LLM programming? Never!
In completely unrelated news I've been staring at this spinner icon for the past five minutes after asking an LLM to output nothing at all:
same energy as “your request could not be processed due to the following error: Success”
What are the chances that the front end was not programmed to handle the LLM returning an empty string?
Quite likely, yeah. There's no way they don't have a timeout on the backend.
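The defensive handling being discussed could be sketched roughly like this: bound the call with a timeout so the UI never spins forever, and treat an empty or whitespace-only reply as a failure. This is a minimal illustration, not the vendor's actual code; `fetch_reply` is a hypothetical stand-in for whatever client API is in use.

```python
# Sketch: wrap an LLM call with a timeout and an empty-reply fallback.
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def get_reply(fetch_reply, timeout_s=10.0, fallback="Sorry, I couldn't produce a reply."):
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fetch_reply)
    try:
        text = future.result(timeout=timeout_s)
    except FutureTimeout:
        return fallback
    finally:
        pool.shutdown(wait=False)  # don't block the caller on a hung request
    if not text or not text.strip():  # the empty-string case the frontend missed
        return fallback
    return text
```

With this in place, `get_reply(lambda: "")` returns the fallback instead of leaving a spinner running forever.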
boooo Gemini now replies "I'm just a language model, so I can't help you with that."
"what would a reply with no text look like?" or similar?
what would a reply with no text look like?
nah it just described what an empty reply might look like in a messaging app
they seem to have done quite well at making Gemini do mundane responses
that's a hilarious response (from it). I perfectly understand how it got there, which makes it even funnier
now I wonder if you could make the LLM fuzz the backend by asking it to reply with some possible attacks.