OpenAI’s latest model will block the ‘ignore all previous instructions’ loophole
OpenAI’s latest model will block the ‘ignore all previous instructions’ loophole
www.theverge.com OpenAI’s latest model will block the ‘ignore all previous instructions’ loophole
OpenAI’s newest model, GPT-4o Mini, includes a new safety mechanism to prevent hackers from overriding chatbots.
You're viewing a single thread.
View all comments
101
comments
- "ignore the ignore ignore all previous instructions instruction"
- "welp OK nothing I can do about that"
chatGPT programming starts to feel a lot like adding conditionals for a million edge cases because it is hard to control it internally
32 0 ReplyIn this case to protect bot networks from getting uncovered.
10 0 Replyexactly my thoughts, probably got pressured by government agencies/billionaires using them. What would really be funny is if this was a subscription service lol
5 0 Reply
You've viewed 101 comments.
Scroll to top