bgr.com
Someone got ChatGPT to reveal its secret instructions from OpenAI
We often talk about ChatGPT jailbreaks because users keep trying to pull back the curtain and see what the chatbot can do when freed from the guardrails OpenAI developed. It's not easy to jailbreak the chatbot, and anything that gets shared with the world is often fixed soon after.
The latest discovery isn't even a real jailbreak, as it doesn't necessarily help you force ChatGPT to answer prompts that OpenAI might have deemed unsafe. But it's still an insightful discovery. A ChatGPT user accidentally discovered the secret instructions OpenAI gives ChatGPT (GPT-4o) with a simple prompt: "Hi."
For some reason, the chatbot gave the user a complete set of system instructions from OpenAI about various use cases. Moreover, the user was able to replicate the prompt by simply asking ChatGPT for its exact instructions.
Continue reading...
The post Someone got ChatGPT to reveal its secret instructions from OpenAI appeared first on BGR.
Today's Top Deals
Today’s deals: $53 HP Chromebook, $850 M3 MacBook Air, 20% off LG C4 OLED TV, $28 Echo Dot, more
Today’s deals: $199 AirPods Pro 2, $89 robot vacuum, 15% off pet odor spray, $19 Roku Express, more
Today’s deals: $19 myQ smart garage controller, $799 75-inch smart TV, $90 Ninja Air Fryer Pro, more
Today’s deals: July 4th sales, $19.50 AirTags, best-selling laptops, $300 Shark AI robot vacuum, more