www.allsides.com
Microsoft knows you love tricking its AI chatbots into doing weird stuff and it’s designing ‘prompt shields’ to stop you
Microsoft Corp. is trying to make it harder for people to trick artificial intelligence chatbots into doing weird things. New safety features are being built into Azure AI Studio which lets developers build customized AI assistants using their own data‚ the Redmond‚ Washington-based company said in a blog post on Thursday. The tools include “prompt shields‚” which are designed to detect and block deliberate attempts — also known as prompt injection attacks or jailbreaks — to make an AI model...