I Tested the Limits of Azure OpenAI Safety - Here's What Happened
I ran a mini red-team exercise against Azure OpenAI, throwing common jailbreak tactics and prompt injection attacks at it. Microsoft Defender for Cloud caught nearly everything. The key was a feature called User Prompt Evidence, which turned vague alerts into precise, real-time context. If you're running AI workloads in Azure and haven't turned it on yet, you're playing defense blindfolded.