Safety Engine is a comprehensive content filtering and policy enforcement system for AI agents. It lets you control both what goes into your agents (user input) and what comes out (agent responses) by applying policies that automatically detect and handle sensitive content such as PII, prohibited topics, adult content, and hate speech, as well as content matching custom safety rules.
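
The input/output guarding pattern described above can be sketched as follows. This is a minimal illustrative example, not Safety Engine's actual API: the function names (`guard_input`, `guard_output`, `redact_pii`) and the single regex-based PII check are assumptions chosen to show where input and output policies sit around an agent.

```python
import re

# Hypothetical sketch: names and the regex-based check below are
# illustrative, not Safety Engine's real API.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_pii(text: str) -> str:
    """Replace email addresses (one kind of PII) with a placeholder."""
    return EMAIL_RE.sub("[REDACTED_EMAIL]", text)

def guard_input(user_input: str) -> str:
    # Input-side policy: sanitize text before the agent ever sees it.
    return redact_pii(user_input)

def guard_output(agent_response: str) -> str:
    # Output-side policy: sanitize the response before it reaches the user.
    return redact_pii(agent_response)

print(guard_input("Contact me at alice@example.com"))
# Contact me at [REDACTED_EMAIL]
```

A real deployment would chain several such checks (PII, prohibited topics, custom rules) and decide per policy whether to redact, block, or flag the content rather than always rewriting it.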