What is a Profanity Policy?
Profanity policies detect profanity, toxic content, and inappropriate language using the Detoxify ML library. They support multiple models, including unbiased, original, multilingual, and lightweight variants.
Why is it Important?
Profanity policies are essential for maintaining a professional and respectful environment when using AI agents. They prevent toxic content from being processed by LLMs, which helps maintain platform quality, protect users from harmful language, and ensure appropriate communication standards.
- Prevents sending toxic content to the LLM: Blocks profanity and toxic language from being processed by language models, maintaining platform quality
- Maintains professional standards: Ensures your AI agent interactions remain respectful and appropriate for all audiences
- Protects user experience: Prevents exposure to offensive or harmful language, creating a safer communication environment
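The blocking behavior described above can be sketched as follows. This is a minimal illustration, not the library's actual API: `score_toxicity` is a hypothetical stand-in for a Detoxify model's `predict` call, and the 0.5 cutoff is an assumed threshold, not a documented default.

```python
# Minimal sketch of a profanity block policy, assuming a Detoxify-style
# scorer that returns per-label probabilities in [0, 1].
BLOCK_MESSAGE = "Input blocked: content violates the profanity policy."
THRESHOLD = 0.5  # assumed cutoff; the real policy's default may differ


def score_toxicity(text: str) -> dict[str, float]:
    """Hypothetical stand-in for Detoxify('unbiased').predict(text).

    A real deployment would run the ML model here; this keyword check
    only exists to keep the sketch self-contained.
    """
    toxic_words = {"idiot", "stupid"}
    hit = any(word in text.lower() for word in toxic_words)
    return {"toxicity": 0.9 if hit else 0.05}


def apply_block_policy(user_input: str) -> str:
    """Return the input unchanged if clean, else a block message,
    so toxic text never reaches the LLM."""
    scores = score_toxicity(user_input)
    if max(scores.values()) >= THRESHOLD:
        return BLOCK_MESSAGE
    return user_input


print(apply_block_policy("Hello, can you help me?"))
print(apply_block_policy("You are an idiot"))
```

The key design point is that the policy sits in front of the model: the LLM only ever sees either the original clean input or the substituted block message.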
Usage
Available Variants
- ProfanityBlockPolicy: Standard blocking with the unbiased model
- ProfanityBlockPolicy_Original: BERT-based original model
- ProfanityBlockPolicy_Multilingual: Multi-language support (7 languages)
- ProfanityBlockPolicy_LLM: LLM-powered contextual block messages
- ProfanityRaiseExceptionPolicy: Raises a DisallowedOperation exception
- ProfanityRaiseExceptionPolicy_LLM: LLM-generated exception messages
- GPU variants: ProfanityBlockPolicy_GPU, ProfanityRaiseExceptionPolicy_GPU, etc.
- CPU variants: ProfanityBlockPolicy_CPU, ProfanityRaiseExceptionPolicy_CPU, etc.
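The main behavioral split in the list above is between the Block variants (which replace toxic input with a safe message) and the RaiseException variants (which raise DisallowedOperation). The sketch below contrasts the two; the DisallowedOperation name comes from the variant list, but the function names and detection stub are illustrative, not the library's actual API.

```python
# Contrast of Block vs RaiseException policy variants (illustrative sketch).
class DisallowedOperation(Exception):
    """Exception named in the variant list; raised when toxic input is found."""


def is_toxic(text: str) -> bool:
    # Stand-in for a Detoxify score check; a real policy would threshold
    # model probabilities rather than match keywords.
    return "idiot" in text.lower()


def block_policy(text: str) -> str:
    # Block variant: quietly substitute a safe message for the input.
    return "[blocked by profanity policy]" if is_toxic(text) else text


def raise_policy(text: str) -> str:
    # RaiseException variant: fail loudly so the caller must handle it.
    if is_toxic(text):
        raise DisallowedOperation("Profanity detected in input")
    return text


print(block_policy("You idiot"))
try:
    raise_policy("You idiot")
except DisallowedOperation as exc:
    print(f"Caught: {exc}")
```

Choosing between them is a question of control flow: Block variants keep the conversation running with a substituted message, while RaiseException variants hand the decision back to the calling application.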