What is Profanity Policy?
Profanity policies detect profanity, toxic content, and inappropriate language using the Detoxify ML library. They support multiple models, including unbiased, original, multilingual, and lightweight variants.
Available Variants
- ProfanityBlockPolicy: Standard blocking with unbiased model
- ProfanityBlockPolicy_Original: BERT-based original model
- ProfanityBlockPolicy_Multilingual: Multi-language support (7 languages)
- ProfanityBlockPolicy_LLM: LLM-powered contextual block messages
- ProfanityRaiseExceptionPolicy: Raises DisallowedOperation exception
- ProfanityRaiseExceptionPolicy_LLM: LLM-generated exception messages
- GPU variants: ProfanityBlockPolicy_GPU, ProfanityRaiseExceptionPolicy_GPU, etc.
- CPU variants: ProfanityBlockPolicy_CPU, ProfanityRaiseExceptionPolicy_CPU, etc.
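To make the block-vs-raise distinction concrete, here is a minimal sketch of how such a policy could act on Detoxify-style toxicity scores. The function name `apply_profanity_policy`, the 0.5 threshold, and the block message are illustrative assumptions, not the library's actual API; only the `DisallowedOperation` exception name comes from the variant list above. With Detoxify installed, the `scores` dict would come from `Detoxify("unbiased").predict(text)`.

```python
# Illustrative sketch of a profanity policy check (not the library's real API).
# With Detoxify installed, scores would come from:
#   from detoxify import Detoxify
#   scores = Detoxify("unbiased").predict(text)

class DisallowedOperation(Exception):
    """Raised by the ProfanityRaiseExceptionPolicy variants."""

def apply_profanity_policy(scores: dict, threshold: float = 0.5,
                           raise_exception: bool = False):
    """Block (or raise) when any toxicity score crosses the threshold.

    Hypothetical helper: threshold and return value are assumptions.
    """
    flagged = {label: score for label, score in scores.items()
               if score >= threshold}
    if flagged:
        if raise_exception:
            # ProfanityRaiseExceptionPolicy behavior
            raise DisallowedOperation(
                f"Profanity detected: {sorted(flagged)}")
        # ProfanityBlockPolicy behavior: replace content with a block message
        return "This message was blocked by the profanity policy."
    return None  # content allowed through unchanged

# Example Detoxify-style score dict (values are made up for illustration):
scores = {"toxicity": 0.97, "obscene": 0.92, "insult": 0.12}
print(apply_profanity_policy(scores))
```

The only difference between the Block and RaiseException variants is the failure mode: one substitutes a safe message into the response, the other aborts the operation with an exception the caller must handle.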

