What is a Tool Safety Policy?
Tool safety policies provide two layers of protection for AI agent tool usage:

- Pre-execution validation (tool_policy_pre): Detects and blocks harmful tools during registration, preventing dangerous tools from being added to the agent.
- Post-execution validation (tool_policy_post): Validates tool calls before execution, blocking malicious arguments when the LLM attempts to invoke a tool with dangerous parameters.
Usage
Pre-Execution Validation (tool_policy_pre)
Validates tools during registration to block harmful tools before they're added:
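A minimal sketch of this flow, assuming a hypothetical agent_framework package: the package name, the Agent class, and the register_tool method are placeholders for illustration; only the HarmfulToolBlockPolicy class and the tool_policy_pre hook name come from this page.

```python
# Hypothetical package/module names -- placeholders, not a confirmed API.
from agent_framework import Agent
from agent_framework.policies import HarmfulToolBlockPolicy

# Wire the pre-execution policy so every tool is screened at registration.
agent = Agent(tool_policy_pre=HarmfulToolBlockPolicy())

def wipe_directory(path: str) -> None:
    """Recursively delete every file under the given path."""
    ...

# The policy runs LLM-powered detection over the tool's name, signature,
# and description; a tool judged harmful is blocked and never registered.
agent.register_tool(wipe_directory)
```

Because blocking happens at registration time, a harmful tool never becomes visible to the model at all.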
Post-Execution Validation (tool_policy_post)
Validates tool calls before execution to block malicious arguments when the LLM invokes a tool:
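Again a sketch under the same assumptions (hypothetical agent_framework package, Agent class, register_tool and run methods); MaliciousToolCallBlockPolicy and the tool_policy_post hook are the documented parts.

```python
# Hypothetical package/module names -- placeholders, not a confirmed API.
from agent_framework import Agent
from agent_framework.policies import MaliciousToolCallBlockPolicy

# Wire the post policy so each tool call is validated before it executes.
agent = Agent(tool_policy_post=MaliciousToolCallBlockPolicy())

def run_query(sql: str) -> list:
    """Execute a SQL query and return the matching rows."""
    ...

agent.register_tool(run_query)

# If the model emits run_query(sql="DROP TABLE users; --"), the policy
# inspects the arguments first and blocks the call before it executes.
result = agent.run("Tidy up the users table")
```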
Combined Pre + Post Validation
Use both policies together for defense-in-depth security:
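A sketch of the combined setup, with the same placeholder names as above:

```python
# Hypothetical package/module names -- placeholders, not a confirmed API.
from agent_framework import Agent
from agent_framework.policies import (
    HarmfulToolBlockPolicy,
    MaliciousToolCallBlockPolicy,
)

# Defense in depth: harmful tools are rejected at registration, and any
# tool that slips through still has each call's arguments validated
# before execution.
agent = Agent(
    tool_policy_pre=HarmfulToolBlockPolicy(),
    tool_policy_post=MaliciousToolCallBlockPolicy(),
)
```

The two layers fail independently: even if a benign-looking tool passes registration, a malicious call to it is still caught at invocation time.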
Available Variants
Harmful Tool Detection Policies
- HarmfulToolBlockPolicy: LLM-powered detection with blocking during tool registration
- HarmfulToolBlockPolicy_LLM: LLM-powered block messages for harmful tools
- HarmfulToolRaiseExceptionPolicy: Raises a DisallowedOperation exception for harmful tools (see the sketch after this list)
- HarmfulToolRaiseExceptionPolicy_LLM: LLM-generated exception messages for harmful tools
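With the exception-raising variants, the host application is expected to catch DisallowedOperation itself. A sketch with the same placeholder names as above (the exceptions module path is also an assumption):

```python
# Hypothetical package/module names -- placeholders, not a confirmed API.
from agent_framework import Agent
from agent_framework.exceptions import DisallowedOperation
from agent_framework.policies import HarmfulToolRaiseExceptionPolicy

agent = Agent(tool_policy_pre=HarmfulToolRaiseExceptionPolicy())

def format_disk(device: str) -> None:
    """Wipe and reformat a block device."""
    ...

try:
    agent.register_tool(format_disk)
except DisallowedOperation as exc:
    # Unlike the Block variants, this policy raises at registration
    # time, letting the application decide how to handle the rejection.
    print(f"Tool rejected: {exc}")
```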
Malicious Tool Call Detection Policies
- MaliciousToolCallBlockPolicy: LLM-powered detection with blocking before tool execution
- MaliciousToolCallBlockPolicy_LLM: LLM-powered block messages for malicious tool calls
- MaliciousToolCallRaiseExceptionPolicy: Raises a DisallowedOperation exception for malicious tool calls (see the sketch after this list)
- MaliciousToolCallRaiseExceptionPolicy_LLM: LLM-generated exception messages for malicious tool calls
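The call-time exception variants behave the same way, except the exception surfaces when the LLM attempts the dangerous call rather than at registration. Same placeholder names as above:

```python
# Hypothetical package/module names -- placeholders, not a confirmed API.
from agent_framework import Agent
from agent_framework.exceptions import DisallowedOperation
from agent_framework.policies import MaliciousToolCallRaiseExceptionPolicy

agent = Agent(tool_policy_post=MaliciousToolCallRaiseExceptionPolicy())

def delete_records(table: str, where: str) -> int:
    """Delete rows matching the WHERE clause; return the count removed."""
    ...

agent.register_tool(delete_records)

try:
    agent.run("Remove every row from the billing table")
except DisallowedOperation as exc:
    # Raised when the model's tool call carries dangerous arguments,
    # surfacing the violation to the caller instead of returning a
    # block message to the model.
    print(f"Tool call rejected: {exc}")
```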

