Intercept AI outputs before they become actions. Evidence validation, confidence scoring, risk assessment, and policy enforcement for regulated industries.
A comprehensive runtime governance layer built for production environments in regulated industries.
Verify supporting evidence for every AI decision. Ensure outputs are grounded in factual, traceable data before they reach downstream systems.
Quantify certainty levels for every decision. Threshold-based routing ensures low-confidence outputs are flagged or blocked automatically.
Four-tier risk classification from minimal to critical. Each level triggers appropriate review workflows and enforcement actions.
Configurable policy framework with five built-in types. Define custom rules that match your organization's compliance requirements.
Complete decision logs with full traceability. Every interception, evaluation, and action is recorded for compliance and forensic review.
Sub-millisecond evaluation pipeline. Intercept and assess AI outputs in real time without adding perceptible latency to your application.
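The threshold-based routing described above can be sketched as a simple mapping from a confidence score to a firewall action. This is an illustrative standalone function, not the library's API; the function name and threshold values are assumptions:

```python
# Minimal sketch of threshold-based confidence routing.
# Function name and thresholds are illustrative assumptions,
# not the ai-decision-firewall API.

def route_by_confidence(confidence: float,
                        block_below: float = 0.5,
                        escalate_below: float = 0.85) -> str:
    """Map a confidence score to a firewall action."""
    if confidence < block_below:
        return "block"      # too uncertain to act on at all
    if confidence < escalate_below:
        return "escalate"   # route to a human reviewer
    return "allow"          # confident enough to execute

print(route_by_confidence(0.3))   # block
print(route_by_confidence(0.7))   # escalate
print(route_by_confidence(0.95))  # allow
```

Two thresholds give three outcomes, which matches the allow / escalate / block actions used throughout the quick-start example below.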
Every AI output passes through the decision firewall before reaching downstream systems.
Five built-in policy types cover the most common governance requirements for regulated AI systems.
Block decisions below minimum confidence levels
Mandate supporting evidence for all outputs
Set maximum acceptable risk levels per domain
Escalate high-risk decisions to human reviewers
Limit AI actions to authorized domains only
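A policy chain like the five above is typically evaluated in order, stopping at the first policy that does not allow the decision. Here is a self-contained sketch of that pattern; the `PolicyResult` shape and the two toy policies are illustrative assumptions, not the library's implementation:

```python
# Self-contained sketch of first-failure policy-chain evaluation.
# The PolicyResult shape and toy policies are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PolicyResult:
    action: str        # "allow", "escalate", or "block"
    reason: str = ""

def evaluate_chain(decision: dict,
                   policies: list[Callable[[dict], "PolicyResult"]]) -> PolicyResult:
    """Run each policy in order; the first non-allow result wins."""
    for policy in policies:
        result = policy(decision)
        if result.action != "allow":
            return result
    return PolicyResult("allow")

# Toy policies mirroring the confidence and evidence rules above:
def min_confidence(d: dict) -> PolicyResult:
    if d["confidence"] < 0.85:
        return PolicyResult("block", "low confidence")
    return PolicyResult("allow")

def require_evidence(d: dict) -> PolicyResult:
    if not d["sources"]:
        return PolicyResult("escalate", "no supporting sources")
    return PolicyResult("allow")

decision = {"confidence": 0.9, "sources": []}
print(evaluate_chain(decision, [min_confidence, require_evidence]).action)  # escalate
```

First-failure ordering means the most restrictive applicable policy decides the outcome, and cheap checks can be placed early in the chain.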
Install, configure, and start enforcing governance policies on your AI outputs.
```bash
# Install from PyPI
pip install ai-decision-firewall

# Or install from source
git clone https://github.com/BabyChrist666/ai-decision-firewall.git
cd ai-decision-firewall
pip install -e .
```
```python
from ai_decision_firewall import DecisionFirewall
from ai_decision_firewall.policies import ConfidencePolicy, EvidencePolicy

# Initialize the firewall
firewall = DecisionFirewall(
    policies=[
        ConfidencePolicy(min_confidence=0.85),
        EvidencePolicy(require_sources=True),
    ]
)

# Evaluate an AI decision
result = await firewall.evaluate(
    decision=ai_output,
    context={"domain": "healthcare"}
)

if result.action == "allow":
    execute(ai_output)
elif result.action == "escalate":
    send_to_reviewer(ai_output, result.reason)
else:
    log_blocked(ai_output, result.reason)
```
```python
from ai_decision_firewall.policies import BasePolicy, PolicyResult

class CompliancePolicy(BasePolicy):
    """Custom policy for regulatory compliance."""

    name: str = "compliance_check"
    restricted_terms: list[str] = []

    async def evaluate(self, decision, context) -> PolicyResult:
        # Check for restricted content
        for term in self.restricted_terms:
            if term in decision.content.lower():
                return PolicyResult(
                    action="block",
                    reason=f"Contains restricted term: {term}",
                    risk_level="high"
                )
        return PolicyResult(action="allow")
```
Start enforcing runtime governance on your AI systems today. Open source and built for production.