I built an open-source real-time LLM hallucination guardrail — here are the benchmarks
dev.to
**What is Director-Class AI?** An open-source Python library that guards LLM output in real time. It watches tokens as they stream and halts generation the moment it detects a hallucination. Under the hood it uses NLI (Natural Language Inference, via DeBERTa/FactCG) and optional RAG knowledge grounding to score each claim against source documents.

```bash
pip install director-ai
```

Two-line integration:

```python
import openai
from director_ai import guard

client = guard(openai.OpenAI())  # wraps any OpenAI/Anthropic client
```
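To make the stream-and-halt idea concrete, here is a minimal sketch of how an NLI-style guard loop can work in general. This is illustrative, not Director-Class AI's actual internals: the names `score_claim` and `guarded_stream` are hypothetical, and the entailment scorer is stubbed with word overlap where a real system would run a DeBERTa NLI cross-encoder over (source, claim) pairs.

```python
def score_claim(claim: str, sources: list[str]) -> float:
    """Return a grounding score for `claim` against `sources` in [0, 1].

    Stub: fraction of the claim's words found in the best-matching source.
    A real implementation would take the 'entailment' probability from an
    NLI model instead.
    """
    words = {w.lower().strip(".,") for w in claim.split()}
    return max(
        (len(words & set(s.lower().split())) / max(len(words), 1) for s in sources),
        default=0.0,
    )


def guarded_stream(tokens, sources, threshold=0.5):
    """Yield tokens as they stream; at each sentence boundary, score the
    completed sentence and halt generation if grounding falls below
    `threshold`."""
    sentence = []
    for tok in tokens:
        sentence.append(tok)
        yield tok
        if tok.endswith((".", "!", "?")):  # crude sentence-boundary check
            claim = " ".join(sentence)
            sentence = []
            if score_claim(claim, sources) < threshold:
                yield "[halted: low grounding]"
                return


sources = ["the eiffel tower is in paris france"]
tokens = "The Eiffel Tower is in Paris. It was built on the moon.".split()
print(list(guarded_stream(tokens, sources)))
# the first sentence passes; the second scores low, so the stream halts
```

A real guard would buffer each sentence and check it before releasing it downstream; the sketch checks after emission purely to keep the generator simple.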