Building an Explainable AI Toolkit for Laravel (Not Just Another ChatGPT Wrapper)


AI is everywhere right now, but most integrations share one big problem:

They give answers, but not explanations.

If you’re building real applications (customer support tools, decision systems, analytics dashboards), that’s a serious limitation.

So I built something to fix that.

The Problem

Most AI integrations in web apps look like this:

$response = AI::ask("Summarize this feedback");

And you get:

“The customer is unhappy and requests a refund.”

But:

  • Why did the system decide that?

  • What signals influenced the output?

  • How confident is it?

  • Can we audit or trace this decision later?

This becomes a huge issue in real-world systems:

  • customer support automation

  • decision workflows

  • enterprise dashboards

  • compliance-sensitive environments

The Idea: Explainable AI for Applications

Instead of just generating responses, what if AI systems could return:

  • structured outputs

  • reasoning / explanation

  • confidence scores

  • decision traces

That’s where explainable AI (XAI) meets backend engineering.

What I Built

I created an open-source Laravel package:

laravel-explainable-ai

GitHub: https://github.com/mukundhan-mohan/laravel-explainable-ai
Packagist: https://packagist.org/packages/mukundhanmohan/laravel-explainable-ai
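
Installation follows the usual Composer flow, using the package name from the Packagist link above:

```shell
composer require mukundhanmohan/laravel-explainable-ai
```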

Features

  • AI integration with clean Laravel API

  • Structured JSON outputs (no messy parsing)

  • Explanation layer (why the result was generated)

  • Confidence scoring

  • Prompt templates

  • Audit logging

  • Queue + async support

Example Usage

$result = AI::prompt('summarize_feedback')
    ->input(['feedback' => $text])
    ->withExplanation()
    ->withConfidence()
    ->execute();

Output

{
  "content": "Escalate this complaint to support.",
  "explanation": {
    "summary": "Negative sentiment and repeated complaint detected.",
    "factors": [
      "negative sentiment",
      "refund request",
      "repeat complaint"
    ],
    "confidence": 0.91
  }
}
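
To actually use such a result in a workflow, you can branch on the confidence score. Here is a minimal sketch in plain PHP that decodes a literal copy of the JSON above; the exact accessor the package exposes for the raw result may differ:

```php
<?php

// A literal copy of the structured output shown above.
$json = <<<'JSON'
{
  "content": "Escalate this complaint to support.",
  "explanation": {
    "summary": "Negative sentiment and repeated complaint detected.",
    "factors": ["negative sentiment", "refund request", "repeat complaint"],
    "confidence": 0.91
  }
}
JSON;

$result = json_decode($json, true);

// Act automatically only when the model is confident;
// otherwise route to a human, keeping the factors for context.
if ($result['explanation']['confidence'] >= 0.8) {
    echo $result['content'], PHP_EOL; // prints "Escalate this complaint to support."
} else {
    echo 'Needs review: ', implode(', ', $result['explanation']['factors']), PHP_EOL;
}
```

The point is that a structured, scored response lets ordinary application code make the final call, instead of trusting free-form text.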

This makes AI decisions:

  • understandable

  • traceable

  • usable in workflows

Architecture (Simplified)

Instead of treating AI as a black box, I designed it as a pipeline:

Input → Processing → Decision → Explainability → Action

Where:

  • AI handles inference

  • rules/logic handle decisions

  • explainability makes results usable
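
The pipeline idea above can be sketched as a chain of stages, each transforming a payload and passing it on. This is an illustrative outline only, not the package's actual internals; the `Stage` and `Pipeline` names are hypothetical:

```php
<?php

// Each stage (inference, decision rules, explainability) implements
// the same contract, so stages can be composed or swapped freely.
interface Stage
{
    public function handle(array $payload): array;
}

final class Pipeline
{
    /** @param Stage[] $stages */
    public function __construct(private array $stages)
    {
    }

    public function run(array $payload): array
    {
        foreach ($this->stages as $stage) {
            $payload = $stage->handle($payload);
        }

        return $payload;
    }
}

// Usage, mirroring Input → Processing → Decision → Explainability → Action:
// $pipeline = new Pipeline([new Inference(), new Decision(), new Explainer()]);
// $result   = $pipeline->run(['feedback' => $text]);
```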

Why This Matters

In real systems:

  • Engineers need structured outputs

  • Teams need trust

  • Businesses need auditability

Real Use Cases

This approach works for:

  • Customer feedback analysis: sentiment + action recommendation

  • Support automation: escalation decisions with reasoning

  • Risk detection: anomaly alerts with evidence

  • Enterprise dashboards: explainable insights

What’s Next

I’m continuing to improve the package:

  • more providers (Anthropic, etc.)

  • better explainability models

  • RAG support

  • workflow automation tools

Final Thought

AI is powerful. But explainable AI is usable AI.

Source: dev.to
