MCP servers are the fastest-growing part of the AI stack. They have zero observability.


Your LLM agent calls a tool via MCP. The tool fails. Your trace shows `tools/call search — error`. That's it. Not why it failed. Not how long it took. Not what arguments were passed. Not whether it was a timeout, a validation error, or a rate limit from a downstream API. Nobody instruments the server side: every MCP observability tool watches the client. We built the first middleware that watches from inside the server. One import, one function call: `import { toadEyeMiddleware } from`
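To make the "watches from inside the server" idea concrete, here is a minimal sketch of server-side tool instrumentation: a higher-order function that wraps a tool handler, records arguments and duration, and classifies failures before re-throwing. All names here (`withObservability`, `CallRecord`, `ToolHandler`) are illustrative assumptions, not the actual middleware API from the tutorial.

```typescript
// Hypothetical sketch — not the real toadEyeMiddleware API.
// The idea: wrap each tool handler so every call is recorded
// with its arguments, duration, and a classified error kind.
type ToolArgs = Record<string, unknown>;
type ToolHandler = (args: ToolArgs) => Promise<unknown>;

interface CallRecord {
  tool: string;
  args: ToolArgs;
  durationMs: number;
  outcome: "success" | "error";
  errorKind?: "timeout" | "validation" | "rate_limit" | "unknown";
}

// Crude classification by error message — a real middleware would
// inspect error types/status codes from the downstream API.
function classifyError(message: string): CallRecord["errorKind"] {
  if (/timeout|timed out/i.test(message)) return "timeout";
  if (/invalid|validation/i.test(message)) return "validation";
  if (/rate limit|429/i.test(message)) return "rate_limit";
  return "unknown";
}

function withObservability(
  tool: string,
  handler: ToolHandler,
  sink: (record: CallRecord) => void
): ToolHandler {
  return async (args) => {
    const start = Date.now();
    try {
      const result = await handler(args);
      sink({ tool, args, durationMs: Date.now() - start, outcome: "success" });
      return result;
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      sink({
        tool,
        args,
        durationMs: Date.now() - start,
        outcome: "error",
        errorKind: classifyError(message),
      });
      throw err; // preserve the original failure for the MCP client
    }
  };
}
```

With a wrapper like this, the trace for a failing `search` call carries the arguments, the elapsed time, and whether the failure was a timeout, a validation error, or a downstream rate limit — exactly the context a client-side trace cannot see.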
