Microsoft shipped Agent Framework 1.0 on April 3, 2026, and it changes the calculus for enterprise teams building AI agents. This isn't another experimental SDK. It's the production-ready convergence of two projects that collectively racked up over 75,000 GitHub stars: Semantic Kernel and AutoGen. All new development for both now happens in a single open-source repo at github.com/microsoft/agent-framework.
If you've been waiting for a stable, LTS-committed foundation before betting production workloads on agentic AI, the wait is over.
Why This Matters: The AutoGen + Semantic Kernel Merger
Before you can appreciate what Agent Framework 1.0 delivers, you need to understand the problem it solves. For the past two years, Microsoft had two overlapping AI frameworks pulling in the same direction from different angles:
- Semantic Kernel — enterprise-grade orchestration, plugin model, Azure integrations, Entra ID auth, 26,000+ GitHub stars
- AutoGen — multi-agent conversations, group chat, Magentic-One patterns, 50,400+ GitHub stars
Teams had to choose between them, or worse, try to combine them themselves. That problem is now solved. Agent Framework 1.0 uses Semantic Kernel as its foundation layer — the kernel, plugin model, and connector system survive intact — then rebuilds AutoGen's multi-agent orchestration concepts on top as the graph-based workflow engine.
AutoGen has been moved to maintenance mode. New development happens exclusively in the unified repo. If you're still building on raw AutoGen, start planning your migration now.
What Agent Framework 1.0 Actually Ships
The 1.0 release locks in a stable surface area across both .NET and Python runtimes. Here's what's production-ready today:
Core Agent Abstraction
The core single-agent abstraction is identical across both runtimes. An agent receives context, reasons using a connected LLM, calls tools, and returns a structured response. The abstraction is clean enough to swap providers without touching business logic.
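That receive-reason-act loop can be sketched in plain Python. Everything below — `StubModel`, `run_agent`, the tool registry — is an illustrative stand-in, not an Agent Framework API:

```python
# Illustrative sketch of the receive-reason-act agent loop described above.
# StubModel and the tool registry are stand-ins, not Agent Framework APIs.
from dataclasses import dataclass


@dataclass
class StubModel:
    """Scripted 'LLM' that first requests a tool call, then answers."""
    step: int = 0

    def complete(self, messages):
        self.step += 1
        if self.step == 1:
            return {"tool": "get_time", "args": {}}  # decide to call a tool
        return {"content": f"The time is {messages[-1]['content']}."}


def run_agent(model, tools, user_input):
    messages = [{"role": "user", "content": user_input}]
    while True:
        decision = model.complete(messages)
        if "tool" in decision:  # model asked for a tool: invoke it, feed result back
            result = tools[decision["tool"]](**decision["args"])
            messages.append({"role": "tool", "content": result})
        else:  # model produced a final answer: return it
            return decision["content"]


tools = {"get_time": lambda: "12:00"}
print(run_agent(StubModel(), tools, "What time is it?"))  # → The time is 12:00.
```

The point of the abstraction is that the loop never mentions a specific provider; only the model object changes when you swap LLMs.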
Multi-provider model support is first-party and built-in. The framework ships connectors for:
- Microsoft Foundry
- Azure OpenAI
- OpenAI
- Anthropic Claude
- Amazon Bedrock
- Google Gemini
- Ollama (for local/edge deployments)
No third-party wrappers. No adapter gymnastics. One framework, any model.
MCP Tool Discovery and Invocation
This is the feature that makes Agent Framework 1.0 immediately useful for real-world tooling. Full Model Context Protocol (MCP) support lets agents dynamically discover and invoke tools exposed by any MCP-compliant server — at runtime, without code changes.
In practice, this means your agents can connect to any of the thousands of MCP servers that have emerged in the ecosystem. A browser automation tool, a database connector, a code execution sandbox — all become first-class agent capabilities through a single protocol. Agents discover available tools via the MCP server's tool manifest, call them using the protocol's structured invocation format, and receive results that flow directly back into the reasoning loop.
```python
from agent_framework import Agent, AgentConfig
from agent_framework.tools.mcp import McpServerToolProvider

# Connect an MCP server — tools are discovered automatically at runtime
tool_provider = McpServerToolProvider(server_url="http://localhost:8080")

agent = Agent(
    config=AgentConfig(
        instructions="You are a helpful assistant with access to external tools.",
        model="azure-openai/gpt-4o",
    ),
    tool_providers=[tool_provider],
)

response = await agent.run("Summarize the latest PRs from my GitHub repo")
print(response.content)
```
Agent-to-Agent (A2A) Protocol
A2A support enables cross-runtime agent collaboration. Agents running in different frameworks — or even different languages — can coordinate through structured, protocol-driven messaging. A2A 1.0 support was listed as "arriving imminently" at the 1.0 release, making this one of the few capabilities that's nearly but not fully production-ready at launch.
The architecture is important here: A2A and MCP serve different purposes. MCP connects agents to tools. A2A connects agents to other agents. Together, they give you a complete interoperability story for complex multi-agent systems.
Multi-Agent Orchestration Patterns
The framework ships five stabilized multi-agent patterns, each suited to different workflow topologies:
| Pattern | Description | Best For |
|---|---|---|
| Sequential | Agents run in order, each receiving the previous output | Pipelines with clear handoffs |
| Concurrent | Agents run in parallel, results merged | Research, parallel data gathering |
| Handoff | One agent delegates to a specialist based on intent | Routing, triage workflows |
| Group Chat | Multiple agents in a shared conversation | Debate, review, consensus tasks |
| Magentic-One | Hierarchical orchestrator + sub-agents | Complex autonomous tasks |
Choosing the right pattern has a large impact on cost and latency. Sequential is cheapest per run but serializes every step. Concurrent burns tokens in parallel but finishes faster. Magentic-One is powerful but introduces the overhead of orchestrator reasoning at each step.
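The latency side of that tradeoff is easy to demonstrate with stub agents that simulate model latency. Nothing below uses the framework's orchestration APIs; it is a minimal sketch of why concurrent wall time tracks the slowest agent rather than the sum of all of them:

```python
# Sketch of the latency tradeoff between Sequential and Concurrent patterns.
# Each stub "agent" just sleeps to simulate model latency; these are not
# Agent Framework APIs.
import asyncio
import time


async def stub_agent(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stand-in for an LLM round trip
    return f"{name}-result"


async def sequential(agents):
    out = []
    for name, delay in agents:  # each step waits for the previous one
        out.append(await stub_agent(name, delay))
    return out


async def concurrent(agents):
    # All agents run at once; wall time ≈ the slowest agent, not the sum
    return await asyncio.gather(*(stub_agent(n, d) for n, d in agents))


async def main():
    agents = [("searcher", 0.2), ("checker", 0.2), ("ranker", 0.2)]
    t0 = time.perf_counter()
    await sequential(agents)
    seq = time.perf_counter() - t0
    t0 = time.perf_counter()
    await concurrent(agents)
    conc = time.perf_counter() - t0
    print(f"sequential: {seq:.2f}s, concurrent: {conc:.2f}s")


asyncio.run(main())
```

Token cost inverts the picture: the concurrent run pays for all three agents even when one result would have sufficed, which is why mapping data dependencies first matters.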
Getting Started: Your First Agent in Python
Install the framework with all sub-packages:

```shell
pip install agent-framework
```

For selective installation:

```shell
pip install agent-framework-core agent-framework-openai
```
Here's a minimal working agent that uses Azure OpenAI and a custom Python tool:
```python
import asyncio

from agent_framework import Agent, AgentConfig
from agent_framework.tools import tool


# Define a tool using the @tool decorator
@tool(description="Get the current weather for a city")
def get_weather(city: str) -> str:
    # In production, call a real weather API here
    return f"The weather in {city} is currently 72°F and sunny."


# Configure and instantiate the agent
agent = Agent(
    config=AgentConfig(
        name="WeatherAssistant",
        instructions="You help users check weather conditions. Always be concise.",
        model="azure-openai/gpt-4o",
    ),
    tools=[get_weather],
)


# Drive the async agent from a synchronous entry point
async def main():
    response = await agent.run("What's the weather like in Seattle?")
    print(response.content)


asyncio.run(main())
```
Declarative Agents: YAML-First Workflows
One of the most enterprise-relevant features in 1.0 is declarative agent and workflow definition. Instead of configuring agents imperatively in code, you define them in version-controlled YAML files and load them at runtime with a single API call.
```yaml
# agents/research-pipeline.yaml
name: research-pipeline
description: Multi-agent research and summarization pipeline
agents:
  - id: searcher
    model: azure-openai/gpt-4o
    instructions: |
      You are a research specialist. Find relevant information
      on the given topic from multiple perspectives.
    tools:
      - type: mcp
        server_url: http://search-mcp-server:8080
  - id: synthesizer
    model: azure-openai/gpt-4o
    instructions: |
      You synthesize research into clear, structured summaries.
      Always cite your sources.
workflow:
  type: sequential
  steps:
    - agent: searcher
    - agent: synthesizer
```
Loading and running this pipeline:
```python
from agent_framework import WorkflowLoader

loader = WorkflowLoader()
pipeline = loader.load("agents/research-pipeline.yaml")

result = await pipeline.run("Summarize recent advances in quantum error correction")
print(result.final_output)
```
YAML-defined agents mean your entire agent topology can live in source control, go through code review, and be versioned alongside your application. That alone eliminates significant operational overhead compared to orchestration logic hardcoded in application source.
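Because a declarative definition is just structured data once loaded, CI-time validation becomes cheap. Below is a sketch of one such check, run against a plain dict that mirrors the example file; the validation logic is ours, not a framework feature:

```python
# Sketch of a CI-style sanity check on a declarative pipeline definition.
# `pipeline` mirrors the YAML example above after parsing; the check itself
# is illustrative, not part of Agent Framework.
pipeline = {
    "name": "research-pipeline",
    "agents": [{"id": "searcher"}, {"id": "synthesizer"}],
    "workflow": {
        "type": "sequential",
        "steps": [{"agent": "searcher"}, {"agent": "synthesizer"}],
    },
}


def validate(defn: dict) -> list:
    """Return a list of errors; empty means the definition is well-formed."""
    errors = []
    ids = {a["id"] for a in defn.get("agents", [])}
    for step in defn.get("workflow", {}).get("steps", []):
        if step["agent"] not in ids:  # step points at an undefined agent
            errors.append(f"unknown agent: {step['agent']}")
    return errors


assert validate(pipeline) == []  # well-formed definition passes
```

A check like this catches a renamed agent id before the pipeline ever reaches a runtime host.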
The Middleware Pipeline
The middleware system is where Agent Framework 1.0 shows its enterprise DNA. There are three distinct interception layers, each targeting a different scope of agent execution:
- AgentMiddleware — turn-level concerns: rate limiting, quotas, audit logging, circuit breakers
- FunctionMiddleware — tool-level concerns: input validation, output sanitization, tool call logging
- ChatMiddleware — model-level concerns: prompt injection detection, content safety filtering, token counting
```python
from agent_framework import Agent, AgentConfig
from agent_framework.middleware import AgentMiddleware, MiddlewareContext


class AuditMiddleware(AgentMiddleware):
    async def on_turn_start(self, ctx: MiddlewareContext) -> None:
        print(f"[AUDIT] Agent {ctx.agent_id} started turn {ctx.turn_id}")

    async def on_turn_end(self, ctx: MiddlewareContext) -> None:
        print(f"[AUDIT] Turn {ctx.turn_id} completed in {ctx.elapsed_ms}ms")
        print(f"[AUDIT] Tokens used: {ctx.token_usage.total}")


agent = Agent(
    config=AgentConfig(model="azure-openai/gpt-4o"),
    middleware=[AuditMiddleware()],
)
```
For regulated industries — financial services, healthcare, legal — middleware is where compliance policies live. Content safety filters, PII detection, response validation, and regulatory guardrails can all be implemented as middleware without coupling them to agent logic.
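The interception idea itself is simple to illustrate outside the framework. Here is a minimal sketch of middleware-style wrapping, with a PII redaction filter and an audit logger around a stand-in agent; the pipeline shape is illustrative, not the framework's actual middleware API:

```python
# Sketch of middleware-style interception: filters wrap the agent call
# without touching agent logic. Illustrative only; not Agent Framework's API.
import re
from typing import Callable

Handler = Callable[[str], str]


def redact_pii(next_handler: Handler) -> Handler:
    email = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

    def wrapped(prompt: str) -> str:
        clean = email.sub("[REDACTED]", prompt)  # scrub before the model sees it
        return next_handler(clean)

    return wrapped


def audit(next_handler: Handler) -> Handler:
    def wrapped(prompt: str) -> str:
        print(f"[AUDIT] prompt length: {len(prompt)}")
        return next_handler(prompt)

    return wrapped


def fake_agent(prompt: str) -> str:  # stand-in for the actual agent turn
    return f"echo: {prompt}"


chain = audit(redact_pii(fake_agent))
print(chain("Contact alice@example.com about the invoice"))
# → echo: Contact [REDACTED] about the invoice
```

The compliance payoff is the same in either formulation: the agent's own logic never changes when a new policy is layered on.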
Observability and Authentication
Agent Framework 1.0 ships first-party OpenTelemetry integration. Every agent turn, tool call, and model invocation emits structured traces compatible with Azure Monitor, Datadog, Jaeger, and any other OTel-compatible backend.
Authentication uses Microsoft Entra ID (formerly Azure AD) out of the box. For teams already running on Azure, this means agent deployments can inherit existing identity and access management policies without additional configuration.
Common Mistakes to Avoid
Using the wrong orchestration pattern for your workload. Sequential pipelines are tempting for their simplicity, but if your agents can work in parallel without dependencies, you're leaving throughput on the table. Map your data dependencies first, then choose the pattern.
Skipping middleware for "simple" agents. Even simple agents benefit from turn-level rate limiting and audit logging in production. It's much harder to bolt on compliance controls after incidents than to configure middleware from the start.
Treating YAML agents as a production deployment mechanism. YAML declarative agents are excellent for versioning and review, but they still need a runtime host, proper secret management, and infrastructure. The YAML file describes the agent; it doesn't replace DevOps.
Not planning the AutoGen migration. AutoGen is now in maintenance mode. It will continue receiving security patches, but new features and orchestration patterns will only land in Agent Framework. The migration guide covers most common patterns, and the unified architecture means most AutoGen concepts map directly.
Conflating MCP and A2A. These protocols are complementary, not interchangeable. MCP is for tool access. A2A is for agent-to-agent communication. Using MCP where you need A2A results in brittle, overly coupled tool definitions.
FAQ
Q: Is Microsoft Agent Framework 1.0 production-ready?
Yes. The 1.0 release comes with a long-term support commitment from Microsoft, stable APIs with a documented upgrade path, and months of pre-release validation with enterprise customers. It's designed for production workloads, not experimentation.
Q: What's the difference between Agent Framework and Semantic Kernel?
Semantic Kernel is now the foundation layer inside Agent Framework. It provides the kernel, plugin model, and service connector infrastructure. Agent Framework adds multi-agent orchestration, YAML declarative workflows, middleware pipelines, A2A support, and first-class MCP integration on top of that foundation. For new projects, start with Agent Framework. Use Semantic Kernel directly only if you need its lower-level APIs.
Q: Should I migrate from AutoGen?
If you're building new agents, use Agent Framework. If you're running AutoGen in production, it will remain stable under maintenance mode — but new orchestration capabilities, MCP support, and A2A interoperability won't come to AutoGen. Plan a migration when your next major feature cycle begins.
Q: Does Agent Framework 1.0 support local/offline LLMs?
Yes. The Ollama connector is first-party and ships with 1.0, giving you a straightforward path to running agents against local models. For air-gapped environments or cost-sensitive edge deployments, this is a first-class option.
Q: How does MCP support work in practice?
Agents connect to MCP servers at startup or dynamically at runtime. The server exposes a tool manifest describing available functions, parameters, and schemas. The agent framework uses this manifest to make tools available for LLM tool-calling. When the model decides to use a tool, the framework handles the MCP invocation, receives the result, and feeds it back into the reasoning loop — transparently.
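The manifest-to-tool-schema translation at the heart of this flow can be sketched directly. The input follows the shape of an MCP `tools/list` entry (`name`, `description`, `inputSchema`); the output is an OpenAI-style function-calling schema, which is an assumed target format for illustration:

```python
# Sketch of the manifest-to-tool-schema translation described above.
# Input mirrors an MCP tools/list entry; output is an OpenAI-style
# function-calling schema (an assumption about the target format).
def mcp_tool_to_llm_schema(tool: dict) -> dict:
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            "parameters": tool.get("inputSchema", {"type": "object"}),
        },
    }


manifest = [{
    "name": "query_db",
    "description": "Run a read-only SQL query",
    "inputSchema": {
        "type": "object",
        "properties": {"sql": {"type": "string"}},
        "required": ["sql"],
    },
}]

llm_tools = [mcp_tool_to_llm_schema(t) for t in manifest]
print(llm_tools[0]["function"]["name"])  # → query_db
```

Because the manifest carries full JSON schemas for each tool's parameters, the model can emit well-formed arguments without any tool-specific glue code.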
Q: Does the .NET version have feature parity with Python?
At 1.0, the core agent abstraction, middleware pipeline, YAML declarative workflows, and multi-provider support are stable and at feature parity across both runtimes. Some advanced orchestration patterns and integrations may differ slightly; check the release notes for the specific version. The team has committed to keeping both runtimes aligned at each release.
Key Takeaways
- Agent Framework 1.0 is the production successor to both AutoGen and Semantic Kernel. Both projects converge here; AutoGen is in maintenance mode.
- Full MCP support lets agents dynamically connect to thousands of tools without code changes. It's one of the cleanest implementations of MCP in any framework.
- Five stabilized orchestration patterns cover the vast majority of real-world multi-agent topologies out of the box.
- YAML declarative agents bring agent configuration into version control and code review workflows — a major operational maturity upgrade.
- Three-tier middleware is where compliance, observability, and content safety live. Enterprise teams should configure this from day one.
- Multi-provider LLM support (Azure OpenAI, OpenAI, Anthropic, Bedrock, Gemini, Ollama) means you're not locked into a single model vendor.
- A2A support is landing imminently post-1.0 — watch the repo for this if cross-framework agent collaboration is on your roadmap.
For enterprise teams, Agent Framework 1.0 is the most complete production-ready answer to the question: "What do we actually build AI agents on?" The LTS commitment removes the biggest objection to production adoption, and the MCP integration future-proofs your tool ecosystem as that protocol continues to standardize across the industry.
Bottom Line
Microsoft Agent Framework 1.0 is the right foundation for enterprise AI agents in .NET and Python. By converging AutoGen and Semantic Kernel into a single LTS-committed SDK with native MCP support and a three-tier middleware pipeline, Microsoft has answered the production readiness question that's been blocking serious adoption. Start here for new projects; migrate from AutoGen when your next feature cycle allows.