Leveraging the Google Agent Development Kit (ADK) and the underlying Gemini LLM to build multi-agent applications with A2A protocol support in Python.
Aren’t There a Billion Python ADK Demos?
Yes there are.
Python has traditionally been the main coding language for ML and AI tools. The goal of this article is to provide a multi-agent test bed for building, debugging, and deploying multi-agent applications.
What you talkin' 'bout, Willis?
So what is different about this lab compared to all the others out there?
This is one of the first deep dives into a multi-agent application leveraging the advanced tooling of Gemini CLI. The starting point for the demo was an existing Codelab, which was updated and re-engineered with Gemini CLI.
The original Codelab is here:
Building a Multi-Agent System | Google Codelabs
What Is Python?
Python is an interpreted language that allows for rapid development and testing, and it has deep libraries for working with ML and AI.
Python Version Management
One of the downsides of Python's wide deployment has been managing language versions across platforms and keeping a supported version installed.
The pyenv tool enables deploying consistent versions of Python:
GitHub - pyenv/pyenv: Simple Python version management
As of this writing, the mainstream Python version is 3.13. To validate your current Python installation:
python --version
Python 3.13.13
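If scripts in the lab should refuse to run on an older interpreter, a guard like the following can be added. This is a minimal sketch; the (3, 13) floor follows the version cited above:

```python
import sys

# Minimum interpreter version this lab targets (3.13, per the article).
REQUIRED = (3, 13)

def check_python(required=REQUIRED):
    """Return True when the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= required

print("Python", sys.version.split()[0], "meets the 3.13 floor:", check_python())
```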
Amazon Lightsail
Amazon Lightsail is an easy-to-use virtual private server (VPS) provider and cloud platform designed by AWS for simpler workloads, offering developers pre-configured compute, storage, and networking for a low, predictable monthly price. It is ideal for hosting small websites, simple web apps, or creating development environments.
More information is available on the official site here:
Amazon's Simple Cloud Server | Amazon Lightsail
And this is the direct URL to the console:
https://lightsail.aws.amazon.com/ls/webapp/home/containers
The Lightsail console will look similar to:
Gemini CLI
If it is not pre-installed, you can download the Gemini CLI to interact with the source files and provide real-time assistance:
npm install -g @google/gemini-cli
Testing the Gemini CLI Environment
Once you have all the tools and the correct Node.js version in place, you can test the startup of Gemini CLI. You will need to authenticate with an API key or your Google Account:
Gemini CLI v0.33.1
Logged in with Google (/auth)
Gemini Code Assist Standard (/upgrade)
no sandbox (see /docs) | /model Auto (Gemini 3) | 239.8 MB
Node Version Management
Gemini CLI needs a consistent, up-to-date version of Node.js. The nvm tool can be used to get a standard Node environment.
Agent Development Kit
The Google Agent Development Kit (ADK) is an open-source, Python-based framework designed to streamline the creation, deployment, and orchestration of sophisticated, multi-agent AI systems. It treats agent development like software engineering, offering modularity, state management, and built-in tools (like Google Search) to build autonomous agents.
The ADK can be installed from here:
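The ADK is published on PyPI as google-adk (installable with pip install google-adk). A quick way to confirm the installation from Python, sketched here with stdlib tooling only:

```python
import importlib.util

def adk_available() -> bool:
    """Return True if the ADK (module path google.adk) can be imported."""
    try:
        return importlib.util.find_spec("google.adk") is not None
    except ModuleNotFoundError:
        # The parent "google" namespace package is not installed at all.
        return False

print("ADK installed:", adk_available())
```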
Agent Skills
Gemini CLI can be customized to work with ADK agents. Both an Agent Development MCP server, and specific Agent skills are available.
More details are here:
To list the available Agent Skills in Gemini CLI:
> /skills list
Available Agent Skills:
- adk-cheatsheet
MUST READ before writing or modifying ADK agent code. ADK API quick reference for Python — agent types, tool definitions, orchestration
patterns, callbacks, and state management. Includes an index of all ADK documentation pages. Do NOT use for creating new projects (use
adk-scaffold).
- adk-deploy-guide
MUST READ before deploying any ADK agent. ADK deployment guide — Agent Engine, Cloud Run, GKE, CI/CD pipelines, secrets, observability, and
production workflows. Use when deploying agents to Google Cloud or troubleshooting deployments. Do NOT use for API code patterns (use
adk-cheatsheet), evaluation (use adk-eval-guide), or project scaffolding (use adk-scaffold).
- adk-dev-guide
ALWAYS ACTIVE — read at the start of any ADK agent development session. ADK development lifecycle and mandatory coding guidelines —
spec-driven workflow, code preservation rules, model selection, and troubleshooting.
- adk-eval-guide
MUST READ before running any ADK evaluation. ADK evaluation methodology — eval metrics, evalset schema, LLM-as-judge, tool trajectory
scoring, and common failure causes. Use when evaluating agent quality, running adk eval, or debugging eval results. Do NOT use for API code
patterns (use adk-cheatsheet), deployment (use adk-deploy-guide), or project scaffolding (use adk-scaffold).
- adk-observability-guide
MUST READ before setting up observability for ADK agents or when analyzing production traffic, debugging agent behavior, or improving agent
performance. ADK observability guide — Cloud Trace, prompt-response logging, BigQuery Agent Analytics, third-party integrations, and
troubleshooting. Use when configuring monitoring, tracing, or logging for agents, or when understanding how a deployed agent handles real
traffic.
- adk-scaffold
MUST READ before creating or enhancing any ADK agent project. Use when the user wants to build a new agent (e.g. "build me a search agent")
or enhance an existing project (e.g. "add CI/CD to my project", "add RAG").
And the ADK documentation is available through a configured MCP server:
> /mcp list
Configured MCP servers:
🟢 adk-docs-mcp (from adk-docs-ext) - Ready (2 tools)
Tools:
- mcp_adk-docs-mcp_fetch_docs
- mcp_adk-docs-mcp_list_doc_sources
Where do I start?
The strategy for starting multi-agent development is an incremental, step-by-step approach.
First, the basic development environment is set up with the required system variables and a working Gemini CLI configuration.
Then, the ADK multi-agent application is built, debugged, and tested locally. Finally, the entire solution is deployed to AWS Lightsail.
Setup the Basic Environment
At this point you should have a working Python environment and a working Gemini CLI installation. All of the relevant code examples and documentation are available on GitHub.
The next step is to clone the GitHub repository to your local environment:
cd ~
git clone https://github.com/xbill9/gemini-cli-aws
Then run init2.sh from the cloned directory.
The script will attempt to determine your shell environment and set the correct variables:
source init2.sh
If your session times out or you need to re-authenticate, you can run the set_env.sh script to reset your environment variables:
source set_env.sh
Variables like PROJECT_ID need to be set up for use in the various build scripts, so the set_env.sh script can be used to reset the environment if your session times out.
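The sanity check behind this can be sketched in Python. Only PROJECT_ID is taken from the article; any other names your build scripts expect would vary per project:

```python
import os

# PROJECT_ID comes from the article; extend the list with whatever other
# variables your build scripts expect.
REQUIRED_VARS = ["PROJECT_ID"]

def missing_vars(required, env=os.environ):
    """Return the required variable names that are absent or empty."""
    return [name for name in required if not env.get(name)]

gaps = missing_vars(REQUIRED_VARS)
if gaps:
    print("Re-run set_env.sh; missing:", ", ".join(gaps))
else:
    print("Environment looks complete.")
```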
Finally install the packages and dependencies:
cd multi-lightsail
make install
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ ✓ Shell make install [current working directory /home/xbill/multi-agent] (Running make install again after installing shared-utils in edita… │
│ │
│ ... first 101 lines hidden (Ctrl+O to show) ... │
│ Installing frontend dependencies... │
│ cd app/frontend && npm install │
│ │
│ up to date, audited 16 packages in 502ms │
│ │
│ 5 packages are looking for funding │
│ run `npm fund` for details │
│ │
│ found 0 vulnerabilities │
│ Output too long and was saved to: │
│ /home/xbill/.gemini/tmp/multi-agent/tool-outputs/session-7ec4dae7-3acf-49f5-b396-306929c22231/run_shell_command_1775912739511_0.txt │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Verify The ADK Installation
To verify the setup, run the ADK CLI locally with the researcher agent:
xbill@penguin:~/gemini-cli-aws/multi-lightsail$ cd agents
xbill@penguin:~/gemini-cli-aws/multi-lightsail/agents$ adk run researcher
/home/xbill/.pyenv/versions/3.13.13/lib/python3.13/site-packages/google/adk/features/_feature_decorator.py:72: UserWarning: [EXPERIMENTAL] feature FeatureName.PLUGGABLE_AUTH is enabled.
  check_feature_enabled()
Log setup complete: /tmp/agents_log/agent.20260415_170203.log
To access latest log: tail -F /tmp/agents_log/agent.latest.log
{"asctime":"2026-04-15 17:02:03,412","name":"root","levelname":"INFO","message":"Logging initialized for researcher","filename":"logging_config.py","lineno":54,"service":"researcher","log_level":"INFO"}
{"asctime":"2026-04-15 17:02:03,413","name":"researcher.agent","levelname":"INFO","message":"Initialized researcher agent with model: gemini-2.5-flash","filename":"agent.py","lineno":85}
{"asctime":"2026-04-15 17:02:03,413","name":"google_adk.google.adk.cli.utils.envs","levelname":"INFO","message":"Loaded .env file for researcher at /home/xbill/gemini-cli-aws/.env","filename":"envs.py","lineno":83}
{"asctime":"2026-04-15 17:02:03,413","name":"google_adk.google.adk.cli.utils.local_storage","levelname":"INFO","message":"Using per-agent session storage rooted at /home/xbill/gemini-cli-aws/multi-lightsail/agents","filename":"local_storage.py","lineno":84}
{"asctime":"2026-04-15 17:02:03,414","name":"google_adk.google.adk.cli.utils.local_storage","levelname":"INFO","message":"Using file artifact service at /home/xbill/gemini-cli-aws/multi-lightsail/agents/researcher/.adk/artifacts","filename":"local_storage.py","lineno":110}
{"asctime":"2026-04-15 17:02:03,414","name":"google_adk.google.adk.cli.utils.service_factory","levelname":"INFO","message":"Using in-memory memory service","filename":"service_factory.py","lineno":266}
{"asctime":"2026-04-15 17:02:03,422","name":"google_adk.google.adk.cli.utils.local_storage","levelname":"INFO","message":"Creating local session service at /home/xbill/gemini-cli-aws/multi-lightsail/agents/researcher/.adk/session.db","filename":"local_storage.py","lineno":60}
Running agent researcher, type exit to exit.
[user]:
Test The ADK Web Interface
This tests the ADK agent interactions with a browser:
xbill@penguin:~/gemini-cli-aws/multi-lightsail/agents$ adk web --host 0.0.0.0
/home/xbill/.pyenv/versions/3.13.13/lib/python3.13/site-packages/google/adk/features/_feature_decorator.py:72: UserWarning: [EXPERIMENTAL] feature FeatureName.PLUGGABLE_AUTH is enabled.
check_feature_enabled()
2026-04-15 17:02:33,314 - INFO - service_factory.py:266 - Using in-memory memory service
2026-04-15 17:02:33,314 - INFO - local_storage.py:84 - Using per-agent session storage rooted at /home/xbill/gemini-cli-aws/multi-lightsail/agents
2026-04-15 17:02:33,315 - INFO - local_storage.py:110 - Using file artifact service at /home/xbill/gemini-cli-aws/multi-lightsail/agents/.adk/artifacts
/home/xbill/.pyenv/versions/3.13.13/lib/python3.13/site-packages/google/adk/cli/fast_api.py:198: UserWarning: [EXPERIMENTAL] InMemoryCredentialService: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.
credential_service = InMemoryCredentialService()
/home/xbill/.pyenv/versions/3.13.13/lib/python3.13/site-packages/google/adk/auth/credential_service/in_memory_credential_service.py:33: UserWarning: [EXPERIMENTAL] BaseCredentialService: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.
super().__init__()
INFO: Started server process [6882]
INFO: Waiting for application startup.
+-----------------------------------------------------------------------------+
| ADK Web Server started |
| |
| For local testing, access at http://0.0.0.0:8000. |
+-----------------------------------------------------------------------------+
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
Then use the web interface, either on the local interface (127.0.0.1) or the catch-all interface (0.0.0.0), depending on your environment:
Special note for Google Cloud Shell deployments: add a CORS allow_origins exemption to allow the ADK agent to run:
adk web --host 0.0.0.0 --allow_origins 'regex:.*'
Multi-Agent Design
The multi-agent deployment consists of 5 agents:
- Researcher
- Judge
- Orchestrator
- Content Builder
- Course Builder
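The flow between these agents, an iterative research and judge cycle feeding a final content build, can be sketched in plain Python. This is a hypothetical illustration: the stub callables stand in for the real A2A agent calls, and the function names are invented for the sketch:

```python
def run_pipeline(topic, researcher, judge, content_builder, max_rounds=3):
    """Sequential research -> judge loop, then a content build.

    researcher, judge, and content_builder are plain callables standing in
    for the real agent invocations. If the judge never approves within
    max_rounds, the latest findings are built anyway.
    """
    findings = researcher(topic)
    for _ in range(max_rounds):
        verdict = judge(findings)
        if verdict == "approved":
            break
        # Feed the judge's objection back into another research pass.
        findings = researcher(f"{topic} (revise: {verdict})")
    return content_builder(findings)

# Dry run with stub agents:
course = run_pipeline(
    "history of the internet",
    researcher=lambda t: f"notes on {t}",
    judge=lambda f: "approved",
    content_builder=lambda f: f"course built from: {f}",
)
print(course)
```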
A high level overview of the application can be found here:
Multi-Agent A2A with the Agent Development Kit (ADK), Cloud Run, Agent Skills, and Gemini CLI
Running/Testing/Debugging Locally
The main Makefile has been extended with extensive targets for managing the agents on the local development environment.
The key targets include:
xbill@penguin:~/gemini-cli-aws/multi-lightsail$ make help
Available commands:
install - Install all dependencies for root, agents, and app
start - Start all services locally (alias for start-local)
stop - Stop all local services (alias for stop-local)
run - Start all services locally (alias for start-local)
local - Show local service URLs
local-status - Check status of local processes
start-local - Start all local services in background
stop-local - Stop all local processes
test - Run all tests (pytest)
e2e-test - Run end-to-end test against localhost
e2e-test-lightsail - Run end-to-end test against Lightsail endpoint
lint - Run linting checks (ruff)
deploy - Deploy all services to AWS Lightsail
status - Show Lightsail service status
endpoint - Show Lightsail service endpoint
destroy - Delete Lightsail service
deploy-lightsail - Deploy all services to AWS Lightsail
lightsail-status - Show Lightsail service status
endpoint-lightsail - Show Lightsail service endpoint
destroy-lightsail - Delete Lightsail service
clean - Remove caches and logs
First, check for locally running agents:
xbill@penguin:~/gemini-cli-aws/multi-lightsail$ make local-status
Checking status of locally running agents and servers...
--- Network Status ---
No services listening on expected ports (8000-8004, 5173).
--- Process Status ---
No matching processes found.
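The network half of this check can be approximated in Python. This is an illustrative sketch, not the Makefile's actual implementation; the port list matches the ports the target inspects:

```python
import socket

# Ports the local-status target inspects: four agent servers, the app
# backend, and the Vite dev server.
EXPECTED_PORTS = [8000, 8001, 8002, 8003, 8004, 5173]

def open_ports(ports, host="127.0.0.1", timeout=0.25):
    """Return the subset of ports accepting TCP connections on host."""
    listening = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                listening.append(port)
    return listening

up = open_ports(EXPECTED_PORTS)
print("Listening:", up if up else "none of the expected ports")
```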
Then all the agents can be started together:
xbill@penguin:~/gemini-cli-aws/multi-lightsail$ make start
Stopping any existing agent and server processes...
Starting all agents in background...
Waiting for sub-agents to start...
All agents started. Logs: researcher.log, judge.log, content_builder.log, orchestrator.log
Starting App Backend in background...
Starting Frontend dev server in background...
All services started. Logs: researcher.log, judge.log, content_builder.log, orchestrator.log, backend.log, frontend.log
Frontend: [http://localhost:5173](http://localhost:5173)
Backend: [http://localhost:8000](http://localhost:8000)
The entire project can be linted and tested as a unit, and the running services verified:
xbill@penguin:~/gemini-cli-aws/multi-lightsail$ make local-status
Checking status of locally running agents and servers...
--- Network Status ---
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 8052/python3
tcp 0 0 0.0.0.0:8001 0.0.0.0:* LISTEN 7673/python3
tcp 0 0 0.0.0.0:8002 0.0.0.0:* LISTEN 7676/python3
tcp 0 0 0.0.0.0:8003 0.0.0.0:* LISTEN 7678/python3
tcp 0 0 0.0.0.0:8004 0.0.0.0:* LISTEN 8040/python3
tcp 0 0 0.0.0.0:5173 0.0.0.0:* LISTEN 8393/node
--- Process Status ---
xbill 7670 0.0 0.0 2584 1712 pts/1 S 17:06 0:00 /bin/sh -c /bin/bash -c "source .env 2>/dev/null || true; \ /home/xbill/.pyenv/shims/python3 -m shared.adk_app --host 0.0.0.0 --port 8001 --a2a agents/researcher"
xbill 7673 6.5 2.6 276948 172316 pts/1 S 17:06 0:03 /home/xbill/.pyenv/versions/3.13.13/bin/python3 -m shared.adk_app --host 0.0.0.0 --port 8001 --a2a agents/researcher
xbill 7674 0.0 0.0 2584 1716 pts/1 S 17:06 0:00 /bin/sh -c /bin/bash -c "source .env 2>/dev/null || true; \ /home/xbill/.pyenv/shims/python3 -m shared.adk_app --host 0.0.0.0 --port 8002 --a2a agents/judge"
xbill 7676 8.1 2.6 276952 172044 pts/1 S 17:06 0:03 /home/xbill/.pyenv/versions/3.13.13/bin/python3 -m shared.adk_app --host 0.0.0.0 --port 8002 --a2a agents/judge
xbill 7677 0.0 0.0 2584 1716 pts/1 S 17:06 0:00 /bin/sh -c /bin/bash -c "source .env 2>/dev/null || true; \ /home/xbill/.pyenv/shims/python3 -m shared.adk_app --host 0.0.0.0 --port 8003 --a2a agents/content_builder"
xbill 7678 5.8 2.6 277072 171964 pts/1 S 17:06 0:02 /home/xbill/.pyenv/versions/3.13.13/bin/python3 -m shared.adk_app --host 0.0.0.0 --port 8003 --a2a agents/content_builder
xbill 8037 0.0 0.0 2588 1648 pts/1 S 17:06 0:00 /bin/sh -c /bin/bash -c "source .env 2>/dev/null || true; \ export RESEARCHER_AGENT_CARD_URL=http://localhost:8001/a2a/researcher/.well-known/agent-card.json; \ export JUDGE_AGENT_CARD_URL=http://localhost:8002/a2a/judge/.well-known/agent-card.json; \ export CONTENT_BUILDER_AGENT_CARD_URL=http://localhost:8003/a2a/content_builder/.well-known/agent-card.json; \ /home/xbill/.pyenv/shims/python3 -m shared.adk_app --host 0.0.0.0 --port 8004 agents/orchestrator"
xbill 8040 6.3 2.3 259240 153756 pts/1 S 17:06 0:02 /home/xbill/.pyenv/versions/3.13.13/bin/python3 -m shared.adk_app --host 0.0.0.0 --port 8004 agents/orchestrator
xbill 8047 0.0 0.0 2584 1716 pts/1 S 17:06 0:00 /bin/sh -c /bin/bash -c "source .env 2>/dev/null || true; \ export AGENT_SERVER_URL=http://localhost:8004; \ export AGENT_NAME=orchestrator; \ export PORT=8000; \ cd app && /home/xbill/.pyenv/shims/python3 main.py"
xbill 8049 0.0 0.0 6940 3284 pts/1 S 17:06 0:00 /bin/bash -c source .env 2>/dev/null || true; export AGENT_SERVER_URL=http://localhost:8004; export AGENT_NAME=orchestrator; export PORT=8000; cd app && /home/xbill/.pyenv/shims/python3 main.py
xbill 8052 5.6 2.1 316764 138524 pts/1 Sl 17:06 0:02 /home/xbill/.pyenv/versions/3.13.13/bin/python3 main.py
xbill 8392 0.0 0.0 2588 1792 pts/1 S 17:06 0:00 sh -c vite --host 0.0.0.0
xbill 8393 1.0 1.3 9956808 88484 pts/1 Sl 17:06 0:00 node /home/xbill/gemini-cli-aws/multi-lightsail/app/frontend/node_modules/.bin/vite --host 0.0.0.0
And tested end-to-end locally:
xbill@penguin:~/gemini-cli-aws/multi-lightsail$ make e2e-test
Running end-to-end test against http://localhost:8000...
Temporary JSON file content: {"message":"Create a short course about the history of the internet","user_id":"e2e_test_user"}
Executing: curl -s -X POST http://localhost:8000/api/chat_stream -H "Content-Type: application/json" -d @/tmp/tmp.WIIdN0xDv4 --no-buffer
{"type":"progress","text":"\ud83d\ude80 Connected to backend, starting research..."}
{"type":"progress","text":"\ud83d\ude80 Starting the course creation pipeline..."}
{"type":"progress","text":"\ud83d\udd0d Research is starting..."}
{"type":"progress","text":"\ud83d\udd0d Researcher is gathering information..."}
{"type":"progress","text":"\u2696\ufe0f Judge is evaluating findings..."}
{"type":"progress","text":"\u2696\ufe0f Judge is evaluating findings..."}
{"type":"progress","text":"\u270d\ufe0f Building the final course content..."}
{"type":"progress","text":"\u270d\ufe0f Content Builder is writing the course..."}
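The streamed progress events can also be consumed programmatically. A sketch, assuming the backend emits one JSON object per line as in the test output above (the parsing helper itself is illustrative, not project code):

```python
import json

def parse_progress(stream_lines):
    """Extract the text of progress events from a JSON-lines stream.

    Assumes each non-empty line is a JSON object shaped like
    {"type": "progress", "text": "..."}.
    """
    messages = []
    for line in stream_lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("type") == "progress":
            messages.append(event["text"])
    return messages

sample = [
    '{"type": "progress", "text": "Research is starting..."}',
    '{"type": "progress", "text": "Judge is evaluating findings..."}',
]
print(parse_progress(sample))
```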
Then connect to the front end locally:
And the entire agent system will run in the local environment:
Local Logging / Debugging
Gemini CLI has full access to the local agent logs for debugging and troubleshooting:
✦ I've identified that the log files are located in the project root: researcher.log, judge.log, content_builder.log, orchestrator.log,
backend.log, and frontend.log. I will now check the last few lines of each to get an overview of their current status.
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ ✓ Shell Read the last 20 lines of each log file. │
│ │
│ ... first 236 lines hidden (Ctrl+O to show) ... │
│ │
│ > ai-course-creator-frontend@1.0.0 dev │
│ > vite --host 0.0.0.0 │
│ │
│ │
│ VITE v6.4.2 ready in 265 ms │
│ │
│ ➜ Local: http://localhost:5173/ │
│ ➜ Network: http://100.115.92.203:5173/ │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
✦ The logs confirm a successful end-to-end run of the AI Course Creator. The backend received a request at 17:08:25, and the orchestrator
coordinated the researcher, judge, and content_builder agents, finishing at 17:09:14 with 8,348 characters of generated content. Specifically,
the judge approved the 11,888 characters of research findings at 17:09:02, triggering the final content construction. All services reported 200
OK status codes and the Vite frontend remains active on http://localhost:5173.
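A log review like the one above can be approximated with a small helper. This is an illustrative sketch; the file names match the logs the Makefile writes to the project root:

```python
from pathlib import Path

# Log files the local make targets write to the project root.
LOG_FILES = ["researcher.log", "judge.log", "content_builder.log",
             "orchestrator.log", "backend.log", "frontend.log"]

def tail(path, n=20):
    """Return the last n lines of a log file, or [] if it does not exist."""
    p = Path(path)
    if not p.exists():
        return []
    return p.read_text(errors="replace").splitlines()[-n:]

for name in LOG_FILES:
    lines = tail(name)
    print(f"--- {name} ({len(lines)} lines) ---")
    for line in lines:
        print(line)
```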
Deploying to Amazon Lightsail
The project-level Makefile has targets for managing the agent deployment to serverless endpoints. First, authenticate to AWS:
xbill@penguin:~/gemini-cli-aws/multi-lightsail$ aws login --remote
Browser will not be automatically opened.
Please visit the following URL:
And save the credentials:
xbill@penguin:~/gemini-cli-aws/multi-lightsail$ source save-aws-creds.sh
Exporting AWS credentials...
Successfully saved credentials to .aws_creds
The Makefile will now automatically use these for deployments.
xbill@penguin:~/gemini-cli-aws/multi-lightsail$
A utility script handles the deployment to Lightsail. You can submit the build for cloud deployment:
✦ The Lightsail deployment and E2E test were successful. I've refreshed the AWS credentials, deployed all five agent containers, verified the
endpoint, and confirmed the system's functionality by generating a "History of the Internet" course.
Deployment Summary
- Service Name: course-creator-service
- Region: us-east-1
- Endpoint: https://course-creator-service.6wpv8vensby5c.us-east-1.cs.amazonlightsail.com/
(https://course-creator-service.6wpv8vensby5c.us-east-1.cs.amazonlightsail.com/)
- Status: ACTIVE (Verified via HTTP 200 and E2E test)
Agents Deployed
1. Researcher: gemini-2.5-flash with Google Search.
2. Judge: Content validation agent.
3. Content Builder: Markdown course generator.
4. Orchestrator: Sequential pipeline manager.
5. App: FastAPI backend/SSE streamer.
You can now access the web UI at the endpoint URL above.
The deployment status:
✦ The service is RUNNING and ACTIVE. Although the URL is showing as None in the table output—likely because publicEndpoint.url is nested
differently than expected—I've already confirmed the correct URL with make endpoint-lightsail.
Service Status
- State: RUNNING
- Deployment: ACTIVE
- Verified URL: https://course-creator-service.6wpv8vensby5c.us-east-1.cs.amazonlightsail.com/
(https://course-creator-service.6wpv8vensby5c.us-east-1.cs.amazonlightsail.com/)
You're all set! The multi-agent system is live and fully functional on AWS Lightsail.
Once the containers are deployed, you can get the endpoint:
> make endpoint
✦ I will run make endpoint to retrieve the Lightsail service URL.
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ ✓ Shell make endpoint │
│ │
│ https://course-creator-service.6wpv8vensby5c.us-east-1.cs.amazonlightsail.com/ │
│ │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
✦ The Lightsail service endpoint is:
https://course-creator-service.6wpv8vensby5c.us-east-1.cs.amazonlightsail.com/
(https://course-creator-service.6wpv8vensby5c.us-east-1.cs.amazonlightsail.com/)
The service will be visible in the Lightsail console:
Running the Web Interface
Start a connection to the Lightsail-deployed app:
https://course-creator-service.6wpv8vensby5c.us-east-1.cs.amazonlightsail.com/
Then connect to the app:
Then use the online course generator:
Final Gemini CLI Code Review
As a final step — Gemini CLI was used for a full code review of the project:
✦ This multi-agent project is a well-engineered application of ADK 2.5 and the A2A protocol. The separation of specialized agents (Researcher,
Judge, Content Builder) coordinated by a central Orchestrator demonstrates a mature microservice-oriented design.
Key Strengths
* Coordinated Orchestration: The use of SequentialAgent and LoopAgent creates a robust, iterative research-judge cycle.
* Resilient State Flow: The strategy of passing data through event content and "recovering" it via before_agent_callback heuristics is a clever
way to handle state across distributed, independent session stores.
* Polished Streaming: The web app's SSE implementation, specifically the greedy overlap deduplication (merge_strings) and system message
cleanup, ensures a high-quality user experience despite the inherent noise in multi-agent LLM streams.
* Cloud-Native Readiness: Using Identity Tokens for authenticated service-to-service communication and middleware for dynamic A2A URL rewriting
makes the system ready for production deployment on Cloud Run.
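The greedy overlap deduplication mentioned above can be illustrated with a short sketch. This is a hypothetical reimplementation for explanation only, not the project's actual merge_strings:

```python
def merge_strings(left: str, right: str) -> str:
    """Concatenate two stream chunks, dropping the repeated overlap.

    Greedily tries the longest prefix of `right` that is also a suffix of
    `left`, so overlapping SSE fragments merge without duplicated text.
    """
    max_k = min(len(left), len(right))
    for k in range(max_k, 0, -1):  # try the longest overlap first
        if left.endswith(right[:k]):
            return left + right[k:]
    return left + right

print(merge_strings("The quick brown", "brown fox"))  # The quick brown fox
```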
Summary
The Agent Development Kit (ADK) was used to build a multi-agent system with A2A support using the Gemini Flash LLM model. The application was tested locally with Gemini CLI and then deployed to AWS Lightsail. Several key takeaways and lessons learned were captured while debugging and testing the multi-agent system, including deep log reviews. Finally, Gemini CLI was used for a complete project code review.