Leveraging Gemini CLI and the underlying Gemini LLM to build Model Context Protocol (MCP) AI applications with Python, from a local development environment to a deployment on the Lambda Managed Instances (LMI) service on AWS.
Aren’t There a Gazillion Python MCP Demos?
Yes, there are.
Python has traditionally been the main coding language for ML and AI tools. The goal of this article is to provide a minimal, working Python MCP server that can be run locally and deployed remotely, without any unneeded extra code or extensions.
What Is Python?
Python is an interpreted language that allows for rapid development and testing, and it has deep libraries for working with ML and AI.
Python Version Management
One of the downsides of Python's wide deployment has been managing language versions across platforms and staying on a supported release.
The pyenv tool enables deploying consistent versions of Python:
GitHub - pyenv/pyenv: Simple Python version management
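For example, a typical pyenv flow installs and selects a 3.13 release (the exact version here is illustrative):
pyenv install 3.13
pyenv global 3.13
python --version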
As of this writing, the mainstream Python version is 3.13. To validate your current Python:
xbill@penguin:~/gemini-cli-aws/mcp-lambdami-python-aws$python --version
Python 3.13.13
Gemini CLI
If not pre-installed you can download the Gemini CLI to interact with the source files and provide real-time assistance:
npm install -g @google/gemini-cli
Testing the Gemini CLI Environment
Once you have all the tools and the correct Node.js version in place, you can test the startup of Gemini CLI. You will need to authenticate with an API key or your Google Account:
gemini
admin@ip-172-31-70-211:~/gemini-cli-aws/mcp-lambda-python-aws$gemini
▝▜▄ Gemini CLI v0.33.1
▝▜▄
▗▟▀ Logged in with Google /auth
▝▀ Gemini Code Assist Standard /upgrade no sandbox (see /docs) /model Auto (Gemini 3) | 239.8 MB
Node Version Management
Gemini CLI needs a consistent, up-to-date version of Node.js. The nvm command can be used to get a standard Node environment.
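For example, to install and activate the current LTS release:
nvm install --lts
nvm use --lts
node --version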
Python MCP Documentation
The official GitHub repo provides samples and documentation for getting started:
GitHub - modelcontextprotocol/python-sdk: The official Python SDK for Model Context Protocol servers and clients
The most common MCP Python deployment path uses the FastMCP library.
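As a sketch, a minimal FastMCP server with a single tool looks roughly like this (assuming the FastMCP API; the greet name and param argument mirror the tool called later in this article):

from fastmcp import FastMCP

# Create the MCP server instance
mcp = FastMCP("hello-world")

@mcp.tool()
def greet(param: str) -> str:
    """Return a friendly greeting."""
    return f"Hello, {param}!"

if __name__ == "__main__":
    # stdio is the simplest transport for local testing
    mcp.run(transport="stdio")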
Docker Version Management
The AWS CLI tools and Lightsail extensions need a current version of Docker. If your environment does not provide a recent Docker tool, the Docker Version Manager (dvm) can be used to download the latest supported Docker.
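A typical dvm flow installs and activates a specific Docker version (the version number here is illustrative):
dvm install 27.3.1
dvm use 27.3.1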
To check the version of Docker:
xbill@penguin:~/gemini-cli-aws/mcp-lambdami-python-aws$dvm --version
Docker Version Manager version 1.0.3 (e59ba7b)
Amazon Lambda Managed Instances
Lambda Managed Instances (LMI) allow running AWS Lambda functions on dedicated EC2 instances, blending serverless simplicity with EC2 performance flexibility. They feature fully managed infrastructure (patching, scaling) with higher throughput, no cold starts, and cost-efficient scaling for predictable, high-volume, memory-intensive, or specialized hardware workloads.
Full details are here:
AWS CLI
The AWS CLI provides a command line tool to directly access AWS services from your current environment. Full details on the CLI are available here:
Install Docker, AWS CLI, and the Lightsail Control plugin for containers
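On Linux x86_64, the documented AWS CLI v2 installation looks like this:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install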
You can version check the tool after installation:
xbill@penguin:~/gemini-cli-aws/mcp-lambdami-python-aws$aws --version
aws-cli/2.34.43 Python/3.14.4 Linux/6.6.99-09128-g14e87a8a9b71 exe/x86_64.debian.12
xbill@penguin:~/gemini-cli-aws/mcp-lambdami-python-aws$
Where do I start?
The strategy for starting MCP development is an incremental, step-by-step approach.
First, the basic development environment is set up with the required system variables and a working Gemini CLI configuration.
Then, a minimal Hello World-style Python MCP server is built with HTTP transport. This server is validated with Gemini CLI in the local environment.
This setup validates the connection from Gemini CLI to the local process via MCP. The MCP client (Gemini CLI) and the Python MCP server both run in the same local environment.
Next, the MCP server is wrapped in a container with Docker and deployed to Lambda Managed Instances. This remote deployment is validated with Gemini CLI running as an MCP client.
Setup the Basic Environment
At this point you should have a working Python interpreter and a working Gemini CLI installation. The next step is to clone the GitHub samples repository with support scripts:
cd ~
git clone https://github.com/xbill9/gemini-cli-aws
Then run init.sh from the cloned directory.
The script will attempt to determine your shell environment and set the correct variables:
cd gemini-cli-aws
source init.sh
If your session times out or you need to re-authenticate, you can run the set_env.sh script to reset your environment variables:
cd gemini-cli-aws
source set_env.sh
Variables like PROJECT_ID need to be set up for use in the various build scripts, so the set_env.sh script can be used to reset the environment if you time out.
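A quick sanity check that the variables survived (PROJECT_ID is the variable the build scripts rely on):
echo "PROJECT_ID=${PROJECT_ID:?unset - run 'source set_env.sh'}"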
Hello World with HTTP Transport
One of the key features that the standard MCP libraries provide is abstracting the various transport methods.
The high-level MCP tool implementation is the same no matter which low-level transport the MCP client uses to connect to an MCP server.
The simplest transport that the SDK supports is the stdio (stdin/stdout) transport, which connects to a locally running process. Both the MCP client and MCP server must be running in the same environment.
The HTTP transport allows the MCP client and server to run in the same environment or distributed over the Internet.
The connection over HTTP will look similar to this:
import os

# assumes PORT comes from the environment; defaults to 8080 for local runs
port = int(os.environ.get("PORT", "8080"))
mcp.run(
    transport="http",
    host="0.0.0.0",
    port=port,
)
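To validate this locally before any cloud deployment, start the server (for example, with python main.py) and point Gemini CLI at the local process with an httpUrl entry. A sketch, assuming the server name hello-local and port 8080:
{
  "mcpServers": {
    "hello-local": {
      "httpUrl": "http://localhost:8080/mcp/"
    }
  }
}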
Running the Python Code
First, switch to the directory with the Python MCP sample code:
cd ~/gemini-cli-aws/mcp-lambdami-python-aws
Refresh the AWS credentials:
xbill@penguin:~/gemini-cli-aws/mcp-lambdami-python-aws$aws login --remote
Browser will not be automatically opened.
xbill@penguin:~/gemini-cli-aws/mcp-lambdami-python-aws$source save-aws-creds.sh
Exporting AWS credentials...
Successfully saved credentials to .aws_creds
The Makefile will now automatically use these for deployments.
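The saved .aws_creds file typically holds the standard AWS environment variables (a sketch; the exact contents depend on the script and your session):
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...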
Run the deploy target on the local system:
xbill@penguin:~/gemini-cli-aws/mcp-lambdami-python-aws$make deploy
Step 0: Running Tests...
make[1]: Entering directory '/home/xbill/gemini-cli-aws/mcp-lambdami-python-aws'
Running tests...
============================================================== test session starts ===============================================================
platform linux -- Python 3.13.13, pytest-9.0.2, pluggy-1.6.0
rootdir: /home/xbill
configfile: pyproject.toml
plugins: anyio-4.11.0, asyncio-1.3.0, langsmith-0.8.0
asyncio: mode=Mode.STRICT, debug=False, asyncio_default_fixture_loop_scope=None, asyncio_default_test_loop_scope=function
collected 3 items
tests/test_main.py ... [100%]
=============================================================== 3 passed in 0.74s ================================================================
make[1]: Leaving directory '/home/xbill/gemini-cli-aws/mcp-lambdami-python-aws'
Step 1: Building and Pushing Docker Image to ECR...
make[1]: Entering directory '/home/xbill/gemini-cli-aws/mcp-lambdami-python-aws'
Checking if ECR repository exists...
Logging in to Amazon ECR...
You can validate the final result by checking the closing deployment messages:
Step 7: Publishing to LATEST.PUBLISHED for Managed Instances support...
Step 8: Finalizing API Gateway...
Deployment complete!
API Endpoint URL: https://3x10n3uguc.execute-api.us-east-1.amazonaws.com
MCP HTTP URL: https://3x10n3uguc.execute-api.us-east-1.amazonaws.com/mcp
Health Check URL: https://3x10n3uguc.execute-api.us-east-1.amazonaws.com/health
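Before wiring up Gemini CLI, you can spot-check the endpoint with curl (assuming the /health route returns a simple status payload):
curl https://3x10n3uguc.execute-api.us-east-1.amazonaws.com/health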
Once everything is deployed, you can check the status:
xbill@penguin:~/gemini-cli-aws/mcp-lambdami-python-aws$make status
Function Status (mcp-lambda-python-aws):
---------------------------------------------------------------------------------
| GetFunction |
+------------------------------+-------------------------+----------------------+
| LastModified | Name | Status |
+------------------------------+-------------------------+----------------------+
| 2026-05-06T16:33:26.000+0000| mcp-lambda-python-aws | ActiveNonInvocable |
+------------------------------+-------------------------+----------------------+
Prod Alias Status:
---------------------
| GetAlias |
+-------+-----------+
| Name | Version |
+-------+-----------+
| prod | 28 |
+-------+-----------+
You can then get the endpoint:
xbill@penguin:~/gemini-cli-aws/mcp-lambdami-python-aws$make endpoint
https://3x10n3uguc.execute-api.us-east-1.amazonaws.com
Gemini CLI settings.json
Once you have the deployed endpoint, update the Gemini CLI MCP settings (typically in ~/.gemini/settings.json):
{"mcpServers":{"mcp-lambdami-python-aws":{"httpUrl":"https://3x10n3uguc.execute-api.us-east-1.amazonaws.com/mcp/"}}}
Remote MCP Server Testing
Restart Gemini CLI and check for the new MCP tools:
xbill@penguin:~/gemini-cli-aws/mcp-lambdami-python-aws$gemini
▝▜▄ Gemini CLI v0.41.1
▝▜▄
▗▟▀ Signed in with Google /auth
▝▀ Plan: Gemini Code Assist Standard /upgrade
/mcp list
🟢 mcp-lambdami-python-aws - Ready (1 tool)
Tools:
- mcp_mcp-lambdami-python-aws_greet
Then the remote MCP tool running on Lambda Managed Instances can be called:
> greet Lambda Managed Instances!
Researching MCP Server Implementation: Reading main.py to understand the greet tool implementation.
✓ ReadFile main.py
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ ✓ greet (mcp-lambdami-python-aws MCP Server) {"param":"Lambda Managed Instances"} │
│ │
│ Hello, Lambda Managed Instances! │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
✦ Hello, Lambda Managed Instances!
Gemini CLI can also summarize the deployment:
✦ The Lambda Managed Instances (LMI) deployment in this project is a hybrid setup that combines CloudFormation for infrastructure and AWS CLI for
specialized Lambda configurations. It is designed to provide high-performance, dedicated compute for the MCP server with zero cold starts and
multi-concurrency.
1. Core Architecture
* Compute: Lambda functions run on dedicated EC2 capacity managed by AWS but living in your VPC.
* Networking: A VPC with Public and Private subnets. The Lambda resides in the Private subnet and uses a NAT Gateway for outbound internet
access (required for external tool calls).
* Inbound: API Gateway (HTTP API) acts as the entry point, routing traffic to a Lambda Alias (prod).
2. Infrastructure (CloudFormation)
The template.yaml defines the foundational components:
* VPC & NAT Gateway: Essential for providing the dedicated instances with a secure environment and internet connectivity.
* IAM Roles:
* Execution Role: Standard Lambda permissions + VPC access.
* Operator Role: Allows the Lambda service to manage EC2 resources (Scaling, Patching) on your behalf via the
AWSLambdaManagedEC2ResourceOperator policy.
* API Gateway: Configured with a $default route pointing to the prod alias of the Lambda.
3. Managed Instances Configuration (deploy.sh)
Because LMI features are often newer or require specific CLI sequences, deploy.sh handles the following:
* Capacity Provider: A named resource (mcp-lambda-python-aws-cp) that defines:
* Instance Requirements: Architectures (x86_64) and Scaling (Max vCPU count).
* VPC Placement: Subnets and Security Groups.
* Lambda Function: Created with a capacity-provider-config that links it to the Capacity Provider.
* Memory/vCPU: Set to 2048MB (2GB) minimum to meet LMI requirements.
* Package Type: Image (Container-based using the project's Dockerfile).
4. Deployment Workflow
1. Build & Push: Docker image is built and pushed to Amazon ECR.
2. Infra Deploy: CloudFormation creates the VPC, IAM roles, and API Gateway.
3. CP Setup: The script ensures the Capacity Provider exists and is configured.
4. Function Update/Create: The Lambda function is deployed or updated with the LMI configuration.
5. Versioning:
* A new version is published.
* The prod alias is updated to point to this version.
* LATEST_PUBLISHED support: The function is published to the special LATEST_PUBLISHED state, which is a requirement for LMI to manage
instances effectively.
5. Key Operational Characteristics
* Multi-concurrency: Unlike standard Lambda, LMI handles multiple concurrent requests per instance (default 8), making it ideal for the
persistent connections used in MCP SSE.
* Stateless HTTP: The server uses stateless_http=True in FastMCP, ensuring compatibility with the Lambda lifecycle.
* Warm-up Period: Functions may show as ActiveNonInvocable initially while the Managed Instances are being provisioned and warmed up.
Summary
The strategy for using Python for MCP development with Gemini CLI on Amazon Lambda Managed Instances was validated with an incremental, step-by-step approach.
A minimal HTTP transport MCP server was started from Python source code and validated with Gemini CLI running as an MCP client in the same local environment.
This MCP server was then wrapped in a Docker container and deployed to Lambda Managed Instances. The local copy of Gemini CLI was used as an MCP client to validate the connection.
This approach can be extended to more complex deployments using other MCP transports and cloud-based options.