Every developer has faced the dilemma: you want AI to help analyze your code, but shipping proprietary source code to a cloud API feels wrong. What if the analysis happened entirely on your machine?
Over the past few months, I've built a suite of open-source developer productivity tools powered by local LLMs. No API keys. No cloud dependencies. No data leaving your laptop. In this post, I'll walk through five of these tools, explain why local-first AI matters for code analysis, and share real code you can run today.
Why Code Analysis Should Stay Local
Before diving into the tools, let's talk about why running AI locally for code analysis isn't just a nice-to-have — it's often the right default.
Proprietary code stays private. When you're analyzing enterprise codebases, client projects, or pre-release features, sending code to a third-party API creates compliance and IP risks. Local inference means your code never leaves your machine.
Speed without rate limits. Cloud APIs throttle requests, add network latency, and sometimes go down entirely. A local model running on your GPU responds in seconds with zero network dependency.
Cost drops to zero. API calls add up fast when you're analyzing hundreds of files. Local models have a one-time setup cost (downloading the model) and then run for free, forever.
Offline capability. Airports, coffee shops with spotty WiFi, or air-gapped environments — local AI works everywhere.
In my experience building developer tools, the combination of privacy and zero-cost iteration makes local LLMs the ideal foundation for code analysis workflows.
The Stack: Ollama + Gemma 3 + Python
All five tools share a common architecture:
- Ollama as the local model server — it manages model downloads, GPU allocation, and exposes a simple REST API
- Gemma 3 (Google's open-weight model) as the LLM — excellent at code understanding, translation, and structured analysis
- Python with Streamlit for interactive UIs and requests for Ollama communication
Here's the base pattern every tool uses:
```python
import requests
import json


def query_ollama(prompt, model="gemma3"):
    """Send a prompt to the local Ollama instance and return the response."""
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": prompt,
            "stream": False,
        },
    )
    response.raise_for_status()  # fail loudly if Ollama isn't running
    return response.json()["response"]
```
This simple function is the foundation of every tool. No API keys, no authentication, no billing — just a local HTTP call.
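The call above sets "stream": False for simplicity, but Ollama streams newline-delimited JSON by default, which is nicer for long generations. Here's a sketch of a streaming variant against the same local endpoint — the line-parsing helper is my own addition, not part of any Ollama client library:

```python
import json

import requests


def parse_stream_line(line: bytes) -> str:
    """Extract the text chunk from one newline-delimited JSON line
    of an Ollama streaming response (helper name is mine)."""
    data = json.loads(line)
    return data.get("response", "")


def query_ollama_streaming(prompt, model="gemma3"):
    """Yield response chunks as the local model generates them."""
    with requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": True},
        stream=True,
    ) as response:
        response.raise_for_status()
        for line in response.iter_lines():
            if line:  # skip keep-alive blank lines
                yield parse_stream_line(line)
```

Printing each chunk with `print(chunk, end="", flush=True)` gives the familiar typewriter effect with zero cloud round-trips.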
Tool 1: Code Complexity Analyzer
Repo: code-complexity-analyzer
Understanding code complexity is critical for maintainability. This tool uses Gemma 3 to analyze functions and classes, identifying cyclomatic complexity, cognitive complexity, and potential refactoring opportunities that static analyzers miss.
```python
def analyze_complexity(code_snippet, language="python"):
    prompt = f"""Analyze the following {language} code for complexity:

{code_snippet}

Provide:
1. Cyclomatic complexity score (1-10)
2. Cognitive complexity assessment
3. Nested depth analysis
4. Specific refactoring suggestions
5. Overall maintainability rating

Format your response as structured JSON."""
    result = query_ollama(prompt)
    return json.loads(result)
```
What makes this powerful is that the LLM doesn't just count branches — it understands semantic complexity. It can flag a function that's technically simple but cognitively confusing because of poor naming or implicit side effects. In my experience, running this across a codebase of 500+ files takes about 10 minutes locally versus costing $15–20 in cloud API calls.
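For those 500-file runs, a thin driver that walks the repo and feeds each file to the analyzer is all you need. A minimal sketch — both function names here are my own, not from the repo, and the analyzer is injected as a callable so you can pass in analyze_complexity (or a stub while testing):

```python
from pathlib import Path


def collect_python_files(root):
    """Return all .py files under root, sorted for stable output."""
    return sorted(Path(root).rglob("*.py"))


def analyze_repo(root, analyzer):
    """Run `analyzer(source_code)` over every Python file under root.

    Returns {relative_path: result}. In practice `analyzer` would be
    analyze_complexity; injecting it keeps this function testable offline.
    """
    root = Path(root)
    results = {}
    for path in collect_python_files(root):
        source = path.read_text(encoding="utf-8")
        results[str(path.relative_to(root))] = analyzer(source)
    return results
```

Injecting the analyzer also makes it trivial to add caching or parallelism later without touching the LLM code.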
Tool 2: Code Translator
Repo: code-translator
Migrating code between languages is one of the most tedious tasks in software engineering. This tool translates code between Python, JavaScript, TypeScript, Go, Rust, Java, and C# while preserving logic, comments, and structure.
```python
def translate_code(source_code, source_lang, target_lang):
    prompt = f"""Translate the following {source_lang} code to {target_lang}.

Source ({source_lang}):
{source_code}

Requirements:
- Preserve all logic and edge cases
- Use idiomatic {target_lang} patterns and conventions
- Include equivalent error handling
- Add comments explaining non-obvious translations

Provide only the translated code."""
    return query_ollama(prompt)


# Example: Python to Rust
python_code = """
def fibonacci(n):
    if n <= 1:
        return n
    a, b = 0, 1
    for _ in range(2, n + 1):
        a, b = b, a + b
    return b
"""

rust_code = translate_code(python_code, "python", "rust")
print(rust_code)
```
The Streamlit UI makes this particularly interactive — paste code on the left, select your target language, and get idiomatic translated code on the right. Gemma 3 handles nuances like Python's dynamic typing to Rust's ownership model surprisingly well.
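One practical wrinkle: even with "Provide only the translated code" in the prompt, models often wrap their answer in a markdown fence. A small post-processing helper (my addition, not part of the repo) keeps downstream tooling happy:

```python
def strip_code_fences(text: str) -> str:
    """Remove a surrounding markdown code fence (```rust ... ```), if present."""
    lines = text.strip().splitlines()
    if lines and lines[0].startswith("```"):
        lines = lines[1:]  # drop opening fence, including any language tag
        if lines and lines[-1].strip() == "```":
            lines = lines[:-1]  # drop closing fence
    return "\n".join(lines).strip()
```

Running the translator's output through this before saving to a file avoids the occasional stray fence breaking your build.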
Tool 3: Resume Analyzer
Repo: resume-analyzer
While not strictly a code analysis tool, this demonstrates how the same local LLM pattern applies to document analysis for developers. The resume analyzer parses resumes, scores them against job descriptions, and suggests improvements.
```python
def analyze_resume(resume_text, job_description=None):
    jd_section = f"\n\nJob Description:\n{job_description}" if job_description else ""
    prompt = f"""Analyze this resume for a software engineering position:

Resume:
{resume_text}{jd_section}

Provide:
1. Overall strength score (1-100)
2. Technical skills identified
3. Experience level assessment
4. Key strengths and areas for improvement
5. Missing keywords for ATS optimization
6. Formatting suggestions

Be specific and actionable."""
    return query_ollama(prompt)
```
I built this because I've seen too many talented engineers get filtered out by ATS systems. Running it locally means your resume content — which contains highly personal information — never touches a cloud service. Privacy matters here more than anywhere.
Tool 4: Email Draft Assistant
Repo: email-draft-assistant
Developer productivity isn't just about code. This tool helps craft professional emails — standup updates, project proposals, status reports, and technical discussions — using context-aware AI generation.
```python
def draft_email(context, tone="professional", email_type="status_update"):
    prompt = f"""Draft a {tone} {email_type} email based on this context:

Context:
{context}

Requirements:
- Clear subject line
- Concise but comprehensive body
- Appropriate greeting and sign-off
- Action items clearly highlighted
- Tone: {tone}

Format with Subject, Body, and suggested follow-up actions."""
    return query_ollama(prompt)


# Example usage
email = draft_email(
    context="Completed feature branch for dashboard redesign. "
            "Fixed 3 UI bugs. Next up: performance optimization.",
    tone="professional",
    email_type="status_update",
)
print(email)
```
The beauty of running this locally is speed of iteration. You can generate five drafts, tweak the tone parameter, and settle on the right version in under a minute — with zero cost per generation.
Tool 5: Sentiment Analyzer
Repo: sentiment-analyzer
Understanding sentiment in code reviews, team communications, and user feedback is an underrated developer skill. This tool analyzes text for emotional tone, constructiveness, and communication quality.
```python
def analyze_sentiment(text, context="code_review"):
    prompt = f"""Analyze the sentiment of this {context} text:

Text: "{text}"

Provide:
1. Overall sentiment (positive/negative/neutral)
2. Confidence score (0-1)
3. Emotional tone (constructive, critical, encouraging, etc.)
4. Specific phrases driving the sentiment
5. Suggestion for more constructive rephrasing if negative

Return as structured JSON."""
    result = query_ollama(prompt)
    return json.loads(result)


# Analyzing a code review comment
review = "This function is a mess. Why didn't you just use a dictionary?"
analysis = analyze_sentiment(review, context="code_review")
print(json.dumps(analysis, indent=2))
```
I built this after noticing how much tone affects code review culture. Running it locally means team communications stay private while still getting AI-powered insights into communication patterns.
Patterns That Emerged
After building all five tools, several patterns crystallized:
Prompt engineering is the real differentiator. The model is the same across all tools — what changes is how you structure the prompt. Specific, structured prompts with clear output formats consistently produce better results than vague instructions.
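A concrete example of this: the complexity and sentiment analyzers call json.loads on raw model output, which breaks whenever the model adds surrounding prose. Ollama's generate endpoint accepts a format field that constrains the completion to valid JSON, so structure can be enforced at the API level rather than begged for in the prompt. A sketch of the request payload — the builder function is my own:

```python
def build_json_payload(prompt, model="gemma3"):
    """Build an Ollama /api/generate payload that constrains the
    completion to valid JSON via the `format` field."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "format": "json",  # Ollama constrains sampling to valid JSON
    }
```

Passing this dict as the json= argument to requests.post (in place of the inline dict in query_ollama) makes the json.loads calls far more reliable.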
Streamlit is the perfect rapid UI. Every tool ships with a Streamlit interface that took less than an hour to build. For developer tools, it's the sweet spot between CLI and full web app.
Gemma 3 punches above its weight. For code-related tasks especially, Gemma 3 running locally produces results that rival much larger cloud models. The combination of Google's training data and Ollama's efficient inference makes it genuinely practical for daily use.
Composability matters. Because each tool follows the same pattern, they can be chained. Analyze complexity, translate to a better-suited language, generate the PR description email. The Unix philosophy applies to AI tools too.
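As a sketch of that chaining idea — each stage is injected as a callable, so the pipeline composes whichever tools you have on hand (the labels and function shapes here are illustrative, not from the repos):

```python
def chain_tools(source_code, steps):
    """Run `source_code` through a sequence of (label, tool) stages.

    Each tool takes the previous stage's output and returns text,
    e.g. [("complexity", analyze), ("rust", to_rust), ("email", draft)].
    Returns the full trace so every intermediate result is inspectable.
    """
    trace = []
    current = source_code
    for label, tool in steps:
        current = tool(current)
        trace.append((label, current))
    return trace
```

Because every tool is ultimately text-in, text-out against the same local endpoint, the plumbing stays this simple.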
Getting Started
Every repo follows the same setup:
```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull Gemma 3
ollama pull gemma3

# Clone any tool (example: code-complexity-analyzer)
git clone https://github.com/kennedyraju55/code-complexity-analyzer
cd code-complexity-analyzer

# Install dependencies and run
pip install -r requirements.txt
streamlit run app.py
```
All five tools are open source, MIT licensed, and designed to be forked and customized. If you're building developer tools, I'd encourage you to start with the query_ollama pattern above and see where it takes you.
The future of developer tooling is local, private, and composable. These five projects are my contribution to that future — and with 116+ repos and counting, I'm just getting started.
Nrk Raju Guthikonda is a Senior Software Engineer at Microsoft on the Copilot Search Infrastructure team. He maintains 116+ open-source repositories exploring AI, local LLMs, and developer productivity tools. Find him on GitHub, Dev.to, and LinkedIn.