Last year I was trying to add Reddit research to my Claude Code workflow. Seemed straightforward. Then I spent an afternoon registering an OAuth app, managing token refresh, and watching my script hit rate limits after 10 requests. Tried PRAW next, same wall. Tried raw scraping, it broke within a week when Reddit changed their markup.
I stepped back and thought: I'm already logged into Reddit in Chrome. Why can't my AI tools just use that session?
That's how SuperMCP started.
## What It Actually Does
SuperMCP is an MCP server (Model Context Protocol, Anthropic's open standard for connecting AI tools to data sources). It gives Claude, Cursor, Windsurf, or any MCP-compatible tool access to:
- Reddit: search posts, read full threads with comments, browse subreddits, check user activity
- Twitter/X: search tweets, get reply threads, pull user timelines
- Google Trends: real-time trending topics by region
- Google News: search articles, top headlines, topic-filtered news
13 tools total, all running on your machine as a local process.
## The Trick: Your Chrome Login Session
Here's what makes this different from every Reddit scraper tutorial on dev.to.
SuperMCP reads cookies from your Chrome browser's local database, the same way password managers do. It spins up a headless Chromium instance that browses Reddit and Twitter as you. You don't need API keys for Reddit or Twitter, don't need to register an OAuth app, don't need to deal with token refresh.
```
You (logged into Reddit in Chrome)
        ↓
Chrome's cookie database (on your disk)
        ↓
SuperMCP reads cookies locally
        ↓
Headless Chromium browses as you
        ↓
Results returned to Claude/Cursor via MCP
```
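If you're curious what the cookie-reading step involves, here's a minimal sketch. Chrome keeps cookies in a SQLite file (on Linux, typically `~/.config/google-chrome/Default/Cookies`) with a `cookies` table containing `host_key` and `name` columns; cookie *values* are encrypted with an OS-keychain-protected key, which is why macOS shows a keychain prompt. This sketch only lists cookie names for a domain, not decrypted values, and the path and function are illustrative, not SuperMCP's actual code:

```python
import shutil
import sqlite3
import tempfile
from pathlib import Path

def list_cookies(cookie_db: Path, domain: str) -> list[str]:
    """List cookie names stored for a domain in a Chrome-style Cookies DB.

    Values are encrypted on disk, so this only inspects names/hosts.
    """
    # Chrome locks the live database while running, so work on a copy.
    with tempfile.TemporaryDirectory() as tmp:
        copy = Path(tmp) / "Cookies"
        shutil.copy(cookie_db, copy)
        con = sqlite3.connect(copy)
        try:
            rows = con.execute(
                "SELECT name FROM cookies WHERE host_key LIKE ?",
                (f"%{domain}",),
            ).fetchall()
        finally:
            con.close()
    return [name for (name,) in rows]
```

Decrypting the values is the platform-specific part (DPAPI on Windows, the login keychain on macOS), which is what a real implementation has to handle.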
Google Trends and News use public RSS feeds directly. No login, no browser, instant results.
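The RSS path is simple enough to sketch in a few lines. Assuming the public Google Trends RSS endpoint (the URL below is what it looks like as of writing and may change), trending titles are just `<item><title>` elements in the feed:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Public trending-searches feed; endpoint is an assumption and may change.
TRENDS_RSS = "https://trends.google.com/trending/rss?geo={geo}"

def parse_trending(rss_xml: str) -> list[str]:
    """Extract trending search titles from a Trends RSS payload."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("title") or "" for item in root.iter("item")]

def fetch_trending(geo: str = "US") -> list[str]:
    """Download and parse the trending feed for a region."""
    with urllib.request.urlopen(TRENDS_RSS.format(geo=geo), timeout=10) as resp:
        return parse_trending(resp.read().decode("utf-8"))
```

No auth, no browser, just an HTTP GET and a few lines of XML parsing.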
Your data never leaves your machine. The only external call is API key validation to webmatrices.com.
## Setup: 3 Commands
```bash
pip install supermcp
supermcp setup                        # paste your API key, auto-installs Chromium
claude mcp add supermcp -- supermcp
```
That's it. If you use uvx:
```bash
uvx --from supermcp supermcp setup
claude mcp add supermcp -- uvx --from supermcp supermcp
```
For Cursor, add to .cursor/mcp.json:
```json
{
  "mcpServers": {
    "supermcp": {
      "command": "supermcp"
    }
  }
}
```
Get your free API key at webmatrices.com/supermcp. 100 requests/day on the free tier, unlimited for a one-time $9.
## What This Looks Like in Practice
Once installed, you just talk to Claude normally. Here are prompts I use:
**Market research before building a feature:**
"Search Reddit for posts about 'invoice automation for freelancers'. What are people actually complaining about?"
Claude calls reddit_search, pulls real threads with real comments, and summarizes the pain points. No copy-pasting URLs, no switching tabs.
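Under the hood, the client sends the server a JSON-RPC 2.0 `tools/call` request over stdio, per the MCP spec. Here's a sketch of what that message looks like for the prompt above; the `query` and `limit` argument names are illustrative, not SuperMCP's documented schema:

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP `tools/call` request (JSON-RPC 2.0)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Roughly what the client sends when you ask for Reddit research:
msg = make_tool_call(
    1, "reddit_search",
    {"query": "invoice automation for freelancers", "limit": 10},
)
```

The model picks the tool and arguments; you never write this JSON yourself.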
**Monitoring competitor sentiment:**
"Search Twitter for mentions of [competitor] from the last week. What's the general sentiment?"
**Trend validation:**
"What's trending on Google Trends in the US right now? Anything related to developer tools?"
**Deep-dive on a thread:**
"Get this Reddit post with all comments: [url]. Summarize the key takeaways and any tools people are recommending."
**Content research:**
"What's hot on r/SideProject this week? Any common themes?"
## The 13 Tools
| Tool | What It Does |
|---|---|
| `reddit_search` | Search all of Reddit |
| `reddit_get_post` | Full post + comments |
| `reddit_get_subreddit_posts` | Browse any subreddit (hot/new/top) |
| `reddit_search_subreddit` | Search within a subreddit |
| `reddit_get_user_activity` | A user's recent posts & comments |
| `twitter_search` | Search tweets |
| `twitter_get_tweet` | Tweet + full reply thread |
| `twitter_get_user_tweets` | Recent tweets from any account |
| `trends_get_trending` | Real-time trending by region |
| `news_search` | Search Google News |
| `news_top` | Top headlines |
| `news_by_topic` | News by category |
| `trends_interest_by_region` | Regional interest for any term |
(New tools are in the works for LinkedIn, Medium, Dev.to, and other social platforms.)
## Why Not Just Use the APIs Directly?
I tried. Here's what happened:
Reddit API: Free tier is heavily rate-limited. The paid Data API charges per request, and it adds up fast when your AI agent makes dozens of calls per conversation. You also need to register an app, manage OAuth tokens, handle refresh flows. I got it working once, and then spent more time maintaining the auth than actually using the data.
Twitter/X API: The free tier only lets you post tweets. Reading requires the Basic tier at $100/month. The Pro tier is $5,000/month. For a developer tool that just needs to search tweets? Absurd.
Google Trends: No official API exists. The popular pytrends library reverse-engineers Google's internal endpoints and breaks regularly.
SuperMCP sidesteps all of this. You're logged into Reddit and Twitter in Chrome already. SuperMCP just uses that session.
## FAQ
**Do I need Chrome open?**
No. SuperMCP reads cookies from Chrome's database on disk. Chrome doesn't need to be running.
**Does it work on macOS?**
Yes. On first run, macOS will ask for your login keychain password to read Chrome's cookies. This is the standard macOS security prompt. Click "Always Allow" so it doesn't ask again.
**Can I use this with tools other than Claude?**
Any MCP client works. Claude Desktop, Claude Code, Cursor, Windsurf, Cline, and anything else that supports the MCP protocol.
**What about Firefox or other browsers?**
Chrome only for now. Firefox stores cookies differently and would need a separate integration.
**Is this scraping?**
SuperMCP browses Reddit and Twitter as you, using your authenticated session in a headless browser. It's equivalent to you opening a tab and reading the page yourself.
PyPI: pypi.org/project/supermcp · Python 3.10+ · macOS / Windows / Linux
If you're building with MCP servers, I'd be curious what data sources you wish you had access to. Drop a comment.