Most AI market research starts too late.
A founder, marketer, or product manager opens a blank chat box and asks:
Analyze this market. Tell me the customer pain points. Find competitors. Suggest positioning.
The model can produce a polished answer. But if the agent has not read the market first, the answer is usually built on stale knowledge, generic assumptions, and whatever the prompt happened to include.
A better workflow is simple:
Do not ask the AI to guess the market. Send it to read the market.
That means giving the agent a browser, a clear research plan, and permission to collect evidence from the places where customers and competitors actually exist: Reddit threads, Amazon reviews, G2 complaints, Capterra pages, Product Hunt launches, TikTok comments, YouTube reviews, Taobao listings, support docs, pricing pages, and logged-in tools when needed.
This article shows a practical way to build that workflow with BrowserMan and an AI agent such as OpenClaw.
The job is not "write a report"
The job is to collect usable market evidence.
Before asking for conclusions, your agent should build a research table like this:
| Source | What to collect | Why it matters |
|---|---|---|
| Reddit | complaint threads, alternatives, buying questions | raw customer language and objections |
| Amazon reviews | 1-star and 5-star review patterns | pains, delight moments, feature gaps |
| G2 / Capterra | competitor complaints and comparisons | B2B switching triggers |
| Product Hunt | launch comments and positioning | early adopter language |
| YouTube / TikTok comments | emotional reactions and repeated phrases | content hooks and desire language |
| Taobao / marketplaces | price bands, product claims, seller positioning | category structure and merchandising |
| Competitor websites | pricing, use cases, docs, changelog | positioning, packaging, roadmap signals |
The report comes after this table, not before it.
Why a real browser matters
Search APIs are useful, but market research often lives on messy web surfaces:
- pages that render dynamically,
- search results that change by region or login state,
- reviews hidden behind tabs and filters,
- social platforms with their own interaction patterns,
- ecommerce pages that require a real session,
- SaaS dashboards, CRMs, support inboxes, and admin panels with no public API.
A browser-based agent can click, search, filter, scroll, open product pages, read comments, and save raw evidence.
A real browser session adds one more important property: the agent can use the same web surface the human would use, while the human keeps control of the session.
That is the BrowserMan pattern:
- the agent gets delegated browser access,
- the user stays logged in locally,
- cookies remain in the user's browser,
- tasks can be scoped,
- actions can be audited,
- access can be revoked.
For market research, this is often the difference between a toy demo and a workflow that can actually inspect the sites your team depends on.
A practical workflow
Here is the workflow I would start with.
1. Pick one narrow research question
Bad prompt:
Analyze the skincare market.
Better prompt:
Research sunscreen products for oily skin in China. Collect Taobao search results, price bands, repeated product-title claims, and common positioning angles. Do not write conclusions until raw data is saved.
Or for SaaS:
Research customer complaints about AI support tools. Collect Reddit threads, G2 reviews, Capterra reviews, competitor pricing pages, and Product Hunt comments. Extract repeated objections, switching triggers, and comparison language.
The narrower the question, the better the browser agent can collect evidence.
2. Give the agent source categories, not just keywords
A useful agent plan should include source types:
- community conversations,
- review sites,
- marketplace listings,
- competitor pages,
- social comments,
- internal or logged-in tools if permitted.
For example:
Use these sources:
1. Reddit threads for customer complaints and alternatives.
2. Amazon reviews for adjacent products.
3. G2/Capterra for B2B competitor complaints.
4. Competitor pricing and docs pages.
5. Taobao search results if the category is China ecommerce.
For each source, save the URL, title, short excerpt, extracted signal, and confidence.
This prevents the agent from turning the task into a shallow web summary.
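One way to keep the plan from drifting is to hand it to the agent as data rather than prose. A minimal sketch in Python; the structure, queries, and field names here are illustrative assumptions, not a BrowserMan schema:

```python
# A hypothetical source plan: one entry per source category, with the
# queries to run and the fields to capture for every hit. Field names
# match the "URL, title, excerpt, signal, confidence" rule above.
SOURCE_PLAN = [
    {
        "source": "reddit",
        "queries": ["ai support tool complaints", "zendesk alternatives"],
        "capture": ["url", "title", "excerpt", "signal", "confidence"],
    },
    {
        "source": "g2",
        "queries": ["ai customer support"],
        "capture": ["url", "title", "excerpt", "signal", "confidence"],
    },
    {
        "source": "competitor_pricing",
        "queries": ["<competitor> pricing"],
        "capture": ["url", "plan_names", "price_points", "excerpt"],
    },
]
```

A plan like this also makes it easy to audit coverage afterward: every source category either produced rows or produced an explicit "nothing found" note.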
3. Save raw evidence first
Ask the agent to save intermediate files:
- raw-search-results.json,
- review-sample.csv,
- competitor-pages.md,
- source-notes.md,
- market-signals.csv.
This matters because market research should be inspectable. You want to know where a claim came from.
If an agent says "customers care about trust," that is not enough. It should be able to point to the review, thread, comment, or product page that produced the claim.
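What "save raw evidence" can look like in practice: a minimal Python sketch that appends one traceable record per claim into raw-search-results.json, stored as JSON lines so records can be appended as the agent browses. The record fields and example values are hypothetical:

```python
import json
from datetime import datetime, timezone

def save_evidence(record: dict, path: str = "raw-search-results.json") -> None:
    """Append one evidence record as a JSON line so every claim stays traceable."""
    record = {**record, "collected_at": datetime.now(timezone.utc).isoformat()}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Hypothetical example: the source is saved before any conclusion is drawn from it.
save_evidence({
    "source": "reddit",
    "url": "https://www.reddit.com/r/smallbusiness/comments/example",
    "title": "Why I stopped trusting AI support tools",
    "excerpt": "the bot keeps guessing instead of reading our docs",
    "signal": "trust objection before purchase",
    "confidence": "medium",
})
```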
4. Convert messy web pages into structured rows
A good market-research row might look like this:
| Field | Example |
|---|---|
| source | Reddit thread |
| URL | thread URL |
| segment | ecommerce founder |
| pain | ad hooks sound generic |
| exact phrase | "AI keeps writing the same bland copy" |
| competitor mentioned | tool name |
| implication | build a customer-language bank before writing copy |
For ecommerce marketplace research:
| Field | Example |
|---|---|
| source | Taobao search |
| query | 防晒霜 (sunscreen) |
| title | sunscreen milk SPF50+ for oily skin |
| price | RMB 89 |
| seller claim | lightweight, non-greasy, waterproof |
| target persona | oily skin / outdoor use |
| product URL | item page |
Once the agent has rows, the analysis gets much better.
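As a sketch, the row can be enforced as a small schema so that everything the agent reads lands in market-signals.csv with identical columns. The field names mirror the table above; the helper itself is an assumption, not part of any tool:

```python
import csv
import os
from dataclasses import dataclass, asdict, fields

@dataclass
class MarketSignal:
    source: str        # e.g. "Reddit thread" or "Taobao search"
    url: str
    segment: str       # who is speaking, e.g. "ecommerce founder"
    pain: str          # the problem in one line
    exact_phrase: str  # verbatim customer language, not a paraphrase
    competitor: str    # tool or brand mentioned, empty if none
    implication: str   # what this row suggests doing next

def append_signal(row: MarketSignal, path: str = "market-signals.csv") -> None:
    """Append one structured row to the CSV, writing the header on first use."""
    is_new = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=[f.name for f in fields(MarketSignal)])
        if is_new:
            writer.writeheader()
        writer.writerow(asdict(row))
```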
5. Analyze patterns, not isolated pages
Now the agent can answer useful questions:
- Which complaints repeat across sources?
- Which competitors are mentioned together?
- Which price bands dominate?
- Which claims are common and therefore commoditized?
- Which claims are rare but emotionally strong?
- What objections appear before purchase?
- What language do customers use when they are frustrated?
- What does the market already believe?
This is where AI becomes useful: clustering messy evidence into patterns.
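Much of that aggregation needs no model at all once the rows exist. A minimal sketch over market-signals.csv, using the column names defined above; the agent's job is then to cluster the near-duplicate phrasings these counts surface:

```python
import csv
from collections import Counter

with open("market-signals.csv", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Which complaints repeat, and do they repeat across distinct sources?
pain_counts = Counter(r["pain"].strip().lower() for r in rows if r["pain"])
sources_per_pain: dict[str, set[str]] = {}
for r in rows:
    if r["pain"]:
        sources_per_pain.setdefault(r["pain"].strip().lower(), set()).add(r["source"])

# Which competitors come up most often?
competitor_counts = Counter(r["competitor"] for r in rows if r["competitor"])

for pain, n in pain_counts.most_common(10):
    print(f"{pain}: {n} mentions across {len(sources_per_pain[pain])} source(s)")
print(competitor_counts.most_common(5))
```

Repetition across distinct sources is a stronger signal than raw frequency; one angry thread can produce ten rows, but the same complaint on Reddit, G2, and a competitor's reviews is a pattern.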
6. Turn the output into decisions
The final output should be more than a summary. It should produce decisions or next actions:
- positioning angles,
- landing-page sections,
- ad hook banks,
- competitor battle cards,
- product gap hypotheses,
- pricing observations,
- content topics,
- sales enablement notes,
- research questions for a follow-up run.
Market research is only useful when it changes what you do next.
Example prompt: market research agent
Here is a reusable prompt.
You are a market research agent with access to my browser through BrowserMan.
Goal:
Research [category/persona/problem] and create an evidence-backed market brief.
Rules:
1. Do not start by writing the final report.
2. First collect raw evidence from real web sources.
3. Save source URLs and short excerpts for every important claim.
4. Prefer customer language over generic summaries.
5. Separate evidence from interpretation.
6. If a source requires my logged-in browser session, use BrowserMan and do not ask for passwords or cookies.
Sources to inspect:
- Reddit threads and comments
- Amazon or marketplace reviews
- G2/Capterra/Trustpilot reviews if relevant
- competitor pricing pages and docs
- Product Hunt or launch comments
- YouTube/TikTok comments if relevant
- [specific sites for this market]
Deliverables:
1. Raw source list with URLs.
2. Structured CSV of signals.
3. Top repeated pains and exact phrases.
4. Competitor comparison notes.
5. Positioning opportunities.
6. Recommended next research questions.
Example: ecommerce market research
For an ecommerce category, the agent might collect:
- marketplace product titles,
- price bands,
- review complaints,
- frequently repeated claims,
- seller positioning,
- influencer and social-comment language,
- competitor SKU differences.
In a recent BrowserMan workflow, an agent collected Taobao sunscreen search results and turned them into a small market-analysis dataset: product titles, prices, seller claims, and positioning signals.
That research produced a separate report on the China sunscreen category, but the deeper lesson was the workflow: collect real market data first, then ask the agent to analyze it.
The same pattern works for Amazon, Shopify stores, Etsy, TikTok Shop, JD, Tmall, and other ecommerce surfaces.
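Price-band questions in particular are plain arithmetic once listings are saved as rows. A sketch assuming a hypothetical marketplace-listings.csv export with a price column in RMB, like the Taobao example above:

```python
import csv
from collections import Counter

def price_band(price: float, width: int = 50) -> str:
    """Bucket a price into a fixed-width band, e.g. 89 -> 'RMB 50-99'."""
    low = int(price // width) * width
    return f"RMB {low}-{low + width - 1}"

with open("marketplace-listings.csv", encoding="utf-8") as f:  # hypothetical export
    bands = Counter(
        price_band(float(row["price"]))
        for row in csv.DictReader(f)
        if row.get("price")
    )

for band, count in bands.most_common():
    print(band, count)  # which price bands dominate the category
```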
Example: SaaS competitor research
For a SaaS category, the agent might collect:
- G2 and Capterra complaints,
- Reddit alternative threads,
- competitor pricing pages,
- changelogs and docs,
- Product Hunt comments,
- LinkedIn or X launch posts,
- support-center gaps.
The output could become:
- a competitor battle card,
- a landing-page rewrite,
- a sales objection sheet,
- a roadmap hypothesis,
- a list of comparison pages to publish.
Again, the value comes from reading real market surfaces before writing strategy.
Where BrowserMan fits
BrowserMan is not the report writer. Your model can already write.
BrowserMan is the browser layer that lets the agent collect better inputs.
Use BrowserMan when the task needs to:
- read rendered web pages,
- use your logged-in browser session,
- search platforms with real UI behavior,
- collect data from sites without clean APIs,
- inspect pages that depend on cookies or session state,
- preserve a human-controlled access boundary.
The best market-research agents will not be the ones with the fanciest prose. They will be the ones with the best evidence loop.
Real sources in. Structured evidence out. Then analysis.
The operating principle
If your AI market research starts from a blank chat box, it will sound like every other AI answer.
If it starts from real customer language, competitor pages, reviews, marketplace data, and social comments, it becomes useful.
That is the shift:
AI market research should not begin with generation. It should begin with browser-based collection.
Give the agent a real browser, a narrow question, source categories, and a requirement to save evidence before conclusions.
Then the report has a chance to matter.