Too many AI interviews became architecture work, so I built this


"Cool. I just did discovery, systems design, and tool selection for free."

We'd spent most of the conversation talking through workflow bottlenecks, where an internal AI system would probably break, what I'd ship first, and which parts I wouldn't trust in a v1. By the end of it, I'd already done the interesting engineering work.

There still wasn't a clear role. Or a scope. Or a budget.

That stuck with me because it wasn't unusual. A lot of AI hiring conversations right now ask developers to do real implementation thinking before there's any trust in place. You aren't just answering questions. You're mapping workflows, naming failure modes, sketching architecture, and giving away judgment that normally comes much later.

Not every company does this, and I want to be fair about that.

But it happened often enough that I stopped seeing it as only a hiring problem. It started to look like a product problem, and more specifically a data-shape problem.

The real problem looked like a data-shape problem

What felt broken to me wasn't "there are no good AI people."

It was that most ways of presenting AI expertise are structurally weak.

The usual format is some combination of title, paragraph, maybe a GitHub link, maybe a LinkedIn link, and then a lot of guessing. That's bad for people trying to evaluate a profile, but it's also bad from a product standpoint. If the data shape is mushy, everything downstream gets worse:

  • search is weak
  • filtering is weak
  • moderation is subjective
  • profile quality drifts fast
  • every card starts to sound the same

So instead of just complaining about AI hiring, I started building DontMakeMeCode.

The hard part wasn't rendering a directory page. It was keeping profile data structured enough to survive search, draft state, and moderation without turning into generic sludge after a few submissions.

That's why the stack choices mattered. I wanted something fast to iterate on, strongly typed, and boring in the right places.

Under the hood it's a Next.js 16 + React 19 + TypeScript app with tRPC, Drizzle, PostgreSQL on Neon, and Better Auth. The product itself is pretty small right now, but the shape is intentional:

  • a public directory
  • public expert profiles
  • a draft-based expert submission flow
  • a moderation layer that tries to keep signal high without pretending trust can be fully automated

That was the part I found interesting enough to build.

I started with build paths instead of only categories

The first thing I learned was that categories alone were too vague.

"AI Automation" is technically useful, but it's not how most people think when they're trying to find help. They usually think in jobs to be done: I need a support bot, I need a workflow cleaned up, I need to get something useful out of internal data.

So the directory leads with a build-path strip that groups the shortlist around those kinds of entry points, because categories alone made the page feel too broad too fast.

This wasn't just copy. It changed how I thought about the page structure. Instead of treating the directory like a generic index, I started treating it more like a routing problem: what are the few concrete entry points that help someone narrow the list without reading 50 vague cards?

That made the page feel more opinionated, which in this case was a good thing.
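The routing idea above can be sketched as a small lookup from "job to be done" entry points to the filters they apply. The path names and category slugs here are illustrative, not the site's real taxonomy:

```typescript
// Hypothetical build-path entries: each one is a concrete "job to be done"
// that pre-applies category filters instead of asking the visitor to browse.
type BuildPath = {
  slug: string;         // URL-safe identifier for the strip item
  label: string;        // what the visitor actually wants done
  categories: string[]; // category filters this path applies
};

const BUILD_PATHS: BuildPath[] = [
  { slug: "support-bot", label: "I need a support bot", categories: ["chatbots", "ai-automation"] },
  { slug: "workflow-cleanup", label: "I need a workflow cleaned up", categories: ["ai-automation"] },
  { slug: "internal-data", label: "I need answers from internal data", categories: ["rag", "data"] },
];

// Resolve a strip click into the filters the directory query will use.
function filtersForPath(slug: string): string[] {
  const path = BUILD_PATHS.find((p) => p.slug === slug);
  return path ? path.categories : [];
}

console.log(filtersForPath("support-bot")); // ["chatbots", "ai-automation"]
```

The point is that a strip item is just a named, pre-packaged filter set, so the "few concrete entry points" stay data, not hardcoded pages.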

The directory page is really a stateful search surface

Once I got past the content problem, the next problem was state.

The directory page has search, category filtering, build-path filtering, sort options, and result pagination. I didn't want that state trapped in memory or hidden behind a form submit. So the current implementation reflects search and filter state in the URL and updates it with router.replace(...), which makes the page shareable and easier to reason about.
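A minimal sketch of that URL-reflection idea, as a pure serializer the page could feed into router.replace. The parameter names (q, category, path, sort, page) are assumptions, not the app's actual keys:

```typescript
// Serialize directory state into URL search params so the page can be
// shared and restored. Param names here are illustrative assumptions.
type DirectoryState = {
  q?: string;
  category?: string;
  buildPath?: string;
  sort?: string;
  page?: number;
};

function toSearchParams(state: DirectoryState): string {
  const params = new URLSearchParams();
  if (state.q) params.set("q", state.q);
  if (state.category) params.set("category", state.category);
  if (state.buildPath) params.set("path", state.buildPath);
  if (state.sort) params.set("sort", state.sort);
  // Omit page=1 so default URLs stay clean.
  if (state.page && state.page > 1) params.set("page", String(state.page));
  return params.toString();
}

// In a Next.js page this string would feed router.replace(`?${qs}`),
// keeping the URL in sync without a full navigation.
const qs = toSearchParams({ q: "support bot", category: "chatbots", page: 2 });
console.log(qs); // "q=support+bot&category=chatbots&page=2"
```

Keeping the serializer pure also makes the inverse (parsing state back out of the URL on load) easy to test in isolation.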

There's also a bunch of instrumentation on the page because I wanted to learn from real usage instead of trusting my instincts too much. Right now it tracks things like:

  • DIRECTORY_VIEWED
  • DIRECTORY_FILTER_CHANGED
  • DIRECTORY_SEARCH_SUBMITTED
  • DIRECTORY_RESULTS_VIEWED

That sounds small, but I think it matters. If you're building a niche directory, the difference between "people visited the page" and "people actually used the build-path strip, changed filters, and saw zero results" is the difference between guessing and learning.

The shortlist page does more than render cards. It manages query state, tracks interactions, and offers a few different ways to narrow the results.

I also reset pagination when filters change and keep a signature of the current filter state so I don't fire analytics events twice for the same interaction. None of that is glamorous, but it's the kind of detail that makes the page feel less sloppy.
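The signature trick can be sketched as a tiny closure: encode the effective filter state, and only emit an analytics event when that encoding changes. Field names and the normalization here are illustrative:

```typescript
// Fire an analytics event only when the effective filter state actually
// changes. Field names and normalization rules are assumptions.
type Filters = { q: string; category: string; sort: string };

function signature(f: Filters): string {
  // A stable encoding of the current filters; trivial query differences
  // (trailing spaces, casing) collapse to the same signature.
  return JSON.stringify([f.q.trim().toLowerCase(), f.category, f.sort]);
}

function makeTracker(emit: (sig: string) => void) {
  let last: string | null = null;
  return (f: Filters) => {
    const sig = signature(f);
    if (sig === last) return; // same interaction, don't double-fire
    last = sig;
    emit(sig);
  };
}

const events: string[] = [];
const track = makeTracker((sig) => events.push(sig));
track({ q: "bot", category: "chatbots", sort: "newest" });
track({ q: "bot ", category: "chatbots", sort: "newest" }); // normalizes to same signature: no event
track({ q: "bot", category: "rag", sort: "newest" });       // changed: fires
console.log(events.length); // 2
```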

The public profile is a formatting problem, not just a content problem

The public profile page taught me another thing: profile quality is heavily shaped by the input format.

If you give people one big textarea and tell them to "describe what you do," most of them write something broad and forgettable. That's not because they're bad. It's because the UI gave them too much empty space and not enough structure.

So I started treating the profile like a formatting problem. The public page pulls together a small set of fields that each have a job:

  • title
  • short description
  • markdown bio
  • categories and skills
  • proof links

The goal is not to make every profile look polished. It's to make weak profiles harder to hide inside generic language.

The public profile works best when each field does one job instead of asking one paragraph to carry everything.

This is also why I ended up using markdown for the longer bio. A markdown bio is still simple enough for normal use, but it nudges people toward structure. Bullets, emphasis, and short sections tend to produce better profiles than one dense block of text.
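The fields above can be sketched as a typed shape where each one carries a small validation job. The field names and limits here are assumptions for illustration, not the app's real rules:

```typescript
// Hypothetical profile shape: each field has one job and its own guardrail.
type ExpertProfile = {
  title: string;            // short, specific role line
  shortDescription: string; // one-card summary
  bioMarkdown: string;      // longer, structured bio
  categories: string[];
  skills: string[];
  proofLinks: string[];     // URLs that back up the claims
};

// Return human-readable issues instead of a boolean, so the editor can
// point at the specific field that's hiding behind generic language.
function validationIssues(p: ExpertProfile): string[] {
  const issues: string[] = [];
  if (p.title.length < 3 || p.title.length > 80) issues.push("title length");
  if (p.shortDescription.length > 200) issues.push("short description too long");
  if (p.bioMarkdown.trim().length === 0) issues.push("empty bio");
  if (p.categories.length === 0) issues.push("no categories");
  if (p.proofLinks.some((u) => !u.startsWith("https://"))) issues.push("non-https proof link");
  return issues;
}
```

Per-field checks like these are what make "weak profiles harder to hide": a vague paragraph can absorb anything, but an empty proof-link array cannot.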

I kept the submission flow in draft mode on purpose

The submission flow is where most of the product judgment ended up.

I did not want profiles to go live the second somebody filled out a form. That creates the worst possible incentive: write the fastest plausible thing, publish it, and fix it later.

So the flow is draft-first. You build the profile privately, preview it, and only then send it for review.
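That draft-first rule is easy to enforce as a small state machine: a profile can only reach "published" through review. The status names and transitions here are illustrative:

```typescript
// Sketch of a draft-first lifecycle. Status names are assumptions.
type Status = "draft" | "in_review" | "published" | "rejected";

const TRANSITIONS: Record<Status, Status[]> = {
  draft: ["in_review"],                          // author submits for review
  in_review: ["published", "rejected", "draft"], // admin decides, or author withdraws
  published: ["draft"],                          // unpublish back to editing
  rejected: ["draft"],                           // fix and resubmit
};

function canTransition(from: Status, to: Status): boolean {
  return TRANSITIONS[from].includes(to);
}

console.log(canTransition("draft", "published")); // false: no shortcut past review
```

Encoding the lifecycle as data rather than scattered if-statements means the "no shortcut past review" invariant lives in exactly one place.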

The editor itself is intentionally simple. There's a markdown bio editor with Write and Preview tabs, a few formatting controls, structured links, categories, pricing, and showcase projects. There's also an AI autofill section because profile creation is boring, but I only want AI helping with drafts, not deciding what gets published.

Draft state matters. It gives people room to improve the profile before it turns into public sludge.

That same logic shows up in moderation. I ended up with a hybrid review layer in lib/ai/profile-review.ts:

  1. deterministic checks for content quality, pricing, links, skills, categories, and completeness
  2. safe live-link verification
  3. LLM synthesis that summarizes the structured audit for an admin

The model is not the final judge. The admin is.

That decision felt important. I didn't want AI scoring profiles behind the scenes and pretending that was trust. Right now, AI is a summary layer for reviewers, not an automatic approval engine.
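Stage 1 of that pipeline can be sketched as deterministic checks that emit a structured audit for the LLM to summarize later. The check names and thresholds here are assumptions, not what profile-review.ts actually does:

```typescript
// Deterministic stage of a hybrid review: each check produces a structured
// result an admin (or an LLM summary) can read. Thresholds are illustrative.
type Audit = { check: string; passed: boolean; detail: string };

function auditProfile(p: {
  bioMarkdown: string;
  skills: string[];
  proofLinks: string[];
  pricing?: { min: number; max: number };
}): Audit[] {
  return [
    {
      check: "bio-substance",
      passed: p.bioMarkdown.trim().split(/\s+/).length >= 50,
      detail: "bio should be more than a one-liner",
    },
    {
      check: "skills-present",
      passed: p.skills.length >= 3,
      detail: "at least a few concrete skills",
    },
    {
      check: "proof-links",
      passed: p.proofLinks.length > 0,
      detail: "claims need at least one verifiable link",
    },
    {
      check: "pricing-sane",
      passed: !p.pricing || (p.pricing.min > 0 && p.pricing.min <= p.pricing.max),
      detail: "pricing range must be ordered and positive",
    },
  ];
}
```

Nothing in this stage approves anything; it just turns "does this profile look real?" into a checklist the reviewer can scan instead of a gut feeling.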

What I learned building it

The biggest lesson is that "trust" is a terrible primitive if you try to model it too early.

Once you start saying "we'll just rank good profiles higher" or "we'll let AI score credibility," you're already skipping past the hard part. The hard part is deciding what data you actually trust enough to structure.

A few things became obvious pretty quickly:

  • query state matters more than I expected
  • markdown plus structured fields produces better profiles than one open-ended textarea
  • manual moderation is annoying, but it teaches you where the data model is weak
  • AI is more useful as a reviewer-facing synthesis layer than as an invisible judge

I also learned that the product is more interesting to me when I treat it like an interface problem instead of a directory-growth problem.

How do you represent expertise in a way that survives search, filtering, moderation, and public display?

How much structure is enough before profiles start feeling rigid?

How much AI should sit inside the workflow before the whole thing starts feeling fake?

That's the part I keep coming back to.

I'm still early, and I don't think a directory fixes the bigger mess around AI hiring. But I do think building a small system like this is a good way to make the problem concrete.

If you were building something like this, how would you keep moderation from becoming a bottleneck? And how much AI would you trust in the review loop before it starts doing more harm than good?

Source: dev.to
