I trusted the code AI wrote for me. My data was silently broken the whole time.


When you're vibe coding with AI, a pattern develops quickly.

Describe the feature → get the code → run it → see no errors → move on to the next thing.

It's fast. But "no errors" and "working as intended" are different things. I learned that the hard way.


The feature I added

I built a post-alert tracking system into my crypto bot.

After each alert fired, it was supposed to record the price 30 minutes, 1 hour, and 4 hours later — building a dataset to answer "when is the best time to sell?"

The bot ran cleanly. No errors. Data flowing into Google Sheets every day.


What I found

One day I opened the spreadsheet: 30-minute column full, 1-hour and 4-hour columns nearly empty.

Why didn't I catch it sooner? To verify the 4-hour entries, you'd need to fire an alert and come back four hours later. The bot looked fine, so I assumed it was working.

The problem was in the code AI had written:

# The broken version
if elapsed >= 30:
    record_30min()
elif elapsed >= 60:
    record_60min()
elif elapsed >= 240:
    record_240min()

Once the 30-minute condition matched, the elif branches never ran. Every alert logged 30 minutes — then silently skipped 1-hour and 4-hour entirely.
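You can see the shadowing in a few lines. This is a simplified standalone sketch (the record_* calls are replaced with a list append for illustration), not the bot's actual code:

```python
# Minimal reproduction of the elif bug: once the first condition is true,
# the later branches are unreachable for any larger elapsed value.
def record_checkpoints(elapsed):
    recorded = []
    if elapsed >= 30:
        recorded.append("30min")
    elif elapsed >= 60:      # never reached when elapsed >= 30
        recorded.append("60min")
    elif elapsed >= 240:     # never reached either
        recorded.append("240min")
    return recorded

# Even four hours in, only the first branch matches:
print(record_checkpoints(240))  # → ['30min']
```

Because `elapsed >= 240` implies `elapsed >= 30`, the first branch wins every time and the later ones are dead code.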

# The fix
if elapsed >= 30 and not recorded_30min:
    record_30min()
if elapsed >= 60 and not recorded_60min:
    record_60min()
if elapsed >= 240 and not recorded_240min:
    record_240min()
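The fix relies on per-checkpoint flags like `recorded_30min`. Here is one hedged sketch of how that state could be wrapped up — the `AlertTracker` class, its `poll` method, and the tuple rows are my own illustrative names, not the bot's real implementation:

```python
# Hypothetical sketch: independent `if` checks plus a per-checkpoint flag,
# so each interval is recorded exactly once even across many poll cycles.
class AlertTracker:
    CHECKPOINTS = (30, 60, 240)  # minutes after the alert fired

    def __init__(self):
        self.recorded = {m: False for m in self.CHECKPOINTS}
        self.rows = []  # (checkpoint_minutes, price) pairs

    def poll(self, elapsed, price):
        """Call periodically; records every checkpoint that is due and unrecorded."""
        for minutes in self.CHECKPOINTS:
            if elapsed >= minutes and not self.recorded[minutes]:
                self.recorded[minutes] = True
                self.rows.append((minutes, price))

tracker = AlertTracker()
tracker.poll(35, 101.0)   # records the 30-minute price
tracker.poll(250, 97.5)   # records the 60- and 240-minute checkpoints in one pass
print(tracker.rows)       # → [(30, 101.0), (60, 97.5), (240, 97.5)]
```

Note a side effect of the second call: if the bot only wakes up at 250 minutes, the 1-hour and 4-hour rows both get the late price — the same behavior as the plain `if` chain above, just made explicit.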

When I write code myself, I think through the logic line by line. When AI writes it, I check if it runs — and this one ran fine. The structural issue slipped through.


The part that bothered me more

If I'd run analysis on that dataset, I would have concluded: "always sell within 30 minutes."

The data looked clean. No errors, no flags. I would have had no reason to question it. Wrong data → wrong conclusion → wrong strategy, all from a bug that never made a sound.


What changed after this

AI-assisted coding is fast. But asking "does this work?" gets you "yes, it runs." That's not what you actually need to know.

After this, I started asking a different question when reviewing AI-generated code:

"What could be wrong with this?"

That shift caught more issues than anything else I'd tried.

Source: dev.to
