AI · 4 min read

I Let AI Build Two Apps. Here's What Actually Happened.

By Edson Ferreira  ·  March 2026

There's a lot of noise around AI right now. Some people say it will replace entire teams. Others say it's overhyped. I'm not interested in either argument. I'm interested in what actually happens when you use it — not in a demo, not in a LinkedIn post, but in a real task with real stakes.

This note is for people who want to cut through the hype and understand what AI can actually do today, and what it still can't. It's especially for people early in their careers wondering if there's still a place for them. The honest answer: yes — but the game has changed, and the foundation you build now matters more than ever.

Most of my AI learning hasn't come from courses or articles. It's come from trying things, watching what breaks, and adjusting. That loop — do, reflect, learn, try again — is basically what Kolb (1984) calls experiential learning. I didn't plan it that way. It just happened to be the only approach that taught me anything I could actually use.

Over the past few months, I ran two experiments to understand what AI can do when handed a real development task. What people are calling "vibe coding" — describing what you want in plain language and letting AI write the code — is exactly what I did. The two experiments were different in complexity. That difference turned out to be the whole lesson.

Experiment one — a simple static page

The first task was a basic HTML and JavaScript page. Static, no backend, nothing fancy. Something I could have built myself, but slower and messier.

AI moved fast. The result looked good. I was genuinely impressed — until I noticed my telemetry wasn't recording correctly.

I opened the browser inspector and started digging. The issue wasn't in the logic. It was structural: a few tags — body, script, html — had never been closed somewhere in the file. The page rendered fine. Browsers are forgiving. But that broken structure was silently killing my tracking events.
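Browsers quietly repair this kind of structural damage, which is why the page rendered fine while the tracking broke. A tag-balance check catches it before the browser papers over it. Here's a minimal sketch using Python's standard-library `html.parser`; the `TagBalanceChecker` class and `check` function are illustrative names of my own, not tooling from the experiment.

```python
# Minimal sketch: flag HTML tags that were opened but never closed.
# Browsers auto-close these silently; scripts that depend on document
# structure (like telemetry snippets) often don't survive the repair.
from html.parser import HTMLParser

# Void elements are self-closing by definition and never need a close tag.
VOID_TAGS = {"area", "base", "br", "col", "embed", "hr", "img",
             "input", "link", "meta", "param", "source", "track", "wbr"}

class TagBalanceChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []  # currently open, non-void tags

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in self.stack:
            # Pop back to the matching open tag; anything skipped over
            # was opened but never explicitly closed.
            while self.stack and self.stack[-1] != tag:
                self.stack.pop()
            self.stack.pop()

    def unclosed(self):
        return list(self.stack)

def check(html: str):
    checker = TagBalanceChecker()
    checker.feed(html)
    return checker.unclosed()

print(check("<html><body><p>hi</p>"))  # → ['html', 'body']
```

A check like this would have flagged the missing body, script, and html closes in seconds, instead of leaving me to trace dead telemetry events through the inspector.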

I pointed it out to AI. It couldn't find the problem. I had to read the code myself and spot it manually.

There was another issue. Partway through our back-and-forth, AI lost track of the folder structure and started writing changes to the wrong directory — the production folder, not the staging area I had explicitly set up for this experiment. I caught it in time. But only because I was still reviewing every change before accepting it. I had decided early on that I wasn't going to let AI commit directly to my git repository. That call was right.

What this taught me: If you can't read the code, you can't validate the output. AI builds fast — but it doesn't always build clean, and it won't catch every mistake it makes. You still need to know the language well enough to read it, and you need to test what you ship.

Experiment two — a full application

The second experiment was more ambitious. I asked AI to build a financial dashboard from scratch: parse bank statements, calculate financial health metrics, display everything locally. That meant Python, file processing, a few external dependencies. Not a static page anymore.

It built an MVP that looked legitimate. Honestly impressive for a first pass.

Then I tried to fix something, and things got hard fast.

Debugging an app AI wrote from scratch felt like being dropped into someone else's six-year-old codebase with no documentation and no handoff. The logic was there, but I hadn't written any of it. I didn't have the context for why certain decisions were made. The dependencies AI chose were ones I wouldn't have picked. Some issues took me longer to fix than they would have if I'd written the code myself from the beginning — because I was essentially reverse-engineering every step.

What this taught me: Letting AI build an entire app end-to-end creates a different kind of risk. Not a build risk — it can build. The risk is that when something breaks, you're on your own inside a codebase you don't fully understand.

What this means for teams — and for people just starting out

Yes, AI can reduce team size. A small team with the right foundation and the right tools can move faster than a large team without them. That part of the argument is real.

But here's what my experiments confirmed: the people who get replaced aren't the ones who know how to code or think critically. They're the ones who only do the work AI can now do on its own. The people who stay — and the ones who will be hired — are the ones who can direct AI, validate its output, catch what it misses, and fix what breaks. That requires a real foundation. It requires knowing enough to know when something is wrong.

If you're just finishing school and wondering if there's still room for you, there is. But don't show up expecting to build things in isolation. Show up ready to work alongside AI — and know enough to keep it honest.

What I concluded from the experiments

Vibe coding works. For small, scoped tasks — a component, a script, a proof of concept — AI covers ground genuinely fast. Give it a clear definition of done, enough context, and a contained problem. It delivers.

But the bigger the scope, the harder recovery becomes when something breaks. An entire app built by AI is an app you didn't write — which means troubleshooting it requires skills you still need to have. Vibe coding is a great way to start. It's a bad strategy to run on autopilot end to end.

Lesson one

Know the language, even if AI writes the code

You don't need to write every line. But you need to read it, spot issues, and understand enough to validate what was built. Without that, you're trusting blindly.

Lesson two

Small chunks work. Full apps are risky.

POCs, scripts, and isolated components are where AI shines. An entire application built in one shot creates a codebase that's hard to reason about and even harder to debug.

Lesson three

Keep control of the critical steps

Code review, testing, and committing to production are still yours. AI can accelerate the build — but sign-off and final judgment belong to the human in the loop.

AI can write your code. It can't fully own what breaks. That part is still on you — which means staying in the loop isn't optional.

Written by Edson Ferreira — Senior Principal PM, building AI-native product systems.