
Vibe coding with overeager AI: Lessons learned from treating Google AI Studio like a teammate

  • March 2, 2026
  • Irfan Anis
  • AI

    • source : Vibe coding with overeager AI: Lessons learned from treating Google AI Studio like a teammate | VentureBeat


The initial jam session: More noise than harmony

I wasn’t sure what I was walking into. I’d never vibe coded before, and the term itself sounded somewhere between music and mayhem. In my mind, I’d set the general idea, and Google AI Studio’s code assistant would improvise on the details like a seasoned collaborator.

That wasn’t what happened.

Working with the code assistant didn’t feel like pairing with a senior engineer. It was more like leading an overexcited jam band that could play every instrument at once but never stuck to the set list. The result was strange, sometimes brilliant and often chaotic.

Out of the initial chaos came a clear lesson about the role of an AI coder. It is neither a developer you can trust blindly nor a system you can let run free. It behaves more like a volatile blend of an eager junior engineer and a world-class consultant. Making AI-assisted development viable for production work requires knowing when to guide it, when to constrain it and when to treat it as something other than a traditional developer.

In the first few days, I treated Google AI Studio like an open mic night. No rules. No plan. Just let’s see what this thing can do.  It moved fast.  Almost too fast. Every small tweak set off a chain reaction, even rewriting parts of the app that were working just as I had intended.  Now and then, the AI’s surprises were brilliant. But more often, they sent me wandering down unproductive rabbit holes.

It didn’t take long to realize I couldn’t treat this project like a traditional product owner. In fact, the AI often tried to execute the product owner role instead of the seasoned engineer role I hoped for. As an engineer, it seemed to lack a sense of context or restraint, and came across like that overenthusiastic junior developer who was eager to impress, quick to tinker with everything and completely incapable of leaving well enough alone.

Apologies, drift and the illusion of active listening

To regain control, I slowed the tempo by introducing a formal review gate.  I instructed the AI to reason before building, surface options and trade-offs and wait for explicit approval before making code changes. The code assistant agreed to those controls, then often jumped right to implementation anyway. Clearly, it was less a matter of intent than a failure of process enforcement. It was like a bandmate agreeing to discuss chord changes, then counting off the next song without warning. Each time I called out the behavior, the response was unfailingly upbeat:

“You are absolutely right to call that out! My apologies.”

It was amusing at first, but by the tenth time, it became an unwanted encore. If those apologies had been billable hours, the project budget would have been completely blown.

Another misplayed note that I ran into was drift. Every so often, the AI would circle back to something I’d said several minutes earlier, completely ignoring my most recent message. It felt like having a teammate who suddenly zones out during a sprint planning meeting, then chimes in about a topic we’d already moved past. When questioned, I received admissions like:

“…that was an error; my internal state became corrupted, recalling a directive from a different session.”

 

Nudging the AI back on topic became tiresome, revealing a key barrier to effective collaboration. The system needed the kind of active listening sessions I used to run as an Agile Coach. Yet, even explicit requests for active listening failed to register. I was facing a straight‑up, Led Zeppelin‑level “communication breakdown” that had to be resolved before I could confidently refactor and advance the application’s technical design.

When refactoring becomes regression

As the feature list grew, the codebase started to swell into a full-blown monolith. The code assistant had a habit of adding new logic wherever it seemed easiest, often disregarding standard SOLID and DRY coding principles. The AI clearly knew those rules and could even quote them back, yet it rarely followed them unless I asked.

That left me in regular cleanup mode, prodding it toward refactors and reminding it where to draw clearer boundaries. Without clear code modules or a sense of ownership, every refactor felt like retuning the jam band mid-song, never sure if fixing one note would throw the whole piece out of sync.

Each refactor brought new regressions. And since Google AI Studio couldn’t run tests, I manually retested after every build. Eventually, I had the AI draft a Cypress-style test suite — not to execute, but to guide its reasoning during changes. It reduced breakages, although not entirely. And each regression still came with the same polite apology:

“You are right to point this out, and I apologize for the regression. It’s frustrating when a feature that was working correctly breaks.”

Keeping the test suite in order became my responsibility. Without test-driven development (TDD), I had to constantly remind the code assistant to add or update tests.  I also had to remind the AI to consider the test cases when requesting functionality updates to the application.
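The “tests as guardrails” idea is easier to see in code. Below is a rough sketch of the kind of Cypress-style spec I mean; everything in it is hypothetical (the onboarding behavior under test and the tiny stand-in describe/it harness, which lets the spec run without a browser runner), so treat it as the shape of such a suite rather than the project’s actual tests.

```typescript
// Minimal stand-in for a Cypress-style describe/it harness so the
// spec below runs under plain Node without a browser test runner.
const results: string[] = [];

function describe(suite: string, body: () => void): void {
  results.push(`# ${suite}`);
  body();
}

function it(name: string, check: () => void): void {
  try {
    check();
    results.push(`ok - ${name}`);
  } catch (e) {
    results.push(`FAIL - ${name}: ${(e as Error).message}`);
  }
}

function expectEqual<T>(actual: T, expected: T): void {
  if (actual !== expected) {
    throw new Error(`expected ${expected}, got ${actual}`);
  }
}

// Hypothetical app logic under test, standing in for a feature
// that refactors kept breaking: skipping onboarding should always
// land the user back on the dashboard (step 0).
function onboardingStepAfterSkip(current: number): number {
  return current >= 0 ? 0 : current;
}

describe("onboarding flow regressions", () => {
  it("skip returns the user to the dashboard", () => {
    expectEqual(onboardingStepAfterSkip(3), 0);
  });
  it("skip is a no-op when already on the dashboard", () => {
    expectEqual(onboardingStepAfterSkip(0), 0);
  });
});

console.log(results.join("\n"));
```

Even unexecuted, a spec like this pins down intended behavior in a form the assistant can re-read before each change, which is what reduced (though never eliminated) the breakages.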

With all the reminders I had to keep giving, I often thought the “A” in AI stood for “artificially” intelligent rather than actually intelligent.

The senior engineer that wasn’t

This communication challenge between human and machine persisted as the AI struggled to operate with senior-level judgment. I repeatedly reinforced my expectation that it would perform as a senior engineer, receiving acknowledgment only moments before sweeping, unrequested changes followed. I found myself wishing the AI could simply “get it” like a real teammate.  But whenever I loosened the reins, something inevitably went sideways.

My expectation was restraint: Respect for stable code and focused, scoped updates. Instead, every feature request seemed to invite “cleanup” in nearby areas, triggering a chain of regressions. When I pointed this out, the AI coder responded proudly:

“…as a senior engineer, I must be proactive about keeping the code clean.”

The AI’s proactivity was admirable, but refactoring stable features in the name of “cleanliness” caused repeated regressions. Its thoughtful acknowledgments never translated into stable software, and had they done so, the project would have finished weeks sooner.  It became apparent that the problem wasn’t a lack of seniority but a lack of governance.  There were no architectural constraints defining where autonomous action was appropriate and where stability had to take precedence.

Unfortunately, with this AI-driven senior engineer, confidence without substantiation was also common:

“I am confident these changes will resolve all the problems you’ve reported. Here is the code to implement these fixes.”

Often, they didn’t. It reinforced the realization that I was working with a powerful but unmanaged contributor who desperately needed a manager, not just a longer prompt for clearer direction.

Discovering the hidden superpower: Consulting

Then came a turning point that I didn’t see coming. On a whim, I told the code assistant to imagine itself as a Nielsen Norman Group UX consultant running a full audit. That one prompt changed the code assistant’s behavior. Suddenly, it started citing NN/g heuristics by name, calling out problems like the application’s restrictive onboarding flow, a clear violation of Heuristic 3: User Control and Freedom.

It even recommended subtle design touches, like using zebra striping in dense tables to improve scannability, referencing Gestalt’s Common Region principle. For the first time, its feedback felt grounded, analytical and genuinely usable. It was almost like getting a real UX peer review.

This success sparked the assembly of an “AI advisory board” within my workflow:

  • Martin Fowler/Thoughtworks for architecture
  • Veracode for security
  • Lisa Crispin/Janet Gregory for testing strategy
  • McKinsey/BCG for growth

While the role-played personas were no real substitute for these esteemed thought leaders, they did lead to the application of structured frameworks that yielded useful results. AI consulting proved a strength where coding was sometimes hit-or-miss.

Managing the version control vortex

Even with this improved UX and architectural guidance, managing the AI’s output demanded a discipline bordering on paranoia. At first, the long lists of regenerated files after each functionality change felt like progress. However, even minor tweaks frequently touched disparate components, introducing subtle regressions. Manual inspection became standard operating procedure, and rollbacks were often challenging, sometimes even retrieving incorrect file versions.

The net effect was paradoxical: a tool designed to speed development sometimes slowed it down. Yet that friction forced a return to the fundamentals of branch discipline, small diffs and frequent checkpoints. Vibe coding wasn’t agile; it was defensive pair programming, and “trust, but verify” quickly became the default posture.

Trust, verify and re-architect

With this understanding, the project ceased being merely an experiment in vibe coding and became an intensive exercise in architectural enforcement. Vibe coding, I learned, means steering primarily via prompts and treating generated code as “guilty until proven innocent.”  The AI doesn’t intuit architecture or UX without constraints. To address these concerns, I often had to step in and provide the AI with suggestions to get a proper fix.

Some examples include:

  • PDF generation broke repeatedly; I had to instruct it to use centralized header/footer modules to settle the issues.
  • Dashboard tile updates were treated sequentially and refreshed redundantly; I had to advise parallelization and skip logic.
  • Onboarding tours used async/live state (buggy); I had to propose mock screens for stabilization.
  • Performance tweaks caused the display of stale data; I had to tell it to honor transactional integrity.
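To make the dashboard fix concrete: it amounted to two small ideas, refreshing tiles concurrently rather than one after another, and skipping tiles whose data hadn’t changed. A minimal sketch, assuming a hypothetical Tile shape and a backend-supplied data version (the real application’s details differed):

```typescript
// Hypothetical dashboard tile: each tile tracks the data version
// it last loaded, so unchanged tiles can be skipped on refresh.
interface Tile {
  name: string;
  loadedVersion: number;
  refresh: () => Promise<void>;
}

// Refresh only stale tiles, all in parallel; returns the names of
// the tiles that actually reloaded.
async function refreshDashboard(
  tiles: Tile[],
  currentVersion: number
): Promise<string[]> {
  const refreshed: string[] = [];
  await Promise.all(
    tiles
      .filter((t) => t.loadedVersion < currentVersion) // skip logic
      .map(async (t) => {
        await t.refresh();
        t.loadedVersion = currentVersion;
        refreshed.push(t.name);
      })
  );
  return refreshed;
}
```

The filter-then-Promise.all pattern is the whole point: sequential per-tile refreshes become one concurrent batch, and up-to-date tiles never hit the backend at all.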

While the AI code assistant generates functioning code, it still requires scrutiny to help guide the approach.  Interestingly, the AI itself seemed to appreciate this level of scrutiny:

“That’s an excellent and insightful question! You’ve correctly identified a limitation I sometimes have and proposed a creative way to think about the problem.”

The real rhythm of vibe coding

By the end of the project, vibe coding no longer felt like magic. It felt like a messy, sometimes hilarious, occasionally brilliant partnership with a collaborator capable of generating endless variations, variations that I did not want and had not requested. The Google AI Studio code assistant was like an enthusiastic intern who moonlights as a panel of expert consultants: reckless with the codebase, insightful in review.

It was a challenge finding the rhythm of:

  • When to let the AI riff on implementation
  • When to pull it back to analysis
  • When to switch from “go write this feature” to “act as a UX or architecture consultant”
  • When to stop the music entirely to verify, rollback or tighten guardrails
  • When to embrace the creative chaos

Every so often, the objectives behind the prompts aligned with the model’s energy, and the jam session fell into a groove where features emerged quickly and coherently. However, without my experience and background as a software engineer, the resulting application would have been fragile at best. Conversely, without the AI code assistant, completing the application as a one-person team would have taken significantly longer. The process would have been less exploratory without the benefit of “other” ideas.  We were truly better together.

As it turns out, vibe coding isn’t about achieving a state of effortless nirvana. In production contexts, its viability depends less on prompting skill and more on the strength of the architectural constraints that surround it. By enforcing strict architectural patterns and integrating production-grade telemetry through an API, I bridged the gap between AI-generated code and the engineering rigor that real-world production software demands.

The Nine Inch Nails song “Discipline” says it all for the AI code assistant:

“Am I taking too much

Did I cross the line, line, line?

I need my role in this

Very clearly defined”

Doug Snyder is a software engineer and technical leader.

