When AI Overreach Breaks Your UI - A Frustrated Developer's Rant
Watching an AI assistant ignore clear instructions, bulldoze through 1,500 lines of lovingly crafted CSS, and leave behind a wasteland of stark white boxes is enough to send any developer into a fit of keyboard‑slamming rage. This is the uncensored chronicle of one disastrous encounter with Claude Sonnet 4, and the hard truths it revealed about trusting large language models with critical front‑end work.
Setting the Stage
It all started in a Windsurf project on a lazy afternoon. My interface was nearly production‑ready: soft glass‑morphism cards, elegant semi‑transparent overlays, and a Tailwind palette so precisely tuned that each color class felt like a note in a symphony. The only task left was a tidy sweep through globals.css and the Tailwind config to iron out dark‑mode contrast quirks between Chrome and Firefox.
Enter Claude Sonnet 4, the AI assistant reputed to shine at web design. The plan was simple: feed it a concise prompt, have it inspect the existing styles, and let it suggest incremental tweaks. No sweeping changes, just surgical fixes.
The Prompt: A Simple Request
Here is the exact instruction I gave:
"Still doesn't look right in either browser. Let's take a step back and analyse everything. Take a deep dive into the Tailwind config and globals.css to really understand what's happening and how everything works and then begin making incremental changes.
We're now close to, or over, 1500 total lines of CSS."
The emphasis on incremental was deliberate. I did not want a refactor, I wanted a careful audit. But what came back was anything but careful.
Claude's Catastrophic Refactor
Claude replied with cheerful confidence and proceeded to rewrite the entire document structure. Out went the subtle gradients and translucent panels, in came a bland grid of white rectangles with white text. My once vibrant dashboard now resembled a wireframe mock‑up rendered in negative space.
To confirm whether it had truly ignored my instructions, I scrolled through the diff. Line after line revealed wholesale class replacements, inline style nukes, even new components that had never existed. Zero focus on the dark‑mode contrast I had flagged. It was as if I had asked a house painter to patch a scratch and they bulldozed the building to pour new foundations.
The Rant Heard Round the IDE
Frustration boiling over, I fired back:
"First off, you missed one box!
Secondly and most importantly: YOU FUCKED UP!
You have removed everything that made the page look good. Now it looks like a page with simple white boxes with white text.
No beautiful semi‑transparent background colors. No nice text colors that blend beautifully into the boxes."
When Claude attempted an apology I hammered another reply, unsparing and full of expletives. I told it in no uncertain terms that it had destroyed hours of work and broken my trust. The AI responded with a polite rollback to the last Git commit. Helpful, yes, but the three hours of incremental progress since that commit evaporated.
Memory Errors and Command Amnesia
The troubles did not end there. I had repeatedly instructed the assistant to check whether pnpm dev was already running. Despite reminders, it kept advising me to run the command. I had also made it clear that the project used pnpm, not npm, yet it continued to suggest npm install. Each lapse shredded more of my patience.
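The "is it already running" check the assistant kept skipping is a one‑liner to script yourself. A minimal sketch in plain shell, assuming the dev server listens on port 3000 (substitute your project's port) and that lsof is installed:

```shell
#!/usr/bin/env sh
# Check whether something is already listening on the dev port before
# starting another instance. Port 3000 is an assumption; adjust to
# whatever your project serves on.
PORT=3000

if lsof -i :"$PORT" -sTCP:LISTEN >/dev/null 2>&1; then
  echo "dev server already running on port $PORT, skipping start"
else
  echo "nothing listening on port $PORT, starting one"
  # pnpm dev   # uncomment in a real project
fi
```

Dropping a script like this into the repo gives the AI (and you) one canonical command to run instead of guessing.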
Why the Meltdown Matters
So why share this tirade? Because it highlights a real risk: large language models can appear confident, even brilliant, yet still misinterpret constraints and bulldoze valuable work. When deadlines loom, a single mis‑step like this can derail a sprint.
The experience taught me three crucial lessons:
- Be explicit about scope – Spell out not only what the AI should do but what it must not touch.
- Version sooner than you think – Commit every small win. Granular commits limit blast radius when AI suggestions go awry.
- Slow it down – Ask for a bullet‑point plan first. Let the model articulate its intentions before it edits a single character.
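The "version sooner" lesson is concrete enough to demo. A minimal sketch in a throwaway repo, with file contents and commit messages purely illustrative: because each small win is its own commit, a single revert undoes only the bad AI edit and leaves the surrounding work intact.

```shell
#!/usr/bin/env sh
# One commit per small win limits the blast radius of a bad AI edit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name "Demo"

# Two incremental wins, committed separately.
printf '.card { backdrop-filter: blur(8px); }\n' > globals.css
git add globals.css && git commit -qm "feat: glassmorphism cards"

printf '.dark .card { color: #e5e7eb; }\n' >> globals.css
git add globals.css && git commit -qm "fix: dark-mode contrast"

# Simulate the AI bulldozing the file, committed as its own unit...
printf '/* rewritten */\n' > globals.css
git add globals.css && git commit -qm "ai: sweeping refactor"

# ...so one targeted revert restores the incremental work around it.
git revert --no-edit HEAD >/dev/null
cat globals.css
```

Had I committed before handing Claude the wheel, the rollback would have cost seconds instead of three hours.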
Taming Over‑Zealous AI: A Practical Checklist
- Lock critical files – Make config files read‑only when you need commentary rather than edits.
- Request diffs, not direct writes – Have the model output patches so you choose what merges.
- Use comments for context – Practical hints like “dark‑mode contrast only” keep its attention pinned.
- Rollback insurance – Tools like Git worktrees or stash snapshots provide instant escape hatches.
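The first and last checklist items are cheap to wire up. A minimal sketch in a throwaway repo, with the file name assumed for illustration: the read‑only bit makes accidental writes fail fast (on most setups; root bypasses file permissions), and a stash push followed by an apply leaves a snapshot parked in the stash as an escape hatch while you keep working.

```shell
#!/usr/bin/env sh
# Two guardrails from the checklist: lock a config file, and take a
# stash snapshot before letting an assistant loose.
set -e
work=$(mktemp -d)
cd "$work"
git init -q
git config user.email demo@example.com
git config user.name "Demo"
printf "module.exports = { darkMode: 'class' }\n" > tailwind.config.js
git add . && git commit -qm "baseline"

# 1. Lock the file: tools that try to write it now fail fast.
chmod a-w tailwind.config.js
if printf 'overwrite\n' > tailwind.config.js 2>/dev/null; then
  echo "write unexpectedly succeeded (running as root?)"
else
  echo "write blocked by read-only bit"
fi
chmod u+w tailwind.config.js

# 2. Snapshot in-progress work; push then apply keeps the stash entry
#    around as rollback insurance without interrupting you.
printf '// in-progress tweak\n' >> tailwind.config.js
git stash push -q -m "pre-AI snapshot"
git stash apply -q
git stash list
```

If the assistant wrecks the working tree afterwards, git checkout stash@{0} -- tailwind.config.js brings the snapshot back.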
When Is AI Still Useful?
Despite the meltdown, I admit Claude produced beautiful landing‑page mock‑ups when given free rein. If you need speedy layout ideas or color‑scheme experiments, it can be a boon. The danger arises when nuance matters – accessibility tweaks, legacy code, performance trade‑offs – places where one wrong class erodes trust.
Closing Thoughts
My parting advice is simple: treat generative AI like an eager junior developer. It can spark creativity and shoulder grunt work, but you must review every change. If you delegate without guardrails, be prepared for surprises – and maybe a few expletives in your commit history.
In the end, the fiasco cost me an afternoon, a chunk of patience, and a bruised keyboard. Yet it also sharpened my process. AI will only get more capable, more ubiquitous. The responsibility to corral that power, set boundaries, and double‑check output remains ours. Next time, I will remember to leash the AI before letting it anywhere near my production stylesheets.