Ship Smaller, Faster: Overcome Perfectionism in Engineering

Quality, Perfectionism, and the Pottery Lesson
A few weeks ago, I came across the old “Art & Fear” pottery anecdote. You’ve probably heard it: half the class was told to make one perfect pot, the other half to churn out as many pots as possible. Guess who ended up with the highest quality? The group that aimed for volume. Iteration and output, not theoretical polish, sharpened their skills. Every wobbly base and lopsided rim was another lesson logged, something no amount of theorizing about the perfect pot could teach. That story stuck with me. I started seeing it everywhere: in the teams I’ve run, in the launches that actually changed things, and in the code reviews that never ended because someone was chasing an illusion of perfection.

Here’s what landed. I realized that much of the time I argued for “just a little more quality,” asking for one more test suite or a “future-proof” abstraction, I was really buying myself time. Not for the sake of rigor, but because I was nervous about what would happen when my work hit real users. Sometimes, what I call “quality” is really just fear.
As I got more senior, my instincts got sharper—but so did my tendency to freeze up. Every release, I could spot a dozen edge cases. I’d picture downstream risks multiplying: migrations, integrations, user trust. I wish I could say all this vigilance always helped, but honestly, it slowed my teams down. The irony is, the longer you’ve been around, the better you get at imagining ways things can go sideways.
Experience is a double-edged sword. There are days when it’s like a superpower, but there are days when it turns into a loop of “let’s wait, just in case.” I don’t even want to count the number of clean abstractions or tight designs I’ve half-finished, only to toss them when real data showed up.
Let’s call it out. The habits of chasing perfection and preemptively future-proofing don’t protect us. They drag out launch dates, stretch feedback loops, and quietly erode team confidence, the opposite of what it takes to beat perfectionism. When everyone’s worried about invisible risks, nobody ships, and we learn less, slower.
We’re going to prove a simple thesis. Ship smaller, faster, and move quality gates downstream. That’s how experience—yours and your users’—can finally accelerate delivery, instead of blocking it.
The Mental Shift: From “What’s Missing?” to “What Breaks?”
The forcing function was embarrassingly simple. I used to ask “What’s still missing?” every time we faced a shipping decision. Now I default to “What breaks if we ship today?” That tiny pivot made it painfully clear when fear—not facts—was blocking us.
Safe-enough shipping means untangling actual risk from polish. Material risks are things like security flaws, data loss, compliance gaps—anything with teeth if it bites us in production. Everything else falls under nice-to-have polish: clean abstractions, tight design, naming conventions, clever extensibility. You should keep those high standards, but realize you can defer them.
If pushing live right now means you might leak data, mess up billing, or skip a legal obligation, wait. But if you just want to refactor a helper, tweak a color scheme, or future-proof for types no user has ever asked for—ship first, refine after. Feel free to make a list of deferred polish as a v2 backlog. The hard part is trusting you’ll actually come back to it, but that’s where discipline and process fill the gap.
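To make that triage concrete, here’s a minimal sketch in Python. The risk categories and the can_ship_today helper are hypothetical illustrations of the decision rule, not a real tool:

```python
# A minimal sketch of the ship/wait triage described above.
# The category names and this helper are hypothetical illustrations.

MATERIAL_RISKS = {"security", "data_loss", "billing", "compliance"}

def can_ship_today(open_concerns: set[str]) -> bool:
    """Ship unless an open concern is a material risk.

    Everything else (refactors, naming, speculative future-proofing)
    is deferred polish and goes on the v2 backlog instead.
    """
    return not (open_concerns & MATERIAL_RISKS)

# A pending helper refactor and an ugly color scheme don't block the release...
assert can_ship_today({"helper_refactor", "color_scheme"})
# ...but a suspected billing bug does.
assert not can_ship_today({"billing", "helper_refactor"})
```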
Beware the “future-proof” stall. It’s tempting to buffer every launch against every edge case or new data type, but that’s the classic overengineering trap. Nine times out of ten, we end up guessing at problems and neglecting the ones right in front of us. I keep catching myself designing for scale when we haven’t even shipped a beta.
Earlier in my career, I pushed for big-bang releases. The logic was always, “We’ll iterate later.” Trouble was, later rarely arrived. Instead, those monolith launches inflated risk and made every redesign brittle. Changes took longer, feedback came slower, and we invented complexity mostly in our heads, not in the real world where users live.
Here’s the leadership move. The bar stays high, but it moves to v2 so you can iterate toward quality. Real user feedback, not speculation or anxiety, guides the next step. You’re still responsible for quality; the cycle just starts right after launch instead of before it. We set post-release sprints where we apply what actually broke, what confused users, what nobody noticed. That’s when standards get raised for real, and your confidence in shipped work compounds, not because you got every detail right the first time, but because you learned, fast, where the work mattered most.
Quick tangent. Last year, I got pulled into an emergency code review at 3am. Someone had “future-proofed” by designing a buffer layer between two services, supposedly to handle new data types. Nobody had used it, nobody tested it. That buffer became the failure point and took down an entire queue—so the thing meant to save us ended up being the culprit. I remember the lights flickering in the office, laptop fans whirring in the silence, and my coffee going cold while we tried to dig out. That night did a number on my faith in speculative safety nets. Most times, the monster under the bed isn’t where you expect.
Ship Now, Improve Later: A Tactical Framework
Step 1 is brutally simple. Write down the smallest slice of today’s problem that you can solve and make sure you can actually observe it working in production. No stretch spec, no clever abstractions—just “Does this fix what’s broken for users right now?” You’re looking for that one piece everybody agrees must move, not could move someday.
Next, contain the material risks with clear guardrails. This is where feature flags, progressive rollouts, rollback plans, and alerts do the heavy lifting. Don’t ship blindly; give yourself escape hatches.
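A minimal sketch of one such escape hatch, assuming a hypothetical FLAG_NEW_CHECKOUT environment variable stands in for whatever flag service your team actually uses:

```python
import os

def flag_enabled(name: str) -> bool:
    """Read a feature flag from an env var; a real flag service
    (a config table, LaunchDarkly, Unleash) plays the same role."""
    return os.environ.get(f"FLAG_{name}", "false").lower() in ("1", "true")

def new_checkout(user_id: str) -> str:
    return f"new checkout for {user_id}"

def legacy_checkout(user_id: str) -> str:
    return f"legacy checkout for {user_id}"

def handle_checkout(user_id: str) -> str:
    # The guardrail: if the new path misbehaves, unset the flag and
    # every request instantly falls back to the known-good code path.
    if flag_enabled("NEW_CHECKOUT"):
        return new_checkout(user_id)
    return legacy_checkout(user_id)

print(handle_checkout("user-42"))  # legacy until FLAG_NEW_CHECKOUT=true
```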
I’ve gotten in the habit of releasing gradually: targeted rollouts, canary launches, ring deployments, and other controlled techniques. Along the way, you collect real user feedback and engagement data, the actual stuff, not assumptions. When something tanks or throws a silent error, you pull the plug, roll back, and fix. Nobody’s reputation tanks, users stay safe, and suddenly those risks don’t feel so paralyzing.
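One common way to implement the gradual part is deterministic percentage bucketing, so a user stays in the same cohort as the rollout widens. A sketch, with made-up feature and user names:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Hash user+feature into a stable 0-99 bucket so the rollout can
    widen from 1% to 10% to 100% without users flapping between variants."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) % 100 < percent

# Week 1: canary to ~5% of users; widen only if error rates hold.
for uid in ("user-1", "user-2", "user-3"):
    print(uid, in_rollout(uid, "new_checkout", 5))
```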
For nonessential polish, defer to v2. Document every tidy-up, every refactor, every “it’d be nice if,” in plain sight. Schedule it for after ship, and tie future cleanup to what you actually hear from users—not what you guessed at pre-launch. Most teams forget this, so I keep a backlog just for post-release touch-ups, timestamped with real feedback. If nobody complains or trips on an “ugly” detail, maybe it can wait—or maybe it wasn’t as urgent as we thought.
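If it helps to make that backlog concrete, here’s the shape of the record I keep, sketched as a hypothetical dataclass. The timestamp and the evidence field are the point, not the exact structure:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PolishItem:
    """One deferred nice-to-have, tied to real post-release evidence."""
    title: str
    user_feedback: list[str] = field(default_factory=list)  # quotes, ticket links
    logged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def earned_priority(self) -> bool:
        # If no real user has tripped on it yet, it can keep waiting.
        return bool(self.user_feedback)

item = PolishItem("rename gnarly helper module")
print(item.earned_priority())  # False until feedback shows up
```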
Six months ago I started posting small daily insights for myself, and it drove home just how honest shipping in tiny chunks keeps you. It’s uncomfortable but clarifying, and small releases keep teams honest in exactly the same way.
The culture step matters. I started explicitly praising early, messy shipping—out loud, in Slack, in meetings. The confidence didn’t come from reading plans. It came from reality. You build trust by showing the thing works, warts and all, and everyone learns faster.
Keep repeating this cycle. Ship the smallest fix, wrap it in guardrails, postpone the polish, and celebrate motion. Nobody gets bruised for moving too quickly, but whole teams stall out when fear dresses up as caution. Don’t let it.
Rethinking Risk, Standards, and Shipping: How to Move the Quality Gates
Let’s get straight to the biggest worry. When you ship something not-quite-perfect, what if users see it first and your reputation takes a hit? I get it. The fear is real—nobody wants to leak a bug, frustrate paying customers, or undo hard-won trust. But here’s the practical way through.
Use scoped audiences, dark launches, and progressive rollout. Instead of a single public leap, roll the change out to an internal team, a hand-picked set of customers, or a hidden path only you know is live. Communicate it openly with those users (“This is new, feedback wanted,” not “Perfect, enjoy!”). This isn’t just damage control. It’s a safer way to learn fast. You watch how the thing performs in real life, fix what comes up, then widen the audience when you’re confident it actually delivers. If something breaks, most people never see it. If users love it, you scale up and celebrate. Suddenly “reputational risk” isn’t a brick wall. It’s a list of steps you control and refine along the way.
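Ring deployments make that widening explicit. A sketch with hypothetical cohort names; each ring’s audience includes everything before it:

```python
# Hypothetical rings: each one widens the audience only after
# the previous ring looks healthy.
RINGS = [
    ("ring-0", {"internal-team"}),
    ("ring-1", {"design-partners"}),
    ("ring-2", {"beta-opt-in"}),
    ("ring-3", {"everyone"}),
]

def audience_for(current_ring: str) -> set[str]:
    """Union of every cohort up to and including the current ring."""
    audience: set[str] = set()
    for name, cohort in RINGS:
        audience |= cohort
        if name == current_ring:
            return audience
    raise ValueError(f"unknown ring: {current_ring}")

print(audience_for("ring-1"))  # internal team plus design partners
```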
Then there’s compliance. For years, I worried any incremental release would fall short of audit requirements or legal standards. What changed for me was getting rigorous with pre-approved templates and system-level audit trails. You ship a smaller change, but every step is documented, reviewable, and provably controlled. Kill switches matter here too. If you spot something that needs to come down, you flip it off for everyone, no matter how fast you shipped. It’s one thing to imagine regulatory gaps; it’s another to have clear logs and controls ready to show what went live and when. I’ll be honest: I haven’t figured out how to stop compliance anxiety completely. But with the right rails, I ship knowing I can trace, review, and revoke almost anything.
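A kill switch plus an audit trail can be startlingly small. This sketch is illustrative, using an in-memory dict where a real system would use a config service or database row, but the shape of the audit record is the point:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("release.audit")

KILL_SWITCHES: dict[str, bool] = {}  # in practice: a config service or DB row

def kill(feature: str, actor: str, reason: str) -> None:
    """Turn a feature off for everyone and leave a reviewable record
    of who flipped it, when, and why."""
    KILL_SWITCHES[feature] = True
    audit_log.info(json.dumps({
        "event": "kill_switch",
        "feature": feature,
        "actor": actor,
        "reason": reason,
        "at": datetime.now(timezone.utc).isoformat(),
    }))

def is_live(feature: str) -> bool:
    return not KILL_SWITCHES.get(feature, False)

kill("new_checkout", actor="oncall@example.com", reason="silent 500s on retries")
print(is_live("new_checkout"))  # False, for everyone, immediately
```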
Rework is the bogeyman we all dread. Fixing things after the fact must cost more, right? I had to learn the hard way that smaller batches actually make rework cheap. Real failures and user feedback beat speculative polish every time. Instead of rebuilding a giant slab, you tweak a small piece. If you miss, you fix; if you hit, you learn. The cost drops, the cycle speeds up, and the team stays out of “let’s guess at edge cases for weeks.”
About standards—please, don’t lower them. Move them. Your quality bar doesn’t get abandoned. It just shifts to after initial release, when you know what matters to actual users, not hypotheticals. Hold post-exposure reviews. Look at telemetry, support requests, edge cases noticed in the wild. Make improvement decisions based on what actually happened, not what you were afraid might happen. I make this explicit with my teams: early shipping sets the stage, post-release steps raise the bar for real. It’s a reframe from “perfection is required up front” to “excellence is earned through iteration.”
Last piece: the confidence comes from telemetry. Instrument everything you possibly can so you can release faster with feedback. Watch early signals and fix issues quickly as they show up. Every release teaches you something about real usage, and the next release applies it. That loop speed is what restores trust, yours and the users’.
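Instrumentation doesn’t have to wait for a platform team. A minimal sketch of structured event emission; the event names here are made up, and a real pipeline would ship these to your metrics backend instead of stdout:

```python
import json
import sys
import time

def emit(event: str, **fields) -> None:
    """Write one structured event per line; swap print() for your
    metrics/tracing client once the signal proves useful."""
    print(json.dumps({"event": event, "ts": time.time(), **fields}), file=sys.stdout)

# Instrument the new path so early signals surface immediately.
emit("checkout.started", variant="new", user_id="user-42")
emit("checkout.failed", variant="new", error="timeout", latency_ms=5012)
```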
This isn’t about moving fast and breaking things at random. It’s about moving thoughtfully, keeping escape hatches ready, and letting real user experience—not your imagination—define quality, standards, and risk, one safe-enough step at a time.
Model the Shift: Leadership Behaviors That Accelerate Learning
You can’t move teams with slogans. Rituals and metrics do the legwork. My most effective change lately has been the weekly “can-go-now” review: everyone brings what’s ready, we tighten the cycle, and we celebrate any move, not just the huge launches. Tie this to tiny PRs, cycle-time targets, and feedback-latency tracking to put momentum on the scoreboard. The data backs it up: only 5.74% of developers say it takes less than an hour to actually ship a change, so a true on-demand delivery flow is still rare. The point is, batch size and latency matter. The smaller the work chunk and the faster your feedback loop, the less fear you’ll ever need to hide behind “quality.” Public praise for small, fast releases, whether it’s a Slack shoutout or a mention at retro, makes it safe for everyone to try and err in the open.
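Feedback latency is easy to put on the scoreboard once you record two timestamps per change. A sketch with made-up records, measuring “opened” to “in production”:

```python
from datetime import datetime
from statistics import median

# Hypothetical records: when work was opened vs. when it reached production.
changes = [
    {"opened": "2024-05-01T09:00", "in_prod": "2024-05-01T15:30"},
    {"opened": "2024-05-02T10:00", "in_prod": "2024-05-04T11:00"},
    {"opened": "2024-05-06T08:15", "in_prod": "2024-05-06T09:45"},
]

def cycle_hours(change: dict) -> float:
    fmt = "%Y-%m-%dT%H:%M"
    opened = datetime.strptime(change["opened"], fmt)
    in_prod = datetime.strptime(change["in_prod"], fmt)
    return (in_prod - opened).total_seconds() / 3600

print(f"median cycle time: {median(cycle_hours(c) for c in changes):.1f}h")
```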
Let’s name what really stalls us. I’m not stalling because I don’t know what to do. I’m stalling because I know exactly what “right” looks like, and I’m afraid to fall short. Hard honesty, but once it’s on the table, the panic loses its grip.
Don’t neuter your instincts. I can smell code rot from a mile away. That radar doesn’t get shut off. Let v2 be where elegance emerges from reality, not just theory. You still set standards. You just let shipped experience guide where to raise them.
So here’s where you lead. Ship smaller releases, ship faster, and make yourself ask, “What breaks if we ship today?” Worst case, you earn a real lesson. Best case, you build something worth keeping—and the next version will be better because real users helped shape it. That’s how quality finally wins, and momentum becomes habit.
Enjoyed this post? For more insights on engineering leadership, mindful productivity, and navigating the modern workday, follow me on LinkedIn to stay inspired and join the conversation.