Replace ‘Fundamentals’ With First Principles Performance Criteria

May 23, 2025
Last updated: November 2, 2025

Human-authored, AI-produced  ·  Fact-checked by AI for credibility, hallucination, and overstatement

Why “Evaluate on Fundamentals, Not Kubernetes” Misses the Point

A post flashed across my LinkedIn feed last week: “Evaluate candidates on fundamentals, not Kubernetes.” I nodded, like probably half the engineers reading it. But it stuck with me in a bad way. Slogans like this feel reassuring, almost rigorous, but only until you try to use them in the real world.

Honestly, I agreed with the spirit. Of course we want to measure the basics, not just what someone Googled last month. But what are fundamentals? What’s included, what isn’t? I realized: if I asked ten people, I’d get ten different answers.

This is where the wheels come off unless we anchor our evaluations in first principles performance criteria. Whether we’re hiring, deciding on promotions, or doing reviews, we lean on words like “fundamentals” or “ownership” as if they mean something uniformly clear. The reality? Everyone’s quietly working from a different mental checklist. “Fundamentals” sounds objective, but often hides unclear thinking. It invites inconsistency and makes feedback suspiciously vague. That’s how bias creeps in, and frustration spreads, because you can’t improve at something your manager can’t even define.

Here’s the shift. Instead of trusting fuzzy slogans, I started asking one direct question: what outcomes matter most for my team, and what do they actually look like in day-to-day work? If you want an interview, promotion, or project review to mean what everyone thinks it means, we have to anchor in these specifics. I’ll walk through a framework—practical, not philosophical—that you can start using today, whether you’re hiring, promoting, or just trying to get useful feedback.

How Vague Criteria Break Down—And What Gets Damaged

If you take a step back and look at how most teams use “fundamentals” for evaluation, it’s almost textbook fuzzy. A vague criterion is anything so broad or ill-defined that two people could look at the same candidate and come away with two completely different verdicts. It can mean everything and nothing. That’s the core flaw. If you can’t observe something reliably, you can’t evaluate it consistently. These labels sound impressive on paper, but they collapse as soon as real decisions are on the line.

[Image: two evaluators with cloudy checklist bubbles facing a puzzled candidate. Vague mental criteria lead to mismatched evaluations and leave candidates feeling lost and disconnected.]

Here’s the real cost, and why it keeps showing up. I’ve watched more than a few hiring panels and performance reviews tilt toward whoever matched the evaluator’s personal mental model—maybe someone who sounded like “them,” or coded the way they prefer. It happens subtly and fast. Fairness erodes quickly when decision-makers rely on unchecked subjectivity, and a process feels less just the more discretion gets exercised after the fact. You see it crop up in promotions, too. The process feels “meritocratic,” but what counts as merit is anyone’s guess. That kind of fuzz breeds doubt and, eventually, resentment.

This shows up in debates about tools like Kubernetes. I’ve sat in rooms where interviewers spent more time arguing over whether knowing Kubernetes counted as “fundamental” than on whether someone could ship a reliable service, make clear tradeoffs, and handle the reality of production outages. Suddenly, you’re measuring tool trivia instead of engineering outcomes.

There’s a meeting from last fall that still bugs me. We spent half an hour—literally, I checked my notes—going back and forth over whether someone’s lack of exposure to specific deployment pipelines meant they lacked “fundamentals.” At one point, someone actually said, “I feel like they’re just missing the essence.” We went round in circles, and by the end, nobody could agree on what this “essence” actually was. Yet a candidate’s chances hung in the balance over it. It’s painful to think about how opaque that must have felt from their side.

So what do you do when you’re on the receiving end of feedback that’s all fog, no substance? Push for specific examples. Ask, “Can you show me what strong performance would look like here?” You can’t improve against a word cloud; you need observable behaviors. When you ask for real, concrete criteria, you turn the fuzz into something you can actually act on.

From Vague Labels to Observable Outcomes: The First Principles Performance Criteria Framework

Start with outcomes when defining first principles performance criteria. When I guide a team through hiring or a peer through promotion, I ask one thing first: what will break if we get this role wrong? Don’t start with a skills wish list. Start with what your team is really on the hook for. Do you need to keep uptime above 99.99%? Are you protecting sensitive user data, or building features that must be accessible to every screen reader? Are you in a fast-moving market where the most valuable people are the ones who can learn new frameworks quickly and make safe bets? Each of these is an outcome: reliability, privacy, accessibility, execution speed, learning agility.

This is where most evaluations go off track—they rush to buzzwords or default requirements before anyone has written down the real-world consequences of failure. So, before you name any skills or tools, make a list of the actual problems the role needs to solve for your team to succeed.

Once you have those outcomes, you need to resist the urge to summarize them with shortcuts like “good at algorithms” or “writes clean code.” Instead, for each outcome on your list, translate it into behavior-based evaluation criteria an evaluator can actually observe. For example, if uptime is critical, do you see the person diving into incident review details, uncovering root causes, and proposing durable fixes during interviews or on the job? If you care about accessibility, are they asking early questions about color contrast or keyboard navigation? Observable means someone outside your brain could write down what they saw or heard.

This is the step most teams skip, and it leads to confusion and bias later. Capture these behaviors in writing—not scattered bullet points or half-remembered checklists, but clear, shareable criteria. When you use a behaviorally anchored rating scale, you’re matching employee behaviors to concrete written examples for each performance tier. Writing forces clarity. Even if you revise your rubric later, the initial version beats relying on memory, gut feel, or after-the-fact debate.

Let’s make this concrete. Take a backend engineer role. Say your critical outcome is robust service design. The behaviors you’d anchor on might look like: in interviews or code reviews, the candidate decomposes a feature into appropriate services or modules, explains why they picked those boundaries, and weighs tradeoffs clearly—like latency vs. maintainability, or storage cost vs. query speed. You’re not looking for recitation of syntax or textbook definitions.

Instead, can they select data structures that match real usage patterns and constraints? Can they walk you through a hypothetical failure and talk about what metrics would flag it early? You’re making it observable. Not “knows system design,” but “identifies dependencies, weighs tradeoffs, surfaces risks with evidence”—and then you see these behaviors show up, or you don’t.
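
To make the “write it down” step tangible, here’s a minimal sketch of how that robust-service-design outcome could live as a shared, reviewable artifact. The shape is illustrative, and the behaviors and tier wording are examples rather than a canonical rubric; a plain shared document works just as well as code.

    # Illustrative sketch only: one outcome captured as behaviorally anchored,
    # written criteria. Behaviors and tier wording are examples, not a standard.
    robust_service_design = {
        "outcome": "Services stay reliable under real traffic and real failures",
        "observable_behaviors": [
            "Decomposes a feature into services or modules and explains the boundaries",
            "Names concrete tradeoffs (latency vs. maintainability, storage cost vs. query speed)",
            "Picks data structures that match actual usage patterns and constraints",
            "Walks through a hypothetical failure and the metrics that would flag it early",
        ],
        "tiers": {
            "meets": "Shows most behaviors when prompted; tradeoffs named but lightly justified",
            "exceeds": "Shows the behaviors unprompted, with evidence from past incidents or designs",
        },
    }

    if __name__ == "__main__":
        # Print the anchors so the whole panel scores against the same written criteria.
        for behavior in robust_service_design["observable_behaviors"]:
            print("-", behavior)

The data structure isn’t the point; the point is that every evaluator scores against the same written anchors instead of a private mental checklist.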

This might sound like overkill, but I found my way in by trial and error. Think of it like improving my sourdough: once I wrote down the outcomes I wanted—spring, open crumb, that subtle tang—and the exact steps I could check (not just which brand of flour I used), my bread stopped flopping in mysterious ways. Same thing with teams: get explicit about outcomes and how to see them, not just surface-level ingredients, and the recipe actually works.

How to Put Outcome-Based Criteria Into Practice

Let’s start with hiring and frame it around a first principles hiring rubric. This is where “fundamentals” buzzwords do the most damage. The move is simple. Design interviews that make candidates show you the exact behaviors that deliver the outcomes your team cares about. Not how fast they chug through LeetCode trivia, and not whether they’ve memorized a tool’s syntax. The big unlock here is using prompts that actually anchor to your outcomes. Here’s what changed everything for me: when interviews use structured, question-based scoring, research reports validity gains for predicting contextual performance of up to .11, which means disciplined scoring meaningfully improves the signal you get. The trick is getting specific.

Let’s make that real and structure interviews for observable behaviors. Say your team needs engineers who don’t just solve problems solo, but actually deepen the group’s understanding as they go. Instead of “Explain the difference between a queue and a stack,” try: “Walk us through a problem you couldn’t solve right away. How did you break it down, and who did you pull in to help?”

Now, here’s the evidence checklist I use: do they narrate their approach, include teammates by name, ask clarifying questions, change direction when new info comes up, and explicitly call out gaps in their knowledge? On curiosity and collaboration, I’m watching for phrases like “I didn’t know, so I checked with…”, “We realized…”, “I suggested, but the team decided…” Whether those signals show up tells me what will actually happen on the job, not how well someone delivers rehearsed answers.
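
Here’s a minimal sketch of what question-based scoring can look like in practice. The prompt, the signal list, and the 1 to 4 scale are illustrative assumptions, not a prescribed format; a spreadsheet row per question captures the same idea.

    # Illustrative sketch of question-based scoring: each prompt carries its own
    # evidence checklist and score, instead of one gut-feel rating at the end.
    from dataclasses import dataclass, field

    @dataclass
    class ScoredQuestion:
        prompt: str
        evidence_signals: list[str]                         # observable signals to listen for
        observed: list[str] = field(default_factory=list)   # what this interviewer actually heard
        score: int | None = None                            # e.g. 1 to 4, anchored to the signals

    question = ScoredQuestion(
        prompt="Walk us through a problem you couldn't solve right away.",
        evidence_signals=[
            "Narrates how they broke the problem down",
            "Names the teammates they pulled in and why",
            "Calls out gaps in their own knowledge",
            "Changes direction when new information appears",
        ],
    )

    # In the debrief, the score has to be justified by observed evidence,
    # not by overall impression.
    question.observed = [
        "Named the teammate they paired with",
        "Said they didn't know and checked the docs first",
    ]
    question.score = 3

The design choice that matters is scoring each question against its own evidence list during the debrief, rather than assigning one overall impression at the end.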

Promotions are the next challenge. The only way to dodge ambiguity is to map what you want to see at each level into an outcome-based engineering rubric, tied to actual outcomes rather than years of experience or checkboxes. Write down what “meets expectations” looks like for each outcome: is it “Writes maintainable services and proactively reviews others’ designs”? What does “exceeds” mean—maybe “Runs complex launches and mentors peers through outages”? Spell it out up front so engineers know exactly what the bar looks like here, not somewhere else, not last year, not in another org. As teams grow, this clarity becomes more valuable—and retrofitting it later is way harder.
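
As a sketch of what “spell it out up front” can look like, here’s an illustrative level map for two outcomes, reusing the example wording above. The outcome names and tiers are assumptions to adapt, not a standard ladder.

    # Illustrative sketch: the promotion bar written per outcome and per tier,
    # so expectations are explicit before any packet is written. Wording is an example.
    promotion_rubric = {
        "reliability": {
            "meets": "Writes maintainable services and proactively reviews others' designs",
            "exceeds": "Runs complex launches and mentors peers through outages",
        },
        "learning_agility": {
            "meets": "Picks up new frameworks with guidance and documents what they learned",
            "exceeds": "Evaluates unfamiliar tools independently and makes safe, reversible bets",
        },
    }

    def expectation(outcome: str, tier: str) -> str:
        """Return the written bar for a given outcome and tier."""
        return promotion_rubric[outcome][tier]

    if __name__ == "__main__":
        print(expectation("reliability", "exceeds"))

Because the bar is written down, a promotion debate becomes “show me the evidence against this wording” rather than an argument over what the level means.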

When it comes to performance reviews, the fastest way to cut through the fog is to pressure-test the feedback itself and run fair, no-surprise reviews. If you hear something like “needs stronger fundamentals,” push back (politely but directly): “Can you give a specific example of a task where strong performance would’ve looked different?” Ground every comment in the job description—behavior and output, not reputation. I’ve been burned by trusting “instincts” and “vibes”; now I ask for evidence I can map to the role.

Here’s the bottom line, and the shift I’m inviting you to try. Your move: replace ‘fundamentals’ with first principles performance criteria. Anchor on what creates results for your real team, make the behaviors specific, and ask for evidence you can actually see. That’s how you build evaluation systems people trust—and more importantly, ones that work.

Handling Concerns: Time, Flexibility, and Keeping Rubrics Real

The first pushback I hear is about time. Laying out first principles success criteria sounds like extra work up front, but here’s the reality. Writing once cuts down the hours lost to messy debates and unclear judgment later. Every cycle you reuse a clear rubric—across interviews, reviews, or promo rounds—is compounding saved energy for your team. A shared framing cuts down back-and-forth and keeps decisions consistent. Writing forces clarity. You do the heavy thinking once instead of every single time you have to explain your reasoning.

But what about rigidity, or the fear that rubrics devolve into politics? I care about this too, and I’ve found a few moves keep things honest. Version your first principles promotion rubric, annotate it with the actual evidence you see in action, and let cross-functional peers review it periodically. After launches or misses, incorporate updates from post-mortems and what your true top performers did differently. No process escapes tradeoffs, but systems that get regular reality checks stay visible and trustworthy instead of quietly fossilizing and becoming gatekeeping tools.

Maintenance doesn’t have to turn into another algorithm. It’s just a habit. Set a quarterly reminder to scan your rubric, audit a few debriefs, and crosscheck recent outcomes against what you’re measuring. If something feels off, adjust. This isn’t meant to be static.

Here’s my unresolved angle. Six months ago, I thought by now we’d have a standard set of “fundamentals” everyone could agree on—something I could print out and hand to new managers. I’m still waiting. What we do have is a practice: making the implicit explicit, starting today. Pick one outcome that matters for your team and write how you’d reliably spot it. That one step gets you farther than a thousand slogan posts ever could. And honestly, it might be as close as we ever get to a universal fundamental.

Enjoyed this post? For more insights on engineering leadership, mindful productivity, and navigating the modern workday, follow me on LinkedIn to stay inspired and join the conversation.

You can also view and comment on the original post here.

  • Frankie

    AI Content Engineer | ex-Senior Director of Engineering

    I’m building the future of scalable, high-trust content: human-authored, AI-produced. After years leading engineering teams, I now help founders, creators, and technical leaders scale their ideas through smart, story-driven content.
    Start your content system — get in touch.
    Follow me on LinkedIn for insights and updates.
    Subscribe for new articles and strategy drops.

  • AI Content Producer | ex-LinkedIn Insights Bot

    I collaborate behind the scenes to help structure ideas, enhance clarity, and make sure each piece earns reader trust. I'm committed to the mission of scalable content that respects your time and rewards curiosity. In my downtime, I remix blog intros into haiku. Don’t ask why.

    Learn how we collaborate →