Vol 2, Issue 3 | Jan 21, 2026
The Metric Mandate

Dear Reader,

The most common answer to “What’s the goal of this AI project?” is depressingly consistent:

“To improve efficiency.”

…and that’s not a goal. That’s a wish.

The Problem with Vague Goals

Veljko Krunic nails it in Succeeding with AI: “If you can’t quantify the business result you’re hoping to achieve, you have to ask yourself and your stakeholders whether the project is worth doing.”

AI methods are quantitative by nature. They process numbers, optimize functions, and measure outcomes mathematically. When you feed a quantitative tool vague objectives like “improve efficiency” or “enhance productivity,” you’re asking it to hit a target that doesn’t exist.

But the real issue isn’t technical, and it’s not unique to AI. It’s an organizational process problem.

Projects without clear metrics can’t fail. There’s no definition of failure. So they drift indefinitely, consuming budget while delivering “learnings” instead of results. Six months later, someone asks “Did the AI project work?” and the honest answer is “We don’t know, because we never defined what ‘working’ meant.”

The Metric Mandate

Before any AI project moves past the idea stage, it needs to pass through a simple gate. Five questions, all requiring specific answers:

1. What specific number will change?

Not “efficiency” or “productivity,” but the actual metric. Response time, error rate, processing hours, revenue per customer. Pick one number.

2. What is that number today?

You need a baseline. If you don’t know the current state, you can’t measure improvement. “We think it takes about a day” isn’t a baseline. “Average processing time is 6.2 hours based on last quarter’s data” is a baseline.

3. What’s the minimum improvement that justifies the investment?

This is where projects can get uncomfortable. If you’re spending $50,000 on implementation, what improvement makes that worthwhile? A 5% reduction in processing time? 20%? 50%? Set the bar before you start, not after you see results.

4. When will you measure?

Commit to a timeline. “We’ll know if it worked after 90 days of production use” is a commitment. “We’ll evaluate when we have enough data” is an escape hatch.

5. What result means you stop?

This is the hardest question — and the most important. What are your kill criteria? If after 90 days you’ve only achieved a 3% improvement instead of the targeted 20%, do you continue investing or cut losses?

If any answer is blank or “TBD,” the project isn’t ready.
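To make the gate concrete, here’s a minimal sketch in Python of the five answers as a data structure. The names (MetricMandate, is_ready, the fields) are my own illustration, not part of any official template:

    from dataclasses import dataclass, fields
    from typing import Optional

    @dataclass
    class MetricMandate:
        metric: Optional[str] = None              # 1. the specific number that will change
        baseline: Optional[float] = None          # 2. what that number is today
        target: Optional[float] = None            # 3. the minimum result that justifies the investment
        measure_after_days: Optional[int] = None  # 4. when you will measure
        kill_criteria: Optional[str] = None       # 5. what result means you stop

        def is_ready(self) -> bool:
            # Any blank or "TBD" answer keeps the project in the idea stage.
            return all(
                getattr(self, f.name) not in (None, "", "TBD")
                for f in fields(self)
            )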

In Practice

Bad: “We want to use AI to improve our customer service.”

Better: “We want to reduce average ticket resolution time.”

Good: “We want to reduce average ticket resolution time from 4.2 hours to under 2 hours within 90 days. If we’re not below 3 hours by day 60, we’ll reassess the approach. Current baseline measured from Q4 2025 data across 12,000 tickets.”

The good version might feel like overkill. It’s not. It’s the difference between a project that can succeed or fail, and therefore be learned from, and a project that just continues.
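Run the good version through the sketch above and it encodes cleanly; the bad version can’t even fill in the fields, which is the whole point. The numbers come straight from the example, but the structure is still my invention:

    # The "good" goal, as data.
    ticket_project = MetricMandate(
        metric="average ticket resolution time (hours)",
        baseline=4.2,               # Q4 2025 data, 12,000 tickets
        target=2.0,                 # under 2 hours within 90 days
        measure_after_days=90,
        kill_criteria="reassess if not below 3 hours by day 60",
    )
    assert ticket_project.is_ready()

    # The "bad" goal never gets past the gate.
    vague_project = MetricMandate(metric="improve customer service")
    assert not vague_project.is_ready()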

Velocity Doesn’t Mean You Skip Steps

I spent most of this past weekend working with a development system called GSD (Get Shit Done), a framework for building software with Claude as your implementation partner. It was written by a guy who doesn’t even consider himself a coder: he’s a music producer who wanted to build his own tools.

The system has a rule: you can’t execute any phase of work until you’ve defined what must be TRUE when that phase completes.

Not what tasks you’ll do. Not what files you’ll create. What observable behaviors must exist.

Before you write a single line of code, you answer: “What can a user DO that they couldn’t do before?”

If you can’t list 3-5 specific, verifiable truths, the system won’t let you proceed to execution. No exceptions.
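I haven’t read GSD’s source, so this is only a toy sketch of that rule as I understand it, not the framework’s actual mechanism:

    def gate_phase(phase: str, truths: list[str]) -> None:
        # Refuse to start a phase until 3-5 specific, verifiable truths exist.
        specific = [t for t in truths if t.strip() and "TBD" not in t]
        if not 3 <= len(specific) <= 5:
            raise ValueError(f"Phase '{phase}' blocked: list 3-5 observable truths first.")

    gate_phase("login", [
        "A user can sign in with email and password",
        "A wrong password shows an error without locking the account",
        "A signed-in user lands on their own dashboard",
    ])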

Here’s what struck me: even with Claude handling the implementation — writing code faster than I could type it myself — I still can’t skip the step of defining success upfront.

AI gives you speed. It doesn’t give you permission to skip thinking about what “done” means.

The same principle that protects AI projects at the business level protects software projects at the technical level: define observable success before you start work.

GSD calls it “goal-backward methodology.” Business strategists call it “defining success criteria.” The Metric Mandate calls it “answering five questions before you start.”

Same idea. Different domains. Same protection against confusing activity with progress.

The speed AI provides makes this MORE important, not less. When you can ship in hours instead of weeks, the temptation to skip the “what does success look like?” conversation gets stronger.

Resist that temptation. Taking an hour or two to define your metrics and outcomes drastically increases the likelihood of a solid first version and reduces cleanup iterations.

Fast execution of unclear goals just gets you to “sort of done” faster. It doesn’t get you to done.

The Protection This Provides

The Metric Mandate isn’t bureaucracy for its own sake. It protects you in three ways:

It protects your budget. Projects with clear success criteria get funded more easily, and defended more easily when someone questions ROI. “We reduced processing time by 40%” beats “We learned a lot about AI capabilities.”

It protects your credibility. Nothing damages an AI initiative faster than the perception that it’s just technology tourism. Clear metrics demonstrate business thinking, not tech enthusiasm.

It protects your team. People working on projects with vague goals get demoralized. Are we winning? Losing? No one knows. Clear metrics let teams celebrate real wins and course-correct when something isn’t working.

The Quick Gut Check

Next time someone proposes an AI project — including yourself — run these five questions:

  1. What number changes?
  2. What’s the baseline?
  3. What’s minimum success?
  4. When do we measure?
  5. What triggers stop?

If they can’t answer all five with specific numbers, the project isn’t ready for implementation. It’s still in the “wish” phase.

And wishes don’t ship.


Get The Tools

I’ve created two ready-to-use tools to help you apply this framework:

Claude Code Skill - An installable skill for Claude Code and Claude Desktop. Guides you through all five questions, rejects vague responses, and pushes you to define real baselines and kill criteria. Install it once, use it anytime with /metric-mandate.

ChatGPT Prompt - A copy-paste prompt for ChatGPT users. Same rigorous framework, works in any ChatGPT conversation.


If you’ve got a project stuck in this phase — or you’ve seen this pattern play out — I’d be curious to hear about it. Hit reply.

Until next week,

Jim

Signal Over Noise is a weekly, reader-first publication on AI “without the hype,” published by Jim Christian. If you’ve been forwarded this issue, you can subscribe for free: go.signalovernoise.at.

