SoN Vol 2, Issue 6: The Talking Wrench


Vol 2, Issue 6 | Feb 11 2026
Three conversations you should be having with your AI tools

Dear Reader,

I had coffee this week with a friend who’s trying to solve a practical problem. He needs a better way to track his calls, emails, and to-do items. His CRM is still in development, so right now it’s spreadsheets and memory. He has access to Microsoft Copilot.

I gave him a few suggestions, but mostly I said: describe your problem. Tell it (Copilot) what the best outcome looks like for you. Tell it what software and tools you have available. Tell it what it has access to, if it doesn’t already know. Don’t trust the first answer blindly — verify it, challenge it, repeat. It’s a two-way learning process.

He looked at me like I’d described talking to a wrench. Which I suppose, in a way, I had.


You Can Ask a Wrench What It Does Now

Here’s something that seems obvious once you say it out loud, but I don’t think we’ve fully reckoned with it.

We’ve never been at a point in history where you could ask a tool what it does.

A wrench doesn’t have opinions about its own constraints. A spreadsheet can’t tell you what it’s bad at. A circular saw doesn’t warn you when you’re about to use it wrong. You learn those things through experience, manuals, or — occasionally — missing fingers.

But your AI tools can answer those questions. If you ask.

Most people don’t. They either trust blindly (dangerous) or distrust completely (wasteful). Both positions miss the same thing: for the first time, you can have an actual conversation with your tools. Not just give instructions — have a back-and-forth about how the tool works, where it struggles, and what it needs from you to do its job well.

There are three conversations worth having. For a long time, I only had the first one.


Conversation 1: What Do You Want It To Do?

This is the part everyone knows — prompting, instructions, context. The whole internet has advice on this, so I’ll keep it brief.

What most guides miss: the quality of your instructions scales with your understanding of the tool’s actual behaviour, not its marketing materials.

“Write me a blog post” is an instruction. It’s also close to worthless — not because the AI can’t write, but because you haven’t told it anything about your version of good. Your voice, your audience, your standards, your specific situation.

The better you understand what the tool actually does (not what the landing page claims), the better your instructions get. Which is why the third conversation matters so much — but we’ll get there.

That’s where the conversation usually ends. Give instructions, get output, accept it or complain. Maybe 30% of what’s available.


Conversation 2: What Don’t You Want It To Do?

This is the one I wish I’d started with.

Constraints are instructions too. Arguably more important ones.

I keep a running document of things I’ve told my AI assistant not to do. Some examples:

  • Don’t agree with me when you actually disagree. AI tools are trained to be agreeable. They want to validate you. That sounds pleasant until you realise it means they’ll enthusiastically agree with your bad ideas — unless you explicitly tell them not to. I had to instruct my tool to be honest with me.
  • Don’t make things up about my work. If it doesn’t know what I built or did or said, the instruction is to check or ask — not to invent plausible-sounding details to fill the gap. This one got added after it fabricated specifics about projects I’d supposedly worked on. It sounded convincing. None of it was real.
  • Don’t cite numbers without checking the source. It once pulled subscriber counts from an old document and put them in a live email. The numbers were weeks out of date and significantly wrong. Now there’s a rule: verify from the actual data, or say you can’t verify.
  • Don’t be a sycophant. No “Great question!”, no performative enthusiasm. And if you can say it in three sentences, don’t use ten.

Here’s the important part: I didn’t write any of those on day one. Every single constraint was added after something went wrong. The AI did something, I caught it, I understood why it happened, and I wrote a rule so it wouldn’t happen again. I still catch it doing things I haven’t written rules for. The document’s never finished.

That’s a feedback loop — and it’s one that actually works, because unlike a human colleague, the AI reads and follows the constraint document every single time. No ego, no forgetting, no “yeah but I thought this time was different.”

Over time, the constraint document becomes more valuable than the instructions. The instructions tell the tool what to do. The constraints shape what kind of collaborator it is.
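If you script your own AI workflows, the constraint-document idea translates directly into code: keep the rules in a plain-text file and prepend them to the system prompt on every call. Here’s a minimal sketch in Python. The filename `constraints.md` and both function names are hypothetical, and the string you build would go into whichever API you actually use:

```python
from pathlib import Path

def build_system_prompt(base_prompt: str, constraints_path: str = "constraints.md") -> str:
    """Prepend a running constraint document to a base system prompt.

    The constraints file is plain text: one rule per line, each added
    the moment the tool gets something wrong.
    """
    path = Path(constraints_path)
    constraints = path.read_text().strip() if path.exists() else ""
    if not constraints:
        return base_prompt
    return (
        f"{base_prompt}\n\n"
        "Hard constraints (follow these in every response):\n"
        f"{constraints}"
    )

def add_constraint(rule: str, constraints_path: str = "constraints.md") -> None:
    """Append a new rule as soon as you catch a failure."""
    with open(constraints_path, "a", encoding="utf-8") as f:
        f.write(f"- {rule}\n")
```

Because the file is read fresh on every call, a rule added today applies to every future session, which is exactly the feedback loop a human colleague can’t offer.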


Conversation 3: What Can You Actually Do?

Here’s what still catches me off guard.

You can ask an AI: “What are you good at? What are you bad at? What should I not trust you to do?”

And you’ll get a useful answer. Not a perfect one — these tools still have blind spots about their own blind spots — but a better starting point than any manual ever gave you.

Try this today. Open whatever AI tool you use regularly and ask:

“What are the things you’re most likely to get wrong? What should I always double-check when working with you?”

You’ll get a response that’s more honest than most software documentation. Claude will tell you it sometimes fabricates citations, ChatGPT that it can be confidently wrong about recent events, Gemini about gaps in real-time information. These aren’t secrets — they’re known limitations the tools will openly share if anyone thinks to ask.

Then take it further:

“Given what you know about our conversation so far, what am I trusting you to do that I probably shouldn’t be?”

It’s not an obvious question to ask. And in my experience, it produces the most useful answers — because it forces the tool to evaluate your specific situation, not recite generic disclaimers.

One more:

“What would you need from me to do this task better? What context am I not giving you that would help?”

Now you’re in a real dialogue. The tool is telling you how to use it more effectively. No manual has ever done that. No wrench has ever said “actually, you’d get a better grip if you held me this way.”


The Bigger Point

What I was really telling my friend over coffee wasn’t “here’s how to use Copilot.” It was: you don’t start with trust. You start with a conversation. Describe the problem, tell the tool what good looks like, see what it comes back with, then challenge it. Verify, adjust, repeat.

That’s not cynicism — it’s just how trust has always worked, with any tool or collaborator. But there’s a new element now: you can ask the tool to participate in the trust-building process.

Ask it where it’ll likely fail you. Tell it where it already has. Write constraints and watch them take effect immediately. Then ask what it needs from you to do better.

That two-way conversation — between you and the tool, about the tool itself — is completely new. And I think we’re drastically underusing it.


Three Things To Try This Week

  1. Ask your tool what it’s bad at. Literally ask. “What should I not trust you to do?” The answer will be more useful than you expect.
  2. Start a constraint document. Doesn’t need to be fancy — a note on your phone works. Every time your AI tool gets something wrong, write down what happened and what you’d want it to do differently. After a few weeks, you’ll have something actually valuable — a personalised reliability map.
  3. Ask it what it needs from you. “What context am I not giving you that would help?” turns a one-way instruction into a collaboration. The tool will tell you how to get better results from it — if you think to ask.

The wrench can talk now. Most people are still just swinging it.

Hit reply if you try any of these — I’m curious what your tools say about themselves.


Signal Over Noise Is Going Paid

Quick reminder: Signal Over Noise moves to a paid newsletter on March 4th.

If you’re reading this, you’re already here — and the Founding Member rate of €49/year is still available. That’s not “first year” pricing. It’s your rate, permanently, for being early. Monthly pricing gets announced next week.

The short version of why: I’d rather be accountable to readers than to advertisers. The full explanation is in the announcement post.

Elsewhere

SketchScript launches on Product Hunt Thursday

It turns meeting transcripts, YouTube videos, and podcast episodes into hand-drawn sketchnotes — visual summaries you can actually scan and remember. If you’re on Product Hunt, an upvote would go a long way.

YouTube - Vibe Coding My Website w/ Claude


A behind-the-scenes look at building a website with AI assistance.

Until next week,

Jim


Signal Over Noise is a weekly, reader-first publication on AI "without the hype," published by Jim Christian. If you've been forwarded this issue, you can subscribe here: go.signalovernoise.at.

600 1st Ave, Ste 330 PMB 92768, Seattle, WA 98104-2246
Unsubscribe · Preferences

Jim Christian

I test AI tools, build real workflows, and share what's worth your time. Newsletter, field guides, and courses — everything based on what I've shipped, not what I've read about.
