Welcome back to Watch Me AI
If you’re new here, great to have you.
Watch Me AI is where I share practical ways to use AI in your work and life. Here are a couple of past issues to get you started: "It's time to switch to Claude" and "How can I make more money?"
Hi friends,
I was in The Wall Street Journal this week in a piece about people whispering to their computers. This is funny, mildly embarrassing, and completely accurate.
I use Wispr Flow constantly. I dictate prompts, emails, messy first drafts, strategy notes, and instructions to coding agents. I use dog walks for verbalizing brainstorms, and car rides for lengthy to-do lists. And yes, you can sometimes find my husband and me both muttering to our laptops from the couch.
Apparently I am not alone. I saw a LinkedIn thread this week where someone was making the case that keyboards may soon feel like CD-ROMs: useful for their time, but obviously not the endpoint. That sounds dramatic, but I think there’s something real there. The bigger story is not Wispr Flow. The bigger story is that the keyboard is starting to lose its monopoly as the main interface for knowledge work.
Typing is too tidy
The chat box asks us to package our thoughts before AI ever sees them. It rewards the polished question, the clean prompt, the already-organized idea. But that is not how most thinking starts.
Thinking usually starts messier than that: “Wait, I think what I actually mean is…” or “This might be dumb, but…” or “The thing I’m frustrated by is…” or “I can’t quite explain it, but something feels off…” That messy middle is often where the signal is.
Voice captures more of it. When I dictate, I give AI more context because talking is lower-friction than typing. I include the aside, the caveat, the emotion, the contradiction, the thing I would have edited out if I had to write it cleanly. AI is often better when it gets the messy version first, and it is certainly faster than I am at turning that mess into structure.
From dictation to captured context
Wispr Flow is one version of this shift: solo thoughts becoming text. Granola is another version: conversations becoming context.
I'm fascinated by what you can do once your meeting history becomes searchable and promptable. The feature I’ve been playing with most is recipes, which are basically reusable prompts that run on top of your transcripts (there’s a sketch of the pattern after the list below).
A few favorites recently:
- Coach Me Matt gives leadership coaching based on the Mochary Method. I’ve been using this to stay focused on my zone of genius and notice when I’m drifting into work I should probably delegate or avoid.
- Create LinkedIn Post turns a meeting transcript into a few possible LinkedIn posts. Not every conversation needs to become content, but some calls contain a useful idea I would otherwise forget.
- Find the Signal looks across a broader set of meetings and tries to identify a pattern no single meeting would reveal. This recently helped me notice a client concern I had moved past too quickly.
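For the curious, here is what that pattern boils down to in code. This is a minimal sketch of the idea, not Granola's actual implementation: the recipe text, file layout, and model are my own assumptions, and I'm using the OpenAI Python SDK as a stand-in for whatever runs under the hood.

```python
# A hypothetical sketch of the "recipe" pattern: one reusable prompt
# run over saved meeting transcripts. Not Granola's real API -- the
# file layout, recipe wording, and model choice are all assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RECIPES = {
    "find_the_signal": (
        "Read these meeting transcripts and identify one pattern "
        "that no single meeting would reveal on its own."
    ),
    "linkedin_post": (
        "Turn this meeting transcript into three possible LinkedIn posts."
    ),
}

def run_recipe(recipe_name: str, transcript_dir: str = "transcripts") -> str:
    """Apply one reusable prompt to every transcript in a folder."""
    transcripts = "\n\n---\n\n".join(
        p.read_text() for p in sorted(Path(transcript_dir).glob("*.txt"))
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": RECIPES[recipe_name]},
            {"role": "user", "content": transcripts},
        ],
    )
    return response.choices[0].message.content

print(run_recipe("find_the_signal"))
```

The point is less the code than the shape: once your transcripts are just text you can prompt over, every recipe is one reusable instruction away.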
AI can now reason across my conversations, and it does so with zero typing required. This feels like a significant part of the shift away from keyboards.
The interface is moving closer to the thought
For a long time, the workflow was: think, type, organize, ask, receive output.
Now it is becoming: talk, capture, process, act.
The bottleneck in AI use is often not model intelligence. It is input friction: the gap between what is in your head and what the machine can understand.
The more that gap shrinks, the more useful AI becomes. A blank chat box is useful. A chat box with your dictated thoughts is better. A chat box with your meeting history, decisions, objections, patterns, and unresolved questions is much better.
That is when AI starts to feel less like a vending machine for answers and more like a thinking layer across your work.
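To make that layering concrete, here is a toy sketch of the difference. The context file and variable names are hypothetical; the point is only that the contextual prompt carries the messy middle along, so you don't have to summarize it first.

```python
# Toy sketch: the same question asked bare versus with captured context.
# "context.md" is a hypothetical file of dictated notes and meeting summaries.
question = "Should I take on the new consulting project?"

bare_prompt = question  # the blank chat box version

with open("context.md", encoding="utf-8") as f:
    captured_context = f.read()

# The enriched version: decisions, objections, and open questions come along.
contextual_prompt = (
    "Here is my recent meeting history, decisions, and unresolved questions:\n\n"
    f"{captured_context}\n\n"
    f"With all of that in mind: {question}"
)
```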
Where this gets weird
Arjun Arora posted this week about Neuralink-style brain-computer interfaces. His point was that the most disruptive interface may not be another app or device, but a direct line between our brains and computers. In other words, the “input layer” could eventually move from keyboard, to voice, to thought itself.
Most people find the brain-chip version of the future insanely alarming. I get that. There are real questions about privacy, consent, control, and what it means to let technology get that close to thought.
But if I’m being honest, my first reaction is more excitement than fear.
Because if you follow this trend to its logical extreme, the endpoint is obvious. We keep trying to reduce the distance between thought and output. The keyboard was one interface. Voice is another. Meeting capture is another. Brain-computer interfaces are the more provocative version of the same question:
What happens when the machine can get closer to the thought before we have translated it into words?
That is exciting. It is also unsettling. And it makes the current moment feel like a transition point.
Today, I am whispering into my laptop. Tomorrow, maybe the interface is closer to thought itself.
For now, I am mostly grateful for anything that helps me more easily get ideas onto paper (or into this newsletter).
How do you feel about the new voice paradigm? And where do you land on the scale from alarmed to excited about brain chips? I'd love to hear.
Until next time,
Mollie
Ready to level up your AI skills?
Get my full catalogue of AI tools, workflows, and advice for saving time + making money with AI.
Get the AI Business Playbook here →