Allie K. Miller said something on the Calum Johnson Show last week that I have not stopped thinking about. Three years ago, she said, AI had no memory beyond your single conversation with it. You opened a chat, you got an answer, and the next time you came back you started from zero. Now you can hand it everything: your executive's preferences, the cadence of her week, the names she trusts, the way she likes things written, and it holds all of it.

Memory across sessions changes the shelf life of every prompt you have ever written. The prompt you ran six months ago and shelved when it disappointed you is not the same prompt now. The model behind it has different memory, different reach, different ability to take action. Same words on the page, but a different answer in the box. The question for the EAs who want to stay ahead is not what to try next. It is what to come back to.

This week

If you have been experimenting with AI for any length of time, you have a graveyard. A folder, a doc, a notes app full of prompts you wrote, ran, and watched come back disappointing. You filed them away or quietly deleted them, and you moved on to the things that actually worked, because the work does not slow down for our experiments. I have done the same thing more times than I want to admit, and almost every time I have looked back I have realized I gave up too early.

The prompts that did not work yet are not failures; they are research and development for the next version of the tooling, which is shipping faster than any of us can keep up with. The skill is not writing the prompt that works on the first try. The skill is keeping the prompt alive long enough that the tools catch up to the idea, and recognizing the moment when they finally do.

The four-time scheduling agent

I tried to build the same scheduling agent four times over the past year, and it is the reason I now treat my prompt library as living infrastructure rather than a static archive. The first version asked for too much in one shot: scan my email, scan my exec's calendar, scan every leader's calendar I had access to, draft the response, ask me for the duration and timezone, send it. All in one prompt. It did a passable job as a local app I built in Cursor and ran on my laptop, but when something broke I could not tell which piece had broken, and I could not scale any of it to my team. As I wrote in Issue 5, LLMs work best when you feed them work in building blocks. The first version of my scheduling agent was the version that taught me what that meant.
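To make the building blocks idea concrete, here is roughly how that one-shot prompt splits apart. These steps are illustrative, not the exact prompts from my library:

Step one: scan this inbox and list only the messages that are scheduling requests.

Step two: for this one request, check these calendars and return three open slots.

Step three: draft a reply offering those slots in the requester's timezone, and show it to me before anything sends.

Each step can fail, be inspected, and be fixed on its own, which is exactly what the one-shot version could not do.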

I rebuilt it three more times after that, in Gumloop, then ChatGPT, then Claude. Each version got closer, and each one hit a different ceiling. The prompt did not change. The tooling moved every time. A few weeks ago the official Slack MCP connector shipped. I pasted the same prompt into Claude Cowork, and it became the thing I did not know I had been looking for the whole time: an autonomous responder that scans Slack every ten minutes for scheduling requests, checks calendars, and responds on my behalf, autonomously. I had been looking for a Gmail solution, but Slack is where most of my scheduling actually happens, and what I ended up with, using almost the same prompt, turned out to be even more useful.
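If you are wondering what that looks like in practice, the standing instruction has a simple shape. This is the shape, not my exact prompt:

Every ten minutes, check my Slack messages for anything that is a scheduling request. For each one, check my exec's calendar for open times, reply in the thread with two or three options in the requester's timezone, and flag anything ambiguous to me instead of answering it yourself.

The prompt itself is short. The part that did not exist until a few weeks ago is the connector that lets the model actually read Slack and act on a schedule.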

The prompt was the constant across every attempt. The tooling was the variable. If I had thrown the prompt away after Cursor, or after Gumloop, or after ChatGPT, or after Claude, I would have started from scratch the day the right tooling finally showed up. The work I put in over those four attempts compounded into the working version, because the prompt did the patient work of waiting for the tools to catch up to it.

What this means for your library

This is why the prompt library is the asset, not any single prompt inside it. The prompts that produce a mediocre result today might produce an excellent one in three months, because the models are improving that fast and new connectors are shipping every week. Your library is a record of the ideas you have had about your work, and every time a capability ships, the question is not "what should I try now," it is "which of my ideas just became possible." That second question is much faster to answer if you have already written the prompts down.

The EAs who treat prompts as one-and-done, who write them, run them, and delete the ones that disappoint, are starting from zero every time the tooling shifts. The EAs who treat the library as living infrastructure are walking into every new capability with a backlog of ideas already articulated, already tested, already waiting. Patience with the tooling is becoming a real competitive advantage in this work, because it compounds in a way that nothing else does.

There is a prompt I want you to run today, and it is built for exactly this situation: mining your own graveyard to find the prompts that just became possible.

Here is the prompt to run today

I have a prompt I tried [TIMEFRAME, for example, three months ago, last quarter] that did not produce what I wanted at the time. I want to evaluate whether the tooling has caught up to the idea, and whether it is worth running again now.

Here is the prompt I tried: [PASTE THE OLD PROMPT]

Here is what I wanted it to do: [WHAT YOU WERE TRYING TO ACCOMPLISH]

Here is what went wrong when I ran it: [WHAT FAILED, for example, output was too generic, the tool could not access X, the model could not take actions]

Based on what has shipped in LLM capabilities since then, including longer context windows, tool use, MCP connectors, file handling, agentic features, and improvements in reasoning, please give me all of the following:

1. Whether this prompt would likely work now and why, with specifics about which capabilities have shifted.

2. If it would work now, an updated version of the prompt that takes advantage of the current tooling.

3. If it would not work yet, what specifically would need to ship for it to work, and a stripped-down version I can run today that captures part of the value.

4. A short note on what I should watch for in the next six months that would unlock the full version.

Ask me clarifying questions to provide the best recommendations.

Run this on three prompts from your graveyard this week. Not ten, not the whole library, three. The point is to build the habit of revisiting on a cadence, because the next time a major capability ships, you will already know which prompts in your library are waiting on it.

A note on patience

Patience with tooling is not the same thing as patience with yourself. The EA who tries something with AI and does not get the result she wanted is not behind. She is doing the work. The tools that produce the result she wanted may not exist yet, and the version of the tools that does will land in her lap with no warning, often in the form of a small update to a tool she already uses. The reps you put in now are what let you recognize that moment when it arrives, because you have already articulated the idea, you already know what good output looks like, and you have already built the prompt that has been waiting for the day it would work.

Save the prompts that almost worked. Date them. Come back to them on a cadence. The graveyard is not dead; it is the most underrated asset in your library, and the EAs who learn to mine theirs are the ones whose work keeps compounding.

→ Grab the free starter library at theforcemultiplier.co

Coming up next

I’ll be taking next week off to take our daughter to Disneyland for the first time, and I plan to be fully present for every minute of it. When I’m back, we’ll pick up on something that’s been on my mind: rest and sustainability in this AI era. The pace of this industry is relentless, and the pressure to keep up can lead to burnout. We’ll dig into what it actually looks like to build a sustainable practice with these tools instead of letting them run you.

You are built for this work, and I mean that.

Please do not wait on the sidelines while the rest of us figure it out.

Come figure it out with us.

Go multiply.

Sabrina

The Force Multiplier
