The Thinking Partner, Not the Employee

Most people delegate to LLMs like bad bosses tossing tasks over a cubicle wall. There's a better way -- and it starts with treating the tool like a collaborator, not a subordinate.

I have been playing with these tools since before the ChatGPT interface was a thing. The early GPTs just did completions, and I got caught up in the hype and bought "unlimited usage" on an early slop generator. (Since then, I have always looked for a "killer feature" and done the math on token usage versus how much I am likely to use a platform. Things are changing so quickly that I refuse to lock myself into a single platform unless there is a clear advantage.)

If I hadn't had friends encouraging me to dig into the new chat interface instead of the simple "word completion" tools, I would have dismissed the whole thing as hype. I want to encourage you in that same way and make sure you're getting the advantages of these tools and not basing your long-term views on a brief exposure to lackluster examples.

2021 was a big year for learning to work with the tools. I found myself absorbing the patterns and testing them for myself -- a very NLP-style "unconscious modeling" approach. I quickly found a couple of patterns that I leaned on heavily.

The RTFM Prompt Pattern

The first was what I call my "RTFM prompt" pattern. The acronym traditionally stands for something like "read the friendly manual," and while a slightly different ordering (Role, Task, More info, Format) works best in the actual prompting, RTFM is more memorable for anyone who has run across the original acronym. Working with human psychology, not against it.

  • Role: what sort of person would be ideal for what we are about to do. One role, or at most two. For a while my #1 role was "You are an I/O psychologist and direct response marketer." (Before I shifted to an inherently ethical approach to marketing and sales and distanced myself from the trappings of direct response marketing...)
  • Task: a brief, high-level idea of what we are doing in this session. Just something quick like "Let's build the content for a new web site" to set the direction.
  • More info: here is where I brain dump anything that seems relevant to the task. Early on, it was pointed out that I interact with the LLMs as if they were my undergrad students -- I assume very little about what they know and get confirmation that we are on the same page before moving forward.
  • Format: what output are you looking for here? A list? Long descriptions? Brief summary? A table? Diagrams? A particular tone or style? Length?

I found that I would automatically drop into this pattern for the majority of my initial prompts. Notice, I said initial prompt. This format is great for seeding a conversation, but we still have to have the rest of the conversation! I saw too many folks trying to "one-shot" the task instead of treating it as an ongoing process.

The "Toss" Problem

We would all likely pooh-pooh the situation if someone "delegated" a task to another human by saying "write me a 2,000-word article on spirulina" and then got upset when what they got back didn't meet their expectations. We call that kind of delegation "the toss" -- the task just gets tossed on someone's desk as the boss walks by. It makes no real attempt at shared understanding, and it is unlikely to get the boss what they want without a long history and lots of other context.

But LLMs are more like Dory from Finding Nemo. They forget things -- basically everything. You start from scratch with every conversation, and if it goes on too long, they forget the start of it. And while the tools are getting better at compensating, this is the nature of the beast when working with LLMs: the only things a model "knows" are what you tell it and what it has been trained on.

What were we talking about? 😇

The "Ask Me Questions Until" Pattern

The other big pattern I noticed I was using is the "ask me questions until" pattern. It wasn't about doing the task itself -- it was typically about getting things out of my head and clarifying my thinking around a given topic or task. I have had tons of chats that start with "Let's design a software platform to do <something>." That opener skips the Role from the RTFM pattern, but clearly establishes the Task.

The key difference in this pattern is in the Format block. I still do my brain dump for the More info section, but then I hand the reins over to the LLM. "Ask me questions one at a time until you have enough information to outline an approach to <task>." Notice, I'm not having it do the task -- just outline an approach. I might have it outline content or suggest the building blocks for the system or something at that broad level of output. I don't want it to do the task; I want to make sure I have things clear enough to do it as a separate step.

I started out with just "ask me questions" but found that I was getting a wall of text with so many questions that it overwhelmed me and I disengaged. Then I tried "one or two questions at a time" and still felt like more than a single step's worth of work was being handed to me, so I settled on one question at a time.
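Put together, the Format block of this pattern reads something like the sketch below. The task text is hypothetical and the wording is my paraphrase of the pattern, not a canonical prompt:

```python
# Illustrative only: a Format block that hands the questioning over to
# the model, one question at a time, and stops short of doing the task.

task = "design a software platform to track reading habits"  # hypothetical

format_block = (
    "Ask me questions one at a time until you have enough information "
    f"to outline an approach to {task}. Do not do the task itself -- "
    "stop once you can outline the approach."
)
```

The two guardrails matter more than the phrasing: "one at a time" keeps each reply a single step of work, and "outline an approach" keeps the model from running ahead and doing the task for you.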

I manage the tools to work with my own psychology instead of making myself work for the tool!

The first time this went well for outlining content, I realized that we had collaborated to generate an interview -- one that was essentially publishable as-is. And all I had to do was seed the conversation in the right direction and let the LLM suggest the next direction.

Keeping Control of the Direction

There are several other articles hinted at here, around not letting the LLM "psych you out" with suggestions -- because you've been socialized to accept them -- and understanding the "context engineering" that is baked into the RTFM prompt pattern. We'll save those and the "tell me about" pre-prompt for later articles. For now, know that you're still in control of the direction things go. You can just rely on the LLM to keep you moving forward instead of having to do that layer of thinking at the same time as the clarifying thinking.

The key here is that I give the LLM its head and then rein it in when I would prefer to go a different direction. That lets me simply explore the ideas around the task at hand. I can answer the question as asked. Or do a brain dump on an adjacent idea. Or push back and say "no, we are going in this direction instead." And all in ways that can be picked up at a moment's notice and that don't tax a relationship with another human.

So this is one of the main ways I use generative AI tools as a thinking partner instead of an intern or worker. I may then hand off the clarified task -- late in the conversation, I often ask the LLM to "generate a summary of the decisions from this conversation as a standalone document with enough detail to be handed off to a junior developer or LLM coding agent."

What Stays Human

The hyper-human skills stay under my control as the human expert, but the LLM can support my thinking in ways that keep the cognitive load where it belongs -- on the deep thinking about the topic rather than on managing the process.