It started a few years ago with an innocent enough request: "can you make something like <platform> that generates content and posts it automatically?"
I checked it out and gave my usual answers -- "with enough time and money, yes," or "anything for a friend for a fee." They wanted to handle the marketing, so I didn't dig much deeper than feasibility.
Just because you can build something doesn't mean you should.
The Build
It was simple enough. Keyword or idea in, article out. I knocked out the skeleton over a few days to a couple of weeks. (LLM coding agents weren't around yet to build something "while you sleep," but I did use the available LLMs for pair programming.) It could generate solid content in minutes using the internal structures I prefer when writing. Test spikes for posting to WordPress looked promising.
Then came the excuses.
Rebuilding this. Testing that. Prompts needing tweaks. The rest of the SaaS platform needing work. And the project just dragged on indefinitely. It was like that old project car the neighbor works on every weekend but never drives out onto the road.
The Legitimate Problems
Some of it was legitimate. The LLM could write articles, but there was very little guidance, so it spat out generic AI slop -- before that term was even a thing. The answer seemed to be putting the human in the loop earlier by having them manage a topic outline before we drafted anything. The drafts still came out rather generic, so I added a stage to expand the topic outline into a sentence outline before drafting, so a bit more of the human author could shine through. Then the marketing folks expanded the scope to include a whole business model using emails "inspired by" the main article, so those got added too.
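The staged flow described above can be sketched as a tiny pipeline. To be clear, everything here is a hypothetical illustration: the llm() helper is a stand-in for whatever chat-completion API you'd call, and the stage names and prompts are mine, not the product's actual code.

```python
def llm(prompt: str) -> str:
    """Placeholder for a real chat-completion API call (assumption)."""
    return f"[generated from: {prompt[:40]}...]"

def topic_outline(keyword: str) -> str:
    # Stage 1: a topic outline the human edits before anything else happens.
    return llm(f"Draft a topic outline for an article about: {keyword}")

def sentence_outline(outline: str) -> str:
    # Stage 2: expand each approved point into full sentences, so more of
    # the author's direction survives into the draft.
    return llm(f"Expand each outline point into a full sentence:\n{outline}")

def draft_article(expanded: str) -> str:
    # Stage 3: only now generate the full draft.
    return llm(f"Write the article from this sentence outline:\n{expanded}")

def inspired_emails(article: str, n: int = 3) -> list[str]:
    # Stage 4 (the scope creep): emails "inspired by" the main article.
    return [llm(f"Write email {i + 1} inspired by:\n{article}") for i in range(n)]

def run_pipeline(keyword: str, review) -> dict:
    """review() is where the human edits each intermediate artifact."""
    outline = review(topic_outline(keyword))
    expanded = review(sentence_outline(outline))
    article = draft_article(expanded)
    return {"article": article, "emails": inspired_emails(article)}
```

The key design point is that review() sits between the early stages, not just at the end -- the human shapes the outline before any prose exists.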
The Less Legitimate Problems
Some of the excuses were more gratuitous. I changed backend libraries. Tested new UI toolkits. Couldn't get the process to flow the way I wanted. Built an editor view from libraries, then from scratch. Decided "let's change providers to make the ongoing costs cheaper" even though nothing was being used yet, so costs were zero.
Looking back, it was anything I could do to keep from putting it out there in front of people. And for the longest time, I couldn't see the real reason why.
I did catch that it was happening, but I thought it was just perfectionism. Or an aversion to maintaining the project once it launched. Or fear of marketing it -- even though I had experienced marketing partners champing at the bit to handle that part!
The Realization
Eventually it clicked. I just couldn't put more AI slop out into the world, even with some nominal human-in-the-loop aspect to it.
We tried to slap a band-aid on it with new features. Collect more info about the author and their audience. Build the library of topics to cover. Make the LLM more helper and less actor. But it was all half-hearted -- rearranging deck chairs on the Titanic. It didn't matter in the long run because my heart just couldn't go on.
It felt like a catch-22. The LLM can do the job, but it loses the human aspects that make the business model work. Or the human can do the work, with the platform guiding the process and the LLM doing some suggesting -- but then we've just made more work for the human in the name of taking work off their plate. Neither is what we set out to build, nor the thing the world needs.
The Pivot
Then I had that moment. In the same way that Andre attracts the right people with his Tiny Digital Worlds approach, this needs a new way of thinking about it all. I can't take all the work away -- that would bring in the "wrong" sort of people for anything I want to attach my name to. This needs to resonate with legitimate experts and scaffold them.
The point isn't to take the task away from the human. It is to put more of the human into the output in ways they may not be able to do on their own. Work with our human psychology, not against it. Keep the parts of the struggle that add to the result. Minimize all the rest.
I'm a big fan of the Pareto principle, the "80/20 rule" -- the majority of our results come from a minority of our effort. It leads to the idea that we can make artisan bread in five minutes a day, skipping all the kneading in favor of a minute or two of "gluten cloaking" to get the same results. It also tunes us to pay attention to the aspects that matter and to "min-max" by deemphasizing the things that don't make a difference.
So I looked at how I was already interacting with the tools in chat interfaces and thought about how best to augment that with some tooling. I flipped the process on its head. Instead of the tool taking over the whole job or becoming a digital taskmaster to the human, I started asking: how can we use the AI to support the human and make their thinking better? There is still work that needs to be done by the human, but we're being smart about when and where to boost that hyper-human work with some generative AI magic.
I'm still working on aligning the user interface with the workflow -- is that an excuse?! -- but this iteration is coming together as a very different tool. Same underlying business model. Same outcome of words on the screen. Very different way of working between the LLM and the human.
What Changed
Intelligence Augmented, not replaced by the artificial version. Amplifying the author's expertise, not trying to fake non-existent experience. Grounded in the expert's background, earned skills, and genuine worldview -- not a regression to the mean with the most statistically likely words following one another in a swirl of mediocrity. Focused human effort, not an easy push-button slop generator.