Recursively Self-Improving Writers

Then, about two weeks ago, I had a real idea…

And now we’re getting closer to that feedback loop I was talking about before.

Over the past few months, I’ve developed several custom LLM prompts that are tailored to help me with various editorial tasks (I’ll also share some of these at the bottom of this post).

But I have a very strong opinion here…

A lot of people are thinking about AI very badly when it comes to the question of writing. A lot of people talk about these tools as if we’re going to have them think for us. Or as if they can be important sparring partners. You hear that phrase a lot right now: “sparring partners.”

I don’t buy it.

At least not yet.

Currently, I would never let any of these tools anywhere near the task of creating, discovering, or even suggesting ideas to me. Who knows what dark forces you’re summoning!

My mind, my perspective, my education, my motivations, my dreams, and my values are all far more interesting and meaningful and powerful than anything these machines can generate out of thin air. At least for now, and I suspect for quite a long time.

The long-term future is very hard to know, so I’m generally not interested in talking about that. I’m interested in the medium term and what can be done.

Now, today, what these tools can absolutely handle are the mechanical aspects of writing and publishing.

Spelling and grammar, for instance, are essentially the application of well-defined algorithms.

Translating from one language to another is not as artistic as some people want you to think it is; it’s an algorithm.

Taking some unstructured observations and turning them into a logically sequenced format? That’s not creative or artistic either; it’s mechanical.

Turning an essay into a video outline, or lecture notes—mechanical. Speaking or lecturing, I would not outsource. But turning one format into another is mechanical.

These are all examples of the kind of work that you could train an 18-year-old intern to do for you, while keeping all of your published work 100% original, personal, and in your authentic voice. If your ideas are good and intelligent, the published items will be good and intelligent. If your ideas are bad and stupid, the published items will be bad and stupid. There is no cheating here on anything that matters.

That’s what the AIs are right now: Good, but not genius, 18-year-old interns who are willing to work day and night for an extremely small salary—around $20/mo, or maybe ~$100/mo if you’re doing this stuff every day with the API.

In a capitalist economy, if it’s possible and it produces an edge, someone will do it. Smart writers must learn how to use their new interns correctly—not too much, but also not too little. I think the correct heuristic is to offload as much of the mechanical labor as possible.

Following this heuristic to its limit, could I build a software system that handles all of the mechanical labors of a modern internet writer, in one organized set of pipelines? At the limit, all I would do is live, read, think, and write down the essential ideas, arguments, experiences, concepts, implications, and conclusions that I discover in my research and life—then everything else is lightly edited, chopped up, formatted, and delivered over multiple public channels. All executed automatically through a series of scripts and finely-optimized LLM calls.
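As a rough sketch of what one such pipeline might look like (the stage names, prompt wording, and the stubbed-out `call_llm` function below are all hypothetical, standing in for real LLM API calls to whatever provider you use):

```python
# A minimal sketch of a mechanical-labor pipeline for a writer.
# call_llm is a hypothetical stub standing in for a real LLM API call
# (a chat-completion request to your provider of choice).

def call_llm(instruction: str, text: str) -> str:
    """Placeholder: send `instruction` plus `text` to an LLM, return its reply."""
    return text  # stub; a real implementation would call the provider's API


def fix_spelling_and_grammar(draft: str) -> str:
    # Mechanical stage 1: correctness only, no new ideas.
    return call_llm(
        "Correct spelling and grammar only. Do not add, remove, "
        "or rephrase any ideas. Preserve the author's voice.",
        draft,
    )


def to_video_outline(draft: str) -> str:
    # Mechanical stage 2: format conversion, using only the author's words.
    return call_llm(
        "Convert this essay into a bulleted video outline, "
        "using only sentences and phrases from the original.",
        draft,
    )


def publish_pipeline(draft: str) -> dict:
    """Run one draft through every mechanical stage; return each output format."""
    clean = fix_spelling_and_grammar(draft)
    return {
        "essay": clean,
        "video_outline": to_video_outline(clean),
    }


outputs = publish_pipeline("My raw draft goes here.")
print(sorted(outputs))  # the available output channels
```

Each stage is just a script wrapping one finely-tuned LLM call, and new channels (lecture notes, social posts, translations) are added as new functions on the same input.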

Everything is 100% composed of what I input—my ideas, my words, my style. Nothing more than me, and nothing less than me. It’s only the non-human legwork that’s being radically streamlined.

I would never publish anything under my own name that is not, in fact, my work.

But this authenticity and fidelity is not necessarily anti-technological. It’s a matter of tasteful prompt-engineering. In other words, whereas unscrupulous copywriters might want LLM tools to churn out a thousand generic blog posts an hour, thoughtful writers will want LLM tools to not plagiarize and not add anything at all. This is what I’ve been playing with for months now: Designing prompts and pipelines of prompts that can change my own original ideas and text as little as possible, while executing the kind of valuable labor that writers have outsourced to editors and assistants for hundreds of years.
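To make that concrete, here is a hedged example of the kind of “change as little as possible” instruction such a pipeline might use, paired with a crude fidelity check; the prompt wording and the `unchanged_ratio` helper are my illustrations, not the author’s actual prompts:

```python
# An illustrative "minimal-change" editorial prompt: the model is
# constrained to fix mechanics without adding or removing any ideas.
MINIMAL_EDIT_PROMPT = """You are a copy editor, not a co-author.
Fix spelling, grammar, and punctuation in the text below.
Do NOT add ideas, remove ideas, or rephrase sentences.
If a sentence is already correct, return it unchanged."""


def unchanged_ratio(original: str, edited: str) -> float:
    """Crude fidelity check: fraction of the original's words kept
    verbatim, position by position. A faithful edit stays near 1.0."""
    orig_words = original.split()
    kept = sum(1 for a, b in zip(orig_words, edited.split()) if a == b)
    return kept / len(orig_words) if orig_words else 1.0


# A faithful pass changes nothing it doesn't have to:
print(unchanged_ratio("the quick brown fox", "the quick brown fox"))  # 1.0
```

A check like this lets you audit each pipeline stage automatically: if a stage’s fidelity score drops, the prompt is rewriting rather than editing, and needs tightening.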

It’s also a matter of fine-tuning the models, which is something I haven’t played with yet. Though I do think that’s going to be an important part of the pipeline, which I’ll start testing soon.

But imagine this pipeline is completely built out. Assume it’s only half as good as I sketched it. Any writer who uses this pipeline is going to be able to publish much more, and much better, than those who don’t.

Then they’ll start using AI to build software tools that are custom fit to the needs of their audience.

By the way, whoever solves the application of AI to publishing efficiency can turn that into a paid software product in its own right. That’s actually what I’ve been working on specifically over the past couple of weeks—my second piece of software. I’ve been obsessed; it’s the deepest rabbit hole I’ve fallen into since I got into Urbit.

Right now I actually have a working web app—with login, authentication, a database, and everything—where each page is one of the custom pipelines I’ve built for editorial tasks. The outputs are much, much better than anything I’ve found anywhere else, simply because output quality is extremely domain-specific and requires a lot of tinkering for custom use cases.