tl;dr: Brainstorm with an LLM, generate a README, plan the build, then execute using LLM codegen. Discrete loops. Then magic. ✩₊˚.⋆☾⋆⁺₊✧

My steps are:

  • Step 1: Brainstorming with a web search-enabled LLM
  • Step 2: LLM-generated Readme-Driven Development 
  • Step 3: LLM planning of build steps
  • Step 4: LLM thinking model generation of coding prompts
  • Step 5: Let the LLM directly edit, debug and commit the code 

Return to Step 1 to start new features. Iterate on later steps as needed.

My LLM Workflow

Step 1: Brainstorming with a web search-enabled LLM

I use Perplexity AI Pro for this phase as it runs internet searches and summarises the results with an LLM. I create a few chat threads grouped into a space with a custom system prompt. Dictation also works well to save on typing.
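To give a flavour, a space-level system prompt might look something like this (the wording is an illustrative sketch, not a Perplexity template):

```text
You are a technical research assistant helping me explore ideas for new
software features. Search the web for prior art, libraries and trade-offs.
Summarise findings as short bullet points with links to sources. Ask
clarifying questions before recommending an approach.
```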

The final step is to start a new thread where I ask it to generate a README.md for the feature we will build. 

Step 2: LLM-generated Readme-Driven Development 

Readme-driven development (RDD) involves creating a git branch and updating the README.md before writing any implementation logic. RDD forces me to think through the scope and outcomes. It also provides a blueprint that I add into the LLM context window in all later steps.
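The exact shape of the README matters less than writing one at all. A minimal skeleton along these lines works well (the feature name and section headings here are illustrative):

```markdown
# Feature: CSV Import

## Why
One paragraph on the problem this feature solves.

## Scope
- What is included
- What is explicitly excluded

## Outcomes
- Observable behaviours that the tests will verify
```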

Step 3: LLM planning of build steps

I start a fresh chat with a thinking LLM model and add the README file into the context window. I ask for an implementation plan with testable steps. Getting the LLM to stop writing code during this planning phase is often hard. I encourage it to write the plan as section headers with only bullet points. If the outline plan looks too complex, I trim down the README to describe a more basic prototype.
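A planning prompt along these lines keeps the model at outline level (my own phrasing, adjust to taste):

```text
Read the attached README.md. Produce an implementation plan as numbered
section headers, each with three to five bullet points. Every step must
end with a testable outcome. Do not write any code or pseudocode.
```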

If you distract the LLM with minor corrections, costs can run up, you can hit your limits, and the model becomes unfocused. If things go off track, I create a fresh chat, copy over the best content, and rephrase my prompt. See Andrej Karpathy's videos for an expert explanation of why fresh chats work so well.

Step 4: LLM thinking model generation of coding prompts

I create a fresh chat, insert the README file, and paste in just one step of the plan. I ask the LLM to generate an exact prompt describing the code to be written.

Getting the LLM to stop writing code during this phase is also tricky. I encourage it to write only text, not to implement the logic, to describe the tests to write, and to list which files need editing.
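For example, a step-to-prompt request might read as follows (again, illustrative wording):

```text
Here is the README.md and one step of the implementation plan. Write a
precise coding prompt for another LLM: list the files to edit, describe
the functions to add in plain English, and name the unit tests to write.
Output text only - no code, no pseudocode.
```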

Step 5: Let the LLM directly edit, debug and commit the code

I use Aider, the open-source command-line tool that indexes the local git repo, edits files, debugs tests and commits the code. Do not worry: if a change is wrong, type /undo and Aider rolls back its last commit.

The LLM needs to see the test failures, so type /run followed by the command that compiles and runs the unit tests. Then tell the LLM to fix any issues. At this point, the LLM writes and debugs the code for you ✩₊˚.⋆☾⋆⁺₊✧
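A typical session looks roughly like this; the file names and test command are placeholders for whatever your project uses, and the trailing comments are annotations rather than input:

```text
$ aider src/importer.py tests/test_importer.py
> /run pytest tests/test_importer.py      # share the test output with the LLM
> Fix the failing tests.                  # Aider edits the files and commits
> /undo                                   # roll back the last Aider commit
```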

Update: GitHub Copilot Edit/Agent mode or Cursor can also write the code.

Update: I often have the LLM update the README file after we have finished coding, adding notes about the implementation details. Markdown on GitHub can render "Mermaid" diagrams, which LLMs find very easy to generate.
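For instance, the workflow in this post renders as a small Mermaid flowchart:

```mermaid
flowchart LR
    A[Brainstorm] --> B[Write README.md]
    B --> C[Plan build steps]
    C --> D[Generate coding prompts]
    D --> E[Aider edits, tests and commits]
    E --> A
```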

End Notes

This blog is inspired by Harper Reed's Blog and a conversation with Paul Underwood—many thanks to Paul for explaining his LLM codegen workflow to me.

My current tools are:

  • Perplexity AI Pro online using Claude 3.7 Sonnet and US-hosted DeepSeek for internet research and brainstorming
  • GitHub Copilot with Claude 3.7 Sonnet Thinking (in both Visual Studio Code and JetBrains IntelliJ)
  • Continue Plugin using DeepSeek-R1-Distill-Llama-70B hosted in the US by Together AI (in both Visual Studio Code and JetBrains IntelliJ)
  • aider for direct code editing using both Claude Sonnet and DeepSeek-R1-Distill-Llama-70B hosted in the US by Together AI

aider defaults to Claude Sonnet. You can find my settings for a US-hosted DeepSeek at gist.github.com/simbo1905
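I will not reproduce the gist here, but for orientation an Aider model override lives in .aider.conf.yml. The sketch below guesses at the Together AI model route via LiteLLM and at the API key variable, so check the gist and the Aider docs for the real values:

```yaml
# .aider.conf.yml - illustrative sketch only; the model string and the
# API key variable name are assumptions, not copied from the gist
model: together_ai/deepseek-ai/DeepSeek-R1-Distill-Llama-70B
# export TOGETHERAI_API_KEY=... in your shell before launching aider
```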

End.