While building LiveAPI, a tool where you can get all your backend APIs documented in a few minutes, I ran into a frustrating but interesting problem.
The LLM-powered backend would sometimes return malformed JSON, especially as prompts grew larger and responses more complex.
At first, I thought the issue was with how I was prompting the model.
I tried tightening the language, giving stricter output templates, and even attempted few-shot examples.
It helped a little, but the deeper the prompt stack got, and the larger the output, the more the LLM began to hallucinate—half-finished objects, dangling brackets, JSON inside Markdown code fences, single quotes, trailing commas, the works.
The JSON Cleanup Problem
Most of the broken responses looked like this:
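The original sample didn't make it into this version of the post, but a typical offender looked something like this (an illustrative reconstruction, not an actual response): the payload wrapped in a ```json fence, single quotes instead of double, a trailing comma, and a clipped ending:

```json
{'path': '/users', 'method': 'GET', 'params': [{'name': 'limit',
```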
So the issue wasn't about getting some response—it was about turning it into something parsable that I could pass to OpenAPI parsers or load in the frontend.
Even `json.Unmarshal()` in Go would just choke and return an error.
Why I Didn't Just Improve the Prompt
I get it—prompt engineering is the default reaction when LLMs go off the rails.
But this wasn't a few broken responses.
This was a systemic thing.
Anytime the prompt + context got large enough, the model started clipping endings or misformatting its output.
This wasn’t just ChatGPT on a bad day; as far as I can tell, it’s just how transformers behave under pressure.
So instead of trying to "prevent" the broken JSON, I decided to just fix it.
Enter json-repair
I stumbled across json-repair, a Go package that intelligently fixes broken JSON strings.
Just takes a string and tries to make a best-effort repair.
The project even lists all the kinds of common LLM junk it handles: basically a summary of every way an LLM can make JSON unusable.
The Fixer Script
So I wrote a quick script to go over all my generated JSON files, fix them, and save a new version.
Here's what it does:
- Looks into every `b.*` directory under a batch folder
- Reads the `controller_analysis.json` file
- Pulls out the first entry in the `llm_response` array (which is usually wrapped in Markdown code fences)
- Cleans it up and runs `jsonrepair.RepairJSON` on it
- Saves the fixed version as `fixed.json` in the same folder
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"strings"

	jsonrepair "github.com/RealAlexandreAI/json-repair"
)

func main() {
	basePath := "." // current directory

	entries, err := os.ReadDir(basePath)
	if err != nil {
		panic(err)
	}

	for _, entry := range entries {
		if !entry.IsDir() || !strings.HasPrefix(entry.Name(), "b.") {
			continue
		}

		jsonPath := filepath.Join(basePath, entry.Name(), "controller_analysis.json")
		fixedPath := filepath.Join(basePath, entry.Name(), "fixed.json")

		content, err := os.ReadFile(jsonPath)
		if err != nil {
			fmt.Printf("Skipping %s: %v\n", jsonPath, err)
			continue
		}

		var data map[string]interface{}
		if err := json.Unmarshal(content, &data); err != nil {
			fmt.Printf("Skipping %s: %v\n", jsonPath, err)
			continue
		}

		arr, ok := data["llm_response"].([]interface{})
		if !ok || len(arr) == 0 {
			fmt.Printf("No llm_response in %s\n", jsonPath)
			continue
		}

		raw, ok := arr[0].(string)
		if !ok {
			fmt.Printf("llm_response[0] is not a string in %s\n", jsonPath)
			continue
		}

		// Strip the Markdown code fences the LLM likes to wrap its JSON in.
		cleaned := strings.TrimPrefix(raw, "```json\n")
		cleaned = strings.TrimSuffix(cleaned, "```")
		cleaned = strings.TrimSpace(cleaned)

		fixed, err := jsonrepair.RepairJSON(cleaned)
		if err != nil {
			fmt.Printf("Repair error in %s: %v\n", jsonPath, err)
			continue
		}

		if err := os.WriteFile(fixedPath, []byte(fixed), 0644); err != nil {
			fmt.Printf("Write error in %s: %v\n", fixedPath, err)
			continue
		}

		fmt.Printf("Fixed JSON written to %s\n", fixedPath)
	}
}
It worked out beautifully.
Now...
Final Thoughts
When you’re building tools that rely on AI-generated content, you’re going to have to build in some guardrails.
Sometimes that’s at the prompt level, sometimes at the data level.
But often, it’s just about accepting the mess and writing good cleanup scripts.
`json-repair` isn't flashy.
It’s not backed by OpenAI or Hugging Face.
But it does the job better than anything else I’ve found.
If your LLM is being a bit too creative with its JSON, you know what to do.
I’ve been actively working on a super-convenient tool called LiveAPI.
LiveAPI helps you get all your backend APIs documented in a few minutes.
With LiveAPI, you can quickly generate interactive API documentation that allows users to execute APIs directly from the browser.
If you’re tired of manually creating docs for your APIs, this tool might just make your life easier.