A fully operational GitOps-native upgrade pipeline, designed for reuse and scalability
This is Part 5 of my GitOps Architecture series.
This series was originally written and published in a linear progression (Part 1 to 6).
On Dev.to, I’m republishing it starting from the final system design (Part 6), then tracing backward to how it was built — from system to source.
🧱 A GitOps-Native Upgrade Pipeline That Scales
This post introduces a fully operational GitOps-native promotion pipeline — designed for long-term reusability, modularity, and automation.
It’s not a concept — it runs in production.
And it became the foundation for my GitOps architecture.
📌 This is Part 5 of the series:
👉 Part 1: Why Argo CD Wasn't Enough
👉 Part 2: From Kro RGD to Full GitOps
👉 Part 3: Designing a Maintainable GitOps Repo
👉 Part 4: GitOps Promotion with Kargo
👉 ✅ Part 6: Designing a Maintainable GitOps Architecture (Start Here)
🚀 From Flowchart to Real-World Workflow
In the first four parts, I started with the limitations of Argo CD and worked toward a deployment model centered around Kro and instance.yaml.
In Part 4, I introduced a clean, Git-native promotion flow:
Image Tag → Git Commit → Argo Sync
But this post is where the system truly goes live.
This time, I implemented Kargo’s core building blocks and stitched them into a fully working GitOps flow:
- Warehouse: Tracks image updates
- Stage: Decides whether to promote
- PromotionTask: Executes the update → commit → push → sync pipeline
- ApplicationSet + annotation: Targets the correct ArgoCD app
Then I refactored the logic for maintainability by extracting promotion steps into a shared PromotionTask.
🏗 Warehouse — Per-Service Image Tracking
Each service has its own Warehouse to isolate update detection.
This lets each service operate with its own frequency and tag rules:
```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: Warehouse
metadata:
  name: frontend-dev-image
  namespace: develop
spec:
  freightCreationPolicy: Automatic
  interval: 5m0s
  subscriptions:
  - image:
      repoURL: docker.io/zxc204fghasd/wsp-web-server-v2
      imageSelectionStrategy: SemVer
      strictSemvers: true
      discoveryLimit: 20
```
Stage – From Embedded Logic to Task Delegation
I originally stuffed all promotion steps inside the Stage:
```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  annotations:
    kargo.akuity.io/color: lime
  name: frontend-dev-stage
  namespace: develop
spec:
  promotionTemplate:
    spec:
      steps:
      - uses: git-clone
        config:
          checkout:
          - branch: main
            path: ${{ vars.srcPath }}
          repoURL: ${{ vars.gitopsRepo }}
      - uses: yaml-parse
        config:
          outputs:
          - fromExpression: spec.values.deployment.tag
            name: oldTag
          path: ${{ vars.srcPath }}/develop/frontend/wsp-web-instance.yaml
      - as: update-image
        uses: yaml-update
        config:
          path: ${{ vars.srcPath }}/develop/frontend/wsp-web-instance.yaml
          updates:
          - key: spec.values.deployment.tag
            value: ${{ quote(imageFrom(vars.imageRepo, warehouse(vars.warehouseName)).Tag) }}
      - as: commit
        uses: git-commit
        config:
          messageFromSteps:
          - update-image
          path: ${{ vars.outPath }}
      - uses: git-push
        config:
          path: ${{ vars.outPath }}
      - uses: argocd-update
        config:
          apps:
          - name: frontend-dev-app
            sources:
            - desiredRevision: ${{ outputs.commit.commit }}
              repoURL: ${{ vars.gitopsRepo }}
      vars:
      - name: gitopsRepo
        value: https://your.git/repo.git
      - name: imageRepo
        value:
      - name: srcPath
        value: ./repo
      - name: outPath
        value: ./repo
      - name: targetBranch
        value: main
      - name: warehouseName
        value: frontend-dev-image
  requestedFreight:
  - origin:
      kind: Warehouse
      name: frontend-dev-image
    sources:
      direct: true
      stages: []
```
It worked, but maintaining this across services quickly became a nightmare: any logic change meant updating multiple files by hand.
So I refactored.
The Stage now focuses only on deciding when to promote, and delegates the promotion logic to a reusable PromotionTask:
```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: frontend-dev-stage
  namespace: develop
  annotations:
    kargo.akuity.io/color: lime
spec:
  requestedFreight:
  - origin:
      kind: Warehouse
      name: frontend-dev-image
    sources:
      direct: true
      stages: []
  promotionTemplate:
    spec:
      steps:
      - task:
          name: promote-kro-instance
```
This wasn’t just for readability.
It was the only way to make promotions modular, reusable, and maintainable.
PromotionTask — The Core of Modular Promotion Logic
PromotionTask contains all the steps needed to perform a promotion.
To make it reusable, I parameterized the task using variables:
- gitopsRepo: Git repo URL with token
- imageRepo: Docker image registry
- instancePath: Path to the service's instance.yaml
- warehouseName: Warehouse to track tags
- appName: Argo CD app to sync after promotion
Here's the full YAML:
```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: PromotionTask
metadata:
  name: promote-kro-instance
  namespace: develop
spec:
  vars:
  - name: gitopsRepo
    value: https://your.git/repo.git
  - name: imageRepo
    value:
  - name: instancePath
    value: develop/frontend/instance.yaml
  - name: warehouseName
    value: frontend-dev-image
  - name: appName
    value: frontend-dev-app
  steps:
  - uses: git-clone
    config:
      repoURL: ${{ vars.gitopsRepo }}
      checkout:
      - branch: main
        path: ./repo
  - uses: yaml-parse
    config:
      path: ./repo/${{ vars.instancePath }}
      outputs:
      - name: oldTag
        fromExpression: spec.values.deployment.tag
  - uses: yaml-update
    as: update-image
    config:
      path: ./repo/${{ vars.instancePath }}
      updates:
      - key: spec.values.deployment.tag
        value: ${{ quote(imageFrom(vars.imageRepo, warehouse(vars.warehouseName)).Tag) }}
  - uses: git-commit
    as: commit
    config:
      path: ./repo
      message: ${{ task.outputs['update-image'].commitMessage }}
  - uses: git-push
    config:
      path: ./repo
  - uses: argocd-update
    config:
      apps:
      - name: ${{ vars.appName }}
        sources:
        - repoURL: ${{ vars.gitopsRepo }}
          desiredRevision: ${{ task.outputs['commit'].commit }}
```
Why Break Out Tasks? Because Templates Are the Long-Term Answer
This wasn't about writing cleaner YAML.
It was about making the system scale.
What happens when you have 10 services, all running similar — but slightly different — promotion flows?
If you embed everything inside the Stage, you'll end up copy-pasting YAML everywhere.
When logic changes, you have to update every file manually.
That doesn't work at scale.
✅ The answer is: extract the logic into a reusable, parameterized PromotionTask.
With this design:
- Stage handles the decision: should we promote?
- Task handles the execution: how do we promote?
- The logic lives in one place, and can be reused across services
- A single task can power multiple services, just by changing a few variables
- If one service needs extra validation, just fork the task — it won’t affect the rest
This is how you make promotion logic modular, maintainable, and scalable — even when things get complicated.
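To make the reuse concrete, here is a minimal sketch of what a second Stage could look like when it calls the same task for another service. The backend names are hypothetical placeholders, and the sketch assumes that vars declared in the promotion template override the defaults baked into the PromotionTask:

```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: backend-dev-stage        # hypothetical second service
  namespace: develop
spec:
  requestedFreight:
  - origin:
      kind: Warehouse
      name: backend-dev-image    # this service's own Warehouse
    sources:
      direct: true
      stages: []
  promotionTemplate:
    spec:
      # promotion-level vars override the defaults declared in the PromotionTask
      vars:
      - name: instancePath
        value: develop/backend/instance.yaml
      - name: warehouseName
        value: backend-dev-image
      - name: appName
        value: backend-dev-app
      steps:
      - task:
          name: promote-kro-instance
```

The promotion logic itself never changes; only the identifiers do.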
Don’t Forget the Kargo Project — The Key to Automatic Promotion
One often-overlooked piece when working with Kargo is the Project resource.
Even if you’ve defined your Warehouse, Stage, and PromotionTask correctly, nothing will actually run unless you declare a Project.
In my design, every namespace has its own Project.
This separates environments (like develop and production) and scopes promotion logic clearly.
Here’s the Project I defined for the develop namespace:
```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: Project
metadata:
  name: develop
  namespace: develop
spec:
  promotionPolicies:
  - autoPromotionEnabled: true
    stage: frontend-dev-stage
```
This setup helped me achieve several things:
- Defined a dedicated Project for the develop environment, managing all its resources in one place
- Enabled autoPromotionEnabled: true, allowing Kargo to automatically push qualifying Freight to the appropriate Stage
- Made the entire flow (image tag → Freight → Stage → Task) fully automated, with zero manual triggers
⚠️ Pitfall
I ran into this myself.
Freight was getting created just fine, but Stage wasn’t promoting.
It turns out — I simply forgot to declare a Project.
Without a Project, Kargo will not run any promotion logic — no matter how well your YAML is structured.
Pitfalls I Hit (and How I Fixed Them)
These weren’t theoretical issues; they are all real problems I encountered and fixed.
image.tag unknown
In my early implementation, I used an invalid variable name like image.tag inside yaml-parse, and the whole promotion failed.
✅ Fix: Use values from Freight, or reference pre-defined variables via vars.
Freight was not created
This often happened when the Warehouse configuration was wrong — usually a bad repoURL or a tag that couldn’t be parsed properly.
✅ Fix: Make sure your image repo is correct, use SemVer, and always wrap expressions with quote().
YAML file not found
My yaml-parse step couldn’t find the file—not because the file was missing, but because the git-clone path didn’t match.
✅ Fix: Manually clone the repo to confirm the file structure, then double-check that your Stage uses the exact same path.
Git push failed (authentication)
I kept hitting auth errors on git-push. The actual reason? I forgot to include the GitLab token.
✅ Fix: Inject the token directly in the gitopsRepo URL like this: https://&lt;token&gt;@gitlab.com/....
argocd-update unauthorized
Even when promotion completed, the argocd-update step failed. It turned out the Application wasn’t authorized.
✅ Fix: Add the correct kargo.akuity.io/authorized-stage annotation to the Application metadata.
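For reference, this is roughly what that annotation looks like on the Application from the earlier examples (if the app is generated by an ApplicationSet, it belongs in the template’s metadata instead). The value follows the project:stage format; the argocd namespace below is an assumption about where your Applications live:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: frontend-dev-app
  namespace: argocd   # wherever your Argo CD Applications live
  annotations:
    # authorizes the frontend-dev-stage Stage in the develop project to update this app
    kargo.akuity.io/authorized-stage: develop:frontend-dev-stage
# spec (source, destination, syncPolicy) omitted for brevity
```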
Final Thoughts: From Design to Reality
In this post, I made the promotion pipeline real.
- Implemented the full Warehouse → Stage → PromotionTask chain
- Designed promotion logic to be reusable and maintainable
- Built a system that doesn’t just run once, but runs day after day
This was the shift from “YAML that works” to “a system that survives change.”
I haven’t added validation gates, PR mode, or Slack notifications yet — but I’ve left room for all of that.
My Stage structure supports conditions, and my Tasks are ready to evolve.
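For example, a validation gate could hang off the Stage’s verification block, which lets Kargo run an Argo Rollouts AnalysisTemplate after each promotion before the Freight is considered verified. This is only a sketch; the frontend-smoke-test template is a placeholder and doesn’t exist in my repo:

```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: frontend-dev-stage
  namespace: develop
spec:
  requestedFreight:
  - origin:
      kind: Warehouse
      name: frontend-dev-image
    sources:
      direct: true
      stages: []
  promotionTemplate:
    spec:
      steps:
      - task:
          name: promote-kro-instance
  # hypothetical validation gate: the Freight is only marked verified
  # if the referenced AnalysisTemplate succeeds after promotion
  verification:
    analysisTemplates:
    - name: frontend-smoke-test   # placeholder, not defined in this series
```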
If you’ve solved promotion workflows in a different way, I’d love to hear how you approached it.
Or if you’re currently stuck somewhere in your GitOps pipeline — feel free to drop a comment or message.
Maybe we’re solving the same thing from different angles.
The pitfalls I hit?
I hope you don’t have to hit them too.
💬 If this post helped clarify how to modularize Kargo promotion logic, give it a ❤️ or drop a comment.
I'm sharing more GitOps internals — next up is Part 4: the architectural reasoning behind this pipeline.