Introduction:

Argo CD is a powerful GitOps tool that helps you declaratively manage your Kubernetes applications. But let’s be real, getting everything set up efficiently can sometimes feel like assembling IKEA furniture without a manual. 😅

If you want to automate deployments while keeping your Kubernetes configurations clean and versioned, you’re in the right place! Here are three game-changing hacks that I used in my implementation of Argo CD for my DevOps portfolio. These strategies should make your Argo CD experience seamless, scalable, and smooth. Let's dive right in.


Hack 1: Use Helm to Manage Kubernetes Templates Like a Pro

Why Helm?

Managing raw Kubernetes manifests can get messy, real fast. Helm acts as the package manager for Kubernetes, helping you:

  • Template and reuse configurations instead of duplicating YAML files.
  • Use parameterized values for environment-specific configurations.
  • Maintain a clear version history for deployments.

With Helm, you can define once, reuse everywhere. Need to deploy a new version? Just update the values.yaml. No more editing multiple YAML files manually!

A quick example of how it works:

Instead of manually writing multiple deployment files, Helm lets you define reusable chart templates inside the chart's templates directory:

# node-app-helm/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}-app
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-app
    spec:
      containers:
        - name: app
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
            - containerPort: 3000
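
The template above pulls its values from the chart's values.yaml, and Hack 2 below also touches Chart.yaml. For context, here's a minimal sketch of what those two files could look like (the repository name and version numbers are illustrative placeholders, not the exact contents of my chart):

# node-app-helm/values.yaml
image:
  repository: cloud-sky-ops/node-app  # hypothetical image repository
  tag: "latest"                       # overwritten by CI with the commit SHA (see Hack 2)

# node-app-helm/Chart.yaml
apiVersion: v2
name: node-app-helm
version: 1.0.0        # chart version
appVersion: "latest"  # overwritten by CI with the commit SHA (see Hack 2)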

If you need more info on what Helm has in store, you can read my introductory blog on Helm; it also covers the changes I made in my Argo CD project. Links to both are below:

Cruise-on-Kubernetes-with-Helm
Project Repository


Hack 2: Automate Helm Versioning in GitHub Actions using sed

The Problem

Argo CD tracks the main branch to determine the latest version of your app. But how do you automate updating the desired values in values.yaml and Chart.yaml every time a new Docker image is built?

The sed Solution

In the GitHub Actions workflow (build-and-run.yaml), we dynamically update values.yaml and Chart.yaml using sed. Here's how it works:

- name: Update Helm chart with new image tag
  run: |
    sed -i -E "s/(tag: \")([a-z0-9]*)(\")/tag: \"${{ github.sha }}\"/" helm-chart-directory/values.yaml
    sed -i -E "s/(appVersion: \")([a-z0-9.]*)(\")/appVersion: \"${{ github.sha }}\"/" helm-chart-directory/Chart.yaml

Breaking Down the Command 🔍

  • sed -i: Edits the file in place.
  • -E: Enables extended regular expressions, so the grouping syntax works as written.
  • [a-z0-9]* and [a-z0-9.]*: Match the existing SHA-like string; the * also allows a blank value, so the substitution works on a fresh chart.
  • The first command replaces the existing tag: "<old-sha>" in values.yaml with the new commit SHA from ${{ github.sha }}.
  • The second command updates appVersion in Chart.yaml with the same SHA.
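
To make the regex concrete, here's the line the first command targets in values.yaml, before and after a run (SHAs shortened for readability; the real value is the full 40-character commit SHA):

# values.yaml before the workflow runs
image:
  tag: "9f8e7d6"   # previous commit SHA (or blank on a fresh chart)

# values.yaml after sed substitutes ${{ github.sha }}
image:
  tag: "3b2a1c0"   # SHA of the commit that triggered the workflow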

It goes without saying that this is not the only way to achieve this change. But the point I wanna drive home is: automate this portion, so there's no manual dependency on merging a pull request to update the main branch.
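
For completeness, the sed (or yq) edits only take effect once they land back on the main branch that Argo CD tracks. A minimal sketch of the commit-and-push step (the step name and bot identity are my assumptions, not necessarily what build-and-run.yaml uses):

- name: Commit updated Helm chart back to main
  run: |
    # Assumes actions/checkout ran with default credentials
    # and the workflow has "contents: write" permission.
    git config user.name "github-actions[bot]"
    git config user.email "github-actions[bot]@users.noreply.github.com"
    git add helm-chart-directory/values.yaml helm-chart-directory/Chart.yaml
    git commit -m "chore: bump image tag to ${{ github.sha }}"
    git push origin main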

Other alternatives:

  • Use yq commands to make the desired changes, since you'd be dealing with YAML files. In my case the commands would look like:

IMAGE_TAG=${{ github.sha }} yq -i eval '.image.tag=env(IMAGE_TAG)' helm-chart-directory/values.yaml
IMAGE_TAG=${{ github.sha }} yq -i eval '.appVersion=env(IMAGE_TAG)' helm-chart-directory/Chart.yaml

  • Set the targetRevision field in application.yaml to a branch where the image tag is already available. This can be set using the --revision flag in the argocd CLI (a quick sketch follows after this list).

Example: this can work in a setup where you pre-test your image in a controlled environment before deploying it through Argo CD. Say your latest built image is number n, but the currently deployed image is n-1. You can store the image tag on a dedicated branch with a static name, something like ready-to-deploy, and update the tag manually or programmatically once testing passes.

  • You can manage a centralized repository for Argo CD to monitor. For this, you'd need to copy/sync Helm manifests from one repository to another.
  • The above approach works well for a large microservices architecture, since it keeps all Helm charts in one place. You can try out this GitHub Marketplace Action I created to sync files between repositories:

Sync Files Across Repositories GitHub Action
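
As a quick sketch of the targetRevision alternative above: once an application exists, the branch it tracks can be switched with a single argocd CLI call (the app and branch names here are illustrative):

argocd app set node-app --revision ready-to-deploy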

I'm working on doing the same for the next release of my project and will share my strategies and implementation in coming blogs. Do follow along for more advanced learnings from this venture.


Hack 3: Use Argo CD CLI for Helm Manifest Rendering & Deployment

Argo CD CLI

Along with Argo CD's powerful web UI, the CLI covers a broad set of use cases as well. In my implementation, it helped automate the creation of the Argo CD Application resource instead of hand-writing an application.yaml file. The commands below let you:

✅ Create applications programmatically
✅ Sync applications on-demand

How It Works

Post-install verification script

Once the Kubernetes cluster (Minikube) is running inside GitHub Actions and the Argo CD installation is complete, the workflow verifies that all the pods created in the argocd namespace are in a 'Running' state; if not, it waits and re-checks every 30 seconds:

- name: Check if all pods are in "Running" state
  run: |
    while true; do
      echo -e "Printing current status of Argo CD Pods:\n"
      kubectl get pods -n argocd
      POD_STATUSES=$(kubectl get pods -n argocd -o=jsonpath='{.items[*].status.phase}')
      echo "POD_STATUSES: $POD_STATUSES"
      if [[ $(echo "$POD_STATUSES" | tr ' ' '\n' | sort -u) == "Running" ]]; then
          echo "All pods are in Running state."
          break
      else
          echo "Waiting for all pods to be in Running state...Re-checking in 30 seconds"
          sleep 30
      fi
    done
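
Side note: kubectl also ships a built-in alternative to this loop. kubectl wait blocks until every pod reports Ready (a stricter check than the Running phase) and exits non-zero on timeout, so a hand-rolled loop isn't strictly necessary:

- name: Wait for Argo CD pods
  run: |
    kubectl wait pods --all -n argocd --for=condition=Ready --timeout=300s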

Log in to Argo CD after forwarding the port using kubectl:

  • Using the kubectl port-forward command shown below, the Argo CD server service is forwarded to port 8080 on the runner machine.
  • The & at the end of the command runs the process in the background (a detached state), so execution in the terminal is not halted.
  • The STDOUT logs are redirected to a file, port-forward.log, which can be used for troubleshooting if something goes south.
  • The yes command continuously outputs y, which ensures the login command gets a 'y' when prompted (Yes/No). The same can be achieved with echo "y" instead of yes.
  • The username is admin, and the password for the first login is fetched using the argocd CLI.

- name: Login to ArgoCD
  run: |
    kubectl port-forward svc/argocd-server 8080:443 -n argocd > port-forward.log 2>&1 &
    sleep 3
    if ! grep -q "Forwarding from" port-forward.log; then
        echo "Port forwarding failed, check logs."
        cat port-forward.log
        exit 1
    fi    
    yes | argocd login localhost:8080 --username admin --password $(argocd admin initial-password -n argocd | head -n 1)
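
If you'd rather skip the yes pipe entirely, argocd login also accepts an --insecure flag, which suppresses the certificate prompt for the API server's self-signed certificate:

argocd login localhost:8080 --username admin --password $(argocd admin initial-password -n argocd | head -n 1) --insecure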

Create an Argo CD application using Helm:

- name: Create ArgoCD Application
  run: |
    argocd app create <app-name> \
      --repo https://github.com/<username>/<repo-name>.git \
      --path <helm-chart-path> \
      --dest-server <destination-cluster-url> \
      --dest-namespace <namespace> \
      --sync-policy automated

After filling in the values for my project repository, the command looked like:

- name: Create ArgoCD Application
  run: |
    argocd app create node-app \
      --repo https://github.com/cloud-sky-ops/node-app.git \
      --path node-app-helm \
      --dest-server https://kubernetes.default.svc \
      --dest-namespace default \
      --sync-policy automated

Sync the application to deploy the latest changes:

This step isn't required when --sync-policy is set to automated; however, it's good to know because you might wanna disable auto-sync in production environments.
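
For reference, switching an existing application from automated to manual sync is a one-liner with the CLI:

argocd app set node-app --sync-policy none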

- name: Sync ArgoCD Application
  run: |
    argocd app sync node-app
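
After syncing, it's worth verifying the rollout before calling it done. argocd app wait blocks until the application reports a healthy status (or the timeout expires), which makes it a natural final step in CI:

- name: Wait for application health
  run: |
    argocd app wait node-app --health --timeout 300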

Wrapping Up:

By combining Helm templating, GitHub Actions automation, and the Argo CD CLI, we’ve just built a fully automated Kubernetes deployment pipeline. Here’s a quick recap:

✅ Use Helm to template Kubernetes manifests.
✅ Automate versioning updates with sed in GitHub Actions.
✅ Use the Argo CD CLI to create, manage, and sync applications.

These three hacks make Kubernetes deployments faster, more predictable, and entirely Git-driven. 🚀

Want to see all this in action? Check out the node-app repository and start building your own automated GitOps workflow today!

Made it till the end? Drop your thoughts, suggestions, and queries in the comment section down below. Thank You!!