Kubernetes helps you run containerized applications reliably. One of the features that makes this possible is probes: health checks that tell Kubernetes whether an application is alive and ready to serve traffic. In this article we will explore what probes are, why they matter, and how to use them.

What Are Kubernetes Probes?

Kubernetes probes are diagnostic mechanisms that assess the health and readiness of containers within a cluster. They function as automated health checks, ensuring that your application remains operational, is prepared to handle traffic, and starts correctly. The three primary types of probes are:

  1. Liveness Probe: Verifies if the container is running and responsive. If it fails, Kubernetes restarts the container to recover it.
  2. Readiness Probe: Confirms if the container is ready to accept traffic. If it fails, Kubernetes stops routing traffic to the container until it is ready.
  3. Startup Probe: Ensures the application has sufficient time to initialize before other probes begin, which is essential for applications with longer startup times.

These probes serve as critical safeguards, preventing failures and ensuring seamless operation.
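
A probe is defined by a handler plus timing parameters. This article uses httpGet handlers throughout, but Kubernetes also supports tcpSocket and exec handlers; the fragment below is only a minimal sketch, and the paths, port, and command are illustrative:

livenessProbe:
  httpGet:             # healthy if the endpoint returns a 2xx or 3xx status
    path: /healthz
    port: 8080
readinessProbe:
  tcpSocket:           # ready if a TCP connection to the port can be opened
    port: 8080
startupProbe:
  exec:                # started once the command exits with status 0
    command: ["cat", "/tmp/app-started"]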

Why Do We Use Kubernetes Probes?

Probes are indispensable because they provide Kubernetes with visibility into application status, enabling automated responses to issues. Key reasons include:

  • Maintain Application Health: Liveness probes detect and address unresponsive states by restarting containers, minimizing downtime.
  • Manage Traffic Efficiently: Readiness probes ensure traffic is only directed to fully operational containers, enhancing user experience.
  • Facilitate Smooth Startups: Startup probes allow applications time to initialize without being prematurely flagged as unhealthy.
  • Enable Self-Healing: Probes allow Kubernetes to automatically recover or reschedule containers, reducing manual intervention.

In essence, probes enhance the reliability, scalability, and efficiency of Kubernetes deployments.

Probes in Action: A Simple Project

Let's see probes in action in a small Go project. We will create a simple Go API, containerize it with Docker, and deploy it to Kubernetes with liveness, readiness, and startup probes configured.

Project Structure

Consider a standard project structure:

k8s-probes-example/
├── Dockerfile
├── go.mod
├── k8s
│   ├── deployment.yaml
│   └── service.yaml
└── main.go

  • main.go: Contains the Go application with health and readiness endpoints.
  • Dockerfile: Specifies the Docker image build process.
  • k8s/deployment.yaml: Defines the Kubernetes deployment, including probes.
  • k8s/service.yaml: Configures network exposure for the application.

Creating the Simple Go API

First, create a directory named k8s-probes-example, then run the following command inside it to initialize a Go module:

go mod init k8s-probes-golang

Here is the main.go code for an API that simulates a delayed readiness state:

package main

import (
    "fmt"
    "log"
    "net/http"
    "sync"
    "time"
)

var (
    ready bool = false
    mu    sync.Mutex
)

func main() {
    // Simulate service readiness after 10 seconds
    go func() {
        time.Sleep(10 * time.Second)
        mu.Lock()
        ready = true
        mu.Unlock()
    }()

    // Handle root endpoint
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "Hello, Kubernetes!")
    })

    // Handle liveness probe
    http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
        fmt.Fprintln(w, "OK")
    })


    // Handle readiness probe
    http.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
        mu.Lock()
        defer mu.Unlock()
        if ready {
            w.WriteHeader(http.StatusOK)
            fmt.Fprintln(w, "Ready")
        } else {
            w.WriteHeader(http.StatusServiceUnavailable)
            fmt.Fprintln(w, "Not Ready")
        }
    })

    log.Println("Starting server on :8080")
    log.Fatal(http.ListenAndServe(":8080", nil))
}

This API runs on port 8080, offering endpoints for liveness (/healthz), readiness (/readyz), and a root check (/), with a 10-second delay before becoming ready.
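
Before containerizing anything, you can run the server directly and hit the endpoints from a second terminal; a quick local sanity check, assuming Go is installed:

go run main.go

# from another terminal:
curl http://localhost:8080/healthz   # OK
curl http://localhost:8080/readyz    # Not Ready at first, Ready after ~10 seconds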

Dockerization: Building and Pushing the Image

Use this Dockerfile to containerize the application:

# Use the official Golang image
FROM golang:1.24

WORKDIR /app

# Copy all Go source files
COPY . .

# Document the port the application listens on
EXPOSE 8080

# Run the Go application directly
CMD ["go", "run", "main.go"]

Steps:

  1. Build the Image:
docker build -t yourusername/k8s-probes-example:latest .

Replace yourusername with your Docker Hub username.

  2. Test Locally:
docker run -p 8080:8080 yourusername/k8s-probes-example:latest

Verify with:

  • curl http://localhost:8080/ → "Hello, Kubernetes!"
  • curl http://localhost:8080/healthz → "OK"
  • curl http://localhost:8080/readyz → Initially "Not Ready", then "Ready" after 10 seconds.
  3. Push to Registry:
docker push yourusername/k8s-probes-example:latest

Make sure you are logged in to Docker Hub (docker login) before pushing.

Deploying to Kubernetes

We now deploy the application, starting with the deployment configuration.

Kubernetes Deployment YAML (deployment.yaml)

Here is the deployment configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-probes-example
spec:
  replicas: 2
  selector:
    matchLabels:
      app: k8s-probes-example
  template:
    metadata:
      labels:
        app: k8s-probes-example
    spec:
      containers:
        - name: k8s-probes-example
          image: yourusername/k8s-probes-example:latest
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 10
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /readyz
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
            failureThreshold: 3
          startupProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
            failureThreshold: 30

Probes Explanation:
  • Liveness Probe:

    • httpGet.path: /healthz: Checks the /healthz endpoint to confirm the container is alive.
    • httpGet.port: 8080: Targets port 8080 for the HTTP request.
    • initialDelaySeconds: 15: Delays the first check by 15 seconds to allow startup.
    • periodSeconds: 10: Performs checks every 10 seconds.
    • failureThreshold: 3: Triggers a restart if the probe fails 3 consecutive times.
  • Readiness Probe:

    • httpGet.path: /readyz: Verifies readiness via the /readyz endpoint.
    • httpGet.port: 8080: Uses port 8080.
    • initialDelaySeconds: 5: Begins checking 5 seconds after startup.
    • periodSeconds: 5: Checks every 5 seconds.
    • failureThreshold: 3: Halts traffic if it fails 3 times until ready.
  • Startup Probe:

    • httpGet.path: /: Ensures startup success by checking the root path.
    • httpGet.port: 8080: Targets port 8080.
    • initialDelaySeconds: 5: Starts after 5 seconds.
    • periodSeconds: 5: Checks every 5 seconds during startup.
    • failureThreshold: 30: Permits up to 30 failures before declaring startup failure, accommodating longer initialization.
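
A note on the timing above: when a startup probe is defined, Kubernetes holds back the liveness and readiness probes until it succeeds. With periodSeconds: 5 and failureThreshold: 30, the application gets up to 5 × 30 = 150 seconds (after the initial 5-second delay) to finish starting before the container is killed and restarted.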

To expose the deployment, we also define a service.yaml that gives the application stable network access so clients can reach it reliably.

Kubernetes Service YAML (service.yaml)

Here is the service configuration:

apiVersion: v1
kind: Service
metadata:
  name: k8s-probes-example-service
spec:
  selector:
    app: k8s-probes-example
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: NodePort

Explanation of Service:
  • apiVersion: v1, kind: Service: Identifies this as a Kubernetes Service for network exposure.
  • metadata.name: Names the service k8s-probes-example-service for easy reference.
  • spec.selector: Links to pods with the label app: k8s-probes-example, ensuring traffic routes to the correct instances.
  • spec.ports:
    • protocol: TCP: Uses TCP for communication.
    • port: 80: External port for client access.
    • targetPort: 8080: Maps to the container’s listening port.
  • spec.type: NodePort: Exposes the service on a node port, making it accessible externally for testing purposes.

This service abstracts pod IPs, provides load balancing, and ensures stable access to the application.
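
Because the type is NodePort, Kubernetes also opens a port in the 30000-32767 range on every node. If your nodes are reachable (for example in a local minikube or kind cluster), you can look up that port and curl it directly; the node IP and assigned port will differ per cluster:

kubectl get service k8s-probes-example-service
# the PORT(S) column shows something like 80:<node-port>/TCP
curl http://<node-ip>:<node-port>/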

Deploying the Application

Deploy using:

kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml

Verify the deployment:

kubectl get deployments,pods,services
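
You can also watch the rollout and the readiness transition directly; for roughly the first 10 seconds each pod should show 0/1 in the READY column, then flip to 1/1 once /readyz starts returning 200:

kubectl rollout status deployment/k8s-probes-example
kubectl get pods -l app=k8s-probes-example -w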

Testing with curl After Deployment

After deployment, test the application with curl via port forwarding:

kubectl port-forward service/k8s-probes-example-service 8080:80

Run the following curl commands:

  • Check liveness:
curl http://localhost:8080/healthz

(expect "OK").

  • Check readiness:
curl http://localhost:8080/readyz

(initially "Not Ready", then "Ready" after 10 seconds).

  • Check root:
curl http://localhost:8080/

(expect "Hello, Kubernetes!").

Monitoring and Debugging

For detailed insights:

  • Describe a pod (its probe configuration and recent probe events appear near the end of the output):
kubectl describe pod <pod-name>
  • View its logs:
kubectl logs <pod-name>
  • Check events for a specific pod:
kubectl get events --field-selector involvedObject.name=<pod-name>
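
Probe failures also surface as events and restart counts. Two checks that are often handy; Unhealthy is the event reason the kubelet records when a liveness or readiness probe fails:

kubectl get events --field-selector reason=Unhealthy
kubectl get pods -l app=k8s-probes-example   # RESTARTS increases when the liveness probe fails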

Conclusion

Kubernetes probes help keep applications running smoothly. In this guide we built a simple Go API, containerized it, and deployed it to Kubernetes with liveness, readiness, and startup probes, plus a Service to expose it. With kubectl and curl you can verify that every piece behaves as expected. The same pattern, health endpoints in the application paired with probe configuration in the deployment, carries over to almost any service you run on Kubernetes.