Streamline Your EKS Deployments with NGINX Ingress: A Comprehensive Guide

Managing multiple applications in Amazon EKS (Elastic Kubernetes Service) can become complex, especially when dealing with load balancing. Assigning a dedicated LoadBalancer to each application quickly becomes expensive and unwieldy. This is where the power of an NGINX Ingress Controller shines. Let's explore how it simplifies your EKS architecture and boosts efficiency.

Why Choose NGINX Ingress for Your EKS Cluster?

Imagine juggling multiple load balancers, each requiring individual configuration and management. That's a recipe for complexity and increased costs. NGINX Ingress acts as a central traffic manager, routing requests to your various applications through a single LoadBalancer. This centralized approach offers significant advantages:

  • Cost Savings: Reduce expenses by using a single LoadBalancer instead of many.
  • Simplified Architecture: A cleaner, more manageable infrastructure.
  • Enhanced Scalability: Easily handle growing traffic demands.
  • Flexibility: Support for both HTTP and TCP routing.

What You'll Learn

This guide will walk you through setting up and configuring NGINX Ingress in your EKS cluster, covering:

  • Deploying NGINX Ingress using Helm
  • Configuring HTTP routing with Ingress resources
  • Enabling TCP routing for non-HTTP services (like databases)
  • Integrating with Cloudflare for DNS management and enhanced security

Prerequisites: Getting Started

Before we begin, ensure you have the following:

  • An active EKS cluster: With kubectl, eksctl, and Helm installed.
  • A Cloudflare account: With a domain you wish to manage.
  • Applications deployed in Kubernetes: Ready to receive traffic.

Installing the NGINX Ingress Controller with Helm

Helm is a package manager for Kubernetes, simplifying the deployment process. Let's install the NGINX Ingress Controller:

  1. Add the ingress-nginx Helm repository and refresh your local chart index:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
  2. Install the Ingress Controller: This command creates a namespace called ingress-nginx and deploys the controller as a LoadBalancer service.

helm install ingress-nginx ingress-nginx/ingress-nginx --create-namespace --namespace ingress-nginx --set controller.service.type=LoadBalancer

This creates the necessary Kubernetes resources in the ingress-nginx namespace. The --set controller.service.type=LoadBalancer option is crucial: it tells Kubernetes to expose the controller through an AWS Elastic Load Balancer, which becomes the single entry point for all of your applications.
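Before moving on, it can help to confirm the controller is actually running. These are read-only checks against the namespace created above; the label selector is the one the ingress-nginx chart applies to its controller pods:

# Wait for the controller pod to become ready, then list everything in the namespace
kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=120s
kubectl get pods -n ingress-nginx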

Obtaining Your ELB Endpoint and Configuring Cloudflare DNS

  1. Get the LoadBalancer's external endpoint:
kubectl get svc -n ingress-nginx

Locate the EXTERNAL-IP value for the ingress-nginx-controller service. On AWS this is not a raw IP address but the DNS name of the Elastic Load Balancer that was provisioned for you (it ends in elb.amazonaws.com).
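If you only need the hostname itself (for scripting, or to paste into Cloudflare), kubectl's jsonpath output is handy. This is a small convenience sketch; it assumes the default service name used by the Helm chart and the standard Service status fields:

# Print just the ELB hostname assigned to the controller's LoadBalancer service
kubectl get svc ingress-nginx-controller -n ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'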

  2. Configure Cloudflare DNS: Add a new DNS record in your Cloudflare dashboard (or through the API, as sketched after this list):
  • Type: CNAME (the target is an ELB hostname, not an IP address)
  • Name: Your desired subdomain (e.g., app)
  • Target: The ELB hostname you obtained in the previous step.
  • Proxy: "DNS only" points clients straight at the ELB; "Proxied" routes traffic through Cloudflare for caching and DDoS protection.
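If you prefer to manage DNS from the command line, the same record can be created through Cloudflare's API. This is a hedged sketch, not a required step: CF_ZONE_ID and CF_API_TOKEN are placeholders for your own zone ID and API token, and the ELB hostname shown is illustrative.

# Create a proxied CNAME record pointing app.yourdomain.com at the ELB hostname
curl -X POST "https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records" \
  -H "Authorization: Bearer ${CF_API_TOKEN}" \
  -H "Content-Type: application/json" \
  --data '{"type":"CNAME","name":"app","content":"your-elb-hostname.elb.amazonaws.com","proxied":true}'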

Configuring HTTP Routing with Ingress Resources

Now let's configure HTTP routing. This example routes traffic to a service named your-service running on port 8080. Replace placeholders with your actual values.

Create a file named ingress.yaml with the following content:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: app.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: your-service
            port:
              number: 8080

Apply the configuration:

kubectl apply -f ingress.yaml

This sets up host-based routing; all requests to app.yourdomain.com will be directed to your-service.
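You can verify the route even before DNS propagates by sending a request straight to the ELB and overriding the Host header. A quick test sketch (substitute the ELB hostname you retrieved earlier):

# Ask the ELB for app.yourdomain.com explicitly, bypassing DNS
curl -H "Host: app.yourdomain.com" http://your-elb-hostname.elb.amazonaws.com/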

Adding TCP Support for Non-HTTP Services

NGINX Ingress also handles TCP traffic. Let's enable TCP routing for a PostgreSQL database service running on port 5432:

  1. Create a ConfigMap: This ConfigMap defines the mapping between TCP ports and Kubernetes services.

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "5432": "your-namespace/postgres-service:5432"
  2. Update the NGINX Ingress Controller: This step tells the controller to use the ConfigMap (the --reuse-values flag keeps the settings from your original install intact).

helm upgrade ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --reuse-values --set controller.extraArgs.tcp-services-configmap=ingress-nginx/tcp-services
  3. Expose the Port on the LoadBalancer: Edit the LoadBalancer service to expose port 5432.

kubectl edit svc ingress-nginx-controller -n ingress-nginx

Add the following under the ports section:

- name: postgres
  port: 5432
  targetPort: 5432
  protocol: TCP

Now, TCP connections to port 5432 on your LoadBalancer will be routed to your PostgreSQL service.
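To check the TCP path end to end, point a PostgreSQL client at the ELB hostname. This is a verification sketch only; the user and database names are placeholders for your own:

# Connect to PostgreSQL through the ELB and the ingress controller's TCP proxy
psql -h your-elb-hostname.elb.amazonaws.com -p 5432 -U your-user -d your-database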

Routing Overview: A Summary

The NGINX Ingress Controller acts as a reverse proxy, receiving incoming requests and forwarding them to the appropriate backend service based on the host, path, and port. This intelligent routing simplifies traffic management significantly.
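As an illustration of path-based routing, a single Ingress can also fan traffic out to several backends under one host. The manifest below is a sketch, not part of the setup above: api-service and its port 9090 are hypothetical names standing in for a second backend.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-fanout-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: app.yourdomain.com
    http:
      paths:
      - path: /api          # requests under /api go to the (hypothetical) API backend
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 9090
      - path: /             # everything else goes to the main service
        pathType: Prefix
        backend:
          service:
            name: your-service
            port:
              number: 8080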

Tips and Best Practices

  • Use ingressClassName: nginx: This ensures your Ingress resources use the NGINX controller.
  • Implement TLS/SSL: Use cert-manager for automated certificate management and secure your applications (see the sketch after this list).
  • Utilize Namespaces: Isolate applications for better organization and security.
  • Annotate Ingresses: Leverage annotations for advanced features like caching and request rewriting.
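As a rough sketch of the TLS tip above, the earlier Ingress could request a certificate through cert-manager like this. It assumes cert-manager is already installed and that a ClusterIssuer named letsencrypt-prod exists; both are assumptions outside the scope of this guide.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumed ClusterIssuer name
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.yourdomain.com
    secretName: app-yourdomain-tls                     # cert-manager stores the issued certificate here
  rules:
  - host: app.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: your-service
            port:
              number: 8080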

Conclusion: Simplified EKS Management

By leveraging the power of NGINX Ingress, you can significantly simplify the management of your EKS deployments. A single LoadBalancer, combined with the flexibility of HTTP and TCP routing, provides a cost-effective and scalable solution for all your application needs. Integrate with Cloudflare for enhanced security and a robust DNS setup, and you'll have a production-ready architecture optimized for efficiency.


💬 Your thoughts?
Did this help you? Have questions? Drop a comment below!

🔗 Read more
Full article on our blog with additional examples and resources.