Kubernetes is an open-source container orchestration platform used to manage and automate the deployment and scaling of containerized applications. It has gained popularity in recent years thanks to its ability to provide a consistent experience across different cloud providers and on-premises environments.
The NGINX ingress controller is a production-grade ingress controller that runs NGINX Open Source in a Kubernetes environment. The daemon monitors Kubernetes ingress resources to discover requests for services that require ingress load balancing.
In this article, we’ll dig into the flexibility and simplicity of this ingress controller to implement several common use cases. You can find other ones in various articles (such as Kubernetes NGINX Ingress: 10 Useful Configuration Options), but none of them has both described and grouped the ones below, yet these are widely used for web applications in production.
These apply to several cloud providers, at least AWS, GCP and OVHcloud, except when a specific cloud provider is mentioned.
These are also fully compatible with one another, except when architectures differ (for example, TLS termination on the load balancer versus termination on the NGINX pods).
As future experience demands, we’ll expand this content with additional use cases, to keep it relevant.
Every section in the YAML snippets below, apart from ingress configuration, relates to configuring the NGINX ingress controller. This includes customizing its default configuration.
To start, make sure your Helm installation is aware of the chart using this command:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx && helm repo update
After creating or updating your custom nginx.helm.values.yml
file, deploy or update the Helm release using this command:
helm -n system upgrade --install ngx ingress-nginx/ingress-nginx --version 4.3.0 --create-namespace -f nginx.helm.values.yml
Replace 4.3.0 with the latest version found on ArtifactHub, and proceed according to your upgrade strategy.
By default, you have to specify the class in each of your ingresses:
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
But if you have a single ingress controller in your cluster, just configure it to be the default:
nginx.helm.values.yml
controller:
  watchIngressWithoutClass: true
No more need for the kubernetes.io/ingress.class: nginx
annotation. Ever.
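With that setting in place, a minimal ingress can omit any class reference entirely. A sketch, where all names and the host are hypothetical examples:

```yaml
# Minimal ingress relying on the default controller:
# no kubernetes.io/ingress.class annotation and no ingressClassName.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                  # hypothetical name
spec:
  rules:
    - host: app.example.com     # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-svc   # hypothetical backend service
                port:
                  number: 80
```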
By default with Kubernetes incoming traffic, SSL/TLS termination has to be handled by each target application, one by one. Another application means another TLS termination to handle, along with certificate management.
A simple yet powerful way of abstracting TLS handling is to terminate on the load balancer and use HTTP inside the cluster by default.
As a prerequisite, you have to request a public ACM certificate in AWS.
Once you have the certificate ARN, use it in the configuration below under the service.beta.kubernetes.io/aws-load-balancer-ssl-cert
annotation:
nginx.helm.values.yml
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-1:94xxxxxxx:certificate/2c0c2512-a829-4dd5-bc06-b3yyyyy
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https" # if you don't specify this annotation, the controller creates a TLS listener for all the service ports
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
By default, the NGINX ingress controller gives you neutral but boring error pages:
These can be replaced with nicely polished and animated ones, such as this one from tarampampam’s repository:
It has some nice side features, like automatic light/dark modes, and routing details that can be displayed for debugging purposes.
Examples from several themes are showcased here for everyone to pick from.
Once you have found your theme, configure your favorite ingress controller:
nginx.helm.values.yml
controller:
  config:
    custom-http-errors: 404,408,500,501,502,503,504,505
# Prepackaged default error pages from https://github.com/tarampampam/error-pages/wiki/Kubernetes-&-ingress-nginx
# several themes here: https://tarampampam.github.io/error-pages/
defaultBackend:
  enabled: true
  image:
    repository: ghcr.io/tarampampam/error-pages
    tag: 2.21 # latest as of 01/04/2023, see https://github.com/tarampampam/error-pages/pkgs/container/error-pages
  extraEnvs:
    - name: TEMPLATE_NAME
      value: lost-in-space # one of: app-down, cats, connection, ghost, hacker-terminal, l7-dark, l7-light, lost-in-space, matrix, noise, shuffle
    - name: SHOW_DETAILS # Optional: enables the output of additional information on error pages
      value: "false"
Once all your web routes are configured to handle SSL/TLS/HTTPS, HTTP routes have no reason to exist, and keeping them is even dangerous, security-wise.
Instead of disabling the port, which can be annoying for your users, you can automatically redirect HTTP to HTTPS with this configuration:
nginx.helm.values.yml
controller:
  containerPort:
    http: 80
    https: 443
    tohttps: 2443 # from https://github.com/kubernetes/ingress-nginx/issues/8017
  service:
    enableHttp: true
    enableHttps: true
    targetPorts:
      http: tohttps # from https://github.com/kubernetes/ingress-nginx/issues/8017
      https: https
  # Will add custom configuration options to the NGINX ConfigMap
  config:
    # from https://github.com/kubernetes/ingress-nginx/issues/8017
    http-snippet: |
      server {
        listen 2443;
        return 308 https://$host$request_uri;
      }
    use-forwarded-headers: "true" # from https://github.com/kubernetes/ingress-nginx/issues/1957
When you terminate TLS on the load balancer or the ingress controller, the application doesn’t know about the incoming TLS calls: everything inside the cluster is HTTP. Hence, when an application needs to redirect you somewhere else inside the cluster, to another path, it will try to redirect you over HTTP, the same way it received the request.
For each ingress redirecting internally, apply this configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: auth-server
  annotations:
    nginx.ingress.kubernetes.io/proxy-redirect-from: http://
    nginx.ingress.kubernetes.io/proxy-redirect-to: https://
spec:
  # [...]
If you don’t have the option to terminate TLS on the load balancer, the NGINX ingress controller can be used to do the TLS termination. It would be too long to detail here; if needed you can find literature on the web, such as kubernetes + ingress + cert-manager + letsencrypt = https, or Installing an NGINX Ingress controller with a Let’s Encrypt certificate manager, or How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes.
When this scenario is in place, each ingress route gets its own certificate; it can be the same certificate for all of them. It can also be the same secret, if the services are in the same namespace.
But the default NGINX certificate, for non-configured routes, will still be the NGINX self-signed certificate.
To fix that, you can reuse an identical wildcard certificate that you already have somewhere in the cluster, generated using cert-manager. The NGINX ingress controller can be configured to target it, even from another namespace:
nginx.helm.values.yml
controller:
  extraArgs:
    default-ssl-certificate: "my-namespace/my-certificate"
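If you don’t already have such a wildcard certificate, a cert-manager Certificate along these lines could produce the my-namespace/my-certificate secret referenced above. This is a sketch: the issuer name and DNS name are hypothetical assumptions.

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-certificate
  namespace: my-namespace
spec:
  secretName: my-certificate   # the secret targeted by default-ssl-certificate
  issuerRef:
    name: letsencrypt-prod     # hypothetical ClusterIssuer
    kind: ClusterIssuer
  dnsNames:
    - "*.example.com"          # hypothetical wildcard domain
```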
By default, the NGINX ingress controller allows a maximum payload size of 1 MB.
For each ingress route where you need more, apply this configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
[...]
Eventually, the traffic of your web application will grow, and the initial ingress controller configuration may become obsolete.
One way to do easy autoscaling is using a DaemonSet, with one pod for each node:
nginx.helm.values.yml
controller:
  kind: DaemonSet # Deployment or DaemonSet
Another way is autoscaling on NGINX CPU and memory usage:
nginx.helm.values.yml
controller:
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 3
    targetCPUUtilizationPercentage: 200
    targetMemoryUtilizationPercentage: 200
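Note that HPA target percentages are computed against the controller pods’ resource requests, which is why targets above 100% are valid. A minimal sketch; the request values below are illustrative assumptions, to be sized for your own workload:

```yaml
controller:
  resources:
    requests:
      cpu: 100m    # a 200% CPU target then corresponds to ~200m of actual usage per pod
      memory: 90Mi # a 200% memory target then corresponds to ~180Mi per pod
```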
If this isn’t sufficient, gather your incoming connection metrics and autoscale based on them. This requires complex operations, so we simply refer you to the excellent article Autoscaling Ingress controllers in Kubernetes by Daniele Polencic.
Applications in a Kubernetes cluster should be mostly stateless, but often there is still an ephemeral session tied to the pod the client is reaching. If the user ends up on another pod, the session can be disrupted. In this case we need a “sticky session”.
Enabling sticky sessions is done on the ingress side:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    # sticky session, from the documentation: https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent" # change to "balanced" (default) to redistribute some sessions when scaling pods
    nginx.ingress.kubernetes.io/session-cookie-name: "name-distinguishing-services"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800" # in seconds, equal to 48h
[...]
By default with managed load balancers, the client IP seen by your application isn’t the real client’s IP.
You can have it set in the X-Real-Ip
request header with this NGINX ingress controller configuration:
For AWS:
nginx.helm.values.yml
controller:
  service:
    externalTrafficPolicy: "Local"
Or for OVHcloud, from the official documentation:
nginx.helm.values.yml
controller:
  service:
    externalTrafficPolicy: "Local"
    annotations:
      service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: "v2"
  config:
    use-proxy-protocol: "true"
    real-ip-header: "proxy_protocol"
    proxy-real-ip-cidr: "xx.yy.zz.aa/nn"
This will be effective on Helm install, but not always on upgrade, depending on the status of your release; sometimes you have to edit the NGINX LoadBalancer service to set the value in spec.externalTrafficPolicy, and then restart the NGINX pods to apply the config part (targeting the ConfigMap).
More information in the Kubernetes documentation.
You may have already wondered how to let your users know that you are currently deploying, to help them patiently wait for your website to be available again.
There are several lightweight ways to do that, and some of them involve the NGINX ingress controller.
DevOps Directive has done an awesome job in this field, described in the article Kubernetes Maintenance Page. The solution uses a dedicated deployment plus a service, without any custom Docker image, that you can target with any ingress during maintenance.
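As a rough sketch of the idea, during maintenance you could temporarily point an ingress at a dedicated maintenance service. The names and host below are hypothetical; see the linked article for the complete setup:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: maintenance
spec:
  rules:
    - host: app.example.com        # hypothetical host taken over during maintenance
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: maintenance-page   # hypothetical service serving the static page
                port:
                  number: 80
```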
In cases where you are dealing with a massively used ingress that is drowning out your NGINX logs, there is a solution. This often crops up in development environments, especially when a high-frequency tool like an APM server comes into play. These tools trigger frequent calls, even during idle user moments.
To combat this, leverage the nginx.ingress.kubernetes.io/enable-access-log
annotation:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apm-server
  labels:
    app: apm-server
  annotations:
    nginx.ingress.kubernetes.io/enable-access-log: "false"
spec:
  rules:
    - host: apm.my-app.com
We have covered several NGINX ingress controller use cases for web applications, which can be used in a wide variety of situations.
If you think one or two other common ones are missing here, don’t hesitate to comment in the section below 🤓
Illustrations generated locally by Automatic1111 using the Lyriel model.
This article was enhanced with the assistance of an AI language model to ensure clarity and accuracy in the content, as English isn’t my native language.