Notes
Before we dive into the templates and commands, here are some notes about Kubernetes that you should keep in mind:
A cluster is a set of nodes
The master (control plane) node has:
- api server (the frontend for Kubernetes)
- etcd (key-value store for the data used to manage the cluster)
- scheduler (assigns containers to nodes)
- controller manager (brings containers back up when they go down)
A worker node has:
- kubelet (agent on each node in the cluster that ensures containers are running on the nodes as expected)
- container runtime (software to run the containers, e.g. Docker)
Containers in a multi-container pod can communicate with each other via localhost
Resources in a namespace can refer to each other by their names
Pods have a 1-to-1 relationship with containers
ENTRYPOINT in Docker -> command in Kubernetes
CMD in Docker -> args in Kubernetes
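For instance, a minimal sketch (image and values are assumptions) of how the two map onto a pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: sleeper
spec:
  containers:
  - name: sleeper
    image: ubuntu
    command: ["sleep"]  # replaces the image's ENTRYPOINT
    args: ["10"]        # replaces the image's CMD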
kube-system - resources for Kubernetes' internal applications
kube-public - resources to be made available to all users
The kube-apiserver manifest path is /etc/kubernetes/manifests/kube-apiserver.yaml
To check settings passed to the kube-apiserver: ps aux | grep authorization
To view API resources and their short names:
kubectl api-resources
To enable alpha versions: --runtime-config=api/version
Run watch crictl ps to wait for the kube-apiserver to come back online
To handle API deprecations (requires the kubectl convert plugin):
kubectl convert -f <old-file> --output-version <new-api-version> > <new-file>
Now, let's dive into the templates and commands you can use for resources.
Pod
A single instance of an application (the smallest object we can create in Kubernetes)
We scale pods up or down
kubectl get pods -o wide
kubectl get pods -A
kubectl label pod/<pod-name> <label>=<value>
kubectl get pods,svc
kubectl get pods --no-headers | wc -l
kubectl run <pod-name> --image=<image-name> -n <namespace> --dry-run=client -o yaml > test.yaml
kubectl set image pod <pod-name> <container-name>=<image>
kubectl get pod <pod-name> -o yaml > pod-definition.yaml
kubectl exec -it <pod-name> -- <command-to-run>
kubectl replace --force -f app.yaml
kubectl apply -f test.yaml
kubectl edit pod <pod-name>
kubectl explain pods --recursive | less
kubectl explain pods --recursive | grep envFrom -A<number-of-lines>
kubectl get pods --selector <label>=<value>
kubectl get pods --selector <label>=<value>,<label2>=<value2>
- Selectors match the labels described, i.e. the labels on a ReplicaSet or Service must match the labels of a pod
- Annotations record details for informational purposes, e.g. build info or the tool used
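A minimal pod definition tying these together (name, labels, and annotation values are assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: webapp
  labels:
    app: webapp          # used by selectors
  annotations:
    buildVersion: "1.0"  # informational only
spec:
  containers:
  - name: nginx
    image: nginx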
Replication Controller and ReplicaSet
ReplicationController is in v1 while ReplicaSet is in apps/v1
A ReplicaSet uses a selector to determine which pods to monitor and manage, including pre-existing pods (see the sketch below)
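A minimal ReplicaSet sketch (names and labels assumed); the selector must match the pod template's labels:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp      # must match the template labels below
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: nginx
        image: nginx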
kubectl create -f definition.yaml
kubectl get replicationcontroller
kubectl get rs
kubectl delete replicaset <replicaset-name>
kubectl edit replicaset <replicaset-name>
kubectl set image rs <replica-set> <container-name>=<image>
kubectl describe replicaset <replicaset-name>
kubectl apply -f definition.yaml
kubectl replace -f definition.yaml
kubectl scale --replicas=10 -f definition.yaml
kubectl scale --replicas=0 replicaset/<replicaset-name>
Deployments
kubectl create deployment nginx --image=nginx
kubectl scale deploy/webapp --replicas=3
kubectl get deploy
kubectl get deploy -o wide
kubectl delete deployment <deployment-name>
kubectl create deploy redis-deploy --image=redis --replicas=2 -n dev-ns
- Deployment strategies (see the sketch after this list):
- Recreate – Remove all application instances running on the older version, then bring up instances running on the newer version
- Rolling update (default strategy) – Take down one application instance and bring up a new one, one by one, until the newer version is running everywhere
- A new ReplicaSet is created under the hood when we do deployment upgrades
- We use the --record flag to save the commands used to create/update deployments
- We use the --to-revision flag to roll back to a specific revision
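A sketch of where the strategy lives in a deployment spec (names and values assumed):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3
  strategy:
    type: RollingUpdate   # or Recreate
    rollingUpdate:
      maxUnavailable: 1   # how many pods may be down during the update
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: nginx
        image: nginx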
kubectl set image deploy/<deployment-name> <container-name>=<image>
kubectl set image deploy/<deployment-name> <container-name>=<image> --record
kubectl rollout restart deploy/<deployment-name>
kubectl rollout status deploy/<deployment-name>
kubectl rollout history deploy/<deployment-name>
kubectl rollout undo deploy/<deployment-name>
kubectl rollout undo deploy/<deployment-name> --to-revision=1
Namespaces
kubectl create ns <namespace-name>
kubectl config set-context $(kubectl config current-context) --namespace=<namespace-name>
Services
Enable communication between components within and outside the application
Types:
- NodePort (a NodePort sketch follows this list)
- NodePort definition file
- Maps a port on the node to a port on the pod
- The node's port can only be in the range 30000 to 32767
- Node port -> Service port -> Target port (the pod's port)
- Node port and target port are not mandatory; if not provided, the node port is allocated an available port in the range 30000 to 32767, while the target port is assumed to be the same as the service port
- Acts as a load balancer if we have multiple pods with the same label; it uses a random algorithm to select which pod to send requests to
- ClusterIP
- ClusterIP definition file
- The service is assigned an IP within the cluster, which other pods in the cluster use to access the service
- LoadBalancer
- Builds on top of NodePort and enables balancing of requests to the service's target applications
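A minimal NodePort service sketch for the list above (ports and labels assumed):
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp         # pods with this label receive the traffic
  ports:
  - port: 80            # service port
    targetPort: 8080    # pod's port; defaults to port if omitted
    nodePort: 30080     # 30000-32767; auto-allocated if omitted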
To connect to a service in a different namespace, we use the following syntax:
<service-name>.<namespace>.svc.<domain>
test-service.test-ns.svc.cluster.local
kubectl expose <resource-type> <resource-name> --type=<type> --port=<port> --target-port=<target-port> --name=<service-name>
kubectl create service <type> <service-name> --tcp=<port>:<target-port>
ConfigMap
kubectl create configmap app-config \
  --from-literal=APP_COLOR=RED \
  --from-literal=APP_TYPE=AAB \
  --from-literal=APP_ENV=PROD
kubectl create configmap app-config \
  --from-file=app_config.properties
kubectl create -f test-config-map.yaml
kubectl get configmaps
kubectl get cm
kubectl describe configmap
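A sketch of consuming the ConfigMap from a pod via envFrom (names assumed):
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: webapp
    image: nginx
    envFrom:
    - configMapRef:
        name: app-config   # injects all keys as environment variables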
Secrets
kubectl create secret generic app-secret \
  --from-literal=DB_HOST=mysql \
  --from-literal=DB_PORT=1000 \
  --from-literal=DB_NAME=test
kubectl create secret generic app-secret \
  --from-file=test.env
kubectl create -f test-secret.yaml
kubectl get secrets
kubectl get secret <secret-name> -o yaml
kubectl describe secrets
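Similarly, a sketch of consuming the secret, either all keys at once or a single key (names assumed):
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: webapp
    image: nginx
    envFrom:
    - secretRef:
        name: app-secret     # injects all keys
    env:
    - name: DB_HOST          # or inject a single key
      valueFrom:
        secretKeyRef:
          name: app-secret
          key: DB_HOST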
Security Context
We can add a security context at the pod level or the container level. If both are specified, the container level takes precedence.
Capabilities are supported only at the container level (see the sketch below).
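A sketch showing both levels (user IDs assumed); the container-level value wins, and capabilities appear only at the container level:
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    runAsUser: 1000          # pod level
  containers:
  - name: ubuntu
    image: ubuntu
    securityContext:
      runAsUser: 2000        # container level takes precedence
      capabilities:
        add: ["SYS_TIME"]    # container level only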
Service Account
kubectl create serviceaccount test-sa
kubectl get serviceaccount
kubectl describe serviceaccount test-sa
kubectl create token <serviceaccount-name>
Taints and Tolerations
kubectl taint nodes <node-name> app=red:<taint-effect>
kubectl taint nodes <node-name> app=red:<taint-effect>- (the trailing dash removes the taint)
Taint effects include: NoSchedule | NoExecute | PreferNoSchedule
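A pod tolerating the taint above might look like this sketch (values assumed):
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "red"
    effect: "NoSchedule"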
Node Selectors
kubectl label nodes <node-name> type=test
We cannot apply advanced filters, e.g. NOT or OR; a nodeSelector sketch follows.
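apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    type: test   # node must carry this label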
Node Affinity
Node affinity types:
- requiredDuringSchedulingIgnoredDuringExecution
- preferredDuringSchedulingIgnoredDuringExecution
- requiredDuringSchedulingRequiredDuringExecution (planned)
| | During Scheduling | During Execution |
|---|---|---|
| 1 | Required | Ignored |
| 2 | Preferred | Ignored |
| 3 | Required | Required |
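A node affinity sketch covering the operators nodeSelector cannot express (labels assumed):
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: type
            operator: In     # NotIn and Exists are also supported
            values:
            - test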
Multi-Container Pods
- Sidecar containers -> support the main container, e.g. a logging agent that ships logs to a log server
- Adapter containers -> process data for the main container, e.g. convert logs to a readable format before sending them to the logging server
- Ambassador containers -> proxy requests from the main container, e.g. send requests to the DB on the main container's behalf
- Init containers -> the process inside an init container must finish before the other containers start (see the sketch below)
- If the init container fails, the pod is restarted until the init container succeeds
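A sketch of an init container that must succeed before the main container starts (image and command assumed):
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  initContainers:
  - name: wait-for-db
    image: busybox
    command: ["sh", "-c", "until nslookup mydb; do sleep 2; done"]
  containers:
  - name: webapp
    image: nginx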
Observability
Readiness Probe:
- Performs a test to check whether the container is up before marking the container as ready
- For readiness, we can make HTTP calls, TCP calls, or run a command; when it succeeds, we mark the container as ready
Liveness Probe:
- Periodically tests whether the application inside the container is healthy
- For liveness, we can make HTTP calls, TCP calls, or run a command; when it fails, we mark the container as unhealthy and it is restarted (a combined sketch follows)
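A sketch with both probes on one container (paths, ports, and timings assumed):
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: webapp
    image: nginx
    readinessProbe:
      httpGet:
        path: /api/ready
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
    livenessProbe:
      tcpSocket:
        port: 8080
      failureThreshold: 3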
Logging:
Show logs
kubectl logs <pod-name>
Show live logs
kubectl logs -f <pod-name>
For multi-container pods we specify the container name
kubectl logs -f <pod-name> <container-name>
Metrics Server:
- We can have one metrics server per cluster
- Receives metrics from nodes and pods and stores them in memory (we cannot see historical data with the metrics server)
- To install it on a cluster, we clone the metrics server repo and run kubectl create -f <repo-url>
kubectl top node
kubectl top pod
Jobs and CronJobs
Job – tasks that run and exit once they have finished
kubectl get jobs
CronJob – to run jobs periodically (a sketch follows)
kubectl get cronjob
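A minimal CronJob sketch (schedule, image, and command assumed); note the job template nested inside:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: report-job
spec:
  schedule: "0 * * * *"    # every hour
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: busybox
            command: ["sh", "-c", "echo report done"]
          restartPolicy: Never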
Ingress
- Enables users to access the application through a single externally accessible URL that we can configure to route to different services within the cluster based on the URL path, while also implementing SSL
- We need to expose the ingress controller so it is accessible outside the cluster
- Ingress controller:
- Does not come with Kubernetes by default; we need to deploy it first
- Examples are Istio, NGINX, HAProxy
- Ingress resources:
- Rules and configs applied to the ingress controller to forward traffic to individual applications, via paths or via domain name
kubectl get ingress
kubectl create ingress <ingress-name> --rule="host/path=service-name:port"
kubectl create ingress world --rule=world.universe.mine/europe*=europe:80 --rule=world.universe.mine/asia*=asia:80 --class=nginx -n world
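The declarative equivalent of the command above, as a sketch (only the europe rule shown):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: world
  namespace: world
spec:
  ingressClassName: nginx
  rules:
  - host: world.universe.mine
    http:
      paths:
      - path: /europe
        pathType: Prefix
        backend:
          service:
            name: europe
            port:
              number: 80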
Network Policies
kubectl get netpol
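A network policy sketch allowing ingress to DB pods only from API pods (labels and port assumed):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
spec:
  podSelector:
    matchLabels:
      role: db            # policy applies to these pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: api       # only these pods may connect
    ports:
    - protocol: TCP
      port: 3306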
Storage
- Volumes
- Persistent Volumes
- Persistent Volume Claims
kubectl delete pvc <pvc-name>
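A sketch of a claim and a pod mounting it (size and paths assumed):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: webapp
    image: nginx
    volumeMounts:
    - name: data-volume
      mountPath: /data
  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: data-pvc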
Authentication
All user access is managed by the API server
We can store user credentials as:
# Add this to the kube-apiserver service or pod definition file (note: basic auth was removed in Kubernetes 1.19)
--basic-auth-file=user-credentials.csv
curl -v -k <api-url> -u "user:password"
# Add this to the kube-apiserver service or pod definition file
--token-auth-file=user-credentials.csv
curl -v -k <api-url> --header "Authorization: Bearer <TOKEN>"
We can use kubeconfig to manage which clusters we can access
kubectl config view --kubeconfig=my-custom-file
kubectl config use-context developer-development-playground
kubectl config get-contexts developer-development-playground
Authorization
Authorization modes:
- Node authorizer -> handles requests from nodes (the user should have a name prefixed with system:node:)
- Attribute-based access control -> associate users/groups of users with a set of permissions (difficult to manage)
- Role-based access control -> we define roles and associate users with specific roles
- Webhook -> outsource authorization to third-party tools
- AlwaysAllow -> allows all requests without doing authorization checks (default)
- AlwaysDeny -> denies all requests
On the kube-apiserver, we specify the modes to use: --authorization-mode=Node,RBAC,Webhook
Role-based access control:
kubectl get roles
kubectl create role test --verb=list,create --resource=pods
kubectl describe role <role-name>
kubectl get rolebindings
kubectl create rolebinding test-rb --role=test --user=user1 --group=group1
kubectl describe rolebinding <rolebinding-name>
kubectl auth can-i create deploy
kubectl auth can-i create deploy --as dev-user --namespace dev
kubectl api-resources --namespaced=false
We can create roles scoped to the cluster. We can also create cluster roles for namespace-scoped resources
Cluster role-based access control (declarative sketches follow):
kubectl create clusterrole test --verb=* --resource=*
kubectl create clusterrolebinding test-rb --clusterrole=test --user=user --group=group
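Declarative sketches of a role and binding equivalent to the commands above:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-rb
  namespace: default
subjects:
- kind: User
  name: user1
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: test
  apiGroup: rbac.authorization.k8s.io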
Admission Controllers
- Implement security measures to enforce how a cluster is used
- They can validate, mutate, or reject requests from users
- They can also perform operations in the backend
kube-apiserver -h | grep enable-admission-plugins
On the kube-apiserver, to enable an admission controller we update --enable-admission-plugins=NodeRestriction,NamespaceLifecycle
On the kube-apiserver, to disable an admission controller we update --disable-admission-plugins=DefaultStorageClass
We can create custom admission controllers:
- We use the MutatingAdmissionWebhook and ValidatingAdmissionWebhook admission controllers, configured to point to a server hosting our admission webhook service
- When a request is made, an AdmissionReview object is sent to the admission webhook server, which responds with an AdmissionReview object stating whether the request is allowed or not
- We then deploy our admission webhook server
- We then create a MutatingWebhookConfiguration or ValidatingWebhookConfiguration object that makes the cluster reach out to the service to validate or mutate requests (see the sketch below)
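A ValidatingWebhookConfiguration sketch pointing at such a webhook service (names and CA bundle assumed):
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-policy-webhook
webhooks:
- name: pod-policy.example.com
  clientConfig:
    service:
      namespace: webhook-ns
      name: webhook-service
    caBundle: <base64-encoded-ca-cert>
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  admissionReviewVersions: ["v1"]
  sideEffects: None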
Custom Resource Definition
An extension of the Kubernetes API that is not available in the default Kubernetes installation (see the sketch below)
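A minimal CRD sketch (group, kind, and fields assumed):
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: flighttickets.flights.example.com   # must be <plural>.<group>
spec:
  group: flights.example.com
  scope: Namespaced
  names:
    kind: FlightTicket
    singular: flightticket
    plural: flighttickets
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              from:
                type: string
              to:
                type: string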
Custom Controller
A process or code running in a loop, monitoring the Kubernetes cluster and listening to events on specific objects
We build the custom controller, then provide the kubeconfig file the controller needs to authenticate to the Kubernetes API
Thanks!