Hey guys, welcome to the "DevOps Hunter" blog! In this post I'd like to share what I've learned over time about Kubernetes commands and the tips and tricks that go with them.
- First, a few kubectl alias tricks I've collected
- Then, working with the etcd database, including backup and recovery shortcuts
- Finally, the kubectx and kubens command-line tools for easy context and namespace switching in the CLI
kubectl api-resources
We can sometimes hit an API version mismatch when an API version changes between releases. Running kubectl api-resources lets us examine which resources and API versions the current cluster serves.
How do you identify the certificate file used to authenticate 'apiserver'?
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep tls-cert
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt

The tls-cert-file flag points to the Kubernetes apiserver certificate file path.
How do you identify the certificate file used to authenticate 'kube-apiserver' as a client to ETCD server?
You can look into the kube-apiserver manifest file; the --etcd-certfile flag identifies the client certificate kube-apiserver presents to etcd.

cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep etcd-certfile
Do you have any alias tricks for Kubernetes CLI commands?
Yes, I have many, but here are the most commonly used Bash shell aliases.

# kubectl can be shortened to k, the most common alias
alias k='kubectl'
# List available objects; this alias works with many Kubernetes objects
alias kg='kubectl get'
# Describe any Kubernetes object
alias kdp='kubectl describe'
Looking into the logs
Kubernetes collects the logs from all the containers that run in a Pod.

# Look into the logs of any pod
alias kl='kubectl logs'
# Get into a pod's containers
alias kei='kubectl exec -it'
Realtime scenario: maintenance window on worker node
There are regular routine maintenance windows on worker nodes, perhaps for OS patching or other urgent work, and handling them correctly is an important activity for a Kubernetes administrator. When maintenance starts on node01:
alias k="kubectl"
k drain node01 --ignore-daemonsets
# Check which nodes the pods are scheduled on
k get po -o wide
# Check node status - observe that node01 STATUS = Ready,SchedulingDisabled
k get nodes
When maintenance on node01 completes, how do we release that node back to the ready state?
First make the node schedulable again using uncordon, then check the nodes:
k uncordon node01

The uncordon sub-command marks the node as schedulable, bringing it back to the ready state.

# Check pods and nodes
k get nodes,pods -o wide

Existing pods will not be re-scheduled back onto node01, but any newly created pods can be scheduled there.
Locking a node so it does not schedule any new pods
To make a node unschedulable without affecting its existing pods, use cordon:

k cordon node01
k get nodes -o wide

The cordon sub-command marks the node as unschedulable.
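The drain/uncordon/cordon flow above can be wrapped in a small helper. A minimal sketch, assuming a hypothetical `node_maintenance` function; it only prints each kubectl command (via echo) so you can review it first, and you would drop the echo to actually run it:

```shell
#!/usr/bin/env bash
# Hypothetical helper: print the maintenance command for a node and phase.
# The leading 'echo' keeps this a dry run; remove it to really call kubectl.
node_maintenance() {
  local node="$1" phase="$2"
  case "$phase" in
    start)  echo kubectl drain "$node" --ignore-daemonsets ;;
    finish) echo kubectl uncordon "$node" ;;
    lock)   echo kubectl cordon "$node" ;;
    *)      echo "usage: node_maintenance <node> start|finish|lock" >&2; return 1 ;;
  esac
}

node_maintenance node01 start    # prints: kubectl drain node01 --ignore-daemonsets
node_maintenance node01 finish   # prints: kubectl uncordon node01
```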
Kubernetes upgrade plan
Just as OS package managers let us upgrade packages, we can upgrade Kubernetes itself. But we need to be a little cautious: if there is an upgrade planned, first check the plan from the Kubernetes CLI:

kubeadm upgrade plan
How do you find the ETCD cluster address from the controlplane?
From the describe output you can identify the etcd address, which is present in the --advertise-client-urls value.
k describe po etcd-controlplane -n kube-system | grep -i advertise-client
Annotations:  kubeadm.kubernetes.io/etcd.advertise-client-urls: https://10.36.169.6:2379
      --advertise-client-urls=https://10.36.169.6:2379
How to get the version of etcd running on the Kubernetes Cluster?
Get the version of etcd by describing the etcd pod, which lives in the kube-system namespace.
k get po -n kube-system | grep etcd
etcd-controlplane    1/1    Running    0    22m

k describe po etcd-controlplane -n kube-system | grep -i image:
    Image:    k8s.gcr.io/etcd:3.5.3-0
Where is the ETCD server certificate file located?
The server certificate file location appears on the '--cert-file' line. To stop grep treating the leading -- as an option, escape it with a backslash:
k describe po etcd-controlplane -n kube-system | grep '\--cert-file'
      --cert-file=/etc/kubernetes/pki/etcd/server.crt

Alternative: another way is to get the certificate and key files of etcd from its manifest. etcd runs as a static pod, so its definition and configuration details live in the manifest file at /etc/kubernetes/manifests/etcd.yaml. To run an etcd backup we must pass the cert files and key files, so let's find those in the manifest file.
cat /etc/kubernetes/manifests/etcd.yaml |grep "\-file"
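To see what that grep matches without touching a live control plane, here is a self-contained sketch against a mock manifest fragment (the content mirrors the typical kubeadm file layout and is assumed for illustration):

```shell
# Write a mock fragment of /etc/kubernetes/manifests/etcd.yaml
# (the paths below are the usual kubeadm defaults, assumed for this demo)
cat > /tmp/etcd-demo.yaml <<'EOF'
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
EOF

# Same pattern as above: every flag ending in -file
grep "\-file" /tmp/etcd-demo.yaml
```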
Where is the ETCD CA Certificate file located?
CA certificate files are generally saved as ca.crt.
k describe po etcd-controlplane -n kube-system | grep -i ca.crt
      --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
Backup and Recovery of ETCD database
Back up the etcd database to a snapshot using the following command:
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /opt/snapshot-pre-boot.db

# Validate the snapshot was created in the /opt directory
ls -l /opt
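For routine backups it helps to timestamp the snapshot path. A hedged sketch: the `etcd_backup_cmd` helper below is hypothetical and only prints the etcdctl command (with the same certificate paths as above), so nothing touches etcd until you run the printed command yourself:

```shell
#!/usr/bin/env bash
# Hypothetical helper: build a timestamped etcdctl snapshot command.
# It prints the command instead of executing it, so it is safe to try anywhere.
etcd_backup_cmd() {
  local dest="/opt/snapshot-$(date +%Y%m%d-%H%M%S).db"
  echo "ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379" \
       "--cacert=/etc/kubernetes/pki/etcd/ca.crt" \
       "--cert=/etc/kubernetes/pki/etcd/server.crt" \
       "--key=/etc/kubernetes/pki/etcd/server.key" \
       "snapshot save ${dest}"
}
etcd_backup_cmd
```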
How to restore the etcd cluster database?
Use the same command, only with the restore option in place of save:

ETCDCTL_API=3 etcdctl --data-dir /var/lib/etcd-from-backup \
  snapshot restore /opt/snapshot-pre-boot.db

To know the number of clusters configured on the node you can use the following:
k config view
# To be specific to the cluster listing, use get-clusters
k config get-clusters
Kubernetes Tools
Your life will be easier if you know these two tools: kubectx and kubens, two handy customized command-line tools.

Using kubectx
kubectx examples:

sudo git clone https://github.com/ahmetb/kubectx /opt/kubectx
sudo ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx
kubectx -h
kubectx -c
kubectx
Download and setup of kubectx
kubens
Set up kubens and use it to switch between namespaces.

sudo ln -s /opt/kubectx/kubens /usr/local/bin/kubens
kubens
kubens -h
kubens kube-system
k get po
kubens -
k get po
Kubernetes namespace switching tool kubens: setup and execution
Network Tricks
To find which node weave-net is running on:

k get po -n kube-system -l name=weave-net -o wide
What is the DNS implementation in your Kubernetes Cluster?
To see the DNS details, use the label 'k8s-app=kube-dns'; querying pods and deployments with it gives the complete DNS implementation in the cluster:

k -n kube-system get po,deploy -l k8s-app=kube-dns
Finding Node info using jsonpath
To work with jsonpath you must first know what the output looks like in JSON format; then we can narrow down to the required field to extract.

k get nodes -o json
k get nodes -o json | jq
k get nodes -o json | jq -c 'paths' | grep InternalIP

To retrieve the InternalIP address of each node, first try it for one node, then switch to all nodes using '*'.
k get no -o jsonpath='{.items[0].status.addresses}' | jq
k get no -o jsonpath='{.items[*].status.addresses[0]}' | jq
k get no -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")]}'
k get no -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'
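The same InternalIP extraction can be reproduced with jq alone, which is handy for practicing the filter offline. A self-contained sketch against a mock of `kubectl get nodes -o json` (the address values are made up for illustration):

```shell
# Mock of 'kubectl get nodes -o json', trimmed to the fields we query
cat > /tmp/nodes.json <<'EOF'
{"items":[{"status":{"addresses":[
  {"type":"InternalIP","address":"10.36.169.6"},
  {"type":"Hostname","address":"controlplane"}]}}]}
EOF

# jq equivalent of the jsonpath filter above
jq -r '.items[].status.addresses[] | select(.type=="InternalIP") | .address' /tmp/nodes.json
# -> 10.36.169.6
```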
Kubectl autocomplete
Set up autocompletion in the bash shell if that is your current shell; the bash-completion package should be installed first.

source <(kubectl completion bash)

Let's add the above line to ~/.bashrc to enable autocompletion permanently:

echo "source <(kubectl completion bash)" >> ~/.bashrc
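If you also use the k alias from earlier, completion can be extended to cover it too; these are the lines suggested by the official kubectl documentation, shown here as a ~/.bashrc fragment (it needs kubectl installed to take effect):

```shell
# ~/.bashrc - kubectl completion for both kubectl and the k alias
source <(kubectl completion bash)
alias k=kubectl
complete -o default -F __start_kubectl k
```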