Sunday, October 23, 2022

Kubernetes security - Service accounts

In this post we are going to learn what service accounts are in Kubernetes and how they are useful.

Prerequisites

Kubernetes cluster up and running

Let's take a scenario where we need to connect to the pods, nodes, deployments, and other resources in the Kubernetes cluster. You might be working with automated builds in CI/CD pipelines that need to talk to these resources, or a pod that drives planned application deployments.

If you're working in DevSecOps, you may need to handle the regular monthly OS patching schedule; in that case Kubernetes node maintenance may have to be driven from a pod.

In the above two scenarios there is a need for a service account inside the pod. When a Kubernetes cluster is created, a service account named 'default' is created along with it. We can also create our own service accounts using the following command:
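kubectl create serviceaccount my-sa

Here 'my-sa' is just an example name; kubectl get serviceaccounts will list it alongside the default account.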

Every service account is associated with a secret whose name begins with the service account name followed by '-token-'. For example, the default account has a secret named default-token-****.

Here I am going to work in a pod that needs authentication using the service account created by me. To make this happen, add a line in the pod definition: under the spec section, add serviceAccountName followed by its value. Then proceed to create a testing pod.
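For example, a minimal mypod-sa.yaml might look like this (a sketch; 'my-sa' is the example service account created above):

apiVersion: v1
kind: Pod
metadata:
  name: mypod-sa
spec:
  serviceAccountName: my-sa
  containers:
  - name: nginx
    image: nginx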

kubectl create -f mypod-sa.yaml

To see which pods are running in this cluster, run:

kubectl get pods

Let's go ahead and look at the description of the pod. Inside the pod there is a volumeMount configured, and it is accessible at a specific path in the container.
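For example:
kubectl describe pod mypod-sa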

kubectl exec -it mypod-sa -- /bin/bash
Inside the pod 
ls -lrt /var/run/secrets/kubernetes.io/serviceaccount

Here we can see the namespace, token, and certificates file.

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl https://<kubernetes-api-server> -k --header "Authorization: Bearer $TOKEN"
  

This request may fail: the service account present in this pod does not have the required permissions, so the API server returns a message with reason 'Forbidden'.

Now I am going to investigate why it has the permission issue. We need to create a Role and a RoleBinding associated with the service account.
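A minimal sketch (the role and binding names are illustrative; this grants the 'my-sa' service account permission to read pods):

kubectl create role pod-reader --verb=get,list --resource=pods
kubectl create rolebinding pod-reader-rb --role=pod-reader --serviceaccount=default:my-sa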

Now re-run the same curl command from inside the pod as before; this time the request should be authorized.




Every service account used to be associated with a secret; in the older Kubernetes model this was automatic, but from Kubernetes 1.24 onwards the token secret is no longer auto-created. I'm working on the Kubernetes 1.25 version, so let's see how it behaves now; here I am using the KillerCoda built-in Kubernetes cluster. I would like to create a ServiceAccount and a Secret object for it, then use them to run a deployment, which is a common requirement in most real-time projects.
Now I will create a custom serviceaccount, which will be privileged to work with deployments.
  kubectl create serviceaccount vt-deploy-sa --dry-run=client -o yaml
kubectl create serviceaccount vt-deploy-sa --dry-run=client -o yaml > vt-deploy-sa.yaml

# Before creating the sa, list the existing ones
kubectl get serviceaccounts
kubectl create -f vt-deploy-sa.yaml 

#Confirm it 
 kubectl get serviceaccounts
 kubectl describe sa vt-deploy-sa
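Since the token Secret is no longer auto-created, you can create a long-lived token for this service account yourself if you need one; a minimal sketch (the secret name is arbitrary):

apiVersion: v1
kind: Secret
metadata:
  name: vt-deploy-sa-token
  annotations:
    kubernetes.io/service-account.name: vt-deploy-sa
type: kubernetes.io/service-account-token

After creating it, the token controller populates the secret, and kubectl describe secret vt-deploy-sa-token shows the generated token.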
 
An important rule to understand here: one Pod can have only one serviceaccount, but one serviceaccount can be attached to multiple Pods. Let's examine what the default serviceaccount is authorized to do:
 kubectl auth can-i create  pods --as=system:serviceaccount:default:default
To understand the above command in depth: we checked the default serviceaccount by using the --as option with the identity system:serviceaccount:NAMESPACE:SERVICEACCOUNTNAME.
When we create our custom serviceaccount we can define our own policy describing what can be done, such as list, create, delete, and other actions. That needs a mapping, which is done by a Role and a RoleBinding. The Role is where I define the policies for the user, group, or serviceaccount it is about to bind to. Then I create the RoleBinding, which actually binds the serviceaccount to the Role holding those policies.
 kubectl create role deploy-admin --verb=create,list,get,delete --resource=deployments 
Here the deploy-admin role is defined with create, list, get, and delete actions on deployment objects.
kubectl create rolebinding deploy-rb --role=deploy-admin --serviceaccount=default:vt-deploy-sa
 
Here the serviceaccount is given as the namespace followed by the custom serviceaccount name (default:vt-deploy-sa).
Now let's try the deployment with serviceaccount.
  kubectl create deploy webapp --image=tomcat \
 --replicas=4 --dry-run=client -o yaml > webapp-deploy.yaml
 
 
 vi webapp-deploy.yaml
# remove the lines containing null values and add the serviceAccountName line
 apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 4
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      serviceAccountName: vt-deploy-sa
      containers:
      - image: tomcat
        name: tomcat

# Create deployment 
kubectl create -f webapp-deploy.yaml

# list the deploy with default and then custom serviceaccount
kubectl get deploy --as=system:serviceaccount:default:default
kubectl get deploy --as=system:serviceaccount:default:vt-deploy-sa
Now you can observe the difference between the default serviceaccount and custom serviceaccount capabilities.

Saturday, October 22, 2022

Kubernetes Security - ClusterRoles and ClusterRoleBindings

Hello, in this post we will explore ClusterRoles and ClusterRoleBindings on a Kubernetes cluster. A ClusterRoleBinding maps subjects to a ClusterRole; subjects are the users, groups, and service accounts to which the role's rules apply. In this post we will focus on 'User' specific rules.

Kubernetes User Access Control with ClusterRoleBindings to ClusterRole

 

Prerequisite: 

1. Kubernetes Cluster up and running 
2. Basic understanding of RBAC

Resources such as pods, nodes, and storage can be administered cluster-wide using a ClusterRole and ClusterRoleBinding assigned to a user.
 
To list the ClusterRoles in the Kubernetes cluster
kubectl get clusterrole
# Get the Count 
kubectl get clusterrole --no-headers |wc -l
To find the clusterrole and clusterrolebindings among the api-resources:
k api-resources |grep cluster 
To view the clusterrolebindings available in this Kubernetes cluster:
kubectl get clusterrolebindings 
# Get the Count 
kubectl get clusterrolebindings --no-headers |wc -l

Imperative way

A single imperative command can be used to create a clusterrole. Here is an example: create a role which has access to list the daemonsets.

# Initial check 
kubectl get ds --as krish 

kubectl create clusterrole list-ds-role --resource=daemonsets --verb=list
kubectl describe clusterrole list-ds-role

Create the clusterrolebinding list-ds-rb for user 'krish' to map the clusterrole list-ds-role created above.

kubectl create clusterrolebinding list-ds-rb --clusterrole=list-ds-role --user=krish 
After the ClusterRoleBinding is assigned to krish:
kubectl get ds --as krish 

Create ClusterRole, ClusterRoleBinding imperative way

Cleanup for ClusterRoles


Cleanup can be done in the reverse order: first delete the ClusterRoleBinding, then the clusterrole.
kubectl delete clusterrolebinding list-ds-rb 

kubectl delete clusterrole list-ds-role
Cleanup ClusterRole and ClusterRoleBindings


 
ClusterRoles are Kubernetes cluster-wide and are not part of any namespace. To know which users or groups are associated with the cluster-admin role, describe the ClusterRoleBinding; the subjects section reveals the users/groups.
kubectl describe clusterrolebinding cluster-admin
To inspect the privileges of the clusterrole 'cluster-admin', describe it; the PolicyRule section shows which resources can be used and what can be done with them. The asterisk '*' indicates 'all': '*.*' grants access to all resources in all API groups, and '*' in the verbs grants all actions such as create, delete, list, watch, and get. A new user, Mahi, joined the Kubernetes administrators team. She will be focusing on the nodes in the cluster. Let's create a ClusterRole and ClusterRoleBinding so that she gets access to the nodes.
 
Initially we will check whether she is able to access the nodes:
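kubectl get nodes --as mahi
# expected: Error from server (Forbidden) - nothing is bound to mahi yet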
kubectl create clusterrole node-admin \
  --verb=get,list,watch --resource=nodes --dry-run=client -o yaml > node-admin.yaml
kubectl apply -f node-admin.yaml
kubectl describe clusterrole node-admin
Let's bind the node-admin clusterrole to mahi user using clusterrolebinding.
kubectl create clusterrolebinding mahi-rb --clusterrole=node-admin --user=mahi --dry-run=client -o yaml > mahi-rb.yaml

kubectl create -f mahi-rb.yaml
kubectl describe clusterrolebindings mahi-rb

# Check mahi has access to nodes
kubectl --as mahi get nodes
A user's responsibilities grow with their time in the organization. Here the user Maheshwari (mahi) got the additional responsibility of maintaining the storage used by the Kubernetes cluster. Create the required ClusterRole and ClusterRoleBinding to allow her to access storage. Requirements:
ClusterRole: storage-admin
Resource: persistentvolumes
Resource: storageclasses
ClusterRoleBinding: mahi-storage-admin
ClusterRoleBinding Subject: mahi
ClusterRoleBinding Role: storage-admin
Now you know all the steps, so proceed with the clusterrole and clusterrolebinding:
 kubectl create clusterrole storage-admin \
  --verb=* --resource=persistentvolumes --resource=storageclasses \
  --dry-run=client -o yaml > storage-admin.yaml
  
kubectl apply -f storage-admin.yaml
kubectl describe clusterrole storage-admin

kubectl create clusterrolebinding mahi-storage-admin-rb \
 --clusterrole=storage-admin --user=mahi --dry-run=client -o yaml > mahi-storage-admin-rb.yaml  
 
 kubectl create -f mahi-storage-admin-rb.yaml
 kubectl describe clusterrolebinding mahi-storage-admin-rb
 
# Validate that authentication given for mahi user to access storage
kubectl get pv --as mahi
kubectl get sc --as mahi
Here the last execution, fetching the storageclasses as 'mahi', is successful.


Tuesday, October 18, 2022

Kubernetes Security - RBAC

My Understanding about RBAC in Kubernetes

RBAC stands for Role Based Access Control. In our Kubernetes system we have users that need to access the cluster and its resources, and a role categorizes their needs. Let's say our project has developers, admins, and presale users. We could define a role named "readers" that applies to all users, because reading from the system is a common need for everyone. We could define a role called "writers" and allow certain users to have it: developers who contribute to the application, or an admin user who controls it. We could also define a role called "administrators" for admin users; administrator-role users have full rights, such as deleting from the system.

A Role defines "what can be done?"

Roles are given to users and to application software. For software we use a service account: service accounts provide access control for the services that run the software, while user accounts provide access control for people.

RoleBindings - who can do it?

In Kubernetes, a RoleBinding is an object that lets users or groups use roles; the mapping is defined in the role-binding. The concept is simple, and Roles and RoleBindings live at the namespace level. For example, in an e-commerce application the developers live in the shopping-cart namespace, and the presale systems live in the presale namespace used by the presale team members. An administrator role is designed to provide cluster-level access, meaning all namespaces are accessible to admin-role users. If you have 100 developers working on a micro-service based application, you cannot create 100 individual permission grants; here comes the RBAC solution, where the Kubernetes admin creates the Role and RoleBinding once and all 100 users can use them, and if more developers are added it still works without any new configuration. Roles are namespace-constrained, while ClusterRoles cover cluster-wide Kubernetes resources. Let's see how it works with different users under roles with rolebindings. To check the authorization-mode of kube-apiserver-controlplane in the kube-system namespace:
kubectl get po kube-apiserver-controlplane \
  -n kube-system -o yaml |grep authoriz

How to get the roles present in a namespace?

Let's say we have created 'ecom' as the namespace and the application is 'ecom-app'.
apiVersion: v1
kind: List
metadata:
items:
- apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: developer
    namespace: ecom
  rules:
  - apiGroups:
    - ""
    resourceNames:
    - ecom-app
    resources:
    - pods
    verbs:
    - get
    - watch
    - create
    - delete
A Role can be changed as per project requirements: initially a role may only have access to work with pods, and later we can add another resource such as 'deployments'. For deployments you need a new rule whose apiGroups contains "apps", so that users holding this role get that access, as sketched below.
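A sketch of the additional rule (the verbs shown are illustrative):

  - apiGroups:
    - apps
    resources:
    - deployments
    verbs:
    - get
    - list
    - create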

Kubernetes Security - Group API

Kubernetes API Groups. What is the Kubernetes API? The Kubernetes API is a web service that uses HTTP and REST to serve API calls.

Let's see how it works using the 'curl' command, where we provide the URL and then the API object path.

Examples To view the Kubernetes version we can use :
curl https://controlplane:6443/version -k
To get the list of pods in the default namespace:
curl https://controlplane:6443/api/v1/pods -k
In this post we will get to know more about the API, specifically Kubernetes API groups. Each group serves a specific purpose: one API for health checks, others for metrics collection, logs, and so on. The metrics and health-check endpoints are used to monitor the health of the Kubernetes cluster, and the logs are collected by third-party systems, for example an ELK stack using the Logstash agent.

 The API are categorized into two : 
1. Core group /api 
2. Named group /apis 

All core group APIs are associated with core functionality such as namespaces, pods, replication controllers, nodes, endpoints, bindings, events, PVs, PVCs, configmaps, services, secrets, and so on. The named group APIs are more organized, and going forward all newer features are made available under these named groups: apps, extensions, networking.k8s.io, storage.k8s.io, certificates.k8s.io, and others.

 
To list the available API endpoints:
curl https://localhost:6443 -k
To list all apis names
curl https://localhost:6443/apis -k |grep "name" 

Monday, October 17, 2022

Kubernetes Security - Certificates API


Hello all! Welcome to new learning, the Kubernetes Certificates API, in the "Kubernetes Security" series.


Kubernetes Certificate API


We must be aware of what a certificate authority (CA) does and how it works in Kubernetes.
A CA server is the server that runs the certificates API.

Suppose a new Kubernetes admin joins your DevOps or DevSecOps team. How do we handle issuing them credentials?

Kubernetes automates signing a valid private/public key pair with the CA server; it performs the following steps:

1. Create CertificateSigningRequest object
2. Review Request
3. Approve Request
4. Share Certs to Users

Let's try how it works

a. Private key generation: a user Maheshwari (Mahi) wants to create certificate files. First a private key 'mahi.key' is generated with the RSA algorithm; the key size could be 2048 bits.
openssl genrsa -out mahi.key 2048
b. A Certificate Signing Request (CSR) can be created by providing the key and subject values; the result is stored into a .csr file with the following command:
openssl req -new -key mahi.key -subj "/CN=mahi" -out mahi.csr
c. A certificate manifest file can be created like any other Kubernetes object, using YAML, as mahi-csr.yaml. The kind is 'CertificateSigningRequest', and under the request field we add the CSR content, encoded with the 'base64' Linux command and with the newline characters removed:
cat mahi.csr | base64 |tr -d "\n"
Now prepare the CSR request manifestation using above outcome.
Filename: mahi-csr.yaml
---
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: mahi 
spec:
  groups:
  - system:authenticated
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZEQ0NBVHdDQVFBd0R6RU5NQXNHQTFVRUF3d0ViV0ZvYVRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRApnZ0VQQURDQ0FRb0NnZ0VCQUs0eDJyd3QzU2F0ZDhxSThEUzJzVDlTcngydHMrQm5Ic202d2lCdkt0Vk5IeXdKCis3Q2VHR09JdlpWN3hOQ08vRkRpT3FXY05HMzhseFR6R2pwWkdqUDNoR2RZdHU1WFNnRjlBbkFGTVZENHBnOVIKOVQzVFBjbU1Rem9ZVllMUE44c2Y5Z3pWdGIrRHV5YTRPd0dVYUNuOUdvaW0yYUV0MTYxOWpmUzRSeEJPVXpjagpFaS9DWlAvY1VUd2dLZWNpTHRKWHhvSGYxRDVuVUhVUFFPQ1JWbGtJSDNkRmZYVWZHSjg3bmpNMzJyRXBqY3gxCkNVZWgzRktLNVA3ZC8rdFB2TUFuNEQ5MzgvLzlvZjBuLzZDa0pSMnZNUStIbkgyK000azArcGVpaWNwSUxQRS8KZVZuNk41UXpUSk5sWldHdmVTMU9ZYzdBczhGa2Q2OXZKanJHcHZjQ0F3RUFBYUFBTUEwR0NTcUdTSWIzRFFFQgpDd1VBQTRJQkFRQXV0ZlpQTTlVODlqaFR5ZzhXSkdsRThlOStuWXZ2MjBJQ08wTVV3bVB6dWNvWU1NMlNiK0x5CmhiS0lod3pUVW9QTk91RGk5aEEwaElVS0tmdmxiNENTOHI1UmJneWdheDNMUHpENXlZS1ZZTGR5NmNYRW9EWmoKbUx5d1VpTHY4UnNKdjI3TUF4bEZoZnBrZXFodTZPVTJBaGlWR
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
  
Now let's create it with CertificateSigningRequest
kubectl create -f mahi-csr.yaml
You can see the CSR status using the following command:
kubectl get csr 
The CSR status can be any one of these values 'Approved', 'Issued', or 'Pending'

Kubernetes Certificates

Using the 'kubectl certificate' command, we Kubernetes administrators can review a CertificateSigningRequest and then decide whether to 'approve' or 'deny' the CSR. Before this we should recheck the status of the CSR with the 'kubectl get csr' command above.
To approve the CSR request which we prepared in the steps above for the mahi user:
kubectl certificate approve mahi 
If you think a request doesn't look good, you can reject it by denying it:
kubectl certificate deny agent-xyz
To get rid of an inappropriate user's CSR request we can delete the CSR:
kubectl delete csr agent-xyz 

kubectl get csr # To confirm it is deleted 
This approved certificate can be viewed in YAML format
kubectl get csr mahi -o yaml 
Copy the certificate from the above output; it is base64-encoded, so we need to decode it:
echo "copy paste the certificate value from above yaml output" | base64 --decode 
You will see the first and last lines mention BEGIN and END CERTIFICATE. All certificate operations are carried out by the Controller Manager; looking inside it, the CSR-APPROVING and CSR-SIGNING controllers are responsible for these specific tasks. Whoever signs certificates needs the root certificate and key of the CA, whose details we can see with:
cat /etc/kubernetes/manifests/kube-controller-manager.yaml 


Thursday, October 13, 2022

Kubernetes Security - Multiple Cluster with Multiple User Config

Hello guys! In this post we are going to explore kubeconfig, a special configuration that is part of Kubernetes security. We can configure multiple clusters and different users to access these clusters, and we can also configure users to have access to multiple clusters.

When we start working on a Kubernetes cluster, a config file is automatically generated for us.

To access a Kubernetes cluster using the certificate files generated for the admin user, the command looks like this:
kubectl get pods \
  --server controlplane:6443 \
  --client-key admin.key \
  --client-certificate admin.crt \
  --certificate-authority ca.crt
 
Passing all these TLS details (server, client-key, client-certificate, certificate-authority) in every kubectl command is a tedious process. Instead, we can move the TLS certificate file set into a config file, called a kubeconfig file. The usage is as follows:
kubectl get pods 
  --kubeconfig config 
Usually this config file is stored under .kube inside the home directory. If the config file is present at $HOME/.kube/ with the file name config, it is detected automatically by kubectl.

 

What does a kubeconfig contain?


The kubeconfig file has three sections: clusters, users, and contexts.

The clusters section defines multiple sets of Kubernetes clusters: environment-wise clusters such as development, testing, preprod, and prod, separate clusters for different organizational integrations, or clusters on different cloud providers, for example google-cluster or azure-cluster.

In the users section we can have an admin user, a developer user, and so on. These users may have different privileges on different cluster resources.

Finally, the contexts section maps the above two sections together into a context; here we get to know which user account will be used to access which cluster.

Remember, we are not going to create any new users or configure any kind of authorization in this kubeconfig. We use only existing users with their existing privileges, and define which user accesses which cluster. This way we don't have to specify the user certificates and server URL in each and every kubectl command.

The kubeconfig is in yaml format which basically have above mentioned three sections.
Kubernetes Configuration with different clusters map to Users


Filename: vybhava-config
apiVersion: v1
kind: Config

clusters:
- name: vybhava-prod-cluster
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443

- name: vybhava-dev-cluster
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443

- name: vybhava-gcp-cluster
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443

- name: vybhava-qa-cluster
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443

contexts:
- name: operations
  context:
    cluster: vybhava-prod-cluster
    user: kube-admin
    
- name: test-user@vybhava-dev-cluster
  context:
    cluster: vybhava-dev-cluster
    user: test-user

- name: gcp-user@vybhava-gcp-cluster
  context:
    cluster: vybhava-gcp-cluster
    user: gcp-user

- name: test-user@vybhava-prod-cluster
  context:
    cluster: vybhava-prod-cluster
    user: test-user

- name: research
  context:
    cluster: vybhava-qa-cluster
    user: dev-user

users:
- name: kube-admin
  user:
    client-certificate: /etc/kubernetes/pki/users/kube-admin/kube-admin.crt
    client-key: /etc/kubernetes/pki/users/kube-admin/kube-admin.key
- name: test-user
  user:
    client-certificate: /etc/kubernetes/pki/users/test-user/test-user.crt
    client-key: /etc/kubernetes/pki/users/test-user/test-user.key
- name: dev-user
  user:
    client-certificate: /etc/kubernetes/pki/users/dev-user/dev-user.crt
    client-key: /etc/kubernetes/pki/users/dev-user/dev-user.key
- name: gcp-user
  user:
    client-certificate: /etc/kubernetes/pki/users/gcp-user/gcp-user.crt
    client-key: /etc/kubernetes/pki/users/gcp-user/gcp-user.key

current-context: operations

To view the configuration of current cluster you must have the config in $HOME/.kube/config 

The content of the Kubernetes configuration can be viewed with the following command:
kubectl config view 
Kubernetes Cluster view when default config used


To view the newly created customized configuration, we need to specify the path to the "vybhava-config" file. Note that "vybhava-config" is available in the current directory.
kubectl config view --kubeconfig=vybhava-config


Know your Kubernetes cluster 

To check the list of clusters that exist in the default kubernetes config:
kubectl config get-clusters
Work with your customized config file vybhava-config to list its clusters:
kubectl config get-clusters --kubeconfig=vybhava-config

Knowing about Kubernetes cluster from Kube Config


KubeConfig user details

To check the list of users that exist in the default kubernetes config:
kubectl config get-users
Work with your customized config file vybhava-config to list its users:
kubectl config get-users --kubeconfig=vybhava-config


KubeConfig getting the users list

KubeConfig Context

Here each context maps a user to a cluster and is identified by a name; at the end of the configuration we can also see the current-context.

To find how many contexts there are, first for the default config:
kubectl config get-contexts
To identify which user is configured in the 'operations' context, use the 'get-contexts' option; the mapping is displayed as a table where the CURRENT context is marked with '*' in its column.
kubectl config --kubeconfig=vybhava-config get-contexts
Kubernetes Config getting Contexts using kubectl

In the context section we could also add a namespace field specific to a project module; for example, the production cluster can be mapped to an HR application that runs in the hr-finance or hr-hirings namespace, as sketched below.
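A sketch of such a context entry (the namespace value is illustrative):

- name: operations
  context:
    cluster: vybhava-prod-cluster
    user: kube-admin
    namespace: hr-finance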

Here we have executed all the possible choices for fetching the users, clusters, and contexts from the kubeconfig. Now let's try to set the context.
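For example, switching to the 'research' context defined in vybhava-config:

kubectl config --kubeconfig=vybhava-config use-context research
kubectl config --kubeconfig=vybhava-config current-context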


delete user

kubectl config --kubeconfig=vybhava-config get-users
kubectl config --kubeconfig=vybhava-config delete-user test-user
kubectl config --kubeconfig=vybhava-config get-users
Deletion of Users from Config


delete cluster

kubectl config --kubeconfig=vybhava-config get-clusters 
kubectl config --kubeconfig=vybhava-config delete-cluster vybhava-gcp-cluster
kubectl config --kubeconfig=vybhava-config get-clusters 
Kubernetes Cluster deletion from KubeConfig


delete context

kubectl config --kubeconfig=vybhava-config get-contexts 
kubectl config --kubeconfig=vybhava-config delete-context gcp-user@vybhava-gcp-cluster
kubectl config --kubeconfig=vybhava-config get-contexts 
Deletion of Context from Kube Config

The certificate files appear in the cluster section (certificate-authority) and the user section (client certificates). Best practice says that instead of using a relative admin.crt we must use the absolute path to the certificate files, such as /etc/kubernetes/pki/ca.crt here. One more way is the certificate-authority-data field, whose value is the base64-encoded content of the certificate file. Since the content is sensitive, we encode it with the base64 command ("base64 ca.crt"), and Kubernetes understands it automatically.
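A quick sketch of producing that value (GNU base64; -w 0 disables line wrapping):

base64 -w 0 /etc/kubernetes/pki/ca.crt

The output string becomes the value of the certificate-authority-data field, replacing the certificate-authority path.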

Monday, October 10, 2022

Kubernetes Security - TLS Keys and Certificates

Transport Layer Security  (TLS) Basics 

In the early days it was called Secure Sockets Layer (SSL); now it is renamed TLS. In this post we will explore TLS certificate files and their usage in the different Kubernetes cluster components. TLS certificates for the Kubernetes cluster components are fundamental to a highly available (HA) production configuration and to troubleshooting security issues at the user, application, and administration levels.

Public Key Infrastructure used in the Kubernetes Cluster Security

Certificate files

A certificate file is one half of a key pair consisting of a private key and a public key. The public key can be thought of as a lock that is visible to the public; certificate files typically have the extension .crt or .pem, such as server.crt, server.pem, client.crt, or client.pem. The private key is owned by the person who generated it and must not be distributed; it is used to unlock data locked with the public key, or to send data in encrypted form. Private key file extensions are usually .key or .pem; sample files are server.key or server-key.pem (named to distinguish it from the server's public certificate file), and client.key or client-key.pem.

There are three types of certificates:

  1. Server Certificates - Server side 
  2. Root Certificates - reside at the CA
  3. Client Certificates - at the browser or client side; in Kubernetes, on the component acting as a client

Why do we need TLS certificates?

Let's take an online payment transaction as an example. Customer 'X' wants to transfer money to an e-commerce vendor online and sends details such as username=xxx, password=yyy, trnamount=100. A hacker sitting in the middle, between customer and vendor, can capture the user details and take control of the transaction. This is called a man-in-the-middle attack.

To avoid this kind of attack on your e-business, you need end-to-end secure communication; for this, use TLS certificates between the web server and web clients.

Where can I get the Certificates? 

We need to submit a certificate request to a Certificate Authority (CA). There are many internationally available CAs:

  1. VeriSign
  2. GeoTrust
  3. Let's Encrypt
  4. DigiCert
Other than these, we can use self-signed certificates in Kubernetes. When we install Kubernetes using kubeadm, this is built in for us: the certs are created automatically. Each component uses a different issuer to make it more secure.

All the certificates are configured as 2048-bit RSA keys. All the Kubernetes certificate files are stored in a common location on the master, "/etc/kubernetes/pki". Everywhere we have public key and private key files: private keys always have the .key extension, and public keys may have .pem or .crt extensions. We can navigate to the /etc/kubernetes/manifests/ path, where we can see all the definitions required to run the Kubernetes cluster (see the listing after this list):
  1. ETCD database related definitions are in etcd.yaml 
  2. Controller-manager - The definitions related to different controller-managers are available in kube-controller-manager.yaml 
  3. kube-apiserver - the whole cluster's entry and exit happen through the apiserver; its definitions are in kube-apiserver.yaml
  4. kube-scheduler - placement of pods onto nodes is controlled by the scheduler; its related definitions are in kube-scheduler.yaml
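To see these manifests and the certificate files on a kubeadm-provisioned control plane:

ls -l /etc/kubernetes/manifests/
ls -l /etc/kubernetes/pki/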

How do you Identify the certificate files used for the kube-apiserver?

To identify the certificate files used by the kube-apiserver, view its static pod manifest:
cat /etc/kubernetes/manifests/kube-apiserver.yaml
In this file, under the containers section, the command carries several certificate paths; for the API server itself, the --tls-cert-file and --tls-private-key-file values are used.
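A quick way to pull just those flags out of the manifest (assuming the kubeadm path):

grep tls- /etc/kubernetes/manifests/kube-apiserver.yaml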

Kube-apiserver certificate details from the manifest file


On the Kubernetes master, the kube-apiserver is a client to the ETCD database server. To authenticate to ETCD, the kube-apiserver uses the client certificate apiserver-etcd-client.crt, defined with the --etcd-certfile option.

The key used to authenticate the kube-apiserver to the kubelet server is /etc/kubernetes/pki/apiserver-kubelet-client.key.

The certificate file used to authenticate the kube-apiserver to the kubelet server is /etc/kubernetes/pki/apiserver-kubelet-client.crt.

The ETCD Server CA root certificate is used to serve the ETCD server. It's a best practice to use a different CA (Certificate Authority) for the ETCD server than for the kube-apiserver. The trusted CA certificate file is available at /etc/kubernetes/pki/etcd/ca.crt.

How do you find the Common Name (CN) configured on the Kube API server certificate?

To view the data configured in the Kubernetes API server certificate we can use the 'openssl' command:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout
Viewing the Kube-apiserver certificate file using openssl command



There will be lengthy output on your screen, but the important fields to look for are: the Issuer's CN, which relates to the CA, and the "Subject" line, which contains the CN value of the server the certificate is defined for, here "kube-apiserver".

By default, the name of the CA that issued the Kube API server certificate is 'kubernetes'.

The below are the alternate names configured on the kube-apiserver certificate: [look at the X509v3 Subject Alternative Name: under this DNS values] 
  • controlplane 
  • kubernetes 
  • kubernetes.default 
  • kubernetes.default.svc 
  • kubernetes.default.svc.cluster.local 
To view the ETCD server certificate configuration, use the following 'openssl' command:
openssl x509 -in /etc/kubernetes/pki/etcd/server.crt -text -noout 
Viewing the ETCD Server certificate content


Here we need to observe that the "Subject:" is the Common Name (CN) configured on the ETCD server cert file. For certificate validity, the same 'openssl' command shows how long a certificate is valid, under the 'Validity' section. The root CA certificate validity is approximately 10 years, which we can see in /etc/kubernetes/pki/ca.crt.
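To print only the validity window of a certificate:

openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -dates
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -dates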


Most troubles come from misplaced certificate paths. Validate the .crt file paths and the paths contained inside the apiserver.yaml or etcd.yaml files.

Tuesday, October 4, 2022

Kubernetes Secrets

Hello DevOps | DevSecOps teams, we are running into the new generation of microservices inside pods, and we need to focus on how to protect them. This post covers the security rules imposed on the Kubernetes cluster with Secret objects, which are specially designed to store sensitive data and to be referenced inside pod containers. They have a limitation: they can hold at most 1MB of data.

Why Secret objects?

  • We can store Password, keys, tokens, certificates etc
  • Secrets will reduce the risk of exposing sensitive data
  • Access secrets using volumes and environment variables
  • Secrets object will be created outside pod/containers 
  • When a secret is created there are no clues about where it will be injected
  • All secrets resides in ETCD database on the K8s master


Kubernetes Secret objects are similar to ConfigMap objects.

Kubernetes Secret objects Using Volume, ENVIRONMENT variables

As a pre-check, we will verify the Kubernetes cluster is up and running.
kubectl get nodes
All the Kubernetes master and slave nodes are in Ready status.

There are two ways to access these Secrets inside the Pod.
  1. Using Environment variables 
  2. Assign to Path inside Pod 

Creating Kubernetes Generic Secrets

We are talking about secrets, so let's create text that can be converted to an encoded format and, if we want, decoded back into plain text. In the Linux CLI we have the base64 command, which encodes or decodes text from a file or from standard input and displays the result on standard output, that is, on our terminal.

Secrets can be created in Kubernetes in the following ways:

  1. from file or directory
  2. from literals
  3. Using the YAML declarative approach

echo "admin" | base64 > username.txt 
cat username.txt 
cat username.txt | base64 -d

# Now password
echo "abhiteja" |base64 > password.txt
cat password.txt | base64 -d

# Create secret from file
kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
kubectl get secrets
kubectl describe secrets db-user-pass

Validate

Kubernetes secret creation with username, password

The pod with the Redis image will use the secrets as environment variables.


FileName: secretenv-pod.yml

apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
      - name: SECRET_USERNAME
        valueFrom:
          secretKeyRef:
            name: db-user-pass
            key: username.txt
      - name: SECRET_PASSWORD
        valueFrom:
          secretKeyRef:
            name: db-user-pass
            key: password.txt
  restartPolicy: Never  
You can create the Pod using the following command:
kubectl create -f secretenv-pod.yml
kubectl get pods


Secret environment variables in Redis Pod

Get inside the Redis pod container and check with the 'env' command, which will show SECRET_USERNAME and SECRET_PASSWORD:
kubectl exec -it secret-env-pod -- bash 
env | grep SECRET


Secret as environment variables inside Pod Container

From the literal

Now let's see the simplest option, --from-literal. You can store as many literals as you wish in a secret; here I'm using three variables stored in the 'mysqldb-secret' object.
  
k create secret generic mysqldb-secret \
 --from-literal=DB_Host=mysql01.vybhava.com \
 --from-literal=DB_User=root \
 --from-literal=DB_Password=Welcome123
 
  k describe secret mysqldb-secret
  k get secret -o yaml
  
Execution output as follows:
Kubernetes Secret creation from literal example

 
The use of the secret object in a pod was already covered in the first option we discussed.

Creating the Secret declarative way

We can create a secret using YAML where we set the data fields to base64-encoded values.
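The encoded values used below can be reproduced with base64 (note that plain echo appends a newline, which becomes part of the encoded value):

echo "kubeadm" | base64     # a3ViZWFkbQo=
echo "kubepass" | base64    # a3ViZXBhc3MK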

Filename : mysecret.yaml
apiVersion: v1
data:
  username: a3ViZWFkbQo=
  password: a3ViZXBhc3MK
kind: Secret
metadata:
  name: mysecret
Let's create the secret object with kubectl command
kubectl create -f mysecret.yaml
kubectl get secrets
kubectl describe secrets mysecret

Creating secret object in Kubernetes using kubectl


Note that describe never shows the data present in the secret; instead it shows the size of the data, masking the sensitive values. This secret can be used inside a pod definition as follows. Filename: mypod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: redis-db
      mountPath: "/etc/redis-db"
      readOnly: true
  volumes:
  - name: redis-db
    secret:
      secretName: mysecret
Here mypod is defined as a Pod that uses a volume whose secret section, via secretName: mysecret, refers to the secret we created earlier.

Create Pod that uses secrets as volume

Encryption at rest configuration

The secrets created in Kubernetes are not really secrets, only base64-encoded values! So we don't share these declarative YAML files in source code repositories.

How is this secret stored in the ETCD DB?
To know this we must have etcd-client installed on the machine; my system is Ubuntu, so let me install it.

apt-get install etcd-client
# Validate installation 
etcdctl 
  
Kubernetes ETCD DB Client installation on Ubuntu

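With etcdctl in place, a sketch of reading the secret directly from etcd on a kubeadm control plane (the certificate paths assume kubeadm defaults):

ETCDCTL_API=3 etcdctl \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/default/mysecret

Without encryption at rest configured, the secret value is readable here, which is exactly why the encryption-at-rest configuration matters.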

Hope you enjoyed this post!!
Please share with your friends and comment if you find any issues here.


Sunday, October 2, 2022

Pod scheduling-2: Pod Affinity and Node Affinity

In this post we will take a deep dive into Node Affinity. Node affinity is a more capable alternative to node labels and selectors in the Kubernetes cluster.

There are three types of affinities in Kubernetes

  1. Node affinity
  2. Pod affinity
  3. Pod anti-affinity

Node affinity means that the pod should only be scheduled on target nodes with specific labels; this is basically what the node selector does, for example scheduling the db-pod only on nodes labeled size=large.

Then we have pod affinity, which dictates that the pod should only be scheduled on nodes where other specific pods are already running. For example, a cache pod is scheduled only on nodes where the webserver pods are already running. Generically: schedule Pod X only where Pod Y runs; like a newly married couple, they stay together! This way we can reduce the network latency between two pods which need to communicate (think of a web pod connecting to db pods).

Finally, pod anti-affinity is the opposite, like a divorced pair who won't stay together: Pod X should not be scheduled on the same node where Pod Y is running. This is needed where, for example, two database processing pods should not run on the same node; pod anti-affinity makes the database pods repel each other, so they get scheduled on two different nodes.

Let's check out a sample of node affinity to understand it more deeply:

apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: nginx-container
    image: nginx:alpine
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: machine-type
            operator: In
            values:
            - hicpu
          - key: disk-type
            operator: In
            values:
            - solidstate
The node Affinity comes with two options: 
  • requiredDuringSchedulingIgnoredDuringExecution
  • preferredDuringSchedulingIgnoredDuringExecution 

These field names are really lengthy :) but they make sense and are easy to use because they spell out exactly what they do!! Here nodeSelectorTerms defines the requirements for the nodeAffinity; there can be any number of conditions to match. Node Affinity Example 2: here we have a variation in the nodeAffinity section.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: nginx-container
    image: nginx:alpine
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 2
        preference:
          matchExpressions:
          - key: machine-type
            operator: In
            values:
            - locpu
      - weight: 1
        preference:
          matchExpressions:
          - key: disk-type
            operator: In
            values:
            - harddisk
With the above nodeAffinity, the scheduler looks at which node matches the preferences with the higher weight and schedules the pod onto the node with the highest total weight. Note that the pod might still be scheduled onto a node matching none of these labels, because this nodeAffinity is preferred, not required.


Experiment by changing Preferred to Required.

Saturday, October 1, 2022

K8s Storage Volumes Part 4 - Dynamic Provisioning

Hello guys! I am back with new learning in the Kubernetes storage volume series of posts. We have already seen how to create a PV, claim it with a PVC, and use the PVC in the pod manifest under the volumes section. In this post we will explore the options available for dynamic provisioning with StorageClass.

StorageClass - PersistentVolumeClaim used in Pod



I wanted to understand Kubernetes StorageClasses in depth, and visited many blog posts where people work with different cloud choices. Initially I went through the Mumshadmohammad session and practice lab, and tried it out on the GCP platform.
Previous Storage related posts

Basically, Kubernetes maintains two types of StorageClasses:
  1. Default storage class (Standard Storage class)
  2. User-defined storage class (Additional which is created with kubectl)
The additional storage class depends on the public cloud platform's storage; there are different provisioners:

  • On Azure - kubernetes.io/azure-disk
  • On AWS - kubernetes.io/aws-ebs
In this post, let's explore the AWS EBS option

# storageclass-aws.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-ebs-storageclass
provisioner: kubernetes.io/aws-ebs 
parameters:
  type: gp2
The key to dynamic provisioning is the storage class; thanks to Kubernetes for this great feature. The storage class manifest starts with the provisioner, which depends on the cloud platform; each provides different storage abilities, where access speed and size also matter.
kubernetes.io/gce-pd is the provisioner provided by Google; in its related parameters we can define pd-standard, the zone, and the reclaim policy. A PV created through a storage class inherits the storage class's reclaim policy.
The Kubernetes cluster administrator sets up one or more storage provisioners. The admin creates one or more storage classes with them, and then the user/developer creates a claim (PVC) that references a storage class name; Kubernetes then automatically creates a PV linked to the actual storage. This way volumes are dynamically provisioned based on the requested capacity, access mode, reclaim policy, and the provisioner specified in the PVC and its matching storage class. Finally, the user uses that claim as a volume.
 
Specific to GCP users:
gcloud beta compute disks create \
  --size 1GB --region us-east1 pd-disk 
You can use either a plain PV or a storage class; just for your reference, here is the PV manifest file:
#File: pv.yaml  
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gcp-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 500M
  gcePersistentDisk:
    pdName: pd-disk
    fsType: ext4
In the PV definition you can specify the exact size and filesystem type, which stay under your control. We are going to run this on GKE:
gcloud container clusters get-credentials pd-cluster 
Defining the storage class with the following YAML
# File: sc-def.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: google-sc
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-east1-a 
reclaimPolicy: Delete

Now create it and validate its creation:
kubectl create -f sc-def.yaml
kubectl get sc 
Now let's create this claim PVC as follows:
  
# File: pvc-def.yaml  
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  storageClassName: "google-sc"
Here the PVC uses the storage class created in the above step.
kubectl create -f pvc-def.yaml
kubectl get pv,pvc 
Now all is set to use the storage in a Deployment pod.
# File: mysql-deploy.yaml 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql-db
spec:
  replicas: 1
  selector:
    matchLabels:
      pod-labels: mysql-pods
  template:
    metadata:
      labels:
        pod-labels: mysql-pods
    spec:
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "true"
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
          subPath: mysql
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: myclaim
Create the mysql database processing pod
kubectl create -f mysql-deploy.yaml
kubectl get deploy,po 
To get into the mysql db we need shell access into the pod:
kubectl exec -it mysql-podid -- /bin/bash 
Inside the container:
mysql
create database clients;
create table clients.project(type varchar(15));

insert into clients.project values ('evcars-stats');
insert into clients.project values ('electric-cars');
insert into clients.project values ('jagwar-cars');
Check the content of the database table:
select * from clients.project; 
Exit from the pod shell, and now try to delete the pod; since it belongs to a deployment, it will be replaced with a new pod automatically. Get inside the new pod's shell and check the database table content again; if all looks good, our test case is successful! Congratulations, you have learnt how to use Kubernetes dynamic provisioning!
Clean up the storage class objects using the kubectl delete command.

The sequence goes like this:
1. Delete the Deployment: kubectl delete deploy mysql
2. Delete the PVC: kubectl delete pvc myclaim
3. Delete the StorageClass: kubectl delete sc google-sc

