Kubernetes Security Part 1 – Creating a test Kubernetes Cluster with kubeadm

As enterprises move towards cloud computing, large platforms such as AWS contain complex infrastructure that is susceptible to equally complex security concerns, and Kubernetes clusters are no exception. Red Cursor has started testing applications that run as containers within these clusters, and having access to a running, reproducible test environment is becoming vital for security research purposes. This blog post details creating a test Kubernetes cluster with Kubeadm, consisting of one controlplane node and two worker nodes, semi-automated using Vagrant.

When creating this cluster we refer to the Kubeadm documentation and the Arch Linux wiki, which mention some caveats that are important to be aware of when installing Kubeadm, namely:

  • Swap space has to be disabled for the Kubelet service to start.
  • The Kubelet service needs to be manually enabled and started.
  • The Kubeadm tool requires a back-end container runtime such as Docker to be enabled and running.

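Before running Kubeadm it is worth quickly confirming these prerequisites on each machine. The check below is a minimal sketch, assuming Docker is the container runtime as in the scripts that follow:

# No output from swapon means swap is fully disabled
swapon --show

# Both services should report "enabled"; Docker should also be active
systemctl is-enabled docker.service kubelet.service
systemctl is-active docker.service
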
Creating Shell Scripts

The following shell script was used to meet these requirements on the worker nodes (the controlplane node uses a slightly extended version, shown afterwards, that also initializes Kubeadm):

# Refresh package databases and install the required packages
sudo pacman -Syy
printf "y\ny\ny\n" | sudo pacman -S iptables-nft kubelet kubeadm docker vim

# Use the overlay2 storage driver, as btrfs is not supported by Kubeadm
sudo mkdir -p /etc/docker
printf '{\n"storage-driver": "overlay2"\n}\n' | sudo tee /etc/docker/daemon.json

# The Kubelet will not start while swap is enabled
sudo swapoff -a

sudo systemctl enable kubelet.service
sudo systemctl enable docker.service
sudo systemctl start docker.service

This script installs the necessary packages on the machine, sets the Docker storage driver to overlay2 (as btrfs is not supported by Kubeadm), disables swap, enables and starts the Docker service, and finally enables the Kubelet service. For our controlplane node we perform much of the same logic, but we additionally install Kubectl and run Kubeadm to initialize our cluster:

sudo pacman -Syy
printf "y\ny\ny\n" | sudo pacman -S kubelet kubeadm kubectl docker vim

sudo mkdir -p /etc/docker
printf '{\n"storage-driver": "overlay2"\n}\n' | sudo tee /etc/docker/daemon.json

sudo swapoff -a

sudo systemctl enable kubelet.service
sudo systemctl enable docker.service
sudo systemctl start docker.service

sudo kubeadm init --ignore-preflight-errors=all --apiserver-advertise-address=192.168.56.2

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
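
With the admin kubeconfig copied into place, a quick sanity check (optional, but useful before moving on) confirms that kubectl can talk to the new API server:

# Should print the controlplane and CoreDNS endpoints
kubectl cluster-info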

Using Vagrant to deploy machines to our test Kubernetes Cluster

When creating a test Kubernetes cluster we require virtual machines to act as each of the nodes. Vagrant is used to create and manage the virtual machines so they can quickly be reset and recreated. For the virtual machine OS, I decided to use Arch Linux as, in normal installations, I find it has a smaller footprint than other operating systems such as Ubuntu, and it provides the needed tools in its default repositories.

The machines can be created by using the following Vagrantfile:

# -*- mode: ruby -*-
# vi:set ft=ruby sw=2 ts=2 sts=2:

# Define the number of master and worker nodes
NUM_MASTER_NODE = 1
NUM_WORKER_NODE = 2

IP_NW = "192.168.56."
MASTER_IP_START = 1
NODE_IP_START = 2

Vagrant.configure("2") do |config|
  config.vm.box = "archlinux/archlinux"
  config.vm.box_check_update = false
  
  # Provision Master Nodes
  (1..NUM_MASTER_NODE).each do |i|
    config.vm.define "kubemaster" do |node|
      node.vm.provider "virtualbox" do |vb|
        vb.name = "kubemaster"
        vb.memory = 2048
        vb.cpus = 2
      end
      
      node.vm.hostname = "kubemaster"
      node.vm.network :private_network, ip: IP_NW + "#{MASTER_IP_START + i}"
      node.vm.network "forwarded_port", guest: 22, host: "#{2710 + i}"
      
      node.vm.provision "setup-kubemaster", :type => "shell", :path => "scripts/install-controlplane.sh" do |s|
        s.args = []
      end
      
    end
  end
  
  # Provision Worker Nodes
  (1..NUM_WORKER_NODE).each do |i|
    config.vm.define "kubenode0#{i}" do |node|
      node.vm.provider "virtualbox" do |vb|
        vb.name = "kubenode0#{i}"
        vb.memory = 2048
        vb.cpus = 2
      end

      node.vm.hostname = "kubenode0#{i}"
      node.vm.network :private_network, ip: IP_NW + "#{NODE_IP_START + i}"
      node.vm.network "forwarded_port", guest: 22, host: "#{2720 + i}"

      node.vm.provision "setup-kubenode", :type => "shell", :path => "scripts/install-node.sh" do |s|
        s.args = []
      end
    end
  end
end

Now we can run vagrant up and watch as our boxes are deployed and configured:

[Screenshot: Bringing a box online using vagrant up]
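
Only a handful of standard Vagrant commands are needed to drive the environment; the reset workflow below is simply how I use it, not a requirement of the cluster itself:

# Bring up all three machines and run the provisioning scripts
vagrant up

# Check which machines are currently running
vagrant status

# Tear everything down and rebuild a clean test environment
vagrant destroy -f
vagrant up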

During this boot-up process, an important command will appear in the Vagrant output for our kubemaster machine:

[Screenshot: The Kubeadm join command in the Vagrant output]

Write this command down, as it is used to join each worker node to the test cluster.
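
If the command scrolls past, or the token later expires, an equivalent join command can be regenerated at any time from the controlplane node:

# Run on the controlplane node to print a fresh join command
sudo kubeadm token create --print-join-command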

Configuring our test Kubernetes Cluster

Once all of our machines are online, we can use vagrant ssh kubemaster to access our controlplane node and start issuing commands. We can first check that our kubectl is correctly configured and that our controlplane node exists within the cluster:

[Screenshot: Getting the status of nodes after the initial install]
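
The check itself is just a node listing. At this stage it is normal for the controlplane node to report NotReady, because no pod network add-on has been installed yet:

# The controlplane shows NotReady until a CNI plugin such as Flannel is deployed
kubectl get nodes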

First we deploy a pod networking solution such as Flannel. To deploy Flannel, run the command shown on its GitHub page:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
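
Once the manifest is applied, the Flannel pods can be watched until they are running. The namespace they land in depends on the manifest version, so listing across all namespaces is the safest check:

# Watch the Flannel pods come up (namespace varies between manifest versions)
kubectl get pods --all-namespaces -o wide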

We can now join each worker node to our cluster by using vagrant ssh kubenode01 and vagrant ssh kubenode02 and running our previously saved Kubeadm join command:

[Screenshot: Joining a node to the cluster using kubeadm join]
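
The saved command follows the standard kubeadm join format; the token and CA-certificate hash below are placeholders for the values printed during vagrant up:

# Run on each worker node, using the token and hash from the kubemaster output
sudo kubeadm join 192.168.56.2:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>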

Now we can check the status of our nodes:

[Screenshot: Getting the status of nodes after registering kubenode01 and kubenode02]

We are now ready to start testing using our new test Kubernetes cluster.
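
As a final smoke test (a throwaway example rather than part of the original setup), a small deployment confirms that pods are scheduled onto the worker nodes:

# Deploy a test workload and confirm it lands on a worker node
kubectl create deployment nginx-test --image=nginx
kubectl get pods -o wide

# Clean up afterwards
kubectl delete deployment nginx-test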
