
May 11, 2023 | Anton Ovrutsky

Building a Kubernetes purple teaming lab

Sumo Logic Threat Labs

Kubernetes, and containerization in general, has a wealth of benefits for many teams operating cloud-native applications. From a threat detection standpoint, however, it is often difficult for newcomers to this space to gain the relevant hands-on experience without trampling over production environments.

The Sumo Logic team has previously authored articles on Kubernetes DevSecOps vulnerabilities and best practices as well as Kubernetes logging and monitoring. Now, let’s extend this work and outline how to set up a Kubernetes home lab.

We will provision and configure this lab, send the relevant telemetry to a free Sumo Logic instance and track our testing activities using the awesome Vectr tool.

What is purple teaming?

Before diving head first into the tooling, attacks & defenses, we should pause for a moment and outline what purple teaming is and why it's a powerful tool for developing defenses for complex applications and networks.

Within the information security community, colors have been ascribed to the attack/defense spectrum: those on the attacking end (penetration testers, red teamers) are associated with red, while those on the defending end (SOC analysts, threat hunters) are associated with blue.

This dichotomy between red and blue is often blurry, as some red teamers may want to perform threat hunting to hone their evasion skills and some SOC analysts may want to understand how an attack tool works in order to craft more comprehensive detection logic.

It should come as no surprise then, that purple teaming is a mix of both blue and red aspects of the cyber security spectrum. Purple teaming engagements and exercises can come in many flavors and may lean more to one side of the red-blue spectrum than the other, depending on how the exercise is laid out and who the recipient of the engagement is.

Overall, however, purple teaming generally encompasses the dynamic of collaboratively attacking a system, checking the results of the attack, tracking the related metrics and then iterating; all with the goal of improving system security, response and resiliency.

Some additional resources on purple teaming can be found here and here.

Tooling overview

Before getting into the various tactics, techniques and procedures (TTPs) as well as technical details, let us step back for a moment and do a quick overview of all the tooling involved and highlight the function that each piece performs.

| Tool | Purpose |
| --- | --- |
| Ubuntu 20.04 virtual machine | This virtual machine will be running our Kubernetes cluster as well as various other logging tools, in addition to Vectr itself. |
| Docker | Docker will act as our Kubernetes driver and will host our Vectr instance, all on the same virtual machine. |
| Minikube | Minikube will make it possible for us to run a Kubernetes instance on our virtual machine. |
| Sumo Logic | A free Sumo Logic account will be used in order to monitor our Kubernetes installation. |
| Auditd + Laurel | We will be using an auditd configuration file on our virtual machine in order to get host-level telemetry from our Kubernetes cluster. Laurel will be used to transform these auditd logs into JSON format so that they are easier to work with and query. |
| Vectr | Vectr will be used to track our purple teaming activities on our local Kubernetes cluster. |

Virtual machine setup

Now that you have all the tools to follow along, let’s dive in and get it all set up.

Please note that throughout these instructions, a virtual machine with an ARM architecture is used. If the virtual machine you are running is not ARM, please adjust the relevant installation instructions to match your particular architecture. At the time of writing, Vectr does not support ARM architectures, so if you wish to run all the tooling outlined in this blog on one virtual machine, we recommend an x64/amd64 architecture.
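
If you are unsure which architecture your virtual machine is running, you can check with:

uname -m

This prints aarch64 (or arm64) on ARM machines and x86_64 on x64/amd64 machines.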

Docker install

In order to install Docker, we will be following the official instructions.

We first install the necessary dependencies:

sudo apt-get update
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg

We then add the Docker repo GPG keys:

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

Following this, we set up the relevant Docker repos - note that the below command should be architecture agnostic.

echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

Finally, we install the relevant Docker packages:

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

If all went well during the installation, you should be able to run: docker -v and see something similar to the following:


Minikube install

Next up, we’ll be installing Minikube. We will follow the official instructions to perform the installation.

The instructions are interactive here, and you can click the relevant buttons to match your particular architectures and virtual machine setups.

Installation

In our case, we will be using the Linux operating system, ARM64 architecture, and will be using the binary installer type, so the command here is:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-arm64
sudo install minikube-linux-arm64 /usr/local/bin/minikube

Prior to starting Minikube, we need to add our current user to the Docker group:

sudo usermod -aG docker $USER && newgrp docker

Now we are ready to start Minikube using: minikube start

If all goes well, you should see something similar to:

In order to interact with our Minikube cluster, we also need to install kubectl.

Minikube can do this for you with the following command:

minikube kubectl -- get po -A

We can then alias this minikube version of kubectl to just kubectl to make it easier for us:

alias kubectl="minikube kubectl --"
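
Note that this alias only applies to the current shell session. To make it persistent, you can append it to your shell profile, for example:

echo 'alias kubectl="minikube kubectl --"' >> ~/.bashrc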

Now we should be able to do a: kubectl get nodes in order to see the minikube control plane running.
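
If everything is wired up correctly, the output should look roughly like the following (the age and version will vary with your setup):

NAME       STATUS   ROLES           AGE   VERSION
minikube   Ready    control-plane   2m    v1.26.3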

Very cool! We now have a virtual machine provisioned with Docker installed as well as a local Kubernetes instance ready for our testing.

Auditd setup

As a next step, let’s instrument our Ubuntu host with some host-level telemetry using Auditd and Laurel.

We first install auditd using the following command: sudo apt-get install auditd

If we do a service auditd status, we should see output similar to the following:

Next, we need to tell auditd what we want it to log; here we will be using Florian Roth’s awesome auditd configuration file.

We can start by making a backup of the existing rules file:

sudo cp -r /etc/audit/rules.d /etc/audit/rules.d.bak

Then, we can go ahead and change the auditd rules. To do so, use sudo to open up /etc/audit/rules.d/audit.rules with your favorite text editor (no nano versus vim arguments here!) and replace the contents of the audit.rules file on your Ubuntu machine with the audit.rules linked above.
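
If you prefer to fetch the rules from the command line, one way to do so (assuming Florian Roth's configuration from the Neo23x0/auditd GitHub repository) is:

sudo wget https://raw.githubusercontent.com/Neo23x0/auditd/master/audit.rules -O /etc/audit/rules.d/audit.rules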

Once that is done, restart auditd using sudo service auditd restart

To ensure that auditd is working properly, you can use: sudo tail -f /var/log/audit/audit.log - we can hit ctrl+c to stop the tailing.

If you take a closer look at the auditd log format, it may look overwhelming at first, as a simple cat command looks like this when logged by auditd:

type=SYSCALL msg=audit(1681912914.195:1053): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffef61a793 a2=0 a3=0 items=1 ppid=3102 pid=4453 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=2 comm="cat" exe="/usr/bin/cat" subj=unconfined key="auditlog"ARCH=aarch64 SYSCALL=openat AUID="parallels" UID="root" GID="root" EUID="root" SUID="root" FSUID="root" EGID="root" SGID="root" FSGID="root"

Laurel setup

In order to make these logs easier to work with, we will be using Laurel.

We start by downloading the relevant Laurel binary:

wget https://github.com/threathunters-io/laurel/releases/download/v0.5.1/laurel-v0.5.1-aarch64-musl.tar.gz

We then untar the archive:

tar xzf laurel-v0.5.1-aarch64-musl.tar.gz

And copy it to the proper directory:

sudo install -m755 laurel /usr/local/sbin/laurel

Next up we need to create a user for Laurel:

sudo useradd --system --home-dir /var/log/laurel --create-home _laurel

Next, we need to create a Laurel configuration file; an example is provided here.

We can do this with the following commands:

sudo mkdir /etc/laurel 

Followed by:

sudo wget https://raw.githubusercontent.com/threathunters-io/laurel/v0.5.1/etc/laurel/config.toml -O /etc/laurel/config.toml

Next, open the config.toml file that we just downloaded and change the value on line 20 to match your Ubuntu user.

Finally, we need to tell auditd to use Laurel as a plugin. We can do this with the following command:

sudo wget https://raw.githubusercontent.com/threathunters-io/laurel/v0.5.1/etc/audit/plugins.d/laurel.conf -O /etc/audit/plugins.d/laurel.conf

We now need to restart auditd:

sudo pkill -HUP auditd

Before we take a look at our fresh Laurel logs, let’s install the jq utility: sudo apt install jq

Now we can browse to /var/log/laurel and run: cat audit.log | jq

Wow, we ran a lot of commands so far!

To recap, we’ve set up an Ubuntu virtual machine with Docker and Minikube in order to operationalize a local Kubernetes cluster and to have a Docker base for the installation of Vectr. We’ve also instrumented this Linux host with telemetry in the form of an auditd configuration file and have transformed that telemetry into JSON format using Laurel.

Next up, let’s install Vectr on this same host so that we can track and monitor our purple teaming journey.

Vectr setup

Please note that at the time of writing, only x64 platforms are supported for Vectr installations.

Vectr is a fully dockerized application, so we can grab the latest release from the Vectr GitHub repository.

The full instructions can also be found here.

Once the Vectr release has been downloaded and unzipped to the appropriate directory, you should have a structure that looks like this:

Next up, open up the .env file with your favorite text editor and change the VECTR_HOSTNAME variable to match your virtual machine's IP address. While here, you can also go ahead and change the MongoDB passwords and JWS/JWE keys as well.
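
For example, the relevant line might end up looking like this (the IP address is illustrative; use your own virtual machine's address):

VECTR_HOSTNAME=192.168.64.10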

Once we save the .env file, we can go ahead and bring up the various Vectr containers by doing a: sudo docker-compose up -d within the /opt/vectr directory.

docker-compose should have been installed in earlier steps as part of our dependencies, but if it is not, you can run: sudo apt install docker-compose

You should now see various containers being pulled and spun up:

Once all the containers are built, you should be able to browse to https://virtualmachineIPAddress:8081 and put in the default credentials, which can be found within the Vectr install instructions. From here, we can create a database for our testing activities.

Navigate to the “Environment” button in Vectr, and click on “Select Active Environment” then the + button and create an environment for our testing:

We should now see a fresh Vectr screen waiting for us:

Recap

Prior to proceeding, we thought it would be wise to recap what we’ve built thus far and to provide a high level overview of all the moving pieces:

Sumo Logic setup and collection

Now that we have our Kubernetes set up, and have a host that is generating telemetry, we need to go ahead and send that telemetry to Sumo Logic.

We can spin up a free trial via this link - once you sign up you should receive an email asking you to activate your account.

Once your account is activated you can click through the relevant wizards and you should be greeted with a blank Sumo Logic page:

Kubernetes monitoring

In order to set up monitoring of our Kubernetes cluster, navigate to the “App Catalog” section in the bottom left-hand side of the menu pane and then click on Kubernetes:

Once in the Kubernetes App Catalog menu, go ahead and click on “Add Integration” - once you click on this, you should see step by step instructions:

Helm will allow us to deploy the necessary components for logging and monitoring via a “chart” - for more information about Helm, check out their documentation.

If helm is not installed on your virtual machine, you can install it with the following commands:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Once helm is installed, we can follow the instructions on the Sumo Logic App Catalog page - note that the minikube cluster needs to be started prior to performing these steps: minikube start

Before running the helm upgrade command, we will set the sumologic.setup.monitors.enabled field to false, as shown in the command below.

By default, the Helm chart installs three replicas for the various collection pods, however, since our cluster is small and non-production we will be changing this value to just one.

To do this, we need to copy the contents of this file:

https://raw.githubusercontent.com/SumoLogic/sumologic-kubernetes-collection/main/deploy/helm/sumologic/values.yaml

to our Ubuntu host, find the lines containing “replicaCount”, and change the values from 3 to 1.
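
One way to make this change from the command line (a sketch, assuming you saved the file as user_values.yaml to match the helm command below, and that the keys appear in the replicaCount: 3 form):

sed -i 's/replicaCount: 3/replicaCount: 1/g' user_values.yaml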

Then, we will need to add one line to the command that the Kubernetes onboarding wizard gives you:

helm upgrade --install my-release sumologic/sumologic \
  --namespace=default \
  --create-namespace \
  --set sumologic.accessId=suuxOoW071wSiB \
  --set sumologic.accessKey=<snip> \
  --set sumologic.clusterName=<cluster name> \
  --set sumologic.collectorName=<collector name> \
  --set sumologic.setup.monitors.enabled=false \
  -f path/to/user_values.yaml

Once the wizard completes, you should see something similar to the following:

Note that if the helm repo or wizard fails, you can try to add some more CPU and RAM to your Minikube setup:

minikube stop
minikube config set memory 4192
minikube config set cpus 4
minikube start --nodes 2

After a few minutes the process should complete and the wizard should finish successfully.

Right now, there is not much happening with our cluster so there isn’t much data to look at.

Let’s change that and spin up a basic Ubuntu pod, using the following YAML:

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
  labels:
    app: ubuntu
spec:
  containers:
  - image: ubuntu
    command:
      - "sleep"
      - "604800"
    imagePullPolicy: IfNotPresent
    name: ubuntu
  restartPolicy: Always
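
Save the above as ubuntu.yaml (this file name is referenced again later in the post), then deploy the pod and confirm it is running:

kubectl apply -f ubuntu.yaml
kubectl get pods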

If you are wondering what the “batcat” command is: it utilizes the awesome “bat” utility, which can be found here.

Within your Sumo Logic screen, you can now click on New → Log Search:

Once on the log search screen, we can take a look at the logs for our pod creation, we don’t know what we’re looking for just yet, so we’ll just type in “ubuntu” in the search bar to get us an idea of what the data looks like:

Nice! Even if you aren’t a Kubernetes or threat hunting expert, this very simple query already gives you a good idea of the types of data that are available to you.

We can see the container being pulled, created and started all in our Kubernetes telemetry.

Let’s complete our setup with one final step, installing a Sumo Logic collector to grab our Laurel logs off this host.

Host monitoring

Let’s start by navigating to the collection menu in Sumo Logic and clicking on “Add Collector”.

Next, click on “Installed Collector” and choose the collector that matches your system architecture.

On our testing ARM VM, we could use the following command:

wget https://collectors.ca.sumologic.com/rest/download/linux/aarch/64 -O SumoCollector_linux_arm64_19_418-7.sh

Before running the script, let’s generate a token to be used for the installation. Back within the Sumo Logic UI, navigate to the “Token” section and create one.

Once you have your token, we can now install the collector using the following command:

sudo bash SumoCollector_linux_arm64_19_418-7.sh -q -Vsumo.token_and_url=<your_token>

Note that the name of the file may change depending on your architecture.

Back in the Sumo UI - you should now see the minikube collector below the Kubernetes collectors we set up in earlier steps:

Now we need to tell our collector to ingest the Laurel logs. We can do so by clicking the “Add…” button next to the collector, then “Add Source” and then “Local File”
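
When filling in the Local File source settings, the file path should point at the Laurel output we inspected earlier, i.e.:

/var/log/laurel/audit.log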

After a few minutes, we should see some Laurel logs trickling in:

At this point, we have a local Kubernetes cluster set up on a virtual machine. We are sending telemetry from this Kubernetes cluster to a Sumo Logic instance, and we are also monitoring the host that the cluster is running on via auditd, using Laurel to transform these auditd logs into JSON format.

Now we are ready for the fun stuff, testing out some attacks on our Kubernetes cluster!

The TTPs: attacks and detections for Kubernetes

T1610 – Deploy container

To dip our toes into the Kubernetes threat detection world, let us build on our basic example of starting an Ubuntu pod, and look at the MITRE “Deploy Container” category.

If your Ubuntu container is still deployed, go ahead and delete it with kubectl delete -f ubuntu.yaml

We’ll then go ahead and recreate the pod: kubectl apply -f ubuntu.yaml

Now let’s look at the following query in Sumo Logic:

_collector="kubernetes-2023-04-20T12:54:16.324Z"
| %"object.reason" <strong>as</strong> reason
| %"object.involvedobject.kind" <strong>as</strong> object_kind
| %"object.involvedobject.name" <strong>as</strong> object_name
| where reason = "Created"
| values(object_kind) <strong>as</strong> kinds,values(object_name) <strong>as</strong> names

This will show us what kinds of objects are created within our cluster along with their names. You should see something similar to:

We see our Ubuntu pod hanging out at the bottom, let’s clean the query up a little bit and filter out the system pods as well as the pods necessary for the Sumo Logic collection to take place:

_collector="kubernetes-2023-04-20T12:54:16.324Z"
| %"object.reason" <strong>as</strong> reason
| %"object.involvedobject.kind" <strong>as</strong> object_kind
| %"object.involvedobject.name" <strong>as</strong> object_name
| <strong>where</strong> reason = "Created"
| <strong>where</strong> !(object_name matches /(coredns|etcd|<strong>my</strong>\-release|kube\-|storage\-provisioner)/)
| values(object_kind) <strong>as</strong> kinds,values(object_name) <strong>as</strong> names

Through some regular expression tweaking on line 6 of our query, we exclude pods that we may not want to see in this particular detection logic, and now we should be left with only our Ubuntu pod showing up in the search results. We highly recommend a regular expression testing site of some kind as a resource to aid you in crafting any type of regular expressions.

That’s pretty cool, but didn’t we spend a bunch of time setting up host logging as well, and can that be used to provide us some additional coverage? Great question; yes it absolutely can!

We may not be super familiar with auditd and Laurel, but we know that when we created our Ubuntu pod, we used a yaml file called “ubuntu.yaml” so let’s start there with a super quick search:

_collector="minikube" ubuntu.yaml

After rolling up our proverbial sleeves and getting a handle on the data, we can craft the following query which is annotated:

_collector="minikube" 
| %"syscall.comm" as binary_name //renaming fields for ease of <strong>use</strong>
| %"proctitle.argv" <strong>as</strong> command_line //renaming <strong>fields</strong> <strong>for</strong> ease <strong>of</strong> <strong>use</strong>
| <strong>where</strong> binary_name = "kubectl" //looking <strong>for</strong> the kubectl binary used 
| <strong>where</strong> command_line matches /<strong>apply</strong>/ //matching <strong>on</strong> the "apply" verb
| <strong>where</strong> !(command_line matches /\/etc\/kubernetes/) //<strong>excluding</strong> <strong>some</strong> <strong>system</strong> <strong>events</strong>
| <strong>values</strong>(command_line)

And looking at the results, we see our kubectl apply command:

This is a great example of why host logs are important for cloud native technologies such as Kubernetes - of course this assumes that your cluster is not hosted within a cloud service of some kind.

In our instance, however, we were able to gain visibility into a container being deployed in the environment from both the host and Kubernetes level - sweet!

Let’s not forget to track these executions in Vectr.

We can navigate to the environment we set up earlier, click on the “Create New” button on the right-hand side within the Vectr UI, and create a new Assessment:

Once within the assessment, we can click on “Assessment Actions” and click on “Create New Campaign” - so the overall structure here is an Environment, which contains an assessment, which contains a campaign, with the campaign containing our test cases.

From the campaign menu, click on “Campaign Actions → New Test Case”:

We can then enter the details of our test case:

It should be noted that Vectr provides a ton of options for tracking, including tracking time to alert and various sources - here we are just scratching the surface to get some basic metrics of our executions.

Once we click on “Save” we should see our first test case in Vectr.

Recall that we did two executions/variations of the same technique, so we can clone this test case to track our host-based detection as well.

Once you click the clone button, you will see a window pop up with customizable parameters:

We can then rename this second test to something like “Deploy Container - Host Based”

T1609 – Container administration command

Let’s look at another example, continuing to use our deployed Ubuntu pod and perform a kubectl exec command in order to get a bash shell into our pod.

Before running the command, we need to add the following entry into our audit.rules auditd configuration file so that we log the appropriate telemetry: -w /usr/local/bin/minikube -p x -k minikube and then restart auditd:

sudo pkill -HUP auditd
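
If you prefer to append the rule from the command line before restarting, a one-liner such as the following should work (assuming the /etc/audit/rules.d/audit.rules path used earlier):

echo "-w /usr/local/bin/minikube -p x -k minikube" | sudo tee -a /etc/audit/rules.d/audit.rules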

Now we can run our command:
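
Based on the flags that our detection query matches on below, the invocation looks something like this (using the pod name from our earlier deployment):

kubectl exec --stdin --tty ubuntu -- /bin/bash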

We take a look at the Kubernetes logs and fail to find any telemetry at this level, so let's pivot to the host level with the following query:

_collector="minikube" 
| %"syscall.comm" <strong>as</strong> binary_name //renaming fields <strong>for</strong> ease of use
| %"proctitle.argv" <strong>as</strong> command_line //renaming fields <strong>for</strong> ease of use
| where binary_name = "minikube" //looking <strong>for</strong> minikube here <strong>as</strong> we aliased kubectl <strong>earlier</strong> - normally this would just <strong>be</strong> kubectl
| values(command_line) <strong>as</strong> command_line,values(binary_name) <strong>as</strong> binary_name

And we get our results:

Now that we know what command line value to look for, let’s tighten up our query a little bit, using some regular expressions.

_collector="minikube" 
| %"syscall.comm" <strong>as</strong> binary_name //renaming fields <strong>for</strong> ease of use
| %"proctitle.argv" <strong>as</strong> command_line //renaming fields <strong>for</strong> ease of use
| where binary_name = "minikube" //looking <strong>for</strong> minikube here <strong>as</strong> we aliased kubectl <strong>earlier</strong> - normally this would just <strong>be</strong> kubectl
| where command_line matches /exec|tty|stdin/ //looking <strong>for</strong> <strong>a</strong> <strong>command</strong> line that conains exec, tty or stdin 
| values(command_line) <strong>as</strong> command_line,values(binary_name) <strong>as</strong> binary_name

Now we can go ahead and add this execution to our Vectr for tracking purposes.

T1613 – Container and resource discovery

Discovery and enumeration type techniques are often difficult to detect as administrators and developers may run these types of commands as part of their normal workflows. Let’s run some basic Kubernetes enumeration commands on our local Minikube cluster:

kubectl config get-users
kubectl config get-clusters
kubectl auth can-i --list
kubectl get roles
kubectl get secrets
kubectl get serviceaccounts
kubectl get deployments
kubectl get pods -A

Now let’s modify a query we used earlier in order to get a sense of how the telemetry looks:

_collector="minikube" 
| %"syscall.comm" <strong>as</strong> binary_name //renaming fields <strong>for</strong> ease of use
| %"proctitle.argv" <strong>as</strong> command_line //renaming fields <strong>for</strong> ease of use
| where binary_name = "minikube" //looking <strong>for</strong> minikube here <strong>as</strong> we aliased kubectl <strong>earlier</strong> - normally this would just <strong>be</strong> kubectl
| values(command_line) <strong>as</strong> command_line,values(binary_name) <strong>as</strong> binary_name

And we get our results:

We see the command line values with “minikube” followed by “kubectl” as Minikube uses its own version of kubectl - recall that we created an alias that mapped minikube kubectl to just kubectl in earlier steps. In production environments, this command line would show up as just “kubectl”

In order to find this activity, we can do some string matching on things like “auth can-i” or “kubectl get” - however, we probably do not want to find normal or day-to-day administrative activity.

Another approach we can take is to slice up our data by time slices and score each of these commands, summing up the score based on the time slice. Our hypothesis here is that threat actors might perform a bunch of enumeration in a short time period, whereas a developer or administrator may not exhibit such behavior.

Let’s take a look at what this looks like in query format:

_collector="minikube" 

// Initialize variables
| 0 as score
| "" as messageQualifiers
| "" as messageQualifiers1
| "" as messageQualifiers2

// Setting our time slice
| timeslice 1h

// Renaming some fields for ease of <strong>use</strong>
| %"syscall.comm" <strong>as</strong> binary_name 
| %"proctitle.argv" <strong>as</strong> command_line

// <strong>Only</strong> looking <strong>at</strong> the aliased minikube binary 
| <strong>where</strong> binary_name = "minikube" //looking <strong>for</strong> minikube here <strong>as</strong> we aliased kubectl earlier - normally this would just be kubectl

// Setting our qualifiers, we look <strong>for</strong> can-i, <strong>get</strong> <strong>or</strong> config, we can <strong>add</strong> more qualiifers here depending <strong>on</strong> the environment
| <strong>if</strong>(command_line matches /(can\-i)/,<strong>concat</strong>(messageQualifiers, "Kubectl auth enumeration: ",command_line,"\nBy Binary: " ,binary_name,"\n# score: 3\n"),"") <strong>as</strong> messageQualifiers
| <strong>if</strong>(command_line matches /(<strong>get</strong>)/,<strong>concat</strong>(messageQualifiers1, "Kubectl cluster enumeration: ",command_line,"\nBy Binary: " ,binary_name,"\n# score: 3\n"),"") <strong>as</strong> messageQualifiers1
| <strong>if</strong>(command_line matches /(config)/,<strong>concat</strong>(messageQualifiers2, "Kubectl config enumeration: ",command_line,"\nBy Binary: " ,binary_name,"\n# score: 3\n"),"") <strong>as</strong> messageQualifiers2

// Putting our qualifiers together 
| <strong>concat</strong>(messageQualifiers,messageQualifiers1,messageQualifiers2) <strong>as</strong> q //Concact all the qualifiers together

// Extracting the score <strong>from</strong> the qualifiers 
| <strong>parse</strong> regex <strong>field</strong>=q "score:\s(?<score>-?\d+)" multi 

//<strong>Only</strong> <strong>return</strong> results <strong>if</strong> there <strong>is</strong> a qualifier <strong>of</strong> <strong>some</strong> kind
| <strong>where</strong> !isEmpty(q) 

//<strong>Return</strong> our <strong>full</strong> qualifiers <strong>and</strong> <strong>sum</strong> the score <strong>by</strong> timeslice
| <strong>values</strong>(q) <strong>as</strong> qualifiers,<strong>sum</strong>(score) <strong>as</strong> score <strong>by</strong> _timeslice 

Looking at the results, we can see our numerous enumeration commands bubbled up with a score of 33, which is much higher than our “normal” administrative activity which occurred on the previous day with a score of 12.

Please keep in mind that all the parameters and scoring within these queries can be tweaked depending on your particular setup and architecture.

Let’s not forget to add this execution to our Vectr tracking.

T1496 – Resource hijacking

Once threat actors compromise a Kubernetes cluster, a deployment of some kind of cryptocurrency miner often follows.

This kind of technique is difficult to replicate in a home lab environment, as we probably do not want to be deploying coin miners on our virtual machines.

However, we can exhaust the resources of our Kubernetes cluster in other ways. Before we dive in, it should be noted that this technique is not recommended to execute unless you are comfortable with maxing out resources on whatever compute platform your Minikube cluster is running on.

It goes without saying that this is not recommended for production or even test environments.

Although we all want to avoid stress, we can use the “stress” Linux utility to stress test the CPU on a Kubernetes pod. We can also create a deployment that spins up many replicas of these pods, all running the stress utility, in order to generate a high CPU load on our Kubernetes cluster.

Here is what the YAML looks like:
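
The following is an illustrative sketch - the polinux/stress image, the replica count, and the stress arguments are assumptions that you can tune to your lab:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: stress
spec:
  replicas: 10
  selector:
    matchLabels:
      app: stress
  template:
    metadata:
      labels:
        app: stress
    spec:
      containers:
      - name: stress
        image: polinux/stress
        command: ["stress"]
        args: ["--cpu", "2", "--timeout", "600"]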

If you choose to deploy this on your cluster, you will need to give it some time for all the replicas to spin up.

After waiting for a few minutes, we can navigate back to Sumo Logic and take a look at some dashboards that were provisioned for us when we set up the Kubernetes monitoring solution:

Once you navigate to the “Kubernetes - Cluster” page, you should see the CPU usage chart well into the red:

As a final step, we can click the “bell” icon on the top of the Sumo Logic menu to see some alerts waiting for us:

Clicking into the “Kubernetes - Node CPU Utilization High” alert will bring us to the following screen where we can see some additional information:

We can go ahead and add this test case to our Vectr campaign.

Vectr wrap-up

At this point, we should have five test cases in our Vectr instance, with the escalation path looking something like this:

If we navigate to “Reporting → MITRE ATT&CK Coverage” within Vectr:

We should be greeted with a nice MITRE ATT&CK matrix showing us our test cases:

From here, we can click the red “&” symbol on the top right and export this layer into a JSON file which can then be loaded into the MITRE ATT&CK Navigator

Now you know how to set up a local Kubernetes cluster with host and Kubernetes cluster level visibility, with all the telemetry being fed into a free Sumo Logic instance. We have also used Vectr to track and report on our test cases.

This setup used freely available tooling, all hosted on a local virtual machine. This type of setup may be preferable for many folks who do not want to be on the hook for potentially large cloud bills.

This type of environment also provides users with the ability to snapshot and recover a virtual machine in order to try out various configurations and threat detection use cases.

Learn more about how Sumo Logic can help you with your Kubernetes monitoring.


Anton Ovrutsky

Senior Threat Research Engineer

Anton Ovrutsky leverages his 10+ years of expertise and experience as a BSides Toronto speaker, C3X volunteer, and an OSCE, OSCP, CISSP, CSSP and KCNA certificate holder in his role at Sumo Logic's Threat Labs. He enjoys the defensive aspects of cybersecurity and loves logs and queries. When not diving into the details of security, he enjoys listening to music and cycling.
