
Kubernetes Risk Assessment with Octarine’s Kube-Scan

After I saw the news about VMware’s intent to acquire Octarine, I wanted to know what Octarine is all about and what solutions they offer. Octarine provides intrinsic security and compliance for containers and Kubernetes workloads from build (Octarine Guardrail) to run (Octarine Runtime). This allows organizations to adopt DevSecOps principles across the complete application life cycle.


Just in case you haven’t heard the term DevSecOps before, here is my short take on it. DevOps is a methodology, or a set of practices, to bring software development and IT operations together, with the overall goal of accelerating the business through faster (increased velocity) and better (improved quality and reliability) application development. DevSecOps incorporates security into DevOps practices and essentially makes everyone responsible for security-relevant decisions along the complete application lifecycle. This ensures security is built into the application rather than added afterwards.


Besides their enterprise security platform offering, Octarine maintains a very interesting open-source project called Kube-Scan that assesses the risk of your Kubernetes workloads. In this blog post, I want to give a short overview of Kube-Scan.

Installing Kube-Scan is pretty straightforward. Kube-Scan runs as a single Pod in your Kubernetes cluster, and you can install it either with direct access to the pod via port-forwarding or behind a load balancer. For more details, have a look at Kube-Scan on GitHub. In my case, I used the load balancer option. Simply execute the following command against your Kubernetes cluster.

(⎈ |tkgcl1-admin@tkgcl1:default)➜  ~ kubectl apply -f
namespace/kube-scan created
configmap/kube-scan created
serviceaccount/kube-scan created
deployment.apps/kube-scan created
service/kube-scan-ui created
(⎈ |tkgcl1-admin@tkgcl1:default)➜  ~ kubectl get pods -n kube-scan
NAME                         READY   STATUS    RESTARTS   AGE
kube-scan-78f85b5d94-w4mt9   2/2     Running   0          35s
(⎈ |tkgcl1-admin@tkgcl1:default)➜  ~ kubectl get svc -n kube-scan
NAME           TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
kube-scan-ui   LoadBalancer   <cluster-ip>   <external-ip>   80:31935/TCP   45s

That’s it, Kube-Scan is running, and we can access the dashboard to look at our risk assessment. As a side note, if you want to rescan your cluster, simply kill the Kube-Scan pod, and it will perform a new scan when it comes back up. Otherwise, it waits 24 hours before scanning again.
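A quick way to trigger that rescan is to delete the pod and let the Deployment recreate it. As a sketch, something like the following should work; note that the label selector `app=kube-scan` is an assumption based on the manifest naming, so verify it against your cluster first with `kubectl get pods -n kube-scan --show-labels`.

```shell
# Delete the Kube-Scan pod; the Deployment recreates it and a fresh scan runs on startup.
# The label app=kube-scan is an assumption -- verify with --show-labels first.
kubectl -n kube-scan delete pod -l app=kube-scan

# Watch the replacement pod come back up
kubectl -n kube-scan get pods -w
```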

[Screenshot: Kube-Scan dashboard showing the risk score per workload]

As you can see, I have a Deployment called yelb-ui with a risk score of 7. By the way, the scoring is done via KCCSS (the Kubernetes Common Configuration Scoring System), an open-source framework from Octarine that rates your workloads from 0 (no risk) to 10 (high risk).

Let’s look at the risk rating for my yelb-ui deployment in more detail. Multiple medium risks have been detected.

[Screenshot: risk overview for the yelb-ui deployment, listing multiple medium risks]

We can drill down into the individual risks to get more details. This particular risk states that the workload may have containers running as root, as neither a non-root user ID nor the runAsNonRoot setting is specified in the Pod's security context. Additionally, we see information about the potential impact: for example, this risk has a high Confidentiality impact but a low Availability impact. It also indicates Exploitability=Low, Attack Vector=Local, and Scope=Host.

[Screenshot: detail view of the root-user risk, including impact and exploitability ratings]
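This risk can be addressed by setting a non-root security context on the workload. A minimal sketch of the relevant Pod spec fragment is shown below; the user and group IDs (and the image tag) are assumptions for illustration, and your container image must actually support running as that UID.

```
# Pod spec fragment -- runAsUser/runAsGroup values are illustrative assumptions
spec:
  securityContext:
    runAsNonRoot: true   # kubelet refuses to start a container whose user resolves to UID 0
    runAsUser: 1000
    runAsGroup: 1000
  containers:
    - name: yelb-ui
      image: mreferre/yelb-ui:0.7   # image tag is an assumption
```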

Another good example is the risk of having no CPU and Memory limits configured. This obviously has a high impact on Availability as it could starve other Pods running on the same node. This risk has no impact on Integrity or Confidentiality.

[Screenshot: detail view of the missing CPU and memory limits risk, showing a high Availability impact]

This risk can be easily fixed by adding resource limits to the application manifest.
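As a sketch, a resources block for the yelb-ui container could look like the following; the CPU and memory values are assumptions for illustration, not recommendations, and should be sized to the actual workload.

```
# Container spec fragment -- the values below are illustrative assumptions
containers:
  - name: yelb-ui
    image: mreferre/yelb-ui:0.7   # image tag is an assumption
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 250m        # caps CPU usage so the Pod cannot starve its neighbours
        memory: 128Mi    # exceeding this gets the container OOM-killed
```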


Kube-Scan can easily be used to quickly assess the risk of the Kubernetes workloads running in your cluster. It gives a good first idea of the risks you are exposed to, and no data is collected or leaves the cluster. Nevertheless, this is just a first step into the security-related challenges that need to be addressed when running Kubernetes workloads in production and adopting DevSecOps practices. I can't wait to see how Octarine’s solutions will be integrated into the VMware Security and Tanzu portfolios. Have a look at this blog post to see how Octarine can be used in combination with Tanzu Service Mesh.

