Checks for standard vulnerabilities in your cluster.
multipass launch -n k8s-tester
sudo apt update && sudo apt install -y docker.io
The number one piece of advice for any production Kubernetes cluster is to use GitOps. The basic idea is that a single repository declares all of the resources that live on the cluster. Whenever someone wants to deploy a new microservice, they add their YAML manifests to this master repository.
A proper code-review workflow lets you be reasonably sure that only what you expect will run on the cluster. This is not only wonderful for security, but also reduces a lot of friction around management and operations.
You can also add security validation of your YAML manifests to your CI pipeline with kubesec.
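As a sketch of what such a CI step could look like, assuming you run the official kubesec container image against a manifest named deployment.yaml (both names are illustrative):

```yaml
# Hypothetical GitHub-Actions-style CI step; the image and file names are placeholders.
- name: Scan manifests with kubesec
  run: |
    docker run --rm -v "$PWD":/work kubesec/kubesec:latest scan /work/deployment.yaml
```

The scan emits a JSON report with a score and a list of findings, which your CI can inspect to fail the build on insecure manifests.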
You can also scan your container images for vulnerabilities, for example with Clair.
Unlike with virtual machines, the isolation of containers from the host is harder to reason about, and there are still ways for containers to escalate privileges. A process running as root within a container is, by default, actually running as root on the host. We should harden our containers to make it less likely that they can break free.
A PodSecurityPolicy is a Kubernetes resource that acts like a contract for the containers running under it. The containers have to match the security requirements of the policy (e.g. no root access, no host network). A good base policy can be found here. To enforce restrictions from the application's perspective as well, you will need something like AppArmor (using bane) or docker-slim.
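A minimal restrictive policy could look roughly like this (the name is illustrative; consult a vetted base policy before using one in production):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted   # illustrative name
spec:
  privileged: false                  # no privileged containers
  hostNetwork: false                 # no access to the host's network
  hostPID: false                     # no access to the host's process table
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot           # containers may not run as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                           # only allow non-host volume types
    - configMap
    - secret
    - emptyDir
```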
You should add kubesec to your CI; it will warn you about insecure configuration.
For secret management there are two options: you either store them in Kubernetes Secret objects, or you use a centralised secret manager (e.g. HashiCorp Vault).
When you store secrets in Kubernetes Secrets, they are only base64-encoded and, by default, stored unencrypted in etcd. This is not that big of a deal from a security perspective, as we trust the execution environment anyway, and we should use RBAC to prevent user accounts from accessing secrets. But for GitOps, this is terrible. Bitnami SealedSecrets lets you store secrets in encrypted form, where only the cluster has the private key to decrypt them.
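A sketch of the workflow, assuming the kubeseal CLI is installed and the SealedSecrets controller is running in your cluster (the secret name and value are placeholders):

```shell
# Generate a regular Secret manifest locally without applying it.
kubectl create secret generic db-creds \
  --from-literal=password=s3cr3t \
  --dry-run=client -o yaml > secret.yaml

# Encrypt it with the cluster's public key; only the in-cluster
# controller holds the private key needed to decrypt it.
kubeseal --format yaml < secret.yaml > sealed-secret.yaml

# sealed-secret.yaml is now safe to commit to your GitOps repository.
```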
Using SealedSecrets is a great way to manage your day-to-day secrets in a GitOps environment, but you may want to go a step further. What if we could simply declare which secrets we need, and a controller would make sure to create and provide them? You could, for example, have a resource that requests an AWS token, and the token would be created automatically. This way, you no longer have to trust the user who encrypts your secrets. This is where Vault comes in: Vault will automatically generate, manage, and rotate your secrets for dozens of services.
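For example, with Vault's Kubernetes sidecar injector, a pod can request its secrets declaratively through annotations. A sketch, assuming Vault's Kubernetes auth is configured with a role and secret path as named below (all names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                       # illustrative name
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "my-app"
    # Inject dynamically generated database credentials into the pod.
    vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/my-app"
spec:
  containers:
    - name: app
      image: my-app:latest
```

The injected Vault Agent sidecar fetches and renews the credentials, so the application never handles long-lived secrets.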
Although we ourselves are not great at this, reactive security is a must for any large organization. When something goes wrong, you should be able to discover it, audit what happened, and find out who caused it.
A good start for reactive security is to set up a script that continuously tests your cluster for unexpected changes or security issues.
As a very simple start, this tester could include kube-hunter and port scans of your master and kubelets.
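Such a tester could be sketched as follows, assuming docker and nmap are installed on the testing machine and MASTER_IP points at your control-plane node (the address is a placeholder):

```shell
MASTER_IP=10.0.0.1   # placeholder: your control-plane address

# kube-hunter probes the cluster for known weaknesses from the outside.
docker run --rm aquasec/kube-hunter --remote "$MASTER_IP"

# Verify that only the ports you expect are reachable
# (6443: API server, 10250: kubelet).
nmap -p 6443,10250 "$MASTER_IP"
```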
Want to take this a step further? You can lock down your GitOps even more: when someone changes something unexpectedly (e.g. alters the branch protection), send a huge warning to the CTO. When a release binary suddenly changes, send a huge warning to the CTO. When your binaries are built deterministically, you can rebuild them and verify that nothing was injected. Taking this all a step further, you could automatically lock down the CD pipeline when any of these things happen.
Log who does what and when on your cluster in an append-only logging service. This should be as isolated from your cluster as possible! We do not want to give an intruder the ability to cover up their trail.
Please look at the official Kubernetes documentation for Kubernetes event auditing.
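An audit policy passed to the API server could look roughly like this minimal sketch (see the official documentation for the full rule syntax):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record full request and response bodies for changes to Secrets.
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["secrets"]
  # Log request metadata (who, what, when) for everything else.
  - level: Metadata
```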
Did a shell command execute inside a pod? That is likely not a good sign! It would be nice if we could detect these sorts of red flags that indicate part of the system was breached.
Falco is a great open-source threat-detection engine for Kubernetes, and we highly recommend it.
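A Falco rule catching the shell-in-a-pod red flag could be sketched like this (Falco ships a similar rule, "Terminal shell in container", out of the box; the rule below is illustrative):

```yaml
- rule: Shell spawned in container
  desc: Detect a shell being spawned inside a container
  condition: container.id != host and proc.name in (bash, sh, zsh)
  output: "Shell in container (user=%user.name container=%container.name cmd=%proc.cmdline)"
  priority: WARNING
```

Falco alerts can then be forwarded to your logging or paging infrastructure so a breached pod is noticed immediately.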