Kubestack - GitOps for K8s using Terraform
Remember when DevOps was newly trending? The tech world has put a great deal of effort into closing the gap between development and operations, the core idea being to make developers and operations experts work in synergy.
Organizations are reaping the benefits of DevOps in terms of team collaboration, smarter ways of working, and accelerated time to resolution - to name a few.
With the cloud-native movement, things got even better. Containerization and cluster management technologies have proven their worth for a long time; they have been so beneficial that cloud providers have created "managed services" around them.
Being cloud-native is all about the degree of cloud adoption in terms of using the latest technology trends, practices, and frameworks. In this post, we explore Kubestack - a GitOps framework built for Kubernetes automation using Terraform.
Kubestack helps reduce the gap between infrastructure development and application development by introducing GitOps for Kubernetes clusters.
For applications deployed on Kubernetes clusters, Kubestack provides the framework to create and manage both the clusters themselves and the infrastructure they require, with a Git-based flow.
Kubestack is an open-source framework, and installing the kbst CLI is simply a matter of downloading the binary onto your PATH. Scaffolding a kbst repository is then a matter of running a single command in your desired directory.
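As a rough illustration, the install could look like the commands below. The download URL and asset name here are assumptions, not the documented ones - check the official Kubestack releases page for the correct file for your OS and architecture:

```sh
# Download the kbst CLI binary and put it on the PATH.
# NOTE: the release URL and asset name are illustrative assumptions;
# consult the Kubestack releases page for the real artifact names.
curl -LO https://github.com/kbst/kbst/releases/latest/download/kbst_linux_amd64.zip
unzip kbst_linux_amd64.zip
sudo mv kbst /usr/local/bin/kbst

# Verify the binary is found on the PATH
kbst --version
```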
The idea is to have a local development environment, a staging environment, and a production environment for implementing K8s cluster changes - much like we do for application development.
However, Kubestack follows a different default naming convention: loc for local, ops for staging, and apps for production.
Kubestack makes use of Kubernetes in Docker (KinD) to implement the local environment. The Kubernetes cluster environment is replicated on the local system so that we can test our infrastructure changes against it.
This lets us simulate the configuration locally before applying it to the cloud environment. Any configuration changes made to the locally deployed K8s cluster are applied dynamically; KinD is what makes local development of K8s cluster infrastructure possible.
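Kubestack drives KinD for you, but to see what happens under the hood, a plain KinD cluster can be created directly - a minimal sketch, assuming the kind CLI, kubectl, and Docker are installed (the cluster name is arbitrary):

```sh
# Create a throwaway local cluster with KinD
# (K8s nodes run as Docker containers)
kind create cluster --name kbst-demo

# Point kubectl at it and verify the control plane is reachable
kubectl cluster-info --context kind-kbst-demo

# Tear it down when done
kind delete cluster --name kbst-demo
```

The Kubestack framework wraps this step so the local cluster mirrors the configuration of the cloud environments.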
To apply the changes to the cloud provider, Kubestack provides its own container image, which bundles all the required dependencies such as the cloud provider CLIs, Terraform, and Kustomize.
We can simply run this container locally and "exec" into it to work with our "ops" and "apps" environments, which is what actually deploys the changes to the given cloud provider. While following the Kubestack tutorial here, I worked with AWS.
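Conceptually, working inside the bundled toolchain container looks like the sketch below. The image name, tag, and mount path are placeholders of mine, not the documented values - the scaffolded repository pins the actual image to use:

```sh
# Run the Kubestack toolchain container with the repository mounted.
# NOTE: image name/tag and paths are illustrative placeholders.
docker run --rm -ti \
  -v "$(pwd)":/infra \
  -w /infra \
  kubestack/framework:latest \
  bash

# Inside the container, the cloud provider CLIs, Terraform,
# and Kustomize are all pre-installed, so every team member
# works with the same tool versions.
```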
In the initial few steps after logging (exec) into the container, we set up the Terraform backend to maintain state, initialize Terraform, and create workspaces corresponding to the ops and apps environments. Custom environments are also possible.
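The workspace setup uses standard Terraform commands - roughly the following, assuming the backend configuration is already in place in the scaffolded repository:

```sh
# Initialize Terraform: downloads providers and wires up the state backend
terraform init

# Create one workspace per environment; each keeps its own state
terraform workspace new ops
terraform workspace new apps

# Switch between environments and confirm
terraform workspace select ops
terraform workspace list
```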
Kubestack mainly uses two types of Terraform providers:
Cluster provider - used to deploy a K8s cluster on the respective cloud provider: Azure AKS, Amazon EKS, or Google GKE.
Cluster service provider - a Kubestack-maintained Kustomization provider used to configure and deploy applications on the K8s cluster.
By default, configurations required by the various environments (loc, ops, apps) are inherited: the configuration required by apps is automatically inherited by the ops and loc environments. It is also easy to override these configurations per environment - for example, for better resource utilization in sub-production environments.
Once the K8s environments are set up to your expectations, it is time to automate them using GitOps. Kubestack code is maintained on GitHub, where we can leverage GitHub Actions to implement the CI/CD pipeline.
Here you can specify the build and test stages much as you would for application development. In an out-of-the-box implementation, once local configuration changes are committed and pushed to the GitHub repository, the pipeline is triggered.
At first, the changes are "plan"-ed and "apply"-ed to the ops workspace for validation. After the changes are validated successfully, a release is created and deployed to the apps workspace.
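In GitOps terms, the trigger is nothing more than ordinary Git operations. A sketch of the flow - the branch name and commit message are hypothetical, and the exact promotion mechanism depends on how the pipeline is configured:

```sh
# A hypothetical infrastructure change: commit and push to trigger CI
git checkout -b increase-node-pool
git add .
git commit -m "Increase node pool size"
git push origin increase-node-pool

# Merging to the default branch runs plan/apply against the ops
# workspace; creating a release (e.g. a Git tag, depending on the
# pipeline configuration) then promotes the change to apps.
```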
Working through this tutorial gives you a great introduction to Kubestack. It helps us understand how a GitOps flow can be implemented for K8s clusters using Terraform. Highly recommended!
In the following sections, I note down some pros and cons that I observed while using Kubestack.
Pros:
It achieves its goal of giving operations teams a way to collaborate and to continuously integrate, deliver, and deploy changes to K8s clusters.
Implementing automation at the infrastructure level is a real need today, and Kubestack makes this quite easy for K8s clusters.
As a developer and operations engineer, I found it very interesting to use this framework to wrap most of the routine tasks into a CI/CD pipeline.
Personally, I cannot think of a better way to leverage the capabilities of Terraform and KinD to automate K8s infrastructure.
The Kustomization provider’s implementation is highly intuitive - strictly from a developer’s perspective.
Personally, I found the inheritance model to be of great use to manage the desired configuration of sub-production environments.
Cons:
Kubestack is designed to work with K8s clusters; it is very much purpose-built. Making it work with other cloud services would still require a separate development effort.
Even though the end-to-end flow works great, it can still take some time to wrap your head around the framework and get used to it. The documentation does its job, but it could be better: the tutorial introduces the core concepts, yet adapting them to a specific project’s requirements means digging into particular topics on your own.
I used an M1 MacBook Pro to try the tutorial. Unfortunately, its ARM processor architecture is not directly compatible with the current binary, so I ended up using a VM on AWS.
This post is intended to give a very high-level overview and quick review of Kubestack. I recommend you try the tutorial yourself, and I look forward to hearing back from you.