Kubernetes is an open source system for automatically deploying, scaling, and managing containerized applications. Containerization is a lightweight alternative to full machine virtualization that involves encapsulating an application in a container with its own operating environment. Containerization has matured over the past few years and has led to greater adoption of Kubernetes across the tech industry.
Kubernetes greatly increases a user's ability to run tests and simulations. At a high level, it allows users to expand their technical resources beyond traditional limits; for example, users can simulate a 101-node blockchain cluster without owning 101 physical machines. This is ideal for quickly standing up staging environments for a blockchain cluster. It is also particularly useful for simulating blockchain networks with proof-of-stake (PoS) or delegated-proof-of-stake (DPoS) consensus mechanisms, which involve a fixed number (e.g., 101) of geographically distributed block-producing/verifying nodes.
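As a rough sketch of what this looks like in practice (the manifest below is illustrative, not the actual IoTeX deployment), scaling a simulated network is a matter of changing one replica count:

```yaml
# Illustrative StatefulSet excerpt: simulating a 101-delegate network
# only requires raising the replica count, not provisioning machines.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: iotex-node
spec:
  serviceName: iotex-node   # assumes a headless service of this name
  replicas: 101             # one Pod per simulated delegate node
  selector:
    matchLabels:
      app: iotex-node
  template:
    metadata:
      labels:
        app: iotex-node
    spec:
      containers:
      - name: iotex
        image: iotex/iotex-core:testnet   # hypothetical image tag
```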
Starting with our Testnet Beta “Epik” release, IoTeX has used Kubernetes to deploy and optimize our Testnet infrastructure. Overall, Kubernetes has greatly improved our operations — in this blog, we share our experience and some tips to run your own containerized applications and environments on Kubernetes.
We begin by defining two types of Kubernetes services:
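A minimal sketch of one common split for a blockchain cluster, and the one assumed here: an internal headless service for peer-to-peer discovery, and an external-facing service for client traffic (all names and ports below are hypothetical):

```yaml
# Assumed internal headless service: gives each node Pod a stable DNS
# record so peers can discover one another.
apiVersion: v1
kind: Service
metadata:
  name: iotex-node
spec:
  clusterIP: None        # headless: DNS resolves to individual Pod IPs
  selector:
    app: iotex-node
  ports:
  - name: p2p
    port: 4689           # hypothetical P2P port
---
# Assumed external-facing service: exposes the API to clients outside
# the cluster.
apiVersion: v1
kind: Service
metadata:
  name: iotex-api
spec:
  type: LoadBalancer
  selector:
    app: iotex-node
  ports:
  - name: api
    port: 14014          # hypothetical API port
```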
All node configurations, including the genesis block, are deployed through Kubernetes ConfigMaps. This allows us to redeploy a new cluster without waiting for a new Docker image to be built.
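For illustration, a ConfigMap along these lines can carry both the node configuration and the genesis definition (the keys below are made up for the sketch, not the actual iotex-core schema):

```yaml
# Illustrative ConfigMap: node config and genesis travel as config
# data, decoupled from the Docker image.
apiVersion: v1
kind: ConfigMap
metadata:
  name: iotex-node-config
data:
  config.yaml: |
    chainID: 1
    nodeType: delegate
  genesis.yaml: |
    timestamp: 1546329600
    numDelegates: 21
```

Mounting this ConfigMap as a volume means a configuration change only requires a redeploy of the Pods, not an image rebuild.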
Applications running in Kubernetes Pods are stateless by default, which means saved data from any application will not be available after a redeployment. To counteract this, we use the following setup. In our scenario, each IoTeX node utilizes the same copy of data (i.e., the distributed ledger), so it is not necessary to persist every node's data. Therefore, we mount a Kubernetes Persistent Volume to only one IoTeX node: in our case, the boot node. We also back up the persisted data to object storage services such as Amazon S3 and DigitalOcean Spaces. Since all nodes except the boot node have no saved data after a redeployment, they first need to download the backup before startup; we use an init container to achieve this. This process enables our cluster to restart from a crash or failure without losing previous data.
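A minimal sketch of a non-boot node Pod under this setup, assuming backups live in an S3 bucket (the bucket, paths, and images are hypothetical):

```yaml
# Hypothetical non-boot node Pod: an init container restores the latest
# chain-data backup from object storage before the node process starts.
apiVersion: v1
kind: Pod
metadata:
  name: iotex-node-1          # illustrative; in practice part of a StatefulSet
spec:
  initContainers:
  - name: restore-backup
    image: amazon/aws-cli     # assumed tooling image
    command: ["sh", "-c"]
    args:
    - >
      aws s3 cp s3://iotex-backup/chain-latest.tar.gz /data/ &&
      tar -xzf /data/chain-latest.tar.gz -C /data
    volumeMounts:
    - name: chain-data
      mountPath: /data
  containers:
  - name: iotex
    image: iotex/iotex-core:testnet   # hypothetical image tag
    volumeMounts:
    - name: chain-data
      mountPath: /data
  volumes:
  - name: chain-data
    emptyDir: {}              # ephemeral on non-boot nodes; the boot node
                              # mounts a PersistentVolumeClaim here instead
```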
It is not trivial to track the status of 21 nodes, and it only becomes more complex as the node count grows (101 nodes down the road). For better observability into our IoTeX cluster, we set up monitoring in Kubernetes, consisting of a logging stack, a metrics stack, and an alert manager.
For the logging stack, we use Fluentd + Elasticsearch + Kibana + Elasticsearch Curator. Fluentd sends logs from each node to the Elasticsearch client, the Elasticsearch master server indexes the logs, and the indexed logs are queryable through Kibana. Finally, we use Elasticsearch Curator to clean up outdated logs.
Reference: Log aggregation with ElasticSearch, Fluentd and Kibana stack on ARM64 Kubernetes cluster
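As one concrete piece of this stack, Curator's retention policy is itself a small YAML action file. A minimal sketch, assuming daily logstash-* indices from Fluentd and a seven-day retention window (both assumptions, not our exact policy):

```yaml
# Hypothetical Curator action file: delete log indices older than 7 days.
actions:
  1:
    action: delete_indices
    description: Delete log indices older than 7 days
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-        # assumed index prefix written by Fluentd
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 7
```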
For the metrics stack, we use Prometheus + Prometheus Operator + Grafana. The CoreOS Prometheus Operator provides an easy way to configure Prometheus running in a Kubernetes cluster, and we can hook up each IoTeX node's Prometheus client to the Prometheus server by simply exposing a metrics service.
Reference: CoreOS: Prometheus Operator
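With the Operator in place, exposing a metrics service boils down to creating a ServiceMonitor that selects it. A minimal sketch, with assumed label and port names:

```yaml
# Hypothetical ServiceMonitor: tells the Operator-managed Prometheus to
# scrape the IoTeX nodes' metrics endpoints.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: iotex-node
  labels:
    release: prometheus     # assumed: must match the serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: iotex-node       # assumed label on the metrics Service
  endpoints:
  - port: metrics           # assumed port name on the Service
    interval: 30s
```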
We are also working on setting up alerts with Prometheus Alertmanager to notify our on-call engineer when metrics become abnormal.
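A sketch of what such an alert could look like as a PrometheusRule, with an assumed job label and an assumed five-minute threshold:

```yaml
# Hypothetical PrometheusRule: fire when a node's metrics endpoint has
# been unreachable for five minutes; Alertmanager routes it to on-call.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: iotex-node-alerts
spec:
  groups:
  - name: iotex-node.rules
    rules:
    - alert: IoTeXNodeDown
      expr: up{job="iotex-node"} == 0   # assumed job label
      for: 5m
      labels:
        severity: critical
      annotations:
        summary: "IoTeX node {{ $labels.instance }} is unreachable"
```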
Using YAML configurations to deploy different types of applications on Kubernetes is fairly straightforward for a single environment. However, with multiple environments (e.g., testing, staging, production) and different node tweaks/configurations (e.g., 21 vs. 101 delegates), maintaining separate Kubernetes YAML files for the same application brings significant overhead and redundancy. To solve this, we use Helm.
Helm is a package manager for Kubernetes applications. It allows us to package our IoTeX cluster as a versioned chart with default values. With Helm, we no longer manage duplicate Kubernetes YAML files; instead, we manage a much smaller set of per-environment, per-tweak configurations that override the defaults. It also reduces starting a fresh IoTeX cluster to one single command:
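For example (the release name, chart path, and values file here are illustrative, not our published chart):

```sh
# Hypothetical chart path and values file; the per-environment values
# file overrides the chart's defaults.
helm install iotex-testnet ./iotex-chart -f values/staging.yaml
```

Each environment then keeps only a small values file (e.g., the delegate count and resource sizes), while everything else falls back to the chart's defaults.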
Helm also makes it easy to publish a chart. In the future, we will publish our IoTeX Helm chart to the community, so you can set up an IoTeX cluster within minutes.
With the power of Kubernetes and other operational tools, IoTeX's development team not only saves time on cluster operations, but also diagnoses issues and iterates faster than before. We look forward to sharing more technical perspectives in the future. Feel free to reach out with any questions at [email protected].
IoTeX is the world's first privacy-centric blockchain platform that is fast, flexible, and Internet of Things (IoT) friendly. IoTeX's global team comprises Ph.D.s in Cryptography, Distributed Systems, and Machine Learning, top-tier engineers, and experienced ecosystem builders. Designed and optimized for IoT, IoTeX uses state-of-the-art privacy, consensus, and subchain innovations to capture the full potential of IoT. By enabling trusted data, interoperability, and M2M automation, IoTeX connects the physical and digital worlds and brings trusted machine economies to the masses.
Stay connected with us!
Website: https://iotex.io/
Twitter: https://twitter.com/iotex_io
Telegram Announcement Channel: https://t.me/iotexchannel
Telegram Group: https://t.me/IoTeXGroup
Medium: https://medium.com/@iotex
Reddit: https://www.reddit.com/r/IoTeX/
Join us: https://iotex.io/careers