
Navigating the Cloud: Creating Kubernetes Clusters Across Providers

by dejanualex, August 7th, 2023

Too Long; Didn't Read

Unlock the secrets of creating Kubernetes clusters in various cloud providers and discover techniques for managing Kubernetes across cloud environments.



This article is focused on managed Kubernetes. A managed Kubernetes service can be beneficial for organizations that want to focus on applications rather than dealing with installing, operating, and maintaining a Kubernetes cluster.


Firstly, what exactly is a managed Kubernetes service?


It’s a cloud computing offering in which the cloud provider takes responsibility for managing the control plane (kube-apiserver, etcd, kube-scheduler, controller-manager).


Even more, the cloud provider takes on the burden of managing and ensuring the reliability, availability, and upgrades of the Kubernetes cluster. Some of the major Kubernetes-as-a-Service offerings are:

  • Amazon Elastic Kubernetes Service (EKS)
  • Azure Kubernetes Service (AKS)
  • Google Kubernetes Engine (GKE)



As a general prerequisite in terms of tooling, you’ll need kubectl and the respective cloud provider CLI (covered for each provider below).


There are three main ways to interact with the cloud:


  • Cloud console (UI approach)
  • Custom cloud client libraries or SDK (programmatic approach)
  • CLI (command line approach)


Also, each cloud provider offers the possibility of an Infrastructure as Code approach via various solutions: Azure ARM templates, AWS CloudFormation, and Google Cloud Deployment Manager.


I will opt for a CLI approach due to its scripting capabilities. For instance, common choices like the cluster name and the number of nodes can be set as environment variables.


export CLUSTER="democluster"
export NODES=2


AWS

  • You’re going to need a user with the right policies for services like EKS, CloudFormation, EC2, and IAM, plus an access key for that user (guide here).
  • aws - CLI for interacting with AWS services (installation guide here)
  • eksctl - CLI for creating and managing clusters on AWS (installation guide here)






First, we need to configure the aws CLI using the access key generated for our IAM user. We can use the aws configure command to do this. Run it without arguments to get prompted for the configuration values (AWS Access Key ID, AWS Secret Access Key, and default AWS region).


# credentials are stored in ~/.aws/credentials
aws configure
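
If you prefer a non-interactive setup (handy for wrapper scripts), the AWS CLI also reads the standard AWS environment variables; a minimal sketch with placeholder values:

# alternative to aws configure: export the standard AWS environment variables
export AWS_ACCESS_KEY_ID="<your_access_key_id>"
export AWS_SECRET_ACCESS_KEY="<your_secret_access_key>"
export AWS_DEFAULT_REGION="eu-west-1"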


IAM and the AWS CLI: for eksctl you will need to have AWS API credentials configured. Amazon EKS uses the IAM service to provide authentication to your Kubernetes cluster through the AWS IAM authenticator for Kubernetes. Verify that you’re authenticated by running:

aws iam get-user



Creating Kubernetes Cluster:

EKS clusters run in a VPC, so you need an Amazon VPC with public and private subnets. The VPC must have enough IP addresses available for the cluster, its nodes, and any other Kubernetes resources you want to create, and it must have DNS hostnames and DNS resolution support (otherwise nodes can’t register with the cluster). You can’t change which subnets you use after cluster creation.


The beauty of it is that eksctl does all the heavy lifting for us, and you can customize your Kubernetes cluster as needed (number of nodes, region, node size). To allow SSH access to the nodes, eksctl imports the SSH public key from ~/.ssh/id_rsa.pub by default, but you can use another SSH public key by passing its path via the --ssh-public-key flag.

 
eksctl create cluster --name=$CLUSTER --nodes=$NODES --region=eu-west-1 --ssh-public-key=~/.ssh/eks_key.pub


We’re going to create a Kubernetes cluster with:

  • 2 nodes with the default instance type (currently m5.large)
  • default Kubernetes version (currently 1.25)


Behind the scenes, eksctl uses CloudFormation; in this case it creates two CloudFormation stacks, and you can use the CloudFormation console to check the status of the deployment.
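
If you’d rather keep the cluster definition in a file, eksctl also accepts a declarative ClusterConfig manifest through the -f flag; below is a minimal sketch that mirrors the command above (field names follow the eksctl.io/v1alpha5 schema, and the node group name ng-1 is just an illustrative choice):

# sketch: write a declarative cluster spec and create the cluster from it
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: democluster
  region: eu-west-1
nodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 2
    ssh:
      allow: true
      publicKeyPath: ~/.ssh/eks_key.pub
EOF

eksctl create cluster -f cluster.yaml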


EKS creation


After the cluster has been created, the appropriate Kubernetes configuration is added to your kubeconfig file; to verify, run:

eksctl get clusters
kubectl get no -owide
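
If the kubeconfig entry ever needs to be regenerated (for example, on another machine), the AWS CLI can write it for you; a quick sketch assuming the same cluster and region:

aws eks update-kubeconfig --name $CLUSTER --region eu-west-1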


You can clean up and delete the cluster using:

eksctl delete cluster --name <cluster_name>



Azure

  • az — CLI for managing Azure resources and services.

  • Before using any Azure CLI commands you need to authenticate; to initiate the authorization code flow, run:

    az login
    


  • If needed, you can provision a new resource group in the subscription (a quick sketch for checking the active subscription follows this list).

    # list resource groups in the current subscription
    az group list -o table
    
    # create new resource group
    az group create --name <resource_group_name> --location <location>
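
If your account has access to several subscriptions, it’s worth confirming which one the CLI is currently pointed at before creating resources; a quick sketch (the subscription id is a placeholder):

# show the subscription the CLI is currently using
az account show -o table

# switch subscription if needed
az account set --subscription <subscription_id>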
    


Creating Kubernetes Cluster:

The workhorse for managing Azure Kubernetes Service (AKS) is az aks. We’re going to create a Kubernetes cluster with:


  • two worker nodes with the default VM size (currently Standard_DS2_v2)

  • default Kubernetes version (currently 1.25.11)

  • default SKU load balancer (Standard)

  • default VM set type (VirtualMachineScaleSets)


az aks create -g <resource_group_name> -n $CLUSTER --node-count $NODES --generate-ssh-keys --location westeurope
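
For instance, a hedged sketch that also pins the node size and Kubernetes version (the VM size and the <kubernetes_version> placeholder are illustrative; check az aks create --help and the versions available in your region):

# sketch: same cluster, but with an explicit node size and Kubernetes version
az aks create -g <resource_group_name> -n $CLUSTER --node-count $NODES \
    --node-vm-size Standard_D2s_v3 --kubernetes-version <kubernetes_version> \
    --generate-ssh-keys --location westeurope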


As shown in the sketch above, the az aks create command is highly extensible via flags, e.g. --node-vm-size to change the node size. After the cluster has been created (the command also generated the SSH keys), the next step is to download the credentials and merge them into your kubeconfig:


az aks get-credentials --resource-group <resource_group_name> --name $CLUSTER


An important aspect is that after the cluster is created, another resource group is created in the same subscription, by default named MC_<resource_group_name>_<cluster_name>_<region>, which contains the virtual machine scale set for the node pool, the load balancer, and other infrastructure resources.
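
To confirm the exact name of that node resource group, a quick sketch that queries it from the cluster object:

az aks show -g <resource_group_name> -n $CLUSTER --query nodeResourceGroup -o tsv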


az aks list -o table
kubectl get no -owide


AKS cluster



Clean up and delete the cluster using the following:

# delete cluster
az aks delete --name $CLUSTER --resource-group <resource_group_name>

# delete resource group
az group delete --name <resource_group_name>



Google

  • gcloud — CLI for interacting with Google Cloud services.
  • enable the Kubernetes Engine API (guide here): gcloud services enable container.googleapis.com


To authorize gcloud to access Google Cloud and to set up or update the configuration, use the following commands, which launch an interactive getting-started workflow and obtain access credentials for your user account:

gcloud init
gcloud auth login
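
gcloud init walks you through picking a project and default zone interactively; if you prefer to set them directly (handy in scripts), a short sketch with a placeholder project id:

gcloud config set project <project_id>
gcloud config set compute/zone europe-central2-a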


Creating Kubernetes Cluster:

We’re going to use gcloud container clusters create to create the Kubernetes cluster; as before, the command is highly extensible, providing various flags for cluster creation. We’re going to create a cluster with:

  • 2 nodes with the n1-standard-1 machine type
  • default Kubernetes version (currently 1.27)


gcloud container clusters create $CLUSTER \
                --num-nodes $NODES \
                --machine-type n1-standard-1 \
                --zone europe-central2-a


Update kubeconfig with the credentials:

gcloud container clusters get-credentials $CLUSTER --zone europe-central2-a


Now you can list the Kubernetes clusters in the desired zones or regions.

gcloud container clusters list --zone europe-central2-a
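
You can also grow or shrink the default node pool later on; a sketch that resizes it to three nodes (the command prompts for confirmation before acting):

gcloud container clusters resize $CLUSTER --num-nodes 3 --zone europe-central2-a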


And of course, you can clean up and delete the cluster:

gcloud container clusters delete $CLUSTER --zone europe-central2-a



Conclusions

The purpose of this article was twofold: to demonstrate the ease of creating Kubernetes clusters and to offer an overview of the tooling available for interacting with a managed Kubernetes service.


The general approach is the same for all three cloud providers, and I would say using the CLI hits the sweet spot between imperative and declarative because it enables you to create wrapper scripts in which you can combine numerous commands and create various customizations.
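
As a small illustration, here is a hedged sketch of such a wrapper script built around the commands from this article; the provider argument, the demo-rg resource group, and the default values are my own illustrative choices, not a standard:

#!/usr/bin/env bash
# create-cluster.sh <aws|azure|gcp> -- thin wrapper around the provider CLIs
set -euo pipefail

PROVIDER="${1:?usage: create-cluster.sh <aws|azure|gcp>}"
CLUSTER="${CLUSTER:-democluster}"
NODES="${NODES:-2}"

case "$PROVIDER" in
  aws)   eksctl create cluster --name "$CLUSTER" --nodes "$NODES" --region eu-west-1 ;;
  azure) az aks create -g demo-rg -n "$CLUSTER" --node-count "$NODES" --generate-ssh-keys --location westeurope ;;
  gcp)   gcloud container clusters create "$CLUSTER" --num-nodes "$NODES" --zone europe-central2-a ;;
  *)     echo "unknown provider: $PROVIDER" >&2; exit 1 ;;
esac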


Last but not least, for production environments I strongly suggest using an Infrastructure as Code approach; for example, a common IaC solution for all three cloud providers is Terraform by HashiCorp.

