
The Serverless CI - Running Jenkins Slaves on AWS EKS Fargate

by Anadi Misra, September 25th, 2023

Too Long; Didn't Read

Learn how to configure Jenkins to run slaves on AWS Fargate clusters, enhancing your cloud-native CI/CD workflows. This comprehensive guide covers prerequisites, Kubernetes connection setup, and usage in Jenkins pipelines, ensuring efficient and secure deployments.

Jenkins requires no introduction, as it stands as the undisputed king of Continuous Integration. Over the years, it has adapted to all the technological disruptions in the industry, including Kubernetes.


This blog post delves into an intriguing topic: how to run on-demand slaves in a remote AWS Fargate cluster from a Jenkins master instance. For those wondering why such a capability is necessary, the following sections explain not only the reasons but also the method and its advantages.


Everything is remote!

Imagine this: you're running cloud-native services on AWS EKS, and as the diligent engineer that you are, you establish two distinct clusters—one for production and another for all non-production purposes. You might be wondering why you would undertake such an approach.


Here's a hint: consider the blast radius. If you prioritize the security of your cloud-native services as fervently as we do at NimbleWork, this decision makes sense. The dev cluster runs, among other things, all our Continuous Integration and Delivery tools. Speaking of Continuous Delivery, we run nightly pipelines that test the services for performance, security vulnerabilities, and regression before deploying them to the production cluster, where we adhere to a blue-green deployment model. This requires the Jenkins master, which runs on the dev EKS cluster, to launch slaves on the production EKS cluster for various deployment, management, and general housekeeping tasks.


Having outlined the reasons for this setup, let's delve into the details of how it is accomplished.

Prerequisites

This post assumes you have a running AWS EKS cluster, either on Fargate or worker nodes. You can refer to this article for creating a Fargate cluster, or this one for worker nodes, if you don't have them handy. The next step is to install Jenkins on Kubernetes; refer to this page in the official documentation. Since we're configuring Jenkins slaves to run on AWS Fargate, also install the Kubernetes plugin in Jenkins.
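If you run Jenkins from the official container image, the bundled jenkins-plugin-cli is one way to install the plugin non-interactively; the plugin ID is kubernetes:

jenkins-plugin-cli --plugins kubernetes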


Serverless Jenkins Slaves

Configuring Kubernetes Connection in Jenkins Master

A Kubernetes cluster can be configured from the Manage Nodes and Clouds option on the Manage Jenkins page. Navigate to `Manage Jenkins > Clouds > New Cloud` to open the cloud configuration page.


Cloud configuration page in Jenkins LTS



Add a name for the cloud, choose Kubernetes in the Type section, and click on "Create" to create the cloud configuration.


Create a Kubernetes cloud in Jenkins


Expand the Kubernetes Cloud details dropdown; this is where we will configure the Jenkins master's access to the AWS Fargate cluster.


Kubernetes URL

Here we add the public API server URL of the AWS Fargate cluster. Log in to the AWS Management Console and select Elastic Kubernetes Service. Click the Clusters link to list all clusters in your account, then click the name of the cluster you want to connect Jenkins to in order to reach its overview page. Copy the API server URL from the highlighted section in the image below and paste it into the Kubernetes URL field on the cloud configuration page in Jenkins.


Cluster Info to get details of the API Server


Alternatively, you can get the same information from the command line using kubectl.


Point to your EKS cluster:

# Credentials for the AWS account that owns the cluster
export AWS_ACCESS_KEY_ID="KEY_ID_HERE"
export AWS_SECRET_ACCESS_KEY="ACCESS_KEY_HERE"
export AWS_SESSION_TOKEN="SESSION_TOKEN_HERE"

# Write the cluster's endpoint and credentials into ~/.kube/config
aws eks update-kubeconfig --region us-east-1 --name mycluster
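
If you want to confirm which identity those credentials resolve to before touching the cluster, STS can tell you:

# Prints the account, user ID, and ARN behind the active credentials
aws sts get-caller-identity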


Then run kubectl as follows:

kubectl cluster-info
Kubernetes control plane is running at https://XXXXXXXXXXXXX.gr7.us-east-1.eks.amazonaws.com
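
If you need just the URL, for example in a script, you can extract it from the kubeconfig for the current context:

kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'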


Kubernetes server certificate key


We’ll be using a Kubernetes Service Account to authenticate to the API Server. Perform the following steps on the AWS EKS cluster to enable Jenkins access.


  1. Create a namespace jenkins-jobs associated with the Fargate profile
  2. Create a service account named jenkins in the namespace
  3. Create a secret named jenkins-token in the namespace, holding a token for the service account
  4. Create a RoleBinding granting the service account the admin ClusterRole within the jenkins-jobs namespace


You can achieve this in multiple ways, via the Management Console or the AWS CLI; at NimbleWork we like to stick to IaC, so here's a sample Terraform snippet for the same:

resource "kubernetes_service_account" "jenkins-service-account" {
  metadata {
    name      = "jenkins"
    namespace = "jenkins-jobs"
    labels = {
      "app.kubernetes.io/name" = "jenkins"
    }
  }
  secret {
    name = "jenkins-token"
  }
  depends_on = [module.fargate-profile]
}

resource "kubernetes_secret" "jenkins-token" {
  metadata {
    name = "jenkins-token"
    namespace = "jenkins-jobs"
    labels = {
      "app.kubernetes.io/name" = "jenkins"
    }
    annotations = {
      "kubernetes.io/service-account.name" = "jenkins"
    }
  }
  type = "kubernetes.io/service-account-token"
}

resource "kubernetes_role_binding" "jenkins-role-binding" {
  metadata {
    name      = "jenkins-role-binding"
    namespace = "jenkins-jobs"
    labels = {
      "name" = "jenkins-role-binding"
    }
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "admin"
  }

  subject {
    kind      = "ServiceAccount"
    name      = "jenkins"
    namespace = "jenkins-jobs"
  }

  depends_on = [kubernetes_service_account.jenkins-service-account]
}
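
The snippet assumes the jenkins-jobs namespace from step 1 already exists. If you manage it in Terraform as well, a minimal sketch looks like this, reusing the same Fargate profile module referenced above:

resource "kubernetes_namespace" "jenkins-jobs" {
  metadata {
    name = "jenkins-jobs"
    labels = {
      "app.kubernetes.io/name" = "jenkins"
    }
  }
  depends_on = [module.fargate-profile]
}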


Let's retrieve the certificate key now. Run the following command to get the service account certificate and token:

% kubectl get secret jenkins-token --namespace=jenkins-jobs -o yaml


The output contains ca.crt and token:

apiVersion: v1
data:
  ca.crt: XXXXXXXXXX
  namespace: XXXXXXXXXXXXXX
  token: XXXXXXXXXX
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: jenkins
    kubernetes.io/service-account.uid: XXXXXXXXXXXXXX
  creationTimestamp: "2023-05-20T17:25:16Z"
  labels:
    app.kubernetes.io/name: jenkins
  name: jenkins-token
  namespace: jenkins-jobs
  resourceVersion: "3388"
  uid: XXXXXXXXXXXXXX
type: kubernetes.io/service-account-token

The ca.crt value is base64 encoded; decode it with the base64 -d command and paste the resulting value into the Kubernetes server certificate key field.
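
You can also extract and decode both values in one step with jsonpath (note the escaped dot in the ca.crt key):

# Decoded CA certificate for the Kubernetes server certificate key field
kubectl get secret jenkins-token --namespace=jenkins-jobs -o jsonpath='{.data.ca\.crt}' | base64 -d

# Decoded bearer token for the Jenkins credential created below
kubectl get secret jenkins-token --namespace=jenkins-jobs -o jsonpath='{.data.token}' | base64 -d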


Kubernetes Namespace

Enter the value jenkins-jobs here.


Credentials

Click on Add > Jenkins and choose Secret text in the Kind dropdown of the credentials provider pop-up, then add the base64-decoded value of the token from the kubectl get secret output above to create the credential.


Adding Service Account token to Jenkins


Click the Test Connection button. When the connection succeeds, you'll see a message like:

Connected to Kubernetes v1.28-eks-XXXXXXX


Pod Template and Retention

Configure the Pod Template and Retention settings with the values shown below.

Pod Settings

Click Save to finish adding the cloud.


Using the Kubernetes Cloud in Pipelines

Now that we have the Jenkins configuration in place, let's look at defining builds that run on Fargate pods as slaves. We're using the declarative pipeline syntax here.


The pipeline's Groovy DSL should reference the configured cloud name as follows:

  agent {
    kubernetes {
      cloud 'hackernoonkube'
      yamlFile 'builder.yaml'
    }
  }
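
Put together, a minimal declarative pipeline using this cloud could look like the sketch below; the stage, container name, and Maven command are illustrative assumptions:

pipeline {
  agent {
    kubernetes {
      cloud 'hackernoonkube'    // the cloud name configured above
      yamlFile 'builder.yaml'   // pod template checked into the repository
    }
  }
  stages {
    stage('Build') {
      steps {
        // 'maven' must match a container defined in builder.yaml
        container('maven') {
          sh 'mvn -B clean verify'
        }
      }
    }
  }
}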


Add the labels defined in the Pod Template section above to run the job in a Jenkins slave running as a Fargate pod.
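
For reference, a minimal builder.yaml could look like the following sketch; the labels, image, and resource requests are assumptions to adapt to your own Pod Template and Fargate profile. The Kubernetes plugin injects the jnlp agent container automatically, so the template only needs to declare your build containers:

# builder.yaml - sample pod template for a Jenkins slave on Fargate
apiVersion: v1
kind: Pod
metadata:
  namespace: jenkins-jobs
  labels:
    app.kubernetes.io/name: jenkins   # match the labels from your Pod Template
spec:
  containers:
    - name: maven                     # illustrative build container
      image: maven:3.9-eclipse-temurin-17
      command: ["sleep"]
      args: ["infinity"]
      resources:
        requests:
          cpu: "1"
          memory: 2Gi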