Building Continuous Delivery Pipeline using CDK Pipelines Modern API
by @mugilanragupathi



Too Long; Didn't Read

AWS CDK is an open-source software development framework that lets you define cloud infrastructure as code. AWS CDK supports many languages, including TypeScript, Python, C#, and Java. In this step-by-step tutorial, we're going to learn how to build a continuous delivery pipeline using the CDK Pipelines modern API.

In this step-by-step tutorial, we're going to learn how to build a continuous delivery pipeline using the CDK Pipelines modern API. Before discussing CDK Pipelines, let us cover a few things first so that we're all on the same page.


What is a CI/CD pipeline?

A CI/CD pipeline automates your software delivery process. It is a series of steps that run from the moment you push your changes until deployment, such as building your code, running tests, and deploying your application and services.


Why CDK Pipelines?

AWS CDK is an open-source software development framework that lets you define cloud infrastructure as code. AWS CDK supports many languages, including TypeScript, Python, C#, and Java. You can learn more about AWS CDK from the official docs, and I wrote a beginner's guide to it on my blog.


If you're already using AWS CDK to define your infrastructure, it is easier to use CDK Pipelines. They are serverless, and you only pay for what you use.


CDK Pipelines are self-mutating: if you add a stage or stack to your pipeline, the pipeline reconfigures itself to deploy those new stages or stacks.


Project

In this tutorial, we're going to build a pipeline using CDK Pipelines for a simple serverless project. As the main objective of this tutorial is to explain the concepts and implementation of CDK Pipelines using the modern API, the actual application (a serverless project) is intentionally kept simple. However, the same concept applies even to large-scale applications; the only difference would be the application being deployed.


In this project, the lambda function will be called whenever an object is uploaded to the S3 bucket.

Our simple serverless project


CDK Pipeline Architecture

Below is the high-level architecture diagram for Code Pipeline.

CI CD Pipeline Architecture using CDK Pipelines


Our code resides in a GitHub repository. First, we create a connection to the GitHub repository using an AWS CodeStar connection in the AWS console (detailed instructions are provided later in this article).

Whenever a user pushes a change to GitHub, the CodeStar connection detects it and your AWS CodePipeline starts to execute. AWS CodeBuild builds your project, and CloudFormation deploys the AWS resources required for the project (the S3 bucket and the Lambda function).


Creating AWS CDK project

Before creating a new CDK project, you should have the aws-cdk package installed globally. If you haven't, you can install it using the command below:


npm i -g aws-cdk


Then, you can execute the commands below to create a new CDK project:

mkdir aws-cdk-pipelines
cd aws-cdk-pipelines
cdk init app --language=typescript


Once the CDK application is created, it provides a default template for creating an SQS queue (commented code), as shown below:


import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
// import * as sqs from 'aws-cdk-lib/aws-sqs';

export class AwsCdkPipelinesStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // The code that defines your stack goes here

    // example resource
    // const queue = new sqs.Queue(this, 'AwsCdkPipelinesQueue', {
    //   visibilityTimeout: cdk.Duration.seconds(300)
    // });
  }
}

As we won't be needing SQS, you can delete the commented lines.

Creating GitHub repository

We'll create a new GitHub repository and push the CDK project that we created earlier to it (remember to commit your changes first).


git remote add origin git@github.com:<your-github-username>/<your-repo-name>.git
git branch -M main
git push -u origin main


We need this GitHub repository in the next step for creating an AWS CodeStar connection, which provides the connectivity between AWS CodePipeline and your GitHub repository.


Creating AWS CodeStar Connection

You can go to the AWS CodePipeline service in the AWS console and select Settings from the left-hand side menu, as shown in the screenshot below.

Click the "Create connection" button and you'll get the screen below. Select GitHub as the provider, enter your connection name, and click the "Connect to GitHub" button.


Once you click the "Connect to GitHub" button, you'll see the screen below, where you can click the "Install a new app" button.



Once you click the "Install a new app" button, you'll be redirected to GitHub, which asks you to approve the repository access. The screen will look like the one below.


You can provide access either to all repositories or to a specific repository. It is recommended to provide access only to the particular repository you need. We've selected the GitHub repository that we created earlier.


Once you click save, the connection will be created.


Please note that we'll use the ARN (Amazon Resource Name) of this connection in our pipeline.


Creating AWS CDK Pipeline

AWS CDK provides an L2 construct, CodePipeline, for creating CDK Pipelines.


We want to perform the steps below as part of our pipeline:


  • Connect to the source (GitHub repo in our case)

  • Install the packages

  • Build the project

  • Produce the CloudFormation templates from the CDK code, which can then be deployed


All four of these steps can be done using ShellStep. ShellStep is a construct provided by CDK that lets you execute shell commands in the pipeline. You provide the ShellStep to the synth property as shown below.


const pipeline = new pipelines.CodePipeline(this, 'Pipeline', {
  synth: new pipelines.ShellStep('Synth', {
    // Use a connection created using the AWS console to authenticate to GitHub
    // Other sources are available.
    input: pipelines.CodePipelineSource.connection(
      'mugiltsr/aws-cdk-pipelines',
      'main',
      {
        connectionArn:
          'arn:aws:codestar-connections:us-east-1:853185881679:connection/448e0e0b-0066-486b-ae1c-b4c8be92f79b', // Created using the AWS console
      }
    ),
    commands: ['npm ci', 'npm run build', 'npx cdk synth'],
  }),
  dockerEnabledForSynth: true,
});

The input property in synth represents the GitHub source connection; it takes 3 parameters:

  • GitHub repo - follows the pattern <owner>/<repo-name>. My GitHub id is mugiltsr and my repo name is aws-cdk-pipelines
  • Branch name - the branch for which the pipeline is triggered. As I want the pipeline to execute when code is pushed to the main branch, I've specified main
  • connectionArn - the ARN (Amazon Resource Name) of the connection that we created earlier in the AWS console
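If you read the connection ARN from configuration (for example, an environment variable), a small sanity check can catch copy-paste mistakes before the pipeline is synthesized. This helper is hypothetical, not part of the CDK API; a minimal sketch:

```typescript
// Hypothetical sanity check: verify a string looks like a CodeStar
// connection ARN before wiring it into the pipeline.
const isConnectionArn = (arn: string): boolean =>
  /^arn:aws:codestar-connections:[a-z0-9-]+:\d{12}:connection\/[0-9a-f-]+$/.test(arn);

console.log(
  isConnectionArn(
    'arn:aws:codestar-connections:us-east-1:853185881679:connection/448e0e0b-0066-486b-ae1c-b4c8be92f79b'
  )
); // → true
console.log(isConnectionArn('not-an-arn')); // → false
```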


The commands property is a string array that contains the commands to be executed:

  • npm ci - installs the packages in CI mode (a clean, reproducible install from package-lock.json)
  • npm run build - builds the project
  • npm run test - optionally, you can execute your test cases as well (not included in the commands above)
  • npx cdk synth - synthesizes the CloudFormation templates from our CDK code
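The npm scripts referenced here come from the package.json that cdk init generates; a typical one looks along these lines (your test runner and exact entries may differ):

```json
{
  "scripts": {
    "build": "tsc",
    "watch": "tsc -w",
    "test": "jest",
    "cdk": "cdk"
  }
}
```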



dockerEnabledForSynth: set this property to true if you want Docker to be available in the synth step. As we're going to use bundling for building the lambda function, we have to set this property to true.


Deploying our stack

Commit and push these changes to the GitHub repo.


For the first time only, we need to deploy the stack manually using the command cdk deploy. Once you execute the command, the AWS CodePipeline will be created and executed.


From then on, we don't need to deploy our code manually. As you can see in the screenshot below, the pipeline is created and executed successfully.



Creating our serverless application

Our serverless application is pretty simple and just has the following components:

  • S3 bucket

  • Lambda function

  • S3 event source that connects the bucket to the lambda function


We'll have all these components in a new stack, AwsS3LambdaStack, which lives in the lib directory alongside our pipeline stack.


import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';

export class AwsS3LambdaStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    // create s3 bucket
    // create lambda function
    // add s3 event source for lambda function
  }
}


Creating S3 bucket

Below is the CDK code for creating an S3 bucket


const bucket = new s3.Bucket(this, 'S3Bucket', {
  bucketName: 'aws-lambda-s3-132823',
  autoDeleteObjects: true,
  removalPolicy: RemovalPolicy.DESTROY,
});


It has 3 properties: bucketName is the name of the bucket, autoDeleteObjects controls whether the objects in the bucket are deleted when the stack is brought down, and removalPolicy controls whether the bucket itself is removed when the stack is brought down. All three properties are optional; if you omit bucketName, CloudFormation generates a unique name for you. Note that bucket names must be globally unique across all AWS accounts.
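Because bucket names are globally unique, a hard-coded literal like the one above can collide with a bucket someone else already owns. One common pattern (my suggestion, not from the original article) is to derive the name from deployment context such as the account ID and region; a minimal sketch with placeholder values:

```typescript
// Sketch: build a bucket name from deployment context so it is unlikely
// to collide. The account/region values below are placeholders; in a real
// stack you would pass `this.account` and `this.region`.
const uniqueBucketName = (base: string, account: string, region: string): string =>
  `${base}-${account}-${region}`.toLowerCase();

console.log(uniqueBucketName('aws-lambda-s3', '123456789012', 'us-east-1'));
// → "aws-lambda-s3-123456789012-us-east-1"
```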


Creating Lambda Function

We create a simple lambda function with the Node.js 16 runtime. We configure the timeout and memory size properties and specify the path to the lambda function's entry file.


const nodeJsFunctionProps: NodejsFunctionProps = {
  bundling: {
    externalModules: [
      'aws-sdk', // Use the 'aws-sdk' available in the Lambda runtime
    ],
  },
  runtime: Runtime.NODEJS_16_X,
  timeout: Duration.minutes(3), // Default is 3 seconds
  memorySize: 256,
};

const readS3ObjFn = new NodejsFunction(this, 'readS3ObjFn', {
  entry: path.join(__dirname, '../src/lambdas', 'read-s3.ts'),
  ...nodeJsFunctionProps,
  functionName: 'readS3ObjFn',
});

    bucket.grantRead(readS3ObjFn);


As we're using NodejsFunction, it uses Docker for bundling the lambda function, which is why we set dockerEnabledForSynth to true in the pipeline. (Note: if esbuild is available in the build environment, NodejsFunction can bundle without Docker.)


Adding S3 as an event source for Lambda


We want the lambda to be triggered when an object is created in the S3 bucket, and the CDK code below does exactly that:


readS3ObjFn.addEventSource(
  new S3EventSource(bucket, {
    events: [s3.EventType.OBJECT_CREATED],
  })
);


Lambda function code


We just print the contents of the uploaded S3 object. However, you can process the file as per your application's needs. Below is the lambda function code that prints the contents of the file.


import { S3Event } from 'aws-lambda';
import * as AWS from 'aws-sdk';

export const handler = async (
  event: S3Event,
  context: any = {}
): Promise<any> => {
  for (const record of event.Records) {
    const bucketName = record?.s3?.bucket?.name || '';
    // Object keys in S3 event records are URL-encoded (spaces arrive as '+'),
    // so decode the key before using it with getObject
    const objectKey = decodeURIComponent(
      (record?.s3?.object?.key || '').replace(/\+/g, ' ')
    );

    const s3 = new AWS.S3();
    const params = { Bucket: bucketName, Key: objectKey };
    const response = await s3.getObject(params).promise();
    const data = response.Body?.toString('utf-8') || '';
    console.log('file contents:', data);
  }
};


Creating Serverless Application

We're going to use our lambda stack in our application; let's call this application MyApplication. You can name it whatever you want.


import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { AwsS3LambdaStack } from './lambda-stack';
export class MyApplication extends cdk.Stage {
  constructor(scope: Construct, id: string, props?: cdk.StageProps) {
    super(scope, id, props);

    const lambdaStack = new AwsS3LambdaStack(this, 'lambdaStack');
  }
}


Please note that we're extending the cdk.Stage construct so that we can use this as a stage in our pipeline.


Adding the stage to our pipeline


You can use the CDK code below to add our application as a stage in the pipeline:


pipeline.addStage(new MyApplication(this, 'lambdaApp'));
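A natural extension (not covered in the original article) is to run a validation after the stage deploys. CDK Pipelines lets you attach post steps to a stage; a minimal sketch, where the shell command is a placeholder for a real smoke test:

```typescript
import * as pipelines from 'aws-cdk-lib/pipelines';

// Sketch: attach a post-deployment validation step to the stage.
// Replace the echo with a real check against your deployed resources.
pipeline.addStage(new MyApplication(this, 'lambdaApp'), {
  post: [
    new pipelines.ShellStep('Validate', {
      commands: ['echo "stage deployed - run smoke tests here"'],
    }),
  ],
});
```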


Commit and push these changes to the main branch. Please note that we don't need to deploy the stack manually.


The CDK pipeline will self-mutate, creating the additional stages and steps needed to deploy the S3 bucket and the lambda function.




Testing

To test our actual serverless application, I upload a simple text file into the bucket (screenshot shown below)

Our lambda function will be invoked, and you can see this in its execution logs.

Conclusion

I hope you've learned how to build CI/CD pipelines using CDK Pipelines. Thanks to the self-mutating feature, it is much easier to keep your CI/CD pipeline in sync with your infrastructure code.