Deploying a Java Spring Boot app on AKS with Azure Database for PostgreSQL

by Yi Ai, March 28th, 2021
Too Long; Didn't Read

Azure Kubernetes Service (AKS) is a managed service that lets you quickly deploy and manage applications based on microservices. We'll learn how to run a Java Spring Boot application on AKS and connect it to Azure Database for PostgreSQL using Azure AD Pod Identity. We'll assume you already have an Azure Subscription set up. After creating an AKS cluster and a pod identity, we'll create an Azure Database for PostgreSQL server and grant database access to that identity. The identity must have Reader permission on the resource group that contains the virtual machine scale set of our AKS cluster.


In this post, you'll learn how to run a Java Spring Boot application on Azure Kubernetes Service (AKS) and connect it to Azure Database for PostgreSQL using Azure AD Pod Identity. Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage applications based on microservices.

Azure Active Directory pod-managed identities use Kubernetes primitives to associate managed identities for Azure resources and identities in Azure Active Directory (AAD) with pods.

What we’ll cover in this post:

  • Create an AKS cluster and Pod Identity.
  • Create an Azure Database for PostgreSQL server.
  • Prepare Java Spring Boot application for AKS.
  • Deploy Azure Container Registry (ACR).
  • Deploy Java application to Kubernetes with Kustomize.

The following diagram shows the architecture of the above steps:

AKS cluster and Pod Identity

I will assume you already have an Azure Subscription set up.

Before going any further, we will need to register the EnablePodIdentityPreview feature and install the aks-preview Azure CLI extension.

az feature register --name EnablePodIdentityPreview --namespace Microsoft.ContainerService
az extension add --name aks-preview
az extension update --name aks-preview
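
Feature registration can take several minutes. You can check its status and, once it shows Registered, refresh the Microsoft.ContainerService provider registration:

az feature show --name EnablePodIdentityPreview --namespace Microsoft.ContainerService --query properties.state -o tsv
az provider register --namespace Microsoft.ContainerService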

Let’s create a resource group and an AKS cluster with Azure CNI and pod-managed identity enabled.

export RESOURCE_GROUP=demo-k8s-rg
export CLUSTER_NAME=my-k8s-cluster

az group create --name=${RESOURCE_GROUP} --location eastus
az aks create -g ${RESOURCE_GROUP} -n ${CLUSTER_NAME} --enable-managed-identity --enable-pod-identity --network-plugin azure --enable-addons monitoring --node-count 1 --generate-ssh-keys 
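
To run the kubectl commands used later in this post against the new cluster, fetch the cluster credentials into your local kubeconfig:

az aks get-credentials --resource-group ${RESOURCE_GROUP} --name ${CLUSTER_NAME}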

Then, create a user-assigned managed identity:

export IDENTITY_RESOURCE_GROUP="my-identity-rg"
export IDENTITY_NAME="sp-application-identity"

az group create --name ${IDENTITY_RESOURCE_GROUP} --location eastus
az identity create --resource-group ${IDENTITY_RESOURCE_GROUP} --name ${IDENTITY_NAME}

Then, assign the required permissions to the created identity. The identity must have the Reader role on the resource group that contains the virtual machine scale set of our AKS cluster, and the acrpull role on the resource group where we will later create the container registry, so it can pull images from ACR.

export IDENTITY_CLIENT_ID="$(az identity show -g ${IDENTITY_RESOURCE_GROUP} -n ${IDENTITY_NAME} --query clientId -otsv)"
export IDENTITY_RESOURCE_ID="$(az identity show -g ${IDENTITY_RESOURCE_GROUP} -n ${IDENTITY_NAME} --query id -otsv)"
export RG_RESOURCE_ID="$(az group show -g ${RESOURCE_GROUP} --query id -otsv)"
export NODE_GROUP=$(az aks show -g ${RESOURCE_GROUP} -n ${CLUSTER_NAME} --query nodeResourceGroup -o tsv)
export NODES_RESOURCE_ID=$(az group show -n $NODE_GROUP -o tsv --query "id")

az role assignment create --role "Reader" --assignee "$IDENTITY_CLIENT_ID" --scope $NODES_RESOURCE_ID
az role assignment create --role "acrpull" --assignee "$IDENTITY_CLIENT_ID" --scope $RG_RESOURCE_ID

Next, let’s create a pod identity for the cluster using the following command.

az aks pod-identity add --resource-group ${RESOURCE_GROUP} --cluster-name ${CLUSTER_NAME} --namespace dev-ns  --name my-sp-pod-identity --identity-resource-id ${IDENTITY_RESOURCE_ID}
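
To verify the pod identity, you can list the AzureIdentity and AzureIdentityBinding resources that the add-on creates in the dev-ns namespace:

kubectl get azureidentity,azureidentitybinding -n dev-ns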

Now the first step is done, and we move on to the next step.

Azure Database for PostgreSQL server

In this section, we will create an Azure Database for PostgreSQL server and database, then grant database access to the identity we created in the first step.

First, create the PostgreSQL server and database.

export DB_SERVER=my-sp-db-server
export DB_RG=my-db-rg
export DB_NAME=my-sp-db
export PGSSLMODE=require

az group create --name=${DB_RG} --location eastus
az postgres server create --resource-group ${DB_RG} --name ${DB_SERVER} --location eastus --admin-user myadmin --admin-password P@ssword123 --sku-name B_Gen5_1
az postgres db create -g ${DB_RG} -s ${DB_SERVER} -n ${DB_NAME}

After the PostgreSQL server is ready, secure it by setting an IP firewall rule.

az postgres server firewall-rule create --resource-group ${DB_RG} --server ${DB_SERVER} --name "AllowAllLinuxAzureIps" --start-ip-address YOUR_LOCAL_CLIENT_IP --end-ip-address YOUR_LOCAL_CLIENT_IP
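
Note that this rule only allows your local client IP, which is what you need to run the SQL statements below from your machine. The AKS pods also need network access to the server; one common (if broad) option is the special 0.0.0.0 rule that allows connections from Azure services:

az postgres server firewall-rule create --resource-group ${DB_RG} --server ${DB_SERVER} --name "AllowAllAzureServiceIPs" --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0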

Next, add an Azure AD admin user to the PostgreSQL server; for more details about AD authentication, please refer to this link.
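
One way to make that admin connection from your local machine is to request an access token for the ossrdbms-aad resource and pass it to psql as the password; in this sketch, <YOUR_AD_ADMIN> is a placeholder for the Azure AD admin user you just configured:

export PGPASSWORD=$(az account get-access-token --resource https://ossrdbms-aad.database.windows.net --query accessToken -o tsv)
psql "host=${DB_SERVER}.postgres.database.azure.com port=5432 dbname=postgres user=<YOUR_AD_ADMIN>@${DB_SERVER} sslmode=require"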

After the AD admin user has been set up, connect as the Azure AD administrator user to the PostgreSQL database using Azure AD authentication and run the following SQL statements:

SET aad_validate_oids_in_tenant = off;
CREATE ROLE myuser WITH LOGIN PASSWORD '<YOUR_IDENTITY_CLIENT_ID>' IN ROLE azure_ad_user;
CREATE DATABASE "my-sp-db"; -- skip this if the database already exists from az postgres db create
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO myuser;

Replace <YOUR_IDENTITY_CLIENT_ID> with your identity client id that we created in section 1.

Spring Boot application for AKS

The demo/sample application is a simple Spring Boot REST API; we will build the API into a Docker image and then push it to Azure Container Registry (ACR).

The complete Java project is in my GitHub repo; clone the repo and run the following command in the project root directory:

mvn install dependency:copy-dependencies -DskipTests && cd target/dependency; jar -xf ../*.jar && cd ../..

Make sure you have the Java JDK and Maven installed on your computer.

Next, create an Azure Container Registry.

az acr create --resource-group ${RESOURCE_GROUP} --location eastus --name myspdemo --sku Basic

Then, log in to the ACR, and build and push the Java container image to the registry.

az acr login --name myspdemo && mvn compile jib:build

Note that the Spring Boot project uses the Jib Maven plugin, so no Dockerfile is needed; for more details, visit this link.
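
For reference, the Jib configuration in the project's pom.xml looks roughly like this (a sketch; the image name and plugin version in the actual repo may differ):

<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>2.8.0</version>
  <configuration>
    <to>
      <image>myspdemo.azurecr.io/awesomeprject</image>
    </to>
  </configuration>
</plugin>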

To connect to the PostgreSQL database using the managed identity, we have to acquire an OAuth access token from the MSI endpoint:

http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=<YOUR_IDENTITY_CLIENT_ID>

Then, configure a DataSource programmatically in Spring Boot. The configuration script would look similar to this:

package com.example.awesomeprject;

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.context.annotation.Configuration;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.jdbc.DataSourceBuilder;

import org.json.JSONTokener;
import org.json.JSONObject;

import java.net.*;  
import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.log4j.Logger;

@Configuration
public class DataSourceConfig {
    public static Logger logger = Logger.getLogger("global");

    @Value("${db.host}")
    private String Host;

    @Value("${db.user}")
    private String User;

    @Value("${db.name}")
    private String Database;

    @Value("${client_id}")
    private String ClientId;

    @Bean
    @RefreshScope
    public DataSource getDataSource() {

        DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create();

        try {
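            // Call the Azure Instance Metadata Service (IMDS) endpoint to request an access token for the pod's managed identity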
            URL url = new URL("http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=" + ClientId);
            HttpURLConnection con = (HttpURLConnection) url.openConnection();
            con.setRequestMethod("GET");
            con.setRequestProperty("Metadata", "true");

            BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
            JSONTokener tokener = new JSONTokener(in);
            JSONObject json = new JSONObject(tokener);
            String accessToken = json.getString("access_token");

            logger.info("accessToken: " +  accessToken);

            // Use the managed identity access token as the database password for the AAD-mapped role
            dataSourceBuilder.url(this.Host);
            dataSourceBuilder.username(this.User);
            dataSourceBuilder.password(accessToken);
            
            in.close();
            con.disconnect();
        } catch(Exception e) {
            e.printStackTrace();
        }

        return dataSourceBuilder.build();
    }
}
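
The @Value placeholders above (db.host, db.user, db.name, client_id) are resolved from the environment variables set in the deployment manifest below. A minimal application.properties mapping could look like this (an assumption about the project's configuration, shown for clarity rather than copied from the repo):

db.host=${DB_HOST}
db.user=${DB_USER}
db.name=${DB_NAME}
client_id=${CLIENT_ID}
# The remaining ConfigMap values likely feed Spring Boot's SQL initialization properties
spring.datasource.initialization-mode=${DS_INIT_MODE}
spring.datasource.schema=${DB_SCHEMA}
spring.datasource.data=${DB_DATA}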

Now that we have all of the resources, it’s time to deploy our application pod.

Deploy to Kubernetes with Kustomize

With Kustomize, we can create multiple overlays and deploy the application to multiple environments on Kubernetes.

Kustomize is a tool included with kubectl 1.14 that “lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is.”

Make a .k8s/base directory for all the default configuration templates:

  • kustomization.yaml
  • apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - namespace.yaml
      - service.yaml
      - deployment.yaml
  • deployment.yaml
  • apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-deployment
      labels:
        app: demo
    spec:
      selector:
        matchLabels:
          app: demo
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: demo
            aadpodidbinding: my-sp-pod-identity
        spec:
          containers:
            - image: myspdemo.azurecr.io/awesomeprject:latest
              name: demo
              ports:
                - containerPort: 8080
              env:
                - name: DB_SCHEMA
                  valueFrom:
                    configMapKeyRef:
                      name: sp-config
                      key: DB_SCHEMA
                - name: DB_DATA
                  valueFrom:
                    configMapKeyRef:
                      name: sp-config
                      key: DB_DATA
                - name: DS_INIT_MODE
                  valueFrom:
                    configMapKeyRef:
                      name: sp-config
                      key: DS_INIT_MODE
                - name: DB_HOST
                  valueFrom:
                    configMapKeyRef:
                      name: sp-config
                      key: DB_HOST
                - name: DB_USER
                  valueFrom:
                    configMapKeyRef:
                      name: sp-config
                      key: DB_USER
                - name: DB_NAME
                  valueFrom:
                    configMapKeyRef:
                      name: sp-config
                      key: DB_NAME
                - name: CLIENT_ID
                  valueFrom:
                    secretKeyRef:
                      name: sp-secret
                      key: CLIENT_ID
              volumeMounts:
                - name: config
                  mountPath: 'app/resources/config'
                  readOnly: true
          volumes:
            - name: config
              configMap:
                name: sp-config
                items:
                  - key: 'schema.sql'
                    path: 'schema.sql'
                  - key: 'data.sql'
                    path: 'data.sql'
  • namespace.yaml
  • apiVersion: v1
    kind: Namespace
    metadata:
      name: ns
  • service.yaml
  • apiVersion: v1
    kind: Service
    metadata:
      name: demo-service
      labels:
        app: demo
    spec:
      ports:
        - protocol: TCP
          port: 80
          targetPort: 8080
      selector:
        app: demo
      type: LoadBalancer

Then, make a .k8s/dev directory for the development environment configuration; Kustomize calls this an overlay. Add new kustomization.yaml, configmap.yaml, and secret.yaml files to the overlay directory .k8s/dev.

  • kustomization.yaml
  • apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    namePrefix: dev-
    namespace: dev-ns
    commonLabels:
      variant: dev
    # patchesStrategicMerge:
    resources:
      - configmap.yaml
      - secret.yaml
    bases:
      - ../base
  • configmap.yaml
  • apiVersion: v1
    kind: ConfigMap
    metadata:
      name: sp-config
    data:
      DS_INIT_MODE: always
      DB_USER: myuser@my-sp-db-server
      DB_HOST: jdbc:postgresql://my-sp-db-server.postgres.database.azure.com:5432/my-sp-db?sslmode=require
      DB_NAME: my-sp-db
      DB_SCHEMA: config/schema.sql
      DB_DATA: config/data.sql
      data.sql: |
        INSERT INTO "user" (firstName, lastName) SELECT 'William','Ferguson'
          WHERE
              NOT EXISTS (
                  SELECT id FROM "user" WHERE firstName = 'William' AND lastName = 'Ferguson'
          );
      schema.sql: |
        DROP TABLE IF EXISTS "user";
        CREATE TABLE "user"
        (
          id SERIAL PRIMARY KEY,
          firstName VARCHAR(100) NOT NULL,
          lastName VARCHAR(100) NOT NULL
        );
  • secret.yaml
  • 
    apiVersion: v1
    kind: Secret
    data:
      CLIENT_ID: CLIENT_ID_ENCODED_WITH_BASE64
    metadata:
      name: sp-secret
    type: Opaque

In this demo, I store the identity's Base64-encoded client id in a Kubernetes Secret; for production, I suggest storing the client id in Azure Key Vault and integrating Azure Key Vault with AKS.
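
To produce the Base64 value for the Secret, encode the client id (the -n flag avoids encoding a trailing newline):

echo -n "${IDENTITY_CLIENT_ID}" | base64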

We are almost there! Now deploy these configuration files to the Kubernetes cluster.

kustomize build .k8s/dev/. | kubectl apply -f -
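
Alternatively, because Kustomize is built into kubectl 1.14+, the same deployment can be applied with the -k flag:

kubectl apply -k .k8s/dev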

Once the application has been deployed, use kubectl to check the status of our application pod:

kubectl get pods -n dev-ns

We will eventually see that our application Pod is in the Running status, with 1/1 containers in the READY column:

NAME                                  READY   STATUS    RESTARTS   AGE
dev-demo-deployment-6499974b5-2srzz   1/1     Running   0          23h
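
To try the API, get the external IP of the LoadBalancer service (named dev-demo-service because of the dev- name prefix) and call an endpoint; the exact path depends on the sample project's controllers:

kubectl get service dev-demo-service -n dev-ns
curl http://<EXTERNAL_IP>/<API_PATH>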

We can view Kubernetes logs, events, and pod metrics in real-time in the Azure Portal.

Container insights includes the Live Data feature, an advanced diagnostic feature that gives you direct access to your Azure Kubernetes Service (AKS) container logs (stdout/stderr), events, and pod metrics. It exposes direct access to kubectl logs -c, kubectl get events, and kubectl top pods.

For more details about Container Insight, refer to this link.

Conclusion

With the steps above, we now have a Java Spring Boot REST API running in Kubernetes that connects to an Azure PostgreSQL database using AAD Pod Identity. I also walked you through how to deploy applications to Kubernetes with Kustomize. For the complete code of this sample/demo, please refer to my GitHub repo.

Read behind a paywall at https://codeburst.io/deploying-a-spring-boot-rest-api-on-azure-kubernetes-service-with-azure-database-for-postgresql-4bf86a8059e0