
Building a Secure Web Chat With Redis, mTLS and GCP

by Mourjo Sen, February 3rd, 2023

Too Long; Didn't Read

Google Cloud provides a free e2-micro instance as part of their monthly free tier, on which I will install Redis as a service. The backend code runs on Cloud Functions and communicates with Redis over a secure mTLS connection. Anonymous users choose a username and are ranked by recency of last access time. After 10 minutes of inactivity, users are purged from the system.


This is the first post in a series where I build a web chat powered by GCP's free tier. In this post, I focus on setting up the architecture and building an HTTP endpoint that does the following:


  • Anonymous users choose a username
  • Users are ranked by recency of last access time
  • After 10 minutes of inactivity, users are purged from the system
  • A maximum of 100 users at a time

Architecture and choice of components

I want to have an effectively free infrastructure using GCP’s perpetually free tier. The components for this post:


  • Cloud function: I will use HTTP-based serverless cloud functions to run the backend code. Cloud functions have a free limit of 2 million requests per month. I chose cloud functions primarily because they require little code to get started and provide observability and scalability out of the box.

  • Redis: To store users’ data, I will use Redis because it is lightweight to run, fast and versatile for storing all kinds of data I will need for the chat app. But GCP’s MemoryStore implementation of Redis is not free. I work around this by installing Redis on a free compute instance (this somewhat limits scalability, but I have some room due to Redis’s memory efficiency).

  • Connecting with Redis over the internet: Connecting to the rest of my VPC from cloud functions requires a VPC connector (Serverless VPC Access), which is not free. I work around this by connecting to Redis over the internet, making sure it is done over a secure connection.

    The cost-efficient and safe architecture that runs the backend code via Cloud Functions and communicates with Redis over a secure mTLS connection

Installing Redis

Google Cloud provides a free e2-micro instance as part of their monthly free tier. I will install Redis as a service on a compute instance.


  • Provision an e2-micro instance in the us-central1 region -- let's call the instance pelican
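    This can also be done from the command line -- a minimal sketch, assuming the us-central1-a zone and a Debian image (adjust to taste):

    gcloud compute instances create pelican \
      --machine-type=e2-micro \
      --zone=us-central1-a \
      --image-family=debian-11 \
      --image-project=debian-cloud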

  • Reserve a static IP address and attach it to pelican

  • SSH into the instance and install the latest version of Redis (at least version 7.0, to get the best TLS support) -- one way to do this on Debian is sketched below
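    The Debian package repositories may ship an older Redis, so one option is to install from Redis's official APT repository -- a sketch based on the upstream install instructions (worth verifying against the current Redis docs):

    curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
    echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list
    sudo apt-get update && sudo apt-get install -y redis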

  • If the compute instance is based on Debian, it is best to update the Redis config to be supervised by systemd which is the default init system: change the file /etc/redis/redis.conf to update the supervision setting:

    supervised systemd
    
  • Comment out the bind directive (so that all network interfaces on the instance can reach Redis), change the port to 12345, and add a password using the requirepass setting
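    For example, the relevant lines in /etc/redis/redis.conf might look like this (the password is a placeholder):

    # no bind line, so Redis listens on all interfaces; protect it with a password
    port 12345
    requirepass somepassword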

  • One last thing to do is to allow the Redis service managed by systemd to have (read-only) access to the home directory -- this is only necessary because I store some files (generated later) in the home directory.


    To do this, add or update the service file /etc/systemd/system/redis.service with this setting, which gives the service read-only access to the home directory:

    ProtectHome=read-only
    


  • Reload systemd

    sudo systemctl daemon-reload
    


  • Restart Redis

    sudo systemctl restart redis
    


  • I can test Redis via the CLI locally on the instance

    mourjo@pelican:~$ redis-cli -p 12345
    127.0.0.1:12345> AUTH somepassword
    OK
    127.0.0.1:12345> set x y
    OK
    127.0.0.1:12345> get x
    "y"
    


  • I want to connect to Redis over the internet, so the port on pelican must first be reachable. On the firewall tab in the GCP console, create a new allow rule with the following settings (an equivalent gcloud command is sketched after the list):


    • Priority: 70 (lower than the default of 1000, so this rule takes precedence)

    • Direction: ingress

    • Target tags: A tag that I will apply to the compute instance

    • Source IPv4 ranges: 0.0.0.0/0 allows every host to connect to this port

    • Protocol: TCP on port 12345

      Creating a firewall rule
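      The same rule can also be created with gcloud -- a sketch, assuming the rule is named allow-redis and the network tag is redis-server (both placeholders):

      gcloud compute firewall-rules create allow-redis \
        --direction=INGRESS --priority=70 --action=ALLOW \
        --rules=tcp:12345 --source-ranges=0.0.0.0/0 \
        --target-tags=redis-server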


  • Edit the instance and apply this firewall rule by adding the tag created above under the “network tags” setting (note that this opens up my instance to anyone on the internet, so setting a password as above is essential)

  • It should now be possible to connect to Redis from outside of the machine, for example from a computer that has not SSH’ed into the instance 🎉
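    For example, from any machine with redis-cli installed, using the static IP reserved earlier and the password set with requirepass:

    redis-cli -h <static-ip> -p 12345
    <static-ip>:12345> AUTH somepassword
    OK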

Enabling TLS support for Redis

So far I have set a password to access Redis, but the data (including the password itself) is still transmitted over an unencrypted TCP connection, which has two problems:


  • It is very easy to intercept traffic and read the password
  • Anyone can connect and start sending a flood of traffic even without authenticating, causing my tiny instance to be overwhelmed


Both problems can be solved by using a secure connection over TLS where both the server and the client need to establish their identity even before interacting with Redis. Mutual TLS (or mTLS) ensures that both the client and the server verify each other. Therefore only known clients with the right private keys can access my server over an encrypted connection.

Digital Certificates and Trust Establishment

Digital certificates are commonly used in TLS and HTTPS to verify the identity of participants. In short, a certificate is cryptographically verifiable proof that someone is who they say they are as attested by another authority.


If I am connecting to google.com, I need to know that I am in fact connecting to google.com and not someone impersonating it. This happens via a trust chain established between me (the browser) and the server (google.com): Google sends a certificate that has been cryptographically signed by a certifying authority (CA), and that signature is verifiable by the browser.


It is important to note that I need to implicitly trust the CA without question. This usually happens on the web via known issuers of certificates baked into browsers and operating systems (e.g. /etc/ssl/certs stores the CAs a Linux system implicitly trusts). Once I see a certificate that is signed by someone I trust, I can verify that the party claiming to be google.com actually is Google and not a spoof trying to steal information. In reality, there may be intermediate authorities that do the actual certificate signing, but the intermediate CAs themselves must also have certificates signed by the root CA.
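As a concrete illustration of such a chain (purely for inspection -- nothing in this setup depends on it), openssl can print the certificates a public server presents:

openssl s_client -connect google.com:443 -servername google.com -showcerts </dev/null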



The root certificate is implicitly trusted and is self-signed. Every other certificate in the chain is signed by the CA that issued it. The only implicit trust that needs to be established is with the root CA.




Certificates contain the public key of the party whose identity is being verified. Using the server’s public key from the certificate, the client can encrypt a payload that only the server can decrypt, thus making the communication secure: no other observer of the network can decrypt the information being communicated.


When both the client and the server verify each other's certificates this way, it is called an mTLS handshake. Reality is slightly more complicated, but the principle is the same:


How mTLS works (source: https://www.cloudflare.com/en-in/learning/access-management/what-is-mutual-tls/)


Generating the Certificates

The first step to a TLS connection is to have a certificate for the server and the client, both signed by a CA. I don't want to purchase a certificate, so I will generate my own CA certificate. It is not trusted in the wild, so I will need to tell the server and the client to trust certificates issued by this CA.


  1. Create the root CA key and certificate -- this will be implicitly trusted by both client and server

    1. Create a new private key for the root CA (optionally encrypt it with AES 256)

      openssl genrsa -aes256 -out ca.key 4096
      
    2. Create the certificate valid for 10 years signed by this private key

      openssl req -new -x509 -days 3650 -key ca.key -out ca.crt
      
  2. Create the client certificate signed by the root CA

    1. Generate a new private key for the client (same command as 1a, but with a 2048-bit key)

      openssl genrsa -aes256 -out client.key 2048
      
    2. Create a certificate signing request, which contains the parameters the certificate will be created with

      openssl req -new -key client.key -out client.csr
      
    3. Create the certificate signed by the CA

      openssl x509 -req -days 3650 -in client.csr -CA ca.crt -CAkey ca.key -out client.crt -CAcreateserial
      
  3. Create the server certificate signed by the root CA

    1. Generate a new private key for the server (same command as 1a, but with a 2048-bit key)

      openssl genrsa -aes256 -out server.key 2048
      
    2. Create a certificate signing request (it is customary to use the server IP address when prompted for Common Name)

      openssl req -new -key server.key -out server.csr
      
    3. Create a certificate signed by the CA

      openssl x509 -req -days 3650 -in server.csr -CA ca.crt -CAkey ca.key -out server.crt -CAcreateserial
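Before moving on, it is worth sanity-checking that the generated certificates chain back to the CA -- both commands below only read the files:

openssl verify -CAfile ca.crt server.crt client.crt
openssl x509 -in server.crt -noout -subject -issuer -dates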
      


Setting up Redis with the certificates

Using the certificates, let’s make these changes to Redis config:

  • Enable TLS, disable the TCP port
  • Add server certificate, server key
  • Add CA certificate
  • Require clients to authenticate themselves


Update the file /etc/redis/redis.conf with these settings (here is the full config):

port 0
tls-port 12345

# server certificate and key:
tls-cert-file /home/mourjo/certs/server.crt
tls-key-file  /home/mourjo/certs/server.key

# the key file is encrypted, so add the password
tls-key-file-pass thisisredacted

# the CA certificate
tls-ca-cert-file /home/mourjo/certs/ca.crt

# make client authentication mandatory
tls-auth-clients yes


Starting the server should show the following logs

sudo systemctl start redis
tail /var/log/redis/redis-server.log
148924:M 27 Jan 2023 04:16:10.675 # Server initialized
...
148924:M 27 Jan 2023 04:16:10.676 * Ready to accept connections


I can verify using OpenSSL’s s_client that Redis does not accept a client that does not have a valid certificate signed by my CA:

# without any certificate
openssl s_client -state -connect <ip-address>:<port> -servername <ip-address>
8610505984:error:1404C45C:SSL routines:ST_OK:reason(1116):/AppleInternal/Library/BuildRoots/810eba08-405a-11ed-86e9-6af958a02716/Library/Caches/com.apple.xbs/Sources/libressl/libressl-3.3/ssl/tls13_lib.c:129:SSL alert number 116


# with a certificate not signed by the CA
openssl s_client -state -cert random.crt -key random.key -connect <ip-address>:<port> -servername <ip-address>
8610505984:error:1404C418:SSL routines:ST_OK:tlsv1 alert unknown ca:/AppleInternal/Library/BuildRoots/810eba08-405a-11ed-86e9-6af958a02716/Library/Caches/com.apple.xbs/Sources/libressl/libressl-3.3/ssl/tls13_lib.c:129:SSL alert number 48


I can verify the SSL connection works when passing the right certificate:

openssl s_client -state -cert client.crt -key client.key -connect <ip-address>:<port> -servername <ip-address>


This should print the necessary information about the server and the TLS session, and also allow us to run Redis commands (because the Redis protocol is human-readable):

Enter pass phrase for certprac/client.key:
CONNECTED(00000003)
depth=1 C = IN, ST = WB, L = Kolkata, O = mourjo.me, emailAddress = [email protected]
verify error:num=19:self signed certificate in certificate chain
verify return:0
write W BLOCK
---
Certificate chain
 0 s:/C=IN/ST=WB/L=Kolkata/CN=35.209.163.139/[email protected]
   i:/C=IN/ST=WB/L=Kolkata/O=mourjo.me/[email protected]
 1 s:/C=IN/ST=WB/L=Kolkata/O=mourjo.me/[email protected]
   i:/C=IN/ST=WB/L=Kolkata/O=mourjo.me/[email protected]
---
Server certificate
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
subject=/C=IN/ST=WB/L=Kolkata/CN=35.209.163.139/[email protected]
issuer=/C=IN/ST=WB/L=Kolkata/O=mourjo.me/[email protected]
---
No client certificate CA names sent
Server Temp Key: ECDH, X25519, 253 bits
---
SSL handshake has read 3049 bytes and written 1768 bytes
---
New, TLSv1/SSLv3, Cipher is AEAD-CHACHA20-POLY1305-SHA256
Server public key is 2048 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.3
    Cipher    : AEAD-CHACHA20-POLY1305-SHA256
    Session-ID:
    Session-ID-ctx:
    Master-Key:
    Start Time: 1674968406
    Timeout   : 7200 (sec)
    Verify return code: 19 (self signed certificate in certificate chain)
---
read R BLOCK
read R BLOCK
AUTH SMASH-workaday-sully
+OK
PING
+PONG
SET a b
+OK
GET a
$1
b
^C


I can also verify the SSL settings using an online tool like sslshopper, which reports that the SSL connection is set up correctly but that the certificate is issued by a CA that is not widely trusted -- because only I know about this CA.


Let us now connect using the Redis command-line client redis-cli from outside the instance, passing the client certificate:


redis-cli -h <ip-address> -p 12345 --tls --cert client.crt --key client.key --cacert ca.crt

Enter PEM pass phrase:
<ip-address>:12345> auth thishasbeenredacted
OK
<ip-address>:12345> set xyz abc
OK
<ip-address>:12345> get xyz
"abc"


Redis is now working on TLS over the internet! 🎉

Client Implementation

I have set up the Redis server and it works with the default client redis-cli but I want to build a simple backend using Redis:


  • Allow a user to choose a username -- authentication is out of scope for this post
  • Keep track of active users for 10 minutes and purge inactive users
  • Allow a maximum of 100 users at a time


I will deploy an HTTP cloud function that stores user data in and communicates with the Redis server. The backend code is written in Java, using the popular Jedis library for communicating with Redis.


The business logic is fairly simple: every time a new user arrives, I purge inactive users, enforce the limit of 100 users, and store the current user in a sorted set scored by the current timestamp.


double timeoutMillis = 10 * 60 * 1000D;
int MAX_USERS = 100;

// connect with TLS, pass connect timeout and socket timeout of 10 sec
try (Jedis jedis = new Jedis(host, port, 10_000, 10_000, true)) {
    jedis.auth(redisPassword);

    // purge old users
    jedis.zremrangeByScore("recent_users", Double.NEGATIVE_INFINITY, System.currentTimeMillis() - timeoutMillis);

    // ensure that there are at most 100 users
    var total_users = jedis.zcount("recent_users", Double.NEGATIVE_INFINITY, Double.POSITIVE_INFINITY);
    if (total_users >= MAX_USERS) {
        throw new TooManyUsersException(total_users);
    }

    // add the current user to the sorted set with the timestamp
    jedis.zadd("recent_users", (double) System.currentTimeMillis(), user);

    // fetch all active users (ascending by last-seen time); indices 0 to -1 cover the whole sorted set
    var activeUsers = jedis.zrangeWithScores("recent_users", 0, -1);
    Collections.reverse(activeUsers);
    return activeUsers;
}


The rest of the client code uses the Java SDK for cloud functions which wrap around the above business logic to respond to incoming requests. The full repository is available here.
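To give an idea of what that wrapper looks like, here is a minimal sketch of an HTTP-triggered entry point using the Functions Framework Java API (the class name mirrors the entry point used in the deploy command later, but the query parameter and the response body are placeholders -- the real handler lives in the repository):

package me.mourjo.functions;

import com.google.cloud.functions.HttpFunction;
import com.google.cloud.functions.HttpRequest;
import com.google.cloud.functions.HttpResponse;

public class Hello implements HttpFunction {
  @Override
  public void service(HttpRequest request, HttpResponse response) throws Exception {
    // read the username from the query string, e.g. ?username=mourjo
    String user = request.getFirstQueryParameter("username").orElse("anonymous");

    // ... invoke the Jedis-backed business logic from the snippet above here ...

    // respond with a minimal JSON payload
    response.setContentType("application/json");
    response.getWriter().write("{\"user\":\"" + user + "\"}");
  }
}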

Adding client certificates to Java Keystore

The above code snippet makes no mention of certificates, although my Redis server only accepts TLS connections from clients that are trusted via my CA. This is handled by the Java Cryptography Architecture (JCA): credentials and certificates are stored in password-protected files managed by the CLI utility keytool, which ships with the JDK. While setting up the TLS connection, the client code transparently uses the certificates present in these protected files.


But before I start importing my certificates with keytool, I need to convert them from the PEM format (the default output format of openssl) to the PKCS12 format.


Convert the CA root certificate to the PKCS12 format:

openssl pkcs12 -export -in ca.crt -inkey ca.key -out ca.p12


Convert the client certificate (note that the key needs to be included in the p12 file):

openssl pkcs12 -export -in client.crt -inkey client.key -out client.p12


As per common practice, I will create two files with keytool: one called the keystore and the other the truststore.


The keystore will store the keys/certificates the client needs to identify itself, that is, the client's private key and the client's certificate, which allow it to prove to the Redis server that it should be allowed to connect.

keytool -importkeystore -noprompt  -srckeystore client.p12 -srcstoretype PKCS12 -destkeystore keystore.jks -deststoretype PKCS12


The truststore will store the certificates the client will implicitly trust, that is the root CA’s certificate.


With this, when the server presents its certificate (signed by my CA), the client will know that it can trust the server. Import the CA certificate into the truststore:

keytool -importkeystore -noprompt -srckeystore ca.p12 -srcstoretype PKCS12 -destkeystore truststore.jks -deststoretype PKCS12
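To double-check what landed in each file, keytool can list the entries (it will prompt for the store password used during the import):

keytool -list -keystore keystore.jks -storetype PKCS12
keytool -list -keystore truststore.jks -storetype PKCS12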


Announce to the JVM where it should look for credentials/certificates by setting the following properties, either as CLI options or programmatically in the code:

System.setProperty("javax.net.ssl.keyStorePassword", "redacted"); // decryption password used while generating the keystore:
System.setProperty("javax.net.ssl.keyStore", "keystore.jks");
System.setProperty("javax.net.ssl.keyStoreType", "PKCS12");

System.setProperty("javax.net.ssl.trustStorePassword", "redacted"); // decryption password used while generating the truststore
System.setProperty("javax.net.ssl.trustStore", "truststore.p12");
System.setProperty("javax.net.ssl.trustStoreType", "PKCS12");


If all goes well, the following should return a PONG from Redis server:

try (Jedis jedis = new Jedis(host, port, true)) {
  jedis.auth("theredispassword");
  System.out.println(jedis.ping());
}


Secret Storage

The client code should now be able to access Redis. But before deploying this to a cloud function, I need to store secrets safely instead of having them scattered across the codebase or passed as unencrypted environment variables.


To do this, I will use GCP’s secret manager to store the following secrets:

  • JIBBER_REDIS_PASSWORD: The text password for authenticating with Redis

  • JIBBER_KEYSTORE: The keystore file generated by keytool above

  • JIBBER_KEYSTORE_PASSWORD: The text password used to decrypt the keystore file

  • JIBBER_TRUSTSTORE: The truststore file generated by keytool above

  • JIBBER_TRUSTSTORE_PASSWORD: The text password used to decrypt the truststore file


I also need to make sure that my cloud function has access to these secrets. I can do this by granting my default service account the Secret Manager Secret Accessor role, which allows it to read the secrets.
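For reference, creating one of these secrets and granting access from the command line might look like this (the member shown is the default compute service account; substitute the project number):

# store the keystore file as the first version of the secret
gcloud secrets create JIBBER_KEYSTORE --data-file=keystore.jks

# allow the function's service account to read it
gcloud secrets add-iam-policy-binding JIBBER_KEYSTORE \
  --member="serviceAccount:<project-number>-compute@developer.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"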

Deploying the Cloud Function

In addition to the secrets, I also need to set a few environment variables (which are not secret):


  • Redis host and port
  • Keystore and trust store locations


The final command to deploy to cloud functions with secrets and environment variables looks like this:

gcloud functions deploy jibber-function  \
  --entry-point me.mourjo.functions.Hello \
  --runtime java17 \
  --trigger-http \
  --allow-unauthenticated \
  --set-secrets '/etc/keystore:/keystore=JIBBER_KEYSTORE:1,/etc/truststore:/truststore=JIBBER_TRUSTSTORE:1,TRUSTSTORE_PASS=JIBBER_TRUSTSTORE_PASSWORD:1,KEYSTORE_PASS=JIBBER_KEYSTORE_PASSWORD:1,REDIS_PASSWORD=JIBBER_REDIS_PASSWORD:1' \
  --set-env-vars 'REDIS_HOST=1.1.1.1,REDIS_PORT=12345,KEYSTORE_LOCATION=/etc/keystore/keystore,TRUSTSTORE_LOCATION=/etc/truststore/truststore'


Deploying the code to my cloud function should now work! 🎉
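The deploy command prints the function's HTTPS trigger URL; a quick smoke test with curl might look like the following (the URL shape shown is for a 1st-gen function, and the username query parameter is only an assumption about the request format defined in the repository):

curl "https://<region>-<project-id>.cloudfunctions.net/jibber-function?username=mourjo"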

Conclusion

In this post, I covered the foundations of the chat web app by setting up a GCP Cloud Function that is triggered via HTTP and communicates with Redis over the internet via a secure connection -- all of which fits within GCP's free tier.


This cost-efficient infrastructure comes with caveats:

  • The endpoint is slow: This is partly due to the Cloud Function itself and partly due to the encrypted connection to Redis going over the internet instead of a local network
  • Redis is not failsafe: Data written to Redis is not backed up, and there is no secondary Redis instance to take over if the pelican compute instance goes down.


In the next post, I will write a browser-based client that can communicate with this cloud function to establish a user session and then hand it over to a WebSocket server based on Cloud Run. Stay tuned!