When learning a new technology, sometimes I just want to see it work. It gives me a baseline to extend my ideas, to see what is possible, and to imagine what it can become.
This series aims to leave no missing links, and encourages you to build your next innovative solution on what you learn here.
Today, we are building a virtual coffee shop that serves cappuccino. Through this project, we are exploring the benefits of using an AWS Application Load Balancer and WAF.
Imagine owning a coffee shop with a fantastic cappuccino that attracts customers in droves. Initially, your trusty coffee machine effortlessly manages the influx of orders, serving a cappuccino per minute.
However, success comes knocking, and soon, a single customer per minute evolves into a queue of two. A dilemma arises: the second customer is left waiting, and impatience could cost you their loyalty. But fear not, for there's a solution: the load balancer.
The Coffee Machine Analogy:
Picture your coffee machine as your API backend. As demand surges, the analogy becomes clear. Vertical scaling, akin to replacing your current machine with a pricier one capable of handling two cappuccinos per minute, has its limits.
Costs skyrocket as performance gains taper off, creating an unsustainable trajectory.
Enter Horizontal Scaling:
A more sensible approach is horizontal scaling. Just as you'd introduce more coffee machines to meet growing demand, horizontal scaling involves adding more servers to distribute the load.
This strategy accommodates increasing traffic gracefully, keeping performance steady without exponential cost escalation.
The Load Balancer's Role:
Here's where the load balancer steps in: it ensures an even distribution of incoming requests across multiple servers. Think of it as your coffee shop's efficient manager, guiding each customer to the next available machine.
Customizable Traffic Distribution:
Tailoring your load balancer's configuration to suit your needs is key. Whether you choose round-robin distribution, favoring fairness, or opt for more nuanced algorithms based on server health, the load balancer's flexibility adapts to your traffic patterns.
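To make round-robin distribution concrete, here is a minimal sketch in plain JavaScript. The MK-001/MK-002 names mirror the two machines we deploy later in this walkthrough; this is an illustration of the idea, not how AWS implements it internally.

```javascript
// Minimal round-robin sketch: each incoming order goes to the
// next machine in the rotation, wrapping back to the start.
const machines = ['MK-001', 'MK-002'];
let next = 0;

function pickMachine() {
  const machine = machines[next];
  next = (next + 1) % machines.length;
  return machine;
}

console.log(pickMachine()); // MK-001
console.log(pickMachine()); // MK-002
console.log(pickMachine()); // MK-001
```

Every machine gets an equal share of the orders, which is exactly the "fairness" that round-robin favors.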
Enhanced Performance and Resilience:
Load balancers don't just prevent bottlenecks. They also bolster fault tolerance. If a server falters, the load balancer redirects traffic, averting disruptions. This ensures uninterrupted service, akin to customers always getting their cappuccinos.
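The failover behavior can be sketched the same way: the balancer simply skips any machine that its health checks have marked unhealthy. This is a simplified illustration (the health-check mechanism itself is not shown), not the actual ALB routing logic.

```javascript
// Failover sketch: unhealthy machines are skipped, assuming health
// status is updated by periodic health checks (not shown here).
const pool = [
  { name: 'MK-001', healthy: true },
  { name: 'MK-002', healthy: true },
];
let cursor = 0;

function routeRequest() {
  for (let i = 0; i < pool.length; i++) {
    const machine = pool[cursor];
    cursor = (cursor + 1) % pool.length;
    if (machine.healthy) return machine.name;
  }
  throw new Error('No healthy machines available');
}

pool[0].healthy = false;     // MK-001 falters
console.log(routeRequest()); // MK-002
console.log(routeRequest()); // MK-002 again, until MK-001 recovers
```

From the customer's point of view, nothing changed: the cappuccino still arrives, just from a different machine.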
Owning a coffee shop is an honor, and naturally, safeguarding your valuable resources becomes paramount. This is where AWS WAF comes into play, offering a seamless method to place a protective firewall in front of your load balancer.
Mastering the nuances of WAF can involve a steep learning curve. However, with the array of managed rules AWS offers, configuring a basic WAF takes mere minutes.
Assuming you are familiar with AWS, follow the Cornell VPC to set up the infrastructure below.
Coffee-api is a simple Node.js Express project that does one thing: it serves cappuccino.
const express = require('express');
const config = require('config');
const coffee = require('./routes/coffee');

const app = express();
const PORT = 3000;

app.use(express.json());
app.use('/api/coffee', coffee);

console.log('Application Name: ' + config.get('name'));

app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});
When NODE_ENV=development, the config package loads development.json; when NODE_ENV=production, it loads production.json.
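For reference, a config/production.json for the first server might look like the sketch below. The values are illustrative: MK-001 matches the machine name used later in this walkthrough, and the second server would use MK-002.

```json
{
  "name": "coffee-api",
  "host": "MK-001"
}
```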
const express = require('express');
const config = require('config');

const router = express.Router();
const HOST = config.get('host');
const CAPPUCCINO = 'cappuccino';

router.post('/make', (req, res) => {
  // Validate the request
  const { type } = req.body;
  if (!type || type.toLowerCase() !== CAPPUCCINO) {
    res.status(400).send(`(${HOST}) No coffee for you.`);
    return;
  }
  // Make coffee
  const coffee = MakeCappuccino();
  // Serve coffee
  res.send(`(${HOST}) Here's your ${coffee}. Enjoy!`);
});

function MakeCappuccino() {
  return CAPPUCCINO;
}

module.exports = router;
In the /make endpoint, we validate the request body, make the cappuccino, and serve it with the host name prefixed, so we can later tell which machine handled the order.
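To see the validation behavior in isolation, here is the same check extracted as a standalone function. This is a sketch for illustration; the real route lives in routes/coffee.js above.

```javascript
// The /make route's validation as a plain function: only a body
// whose type is "cappuccino" (case-insensitive) is accepted.
const CAPPUCCINO = 'cappuccino';

function takeOrder(body) {
  const { type } = body || {};
  if (!type || type.toLowerCase() !== CAPPUCCINO) {
    return { status: 400, message: 'No coffee for you.' };
  }
  return { status: 200, message: `Here's your ${CAPPUCCINO}. Enjoy!` };
}

console.log(takeOrder({ type: 'Cappuccino' }).status); // 200
console.log(takeOrder({ type: 'espresso' }).status);   // 400
console.log(takeOrder({}).status);                     // 400
```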
Package the project into a tarball for deployment:
npm pack
On each server, install Node.js, npm, and Apache, then enable Apache's proxy modules:
sudo apt update
sudo apt install nodejs npm apache2
sudo a2enmod proxy
sudo a2enmod proxy_http
sudo systemctl restart apache2
cd /etc/apache2/sites-available/
sudo nano ./api.conf
Enter the following into api.conf, making sure to replace SERVER_PUBLIC_DNS with the public DNS of the server.
<VirtualHost *:80>
ServerName SERVER_PUBLIC_DNS
ProxyPass / http://localhost:3000/
ProxyPassReverse / http://localhost:3000/
</VirtualHost>
sudo a2ensite api.conf
sudo systemctl reload apache2
Copy the tarball to the server, then extract it and move it into place:
tar -xvzf coffee-api-1.0.0.tgz
mkdir ~/projects
mv package ~/projects/coffee-api
Update production.json with this server's name (for example, MK-001):
nano ~/projects/coffee-api/config/production.json
cd ~/projects/coffee-api
npm install --production
sudo npm install -g pm2
export NODE_ENV=production
pm2 start index.js
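Exporting NODE_ENV in the shell works, but it does not survive a new session. As an optional alternative (my own suggestion, not part of the original setup), PM2 can pin the environment in an ecosystem file:

```javascript
// ecosystem.config.js -- hypothetical PM2 process file that pins
// NODE_ENV so the app always runs with the production config.
module.exports = {
  apps: [
    {
      name: 'coffee-api',
      script: 'index.js',
      env_production: {
        NODE_ENV: 'production',
      },
    },
  ],
};
```

You would then start the app with `pm2 start ecosystem.config.js --env production`.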
You should see the application running.
Let’s use Postman to test the API. Send a POST request to http://SERVER_PUBLIC_DNS/api/coffee/make with the following JSON body.
{
"type": "cappuccino"
}
In the response, you should get your cappuccino!
(MK-001) Here's your cappuccino. Enjoy!
Finally, we can set up the load balancer! We need to first create a target group and forward traffic to this group in the load balancer.
You are done setting up the load balancer!
Recall that when we set up the reverse proxy, we set the ServerName to the public DNS of the EC2 instance. We verified that the API endpoint is working by testing it with Postman.
Now, we want to configure the reverse proxy to forward traffic from your load balancer to the API endpoint.
At this point, you are ready to observe the magic of load balancing. Recall that you have one endpoint set up as MK-001 and the other as MK-002. In Postman, update the URL in your request to point to the public DNS of the load balancer.
Send the request multiple times, and you should see the host name in the response alternate like the following.
(MK-001) Here's your cappuccino. Enjoy!
(MK-002) Here's your cappuccino. Enjoy!
There you go. That’s a victory! Take a break and enjoy your coffee. Once you are ready, let’s add a WAF in front of the load balancer.
The process will take a few minutes. Then you should see the success badge at the top of the screen.
Once the Web ACL is created, click on it, and you will be able to see all the metrics you set up for this ACL. Test the coffee-api using Postman again. You should get the same result.
In the Overview of your web ACL, observe the diagram of requests flowing through your coffee shop, like the following. Ask your friends to hit your coffee-api, and see your coffee shop grow!
There you have it. Enjoy your cappuccino!
As the aroma of success wafts through our virtual coffee shop, we experienced the power of load balancers and WAF: we built a simple coffee-api, deployed it to two servers behind Apache reverse proxies, distributed traffic across them with an Application Load Balancer, and protected it all with a WAF Web ACL.
What is your use case for the load balancer and WAF? Let me know by leaving a comment below. Thanks for joining me on this journey. See you next time!