In this article, we'll explore the Token Bucket algorithm and implement it in Node.js for API rate limiting, in very simple terms.
Token Bucket is an algorithm used to limit resource or server usage. The server holds a finite number of tokens, and each incoming request consumes one token in order to be fulfilled. When there are more requests than tokens, the server denies (or queues) the excess requests until the tokens are refilled.
The tokens can be refilled in various ways. Two common approaches are refilling at a steady rate (tokens accrue continuously over time) and refilling the whole bucket after a fixed interval, which is the approach we'll use here.
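For illustration, here is a sketch of the steady-rate style of refill, where tokens accrue continuously based on elapsed time rather than all at once. The function name, field names, and numbers are illustrative, not part of the code we'll build below.

```javascript
// Steady-rate refill, computed lazily from elapsed time.
// `ratePerMs` is tokens gained per millisecond; the bucket never
// exceeds `capacity`.
function refill(bucket, now) {
  const elapsed = now - bucket.lastRefill;
  const newTokens = Math.floor(elapsed * bucket.ratePerMs);
  if (newTokens > 0) {
    bucket.tokens = Math.min(bucket.capacity, bucket.tokens + newTokens);
    bucket.lastRefill = now;
  }
  return bucket;
}

// 5 tokens per 20 seconds, starting from an empty bucket:
const bucket = { tokens: 0, capacity: 5, ratePerMs: 5 / 20000, lastRefill: 0 };
refill(bucket, 20000); // a full window has elapsed
console.log(bucket.tokens); // 5
refill(bucket, 24000); // 4 more seconds adds 1 token, capped at capacity
console.log(bucket.tokens); // 5
```

The advantage of this style is that a client who waits a few seconds gets a few tokens back, instead of having to wait out the entire window.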
Set up a new npm project and set "type": "module" in package.json.
npm init -y
Install express.
npm i express
Create a new file, name it anything, let's say "index.js". Write basic Express server code in it.
import express from 'express';

const app = express();

app.get('/', (req, res) => {
  res.send('Hello There !!');
});

const PORT = process.env.PORT || 8000;
app.listen(PORT, () => {
  console.log(`Server is running on PORT : ${PORT}`);
});
Now we'll rate-limit requests to '/'. We'll write a middleware to achieve this.
Create a new file for the middleware, let's say "rateLimiter.js", write the skeleton of the middleware function, and export it.
export const rateLimiter = (req, res, next) => {
};
const tokens = ["AAAA", "AAAA", "AAAA", "AAAA", "AAAA"];
let time = new Date().getTime();

export const rateLimiter = (req, res, next) => {
};
In this code, tokens is the array of tokens a user can consume to make requests to the server, and time stores the current time in milliseconds.
const tokens = ["AAAA", "AAAA", "AAAA", "AAAA", "AAAA"];
let time = new Date().getTime();

export const rateLimiter = (req, res, next) => {
  if (tokens.length) {
    tokens.pop(); // consume a token
    next();       // allow the request
  } else {
    res.send(`Service not available due to excess requests !! , ${new Date().getTime() - time}`);
  }
};
We added a simple if/else condition: if a request comes in and the tokens array still has a token, a token is popped and the request is allowed; otherwise the request is denied.
After the tokens array becomes empty, we need to refill it.
const tokens = ["AAAA", "AAAA", "AAAA", "AAAA", "AAAA"];
let time = new Date().getTime();

export const rateLimiter = (req, res, next) => {
  if (tokens.length) {
    tokens.pop();
    next();
  } else {
    if (new Date().getTime() - time >= 20000) {
      // refill the bucket back up to 5 tokens
      const len = tokens.length;
      for (let i = 0; i < 5 - len; i++) {
        tokens.push("AAAA");
      }
      time = new Date().getTime();
      rateLimiter(req, res, next); // retry with the refilled bucket
    } else {
      res.send(`Service not available due to excess requests !! , ${new Date().getTime() - time}`);
    }
  }
};
In this code, we changed the else branch a little. We calculate the time difference with "new Date().getTime() - time". If this difference reaches 20000 milliseconds, i.e., 20 seconds, the tokens array is refilled to 5 tokens.
But this code has a problem: the time variable is initialized when the server starts, not when requests arrive. Suppose the first request comes 20 seconds after startup. The five initial tokens can be used immediately, and since 20 seconds have already elapsed, the refill branch runs as soon as they are exhausted and adds five more tokens right away. That allows ten requests in quick succession, when we only want five requests per 20 seconds.
With a small change to the code, we can achieve the desired result.
const tokens = ["AAAA", "AAAA", "AAAA", "AAAA", "AAAA"];
let time = new Date().getTime();

export const rateLimiter = (req, res, next) => {
  // allow only while a token exists and the 20-second window hasn't elapsed
  if (tokens.length && new Date().getTime() - time < 20000) {
    tokens.pop();
    next();
  } else {
    if (new Date().getTime() - time >= 20000) {
      // refill the bucket back up to 5 tokens and start a new window
      const len = tokens.length;
      for (let i = 0; i < 5 - len; i++) {
        tokens.push("AAAA");
      }
      time = new Date().getTime();
      rateLimiter(req, res, next); // retry with the refilled bucket
    } else {
      res.send(`Service not available due to excess requests !! , ${new Date().getTime() - time}`);
    }
  }
};
We changed the outer if condition: a request is now allowed only when the server has a token and less than 20 seconds have passed since the window started. Once the time difference exceeds 20 seconds, the array is refilled back to 5 tokens and the window restarts before the request is retried.
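The same fixed-window logic can be sketched more compactly with a plain numeric counter in place of the array of "AAAA" strings. This is just an illustrative variant, not the article's code; the constant names are made up here:

```javascript
// Fixed-window token bucket with a numeric counter.
const MAX_TOKENS = 5;
const WINDOW_MS = 20000;

let tokens = MAX_TOKENS;
let windowStart = Date.now();

function tryConsume(now = Date.now()) {
  if (now - windowStart >= WINDOW_MS) {
    tokens = MAX_TOKENS; // refill the whole bucket
    windowStart = now;   // start a fresh window
  }
  if (tokens > 0) {
    tokens--;            // spend one token for this request
    return true;         // allow
  }
  return false;          // deny: bucket is empty
}
```

Popping strings and counting down a number behave identically; the counter just avoids allocating an array.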
Now, add the middleware to the route.
import express from 'express';
import { rateLimiter } from './rateLimiter.js';

const app = express();

app.get('/', rateLimiter, (req, res) => {
  res.send('Hello There !!');
});

const PORT = process.env.PORT || 8000;
app.listen(PORT, () => {
  console.log(`Server is running on PORT : ${PORT}`);
});
The final code looks like this.
// "index.js"
import express from 'express';
import { rateLimiter } from './rateLimiter.js';

const app = express();

app.get('/', rateLimiter, (req, res) => {
  res.send('Hello There !!');
});

const PORT = process.env.PORT || 8000;
app.listen(PORT, () => {
  console.log(`Server is running on PORT : ${PORT}`);
});
// "rateLimiter.js"
const tokens = ["AAAA", "AAAA", "AAAA", "AAAA", "AAAA"];
let time = new Date().getTime();

export const rateLimiter = (req, res, next) => {
  // allow only while a token exists and the 20-second window hasn't elapsed
  if (tokens.length && new Date().getTime() - time < 20000) {
    tokens.pop();
    next();
  } else {
    if (new Date().getTime() - time >= 20000) {
      // refill the bucket back up to 5 tokens and start a new window
      const len = tokens.length;
      for (let i = 0; i < 5 - len; i++) {
        tokens.push("AAAA");
      }
      time = new Date().getTime();
      rateLimiter(req, res, next); // retry with the refilled bucket
    } else {
      res.send(`Service not available due to excess requests !! , ${new Date().getTime() - time}`);
    }
  }
};
Now, I think we have understood how Token Bucket is implemented, and you should be able to customize the number of tokens and the refill window. If you have any queries, write them in the comments, and I will try to help you out.
The Token Bucket algorithm is used for rate limiting to control the consumption of resources or server usage. It involves maintaining a bucket of tokens, where each token represents permission to perform a specific action or request.
Requests consume tokens from the bucket. When the bucket is empty, further requests are either denied or queued until tokens are refilled. Tokens can be refilled at a fixed rate or based on certain conditions.
Rate limiting helps prevent abuse or overload of APIs by restricting the number of requests a client can make within a specified timeframe. It ensures fair usage of resources and protects the server from being overwhelmed.
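Note that the middleware we built uses one shared bucket for all clients, so one heavy user can exhaust everyone's tokens. Real APIs usually keep a separate bucket per client. A minimal sketch, assuming clients are identified by IP address; the Map, constants, and helper name are illustrative:

```javascript
// One fixed-window bucket per client, created lazily on first request.
const buckets = new Map();
const CAPACITY = 5;
const WINDOW_MS = 20000;

function allow(clientId, now = Date.now()) {
  let b = buckets.get(clientId);
  if (!b || now - b.windowStart >= WINDOW_MS) {
    b = { tokens: CAPACITY, windowStart: now }; // new or refilled bucket
    buckets.set(clientId, b);
  }
  if (b.tokens > 0) {
    b.tokens--;
    return true;
  }
  return false;
}

// Hypothetical Express usage, keyed by IP:
// app.get('/', (req, res, next) =>
//   allow(req.ip) ? next() : res.status(429).send('Too many requests'));
```

Sending HTTP status 429 (Too Many Requests) instead of a plain 200 response also lets well-behaved clients detect the limit and back off.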