Rate Limiting in REST APIs (Using Spring Boot and Redis)
Rate limiting is a technique used to control the number of incoming requests to a server or system within a specified time frame. It helps prevent overload and ensures stability, reliability and fairness for all users. This technique is particularly important for REST APIs, which are designed to handle a large number of requests from various clients.
In this article, we will discuss the implementation of rate limiting in REST APIs using Spring Boot and Redis. Spring Boot is a popular Java-based framework for building microservices, while Redis is an in-memory data structure store that can be used as a database, cache, and message broker. By combining these two technologies, we can create a scalable and efficient rate limiting system for REST APIs.
Applications of rate limiting in REST APIs include controlling access to limited resources, preventing abuse and exploitation, and managing traffic spikes. By implementing rate limiting, API owners can ensure that their API remains available and responsive for all users, even during high-traffic conditions. For example, one might apply rate limiting to the “password reset” endpoint of an application to prevent malicious users from bombarding the API with requests while carrying out a brute-force attack on a particular user’s password.
In this article I will demonstrate how we can implement rate limiting on a simple REST endpoint using Spring Boot and Redis (which we will run as a Docker container). So without further ado, let’s get started.
To download a Spring project with the specified libraries, follow these steps:
- Go to “start.spring.io” website.
- Select the desired Spring Boot version.
- Choose the project type as “Maven Project” or “Gradle Project” based on your preference.
- Select the packaging type as “Jar”.
- Choose the Java version you want to use.
- Enter the project name and group name.
- In the “Dependencies” section, search for “spring-boot-starter-web” (Spring Web) and “lombok” (Lombok) and add them.
- We will also need a few other libraries, namely “redisson-spring-boot-starter” and “bucket4j-spring-boot-starter”, which can be found on the mvnrepository website. Make sure to pick the Gradle version of these dependency snippets. In case of any doubts, you can refer to the repository link shared at the end of this article.
- Click on the “Generate” button to download the project.
The utilities of the above libraries/packages are as follows:
- “spring-boot-starter-web”: This is the Spring Boot starter for building web applications, including RESTful services.
- “redisson-spring-boot-starter”: This is a Redis client library that provides support for Redis-based data structures and can be used as a distributed lock, cache, or queue.
- “bucket4j-spring-boot-starter”: This is a Java library for rate limiting that implements the token bucket algorithm. It is used to control the rate of incoming requests to a system.
- “lombok”: This is a library that provides annotations for code generation, reducing boilerplate code.
- “junit”: This is a popular testing framework for Java. It is used to write and run tests for the Spring project.
Note: In this article we’ll be using Java version 11, and we’ll implement the rate limiting using the “token bucket” algorithm. So what is this token-bucket algorithm all about? Let’s discuss it a bit.
Token Bucket is a simple and widely used algorithm for rate limiting. It works by maintaining a bucket with a certain number of tokens. Each incoming request to the system removes one token from the bucket. If there are no tokens left in the bucket, the request is rejected. The tokens are replenished at a fixed rate, so that the number of tokens in the bucket increases over time.
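To make this concrete, here is a minimal single-node sketch of the idea in plain Java (illustrative only; later in the article we rely on bucket4j, backed by Redis, rather than hand-rolling it):

```java
// A minimal, thread-safe token bucket (illustrative sketch, not production code).
class TokenBucket {

    private final long capacity;          // maximum number of tokens the bucket can hold
    private final double refillPerMilli;  // tokens added per millisecond
    private double tokens;                // current token count
    private long lastRefillTime;          // timestamp of the last refill, in ms

    TokenBucket(long capacity, long refillTokens, long refillPeriodMillis) {
        this.capacity = capacity;
        this.refillPerMilli = (double) refillTokens / refillPeriodMillis;
        this.tokens = capacity;
        this.lastRefillTime = System.currentTimeMillis();
    }

    // Returns true if a token was available (request allowed), false otherwise.
    synchronized boolean tryConsume() {
        refill();
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false;
    }

    // Replenish tokens based on the time elapsed since the last refill,
    // never exceeding the bucket's capacity.
    private void refill() {
        long now = System.currentTimeMillis();
        tokens = Math.min(capacity, tokens + (now - lastRefillTime) * refillPerMilli);
        lastRefillTime = now;
    }
}
```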
Advantages of Token Bucket algorithm:
- Simple to implement: The Token Bucket algorithm is relatively straightforward to implement, making it a popular choice for rate limiting.
- Flexible: Token Bucket can be used to limit the rate of requests over a variety of time frames, making it highly customizable.
- Scalable: Token Bucket can be used for rate limiting across multiple servers, making it a scalable solution for large systems.
Disadvantages of Token Bucket algorithm:
- Bursty traffic: Token Bucket may not be effective for rate limiting when there is bursty traffic, as the tokens may be used up quickly, leading to an excessive number of rejected requests.
- Lack of fine-grained control: Token Bucket does not provide fine-grained control over the rate of incoming requests, as the rate is determined by the rate at which tokens are replenished.
Other rate limiting algorithms include:
- Leaky Bucket: This is similar to Token Bucket, but instead of adding tokens to the bucket at a fixed rate, incoming requests are accumulated in the bucket and released at a fixed rate.
- Fixed Window: In this algorithm, the number of requests is limited over a fixed time window. Requests are rejected once the limit is reached.
- Sliding Window: In this algorithm, the rate of incoming requests is limited over a sliding time window. Requests are rejected if the rate exceeds the specified limit over any part of the sliding window.
- Smooth Bursting: This algorithm combines the benefits of Fixed Window and Token Bucket algorithms, allowing for a certain number of bursts of incoming requests, beyond which requests are rejected.
Each of these algorithms has its own advantages and disadvantages, and the choice of algorithm will depend on the specific requirements of the system being rate limited. For the sake of simplicity, I have gone with the token-bucket implementation here. You can go through the details of the other algorithms if you wish. I found this article quite useful — https://betterprogramming.pub/4-rate-limit-algorithms-every-developer-should-know-7472cb482f48
Enough talk. Let’s get started with our project setup. Here is the build.gradle file for your reference:
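A minimal version would look something like this (the versions shown are illustrative; pick whichever current, mutually compatible versions work for your setup):

```groovy
plugins {
    id 'org.springframework.boot' version '2.7.8'
    id 'io.spring.dependency-management' version '1.0.15.RELEASE'
    id 'java'
}

group = 'com.example'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '11'

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.redisson:redisson-spring-boot-starter:3.19.0'
    implementation 'com.giffing.bucket4j.spring.boot.starter:bucket4j-spring-boot-starter:0.8.0'
    compileOnly 'org.projectlombok:lombok'
    annotationProcessor 'org.projectlombok:lombok'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}
```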
As I mentioned before, we’ll be using Redis as our DB for implementing the rate-limiting functionality. We can easily spin up a Redis Docker container with the following command:
```bash
docker run -d -p 6379:6379 --name rate-limit redis
```
Now let’s have a look at the folder structure:
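It looks roughly like this (the package name here is assumed for illustration):

```
src
├── main
│   └── java
│       └── com/example/redisspringratelimiting
│           ├── RedisspringratelimitingApplication.java
│           ├── RedisConfig.java
│           ├── RateLimitConfig.java
│           ├── RateLimitController.java
│           └── ApiResponse.java
└── test
    └── java
        └── com/example/redisspringratelimiting
            └── RedisspringratelimitingApplicationTests.java
```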
If you’re a Java developer, you’ll find most of the files pretty much self-explanatory. Take the controller, for example. We are taking in a path param, which we have assumed to be the user ID, and based on that user ID we are implementing rate limiting using the token-bucket algorithm. Please note that this is done as such just to keep things simple. In a typical production environment, we would most likely collect the user data (if we are rate limiting based on limits assigned to different users) from an authentication token in the Spring Security context, then use that token to extract the user’s details from the DB, like which API limits are assigned to this user, and subsequently use that value as our limit. There are 3 important files here which we’ll be discussing: `RedisConfig.java`, `RateLimitConfig.java` and `RedisspringratelimitingApplicationTests.java`, the file in which we’ll write some simple test cases to verify our implementation. So let’s look at these files one by one.
Here is the `RedisConfig.java` file:
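A sketch of what the class can look like, assuming the Redisson and bucket4j-spring-boot-starter APIs pulled in above (exact package paths can vary slightly between library versions; the beans are discussed one by one below):

```java
import javax.cache.CacheManager;
import javax.cache.Caching;

import org.redisson.config.Config;
import org.redisson.jcache.configuration.RedissonConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

import com.giffing.bucket4j.spring.boot.starter.config.cache.SyncCacheResolver;
import com.giffing.bucket4j.spring.boot.starter.config.cache.jcache.JCacheCacheResolver;

import io.github.bucket4j.distributed.proxy.ProxyManager;
import io.github.bucket4j.grid.jcache.JCacheProxyManager;

@Configuration
public class RedisConfig {

    // Redisson configuration pointing at the local Redis container (default port 6379).
    @Bean
    public Config config() {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://localhost:6379");
        return config;
    }

    // Creates a JCache cache named "cache" inside Redis via Redisson.
    @Bean
    public CacheManager cacheManager(Config config) {
        CacheManager manager = Caching.getCachingProvider().getCacheManager();
        manager.createCache("cache", RedissonConfiguration.fromConfig(config));
        return manager;
    }

    // bucket4j's JCache-backed ProxyManager, operating on the Redis-backed cache.
    @Bean
    public ProxyManager<String> proxyManager(CacheManager cacheManager) {
        return new JCacheProxyManager<>(cacheManager.getCache("cache"));
    }

    // Marked @Primary so the bucket4j starter resolves caches through our
    // JCache (Redis) setup rather than its own default.
    @Bean
    @Primary
    public SyncCacheResolver bucket4jCacheResolver(CacheManager cacheManager) {
        return new JCacheCacheResolver(cacheManager);
    }
}
```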
Since this is a configuration file, it’s annotated with `@Configuration`. Inside this file we have defined certain beans.

The first one is the `Config` bean. This defines the Redis configuration. As discussed earlier, our Redis Docker container is running on the local machine with a port mapping of 6379, which is the default port for Redis.

The second one is the `CacheManager` bean. It takes in a reference to the Redis configuration defined before and creates a cache by the name of “cache”. In simple words, this is the step where we actually create a cache inside our Redis container.

The third bean is the `proxyManager` bean, which returns an instance of `JCacheProxyManager`, the JCache-based implementation provided by the `bucket4j` library. But if this simply wraps a JCache cache, how do we instruct Spring to use Redis itself as our cache and not the default one provided by this library?

This is where the last bean comes into the picture, and that’s why it’s annotated with `@Primary`. This is the `SyncCacheResolver` bean, which resolves the conflict between the two implementations. Please note that this last bean also takes in a reference to the `CacheManager` bean (because, as mentioned earlier, this is where we direct Spring Boot to use Redis as our cache) and resolves it using that. So that’s pretty much all about the Redis configuration. Now let’s move on to the next file, i.e. `RateLimitConfig.java`:
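A sketch of the class, assuming bucket4j’s distributed `ProxyManager` API; the interval-based refill strategy is my assumption, consistent with the description that follows:

```java
import java.time.Duration;
import java.util.function.Supplier;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import io.github.bucket4j.Bandwidth;
import io.github.bucket4j.Bucket;
import io.github.bucket4j.BucketConfiguration;
import io.github.bucket4j.Refill;
import io.github.bucket4j.distributed.proxy.ProxyManager;

@Component
public class RateLimitConfig {

    // Shared view over the buckets stored in Redis, so the same limits
    // apply across multiple instances of the application.
    @Autowired
    private ProxyManager<String> buckets;

    // Returns the bucket for the given key (here, the user ID), creating it
    // with the default configuration if it does not exist yet.
    public Bucket resolveBucket(String key) {
        Supplier<BucketConfiguration> configSupplier = getConfigSupplierForUser(key);
        return buckets.builder().build(key, configSupplier);
    }

    // A capacity of 20 tokens, refilled at a rate of 20 tokens every minute.
    private Supplier<BucketConfiguration> getConfigSupplierForUser(String userId) {
        Refill refill = Refill.intervally(20, Duration.ofMinutes(1));
        Bandwidth limit = Bandwidth.classic(20, refill);
        return () -> BucketConfiguration.builder()
                .addLimit(limit)
                .build();
    }
}
```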
The class has an autowired field `buckets` of type `ProxyManager`. This field is used to access the shared instances of `Bucket` that will be used for rate limiting across multiple instances of the application.

The `resolveBucket` method takes in a `key` parameter, which can be an authentication token. In a production environment, the relevant user details could be extracted from the token and used to fetch the corresponding rate-limit details for that particular user from the database, as discussed earlier. The method then returns a `Bucket` instance for the specified `key` using the `buckets` `ProxyManager` field.

The `getConfigSupplierForUser` method returns a supplier of `BucketConfiguration`. It creates a `Refill` object with a refill rate of 20 tokens every 1 minute. It also creates a `Bandwidth` object with a capacity of 20 tokens and the specified `Refill`. Finally, it returns a `BucketConfiguration` built from the `Bandwidth` object. Now let’s have a look at the endpoint (or the controller layer) that we have created.
Here is the RateLimitController:
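A sketch of the controller; `ApiResponse` is assumed to be a simple one-field wrapper, and the `user/{id}` mapping matches the URLs used in the tests later:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import io.github.bucket4j.Bucket;

@RestController
@RequestMapping("/")
public class RateLimitController {

    @Autowired
    private RateLimitConfig rateLimitConfig;

    @GetMapping("user/{id}")
    public ResponseEntity<ApiResponse> getInfo(@PathVariable String id) {
        // Fetch (or create) the bucket for this user and try to consume one token.
        Bucket bucket = rateLimitConfig.resolveBucket(id);
        if (bucket.tryConsume(1)) {
            return ResponseEntity.ok(new ApiResponse("Success for user " + id));
        }
        return ResponseEntity.status(HttpStatus.TOO_MANY_REQUESTS)
                .body(new ApiResponse("Rate limit exceeded for user " + id));
    }
}

// Assumed simple response body wrapper with a single "message" field;
// Jackson serializes it as {"message": "..."}.
class ApiResponse {
    private final String message;

    ApiResponse(String message) {
        this.message = message;
    }

    public String getMessage() {
        return message;
    }
}
```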
The class uses the `@RestController` and `@RequestMapping("/")` annotations to define the endpoint and map it to the class. The class also makes use of the `@Autowired` annotation to automatically wire a bean of type `RateLimitConfig` into the class instance. The `getInfo` method maps to the GET HTTP request and takes a path variable `id` as an input, which represents the user ID.

In the method body, the `RateLimitConfig` instance is used to obtain a `Bucket` object that corresponds to the given user ID. The `Bucket` instance is then used to check if there are enough tokens (1) available to allow access to the API by calling the `tryConsume` method. If there are sufficient tokens, a `ResponseEntity` with a 200 HTTP status code and an `ApiResponse` body containing the message `Success for user {id}` is returned. If there are not enough tokens, a `ResponseEntity` with a 429 HTTP status code and an `ApiResponse` body containing the message `Rate limit exceeded for user {id}` is returned. The `RateLimitConfig` class provides the implementation for rate limiting, and the `RateLimitController` class enforces the rate limit by consuming tokens from the `Bucket` for a given user ID.
Now let’s test our API. Below is the test file:
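A sketch of such a test class (the application is assumed to be already running on port 8085, as explained in the note below; the exact assertions in the repository may differ slightly):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.HttpClientErrorException;
import org.springframework.web.client.RestTemplate;

@SpringBootTest
class RedisspringratelimitingApplicationTests {

    // Mocked so the test's own application context can start without
    // a Redis connection; the real application under test runs separately.
    @MockBean
    private RateLimitConfig rateLimitConfig;

    @MockBean
    private RedisConfig redisConfig;

    private final RestTemplate restTemplate = new RestTemplate();

    @Test
    void testAPIWithoutRateLimit() {
        ResponseEntity<String> response =
                restTemplate.getForEntity("http://localhost:8085/user/1", String.class);
        assertEquals(HttpStatus.OK, response.getStatusCode());
        assertEquals("{\"message\":\"Success for user 1\"}", response.getBody());
    }

    @Test
    void testAPIWithRateLimitExceeding() {
        // Exhaust the bucket: 20 tokens per minute for this user.
        for (int i = 0; i < 20; i++) {
            restTemplate.getForEntity("http://localhost:8085/user/2", String.class);
        }
        // The 21st request should be rejected; RestTemplate surfaces the
        // 429 response as an HttpClientErrorException.
        HttpClientErrorException ex = assertThrows(HttpClientErrorException.class,
                () -> restTemplate.getForEntity("http://localhost:8085/user/2", String.class));
        assertEquals(HttpStatus.TOO_MANY_REQUESTS, ex.getStatusCode());
        assertEquals("{\"message\":\"Rate limit exceeded for user 2\"}",
                ex.getResponseBodyAsString());
    }
}
```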
The class is annotated with `@SpringBootTest`, indicating that it’s a test for a Spring Boot application. It declares two mock beans, `rateLimitConfig` and `redisConfig`, which are necessary for initializing the application context for the tests. There are two test methods: `testAPIWithoutRateLimit()` and `testAPIWithRateLimitExceeding()`.

The `testAPIWithoutRateLimit()` method exercises the API endpoint without exceeding the rate limit. It creates an instance of the `RestTemplate` class and uses it to make a GET request to `http://localhost:8085/user/1`. Then it uses JUnit’s assertions to verify the response status code and the response body. The expected status code is 200, and the expected response body is `{"message":"Success for user 1"}`.

The `testAPIWithRateLimitExceeding()` method tests the same API endpoint but deliberately exceeds the rate limit. It makes 20 GET requests to `http://localhost:8085/user/2` using a `RestTemplate` instance and then tries to make a 21st request. Since the rate limit is exceeded, the server returns a 429 status code and a response body of `{"message":"Rate limit exceeded for user 2"}`, which the test again verifies with JUnit’s assertions.
Note: The test cases are written in such a manner that they will execute successfully only when the application is already running. In simple terms, these are NOT unit tests for our methods, as we have not done any mocking for the Redis DB; instead, they are more like integration tests. So the correct way to verify our implementation is to start the application first and then run the tests while it is running.

We could have implemented unit tests as well, but I found this approach a bit simpler. Anyway, as long as we are able to verify that the API works as intended, it doesn’t matter how we test it.
Here is a screenshot of the test cases passing:
So that’s all for this article. You can find the repository link below. I have included relevant links, such as GitHub issues and Stack Overflow threads, as comments inside the code. Do give this article some claps if you found it useful.
Happy Coding!