      How To Implement Browser Caching with Nginx’s header Module on CentOS 8



      The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.

      Introduction

The faster a website loads, the more likely a visitor is to stay. When websites are full of images and interactive content run by scripts loaded in the background, opening a page is not a simple task: it means requesting many different files from the server, one by one. Minimizing the quantity of these requests is one way to speed up your website.

One method for improving website performance is browser caching. Browser caching tells the browser that it can reuse local versions of downloaded files instead of requesting them from the server again and again. To do this, you must introduce new HTTP response headers that tell the browser how to behave.

      Nginx’s header module can help you accomplish browser caching. You can use this module to add any arbitrary headers to the response, but its major role is to properly set caching headers. In this tutorial, we will use Nginx’s header module to implement browser caching.

      Prerequisites

      To follow this tutorial, you will need:

      Step 1 — Creating Test Files

      In this step, we will create several test files in the default Nginx directory. We’ll use these files later to check Nginx’s default behavior and then to test that browser caching is working.

      To infer what kind of file is served over the network, Nginx does not analyze the file contents; that would be prohibitively slow. Instead, it looks up the file extension to determine the file’s MIME type, which denotes its purpose.

      Because of this behavior, the content of our test files is irrelevant. By naming the files appropriately, we can trick Nginx into thinking that, for example, one entirely empty file is an image and another is a stylesheet.
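For reference, Nginx ships this extension-to-MIME-type mapping in its mime.types file (typically /etc/nginx/mime.types), which the main configuration file includes. A trimmed sketch of that mapping, covering only the extensions used in this tutorial, looks something like this (the real file contains many more entries):

types {
    text/html               html htm shtml;
    text/css                css;
    application/javascript  js;
    image/jpeg              jpeg jpg;
}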

      Create a file named test.html in the default Nginx directory using truncate. This extension denotes that it’s an HTML page:

      • sudo truncate -s 1k /usr/share/nginx/html/test.html

      Let’s create a few more test files in the same manner: one jpg image file, one css stylesheet, and one js JavaScript file:

      • sudo truncate -s 1k /usr/share/nginx/html/test.jpg
      • sudo truncate -s 1k /usr/share/nginx/html/test.css
      • sudo truncate -s 1k /usr/share/nginx/html/test.js

      The next step is to check how Nginx behaves with respect to sending caching control headers on a fresh installation with the files we have just created.

      Step 2 — Checking the Default Behavior

By default, all files are served with the same caching behavior. To explore this, we’ll use the HTML file we created in Step 1, but you can run these tests with any of the example files.

      So, let’s check if test.html is served with any information regarding how long the browser should cache the response. The following command requests a file from our local Nginx server and shows the response headers:

      • curl -I http://localhost/test.html

      You should see several HTTP response headers:

      Output: Nginx response headers

HTTP/1.1 200 OK
Server: nginx/1.14.1
Date: Thu, 04 Feb 2021 18:23:09 GMT
Content-Type: text/html
Content-Length: 1024
Last-Modified: Thu, 04 Feb 2021 18:22:39 GMT
Connection: keep-alive
ETag: "601c3b6f-400"
Accept-Ranges: bytes

      In the second to last line you will find the ETag header, which contains a unique identifier for this particular revision of the requested file. If you execute the previous curl command repeatedly, you will find the exact same ETag value.

      When using a web browser, the ETag value is stored and sent back to the server with the If-None-Match request header when the browser wants to request the same file again — for example, when refreshing the page.

      We can simulate this on the command line with the following command. Make sure you change the ETag value in this command to match the ETag value in your previous output:

      • curl -I -H 'If-None-Match: "601c3b6f-400"' http://localhost/test.html

      The response will now be different:

      Output: Nginx response headers

HTTP/1.1 304 Not Modified
Server: nginx/1.14.1
Date: Thu, 04 Feb 2021 18:24:05 GMT
Last-Modified: Thu, 04 Feb 2021 18:22:39 GMT
Connection: keep-alive
ETag: "601c3b6f-400"

      This time, Nginx will respond with 304 Not Modified. It won’t send the file over the network again; instead, it will tell the browser that it can reuse the file it already has downloaded locally.

      This is useful because it reduces network traffic, but it’s not good enough to achieve good caching performance. The problem with ETag is that the browser always sends a request to the server asking if it can reuse its cached file. Even though the server responds with a 304 instead of sending the file again, it still takes time to make the request and receive the response.

In the next step, we will use the headers module to append caching control information. This will make the browser cache some files locally without explicitly asking the server if it’s fine to do so.

      Step 3 — Configuring Cache-Control and Expires Headers

In addition to the ETag file validation header, there are two caching control response headers: Cache-Control and Expires. Cache-Control is the newer header; it has more options than Expires and is generally more useful if you want finer control over your caching behavior.

      If these headers are set, they can tell the browser that the requested file can be kept locally for a certain amount of time (including forever) without requesting it again. If the headers are not set, browsers will always request the file from the server, expecting either 200 OK or 304 Not Modified responses.
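As a quick illustration of what such headers look like, a response that the browser may reuse for one hour without contacting the server again could carry values like these (both values below are made up for the example; Cache-Control's max-age counts in seconds, while Expires is an absolute date):

Cache-Control: max-age=3600
Expires: Thu, 04 Feb 2021 19:23:09 GMT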

      We can use the header module to set these HTTP headers. The header module is a core Nginx module, which means it doesn’t need to be installed separately to be used.

To configure the header module, open the main Nginx configuration file, which contains the default server block, in vi or your favorite text editor:

      • sudo vi /etc/nginx/nginx.conf

      Find the server configuration block:

      /etc/nginx/nginx.conf

      . . .
      server {
          listen 80 default_server;
          listen [::]:80 default_server;
          server_name  _;
          root         /usr/share/nginx/html;
      . . .
      

      Add the following two new sections here: one before the server block, to define how long to cache different file types, and one inside it, to set the caching headers appropriately:

      Modified /etc/nginx/nginx.conf

      . . .
      # Expires map
      map $sent_http_content_type $expires {
          default                    off;
          text/html                  epoch;
          text/css                   max;
          application/javascript     max;
          ~image/                    max;
          ~font/                     max;
      }
      
      server {
          listen 80 default_server;
          listen [::]:80 default_server;
          server_name  _;
          root         /usr/share/nginx/html;
      
          expires $expires;
      . . .
      

      The section before the server block is a new map block that defines the mapping between the file type and how long that kind of file should be cached.

      We’re using several different settings in this map:

• The default value is set to off, which will not add any caching control headers. It’s a safe bet for content for which we have no particular requirements on how the cache should work.

      • For text/html, we set the value to epoch. This is a special value that results explicitly in no caching, which forces the browser to always ask if the website itself is up to date.

      • For text/css and application/javascript, which are stylesheets and JavaScript files, we set the value to max. This means the browser will cache these files for as long as possible, reducing the number of requests considerably given that there are typically many of these files.

      • The last two settings are for ~image/ and ~font/, which are regular expressions that will match all file types containing image/ or font/ in their MIME type name (like image/jpg, image/png or font/woff2). Like stylesheets, both pictures and web fonts on websites can be safely cached to speed up page-loading times, so we set this to max as well.

Note: These are just a few examples of the most common MIME types used on websites. You can find a more extensive list of such types in a Common MIME types reference and add any others you find useful to the map.
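For instance, a hedged sketch of an extended map, assuming you wanted to add image/svg+xml explicitly and give other images a finite 30-day lifetime instead of max, might look like this (the 30d value and the extra entry are illustrative choices, not part of this tutorial’s configuration):

map $sent_http_content_type $expires {
    default                    off;
    text/html                  epoch;
    text/css                   max;
    application/javascript     max;
    image/svg+xml              max;
    ~image/                    30d;
    ~font/                     max;
}

Because exact string values in a map take precedence over regular expressions, the image/svg+xml entry wins over the ~image/ pattern here.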

      Inside the server block, the expires directive (a part of the headers module) sets the caching control headers. It uses the value from the $expires variable set in the map. This way, the resulting headers will be different depending on the file type.

      Save and close the file to exit.

      To enable the new configuration, restart Nginx:

      • sudo systemctl restart nginx

      Next, let’s make sure our new configuration works.

      Step 4 — Testing Browser Caching

      Execute the same request as before for the test HTML file:

      • curl -I http://localhost/test.html

      This time the response will be different. You will see two additional HTTP response headers:

      Nginx response headers

      HTTP/1.1 200 OK
      Server: nginx/1.14.1
      Date: Thu, 04 Feb 2021 18:28:19 GMT
      Content-Type: text/html
      Content-Length: 1024
      Last-Modified: Thu, 04 Feb 2021 18:22:39 GMT
      Connection: keep-alive
      ETag: "601c3b6f-400"
      Expires: Thu, 01 Jan 1970 00:00:01 GMT
      Cache-Control: no-cache
      Accept-Ranges: bytes
      

      The Expires header shows a date in the past and Cache-Control is set with no-cache, which tells the browser to always ask the server if there is a newer version of the file (using the ETag header, like before).

You’ll find a difference in the response with the test image file:

      • curl -I http://localhost/test.jpg

      Note the new output:

      Nginx response headers

      HTTP/1.1 200 OK
      Server: nginx/1.14.1
      Date: Thu, 04 Feb 2021 18:29:05 GMT
      Content-Type: image/jpeg
      Content-Length: 1024
      Last-Modified: Thu, 04 Feb 2021 18:22:42 GMT
      Connection: keep-alive
      ETag: "601c3b72-400"
      Expires: Thu, 31 Dec 2037 23:55:55 GMT
      Cache-Control: max-age=315360000
      Accept-Ranges: bytes
      

In this case, Expires shows a date in the distant future, and Cache-Control contains max-age information, which tells the browser how long it can cache the file, in seconds (315360000 seconds is ten years). This tells the browser to cache the downloaded image for as long as it can, so any subsequent appearances of this image will use the local cache and not send a request to the server at all.

      The result should be similar for both test.js and test.css, as both JavaScript and stylesheet files are set with caching headers too.

This means the cache control headers have been configured properly and your website will benefit from the performance gain and fewer server requests due to browser caching. You should customize the caching settings based on your website’s content, but the defaults in this article are a reasonable place to start.

      Conclusion

The headers module can be used to add any arbitrary headers to the response, but properly setting caching control headers is one of its most useful applications. It increases performance for website users, especially on networks with higher latency, like mobile carrier networks. It can also lead to better results on search engines that factor speed tests into their results. Setting browser caching headers is a key recommendation of Google’s PageSpeed and similar performance testing tools.

      You can find more detailed information about the headers module in Nginx’s official headers module documentation.




      How To Implement Browser Caching with Nginx’s header Module on Ubuntu 20.04


      The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.

      Introduction

The faster a website loads, the more likely a visitor is to stay. When websites are full of images and interactive content run by scripts loaded in the background, opening a page is not a simple task: it means requesting many different files from the server, one by one. Minimizing the quantity of these requests is one way to speed up your website.

One method for improving website performance is browser caching. Browser caching tells the browser that it can reuse local versions of downloaded files instead of requesting them from the server again and again. To do this, you must introduce new HTTP response headers that tell the browser how to behave.

      Nginx’s header module can help you accomplish browser caching. You can use this module to add any arbitrary headers to the response, but its major role is to properly set caching headers. In this tutorial, we will use Nginx’s header module to implement browser caching.

      Prerequisites

      To follow this tutorial, you will need:

      Step 1 — Creating Test Files

      In this step, we will create several test files in the default Nginx directory. We’ll use these files later to check Nginx’s default behavior and then to test that browser caching is working.

      To infer what kind of file is served over the network, Nginx does not analyze the file contents; that would be prohibitively slow. Instead, it looks up the file extension to determine the file’s MIME type, which denotes its purpose.

      Because of this behavior, the content of our test files is irrelevant. By naming the files appropriately, we can trick Nginx into thinking that, for example, one entirely empty file is an image and another is a stylesheet.

      Create a file named test.html in the default Nginx directory using truncate. This extension denotes that it’s an HTML page:

      • sudo truncate -s 1k /var/www/html/test.html

      Let’s create a few more test files in the same manner: one jpg image file, one css stylesheet, and one js JavaScript file:

      • sudo truncate -s 1k /var/www/html/test.jpg
      • sudo truncate -s 1k /var/www/html/test.css
      • sudo truncate -s 1k /var/www/html/test.js

      The next step is to check how Nginx behaves with respect to sending caching control headers on a fresh installation with the files we have just created.

      Step 2 — Checking the Default Behavior

By default, all files are served with the same caching behavior. To explore this, we’ll use the HTML file we created in Step 1, but you can run these tests with any of the example files.

      So, let’s check if test.html is served with any information regarding how long the browser should cache the response. The following command requests a file from our local Nginx server and shows the response headers:

      • curl -I http://localhost/test.html

      You will see several HTTP response headers:

      Output: Nginx response headers

HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 02 Feb 2021 19:03:21 GMT
Content-Type: text/html
Content-Length: 1024
Last-Modified: Tue, 02 Feb 2021 19:02:58 GMT
Connection: keep-alive
ETag: "6019a1e2-400"
Accept-Ranges: bytes

      In the second to last line you will find the ETag header, which contains a unique identifier for this particular revision of the requested file. If you execute the previous curl command repeatedly, you will find the exact same ETag value.

      When using a web browser, the ETag value is stored and sent back to the server with the If-None-Match request header when the browser wants to request the same file again — for example, when refreshing the page.

      We can simulate this on the command line with the following command. Make sure you change the ETag value in this command to match the ETag value in your previous output:

      • curl -I -H 'If-None-Match: "6019a1e2-400"' http://localhost/test.html

      The response will now be different:

      Output: Nginx response headers

HTTP/1.1 304 Not Modified
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 02 Feb 2021 19:04:09 GMT
Last-Modified: Tue, 02 Feb 2021 19:02:58 GMT
Connection: keep-alive
ETag: "6019a1e2-400"

      This time, Nginx will respond with 304 Not Modified. It won’t send the file over the network again; instead, it will tell the browser that it can reuse the file it already has downloaded locally.

This is useful because it reduces network traffic, but it’s not good enough to achieve good caching performance. The problem with ETag is that the browser always sends a request to the server asking whether it can reuse its cached file. Even though the server responds with a 304 instead of sending the file again, it still takes time to make the request and receive the response.

In the next step, we will use the headers module to append caching control information. This will make the browser cache some files locally without explicitly asking the server if it’s fine to do so.

      Step 3 — Configuring Cache-Control and Expires Headers

In addition to the ETag file validation header, there are two caching control response headers: Cache-Control and Expires. Cache-Control is the newer header; it has more options than Expires and is generally more useful if you want finer control over your caching behavior.

      If these headers are set, they can tell the browser that the requested file can be kept locally for a certain amount of time (including forever) without requesting it again. If the headers are not set, browsers will always request the file from the server, expecting either 200 OK or 304 Not Modified responses.

      We can use the header module to set these HTTP headers. The header module is a core Nginx module, which means it doesn’t need to be installed separately to be used.

To configure the header module, open the default Nginx server block configuration file in nano or your favorite text editor:

      • sudo nano /etc/nginx/sites-available/default

      Find the server configuration block:

      /etc/nginx/sites-available/default

      . . .
      # Default server configuration
      #
      
      server {
          listen 80 default_server;
          listen [::]:80 default_server;
      
      . . .
      

      Add the following two new sections here: one before the server block, to define how long to cache different file types, and one inside it, to set the caching headers appropriately:

      Modified /etc/nginx/sites-available/default

      . . .
      # Default server configuration
      #
      
      # Expires map
      map $sent_http_content_type $expires {
          default                    off;
          text/html                  epoch;
          text/css                   max;
          application/javascript     max;
          ~image/                    max;
          ~font/                     max;
      }
      
      server {
          listen 80 default_server;
          listen [::]:80 default_server;
      
          expires $expires;
      . . .
      

      The section before the server block is a new map block that defines the mapping between the file type and how long that kind of file should be cached.

      We’re using several different settings in this map:

• The default value is set to off, which will not add any caching control headers. It’s a safe bet for content for which we have no particular requirements on how the cache should work.

      • For text/html, we set the value to epoch. This is a special value that results explicitly in no caching, which forces the browser to always ask if the website itself is up to date.

      • For text/css and application/javascript, which are stylesheets and JavaScript files, we set the value to max. This means the browser will cache these files for as long as possible, reducing the number of requests considerably given that there are typically many of these files.

      • The last two settings are for ~image/ and ~font/, which are regular expressions that will match all file types containing image/ or font/ in their MIME type name (like image/jpg, image/png or font/woff2). Like stylesheets, both pictures and web fonts on websites can be safely cached to speed up page-loading times, so we set this to max as well.

Note: These are just a few examples of the most common MIME types used on websites. You can find a more extensive list of such types in a Common MIME types reference and add any others you find useful to the map.

      Inside the server block, the expires directive (a part of the headers module) sets the caching control headers. It uses the value from the $expires variable set in the map. This way, the resulting headers will be different depending on the file type.
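The same module also provides the add_header directive for attaching arbitrary response headers, as mentioned in the introduction. Purely as an illustration (the header name below is made up and is not needed for caching), such a directive could sit next to expires inside the server block:

server {
    . . .
    expires $expires;
    add_header X-Example-Header "hello";
    . . .
}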

      Save and close the file to exit.

      To enable the new configuration, restart Nginx:

      • sudo systemctl restart nginx

      Next, let’s make sure our new configuration works.

      Step 4 — Testing Browser Caching

      Execute the same request as before for the test HTML file:

      • curl -I http://localhost/test.html

      This time the response will be different. You will see two additional HTTP response headers:

      Output

HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 02 Feb 2021 19:10:13 GMT
Content-Type: text/html
Content-Length: 1024
Last-Modified: Tue, 02 Feb 2021 19:02:58 GMT
Connection: keep-alive
ETag: "6019a1e2-400"
Expires: Thu, 01 Jan 1970 00:00:01 GMT
Cache-Control: no-cache
Accept-Ranges: bytes

      The Expires header shows a date in the past and Cache-Control is set with no-cache, which tells the browser to always ask the server if there is a newer version of the file (using the ETag header, like before).

You’ll find a difference in the response with the test image file:

      • curl -I http://localhost/test.jpg

      Note the new output:

      Output

HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 02 Feb 2021 19:10:42 GMT
Content-Type: image/jpeg
Content-Length: 1024
Last-Modified: Tue, 02 Feb 2021 19:03:02 GMT
Connection: keep-alive
ETag: "6019a1e6-400"
Expires: Thu, 31 Dec 2037 23:55:55 GMT
Cache-Control: max-age=315360000
Accept-Ranges: bytes

      In this case, Expires shows the date in the distant future, and Cache-Control contains max-age information, which tells the browser how long it can cache the file in seconds. This tells the browser to cache the downloaded image for as long as it can, so any subsequent appearances of this image will use local cache and not send a request to the server at all.

      The result should be similar for both test.js and test.css, as both JavaScript and stylesheet files are set with caching headers too.

This means the cache control headers have been configured properly and your website will benefit from the performance gain and fewer server requests due to browser caching. You should customize the caching settings based on your website’s content, but the defaults in this article are a reasonable place to start.

      Conclusion

The headers module can be used to add any arbitrary headers to the response, but properly setting caching control headers is one of its most useful applications. It increases performance for website users, especially on networks with higher latency, like mobile carrier networks. It can also lead to better results on search engines that factor speed tests into their results. Setting browser caching headers is a key recommendation of Google’s PageSpeed and similar performance testing tools.

      More detailed information about the headers module can be found in Nginx’s official headers module documentation.




      How To Implement PHP Rate Limiting with Redis on Ubuntu 20.04


      The author selected the Apache Software Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

Redis (Remote Dictionary Server) is an open-source, in-memory data-structure store. It keeps data in a server’s RAM, which is several times faster than even the fastest Solid State Drive (SSD). This makes Redis highly responsive and, therefore, well suited to rate limiting.

Rate limiting is a technique that caps the number of times a user can request a resource from a server within a given period. Many services implement rate limiting to prevent abuse, for example when a user tries to put too much load on a server.

For instance, when you’re implementing a public API (Application Programming Interface) for your web application with PHP, you need some form of rate limiting. When you release an API to the public, you’ll want to control the number of times an application user can repeat an action in a specific timeframe. Without any control, users may bring your system to a complete halt.

      Rejecting users’ requests that exceed a certain limit allows your application to run smoothly. If you have a lot of customers, rate limiting enforces a fair-usage policy that allows each customer to have high-speed access to your application. Rate limiting is also good for reducing bandwidth costs and minimizing congestion on your server.

It might be practical to code a rate-limiting module by logging user activity in a database like MySQL. However, the end product may not scale when many users access the system, since the data must be fetched from disk and compared against the set limit. Not only is this slow; relational database management systems are not designed for this purpose.

      Since Redis works as an in-memory database, it is a qualified candidate for creating a rate limiter, and it has been proven reliable for this purpose.

      In this tutorial, you’ll implement a PHP script for rate limiting with Redis on an Ubuntu 20.04 server.

      Prerequisites

      Before you begin, you’ll need the following:

      Step 1 — Installing the Redis Library for PHP

First, update your Ubuntu server’s package repository index. Then, install the php-redis extension, a library that allows you to use Redis from your PHP code. To do this, run the following commands:

      • sudo apt update
      • sudo apt install -y php-redis

      Next, restart the Apache server to load the php-redis library:

      • sudo systemctl restart apache2

      Once you’ve updated your software information index and installed the Redis library for PHP, you’ll now create a PHP resource that caps users’ access based on their IP address.

      Step 2 — Building a PHP Web Resource for Rate Limiting

In this step, you’ll create a test.php file in the root directory (/var/www/html/) of your web server. This file will be accessible to the public, and users can type its address in a web browser to run it. However, for the purposes of this guide, you’ll later test access to the resource using the curl command.

      The sample resource file allows users to access it three times in a timeframe of 10 seconds. Users trying to exceed the limit will get an error informing them that they have been rate limited.

      The core functionality of this file relies heavily on the Redis server. When a user requests the resource for the first time, the PHP code in the file will create a key on the Redis server based on the user’s IP address.

      When the user visits the resource again, the PHP code will try to match the user’s IP address with the keys stored in the Redis server and increment the value by one if the key exists. The PHP code will keep checking if the incremented value hits the maximum limit set.

      The Redis key, which is based on the user’s IP address, will expire after 10 seconds; after this time period, logging the user’s visits to the web resource will begin again.
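To picture this key lifecycle outside of PHP, here is a rough, annotated sketch of the equivalent commands in redis-cli (the IP address used as the key name is only an example, and the # annotations are explanatory, not part of the commands):

SET 203.0.113.5 1       # first request in a window: create the counter
EXPIRE 203.0.113.5 10   # have Redis delete the key after 10 seconds
INCR 203.0.113.5        # each repeat request within the window adds 1
TTL 203.0.113.5         # seconds left until the counter resets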

      To begin, open the /var/www/html/test.php file:

      • sudo nano /var/www/html/test.php

Next, add the following code to initialize the Redis class. Remember to enter the appropriate value for REDIS_PASSWORD:

      /var/www/html/test.php

      <?php
      
      $redis = new Redis();
      $redis->connect('127.0.0.1', 6379);
      $redis->auth('REDIS_PASSWORD');
      

      $redis->auth implements plain text authentication to the Redis server. This is OK while you’re working locally (via localhost), but if you’re using a remote Redis server, consider using SSL authentication.

      Next, in the same file, initialize the following variables:

      /var/www/html/test.php

      . . .
      $max_calls_limit  = 3;
      $time_period      = 10;
      $total_user_calls = 0;
      

      You’ve defined:

• $max_calls_limit: the maximum number of times a user can access the resource within the timeframe.
• $time_period: the timeframe in seconds within which a user may make up to $max_calls_limit requests.
• $total_user_calls: holds the number of times a user has requested the resource in the given timeframe.

      Next, add the following code to retrieve the IP address of the user requesting the web resource:

      /var/www/html/test.php

      . . .
      if (!empty($_SERVER['HTTP_CLIENT_IP'])) {
          $user_ip_address = $_SERVER['HTTP_CLIENT_IP'];
      } elseif (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
          $user_ip_address = $_SERVER['HTTP_X_FORWARDED_FOR'];
      } else {
          $user_ip_address = $_SERVER['REMOTE_ADDR'];
      }
      

      While this code uses the users’ IP address for demonstration purposes, if you’ve got a protected resource on the server that requires authentication, you might log users’ activities using their usernames or access tokens.

      In such a scenario, every user authenticated into your system will have a unique identifier (for example, a customer ID, developer ID, vendor ID, or even a user ID). (If you configure this, remember to use these identifiers in place of the $user_ip_address.)
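As a hedged sketch of that variant, you might build the Redis key from the authenticated identifier instead of the IP address; get_authenticated_user_id() below is a hypothetical placeholder for whatever your own authentication layer provides, not a function from this tutorial:

<?php
// Hypothetical: key the counter on an authenticated user ID instead of the IP.
$user_id        = get_authenticated_user_id(); // placeholder for your auth layer
$rate_limit_key = 'rate_limit:user:' . $user_id;
// ...then use $rate_limit_key wherever this tutorial uses $user_ip_address.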

For this guide, the user IP address is sufficient for proving the concept. So, once you’ve retrieved the user’s IP address, add the next code block to your file:

      /var/www/html/test.php

      . . .
      if (!$redis->exists($user_ip_address)) {
          $redis->set($user_ip_address, 1);
          $redis->expire($user_ip_address, $time_period);
          $total_user_calls = 1;
      } else {
          $redis->INCR($user_ip_address);
          $total_user_calls = $redis->get($user_ip_address);
          if ($total_user_calls > $max_calls_limit) {
              echo "User " . $user_ip_address . " limit exceeded.";
              exit();
          }
      }
      
      echo "Welcome " . $user_ip_address . " total calls made " . $total_user_calls . " in " . $time_period . " seconds";
      

      In this code, you use an if...else statement to check if there is a key defined with the IP address on the Redis server. If the key doesn’t exist, if (!$redis->exists($user_ip_address)) {...}, you set it and define its value to 1 using the code $redis->set($user_ip_address, 1);.

The call $redis->expire($user_ip_address, $time_period); sets the key to expire after the time period, in this case 10 seconds.

      If the user’s IP address does not exist as a Redis key, you set the variable $total_user_calls to 1.

In the ...else {...}... statement block, you use the $redis->INCR($user_ip_address); command to increment the value of the Redis key for that IP address by 1. This only happens when the key is already set in the Redis server and counts as a repeat request.

      The statement $total_user_calls = $redis->get($user_ip_address); retrieves the total requests the user makes by checking their IP address-based key on the Redis server.

Toward the end of the file, you use the if ($total_user_calls > $max_calls_limit) {...} statement to check if the limit is exceeded; if so, you alert the user with echo "User " . $user_ip_address . " limit exceeded.";. Finally, you inform the user of the number of visits they have made in the time period using the echo "Welcome " . $user_ip_address . " total calls made " . $total_user_calls . " in " . $time_period . " seconds"; statement.
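As a side note, phpredis’s incr() returns the new counter value, so the same logic can be written a little more compactly by incrementing first and setting the expiry only when the returned value is 1. This is just an alternative sketch; the rest of this tutorial keeps the exists()/set() version shown above:

<?php
// Alternative sketch: increment first, then set the expiry only on the
// first request of a window (when the counter value is exactly 1).
$total_user_calls = $redis->incr($user_ip_address);
if ($total_user_calls == 1) {
    $redis->expire($user_ip_address, $time_period);
}
if ($total_user_calls > $max_calls_limit) {
    echo "User " . $user_ip_address . " limit exceeded.";
    exit();
}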

      After adding all the code, your /var/www/html/test.php file will be as follows:

      /var/www/html/test.php

<?php
// Connect and authenticate to the local Redis server
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->auth('REDIS_PASSWORD');

// Allow at most 3 requests per 10-second window
$max_calls_limit  = 3;
$time_period      = 10;
$total_user_calls = 0;

// Determine the client's IP address, accounting for proxy headers
if (!empty($_SERVER['HTTP_CLIENT_IP'])) {
    $user_ip_address = $_SERVER['HTTP_CLIENT_IP'];
} elseif (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
    $user_ip_address = $_SERVER['HTTP_X_FORWARDED_FOR'];
} else {
    $user_ip_address = $_SERVER['REMOTE_ADDR'];
}

if (!$redis->exists($user_ip_address)) {
    // First request in this window: create the counter and set its expiry
    $redis->set($user_ip_address, 1);
    $redis->expire($user_ip_address, $time_period);
    $total_user_calls = 1;
} else {
    // Repeat request: increment the counter and reject the user if over the limit
    $redis->INCR($user_ip_address);
    $total_user_calls = $redis->get($user_ip_address);
    if ($total_user_calls > $max_calls_limit) {
        echo "User " . $user_ip_address . " limit exceeded.";
        exit();
    }
}

echo "Welcome " . $user_ip_address . " total calls made " . $total_user_calls . " in " . $time_period . " seconds";
      

      When you’ve finished editing the /var/www/html/test.php file, save and close it.

      You’ve now coded the logic needed to rate limit users on the test.php web resource. In the next step, you’ll test your script.

      Step 3 — Testing Redis Rate Limiting

In this step, you’ll use the curl command to request the web resource that you coded in Step 2. To fully check the script, you’ll request the resource five times in a single command. You can do this by appending a URL parameter range to the request: curl expands the ?[1-5] pattern at the end of the URL and executes the request five times.

      Run the following command:

      • curl -H "Accept: text/plain" -H "Content-Type: text/plain" -X GET http://localhost/test.php?[1-5]

      After running the code, you will receive output similar to the following:

      Output

[1/5]: http://localhost/test.php?1 --> <stdout>
--_curl_--http://localhost/test.php?1
Welcome 127.0.0.1 total calls made 1 in 10 seconds
[2/5]: http://localhost/test.php?2 --> <stdout>
--_curl_--http://localhost/test.php?2
Welcome 127.0.0.1 total calls made 2 in 10 seconds
[3/5]: http://localhost/test.php?3 --> <stdout>
--_curl_--http://localhost/test.php?3
Welcome 127.0.0.1 total calls made 3 in 10 seconds
[4/5]: http://localhost/test.php?4 --> <stdout>
--_curl_--http://localhost/test.php?4
User 127.0.0.1 limit exceeded.
[5/5]: http://localhost/test.php?5 --> <stdout>
--_curl_--http://localhost/test.php?5
User 127.0.0.1 limit exceeded.

      As you’ll note, the first three requests ran without a problem. However, your script has rate limited the fourth and fifth requests. This confirms that the Redis server is rate limiting users’ requests.

In this guide, you’ve set low values for the following two variables:

      /var/www/html/test.php

      ...
      $max_calls_limit  = 3;
      $time_period      = 10;
      ...
      

      When designing your application in a production environment, you could consider higher values depending on how often you expect users to hit your application.

      It is best practice to check real-time stats before setting these values. For instance, if your server logs show that an average user hits your application 1,000 times every 60 seconds, you may use those values as a benchmark for throttling users.

      To put things in a better perspective, here are some real-world examples of rate-limiting implementations (as of 2021):

      Conclusion

In this tutorial, you implemented a PHP script for rate limiting with Redis on an Ubuntu 20.04 server to protect your web application from inadvertent or malicious overuse. You could extend the code to further suit your needs, depending on your use case.

      You might want to secure your Apache server for production use; follow the How To Secure Apache with Let’s Encrypt on Ubuntu 20.04 tutorial.

      You might also consider reading how Redis works as a database cache. Try out our How To Set Up Redis as a Cache for MySQL with PHP on Ubuntu 20.04 tutorial.

      You can find further resources on our PHP and Redis topic pages.


