
      How To Build a Rate Limiter With Node.js on App Platform


      The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.

      Introduction

Rate limiting manages your network’s traffic by capping the number of times someone can repeat an operation in a given duration, such as calling an API. A service without a layer of security against rate-limit abuse is prone to overload, which hampers your application’s proper operation for legitimate customers.

      In this tutorial, you will build a Node.js server that will check the IP address of the request and also calculate the rate of these requests by comparing the timestamp of requests per user. If an IP address crosses the limit you have set for the application, you will call Cloudflare’s API and add the IP address to a list. You will then configure a Cloudflare Firewall Rule that will ban all requests with IP addresses in the list.

      By the end of this tutorial, you will have built a Node.js project deployed on DigitalOcean’s App Platform that protects a Cloudflare routed domain with rate limiting.

      Prerequisites

      Before you begin this guide, you will need:

      Step 1 — Setting Up the Node.js Project and Deploying to DigitalOcean’s App Platform

      In this step, you will expand on your basic Express server, push your code to a GitHub repository, and deploy your application to App Platform.

      Open the project directory of the basic Express server with your code editor. Create a new file by the name .gitignore in the root directory of the project. Add the following lines to the newly created .gitignore file:

      .gitignore

      node_modules/
      .env
      

The first line in your .gitignore file is a directive to git not to track the node_modules directory. This keeps your repository size small, since node_modules can be regenerated whenever required by running npm install. The second line prevents the environment variable file from being tracked. You will create the .env file in a later step.

      Navigate to your server.js in your code editor and modify the following lines of code:

      server.js

      ...
      app.listen(process.env.PORT || 3000, () => {
          console.log(`Example app is listening on port ${process.env.PORT || 3000}`);
      });
      

The change to conditionally use PORT as an environment variable enables the application to run on whichever port the platform assigns, falling back to port 3000 when none is set.

Note: The string in console.log() is wrapped within backticks (`) and not within quotes. This enables you to use template literals, which allow expressions to be embedded within strings.

Visit your terminal window and run your application:

• node server.js

      Your browser window will display Successful response. In your terminal, you will see the following output:

      Output

      Example app is listening on port 3000

      With your Express server running successfully, you’ll now deploy to App Platform.

      First, initialize git in the root directory of the project and push the code to your GitHub account. Navigate to the App Platform dashboard in the browser and click on the Create App button. Choose the GitHub option and authorize with GitHub, if necessary. Select your project’s repository from the dropdown list of projects you want to deploy to App Platform. Review the configuration, then give a name to the application. For the purpose of this tutorial, select the Basic plan as you’ll work in the application’s development phase. Once ready, click Launch App.

Next, navigate to the Settings tab and click on the section Domains. Add your domain routed via Cloudflare into the field Domain or Subdomain Name. Select the bullet You manage your domain to copy the CNAME record, which you’ll add to your domain’s DNS records on Cloudflare.

      With your application deployed to App Platform, head over to your domain’s dashboard on Cloudflare in a new tab as you will return to App Platform’s dashboard later. Navigate to the DNS tab. Click on the Add Record button and select CNAME as your Type, @ as the root, and paste in the CNAME you copied from the App Platform. Click on the Save button, then navigate to the Domains section under the Settings tab in your App Platform’s Dashboard and click on the Add Domain button.

      Click the Deployments tab to see the details of the deployment. Once deployment finishes, you can open your_domain to view it on the browser. Your browser window will display: Successful response. Navigate to the Runtime Logs tab on the App Platform dashboard, and you will get the following output:

      Output

      Example app is listening on port 8080

      Note: The port number 8080 is the default assigned port by the App Platform. You can override this by changing the configuration while reviewing the app before deployment.

      With your application now deployed to App Platform, let’s look at how to outline a cache to calculate requests to the rate limiter.

      Step 2 — Caching User’s IP Address and Calculating Requests Per Second

      In this step, you will store a user’s IP address in a cache with an array of timestamps to monitor the requests per second of each user’s IP address. A cache is temporary storage for data frequently used by an application. The data in a cache is usually kept in quick access hardware like RAM (Random-Access Memory). The fundamental goal of a cache is to improve data retrieval performance by decreasing the need to visit the slower storage layer underneath it. You will use three npm packages: node-cache, is-ip, and request-ip to aid in the process.

The request-ip package captures the user’s IP address used to request the server. The node-cache package creates an in-memory cache, which you will use to keep track of users’ requests. You’ll use the is-ip package to check whether an IP address is an IPv6 address. Install the node-cache, is-ip, and request-ip packages via npm on your terminal:

      • npm i node-cache is-ip request-ip

Open the server.js file in your code editor and add the following lines of code below const express = require('express');:

      server.js

      ...
      const requestIP = require('request-ip');
      const nodeCache = require('node-cache');
      const isIp = require('is-ip');
      ...
      

The first line here grabs the requestIP module from the request-ip package you installed. This module captures the user’s IP address used to request the server. The second line grabs the nodeCache module from the node-cache package. nodeCache creates an in-memory cache, which you will use to keep track of users’ requests per second. The third line takes the isIp module from the is-ip package. This checks whether an IP address is IPv6, which you will format to use CIDR notation as per Cloudflare’s specification.

      Define a set of constant variables in your server.js file. You will use these constants throughout your application.

      server.js

      ...
      const TIME_FRAME_IN_S = 10;
      const TIME_FRAME_IN_MS = TIME_FRAME_IN_S * 1000;
      const MS_TO_S = 1 / 1000;
      const RPS_LIMIT = 2;
      ...
      

TIME_FRAME_IN_S is a constant variable that will determine the period over which your application will average the user’s timestamps. Increasing the period will increase the cache size and hence consume more memory. The TIME_FRAME_IN_MS constant variable determines the same period, but in milliseconds. MS_TO_S is the conversion factor you will use to convert time in milliseconds to seconds. The RPS_LIMIT variable is the threshold of requests per second that will trigger the rate limiter; change the value as per your application’s requirements. The value 2 is a moderate limit that you can trigger easily during the development phase.

      With Express, you can write and use middleware functions, which have access to all HTTP requests coming to your server. To define a middleware function, you will call app.use() and pass it a function. Create a function named ipMiddleware as middleware.

      server.js

      ...
      const ipMiddleware = async function (req, res, next) {
          let clientIP = requestIP.getClientIp(req);
          if (isIp.v6(clientIP)) {
              clientIP = clientIP.split(':').splice(0, 4).join(':') + '::/64';
          }
          next();
      };
      app.use(ipMiddleware);
      
      ...
      

The getClientIp() function provided by requestIP takes the request object, req, from the middleware as a parameter. The .v6() function comes from the is-ip module and returns true if the argument passed to it is an IPv6 address. Cloudflare’s Lists requires IPv6 addresses in /64 CIDR notation, so you need to format the address as aaaa:bbbb:cccc:dddd::/64. The .split(':') method creates an array from the string containing the IP address by splitting it at each : character. The .splice(0, 4) method returns the first four elements of the array, and .join(':') combines them back into a string separated by the character :.
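As a quick illustration, here is the transformation applied to a sample IPv6 address; the address below is from the documentation range and purely illustrative. You can paste the snippet into a Node.js REPL:

const sample = '2001:0db8:85a3:0000:0000:8a2e:0370:7334';
console.log(sample.split(':').splice(0, 4).join(':') + '::/64');
// Prints: 2001:0db8:85a3:0000::/64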

      The next() call directs the middleware to go to the next middleware function if there is one. In your example, it will take the request to the GET route /. This is important to include at the end of your function. Otherwise, the request will not move forward from the middleware.

      Initialize an instance of node-cache by adding the following variable below the constants:

      server.js

      ...
      const IPCache = new nodeCache({ stdTTL: TIME_FRAME_IN_S, deleteOnExpire: false, checkperiod: TIME_FRAME_IN_S });
      ...
      

      With the constant variable IPCache, you are overriding the default parameters native to nodeCache with the custom properties:

• stdTTL: The interval in seconds after which a key-value pair of cache elements will be evicted from the cache. TTL stands for Time To Live, and is the measure of time after which a cache entry expires.
      • deleteOnExpire: Set to false as you will write a custom callback function to handle the expired event.
      • checkperiod: The interval in seconds after which an automatic check for expired elements is triggered. The default value is 600, and as your application’s element expiry is set to a lesser value, the check for expiry will also happen sooner.

For more information on the default parameters of node-cache, you will find the node-cache npm package’s docs page useful. The following diagram will help you visualize how a cache stores data:

      Schematic Representation of Data Stored in Cache
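If you want to experiment with these options before wiring them into your middleware, here is a minimal standalone sketch (using a made-up IP address as the key) demonstrating the behavior; it is not part of server.js:

const NodeCache = require('node-cache');
// Same options as IPCache: entries live 10 seconds, expiry is checked every
// 10 seconds, and expired entries wait for the 'expired' handler instead of
// being deleted automatically.
const demoCache = new NodeCache({ stdTTL: 10, deleteOnExpire: false, checkperiod: 10 });
demoCache.on('expired', (key, value) => {
    console.log(`${key} expired holding ${value.length} timestamp(s)`);
});
demoCache.set('203.0.113.7', [new Date()]);
console.log(demoCache.get('203.0.113.7')); // -> [ <timestamp> ] while the key is live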

      You will now create a new key-value pair for the new IP address and append to an existing key-value pair if an IP address exists in the cache. The value is an array of timestamps corresponding to each request made to your application. In your server.js file, create the updateCache() function below the IPCache constant variable to add the timestamp of the request to cache:

      server.js

      ...
      const updateCache = (ip) => {
          let IPArray = IPCache.get(ip) || [];
          IPArray.push(new Date());
          IPCache.set(ip, IPArray, (IPCache.getTtl(ip) - Date.now()) * MS_TO_S || TIME_FRAME_IN_S);
      };
      ...
      

The first line in the function gets the array of timestamps for the given IP address or, if it is null, initializes an empty array. In the following line, you push the present timestamp, captured by the new Date() function, into the array. The .set() function provided by node-cache takes three arguments: key, value, and the TTL. This TTL overrides the standard TTL you set via stdTTL on the IPCache variable. If the IP address already exists in the cache, you will use its existing TTL; otherwise, you will set the TTL as TIME_FRAME_IN_S.

The TTL for the current key-value pair is calculated by subtracting the present timestamp from the expiry timestamp. The difference is then converted to seconds and passed as the third argument to the .set() function. The .getTtl() function takes a key (here, the IP address) as an argument and returns the TTL of the key-value pair as a timestamp. If the IP address does not exist in the cache, it will return undefined, and the fallback value of TIME_FRAME_IN_S will be used.
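To make the TTL arithmetic concrete, here is a worked example with invented numbers: suppose the key was created with a 10-second TTL at time zero, and a new request for the same IP arrives 4 seconds later:

// Hypothetical values for illustration only:
const MS_TO_S = 1 / 1000;
const expiryTimestamp = 10000; // what IPCache.getTtl(ip) would return, in ms
const now = 4000;              // what Date.now() would return, in ms
console.log((expiryTimestamp - now) * MS_TO_S); // 6 -> the entry keeps its original expiry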

Note: You need to convert the timestamps from milliseconds to seconds because JavaScript stores them in milliseconds while the node-cache module uses seconds.

      In the ipMiddleware middleware, add the following lines after the if code block if (isIp.v6(clientIP)) to calculate the requests per second of the IP address calling your application:

      server.js

      ...
          updateCache(clientIP);
          const IPArray = IPCache.get(clientIP);
          if (IPArray.length > 1) {
              const rps = IPArray.length / ((IPArray[IPArray.length - 1] - IPArray[0]) * MS_TO_S);
              if (rps > RPS_LIMIT) {
                  console.log('You are hitting limit', clientIP);
              }
          }
      ...
      

The first line adds the timestamp of the request made by the IP address to the cache by calling the updateCache() function you declared. The second line collects the array of timestamps for the IP address. If the number of elements in the array of timestamps is greater than one (calculating requests per second needs a minimum of two timestamps) and the requests per second exceed the threshold value you defined in the constants, you will console.log the IP address. The rps variable calculates the requests per second by dividing the number of requests by the time interval between the first and last request, converted to seconds.
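As a sanity check of the formula, here is the calculation run with three invented timestamps spanning one second:

// Hypothetical timestamps for illustration only:
const MS_TO_S = 1 / 1000;
const IPArray = [new Date(0), new Date(500), new Date(1000)];
const rps = IPArray.length / ((IPArray[IPArray.length - 1] - IPArray[0]) * MS_TO_S);
console.log(rps); // 3 requests over 1 second -> 3, above an RPS_LIMIT of 2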

Since you set the property deleteOnExpire to false in the IPCache variable, you will now need to handle the expired event manually. node-cache provides a callback that triggers on the expired event. Add the following lines of code below the IPCache constant variable:

      server.js

      ...
      IPCache.on('expired', (key, value) => {
          if (new Date() - value[value.length - 1] > TIME_FRAME_IN_MS) {
              IPCache.del(key);
          }
      });
      ...
      

The .on('expired') handler registers a callback function that accepts the key and value of the expired element as arguments. In your cache, value is an array of the timestamps of requests. The if condition checks whether the last element in the array is more than TIME_FRAME_IN_MS older than the present time, meaning the IP address made no request within the last time frame. If so, the .del() function takes key as an argument and deletes the expired element from the cache.

For instances when only some elements of the array are older than TIME_FRAME_IN_MS, you need to handle them by removing just the expired timestamps from the cache. Add the following code in the callback function after the if code block if (new Date() - value[value.length - 1] > TIME_FRAME_IN_MS).

      server.js

      ...
          else {
              const updatedValue = value.filter(function (element) {
                  return new Date() - element < TIME_FRAME_IN_MS;
              });
              IPCache.set(key, updatedValue, TIME_FRAME_IN_S - (new Date() - updatedValue[0]) * MS_TO_S);
          }
      ...
      

The filter() array method native to JavaScript provides a callback function to filter the elements in your array of timestamps. In this case, the callback keeps the elements that are less than TIME_FRAME_IN_MS older than the present time, and the filtered elements are assigned to the updatedValue variable. The .set() call then updates your cache with the filtered elements and a new TTL. The new TTL is the difference between TIME_FRAME_IN_S and the time elapsed since the first request’s timestamp in updatedValue, so the entry expires exactly when its oldest remaining timestamp leaves the time frame and the .on('expired') callback fires again.
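Here is the pruning logic run against invented data, with the present time frozen at the 12-second mark:

// Hypothetical values for illustration only:
const TIME_FRAME_IN_MS = 10000;
const now = 12000; // stand-in for new Date()
const value = [new Date(1000), new Date(5000), new Date(11000)];
const updatedValue = value.filter((element) => now - element < TIME_FRAME_IN_MS);
console.log(updatedValue.length); // 2 -> the timestamp at 1000 ms left the window and is dropped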

With your middleware functions now defined, visit your terminal window and run your application:

• node server.js

      Then, visit localhost:3000 in your web browser. Your browser window will display: Successful response. Refresh the page repeatedly to hit the RPS_LIMIT. Your terminal window will display:

      Output

Example app is listening on port 3000
You are hitting limit ::1

      Note: The IP address for localhost is shown as ::1. Your application will capture the public IP of a user when deployed outside localhost.

Your application is now able to track users’ requests and store the timestamps in the cache. In the next step, you will integrate Cloudflare’s API to set up the Firewall.

      Step 3 — Setting Up the Cloudflare Firewall

      In this step, you will set up Cloudflare’s Firewall to block IP Addresses when hitting the rate limit, create environment variables, and make calls to the Cloudflare API.

Visit the Cloudflare dashboard in your browser, log in, and navigate to your account’s homepage. Open Lists under the Configurations tab. Create a new List with your_list as the name.

      Note: The Lists section is available on your Cloudflare account’s dashboard page and not your Cloudflare domain’s dashboard page.

      Navigate to the Home tab and open your_domain’s dashboard. Open the Firewall tab and click on Create a Firewall rule under the Firewall Rules section. Give your_rule_name to the Firewall to identify it. In the Field, select IP Source Address from the dropdown, is in list for the Operator, and your_list for the Value. Under the dropdown for Choose an action, select Block and click Deploy.

      Create a .env file in the project’s root directory with the following lines to call Cloudflare API from your application:

      .env

      ACCOUNT_MAIL=your_cloudflare_login_mail
      API_KEY=your_api_key
      ACCOUNT_ID=your_account_id
      LIST_ID=your_list_id
      

      To get a value for API_KEY, navigate to the API Tokens tab on the My Profile section of your Cloudflare dashboard. Click View in the Global API Key section and enter your Cloudflare password to view it. Visit the Lists section under the Configurations tab on the account’s homepage. Click on Edit beside your_list list you created. Get the ACCOUNT_ID and LIST_ID from the URL of your_list in the browser. The URL is of the format below:
      https://dash.cloudflare.com/your_account_id/configurations/lists/your_list_id

      Warning: Make sure the content of .env is kept confidential and not made public. Make sure you have the .env file listed in the .gitignore file you created in Step 1.

Install the axios and dotenv packages via npm on your terminal:

• npm i axios dotenv

Open the server.js file in your code editor and add the following lines of code below the nodeCache constant variable:

      server.js

      ...
      const axios = require('axios');
      require('dotenv').config();
      ...
      

The first line here grabs the axios module from the axios package you installed. You will use this module to make network calls to Cloudflare’s API. The second line requires and configures the dotenv module, which populates the process.env global variable with the values you placed in your .env file so they are available in server.js.

Add the following to the if (rps > RPS_LIMIT) condition within ipMiddleware, above console.log('You are hitting limit', clientIP), to call the Cloudflare API.

      server.js

      ...
          const url = `https://api.cloudflare.com/client/v4/accounts/${process.env.ACCOUNT_ID}/rules/lists/${process.env.LIST_ID}/items`;
          const body = [{ ip: clientIP, comment: 'your_comment' }];
          const headers = {
              'X-Auth-Email': process.env.ACCOUNT_MAIL,
              'X-Auth-Key': process.env.API_KEY,
              'Content-Type': 'application/json',
          };
          try {
              await axios.post(url, body, { headers });
          } catch (error) {
              console.log(error);
          }
      ...
      

You are now calling the Cloudflare API through the URL to add an item, in this case an IP address, to your_list. The Cloudflare API takes your ACCOUNT_MAIL and API_KEY in the header of the request with the keys X-Auth-Email and X-Auth-Key. The body of the request takes an array of objects, with ip as the IP address to add to the list and comment with the value your_comment to identify the entry. You can modify the value of comment with your own custom comment. The POST request made via axios.post() is wrapped in a try-catch block to handle any errors that may occur. The axios.post() function takes the url, the body, and an object with headers to make the request.
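If you want to verify your credentials before exercising the middleware, you can call the same endpoint directly with curl. The placeholders below mirror the values in your .env file, and the test IP address is the one used later in this step:

• curl -X POST "https://api.cloudflare.com/client/v4/accounts/your_account_id/rules/lists/your_list_id/items" -H "X-Auth-Email: your_cloudflare_login_mail" -H "X-Auth-Key: your_api_key" -H "Content-Type: application/json" --data '[{"ip": "198.51.100.0/24", "comment": "your_comment"}]'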

Change the clientIP variable within the ipMiddleware function when testing out the API requests, using a test IP address like 198.51.100.0/24, as Cloudflare does not accept localhost’s IP address in its Lists.

      server.js

      ...
      const clientIP = '198.51.100.0/24';
      ...
      

Visit your terminal window and run your application:

• node server.js

      Then, visit localhost:3000 in your web browser. Your browser window will display: Successful response. Refresh the page repeatedly to hit the RPS_LIMIT. Your terminal window will display:

      Output

Example app is listening on port 3000
You are hitting limit ::1

When you have hit the limit, open the Cloudflare dashboard and navigate to your_list’s page. You will see the IP address you put in the code added to your Cloudflare List named your_list. The Firewall page itself will display after you push your changes to GitHub and the app redeploys.

      Warning: Make sure to change the value in your clientIP constant variable to requestIP.getClientIp(req) before deploying or pushing the code to GitHub.

Deploy your application by committing the changes and pushing the code to GitHub. As you have set up auto-deploy, the code from GitHub will automatically deploy to your DigitalOcean App Platform. As your .env file is not added to GitHub, you will need to add it to App Platform via the Settings tab, in the App-Level Environment Variables section. Add the key-value pairs from your project’s .env file so your application can access their values on App Platform. After you save the environment variables, open your_domain in your browser once deployment finishes and refresh the page repeatedly to hit the RPS_LIMIT. Once you hit the limit, the browser will show Cloudflare’s Firewall page.

      Cloudflare's Error 1020 Page

Navigate to the Runtime Logs tab on the App Platform dashboard, and you will see the following output:

      Output

      ... You are hitting limit your_public_ip

      You can open your_domain from a different device or via VPN to see that the Firewall bans only the IP address in your_list. You can delete the IP address from your_list through your Cloudflare dashboard.

Note: Occasionally, it takes a few seconds for the Firewall to trigger due to the cached response from the browser.

      You have set up Cloudflare’s Firewall to block IP Addresses when users are hitting the rate limit by making calls to the Cloudflare API.

      Conclusion

      In this article, you built a Node.js project deployed on DigitalOcean’s App Platform connected to your domain routed via Cloudflare. You protected your domain against rate limit misuse by configuring a Firewall Rule on Cloudflare. From here, you can modify the Firewall Rule to show JS Challenge or CAPTCHA instead of banning the user. The Cloudflare documentation details the process.




      How To Implement PHP Rate Limiting with Redis on Ubuntu 20.04


      The author selected the Apache Software Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

Redis (Remote Dictionary Server) is open source, in-memory software. It is a data-structure store that uses a server’s RAM, which is several times faster than even the fastest Solid State Drive (SSD). This makes Redis highly responsive and, therefore, suitable for rate limiting.

Rate limiting is a technique that puts a cap on the number of times a user can request a resource from a server. Many services implement rate limiting to prevent abuse of a service when a user may try to put too much load on a server.

      For instance, when you’re implementing a public API (Application Programming Interface) for your web application with PHP, you need some form of rate limiting. The reason is that when you release an API to the public, you’d want to put a control on the number of times an application user can repeat an action in a specific timeframe. Without any control, users may bring your system to a complete halt.

      Rejecting users’ requests that exceed a certain limit allows your application to run smoothly. If you have a lot of customers, rate limiting enforces a fair-usage policy that allows each customer to have high-speed access to your application. Rate limiting is also good for reducing bandwidth costs and minimizing congestion on your server.

      It might be practical to code a rate-limiting module by logging user activities in a database like MySQL. However, the end product may not be scalable when many users access the system since the data must be fetched from disk and compared against the set limit. This is not only slow, but relational database management systems are not designed for this purpose.

      Since Redis works as an in-memory database, it is a qualified candidate for creating a rate limiter, and it has been proven reliable for this purpose.

      In this tutorial, you’ll implement a PHP script for rate limiting with Redis on an Ubuntu 20.04 server.

      Prerequisites

      Before you begin, you’ll need the following:

      Step 1 — Installing the Redis Library for PHP

      First, you’ll begin by updating your Ubuntu server package repository index. Then, install the php-redis extension. This is a library that allows you to implement Redis in your PHP code. To do this, run the following commands:

      • sudo apt update
      • sudo apt install -y php-redis

      Next, restart the Apache server to load the php-redis library:

      • sudo systemctl restart apache2

      Once you’ve updated your software information index and installed the Redis library for PHP, you’ll now create a PHP resource that caps users’ access based on their IP address.

      Step 2 — Building a PHP Web Resource for Rate Limiting

In this step, you’ll create a test.php file in the root directory (/var/www/html/) of your web server. This file will be accessible to the public, and users can type its address in a web browser to run it. However, for the purposes of this guide, you’ll later test access to the resource using the curl command.

      The sample resource file allows users to access it three times in a timeframe of 10 seconds. Users trying to exceed the limit will get an error informing them that they have been rate limited.

      The core functionality of this file relies heavily on the Redis server. When a user requests the resource for the first time, the PHP code in the file will create a key on the Redis server based on the user’s IP address.

      When the user visits the resource again, the PHP code will try to match the user’s IP address with the keys stored in the Redis server and increment the value by one if the key exists. The PHP code will keep checking if the incremented value hits the maximum limit set.

      The Redis key, which is based on the user’s IP address, will expire after 10 seconds; after this time period, logging the user’s visits to the web resource will begin again.

      To begin, open the /var/www/html/test.php file:

      • sudo nano /var/www/html/test.php

      Next, enter the following information to initialize the Redis class. Remember to enter the appropriate value for REDIS_PASSWORD:

      /var/www/html/test.php

      <?php
      
      $redis = new Redis();
      $redis->connect('127.0.0.1', 6379);
      $redis->auth('REDIS_PASSWORD');
      

      $redis->auth implements plain text authentication to the Redis server. This is OK while you’re working locally (via localhost), but if you’re using a remote Redis server, consider using SSL authentication.

      Next, in the same file, initialize the following variables:

      /var/www/html/test.php

      . . .
      $max_calls_limit  = 3;
      $time_period      = 10;
      $total_user_calls = 0;
      

      You’ve defined:

• $max_calls_limit: the maximum number of times a user can access the resource.
• $time_period: the timeframe in seconds within which a user is allowed to access the resource up to $max_calls_limit times.
• $total_user_calls: a variable that tracks the number of times a user has requested the resource in the given timeframe.

      Next, add the following code to retrieve the IP address of the user requesting the web resource:

      /var/www/html/test.php

      . . .
      if (!empty($_SERVER['HTTP_CLIENT_IP'])) {
          $user_ip_address = $_SERVER['HTTP_CLIENT_IP'];
      } elseif (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
          $user_ip_address = $_SERVER['HTTP_X_FORWARDED_FOR'];
      } else {
          $user_ip_address = $_SERVER['REMOTE_ADDR'];
      }
      

      While this code uses the users’ IP address for demonstration purposes, if you’ve got a protected resource on the server that requires authentication, you might log users’ activities using their usernames or access tokens.

      In such a scenario, every user authenticated into your system will have a unique identifier (for example, a customer ID, developer ID, vendor ID, or even a user ID). (If you configure this, remember to use these identifiers in place of the $user_ip_address.)

      For this guide, the user IP address is sufficient for proving the concept. So, once you’ve retrieved the user’s IP address in the previous code snippet, add the next code block to your file:

      /var/www/html/test.php

      . . .
      if (!$redis->exists($user_ip_address)) {
          $redis->set($user_ip_address, 1);
          $redis->expire($user_ip_address, $time_period);
          $total_user_calls = 1;
      } else {
          $redis->INCR($user_ip_address);
          $total_user_calls = $redis->get($user_ip_address);
          if ($total_user_calls > $max_calls_limit) {
              echo "User " . $user_ip_address . " limit exceeded.";
              exit();
          }
      }
      
      echo "Welcome " . $user_ip_address . " total calls made " . $total_user_calls . " in " . $time_period . " seconds";
      

      In this code, you use an if...else statement to check if there is a key defined with the IP address on the Redis server. If the key doesn’t exist, if (!$redis->exists($user_ip_address)) {...}, you set it and define its value to 1 using the code $redis->set($user_ip_address, 1);.

      The $redis->expire($user_ip_address, $time_period); sets the key to expire within the time period—in this case, 10 seconds.

      If the user’s IP address does not exist as a Redis key, you set the variable $total_user_calls to 1.

In the else {...} statement block, you use the $redis->INCR($user_ip_address); command to increment the value stored under the IP address key by 1. This only happens when the key is already set in the Redis server and counts as a repeat request.

      The statement $total_user_calls = $redis->get($user_ip_address); retrieves the total requests the user makes by checking their IP address-based key on the Redis server.

Toward the end of the file, you use the if ($total_user_calls > $max_calls_limit) {...} statement to check if the limit is exceeded; if so, you alert the user with echo "User " . $user_ip_address . " limit exceeded.";. Finally, you inform the user of the number of visits they have made in the time period with the echo "Welcome " . $user_ip_address . " total calls made " . $total_user_calls . " in " . $time_period . " seconds"; statement.
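While testing, you can watch the counter and its expiry from a second terminal with redis-cli; the key name below assumes local requests arrive from 127.0.0.1:

• redis-cli -a REDIS_PASSWORD GET 127.0.0.1
• redis-cli -a REDIS_PASSWORD TTL 127.0.0.1

GET prints the current number of calls, and TTL prints the seconds remaining before the key expires (or -2 once it is gone).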

      After adding all the code, your /var/www/html/test.php file will be as follows:

      /var/www/html/test.php

      <?php
      $redis = new Redis();
      $redis->connect('127.0.0.1', 6379);
      $redis->auth('REDIS_PASSWORD');
      
      $max_calls_limit  = 3;
      $time_period      = 10;
      $total_user_calls = 0;
      
      if (!empty($_SERVER['HTTP_CLIENT_IP'])) {
          $user_ip_address = $_SERVER['HTTP_CLIENT_IP'];
      } elseif (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
          $user_ip_address = $_SERVER['HTTP_X_FORWARDED_FOR'];
      } else {
          $user_ip_address = $_SERVER['REMOTE_ADDR'];
      }
      
      if (!$redis->exists($user_ip_address)) {
          $redis->set($user_ip_address, 1);
          $redis->expire($user_ip_address, $time_period);
          $total_user_calls = 1;
      } else {
          $redis->INCR($user_ip_address);
          $total_user_calls = $redis->get($user_ip_address);
          if ($total_user_calls > $max_calls_limit) {
              echo "User " . $user_ip_address . " limit exceeded.";
              exit();
          }
      }
      
      echo "Welcome " . $user_ip_address . " total calls made " . $total_user_calls . " in " . $time_period . " seconds";
      

      When you’ve finished editing the /var/www/html/test.php file, save and close it.

      You’ve now coded the logic needed to rate limit users on the test.php web resource. In the next step, you’ll test your script.

      Step 3 — Testing Redis Rate Limiting

In this step, you’ll use the curl command to request the web resource that you coded in Step 2. To fully check the script, you’ll request the resource five times in a single command. You can do this by including a placeholder URL parameter at the end of the test.php URL: appending ?[1-5] to your request makes curl execute the command five times, once for each value in the range.

      Run the following command:

      • curl -H "Accept: text/plain" -H "Content-Type: text/plain" -X GET http://localhost/test.php?[1-5]

      After running the code, you will receive output similar to the following:

      Output

[1/5]: http://localhost/test.php?1 --> <stdout>
--_curl_--http://localhost/test.php?1
Welcome 127.0.0.1 total calls made 1 in 10 seconds
[2/5]: http://localhost/test.php?2 --> <stdout>
--_curl_--http://localhost/test.php?2
Welcome 127.0.0.1 total calls made 2 in 10 seconds
[3/5]: http://localhost/test.php?3 --> <stdout>
--_curl_--http://localhost/test.php?3
Welcome 127.0.0.1 total calls made 3 in 10 seconds
[4/5]: http://localhost/test.php?4 --> <stdout>
--_curl_--http://localhost/test.php?4
User 127.0.0.1 limit exceeded.
[5/5]: http://localhost/test.php?5 --> <stdout>
--_curl_--http://localhost/test.php?5
User 127.0.0.1 limit exceeded.

      As you’ll note, the first three requests ran without a problem. However, your script has rate limited the fourth and fifth requests. This confirms that the Redis server is rate limiting users’ requests.

In this guide, you’ve set low values for the following two variables:

      /var/www/html/test.php

      ...
      $max_calls_limit  = 3;
      $time_period      = 10;
      ...
      

      When designing your application in a production environment, you could consider higher values depending on how often you expect users to hit your application.

      It is best practice to check real-time stats before setting these values. For instance, if your server logs show that an average user hits your application 1,000 times every 60 seconds, you may use those values as a benchmark for throttling users.

      To put things in a better perspective, here are some real-world examples of rate-limiting implementations (as of 2021):

      Conclusion

      This tutorial implemented a PHP script for rate limiting with Redis on an Ubuntu 20.04 server to prevent your web application from inadvertent or malicious overuse. You could extend the code to further suit your needs depending on your use case.

      You might want to secure your Apache server for production use; follow the How To Secure Apache with Let’s Encrypt on Ubuntu 20.04 tutorial.

      You might also consider reading how Redis works as a database cache. Try out our How To Set Up Redis as a Cache for MySQL with PHP on Ubuntu 20.04 tutorial.

      You can find further resources on our PHP and Redis topic pages.




      The Beginner’s Guide to Conversion Rate Optimization (CRO)


      When it comes to digital marketing, the goal is to generate traffic and leads that can then be converted into sales. While the focus is usually on developing ways to drive more traffic to your site, you may be wondering if there’s more you can do to encourage conversions.

      Enter Conversion Rate Optimization!

      Rather than focusing on traffic generation, CRO looks at what can be done on your website after you’ve reeled in those leads. Ultimately, CRO is an ongoing process of observation, analysis, and improvement.

      In this how-to guide, we’ll give you a comprehensive overview of CRO and answer some important questions you might have:

      Long story short, we’re going to get you set up with everything you need to know about increasing conversions. Let’s get started!


      What Conversion Rate Optimization Is (And How it Differs from Traditional Marketing)

      When we talk about conversions, we’re referring to the process of getting a lead to take a desired action. This might be submitting an email address, purchasing a product, or downloading an article.

      It’s easy to rely heavily on strategies that might be too simple in scope. For example, you might be solely focused on getting visitors to submit their email addresses on your website and miss out on other potential conversion opportunities.

      However, if CRO is implemented correctly, it can help you manage the entire process from start to finish. This includes all of your channels and every part of your conversion funnel, rather than just that one lead generation tactic.

      Regardless of where they originate from, conversions of any kind can be calculated with a formula. Since CRO is a continuous process that aims to increase conversions and can employ several different techniques, it’s important to understand how to calculate different kinds of conversion rates. So, put on your glasses, because we’re about to get real nerdy.

      How to Calculate Conversion Rates

      Calculating your current conversion rates will give you a benchmark prior to implementing CRO and can later help you determine whether or not your efforts are working. There are several different ways to approach this task.

      Before you get started with the number-crunching, you’ll need to define a few things that are specific to your business, including:

      • Website Visitors. If you haven’t already, you’ll need to track your website’s traffic. This will be the basis for many CRO calculations.
      • Leads. Make sure you know exactly what counts as a lead for your situation. For example, this could be anyone who clicks on a specific button or submits their email address.
      • Conversion. Making a purchase is the most common kind of conversion we’ll discuss. However, there are several kinds of conversions, so you’ll need to establish how you’re defining the term.

      These three elements are critical components of your marketing funnel. The better you understand your funnel, the easier it will be to implement key CRO tactics.

      Now, let’s look at the most fundamental way to calculate conversion rates. You’ll take the total number of conversions (such as purchases), and divide it by the number of “interactions” or completed actions (clicks on an ad, for example) during a specific time frame.

      For example, if you had 10 sales from 1,000 interactions in one month, your conversion rate for that month would be 1%. However, you’ll have to decide what you are considering a valuable interaction, as calculating all potential actions together can result in skewed rates.

      Fortunately, there are tools available to help you sift through some of the different ways to do this. Specifically, Google offers conversion tracking for use with Google Ads. This enables you to create specific conversion actions that are unique to your business.

      Now, let’s take a step back and look at conversion rates in the context of implementing CRO. To do this, you’ll want to calculate your conversion rate based on the number of website visitors you have and how many of them become leads.

      To get your visitor-to-lead conversion rate, divide the number of leads created by the number of website visitors within a set time frame:

      If you have 1,000 site visitors in one month and 10 leads, your visitor-to-lead conversion rate is 1%.

      In terms of setting goals, you might be inclined to think you need more website traffic. In reality, this is where CRO can be beneficial. In our example, there are a lot of website visitors who did not become leads. This means there might be areas you can optimize in order to increase your visitor-to-lead conversions. In turn, your lead-to-customer conversions should also increase.

      In fact, that lead-to-customer conversion rate is the last calculation we’ll touch on. This is determined by dividing your total conversions (where a lead becomes a paying customer) by the number of total leads in a given time frame:

If we revisit our previous example, we had a total of 10 leads. Let’s assume that three of those leads convert in the same month. Our lead-to-customer conversion rate would be 30%.
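To recap, here are the three calculations side by side, using the same illustrative numbers:

Conversion rate = conversions ÷ interactions × 100 = 10 ÷ 1,000 × 100 = 1%
Visitor-to-lead rate = leads ÷ visitors × 100 = 10 ÷ 1,000 × 100 = 1%
Lead-to-customer rate = customers ÷ leads × 100 = 3 ÷ 10 × 100 = 30%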

      These are all necessary formulas to keep in mind. They can help you set goals and compare monthly totals to see if your CRO strategies are working to boost the specific conversion rates you decide to target.

      Should You Be Using CRO? (4 Key Questions to Ask Yourself)

      It seems obvious to say, “Yes! I want more leads from my existing traffic.” However, there are some other questions to consider before you dive into a CRO planning session. While CRO concepts and techniques can benefit just about anyone, there are some specific elements of your existing practices to consider beforehand.

      1. Do You Understand Your Audience?

      To implement a solid CRO plan, you’ll need to have a decent amount of target market data. Marketing personas are a great place to start, and you can enhance their usefulness through CRO.

      If you’re lacking this kind of information but still want to use CRO, there are tools available to get you started. For example, the ThinkWithGoogle suite includes an application called Market Finder.

      Google’s Market Finder application.

      This is an application that can help you determine what the actual market is for your business. Additionally, you can identify new potential markets, or fine-tune your approach according to geographic locations. All of this data is vital to utilizing CRO. If you’re missing this component, you might want to invest some time into filling the gaps first.

      2. Are You Tracking Key Metrics?

      We mentioned previously how important it is to track different business metrics. CRO can only deliver the desired results if you’re already tracking metrics like bounce rate, page loading times, user experience, page views, and traffic. Just as we saw in the conversion rate calculations, data is the key to understanding whether optimizations are working.

      3. Do You Already Have Good Traffic Numbers?

      While mathematical logic tells us that the more traffic you have, the better your conversion numbers should be, that’s not necessarily the approach CRO emphasizes. The optimizations suggested — along with using CRO best practices — are designed to take your existing traffic even further.

      So if you’re already happy with your current traffic, that’s a good starting point. If not, you might first want to look at what could be preventing you from reaching your audience.

      4. Do You Need to Stretch Your Marketing Budget?

      Just like we discussed regarding traffic numbers, CRO aims to get you more with what you have already. If you have the other elements in place, such as data tracking, decent website traffic, and lead funnels, CRO is a logical next step.

      However, obtaining those other items can be costly, so it makes sense to look at where you can optimize what you have in place to bring about better results. Fortunately, most CRO practices are not going to break the bank.


      Understanding the CRO Process and How to Make It Work For You

      We mentioned earlier that some approaches to calculating conversion rates can be too isolated. However, when implemented correctly, CRO can take those individual elements and create a comprehensive process that offers greater depth and value.

      In that regard, CRO is also a multifaceted approach that does not focus on just one element of a website or marketing campaign. There are several different CRO frameworks out there that you can adopt for your process. Each framework puts its own spin on five basic categories, including:

      • Research
      • Hypothesis
      • Prioritization
      • Testing
      • Learning

      On their own, these can be used as a basic CRO framework, but there are more in-depth and specific frameworks out there that you can try as well. We’ll go over three of the most popular, to give you an idea for how they differ.

Moz’s 5-Step Framework

      Moz offers SEO tools for website developers and businesses, and they’re considered one of the top experts on SEO. Therefore, it’s no surprise that they’ve developed a CRO framework as well. Their approach has five steps that fall into three broad phases: Discovery, Experiments, and Review.

      To get started, let’s look at the Discovery phase. This is where steps one and two of the Moz framework live. There, you’ll look first at gathering data and formulating your hypotheses.

      Phase One of the Moz CRO Framework.

      The Discovery phase is essential to creating a strong foundation for all the work you’ll do next in the Experiment phase. This is where you’ll encounter steps three and four. They cover wireframing your new design, so you’re addressing the hypotheses formed in the previous step. This should match your brand and be realistic in terms of your technical resources.

      The fourth step in this framework focuses specifically on implementing Optimizely. This is a platform we’ll discuss in greater detail later. However, broadly speaking, it’s built to help you test and build digital experiences in a variety of different categories, such as marketing, engineering, and product development.

      In the Review phase, you’ll be looking to see if your hypothesis was correct. If not, you’ll be able to determine what you can learn from that failure.

      The LIFT Model

      Developed by Chris Goward, Founder and CEO of WiderFunnel, the LIFT Model is another CRO framework to consider. While this approach retains some of the same fundamentals of scientific testing that the Moz framework introduced, it has a much different structure.

      The Lift Model enables you to evaluate experiences from the perspective of your page visitors, using these six factors:

      • Value Proposition
      • Clarity
      • Relevance
      • Distraction
      • Urgency
      • Anxiety

      Goward offers a visualization of this model using an airplane as the value proposition. What makes the airplane lift is when the value proposition is relevant, clear, and presented with urgency. As a website user, distractions and anxiety are what can bring the plane down.

      The LIFT Model.

      With the LIFT Model, your value proposition is what determines your potential conversion rate, making it the most vital part of the framework. All the other factors in the model either drive or inhibit your value proposition and are used to develop your hypothesis and testing strategy.

      The LIFT Model has quite a few success stories. For example, a case study on Magento demonstrates how they were able to create an 88% “lift” in qualified leads using this framework.

      The Data36 Model

      Created by data analyst Tomi Mester, the Data36 model is an excellent option for anyone more comfortable with traditional scientific research terminology. This framework uses six steps to work through both qualitative and quantitative research methods that inform the CRO process.

      The Data36 CRO Framework.

      Steps one and two of the Data36 approach are similar to the Moz framework — you’ll be focused on gathering valuable data. However, in this case, it might be anecdotal or historical data.

      The key is to focus on qualitative information at the start. According to Mester, this concept is the first step, so you can form “hunches” before diving into the numerical data. To gather this information, you can conduct user interviews or Five Second Tests, which we’ll talk more about later.

      Your qualitative data can help dictate the direction of your search for quantitative data. This is where you’ll start to confirm your hunches. For the most part, this is similar to the steps in other frameworks where you form a hypothesis and then test it.

      The Data36 framework also has a brainstorming step that is much like wireframing in other CRO frameworks. Once you’ve created optimized content, you’ll engage in another round of qualitative testing.

      To round out the framework, you’ll work through A/B testing of the versions that performed well in the second round of qualitative research. The winner of this step can be moved to the sixth and final step. If used correctly, this framework can help you avoid unnecessary coding projects and potentially speed up the optimization process by weeding out options that might not work.

      6 Areas Where You Can Implement CRO Best Practices

      Now that we’ve covered some of the frameworks you can use to implement a CRO strategy, let’s take a look at the specific areas of your website where these techniques can have a noticeable impact.

      1. Call-To-Actions (CTAs)

      Your CTAs are of prime importance. If your website visitors don’t know what it is you want them to do, it’s unlikely that they’ll do it. Remember that in the LIFT Model, clarity is one of the elements that can help your value proposition take off.

      You might be familiar with some of the more traditional best practices, such as CTA button design, placement, and copy. However, when it comes to CRO, the approach is slightly different. In fact, this is where you’re more likely to find recommendations for using text-based CTAs.

A text-based or anchor text CTA is designed in a larger format, such as an H3 or H4 heading, and is often styled in a different color. It is meant to stand out, but still be part of your web page’s copy. HubSpot conducted a study that compared end-of-page CTA banners to CTA anchor text and found that 43 to 97% of their leads came from the anchor text.

      A CTA anchor text example.

      Since only 6% of leads came from the end-of-page banner, anchor text CTAs were the clear winner.

      One of the main reasons this approach works is that it can help avoid banner-blindness. This happens when website users simply ignore certain design elements. Additionally, since a high percentage of readers won’t ever make it to the end of a post, implementing anchor text CTAs might be a useful technique to explore on your website.

      2. Website Copy

      Many experts view writing strong website copy as a mashup of art and science. However, CRO has a more formulaic approach for improving conversion rates through optimizing specific areas of your website’s copy.

      For example, applying optimization formulas to your headline is a great place to start. This is likely the first, and potentially only, thing your visitors will see. If the headline is not optimized, they may not even click on it in the first place!

      If your headline passes the test, you’ll want to make sure your page copy follows a few more rules. This is where the relevance of your copy really matters. It’s crucial to CRO that your copy matches or is relevant to your CTA.

      For instance, you wouldn’t want to focus all your copy on website hosting and then have your CTA mention signing up for an email marketing service.

      That might be an extreme example, but it drives home a vital point: copy matters!

      You’ll also want to assess whether your copy uses too much passive voice, stays on topic, and makes claims you can actually deliver on.

      3. Navigation and Site Structure

      Your website’s structure can be a critical factor in a successful approach to traditional SEO. Plus, there are lots of ways to optimize it. A well-executed site structure also plays a pivotal role in CRO.

      In fact, SEO expert Neil Patel calls good site architecture the “older brother” of CRO.

      Basically, navigation and site structure impact conversions because they are how users find and purchase things on your website. If the path to your CTA does not make sense or is hard to follow, your conversion numbers will probably reflect that.

      This is where some standard practices for building better User Experiences (UX) can be helpful. Peter Morville’s Honeycomb Model is a widely-accepted lens through which to view your website’s structure and begin making improvements.

      Peter Morville’s UX Honeycomb.

      The seven segments of the honeycomb represent all the elements that should be present to provide users with a meaningful and valuable experience. Ultimately, if your website structure and navigation are meeting all the standards in the honeycomb, you’ll have naturally optimized your website for better conversion rates.

      4. Page Speed

      It’s a well-cited fact that if a user has to wait just a few seconds for your page to load, they are more likely to leave and not come back. This, of course, can have a negative impact on your bottom line.

      Fortunately, there are ways to improve your website’s speed.

      One significant factor when it comes to page speed is your web host. A quality web host with the right features can be a big help when it comes to CRO.

      For example, built-in caching is one feature to look for when evaluating potential web hosts. This enables you to create static versions of individual web pages, so the server has less to load when a user requests the page through their web browser.

      5. Forms

      Getting your visitors to fill out lead generation forms can be a challenge. Style and length are both factors that can impact the success of your forms. Additionally, where to place them on your site is a hotly-debated topic.

      Whether you place your opt-in forms above or below “the fold,” there are some practices backed by data that seem to yield higher conversion rates. For example, the BrokerNotes lead generation form has a tool-like experience that took their conversion rate from 11% to 46%.

      The lead form on BrokerNotes.co.

      This is a good example of how revamping your lead generation form to look and feel less like a form can assist with CRO.

      However, there are many other form elements to consider when optimizing for conversions. This includes how much and how personal the information is that you ask for. For example, asking for a phone number has been shown to cause a 5% drop in conversions.

      6. Landing Page Design

      While many of the items on our list often live together on a landing page, there are steps you can take using CRO to improve the overall experience.

      From the headline to the CTA, every element of your landing page matters and provides opportunities for optimization. An excellent example of an optimized landing page is Airbnb.

      The Airbnb website.

      Not only is the page simple and visually appealing, but it also gets right to the point with a clear headline and useful information. There is no question about what this page is saying, and it speaks right to a potential host’s wallet.

      In terms of a CTA, it also cleverly offers the user valuable information before asking for anything in return.

      Once you have a basic grasp on what CRO involves, it’s time to dive in and put it to the test. Fortunately, there are many resources available to help you get started. For example, we’ve created a guide to using typography to increase conversions on your website.

      Let’s take a look at six other resources you can leverage to launch your CRO initiative!

      1. Google Marketing Platform

      Google Marketing Tools.

      When it comes to optimizing for search engines, Google is usually a top priority. Fortunately, the search engine also offers an entire suite of tools that can be used with your CRO framework. This is particularly beneficial for small businesses, as they can access these tools for free.

      Another benefit of using Google’s resources is that they are designed to work together, making your data accessible across all the available applications. The Google Marketing Platform provides an integrated approach to using the best tools for optimizing your website.

      For instance, you can gather all the tracking data you need for the beginning steps of most CRO frameworks using Google Analytics. Once you’re ready to run some tests, Google Optimize offers applications that can set up experiments based on your data.

      2. Visual Web Optimizer

The Visual Web Optimizer platform.

      Visual Web Optimizer (VWO) is an application with a diverse feature set, geared towards making website optimization easy. The Research, Hypothesize, Experiment, and Measure approach to many of the CRO frameworks we’ve discussed is operationalized with VWO’s digital toolset.

      Essentially, you can use VWO’s services to provide extra support and expertise to the CRO framework you decide to employ. This includes tools for every step of the process. VWO also offers many plans to choose from, including pricing options for individual applications starting at $99 per month.

      3. Optimizely

      The Optimizely platform.

      Optimizely is the platform used explicitly in the Moz 5-Step CRO Framework. It is one of the top CRO platforms out there, with clientele that includes 24 of the top Fortune 100 global businesses.

      This is one of the premium CRO services on the market. You’ll have to contact the sales team directly to get pricing on Optimizely plans.

      Whatever you choose, you’ll get some options in terms of how you can approach the platform. For example, you can choose services based on team (marketing, product, engineering, or data) or industry.

      You can also choose between a Web platform for creating experiments and personalizations with a visual editor and a Full Stack platform geared more towards application and back-end development. This is where you’ll find high-powered A/B testing options and feature flags for product development.

      4. Five Second Tests

      Five Second Tests.

      Five Second Tests is an easy-to-use web-based service that enables you to gather data on what a website user’s first impression of your design is. This process gives testers only five seconds to view a page. Then, they are asked a series of questions to determine if the design is achieving what you intended.

      You can use this application for free in a limited capacity. You’ll be constrained to a total of two minutes of testing per month and you won’t be able to brand your tests with your own logo. For $79 per month, you can increase your testing time, remove the branding, and implement split testing. There are also Pro and Team plans with many more features for $99 and $396 per month, respectively.

      5. Case Studies

      Optimization Case Studies.

      Research and data are both essential components when it comes to CRO. So we wanted to include some excellent resources for your own information gathering. Learning from others can save you time, frustration, and in some cases, money.

      With that in mind, Neil Patel has 100 conversion rate optimization case studies available for free on his website. You can use this as a directory to find situations that are similar to your own to learn from.

      You’ll be able to review what was optimized, in addition to what the results and key findings were. If you’re trying to kickstart a CRO effort with your team, sharing case studies can often serve as a tangible motivator.

      6. CRO Blogs

      To learn more about CRO and keep your skills sharp with the latest optimization tools, following the blogs of CRO experts can be a worthwhile (and often entertaining) strategy. However, if you look for CRO blogs in a Google search, you’re likely to get millions of results. So we’ve picked a few of the best to give you a more manageable reading list.

      To keep up on the latest CRO trends, you might want to follow some of these blogs:

      • The WiderFunnel Blog: CEO Chris Goward created the LIFT Model for CRO.
      • Unbounce: This is a blog brought to you by one of the leaders in A/B testing and landing page optimization tools.
      • Conversion Optimization Blog: A well-researched blog that comes from the Conversion Sciences team.
      • Neil Patel’s Blog: Neil Patel, the creator of KISSMetrics, brings his readers some of the most data-packed posts out there about marketing and optimization.

      As our technology landscape shifts and changes, following expert blogs can help you stay informed and up-to-speed on the most effective CRO practices.

      Let’s Increase Your Conversion Rate

      With some basic elements in place, a well-structured CRO strategy will almost always yield positive results. If you’ve already calculated your conversion rates and are tracking key metrics, then you’re off to a good start.

      Choosing and implementing a CRO framework is another major component of developing a successful strategy. While no one framework is the “right” one, they all require gathering quality data, developing hypotheses, and testing to determine the best optimization tactics for your website.

      Of course, you won’t want to get distracted by an unreliable web host when you could be focusing on a higher conversion rate. Here at DreamHost, we can keep your website’s server in prime condition with our managed hosting plans, so you can get back to building a conversion machine!


