
      How to Benchmark the Performance of a Redis Server on Ubuntu 18.04


      Introduction

      Benchmarking is an important practice when it comes to analyzing the overall performance of database servers. It’s helpful for identifying bottlenecks as well as opportunities for improvement within those systems.

      Redis is an in-memory data store that can be used as a database, cache, or message broker. It supports data structures ranging from the simple to the complex, including strings, hashes, sorted sets, bitmaps, and geospatial data, among other types. In this guide, we’ll demonstrate how to benchmark the performance of a Redis server running on Ubuntu 18.04, using a few different tools and methods.

      Prerequisites

      To follow this guide, you’ll need:

      Note: The commands demonstrated in this tutorial were executed on a dedicated Redis server running on a 4GB DigitalOcean Droplet.

      Redis comes with a benchmark tool called redis-benchmark. This program can be used to simulate an arbitrary number of clients connecting at the same time and performing actions on the server, measuring how long it takes for the requests to be completed. The resulting data will give you an idea of the average number of requests that your Redis server is able to handle per second.

      The following list details some of the common command options used with redis-benchmark:

      • -h: Redis host. Default is 127.0.0.1.
      • -p: Redis port. Default is 6379.
      • -a: If your server requires authentication, you can use this option to provide the password.
      • -c: Number of clients (parallel connections) to simulate. Default value is 50.
      • -n: How many requests to make. Default is 100000.
      • -d: Data size for SET and GET values, measured in bytes. Default is 3.
      • -t: Run only a subset of tests. For instance, you can use -t get,set to benchmark the performance of GET and SET commands.
      • -P: Use pipelining for performance improvements.
      • -q: Quiet mode, shows only the average requests per second information.
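
      To illustrate how these flags combine, here is a hypothetical run against a remote, password-protected instance; the host, password, and request counts are placeholders you would replace with your own values:

      • redis-benchmark -h 203.0.113.10 -p 6379 -a your_redis_password -t set,get -c 100 -n 200000 -d 256 -q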

      For instance, if you want to check the average number of requests per second that your local Redis server can handle, you can run it with only the -q flag:

      • redis-benchmark -q

      You will get output similar to this, but with different numbers:

      Output

      PING_INLINE: 85178.88 requests per second
      PING_BULK: 83056.48 requests per second
      SET: 72202.16 requests per second
      GET: 94607.38 requests per second
      INCR: 84961.77 requests per second
      LPUSH: 78988.94 requests per second
      RPUSH: 88652.48 requests per second
      LPOP: 87950.75 requests per second
      RPOP: 80971.66 requests per second
      SADD: 80192.46 requests per second
      HSET: 84317.03 requests per second
      SPOP: 78125.00 requests per second
      LPUSH (needed to benchmark LRANGE): 84175.09 requests per second
      LRANGE_100 (first 100 elements): 52383.45 requests per second
      LRANGE_300 (first 300 elements): 21547.08 requests per second
      LRANGE_500 (first 450 elements): 14471.78 requests per second
      LRANGE_600 (first 600 elements): 9383.50 requests per second
      MSET (10 keys): 71225.07 requests per second

      You can also limit the tests to a subset of commands of your choice using the -t parameter. The following command shows the averages for the GET and SET commands only:

      • redis-benchmark -t set,get -q

      Output

      SET: 76687.12 requests per second
      GET: 82576.38 requests per second

      The default options will use 50 parallel connections to create 100000 requests to the Redis server. If you want to increase the number of parallel connections to simulate a peak in usage, you can use the -c option for that:

      • redis-benchmark -t set,get -q -c 1000

      Because this will use 1000 concurrent connections instead of the default 50, you should expect a decrease in performance:

      Output

      SET: 69444.45 requests per second
      GET: 70821.53 requests per second

      If you want detailed information in the output, you can remove the -q option. The following command will use 100 parallel connections to run 1000000 SET requests on the server:

      • redis-benchmark -t set -c 100 -n 1000000

      You will get output similar to this:

      Output

      ====== SET ======
        1000000 requests completed in 11.29 seconds
        100 parallel clients
        3 bytes payload
        keep alive: 1

      95.22% <= 1 milliseconds
      98.97% <= 2 milliseconds
      99.86% <= 3 milliseconds
      99.95% <= 4 milliseconds
      99.99% <= 5 milliseconds
      99.99% <= 6 milliseconds
      100.00% <= 7 milliseconds
      100.00% <= 8 milliseconds
      100.00% <= 8 milliseconds
      88605.35 requests per second

      The default settings use 3 bytes for key values. You can change this with the option -d. The following command will benchmark GET and SET commands using 1MB key values:

      • redis-benchmark -t set,get -d 1000000 -n 1000 -q

      Because the server is working with a much bigger payload this time, a significant decrease of performance is expected:

      Output

      SET: 1642.04 requests per second
      GET: 822.37 requests per second

      It is important to realize that even though these numbers are useful as a quick way to evaluate the performance of a Redis instance, they don't represent the maximum throughput a Redis instance can sustain. By using pipelining, applications can send multiple commands at once in order to improve the number of requests per second the server can handle. With redis-benchmark, you can use the -P option to simulate real world applications that make use of this Redis feature.

      To compare the difference, first run the redis-benchmark command with default values and no pipelining, for the GET and SET tests:

      • redis-benchmark -t get,set -q

      Output

      SET: 86281.27 requests per second
      GET: 89847.26 requests per second

      The next command will run the same tests, but will pipeline 8 commands together:

      • redis-benchmark -t get,set -q -P 8

      Output

      SET: 653594.81 requests per second
      GET: 793650.75 requests per second

      As you can see from the output, there is a substantial performance improvement with the use of pipelining.

      Checking Latency with redis-cli

      If you'd like a simple measurement of the average time a request takes to receive a response, you can use the Redis command-line client to check the average server latency. In the context of Redis, latency is a measure of how long it takes for a ping command to receive a response from the server.

      The following command will show real-time latency stats for your Redis server:

      • redis-cli --latency

      You'll get output similar to this, showing an increasing number of samples and a variable average latency:

      Output

      min: 0, max: 1, avg: 0.18 (970 samples)

      This command will keep running indefinitely. You can stop it with CTRL+C.

      To monitor latency over a certain period of time, you can use:

      • redis-cli --latency-history

      This will track latency averages over time, with a configurable interval that is set to 15 seconds by default. You will get output similar to this:

      Output

      min: 0, max: 1, avg: 0.18 (1449 samples) -- 15.01 seconds range
      min: 0, max: 1, avg: 0.16 (1449 samples) -- 15.00 seconds range
      min: 0, max: 1, avg: 0.17 (1449 samples) -- 15.00 seconds range
      min: 0, max: 1, avg: 0.17 (1444 samples) -- 15.01 seconds range
      min: 0, max: 1, avg: 0.17 (1446 samples) -- 15.01 seconds range
      min: 0, max: 1, avg: 0.17 (1449 samples) -- 15.00 seconds range
      min: 0, max: 1, avg: 0.16 (1444 samples) -- 15.00 seconds range
      min: 0, max: 1, avg: 0.17 (1445 samples) -- 15.01 seconds range
      min: 0, max: 1, avg: 0.16 (1445 samples) -- 15.01 seconds range
      ...

      Because the Redis server in our example is idle, there's not much variation between latency samples. If you have a peak in usage, however, this should be reflected as an increase in latency within the results.

      If you'd like to measure the system latency only, you can use --intrinsic-latency for that. The intrinsic latency is inherent to the environment, depending on the hardware, the kernel, server neighbors, and other factors that aren't controlled by Redis.

      You can see the intrinsic latency as a baseline for your overall Redis performance. The following command will check for the intrinsic system latency, running a test for 30 seconds:

      • redis-cli --intrinsic-latency 30

      You should get output similar to this:

      Output

      … 498723744 total runs (avg latency: 0.0602 microseconds / 60.15 nanoseconds per run). Worst run took 22975x longer than the average latency.

      Comparing both latency tests can be helpful for identifying hardware or system bottlenecks that could affect the performance of your Redis server. Considering that a request to our example server takes an average of 0.18 milliseconds (180 microseconds) to complete, an intrinsic latency of about 0.06 microseconds shows that only a small fraction of the total request time is spent by the system in processes that aren't controlled by Redis.

      Memtier is a high-throughput benchmark tool for Redis and Memcached created by Redis Labs. Although very similar to redis-benchmark in various aspects, Memtier has several configuration options that can be tuned to better emulate the kind of load you might expect on your Redis server, in addition to offering cluster support.

      To get Memtier installed on your server, you'll need to compile the software from source. First, install the dependencies necessary to compile the code:

      • sudo apt-get install build-essential autoconf automake libpcre3-dev libevent-dev pkg-config zlib1g-dev

      Next, go to your home directory and clone the memtier_benchmark project from its GitHub repository:

      • cd
      • git clone https://github.com/RedisLabs/memtier_benchmark.git

      Navigate to the project directory and run the autoreconf command to generate the application configuration scripts:

      • cd memtier_benchmark
      • autoreconf -ivf

      Run the configure script in order to generate the application artifacts required for compiling:

      • ./configure

      Now run make to compile the application:

      • make

      Once the build is finished, you can test the executable with:

      • ./memtier_benchmark --version

      This will give you the following output:

      Output

      memtier_benchmark 1.2.17
      Copyright (C) 2011-2017 Redis Labs Ltd.
      This is free software.  You may redistribute copies of it under the terms of
      the GNU General Public License <http://www.gnu.org/licenses/gpl.html>.
      There is NO WARRANTY, to the extent permitted by law.

      The following list contains some of the most common options used with the memtier_benchmark command:

      • -s: Server host. Default is localhost.
      • -p: Server port. Default is 6379.
      • -a: Authenticate requests using the provided password.
      • -n: Number of requests per client (default is 10000).
      • -c: Number of clients (default is 50).
      • -t: Number of threads (default is 4).
      • --pipeline: Enable pipelining.
      • --ratio: Ratio between SET and GET commands, default is 1:10.
      • --hide-histogram: Hides detailed output information.

      Most of these options are very similar to the options present in redis-benchmark, but Memtier tests performance in a different way. To better simulate common real-world environments, the default benchmark performed by memtier_benchmark tests GET and SET requests only, at a ratio of 1 to 10. With 10 GET operations for each SET operation in the test, this arrangement is more representative of a common web application using Redis as a database or cache. You can adjust the ratio value with the --ratio option.
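
      For example, if your workload is more write-heavy than the default assumes, you could benchmark an even mix of SET and GET operations instead; this is only a sketch, and the ratio value is arbitrary:

      • ./memtier_benchmark --ratio=1:1 --hide-histogram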

      The following command runs memtier_benchmark with default settings, while providing only high-level output information:

      • ./memtier_benchmark --hide-histogram

      Note: if you have configured your Redis server to require authentication, you should provide the -a option along with your Redis password to the memtier_benchmark command:

      • ./memtier_benchmark --hide-histogram -a your_redis_password

      You'll see output similar to this:

      Output

      ...
      4         Threads
      50        Connections per thread
      10000     Requests per client

      ALL STATS
      =========================================================================
      Type         Ops/sec     Hits/sec   Misses/sec      Latency       KB/sec
      -------------------------------------------------------------------------
      Sets          8258.50          ---          ---      2.19800       636.05
      Gets         82494.28     41483.10     41011.18      2.19800      4590.88
      Waits            0.00          ---          ---      0.00000          ---
      Totals       90752.78     41483.10     41011.18      2.19800      5226.93

      According to this run of memtier_benchmark, our Redis server can execute about 90 thousand operations per second in a 1:10 SET/GET ratio.

      It's important to note that each benchmark tool has its own algorithm for performance testing and data presentation. For that reason, it's normal to have slightly different results on the same server, even when using similar settings.

      Conclusion

      In this guide, we demonstrated how to perform benchmark tests on a Redis server using two distinct tools: the included redis-benchmark, and the memtier_benchmark tool developed by Redis Labs. We also saw how to check the server latency using redis-cli. Based on the data obtained from these tests, you'll have a better understanding of what to expect from your Redis server in terms of performance, and what the bottlenecks of your current setup are.




      How To Deploy an ASP.NET Core Application with MySQL Server Using Nginx on Ubuntu 18.04


      The author selected the Open Source Initiative to receive a donation as part of the Write for DOnations program.

      Introduction

      ASP.NET Core is a high-performance, open-source framework for building modern web applications, meant to be a more modular version of Microsoft’s ASP.NET Framework. Released in 2016, it can run on several operating systems such as Linux and macOS. This enables developers to target a particular operating system for development based on design requirements. With ASP.NET Core, a developer can build any kind of web application or service, irrespective of its complexity and size. Developers can also make use of Razor Pages to create page-focused designs working on top of the traditional Model-View-Controller (MVC) pattern.

      ASP.NET Core provides the flexibility to integrate with any front-end framework to handle client-side logic or to consume a web service. You could, for example, build a RESTful API with ASP.NET Core and easily consume it with JavaScript frameworks such as Angular, React, and Vue.js.

      In this tutorial you’ll set up and deploy a production-ready ASP.NET Core application with a MySQL Server on Ubuntu 18.04 using Nginx. You will deploy a demo ASP.NET Core application similar to the application from Microsoft’s documentation and hosted on GitHub. Once deployed, the demo application will allow you to create a list of movies and store it in the database. You’ll be able to create, read, update, and delete records from the database. You can use this tutorial to deploy your own ASP.NET Core application instead; it’s possible you’ll have to implement extra steps that include generating a new migration file for your database.

      Prerequisites

      You will need the following for this tutorial:

      Step 1 — Installing .NET Core Runtime

      A .NET Core runtime is required to successfully run a .NET Core application, so you’ll start by installing it on your server. First, you need to register the Microsoft key and product repository. After that, you will install the required dependencies.

      First, logged in as your newly created user, make sure you’re in your home directory:

      • cd ~

      Next, run the following command to register the Microsoft key and product repository:

      • wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb

      Use dpkg with the -i flag to install the specified file:

      • sudo dpkg -i packages-microsoft-prod.deb

      To facilitate the installation of other packages required for your application, you will install the universe repository with the following command:

      • sudo add-apt-repository universe

      Next, install the apt-transport-https package to allow the use of repositories accessed via the HTTP Secure protocol:

      • sudo apt install apt-transport-https

      Now, run the following command to download the package lists from the repositories and update them to get information on the newest versions of packages and their dependencies:

      • sudo apt update

      Finally, you can install the .NET runtime SDK with:

      • sudo apt install dotnet-sdk-2.2

      You will be prompted with the details of the size of additional files that will be installed. Type Y and hit ENTER to continue.

      Now that you're done installing the .NET Core runtime SDK on the server, you are almost ready to download the demo application from GitHub and set up the deployment configuration. But first, you'll create the database for the application.
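
      If you'd like to confirm that the SDK installed correctly before moving on, you can optionally check the version it reports:

      • dotnet --version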

      Step 2 — Creating a MySQL User and Database

      In this section, you will create a MySQL server user, create a database for the application, and grant all the necessary privileges for the new user to connect to the database from your application.

      To begin, you need to access the MySQL client using the MySQL root account:

      • sudo mysql -u root -p

      You will be prompted to enter the root account password, set up during the prerequisite tutorial.

      Next, create a MySQL database for the application with:

      • CREATE DATABASE MovieAppDb;

      You will see the following output in the console:

      Output

      Query OK, 1 row affected (0.03 sec)

      You've now created the database successfully. Next, you will create a new MySQL user, associate them with the newly created database, and grant them all privileges.

      Run the following command to create the MySQL user and password. Remember to change the username and password to something more secure:

      • CREATE USER 'movie-admin'@'localhost' IDENTIFIED BY 'password';

      You will see the following output:

      Output

      Query OK, 0 rows affected (0.02 sec)

      To access a database or carry out a specific action on it, a MySQL user needs the appropriate permissions. At the moment, movie-admin does not have any permissions on the application database.

      You will change that by running the following command to grant access to movie-admin on MovieAppDb:

      • GRANT ALL PRIVILEGES ON MovieAppDb.* TO 'movie-admin'@'localhost';

      You will see the following output:

      Output

      Query OK, 0 rows affected (0.01 sec)

      Now, reload the grant tables to apply the changes you just made, using the FLUSH statement:

      • FLUSH PRIVILEGES;

      You will see the following output:

      Output

      Query OK, 0 rows affected (0.00 sec)

      You are done creating a new user and granting privileges. To test that everything is working as expected, exit the MySQL client:

      • exit

      Log in again, using the credentials of the MySQL user you just created, and enter the appropriate password when prompted:

      • mysql -u movie-admin -p

      Check that the movie-admin user can access the created database with:

      • SHOW DATABASES;

      You will see the MovieAppDb database listed in the output:

      Output

      +--------------------+
      | Database           |
      +--------------------+
      | MovieAppDb         |
      | information_schema |
      +--------------------+
      2 rows in set (0.01 sec)

      Now, exit the MySQL client:

      • exit

      You've created a database, made a new MySQL user for the demo application, and granted the newly created user the right privileges to access the database. In the next section, you will start setting up the demo application.

      Step 3 — Setting Up the Demo App and Database Credentials

      As stated earlier, you'll deploy an existing ASP.NET Core application. This application was built to create a movie list and it uses the Model-View-Controller design pattern to ensure a proper structure and separation of concerns. To create or add a new movie to the list, the user will populate the form fields with the appropriate details and click on the Create button to post the details to the controller. The controller at this point will receive a POST HTTP request with the submitted details and persist the data in the database through the model.

      You will use Git to pull the source code of this demo application from GitHub and save it in a new directory. You could also download an alternate application here if you will be deploying a different application.

      To begin, create a new directory named movie-app from the terminal by using the following command:

      • sudo mkdir -p /var/www/movie-app

      This will serve as the root directory for your application. Next, change the folder owner and group in order to allow a non-root user account to work with the project files:

      • sudo chown sammy:sammy /var/www/movie-app

      Replace sammy with your sudo non-root username.

      Now, you can move into the parent directory and clone the application on GitHub:

      • cd /var/www
      • git clone https://github.com/do-community/movie-app-list.git movie-app

      You will see the following output:

      Output

      Cloning into 'movie-app'...
      remote: Enumerating objects: 91, done.
      remote: Counting objects: 100% (91/91), done.
      remote: Compressing objects: 100% (73/73), done.
      remote: Total 91 (delta 13), reused 91 (delta 13), pack-reused 0
      Unpacking objects: 100% (91/91), done.

      You have successfully cloned the demo application from GitHub, so the next step is to create a connection to the application database. You will do this by editing the ConnectionStrings property within the appsettings.json file and adding the details of the database.

      Change directory into the application:

      • cd movie-app

      Now open the file for editing:

      • sudo nano appsettings.json

      Add your database credentials:

      appsettings.json

      {
        "Logging": {
          "LogLevel": {
            "Default": "Warning"
          }
        },
        "AllowedHosts": "*",
        "ConnectionStrings": {
          "MovieContext": "Server=localhost;User Id=movie-admin;Password=password;Database=MovieAppDb"
        }
      }
      

      With this in place, you've successfully configured the connection to your database. Now press CTRL+X to exit the file, type Y to confirm saving your changes, and hit ENTER to confirm the file name.

      ASP.NET Core applications use a .NET standard library named Entity Framework (EF) Core to manage interaction with the database. Entity Framework Core is a lightweight, cross-platform version of the popular Entity Framework data access technology. It is an object-relational mapper (ORM) that enables .NET developers to work with a database using any of the database providers, such as MySQL.

      You can now update your database with the tables from the cloned demo application. Run the following command for that purpose:

      • dotnet ef database update

      This will apply an update to the database and create the appropriate schemas.
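
      If you want to verify that the update created the expected tables, you can optionally query the database with the MySQL user from Step 2; the exact table names depend on the application's migrations:

      • mysql -u movie-admin -p MovieAppDb -e "SHOW TABLES;"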

      Now, to build the project and all its dependencies, run the following command:

      • dotnet build

      You will see output similar to:

      Output

      Microsoft (R) Build Engine version 16.1.76+g14b0a930a7 for .NET Core
      Copyright (C) Microsoft Corporation. All rights reserved.

        Restore completed in 95.09 ms for /var/www/movie-app/MvcMovie.csproj.
        MvcMovie -> /var/www/movie-app/bin/Debug/netcoreapp2.2/MvcMovie.dll
        MvcMovie -> /var/www/movie-app/bin/Debug/netcoreapp2.2/MvcMovie.Views.dll

      Build succeeded.
          0 Warning(s)
          0 Error(s)

      Time Elapsed 00:00:01.91

      This will build the project and restore any third-party dependencies listed in the project.assets.json file, but the application won’t be ready for production yet. To get the application ready for deployment, run the following command:

      • dotnet publish

      You will see the following:

      Output

      Microsoft (R) Build Engine version 16.1.76+g14b0a930a7 for .NET Core
      Copyright (C) Microsoft Corporation. All rights reserved.

        Restore completed in 89.62 ms for /var/www/movie-app/MvcMovie.csproj.
        MvcMovie -> /var/www/movie-app/bin/Debug/netcoreapp2.2/MvcMovie.dll
        MvcMovie -> /var/www/movie-app/bin/Debug/netcoreapp2.2/MvcMovie.Views.dll
        MvcMovie -> /var/www/movie-app/bin/Debug/netcoreapp2.2/publish/

      This will pack and compile the application, read through its dependencies, publish the resulting set of files into a folder for deployment, and produce a cross-platform .dll file that uses the installed .NET Core runtime to run the application.
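
      If you'd like to confirm that the published output runs before wiring it up to Nginx and systemd, you can start it manually with the .NET runtime and stop it again with CTRL+C. This is an optional check and assumes Kestrel's default port of 5000:

      • dotnet /var/www/movie-app/bin/Debug/netcoreapp2.2/publish/MvcMovie.dll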

      By installing dependencies, creating a connection to the database, updating the database with the necessary tables, and publishing it for production, you've completed the setup for this demo application. In the next step you will configure the web server to make the application accessible and secure at your domain.

      Step 4 — Configuring the Web Server

      By now, having followed the How To Secure Nginx with Let's Encrypt tutorial, you'll have a server block for your domain at /etc/nginx/sites-available/your_domain with the server_name directive already set appropriately. In this step, you will edit this server block to configure Nginx as a reverse proxy for your application. A reverse proxy is a server that sits in front of web servers, receiving every request from the network and forwarding it to those web servers.

      In the case of an ASP.NET Core application, Kestrel is the preferred web server that is included with it by default. It is great for serving dynamic content from an ASP.NET Core application as it provides better request-processing performance and was designed to make ASP.NET as fast as possible. However, Kestrel isn't considered a full-featured web server because it can't manage security and serve static files, which is why it is advisable to always run it behind a web server.

      To begin, ensure that you are within the root directory of your server:

      Open the server block for editing with:

      • sudo nano /etc/nginx/sites-available/your_domain

      As detailed in Step 4 of the How To Secure Nginx with Let's Encrypt tutorial, if you selected option 2, Certbot will have automatically configured this server block to redirect HTTP traffic to HTTPS, so only a few modifications are needed.

      Continue with the configuration by editing the first two blocks in the file to reflect the following:

      /etc/nginx/sites-available/your-domain

      server {
      
          server_name your-domain  www.your-domain;
      
         location / {
           proxy_pass http://localhost:5000;
           proxy_http_version 1.1;
           proxy_set_header Upgrade $http_upgrade;
           proxy_set_header Connection keep-alive;
           proxy_set_header Host $host;
           proxy_cache_bypass $http_upgrade;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header X-Forwarded-Proto $scheme;
          }
      
      listen [::]:443 ssl ipv6only=on; # managed by Certbot
      listen 443 ssl; # managed by Certbot
      ssl_certificate /etc/letsencrypt/live/your-domain/fullchain.pem; # managed by Certbot
      ssl_certificate_key /etc/letsencrypt/live/your-domain/privkey.pem; # managed by Certbot
      include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
      ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
      
      
      }
      ...
      

      The configuration in this server block will instruct Nginx to listen on port 443, which is the standard port for websites that use SSL. Furthermore, Nginx will accept public traffic on port 443 and forward every matching request to the built-in Kestrel server at http://localhost:5000.

      Finally, following the server block you just edited in the file, ensure that the second server block looks like so:

      /etc/nginx/sites-available/your-domain

      
      ...
      server {
      if ($host = www.your-domain) {
          return 301 https://$host$request_uri;
      } # managed by Certbot
      
      
      if ($host = your-domain) {
          return 301 https://$host$request_uri;
      } # managed by Certbot
      
      
          listen 80;
          listen [::]:80;
      
          server_name your-domain  www.your-domain;
      return 404; # managed by Certbot
      }
      

      This server block will redirect all HTTP requests for your-domain and www.your-domain to secure HTTPS access.
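
      Before reloading Nginx, it's a good idea to check the configuration for syntax errors:

      • sudo nginx -t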

      Next, force Nginx to pick up the changes you've made to the server block by reloading it:

      • sudo systemctl reload nginx

      With the Nginx configuration successfully completed, the server is fully set up to forward all HTTPS requests made to https://your-domain on to the ASP.NET Core app running on Kestrel at http://localhost:5000. However, Nginx isn’t set up to manage the Kestrel server process. To handle this and ensure that the Kestrel process keeps running in the background, you will use systemd functionalities.

      Systemd unit files allow you to manage a process, providing start, stop, restart, and logging functionality, once you define the process of work as a unit.

      Move into the systemd directory:

      • cd /etc/systemd/system

      Create a new file named movie.service for editing:

      • sudo nano movie.service

      Add the following content to it:

      movie.service

      [Unit]
      Description=Movie app
      
      [Service]
      WorkingDirectory=/var/www/movie-app
      ExecStart=/usr/bin/dotnet /var/www/movie-app/bin/Debug/netcoreapp2.2/publish/MvcMovie.dll
      Restart=always
      RestartSec=10
      SyslogIdentifier=movie
      User=sammy
      Environment=ASPNETCORE_ENVIRONMENT=Production
      Environment=DOTNET_PRINT_TELEMETRY_MESSAGE=false
      
      [Install]
      WantedBy=multi-user.target
      

      The configuration file specifies the location of the project's folder with WorkingDirectory and the command to execute at the start of the process in ExecStart. In addition, the RestartSec directive specifies how long systemd should wait before restarting the service if the .NET runtime crashes.

      Now save the file and enable the new movie service created with:

      • sudo systemctl enable movie.service

      After that, start the service:

      • sudo systemctl start movie.service

      Then check its status:

      • sudo systemctl status movie.service

      You will see the following output:

      Output

      movie.service - Movie app
         Loaded: loaded (/etc/systemd/system/movie.service; enabled; vendor preset: enabled)
         Active: active (running) since Sun 2019-06-23 04:51:28 UTC; 11s ago
       Main PID: 6038 (dotnet)
          Tasks: 16 (limit: 1152)
         CGroup: /system.slice/movie.service
                 └─6038 /usr/bin/dotnet /var/www/movie-app/bin/Debug/netcoreapp2.2/publish/MvcMovie.dll

      This output gives you an overview of the current status of the movie.service created to keep your app running. It indicates that the service is enabled and currently active.
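
      If the service ever fails to start, you can optionally inspect its logs through the systemd journal to diagnose the problem:

      • sudo journalctl -fu movie.service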

      Navigate to https://your-domain from your browser to run and test out the application.

      You'll see the home page for the demo application—Movie List Application.

      Movie list application

      With the reverse proxy configured and Kestrel managed through systemd, the web app is fully configured and can be accessed from a browser.

      Conclusion

      In this tutorial, you deployed an ASP.NET Core application to an Ubuntu server. To persist and manage data, you used a MySQL server, and you used the Nginx web server as a reverse proxy to serve your application.

      Beyond this tutorial, if you're interested in building an interactive web application using C# instead of JavaScript, you could try a web UI framework by Microsoft called Blazor. It is an event-driven, component-based web UI framework for implementing logic on the client side of an ASP.NET Core application.

      If you wish to deploy your own application, you'll need to consider other required procedures to deploy your app. The complete source code for this demo application can be found here on GitHub.




      How To Set Up an OpenVPN Server on Debian 10


      A previous version of this tutorial was written by Justin Ellingwood

      Introduction

      Want to access the Internet safely and securely from your smartphone or laptop when connected to an untrusted network such as the WiFi of a hotel or coffee shop? A Virtual Private Network (VPN) allows you to traverse untrusted networks privately and securely as if you were on a private network. The traffic emerges from the VPN server and continues its journey to the destination.

      When combined with HTTPS connections, this setup allows you to secure your wireless logins and transactions. You can circumvent geographical restrictions and censorship, and shield your location and any unencrypted HTTP traffic from the untrusted network.

      OpenVPN is a full-featured, open-source Secure Socket Layer (SSL) VPN solution that accommodates a wide range of configurations. In this tutorial, you will set up an OpenVPN server on a Debian 10 server and then configure access to it from Windows, macOS, iOS and/or Android. This tutorial will keep the installation and configuration steps as simple as possible for each of these setups.

      Note: If you plan to set up an OpenVPN server on a DigitalOcean Droplet, be aware that we, like many hosting providers, charge for bandwidth overages. For this reason, please be mindful of how much traffic your server is handling.

      See this page for more info.

      Prerequisites

      To complete this tutorial, you will need access to a Debian 10 server to host your OpenVPN service. You will need to configure a non-root user with sudo privileges before you start this guide. You can follow our Debian 10 initial server setup guide to set up a user with appropriate permissions. The linked tutorial will also set up a firewall, which is assumed to be in place throughout this guide.

      Additionally, you will need a separate machine to serve as your certificate authority (CA). While it’s technically possible to use your OpenVPN server or your local machine as your CA, this is not recommended as it opens up your VPN to some security vulnerabilities. Per the official OpenVPN documentation, you should place your CA on a standalone machine that’s dedicated to importing and signing certificate requests. For this reason, this guide assumes that your CA is on a separate Debian 10 server that also has a non-root user with sudo privileges and a basic firewall.

      Please note that if you disable password authentication while configuring these servers, you may run into difficulties when transferring files between them later on in this guide. To resolve this issue, you could re-enable password authentication on each server. Alternatively, you could generate an SSH keypair for each server, then add the OpenVPN server’s public SSH key to the CA machine’s authorized_keys file and vice versa. See How to Set Up SSH Keys on Debian 10 for instructions on how to perform either of these solutions.
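
      As a minimal sketch of the SSH key approach, assuming sammy is the user on both machines and your_CA_ip is the CA machine's address, you would generate a keypair on the OpenVPN server and copy its public key to the CA, then repeat the process in the other direction:

      • ssh-keygen
      • ssh-copy-id sammy@your_CA_ip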

      When you have these prerequisites in place, you can move on to Step 1 of this tutorial.

      Step 1 — Installing OpenVPN and EasyRSA

      To start off, update your VPN server’s package index and install OpenVPN. OpenVPN is available in Debian’s default repositories, so you can use apt for the installation:

      • sudo apt update
      • sudo apt install openvpn

      OpenVPN is a TLS/SSL VPN. This means that it utilizes certificates in order to encrypt traffic between the server and clients. To issue trusted certificates, you will set up your own simple certificate authority (CA). To do this, we will download the latest version of EasyRSA, which we will use to build our CA public key infrastructure (PKI), from the project’s official GitHub repository.

      As mentioned in the prerequisites, we will build the CA on a standalone server. The reason for this approach is that, if an attacker were able to infiltrate your server, they would be able to access your CA private key and use it to sign new certificates, giving them access to your VPN. Accordingly, managing the CA from a standalone machine helps to prevent unauthorized users from accessing your VPN. Note, as well, that it’s recommended that you keep the CA server turned off when not being used to sign keys as a further precautionary measure.

      To begin building the CA and PKI infrastructure, use wget to download the latest version of EasyRSA on both your CA machine and your OpenVPN server. To get the latest version, go to the Releases page on the official EasyRSA GitHub project, copy the download link for the file ending in .tgz, and then paste it into the following command:

      • wget -P ~/ https://github.com/OpenVPN/easy-rsa/releases/download/v3.0.6/EasyRSA-unix-v3.0.6.tgz

      Then extract the tarball:

      • cd ~
      • tar xvf EasyRSA-unix-v3.0.6.tgz

      You have successfully installed all the required software on your server and CA machine. Continue on to configure the variables used by EasyRSA and to set up a CA directory, from which you will generate the keys and certificates needed for your server and clients to access the VPN.

      Step 2 — Configuring the EasyRSA Variables and Building the CA

      EasyRSA comes installed with a configuration file which you can edit to define a number of variables for your CA.

      On your CA machine, navigate to the EasyRSA directory:

      • cd ~/EasyRSA-v3.0.6/

      Inside this directory is a file named vars.example. Make a copy of this file, and name the copy vars without a file extension:

      • cp vars.example vars

      Open this new file using your preferred text editor:

      • nano vars

      Find the settings that set field defaults for new certificates. They will look something like this:

      ~/EasyRSA-v3.0.6/vars

      . . .
      
      #set_var EASYRSA_REQ_COUNTRY    "US"
      #set_var EASYRSA_REQ_PROVINCE   "California"
      #set_var EASYRSA_REQ_CITY       "San Francisco"
      #set_var EASYRSA_REQ_ORG        "Copyleft Certificate Co"
      #set_var EASYRSA_REQ_EMAIL      "me@example.net"
      #set_var EASYRSA_REQ_OU         "My Organizational Unit"
      
      . . .
      

      Uncomment these lines and update the highlighted values to whatever you'd prefer, but do not leave them blank:

      ~/EasyRSA-v3.0.6/vars

      . . .
      
      set_var EASYRSA_REQ_COUNTRY    "US"
      set_var EASYRSA_REQ_PROVINCE   "NewYork"
      set_var EASYRSA_REQ_CITY       "New York City"
      set_var EASYRSA_REQ_ORG        "DigitalOcean"
      set_var EASYRSA_REQ_EMAIL      "admin@example.com"
      set_var EASYRSA_REQ_OU         "Community"
      
      . . .
      

      When you are finished, save and close the file.

      Within the EasyRSA directory is a script called easyrsa which is called to perform a variety of tasks involved in building and managing the CA. Run this script with the init-pki option to initiate the public key infrastructure on the CA server:

      • ./easyrsa init-pki

      Output

      . . .
      init-pki complete; you may now create a CA or requests.
      Your newly created PKI dir is: /home/sammy/EasyRSA-v3.0.6/pki

      After this, call the easyrsa script again, following it with the build-ca option. This will build the CA and create two important files — ca.crt and ca.key — which make up the public and private sides of an SSL certificate.

      • ca.crt is the CA’s public certificate file which, in the context of OpenVPN, the server and the client use to inform one another that they are part of the same web of trust and not someone performing a man-in-the-middle attack. For this reason, your server and all of your clients will need a copy of the ca.crt file.
      • ca.key is the private key which the CA machine uses to sign keys and certificates for servers and clients. If an attacker gains access to your CA and, in turn, your ca.key file, they will be able to sign certificate requests and gain access to your VPN, impeding its security. This is why your ca.key file should only be on your CA machine and that, ideally, your CA machine should be kept offline when not signing certificate requests as an extra security measure.

      If you don’t want to be prompted for a password every time you interact with your CA, you can run the build-ca command with the nopass option, like this:

      • ./easyrsa build-ca nopass

      In the output, you’ll be asked to confirm the common name for your CA:

      Output

      . . . Common Name (eg: your user, host, or server name) [Easy-RSA CA]:

      The common name is the name used to refer to this machine in the context of the certificate authority. You can enter any string of characters for the CA’s common name but, for simplicity’s sake, press ENTER to accept the default name.

      With that, your CA is in place and it’s ready to start signing certificate requests.

      Step 3 — Creating the Server Certificate, Key, and Encryption Files

      Now that you have a CA ready to go, you can generate a private key and certificate request from your server and then transfer the request over to your CA to be signed, creating the required certificate. You'll also create some additional files used during the encryption process.

      Start by navigating to the EasyRSA directory on your OpenVPN server:

      • cd ~/EasyRSA-v3.0.6/

      From there, run the easyrsa script with the init-pki option. Although you already ran this command on the CA machine, it's necessary to run it here because your server and CA will have separate PKI directories:

      • ./easyrsa init-pki

      Then call the easyrsa script again, this time with the gen-req option followed by a common name for the machine. Again, this could be anything you like but it can be helpful to make it something descriptive. Throughout this tutorial, the OpenVPN server’s common name will simply be “server”. Be sure to include the nopass option as well. Failing to do so will password-protect the request file which could lead to permissions issues later on:

      Note: If you choose a name other than “server” here, you will have to adjust some of the instructions below. For instance, when copying the generated files to the /etc/openvpn directory, you will have to substitute the correct names. You will also have to modify the /etc/openvpn/server.conf file later to point to the correct .crt and .key files.

      • ./easyrsa gen-req server nopass

      This will create a private key for the server and a certificate request file called server.req. Copy the server key to the /etc/openvpn/ directory:

      • sudo cp ~/EasyRSA-v3.0.6/pki/private/server.key /etc/openvpn/

      Using a secure method (like SCP, in our example below), transfer the server.req file to your CA machine:

      • scp ~/EasyRSA-v3.0.6/pki/reqs/server.req sammy@your_CA_ip:/tmp

      Next, on your CA machine, navigate to the EasyRSA directory:

      • cd EasyRSA-v3.0.6/

      Using the easyrsa script again, import the server.req file, following the file path with its common name:

      • ./easyrsa import-req /tmp/server.req server

      Then sign the request by running the easyrsa script with the sign-req option, followed by the request type and the common name. The request type can either be client or server, so for the OpenVPN server’s certificate request, be sure to use the server request type:

      • ./easyrsa sign-req server server

      In the output, you’ll be asked to verify that the request comes from a trusted source. Type yes then press ENTER to confirm this:

      You are about to sign the following certificate.
      Please check over the details shown below for accuracy. Note that this request
      has not been cryptographically verified. Please be sure it came from a trusted
      source or that you have verified the request checksum with the sender.
      
      Request subject, to be signed as a server certificate for 1080 days:
      
      subject=
          commonName                = server
      
      
      Type the word 'yes' to continue, or any other input to abort.
        Confirm request details: yes
      

      If you encrypted your CA key, you’ll be prompted for your password at this point.

      Next, transfer the signed certificate back to your VPN server using a secure method:

      • scp pki/issued/server.crt sammy@your_server_ip:/tmp

      Before logging out of your CA machine, transfer the ca.crt file to your server as well:

      • scp pki/ca.crt sammy@your_server_ip:/tmp

      Next, log back into your OpenVPN server and copy the server.crt and ca.crt files into your /etc/openvpn/ directory:

      • sudo cp /tmp/{server.crt,ca.crt} /etc/openvpn/

      Then navigate to your EasyRSA directory:

      • cd ~/EasyRSA-v3.0.6/

      From there, create a strong Diffie-Hellman key to use during key exchange by typing:

      • ./easyrsa gen-dh

      This may take a few minutes to complete. Once it does, generate an HMAC signature to strengthen the server's TLS integrity verification capabilities:

      • sudo openvpn --genkey --secret ta.key

      When the command finishes, copy the two new files to your /etc/openvpn/ directory:

      • sudo cp ~/EasyRSA-v3.0.6/ta.key /etc/openvpn/
      • sudo cp ~/EasyRSA-v3.0.6/pki/dh.pem /etc/openvpn/

      With that, all the certificate and key files needed by your server have been generated. You’re ready to create the corresponding certificates and keys which your client machine will use to access your OpenVPN server.

      Step 4 — Generating a Client Certificate and Key Pair

      Although you can generate a private key and certificate request on your client machine and then send it to the CA to be signed, this guide outlines a process for generating the certificate request on the server. The benefit of this is that we can create a script which will automatically generate client configuration files that contain all of the required keys and certificates. This lets you avoid having to transfer keys, certificates, and configuration files to clients and streamlines the process of joining the VPN.

      We will generate a single client key and certificate pair for this guide. If you have more than one client, you can repeat this process for each one. Please note, though, that you will need to pass a unique name value to the script for every client. Throughout this tutorial, the first certificate/key pair is referred to as client1.

      Get started by creating a directory structure within your home directory to store the client certificate and key files:

      • mkdir -p ~/client-configs/keys

      Since you will store your clients’ certificate/key pairs and configuration files in this directory, you should lock down its permissions now as a security measure:

      • chmod -R 700 ~/client-configs

      Next, navigate back to the EasyRSA directory and run the easyrsa script with the gen-req and nopass options, along with the common name for the client:

      • cd ~/EasyRSA-v3.0.6/
      • ./easyrsa gen-req client1 nopass

      Press ENTER to confirm the common name. Then, copy the client1.key file to the ~/client-configs/keys/ directory you created earlier:

      • cp pki/private/client1.key ~/client-configs/keys/

      Next, transfer the client1.req file to your CA machine using a secure method:

      • scp pki/reqs/client1.req sammy@your_CA_ip:/tmp

      Log in to your CA machine, navigate to the EasyRSA directory, and import the certificate request:

      • ssh sammy@your_CA_ip
      • cd EasyRSA-v3.0.6/
      • ./easyrsa import-req /tmp/client1.req client1

      Then sign the request as you did for the server in the previous step. This time, though, be sure to specify the client request type:

      • ./easyrsa sign-req client client1

      At the prompt, enter yes to confirm that you intend to sign the certificate request and that it came from a trusted source:

      Output

      Type the word 'yes' to continue, or any other input to abort. Confirm request details: yes

      Again, if you encrypted your CA key, you’ll be prompted for your password here.

      This will create a client certificate file named client1.crt. Transfer this file back to the server:

      • scp pki/issued/client1.crt sammy@your_server_ip:/tmp

      SSH back into your OpenVPN server and copy the client certificate to the ~/client-configs/keys/ directory:

      • cp /tmp/client1.crt ~/client-configs/keys/

      Next, copy the ca.crt and ta.key files to the ~/client-configs/keys/ directory as well:

      • sudo cp ~/EasyRSA-v3.0.6/ta.key ~/client-configs/keys/
      • sudo cp /etc/openvpn/ca.crt ~/client-configs/keys/

      With that, your server and client’s certificates and keys have all been generated and are stored in the appropriate directories on your server. There are still a few actions that need to be performed with these files, but those will come in a later step. For now, you can move on to configuring OpenVPN on your server.
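
      As a rough preview of that later step, and of the script mentioned at the beginning of this step, the client .ovpn files are typically assembled by concatenating a base client configuration with the keys and certificates stored in ~/client-configs/keys. The following is only a sketch: it assumes a base.conf client template and an output directory named files, neither of which has been created yet:

      ~/client-configs/make_config.sh

      #!/bin/bash

      # Sketch of a client-config generator; base.conf and the files/ directory
      # are assumptions that would be set up in a later step.
      KEY_DIR=~/client-configs/keys
      OUTPUT_DIR=~/client-configs/files
      BASE_CONFIG=~/client-configs/base.conf

      cat ${BASE_CONFIG} \
          <(echo -e '<ca>') \
          ${KEY_DIR}/ca.crt \
          <(echo -e '</ca>\n<cert>') \
          ${KEY_DIR}/${1}.crt \
          <(echo -e '</cert>\n<key>') \
          ${KEY_DIR}/${1}.key \
          <(echo -e '</key>\n<tls-auth>') \
          ${KEY_DIR}/ta.key \
          <(echo -e '</tls-auth>') \
          > ${OUTPUT_DIR}/${1}.ovpn

      Invoked as ./make_config.sh client1, a script along these lines would produce a single client1.ovpn file that embeds the CA certificate, the client certificate and key, and the shared ta.key.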

      Step 5 — Configuring the OpenVPN Service

      Now that both your client and server’s certificates and keys have been generated, you can begin configuring the OpenVPN service to use these credentials.

      Start by copying a sample OpenVPN configuration file into the configuration directory and then extract it in order to use it as a basis for your setup:

      • sudo cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz /etc/openvpn/
      • sudo gzip -d /etc/openvpn/server.conf.gz

      Open the server configuration file in your preferred text editor:

      • sudo nano /etc/openvpn/server.conf

      Find the HMAC section by looking for the tls-auth directive. This line should already be uncommented, but if it isn't, remove the ";" to uncomment it. Below this line, add the key-direction parameter, set to "0":

      /etc/openvpn/server.conf

      tls-auth ta.key 0 # This file is secret
      key-direction 0
      

      Next, find the section on cryptographic ciphers by looking for the commented out cipher lines. The AES-256-CBC cipher offers a good level of encryption and is well supported. Again, this line should already be uncommented, but if it isn’t then just remove the ";" preceding it:

      /etc/openvpn/server.conf

      cipher AES-256-CBC
      

      Below this, add an auth directive to select the HMAC message digest algorithm. For this, SHA256 is a good choice:

      /etc/openvpn/server.conf

      auth SHA256
      

      Next, find the line containing a dh directive which defines the Diffie-Hellman parameters. Because of some recent changes made to EasyRSA, the filename for the Diffie-Hellman key may be different than what is listed in the example server configuration file. If necessary, change the file name listed here by removing the 2048 so it aligns with the key you generated in the previous step:

      /etc/openvpn/server.conf

      dh dh.pem
      

      Finally, find the user and group settings and remove the ";" at the beginning of each to uncomment these lines:

      /etc/openvpn/server.conf

      user nobody
      group nogroup
      

      The changes you’ve made to the sample server.conf file up to this point are necessary in order for OpenVPN to function. The changes outlined below are optional, though they too are needed for many common use cases.

      (Optional) Push DNS Changes to Redirect All Traffic Through the VPN

      The settings above will create the VPN connection between the two machines, but will not force any connections to use the tunnel. If you wish to use the VPN to route all of your traffic, you will likely want to push the DNS settings to the client computers.

      There are a few directives in the server.conf file which you must change in order to enable this functionality. Find the redirect-gateway section and remove the semicolon ";" from the beginning of the redirect-gateway line to uncomment it:

      /etc/openvpn/server.conf

      push "redirect-gateway def1 bypass-dhcp"
      

      Just below this, find the dhcp-option section. Again, remove the ";" from in front of both of the lines to uncomment them:

      /etc/openvpn/server.conf

      push "dhcp-option DNS 208.67.222.222"
      push "dhcp-option DNS 208.67.220.220"
      

      This will assist clients in reconfiguring their DNS settings to use the VPN tunnel as the default gateway.

      (Optional) Adjust the Port and Protocol

      By default, the OpenVPN server uses port 1194 and the UDP protocol to accept client connections. If you need to use a different port because of restrictive network environments that your clients might be in, you can change the port option. If you are not hosting web content on your OpenVPN server, port 443 is a popular choice since it is usually allowed through firewall rules.

      /etc/openvpn/server.conf

      # Optional!
      port 443
      

      Oftentimes, the protocol is restricted to that port as well. If so, change proto from UDP to TCP:

      /etc/openvpn/server.conf

      # Optional!
      proto tcp
      

      If you do switch the protocol to TCP, you will need to change the explicit-exit-notify directive’s value from 1 to 0, as this directive is only used by UDP. Failing to do so while using TCP will cause errors when you start the OpenVPN service:

      /etc/openvpn/server.conf

      # Optional!
      explicit-exit-notify 0
      

      If you have no need to use a different port and protocol, it is best to leave these two settings as their defaults.

      (Optional) Point to Non-Default Credentials

      If you selected a different name when generating the server's certificate request earlier, modify the cert and key lines that you see to point to the appropriate .crt and .key files. If you used the default name, “server”, this is already set correctly:

      /etc/openvpn/server.conf

      cert server.crt
      key server.key
      

      When you are finished, save and close the file.

      After going through and making whatever changes to your server’s OpenVPN configuration are required for your specific use case, you can begin making some changes to your server’s networking.

      Step 6 — Adjusting the Server Networking Configuration

      There are some aspects of the server’s networking configuration that need to be tweaked so that OpenVPN can correctly route traffic through the VPN. The first of these is IP forwarding, a method for determining where IP traffic should be routed. This is essential to the VPN functionality that your server will provide.

      Adjust your server’s default IP forwarding setting by modifying the /etc/sysctl.conf file:

      • sudo nano /etc/sysctl.conf

      Inside, look for the commented line that sets net.ipv4.ip_forward. Remove the "#" character from the beginning of the line to uncomment this setting:

      /etc/sysctl.conf

      net.ipv4.ip_forward=1
      

      Save and close the file when you are finished.

      To read the file and adjust the values for the current session, type:

      • sudo sysctl -p

      Output

      net.ipv4.ip_forward = 1

      If you followed the Debian 10 initial server setup guide listed in the prerequisites, you should have a UFW firewall in place. Regardless of whether you use the firewall to block unwanted traffic (which you almost always should do), for this guide you need a firewall to manipulate some of the traffic coming into the server. Some of the firewall rules must be modified to enable masquerading, an iptables concept that provides on-the-fly dynamic network address translation (NAT) to correctly route client connections.

      Before opening the firewall configuration file to add the masquerading rules, you must first find the public network interface of your machine. To do this, type:

      • ip route | grep default

      Your public interface is the string found within this command's output that follows the word "dev". For example, this result shows the interface named eth0, as shown in the example below:

      Output

      default via 203.0.113.1 dev eth0 proto static

      When you have the interface associated with your default route, open the /etc/ufw/before.rules file to add the relevant configuration:

      • sudo nano /etc/ufw/before.rules

      UFW rules are typically added using the ufw command. Rules listed in the before.rules file, though, are read and put into place before the conventional UFW rules are loaded. Towards the top of the file, add the lines shown below. This will set the default policy for the POSTROUTING chain in the nat table and masquerade any traffic coming from the VPN. Remember to replace eth0 in the -A POSTROUTING line below with the interface you found in the above command:

      /etc/ufw/before.rules

      #
      # rules.before
      #
      # Rules that should be run before the ufw command line added rules. Custom
      # rules should be added to one of these chains:
      #   ufw-before-input
      #   ufw-before-output
      #   ufw-before-forward
      #
      
      # START OPENVPN RULES
      # NAT table rules
      *nat
      :POSTROUTING ACCEPT [0:0] 
      # Allow traffic from OpenVPN client to eth0 (change to the interface you discovered!)
      -A POSTROUTING -s 10.8.0.0/8 -o eth0 -j MASQUERADE
      COMMIT
      # END OPENVPN RULES
      
      # Don't delete these required lines, otherwise there will be errors
      *filter
      . . .
      

      Save and close the file when you are finished.

      Next, you need to tell UFW to allow forwarded packets by default as well. To do this, open the /etc/default/ufw file:

      • sudo nano /etc/default/ufw

      Inside, find the DEFAULT_FORWARD_POLICY directive and change the value from DROP to ACCEPT:

      /etc/default/ufw

      DEFAULT_FORWARD_POLICY="ACCEPT"
      

      Save and close the file when you are finished.

      Next, adjust the firewall itself to allow traffic to OpenVPN. If you did not change the port and protocol in the /etc/openvpn/server.conf file, you will need to open up UDP traffic to port 1194. If you modified the port and/or protocol, substitute the values you selected here.

      Also, in case you didn't add the SSH port when completing the prerequisite tutorial, add it here as well:

      • sudo ufw allow 1194/udp
      • sudo ufw allow OpenSSH

      After adding those rules, disable and re-enable UFW to restart it and load the changes from all of the files you've modified:

      • sudo ufw disable
      • sudo ufw enable
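
      Optionally, you can confirm that UFW reloaded everything you changed. On most systems UFW manages iptables rules under the hood, so listing the POSTROUTING chain of the nat table should show the masquerade rule you added to before.rules, with the interface name you used:

      • sudo ufw status
      • sudo iptables -t nat -S POSTROUTING

      Output

      . . .
      -A POSTROUTING -s 10.8.0.0/8 -o eth0 -j MASQUERADE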

      Your server is now configured to correctly handle OpenVPN traffic.

      Step 7 — Starting and Enabling the OpenVPN Service

      You're finally ready to start the OpenVPN service on your server. This is done using the systemd utility systemctl.

      Start the OpenVPN server by specifying your configuration file name as an instance variable after the systemd unit file name. The configuration file for your server is called /etc/openvpn/server.conf, so add @server to end of your unit file when calling it:

      • sudo systemctl start openvpn@server

      Double-check that the service has started successfully by typing:

      • sudo systemctl status openvpn@server

      If everything went well, your output will look something like this:

      Output

      ● openvpn@server.service - OpenVPN connection to server
         Loaded: loaded (/lib/systemd/system/openvpn@.service; disabled; vendor preset: enabled)
         Active: active (running) since Wed 2019-07-17 03:39:24 UTC; 29s ago
           Docs: man:openvpn(8)
                 https://community.openvpn.net/openvpn/wiki/Openvpn24ManPage
                 https://community.openvpn.net/openvpn/wiki/HOWTO
       Main PID: 3371 (openvpn)
         Status: "Initialization Sequence Completed"
          Tasks: 1 (limit: 3587)
         Memory: 1.2M
         CGroup: /system.slice/system-openvpn.slice/openvpn@server.service
                 └─3371 /usr/sbin/openvpn --daemon ovpn-server --status /run/openvpn/server.status 10 --cd /etc/openvpn --config /etc/openvpn/server.conf --writepid /run/openvpn/

      You can also check that the OpenVPN tun0 interface is available by typing:

      • ip addr show tun0

      This will output a configured interface:

      Output

      3: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 100
          link/none
          inet 10.8.0.1 peer 10.8.0.2/32 scope global tun0
             valid_lft forever preferred_lft forever
          inet6 fe80::dd60:3a78:b0ca:1659/64 scope link stable-privacy
             valid_lft forever preferred_lft forever

      After starting the service, enable it so that it starts automatically at boot:

      • sudo systemctl enable openvpn@server
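
      The @server suffix works because openvpn@.service is a systemd template unit: the text after the @ is passed to the unit as an instance name, which is how the service knows to load /etc/openvpn/server.conf. If you ever need to troubleshoot, you can inspect the rendered unit file and follow the service’s logs with:

      • systemctl cat openvpn@server
      • sudo journalctl -u openvpn@server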

      Your OpenVPN service is now up and running. Before you can start using it, though, you must first create a configuration file for the client machine. This tutorial already went over how to create certificate/key pairs for clients, and in the next step we will demonstrate how to create an infrastructure that will generate client configuration files easily.

      Step 8 — Creating the Client Configuration Infrastructure

      Creating configuration files for OpenVPN clients can be somewhat involved, as every client must have its own config and each must align with the settings outlined in the server’s configuration file. Rather than writing a single configuration file that can only be used on one client, this step outlines a process for building a client configuration infrastructure which you can use to generate config files on-the-fly. You will first create a “base” configuration file then build a script which will allow you to generate unique client config files, certificates, and keys as needed.

      Get started by creating a new directory where you will store client configuration files within the client-configs directory you created earlier:

      • mkdir -p ~/client-configs/files

      Next, copy an example client configuration file into the client-configs directory to use as your base configuration:

      • cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf ~/client-configs/base.conf

      Open this new file in your text editor:

      • nano ~/client-configs/base.conf

      Inside, locate the remote directive. This points the client to your OpenVPN server address — the public IP address of your OpenVPN server. If you decided to change the port that the OpenVPN server is listening on, you will also need to change 1194 to the port you selected:

      ~/client-configs/base.conf

      . . .
      # The hostname/IP and port of the server.
      # You can have multiple remote entries
      # to load balance between the servers.
      remote your_server_ip 1194
      . . .
      

      Be sure that the protocol matches the value you are using in the server configuration:

      ~/client-configs/base.conf

      proto udp
      

      Next, uncomment the user and group directives by removing the ";" at the beginning of each line:

      ~/client-configs/base.conf

      # Downgrade privileges after initialization (non-Windows only)
      user nobody
      group nogroup
      

      Find the directives that set the ca, cert, and key. Comment out these directives since you will add the certs and keys within the file itself shortly:

      ~/client-configs/base.conf

      # SSL/TLS parms.
      # See the server config file for more
      # description.  It's best to use
      # a separate .crt/.key file pair
      # for each client.  A single ca
      # file can be used for all clients.
      #ca ca.crt
      #cert client.crt
      #key client.key
      

      Similarly, comment out the tls-auth directive, as you will add ta.key directly into the client configuration file:

      ~/client-configs/base.conf

      # If a tls-auth key is used on the server
      # then every client must also have the key.
      #tls-auth ta.key 1
      

      Mirror the cipher and auth settings that you set in the /etc/openvpn/server.conf file:

      ~/client-configs/base.conf

      cipher AES-256-CBC
      auth SHA256
      

      Next, add the key-direction directive somewhere in the file. You must set this to "1" for the VPN to function correctly on the client machine:

      ~/client-configs/base.conf

      key-direction 1
      

      Finally, add a few commented out lines. Although you can include these directives in every client configuration file, you only need to enable them for Linux clients that ship with an /etc/openvpn/update-resolv-conf file. This script uses the resolvconf utility to update DNS information for Linux clients.

      ~/client-configs/base.conf

      # script-security 2
      # up /etc/openvpn/update-resolv-conf
      # down /etc/openvpn/update-resolv-conf
      

      If your client is running Linux and has an /etc/openvpn/update-resolv-conf file, uncomment these lines from the client’s configuration file after it has been generated.

      Save and close the file when you are finished.
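
      If you’d like a quick, optional check that the directives you just edited read as intended, you can grep for them. Each of the following directives should appear once, with the values you set above (the exact order will follow their position in the file):

      • grep -E '^(remote |proto |user |group |cipher |auth |key-direction)' ~/client-configs/base.conf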

      Next, create a simple script that will compile your base configuration with the relevant certificate, key, and encryption files and then place the generated configuration in the ~/client-configs/files directory. Open a new file called make_config.sh within the ~/client-configs directory:

      • nano ~/client-configs/make_config.sh

      Inside, add the following content, making sure to change sammy to that of your server’s non-root user account:

      ~/client-configs/make_config.sh

      #!/bin/bash
      
      # First argument: Client identifier
      
      KEY_DIR=/home/sammy/client-configs/keys
      OUTPUT_DIR=/home/sammy/client-configs/files
      BASE_CONFIG=/home/sammy/client-configs/base.conf
      
      cat ${BASE_CONFIG} \
          <(echo -e '<ca>') \
          ${KEY_DIR}/ca.crt \
          <(echo -e '</ca>\n<cert>') \
          ${KEY_DIR}/${1}.crt \
          <(echo -e '</cert>\n<key>') \
          ${KEY_DIR}/${1}.key \
          <(echo -e '</key>\n<tls-auth>') \
          ${KEY_DIR}/ta.key \
          <(echo -e '</tls-auth>') \
          > ${OUTPUT_DIR}/${1}.ovpn
      

      Save and close the file when you are finished.

      Before moving on, be sure to mark this file as executable by typing:

      • chmod 700 ~/client-configs/make_config.sh

      This script will make a copy of the base.conf file you made, collect all the certificate and key files you’ve created for your client, extract their contents, append them to the copy of the base configuration file, and export all of this content into a new client configuration file. This means that, rather than having to manage the client’s configuration, certificate, and key files separately, all the required information is stored in one place. The benefit of this is that if you ever need to add a client in the future, you can just run this script to quickly create the config file and ensure that all the important information is stored in a single, easy-to-access location.
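
      To give you an idea of what the script produces, the generated file is simply your base configuration followed by the credentials wrapped in OpenVPN’s inline tags, roughly like this (certificate and key contents shortened):

      client1.ovpn (excerpt)

      . . .
      <ca>
      -----BEGIN CERTIFICATE-----
      . . .
      -----END CERTIFICATE-----
      </ca>
      <cert>
      . . .
      </cert>
      <key>
      . . .
      </key>
      <tls-auth>
      . . .
      </tls-auth>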

      Please note that any time you add a new client, you will need to generate new keys and certificates for it before you can run this script and generate its configuration file. You will get some practice using this script in the next step.

      Step 9 — Generating Client Configurations

      If you followed along with the guide, you created a client certificate and key named client1.crt and client1.key, respectively, in Step 4. You can generate a config file for these credentials by moving into your ~/client-configs directory and running the script you made at the end of the previous step:

      • cd ~/client-configs
      • sudo ./make_config.sh client1

      This will create a file named client1.ovpn in your ~/client-configs/files directory:

      • ls ~/client-configs/files

      Output

      client1.ovpn

      You need to transfer this file to the device you plan to use as the client. For instance, this could be your local computer or a mobile device.

      While the exact applications used to accomplish this transfer will depend on your device's operating system and your personal preferences, a dependable and secure method is to use SFTP (SSH file transfer protocol) or SCP (Secure Copy) on the backend. This will transport your client's VPN authentication files over an encrypted connection.

      Here is an example SFTP command using the client1.ovpn example which you can run from your local computer (macOS or Linux). It places the .ovpn file in your home directory:

      • sftp sammy@your_server_ip:client-configs/files/client1.ovpn ~/
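
      If you prefer SCP, the equivalent command likewise places the .ovpn file in your home directory:

      • scp sammy@your_server_ip:client-configs/files/client1.ovpn ~/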

      There are several other tools and tutorials available that cover securely transferring files from a server to a local computer if you need further guidance.

      Step 10 — Installing the Client Configuration

      This section covers how to install a client VPN profile on Windows, macOS, Linux, iOS, and Android. None of these client instructions are dependent on one another, so feel free to skip to whichever is applicable to your device.

      The OpenVPN connection will have the same name as whatever you called the .ovpn file. For this tutorial, that means the connection is named client1.ovpn, aligning with the first client file you generated.

      Windows

      Installing

      Download the OpenVPN client application for Windows from OpenVPN's Downloads page. Choose the appropriate installer version for your version of Windows.

      Note: OpenVPN needs administrative privileges to install.

      After installing OpenVPN, copy the .ovpn file to:

      C:\Program Files\OpenVPN\config
      

      When you launch OpenVPN, it will automatically see the profile and make it available.

      You must run OpenVPN as an administrator each time it's used, even by administrative accounts. To do this without having to right-click and select Run as administrator every time you use the VPN, you must preset this from an administrative account. This also means that standard users will need to enter the administrator's password to use OpenVPN. On the other hand, standard users can't properly connect to the server unless the OpenVPN application on the client has admin rights, so the elevated privileges are necessary.

      To set the OpenVPN application to always run as an administrator, right-click on its shortcut icon and go to Properties. At the bottom of the Compatibility tab, click the button to Change settings for all users. In the new window, check Run this program as an administrator.

      Connecting

      Each time you launch the OpenVPN GUI, Windows will ask if you want to allow the program to make changes to your computer. Click Yes. Launching the OpenVPN client application only puts the applet in the system tray so that you can connect and disconnect the VPN as needed; it does not actually make the VPN connection.

      Once OpenVPN is started, initiate a connection by going into the system tray applet and right-clicking on the OpenVPN applet icon. This opens the context menu. Select client1 at the top of the menu (that's your client1.ovpn profile) and choose Connect.

      A status window will open showing the log output while the connection is established, and a message will show once the client is connected.

      Disconnect from the VPN the same way: Go into the system tray applet, right-click the OpenVPN applet icon, select the client profile and click Disconnect.

      macOS

      Installing

      Tunnelblick is a free, open source OpenVPN client for macOS. You can download the latest disk image from the Tunnelblick Downloads page. Double-click the downloaded .dmg file and follow the prompts to install.

      Towards the end of the installation process, Tunnelblick will ask if you have any configuration files. Answer I have configuration files and let Tunnelblick finish. Open a Finder window and double-click client1.ovpn. Tunnelblick will install the client profile. Administrative privileges are required.

      Connecting

      Launch Tunnelblick by double-clicking the Tunnelblick icon in the Applications folder. Once Tunnelblick has been launched, there will be a Tunnelblick icon in the menu bar at the top right of the screen for controlling connections. Click on the icon, and then the Connect client1 menu item to initiate the VPN connection.

      Linux

      Installing

      If you are using Linux, there are a variety of tools that you can use depending on your distribution. Your desktop environment or window manager might also include connection utilities.

      The most universal way of connecting, however, is to just use the OpenVPN software.

      On Debian, you can install it just as you did on the server by typing:

      • sudo apt update
      • sudo apt install openvpn

      On CentOS you can enable the EPEL repositories and then install it by typing:

      • sudo yum install epel-release
      • sudo yum install openvpn

      Configuring

      Check to see if your distribution includes an /etc/openvpn/update-resolv-conf script:

      • ls /etc/openvpn

      Output

      update-resolv-conf

      Next, edit the OpenVPN client configuration file you transferred:

      • nano client1.ovpn

      If you were able to find an update-resolv-conf file, uncomment the three lines you added to adjust the DNS settings:

      client1.ovpn

      script-security 2
      up /etc/openvpn/update-resolv-conf
      down /etc/openvpn/update-resolv-conf
      

      If you are using CentOS, change the group directive from nogroup to nobody to match the distribution's available groups:

      client1.ovpn

      group nobody
      

      Save and close the file.

      Now, you can connect to the VPN by just pointing the openvpn command to the client configuration file:

      • sudo openvpn --config client1.ovpn

      This should connect you to your VPN.
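
      If you want to confirm the tunnel from the command line, leave that command running and, in a second terminal on the client, check that a tun0 interface now exists with an address in the 10.8.0.0 range:

      • ip addr show tun0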

      iOS

      Installing

      From the iTunes App Store, search for and install OpenVPN Connect, the official iOS OpenVPN client application. To transfer your iOS client configuration onto the device, connect it directly to a computer.

      The process of completing the transfer with iTunes is outlined here. Open iTunes on the computer and click on iPhone > apps. Scroll down to the bottom to the File Sharing section and click the OpenVPN app. The blank window to the right, OpenVPN Documents, is for sharing files. Drag the .ovpn file to the OpenVPN Documents window.

      iTunes showing the VPN profile ready to load on the iPhone

      Now launch the OpenVPN app on the iPhone. You will receive a notification that a new profile is ready to import. Tap the green plus sign to import it.

      The OpenVPN iOS app showing new profile ready to import

      Connecting

      OpenVPN is now ready to use with the new profile. Start the connection by sliding the Connect button to the On position. Disconnect by sliding the same button to Off.

      Note: The VPN switch under Settings cannot be used to connect to the VPN. If you try, you will receive a notice to only connect using the OpenVPN app.

      The OpenVPN iOS app connected to the VPN

      Android

      Installing

      Open the Google Play Store. Search for and install Android OpenVPN Connect, the official Android OpenVPN client application.

      You can transfer the .ovpn profile by connecting the Android device to your computer by USB and copying the file over. Alternatively, if you have an SD card reader, you can remove the device's SD card, copy the profile onto it and then insert the card back into the Android device.

      Start the OpenVPN app and tap the menu to import the profile.

      The OpenVPN Android app profile import menu selection

      Then navigate to the location of the saved profile (the screenshot uses /sdcard/Download/) and select the file. The app will make a note that the profile was imported.

      The OpenVPN Android app selecting VPN profile to import

      Connecting

      To connect, simply tap the Connect button. You'll be asked if you trust the OpenVPN application. Choose OK to initiate the connection. To disconnect from the VPN, go back to the OpenVPN app and choose Disconnect.

      The OpenVPN Android app ready to connect to the VPN

      Step 11 — Testing Your VPN Connection (Optional)

      Note: This method for testing your VPN connection will only work if you opted to route all your traffic through the VPN in Step 5.

      Once everything is installed, a simple check confirms everything is working properly. Without having a VPN connection enabled, open a browser and go to DNSLeakTest.

      The site will return the IP address assigned by your internet service provider and as you appear to the rest of the world. To check your DNS settings through the same website, click on Extended Test and it will tell you which DNS servers you are using.

      Now connect the OpenVPN client to your Droplet's VPN and refresh the browser. A completely different IP address (that of your VPN server) should now appear, and this is how you appear to the world. Again, DNSLeakTest's Extended Test will check your DNS settings and confirm you are now using the DNS resolvers pushed by your VPN.
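
      On a Linux client you can also check from the command line which interface outbound traffic uses. Assuming you opted to route all traffic through the VPN, a route lookup for a public address (8.8.8.8 is used here purely as an example destination) should show the traffic leaving via tun0:

      • ip route get 8.8.8.8

      The output should include dev tun0, indicating that the lookup resolves through the VPN tunnel rather than your regular network interface.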

      Step 12 — Revoking Client Certificates

      Occasionally, you may need to revoke a client certificate to prevent further access to the OpenVPN server.

      To do so, navigate to the EasyRSA directory on your CA machine:

      • cd ~/EasyRSA-v3.0.6/

      Next, run the easyrsa script with the revoke option, followed by the client name you wish to revoke:

      • ./easyrsa revoke client2

      This will ask you to confirm the revocation by entering yes:

      Output

      Please confirm you wish to revoke the certificate with the following subject:

      subject=
          commonName                = client2

      Type the word 'yes' to continue, or any other input to abort.
        Continue with revocation: yes

      After confirming the action, the CA will fully revoke the client’s certificate. However, your OpenVPN server currently has no way to check whether any clients’ certificates have been revoked and the client will still have access to the VPN. To correct this, create a certificate revocation list (CRL) on your CA machine:

      • ./easyrsa gen-crl

      This will generate a file called crl.pem. Securely transfer this file to your OpenVPN server:

      • scp ~/EasyRSA-v3.0.6/pki/crl.pem sammy@your_server_ip:/tmp

      On your OpenVPN server, copy this file into your /etc/openvpn/ directory:

      • sudo cp /tmp/crl.pem /etc/openvpn

      Next, open the OpenVPN server configuration file:

      • sudo nano /etc/openvpn/server.conf

      At the bottom of the file, add the crl-verify option, which will instruct the OpenVPN server to check the certificate revocation list that we've created each time a connection attempt is made:

      /etc/openvpn/server.conf

      crl-verify crl.pem
      

      Save and close the file.

      Finally, restart OpenVPN to implement the certificate revocation:

      • sudo systemctl restart openvpn@server

      The client should no longer be able to successfully connect to the server using the old credential.

      To revoke additional clients, follow this process:

      1. Revoke the certificate with the ./easyrsa revoke client_name command.
      2. Generate a new CRL with ./easyrsa gen-crl.
      3. Transfer the new crl.pem file to your OpenVPN server and copy it to the /etc/openvpn directory to overwrite the old list.
      4. Restart the OpenVPN service.

      You can use this process to revoke any certificates that you've previously issued for your server.
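
      As a compact reference, here is what that sequence looks like for a hypothetical client named client3 (substitute the name of the certificate you actually need to revoke). The first four commands run on your CA machine, the last two on your OpenVPN server:

      • cd ~/EasyRSA-v3.0.6/
      • ./easyrsa revoke client3
      • ./easyrsa gen-crl
      • scp ~/EasyRSA-v3.0.6/pki/crl.pem sammy@your_server_ip:/tmp

      • sudo cp /tmp/crl.pem /etc/openvpn
      • sudo systemctl restart openvpn@server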

      Conclusion

      You are now securely traversing the internet, protecting your identity, location, and traffic from snoopers and censors.

      To configure more clients, you only need to follow steps 4 and 9-11 for each additional device. To revoke access to clients, just follow step 12.
