      How To Set Up a Video Streaming Server using Nginx-RTMP on Ubuntu 20.04


      Introduction

      There are many use cases for streaming video. Service providers such as Twitch are very popular for handling the web discovery and community management aspects of streaming, and free software such as OBS Studio is widely used for combining video overlays from multiple different stream sources in real time. While these platforms are very powerful, in some cases you may want to be able to host a stream that does not rely on other service providers.

      In this tutorial, you will learn how to configure the Nginx web server to host an independent RTMP video stream that can be linked and viewed in different applications. RTMP, the Real-Time Messaging Protocol, defines the fundamentals of most internet video streaming. You will also learn how to host HLS and DASH streams that support more modern platforms using the same technology.

      Prerequisites

      To complete this guide, you will need:

      This tutorial will use the placeholder domain name your_domain for URLs and hostnames. Substitute this with your own domain name or IP address as you work through the tutorial.

      Step 1 — Installing and Configuring Nginx-RTMP

      Most modern streaming tools support the RTMP protocol, which defines the basic parameters of an internet video stream. The Nginx web server includes a module that allows you to provide an RTMP stream with minimal configuration from a dedicated URL, just like it provides HTTP access to web pages by default. The Nginx RTMP module isn’t included automatically with Nginx, but on Ubuntu 20.04 and most other Linux distributions you can install it as an additional package.

      Begin by running the following commands as a non-root user to update your package listings and install the Nginx module:

      • sudo apt update
      • sudo apt install libnginx-mod-rtmp

      Installing the module won’t automatically start providing a stream. You’ll need to add a configuration block to your Nginx configuration file that defines where and how the stream will be available.

      Using nano or your favorite text editor, open Nginx’s main configuration file, /etc/nginx/nginx.conf, and add this configuration block to the end of the file:

      • sudo nano /etc/nginx/nginx.conf

      /etc/nginx/nginx.conf

      . . .
      rtmp {
              server {
                      listen 1935;
                      chunk_size 4096;
                      allow publish 127.0.0.1;
                      deny publish all;
      
                      application live {
                              live on;
                              record off;
                      }
              }
      }
      
      • listen 1935 means that RTMP will be listening for connections on port 1935, which is standard.
      • chunk_size 4096 means that RTMP will be sending data in 4KB blocks, which is also standard.
      • allow publish 127.0.0.1 and deny publish all mean that the server will only allow video to be published from the same server, to avoid any other users pushing their own streams.
      • application live defines an application block that will be available at the /live URL path.
      • live on enables live mode so that multiple users can connect to your stream concurrently, a baseline assumption of video streaming.
      • record off disables Nginx-RTMP’s recording functionality, so that streams are not separately saved to disk by default.

      Save and close the file. If you are using nano, press Ctrl+X, then when prompted, Y and Enter.

      This provides the beginning of your RTMP configuration. By default, it listens on port 1935, which means you’ll need to open that port in your firewall. If you configured ufw as part of your initial server setup, run the following command:

      • sudo ufw allow 1935/tcp
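
      If you want to double-check your edits before reloading, you can also ask Nginx to test its configuration first:

      • sudo nginx -t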

      Now you can reload Nginx with your changes:

      • sudo systemctl reload nginx.service

      You should now have a working RTMP server. In the next section, we’ll cover streaming video to your RTMP server from both local and remote sources.

      Step 2 — Sending Video to Your RTMP Server

      There are multiple ways to send video to your RTMP server. One option is to use ffmpeg, a popular command line audio-video utility, to play a video file directly on your server. If you don’t have a video file already on the server, you can download one using youtube-dl, a command line tool for capturing video from streaming platforms like YouTube. In order to use youtube-dl, you’ll need an up-to-date Python installation on your server as well.

      First, install Python and its package manager, pip:

      • sudo apt install python3-pip

      Next, use pip to install youtube-dl:

      • sudo pip3 install youtube-dl

      Now you can use youtube-dl to download a video from YouTube. If you don’t have one in mind, try this video, introducing DigitalOcean’s App Platform:

      • youtube-dl https://www.youtube.com/watch?v=iom_nhYQIYk

      You’ll see some output as youtube-dl combines the video and audio streams it’s downloading back into a single file – this is normal.

      Output

      [youtube] iom_nhYQIYk: Downloading webpage
      WARNING: Requested formats are incompatible for merge and will be merged into mkv.
      [download] Destination: Introducing App Platform by DigitalOcean-iom_nhYQIYk.f137.mp4
      [download] 100% of 32.82MiB in 08:40
      [download] Destination: Introducing App Platform by DigitalOcean-iom_nhYQIYk.f251.webm
      [download] 100% of 1.94MiB in 00:38
      [ffmpeg] Merging formats into "Introducing App Platform by DigitalOcean-iom_nhYQIYk.mkv"
      Deleting original file Introducing App Platform by DigitalOcean-iom_nhYQIYk.f137.mp4 (pass -k to keep)
      Deleting original file Introducing App Platform by DigitalOcean-iom_nhYQIYk.f251.webm (pass -k to keep)

      You should now have a video file in your current directory with a title like Introducing App Platform by DigitalOcean-iom_nhYQIYk.mkv. In order to stream it, you’ll want to install ffmpeg:

      • sudo apt install ffmpeg

      And use ffmpeg to send it to your RTMP server:

      • ffmpeg -re -i "Introducing App Platform by DigitalOcean-iom_nhYQIYk.mkv" -c:v copy -c:a aac -ar 44100 -ac 1 -f flv rtmp://localhost/live/stream

      This ffmpeg command is doing a few things to prepare the video for a streaming-friendly format. This isn’t an ffmpeg tutorial, so you don’t need to examine it too closely, but you can understand the various options as follows:

      • -re specifies that input will be read at its native framerate.
      • -i "Introducing App Platform by DigitalOcean-iom_nhYQIYk.mkv" specifies the path to our input file.
      • -c:v is set to copy, meaning that you’re copying over the video format you got from YouTube natively.
      • -c:a has other parameters, namely aac -ar 44100 -ac 1, because you need to resample the audio into an RTMP-friendly format. aac is a widely supported audio codec, -ar 44100 sets a common sample rate of 44100 Hz, and -ac 1 mixes the audio down to a single (mono) channel for compatibility purposes.
      • -f flv wraps the video in an flv format container for maximum compatibility with RTMP.

      The video is sent to rtmp://localhost/live/stream because you defined the live configuration block in Step 1, and stream is an arbitrarily chosen URL for this video.

      Note: You can learn more about ffmpeg options from ffmprovisr, a community-maintained catalog of ffmpeg command examples, or refer to the official documentation.

      While ffmpeg is streaming the video, it will print timecodes:

      Output

      frame= 127 fps= 25 q=-1.0 size= 405kB time=00:00:05.00 bitrate= 662.2kbits/s speed=
      frame= 140 fps= 25 q=-1.0 size= 628kB time=00:00:05.52 bitrate= 931.0kbits/s speed=
      frame= 153 fps= 25 q=-1.0 size= 866kB time=00:00:06.04 bitrate=1173.1kbits/s speed=

      This is standard ffmpeg output. If you were converting video to a different format, these figures might help you understand how efficiently the video is being resampled, but in this case, you just want to see that it’s being played back consistently. Using this sample video, you should see a steady fps= 25.

      While ffmpeg is running, you can connect to your RTMP stream from a video player. If you have VLC, mpv, or another media player installed locally, you should be able to view your stream by opening the URL rtmp://your_domain/live/stream in your media player. Your stream will terminate after ffmpeg has finished playing the video. If you want it to keep looping indefinitely, you can add -stream_loop -1 to the beginning of your ffmpeg command.
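
      For example, a looping version of the earlier command could look like the following; note that -stream_loop -1 is an input option, so it goes before -i:

      • ffmpeg -re -stream_loop -1 -i "Introducing App Platform by DigitalOcean-iom_nhYQIYk.mkv" -c:v copy -c:a aac -ar 44100 -ac 1 -f flv rtmp://localhost/live/stream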

      Note: You can also stream directly to, for example, Facebook Live using ffmpeg without needing to use Nginx-RTMP at all by replacing rtmp://localhost/live/stream in your ffmpeg command with rtmps://live-api-s.facebook.com:443/rtmp/your-facebook-stream-key. YouTube uses URLs like rtmp://a.rtmp.youtube.com/live2. Other streaming providers that can consume RTMP streams should behave similarly.
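
      As an illustration, pushing the same file to YouTube could look something like this, where your-youtube-stream-key is a placeholder for the key shown in your YouTube Live dashboard:

      • ffmpeg -re -i "Introducing App Platform by DigitalOcean-iom_nhYQIYk.mkv" -c:v copy -c:a aac -ar 44100 -ac 1 -f flv rtmp://a.rtmp.youtube.com/live2/your-youtube-stream-key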

      Now that you’ve learned to stream static video sources from the command line, you’ll learn how to stream video from dynamic sources using OBS on a desktop.

      Step 3 — Streaming Video to Your Server via OBS (Optional)

      Streaming via ffmpeg is convenient when you have a prepared video that you want to play back, but live streaming can be much more dynamic. The most popular software for live streaming is OBS, or Open Broadcaster Software – it is free, open source, and very powerful.

      OBS is a desktop application, and will connect to your server from your local computer.

      After installing OBS, configuring it means customizing which of your desktop windows and audio sources you want to add to your stream, and then adding credentials for a streaming service. This tutorial will not cover your streaming configuration, as it comes down to preference, and by default, you can get a working demo by streaming your entire desktop. To set your streaming service credentials, open OBS’ settings menu, navigate to the Stream option, and enter the following options:

      Streaming Service: Custom
      Server: rtmp://your_domain/live
      Play Path/Stream Key: obs_stream
      

      obs_stream is an arbitrarily chosen path – in this case, your video would be available at rtmp://your_domain/live/obs_stream. You do not need to enable authentication, but you do need to add an additional entry to the IP whitelist that you configured in Step 1.

      Back on the server, open Nginx’s main configuration file, /etc/nginx/nginx.conf, and add an additional allow publish entry for your local IP address. If you don’t know your local IP address, you can visit a site like What’s My IP, which will show you the address you’re connecting from:

      • sudo nano /etc/nginx/nginx.conf

      /etc/nginx/nginx.conf

      . . .
                      allow publish 127.0.0.1;
                      allow publish your_local_ip_address;
                      deny publish all;
      . . .
      

      Save and close the file, then reload Nginx:

      • sudo systemctl reload nginx.service

      You should now be able to close OBS’ settings menu and click Start Streaming from the main interface! Try viewing your stream in a media player as before. Now that you’ve seen the fundamentals of streaming video in action, you can add a few other features to your server to make it more production-ready.

      Step 4 — Adding Monitoring to Your Configuration (Optional)

      Now that you have Nginx configured to stream video using the Nginx-RTMP module, a common next step is to enable the RTMP statistics page. Rather than adding more and more configuration details to your main nginx.conf file, Nginx allows you to add per-site configurations to individual files in a subdirectory called sites-available/. In this case, you’ll create one called rtmp:

      • sudo nano /etc/nginx/sites-available/rtmp

      Add the following contents:

      /etc/nginx/sites-available/rtmp

      server {
          listen 8080;
          server_name  localhost;
      
          # rtmp stat
          location /stat {
              rtmp_stat all;
              rtmp_stat_stylesheet stat.xsl;
          }
          location /stat.xsl {
              root /var/www/html/rtmp;
          }
      
          # rtmp control
          location /control {
              rtmp_control all;
          }
      }
      

      Save and close the file. The stat.xsl file from this configuration block is used to style and display an RTMP statistics page in your browser. It is provided by the libnginx-mod-rtmp library that you installed earlier, but it comes zipped up by default, so you will need to unzip it and put it in the /var/www/html/rtmp directory to match the above configuration. Note that you can find additional information about any of these options in the Nginx-RTMP documentation.

      Create the /var/www/html/rtmp directory, and then uncompress the stat.xsl.gz file with the following commands:

      • sudo mkdir /var/www/html/rtmp
      • sudo gunzip -c /usr/share/doc/libnginx-mod-rtmp/examples/stat.xsl.gz > /var/www/html/rtmp/stat.xsl

      Finally, to access the statistics page that you added, you will need to open another port in your firewall. Specifically, the listen directive is configured with port 8080, so you will need to add a rule to access Nginx on that port. However, you probably don’t want others to be able to access your stats page, so it’s best only to allow it for your own IP address. Run the following command:

      • sudo ufw allow from your_ip_address to any port http-alt

      Next, you’ll need to activate this new configuration. Nginx’s convention is to create symbolic links (like shortcuts) from files in sites-available/ to another folder called sites-enabled/ as you decide to enable or disable them. Using full paths for clarity, make that link:

      • sudo ln -s /etc/nginx/sites-available/rtmp /etc/nginx/sites-enabled/rtmp

      Now you can reload Nginx again to process your changes:

      • sudo systemctl reload nginx.service

      You should now be able to go to http://your_domain:8080/stat in a browser to see the RTMP statistics page. Visit and refresh the page while streaming video and watch as the stream statistics change.
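
      If you prefer the command line, you can also fetch the raw statistics XML directly on the server, since the stats page is listening locally on port 8080:

      • curl http://localhost:8080/stat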

      You’ve now seen how to monitor your video stream and push it to third party providers. In the final section, you’ll learn how to provide it directly in a browser without the use of third party streaming platforms or standalone media player apps.

      Step 5 — Creating Modern Streams for Browsers (Optional)

      As a final step, you may want to add support for newer streaming protocols so that users can stream video from your server using a web browser directly. There are two protocols that you can use to create HTTP-based video streams: Apple’s HLS and MPEG DASH. They both have advantages and disadvantages, so you will probably want to support both.

      The Nginx-RTMP module supports both standards. To add HLS and DASH support to your server, you will need to modify the rtmp block in your nginx.conf file. Open /etc/nginx/nginx.conf using nano or your preferred editor, then add the following new directives inside the application live block:

      • sudo nano /etc/nginx/nginx.conf

      /etc/nginx/nginx.conf

      . . .
      rtmp {
              server {
      . . .
                      application live {
                              live on;
                              record off;
                              hls on;
                              hls_path /var/www/html/stream/hls;
                              hls_fragment 3;
                              hls_playlist_length 60;
      
                              dash on;
                              dash_path /var/www/html/stream/dash;
                      }
              }
      }
      . . .
      

      Save and close the file. Next, add this to the bottom of your sites-available/rtmp:

      • sudo nano /etc/nginx/sites-available/rtmp

      /etc/nginx/sites-available/rtmp

      . . .
      server {
          listen 8088;
      
          location / {
              add_header Access-Control-Allow-Origin *;
              root /var/www/html/stream;
          }
      }
      
      types {
          application/dash+xml mpd;
      }
      

      Note: The Access-Control-Allow-Origin * header enables CORS, or Cross-Origin Resource Sharing, which is disabled by default. This tells any web browser accessing data from your server that these resources may be loaded by pages served from other ports or domains. CORS is needed for maximum compatibility with HLS and DASH clients, and is a common configuration toggle in many other web deployments.

      Save and close the file. Note that you’re using port 8088 here, which is another arbitrary choice for this tutorial to ensure you aren’t conflicting with any services you may be running on port 80 or 443. You’ll want to open that port in your firewall for now too:

      • sudo ufw allow 8088/tcp

      Finally, create a stream directory in your web root to match the configuration block, so that Nginx can generate the necessary files for HLS and DASH:

      • sudo mkdir /var/www/html/stream

      Reload Nginx again:

      • sudo systemctl reload nginx

      You should now have an HLS stream available at http://your_domain:8088/hls/stream.m3u8 and a DASH stream available at http://your_domain:8088/dash/stream.mpd. These endpoints will generate any necessary metadata on top of your RTMP video feed in order to support modern APIs.
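
      As a quick check, while a stream named stream is being published you can fetch the HLS playlist with curl, or open either URL in a player such as mpv:

      • curl http://your_domain:8088/hls/stream.m3u8
      • mpv http://your_domain:8088/hls/stream.m3u8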

      Conclusion

      The configuration options that you used in this tutorial are all documented in the Nginx RTMP Wiki page. Nginx modules typically share common syntax and expose a very large set of configuration options, and you can review their documentation to change any of your settings from here.

      Nearly all internet video streaming is implemented on top of RTMP, HLS, and DASH, and by using the approach that you have explored in this tutorial, you can provide your stream via other broadcasting services, or expose it any other way you choose. Next, you could look into configuring Nginx as a reverse proxy in order to make some of these different video endpoints available as subdomains.




      How to Set Up Squid Proxy for Private Connections on Ubuntu 20.04


      Introduction

      Proxy servers are a type of server application that functions as a gateway between an end user and an internet resource. Through a proxy server, an end user is able to control and monitor their web traffic for a wide variety of purposes, including privacy, security, and caching. For example, you can use a proxy server to make web requests from a different IP address than your own. You can also use a proxy server to research how the web is served differently from one jurisdiction to the next, or avoid some methods of surveillance or web traffic throttling.

      Squid is a stable, popular, open-source HTTP proxy. In this tutorial, you will be installing and configuring Squid to provide an HTTP proxy on a Ubuntu 20.04 server.

      Prerequisites

      To complete this guide, you will need:

      You will use the domain name your_domain in this tutorial, but you should substitute this with your own domain name, or IP address.

      Step 1 — Installing Squid Proxy

      Squid has many use cases beyond routing an individual user’s outbound traffic. In the context of large-scale server deployments, it can be used as a distributed caching mechanism, a load balancer, or another component of a routing stack. However, some methods of horizontally scaling server traffic that would typically have involved a proxy server have been surpassed in popularity by containerization frameworks such as Kubernetes, which distribute more components of an application. At the same time, using proxy servers to redirect web requests as an individual user has become increasingly popular for protecting your privacy. This is helpful to keep in mind when working with open-source proxy servers which may appear to have many dozens of features in a lower-priority maintenance mode. The use cases for a proxy have changed over time, but the fundamental technology has not.

      Begin by running the following commands as a non-root user to update your package listings and install Squid Proxy:

      • sudo apt update
      • sudo apt install squid

      Squid will automatically set up a background service and start after being installed. You can check that the service is running properly:

      • systemctl status squid.service

      Output

      ● squid.service - Squid Web Proxy Server
           Loaded: loaded (/lib/systemd/system/squid.service; enabled; vendor preset: enabled)
           Active: active (running) since Wed 2021-12-15 21:45:15 UTC; 2min 11s ago

      By default, Squid does not allow any clients to connect to it from outside of this server. In order to enable that, you’ll need to make some changes to its configuration file, which is stored in /etc/squid/squid.conf. Open it in nano or your favorite text editor:

      • sudo nano /etc/squid/squid.conf

      Be advised that Squid’s default configuration file is very, very long, and contains a massive number of options that are disabled by putting a # at the start of the line they’re on, also called being commented out. You will most likely want to search through the file to find the lines you want to edit. In nano, this is done by pressing Ctrl+W, entering your search term, pressing Enter, and then repeatedly pressing Alt+W to find the next instance of that term if needed.

      Begin by navigating to the line containing the phrase http_access deny all. You should see a block of text explaining Squid’s default access rules:

      /etc/squid/squid.conf

      . . . 
      #
      # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
      #
      include /etc/squid/conf.d/*
      # Example rule allowing access from your local networks.
      # Adapt localnet in the ACL section to list your (internal) IP networks
      # from where browsing should be allowed
      #http_access allow localnet
      http_access allow localhost
      
      # And finally deny all other access to this proxy
      http_access deny all
      . . . 
      

      From this, you can see the current behavior – localhost is allowed; other connections are not. Note that these rules are parsed sequentially, so it’s a good idea to keep the deny all rule at the bottom of this configuration block. You could change that rule to allow all, enabling anyone to connect to your proxy server, but you probably don’t want to do that. Instead, you can add a line above http_access allow localhost that includes your own IP address, like so:

      /etc/squid/squid.conf

      #
      # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
      #
      include /etc/squid/conf.d/*
      # Example rule allowing access from your local networks.
      acl localnet src your_ip_address
      # Adapt localnet in the ACL section to list your (internal) IP networks
      # from where browsing should be allowed
      #http_access allow localnet
      http_access allow localhost
      
      • acl means an Access Control List, a common term for permissions policies
      • localnet in this case is the name of your ACL.
      • src is where the request would originate from under this ACL, i.e., your IP address.

      If you don’t know your local IP address, you can visit a site like What’s My IP, which will show you the address you’re connecting from. After making that change, save and close the file. If you are using nano, press Ctrl+X, and then when prompted, Y and then Enter.

      At this point, you could restart Squid and connect to it, but there’s more you can do in order to secure it first.
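
      If you want to check your edits for syntax errors before moving on, Squid can parse its configuration file without restarting the service:

      • sudo squid -k parse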

      Step 2 — Securing Squid

      Most proxies, and most client-side apps that connect to proxies (e.g., web browsers) support multiple methods of authentication. These can include shared keys, or separate authentication servers, but most commonly entail regular username-password pairs. Squid allows you to create username-password pairs using built-in Linux functionality, as an additional or an alternative step to restricting access to your proxy by IP address. To do that, you’ll create a file called /etc/squid/passwords and point Squid’s configuration to it.

      First, you’ll need to install some utilities from the Apache project in order to have access to a password generator that Squid likes.

      • sudo apt install apache2-utils

      This package provides the htpasswd command, which you can use in order to generate a password for a new Squid user. Squid’s usernames won’t overlap with system usernames in any way, so you can use the same name you’ve logged in with if you want. You’ll be prompted to add a password as well:

      • sudo htpasswd -c /etc/squid/passwords your_squid_username

      This will store your username along with a hash of your new password in /etc/squid/passwords, which will be used as an authentication source by Squid. You can cat the file afterward to see what that looks like:

      • sudo cat /etc/squid/passwords

      Output

      sammy:$apr1$Dgl.Mtnd$vdqLYjBGdtoWA47w4q1Td.
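
      If you want to add more users later, run htpasswd again without the -c flag, since -c creates (and overwrites) the file. Here, another_squid_username is just a placeholder:

      • sudo htpasswd /etc/squid/passwords another_squid_username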

      After verifying that your username and password have been stored, you can update Squid’s configuration to use your new /etc/squid/passwords file. Using nano or your favorite text editor, reopen the Squid configuration file and add the following new lines:

      • sudo nano /etc/squid/squid.conf

      /etc/squid/squid.conf

      …
      #
      # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
      #
      include /etc/squid/conf.d/*
      auth_param basic program /usr/lib/squid3/basic_ncsa_auth /etc/squid/passwords
      auth_param basic realm proxy
      acl authenticated proxy_auth REQUIRED
      # Example rule allowing access from your local networks.
      acl localnet src your_ip_address
      # Adapt localnet in the ACL section to list your (internal) IP networks
      # from where browsing should be allowed
      #http_access allow localnet
      http_access allow localhost
      http_access allow authenticated
      # And finally deny all other access to this proxy
      http_access deny all
      …
      

      These additional directives tell Squid to check in your new passwords file for password hashes that can be parsed using the basic_ncsa_auth mechanism, and to require authentication for access to your proxy. You can review Squid’s documentation for more information on this or other authentication methods. After that, you can finally restart Squid with your configuration changes. This might take a moment to complete.

      • sudo systemctl restart squid.service

      And don’t forget to open port 3128 in your firewall if you’re using ufw:

      • sudo ufw allow 3128/tcp

      In the next step, you’ll connect to your proxy at last.

      Step 3 — Connecting through Squid

      In order to demonstrate your Squid server, you’ll use a command line program called curl, which is popular for making different types of web requests. In general, if you want to verify whether a given connection should be working in a browser under ideal circumstances, you should always test first with curl. You’ll be using curl on your local machine in order to do this – it’s installed by default on all modern Windows, Mac, and Linux environments, so you can open any local shell to run this command:

      • curl -v -x http://your_squid_username:your_squid_password@your_server_ip:3128 http://www.google.com/

      The -x argument passes a proxy server to curl. In this case, you’re using the http:// protocol, specifying your username and password for that server, and then connecting to a known-working website like google.com. If the command was successful, you should see the following output:

      Output

      * Trying 138.197.103.77...
      * TCP_NODELAY set
      * Connected to 138.197.103.77 (138.197.103.77) port 3128 (#0)
      * Proxy auth using Basic with user 'sammy'
      > GET http://www.google.com/ HTTP/1.1

      It is also possible to access https:// websites with your Squid proxy without making any further configuration changes. These requests make use of the HTTP CONNECT method to tunnel the connection, preserving SSL between the client and the destination server:

      • curl -v -x http://your_squid_username:your_squid_password@your_server_ip:3128 https://www.google.com/

      Output

      * Trying 138.197.103.77...
      * TCP_NODELAY set
      * Connected to 138.197.103.77 (138.197.103.77) port 3128 (#0)
      * allocate connect buffer!
      * Establish HTTP proxy tunnel to www.google.com:443
      * Proxy auth using Basic with user 'sammy'
      > CONNECT www.google.com:443 HTTP/1.1
      > Host: www.google.com:443
      > Proxy-Authorization: Basic c2FtbXk6c2FtbXk=
      > User-Agent: curl/7.55.1
      > Proxy-Connection: Keep-Alive
      >
      < HTTP/1.1 200 Connection established
      <
      * Proxy replied OK to CONNECT request
      * CONNECT phase completed!

      The credentials that you used for curl should now work anywhere else you might want to use your new proxy server.
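
      Many command line tools, curl included, will also pick the proxy up from environment variables. As a sketch, reusing the same placeholders as above, you could set it for a shell session like this:

      • export http_proxy=http://your_squid_username:your_squid_password@your_server_ip:3128
      • export https_proxy=http://your_squid_username:your_squid_password@your_server_ip:3128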

      Conclusion

      In this tutorial, you learned to deploy a popular, open-source proxy server for routing traffic with little to no overhead. Many applications have built-in proxy support (often at the OS level) going back decades, making this proxy stack highly reusable.

      Next, you may want to learn how to deploy Dante, a SOCKS proxy which can run alongside Squid for proxying different types of web traffic.

      Because one of the most common use cases for proxy servers is proxying traffic to and from different global regions, you may want to review how to use Ansible to automate server deployments next, in case you find yourself wanting to duplicate this configuration in other data centers.




      How to Set Up Dante Proxy for Private Connections on Ubuntu 20.04


      Introduction

      Proxy servers are a type of server application that functions as a gateway between an end user and an internet resource. Through a proxy server, an end user is able to control and monitor their web traffic for a wide variety of purposes, including privacy, security, and caching. For example, you can use a proxy server to make web requests from a different IP address than your own. You can also use a proxy server to research how the web is served differently from one jurisdiction to the next, or avoid some methods of surveillance or web traffic throttling.

      Dante is a stable, popular, open-source SOCKS proxy. In this tutorial, you will be installing and configuring Dante to provide a SOCKS proxy on a Ubuntu 20.04 server.

      Prerequisites

      To complete this guide, you will need:

      You will use the domain name your_domain in this tutorial, but you should substitute this with your own domain name, or IP address.

      Step 1 — Installing Dante

      Dante is an open-source SOCKS proxy server. SOCKS is a less widely used protocol, but it is more efficient for some peer-to-peer applications, and is preferred over HTTP for some kinds of traffic. Begin by running the following commands as a non-root user to update your package listings and install Dante:

      • sudo apt update
      • sudo apt install dante-server

      Dante will also automatically set up a background service and start after being installed. However, it is designed to gracefully quit with an error message the first time it runs, because it ships with all of its features disabled. You can verify this by using the systemctl command:

      • systemctl status danted.service

      Output

      ● danted.service - SOCKS (v4 and v5) proxy daemon (danted)
           Loaded: loaded (/lib/systemd/system/danted.service; enabled; vendor preset: enabled)
           Active: failed (Result: exit-code) since Wed 2021-12-15 21:48:22 UTC; 1min 45s ago
             Docs: man:danted(8)
                   man:danted.conf(5)
         Main PID: 14496 (code=exited, status=1/FAILURE)

      Dec 15 21:48:21 proxies systemd[1]: Starting SOCKS (v4 and v5) proxy daemon (danted)...
      Dec 15 21:48:22 proxies systemd[1]: Started SOCKS (v4 and v5) proxy daemon (danted).
      Dec 15 21:48:22 proxies danted[14496]: Dec 15 21:48:22 (1639604902.102601) danted[14496]: warning: checkconfig(): no socks authentication methods enabled. This means all socks requests will be blocked after negotiation. Perhaps this is not intended?

      To successfully start Dante’s services, you’ll need to enable them in the config file.

      Dante’s config file is provided, by default, in /etc/danted.conf. If you open this file using nano or your favorite text editor, you will see a long list of configuration options, all of them disabled. You could try to navigate through this file and enable some options line-by-line, but in practice it will be more efficient and more readable to delete this file and replace it from scratch. Don’t worry about doing this. You can always review Dante’s default configuration by navigating to its online manual, and you could even redownload the package manually from Ubuntu’s package listing to reobtain the stock configuration file if you ever need it. In the meantime, go ahead and delete it:

      • sudo rm /etc/danted.conf

      Now you can replace it with something more concise. Opening a file with a text editor will automatically create the file if it doesn’t exist, so by using nano or your favorite text editor, you should now get an empty configuration file:

      • sudo nano /etc/danted.conf

      Add the following contents:

      /etc/danted.conf

      logoutput: syslog
      user.privileged: root
      user.unprivileged: nobody
      
      # The listening network interface or address.
      internal: 0.0.0.0 port=1080
      
      # The proxying network interface or address.
      external: eth0
      
      # socks-rules determine what is proxied through the external interface.
      socksmethod: username
      
      # client-rules determine who can connect to the internal interface.
      clientmethod: none
      
      client pass {
          from: 0.0.0.0/0 to: 0.0.0.0/0
      }
      
      socks pass {
          from: 0.0.0.0/0 to: 0.0.0.0/0
      }
      

      You now have a usable SOCKS server configuration, running on port 1080, which is a common convention for SOCKS. You can also break down the rest of this configuration file line-by-line:

      • logoutput refers to how Dante will log connections, in this case using regular system logging
      • user.privileged allows Dante to use root permissions when it needs to check permissions
      • user.unprivileged runs the server itself as the unprivileged nobody user, since it does not need any additional permissions during normal operation
      • internal connection details specify the port that the service is running on and which IP addresses can connect
      • external connection details specify the network interface used for outbound connections, eth0 by default on most servers
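
      If you’re not sure which network interface name to use for external, you can list your server’s interfaces first; on some systems the public interface is named something other than eth0:

      • ip addr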

      The rest of the configuration details deal with authentication methods, which are discussed in the next section. Don’t forget to open port 1080 in your firewall if you’re using ufw:

      • sudo ufw allow 1080/tcp

      At this point, you could restart Dante and connect to it, but you would have a SOCKS server that’s open to the entire world, which you probably don’t want, so you’ll learn how to secure it first.

      Step 2 — Securing Dante

      If you followed this tutorial so far, Dante will be making use of regular Linux user accounts for authentication. This is helpful, but the password used for that connection will be sent in plain text, so it’s important to create a dedicated SOCKS user that won’t have any other login privileges. To do that, you’ll use useradd with flags that won’t assign a login shell to the user, then set a password:

      • sudo useradd -r -s /bin/false your_dante_user
      • sudo passwd your_dante_user

      You’ll also want to avoid logging into this account over an unsecured wireless connection or sharing the server too widely. Otherwise, malicious actors can and will make repeated efforts to log in.

      Dante supports other authentication methods, but many clients (i.e., applications) that will connect to SOCKS proxies only support basic username and password authentication, so you may want to leave that part as-is. What you can do as an alternative is to restrict access to only specific IP addresses. This isn’t the most sophisticated option, but given the combination of technologies in use here, it’s a sensible one. You may have already learned how to restrict access to specific IP addresses with ufw from our prerequisite tutorials, but you can also do it within Dante directly. Edit your /etc/danted.conf:

      • sudo nano /etc/danted.conf

      /etc/danted.conf

      …
      client pass {
          from: your_ip_address/32 to: 0.0.0.0/0
      }
      

      In order to support multiple IP addresses, you can use CIDR notation, or just add another client pass {} configuration block:

      /etc/danted.conf

      client pass {
          from: your_ip_address/32 to: 0.0.0.0/0
      }
      
      client pass {
          from: another_ip_address/32 to: 0.0.0.0/0
      }
      

      After that, you can finally restart Dante with your configuration changes.

      • sudo systemctl restart danted.service

      This time, when you check the service status, you should see it running without any errors:

      • systemctl status danted.service

      Output

      ● danted.service - SOCKS (v4 and v5) proxy daemon (danted)
           Loaded: loaded (/lib/systemd/system/danted.service; enabled; vendor preset: enable>
           Active: active (running) since Thu 2021-12-16 18:06:26 UTC; 24h ago

      In the next step, you’ll connect to your proxy at last.

      Step 3 — Connecting through Dante

      In order to demonstrate your Dante server, you’ll use a command line program called curl, which is popular for making different types of web requests. In general, if you want to verify whether a given connection should be working in a browser under ideal circumstances, you should always test first with curl. You’ll be using curl on your local machine in order to do this – it’s installed by default on all modern Windows, Mac, and Linux environments, so you can open any local shell to run this command:

      • curl -v -x socks5://your_dante_user:your_dante_password@your_server_ip:1080 http://www.google.com/

      Output

      * Trying 138.197.103.77...
      * TCP_NODELAY set
      * SOCKS5 communication to www.google.com:80
      * SOCKS5 connect to IPv4 142.250.189.228 (locally resolved)
      * SOCKS5 request granted.
      * Connected to 138.197.103.77 (138.197.103.77) port 1080 (#0)
      > GET / HTTP/1.1
      …

      The credentials that you used for curl should now work anywhere else you might want to use your new proxy server.
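
      For example, curl (among other tools) will also read an ALL_PROXY environment variable, so a quick way to reuse the proxy in a shell session, using the same placeholders, is:

      • export ALL_PROXY=socks5://your_dante_user:your_dante_password@your_server_ip:1080
      • curl -v http://www.google.com/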

      Conclusion

      In this tutorial, you learned to deploy a popular, open-source proxy server for routing traffic with little to no overhead. Many applications have built-in proxy support (often at the OS level) going back decades, making this proxy stack highly reusable.

      Next, you may want to learn how to deploy Squid, an HTTP proxy which can run alongside Dante for proxying different types of web traffic.

      Because one of the most common use cases for proxy servers is proxying traffic to and from different global regions, you may want to review how to use Ansible to automate server deployments next, in case you find yourself wanting to duplicate this configuration in other data centers.


