
      How To Install and Secure OpenFaaS Using Docker Swarm on Ubuntu 16.04


      Introduction

      Serverless architecture hides server instances from the developer and usually exposes an API that allows developers to run their applications in the cloud. This approach helps developers deploy applications quickly, as they can leave provisioning and maintaining instances to the appropriate DevOps teams. It also reduces infrastructure costs, since with the appropriate tooling you can scale your instances per demand.

      Applications that run on serverless platforms are called serverless functions. A function is containerized, executable code that’s used to perform specific operations. Containerizing applications ensures that you can reproduce a consistent environment on many machines, enabling updating and scaling.

      OpenFaaS is a free and open-source framework for building and hosting serverless functions. With official support for both Docker Swarm and Kubernetes, it lets you deploy your applications using the powerful API, command-line interface, or Web UI. It comes with built-in metrics provided by Prometheus and supports auto-scaling on demand, as well as scaling from zero.

      In this tutorial, you’ll set up and use OpenFaaS with Docker Swarm running on Ubuntu 16.04, and secure its Web UI and API by setting up Traefik with Let’s Encrypt. This ensures secure communication between nodes in the cluster, as well as between OpenFaaS and its operators.

      Prerequisites

      To follow this tutorial, you’ll need:

      • Ubuntu 16.04 running on your local machine. You can use other distributions and operating systems, but make sure you use the appropriate OpenFaaS scripts for your operating system and install all of the dependencies listed in these prerequisites.
      • git, curl, and jq installed on your local machine. You’ll use git to clone the OpenFaaS repository, curl to test the API, and jq to transform raw JSON responses from the API to human-readable JSON. To install the required dependencies for this setup, use the following commands: sudo apt-get update && sudo apt-get install git curl jq
      • Docker installed, following Steps 1 and 2 of How To Install and Use Docker on Ubuntu 16.04.
      • A Docker Hub account. To deploy functions to OpenFaaS, they will need to be published on a public container registry. We’ll use Docker Hub for this tutorial, since it’s both free and widely used. Be sure to authenticate with Docker on your local machine by using the docker login command.
      • Docker Machine installed, following How To Provision and Manage Remote Docker Hosts with Docker Machine on Ubuntu 16.04.
      • A DigitalOcean personal access token. To create a token, follow these instructions.
      • A Docker Swarm cluster of 3 nodes, provisioned by following How to Create a Cluster of Docker Containers with Docker Swarm and DigitalOcean on Ubuntu 16.04.
      • A fully registered domain name with an A record pointing to one of the instances in the Docker Swarm. Throughout the tutorial, you’ll see example.com as an example domain. You should replace this with your own domain, which you can either purchase on Namecheap, or get for free on Freenom. You can also use a different domain registrar of your choice.

      Step 1 — Downloading OpenFaaS and Installing the OpenFaaS CLI

      To deploy OpenFaaS to your Docker Swarm, you will need to download the deployment manifests and scripts. The easiest way to obtain them is to clone the official OpenFaaS repository and check out the appropriate tag, which represents an OpenFaaS release.

      In addition to cloning the repository, you’ll also install the FaaS CLI, a powerful command-line utility that you can use to manage and deploy new functions from your terminal. It provides templates for creating your own functions in most major programming languages. In Step 7, you’ll use it to create a Python function and deploy it on OpenFaaS.

      For this tutorial, you’ll deploy OpenFaaS v0.8.9. While the steps for deploying other versions should be similar, make sure to check out the project changelog to ensure there are no breaking changes.

      First, navigate to your home directory and run the following command to clone the repository to the ~/faas directory:

      • cd ~
      • git clone https://github.com/openfaas/faas.git

      Navigate to the newly-created ~/faas directory:

      • cd ~/faas

      When you clone the repository, you'll get files from the master branch that contain the latest changes. Because breaking changes can get into the master branch, it's not recommended for use in production. Instead, let's check out the 0.8.9 tag:

      • git checkout 0.8.9

      The output contains a message about the successful checkout and a warning about committing changes to this branch:

      Output

      Note: checking out '0.8.9'.

      You are in 'detached HEAD' state. You can look around, make experimental
      changes and commit them, and you can discard any commits you make in this
      state without impacting any branches by performing another checkout.

      If you want to create a new branch to retain commits you create, you may
      do so (now or later) by using -b with the checkout command again. Example:

        git checkout -b <new-branch-name>

      HEAD is now at 8f0d2d1 Expose scale-function endpoint

      If you see any errors, make sure to resolve them by following the on-screen instructions before continuing.

      With the OpenFaaS repository downloaded, complete with the necessary manifest files, let's proceed to installing the FaaS CLI.

      The easiest way to install the FaaS CLI is to use the official script. In your terminal, navigate to your home directory and download the script using the following command:

      • cd ~
      • curl -sSL -o faas-cli.sh https://cli.openfaas.com

      This will download the faas-cli.sh script to your home directory. Before executing the script, it's a good idea to check the contents:

      • less faas-cli.sh

      You can exit the preview by pressing q. Once you have verified the content of the script, you can proceed with the installation by giving the script executable permissions and executing it. Execute the script as root so the binary is automatically copied to your PATH:

      • chmod +x faas-cli.sh
      • sudo ./faas-cli.sh

      The output contains information about the installation progress and the CLI version that you've installed:

      Output

      x86_64
      Downloading package https://github.com/openfaas/faas-cli/releases/download/0.6.17/faas-cli as /tmp/faas-cli
      Download complete.
      Running as root - Attempting to move faas-cli to /usr/local/bin
      New version of faas-cli installed to /usr/local/bin
      Creating alias 'faas' for 'faas-cli'.

      CLI:
       commit:  b5597294da6dd98457434fafe39054c993a5f7e7
       version: 0.6.17

      If you see an error, make sure to resolve it by following the on-screen instructions before continuing with the tutorial.

      At this point, you have the FaaS CLI installed. To learn more about commands you can use, execute the CLI without any arguments:

      • faas-cli

      The output shows available commands and flags:

      Output

      Manage your OpenFaaS functions from the command line

      Usage:
        faas-cli [flags]
        faas-cli [command]

      Available Commands:
        build       Builds OpenFaaS function containers
        cloud       OpenFaaS Cloud commands
        deploy      Deploy OpenFaaS functions
        help        Help about any command
        invoke      Invoke an OpenFaaS function
        list        List OpenFaaS functions
        login       Log in to OpenFaaS gateway
        logout      Log out from OpenFaaS gateway
        new         Create a new template in the current folder with the name given as name
        push        Push OpenFaaS functions to remote registry (Docker Hub)
        remove      Remove deployed OpenFaaS functions
        store       OpenFaaS store commands
        template    Downloads templates from the specified github repo
        version     Display the clients version information

      Flags:
            --filter string   Wildcard to match with function names in YAML file
        -h, --help            help for faas-cli
            --regex string    Regex to match with function names in YAML file
        -f, --yaml string     Path to YAML file describing function(s)

      Use "faas-cli [command] --help" for more information about a command.

      You have now successfully obtained the OpenFaaS manifests and installed the FaaS CLI, which you can use to manage your OpenFaaS instance from your terminal.

      The ~/faas directory contains files from the 0.8.9 release, which means you can now deploy OpenFaaS to your Docker Swarm. Before doing so, let's modify the deployment manifest file to include Traefik, which will secure your OpenFaaS setup by setting up Let's Encrypt.

      Step 2 — Configuring Traefik

      Traefik is a Docker-aware reverse proxy that comes with SSL support provided by Let's Encrypt. SSL protocol ensures that you communicate with the Swarm cluster securely by encrypting the data you send and receive between nodes.

      To use Traefik with OpenFaaS, you need to modify the OpenFaaS deployment manifest to include Traefik and tell OpenFaaS to use Traefik instead of directly exposing its services to the internet.

      Navigate back to the ~/faas directory and open the OpenFaaS deployment manifest in a text editor:

      • cd ~/faas
      • nano ~/faas/docker-compose.yml

      Note: The Docker Compose manifest file uses YAML formatting, which strictly forbids tabs and requires two spaces for indentation. The manifest will fail to deploy if the file is incorrectly formatted.
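
      As a quick sanity check before deploying, you can search the manifest for tab characters. The sketch below demonstrates the idea on a throwaway file (the /tmp/sample.yml path and its contents are made up for illustration); point the grep at ~/faas/docker-compose.yml to check the real manifest:

```shell
# Create a sample manifest that deliberately contains a tab character.
printf 'services:\n\ttraefik:\n' > /tmp/sample.yml

# grep prints any line containing a tab and exits non-zero when none are found.
grep -n "$(printf '\t')" /tmp/sample.yml && echo "tabs found" || echo "no tabs"
```

      If this prints "no tabs" against your manifest, the indentation is at least free of tab characters.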

      The OpenFaaS deployment consists of several services, defined under the services directive, that provide the dependencies needed to run OpenFaaS: the OpenFaaS API and Web UI, and Prometheus and AlertManager (for handling metrics).

      At the beginning of the services section, add a new service called traefik, which uses the traefik:v1.6 image for the deployment:

      ~/faas/docker-compose.yml

      version: "3.3"
      services:
          traefik:
              image: traefik:v1.6
          gateway:
               ...
      

      The Traefik image comes from the Traefik Docker Hub repository, where you can find a list of all available images.

      Next, let's instruct Docker to run Traefik using the command directive. This will run Traefik, configure it to work with Docker Swarm, and provide SSL using Let's Encrypt. The following flags will configure Traefik:

      • --docker.*: These flags tell Traefik to use Docker and specify that it's running in a Docker Swarm cluster.
      • --web=true: This flag enables Traefik's Web UI.
      • --defaultEntryPoints and --entryPoints: These flags define entry points and protocols to be used. In our case this includes HTTP on port 80 and HTTPS on port 443.
      • --acme.*: These flags tell Traefik to use ACME to generate Let's Encrypt certificates to secure your OpenFaaS cluster with SSL.

      Make sure to replace the example.com domain placeholders in the --acme.domains and --acme.email flags with the domain you're going to use to access OpenFaaS. You can specify multiple domains by separating them with a comma and space. The email address is for SSL notifications and alerts, including certificate expiry alerts. In this case, Traefik will handle renewing certificates automatically, so you can ignore expiry alerts.

      Add the following block of code below the image directive, and above gateway:

      ~/faas/docker-compose.yml

      ...
          traefik:
              image: traefik:v1.6
              command: -c --docker=true
                  --docker.swarmmode=true
                  --docker.domain=traefik
                  --docker.watch=true
                  --web=true
                  --defaultEntryPoints='http,https'
                  --entryPoints='Name:https Address::443 TLS'
                  --entryPoints='Name:http Address::80'
                  --acme=true
                  --acme.entrypoint='https'
                  --acme.httpchallenge=true
                  --acme.httpchallenge.entrypoint='http'
                  --acme.domains='example.com, www.example.com'
                  --acme.email='sammy@example.com'
                  --acme.ondemand=true
                  --acme.onhostrule=true
                  --acme.storage=/etc/traefik/acme/acme.json
      ...
      

      With the command directive in place, let's tell Traefik what ports to expose to the internet. Traefik uses port 8080 for its operations, while OpenFaaS will use port 80 for non-secure communication and port 443 for secure communication.

      Add the following ports directive below the command directive. The port-internet:port-docker notation ensures that the port on the left side is exposed by Traefik to the internet and maps to the container's port on the right side:

      ~/faas/docker-compose.yml

              ...
              command:
                  ...
              ports:
                  - 80:80
                  - 8080:8080
                  - 443:443
              ...
      

      Next, using the volumes directive, mount the Docker socket file from the host running Docker into the Traefik container. Traefik uses the socket to communicate with the Docker API in order to manage your containers and get details about them, such as the number of containers and their IP addresses. You will also mount the volume called acme, which we'll define later in this step.

      The networks directive instructs Traefik to use the functions network, which is deployed along with OpenFaaS. This network ensures that functions can communicate with other parts of the system, including the API.

      The deploy directive instructs Docker to run Traefik only on the Docker Swarm manager node.

      Add the following directives below the ports directive:

      ~/faas/docker-compose.yml

              ...
              volumes:
                  - "/var/run/docker.sock:/var/run/docker.sock"
                  - "acme:/etc/traefik/acme"
              networks:
                  - functions
              deploy:
                  placement:
                      constraints: [node.role == manager]
      

      At this point, the traefik service block should look like this:

      ~/faas/docker-compose.yml

      version: "3.3"
      services:
          traefik:
              image: traefik:v1.6
              command: -c --docker=true
                  --docker.swarmmode=true
                  --docker.domain=traefik
                  --docker.watch=true
                  --web=true
                  --defaultEntryPoints='http,https'
                  --entryPoints='Name:https Address::443 TLS'
                  --entryPoints='Name:http Address::80'            
                  --acme=true
                  --acme.entrypoint='https'
                  --acme.httpchallenge=true
                  --acme.httpchallenge.entrypoint='http'
                  --acme.domains='example.com, www.example.com'
                  --acme.email='sammy@example.com'
                  --acme.ondemand=true
                  --acme.onhostrule=true
                  --acme.storage=/etc/traefik/acme/acme.json
              ports:
                  - 80:80
                  - 8080:8080
                  - 443:443
              volumes:
                  - "/var/run/docker.sock:/var/run/docker.sock"
                  - "acme:/etc/traefik/acme"
              networks:
                - functions
              deploy:
                placement:
                  constraints: [node.role == manager]
      
          gateway:
              ...
      

      While this configuration ensures that Traefik will be deployed with OpenFaaS, you also need to configure OpenFaaS to work with Traefik. By default, the gateway service is configured to run on port 8080, which overlaps with Traefik.

      The gateway service provides the API gateway you can use to deploy, run, and manage your functions. It handles metrics (via Prometheus) and auto-scaling, and hosts the Web UI.

      Our goal is to expose the gateway service using Traefik instead of exposing it directly to the internet.

      Locate the gateway service, which should look like this:

      ~/faas/docker-compose.yml

      ...
          gateway:
              ports:
                  - 8080:8080
              image: openfaas/gateway:0.8.7
              networks:
                  - functions
              environment:
                  functions_provider_url: "http://faas-swarm:8080/"
                  read_timeout:  "300s"        # Maximum time to read HTTP request
                  write_timeout: "300s"        # Maximum time to write HTTP response
                  upstream_timeout: "300s"     # Maximum duration of upstream function call - should be more than read_timeout and write_timeout
                  dnsrr: "true"               # Temporarily use dnsrr in place of VIP while issue persists on PWD
                  faas_nats_address: "nats"
                  faas_nats_port: 4222
                  direct_functions: "true"    # Functions are invoked directly over the overlay network
                  direct_functions_suffix: ""
                  basic_auth: "${BASIC_AUTH:-true}"
                  secret_mount_path: "/run/secrets/"
                  scale_from_zero: "false"
              deploy:
                  resources:
                      # limits:   # Enable if you want to limit memory usage
                      #     memory: 200M
                      reservations:
                          memory: 100M
                  restart_policy:
                      condition: on-failure
                      delay: 5s
                      max_attempts: 20
                      window: 380s
                  placement:
                      constraints:
                          - 'node.platform.os == linux'
              secrets:
                  - basic-auth-user
                  - basic-auth-password
      ...
      

      Remove the ports directive from the service to avoid exposing the gateway service directly.

      Next, add the following labels directive to the deploy section of the gateway service. This directive exposes the /ui, /system, and /function endpoints on port 8080 over Traefik:

      ~/faas/docker-compose.yml

              ...
              deploy:
                  labels:
                      - traefik.port=8080
                      - traefik.frontend.rule=PathPrefix:/ui,/system,/function
                  resources:
                  ...            
      

      The /ui endpoint exposes the OpenFaaS Web UI, which is covered in Step 6 of this tutorial. The /system endpoint is the API endpoint used to manage OpenFaaS, while the /function endpoint exposes the API endpoints for managing and running functions. Step 5 of this tutorial covers the OpenFaaS API in detail.

      After modifications, your gateway service should look like this:

      ~/faas/docker-compose.yml

      ...
          gateway:       
              image: openfaas/gateway:0.8.7
              networks:
                  - functions
              environment:
                  functions_provider_url: "http://faas-swarm:8080/"
                  read_timeout:  "300s"        # Maximum time to read HTTP request
                  write_timeout: "300s"        # Maximum time to write HTTP response
                  upstream_timeout: "300s"     # Maximum duration of upstream function call - should be more than read_timeout and write_timeout
                  dnsrr: "true"               # Temporarily use dnsrr in place of VIP while issue persists on PWD
                  faas_nats_address: "nats"
                  faas_nats_port: 4222
                  direct_functions: "true"    # Functions are invoked directly over the overlay network
                  direct_functions_suffix: ""
                  basic_auth: "${BASIC_AUTH:-true}"
                  secret_mount_path: "/run/secrets/"
                  scale_from_zero: "false"
              deploy:
                  labels:
                      - traefik.port=8080
                      - traefik.frontend.rule=PathPrefix:/ui,/system,/function
                  resources:
                      # limits:   # Enable if you want to limit memory usage
                      #     memory: 200M
                      reservations:
                          memory: 100M
                  restart_policy:
                      condition: on-failure
                      delay: 5s
                      max_attempts: 20
                      window: 380s
                  placement:
                      constraints:
                          - 'node.platform.os == linux'
              secrets:
                  - basic-auth-user
                  - basic-auth-password
      ...
      

      Finally, let's define the acme volume used for storing Let's Encrypt certificates. We can define an empty volume, meaning the data will not persist if you destroy the container; in that case, the certificates will simply be regenerated the next time you start Traefik.

      Add the following volumes directive on the last line of the file:

      ~/faas/docker-compose.yml

      ...
      volumes:
          acme:
      

      Once you're done, save the file and close your text editor. At this point, you've configured Traefik to protect your OpenFaaS deployment and Docker Swarm. Now you're ready to deploy it along with OpenFaaS on your Swarm cluster.

      Step 3 — Deploying OpenFaaS

      Now that you have prepared the OpenFaaS deployment manifest, you're ready to deploy it and start using OpenFaaS. To deploy, you'll use the deploy_stack.sh script. This script is meant to be used on Linux and macOS operating systems, but in the OpenFaaS directory you can also find appropriate scripts for Windows and ARM systems.

      Before deploying OpenFaaS, you will need to instruct docker-machine to execute Docker commands from the script on one of the machines in the Swarm. For this tutorial, let's use the Swarm manager.

      If you have the docker-machine use command configured, you can use it:

      • docker-machine use node-1

      If not, use the following command:

      • eval $(docker-machine env node-1)

      The deploy_stack.sh script deploys all of the resources required for OpenFaaS to work as expected, including configuration files, network settings, services, and credentials for authorization with the OpenFaaS server.

      Let's execute the script, which will take several minutes to finish deploying:

      • ./deploy_stack.sh

      The output shows a list of resources that are created in the deployment process, as well as the credentials you will use to access the OpenFaaS server and the FaaS CLI command.

      Write down these credentials, as you will need them throughout the tutorial to access the Web UI and the API:

      Output

      Attempting to create credentials for gateway..
      roozmk0y1jkn17372a8v9y63g
      q1odtpij3pbqrmmf8msy3ampl
      [Credentials]
       username: admin
       password: your_openfaas_password
       echo -n your_openfaas_password | faas-cli login --username=admin --password-stdin

      Enabling basic authentication for gateway..

      Deploying OpenFaaS core services
      Creating network func_functions
      Creating config func_alertmanager_config
      Creating config func_prometheus_config
      Creating config func_prometheus_rules
      Creating service func_alertmanager
      Creating service func_traefik
      Creating service func_gateway
      Creating service func_faas-swarm
      Creating service func_nats
      Creating service func_queue-worker
      Creating service func_prometheus

      If you see any errors, follow the on-screen instructions to resolve them before continuing the tutorial.
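
      To avoid retyping the credentials in later commands, you can keep them in shell variables for the rest of your session. This is a convenience sketch rather than part of the original setup; both values below are placeholders to replace with your own domain and the password printed by deploy_stack.sh:

```shell
# Placeholder values -- substitute your own domain and generated password.
export OPENFAAS_URL="https://example.com"
export OPENFAAS_PASSWORD="your_openfaas_password"

# Later commands can then reference the variables instead of literals, e.g.:
#   echo -n "$OPENFAAS_PASSWORD" | faas-cli login --username=admin --password-stdin --gateway "$OPENFAAS_URL"
#   curl -u admin:"$OPENFAAS_PASSWORD" "$OPENFAAS_URL/system/functions"
echo "Gateway set to $OPENFAAS_URL"
```

      Note that exported variables last only for the current shell session.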

      Before continuing, let's authenticate the FaaS CLI with the OpenFaaS server using the command provided by the deployment script.

      The script's output includes the flags you need to provide to the command, but you will need to add one additional flag, --gateway, with the address of your OpenFaaS server, as the FaaS CLI assumes the gateway server is running on localhost:

      • echo -n your_openfaas_password | faas-cli login --username=admin --password-stdin --gateway https://example.com

      The output contains a message about successful authorization:

      Output

      Calling the OpenFaaS server to validate the credentials...
      credentials saved for admin https://example.com

      At this point, you have a fully-functional OpenFaaS server deployed on your Docker Swarm cluster, as well as the FaaS CLI configured to use your newly deployed server. Before testing how to use OpenFaaS, let's deploy some sample functions to get started.

      Step 4 — Deploying OpenFaaS Sample Functions

      Initially, OpenFaaS comes without any functions deployed. To start testing and using it, you will need some functions.

      The OpenFaaS project hosts some sample functions, and you can find a list of available functions along with their deployment manifests in the OpenFaaS repository. Some of the sample functions include nodeinfo, which shows information about the node where a function is running; wordcount, which counts the number of words in a request; and markdown, which converts Markdown input to HTML output.

      The stack.yml manifest in the ~/faas directory deploys several sample functions along with the functions mentioned above. You can deploy it using the FaaS CLI.

      Run the following faas-cli command, which takes the path to the stack manifest and the address of your OpenFaaS server:

      • faas-cli deploy -f ~/faas/stack.yml --gateway https://example.com

      The output contains status codes and messages indicating whether or not the deployment was successful:

      Output

      Deploying: wordcount.
      Deployed. 200 OK.
      URL: https://example.com/function/wordcount

      Deploying: base64.
      Deployed. 200 OK.
      URL: https://example.com/function/base64

      Deploying: markdown.
      Deployed. 200 OK.
      URL: https://example.com/function/markdown

      Deploying: hubstats.
      Deployed. 200 OK.
      URL: https://example.com/function/hubstats

      Deploying: nodeinfo.
      Deployed. 200 OK.
      URL: https://example.com/function/nodeinfo

      Deploying: echoit.
      Deployed. 200 OK.
      URL: https://example.com/function/echoit

      If you see any errors, make sure to resolve them by following the on-screen instructions.

      Once the stack deployment is done, list all of the functions to make sure they're deployed and ready to be used:

      • faas-cli list --gateway https://example.com

      The output contains a list of functions, along with their replica numbers and an invocations count:

      Output

      Function                  Invocations     Replicas
      markdown                  0               1
      wordcount                 0               1
      base64                    0               1
      nodeinfo                  0               1
      hubstats                  0               1
      echoit                    0               1

      If you don't see your functions here, make sure the faas-cli deploy command executed successfully.

      You can now use the sample OpenFaaS functions to test and demonstrate how to use the API, Web UI, and CLI. In the next step, you'll start by using the OpenFaaS API to list and run functions.

      Step 5 — Using the OpenFaaS API

      OpenFaaS comes with a powerful API that you can use to manage and execute your serverless functions. Let's use Swagger, a tool for architecting, testing, and documenting APIs, to browse the API documentation, and then use the API to list and run functions.

      With Swagger, you can inspect the API documentation to find out what endpoints are available and how you can use them. In the OpenFaaS repository, you can find the Swagger API specification, which can be used with the Swagger editor to convert the specification to human-readable form.

      Navigate your web browser to http://editor.swagger.io/. You should be welcomed with the following screen:

      Swagger Editor Welcome page

      Here you'll find a text editor containing the source code for the sample Swagger specification, and the human-readable API documentation on the right.

      Let's import the OpenFaaS Swagger specification. In the top menu, click on the File button, and then on Import URL:

      Swagger Editor Import URL

      You'll see a pop-up, where you need to enter the address of the Swagger API specification. If you don't see the pop-up, make sure pop-ups are enabled for your web browser.

      In the field, enter the link to the Swagger OpenFaaS API specification: https://raw.githubusercontent.com/openfaas/faas/master/api-docs/swagger.yml

      Swagger Editor Input URL

      After clicking on the OK button, the Swagger editor will show you the API reference for OpenFaaS, which should look like this:

      Swagger Editor OpenFaaS API specification

      On the left side you can see the source of the API reference file, while on the right side you can see a list of endpoints, along with short descriptions. Clicking on an endpoint shows you more details about it, including what parameters it takes, what method it uses, and possible responses:

      Swagger Editor Endpoint details

      Once you know what endpoints are available and what parameters they expect, you can use them to manage your functions.

      Next, you'll use a curl command to communicate with the API, so navigate back to your terminal. With the -u flag, you will be able to pass the admin:your_openfaas_password pair that you got in Step 3, while the -X flag will define the request method. You will also pass your endpoint URL, https://example.com/system/functions:

      • curl -u admin:your_openfaas_password -X GET https://example.com/system/functions

      You can see the required method for each endpoint in the API docs.

      In Step 4, you deployed several sample functions, which should appear in the output:

      Output

      [{"name":"base64","image":"functions/alpine:latest","invocationCount":0,"replicas":1,"envProcess":"base64","availableReplicas":0,"labels":{"com.openfaas.function":"base64","function":"true"}},{"name":"nodeinfo","image":"functions/nodeinfo:latest","invocationCount":0,"replicas":1,"envProcess":"","availableReplicas":0,"labels":{"com.openfaas.function":"nodeinfo","function":"true"}},{"name":"hubstats","image":"functions/hubstats:latest","invocationCount":0,"replicas":1,"envProcess":"","availableReplicas":0,"labels":{"com.openfaas.function":"hubstats","function":"true"}},{"name":"markdown","image":"functions/markdown-render:latest","invocationCount":0,"replicas":1,"envProcess":"","availableReplicas":0,"labels":{"com.openfaas.function":"markdown","function":"true"}},{"name":"echoit","image":"functions/alpine:latest","invocationCount":0,"replicas":1,"envProcess":"cat","availableReplicas":0,"labels":{"com.openfaas.function":"echoit","function":"true"}},{"name":"wordcount","image":"functions/alpine:latest","invocationCount":0,"replicas":1,"envProcess":"wc","availableReplicas":0,"labels":{"com.openfaas.function":"wordcount","function":"true"}}]

      If you don't see output that looks like this, or if you see an error, follow the on-screen instructions to resolve the problem before continuing with the tutorial. Make sure you're sending the request to the correct endpoint using the recommended method and the right credentials. You can also check the logs for the gateway service using the following command:

      • docker service logs func_gateway

      By default, the API response to the curl call returns raw JSON without new lines, which is not human-readable. To parse it, pipe curl's response to the jq utility, which will convert the JSON to human-readable form:

      • curl -u admin:your_openfaas_password -X GET https://example.com/system/functions | jq

      The output is now in human-readable form. You can see the function name, which you can use to manage and invoke functions with the API, the number of invocations, as well as information such as labels and number of replicas, relevant to Docker:

      Output

      [
        {
          "name": "base64",
          "image": "functions/alpine:latest",
          "invocationCount": 0,
          "replicas": 1,
          "envProcess": "base64",
          "availableReplicas": 0,
          "labels": {
            "com.openfaas.function": "base64",
            "function": "true"
          }
        },
        {
          "name": "nodeinfo",
          "image": "functions/nodeinfo:latest",
          "invocationCount": 0,
          "replicas": 1,
          "envProcess": "",
          "availableReplicas": 0,
          "labels": {
            "com.openfaas.function": "nodeinfo",
            "function": "true"
          }
        },
        {
          "name": "hubstats",
          "image": "functions/hubstats:latest",
          "invocationCount": 0,
          "replicas": 1,
          "envProcess": "",
          "availableReplicas": 0,
          "labels": {
            "com.openfaas.function": "hubstats",
            "function": "true"
          }
        },
        {
          "name": "markdown",
          "image": "functions/markdown-render:latest",
          "invocationCount": 0,
          "replicas": 1,
          "envProcess": "",
          "availableReplicas": 0,
          "labels": {
            "com.openfaas.function": "markdown",
            "function": "true"
          }
        },
        {
          "name": "echoit",
          "image": "functions/alpine:latest",
          "invocationCount": 0,
          "replicas": 1,
          "envProcess": "cat",
          "availableReplicas": 0,
          "labels": {
            "com.openfaas.function": "echoit",
            "function": "true"
          }
        },
        {
          "name": "wordcount",
          "image": "functions/alpine:latest",
          "invocationCount": 0,
          "replicas": 1,
          "envProcess": "wc",
          "availableReplicas": 0,
          "labels": {
            "com.openfaas.function": "wordcount",
            "function": "true"
          }
        }
      ]
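      If you're consuming the API from a script rather than with jq, the same reshaping is straightforward in Python. The following sketch parses a trimmed-down copy of the response (two of the six entries, embedded as a literal so the example runs without a live gateway) and prints each function's name, image, and replica count:

```python
import json

# A trimmed copy of the GET /system/functions response,
# embedded as a literal so this sketch runs without a live gateway.
raw = (
    '[{"name":"base64","image":"functions/alpine:latest","replicas":1},'
    '{"name":"echoit","image":"functions/alpine:latest","replicas":1}]'
)

functions = json.loads(raw)
for fn in functions:
    print(f"{fn['name']}: image={fn['image']}, replicas={fn['replicas']}")
# prints:
# base64: image=functions/alpine:latest, replicas=1
# echoit: image=functions/alpine:latest, replicas=1
```

      Against a live gateway you would fetch the JSON with the same authenticated GET request shown above and feed the response body to json.loads.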

      Let's take one of these functions and execute it, using the API's /function/function-name endpoint. This endpoint accepts POST requests, and curl's -d flag lets you send a request body to the function.

      For example, let's run the following curl command to execute the echoit function, which comes with OpenFaaS out of the box and outputs the string you've sent it as a request. You can use the string "Sammy The Shark" to demonstrate:

      • curl -u admin:your_openfaas_password -X POST https://example.com/function/func_echoit -d "Sammy The Shark"

      The output will show you Sammy The Shark:

      Output

      Sammy The Shark

      If you see an error, follow the on-screen instructions to resolve the problem before continuing with the tutorial. You can also check the gateway service's logs.

      At this point, you've used the OpenFaaS API to manage and execute your functions. Let's now take a look at the OpenFaaS Web UI.
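      As the function list above shows, each of these stock functions wraps a small Unix process named in its envProcess field (cat for echoit, wc for wordcount). The OpenFaaS watchdog pipes the request body to that process's standard input and returns its standard output as the HTTP response. A rough local illustration of that behavior (not the watchdog itself, just the piping pattern):

```python
import subprocess

def invoke_like_watchdog(env_process: str, request_body: str) -> str:
    """Illustrate the watchdog pattern: pipe the request body to the
    wrapped process's stdin and return its stdout as the response."""
    result = subprocess.run(
        env_process.split(),
        input=request_body,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

# echoit wraps "cat", so the request body comes straight back:
print(invoke_like_watchdog("cat", "Sammy The Shark"))
# prints: Sammy The Shark
```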

      Step 6 — Using the OpenFaaS Web UI

      OpenFaaS comes with a Web UI that you can use to add new and execute installed functions. In this step, you will install a function for generating QR Codes from the FaaS Store and generate a sample code.

      To begin, point your web browser to https://example.com/ui/. Note that the trailing slash is required to avoid a "not found" error.

      In the HTTP authentication dialogue box, enter the username and password you got when deploying OpenFaaS in Step 3.

      Once logged in, you will see available functions on the left side of the screen, along with the Deploy New Functions button used to install new functions.

      Click on Deploy New Functions to deploy a new function. You will see the FaaS Store window, which provides community-tested functions that you can install with a single click:

      OpenFaaS Functions store

      In addition to these functions, you can also deploy functions manually from a Docker image.

      For this tutorial, you will deploy the QR Code Generator function from the FaaS Store. Locate the QR Code Generator - Go item in the list, click on it, and then click the Deploy button at the bottom of the window:

      OpenFaaS QR Code Generator function

      After clicking Deploy, the Deploy A New Function window will close and the function will be deployed. In the list at the left side of the window you will see a listing for the qrcode-go function. Click on this entry to select it. The main function window will show the function name, number of replicas, invocation count, and image, along with the option to invoke the function:

      OpenFaaS QR Code Function

      Let's generate a QR code containing the URL with your domain. In the Request body field, type the content of the QR code you'd like to generate; in our case, this will be "example.com". Once you're done, click the Invoke button.

      When you select either the Text or JSON output option, the function returns the raw content of the generated image file, which is not human-readable:

      OpenFaaS generated QR code

      You can also download the response, which in our case will be a PNG file with the QR code. To do this, select the Download option, and then click Invoke once again. Shortly after, you should have the QR code downloaded, which you can open with the image viewer of your choice:

      Generated QR code

      In addition to deploying functions from the FaaS store or from Docker images, you can also create your own functions. In the next step, you will create a Python function using the FaaS command-line interface.

      Step 7 — Creating Functions With the FaaS CLI

      In the previous steps, you configured the FaaS CLI to work with your OpenFaaS server. The FaaS CLI is a command-line interface that you can use to manage OpenFaaS and install and run functions, just like you would over the API or using the Web UI.

      Compared to the Web UI or the API, the FaaS CLI has templates for many programming languages that you can use to create your own functions. It can also build container images based on your function code and push images to an image registry, such as Docker Hub.

      In this step, you will create a function, publish it to Docker Hub, and then run it on your OpenFaaS server. This function will be similar to the default echoit function, which returns input passed as a request.

      We will use Python to write our function. If you want to learn more about Python, you can check out our How To Code in Python 3 tutorial series and our How To Code in Python eBook.

      Before creating the new function, let's create a directory to store FaaS functions and navigate to it:

      • mkdir ~/faas-functions
      • cd ~/faas-functions

      Execute the following command to create a new Python function called echo-input. Make sure to replace your-docker-hub-username with your Docker Hub username, as you'll push the function to Docker Hub later:

      • faas-cli new echo-input --lang python --prefix your-docker-hub-username --gateway https://example.com

      The output confirms that the function was created successfully. If you don't have the templates downloaded already, the CLI will fetch them into your current directory:

      Output

      2018/05/13 12:13:06 No templates found in current directory.
      2018/05/13 12:13:06 Attempting to expand templates from https://github.com/openfaas/templates.git
      2018/05/13 12:13:11 Fetched 12 template(s) : [csharp dockerfile go go-armhf node node-arm64 node-armhf python python-armhf python3 python3-armhf ruby] from https://github.com/openfaas/templates.git
      Folder: echo-input created.
        ___                   _____           ____
       / _ \ _ __   ___ _ __ |  ___|_ _  __ _/ ___|
      | | | | '_ \ / _ \ '_ \| |_ / _` |/ _` \___ \
      | |_| | |_) |  __/ | | |  _| (_| | (_| |___) |
       \___/| .__/ \___|_| |_|_|  \__,_|\__,_|____/
            |_|
      Function created in folder: echo-input
      Stack file written: echo-input.yml

      The result of the faas-cli new command is a newly-created ~/faas-functions/echo-input directory containing the function's code and the echo-input.yml file. This file includes information about your function: what language it's in, its name, and the server you will deploy it on.
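      If you open echo-input.yml, you should see a stack file similar to the following sketch (exact fields can vary between faas-cli versions, and the image prefix will be your Docker Hub username):

```yaml
provider:
  name: faas
  gateway: https://example.com

functions:
  echo-input:
    lang: python
    handler: ./echo-input
    image: your-docker-hub-username/echo-input
```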

      Navigate to the ~/faas-functions/echo-input directory:

      • cd ~/faas-functions/echo-input

      To see the contents of the directory, execute:

      • ls

      The directory contains two files: handler.py, which contains the code for your function, and requirements.txt, which contains the Python modules required by the function.

      Since we don't currently require any non-default Python modules, the requirements.txt file is empty. You can verify that with the cat command:

      • cat requirements.txt

      Next, let's write a function that will return a request as a string.

      The handler.py file already has the sample handler code, which returns a received response as a string. Let's take a look at the code:

      The default function is called handle and takes a single parameter, req, that contains a request that's passed to the function when it's invoked. The function does only one thing, returning the passed request back as the response:

      def handle(req):
          """handle a request to the function
          Args:
              req (str): request body
          """
      
          return req
      

      Let's modify it to include additional text, replacing the string in the return directive as follows:

          return "Received message: " + req
      

      Once you're done, save the file and close your text editor.
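      Because the handler is plain Python with no OpenFaaS-specific dependencies, you can sanity-check the logic locally before building an image. This sketch simply duplicates the modified handler and calls it:

```python
def handle(req):
    """Handle a request to the function (same logic as the modified handler.py)."""
    return "Received message: " + req

print(handle("Sammy The Shark!"))
# prints: Received message: Sammy The Shark!
```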

      Next, let's build a Docker image from the function's source code. Navigate to the faas-functions directory where the echo-input.yml file is located:

      • cd ~/faas-functions

      The following command builds the Docker image for your function:

      • faas-cli build -f echo-input.yml

      The output contains information about the build progress:

      Output

      [0] > Building echo-input.
      Clearing temporary build folder: ./build/echo-input/
      Preparing ./echo-input/ ./build/echo-input/function
      Building: sammy/echo-input with python template. Please wait..
      Sending build context to Docker daemon  7.168kB
      Step 1/16 : FROM python:2.7-alpine
       ---> 5fdd069daf25
      Step 2/16 : RUN apk --no-cache add curl && echo "Pulling watchdog binary from Github." && curl -sSL https://github.com/openfaas/faas/releases/download/0.8.0/fwatchdog > /usr/bin/fwatchdog && chmod +x /usr/bin/fwatchdog && apk del curl --no-cache
       ---> Using cache
       ---> 247d4772623a
      Step 3/16 : WORKDIR /root/
       ---> Using cache
       ---> 532cc683d67b
      Step 4/16 : COPY index.py .
       ---> Using cache
       ---> b4b512152257
      Step 5/16 : COPY requirements.txt .
       ---> Using cache
       ---> 3f9cbb311ab4
      Step 6/16 : RUN pip install -r requirements.txt
       ---> Using cache
       ---> dd7415c792b1
      Step 7/16 : RUN mkdir -p function
       ---> Using cache
       ---> 96c25051cefc
      Step 8/16 : RUN touch ./function/__init__.py
       ---> Using cache
       ---> 77a9db274e32
      Step 9/16 : WORKDIR /root/function/
       ---> Using cache
       ---> 88a876eca9e3
      Step 10/16 : COPY function/requirements.txt .
       ---> Using cache
       ---> f9ba5effdc5a
      Step 11/16 : RUN pip install -r requirements.txt
       ---> Using cache
       ---> 394a1dd9e4d7
      Step 12/16 : WORKDIR /root/
       ---> Using cache
       ---> 5a5893c25b65
      Step 13/16 : COPY function function
       ---> eeddfa67018d
      Step 14/16 : ENV fprocess="python index.py"
       ---> Running in 8e53df4583f2
      Removing intermediate container 8e53df4583f2
       ---> fb5086bc7f6c
      Step 15/16 : HEALTHCHECK --interval=1s CMD [ -e /tmp/.lock ] || exit 1
       ---> Running in b38681a71378
      Removing intermediate container b38681a71378
       ---> b04c045b0994
      Step 16/16 : CMD ["fwatchdog"]
       ---> Running in c5a11078df3d
      Removing intermediate container c5a11078df3d
       ---> bc5f08157c5a
      Successfully built bc5f08157c5a
      Successfully tagged sammy/echo-input:latest
      Image: your-docker-hub-username/echo-input built.
      [0] < Building echo-input done.
      [0] worker done.

      If you get an error, make sure to resolve it by following the on-screen instructions before deploying the function.

      Next, you'll push the image you built to a container registry so that your OpenFaaS server can pull and deploy it. Publishing the image to a registry also makes it easy to reproduce, scale, and update your function on any node in the cluster.

      For this tutorial, we'll use Docker Hub, as it's a free solution, but you can use any container registry, including your own private registry.

      Run the following command to push the image you built to your specified repository on Docker Hub:

      • faas-cli push -f echo-input.yml

      Pushing will take several minutes, depending on your internet connection speed. The output contains the image's upload progress:

      Output

      [0] > Pushing echo-input.
      The push refers to repository [docker.io/sammy/echo-input]
      320ea573b385: Pushed
      9d87e56f5d0c: Pushed
      6f79b75e7434: Pushed
      23aac2d8ecf2: Pushed
      2bec17d09b7e: Pushed
      e5a0e5ab3be6: Pushed
      e9c8ca932f1b: Pushed
      beae1d55b4ce: Pushed
      2fcae03ed1f7: Pushed
      62103d5daa03: Mounted from library/python
      f6ac6def937b: Mounted from library/python
      55c108c7613c: Mounted from library/python
      e53f74215d12: Mounted from library/python
      latest: digest: sha256:794fa942c2f593286370bbab2b6c6b75b9c4dcde84f62f522e59fb0f52ba05c1 size: 3033
      [0] < Pushing echo-input done.
      [0] worker done.

      Finally, with your image pushed to Docker Hub, you can use it to deploy a function to your OpenFaaS server.

      To deploy your function, run the deploy command, which takes the path to the manifest that describes your function, as well as the address of your OpenFaaS server:

      • faas-cli deploy -f echo-input.yml --gateway https://example.com

      The output shows the status of the deployment, along with the name of the function you're deploying and the deployment status code:

      Output

      Deploying: echo-input.
      Deployed. 200 OK.
      URL: https://example.com/function/echo-input

      If the deployment is successful, you will see a 200 status code. In the case of errors, follow the provided instructions to fix the problem before continuing.

      At this point your function is deployed and ready to be used. You can test that it is working as expected by invoking it.

      To invoke a function with the FaaS CLI, use the invoke command by passing the function name and OpenFaaS address to it. After executing the command, you'll be asked to enter the request you want to send to the function.

      Execute the following command to invoke the echo-input function:

      • faas-cli invoke echo-input --gateway https://example.com

      You'll be asked to enter the request you want to send to the function:

      Output

      Reading from STDIN - hit (Control + D) to stop.

      Enter the text you want to send to the function, such as:

      Sammy The Shark!
      

      Once you're done, press ENTER and then CTRL + D to finish the request. The CTRL + D shortcut in the terminal is used to register an End-of-File (EOF). The OpenFaaS CLI stops reading from the terminal once EOF is received.
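      The read-until-EOF behavior is the same as reading a whole stream to its end in any language. This sketch simulates it with an in-memory stream standing in for STDIN:

```python
import io

def read_request(stream) -> str:
    """Read the whole request body until EOF, the way an interactive
    CLI reads STDIN after CTRL + D is pressed."""
    return stream.read()

# Simulate typing a request and then pressing CTRL + D (EOF):
simulated_stdin = io.StringIO("Sammy The Shark!\n")
print(read_request(simulated_stdin))
```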

      After several seconds, the command will output the function's response:

      Output

      Reading from STDIN - hit (Control + D) to stop.
      Sammy The Shark!
      Received message: Sammy The Shark!

      If you don't see the output or you get an error, retrace the preceding steps to make sure you've deployed the function as explained and follow the on-screen instructions to resolve the problem.

      At this point, you've interacted with your function using three methods: the Web UI, the API, and the CLI. Being able to execute your functions with any of these methods offers you the flexibility of deciding how you would like to integrate functions into your existing workflows.

      Conclusion

      In this tutorial, you've used serverless architecture and OpenFaaS to deploy and manage your applications using the OpenFaaS API, Web UI, and CLI. You also secured your infrastructure by leveraging Traefik to provide SSL using Let's Encrypt.

      If you want to learn more about the OpenFaaS project, you can check out their website and the project's official documentation.




      How To Secure Nginx with Let’s Encrypt on Debian 9


      Introduction

      Let’s Encrypt is a Certificate Authority (CA) that provides an easy way to obtain and install free TLS/SSL certificates, enabling encrypted HTTPS on web servers. It simplifies the process by providing a software client, Certbot, that attempts to automate most (if not all) of the required steps. Currently, the entire process of obtaining and installing a certificate is fully automated on both Apache and Nginx.

      In this tutorial, you will use Certbot to obtain a free SSL certificate for Nginx on Debian 9 and set up your certificate to renew automatically.

      This tutorial will use a separate Nginx server block file instead of the default file. We recommend creating new Nginx server block files for each domain because it helps to avoid common mistakes and maintains the default files as a fallback configuration.

      Prerequisites

      To follow this tutorial, you will need:

      • One Debian 9 server, set up by following this initial server setup for Debian 9 tutorial, along with a sudo non-root user and a firewall.
      • A fully registered domain name. This tutorial will use example.com throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.
      • Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them.

        • An A record with example.com pointing to your server’s public IP address.
        • An A record with www.example.com pointing to your server’s public IP address.
      • Nginx installed by following How To Install Nginx on Debian 9. Be sure that you have a server block for your domain. This tutorial will use /etc/nginx/sites-available/example.com as an example.

      Step 1 — Installing Certbot

      The first step to using Let’s Encrypt to obtain an SSL certificate is to install the Certbot software on your server.

      Certbot is in very active development, so the Certbot packages provided by Debian with current stable releases tend to be outdated. However, we can obtain a more up-to-date package by enabling the Debian 9 backports repository in /etc/apt/sources.list, where the apt package manager looks for package sources. The backports repository includes recompiled packages that can be run without new libraries on stable Debian distributions.

      To add the backports repository, first open /etc/apt/sources.list:

      • sudo nano /etc/apt/sources.list

      At the bottom of the file, add the following mirrors from the Debian project:

      /etc/apt/sources.list

      ...
      deb http://deb.debian.org/debian stretch-backports main contrib non-free
      deb-src http://deb.debian.org/debian stretch-backports main contrib non-free
      

      This includes the main packages, which are Debian Free Software Guidelines (DFSG)-compliant, as well as the non-free and contrib components, which are either not DFSG-compliant themselves or include dependencies in this category.

      Save and close the file when you are finished.

      Update the package list to pick up the new repository's package information:

      • sudo apt update

      And finally, install Certbot's Nginx package with apt:

      • sudo apt install python-certbot-nginx -t stretch-backports

      Certbot is now ready to use, but in order for it to configure SSL for Nginx, we need to verify some of Nginx's configuration.

      Step 2 — Confirming Nginx's Configuration

      Certbot needs to be able to find the correct server block in your Nginx configuration for it to be able to automatically configure SSL. Specifically, it does this by looking for a server_name directive that matches your requested domain.

      If you followed the server block setup step in the Nginx installation tutorial, you should have a server block for your domain at /etc/nginx/sites-available/example.com with the server_name directive already set appropriately.

      To check, open the server block file for your domain using nano or your favorite text editor:

      • sudo nano /etc/nginx/sites-available/example.com

      Find the existing server_name line. It should look like this:

      /etc/nginx/sites-available/example.com

      ...
      server_name example.com www.example.com;
      ...
      

      If it does, exit your editor and move on to the next step.

      If it doesn't, update it to match. Then save the file, quit your editor, and verify the syntax of your configuration edits:

      • sudo nginx -t

      If you get an error, reopen the server block file and check for any typos or missing characters. Once your configuration file syntax is correct, reload Nginx to load the new configuration:

      • sudo systemctl reload nginx

      Certbot can now find the correct server block and update it.

      Next, let's update the firewall to allow HTTPS traffic.

      Step 3 — Allowing HTTPS Through the Firewall

      If you have the ufw firewall enabled, as recommended in the prerequisite guides, you'll need to adjust the settings to allow for HTTPS traffic.

      You can see the current setting by typing:

      • sudo ufw status

      It will probably look like this, meaning that only HTTP traffic is allowed to the web server:

      Output

      Status: active

      To                         Action      From
      --                         ------      ----
      OpenSSH                    ALLOW       Anywhere
      Nginx HTTP                 ALLOW       Anywhere
      OpenSSH (v6)               ALLOW       Anywhere (v6)
      Nginx HTTP (v6)            ALLOW       Anywhere (v6)

      To let in HTTPS traffic, allow the Nginx Full profile and delete the redundant Nginx HTTP profile allowance:

      • sudo ufw allow 'Nginx Full'
      • sudo ufw delete allow 'Nginx HTTP'

      Your status should now look like this:

      Output

      Status: active

      To                         Action      From
      --                         ------      ----
      OpenSSH                    ALLOW       Anywhere
      Nginx Full                 ALLOW       Anywhere
      OpenSSH (v6)               ALLOW       Anywhere (v6)
      Nginx Full (v6)            ALLOW       Anywhere (v6)

      Next, let's run Certbot and fetch our certificates.

      Step 4 — Obtaining an SSL Certificate

      Certbot provides a variety of ways to obtain SSL certificates through plugins. The Nginx plugin will take care of reconfiguring Nginx and reloading the config whenever necessary. To use this plugin, type the following:

      • sudo certbot --nginx -d example.com -d www.example.com

      This runs certbot with the --nginx plugin, using -d to specify the names we'd like the certificate to be valid for.

      If this is your first time running certbot, you will be prompted to enter an email address and agree to the terms of service. After doing so, certbot will communicate with the Let's Encrypt server, then run a challenge to verify that you control the domain you're requesting a certificate for.

      If that's successful, certbot will ask how you'd like to configure your HTTPS settings.

      Output

      Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
      -------------------------------------------------------------------------------
      1: No redirect - Make no further changes to the webserver configuration.
      2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
      new sites, or if you're confident your site works on HTTPS. You can undo this
      change by editing your web server's configuration.
      -------------------------------------------------------------------------------
      Select the appropriate number [1-2] then [enter] (press 'c' to cancel):

      Select your choice then hit ENTER. The configuration will be updated, and Nginx will reload to pick up the new settings. certbot will wrap up with a message telling you the process was successful and where your certificates are stored:

      Output

      IMPORTANT NOTES:
       - Congratulations! Your certificate and chain have been saved at:
         /etc/letsencrypt/live/example.com/fullchain.pem
         Your key file has been saved at:
         /etc/letsencrypt/live/example.com/privkey.pem
         Your cert will expire on 2018-07-23. To obtain a new or tweaked
         version of this certificate in the future, simply run certbot again
         with the "certonly" option. To non-interactively renew *all* of
         your certificates, run "certbot renew"
       - Your account credentials have been saved in your Certbot
         configuration directory at /etc/letsencrypt. You should make a
         secure backup of this folder now. This configuration directory will
         also contain certificates and private keys obtained by Certbot so
         making regular backups of this folder is ideal.
       - If you like Certbot, please consider supporting our work by:
         Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
         Donating to EFF:                    https://eff.org/donate-le

      Your certificates are downloaded, installed, and loaded. Try reloading your website using https:// and notice your browser's security indicator. It should indicate that the site is properly secured, usually with a green lock icon. If you test your server using the SSL Labs Server Test, it will get an A grade.

      Let's finish by testing the renewal process.

      Step 5 — Verifying Certbot Auto-Renewal

      Let's Encrypt's certificates are only valid for ninety days. This is to encourage users to automate their certificate renewal process. The certbot package we installed takes care of this for us by adding a renew script to /etc/cron.d. This script runs twice a day and will automatically renew any certificate that's within thirty days of expiration.
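      The renewal window is easy to reason about: working backwards from the sample expiry date of 2018-07-23 shown earlier, the certificate was issued ninety days before, and renewal attempts begin once fewer than thirty days remain. A small sketch of that arithmetic (the issue date is derived for illustration):

```python
from datetime import date, timedelta

issued = date(2018, 4, 24)               # illustrative issue date
expires = issued + timedelta(days=90)    # Let's Encrypt certs last ninety days

def should_renew(today: date, expires: date, window_days: int = 30) -> bool:
    """Renew once fewer than window_days remain before expiry."""
    return (expires - today).days < window_days

print(expires)                                  # 2018-07-23
print(should_renew(date(2018, 6, 1), expires))  # False: over 30 days left
print(should_renew(date(2018, 7, 1), expires))  # True: under 30 days left
```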

      To test the renewal process, you can do a dry run with certbot:

      • sudo certbot renew --dry-run

      If you see no errors, you're all set. When necessary, Certbot will renew your certificates and reload Nginx to pick up the changes. If the automated renewal process ever fails, Let’s Encrypt will send a message to the email you specified, warning you when your certificate is about to expire.

      Conclusion

      In this tutorial, you installed the Let's Encrypt client certbot, downloaded SSL certificates for your domain, configured Nginx to use these certificates, and set up automatic certificate renewal. If you have further questions about using Certbot, their documentation is a good place to start.




      How To Install and Secure Redis on Debian 9


      Introduction

      Redis is an in-memory key-value store known for its flexibility, performance, and wide language support. This tutorial demonstrates how to install, configure, and secure Redis on a Debian 9 server.

      Prerequisites

      To complete this guide, you will need access to a Debian 9 server that has a non-root user with sudo privileges and a basic firewall configured. You can set this up by following our Initial Server Setup guide.

      When you are ready to begin, log in to your server as your sudo-enabled user and continue below.

      Step 1 — Installing and Configuring Redis

      In order to get the latest version of Redis, we will use apt to install it from the official Debian repositories.

      Update your local apt package cache and install Redis by typing:

      • sudo apt update
      • sudo apt install redis-server

      This will download and install Redis and its dependencies. Following this, there is one important configuration change to make in the Redis configuration file, which was generated automatically during the installation.

      Open this file with your preferred text editor:

      • sudo nano /etc/redis/redis.conf

      Inside the file, find the supervised directive. This directive allows you to declare an init system to manage Redis as a service, providing you with more control over its operation. The supervised directive is set to no by default. Since you are running Debian, which uses the systemd init system, change this to systemd:

      /etc/redis/redis.conf

      . . .
      
      # If you run Redis from upstart or systemd, Redis can interact with your
      # supervision tree. Options:
      #   supervised no      - no supervision interaction
      #   supervised upstart - signal upstart by putting Redis into SIGSTOP mode
      #   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
      #   supervised auto    - detect upstart or systemd method based on
      #                        UPSTART_JOB or NOTIFY_SOCKET environment variables
      # Note: these supervision methods only signal "process is ready."
      #       They do not enable continuous liveness pings back to your supervisor.
      supervised systemd
      
      . . .
      

      That’s the only change you need to make to the Redis configuration file at this point, so save and close it when you are finished. Then, restart the Redis service to apply the changes you made to the configuration file:

      • sudo systemctl restart redis

      With that, you’ve installed and configured Redis and it’s running on your machine. Before you begin using it, though, it’s prudent to first check whether Redis is functioning correctly.

      Step 2 — Testing Redis

      As with any newly-installed software, it’s a good idea to ensure that Redis is functioning as expected before making any further changes to its configuration. We will go over a handful of ways to check that Redis is working correctly in this step.

      Start by checking that the Redis service is running:

      • sudo systemctl status redis

      If it is running without any errors, this command will produce output similar to the following:

      Output

      ● redis-server.service - Advanced key-value store
         Loaded: loaded (/lib/systemd/system/redis-server.service; enabled; vendor preset: enabled)
         Active: active (running) since Wed 2018-09-05 20:19:44 UTC; 41s ago
           Docs: http://redis.io/documentation,
                 man:redis-server(1)
        Process: 10829 ExecStopPost=/bin/run-parts --verbose /etc/redis/redis-server.post-down.d (code=exited, status=0/SUCCESS)
        Process: 10825 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, status=0/SUCCESS)
        Process: 10823 ExecStop=/bin/run-parts --verbose /etc/redis/redis-server.pre-down.d (code=exited, status=0/SUCCESS)
        Process: 10842 ExecStartPost=/bin/run-parts --verbose /etc/redis/redis-server.post-up.d (code=exited, status=0/SUCCESS)
        Process: 10838 ExecStart=/usr/bin/redis-server /etc/redis/redis.conf (code=exited, status=0/SUCCESS)
        Process: 10834 ExecStartPre=/bin/run-parts --verbose /etc/redis/redis-server.pre-up.d (code=exited, status=0/SUCCESS)
       Main PID: 10841 (redis-server)
          Tasks: 3 (limit: 4915)
         CGroup: /system.slice/redis-server.service
                 └─10841 /usr/bin/redis-server 127.0.0.1:6379
      . . .

      Here, you can see that Redis is running and is already enabled, meaning that it is set to start up every time the server boots.

      Note: This setting is desirable for many common use cases of Redis. If, however, you prefer to start up Redis manually every time your server boots, you can configure this with the following command:

      • sudo systemctl disable redis

      To test that Redis is functioning correctly, connect to the server using the command-line client:

      • redis-cli

      In the prompt that follows, test connectivity with the ping command:

      • ping

      Output

      PONG

      This output confirms that the server connection is still alive. Next, check that you’re able to set keys by running:

      • set test "It's working!"

      Output

      OK

      Retrieve the value by typing:

      • get test

      Assuming everything is working, you will be able to retrieve the value you stored:

      Output

      "It's working!"

      After confirming that you can fetch the value, exit the Redis prompt to get back to the shell:

      • exit

      As a final test, we will check whether Redis is able to persist data even after it’s been stopped or restarted. To do this, first restart the Redis instance:

      • sudo systemctl restart redis

      Then connect with the command-line client once again and confirm that your test value is still available:

      • redis-cli
      • get test

      The value of your key should still be accessible:

      Output

      "It's working!"

      Exit out into the shell again when you are finished:

      • exit

      With that, your Redis installation is fully operational and ready for you to use. However, some of its default configuration settings are insecure and provide malicious actors with opportunities to attack and gain access to your server and its data. The remaining steps in this tutorial cover methods for mitigating these vulnerabilities, as prescribed by the official Redis website. Although these steps are optional and Redis will still function if you choose not to follow them, it is strongly recommended that you complete them in order to harden your system’s security.
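      For the curious: under the hood, redis-cli talks to the server using Redis's simple text-based wire protocol (RESP), in which every command is sent as an array of length-prefixed bulk strings. This sketch encodes the commands used above, purely for illustration; the client normally handles this for you:

```python
def encode_resp(*parts: str) -> bytes:
    """Encode a Redis command as a RESP array of bulk strings."""
    out = [f"*{len(parts)}\r\n".encode()]  # array header: number of parts
    for part in parts:
        data = part.encode()
        # each part is a bulk string: $<length>\r\n<bytes>\r\n
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

# The PING used in Step 2, as it appears on the wire:
print(encode_resp("PING"))                         # b'*1\r\n$4\r\nPING\r\n'
print(encode_resp("SET", "test", "It's working!"))
```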

      Step 3 — Binding to localhost

      By default, Redis is only accessible from localhost. However, if you installed and configured Redis by following a different tutorial than this one, you might have updated the configuration file to allow connections from anywhere. This is not as secure as binding to localhost.

      To correct this, open the Redis configuration file for editing:

      • sudo nano /etc/redis/redis.conf

      Locate this line and make sure it is uncommented (remove the # if it exists):

      /etc/redis/redis.conf

      bind 127.0.0.1
      

      Save and close the file when finished (press CTRL + X, Y, then ENTER).

      Then, restart the service to ensure that systemd reads your changes:

      • sudo systemctl restart redis

      To check that this change has gone into effect, run the following netstat command:

      • sudo netstat -lnp | grep redis

      Output

      tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 10959/redis-server

      This output shows that the redis-server program is bound to localhost (127.0.0.1), reflecting the change you just made to the configuration file. If you see another IP address in that column (0.0.0.0, for example), then you should double check that you uncommented the correct line and restart the Redis service again.
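      If you want to script this check rather than read the netstat output by eye, the parsing logic can be sketched in a few lines of Python. The function name is_loopback_only and the sample lines below are illustrative, not commands from the tutorial:

      ```python
      # Check a netstat-style line to confirm that the local address
      # (the fourth whitespace-separated field) is the loopback interface.
      def is_loopback_only(netstat_line: str) -> bool:
          fields = netstat_line.split()
          local_address = fields[3]            # e.g. "127.0.0.1:6379"
          host = local_address.rsplit(":", 1)[0]
          return host == "127.0.0.1"

      safe = "tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 10959/redis-server"
      print(is_loopback_only(safe))     # True: bound to localhost only

      exposed = "tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 10959/redis-server"
      print(is_loopback_only(exposed))  # False: listening on all interfaces
      ```

      You could feed such a function each line of `sudo netstat -lnp | grep redis` output to flag any socket that is not bound to 127.0.0.1.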

      Now that your Redis installation is only listening on localhost, it will be more difficult for malicious actors to make requests or gain access to your server. However, Redis isn’t currently set to require users to authenticate themselves before making changes to its configuration or the data it holds. To remedy this, Redis allows you to require users to authenticate with a password before making changes via the Redis client (redis-cli).

      Step 4 — Configuring a Redis Password

      Configuring a Redis password enables one of its two built-in security features: the auth command, which requires clients to authenticate before they are granted access to the database. The password is configured directly in Redis’s configuration file, /etc/redis/redis.conf, so open that file again with your preferred editor:

      • sudo nano /etc/redis/redis.conf

      Scroll to the SECURITY section and look for a commented directive that reads:

      /etc/redis/redis.conf

      # requirepass foobared
      

      Uncomment it by removing the #, and change foobared to a secure password.

      Note: Above the requirepass directive in the redis.conf file, there is a commented warning:

      # Warning: since Redis is pretty fast an outside user can try up to
      # 150k passwords per second against a good box. This means that you should
      # use a very strong password otherwise it will be very easy to break.
      #
      

      Thus, it’s important that you specify a very strong and very long value as your password. Rather than make up a password yourself, you can use the openssl command to generate a random one, as in the following example. Piping the output of the first command to the second openssl command, as shown here, removes any line breaks produced by the first command:

      • openssl rand 60 | openssl base64 -A

      Your output should look something like:

      Output

      RBOJ9cCNoGCKhlEBwQLHri1g+atWgn4Xn4HwNUbtzoVxAYxkiYBi7aufl4MILv1nxBqR4L6NNzI0X6cE

      After copying and pasting the output of that command as the new value for requirepass, it should read:

      /etc/redis/redis.conf

      requirepass RBOJ9cCNoGCKhlEBwQLHri1g+atWgn4Xn4HwNUbtzoVxAYxkiYBi7aufl4MILv1nxBqR4L6NNzI0X6cE
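      If openssl isn’t available on your machine, a comparably strong single-line password can be generated with Python’s standard library. This is an illustrative sketch, not a command from the tutorial:

      ```python
      import base64
      import secrets

      # 60 random bytes, base64-encoded with no line breaks, mirroring
      # `openssl rand 60 | openssl base64 -A`.
      password = base64.b64encode(secrets.token_bytes(60)).decode("ascii")

      print(password)
      print(len(password))  # 60 bytes encode to exactly 80 base64 characters
      ```

      Because 60 is a multiple of 3, the base64 output needs no padding and is always 80 characters long.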

      After setting the password, save and close the file, then restart Redis:

      • sudo systemctl restart redis.service

      To test that the password works, access the Redis command line:

      • redis-cli

      The following sequence of commands tests whether the Redis password works. The first command tries to set a key to a value before authentication:

      • set key1 10

      That won't work because you didn’t authenticate, so Redis returns an error:

      Output

      (error) NOAUTH Authentication required.

      The next command authenticates with the password specified in the Redis configuration file:

      • auth your_redis_password

      Redis acknowledges:

      Output

      OK

      After that, running the previous command again will succeed:

      • set key1 10

      Output

      OK

      Then run get key1 to query Redis for the value of the new key:

      • get key1

      Output

      "10"

      After confirming that you’re able to run commands in the Redis client after authenticating, you can exit the redis-cli:

      • exit

      Next, we'll look at renaming Redis commands which, if entered by mistake or by a malicious actor, could cause serious damage to your machine.

      Step 5 — Renaming Dangerous Commands

      The other security feature built into Redis involves renaming or completely disabling certain commands that are considered dangerous.

      When run by unauthorized users, such commands can be used to reconfigure, destroy, or otherwise wipe your data. Like the authentication password, renaming or disabling commands is configured in the same SECURITY section of the /etc/redis/redis.conf file.

      Some of the commands that are considered dangerous include: FLUSHDB, FLUSHALL, KEYS, PEXPIRE, DEL, CONFIG, SHUTDOWN, BGREWRITEAOF, BGSAVE, SAVE, SPOP, SREM, RENAME, and DEBUG. This is not a comprehensive list, but renaming or disabling all of the commands in that list is a good starting point for enhancing your Redis server’s security.

      Whether you should disable or rename a command depends on your specific needs or those of your site. If you know you will never use a command that could be abused, then you may disable it. Otherwise, it might be in your best interest to rename it.

      To enable or disable Redis commands, open the configuration file once more:

      • sudo nano /etc/redis/redis.conf

      Warning: The following steps showing how to disable and rename commands are examples. You should only choose to disable or rename the commands that make sense for you. You can review the full list of commands for yourself and determine how they might be misused at redis.io/commands.

      To disable a command, simply rename it to an empty string (signified by a pair of quotation marks with no characters between them), as shown below:

      /etc/redis/redis.conf

      . . .
      # It is also possible to completely kill a command by renaming it into
      # an empty string:
      #
      rename-command FLUSHDB ""
      rename-command FLUSHALL ""
      rename-command DEBUG ""
      . . .
      

      To rename a command, give it another name as shown in the examples below. Renamed commands should be difficult for others to guess, but easy for you to remember:

      /etc/redis/redis.conf

      . . .
      # rename-command CONFIG ""
      rename-command SHUTDOWN SHUTDOWN_MENOT
      rename-command CONFIG ASC12_CONFIG
      . . .
      

      Save your changes and close the file.
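      The effect of these directives can be pictured as a simple lookup table. The following Python sketch is illustrative only (the client_name function and the map are not Redis internals); it shows which name a client must use for each command after the renames above:

      ```python
      from typing import Optional

      # Mirror of the rename-command directives shown above.
      RENAMES = {
          "FLUSHDB": "",                # empty string disables the command
          "FLUSHALL": "",
          "DEBUG": "",
          "SHUTDOWN": "SHUTDOWN_MENOT",
          "CONFIG": "ASC12_CONFIG",
      }

      def client_name(command: str) -> Optional[str]:
          """Return the name a client must invoke, or None if disabled."""
          renamed = RENAMES.get(command.upper(), command.upper())
          return renamed if renamed else None

      print(client_name("config"))    # ASC12_CONFIG
      print(client_name("flushall"))  # None  (disabled)
      print(client_name("get"))       # GET   (unchanged)
      ```

      Commands absent from the map keep their original names, which is why everyday commands like GET and SET continue to work unchanged.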

      After renaming a command, apply the change by restarting Redis:

      • sudo systemctl restart redis

      To test the new command, enter the Redis command line:

      • redis-cli

      Then, authenticate with your Redis password:

      • auth your_redis_password

      Output

      OK

      Let’s assume that you renamed the CONFIG command to ASC12_CONFIG, as in the preceding example. First, try using the original CONFIG command. It should fail, because you’ve renamed it:

      • config get requirepass

      Output

      (error) ERR unknown command 'config'

      Calling the renamed command, however, will be successful. It is not case-sensitive:

      • asc12_config get requirepass

      Output

      1) "requirepass" 2) "your_redis_password"

      Finally, you can exit from redis-cli:

      • exit

      Note that if you're already using the Redis command line and then restart Redis, you'll need to re-authenticate. Otherwise, you'll get this error if you type a command:

      Output

      NOAUTH Authentication required.

      Regarding the practice of renaming commands, there's a cautionary statement at the end of the SECURITY section in /etc/redis/redis.conf which reads:

      Please note that changing the name of commands that are logged into the AOF file or transmitted to slaves may cause problems.

      Note: The Redis project chooses to use the terms “master” and “slave” while DigitalOcean generally prefers alternative descriptors. In order to avoid confusion we’ve chosen to use the terms used in the Redis documentation here.

      That means if the renamed command is not in the AOF file, or if it is but the AOF file has not been transmitted to slaves, then there should be no problem.

      So, keep that in mind when you're trying to rename commands. The best time to rename a command is when you're not using AOF persistence, or right after installation, that is, before your Redis-using application has been deployed.

      When you're using AOF and dealing with a master-slave installation, consider this answer from the project's GitHub issue page, given in reply to a question about renaming commands:

      The commands are logged to the AOF and replicated to the slave the same way they are sent, so if you try to replay the AOF on an instance that doesn't have the same renaming, you may face inconsistencies as the command cannot be executed (same for slaves).

      Thus, the best way to handle renaming in cases like that is to make sure that renamed commands are applied to all instances in master-slave installations.

      Conclusion

      In this tutorial, you installed and configured Redis, validated that your Redis installation is functioning correctly, and used its built-in security features to make it less vulnerable to attacks from malicious actors.

      Keep in mind that once someone is logged in to your server, it's very easy to circumvent the Redis-specific security features we've put in place. Therefore, the most important security feature on your Redis server is your firewall (which you configured if you followed the prerequisite Initial Server Setup tutorial), as this makes it extremely difficult for malicious actors to jump that fence.


