
      How to Use Traefik as a Reverse Proxy for Docker Containers on Ubuntu 18.04


      The author selected Girls Who Code to receive a donation as part of the Write for DOnations program.

      Introduction

      Docker can be an efficient way to run web applications in production, but you may want to run multiple applications on the same Docker host. In this situation, you’ll need to set up a reverse proxy since you only want to expose ports 80 and 443 to the rest of the world.

      Traefik is a Docker-aware reverse proxy that includes its own monitoring dashboard. In this tutorial, you’ll use Traefik to route requests to two different web application containers: a WordPress container and an Adminer container, each talking to a MySQL database. You’ll configure Traefik to serve everything over HTTPS using Let’s Encrypt.

      Prerequisites

To follow along with this tutorial, you will need the following:

      • One Ubuntu 18.04 server with Docker and Docker Compose installed.

      • A domain and three A records, db-admin.your_domain, blog.your_domain, and monitor.your_domain, each pointing to the IP address of your server.

      Step 1 — Configuring and Running Traefik

      The Traefik project has an official Docker image, so we will use that to run Traefik in a Docker container.

      But before we get our Traefik container up and running, we need to create a configuration file and set up an encrypted password so we can access the monitoring dashboard.

      We’ll use the htpasswd utility to create this encrypted password. First, install the utility, which is included in the apache2-utils package:

      • sudo apt-get install apache2-utils

      Then generate the password with htpasswd. Substitute secure_password with the password you’d like to use for the Traefik admin user:

      • htpasswd -nb admin secure_password

      The output from the program will look like this:

      Output

      admin:$apr1$ruca84Hq$mbjdMZBAG.KWn7vfN/SNK/

      You’ll use this output in the Traefik configuration file to set up HTTP Basic Authentication for the Traefik health check and monitoring dashboard. Copy the entire output line so you can paste it later.
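If you're curious how that admin:$apr1$... string is put together, the sketch below (an illustration, not part of the tutorial's setup) recomputes an APR1 hash with openssl and checks that it matches; the password and salt here are hypothetical:

```shell
# Illustration only: APR1 entries have the form $apr1$<salt>$<digest>.
# openssl can recompute the digest for a given password and salt.
password="secure_password"                        # hypothetical password
hash=$(openssl passwd -apr1 -salt somesalt "$password")

# Pull the salt back out of the stored hash (field 3, '$'-delimited) and recompute.
salt=$(echo "$hash" | cut -d'$' -f3)
recomputed=$(openssl passwd -apr1 -salt "$salt" "$password")

[ "$hash" = "$recomputed" ] && echo "password matches"
```

This is also why the whole output line matters: the hash embeds its own salt, so truncating it breaks authentication.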

      To configure the Traefik server, we’ll create a new configuration file called traefik.toml using the TOML format. TOML is a configuration language similar to INI files, but standardized. This file lets us configure the Traefik server and various integrations, or providers, we want to use. In this tutorial, we will use three of Traefik’s available providers: api, docker, and acme, which is used to support TLS using Let’s Encrypt.
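As a quick illustration of the format (this snippet is not part of traefik.toml, and the key names are made up), TOML groups keys under bracketed table headers, values are typed, and nested tables use dotted names:

```toml
[provider]
endpoint = "unix:///var/run/docker.sock"   # strings are quoted
watch = true                               # booleans are bare

[provider.tls]                             # a nested table under [provider]
insecureSkipVerify = false
```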

Open up your new file in nano or your favorite text editor:

      • nano traefik.toml

      First, add two named entry points, http and https, that all backends will have access to by default:

      traefik.toml

      defaultEntryPoints = ["http", "https"]
      

      We'll configure the http and https entry points later in this file.

      Next, configure the api provider, which gives you access to a dashboard interface. This is where you'll paste the output from the htpasswd command:

      traefik.toml

      ...
      [entryPoints]
        [entryPoints.dashboard]
          address = ":8080"
          [entryPoints.dashboard.auth]
            [entryPoints.dashboard.auth.basic]
              users = ["admin:your_encrypted_password"]
      
      [api]
      entrypoint="dashboard"
      

      The dashboard is a separate web application that will run within the Traefik container. We set the dashboard to run on port 8080.

The entrypoints.dashboard section configures how we'll be connecting with the api provider, and the entrypoints.dashboard.auth.basic section configures HTTP Basic Authentication for the dashboard. Use the output from the htpasswd command you just ran for the value of the users entry. You could specify additional logins by separating them with commas.

We've defined our first entryPoint, but we'll need to define others for standard HTTP and HTTPS communication that isn't directed towards the api provider. The entryPoints section configures the addresses that Traefik and the proxied containers can listen on. Add these lines to the file underneath the entryPoints heading:

      traefik.toml

      ...
        [entryPoints.http]
          address = ":80"
            [entryPoints.http.redirect]
              entryPoint = "https"
        [entryPoints.https]
          address = ":443"
            [entryPoints.https.tls]
      ...
      

      The http entry point handles port 80, while the https entry point uses port 443 for TLS/SSL. We automatically redirect all of the traffic on port 80 to the https entry point to force secure connections for all requests.

      Next, add this section to configure Let's Encrypt certificate support for Traefik:

      traefik.toml

      ...
      [acme]
      email = "your_email@your_domain"
      storage = "acme.json"
      entryPoint = "https"
      onHostRule = true
        [acme.httpChallenge]
        entryPoint = "http"
      

      This section is called acme because ACME is the name of the protocol used to communicate with Let's Encrypt to manage certificates. The Let's Encrypt service requires registration with a valid email address, so in order to have Traefik generate certificates for our hosts, set the email key to your email address. We then specify that we will store the information that we will receive from Let's Encrypt in a JSON file called acme.json. The entryPoint key needs to point to the entry point handling port 443, which in our case is the https entry point.

      The key onHostRule dictates how Traefik should go about generating certificates. We want to fetch our certificates as soon as our containers with specified hostnames are created, and that's what the onHostRule setting will do.

      The acme.httpChallenge section allows us to specify how Let's Encrypt can verify that the certificate should be generated. We're configuring it to serve a file as part of the challenge through the http entrypoint.
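To make the challenge concrete, here is a local simulation (illustrative only: python3's built-in web server stands in for Traefik, curl stands in for Let's Encrypt, and the token is made up). The CA requests a token file under /.well-known/acme-challenge/ on the http entry point and checks its contents:

```shell
# Local stand-in for the HTTP-01 challenge; paths and token are hypothetical.
mkdir -p webroot/.well-known/acme-challenge
echo "demo-key-authorization" > webroot/.well-known/acme-challenge/demo-token

# Serve the webroot the way Traefik would answer the challenge (port 8000
# here for the demo; the real challenge must be answered on port 80).
# --directory requires Python 3.7+.
python3 -m http.server 8000 --directory webroot >/dev/null 2>&1 &
server_pid=$!
sleep 1

# The CA's validation request:
response=$(curl -s http://127.0.0.1:8000/.well-known/acme-challenge/demo-token)
echo "$response"

kill "$server_pid"
```

In the real flow Traefik generates and serves the token itself; no webroot directory is involved.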

      Finally, let's configure the docker provider by adding these lines to the file:

      traefik.toml

      ...
      [docker]
      domain = "your_domain"
      watch = true
      network = "web"
      

      The docker provider enables Traefik to act as a proxy in front of Docker containers. We've configured the provider to watch for new containers on the web network (that we'll create soon) and expose them as subdomains of your_domain.

      At this point, traefik.toml should have the following contents:

      traefik.toml

      defaultEntryPoints = ["http", "https"]
      
      [entryPoints]
        [entryPoints.dashboard]
          address = ":8080"
          [entryPoints.dashboard.auth]
            [entryPoints.dashboard.auth.basic]
              users = ["admin:your_encrypted_password"]
        [entryPoints.http]
          address = ":80"
            [entryPoints.http.redirect]
              entryPoint = "https"
        [entryPoints.https]
          address = ":443"
            [entryPoints.https.tls]
      
      [api]
      entrypoint="dashboard"
      
      [acme]
      email = "your_email@your_domain"
      storage = "acme.json"
      entryPoint = "https"
      onHostRule = true
        [acme.httpChallenge]
        entryPoint = "http"
      
      [docker]
      domain = "your_domain"
      watch = true
      network = "web"
      

      Save the file and exit the editor. With all of this configuration in place, we can fire up Traefik.

Step 2 — Running the Traefik Container

      Next, create a Docker network for the proxy to share with containers. The Docker network is necessary so that we can use it with applications that are run using Docker Compose. Let's call this network web.

      • docker network create web

      When the Traefik container starts, we will add it to this network. Then we can add additional containers to this network later for Traefik to proxy to.

Next, create an empty file which will hold our Let's Encrypt information. We'll share this into the container so Traefik can use it:

      • touch acme.json

      Then lock down the permissions on this file so that only the root user can read and write to it. If you don't do this, Traefik will fail to start:

      • chmod 600 acme.json

      Finally, create the Traefik container with this command:

docker run -d \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v $PWD/traefik.toml:/traefik.toml \
        -v $PWD/acme.json:/acme.json \
        -p 80:80 \
        -p 443:443 \
        -l traefik.frontend.rule=Host:monitor.your_domain \
        -l traefik.port=8080 \
        --network web \
        --name traefik \
        traefik:1.7.2-alpine

      The command is a little long so let's break it down.

      We use the -d flag to run the container in the background as a daemon. We then share our docker.sock file into the container so that the Traefik process can listen for changes to containers. We also share the traefik.toml configuration file and the acme.json file we created into the container.

      Next, we map ports :80 and :443 of our Docker host to the same ports in the Traefik container so Traefik receives all HTTP and HTTPS traffic to the server.

      Then we set up two Docker labels that tell Traefik to direct traffic to the hostname monitor.your_domain to port :8080 within the Traefik container, exposing the monitoring dashboard.

      We set the network of the container to web, and we name the container traefik.

      Finally, we use the traefik:1.7.2-alpine image for this container, because it's small.

      A Docker image's ENTRYPOINT is a command that always runs when a container is created from the image. In this case, the command is the traefik binary within the container. You can pass additional arguments to that command when you launch the container, but we've configured all of our settings in the traefik.toml file.

      With the container started, you now have a dashboard you can access to see the health of your containers. You can also use this dashboard to visualize the frontends and backends that Traefik has registered. Access the monitoring dashboard by pointing your browser to https://monitor.your_domain. You will be prompted for your username and password, which are admin and the password you configured in Step 1.

      Once logged in, you'll see an interface similar to this:

      Empty Traefik dashboard

      There isn't much to see just yet, but leave this window open, and you will see the contents change as you add containers for Traefik to work with.

      We now have our Traefik proxy running, configured to work with Docker, and ready to monitor other Docker containers. Let's start some containers for Traefik to act as a proxy for.

      Step 3 — Registering Containers with Traefik

With the Traefik container running, you're ready to run applications behind it. Let's launch the following containers behind Traefik:

      1. A blog using the official WordPress image.
      2. A database management server using the official Adminer image.

      We'll manage both of these applications with Docker Compose using a docker-compose.yml file. Open the docker-compose.yml file in your editor:

      Add the following lines to the file to specify the version and the networks we'll use:

      docker-compose.yml

      version: "3"
      
      networks:
        web:
          external: true
        internal:
          external: false
      

      We use Docker Compose version 3 because it's the newest major version of the Compose file format.

      For Traefik to recognize our applications, they must be part of the same network, and since we created the network manually, we pull it in by specifying the network name of web and setting external to true. Then we define another network so that we can connect our exposed containers to a database container that we won't expose through Traefik. We'll call this network internal.

Next, we'll define each of our services, one at a time. Let's start with the blog container, which we'll base on the official WordPress image. Add this configuration to the file:

      docker-compose.yml

      version: "3"
      ...
      
      services:
        blog:
          image: wordpress:4.9.8-apache
          environment:
            WORDPRESS_DB_PASSWORD:
          labels:
            - traefik.backend=blog
            - traefik.frontend.rule=Host:blog.your_domain
            - traefik.docker.network=web
            - traefik.port=80
          networks:
            - internal
            - web
          depends_on:
            - mysql
      

      The environment key lets you specify environment variables that will be set inside of the container. By not setting a value for WORDPRESS_DB_PASSWORD, we're telling Docker Compose to get the value from our shell and pass it through when we create the container. We will define this environment variable in our shell before starting the containers. This way we don't hard-code passwords into the configuration file.
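The pass-through behavior can be sketched outside of Compose (the values here are illustrative):

```shell
# A bare `WORDPRESS_DB_PASSWORD:` in docker-compose.yml means "copy this
# variable from the shell that runs docker-compose". The child process
# below plays the role of the container environment.
export WORDPRESS_DB_PASSWORD=example_pass   # hypothetical value
seen=$(sh -c 'echo "container sees: $WORDPRESS_DB_PASSWORD"')
echo "$seen"
```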

      The labels section is where you specify configuration values for Traefik. Docker labels don't do anything by themselves, but Traefik reads these so it knows how to treat containers. Here's what each of these labels does:

      • traefik.backend specifies the name of the backend service in Traefik (which points to the actual blog container).
      • traefik.frontend.rule=Host:blog.your_domain tells Traefik to examine the host requested and if it matches the pattern of blog.your_domain it should route the traffic to the blog container.
      • traefik.docker.network=web specifies which network to look under for Traefik to find the internal IP for this container. Since our Traefik container has access to all of the Docker info, it would potentially take the IP for the internal network if we didn't specify this.
      • traefik.port specifies the exposed port that Traefik should use to route traffic to this container.

      With this configuration, all traffic sent to our Docker host's port 80 will be routed to the blog container.

      We assign this container to two different networks so that Traefik can find it via the web network and it can communicate with the database container through the internal network.

      Lastly, the depends_on key tells Docker Compose that this container needs to start after its dependencies are running. Since WordPress needs a database to run, we must run our mysql container before starting our blog container.

      Next, configure the MySQL service by adding this configuration to your file:

      docker-compose.yml

      services:
      ...
        mysql:
          image: mysql:5.7
          environment:
            MYSQL_ROOT_PASSWORD:
          networks:
            - internal
          labels:
            - traefik.enable=false
      

We're using the official MySQL 5.7 image for this container. You'll notice that we're once again using an environment item without a value. The MYSQL_ROOT_PASSWORD and WORDPRESS_DB_PASSWORD variables need to be set to the same value to make sure that our WordPress container can communicate with the MySQL database. We don't want to expose the mysql container to Traefik or the outside world, so we're only assigning this container to the internal network. Since Traefik has access to the Docker socket, it would still expose a frontend for the mysql container by default, so we add the label traefik.enable=false to tell Traefik not to expose this container.

      Finally, add this configuration to define the Adminer container:

      docker-compose.yml

      services:
      ...
        adminer:
          image: adminer:4.6.3-standalone
          labels:
            - traefik.backend=adminer
            - traefik.frontend.rule=Host:db-admin.your_domain
            - traefik.docker.network=web
            - traefik.port=8080
          networks:
            - internal
            - web
          depends_on:
            - mysql
      

      This container is based on the official Adminer image. The network and depends_on configuration for this container exactly match what we're using for the blog container.

      However, since we're directing all of the traffic to port 80 on our Docker host directly to the blog container, we need to configure this container differently in order for traffic to make it to our adminer container. The line traefik.frontend.rule=Host:db-admin.your_domain tells Traefik to examine the host requested. If it matches the pattern of db-admin.your_domain, Traefik will route the traffic to the adminer container.

      At this point, docker-compose.yml should have the following contents:

      docker-compose.yml

      version: "3"
      
      networks:
        web:
          external: true
        internal:
          external: false
      
      services:
        blog:
          image: wordpress:4.9.8-apache
          environment:
            WORDPRESS_DB_PASSWORD:
          labels:
            - traefik.backend=blog
            - traefik.frontend.rule=Host:blog.your_domain
            - traefik.docker.network=web
            - traefik.port=80
          networks:
            - internal
            - web
          depends_on:
            - mysql
        mysql:
          image: mysql:5.7
          environment:
            MYSQL_ROOT_PASSWORD:
          networks:
            - internal
          labels:
            - traefik.enable=false
        adminer:
          image: adminer:4.6.3-standalone
          labels:
            - traefik.backend=adminer
            - traefik.frontend.rule=Host:db-admin.your_domain
            - traefik.docker.network=web
            - traefik.port=8080
          networks:
            - internal
            - web
          depends_on:
            - mysql
      

      Save the file and exit the text editor.

      Next, set values in your shell for the WORDPRESS_DB_PASSWORD and MYSQL_ROOT_PASSWORD variables before you start your containers:

      • export WORDPRESS_DB_PASSWORD=secure_database_password
      • export MYSQL_ROOT_PASSWORD=secure_database_password

      Substitute secure_database_password with your desired database password. Remember to use the same password for both WORDPRESS_DB_PASSWORD and MYSQL_ROOT_PASSWORD.
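Because WordPress connects as MySQL's root user in this setup, a small pre-flight check (optional, not part of the original steps) can catch a mismatch before the containers start:

```shell
export WORDPRESS_DB_PASSWORD=secure_database_password   # substitute your own
export MYSQL_ROOT_PASSWORD=secure_database_password

# Fail early if the two values are unset or different.
if [ -n "$MYSQL_ROOT_PASSWORD" ] && [ "$MYSQL_ROOT_PASSWORD" = "$WORDPRESS_DB_PASSWORD" ]; then
  echo "database passwords are consistent"
else
  echo "database passwords differ or are unset" >&2
  exit 1
fi
```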

With these variables set, run the containers using docker-compose:

      • docker-compose up -d

      Now take another look at the Traefik admin dashboard. You'll see that there is now a backend and a frontend for the two exposed servers:

      Populated Traefik dashboard

      Navigate to blog.your_domain, substituting your_domain with your domain. You'll be redirected to a TLS connection and can now complete the WordPress setup:

      WordPress setup screen

      Now access Adminer by visiting db-admin.your_domain in your browser, again substituting your_domain with your domain. The mysql container isn't exposed to the outside world, but the adminer container has access to it through the internal Docker network that they share using the mysql container name as a host name.

      On the Adminer login screen, use the username root, use mysql for the server, and use the value you set for MYSQL_ROOT_PASSWORD for the password. Once logged in, you'll see the Adminer user interface:

      Adminer connected to the MySQL database

      Both sites are now working, and you can use the dashboard at monitor.your_domain to keep an eye on your applications.

      Conclusion

      In this tutorial, you configured Traefik to proxy requests to other applications in Docker containers.

      Traefik's declarative configuration at the application container level makes it easy to configure more services, and there's no need to restart the traefik container when you add new applications to proxy traffic to since Traefik notices the changes immediately through the Docker socket file it's monitoring.

      To learn more about what you can do with Traefik, head over to the official Traefik documentation.




      How To Provision and Manage Remote Docker Hosts with Docker Machine on Ubuntu 18.04


      Introduction

Docker Machine is a tool that makes it easy to provision and manage multiple Docker hosts remotely from your personal computer. Such servers are commonly referred to as Dockerized hosts and are used to run Docker containers.

      While Docker Machine can be installed on a local or a remote system, the most common approach is to install it on your local computer (native installation or virtual machine) and use it to provision Dockerized remote servers.

      Though Docker Machine can be installed on most Linux distributions as well as macOS and Windows, in this tutorial, you’ll install it on your local machine running Ubuntu 18.04 and use it to provision Dockerized DigitalOcean Droplets. If you don’t have a local Ubuntu 18.04 machine, you can follow these instructions on any Ubuntu 18.04 server.

      Prerequisites

      To follow this tutorial, you will need the following:

      • A local machine or server running Ubuntu 18.04 with Docker installed. See How To Install and Use Docker on Ubuntu 18.04 for instructions.
      • A DigitalOcean API token. If you don’t have one, generate it using this guide. When you generate a token, be sure that it has read-write scope. That is the default, so if you do not change any options while generating it, it will have read-write capabilities.

      Step 1 — Installing Docker Machine

      In order to use Docker Machine, you must first install it locally. On Ubuntu, this means downloading a handful of scripts from the official Docker repository on GitHub.

      To download and install the Docker Machine binary, type:

      • wget https://github.com/docker/machine/releases/download/v0.15.0/docker-machine-$(uname -s)-$(uname -m)

      The name of the file should be docker-machine-Linux-x86_64. Rename it to docker-machine to make it easier to work with:
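The filename comes from substituting your kernel name and machine architecture into the download URL; you can preview what wget will save by echoing the same expansion:

```shell
# uname -s reports the kernel name (Linux) and uname -m the machine
# architecture (x86_64 on most servers), so the downloaded file is named:
name="docker-machine-$(uname -s)-$(uname -m)"
echo "$name"
```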

      • mv docker-machine-Linux-x86_64 docker-machine

Make it executable:

      • chmod +x docker-machine

      Move or copy it to the /usr/local/bin directory so that it will be available as a system command:

      • sudo mv docker-machine /usr/local/bin

Check the version, which will indicate that it's properly installed:

      • docker-machine version

      You'll see output similar to this, displaying the version number and build:

      Output

      docker-machine version 0.15.0, build b48dc28d

      Docker Machine is installed. Let's install some additional helper tools to make Docker Machine easier to work with.

      Step 2 — Installing Additional Docker Machine Scripts

      There are three Bash scripts in the Docker Machine GitHub repository you can install to make working with the docker and docker-machine commands easier. When installed, these scripts provide command completion and prompt customization.

      In this step, you'll install these three scripts into the /etc/bash_completion.d directory on your local machine by downloading them directly from the Docker Machine GitHub repository.

      Note: Before downloading and installing a script from the internet in a system-wide location, you should inspect the script's contents first by viewing the source URL in your browser.

The first script allows you to see the active machine in your prompt. This comes in handy when you are working with and switching between multiple Dockerized machines. The script is called docker-machine-prompt.bash. Download it:

      • sudo wget https://raw.githubusercontent.com/docker/machine/master/contrib/completion/bash/docker-machine-prompt.bash -O /etc/bash_completion.d/docker-machine-prompt.bash

      To complete the installation of this file, you'll have to modify the value for the PS1 variable in your .bashrc file. The PS1 variable is a special shell variable used to modify the Bash command prompt. Open ~/.bashrc in your editor:

      Within that file, there are three lines that begin with PS1. They should look just like these:

      ~/.bashrc

PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '

      ...

      PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '

      ...

      PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
      

      For each line, insert $(__docker_machine_ps1 " [%s]") near the end, as shown in the following example:

      ~/.bashrc

PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]$(__docker_machine_ps1 " [%s]")\$ '

      ...

      PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w$(__docker_machine_ps1 " [%s]")\$ '

      ...

      PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$(__docker_machine_ps1 " [%s]")$PS1"
      

      Save and close the file.

      The second script is called docker-machine-wrapper.bash. It adds a use subcommand to the docker-machine command, making it significantly easier to switch between Docker hosts. To download it, type:

      • sudo wget https://raw.githubusercontent.com/docker/machine/master/contrib/completion/bash/docker-machine-wrapper.bash -O /etc/bash_completion.d/docker-machine-wrapper.bash

      The third script is called docker-machine.bash. It adds bash completion for docker-machine commands. Download it using:

      • sudo wget https://raw.githubusercontent.com/docker/machine/master/contrib/completion/bash/docker-machine.bash -O /etc/bash_completion.d/docker-machine.bash

      To apply the changes you've made so far, close, then reopen your terminal. If you're logged into the machine via SSH, exit the session and log in again, and you'll have command completion for the docker and docker-machine commands.

      Let's test things out by creating a new Docker host with Docker Machine.

      Step 3 — Provisioning a Dockerized Host Using Docker Machine

      Now that you have Docker and Docker Machine running on your local machine, you can provision a Dockerized Droplet on your DigitalOcean account using Docker Machine's docker-machine create command. If you've not done so already, assign your DigitalOcean API token to an environment variable:

      • export DOTOKEN=your-api-token

      NOTE: This tutorial uses DOTOKEN as the bash variable for the DO API token. The variable name does not have to be DOTOKEN, and it does not have to be in all caps.

To make the variable permanent, put it in your ~/.bashrc file. This step is optional, but it is necessary if you want the value to persist across shell sessions.

Open that file with nano:

      • nano ~/.bashrc

      Add this line to the file:

      ~/.bashrc

      export DOTOKEN=your-api-token
      

To activate the variable in the current terminal session, type:

      • source ~/.bashrc

      To call the docker-machine create command successfully you must specify the driver you wish to use, as well as a machine name. The driver is the adapter for the infrastructure you're going to create. There are drivers for cloud infrastructure providers, as well as drivers for various virtualization platforms.

      We'll use the digitalocean driver. Depending on the driver you select, you'll need to provide additional options to create a machine. The digitalocean driver requires the API token (or the variable that evaluates to it) as its argument, along with the name for the machine you want to create.

      To create your first machine, type this command to create a DigitalOcean Droplet called docker-01:

      • docker-machine create --driver digitalocean --digitalocean-access-token $DOTOKEN docker-01

      You'll see this output as Docker Machine creates the Droplet:

      Output

...
      Installing Docker...
      Copying certs to the local machine directory...
      Copying certs to the remote machine...
      Setting Docker configuration on the remote daemon...
      Checking connection to Docker...
      Docker is up and running!
      To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env docker-01

      Docker Machine creates an SSH key pair for the new host so it can access the server remotely. The Droplet is provisioned with an operating system and Docker is installed. When the command is complete, your Docker Droplet is up and running.

To see the newly-created machine from the command line, type:

      • docker-machine ls

      The output will be similar to this, indicating that the new Docker host is running:

      Output

NAME        ACTIVE   DRIVER         STATE     URL                         SWARM   DOCKER        ERRORS
      docker-01   -        digitalocean   Running   tcp://209.97.155.178:2376           v18.06.1-ce

      Now let's look at how to specify the operating system when we create a machine.

      Step 4 — Specifying the Base OS and Droplet Options When Creating a Dockerized Host

      By default, the base operating system used when creating a Dockerized host with Docker Machine is supposed to be the latest Ubuntu LTS. However, at the time of this publication, the docker-machine create command is still using Ubuntu 16.04 LTS as the base operating system, even though Ubuntu 18.04 is the latest LTS edition. So if you need to run Ubuntu 18.04 on a recently-provisioned machine, you'll have to specify Ubuntu along with the desired version by passing the --digitalocean-image flag to the docker-machine create command.

      For example, to create a machine using Ubuntu 18.04, type:

      • docker-machine create --driver digitalocean --digitalocean-image ubuntu-18-04-x64 --digitalocean-access-token $DOTOKEN docker-ubuntu-1804

      You're not limited to a version of Ubuntu. You can create a machine using any operating system supported on DigitalOcean. For example, to create a machine using Debian 8, type:

      • docker-machine create --driver digitalocean --digitalocean-image debian-8-x64 --digitalocean-access-token $DOTOKEN docker-debian

To provision a Dockerized host using CentOS 7 as the base OS, specify centos-7-0-x64 as the image name, like so:

      • docker-machine create --driver digitalocean --digitalocean-image centos-7-0-x64 --digitalocean-access-token $DOTOKEN docker-centos7

      The base operating system is not the only choice you have. You can also specify the size of the Droplet. By default, it is the smallest Droplet, which has 1 GB of RAM, a single CPU, and a 25 GB SSD.

      Find the size of the Droplet you want to use by looking up the corresponding slug in the DigitalOcean API documentation.

      For example, to provision a machine with 2 GB of RAM, two CPUs, and a 60 GB SSD, use the slug s-2vcpu-2gb:

      • docker-machine create --driver digitalocean --digitalocean-size s-2vcpu-2gb --digitalocean-access-token $DOTOKEN docker-03

      To see all the flags specific to creating a Docker Machine using the DigitalOcean driver, type:

      • docker-machine create --driver digitalocean -h

      Tip: If you refresh the Droplet page of your DigitalOcean dashboard, you will see the new machines you created using the docker-machine command.

      Now let's explore some of the other Docker Machine commands.

      Step 5 — Executing Additional Docker Machine Commands

      You've seen how to provision a Dockerized host using the create subcommand, and how to list the hosts available to Docker Machine using the ls subcommand. In this step, you'll learn a few more useful subcommands.

      To obtain detailed information about a Dockerized host, use the inspect subcommand, like so:

      • docker-machine inspect docker-01

      The output includes lines like the ones in the following output. The Image line reveals the version of the Linux distribution used and the Size line indicates the size slug:

      Output

...
      {
          "ConfigVersion": 3,
          "Driver": {
              "IPAddress": "203.0.113.71",
              "MachineName": "docker-01",
              "SSHUser": "root",
              "SSHPort": 22,
              ...
              "Image": "ubuntu-16-04-x64",
              "Size": "s-1vcpu-1gb",
              ...
          },
      ...
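If you only need one field from that JSON, docker-machine can extract it natively with a Go template, for example docker-machine inspect --format '{{.Driver.IPAddress}}' docker-01. The sketch below shows the same idea against a saved copy of the output, with python3 as a stand-in parser; the JSON is a trimmed, hypothetical sample:

```shell
# Trimmed, hypothetical inspect output saved to a file:
cat > inspect.json <<'EOF'
{"ConfigVersion": 3, "Driver": {"IPAddress": "203.0.113.71", "MachineName": "docker-01"}}
EOF

# Pull out a single nested field:
ip=$(python3 -c 'import json; print(json.load(open("inspect.json"))["Driver"]["IPAddress"])')
echo "$ip"
```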

      To print the connection configuration for a host, type:

      • docker-machine config docker-01

      The output will be similar to this:

      Output

--tlsverify
      --tlscacert="/home/kamit/.docker/machine/certs/ca.pem"
      --tlscert="/home/kamit/.docker/machine/certs/cert.pem"
      --tlskey="/home/kamit/.docker/machine/certs/key.pem"
      -H=tcp://203.0.113.71:2376

      The last line in the output of the docker-machine config command reveals the IP address of the host, but you can also get that piece of information by typing:

      • docker-machine ip docker-01
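      Because docker-machine ip prints just the address, it is convenient for scripting. As an illustrative sketch, the same IP can also be extracted from the `docker-machine config` output shown earlier using plain parameter expansion (the sample line below is copied from that output):

      ```shell
      # Sample line from `docker-machine config` output (from above):
      config_line='-H=tcp://203.0.113.71:2376'

      # Strip the leading "-H=tcp://" prefix, then the trailing ":port".
      host_ip=${config_line#-H=tcp://}   # -> 203.0.113.71:2376
      host_ip=${host_ip%:*}              # -> 203.0.113.71

      echo "$host_ip"
      ```

      In practice you would simply capture `docker-machine ip docker-01` into a variable; the expansion above is only useful when you already have the config output at hand.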

      If you need to power down a remote host, you can use docker-machine to stop it:

      • docker-machine stop docker-01

      Verify that it is stopped by listing your machines:

      • docker-machine ls

      The output shows that the status of the machine has changed:

      Output

      NAME        ACTIVE   DRIVER         STATE     URL   SWARM   DOCKER    ERRORS
      docker-01   -        digitalocean   Stopped                 Unknown

      To start it again, use the start subcommand:

      • docker-machine start docker-01

      Then review its status again:

      • docker-machine ls

      You will see that the STATE is now set to Running for the host:

      Output

      NAME        ACTIVE   DRIVER         STATE     URL                       SWARM   DOCKER        ERRORS
      docker-01   -        digitalocean   Running   tcp://203.0.113.71:2376           v18.06.1-ce

      Next let's look at how to interact with the remote host using SSH.

      Step 6 — Executing Commands on a Dockerized Host via SSH

      At this point, you've been getting information about your machines, but you can do more than that. For example, you can execute native Linux commands on a Docker host by using the ssh subcommand of docker-machine from your local system. This section explains how to perform ssh commands via docker-machine as well as how to open an SSH session to a Dockerized host.

      Assuming that you've provisioned a machine with Ubuntu as the operating system, execute the following command from your local system to update the package database on the Docker host:

      • docker-machine ssh docker-01 apt-get update

      You can even apply available updates using:

      • docker-machine ssh docker-01 apt-get upgrade

      Not sure what kernel your remote Docker host is using? Type the following:

      • docker-machine ssh docker-01 uname -r
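      If you manage several hosts, you can wrap the ssh subcommand in a small loop. The sketch below is only illustrative: the host names are hypothetical, and the DM variable exists so the docker-machine binary can be swapped out (for example, for a dry run):

      ```shell
      # DM defaults to docker-machine but can be overridden for dry runs.
      DM=${DM:-docker-machine}

      # Run one command on every host name passed as an argument.
      run_on_hosts() {
          cmd=$1; shift
          for host in "$@"; do
              "$DM" ssh "$host" "$cmd"
          done
      }

      # Example usage (hypothetical host names):
      # run_on_hosts "uname -r" docker-01 docker-02
      ```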

      Finally, you can log in to the remote host with the docker-machine ssh command:

      docker-machine ssh docker-01
      

      You'll be logged in as the root user and you'll see something similar to the following:

      Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-131-generic x86_64)
      
       * Documentation:  https://help.ubuntu.com
       * Management:     https://landscape.canonical.com
       * Support:        https://ubuntu.com/advantage
      
        Get cloud support with Ubuntu Advantage Cloud Guest:
          http://www.ubuntu.com/business/services/cloud
      
      14 packages can be updated.
      10 updates are security updates.
      

      Log out by typing exit to return to your local machine.

      Next, we'll direct Docker's commands at our remote host.

      Step 7 — Activating a Dockerized Host

      Activating a Docker host connects your local Docker client to that system, which makes it possible to run normal docker commands on the remote system.

      First, use Docker Machine to create a new Docker host called docker-ubuntu using Ubuntu 18.04:

      • docker-machine create --driver digitalocean --digitalocean-image ubuntu-18-04-x64 --digitalocean-access-token $DOTOKEN docker-ubuntu

      To activate a Docker host, type the following command:

      • eval $(docker-machine env machine-name)

      Alternatively, you can activate it by using this command:

      • docker-machine use machine-name

      Tip: When working with multiple Docker hosts, the docker-machine use command is the easiest method of switching from one to the other.

      After typing either of these commands, your prompt will change to indicate that your Docker client is pointing to the remote Docker host. The name of the active host will appear at the end of the prompt:

      username@localmachine:~ [docker-01]$
      

      Now any docker command you type at this command prompt will be executed on that remote host.
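      To see why the eval form works: docker-machine env prints shell export statements, and eval executes them in your current shell so that the docker client reads them. A simulated sketch (the values below are illustrative, not real output from your machine):

      ```shell
      # Simulated `docker-machine env` output (illustrative values):
      env_output='export DOCKER_TLS_VERIFY="1"
      export DOCKER_HOST="tcp://203.0.113.71:2376"
      export DOCKER_CERT_PATH="/home/user/.docker/machine/machines/docker-01"
      export DOCKER_MACHINE_NAME="docker-01"'

      # eval runs those export statements in the current shell, so every
      # subsequent `docker` command targets the remote host instead of
      # the local daemon.
      eval "$env_output"

      echo "$DOCKER_HOST"
      ```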

      Execute docker-machine ls again:

      • docker-machine ls

      You'll see an asterisk under the ACTIVE column for docker-01:

      Output

      NAME        ACTIVE   DRIVER         STATE     URL                       SWARM   DOCKER        ERRORS
      docker-01   *        digitalocean   Running   tcp://203.0.113.71:2376           v18.06.1-ce

      To exit from the remote Docker host, type the following:

      • docker-machine use -u

      Your prompt will no longer show the active host.

      Now let's create containers on the remote machine.

      Step 8 — Creating Docker Containers on a Remote Dockerized Host

      So far, you have provisioned a Dockerized Droplet on your DigitalOcean account and you've activated it — that is, your Docker client is pointing to it. The next logical step is to spin up containers on it. As an example, let's try running the official Nginx container.

      Use docker-machine use to select your remote machine:

      • docker-machine use docker-01

      Now execute this command to run an Nginx container on that machine:

      • docker run -d -p 8080:80 --name httpserver nginx

      In this command, we're mapping port 80 in the Nginx container to port 8080 on the Dockerized host so that we can access the default Nginx page from anywhere.
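      The host:container notation passed to -p can be read mechanically. A small sketch that splits the mapping used above into its two halves:

      ```shell
      mapping='8080:80'   # host_port:container_port, as passed to `docker run -p`

      host_port=${mapping%%:*}       # part before the colon -> 8080
      container_port=${mapping##*:}  # part after the colon  -> 80

      echo "requests to host port $host_port reach container port $container_port"
      ```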

      Once the container starts, you will be able to access the default Nginx page by pointing your web browser to http://docker_machine_ip:8080.

      While the Docker host is still activated (as seen by its name in the prompt), you can list the images on that host:

      • docker images

      The output includes the Nginx image you just used:

      Output

      REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
      nginx        latest   71c43202b8ac   3 hours ago   109MB

      You can also list the active or running containers on the host:

      • docker ps

      If the Nginx container you ran in this step is the only active container, the output will look like this:

      Output

      CONTAINER ID   IMAGE   COMMAND                  CREATED              STATUS              PORTS                  NAMES
      d3064c237372   nginx   "nginx -g 'daemon of…"   About a minute ago   Up About a minute   0.0.0.0:8080->80/tcp   httpserver

      If you intend to create containers on a remote machine, your Docker client must be pointing to it — that is, it must be the active machine in the terminal that you're using. Otherwise you'll be creating the container on your local machine. Again, let your command prompt be your guide.

      Docker Machine can create and manage remote hosts, and it can also remove them.

      Step 9 – Removing Docker Hosts

      You can use Docker Machine to remove a Docker host you've created. Use the docker-machine rm command to remove the docker-01 host you created:

      • docker-machine rm docker-01

      The Droplet is deleted along with the SSH key created for it. List the hosts again:

      • docker-machine ls

      This time, you won't see the docker-01 host listed in the output. And if you've only created one host, you won't see any output at all.

      Be sure to execute the command docker-machine use -u to point your local Docker daemon back to your local machine.

      Step 10 — Disabling Crash Reporting (Optional)

      By default, whenever an attempt to provision a Dockerized host using Docker Machine fails, or Docker Machine crashes, some diagnostic information is sent to a Docker account on Bugsnag. If you're not comfortable with this, you can disable the reporting by creating an empty file called no-error-report in your local computer's .docker/machine directory.

      To create the file, type:

      • touch ~/.docker/machine/no-error-report

      Check the file for error messages if provisioning fails or Docker Machine crashes.

      Conclusion

      You've installed Docker Machine and used it to provision multiple Docker hosts on DigitalOcean remotely from your local system. From here you should be able to provision as many Dockerized hosts on your DigitalOcean account as you need.

      For more on Docker Machine, visit the official documentation page. The three Bash scripts downloaded in this tutorial are hosted on this GitHub page.




      How To Install and Secure OpenFaaS Using Docker Swarm on Ubuntu 16.04


      Introduction

      Serverless architecture hides server instances from the developer and usually exposes an API that allows developers to run their applications in the cloud. This approach helps developers deploy applications quickly, as they can leave provisioning and maintaining instances to the appropriate DevOps teams. It also reduces infrastructure costs, since with the appropriate tooling you can scale your instances per demand.

      Applications that run on serverless platforms are called serverless functions. A function is containerized, executable code that’s used to perform specific operations. Containerizing applications ensures that you can reproduce a consistent environment on many machines, enabling updating and scaling.

      OpenFaaS is a free and open-source framework for building and hosting serverless functions. With official support for both Docker Swarm and Kubernetes, it lets you deploy your applications using the powerful API, command-line interface, or Web UI. It comes with built-in metrics provided by Prometheus and supports auto-scaling on demand, as well as scaling from zero.

      In this tutorial, you’ll set up and use OpenFaaS with Docker Swarm running on Ubuntu 16.04, and secure its Web UI and API by setting up Traefik with Let’s Encrypt. This ensures secure communication between nodes in the cluster, as well as between OpenFaaS and its operators.

      Prerequisites

      To follow this tutorial, you’ll need:

      • Ubuntu 16.04 running on your local machine. You can use other distributions and operating systems, but make sure you use the appropriate OpenFaaS scripts for your operating system and install all of the dependencies listed in these prerequisites.
      • git, curl, and jq installed on your local machine. You’ll use git to clone the OpenFaaS repository, curl to test the API, and jq to transform raw JSON responses from the API to human-readable JSON. To install the required dependencies for this setup, use the following commands: sudo apt-get update && sudo apt-get install git curl jq
      • Docker installed, following Steps 1 and 2 of How To Install and Use Docker on Ubuntu 16.04.
      • A Docker Hub account. To deploy functions to OpenFaaS, they will need to be published on a public container registry. We’ll use Docker Hub for this tutorial, since it’s both free and widely used. Be sure to authenticate with Docker on your local machine by using the docker login command.
      • Docker Machine installed, following How To Provision and Manage Remote Docker Hosts with Docker Machine on Ubuntu 16.04.
      • A DigitalOcean personal access token. To create a token, follow these instructions.
      • A Docker Swarm cluster of 3 nodes, provisioned by following How to Create a Cluster of Docker Containers with Docker Swarm and DigitalOcean on Ubuntu 16.04.
      • A fully registered domain name with an A record pointing to one of the instances in the Docker Swarm. Throughout the tutorial, you’ll see example.com as an example domain. You should replace this with your own domain, which you can either purchase on Namecheap, or get for free on Freenom. You can also use a different domain registrar of your choice.

      Step 1 — Downloading OpenFaaS and Installing the OpenFaaS CLI

      To deploy OpenFaaS to your Docker Swarm, you will need to download the deployment manifests and scripts. The easiest way to obtain them is to clone the official OpenFaaS repository and check out the appropriate tag, which represents an OpenFaaS release.

      In addition to cloning the repository, you’ll also install the FaaS CLI, a powerful command-line utility that you can use to manage and deploy new functions from your terminal. It provides templates for creating your own functions in most major programming languages. In Step 7, you’ll use it to create a Python function and deploy it on OpenFaaS.

      For this tutorial, you’ll deploy OpenFaaS v0.8.9. While the steps for deploying other versions should be similar, make sure to check out the project changelog to ensure there are no breaking changes.

      First, navigate to your home directory and run the following command to clone the repository to the ~/faas directory:

      • cd ~
      • git clone https://github.com/openfaas/faas.git

      Navigate to the newly-created ~/faas directory:

      • cd ~/faas

      When you clone the repository, you'll get files from the master branch, which contains the latest changes. Because breaking changes can land on the master branch, it's not recommended for use in production. Instead, let's check out the 0.8.9 tag:

      • git checkout 0.8.9

      The output contains a message about the successful checkout and a warning about committing changes to this branch:

      Output

      Note: checking out '0.8.9'.

      You are in 'detached HEAD' state. You can look around, make experimental
      changes and commit them, and you can discard any commits you make in this
      state without impacting any branches by performing another checkout.

      If you want to create a new branch to retain commits you create, you may
      do so (now or later) by using -b with the checkout command again. Example:

        git checkout -b <new-branch-name>

      HEAD is now at 8f0d2d1 Expose scale-function endpoint

      If you see any errors, make sure to resolve them by following the on-screen instructions before continuing.

      With the OpenFaaS repository downloaded, complete with the necessary manifest files, let's proceed to installing the FaaS CLI.

      The easiest way to install the FaaS CLI is to use the official script. In your terminal, navigate to your home directory and download the script using the following command:

      • cd ~
      • curl -sSL -o faas-cli.sh https://cli.openfaas.com

      This will download the faas-cli.sh script to your home directory. Before executing the script, it's a good idea to check its contents:

      • less faas-cli.sh

      You can exit the preview by pressing q. Once you have verified the content of the script, proceed with the installation by giving the script executable permissions and running it. Execute the script as root so that the binary is automatically copied to your PATH:

      • chmod +x faas-cli.sh
      • sudo ./faas-cli.sh

      The output contains information about the installation progress and the CLI version that you've installed:

      Output

      x86_64
      Downloading package https://github.com/openfaas/faas-cli/releases/download/0.6.17/faas-cli as /tmp/faas-cli
      Download complete.

      Running as root - Attempting to move faas-cli to /usr/local/bin
      New version of faas-cli installed to /usr/local/bin
      Creating alias 'faas' for 'faas-cli'.

      (OpenFaaS ASCII art logo)

      CLI:
       commit:  b5597294da6dd98457434fafe39054c993a5f7e7
       version: 0.6.17

      If you see an error, make sure to resolve it by following the on-screen instructions before continuing with the tutorial.

      At this point, you have the FaaS CLI installed. To learn more about the commands you can use, execute the CLI without any arguments:

      • faas-cli

      The output shows available commands and flags:

      Output

      (OpenFaaS ASCII art logo)

      Manage your OpenFaaS functions from the command line

      Usage:
        faas-cli [flags]
        faas-cli [command]

      Available Commands:
        build       Builds OpenFaaS function containers
        cloud       OpenFaaS Cloud commands
        deploy      Deploy OpenFaaS functions
        help        Help about any command
        invoke      Invoke an OpenFaaS function
        list        List OpenFaaS functions
        login       Log in to OpenFaaS gateway
        logout      Log out from OpenFaaS gateway
        new         Create a new template in the current folder with the name given as name
        push        Push OpenFaaS functions to remote registry (Docker Hub)
        remove      Remove deployed OpenFaaS functions
        store       OpenFaaS store commands
        template    Downloads templates from the specified github repo
        version     Display the clients version information

      Flags:
            --filter string   Wildcard to match with function names in YAML file
        -h, --help            help for faas-cli
            --regex string    Regex to match with function names in YAML file
        -f, --yaml string     Path to YAML file describing function(s)

      Use "faas-cli [command] --help" for more information about a command.

      You have now successfully obtained the OpenFaaS manifests and installed the FaaS CLI, which you can use to manage your OpenFaaS instance from your terminal.

      The ~/faas directory contains files from the 0.8.9 release, which means you can now deploy OpenFaaS to your Docker Swarm. Before doing so, let's modify the deployment manifest file to include Traefik, which will secure your OpenFaaS setup by setting up Let's Encrypt.

      Step 2 — Configuring Traefik

      Traefik is a Docker-aware reverse proxy that comes with SSL support provided by Let's Encrypt. SSL protocol ensures that you communicate with the Swarm cluster securely by encrypting the data you send and receive between nodes.

      To use Traefik with OpenFaaS, you need to modify the OpenFaaS deployment manifest to include Traefik and tell OpenFaaS to use Traefik instead of directly exposing its services to the internet.

      Navigate back to the ~/faas directory and open the OpenFaaS deployment manifest in a text editor:

      • cd ~/faas
      • nano ~/faas/docker-compose.yml

      Note: The Docker Compose manifest file uses YAML formatting, which strictly forbids tabs and requires two spaces for indentation. The manifest will fail to deploy if the file is incorrectly formatted.

      The OpenFaaS deployment comprises several services, defined under the services directive, that provide the dependencies needed to run OpenFaaS, the OpenFaaS API and Web UI, and Prometheus and AlertManager (for handling metrics).

      At the beginning of the services section, add a new service called traefik, which uses the traefik:v1.6 image for the deployment:

      ~/faas/docker-compose.yml

      version: "3.3"
      services:
          traefik:
              image: traefik:v1.6
          gateway:
               ...
      

      The Traefik image is coming from the Traefik Docker Hub repository, where you can find a list of all available images.

      Next, let's instruct Docker to run Traefik using the command directive. This will run Traefik, configure it to work with Docker Swarm, and provide SSL using Let's Encrypt. The following flags will configure Traefik:

      • --docker.*: These flags tell Traefik to use Docker and specify that it's running in a Docker Swarm cluster.
      • --web=true: This flag enables Traefik's Web UI.
      • --defaultEntryPoints and --entryPoints: These flags define entry points and protocols to be used. In our case this includes HTTP on port 80 and HTTPS on port 443.
      • --acme.*: These flags tell Traefik to use ACME to generate Let's Encrypt certificates to secure your OpenFaaS cluster with SSL.

      Make sure to replace the example.com domain placeholders in the --acme.domains and --acme.email flags with the domain you're going to use to access OpenFaaS. You can specify multiple domains by separating them with a comma and space. The email address is for SSL notifications and alerts, including certificate expiry alerts. In this case, Traefik will handle renewing certificates automatically, so you can ignore expiry alerts.

      Add the following block of code below the image directive, and above gateway:

      ~/faas/docker-compose.yml

      ...
          traefik:
              image: traefik:v1.6
              command: -c --docker=true
                  --docker.swarmmode=true
                  --docker.domain=traefik
                  --docker.watch=true
                  --web=true
                  --defaultEntryPoints='http,https'
                  --entryPoints='Name:https Address::443 TLS'
                  --entryPoints='Name:http Address::80'
                  --acme=true
                  --acme.entrypoint='https'
                  --acme.httpchallenge=true
                  --acme.httpchallenge.entrypoint='http'
                  --acme.domains='example.com, www.example.com'
                  --acme.email='[email protected]'
                  --acme.ondemand=true
                  --acme.onhostrule=true
                  --acme.storage=/etc/traefik/acme/acme.json
      ...
      

      With the command directive in place, let's tell Traefik what ports to expose to the internet. Traefik uses port 8080 for its Web UI, while OpenFaaS will use port 80 for non-secure communication and port 443 for secure communication.

      Add the following ports directive below the command directive. The port-internet:port-docker notation ensures that the port on the left side is exposed by Traefik to the internet and maps to the container's port on the right side:

      ~/faas/docker-compose.yml

              ...
              command:
                  ...
              ports:
                  - 80:80
                  - 8080:8080
                  - 443:443
              ...
      

      Next, using the volumes directive, mount the Docker socket file from the host running Docker to Traefik. The Docker socket file communicates with the Docker API in order to manage your containers and get details about them, such as number of containers and their IP addresses. You will also mount the volume called acme, which we'll define later in this step.

      The networks directive instructs Traefik to use the functions network, which is deployed along with OpenFaaS. This network ensures that functions can communicate with other parts of the system, including the API.

      The deploy directive instructs Docker to run Traefik only on the Docker Swarm manager node.

      Add the following directives below the ports directive:

      ~/faas/docker-compose.yml

              ...
              volumes:
                  - "/var/run/docker.sock:/var/run/docker.sock"
                  - "acme:/etc/traefik/acme"
              networks:
                  - functions
              deploy:
                  placement:
                      constraints: [node.role == manager]
      

      At this point, the traefik service block should look like this:

      ~/faas/docker-compose.yml

      version: "3.3"
      services:
          traefik:
              image: traefik:v1.6
              command: -c --docker=true
                  --docker.swarmmode=true
                  --docker.domain=traefik
                  --docker.watch=true
                  --web=true
                  --defaultEntryPoints='http,https'
                  --entryPoints='Name:https Address::443 TLS'
                  --entryPoints='Name:http Address::80'            
                  --acme=true
                  --acme.entrypoint='https'
                  --acme.httpchallenge=true
                  --acme.httpchallenge.entrypoint='http'
                  --acme.domains='example.com, www.example.com'
                  --acme.email='[email protected]'
                  --acme.ondemand=true
                  --acme.onhostrule=true
                  --acme.storage=/etc/traefik/acme/acme.json
              ports:
                  - 80:80
                  - 8080:8080
                  - 443:443
              volumes:
                  - "/var/run/docker.sock:/var/run/docker.sock"
                  - "acme:/etc/traefik/acme"
              networks:
                - functions
              deploy:
                placement:
                  constraints: [node.role == manager]
      
          gateway:
              ...
      

      While this configuration ensures that Traefik will be deployed with OpenFaaS, you also need to configure OpenFaaS to work with Traefik. By default, the gateway service is configured to run on port 8080, which overlaps with Traefik.

      The gateway service provides the API gateway you can use to deploy, run, and manage your functions. It handles metrics (via Prometheus) and auto-scaling, and hosts the Web UI.

      Our goal is to expose the gateway service using Traefik instead of exposing it directly to the internet.

      Locate the gateway service, which should look like this:

      ~/faas/docker-compose.yml

      ...
          gateway:
              ports:
                  - 8080:8080
              image: openfaas/gateway:0.8.7
              networks:
                  - functions
              environment:
                  functions_provider_url: "http://faas-swarm:8080/"
                  read_timeout:  "300s"        # Maximum time to read HTTP request
                  write_timeout: "300s"        # Maximum time to write HTTP response
                  upstream_timeout: "300s"     # Maximum duration of upstream function call - should be more than read_timeout and write_timeout
                  dnsrr: "true"               # Temporarily use dnsrr in place of VIP while issue persists on PWD
                  faas_nats_address: "nats"
                  faas_nats_port: 4222
                  direct_functions: "true"    # Functions are invoked directly over the overlay network
                  direct_functions_suffix: ""
                  basic_auth: "${BASIC_AUTH:-true}"
                  secret_mount_path: "/run/secrets/"
                  scale_from_zero: "false"
              deploy:
                  resources:
                      # limits:   # Enable if you want to limit memory usage
                      #     memory: 200M
                      reservations:
                          memory: 100M
                  restart_policy:
                      condition: on-failure
                      delay: 5s
                      max_attempts: 20
                      window: 380s
                  placement:
                      constraints:
                          - 'node.platform.os == linux'
              secrets:
                  - basic-auth-user
                  - basic-auth-password
      ...
      

      Remove the ports directive from the service to avoid exposing the gateway service directly.

      Next, add the following labels directive to the deploy section of the gateway service. This directive exposes the /ui, /system, and /function endpoints on port 8080 over Traefik:

      ~/faas/docker-compose.yml

              ...
              deploy:
                  labels:
                      - traefik.port=8080
                      - traefik.frontend.rule=PathPrefix:/ui,/system,/function
                  resources:
                  ...            
      

      The /ui endpoint exposes the OpenFaaS Web UI, which is covered in Step 6 of this tutorial. The /system endpoint is the API endpoint used to manage OpenFaaS, while the /function endpoint exposes the API endpoints for managing and running functions. Step 5 of this tutorial covers the OpenFaaS API in detail.
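      With that PathPrefix rule in place, all three endpoints live under your gateway's domain rather than on a separate port. The sketch below simply assembles the resulting URLs for a hypothetical example.com deployment (the wordcount function name is an assumption for illustration):

      ```shell
      gateway='https://example.com'                 # hypothetical gateway domain

      ui_url="$gateway/ui/"                          # Web UI, via the /ui prefix
      system_url="$gateway/system/functions"         # management API, via /system
      function_url="$gateway/function/wordcount"     # function invocation, via /function

      echo "$ui_url"
      echo "$system_url"
      echo "$function_url"
      ```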

      After modifications, your gateway service should look like this:

      ~/faas/docker-compose.yml

      ...
          gateway:       
              image: openfaas/gateway:0.8.7
              networks:
                  - functions
              environment:
                  functions_provider_url: "http://faas-swarm:8080/"
                  read_timeout:  "300s"        # Maximum time to read HTTP request
                  write_timeout: "300s"        # Maximum time to write HTTP response
                  upstream_timeout: "300s"     # Maximum duration of upstream function call - should be more than read_timeout and write_timeout
                  dnsrr: "true"               # Temporarily use dnsrr in place of VIP while issue persists on PWD
                  faas_nats_address: "nats"
                  faas_nats_port: 4222
                  direct_functions: "true"    # Functions are invoked directly over the overlay network
                  direct_functions_suffix: ""
                  basic_auth: "${BASIC_AUTH:-true}"
                  secret_mount_path: "/run/secrets/"
                  scale_from_zero: "false"
              deploy:
                  labels:
                      - traefik.port=8080
                      - traefik.frontend.rule=PathPrefix:/ui,/system,/function
                  resources:
                      # limits:   # Enable if you want to limit memory usage
                      #     memory: 200M
                      reservations:
                          memory: 100M
                  restart_policy:
                      condition: on-failure
                      delay: 5s
                      max_attempts: 20
                      window: 380s
                  placement:
                      constraints:
                          - 'node.platform.os == linux'
              secrets:
                  - basic-auth-user
                  - basic-auth-password
      ...
      

      Finally, let's define the acme volume used for storing Let's Encrypt certificates. We can define an empty volume, meaning data will not persist if you destroy the container. If you destroy the container, the certificates will be regenerated the next time you start Traefik.

      Add the following volumes directive on the last line of the file:

      ~/faas/docker-compose.yml

      ...
      volumes:
          acme:
      

      Once you're done, save the file and close your text editor. At this point, you've configured Traefik to protect your OpenFaaS deployment and Docker Swarm. Now you're ready to deploy it along with OpenFaaS on your Swarm cluster.

      Step 3 — Deploying OpenFaaS

      Now that you have prepared the OpenFaaS deployment manifest, you're ready to deploy it and start using OpenFaaS. To deploy, you'll use the deploy_stack.sh script. This script is meant to be used on Linux and macOS operating systems, but in the OpenFaaS directory you can also find appropriate scripts for Windows and ARM systems.

      Before deploying OpenFaaS, you will need to instruct docker-machine to execute Docker commands from the script on one of the machines in the Swarm. For this tutorial, let's use the Swarm manager.

      If you have the docker-machine use command configured, you can use it:

      • docker-machine use node-1

      If not, use the following command:

      • eval $(docker-machine env node-1)

      The deploy_stack.sh script deploys all of the resources required for OpenFaaS to work as expected, including configuration files, network settings, services, and credentials for authorization with the OpenFaaS server.

      Let's execute the script, which will take several minutes to finish deploying:

      • ./deploy_stack.sh

      The output shows a list of resources that are created in the deployment process, as well as the credentials you will use to access the OpenFaaS server and the FaaS CLI command.

      Write down these credentials, as you will need them throughout the tutorial to access the Web UI and the API:

      Output

      Attempting to create credentials for gateway..
      roozmk0y1jkn17372a8v9y63g
      q1odtpij3pbqrmmf8msy3ampl
      [Credentials]
       username: admin
       password: your_openfaas_password
       echo -n your_openfaas_password | faas-cli login --username=admin --password-stdin

      Enabling basic authentication for gateway..

      Deploying OpenFaaS core services
      Creating network func_functions
      Creating config func_alertmanager_config
      Creating config func_prometheus_config
      Creating config func_prometheus_rules
      Creating service func_alertmanager
      Creating service func_traefik
      Creating service func_gateway
      Creating service func_faas-swarm
      Creating service func_nats
      Creating service func_queue-worker
      Creating service func_prometheus

      If you see any errors, follow the on-screen instructions to resolve them before continuing the tutorial.

      Before continuing, let's authenticate the FaaS CLI with the OpenFaaS server using the command provided by the deployment script.

      The script outputted the flags you need to provide to the command, but you will need to add an additional flag, --gateway, with the address of your OpenFaaS server, as the FaaS CLI assumes the gateway server is running on localhost:

      • echo -n your_openfaas_password | faas-cli login --username=admin --password-stdin --gateway https://example.com

      The output contains a message about successful authorization:

      Output

Calling the OpenFaaS server to validate the credentials...
credentials saved for admin https://example.com

      At this point, you have a fully-functional OpenFaaS server deployed on your Docker Swarm cluster, as well as the FaaS CLI configured to use your newly deployed server. Before testing how to use OpenFaaS, let's deploy some sample functions to get started.

      Step 4 — Deploying OpenFaaS Sample Functions

      Initially, OpenFaaS comes without any functions deployed. To start testing and using it, you will need some functions.

      The OpenFaaS project hosts some sample functions, and you can find a list of available functions along with their deployment manifests in the OpenFaaS repository. Some of the sample functions include nodeinfo, for showing information about the node where a function is running, wordcount, for counting the number of words in a passed request, and markdown, for converting passed markdown input to HTML output.

      The stack.yml manifest in the ~/faas directory deploys several sample functions along with the functions mentioned above. You can deploy it using the FaaS CLI.

      Run the following faas-cli command, which takes the path to the stack manifest and the address of your OpenFaaS server:

      • faas-cli deploy -f ~/faas/stack.yml --gateway https://example.com

      The output contains status codes and messages indicating whether or not the deployment was successful:

      Output

Deploying: wordcount.
Deployed. 200 OK.
URL: https://example.com/function/wordcount

Deploying: base64.
Deployed. 200 OK.
URL: https://example.com/function/base64

Deploying: markdown.
Deployed. 200 OK.
URL: https://example.com/function/markdown

Deploying: hubstats.
Deployed. 200 OK.
URL: https://example.com/function/hubstats

Deploying: nodeinfo.
Deployed. 200 OK.
URL: https://example.com/function/nodeinfo

Deploying: echoit.
Deployed. 200 OK.
URL: https://example.com/function/echoit

      If you see any errors, make sure to resolve them by following the on-screen instructions.

      Once the stack deployment is done, list all of the functions to make sure they're deployed and ready to be used:

      • faas-cli list --gateway https://example.com

      The output contains a list of functions, along with their replica numbers and an invocations count:

      Output

Function                      Invocations    Replicas
markdown                      0              1
wordcount                     0              1
base64                        0              1
nodeinfo                      0              1
hubstats                      0              1
echoit                        0              1

      If you don't see your functions here, make sure the faas-cli deploy command executed successfully.

      You can now use the sample OpenFaaS functions to test and demonstrate how to use the API, Web UI, and CLI. In the next step, you'll start by using the OpenFaaS API to list and run functions.

      Step 5 — Using the OpenFaaS API

      OpenFaaS comes with a powerful API that you can use to manage and execute your serverless functions. Let's use Swagger, a tool for architecting, testing, and documenting APIs, to browse the API documentation, and then use the API to list and run functions.

      With Swagger, you can inspect the API documentation to find out what endpoints are available and how you can use them. In the OpenFaaS repository, you can find the Swagger API specification, which can be used with the Swagger editor to convert the specification to human-readable form.

      Navigate your web browser to http://editor.swagger.io/. You should be welcomed with the following screen:

      Swagger Editor Welcome page

      Here you'll find a text editor containing the source code for the sample Swagger specification, and the human-readable API documentation on the right.

      Let's import the OpenFaaS Swagger specification. In the top menu, click on the File button, and then on Import URL:

      Swagger Editor Import URL

      You'll see a pop-up, where you need to enter the address of the Swagger API specification. If you don't see the pop-up, make sure pop-ups are enabled for your web browser.

      In the field, enter the link to the Swagger OpenFaaS API specification: https://raw.githubusercontent.com/openfaas/faas/master/api-docs/swagger.yml

      Swagger Editor Input URL

      After clicking on the OK button, the Swagger editor will show you the API reference for OpenFaaS, which should look like this:

      Swagger Editor OpenFaaS API specification

      On the left side you can see the source of the API reference file, while on the right side you can see a list of endpoints, along with short descriptions. Clicking on an endpoint shows you more details about it, including what parameters it takes, what method it uses, and possible responses:

      Swagger Editor Endpoint details

      Once you know what endpoints are available and what parameters they expect, you can use them to manage your functions.

      Next, you'll use a curl command to communicate with the API, so navigate back to your terminal. The -u flag passes the admin:your_openfaas_password pair that you got in Step 3, while the -X flag defines the request method. You will also pass your endpoint URL, https://example.com/system/functions:

      • curl -u admin:your_openfaas_password -X GET https://example.com/system/functions

      You can see the required method for each endpoint in the API docs.
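
      If you prefer scripted access, the same call can be constructed with Python's standard library. This is only a sketch: the gateway address and password below are the placeholders used throughout this tutorial, and you would substitute your own values.

```python
import base64
import urllib.request

# Placeholders from this tutorial: substitute your own gateway address
# and the password printed by the deployment script.
GATEWAY = "https://example.com"
CREDENTIALS = "admin:your_openfaas_password"

# curl's -u flag becomes an HTTP Basic Authorization header: the
# base64-encoded "username:password" pair.
token = base64.b64encode(CREDENTIALS.encode()).decode()
request = urllib.request.Request(
    GATEWAY + "/system/functions",
    headers={"Authorization": "Basic " + token},
    method="GET",
)

print(request.get_method())  # GET
# Actually sending the request would be:
#   urllib.request.urlopen(request).read()
```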

      In Step 4, you deployed several sample functions, which should appear in the output:

      Output

      [{"name":"base64","image":"functions/alpine:latest","invocationCount":0,"replicas":1,"envProcess":"base64","availableReplicas":0,"labels":{"com.openfaas.function":"base64","function":"true"}},{"name":"nodeinfo","image":"functions/nodeinfo:latest","invocationCount":0,"replicas":1,"envProcess":"","availableReplicas":0,"labels":{"com.openfaas.function":"nodeinfo","function":"true"}},{"name":"hubstats","image":"functions/hubstats:latest","invocationCount":0,"replicas":1,"envProcess":"","availableReplicas":0,"labels":{"com.openfaas.function":"hubstats","function":"true"}},{"name":"markdown","image":"functions/markdown-render:latest","invocationCount":0,"replicas":1,"envProcess":"","availableReplicas":0,"labels":{"com.openfaas.function":"markdown","function":"true"}},{"name":"echoit","image":"functions/alpine:latest","invocationCount":0,"replicas":1,"envProcess":"cat","availableReplicas":0,"labels":{"com.openfaas.function":"echoit","function":"true"}},{"name":"wordcount","image":"functions/alpine:latest","invocationCount":0,"replicas":1,"envProcess":"wc","availableReplicas":0,"labels":{"com.openfaas.function":"wordcount","function":"true"}}]

      If you don't see output that looks like this, or if you see an error, follow the on-screen instructions to resolve the problem before continuing with the tutorial. Make sure you're sending the request to the correct endpoint using the recommended method and the right credentials. You can also check the logs for the gateway service using the following command:

      • docker service logs func_gateway

      By default, the API response to the curl call returns raw JSON without new lines, which is not human-readable. To parse it, pipe curl's response to the jq utility, which will convert the JSON to human-readable form:

      • curl -u admin:your_openfaas_password -X GET https://example.com/system/functions | jq

      The output is now in human-readable form. You can see the function name, which you can use to manage and invoke functions with the API, the number of invocations, as well as information such as labels and number of replicas, relevant to Docker:

      Output

[
  {
    "name": "base64",
    "image": "functions/alpine:latest",
    "invocationCount": 0,
    "replicas": 1,
    "envProcess": "base64",
    "availableReplicas": 0,
    "labels": {
      "com.openfaas.function": "base64",
      "function": "true"
    }
  },
  {
    "name": "nodeinfo",
    "image": "functions/nodeinfo:latest",
    "invocationCount": 0,
    "replicas": 1,
    "envProcess": "",
    "availableReplicas": 0,
    "labels": {
      "com.openfaas.function": "nodeinfo",
      "function": "true"
    }
  },
  {
    "name": "hubstats",
    "image": "functions/hubstats:latest",
    "invocationCount": 0,
    "replicas": 1,
    "envProcess": "",
    "availableReplicas": 0,
    "labels": {
      "com.openfaas.function": "hubstats",
      "function": "true"
    }
  },
  {
    "name": "markdown",
    "image": "functions/markdown-render:latest",
    "invocationCount": 0,
    "replicas": 1,
    "envProcess": "",
    "availableReplicas": 0,
    "labels": {
      "com.openfaas.function": "markdown",
      "function": "true"
    }
  },
  {
    "name": "echoit",
    "image": "functions/alpine:latest",
    "invocationCount": 0,
    "replicas": 1,
    "envProcess": "cat",
    "availableReplicas": 0,
    "labels": {
      "com.openfaas.function": "echoit",
      "function": "true"
    }
  },
  {
    "name": "wordcount",
    "image": "functions/alpine:latest",
    "invocationCount": 0,
    "replicas": 1,
    "envProcess": "wc",
    "availableReplicas": 0,
    "labels": {
      "com.openfaas.function": "wordcount",
      "function": "true"
    }
  }
]
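
      The response is also easy to work with programmatically. As a quick sketch, the following parses a trimmed sample of the JSON above (only a few fields kept) and prints a summary similar to what faas-cli list shows:

```python
import json

# A trimmed sample of the /system/functions response shown above.
response_body = """
[
  {"name": "base64", "image": "functions/alpine:latest", "invocationCount": 0, "replicas": 1},
  {"name": "echoit", "image": "functions/alpine:latest", "invocationCount": 0, "replicas": 1},
  {"name": "wordcount", "image": "functions/alpine:latest", "invocationCount": 0, "replicas": 1}
]
"""

functions = json.loads(response_body)

# Print each function with its replica and invocation counts,
# much like `faas-cli list` does.
for fn in functions:
    print(f"{fn['name']:<12} replicas={fn['replicas']} invocations={fn['invocationCount']}")
```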

      Let's take one of these functions and execute it, using the API's /function/function-name endpoint. This endpoint accepts POST requests, and the -d flag allows you to send data to the function.

      For example, let's run the following curl command to execute the echoit function, which comes with OpenFaaS out of the box and outputs the string you've sent it as a request. You can use the string "Sammy The Shark" to demonstrate:

      • curl -u admin:your_openfaas_password -X POST https://example.com/function/echoit -d "Sammy The Shark"

      The output will show you Sammy The Shark:

      Output

      Sammy The Shark

      If you see an error, follow the on-screen messages to resolve the problem before continuing with the tutorial. You can also check the gateway service's logs.
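
      Under the hood, echoit's container simply wraps the cat binary (note the "envProcess": "cat" entry in the listing from the previous request). The OpenFaaS watchdog pipes the request body to that process's standard input and returns its standard output as the response, roughly as in this sketch:

```python
import subprocess

def invoke_like_watchdog(fprocess, request_body):
    """Mimic the OpenFaaS watchdog: pipe the request body to the
    function's process and return whatever it writes to stdout."""
    result = subprocess.run(
        fprocess, input=request_body, capture_output=True, text=True, check=True
    )
    return result.stdout

# echoit wraps `cat`, so the body comes straight back.
print(invoke_like_watchdog(["cat"], "Sammy The Shark"))  # Sammy The Shark

# wordcount wraps `wc`; adding -w here for a single word count.
print(invoke_like_watchdog(["wc", "-w"], "Sammy The Shark").strip())  # 3
```

      This is a conceptual model only; the real watchdog also handles HTTP, timeouts, and health checks inside the container.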

      At this point, you've used the OpenFaaS API to manage and execute your functions. Let's now take a look at the OpenFaaS Web UI.

      Step 6 — Using the OpenFaaS Web UI

      OpenFaaS comes with a Web UI that you can use to deploy new functions and execute installed ones. In this step, you will install a function for generating QR codes from the FaaS Store and generate a sample code.

      To begin, point your web browser to https://example.com/ui/. Note that the trailing slash is required to avoid a "not found" error.

      In the HTTP authentication dialogue box, enter the username and password you got when deploying OpenFaaS in Step 3.

      Once logged in, you will see available functions on the left side of the screen, along with the Deploy New Functions button used to install new functions.

      Click on Deploy New Functions to deploy a new function. You will see the FaaS Store window, which provides community-tested functions that you can install with a single click:

      OpenFaaS Functions store

      In addition to these functions, you can also deploy functions manually from a Docker image.

      For this tutorial, you will deploy the QR Code Generator function from the FaaS Store. Locate the QR Code Generator - Go item in the list, click on it, and then click the Deploy button at the bottom of the window:

      OpenFaaS QR Code Generator function

      After clicking Deploy, the Deploy A New Function window will close and the function will be deployed. In the list at the left side of the window you will see a listing for the qrcode-go function. Click on this entry to select it. The main function window will show the function name, number of replicas, invocation count, and image, along with the option to invoke the function:

      OpenFaaS QR Code Function

      Let's generate a QR code containing the URL with your domain. In the Request body field, type the content of the QR code you'd like to generate; in our case, this will be "example.com". Once you're done, click the Invoke button.

      When you select either the Text or JSON output option, the function will output the image file's raw content, which is not usable or human-readable:

      OpenFaaS generated QR code

      You can download the response, which in our case will be a PNG file with the QR code. To do this, select the Download option, and then click Invoke once again. Shortly after, you should have the QR code downloaded, which you can open with the image viewer of your choice:

      Generated QR code

      In addition to deploying functions from the FaaS store or from Docker images, you can also create your own functions. In the next step, you will create a Python function using the FaaS command-line interface.

      Step 7 — Creating Functions With the FaaS CLI

      In the previous steps, you configured the FaaS CLI to work with your OpenFaaS server. The FaaS CLI is a command-line interface that you can use to manage OpenFaaS and install and run functions, just like you would over the API or using the Web UI.

      Compared to the Web UI or the API, the FaaS CLI has templates for many programming languages that you can use to create your own functions. It can also build container images based on your function code and push images to an image registry, such as Docker Hub.

      In this step, you will create a function, publish it to Docker Hub, and then run it on your OpenFaaS server. This function will be similar to the default echoit function, which returns input passed as a request.

      We will use Python to write our function. If you want to learn more about Python, you can check out our How To Code in Python 3 tutorial series and our How To Code in Python eBook.

      Before creating the new function, let's create a directory to store FaaS functions and navigate to it:

      • mkdir ~/faas-functions
      • cd ~/faas-functions

      Execute the following command to create a new Python function called echo-input. Make sure to replace your-docker-hub-username with your Docker Hub username, as you'll push the function to Docker Hub later:

      • faas-cli new echo-input --lang python --prefix your-docker-hub-username --gateway https://example.com

      The output confirms that the function was created. If you don't have the templates downloaded yet, the CLI will download them into your current directory:

      Output

2018/05/13 12:13:06 No templates found in current directory.
2018/05/13 12:13:06 Attempting to expand templates from https://github.com/openfaas/templates.git
2018/05/13 12:13:11 Fetched 12 template(s) : [csharp dockerfile go go-armhf node node-arm64 node-armhf python python-armhf python3 python3-armhf ruby] from https://github.com/openfaas/templates.git
Folder: echo-input created.

[OpenFaaS ASCII-art logo]

Function created in folder: echo-input
Stack file written: echo-input.yml

      The result of the faas-cli new command is a newly-created ~/faas-functions/echo-input directory containing the function's code and the echo-input.yml file. This file includes information about your function: what language it's in, its name, and the server you will deploy it on.

      Navigate to the ~/faas-functions/echo-input directory:

      • cd ~/faas-functions/echo-input

      To see the contents of the directory, execute:

      • ls

      The directory contains two files: handler.py, which contains the code for your function, and requirements.txt, which contains the Python modules required by the function.

      Since we don't currently require any non-default Python modules, the requirements.txt file is empty. You can check that by using the cat command:

      • cat requirements.txt

      Next, let's write a function that will return a request as a string.

      The handler.py file already has the sample handler code, which returns a received request as a string. Let's take a look at the code:

      • cat handler.py

      The default function is called handle and takes a single parameter, req, that contains a request that's passed to the function when it's invoked. The function does only one thing, returning the passed request back as the response:

      def handle(req):
          """handle a request to the function
          Args:
              req (str): request body
          """
      
          return req
      

      Let's modify it to include additional text. Open handler.py in your text editor:

      • nano handler.py

      Then replace the string in the return directive as follows:

          return "Received message: " + req
      

      Once you're done, save the file and close your text editor.
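
      Since the handler is plain Python, you can sanity-check the change locally before building any image. This is the complete modified handler.py plus a quick local call; the expected output mirrors what the deployed function will return:

```python
def handle(req):
    """handle a request to the function
    Args:
        req (str): request body
    """

    return "Received message: " + req


# Quick local check, without building or deploying anything.
if __name__ == "__main__":
    print(handle("Sammy The Shark!"))  # Received message: Sammy The Shark!
```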

      Next, let's build a Docker image from the function's source code. Navigate back to the ~/faas-functions directory, where the echo-input.yml file is located:

      • cd ~/faas-functions

      The following command builds the Docker image for your function:

      • faas-cli build -f echo-input.yml

      The output contains information about the build progress:

      Output

[0] > Building echo-input.
Clearing temporary build folder: ./build/echo-input/
Preparing ./echo-input/ ./build/echo-input/function
Building: sammy/echo-input with python template. Please wait..
Sending build context to Docker daemon  7.168kB
Step 1/16 : FROM python:2.7-alpine
 ---> 5fdd069daf25
Step 2/16 : RUN apk --no-cache add curl && echo "Pulling watchdog binary from Github." && curl -sSL https://github.com/openfaas/faas/releases/download/0.8.0/fwatchdog > /usr/bin/fwatchdog && chmod +x /usr/bin/fwatchdog && apk del curl --no-cache
 ---> Using cache
 ---> 247d4772623a
Step 3/16 : WORKDIR /root/
 ---> Using cache
 ---> 532cc683d67b
Step 4/16 : COPY index.py .
 ---> Using cache
 ---> b4b512152257
Step 5/16 : COPY requirements.txt .
 ---> Using cache
 ---> 3f9cbb311ab4
Step 6/16 : RUN pip install -r requirements.txt
 ---> Using cache
 ---> dd7415c792b1
Step 7/16 : RUN mkdir -p function
 ---> Using cache
 ---> 96c25051cefc
Step 8/16 : RUN touch ./function/__init__.py
 ---> Using cache
 ---> 77a9db274e32
Step 9/16 : WORKDIR /root/function/
 ---> Using cache
 ---> 88a876eca9e3
Step 10/16 : COPY function/requirements.txt .
 ---> Using cache
 ---> f9ba5effdc5a
Step 11/16 : RUN pip install -r requirements.txt
 ---> Using cache
 ---> 394a1dd9e4d7
Step 12/16 : WORKDIR /root/
 ---> Using cache
 ---> 5a5893c25b65
Step 13/16 : COPY function function
 ---> eeddfa67018d
Step 14/16 : ENV fprocess="python index.py"
 ---> Running in 8e53df4583f2
Removing intermediate container 8e53df4583f2
 ---> fb5086bc7f6c
Step 15/16 : HEALTHCHECK --interval=1s CMD [ -e /tmp/.lock ] || exit 1
 ---> Running in b38681a71378
Removing intermediate container b38681a71378
 ---> b04c045b0994
Step 16/16 : CMD ["fwatchdog"]
 ---> Running in c5a11078df3d
Removing intermediate container c5a11078df3d
 ---> bc5f08157c5a
Successfully built bc5f08157c5a
Successfully tagged sammy/echo-input:latest
Image: your-docker-hub-username/echo-input built.
[0] < Building echo-input done.
[0] worker done.

      If you get an error, make sure to resolve it by following the on-screen instructions before deploying the function.

      To deploy the function to your cluster, its image needs to be available from a container registry so that every Swarm node can pull it. Containerizing applications ensures that the environment needed to run your application can be easily reproduced, and that your application can be easily deployed, scaled, and updated.

      For this tutorial, we'll use Docker Hub, as it's a free solution, but you can use any container registry, including your own private registry.

      Run the following command to push the image you built to your specified repository on Docker Hub:

      • faas-cli push -f echo-input.yml

      Pushing will take several minutes, depending on your internet connection speed. The output contains the image's upload progress:

      Output

[0] > Pushing echo-input.
The push refers to repository [docker.io/sammy/echo-input]
320ea573b385: Pushed
9d87e56f5d0c: Pushed
6f79b75e7434: Pushed
23aac2d8ecf2: Pushed
2bec17d09b7e: Pushed
e5a0e5ab3be6: Pushed
e9c8ca932f1b: Pushed
beae1d55b4ce: Pushed
2fcae03ed1f7: Pushed
62103d5daa03: Mounted from library/python
f6ac6def937b: Mounted from library/python
55c108c7613c: Mounted from library/python
e53f74215d12: Mounted from library/python
latest: digest: sha256:794fa942c2f593286370bbab2b6c6b75b9c4dcde84f62f522e59fb0f52ba05c1 size: 3033
[0] < Pushing echo-input done.
[0] worker done.

      Finally, with your image pushed to Docker Hub, you can use it to deploy a function to your OpenFaaS server.

      To deploy your function, run the deploy command, which takes the path to the manifest that describes your function, as well as the address of your OpenFaaS server:

      • faas-cli deploy -f echo-input.yml --gateway https://example.com

      The output shows the status of the deployment, along with the name of the function you're deploying and the deployment status code:

      Output

Deploying: echo-input.
Deployed. 200 OK.
URL: https://example.com/function/echo-input

      If the deployment is successful, you will see a 200 status code. In the case of errors, follow the provided instructions to fix the problem before continuing.

      At this point your function is deployed and ready to be used. You can test that it is working as expected by invoking it.

      To invoke a function with the FaaS CLI, use the invoke command by passing the function name and OpenFaaS address to it. After executing the command, you'll be asked to enter the request you want to send to the function.

      Execute the following command to invoke the echo-input function:

      • faas-cli invoke echo-input --gateway https://example.com

      You'll be asked to enter the request you want to send to the function:

      Output

      Reading from STDIN - hit (Control + D) to stop.

      Enter the text you want to send to the function, such as:

      Sammy The Shark!
      

      Once you're done, press ENTER and then CTRL + D to finish the request. The CTRL + D shortcut in the terminal is used to register an End-of-File (EOF). The OpenFaaS CLI stops reading from the terminal once EOF is received.

      After several seconds, the command will output the function's response:

      Output

Reading from STDIN - hit (Control + D) to stop.
Sammy The Shark!
Received message: Sammy The Shark!

      If you don't see the output or you get an error, retrace the preceding steps to make sure you've deployed the function as explained and follow the on-screen instructions to resolve the problem.

      At this point, you've interacted with your function using three methods: the Web UI, the API, and the CLI. Being able to execute your functions with any of these methods offers you the flexibility of deciding how you would like to integrate functions into your existing workflows.

      Conclusion

      In this tutorial, you've used serverless architecture and OpenFaaS to deploy and manage your applications using the OpenFaaS API, Web UI, and CLI. You also secured your infrastructure by leveraging Traefik to provide SSL using Let's Encrypt.

      If you want to learn more about the OpenFaaS project, you can check out their website and the project's official documentation.


