
      How to Manually Set Up a Prisma Server on Ubuntu 18.04

      The author selected the Electronic Frontier Foundation to receive a donation as part of the Write for DOnations program.


      Prisma is a data layer that replaces traditional object-relational mapping tools (ORMs) in your application. Offering support for both building GraphQL servers as well as REST APIs, Prisma simplifies database access with a focus on type safety and enables declarative database migrations. Type safety helps reduce potential code errors and inconsistencies, while the declarative database migrations allow you to store your datamodel in version control. These features help developers reduce time spent focused on setting up database access, migrations, and data management workflows.

      You can deploy the Prisma server, which acts as a proxy for your database, in a number of ways and host it either remotely or locally. Through the Prisma service you can access your data and connect to your database with the GraphQL API, which allows realtime operations and the ability to create, update, and delete data. GraphQL is a query language for APIs that allows users to send queries to access the exact data they require from their server. The Prisma server is a standalone component that sits on top of your database.

      In this tutorial you will manually install a Prisma server on Ubuntu 18.04 and run a test GraphQL query in the GraphQL Playground. You will host your Prisma setup code and development locally — where you will actually build your application — while running Prisma on your remote server. By running through the installation manually, you will have a deeper understanding and customizability of the underlying infrastructure of your setup.

      While this tutorial covers the manual steps for deploying Prisma on an Ubuntu 18.04 server, you can also accomplish this in a more automated way with Docker Machine by following this tutorial on Prisma’s site.

      Note: The setup described in this section does not include features you would normally expect from production-ready servers, such as automated backups and active failover.


      To complete this tutorial, you will need:

      Step 1 — Starting the Prisma Server

      The Prisma CLI is the primary tool used to deploy and manage your Prisma services. To start the services, you need to set up the required infrastructure, which includes the Prisma server and a database for it to connect to.

      Docker Compose allows you to manage and run multi-container applications. You’ll use it to set up the infrastructure required for the Prisma service.

You will begin by creating the docker-compose.yml file to store the Prisma service configuration on your server. You'll use this file to automatically spin up Prisma and an associated database and to configure the necessary details, all in one step. The file defines the passwords for your databases, so be sure to replace the values for managementApiSecret and MYSQL_ROOT_PASSWORD with something secure. Run the following command to create and edit the docker-compose.yml file:

      • sudo nano docker-compose.yml

      Add the following content to the file to define the services and volumes for the Prisma setup:


version: "3"
services:
  prisma:
    image: prismagraphql/prisma:1.20
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        managementApiSecret: my-secret
        databases:
          default:
            connector: mysql
            host: mysql
            port: 3306
            user: root
            password: prisma
            migrations: true
  mysql:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: prisma
    volumes:
      - mysql:/var/lib/mysql
volumes:
  mysql: ~

      This configuration does the following:

      • It launches two services: prisma and mysql.
      • It pulls in the latest version of Prisma. As of this writing, that is Prisma 1.20.
      • It sets the ports Prisma will be available on and specifies all of the credentials to connect to the MySQL database in the databases section.

The docker-compose.yml file sets up the managementApiSecret, which prevents others from accessing your data with knowledge of your endpoint. If you are using this tutorial for anything but a test deployment, you should change the managementApiSecret to something more secure. When you do, be sure to remember it so that you can enter it later during the prisma init process.
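One way to come up with a strong secret is to generate it rather than invent it; for example, with openssl (any source of long random strings works equally well):

```shell
# Generate a 32-byte random secret, base64-encoded, suitable for
# managementApiSecret (and MYSQL_ROOT_PASSWORD):
openssl rand -base64 32
```

Paste the resulting string into docker-compose.yml in place of my-secret.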

This file also pulls in the MySQL Docker image and sets those credentials as well. For the purposes of this tutorial, this Docker Compose file spins up a MySQL image, but you can also use PostgreSQL with Prisma. Both Docker images are available on Docker Hub.
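If you would rather run PostgreSQL, only the databases block of PRISMA_CONFIG needs to change. A rough sketch, where the host, user, and password values are placeholders for your own setup:

```yaml
databases:
  default:
    connector: postgres
    host: postgres
    port: 5432
    user: prisma
    password: prisma
    migrations: true
```

You would also swap the mysql service for a postgres one in the same docker-compose.yml.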

      Save and exit the file.

Now that you have saved all of the details, you can start the Docker containers. The -d flag tells the containers to run in detached mode, meaning they'll run in the background:

      • sudo docker-compose up -d

This will fetch the Docker images for both prisma and mysql. You can verify that the Docker containers are running with the following command:

      • sudo docker ps

      You will see an output that looks similar to this:

      CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS              PORTS                    NAMES
      24f4dd6222b1        prismagraphql/prisma:1.20   "/bin/sh -c /app/sta…"   15 seconds ago      Up 1 second         0.0.0.0:4466->4466/tcp   root_prisma_1
      d8cc3a393a9f        mysql:5.7                   "docker-entrypoint.s…"   15 seconds ago      Up 13 seconds       3306/tcp                 root_mysql_1

      With your Prisma server and database set up, you are now ready to work locally to deploy the Prisma service.

      Step 2 — Installing Prisma Locally

      The Prisma server provides the runtime environments for your Prisma services. Now that you have your Prisma server started, you can deploy your Prisma service. You will run these steps locally, not on your server.

To start, create a separate folder to contain all of the Prisma files:

      • mkdir prisma

Then move into that folder:

      • cd prisma

You can install Prisma with Homebrew if you're using macOS. To do this, run the following command to add the Prisma repository:

      • brew tap prisma/prisma

You can then install Prisma with the following command:

      • brew install prisma

Alternatively, you can install it with npm:

      • npm install -g prisma

      With Prisma installed locally, you are ready to bootstrap the new Prisma service.

      Step 3 — Creating the Configuration for a New Prisma Service

      After the installation, you can use prisma init to create the file structure for a new Prisma database API, which generates the files necessary to build your application with Prisma. Your endpoint will automatically be in the prisma.yml file, and datamodel.prisma will already contain a sample datamodel that you can query in the next step. The datamodel serves as the basis for your Prisma API and specifies the model for your application. At this point, you are only creating the files and the sample datamodel. You are not making any changes to the database until you run prisma deploy later in this step.

      Now you can run the following command locally to create the new file structure:

After you run this command you will see an interactive prompt. When asked, select Use other server and press ENTER:


      Set up a new Prisma server or deploy to an existing server?

        You can set up Prisma for local development (based on docker-compose)
        Use existing database      Connect to existing database
        Create new database        Set up a local database using Docker

        Or deploy to an existing Prisma server:
        Demo server                Hosted demo environment incl. database (requires login)
      ❯ Use other server           Manually provide endpoint of a running Prisma server

You will then provide the endpoint of the server that is acting as the Prisma server. It will look something like http://SERVER_IP_ADDRESS:4466. It is important that the endpoint begins with http (or https) and includes the port number.


      Enter the endpoint of your Prisma server http://SERVER_IP_ADDRESS:4466

For the management API secret, enter the phrase or password that you set earlier in the configuration file:


      Enter the management API secret my-secret

      For the subsequent options, you can choose the default variables by pressing ENTER for the service name and service stage:


      Choose a name for your service hello-world
      Choose a name for your stage dev

You will also be given a choice of programming language for the Prisma client. In this case, choose your preferred language. You can read more about the client here.


      Select the programming language for the generated Prisma client (Use arrow keys)
      ❯ Prisma TypeScript Client
        Prisma Flow Client
        Prisma JavaScript Client
        Prisma Go Client
        Don't generate

      Once you have completed the prompt, you will see the following output that confirms the selections you made:


      Created 3 new files:

        prisma.yml           Prisma service definition
        datamodel.prisma     GraphQL SDL-based datamodel (foundation for database)
        .env                 Env file including PRISMA_API_MANAGEMENT_SECRET

      Next steps:

        1. Open folder: cd hello-world
        2. Deploy your Prisma service: prisma deploy
        3. Read more about deploying services:

Move into the hello-world directory:

      • cd hello-world

Sync these changes to your server with prisma deploy. This sends the information to the Prisma server from your local machine and creates the Prisma service on the Prisma server:

      • prisma deploy

      Note: Running prisma deploy again will update your Prisma service.

      Your output will look something like:


      Creating stage dev for service hello-world ✔
      Deploying service `hello-world` to stage 'dev' to server 'default' 468ms

      Changes:

        User (Type)
        + Created type `User`
        + Created field `id` of type `GraphQLID!`
        + Created field `name` of type `String!`
        + Created field `updatedAt` of type `DateTime!`
        + Created field `createdAt` of type `DateTime!`

      Applying changes 716ms

      Your Prisma GraphQL database endpoint is live:

        HTTP:  http://SERVER_IP_ADDRESS:4466/hello-world/dev
        WS:    ws://SERVER_IP_ADDRESS:4466/hello-world/dev

      The output shows that Prisma has updated your database according to your datamodel (created in the prisma init step) with a type User. Types are an essential part of a datamodel; they represent an item from your application, and each type contains multiple fields. For your datamodel the associated fields describing the user are: the user’s ID, name, time they were created, and time they were updated.
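For reference, the sample datamodel that prisma init generates looks roughly like the following; the createdAt and updatedAt fields shown in the deploy output are system fields that Prisma manages automatically:

```graphql
type User {
  id: ID! @unique
  name: String!
}
```

You can extend this file with your own types and fields and re-run prisma deploy to apply the changes.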

      If you run into issues at this stage and get a different output, double check that you entered all of the fields correctly during the interactive prompt. You can do so by reviewing the contents of the prisma.yml file.

      With your Prisma service running, you can connect to two different endpoints:

      • The management interface, available at http://SERVER_IP_ADDRESS:4466/management, where you can manage and deploy Prisma services.

      • The GraphQL API for your Prisma service, available at http://SERVER_IP_ADDRESS:4466/hello-world/dev.

      GraphQL API exploring _Your Project_

      You have successfully set up and deployed your Prisma server. You can now explore queries and mutations in GraphQL.

      Step 4 — Running an Example Query

To explore another Prisma use case, you can experiment with the GraphQL Playground tool, an open-source GraphQL integrated development environment (IDE) on your server. To access it, visit the GraphQL API endpoint from the previous step in your browser: http://SERVER_IP_ADDRESS:4466/hello-world/dev


A mutation is a GraphQL term that describes a way to modify — create, update, or delete — data in the backend via GraphQL. You can send a mutation to create a new user and explore the functionality. To do this, run the following mutation in the left-hand side of the page:

      mutation {
        createUser(data: { name: "Alice" }) {
          id
          name
        }
      }

      Once you press the play button, you will see the results on the right-hand side of the page.
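The Playground is only a convenience; any HTTP client can send the same mutation as a JSON POST body. A sketch with curl (the endpoint is the one from Step 3; the request line is left commented out since it requires your running server):

```shell
# The GraphQL mutation wrapped in a JSON body, as an HTTP client would send it:
PAYLOAD='{"query":"mutation { createUser(data: { name: \"Alice\" }) { id name } }"}'
echo "$PAYLOAD"

# With the server running, this would POST it to your endpoint:
# curl -X POST -H 'Content-Type: application/json' \
#   -d "$PAYLOAD" http://SERVER_IP_ADDRESS:4466/hello-world/dev
```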
      GraphQL Playground Creating a New User

Subsequently, if you want to look up a user by the ID column in the database, you can run the following query, substituting the id value with the one returned for your user:

      query {
        user(where: { id: "cjkar2d62000k0847xuh4g70o" }) {
          id
          name
        }
      }

      You now have a Prisma server and service up and running on your server, and you have run test queries in GraphQL's IDE.


You have a functioning Prisma setup on your server. You can see some additional Prisma use cases and next steps in the Getting Started Guide or explore Prisma's feature set in the Prisma Docs. Once you have completed all of the steps in this tutorial, you have a number of options to verify your connection to the database; one possibility is using the Prisma Client.


      How To Set Up a Private Docker Registry on Ubuntu 18.04

      The author selected the Apache Software Foundation to receive a donation as part of the Write for DOnations program.


      Docker Registry is an application that manages storing and delivering Docker container images. Registries centralize container images and reduce build times for developers. Docker images guarantee the same runtime environment through virtualization, but building an image can involve a significant time investment. For example, rather than installing dependencies and packages separately to use Docker, developers can download a compressed image from a registry that contains all of the necessary components. Furthermore, developers can automate pushing images to a registry using continuous integration tools, such as TravisCI, to seamlessly update images during production and development.

      Docker also has a free public registry, Docker Hub, that can host your custom Docker images, but there are situations where you will not want your image to be publicly available. Images typically contain all the code necessary to run an application, so using a private registry is preferable when using proprietary software.

      In this tutorial, you will set up and secure your own private Docker Registry. You will use Docker Compose to define configurations to run your Docker applications and Nginx to forward server traffic from HTTPS to the running Docker container. Once you’ve completed this tutorial, you will be able to push a custom Docker image to your private registry and pull the image securely from a remote server.


      Before you begin this guide, you’ll need the following:

      • Two Ubuntu 18.04 servers set up by following the Ubuntu 18.04 initial server setup guide, including a sudo non-root user and a firewall. One server will host your private Docker Registry and the other will be your client server.
      • Docker and Docker-Compose installed on both servers by following the How to Install Docker-Compose on Ubuntu 18.04 tutorial. You only need to complete the first step of this tutorial to install Docker Compose. This tutorial explains how to install Docker as part of its prerequisites.
      • Nginx installed on your private Docker Registry server by following the How to Install Nginx on Ubuntu 18.04.
      • Nginx secured with Let’s Encrypt on your server for the private Docker Registry, by following How to Secure Nginx With Let’s Encrypt. Make sure to redirect all traffic from HTTP to HTTPS in Step 4.
      • A domain name that resolves to the server you’re using for the private Docker Registry. You will set this up as part of the Let’s Encrypt prerequisite.

      Step 1 — Installing and Configuring the Docker Registry

The Docker command line tool is useful for starting and managing one or two Docker containers, but for a full deployment, most applications running inside Docker containers require other components to be running in parallel. For example, a lot of web applications consist of a web server, like Nginx, that serves up the application's code, an interpreted scripting language such as PHP, and a database server like MySQL.

      With Docker Compose, you can write one .yml file to set up each container’s configuration and the information the containers need to communicate with each other. You can then use the docker-compose command line tool to issue commands to all the components that make up your application.

      Docker Registry is itself an application with multiple components, so you will use Docker Compose to manage your configuration. To start an instance of the registry, you’ll set up a docker-compose.yml file to define the location where your registry will be storing its data.

      On the server you have created to host your private Docker Registry, you can create a docker-registry directory, move into it, and then create a data subfolder with the following commands:

      • mkdir ~/docker-registry && cd $_
      • mkdir data

Use your text editor to create the docker-compose.yml configuration file:

      • nano docker-compose.yml

      Add the following content to the file, which describes the basic configuration for a Docker Registry:


      version: '3'

      services:
        registry:
          image: registry:2
          ports:
          - "5000:5000"
          environment:
            REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
          volumes:
            - ./data:/data

The environment section sets the REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY environment variable in the Docker Registry container to the path /data. The Docker Registry application checks this environment variable when it starts up, and as a result begins to save its data to the /data folder.

However, as you have included the volumes: - ./data:/data line, Docker will map the /data directory inside the container to ./data on your registry server. The end result is that all of the Docker Registry's data gets stored in ~/docker-registry/data on the registry server.

      The ports section, with configuration 5000:5000, tells Docker to map port 5000 on the server to port 5000 in the running container. This allows you to send a request to port 5000 on the server, and have the request forwarded to the registry application.

You can now start Docker Compose to check the setup:

      • docker-compose up

      You will see download bars in your output that show Docker downloading the Docker Registry image from Docker's own registry. Within a minute or two, you'll see output that looks similar to the following (versions might vary):

      Output of docker-compose up

      Starting docker-registry_registry_1 ... done
      Attaching to docker-registry_registry_1
      registry_1  | time="2018-11-06T18:43:09Z" level=warning msg="No HTTP secret provided - generated random secret. This may cause problems with uploads if multiple registries are behind a load-balancer. To provide a shared secret, fill in http.secret in the configuration file or set the REGISTRY_HTTP_SECRET environment variable." go.version=go1.7.6 version=v2.6.2
      registry_1  | time="2018-11-06T18:43:09Z" level=info msg="redis not configured" go.version=go1.7.6 version=v2.6.2
      registry_1  | time="2018-11-06T18:43:09Z" level=info msg="Starting upload purge in 20m0s" go.version=go1.7.6 version=v2.6.2
      registry_1  | time="2018-11-06T18:43:09Z" level=info msg="using inmemory blob descriptor cache" go.version=go1.7.6 version=v2.6.2
      registry_1  | time="2018-11-06T18:43:09Z" level=info msg="listening on [::]:5000" go.version=go1.7.6 version=v2.6.2

      You'll address the No HTTP secret provided warning message later in this tutorial. The output shows that the container is starting. The last line of the output shows it has successfully started listening on port 5000.
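If you want to silence that warning now, the log message itself points at the fix: set the REGISTRY_HTTP_SECRET environment variable. In docker-compose.yml that is one more entry under environment (the value shown is a placeholder — use your own long random string):

```yaml
environment:
  REGISTRY_HTTP_SECRET: replace-with-a-long-random-string
```

This only matters when several registry instances sit behind a load balancer and need to share the same secret.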

      By default, Docker Compose will remain waiting for your input, so hit CTRL+C to shut down your Docker Registry container.

      You have set up a full Docker Registry listening on port 5000. At this point the registry won't start unless you bring it up manually. Also, Docker Registry doesn't come with any built-in authentication mechanism, so it is currently insecure and completely open to the public. In the following steps, you will address these security concerns.

      Step 2 — Setting Up Nginx Port Forwarding

You already have HTTPS set up on your Docker Registry server with Nginx, which means you can now set up port forwarding from Nginx to port 5000. Once you complete this step, you can access your registry directly at your domain over HTTPS.

      As part of the How to Secure Nginx With Let's Encrypt prerequisite, you have already set up the /etc/nginx/sites-available/ file containing your server configuration.

      Open this file with your text editor:

      • sudo nano /etc/nginx/sites-available/

      Find the existing location line. It will look like this:


      location / {

      You need to forward traffic to port 5000, where your registry will be running. You also want to append headers to the request to the registry, which provide additional information from the server with each request and response. Delete the contents of the location section, and add the following content into that section:


      location / {
          # Do not allow connections from docker 1.5 and earlier
          # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
          if ($http_user_agent ~ "^(docker/1.(3|4|5(?!.[0-9]-dev))|Go ).*$" ) {
            return 404;
          }

          proxy_pass                          http://localhost:5000;
          proxy_set_header  Host              $http_host;   # required for docker client's sake
          proxy_set_header  X-Real-IP         $remote_addr; # pass on real client's IP
          proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
          proxy_set_header  X-Forwarded-Proto $scheme;
          proxy_read_timeout                  900;
      }

The if block checks the $http_user_agent value to verify that the client's Docker version is above 1.5 and that the client is not a Go application. Since you are using version 2.0 of the registry, older clients are not supported. For more information, you can find the nginx header configuration in Docker's Registry Nginx guide.
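If you're curious which clients that regex actually blocks, you can exercise the same pattern outside of nginx. A sketch using GNU grep's -P mode (needed here because of the negative lookahead); the sample user agent strings are illustrative:

```shell
# The user agent pattern from the nginx config above:
pattern='^(docker/1.(3|4|5(?!.[0-9]-dev))|Go ).*$'

# Try it against a few representative user agents:
for ua in 'docker/1.5.0' 'docker/18.06.1-ce' 'Go 1.1 package http'; do
  if printf '%s' "$ua" | grep -Pq "$pattern"; then
    echo "blocked: $ua"
  else
    echo "allowed: $ua"
  fi
done
```

Old docker/1.x clients and Go-based pings match and would receive a 404, while a modern client such as docker/18.06.1-ce falls through to the proxy.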

      Save and exit the file. Apply the changes by restarting Nginx:

      • sudo service nginx restart

      You can confirm that Nginx is forwarding traffic to port 5000 by running the registry:

      • cd ~/docker-registry
      • docker-compose up

In a browser window, open the /v2/ endpoint at your domain. You will see an empty JSON object:

      {}
      In your terminal, you'll see output similar to the following:

      Output of docker-compose up

      registry_1  | time="2018-11-07T17:57:42Z" level=info msg="response completed" go.version=go1.7.6 http.request.method=GET http.request.remoteaddr= http.request.uri="/v2/" http.request.useragent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/604.4.7 (KHTML, like Gecko) Version/11.0.2 Safari/604.4.7" http.response.contenttype="application/json; charset=utf-8" http.response.duration=2.125995ms http.response.status=200 http.response.written=2 version=v2.6.2
      registry_1  | - - [07/Nov/2018:17:57:42 +0000] "GET /v2/ HTTP/1.0" 200 2 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/604.4.7 (KHTML, like Gecko) Version/11.0.2 Safari/604.4.7"

      You can see from the last line that a GET request was made to /v2/, which is the endpoint you sent a request to from your browser. The container received the request you made, from the port forwarding, and returned a response of {}. The code 200 in the last line of the output means that the container handled the request successfully.

      Now that you have set up port forwarding, you can move on to improving the security of your registry.

      Step 3 — Setting Up Authentication

With Nginx proxying requests properly, you can now secure your registry with HTTP authentication to manage who has access to your Docker Registry. To achieve this, you'll create an authentication file with htpasswd and add users to it. HTTP authentication is quick to set up and secure over an HTTPS connection, which is what the registry will use.

      You can install the htpasswd package by running the following:

      • sudo apt install apache2-utils

Now you'll create the directory where you'll store your authentication credentials, and change into that directory. $_ expands to the last argument of the previous command — in this case, ~/docker-registry/auth:

      • mkdir ~/docker-registry/auth && cd $_
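The $_ expansion is plain shell behavior that you can see in isolation; this sketch uses a throwaway directory under /tmp:

```shell
#!/usr/bin/env bash
# In bash, $_ holds the last argument of the previous command, so the
# cd target repeats the mkdir argument without retyping it:
mkdir -p /tmp/auth-demo && cd $_
pwd
```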

Next, you will create the first user as follows, replacing username with the username you want to use. The -B flag specifies bcrypt hashing, which is more secure than the default. Enter the password when prompted:

      • htpasswd -Bc registry.password username

Note: To add more users, re-run the previous command without the -c option (the c is for create):

      • htpasswd registry.password username
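The registry.password file that htpasswd manages is plain text, with one username:hash entry per line. A small sketch with a fabricated entry (sammy and the hash value are made up) showing how you might list the users in such a file:

```shell
# A fabricated example entry; real hashes come from the htpasswd command:
printf 'sammy:$2y$05$examplehashexamplehashexampleha\n' > /tmp/registry.password

# The part before the colon on each line is the username:
cut -d: -f1 /tmp/registry.password
```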

      Next, you'll edit the docker-compose.yml file to tell Docker to use the file you created to authenticate users.

      • cd ~/docker-registry
      • nano docker-compose.yml

      You can add environment variables and a volume for the auth/ directory that you created, by editing the docker-compose.yml file to tell Docker how you want to authenticate users. Add the following highlighted content to the file:


      version: '3'

      services:
        registry:
          image: registry:2
          ports:
          - "5000:5000"
          environment:
            REGISTRY_AUTH: htpasswd
            REGISTRY_AUTH_HTPASSWD_PATH: /auth/registry.password
            REGISTRY_AUTH_HTPASSWD_REALM: registry
            REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
          volumes:
            - ./auth:/auth
            - ./data:/data

      For REGISTRY_AUTH, you have specified htpasswd, which is the authentication scheme you are using, and set REGISTRY_AUTH_HTPASSWD_PATH to the path of the authentication file. Finally, REGISTRY_AUTH_HTPASSWD_REALM signifies the name of htpasswd realm.

You can now verify that your authentication works correctly by running the registry and checking that it prompts users for a username and password:

      • docker-compose up

In a browser window, open the /v2/ endpoint at your domain again.

After entering username and the corresponding password, you will see {} once again. You've confirmed the basic authentication setup: the registry only returned the result after you entered the correct username and password. You have now secured your registry and can continue using it.

      Step 4 — Starting Docker Registry as a Service

You want to ensure that your registry will start whenever the system boots up. If there are any unforeseen system crashes, you want to make sure the registry restarts when the server reboots. Open up docker-compose.yml:

      • nano docker-compose.yml

Add the following line under the registry: section:


          restart: always

You can start your registry as a background process, which will allow you to exit the ssh session and keep the process running:

      • sudo docker-compose up -d

      With your registry running in the background, you can now prepare Nginx for file uploads.

      Step 5 — Increasing File Upload Size for Nginx

      Before you can push an image to the registry, you need to ensure that your registry will be able to handle large file uploads. Although Docker splits large image uploads into separate layers, they can sometimes be over 1GB. By default, Nginx has a limit of 1MB on file uploads, so you need to edit the configuration file for nginx and set the max file upload size to 2GB.

      • sudo nano /etc/nginx/nginx.conf

      Find the http section, and add the following line:


      http {
              ...
              client_max_body_size 2000M;
              ...
      }

      Finally, restart Nginx to apply the configuration changes:

      • sudo service nginx restart

      You can now upload large images to your Docker Registry without Nginx errors.

      Step 6 — Publishing to Your Private Docker Registry

      You are now ready to publish an image to your private Docker Registry, but first you have to create an image. For this tutorial, you will create a simple image based on the ubuntu image from Docker Hub. Docker Hub is a publicly hosted registry, with many pre-configured images that can be leveraged to quickly Dockerize applications. Using the ubuntu image, you will test pushing and pulling to your registry.

From your client server, create a small, empty image to push to your new registry. The -i and -t flags give you interactive shell access into the container:

      • docker run -t -i ubuntu /bin/bash

After it finishes downloading, you'll be inside a Docker prompt; note that your container ID following root@ will vary. Make a quick change to the filesystem by creating a file called SUCCESS. In the next step, you'll be able to use this file to determine whether the publishing process is successful:

      • touch /SUCCESS

Exit out of the Docker container:

      • exit

The following command creates a new image called test-image based on the image already running plus any changes you have made. In this case, the addition of the /SUCCESS file is included in the new image.

      Commit the change:

      • docker commit $(docker ps -lq) test-image

      At this point, the image only exists locally. Now you can push it to the new registry you have created. Log in to your Docker Registry:

      • docker login

      Enter the username and corresponding password from earlier. Next, you will tag the image with the private registry's location in order to push to it:

      • docker tag test-image

      Push the newly tagged image to the registry:

      • docker push

      Your output will look similar to the following:


      The push refers to a repository []
      e3fbbfb44187: Pushed
      5f70bf18a086: Pushed
      a3b5c80a4eba: Pushed
      7f18b442972b: Pushed
      3ce512daaf78: Pushed
      7aae4540b42d: Pushed
      ...

      You've verified that your registry handles user authentication, and allows authenticated users to push images to the registry. Next, you will confirm that you are able to pull images from the registry as well.

      Step 7 — Pulling From Your Private Docker Registry

      Return to your registry server so that you can test pulling the image from your client server. It is also possible to test this from a third server.

      Log in with the username and password you set up previously:

      • docker login

      You're now ready to pull the image. Use your domain name and image name, which you tagged in the previous step:

      • docker pull

      Docker will download the image and return you to the prompt. If you run the image on the registry server you'll see the SUCCESS file you created earlier is there:

      • docker run -it /bin/bash

List the files inside the bash shell:

      • ls

      You will see the SUCCESS file you created for this image:

      SUCCESS  bin  boot  dev  etc  home  lib  lib64  media   mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

      You've finished setting up a secure registry to which users can push and pull custom images.


In this tutorial you set up your own private Docker Registry and published a Docker image to it. As mentioned in the introduction, you can also use TravisCI or a similar CI tool to automate pushing to a private registry directly. By incorporating Docker and registries into your workflow, you can ensure that the image containing the code will result in the same behavior on any machine, whether in production or in development. For more information on writing Docker files, you can read this Docker tutorial explaining the process.


      How To Set Up Multi-Node Deployments With Rancher 2.1, Kubernetes, and Docker Machine on Ubuntu 18.04

The author selected Code.org to receive a donation as part of the Write for DOnations program.


      Rancher is a popular open-source container management platform. Released in early 2018, Rancher 2.X works on Kubernetes and has incorporated new tools such as multi-cluster management and built-in CI pipelines. In addition to the enhanced security, scalability, and straightforward deployment tools already in Kubernetes, Rancher offers a graphical user interface that makes managing containers easier. Through Rancher’s GUI, users can manage secrets, securely handle roles and permissions, scale nodes and pods, and set up load balancers and volumes without needing a command line tool or complex YAML files.

      In this tutorial, you will deploy a multi-node Rancher 2.1 server using Docker Machine on Ubuntu 18.04. By the end, you’ll be able to provision new DigitalOcean Droplets and container pods via the Rancher UI to quickly scale up or down your hosting environment.


      Before you start this tutorial, you’ll need a DigitalOcean account, in addition to the following:

      • A DigitalOcean Personal Access Token, which you can create following the instructions in this tutorial. This token will allow Rancher to have API access to your DigitalOcean account.

      • A fully registered domain name with an A record that points to the IP address of the Droplet you create in Step 1. You can learn how to point domains to DigitalOcean Droplets by reading through DigitalOcean’s Domains and DNS documentation. Throughout this tutorial, substitute your own domain wherever one is called for.

      Step 1 — Creating a Droplet With Docker Installed

      To start and configure Rancher, you’ll need to create a new Droplet with Docker installed. To accomplish this, you can use DigitalOcean’s Docker image.

      First, log in to your DigitalOcean account and choose Create Droplet. Then, under the Choose an Image section, select the Marketplace tab. Select Docker 18.06.1~ce~3 on 18.04.

      Choose the Docker 18.06 image from the One-click Apps menu

      Next, choose a Droplet size with at least 2GB of RAM and select a datacenter region for your Droplet.

      Finally, add your SSH keys, provide a host name for your Droplet, and press the Create button.

      It will take a few minutes for the server to provision and for Docker to download. Once the Droplet deploys successfully, you’re ready to start Rancher in a new Docker container.

      Step 2 — Starting and Configuring Rancher

      The Droplet you created in Step 1 will run Rancher in a Docker container. In this step, you will start the Rancher container and ensure it has a Let’s Encrypt SSL certificate so that you can securely access the Rancher admin panel. Let’s Encrypt is an automated, open-source certificate authority that allows developers to provision ninety-day SSL certificates for free.

      Log in to your new Droplet:

      • ssh root@your_server_ip

      To make sure Docker is running, enter:

      • docker -v

      Check that the listed Docker version is what you expect. You can start Rancher with a Let's Encrypt certificate already installed by running the following command:

      • docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -v /host/rancher:/var/lib/rancher rancher/rancher --acme-domain your_domain

      The --acme-domain option installs an SSL certificate from Let's Encrypt to ensure your Rancher admin panel is served over HTTPS. This command also instructs the Droplet to fetch the rancher/rancher Docker image and start a Rancher instance in a container that will restart automatically if it ever goes down. To ease recovery in the event of data loss, the command mounts a volume on the host machine (at /host/rancher) that contains the Rancher data.
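      If you prefer to manage this container declaratively, the docker run command above maps roughly onto the following Docker Compose file. This is a sketch under the assumption that Docker Compose is installed; your_domain stands in for your actual domain, and it is an alternative to, not part of, the tutorial's own steps:

      ```yaml
      version: "3"
      services:
        rancher:
          image: rancher/rancher
          restart: unless-stopped              # matches --restart=unless-stopped
          ports:
            - "80:80"
            - "443:443"
          volumes:
            - /host/rancher:/var/lib/rancher   # persist Rancher data on the host
          command: --acme-domain your_domain   # request a Let's Encrypt certificate
      ```

      Running docker-compose up -d in the directory containing this file would start the same container as the single docker run command.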

      To see all the running containers, enter:

      • docker ps

      You'll see output similar to the following (with a unique container ID and name):


      CONTAINER ID   IMAGE             COMMAND           CREATED          STATUS          PORTS                                      NAMES
      7b2afed0a599   rancher/rancher   "entrypoint.sh"   12 seconds ago   Up 11 seconds   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   wizardly_fermat

      If the container is not running, you can execute the docker run command again.

      Before you can access the Rancher admin panel, you'll need to set your admin password and Rancher server URL. The Rancher admin interface will give you access to all of your running nodes, pods, and secrets, so it is important that you use a strong password for it.

      Go to the domain name that points to your new Droplet in your web browser. The first time you visit this address, Rancher will let you set a password:

      Set your Rancher password using the prompt

      When asked for your Rancher server URL, use the domain name pointed at your Droplet.

      You have now completed your Rancher server setup, and you will see the Rancher admin home screen:

      The Rancher admin home screen

      You're ready to continue to the Rancher cluster setup.

      Step 3 — Configuring a Cluster With a Single Node

      To use Rancher, you'll need to create a cluster with at least one node. A cluster is a group of one or more nodes. In this tutorial, nodes correspond to Droplets that Rancher will manage, and pods represent groups of running Docker containers within those Droplets. Each node can run many pods. Using the Rancher UI, you can set up clusters and nodes in an underlying Kubernetes environment; for more background, see the Kubernetes documentation on its architecture.

      By the end of this step, you will have set up a cluster with a single node ready to run your first pod.

      In Rancher, click Add Cluster, and select DigitalOcean as the infrastructure provider.

      Select DigitalOcean from the listed infrastructure providers

      Enter a Cluster Name and scroll down to the Node Pools section. Enter a Name Prefix, leave the Count at 1 for now, and check etcd, Control Plane, and Worker.

      • etcd is Kubernetes' key-value store for keeping your entire environment's state. To maintain high availability, you should run three or five etcd nodes so that your environment remains manageable if one goes down.
      • Control Plane checks through all of the Kubernetes Objects — such as pods — in your environment and keeps them up to date with the configuration you provide in the Rancher admin interface.
      • Workers run the actual workloads and monitoring agents that ensure your containers stay running and networked. Worker nodes are where your pods will run the software you deploy.

      Create a Node Pool with a single Node

      Before creating the cluster, click Add Node Template to configure the specific options for your new node.

      Enter your DigitalOcean Personal Access Token in the Access Token input box and click Next: Configure Droplet.

      Next, select the same Region and Droplet Size as in Step 1. For Image, be sure to select Ubuntu 16.04.5 x64, as there's currently a compatibility issue between Rancher and Ubuntu 18.04. Hit Create to save the template.

      Finally, click Create at the Add Cluster page to kick off the provisioning process. It will take a few minutes for Rancher to complete this step, but you will see a new Droplet in your DigitalOcean Droplets dashboard when it's done.

      In this step, you've created a new cluster and node onto which you will deploy a workload in the next section.

      Step 4 — Deploying a Web Application Workload

      Once the new cluster and node are ready, you can deploy your first pod in a workload. A Kubernetes Pod is the smallest unit of work available to Kubernetes and, by extension, Rancher. Workloads describe a single group of pods that you deploy together. For example, you may run multiple pods of your web server in a single workload to ensure that if one pod slows down with a particular request, other instances can handle incoming requests. In this section, you're going to deploy an Nginx Hello World image to a single pod.

      Hover over Global in the header and select Default. This will bring you to the Default project dashboard. You'll focus on deploying a single project in this tutorial, but from this dashboard you can also create multiple projects to achieve isolated container hosting environments.

      To start configuring your first pod, click Deploy.

      Enter a Name, and put nginxdemos/hello in the Docker Image field. Next, map port 80 in the container to port 30000 on the host nodes. This will ensure that the pods you deploy are available on each node at port 30000. You can leave Protocol set to TCP, and the next dropdown as NodePort.

      Note: While this method of running the pod on every node's port is easier to get started, Rancher also includes Ingress to provide load balancing and SSL termination for production use.

      The input form for deploying a Workload
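      For reference, the form fields above correspond roughly to the Kubernetes manifests that Rancher generates behind the scenes. The sketch below is illustrative, not something this tutorial asks you to apply; the hello-world name is an assumption standing in for whatever Name you entered:

      ```yaml
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello-world              # illustrative; use the Name you entered
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: hello-world
        template:
          metadata:
            labels:
              app: hello-world
          spec:
            containers:
              - name: hello-world
                image: nginxdemos/hello   # the Docker Image from the form
                ports:
                  - containerPort: 80
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: hello-world
      spec:
        type: NodePort                 # exposes the pods on every node
        selector:
          app: hello-world
        ports:
          - port: 80
            targetPort: 80
            nodePort: 30000            # the host port mapped in the form
            protocol: TCP
      ```

      The NodePort Service is what makes the workload reachable on port 30000 of each node, as described above.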

      To launch the pod, scroll to the bottom and click Launch.

      Rancher will take you back to the default project home page, and within a few seconds your pod will be ready. Click the link 30000/tcp just below the name of the workload and Rancher will open a new tab with information about the running container's environment.

      Server address, Server name, and other output from the running NGINX container

      The Server address and port you see on this page are those of the internal Docker network, and not the public IP address you see in your browser. This means that Rancher is working and routing traffic from http://first_node_ip:30000/ to the workload as expected.

      At this point, you've successfully deployed your first workload of one pod to a single Rancher node. Next, you'll see how to scale your Rancher environment.

      Step 5 — Scaling Nodes and Pods

      Rancher gives you two ways to scale your hosting resources: increasing the number of pods in your workload or increasing the number of nodes in your cluster.

      Adding pods to your workload will give your application more running processes. This will allow it to handle more traffic and enable zero-downtime deployments, but each node can handle only a finite number of pods. Once all your nodes have hit their pod limit, you will have to increase the number of nodes if you want to continue scaling up.

      Another consideration is that while increasing pods is typically free, you will have to pay for each node you add to your environment. In this step, you will scale up both nodes and pods, and add another node to your Rancher cluster.

      Note: This part of the tutorial will provision a new DigitalOcean Droplet automatically via the API, so be aware that you will incur extra charges while the second node is running.

      Navigate to the cluster home page of your Rancher installation by selecting Cluster: your-cluster-name from the top navigation bar. Next click Nodes from the top navigation bar.

      Use the top navbar dropdown to select your Cluster

      This page shows that you currently have one running node in the cluster. To add more nodes, click Edit Cluster, and scroll to the Node Pools section at the bottom of the page. Click Add Node Pool, enter a prefix, and check the Worker box. Click Save to update the cluster.

      Add a Node Pool as a Worker only

      Within 2–5 minutes, Rancher will provision a second Droplet and mark the node as Active in the cluster's dashboard. This second node is only a worker, which means it will not run the Rancher etcd or Control Plane containers, leaving more of its capacity free for running workloads.

      Note: Running an odd number of etcd nodes gives you the best fault tolerance per node: an even-sized cluster can survive no more failures than the next smaller odd-sized one. If you only have one etcd node, you run the risk of your cluster being unreachable if that one node goes down. In a production environment it is better practice to run three or five etcd nodes.
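      The quorum arithmetic behind that recommendation is easy to check. This short shell snippet is an illustration only, not part of the Rancher setup; it computes the smallest majority (quorum) and the number of node failures each cluster size can survive:

      ```shell
      #!/bin/sh
      # etcd needs a majority (quorum) of nodes to agree before accepting writes.
      quorum() { echo $(( $1 / 2 + 1 )); }        # smallest majority of n nodes
      tolerance() { echo $(( ($1 - 1) / 2 )); }   # failures survivable with quorum intact

      for n in 1 2 3 4 5; do
        echo "nodes=$n quorum=$(quorum $n) tolerates=$(tolerance $n)"
      done
      ```

      Note that four nodes tolerate no more failures than three, which is why odd-sized etcd clusters are the standard recommendation.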

      When the second node is ready, you will be able to see the workload you deployed in the previous step on this node by navigating to http://second_node_ip:30000/ in your browser.

      Scaling up nodes gives you more Droplets to distribute your workloads on, but you may also want to run more instances of each pod within a workload. To add more pods, return to the Default project page, press the arrow to the left of your hello-world workload, and click + twice to add two more pods.

      Running three Hello World Pods in a Workload

      Rancher will automatically deploy more pods and distribute the running containers to each node depending on where there is availability.

      You can now scale your nodes and pods to suit your application's requirements.


      You've now set up multi-node deployments using Rancher 2.1 on Ubuntu 18.04, and have scaled up to two running nodes and multiple pods within a workload. You can use this strategy to host and scale any kind of Docker container that you need to run in your application and use Rancher's dashboard and alerts to help you maximize the performance of your workloads and nodes within each cluster.
