
      How To Deploy a Go Web Application with Docker and Nginx on Ubuntu 18.04


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Docker is the most common containerization software used today. It enables developers to easily package apps along with their environments, which allows for quicker iteration cycles and better resource efficiency, while providing the same desired environment on each run. Docker Compose is a container orchestration tool that facilitates modern app requirements. It allows you to run multiple interconnected containers at the same time. Instead of manually running containers, orchestration tools give developers the ability to control, scale, and extend multiple containers simultaneously.

      The benefits of using Nginx as a front-end web server are its performance, configurability, and TLS termination, which frees the app from completing these tasks. The nginx-proxy is an automated system for Docker containers that greatly simplifies the process of configuring Nginx to serve as a reverse proxy. Its Let’s Encrypt add-on can accompany the nginx-proxy to automate the generation and renewal of certificates for proxied containers.

      In this tutorial, you will deploy an example Go web application with gorilla/mux as the request router and Nginx as the web server, all inside Docker containers, orchestrated by Docker Compose. You’ll use nginx-proxy with the Let’s Encrypt add-on as the reverse proxy. At the end of this tutorial, you will have deployed a Go web app accessible at your domain with multiple routes, using Docker and secured with Let’s Encrypt certificates.

      Prerequisites

      Step 1 — Creating an Example Go Web App

      In this step, you will set up your workspace and create a simple Go web app, which you’ll later containerize. The Go app will use the powerful gorilla/mux request router, chosen for its flexibility and speed.

      Start off by logging in as sammy:
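
      • ssh sammy@your_server_ip

      Replace your_server_ip with your server's IP address or domain name.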

      For this tutorial, you'll store all data under ~/go-docker. Run the following command to do this:
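
      • mkdir ~/go-docker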

      Navigate to it:
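
      • cd ~/go-docker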

      You'll store your example Go web app in a file named main.go. Create it using your text editor:
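
      • nano main.go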

      Add the following lines:

      main.go

      package main
      
      import (
          "fmt"
          "net/http"
      
          "github.com/gorilla/mux"
      )
      
      func main() {
          r := mux.NewRouter()
      
          r.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
              fmt.Fprintf(w, "<h1>This is the homepage. Try /hello and /hello/Sammyn</h1>")
          })
      
          r.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
              fmt.Fprintf(w, "<h1>Hello from Docker!n</h1>")
          })
      
          r.HandleFunc("/hello/{name}", func(w http.ResponseWriter, r *http.Request) {
              vars := mux.Vars(r)
              title := vars["name"]
      
              fmt.Fprintf(w, "<h1>Hello, %s!n</h1>", title)
          })
      
          http.ListenAndServe(":80", r)
      }
      

      You first import net/http and gorilla/mux packages, which provide HTTP server functionality and routing.

      The gorilla/mux package implements an easier and more powerful request router and dispatcher, while at the same time maintaining interface compatibility with the standard router. Here, you instantiate a new mux router and store it in variable r. Then, you define three routes: /, /hello, and /hello/{name}. The first (/) serves as the homepage and you include a message for the page. The second (/hello) returns a greeting to the visitor. For the third route (/hello/{name}) you specify that it should take a name as a parameter and show a greeting message with the name inserted.

      At the end of your file, you start the HTTP server with http.ListenAndServe and instruct it to listen on port 80, using the router you configured.

      Save and close the file.

      Before running your Go app, you first need to compile and pack it for execution inside a Docker container. Go is a compiled language, so before a program can run, the compiler translates the programming code into executable machine code.

      You've set up your workspace and created an example Go web app. Next, you will deploy nginx-proxy with an automated Let's Encrypt certificate provision.

      Step 2 — Deploying nginx-proxy with Let's Encrypt

      It's important that you secure your app with HTTPS. To accomplish this, you'll deploy nginx-proxy via Docker Compose, along with its Let's Encrypt add-on. This secures Docker containers proxied using nginx-proxy, and takes care of securing your app through HTTPS by automatically handling TLS certificate creation and renewal.

      You'll be storing the Docker Compose configuration for nginx-proxy in a file named nginx-proxy-compose.yaml. Create it by running:

      • nano nginx-proxy-compose.yaml

      Add the following lines to the file:

      nginx-proxy-compose.yaml

      version: '2'
      
      services:
        nginx-proxy:
          restart: always
          image: jwilder/nginx-proxy
          ports:
            - "80:80"
            - "443:443"
          volumes:
            - "/etc/nginx/vhost.d"
            - "/usr/share/nginx/html"
            - "/var/run/docker.sock:/tmp/docker.sock:ro"
            - "/etc/nginx/certs"
      
        letsencrypt-nginx-proxy-companion:
          restart: always
          image: jrcs/letsencrypt-nginx-proxy-companion
          volumes:
            - "/var/run/docker.sock:/var/run/docker.sock:ro"
          volumes_from:
            - "nginx-proxy"
      

      Here you're defining two containers: one for nginx-proxy and one for its Let's Encrypt add-on (letsencrypt-nginx-proxy-companion). For the proxy, you specify the image jwilder/nginx-proxy, expose and map HTTP and HTTPS ports, and finally define volumes that will be accessible to the container for persisting Nginx-related data.

      In the second block, you name the image for the Let's Encrypt add-on configuration. Then, you configure access to Docker's socket by defining a volume, and use volumes_from to inherit the existing volumes from the proxy container. Both containers have the restart property set to always, which instructs Docker to always keep them up (in the case of a crash or a system reboot).

      Save and close the file.

      Deploy the nginx-proxy by running:

      • docker-compose -f nginx-proxy-compose.yaml up -d

      Docker Compose accepts a custom named file via the -f flag. The up command runs the containers, and the -d flag, detached mode, instructs it to run the containers in the background.

      Your final output will look like this:

      Output

      Creating network "go-docker_default" with the default driver Pulling nginx-proxy (jwilder/nginx-proxy:)... latest: Pulling from jwilder/nginx-proxy a5a6f2f73cd8: Pull complete 2343eb083a4e: Pull complete ... Digest: sha256:619f390f49c62ece1f21dfa162fa5748e6ada15742e034fb86127e6f443b40bd Status: Downloaded newer image for jwilder/nginx-proxy:latest Pulling letsencrypt-nginx-proxy-companion (jrcs/letsencrypt-nginx-proxy-companion:)... latest: Pulling from jrcs/letsencrypt-nginx-proxy-companion ... Creating go-docker_nginx-proxy_1 ... done Creating go-docker_letsencrypt-nginx-proxy-companion_1 ... done

      You've deployed nginx-proxy and its Let's Encrypt companion using Docker Compose. Next, you'll create a Dockerfile for your Go web app.

      Step 3 — Dockerizing the Go Web App

      In this section, you will create a Dockerfile containing instructions on how Docker will create an immutable image for your Go web app. Docker builds an immutable app image—similar to a snapshot of the container—using the instructions found in the Dockerfile. The image's immutability guarantees the same environment each time a container, based on the particular image, is run.

      Create the Dockerfile with your text editor:
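
      • nano Dockerfile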

      Add the following lines:

      Dockerfile

      FROM golang:alpine AS build
      RUN apk --no-cache add gcc g++ make git
      WORKDIR /go/src/app
      COPY . .
      RUN go get ./...
      RUN GOOS=linux go build -ldflags="-s -w" -o ./bin/web-app ./main.go
      
      FROM alpine:3.9
      RUN apk --no-cache add ca-certificates
      WORKDIR /usr/bin
      COPY --from=build /go/src/app/bin /go/bin
      EXPOSE 80
      ENTRYPOINT /go/bin/web-app --port 80
      

      This Dockerfile has two stages. The first stage uses the golang:alpine base, which contains pre-installed Go on Alpine Linux.

      Then you install gcc, g++, make, and git as the necessary compilation tools for your Go app. You set the working directory to /go/src/app, which is under the default GOPATH. You also copy the content of the current directory into the container. The first stage concludes with recursively fetching the packages used from the code and compiling the main.go file for release without symbol and debug info (by passing -ldflags="-s -w"). When you compile a Go program, it keeps a separate part of the binary that is used for debugging; however, this extra information takes up space and is not necessary to preserve when deploying to a production environment.

      The second stage bases itself on alpine:3.9 (Alpine Linux 3.9). It installs trusted CA certificates, copies the compiled app binaries from the first stage to the current image, exposes port 80, and sets the app binary as the image entry point.

      Save and close the file.

      You've created a Dockerfile for your Go app that will fetch its packages, compile it for release, and run it upon container creation. In the next step, you will create the Docker Compose yaml file and test the app by running it in Docker.

      Step 4 — Creating and Running the Docker Compose File

      Now, you'll create the Docker Compose config file and write the necessary configuration for running the Docker image you created in the previous step. Then, you will run it and check if it works correctly. In general, the Docker Compose config file specifies the containers, their settings, networks, and volumes that the app requires. You can also specify that these elements can start and stop as one at the same time.

      You will be storing the Docker Compose configuration for the Go web app in a file named go-app-compose.yaml. Create it by running:
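
      • nano go-app-compose.yaml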

      Add the following lines to this file:

      go-app-compose.yaml

      version: '2'
      services:
        go-web-app:
          restart: always
          build:
            dockerfile: Dockerfile
            context: .
          environment:
            - VIRTUAL_HOST=example.com
            - LETSENCRYPT_HOST=example.com
      

      Remember to replace example.com both times with your domain name. Save and close the file.

      This Docker Compose configuration contains one container (go-web-app), which will be your Go web app. It builds the app using the Dockerfile you've created in the previous step, and takes the current directory, which contains the source code, as the context for building. Furthermore, it sets two environment variables: VIRTUAL_HOST and LETSENCRYPT_HOST. nginx-proxy uses VIRTUAL_HOST to know from which domain to accept the requests. LETSENCRYPT_HOST specifies the domain name for generating TLS certificates, and must be the same as VIRTUAL_HOST, unless you specify a wildcard domain.

      Now, you'll run your Go web app in the background via Docker Compose with the following command:

      • docker-compose -f go-app-compose.yaml up -d

      Your final output will look like the following:

      Output

      Creating network "go-docker_default" with the default driver Building go-web-app Step 1/12 : FROM golang:alpine AS build ---> b97a72b8e97d ... Successfully tagged go-docker_go-web-app:latest WARNING: Image for service go-web-app was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`. Creating go-docker_go-web-app_1 ... done

      If you review the output presented after running the command, you'll see that Docker logged every step of building the app image according to the configuration in your Dockerfile.

      You can now navigate to https://example.com/ to see your homepage. The page at your web app's home address is served by the / route you defined in the first step.

      This is the homepage. Try /hello and /hello/Sammy

      Now navigate to https://example.com/hello. You will see the message you defined in your code for the /hello route from Step 1.

      Hello from Docker!

      Finally, try appending a name to your web app's address to test the other route, like: https://example.com/hello/Sammy.

      Hello, Sammy!

      Note: In the case that you receive an error about invalid TLS certificates, wait a few minutes for the Let's Encrypt add-on to provision the certificates. If you are still getting errors after a short time, double check what you've entered against the commands and configuration shown in this step.

      You've created the Docker Compose file and written configuration for running your Go app inside a container. To finish, you navigated to your domain to check that the gorilla/mux router setup is serving requests to your Dockerized Go web app correctly.

      Conclusion

      You have now successfully deployed your Go web app with Docker and Nginx on Ubuntu 18.04. With Docker, maintaining applications becomes less of a hassle, because the environment the app is executed in is guaranteed to be the same each time it's run. The gorilla/mux package has excellent documentation and offers more sophisticated features, such as naming routes and serving static files. For more control over the Go HTTP server module, such as defining custom timeouts, visit the official docs.




      How To Build and Deploy a GraphQL Server with Node.js and MongoDB on Ubuntu 18.04


      The author selected the Wikimedia Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      GraphQL was publicly released by Facebook in 2015 as a query language for APIs that makes it easy to query and mutate data from different data collections. From a single endpoint, you can query and mutate multiple data sources with a single POST request. GraphQL solves some of the common design flaws in REST API architectures, such as situations where the endpoint returns more information than you actually need. With REST APIs, it is also possible that you would need to send requests to multiple endpoints to collect all the information you require, a situation known as the n+1 problem. An example of this would be when you want to show a user's information, but need to collect data such as personal details and addresses from different endpoints.

      These problems don’t apply to GraphQL as it has only one endpoint, which can return data from multiple collections. The data it returns depends on the query that you send to this endpoint. In this query you define the structure of the data you want to receive, including any nested data collections. In addition to a query, you can also use a mutation to change data on a GraphQL server, and a subscription to watch for changes in the data. For more information about GraphQL and its concepts, you can visit the documentation on the official website.

      As GraphQL is a query language with a lot of flexibility, it combines especially well with document-based databases like MongoDB. Both technologies are based on hierarchical, typed schemas and are popular within the JavaScript community. Also, MongoDB’s data is stored as JSON objects, so no additional parsing is necessary on the GraphQL server.

      In this tutorial, you’ll build and deploy a GraphQL server with Node.js that can query and mutate data from a MongoDB database that is running on Ubuntu 18.04. At the end of this tutorial, you’ll be able to access data in your database by using a single endpoint, both by sending requests to the server directly through the terminal and by using the pre-made GraphiQL playground interface. With this playground you can explore the contents of the GraphQL server by sending queries, mutations, and subscriptions. Also, you can find visual representations of the schemas that are defined for this server.

      At the end of this tutorial, you’ll use the GraphiQL playground to quickly interface with your GraphQL server:

      The GraphiQL playground in action

      Prerequisites

      Before you begin this guide you’ll need the following:

      Step 1 — Setting Up the MongoDB Database

      Before creating the GraphQL server, make sure your database is configured correctly, has authentication enabled, and is filled with sample data. To do this, you need to connect to the Ubuntu 18.04 server running the MongoDB database from your command prompt. All steps in this tutorial will take place on this server.

      After you’ve established the connection, run the following command to check if MongoDB is active and running on your server:

      • sudo systemctl status mongodb

      You’ll see the following output in your terminal, indicating the MongoDB database is actively running:

      Output

      ● mongodb.service - An object/document-oriented database
         Loaded: loaded (/lib/systemd/system/mongodb.service; enabled; vendor preset: enabled)
         Active: active (running) since Sat 2019-02-23 12:23:03 UTC; 1 months 13 days ago
           Docs: man:mongod(1)
       Main PID: 2388 (mongod)
          Tasks: 25 (limit: 1152)
         CGroup: /system.slice/mongodb.service
                 └─2388 /usr/bin/mongod --unixSocketPrefix=/run/mongodb --config /etc/mongodb.conf

      Before creating the database where you’ll store the sample data, you need to create an admin user first, since regular users are scoped to a specific database. You can do this by executing the following command that opens the MongoDB shell:
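
      • mongo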

      With the MongoDB shell you'll get direct access to the MongoDB database and can create users or databases and query data. Inside this shell, execute the following command that will add a new admin user to MongoDB. You can replace the highlighted keywords with your own username and password combination, but don't forget to write them down somewhere.

      • use admin
      • db.createUser({
      • user: "admin_username",
      • pwd: "admin_password",
      • roles: [{ role: "root", db: "admin"}]
      • })

      The first line of the preceding command selects the database called admin, which is the database where all the admin roles are stored. With the method db.createUser() you can create the actual user and define its username, password, and roles.

      Executing this command will return:

      Output

      Successfully added user: { "user" : "admin_username", "roles" : [ { "role" : "root", "db" : "admin" } ] }

      You can now close the MongoDB shell by typing exit.

      Next, log in at the MongoDB shell again, but this time with the newly created admin user:

      • mongo -u "admin_username" -p "admin_password" --authenticationDatabase "admin"

      This command will open the MongoDB shell as a specific user, where the -u flag specifies the username and the -p flag the password of that user. The extra flag --authenticationDatabase specifies the database that stores the user's credentials, which in this case is admin.

      Next, you'll switch to a new database and then use the db.createUser() method to create a new user with permissions to make changes to this database. Replace the highlighted sections with your own information, making sure to write these credentials down.

      Run the following command in the MongoDB shell:

      • use database_name
      • db.createUser({
      • user: "username",
      • pwd: "password",
      • roles: ["readWrite"]
      • })

      This will return the following:

      Output

      Successfully added user: { "user" : "username", "roles" : ["readWrite"] }

      After creating the database and user, fill this database with sample data that can be queried by the GraphQL server later on in this tutorial. For this, you can use the bios collection sample from the MongoDB website. By executing the commands in the following code snippet you'll insert a smaller version of this bios collection dataset into your database. You can replace the highlighted sections with your own information, but for the purposes of this tutorial, name the collection bios:

      • db.bios.insertMany([
      • {
      • "_id" : 1,
      • "name" : {
      • "first" : "John",
      • "last" : "Backus"
      • },
      • "birth" : ISODate("1924-12-03T05:00:00Z"),
      • "death" : ISODate("2007-03-17T04:00:00Z"),
      • "contribs" : [
      • "Fortran",
      • "ALGOL",
      • "Backus-Naur Form",
      • "FP"
      • ],
      • "awards" : [
      • {
      • "award" : "W.W. McDowell Award",
      • "year" : 1967,
      • "by" : "IEEE Computer Society"
      • },
      • {
      • "award" : "National Medal of Science",
      • "year" : 1975,
      • "by" : "National Science Foundation"
      • },
      • {
      • "award" : "Turing Award",
      • "year" : 1977,
      • "by" : "ACM"
      • },
      • {
      • "award" : "Draper Prize",
      • "year" : 1993,
      • "by" : "National Academy of Engineering"
      • }
      • ]
      • },
      • {
      • "_id" : ObjectId("51df07b094c6acd67e492f41"),
      • "name" : {
      • "first" : "John",
      • "last" : "McCarthy"
      • },
      • "birth" : ISODate("1927-09-04T04:00:00Z"),
      • "death" : ISODate("2011-12-24T05:00:00Z"),
      • "contribs" : [
      • "Lisp",
      • "Artificial Intelligence",
      • "ALGOL"
      • ],
      • "awards" : [
      • {
      • "award" : "Turing Award",
      • "year" : 1971,
      • "by" : "ACM"
      • },
      • {
      • "award" : "Kyoto Prize",
      • "year" : 1988,
      • "by" : "Inamori Foundation"
      • },
      • {
      • "award" : "National Medal of Science",
      • "year" : 1990,
      • "by" : "National Science Foundation"
      • }
      • ]
      • }
      • ]);

      This code block is an array consisting of multiple objects that contain information about successful scientists from the past. After running these commands to enter this collection into your database, you'll receive the following message indicating the data was added:

      Output

      { "acknowledged" : true, "insertedIds" : [ 1, ObjectId("51df07b094c6acd67e492f41") ] }

      After seeing the success message, you can close the MongoDB shell by typing exit. Next, configure the MongoDB installation to have authorization enabled so only authenticated users can access the data. To edit the configuration of the MongoDB installation, open the file containing the settings for this installation:

      • sudo nano /etc/mongodb.conf

      Uncomment the highlighted line in the following code to enable authorization:

      /etc/mongodb.conf

      ...
      # Turn on/off security.  Off is currently the default
      #noauth = true
      auth = true
      ...
      

      In order to make these changes active, restart MongoDB by running:

      • sudo systemctl restart mongodb

      Make sure the database is running again by executing the command:

      • sudo systemctl status mongodb

      This will yield output similar to the following:

      Output

      ● mongodb.service - An object/document-oriented database
         Loaded: loaded (/lib/systemd/system/mongodb.service; enabled; vendor preset: enabled)
         Active: active (running) since Sat 2019-02-23 12:23:03 UTC; 1 months 13 days ago
           Docs: man:mongod(1)
       Main PID: 2388 (mongod)
          Tasks: 25 (limit: 1152)
         CGroup: /system.slice/mongodb.service
                 └─2388 /usr/bin/mongod --unixSocketPrefix=/run/mongodb --config /etc/mongodb.conf

      To make sure that your user can connect to the database you just created, try opening the MongoDB shell as an authenticated user with the command:

      • mongo -u "username" -p "password" --authenticationDatabase "database_name"

      This uses the same flags as before, only this time the --authenticationDatabase is set to the database you've created and filled with the sample data.

      Now you've successfully added an admin user and another user that has read/write access to the database with the sample data. Also, the database has authorization enabled meaning you need a username and password to access it. In the next step you'll create the GraphQL server that will be connected to this database later in the tutorial.

      Step 2 — Creating the GraphQL Server

      With the database configured and filled with sample data, it's time to create a GraphQL server that can query and mutate this data. For this you'll use Express and express-graphql, which both run on Node.js. Express is a lightweight framework to quickly create Node.js HTTP servers, and express-graphql provides middleware to make it possible to quickly build GraphQL servers.

      The first step is to make sure your machine is up to date:
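
      • sudo apt update
      • sudo apt upgrade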

      Next, install Node.js on your server by running the following commands. Together with Node.js you'll also install npm, a package manager for JavaScript that runs on Node.js.

      • sudo apt install nodejs npm

      After following the installation process, check if the Node.js version you've just installed is v8.10.0 or higher:
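
      • nodejs -v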

      This will return the following:

      Output

      v8.10.0

      To initialize a new JavaScript project, run the following commands on the server as a sudo user, and replace the highlighted keywords with a name for your project.

      First move into the root directory of your server:
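
      • cd ~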

      Once there, create a new directory named after your project:
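
      • mkdir project_name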

      Move into this directory:
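
      • cd project_name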

      Finally, initialize a new npm package with the following command:
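
      • npm init -y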

      After running npm init -y you'll receive a success message that the following package.json file was created:

      Output

      Wrote to /home/username/project_name/package.json:

      {
        "name": "project_name",
        "version": "1.0.0",
        "description": "",
        "main": "index.js",
        "scripts": {
          "test": "echo \"Error: no test specified\" && exit 1"
        },
        "keywords": [],
        "author": "",
        "license": "ISC"
      }

      Note: You can also execute npm init without the -y flag, after which you would answer multiple questions to set up the project name, author, etc. You can enter the details or just press enter to proceed.

      Now that you've initialized the project, install the packages you need to set up the GraphQL server:

      • sudo npm install --save express express-graphql graphql

      Create a new file called index.js and subsequently open this file by running:
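
      • touch index.js
      • nano index.js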

      Next, add the following code block into the newly created file to set up the GraphQL server:

      index.js

      const express = require('express');
      const graphqlHTTP = require('express-graphql');
      const { buildSchema } = require('graphql');
      
      // Construct a schema, using GraphQL schema language
      const schema = buildSchema(`
        type Query {
          hello: String
        }
      `);
      
      // Provide resolver functions for your schema fields
      const resolvers = {
        hello: () => 'Hello world!'
      };
      
      const app = express();
      app.use('/graphql', graphqlHTTP({
        schema,
        rootValue: resolvers
      }));
      app.listen(4000);
      
      console.log(`🚀 Server ready at http://localhost:4000/graphql`);
      

      This code block consists of several parts that are all important. First you describe the schema of the data that is returned by the GraphQL API:

      index.js

      ...
      // Construct a schema, using GraphQL schema language
      const schema = buildSchema(`
        type Query {
          hello: String
        }
      `);
      ...
      

      The type Query defines what queries can be executed and in which format it will return the result. As you can see, the only query defined is hello that returns data in a String format.

      The next section establishes the resolvers, where data is matched to the schemas that you can query:

      index.js

      ...
      // Provide resolver functions for your schema fields
      const resolvers = {
        hello: () => 'Hello world!'
      };
      ...
      

      These resolvers are directly linked to schemas, and return the data that matches these schemas.

      The final part of this code block initializes the GraphQL server, creates the API endpoint with Express, and describes the port on which the GraphQL endpoint is running:

      index.js

      ...
      const app = express();
      app.use('/graphql', graphqlHTTP({
        schema,
        rootValue: resolvers
      }));
      app.listen(4000);
      
      console.log(`🚀 Server ready at http://localhost:4000/graphql`);
      

      After you have added these lines, save and exit from index.js.

      Next, to actually run the GraphQL server you need to run the file index.js with Node.js. This can be done manually from the command line, but it's common practice to set up the package.json file to do this for you.

      Open the package.json file:
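
      • nano package.json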

      Add the following highlighted line to this file:

      package.json

      {
        "name": "project_name",
        "version": "1.0.0",
        "description": "",
        "main": "index.js",
        "scripts": {
          "start": "node index.js",
          "test": "echo "Error: no test specified" && exit 1"
        },
        "keywords": [],
        "author": "",
        "license": "ISC"
      }
      

      Save and exit the file.

      To start the GraphQL server, execute the following command in the terminal:
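
      • npm start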

      Once you run this, the terminal prompt will disappear, and a message will appear to confirm the GraphQL server is running:

      Output

      🚀 Server ready at http://localhost:4000/graphql

      If you now open up another terminal session, you can test if the GraphQL server is running by executing the following command. This sends a curl POST request with a JSON body after the --data flag that contains your GraphQL query to the local endpoint:

      • curl -X POST -H "Content-Type: application/json" --data '{ "query": "{ hello }" }' http://localhost:4000/graphql

      This will execute the query as it's described in the GraphQL schema in your code and return data in a predictable JSON format that is equal to the data as it's returned in the resolvers:

      Output

      { "data": { "hello": "Hello world!" } }

      Note: In case the Express server crashes or gets stuck, you need to manually kill the node process that is running on the server. To kill all such processes, you can execute the following:
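
      • killall node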

      After which, you can restart the GraphQL server by running:
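
      • npm start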

      In this step you've created the first version of the GraphQL server that is now running on a local endpoint that can be accessed on your server. Next, you'll connect your resolvers to the MongoDB database.

      Step 3 — Connecting to the MongoDB Database

      With the GraphQL server in place, you can now set up the connection to the MongoDB database that you configured and filled with data earlier, and create a new schema that matches this data.

      To be able to connect to MongoDB from the GraphQL server, install the JavaScript package for MongoDB from npm:

      • sudo npm install --save mongodb

      Once this has been installed, open up index.js in your text editor:
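
      • nano index.js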

      Next, add the following highlighted code to index.js just after the imported dependencies and fill the highlighted values with your own connection details to the local MongoDB database. The username, password, and database_name are those that you created in the first step of this tutorial.

      index.js

      const express = require('express');
      const graphqlHTTP = require('express-graphql');
      const { buildSchema } = require('graphql');
      const { MongoClient } = require('mongodb');
      
      const context = () => MongoClient.connect('mongodb://username:password@localhost:27017/database_name', { useNewUrlParser: true }).then(client => client.db('database_name'));
      ...
      

      These lines add the connection to the local MongoDB database to a function called context. This context function will be available to every resolver, which is why you use this to set up database connections.

      Next, in your index.js file, add the context function to the initialization of the GraphQL server by inserting the following highlighted lines:

      index.js

      ...
      const app = express();
      app.use('/graphql', graphqlHTTP({
        schema,
        rootValue: resolvers,
        context
      }));
      app.listen(4000);
      
      console.log(`🚀 Server ready at http://localhost:4000/graphql`);
      

      Now you can call this context function from your resolvers, and thereby read variables from the MongoDB database. If you look back to the first step of this tutorial, you can see which values are present in the database. From here, define a new GraphQL schema that matches this data structure. Overwrite the previous value for the constant schema with the following highlighted lines:

      index.js

      ...
      // Construct a schema, using GraphQL schema language
      const schema = buildSchema(`
        type Query {
          bios: [Bio]
        }
        type Bio {
          name: Name,
          title: String,
          birth: String,
          death: String,
          awards: [Award]
        }
        type Name {
          first: String,
          last: String
        },
        type Award {
          award: String,
          year: Float,
          by: String
        }
      `);
      ...
      

      The type Query has changed and now returns a collection of the new type Bio. This new type consists of several types, including two other non-scalar types, Name and Award, meaning these types don't match a predefined format like String or Float. For more information on defining GraphQL schemas you can look at the documentation for GraphQL.

      Also, since the resolvers tie the data from the database to the schema, update the code for the resolvers when you make changes to the schema. Create a new resolver that is called bios, which is equal to the Query that can be found in the schema and the name of the collection in the database. Note that, in this case, the name of the collection in db.collection('bios') is bios, but that this would change if you had assigned a different name to your collection.

      Add the following highlighted line to index.js:

      index.js

      ...
      // Provide resolver functions for your schema fields
      const resolvers = {
        bios: (args, context) => context().then(db => db.collection('bios').find().toArray())
      };
      ...
      

      This function will use the context function, which you can use to retrieve variables from the MongoDB database. Once you have made these changes to the code, save and exit index.js.

      In order to make these changes active, you need to restart the GraphQL server. You can stop the current process by using the keyboard combination CTRL + C and start the GraphQL server by running:
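
      • npm start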

      Now you're able to use the updated schema and query the data that is inside the database. If you look at the schema, you'll see that the Query for bios returns the type Bio; this type could also return the type Name.

      To return all the first and last names for all the bios in the database, send the following request to the GraphQL server in a new terminal window:

      • curl -X POST -H "Content-Type: application/json" --data '{ "query": "{ bios { name { first, last } } }" }' http://localhost:4000/graphql

      This again will return a JSON object that matches the structure of the schema:

      Output

      {"data":{"bios":[{"name":{"first":"John","last":"Backus"}},{"name":{"first":"John","last":"McCarthy"}}]}}

      You can easily retrieve more variables from the bios by extending the query with any of the types that are described in the type for Bio.
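
      For example, a query along the following lines (the exact field selection here is only an illustration) would also return the birth date and awards stored for each bio:

      • curl -X POST -H "Content-Type: application/json" --data '{ "query": "{ bios { name { first, last }, birth, awards { award, year } } }" }' http://localhost:4000/graphql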

      Also, you can retrieve a bio by specifying an id. In order to do this you need to add another type to the Query type and extend the resolvers. To do this, open index.js in your text editor:
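
      • nano index.js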

      Add the following highlighted lines of code:

      index.js

      ...
      // Construct a schema, using GraphQL schema language
      const schema = buildSchema(`
        type Query {
          bios: [Bio]
          bio(id: Int): Bio
        }
      
        ...
      
      // Provide resolver functions for your schema fields
      const resolvers = {
        bios: (args, context) => context().then(db => db.collection('bios').find().toArray()),
        bio: (args, context) => context().then(db => db.collection('bios').findOne({ _id: args.id }))
      };
      ...
      

      Save and exit the file.

      In the terminal that is running your GraphQL server, press CTRL + C to stop it from running, then execute the following to restart it:
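
      • npm start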

      In another terminal window, execute the following GraphQL request:

      • curl -X POST -H "Content-Type: application/json" --data '{ "query": "{ bio(id: 1) { name { first, last } } }" }' http://localhost:4000/graphql

      This returns the entry for the bio that has an id equal to 1:

      Output

      { "data": { "bio": { "name": { "first": "John", "last": "Backus" } } } }

      Being able to query data from a database is not the only feature of GraphQL; you can also change the data in the database. To do this, open up index.js:
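
      • nano index.js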

      In addition to the type Query, you can also use the type Mutation, which allows you to change data in the database. To use this type, add it to the schema and also create input types by inserting these highlighted lines:

      index.js

      ...
      // Construct a schema, using GraphQL schema language
      const schema = buildSchema(`
        type Query {
          bios: [Bio]
          bio(id: Int): Bio
        }
        type Mutation {
          addBio(input: BioInput) : Bio
        }
        input BioInput {
          name: NameInput
          title: String
          birth: String
          death: String
        }
        input NameInput {
          first: String
          last: String
        }
      ...
      

      These input types define which variables can be used as inputs, which you can access in the resolvers and use to insert a new document in the database. Do this by adding the following lines to index.js:

      index.js

      ...
      // Provide resolver functions for your schema fields
      const resolvers = {
        bios: (args, context) => context().then(db => db.collection('bios').find().toArray()),
        bio: (args, context) => context().then(db => db.collection('bios').findOne({ _id: args.id })),
        addBio: (args, context) => context().then(db => db.collection('bios').insertOne({ name: args.input.name, title: args.input.title, death: args.input.death, birth: args.input.birth})).then(response => response.ops[0])
      };
      ...
      

      Just as with the resolvers for regular queries, you need to return a value from the resolver in index.js. In the case of a Mutation where the type Bio is mutated, you would return the value of the mutated bio.

      At this point, your index.js file will contain the following lines:

      index.js

      const express = require('express');
      const graphqlHTTP = require('express-graphql');
      const { buildSchema } = require('graphql');
      const { MongoClient } = require('mongodb');
      
      const context = () => MongoClient.connect('mongodb://username:password@localhost:27017/database_name', { useNewUrlParser: true })
        .then(client => client.db('database_name'));
      
      // Construct a schema, using GraphQL schema language
      const schema = buildSchema(`
        type Query {
          bios: [Bio]
          bio(id: Int): Bio
        }
        type Mutation {
          addBio(input: BioInput) : Bio
        }
        input BioInput {
          name: NameInput
          title: String
          birth: String
          death: String
        }
        input NameInput {
          first: String
          last: String
        }
        type Bio {
          name: Name,
          title: String,
          birth: String,
          death: String,
          awards: [Award]
        }
        type Name {
          first: String,
          last: String
        },
        type Award {
          award: String,
          year: Float,
          by: String
        }
      `);
      
      // Provide resolver functions for your schema fields
      const resolvers = {
        bios: (args, context) => context().then(db => db.collection('bios').find().toArray()),
        bio: (args, context) => context().then(db => db.collection('bios').findOne({ _id: args.id })),
        addBio: (args, context) => context().then(db => db.collection('bios').insertOne({ name: args.input.name, title: args.input.title, death: args.input.death, birth: args.input.birth})).then(response => response.ops[0])
      };
      
      const app = express();
      app.use('/graphql', graphqlHTTP({
        schema,
        rootValue: resolvers,
        context
      }));
      app.listen(4000);
      
      console.log(`🚀 Server ready at http://localhost:4000/graphql`);
      

      Save and exit index.js.

      To check if your new mutation is working, restart the GraphQL server by pressing CTRL + C and running npm start in the terminal that is running your GraphQL server. Then open another terminal session to execute the following curl request. Just as with the curl request for queries, the body in the --data flag will be sent to the GraphQL server. The highlighted parts will be added to the database:

      • curl -X POST -H "Content-Type: application/json" --data '{ "query": "mutation { addBio(input: { name: { first: "test", last: "user" } }) { name { first, last } } }" }' http://localhost:4000/graphql

      This returns the following result, meaning you just inserted a new bio to the database:

      Output

      { "data": { "addBio": { "name": { "first": "test", "last": "user" } } } }

      In this step, you created the connection with MongoDB and the GraphQL server, allowing you to retrieve and mutate data from this database by executing GraphQL queries. Next, you'll expose this GraphQL server for remote access.

      Step 4 — Allowing Remote Access

      Having set up the database and the GraphQL server, you can now configure the GraphQL server to allow remote access. For this you'll use Nginx, which you set up in the prerequisite tutorial How to install Nginx on Ubuntu 18.04. This Nginx configuration can be found in the /etc/nginx/sites-available/example.com file, where example.com is the server name you added in the prerequisite tutorial.

      Open this file for editing, replacing your domain name with example.com:

      • sudo nano /etc/nginx/sites-available/example.com

      In this file you can find a server block that listens to port 80, where you've already set up a value for server_name in the prerequisite tutorial. Inside this server block, change the value for root to be the directory in which you created the code for the GraphQL server and add index.js as the index. Also, within the location block, set a proxy_pass so you can use your server's IP or a custom domain name to refer to the GraphQL server:

      /etc/nginx/sites-available/example.com

      server {
        listen 80;
        listen [::]:80;
      
        root /project_name;
        index index.js;
      
        server_name example.com;
      
        location / {
          proxy_pass http://localhost:4000/graphql;
        }
      }
      

      Make sure there are no Nginx syntax errors in this configuration file by running:
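
      • sudo nginx -t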

      You will receive the following output:

      Output

      nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
      nginx: configuration file /etc/nginx/nginx.conf test is successful

      When there are no errors found for the configuration file, restart Nginx:

      • sudo systemctl restart nginx

      Now you will be able to access your GraphQL server from any terminal session by executing the following command, replacing example.com with either your server's IP or your custom domain name:

      • curl -X POST -H "Content-Type: application/json" --data '{ "query": "{ bios { name { first, last } } }" }' http://example.com

      This will return the same JSON object as the one of the previous step, including any additional data you might have added by using a mutation:

      Output

      {"data":{"bios":[{"name":{"first":"John","last":"Backus"}},{"name":{"first":"John","last":"McCarthy"}},{"name":{"first":"test","last":"user"}}]}}

      Now that you have made your GraphQL server accessible remotely, make sure your GraphQL server doesn't go down when you close the terminal or the server restarts. This way, your MongoDB database will be accessible via the GraphQL server whenever you want to make a request.

      To do this, use the npm package forever, a CLI tool that ensures that your command line scripts run continuously, or get restarted in case of any failure.

      Install forever with npm:

      • sudo npm install forever -g

      Once it is done installing, add it to the package.json file:

      package.json

      {
        "name": "project_name",
        "version": "1.0.0",
        "description": "",
        "main": "index.js",
        "scripts": {
          "start": "node index.js",
          "deploy": "forever start --minUptime 2000 --spinSleepTime 5 index.js",
          "test": "echo "Error: no test specified" && exit 1"
        },
        ...
      

      To start the GraphQL server with forever enabled, run the following command:
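
      • npm run deploy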

      This will start the index.js file containing the GraphQL server with forever, and ensure it will keep running with a minimum uptime of 2000 milliseconds and 5 milliseconds between every restart in case of a failure. The GraphQL server will now continuously run in the background, so you don't need to open a new tab any longer when you want to send a request to the server.

      You've now created a GraphQL server that is using MongoDB to store data and is set up to allow access from a remote server. In the next step you'll enable the GraphiQL playground, which will make it easier for you to inspect the GraphQL server.

      Step 5 — Enabling GraphiQL Playground

      Being able to send cURL requests to the GraphQL server is great, but it would be faster to have a user interface that can execute GraphQL requests immediately, especially during development. For this you can use GraphiQL, an interface supported by the package express-graphql.

      To enable GraphiQL, edit the file index.js:
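
      • nano index.js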

      Add the following highlighted lines:

      index.js

      const app = express();
      app.use('/graphql', graphqlHTTP({
        schema,
        rootValue: resolvers,
        context,
        graphiql: true
      }));
      app.listen(4000);
      
      console.log(`🚀 Server ready at http://localhost:4000/graphql`);
      

      Save and exit the file.

      In order for these changes to become visible, make sure to stop forever by executing:
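
      • forever stop index.js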

      Next, start forever again so the latest version of your GraphQL server is running:
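
      • npm run deploy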

      Open a browser at the URL http://example.com, replacing example.com with your domain name or your server IP. You will see the GraphiQL playground, where you can type GraphQL requests.

      The initial screen for the GraphiQL playground

      On the left side of this playground you can type the GraphQL queries and mutations, while the output will be shown on the right side of the playground. To test if this is working, type the following query on the left side:

      query {
        bios {
          name {
            first
            last
          }
        }
      }
      

      This will output the same result on the right side of the playground, again in JSON format:

      The GraphiQL playground in action

      Now you can send GraphQL requests using the terminal and the GraphiQL playground.

      Conclusion

      In this tutorial you've set up a MongoDB database and retrieved and mutated data from this database using GraphQL, Node.js, and Express for the server. Additionally, you configured Nginx to allow remote access to this server. Not only can you send requests to this GraphQL server directly, you can also use GraphiQL as a visual, in-browser GraphQL interface.

      If you want to learn about GraphQL, you can watch a recording of my presentation on GraphQL at NDC {London} or visit the website howtographql.com for tutorials about GraphQL. To study how GraphQL interacts with other technologies, check out the tutorial on How to Manually Set Up a Prisma Server on Ubuntu 18.04, and for more information on building applications with MongoDB, see How To Build a Blog with Nest.js, MongoDB, and Vue.js.




      How to Deploy Kubernetes on Linode with Rancher 2.2


      Updated by Linode. Written by Linode.

      Rancher title graphic.

      What is Rancher?

      Rancher is a web application that provides an interactive and easy-to-use GUI for creating and managing Kubernetes clusters. Rancher has plugins for interacting with multiple cloud hosts, including Linode, and you can manage clusters across different hosting providers.

      Rancher also maintains a curated list of apps that offer simple configuration options and a click-to-deploy interface. If you prefer to deploy your apps from a Helm chart, you can do that too.

      Guide Outline

      This guide will show how to:

      • Install Rancher on a Linode

      • Deploy a Kubernetes cluster on Linode using Rancher

      • Deploy an app from the Rancher app library to your cluster

      • Take advantage of the Linode CCM and CSI for Kubernetes via Rancher.

      If you are not familiar with Kubernetes and container deployments, we recommend that you review our other guides on these subjects first.

      Caution

      This guide’s example instructions will create several billable resources on your Linode account. If you do not want to keep using the example cluster that you create, be sure to remove it when you have finished the guide.

      If you remove the resources afterward, you will only be billed for the hour(s) that the resources were present on your account.

      If You Already Run Rancher

      If you already run Rancher and would like to start creating clusters on Linode, you can skip to the Activate the Linode Node Driver and Deploy a Kubernetes Cluster sections. The Deploy an App section will show how you can take advantage of the Linode CCM and CSI.

      You may need to update your local Rancher installation to see the Linode node driver as an option.

      Before You Begin

      The Rancher web application will run on a Linode in your Cloud Manager account. Create and prepare the Linode that will run Rancher:

      1. Create a Linode running Ubuntu 18.04 in the data center of your choice. Follow the Getting Started guide for instructions on setting up your server. It is recommended that you create a 2GB Linode or larger.

        Note

        You will be able to create Kubernetes clusters in any Linode data center from the Rancher UI, even if your Rancher Linode is located in a different region.

      2. The Rancher web application is run inside a Docker container, so you will also need to install Docker CE on your Linode. Follow the instructions for installing Docker CE on Ubuntu 18.04 and then return to this guide.

      You will also need to generate an API token and prepare a domain zone:

      1. Rancher will need a Linode APIv4 token from your Linode account in order to create your cluster. Review the instructions from the Getting Started with the Linode API guide to get a token.

      2. The Set Up DNS section of this guide will assign an address to this guide’s example app. In order to do so, you must already have a domain zone created in the Linode Cloud Manager. If you do not have a zone created, review the instructions from our DNS Manager guide.

        Note

        If you haven’t purchased a domain name, then you can read along with the DNS section of this guide without implementing it in your own cluster.

      Install Rancher

      After you have your Linode up and running with Docker, you can then install and run Rancher:

      1. Log in to your Linode via SSH:

        ssh user@your_linode_ip
        
      2. Create a rancher directory inside /opt; this folder will hold settings and keys for Rancher:

        sudo mkdir -p /opt/rancher
        
      3. Run Rancher:

        docker run -d -p 80:80 -p 443:443 \
          --restart=unless-stopped \
          -v /opt/rancher:/var/lib/rancher \
          rancher/rancher:latest
        
        • The --restart option ensures that the application will be restarted if the Linode is ever rebooted.
        • The -v option binds the /opt/rancher directory on the Linode to the container so that the application can persist its data.
      4. Visit your Linode’s IP address in your browser. Your browser will display an SSL certificate warning, but you can bypass it.

        Note

        If you are interested in setting up an SSL certificate with Rancher, you may consider also creating an NGINX container with an SSL certificate that proxies traffic to the Rancher container.

      5. You should see a welcome screen from Rancher. Enter a new password for the default administrative user for Rancher (which is named admin) and click Continue:

        Rancher first load screen

      6. The server URL entry form will appear, which should already show your Rancher server’s IP address. Click the Save URL button to continue:

        Rancher enter server URL screen

      7. The default home page for your Rancher app will appear. This page normally displays a list of all of your Kubernetes clusters. Since you have not created a cluster yet, a placeholder image is shown instead:

        Rancher enter server URL screen

        Note

        The main interface for navigating Rancher is via the blue navigation bar that spans the top of the page. The items in this navigation bar will change when you view different parts of the application.

      8. Before you can create your first cluster, you will need to enable Linode’s integration with Rancher. Proceed to the Activate the Linode Node Driver for Rancher section.

      Activate the Linode Node Driver for Rancher

      Rancher includes two kinds of integrations with hosting providers:

      • A cluster driver allows Rancher to create and administer a cloud host-launched Kubernetes cluster. In a host-launched Kubernetes cluster, your hosting platform operates the new cluster’s control plane and etcd components, while you provision and configure your worker nodes (via Rancher as well).

      • A node driver allows Rancher to create and administer a Rancher-launched Kubernetes cluster. Rancher will directly provision your control plane and etcd nodes along with your worker nodes. Your cloud host does not manage your control plane and etcd components.

      Rancher is shipped with a node driver for Linode, but it is inactive by default. To activate the Linode node driver:

      1. Click on Tools from the main navigation bar and select Drivers from the dropdown menu.

        Rancher Drivers menu option highlighted

      2. Click on the Node Drivers tab:

        Rancher Node Drivers tab highlighted

      3. Scroll down to Linode’s driver. Click the corresponding more options ellipsis and click on the Activate item in the dropdown menu that appears:

        Rancher activate Linode node driver

      4. Activating the Linode node driver does not install the Linode CCM and CSI for your new clusters. Further instructions for enabling these features are listed in the Deploy a Kubernetes Cluster section. You should wait until the node driver is listed as Active before moving on.

        What are the Linode CCM and CSI?

        The CCM (Cloud Controller Manager) and CSI (Container Storage Interface) are Kubernetes addons published by Linode. These addons provide additional integrations with the Linode cloud platform. Specifically, you can use them to create NodeBalancers, DNS records, and Block Storage Volumes.

      Deploy a Kubernetes Cluster

      Add a Node Template

      Node templates are used by Rancher to provision cluster nodes. When you create a node template, you can specify configuration parameters, like the region, instance type, and Linux image that should be used for any node in the cluster. You can set different templates for different clusters, which allows you to choose the right resources for your different workloads.

      Before provisioning your cluster, you will need to add the node template it will use. To add a node template:

      1. Click on your User Avatar icon in the upper right-hand corner and select Node Templates.

        Click on the User Avatar icon

      2. On the Node Templates page, click on the Add Template button in the upper right-hand corner.

      3. The Add Node Template dialog will appear. Select Linode from the list of providers and enter your Linode APIv4 token in the API Token field.

        Add Node Template dialog window

      4. Click on the Next: Configure Instances button.

      5. Another dialog will appear which accepts options for your new node template. Under the Instance Options section, set the preferred region, instance type, and Linux image for your nodes, along with any Cloud Manager tags you’d like to apply to your nodes.

        Rancher Add Node Template form - Linode options

        Note

        We recommend that you choose a Linode 2GB or higher for the nodes in a Kubernetes cluster. Additionally, the Block Storage service has not been deployed to our Atlanta (US-Southeast) data center; since this guide uses Block Storage Volumes in its example cluster, choose a different region when creating your node template.

      6. Enter a name for your template. This can be arbitrary, but it’s helpful to call it something that will help you remember the options you set in the template form, like newark-linode8gb-ubuntu1804.

        Rancher Add Node Template form - template name

      7. When finished with the form, click the Create button.

        Note

        All other node template settings are optional and will not be used in this guide. You do not need to set a password for the nodes created through this template; Rancher will generate one automatically. Additionally, Rancher provides command-line access to the Kubernetes API for your cluster, so logging into your nodes generally isn't needed.

      8. You will be returned to the Node Templates page where your node template will be visible.

        Rancher Node Template list page

      Provision a Cluster

      1. Return to the home page by hovering over the Global dropdown menu in the main navigation bar and then clicking the Global menu item:

        Rancher return to the global view

      2. Click on the Add Cluster button. The Add Cluster form will appear.

      3. Select the Linode driver from the From nodes in an infrastructure provider section:

        Rancher Add Cluster form - Linode node driver selected

      4. Enter a name for your cluster in the Cluster Name field. The name for our example cluster will be example-cluster.

      5. In the Node Pools section, under the Template column, you should see the node template you created in the previous section of this guide. Set a value for the Name Prefix field. Each Linode that Rancher creates for that node pool will be named according to the prefix you set (e.g. if the name prefix is example-cluster-, then your Linodes will be named example-cluster-1, example-cluster-2, etc.).

        Rancher Add Cluster form - Add Node Template button highlighted

        • A node pool is Rancher’s method for creating the nodes (Linodes) that form your cluster. You specify how many nodes should be in a node pool, along with the node template for those nodes in that pool. If Rancher later detects that one of the nodes has lost connectivity with the cluster, it will automatically create a new one.

        • When configuring a node pool, you also specify which of your cluster’s components operate on the nodes in the pool. For example, you can have one pool that only runs your cluster’s etcd database, another pool which only runs your control plane components (the Kubernetes API server, scheduler, and controller manager), and a third pool which runs your application workloads.

      6. Set the value for the Count field to 3. Rancher will create 3 Linodes for the node pool.

      7. Toggle on the checkboxes for etcd, Control Plane, and Worker. In our example cluster, the nodes for this node pool will run each of these components. Your configured form should look like the following:

        Rancher Add Node Template form - single node pool configuration

        Note

        When you set up a cluster for production, avoid having a node pool that runs your workloads alongside your etcd or control plane components. An example node pool configuration which splits the etcd and control plane components from your workloads would look like the following:

        Rancher Add Node Template form - example production node pool configuration

        Review Rancher’s Production Ready Cluster documentation for more guidance on setting up production clusters.

      8. The last part in creating your cluster is to configure Linode’s CCM and CSI. In the Cluster Options section, toggle on the Custom option for the Cloud Provider field, then click on the Edit as YAML button above the section.

        Rancher Add Cluster form - Cluster Options section

      9. A text editor will appear:

        Rancher Add Cluster form - YAML editor

      10. Insert the following snippet before the first line in the editor (above the addon_job_timeout declaration):

        addons_include:
          - https://linode.github.io/rancher-ui-driver-linode/releases/v0.2.0/linode-addons.yml
        addons: |-
          ---
          apiVersion: v1
          kind: Secret
          metadata:
            name: linode
            namespace: kube-system
          stringData:
            token: "..."
            region: "..."
          ---
      11. Insert your Linode APIv4 token in the token field from this snippet. Also, enter the label for your node template’s data center in the region field. This label should be lower-case (e.g. us-east instead of US-East).
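
        For example, if your node template uses the Newark data center, the stringData block from the snippet above would end up looking like the following (the token value is a placeholder for your own APIv4 token):

          stringData:
            token: "YOUR_LINODE_APIV4_TOKEN"
            region: "us-east"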

      12. Scroll down in the editor to the services section. Remove the existing services section and replace it with this snippet:

        services:
          etcd:
            backup_config:
              interval_hours: 12
              retention: 6
            creation: "12h"
            extra_args:
              heartbeat-interval: 500
              election-timeout: 5000
            retention: "72h"
            snapshot: true
          kube-api:
            always_pull_images: false
            pod_security_policy: false
            service_node_port_range: "30000-32767"
            extra_args:
              feature-gates: "PersistentLocalVolumes=true,VolumeScheduling=true,CSINodeInfo=true,CSIDriverRegistry=true,BlockVolume=true,CSIBlockVolume=true"
          kubelet:
            fail_swap_on: false
            extra_args:
              cloud-provider: "external"
              feature-gates: "PersistentLocalVolumes=true,VolumeScheduling=true,CSINodeInfo=true,CSIDriverRegistry=true,BlockVolume=true,CSIBlockVolume=true"
          kube-controller:
            extra_args:
              cloud-provider: "external"
      13. After you finish with both of these steps, your YAML should resemble this completed snippet.

        Note

        Avoid copying and pasting the entire completed snippet example, as it has some variable values outside of the addons_include, addons, and services sections that may not match your deployment (e.g. the kubernetes_version setting).

        Instead, compare your YAML file with the completed example to ensure you have inserted the addons_include, addons, and services sections in the right places.

      14. Click the Create button below the YAML editor. You will be returned to the global home page, and your new cluster will be listed (in the Provisioning state):

        Rancher Global home page with new cluster listed

      15. If you visit the list of your Linodes in the Linode Cloud Manager, you will see the new nodes in your cluster:

        Linode Cloud Manager - new cluster nodes

        Note

        If your nodes do not appear in the Linode Cloud Manager as expected, then you may have run into a limit on the number of resources allowed on your Linode account. Contact Linode Support if you believe this may be the case.
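
        Once the cluster finishes provisioning, you can also confirm that the Linode CCM and CSI addons were deployed by listing the system pods from the kubectl prompt described in the Load the Kubectl Command Line section below. The exact pod names vary by addon version, but you should see entries for the Linode cloud controller manager and CSI controller among them:

          kubectl -n kube-system get pods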

      Explore the New Cluster

      1. To inspect your new cluster, click on its name in the global list of clusters:

        Rancher home page - new cluster highlighted

        Or, hover over the Global menu and then select the name of the new cluster:

        Rancher cluster selection menu

      2. Your cluster’s dashboard will appear. If it is still provisioning, it will display a placeholder graphic:

        Rancher cluster dashboard

      3. Note that the items in the main navigation bar will change when viewing a cluster:

        Rancher navigation bar - cluster mode

      4. Click on the Nodes main menu item:

        Rancher cluster navigation bar - Nodes highlighted

      5. A list of your cluster’s nodes will appear. If they are still provisioning, they will be labelled with the most recent step in the provisioning sequence:

        Rancher cluster nodes list - provisioning

        When the nodes finish provisioning, the Active label will be displayed:

        Rancher cluster nodes list - active

      6. When your nodes have finished provisioning, click the Cluster item in the navigation bar to navigate back to the dashboard. A summary of the cluster’s resource usage is displayed:

        Rancher cluster dashboard

      Load the Kubectl Command Line

      In addition to managing your cluster via Rancher’s interactive UI, Rancher also provides command-line access to your cluster’s Kubernetes API:

      • From your cluster’s dashboard, click the Launch kubectl button:

        Rancher cluster dashboard - kubectl button highlighted

        A new dialog will appear with a command-line prompt. You are able to use kubectl to interact with your cluster:

        Rancher kubectl web CLI

      • Alternatively, you can use the kubectl CLI from your local computer if you have it installed. From your cluster’s dashboard in Rancher, click on the Kubeconfig File button:

        Rancher cluster dashboard - kubeconfig button highlighted

        A new dialog will appear with the correct kubeconfig for your cluster. Copy the contents of the configuration to a file on your computer. Then, pass it as an option when using the CLI:

        kubectl --kubeconfig /path/to/your/local/kube.config get pods
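
        Alternatively, you can export the KUBECONFIG environment variable so that every subsequent kubectl invocation uses your cluster's configuration (the file path is a placeholder for wherever you saved the kubeconfig):

        export KUBECONFIG=/path/to/your/local/kube.config
        kubectl get nodes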
        

      Rancher Projects

      Rancher introduces an organizational concept called projects. Projects group together Kubernetes namespaces and allow you to perform actions across all namespaces in a project, like adjusting administrative access to them.

      1. To view your cluster’s projects, click on the cluster selection menu in Rancher’s navigation bar, then hover over your cluster in the dropdown menu’s Clusters list:

        Rancher cluster selection menu - list of projects

        A new cluster created through Rancher will have a Default project and a System project.

      2. Options for creating new projects are exposed under the navigation bar’s Projects/Namespaces item:

        Rancher cluster navigation bar - Projects/Namespaces highlighted

        However, this guide will deploy its example app to the Default project.

      3. Navigate to the Default project. Click on the cluster selection menu in Rancher’s navigation bar, then hover over your cluster in the dropdown menu’s Clusters list, and then click on the Default item that appears:

        Rancher cluster selection menu - Default project highlighted

      4. The Workloads view for the project will appear, but it will be empty, as no apps have been deployed yet. Also, note that the items in the navigation bar will change when viewing a project:

        Rancher navigation bar - project mode

      Deploy an App from the Rancher App Library

      Rancher provides a library of apps which offer easy setup through Rancher’s UI. The apps in this curated library are based on existing Helm charts.

      A Helm chart is a popular format for describing Kubernetes resources. Rancher extends the Helm chart format with some additional configuration files, and this extended packaging is referred to as a Rancher chart. The additional information in the Rancher chart format is used to create interactive forms for configuring the app through the Rancher UI.

      Note

      It is possible to enable more app catalogs than just Rancher’s curated library, including a catalog of stable Helm charts. These other apps will not feature Rancher’s easy setup forms and will instead require manual entry of configuration options.

      To test out deploying an app on your new cluster, launch the WordPress app from the Rancher library. This app will also take advantage of Linode’s Block Storage, NodeBalancer, and DNS services (via the CCM and CSI):

      1. From the Default project in your new cluster, click on the Apps item in the navigation bar:

        Rancher project navigation bar - Apps highlighted

      2. The Apps view for your project will appear, but it will show a placeholder image because no apps have been deployed yet. Click on this view’s Launch button.

      3. A list of apps will appear. Scroll down to the WordPress app and click its View Details button:

        WordPress app in Rancher app library catalog

      4. The setup form for the WordPress app will appear. In the WordPress Settings section, enter a username, password, and email for your WordPress admin user:

        Rancher WordPress setup form - WordPress Settings

        Note

        Avoid using symbols in the password you enter, as some symbols can cause syntax errors for this Rancher chart.

      5. In the Database Settings section, enter a password for WordPress’ database user. Then set MariaDB Persistent Volume Enabled to True and select the linode-block-storage option from the Default StorageClass for MariaDB dropdown menu:

        Rancher WordPress setup form - Database Settings

        These settings will result in your database deployment keeping its data in a Linode Block Storage Volume.

        Note

        The default value for the MariaDB Volume Size field is 8GiB, but the minimum size for a Block Storage Volume is 10GiB. The Linode CSI will automatically upgrade any persistent volume claims that are smaller than 10GiB to 10GiB.

      6. In the Services and Load Balancing section, set Expose app using Layer 7 Load Balancer to False, then choose the L4 Balancer option from the WordPress Service Type dropdown menu:

        Rancher WordPress setup form - Services and Load Balancing Settings

        Selecting the L4 Balancer option will result in the creation of a Linode NodeBalancer.
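
        Behind the scenes, this generally corresponds to the WordPress Service being given the LoadBalancer type, which the Linode CCM fulfills by creating a NodeBalancer. A minimal sketch of such a Service is shown below; the chart generates the real one for you, and the name, selector, and ports here are illustrative only:

          apiVersion: v1
          kind: Service
          metadata:
            name: wordpress-wordpress
          spec:
            type: LoadBalancer
            selector:
              app: wordpress
            ports:
              - name: http
                port: 80
                targetPort: 80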

      7. Click the Launch button at the bottom of the form.

      8. You will be directed back to the project's Apps view, where your new WordPress app will be listed. The listing will show a red bar with the number 2 below it; the 2 represents the number of pods in the app, and the red color indicates that they are not yet available.

      9. Click on the name of the app:

        Rancher deployed apps list - WordPress app in middle of provisioning

      10. A detail view for the app will appear. The Workloads section will show the MariaDB and WordPress deployments for your app. They may still be in the middle of provisioning:

        Rancher app detail view - Workloads section

        When the deployments finish provisioning, they will display the Active label and the red bars under the Scale column will turn green.

      11. The Endpoints section displays the address of the NodeBalancer that was created for your app. After your deployments have finished provisioning, click on the HTTP NodeBalancer endpoint:

        Rancher app detail view - NodeBalancer HTTP endpoint highlighted

        Your WordPress site should open in a new browser tab.

      12. Visit the wp-login.php page on your site (e.g. at http://your-nodebalancer-name.newark.nodebalancer.linode.com/wp-login.php). You should be able to log in with the WordPress admin username and password you specified earlier in the app's form.

        Note

        If you view the Volumes and NodeBalancers areas of the Linode Cloud Manager, you should see the new Volume and NodeBalancer that were created for this app. They will have random alphanumeric names like pvc77e0c083490411e9beabf23c916b1.

      Set Up DNS for the WordPress App

      You can currently visit your new app from the NodeBalancer’s generic subdomain. With the Linode CCM, it’s also possible to assign a custom domain or subdomain to your app:

      1. In the detail view for your WordPress app, scroll down to the Services section.

      2. Click on the more options ellipsis for the wordpress-wordpress service, then click on the View/Edit YAML item in the dropdown menu that appears:

        Rancher Services section - View/Edit YAML option highlighted

      3. A YAML editor for the service will appear. Find the annotations section under the metadata sections, then insert this line:

        metadata:
          annotations:
            external-dns.alpha.kubernetes.io/hostname: wordpress.example.com

        Replace wordpress.example.com with the address you want to use for your app. As a reminder, example.com needs to exist as a domain zone on your Linode account. If you’re not sure if you’ve inserted the new line in the right location, compare your YAML with this snippet of an updated metadata section.
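
        If you prefer the command line, the same annotation can be applied with kubectl. Run this from the Launch kubectl prompt or with your local kubeconfig; you may need to add a -n flag with the app's namespace, depending on where the chart was deployed:

          kubectl annotate service wordpress-wordpress \
            external-dns.alpha.kubernetes.io/hostname=wordpress.example.com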

      4. Click the Save button below the YAML editor.

      5. The Linode CCM will create a DNS record in your domain’s zone and automatically assign the IP of your NodeBalancer to it. If you visit the domain’s zone in the Linode Cloud Manager, the new A record should appear.

      6. It may take some time for Linode’s DNS database to update, so if you don’t see the record show up in the Cloud Manager immediately, try refreshing it after a few minutes.

        After the record becomes visible in the Cloud Manager, it can also take time for the DNS change to propagate to your local ISP. After the DNS change has propagated, you should be able to view your WordPress app by navigating to the address you set up.
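
        You can check propagation from your own workstation with a standard DNS lookup; once the record has propagated, the command below should print your NodeBalancer's IP address (replace the hostname with the one you configured):

          dig +short wordpress.example.com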

      Scaling your Cluster and App

      Rancher makes it easy to scale the number of nodes in your cluster and to scale the number of replica pods in your app’s deployments.

      Scale your Cluster

      Caution

      The example instructions in this section will add nodes to your cluster, which will add further billable services to your account. You can read these instructions without performing them on your own account if you prefer.

      1. Return to the home page by hovering over the Global dropdown menu in the main navigation bar and then clicking the Global menu item:

        Rancher return to the global view

      2. Your cluster will show up on the page that appears. Click on the more options ellipsis corresponding to the cluster and then click on the Edit item from the dropdown menu:

        Rancher global list of clusters - Edit option highlighted

      3. The same form that you completed when creating your cluster will appear. In the Node Pools section, increase the count of your pool:

        Rancher edit cluster form - node pool count highlighted

        Note

        Your example cluster’s nodes all run etcd, so you can only scale the node pool to a count of 1, 3, or 5. If you had a separate node pool for your workloads, you could scale it freely to any count.

      4. Click the Save button at the bottom of the form. You will be redirected to the dashboard for the cluster.

      5. The dashboard will report that the cluster is updating. When the new nodes have finished provisioning and are registered with Kubernetes, the dashboard will show that all components and nodes are responding normally.

      Scale your App

      Rancher also provides an easy way to scale your app’s deployments:

      1. Navigate to the Default project of your cluster:

        Rancher cluster selection menu - Default project highlighted

      2. Click on the Apps item in the navigation bar:

        Rancher project navigation bar - Apps highlighted

      3. Click on the name of the WordPress app:

        Rancher deployed apps list - WordPress app completed provisioning

      4. In the Workloads section, click on the wordpress-wordpress link in the Name column for that deployment:

        Rancher WordPress workloads - deployment name highlighted

      5. A detail view for the deployment will appear. In the Config Scale section at the top, click on the + button to increase the replica count for the deployment by one (to a total of 2):

        Rancher WordPress deployment detail view - config scale highlighted

      6. A second pod will appear in the Pods section on this page, and there will be an Updating label at the top of the page. You may see a series of warning messages about the new pod not being available. Eventually, the new pod will be labelled as Running.
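
        The same scaling operation can also be performed with kubectl if you prefer. The deployment name below matches the one shown in the Workloads section, and you may need a -n flag with the app's namespace, depending on where the chart was deployed:

          kubectl scale deployment wordpress-wordpress --replicas=2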

      Set Up GitHub Authentication

      In addition to manually creating users that can access your Rancher application, you can also enable GitHub authentication and then invite GitHub users:

      1. From the Global home page, click on the Security item in the navigation bar and then select Authentication from the dropdown menu:

        Rancher Security navigation bar item - Authentication highlighted

      2. Choose GitHub from the authentication options listed:

        Rancher Authentication page - GitHub option selected

      3. In a new browser window, visit the developer settings of your account on the GitHub website. Click the Register a new application button under the OAuth Apps section.

      4. Fill out the form that appears. Use the values that Rancher lists under the Authentication page that you have open in your original browser window, then click the Register application button:

        Rancher generated GitHub OAuth form values

      5. You will be directed to the detail view for your new OAuth application. Copy the Client ID and Client Secret values displayed on this page. Paste these into the form at the bottom of Rancher's Authentication page:

        Rancher Authentication page - Client ID and Secret form

      6. Click the Authenticate with GitHub button. A new browser window will appear that asks you to confirm the access request.

      7. Confirm the access request. Rancher will show a new page with further options for controlling access to your Rancher site.

      8. To invite GitHub users to your Rancher site, enter them in the search field in the Site Access section and select the correct user from the search results. Then, click the Save button:

        Rancher Authentication page - search for and add GitHub users

      Removing the Cluster

      To remove the cluster:

      1. Navigate to the Global home page.

      2. Click on the more options ellipsis for your cluster, then select the Delete option from the dropdown menu that appears:

        Rancher global cluster list - delete option highlighted

      3. Confirm in the Linode Cloud Manager that all the Linodes, Volumes, NodeBalancers, and DNS records from the cluster are deleted.
