One place for hosting & domains

      Deploy

      How To Deploy a Resilient Node.js Application on Kubernetes from Scratch


      Description

      You may have heard the buzz around Kubernetes and noticed that many companies have been rapidly adopting it. Due to its many components and vast ecosystem, it can be hard to know where to begin learning it.

      In this session, you will learn the basics of containers and Kubernetes. Step by step, we will go through the entire process of packaging a Node.js application into a Docker container image and then deploying it on Kubernetes. We will demonstrate scaling to multiple replicas for better performance. The end result will be a resilient and scalable Node.js deployment.

      You will leave this session with sufficient knowledge of containerization, Kubernetes basics, and the ability to deploy highly available, performant, and scalable Node.js applications on Kubernetes.

      💰 Use this $100 credit to try out Kubernetes on DigitalOcean for free!

      About the Presenter

      Kamal Nasser is a Developer Advocate at DigitalOcean. When he’s not automating and playing with modern software and technologies, you’ll likely find him penning early 17th-century calligraphy. You can find Kamal on Twitter at @kamaln7 or on GitHub at @kamaln7.

      Resources

      View the slides for this talk, or watch the recording on YouTube (coming soon).

      Transcript of the Commands and Manifests Used

      Be sure to follow along with the recording for an explanation, and replace kamaln7 with your own DockerHub username.

      Node App

      1. Create an empty node package: npm init -y
      2. Install express as a dependency: npm install express
      3. index.js

        const express = require('express')
        const os = require('os')
        
        const app = express()
        app.get('/', (req, res) => {
                res.send(`Hi from ${os.hostname()}!`)
        })
        
        const port = 3000
        app.listen(port, () => console.log(`listening on port ${port}`))
        

      Docker

      1. Dockerfile

        FROM node:13-alpine
        
        WORKDIR /app
        
        COPY package.json package-lock.json ./
        
        RUN npm install --production
        
        COPY . .
        
        EXPOSE 3000
        
        CMD node index.js
        
      2. Build the image: docker build -t kamaln7/node-hello-app .

      3. Edit index.js and replace the word Hi with Hello.

      4. Re-build the image and notice Docker re-using previous layers: docker build -t kamaln7/node-hello-app .

      5. Run a container to test it: docker run --rm -d -p 3000:3000 kamaln7/node-hello-app

      6. Look at the running containers: docker ps

      7. Stop the container: docker stop CONTAINER_ID

      8. Push the image to DockerHub: docker push kamaln7/node-hello-app

      Kubernetes

      1. Get worker nodes: kubectl get nodes
      2. Create a deployment: kubectl create deployment --image kamaln7/node-hello-app node-app
      3. Scale up to 3 replicas: kubectl scale deployment node-app --replicas 3
      4. Expose the deployment as a NodePort service: kubectl expose deployment node-app --type NodePort --port 3000
      5. Look at the newly created service (and the assigned port): kubectl get services
      6. Grab the public IP of one of the worker nodes: kubectl get nodes -o wide
      7. Browse to IP:port to test the service
      8. Edit the service: kubectl edit service node-app
        1. Replace port: 3000 with port: 80
        2. Replace type: NodePort with type: LoadBalancer
      9. Verify that the service was updated: kubectl get service
      10. Run the above command every few seconds until you get the external IP address of the Load Balancer
      11. Browse to the IP of the Load Balancer
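
      The imperative steps above can also be written declaratively. Below is a sketch of roughly equivalent manifests, not the talk’s exact files; the image name assumes the kamaln7/node-hello-app image built earlier:

```yaml
# Deployment: three replicas of the Node app image built earlier.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-app
          image: kamaln7/node-hello-app
          ports:
            - containerPort: 3000
---
# Service: expose the app through a cloud load balancer on port 80.
apiVersion: v1
kind: Service
metadata:
  name: node-app
spec:
  type: LoadBalancer
  selector:
    app: node-app
  ports:
    - port: 80
      targetPort: 3000
```

      Applying both with kubectl apply -f reaches the same end state as steps 2 through 11.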






      How to Deploy a Linode Kubernetes Engine Cluster Using Terraform


      Updated by Linode. Contributed by Linode.

      What is the Linode Kubernetes Engine (LKE)?

      The Linode Kubernetes Engine (LKE) is a fully-managed container orchestration engine for deploying and managing containerized applications and workloads. LKE combines Linode’s ease of use and simple pricing with the infrastructure efficiency of Kubernetes. When you deploy a LKE cluster, you receive a Kubernetes Master at no additional cost; you only pay for the Linodes (worker nodes), NodeBalancers (load balancers), and Block Storage Volumes. Your LKE Cluster’s Master node runs the Kubernetes control plane processes – including the API, scheduler, and resource controllers.

      In this Guide

      This guide will walk you through the steps needed to deploy a Kubernetes cluster using LKE and the popular infrastructure as code (IaC) tool, Terraform.

      Before you Begin

      1. Create a personal access token for Linode’s API v4. Follow the Getting Started with the Linode API guide to get a token. You will need a token to create Linode resources using Terraform.

        Note

        Ensure that your token has, at minimum, Read/Write permissions for Linodes, Kubernetes, NodeBalancers, and Volumes.

      2. Review A Beginner’s Guide to Terraform to familiarize yourself with Terraform concepts if you have not used the tool before. This guide assumes familiarity with Terraform and its native HCL syntax.

      Prepare your Local Environment

      Install Terraform

      Install Terraform on your computer by following the Install Terraform section of our Use Terraform to Provision Linode Environments guide.

      Install kubectl

      macOS:

      Install via Homebrew:

      brew install kubernetes-cli
      

      If you don’t have Homebrew installed, visit the Homebrew home page for instructions. Alternatively, you can manually install the binary; visit the Kubernetes documentation for instructions.

      Linux:

      1. Download the latest kubectl release:

        curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
        
      2. Make the downloaded file executable:

        chmod +x ./kubectl
        
      3. Move the command into your PATH:

        sudo mv ./kubectl /usr/local/bin/kubectl
        

      Windows:

      Visit the Kubernetes documentation for a link to the most recent Windows release.

      Create your Terraform Configuration Files

      In this section, you will create Terraform configuration files that define the resources needed to create a Kubernetes cluster. You will create a main.tf file to store your resource declarations, a variables.tf file to store your input variable definitions, and a terraform.tfvars file to assign values to your input variables. Setting up your Terraform project in this way will allow you to reuse your configuration files to deploy more Kubernetes clusters, if desired.
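
      The resulting project layout looks like this:

```
lke-cluster/
├── main.tf           # resource declarations
├── variables.tf      # input variable definitions
└── terraform.tfvars  # values assigned to the input variables
```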

      Create your Resource Configuration File

      Terraform defines the elements of your Linode infrastructure inside of configuration files. Terraform refers to these infrastructure elements as resources. Once you declare your Terraform configuration, you then apply it, which results in the creation of those resources on the Linode platform. The Linode Provider for Terraform exposes the Linode resources you will need to deploy a Kubernetes cluster using LKE.

      1. Navigate to the directory where you installed Terraform. Replace ~/terraform with the location of your installation.

        cd ~/terraform
        
      2. Create a new directory to store your LKE cluster’s Terraform configurations. Replace lke-cluster with your preferred directory name.

        mkdir lke-cluster
        
      3. Using the text editor of your choice, create your cluster’s main configuration file named main.tf which will store your resource definitions. Add the following contents to the file.

        ~/terraform/lke-cluster/main.tf
        
        //Use the Linode Provider
        provider "linode" {
          token = var.token
        }
        
        //Use the linode_lke_cluster resource to create
        //a Kubernetes cluster
        resource "linode_lke_cluster" "foobar" {
            k8s_version = var.k8s_version
            label = var.label
            region = var.region
            tags = var.tags
        
            dynamic "pool" {
                for_each = var.pools
                content {
                    type  = pool.value["type"]
                    count = pool.value["count"]
                }
            }
        }
        
        //Export this cluster's attributes
        output "kubeconfig" {
           value = linode_lke_cluster.foobar.kubeconfig
        }
        
        output "api_endpoints" {
           value = linode_lke_cluster.foobar.api_endpoints
        }
        
        output "status" {
           value = linode_lke_cluster.foobar.status
        }
        
        output "id" {
           value = linode_lke_cluster.foobar.id
        }
        
        output "pool" {
           value = linode_lke_cluster.foobar.pool
        }
            

        This file contains your cluster’s main configuration arguments and output variables. In this example, you make use of Terraform’s input variables so that your main.tf configuration can be easily reused across different clusters.

        Variables and their values will be created in separate files later on in this guide. Using separate files for variable declaration allows you to avoid hard-coding values into your resources. This strategy can help you reuse, share, and version control your Terraform configurations.

        This configuration file uses the Linode provider to create a Kubernetes cluster. All arguments within the linode_lke_cluster.foobar resource are required, except for tags. The pool argument accepts a list of pool objects. In order to read their input variable values, the configuration file makes use of Terraform’s dynamic blocks. Finally, output values are declared in order to capture your cluster’s attribute values that will be returned to Terraform after creating your cluster.

        Note

        For a complete linode_lke_cluster resource argument reference, see the Linode Provider Terraform documentation. You can update the main.tf file to include any additional arguments you would like to use.
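
        As an illustration, with the default pools value defined later in this guide (two node pools of three nodes each), the dynamic "pool" block expands to the equivalent of:

```hcl
pool {
  type  = "g6-standard-4"
  count = 3
}

pool {
  type  = "g6-standard-8"
  count = 3
}
```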

      Define your Input Variables

      You are now ready to define the input variables that were referenced in your main.tf file.

      1. Create a new file named variables.tf in the same directory as your main.tf file. Add the following contents to the file:

        ~/terraform/lke-cluster/variables.tf
        
        variable "token" {
          description = "Your Linode API Personal Access Token (required)."
        }

        variable "k8s_version" {
          description = "The Kubernetes version to use for this cluster (required)."
          default = "1.17"
        }

        variable "label" {
          description = "The unique label to assign to this cluster (required)."
          default = "default-lke-cluster"
        }

        variable "region" {
          description = "The region where your cluster will be located (required)."
          default = "us-east"
        }

        variable "tags" {
          description = "Tags to apply to your cluster for organizational purposes (optional)."
          type = list(string)
          default = ["testing"]
        }

        variable "pools" {
          description = "The Node Pool specifications for the Kubernetes cluster (required)."
          type = list(object({
            type = string
            count = number
          }))
          default = [
            {
              type = "g6-standard-4"
              count = 3
            },
            {
              type = "g6-standard-8"
              count = 3
            }
          ]
        }

        This file describes each variable and provides them with default values. You can update the file with your own preferred default values.

      Assign Values to your Input Variables

      You will now need to define the values you would like to use in order to create your Kubernetes cluster. These values are stored in a separate file named terraform.tfvars. This file should be the only file that requires updating when reusing the files created in this guide to deploy a new Kubernetes cluster or to add a new node pool to the cluster.

      1. Create a new file named terraform.tfvars to provide values for all the input variables declared in the previous section.

        Note

        If you leave out a variable value in this file, Terraform will use the variable’s default value that you provided in your variables.tf file.

        ~/terraform/lke-cluster/terraform.tfvars
        
        label = "example-lke-cluster"
        k8s_version = "1.17"
        region = "us-west"
        pools = [
          {
            type : "g6-standard-2"
            count : 3
          }
        ]
              

        Terraform will use the values in this file to create a new Kubernetes cluster with one node pool that contains three 4 GB nodes. The cluster will be located in the us-west data center (Dallas, Texas, USA). Each node in the cluster’s node pool will use Kubernetes version 1.17 and the cluster will be named example-lke-cluster. You can replace any of the values in this file with your own preferred cluster configurations.

      Deploy your Kubernetes Cluster

      Now that all your Terraform configuration files are ready, you can deploy your Kubernetes cluster.

      1. Ensure that you are in your lke-cluster project directory which should contain all of your Terraform configuration files. If you followed the naming conventions used in this guide, your project directory will be ~/terraform/lke-cluster.

        cd ~/terraform/lke-cluster
        
      2. Install the Linode Provider to your Terraform project directory. Whenever a new provider is used in a Terraform configuration, it must be initialized before you can create resources with it.

        terraform init
        

        You will see a message that confirms that the Linode provider plugins have been successfully initialized.

      3. Export your API token to an environment variable. Terraform environment variables have the prefix TF_VAR_ and are supplied at the command line. This method is preferable to storing your token in a plain text file. Replace the example’s token value with your own.

        export TF_VAR_token=70a1416a9.....d182041e1c6bd2c40eebd
        

        Caution

        This command stores your token in your shell’s history, so take care when using this method.

      4. View your Terraform’s execution plan before deploying your infrastructure. This command won’t take any actions or make any changes on your Linode account. It will provide a report displaying all the resources that will be created or modified when the plan is executed.

        terraform plan -var-file="terraform.tfvars"
        
      5. Apply your Terraform configurations to deploy your Kubernetes cluster.

        terraform apply -var-file="terraform.tfvars"
        

        Terraform will begin to create the resources you’ve defined throughout this guide. This process will take several minutes to complete. Once the cluster has been successfully created the output will include a success message and the values that you exposed as output when creating your main.tf file (the example output has been truncated for brevity).

          
        Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
        
        Outputs:
        
        api_endpoints = [
          "https://91132f3d-fd20-4a70-a171-06ddec5d9c4d.us-west-2.linodelke.net:443",
          "https://91132f3d-fd20-4a70-a171-06ddec5d9c4d.us-west-2.linodelke.net:6443",
          "https://192.0.2.0:443",
          "https://192.0.2.0:6443",
        ]
        ...
                  
        
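      As an alternative to exporting the token inline as in step 3, you can keep it in a file that never enters your shell history. This is a sketch; the helper name and token file path are assumptions, not part of this guide:

```shell
# load_linode_token FILE
# Read a Linode API token from FILE and export it as TF_VAR_token, so
# the token itself never appears on the command line or in history.
# Create the file yourself first, e.g.:
#   (umask 077; printf '%s' 'your-token-here' > ~/.linode-token)
load_linode_token() {
  TF_VAR_token="$(cat "$1")" || return 1
  export TF_VAR_token
}
```

      After running load_linode_token ~/.linode-token, terraform plan and terraform apply pick the value up as usual.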

      Connect to your LKE Cluster

      Now that your Kubernetes cluster is deployed, you can use kubectl to connect to it and begin defining your workload. In this section, you will access your cluster’s kubeconfig and use it to connect to your cluster with kubectl.

      1. Use Terraform to access your cluster’s kubeconfig, decode its contents, and save them to a file. Terraform returns a base64 encoded string (a useful format for automated pipelines) representing your kubeconfig. Replace lke-cluster-config.yaml with your preferred file name.

        export KUBE_VAR=`terraform output kubeconfig` && echo $KUBE_VAR | base64 -D > lke-cluster-config.yaml
        

        Note

        Depending on your local operating system, you may need to replace base64 -D with base64 -d to decode the kubeconfig’s base64 format. For example, this update is needed on an Ubuntu 18.04 system.

      2. Add the kubeconfig file to your $KUBECONFIG environment variable. This will give kubectl access to your cluster’s kubeconfig file.

        export KUBECONFIG=lke-cluster-config.yaml
        
      3. Verify that your cluster is selected as kubectl’s current context:

        kubectl config get-contexts
        
      4. View all nodes in your Kubernetes cluster using kubectl:

        kubectl get nodes
        

        Your output will resemble the following example, but will vary depending on your own cluster’s configurations.

          
        NAME                        STATUS   ROLES    AGE   VERSION
        lke4377-5673-5eb331ac7f89   Ready    <none>   17h   v1.17.0
        lke4377-5673-5eb331acab1d   Ready    <none>   17h   v1.17.0
        lke4377-5673-5eb331acd6c2   Ready    <none>   17h   v1.17.0
            
        

        Now that you are connected to your LKE cluster, you can begin using kubectl to deploy applications, inspect and manage cluster resources, and view logs.
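
      The base64 -D versus base64 -d difference noted in step 1 can also be handled portably with a small helper function (a sketch; the function name is an assumption):

```shell
# b64decode: decode base64 on stdin using whichever flag the local
# base64 binary supports (-d on GNU/Linux, -D on older macOS).
b64decode() {
  if printf '' | base64 -d >/dev/null 2>&1; then
    base64 -d
  else
    base64 -D
  fi
}

# Example round trip: encode a sample string, then decode it.
printf 'kubeconfig' | base64 | b64decode
```

      With the helper defined, the decode step works unchanged on either operating system: echo $KUBE_VAR | b64decode > lke-cluster-config.yaml.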

      Destroy your Kubernetes Cluster (optional)

      Terraform includes a destroy command to remove resources managed by Terraform.

      1. Run the plan command with the -destroy option to verify which resources will be destroyed.

        terraform plan -destroy
        

        Follow the prompt to enter your Linode API v4 access token and review the report to ensure the resources you expect to be destroyed are listed.

      2. Destroy the resources outlined in the above command.

        terraform destroy
        

        Follow the prompt to enter your Linode API v4 access token and type in yes when prompted to destroy your Kubernetes cluster.

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.

      This guide is published under a CC BY-ND 4.0 license.




      How to Deploy a React Application on Debian 10


      Updated by Linode. Contributed by Linode.

      What is React?

      React is a popular JavaScript library for building user interfaces. While React is often used as a frontend for more complex applications, it’s also powerful enough to be used for full client-side applications on its own.

      Since a basic React app is static (it consists of compiled HTML, CSS, and JavaScript files), it is easy to deploy from a local computer to a Linode using Rsync. This guide shows how to set up your Debian 10 Linode and local machine so that you can easily deploy your app whenever changes are made.

      Before You Begin

      1. Familiarize yourself with our Getting Started guide and complete the steps for setting your Linode’s hostname and timezone.

      2. This guide will use sudo wherever possible. Complete the sections of our Securing Your Server guide to create a standard user account, harden SSH access, and remove unnecessary network services.

      3. Install and configure a web server to host a website on your Linode. This guide’s examples will use the Apache and NGINX web servers. Complete the steps in the Installing Apache Web Server on Debian 10 guide or the Installing NGINX on Debian 10 guide.

      4. This guide assumes you already have a React app you’d like to deploy. If you don’t have one, you can quickly bootstrap a project by following the steps in the Create an Example React App section of this guide. This step should be completed on your local system.

      5. Update your Linode’s system.

        sudo apt update && sudo apt upgrade
        
      6. Install the Rsync program on your Linode server.

        sudo apt install rsync
        
      7. Install Git on your local computer if it is not already installed.

        sudo apt install git
        

      Configure your Linode for Deployment

      The steps in this section should be performed on your Linode.

      Create your Host Directory

      1. If it does not yet exist, create your site’s web root directory. Most of the time, it will be located in the /var/www directory.

        sudo mkdir -p /var/www/example.com
        
      2. Set permissions for the new directory to allow your regular user account to write to it:

        sudo chmod -R 755 /var/www/example.com
        
      3. The Rsync program will execute its commands as the user you designate in your deployment script. This user must be the owner of your site’s web root. Replace example_user with your own user’s name and /var/www/example.com with the location of your site’s web root.

        sudo chown -R example_user:www-data /var/www/example.com
        

        Note

        Depending on how you have configured your web root’s directory, www-data may or may not be the group that owns it. To verify the directory’s group, issue the following command:

        ls -la /var/www/
        

        You will see a similar output:

          
        drwxrwxr-x 3 example_user www-data     4096 Apr 24 17:34 example.com
        
        

      Configure your Web Server

      In this section, you will update your web server configuration to ensure that it is configured to point to your site’s web root.

      1. Update your configuration file to point to your site’s web root.

        Apache

        Modify the DocumentRoot in your virtual host file with the path to your site’s web root.

        /etc/apache2/sites-available/example.com.conf
        
          <VirtualHost *:80>
              ServerAdmin webmaster@example.com
              ServerName example.com
              ServerAlias www.example.com
              DocumentRoot /var/www/example.com/ ## Modify this line as well as others referencing the path to your app
              ErrorLog /var/www/example.com/logs/error.log
              CustomLog /var/www/example.com/logs/access.log combined
          </VirtualHost>
          

        NGINX

        Modify the root parameter with the path to your site’s web root.

        /etc/nginx/sites-available/example.com
        
          server {
              listen 80;
              listen [::]:80;
        
              root /var/www/example.com; ## Modify this line
              index index.html index.htm;
        
          }
          
      2. Restart the web server to apply the changes.

        Apache

        sudo systemctl restart apache2
        

        NGINX

        sudo systemctl restart nginx
        

      Configure your Local Computer

      Install the Node Version Manager and Node.js

      You will need Node.js installed on your local computer in order to build your React app prior to copying your site files to the remote Linode server.

      1. Install the Node Version Manager (NVM) for Node.js. This program helps you manage different Node.js versions on a single system.

        curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash
        
      2. To start using nvm in the same terminal, run the following commands:

        export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"
        [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
        

        Verify that you have access to NVM by printing its current version.

        nvm --version
        

        You should see a similar output:

          
        0.35.3
            
        
      3. Install Node.js:

        Note

        As of writing this guide, the latest LTS version of Node.js is v12.16.2. Update this command with the version of Node.js you would like to install.

        nvm install 12.16.2
        
      4. Use NVM to run your preferred version of Node.js.

        nvm use 12.16.2
        

        Your output will resemble the following

          
        Now using node v12.16.2 (npm v6.14.4)
            
        

      Create an Example React App

      If you already have a React App that you would like to deploy to your Linode, you can skip this section. Otherwise, follow the steps in this section to create a basic React app using the create-react-app tool.

      1. Use the Node Package Manager to create your React app.

        npm init react-app ~/my-app
        

      Create your Deployment Script

      1. Navigate to your app’s directory. Replace ~/my-app with the location of your React app’s directory.

        cd ~/my-app
        
      2. Using a text editor, create a deployment script called deploy.sh in your app’s root directory. Replace the following values in the example file:

        • example_user with the username of your limited user account.
        • example.com with your Linode’s fully qualified domain name (FQDN) or public IP address.
        • /var/www/example.com/ with the location of your site’s web root. This is where all of your React app’s local build/ files will be copied to on the remote server.
        ~/my-app/deploy.sh
        
        #!/bin/sh
        
        echo "Switching to branch master"
        git checkout master
        
        echo "Building app"
        npm run build
        
        echo "Deploying files to server"
        rsync -avP build/ example_user@example.com:/var/www/example.com/
        echo "Deployment complete"

        This script will check out the master branch of your project on Git, build the app using npm run build, and then sync the build files to the remote Linode using Rsync. If your React app was not built with create-react-app, the build command may be different and the built files may be stored in a different directory (such as dist). Modify the script accordingly.

        Note

        If your React app’s directory is not initialized as a Git repository, the command git checkout master will return a fatal: not a git repository (or any of the parent directories): .git error. However, the script will continue on to the next commands and the files should still be transferred to your remote Linode server. See our Getting Started with Git guide to learn how to initialize a Git repository.
      3. Make the script executable:

        chmod u+x deploy.sh
        
      4. Run the deployment script. Enter your Linode user’s password when prompted by the script.

        ./deploy.sh
        
      5. In a browser, navigate to your Linode’s domain name or public IP address. If the deploy was successful, you should see your React app displayed.

        View your example React app in a browser.

      6. Make a few changes to your app’s src directory and then re-run the deploy script. Your changes should be visible in the browser after reloading the page.
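
      If your app directory was not yet a Git repository (see the note about git checkout master in step 2), a minimal initialization looks like the following. This is a sketch; the function name and placeholder identity are assumptions:

```shell
# init_app_repo DIR
# Initialize DIR as a Git repository with a master branch so that
# deploy.sh's `git checkout master` succeeds. Runs in a subshell so
# the `cd` does not affect the calling shell.
init_app_repo() (
  cd "$1" || return 1
  git init -b master 2>/dev/null || git init   # -b requires Git >= 2.28
  git add -A
  # Placeholder identity; replace with your own name and email.
  git -c user.name="Example" -c user.email="example@example.com" \
      commit --allow-empty -m "Initial commit"
)
```

      Run init_app_repo ~/my-app once; afterwards, commit changes normally as you work.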

      Next Steps

      Deployment can be a complex topic and there are many factors to consider when working with production systems. This guide is meant to be a simple example for personal projects, and isn’t necessarily suitable on its own for a large-scale production application.

      More advanced build and continuous integration tools such as Travis, Jenkins, and Wercker can be used to automate a more complicated deployment workflow. This can include running unit tests before proceeding with the deployment and deploying to multiple servers (such as test and production boxes). See our guides on Jenkins and Wercker to get started.

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.

      This guide is published under a CC BY-ND 4.0 license.


