
      Webinar Series: Kubernetes Package Management with Helm and CI/CD with Jenkins X


      Webinar Series

      This article supplements a webinar series on doing CI/CD with Kubernetes. The series discusses how to take a cloud native approach to building, testing, and deploying applications, covering release management, cloud native tools, service meshes, and CI/CD tools that can be used with Kubernetes. It is designed to help developers and businesses that are interested in integrating CI/CD best practices with Kubernetes into their workflows.

      This tutorial includes the concepts and commands from the second session of the series, Kubernetes Package Management with Helm and CI/CD with Jenkins X.

      Warning: The procedures in this tutorial are meant for demonstration purposes only. As a result, they don’t follow the best practices and security measures necessary for a production-ready deployment.

      Introduction

      In order to reduce error and organize complexity when deploying an application, CI/CD systems must include robust tooling for package management/deployment and pipelines with automated testing. But in modern production environments, the increased complexity of cloud-based infrastructure can present problems for putting together a reliable CI/CD environment. Two Kubernetes-specific tools developed to solve this problem are the Helm package manager and the Jenkins X pipeline automation tool.

      Helm is a package manager specifically designed for Kubernetes, maintained by the Cloud Native Computing Foundation (CNCF) in collaboration with Microsoft, Google, Bitnami, and the Helm contributor community. At a high level, it accomplishes the same goals as Linux system package managers like APT or YUM: managing the installation of applications and dependencies behind the scenes and hiding the complexity from the user. But with Kubernetes, the need for this kind of management is even more pronounced: Installing applications requires the complex and tedious orchestration of YAML files, and upgrading or rolling back releases can be anywhere from difficult to impossible. In order to solve this problem, Helm runs on top of Kubernetes and packages applications into pre-configured resources called charts, which the user can manage with simple commands, making the process of sharing and managing applications more user-friendly.
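To give a concrete sense of what those simple commands look like, here is a minimal sketch of a typical Helm 2 workflow; the chart (stable/nginx) and release name (my-release) are placeholders rather than anything you will use later in this tutorial:

      helm search stable/nginx                        # find a chart in a configured repository
      helm install --name my-release stable/nginx     # deploy the chart as a release named my-release
      helm upgrade my-release stable/nginx            # roll the release forward to a new revision
      helm rollback my-release 1                      # return to revision 1 if the upgrade misbehaves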

      Jenkins X is a CI/CD tool used to automate production pipelines and environments for Kubernetes. Using Docker images, Helm charts, and the Jenkins pipeline engine, Jenkins X can automatically manage releases and versions and promote applications between environments on GitHub.

      In this second article of the CI/CD with Kubernetes series, you will preview these two tools by:

      • Managing, creating, and deploying Kubernetes packages with Helm.

      • Building a CI/CD pipeline with Jenkins X.

      Though a variety of Kubernetes platforms can use Helm and Jenkins X, in this tutorial you will run a simulated Kubernetes cluster, set up in your local environment. To do this, you will use Minikube, a program that allows you to try out Kubernetes tools on your own machine without having to set up a true Kubernetes cluster.

      By the end of this tutorial, you will have a basic understanding of how these Kubernetes-native tools can help you implement a CI/CD system for your cloud application.

      Prerequisites

      To follow this tutorial, you will need:

      • An Ubuntu 16.04 server with 16 GB of RAM or above. Since this tutorial is meant for demonstration purposes only, commands are run from the root account. Note that the unrestrained privileges of this account do not adhere to production-ready best practices and could affect your system. For this reason, it is suggested to follow these steps in a test environment such as a virtual machine or a DigitalOcean Droplet.

      • A GitHub account and GitHub API token. Be sure to record this API token so that you can enter it during the Jenkins X portion of this tutorial.

      • Familiarity with Kubernetes concepts. Please refer to the article An Introduction to Kubernetes for more details.

      Step 1 — Creating a Local Kubernetes Cluster with Minikube

      Before setting up Minikube, you will have to install its dependencies, including the Kubernetes command line tool kubectl, the bidirectional data transfer relay socat, and the container program Docker.

      First, make sure that your system’s package manager can access packages over HTTPS with apt-transport-https:

      • apt-get update
      • apt-get install apt-transport-https

      Next, in order to ensure the kubectl download is valid, add the GPG key for the official Google repository to your system:

      • curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

      Once you have added the GPG key, create the file /etc/apt/sources.list.d/kubernetes.list by opening it in your text editor:

      • nano /etc/apt/sources.list.d/kubernetes.list

      Once this file is open, add the following line:

      /etc/apt/sources.list.d/kubernetes.list

      deb http://apt.kubernetes.io/ kubernetes-xenial main
      

      This will show your system the source for downloading kubectl. Once you have added the line, save and exit the file. With the nano text editor, you can do this by pressing CTRL+X, typing y, and pressing ENTER.

      Finally, update the source list for APT and install kubectl, socat, and docker.io:

      • apt-get update
      • apt-get install -y kubectl socat docker.io

      Note: For Minikube to simulate a Kubernetes cluster, you must download the docker.io package rather than the newer docker-ce release. For production-ready environments, docker-ce would be the more appropriate choice, since it is better maintained in the official Docker repository.

      Now that you have installed kubectl, you can proceed with installing Minikube. First, use curl to download the program’s binary:

      • curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.28.0/minikube-linux-amd64

Next, change the access permissions of the file you just downloaded so that your system can execute it:

      • chmod +x minikube

      Finally, copy the minikube file to the executable path at /usr/local/bin/ and remove the original file from your home directory:

      • cp minikube /usr/local/bin/
      • rm minikube

      With Minikube installed on your machine, you can now start the program. To create a Minikube Kubernetes cluster, use the following command:

      • minikube start --vm-driver none

      The flag --vm-driver none instructs Minikube to run Kubernetes on the local host using containers rather than a virtual machine. Running Minikube this way means that you do not need to download a VM driver, but also means that the Kubernetes API server will run insecurely as root.

      Warning: Because the API server with root privileges will have unlimited access to the local host, it is not recommended to run Minikube using the none driver on personal workstations.

Now that you have started Minikube, check to make sure that your cluster is running with the following command:

      • minikube status

      You will receive the following output, with your IP address in place of your_IP_address:

      minikube: Running
      cluster: Running
      kubectl: Correctly Configured: pointing to minikube-vm at your_IP_address
      

      Now that you have set up your simulated Kubernetes cluster using Minikube, you can gain experience with Kubernetes package management by installing and configuring the Helm package manager on top of your cluster.

      Step 2 — Setting Up the Helm Package Manager on your Cluster

      In order to coordinate the installation of applications on your Kubernetes cluster, you will now install the Helm package manager. Helm consists of a helm client that runs outside the cluster and a tiller server that manages application releases from within the cluster. You will have to install and configure both to successfully run Helm on your cluster.

      To install the Helm binaries, first use curl to download the following installation script from the official Helm GitHub repository into a new file named get_helm.sh:

      • curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh

Since this script requires root access, change the permission of get_helm.sh so that the owner of the file (in this case, root) can read, write, and execute it:

      • chmod 700 get_helm.sh

Now, execute the script:

      • ./get_helm.sh

      When the script finishes, you will have helm installed to /usr/local/bin/helm and tiller installed to /usr/local/bin/tiller.

Though tiller is now installed, it does not yet have the correct roles and permissions to access the necessary resources in your Kubernetes cluster. To assign these roles and permissions to tiller, you will have to create a service account named tiller. In Kubernetes, a service account represents an identity for processes that run in a pod. After a process is authenticated through a service account, it can then contact the API server and access cluster resources. If a pod is not assigned a specific service account, it gets the default service account. You will also have to create a role-based access control (RBAC) rule that authorizes the tiller service account.
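To make the idea of a service account more concrete, the following minimal sketch shows how a pod would opt into a specific identity; the demo-sa account name is purely illustrative and is not something you need to create for this tutorial:

      apiVersion: v1
      kind: Pod
      metadata:
        name: example-pod
      spec:
        serviceAccountName: demo-sa    # processes in this pod authenticate to the API server as demo-sa
        containers:
          - name: app
            image: nginx:stable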

In the Kubernetes RBAC API, a role contains rules that determine a set of permissions. A role can be defined with a scope of namespace or cluster. A namespace-scoped Role can only grant access to resources within a single namespace, while a ClusterRole can grant the same permissions at the cluster level, providing access to cluster-scoped resources like nodes as well as to namespaced resources like pods. To assign the tiller service account the right role, create a YAML file called rbac_helm.yaml and open it in your text editor:

      • nano rbac_helm.yaml

      Add the following lines to the file to configure the tiller service account:

      rbac_helm.yaml

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: tiller
        namespace: kube-system
      ---
      apiVersion: rbac.authorization.k8s.io/v1beta1
      kind: ClusterRoleBinding
      metadata:
        name: tiller
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
        - kind: ServiceAccount
          name: tiller
          namespace: kube-system
      
        - kind: User
          name: "admin"
          apiGroup: rbac.authorization.k8s.io
      
        - kind: User
          name: "kubelet"
          apiGroup: rbac.authorization.k8s.io
      
        - kind: Group
          name: system:serviceaccounts
          apiGroup: rbac.authorization.k8s.io
      

      In the preceding file, ServiceAccount allows the tiller processes to access the apiserver as an authenticated service account. ClusterRole grants certain permissions to a role, and ClusterRoleBinding assigns that role to a list of subjects, including the tiller service account, the admin and kubelet users, and the system:serviceaccounts group.

      Next, deploy the configuration in rbac_helm.yaml with the following command:

      • kubectl apply -f rbac_helm.yaml

With the tiller configuration deployed, you can now initialize Helm with the --service-account flag to use the service account you just set up:

      • helm init --service-account tiller

      You will receive the following output, representing a successful initialization:

      Output

Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

This creates a tiller pod in the kube-system namespace. It also creates the default .helm directory in your $HOME directory and configures the default Helm stable chart repository at https://kubernetes-charts.storage.googleapis.com and the local Helm repository at http://127.0.0.1:8879/charts.

      To make sure that the tiller pod is running in the kube-system namespace, enter the following command:

      • kubectl --namespace kube-system get pods

      In your list of pods, tiller-deploy will appear, as is shown in the following output:

      Output

NAME                                    READY     STATUS    RESTARTS   AGE
etcd-minikube                           1/1       Running   0          2h
kube-addon-manager-minikube             1/1       Running   0          2h
kube-apiserver-minikube                 1/1       Running   0          2h
kube-controller-manager-minikube        1/1       Running   0          2h
kube-dns-86f4d74b45-rjql8               3/3       Running   0          2h
kube-proxy-dv268                        1/1       Running   0          2h
kube-scheduler-minikube                 1/1       Running   0          2h
kubernetes-dashboard-5498ccf677-wktkl   1/1       Running   0          2h
storage-provisioner                     1/1       Running   0          2h
tiller-deploy-689d79895f-bggbk          1/1       Running   0          5m

      If the tiller pod's status is Running, it can now manage Kubernetes applications from inside your cluster on behalf of Helm.

To make sure that the entire Helm application is working, search the Helm package repositories for an application like MongoDB:

      • helm search mongodb

      In the output, you will see a list of possible applications that fit your search term:

      Output

NAME                                 CHART VERSION   APP VERSION   DESCRIPTION
stable/mongodb                       5.4.0           4.0.6         NoSQL document-oriented database that stores JSON-like do...
stable/mongodb-replicaset            3.9.0           3.6           NoSQL document-oriented database that stores JSON-like do...
stable/prometheus-mongodb-exporter   1.0.0           v0.6.1        A Prometheus exporter for MongoDB metrics
stable/unifi                         0.3.1           5.9.29        Ubiquiti Network's Unifi Controller

      Now that you have installed Helm on your Kubernetes cluster, you can learn more about the package manager by creating a sample Helm chart and deploying an application from it.

      Step 3 — Creating a Chart and Deploying an Application with Helm

      In the Helm package manager, individual packages are called charts. Within a chart, a set of files defines an application, which can vary in complexity from a pod to a structured, full-stack app. You can download charts from the Helm repositories, or you can use the helm create command to create your own.

To test out the capabilities of Helm, create a new Helm chart named demo with the following command:

      • helm create demo

      In your home directory, you will find a new directory called demo, within which you can create and edit your own chart templates.

Move into the demo directory and use ls to list its contents:

      • cd demo
      • ls

      You will find the following files and directories in demo:

      demo

      charts  Chart.yaml  templates  values.yaml
      

Using your text editor, open up the Chart.yaml file:

      • nano Chart.yaml

      Inside, you will find the following contents:

      demo/Chart.yaml

      apiVersion: v1
      appVersion: "1.0"
      description: A Helm chart for Kubernetes
      name: demo
      version: 0.1.0
      

In this Chart.yaml file, you will find fields like apiVersion, which must always be v1, a description that gives additional information about what demo is, the name of the chart, and the version number, which Helm uses as a release marker. When you are done examining the file, close out of your text editor.

Next, open up the values.yaml file:

      • nano values.yaml

      In this file, you will find the following contents:

      demo/values.yaml

      # Default values for demo.
      # This is a YAML-formatted file.
      # Declare variables to be passed into your templates.
      
      replicaCount: 1
      
      image:
        repository: nginx
        tag: stable
        pullPolicy: IfNotPresent
      
      nameOverride: ""
      fullnameOverride: ""
      
      service:
        type: ClusterIP
        port: 80
      
      ingress:
        enabled: false
        annotations: {}
          # kubernetes.io/ingress.class: nginx
          # kubernetes.io/tls-acme: "true"
        paths: []
        hosts:
          - chart-example.local
        tls: []
        #  - secretName: chart-example-tls
        #    hosts:
        #      - chart-example.local
      
      resources: {}
        # We usually recommend not to specify default resources and to leave this as a conscious
        # choice for the user. This also increases chances charts run on environments with little
        # resources, such as Minikube. If you do want to specify resources, uncomment the following
        # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
        # limits:
        #  cpu: 100m
        #  memory: 128Mi
        # requests:
        #  cpu: 100m
        #  memory: 128Mi
      
      nodeSelector: {}
      
      tolerations: []
      
      affinity: {}
      

      By changing the contents of values.yaml, chart developers can supply default values for the application defined in the chart, controlling replica count, image base, ingress access, secret management, and more. Chart users can supply their own values for these parameters with a custom YAML file using helm install. When a user provides custom values, these values will override the values in the chart’s values.yaml file.
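For example, a user who only wanted to change the replica count and the service type could write a small override file and pass it to helm install with the -f flag; the file name custom-values.yaml here is only illustrative:

      custom-values.yaml

      replicaCount: 2
      service:
        type: NodePort

      • helm install --name web -f custom-values.yaml ./demo

Any key omitted from the override file keeps the default defined in the chart's values.yaml.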

Close out the values.yaml file and list the contents of the templates directory with the following command:

      • ls templates

      Here you will find templates for various files that can control different aspects of your chart:

      templates

      deployment.yaml  _helpers.tpl  ingress.yaml  NOTES.txt  service.yaml  tests
      

Now that you have explored the demo chart, you can experiment with Helm chart installation by installing demo. Return to your home directory with the following command:

      • cd

      Install the demo Helm chart under the name web with helm install:

      • helm install --name web ./demo

      You will get the following output:

      Output

NAME:   web
LAST DEPLOYED: Wed Feb 20 20:59:48 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
web-demo  ClusterIP  10.100.76.231   <none>        80/TCP    0s

==> v1/Deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
web-demo  1         0         0            0           0s

==> v1/Pod(related)
NAME                       READY   STATUS              RESTARTS   AGE
web-demo-5758d98fdd-x4mjs  0/1     ContainerCreating   0          0s

NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=demo,app.kubernetes.io/instance=web" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80

      In this output, you will find the STATUS of your application, plus a list of relevant resources in your cluster.

Next, list the deployments created by the demo Helm chart with the following command:

      • kubectl get deploy

      This will yield output that will list your active deployments:

      Output

NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
web-demo   1         1         1            1           4m

      Listing your pods with the command kubectl get pods would show the pods that are running your web application, which would look like the following:

      Output

NAME                        READY     STATUS    RESTARTS   AGE
web-demo-5758d98fdd-nbkqd   1/1       Running   0          4m

      To demonstrate how changes in the Helm chart can release different versions of your application, open up demo/values.yaml in your text editor and change replicaCount: to 3 and image:tag: from stable to latest. In the following code block, you will find what the YAML file should look like after you have finished modifying it, with the changes highlighted:

      demo/values.yaml

      # Default values for demo.
      # This is a YAML-formatted file.
      # Declare variables to be passed into your templates.
      
      replicaCount: 3
      
      image:
        repository: nginx
        tag: latest
        pullPolicy: IfNotPresent
      
      nameOverride: ""
      fullnameOverride: ""
      
      service:
        type: ClusterIP
        port: 80
      . . .
      

      Save and exit the file.

Before you deploy this new version of your web application, list your Helm releases as they are now with the following command:

      • helm list

      You will receive the following output, with the one deployment you created earlier:

      Output

NAME   REVISION   UPDATED                    STATUS     CHART        APP VERSION   NAMESPACE
web    1          Wed Feb 20 20:59:48 2019   DEPLOYED   demo-0.1.0   1.0           default

      Notice that REVISION is listed as 1, indicating that this is the first revision of the web application.

To deploy the web application with the latest changes made to demo/values.yaml, upgrade the application with the following command:

      • helm upgrade web ./demo

Now, list the Helm releases again:

      • helm list

      You will receive the following output:

      Output

NAME   REVISION   UPDATED                    STATUS     CHART        APP VERSION   NAMESPACE
web    2          Wed Feb 20 21:18:12 2019   DEPLOYED   demo-0.1.0   1.0           default

      Notice that REVISION has changed to 2, indicating that this is the second revision.

To find the history of the Helm releases for web, use the following:

      • helm history web

      This will show both of the revisions of the web application:

      Output

      REVISION        UPDATED                         STATUS          CHART           DESCRIPTION
      1               Wed Feb 20 20:59:48 2019        SUPERSEDED      demo-0.1.0      Install complete
      2               Wed Feb 20 21:18:12 2019        DEPLOYED        demo-0.1.0      Upgrade complete
      

To roll back your application to revision 1, enter the following command:

      • helm rollback web 1

      This will yield the following output:

      Output

      Rollback was a success! Happy Helming!

Now, bring up the Helm release history:

      • helm history web

      You will receive the following list:

      Output

REVISION        UPDATED                         STATUS          CHART           DESCRIPTION
1               Wed Feb 20 20:59:48 2019        SUPERSEDED      demo-0.1.0      Install complete
2               Wed Feb 20 21:18:12 2019        SUPERSEDED      demo-0.1.0      Upgrade complete
3               Wed Feb 20 21:28:48 2019        DEPLOYED        demo-0.1.0      Rollback to 1

      By rolling back the web application, you have created a third revision that has the same settings as revision 1. Remember, you can always tell which revision is active by finding the DEPLOYED item under STATUS.

To prepare for the next section, clean up your testing area by deleting your web release with the helm delete command:

      • helm delete web

Examine the Helm release history again:

      • helm history web

      You will receive the following output:

      Output

REVISION        UPDATED                         STATUS          CHART           DESCRIPTION
1               Wed Feb 20 20:59:48 2019        SUPERSEDED      demo-0.1.0      Install complete
2               Wed Feb 20 21:18:12 2019        SUPERSEDED      demo-0.1.0      Upgrade complete
3               Wed Feb 20 21:28:48 2019        DELETED         demo-0.1.0      Deletion complete

The STATUS for REVISION 3 has changed to DELETED, indicating that your deployed instance of web has been deleted. However, although this does delete the release, it does not remove it from Helm's release store. In order to delete the release completely, run the helm delete command with the --purge flag.
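For instance, once you are completely done with the release, you could remove web and its stored revision history in one step:

      • helm delete web --purge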

In this step, you have managed application releases on Kubernetes with Helm. If you would like to study Helm further, check out our An Introduction to Helm, the Package Manager for Kubernetes tutorial, or review the official Helm documentation.

      Next, you will set up and test the pipeline automation tool Jenkins X by using the jx CLI to create a CI/CD-ready Kubernetes cluster.

      Step 4 — Setting Up the Jenkins X Environment

      With Jenkins X, you can create your Kubernetes cluster from the ground up with pipeline automation and CI/CD solutions built in. By installing the jx CLI tool, you will be able to efficiently manage application releases, Docker images, and Helm charts, in addition to automatically promoting your applications across environments in GitHub.

Since you will be using jx to create your cluster, you must first delete the Minikube cluster that you already have. To do this, use the following command:

      • minikube delete

This will delete the local simulated Kubernetes cluster, but will not delete the default directories created when you first installed Minikube. To clean these off your machine, use the following commands:

      • rm -rf ~/.kube
      • rm -rf ~/.minikube
      • rm -rf /etc/kubernetes/*
      • rm -rf /var/lib/minikube/*

      Once you have completely cleared Minikube from your machine, you can move on to installing the Jenkins X binary.

      First, download the compressed jx file from the official Jenkins X GitHub repository with the curl command and uncompress it with the tar command:

      • curl -L https://github.com/jenkins-x/jx/releases/download/v1.3.781/jx-linux-amd64.tar.gz | tar xzv

Next, move the downloaded jx file to the executable path at /usr/local/bin:

      • mv jx /usr/local/bin

      Jenkins X comes with a Docker Registry that runs inside your Kubernetes cluster. Since this is an internal element, security measures such as self-signed certificates can cause trouble for the program. To fix this, set Docker to use insecure registries for the local IP range. To do this, create the file /etc/docker/daemon.json and open it in your text editor:

      • nano /etc/docker/daemon.json

      Add the following contents to the file:

      /etc/docker/daemon.json

      {
        "insecure-registries" : ["0.0.0.0/0"]
      }
      

Save and exit the file. For these changes to take effect, restart the Docker service with the following command:

      • systemctl restart docker

To verify that you have configured Docker with insecure registries, use the following command:

      • docker info

      At the end of the output, you should see the following highlighted line:

      Output

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 15
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
. . .
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 0.0.0.0/0
 127.0.0.0/8
Live Restore Enabled: false

      Now that you have downloaded Jenkins X and configured the Docker registry, use the jx CLI tool to create a Minikube Kubernetes cluster with CI/CD capabilities:

      • jx create cluster minikube --cpu=5 --default-admin-password=admin --vm-driver=none --memory=13314

Here you are creating a Kubernetes cluster using Minikube, with the flag --cpu=5 to set 5 CPUs and --memory=13314 to give your cluster 13314 MB of memory. Since Jenkins X is a robust but large program, these specifications will ensure that Jenkins X works without problems in this demonstration. Also, you are using --default-admin-password=admin to set the Jenkins X password as admin and --vm-driver=none to set up the cluster locally, as you did in Step 1.

      As Jenkins X spins up your cluster, you will receive various prompts at different times throughout the process that set the parameters for your cluster and determine how it will communicate with GitHub to manage your production environments.

      First, you will receive the following prompt:

      Output

      ? disk-size (MB) 150GB

      Press ENTER to continue. Next, you will be prompted for the name you wish to use with git, the email address you wish to use with git, and your GitHub username. Enter each of these when prompted, then press ENTER.

      Next, Jenkins X will prompt you to enter your GitHub API token:

      Output

To be able to create a repository on GitHub we need an API Token
Please click this URL https://github.com/settings/tokens/new?scopes=repo,read:user,read:org,user:email,write:repo_hook,delete_repo

Then COPY the token and enter in into the form below:

? API Token:

      Enter your token here, or create a new token with the appropriate permissions using the highlighted URL in the preceding code block.

      Next, Jenkins X will ask:

      Output

? Do you wish to use GitHub as the pipelines Git server: (Y/n)

? Do you wish to use your_GitHub_username as the pipelines Git user for GitHub server: (Y/n)

      Enter Y for both questions.

      After this, Jenkins X will prompt you to answer the following:

      Output

? Select Jenkins installation type:  [Use arrows to move, type to filter]
> Static Master Jenkins
  Serverless Jenkins

? Pick workload build pack:  [Use arrows to move, type to filter]
> Kubernetes Workloads: Automated CI+CD with GitOps Promotion
  Library Workloads: CI+Release but no CD

For the former, select Static Master Jenkins; for the latter, select Kubernetes Workloads: Automated CI+CD with GitOps Promotion. When prompted to select an organization for your environment repository, select your GitHub username.

      Finally, you will receive the following output, which verifies successful installation and provides your Jenkins X admin password.

      Output

Creating GitHub webhook for your_GitHub_username/environment-horsehelix-production for url http://jenkins.jx.your_IP_address.nip.io/github-webhook/

Jenkins X installation completed successfully

        ********************************************************

             NOTE: Your admin password is: admin

        ********************************************************

Your Kubernetes context is now set to the namespace: jx
To switch back to your original namespace use: jx namespace default
For help on switching contexts see: https://jenkins-x.io/developing/kube-context/

To import existing projects into Jenkins:       jx import
To create a new Spring Boot microservice:       jx create spring -d web -d actuator
To create a new microservice from a quickstart: jx create quickstart

Next, use the jx get command to receive a list of URLs that show information about your application:

      • jx get urls

      This command will yield a list similar to the following:

      Name                      URL
      jenkins                   http://jenkins.jx.your_IP_address.nip.io
      jenkins-x-chartmuseum     http://chartmuseum.jx.your_IP_address.nip.io
      jenkins-x-docker-registry http://docker-registry.jx.your_IP_address.nip.io
      jenkins-x-monocular-api   http://monocular.jx.your_IP_address.nip.io
      jenkins-x-monocular-ui    http://monocular.jx.your_IP_address.nip.io
      nexus                     http://nexus.jx.your_IP_address.nip.io
      

      You can use the URLs to view Jenkins X data about your CI/CD environment via a UI by entering the address into your browser and entering your username and password. In this case, this will be "admin" for both.

      Next, in order to ensure that the service accounts in the namespaces jx, jx-staging, and jx-production have admin privileges, modify your RBAC policies with the following commands:

      • kubectl create clusterrolebinding jx-staging1 --clusterrole=cluster-admin --user=admin --user=expose --group=system:serviceaccounts --serviceaccount=jx-staging:expose --namespace=jx-staging
      • kubectl create clusterrolebinding jx-staging2 --clusterrole=cluster-admin --user=admin --user=expose --group=system:serviceaccounts --serviceaccount=jx-staging:default --namespace=jx-staging
• kubectl create clusterrolebinding jx-production1 --clusterrole=cluster-admin --user=admin --user=expose --group=system:serviceaccounts --serviceaccount=jx-production:expose --namespace=jx-production
• kubectl create clusterrolebinding jx-production2 --clusterrole=cluster-admin --user=admin --user=expose --group=system:serviceaccounts --serviceaccount=jx-production:default --namespace=jx-production
      • kubectl create clusterrolebinding jx-binding1 --clusterrole=cluster-admin --user=admin --user=expose --group=system:serviceaccounts --serviceaccount=jx:expose --namespace=jx
      • kubectl create clusterrolebinding jx-binding2 --clusterrole=cluster-admin --user=admin --user=expose --group=system:serviceaccounts --serviceaccount=jx:default --namespace=jx

      Now that you have created your local Kubernetes cluster with Jenkins X functionality built in, you can move on to creating an application on the platform to test its CI/CD capabilities and experience a Jenkins X pipeline.

      Step 5 — Creating a Test Application in Your Jenkins X Environment

      With your Jenkins X environment set up in your Kubernetes cluster, you now have CI/CD infrastructure in place that can help you automate a testing pipeline. In this step, you will try this out by setting up a test application in a working Jenkins X pipeline.

      For demonstration purposes, this tutorial will use a sample RSVP application created by the CloudYuga team. You can find this application, along with other webinar materials, at the DO-Community GitHub repository.

      First, clone the sample application from the repository with the following command:

      • git clone https://github.com/do-community/rsvpapp.git

Once you've cloned the repository, move into the rsvpapp directory and remove the git files:

      • cd rsvpapp
      • rm -r .git/

To initialize a git repository and a Jenkins X project for a new application, you can use jx create to start from scratch or a template, or jx import to import an existing application from a local project or git repository. For this tutorial, import the sample RSVP application by running the following command from within the application's home directory:

      • jx import

Jenkins X will prompt you for your GitHub username, whether you'd like to initialize git, a commit message, your organization, and the name you would like for your repository. Answer yes to initialize git, then answer the remaining prompts with your own GitHub information and preferences. As Jenkins X imports the application, it will create Helm charts and a Jenkinsfile in your application's home directory. You can modify these charts and the Jenkinsfile to fit your requirements.

      Since the sample RSVP application runs on port 5000 of its container, modify your charts/rsvpapp/values.yaml file to match this. Open the charts/rsvpapp/values.yaml in your text editor:

      • nano charts/rsvpapp/values.yaml

      In this values.yaml file, set service:internalPort: to 5000. Once you have made this change, your file should look like the following:

      charts/rsvpapp/values.yaml

      # Default values for python.
      # This is a YAML-formatted file.
      # Declare variables to be passed into your templates.
      replicaCount: 1
      image:
        repository: draft
        tag: dev
        pullPolicy: IfNotPresent
      service:
        name: rsvpapp
        type: ClusterIP
        externalPort: 80
        internalPort: 5000
        annotations:
          fabric8.io/expose: "true"
          fabric8.io/ingress.annotations: "kubernetes.io/ingress.class: nginx"
      resources:
        limits:
          cpu: 100m
          memory: 128Mi
        requests:
          cpu: 100m
          memory: 128Mi
      ingress:
        enabled: false
      

      Save and exit your file.

      Next, change the charts/preview/requirements.yaml to fit with your application. requirements.yaml is a YAML file in which developers can declare chart dependencies, along with the location of the chart and the desired version. Since our sample application uses MongoDB for database purposes, you'll need to modify the charts/preview/requirements.yaml file to list MongoDB as a dependency. Open the file in your text editor with the following command:

      • nano charts/preview/requirements.yaml

      Edit the file by adding the mongodb-replicaset entry after the alias: cleanup entry, as is highlighted in the following code block:

      charts/preview/requirements.yaml

      # !! File must end with empty line !!
      dependencies:
      - alias: expose
        name: exposecontroller
        repository: http://chartmuseum.jenkins-x.io
        version: 2.3.92
      - alias: cleanup
        name: exposecontroller
        repository: http://chartmuseum.jenkins-x.io
        version: 2.3.92
      - name: mongodb-replicaset
        repository: https://kubernetes-charts.storage.googleapis.com/
        version: 3.5.5
      
        # !! "alias: preview" must be last entry in dependencies array !!
        # !! Place custom dependencies above !!
      - alias: preview
        name: rsvpapp
        repository: file://../rsvpapp
      

      Here you have specified the mongodb-replicaset chart as a dependency for the preview chart.

      Next, repeat this process for your rsvpapp chart. Create the charts/rsvpapp/requirements.yaml file and open it in your text editor:

      • nano charts/rsvpapp/requirements.yaml

      Once the file is open, add the following, making sure that there is a single line of empty space before and after the populated lines:

      charts/rsvpapp/requirements.yaml

      
      dependencies:
      - name: mongodb-replicaset
        repository: https://kubernetes-charts.storage.googleapis.com/
        version: 3.5.5
      
      

      Now you have specified the mongodb-replicaset chart as a dependency for your rsvpapp chart.

      Next, in order to connect the frontend of the sample RSVP application to the MongoDB backend, add a MONGODB_HOST environment variable to your deployment.yaml file in charts/rsvpapp/templates/. Open this file in your text editor:

      • nano charts/rsvpapp/templates/deployment.yaml

      Add the following highlighted lines to the file, in addition to one blank line at the top of the file and two blank lines at the bottom of the file. Note that these blank lines are required for the YAML file to work:

      charts/rsvpapp/templates/deployment.yaml

      
      apiVersion: extensions/v1beta1
      kind: Deployment
      metadata:
        name: {{ template "fullname" . }}
        labels:
          draft: {{ default "draft-app" .Values.draft }}
          chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
      spec:
        replicas: {{ .Values.replicaCount }}
        template:
          metadata:
            labels:
              draft: {{ default "draft-app" .Values.draft }}
              app: {{ template "fullname" . }}
      {{- if .Values.podAnnotations }}
            annotations:
      {{ toYaml .Values.podAnnotations | indent 8 }}
      {{- end }}
          spec:
            containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              env:
              - name: MONGODB_HOST
                value: "mongodb://{{.Release.Name}}-mongodb-replicaset-0.{{.Release.Name}}-mongodb-replicaset,{{.Release.Name}}-mongodb-replicaset-1.{{.Release.Name}}-mongodb-replicaset,{{.Release.Name}}-mongodb-replicaset-2.{{.Release.Name}}-mongodb-replicaset:27017"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              ports:
              - containerPort: {{ .Values.service.internalPort }}
              resources:
      {{ toYaml .Values.resources | indent 12 }}
      
      
      

      With these changes, Helm will be able to deploy your application with MongoDB as its database.

Next, examine the Jenkinsfile generated by Jenkins X by opening the file from your application's home directory:

      • nano Jenkinsfile

      This Jenkinsfile defines the pipeline that is triggered every time you commit a version of your application to your GitHub repository. If you wanted to automate your code testing so that the tests are triggered every time the pipeline is triggered, you would add the test to this document.

      To demonstrate this, add a customized test case by replacing sh "python -m unittest" under stage('CI Build and push snapshot') and stage('Build Release') in the Jenkinsfile with the following highlighted lines:

      /rsvpapp/Jenkinsfile

      . . .
        stages {
          stage('CI Build and push snapshot') {
            when {
              branch 'PR-*'
            }
            environment {
              PREVIEW_VERSION = "0.0.0-SNAPSHOT-$BRANCH_NAME-$BUILD_NUMBER"
              PREVIEW_NAMESPACE = "$APP_NAME-$BRANCH_NAME".toLowerCase()
              HELM_RELEASE = "$PREVIEW_NAMESPACE".toLowerCase()
            }
            steps {
              container('python') {
                sh "pip install -r requirements.txt"
                sh "python -m pytest tests/test_rsvpapp.py"
                sh "export VERSION=$PREVIEW_VERSION && skaffold build -f skaffold.yaml"
                sh "jx step post build --image $DOCKER_REGISTRY/$ORG/$APP_NAME:$PREVIEW_VERSION"
                dir('./charts/preview') {
                  sh "make preview"
                  sh "jx preview --app $APP_NAME --dir ../.."
                }
              }
            }
          }
          stage('Build Release') {
            when {
              branch 'master'
            }
            steps {
              container('python') {
      
                // ensure we're not on a detached head
                sh "git checkout master"
                sh "git config --global credential.helper store"
                sh "jx step git credentials"
      
                // so we can retrieve the version in later steps
                sh "echo $(jx-release-version) > VERSION"
                sh "jx step tag --version $(cat VERSION)"
                sh "pip install -r requirements.txt"
                sh "python -m pytest tests/test_rsvpapp.py"
                sh "export VERSION=`cat VERSION` && skaffold build -f skaffold.yaml"
                sh "jx step post build --image $DOCKER_REGISTRY/$ORG/$APP_NAME:$(cat VERSION)"
              }
            }
          }
      . . .
      

      With the added lines, the Jenkins X pipeline will install dependencies and carry out a Python test whenever you commit a change to your application.

      Now that you have changed the sample RSVP application, commit and push these changes to GitHub with the following commands:

      • git add *
      • git commit -m update
      • git push

When you push these changes to GitHub, you will trigger a new build of your application. If you open the Jenkins UI by navigating to http://jenkins.jx.your_IP_address.nip.io and entering "admin" for your username and password, you will find information about your new build. If you click "Build History" from the menu on the left side of the page, you should see a history of your committed builds. If you click on the blue icon next to a build then select "Console Output" from the lefthand menu, you will find the console output for the automated steps in your pipeline. Scrolling to the end of this output, you will find the following message:

      Output

      . . . Finished: SUCCESS

      This means that your application has passed your customized tests and is now successfully deployed.

Once Jenkins X builds the application release, it will promote the application to the staging environment. To verify that your application is running, list the applications running on your Kubernetes cluster by using the following command:

      • jx get applications

      You will receive output similar to the following:

      Output

APPLICATION   STAGING   PODS   URL
rsvpapp       0.0.2     1/1    http://rsvpapp.jx-staging.your_IP_address.nip.io

      From this, you can see that Jenkins X has deployed your application in your jx-staging environment as version 0.0.2. The output also shows the URL that you can use to access your application. Visiting this URL will show you the sample RSVP application:

      Sample RSVP Application in the Staging Environment

      Next, check out the activity of your application with the following command:

      • jx get activity -f rsvpapp

      You will receive output similar to the following:

      Output

STEP                                          STARTED AGO   DURATION   STATUS
your_GitHub_username/rsvpappv/master #1          3h42m23s      4m51s   Succeeded Version: 0.0.1
  Checkout Source                                3h41m52s         6s   Succeeded
  CI Build and push snapshot                     3h41m46s               NotExecuted
  Build Release                                  3h41m46s        56s   Succeeded
  Promote to Environments                        3h40m50s      3m17s   Succeeded
  Promote: staging                               3h40m29s      2m36s   Succeeded
    PullRequest                                  3h40m29s      1m16s   Succeeded  PullRequest: https://github.com/your_GitHub_username/environment-horsehelix-staging/pull/1 Merge SHA: dc33d3747abdacd2524e8c22f0b5fbb2ac3f6fc7
    Update                                       3h39m13s      1m20s   Succeeded  Status: Success at: http://jenkins.jx.your_IP_address.nip.io/job/your_GitHub_username/job/environment-horsehelix-staging/job/master/2/display/redirect
    Promoted                                     3h39m13s      1m20s   Succeeded  Application is at: http://rsvpapp.jx-staging.your_IP_address.nip.io
  Clean up                                       3h37m33s         1s   Succeeded
your_GitHub_username/rsvpappv/master #2            28m37s      5m57s   Succeeded Version: 0.0.2
  Checkout Source                                  28m18s         4s   Succeeded
  CI Build and push snapshot                       28m14s               NotExecuted
  Build Release                                    28m14s        56s   Succeeded
  Promote to Environments                          27m18s      4m38s   Succeeded
  Promote: staging                                 26m53s       4m0s   Succeeded
    PullRequest                                    26m53s       1m4s   Succeeded  PullRequest: https://github.com/your_GitHub_username/environment-horsehelix-staging/pull/2 Merge SHA: 976bd5ad4172cf9fd79f0c6515f5006553ac6611
    Update                                         25m49s      2m56s   Succeeded  Status: Success at: http://jenkins.jx.your_IP_address.nip.io/job/your_GitHub_username/job/environment-horsehelix-staging/job/master/3/display/redirect
    Promoted                                       25m49s      2m56s   Succeeded  Application is at: http://rsvpapp.jx-staging.your_IP_address.nip.io
  Clean up                                         22m40s         0s   Succeeded

      Here you are getting the Jenkins X activity for the RSVP application by applying a filter with -f rsvpapp.

      Next, list the pods running in the jx-staging namespace with the following command:

      • kubectl get pod -n jx-staging

      You will receive output similar to the following:

      NAME                                 READY     STATUS    RESTARTS   AGE
      jx-staging-mongodb-replicaset-0      1/1       Running   0          6m
      jx-staging-mongodb-replicaset-1      1/1       Running   0          6m
      jx-staging-mongodb-replicaset-2      1/1       Running   0          5m
      jx-staging-rsvpapp-c864c4844-4fw5z   1/1       Running   0          6m
      

      This output shows that your application is running in the jx-staging namespace, along with three pods of the backend MongoDB database, adhering to the changes you made to the YAML files earlier.

      Now that you have run a test application through the Jenkins X pipeline, you can try out promoting this application to the production environment.

      To finish up this demonstration, you will complete the CI/CD process by promoting the sample RSVP application to your jx-production namespace.

      First, use jx promote in the following command:

      • jx promote rsvpapp --version=0.0.2 --env=production

      This will promote the rsvpapp application running with version=0.0.2 to the production environment. Throughout the build process, Jenkins X will prompt you to enter your GitHub account information. Answer these prompts with your individual responses as they appear.

After successful promotion, check the list of applications:

      • jx get applications

      You will receive output similar to the following:

      Output

APPLICATION   STAGING   PODS   URL                                                PRODUCTION   PODS   URL
rsvpapp       0.0.2     1/1    http://rsvpapp.jx-staging.your_IP_address.nip.io   0.0.2        1/1    http://rsvpapp.jx-production.your_IP_address.nip.io

With this PRODUCTION information, you can confirm that Jenkins X has promoted rsvpapp to the production environment. For further verification, visit the production URL http://rsvpapp.jx-production.your_IP_address.nip.io in your browser. You should see the working application, now running in "production":

      Sample RSVP Application in the Production Environment

      Finally, list your pods in the jx-production namespace.

      • kubectl get pod -n jx-production

      You will find that rsvpapp and the MongoDB backend pods are running in this namespace:

      NAME                                     READY     STATUS    RESTARTS   AGE
      jx-production-mongodb-replicaset-0       1/1       Running   0          1m
      jx-production-mongodb-replicaset-1       1/1       Running   0          1m
      jx-production-mongodb-replicaset-2       1/1       Running   0          55s
      jx-production-rsvpapp-54748d68bd-zjgv7   1/1       Running   0          1m 
      

      This shows that you have successfully promoted the RSVP sample application to your production environment, simulating the production-ready deployment of an application at the end of a CI/CD pipeline.

      Conclusion

In this tutorial, you used Helm to manage packages on a simulated Kubernetes cluster and customized a Helm chart to package and deploy your own application. You also set up a Jenkins X environment on your Kubernetes cluster and ran a sample application through a CI/CD pipeline from start to finish.

      You now have experience with these tools that you can use when building a CI/CD system on your own Kubernetes cluster. If you'd like to learn more about Helm, check out our An Introduction to Helm, the Package Manager for Kubernetes and How To Install Software on Kubernetes Clusters with the Helm Package Manager articles. To explore further CI/CD tools on Kubernetes, you can read about the Istio service mesh in the next tutorial in this webinar series.




      Webinar Series: Building Blocks for Doing CI/CD with Kubernetes


      Webinar Series

      This article supplements a webinar series on doing CI/CD with Kubernetes. The series discusses how to take a Cloud Native approach to building, testing, and deploying applications, covering release management, Cloud Native tools, Service Meshes, and CI/CD tools that can be used with Kubernetes. It is designed to help developers and businesses that are interested in integrating CI/CD best practices with Kubernetes into their workflows.

      This tutorial includes the concepts and commands from the first session of the series, Building Blocks for Doing CI/CD with Kubernetes.

      Introduction

      If you are getting started with containers, you will likely want to know how to automate building, testing, and deployment. By taking a Cloud Native approach to these processes, you can leverage the right infrastructure APIs to package and deploy applications in an automated way.

      Two building blocks for doing automation include container images and container orchestrators. Over the last year or so, Kubernetes has become the default choice for container orchestration. In this first article of the CI/CD with Kubernetes series, you will:

      • Build container images with Docker, Buildah, and Kaniko.
      • Set up a Kubernetes cluster with Terraform, and create Deployments and Services.
      • Extend the functionality of a Kubernetes cluster with Custom Resources.

      By the end of this tutorial, you will have container images built with Docker, Buildah, and Kaniko, and a Kubernetes cluster with Deployments, Services, and Custom Resources.

Future articles in the series will cover related topics: package management for Kubernetes, CI/CD tools like Jenkins X and Spinnaker, service meshes, and GitOps.

      Prerequisites

      Step 1 — Building Container Images with Docker and Buildah

      A container image is a self-contained entity with its own application code, runtime, and dependencies that you can use to create and run containers. You can use different tools to create container images, and in this step you will build containers with two of them: Docker and Buildah.

      Building Container Images with Dockerfiles

      Docker builds your container images automatically by reading instructions from a Dockerfile, a text file that includes the commands required to assemble a container image. Using the docker image build command, you can create an automated build that will execute the command-line instructions provided in the Dockerfile. When building the image, you will also pass the build context with the Dockerfile, which contains the set of files required to create an environment and run an application in the container image.

Typically, you will create a project folder for your Dockerfile and build context. Create a folder called demo to begin:

      • mkdir demo
      • cd demo

Next, create a Dockerfile inside the demo folder:

      • nano Dockerfile

      Add the following content to the file:

      ~/demo/Dockerfile

      FROM ubuntu:16.04
      
      LABEL MAINTAINER neependra@cloudyuga.guru
      
RUN apt-get update \
    && apt-get install -y nginx \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
    && echo "daemon off;" >> /etc/nginx/nginx.conf
      
      EXPOSE 80
      CMD ["nginx"]
      

      This Dockerfile consists of a set of instructions that will build an image to run Nginx. During the build process ubuntu:16.04 will function as the base image, and the nginx package will be installed. Using the CMD instruction, you've also configured nginx to be the default command when the container starts.

      Next, you'll build the container image with the docker image build command, using the current directory (.) as the build context. Passing the -t option to this command names the image nkhare/nginx:latest:

      • sudo docker image build -t nkhare/nginx:latest .

      You will see the following output:

      Output

Sending build context to Docker daemon  49.25MB
Step 1/5 : FROM ubuntu:16.04
 ---> 7aa3602ab41e
Step 2/5 : MAINTAINER neependra@cloudyuga.guru
 ---> Using cache
 ---> 552b90c2ff8d
Step 3/5 : RUN apt-get update     && apt-get install -y nginx     && apt-get clean     && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*     && echo "daemon off;" >> /etc/nginx/nginx.conf
 ---> Using cache
 ---> 6bea966278d8
Step 4/5 : EXPOSE 80
 ---> Using cache
 ---> 8f1c4281309e
Step 5/5 : CMD ["nginx"]
 ---> Using cache
 ---> f545da818f47
Successfully built f545da818f47
Successfully tagged nginx:latest

Your image is now built. You can list your Docker images using the following command:

      • sudo docker image ls

      Output

REPOSITORY     TAG       IMAGE ID       CREATED         SIZE
nkhare/nginx   latest    4073540cbcec   3 seconds ago   171MB
ubuntu         16.04     7aa3602ab41e   11 days ago

      You can now use the nkhare/nginx:latest image to create containers.
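For example, you could start a container from this image and publish Nginx's port 80 to the host to check that it serves traffic; the container name nginx-demo and host port 8080 are arbitrary choices:

      • sudo docker run -d --name nginx-demo -p 8080:80 nkhare/nginx:latest
      • curl http://localhost:8080
      • sudo docker rm -f nginx-demo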

Building Container Images with Project Atomic's Buildah

      Buildah is a CLI tool, developed by Project Atomic, for quickly building Open Container Initiative (OCI)-compliant images. OCI provides specifications for container runtimes and images in an effort to standardize industry best practices.

      Buildah can create an image either from a working container or from a Dockerfile. It can build images completely in user space without the Docker daemon, and can perform image operations like build, list, push, and tag. In this step, you'll compile Buildah from source and then use it to create a container image.
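To illustrate the working-container approach mentioned above, the following sketch shows the general shape of that workflow; the base image and resulting image name are only examples, and this tutorial itself builds from a Dockerfile instead:

      ctr=$(sudo buildah from ubuntu:16.04)        # start a working container from a base image
      sudo buildah run "$ctr" -- apt-get update    # run a command inside the working container
      sudo buildah config --cmd nginx "$ctr"       # set image metadata such as the default command
      sudo buildah commit "$ctr" my-nginx:demo     # commit the working container as a new image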

      To install Buildah you will need the required dependencies, including tools that will enable you to manage packages and package security, among other things. Run the following commands to install these packages:

      • cd
      • sudo apt-get install software-properties-common
      • sudo add-apt-repository ppa:alexlarsson/flatpak
      • sudo add-apt-repository ppa:gophers/archive
      • sudo apt-add-repository ppa:projectatomic/ppa
      • sudo apt-get update
      • sudo apt-get install bats btrfs-tools git libapparmor-dev libdevmapper-dev libglib2.0-dev libgpgme11-dev libostree-dev libseccomp-dev libselinux1-dev skopeo-containers go-md2man

      Because you will compile the buildah source code to create its package, you'll also need to install Go:

      • sudo apt-get update
      • sudo curl -O https://storage.googleapis.com/golang/go1.8.linux-amd64.tar.gz
      • sudo tar -xvf go1.8.linux-amd64.tar.gz
      • sudo mv go /usr/local
      • sudo echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.profile
      • source ~/.profile
      • go version

      You will see the following output, indicating a successful installation:

      Output

      go version go1.8 linux/amd64

      You can now get the buildah source code to create its package, along with the runc binary. runc is the implementation of the OCI container runtime, which you will use to run your Buildah containers.

      Run the following commands to install runc and buildah:

      • mkdir ~/buildah
      • cd ~/buildah
      • export GOPATH=`pwd`
      • git clone https://github.com/projectatomic/buildah ./src/github.com/projectatomic/buildah
      • cd ./src/github.com/projectatomic/buildah
      • make runc all TAGS="apparmor seccomp"
      • sudo cp ~/buildah/src/github.com/opencontainers/runc/runc /usr/bin/.
      • sudo apt install buildah

      Next, create the /etc/containers/registries.conf file to configure your container registries:

      • sudo nano /etc/containers/registries.conf

      Add the following content to the file to specify your registries:

      /etc/containers/registries.conf

      
      # This is a system-wide configuration file used to
      # keep track of registries for various container backends.
      # It adheres to TOML format and does not support recursive
      # lists of registries.
      
      # The default location for this configuration file is /etc/containers/registries.conf.
      
      # The only valid categories are: 'registries.search', 'registries.insecure',
      # and 'registries.block'.
      
      [registries.search]
      registries = ['docker.io', 'registry.fedoraproject.org', 'quay.io', 'registry.access.redhat.com', 'registry.centos.org']
      
      # If you need to access insecure registries, add the registry's fully-qualified name.
      # An insecure registry is one that does not have a valid SSL certificate or only does HTTP.
      [registries.insecure]
      registries = []
      
      # If you need to block pull access from a registry, uncomment the section below
      # and add the registries fully-qualified name.
      #
      # Docker only
      [registries.block]
      registries = []
      

      The registries.conf configuration file specifies which registries should be consulted when completing image names that do not include a registry or domain portion.

      Now run the following command to build an image, using the https://github.com/do-community/rsvpapp repository as the build context. This repository also contains the relevant Dockerfile:

      • sudo buildah build-using-dockerfile -t rsvpapp:buildah github.com/do-community/rsvpapp

      This command creates an image named rsvpapp:buildah from the Dockerfile available in the https://github.com/do-community/rsvpapp repository.

      To list the images, use the following command:
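
      • sudo buildah images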

      You will see the following output:

      Output

      IMAGE ID       IMAGE NAME                              CREATED AT           SIZE
      b0c552b8cf64   docker.io/teamcloudyuga/python:alpine   Sep 30, 2016 04:39   95.3 MB
      22121fd251df   localhost/rsvpapp:buildah               Sep 11, 2018 14:34   114 MB

      One of these images is localhost/rsvpapp:buildah, which you just created. The other, docker.io/teamcloudyuga/python:alpine, is the base image from the Dockerfile.

      Once you have built the image, you can push it to Docker Hub. This will allow you to store it for future use. You will first need to log in to your Docker Hub account from the command line:

      • docker login -u your-dockerhub-username -p your-dockerhub-password

      Once the login is successful, you will get a file, ~/.docker/config.json, that will contain your Docker Hub credentials. You can then use that file with buildah to push images to Docker Hub.

      For example, if you wanted to push the image you just created, you could run the following command, citing the authfile and the image to push:

      • sudo buildah push --authfile ~/.docker/config.json rsvpapp:buildah docker://your-dockerhub-username/rsvpapp:buildah

      You can also push the resulting image to the local Docker daemon using the following command:

      • sudo buildah push rsvpapp:buildah docker-daemon:rsvpapp:buildah

      Finally, take a look at the Docker images you have created:
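
      • sudo docker image ls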

      Output

      REPOSITORY     TAG       IMAGE ID       CREATED          SIZE
      rsvpapp        buildah   22121fd251df   4 minutes ago    108MB
      nkhare/nginx   latest    01f0982d91b8   17 minutes ago   172MB
      ubuntu         16.04     b9e15a5d1e1a   5 days ago       115MB

      As expected, you should now see a new image, rsvpapp:buildah, that has been exported using buildah.

      You now have experience building container images with two different tools, Docker and Buildah. Let's move on to discussing how to set up a cluster of containers with Kubernetes.

      Step 2 — Setting Up a Kubernetes Cluster on DigitalOcean using kubeadm and Terraform

      There are different ways to set up Kubernetes on DigitalOcean. To learn more about how to set up Kubernetes with kubeadm, for example, you can look at How To Create a Kubernetes Cluster Using Kubeadm on Ubuntu 18.04.

      Since this tutorial series discusses taking a Cloud Native approach to application development, we'll apply this methodology when setting up our cluster. Specifically, we will automate our cluster creation using kubeadm and Terraform, a tool that simplifies creating and changing infrastructure.

      Using your personal access token, you will connect to DigitalOcean with Terraform to provision 3 servers. You will run the kubeadm commands inside of these VMs to create a 3-node Kubernetes cluster containing one master node and two workers.

      On your Ubuntu server, create a pair of SSH keys, which will allow password-less logins to your VMs:
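
      • ssh-keygen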

      You will see the following output:

      Output

      Generating public/private rsa key pair.
      Enter file in which to save the key (~/.ssh/id_rsa):

      Press ENTER to save the key pair in the ~/.ssh directory in your home directory, or enter another destination.

      Next, you will see the following prompt:

      Output

      Enter passphrase (empty for no passphrase):

      In this case, press ENTER without a password to enable password-less logins to your nodes.

      You will see a confirmation that your key pair has been created:

      Output

      Your identification has been saved in ~/.ssh/id_rsa.
      Your public key has been saved in ~/.ssh/id_rsa.pub.
      The key fingerprint is:
      SHA256:lCVaexVBIwHo++NlIxccMW5b6QAJa+ZEr9ogAElUFyY root@3b9a273f18b5
      The key's randomart image is:
      +---[RSA 2048]----+
      |++.E ++o=o*o*o   |
      |o   +..=.B = o   |
      |.    .* = * o    |
      |  .   =.o + *    |
      | .  . o.S + .    |
      |     . +.   .    |
      |    . ...   =    |
      |     o= .        |
      |     ...         |
      +----[SHA256]-----+

      Get your public key by running the following command, which will display it in your terminal:
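
      • cat ~/.ssh/id_rsa.pub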

      Add this key to your DigitalOcean account by following these directions.

      Next, install Terraform:

      • sudo apt-get update
      • sudo apt-get install unzip
      • wget https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_linux_amd64.zip
      • unzip terraform_0.11.7_linux_amd64.zip
      • sudo mv terraform /usr/bin/.
      • terraform version

      You will see output confirming your Terraform installation:

      Output

      Terraform v0.11.7

      Next, run the following commands to install kubectl, a CLI tool that will communicate with your Kubernetes cluster, and to create a ~/.kube directory in your user's home directory:

      • sudo apt-get install apt-transport-https
      • curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
      • sudo touch /etc/apt/sources.list.d/kubernetes.list
      • echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
      • sudo apt-get update
      • sudo apt-get install kubectl
      • mkdir -p ~/.kube

      Creating the ~/.kube directory will enable you to copy the configuration file to this location. You’ll do that once you run the Kubernetes setup script later in this section. By default, the kubectl CLI looks for the configuration file in the ~/.kube directory to access the cluster.

      Next, clone the sample project repository for this tutorial, which contains the Terraform scripts for setting up the infrastructure:

      • git clone https://github.com/do-community/k8s-cicd-webinars.git

      Go to the Terraform script directory:

      • cd k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/

      Get a fingerprint of your SSH public key:

      • ssh-keygen -E md5 -lf ~/.ssh/id_rsa.pub | awk '{print $2}'

      You will see output like the following, with the highlighted portion representing your key:

      Output

      MD5:dd:d1:b7:0f:6d:30:c0:be:ed:ae:c7:b9:b8:4a:df:5e

      Keep in mind that your key will differ from what's shown here.

      Save the fingerprint to an environmental variable so Terraform can use it:

      • export FINGERPRINT=dd:d1:b7:0f:6d:30:c0:be:ed:ae:c7:b9:b8:4a:df:5e

      Next, export your DO personal access token:

      • export TOKEN=your-do-access-token

      Now take a look at the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ project directory:
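
      • ls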

      Output

      cluster.tf destroy.sh files outputs.tf provider.tf script.sh

      This folder contains the necessary scripts and configuration files for deploying your Kubernetes cluster with Terraform.

      Execute the script.sh script to trigger the Kubernetes cluster setup:
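
      • ./script.sh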

      When the script execution is complete, kubectl will be configured to use the Kubernetes cluster you've created.

      List the cluster nodes using kubectl get nodes:
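
      • kubectl get nodes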

      Output

      NAME                STATUS    ROLES     AGE       VERSION
      k8s-master-node     Ready     master    2m        v1.10.0
      k8s-worker-node-1   Ready     <none>    1m        v1.10.0
      k8s-worker-node-2   Ready     <none>    57s       v1.10.0

      You now have one master and two worker nodes in the Ready state.

      With a Kubernetes cluster set up, you can now explore another option for building container images: Kaniko from Google.

      Step 3 — Building Container Images with Kaniko

      Earlier in this tutorial, you built container images with Dockerfiles and Buildah. But what if you could build container images directly on Kubernetes? There are ways to run the docker image build command inside of Kubernetes, but this isn't native Kubernetes tooling. You would have to depend on the Docker daemon to build images, and it would need to run on one of the Pods in the cluster.

      A tool called Kaniko allows you to build container images with a Dockerfile on an existing Kubernetes cluster. In this step, you will build a container image with a Dockerfile using Kaniko. You will then push this image to Docker Hub.

      In order to push your image to Docker Hub, you will need to pass your Docker Hub credentials to Kaniko. In the previous step, you logged into Docker Hub and created a ~/.docker/config.json file with your login credentials. Let's use this configuration file to create a Kubernetes ConfigMap object to store the credentials inside the Kubernetes cluster. The ConfigMap object is used to store configuration parameters, decoupling them from your application.

      To create a ConfigMap called docker-config using the ~/.docker/config.json file, run the following command:

      • sudo kubectl create configmap docker-config --from-file=$HOME/.docker/config.json

      Next, you can create a Pod definition file called pod-kaniko.yml in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory (though it can go anywhere).

      First, make sure that you are in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory:

      • cd ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/

      Create the pod-kaniko.yml file:
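
      • nano pod-kaniko.yml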

      Add the following content to the file to specify what will happen when you deploy your Pod. Be sure to replace your-dockerhub-username in the Pod's args field with your own Docker Hub username:

      ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/pod-kaniko.yml

      apiVersion: v1
      kind: Pod
      metadata:
        name: kaniko
      spec:
        containers:
        - name: kaniko
          image: gcr.io/kaniko-project/executor:latest
          args: ["--dockerfile=./Dockerfile",
                  "--context=/tmp/rsvpapp/",
                  "--destination=docker.io/your-dockerhub-username/rsvpapp:kaniko",
                  "--force" ]
          volumeMounts:
            - name: docker-config
              mountPath: /root/.docker/
            - name: demo
              mountPath: /tmp/rsvpapp
        restartPolicy: Never
        initContainers:
          - image: python
            name: demo
            command: ["/bin/sh"]
            args: ["-c", "git clone https://github.com/do-community/rsvpapp.git /tmp/rsvpapp"] 
            volumeMounts:
            - name: demo
              mountPath: /tmp/rsvpapp
        volumes:
          - name: docker-config
            configMap:
              name: docker-config
          - name: demo
            emptyDir: {}
      

      This configuration file describes what will happen when your Pod is deployed. First, the Init container will clone the Git repository with the Dockerfile, https://github.com/do-community/rsvpapp.git, into a shared volume called demo. Init containers run before application containers and can be used to run utilities or other tasks that are not desirable to run from your application containers. Your application container, kaniko, will then build the image using the Dockerfile and push the resulting image to Docker Hub, using the credentials you passed to the ConfigMap volume docker-config.

      To deploy the kaniko pod, run the following command:

      • kubectl apply -f pod-kaniko.yml

      You will see the following confirmation:

      Output

      pod/kaniko created

      Get the list of pods:
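
      • kubectl get pods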

      You will see the following list:

      Output

      NAME      READY     STATUS     RESTARTS   AGE
      kaniko    0/1       Init:0/1   0          47s

      Wait a few seconds, and then run kubectl get pods again for a status update:

      You will see the following:

      Output

      NAME      READY     STATUS    RESTARTS   AGE
      kaniko    1/1       Running   0          1m

      Finally, run kubectl get pods once more for a final status update:

      Output

      NAME      READY     STATUS      RESTARTS   AGE
      kaniko    0/1       Completed   0          2m

      This sequence of output tells you that the Init container ran, cloning the GitHub repository inside of the demo volume. After that, the Kaniko build process ran and eventually finished.

      Check the logs of the pod:
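
      • kubectl logs kaniko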

      You will see the following output:

      Output

      time="2018-08-02T05:01:24Z" level=info msg="appending to multi args docker.io/your-dockerhub-username/rsvpapp:kaniko"
      time="2018-08-02T05:01:24Z" level=info msg="Downloading base image nkhare/python:alpine"
      . . .
      time="2018-08-02T05:01:46Z" level=info msg="Taking snapshot of full filesystem..."
      time="2018-08-02T05:01:48Z" level=info msg="cmd: CMD"
      time="2018-08-02T05:01:48Z" level=info msg="Replacing CMD in config with [/bin/sh -c python rsvp.py]"
      time="2018-08-02T05:01:48Z" level=info msg="Taking snapshot of full filesystem..."
      time="2018-08-02T05:01:49Z" level=info msg="No files were changed, appending empty layer to config."
      2018/08/02 05:01:51 mounted blob: sha256:bc4d09b6c77b25d6d3891095ef3b0f87fbe90621bff2a333f9b7f242299e0cfd
      2018/08/02 05:01:51 mounted blob: sha256:809f49334738c14d17682456fd3629207124c4fad3c28f04618cc154d22e845b
      2018/08/02 05:01:51 mounted blob: sha256:c0cb142e43453ebb1f82b905aa472e6e66017efd43872135bc5372e4fac04031
      2018/08/02 05:01:51 mounted blob: sha256:606abda6711f8f4b91bbb139f8f0da67866c33378a6dcac958b2ddc54f0befd2
      2018/08/02 05:01:52 pushed blob sha256:16d1686835faa5f81d67c0e87eb76eab316e1e9cd85167b292b9fa9434ad56bf
      2018/08/02 05:01:53 pushed blob sha256:358d117a9400cee075514a286575d7d6ed86d118621e8b446cbb39cc5a07303b
      2018/08/02 05:01:55 pushed blob sha256:5d171e492a9b691a49820bebfc25b29e53f5972ff7f14637975de9b385145e04
      2018/08/02 05:01:56 index.docker.io/your-dockerhub-username/rsvpapp:kaniko: digest: sha256:831b214cdb7f8231e55afbba40914402b6c915ef4a0a2b6cbfe9efb223522988 size: 1243

      From the logs, you can see that the kaniko container built the image from the Dockerfile and pushed it to your Docker Hub account.

      You can now pull the Docker image. Be sure again to replace your-dockerhub-username with your Docker Hub username:

      • docker pull your-dockerhub-username/rsvpapp:kaniko

      You will see a confirmation of the pull:

      Output

      kaniko: Pulling from your-dockerhub-username/rsvpapp
      c0cb142e4345: Pull complete
      bc4d09b6c77b: Pull complete
      606abda6711f: Pull complete
      809f49334738: Pull complete
      358d117a9400: Pull complete
      5d171e492a9b: Pull complete
      Digest: sha256:831b214cdb7f8231e55afbba40914402b6c915ef4a0a2b6cbfe9efb223522988
      Status: Downloaded newer image for your-dockerhub-username/rsvpapp:kaniko

      You have now successfully built a Kubernetes cluster and created new images from within the cluster. Let's move on to discussing Deployments and Services.

      Step 4 — Creating Kubernetes Deployments and Services

      Kubernetes Deployments allow you to run your applications. Deployments specify the desired state for your Pods, ensuring consistency across your rollouts. In this step, you will create a file called deployment.yml in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory that defines an Nginx Deployment.

      First, open the file:
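
      • nano deployment.yml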

      Add the following configuration to the file to define your Nginx Deployment:

      ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/deployment.yml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-deployment
        labels:
          app: nginx
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx:1.7.9
              ports:
              - containerPort: 80
      
      

      This file defines a Deployment named nginx-deployment that creates three pods, each running an nginx container on port 80.

      To deploy the Deployment, run the following command:

      • kubectl apply -f deployment.yml

      You will see a confirmation that the Deployment was created:

      Output

      deployment.apps/nginx-deployment created

      List your Deployments:
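
      • kubectl get deployments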

      Output

      NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      nginx-deployment   3         3         3            3           29s

      You can see that the nginx-deployment Deployment has been created and that the desired and current Pod counts are the same: 3.

      To list the Pods that the Deployment created, run the following command:
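
      • kubectl get pods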

      Output

      NAME                                READY     STATUS      RESTARTS   AGE
      kaniko                              0/1       Completed   0          9m
      nginx-deployment-75675f5897-nhwsp   1/1       Running     0          1m
      nginx-deployment-75675f5897-pxpl9   1/1       Running     0          1m
      nginx-deployment-75675f5897-xvf4f   1/1       Running     0          1m

      You can see from this output that the desired number of Pods is running.

      To expose an application deployment internally and externally, you will need to create a Kubernetes object called a Service. Each Service specifies a ServiceType, which defines how the service is exposed. In this example, we will use a NodePort ServiceType, which exposes the Service on a static port on each node.

      To do this, create a file, service.yml, in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory:
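
      • nano service.yml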

      Add the following content to define your Service:

      ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/service.yml

      kind: Service
      apiVersion: v1
      metadata:
        name: nginx-service
      spec:
        selector:
          app: nginx
        type: NodePort
        ports:
        - protocol: TCP
          port: 80
          targetPort: 80
          nodePort: 30111
      

      These settings define the Service, nginx-service, and specify that it will target port 80 on your Pod. nodePort defines the port where the application will accept external traffic.

      To deploy the Service, run the following command:

      • kubectl apply -f service.yml

      You will see a confirmation:

      Output

      service/nginx-service created

      List the Services:
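
      • kubectl get services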

      You will see the following list:

      Output

      NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
      kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP        5h
      nginx-service   NodePort    10.100.98.213   <none>        80:30111/TCP   7s

      Your Service, nginx-service, is exposed on port 30111 and you can now access it on any of the node’s public IPs. For example, navigating to http://node_1_ip:30111 or http://node_2_ip:30111 should take you to Nginx's standard welcome page.

      Once you have tested the Deployment, you can clean up both the Deployment and Service:

      • kubectl delete deployment nginx-deployment
      • kubectl delete service nginx-service

      These commands will delete the Deployment and Service you have created.

      Now that you have worked with Deployments and Services, let's move on to creating Custom Resources.

      Step 5 — Creating Custom Resources in Kubernetes

      Kubernetes offers a limited but production-ready set of features out of the box. It is possible to extend Kubernetes' offerings, however, using its Custom Resources feature. In Kubernetes, a resource is an endpoint in the Kubernetes API that stores a collection of API objects. A Pod resource contains a collection of Pod objects, for instance. With Custom Resources, you can add custom offerings for networking, storage, and more. These additions can be created or removed at any point.

      In addition to creating custom objects, you can also employ sub-controllers of the Kubernetes Controller component in the control plane to make sure that the current state of your objects is equal to the desired state. The Kubernetes Controller has sub-controllers for specified objects. For example, ReplicaSet is a sub-controller that makes sure the desired Pod count remains consistent. When you combine a Custom Resource with a Controller, you get a true declarative API that allows you to specify the desired state of your resources.

      In this step, you will create a Custom Resource and related objects.

      To create a Custom Resource, first make a file called crd.yml in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory:
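
      • nano crd.yml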

      Add the following Custom Resource Definition (CRD):

      ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/crd.yml

      apiVersion: apiextensions.k8s.io/v1beta1
      kind: CustomResourceDefinition
      metadata:
        name: webinars.digitalocean.com
      spec:
        group: digitalocean.com
        version: v1
        scope: Namespaced
        names:
          plural: webinars
          singular: webinar
          kind: Webinar
          shortNames:
          - wb
      

      To deploy the CRD defined in crd.yml, run the following command:

      • kubectl create -f crd.yml

      You will see a confirmation that the resource has been created:

      Output

      customresourcedefinition.apiextensions.k8s.io/webinars.digitalocean.com created

      The crd.yml file has created a new RESTful resource path: /apis/digitalocean.com/v1/namespaces/*/webinars. You can now refer to your objects using webinars, webinar, Webinar, and wb, as you listed them in the names section of the CustomResourceDefinition. You can check the RESTful resource with the following command:

      • kubectl proxy & curl 127.0.0.1:8001/apis/digitalocean.com

      Note: If you followed the initial server setup guide in the prerequisites, then you will need to allow traffic to port 8001 in order for this test to work. Enable traffic to this port with the following command:
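
      Assuming UFW is the firewall configured on your server (as in the initial server setup guide), that command would be:

      • sudo ufw allow 8001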

      You will see the following output:

      Output

      HTTP/1.1 200 OK
      Content-Length: 238
      Content-Type: application/json
      Date: Fri, 03 Aug 2018 06:10:12 GMT
      {
        "apiVersion": "v1",
        "kind": "APIGroup",
        "name": "digitalocean.com",
        "preferredVersion": {
          "groupVersion": "digitalocean.com/v1",
          "version": "v1"
        },
        "serverAddressByClientCIDRs": null,
        "versions": [
          {
            "groupVersion": "digitalocean.com/v1",
            "version": "v1"
          }
        ]
      }

      Next, create an object that uses the new Custom Resource by opening a file called webinar.yml:
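
      • nano webinar.yml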

      Add the following content to create the object:

      ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/webinar.yml

      apiVersion: "digitalocean.com/v1"
      kind: Webinar
      metadata:
        name: webinar1
      spec:
        name: webinar
        image: nginx
      

      Run the following command to push these changes to the cluster:

      • kubectl apply -f webinar.yml

      You will see the following output:

      Output

      webinar.digitalocean.com/webinar1 created

      You can now manage your webinar objects using kubectl. For example:
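
      • kubectl get webinar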

      Output

      NAME       CREATED AT
      webinar1   21s

      You now have an object called webinar1. If there had been a Controller, it would have intercepted the object creation and performed any defined operations.

      Deleting a Custom Resource Definition

      To delete all of the objects for your Custom Resource, use the following command:

      • kubectl delete webinar --all

      You will see:

      Output

      webinar.digitalocean.com "webinar1" deleted

      Next, remove the Custom Resource Definition itself:

      • kubectl delete crd webinars.digitalocean.com

      You will see a confirmation that it has been deleted:

      Output

      customresourcedefinition.apiextensions.k8s.io "webinars.digitalocean.com" deleted

      After deletion, you will no longer have access to the API endpoint that you tested earlier with the curl command.

      This sequence is an introduction to how you can extend Kubernetes functionalities without modifying your Kubernetes code.

      Step 6 — Deleting the Kubernetes Cluster

      To destroy the Kubernetes cluster itself, you can use the destroy.sh script from the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform folder. Make sure that you are in this directory:

      • cd ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform

      Run the script:
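
      • ./destroy.sh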

      By running this script, you'll allow Terraform to communicate with the DigitalOcean API and delete the servers in your cluster.

      Conclusion

      In this tutorial, you used different tools to create container images. With these images, you can create containers in any environment. You also set up a Kubernetes cluster using Terraform, and created Deployment and Service objects to deploy and expose your application. Additionally, you extended Kubernetes' functionality by defining a Custom Resource.

      You now have a solid foundation to build a CI/CD environment on Kubernetes, which we'll explore in future articles.




      Create a CI/CD Pipeline with Gatsby.js, Netlify and Travis CI


      Updated and contributed by Linode



      What is Gatsby?

      Gatsby is a Static Site Generator for React built on Node.js. Gatsby uses a modern web technology stack based on client-side Javascript, reusable APIs, and prebuilt Markdown, otherwise known as the JAMstack. This method of building a site is fast, secure, and scalable. All production site pages are prebuilt and static, so Gatsby does not have to build HTML for each page request.

      What is the CI/CD Pipeline?

      The CI/CD (continuous integration/continuous delivery) pipeline created in this guide is an automated sequence of events that is initiated after you update the code for your website on your local computer. These events take care of the work that you would otherwise need to perform manually: previewing your in-development site, testing your new code, and deploying it to your production server. These actions are powered by GitHub, Netlify, and Travis CI.

      Note

      This guide uses GitHub as your central Git repository, but you can use any service that is compatible with Netlify and Travis.

      Netlify

      Netlify is a PaaS (Platform as a Service) provider that allows you to quickly deploy static sites on the Netlify platform. In this guide Netlify will be used to provide a preview of your Gatsby site while it is in development. This preview can be shared with different stakeholders for site change approvals, or with anyone that is interested in your project. The production version of your website will ultimately be deployed to a Linode, so Netlify will only be used to preview development of the site.

      Travis CI

      Travis CI is a continuous integration tool that tests and deploys the code you upload to your GitHub repository. Travis will be used in this guide to deploy your Gatsby site to a Linode running Ubuntu 18.04. Testing your website code will not be explored in depth, but the method for integrating unit tests will be introduced.

      The CI/CD Pipeline Sequence

      This guide sets up the following flow of events:

      1. You create a new branch in your local Git repository and make code changes to your Gatsby project.

      2. You push your branch to your GitHub repository and create a pull request.

      3. Netlify automatically creates a preview of the site with a unique URL that can be shared.

      4. Travis CI automatically builds the site in an isolated container and runs any declared tests.

      5. When all tests pass, you merge the PR into the repository’s master branch, which automatically triggers a deployment to your production Linode.

      Before You Begin

      1. Follow the Getting Started guide and deploy a Linode running Ubuntu 18.04.

      2. Complete the Securing Your Server guide to create a limited Linux user account with sudo privileges, harden SSH access, and remove unnecessary network services.

        Note

        This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you’re not familiar with the sudo command, visit our Users and Groups guide.

        All configuration files should be edited with elevated privileges. Remember to include sudo before running your text editor.

      3. Configure DNS for your site by adding a domain zone and setting up reverse DNS on your Linode’s IP.

      4. Create a GitHub account if you don’t already have one. GitHub is free for open source projects.

      5. Install Git on your local computer. Later in this guide, Homebrew will be used to install Gatsby on a Mac, so it’s recommended that you also use Homebrew to install Git if you’re using a Mac.

      Prepare Your Production Linode

      Install NGINX

      1. Install NGINX from Ubuntu’s repository on your Linode:

        sudo apt install nginx
        

      Configure NGINX

      1. Delete the default welcome page:

        sudo rm /etc/nginx/sites-enabled/default
        
      2. Create a site configuration file for Gatsby. Replace example.com in the file name and in the file’s contents with your domain name:

        /etc/nginx/conf.d/example.com.conf
        
        server {
            listen       80;
            server_name  example.com;
            #charset koi8-r;
            #access_log  /var/log/nginx/host.access.log  main;
        
            location / {
                root   /usr/share/nginx/html/example.com/public;
                index  index.html index.htm;
            }
        }

        Note

        Replace all future instances of example.com in this guide with your domain name.

      3. The root directive in your NGINX configuration points to a directory named public within /usr/share/nginx/html/example.com/. Later in this guide, Gatsby will be responsible for creating the public directory and building its static content within it (specifically, via the gatsby build command).

        The /usr/share/nginx/html/example.com/ directory does not exist on your server yet, so create it:

        sudo mkdir -p /usr/share/nginx/html/example.com/
        
      4. The Gatsby deployment script that will be introduced later in this guide will run under your limited Linux user. Set your limited user to be the owner of the new document root directory. This ensures the deployment script will be able to write your site’s files to it:

        sudo chown $(whoami):$(id -gn) -R /usr/share/nginx/html/example.com/
        
      5. Test your NGINX configuration for errors:

        sudo nginx -t
        
      6. Reload the configuration:

        sudo nginx -s reload
        
      7. Navigate to your Linode’s domain or IP address in a browser. Your Gatsby site files aren’t deployed yet, so you should only see a 404 Not Found error. Still, this error indicates that your NGINX process is running as expected.

      Develop with Gatsby on Your Local Computer

      You will develop with Gatsby on your local computer. This guide walks through creating a simple sample Gatsby website, but more extensive website development is not explored, so review Gatsby’s official documentation afterwards for more information on the subject.

      Install Gatsby

      This section provides instructions for installing Gatsby via Node.js and the Node Package Manager (npm) on Mac and Linux computers. If you are using a Windows PC, read Gatsby’s official documentation for installation instructions.

      1. Install npm on your local computer. If you are running Ubuntu or Debian on your computer, use apt:

        sudo apt install nodejs npm
        

        If you have a Mac, use Homebrew (npm is included with the Node.js package):

        brew install node
        
      2. Ensure Node.js was installed by checking its version:

        node --version
        
      3. Install the Gatsby command line:

        sudo npm install --global gatsby-cli
        

      Create a Gatsby Site

      1. Gatsby uses starters to provide a pre-configured base Gatsby site that you can customize and build on top of. This guide uses the Hello World starter. On your local computer, install the Hello World starter in your home directory (using the name example-site for your new project) and navigate into it:

        gatsby new example-site https://github.com/gatsbyjs/gatsby-starter-hello-world
        cd ~/example-site
        
      2. Inspect the contents of the directory:

        ls
        

        You should see output similar to:

          
        LICENSE  node_modules  package.json  package-lock.json  README.md  src
        
        

        The src directory contains your project’s source files. This starter will include the React Javascript component file src/pages/index.js, which will be mapped to our example site’s homepage.

        Gatsby uses React components to build your site’s static pages. Components are small and isolated pieces of code, and Gatsby stores them in the src/pages directory. When your Gatsby site is built, these will automatically become your site’s pages, with paths based on each file’s name.

      3. Gatsby offers a built-in development server which builds and serves your Gatsby site. This server will also monitor any changes made to your src directory’s React components and will rebuild Gatsby after every change, which helps you see your local changes as you make them.

        Open a new shell session (in addition to the one you already have open) and run the Gatsby development server:

        cd ~/example-site
        gatsby develop
        
      4. The gatsby develop command will display messages from the build process, including a section similar to the following:

          
        You can now view gatsby-starter-hello-world in the browser.
        
            http://localhost:8000/
        
        

        Copy and paste the http://localhost:8000/ URL (or the specific string displayed in your terminal) into your web browser to view your Gatsby site. You should see a page that displays “Hello World”.

      5. In your original shell session, view the contents of your example-site directory again:

        ls
        
          
            LICENSE  node_modules  package.json  package-lock.json  README.md  src  public
            
        

        You should now see a public directory which was not present before. This directory holds the static files built by Gatsby. Your NGINX server will serve the static files located in the public directory.

      6. Open the src/pages/index.js file in your text editor, add new text between the <div> tags, and save your change:

        src/pages/index.js
        
        import React from "react"
        
        export default () => <div>Hello world and universe!</div>
        
      7. Navigate back to your browser window, where the updated text should automatically appear on the page.

      Version Control Your Gatsby Project

      In the workflow explored by this guide, Git and GitHub are used to:

      • Track changes you make during your site’s development.
      • Trigger the preview, test, and deployment functions offered by Netlify and Travis.

      The following steps present how to initialize a new local Git repository for your Gatsby project, and how to connect it to a central GitHub repository.

      1. Open a shell session on your local computer and navigate to the example-site directory. Initialize a Git repository to begin tracking your project files:

        git init
        

        Stage all the files you’ve created so far for your first commit:

        git add -A
        

        The Hello World starter includes a .gitignore file. Your .gitignore designates which files and directories to ignore in your Git commits. By default, it is set to ignore any files in the public directory. The public directory’s files will not be tracked in this repository, as they can be quickly rebuilt by anyone who clones your repository.

      2. Commit all the Hello World starter files:

        git commit -m "Initial commit"
        
      3. Navigate to your GitHub account and create a new repository named example-site. After the repository is created, copy its URL, which will have the form https://github.com/your-github-username/example-site.git.

      4. In your local computer’s shell session, add the GitHub repository as your local repository’s origin:

        git remote add origin https://github.com/your-github-username/example-site.git
        
      5. Verify the origin remote’s location:

        git remote -v
        
          
        origin	https://github.com/your-github-username/example-site.git (fetch)
        origin	https://github.com/your-github-username/example-site.git (push)
        
        
      6. Push the master branch of your local repository to the origin repository:

        git push origin master
        
      7. View your GitHub account in your browser, navigate to the example-site repository, and verify that all the files have been pushed to it successfully:

        GitHub Initial Commit

      Preview Your Site with Netlify

      In the course of developing a website (or any other software project), a common practice when you’ve finished a new feature and would like to share it with your collaborators is to create a pull request (also referred to as a PR). A pull request is an intermediate step between uploading your work to GitHub (by pushing the changes to a new branch) and later merging it into the master branch (or another release or development branch, according to your specific Git workflow).

      Once connected to your GitHub account, the Netlify service can build a site preview from your PR’s code every time you create a PR. Netlify will also regenerate your site preview if you commit and push new updates to your PR’s branch while the PR is still open. A random, unique URL is assigned to every preview, and you can share these URLs with your collaborators.

      Connect Your GitHub Repository to Netlify

      1. Navigate to the Netlify site and click on the Sign Up link:

        Netlify Home Page

      2. Click on the GitHub button to connect your GitHub account with Netlify. If you used a different version control service, select that option instead:

        GitHub and Netlify connection page

      3. You will be taken to the GitHub site and asked to authorize Netlify to access your account. Click on the Authorize Netlify button:

        GitHub Netlify Authorization

      4. Add your new site to Netlify and continue along with the prompts to finish connecting your repository to Netlify. Be sure to select the GitHub repository created in the previous steps:

        Add site to Netlify

      5. Provide the desired deploy settings for your repository. Unless you are sure you need to change these settings, keep the Netlify defaults:

        Netlify repository settings

        Note

        You can add a netlify.toml configuration file to your Git repository to define more deployment settings.

      Create a Pull Request

      1. In your local Git repository, create a new branch to test Netlify:

        git checkout -b test-netlify
        
      2. On your computer, edit your src/pages/index.js and update the message displayed:

        src/pages/index.js
        
        import React from "react"
        
        export default () => <div>Hello world, universe, and multiverse!</div>
        
      3. Commit those changes:

        git add .
        git commit -m "Testing Netlify"
        
      4. Push the new branch to the origin repository:

        git push origin test-netlify
        
      5. Navigate to the example-site repository in your GitHub account and create a pull request with the test-netlify branch:

        GitHub Compare and Pull Request Banner

      6. After you create the pull request, you will see a deploy/netlify row with a Details link. The accent color for this row will initially be yellow while the Netlify preview is being built. When the preview’s build process is finished, this row will turn green. At that point, you can click on the Details link to view your Gatsby site’s preview.

        Netlify GitHub Preview

        Every time you push changes to your branch, Netlify will provide a new preview link.

      Test and Deploy Your Site with Travis CI

      Travis CI manages testing your Gatsby site and deploying it to the Linode production server. Travis does this by monitoring updates to your GitHub repository:

      • Travis’s tests will run when a pull request is created, whenever new commits are pushed to that pull request’s branch, and whenever a branch is updated on your GitHub repository in general (including outside the context of a pull request).

      • Travis’s deployment function will trigger whenever a pull request has been merged into the master branch (and optionally when merging into other branches, depending on your configuration).

      Connect Your GitHub Repository to Travis CI

      1. Navigate to the Travis CI site and click on the Sign up with GitHub button.

        Note

        Be sure to visit travis-ci.com, not travis-ci.org. Travis originally operated travis-ci.com for paid/private repositories, and travis-ci.org was run separately for free/open source projects. As of May 2018, travis-ci.com supports open source projects and should be used for all new projects. Projects on travis-ci.org will eventually be migrated to travis-ci.com.
      2. You will be redirected to your GitHub account. Authorize Travis CI to access your GitHub account:

        Authorize Travis CI

      3. You will be redirected to your Travis CI account’s page where you will be able to see a listing of all your public repositories. Click on the toggle button next to your Gatsby repository to activate Travis CI for it.

      Configure Travis CI to Run Tests

      Travis’s functions are all configured by adding and editing a .travis.yml file in the root of your project. When .travis.yml is present in your project and you push a commit to your central GitHub repository, Travis performs one or more builds.

      Travis builds are run in new virtualized environments created for each build. The build lifecycle is primarily composed of an install step and a script step. The install step is responsible for installing your project’s dependencies in the new virtual environment. The script step invokes one or more bash scripts that you specify, usually test scripts of some kind.

      1. Navigate to your local Gatsby project and create a new Git branch to keep track of your Travis configurations:

        git checkout -b travis-configs
        
      2. Create your .travis.yml file at the root of the project:

        touch .travis.yml
        

        Note

        Make sure you commit changes at logical intervals as you modify the files in your Git repository.

      3. Open your .travis.yml file in a text editor and add the following lines:

        ~/example-site/.travis.yml
        
        language: node_js
        node_js:
          - '10.0'
        
        dist: trusty
        sudo: false

        This configuration specifies that the build’s virtual environment should be Ubuntu 14.04 (also known as trusty). sudo: false indicates that the virtual environment should be a container, and not a full virtual machine. Other environments are available.

        Gatsby is built with Node.js, so the Travis configuration is set to use node_js as the build language, and to use the latest version of Node (10.0 at the time of this guide’s publication). When Node is specified as the build language, Travis automatically sets default values for the install and script steps: install will run npm install, and script will run npm test. Other languages, like Python, are also available.

      4. The Gatsby Hello World starter provides a package.json file, which is a collection of metadata that describes your project. It is used by npm to install, run, and test your project. In particular, it includes a dependencies section used by npm install, and a scripts section where you can declare the tests run by npm test.

        No tests are listed by default in your starter’s package.json, so open the file with your editor and add a test line to the scripts section:

        package.json
        
        {
          "name": "gatsby-starter-hello-world",
          "description": "Gatsby hello world starter",
          "license": "MIT",
          "scripts": {
            "develop": "gatsby develop",
            "build": "gatsby build",
            "serve": "gatsby serve",
            "test": "echo 'Run your tests here'"
          },
          "dependencies": {
            "gatsby": "^1.9.277",
            "gatsby-link": "^1.6.46"
          }
        }

        This entry is just a stub to illustrate where tests are declared. For more information on how to test your Gatsby project, review the unit testing documentation on Gatsby’s website. Jest is the testing framework recommended by Gatsby.

      View Output from Your Travis Build

      1. Commit the changes you’ve made and push your travis-configs branch to your origin repository:

        git add .
        git commit -m "Travis testing configuration"
        git push origin travis-configs
        
      2. View your GitHub repository in your browser and create a pull request for the travis-configs branch.

      3. Several rows that link to your Travis builds will appear in your new pull request. When a build finishes running without error, the build’s accent color will turn green:

        GitHub Travis Builds

        Note

        Four rows for your Travis builds will appear, which is more than you may expect. This is because Travis runs your builds whenever your branch is updated, and whenever your pull request is updated, and Travis considers these to be separate events.

        In addition, the rows prefixed by Travis CI - are links to GitHub’s preview of those builds, while rows prefixed with continuous-integration/travis-ci/ are direct links to the builds on travis-ci.com.

        For now, these builds will produce identical output. After the deployment functions of Travis have been configured, the pull request builds will skip the deployment step, while the branch builds will implement your deployment configuration.

      4. Click the Details link in the continuous-integration/travis-ci/push row to visit the logs for that build. A page with similar output will appear:

        Travis Build Logs - First Test

        Towards the end of your output, you should see the “Run your tests here” message from the test stub that you entered in your package.json. If you implement testing of your code with Jest or another library, the output from those tests will appear at this location in your build logs.

        If any of the commands that Travis CI runs in the script step (or in any preceding steps, like install) returns with a non-zero exit code, then the build will fail, and you will not be able to merge your pull request on GitHub.

      5. For now, do not merge your pull request, even if the builds were successful.

      Give Travis Permission to Deploy to Your Linode

      In order to let Travis push your code to your production Linode, you first need to give the Travis build environment access to the Linode. This will be accomplished by generating a public-private key pair for your build environment and then uploading the public key to your Linode. Your code will be deployed over SSH, and the SSH agent in the build environment will be configured to use your new private key.

      The private key will also need to be encrypted, as the key file will live in your Gatsby project’s Git repository, and you should never check a plain-text version of it into version control.

      1. Install the Travis CLI, which you will need to generate an encrypted version of your private key. The Travis CLI is distributed as a Ruby gem:

        On Linux:

        sudo apt install ruby ruby-dev
        sudo gem install travis
        

        On macOS:

        sudo gem install travis
        

        On Windows: Use RubyInstaller to install Ruby and the Travis CLI gem.

      2. Log in to Travis CI with the CLI:

        travis login --com
        

        Follow the prompts to provide your GitHub login credentials. These credentials are passed directly to GitHub and are not recorded by Travis. In exchange, GitHub returns a GitHub access token to Travis, after which Travis will provide your CLI with a Travis access token.

      3. Inside the root of your local example-site Git repository, create a scripts directory. This will hold files related to deploying your Gatsby site:

        mkdir scripts
        
      4. Generate a pair of SSH keys inside the scripts directory. The key pair will be named gatsby-deploy so that you don’t accidentally overwrite any preexisting key pairs. Replace your_email@example.com with your email address. When prompted for the key pair’s passphrase, enter no passphrase/leave the field empty:

        ssh-keygen -t rsa -b 4096 -C "your_email@example.com" -f scripts/gatsby-deploy
        

        Two files will be created: gatsby-deploy (your private key) and gatsby-deploy.pub (your public key).

      5. Add the location of the gatsby-deploy file to your project’s .gitignore file. This will ensure that you do not accidentally commit the secret key to your central repository:

        .gitignore
        
        # Other .gitignore instructions
        # [...]
        scripts/gatsby-deploy
      6. Encrypt your private key using the Travis CLI:

        cd scripts && travis encrypt-file gatsby-deploy --add --com
        
      7. You should now see a gatsby-deploy.enc file in your scripts directory:

        ls
        
          
            gatsby-deploy		gatsby-deploy.enc	gatsby-deploy.pub
        
        
      8. The --add flag from the previous command also told the Travis CLI to add a few new lines to your .travis.yml file. These lines decrypt your private key and should look similar to the following snippet:

        .travis.yml
        
        before_install:
        - openssl aes-256-cbc -K $encrypted_9e3557de08a3_key -iv $encrypted_9e3557de08a3_iv
          -in gatsby-deploy.enc -out gatsby-deploy -d


        About the openssl command and Travis build variables

        The second line (starting with -in gatsby-deploy.enc) is a continuation of the first line, and -in is an option passed to the openssl command. This line is not its own item in the before_install list.

        The openssl command accepts the encrypted gatsby-deploy.enc file and uses two environment variables to decrypt it, resulting in your original gatsby-deploy private key. These two variables are stored in the Settings page for your repository on travis-ci.com. Any variables stored there will be accessible to your build environment:

        Travis Environment Variables

      9. Edit the lines previously added by the travis encrypt-file command so that gatsby-deploy.enc and gatsby-deploy are prefixed with your scripts/ directory:

        .travis.yml
        
        before_install:
        - openssl aes-256-cbc -K $encrypted_9e3557de08a3_key -iv $encrypted_9e3557de08a3_iv
          -in scripts/gatsby-deploy.enc -out scripts/gatsby-deploy -d
      10. Continue preparing the SSH agent in your build environment by adding the following lines to the before_install step, after the openssl command. Be sure to replace 192.0.2.2 with your Linode’s IP address:

        ~/example-site/.travis.yml
        
        before_install:
        - openssl aes-256-cbc -K $encrypted_9e3557de08a3_key -iv $encrypted_9e3557de08a3_iv
          -in scripts/gatsby-deploy.enc -out scripts/gatsby-deploy -d
        - eval "$(ssh-agent -s)"
        - cp scripts/gatsby-deploy ~/.ssh/gatsby-deploy
        - chmod 600 ~/.ssh/gatsby-deploy
        - ssh-add ~/.ssh/gatsby-deploy
        - echo -e "Host 192.0.2.2\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
      11. Travis CI can add entries to the build environment’s ~/.ssh/known_hosts prior to deploying your site. Insert the following addons step prior to the before_install step in your .travis.yml. Replace 192.0.2.2 with your Linode’s IP address:

        ~/example-site/.travis.yml
        
        # [...]
        dist: trusty
        sudo: false
        
        addons:
          ssh_known_hosts:
            - 192.0.2.2
        
        before_install:
        # [...]
      12. From your local computer, upload your Travis environment’s public key to the home directory of your limited Linux user on your Linode. Replace example_user with your Linode’s user and 192.0.2.2 with your Linode’s IP address:

        scp ~/example-site/scripts/gatsby-deploy.pub example_user@192.0.2.2:~/gatsby-deploy.pub
        
      13. Log in to your Linode (using the same user that the key was uploaded to) and copy the key into your authorized_keys file:

        mkdir -p .ssh
        cat gatsby-deploy.pub | tee -a .ssh/authorized_keys
        

      Create a Deployment Script

      1. Update your .travis.yml to include a deploy step. This section will be executed when a pull request is merged into the master branch. Add the following lines below the before_install step, at the end of the file:

        ~/example-site/.travis.yml
        
        deploy:
        - provider: script
          skip_cleanup: true
          script: bash scripts/deploy.sh
          on:
            branch: master

        The instructions for pushing your site to your Linode will be defined in a deploy.sh script that you will create.


        Full contents of your Travis configuration

        The complete and final version of your .travis.yml file should resemble the following:

        ~/example-site/.travis.yml
        
        language: node_js
        node_js:
        - "10.0"
        
        dist: trusty
        sudo: false
        
        addons:
          ssh_known_hosts:
          - 192.0.2.2
        
        before_install:
        - openssl aes-256-cbc -K $encrypted_07d52615a665_key -iv $encrypted_07d52615a665_iv
          -in scripts/gatsby-deploy.enc -out scripts/gatsby-deploy -d
        - eval "$(ssh-agent -s)"
        - cp scripts/gatsby-deploy ~/.ssh/gatsby-deploy
        - chmod 600 ~/.ssh/gatsby-deploy
        - ssh-add ~/.ssh/gatsby-deploy
        - echo -e "Host 192.0.2.2\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
        
        deploy:
        - provider: script
          skip_cleanup: true
          script: bash scripts/deploy.sh
          on:
            branch: master
      2. From your local example-site Git repository, create a deploy.sh file in the scripts directory and make it executable:

        touch scripts/deploy.sh
        chmod +x scripts/deploy.sh
        
      3. Open your deploy.sh file in your text editor and add the following lines. Replace all instances of example_user with your Linode’s user, and replace 192.0.2.2 with your Linode’s IP:

        scripts/deploy.sh
        #!/bin/bash
        set -x
        
        gatsby build
        
        # Configure Git to only push the current branch
        git config --global push.default simple
        
        # Remove .gitignore and replace with the production version
        rm -f .gitignore
        cp scripts/prodignore .gitignore
        cat .gitignore
        
        # Add the Linode production server as a remote repository
        git remote add production ssh://example_user@192.0.2.2:/home/example_user/gatsbybare.git
        
        # Add and commit all the static files generated by the Gatsby build
        git add . && git commit -m "Gatsby build"
        
        # Push all changes to the Linode production server
        git push -f production HEAD:refs/heads/master

        The deploy script builds the Gatsby static files (which are placed in the public directory of your repository) and pushes them to your Linode. Specifically, this script:

        • Commits the newly-built public directory to the Travis build environment’s copy of your Git repository.
        • Pushes that commit (over the SSH protocol) to a remote repository on your Linode, which you will create in the next section of this guide.

        Note

        Remember that because these instructions are executed in an isolated virtual environment, the git commit that is run here does not affect the repository on your local computer or on GitHub.
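
        If you want to preview what the script will commit before relying on Travis, you can run the same build step on your local computer as an optional sanity check (this assumes Gatsby is installed as a project dependency, so npx can find its local binary):

        cd ~/example-site
        npx gatsby build
        ls public

        The public directory listed here contains the static files that deploy.sh commits and pushes to your Linode.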

      4. You may recall that you previously updated your .gitignore file to exclude the public directory. To allow this directory to be committed in your build environment’s repository (and therefore pushed to your Linode), you will need to override that rule at deploy time.

        From the root of your local Gatsby project, copy your .gitignore to a new scripts/prodignore file:

        cp .gitignore scripts/prodignore
        

        Open your new prodignore file, remove the public line, and save the change:

        scripts/prodignore
        .cache/
        public # Remove this line
        yarn-error.log

        The deploy.sh script you created includes a line that will copy this scripts/prodignore file over your repository’s root .gitignore, which will then allow the script to commit the public directory.
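
        To double-check that the two ignore files differ only by the public entry, you can compare them. The exact diff output depends on your file contents, but it should resemble the following:

        diff .gitignore scripts/prodignore
        # 2d1
        # < public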

      Prepare the Remote Git Repository on Your Linode

      In the previous section you completed the configuration for the Travis deployment step. In this section, you will prepare the Linode to receive Git pushes from your deployment script. The pushed website files will then be served by your NGINX web server.

      1. SSH into your Linode as the same user that holds your Travis build environment’s public key. Create a new directory inside your home folder named gatsbybare.git:

        mkdir ~/gatsbybare.git
        
      2. Navigate to the new directory and initialize it as a bare Git repository:

        cd ~/gatsbybare.git
        git init --bare
        

        A bare Git repository stores Git objects but does not maintain a working copy (that is, a checked-out set of files) in the directory. Bare repositories provide a centralized place where users can push their changes; the repositories hosted on services like GitHub are stored as bare repositories. The common practice for naming a bare Git repository is to end the name with the .git extension.
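
        For reference, listing the freshly initialized repository shows only Git's internal layout rather than any site files; the exact entries can vary slightly between Git versions:

        ls ~/gatsbybare.git
        # branches  config  description  HEAD  hooks  info  objects  refs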

      3. Configure the repository to accept pushes to its currently checked-out branch; the updateInstead setting tells Git to update the branch in place instead of refusing the push:

        git config receive.denyCurrentBranch updateInstead
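
        To confirm that the setting was recorded, you can read it back from within the same directory (an optional check):

        git config --get receive.denyCurrentBranch
        # updateInstead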
        
      4. Your Travis build environment will now be able to push files into your Linode’s Git repository, but the files will not be located in your NGINX document root. To fix this, you will use the hooks feature of Git to copy your website files to the document root folder. Specifically, you can implement a post-receive hook that will run after every push to your Linode’s repository.

        In your Linode’s Git repository, create the post-receive file and make it executable:

        touch hooks/post-receive
        chmod +x hooks/post-receive
        
      5. Add the following lines to the post-receive file. Replace example.com with your domain name, and replace example_user with your Linode’s user:

        hooks/post-receive
        #!/bin/sh
        git --work-tree=/usr/share/nginx/html/example.com --git-dir=/home/example_user/gatsbybare.git checkout -f

        This script will check out the files from your Linode repository’s master branch into your document root folder.

        Note

        While a bare Git repository does not keep working copies of files within the repository’s directory, you can still use the --work-tree option to check out files into another directory.

      Deploy with Travis CI

      All of the test and deployment configuration is now in place, and you can run the full pipeline:

      1. Commit all remaining changes to your travis-configs branch and push them up to your central GitHub repository:

        git add .
        git commit -m "Travis deployment configuration"
        git push origin travis-configs
        
      2. Visit the pull request you previously created on GitHub for your travis-configs branch. If you visit this page shortly after the git push command is issued, the new Travis builds may still be in progress.

      3. After the linked continuous-integration/travis-ci/pr pull request Travis build completes, click on the corresponding Details link. If the build was successful, you should see the following message:

          
        Skipping a deployment with the script provider because the current build is a pull request.
        
        

        This message appears because pull request builds skip the deployment step.

      4. Back on the GitHub pull request page, after the linked continuous-integration/travis-ci/push branch build completes, click on the corresponding Details link. If the build was successful, you should see the following message:

          
        Skipping a deployment with the script provider because this branch is not permitted: travis-configs
        
        

        This message appears because your .travis.yml restricts the deployment script to updates on the master branch.

      5. If your Travis builds failed, review the build logs to determine the cause of the failure.

      6. If the builds succeeded, merge your pull request.

      7. After merging the pull request, visit travis-ci.com directly and view the example-site repository. A new Travis build corresponding to your Merge pull request commit will be in progress. When this build completes, a Deploying application message will appear at the end of the build logs. This message can be expanded to view the complete logs for the deploy step.

      8. If your deploy step succeeded, you can now visit your domain name in your browser. You should see the message from your Gatsby project’s index.js.

      Troubleshooting

      If your Travis builds are failing, here are some places to look when troubleshooting:

      • View the build logs for the failed Travis build.
      • Ensure all your .sh scripts are executable, including the Git hook on the Linode.
      • Test the Git hook on your Linode by running bash ~/gatsbybare.git/hooks/post-receive.
      • If you encounter permissions issues, make sure your Linode user can write files to your document root directory (see the example after this list).
      • To view the contents of the bare Git repository, run git ls-tree --full-tree -r HEAD from within ~/gatsbybare.git.
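
      For instance, if your document root is owned by root (common on a default NGINX installation), one possible fix, assuming your Linode user has sudo privileges, is to give that user ownership of the site directory. Adjust the path and user to match your setup:

        sudo chown -R example_user:example_user /usr/share/nginx/html/example.com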

      Next Steps

      Read the Gatsby.js Tutorial to learn how to build a website with Gatsby.
