
      Webinar Series: Building Blocks for Doing CI/CD with Kubernetes


      Webinar Series

      This article supplements a webinar series on doing CI/CD with Kubernetes. The series discusses how to take a Cloud Native approach to building, testing, and deploying applications, covering release management, Cloud Native tools, Service Meshes, and CI/CD tools that can be used with Kubernetes. It is designed to help developers and businesses that are interested in integrating CI/CD best practices with Kubernetes into their workflows.

      This tutorial includes the concepts and commands from the first session of the series, Building Blocks for Doing CI/CD with Kubernetes.

      Introduction

      If you are getting started with containers, you will likely want to know how to automate building, testing, and deployment. By taking a Cloud Native approach to these processes, you can leverage the right infrastructure APIs to package and deploy applications in an automated way.

      Two essential building blocks for this kind of automation are container images and container orchestrators. Over the last year or so, Kubernetes has become the default choice for container orchestration. In this first article of the CI/CD with Kubernetes series, you will:

      • Build container images with Docker, Buildah, and Kaniko.
      • Set up a Kubernetes cluster with Terraform, and create Deployments and Services.
      • Extend the functionality of a Kubernetes cluster with Custom Resources.

      By the end of this tutorial, you will have container images built with Docker, Buildah, and Kaniko, and a Kubernetes cluster with Deployments, Services, and Custom Resources.

      Future articles in the series will cover related topics: package management for Kubernetes, CI/CD tools like Jenkins X and Spinnaker, Service Meshes, and GitOps.

      Prerequisites

      Step 1 — Building Container Images with Docker and Buildah

      A container image is a self-contained entity with its own application code, runtime, and dependencies that you can use to create and run containers. You can use different tools to create container images, and in this step you will build containers with two of them: Docker and Buildah.

      Building Container Images with Dockerfiles

      Docker builds your container images automatically by reading instructions from a Dockerfile, a text file that includes the commands required to assemble a container image. Using the docker image build command, you can create an automated build that will execute the command-line instructions provided in the Dockerfile. When building the image, you will also pass the build context with the Dockerfile, which contains the set of files required to create an environment and run an application in the container image.

      Typically, you will create a project folder for your Dockerfile and build context. Create a folder called demo to begin:
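
      • mkdir demo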

      Next, create a Dockerfile inside the demo folder:
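
      • cd demo
      • nano Dockerfile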

      Add the following content to the file:

      ~/demo/Dockerfile

      FROM ubuntu:16.04
      
      LABEL MAINTAINER neependra@cloudyuga.guru
      
      RUN apt-get update \
          && apt-get install -y nginx \
          && apt-get clean \
          && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
          && echo "daemon off;" >> /etc/nginx/nginx.conf
      
      EXPOSE 80
      CMD ["nginx"]
      

      This Dockerfile consists of a set of instructions that will build an image to run Nginx. During the build process ubuntu:16.04 will function as the base image, and the nginx package will be installed. Using the CMD instruction, you've also configured nginx to be the default command when the container starts.

      Next, you'll build the container image with the docker image build command, using the current directory (.) as the build context. Passing the -t option to this command names the image nkhare/nginx:latest:

      • sudo docker image build -t nkhare/nginx:latest .

      You will see the following output:

      Output

      Sending build context to Docker daemon  49.25MB
      Step 1/5 : FROM ubuntu:16.04
       ---> 7aa3602ab41e
      Step 2/5 : LABEL MAINTAINER neependra@cloudyuga.guru
       ---> Using cache
       ---> 552b90c2ff8d
      Step 3/5 : RUN apt-get update     && apt-get install -y nginx     && apt-get clean     && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*     && echo "daemon off;" >> /etc/nginx/nginx.conf
       ---> Using cache
       ---> 6bea966278d8
      Step 4/5 : EXPOSE 80
       ---> Using cache
       ---> 8f1c4281309e
      Step 5/5 : CMD ["nginx"]
       ---> Using cache
       ---> f545da818f47
      Successfully built f545da818f47
      Successfully tagged nkhare/nginx:latest

      Your image is now built. You can list your Docker images using the following command:
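
      • sudo docker image ls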

      Output

      REPOSITORY          TAG       IMAGE ID       CREATED          SIZE
      nkhare/nginx        latest    4073540cbcec   3 seconds ago    171MB
      ubuntu              16.04     7aa3602ab41e   11 days ago

      You can now use the nkhare/nginx:latest image to create containers.

      Building Container Images with Project Atomic's Buildah

      Buildah is a CLI tool, developed by Project Atomic, for quickly building Open Container Initiative (OCI)-compliant images. OCI provides specifications for container runtimes and images in an effort to standardize industry best practices.

      Buildah can create an image either from a working container or from a Dockerfile. It can build images completely in user space without the Docker daemon, and can perform image operations like build, list, push, and tag. In this step, you'll compile Buildah from source and then use it to create a container image.

      To install Buildah you will need the required dependencies, including tools that will enable you to manage packages and package security, among other things. Run the following commands to install these packages:

      • cd
      • sudo apt-get install software-properties-common
      • sudo add-apt-repository ppa:alexlarsson/flatpak
      • sudo add-apt-repository ppa:gophers/archive
      • sudo apt-add-repository ppa:projectatomic/ppa
      • sudo apt-get update
      • sudo apt-get install bats btrfs-tools git libapparmor-dev libdevmapper-dev libglib2.0-dev libgpgme11-dev libostree-dev libseccomp-dev libselinux1-dev skopeo-containers go-md2man

      Because you will compile the buildah source code to create its package, you'll also need to install Go:

      • sudo apt-get update
      • sudo curl -O https://storage.googleapis.com/golang/go1.8.linux-amd64.tar.gz
      • sudo tar -xvf go1.8.linux-amd64.tar.gz
      • sudo mv go /usr/local
      • echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.profile
      • source ~/.profile
      • go version

      You will see the following output, indicating a successful installation:

      Output

      go version go1.8 linux/amd64

      You can now get the buildah source code to create its package, along with the runc binary. runc is the reference implementation of the OCI container runtime specification, which you will use to run your Buildah containers.

      Run the following commands to install runc and buildah:

      • mkdir ~/buildah
      • cd ~/buildah
      • export GOPATH=`pwd`
      • git clone https://github.com/projectatomic/buildah ./src/github.com/projectatomic/buildah
      • cd ./src/github.com/projectatomic/buildah
      • make runc all TAGS="apparmor seccomp"
      • sudo cp ~/buildah/src/github.com/opencontainers/runc/runc /usr/bin/.
      • sudo apt install buildah

      Next, create the /etc/containers/registries.conf file to configure your container registries:

      • sudo nano /etc/containers/registries.conf

      Add the following content to the file to specify your registries:

      /etc/containers/registries.conf

      
      # This is a system-wide configuration file used to
      # keep track of registries for various container backends.
      # It adheres to TOML format and does not support recursive
      # lists of registries.
      
      # The default location for this configuration file is /etc/containers/registries.conf.
      
      # The only valid categories are: 'registries.search', 'registries.insecure',
      # and 'registries.block'.
      
      [registries.search]
      registries = ['docker.io', 'registry.fedoraproject.org', 'quay.io', 'registry.access.redhat.com', 'registry.centos.org']
      
      # If you need to access insecure registries, add the registry's fully-qualified name.
      # An insecure registry is one that does not have a valid SSL certificate or only does HTTP.
      [registries.insecure]
      registries = []
      
      # If you need to block pull access from a registry, uncomment the section below
      # and add the registries fully-qualified name.
      #
      # Docker only
      [registries.block]
      registries = []
      

      The registries.conf configuration file specifies which registries should be consulted when completing image names that do not include a registry or domain portion.

      Now run the following command to build an image, using the https://github.com/do-community/rsvpapp repository as the build context. This repository also contains the relevant Dockerfile:

      • sudo buildah build-using-dockerfile -t rsvpapp:buildah github.com/do-community/rsvpapp

      This command creates an image named rsvpapp:buildah from the Dockerfile available in the https://github.com/do-community/rsvpapp repository.

      To list the images, use the following command:
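
      • sudo buildah images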

      You will see the following output:

      Output

      IMAGE ID       IMAGE NAME                                  CREATED AT             SIZE
      b0c552b8cf64   docker.io/teamcloudyuga/python:alpine       Sep 30, 2016 04:39     95.3 MB
      22121fd251df   localhost/rsvpapp:buildah                   Sep 11, 2018 14:34     114 MB

      One of these images is localhost/rsvpapp:buildah, which you just created. The other, docker.io/teamcloudyuga/python:alpine, is the base image from the Dockerfile.

      Once you have built the image, you can push it to Docker Hub. This will allow you to store it for future use. You will first need to log in to your Docker Hub account from the command line:

      • docker login -u your-dockerhub-username -p your-dockerhub-password

      Once the login is successful, you will get a file, ~/.docker/config.json, that will contain your Docker Hub credentials. You can then use that file with buildah to push images to Docker Hub.

      For example, if you wanted to push the image you just created, you could run the following command, citing the authfile and the image to push:

      • sudo buildah push --authfile ~/.docker/config.json rsvpapp:buildah docker://your-dockerhub-username/rsvpapp:buildah

      You can also push the resulting image to the local Docker daemon using the following command:

      • sudo buildah push rsvpapp:buildah docker-daemon:rsvpapp:buildah

      Finally, take a look at the Docker images you have created:
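
      • sudo docker image ls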

      Output

      REPOSITORY     TAG       IMAGE ID       CREATED          SIZE
      rsvpapp        buildah   22121fd251df   4 minutes ago    108MB
      nkhare/nginx   latest    01f0982d91b8   17 minutes ago   172MB
      ubuntu         16.04     b9e15a5d1e1a   5 days ago       115MB

      As expected, you should now see a new image, rsvpapp:buildah, that has been exported using buildah.

      You now have experience building container images with two different tools, Docker and Buildah. Let's move on to discussing how to set up a cluster of containers with Kubernetes.

      Step 2 — Setting Up a Kubernetes Cluster on DigitalOcean using kubeadm and Terraform

      There are different ways to set up Kubernetes on DigitalOcean. To learn more about how to set up Kubernetes with kubeadm, for example, you can look at How To Create a Kubernetes Cluster Using Kubeadm on Ubuntu 18.04.

      Since this tutorial series discusses taking a Cloud Native approach to application development, we'll apply this methodology when setting up our cluster. Specifically, we will automate our cluster creation using kubeadm and Terraform, a tool that simplifies creating and changing infrastructure.

      Using your personal access token, you will connect to DigitalOcean with Terraform to provision 3 servers. You will run the kubeadm commands inside of these VMs to create a 3-node Kubernetes cluster containing one master node and two workers.

      On your Ubuntu server, create a pair of SSH keys, which will allow password-less logins to your VMs:
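
      • ssh-keygen -t rsa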

      You will see the following output:

      Output

      Generating public/private rsa key pair.
      Enter file in which to save the key (~/.ssh/id_rsa):

      Press ENTER to save the key pair in the ~/.ssh directory, or enter another destination.

      Next, you will see the following prompt:

      Output

      Enter passphrase (empty for no passphrase):

      In this case, press ENTER without a password to enable password-less logins to your nodes.

      You will see a confirmation that your key pair has been created:

      Output

      Your identification has been saved in ~/.ssh/id_rsa.
      Your public key has been saved in ~/.ssh/id_rsa.pub.
      The key fingerprint is:
      SHA256:lCVaexVBIwHo++NlIxccMW5b6QAJa+ZEr9ogAElUFyY root@3b9a273f18b5
      The key's randomart image is:
      +---[RSA 2048]----+
      |++.E ++o=o*o*o   |
      |o +..=.B = o     |
      |. .* = * o       |
      | . =.o + *       |
      |  . . o.S + .    |
      |   . +.   .      |
      |  . ... =        |
      |   o= .          |
      |   ...           |
      +----[SHA256]-----+

      Get your public key by running the following command, which will display it in your terminal:
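
      • cat ~/.ssh/id_rsa.pub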

      Add this key to your DigitalOcean account by following these directions.

      Next, install Terraform:

      • sudo apt-get update
      • sudo apt-get install unzip
      • wget https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_linux_amd64.zip
      • unzip terraform_0.11.7_linux_amd64.zip
      • sudo mv terraform /usr/bin/.
      • terraform version

      You will see output confirming your Terraform installation:

      Output

      Terraform v0.11.7

      Next, run the following commands to install kubectl, a CLI tool that will communicate with your Kubernetes cluster, and to create a ~/.kube directory in your user's home directory:

      • sudo apt-get install apt-transport-https
      • curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
      • sudo touch /etc/apt/sources.list.d/kubernetes.list
      • echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
      • sudo apt-get update
      • sudo apt-get install kubectl
      • mkdir -p ~/.kube

      Creating the ~/.kube directory will enable you to copy the configuration file to this location. You’ll do that once you run the Kubernetes setup script later in this section. By default, the kubectl CLI looks for the configuration file in the ~/.kube directory to access the cluster.

      Next, clone the sample project repository for this tutorial, which contains the Terraform scripts for setting up the infrastructure:

      • git clone https://github.com/do-community/k8s-cicd-webinars.git

      Go to the Terraform script directory:

      • cd k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/

      Get a fingerprint of your SSH public key:

      • ssh-keygen -E md5 -lf ~/.ssh/id_rsa.pub | awk '{print $2}'

      You will see output like the following, with the highlighted portion representing your key:

      Output

      MD5:dd:d1:b7:0f:6d:30:c0:be:ed:ae:c7:b9:b8:4a:df:5e

      Keep in mind that your key will differ from what's shown here.

      Save the fingerprint to an environmental variable so Terraform can use it:

      • export FINGERPRINT=dd:d1:b7:0f:6d:30:c0:be:ed:ae:c7:b9:b8:4a:df:5e

      Next, export your DigitalOcean personal access token:

      • export TOKEN=your-do-access-token

      Now take a look at the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ project directory:
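
      • ls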

      Output

      cluster.tf destroy.sh files outputs.tf provider.tf script.sh

      This folder contains the necessary scripts and configuration files for deploying your Kubernetes cluster with Terraform.

      Execute the script.sh script to trigger the Kubernetes cluster setup:
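
      • ./script.sh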

      When the script execution is complete, kubectl will be configured to use the Kubernetes cluster you've created.

      List the cluster nodes using kubectl get nodes:
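
      • kubectl get nodes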

      Output

      NAME                STATUS    ROLES     AGE       VERSION
      k8s-master-node     Ready     master    2m        v1.10.0
      k8s-worker-node-1   Ready     <none>    1m        v1.10.0
      k8s-worker-node-2   Ready     <none>    57s       v1.10.0

      You now have one master and two worker nodes in the Ready state.

      With a Kubernetes cluster set up, you can now explore another option for building container images: Kaniko from Google.

      Step 3 — Building Container Images with Kaniko

      Earlier in this tutorial, you built container images with Dockerfiles and Buildah. But what if you could build container images directly on Kubernetes? There are ways to run the docker image build command inside of Kubernetes, but this isn't native Kubernetes tooling. You would have to depend on the Docker daemon to build images, and it would need to run on one of the Pods in the cluster.

      A tool called Kaniko allows you to build container images with a Dockerfile on an existing Kubernetes cluster. In this step, you will build a container image with a Dockerfile using Kaniko. You will then push this image to Docker Hub.

      In order to push your image to Docker Hub, you will need to pass your Docker Hub credentials to Kaniko. In the previous step, you logged into Docker Hub and created a ~/.docker/config.json file with your login credentials. Let's use this configuration file to create a Kubernetes ConfigMap object to store the credentials inside the Kubernetes cluster. The ConfigMap object is used to store configuration parameters, decoupling them from your application.

      To create a ConfigMap called docker-config using the ~/.docker/config.json file, run the following command:

      • sudo kubectl create configmap docker-config --from-file=$HOME/.docker/config.json

      Next, you can create a Pod definition file called pod-kaniko.yml in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory (though it can go anywhere).

      First, make sure that you are in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory:

      • cd ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/

      Create the pod-kaniko.yml file:
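
      • nano pod-kaniko.yml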

      Add the following content to the file to specify what will happen when you deploy your Pod. Be sure to replace your-dockerhub-username in the Pod's args field with your own Docker Hub username:

      ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/pod-kaniko.yml

      apiVersion: v1
      kind: Pod
      metadata:
        name: kaniko
      spec:
        containers:
        - name: kaniko
          image: gcr.io/kaniko-project/executor:latest
          args: ["--dockerfile=./Dockerfile",
                  "--context=/tmp/rsvpapp/",
                  "--destination=docker.io/your-dockerhub-username/rsvpapp:kaniko",
                  "--force" ]
          volumeMounts:
            - name: docker-config
              mountPath: /root/.docker/
            - name: demo
              mountPath: /tmp/rsvpapp
        restartPolicy: Never
        initContainers:
          - image: python
            name: demo
            command: ["/bin/sh"]
            args: ["-c", "git clone https://github.com/do-community/rsvpapp.git /tmp/rsvpapp"] 
            volumeMounts:
            - name: demo
              mountPath: /tmp/rsvpapp
        volumes:
          - name: docker-config
            configMap:
              name: docker-config
          - name: demo
            emptyDir: {}
      

      This configuration file describes what will happen when your Pod is deployed. First, the Init container will clone the Git repository with the Dockerfile, https://github.com/do-community/rsvpapp.git, into a shared volume called demo. Init containers run before application containers and can be used to run utilities or other tasks that are not desirable to run from your application containers. Your application container, kaniko, will then build the image using the Dockerfile and push the resulting image to Docker Hub, using the credentials stored in the ConfigMap volume docker-config.

      To deploy the kaniko pod, run the following command:

      • kubectl apply -f pod-kaniko.yml

      You will see the following confirmation:

      Output

      pod/kaniko created

      Get the list of pods:
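
      • kubectl get pods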

      You will see the following list:

      Output

      NAME      READY     STATUS     RESTARTS   AGE
      kaniko    0/1       Init:0/1   0          47s

      Wait a few seconds, and then run kubectl get pods again for a status update:
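
      • kubectl get pods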

      You will see the following:

      Output

      NAME      READY     STATUS    RESTARTS   AGE
      kaniko    1/1       Running   0          1m

      Finally, run kubectl get pods once more for a final status update:
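
      • kubectl get pods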

      Output

      NAME      READY     STATUS      RESTARTS   AGE
      kaniko    0/1       Completed   0          2m

      This sequence of output tells you that the Init container ran, cloning the GitHub repository inside of the demo volume. After that, the Kaniko build process ran and eventually finished.

      Check the logs of the pod:
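
      • kubectl logs kaniko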

      You will see the following output:

      Output

      time="2018-08-02T05:01:24Z" level=info msg="appending to multi args docker.io/your-dockerhub-username/rsvpapp:kaniko" time="2018-08-02T05:01:24Z" level=info msg="Downloading base image nkhare/python:alpine" . . . ime="2018-08-02T05:01:46Z" level=info msg="Taking snapshot of full filesystem..." time="2018-08-02T05:01:48Z" level=info msg="cmd: CMD" time="2018-08-02T05:01:48Z" level=info msg="Replacing CMD in config with [/bin/sh -c python rsvp.py]" time="2018-08-02T05:01:48Z" level=info msg="Taking snapshot of full filesystem..." time="2018-08-02T05:01:49Z" level=info msg="No files were changed, appending empty layer to config." 2018/08/02 05:01:51 mounted blob: sha256:bc4d09b6c77b25d6d3891095ef3b0f87fbe90621bff2a333f9b7f242299e0cfd 2018/08/02 05:01:51 mounted blob: sha256:809f49334738c14d17682456fd3629207124c4fad3c28f04618cc154d22e845b 2018/08/02 05:01:51 mounted blob: sha256:c0cb142e43453ebb1f82b905aa472e6e66017efd43872135bc5372e4fac04031 2018/08/02 05:01:51 mounted blob: sha256:606abda6711f8f4b91bbb139f8f0da67866c33378a6dcac958b2ddc54f0befd2 2018/08/02 05:01:52 pushed blob sha256:16d1686835faa5f81d67c0e87eb76eab316e1e9cd85167b292b9fa9434ad56bf 2018/08/02 05:01:53 pushed blob sha256:358d117a9400cee075514a286575d7d6ed86d118621e8b446cbb39cc5a07303b 2018/08/02 05:01:55 pushed blob sha256:5d171e492a9b691a49820bebfc25b29e53f5972ff7f14637975de9b385145e04 2018/08/02 05:01:56 index.docker.io/your-dockerhub-username/rsvpapp:kaniko: digest: sha256:831b214cdb7f8231e55afbba40914402b6c915ef4a0a2b6cbfe9efb223522988 size: 1243

      From the logs, you can see that the kaniko container built the image from the Dockerfile and pushed it to your Docker Hub account.

      You can now pull the Docker image. Be sure again to replace your-dockerhub-username with your Docker Hub username:

      • docker pull your-dockerhub-username/rsvpapp:kaniko

      You will see a confirmation of the pull:

      Output

      kaniko: Pulling from your-dockerhub-username/rsvpapp
      c0cb142e4345: Pull complete
      bc4d09b6c77b: Pull complete
      606abda6711f: Pull complete
      809f49334738: Pull complete
      358d117a9400: Pull complete
      5d171e492a9b: Pull complete
      Digest: sha256:831b214cdb7f8231e55afbba40914402b6c915ef4a0a2b6cbfe9efb223522988
      Status: Downloaded newer image for your-dockerhub-username/rsvpapp:kaniko

      You have now successfully built a Kubernetes cluster and created new images from within the cluster. Let's move on to discussing Deployments and Services.

      Step 4 — Creating Kubernetes Deployments and Services

      Kubernetes Deployments allow you to run your applications. Deployments specify the desired state for your Pods, ensuring consistency across your rollouts. In this step, you will create a file called deployment.yml in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory to define an Nginx Deployment.

      First, open the file:
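
      • nano deployment.yml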

      Add the following configuration to the file to define your Nginx Deployment:

      ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/deployment.yml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-deployment
        labels:
          app: nginx
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx:1.7.9
              ports:
              - containerPort: 80
      
      

      This file defines a Deployment named nginx-deployment that creates three pods, each running an nginx container on port 80.

      To deploy the Deployment, run the following command:

      • kubectl apply -f deployment.yml

      You will see a confirmation that the Deployment was created:

      Output

      deployment.apps/nginx-deployment created

      List your Deployments:
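
      • kubectl get deployments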

      Output

      NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      nginx-deployment   3         3         3            3           29s

      You can see that the nginx-deployment Deployment has been created and that the desired and current counts of the Pods are the same: 3.

      To list the Pods that the Deployment created, run the following command:
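
      • kubectl get pods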

      Output

      NAME                                READY     STATUS      RESTARTS   AGE
      kaniko                              0/1       Completed   0          9m
      nginx-deployment-75675f5897-nhwsp   1/1       Running     0          1m
      nginx-deployment-75675f5897-pxpl9   1/1       Running     0          1m
      nginx-deployment-75675f5897-xvf4f   1/1       Running     0          1m

      You can see from this output that the desired number of Pods are running.

      To expose an application deployment internally and externally, you will need to create a Kubernetes object called a Service. Each Service specifies a ServiceType, which defines how the service is exposed. In this example, we will use a NodePort ServiceType, which exposes the Service on a static port on each node.

      To do this, create a file, service.yml, in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory:
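
      • nano service.yml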

      Add the following content to define your Service:

      ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/service.yml

      kind: Service
      apiVersion: v1
      metadata:
        name: nginx-service
      spec:
        selector:
          app: nginx
        type: NodePort
        ports:
        - protocol: TCP
          port: 80
          targetPort: 80
          nodePort: 30111
      

      These settings define the Service, nginx-service, and specify that it will target port 80 on your Pod. nodePort defines the port where the application will accept external traffic.

      To deploy the Service, run the following command:

      • kubectl apply -f service.yml

      You will see a confirmation:

      Output

      service/nginx-service created

      List the Services:
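
      • kubectl get service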

      You will see the following list:

      Output

      NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
      kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP        5h
      nginx-service   NodePort    10.100.98.213   <none>        80:30111/TCP   7s

      Your Service, nginx-service, is exposed on port 30111, and you can now access it using any of your nodes' public IPs. For example, navigating to http://node_1_ip:30111 or http://node_2_ip:30111 should take you to Nginx's standard welcome page.

      Once you have tested the Deployment, you can clean up both the Deployment and Service:

      • kubectl delete deployment nginx-deployment
      • kubectl delete service nginx-service

      These commands will delete the Deployment and Service you have created.

      Now that you have worked with Deployments and Services, let's move on to creating Custom Resources.

      Step 5 — Creating Custom Resources in Kubernetes

      Kubernetes ships with a limited but production-ready set of functionalities and features. It is possible to extend Kubernetes' offerings, however, using its Custom Resources feature. In Kubernetes, a resource is an endpoint in the Kubernetes API that stores a collection of API objects. A Pod resource contains a collection of Pod objects, for instance. With Custom Resources, you can add custom offerings for networking, storage, and more. These additions can be created or removed at any point.

      In addition to creating custom objects, you can also employ sub-controllers of the Kubernetes Controller component in the control plane to make sure that the current state of your objects matches the desired state. The Kubernetes Controller has sub-controllers for specific objects. For example, ReplicaSet is a sub-controller that makes sure the desired Pod count is maintained. When you combine a Custom Resource with a Controller, you get a true declarative API that allows you to specify the desired state of your resources.

      In this step, you will create a Custom Resource and related objects.

      To create a Custom Resource, first make a file called crd.yml in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory:
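
      • nano crd.yml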

      Add the following Custom Resource Definition (CRD):

      ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/crd.yml

      apiVersion: apiextensions.k8s.io/v1beta1
      kind: CustomResourceDefinition
      metadata:
        name: webinars.digitalocean.com
      spec:
        group: digitalocean.com
        version: v1
        scope: Namespaced
        names:
          plural: webinars
          singular: webinar
          kind: Webinar
          shortNames:
          - wb
      

      To deploy the CRD defined in crd.yml, run the following command:

      • kubectl create -f crd.yml

      You will see a confirmation that the resource has been created:

      Output

      customresourcedefinition.apiextensions.k8s.io/webinars.digitalocean.com created

      The crd.yml file has created a new RESTful resource path: /apis/digitalocean.com/v1/namespaces/*/webinars. You can now refer to your objects using webinars, webinar, Webinar, and wb, as you listed them in the names section of the CustomResourceDefinition. You can check the RESTful resource with the following command:

      • kubectl proxy & curl 127.0.0.1:8001/apis/digitalocean.com

      Note: If you followed the initial server setup guide in the prerequisites, then you will need to allow traffic to port 8001 in order for this test to work. Enable traffic to this port with the following command:
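
      • sudo ufw allow 8001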

      You will see the following output:

      Output

      HTTP/1.1 200 OK
      Content-Length: 238
      Content-Type: application/json
      Date: Fri, 03 Aug 2018 06:10:12 GMT

      {
        "apiVersion": "v1",
        "kind": "APIGroup",
        "name": "digitalocean.com",
        "preferredVersion": {
          "groupVersion": "digitalocean.com/v1",
          "version": "v1"
        },
        "serverAddressByClientCIDRs": null,
        "versions": [
          {
            "groupVersion": "digitalocean.com/v1",
            "version": "v1"
          }
        ]
      }

      Next, create an object that uses the new Custom Resource by opening a file called webinar.yml:
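
      • nano webinar.yml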

      Add the following content to create the object:

      ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/webinar.yml

      apiVersion: "digitalocean.com/v1"
      kind: Webinar
      metadata:
        name: webinar1
      spec:
        name: webinar
        image: nginx
      

      Run the following command to push these changes to the cluster:

      • kubectl apply -f webinar.yml

      You will see the following output:

      Output

      webinar.digitalocean.com/webinar1 created

      You can now manage your webinar objects using kubectl. For example:
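
      • kubectl get webinar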

      Output

      NAME       CREATED AT
      webinar1   21s

      You now have an object called webinar1. If there had been a Controller, it would have intercepted the object creation and performed any defined operations.

      Deleting a Custom Resource Definition

      To delete all of the objects for your Custom Resource, use the following command:

      • kubectl delete webinar --all

      You will see:

      Output

      webinar.digitalocean.com "webinar1" deleted

      Remove the Custom Resource Definition itself:

      • kubectl delete crd webinars.digitalocean.com

      You will see a confirmation that it has been deleted:

      Output

      customresourcedefinition.apiextensions.k8s.io "webinars.digitalocean.com" deleted

      After deletion, you will no longer have access to the API endpoint that you tested earlier with the curl command.

      This sequence is an introduction to how you can extend Kubernetes functionalities without modifying your Kubernetes code.

      Step 6 — Deleting the Kubernetes Cluster

      To destroy the Kubernetes cluster itself, you can use the destroy.sh script from the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform folder. Make sure that you are in this directory:

      • cd ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform

      Run the script:
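
      • ./destroy.sh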

      By running this script, you'll allow Terraform to communicate with the DigitalOcean API and delete the servers in your cluster.

      Conclusion

      In this tutorial, you used different tools to create container images. With these images, you can create containers in any environment. You also set up a Kubernetes cluster using Terraform, and created Deployment and Service objects to deploy and expose your application. Additionally, you extended Kubernetes' functionality by defining a Custom Resource.

      You now have a solid foundation to build a CI/CD environment on Kubernetes, which we'll explore in future articles.




      Create a CI/CD Pipeline with Gatsby.js, Netlify and Travis CI


      Updated and contributed by Linode



      What is Gatsby?

      Gatsby is a Static Site Generator for React built on Node.js. Gatsby uses a modern web technology stack based on client-side JavaScript, reusable APIs, and prebuilt Markup, otherwise known as the JAMstack. This method of building a site is fast, secure, and scalable. All production site pages are prebuilt and static, so Gatsby does not have to build HTML for each page request.

      What is the CI/CD Pipeline?

      The CI/CD (continuous integration/continuous delivery) pipeline created in this guide is an automated sequence of events that is initiated after you update the code for your website on your local computer. These events take care of the work that you would otherwise need to perform manually: previewing your in-development site, testing your new code, and deploying it to your production server. These actions are powered by GitHub, Netlify, and Travis CI.

      Note

      This guide uses GitHub as your central Git repository, but you can use any service that is compatible with Netlify and Travis.

      Netlify

      Netlify is a PaaS (Platform as a Service) provider that allows you to quickly deploy static sites on the Netlify platform. In this guide Netlify will be used to provide a preview of your Gatsby site while it is in development. This preview can be shared with different stakeholders for site change approvals, or with anyone that is interested in your project. The production version of your website will ultimately be deployed to a Linode, so Netlify will only be used to preview development of the site.

      Travis CI

      Travis CI is a continuous integration tool that tests and deploys the code you upload to your GitHub repository. Travis will be used in this guide to deploy your Gatsby site to a Linode running Ubuntu 18.04. Testing your website code will not be explored in depth, but the method for integrating unit tests will be introduced.

      The CI/CD Pipeline Sequence

      This guide sets up the following flow of events:

      1. You create a new branch in your local Git repository and make code changes to your Gatsby project.

      2. You push your branch to your GitHub repository and create a pull request.

      3. Netlify automatically creates a preview of the site with a unique URL that can be shared.

      4. Travis CI automatically builds the site in an isolated container and runs any declared tests.

      5. When all tests pass, you merge the PR into the repository’s master branch, which automatically triggers a deployment to your production Linode.

      Before You Begin

      1. Follow the Getting Started guide and deploy a Linode running Ubuntu 18.04.

      2. Complete the Securing Your Server guide to create a limited Linux user account with sudo privileges, harden SSH access, and remove unnecessary network services.

        Note

        This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you’re not familiar with the sudo command, visit our Users and Groups guide.

        All configuration files should be edited with elevated privileges. Remember to include sudo before running your text editor.

      3. Configure DNS for your site by adding a domain zone and setting up reverse DNS on your Linode’s IP.

      4. Create a GitHub account if you don’t already have one. GitHub is free for open source projects.

      5. Install Git on your local computer. Later in this guide, Homebrew will be used to install Gatsby on a Mac, so it’s recommended that you also use Homebrew to install Git if you’re using a Mac.

      Prepare Your Production Linode

      Install NGINX

      1. Install NGINX from Ubuntu’s repository on your Linode:

        sudo apt install nginx
        

      Configure NGINX

      1. Delete the default welcome page:

        sudo rm /etc/nginx/sites-enabled/default
        
      2. Create a site configuration file for Gatsby. Replace example.com in the file name and in the file’s contents with your domain name:

        /etc/nginx/conf.d/example.com.conf
        
        server {
            listen       80;
            server_name  example.com;
            #charset koi8-r;
            #access_log  /var/log/nginx/host.access.log  main;
        
            location / {
                root   /usr/share/nginx/html/example.com/public;
                index  index.html index.htm;
            }
        }

        Note

        Replace all future instances of example.com in this guide with your domain name.

      3. The root directive in your NGINX configuration points to a directory named public within /usr/share/nginx/html/example.com/. Later in this guide, Gatsby will be responsible for creating the public directory and building its static content within it (specifically, via the gatsby build command).

        The /usr/share/nginx/html/example.com/ directory does not exist on your server yet, so create it:

        sudo mkdir -p /usr/share/nginx/html/example.com/
        
      4. The Gatsby deployment script that will be introduced later in this guide will run under your limited Linux user. Set your limited user to be the owner of the new document root directory. This ensures the deployment script will be able to write your site’s files to it:

        sudo chown $(whoami):$(id -gn) -R /usr/share/nginx/html/example.com/
        
      5. Test your NGINX configuration for errors:

        sudo nginx -t
        
      6. Reload the configuration:

        sudo nginx -s reload
        
      7. Navigate to your Linode’s domain or IP address in a browser. Your Gatsby site files aren’t deployed yet, so you should only see a 404 Not Found error. Still, this error indicates that your NGINX process is running as expected.

      Develop with Gatsby on Your Local Computer

      You will develop with Gatsby on your local computer. This guide walks through creating a simple sample Gatsby website, but more extensive website development is not explored, so review Gatsby’s official documentation afterwards for more information on the subject.

      Install Gatsby

      This section provides instructions for installing Gatsby via Node.js and the Node Package Manager (npm) on Mac and Linux computers. If you are using a Windows PC, read Gatsby’s official documentation for installation instructions.

      1. Install npm on your local computer. If you are running Ubuntu or Debian on your computer, use apt:

        sudo apt install nodejs npm
        

        If you have a Mac, use Homebrew:

        brew install nodejs npm
        
      2. Ensure Node.js was installed by checking its version:

        node --version
        
      3. Install the Gatsby command line:

        sudo npm install --global gatsby-cli
        

      Create a Gatsby Site

      1. Gatsby uses starters to provide a pre-configured base Gatsby site that you can customize and build on top of. This guide uses the Hello World starter. On your local computer, install the Hello World starter in your home directory (using the name example-site for your new project) and navigate into it:

        gatsby new example-site https://github.com/gatsbyjs/gatsby-starter-hello-world
        cd ~/example-site
        
      2. Inspect the contents of the directory:

        ls
        

        You should see output similar to:

          
        LICENSE  node_modules  package.json  package-lock.json  README.md  src
        
        

        The src directory contains your project’s source files. This starter includes the React JavaScript component file src/pages/index.js, which will be mapped to our example site’s homepage.

        Gatsby uses React components to build your site’s static pages. Components are small and isolated pieces of code, and Gatsby stores them in the src/pages directory. When your Gatsby site is built, these will automatically become your site’s pages, with paths based on each file’s name.
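
        For example, a hypothetical component file at src/pages/about.js (not part of the Hello World starter) would automatically be built as a page served at the /about path:

        src/pages/about.js

        import React from "react"

        export default () => <div>About this site</div>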

      3. Gatsby offers a built-in development server which builds and serves your Gatsby site. This server will also monitor any changes made to your src directory’s React components and will rebuild Gatsby after every change, which helps you see your local changes as you make them.

        Open a new shell session (in addition to the one you already have open) and run the Gatsby development server:

        cd ~/example-site
        gatsby develop
        
      4. The gatsby develop command will display messages from the build process, including a section similar to the following:

          
        You can now view gatsby-starter-hello-world in the browser.
        
            http://localhost:8000/
        
        

        Copy and paste the http://localhost:8000/ URL (or the specific string displayed in your terminal) into your web browser to view your Gatsby site. You should see a page that displays “Hello World”.

      5. In your original shell session, view the contents of your example-site directory again:

        ls
        
          
            LICENSE  node_modules  package.json  package-lock.json  README.md  src  public
            
        

        You should now see a public directory which was not present before. This directory holds the static files built by Gatsby. Your NGINX server will serve the static files located in the public directory.

      6. Open the src/pages/index.js file in your text editor, add new text between the <div> tags, and save your change:

        src/pages/index.js
        
        import React from "react"
        
        export default () => <div>Hello world and universe!</div>
        
      7. Navigate back to your browser window, where the updated text should automatically appear on the page.

      Version Control Your Gatsby Project

      In the workflow explored by this guide, Git and GitHub are used to:

      • Track changes you make during your site’s development.
      • Trigger the preview, test, and deployment functions offered by Netlify and Travis.

      The following steps present how to initialize a new local Git repository for your Gatsby project, and how to connect it to a central GitHub repository.

      1. Open a shell session on your local computer and navigate to the example-site directory. Initialize a Git repository to begin tracking your project files:

        git init
        

        Stage all the files you’ve created so far for your first commit:

        git add -A
        

        The Hello World starter includes a .gitignore file. Your .gitignore designates which files and directories to ignore in your Git commits. By default, it is set to ignore any files in the public directory. The public directory’s files will not be tracked in this repository, as they can be quickly rebuilt by anyone who clones your repository.

      2. Commit all the Hello World starter files:

        git commit -m "Initial commit"
        
      3. Navigate to your GitHub account and create a new repository named example-site. After the repository is created, copy its URL, which will have the form https://github.com/your-github-username/example-site.git.

      4. In your local computer’s shell session, add the GitHub repository as your local repository’s origin:

        git remote add origin https://github.com/your-github-username/example-site.git
        
      5. Verify the origin remote’s location:

        git remote -v
        
          
        origin	https://github.com/your-github-username/example-site.git (fetch)
        origin	https://github.com/your-github-username/example-site.git (push)
        
        
      6. Push the master branch of your local repository to the origin repository:

        git push origin master
        
      7. View your GitHub account in your browser, navigate to the example-site repository, and verify that all the files have been pushed to it successfully:

        GitHub Initial Commit

      Preview Your Site with Netlify

      In the course of developing a website (or any other software project), a common practice when you’ve finished a new feature and would like to share it with your collaborators is to create a pull request (also referred to as a PR). A pull request is an intermediate step between uploading your work to GitHub (by pushing the changes to a new branch) and later merging it into the master branch (or another release or development branch, according to your specific Git workflow).

      Once connected to your GitHub account, the Netlify service can build a site preview from your PR’s code every time you create a PR. Netlify will also regenerate your site preview if you commit and push new updates to your PR’s branch while the PR is still open. A random, unique URL is assigned to every preview, and you can share these URLs with your collaborators.

      Connect Your GitHub Repository to Netlify

      1. Navigate to the Netlify site and click on the Sign Up link:

        Netlify Home Page

      2. Click on the GitHub button to connect your GitHub account with Netlify. If you used a different version control service, select that option instead:

        GitHub and Netlify connection page

      3. You will be taken to the GitHub site and asked to authorize Netlify to access your account. Click on the Authorize Netlify button:

        GitHub Netlify Authorization

      4. Add your new site to Netlify and continue along with the prompts to finish connecting your repository to Netlify. Be sure to select the GitHub repository created in the previous steps:

        Add site to Netlify

      5. Provide the desired deploy settings for your repository. Unless you are sure you need to change these settings, keep the Netlify defaults:

        Netlify repository settings

        Note

        You can add a netlify.toml configuration file to your Git repository to define more deployment settings.

      Create a Pull Request

      1. In your local Git repository, create a new branch to test Netlify:

        git checkout -b test-netlify
        
      2. On your computer, edit your src/pages/index.js and update the message displayed:

        src/pages/index.js
        
        import React from "react"
        
        export default () => <div>Hello world, universe, and multiverse!</div>
        
      3. Commit those changes:

        git add .
        git commit -m "Testing Netlify"
        
      4. Push the new branch to the origin repository:

        git push origin test-netlify
        
      5. Navigate to the example-site repository in your GitHub account and create a pull request with the test-netlify branch:

        GitHub Compare and Pull Request Banner

      6. After you create the pull request, you will see a deploy/netlify row with a Details link. The accent color for this row will initially be yellow while the Netlify preview is being built. When the preview’s build process is finished, this row will turn green. At that point, you can click on the Details link to view your Gatsby site’s preview.

        Netlify GitHub Preview

        Every time you push changes to your branch, Netlify will provide a new preview link.

      Test and Deploy Your Site with Travis CI

      Travis CI manages testing your Gatsby site and deploying it to the Linode production server. Travis does this by monitoring updates to your GitHub repository:

      • Travis’s tests will run when a pull request is created, whenever new commits are pushed to that pull request’s branch, and whenever a branch is updated on your GitHub repository in general (including outside the context of a pull request).

      • Travis’s deployment function will trigger whenever a pull request has been merged into the master branch (and optionally when merging into other branches, depending on your configuration).

      Connect Your GitHub Repository to Travis CI

      1. Navigate to the Travis CI site and click on the Sign up with GitHub button.

        Note

        Be sure to visit travis-ci.com, not travis-ci.org. Travis originally operated travis-ci.com for paid/private repositories, and travis-ci.org was run separately for free/open source projects. As of May 2018, travis-ci.com supports open source projects and should be used for all new projects. Projects on travis-ci.org will eventually be migrated to travis-ci.com.
      2. You will be redirected to your GitHub account. Authorize Travis CI to access your GitHub account:

        Authorize Travis CI

      3. You will be redirected to your Travis CI account’s page where you will be able to see a listing of all your public repositories. Click on the toggle button next to your Gatsby repository to activate Travis CI for it.

      Configure Travis CI to Run Tests

      Travis’s functions are all configured by adding and editing a .travis.yml file in the root of your project. When .travis.yml is present in your project and you push a commit to your central GitHub repository, Travis performs one or more builds.

      Travis builds are run in new virtualized environments created for each build. The build lifecycle is primarily composed of an install step and a script step. The install step is responsible for installing your project’s dependencies in the new virtual environment. The script step invokes one or more bash scripts that you specify, usually test scripts of some kind.

      1. Navigate to your local Gatsby project and create a new Git branch to keep track of your Travis configurations:

        git checkout -b travis-configs
        
      2. Create your .travis.yml file at the root of the project:

        touch .travis.yml
        

        Note

        Make sure you commit changes at logical intervals as you modify the files in your Git repository.

      3. Open your .travis.yml file in a text editor and add the following lines:

        ~/example-site/.travis.yml
        
        language: node_js
        node_js:
          - '10.0'
        
        dist: trusty
        sudo: false

        This configuration specifies that the build’s virtual environment should be Ubuntu 14.04 (also known as trusty). sudo: false indicates that the virtual environment should be a container, and not a full virtual machine. Other environments are available.

        Gatsby is built with Node.js, so the Travis configuration is set to use node_js as the build language, and to use the latest version of Node (10.0 at the time of this guide’s publication). When Node is specified as the build language, Travis automatically sets default values for the install and script steps: install will run npm install, and script will run npm test. Other languages, like Python, are also available.

      4. The Gatsby Hello World starter provides a package.json file, which is a collection of metadata that describes your project. It is used by npm to install, run, and test your project. In particular, it includes a dependencies section used by npm install, and a scripts section where you can declare the tests run by npm test.

        No tests are listed by default in your starter’s package.json, so open the file with your editor and add a test line to the scripts section:

        package.json
        
        {
          "name": "gatsby-starter-hello-world",
          "description": "Gatsby hello world starter",
          "license": "MIT",
          "scripts": {
            "develop": "gatsby develop",
            "build": "gatsby build",
            "serve": "gatsby serve",
            "test": "echo 'Run your tests here'"
          },
          "dependencies": {
            "gatsby": "^1.9.277",
            "gatsby-link": "^1.6.46"
          }
        }

        This entry is just a stub to illustrate where tests are declared. For more information on how to test your Gatsby project, review the unit testing documentation on Gatsby’s website. Jest is the testing framework recommended by Gatsby.

      View Output from Your Travis Build

      1. Commit the changes you’ve made and push your travis-configs branch to your origin repository:

        git add .
        git commit -m "Travis testing configuration"
        git push origin travis-configs
        
      2. View your GitHub repository in your browser and create a pull request for the travis-configs branch.

      3. Several rows that link to your Travis builds will appear in your new pull request. When a build finishes running without error, the build’s accent color will turn green:

        GitHub Travis Builds

        Note

        Four rows for your Travis builds will appear, which is more than you may expect. This is because Travis runs your builds whenever your branch is updated, and whenever your pull request is updated, and Travis considers these to be separate events.

        In addition, the rows prefixed by Travis CI - are links to GitHub’s preview of those builds, while rows prefixed with continuous-integration/travis-ci/ are direct links to the builds on travis-ci.com.

        For now, these builds will produce identical output. After the deployment functions of Travis have been configured, the pull request builds will skip the deployment step, while the branch builds will implement your deployment configuration.

      4. Click the Details link in the continuous-integration/travis-ci/push row to visit the logs for that build. A page with similar output will appear:

        Travis Build Logs - First Test

        Towards the end of your output, you should see the “Run your tests here” message from the test stub that you entered in your package.json. If you implement testing of your code with Jest or another library, the output from those tests will appear at this location in your build logs.

        If any of the commands that Travis CI runs in the script step (or in any preceding steps, like install) returns with a non-zero exit code, then the build will fail, and you will not be able to merge your pull request on GitHub.

      5. For now, do not merge your pull request, even if the builds were successful.

      Give Travis Permission to Deploy to Your Linode

      In order to let Travis push your code to your production Linode, you first need to give the Travis build environment access to the Linode. This will be accomplished by generating a public-private key pair for your build environment and then uploading the public key to your Linode. Your code will be deployed over SSH, and the SSH agent in the build environment will be configured to use your new private key.

      The private key will also need to be encrypted, as the key file will live in your Gatsby project’s Git repository, and you should never check a plain-text version of it into version control.

      1. Install the Travis CLI, which you will need to generate an encrypted version of your private key. The Travis CLI is distributed as a Ruby gem:

        On Linux:

        sudo apt install ruby ruby-dev
        sudo gem install travis
        

        On macOS:

        sudo gem install travis
        

        On Windows: Use RubyInstaller to install Ruby and the Travis CLI gem.

      2. Log in to Travis CI with the CLI:

        travis login --com
        

        Follow the prompts to provide your GitHub login credentials. These credentials are passed directly to GitHub and are not recorded by Travis. In exchange, GitHub returns a GitHub access token to Travis, after which Travis will provide your CLI with a Travis access token.

      3. Inside the root of your local example-site Git repository, create a scripts directory. This will hold files related to deploying your Gatsby site:

        mkdir scripts
        
      4. Generate a pair of SSH keys inside the scripts directory. The key pair will be named gatsby-deploy so that you don’t accidentally overwrite any preexisting key pairs. Replace your_email@example.com with your email address. When prompted for the key pair’s passphrase, leave the field empty to create a key without a passphrase:

        ssh-keygen -t rsa -b 4096 -C "your_email@example.com" -f scripts/gatsby-deploy
        

        Two files will be created: gatsby-deploy (your private key) and gatsby-deploy.pub (your public key).

      5. Add the location of the gatsby-deploy file to your project’s .gitignore file. This will ensure that you do not accidentally commit the secret key to your central repository:

        .gitignore
        # Other .gitignore instructions
        # [...]
        scripts/gatsby-deploy
      6. Encrypt your private key using the Travis CLI:

        cd scripts && travis encrypt-file gatsby-deploy --add --com
        
      7. You should now see a gatsby-deploy.enc file in your scripts directory:

        ls
        
          gatsby-deploy    gatsby-deploy.enc    gatsby-deploy.pub
        
      8. The --add flag from the previous command also told the Travis CLI to add a few new lines to your .travis.yml file. These lines decrypt your private key and should look similar to the following snippet:

        .travis.yml
        before_install:
        - openssl aes-256-cbc -K $encrypted_9e3557de08a3_key -iv $encrypted_9e3557de08a3_iv
          -in gatsby-deploy.enc -out gatsby-deploy -d


        About the openssl command and Travis build variables

        The second line (starting with -in gatsby-deploy.enc) is a continuation of the first line, and -in is an option passed to the openssl command. This line is not its own item in the before_install list.

        The openssl command accepts the encrypted gatsby-deploy.enc file and uses two environment variables to decrypt it, resulting in your original gatsby-deploy private key. These two variables are stored in the Settings page for your repository on travis-ci.com. Any variables stored there will be accessible to your build environment:

        Travis Environment Variables
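
        If you want to see the -K/-iv mechanism in isolation, here is a self-contained sketch using throwaway values (these are not the key and IV that Travis stores for your repository):

        # Generate a throwaway 256-bit key and 128-bit IV, hex-encoded
        KEY=$(openssl rand -hex 32)
        IV=$(openssl rand -hex 16)

        # Encrypt a sample file, then decrypt it with the same key/IV pair
        echo "secret contents" > sample.txt
        openssl aes-256-cbc -K "$KEY" -iv "$IV" -in sample.txt -out sample.enc
        openssl aes-256-cbc -K "$KEY" -iv "$IV" -in sample.enc -out sample.dec -d
        diff sample.txt sample.dec && echo "round trip OK"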

      9. Edit the lines previously added by the travis encrypt-file command so that gatsby-deploy.enc and gatsby-deploy are prefixed with your scripts/ directory:

        .travis.yml
        before_install:
        - openssl aes-256-cbc -K $encrypted_9e3557de08a3_key -iv $encrypted_9e3557de08a3_iv
          -in scripts/gatsby-deploy.enc -out scripts/gatsby-deploy -d
      10. Continue preparing the SSH agent in your build environment by adding the following lines to the before_install step, after the openssl command. Be sure to replace 192.0.2.2 with your Linode’s IP address:

        ~/example-site/.travis.yml
        before_install:
        - openssl aes-256-cbc -K $encrypted_9e3557de08a3_key -iv $encrypted_9e3557de08a3_iv
          -in scripts/gatsby-deploy.enc -out scripts/gatsby-deploy -d
        - eval "$(ssh-agent -s)"
        - cp scripts/gatsby-deploy ~/.ssh/gatsby-deploy
        - chmod 600 ~/.ssh/gatsby-deploy
        - ssh-add ~/.ssh/gatsby-deploy
        - echo -e "Host 192.0.2.2\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
      11. Travis CI can add entries to the build environment’s ~/.ssh/known_hosts prior to deploying your site. Insert the following addons step prior to the before_install step in your .travis.yml. Replace 192.0.2.2 with your Linode’s IP address:

        ~/example-site/.travis.yml
        # [...]
        dist: trusty
        sudo: false
        
        addons:
          ssh_known_hosts:
            - 192.0.2.2
        
        before_install:
        # [...]
      12. From your local computer, upload your Travis environment’s public key to the home directory of your limited Linux user on your Linode. Replace example_user with your Linode’s user and 192.0.2.2 with your Linode’s IP address:

        scp ~/example-site/scripts/gatsby-deploy.pub example_user@192.0.2.2:~/gatsby-deploy.pub
        
      13. Log in to your Linode as the user you uploaded the key to, and append the key to your authorized_keys file:

        mkdir -p .ssh
        cat gatsby-deploy.pub | tee -a .ssh/authorized_keys
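
        Before handing the key to Travis, you can optionally confirm from your local machine that it grants access (a quick check, assuming the same user and IP placeholders used throughout this guide):

        # Should print "key auth OK" without prompting for a password
        ssh -i ~/example-site/scripts/gatsby-deploy example_user@192.0.2.2 'echo key auth OK'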
        

      Create a Deployment Script

      1. Update your .travis.yml to include a deploy step. This step runs only for builds of the master branch, such as the build triggered when a pull request is merged into master. Add the following lines below the before_install step, at the end of the file:

        ~/example-site/.travis.yml
        deploy:
        - provider: script
          skip_cleanup: true
          script: bash scripts/deploy.sh
          on:
            branch: master

        The instructions for pushing your site to your Linode will be defined in a deploy.sh script that you will create.


        Full contents of your Travis configuration

        The complete and final version of your .travis.yml file should resemble the following:

        ~/example-site/.travis.yml
        language: node_js
        node_js:
        - "10.0"
        
        dist: trusty
        sudo: false
        
        addons:
          ssh_known_hosts:
          - 192.0.2.2
        
        before_install:
        - openssl aes-256-cbc -K $encrypted_9e3557de08a3_key -iv $encrypted_9e3557de08a3_iv
          -in scripts/gatsby-deploy.enc -out scripts/gatsby-deploy -d
        - eval "$(ssh-agent -s)"
        - cp scripts/gatsby-deploy ~/.ssh/gatsby-deploy
        - chmod 600 ~/.ssh/gatsby-deploy
        - ssh-add ~/.ssh/gatsby-deploy
        - echo -e "Host 192.0.2.2\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
        
        deploy:
        - provider: script
          skip_cleanup: true
          script: bash scripts/deploy.sh
          on:
            branch: master
      2. From your local example-site Git repository, create a deploy.sh file in the scripts directory and make it executable:

        touch scripts/deploy.sh
        chmod +x scripts/deploy.sh
        
      3. Open your deploy.sh file in your text editor and add the following lines. Replace all instances of example_user with your Linode’s user, and replace 192.0.2.2 with your Linode’s IP:

        scripts/deploy.sh
        #!/bin/bash
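        # Print each command to the build log as it runs, to aid debugging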
        set -x
        
        gatsby build
        
        # Configure Git to only push the current branch
        git config --global push.default simple
        
        # Remove .gitignore and replace with the production version
        rm -f .gitignore
        cp scripts/prodignore .gitignore
        cat .gitignore
        
        # Add the Linode production server as a remote repository
        git remote add production ssh://example_user@192.0.2.2/home/example_user/gatsbybare.git
        
        # Add and commit all the static files generated by the Gatsby build
        git add . && git commit -m "Gatsby build"
        
        # Push all changes to the Linode production server
        git push -f production HEAD:refs/heads/master

        The deploy script builds the Gatsby static files (which are placed in the public directory of your repository) and pushes them to your Linode. Specifically, this script:

        • Commits the newly-built public directory to the Travis build environment’s copy of your Git repository.
        • Pushes that commit (over the SSH protocol) to a remote repository on your Linode, which you will create in the next section of this guide.

        Note

        Remember that because these instructions are executed in an isolated virtual environment, the git commit that is run here does not affect the repository on your local computer or on GitHub.

      4. You may recall that you previously updated your .gitignore file to exclude the public directory. To allow this directory to be committed in your build environment’s repository (and therefore pushed to your Linode), you will need to override that rule at deploy time.

        From the root of your local Gatsby project, copy your .gitignore to a new scripts/prodignore file:

        cp .gitignore scripts/prodignore
        

        Open your new prodignore file, remove the public line, and save the change:

        scripts/prodignore
        .cache/
        public # Remove this line
        yarn-error.log

        The deploy.sh script you created includes a line that copies this scripts/prodignore file over your repository’s root .gitignore, which then allows the script to commit the public directory.

      Prepare the Remote Git Repository on Your Linode

      In the previous section you completed the configuration for the Travis deployment step. In this section, you will prepare the Linode to receive Git pushes from your deployment script. The pushed website files will then be served by your NGINX web server.

      1. SSH into your Linode as the same user that holds your Travis build environment’s public key. Create a new directory inside your home folder named gatsbybare.git:

        mkdir ~/gatsbybare.git
        
      2. Navigate to the new directory and initialize it as a bare Git repository:

        cd ~/gatsbybare.git
        git init --bare
        

        A bare Git repository stores only Git’s objects and metadata; it does not maintain a working copy (checked-out files) in its directory. Bare repositories provide a centralized place where users can push their changes; the repositories hosted on GitHub, for example, are bare. By convention, a bare repository’s directory name ends with the .git extension.
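
        If you would like to see the difference for yourself, here is a quick throwaway comparison (scratch directories, safe to delete afterwards):

        # A normal repository keeps Git's data hidden inside .git
        git init /tmp/normal-repo && ls -a /tmp/normal-repo
        # .  ..  .git

        # A bare repository exposes the same data at the top level and has
        # no checked-out files
        git init --bare /tmp/bare-repo.git && ls /tmp/bare-repo.git
        # HEAD  branches  config  description  hooks  info  objects  refs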

      3. Configure the repository to accept pushes to its currently checked-out branch:

        git config receive.denyCurrentBranch updateInstead
        
      4. Your Travis build environment will now be able to push files into your Linode’s Git repository, but the files will not be located in your NGINX document root. To fix this, you will use the hooks feature of Git to copy your website files to the document root folder. Specifically, you can implement a post-receive hook that will run after every push to your Linode’s repository.

        In your Linode’s Git repository, create the post-receive file and make it executable:

        touch hooks/post-receive
        chmod +x hooks/post-receive
        
      5. Add the following lines to the post-receive file. Replace example.com with your domain name, and replace example_user with your Linode’s user:

        hooks/post-receive
        #!/bin/sh
        git --work-tree=/usr/share/nginx/html/example.com --git-dir=/home/example_user/gatsbybare.git checkout -f

        This script will check out the files from your Linode repository’s master branch into your document root folder.

        Note

        While a bare Git repository does not keep working copies of files within the repository’s directory, you can still use the --work-tree option to check out files into another directory.
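
        You can also exercise this mechanism by hand without pushing, which is useful when debugging the hook (hypothetical scratch directory; adjust the paths to your setup):

        # Check the repository's files out into a scratch directory
        mkdir -p /tmp/worktree-test
        git --work-tree=/tmp/worktree-test --git-dir=/home/example_user/gatsbybare.git checkout -f
        ls /tmp/worktree-test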

      Deploy with Travis CI

      All of the test and deployment configuration is now complete and ready to run:

      1. Commit all remaining changes to your travis-configs branch and push them up to your central GitHub repository:

        git add .
        git commit -m "Travis deployment configuration"
        git push origin travis-configs
        
      2. Visit the pull request you previously created on GitHub for your travis-configs branch. If you visit this page shortly after the git push command is issued, the new Travis builds may still be in progress.

      3. After the linked continuous-integration/travis-ci/pr pull request Travis build completes, click on the corresponding Details link. If the build was successful, you should see the following message:

          
        Skipping a deployment with the script provider because the current build is a pull request.

        This message appears because pull request builds skip the deployment step.

      4. Back on the GitHub pull request page, after the linked continuous-integration/travis-ci/push branch build completes, click on the corresponding Details link. If the build was successful, you should see the following message:

          
        Skipping a deployment with the script provider because this branch is not permitted: travis-configs

        This message appears because your .travis.yml restricts the deployment script to updates on the master branch.

      5. If your Travis builds failed, review the build logs for the reason for the failure.

      6. If the builds succeeded, merge your pull request.

      7. After merging the pull request, visit travis-ci.com directly and view the example-site repository. A new Travis build corresponding to your Merge pull request commit will be in progress. When this build completes, a Deploying application message will appear at the end of the build logs. This message can be expanded to view the complete logs for the deploy step.

      8. If your deploy step succeeded, you can now visit your domain name in your browser. You should see the message from your Gatsby project’s index.js.

      Troubleshooting

      If your Travis builds are failing, here are some places to look when troubleshooting:

      • View the build logs for the failed Travis build.
      • Ensure all your .sh scripts are executable, including the Git hook on the Linode.
      • Test the Git hook on your Linode by running bash ~/gatsbybare.git/hooks/post-receive.
      • If you encounter permissions issues, make sure your Linode user can write files to your document root directory.
      • To view the contents of the bare Git repository, run git ls-tree --full-tree -r HEAD.

      Next Steps

      Read the Gatsby.js Tutorial to learn how to build a website with Gatsby.
