
      How To Automate Deployments to DigitalOcean Kubernetes with CircleCI


      The author selected the Tech Education Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Having an automated deployment process is a requirement for a scalable and resilient application, and GitOps, or Git-based DevOps, has rapidly become a popular method of organizing CI/CD with a Git repository as a “single source of truth.” Tools like CircleCI integrate with your GitHub repository, allowing you to test and deploy your code automatically every time you make a change to your repository. When this kind of CI/CD is combined with the flexibility of Kubernetes infrastructure, you can build an application that scales easily with changing demand.

      In this article you will use CircleCI to deploy a sample application to a DigitalOcean Kubernetes (DOKS) cluster. After reading this tutorial, you’ll be able to apply these same techniques to deploy other CI/CD tools that are buildable as Docker images.

      Prerequisites

      To follow this tutorial, you’ll need to have:

      • A DigitalOcean account.
      • kubectl installed on your local machine.
      • Docker installed on your local machine.
      • A Docker Hub account.
      • A GitHub account.

      For this tutorial, you will use Kubernetes version 1.13.5 and kubectl version 1.10.7.

      Step 1 — Creating Your DigitalOcean Kubernetes Cluster

      Note: You can skip this section if you already have a running DigitalOcean Kubernetes cluster.

      In this first step, you will create the DigitalOcean Kubernetes (DOKS) cluster from which you will deploy your sample application. The kubectl commands executed from your local machine will change or retrieve information directly from the Kubernetes cluster.

      Go to the Kubernetes page on your DigitalOcean account.

      Click Create a Kubernetes cluster, or click the green Create button at the top right of the page and select Clusters from the dropdown menu.

      [Creating a Kubernetes Cluster on DigitalOcean](assets.digitalocean.com/articles/cart64920/CreateDOKS.gif)

      The next page is where you will specify the details of your cluster. In Select a Kubernetes version, pick version 1.13.5-do.0. If this version is not available, choose a more recent one.

      For Choose a datacenter region, choose the region closest to you. This tutorial will use San Francisco – 2.

      You then have the option to build your Node pool(s). On Kubernetes, a node is a worker machine, which contains the services necessary to run pods. On DigitalOcean, each node is a Droplet. Your node pool will consist of a single Standard node. Select the 2GB/1vCPU configuration and change to 1 Node on the number of nodes.

      You can add extra tags if you want; this can be useful if you plan to use DigitalOcean API or just to better organize your node pools.

      On Choose a name, for this tutorial, use kubernetes-deployment-tutorial. This will make it easier to follow throughout while reading the next sections. Finally, click the green Create Cluster button to create your cluster.

      After the cluster is created, the UI will show a Download Config File button. This is the kubeconfig file you will use to authenticate the kubectl commands you run against your cluster. Download it to the machine from which you will run kubectl.

      The default way to use that file is to always pass the --kubeconfig flag and the path to it on all commands you run with kubectl. For example, if you downloaded the config file to Desktop, you would run the kubectl get pods command like this:

      • kubectl --kubeconfig ~/Desktop/kubernetes-deployment-tutorial-kubeconfig.yaml get pods

      This would yield the following output:

      Output

      No resources found.

      This means you accessed your cluster. The No resources found. message is correct, since you don’t have any pods on your cluster.

      If you are not maintaining any other Kubernetes clusters, you can copy the kubeconfig file to a folder in your home directory called .kube. Create that directory in case it does not exist:

      • mkdir -p ~/.kube

      Then copy the config file into the newly created .kube directory and rename it config:

      • cp current_kubernetes-deployment-tutorial-kubeconfig.yaml_file_path ~/.kube/config

      The config file should now have the path ~/.kube/config. This is the file that kubectl reads by default when running any command, so there is no need to pass --kubeconfig anymore. Run the following:

      • kubectl get pods

      You will receive the following output:

      Output

      No resources found.

      Now access the cluster with the following:

      • kubectl get nodes

      You will receive the list of nodes on your cluster. The output will be similar to this:

      Output

      NAME                                    STATUS   ROLES    AGE   VERSION
      kubernetes-deployment-tutorial-1-7pto   Ready    <none>   1h    v1.13.5

      In this tutorial you are going to use the default namespace for all kubectl commands and manifest files, which are files that define the workload and operating parameters of work in Kubernetes. Namespaces are like virtual clusters inside your single physical cluster. You can change to any other namespace you want; just make sure to always pass it using the --namespace flag to kubectl, and/or specifying it on the Kubernetes manifests metadata field. They are a great way to organize the deployments of your team and their running environments; read more about them in the official Kubernetes overview on Namespaces.
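
      For example, if you wanted to experiment with a separate namespace (the name sample-app below is only an illustration; the rest of this tutorial stays on default), you could create it and target it explicitly:

      • kubectl create namespace sample-app
      • kubectl --namespace=sample-app get pods

      You could achieve the same effect in a manifest by setting namespace: sample-app in its metadata field.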

      By finishing this step you are now able to run kubectl against your cluster. In the next step, you will create the local Git repository you are going to use to house your sample application.

      Step 2 — Creating the Local Git Repository

      You are now going to structure your sample deployment in a local Git repository. You will also create some Kubernetes manifests that will be global to all deployments you are going to do on your cluster.

      Note: This tutorial has been tested on Ubuntu 18.04, and the individual commands are styled to match this OS. However, most of the commands here can be applied to other Linux distributions with little to no change needed, and commands like kubectl are platform-agnostic.

      First, create a new Git repository locally that you will push to GitHub later on. Create an empty folder called do-sample-app in your home directory and cd into it:

      • mkdir ~/do-sample-app
      • cd ~/do-sample-app

      Now create a new Git repository in this folder with the following command:

      • git init

      Inside this repository, create an empty folder called kube:

      • mkdir ~/do-sample-app/kube/

      This will be the location where you are going to store the Kubernetes resources manifests related to the sample application that you will deploy to your cluster.

      Now, create another folder called kube-general, but this time outside of the Git repository you just created. Make it inside your home directory:

      • mkdir ~/kube-general/

      This folder is outside of your Git repository because it will be used to store manifests that are not specific to a single deployment on your cluster, but common to multiple ones. This will allow you to reuse these general manifests for different deployments.

      With your folders created and the Git repository of your sample application in place, it's time to arrange the authentication and authorization of your DOKS cluster.

      Step 3 — Creating a Service Account

      It's generally not recommended to use the default admin user to authenticate from other services into your Kubernetes cluster. If your keys on the external provider were compromised, your whole cluster would be compromised.

      Instead you are going to use a single Service Account with a specific Role, which is all part of the RBAC Kubernetes authorization model.

      This authorization model is based on Roles and Resources. You start by creating a Service Account, which is basically a user on your cluster, then you create a Role, in which you specify what resources it has access to on your cluster. Finally, you create a Role Binding, which is used to make the connection between the Role and the Service Account previously created, granting to the Service Account access to all resources the Role has access to.

      The first Kubernetes resource you are going to create is the Service Account for your CI/CD user, which this tutorial will name cicd.

      Create the file cicd-service-account.yml inside the ~/kube-general folder, and open it with your favorite text editor:

      • nano ~/kube-general/cicd-service-account.yml

      Write the following content on it:

      ~/kube-general/cicd-service-account.yml

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: cicd
        namespace: default
      

      This is a YAML file; all Kubernetes resources are represented using one. In this case you are saying this resource is from Kubernetes API version v1 (internally kubectl creates resources by calling Kubernetes HTTP APIs), and it is a ServiceAccount.

      The metadata field is used to add more information about this resource. In this case, you are giving this ServiceAccount the name cicd, and creating it on the default namespace.

      You can now create this Service Account on your cluster by running kubectl apply, like the following:

      • kubectl apply -f ~/kube-general/

      You will receive output similar to the following:

      Output

      serviceaccount/cicd created

      To make sure your Service Account is working, try to log in to your cluster using it. To do that, you first need to obtain its access token and store it in an environment variable. Every Service Account has an access token which Kubernetes stores as a Secret.

      You can retrieve this secret using the following command:

      • TOKEN=$(kubectl get secret $(kubectl get secret | grep cicd-token | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode)

      Some explanation on what this command is doing:

      $(kubectl get secret | grep cicd-token | awk '{print $1}')
      

      This is used to retrieve the name of the secret related to your cicd Service Account. kubectl get secret returns the list of secrets in the default namespace, then you use grep to search for the line related to your cicd Service Account. Finally, awk '{print $1}' returns the name, since it is the first field of the single line returned by grep.

      kubectl get secret preceding-command -o jsonpath='{.data.token}' | base64 --decode
      

      This will retrieve only the secret for your Service Account token. You then access the token field using jsonpath, and pass the result to base64 --decode. This is necessary because the token is stored as a Base64 string. The token itself is a JSON Web Token.

      You can now try to retrieve your pods with the cicd Service Account. Run the following command, replacing server-from-kubeconfig-file with the server URL that can be found after server: in ~/.kube/config. This command will give a specific error that you will learn about later in this tutorial:

      • kubectl --insecure-skip-tls-verify --kubeconfig="/dev/null" --server=server-from-kubeconfig-file --token=$TOKEN get pods

      --insecure-skip-tls-verify skips the step of verifying the certificate of the server, since you are just testing and do not need to verify this. --kubeconfig="/dev/null" is to make sure kubectl does not read your config file and credentials but instead uses the token provided.

      The output should be similar to this:

      Output

      Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:cicd" cannot list resource "pods" in API group "" in the namespace "default"

      This is an error, but it shows us that the token worked. The error you received is about your Service Account not having the necessary authorization to list the pods resource, but you were able to access the server itself. If your token had not worked, the error would have been the following one:

      Output

      error: You must be logged in to the server (Unauthorized)

      Now that the authentication was a success, the next step is to fix the authorization error for the Service Account. You will do this by creating a role with the necessary permissions and binding it to your Service Account.

      Step 4 — Creating the Role and the Role Binding

      Kubernetes has two ways to define roles: using a Role or a ClusterRole resource. The difference is that a Role applies to a single namespace, while a ClusterRole is valid for the whole cluster.

      As you are using a single namespace in this tutorial, you will use a Role.

      Create the file ~/kube-general/cicd-role.yml and open it with your favorite text editor:

      • nano ~/kube-general/cicd-role.yml

      The basic idea is to grant access to do everything related to most Kubernetes resources in the default namespace. Your Role would look like this:

      ~/kube-general/cicd-role.yml

      kind: Role
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: cicd
        namespace: default
      rules:
        - apiGroups: ["", "apps", "batch", "extensions"]
          resources: ["deployments", "services", "replicasets", "pods", "jobs", "cronjobs"]
          verbs: ["*"]
      

      This YAML has some similarities with the one you created previously, but here you are saying this resource is a Role, and it's from the Kubernetes API rbac.authorization.k8s.io/v1. You are naming your role cicd, and creating it on the same namespace you created your ServiceAccount, the default one.

      Then you have the rules field, which is a list of resources this role has access to. In Kubernetes, resources are defined based on the API group they belong to, the resource kind itself, and the actions you can perform on them, which are represented by verbs. These verbs are similar to HTTP methods.

      In our case you are saying that your Role is allowed to do everything, *, on the following resources: deployments, services, replicasets, pods, jobs, and cronjobs. This also applies to those resources belonging to the following API groups: "" (empty string), apps, batch, and extensions. The empty string means the root API group. If you use apiVersion: v1 when creating a resource it means this resource is part of this API group.
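
      For comparison, if you wanted a more restrictive Role, you could list specific verbs instead of the wildcard. The following read-only rules are only an illustration and are not used in this tutorial:

      rules:
        - apiGroups: ["", "apps"]
          resources: ["pods", "deployments"]
          verbs: ["get", "list", "watch"]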

      A Role by itself does nothing; you must also create a RoleBinding, which binds a Role to something, in this case, a ServiceAccount.

      Create the file ~/kube-general/cicd-role-binding.yml and open it:

      • nano ~/kube-general/cicd-role-binding.yml

      Add the following lines to the file:

      ~/kube-general/cicd-role-binding.yml

      kind: RoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: cicd
        namespace: default
      subjects:
        - kind: ServiceAccount
          name: cicd
          namespace: default
      roleRef:
        kind: Role
        name: cicd
        apiGroup: rbac.authorization.k8s.io
      

      Your RoleBinding has some specific fields that have not yet been covered in this tutorial. roleRef is the Role you want to bind to something; in this case it is the cicd role you created earlier. subjects is the list of resources you are binding your role to; in this case it's a single ServiceAccount called cicd.

      Note: If you had used a ClusterRole, you would have to create a ClusterRoleBinding instead of a RoleBinding. The file would be almost the same. The only difference would be that it would have no namespace field inside the metadata.
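
      For reference, a ClusterRoleBinding for the same Service Account would look roughly like the following sketch, assuming you had also created a ClusterRole named cicd (this is not needed for this tutorial):

      kind: ClusterRoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: cicd
      subjects:
        - kind: ServiceAccount
          name: cicd
          namespace: default
      roleRef:
        kind: ClusterRole
        name: cicd
        apiGroup: rbac.authorization.k8s.io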

      With those files created you will be able to use kubectl apply again. Create those new resources on your Kubernetes cluster by running the following command:

      • kubectl apply -f ~/kube-general/

      You will receive output similar to the following:

      Output

      rolebinding.rbac.authorization.k8s.io/cicd created
      role.rbac.authorization.k8s.io/cicd created
      serviceaccount/cicd created

      Now, try the command you ran previously:

      • kubectl --insecure-skip-tls-verify --kubeconfig="/dev/null" --server=server-from-kubeconfig-file --token=$TOKEN get pods

      Since you have no pods, this will yield the following output:

      Output

      No resources found.

      In this step, you gave the Service Account you are going to use on CircleCI the necessary authorization to do meaningful actions on your cluster like listing, creating, and updating resources. Now it's time to create your sample application.

      Step 5 — Creating Your Sample Application

      Note: All commands and files created from now on will start from the folder ~/do-sample-app you created earlier. This is because you are now creating files specific to the sample application that you are going to deploy to your cluster.

      The Kubernetes Deployment you are going to create will use the Nginx image as a base, and your application will be a simple static HTML page. This is a great start because it allows you to test if your deployment works by serving a simple HTML directly from Nginx. As you will see later on, you can redirect all traffic coming to a local address:port to your deployment on your cluster to test if it's working.

      Inside the repository you set up earlier, create a new Dockerfile file and open it with your text editor of choice:

      • nano ~/do-sample-app/Dockerfile

      Write the following on it:

      ~/do-sample-app/Dockerfile

      FROM nginx:1.14
      
      COPY index.html /usr/share/nginx/html/index.html
      

      This tells Docker to build the application image from the nginx:1.14 base image and copy index.html into Nginx's default web root.

      Now create a new index.html file and open it:

      • nano ~/do-sample-app/index.html

      Write the following HTML content:

      ~/do-sample-app/index.html

      <!DOCTYPE html>
      <title>DigitalOcean</title>
      <body>
        Kubernetes Sample Application
      </body>
      

      This HTML will display a simple message that will let you know if your application is working.

      You can test if the image is correct by building and then running it.

      First, build the image with the following command, replacing dockerhub-username with your own Docker Hub username. You must specify your username here so when you push it later on to Docker Hub it will just work:

      • docker build ~/do-sample-app/ -t dockerhub-username/do-kubernetes-sample-app

      Now run the image. Use the following command, which starts your image and forwards any local traffic on port 8080 to the port 80 inside the image, the port Nginx listens to by default:

      • docker run --rm -it -p 8080:80 dockerhub-username/do-kubernetes-sample-app

      The command prompt will stop being interactive while the command is running; instead, you will see the Nginx access logs. If you open localhost:8080 in any browser, it should show an HTML page with the content of ~/do-sample-app/index.html. In case you don't have a browser available, you can open a new terminal window and use the following curl command to fetch the HTML from the webpage:

      • curl localhost:8080

      You will receive the following output:

      Output

      <!DOCTYPE html>
      <title>DigitalOcean</title>
      <body>
        Kubernetes Sample Application
      </body>

      Stop the container (CTRL + C on the terminal where it's running), and push this image to your Docker Hub account. To do this, first log in to Docker Hub:

      • docker login

      Fill in the required information about your Docker Hub account, then push the image with the following command (don't forget to replace the dockerhub-username with your own):

      • docker push dockerhub-username/do-kubernetes-sample-app

      You have now pushed your sample application image to your Docker Hub account. In the next step, you will create a Deployment on your DOKS cluster from this image.

      Step 6 — Creating the Kubernetes Deployment and Service

      With your Docker image created and working, you will now create a manifest telling Kubernetes how to create a Deployment from it on your cluster.

      Create the YAML deployment file ~/do-sample-app/kube/do-sample-deployment.yml and open it with your text editor:

      • nano ~/do-sample-app/kube/do-sample-deployment.yml

      Write the following content on the file, making sure to replace dockerhub-username with your Docker Hub username:

      ~/do-sample-app/kube/do-sample-deployment.yml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: do-kubernetes-sample-app
        namespace: default
        labels:
          app: do-kubernetes-sample-app
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: do-kubernetes-sample-app
        template:
          metadata:
            labels:
              app: do-kubernetes-sample-app
          spec:
            containers:
              - name: do-kubernetes-sample-app
                image: dockerhub-username/do-kubernetes-sample-app:latest
                ports:
                  - containerPort: 80
                    name: http
      

      Kubernetes deployments are from the API group apps, so the apiVersion of your manifest is set to apps/v1. On metadata you added a field you have not used previously, called metadata.labels. This is useful for organizing your deployments. The field spec represents the behavior specification of your deployment. A deployment is responsible for managing one or more pods; in this case it will have a single replica, as set by the spec.replicas field. That is, it will create and manage a single pod.

      To manage pods, your deployment must know which pods it's responsible for. The spec.selector field is the one that gives it that information. In this case the deployment will be responsible for all pods with the label app: do-kubernetes-sample-app. The spec.template field contains the details of the Pod this deployment will create. Inside the template you also have a spec.template.metadata field. The labels inside this field must match the ones used in spec.selector. spec.template.spec is the specification of the pod itself. In this case it contains a single container, called do-kubernetes-sample-app. The image of that container is the image you built previously and pushed to Docker Hub.

      This YAML file also tells Kubernetes that this container exposes the port 80, and gives this port the name http.
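
      As an aside, if you later wanted more than one replica, you could either change spec.replicas in the manifest and re-apply it, or scale the deployment imperatively. The following command is only an illustration and is not needed in this tutorial:

      • kubectl scale deployment do-kubernetes-sample-app --replicas=3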

      To access the port exposed by your Deployment, create a Service. Make a file named ~/do-sample-app/kube/do-sample-service.yml and open it with your favorite editor:

      • nano ~/do-sample-app/kube/do-sample-service.yml

      Next, add the following lines to the file:

      ~/do-sample-app/kube/do-sample-service.yml

      apiVersion: v1
      kind: Service
      metadata:
        name: do-kubernetes-sample-app
        namespace: default
        labels:
          app: do-kubernetes-sample-app
      spec:
        type: ClusterIP
        ports:
          - port: 80
            targetPort: http
            name: http
        selector:
          app: do-kubernetes-sample-app
      

      This file gives your Service the same labels used on your deployment. This is not required, but it helps to organize your applications on Kubernetes.

      The service resource also has a spec field. The spec.type field is responsible for the behavior of the service. In this case it's a ClusterIP, which means the service is exposed on a cluster-internal IP, and is only reachable from within your cluster. This is the default spec.type for services. spec.selector is the label selector criteria that should be used when picking the pods to be exposed by this service. Since your pod has the tag app: do-kubernetes-sample-app, you used it here. spec.ports are the ports exposed by the pod's containers that you want to expose from this service. Your pod has a single container which exposes port 80, named http, so you are using it here as targetPort. The service exposes that port on port 80 too, with the same name, but you could have used a different port/name combination than the one from the container.
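
      For example, if you wanted the Service to listen on a different port than the container, the ports section could look like the following sketch (hypothetical values, not used in this tutorial):

      ports:
        - port: 8080
          targetPort: http
          name: web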

      With your Service and Deployment manifest files created, you can now create those resources on your Kubernetes cluster using kubectl:

      • kubectl apply -f ~/do-sample-app/kube/

      You will receive the following output:

      Output

      deployment.apps/do-kubernetes-sample-app created
      service/do-kubernetes-sample-app created

      Test if this is working by forwarding one port on your machine to the port that the service is exposing inside your Kubernetes cluster. You can do that using kubectl port-forward:

      • kubectl port-forward $(kubectl get pod --selector="app=do-kubernetes-sample-app" --output jsonpath='{.items[0].metadata.name}') 8080:80

      The subshell command $(kubectl get pod --selector="app=do-kubernetes-sample-app" --output jsonpath='{.items[0].metadata.name}') retrieves the name of the pod matching the tag you used. Otherwise you could have retrieved it from the list of pods by using kubectl get pods.
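
      If you prefer, you can list the matching pods first and copy the name by hand:

      • kubectl get pods --selector="app=do-kubernetes-sample-app"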

      After you run port-forward, the shell will stop being interactive, and will instead output the requests redirected to your cluster:

      Output

      Forwarding from 127.0.0.1:8080 -> 80
      Forwarding from [::1]:8080 -> 80

      Opening localhost:8080 on any browser should render the same page you saw when you ran the container locally, but now it's coming from your Kubernetes cluster! As before, you can also use curl in a new terminal window to check if it's working:

      • curl localhost:8080

      You will receive the following output:

      Output

      <!DOCTYPE html>
      <title>DigitalOcean</title>
      <body>
        Kubernetes Sample Application
      </body>

      Next, it's time to push all the files you created to your GitHub repository. To do this you must first create a repository on GitHub called digital-ocean-kubernetes-deploy.

      In order to keep this repository simple for demonstration purposes, do not initialize the new repository with a README, license, or .gitignore file when asked on the GitHub UI. You can add these files later on.

      With the repository created, point your local repository to the one on GitHub. To do this, press CTRL + C to stop kubectl port-forward and get the command line back, then run the following commands to add a new remote called origin:

      • cd ~/do-sample-app/
      • git remote add origin https://github.com/your-github-account-username/digital-ocean-kubernetes-deploy.git

      There should be no output from the preceding command.

      Next, commit all the files you created up to now to the GitHub repository. First, add the files:

      • git add --all

      Next, commit the files to your repository, with a commit message in quotation marks:

      • git commit -m "initial commit"

      This will yield output similar to the following:

      Output

      [master (root-commit) db321ad] initial commit
       4 files changed, 47 insertions(+)
       create mode 100644 Dockerfile
       create mode 100644 index.html
       create mode 100644 kube/do-sample-deployment.yml
       create mode 100644 kube/do-sample-service.yml

      Finally, push the files to GitHub:

      • git push -u origin master

      You will be prompted for your username and password. Once you have entered this, you will see output like this:

      Output

      Counting objects: 7, done.
      Delta compression using up to 8 threads.
      Compressing objects: 100% (7/7), done.
      Writing objects: 100% (7/7), 907 bytes | 0 bytes/s, done.
      Total 7 (delta 0), reused 0 (delta 0)
      To github.com:your-github-account-username/digital-ocean-kubernetes-deploy.git
       * [new branch]      master -> master
      Branch master set up to track remote branch master from origin.

      If you go to your GitHub repository page you will now see all the files there. With your project up on GitHub, you can now set up CircleCI as your CI/CD tool.

      Step 7 — Configuring CircleCI

      For this tutorial, you will use CircleCI to automate deployments of your application whenever the code is updated, so you will need to log in to CircleCI using your GitHub account and set up your repository.

      First, go to their homepage https://circleci.com, and press Sign Up.

      circleci-home-page

      You are using GitHub, so click the green Sign Up with GitHub button.

      CircleCI will redirect you to an authorization page on GitHub. CircleCI needs some permissions on your account to be able to start building your projects: permission to obtain your email address, add deploy keys, create hooks on your repositories, and add SSH keys to your account. If you need more information on what CircleCI is going to do with your data, check their documentation about GitHub integration.

      circleci-github-authorization

      After authorizing CircleCI you will be redirected to their dashboard.

      circleci-project-dashboard

      Next, set up your GitHub repository in CircleCI. Click on Set Up New Projects from the CircleCI Dashboard, or as a shortcut, open the following link changing the highlighted text with your own GitHub username: https://circleci.com/setup-project/gh/your-github-username/digital-ocean-kubernetes-deploy.

      After that press Start Building. Do not create a config file in your repository just yet, and don't worry if the first build fails.

      circleci-start-building

      Next, specify some environment variables in the CircleCI settings. You can find the settings of the project by clicking on the small button with a cog icon on the top right section of the page then selecting Environment Variables, or you can go directly to the environment variables page by using the following URL (remember to fill in your username): https://circleci.com/gh/your-github-username/digital-ocean-kubernetes-deploy/edit#env-vars. Press Add Variable to create new environment variables.

      First, add two environment variables called DOCKERHUB_USERNAME and DOCKERHUB_PASS which will be needed later on to push the image to Docker Hub. Set the values to your Docker Hub username and password, respectively.

      Then add three more: KUBERNETES_TOKEN, KUBERNETES_SERVER, and KUBERNETES_CLUSTER_CERTIFICATE.

      The value of KUBERNETES_TOKEN will be the value of the local environment variable you used earlier to authenticate on your Kubernetes cluster using your Service Account user. If you have closed the terminal, you can always run the following command to retrieve it again:

      • kubectl get secret $(kubectl get secret | grep cicd-token | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode

      KUBERNETES_SERVER will be the string you passed as the --server flag to kubectl when you logged in with your cicd Service Account. You can find this after server: in the ~/.kube/config file, or in the file kubernetes-deployment-tutorial-kubeconfig.yaml downloaded from the DigitalOcean dashboard when you made the initial setup of your Kubernetes cluster.

      KUBERNETES_CLUSTER_CERTIFICATE should also be available on your ~/.kube/config file. It's the certificate-authority-data field on the clusters item related to your cluster. It should be a long string; make sure to copy all of it.
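
      As a convenience, you can print both of these values from your kubeconfig with grep and awk; this is only a shortcut, and copying them by hand from the file works just as well. If your kubeconfig contains more than one cluster, make sure you copy the values belonging to the correct one:

      • grep 'server:' ~/.kube/config | awk '{print $2}'
      • grep 'certificate-authority-data:' ~/.kube/config | awk '{print $2}'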

      Those environment variables must be defined here because most of them contain sensitive information, and it is not secure to place them directly on the CircleCI YAML config file.

      With CircleCI listening for changes on your repository, and the environment variables configured, it's time to create the configuration file.

      Make a directory called .circleci inside your sample application repository:

      • mkdir ~/do-sample-app/.circleci/

      Inside this directory, create a file named config.yml and open it with your favorite editor:

      • nano ~/do-sample-app/.circleci/config.yml

      Add the following content to the file, making sure to replace dockerhub-username with your Docker Hub username:

      ~/do-sample-app/.circleci/config.yml

      version: 2.1
      jobs:
        build:
          docker:
            - image: circleci/buildpack-deps:stretch
          environment:
            IMAGE_NAME: dockerhub-username/do-kubernetes-sample-app
          working_directory: ~/app
          steps:
            - checkout
            - setup_remote_docker
            - run:
                name: Build Docker image
                command: |
                  docker build -t $IMAGE_NAME:latest .
            - run:
                name: Push Docker Image
                command: |
                  echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
                  docker push $IMAGE_NAME:latest
      workflows:
        version: 2
        build-master:
          jobs:
            - build:
                filters:
                  branches:
                    only: master
      

      This sets up a Workflow with a single job, called build, that runs for every commit to the master branch. This job is using the image circleci/buildpack-deps:stretch to run its steps, which is an image from CircleCI based on the official buildpack-deps Docker image, but with some extra tools installed, like Docker binaries themselves.

      The build job has four steps:

      • checkout retrieves the code from GitHub.
      • setup_remote_docker sets up a remote, isolated environment for each build. This is required before you use any docker command inside a job step: because the steps themselves run inside a Docker container, setup_remote_docker allocates a separate machine to run the Docker commands on.
      • The first run step builds the image, as you did locally earlier. For that you are using the IMAGE_NAME environment variable you declared in environment: (remember to replace it with your own Docker Hub username).
      • The last run step pushes the image to Docker Hub, using the environment variables you configured in the project settings to authenticate.

      Commit the new file to your repository and push the changes upstream:

      • cd ~/do-sample-app/
      • git add .circleci/
      • git commit -m "add CircleCI config"
      • git push

      This will trigger a new build on CircleCI. The CircleCI workflow is going to correctly build and push your image to Docker Hub.

      CircleCI build page with success build info

      Now that you have created and tested your CircleCI workflow, you can set your DOKS cluster to retrieve the up-to-date image from Docker Hub and deploy it automatically when changes are made.

      Step 8 — Updating the Deployment on the Kubernetes Cluster

      Now that your application image is being built and sent to Docker Hub every time you push changes to the master branch on GitHub, it's time to update your deployment on your Kubernetes cluster so that it retrieves the new image and uses it as a base for deployment.

      To do that, first fix one issue with your deployment: it currently depends on an image with the latest tag. This tag does not tell you which version of the image you are using. You cannot easily lock your deployment to that tag, because it's overwritten every time you push a new image to Docker Hub, and by using it like that you lose one of the best things about having containerized applications: reproducibility.

      You can read more about that in this article about why depending on the Docker latest tag is an anti-pattern.

      To correct this, you first must make some changes to your Push Docker Image build step in the ~/do-sample-app/.circleci/config.yml file. Open up the file:

      • nano ~/do-sample-app/.circleci/config.yml

      Then add the highlighted lines to your Push Docker Image step:

      ~/do-sample-app/.circleci/config.yml:16-22

      ...
            - run:
                name: Push Docker Image
                command: |
                  echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
                  docker tag $IMAGE_NAME:latest $IMAGE_NAME:$CIRCLE_SHA1
                  docker push $IMAGE_NAME:latest
                  docker push $IMAGE_NAME:$CIRCLE_SHA1
      ...
      

      Save and exit the file.

      CircleCI has some special environment variables set by default. One of them is CIRCLE_SHA1, which contains the hash of the commit it's building. The changes you made to ~/do-sample-app/.circleci/config.yml will use this environment variable to tag your image with the commit it was built from, always tagging the most recent build with the latest tag. That way, you always have specific images available, without overwriting them when you push something new to your repository.
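
      Having per-commit tags also makes manual rollbacks straightforward. For example, you could point the Deployment at an older image by its commit tag using kubectl set image (the hash below is hypothetical):

      • kubectl set image deployment/do-kubernetes-sample-app do-kubernetes-sample-app=dockerhub-username/do-kubernetes-sample-app:ca82a6dff817ec66f44342007202690a93763949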

      Next, change your deployment manifest file to use that new tag. This would be simple if inside ~/do-sample-app/kube/do-sample-deployment.yml you could set your image as dockerhub-username/do-kubernetes-sample-app:$COMMIT_SHA1, but kubectl doesn't do variable substitution inside manifests when you use kubectl apply. To account for this, you can use envsubst. envsubst is a CLI tool, part of the GNU gettext project. It allows you to pass some text to it, and if it finds any variable in the text that has a matching environment variable, it replaces the variable with the respective value. The resulting text is then returned as its output.
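
      As a quick illustration of how envsubst behaves, using a throwaway value for the variable:

      • export COMMIT_SHA1=abc123
      • echo 'image: do-kubernetes-sample-app:$COMMIT_SHA1' | envsubst

      This prints image: do-kubernetes-sample-app:abc123, because envsubst replaces $COMMIT_SHA1 with the value of the matching exported environment variable.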

      To use this, you will create a simple bash script which will be responsible for your deployment. Make a new folder called scripts inside ~/do-sample-app/:

      • mkdir ~/do-sample-app/scripts/

      Inside that folder create a new bash script called ci-deploy.sh and open it with your favorite text editor:

      • nano ~/do-sample-app/scripts/ci-deploy.sh

      Inside it write the following bash script:

      ~/do-sample-app/scripts/ci-deploy.sh

      #! /bin/bash
      # exit script when any command ran here returns with non-zero exit code
      set -e
      
      COMMIT_SHA1=$CIRCLE_SHA1
      
      # We must export it so it's available for envsubst
      export COMMIT_SHA1=$COMMIT_SHA1
      
      # since the only way for envsubst to work on files is using input/output redirection,
      #  it's not possible to do in-place substitution, so we need to save the output to another file
      #  and overwrite the original with that one.
      envsubst <./kube/do-sample-deployment.yml >./kube/do-sample-deployment.yml.out
      mv ./kube/do-sample-deployment.yml.out ./kube/do-sample-deployment.yml
      
      echo "$KUBERNETES_CLUSTER_CERTIFICATE" | base64 --decode > cert.crt
      
      ./kubectl \
        --kubeconfig=/dev/null \
        --server=$KUBERNETES_SERVER \
        --certificate-authority=cert.crt \
        --token=$KUBERNETES_TOKEN \
        apply -f ./kube/
      

      Let's go through this script, using the comments in the file. First, there is the following:

      set -e
      

      This line makes sure any failed command stops the execution of the bash script. That way if one command fails, the next ones are not executed.

      COMMIT_SHA1=$CIRCLE_SHA1
      export COMMIT_SHA1=$COMMIT_SHA1
      

      These lines export the CircleCI $CIRCLE_SHA1 environment variable under a new name. If you had just declared the variable without exporting it using export, it would not be visible to the envsubst command.

      envsubst <./kube/do-sample-deployment.yml >./kube/do-sample-deployment.yml.out
      mv ./kube/do-sample-deployment.yml.out ./kube/do-sample-deployment.yml
      

      envsubst cannot do in-place substitution. That is, it cannot read the content of a file, replace the variables with their respective values, and write the output back to the same file. Therefore, you will redirect the output to another file and then overwrite the original file with the new one.

      echo "$KUBERNETES_CLUSTER_CERTIFICATE" | base64 --decode > cert.crt
      

      The environment variable $KUBERNETES_CLUSTER_CERTIFICATE you created earlier on CircleCI's project settings is in reality a Base64 encoded string. To use it with kubectl you must decode its contents and save it to a file. In this case you are saving it to a file named cert.crt inside the current working directory.

      ./kubectl \
        --kubeconfig=/dev/null \
        --server=$KUBERNETES_SERVER \
        --certificate-authority=cert.crt \
        --token=$KUBERNETES_TOKEN \
        apply -f ./kube/
      

      Finally, you are running kubectl. The command has similar arguments to the one you ran when you were testing your Service Account. You are calling apply -f ./kube/, since on CircleCI the current working directory is the root folder of your project. ./kube/ here is your ~/do-sample-app/kube folder.

      Save the file and make sure it's executable:

      • chmod +x ~/do-sample-app/scripts/ci-deploy.sh

      Now, edit ~/do-sample-app/kube/do-sample-deployment.yml:

      • nano ~/do-sample-app/kube/do-sample-deployment.yml

      Change the container's image value so that its tag looks like the following:

      ~/do-sample-app/kube/do-sample-deployment.yml

            # ...
            containers:
              - name: do-kubernetes-sample-app
                image: dockerhub-username/do-kubernetes-sample-app:$COMMIT_SHA1
                ports:
                  - containerPort: 80
                    name: http
      

      Save and close the file. You must now add some new steps to your CI configuration file to update the deployment on Kubernetes.

      Open ~/do-sample-app/.circleci/config.yml on your favorite text editor:

      • nano ~/do-sample-app/.circleci/config.yml

      Write the following new steps, right below the Push Docker Image one you had before:

      ~/do-sample-app/.circleci/config.yml

      ...
            - run:
                name: Install envsubst
                command: |
                  sudo apt-get update && sudo apt-get -y install gettext-base
            - run:
                name: Install kubectl
                command: |
                  curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
                  chmod u+x ./kubectl
            - run:
                name: Deploy Code
                command: ./scripts/ci-deploy.sh
      

      The first two steps are installing some dependencies, first envsubst, and then kubectl. The Deploy Code step is responsible for running our deploy script.

      To make sure the changes are really going to be reflected on your Kubernetes deployment, edit your index.html. Change the HTML to something else, like:

      ~/do-sample-app/index.html

      <!DOCTYPE html>
      <title>DigitalOcean</title>
      <body>
        Automatic Deployment is Working!
      </body>
      

      Once you have saved the above change, commit all the modified files to the repository, and push the changes upstream:

      • cd ~/do-sample-app/
      • git add --all
      • git commit -m "add deploy script and add new steps to circleci config"
      • git push

      You will see the new build running on CircleCI, and successfully deploying the changes to your Kubernetes cluster.

      Wait for the build to finish, then run the same command you ran previously:

      • kubectl port-forward $(kubectl get pod --selector="app=do-kubernetes-sample-app" --output jsonpath='{.items[0].metadata.name}') 8080:80

      Make sure everything is working by opening your browser on the URL localhost:8080 or by making a curl request to it. It should show the updated HTML:

      • curl localhost:8080

      You will receive the following output:

      Output

      <!DOCTYPE html>
      <title>DigitalOcean</title>
      <body>
        Automatic Deployment is Working!
      </body>

      Congratulations, you have set up automated deployment with CircleCI!

      Conclusion

      This was a basic tutorial on how to do deployments to DigitalOcean Kubernetes using CircleCI. From here, you can improve your pipeline in many ways. The first thing you can do is create a single build job for multiple deployments, each one deploying to different Kubernetes clusters or different namespaces. This can be extremely useful when you have different Git branches for development/staging/production environments, ensuring that the deployments are always separated.

      You could also build your own image to be used on CircleCI, instead of using buildpack-deps. This image could be based on it, but with the kubectl and envsubst dependencies already installed.
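
      A minimal sketch of such an image, assuming you base it on circleci/buildpack-deps:stretch and bake in kubectl and envsubst (the kubectl version below is only illustrative), could look like this:

      FROM circleci/buildpack-deps:stretch

      USER root
      # Install envsubst (part of gettext-base) and a pinned kubectl binary
      RUN apt-get update && apt-get install -y gettext-base \
          && curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.5/bin/linux/amd64/kubectl \
          && chmod +x ./kubectl \
          && mv ./kubectl /usr/local/bin/kubectl

      You would push this image to Docker Hub and reference it in the docker: section of your CircleCI config in place of circleci/buildpack-deps:stretch, which would remove the need for the Install envsubst and Install kubectl steps.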

      If you would like to learn more about CI/CD on Kubernetes, check out the tutorials for our CI/CD on Kubernetes Webinar Series, or for more information about apps on Kubernetes, see Modernizing Applications for Kubernetes.




      Automate Static Site Deployments with Salt, Git, and Webhooks


      Updated by Linode. Contributed by Nathan Melehan.


      This guide will walk through the deployment of a static site using SaltStack, which is a flexible configuration management system. The configuration files created for Salt will be version controlled using Git. Updates to your static site’s code will be automatically communicated to the production system using webhooks, an event notification system for the web.

      Setting up these mechanisms offers an array of benefits:

      • Using webhooks will keep your production website in sync with your development without any actions needed on your part.

      • Using Salt provides an extensible, reliable way to alter your production systems and minimize human error.

      • Version controlling your configuration management helps you track or revert the changes you’ve made to your systems and collaborate with others on your deployments.

      Development and Deployment Workflow

      The static site generator used in this guide is Hugo, a fast framework written in Go. Static site generators compile markdown or other content files into HTML files. This guide can easily be adapted to other frameworks.

      Two Git repositories will be created: one will track changes to the Hugo site, and the other will track Salt’s configuration files. Remote repositories will be created for both on GitHub.

      Two Linodes will be created: one will act as the Salt master, and the other as the Salt minion. This guide was tested under Debian 9, but the instructions may work with other distributions as well. The Salt minion will run the production webserver which serves the Hugo site, and the master will configure the minion’s software. The minion will also run a webhook server which will receive code update notifications from GitHub.

      It is possible to run Salt in a masterless mode, but using a Salt master will make it easier to expand on your deployment in the future.

      Note

      The workflow described in this guide is similar to how Linode’s own Guides & Tutorials website is developed and deployed.

      Before You Begin

      Set Up the Development Environment

      Development of your Hugo site and your Salt formula will take place on your personal computer. Some software will need to be installed on your computer first:

      1. Install Git using one of the methods in Linode’s guide. If you have a Mac, use the Homebrew method, as it will also be used to install Hugo.

      2. Install Hugo. The Hugo documentation has a full list of installation methods, and instructions for some popular platforms are as follows:

        • Debian/Ubuntu:

          sudo apt-get install hugo
          
        • Fedora, Red Hat and CentOS:

          sudo dnf install hugo
          
        • Mac, using Homebrew:

          brew install hugo
          
        • Windows, using Chocolatey:

          choco install hugo -confirm
          

      Deploy the Linodes

      1. Follow the Getting Started guide and deploy two Linodes running Debian 9.

      2. In the settings tab of your Linodes’ dashboards, label one of the Linodes as salt-master and the other as salt-minion. This is not required, but it will help keep track of which Linode serves which purpose.

      3. Complete the Securing Your Server guide on each Linode to create a limited Linux user account with sudo privileges, harden SSH access, and remove unnecessary network services.

        Note

        This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you’re not familiar with the sudo command, visit our Users and Groups guide.

        All configuration files should be edited with elevated privileges. Remember to include sudo before running your text editor.

      4. Configure DNS for your site by adding a domain zone and setting up reverse DNS on your Salt minion’s IP address.

      Set Up the Salt Master and Salt Minion

      Before you can start setting up the Salt formulas for the minion, you first need to install the Salt software on the master and minion and set up communication between them.

      1. Log into the Salt master Linode via SSH and run the Salt installation bootstrap script:

        wget -O bootstrap-salt.sh https://bootstrap.saltstack.com
        sudo sh bootstrap-salt.sh -M -N
        

        Note

        The -M option tells the script to install the Salt master software, and the -N option tells the script to not install the minion software.

      2. Log into the Salt minion Linode via SSH and set the hostname. This guide uses hugo-webserver as the example hostname:

        sudo hostnamectl set-hostname hugo-webserver
        

        Note

        This step needs to be completed before installing Salt on the minion, as Salt will use your hostname to generate the minion’s Salt ID.

      3. Edit the minion’s /etc/hosts file and append a new line for your hostname after the localhost line; replace 192.0.2.3 with your minion’s public IP address:

        /etc/hosts
        
        127.0.0.1       localhost
        192.0.2.3       hugo-webserver
        # [...]
      4. Run the bootstrap script on the minion:

        wget -O bootstrap-salt.sh https://bootstrap.saltstack.com
        sudo sh bootstrap-salt.sh
        
      5. Edit /etc/salt/minion on the Salt minion. Uncomment the line that begins with #master: and enter your Salt master’s IP after the colon (in place of 192.0.2.2):

        /etc/salt/minion
        
        # [...]
        master: 192.0.2.2
        # [...]

        Note

        Linode does not charge for traffic within a datacenter across private IP addresses. If your Salt master and minion are in the same datacenter, and both have a private IP addresses, you can use your Salt master’s private IP address in this step to avoid incurring data traffic charges.

      6. Restart Salt on the minion:

        sudo systemctl restart salt-minion
        

      Salt Minion Authentication

      The minion should now be able to find the master, but it has not yet been authenticated to communicate with the master. Salt uses public-private keypairs to authenticate minions to masters.

      1. On the master, list fingerprints for all the master’s local keys, accepted minion keys, and unaccepted keys:

        sudo salt-key --finger-all
        

        The output should resemble:

          
        Local Keys:
        master.pem:  fe:1f:e8:3d:26:83:1c:...
        master.pub:  2b:93:72:b3:3a:ae:cb:...
        Unaccepted Keys:
        hugo-webserver:  29:d8:f3:ed:91:9b:51:...
        
        

        Note

        The example fingerprints in this section have been truncated for brevity.

      2. Copy the fingerprint for master.pub from the output of salt-key --finger-all. On your Salt minion, open /etc/salt/minion in a text editor. Uncomment the line that begins with #master_finger: and enter the value for your master.pub after the colon in single-quotes:

        /etc/salt/minion
        
        # [...]
        master_finger: '2b:93:72:b3:3a:ae:cb:...'
        # [...]
      3. Restart Salt on the minion:

        sudo systemctl restart salt-minion
        
      4. View the minion’s local key fingerprint:

        sudo salt-call key.finger --local
        
          
        local:
            29:d8:f3:ed:91:9b:51:...
        
        

        Compare the output’s listed fingerprint to the fingerprints listed by the Salt master for any Unaccepted Keys. This is the output of salt-key --finger-all run on the master in the beginning of this section.

      5. After verifying that the minion’s fingerprint matches the fingerprint detected by the Salt master, run the following command on the master to accept the minion’s key:

        sudo salt-key -a hugo-webserver
        
      6. From the master, verify that the minion is running:

        sudo salt-run manage.up
        

        You can also run a Salt test ping from the master to the minion:

        sudo salt 'hugo-webserver' test.ping
        
          
        hugo-webserver:
            True
        
        

      Initialize the Salt Minion’s Formula

      The Salt minion is ready to be configured by the master. These configurations will be written in a Salt formula which will be hosted on GitHub.

      1. On your computer, create a new directory to hold your minion’s formula and change to that directory:

        mkdir hugo-webserver-salt-formula
        cd hugo-webserver-salt-formula
        
      2. Inside the formula directory, create a new hugo directory to hold your webserver’s configuration:

        mkdir hugo
        
      3. Inside the hugo directory, create a new install.sls file:

        hugo-webserver-salt-formula/hugo/install.sls
        
        nginx_pkg:
          pkg.installed:
            - name: nginx

        Note

        Salt configurations are declared in YAML, a markup language that incorporates whitespace and indentation in its syntax. Be sure to use the same indentation as the snippets presented in this guide.

        A .sls file is a SaLt State file. Salt states describe the state a minion should be in after the state is applied to it: e.g., all the software that should be installed, all the services that should be run, and so on.

        The above snippet says that a package with name nginx (i.e. the NGINX web server) should be installed via the distribution’s package manager. Salt knows how to negotiate software installation via the built-in package manager for various distributions. Salt also knows how to install software via NPM and other package managers.

        The string nginx_pkg is the ID for the state component, pkg is the name of the Salt module used, and pkg.installed is referred to as a function declaration. The component ID is arbitrary, so you can name it however you prefer.

        Note

        If you name the ID the same as the package you want installed, you do not need to specify the - name option, as it will be inferred from the ID. For example, this snippet also installs NGINX:

        hugo-webserver-salt-formula/hugo/install.sls

        nginx:
          pkg.installed: []

        The same name/ID convention is true for other Salt modules.

      4. Inside the hugo directory, create a new service.sls file:

        hugo-webserver-salt-formula/hugo/service.sls
        
        nginx_service:
          service.running:
            - name: nginx
            - enable: True
            - require:
              - pkg: nginx_pkg

        This state says that the nginx service should be immediately run and be enabled to run at boot. For a Debian 9 system, Salt will set the appropriate systemd configurations to enable the service. Salt also supports other init systems.

        The require lines specify that this state component should not be applied until after the nginx_pkg component has been applied.

        Note

        Unless specified by a require declaration, Salt makes no guarantees about the order that different components are applied. The order that components are listed in a state file does not necessarily correspond with the order that they are applied.

      5. Inside the hugo directory, create a new init.sls file with the following contents:

        hugo-webserver-salt-formula/hugo/init.sls
        
        include:
          - hugo.install
          - hugo.service

        Using the include declaration in this way simply concatenates the install.sls and service.sls files into a single combined state file.

        Right now, these state files only install and enable NGINX. More functionality will be enabled later in this guide.

        The install and service states will not be applied to the minion on their own; instead, only the combined init state will be applied (see the example command after the note below). In Salt, when a file named init.sls exists inside a directory, Salt will refer to that particular state by the name of the directory it belongs to (i.e. hugo in our example).

        Note

        The organization of the state files used here is not mandated by Salt. Salt does not place restrictions on how you organize your states. This specific structure is presented as an example of a best practice.
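
      To illustrate the shorthand mentioned in the note under step 3, here is a hypothetical component (not part of this formula) that relies on the same ID/name convention with the service module. Because the ID matches the service's name, no - name option is needed:

        nginx:
          service.running:
            - enable: True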

      Push the Salt Formula to GitHub

      1. Inside your hugo-webserver-salt-formula directory on your computer, initialize a new Git repository:

        cd ~/hugo-webserver-salt-formula
        git init
        
      2. Stage the files you just created:

        git add .
        
      3. Review the staged files:

        git status
        
          
        On branch master
        No commits yet
        Changes to be committed:
          (use "git rm --cached ..." to unstage)
        
          new file:   hugo/init.sls
          new file:   hugo/install.sls
          new file:   hugo/service.sls
        
        
      4. Commit the files:

        git commit -m "Initial commit"
        
      5. Log into the GitHub website in your browser and navigate to the Create a New Repository page.

      6. Create a new public repository with the name hugo-webserver-salt-formula:

        GitHub New Repository - Add New Salt Formula Repo

      7. Copy the HTTPS URL for your new repository:

        GitHub New Repository - New Salt Formula Repo

      8. In your local Salt formula repository, add the GitHub repository as the origin remote and push your new files to it. Replace github-username with your GitHub user:

        git remote add origin https://github.com/github-username/hugo-webserver-salt-formula.git
        git push -u origin master
        

        Note

        If you haven’t pushed anything else to your GitHub account from the command line before, you may be prompted to authenticate with GitHub. If you have two-factor authentication enabled for your account, you will need to create and use a personal access token.
      9. If you navigate back to your hugo-webserver-salt-formula repository on GitHub and refresh the page, you should now see your new files.

      Enable GitFS on the Salt Master

      Update your Salt master to serve the new formula from GitHub:

      1. Salt requires that you install a Python interface to Git to use GitFS. On the Salt master Linode:

        sudo apt-get install python-git
        
      2. Open /etc/salt/master in a text editor. Uncomment the fileserver_backend declaration and enter roots and gitfs in the declaration list:

        /etc/salt/master
        
        fileserver_backend:
          - roots
          - gitfs

        roots refers to Salt files stored on the master’s filesystem. While the Hugo webserver Salt formula is stored on GitHub, the Salt Top file will be stored on the master. The Top file is how Salt maps states to the minions they will be applied to.

      3. In the same file, uncomment the gitfs_remotes declaration and enter your Salt formula’s repository URL:

        /etc/salt/master
        
        gitfs_remotes:
          - https://github.com/your_github_user/hugo-webserver-salt-formula.git
      4. Uncomment the gitfs_provider declaration and set its value to gitpython:

        /etc/salt/master
        
        gitfs_provider: gitpython

      Apply the Formula’s State to the Minion

      1. In /etc/salt/master, uncomment the file_roots declaration and set the following values:

        /etc/salt/master
        
        file_roots:
          base:
            - /srv/salt/

        file_roots specifies where state files are kept on the Master’s filesystem. This is referenced when - roots is declared in the fileserver_backend section. base refers to a Salt environment, which is a tree of state files that can be applied to minions. This guide will only use the base environment, but other environments could be created for development, QA, and so on.

      2. Restart Salt on the master to enable the changes in /etc/salt/master:

        sudo systemctl restart salt-master
        
      3. Create the /srv/salt directory on the Salt master:

        sudo mkdir /srv/salt
        
      4. Create a new top.sls file in /srv/salt:

        /srv/salt/top.sls
        
        base:
          'hugo-webserver':
            - hugo

        This is Salt’s Top file, and the snippet declares that the hugo-webserver minion should receive the init.sls state from the hugo directory (from your GitHub-hosted Salt formula).

      5. Tell Salt to apply states from the Top file to the minion:

        sudo salt 'hugo-webserver' state.apply
        

        Salt refers to this command as a highstate. Running a highstate can take a bit of time to complete, and the output of the command will describe which actions were taken on the minion, as well as whether any of them failed.

        Note

        If you see an error similar to:

          
        No matching sls found for 'hugo' in env 'base'
        
        

        Try running this command to manually fetch the Salt formula from GitHub, then run the state.apply command again:

        sudo salt-run fileserver.update
        

        Salt’s GitFS fetches files from remotes periodically, and this polling interval can be configured (see the example after this list).

      6. If you visit your domain name in a web browser, you should now see NGINX’s default test page served by the Salt minion.
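
      As noted above, the master polls its GitFS remotes on a schedule. If you find yourself waiting on GitHub changes to show up, the polling interval (in seconds) can be tuned in /etc/salt/master; a minimal sketch, assuming you want to poll every 30 seconds instead of the default 60:

        /etc/salt/master

        # Fetch from gitfs remotes (e.g. GitHub) every 30 seconds.
        gitfs_update_interval: 30

      As with the other changes to /etc/salt/master in this guide, restart the salt-master service after editing the file.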

      Initialize the Hugo Site

      1. On your computer, create a new Hugo site. Make sure you are not running this command in your hugo-webserver-salt-formula directory:

        hugo new site example-hugo-site
        
      2. Navigate to the new Hugo site directory and initialize a Git repository:

        cd example-hugo-site
        git init
        
      3. Install a theme into the themes/ directory. This guide uses the Cactus theme:

        git submodule add https://github.com/digitalcraftsman/hugo-cactus-theme.git themes/hugo-cactus-theme
        
      4. The theme comes with some example content. Copy it into the root of your site so that it can be viewed:

        cp -r themes/hugo-cactus-theme/exampleSite/ .
        
      5. Edit the baseURL, themesDir, and name options in config.toml as follows; replace example.com with your own domain and Your Name with your own name:

        example-hugo-site/config.toml
        
        # [...]
        baseURL = "http://example.com"
        # [...]
        themesDir = "themes"
        # [...]
          name = "Your Name"
      6. Run the Hugo development server on your computer:

        hugo server
        

        The output from this command will end with a line like:

          
        Web Server is available at http://localhost:1313/ (bind address 127.0.0.1)
        
        
      7. If you view the URL from this output in a browser, you can see your new Hugo site:

        New Hugo Site - Development Server

      8. Enter CTRL-C in the terminal session on your computer to stop the Hugo development server. Open the .gitignore file and make sure public/ is listed. At a minimum, the file should contain:

        example-hugo-site/.gitignore

        public/

        The public directory is the result of Hugo compiling the Markdown content files into HTML. These files can be regenerated by anyone who downloads your site code, so they won’t be checked into version control.

      Push the Hugo Site to GitHub

      1. In the Hugo site directory, commit the new site files:

        git add .
        git commit -m "Initial commit"
        
      2. Create a new public repository on GitHub named example-hugo-site and copy the repository’s HTTPS URL.

      3. In the site directory, add the GitHub repository as the origin remote and push your new files to it; replace github-username with your GitHub user:

        git remote add origin https://github.com/github-username/example-hugo-site.git
        git push -u origin master
        

      Deploy the Hugo Site

      The Salt minion’s formula needs to be updated in order to serve the Hugo site. Specifically, the formula will need to have states which:

      • Install Git and clone the Hugo site repository from GitHub.

      • Install Hugo and build the HTML files from the markdown content.

      • Update the NGINX configuration to serve the built site.

      Some of the new state components will refer to data stored in Salt Pillar. Pillar is a Salt system that stores private data and other parameters that you don’t want to list in your formulas. The Pillar data will be kept as a file on the Salt master and not checked into version control.

      Note

      There are methods for securely checking this data into version control or using other backends to host the data, but those strategies are outside the scope of this guide.

      Pillar data is injected into state files with Salt’s Jinja templating feature. State files are first evaluated as Jinja templates and then as YAML afterwards.
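
      As a quick illustration with hypothetical values, a templated line like the following is rendered by Jinja first, and only the resulting plain YAML is evaluated as a Salt state:

        # Given Pillar data of:
        #   hugo_deployment_data:
        #     hugo_version: 0.49
        #
        # this line in a state file:
        - name: hugo_{{ pillar['hugo_deployment_data']['hugo_version'] }}_Linux-64bit.deb
        # renders to the following YAML before the state is evaluated:
        - name: hugo_0.49_Linux-64bit.deb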

      Install Git and Hugo

      In your local Salt formula’s repository, edit the install.sls file to append the git_pkg and hugo_pkg states:

      hugo-webserver-salt-formula/hugo/install.sls
      
      # [...]
      
      git_pkg:
        pkg.installed:
          - name: git
      
      hugo_pkg:
        pkg.installed:
          - name: hugo
          - sources:
            - hugo: https://github.com/gohugoio/hugo/releases/download/v{{ pillar['hugo_deployment_data']['hugo_version'] }}/hugo_{{ pillar['hugo_deployment_data']['hugo_version'] }}_Linux-64bit.deb

      The first state component installs Git, and the second component installs Hugo. The second component’s sources declaration specifies that the package should be downloaded from Hugo’s GitHub repository (instead of from the distribution package manager).

      The {{ }} syntax that appears in {{ pillar['hugo_deployment_data']['hugo_version'] }} is a Jinja substitution statement. pillar['hugo_deployment_data']['hugo_version'] returns the value of the hugo_version key from a dictionary named hugo_deployment_data in Pillar. Keeping the Hugo version in Pillar lets you update Hugo without needing to update your formulas.
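
      Once the Pillar data is defined later in this guide, you can confirm from the Salt master that the minion sees the value this template expects; a small sketch:

        sudo salt 'hugo-webserver' pillar.get hugo_deployment_data:hugo_version

      With the Pillar file used in this guide, this should return 0.49.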

      Clone the Hugo Site Git Repository

      Create a new config.sls file in your local Salt formula repository’s hugo directory:

      hugo-webserver-salt-formula/hugo/config.sls
      
      hugo_group:
        group.present:
          - name: {{ pillar['hugo_deployment_data']['group'] }}
      
      hugo_user:
        user.present:
          - name: {{ pillar['hugo_deployment_data']['user'] }}
          - gid: {{ pillar['hugo_deployment_data']['group'] }}
          - home: {{ pillar['hugo_deployment_data']['home_dir'] }}
          - createhome: True
          - require:
            - group: hugo_group
      
      hugo_site_repo:
        cmd.run:
          - name: git clone --recurse-submodules https://github.com/{{ pillar['hugo_deployment_data']['github_account'] }}/{{ pillar['hugo_deployment_data']['site_repo_name'] }}.git
          - cwd: {{ pillar['hugo_deployment_data']['home_dir'] }}
          - runas: {{ pillar['hugo_deployment_data']['user'] }}
          - creates: {{ pillar['hugo_deployment_data']['home_dir'] }}/{{ pillar['hugo_deployment_data']['site_repo_name'] }}
          - require:
            - pkg: git_pkg
            - user: hugo_user

      The final hugo_site_repo component in this snippet is responsible for cloning the example Hugo site repository from GitHub. This cloned repo is placed in the home directory of a system user that Salt creates in the preceding components. The clone command also recursively downloads the Cactus theme submodule.

      Note

      The - creates declaration tells Salt that running this cmd module will create the specified file. If the state is applied again later, Salt checks whether that file already exists; if it does, the module is not run again.

      The require declarations in each component ensure that:

      • The clone is not run until the system user and home directory have been created, and until the software package for Git has been installed.
      • The user is not created until the group it belongs to is created.

      Instead of hard-coding the parameters for the user, group, home directory, GitHub account, and repository name, these are retrieved from Pillar.
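
      Because these components pull so many of their values from Pillar, it can be useful to preview how the Jinja renders before applying anything. Once the formula and Pillar data are in place on the master, one way to do this is:

        # Render hugo/config.sls for the minion and print the resulting state data,
        # with all Jinja/Pillar substitutions already applied:
        sudo salt 'hugo-webserver' state.show_sls hugo.config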

      Configure NGINX

      1. Append the following states to your config.sls:

        hugo-webserver-salt-formula/hugo/config.sls
        
        nginx_default:
          file.absent:
            - name: '/etc/nginx/sites-enabled/default'
            - require:
              - pkg: nginx_pkg
        
        nginx_config:
          file.managed:
            - name: /etc/nginx/sites-available/hugo_site
            - source: salt://hugo/files/hugo_site
            - user: root
            - group: root
            - mode: 0644
            - template: jinja
            - require:
              - pkg: nginx_pkg
        
        nginx_symlink:
          file.symlink:
            - name: /etc/nginx/sites-enabled/hugo_site
            - target: /etc/nginx/sites-available/hugo_site
            - user: root
            - group: root
            - require:
              - file: nginx_config
        
        nginx_document_root:
          file.directory:
            - name: {{ pillar['hugo_deployment_data']['nginx_document_root'] }}/{{ pillar['hugo_deployment_data']['site_repo_name'] }}
            - user: {{ pillar['hugo_deployment_data']['user'] }}
            - group: {{ pillar['hugo_deployment_data']['group'] }}
            - dir_mode: 0755
            - require:
              - user: hugo_user
        • The nginx_default component removes the symlink in sites-enabled for the default NGINX config, which disables that configuration.
        • nginx_config and nginx_symlink then create a new configuration file in sites-available and a symlink to it in sites-enabled.
        • The nginx_document_root component creates the directory that NGINX will serve your Hugo site files from (when filled in with Pillar data, this directory will look like /var/www/example-hugo-site).
      2. The - source: salt://hugo/files/hugo_site declaration in nginx_config refers to an NGINX configuration file that doesn’t exist in your repository yet. Create the files/ directory:

        cd ~/hugo-webserver-salt-formula/hugo
        mkdir files
        
      3. Create the hugo_site file inside files/:

        hugo-webserver-salt-formula/hugo/files/hugo_site
        
        server {
            listen 80;
            listen [::]:80;
            server_name {{ pillar['hugo_deployment_data']['domain_name'] }};
        
            root {{ pillar['hugo_deployment_data']['nginx_document_root'] }}/{{ pillar['hugo_deployment_data']['site_repo_name'] }};
        
            index index.html index.htm index.nginx-debian.html;
        
            location / {
                try_files $uri $uri/ /404.html;
            }
        }

        The nginx_config component that manages this file also lists the - template: jinja declaration, so the source file is interpreted as a Jinja template and can substitute values from Pillar using the Jinja substitution syntax.

      4. Replace the content of your service.sls with this snippet:

        hugo-webserver-salt-formula/hugo/service.sls
        
        nginx_service:
          service.running:
            - name: nginx
            - enable: True
            - require:
              - file: nginx_symlink
            - watch:
              - file: nginx_config

        The nginx_service component now requires nginx_symlink instead of nginx_pkg. Without this change, the service could be enabled and started before the new NGINX configuration is in place. The - watch declaration instructs Salt to restart the nginx service whenever the nginx_config file changes.
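
      Once these changes are pushed to GitHub and fetched by the master (covered below), you can ask Salt for a dry run that reports what would change on the minion without changing anything:

        sudo salt 'hugo-webserver' state.apply test=True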

      Build Hugo

      1. Append a build_script state to config.sls:

        hugo-webserver-salt-formula/hugo/config.sls
        
        build_script:
          file.managed:
            - name: {{ pillar['hugo_deployment_data']['home_dir'] }}/deploy.sh
            - source: salt://hugo/files/deploy.sh
            - user: {{ pillar['hugo_deployment_data']['user'] }}
            - group: {{ pillar['hugo_deployment_data']['group'] }}
            - mode: 0755
            - template: jinja
            - require:
              - user: hugo_user
          cmd.run:
            - name: ./deploy.sh
            - cwd: {{ pillar['hugo_deployment_data']['home_dir'] }}
            - runas: {{ pillar['hugo_deployment_data']['user'] }}
            - creates: {{ pillar['hugo_deployment_data']['nginx_document_root'] }}/{{ pillar['hugo_deployment_data']['site_repo_name'] }}/index.html
            - require:
              - file: build_script
              - cmd: hugo_site_repo
              - file: nginx_document_root

        This state uses more than one module. The first (file.managed) downloads the deploy.sh file from the Salt master and places it on the minion; this script is responsible for compiling your Hugo site files. The second (cmd.run) then executes that script. The first module is listed as a requirement of the second, along with the Git clone command and the creation of the document root directory. (A way to re-run this build manually is sketched after this list.)

        Note

        The - creates option in the second module ensures that Salt doesn’t rebuild Hugo if the state is re-applied to the minion.

      2. Create the deploy.sh script in files/:

        hugo-webserver-salt-formula/hugo/files/deploy.sh
        
        #!/bin/bash
        
        cd {{ pillar['hugo_deployment_data']['site_repo_name'] }}
        hugo --destination={{ pillar['hugo_deployment_data']['nginx_document_root'] }}/{{ pillar['hugo_deployment_data']['site_repo_name'] }}

        Hugo’s build function is called with NGINX’s document root as the destination for the built files.

      3. Update init.sls to include the new config.sls file:

        hugo-webserver-salt-formula/hugo/init.sls
        
        include:
          - hugo.install
          - hugo.config
          - hugo.service
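
      Because of the - creates guard, re-applying the state will not rebuild the site once index.html exists. If you ever need to re-run the build by hand, you can invoke the same script ad hoc from the master; a sketch, assuming the Pillar values used later in this guide (user hugo, home directory /home/hugo):

        sudo salt 'hugo-webserver' cmd.run './deploy.sh' cwd=/home/hugo runas=hugo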

      Push the Salt Formula Updates to GitHub

      At this point, your init.sls, install.sls, config.sls, and service.sls files should all reflect the edits from the previous steps.

      The files present in your Salt formula repository should be:

        
      hugo
      ├── config.sls
      ├── files
      │   ├── deploy.sh
      │   └── hugo_site
      ├── init.sls
      ├── install.sls
      └── service.sls
      
      
      1. Stage all the changes you made to your local Salt formula files in the previous steps and then commit the changes:

        cd ~/hugo-webserver-salt-formula
        git add .
        git commit -m "Deploy the Hugo site"
        
      2. Push the commit to your GitHub repository:

        git push origin master
        

      Create the Salt Pillar File

      1. Open /etc/salt/master on the Salt master in a text editor. Uncomment the pillar_roots section:

        /etc/salt/master
        
        pillar_roots:
          base:
            - /srv/pillar

        pillar_roots performs an analogous function to file_roots: it specifies where Pillar data is stored on the master’s filesystem.

      2. Restart Salt on the master to enable the changes in /etc/salt/master:

        sudo systemctl restart salt-master
        
      3. Create the /srv/pillar directory on the Salt master:

        sudo mkdir /srv/pillar
        
      4. Create an example-hugo-site.sls file in /srv/pillar to contain the Pillar data for the minion. This file uses the same YAML syntax as other state files. Replace the values for github_account and domain_name with your GitHub account and your site’s domain name:

        /srv/pillar/example-hugo-site.sls
        
        hugo_deployment_data:
          hugo_version: 0.49
          group: hugo
          user: hugo
          home_dir: /home/hugo
          github_account: your_github_user
          site_repo_name: example-hugo-site
          nginx_document_root: /var/www
          domain_name: yourdomain.com
      5. Create a top.sls file in /srv/pillar. Similar to the Top file in your state tree, the Pillar’s Top file maps Pillar data to minions:

        /srv/pillar/top.sls
        
        base:
          'hugo-webserver':
            - example-hugo-site
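
      After creating or changing Pillar files, you can push the new data to the minion and inspect what it received before running a full highstate:

        # Refresh Pillar data on the minion, then display everything it now has:
        sudo salt 'hugo-webserver' saltutil.refresh_pillar
        sudo salt 'hugo-webserver' pillar.items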

      Apply State Updates to the Minion

      On the Salt master, apply the new states to all minions:

      sudo salt '*' state.apply
      

      Note

      In this guide there is only one minion, but Salt can use shell-style globbing and regular expressions to match against minion IDs when you have more than one. For example, this command would run a highstate on all minions whose IDs begin with hugo:

      sudo salt 'hugo*' state.apply
      

      If no changes are made, try manually fetching the Salt formula updates from GitHub and then run the state.apply command again:

      sudo salt-run fileserver.update
      

      When the operation finishes, your Hugo site should now be visible at your domain.
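
      You can also confirm from the command line that NGINX is serving the built site; a minimal check, substituting your own domain:

        # Expect an HTTP 200 response with a Server: nginx header:
        curl -I http://example.com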

      Deploy Site Updates with Webhooks

      Your site is now deployed to production, but there is no automatic mechanism in place yet for updating the production server when you update your Hugo site’s content. To update the production server, your minion will need to:

      1. Pull the latest changes pushed to the master branch of your Hugo site repository on GitHub.

      2. Run the Hugo build process with the new content.

      The deploy.sh script can be altered to pull changes from GitHub. These script changes will be made in the Salt formula repository. Then, we’ll set up webhooks to notify the Salt minion that updates have been made to the Hugo site.

      Webhooks are HTTP POST requests specifically designed and sent by systems to communicate some kind of significant event. A webhook server listens for these requests and then takes some action when it receives one. For example, a GitHub repository can be configured to send webhook notifications whenever a push is made to the repository. This is the kind of notification we’ll configure, and the Salt minion will run a webhook server to receive them. Other event notifications can also be set up on GitHub.
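
      For reference, the pieces of a GitHub push notification that the trigger rules configured later in this guide rely on look roughly like this (abridged, not a complete payload):

        Request header:
          X-Hub-Signature: sha1=<HMAC-SHA1 of the request body, keyed with your webhook secret>

        Request body (JSON):
          { "ref": "refs/heads/master", ... }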

      Set Up a Webhook Server on the Salt Minion

      1. In your local Salt formula repository, append a new webhook_pkg state to your install.sls that installs the webhook server package by adnanh:

        hugo-webserver-salt-formula/hugo/install.sls
        
        webhook_pkg:
          pkg.installed:
            - name: webhook

        Note

        The webhook server written in Go by adnanh is a popular implementation of the concept, but it’s possible to write other HTTP servers that parse webhook payloads.

      2. Append two new components to your config.sls:

        hugo-webserver-salt-formula/hugo/config.sls
        
        webhook_systemd_unit:
          file.managed:
            - name: '/etc/systemd/system/webhook.service'
            - source: salt://hugo/files/webhook.service
            - user: root
            - group: root
            - mode: 0644
            - template: jinja
            - require:
              - pkg: webhook_pkg
          module.run:
            - name: service.systemctl_reload
            - onchanges:
              - file: webhook_systemd_unit
        
        webhook_config:
          file.managed:
            - name: '/etc/webhook.conf'
            - source: salt://hugo/files/webhook.conf
            - user: root
            - group: {{ pillar['hugo_deployment_data']['group'] }}
            - mode: 0640
            - template: jinja
            - require:
              - pkg: webhook_pkg
              - group: hugo_group

        The first state creates a systemd unit file for the webhook service. The second state creates a webhook configuration. The webhook server reads the configuration and generates a webhook URL from it.

      3. Create a webhook.service file in your repository’s files/ directory:

        hugo-webserver-salt-formula/hugo/files/webhook.service
        
        [Unit]
        Description=Small server for creating HTTP endpoints (hooks)
        Documentation=https://github.com/adnanh/webhook/
        
        [Service]
        User={{ pillar['hugo_deployment_data']['user'] }}
        ExecStart=/usr/bin/webhook -nopanic -hooks /etc/webhook.conf
        
        [Install]
        WantedBy=multi-user.target
      4. Create a webhook.conf file in your repository’s files/ directory:

        hugo-webserver-salt-formula/hugo/files/webhook.conf
        
        [
          {
            "id": "github_push",
            "execute-command": "{{ pillar['hugo_deployment_data']['home_dir'] }}/deploy.sh",
            "command-working-directory": "{{ pillar['hugo_deployment_data']['home_dir'] }}",
            "trigger-rule":
            {
              "and":
              [
                {
                  "match":
                  {
                    "type": "payload-hash-sha1",
                    "secret": "{{ pillar['hugo_deployment_data']['webhook_secret'] }}",
                    "parameter":
                    {
                      "source": "header",
                      "name": "X-Hub-Signature"
                    }
                  }
                },
                {
                  "match":
                  {
                    "type": "value",
                    "value": "refs/heads/master",
                    "parameter":
                    {
                      "source": "payload",
                      "name": "ref"
                    }
                  }
                }
              ]
            }
          }
        ]

        This configuration sets up a URL named http://example.com:9000/hooks/github_push, where the last component of the URL is derived from the value of the configuration’s id.

        Note

        The webhook server runs on port 9000 and places your webhooks inside a hooks/ directory by default.

        When a POST request is sent to the URL:

        • The webhook server checks if the header and payload data from the request satisfies the rules in the trigger-rule dictionary, which are:

          • That the HMAC-SHA1 signature of the request payload, computed with the server’s webhook secret, matches the signature sent in the request’s X-Hub-Signature header. This prevents people who don’t know your webhook secret from triggering the webhook’s action.
          • The ref parameter in the payload matches refs/heads/master. This ensures that only pushes to the master branch trigger the action.
        • If the rules are satisfied, then the command listed in execute-command is run, which is the deploy.sh script.

        Note

        Further documentation on the webhook configuration options can be reviewed on the project’s GitHub repository.
      5. Append a new webhook_service state to your service.sls that enables and starts the webhook server:

        hugo-webserver-salt-formula/hugo/service.sls
        
        webhook_service:
          service.running:
            - name: webhook
            - enable: True
            - watch:
              - file: webhook_config
              - module: webhook_systemd_unit
      6. Update the deploy.sh script so that it pulls changes from master before building the site:

        hugo-webserver-salt-formula/hugo/files/deploy.sh
        
        #!/bin/bash
        
        cd {{ pillar['hugo_deployment_data']['site_repo_name'] }}
        git pull origin master
        hugo --destination={{ pillar['hugo_deployment_data']['nginx_document_root'] }}/{{ pillar['hugo_deployment_data']['site_repo_name'] }}
      7. Your init.sls (unchanged), install.sls, config.sls, and service.sls files should now reflect all of the updates above. Save the changes made to your Salt files, then commit and push them to GitHub:

        cd ~/hugo-webserver-salt-formula
        git add .
        git commit -m "Webhook server states"
        git push origin master
        
      8. On the Salt master, add a webhook_secret to the example-hugo-site.sls Pillar. Your secret should be a complex, random alphanumeric string.

        /srv/pillar/example-hugo-site.sls
        
        hugo_deployment_data:
          # [...]
          webhook_secret: your_webhook_secret
      9. From the Salt master, apply the formula updates to the minion:

        sudo salt-run fileserver.update
        sudo salt 'hugo-webserver' state.apply
        
      10. Your webhook server should now be running on the minion. If you run a curl against it, you should see:

        curl http://example.com:9000/hooks/github_push
        
          
        Hook rules were not satisfied.⏎
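
      If the curl test doesn't respond at all, confirm that the webhook service is actually running on the minion; for example, from the Salt master:

        sudo salt 'hugo-webserver' service.status webhook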
        
        

      Configure a Webhook on GitHub

      1. Visit your example Hugo site repository on GitHub and navigate to the Webhooks section of the Settings tab. Click on the Add webhook button:

        GitHub - Add Webhook Button

      2. Fill in the form:

        • Enter http://example.com:9000/hooks/github_push for the payload URL (replacing example.com with your own domain).

        • Select application/json for the content type.

        • Paste in the webhook secret that you previously added to Salt Pillar.

        The webhook is configured to notify on push events by default. Keep this option selected.

        GitHub - New Webhook Configuration

      3. Click the green Add webhook button to complete the setup.

      Update the Hugo Site

      1. In your local Hugo site repository, create a new post using Hugo’s archetypes feature:

        hugo new post/test-post.md
        
      2. This command creates a new partially filled in markdown document in content/post/. Open this file in your editor, remove the draft: true line from the frontmatter, and add some body text:

        example-hugo-site/content/post/test-post.md
        
        ---
        title: "Test Post"
        date: 2018-10-19T11:39:15-04:00
        ---
        
        Test post body text
      3. If you run hugo server in the repository directory, you can see the new post:

        Hugo Home Page - Test Post

      4. Commit and push the new post to GitHub:

        cd ~/example-hugo-site
        git add .
        git commit -m "Test post"
        git push origin master
        
      5. Visit your domain in your browser; your test post should automatically appear.

        Note

        If your post does not appear, review the Recent Deliveries section at the bottom of your webhook configuration page on GitHub:

        GitHub Webhook - Recent Deliveries

        If you click on a delivery, full information about the request headers, the payload, and the server response is shown, which may provide some troubleshooting clues. Editing the webhook.service file so that it starts the webhook server in verbose mode may also help (see the example below).
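
        As a concrete (hypothetical) example of that last suggestion, you could add webhook's -verbose flag to the ExecStart line in your formula's files/webhook.service, push and re-apply the state, and then follow the service's logs on the minion:

          # files/webhook.service (ExecStart line only):
          ExecStart=/usr/bin/webhook -verbose -nopanic -hooks /etc/webhook.conf

          # On the minion, follow the webhook service's logs:
          sudo journalctl -u webhook -f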

      Next Steps

      The current Salt configuration can be used as a foundation for more complex deployments:

      • Host multiple Hugo sites by updating Pillar with further GitHub repositories.

      • Host different kinds of static sites by changing the Salt formula to support them.

      • Load balance your site by creating more minions and applying the same Pillar data and Salt states to them. Then, set up a NodeBalancer to direct traffic to the minions.

      • Set up a separate development branch and development server with Salt’s environments feature.
