
      How To Automate Deployments to DigitalOcean Kubernetes with CircleCI


      The author selected the Tech Education Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Having an automated deployment process is a requirement for a scalable and resilient application, and GitOps, or Git-based DevOps, has rapidly become a popular method of organizing CI/CD with a Git repository as a “single source of truth.” Tools like CircleCI integrate with your GitHub repository, allowing you to test and deploy your code automatically every time you make a change to your repository. When this kind of CI/CD is combined with the flexibility of Kubernetes infrastructure, you can build an application that scales easily with changing demand.

      In this article you will use CircleCI to deploy a sample application to a DigitalOcean Kubernetes (DOKS) cluster. After reading this tutorial, you’ll be able to apply these same techniques to deploy other CI/CD tools that are buildable as Docker images.

      Prerequisites

      To follow this tutorial, you’ll need to have:

      For this tutorial, you will use Kubernetes version 1.13.5 and kubectl version 1.10.7.

      Step 1 — Creating Your DigitalOcean Kubernetes Cluster

      Note: You can skip this section if you already have a running DigitalOcean Kubernetes cluster.

      In this first step, you will create the DigitalOcean Kubernetes (DOKS) cluster from which you will deploy your sample application. The kubectl commands executed from your local machine will change or retrieve information directly from the Kubernetes cluster.

      Go to the Kubernetes page on your DigitalOcean account.

      Click Create a Kubernetes cluster, or click the green Create button at the top right of the page and select Clusters from the dropdown menu.

[Creating a Kubernetes Cluster on DigitalOcean](https://assets.digitalocean.com/articles/cart64920/CreateDOKS.gif)

The next page is where you are going to specify the details of your cluster. On Select a Kubernetes version pick version 1.13.5-do.0. If this one is not available, choose a more recent one.

      For Choose a datacenter region, choose the region closest to you. This tutorial will use San Francisco – 2.

You then have the option to build your node pool(s). On Kubernetes, a node is a worker machine which contains the services necessary to run pods. On DigitalOcean, each node is a Droplet. Your node pool will consist of a single Standard node. Select the 2GB/1vCPU configuration and set the number of nodes to 1.

You can add extra tags if you want; this can be useful if you plan to use the DigitalOcean API or just to better organize your node pools.

On Choose a name, for this tutorial, use kubernetes-deployment-tutorial. This will make it easier to follow along in the next sections. Finally, click the green Create Cluster button to create your cluster.

After cluster creation, the UI will show a Download Config File button. This is the file you will use to authenticate the kubectl commands you run against your cluster. Download it to the machine where you will run kubectl.

      The default way to use that file is to always pass the --kubeconfig flag and the path to it on all commands you run with kubectl. For example, if you downloaded the config file to Desktop, you would run the kubectl get pods command like this:

      • kubectl --kubeconfig ~/Desktop/kubernetes-deployment-tutorial-kubeconfig.yaml get pods

      This would yield the following output:

      Output

      No resources found.

      This means you accessed your cluster. The No resources found. message is correct, since you don’t have any pods on your cluster.

If you are not maintaining any other Kubernetes clusters, you can copy the kubeconfig file to a folder in your home directory called .kube. Create that directory in case it does not exist:

• mkdir -p ~/.kube

      Then copy the config file into the newly created .kube directory and rename it config:

      • cp current_kubernetes-deployment-tutorial-kubeconfig.yaml_file_path ~/.kube/config

The config file should now have the path ~/.kube/config. This is the file that kubectl reads by default when running any command, so there is no need to pass --kubeconfig anymore. Run the following:

• kubectl get pods

      You will receive the following output:

      Output

      No resources found.

Now access the cluster with the following:

• kubectl get nodes

      You will receive the list of nodes on your cluster. The output will be similar to this:

      Output

NAME                                    STATUS   ROLES    AGE   VERSION
kubernetes-deployment-tutorial-1-7pto   Ready    <none>   1h    v1.13.5

      In this tutorial you are going to use the default namespace for all kubectl commands and manifest files, which are files that define the workload and operating parameters of work in Kubernetes. Namespaces are like virtual clusters inside your single physical cluster. You can change to any other namespace you want; just make sure to always pass it using the --namespace flag to kubectl, and/or specifying it on the Kubernetes manifests metadata field. They are a great way to organize the deployments of your team and their running environments; read more about them in the official Kubernetes overview on Namespaces.
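
For example, if you had a namespace called staging (hypothetical; this tutorial only uses default), you would target it either on the command line:

• kubectl get pods --namespace staging

or in a manifest's metadata field:

metadata:
  name: some-resource
  namespace: staging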

      By finishing this step you are now able to run kubectl against your cluster. In the next step, you will create the local Git repository you are going to use to house your sample application.

      Step 2 — Creating the Local Git Repository

      You are now going to structure your sample deployment in a local Git repository. You will also create some Kubernetes manifests that will be global to all deployments you are going to do on your cluster.

      Note: This tutorial has been tested on Ubuntu 18.04, and the individual commands are styled to match this OS. However, most of the commands here can be applied to other Linux distributions with little to no change needed, and commands like kubectl are platform-agnostic.

      First, create a new Git repository locally that you will push to GitHub later on. Create an empty folder called do-sample-app in your home directory and cd into it:

      • mkdir ~/do-sample-app
      • cd ~/do-sample-app

      Now create a new Git repository in this folder with the following command:

      Inside this repository, create an empty folder called kube:

      • mkdir ~/do-sample-app/kube/

      This will be the location where you are going to store the Kubernetes resources manifests related to the sample application that you will deploy to your cluster.

      Now, create another folder called kube-general, but this time outside of the Git repository you just created. Make it inside your home directory:

      This folder is outside of your Git repository because it will be used to store manifests that are not specific to a single deployment on your cluster, but common to multiple ones. This will allow you to reuse these general manifests for different deployments.

With your folders created and the Git repository of your sample application in place, it's time to set up the authentication and authorization of your DOKS cluster.

      Step 3 — Creating a Service Account

It's generally not recommended to use the default admin user to authenticate from other services into your Kubernetes cluster. If the keys stored on the external provider were compromised, your whole cluster would be compromised.

      Instead you are going to use a single Service Account with a specific Role, which is all part of the RBAC Kubernetes authorization model.

This authorization model is based on Roles and Resources. You start by creating a Service Account, which is basically a user on your cluster; then you create a Role, in which you specify what resources it has access to on your cluster. Finally, you create a Role Binding, which makes the connection between the Role and the Service Account created previously, granting the Service Account access to all resources the Role has access to.

      The first Kubernetes resource you are going to create is the Service Account for your CI/CD user, which this tutorial will name cicd.

      Create the file cicd-service-account.yml inside the ~/kube-general folder, and open it with your favorite text editor:

      • nano ~/kube-general/cicd-service-account.yml

      Write the following content on it:

      ~/kube-general/cicd-service-account.yml

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: cicd
        namespace: default
      

      This is a YAML file; all Kubernetes resources are represented using one. In this case you are saying this resource is from Kubernetes API version v1 (internally kubectl creates resources by calling Kubernetes HTTP APIs), and it is a ServiceAccount.

      The metadata field is used to add more information about this resource. In this case, you are giving this ServiceAccount the name cicd, and creating it on the default namespace.

      You can now create this Service Account on your cluster by running kubectl apply, like the following:

      • kubectl apply -f ~/kube-general/

You will receive output similar to the following:

      Output

      serviceaccount/cicd created
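
If you want to double-check beyond the apply output, you can list the resource directly:

• kubectl get serviceaccount cicd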

To make sure your Service Account is working, try to log in to your cluster using it. To do that you first need to obtain its access token and store it in an environment variable. Every Service Account has an access token which Kubernetes stores as a Secret.

      You can retrieve this secret using the following command:

      • TOKEN=$(kubectl get secret $(kubectl get secret | grep cicd-token | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode)

Some explanation of what this command is doing:

      $(kubectl get secret | grep cicd-token | awk '{print $1}')
      

This is used to retrieve the name of the secret related to your cicd Service Account. kubectl get secret returns the list of secrets on the default namespace, then you use grep to search for the line related to your cicd Service Account. awk '{print $1}' then returns the name, since it is the first field of the single line returned by grep.

      kubectl get secret preceding-command -o jsonpath='{.data.token}' | base64 --decode
      

      This will retrieve only the secret for your Service Account token. You then access the token field using jsonpath, and pass the result to base64 --decode. This is necessary because the token is stored as a Base64 string. The token itself is a JSON Web Token.
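
If the nested one-liner is hard to parse, here is an equivalent two-step version; the SECRET_NAME variable is introduced here purely for illustration:

• SECRET_NAME=$(kubectl get secret | grep cicd-token | awk '{print $1}')
• TOKEN=$(kubectl get secret "$SECRET_NAME" -o jsonpath='{.data.token}' | base64 --decode)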

You can now try to retrieve your pods with the cicd Service Account. Run the following command, replacing server-from-kubeconfig-file with the server URL that can be found after server: in ~/.kube/config. This command will give a specific error that you will learn about later in this tutorial:

      • kubectl --insecure-skip-tls-verify --kubeconfig="/dev/null" --server=server-from-kubeconfig-file --token=$TOKEN get pods

      --insecure-skip-tls-verify skips the step of verifying the certificate of the server, since you are just testing and do not need to verify this. --kubeconfig="/dev/null" is to make sure kubectl does not read your config file and credentials but instead uses the token provided.

      The output should be similar to this:

      Output

      Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:cicd" cannot list resource "pods" in API group "" in the namespace "default"

This is an error, but it shows that the token worked. The error you received is about your Service Account not having the necessary authorization to list the pods resource, but you were able to access the server itself. If your token had not worked, the error would have been the following one:

      Output

      error: You must be logged in to the server (Unauthorized)

      Now that the authentication was a success, the next step is to fix the authorization error for the Service Account. You will do this by creating a role with the necessary permissions and binding it to your Service Account.

      Step 4 — Creating the Role and the Role Binding

      Kubernetes has two ways to define roles: using a Role or a ClusterRole resource. The difference between the former and the latter is that the first one applies to a single namespace, while the other is valid for the whole cluster.

As you are using a single namespace in this tutorial, you will use a Role.

      Create the file ~/kube-general/cicd-role.yml and open it with your favorite text editor:

      • nano ~/kube-general/cicd-role.yml

      The basic idea is to grant access to do everything related to most Kubernetes resources in the default namespace. Your Role would look like this:

      ~/kube-general/cicd-role.yml

      kind: Role
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: cicd
        namespace: default
      rules:
        - apiGroups: ["", "apps", "batch", "extensions"]
          resources: ["deployments", "services", "replicasets", "pods", "jobs", "cronjobs"]
          verbs: ["*"]
      

      This YAML has some similarities with the one you created previously, but here you are saying this resource is a Role, and it's from the Kubernetes API rbac.authorization.k8s.io/v1. You are naming your role cicd, and creating it on the same namespace you created your ServiceAccount, the default one.

Then you have the rules field, which is a list of resources this role has access to. In Kubernetes, resources are defined based on the API group they belong to, the resource kind itself, and what actions you can perform on them, which is represented by a verb. Those verbs are similar to HTTP ones.

In this case you are saying that your Role is allowed to do everything, *, on the following resources: deployments, services, replicasets, pods, jobs, and cronjobs. This also applies to those resources belonging to the following API groups: "" (empty string), apps, batch, and extensions. The empty string means the root API group. If you use apiVersion: v1 when creating a resource, it means this resource is part of this API group.

      A Role by itself does nothing; you must also create a RoleBinding, which binds a Role to something, in this case, a ServiceAccount.

      Create the file ~/kube-general/cicd-role-binding.yml and open it:

      • nano ~/kube-general/cicd-role-binding.yml

      Add the following lines to the file:

      ~/kube-general/cicd-role-binding.yml

      kind: RoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: cicd
        namespace: default
      subjects:
        - kind: ServiceAccount
          name: cicd
          namespace: default
      roleRef:
        kind: Role
        name: cicd
        apiGroup: rbac.authorization.k8s.io
      

      Your RoleBinding has some specific fields that have not yet been covered in this tutorial. roleRef is the Role you want to bind to something; in this case it is the cicd role you created earlier. subjects is the list of resources you are binding your role to; in this case it's a single ServiceAccount called cicd.

      Note: If you had used a ClusterRole, you would have to create a ClusterRoleBinding instead of a RoleBinding. The file would be almost the same. The only difference would be that it would have no namespace field inside the metadata.

      With those files created you will be able to use kubectl apply again. Create those new resources on your Kubernetes cluster by running the following command:

      • kubectl apply -f ~/kube-general/

      You will receive output similar to the following:

      Output

rolebinding.rbac.authorization.k8s.io/cicd created
role.rbac.authorization.k8s.io/cicd created
serviceaccount/cicd created

      Now, try the command you ran previously:

      • kubectl --insecure-skip-tls-verify --kubeconfig="/dev/null" --server=server-from-kubeconfig-file --token=$TOKEN get pods

      Since you have no pods, this will yield the following output:

      Output

      No resources found.
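
As an extra check, kubectl can tell you directly whether a given user may perform an action. Run the following with your regular admin kubeconfig, which is allowed to impersonate other users; it should print yes:

• kubectl auth can-i list pods --as=system:serviceaccount:default:cicd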

      In this step, you gave the Service Account you are going to use on CircleCI the necessary authorization to do meaningful actions on your cluster like listing, creating, and updating resources. Now it's time to create your sample application.

      Step 5 — Creating Your Sample Application

Note: All commands and files created from now on will start from the folder ~/do-sample-app you created earlier. This is because you are now creating files specific to the sample application that you are going to deploy to your cluster.

The Kubernetes Deployment you are going to create will use the Nginx image as a base, and your application will be a simple static HTML page. This is a great start because it allows you to test if your deployment works by serving a simple HTML page directly from Nginx. As you will see later on, you can redirect all traffic coming to a local address:port to your deployment on your cluster to test if it's working.

      Inside the repository you set up earlier, create a new Dockerfile file and open it with your text editor of choice:

      • nano ~/do-sample-app/Dockerfile

      Write the following on it:

      ~/do-sample-app/Dockerfile

      FROM nginx:1.14
      
      COPY index.html /usr/share/nginx/html/index.html
      

This tells Docker to build the application image from the nginx:1.14 base image, copying your index.html into the location Nginx serves by default.

      Now create a new index.html file and open it:

      • nano ~/do-sample-app/index.html

      Write the following HTML content:

      ~/do-sample-app/index.html

      <!DOCTYPE html>
      <title>DigitalOcean</title>
      <body>
        Kubernetes Sample Application
      </body>
      

      This HTML will display a simple message that will let you know if your application is working.

      You can test if the image is correct by building and then running it.

First, build the image with the following command, replacing dockerhub-username with your own Docker Hub username. You must specify your username here so that the push to Docker Hub later on will work:

      • docker build ~/do-sample-app/ -t dockerhub-username/do-kubernetes-sample-app

Now run the image. Use the following command, which starts your image and forwards any local traffic on port 8080 to port 80 inside the container, the port Nginx listens on by default:

      • docker run --rm -it -p 8080:80 dockerhub-username/do-kubernetes-sample-app

The command prompt will stop being interactive while the command is running. Instead you will see the Nginx access logs. If you open localhost:8080 on any browser it should show an HTML page with the content of ~/do-sample-app/index.html. In case you don't have a browser available, you can open a new terminal window and use the following curl command to fetch the HTML from the webpage:

• curl localhost:8080

      You will receive the following output:

      Output

<!DOCTYPE html>
<title>DigitalOcean</title>
<body>
  Kubernetes Sample Application
</body>

Stop the container (CTRL + C on the terminal where it's running), and submit this image to your Docker Hub account. To do this, first log in to Docker Hub:

• docker login

      Fill in the required information about your Docker Hub account, then push the image with the following command (don't forget to replace the dockerhub-username with your own):

      • docker push dockerhub-username/do-kubernetes-sample-app

      You have now pushed your sample application image to your Docker Hub account. In the next step, you will create a Deployment on your DOKS cluster from this image.

      Step 6 — Creating the Kubernetes Deployment and Service

      With your Docker image created and working, you will now create a manifest telling Kubernetes how to create a Deployment from it on your cluster.

      Create the YAML deployment file ~/do-sample-app/kube/do-sample-deployment.yml and open it with your text editor:

      • nano ~/do-sample-app/kube/do-sample-deployment.yml

      Write the following content on the file, making sure to replace dockerhub-username with your Docker Hub username:

      ~/do-sample-app/kube/do-sample-deployment.yml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: do-kubernetes-sample-app
        namespace: default
        labels:
          app: do-kubernetes-sample-app
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: do-kubernetes-sample-app
        template:
          metadata:
            labels:
              app: do-kubernetes-sample-app
          spec:
            containers:
              - name: do-kubernetes-sample-app
                image: dockerhub-username/do-kubernetes-sample-app:latest
                ports:
                  - containerPort: 80
                    name: http
      

Kubernetes deployments are from the API group apps, so the apiVersion of your manifest is set to apps/v1. In metadata you added a new field you have not used previously, called metadata.labels. This is useful to organize your deployments. The field spec represents the behavior specification of your deployment. A deployment is responsible for managing one or more pods; in this case it will manage a single replica, as set by the spec.replicas field. That is, it's going to create and manage a single pod.

To manage pods, your deployment must know which pods it's responsible for. The spec.selector field is the one that gives it that information. In this case, the deployment will be responsible for all pods with the label app: do-kubernetes-sample-app. The spec.template field contains the details of the pod this deployment will create. Inside the template you also have a spec.template.metadata field. The labels inside this field must match the ones used in spec.selector. spec.template.spec is the specification of the pod itself. In this case it contains a single container, called do-kubernetes-sample-app. The image of that container is the image you built previously and pushed to Docker Hub.

      This YAML file also tells Kubernetes that this container exposes the port 80, and gives this port the name http.

      To access the port exposed by your Deployment, create a Service. Make a file named ~/do-sample-app/kube/do-sample-service.yml and open it with your favorite editor:

      • nano ~/do-sample-app/kube/do-sample-service.yml

      Next, add the following lines to the file:

      ~/do-sample-app/kube/do-sample-service.yml

      apiVersion: v1
      kind: Service
      metadata:
        name: do-kubernetes-sample-app
        namespace: default
        labels:
          app: do-kubernetes-sample-app
      spec:
        type: ClusterIP
        ports:
          - port: 80
            targetPort: http
            name: http
        selector:
          app: do-kubernetes-sample-app
      

      This file gives your Service the same labels used on your deployment. This is not required, but it helps to organize your applications on Kubernetes.

The Service resource also has a spec field. The spec.type field is responsible for the behavior of the service. In this case it's a ClusterIP, which means the service is exposed on a cluster-internal IP and is only reachable from within your cluster. This is the default spec.type for services. spec.selector is the label selector criteria that should be used when picking the pods to be exposed by this service. Since your pod has the label app: do-kubernetes-sample-app, you used it here. spec.ports are the ports exposed by the pod's containers that you want to expose from this service. Your pod has a single container which exposes port 80, named http, so you are using it here as targetPort. The service exposes that port on port 80 too, with the same name, but you could have used a different port/name combination than the one from the container.
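
Once you apply these manifests with the next command, you can verify how the Service resolved its selector. Both of the following are standard kubectl commands; the Endpoints field in the describe output lists the pod IPs matched by spec.selector:

• kubectl get service do-kubernetes-sample-app
• kubectl describe service do-kubernetes-sample-app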

      With your Service and Deployment manifest files created, you can now create those resources on your Kubernetes cluster using kubectl:

      • kubectl apply -f ~/do-sample-app/kube/

      You will receive the following output:

      Output

deployment.apps/do-kubernetes-sample-app created
service/do-kubernetes-sample-app created

      Test if this is working by forwarding one port on your machine to the port that the service is exposing inside your Kubernetes cluster. You can do that using kubectl port-forward:

      • kubectl port-forward $(kubectl get pod --selector="app=do-kubernetes-sample-app" --output jsonpath='{.items[0].metadata.name}') 8080:80

      The subshell command $(kubectl get pod --selector="app=do-kubernetes-sample-app" --output jsonpath='{.items[0].metadata.name}') retrieves the name of the pod matching the tag you used. Otherwise you could have retrieved it from the list of pods by using kubectl get pods.

      After you run port-forward, the shell will stop being interactive, and will instead output the requests redirected to your cluster:

      Output

Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

Opening localhost:8080 on any browser should render the same page you saw when you ran the container locally, but it's now coming from your Kubernetes cluster! As before, you can also use curl in a new terminal window to check if it's working:

• curl localhost:8080

      You will receive the following output:

      Output

<!DOCTYPE html>
<title>DigitalOcean</title>
<body>
  Kubernetes Sample Application
</body>

      Next, it's time to push all the files you created to your GitHub repository. To do this you must first create a repository on GitHub called digital-ocean-kubernetes-deploy.

      In order to keep this repository simple for demonstration purposes, do not initialize the new repository with a README, license, or .gitignore file when asked on the GitHub UI. You can add these files later on.

      With the repository created, point your local repository to the one on GitHub. To do this, press CTRL + C to stop kubectl port-forward and get the command line back, then run the following commands to add a new remote called origin:

      • cd ~/do-sample-app/
      • git remote add origin https://github.com/your-github-account-username/digital-ocean-kubernetes-deploy.git

      There should be no output from the preceding command.

Next, commit all the files you created up to now to the GitHub repository. First, add the files:

• git add --all

      Next, commit the files to your repository, with a commit message in quotation marks:

      • git commit -m "initial commit"

      This will yield output similar to the following:

      Output

[master (root-commit) db321ad] initial commit
 4 files changed, 47 insertions(+)
 create mode 100644 Dockerfile
 create mode 100644 index.html
 create mode 100644 kube/do-sample-deployment.yml
 create mode 100644 kube/do-sample-service.yml

      Finally, push the files to GitHub:

      • git push -u origin master

      You will be prompted for your username and password. Once you have entered this, you will see output like this:

      Output

Counting objects: 7, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (7/7), 907 bytes | 0 bytes/s, done.
Total 7 (delta 0), reused 0 (delta 0)
To github.com:your-github-account-username/digital-ocean-kubernetes-deploy.git
 * [new branch]      master -> master
Branch master set up to track remote branch master from origin.

      If you go to your GitHub repository page you will now see all the files there. With your project up on GitHub, you can now set up CircleCI as your CI/CD tool.

      Step 7 — Configuring CircleCI

      For this tutorial, you will use CircleCI to automate deployments of your application whenever the code is updated, so you will need to log in to CircleCI using your GitHub account and set up your repository.

      First, go to their homepage https://circleci.com, and press Sign Up.

[CircleCI home page]

      You are using GitHub, so click the green Sign Up with GitHub button.

      CircleCI will redirect to an authorization page on GitHub. CircleCI needs some permissions on your account to be able to start building your projects. This allows CircleCI to obtain your email, deploy keys and permission to create hooks on your repositories, and add SSH keys to your account. If you need more information on what CircleCI is going to do with your data, check their documentation about GitHub integration.

[CircleCI GitHub authorization page]

      After authorizing CircleCI you will be redirected to their dashboard.

[CircleCI project dashboard]

      Next, set up your GitHub repository in CircleCI. Click on Set Up New Projects from the CircleCI Dashboard, or as a shortcut, open the following link changing the highlighted text with your own GitHub username: https://circleci.com/setup-project/gh/your-github-username/digital-ocean-kubernetes-deploy.

      After that press Start Building. Do not create a config file in your repository just yet, and don't worry if the first build fails.

[CircleCI Start Building page]

      Next, specify some environment variables in the CircleCI settings. You can find the settings of the project by clicking on the small button with a cog icon on the top right section of the page then selecting Environment Variables, or you can go directly to the environment variables page by using the following URL (remember to fill in your username): https://circleci.com/gh/your-github-username/digital-ocean-kubernetes-deploy/edit#env-vars. Press Add Variable to create new environment variables.

      First, add two environment variables called DOCKERHUB_USERNAME and DOCKERHUB_PASS which will be needed later on to push the image to Docker Hub. Set the values to your Docker Hub username and password, respectively.

      Then add three more: KUBERNETES_TOKEN, KUBERNETES_SERVER, and KUBERNETES_CLUSTER_CERTIFICATE.

      The value of KUBERNETES_TOKEN will be the value of the local environment variable you used earlier to authenticate on your Kubernetes cluster using your Service Account user. If you have closed the terminal, you can always run the following command to retrieve it again:

      • kubectl get secret $(kubectl get secret | grep cicd-token | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode

      KUBERNETES_SERVER will be the string you passed as the --server flag to kubectl when you logged in with your cicd Service Account. You can find this after server: in the ~/.kube/config file, or in the file kubernetes-deployment-tutorial-kubeconfig.yaml downloaded from the DigitalOcean dashboard when you made the initial setup of your Kubernetes cluster.

      KUBERNETES_CLUSTER_CERTIFICATE should also be available on your ~/.kube/config file. It's the certificate-authority-data field on the clusters item related to your cluster. It should be a long string; make sure to copy all of it.
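
If you would rather extract this value on the command line than copy it by hand, here is a small sketch, assuming ~/.kube/config contains only the cluster from this tutorial (with multiple clusters, pick the right entry manually):

• grep certificate-authority-data ~/.kube/config | awk '{print $2}'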

      Those environment variables must be defined here because most of them contain sensitive information, and it is not secure to place them directly on the CircleCI YAML config file.

      With CircleCI listening for changes on your repository, and the environment variables configured, it's time to create the configuration file.

      Make a directory called .circleci inside your sample application repository:

      • mkdir ~/do-sample-app/.circleci/

      Inside this directory, create a file named config.yml and open it with your favorite editor:

      • nano ~/do-sample-app/.circleci/config.yml

      Add the following content to the file, making sure to replace dockerhub-username with your Docker Hub username:

      ~/do-sample-app/.circleci/config.yml

      version: 2.1
      jobs:
        build:
          docker:
            - image: circleci/buildpack-deps:stretch
          environment:
            IMAGE_NAME: dockerhub-username/do-kubernetes-sample-app
          working_directory: ~/app
          steps:
            - checkout
            - setup_remote_docker
            - run:
                name: Build Docker image
                command: |
                  docker build -t $IMAGE_NAME:latest .
            - run:
                name: Push Docker Image
                command: |
                  echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
                  docker push $IMAGE_NAME:latest
      workflows:
        version: 2
        build-master:
          jobs:
            - build:
                filters:
                  branches:
                    only: master
      

      This sets up a Workflow with a single job, called build, that runs for every commit to the master branch. This job is using the image circleci/buildpack-deps:stretch to run its steps, which is an image from CircleCI based on the official buildpack-deps Docker image, but with some extra tools installed, like Docker binaries themselves.

The build job has four steps:

      • checkout retrieves the code from GitHub.
• setup_remote_docker sets up a remote, isolated environment for each build. This is required before you use any docker command inside a job step: because the steps themselves run inside a Docker container, setup_remote_docker allocates a separate machine to run the Docker commands on.
      • The first run step builds the image, as you did previously locally. For that you are using the environment variable you declared in environment:, IMAGE_NAME (remember to change the highlighted section with your own information).
• The last run step pushes the image to Docker Hub, using the environment variables you configured in the project settings to authenticate.

      Commit the new file to your repository and push the changes upstream:

      • cd ~/do-sample-app/
      • git add .circleci/
      • git commit -m "add CircleCI config"
      • git push

      This will trigger a new build on CircleCI. The CircleCI workflow is going to correctly build and push your image to Docker Hub.

[CircleCI build page with a successful build]

      Now that you have created and tested your CircleCI workflow, you can set your DOKS cluster to retrieve the up-to-date image from Docker Hub and deploy it automatically when changes are made.

      Step 8 — Updating the Deployment on the Kubernetes Cluster

      Now that your application image is being built and sent to Docker Hub every time you push changes to the master branch on GitHub, it's time to update your deployment on your Kubernetes cluster so that it retrieves the new image and uses it as a base for deployment.

To do that, first fix one issue with your deployment: it's currently depending on an image with the latest tag. This tag does not tell us which version of the image you are using. You cannot easily lock your deployment to that tag because it's overwritten every time you push a new image to Docker Hub, and by using it like that you lose one of the best things about having containerized applications: reproducibility.

You can read more about that in this article about why depending on the Docker latest tag is an anti-pattern.

      To correct this, you first must make some changes to your Push Docker Image build step in the ~/do-sample-app/.circleci/config.yml file. Open up the file:

      • nano ~/do-sample-app/.circleci/config.yml

      Then add the highlighted lines to your Push Docker Image step:

      ~/do-sample-app/.circleci/config.yml:16-22

      ...
            - run:
                name: Push Docker Image
                command: |
                  echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
                  docker tag $IMAGE_NAME:latest $IMAGE_NAME:$CIRCLE_SHA1
                  docker push $IMAGE_NAME:latest
                  docker push $IMAGE_NAME:$CIRCLE_SHA1
      ...
      

      Save and exit the file.

      CircleCI has some special environment variables set by default. One of them is CIRCLE_SHA1, which contains the hash of the commit it's building. The changes you made to ~/do-sample-app/.circleci/config.yml will use this environment variable to tag your image with the commit it was built from, always tagging the most recent build with the latest tag. That way, you always have specific images available, without overwriting them when you push something new to your repository.
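
To see what this tagging means in practice, here is a hedged local simulation of the same two commands, using abc1234 as a stand-in for a real commit hash (CIRCLE_SHA1 is only defined inside CircleCI jobs):

• docker tag dockerhub-username/do-kubernetes-sample-app:latest dockerhub-username/do-kubernetes-sample-app:abc1234
• docker push dockerhub-username/do-kubernetes-sample-app:abc1234

After this, both the latest tag and the abc1234 tag point to the same image on Docker Hub, but abc1234 will never be overwritten by later builds.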

Next, change your deployment manifest file to use that tag. This would be simple if inside ~/do-sample-app/kube/do-sample-deployment.yml you could set your image as dockerhub-username/do-kubernetes-sample-app:$COMMIT_SHA1, but kubectl doesn't do variable substitution inside the manifests when you use kubectl apply. To account for this, you can use envsubst. envsubst is a CLI tool, part of the GNU gettext project. It allows you to pass some text to it, and if it finds any variable inside the text that has a matching environment variable, it's replaced by the respective value. The resulting text is then returned as its output.

      To use this, you will create a simple bash script which will be responsible for your deployment. Make a new folder called scripts inside ~/do-sample-app/:

      • mkdir ~/do-sample-app/scripts/

      Inside that folder create a new bash script called ci-deploy.sh and open it with your favorite text editor:

      • nano ~/do-sample-app/scripts/ci-deploy.sh

      Inside it write the following bash script:

      ~/do-sample-app/scripts/ci-deploy.sh

#!/bin/bash
# exit script when any command ran here returns with non-zero exit code
set -e

COMMIT_SHA1=$CIRCLE_SHA1

# We must export it so it's available for envsubst
export COMMIT_SHA1=$COMMIT_SHA1

# since the only way for envsubst to work on files is using input/output redirection,
#  it's not possible to do in-place substitution, so we need to save the output to another file
#  and overwrite the original with that one.
envsubst <./kube/do-sample-deployment.yml >./kube/do-sample-deployment.yml.out
mv ./kube/do-sample-deployment.yml.out ./kube/do-sample-deployment.yml

echo "$KUBERNETES_CLUSTER_CERTIFICATE" | base64 --decode > cert.crt

./kubectl \
  --kubeconfig=/dev/null \
  --server=$KUBERNETES_SERVER \
  --certificate-authority=cert.crt \
  --token=$KUBERNETES_TOKEN \
  apply -f ./kube/
      

      Let's go through this script, using the comments in the file. First, there is the following:

      set -e
      

      This line makes sure any failed command stops the execution of the bash script. That way if one command fails, the next ones are not executed.

      COMMIT_SHA1=$CIRCLE_SHA1
      export COMMIT_SHA1=$COMMIT_SHA1
      

      These lines export the CircleCI $CIRCLE_SHA1 environment variable with a new name. If you had just declared the variable without exporting it using export, it would not be visible for the envsubst command.

      envsubst <./kube/do-sample-deployment.yml >./kube/do-sample-deployment.yml.out
      mv ./kube/do-sample-deployment.yml.out ./kube/do-sample-deployment.yml
      

      envsubst cannot do in-place substitution. That is, it cannot read the content of a file, replace the variables with their respective values, and write the output back to the same file. Therefore, you will redirect the output to another file and then overwrite the original file with the new one.
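
You can see envsubst's behavior in isolation with a throwaway example (not part of the deploy script):

• export COMMIT_SHA1=abc1234
• echo 'image: dockerhub-username/do-kubernetes-sample-app:$COMMIT_SHA1' | envsubst

The second command prints image: dockerhub-username/do-kubernetes-sample-app:abc1234, with the variable replaced. Note the single quotes: they stop the shell from expanding $COMMIT_SHA1 itself, so the substitution is left to envsubst.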

      echo "$KUBERNETES_CLUSTER_CERTIFICATE" | base64 --decode > cert.crt
      

      The environment variable $KUBERNETES_CLUSTER_CERTIFICATE you created earlier on CircleCI's project settings is in reality a Base64 encoded string. To use it with kubectl you must decode its contents and save it to a file. In this case you are saving it to a file named cert.crt inside the current working directory.
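
As a quick illustration of the decoding step, SGVsbG8= is the Base64 encoding of the word Hello:

• echo "SGVsbG8=" | base64 --decode

This prints Hello, in the same way the command in the script turns the encoded certificate back into its original PEM form.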

./kubectl \
  --kubeconfig=/dev/null \
  --server=$KUBERNETES_SERVER \
  --certificate-authority=cert.crt \
  --token=$KUBERNETES_TOKEN \
  apply -f ./kube/
      

      Finally, you are running kubectl. The command has similar arguments to the one you ran when you were testing your Service Account. You are calling apply -f ./kube/, since on CircleCI the current working directory is the root folder of your project. ./kube/ here is your ~/do-sample-app/kube folder.

      Save the file and make sure it's executable:

      • chmod +x ~/do-sample-app/scripts/ci-deploy.sh

      Now, edit ~/do-sample-app/kube/do-sample-deployment.yml:

      • nano ~/do-sample-app/kube/do-sample-deployment.yml

      Change the tag of the container image value to look like the following one:

      ~/do-sample-app/kube/do-sample-deployment.yml

            # ...
            containers:
              - name: do-kubernetes-sample-app
                image: dockerhub-username/do-kubernetes-sample-app:$COMMIT_SHA1
                ports:
                  - containerPort: 80
                    name: http
      

      Save and close the file. You must now add some new steps to your CI configuration file to update the deployment on Kubernetes.

      Open ~/do-sample-app/.circleci/config.yml on your favorite text editor:

      • nano ~/do-sample-app/.circleci/config.yml

      Write the following new steps, right below the Push Docker Image one you had before:

      ~/do-sample-app/.circleci/config.yml

      ...
            - run:
                name: Install envsubst
                command: |
                  sudo apt-get update && sudo apt-get -y install gettext-base
            - run:
                name: Install kubectl
                command: |
                  curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
                  chmod u+x ./kubectl
            - run:
                name: Deploy Code
                command: ./scripts/ci-deploy.sh
      

The first two steps install some dependencies: first envsubst, and then kubectl. The Deploy Code step is responsible for running your deploy script.

      To make sure the changes are really going to be reflected on your Kubernetes deployment, edit your index.html. Change the HTML to something else, like:

      ~/do-sample-app/index.html

      <!DOCTYPE html>
      <title>DigitalOcean</title>
      <body>
        Automatic Deployment is Working!
      </body>
      

      Once you have saved the above change, commit all the modified files to the repository, and push the changes upstream:

      • cd ~/do-sample-app/
      • git add --all
      • git commit -m "add deploy script and add new steps to circleci config"
      • git push

      You will see the new build running on CircleCI, and successfully deploying the changes to your Kubernetes cluster.

      Wait for the build to finish, then run the same command you ran previously:

      • kubectl port-forward $(kubectl get pod --selector="app=do-kubernetes-sample-app" --output jsonpath='{.items[0].metadata.name}') 8080:80

Make sure everything is working by opening your browser on the URL localhost:8080 or by making a curl request to it. It should show the updated HTML:

• curl localhost:8080

      You will receive the following output:

      Output

<!DOCTYPE html>
<title>DigitalOcean</title>
<body>
  Automatic Deployment is Working!
</body>

      Congratulations, you have set up automated deployment with CircleCI!

      Conclusion

      This was a basic tutorial on how to do deployments to DigitalOcean Kubernetes using CircleCI. From here, you can improve your pipeline in many ways. The first thing you can do is create a single build job for multiple deployments, each one deploying to different Kubernetes clusters or different namespaces. This can be extremely useful when you have different Git branches for development/staging/production environments, ensuring that the deployments are always separated.

      You could also build your own image to be used on CircleCI, instead of using buildpack-deps. This image could be based on it, but could already have kubectl and envsubst dependencies installed.
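
A minimal sketch of what such an image's Dockerfile might look like; the pinned kubectl version is an assumption, so match it to your cluster:

FROM circleci/buildpack-deps:stretch

# preinstall envsubst (from gettext-base) and kubectl so jobs can skip those steps
RUN sudo apt-get update && sudo apt-get -y install gettext-base
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.5/bin/linux/amd64/kubectl \
    && chmod +x kubectl \
    && sudo mv kubectl /usr/local/bin/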

      If you would like to learn more about CI/CD on Kubernetes, check out the tutorials for our CI/CD on Kubernetes Webinar Series, or for more information about apps on Kubernetes, see Modernizing Applications for Kubernetes.




      Webinar Series: GitOps Tool Sets on Kubernetes with CircleCI and Argo CD


      Webinar Series

      This article supplements a webinar series on doing CI/CD with Kubernetes. The series discusses how to take a cloud native approach to building, testing, and deploying applications, covering release management, cloud native tools, service meshes, and CI/CD tools that can be used with Kubernetes. It is designed to help developers and businesses that are interested in integrating CI/CD best practices with Kubernetes into their workflows.

      This tutorial includes the concepts and commands from the last session of the series, GitOps Tool Sets on Kubernetes with CircleCI and Argo CD.

      Warning: The procedures in this tutorial are meant for demonstration purposes only. As a result, they don’t follow the best practices and security measures necessary for a production-ready deployment.

      Introduction

      Using Kubernetes to deploy your application can provide significant infrastructural advantages, such as flexible scaling, management of distributed components, and control over different versions of your application. However, with the increased control comes an increased complexity that can make CI/CD systems of cooperative code development, version control, change logging, and automated deployment and rollback particularly difficult to manage manually. To account for these difficulties, DevOps engineers have developed several methods of Kubernetes CI/CD automation, including the system of tooling and best practices called GitOps. GitOps, as proposed by Weaveworks in a 2017 blog post, uses Git as a “single source of truth” for CI/CD processes, integrating code changes in a single, shared repository per project and using pull requests to manage infrastructure and deployment.

      There are many tools that use Git as a focal point for DevOps processes on Kubernetes, including Gitkube developed by Hasura, Flux by Weaveworks, and Jenkins X, the topic of the second webinar in this series. In this tutorial, you will run through a demonstration of two additional tools that you can use to set up your own cloud-based GitOps CI/CD system: The Continuous Integration tool CircleCI and Argo CD, a declarative Continuous Delivery tool.

      CircleCI uses GitHub or Bitbucket repositories to organize application development and to automate building and testing on Kubernetes. By integrating with the Git repository, CircleCI projects can detect when a change is made to the application code and automatically test it, sending notifications of the change and the results of testing over email or other communication tools like Slack. CircleCI keeps logs of all these changes and test results, and the browser-based interface allows users to monitor the testing in real time, so that a team always knows the status of their project.

As a sub-project of the Argo workflow management engine for Kubernetes, Argo CD provides Continuous Delivery tooling that automatically synchronizes and deploys your application whenever a change is made in your GitHub repository. By managing the deployment and lifecycle of an application, it provides solutions for version control, configurations, and application definitions in Kubernetes environments, organizing complex data with an easy-to-understand user interface. It can handle several types of Kubernetes manifests, including ksonnet applications, Kustomize applications, Helm charts, and YAML/JSON files, and supports webhook notifications from GitHub, GitLab, and Bitbucket.

      In this last article of the CI/CD with Kubernetes series, you will try out these GitOps tools by:

      By the end of this tutorial, you will have a basic understanding of how to construct a CI/CD pipeline on Kubernetes with a GitOps tool set.

      Prerequisites

      To follow this tutorial, you will need:

      • An Ubuntu 16.04 server with 16 GB of RAM or above. Since this tutorial is meant for demonstration purposes only, commands are run from the root account. Note that the unrestrained privileges of this account do not adhere to production-ready best practices and could affect your system. For this reason, it is suggested to follow these steps in a test environment such as a virtual machine or a DigitalOcean Droplet.

      • A Docker Hub Account. For an overview on getting started with Docker Hub, please see these instructions.

      • A GitHub account and basic knowledge of GitHub. For a primer on how to use GitHub, check out our How To Create a Pull Request on GitHub tutorial.

      • Familiarity with Kubernetes concepts. Please refer to the article An Introduction to Kubernetes for more details.

      • A Kubernetes cluster with the kubectl command line tool. This tutorial has been tested on a simulated Kubernetes cluster, set up in a local environment with Minikube, a program that allows you to try out Kubernetes tools on your own machine without having to set up a true Kubernetes cluster. To create a Minikube cluster, follow Step 1 of the second webinar in this series, Kubernetes Package Management with Helm and CI/CD with Jenkins X.

      Step 1 — Setting Up your CircleCI Workflow

In this step, you will put together a standard CircleCI workflow that involves three jobs: testing code, building an image, and pushing that image to Docker Hub. In the testing phase, CircleCI will use pytest to test the code for a sample RSVP application. Then, it will build the image of the application code and push the image to Docker Hub.

      First, give CircleCI access to your GitHub account. To do this, navigate to https://circleci.com/ in your favorite web browser:

      CircleCI Landing Page

      In the top right of the page, you will find a Sign Up button. Click this button, then click Sign Up with GitHub on the following page. The CircleCI website will prompt you for your GitHub credentials:

      Sign In to GitHub CircleCI Page

      Entering your username and password here gives CircleCI the permission to read your GitHub email address, deploy keys and add service hooks to your repository, create a list of your repositories, and add an SSH key to your GitHub account. These permissions are necessary for CircleCI to monitor and react to changes in your Git repository. If you would like to read more about the requested permissions before giving CircleCI your account information, see the CircleCI documentation.

      Once you have reviewed these permissions, enter your GitHub credentials and click Sign In. CircleCI will then integrate with your GitHub account and redirect your browser to the CircleCI welcome page:

      Welcome page for CircleCI

      Now that you have access to your CircleCI dashboard, open up another browser window and navigate to the GitHub repository for this webinar, https://github.com/do-community/rsvpapp-webinar4. If prompted to sign in to GitHub, enter your username and password. In this repository, you will find a sample RSVP application created by the CloudYuga team. For the purposes of this tutorial, you will use this application to demonstrate a GitOps workflow. Fork this repository to your GitHub account by clicking the Fork button at the top right of the screen.

      When you’ve forked the repository, GitHub will redirect you to https://github.com/your_GitHub_username/rsvpapp-webinar4. On the left side of the screen, you will see a Branch: master button. Click this button to reveal the list of branches for this project. Here, the master branch refers to the current official version of the application. On the other hand, the dev branch is a development sandbox, where you can test changes before promoting them to the official version in the master branch. Select the dev branch.

      Now that you are in the development section of this demonstration repository, you can start setting up a pipeline. CircleCI requires a YAML configuration file in the repository that describes the steps it needs to take to test your application. The repository you forked already has this file at .circleci/config.yml; in order to practice setting up CircleCI, delete this file and make your own.

      To create this configuration file, click the Create new file button and make a file named .circleci/config.yml:

      GitHub Create a new file Page

      Once you have this file open in GitHub, you can configure the workflow for CircleCI. To learn about this file’s contents, you will add the sections piece by piece. First, add the following:

      .circleci/config.yml

      version: 2
      jobs:
        test:
          machine:
            image: circleci/classic:201808-01
            docker_layer_caching: true
          working_directory: ~/repo
      
      . . .
      

In the preceding code, version refers to the version of the CircleCI configuration that you will use. jobs:test: means that you are setting up a test for your application, and machine:image: indicates where CircleCI will do the testing, in this case a virtual machine based on the circleci/classic:201808-01 image.

      Next, add the steps you would like CircleCI to take during the test:

      .circleci/config.yml

      . . .
          steps:
            - checkout
            - run:
                name: install dependencies
                command: |
                  sudo rm /var/lib/dpkg/lock
                  sudo dpkg --configure -a
                  sudo apt-get install software-properties-common
                  sudo add-apt-repository ppa:fkrull/deadsnakes
                  sudo apt-get update
                  sleep 5
                  sudo rm /var/lib/dpkg/lock
                  sudo dpkg --configure -a
                  sudo apt-get install python3.5
                  sleep 5
                  python -m pip install -r requirements.txt
      
            # run tests!
            # this example uses Django's built-in test-runner
            # other common Python testing frameworks include pytest and nose
            # https://pytest.org
            # https://nose.readthedocs.io
      
            - run:
                name: run tests
                command: |
                  python -m pytest tests/test_rsvpapp.py  
      
      . . .
      

The steps of the test are listed out after steps:, starting with - checkout, which will check out your project's source code and copy it into the job's space. Next, the - run: name: install dependencies step runs the listed commands to install the dependencies required for the test, including Python 3.5 and the packages listed in requirements.txt. After CircleCI downloads these dependencies, the - run: name: run tests step will instruct CircleCI to run the tests on your application with the testing tool pytest.

      With the test job completed, add in the following contents to describe the build job:

      .circleci/config.yml

      . . .
        build:
      
          machine:
            image: circleci/classic:201808-01
            docker_layer_caching: true
          working_directory: ~/repo
      
          steps:
            - checkout 
            - run:
                name: build image
                command: |
                  docker build -t $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1 .
      
        push:
          machine:
            image: circleci/classic:201808-01
            docker_layer_caching: true
          working_directory: ~/repo
          steps:
            - checkout 
            - run:
                name: Push image
                command: |
                  docker build -t $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1 .
                  echo $DOCKERHUB_PASSWORD | docker login --username $DOCKERHUB_USERNAME --password-stdin
                  docker push $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1    
      
      . . .
      

As before, machine:image: means that CircleCI will build the application in a virtual machine based on the specified image. Under steps:, you will find - checkout again, followed by - run: name: build image. This means that CircleCI will build a Docker image for the rsvpapp application, tagged with your Docker Hub username and the commit hash. You will set the $DOCKERHUB_USERNAME environment variable in the CircleCI interface, which the tutorial will cover after this YAML file is complete.

      After the build job is done, the push job will run. Because each job runs on a fresh virtual machine, the push job rebuilds the image (quickly, thanks to the cached layers), logs in to Docker Hub with your credentials, and pushes the resulting image to your Docker Hub account.
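
      You can reproduce what these two jobs do by hand from the repository root, which is a useful way to debug them. In this sketch, your_dockerhub_username and test-tag are stand-ins for the $DOCKERHUB_USERNAME and $CIRCLE_SHA1 values that CircleCI supplies:

      • docker build -t your_dockerhub_username/rsvpapp:test-tag .

      • docker login --username your_dockerhub_username

      • docker push your_dockerhub_username/rsvpapp:test-tag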

      Finally, add the following lines to determine the workflows that coordinate the jobs you defined earlier:

      .circleci/config.yml

      . . .
      workflows:
        version: 2
        build-deploy:
          jobs:
            - test:
                context: DOCKERHUB
                filters:
                  branches:
                    only: dev        
            - build:
                context: DOCKERHUB 
                requires:
                  - test
                filters:
                  branches:
                    only: dev
            - push:
                context: DOCKERHUB
                requires:
                  - build
                filters:
                  branches:
                    only: dev
      

      These lines ensure that CircleCI executes the test, build, and push jobs in the correct order: the requires: entries make build wait for test to succeed and push wait for build. context: DOCKERHUB attaches each job to the DOCKERHUB context, which holds the shared environment variables the jobs need. You will create this context after finalizing this YAML file. The only: dev line restricts the workflow to trigger only when there is a change to the dev branch of your repository, ensuring that CircleCI will build and test only the code from dev.
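
      For instance, if you later wanted the same workflow to run for master as well, each filter could list both branches, since only: accepts either a single branch or a list (a sketch only; this tutorial keeps the workflow restricted to dev):

      filters:
        branches:
          only:
            - dev
            - master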

      Now that you have added all the code for the .circleci/config.yml file, its contents should be as follows:

      .circleci/config.yml

      version: 2
      jobs:
        test:
          machine:
            image: circleci/classic:201808-01
            docker_layer_caching: true
          working_directory: ~/repo
      
          steps:
            - checkout
            - run:
                name: install dependencies
                command: |
                  sudo rm /var/lib/dpkg/lock
                  sudo dpkg --configure -a
                  sudo apt-get install software-properties-common
                  sudo add-apt-repository ppa:fkrull/deadsnakes
                  sudo apt-get update
                  sleep 5
                  sudo rm /var/lib/dpkg/lock
                  sudo dpkg --configure -a
                  sudo apt-get install python3.5
                  sleep 5
                  python -m pip install -r requirements.txt
      
          # run tests!
          # this example uses pytest
          # other common Python testing frameworks include Django's built-in test-runner and nose
          # https://pytest.org
          # https://nose.readthedocs.io
      
            - run:
                name: run tests
                command: |
                  python -m pytest tests/test_rsvpapp.py  
      
        build:
      
          machine:
            image: circleci/classic:201808-01
            docker_layer_caching: true
          working_directory: ~/repo
      
          steps:
            - checkout 
            - run:
                name: build image
                command: |
                  docker build -t $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1 .
      
        push:
          machine:
            image: circleci/classic:201808-01
            docker_layer_caching: true
          working_directory: ~/repo
          steps:
            - checkout 
            - run:
                name: Push image
                command: |
                  docker build -t $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1 .
                  echo $DOCKERHUB_PASSWORD | docker login --username $DOCKERHUB_USERNAME --password-stdin
                  docker push $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1    
      
      workflows:
        version: 2
        build-deploy:
          jobs:
            - test:
                context: DOCKERHUB
                filters:
                  branches:
                    only: dev        
            - build:
                context: DOCKERHUB 
                requires:
                  - test
                filters:
                  branches:
                    only: dev
            - push:
                context: DOCKERHUB
                requires:
                  - build
                filters:
                  branches:
                    only: dev
      

      Once you have added this file to the dev branch of your repository, return to the CircleCI dashboard.

      Next, you will create a CircleCI context to house the environment variables needed for the workflow that you outlined in the preceding YAML file. On the left side of the screen, you will find a SETTINGS button. Click this, then select Contexts under the ORGANIZATION heading. Finally, click the Create Context button on the right side of the screen:

      Create Context Screen for CircleCI

      CircleCI will then ask you for the name of this context. Enter DOCKERHUB, then click Create. Once you have created the context, select the DOCKERHUB context and click the Add Environment Variable button. For the first variable, enter the name DOCKERHUB_USERNAME, and in the Value field enter your Docker Hub username.

      Add Environment Variable Screen for CircleCI

      Then add another environment variable, but this time, name it DOCKERHUB_PASSWORD and fill in the Value field with your Docker Hub password.

      When you’ve created the two environment variables for your DOCKERHUB context, create a CircleCI project for the test RSVP application. To do this, select the ADD PROJECTS button from the left-hand side menu. This will yield a list of GitHub projects tied to your account. Select rsvpapp-webinar4 from the list and click the Set Up Project button.

      Note: If rsvpapp-webinar4 does not show up in the list, reload the CircleCI page. Sometimes it can take a moment for the GitHub projects to show up in the CircleCI interface.

      You will now find yourself on the Set Up Project page:

      Set Up Project Screen for CircleCI

      At the top of the screen, CircleCI instructs you to create a config.yml file. Since you have already done this, scroll down to find the Start Building button on the right side of the page. By selecting this, you will tell CircleCI to start monitoring your application for changes.

      Click on the Start Building button. CircleCI will redirect you to a build progress/status page, which as yet has no build.

      To test the pipeline trigger, go to the recently forked repository at https://github.com/your_GitHub_username/rsvpapp-webinar4 and make some changes in the dev branch only. Since you have added the branch filter only: dev to your .circleci/config.yml file, CircleCI will run the workflow only when there is a change in the dev branch. Make a change to the dev branch code, and you will find that CircleCI has triggered a new workflow in the user interface. Click on the running workflow and you will find the details of what CircleCI is doing:

      CircleCI Project Workflow Page
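
      One quick way to make such a change from the command line, assuming you have cloned your fork locally, is to push an empty commit to dev:

      • git checkout dev

      • git commit --allow-empty -m "Trigger CircleCI workflow"

      • git push origin dev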

      With your CircleCI workflow taking care of the Continuous Integration aspect of your GitOps CI/CD system, you can install and configure Argo CD on top of your Kubernetes cluster to address Continuous Deployment.

      Step 2 — Installing and Configuring Argo CD on your Kubernetes Cluster

      Just as CircleCI uses GitHub to trigger automated testing on changes to source code, Argo CD connects your Kubernetes cluster to your GitHub repository to listen for changes and to automatically deploy the updated application. To set this up, you must first install Argo CD into your cluster.

      First, create a namespace named argocd:

      • kubectl create namespace argocd

      Within this namespace, Argo CD will run all the services and resources it needs to create its Continuous Deployment workflow.
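
      You can confirm that the namespace was created with:

      • kubectl get namespace argocd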

      Next, apply the Argo CD installation manifest directly from the official GitHub repository for Argo:

      • kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v0.9.2/manifests/install.yaml

      In this command, the -n flag directs kubectl to apply the manifest to the namespace argocd, and -f specifies the location of the manifest to apply, in this case the install.yaml file fetched from the Argo repository.

      By using the kubectl get command, you can find the pods that are now running in the argocd namespace:

      • kubectl get pod -n argocd

      Using this command will yield output similar to the following:

      NAME                                      READY     STATUS    RESTARTS   AGE
      application-controller-6d68475cd4-j4jtj   1/1       Running   0          1m
      argocd-repo-server-78f556f55b-tmkvj       1/1       Running   0          1m
      argocd-server-78f47bf789-trrbw            1/1       Running   0          1m
      dex-server-74dc6c5ff4-fbr5g               1/1       Running   0          1m
      

      Now that Argo CD is running on your cluster, download the Argo CD CLI tool so that you can control the program from your command line:

      • curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v0.9.2/argocd-linux-amd64

      Once you’ve downloaded the file, use chmod to make it executable:

      • chmod +x /usr/local/bin/argocd

      To find the Argo CD service, run the kubectl get command in the namespace argocd:

      • kubectl get svc -n argocd argocd-server

      You will get output similar to the following:

      Output

      NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
      argocd-server   ClusterIP   10.109.189.243   <none>        80/TCP,443/TCP   8m

      Now, access the Argo CD API server. This server does not automatically have an external IP, so you must first expose the API so that you can access it from your browser at your local workstation. To do this, use kubectl port-forward to forward port 8080 on your local workstation to the 80 TCP port of the argocd-server service from the preceding output:

      • kubectl port-forward svc/argocd-server -n argocd 8080:80

      The output will be:

      Output

      Forwarding from 127.0.0.1:8080 -> 8080
      Forwarding from [::1]:8080 -> 8080

      Once you run the port-forward command, your command prompt will disappear from your terminal. To enter more commands for your Kubernetes cluster, open a new terminal window and log onto your remote server.
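
      Alternatively, if you would rather keep your prompt in a single terminal, you can run the same command in the background (a convenience, not something this tutorial requires):

      • kubectl port-forward svc/argocd-server -n argocd 8080:80 > /dev/null 2>&1 &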

      To complete the connection, use ssh to forward the 8080 port from your local machine. First, open up an additional terminal window and, from your local workstation, enter the following command, with remote_server_IP_address replaced by the IP address of the remote server on which you are running your Kubernetes cluster:

      • ssh -L 8080:localhost:8080 root@remote_server_IP_address

      To make sure that the Argo CD server is exposed to your local workstation, open up a browser and navigate to the URL localhost:8080. You will see the Argo CD landing page:

      Sign In Page for ArgoCD

      Now that you have installed Argo CD and exposed its server to your local workstation, you can continue to the next step, in which you will connect GitHub into your Argo CD service.

      Step 3 — Connecting Argo CD to GitHub

      To allow Argo CD to listen to GitHub and synchronize deployments to your repository, you first have to connect Argo CD to GitHub. To do this, log in to Argo CD.

      By default, the password for your Argo CD account is the name of the pod for the Argo CD API server. Switch back to the terminal window that is logged into your remote server but is not handling the port forwarding. Retrieve the password with the following command:

      • kubectl get pods -n argocd -l app=argocd-server -o name | cut -d'/' -f 2

      You will get the name of the pod running the Argo API server:

      Output

      argocd-server-b686c584b-6ktwf
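
      If you would rather not copy the pod name by hand, you can capture it in a shell variable and print it when the password prompt appears:

      • ARGOCD_PASSWORD=$(kubectl get pods -n argocd -l app=argocd-server -o name | cut -d'/' -f 2)

      • echo $ARGOCD_PASSWORD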

      Enter the following command to log in from the CLI:

      • argocd login localhost:8080

      You will receive the following prompt:

      Output

      WARNING: server certificate had error: x509: certificate signed by unknown authority. Proceed insecurely (y/n)?

      For the purposes of this demonstration, type y to proceed without a secure connection. Argo CD will then prompt you for your username and password. Enter admin for username and the complete argocd-server pod name for your password. Once you put in your credentials, you’ll receive the following message:

      Output

      'admin' logged in successfully
      Context 'localhost:8080' updated

      Now that you have logged in, use the following command to change your password:

      • argocd account update-password

      Argo CD will ask you for your current password and the password you would like to change it to. Choose a secure password and enter it at the prompts. Once you have done this, use your new password to relogin:

      • argocd relogin

      Enter your password again, and you will get:

      Output

      Context 'localhost:8080' updated

      If you were deploying an application on a cluster external to the Argo CD cluster, you would need to register the application cluster's credentials with Argo CD. If, as is the case with this tutorial, Argo CD and your application are on the same cluster, then you will use https://kubernetes.default.svc as the Kubernetes API server when connecting Argo CD to your application.

      To demonstrate how one might register an external cluster, first get a list of your Kubernetes contexts:

      • kubectl config get-contexts

      You'll get:

      Output

      CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
      *         minikube   minikube   minikube

      To add a cluster, enter the following command, with the name of your cluster in place of the highlighted name:

      • argocd cluster add minikube

      In this case, the preceding command would yield:

      Output

      INFO[0000] ServiceAccount "argocd-manager" created
      INFO[0000] ClusterRole "argocd-manager-role" created
      INFO[0000] ClusterRoleBinding "argocd-manager-role-binding" created, bound "argocd-manager" to "argocd-manager-role"
      Cluster 'minikube' added

      Now that you have set up your log in credentials for Argo CD and tested how to add an external cluster, move over to the Argo CD landing page and log in from your local workstation. Argo CD will direct you to the Argo CD applications page:

      Argo CD Applications Screen

      From here, click the Settings icon from the left-side tool bar, click Repositories, then click CONNECT REPO. Argo CD will present you with three fields for your GitHub information:

      Argo CD Connect Git Repo Page

      In the field for Repository URL, enter https://github.com/your_GitHub_username/rsvpapp-webinar4, then enter your GitHub username and password. Once you've entered your credentials, click the CONNECT button at the top of the screen.

      Once you've connected your repository containing the demo RSVP app to Argo CD, choose the Apps icon from the left-side tool bar, click the + button in the top right corner of the screen, and select New Application. From the Select Repository page, select your GitHub repository for the RSVP app and click next. Then choose CREATE APP FROM DIRECTORY to go to a page that asks you to review your application parameters:

      Argo CD Review application parameters Page

      The Path field designates where the YAML file for your application resides in your GitHub repository. For this project, type k8s. For Application Name, type rsvpapp, and for Cluster URL, select https://kubernetes.default.svc from the dropdown menu, since Argo CD and your application are on the same Kubernetes cluster. Finally, enter default for Namespace.
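
      If you prefer working from the command line, a roughly equivalent application can be created with the argocd CLI. This is a sketch written for a recent CLI; flag names may differ slightly in older releases such as v0.9.2:

      • argocd app create rsvpapp --repo https://github.com/your_GitHub_username/rsvpapp-webinar4 --path k8s --dest-server https://kubernetes.default.svc --dest-namespace default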

      Once you have filled out your application parameters, click on CREATE at the top of the screen. A box will appear, representing your application:

      Argo CD APPLICATIONS Page with rsvpapp

      After Status:, you will see that your application is OutOfSync with your GitHub repository. To deploy your application as it is on GitHub, click ACTIONS and choose Sync. After a few moments, your application status will change to Synced, meaning that Argo CD has deployed your application.
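
      The same synchronization can also be triggered from the CLI, if you prefer:

      • argocd app sync rsvpapp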

      Once your application has been deployed, click your application box to find a detailed diagram of your application:

      Argo CD Application Details Page for rsvpapp

      To find this deployment on your Kubernetes cluster, switch back to the terminal window for your remote server and enter:

      • kubectl get pod

      You will receive output with the pods that are running your app:

      Output

      NAME                      READY     STATUS    RESTARTS   AGE
      rsvp-755d87f66b-hgfb5     1/1       Running   0          12m
      rsvp-755d87f66b-p2bsh     1/1       Running   0          12m
      rsvp-db-54996bf89-gljjz   1/1       Running   0          12m

      Next, check the services:

      • kubectl get svc

      You'll find a service for the RSVP app and your MongoDB database, in addition to the number of the port from which your app is running, highlighted in the following:

      NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
      kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        2h
      mongodb      ClusterIP   10.102.150.54   <none>        27017/TCP      25m
      rsvp         NodePort    10.106.91.108   <none>        80:31350/TCP   25m
      

      You can find your deployed RSVP app by navigating to your_remote_server_IP_address:app_port_number in your browser, using the NodePort number of the rsvp service (31350 in the preceding output) for app_port_number:

      RSVP Application
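
      If you would rather look the port number up from the command line than read it from the table, a jsonpath query against the rsvp service works:

      • kubectl get svc rsvp -o jsonpath='{.spec.ports[0].nodePort}'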

      Now that you have deployed your application using Argo CD, you can test your Continuous Deployment system and adjust it to automatically sync with GitHub.

      Step 4 — Testing your Continuous Deployment Setup

      With Argo CD set up, test out your Continuous Deployment system by making a change in your project and triggering a new build of your application.

      In your browser, navigate to https://github.com/your_GitHub_username/rsvpapp-webinar4, click into the master branch, and update the k8s/rsvp.yaml file so that it deploys your app using the image built by the Continuous Integration system. Add dev after image: nkhare/rsvpapp:, as shown in the following:

      rsvpapp-webinar4/k8s/rsvp.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: rsvp
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: rsvp
        template:
          metadata:
            labels:
              app: rsvp
          spec:
            containers:
            - name: rsvp-app
            image: nkhare/rsvpapp:dev
              imagePullPolicy: Always
              livenessProbe:
                httpGet:
                  path: /
                  port: 5000
                periodSeconds: 30
                timeoutSeconds: 1
                initialDelaySeconds: 50
              env:
              - name: MONGODB_HOST
                value: mongodb
              ports:
              - containerPort: 5000
                name: web-port
      . . .
      

      Instead of pulling the original image from Docker Hub, Argo CD will now use the dev image created by the Continuous Integration system to deploy the application.

      Commit the change, then return to the Argo CD UI. You will notice that nothing has changed yet; this is because you have not activated automatic synchronization and must sync the application manually.

      To manually sync the application, click the blue circle in the top right of the screen, and click Sync. A new menu will appear, with a field to name your new revision and a checkbox labeled PRUNE:

      Synchronization Page for Argo CD

      Clicking this checkbox will ensure that, once Argo CD spins up your new application, it will destroy the outdated version. Click the PRUNE box, then click SYNCHRONIZE at the top of the screen. You will see the old elements of your application spinning down, and the new ones spinning up with your CircleCI-made image. If the new image included any changes, you would find these new changes reflected in your application at the URL your_remote_server_IP_address:app_port_number.
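
      For reference, recent versions of the argocd CLI expose the same operation, including pruning, as:

      • argocd app sync rsvpapp --prune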

      As mentioned before, Argo CD also has an auto-sync option that will incorporate changes into your application as you make them. To enable this, open up your terminal for your remote server and use the following command:

      • argocd app set rsvpapp --sync-policy automated

      To make sure that revisions are not accidentally deleted, the default for automated sync has prune turned off. To turn automated pruning on, simply add the --auto-prune flag at the end of the preceding command.
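
      In other words, to enable automated sync with pruning in one step, you would run:

      • argocd app set rsvpapp --sync-policy automated --auto-prune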

      Now that you have added Continuous Deployment capabilities to your Kubernetes cluster, you have completed the demonstration GitOps CI/CD system with CircleCI and Argo CD.

      Conclusion

      In this tutorial, you created a pipeline with CircleCI that triggers tests and builds updated images when you change code in your GitHub repository. You also used Argo CD to deploy an application, automatically incorporating the changes integrated by CircleCI. You can now use these tools to create your own GitOps CI/CD system that uses Git as its organizing theme.

      If you'd like to learn more about Git, check out our An Introduction to Open Source series of tutorials. To explore more DevOps tools that integrate with Git repositories, take a look at How To Install and Configure GitLab on Ubuntu 18.04.


