
      Kubernetes

      How To Create a Kubernetes Cluster Using Kubeadm on Debian 9


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Kubernetes is a container orchestration system that manages containers at scale. Initially developed by Google based on its experience running containers in production, Kubernetes is open source and actively developed by a community around the world.

Note: This tutorial uses version 1.14 of Kubernetes, the officially supported version at the time of this article’s publication. For up-to-date information on the latest version, please see the current release notes in the official Kubernetes documentation.

      Kubeadm automates the installation and configuration of Kubernetes components such as the API server, Controller Manager, and Kube DNS. It does not, however, create users or handle the installation of operating-system-level dependencies and their configuration. For these preliminary tasks, it is possible to use a configuration management tool like Ansible or SaltStack. Using these tools makes creating additional clusters or recreating existing clusters much simpler and less error prone.

      In this guide, you will set up a Kubernetes cluster from scratch using Ansible and Kubeadm, and then deploy a containerized Nginx application to it.

      Goals

      Your cluster will include the following physical resources:

The master node (a node in Kubernetes refers to a server) is responsible for managing the state of the cluster. It runs Etcd, which stores cluster data, alongside the components that schedule workloads to worker nodes.

Worker nodes are the servers where your workloads (i.e. containerized applications and services) will run. Workers will continue to run your workloads once they’re assigned to them, even if the master goes down after scheduling is complete. A cluster’s capacity can be increased by adding workers.

      After completing this guide, you will have a cluster ready to run containerized applications, provided that the servers in the cluster have sufficient CPU and RAM resources for your applications to consume. Almost any traditional Unix application including web applications, databases, daemons, and command line tools can be containerized and made to run on the cluster. The cluster itself will consume around 300-500MB of memory and 10% of CPU on each node.

      Once the cluster is set up, you will deploy the web server Nginx to it to ensure that it is running workloads correctly.

Prerequisites

To follow this guide, you will need:

• Three servers running Debian 9. One will be the master node and the other two will be worker nodes. You should be able to SSH into each server as the root user.

• An SSH key pair on your local machine, with the public key located at ~/.ssh/id_rsa.pub.

• Ansible installed on your local machine, since all playbooks in this guide are run locally.

      Step 1 — Setting Up the Workspace Directory and Ansible Inventory File

      In this section, you will create a directory on your local machine that will serve as your workspace. You will configure Ansible locally so that it can communicate with and execute commands on your remote servers. Once that’s done, you will create a hosts file containing inventory information such as the IP addresses of your servers and the groups that each server belongs to.

      Out of your three servers, one will be the master with an IP displayed as master_ip. The other two servers will be workers and will have the IPs worker_1_ip and worker_2_ip.

      Create a directory named ~/kube-cluster in the home directory of your local machine and cd into it:

      • mkdir ~/kube-cluster
      • cd ~/kube-cluster

      This directory will be your workspace for the rest of the tutorial and will contain all of your Ansible playbooks. It will also be the directory inside which you will run all local commands.

      Create a file named ~/kube-cluster/hosts using nano or your favorite text editor:

      • nano ~/kube-cluster/hosts

      Add the following text to the file, which will specify information about the logical structure of your cluster:

      ~/kube-cluster/hosts

      [masters]
      master ansible_host=master_ip ansible_user=root
      
      [workers]
      worker1 ansible_host=worker_1_ip ansible_user=root
      worker2 ansible_host=worker_2_ip ansible_user=root
      
      [all:vars]
      ansible_python_interpreter=/usr/bin/python3
      

      You may recall that inventory files in Ansible are used to specify server information such as IP addresses, remote users, and groupings of servers to target as a single unit for executing commands. ~/kube-cluster/hosts will be your inventory file and you’ve added two Ansible groups (masters and workers) to it specifying the logical structure of your cluster.

      In the masters group, there is a server entry named “master” that lists the master node’s IP (master_ip) and specifies that Ansible should run remote commands as the root user.

      Similarly, in the workers group, there are two entries for the worker servers (worker_1_ip and worker_2_ip) that also specify the ansible_user as root.

      The last line of the file tells Ansible to use the remote servers’ Python 3 interpreters for its management operations.

      Save and close the file after you’ve added the text.
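Before continuing, you can optionally verify that Ansible can reach all three servers using its ping module (a quick connectivity check, assuming root SSH access to the servers is already working):

• ansible -i hosts all -m ping

Each host should respond with "pong". If a host is unreachable, double-check its IP address and your SSH keys before proceeding.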

      Having set up the server inventory with groups, let’s move on to installing operating system level dependencies and creating configuration settings.

      Step 2 — Creating a Non-Root User on All Remote Servers

      In this section you will create a non-root user with sudo privileges on all servers so that you can SSH into them manually as an unprivileged user. This can be useful if, for example, you would like to see system information with commands such as top/htop, view a list of running containers, or change configuration files owned by root. These operations are routinely performed during the maintenance of a cluster, and using a non-root user for such tasks minimizes the risk of modifying or deleting important files or unintentionally performing other dangerous operations.

      Create a file named ~/kube-cluster/initial.yml in the workspace:

      • nano ~/kube-cluster/initial.yml

      Next, add the following play to the file to create a non-root user with sudo privileges on all of the servers. A play in Ansible is a collection of steps to be performed that target specific servers and groups. The following play will create a non-root sudo user:

      ~/kube-cluster/initial.yml

      - hosts: all
        become: yes
        tasks:
          - name: create the 'sammy' user
            user: name=sammy append=yes state=present createhome=yes shell=/bin/bash
      
          - name: allow 'sammy' to have passwordless sudo
            lineinfile:
              dest: /etc/sudoers
              line: 'sammy ALL=(ALL) NOPASSWD: ALL'
              validate: 'visudo -cf %s'
      
          - name: set up authorized keys for the sammy user
            authorized_key: user=sammy key="{{item}}"
            with_file:
              - ~/.ssh/id_rsa.pub
      

      Here’s a breakdown of what this playbook does:

      • Creates the non-root user sammy.

      • Configures the sudoers file to allow the sammy user to run sudo commands without a password prompt.

• Adds the public key from your local machine (usually ~/.ssh/id_rsa.pub) to the remote sammy user’s authorized key list. This will allow you to SSH into each server as the sammy user.

      Save and close the file after you’ve added the text.

      Next, execute the playbook by locally running:

      • ansible-playbook -i hosts ~/kube-cluster/initial.yml

      The command will complete within two to five minutes. On completion, you will see output similar to the following:

      Output

PLAY [all] ****

TASK [Gathering Facts] ****
ok: [master]
ok: [worker1]
ok: [worker2]

TASK [create the 'sammy' user] ****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [allow 'sammy' user to have passwordless sudo] ****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [set up authorized keys for the sammy user] ****
changed: [worker1] => (item=ssh-rsa AAAAB3...)
changed: [worker2] => (item=ssh-rsa AAAAB3...)
changed: [master] => (item=ssh-rsa AAAAB3...)

PLAY RECAP ****
master    : ok=5    changed=4    unreachable=0    failed=0
worker1   : ok=5    changed=4    unreachable=0    failed=0
worker2   : ok=5    changed=4    unreachable=0    failed=0
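As an optional spot check, you can confirm the new user's passwordless sudo access directly over SSH:

• ssh sammy@master_ip 'sudo whoami'

If the play worked, this prints root without prompting for a password.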

      Now that the preliminary setup is complete, you can move on to installing Kubernetes-specific dependencies.

      Step 3 — Installing Kubernetes Dependencies

      In this section, you will install the operating-system-level packages required by Kubernetes with Debian’s package manager. These packages are:

      • Docker – a container runtime. It is the component that runs your containers. Support for other runtimes such as rkt is under active development in Kubernetes.

      • kubeadm – a CLI tool that will install and configure the various components of a cluster in a standard way.

      • kubelet – a system service/program that runs on all nodes and handles node-level operations.

      • kubectl – a CLI tool used for issuing commands to the cluster through its API Server.

      Create a file named ~/kube-cluster/kube-dependencies.yml in the workspace:

      • nano ~/kube-cluster/kube-dependencies.yml

      Add the following plays to the file to install these packages to your servers:

      ~/kube-cluster/kube-dependencies.yml

      - hosts: all
        become: yes
        tasks:
         - name: install remote apt deps
           apt:
             name: "{{ item }}"
             state: present
           with_items:
             - apt-transport-https
             - ca-certificates
             - gnupg2
             - software-properties-common
      
         - name: add Docker apt-key
           apt_key:
             url: https://download.docker.com/linux/debian/gpg
             state: present
      
         - name: add Docker's APT repository
           apt_repository:
            repo: deb https://download.docker.com/linux/debian stretch stable
            state: present
            filename: 'docker'
      
         - name: install Docker
           apt:
             name: docker-ce
             state: present
             update_cache: true
      
         - name: add Kubernetes apt-key
           apt_key:
             url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
             state: present
      
         - name: add Kubernetes' APT repository
           apt_repository:
            repo: deb http://apt.kubernetes.io/ kubernetes-xenial main
            state: present
            filename: 'kubernetes'
      
         - name: install kubelet
           apt:
             name: kubelet=1.14.0-00
             state: present
             update_cache: true
      
         - name: install kubeadm
           apt:
             name: kubeadm=1.14.0-00
             state: present
      
      - hosts: master
        become: yes
        tasks:
         - name: install kubectl
           apt:
             name: kubectl=1.14.0-00
             state: present
             force: yes
      

      The first play in the playbook does the following:

• Adds dependencies for adding, verifying, and installing packages from remote repositories.

• Adds the Docker APT repository’s apt-key for key verification.

• Adds the Docker APT repository to your remote servers’ APT sources list.

• Installs Docker, the container runtime.

      • Adds the Kubernetes APT repository’s apt-key for key verification.

      • Adds the Kubernetes APT repository to your remote servers’ APT sources list.

      • Installs kubelet and kubeadm.

      The second play consists of a single task that installs kubectl on your master node.

      Note: While the Kubernetes documentation recommends you use the latest stable release of Kubernetes for your environment, this tutorial uses a specific version. This will ensure that you can follow the steps successfully, as Kubernetes changes rapidly and the latest version may not work with this tutorial.

      Save and close the file when you are finished.

      Next, execute the playbook by locally running:

      • ansible-playbook -i hosts ~/kube-cluster/kube-dependencies.yml

      On completion, you will see output similar to the following:

      Output

PLAY [all] ****

TASK [Gathering Facts] ****
ok: [worker1]
ok: [worker2]
ok: [master]

TASK [install Docker] ****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [install APT Transport HTTPS] *****
ok: [master]
ok: [worker1]
changed: [worker2]

TASK [add Kubernetes apt-key] *****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [add Kubernetes' APT repository] *****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [install kubelet] *****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [install kubeadm] *****
changed: [master]
changed: [worker1]
changed: [worker2]

PLAY [master] *****

TASK [Gathering Facts] *****
ok: [master]

TASK [install kubectl] ******
ok: [master]

PLAY RECAP ****
master    : ok=9    changed=5    unreachable=0    failed=0
worker1   : ok=7    changed=5    unreachable=0    failed=0
worker2   : ok=7    changed=5    unreachable=0    failed=0

      After execution, Docker, kubeadm, and kubelet will be installed on all of the remote servers. kubectl is not a required component and is only needed for executing cluster commands. Installing it only on the master node makes sense in this context, since you will run kubectl commands only from the master. Note, however, that kubectl commands can be run from any of the worker nodes or from any machine where it can be installed and configured to point to a cluster.
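If you'd like to confirm the installed versions on every node before continuing, an ad-hoc Ansible command works well (this check is optional and not part of the playbooks above):

• ansible -i hosts all -a "kubeadm version"

Each node should report a v1.14.0 build.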

      All system dependencies are now installed. Let’s set up the master node and initialize the cluster.

      Step 4 — Setting Up the Master Node

      In this section, you will set up the master node. Before creating any playbooks, however, it’s worth covering a few concepts such as Pods and Pod Network Plugins, since your cluster will include both.

A pod is an atomic unit that runs one or more containers. These containers share resources such as file volumes and network interfaces. Pods are the basic unit of scheduling in Kubernetes: all containers in a pod are guaranteed to run on the same node that the pod is scheduled on.

      Each pod has its own IP address, and a pod on one node should be able to access a pod on another node using the pod’s IP. Containers on a single node can communicate easily through a local interface. Communication between pods is more complicated, however, and requires a separate networking component that can transparently route traffic from a pod on one node to a pod on another.

      This functionality is provided by pod network plugins. For this cluster, you will use Flannel, a stable and performant option.

      Create an Ansible playbook named master.yml on your local machine:

      • nano ~/kube-cluster/master.yml

      Add the following play to the file to initialize the cluster and install Flannel:

      ~/kube-cluster/master.yml

      - hosts: master
        become: yes
        tasks:
          - name: initialize the cluster
            shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.txt
            args:
              chdir: $HOME
              creates: cluster_initialized.txt
      
          - name: create .kube directory
            become: yes
            become_user: sammy
            file:
              path: $HOME/.kube
              state: directory
              mode: 0755
      
          - name: copy admin.conf to user's kube config
            copy:
              src: /etc/kubernetes/admin.conf
              dest: /home/sammy/.kube/config
              remote_src: yes
              owner: sammy
      
          - name: install Pod network
            become: yes
            become_user: sammy
            shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml >> pod_network_setup.txt
            args:
              chdir: $HOME
              creates: pod_network_setup.txt
      

      Here’s a breakdown of this play:

      • The first task initializes the cluster by running kubeadm init. Passing the argument --pod-network-cidr=10.244.0.0/16 specifies the private subnet that the pod IPs will be assigned from. Flannel uses the above subnet by default; we’re telling kubeadm to use the same subnet.

      • The second task creates a .kube directory at /home/sammy. This directory will hold configuration information such as the admin key files, which are required to connect to the cluster, and the cluster’s API address.

      • The third task copies the /etc/kubernetes/admin.conf file that was generated from kubeadm init to your non-root user’s home directory. This will allow you to use kubectl to access the newly-created cluster.

      • The last task runs kubectl apply to install Flannel. kubectl apply -f descriptor.[yml|json] is the syntax for telling kubectl to create the objects described in the descriptor.[yml|json] file. The kube-flannel.yml file contains the descriptions of objects required for setting up Flannel in the cluster.

      Save and close the file when you are finished.

      Execute the playbook locally by running:

      • ansible-playbook -i hosts ~/kube-cluster/master.yml

      On completion, you will see output similar to the following:

      Output

PLAY [master] ****

TASK [Gathering Facts] ****
ok: [master]

TASK [initialize the cluster] ****
changed: [master]

TASK [create .kube directory] ****
changed: [master]

TASK [copy admin.conf to user's kube config] *****
changed: [master]

TASK [install Pod network] *****
changed: [master]

PLAY RECAP ****
master    : ok=5    changed=4    unreachable=0    failed=0

To check the status of the master node, SSH into it with the following command:

• ssh sammy@master_ip

Once inside the master node, execute:

• kubectl get nodes

You will now see the following output:

      Output

NAME      STATUS   ROLES    AGE   VERSION
master    Ready    master   1d    v1.14.0

      The output states that the master node has completed all initialization tasks and is in a Ready state from which it can start accepting worker nodes and executing tasks sent to the API Server. You can now add the workers from your local machine.

      Step 5 — Setting Up the Worker Nodes

Adding workers to the cluster involves executing a single command on each. This command includes the necessary cluster information, such as the IP address and port of the master's API Server, and a secure token. Only nodes that pass in the secure token will be able to join the cluster.

      Navigate back to your workspace and create a playbook named workers.yml:

      • nano ~/kube-cluster/workers.yml

      Add the following text to the file to add the workers to the cluster:

      ~/kube-cluster/workers.yml

      - hosts: master
        become: yes
        gather_facts: false
        tasks:
          - name: get join command
            shell: kubeadm token create --print-join-command
            register: join_command_raw
      
          - name: set join command
            set_fact:
              join_command: "{{ join_command_raw.stdout_lines[0] }}"
      
      
      - hosts: workers
        become: yes
        tasks:
          - name: join cluster
            shell: "{{ hostvars['master'].join_command }} >> node_joined.txt"
            args:
              chdir: $HOME
              creates: node_joined.txt
      

      Here's what the playbook does:

• The first play gets the join command that needs to be run on the worker nodes. This command will be in the following format: kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>. Once it retrieves the actual command with the proper token and hash values, the task sets it as a fact so that the next play will be able to access that info.

      • The second play has a single task that runs the join command on all worker nodes. On completion of this task, the two worker nodes will be part of the cluster.

      Save and close the file when you are finished.

      Execute the playbook by locally running:

      • ansible-playbook -i hosts ~/kube-cluster/workers.yml

      On completion, you will see output similar to the following:

      Output

PLAY [master] ****

TASK [get join command] ****
changed: [master]

TASK [set join command] *****
ok: [master]

PLAY [workers] *****

TASK [Gathering Facts] *****
ok: [worker1]
ok: [worker2]

TASK [join cluster] *****
changed: [worker1]
changed: [worker2]

PLAY RECAP *****
master    : ok=2    changed=1    unreachable=0    failed=0
worker1   : ok=2    changed=1    unreachable=0    failed=0
worker2   : ok=2    changed=1    unreachable=0    failed=0

      With the addition of the worker nodes, your cluster is now fully set up and functional, with workers ready to run workloads. Before scheduling applications, let's verify that the cluster is working as intended.

      Step 6 — Verifying the Cluster

      A cluster can sometimes fail during setup because a node is down or network connectivity between the master and worker is not working correctly. Let's verify the cluster and ensure that the nodes are operating correctly.

You will need to check the current state of the cluster from the master node to ensure that the nodes are ready. If you disconnected from the master node, you can SSH back into it with the following command:

• ssh sammy@master_ip

Then execute the following command to get the status of the cluster:

• kubectl get nodes

      You will see output similar to the following:

      Output

NAME      STATUS   ROLES    AGE   VERSION
master    Ready    master   1d    v1.14.0
worker1   Ready    <none>   1d    v1.14.0
worker2   Ready    <none>   1d    v1.14.0

      If all of your nodes have the value Ready for STATUS, it means that they're part of the cluster and ready to run workloads.

      If, however, a few of the nodes have NotReady as the STATUS, it could mean that the worker nodes haven't finished their setup yet. Wait for around five to ten minutes before re-running kubectl get nodes and inspecting the new output. If a few nodes still have NotReady as the status, you might have to verify and re-run the commands in the previous steps.

      Now that your cluster is verified successfully, let's schedule an example Nginx application on the cluster.

      Step 7 — Running An Application on the Cluster

      You can now deploy any containerized application to your cluster. To keep things familiar, let's deploy Nginx using Deployments and Services to see how this application can be deployed to the cluster. You can use the commands below for other containerized applications as well, provided you change the Docker image name and any relevant flags (such as ports and volumes).

      Still within the master node, execute the following command to create a deployment named nginx:

      • kubectl create deployment nginx --image=nginx

A deployment is a type of Kubernetes object that ensures there is always a specified number of pods running based on a defined template, even if a pod crashes during the cluster's lifetime. The above deployment will create a pod with one container from the Docker registry's Nginx Docker Image.
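As an aside, if you later want to run more than one Nginx pod, you can scale the deployment and Kubernetes will create and maintain the desired number of replicas (an optional command, not needed for this tutorial):

• kubectl scale deployment nginx --replicas=3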

      Next, run the following command to create a service named nginx that will expose the app publicly. It will do so through a NodePort, a scheme that will make the pod accessible through an arbitrary port opened on each node of the cluster:

      • kubectl expose deploy nginx --port 80 --target-port 80 --type NodePort

      Services are another type of Kubernetes object that expose cluster internal services to clients, both internal and external. They are also capable of load balancing requests to multiple pods, and are an integral component in Kubernetes, frequently interacting with other components.

Run the following command:

• kubectl get services

      This will output text similar to the following:

      Output

NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP             1d
nginx        NodePort    10.109.228.209   <none>        80:nginx_port/TCP   40m

      From the third line of the above output, you can retrieve the port that Nginx is running on. Kubernetes will assign a random port that is greater than 30000 automatically, while ensuring that the port is not already bound by another service.
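If you'd rather retrieve the assigned port programmatically instead of reading it from the table, a jsonpath query works as an optional alternative:

• kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}'

This prints only the NodePort number, which is convenient in scripts.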

      To test that everything is working, visit http://worker_1_ip:nginx_port or http://worker_2_ip:nginx_port through a browser on your local machine. You will see Nginx's familiar welcome page.

      If you would like to remove the Nginx application, first delete the nginx service from the master node:

      • kubectl delete service nginx

Run the following to ensure that the service has been deleted:

• kubectl get services

      You will see the following output:

      Output

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   1d

      Then delete the deployment:

      • kubectl delete deployment nginx

Run the following to confirm that this worked:

• kubectl get deployments

      Output

      No resources found.

      Conclusion

      In this guide, you've successfully set up a Kubernetes cluster on Debian 9 using Kubeadm and Ansible for automation.

      If you're wondering what to do with the cluster now that it's set up, a good next step would be to get comfortable deploying your own applications and services onto the cluster. Here's a list of links with further information that can guide you in the process:

      • Dockerizing applications - lists examples that detail how to containerize applications using Docker.

      • Pod Overview - describes in detail how Pods work and their relationship with other Kubernetes objects. Pods are ubiquitous in Kubernetes, so understanding them will facilitate your work.

      • Deployments Overview - provides an overview of deployments. It is useful to understand how controllers such as deployments work since they are used frequently in stateless applications for scaling and the automated healing of unhealthy applications.

      • Services Overview - covers services, another frequently used object in Kubernetes clusters. Understanding the types of services and the options they have is essential for running both stateless and stateful applications.

      Other important concepts that you can look into are Volumes, Ingresses and Secrets, all of which come in handy when deploying production applications.

      Kubernetes has a lot of functionality and features to offer. The Kubernetes Official Documentation is the best place to learn about concepts, find task-specific guides, and look up API references for various objects.




      How To Set Up a Private Docker Registry on Top of DigitalOcean Spaces and Use It with DigitalOcean Kubernetes


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      A Docker registry is a storage and content delivery system for named Docker images, which are the industry standard for containerized applications. A private Docker registry allows you to securely share your images within your team or organization with more flexibility and control when compared to public ones. By hosting your private Docker registry directly in your Kubernetes cluster, you can achieve higher speeds, lower latency, and better availability, all while having control over the registry.

      The underlying registry storage is delegated to external drivers. The default storage system is the local filesystem, but you can swap this for a cloud-based storage driver. DigitalOcean Spaces is an S3-compatible object storage designed for developer teams and businesses that want a scalable, simple, and affordable way to store and serve vast amounts of data, and is very suitable for storing Docker images. It has a built-in CDN network, which can greatly reduce latency when frequently accessing images.

      In this tutorial, you’ll deploy your private Docker registry to your DigitalOcean Kubernetes cluster using Helm, backed up by DigitalOcean Spaces for storing data. You’ll create API keys for your designated Space, install the Docker registry to your cluster with custom configuration, configure Kubernetes to properly authenticate with it, and test it by running a sample deployment on the cluster. At the end of this tutorial, you’ll have a secure, private Docker registry installed on your DigitalOcean Kubernetes cluster.

      Prerequisites

      Before you begin this tutorial, you’ll need:

      • Docker installed on the machine that you’ll access your cluster from. For Ubuntu 18.04 visit How To Install and Use Docker on Ubuntu 18.04. You only need to complete the first step. Otherwise visit Docker’s website for other distributions.

      • A DigitalOcean Kubernetes cluster with your connection configuration configured as the kubectl default. Instructions on how to configure kubectl are shown under the Connect to your Cluster step shown when you create your cluster. To learn how to create a Kubernetes cluster on DigitalOcean, see Kubernetes Quickstart.

      • A DigitalOcean Space with API keys (access and secret). To learn how to create a DigitalOcean Space and API keys, see How To Create a DigitalOcean Space and API Key.

• The Helm package manager installed on your local machine, and Tiller installed on your cluster. To do this, complete steps 1 and 2 of How To Install Software on Kubernetes Clusters with the Helm Package Manager.

      • The Nginx Ingress Controller and Cert-Manager installed on the cluster. For a guide on how to do this, see How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes.

      • A domain name with two DNS A records pointed to the DigitalOcean Load Balancer used by the Ingress. If you are using DigitalOcean to manage your domain’s DNS records, consult How to Manage DNS Records to create A records. In this tutorial, we’ll refer to the A records as registry.example.com and k8s-test.example.com.

      Step 1 — Configuring and Installing the Docker Registry

      In this step, you will create a configuration file for the registry deployment and install the Docker registry to your cluster with the given config using the Helm package manager.

During the course of this tutorial, you will use a configuration file called chart_values.yaml to override some of the default settings for the Docker registry Helm chart. Helm calls its packages charts; these are sets of files that outline a related selection of Kubernetes resources. You’ll edit the settings to specify DigitalOcean Spaces as the underlying storage system and enable HTTPS access by wiring up Let’s Encrypt TLS certificates.

As part of the prerequisites, you created the echo1 and echo2 services and an echo_ingress ingress for testing purposes; you will not need these in this tutorial, so you can now delete them.

      Start off by deleting the ingress by running the following command:

      • kubectl delete -f echo_ingress.yaml

      Then, delete the two test services:

      • kubectl delete -f echo1.yaml && kubectl delete -f echo2.yaml

      The kubectl delete command accepts the file to delete when passed the -f parameter.

      Create a folder that will serve as your workspace:

      Navigate to it by running:

Now, using your text editor, create your chart_values.yaml file:

• nano chart_values.yaml

      Add the following lines, ensuring you replace the highlighted lines with your details:

      chart_values.yaml

      ingress:
        enabled: true
        hosts:
          - registry.example.com
        annotations:
          kubernetes.io/ingress.class: nginx
          certmanager.k8s.io/cluster-issuer: letsencrypt-prod
          nginx.ingress.kubernetes.io/proxy-body-size: "30720m"
        tls:
          - secretName: letsencrypt-prod
            hosts:
              - registry.example.com
      
      storage: s3
      
      secrets:
        htpasswd: ""
        s3:
          accessKey: "your_space_access_key"
          secretKey: "your_space_secret_key"
      
      s3:
        region: your_space_region
        regionEndpoint: your_space_region.digitaloceanspaces.com
        secure: true
        bucket: your_space_name
      

      The first block, ingress, configures the Kubernetes Ingress that will be created as a part of the Helm chart deployment. The Ingress object makes outside HTTP/HTTPS routes point to internal services in the cluster, thus allowing communication from the outside. The overridden values are:

      • enabled: set to true to enable the Ingress.
      • hosts: a list of hosts from which the Ingress will accept traffic.
      • annotations: a list of metadata that provides further direction to other parts of Kubernetes on how to treat the Ingress. You set the Ingress Controller to nginx, the Let's Encrypt cluster issuer to the production variant (letsencrypt-prod), and tell the nginx controller to accept files with a max size of 30 GB, which is a sensible limit for even the largest Docker images.
• tls: this subcategory configures Let's Encrypt HTTPS. You populate its hosts list, which defines the secure hosts from which this Ingress will accept HTTPS traffic, with your example domain name.

      Then, you set the file system storage to s3 — the other available option would be filesystem. Here s3 indicates using a remote storage system compatible with the industry-standard Amazon S3 API, which DigitalOcean Spaces fulfills.

      In the next block, secrets, you configure keys for accessing your DigitalOcean Space under the s3 subcategory. Finally, in the s3 block, you configure the parameters specifying your Space.

      Save and close your file.

      Now, if you haven't already done so, set up your A records to point to the Load Balancer you created as part of the Nginx Ingress Controller installation in the prerequisite tutorial. To see how to set your DNS on DigitalOcean, see How to Manage DNS Records.

      Next, ensure your Space isn't empty. The Docker registry won't run at all if you don't have any files in your Space. To get around this, upload a file. Navigate to the Spaces tab, find your Space, click the Upload File button, and upload any file you'd like. You could upload the configuration file you just created.

      Empty file uploaded to empty Space

Before installing anything via Helm, you need to refresh its cache. This will update the latest information about your chart repository. To do this, run the following command:

• helm repo update

      Now, you'll deploy the Docker registry chart with this custom configuration via Helm by running:

      • helm install stable/docker-registry -f chart_values.yaml --name docker-registry

      You'll see the following output:

      Output

NAME:   docker-registry
...
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                    DATA  AGE
docker-registry-config  1     1s

==> v1/Pod(related)
NAME                              READY  STATUS             RESTARTS  AGE
docker-registry-54df68fd64-l26fb  0/1    ContainerCreating  0         1s

==> v1/Secret
NAME                    TYPE    DATA  AGE
docker-registry-secret  Opaque  3     1s

==> v1/Service
NAME             TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
docker-registry  ClusterIP  10.245.131.143  <none>       5000/TCP  1s

==> v1beta1/Deployment
NAME             READY  UP-TO-DATE  AVAILABLE  AGE
docker-registry  0/1    1           0          1s

==> v1beta1/Ingress
NAME             HOSTS                 ADDRESS  PORTS    AGE
docker-registry  registry.example.com           80, 443  1s

NOTES:
1. Get the application URL by running these commands:
  https://registry.example.com/

      Helm lists all the resources it created as a result of the Docker registry chart deployment. The registry is now accessible from the domain name you specified earlier.

      You've configured and deployed a Docker registry on your Kubernetes cluster. Next, you will test the availability of the newly deployed Docker registry.

      Step 2 — Testing Pushing and Pulling

      In this step, you'll test your newly deployed Docker registry by pushing and pulling images to and from it. Currently, the registry is empty. To have something to push, you need to have an image available on the machine you're working from. Let's use the mysql Docker image.

Start off by pulling mysql from the Docker Hub:

• sudo docker pull mysql

      Your output will look like this:

      Output

Using default tag: latest
latest: Pulling from library/mysql
27833a3ba0a5: Pull complete
...
e906385f419d: Pull complete
Digest: sha256:a7cf659a764732a27963429a87eccc8457e6d4af0ee9d5140a3b56e74986eed7
Status: Downloaded newer image for mysql:latest

      You now have the image available locally. To inform Docker where to push it, you'll need to tag it with the host name, like so:

      • sudo docker tag mysql registry.example.com/mysql

      Then, push the image to the new registry:

      • sudo docker push registry.example.com/mysql

      This command will run successfully and indicate that your new registry is properly configured and accepting traffic — including pushing new images. If you see an error, double check your steps against steps 1 and 2.

      To test pulling from the registry cleanly, first delete the local mysql images with the following command:

      • sudo docker rmi registry.example.com/mysql && sudo docker rmi mysql

      Then, pull it from the registry:

      • sudo docker pull registry.example.com/mysql

      This command will take a few seconds to complete. If it runs successfully, that means your registry is working correctly. If it shows an error, double check what you have entered against the previous commands.

You can list Docker images available locally by running the following command:

• sudo docker images

      You'll see output listing the images available on your local machine, along with their ID and date of creation.

      Your Docker registry is configured. You've pushed an image to it and verified you can pull it down. Now let's add authentication so only certain people can access the code.

      Step 3 — Adding Account Authentication and Configuring Kubernetes Access

      In this step, you'll set up username and password authentication for the registry using the htpasswd utility.

      The htpasswd utility comes from the Apache webserver, which you can use for creating files that store usernames and passwords for basic authentication of HTTP users. The format of htpasswd files is username:hashed_password (one per line), which is portable enough to allow other programs to use it as well.
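For instance, a file containing a single user named sammy would consist of one line of this form (the bcrypt hash is truncated here for illustration):

sammy:$2y$05$...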

      To make htpasswd available on the system, you'll need to install it by running:

      • sudo apt install apache2-utils -y

      Note:
      If you're running this tutorial from a Mac, you'll need to use the following command to make htpasswd available on your machine:

      • docker run --rm -v ${PWD}:/app -it httpd htpasswd -b -c /app/htpasswd_file sammy password

Create htpasswd_file by executing the following command:

• touch htpasswd_file

      Add a username and password combination to htpasswd_file:

      • htpasswd -B htpasswd_file username

      Docker requires the password to be hashed using the bcrypt algorithm, which is why we pass the -B parameter. The bcrypt algorithm is a password hashing function based on Blowfish block cipher, with a work factor parameter, which specifies how expensive the hash function will be.
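If you want to tune that work factor, the htpasswd shipped with apache2-utils accepts an optional -C flag alongside -B to set the bcrypt cost (higher values are slower to compute and harder to brute-force); this is optional, and the default cost works fine for this tutorial:

• htpasswd -B -C 10 htpasswd_file username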

      Remember to replace username with your desired username. When run, htpasswd will ask you for the accompanying password and add the combination to htpasswd_file. You can repeat this command for as many users as you wish to add.

Now, show the contents of htpasswd_file by running the following command:

• cat htpasswd_file

      Select and copy the contents shown.

      To add authentication to your Docker registry, you'll need to edit chart_values.yaml and add the contents of htpasswd_file in the htpasswd variable.

Open chart_values.yaml for editing:

• nano chart_values.yaml

      Find the line that looks like this:

      chart_values.yaml

        htpasswd: ""
      

      Edit it to match the following, replacing htpasswd_file_contents with the contents you copied from the htpasswd_file:

      chart_values.yaml

        htpasswd: |-
          htpasswd_file_contents
      

Be careful with the indentation: each line of the file contents must have four spaces before it.

      Once you've added your contents, save and close the file.

      To propagate the changes to your cluster, run the following command:

      • helm upgrade docker-registry stable/docker-registry -f chart_values.yaml

      The output will be similar to that shown when you first deployed your Docker registry:

      Output

      Release "docker-registry" has been upgraded. Happy Helming! LAST DEPLOYED: ... NAMESPACE: default STATUS: DEPLOYED RESOURCES: ==> v1/ConfigMap NAME DATA AGE docker-registry-config 1 3m8s ==> v1/Pod(related) NAME READY STATUS RESTARTS AGE docker-registry-6c5bb7ffbf-ltnjv 1/1 Running 0 3m7s ==> v1/Secret NAME TYPE DATA AGE docker-registry-secret Opaque 4 3m8s ==> v1/Service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE docker-registry ClusterIP 10.245.128.245 <none> 5000/TCP 3m8s ==> v1beta1/Deployment NAME READY UP-TO-DATE AVAILABLE AGE docker-registry 1/1 1 1 3m8s ==> v1beta1/Ingress NAME HOSTS ADDRESS PORTS AGE docker-registry registry.example.com 159.89.215.50 80, 443 3m8s NOTES: 1. Get the application URL by running these commands: https://registry.example.com/

      This command calls Helm and instructs it to upgrade an existing release, in your case docker-registry, with its chart defined in stable/docker-registry in the chart repository, after applying the chart_values.yaml file.

      Now, you'll try pulling an image from the registry again:

      • sudo docker pull registry.example.com/mysql

      The output will look like the following:

      Output

Using default tag: latest
Error response from daemon: Get https://registry.example.com/v2/mysql/manifests/latest: no basic auth credentials

      It correctly failed because you provided no credentials. This means that your Docker registry authorizes requests correctly.

      To log in to the registry, run the following command:

      • sudo docker login registry.example.com

      Remember to replace registry.example.com with your domain address. It will prompt you for a username and password. If it shows an error, double check what your htpasswd_file contains. You must define the username and password combination in the htpasswd_file, which you created earlier in this step.

      To test the login, you can try to pull again by running the following command:

      • sudo docker pull registry.example.com/mysql

      The output will look similar to the following:

      Output

Using default tag: latest
latest: Pulling from mysql
Digest: sha256:f2dc118ca6fa4c88cde5889808c486dfe94bccecd01ca626b002a010bb66bcbe
Status: Image is up to date for registry.example.com/mysql:latest

      You've now configured Docker and can log in securely. To configure Kubernetes to log in to your registry, run the following command:

      • sudo kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/sammy/.docker/config.json --type=kubernetes.io/dockerconfigjson

      You will see the following output:

      Output

      secret/regcred created

      This command creates a secret in your cluster with the name regcred, takes the contents of the JSON file where Docker stores the credentials, and parses it as dockerconfigjson, which defines a registry credential in Kubernetes.
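To see what the secret holds, you can print it back out (the credentials appear base64-encoded under the data field):

• kubectl get secret regcred --output=yaml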

      You've used htpasswd to create a login config file, configured the registry to authenticate requests, and created a Kubernetes secret containing the login credentials. Next, you will test the integration between your Kubernetes cluster and registry.

      Step 4 — Testing Kubernetes Integration by Running a Sample Deployment

      In this step, you'll run a sample deployment with an image stored in the in-cluster registry to test the connection between your Kubernetes cluster and registry.

      In the last step, you created a secret, called regcred, containing login credentials for your private registry. It may contain login credentials for multiple registries, in which case you'll have to update the Secret accordingly.

      You can specify which secret Kubernetes should use when pulling containers in the pod definition by specifying imagePullSecrets. This step is necessary when the Docker registry requires authentication.
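As a minimal sketch, the relevant portion of a pod spec looks like the following; the image name here is a placeholder, and you'll see the real field in the full manifest later in this step:

spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
  imagePullSecrets:
  - name: regcred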

      You'll now deploy a sample Hello World image from your private Docker registry to your cluster. First, in order to push it, you'll pull it to your machine by running the following command:

      • sudo docker pull paulbouwer/hello-kubernetes:1.5

      Then, tag it by running:

      • sudo docker tag paulbouwer/hello-kubernetes:1.5 registry.example.com/paulbouwer/hello-kubernetes:1.5

      Finally, push it to your registry:

      • sudo docker push registry.example.com/paulbouwer/hello-kubernetes:1.5

      Delete it from your machine as you no longer need it locally:

      • sudo docker rmi registry.example.com/paulbouwer/hello-kubernetes:1.5

      Now, you'll deploy the sample Hello World application. First, create a new file, hello-world.yaml, using your text editor:

Next, you'll define a Service and an Ingress to make the app accessible from outside the cluster. Add the following lines, replacing the highlighted lines with your domains:

      hello-world.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: hello-kubernetes-ingress
        annotations:
          kubernetes.io/ingress.class: nginx
          nginx.ingress.kubernetes.io/rewrite-target: /
      spec:
        rules:
        - host: k8s-test.example.com
          http:
            paths:
            - path: /
              backend:
                serviceName: hello-kubernetes
                servicePort: 80
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: hello-kubernetes
      spec:
        type: NodePort
        ports:
        - port: 80
          targetPort: 8080
        selector:
          app: hello-kubernetes
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello-kubernetes
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: hello-kubernetes
        template:
          metadata:
            labels:
              app: hello-kubernetes
          spec:
            containers:
            - name: hello-kubernetes
              image: registry.example.com/paulbouwer/hello-kubernetes:1.5
              ports:
              - containerPort: 8080
            imagePullSecrets:
            - name: regcred
      

      First, you define the Ingress for the Hello World deployment, which you will route through the Load Balancer that the Nginx Ingress Controller owns. Then, you define a service that can access the pods created in the deployment. In the actual deployment spec, you specify the image as the one located in your registry and set imagePullSecrets to regcred, which you created in the previous step.

      Save and close the file. To deploy this to your cluster, run the following command:

      • kubectl apply -f hello-world.yaml

      You'll see the following output:

      Output

ingress.extensions/hello-kubernetes-ingress created
service/hello-kubernetes created
deployment.apps/hello-kubernetes created

      You can now navigate to your test domain — the second A record, k8s-test.example.com in this tutorial. You will see the Kubernetes Hello world! page.

      Hello World page

      The Hello World page lists some environment information, like the Linux kernel version and the internal ID of the pod the request was served from. You can also access your Space via the web interface to see the images you've worked with in this tutorial.

      If you want to delete this Hello World deployment after testing, run the following command:

      • kubectl delete -f hello-world.yaml

      You've created a sample Hello World deployment to test if Kubernetes is properly pulling images from your private registry.

      Conclusion

You have now successfully deployed your own private Docker registry on your DigitalOcean Kubernetes cluster, using DigitalOcean Spaces as the storage layer underneath. There is no limit to how many images you can store: Spaces can extend infinitely, while providing the same security and robustness. In production, though, you should always strive to optimize your Docker images as much as possible; take a look at the How To Optimize Docker Images for Production tutorial.




      How To Automate Deployments to DigitalOcean Kubernetes with CircleCI


      The author selected the Tech Education Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Having an automated deployment process is a requirement for a scalable and resilient application, and GitOps, or Git-based DevOps, has rapidly become a popular method of organizing CI/CD with a Git repository as a “single source of truth.” Tools like CircleCI integrate with your GitHub repository, allowing you to test and deploy your code automatically every time you make a change to your repository. When this kind of CI/CD is combined with the flexibility of Kubernetes infrastructure, you can build an application that scales easily with changing demand.

      In this article you will use CircleCI to deploy a sample application to a DigitalOcean Kubernetes (DOKS) cluster. After reading this tutorial, you’ll be able to apply these same techniques to deploy other CI/CD tools that are buildable as Docker images.

      Prerequisites

      To follow this tutorial, you’ll need to have:

      For this tutorial, you will use Kubernetes version 1.13.5 and kubectl version 1.10.7.

      Step 1 — Creating Your DigitalOcean Kubernetes Cluster

      Note: You can skip this section if you already have a running DigitalOcean Kubernetes cluster.

      In this first step, you will create the DigitalOcean Kubernetes (DOKS) cluster from which you will deploy your sample application. The kubectl commands executed from your local machine will change or retrieve information directly from the Kubernetes cluster.

      Go to the Kubernetes page on your DigitalOcean account.

      Click Create a Kubernetes cluster, or click the green Create button at the top right of the page and select Clusters from the dropdown menu.

Creating a Kubernetes Cluster on DigitalOcean

The next page is where you are going to specify the details of your cluster. On Select a Kubernetes version, pick version 1.13.5-do.0. If this one is not available, choose a more recent one.

      For Choose a datacenter region, choose the region closest to you. This tutorial will use San Francisco – 2.

You then have the option to build your Node pool(s). On Kubernetes, a node is a worker machine, which contains the services necessary to run pods. On DigitalOcean, each node is a Droplet. Your node pool will consist of a single Standard node. Select the 2GB/1vCPU configuration and set the number of nodes to 1.

You can add extra tags if you want; this can be useful if you plan to use the DigitalOcean API or just want to better organize your node pools.

      On Choose a name, for this tutorial, use kubernetes-deployment-tutorial. This will make it easier to follow throughout while reading the next sections. Finally, click the green Create Cluster button to create your cluster.

      After cluster creation, there will be a button on the UI to download a configuration file called Download Config File. This is the file you will be using to authenticate the kubectl commands you are going to run against your cluster. Download it to your kubectl machine.

      The default way to use that file is to always pass the --kubeconfig flag and the path to it on all commands you run with kubectl. For example, if you downloaded the config file to Desktop, you would run the kubectl get pods command like this:

      • kubectl --kubeconfig ~/Desktop/kubernetes-deployment-tutorial-kubeconfig.yaml get pods

      This would yield the following output:

      Output

      No resources found.

      This means you accessed your cluster. The No resources found. message is correct, since you don’t have any pods on your cluster.
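As an alternative to passing the flag on every command, you can export the KUBECONFIG environment variable for your current shell session (an optional shortcut):

• export KUBECONFIG=~/Desktop/kubernetes-deployment-tutorial-kubeconfig.yaml

kubectl reads this variable and uses the referenced file automatically.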

If you are not maintaining any other Kubernetes clusters, you can copy the kubeconfig file to a folder in your home directory called .kube. Create that directory in case it does not exist:

• mkdir -p ~/.kube

      Then copy the config file into the newly created .kube directory and rename it config:

      • cp current_kubernetes-deployment-tutorial-kubeconfig.yaml_file_path ~/.kube/config

The config file should now have the path ~/.kube/config. This is the file that kubectl reads by default when running any command, so there is no need to pass --kubeconfig anymore. Run the following:

• kubectl get pods

      You will receive the following output:

      Output

      No resources found.

Now access the cluster with the following:

• kubectl get nodes

      You will receive the list of nodes on your cluster. The output will be similar to this:

      Output

NAME                                    STATUS   ROLES    AGE   VERSION
kubernetes-deployment-tutorial-1-7pto   Ready    <none>   1h    v1.13.5

In this tutorial you are going to use the default namespace for all kubectl commands and manifest files, which are files that define the workload and operating parameters of work in Kubernetes. Namespaces are like virtual clusters inside your single physical cluster. You can change to any other namespace you want; just make sure to always pass it to kubectl using the --namespace flag, and/or specify it in the metadata field of your Kubernetes manifests. They are a great way to organize the deployments of your team and their running environments; read more about them in the official Kubernetes overview on Namespaces.
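For example, to list the pods that Kubernetes itself runs in the kube-system namespace, you would pass the flag explicitly (shown here only as an illustration; this tutorial sticks to default):

• kubectl get pods --namespace kube-system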

      By finishing this step you are now able to run kubectl against your cluster. In the next step, you will create the local Git repository you are going to use to house your sample application.

      Step 2 — Creating the Local Git Repository

      You are now going to structure your sample deployment in a local Git repository. You will also create some Kubernetes manifests that will be global to all deployments you are going to do on your cluster.

      Note: This tutorial has been tested on Ubuntu 18.04, and the individual commands are styled to match this OS. However, most of the commands here can be applied to other Linux distributions with little to no change needed, and commands like kubectl are platform-agnostic.

      First, create a new Git repository locally that you will push to GitHub later on. Create an empty folder called do-sample-app in your home directory and cd into it:

      • mkdir ~/do-sample-app
      • cd ~/do-sample-app

Now create a new Git repository in this folder with the following command:

• git init

      Inside this repository, create an empty folder called kube:

      • mkdir ~/do-sample-app/kube/

      This will be the location where you are going to store the Kubernetes resources manifests related to the sample application that you will deploy to your cluster.

Now, create another folder called kube-general, but this time outside of the Git repository you just created. Make it inside your home directory:

• mkdir ~/kube-general/

      This folder is outside of your Git repository because it will be used to store manifests that are not specific to a single deployment on your cluster, but common to multiple ones. This will allow you to reuse these general manifests for different deployments.

      With your folders created and the Git repository of your sample application in place, it's time to arrange the authentication and authorization of your DOKS cluster.

      Step 3 — Creating a Service Account

It's generally not recommended to use the default admin user to authenticate from other Services into your Kubernetes cluster. If your keys on the external provider were compromised, your whole cluster would be compromised as well.

      Instead you are going to use a single Service Account with a specific Role, which is all part of the RBAC Kubernetes authorization model.

      This authorization model is based on Roles and Resources. You start by creating a Service Account, which is basically a user on your cluster, then you create a Role, in which you specify what resources it has access to on your cluster. Finally, you create a Role Binding, which is used to make the connection between the Role and the Service Account previously created, granting to the Service Account access to all resources the Role has access to.
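To make that flow concrete, here is a minimal, illustrative sketch of a Role and a Role Binding; the names and rules below are placeholders, not the definitions this tutorial will use for the cicd account:

# An example Role granting read access to pods in the default namespace
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-role
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
# An example Role Binding connecting the Role to the cicd Service Account
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-role-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: cicd
  namespace: default
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io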

      The first Kubernetes resource you are going to create is the Service Account for your CI/CD user, which this tutorial will name cicd.

      Create the file cicd-service-account.yml inside the ~/kube-general folder, and open it with your favorite text editor:

      • nano ~/kube-general/cicd-service-account.yml

      Write the following content on it:

      ~/kube-general/cicd-service-account.yml

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: cicd
        namespace: default
      

      This is a YAML file; all Kubernetes resources are represented using one. In this case you are saying this resource is from Kubernetes API version v1 (internally kubectl creates resources by calling Kubernetes HTTP APIs), and it is a ServiceAccount.

      The metadata field is used to add more information about this resource. In this case, you are giving this ServiceAccount the name cicd, and creating it on the default namespace.

      You can now create this Service Account on your cluster by running kubectl apply, like the following:

      • kubectl apply -f ~/kube-general/

      You will receive output similar to the following:

      Output

      serviceaccount/cicd created

      To make sure your Service Account is working, try to log in to your cluster using it. To do that you first need to obtain its access token and store it in an environment variable. Every Service Account has an access token which Kubernetes stores as a Secret.

      You can retrieve this secret using the following command:

      • TOKEN=$(kubectl get secret $(kubectl get secret | grep cicd-token | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode)

      Some explanation on what this command is doing:

      $(kubectl get secret | grep cicd-token | awk '{print $1}')
      

      This is used to retrieve the name of the secret related to your cicd Service Account. kubectl get secret returns the list of secrets in the default namespace; then you use grep to search for the line related to your cicd Service Account, and awk '{print $1}' prints the first column of that line, which is the secret's name.

      kubectl get secret preceding-command -o jsonpath='{.data.token}' | base64 --decode
      

      This retrieves the secret holding your Service Account's token. You then access the token field using jsonpath, and pass the result to base64 --decode. This is necessary because the token is stored as a Base64-encoded string. The token itself is a JSON Web Token.
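      If you want to inspect that secret before decoding the token, a quick, optional check is to list the secrets and describe the one whose name starts with cicd-token (the suffix shown here is a placeholder; the generated name on your cluster will differ):

      • kubectl get secrets
      • kubectl describe secret cicd-token-abcde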

      You can now try to retrieve your pods with the cicd Service Account. Run the following command, replacing server-from-kubeconfig-file with the server URL that can be found after server: in ~/.kube/config. This command will give a specific error that you will learn about later in this tutorial:

      • kubectl --insecure-skip-tls-verify --kubeconfig="/dev/null" --server=server-from-kubeconfig-file --token=$TOKEN get pods

      --insecure-skip-tls-verify skips the step of verifying the certificate of the server, since you are just testing and do not need to verify this. --kubeconfig="/dev/null" is to make sure kubectl does not read your config file and credentials but instead uses the token provided.

      The output should be similar to this:

      Output

      Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:cicd" cannot list resource "pods" in API group "" in the namespace "default"

      This is an error, but it shows that the token worked. The error you received is about your Service Account not having the necessary authorization to list the resource pods, but you were able to access the server itself. If your token had not worked, the error would have been the following one:

      Output

      error: You must be logged in to the server (Unauthorized)

      Now that authentication was successful, the next step is to fix the authorization error for the Service Account. You will do this by creating a Role with the necessary permissions and binding it to your Service Account.
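      Before moving on, note that you can also check what a Service Account is authorized to do without crafting a full request, by combining kubectl auth can-i with impersonation. For example:

      • kubectl auth can-i list pods --as=system:serviceaccount:default:cicd

      Right now this should print no; it will print yes once the Role and Role Binding from the next step are in place.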

      Step 4 — Creating the Role and the Role Binding

      Kubernetes has two ways to define roles: using a Role or a ClusterRole resource. The difference is that a Role applies to a single namespace, while a ClusterRole is valid for the whole cluster.

      As you are using a single namespace in this tutorial, you will use a Role.

      Create the file ~/kube-general/cicd-role.yml and open it with your favorite text editor:

      • nano ~/kube-general/cicd-role.yml

      The basic idea is to grant access to do everything related to most Kubernetes resources in the default namespace. Your Role would look like this:

      ~/kube-general/cicd-role.yml

      kind: Role
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: cicd
        namespace: default
      rules:
        - apiGroups: ["", "apps", "batch", "extensions"]
          resources: ["deployments", "services", "replicasets", "pods", "jobs", "cronjobs"]
          verbs: ["*"]
      

      This YAML has some similarities with the one you created previously, but here you are saying this resource is a Role, and it's from the Kubernetes API rbac.authorization.k8s.io/v1. You are naming your Role cicd, and creating it in the same namespace where you created your ServiceAccount: the default one.

      Then you have the rules field, which is a list of resources this Role has access to. In Kubernetes, resources are defined based on the API group they belong to, the resource kind itself, and the actions you can perform on them, which are represented by verbs. These verbs are similar to HTTP methods.

      In this case you are saying that your Role is allowed to do everything, *, on the following resources: deployments, services, replicasets, pods, jobs, and cronjobs, as long as those resources belong to the following API groups: "" (the empty string), apps, batch, and extensions. The empty string means the core API group; if you use apiVersion: v1 when creating a resource, that resource is part of this group.

      A Role by itself does nothing; you must also create a RoleBinding, which binds a Role to something, in this case, a ServiceAccount.

      Create the file ~/kube-general/cicd-role-binding.yml and open it:

      • nano ~/kube-general/cicd-role-binding.yml

      Add the following lines to the file:

      ~/kube-general/cicd-role-binding.yml

      kind: RoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: cicd
        namespace: default
      subjects:
        - kind: ServiceAccount
          name: cicd
          namespace: default
      roleRef:
        kind: Role
        name: cicd
        apiGroup: rbac.authorization.k8s.io
      

      Your RoleBinding has some specific fields that have not yet been covered in this tutorial. roleRef is the Role you want to bind to something; in this case it is the cicd role you created earlier. subjects is the list of resources you are binding your role to; in this case it's a single ServiceAccount called cicd.

      Note: If you had used a ClusterRole, you would have to create a ClusterRoleBinding instead of a RoleBinding. The file would be almost the same. The only difference would be that it would have no namespace field inside the metadata.

      With those files created you will be able to use kubectl apply again. Create those new resources on your Kubernetes cluster by running the following command:

      • kubectl apply -f ~/kube-general/

      You will receive output similar to the following:

      Output

      rolebinding.rbac.authorization.k8s.io/cicd created
      role.rbac.authorization.k8s.io/cicd created
      serviceaccount/cicd created
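      Optionally, you can confirm that the binding connects the Role to the Service Account by describing it:

      • kubectl describe rolebinding cicd --namespace=default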

      Now, try the command you ran previously:

      • kubectl --insecure-skip-tls-verify --kubeconfig="/dev/null" --server=server-from-kubeconfig-file --token=$TOKEN get pods

      Since you have no pods, this will yield the following output:

      Output

      No resources found.

      In this step, you gave the Service Account you are going to use on CircleCI the necessary authorization to do meaningful actions on your cluster like listing, creating, and updating resources. Now it's time to create your sample application.

      Step 5 — Creating Your Sample Application

      Note: All commands and files created from now on will start from the folder ~/do-sample-app you created earlier. This is because you are now creating files specific to the sample application that you are going to deploy to your cluster.

      The Kubernetes Deployment you are going to create will use the Nginx image as a base, and your application will be a simple static HTML page. This is a great start because it allows you to test if your deployment works by serving a simple HTML page directly from Nginx. As you will see later on, you can redirect all traffic coming to a local address:port to your deployment on your cluster to test if it's working.

      Inside the repository you set up earlier, create a new Dockerfile file and open it with your text editor of choice:

      • nano ~/do-sample-app/Dockerfile

      Write the following on it:

      ~/do-sample-app/Dockerfile

      FROM nginx:1.14
      
      COPY index.html /usr/share/nginx/html/index.html
      

      This tells Docker to build the application image from the nginx:1.14 base image, copying your index.html into the web server's default document root.

      Now create a new index.html file and open it:

      • nano ~/do-sample-app/index.html

      Write the following HTML content:

      ~/do-sample-app/index.html

      <!DOCTYPE html>
      <title>DigitalOcean</title>
      <body>
        Kubernetes Sample Application
      </body>
      

      This HTML will display a simple message that will let you know if your application is working.

      You can test if the image is correct by building and then running it.

      First, build the image with the following command, replacing dockerhub-username with your own Docker Hub username. You must specify your username here so that pushing the image to Docker Hub later on will work:

      • docker build ~/do-sample-app/ -t dockerhub-username/do-kubernetes-sample-app

      Now run the image. Use the following command, which starts your image and forwards any local traffic on port 8080 to port 80 inside the container, the port Nginx listens on by default:

      • docker run --rm -it -p 8080:80 dockerhub-username/do-kubernetes-sample-app

      The command prompt will stop being interactive while the command is running. Instead, you will see the Nginx access logs. If you open localhost:8080 in any browser, it should show an HTML page with the content of ~/do-sample-app/index.html. In case you don't have a browser available, you can open a new terminal window and use the following curl command to fetch the HTML from the webpage:

      • curl localhost:8080

      You will receive the following output:

      Output

      <!DOCTYPE html>
      <title>DigitalOcean</title>
      <body>
        Kubernetes Sample Application
      </body>

      Stop the container (CTRL + C on the terminal where it's running), and submit this image to your Docker Hub account. To do this, first log in to Docker Hub:

      • docker login

      Fill in the required information about your Docker Hub account, then push the image with the following command (don't forget to replace the dockerhub-username with your own):

      • docker push dockerhub-username/do-kubernetes-sample-app

      You have now pushed your sample application image to your Docker Hub account. In the next step, you will create a Deployment on your DOKS cluster from this image.

      Step 6 — Creating the Kubernetes Deployment and Service

      With your Docker image created and working, you will now create a manifest telling Kubernetes how to create a Deployment from it on your cluster.

      Create the YAML deployment file ~/do-sample-app/kube/do-sample-deployment.yml and open it with your text editor:

      • nano ~/do-sample-app/kube/do-sample-deployment.yml

      Write the following content on the file, making sure to replace dockerhub-username with your Docker Hub username:

      ~/do-sample-app/kube/do-sample-deployment.yml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: do-kubernetes-sample-app
        namespace: default
        labels:
          app: do-kubernetes-sample-app
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: do-kubernetes-sample-app
        template:
          metadata:
            labels:
              app: do-kubernetes-sample-app
          spec:
            containers:
              - name: do-kubernetes-sample-app
                image: dockerhub-username/do-kubernetes-sample-app:latest
                ports:
                  - containerPort: 80
                    name: http
      

      Kubernetes Deployments are part of the apps API group, so the apiVersion of your manifest is set to apps/v1. In metadata you added a field you have not used previously, called metadata.labels. This is useful for organizing your deployments. The spec field describes the desired behavior of your deployment. A Deployment is responsible for managing one or more pods; in this case it will have a single replica, as set by the spec.replicas field. That is, it will create and manage a single pod.

      To manage pods, your deployment must know which pods it's responsible for. The spec.selector field gives it that information. In this case, the deployment will be responsible for all pods with the label app: do-kubernetes-sample-app. The spec.template field contains the details of the pod this deployment will create. Inside the template you also have a spec.template.metadata field. The labels inside this field must match the ones used in spec.selector. spec.template.spec is the specification of the pod itself. In this case it contains a single container, called do-kubernetes-sample-app. The image of that container is the image you built previously and pushed to Docker Hub.

      This YAML file also tells Kubernetes that this container exposes the port 80, and gives this port the name http.
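      Once you apply this manifest later in this step, you will be able to adjust the replica count without editing the file. For example, the following commands (shown here only as a sketch) scale the deployment to three pods and check its status:

      • kubectl scale deployment/do-kubernetes-sample-app --replicas=3
      • kubectl get deployment do-kubernetes-sample-app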

      To access the port exposed by your Deployment, create a Service. Make a file named ~/do-sample-app/kube/do-sample-service.yml and open it with your favorite editor:

      • nano ~/do-sample-app/kube/do-sample-service.yml

      Next, add the following lines to the file:

      ~/do-sample-app/kube/do-sample-service.yml

      apiVersion: v1
      kind: Service
      metadata:
        name: do-kubernetes-sample-app
        namespace: default
        labels:
          app: do-kubernetes-sample-app
      spec:
        type: ClusterIP
        ports:
          - port: 80
            targetPort: http
            name: http
        selector:
          app: do-kubernetes-sample-app
      

      This file gives your Service the same labels used on your deployment. This is not required, but it helps to organize your applications on Kubernetes.

      The Service resource also has a spec field. The spec.type field is responsible for the behavior of the service. In this case it's a ClusterIP, which means the service is exposed on a cluster-internal IP, and is only reachable from within your cluster. This is the default spec.type for services. spec.selector is the label selector criteria that should be used when picking the pods to be exposed by this service. Since your pod has the label app: do-kubernetes-sample-app, you used it here. spec.ports are the ports exposed by the pod's containers that you want to expose from this service. Your pod has a single container which exposes port 80, named http, so you are using it here as targetPort. The service exposes that port on port 80 too, with the same name, but you could have used a different port/name combination than the one from the container.
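      After you apply both manifests below, you can check that the Service's selector actually matched your pod by listing the Service and its endpoints:

      • kubectl get service do-kubernetes-sample-app
      • kubectl get endpoints do-kubernetes-sample-app

      The endpoints list should contain your pod's IP on port 80; an empty list would indicate a selector/label mismatch.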

      With your Service and Deployment manifest files created, you can now create those resources on your Kubernetes cluster using kubectl:

      • kubectl apply -f ~/do-sample-app/kube/

      You will receive the following output:

      Output

      deployment.apps/do-kubernetes-sample-app created
      service/do-kubernetes-sample-app created

      Test if this is working by forwarding one port on your machine to the port that the service is exposing inside your Kubernetes cluster. You can do that using kubectl port-forward:

      • kubectl port-forward $(kubectl get pod --selector="app=do-kubernetes-sample-app" --output jsonpath='{.items[0].metadata.name}') 8080:80

      The subshell command $(kubectl get pod --selector="app=do-kubernetes-sample-app" --output jsonpath='{.items[0].metadata.name}') retrieves the name of the pod matching the label you used. Alternatively, you could have retrieved the name from the list shown by kubectl get pods.

      After you run port-forward, the shell will stop being interactive, and will instead output the requests redirected to your cluster:

      Output

      Forwarding from 127.0.0.1:8080 -> 80
      Forwarding from [::1]:8080 -> 80

      Opening localhost:8080 in any browser should render the same page you saw when you ran the container locally, but it's now coming from your Kubernetes cluster! As before, you can also use curl in a new terminal window to check if it's working:

      • curl localhost:8080

      You will receive the following output:

      Output

      <!DOCTYPE html>
      <title>DigitalOcean</title>
      <body>
        Kubernetes Sample Application
      </body>

      Next, it's time to push all the files you created to your GitHub repository. To do this you must first create a repository on GitHub called digital-ocean-kubernetes-deploy.

      In order to keep this repository simple for demonstration purposes, do not initialize the new repository with a README, license, or .gitignore file when asked on the GitHub UI. You can add these files later on.

      With the repository created, point your local repository to the one on GitHub. To do this, press CTRL + C to stop kubectl port-forward and get the command line back, then run the following commands to add a new remote called origin:

      • cd ~/do-sample-app/
      • git remote add origin https://github.com/your-github-account-username/digital-ocean-kubernetes-deploy.git

      There should be no output from the preceding command.

      Next, commit all the files you created up to now to the GitHub repository. First, add the files:

      • git add --all

      Next, commit the files to your repository, with a commit message in quotation marks:

      • git commit -m "initial commit"

      This will yield output similar to the following:

      Output

      [master (root-commit) db321ad] initial commit
       4 files changed, 47 insertions(+)
       create mode 100644 Dockerfile
       create mode 100644 index.html
       create mode 100644 kube/do-sample-deployment.yml
       create mode 100644 kube/do-sample-service.yml

      Finally, push the files to GitHub:

      • git push -u origin master

      You will be prompted for your username and password. Once you have entered this, you will see output like this:

      Output

      Counting objects: 7, done.
      Delta compression using up to 8 threads.
      Compressing objects: 100% (7/7), done.
      Writing objects: 100% (7/7), 907 bytes | 0 bytes/s, done.
      Total 7 (delta 0), reused 0 (delta 0)
      To github.com:your-github-account-username/digital-ocean-kubernetes-deploy.git
       * [new branch]      master -> master
      Branch master set up to track remote branch master from origin.

      If you go to your GitHub repository page you will now see all the files there. With your project up on GitHub, you can now set up CircleCI as your CI/CD tool.

      Step 7 — Configuring CircleCI

      For this tutorial, you will use CircleCI to automate deployments of your application whenever the code is updated, so you will need to log in to CircleCI using your GitHub account and set up your repository.

      First, go to their homepage https://circleci.com, and press Sign Up.

      [Figure: CircleCI home page]

      You are using GitHub, so click the green Sign Up with GitHub button.

      CircleCI will redirect you to an authorization page on GitHub. CircleCI needs some permissions on your account to be able to start building your projects: permission to obtain your email address, to add deploy keys and hooks to your repositories, and to add SSH keys to your account. If you need more information on what CircleCI does with your data, check their documentation about GitHub integration.

      [Figure: CircleCI GitHub authorization page]

      After authorizing CircleCI you will be redirected to their dashboard.

      [Figure: CircleCI project dashboard]

      Next, set up your GitHub repository in CircleCI. Click on Set Up New Projects from the CircleCI Dashboard, or as a shortcut, open the following link, replacing your-github-username with your own GitHub username: https://circleci.com/setup-project/gh/your-github-username/digital-ocean-kubernetes-deploy.

      After that press Start Building. Do not create a config file in your repository just yet, and don't worry if the first build fails.

      [Figure: CircleCI Start Building page]

      Next, specify some environment variables in the CircleCI settings. You can find the project's settings by clicking the small cog icon in the top right section of the page and then selecting Environment Variables, or you can go directly to the environment variables page using the following URL (remember to fill in your username): https://circleci.com/gh/your-github-username/digital-ocean-kubernetes-deploy/edit#env-vars. Press Add Variable to create new environment variables.

      First, add two environment variables called DOCKERHUB_USERNAME and DOCKERHUB_PASS which will be needed later on to push the image to Docker Hub. Set the values to your Docker Hub username and password, respectively.

      Then add three more: KUBERNETES_TOKEN, KUBERNETES_SERVER, and KUBERNETES_CLUSTER_CERTIFICATE.

      The value of KUBERNETES_TOKEN will be the value of the local environment variable you used earlier to authenticate on your Kubernetes cluster using your Service Account user. If you have closed the terminal, you can always run the following command to retrieve it again:

      • kubectl get secret $(kubectl get secret | grep cicd-token | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode

      KUBERNETES_SERVER will be the string you passed as the --server flag to kubectl when you logged in with your cicd Service Account. You can find this after server: in the ~/.kube/config file, or in the file kubernetes-deployment-tutorial-kubeconfig.yaml downloaded from the DigitalOcean dashboard when you made the initial setup of your Kubernetes cluster.

      KUBERNETES_CLUSTER_CERTIFICATE should also be available on your ~/.kube/config file. It's the certificate-authority-data field on the clusters item related to your cluster. It should be a long string; make sure to copy all of it.
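      If you prefer extracting these two values from the command line instead of reading the file manually, a hedged sketch using kubectl's jsonpath support (this assumes your cluster is the first entry in your kubeconfig) is:

      • kubectl config view --raw -o jsonpath='{.clusters[0].cluster.server}'
      • kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'

      The --raw flag is needed because kubectl config view redacts certificate data by default.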

      Those environment variables must be defined here because most of them contain sensitive information, and it is not secure to place them directly in the CircleCI YAML config file.

      With CircleCI listening for changes on your repository, and the environment variables configured, it's time to create the configuration file.

      Make a directory called .circleci inside your sample application repository:

      • mkdir ~/do-sample-app/.circleci/

      Inside this directory, create a file named config.yml and open it with your favorite editor:

      • nano ~/do-sample-app/.circleci/config.yml

      Add the following content to the file, making sure to replace dockerhub-username with your Docker Hub username:

      ~/do-sample-app/.circleci/config.yml

      version: 2.1
      jobs:
        build:
          docker:
            - image: circleci/buildpack-deps:stretch
          environment:
            IMAGE_NAME: dockerhub-username/do-kubernetes-sample-app
          working_directory: ~/app
          steps:
            - checkout
            - setup_remote_docker
            - run:
                name: Build Docker image
                command: |
                  docker build -t $IMAGE_NAME:latest .
            - run:
                name: Push Docker Image
                command: |
                  echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
                  docker push $IMAGE_NAME:latest
      workflows:
        version: 2
        build-master:
          jobs:
            - build:
                filters:
                  branches:
                    only: master
      

      This sets up a Workflow with a single job, called build, that runs for every commit to the master branch. This job uses the image circleci/buildpack-deps:stretch to run its steps, which is a CircleCI image based on the official buildpack-deps Docker image, but with some extra tools installed, such as the Docker binaries themselves.

      The workflow has four steps:

      • checkout retrieves the code from GitHub.
      • setup_remote_docker sets up a remote, isolated environment for each build. This is required before you use any docker command inside a job step: because the steps themselves run inside a Docker container, setup_remote_docker allocates a separate machine on which to run the Docker commands.
      • The first run step builds the image, as you did previously locally. For that you are using the environment variable you declared in environment:, IMAGE_NAME (remember to replace dockerhub-username with your own username).
      • The last run step pushes the image to Docker Hub, using the environment variables you configured in the project settings to authenticate.
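      Before pushing the config, you can optionally lint it locally. This assumes you have the CircleCI CLI installed, which this tutorial does not otherwise use:

      • circleci config validate ~/do-sample-app/.circleci/config.yml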

      Commit the new file to your repository and push the changes upstream:

      • cd ~/do-sample-app/
      • git add .circleci/
      • git commit -m "add CircleCI config"
      • git push

      This will trigger a new build on CircleCI. The CircleCI workflow is going to correctly build and push your image to Docker Hub.

      [Figure: CircleCI build page showing a successful build]

      Now that you have created and tested your CircleCI workflow, you can set your DOKS cluster to retrieve the up-to-date image from Docker Hub and deploy it automatically when changes are made.

      Step 8 — Updating the Deployment on the Kubernetes Cluster

      Now that your application image is being built and sent to Docker Hub every time you push changes to the master branch on GitHub, it's time to update your deployment on your Kubernetes cluster so that it retrieves the new image and uses it as a base for deployment.

      To do that, first fix one issue with your deployment: it currently depends on an image with the latest tag. This tag does not tell you which version of the image you are using. You cannot easily lock your deployment to that tag because it's overwritten every time you push a new image to Docker Hub, and by using it like that you lose one of the best things about having containerized applications: reproducibility.

      You can read more about this in this article about why depending on the Docker latest tag is an anti-pattern.

      To correct this, you first must make some changes to your Push Docker Image build step in the ~/do-sample-app/.circleci/config.yml file. Open up the file:

      • nano ~/do-sample-app/.circleci/config.yml

      Then add the highlighted lines to your Push Docker Image step:

      ~/do-sample-app/.circleci/config.yml:16-22

      ...
            - run:
                name: Push Docker Image
                command: |
                  echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
                  docker tag $IMAGE_NAME:latest $IMAGE_NAME:$CIRCLE_SHA1
                  docker push $IMAGE_NAME:latest
                  docker push $IMAGE_NAME:$CIRCLE_SHA1
      ...
      

      Save and exit the file.

      CircleCI has some special environment variables set by default. One of them is CIRCLE_SHA1, which contains the hash of the commit it's building. The changes you made to ~/do-sample-app/.circleci/config.yml will use this environment variable to tag your image with the commit it was built from, always tagging the most recent build with the latest tag. That way, you always have specific images available, without overwriting them when you push something new to your repository.

      Next, change your deployment manifest file to use that tag. This would be simple if, inside ~/do-sample-app/kube/do-sample-deployment.yml, you could set your image as dockerhub-username/do-kubernetes-sample-app:$COMMIT_SHA1, but kubectl doesn't do variable substitution inside manifests when you use kubectl apply. To account for this, you can use envsubst, a CLI tool that is part of the GNU gettext project. It reads text from standard input and, whenever it finds a reference to an environment variable that is set, replaces it with that variable's value, writing the resulting text to standard output.
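      To see how envsubst behaves, here is a minimal, hypothetical example (the GREETING variable exists only for this demonstration):

      • export GREETING=world
      • echo 'Hello, $GREETING' | envsubst

      This prints Hello, world: the single quotes prevent your shell from expanding the variable, so envsubst receives the literal text and performs the substitution itself.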

      To use this, you will create a simple bash script which will be responsible for your deployment. Make a new folder called scripts inside ~/do-sample-app/:

      • mkdir ~/do-sample-app/scripts/

      Inside that folder create a new bash script called ci-deploy.sh and open it with your favorite text editor:

      • nano ~/do-sample-app/scripts/ci-deploy.sh

      Inside it write the following bash script:

      ~/do-sample-app/scripts/ci-deploy.sh

      #!/bin/bash
      # exit the script when any command run here returns a non-zero exit code
      set -e
      
      COMMIT_SHA1=$CIRCLE_SHA1
      
      # We must export it so it's available for envsubst
      export COMMIT_SHA1=$COMMIT_SHA1
      
      # since the only way for envsubst to work on files is using input/output redirection,
      #  it's not possible to do in-place substitution, so we need to save the output to another file
      #  and overwrite the original with that one.
      envsubst <./kube/do-sample-deployment.yml >./kube/do-sample-deployment.yml.out
      mv ./kube/do-sample-deployment.yml.out ./kube/do-sample-deployment.yml
      
      echo "$KUBERNETES_CLUSTER_CERTIFICATE" | base64 --decode > cert.crt
      
      ./kubectl \
        --kubeconfig=/dev/null \
        --server=$KUBERNETES_SERVER \
        --certificate-authority=cert.crt \
        --token=$KUBERNETES_TOKEN \
        apply -f ./kube/
      
      

      Let's go through this script, using the comments in the file. First, there is the following:

      set -e
      

      This line makes sure any failed command stops the execution of the bash script. That way if one command fails, the next ones are not executed.

      COMMIT_SHA1=$CIRCLE_SHA1
      export COMMIT_SHA1=$COMMIT_SHA1
      

      These lines export the CircleCI $CIRCLE_SHA1 environment variable under a new name. If you had just declared the variable without exporting it, it would not be visible to the envsubst command.

      envsubst <./kube/do-sample-deployment.yml >./kube/do-sample-deployment.yml.out
      mv ./kube/do-sample-deployment.yml.out ./kube/do-sample-deployment.yml
      

      envsubst cannot do in-place substitution. That is, it cannot read the content of a file, replace the variables with their respective values, and write the output back to the same file. Therefore, you will redirect the output to another file and then overwrite the original file with the new one.

      echo "$KUBERNETES_CLUSTER_CERTIFICATE" | base64 --decode > cert.crt
      

      The environment variable $KUBERNETES_CLUSTER_CERTIFICATE you created earlier on CircleCI's project settings is in reality a Base64 encoded string. To use it with kubectl you must decode its contents and save it to a file. In this case you are saving it to a file named cert.crt inside the current working directory.
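      As a quick, optional sanity check (assuming you have also exported KUBERNETES_CLUSTER_CERTIFICATE in your local shell), you can decode the variable and look at the first line, which should be the PEM header -----BEGIN CERTIFICATE-----:

      • echo "$KUBERNETES_CLUSTER_CERTIFICATE" | base64 --decode | head -n 1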

      ./kubectl \
        --kubeconfig=/dev/null \
        --server=$KUBERNETES_SERVER \
        --certificate-authority=cert.crt \
        --token=$KUBERNETES_TOKEN \
        apply -f ./kube/
      

      Finally, you are running kubectl. The command has similar arguments to the one you ran when you were testing your Service Account. You are calling apply -f ./kube/, since on CircleCI the current working directory is the root folder of your project. ./kube/ here is your ~/do-sample-app/kube folder.

      Save the file and make sure it's executable:

      • chmod +x ~/do-sample-app/scripts/ci-deploy.sh

      Now, edit ~/do-sample-app/kube/do-sample-deployment.yml:

      • nano ~/do-sample-app/kube/do-sample-deployment.yml

      Change the tag of the container image value so that it looks like the following:

      ~/do-sample-app/kube/do-sample-deployment.yml

            # ...
            containers:
              - name: do-kubernetes-sample-app
                image: dockerhub-username/do-kubernetes-sample-app:$COMMIT_SHA1
                ports:
                  - containerPort: 80
                    name: http
      

      Save and close the file. You must now add some new steps to your CI configuration file to update the deployment on Kubernetes.

      Open ~/do-sample-app/.circleci/config.yml on your favorite text editor:

      • nano ~/do-sample-app/.circleci/config.yml

      Write the following new steps, right below the Push Docker Image one you had before:

      ~/do-sample-app/.circleci/config.yml

      ...
            - run:
                name: Install envsubst
                command: |
                  sudo apt-get update && sudo apt-get -y install gettext-base
            - run:
                name: Install kubectl
                command: |
                  curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
                  chmod u+x ./kubectl
            - run:
                name: Deploy Code
                command: ./scripts/ci-deploy.sh
      

      The first two steps install some dependencies: first envsubst, and then kubectl. The Deploy Code step is responsible for running your deploy script.

      To make sure the changes are really going to be reflected on your Kubernetes deployment, edit your index.html. Change the HTML to something else, like:

      ~/do-sample-app/index.html

      <!DOCTYPE html>
      <title>DigitalOcean</title>
      <body>
        Automatic Deployment is Working!
      </body>
      

      Once you have saved the above change, commit all the modified files to the repository, and push the changes upstream:

      • cd ~/do-sample-app/
      • git add --all
      • git commit -m "add deploy script and add new steps to circleci config"
      • git push

      You will see the new build running on CircleCI, and successfully deploying the changes to your Kubernetes cluster.

      Wait for the build to finish, then run the same command you ran previously:

      • kubectl port-forward $(kubectl get pod --selector="app=do-kubernetes-sample-app" --output jsonpath='{.items[0].metadata.name}') 8080:80

      Make sure everything is working by opening localhost:8080 in your browser, or by making a curl request to it. It should show the updated HTML:

      • curl localhost:8080

      You will receive the following output:

      Output

      <!DOCTYPE html>
      <title>DigitalOcean</title>
      <body>
        Automatic Deployment is Working!
      </body>

      Congratulations, you have set up automated deployment with CircleCI!

      Conclusion

      This was a basic tutorial on how to do deployments to DigitalOcean Kubernetes using CircleCI. From here, you can improve your pipeline in many ways. The first thing you can do is create a single build job for multiple deployments, each one deploying to different Kubernetes clusters or different namespaces. This can be extremely useful when you have different Git branches for development/staging/production environments, ensuring that the deployments are always separated.

      You could also build your own image to be used on CircleCI, instead of using buildpack-deps. This image could be based on it, but could already have kubectl and envsubst dependencies installed.

      If you would like to learn more about CI/CD on Kubernetes, check out the tutorials for our CI/CD on Kubernetes Webinar Series, or for more information about apps on Kubernetes, see Modernizing Applications for Kubernetes.


