
      How to Deploy to Kubernetes using Argo CD and GitOps


      Introduction

      Using Kubernetes to deploy your application can provide significant infrastructural advantages, such as flexible scaling, management of distributed components, and control over different versions of your application. However, with that increased control comes increased complexity. Continuous Integration and Continuous Deployment (CI/CD) systems usually work at a high level of abstraction in order to provide version control, change logging, and rollback functionality. A popular approach to this abstraction layer is called GitOps.

      GitOps, as originally proposed by Weaveworks in a 2017 blog post, uses Git as a “single source of truth” for CI/CD processes, integrating code changes in a single, shared repository per project and using pull requests to manage infrastructure and deployment.

There are several tools that use Git as a focal point for DevOps processes on Kubernetes. In this tutorial, you will learn to use Argo CD, a declarative Continuous Delivery tool. Argo CD provides Continuous Delivery tooling that automatically synchronizes and deploys your application whenever a change is made in your GitHub repository. By managing the deployment and lifecycle of an application, it provides solutions for version control, configurations, and application definitions in Kubernetes environments, organizing complex data with an easy-to-understand user interface. It can handle several types of Kubernetes manifests, including Jsonnet, Kustomize applications, Helm charts, and YAML/JSON files, and supports webhook notifications from GitHub, GitLab, and Bitbucket.

      In this article, you will use Argo CD to synchronize and deploy an application from a GitHub repository.

      Prerequisites

To follow this tutorial, you will need:

• A working Kubernetes cluster, along with kubectl installed and configured to communicate with it (Step 1 shows how to verify this).

      Step 1 — Installing Argo CD on Your Cluster

In order to install Argo CD, you should first have a valid Kubernetes configuration set up with kubectl, from which you can reach your worker nodes. You can test this by running kubectl get nodes:

• kubectl get nodes

This command should return a list of nodes with the Ready status:

      Output

NAME                   STATUS   ROLES    AGE   VERSION
pool-uqv8a47h0-ul5a7   Ready    <none>   22m   v1.21.5
pool-uqv8a47h0-ul5am   Ready    <none>   21m   v1.21.5
pool-uqv8a47h0-ul5aq   Ready    <none>   21m   v1.21.5

      If kubectl does not return a set of nodes with the Ready status, you should review your cluster configuration and the Kubernetes documentation.

      Next, create the argocd namespace in your cluster, which will contain Argo CD and its associated services:

      • kubectl create namespace argocd

After that, you can apply the Argo CD installation manifest provided by the project maintainers:

      • kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

      Once the installation completes successfully, you can use the watch command to check the status of your Kubernetes pods:

      • watch kubectl get pods -n argocd

      By default, there should be five pods that eventually receive the Running status as part of a stock Argo CD installation.

      Output

NAME                                  READY   STATUS    RESTARTS   AGE
argocd-application-controller-0       1/1     Running   0          2m28s
argocd-dex-server-66f865ffb4-chwwg    1/1     Running   0          2m30s
argocd-redis-5b6967fdfc-q4klp         1/1     Running   0          2m30s
argocd-repo-server-656c76778f-vsn7l   1/1     Running   0          2m29s
argocd-server-cd68f46f8-zg7hq         1/1     Running   0          2m28s

      You can press Ctrl+C to exit the watch interface. You now have Argo CD running in your Kubernetes cluster! However, because of the way Kubernetes creates abstractions around your network interfaces, you won’t be able to access it directly without forwarding ports from inside your cluster. You’ll learn how to handle that in the next step.
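If you're scripting this setup rather than checking interactively, a non-interactive alternative to watch is to block until every pod in the namespace reports Ready. This is a minimal sketch, assuming a five-minute ceiling is acceptable for your environment:

• kubectl wait --namespace argocd --for=condition=Ready pods --all --timeout=300s

The command exits successfully once all pods are Ready, or with an error if the timeout elapses first.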

      Step 2 — Forwarding Ports to Access Argo CD

Because Kubernetes deploys services to arbitrary network addresses inside your cluster, you’ll need to forward the relevant ports in order to access them from your local machine. Argo CD sets up a service named argocd-server on port 443 internally. Because port 443 is the default HTTPS port, and you may be running other HTTP/HTTPS services, it’s common practice to forward the service to an arbitrarily chosen higher port, such as 8080, like so:

      • kubectl port-forward svc/argocd-server -n argocd 8080:443

      Port forwarding will block the terminal it’s running in as long as it’s active, so you’ll probably want to run this in a new terminal window while you continue to work. You can press Ctrl+C to gracefully quit a blocking process such as this one when you want to stop forwarding the port.

In the meantime, you should be able to access Argo CD in a web browser by navigating to localhost:8080. However, you’ll be prompted for a login password, which you’ll retrieve via the command line in the next step. You’ll probably need to click through a security warning, because Argo CD has not yet been configured with a valid SSL certificate.

Note: Using Let’s Encrypt HTTPS certificates with Kubernetes is best accomplished with the use of additional tooling like cert-manager.

      Step 3 — Working with Argo CD from the Command Line

For the next steps, you’ll want to have the argocd command installed locally for interfacing with and changing settings in your Argo CD instance. Argo CD’s official documentation recommends that you install it via the Homebrew package manager. Homebrew is very popular for managing command line tools on macOS, and has more recently been ported to Linux to facilitate maintaining tools like this one.

      If you don’t already have Homebrew installed, you can retrieve and install it with a one-line command:

• /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

You may be prompted for your password during the installation process. Afterward, you should have the brew command available in your terminal. You can use it to install Argo CD:

• brew install argocd

This in turn provides the argocd command. Before using it, you’ll want to use kubectl again to retrieve the admin password which was automatically generated during your installation, so that you can use it to log in. You’ll extract the relevant value from a Kubernetes secret using a JSONPath query, then base64-decode it:

      • kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo

      Output

      fbP20pvw-o-D5uxH

You can then log into your Argo CD dashboard by going back to localhost:8080 in a browser and logging in as the admin user with the password you just retrieved:

      Argo CD app status

      Once everything is working, you can use the same credentials to log in to Argo CD via the command line, by running argocd login. This will be necessary for deploying from the command line later on:

      • argocd login localhost:8080

      You’ll receive the equivalent certificate warning again on the command line here, and should enter y to proceed when prompted. If desired, you can then change your password to something more secure or more memorable by running argocd account update-password. After that, you’ll have a fully working Argo CD configuration. In the final steps of this tutorial, you’ll learn how to use it to actually deploy some example applications.

      Step 4 — Handling Multiple Clusters (Optional)

      Before deploying an application, you should review where you actually want to deploy it. By default, Argo CD will deploy applications to the same cluster that Argo CD itself is running in, which is fine for a demo, but is probably not what you’ll want in production. In order to list all of the clusters known to your current machine, you can use kubectl config:

      • kubectl config get-contexts -o name

      Output

test-deploy-cluster
test-target-cluster

      Assuming that you’ve installed Argo CD into test-deploy-cluster, and you wanted to use it to deploy applications onto test-target-cluster, you could register test-target-cluster with Argo CD by running argocd cluster add:

• argocd cluster add test-target-cluster

      This will add the additional cluster’s login details to Argo CD, and enable Argo CD to deploy services on the cluster.
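To confirm that the registration worked, you can list the clusters Argo CD now knows about (a quick check, assuming the example context names above):

• argocd cluster list

When creating an application later, you would then point --dest-server at the new cluster’s API endpoint instead of https://kubernetes.default.svc.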

      Step 5 — Deploying an Example Application (Optional)

Now that you have Argo CD running and you have an understanding of how to deploy applications to different Kubernetes clusters, it’s time to put it into practice. The Argo CD project maintains a repository of example applications that have been architected to showcase GitOps fundamentals. Many of these examples are ports of the same guestbook demo app to different kinds of Kubernetes manifests, such as Jsonnet. In this case, you’ll be deploying the helm-guestbook example, which uses a Helm chart, one of the most widely used Kubernetes package-management solutions.

      In order to do that, you’ll use the argocd app create command, providing the path to the Git repository, the specific helm-guestbook example, and passing your default destination and namespace:

      • argocd app create helm-guestbook --repo https://github.com/argoproj/argocd-example-apps.git --path helm-guestbook --dest-server https://kubernetes.default.svc --dest-namespace default

      After “creating” the application inside of Argo CD, you can check its status with argocd app get:

      • argocd app get helm-guestbook

      Output

Name:               helm-guestbook
Project:            default
Server:             https://kubernetes.default.svc
Namespace:          default
URL:                https://localhost:8080/applications/helm-guestbook
Repo:               https://github.com/argoproj/argocd-example-apps.git
Target:
Path:               helm-guestbook
SyncWindow:         Sync Allowed
Sync Policy:        <none>
Sync Status:        OutOfSync from  (53e28ff)
Health Status:      Missing

GROUP  KIND        NAMESPACE  NAME            STATUS     HEALTH   HOOK  MESSAGE
       Service     default    helm-guestbook  OutOfSync  Missing
apps   Deployment  default    helm-guestbook  OutOfSync  Missing

The OutOfSync application status is normal. You’ve retrieved the application’s Helm chart from GitHub and created an entry for it in Argo CD, but you haven’t actually spun up any Kubernetes resources for it yet. In order to actually deploy the application, you’ll run argocd app sync:

      • argocd app sync helm-guestbook

      sync is synonymous with deployment here in keeping with the principles of GitOps – the goal when using Argo CD is for your application to always track 1:1 with its upstream configuration.

      Output

TIMESTAMP                  GROUP        KIND   NAMESPACE                  NAME    STATUS    HEALTH        HOOK  MESSAGE
2022-01-19T11:01:48-08:00            Service     default          helm-guestbook  OutOfSync  Missing
2022-01-19T11:01:48-08:00   apps  Deployment     default          helm-guestbook  OutOfSync  Missing
2022-01-19T11:01:48-08:00            Service     default          helm-guestbook    Synced  Healthy
2022-01-19T11:01:48-08:00            Service     default          helm-guestbook    Synced   Healthy              service/helm-guestbook created
2022-01-19T11:01:48-08:00   apps  Deployment     default          helm-guestbook  OutOfSync  Missing              deployment.apps/helm-guestbook created
2022-01-19T11:01:49-08:00   apps  Deployment     default          helm-guestbook    Synced  Progressing          deployment.apps/helm-guestbook created

Name:               helm-guestbook
Project:            default
Server:             https://kubernetes.default.svc
Namespace:          default
URL:                https://localhost:8080/applications/helm-guestbook
Repo:               https://github.com/argoproj/argocd-example-apps.git
Target:
Path:               helm-guestbook
SyncWindow:         Sync Allowed
Sync Policy:        <none>
Sync Status:        Synced to  (53e28ff)
Health Status:      Progressing

Operation:          Sync
Sync Revision:      53e28ff20cc530b9ada2173fbbd64d48338583ba
Phase:              Succeeded
Start:              2022-01-19 11:01:49 -0800 PST
Finished:           2022-01-19 11:01:50 -0800 PST
Duration:           1s
Message:            successfully synced (all tasks run)

GROUP  KIND        NAMESPACE  NAME            STATUS  HEALTH       HOOK  MESSAGE
       Service     default    helm-guestbook  Synced  Healthy            service/helm-guestbook created
apps   Deployment  default    helm-guestbook  Synced  Progressing        deployment.apps/helm-guestbook created

      You have now successfully deployed an application using Argo CD! It is possible to accomplish the same thing from the Argo CD web interface, but it is usually quicker and more reproducible to deploy via the command line. However, it is very helpful to check on your Argo CD web dashboard after deployment in order to verify that your applications are running properly. You can see that by opening localhost:8080 in a browser:

      Argo CD app status
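You can get the same at-a-glance view from the command line, assuming the argocd CLI session from Step 3 is still logged in:

• argocd app list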

      At this point, the last thing to do is to ensure you can access your new deployment in a browser. To do that, you’ll forward another port, the way you did for Argo CD itself. Internally, the helm-guestbook app runs on the regular HTTP port 80, and in order to avoid conflicting with anything that might be running on your own port 80 or on the port 8080 you’re using for Argo CD, you can forward it to port 9090:

      • kubectl port-forward svc/helm-guestbook 9090:80

      As before, you’ll probably want to do this in another terminal, because it will block that terminal until you press Ctrl+C to stop forwarding the port. You can then open localhost:9090 in a browser window to see your example guestbook app:

      Guestbook app

Any further pushes to this GitHub repository will automatically be reflected in Argo CD, which will resync your deployment while providing continuous availability.
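Note that the application you created above was left with the default Sync Policy of <none>, so upstream changes are detected and flagged, and applied when you sync. If you’d rather have Argo CD apply changes on its own, one option (a sketch using the standard CLI flag) is to switch the app to the automated sync policy:

• argocd app set helm-guestbook --sync-policy automated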

      Conclusion

      You’ve now seen the fundamentals of installing and deploying applications using Argo CD. Because Kubernetes requires so many layers of abstraction, it’s important to ensure that your deployments are as maintainable as possible, and the GitOps philosophy is a good solution.

      Next, you may want to learn about deploying TOBS, The Observability Stack, for monitoring the uptime, health, and logging of your Kubernetes cluster.




      How To Create a Kubernetes Cluster Using Kubeadm on Ubuntu 20.04


      Introduction

      Kubernetes is a container orchestration system that manages containers at scale. Initially developed by Google based on its experience running containers in production, Kubernetes is open source and actively developed by a community around the world.

      Note: This tutorial uses version 1.22 of Kubernetes, the official supported version at the time of this article’s publication. For up-to-date information on the latest version, please see the current release notes in the official Kubernetes documentation.

      Kubeadm automates the installation and configuration of Kubernetes components such as the API server, Controller Manager, and Kube DNS. It does not, however, create users or handle the installation of operating-system-level dependencies and their configuration. For these preliminary tasks, it is possible to use a configuration management tool like Ansible or SaltStack. Using these tools makes creating additional clusters or recreating existing clusters much simpler and less error prone.

      In this guide, you will set up a Kubernetes cluster from scratch using Ansible and Kubeadm, and then deploy a containerized Nginx application to it.

      Goals

      Your cluster will include the following physical resources:

The control plane node (a node in Kubernetes refers to a server) is responsible for managing the state of the cluster. It runs etcd, which stores cluster data, alongside the components that schedule workloads onto the worker nodes.

Worker nodes are the servers where your workloads (i.e. containerized applications and services) will run. Workers will continue to run your workloads once they’re assigned, even if the control plane goes down after scheduling is complete. A cluster’s capacity can be increased by adding workers.

      After completing this guide, you will have a cluster ready to run containerized applications, provided that the servers in the cluster have sufficient CPU and RAM resources for your applications to consume. Almost any traditional Unix application including web applications, databases, daemons, and command line tools can be containerized and made to run on the cluster. The cluster itself will consume around 300-500MB of memory and 10% of CPU on each node.

      Once the cluster is set up, you will deploy the web server Nginx to it to ensure that it is running workloads correctly.

      Prerequisites

      • An SSH key pair on your local Linux/macOS/BSD machine. If you haven’t used SSH keys before, you can learn how to set them up by following this explanation of how to set up SSH keys on your local machine.

      • Three servers running Ubuntu 20.04 with at least 2GB RAM and 2 vCPUs each. You should be able to SSH into each server as the root user with your SSH key pair.

      Note: If you haven’t SSH’d into each of these servers at least once prior to following this tutorial, you may be prompted to accept their host fingerprints at an inconvenient time later on. You should do this now, or as an alternative, you can disable host key checking.

      Step 1 — Setting Up the Workspace Directory and Ansible Inventory File

      In this section, you will create a directory on your local machine that will serve as your workspace. You will configure Ansible locally so that it can communicate with and execute commands on your remote servers. Once that’s done, you will create a hosts file containing inventory information such as the IP addresses of your servers and the groups that each server belongs to.

      Out of your three servers, one will be the control plane with an IP displayed as control_plane_ip. The other two servers will be workers and will have the IPs worker_1_ip and worker_2_ip.

      Create a directory named ~/kube-cluster in the home directory of your local machine and cd into it:

      • mkdir ~/kube-cluster
      • cd ~/kube-cluster

      This directory will be your workspace for the rest of the tutorial and will contain all of your Ansible playbooks. It will also be the directory inside which you will run all local commands.

      Create a file named ~/kube-cluster/hosts using nano or your favorite text editor:

      • nano ~/kube-cluster/hosts

      Add the following text to the file, which will specify information about the logical structure of your cluster:

      ~/kube-cluster/hosts

      [control_plane]
      control1 ansible_host=control_plane_ip ansible_user=root 
      
      [workers]
      worker1 ansible_host=worker_1_ip ansible_user=root
      worker2 ansible_host=worker_2_ip ansible_user=root
      
      [all:vars]
      ansible_python_interpreter=/usr/bin/python3
      

      You may recall that inventory files in Ansible are used to specify server information such as IP addresses, remote users, and groupings of servers to target as a single unit for executing commands. ~/kube-cluster/hosts will be your inventory file and you’ve added two Ansible groups (control plane and workers) to it specifying the logical structure of your cluster.

      In the control plane group, there is a server entry named “control1” that lists the control plane’s IP (control_plane_ip) and specifies that Ansible should run remote commands as the root user.

      Similarly, in the workers group, there are two entries for the worker servers (worker_1_ip and worker_2_ip) that also specify the ansible_user as root.

      The last line of the file tells Ansible to use the remote servers’ Python 3 interpreters for its management operations.

      Save and close the file after you’ve added the text. If you are using nano, press Ctrl+X, then when prompted, Y and Enter.
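Before moving on, you can optionally confirm that Ansible can reach all three servers with its ping module. This quick check assumes your SSH key is loaded and the IPs in the hosts file are correct; each host should respond with "pong":

• ansible -i hosts all -m ping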

      Having set up the server inventory with groups, let’s move on to installing operating system level dependencies and creating configuration settings.

      Step 2 — Creating a Non-Root User on All Remote Servers

      In this section you will create a non-root user with sudo privileges on all servers so that you can SSH into them manually as an unprivileged user. This can be useful if, for example, you would like to see system information with commands such as top/htop, view a list of running containers, or change configuration files owned by root. These operations are routinely performed during the maintenance of a cluster, and using a non-root user for such tasks minimizes the risk of modifying or deleting important files or unintentionally performing other dangerous operations.

      Create a file named ~/kube-cluster/initial.yml in the workspace:

      • nano ~/kube-cluster/initial.yml

      Next, add the following play to the file to create a non-root user with sudo privileges on all of the servers. A play in Ansible is a collection of steps to be performed that target specific servers and groups. The following play will create a non-root sudo user:

      ~/kube-cluster/initial.yml

      ---
      - hosts: all
        become: yes
        tasks:
          - name: create the 'ubuntu' user
            user: name=ubuntu append=yes state=present createhome=yes shell=/bin/bash
      
          - name: allow 'ubuntu' to have passwordless sudo
            lineinfile:
              dest: /etc/sudoers
              line: 'ubuntu ALL=(ALL) NOPASSWD: ALL'
              validate: 'visudo -cf %s'
      
          - name: set up authorized keys for the ubuntu user
            authorized_key: user=ubuntu key="{{item}}"
            with_file:
              - ~/.ssh/id_rsa.pub
      

      Here’s a breakdown of what this playbook does:

      • Creates the non-root user ubuntu.

      • Configures the sudoers file to allow the ubuntu user to run sudo commands without a password prompt.

• Adds the public key on your local machine (usually ~/.ssh/id_rsa.pub) to the remote ubuntu user’s authorized key list. This will allow you to SSH into each server as the ubuntu user.

      Save and close the file after you’ve added the text.

      Next, run the playbook locally:

      • ansible-playbook -i hosts ~/kube-cluster/initial.yml

      The command will complete within two to five minutes. On completion, you will see output similar to the following:

      Output

PLAY [all] ****

TASK [Gathering Facts] ****
ok: [control1]
ok: [worker1]
ok: [worker2]

TASK [create the 'ubuntu' user] ****
changed: [control1]
changed: [worker1]
changed: [worker2]

TASK [allow 'ubuntu' user to have passwordless sudo] ****
changed: [control1]
changed: [worker1]
changed: [worker2]

TASK [set up authorized keys for the ubuntu user] ****
changed: [worker1] => (item=ssh-rsa AAAAB3...)
changed: [worker2] => (item=ssh-rsa AAAAB3...)
changed: [control1] => (item=ssh-rsa AAAAB3...)

PLAY RECAP ****
control1                   : ok=4    changed=3    unreachable=0    failed=0
worker1                    : ok=4    changed=3    unreachable=0    failed=0
worker2                    : ok=4    changed=3    unreachable=0    failed=0

      Now that the preliminary setup is complete, you can move on to installing Kubernetes-specific dependencies.

Step 3 — Installing Kubernetes’ Dependencies

      In this section, you will install the operating-system-level packages required by Kubernetes with Ubuntu’s package manager. These packages are:

      • Docker – a container runtime. It is the component that runs your containers. Kubernetes supports other runtimes, but Docker is still a popular and straightforward choice.

      • kubeadm – a CLI tool that will install and configure the various components of a cluster in a standard way.

      • kubelet – a system service/program that runs on all nodes and handles node-level operations.

      • kubectl – a CLI tool used for issuing commands to the cluster through its API Server.

      Create a file named ~/kube-cluster/kube-dependencies.yml in the workspace:

      • nano ~/kube-cluster/kube-dependencies.yml

      Add the following plays to the file to install these packages to your servers:

      ~/kube-cluster/kube-dependencies.yml

      ---
      - hosts: all
        become: yes
        tasks:
         - name: create Docker config directory
           file: path=/etc/docker state=directory
      
   - name: changing Docker to systemd driver
           copy:
            dest: "/etc/docker/daemon.json"
            content: |
              {
              "exec-opts": ["native.cgroupdriver=systemd"]
              }
      
         - name: install Docker
           apt:
             name: docker.io
             state: present
             update_cache: true
      
         - name: install APT Transport HTTPS
           apt:
             name: apt-transport-https
             state: present
      
         - name: add Kubernetes apt-key
           apt_key:
             url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
             state: present
      
         - name: add Kubernetes' APT repository
           apt_repository:
            repo: deb http://apt.kubernetes.io/ kubernetes-xenial main
            state: present
            filename: 'kubernetes'
      
         - name: install kubelet
           apt:
             name: kubelet=1.22.4-00
             state: present
             update_cache: true
      
         - name: install kubeadm
           apt:
             name: kubeadm=1.22.4-00
             state: present
      
      - hosts: control_plane
        become: yes
        tasks:
         - name: install kubectl
           apt:
             name: kubectl=1.22.4-00
             state: present
             force: yes
      

      The first play in the playbook does the following:

• Installs Docker, the container runtime, and configures it to use the systemd cgroup driver for compatibility with the kubelet.

      • Installs apt-transport-https, allowing you to add external HTTPS sources to your APT sources list.

      • Adds the Kubernetes APT repository’s apt-key for key verification.

      • Adds the Kubernetes APT repository to your remote servers’ APT sources list.

      • Installs kubelet and kubeadm.

      The second play consists of a single task that installs kubectl on your control plane node.

      Note: While the Kubernetes documentation recommends you use the latest stable release of Kubernetes for your environment, this tutorial uses a specific version. This will ensure that you can follow the steps successfully, as Kubernetes changes rapidly and the latest version may not work with this tutorial. Although “xenial” is the name of Ubuntu 16.04, and this tutorial is for Ubuntu 20.04, Kubernetes is still referring to Ubuntu 16.04 package sources by default, and they are supported on 20.04 in this case.
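If you’d like to see which package versions are actually available before pinning one, a quick way (run on one of the Ubuntu servers, after the playbook below has added the repository) is to query APT directly:

• apt-cache madison kubelet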

      Save and close the file when you are finished.

      Next, run the playbook locally with the following command:

      • ansible-playbook -i hosts ~/kube-cluster/kube-dependencies.yml

      On completion, you will receive output similar to the following:

      Output

PLAY [all] ****

TASK [Gathering Facts] ****
ok: [worker1]
ok: [worker2]
ok: [control1]

TASK [create Docker config directory] ****
changed: [control1]
changed: [worker1]
changed: [worker2]

TASK [changing Docker to systemd driver] ****
changed: [control1]
changed: [worker1]
changed: [worker2]

TASK [install Docker] ****
changed: [control1]
changed: [worker1]
changed: [worker2]

TASK [install APT Transport HTTPS] *****
ok: [control1]
ok: [worker1]
changed: [worker2]

TASK [add Kubernetes apt-key] *****
changed: [control1]
changed: [worker1]
changed: [worker2]

TASK [add Kubernetes' APT repository] *****
changed: [control1]
changed: [worker1]
changed: [worker2]

TASK [install kubelet] *****
changed: [control1]
changed: [worker1]
changed: [worker2]

TASK [install kubeadm] *****
changed: [control1]
changed: [worker1]
changed: [worker2]

PLAY [control1] *****

TASK [Gathering Facts] *****
ok: [control1]

TASK [install kubectl] ******
changed: [control1]

PLAY RECAP ****
control1                   : ok=11   changed=9    unreachable=0    failed=0
worker1                    : ok=9    changed=8    unreachable=0    failed=0
worker2                    : ok=9    changed=8    unreachable=0    failed=0

      After running this playbook, Docker, kubeadm, and kubelet will be installed on all of the remote servers. kubectl is not a required component and is only needed for executing cluster commands. Installing it only on the control plane node makes sense in this context, since you will run kubectl commands only from the control plane. Note, however, that kubectl commands can be run from any of the worker nodes or from any machine where it can be installed and configured to point to a cluster.
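For example, once the control plane has been initialized in Step 4, one way to run kubectl from your local machine (a sketch; the local filename is arbitrary, and this assumes kubectl is installed locally) is to copy the cluster’s admin configuration down and point kubectl at it:

• scp ubuntu@control_plane_ip:~/.kube/config ~/.kube/kube-cluster-config
• export KUBECONFIG=~/.kube/kube-cluster-config
• kubectl get nodes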

      All system dependencies are now installed. Let’s set up the control plane node and initialize the cluster.

      Step 4 — Setting Up the Control Plane Node

      In this section, you will set up the control plane node. Before creating any playbooks, however, it’s worth covering a few concepts such as Pods and Pod Network Plugins, since your cluster will include both.

A pod is an atomic unit that runs one or more containers. These containers share resources such as file volumes and network interfaces. Pods are the basic unit of scheduling in Kubernetes: all containers in a pod are guaranteed to run on the same node that the pod is scheduled on.

      Each pod has its own IP address, and a pod on one node should be able to access a pod on another node using the pod’s IP. Containers on a single node can communicate easily through a local interface. Communication between pods is more complicated, however, and requires a separate networking component that can transparently route traffic from a pod on one node to a pod on another.

      This functionality is provided by pod network plugins. For this cluster, you will use Flannel, a stable and performant option.

      Create an Ansible playbook named control-plane.yml on your local machine:

      • nano ~/kube-cluster/control-plane.yml

      Add the following play to the file to initialize the cluster and install Flannel:

      ~/kube-cluster/control-plane.yml

      ---
      - hosts: control_plane
        become: yes
        tasks:
          - name: initialize the cluster
            shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.txt
            args:
              chdir: $HOME
              creates: cluster_initialized.txt
      
          - name: create .kube directory
            become: yes
            become_user: ubuntu
            file:
              path: $HOME/.kube
              state: directory
              mode: 0755
      
          - name: copy admin.conf to user's kube config
            copy:
              src: /etc/kubernetes/admin.conf
              dest: /home/ubuntu/.kube/config
              remote_src: yes
              owner: ubuntu
      
          - name: install Pod network
            become: yes
            become_user: ubuntu
            shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml >> pod_network_setup.txt
            args:
              chdir: $HOME
              creates: pod_network_setup.txt
      

      Here’s a breakdown of this play:

      • The first task initializes the cluster by running kubeadm init. Passing the argument --pod-network-cidr=10.244.0.0/16 specifies the private subnet that the pod IPs will be assigned from. Flannel uses the above subnet by default; we’re telling kubeadm to use the same subnet.

      • The second task creates a .kube directory at /home/ubuntu. This directory will hold configuration information such as the admin key files, which are required to connect to the cluster, and the cluster’s API address.

• The third task copies the /etc/kubernetes/admin.conf file that was generated from kubeadm init to your non-root user’s home directory. This will allow you to use kubectl to access the newly-created cluster; the equivalent manual commands are sketched after this list.

      • The last task runs kubectl apply to install Flannel. kubectl apply -f descriptor.[yml|json] is the syntax for telling kubectl to create the objects described in the descriptor.[yml|json] file. The kube-flannel.yml file contains the descriptions of objects required for setting up Flannel in the cluster.
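For reference, here is roughly what the second and third tasks automate. These are the standard post-install steps that kubeadm init itself prints, shown as they would be run manually on the control plane as the ubuntu user:

• mkdir -p $HOME/.kube
• sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
• sudo chown $(id -u):$(id -g) $HOME/.kube/config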

      Save and close the file when you are finished.

      Run the playbook locally with the following command:

      • ansible-playbook -i hosts ~/kube-cluster/control-plane.yml

      On completion, you will see output similar to the following:

      Output

PLAY [control1] ****

TASK [Gathering Facts] ****
ok: [control1]

TASK [initialize the cluster] ****
changed: [control1]

TASK [create .kube directory] ****
changed: [control1]

TASK [copy admin.conf to user's kube config] *****
changed: [control1]

TASK [install Pod network] *****
changed: [control1]

PLAY RECAP ****
control1                   : ok=5    changed=4    unreachable=0    failed=0

      To check the status of the control plane node, SSH into it with the following command:

      • ssh ubuntu@control_plane_ip

Once inside the control plane node, execute:

• kubectl get nodes

You will now see the following output:

      Output

NAME       STATUS   ROLES                  AGE   VERSION
control1   Ready    control-plane,master   51s   v1.22.4

Note: Kubernetes is in the process of updating its old terminology. The node we’ve referred to as control-plane throughout this tutorial used to be called the master node, and you’ll occasionally see Kubernetes assigning both roles simultaneously for compatibility reasons.

      The output states that the control-plane node has completed all initialization tasks and is in a Ready state from which it can start accepting worker nodes and executing tasks sent to the API Server. You can now add the workers from your local machine.

      Step 5 — Setting Up the Worker Nodes

Adding workers to the cluster involves executing a single command on each. This command includes the necessary cluster information, such as the IP address and port of the control plane’s API Server, and a secure token. Only nodes that pass in the secure token will be able to join the cluster.

      Navigate back to your workspace and create a playbook named workers.yml:

      • nano ~/kube-cluster/workers.yml

      Add the following text to the file to add the workers to the cluster:

      ~/kube-cluster/workers.yml

      ---
      - hosts: control_plane
        become: yes
        gather_facts: false
        tasks:
          - name: get join command
            shell: kubeadm token create --print-join-command
            register: join_command_raw
      
          - name: set join command
            set_fact:
              join_command: "{{ join_command_raw.stdout_lines[0] }}"
      
      
      - hosts: workers
        become: yes
        tasks:
          - name: join cluster
            shell: "{{ hostvars['control1'].join_command }} >> node_joined.txt"
            args:
              chdir: $HOME
              creates: node_joined.txt
      

      Here’s what the playbook does:

• The first play gets the join command that needs to be run on the worker nodes. This command will be in the following format: kubeadm join --token <token> <control-plane-ip>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>. Once it gets the actual command with the proper token and hash values, the task sets it as a fact so that the next play will be able to access that info.

      • The second play has a single task that runs the join command on all worker nodes. On completion of this task, the two worker nodes will be part of the cluster.

      Save and close the file when you are finished.

Run the playbook locally with the following command:

      • ansible-playbook -i hosts ~/kube-cluster/workers.yml

      On completion, you will see output similar to the following:

      Output

PLAY [control1] ****

TASK [get join command] ****
changed: [control1]

TASK [set join command] *****
ok: [control1]

PLAY [workers] *****

TASK [Gathering Facts] *****
ok: [worker1]
ok: [worker2]

TASK [join cluster] *****
changed: [worker1]
changed: [worker2]

PLAY RECAP *****
control1                   : ok=2    changed=1    unreachable=0    failed=0
worker1                    : ok=2    changed=1    unreachable=0    failed=0
worker2                    : ok=2    changed=1    unreachable=0    failed=0

      With the addition of the worker nodes, your cluster is now fully set up and functional, with workers ready to run workloads. Before scheduling applications, let’s verify that the cluster is working as intended.

      Step 6 — Verifying the Cluster

      A cluster can sometimes fail during setup because a node is down or network connectivity between the control plane and workers is not working correctly. Let’s verify the cluster and ensure that the nodes are operating correctly.

      You will need to check the current state of the cluster from the control plane node to ensure that the nodes are ready. If you disconnected from the control plane node, you can SSH back into it with the following command:

      • ssh ubuntu@control_plane_ip

Then execute the following command to get the status of the cluster:

• kubectl get nodes

You will see output similar to the following:

      Output

NAME       STATUS   ROLES                  AGE     VERSION
control1   Ready    control-plane,master   3m21s   v1.22.0
worker1    Ready    <none>                 32s     v1.22.0
worker2    Ready    <none>                 32s     v1.22.0

      If all of your nodes have the value Ready for STATUS, it means that they’re part of the cluster and ready to run workloads.

      If, however, a few of the nodes have NotReady as the STATUS, it could mean that the worker nodes haven’t finished their setup yet. Wait for around five to ten minutes before re-running kubectl get nodes and inspecting the new output. If a few nodes still have NotReady as the status, you might have to verify and re-run the commands in the previous steps.
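If a node stays NotReady, a good first diagnostic (assuming worker1 is the problem node) is to inspect its conditions and events from the control plane:

• kubectl describe node worker1

The Conditions and Events sections near the end of the output usually point to the cause, such as a pod network that hasn’t finished initializing.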

      Now that your cluster is verified successfully, let’s schedule an example Nginx application on the cluster.

      Step 7 — Running An Application on the Cluster

      You can now deploy any containerized application to your cluster. To keep things familiar, let’s deploy Nginx using Deployments and Services to explore how this application can be deployed to the cluster. You can use the commands below for other containerized applications as well, provided you change the Docker image name and any relevant flags (such as ports and volumes).

Ensure that you are logged into the control plane node, and then run the following command to create a deployment named nginx:

      • kubectl create deployment nginx --image=nginx

A deployment is a type of Kubernetes object that ensures there’s always a specified number of pods running based on a defined template, even if a pod crashes during the cluster’s lifetime. The above deployment will create a pod with one container from the Docker registry’s Nginx Docker Image.
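Because the deployment tracks a desired pod count, scaling is a one-line change. For example (a sketch; three replicas is an arbitrary choice), you could run:

• kubectl scale deployment nginx --replicas=3

Kubernetes will then create or remove pods until three are running, and the service you create next will load balance across all of them.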

      Next, run the following command to create a service named nginx that will expose the app publicly. It will do so through a NodePort, a scheme that will make the pod accessible through an arbitrary port opened on each node of the cluster:

      • kubectl expose deploy nginx --port 80 --target-port 80 --type NodePort

      Services are another type of Kubernetes object that expose cluster internal services to clients, both internal and external. They are also capable of load balancing requests to multiple pods, and are an integral component in Kubernetes, frequently interacting with other components.

Run the following command:

• kubectl get services

This command will output text similar to the following:

      Output

NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP             1d
nginx        NodePort    10.109.228.209   <none>        80:nginx_port/TCP   40m

From the nginx line of the above output, you can retrieve the port that Nginx is running on. Kubernetes will assign a random port that is greater than 30000 automatically, while ensuring that the port is not already bound by another service.
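If you want that port in a script rather than reading it from the table, one option is a JSONPath query against the service (a sketch assuming the service was created as above):

• kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}'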

      To test that everything is working, visit http://worker_1_ip:nginx_port or http://worker_2_ip:nginx_port through a browser on your local machine. You will see Nginx’s familiar welcome page.

      If you would like to remove the Nginx application, first delete the nginx service from the control plane node:

      • kubectl delete service nginx

Run the following to ensure that the service has been deleted:

• kubectl get services

You will see the following output:

      Output

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   1d

      Then delete the deployment:

      • kubectl delete deployment nginx

Run the following to confirm that this worked:

• kubectl get deployments

      Output

      No resources found.

      Conclusion

      In this guide, you’ve successfully set up a Kubernetes cluster on Ubuntu 20.04 using Kubeadm and Ansible for automation.

      If you’re wondering what to do with the cluster now that it’s set up, a good next step would be to get comfortable deploying your own applications and services onto the cluster. Here’s a list of links with further information that can guide you in the process:

      • Dockerizing applications - lists examples that detail how to containerize applications using Docker.

      • Pod Overview - describes in detail how Pods work and their relationship with other Kubernetes objects. Pods are ubiquitous in Kubernetes, so understanding them will facilitate your work.

      • Deployments Overview - provides an overview of deployments. It is useful to understand how controllers such as deployments work since they are used frequently in stateless applications for scaling and the automated healing of unhealthy applications.

      • Services Overview - covers services, another frequently used object in Kubernetes clusters. Understanding the types of services and the options they have is essential for running both stateless and stateful applications.

      Other important concepts that you can look into are Volumes, Ingresses and Secrets, all of which come in handy when deploying production applications.

      Kubernetes has a lot of functionality and features to offer. The Kubernetes Official Documentation is the best place to learn about concepts, find task-specific guides, and look up API references for various objects. You can also review our Kubernetes for Full-Stack Developers curriculum.




      From 0 to 3 Million+ Deployments: Scaling App Platform on Kubernetes


      Video

      About the Talk

      Get a better understanding of how your builds run on DigitalOcean App Platform, and how to scale efficiently when your user base expands.

      App Platform, a new DigitalOcean managed-service offering, was released in October 2020 and has already been deployed over 3 million times. This immense growth didn’t come without its challenges around infrastructure, observability, and release velocity.

      Kamal and Nick share techniques and strategies that have helped DigitalOcean overcome these problem areas and the benefits received from each solution.

      About the Presenters

      Kamal Nasser
      Kamal Nasser is a Senior Software Engineer at DigitalOcean. When not automating and playing with modern software and technologies, you’ll likely find him penning early 17th century calligraphy.

      Nick Tate
      Nicholas Tate is a Tech Lead on the App Platform team at DigitalOcean. He has worked on the DigitalOcean Managed Kubernetes team and before that, on multi-cloud Kubernetes at Containership. He loves everything and anything related to cloud native infrastructure and enjoys playing guitar.


