
      How To Create a Kubernetes 1.11 Cluster Using Kubeadm on Ubuntu 18.04


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Kubernetes is a container orchestration system that manages containers at scale. Initially developed by Google based on its experience running containers in production, Kubernetes is open source and actively developed by a community around the world.

      Kubeadm automates the installation and configuration of Kubernetes components such as the API server, Controller Manager, and Kube DNS. It does not, however, create users or handle the installation of operating-system-level dependencies and their configuration. For these preliminary tasks, it is possible to use a configuration management tool like Ansible or SaltStack. Using these tools makes creating additional clusters or recreating existing clusters much simpler and less error prone.

      In this guide, you will set up a Kubernetes cluster from scratch using Ansible and Kubeadm, and then deploy a containerized Nginx application to it.

      Goals

      Your cluster will include the following physical resources:

      The master node (a node in Kubernetes refers to a server) is responsible for managing the state of the cluster. It runs Etcd, which stores cluster data, and the components that schedule workloads onto worker nodes.

      Worker nodes are the servers where your workloads (i.e. containerized applications and services) will run. A worker will continue to run a workload once it's been assigned, even if the master goes down after scheduling is complete. A cluster's capacity can be increased by adding workers.

      After completing this guide, you will have a cluster ready to run containerized applications, provided that the servers in the cluster have sufficient CPU and RAM resources for your applications to consume. Almost any traditional Unix application including web applications, databases, daemons, and command line tools can be containerized and made to run on the cluster. The cluster itself will consume around 300-500MB of memory and 10% of CPU on each node.

      Once the cluster is set up, you will deploy the web server Nginx to it to ensure that it is running workloads correctly.

      Prerequisites

      To follow this guide, you will need:

      • Three servers running Ubuntu 18.04. You should be able to SSH into each server as the root user with your SSH key pair.

      • An SSH key pair on your local machine. The playbooks below assume the public key is located at ~/.ssh/id_rsa.pub.

      • Ansible installed on your local machine. Familiarity with Ansible playbooks is helpful, but not required.

      Step 1 — Setting Up the Workspace Directory and Ansible Inventory File

      In this section, you will create a directory on your local machine that will serve as your workspace. You will configure Ansible locally so that it can communicate with and execute commands on your remote servers. Once that’s done, you will create a hosts file containing inventory information such as the IP addresses of your servers and the groups that each server belongs to.

      Out of your three servers, one will be the master with an IP displayed as master_ip. The other two servers will be workers and will have the IPs worker_1_ip and worker_2_ip.

      Create a directory named ~/kube-cluster in the home directory of your local machine and cd into it:

      • mkdir ~/kube-cluster
      • cd ~/kube-cluster

      This directory will be your workspace for the rest of the tutorial and will contain all of your Ansible playbooks. It will also be the directory inside which you will run all local commands.

      Create a file named ~/kube-cluster/hosts using nano or your favorite text editor:

      • nano ~/kube-cluster/hosts

      Add the following text to the file, which will specify information about the logical structure of your cluster:

      ~/kube-cluster/hosts

      [masters]
      master ansible_host=master_ip ansible_user=root
      
      [workers]
      worker1 ansible_host=worker_1_ip ansible_user=root
      worker2 ansible_host=worker_2_ip ansible_user=root
      
      [all:vars]
      ansible_python_interpreter=/usr/bin/python3
      

      You may recall that inventory files in Ansible are used to specify server information such as IP addresses, remote users, and groupings of servers to target as a single unit for executing commands. ~/kube-cluster/hosts will be your inventory file and you’ve added two Ansible groups (masters and workers) to it specifying the logical structure of your cluster.

      In the masters group, there is a server entry named “master” that lists the master node’s IP (master_ip) and specifies that Ansible should run remote commands as the root user.

      Similarly, in the workers group, there are two entries for the worker servers (worker_1_ip and worker_2_ip) that also specify the ansible_user as root.

      The last line of the file tells Ansible to use the remote servers’ Python 3 interpreters for its management operations.

      Save and close the file after you’ve added the text.
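
      Before moving on, you can verify that Ansible is able to reach each host with a quick ad-hoc ping. This is an optional sanity check; it assumes your SSH key is already authorized for the root user on all three servers:

      • ansible -i hosts all -m ping

      Each host should respond with a "pong" message.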

      Having set up the server inventory with groups, let’s move on to installing operating system level dependencies and creating configuration settings.

      Step 2 — Creating a Non-Root User on All Remote Servers

      In this section you will create a non-root user with sudo privileges on all servers so that you can SSH into them manually as an unprivileged user. This can be useful if, for example, you would like to see system information with commands such as top/htop, view a list of running containers, or change configuration files owned by root. These operations are routinely performed during the maintenance of a cluster, and using a non-root user for such tasks minimizes the risk of modifying or deleting important files or unintentionally performing other dangerous operations.

      Create a file named ~/kube-cluster/initial.yml in the workspace:

      • nano ~/kube-cluster/initial.yml

      Next, add the following play to the file to create a non-root user with sudo privileges on all of the servers. A play in Ansible is a collection of steps to be performed that target specific servers and groups. The following play will create a non-root sudo user:

      ~/kube-cluster/initial.yml

      - hosts: all
        become: yes
        tasks:
          - name: create the 'ubuntu' user
            user: name=ubuntu append=yes state=present createhome=yes shell=/bin/bash
      
          - name: allow 'ubuntu' to have passwordless sudo
            lineinfile:
              dest: /etc/sudoers
              line: 'ubuntu ALL=(ALL) NOPASSWD: ALL'
              validate: 'visudo -cf %s'
      
          - name: set up authorized keys for the ubuntu user
            authorized_key: user=ubuntu key="{{item}}"
            with_file:
              - ~/.ssh/id_rsa.pub
      

      Here’s a breakdown of what this playbook does:

      • Creates the non-root user ubuntu.

      • Configures the sudoers file to allow the ubuntu user to run sudo commands without a password prompt.

      • Adds the public key from your local machine (usually ~/.ssh/id_rsa.pub) to the remote ubuntu user’s authorized key list. This will allow you to SSH into each server as the ubuntu user.

      Save and close the file after you’ve added the text.

      Next, execute the playbook by locally running:

      • ansible-playbook -i hosts ~/kube-cluster/initial.yml

      The command will complete within two to five minutes. On completion, you will see output similar to the following:

      Output

      PLAY [all] ****

      TASK [Gathering Facts] ****
      ok: [master]
      ok: [worker1]
      ok: [worker2]

      TASK [create the 'ubuntu' user] ****
      changed: [master]
      changed: [worker1]
      changed: [worker2]

      TASK [allow 'ubuntu' user to have passwordless sudo] ****
      changed: [master]
      changed: [worker1]
      changed: [worker2]

      TASK [set up authorized keys for the ubuntu user] ****
      changed: [worker1] => (item=ssh-rsa AAAAB3...)
      changed: [worker2] => (item=ssh-rsa AAAAB3...)
      changed: [master] => (item=ssh-rsa AAAAB3...)

      PLAY RECAP ****
      master                     : ok=5    changed=4    unreachable=0    failed=0
      worker1                    : ok=5    changed=4    unreachable=0    failed=0
      worker2                    : ok=5    changed=4    unreachable=0    failed=0

      Now that the preliminary setup is complete, you can move on to installing Kubernetes-specific dependencies.

      Step 3 — Installing Kubernetes’ Dependencies

      In this section, you will install the operating-system-level packages required by Kubernetes with Ubuntu’s package manager. These packages are:

      • Docker – a container runtime. It is the component that runs your containers. Support for other runtimes such as rkt is under active development in Kubernetes.

      • kubeadm – a CLI tool that will install and configure the various components of a cluster in a standard way.

      • kubelet – a system service/program that runs on all nodes and handles node-level operations.

      • kubectl – a CLI tool used for issuing commands to the cluster through its API Server.

      Create a file named ~/kube-cluster/kube-dependencies.yml in the workspace:

      • nano ~/kube-cluster/kube-dependencies.yml

      Add the following plays to the file to install these packages to your servers:

      ~/kube-cluster/kube-dependencies.yml

      - hosts: all
        become: yes
        tasks:
         - name: install Docker
           apt:
             name: docker.io
             state: present
             update_cache: true
      
         - name: install APT Transport HTTPS
           apt:
             name: apt-transport-https
             state: present
      
         - name: add Kubernetes apt-key
           apt_key:
             url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
             state: present
      
         - name: add Kubernetes' APT repository
           apt_repository:
            repo: deb http://apt.kubernetes.io/ kubernetes-xenial main
            state: present
            filename: 'kubernetes'
      
         - name: install kubelet
           apt:
             name: kubelet
             state: present
             update_cache: true
      
         - name: install kubeadm
           apt:
             name: kubeadm
             state: present
      
      - hosts: master
        become: yes
        tasks:
         - name: install kubectl
           apt:
             name: kubectl
             state: present
      

      The first play in the playbook does the following:

      • Installs Docker, the container runtime.

      • Installs apt-transport-https, allowing you to add external HTTPS sources to your APT sources list.

      • Adds the Kubernetes APT repository’s apt-key for key verification.

      • Adds the Kubernetes APT repository to your remote servers’ APT sources list.

      • Installs kubelet and kubeadm.

      The second play consists of a single task that installs kubectl on your master node.

      Save and close the file when you are finished.

      Next, execute the playbook by locally running:

      • ansible-playbook -i hosts ~/kube-cluster/kube-dependencies.yml

      On completion, you will see output similar to the following:

      Output

      PLAY [all] ****

      TASK [Gathering Facts] ****
      ok: [worker1]
      ok: [worker2]
      ok: [master]

      TASK [install Docker] ****
      changed: [master]
      changed: [worker1]
      changed: [worker2]

      TASK [install APT Transport HTTPS] *****
      ok: [master]
      ok: [worker1]
      changed: [worker2]

      TASK [add Kubernetes apt-key] *****
      changed: [master]
      changed: [worker1]
      changed: [worker2]

      TASK [add Kubernetes' APT repository] *****
      changed: [master]
      changed: [worker1]
      changed: [worker2]

      TASK [install kubelet] *****
      changed: [master]
      changed: [worker1]
      changed: [worker2]

      TASK [install kubeadm] *****
      changed: [master]
      changed: [worker1]
      changed: [worker2]

      PLAY [master] *****

      TASK [Gathering Facts] *****
      ok: [master]

      TASK [install kubectl] ******
      ok: [master]

      PLAY RECAP ****
      master                     : ok=9    changed=5    unreachable=0    failed=0
      worker1                    : ok=7    changed=5    unreachable=0    failed=0
      worker2                    : ok=7    changed=5    unreachable=0    failed=0

      After execution, Docker, kubeadm, and kubelet will be installed on all of the remote servers. kubectl is not a required component and is only needed for executing cluster commands. Installing it only on the master node makes sense in this context, since you will run kubectl commands only from the master. Note, however, that kubectl commands can be run from any of the worker nodes or from any machine where it can be installed and configured to point to a cluster.
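
      For example, once the cluster is initialized in Step 4, a minimal sketch of pointing a local kubectl at it would be to copy the cluster config from the master node. This assumes kubectl is installed on your local machine and that you don't already have a ~/.kube/config you would be overwriting:

      • scp ubuntu@master_ip:~/.kube/config ~/.kube/config
      • kubectl get nodes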

      All system dependencies are now installed. Let’s set up the master node and initialize the cluster.

      Step 4 — Setting Up the Master Node

      In this section, you will set up the master node. Before creating any playbooks, however, it’s worth covering a few concepts such as Pods and Pod Network Plugins, since your cluster will include both.

      A pod is an atomic unit that runs one or more containers. These containers share resources such as file volumes and network interfaces in common. Pods are the basic unit of scheduling in Kubernetes: all containers in a pod are guaranteed to run on the same node that the pod is scheduled on.

      Each pod has its own IP address, and a pod on one node should be able to access a pod on another node using the pod’s IP. Containers on a single node can communicate easily through a local interface. Communication between pods is more complicated, however, and requires a separate networking component that can transparently route traffic from a pod on one node to a pod on another.

      This functionality is provided by pod network plugins. For this cluster, you will use Flannel, a stable and performant option.
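
      You won't need to write pod manifests by hand in this tutorial, but for illustration, a minimal manifest for a pod running a single Nginx container looks like the following sketch (the name nginx-pod is a hypothetical example):

      apiVersion: v1
      kind: Pod
      metadata:
        name: nginx-pod        # hypothetical name, for illustration only
      spec:
        containers:
          - name: nginx
            image: nginx
            ports:
              - containerPort: 80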

      Create an Ansible playbook named master.yml on your local machine:

      • nano ~/kube-cluster/master.yml

      Add the following play to the file to initialize the cluster and install Flannel:

      ~/kube-cluster/master.yml

      - hosts: master
        become: yes
        tasks:
          - name: initialize the cluster
            shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.txt
            args:
              chdir: $HOME
              creates: cluster_initialized.txt
      
          - name: create .kube directory
            become: yes
            become_user: ubuntu
            file:
              path: $HOME/.kube
              state: directory
              mode: 0755
      
          - name: copy admin.conf to user's kube config
            copy:
              src: /etc/kubernetes/admin.conf
              dest: /home/ubuntu/.kube/config
              remote_src: yes
              owner: ubuntu
      
          - name: install Pod network
            become: yes
            become_user: ubuntu
            shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml >> pod_network_setup.txt
            args:
              chdir: $HOME
              creates: pod_network_setup.txt
      

      Here’s a breakdown of this play:

      • The first task initializes the cluster by running kubeadm init. Passing the argument --pod-network-cidr=10.244.0.0/16 specifies the private subnet that the pod IPs will be assigned from. Flannel uses the above subnet by default; we’re telling kubeadm to use the same subnet.

      • The second task creates a .kube directory at /home/ubuntu. This directory will hold configuration information such as the admin key files, which are required to connect to the cluster, and the cluster’s API address.

      • The third task copies the /etc/kubernetes/admin.conf file that was generated from kubeadm init to your non-root user’s home directory. This will allow you to use kubectl to access the newly-created cluster.

      • The last task runs kubectl apply to install Flannel. kubectl apply -f descriptor.[yml|json] is the syntax for telling kubectl to create the objects described in the descriptor.[yml|json] file. The kube-flannel.yml file contains the descriptions of objects required for setting up Flannel in the cluster.

      Save and close the file when you are finished.

      Execute the playbook locally by running:

      • ansible-playbook -i hosts ~/kube-cluster/master.yml

      On completion, you will see output similar to the following:

      Output

      PLAY [master] ****

      TASK [Gathering Facts] ****
      ok: [master]

      TASK [initialize the cluster] ****
      changed: [master]

      TASK [create .kube directory] ****
      changed: [master]

      TASK [copy admin.conf to user's kube config] *****
      changed: [master]

      TASK [install Pod network] *****
      changed: [master]

      PLAY RECAP ****
      master                     : ok=5    changed=4    unreachable=0    failed=0

      To check the status of the master node, SSH into it with the following command:

      • ssh ubuntu@master_ip

      Once inside the master node, execute:

      • kubectl get nodes

      You will now see the following output:

      Output

      NAME      STATUS    ROLES     AGE       VERSION
      master    Ready     master    1d        v1.11.1

      The output states that the master node has completed all initialization tasks and is in a Ready state from which it can start accepting worker nodes and executing tasks sent to the API Server. You can now add the workers from your local machine.

      Step 5 — Setting Up the Worker Nodes

      Adding workers to the cluster involves executing a single command on each. This command includes the necessary cluster information, such as the IP address and port of the master's API Server, and a secure token. Only nodes that pass in the secure token will be able to join the cluster.

      Navigate back to your workspace and create a playbook named workers.yml:

      • nano ~/kube-cluster/workers.yml

      Add the following text to the file to add the workers to the cluster:

      ~/kube-cluster/workers.yml

      - hosts: master
        become: yes
        gather_facts: false
        tasks:
          - name: get join command
            shell: kubeadm token create --print-join-command
            register: join_command_raw
      
          - name: set join command
            set_fact:
              join_command: "{{ join_command_raw.stdout_lines[0] }}"
      
      
      - hosts: workers
        become: yes
        tasks:
          - name: join cluster
            shell: "{{ hostvars['master'].join_command }} >> node_joined.txt"
            args:
              chdir: $HOME
              creates: node_joined.txt
      

      Here's what the playbook does:

      • The first play gets the join command that needs to be run on the worker nodes. This command will be in the following format: kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>. Once it gets the actual command with the proper token and hash values, the task sets it as a fact so that the next play will be able to access that info.

      • The second play has a single task that runs the join command on all worker nodes. On completion of this task, the two worker nodes will be part of the cluster.

      Save and close the file when you are finished.

      Execute the playbook by locally running:

      • ansible-playbook -i hosts ~/kube-cluster/workers.yml

      On completion, you will see output similar to the following:

      Output

      PLAY [master] ****

      TASK [get join command] ****
      changed: [master]

      TASK [set join command] *****
      ok: [master]

      PLAY [workers] *****

      TASK [Gathering Facts] *****
      ok: [worker1]
      ok: [worker2]

      TASK [join cluster] *****
      changed: [worker1]
      changed: [worker2]

      PLAY RECAP *****
      master                     : ok=2    changed=1    unreachable=0    failed=0
      worker1                    : ok=2    changed=1    unreachable=0    failed=0
      worker2                    : ok=2    changed=1    unreachable=0    failed=0

      With the addition of the worker nodes, your cluster is now fully set up and functional, with workers ready to run workloads. Before scheduling applications, let's verify that the cluster is working as intended.

      Step 6 — Verifying the Cluster

      A cluster can sometimes fail during setup because a node is down or network connectivity between the master and worker is not working correctly. Let's verify the cluster and ensure that the nodes are operating correctly.

      You will need to check the current state of the cluster from the master node to ensure that the nodes are ready. If you disconnected from the master node, you can SSH back into it with the following command:

      • ssh ubuntu@master_ip

      Then execute the following command to get the status of the cluster:

      • kubectl get nodes

      You will see output similar to the following:

      Output

      NAME      STATUS    ROLES     AGE       VERSION
      master    Ready     master    1d        v1.11.1
      worker1   Ready     <none>    1d        v1.11.1
      worker2   Ready     <none>    1d        v1.11.1

      If all of your nodes have the value Ready for STATUS, it means that they're part of the cluster and ready to run workloads.

      If, however, a few of the nodes have NotReady as the STATUS, it could mean that the worker nodes haven't finished their setup yet. Wait for around five to ten minutes before re-running kubectl get nodes and inspecting the new output. If a few nodes still have NotReady as the status, you might have to verify and re-run the commands in the previous steps.
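
      If you want to investigate why a node is stuck in NotReady, two useful commands to run from the master node are shown below; worker1 here stands in for whichever node is misbehaving:

      • kubectl get pods -n kube-system
      • kubectl describe node worker1

      The first lists the system pods (including the Flannel and kube-proxy pods that each node needs), and the second prints detailed status and recent events for a single node.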

      Now that your cluster is verified successfully, let's schedule an example Nginx application on the cluster.

      Step 7 — Running An Application on the Cluster

      You can now deploy any containerized application to your cluster. To keep things familiar, let's deploy Nginx using Deployments and Services to see how this application can be deployed to the cluster. You can use the commands below for other containerized applications as well, provided you change the Docker image name and any relevant flags (such as ports and volumes).

      Still within the master node, execute the following command to create a deployment named nginx:

      • kubectl run nginx --image=nginx --port 80

      A deployment is a type of Kubernetes object that ensures there's always a specified number of pods running based on a defined template, even if the pod crashes during the cluster's lifetime. The above deployment will create a pod with one container from the Docker registry's Nginx Docker Image.
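
      For reference, the kubectl run command above is roughly equivalent to applying a Deployment manifest like the following sketch. The run: nginx label mirrors what kubectl run sets; exact defaults may vary by Kubernetes version:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx
      spec:
        replicas: 1              # keep one pod running at all times
        selector:
          matchLabels:
            run: nginx
        template:
          metadata:
            labels:
              run: nginx
          spec:
            containers:
              - name: nginx
                image: nginx
                ports:
                  - containerPort: 80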

      Next, run the following command to create a service named nginx that will expose the app publicly. It will do so through a NodePort, a scheme that will make the pod accessible through an arbitrary port opened on each node of the cluster:

      • kubectl expose deploy nginx --port 80 --target-port 80 --type NodePort

      Services are another type of Kubernetes object that expose cluster internal services to clients, both internal and external. They are also capable of load balancing requests to multiple pods, and are an integral component in Kubernetes, frequently interacting with other components.
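
      Similarly, the kubectl expose command above corresponds roughly to a Service manifest like this sketch:

      apiVersion: v1
      kind: Service
      metadata:
        name: nginx
      spec:
        type: NodePort           # expose the service on a port on every node
        selector:
          run: nginx             # route traffic to the deployment's pods
        ports:
          - port: 80
            targetPort: 80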

      Run the following command:

      • kubectl get services

      This will output text similar to the following:

      Output

      NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
      kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP             1d
      nginx        NodePort    10.109.228.209   <none>        80:nginx_port/TCP   40m

      From the third line of the above output, you can retrieve the port that Nginx is running on. Kubernetes will assign a random port that is greater than 30000 automatically, while ensuring that the port is not already bound by another service.
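
      If you'd rather retrieve the port programmatically than read it from the table, one option is kubectl's jsonpath output format (this assumes the service is named nginx, as above):

      • kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}'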

      To test that everything is working, visit http://worker_1_ip:nginx_port or http://worker_2_ip:nginx_port through a browser on your local machine. You will see Nginx's familiar welcome page.

      If you would like to remove the Nginx application, first delete the nginx service from the master node:

      • kubectl delete service nginx

      Run the following to ensure that the service has been deleted:

      • kubectl get services

      You will see the following output:

      Output

      NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
      kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   1d

      Then delete the deployment:

      • kubectl delete deployment nginx

      Run the following to confirm that this worked:

      • kubectl get deployments

      Output

      No resources found.

      Conclusion

      In this guide, you've successfully set up a Kubernetes cluster on Ubuntu 18.04 using Kubeadm and Ansible for automation.

      If you're wondering what to do with the cluster now that it's set up, a good next step would be to get comfortable deploying your own applications and services onto the cluster. Here's a list of links with further information that can guide you in the process:

      • Dockerizing applications - lists examples that detail how to containerize applications using Docker.

      • Pod Overview - describes in detail how Pods work and their relationship with other Kubernetes objects. Pods are ubiquitous in Kubernetes, so understanding them will facilitate your work.

      • Deployments Overview - provides an overview of deployments. It is useful to understand how controllers such as deployments work since they are used frequently in stateless applications for scaling and the automated healing of unhealthy applications.

      • Services Overview - covers services, another frequently used object in Kubernetes clusters. Understanding the types of services and the options they have is essential for running both stateless and stateful applications.

      Other important concepts that you can look into are Volumes, Ingresses and Secrets, all of which come in handy when deploying production applications.

      Kubernetes has a lot of functionality and features to offer. The Kubernetes Official Documentation is the best place to learn about concepts, find task-specific guides, and look up API references for various objects.




      An Introduction to Helm, the Package Manager for Kubernetes


      Introduction

      Deploying applications to Kubernetes – the powerful and popular container-orchestration system – can be complex. Setting up a single application can involve creating multiple interdependent Kubernetes resources – such as pods, services, deployments, and replicasets – each requiring you to write a detailed YAML manifest file.

      Helm is a package manager for Kubernetes that allows developers and operators to more easily package, configure, and deploy applications and services onto Kubernetes clusters.

      Helm is now an official Kubernetes project and is part of the Cloud Native Computing Foundation, a non-profit that supports open source projects in and around the Kubernetes ecosystem.

      In this article we will give an overview of Helm and the various abstractions it uses to simplify deploying applications to Kubernetes. If you are new to Kubernetes, it may be helpful to read An Introduction to Kubernetes first to familiarize yourself with the basic concepts.

      An Overview of Helm

      Almost every programming language and operating system has its own package manager to help with the installation and maintenance of software. Helm provides the same basic feature set as many of the package managers you may already be familiar with, such as Debian’s apt or Python’s pip.

      Helm can:

      • Install software.
      • Automatically install software dependencies.
      • Upgrade software.
      • Configure software deployments.
      • Fetch software packages from repositories.

      Helm provides this functionality through the following components:

      • A command line tool, helm, which provides the user interface to all Helm functionality.
      • A companion server component, tiller, that runs on your Kubernetes cluster, listens for commands from helm, and handles the configuration and deployment of software releases on the cluster.
      • The Helm packaging format, called charts.
      • An official curated charts repository with prepackaged charts for popular open-source software projects.

      We’ll investigate the charts format in more detail next.

      Charts

      Helm packages are called charts, and they consist of a few YAML configuration files and some templates that are rendered into Kubernetes manifest files. Here is the basic directory structure of a chart:

      Example chart directory

      package-name/
        charts/
        templates/
        Chart.yaml
        LICENSE
        README.md
        requirements.yaml
        values.yaml
      

      These directories and files have the following functions:

      • charts/: Manually managed chart dependencies can be placed in this directory, though it is typically better to use requirements.yaml to dynamically link dependencies.
      • templates/: This directory contains template files that are combined with configuration values (from values.yaml and the command line) and rendered into Kubernetes manifests. The templates use the Go programming language’s template format.
      • Chart.yaml: A YAML file with metadata about the chart, such as chart name and version, maintainer information, a relevant website, and search keywords (a sample follows this list).
      • LICENSE: A plaintext license for the chart.
      • README.md: A readme file with information for users of the chart.
      • requirements.yaml: A YAML file that lists the chart’s dependencies.
      • values.yaml: A YAML file of default configuration values for the chart.
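
      As a reference for the metadata file described above, here is a hypothetical minimal Chart.yaml; every name, address, and keyword in it is a placeholder:

      Example Chart.yaml

      apiVersion: v1
      name: example-app
      version: 0.1.0
      description: An example chart for a web application
      home: https://example.com
      keywords:
        - web
        - example
      maintainers:
        - name: Jane Doe
          email: jane@example.com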

      The helm command can install a chart from a local directory, or from a .tar.gz packaged version of this directory structure. These packaged charts can also be automatically downloaded and installed from chart repositories or repos.

      We’ll look at chart repositories next.

      Chart Repositories

      A Helm chart repo is a simple HTTP site that serves an index.yaml file and .tar.gz packaged charts. The helm command has subcommands available to help package charts and create the required index.yaml file. These files can be served by any web server, object storage service, or a static site host such as GitHub Pages.
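
      For instance, a minimal sketch of packaging a chart and generating the repo index might look like this, where example-app and the URL are placeholders:

      • helm package ./example-app
      • helm repo index . --url https://example.com/charts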

      Helm comes preconfigured with a default chart repository, referred to as stable. This repo points to a Google Storage bucket at https://kubernetes-charts.storage.googleapis.com. The source for the stable repo can be found in the helm/charts Git repository on GitHub.

      Alternate repos can be added with the helm repo add command.

      Whether you’re installing a chart you’ve developed locally, or one from a repo, you’ll need to configure it for your particular setup. We’ll look into configs next.

      Chart Configuration

      A chart usually comes with default configuration values in its values.yaml file. Some applications may be fully deployable with default values, but you’ll typically need to override some of the configuration to meet your needs.

      The values that are exposed for configuration are determined by the author of the chart. Some are used to configure Kubernetes primitives, and some may be passed through to the underlying container to configure the application itself.

      Here is a snippet of some example values:

      values.yaml

      service:
        type: ClusterIP
        port: 3306
      

      These are options to configure a Kubernetes Service resource. You can use helm inspect values chart-name to dump all of the available configuration values for a chart.

      These values can be overridden by writing your own YAML file and using it when running helm install, or by setting options individually on the command line with the --set flag. You only need to specify those values that you want to change from the defaults.
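
      As a sketch, either of the following would override the port shown above at install time, assuming a chart (such as stable/mysql) that actually exposes a service.port value:

      • helm install stable/mysql --set service.port=3307
      • helm install stable/mysql -f custom-values.yaml

      Here custom-values.yaml is a hypothetical file containing only the values you want to change.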

      A Helm chart deployed with a particular configuration is called a release. We will talk about releases next.

      Releases

      During the installation of a chart, Helm combines the chart’s templates with the configuration specified by the user and the defaults in values.yaml. These are rendered into Kubernetes manifests that are then deployed via the Kubernetes API. This creates a release: a specific configuration and deployment of a particular chart.

      This concept of releases is important, because you may want to deploy the same application more than once on a cluster. For instance, you may need multiple MySQL servers with different configurations.

      You also will probably want to upgrade different instances of a chart individually. Perhaps one application is ready for an updated MySQL server but another is not. With Helm, you upgrade each release individually.

      You might upgrade a release because its chart has been updated, or because you want to update the release’s configuration. Either way, each upgrade will create a new revision of a release, and Helm will allow you to easily roll back to previous revisions in case there’s an issue.
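
      As a sketch, upgrading and then rolling back a hypothetical release named my-release would look like this:

      • helm upgrade my-release stable/mysql -f new-values.yaml
      • helm rollback my-release 1

      Each upgrade creates a new numbered revision, and helm rollback targets one of those revision numbers.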

      Creating Charts

      If you can’t find an existing chart for the software you are deploying, you may want to create your own. Helm can output the scaffold of a chart directory with helm create chart-name. This will create a folder with the files and directories we discussed in the Charts section above.

      From there, you’ll want to fill out your chart’s metadata in Chart.yaml and put your Kubernetes manifest files into the templates directory. You’ll then need to extract relevant configuration variables out of your manifests and into values.yaml, then include them back into your manifest templates using the templating system.
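
      For example, a template in templates/ might reference the service values shown earlier like this abbreviated sketch (the selector and other fields are omitted for brevity):

      Example template snippet

      apiVersion: v1
      kind: Service
      metadata:
        name: {{ .Release.Name }}-svc
      spec:
        type: {{ .Values.service.type }}
        ports:
          - port: {{ .Values.service.port }}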

      The helm command has many subcommands available to help you test, package, and serve your charts. For more information, please read the official Helm documentation on developing charts.

      Conclusion

      In this article we reviewed Helm, the package manager for Kubernetes. We overviewed the Helm architecture and the individual helm and tiller components, detailed the Helm charts format, and looked at chart repositories. We also looked into how to configure a Helm chart and how configurations and charts are combined and deployed as releases on Kubernetes clusters. Finally, we touched on the basics of creating a chart when a suitable chart isn’t already available.

      For more information about Helm, take a look at the official Helm documentation. To find official charts for Helm, check out the official helm/charts Git repository on GitHub.




      How To Install the Django Web Framework on Ubuntu 18.04


      Introduction

      Django is a full-featured Python web framework for developing dynamic websites and applications. Using Django, you can quickly create Python web applications and rely on the framework to do a good deal of the heavy lifting.

      In this guide, you will get Django up and running on an Ubuntu 18.04 server. After installation, you will start a new project to use as the basis for your site.

      Different Methods

      There are different ways to install Django, depending upon your needs and how you want to configure your development environment. These have different advantages and one method may lend itself better to your specific situation than others.

      Some of the different methods include:

      • Global install from packages: The official Ubuntu repositories contain Django packages that can be installed with the conventional apt package manager. This is simple, but not as flexible as some other methods. Also, the version contained in the repositories may lag behind the official versions available from the project.
      • Install with pip in a virtual environment: You can create a self-contained environment for your projects using tools like venv and virtualenv. A virtual environment allows you to install Django in a project directory without affecting the larger system, along with other per-project customizations and packages. This is typically the most practical and recommended approach to working with Django.
      • Development version install with git: If you wish to install the latest development version instead of the stable release, you can acquire the code from the Git repo. This is necessary to get the latest features/fixes and can be done within your virtual environment. Development versions do not have the same stability guarantees as more stable versions, however.

      Prerequisites

      Before you begin, you should have a non-root user with sudo privileges available on your Ubuntu 18.04 server. To set this up, follow our Ubuntu 18.04 initial server setup guide.

      Global Install from Packages

      If you wish to install Django using the Ubuntu repositories, the process is very straightforward.

      First, update your local package index with apt:

      • sudo apt update

      Next, check which version of Python you have installed. Ubuntu 18.04 ships with Python 3.6 by default, which you can verify by typing:

      • python3 -V

      You should see output like this:

      Output

      Python 3.6.5

      Next, install Django:

      • sudo apt install python3-django

      You can test that the installation was successful by typing:

      • django-admin --version

      Output

      1.11.11

      This means that the software was successfully installed. You may also notice that the Django version is not the latest stable version. To learn more about how to use the software, skip ahead to learn how to create a sample project.

      Install with pip in a Virtual Environment

      The most flexible way to install Django on your system is within a virtual environment. We will show you how to install Django in a virtual environment that we will create with the venv module, part of the standard Python 3 library. This tool allows you to create virtual Python environments and install Python packages without affecting the rest of the system. You can therefore select Python packages on a per-project basis, regardless of conflicts with other projects' requirements.

      Let's begin by refreshing the local package index:

      • sudo apt update

      Check the version of Python you have installed:

      • python3 -V

      Output

      Python 3.6.5

      Next, let's install pip from the Ubuntu repositories:

      • sudo apt install python3-pip

      Once pip is installed, you can use it to install the venv package:

      • sudo apt install python3-venv

      Now, whenever you start a new project, you can create a virtual environment for it. Start by creating and moving into a new project directory:

      • mkdir ~/newproject
      • cd ~/newproject

      Next, create a virtual environment within the project directory using the python command that's compatible with your version of Python. We will call our virtual environment my_env, but you should name it something descriptive:

      • python3 -m venv my_env

      This will install standalone versions of Python and pip into an isolated directory structure within your project directory. A directory will be created with the name you select, which will hold the file hierarchy where your packages will be installed.

      To install packages into the isolated environment, you must activate it by typing:

      • source my_env/bin/activate

      Your prompt should change to reflect that you are now in your virtual environment. It will look something like (my_env)username@hostname:~/newproject$.

      In your new environment, you can use pip to install Django. Regardless of your Python version, pip should just be called pip when you are in your virtual environment. Also note that you do not need to use sudo since you are installing locally:

      • pip install django

      You can verify the installation by typing:

      • django-admin --version

      Output

      2.1

      Note that your version may differ from the version shown here.

      To leave your virtual environment, you need to issue the deactivate command from anywhere on the system:

      • deactivate

      Your prompt should revert to the conventional display. When you wish to work on your project again, re-activate your virtual environment by moving back into your project directory and activating:

      • cd ~/newproject
      • source my_env/bin/activate

      Development Version Install with Git

      If you need a development version of Django, you can download and install Django from its Git repository. Let's do this from within a virtual environment.

      First, let's update the local package index:

      • sudo apt update

      Check the version of Python you have installed:

      • python3 -V

      Output

      Python 3.6.5

      Next, install pip from the official repositories:

      • sudo apt install python3-pip

      Install the venv package to create your virtual environment:

      • sudo apt install python3-venv

      The next step is cloning the Django repository. Between releases, this repository will have more up-to-date features and bug fixes at the possible expense of stability. You can clone the repository to a directory called ~/django-dev within your home directory by typing:

      • git clone git://github.com/django/django ~/django-dev

      Change to this directory:

      • cd ~/django-dev

      Create a virtual environment using the python command that's compatible with your installed version of Python:

      • python3 -m venv my_env

      Activate it:

      • source my_env/bin/activate

      Next, you can install the repository using pip. The -e option will install in "editable" mode, which is necessary when installing from version control:

      • pip install -e ~/django-dev

      You can verify that the installation was successful by typing:

      • django-admin --version

      Output

      2.2.dev20180802155335

      Again, the version you see displayed may not match what is shown here.

      You now have the latest version of Django in your virtual environment.

      Creating a Sample Project

      With Django installed, you can begin building your project. We will go over how to create a project and test it on your development server using a virtual environment.

      First, create a directory for your project and change into it:

      • mkdir ~/django-test
      • cd ~/django-test

      Next, create your virtual environment:

      • python3 -m venv my_env

      Activate the environment:

      • source my_env/bin/activate

      Install Django:

      • pip install django

      To build your project, you can use django-admin with the startproject command. We will call our project djangoproject, but you can replace this with a different name. startproject will create a directory within your current working directory that includes:

      • A management script, manage.py, which you can use to administer various Django-specific tasks.
      • A directory (with the same name as the project) that includes the actual project code.

      To avoid having too many nested directories, however, let's tell Django to place the management script and inner directory in the current directory (notice the ending dot):

      • django-admin startproject djangoproject .
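
      After running this command, your project directory should look roughly like the following, where my_env is the virtual environment created earlier:

      ~/django-test/
        manage.py
        my_env/
        djangoproject/
          __init__.py
          settings.py
          urls.py
          wsgi.py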

      To migrate the database (this example uses SQLite by default), let's use the migrate command with manage.py. Migrations apply any changes you've made to your Django models to your database schema.

      To migrate the database, type:

      • python manage.py migrate

      You will see output like the following:

      Output

      Operations to perform:
        Apply all migrations: admin, auth, contenttypes, sessions
      Running migrations:
        Applying contenttypes.0001_initial... OK
        Applying auth.0001_initial... OK
        Applying admin.0001_initial... OK
        Applying admin.0002_logentry_remove_auto_add... OK
        Applying admin.0003_logentry_add_action_flag_choices... OK
        Applying contenttypes.0002_remove_content_type_name... OK
        Applying auth.0002_alter_permission_name_max_length... OK
        Applying auth.0003_alter_user_email_max_length... OK
        Applying auth.0004_alter_user_username_opts... OK
        Applying auth.0005_alter_user_last_login_null... OK
        Applying auth.0006_require_contenttypes_0002... OK
        Applying auth.0007_alter_validators_add_error_messages... OK
        Applying auth.0008_alter_user_username_max_length... OK
        Applying auth.0009_alter_user_last_name_max_length... OK
        Applying sessions.0001_initial... OK

      Finally, let's create an administrative user so that you can use the Django admin interface. Let's do this with the createsuperuser command:

      • python manage.py createsuperuser

      You will be prompted for a username, an email address, and a password for your user.

      Modifying ALLOWED_HOSTS in the Django Settings

      To successfully test your application, you will need to modify one of the directives in the Django settings.

      Open the settings file by typing:

      • nano ~/django-test/djangoproject/settings.py

      Inside, locate the ALLOWED_HOSTS directive. This defines a whitelist of addresses or domain names that may be used to connect to the Django instance. An incoming request with a Host header that is not in this list will raise an exception. Django requires that you set this to prevent a certain class of security vulnerability.

      In the square brackets, list the IP addresses or domain names that are associated with your Django server. Each item should be listed in quotations, with separate entries separated by a comma. If you want requests for an entire domain and any subdomains, prepend a period to the beginning of the entry:

      ~/django-test/djangoproject/settings.py

      . . .
      ALLOWED_HOSTS = ['your_server_ip_or_domain', 'your_second_ip_or_domain', . . .]
      

      When you are finished, save the file and exit your editor.

      Testing the Development Server

      Once you have a user, you can start up the Django development server to see what a fresh Django project looks like. You should only use this for development purposes. When you are ready to deploy, be sure to follow Django's guidelines on deployment carefully.

      Before you try the development server, make sure you open the appropriate port in your firewall. If you followed the initial server setup guide and are using UFW, you can open port 8000 by typing:

      • sudo ufw allow 8000

      Start the development server:

      • python manage.py runserver your_server_ip:8000

      Visit your server's IP address followed by :8000 in your web browser:

      http://your_server_ip:8000
      

      You should see something that looks like this:

      Django public page

      To access the admin interface, add /admin/ to the end of your URL:

      http://your_server_ip:8000/admin/
      

      This will take you to a log in screen:

      Django admin login

      If you enter the admin username and password that you just created, you will have access to the main admin section of the site:

      Django admin page

      For more information about working with the Django admin interface, please see "How To Enable and Connect the Django Admin Interface."

      When you are finished looking through the default site, you can stop the development server by typing CTRL-C in your terminal.

      The Django project you've created provides the structural basis for designing a more complete site. Check out the Django documentation for more information about how to build your applications and customize your site.

      Conclusion

      You should now have Django installed on your Ubuntu 18.04 server, providing the main tools you need to create powerful web applications. You should also know how to start a new project and launch the developer server. Leveraging a complete web framework like Django can help make development faster, allowing you to concentrate only on the unique aspects of your applications.

      If you would like more information about working with Django, including in-depth discussions of things like models and views, please see our Django development series.


