
      How To Deploy a PHP Application with Kubernetes on Ubuntu 18.04


      The author selected Electronic Frontier Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      Kubernetes is an open source container orchestration system. It allows you to create, update, and scale containers without worrying about downtime.

      To run a PHP application, Nginx acts as a proxy to PHP-FPM. Containerizing this setup in a single container can be a cumbersome process, but Kubernetes will help manage both services in separate containers. Using Kubernetes will allow you to keep your containers reusable and swappable, and you will not have to rebuild your container image every time there’s a new version of Nginx or PHP.

      In this tutorial, you will deploy a PHP 7 application on a Kubernetes cluster with Nginx and PHP-FPM running in separate containers. You will also learn how to keep your configuration files and application code outside the container image using DigitalOcean’s Block Storage system. This approach will allow you to reuse the Nginx image for any application that needs a web/proxy server by passing a configuration volume, rather than rebuilding the image.

      Prerequisites

      Step 1 — Creating the PHP-FPM and Nginx Services

      In this step, you will create the PHP-FPM and Nginx services. A service allows access to a set of pods from within the cluster. Services within a cluster can communicate directly through their names, without the need for IP addresses. The PHP-FPM service will allow access to the PHP-FPM pods, while the Nginx service will allow access to the Nginx pods.

      Since Nginx pods will proxy the PHP-FPM pods, you will need to tell the service how to find them. Instead of using IP addresses, you will take advantage of Kubernetes’ automatic service discovery to use human-readable names to route requests to the appropriate service.
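As an illustration of this discovery mechanism, once the services exist you could resolve the php service by name from a temporary pod inside the cluster; this is a quick sanity check, not a required step, and dns-test is an arbitrary pod name:

• kubectl run dns-test --image=busybox --rm -it --restart=Never -- nslookup php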

      To create the service, you will create an object definition file. Every Kubernetes object definition is a YAML file that contains at least the following items:

      • apiVersion: The version of the Kubernetes API that the definition belongs to.
      • kind: The Kubernetes object this file represents. For example, a pod or service.
      • metadata: This contains the name of the object along with any labels that you may wish to apply to it.
• spec: This contains a specific configuration depending on the kind of object you are creating, such as the container image or the ports on which the container will be accessible.
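For illustration only, here is a minimal, hypothetical pod definition that uses all four keys; the rest of this step builds a real Service definition in the same way:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  containers:
  - name: example
    image: nginx:1.7.9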

First, SSH to your master node and create a directory to hold your Kubernetes object definitions:

• mkdir definitions

Navigate to the newly created definitions directory:

• cd definitions

Make your PHP-FPM service by creating a php_service.yaml file with nano:

• nano php_service.yaml

      Set kind as Service to specify that this object is a service:

      php_service.yaml

      apiVersion: v1
      kind: Service
      

      Name the service php since it will provide access to PHP-FPM:

      php_service.yaml

      ...
      metadata:
        name: php
      

      You will logically group different objects with labels. In this tutorial, you will use labels to group the objects into "tiers", such as frontend or backend. The PHP pods will run behind this service, so you will label it as tier: backend.

      php_service.yaml

      ...
        labels:
          tier: backend
      

      A service determines which pods to access by using selector labels. A pod that matches these labels will be serviced, independent of whether the pod was created before or after the service. You will add labels for your pods later in the tutorial.

Use the tier: backend label to assign the pods to the back-end tier. You will also add the app: php label to specify that these pods run PHP. Add these two labels under the spec.selector key:

      php_service.yaml

      ...
      spec:
        selector:
          app: php
          tier: backend
      

      Next, specify the port used to access this service. You will use port 9000 in this tutorial. Add it to the php_service.yaml file under spec:

      php_service.yaml

      ...
        ports:
          - protocol: TCP
            port: 9000
      

      Your completed php_service.yaml file will look like this:

      php_service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: php
        labels:
          tier: backend
      spec:
        selector:
          app: php
          tier: backend
        ports:
        - protocol: TCP
          port: 9000
      

      Hit CTRL + O to save the file, and then CTRL + X to exit nano.

Now that you've created the object definition for your service, you will run it using the kubectl apply command with the -f argument, specifying your php_service.yaml file.

      Create your service:

      • kubectl apply -f php_service.yaml

      This output confirms the service creation:

      Output

      service/php created

Verify that your service is running:

• kubectl get svc

      You will see your PHP-FPM service running:

      Output

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    10m
php          ClusterIP   10.100.59.238   <none>        9000/TCP   5m

      There are various service types that Kubernetes supports. Your php service uses the default service type, ClusterIP. This service type assigns an internal IP and makes the service reachable only from within the cluster.
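If you would like more detail than get provides, such as the selector and the endpoints behind the service, you can describe it:

• kubectl describe svc php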

Now that the PHP-FPM service is ready, you will create the Nginx service. Create and open a new file called nginx_service.yaml with the editor:

• nano nginx_service.yaml

      This service will target Nginx pods, so you will name it nginx. You will also add a tier: backend label as it belongs in the back-end tier:

      nginx_service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: nginx
        labels:
          tier: backend
      

      Similar to the php service, target the pods with the selector labels app: nginx and tier: backend. Make this service accessible on port 80, the default HTTP port.

      nginx_service.yaml

      ...
      spec:
        selector:
          app: nginx
          tier: backend
        ports:
        - protocol: TCP
          port: 80
      

The Nginx service will be publicly accessible to the internet from your Droplet's public IP address. Replace your_public_ip with your Droplet's public IP address, which you can find in your DigitalOcean Control Panel. Under spec.externalIPs, add:

      nginx_service.yaml

      ...
      spec:
        externalIPs:
        - your_public_ip
      

      Your nginx_service.yaml file will look like this:

      nginx_service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: nginx
        labels:
          tier: backend
      spec:
        selector:
          app: nginx
          tier: backend
        ports:
        - protocol: TCP
          port: 80
        externalIPs:
        - your_public_ip    
      

      Save and close the file. Create the Nginx service:

      • kubectl apply -f nginx_service.yaml

      You will see the following output when the service is running:

      Output

      service/nginx created

You can view all running services by executing:

• kubectl get services

      You will see both the PHP-FPM and Nginx services listed in the output:

      Output

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP      PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>           443/TCP    13m
nginx        ClusterIP   10.102.160.47   your_public_ip   80/TCP     50s
php          ClusterIP   10.100.59.238   <none>           9000/TCP   8m

      Please note, if you want to delete a service you can run:

      • kubectl delete svc/service_name

      Now that you've created your PHP-FPM and Nginx services, you will need to specify where to store your application code and configuration files.

      Step 2 — Installing the DigitalOcean Storage Plug-In

      Kubernetes provides different storage plug-ins that can create the storage space for your environment. In this step, you will install the DigitalOcean storage plug-in to create block storage on DigitalOcean. Once the installation is complete, it will add a storage class named do-block-storage that you will use to create your block storage.

      You will first configure a Kubernetes Secret object to store your DigitalOcean API token. Secret objects are used to share sensitive information, like SSH keys and passwords, with other Kubernetes objects within the same namespace. Namespaces provide a way to logically separate your Kubernetes objects.

Open a file named secret.yaml with the editor:

• nano secret.yaml

      You will name your Secret object digitalocean and add it to the kube-system namespace. The kube-system namespace is the default namespace for Kubernetes’ internal services and is also used by the DigitalOcean storage plug-in to launch various components.

      secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: digitalocean
        namespace: kube-system
      

      Instead of a spec key, a Secret uses a data or stringData key to hold the required information. The data parameter holds base64 encoded data that is automatically decoded when retrieved. The stringData parameter holds non-encoded data that is automatically encoded during creation or updates, and does not output the data when retrieving Secrets. You will use stringData in this tutorial for convenience.
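For reference, if you used the data key instead, you would encode the token yourself first; your-api-token below is the same placeholder used throughout this step:

• echo -n 'your-api-token' | base64

You would then place the resulting string under data as the access-token value.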

      Add the access-token as stringData:

      secret.yaml

      ...
      stringData:
        access-token: your-api-token
      

      Save and exit the file.

      Your secret.yaml file will look like this:

      secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: digitalocean
        namespace: kube-system
      stringData:
        access-token: your-api-token
      

      Create the secret:

      • kubectl apply -f secret.yaml

      You will see this output upon Secret creation:

      Output

      secret/digitalocean created

      You can view the secret with the following command:

      • kubectl -n kube-system get secret digitalocean

      The output will look similar to this:

      Output

NAME           TYPE     DATA   AGE
digitalocean   Opaque   1      41s

The Opaque type is the default Secret type, used for arbitrary user-defined data such as the token you stored; you can read more about Secret types in the Secret design spec. The DATA field shows the number of items stored in this Secret. In this case, it shows 1 because you have a single key stored.
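Because stringData values are encoded at creation, you can decode the stored token to confirm it was saved correctly. This uses kubectl's standard jsonpath output:

• kubectl -n kube-system get secret digitalocean -o jsonpath='{.data.access-token}' | base64 --decode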

      Now that your Secret is in place, install the DigitalOcean block storage plug-in:

      • kubectl apply -f https://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-v1.1.0.yaml

      You will see output similar to the following:

      Output

csidriver.storage.k8s.io/dobs.csi.digitalocean.com created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
storageclass.storage.k8s.io/do-block-storage created
statefulset.apps/csi-do-controller created
serviceaccount/csi-do-controller-sa created
clusterrole.rbac.authorization.k8s.io/csi-do-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-do-provisioner-binding created
clusterrole.rbac.authorization.k8s.io/csi-do-attacher-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-do-attacher-binding created
clusterrole.rbac.authorization.k8s.io/csi-do-snapshotter-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-do-snapshotter-binding created
daemonset.apps/csi-do-node created
serviceaccount/csi-do-node-sa created
clusterrole.rbac.authorization.k8s.io/csi-do-node-driver-registrar-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-do-node-driver-registrar-binding created
error: unable to recognize "https://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-v1.1.0.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1alpha1"

      For this tutorial, it is safe to ignore the errors.

      Now that you have installed the DigitalOcean storage plug-in, you can create block storage to hold your application code and configuration files.

      Step 3 — Creating the Persistent Volume

      With your Secret in place and the block storage plug-in installed, you are now ready to create your Persistent Volume. A Persistent Volume, or PV, is block storage of a specified size that lives independently of a pod’s life cycle. Using a Persistent Volume will allow you to manage or update your pods without worrying about losing your application code. A Persistent Volume is accessed by using a PersistentVolumeClaim, or PVC, which mounts the PV at the required path.

Open a file named code_volume.yaml with your editor:

• nano code_volume.yaml

      Name the PVC code by adding the following parameters and values to your file:

      code_volume.yaml

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: code
      

      The spec for a PVC contains the following items:

      • accessModes which vary by the use case. These are:
        • ReadWriteOnce – mounts the volume as read-write by a single node
        • ReadOnlyMany – mounts the volume as read-only by many nodes
        • ReadWriteMany – mounts the volume as read-write by many nodes
      • resources – the storage space that you require

      DigitalOcean block storage is only mounted to a single node, so you will set the accessModes to ReadWriteOnce. This tutorial will guide you through adding a small amount of application code, so 1GB will be plenty in this use case. If you plan on storing a larger amount of code or data on the volume, you can modify the storage parameter to fit your requirements. You can increase the amount of storage after volume creation, but shrinking the disk is not supported.
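As a sketch of how you might grow the volume later, assuming your storage class permits expansion, you could patch the claim with a larger request:

• kubectl patch pvc code -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'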

      code_volume.yaml

      ...
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
      

      Next, specify the storage class that Kubernetes will use to provision the volumes. You will use the do-block-storage class created by the DigitalOcean block storage plug-in.

      code_volume.yaml

      ...
        storageClassName: do-block-storage
      

      Your code_volume.yaml file will look like this:

      code_volume.yaml

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: code
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: do-block-storage
      

      Save and exit the file.

      Create the code PVC using kubectl:

      • kubectl apply -f code_volume.yaml

      The following output tells you that the object was successfully created, and you are ready to mount your 1GB PVC as a volume.

      Output

      persistentvolumeclaim/code created

To view available Persistent Volumes (PV), run:

• kubectl get pv

      You will see your PV listed:

      Output

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS       REASON   AGE
pvc-ca4df10f-ab8c-11e8-b89d-12331aa95b13   1Gi        RWO            Delete           Bound    default/code   do-block-storage            2m

Most of these fields correspond to your configuration file; the exceptions are RECLAIM POLICY and STATUS. The Reclaim Policy defines what is done with the PV after the PVC accessing it is deleted. Delete removes the PV from Kubernetes as well as the DigitalOcean infrastructure. You can learn more about the Reclaim Policy and Status from the Kubernetes PV documentation.
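For example, if you wanted the underlying volume to survive deletion of the claim, one option is to switch the policy to Retain; replace your_pv_name with the name shown by kubectl get pv:

• kubectl patch pv your_pv_name -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'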

      You've successfully created a Persistent Volume using the DigitalOcean block storage plug-in. Now that your Persistent Volume is ready, you will create your pods using a Deployment.

      Step 4 — Creating a PHP-FPM Deployment

In this step, you will learn how to use a Deployment to create your PHP-FPM pod. Deployments provide a uniform way to create, update, and manage pods by using ReplicaSets. If an update does not work as expected, a Deployment will automatically roll back its pods to a previous image.

      The Deployment spec.selector key will list the labels of the pods it will manage. It will also use the template key to create the required pods.

This step will also introduce the use of Init Containers. Init Containers run one or more commands before the regular containers specified under the pod's template key. In this tutorial, your Init Container will fetch a sample index.php file from GitHub using wget. These are the contents of the sample file:

      index.php

      <?php
      echo phpinfo(); 
      

To create your Deployment, open a new file called php_deployment.yaml with your editor:

• nano php_deployment.yaml

      This Deployment will manage your PHP-FPM pods, so you will name the Deployment object php. The pods belong to the back-end tier, so you will group the Deployment into this group by using the tier: backend label:

      php_deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: php
        labels:
          tier: backend
      

      For the Deployment spec, you will specify how many copies of this pod to create by using the replicas parameter. The number of replicas will vary depending on your needs and available resources. You will create one replica in this tutorial:

      php_deployment.yaml

      ...
      spec:
        replicas: 1
      

This Deployment will manage pods that match the app: php and tier: backend labels. Under the selector key, add:

      php_deployment.yaml

      ...
        selector:
          matchLabels:
            app: php
            tier: backend
      

Next, the Deployment spec requires the template for your pod's object definition. This template defines the specification that pods will be created from. First, you will add the labels that were specified for the php service selectors and the Deployment's matchLabels. Add app: php and tier: backend under template.metadata.labels:

      php_deployment.yaml

      ...
        template:
          metadata:
            labels:
              app: php
              tier: backend
      

      A pod can have multiple containers and volumes, but each will need a name. You can selectively mount volumes to a container by specifying a mount path for each volume.

      First, specify the volumes that your containers will access. You created a PVC named code to hold your application code, so name this volume code as well. Under spec.template.spec.volumes, add the following:

      php_deployment.yaml

      ...
          spec:
            volumes:
            - name: code
              persistentVolumeClaim:
                claimName: code
      

      Next, specify the container you want to run in this pod. You can find various images on the Docker store, but in this tutorial you will use the php:7-fpm image.

      Under spec.template.spec.containers, add the following:

      php_deployment.yaml

      ...
            containers:
            - name: php
              image: php:7-fpm
      

      Next, you will mount the volumes that the container requires access to. This container will run your PHP code, so it will need access to the code volume. You will also use mountPath to specify /code as the mount point.

      Under spec.template.spec.containers.volumeMounts, add:

      php_deployment.yaml

      ...
              volumeMounts:
              - name: code
                mountPath: /code
      

      Now that you have mounted your volume, you need to get your application code on the volume. You may have previously used FTP/SFTP or cloned the code over an SSH connection to accomplish this, but this step will show you how to copy the code using an Init Container.

      Depending on the complexity of your setup process, you can either use a single initContainer to run a script that builds your application, or you can use one initContainer per command. Make sure that the volumes are mounted to the initContainer.
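For illustration, a hypothetical two-step setup might download the code and then adjust its ownership in separate Init Containers, which run in order; the URL, the fix-permissions container, and the UID below are placeholders, not part of this tutorial:

...
      initContainers:
      - name: download
        image: busybox
        command: ["wget", "-O", "/code/index.php", "https://example.com/index.php"]
        volumeMounts:
        - name: code
          mountPath: /code
      - name: fix-permissions
        image: busybox
        command: ["chown", "-R", "33:33", "/code"]
        volumeMounts:
        - name: code
          mountPath: /code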

      In this tutorial, you will use a single Init Container with busybox to download the code. busybox is a small image that contains the wget utility that you will use to accomplish this.

      Under spec.template.spec, add your initContainer and specify the busybox image:

      php_deployment.yaml

      ...
            initContainers:
            - name: install
              image: busybox
      

      Your Init Container will need access to the code volume so that it can download the code in that location. Under spec.template.spec.initContainers, mount the volume code at the /code path:

      php_deployment.yaml

      ...
              volumeMounts:
              - name: code
                mountPath: /code
      

Each Init Container needs to run a command. Your Init Container will use wget to download the code from GitHub into the /code working directory. The -O option gives the downloaded file a name, and you will name this file index.php.

      Note: Be sure to trust the code you’re pulling. Before pulling it to your server, inspect the source code to ensure you are comfortable with what the code does.

      Under the install container in spec.template.spec.initContainers, add these lines:

      php_deployment.yaml

      ...
              command:
              - wget
              - "-O"
              - "/code/index.php"
              - https://raw.githubusercontent.com/do-community/php-kubernetes/master/index.php
      

      Your completed php_deployment.yaml file will look like this:

      php_deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: php
        labels:
          tier: backend
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: php
            tier: backend
        template:
          metadata:
            labels:
              app: php
              tier: backend
          spec:
            volumes:
            - name: code
              persistentVolumeClaim:
                claimName: code
            containers:
            - name: php
              image: php:7-fpm
              volumeMounts:
              - name: code
                mountPath: /code
            initContainers:
            - name: install
              image: busybox
              volumeMounts:
              - name: code
                mountPath: /code
              command:
              - wget
              - "-O"
              - "/code/index.php"
              - https://raw.githubusercontent.com/do-community/php-kubernetes/master/index.php
      

      Save the file and exit the editor.

      Create the PHP-FPM Deployment with kubectl:

      • kubectl apply -f php_deployment.yaml

      You will see the following output upon Deployment creation:

      Output

      deployment.apps/php created

      To summarize, this Deployment will start by downloading the specified images. It will then request the PersistentVolume from your PersistentVolumeClaim and serially run your initContainers. Once complete, the containers will run and mount the volumes to the specified mount point. Once all of these steps are complete, your pod will be up and running.
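You can watch the rollout progress until the Deployment is fully available:

• kubectl rollout status deployment/php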

You can view your Deployment by running:

• kubectl get deployments

      You will see the output:

      Output

NAME   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
php    1         1         1            0           19s

This output can help you understand the current state of the Deployment. A Deployment is one of the controllers that maintains a desired state. The template you created specifies that the DESIRED state will have 1 replica of the pod named php. The CURRENT field indicates how many replicas are running, so this should match the DESIRED state. You can read about the remaining fields in the Kubernetes Deployments documentation.

You can view the pods that this Deployment started with the following command:

• kubectl get pods

      The output of this command varies depending on how much time has passed since creating the Deployment. If you run it shortly after creation, the output will likely look like this:

      Output

NAME                   READY   STATUS     RESTARTS   AGE
php-86d59fd666-bf8zd   0/1     Init:0/1   0          9s

      The columns represent the following information:

• Ready: The number of containers in the pod that are ready, out of the total number of containers.
• Status: The status of the pod. Init indicates that the Init Containers are running. In this output, 0 out of 1 Init Containers have finished running.
• Restarts: How many times the pod has restarted. This number will increase if any of your Init Containers fail; the Deployment will restart the pod until it reaches the desired state.

Depending on the complexity of your startup scripts, it can take a couple of minutes for the status to change to PodInitializing:

      Output

NAME                   READY   STATUS            RESTARTS   AGE
php-86d59fd666-lkwgn   0/1     PodInitializing   0          39s

      This means the Init Containers have finished and the containers are initializing. If you run the command when all of the containers are running, you will see the pod status change to Running.

      Output

NAME                   READY   STATUS    RESTARTS   AGE
php-86d59fd666-lkwgn   1/1     Running   0          1m

      You now see that your pod is running successfully. If your pod doesn't start, you can debug with the following commands:

• View detailed information of a pod:
• kubectl describe pods pod-name
• View logs generated by a pod:
• kubectl logs pod-name
• View logs for a specific container in a pod:
• kubectl logs pod-name container-name
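You can also confirm that the Init Container placed the application code on the volume; replace pod-name with the name shown by kubectl get pods:

• kubectl exec pod-name -- ls /code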

      Your application code is mounted and the PHP-FPM service is now ready to handle connections. You can now create your Nginx Deployment.

      Step 5 — Creating the Nginx Deployment

      In this step, you will use a ConfigMap to configure Nginx. A ConfigMap holds your configuration in a key-value format that you can reference in other Kubernetes object definitions. This approach will grant you the flexibility to reuse or swap the image with a different Nginx version if needed. Updating the ConfigMap will automatically replicate the changes to any pod mounting it.

      Create a nginx_configMap.yaml file for your ConfigMap with your editor:

      • nano nginx_configMap.yaml

Name the ConfigMap nginx-config and group it into the back-end tier with the tier: backend label:

      nginx_configMap.yaml

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: nginx-config
        labels:
          tier: backend
      

      Next, you will add the data for the ConfigMap. Name the key config and add the contents of your Nginx configuration file as the value. You can use the example Nginx configuration from this tutorial.

      Because Kubernetes can route requests to the appropriate host for a service, you can enter the name of your PHP-FPM service in the fastcgi_pass parameter instead of its IP address. Add the following to your nginx_configMap.yaml file:

      nginx_configMap.yaml

      ...
data:
  config: |
    server {
      index index.php index.html;
      error_log  /var/log/nginx/error.log;
      access_log /var/log/nginx/access.log;
      root /code;

      location / {
          try_files $uri $uri/ /index.php?$query_string;
      }

      location ~ \.php$ {
          try_files $uri =404;
          fastcgi_split_path_info ^(.+\.php)(/.+)$;
          fastcgi_pass php:9000;
          fastcgi_index index.php;
          include fastcgi_params;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_param PATH_INFO $fastcgi_path_info;
      }
    }
      

      Your nginx_configMap.yaml file will look like this:

      nginx_configMap.yaml

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: nginx-config
        labels:
          tier: backend
data:
  config: |
    server {
      index index.php index.html;
      error_log  /var/log/nginx/error.log;
      access_log /var/log/nginx/access.log;
      root /code;

      location / {
          try_files $uri $uri/ /index.php?$query_string;
      }

      location ~ \.php$ {
          try_files $uri =404;
          fastcgi_split_path_info ^(.+\.php)(/.+)$;
          fastcgi_pass php:9000;
          fastcgi_index index.php;
          include fastcgi_params;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_param PATH_INFO $fastcgi_path_info;
      }
    }
      

      Save the file and exit the editor.

      Create the ConfigMap:

      • kubectl apply -f nginx_configMap.yaml

      You will see the following output:

      Output

      configmap/nginx-config created
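If you need to change the configuration later, you can edit the ConfigMap in place. Kubernetes refreshes the mounted file after a short sync delay, though Nginx itself must be reloaded (or its pod restarted) to pick up the change:

• kubectl edit configmap nginx-config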

      You've finished creating your ConfigMap and can now build your Nginx Deployment.

      Start by opening a new nginx_deployment.yaml file in the editor:

      • nano nginx_deployment.yaml

      Name the Deployment nginx and add the label tier: backend:

      nginx_deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx
        labels:
          tier: backend
      

Specify that you want one replica in the Deployment spec. This Deployment will manage pods with labels app: nginx and tier: backend. Add the following parameters and values:

      nginx_deployment.yaml

      ...
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nginx
            tier: backend
      

      Next, add the pod template. You need to use the same labels that you added for the Deployment selector.matchLabels. Add the following:

      nginx_deployment.yaml

      ...
        template:
          metadata:
            labels:
              app: nginx
              tier: backend
      

      Give Nginx access to the code PVC that you created earlier. Under spec.template.spec.volumes, add:

      nginx_deployment.yaml

      ...
          spec:
            volumes:
            - name: code
              persistentVolumeClaim:
                claimName: code
      

Pods can mount a ConfigMap as a volume. Specifying a file name and key will create a file with the key's value as its content. To use the ConfigMap, set path to the name of the file that will hold the contents of the key. You want to create a file site.conf from the key config. Under spec.template.spec.volumes, add the following:

      nginx_deployment.yaml

      ...
            - name: config
              configMap:
                name: nginx-config
                items:
                - key: config
                  path: site.conf
      

      Warning: If a file is not specified, the contents of the key will replace the mountPath of the volume. This means that if a path is not explicitly specified, you will lose all content in the destination folder.
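As an aside, if you ever need to place a single file into a directory without hiding that directory's existing contents, one alternative (not used in this tutorial) is a subPath mount, sketched below; note that subPath mounts do not receive automatic ConfigMap updates:

...
        volumeMounts:
        - name: config
          mountPath: /etc/nginx/conf.d/site.conf
          subPath: site.conf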

Next, you will specify the image to create your pod from. This tutorial will use the nginx:1.7.9 image for stability, but you can find other Nginx images on the Docker Store. Also, make Nginx available on port 80. Under spec.template.spec add:

      nginx_deployment.yaml

      ...
            containers:
            - name: nginx
              image: nginx:1.7.9
              ports:
              - containerPort: 80
      

      Nginx and PHP-FPM need to access the file at the same path, so mount the code volume at /code:

      nginx_deployment.yaml

      ...
              volumeMounts:
              - name: code
                mountPath: /code
      

      The nginx:1.7.9 image will automatically load any configuration files under the /etc/nginx/conf.d directory. Mounting the config volume in this directory will create the file /etc/nginx/conf.d/site.conf. Under volumeMounts add the following:

      nginx_deployment.yaml

      ...
              - name: config
                mountPath: /etc/nginx/conf.d
      

      Your nginx_deployment.yaml file will look like this:

      nginx_deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx
        labels:
          tier: backend
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nginx
            tier: backend
        template:
          metadata:
            labels:
              app: nginx
              tier: backend
          spec:
            volumes:
            - name: code
              persistentVolumeClaim:
                claimName: code
            - name: config
              configMap:
                name: nginx-config
                items:
                - key: config
                  path: site.conf
            containers:
            - name: nginx
              image: nginx:1.7.9
              ports:
              - containerPort: 80
              volumeMounts:
              - name: code
                mountPath: /code
              - name: config
                mountPath: /etc/nginx/conf.d
      

      Save the file and exit the editor.

      Create the Nginx Deployment:

      • kubectl apply -f nginx_deployment.yaml

      The following output indicates that your Deployment is now created:

      Output

      deployment.apps/nginx created

List your Deployments with this command:

• kubectl get deployments

      You will see the Nginx and PHP-FPM Deployments:

      Output

NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   1         1         1            0           16s
php     1         1         1            1           7m

List the pods managed by both of the Deployments:

• kubectl get pods

      You will see the pods that are running:

      Output

NAME                     READY   STATUS    RESTARTS   AGE
nginx-7bf5476b6f-zppml   1/1     Running   0          32s
php-86d59fd666-lkwgn     1/1     Running   0          7m

      Now that all of the Kubernetes objects are active, you can visit the Nginx service on your browser.

      List the running services:

      • kubectl get services -o wide

      Get the External IP for your Nginx service:

      Output

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP      PORT(S)    AGE   SELECTOR
kubernetes   ClusterIP   10.96.0.1       <none>           443/TCP    39m   <none>
nginx        ClusterIP   10.102.160.47   your_public_ip   80/TCP     27m   app=nginx,tier=backend
php          ClusterIP   10.100.59.238   <none>           9000/TCP   34m   app=php,tier=backend

On your browser, visit your server by typing in http://your_public_ip. You will see the output of phpinfo(), confirming that your Kubernetes services are up and running.
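You can also check from the command line. A successful deployment should return an HTTP 200 status:

• curl -I http://your_public_ip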

      Conclusion

In this guide, you containerized the PHP-FPM and Nginx services so that you can manage them independently. This approach will not only improve the scalability of your project as you grow, but will also allow you to use resources more efficiently. You also stored your application code on a volume so that you can easily update your services in the future.




      How to Deploy Kubernetes on Linode with the k8s-alpha CLI


      Updated by Linode

      Written by Linode

      Deploy Kubernetes on Linode with the k8s-alpha CLI

      Caution

      This guide’s example instructions will create several billable resources on your Linode account. If you do not want to keep using the example cluster that you create, be sure to delete it when you have finished the guide.

      If you remove the resources afterward, you will only be billed for the hour(s) that the resources were present on your account. Consult the Billing and Payments guide for detailed information about how hourly billing works and for a table of plan pricing.

      What is the k8s-alpha CLI?

      The Linode k8s-alpha CLI is a plugin for the Linode CLI that offers quick, single-command deployments of Kubernetes clusters on your Linode account. When you have it installed, creating a cluster can be as simple as:

      linode-cli k8s-alpha create example-cluster
      

The clusters that it creates are pre-configured with useful Linode integrations, like our CCM, CSI, and ExternalDNS plugins. The Kubernetes metrics-server is also pre-installed, so you can run kubectl top. Nodes in your clusters are labeled with the Linode Region and Linode Type, which Kubernetes controllers can use when scheduling pods.
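For example, once a cluster is running, the pre-installed metrics-server lets you inspect resource usage directly:

kubectl top nodes
kubectl top pods --all-namespaces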


      What are Linode’s CCM, CSI, and ExternalDNS plugins?

      The CCM (Cloud Controller Manager), CSI (Container Storage Interface), and ExternalDNS plugins are Kubernetes addons published by Linode. You can use them to create NodeBalancers, Block Storage Volumes, and DNS records through your Kubernetes manifests.
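As a sketch of how this works, applying a Service manifest of type LoadBalancer like the following (the names are placeholders) would prompt the CCM to provision a NodeBalancer for it:

apiVersion: v1
kind: Service
metadata:
  name: example-lb
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 80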

      The k8s-alpha CLI will create two kinds of nodes on your account:

      • Master nodes will run the components of your Kubernetes control plane, and will also run etcd.

      • Worker nodes will run your workloads.

      These nodes will all exist as billable services on your account. You can specify how many master and worker nodes are created and also your nodes’ Linode plan and the data center they are located in.

      Alternatives for Creating Clusters

Another easy way to create clusters is with Rancher. Rancher is a web application that provides a graphical interface for creating and managing clusters. Rancher also provides easy interfaces for deploying and scaling apps on your clusters, and it has a built-in catalog of curated apps to choose from.

      To get started with Rancher, review our How to Deploy Kubernetes on Linode with Rancher 2.2 guide. Rancher is capable of importing clusters that were created outside of it, so you can still use it even if you create your clusters through the k8s-alpha CLI or some other means.

      Beginners Resources

      If you haven’t used Kubernetes before, we recommend reading through our introductory guides on the subject:

      Before You Begin

      1. You will need to have a personal access token for Linode’s API. If you don’t have one already, follow the Get an Access Token section of our API guide and create a token with read/write permissions.

      2. If you do not already have a public-private SSH key pair, you will need to generate one. Follow the Generate a Key Pair section of our Public Key Authentication guide for instructions.

        Note

        If you’re unfamiliar with the concept of public-private key pairs, the introduction to our Public Key Authentication guide explains what they are.

      Install the k8s-alpha CLI

      The k8s-alpha CLI is bundled with the Linode CLI, and using it requires the installation and configuration of a few dependencies:

      • Terraform: The k8s-alpha CLI creates clusters by defining a resource plan in Terraform and then having Terraform create those resources. If you’re interested in how Terraform works, you can review our Beginner’s Guide to Terraform, but doing so is not required to use the k8s-alpha CLI.


      • kubectl: kubectl is the client software for Kubernetes, and it is used to interact with your Kubernetes cluster’s API.

      • SSH agent: Terraform will rely on public-key authentication to connect to the Linodes that it creates, and you will need to configure your SSH agent on your computer with the keys that Terraform should use.

      Install the Linode CLI

      Follow the Install the CLI section of our CLI guide to install the Linode CLI. If you already have the CLI, upgrade it to the latest version available:

      pip install --upgrade linode-cli
      

      Install Terraform

      Follow the instructions in the Install Terraform section of our Use Terraform to Provision Linode Environments guide.

      Install kubectl

      macOS:

      Install via Homebrew:

      brew install kubernetes-cli
      

      If you don’t have Homebrew installed, visit the Homebrew home page for instructions. Alternatively, you can manually install the binary; visit the Kubernetes documentation for instructions.

      Linux:

      1. Download the latest Kubernetes release:

        curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
        
      2. Make the downloaded file executable:

        chmod +x ./kubectl
        
      3. Move the command into your PATH:

        sudo mv ./kubectl /usr/local/bin/kubectl
        


      Windows:

      Visit the Kubernetes documentation for a link to the most recent Windows release.

      Configure your SSH Agent

Your SSH key pair is stored in your home directory (or another location), but the k8s-alpha CLI's Terraform implementation cannot reference your keys unless you communicate them to Terraform. To do so, you'll first start the ssh-agent process. ssh-agent will cache your private keys for other processes, including keys that are passphrase-protected.

      Linux: Run the following command; if you stored your private key in another location, update the path that’s passed to ssh-add accordingly:

      eval $(ssh-agent) && ssh-add ~/.ssh/id_rsa
      

      Note

      You will need to run all of your k8s-alpha CLI commands from the terminal that you start the ssh-agent process in. If you start a new terminal, you will need to run the commands in this step again before using the k8s-alpha CLI.

      macOS: macOS has an ssh-agent process that persists across all of your terminal sessions, and it can store your private key passphrases in the operating system’s Keychain Access service.

      1. Update your ~/.ssh/config SSH configuration file. This configuration will add keys to the persistent agent and store passphrases in the OS keychain:

  ~/.ssh/config

  Host *
    AddKeysToAgent yes
    UseKeychain yes
    IdentityFile ~/.ssh/id_rsa

      Note

Although you should use kubectl whenever possible to interact with the nodes in your cluster, the key pair cached in the ssh-agent process will enable you to access individual nodes via SSH as the core user.

2. Add your key to the ssh-agent process:

        ssh-add -K ~/.ssh/id_rsa
        

      Create a Cluster

      1. To create your first cluster, run:

        linode-cli k8s-alpha create example-cluster
        
      2. Your terminal will show output related to the Terraform plan for your cluster. The output will halt with the following messages and prompt:

        Plan: 5 to add, 0 to change, 0 to destroy.
        
        Do you want to perform these actions in workspace "example-cluster"?
          Terraform will perform the actions described above.
          Only 'yes' will be accepted to approve.
        
          Enter a value:
        

        Note

        Your Terraform configurations will be stored under ~/.k8s-alpha-linode/

      3. Enter yes at the Enter a value: prompt. The Terraform plan will be applied over the next few minutes.

        Note

        You may see an error like the following:

          
        Error creating a Linode Instance: [400] Account Limit reached. Please open a support ticket.
        
        

        If this appears, then you have run into a limit on the number of resources allowed on your Linode account. If this is the case, or if your nodes do not appear in the Linode Cloud Manager as expected, contact Linode Support. This limit also applies to Block Storage Volumes and NodeBalancers, which some of your cluster app deployments may try to create.

      4. When the operation finishes, you will see options like the following:

        Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
        Switched to context "example-cluster-4-kacDTg9RmZK@example-cluster-4".
        Your cluster has been created and your kubectl context updated.
        
        Try the following command:
        kubectl get pods --all-namespaces
        
        Come hang out with us in #linode on the Kubernetes Slack! http://slack.k8s.io/
        
      5. If you visit the Linode Cloud Manager, you will see your newly created cluster nodes on the Linodes page. By default, your Linodes will be created under the region and Linode plan that you have set as the default for your Linode CLI. To set new defaults for your Linode CLI, run:

        linode-cli configure
        

        The k8s-alpha CLI will conform to your CLI defaults, with the following exceptions:

        • If you set a default plan size smaller than Linode 4GB, the k8s-alpha CLI will create your master node(s) on the Linode 4GB plan, which is the minimum recommended for master nodes. It will still create your worker nodes using your default plan.

        • The k8s-alpha CLI will always create nodes running CoreOS (instead of the default distribution that you set).

      6. The k8s-alpha CLI will also update your kubectl client’s configuration (the kubeconfig file) to allow immediate access to the cluster. Review the Manage your Clusters with kubectl section for further instructions.

      Cluster Creation Options

      The following optional arguments are available:

      linode-cli k8s-alpha create example-cluster-2 --node-type g6-standard-1 --nodes 6 --master-type g6-standard-4 --region us-east --ssh-public-key $HOME/.ssh/id_rsa.pub
      
      Argument                                           Description
      --node-type TYPE The Linode Type ID for cluster worker nodes (which you can retrieve by running linode-cli linodes types).
      --nodes COUNT The number of Linodes to deploy as Nodes in the cluster (default 3).
      --master-type TYPE The Linode Type ID for cluster master nodes (which you can retrieve by running linode-cli linodes types).
      --region REGION The Linode Region ID in which to deploy the cluster (which you can retrieve by running linode-cli regions list).
      --ssh-public-key KEYPATH The path to your public key file which will be used to access Nodes during initial provisioning only! The keypair must be added to an ssh-agent (default $HOME/.ssh/id_rsa.pub).

      Delete a Cluster

      1. To delete a cluster, run the delete command with the name of your cluster:

        linode-cli k8s-alpha delete example-cluster
        
      2. Your terminal will show output from Terraform that describes the deletion operation. The output will halt with the following messages and prompt:

        Plan: 0 to add, 0 to change, 5 to destroy.
        
        Do you really want to destroy all resources in workspace "example-cluster"?
          Terraform will destroy all your managed infrastructure, as shown above.
          There is no undo. Only 'yes' will be accepted to confirm.
        
          Enter a value:
        
      3. Enter yes at the Enter a value: prompt. The nodes in your cluster will be deleted over the next few minutes.

4. You should also log in to the Linode Cloud Manager and confirm that any Volumes and NodeBalancers created by your cluster app deployments have been removed as well.

      5. Deleting the cluster will not remove the kubectl client configuration that the CLI inserted into your kubeconfig file. Review the Remove a Cluster’s Context section if you’d like to remove this information.

      Manage your Clusters with kubectl

      The k8s-alpha CLI will automatically configure your kubectl client to connect to your cluster. Specifically, this connection information is stored in your kubeconfig file. The path for this file is normally ~/.kube/config.
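To confirm which cluster kubectl is currently pointed at, or to inspect the merged configuration, use the standard config subcommands:

kubectl config current-context
kubectl config view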

      Use the kubectl client to interact with your cluster’s Kubernetes API. This will work in the same way as with any other cluster. For example, you can get all the pods in your cluster:

      kubectl get pods --all-namespaces
      

      Review the Kubernetes documentation for more information about how to use kubectl.

      Switch between Cluster Contexts

      If you have more than one cluster set up, you can switch your kubectl client between them. To list all of your cluster contexts:

      kubectl config get-contexts
      

      An asterisk will appear before the current context:

      CURRENT   NAME                                                      CLUSTER                 AUTHINFO                            NAMESPACE
      *         example-cluster-kat7BqBBgU8@example-cluster               example-cluster         example-cluster-kat7BqBBgU8
                example-cluster-2-kacDTg9RmZK@example-cluster-2           example-cluster-2       example-cluster-2-kacDTg9RmZK
      

      To switch to another context, use the use-context subcommand and pass the value under the NAME column:

      kubectl config use-context example-cluster-2-kacDTg9RmZK@example-cluster-2
      

      All kubectl commands that you issue will now apply to the cluster you chose.

      Remove a Cluster’s Context

      When you delete a cluster with the k8s-alpha CLI, its connection information will persist in your local kubeconfig file, and it will still appear when you run kubectl config get-contexts. To remove this connection data, run the following commands:

      kubectl config delete-cluster example-cluster
      kubectl config delete-context example-cluster-kat7BqBBgU8@example-cluster
      kubectl config unset users.example-cluster-kat7BqBBgU8
      
      • For the delete-cluster subcommand, supply the value that appears under the CLUSTER column in the output from get-contexts.

      • For the delete-context subcommand, supply the value that appears under the NAME column in the output from get-contexts.

      • For the unset subcommand, supply users.<AUTHINFO>, where <AUTHINFO> is the value that appears under the AUTHINFO column in the output from get-contexts.

      Next Steps

      Now that you have a cluster up and running, you’re ready to start deploying apps to it. Review our other Kubernetes guides for help with deploying software and managing your cluster:

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.




      Getting Started with Kubernetes: Use kubeadm to Deploy a Cluster on Linode


      Updated by Linode

      Contributed by

      Linode

      Linode offers several pathways for users to easily deploy a Kubernetes cluster. If you prefer the command line, you can create a Kubernetes cluster with one command using the Linode CLI’s k8s-alpha plugin, and Terraform. Or, if you prefer a full featured GUI, Linode’s Rancher integration enables you to deploy and manage Kubernetes clusters with a simple web interface. The Linode Kubernetes Engine, currently under development with an early access beta version on its way this summer, allows you to spin up a Kubernetes cluster with Linode handling the management and maintenance of your control plane. These are all great options for production ready deployments.

Kubeadm is a cloud-provider-agnostic tool that automates many of the tasks required to get a cluster up and running. Users of kubeadm can run a few simple commands on individual servers to turn them into a Kubernetes cluster consisting of a master node and worker nodes. This guide will walk you through installing kubeadm and using it to deploy a Kubernetes cluster on Linode. While the kubeadm approach requires more manual steps than other Kubernetes cluster creation pathways offered by Linode, this solution is covered as a way to dive deeper into the various components that make up a Kubernetes cluster and the ways in which they interact with each other to provide a scalable and reliable container orchestration mechanism.

      Note

This guide's example instructions will result in the creation of three billable Linodes. Information on how to tear down the Linodes is provided at the end of the guide. Interacting with the Linodes via the command line will provide the most opportunity for learning; however, this guide is written so that users can also benefit by reading along.

      Before You Begin

      1. Deploy three Linodes running Ubuntu 18.04 with the following system requirements:

        • One Linode to use as the master Node with 4GB RAM and 2 CPU cores.
        • Two Linodes to use as the Worker Nodes each with 1GB RAM and 1 CPU core.
      2. Follow the Getting Started and the Securing Your Server guides for instructions on setting up your Linodes. The steps in this guide assume the use of a limited user account with sudo privileges.

      Note

      When following the Getting Started guide, make sure that each Linode is using a different hostname. Not following this guideline will leave you unable to join some or all nodes to the cluster in a later step.
3. Disable swap memory on your Linodes. Kubernetes requires that you disable swap memory on any cluster nodes to prevent the Kubernetes scheduler (kube-scheduler) from ever sending a pod to a node that has run out of CPU/memory or reached its designated CPU/memory limit. See the note after this list for one way to make this change persist across reboots.

        sudo swapoff -a
        

        Verify that your swap has been disabled. You should expect to see a value of 0 returned.

        cat /proc/meminfo | grep 'SwapTotal'
        

        To learn more about managing compute resources for containers, see the official Kubernetes documentation.

4. Read the Beginners Guide to Kubernetes to familiarize yourself with the major components and concepts of Kubernetes. The current guide assumes a working knowledge of common Kubernetes concepts and terminology.
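Note: swapoff does not persist across reboots. One common way to make the change permanent, assuming your swap space is defined in /etc/fstab, is to comment out its entry; check the file afterward to confirm that only the swap line changed:

  sudo sed -i '/ swap / s/^/#/' /etc/fstab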

      Build a Kubernetes Cluster

      Kubernetes Cluster Architecture

      A Kubernetes cluster consists of a master node and worker nodes. The master node hosts the control plane, which is the combination of all the components that provide it the ability to maintain the desired cluster state. This cluster state is defined by manifest files and the kubectl tool. While the control plane components can be run on any cluster node, it is a best practice to isolate the control plane on its own node and to run any application containers on a separate worker node. A cluster can have a single worker node or up to 5000. Each worker node must be able to maintain running containers in a pod and be able to communicate with the master node’s control plane.

The following list describes the Kubernetes tooling you will need to install on your master and worker nodes in order to meet the minimum requirements for a functioning Kubernetes cluster as described above.

• kubeadm (master and worker nodes): Provides a simple way to create a Kubernetes cluster by automating the tasks required to get a cluster up and running. New Kubernetes users with access to a cloud hosting provider, like Linode, can use kubeadm to build out a playground cluster. kubeadm is also used as a foundation for more mature Kubernetes deployment tooling.
• Container runtime (master and worker nodes): Responsible for running the containers that make up a cluster's pods. This guide will use Docker as the container runtime.
• kubelet (master and worker nodes): Ensures that all pod containers running on a node are healthy and meet the specifications for a pod's desired behavior.
• kubectl (master and worker nodes): A command line tool used to manage a Kubernetes cluster.
• Control plane (master node only): The series of services that form the Kubernetes master structure, allowing it to control the cluster. kubeadm runs the control plane services as containers on the master node; the control plane will be created when you initialize kubeadm later in this guide.

      Install the Container Runtime: Docker

      Docker is the software responsible for running the pod containers on each node. You can use other container runtime software with Kubernetes, such as Containerd and CRI-O. You will need to install Docker on all three Linodes.

      These steps install Docker Community Edition (CE) using the official Ubuntu repositories. To install on another distribution, see the official installation page.

      1. Remove any older installations of Docker that may be on your system:

        sudo apt remove docker docker-engine docker.io
        
      2. Make sure you have the necessary packages to allow the use of Docker’s repository:

        sudo apt install apt-transport-https ca-certificates curl software-properties-common
        
      3. Add Docker’s GPG key:

        curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
        
      4. Verify the fingerprint of the GPG key:

        sudo apt-key fingerprint 0EBFCD88
        

        You should see output similar to the following:

          
        pub   4096R/0EBFCD88 2017-02-22
                Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
        uid                  Docker Release (CE deb) <docker@docker.com>
        sub   4096R/F273FCD8 2017-02-22
        
        
      5. Add the stable Docker repository:

        sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
        
      6. Update your package index and install Docker CE:

        sudo apt update
        sudo apt install docker-ce
        
      7. Add your limited Linux user account to the docker group so that you can run Docker commands without sudo. The $USER environment variable expands to your current username; replace it if you are adding a different account:

        sudo usermod -aG docker $USER
        

        Note

        After entering the usermod command, you will need to close your SSH session and open a new one for this change to take effect.
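
        Alternatively, you can start a subshell that picks up the new group membership right away, a common shortcut that only affects that one shell session:

        newgrp docker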

      8. Check that the installation was successful by running the built-in “Hello World” program:

        sudo docker run hello-world
        
      9. Set up the Docker daemon to use systemd as the cgroup driver, instead of the default cgroupfs. This is a recommended step so that kubelet and Docker both use the same cgroup manager, which makes it easier for Kubernetes to know which resources are available on your cluster’s nodes.

        sudo bash -c 'cat > /etc/docker/daemon.json <<EOF
        {
          "exec-opts": ["native.cgroupdriver=systemd"],
          "log-driver": "json-file",
          "log-opts": {
            "max-size": "100m"
          },
          "storage-driver": "overlay2"
        }
        EOF'
        
      10. Create a systemd directory for Docker:

        sudo mkdir -p /etc/systemd/system/docker.service.d
        
      11. Restart Docker:

        sudo systemctl daemon-reload
        sudo systemctl restart docker
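
        To confirm that Docker picked up the systemd cgroup driver after the restart, you can inspect its runtime information; you should see Cgroup Driver: systemd in the output (exact formatting may vary by Docker version):

        docker info | grep -i 'cgroup driver'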
        

      Install kubeadm, kubelet, and kubectl

      Complete the steps outlined in this section on all three Linodes.

      1. Update the system and install the required dependencies:

        sudo apt-get update && sudo apt-get install -y apt-transport-https curl
        
      2. Add the required GPG key to apt’s keyring to authenticate the Kubernetes-related packages you will install:

        curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
        
      3. Add Kubernetes to the package manager’s list of sources:

        sudo bash -c "cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
        deb https://apt.kubernetes.io/ kubernetes-xenial main
        EOF"
        
      4. Update apt, install kubeadm, kubelet, and kubectl, and hold the packages at their installed versions:

        sudo apt-get update
        sudo apt-get install -y kubelet kubeadm kubectl
        sudo apt-mark hold kubelet kubeadm kubectl
        
      5. Verify that kubeadm, kubelet, and kubectl have been installed by retrieving their version information. Each command should return version information for its package. Note that kubectl version also attempts to contact a cluster; until the cluster is initialized, it will print the client version along with a connection error, which is expected at this stage.

        kubeadm version
        kubelet --version
        kubectl version
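
      If you need your cluster tooling to match a specific Kubernetes release rather than the latest available packages, you can pin explicit package versions in step 4 instead. A sketch, where the 1.14.1-00 version string is illustrative (list the versions actually available with apt-cache madison kubelet):

        sudo apt-get install -y kubelet=1.14.1-00 kubeadm=1.14.1-00 kubectl=1.14.1-00
        sudo apt-mark hold kubelet kubeadm kubectl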
        

      Set up the Kubernetes Control Plane

      After installing the Kubernetes related tooling on all your Linodes, you are ready to set up the Kubernetes control plane on the master node. The control plane is responsible for allocating resources to your cluster, maintaining the health of your cluster, and ensuring that it meets the minimum requirements you designate for the cluster.

      The primary components of the control plane are the kube-apiserver, kube-controller-manager, kube-scheduler, and etcd. kubeadm provides a way to easily initialize the Kubernetes master node with all the necessary control plane components. For more information on each control plane component, see the Beginner’s Guide to Kubernetes.

      In addition to the baseline control plane components, there are several add-ons that can be installed on the master node to access additional cluster features. You will need to install a networking and network policy add-on that implements Kubernetes’ network model on the cluster’s pod network.

      This guide will use Calico as the pod network add-on. Calico is a secure and open source L3 networking and network policy provider for containers. There are several other network and network policy providers to choose from. To view a full list of providers, refer to the official Kubernetes documentation.

      Note

      kubeadm only supports Container Network Interface (CNI) based networks. CNI consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers.

      1. Initialize kubeadm on the master node. This command runs checks against the node to ensure it contains all required Kubernetes dependencies; if the checks pass, it then installs the control plane components.

        When issuing this command, it is necessary to set the pod network range that Calico will use to allow your pods to communicate with each other. It is recommended to use the private IP address space, 10.2.0.0/16.

        Note

        The pod network IP range should not overlap with the service IP network range. The default service IP address range is 10.96.0.0/12. You can provide an alternative service IP address range using the --service-cidr=10.97.0.0/12 option when initializing kubeadm. Replace 10.97.0.0/12 with the desired service IP range.

        For a full list of available kubeadm initialization options, see the official Kubernetes documentation.

        sudo kubeadm init --pod-network-cidr=10.2.0.0/16
        

        You should see a similar output:

          
        Your Kubernetes control-plane has initialized successfully!
        
        To start using your cluster, you need to run the following as a regular user:
        
          mkdir -p $HOME/.kube
          sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
          sudo chown $(id -u):$(id -g) $HOME/.kube/config
        
        You should now deploy a pod network to the cluster.
        Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
          https://kubernetes.io/docs/concepts/cluster-administration/addons/
        
        Then you can join any number of worker nodes by running the following on each as root:
        
        kubeadm join 192.0.2.0:6443 --token udb8fn.nih6n1f1aijmbnx5 \
            --discovery-token-ca-cert-hash sha256:b7c01e83d63808a4a14d2813d28c127d3a1c4e1b6fc6ba605fe4d2789d654f26
              
        

        The kubeadm join command will be used in the Join a Worker Node to the Cluster section of this guide to bootstrap the worker nodes to the Kubernetes cluster. This command should be kept handy for later use. Below is a description of the required options you will need to pass in with the kubeadm join command:

        • The master node’s IP address and the Kubernetes API server’s port number. In the example output, this is 192.0.2.0:6443. The Kubernetes API server’s port number is 6443 by default on all Kubernetes installations.
        • A bootstrap token. The bootstrap token has a 24-hour TTL (time to live). A new bootstrap token can be generated if your current token expires.
        • A CA key hash. This is used to verify the authenticity of the data retrieved from the Kubernetes API server during the bootstrap process.
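
        If initialization fails partway through, for example on a preflight check you then resolve, you can wipe the partially configured state before re-running kubeadm init. Note that this also removes the control plane components, so only use it when starting over:

        sudo kubeadm reset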
      2. Copy the admin.conf configuration file to your limited user account. This file contains a description of the cluster, users, and contexts, and it allows you to communicate with your cluster via kubectl with administrative (superuser) privileges.

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
        
      3. Install the necessary Calico manifests to your master node and apply them using kubectl. The first file, rbac-kdd.yaml, works with Kubernetes’ role-based access control (RBAC) to provide Calico components access to necessary parts of the Kubernetes API. The second file, calico.yaml, configures a self-hosted Calico installation that uses the Kubernetes API directly as the datastore (instead of etcd).

        kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
        kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
        

      Inspect the Master Node with Kubectl

      After completing the previous section, your Kubernetes master node is ready with all the necessary components to manage a cluster. To gain a better understanding of all the parts that make up the master’s control plane, this section will walk you through inspecting your master node. If you have not yet reviewed the Beginner’s Guide to Kubernetes, it will be helpful to do so prior to continuing with this section, as it relies on an understanding of basic Kubernetes concepts.

      1. View the current state of all nodes in your cluster. At this stage, the only node you should expect to see is the master node, since the worker nodes have yet to be bootstrapped. A STATUS of Ready indicates that the master node contains all the necessary components, including the pod network add-on, to begin managing the cluster.

        kubectl get nodes
        

        Your output should resemble the following:

          
        NAME          STATUS   ROLES    AGE   VERSION
        kube-master   Ready    master   1h    v1.14.1
            
        
      2. Inspect the available namespaces in your cluster.

        kubectl get namespaces
        

        Your output should resemble the following:

          
        NAME              STATUS   AGE
        default           Active   23h
        kube-node-lease   Active   23h
        kube-public       Active   23h
        kube-system       Active   23h
            
        

        Below is an overview of each namespace installed by default on the master node by kubeadm:

        • default: The default namespace contains objects with no other assigned namespace. By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of Pods, Services, and Deployments used by the cluster.
        • kube-system: The namespace for objects created by the Kubernetes system. This includes all resources used by the master node.
        • kube-public: This namespace is created automatically and is readable by all users. It contains information, like certificate authority (CA) data, that helps kubeadm join and authenticate worker nodes.
        • kube-node-lease: The kube-node-lease namespace contains lease objects that are used by kubelet to determine node health. kubelet creates and periodically renews a Lease on a node. The node lifecycle controller treats this lease as a health signal. kube-node-lease was released to beta in Kubernetes 1.14.
      3. View all resources available in the kube-system namespace. The kube-system namespace contains the widest range of resources, since it houses all control plane resources. Replace kube-system with another namespace to view its corresponding resources.

        kubectl get all -n kube-system
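
        You can then drill into any single control plane resource with kubectl describe. For example, to inspect the API server pod (static pod names embed the node’s hostname, kube-master in this guide’s examples):

        kubectl describe pod kube-apiserver-kube-master -n kube-system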
        

      Join a Worker Node to the Cluster

      Now that your Kubernetes master node is set up, you can join worker nodes to your cluster. In order for a worker node to join a cluster, it must trust the cluster’s control plane, and the control plane must trust the worker node. This trust is managed via a shared bootstrap token and a certificate authority (CA) key hash. kubeadm handles the exchange between the control plane and the worker node. At a high level, the worker node bootstrap process is as follows:

      1. kubeadm retrieves information about the cluster from the Kubernetes API server. The bootstrap token and CA key hash are used to ensure the information originates from a trusted source.

      2. kubelet can take over and begin the bootstrap process, since it has the necessary cluster information retrieved in the previous step. The bootstrap token is used to gain access to the Kubernetes API server and submit a certificate signing request (CSR), which is then signed by the control plane.

      3. The worker node’s kubelet is now able to connect to the Kubernetes API server using the node’s established identity.

      Before continuing, make sure that you know your Kubernetes API server’s IP address and that you have a bootstrap token and a CA key hash. This information was provided when kubeadm was initialized on the master node in the Set up the Kubernetes Control Plane section of this guide. If you no longer have this information, you can regenerate it from the master node.


      Regenerate a Bootstrap Token

      These commands should be issued from your master node.

      1. Generate a new bootstrap token and display the kubeadm join command with the necessary options to join a worker node to the master node’s control plane:

        kubeadm token create --print-join-command
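
        If you also need to recompute the CA key hash on its own, for example to assemble the join command manually, it can be derived from the cluster’s CA certificate using the openssl pipeline documented by upstream Kubernetes:

        openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'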
        

      Follow the steps below on each node you would like to bootstrap to the cluster as a worker node.

      1. SSH into the Linode that will be used as a worker node in the Kubernetes cluster.

        ssh username@192.0.2.1
        
      2. Join the node to your cluster using kubeadm. Ensure you replace 192.0.2.0:6443 with the IP address for your master node along with its Kubernetes API server’s port number, udb8fn.nih6n1f1aijmbnx5 with your bootstrap token, and sha256:b7c01e83d63808a4a14d2813d28c127d3a1c4e1b6fc6ba605fe4d2789d654f26 with your CA key hash. The bootstrap process will take a few moments.

        sudo kubeadm join 192.0.2.0:6443 --token udb8fn.nih6n1f1aijmbnx5 \
            --discovery-token-ca-cert-hash sha256:b7c01e83d63808a4a14d2813d28c127d3a1c4e1b6fc6ba605fe4d2789d654f26
        

        When the bootstrap process has completed, you should see a similar output:

          
        This node has joined the cluster:
        * Certificate signing request was sent to apiserver and a response was received.
        * The Kubelet was informed of the new secure connection details.
        
        Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
              
        
      3. Repeat the steps outlined above on the second worker node to bootstrap it to the cluster.

      4. SSH into the master node and verify the worker nodes have joined the cluster:

         kubectl get nodes
        

        You should see a similar output.

          
        NAME          STATUS   ROLES    AGE     VERSION
        kube-master   Ready    master   1d22h   v1.14.1
        kube-node-1   Ready    <none>   1d22h   v1.14.1
        kube-node-2   Ready    <none>   1d22h   v1.14.1
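
        Worker nodes report a ROLES value of <none> by default. If you would like kubectl get nodes to display a role for them, you can apply the corresponding node-role label yourself; the node names below are this guide’s examples:

        kubectl label node kube-node-1 node-role.kubernetes.io/worker=worker
        kubectl label node kube-node-2 node-role.kubernetes.io/worker=worker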
              
        

      Next Steps

      Now that you have a Kubernetes cluster up and running, you can begin experimenting with the various ways to configure pods, group resources, and deploy services that are exposed to the public internet. To help you get started, follow along with the Deploy a Static Site on Linode using Kubernetes guide.

      Tear Down Your Cluster

      If you are done experimenting with your Kubernetes cluster, be sure to remove the Linodes you have running in order to avoid being billed further for them. See the Removing Services section of the Billing and Payments guide.
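
      If you would rather keep the Linodes but dismantle the Kubernetes layer, you can drain and delete each worker node from the master, then run sudo kubeadm reset on every node. A sketch using this guide’s example node names (repeat the first two commands for kube-node-2; the --delete-local-data flag matches the kubectl version used in this guide):

        kubectl drain kube-node-1 --delete-local-data --force --ignore-daemonsets
        kubectl delete node kube-node-1
        sudo kubeadm reset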




