
      How To Deploy a PHP Application with Kubernetes on Ubuntu 16.04


      The author selected the Open Internet/Free Speech Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Kubernetes is an open source container orchestration system. It allows you to create, update, and scale containers without worrying about downtime.

      To run a PHP application, Nginx acts as a proxy to PHP-FPM. Containerizing this setup in a single container can be a cumbersome process, but Kubernetes will help manage both services in separate containers. Using Kubernetes will allow you to keep your containers reusable and swappable, and you will not have to rebuild your container image every time there’s a new version of Nginx or PHP.

      In this tutorial, you will deploy a PHP 7 application on a Kubernetes cluster with Nginx and PHP-FPM running in separate containers. You will also learn how to keep your configuration files and application code outside the container image using DigitalOcean’s Block Storage system. This approach will allow you to reuse the Nginx image for any application that needs a web/proxy server by passing a configuration volume, rather than rebuilding the image.

      Prerequisites

      Step 1 — Creating the PHP-FPM and Nginx Services

      In this step, you will create the PHP-FPM and Nginx services. A service allows access to a set of pods from within the cluster. Services within a cluster can communicate directly through their names, without the need for IP addresses. The PHP-FPM service will allow access to the PHP-FPM pods, while the Nginx service will allow access to the Nginx pods.

      Since Nginx pods will proxy the PHP-FPM pods, you will need to tell the service how to find them. Instead of using IP addresses, you will take advantage of Kubernetes’ automatic service discovery to use human-readable names to route requests to the appropriate service.
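      For example, once the php service you define below exists, other pods in the cluster can reach PHP-FPM at the hostname php. If you ever want to confirm name resolution yourself, one quick check (an optional debugging step, not required for this tutorial) is to run nslookup from a temporary busybox pod:

      • kubectl run dns-test -it --rm --restart=Never --image=busybox -- nslookup php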

      To create the service, you will create an object definition file. Every Kubernetes object definition is a YAML file that contains at least the following items:

      • apiVersion: The version of the Kubernetes API that the definition belongs to.
      • kind: The Kubernetes object this file represents. For example, a pod or service.
      • metadata: This contains the name of the object along with any labels that you may wish to apply to it.
      • spec: This contains a specific configuration depending on the kind of object you are creating, such as the container image or the ports on which the container will be accessible.

      First you will create a directory to hold your Kubernetes object definitions.

      SSH to your master node and create the definitions directory that will hold your Kubernetes object definitions:

      • mkdir definitions

      Navigate to the newly created definitions directory:

      • cd definitions

      Make your PHP-FPM service by creating a php_service.yaml file:

      • nano php_service.yaml

      Set kind as Service to specify that this object is a service:

      php_service.yaml

      ...
      apiVersion: v1
      kind: Service
      

      Name the service php since it will provide access to PHP-FPM:

      php_service.yaml

      ...
      metadata:
        name: php
      

      You will logically group different objects with labels. In this tutorial, you will use labels to group the objects into "tiers", such as frontend or backend. The PHP pods will run behind this service, so you will label it as tier: backend.

      php_service.yaml

      ...
        labels:
          tier: backend
      

      A service determines which pods to access by using selector labels. A pod that matches these labels will be serviced, independent of whether the pod was created before or after the service. You will add labels for your pods later in the tutorial.

      Use the tier: backend label to assign the pod to the backend tier. You will also add the app: php label to specify that this pod runs PHP. Add these two labels after the metadata section:

      php_service.yaml

      ...
      spec:
        selector:
          app: php
          tier: backend
      

      Next, specify the port used to access this service. You will use port 9000 in this tutorial. Add it to the php_service.yaml file under spec:

      php_service.yaml

      ...
        ports:
          - protocol: TCP
            port: 9000
      

      Your completed php_service.yaml file will look like this:

      php_service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: php
        labels:
          tier: backend
      spec:
        selector:
          app: php
          tier: backend
        ports:
        - protocol: TCP
          port: 9000
      

      Hit CTRL + o to save the file, and then CTRL + x to exit nano.

      Now that you've created the object definition for your service, you will run it using the kubectl apply command with the -f flag, specifying your php_service.yaml file.

      Create your service:

      • kubectl apply -f php_service.yaml

      This output confirms the service creation:

      Output

      service/php created

      Verify that your service is running:

      • kubectl get svc

      You will see your PHP-FPM service running:

      Output

      NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
      kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    10m
      php          ClusterIP   10.100.59.238   <none>        9000/TCP   5m

      There are various service types that Kubernetes supports. Your php service uses the default service type, ClusterIP. This service type assigns an internal IP and makes the service reachable only from within the cluster.
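      Other service types make a service reachable from outside the cluster. As a minimal sketch (not used in this tutorial), setting type: NodePort in the service spec would expose the service on a port in the 30000–32767 range on every node in the cluster:

      ...
      spec:
        type: NodePort
        ports:
        - protocol: TCP
          port: 9000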

      Now that the PHP-FPM service is ready, you will create the Nginx service. Create and open a new file called nginx_service.yaml with the editor:

      • nano nginx_service.yaml

      This service will target Nginx pods, so you will name it nginx. You will also add a tier: backend label as it belongs in the backend tier:

      nginx_service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: nginx
        labels:
          tier: backend
      

      Similar to the php service, target the pods with the selector labels app: nginx and tier: backend. Make this service accessible on port 80, the default HTTP port.

      nginx_service.yaml

      ...
      spec:
        selector:
          app: nginx
          tier: backend
        ports:
        - protocol: TCP
          port: 80
      

      The Nginx service will be publicly accessible to the internet from your Droplet's public IP address. You can find your_public_ip in your DigitalOcean Cloud Panel. Under spec.externalIPs, add:

      nginx_service.yaml

      ...
      spec:
        externalIPs:
        - your_public_ip
      

      Your nginx_service.yaml file will look like this:

      nginx_service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: nginx
        labels:
          tier: backend
      spec:
        selector:
          app: nginx
          tier: backend
        ports:
        - protocol: TCP
          port: 80
        externalIPs:
        - your_public_ip    
      

      Save and close the file. Create the Nginx service:

      • kubectl apply -f nginx_service.yaml

      You will see the following output when the service is running:

      Output

      service/nginx created

      You can view all running services by executing:

      • kubectl get svc

      You will see both the PHP-FPM and Nginx services listed in the output:

      Output

      NAME         TYPE        CLUSTER-IP      EXTERNAL-IP      PORT(S)    AGE
      kubernetes   ClusterIP   10.96.0.1       <none>           443/TCP    13m
      nginx        ClusterIP   10.102.160.47   your_public_ip   80/TCP     50s
      php          ClusterIP   10.100.59.238   <none>           9000/TCP   8m

      Please note, if you want to delete a service you can run:

      • kubectl delete svc/service_name

      Now that you've created your PHP-FPM and Nginx services, you will need to specify where to store your application code and configuration files.

      Step 2 — Installing the DigitalOcean Storage Plug-In

      Kubernetes provides different storage plug-ins that can create the storage space for your environment. In this step, you will install the DigitalOcean storage plug-in to create block storage on DigitalOcean. Once the installation is complete, it will add a storage class named do-block-storage that you will use to create your block storage.

      You will first configure a Kubernetes Secret object to store your DigitalOcean API token. Secret objects are used to share sensitive information, like SSH keys and passwords, with other Kubernetes objects within the same namespace. Namespaces provide a way to logically separate your Kubernetes objects.

      Open a file named secret.yaml with the editor:

      • nano secret.yaml

      You will name your Secret object digitalocean and add it to the kube-system namespace. The kube-system namespace is the default namespace for Kubernetes’ internal services and is also used by the DigitalOcean storage plug-in to launch various components.

      secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: digitalocean
        namespace: kube-system
      

      Instead of a spec key, a Secret uses a data or stringData key to hold the required information. The data parameter holds base64 encoded data that is automatically decoded when retrieved. The stringData parameter holds non-encoded data that is automatically encoded during creation or updates, and does not output the data when retrieving Secrets. You will use stringData in this tutorial for convenience.
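      For comparison, storing the same token under the data key would require base64-encoding it first, for example with echo -n 'your-api-token' | base64. A sketch of the equivalent definition (encoding the literal placeholder string) would be:

      ...
      data:
        access-token: eW91ci1hcGktdG9rZW4=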

      Add the access-token as stringData:

      secret.yaml

      ...
      stringData:
        access-token: your-api-token
      

      Save and exit the file.

      Your secret.yaml file will look like this:

      secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: digitalocean
        namespace: kube-system
      stringData:
        access-token: your-api-token
      

      Create the secret:

      • kubectl apply -f secret.yaml

      You will see this output upon Secret creation:

      Output

      secret/digitalocean created

      You can view the secret with the following command:

      • kubectl -n kube-system get secret digitalocean

      The output will look similar to this:

      Output

      NAME           TYPE     DATA   AGE
      digitalocean   Opaque   1      41s

      The Opaque type is the default Secret type; it means the Secret holds arbitrary, user-defined key-value data that Kubernetes does not interpret. You can read more about it on the Secret design spec. The DATA field shows the number of items stored in this Secret. In this case, it shows 1 because you have a single key stored.
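      If you later need to verify what the Secret holds, you can retrieve the stored key and decode it yourself. This optional check uses kubectl's jsonpath output:

      • kubectl -n kube-system get secret digitalocean -o jsonpath='{.data.access-token}' | base64 --decode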

      Now that your Secret is in place, install the DigitalOcean block storage plug-in:

      • kubectl apply -f https://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-v0.3.0.yaml

      You will see output similar to the following:

      Output

      storageclass.storage.k8s.io/do-block-storage created
      serviceaccount/csi-attacher created
      clusterrole.rbac.authorization.k8s.io/external-attacher-runner created
      clusterrolebinding.rbac.authorization.k8s.io/csi-attacher-role created
      service/csi-attacher-doplugin created
      statefulset.apps/csi-attacher-doplugin created
      serviceaccount/csi-provisioner created
      clusterrole.rbac.authorization.k8s.io/external-provisioner-runner created
      clusterrolebinding.rbac.authorization.k8s.io/csi-provisioner-role created
      service/csi-provisioner-doplugin created
      statefulset.apps/csi-provisioner-doplugin created
      serviceaccount/csi-doplugin created
      clusterrole.rbac.authorization.k8s.io/csi-doplugin created
      clusterrolebinding.rbac.authorization.k8s.io/csi-doplugin created
      daemonset.apps/csi-doplugin created

      Now that you have installed the DigitalOcean storage plug-in, you can create block storage to hold your application code and configuration files.

      Step 3 — Creating the Persistent Volume

      With your Secret in place and the block storage plug-in installed, you are now ready to create your Persistent Volume. A Persistent Volume, or PV, is block storage of a specified size that lives independently of a pod’s life cycle. Using a Persistent Volume will allow you to manage or update your pods without worrying about losing your application code. A Persistent Volume is accessed by using a PersistentVolumeClaim, or PVC, which mounts the PV at the required path.

      Open a file named code_volume.yaml with your editor:

      • nano code_volume.yaml

      Name the PVC code by adding the following parameters and values to your file:

      code_volume.yaml

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: code
      

      The spec for a PVC contains the following items:

      • accessModes which vary by the use case. These are:
        • ReadWriteOnce – mounts the volume as read-write by a single node
        • ReadOnlyMany – mounts the volume as read-only by many nodes
        • ReadWriteMany – mounts the volume as read-write by many nodes
      • resources – the storage space that you require

      DigitalOcean block storage is only mounted to a single node, so you will set the accessModes to ReadWriteOnce. This tutorial will guide you through adding a small amount of application code, so 1GB will be plenty in this use case. If you plan on storing a larger amount of code or data on the volume, you can modify the storage parameter to fit your requirements. You can increase the amount of storage after volume creation, but shrinking the disk is not supported.

      code_volume.yaml

      ...
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
      

      Next, specify the storage class that Kubernetes will use to provision the volumes. You will use the do-block-storage class created by the DigitalOcean block storage plug-in.

      code_volume.yaml

      ...
        storageClassName: do-block-storage
      

      Your code_volume.yaml file will look like this:

      code_volume.yaml

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: code
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: do-block-storage
      

      Save and exit the file.

      Create the code PersistentVolumeClaim using kubectl:

      • kubectl apply -f code_volume.yaml

      The following output tells you that the object was successfully created, and you are ready to mount your 1GB PVC as a volume.

      Output

      persistentvolumeclaim/code created

      To view available Persistent Volumes (PV):

      • kubectl get pv

      You will see your PV listed:

      Output

      NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS       REASON   AGE
      pvc-ca4df10f-ab8c-11e8-b89d-12331aa95b13   1Gi        RWO            Delete           Bound    default/code   do-block-storage            2m

      The fields above are an overview of your configuration file, except for Reclaim Policy and Status. The Reclaim Policy defines what is done with the PV after the PVC accessing it is deleted. Delete removes the PV from Kubernetes as well as the DigitalOcean infrastructure. You can learn more about the Reclaim Policy and Status from the Kubernetes PV documentation.
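      If you would rather keep the underlying DigitalOcean volume when the claim is deleted, you can switch the policy to Retain. One way to do this (shown here as a sketch; substitute your PV's name from the output above) is with kubectl patch:

      • kubectl patch pv your_pv_name -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'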

      You've successfully created a Persistent Volume using the DigitalOcean block storage plug-in. Now that your Persistent Volume is ready, you will create your pods using a Deployment.

      Step 4 — Creating a PHP-FPM Deployment

      In this step, you will learn how to use a Deployment to create your PHP-FPM pod. Deployments provide a uniform way to create, update, and manage pods by using ReplicaSets. If an update does not work as expected, a Deployment will automatically roll back its pods to a previous image.
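      For example, once the Deployment you build below exists, you could watch a rollout and, if an update misbehaves, revert it manually with the kubectl rollout commands (optional; not required for this tutorial):

      • kubectl rollout status deployment/php
      • kubectl rollout undo deployment/php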

      The Deployment spec.selector key will list the labels of the pods it will manage. It will also use the template key to create the required pods.

      This step will also introduce the use of Init Containers. Init Containers run one or more commands before the regular containers specified under the pod's template key. In this tutorial, your Init Container will fetch a sample index.php file from GitHub using wget. These are the contents of the sample file:

      index.php

      <?php
      echo phpinfo(); 
      

      To create your Deployment, open a new file called php_deployment.yaml with your editor:

      • nano php_deployment.yaml

      This Deployment will manage your PHP-FPM pods, so you will name the Deployment object php. The pods belong to the backend tier, so you will group the Deployment into this group by using the tier: backend label:

      php_deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: php
        labels:
          tier: backend
      

      For the Deployment spec, you will specify how many copies of this pod to create by using the replicas parameter. The number of replicas will vary depending on your needs and available resources. You will create one replica in this tutorial:

      php_deployment.yaml

      ...
      spec:
        replicas: 1
      

      This Deployment will manage pods that match the app: php and tier: backend labels. Under the selector key, add:

      php_deployment.yaml

      ...
        selector:
          matchLabels:
            app: php
            tier: backend
      

      Next, the Deployment spec requires a template for your pod's object definition. This template defines the specifications used to create each pod. First, you will add the labels that were specified for the php service selectors and the Deployment’s matchLabels. Add app: php and tier: backend under template.metadata.labels:

      php_deployment.yaml

      ...
        template:
          metadata:
            labels:
              app: php
              tier: backend
      

      A pod can have multiple containers and volumes, but each will need a name. You can selectively mount volumes to a container by specifying a mount path for each volume.

      First, specify the volumes that your containers will access. You created a PVC named code to hold your application code, so name this volume code as well. Under spec.template.spec.volumes, add the following:

      php_deployment.yaml

      ...
          spec:
            volumes:
            - name: code
              persistentVolumeClaim:
                claimName: code
      

      Next, specify the container you want to run in this pod. You can find various images on the Docker store, but in this tutorial you will use the php:7-fpm image.

      Under spec.template.spec.containers, add the following:

      php_deployment.yaml

      ...
            containers:
            - name: php
              image: php:7-fpm
      

      Next, you will mount the volumes that the container requires access to. This container will run your PHP code, so it will need access to the code volume. You will also use mountPath to specify /code as the mount point.

      Under spec.template.spec.containers.volumeMounts, add:

      php_deployment.yaml

      ...
              volumeMounts:
              - name: code
                mountPath: /code
      

      Now that you have mounted your volume, you need to get your application code on the volume. You may have previously used FTP/SFTP or cloned the code over an SSH connection to accomplish this, but this step will show you how to copy the code using an Init Container.

      Depending on the complexity of your setup process, you can either use a single initContainer to run a script that builds your application, or you can use one initContainer per command. Make sure that the volumes are mounted to the initContainer.
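      As a sketch, the one-command-per-container approach (with hypothetical container names and a placeholder URL) might look like this:

      ...
            initContainers:
            - name: download
              image: busybox
              command: ["wget", "-O", "/code/index.php", "https://example.com/index.php"]
              volumeMounts:
              - name: code
                mountPath: /code
            - name: fix-permissions
              image: busybox
              command: ["chmod", "-R", "755", "/code"]
              volumeMounts:
              - name: code
                mountPath: /code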

      In this tutorial, you will use a single Init Container with busybox to download the code. busybox is a small image that contains the wget utility that you will use to accomplish this.

      Under spec.template.spec, add your initContainer and specify the busybox image:

      php_deployment.yaml

      ...
            initContainers:
            - name: install
              image: busybox
      

      Your Init Container will need access to the code volume so that it can download the code in that location. Under spec.template.spec.initContainers, mount the volume code at the /code path:

      php_deployment.yaml

      ...
              volumeMounts:
              - name: code
                mountPath: /code
      

      Each Init Container needs to run a command. Your Init Container will use wget to download the code from GitHub into the /code working directory. The -O option gives the downloaded file a name, and you will name this file index.php.

      Note: Be sure to trust the code you’re pulling. Before pulling it to your server, inspect the source code to ensure you are comfortable with what the code does.

      Under the install container in spec.template.spec.initContainers, add these lines:

      php_deployment.yaml

      ...
              command:
              - wget
              - "-O"
              - "/code/index.php"
              - https://raw.githubusercontent.com/do-community/php-kubernetes/master/index.php
      

      Your completed php_deployment.yaml file will look like this:

      php_deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: php
        labels:
          tier: backend
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: php
            tier: backend
        template:
          metadata:
            labels:
              app: php
              tier: backend
          spec:
            volumes:
            - name: code
              persistentVolumeClaim:
                claimName: code
            containers:
            - name: php
              image: php:7-fpm
              volumeMounts:
              - name: code
                mountPath: /code
            initContainers:
            - name: install
              image: busybox
              volumeMounts:
              - name: code
                mountPath: /code
              command:
              - wget
              - "-O"
              - "/code/index.php"
              - https://raw.githubusercontent.com/do-community/php-kubernetes/master/index.php
      

      Save the file and exit the editor.

      Create the PHP-FPM Deployment with kubectl:

      • kubectl apply -f php_deployment.yaml

      You will see the following output upon Deployment creation:

      Output

      deployment.apps/php created

      To summarize, this Deployment will start by downloading the specified images. It will then request the PersistentVolume from your PersistentVolumeClaim and serially run your initContainers. Once complete, the containers will run and mount the volumes to the specified mount point. Once all of these steps are complete, your pod will be up and running.

      You can view your Deployment by running:

      • kubectl get deployments

      You will see the output:

      Output

      NAME   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      php    1         1         1            0           19s

      This output can help you understand the current state of the Deployment. A Deployment is one of the controllers that maintains a desired state. The template you created specifies that the DESIRED state will have 1 replica of the pod named php. The CURRENT field indicates how many replicas are running, so this should match the DESIRED state. You can read about the remaining fields in the Kubernetes Deployments documentation.

      You can view the pods that this Deployment started with the following command:

      • kubectl get pods

      The output of this command varies depending on how much time has passed since creating the Deployment. If you run it shortly after creation, the output will likely look like this:

      Output

      NAME                   READY   STATUS     RESTARTS   AGE
      php-86d59fd666-bf8zd   0/1     Init:0/1   0          9s

      The columns represent the following information:

      • Ready: The number of containers in the pod that are ready, out of the total number of containers it runs.
      • Status: The status of the pod. Init indicates that the Init Containers are running. In this output, 0 out of 1 Init Containers have finished running.
      • Restarts: How many times the containers in this pod have restarted. This number will increase if any of your Init Containers fail. The Deployment will restart them until it reaches a desired state.

      Depending on the complexity of your startup scripts, it can take a couple of minutes for the status to change to PodInitializing:

      Output

      NAME                   READY   STATUS            RESTARTS   AGE
      php-86d59fd666-lkwgn   0/1     PodInitializing   0          39s

      This means the Init Containers have finished and the containers are initializing. If you run the command when all of the containers are running, you will see the pod status change to Running.

      Output

      NAME                   READY   STATUS    RESTARTS   AGE
      php-86d59fd666-lkwgn   1/1     Running   0          1m

      You now see that your pod is running successfully. If your pod doesn't start, you can debug with the following commands:

      • View detailed information of a pod:
      • kubectl describe pods pod-name
      • View logs generated by a pod:
      • kubectl logs pod-name
      • View logs for a specific container in a pod:
      • kubectl logs pod-name container-name

      Your application code is mounted and the PHP-FPM service is now ready to handle connections. You can now create your Nginx Deployment.

      Step 5 — Creating the Nginx Deployment

      In this step, you will use a ConfigMap to configure Nginx. A ConfigMap holds your configuration in a key-value format that you can reference in other Kubernetes object definitions. This approach will grant you the flexibility to reuse or swap the image with a different Nginx version if needed. Updating the ConfigMap will automatically replicate the changes to any pod mounting it.
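      Mounting the ConfigMap as a volume is the approach used here, but a pod can also consume a key as an environment variable. A minimal sketch of that alternative (not used in this tutorial; the variable name is hypothetical) looks like this:

      ...
              env:
              - name: NGINX_SITE_CONFIG
                valueFrom:
                  configMapKeyRef:
                    name: nginx-config
                    key: config

      Unlike volume mounts, environment variables are only read when the container starts, so they do not pick up later ConfigMap updates.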

      Create a nginx_configMap.yaml file for your ConfigMap with your editor:

      • nano nginx_configMap.yaml

      Name the ConfigMap nginx-config and group it into the backend tier with the tier: backend label:

      nginx_configMap.yaml

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: nginx-config
        labels:
          tier: backend
      

      Next, you will add the data for the ConfigMap. Name the key config and add the contents of your Nginx configuration file as the value. You can use the example Nginx configuration from this tutorial.

      Because Kubernetes can route requests to the appropriate host for a service, you can enter the name of your PHP-FPM service in the fastcgi_pass parameter instead of its IP address. Add the following to your nginx_configMap.yaml file:

      nginx_configMap.yaml

      ...
      data:
        config: |
          server {
            index index.php index.html;
            error_log  /var/log/nginx/error.log;
            access_log /var/log/nginx/access.log;
            root /code;
      
            location / {
                try_files $uri $uri/ /index.php?$query_string;
            }
      
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass php:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_path_info;
              }
          }
      

      Your nginx_configMap.yaml file will look like this:

      nginx_configMap.yaml

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: nginx-config
        labels:
          tier: backend
      data:
        config: |
          server {
            index index.php index.html;
            error_log  /var/log/nginx/error.log;
            access_log /var/log/nginx/access.log;
            root /code;
      
            location / {
                try_files $uri $uri/ /index.php?$query_string;
            }
      
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass php:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_path_info;
              }
          }
      

      Save the file and exit the editor.

      Create the ConfigMap:

      • kubectl apply -f nginx_configMap.yaml

      You will see the following output:

      Output

      configmap/nginx-config created

      You've finished creating your ConfigMap and can now build your Nginx Deployment.

      Start by opening a new nginx_deployment.yaml file in the editor:

      • nano nginx_deployment.yaml

      Name the Deployment nginx and add the label tier: backend:

      nginx_deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx
        labels:
          tier: backend
      

      Specify that you want one replica in the Deployment spec. This Deployment will manage pods with labels app: nginx and tier: backend. Add the following parameters and values:

      nginx_deployment.yaml

      ...
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nginx
            tier: backend
      

      Next, add the pod template. You need to use the same labels that you added for the Deployment selector.matchLabels. Add the following:

      nginx_deployment.yaml

      ...
        template:
          metadata:
            labels:
              app: nginx
              tier: backend
      

      Give Nginx access to the code PVC that you created earlier. Under spec.template.spec.volumes, add:

      nginx_deployment.yaml

      ...
          spec:
            volumes:
            - name: code
              persistentVolumeClaim:
                claimName: code
      

      Pods can mount a ConfigMap as a volume. Specifying a file name and key will create a file with its value as the content. To use the ConfigMap, set path to the name of the file that will hold the contents of the key. You want to create a file site.conf from the key config. Under spec.template.spec.volumes, add the following:

      nginx_deployment.yaml

      ...
            - name: config
              configMap:
                name: nginx-config
                items:
                - key: config
                  path: site.conf
      

      Warning: If a file is not specified, the contents of the key will replace the mountPath of the volume. This means that if a path is not explicitly specified, you will lose all content in the destination folder.
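      If you ever do need to add a single file to a directory without hiding its existing contents, one common alternative (not needed here, since conf.d will only hold this one file) is to mount a single key with subPath:

      ...
              volumeMounts:
              - name: config
                mountPath: /etc/nginx/conf.d/site.conf
                subPath: site.conf

      Keep in mind that subPath mounts do not receive automatic updates when the ConfigMap changes.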

      Next, you will specify the image to create your pod from. This tutorial will use the nginx:1.7.9 image for stability, but you can find other Nginx images on the Docker store. Also, make Nginx available on port 80. Under spec.template.spec, add:

      nginx_deployment.yaml

      ...
            containers:
            - name: nginx
              image: nginx:1.7.9
              ports:
              - containerPort: 80
      

      Nginx and PHP-FPM need to access the file at the same path, so mount the code volume at /code:

      nginx_deployment.yaml

      ...
              volumeMounts:
              - name: code
                mountPath: /code
      

      The nginx:1.7.9 image will automatically load any configuration files under the /etc/nginx/conf.d directory. Mounting the config volume in this directory will create the file /etc/nginx/conf.d/site.conf. Under volumeMounts add the following:

      nginx_deployment.yaml

      ...
              - name: config
                mountPath: /etc/nginx/conf.d
      

      Your nginx_deployment.yaml file will look like this:

      nginx_deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx
        labels:
          tier: backend
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nginx
            tier: backend
        template:
          metadata:
            labels:
              app: nginx
              tier: backend
          spec:
            volumes:
            - name: code
              persistentVolumeClaim:
                claimName: code
            - name: config
              configMap:
                name: nginx-config
                items:
                - key: config
                  path: site.conf
            containers:
            - name: nginx
              image: nginx:1.7.9
              ports:
              - containerPort: 80
              volumeMounts:
              - name: code
                mountPath: /code
              - name: config
                mountPath: /etc/nginx/conf.d
      

      Save the file and exit the editor.

      Create the Nginx Deployment:

      • kubectl apply -f nginx_deployment.yaml

      The following output indicates that your Deployment is now created:

      Output

      deployment.apps/nginx created

      List your Deployments with this command:

      • kubectl get deployments

      You will see the Nginx and PHP-FPM Deployments:

      Output

      NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      nginx   1         1         1            0           16s
      php     1         1         1            1           7m

      List the pods managed by both of the Deployments:

      • kubectl get pods

      You will see the pods that are running:

      Output

      NAME                     READY   STATUS    RESTARTS   AGE
      nginx-7bf5476b6f-zppml   1/1     Running   0          32s
      php-86d59fd666-lkwgn     1/1     Running   0          7m

      Now that all of the Kubernetes objects are active, you can visit the Nginx service on your browser.

      List the running services:

      • kubectl get services -o wide

      Get the External IP for your Nginx service:

      Output

      NAME         TYPE        CLUSTER-IP      EXTERNAL-IP      PORT(S)    AGE   SELECTOR
      kubernetes   ClusterIP   10.96.0.1       <none>           443/TCP    39m   <none>
      nginx        ClusterIP   10.102.160.47   your_public_ip   80/TCP     27m   app=nginx,tier=backend
      php          ClusterIP   10.100.59.238   <none>           9000/TCP   34m   app=php,tier=backend

      On your browser, visit your server by typing in http://your_public_ip. You will see the output of phpinfo(), confirming that your Kubernetes services are up and running.

      Conclusion

      In this guide, you containerized the PHP-FPM and Nginx services so that you can manage them independently. This approach will not only improve the scalability of your project as you grow, but will also allow you to use resources efficiently. You also stored your application code on a volume so that you can easily update your services in the future.




      How To Set Up Multi-Node Deployments With Rancher 2.1, Kubernetes, and Docker Machine on Ubuntu 18.04


      The author selected Code.org to receive a donation as part of the Write for DOnations program.

      Introduction

      Rancher is a popular open-source container management platform. Released in early 2018, Rancher 2.X works on Kubernetes and has incorporated new tools such as multi-cluster management and built-in CI pipelines. In addition to the enhanced security, scalability, and straightforward deployment tools already in Kubernetes, Rancher offers a graphical user interface that makes managing containers easier. Through Rancher’s GUI, users can manage secrets, securely handle roles and permissions, scale nodes and pods, and set up load balancers and volumes without needing a command line tool or complex YAML files.

      In this tutorial, you will deploy a multi-node Rancher 2.1 server using Docker Machine on Ubuntu 18.04. By the end, you’ll be able to provision new DigitalOcean Droplets and container pods via the Rancher UI to quickly scale up or down your hosting environment.

      Prerequisites

      Before you start this tutorial, you’ll need a DigitalOcean account, in addition to the following:

      • A DigitalOcean Personal Access Token, which you can create following the instructions in this tutorial. This token will allow Rancher to have API access to your DigitalOcean account.

      • A fully registered domain name with an A record that points to the IP address of the Droplet you create in Step 1. You can learn how to point domains to DigitalOcean Droplets by reading through DigitalOcean’s Domains and DNS documentation. Throughout this tutorial, substitute your domain for example.com.

      Step 1 — Creating a Droplet With Docker Installed

      To start and configure Rancher, you’ll need to create a new Droplet with Docker installed. To accomplish this, you can use DigitalOcean’s Docker image.

      First, log in to your DigitalOcean account and choose Create Droplet. Then, under the Choose an Image section, select the Marketplace tab. Select Docker 18.06.1~ce~3 on 18.04.

      Choose the Docker 18.06 image from the One-click Apps menu

      Next, select a Droplet no smaller than 2GB and choose a datacenter region for your Droplet.

      Finally, add your SSH keys, provide a host name for your Droplet, and press the Create button.

      It will take a few minutes for the server to provision and for Docker to download. Once the Droplet deploys successfully, you’re ready to start Rancher in a new Docker container.

      Step 2 — Starting and Configuring Rancher

      The Droplet you created in Step 1 will run Rancher in a Docker container. In this step, you will start the Rancher container and ensure it has a Let’s Encrypt SSL certificate so that you can securely access the Rancher admin panel. Let’s Encrypt is an automated, open-source certificate authority that allows developers to provision ninety-day SSL certificates for free.

      Log in to your new Droplet:

      • ssh root@your_droplet_ip

      To make sure Docker is running, enter:

      • docker -v

      Check that the listed Docker version is what you expect. You can start Rancher with a Let's Encrypt certificate already installed by running the following command:

      • docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -v /host/rancher:/var/lib/rancher rancher/rancher --acme-domain example.com

      The --acme-domain option installs an SSL certificate from Let's Encrypt to ensure your Rancher admin is served over HTTPS. This command also instructs the Droplet to fetch the rancher/rancher Docker image and start a Rancher instance in a container that will restart automatically if it ever goes down accidentally. To ease recovery in the event of data loss, the command mounts a volume on the host machine (at /host/rancher) that contains the Rancher data.

      To see all the running containers, enter:

      • docker ps

      You'll see output similar to the following (with a unique container ID and name):

      Output

      CONTAINER ID   IMAGE             COMMAND           CREATED          STATUS          PORTS                                      NAMES
      7b2afed0a599   rancher/rancher   "entrypoint.sh"   12 seconds ago   Up 11 seconds   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   wizardly_fermat

      If the container is not running, you can execute the docker run command again.
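      If it keeps failing, you can also inspect the container's logs (substituting the container name or ID shown by docker ps):

      • docker logs container_name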

      Before you can access the Rancher admin panel, you'll need to set your admin password and Rancher server URL. The Rancher admin interface will give you access to all of your running nodes, pods, and secrets, so it is important that you use a strong password for it.

      Go to the domain name that points to your new Droplet in your web browser. The first time you visit this address, Rancher will let you set a password:

      Set your Rancher password using the prompt

      When asked for your Rancher server URL, use the domain name pointed at your Droplet.

      You have now completed your Rancher server setup, and you will see the Rancher admin home screen:

      The Rancher admin home screen

      You're ready to continue to the Rancher cluster setup.

      Step 3 — Configuring a Cluster With a Single Node

      To use Rancher, you'll need to create a cluster with at least one node. A cluster is a group of one or more nodes; this guide on Kubernetes architecture provides more background. In this tutorial, nodes correspond to Droplets that Rancher will manage. Pods represent a group of running Docker containers within the Droplet. Each node can run many pods. Using the Rancher UI, you can set up clusters and nodes in an underlying Kubernetes environment.

      By the end of this step, you will have set up a cluster with a single node ready to run your first pod.

      In Rancher, click Add Cluster, and select DigitalOcean as the infrastructure provider.

      Select DigitalOcean from the listed infrastructure providers

      Enter a Cluster Name and scroll down to the Node Pools section. Enter a Name Prefix, leave the Count at 1 for now, and check etcd, Control Plane, and Worker.

      • etcd is Kubernetes' key-value store for keeping your entire environment's state. In order to maintain high availability, you should run three or five etcd nodes so that if one goes down your environment will still be manageable.
      • Control Plane checks through all of the Kubernetes Objects — such as pods — in your environment and keeps them up to date with the configuration you provide in the Rancher admin interface.
      • Workers run the actual workloads and monitoring agents that ensure your containers stay running and networked. Worker nodes are where your pods will run the software you deploy.

      Create a Node Pool with a single Node

      Before creating the cluster, click Add Node Template to configure the specific options for your new node.

      Enter your DigitalOcean Personal Access Token in the Access Token input box and click Next: Configure Droplet.

      Next, select the same Region and Droplet Size as in Step 1. For Image, be sure to select Ubuntu 16.04.5 x64, as there's currently a compatibility issue between Rancher and Ubuntu 18.04. Hit Create to save the template.

      Finally, click Create at the Add Cluster page to kick off the provisioning process. It will take a few minutes for Rancher to complete this step, but you will see a new Droplet in your DigitalOcean Droplets dashboard when it's done.

      In this step, you've created a new cluster and node onto which you will deploy a workload in the next section.

      Step 4 — Deploying a Web Application Workload

      Once the new cluster and node are ready, you can deploy your first pod in a workload. A Kubernetes Pod is the smallest unit of work available to Kubernetes and, by extension, Rancher. Workloads describe a single group of pods that you deploy together. For example, you may run multiple pods of your webserver in a single workload to ensure that if one pod slows down with a particular request, other instances can handle incoming requests. In this section, you're going to deploy an Nginx Hello World image to a single pod.

      Hover over Global in the header and select Default. This will bring you to the Default project dashboard. You'll focus on deploying a single project in this tutorial, but from this dashboard you can also create multiple projects to achieve isolated container hosting environments.

      To start configuring your first pod, click Deploy.

      Enter a Name, and put nginxdemos/hello in the Docker Image field. Next, map port 80 in the container to port 30000 on the host nodes. This will ensure that the pods you deploy are available on each node at port 30000. You can leave Protocol set to TCP, and the next dropdown as NodePort.

      Note: While this method of running the pod on every node's port is easier to get started, Rancher also includes Ingress to provide load balancing and SSL termination for production use.

      The input form for deploying a Workload

      To launch the pod, scroll to the bottom and click Launch.

      Rancher will take you back to the default project home page, and within a few seconds your pod will be ready. Click the link 30000/tcp just below the name of the workload and Rancher will open a new tab with information about the running container's environment.

      Server address, Server name, and other output from the running NGINX container

      The Server address and port you see on this page are those of the internal Docker network, and not the public IP address you see in your browser. This means that Rancher is working and routing traffic from http://first_node_ip:30000/ to the workload as expected.

      At this point, you've successfully deployed your first workload of one pod to a single Rancher node. Next, you'll see how to scale your Rancher environment.

      Step 5 — Scaling Nodes and Pods

      Rancher gives you two ways to scale your hosting resources: increasing the number of pods in your workload or increasing the number of nodes in your cluster.

      Adding pods to your workload will give your application more running processes. This will allow it to handle more traffic and enable zero-downtime deployments, but each node can handle only a finite number of pods. Once all your nodes have hit their pod limit, you will have to increase the number of nodes if you want to continue scaling up.

      Another consideration is that while increasing pods is typically free, you will have to pay for each node you add to your environment. In this step, you will scale up both nodes and pods, and add another node to your Rancher cluster.

      Note: This part of the tutorial will provision a new DigitalOcean Droplet automatically via the API, so be aware that you will incur extra charges while the second node is running.

      Navigate to the cluster home page of your Rancher installation by selecting Cluster: your-cluster-name from the top navigation bar. Next click Nodes from the top navigation bar.

      Use the top navbar dropdown to select your Cluster

      This page shows that you currently have one running node in the cluster. To add more nodes, click Edit Cluster, and scroll to the Node Pools section at the bottom of the page. Click Add Node Pool, enter a prefix, and check the Worker box. Click Save to update the cluster.

      Add a Node Pool as a Worker only

      Within 2–5 minutes, Rancher will provision a second Droplet and indicate the node as Active in the cluster's dashboard. This second node is only a worker, which means it will not run the Rancher etcd or Control Plane containers. This allows the Worker more capacity for running workloads.

      Note: Having an uneven number of etcd nodes will ensure that they can always reach a quorum (or consensus). If you only have one etcd node, you run the risk of your cluster being unreachable if that one node goes down. In a production environment it is a better practice to run three or five etcd nodes.

      When the second node is ready, you will be able to see the workload you deployed in the previous step on this node by navigating to http://second_node_ip:30000/ in your browser.

      Scaling up nodes gives you more Droplets to distribute your workloads on, but you may also want to run more instances of each pod within a workload. To add more pods, return to the Default project page, press the arrow to the left of your hello-world workload, and click + twice to add two more pods.

      Running three Hello World Pods in a Workload

      Rancher will automatically deploy more pods and distribute the running containers to each node depending on where there is availability.

      You can now scale your nodes and pods to suit your application's requirements.

      Conclusion

      You've now set up multi-node deployments using Rancher 2.1 on Ubuntu 18.04, and have scaled up to two running nodes and multiple pods within a workload. You can use this strategy to host and scale any kind of Docker container that you need to run in your application and use Rancher's dashboard and alerts to help you maximize the performance of your workloads and nodes within each cluster.




      How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes


      Introduction

      Kubernetes Ingresses allow you to flexibly route traffic from outside your Kubernetes cluster to Services inside of your cluster. This is accomplished using Ingress Resources, which define rules for routing HTTP and HTTPS traffic to Kubernetes Services, and Ingress Controllers, which implement the rules by load balancing traffic and routing it to the appropriate backend Services. Popular Ingress Controllers include Nginx, Contour, HAProxy, and Traefik. Ingresses provide a more efficient and flexible alternative to setting up multiple LoadBalancer services, each of which uses its own dedicated Load Balancer.

      In this guide, we’ll set up the Kubernetes-maintained Nginx Ingress Controller, and create some Ingress Resources to route traffic to several dummy backend services. Once we’ve set up the Ingress, we’ll install cert-manager into our cluster to manage and provision TLS certificates for encrypting HTTP traffic to the Ingress.

      Prerequisites

      Before you begin with this guide, you should have the following available to you:

      • A Kubernetes 1.10+ cluster with role-based access control (RBAC) enabled
      • The kubectl command-line tool installed on your local machine and configured to connect to your cluster. You can read more about installing kubectl in the official documentation.
      • A domain name and DNS A records which you can point to the DigitalOcean Load Balancer used by the Ingress. If you are using DigitalOcean to manage your domain’s DNS records, consult How to Manage DNS Records to learn how to create A records.
      • The Helm package manager installed on your local machine and Tiller installed on your cluster, as detailed in How To Install Software on Kubernetes Clusters with the Helm Package Manager.
      • The wget command-line utility installed on your local machine. You can install wget using the package manager built into your operating system.

      Once you have these components set up, you’re ready to begin with this guide.

      Step 1 — Setting Up Dummy Backend Services

      Before we deploy the Ingress Controller, we’ll first create and roll out two dummy echo Services to which we’ll route external traffic using the Ingress. The echo Services will run the hashicorp/http-echo web server container, which returns a page containing a text string passed in when the web server is launched. To learn more about http-echo, consult its GitHub Repo, and to learn more about Kubernetes Services, consult Services from the official Kubernetes docs.

      On your local machine, create and edit a file called echo1.yaml using nano or your favorite editor:

      • nano echo1.yaml

      Paste in the following Service and Deployment manifest:

      echo1.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: echo1
      spec:
        ports:
        - port: 80
          targetPort: 5678
        selector:
          app: echo1
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: echo1
      spec:
        selector:
          matchLabels:
            app: echo1
        replicas: 2
        template:
          metadata:
            labels:
              app: echo1
          spec:
            containers:
            - name: echo1
              image: hashicorp/http-echo
              args:
              - "-text=echo1"
              ports:
              - containerPort: 5678
      

      In this file, we define a Service called echo1 which routes traffic to Pods with the app: echo1 label selector. It accepts TCP traffic on port 80 and routes it to port 5678, http-echo's default port.

      We then define a Deployment, also called echo1, which manages Pods with the app: echo1 Label Selector. We specify that the Deployment should have 2 Pod replicas, and that the Pods should start a container called echo1 running the hashicorp/http-echo image. We pass in the text parameter and set it to echo1, so that the http-echo web server returns echo1. Finally, we open port 5678 on the Pod container.

      Once you're satisfied with your dummy Service and Deployment manifest, save and close the file.

      Then, create the Kubernetes resources using kubectl create with the -f flag, specifying the file you just saved as a parameter:

      • kubectl create -f echo1.yaml

      You should see the following output:

      Output

      service/echo1 created
      deployment.apps/echo1 created

      Verify that the Service started correctly by confirming that it has a ClusterIP, the internal IP on which the Service is exposed:

      • kubectl get svc echo1

      You should see the following output:

      Output

      NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
      echo1   ClusterIP   10.245.222.129   <none>        80/TCP    60s

      This indicates that the echo1 Service is now available internally at 10.245.222.129 on port 80. It will forward traffic to containerPort 5678 on the Pods it selects.
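      If you want to check the echo response before any Ingress exists, you can optionally forward a local port to the Service and query it (run the curl in a second terminal while the port-forward is active):

      • kubectl port-forward service/echo1 8080:80
      • curl localhost:8080

      The curl request should return the text echo1.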

      Now that the echo1 Service is up and running, repeat this process for the echo2 Service.

      Create and open a file called echo2.yaml:

      • nano echo2.yaml

      echo2.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: echo2
      spec:
        ports:
        - port: 80
          targetPort: 5678
        selector:
          app: echo2
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: echo2
      spec:
        selector:
          matchLabels:
            app: echo2
        replicas: 1
        template:
          metadata:
            labels:
              app: echo2
          spec:
            containers:
            - name: echo2
              image: hashicorp/http-echo
              args:
              - "-text=echo2"
              ports:
              - containerPort: 5678
      

      Here, we essentially use the same Service and Deployment manifest as above, but name and relabel the Service and Deployment echo2. In addition, to provide some variety, we create only 1 Pod replica. We ensure that we set the text parameter to echo2 so that the web server returns the text echo2.

      Save and close the file, and create the Kubernetes resources using kubectl:

      • kubectl create -f echo2.yaml

      You should see the following output:

      Output

      service/echo2 created
      deployment.apps/echo2 created

      Once again, verify that the Service is up and running:

      • kubectl get svc

      You should see both the echo1 and echo2 Services with assigned ClusterIPs:

      Output

      NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
      echo1        ClusterIP   10.245.222.129   <none>        80/TCP    6m6s
      echo2        ClusterIP   10.245.128.224   <none>        80/TCP    6m3s
      kubernetes   ClusterIP   10.245.0.1       <none>        443/TCP   4d21h

      Now that our dummy echo web services are up and running, we can move on to rolling out the Nginx Ingress Controller.

      Step 2 — Setting Up the Kubernetes Nginx Ingress Controller

      In this step, we'll roll out the Kubernetes-maintained Nginx Ingress Controller. Note that there are several Nginx Ingress Controllers; the Kubernetes community maintains the one used in this guide and Nginx Inc. maintains kubernetes-ingress. The instructions in this tutorial are based on those from the official Kubernetes Nginx Ingress Controller Installation Guide.

      The Nginx Ingress Controller consists of a Pod that runs the Nginx web server and watches the Kubernetes Control Plane for new and updated Ingress Resource objects. An Ingress Resource is essentially a list of traffic routing rules for backend Services. For example, an Ingress rule can specify that HTTP traffic arriving at the path /web1 should be directed towards the web1 backend web server. Using Ingress Resources, you can also perform host-based routing: for example, routing requests that hit web1.your_domain.com to the backend Kubernetes Service web1.
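      As a sketch, the path-based /web1 rule described above might look like the following (with hypothetical Service names, using the same extensions/v1beta1 API as the Ingress Resource we will create in Step 3):

      ...
      spec:
        rules:
        - http:
            paths:
            - path: /web1
              backend:
                serviceName: web1
                servicePort: 80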

      In this case, because we’re deploying the Ingress Controller to a DigitalOcean Kubernetes cluster, the Controller will create a LoadBalancer Service that spins up a DigitalOcean Load Balancer to which all external traffic will be directed. This Load Balancer will route external traffic to the Ingress Controller Pod running Nginx, which then forwards traffic to the appropriate backend Services.

      We'll begin by first creating the Kubernetes resources required by the Nginx Ingress Controller. These consist of ConfigMaps containing the Controller's configuration, Role-based Access Control (RBAC) Roles to grant the Controller access to the Kubernetes API, and the actual Ingress Controller Deployment. To see a full list of these required resources, consult the manifest from the Kubernetes Nginx Ingress Controller’s GitHub repo.

      To create these mandatory resources, use kubectl apply and the -f flag to specify the manifest file hosted on GitHub:

      • kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

      We use apply instead of create here so that in the future we can incrementally apply changes to the Ingress Controller objects instead of completely overwriting them. To learn more about apply, consult Managing Resources from the official Kubernetes docs.

      You should see the following output:

      Output

      namespace/ingress-nginx created
      configmap/nginx-configuration created
      configmap/tcp-services created
      configmap/udp-services created
      serviceaccount/nginx-ingress-serviceaccount created
      clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
      role.rbac.authorization.k8s.io/nginx-ingress-role created
      rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
      clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
      deployment.extensions/nginx-ingress-controller created

      This output also serves as a convenient summary of all the Ingress Controller objects created from the mandatory.yaml manifest.
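
      Because we used apply, re-running the same command later is safe: objects that already match the manifest are reported as unchanged, instead of triggering the "already exists" errors that kubectl create would produce. For example (output abbreviated):

      • kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

      Output

      namespace/ingress-nginx unchanged
      configmap/nginx-configuration unchanged
      . . .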

      Next, we'll create the Ingress Controller LoadBalancer Service, which will create a DigitalOcean Load Balancer that will load balance and route HTTP and HTTPS traffic to the Ingress Controller Pod deployed in the previous command.

      To create the LoadBalancer Service, once again kubectl apply a manifest file containing the Service definition:

      • kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml

      You should see the following output:

      Output

      service/ingress-nginx created

      Now, confirm that the DigitalOcean Load Balancer was successfully created by fetching the Service details with kubectl:

      • kubectl get svc --namespace=ingress-nginx

      You should see an external IP address, corresponding to the IP address of the DigitalOcean Load Balancer:

      Output

      NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
      ingress-nginx   LoadBalancer   10.245.247.67   203.0.113.0   80:32486/TCP,443:32096/TCP   20h

      Note down the Load Balancer's external IP address, as you'll need it in a later step.
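
      If you prefer to fetch just the address itself, you can extract it with a jsonpath query. This is purely a convenience and isn't required for the rest of the tutorial:

      • kubectl get svc ingress-nginx --namespace=ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'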

      This load balancer receives traffic on HTTP and HTTPS ports 80 and 443, and forwards it to the Ingress Controller Pod. The Ingress Controller will then route the traffic to the appropriate backend Service.

      We can now point our DNS records at this external Load Balancer and create some Ingress Resources to implement traffic routing rules.

      Step 3 — Creating the Ingress Resource

      Let's begin by creating a minimal Ingress Resource to route traffic directed at a given subdomain to a corresponding backend Service.

      In this guide, we'll use the test domain example.com. You should substitute this with the domain name you own.

      We'll first create a simple rule to route traffic directed at echo1.example.com to the echo1 backend service and traffic directed at echo2.example.com to the echo2 backend service.

      Begin by opening up a file called echo_ingress.yaml in your favorite editor:

      nano echo_ingress.yaml

      Paste in the following ingress definition:

      echo_ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: echo-ingress
      spec:
        rules:
        - host: echo1.example.com
          http:
            paths:
            - backend:
                serviceName: echo1
                servicePort: 80
        - host: echo2.example.com
          http:
            paths:
            - backend:
                serviceName: echo2
                servicePort: 80
      

      When you've finished editing your Ingress rules, save and close the file.

      Here, we've specified that we'd like to create an Ingress Resource called echo-ingress, and route traffic based on the Host header. An HTTP request Host header specifies the domain name of the target server. To learn more about Host request headers, consult the Mozilla Developer Network definition page. Requests with host echo1.example.com will be directed to the echo1 backend set up in Step 1, and requests with host echo2.example.com will be directed to the echo2 backend.

      You can now create the Ingress using kubectl:

      • kubectl apply -f echo_ingress.yaml

      You'll see the following output confirming the Ingress creation:

      Output

      ingress.extensions/echo-ingress created
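
      Before setting up DNS, you can optionally verify the host-based routing directly against the Load Balancer by supplying the Host header manually. Substitute the external IP address you noted in Step 2 for load_balancer_ip:

      • curl -H "Host: echo1.example.com" http://load_balancer_ip

      If the Ingress is working, this should return the text echo1.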

      To test the Ingress, navigate to your DNS management service and create A records for echo1.example.com and echo2.example.com pointing to the DigitalOcean Load Balancer's external IP. The Load Balancer's external IP is the external IP address for the ingress-nginx Service, which we fetched in the previous step. If you are using DigitalOcean to manage your domain's DNS records, consult How to Manage DNS Records to learn how to create A records.
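
      DNS changes can take some time to propagate. Before testing, you can confirm that the new records resolve to the Load Balancer's IP with a quick lookup from your local machine (dig is available on most Unix-like systems; host or nslookup work similarly):

      • dig +short echo1.example.com

      Output

      203.0.113.0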

      Once you've created the necessary echo1.example.com and echo2.example.com DNS records, you can test the Ingress Controller and Resource you've created using the curl command line utility.

      From your local machine, curl the echo1 Service:

      • curl echo1.example.com

      You should get the following response from the echo1 service:

      Output

      echo1

      This confirms that your request to echo1.example.com is being correctly routed through the Nginx ingress to the echo1 backend Service.

      Now, perform the same test for the echo2 Service:

      • curl echo2.example.com

      You should get the following response from the echo2 Service:

      Output

      echo2

      This confirms that your request to echo2.example.com is being correctly routed through the Nginx ingress to the echo2 backend Service.

      At this point, you've successfully set up a basic Nginx Ingress to perform virtual host-based routing. In the next step, we'll install cert-manager using Helm to provision TLS certificates for our Ingress and enable the more secure HTTPS protocol.

      Step 4 — Installing and Configuring Cert-Manager

      In this step, we'll use Helm to install cert-manager into our cluster. cert-manager is a Kubernetes add-on that provisions TLS certificates from Let's Encrypt and other certificate authorities and manages their lifecycles. Certificates can be requested and configured by annotating Ingress Resources with the certmanager.k8s.io/issuer annotation, appending a tls section to the Ingress spec, and configuring one or more Issuers to specify your preferred certificate authority. To learn more about Issuer objects, consult the official cert-manager documentation on Issuers.

      We'll begin by using Helm to install cert-manager into our cluster:

      • helm install --name cert-manager --namespace kube-system --version v0.4.1 stable/cert-manager

      You should see the following output:

      Output

      . . .
      NOTES:
      cert-manager has been deployed successfully!

      In order to begin issuing certificates, you will need to set up a ClusterIssuer
      or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).

      More information on the different types of issuers and how to configure them
      can be found in our documentation:

      https://cert-manager.readthedocs.io/en/latest/reference/issuers.html

      For information on how to configure cert-manager to automatically provision
      Certificates for Ingress resources, take a look at the `ingress-shim`
      documentation:

      https://cert-manager.readthedocs.io/en/latest/reference/ingress-shim.html

      This indicates that the cert-manager installation was successful.
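
      As an optional sanity check, you can confirm that the cert-manager Pod is running in the kube-system namespace. The exact Pod name depends on the Helm release, so we filter with grep rather than assuming a particular label:

      • kubectl get pods --namespace kube-system | grep cert-manager

      You should see a cert-manager Pod with a Running status.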

      Before we begin issuing certificates for our Ingress hosts, we need to create an Issuer, which specifies the certificate authority from which signed x509 certificates can be obtained. In this guide, we'll use the Let's Encrypt certificate authority, which provides free TLS certificates and offers both a staging server for testing your certificate configuration, and a production server for rolling out verifiable TLS certificates.

      Let's create a test Issuer to make sure the certificate provisioning mechanism is functioning correctly. Open a file named staging_issuer.yaml in your favorite text editor:

      nano staging_issuer.yaml
      

      Paste in the following ClusterIssuer manifest:

      staging_issuer.yaml

      apiVersion: certmanager.k8s.io/v1alpha1
      kind: ClusterIssuer
      metadata:
        name: letsencrypt-staging
      spec:
        acme:
          # The ACME server URL
          server: https://acme-staging-v02.api.letsencrypt.org/directory
          # Email address used for ACME registration
          email: your_email_address_here
          # Name of a secret used to store the ACME account private key
          privateKeySecretRef:
            name: letsencrypt-staging
          # Enable the HTTP-01 challenge provider
          http01: {}
      

      Here we specify that we'd like to create a ClusterIssuer object called letsencrypt-staging, and use the Let's Encrypt staging server. We'll later use the production server to roll out our certificates, but the production server may rate-limit requests made against it, so for testing purposes it's best to use the staging URL.

      We then specify an email address to register the certificate, and create a Kubernetes Secret called letsencrypt-staging to store the certificate's private key. We also enable the HTTP-01 challenge mechanism. To learn more about these parameters, consult the official cert-manager documentation on Issuers.

      Roll out the ClusterIssuer using kubectl:

      • kubectl create -f staging_issuer.yaml

      You should see the following output:

      Output

      clusterissuer.certmanager.k8s.io/letsencrypt-staging created
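
      You can optionally inspect the new ClusterIssuer to confirm that cert-manager has registered an ACME account with the staging server; the Status and Events sections of the output should indicate a successful registration:

      • kubectl describe clusterissuer letsencrypt-staging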

      Now that we've created our Let's Encrypt staging Issuer, we're ready to modify the Ingress Resource we created above and enable TLS encryption for the echo1.example.com and echo2.example.com domains.

      Open up echo_ingress.yaml once again in your favorite editor:

      nano echo_ingress.yaml

      Add the following to the Ingress Resource manifest:

      echo_ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: echo-ingress
        annotations:  
          kubernetes.io/ingress.class: nginx
          certmanager.k8s.io/cluster-issuer: letsencrypt-staging
      spec:
        tls:
        - hosts:
          - echo1.example.com
          - echo2.example.com
          secretName: letsencrypt-staging
        rules:
        - host: echo1.example.com
          http:
            paths:
            - backend:
                serviceName: echo1
                servicePort: 80
        - host: echo2.example.com
          http:
            paths:
            - backend:
                serviceName: echo2
                servicePort: 80
      

      Here we add some annotations to specify the ingress.class, which determines the Ingress Controller that should be used to implement the Ingress Rules. In addition, we define the cluster-issuer to be letsencrypt-staging, the certificate Issuer we just created.

      Finally, we add a tls block to specify the hosts for which we want to acquire certificates, and name the Secret in which cert-manager will store the issued certificate and its private key.

      When you're done making changes, save and close the file.

      We'll now update the existing Ingress Resource using kubectl apply:

      • kubectl apply -f echo_ingress.yaml

      You should see the following output:

      Output

      ingress.extensions/echo-ingress configured

      You can use kubectl describe to track the state of the Ingress changes you've just applied:

      • kubectl describe ingress

      Output

      Events:
        Type    Reason             Age               From                      Message
        ----    ------             ----              ----                      -------
        Normal  CREATE             14m               nginx-ingress-controller  Ingress default/echo-ingress
        Normal  UPDATE             1m (x2 over 13m)  nginx-ingress-controller  Ingress default/echo-ingress
        Normal  CreateCertificate  1m                cert-manager              Successfully created Certificate "letsencrypt-staging"

      Once the certificate has been successfully created, you can run an additional describe on it to further confirm its successful creation:

      • kubectl describe certificate

      You should see the following output in the Events section:

      Output

      Events:
        Type    Reason          Age   From          Message
        ----    ------          ----  ----          -------
        Normal  CreateOrder     50s   cert-manager  Created new ACME order, attempting validation...
        Normal  DomainVerified  15s   cert-manager  Domain "echo2.example.com" verified with "http-01" validation
        Normal  DomainVerified  3s    cert-manager  Domain "echo1.example.com" verified with "http-01" validation
        Normal  IssueCert       3s    cert-manager  Issuing certificate...
        Normal  CertObtained    1s    cert-manager  Obtained certificate from ACME server
        Normal  CertIssued      1s    cert-manager  Certificate issued successfully

      This confirms that the TLS certificate was successfully issued and HTTPS encryption is now active for the two domains configured.

      We're now ready to send a request to a backend echo server to test that HTTPS is functioning correctly.

      Run the following wget command to send a request to echo1.example.com and print the response headers to STDOUT:

      • wget --save-headers -O- echo1.example.com

      You should see the following output:

      Output

      URL transformed to HTTPS due to an HSTS policy
      --2018-12-11 14:38:24--  https://echo1.example.com/
      Resolving echo1.example.com (echo1.example.com)... 203.0.113.0
      Connecting to echo1.example.com (echo1.example.com)|203.0.113.0|:443... connected.
      ERROR: cannot verify echo1.example.com's certificate, issued by ‘CN=Fake LE Intermediate X1’:
        Unable to locally verify the issuer's authority.
      To connect to echo1.example.com insecurely, use `--no-check-certificate'.

      This indicates that HTTPS has successfully been enabled, but the certificate cannot be verified as it's a fake temporary certificate issued by the Let's Encrypt staging server.
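
      To confirm that the response body still comes through over HTTPS despite the untrusted staging certificate, you can re-run the request with certificate checking disabled, as the error message suggests:

      • wget --no-check-certificate -qO- https://echo1.example.com

      Output

      echo1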

      Now that we've tested that everything works using this temporary fake certificate, we can roll out production certificates for the two hosts echo1.example.com and echo2.example.com.

      Step 5 — Rolling Out Production Issuer

      In this step we’ll modify the procedure used to provision staging certificates, and generate a valid, verifiable production certificate for our Ingress hosts.

      To begin, we'll first create a production certificate ClusterIssuer.

      Open a file called prod_issuer.yaml in your favorite editor:

      nano prod_issuer.yaml
      

      Paste in the following manifest:

      prod_issuer.yaml

      apiVersion: certmanager.k8s.io/v1alpha1
      kind: ClusterIssuer
      metadata:
        name: letsencrypt-prod
      spec:
        acme:
          # The ACME server URL
          server: https://acme-v02.api.letsencrypt.org/directory
          # Email address used for ACME registration
          email: your_email_address_here
          # Name of a secret used to store the ACME account private key
          privateKeySecretRef:
            name: letsencrypt-prod
          # Enable the HTTP-01 challenge provider
          http01: {}
      

      Note the different ACME server URL, and the letsencrypt-prod secret key name.

      When you're done editing, save and close the file.

      Now, roll out this Issuer using kubectl:

      • kubectl create -f prod_issuer.yaml

      You should see the following output:

      Output

      clusterissuer.certmanager.k8s.io/letsencrypt-prod created

      Next, update echo_ingress.yaml to use this new Issuer. Open the file in your favorite editor:

      nano echo_ingress.yaml

      Make the following changes to the file:

      echo_ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: echo-ingress
        annotations:  
          kubernetes.io/ingress.class: nginx
          certmanager.k8s.io/cluster-issuer: letsencrypt-prod
      spec:
        tls:
        - hosts:
          - echo1.example.com
          - echo2.example.com
          secretName: letsencrypt-prod
        rules:
        - host: echo1.example.com
          http:
            paths:
            - backend:
                serviceName: echo1
                servicePort: 80
        - host: echo2.example.com
          http:
            paths:
            - backend:
                serviceName: echo2
                servicePort: 80
      

      Here, we update both the cluster-issuer annotation and the TLS secretName to letsencrypt-prod.

      Once you're satisfied with your changes, save and close the file.

      Roll out the changes using kubectl apply:

      • kubectl apply -f echo_ingress.yaml

      Output

      ingress.extensions/echo-ingress configured

      Wait a couple of minutes for the Let's Encrypt production server to issue the certificate. You can track its progress using kubectl describe on the certificate object:

      • kubectl describe certificate letsencrypt-prod

      Once you see the following output, the certificate has been issued successfully:

      Output

      Events:
        Type    Reason          Age    From          Message
        ----    ------          ----   ----          -------
        Normal  CreateOrder     4m4s   cert-manager  Created new ACME order, attempting validation...
        Normal  DomainVerified  3m30s  cert-manager  Domain "echo2.example.com" verified with "http-01" validation
        Normal  DomainVerified  3m18s  cert-manager  Domain "echo1.example.com" verified with "http-01" validation
        Normal  IssueCert       3m18s  cert-manager  Issuing certificate...
        Normal  CertObtained    3m16s  cert-manager  Obtained certificate from ACME server
        Normal  CertIssued      3m16s  cert-manager  Certificate issued successfully

      We'll now perform a test using curl to verify that HTTPS is working correctly. First, send a plain HTTP request to echo1.example.com:

      • curl echo1.example.com

      You should see the following:

      Output

      <html>
      <head><title>308 Permanent Redirect</title></head>
      <body>
      <center><h1>308 Permanent Redirect</h1></center>
      <hr><center>nginx/1.15.6</center>
      </body>
      </html>

      This indicates that HTTP requests are being redirected to use HTTPS.
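
      To see the redirect itself, you can fetch just the response headers for the plain HTTP URL; the Location header should point at the HTTPS version of the site (the exact set of headers may vary slightly):

      • curl -I http://echo1.example.com

      Output

      HTTP/1.1 308 Permanent Redirect
      . . .
      Location: https://echo1.example.com/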

      Run curl on https://echo1.example.com:

      • curl https://echo1.example.com

      You should now see the following output:

      Output

      echo1

      You can run the previous command with the verbose -v flag to dig deeper into the certificate handshake and to verify the certificate information.
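
      Alternatively, if you have OpenSSL installed locally, you can inspect the served certificate's issuer and validity window directly. The issuer line should now reference Let's Encrypt rather than the fake staging intermediate:

      • echo | openssl s_client -connect echo1.example.com:443 -servername echo1.example.com 2>/dev/null | openssl x509 -noout -issuer -dates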

      At this point, you've successfully configured HTTPS using a Let's Encrypt certificate for your Nginx Ingress.

      Conclusion

      In this guide, you set up an Nginx Ingress to load balance and route external requests to backend Services inside of your Kubernetes cluster. You also secured the Ingress by installing the cert-manager certificate provisioner and setting up a Let's Encrypt certificate for two host paths.

      There are many alternatives to the Nginx Ingress Controller. To learn more, consult Ingress controllers from the official Kubernetes documentation.


