
      How To Scale a Node.js Application with MongoDB on Kubernetes Using Helm


      Introduction

      Kubernetes is a system for running modern, containerized applications at scale. With it, developers can deploy and manage applications across clusters of machines. And though it can be used to improve efficiency and reliability in single-instance application setups, Kubernetes is designed to run multiple instances of an application across groups of machines.

      When creating multi-service deployments with Kubernetes, many developers opt to use the Helm package manager. Helm streamlines the process of creating multiple Kubernetes resources by offering charts and templates that coordinate how these objects interact. It also offers pre-packaged charts for popular open-source projects.

      In this tutorial, you will deploy a Node.js application with a MongoDB database onto a Kubernetes cluster using Helm charts. You will use the official Helm MongoDB replica set chart to create a StatefulSet object consisting of three Pods, a Headless Service, and three PersistentVolumeClaims. You will also create a chart to deploy a multi-replica Node.js application using a custom application image. The setup you will build in this tutorial will mirror the functionality of the code described in Containerizing a Node.js Application with Docker Compose and will be a good starting point to build a resilient Node.js application with a MongoDB data store that can scale with your needs.

      Prerequisites

      To complete this tutorial, you will need:

      Step 1 — Cloning and Packaging the Application

      To use our application with Kubernetes, we will need to package it so that the kubelet agent can pull the image. Before packaging the application, however, we will need to modify the MongoDB connection URI in the application code to ensure that our application can connect to the members of the replica set that we will create with the Helm mongodb-replicaset chart.

      Our first step will be to clone the node-mongo-docker-dev repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in Containerizing a Node.js Application for Development With Docker Compose, which uses a demo Node.js application with a MongoDB database to demonstrate how to set up a development environment with Docker Compose. You can find more information about the application itself in the series From Containers to Kubernetes with Node.js.

      Clone the repository into a directory called node_project:

      • git clone https://github.com/do-community/node-mongo-docker-dev.git node_project

      Navigate to the node_project directory:
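
      • cd node_project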

      The node_project directory contains files and directories for a shark information application that works with user input. It has been modernized to work with containers: sensitive and specific configuration information has been removed from the application code and refactored to be injected at runtime, and the application's state has been offloaded to a MongoDB database.

      For more information about designing modern, containerized applications, please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes.

      When we deploy the Helm mongodb-replicaset chart, it will create:

      • A StatefulSet object with three Pods — the members of the MongoDB replica set. Each Pod will have an associated PersistentVolumeClaim and will maintain a fixed identity in the event of rescheduling.
      • A MongoDB replica set made up of the Pods in the StatefulSet. The set will include one primary and two secondaries. Data will be replicated from the primary to the secondaries, ensuring that our application data remains highly available.

      For our application to interact with the database replicas, the MongoDB connection URI in our code will need to include both the hostnames of the replica set members as well as the name of the replica set itself. We therefore need to include these values in the URI.

      The file in our cloned repository that specifies database connection information is called db.js. Open that file now using nano or your favorite editor:
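
      • nano db.js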

      Currently, the file includes constants that are referenced in the database connection URI at runtime. The values for these constants are injected using Node's process.env property, which returns an object with information about your user environment at runtime. Setting values dynamically in our application code allows us to decouple the code from the underlying infrastructure, which is necessary in a dynamic, stateless environment. For more information about refactoring application code in this way, see Step 2 of Containerizing a Node.js Application for Development With Docker Compose and the relevant discussion in The 12-Factor App.

      The constants for the connection URI and the URI string itself currently look like this:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      ...
      
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
      ...
      

      In keeping with a 12-factor approach, we do not want to hard code the hostnames of our replica instances or our replica set name into this URI string. The existing MONGO_HOSTNAME constant can be expanded to include multiple hostnames — the members of our replica set — so we will leave that in place. We will need to add a replica set constant to the options section of the URI string, however.

      Add MONGO_REPLICASET to both the URI constant object and the connection string:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB,
        MONGO_REPLICASET
      } = process.env;
      
      ...
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?replicaSet=${MONGO_REPLICASET}&authSource=admin`;
      ...
      

      Using the replicaSet option in the options section of the URI allows us to pass in the name of the replica set, which, along with the hostnames defined in the MONGO_HOSTNAME constant, will allow us to connect to the set members.
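
      For reference, once the environment variables are injected at runtime, the assembled URI will look roughly like the following. The username and password here are placeholders for your own values, while the hostnames, port, database name, and replica set name match the values we will configure in Steps 3 and 4:

      mongodb://your_database_username:your_database_password@mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local:27017/sharkinfo?replicaSet=db&authSource=admin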

      Save and close the file when you are finished editing.

      With your database connection information modified to work with replica sets, you can now package your application, build the image with the docker build command, and push it to Docker Hub.

      Build the image with docker build and the -t flag, which allows you to tag the image with a memorable name. In this case, tag the image with your Docker Hub username and name it node-replicas or a name of your own choosing:

      • docker build -t your_dockerhub_username/node-replicas .

      The . in the command specifies that the build context is the current directory.

      It will take a minute or two to build the image. Once it is complete, check your images:
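
      • docker images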

      You will see the following output:

      Output

      REPOSITORY                              TAG         IMAGE ID       CREATED         SIZE
      your_dockerhub_username/node-replicas   latest      56a69b4bc882   7 seconds ago   90.1MB
      node                                    10-alpine   aa57b0242b33   6 days ago      71MB

      Next, log in to the Docker Hub account you created in the prerequisites:

      • docker login -u your_dockerhub_username

      When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your non-root user's home directory with your Docker Hub credentials.

      Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_username with your own Docker Hub username:

      • docker push your_dockerhub_username/node-replicas

      You now have an application image that you can pull to run your replicated application with Kubernetes. The next step will be to configure specific parameters to use with the MongoDB Helm chart.

      Step 2 — Creating Secrets for the MongoDB Replica Set

      The stable/mongodb-replicaset chart provides different options when it comes to using Secrets, and we will create two to use with our chart deployment:

      • A Secret for our replica set keyfile that will function as a shared password between replica set members, allowing them to authenticate to one another.
      • A Secret for our MongoDB admin user, who will be created as a root user on the admin database. This role will allow you to create subsequent users with limited permissions when deploying your application to production.

      With these Secrets in place, we will be able to set our preferred parameter values in a dedicated values file and create the StatefulSet object and MongoDB replica set with the Helm chart.

      First, let's create the keyfile. We will use the openssl command with the rand option to generate 756 bytes of random data for the keyfile:

      • openssl rand -base64 756 > key.txt

      The command base64 encodes its output, ensuring that the key contains only characters MongoDB accepts, and redirects it to a file called key.txt, following the guidelines in the mongodb-replicaset chart's authentication documentation. The key itself must be between 6 and 1024 characters long and consist only of characters in the base64 set.

      You can now create a Secret called keyfilesecret using this file with kubectl create:

      • kubectl create secret generic keyfilesecret --from-file=key.txt

      This will create a Secret object in the default namespace, since we have not created a specific namespace for our setup.

      You will see the following output indicating that your Secret has been created:

      Output

      secret/keyfilesecret created

      Remove key.txt:
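
      • rm key.txt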

      Alternatively, if you would like to save the file, be sure to restrict its permissions and add it to your .gitignore file to keep it out of version control.

      Next, create the Secret for your MongoDB admin user. The first step will be to convert your desired username and password to base64.

      Convert your database username:

      • echo -n 'your_database_username' | base64

      Note down the value you see in the output.

      Next, convert your password:

      • echo -n 'your_database_password' | base64

      Take note of the value in the output here as well.

      Open a file for the Secret:
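
      • nano secret.yaml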

      Note: Kubernetes objects are typically defined using YAML, which strictly forbids tabs and requires two spaces for indentation. If you would like to check the formatting of any of your YAML files, you can use a linter or test the validity of your syntax using kubectl create with the --dry-run and --validate flags:

      • kubectl create -f your_yaml_file.yaml --dry-run --validate=true

      In general, it is a good idea to validate your syntax before creating resources with kubectl.

      Add the following code to the file to create a Secret that will define a user and password with the encoded values you just created. Be sure to replace the dummy values here with your own encoded username and password:

      ~/node_project/secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: mongo-secret
      data:
        user: your_encoded_username
        password: your_encoded_password
      

      Here, we're using the key names that the mongodb-replicaset chart expects: user and password. We have named the Secret object mongo-secret, but you are free to name it anything you would like.

      Save and close the file when you are finished editing.

      Create the Secret object with the following command:

      • kubectl create -f secret.yaml

      You will see the following output:

      Output

      secret/mongo-secret created

      Again, you can either remove secret.yaml or restrict its permissions and add it to your .gitignore file.

      With your Secret objects created, you can move on to specifying the parameter values you will use with the mongodb-replicaset chart and creating the MongoDB deployment.

      Step 3 — Configuring the MongoDB Helm Chart and Creating a Deployment

      Helm comes with an actively maintained repository called stable that contains the chart we will be using: mongodb-replicaset. To use this chart with the Secrets we've just created, we will create a file with configuration parameter values called mongodb-values.yaml and then install the chart using this file.

      Our mongodb-values.yaml file will largely mirror the default values.yaml file in the mongodb-replicaset chart repository. We will, however, make the following changes to our file:

      • We will set the auth parameter to true to ensure that our database instances start with authorization enabled. This means that all clients will be required to authenticate for access to database resources and operations.
      • We will add information about the Secrets we created in the previous Step so that the chart can use these values to create the replica set keyfile and admin user.
      • We will decrease the size of the PersistentVolumes associated with each Pod in the StatefulSet to use the minimum viable DigitalOcean Block Storage unit, 1GB, though you are free to modify this to meet your storage requirements.

      Before writing the mongodb-values.yaml file, however, you should first check that you have a StorageClass created and configured to provision storage resources. Each of the Pods in your database StatefulSet will have a sticky identity and an associated PersistentVolumeClaim, which will dynamically provision a PersistentVolume for the Pod. If a Pod is rescheduled, the PersistentVolume will be mounted to whichever node the Pod is scheduled on (though each Volume must be manually deleted if its associated Pod or StatefulSet is permanently deleted).

      Because we are working with DigitalOcean Kubernetes, our default StorageClass provisioner is set to dobs.csi.digitalocean.com — DigitalOcean Block Storage — which we can check by typing:
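
      • kubectl get storageclass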

      If you are working with a DigitalOcean cluster, you will see the following output:

      Output

      NAME                         PROVISIONER                 AGE
      do-block-storage (default)   dobs.csi.digitalocean.com   21m

      If you are not working with a DigitalOcean cluster, you will need to create a StorageClass and configure a provisioner of your choice. For details about how to do this, please see the official documentation.

      Now that you have ensured that you have a StorageClass configured, open mongodb-values.yaml for editing:
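
      • nano mongodb-values.yaml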

      You will set values in this file that will do the following:

      • Enable authorization.
      • Reference your keyfilesecret and mongo-secret objects.
      • Specify 1Gi for your PersistentVolumes.
      • Set your replica set name to db.
      • Specify 3 replicas for the set.
      • Pin the mongo image to the latest version at the time of writing: 4.1.9.

      Paste the following code into the file:

      ~/node_project/mongodb-values.yaml

      replicas: 3
      port: 27017
      replicaSetName: db
      podDisruptionBudget: {}
      auth:
        enabled: true
        existingKeySecret: keyfilesecret
        existingAdminSecret: mongo-secret
      imagePullSecrets: []
      installImage:
        repository: unguiculus/mongodb-install
        tag: 0.7
        pullPolicy: Always
      copyConfigImage:
        repository: busybox
        tag: 1.29.3
        pullPolicy: Always
      image:
        repository: mongo
        tag: 4.1.9
        pullPolicy: Always
      extraVars: {}
      metrics:
        enabled: false
        image:
          repository: ssalaues/mongodb-exporter
          tag: 0.6.1
          pullPolicy: IfNotPresent
        port: 9216
        path: /metrics
        socketTimeout: 3s
        syncTimeout: 1m
        prometheusServiceDiscovery: true
        resources: {}
      podAnnotations: {}
      securityContext:
        enabled: true
        runAsUser: 999
        fsGroup: 999
        runAsNonRoot: true
      init:
        resources: {}
        timeout: 900
      resources: {}
      nodeSelector: {}
      affinity: {}
      tolerations: []
      extraLabels: {}
      persistentVolume:
        enabled: true
        #storageClass: "-"
        accessModes:
          - ReadWriteOnce
        size: 1Gi
        annotations: {}
      serviceAnnotations: {}
      terminationGracePeriodSeconds: 30
      tls:
        enabled: false
      configmap: {}
      readinessProbe:
        initialDelaySeconds: 5
        timeoutSeconds: 1
        failureThreshold: 3
        periodSeconds: 10
        successThreshold: 1
      livenessProbe:
        initialDelaySeconds: 30
        timeoutSeconds: 5
        failureThreshold: 3
        periodSeconds: 10
        successThreshold: 1
      

      The persistentVolume.storageClass parameter is commented out here: removing the comment and setting its value to "-" would disable dynamic provisioning. In our case, because we are leaving this value undefined, the chart will choose the default provisioner — in our case, dobs.csi.digitalocean.com.

      Also note the accessMode associated with the persistentVolume key: ReadWriteOnce means that the provisioned volume can be mounted as read-write by only a single node. Please see the documentation for more information about different access modes.

      To learn more about the other parameters included in the file, see the configuration table included with the repo.

      Save and close the file when you are finished editing.

      Before deploying the mongodb-replicaset chart, you will want to update the stable repo with the helm repo update command:
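
      • helm repo update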

      This will get the latest chart information from the stable repository.

      Finally, install the chart with the following command:

      • helm install --name mongo -f mongodb-values.yaml stable/mongodb-replicaset

      Note: Before installing a chart, you can run helm install with the --dry-run and --debug options to check the generated manifests for your release:

      • helm install --name your_release_name -f your_values_file.yaml --dry-run --debug your_chart

      Note that we are naming the Helm release mongo. This name will refer to this particular deployment of the chart with the configuration options we've specified. We've pointed to these options by including the -f flag and our mongodb-values.yaml file.

      Also note that because we did not include the --namespace flag with helm install, our chart objects will be created in the default namespace.

      Once you have created the release, you will see output about its status, along with information about the created objects and instructions for interacting with them:

      Output

      NAME:   mongo
      LAST DEPLOYED: Tue Apr 16 21:51:05 2019
      NAMESPACE: default
      STATUS: DEPLOYED

      RESOURCES:
      ==> v1/ConfigMap
      NAME                              DATA  AGE
      mongo-mongodb-replicaset-init     1     1s
      mongo-mongodb-replicaset-mongodb  1     1s
      mongo-mongodb-replicaset-tests    1     0s
      ...

      You can now check on the creation of your Pods with the following command:
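
      • kubectl get pods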

      You will see output like the following as the Pods are being created:

      Output

      NAME                         READY     STATUS     RESTARTS   AGE
      mongo-mongodb-replicaset-0   1/1       Running    0          67s
      mongo-mongodb-replicaset-1   0/1       Init:0/3   0          8s

      The READY and STATUS outputs here indicate that the Pods in our StatefulSet are not fully ready: the Init Containers associated with the Pod's containers are still running. Because StatefulSet members are created in sequential order, each Pod in the StatefulSet must be Running and Ready before the next Pod will be created.

      Once the Pods have been created and all of their associated containers are running, you will see this output:

      Output

      NAME                         READY     STATUS    RESTARTS   AGE
      mongo-mongodb-replicaset-0   1/1       Running   0          2m33s
      mongo-mongodb-replicaset-1   1/1       Running   0          94s
      mongo-mongodb-replicaset-2   1/1       Running   0          36s

      The Running STATUS indicates that your Pods are bound to nodes and that the containers associated with those Pods are running. READY indicates how many containers in a Pod are running. For more information, please consult the documentation on Pod lifecycles.

      Note:
      If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:

      • kubectl describe pods your_pod
      • kubectl logs your_pod

      Each of the Pods in your StatefulSet has a name that combines the name of the StatefulSet with the ordinal index of the Pod. Because we created three replicas, our StatefulSet members are numbered 0-2, and each has a stable DNS entry composed of the following elements: $(statefulset-name)-$(ordinal).$(service name).$(namespace).svc.cluster.local.

      In our case, the StatefulSet and the Headless Service created by the mongodb-replicaset chart have the same names:
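
      • kubectl get statefulset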

      Output

      NAME                       READY   AGE
      mongo-mongodb-replicaset   3/3     4m2s
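
      • kubectl get svc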

      Output

      NAME                              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
      kubernetes                        ClusterIP   10.245.0.1   <none>        443/TCP     42m
      mongo-mongodb-replicaset          ClusterIP   None         <none>        27017/TCP   4m35s
      mongo-mongodb-replicaset-client   ClusterIP   None         <none>        27017/TCP   4m35s

      This means that the first member of our StatefulSet will have the following DNS entry:

      mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local
      

      Because we need our application to connect to each MongoDB instance, it's essential that we have this information so that we can communicate directly with the Pods, rather than with the Service. When we create our custom application Helm chart, we will pass the DNS entries for each Pod to our application using environment variables.

      With your database instances up and running, you are ready to create the chart for your Node application.

      Step 4 — Creating a Custom Application Chart and Configuring Parameters

      We will create a custom Helm chart for our Node application and modify the default files in the standard chart directory so that our application can work with the replica set we have just created. We will also create files to define ConfigMap and Secret objects for our application.

      First, create a new chart directory called nodeapp with the following command:
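
      • helm create nodeapp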

      This will create a directory called nodeapp in your ~/node_project folder with the following resources:

      • A Chart.yaml file with basic information about your chart.
      • A values.yaml file that allows you to set specific parameter values, as you did with your MongoDB deployment.
      • A .helmignore file with file and directory patterns that will be ignored when packaging charts.
      • A templates/ directory with the template files that will generate Kubernetes manifests.
      • A templates/tests/ directory for test files.
      • A charts/ directory for any charts that this chart depends on.

      The first file we will modify out of these default files is values.yaml. Open that file now:
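
      • nano nodeapp/values.yaml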

      The values that we will set here include:

      • The number of replicas.
      • The application image we want to use. In our case, this will be the node-replicas image we created in Step 1.
      • The ServiceType. In this case, we will specify LoadBalancer to create a point of access to our application for testing purposes. Because we are working with a DigitalOcean Kubernetes cluster, this will create a DigitalOcean Load Balancer when we deploy our chart. In production, you can configure your chart to use Ingress Resources and Ingress Controllers to route traffic to your Services.
      • The targetPort to specify the port on the Pod where our application will be exposed.

      We will not enter environment variables into this file. Instead, we will create templates for ConfigMap and Secret objects and add these values to our application Deployment manifest, located at ~/node_project/nodeapp/templates/deployment.yaml.

      Configure the following values in the values.yaml file:

      ~/node_project/nodeapp/values.yaml

      # Default values for nodeapp.
      # This is a YAML-formatted file.
      # Declare variables to be passed into your templates.
      
      replicaCount: 3
      
      image:
        repository: your_dockerhub_username/node-replicas
        tag: latest
        pullPolicy: IfNotPresent
      
      nameOverride: ""
      fullnameOverride: ""
      
      service:
        type: LoadBalancer
        port: 80
        targetPort: 8080
      ...
      

      Save and close the file when you are finished editing.

      Next, open a secret.yaml file in the nodeapp/templates directory:

      • nano nodeapp/templates/secret.yaml

      In this file, add values for your MONGO_USERNAME and MONGO_PASSWORD application constants. These are the constants that your application will expect to have access to at runtime, as specified in db.js, your database connection file. As you add the values for these constants, remember to use the base64-encoded values that you used earlier in Step 2 when creating your mongo-secret object. If you need to recreate those values, you can return to Step 2 and run the relevant commands again.

      Add the following code to the file:

      ~/node_project/nodeapp/templates/secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: {{ .Release.Name }}-auth
      data:
        MONGO_USERNAME: your_encoded_username
        MONGO_PASSWORD: your_encoded_password
      

      The name of this Secret object will depend on the name of your Helm release, which you will specify when you deploy the application chart.

      Save and close the file when you are finished.

      Next, open a file to create a ConfigMap for your application:

      • nano nodeapp/templates/configmap.yaml

      In this file, we will define the remaining variables that our application expects: MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET. Our MONGO_HOSTNAME variable will include the DNS entry for each instance in our replica set, since this is what the MongoDB connection URI requires.

      According to the Kubernetes documentation, when an application implements liveness and readiness checks, SRV records should be used when connecting to the Pods. As discussed in Step 3, our Pod SRV records follow this pattern: $(statefulset-name)-$(ordinal).$(service name).$(namespace).svc.cluster.local. Since our MongoDB StatefulSet implements liveness and readiness checks, we should use these stable identifiers when defining the values of the MONGO_HOSTNAME variable.

      Add the following code to the file to define the MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET variables. You are free to use another name for your MONGO_DB database, but your MONGO_HOSTNAME and MONGO_REPLICASET values must be written as they appear here:

      ~/node_project/nodeapp/templates/configmap.yaml

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: {{ .Release.Name }}-config
      data:
        MONGO_HOSTNAME: "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local"  
        MONGO_PORT: "27017"
        MONGO_DB: "sharkinfo"
        MONGO_REPLICASET: "db"
      

      Because we have already created the StatefulSet object and replica set, the hostnames that are listed here must be listed in your file exactly as they appear in this example. If you destroy these objects and rename your MongoDB Helm release, then you will need to revise the values included in this ConfigMap. The same applies for MONGO_REPLICASET, since we specified the replica set name with our MongoDB release.

      Also note that the values listed here are quoted, which is the expectation for environment variables in Helm.

      Save and close the file when you are finished editing.

      With your chart parameter values defined and your Secret and ConfigMap manifests created, you can edit the application Deployment template to use your environment variables.

      Step 5 — Integrating Environment Variables into Your Helm Deployment

      With the files for our application Secret and ConfigMap in place, we will need to make sure that our application Deployment can use these values. We will also customize the liveness and readiness probes that are already defined in the Deployment manifest.

      Open the application Deployment template for editing:

      • nano nodeapp/templates/deployment.yaml

      Though this is a YAML file, Helm templates use a different syntax from standard Kubernetes YAML files in order to generate manifests. For more information about templates, see the Helm documentation.

      In the file, first add an env key to your application container specifications, below the imagePullPolicy key and above ports:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              env:
              ports:
      

      Next, add the following keys to the list of env variables:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              env:
              - name: MONGO_USERNAME
                valueFrom:
                  secretKeyRef:
                    key: MONGO_USERNAME
                    name: {{ .Release.Name }}-auth
              - name: MONGO_PASSWORD
                valueFrom:
                  secretKeyRef:
                    key: MONGO_PASSWORD
                    name: {{ .Release.Name }}-auth
              - name: MONGO_HOSTNAME
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_HOSTNAME
                    name: {{ .Release.Name }}-config
              - name: MONGO_PORT
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_PORT
                    name: {{ .Release.Name }}-config
              - name: MONGO_DB
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_DB
                    name: {{ .Release.Name }}-config      
              - name: MONGO_REPLICASET
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_REPLICASET
                    name: {{ .Release.Name }}-config        
      

      Each variable includes a reference to its value, defined either by a secretKeyRef key, in the case of Secret values, or configMapKeyRef for ConfigMap values. These keys point to the Secret and ConfigMap files we created in the previous Step.

      Next, under the ports key, modify the containerPort definition to specify the port on the container where our application will be exposed:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
          ...
            env:
          ...
            ports:
              - name: http
                containerPort: 8080
                protocol: TCP
            ...
      

      Next, let's modify the liveness and readiness checks that are included in this Deployment manifest by default. These checks ensure that our application Pods are running and ready to serve traffic:

      • Readiness probes assess whether or not a Pod is ready to serve traffic, stopping all requests to the Pod until the checks succeed.
      • Liveness probes check basic application behavior to determine whether or not the application in the container is running and behaving as expected. If a liveness probe fails, Kubernetes will restart the container.

      For more about both, see the relevant discussion in Architecting Applications for Kubernetes.

      In our case, we will build on the httpGet request that Helm has provided by default and test whether or not our application is accepting requests on the /sharks endpoint. The kubelet service will perform the probe by sending a GET request to the Node server running in the application Pod's container and listening on port 8080. If the status code for the response is at least 200 and below 400, the kubelet will conclude that the container is healthy. Otherwise, in the case of a 4xx or 5xx status, the kubelet will either stop traffic to the container, in the case of the readiness probe, or restart the container, in the case of the liveness probe.

      Add the following modification to the stated path for the liveness and readiness probes:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
          ...
            env:
          ...
            ports:
              - name: http
                containerPort: 8080
                protocol: TCP
            livenessProbe:
              httpGet:
                path: /sharks
                port: http
            readinessProbe:
              httpGet:
                path: /sharks
                port: http
      

      Save and close the file when you are finished editing.

      You are now ready to create your application release with Helm. Run the following helm install command, which includes the name of the release and the location of the chart directory:

      • helm install --name nodejs ./nodeapp

      Remember that you can run helm install with the --dry-run and --debug options first, as discussed in Step 3, to check the generated manifests for your release.

      Again, because we are not including the --namespace flag with helm install, our chart objects will be created in the default namespace.

      You will see the following output indicating that your release has been created:

      Output

      NAME:   nodejs
      LAST DEPLOYED: Wed Apr 17 18:10:29 2019
      NAMESPACE: default
      STATUS: DEPLOYED

      RESOURCES:
      ==> v1/ConfigMap
      NAME           DATA  AGE
      nodejs-config  4     1s

      ==> v1/Deployment
      NAME            READY  UP-TO-DATE  AVAILABLE  AGE
      nodejs-nodeapp  0/3    3           0          1s
      ...

      Again, the output will indicate the status of the release, along with information about the created objects and how you can interact with them.

      Check the status of your Pods:
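
      • kubectl get pods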

      Output

      NAME                              READY   STATUS    RESTARTS   AGE
      mongo-mongodb-replicaset-0        1/1     Running   0          57m
      mongo-mongodb-replicaset-1        1/1     Running   0          56m
      mongo-mongodb-replicaset-2        1/1     Running   0          55m
      nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          117s
      nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          117s
      nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          117s

      Once your Pods are up and running, check your Services:
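
      • kubectl get svc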

      Output

      NAME                              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
      kubernetes                        ClusterIP      10.245.0.1     <none>        443/TCP        96m
      mongo-mongodb-replicaset          ClusterIP      None           <none>        27017/TCP      58m
      mongo-mongodb-replicaset-client   ClusterIP      None           <none>        27017/TCP      58m
      nodejs-nodeapp                    LoadBalancer   10.245.33.46   your_lb_ip    80:31518/TCP   3m22s

      The EXTERNAL-IP associated with the nodejs-nodeapp Service is the IP address where you can access the application from outside of the cluster. If you see a <pending> status in the EXTERNAL-IP column, this means that your load balancer is still being created.

      Once you see an IP in that column, navigate to it in your browser: http://your_lb_ip.

      You should see the following landing page:

      Application Landing Page

      Now that your replicated application is working, let's add some test data to ensure that replication is working between members of the replica set.

      Step 6 — Testing MongoDB Replication

      With our application running and accessible through an external IP address, we can add some test data and ensure that it is being replicated between the members of our MongoDB replica set.

      First, make sure you have navigated your browser to the application landing page:

      Application Landing Page

      Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark's general character:

      Shark Info Form

      In the form, add an initial shark of your choosing. To demonstrate, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

      Filled Shark Form

      Click on the Submit button. You will see a page with this shark information displayed back to you:

      Shark Output

      Now head back to the shark information form by clicking on Sharks in the top navigation bar:

      Shark Info Form

      Enter a new shark of your choosing. We'll go with Whale Shark and Large:

      Enter New Shark

      Once you click Submit, you will see that the new shark has been added to the shark collection in your database:

      Complete Shark Collection

      Let's check that the data we've entered has been replicated between the primary and secondary members of our replica set.

      Get a list of your Pods:
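
      • kubectl get pods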

      Output

      NAME                              READY   STATUS    RESTARTS   AGE
      mongo-mongodb-replicaset-0        1/1     Running   0          74m
      mongo-mongodb-replicaset-1        1/1     Running   0          73m
      mongo-mongodb-replicaset-2        1/1     Running   0          72m
      nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          5m4s
      nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          5m4s
      nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          5m4s

      To access the mongo shell on your Pods, you can use the kubectl exec command and the username you used to create your mongo-secret in Step 2. Access the mongo shell on the first Pod in the StatefulSet with the following command:

      • kubectl exec -it mongo-mongodb-replicaset-0 -- mongo -u your_database_username -p --authenticationDatabase admin

      When prompted, enter the password associated with this username:

      Output

      MongoDB shell version v4.1.9
      Enter password:

      You will be dropped into an administrative shell:

      Output

      MongoDB server version: 4.1.9
      Welcome to the MongoDB shell.
      ...
      db:PRIMARY>

      Though the prompt itself includes this information, you can manually check to see which replica set member is the primary with the rs.isMaster() method:
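
      • rs.isMaster()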

      You will see output like the following, indicating the hostname of the primary:

      Output

      db:PRIMARY> rs.isMaster()
      {
              "hosts" : [
                      "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                      "mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                      "mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local:27017"
              ],
              ...
              "primary" : "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
              ...

      Next, switch to your sharkinfo database:
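
      • use sharkinfo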

      Output

      switched to db sharkinfo

      List the collections in the database:
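
      • show collections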

      Output

      sharks

      Output the documents in the collection:
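
      • db.sharks.find()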

      You will see the following output:

      Output

      { "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 }
      { "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }

      Exit the MongoDB Shell:
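
      • exit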

      Now that we have checked the data on our primary, let's check that it's being replicated to a secondary. kubectl exec into mongo-mongodb-replicaset-1 with the following command:

      • kubectl exec -it mongo-mongodb-replicaset-1 -- mongo -u your_database_username -p --authenticationDatabase admin

      Once in the administrative shell, we will need to use the db.setSlaveOk() method to permit read operations from the secondary instance:
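
      • db.setSlaveOk()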

      Switch to the sharkinfo database:
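
      • use sharkinfo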

      Output

      switched to db sharkinfo

      Permit the read operation of the documents in the sharks collection:
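
      • db.setSlaveOk()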

      Output the documents in the collection:
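
      • db.sharks.find()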

      You should now see the same information that you saw when running this method on your primary instance:

      Output

      db:SECONDARY> db.sharks.find()
      { "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 }
      { "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }

      This output confirms that your application data is being replicated between the members of your replica set.

      Conclusion

      You have now deployed a replicated, highly-available shark information application on a Kubernetes cluster using Helm charts. This demo application and the workflow outlined in this tutorial can act as a starting point as you build custom charts for your application and take advantage of Helm's stable repository and other chart repositories.

      As you move toward production, consider implementing the following:

      To learn more about Helm, see An Introduction to Helm, the Package Manager for Kubernetes, How To Install Software on Kubernetes Clusters with the Helm Package Manager, and the Helm documentation.




      How To Set Up an Nginx Ingress on DigitalOcean Kubernetes Using Helm


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Kubernetes Ingresses offer you a flexible way of routing traffic from beyond your cluster to internal Kubernetes Services. Ingress Resources are objects in Kubernetes that define rules for routing HTTP and HTTPS traffic to Services. For these to work, an Ingress Controller must be present; its role is to implement the rules by accepting traffic (most likely via a Load Balancer) and routing it to the appropriate Services. Most Ingress Controllers use only one global Load Balancer for all Ingresses, which is more efficient than creating a Load Balancer per every Service you wish to expose.

      Helm is a package manager for Kubernetes. Using Helm charts with your cluster gives you configurability and lifecycle management for updating, rolling back, and deleting Kubernetes applications.

      In this guide, you’ll set up the Kubernetes-maintained Nginx Ingress Controller using Helm. You’ll then create an Ingress Resource to route traffic from your domains to example Hello World back-end services. Once you’ve set up the Ingress, you’ll install Cert-Manager to your cluster to be able to automatically provision Let’s Encrypt TLS certificates to secure your Ingresses.

      Prerequisites

      • A DigitalOcean Kubernetes cluster with your connection configuration set as the kubectl default. Instructions on how to configure kubectl appear under the Connect to your Cluster step when you create your cluster. To learn how to create a Kubernetes cluster on DigitalOcean, see Kubernetes Quickstart.

      • The Helm package manager installed on your local machine, and Tiller installed on your cluster. Complete steps 1 and 2 of the How To Install Software on Kubernetes Clusters with the Helm Package Manager tutorial.

      • A fully registered domain name with two available A records. This tutorial will use hw1.example.com and hw2.example.com throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.

      Step 1 — Setting Up Hello World Deployments

      In this section, before you deploy the Nginx Ingress, you will deploy a Hello World app called hello-kubernetes to have some Services to which you’ll route the traffic. To confirm that the Nginx Ingress works properly in the next steps, you’ll deploy it twice, each time with a different welcome message that will be shown when you access it from your browser.

      You’ll store the deployment configuration on your local machine. The first deployment configuration will be in a file named hello-kubernetes-first.yaml. Create it using a text editor:

      • nano hello-kubernetes-first.yaml

      Add the following lines:

      hello-kubernetes-first.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: hello-kubernetes-first
      spec:
        type: ClusterIP
        ports:
        - port: 80
          targetPort: 8080
        selector:
          app: hello-kubernetes-first
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello-kubernetes-first
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: hello-kubernetes-first
        template:
          metadata:
            labels:
              app: hello-kubernetes-first
          spec:
            containers:
            - name: hello-kubernetes
              image: paulbouwer/hello-kubernetes:1.5
              ports:
              - containerPort: 8080
              env:
              - name: MESSAGE
                value: Hello from the first deployment!
      

      This configuration defines a Deployment and a Service. The Deployment consists of three replicas of the paulbouwer/hello-kubernetes:1.5 image, and an environment variable named MESSAGE—you will see its value when you access the app. The Service here is defined to expose the Deployment in-cluster at port 80.

      Save and close the file.

      Then, create this first variant of the hello-kubernetes app in Kubernetes by running the following command:

      • kubectl create -f hello-kubernetes-first.yaml

      You’ll see the following output:

      Output

      service/hello-kubernetes-first created
      deployment.apps/hello-kubernetes-first created

      To verify the Service’s creation, run the following command:

      • kubectl get service hello-kubernetes-first

      The output will look like this:

      Output

      NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
      hello-kubernetes-first   ClusterIP   10.245.85.236   <none>        80:31623/TCP   35s

      You’ll see that the newly created Service has a ClusterIP assigned, which means that it is working properly. All traffic sent to it will be forwarded to the selected Deployment on port 8080. Now that you have deployed the first variant of the hello-kubernetes app, you’ll work on the second one.

      Open a file called hello-kubernetes-second.yaml for editing:

      • nano hello-kubernetes-second.yaml

      Add the following lines:

      hello-kubernetes-second.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: hello-kubernetes-second
      spec:
        type: ClusterIP
        ports:
        - port: 80
          targetPort: 8080
        selector:
          app: hello-kubernetes-second
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello-kubernetes-second
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: hello-kubernetes-second
        template:
          metadata:
            labels:
              app: hello-kubernetes-second
          spec:
            containers:
            - name: hello-kubernetes
              image: paulbouwer/hello-kubernetes:1.5
              ports:
              - containerPort: 8080
              env:
              - name: MESSAGE
                value: Hello from the second deployment!
      

      Save and close the file.

      This variant has the same structure as the previous configuration; the only differences are in the Deployment and Service names, to avoid collisions, and the message.

      Now create it in Kubernetes with the following command:

      • kubectl create -f hello-kubernetes-second.yaml

      The output will be:

      Output

      service/hello-kubernetes-second created
      deployment.apps/hello-kubernetes-second created

      Verify that the second Service is up and running by listing all of your services:
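
      • kubectl get service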

      The output will be similar to this:

      Output

      NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
      hello-kubernetes-first    ClusterIP   10.245.85.236   <none>        80:31623/TCP   54s
      hello-kubernetes-second   ClusterIP   10.245.99.130   <none>        80:30303/TCP   12s
      kubernetes                ClusterIP   10.245.0.1      <none>        443/TCP        5m

      Both hello-kubernetes-first and hello-kubernetes-second are listed, which means that Kubernetes has created them successfully.

      You've created two deployments of the hello-kubernetes app with accompanying Services. Each one has a different message set in the deployment specification, which allows you to differentiate them during testing. In the next step, you'll install the Nginx Ingress Controller itself.

      Step 2 — Installing the Kubernetes Nginx Ingress Controller

      Now you'll install the Kubernetes-maintained Nginx Ingress Controller using Helm. Note that there are several Nginx Ingress Controllers available; this guide uses the one maintained by the Kubernetes community.

      The Nginx Ingress Controller consists of a Pod and a Service. The Pod runs the Controller, which constantly polls the /ingresses endpoint on the API server of your cluster for updates to available Ingress Resources. The Service is of type LoadBalancer, and because you are deploying it to a DigitalOcean Kubernetes cluster, the cluster will automatically create a DigitalOcean Load Balancer, through which all external traffic will flow to the Controller. The Controller will then route the traffic to appropriate Services, as defined in Ingress Resources.

      Only the LoadBalancer Service knows the IP address of the automatically created Load Balancer. Some apps (such as ExternalDNS) need to know its IP address, but can only read the configuration of an Ingress. The Controller can be configured to publish the IP address on each Ingress by setting the controller.publishService.enabled parameter to true during helm install. It is recommended to enable this setting to support applications that may depend on the IP address of the Load Balancer.

      To install the Nginx Ingress Controller to your cluster, run the following command:

      • helm install stable/nginx-ingress --name nginx-ingress --set controller.publishService.enabled=true

      This command installs the Nginx Ingress Controller from the stable charts repository, names the Helm release nginx-ingress, and sets the publishService parameter to true.

      The output will look like:

      Output

      NAME:   nginx-ingress
      LAST DEPLOYED: ...
      NAMESPACE: default
      STATUS: DEPLOYED

      RESOURCES:
      ==> v1/ConfigMap
      NAME                      DATA  AGE
      nginx-ingress-controller  1     0s

      ==> v1/Pod(related)
      NAME                                            READY  STATUS             RESTARTS  AGE
      nginx-ingress-controller-7658988787-npv28       0/1    ContainerCreating  0         0s
      nginx-ingress-default-backend-7f5d59d759-26xq2  0/1    ContainerCreating  0         0s

      ==> v1/Service
      NAME                           TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)                     AGE
      nginx-ingress-controller       LoadBalancer  10.245.9.107   <pending>    80:31305/TCP,443:30519/TCP  0s
      nginx-ingress-default-backend  ClusterIP     10.245.221.49  <none>       80/TCP                      0s

      ==> v1/ServiceAccount
      NAME           SECRETS  AGE
      nginx-ingress  1        0s

      ==> v1beta1/ClusterRole
      NAME           AGE
      nginx-ingress  0s

      ==> v1beta1/ClusterRoleBinding
      NAME           AGE
      nginx-ingress  0s

      ==> v1beta1/Deployment
      NAME                           READY  UP-TO-DATE  AVAILABLE  AGE
      nginx-ingress-controller       0/1    1           0          0s
      nginx-ingress-default-backend  0/1    1           0          0s

      ==> v1beta1/Role
      NAME           AGE
      nginx-ingress  0s

      ==> v1beta1/RoleBinding
      NAME           AGE
      nginx-ingress  0s

      NOTES:
      ...

      Helm has logged what resources in Kubernetes it created as a part of the chart installation.

      You can watch the Load Balancer become available by running:

      • kubectl get services -o wide -w nginx-ingress-controller

      You've installed the Nginx Ingress maintained by the Kubernetes community. It will route HTTP and HTTPS traffic from the Load Balancer to appropriate back-end Services, configured in Ingress Resources. In the next step, you'll expose the hello-kubernetes app deployments using an Ingress Resource.

      Step 3 — Exposing the App Using an Ingress

      Now you're going to create an Ingress Resource and use it to expose the hello-kubernetes app deployments at your desired domains. You'll then test it by accessing it from your browser.

      You'll store the Ingress in a file named hello-kubernetes-ingress.yaml. Create it using your editor:

      • nano hello-kubernetes-ingress.yaml

      Add the following lines to your file:

      hello-kubernetes-ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: hello-kubernetes-ingress
        annotations:
          kubernetes.io/ingress.class: nginx
      spec:
        rules:
        - host: hw1.example.com
          http:
            paths:
            - backend:
                serviceName: hello-kubernetes-first
                servicePort: 80
        - host: hw2.example.com
          http:
            paths:
            - backend:
                serviceName: hello-kubernetes-second
                servicePort: 80
      

      In the code above, you define an Ingress Resource with the name hello-kubernetes-ingress. Then, you specify two host rules, so that hw1.example.com is routed to the hello-kubernetes-first Service, and hw2.example.com is routed to the Service from the second deployment (hello-kubernetes-second).

      Remember to replace the highlighted domains with your own, then save and close the file.

      Create it in Kubernetes by running the following command:

      • kubectl create -f hello-kubernetes-ingress.yaml

      Next, you'll need to ensure that your two domains are pointed to the Load Balancer via A records. This is done through your DNS provider. To configure your DNS records on DigitalOcean, see How to Manage DNS Records.

      You can now navigate to hw1.example.com in your browser. You will see the following:

      Hello Kubernetes - First Deployment

      The second variant (hw2.example.com) will show a different message:

      Hello Kubernetes - Second Deployment

      With this, you have verified that the Ingress Controller correctly routes requests; in this case, from your two domains to two different Services.

      You've created and configured an Ingress Resource to serve the hello-kubernetes app deployments at your domains. In the next step, you'll set up Cert-Manager, so you'll be able to secure your Ingress Resources with free TLS certificates from Let's Encrypt.

      Step 4 — Securing the Ingress Using Cert-Manager

      To secure your Ingress Resources, you'll install Cert-Manager, create a ClusterIssuer for production, and modify the configuration of your Ingress to take advantage of the TLS certificates. ClusterIssuers are Cert-Manager Resources in Kubernetes that provision TLS certificates. Once installed and configured, your app will be running behind HTTPS.

      Before installing Cert-Manager to your cluster via Helm, you'll manually apply the required CRDs (Custom Resource Definitions) from the jetstack/cert-manager repository by running the following command:

      • kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.8/deploy/manifests/00-crds.yaml

      You will see the following output:

      Output

      customresourcedefinition.apiextensions.k8s.io/certificates.certmanager.k8s.io created
      customresourcedefinition.apiextensions.k8s.io/challenges.certmanager.k8s.io created
      customresourcedefinition.apiextensions.k8s.io/clusterissuers.certmanager.k8s.io created
      customresourcedefinition.apiextensions.k8s.io/issuers.certmanager.k8s.io created
      customresourcedefinition.apiextensions.k8s.io/orders.certmanager.k8s.io created

      This shows that Kubernetes has applied the custom resources you require for cert-manager.

      Note: If you've followed this tutorial and the prerequisites, you haven't created a Kubernetes namespace called cert-manager, so you won't have to run the command in this note block. However, if this namespace does exist on your cluster, you'll need to inform Cert-Manager not to validate it with the following command:

      • kubectl label namespace cert-manager certmanager.k8s.io/disable-validation="true"

      The Webhook component of Cert-Manager requires TLS certificates to securely communicate with the Kubernetes API server. In order for Cert-Manager to generate certificates for it for the first time, resource validation must be disabled on the namespace it is deployed in. Otherwise, it would be stuck in an infinite loop, unable to contact the API and unable to generate the TLS certificates.

      The output will be:

      Output

      namespace/cert-manager labeled

      Next, you'll need to add the Jetstack Helm repository to Helm, which hosts the Cert-Manager chart. To do this, run the following command:

      • helm repo add jetstack https://charts.jetstack.io

      Helm will display the following output:

      Output

      "jetstack" has been added to your repositories

      Finally, install Cert-Manager into the cert-manager namespace:

      • helm install --name cert-manager --namespace cert-manager jetstack/cert-manager

      You will see the following output:

      Output

      NAME:   cert-manager
      LAST DEPLOYED: ...
      NAMESPACE: cert-manager
      STATUS: DEPLOYED

      RESOURCES:
      ==> v1/ClusterRole
      NAME                                    AGE
      cert-manager-edit                       3s
      cert-manager-view                       3s
      cert-manager-webhook:webhook-requester  3s

      ==> v1/Pod(related)
      NAME                                     READY  STATUS             RESTARTS  AGE
      cert-manager-5d669ffbd8-rb6tr            0/1    ContainerCreating  0         2s
      cert-manager-cainjector-79b7fc64f-gqbtz  0/1    ContainerCreating  0         2s
      cert-manager-webhook-6484955794-v56lx    0/1    ContainerCreating  0         2s

      ...

      NOTES:
      cert-manager has been deployed successfully!

      In order to begin issuing certificates, you will need to set up a ClusterIssuer
      or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).

      More information on the different types of issuers and how to configure them
      can be found in our documentation:

      https://docs.cert-manager.io/en/latest/reference/issuers.html

      For information on how to configure cert-manager to automatically provision
      Certificates for Ingress resources, take a look at the `ingress-shim`
      documentation:

      https://docs.cert-manager.io/en/latest/reference/ingress-shim.html

      The output shows that the installation was successful. As listed in the NOTES in the output, you'll need to set up an Issuer to issue TLS certificates.

      You'll now create one that issues Let's Encrypt certificates, and you'll store its configuration in a file named production_issuer.yaml. Create it and open it for editing:

      • nano production_issuer.yaml

      Add the following lines:

      production_issuer.yaml

      apiVersion: certmanager.k8s.io/v1alpha1
      kind: ClusterIssuer
      metadata:
        name: letsencrypt-prod
      spec:
        acme:
          # The ACME server URL
          server: https://acme-v02.api.letsencrypt.org/directory
          # Email address used for ACME registration
          email: your_email_address
          # Name of a secret used to store the ACME account private key
          privateKeySecretRef:
            name: letsencrypt-prod
          # Enable the HTTP-01 challenge provider
          http01: {}
      

      This configuration defines a ClusterIssuer that contacts Let's Encrypt in order to issue certificates. You'll need to replace your_email_address with your email address in order to receive possible urgent notices regarding the security and expiration of your certificates.

      Save and close the file.

      Roll it out with kubectl:

      • kubectl create -f production_issuer.yaml

      You will see the following output:

      Output

      clusterissuer.certmanager.k8s.io/letsencrypt-prod created
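      Optionally, you can inspect the new ClusterIssuer to confirm that Cert-Manager has registered an ACME account for it; once registration succeeds, the Status section of the output should include the account's URI:

      • kubectl describe clusterissuer letsencrypt-prod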

      With Cert-Manager installed, you're ready to introduce the certificates to the Ingress Resource defined in the previous step. Open hello-kubernetes-ingress.yaml for editing:

      • nano hello-kubernetes-ingress.yaml

      Add the highlighted lines:

      hello-kubernetes-ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: hello-kubernetes-ingress
        annotations:
          kubernetes.io/ingress.class: nginx
          certmanager.k8s.io/cluster-issuer: letsencrypt-prod
      spec:
        tls:
        - hosts:
          - hw1.example.com
          - hw2.example.com
          secretName: letsencrypt-prod
        rules:
        - host: hw1.example.com
          http:
            paths:
            - backend:
                serviceName: hello-kubernetes-first
                servicePort: 80
        - host: hw2.example.com
          http:
            paths:
            - backend:
                serviceName: hello-kubernetes-second
                servicePort: 80
      

      The tls block under spec defines the Secret in which the certificates for your sites (listed under hosts) will be stored. These certificates are issued by the letsencrypt-prod ClusterIssuer. The Secret name must be different for every Ingress you create.

      Remember to replace hw1.example.com and hw2.example.com with your own domains. When you've finished editing, save and close the file.

      Re-apply this configuration to your cluster by running the following command:

      • kubectl apply -f hello-kubernetes-ingress.yaml

      You will see the following output:

      Output

      ingress.extensions/hello-kubernetes-ingress configured

      You'll need to wait a few minutes for the Let's Encrypt servers to issue a certificate for your domains. In the meantime, you can track its progress by inspecting the output of the following command:

      • kubectl describe certificate hello-kubernetes

      The end of the output will look similar to this:

      Output

      Events:
        Type    Reason              Age  From          Message
        ----    ------              ---- ----          -------
        Normal  Generated           56s  cert-manager  Generated new private key
        Normal  GenerateSelfSigned  56s  cert-manager  Generated temporary self signed certificate
        Normal  OrderCreated        56s  cert-manager  Created Order resource "hello-kubernetes-1197334873"
        Normal  OrderComplete       31s  cert-manager  Order "hello-kubernetes-1197334873" completed successfully
        Normal  CertIssued          31s  cert-manager  Certificate issued successfully

      When your last line of output reads Certificate issued successfully, you can exit by pressing CTRL + C. Navigate to one of your domains in your browser to test. You'll see the padlock to the left of the address bar in your browser, signifying that your connection is secure.
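      If you prefer to verify from the command line, a quick request with curl (shown here against hw1.example.com; substitute your own domain) should complete the TLS handshake and return response headers without any certificate warnings:

      • curl -I https://hw1.example.com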

      In this step, you installed Cert-Manager using Helm and created a Let's Encrypt ClusterIssuer. You then updated your Ingress Resource to take advantage of the Issuer for generating TLS certificates. Finally, you confirmed that HTTPS works correctly by navigating to one of your domains in your browser.

      Conclusion

      You have now successfully set up the Nginx Ingress Controller and Cert-Manager on your DigitalOcean Kubernetes cluster using Helm. You are now able to expose your apps to the Internet, at your domains, secured using Let's Encrypt TLS certificates.

      For further information about the Helm package manager, read this introduction article.




      How to Install Apps on Kubernetes with Helm



      What is Helm?

      Helm is a tool that assists with installing and managing applications on Kubernetes clusters. It is often referred to as “the package manager for Kubernetes,” and it provides functions that are similar to a package manager for an operating system:

      • Helm prescribes a common format and directory structure for packaging your Kubernetes resources, known as a Helm chart.

      • Helm provides a public repository of charts for popular software. You can also retrieve charts from third-party repositories, author and contribute your own charts to someone else’s repository, or run your own chart repository.

      • The Helm client software offers commands for: listing and searching for charts by keyword, installing applications to your cluster from charts, upgrading those applications, removing applications, and other management functions.
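      As a rough illustration of that workflow, the commands below sketch a typical session. The chart name stable/ghost and the release name my-release are placeholders here; the rest of this guide walks through these commands in detail:

        helm search ghost                             # search for a chart by keyword
        helm inspect stable/ghost                     # read the chart's documentation
        helm install --name my-release stable/ghost   # install the chart as a new release
        helm upgrade my-release stable/ghost          # upgrade the release
        helm delete my-release                        # remove the release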

      Charts

      The components of a Kubernetes application (deployments, services, ingresses, and other objects) are listed in manifest files (in the YAML file format). Kubernetes does not tell you how you should organize those files, though the Kubernetes documentation does offer a general set of best practices.

      Helm charts are the software packaging format for Helm. A chart specifies a file and directory structure that you follow when packaging your manifests. The structure looks as follows:

      chart-name/
        Chart.yaml
        LICENSE
        README.md
        requirements.yaml
        values.yaml
        charts/
        templates/
        templates/NOTES.txt
      
      File or Directory Description
      Chart.yaml General information about the chart, including the chart name, a version number, and a description.
      LICENSE A plain-text file with licensing information for the chart and for the applications installed by the chart. Optional.
      README.md A Markdown file with instructions that a user of a chart may want to know when installing and using the chart, including a description of the app that the chart installs and the template values that can be set by the user. Optional.
      requirements.yaml A listing of the charts that this chart depends on. This list specifies the chart name and version number for each dependency, as well as the repository URL that the chart can be retrieved from. Optional.
      values.yaml Default values for the variables in your manifests’ templates.
      charts/ A directory which stores chart dependencies that you manually copy into your project, instead of linking to them from the requirements.yaml file.
      templates/ Your Kubernetes manifests are stored in the templates/ directory. Helm will interpret your manifests using the Go templating language before applying them to your cluster. You can use the template language to insert variables into your manifests, and users of your chart will be able to enter their own values for those variables.
      templates/NOTES.txt A plain-text file which will print to a user’s terminal when they install the chart. This text can be used to display post-installation instructions or other information that a user may want to know. Optional.
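      To make this layout more concrete, here is a minimal, hypothetical chart that follows the structure above. The chart name example-chart, the replicaCount value, and the Deployment excerpt are illustrative only:

        example-chart/Chart.yaml

        apiVersion: v1
        name: example-chart
        version: 0.1.0
        description: A minimal example chart

        example-chart/values.yaml

        replicaCount: 2

        example-chart/templates/deployment.yaml (excerpt)

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: {{ .Release.Name }}-web
        spec:
          replicas: {{ .Values.replicaCount }}

      When a user installs this chart, Helm renders the template with the default replicaCount of 2, unless the user overrides it (for example, with --set replicaCount=3).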

      Releases

      When you tell Helm to install a chart, you can specify variable values to be inserted into the chart’s manifest templates. Helm will then compile those templates into manifests that can be applied to your cluster. When it does this, it creates a new release.

      You can install a chart to the same cluster more than once. Each time you tell Helm to install a chart, it creates another release for that chart. A release can be upgraded when a new version of a chart is available, or even when you just want to supply new variable values to the chart. Helm tracks each upgrade to your release, and it allows you to roll back an upgrade. A release can be easily deleted from your cluster, and you can even roll back release deletions.
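      For example, installing the same chart twice produces two independent releases, each with its own name and revision history. The release names below are invented for illustration; Helm generates a random name unless you supply one with --name:

        helm install stable/ghost          # creates a first release, e.g. "mottled-badger"
        helm install stable/ghost          # creates a second, separate release
        helm ls                            # lists both releases and their revisions
        helm rollback mottled-badger 1     # returns a release to an earlier revision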

      Helm Client and Helm Tiller

      Helm operates with two components:

      • The Helm client software, which issues commands to your cluster. You run the client software on your computer, in your CI/CD environment, or anywhere else you’d like.

      • A server component runs on your cluster and receives commands from the Helm client software. This component is called Tiller. Tiller is responsible for directly interacting with the Kubernetes API (which the client software does not do). Tiller maintains the state for your Helm releases.

      Before You Begin

      1. Install the Kubernetes CLI (kubectl) on your computer, if it is not already.

      2. You should have a Kubernetes cluster running prior to starting this guide. One quick way to get a cluster up is with Linode’s k8s-alpha CLI command. This guide’s examples only require a cluster with one worker node. We recommend that you create cluster nodes that are at the Linode 4GB tier or higher.

        This guide also assumes that your cluster has role-based access control (RBAC) enabled. This feature became available in Kubernetes 1.6. It is enabled on clusters created via the k8s-alpha Linode CLI.

        Note

        This guide’s example instructions will also result in the creation of a Block Storage Volume and a NodeBalancer, which are billable resources. If you do not want to keep using the example application after you finish this guide, make sure to delete these resources afterward.
      3. You should also make sure that your Kubernetes CLI is using the right cluster context. Run the get-contexts subcommand to check:

        kubectl config get-contexts
        
      4. You can set kubectl to use a certain cluster context with the use-context subcommand and the cluster name that was previously output from the get-contexts subcommand:

        kubectl config use-context your-cluster-name
        
      5. It is beneficial to have a registered domain name for this guide’s example app, but it is not required.

      Install Helm

      Install the Helm Client

      Install the Helm client software on your computer:

      • Linux. Run the client installer script that Helm provides:

        curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
        chmod 700 get_helm.sh
        ./get_helm.sh
        
      • macOS. Use Homebrew to install:

        brew install kubernetes-helm
        
      • Windows. Use Chocolatey to install:

        choco install kubernetes-helm
        

      Install Tiller on your Cluster

      Tiller’s default installation instructions attempt to install it without adequate permissions, which fails on a cluster with RBAC enabled. The instructions below grant Tiller the appropriate permissions instead:

      Note

      The following instructions bind Tiller to the cluster-admin role, which grants it privileged access to the Kubernetes API for your cluster. This is a potential security concern. Other access levels for Tiller are possible, like restricting Tiller and the charts it installs to a single namespace. The Bitnami Engineering blog has an article which further explores security in Helm.
      1. Create a file on your computer named rbac-config.yaml with the following snippet:

        rbac-config.yaml
        apiVersion: v1
        kind: ServiceAccount
        metadata:
          name: tiller
          namespace: kube-system
        ---
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRoleBinding
        metadata:
          name: tiller
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: cluster-admin
        subjects:
          - kind: ServiceAccount
            name: tiller
            namespace: kube-system

        This configuration creates a Kubernetes Service Account for Tiller, and then binds it to the cluster-admin role.

      2. Apply this configuration to your cluster:

        kubectl create -f rbac-config.yaml
        
          
        serviceaccount "tiller" created
        clusterrolebinding "tiller" created
        
        
      3. Initialize Tiller on the cluster:

        helm init --service-account tiller --history-max 200
        

        Note

        The --history-max option prevents Helm’s historical record of the objects it tracks from growing too large.

      4. You should see output like:

        $HELM_HOME has been configured at /Users/your-user/.helm.
        
        Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
        
        Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
        To prevent this, run `helm init` with the --tiller-tls-verify flag.
        For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
        Happy Helming!
        
      5. The pod for Tiller will be running in the kube-system namespace:

        kubectl get pods --namespace kube-system | grep tiller
        tiller-deploy-b6647fc9d-vcdms                1/1       Running   0          1m
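
        If you want to confirm that the RBAC objects from step 2 are in place and that the Helm client can reach Tiller, the following checks may be useful:

        kubectl get serviceaccount tiller --namespace kube-system
        kubectl get clusterrolebinding tiller
        helm version

        The last command prints both the client and the server (Tiller) versions; if it cannot connect to Tiller, wait until the tiller-deploy pod reports a Running status and try again.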
        

      Use Helm Charts to Install Apps

      This guide will use the Ghost publishing platform as the example application.

      Search for a Chart

      1. Run the repo update subcommand to make sure you have a full list of available charts:

        helm repo update
        

        Note

        Run helm repo list to see which repositories are registered with your client.

      2. Run the search command with a keyword to search for a chart by name:

        helm search ghost
        

        The output will look like:

        NAME            CHART VERSION   APP VERSION DESCRIPTION
        stable/ghost    6.7.7           2.19.4      A simple, powerful publishing platform that allows you to...
        
      3. The full name for the chart is stable/ghost. Inspect the chart for more information:

        helm inspect stable/ghost
        

        This command’s output will resemble the README text available for the Ghost chart in the official Helm chart repository on GitHub.

      Install the Chart

      The helm install command is used to install a chart by name. It can be run without any other options, but some charts expect you to pass in configuration values for the chart:

      1. Create a file named ghost-config.yaml on your computer from this snippet:

        ghost-config.yaml
        ghostHost: ghost.example.com
        ghostEmail: email@example.com

        Replace the value for ghostHost with a domain or subdomain that you own and would like to assign to the app, and the value for ghostEmail with your email.

        Note

        If you don’t own a domain name and won’t continue to use the Ghost website after finishing this guide, you can make up a domain for this configuration file.

      2. Run the install command and pass in the configuration file:

        helm install -f ghost-config.yaml stable/ghost
        
      3. The install command returns immediately and does not wait until the app’s cluster objects are ready. You will see output like the following snippet, which shows that the app’s pods are still in the “Pending” state. The text displayed is generated from the contents of the chart’s templates/NOTES.txt file:

        Full output of helm install

        NAME:   oldfashioned-cricket
        LAST DEPLOYED: Tue Apr 16 09:15:41 2019
        NAMESPACE: default
        STATUS: DEPLOYED
        
        RESOURCES:
        ==> v1/ConfigMap
        NAME                      DATA  AGE
        oldfashioned-cricket-mariadb        1     1s
        oldfashioned-cricket-mariadb-tests  1     1s
        
        ==> v1/PersistentVolumeClaim
        NAME              STATUS   VOLUME                CAPACITY  ACCESS MODES  STORAGECLASS  AGE
        oldfashioned-cricket-ghost  Pending  linode-block-storage  1s
        
        ==> v1/Pod(related)
        NAME                               READY  STATUS   RESTARTS  AGE
        oldfashioned-cricket-ghost-64ff89b9d6-9ngjs  0/1    Pending  0         1s
        oldfashioned-cricket-mariadb-0               0/1    Pending  0         1s
        
        ==> v1/Secret
        NAME                TYPE    DATA  AGE
        oldfashioned-cricket-ghost    Opaque  1     1s
        oldfashioned-cricket-mariadb  Opaque  2     1s
        
        ==> v1/Service
        NAME                TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)       AGE
        oldfashioned-cricket-ghost    LoadBalancer  10.110.3.191    <pending>    80:32658/TCP  1s
        oldfashioned-cricket-mariadb  ClusterIP     10.107.128.144  <none>       3306/TCP      1s
        
        ==> v1beta1/Deployment
        NAME              READY  UP-TO-DATE  AVAILABLE  AGE
        oldfashioned-cricket-ghost  0/1    1           0          1s
        
        ==> v1beta1/StatefulSet
        NAME                READY  AGE
        oldfashioned-cricket-mariadb  0/1    1s
        
        
        NOTES:
        1. Get the Ghost URL by running:
        
          echo Blog URL  : http://ghost.example.com/
          echo Admin URL : http://ghost.example.com/ghost
        
        2. Get your Ghost login credentials by running:
        
          echo Email:    email@example.com
          echo Password: $(kubectl get secret --namespace default oldfashioned-cricket-ghost -o jsonpath="{.data.ghost-password}" | base64 --decode)
        
      4. Helm has created a new release and assigned it a random name. Run the ls command to get a list of all of your releases:

        helm ls
        

        The output will look as follows:

        NAME        REVISION    UPDATED                     STATUS      CHART       APP VERSION NAMESPACE
        oldfashioned-cricket    1           Tue Apr 16 09:15:41 2019    DEPLOYED    ghost-6.7.7 2.19.4      default
        
      5. You can check on the status of the release by running the status command:

        helm status oldfashioned-cricket
        

        This command will show the same output that was displayed after the helm install command, but the current state of the cluster objects will be updated.

      Access your App

      1. Run the helm status command again and observe the “Service” section:

        ==> v1/Service
        NAME                TYPE          CLUSTER-IP      EXTERNAL-IP     PORT(S)       AGE
        oldfashioned-cricket-ghost    LoadBalancer  10.110.3.191    104.237.148.15  80:32658/TCP  11m
        oldfashioned-cricket-mariadb  ClusterIP     10.107.128.144  <none>          3306/TCP      11m
        
      2. The LoadBalancer that was created for the app will be displayed. Because this example uses a cluster created with Linode’s k8s-alpha CLI (which pre-installs the Linode CCM), the LoadBalancer will be implemented as a Linode NodeBalancer.

      3. Copy the value under the EXTERNAL-IP column for the LoadBalancer and then paste it into your web browser. You should see the Ghost website:

        Ghost home page

      4. Revisit the output from the status command. Instructions for logging into your Ghost website will be displayed:

        1. Get the Ghost URL by running:
        
        echo Blog URL  : http://ghost.example.com/
        echo Admin URL : http://ghost.example.com/ghost
        
        2. Get your Ghost login credentials by running:
        
        echo Email:    email@example.com
        echo Password: $(kubectl get secret --namespace default oldfashioned-cricket-ghost -o jsonpath="{.data.ghost-password}" | base64 --decode)
        
      5. Retrieve the auto-generated password for your app:

        echo Password: $(kubectl get secret --namespace default oldfashioned-cricket-ghost -o jsonpath="{.data.ghost-password}" | base64 --decode)
        
      6. You haven’t set up DNS for your site yet, but you can still access the admin interface by visiting the /ghost path on your LoadBalancer’s IP address (e.g. http://104.237.148.15/ghost). Visit this page in your browser and then enter your email and password. You should be granted access to the administrative interface.

      7. Set up DNS for your app. You can do this by creating an A record for your domain which is assigned to the external IP for your app’s LoadBalancer. Review Linode’s DNS Manager guide for instructions.

      Upgrade your App

      The upgrade command can be used to upgrade an existing release to a new version of a chart, or just to supply new chart values:

      1. In your computer’s ghost-config.yaml file, add a line for the title of the website:

        ghost-config.yaml
        ghostHost: ghost.example.com
        ghostEmail: email@example.com
        ghostBlogTitle: Example Site Name
      2. Run the upgrade command, specifying the configuration file, release name, and chart name:

        helm upgrade -f ghost-config.yaml oldfashioned-cricket stable/ghost
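
        If you'd like to confirm that the release is now using the new values, helm get values prints the user-supplied values for a release; the output should include the ghostBlogTitle line you just added:

        helm get values oldfashioned-cricket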
        

      Roll Back a Release

      Upgrades (and even deletions) can be rolled back if something goes wrong:

      1. Run the helm ls command and observe the number under the “REVISION” column for your release:

        NAME        REVISION    UPDATED                     STATUS      CHART       APP VERSION NAMESPACE
        oldfashioned-cricket    2           Tue Apr 16 10:02:58 2019    DEPLOYED    ghost-6.7.7 2.19.4      default
        
      2. Every time you perform an upgrade, the revision count is incremented by 1 (and the counter starts at 1 when you first install a chart). So, your current revision number is 2. To roll back the upgrade you just performed, enter the previous revision number:

        helm rollback oldfashioned-cricket 1
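
        If you're not sure which revision to roll back to, helm history lists every revision for a release along with its status and the chart version it used:

        helm history oldfashioned-cricket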
        

      Delete a Release

      1. Use the delete command with the name of a release to delete it:

        helm delete oldfashioned-cricket
        

        You should also confirm in the Linode Cloud Manager that the Volumes and NodeBalancer created for the app are removed as well.

      2. Helm will still save information about the deleted release. You can list deleted releases:

        helm list --deleted
        

        You can use the revision number of a deleted release to roll back the deletion.

      3. To fully remove a release, use the --purge option with the delete command:

        helm delete oldfashioned-cricket --purge
        




