

      How To Install and Use Istio With Kubernetes


      Introduction

      A service mesh is an infrastructure layer that allows you to manage communication between your application’s microservices. As more developers work with microservices, service meshes have evolved to make that work easier and more effective by consolidating common management and administrative tasks in a distributed setup.

      Using a service mesh like Istio can simplify tasks like service discovery, routing and traffic configuration, encryption and authentication/authorization, and monitoring and telemetry. Istio, in particular, is designed to work without major changes to pre-existing service code. When working with Kubernetes, for example, it is possible to add service mesh capabilities to applications running in your cluster by building out Istio-specific objects that work with existing application resources.

      In this tutorial, you will install Istio using the Helm package manager for Kubernetes. You will then use Istio to expose a demo Node.js application to external traffic by creating Gateway and Virtual Service resources. Finally, you will access the Grafana telemetry addon to visualize your application traffic data.

      Prerequisites

      To complete this tutorial, you will need:

      Note: We highly recommend a cluster with at least 8GB of available memory and 4vCPUs for this setup. This tutorial will use three of DigitalOcean’s standard 4GB/2vCPU Droplets as nodes.

      Step 1 — Packaging the Application

      To use our demo application with Kubernetes, we will need to clone the code and package it so that the kubelet agent can pull the image.

      Our first step will be to clone the nodejs-image-demo repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Build a Node.js Application with Docker, which describes how to build an image for a Node.js application and how to create a container using this image. You can find more information about the application itself in the series From Containers to Kubernetes with Node.js.

      To get started, clone the nodejs-image-demo repository into a directory called istio_project:

      • git clone https://github.com/do-community/nodejs-image-demo.git istio_project

      Navigate to the istio_project directory:

      • cd istio_project

      This directory contains files and folders for a shark information application that offers users basic information about sharks. In addition to the application files, the directory contains a Dockerfile with instructions for building a Docker image with the application code. For more information about the instructions in the Dockerfile, see Step 3 of How To Build a Node.js Application with Docker.

      To test that the application code and Dockerfile work as expected, you can build and tag the image using the docker build command, and then use the image to run a demo container. Using the -t flag with docker build will allow you to tag the image with your Docker Hub username so that you can push it to Docker Hub once you've tested it.

      Build the image with the following command:

      • docker build -t your_dockerhub_username/node-demo .

      The . in the command specifies that the build context is the current directory. We've named the image node-demo, but you are free to name it something else.

      Once the build process is complete, you can list your images with docker images:

      • docker images

      You will see the following output confirming the image build:

      Output

      REPOSITORY                          TAG         IMAGE ID       CREATED         SIZE
      your_dockerhub_username/node-demo   latest      37f1c2939dbf   5 seconds ago   77.6MB
      node                                10-alpine   9dfa73010b19   2 days ago      75.3MB

      Next, you'll use docker run to create a container based on this image. We will include three flags with this command:

      • -p: This publishes the port on the container and maps it to a port on our host. We will use port 80 on the host, but you should feel free to modify this as necessary if you have another process running on that port. For more information about how this works, see this discussion in the Docker docs on port binding.
      • -d: This runs the container in the background.
      • --name: This allows us to give the container a customized name.

      Run the following command to create and start the container:

      • docker run --name node-demo -p 80:8080 -d your_dockerhub_username/node-demo

      Inspect your running containers with docker ps:

      • docker ps

      You will see output confirming that your application container is running:

      Output

      CONTAINER ID   IMAGE                               COMMAND                  CREATED         STATUS         PORTS                  NAMES
      49a67bafc325   your_dockerhub_username/node-demo   "docker-entrypoint.s…"   8 seconds ago   Up 6 seconds   0.0.0.0:80->8080/tcp   node-demo

      You can now visit your server IP to test your setup: http://your_server_ip. Your application will display the following landing page:

      Application Landing Page

      Now that you have tested the application, you can stop the running container. Use docker ps again to get your CONTAINER ID:

      • docker ps

      Output

      CONTAINER ID   IMAGE                               COMMAND                  CREATED              STATUS              PORTS                  NAMES
      49a67bafc325   your_dockerhub_username/node-demo   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:80->8080/tcp   node-demo

      Stop the container with docker stop. Be sure to replace the CONTAINER ID listed here with your own application CONTAINER ID:

      • docker stop 49a67bafc325

      Now that you have tested the image, you can push it to Docker Hub. First, log in to the Docker Hub account you created in the prerequisites:

      • docker login -u your_dockerhub_username

      When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your non-root user's home directory with your Docker Hub credentials.

      Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_username with your own Docker Hub username:

      • docker push your_dockerhub_username/node-demo

      You now have an application image that you can pull to run your application with Kubernetes and Istio. Next, you can move on to installing Istio with Helm.

      Step 2 — Installing Istio with Helm

      Although Istio offers different installation methods, the documentation recommends using Helm to maximize flexibility in managing configuration options. We will install Istio with Helm and ensure that the Grafana addon is enabled so that we can visualize traffic data for our application.

      First, add the Istio release repository:

      • helm repo add istio.io https://storage.googleapis.com/istio-release/releases/1.1.7/charts/

      This will enable you to use the Helm charts in the repository to install Istio.

      Check that you have the repo:

      • helm repo list

      You should see the istio.io repo listed:

      Output

      NAME       URL
      stable     https://kubernetes-charts.storage.googleapis.com
      local      http://127.0.0.1:8879/charts
      istio.io   https://storage.googleapis.com/istio-release/releases/1.1.7/charts/

      Next, install Istio's Custom Resource Definitions (CRDs) with the istio-init chart using the helm install command:

      • helm install --name istio-init --namespace istio-system istio.io/istio-init

      Output

      NAME:   istio-init
      LAST DEPLOYED: Fri Jun 7 17:13:32 2019
      NAMESPACE: istio-system
      STATUS: DEPLOYED
      ...

      This command commits 53 CRDs to the kube-apiserver, making them available for use in the Istio mesh. It also creates a namespace for the Istio objects called istio-system and uses the --name option to name the Helm release istio-init. A release in Helm refers to a particular deployment of a chart with specific configuration options enabled.

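      If you would like to see how Helm tracks this release, you can list the releases on your cluster; with the Helm 2 client used in this tutorial, the istio-init release should appear with a DEPLOYED status:

      • helm ls
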
      To check that all of the required CRDs have been committed, run the following command:

      • kubectl get crds | grep 'istio.io\|certmanager.k8s.io' | wc -l

      This should output the number 53.

      You can now install the istio chart. To ensure that the Grafana telemetry addon is installed with the chart, we will use the --set grafana.enabled=true configuration option with our helm install command. We will also use the installation protocol for our desired configuration profile: the default profile. Istio has a number of configuration profiles to choose from when installing with Helm that allow you to customize the Istio control plane and data plane sidecars. The default profile is recommended for production deployments, and we'll use it to familiarize ourselves with the configuration options that we would use when moving to production.

      Run the following helm install command to install the chart:

      • helm install --name istio --namespace istio-system --set grafana.enabled=true istio.io/istio

      Output

      NAME:   istio
      LAST DEPLOYED: Fri Jun 7 17:18:33 2019
      NAMESPACE: istio-system
      STATUS: DEPLOYED
      ...

      Again, we're installing our Istio objects into the istio-system namespace and naming the release — in this case, istio.

      We can verify that the Service objects we expect for the default profile have been created with the following command:

      • kubectl get svc -n istio-system

      The Services we would expect to see here include istio-citadel, istio-galley, istio-ingressgateway, istio-pilot, istio-policy, istio-sidecar-injector, istio-telemetry, and prometheus. We would also expect to see the grafana Service, since we enabled this addon during installation:

      Output

      NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                                                                                                                                      AGE
      grafana                  ClusterIP      10.245.85.162    <none>            3000/TCP                                                                                                                                     3m26s
      istio-citadel            ClusterIP      10.245.135.45    <none>            8060/TCP,15014/TCP                                                                                                                           3m25s
      istio-galley             ClusterIP      10.245.46.245    <none>            443/TCP,15014/TCP,9901/TCP                                                                                                                   3m26s
      istio-ingressgateway     LoadBalancer   10.245.171.39    174.138.125.110   15020:30707/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:30285/TCP,15030:31668/TCP,15031:32297/TCP,15032:30853/TCP,15443:30406/TCP   3m26s
      istio-pilot              ClusterIP      10.245.56.97     <none>            15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                       3m26s
      istio-policy             ClusterIP      10.245.206.189   <none>            9091/TCP,15004/TCP,15014/TCP                                                                                                                 3m26s
      istio-sidecar-injector   ClusterIP      10.245.223.99    <none>            443/TCP                                                                                                                                      3m25s
      istio-telemetry          ClusterIP      10.245.5.215     <none>            9091/TCP,15004/TCP,15014/TCP,42422/TCP                                                                                                       3m26s
      prometheus               ClusterIP      10.245.100.132   <none>            9090/TCP                                                                                                                                     3m26s

      We can also check for the corresponding Istio Pods with the following command:

      • kubectl get pods -n istio-system

      The Pods corresponding to these services should have a STATUS of Running, indicating that the Pods are bound to nodes and that the containers associated with the Pods are running:

      Output

      NAME                                     READY   STATUS      RESTARTS   AGE
      grafana-67c69bb567-t8qrg                 1/1     Running     0          4m25s
      istio-citadel-fc966574d-v5rg5            1/1     Running     0          4m25s
      istio-galley-cf776876f-5wc4x             1/1     Running     0          4m25s
      istio-ingressgateway-7f497cc68b-c5w64    1/1     Running     0          4m25s
      istio-init-crd-10-bxglc                  0/1     Completed   0          9m29s
      istio-init-crd-11-dv5lz                  0/1     Completed   0          9m29s
      istio-pilot-785694f946-m5wp2             2/2     Running     0          4m25s
      istio-policy-79cff99c7c-q4z5x            2/2     Running     1          4m25s
      istio-sidecar-injector-c8ddbb99c-czvwq   1/1     Running     0          4m24s
      istio-telemetry-578b6f967c-zk56d         2/2     Running     1          4m25s
      prometheus-d8d46c5b5-k5wmg               1/1     Running     0          4m25s

      The READY field indicates how many containers in a Pod are running. For more information, please consult the documentation on Pod lifecycles.

      Note:
      If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:

      • kubectl describe pods your_pod -n pod_namespace
      • kubectl logs your_pod -n pod_namespace

      The final step in the Istio installation will be enabling the creation of Envoy proxies, which will be deployed as sidecars to services running in the mesh.

      Sidecars are typically used to add an extra layer of functionality in existing container environments. Istio's mesh architecture relies on communication between Envoy sidecars, which comprise the data plane of the mesh, and the components of the control plane. In order for the mesh to work, we need to ensure that each Pod in the mesh will also run an Envoy sidecar.

      There are two ways of accomplishing this goal: manual sidecar injection and automatic sidecar injection. We'll enable automatic sidecar injection by labeling the namespace in which we will create our application objects with the label istio-injection=enabled. This will ensure that the MutatingAdmissionWebhook controller can intercept requests to the kube-apiserver and perform a specific action — in this case, ensuring that all of our application Pods start with a sidecar.

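      For reference, manual injection would look something like the following, run against the application manifest you will create in Step 3. This is only a sketch: it assumes you have the istioctl command-line tool installed, which is not covered by the Helm-based installation in this tutorial.

      • istioctl kube-inject -f node-app.yaml | kubectl apply -f -
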
      We'll use the default namespace to create our application objects, so we'll apply the istio-injection=enabled label to that namespace with the following command:

      • kubectl label namespace default istio-injection=enabled

      We can verify that the command worked as intended by running:

      • kubectl get namespace -L istio-injection

      You will see the following output:

      Output

      NAME              STATUS   AGE   ISTIO-INJECTION
      default           Active   47m   enabled
      istio-system      Active   16m
      kube-node-lease   Active   47m
      kube-public       Active   47m
      kube-system       Active   47m

      With Istio installed and configured, we can move on to creating our application Service and Deployment objects.

      Step 3 — Creating Application Objects

      With the Istio mesh in place and configured to inject sidecar Pods, we can create an application manifest with specifications for our Service and Deployment objects. Specifications in a Kubernetes manifest describe each object's desired state.

      Our application Service will ensure that the Pods running our containers remain accessible in a dynamic environment, as individual Pods are created and destroyed, while our Deployment will describe the desired state of our Pods.

      Open a file called node-app.yaml with nano or your favorite editor:

      • nano node-app.yaml

      First, add the following code to define the nodejs application Service:

      ~/istio_project/node-app.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: nodejs
        labels: 
          app: nodejs
      spec:
        selector:
          app: nodejs
        ports:
        - name: http
          port: 8080 
      

      This Service definition includes a selector that will match Pods with the corresponding app: nodejs label. We've also specified that the Service will target port 8080 on any Pod with the matching label.

      We are also naming the Service port, in compliance with Istio's requirements for Pods and Services. The http value is one of the values Istio will accept for the name field.

      Next, below the Service, add the following specifications for the application Deployment. Be sure to replace the image listed under the containers specification with the image you created and pushed to Docker Hub in Step 1:

      ~/istio_project/node-app.yaml

      ...
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nodejs
        labels:
          version: v1
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nodejs
        template:
          metadata:
            labels:
              app: nodejs
              version: v1
          spec:
            containers:
            - name: nodejs
              image: your_dockerhub_username/node-demo
              ports:
              - containerPort: 8080
      

      The specifications for this Deployment include the number of replicas (in this case, 1), as well as a selector that defines which Pods the Deployment will manage. In this case, it will manage Pods with the app: nodejs label.

      The template field contains values that do the following:

      • Apply the app: nodejs label to the Pods managed by the Deployment. Istio recommends adding the app label to Deployment specifications to provide contextual information for Istio's metrics and telemetry.
      • Apply a version label to specify the version of the application that corresponds to this Deployment. As with the app label, Istio recommends including the version label to provide contextual information.
      • Define the specifications for the containers the Pods will run, including the container name and the image. The image here is the image you created in Step 1 and pushed to Docker Hub. The container specifications also include a containerPort configuration to point to the port each container will listen on. If ports remain unlisted here, they will bypass the Istio proxy. Note that this port, 8080, corresponds to the targeted port named in the Service definition.

      Save and close the file when you are finished editing.

      With this file in place, we can move on to editing the file that will contain definitions for Gateway and Virtual Service objects, which control how traffic enters the mesh and how it is routed once there.

      Step 4 — Creating Istio Objects

      To control access to a cluster and routing to Services, Kubernetes uses Ingress Resources and Controllers. Ingress Resources define rules for HTTP and HTTPS routing to cluster Services, while Controllers load balance incoming traffic and route it to the correct Services.

      For more information about using Ingress Resources and Controllers, see How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes.

      Istio uses a different set of objects to achieve similar ends, though with some important differences. Instead of using a Controller to load balance traffic, the Istio mesh uses a Gateway, which functions as a load balancer that handles incoming and outgoing HTTP/TCP connections. The Gateway then allows for monitoring and routing rules to be applied to traffic entering the mesh. Specifically, the configuration that determines traffic routing is defined as a Virtual Service. Each Virtual Service includes routing rules that match criteria with a specific protocol and destination.

      Though Kubernetes Ingress Resources/Controllers and Istio Gateways/Virtual Services have some functional similarities, the structure of the mesh introduces important differences. Kubernetes Ingress Resources and Controllers offer operators some routing options, for example, but Gateways and Virtual Services make a more robust set of functionalities available since they enable traffic to enter the mesh. In other words, the limited application layer capabilities that Kubernetes Ingress Controllers and Resources make available to cluster operators do not include the functionalities — including advanced routing, tracing, and telemetry — provided by the sidecars in the Istio service mesh.

      To allow external traffic into our mesh and configure routing to our Node app, we will need to create an Istio Gateway and Virtual Service. Open a file called node-istio.yaml for the manifest:

      • nano node-istio.yaml

      First, add the definition for the Gateway object:

      ~/istio_project/node-istio.yaml

      apiVersion: networking.istio.io/v1alpha3
      kind: Gateway
      metadata:
        name: nodejs-gateway
      spec:
        selector:
          istio: ingressgateway 
        servers:
        - port:
            number: 80
            name: http
            protocol: HTTP
          hosts:
          - "*"
      

      In addition to specifying a name for the Gateway in the metadata field, we've included the following specifications:

      • A selector that will match this resource with the default Istio IngressGateway controller that was enabled with the configuration profile we selected when installing Istio.
      • A servers specification that specifies the port to expose for ingress and the hosts exposed by the Gateway. In this case, we are specifying all hosts with an asterisk (*) since we are not working with a specific secured domain.

      Below the Gateway definition, add specifications for the Virtual Service:

      ~/istio_project/node-istio.yaml

      ...
      ---
      apiVersion: networking.istio.io/v1alpha3
      kind: VirtualService
      metadata:
        name: nodejs
      spec:
        hosts:
        - "*"
        gateways:
        - nodejs-gateway
        http:
        - route:
          - destination:
              host: nodejs
      

      In addition to providing a name for this Virtual Service, we're also including specifications for this resource that include:

      • A hosts field that specifies the destination host. In this case, we're again using a wildcard value (*) to enable quick access to the application in the browser, since we're not working with a domain.
      • A gateways field that specifies the Gateway through which external requests will be allowed. In this case, it's our nodejs-gateway Gateway.
      • The http field that specifies how HTTP traffic will be routed.
      • A destination field that indicates where the request will be routed. In this case, it will be routed to the nodejs service, which implicitly expands to the Service's Fully Qualified Domain Name (FQDN) in a Kubernetes environment: nodejs.default.svc.cluster.local. It's important to note, though, that the FQDN will be based on the namespace where the rule is defined, not the Service, so be sure to use the FQDN in this field when your application Service and Virtual Service are in different namespaces, as shown in the sketch after this list. To learn about Kubernetes Domain Name System (DNS) more generally, see An Introduction to the Kubernetes DNS Service.

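      For example, if the application Service stayed in the default namespace but this Virtual Service were defined elsewhere, the route would need to name the Service by its FQDN. A minimal sketch of what that destination block would look like (not needed for this tutorial, since both objects live in the default namespace):

      http:
      - route:
        - destination:
            host: nodejs.default.svc.cluster.local
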
      Save and close the file when you are finished editing.

      With your yaml files in place, you can create your application Service and Deployment, as well as the Gateway and Virtual Service objects that will enable access to your application.

      Step 5 — Creating Application Resources and Enabling Telemetry Access

      Once you have created your application Service and Deployment objects, along with a Gateway and Virtual Service, you will be able to generate some requests to your application and look at the associated data in your Istio Grafana dashboards. First, however, you will need to configure Istio to expose the Grafana addon so that you can access the dashboards in your browser.

      We will enable Grafana access with HTTP, but when you are working in production or in sensitive environments, it is strongly recommended that you enable access with HTTPS.

      Because we set the --set grafana.enabled=true configuration option when installing Istio in Step 2, we have a Grafana Service and Pod in our istio-system namespace, which we confirmed in that Step.

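      If you would like to re-confirm the Grafana Service and the port it listens on before writing the manifest, you can query it directly; the 3000/TCP port in the output is the port we will route to below:

      • kubectl get svc grafana -n istio-system
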
      With those resources already in place, our next step will be to create a manifest for a Gateway and Virtual Service so that we can expose the Grafana addon.

      Open the file for the manifest:

      • nano node-grafana.yaml

      Add the following code to the file to create a Gateway and Virtual Service to expose and route traffic to the Grafana Service:

      ~/istio_project/node-grafana.yaml

      apiVersion: networking.istio.io/v1alpha3
      kind: Gateway
      metadata:
        name: grafana-gateway
        namespace: istio-system
      spec:
        selector:
          istio: ingressgateway
        servers:
        - port:
            number: 15031
            name: http-grafana
            protocol: HTTP
          hosts:
          - "*"
      ---
      apiVersion: networking.istio.io/v1alpha3
      kind: VirtualService
      metadata:
        name: grafana-vs
        namespace: istio-system
      spec:
        hosts:
        - "*"
        gateways:
        - grafana-gateway
        http:
        - match:
          - port: 15031
          route:
          - destination:
              host: grafana
              port:
                number: 3000
      

      Our Grafana Gateway and Virtual Service specifications are similar to those we defined for our application Gateway and Virtual Service in Step 4. There are a few differences, however:

      • Grafana will be exposed on the http-grafana named port (port 15031) of the Gateway, and traffic will be routed to port 3000 on the grafana Service.
      • The Gateway and Virtual Service are both defined in the istio-system namespace.
      • The host in this Virtual Service is the grafana Service in the istio-system namespace. Since we are defining this rule in the same namespace that the Grafana Service is running in, FQDN expansion will again work without conflict.

      Note: Because our current MeshPolicy is configured to run TLS in permissive mode, we do not need to apply a Destination Rule to our manifest. If you selected a different profile with your Istio installation, then you will need to add a Destination Rule to disable mutual TLS when enabling access to Grafana with HTTP. For more information on how to do this, you can refer to the official Istio documentation on enabling access to telemetry addons with HTTP.

      Save and close the file when you are finished editing.

      Create your Grafana resources with the following command:

      • kubectl apply -f node-grafana.yaml

      The kubectl apply command allows you to apply a particular configuration to an object in the process of creating or updating it. In our case, we are applying the configuration we specified in the node-grafana.yaml file to our Gateway and Virtual Service objects in the process of creating them.

      You can take a look at the Gateway in the istio-system namespace with the following command:

      • kubectl get gateway -n istio-system

      You will see the following output:

      Output

      NAME              AGE
      grafana-gateway   47s

      You can do the same thing for the Virtual Service:

      • kubectl get virtualservice -n istio-system

      Output

      NAME         GATEWAYS            HOSTS   AGE
      grafana-vs   [grafana-gateway]   [*]     74s

      With these resources created, we should be able to access our Grafana dashboards in the browser. Before we do that, however, let's create our application Service and Deployment, along with our application Gateway and Virtual Service, and check that we can access our application in the browser.

      Create the application Service and Deployment with the following command:

      • kubectl apply -f node-app.yaml

      Wait a few seconds, and then check your application Pods with the following command:

      • kubectl get pods

      Output

      NAME                      READY   STATUS    RESTARTS   AGE
      nodejs-7759fb549f-kmb7x   2/2     Running   0          40s

      Your application containers are running, as you can see in the STATUS column, but why does the READY column list 2/2 if the application manifest from Step 3 only specified 1 replica?

      This second container is the Envoy sidecar, which you can inspect with the following command. Be sure to replace the pod listed here with the NAME of your own nodejs Pod:

      • kubectl describe pod nodejs-7759fb549f-kmb7x

      Output

      Name:           nodejs-7759fb549f-kmb7x
      Namespace:      default
      ...
      Containers:
        nodejs:
        ...
        istio-proxy:
          Container ID:  docker://f840d5a576536164d80911c46f6de41d5bc5af5152890c3aed429a1ee29af10b
          Image:         docker.io/istio/proxyv2:1.1.7
          Image ID:      docker-pullable://istio/proxyv2@sha256:e6f039115c7d5ef9c8f6b049866fbf9b6f5e2255d3a733bb8756b36927749822
          Port:          15090/TCP
          Host Port:     0/TCP
          Args:
          ...

      Next, create your application Gateway and Virtual Service:

      • kubectl apply -f node-istio.yaml

      You can inspect the Gateway with the following command:

      • kubectl get gateway

      Output

      NAME             AGE
      nodejs-gateway   7s

      And the Virtual Service:

      • kubectl get virtualservice

      Output

      NAME     GATEWAYS           HOSTS   AGE
      nodejs   [nodejs-gateway]   [*]     28s

      We are now ready to test access to the application. To do this, we will need the external IP associated with our istio-ingressgateway Service, which is a LoadBalancer Service type.

      Get the external IP for the istio-ingressgateway Service with the following command:

      • kubectl get svc -n istio-system

      You will see output like the following:

      Output

      NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP         PORT(S)                                                                                                                                      AGE
      grafana                  ClusterIP      10.245.85.162    <none>              3000/TCP                                                                                                                                     42m
      istio-citadel            ClusterIP      10.245.135.45    <none>              8060/TCP,15014/TCP                                                                                                                           42m
      istio-galley             ClusterIP      10.245.46.245    <none>              443/TCP,15014/TCP,9901/TCP                                                                                                                   42m
      istio-ingressgateway     LoadBalancer   10.245.171.39    ingressgateway_ip   15020:30707/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:30285/TCP,15030:31668/TCP,15031:32297/TCP,15032:30853/TCP,15443:30406/TCP   42m
      istio-pilot              ClusterIP      10.245.56.97     <none>              15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                       42m
      istio-policy             ClusterIP      10.245.206.189   <none>              9091/TCP,15004/TCP,15014/TCP                                                                                                                 42m
      istio-sidecar-injector   ClusterIP      10.245.223.99    <none>              443/TCP                                                                                                                                      42m
      istio-telemetry          ClusterIP      10.245.5.215     <none>              9091/TCP,15004/TCP,15014/TCP,42422/TCP                                                                                                       42m
      prometheus               ClusterIP      10.245.100.132   <none>              9090/TCP                                                                                                                                     42m

      The istio-ingressgateway should be the only Service with the TYPE LoadBalancer, and the only Service with an external IP.

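      If you would rather extract just the external IP instead of scanning the full table, a jsonpath query like the following should work, assuming your load balancer reports an IP address rather than a hostname:

      • kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
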
      Navigate to this external IP in your browser: http://ingressgateway_ip.

      You should see the following landing page:

      Application Landing Page

      Next, generate some load on the site by refreshing the page five or six times.

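      If you prefer to generate traffic from the command line instead, a short curl loop works just as well. Replace ingressgateway_ip with your own external IP:

      • for i in $(seq 1 10); do curl -s -o /dev/null http://ingressgateway_ip; done
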
      You can now check the Grafana dashboard to look at traffic data.

      In your browser, navigate to the following address, again using your istio-ingressgateway external IP and the port you defined in your Grafana Gateway manifest: http://ingressgateway_ip:15031.

      You will see the following landing page:

      Grafana Home Dash

      Clicking on Home at the top of the page will bring you to a page with an istio folder. To get a list of dropdown options, click on the istio folder icon:

      Istio Dash Options Dropdown Menu

      From this list of options, click on Istio Service Dashboard.

      This will bring you to a landing page with another dropdown menu:

      Service Dropdown in Istio Service Dash

      Select nodejs.default.svc.cluster.local from the list of available options.

      You will now be able to look at traffic data for that service:

      Nodejs Service Dash

      You now have a functioning Node.js application running in an Istio service mesh with Grafana enabled and configured for external access.

      Conclusion

      In this tutorial, you installed Istio using the Helm package manager and used it to expose a Node.js application Service using Gateway and Virtual Service objects. You also configured Gateway and Virtual Service objects to expose the Grafana telemetry addon, in order to look at traffic data for your application.

      As you move toward production, you will want to take steps like securing your application Gateway with HTTPS and ensuring that access to your Grafana Service is also secure.

      You can also explore other telemetry-related tasks, including collecting and processing metrics, logs, and trace spans.




      How To Scale a Node.js Application with MongoDB on Kubernetes Using Helm


      Introduction

      Kubernetes is a system for running modern, containerized applications at scale. With it, developers can deploy and manage applications across clusters of machines. And though it can be used to improve efficiency and reliability in single-instance application setups, Kubernetes is designed to run multiple instances of an application across groups of machines.

      When creating multi-service deployments with Kubernetes, many developers opt to use the Helm package manager. Helm streamlines the process of creating multiple Kubernetes resources by offering charts and templates that coordinate how these objects interact. It also offers pre-packaged charts for popular open-source projects.

      In this tutorial, you will deploy a Node.js application with a MongoDB database onto a Kubernetes cluster using Helm charts. You will use the official Helm MongoDB replica set chart to create a StatefulSet object consisting of three Pods, a Headless Service, and three PersistentVolumeClaims. You will also create a chart to deploy a multi-replica Node.js application using a custom application image. The setup you will build in this tutorial will mirror the functionality of the code described in Containerizing a Node.js Application with Docker Compose and will be a good starting point to build a resilient Node.js application with a MongoDB data store that can scale with your needs.

      Prerequisites

      To complete this tutorial, you will need:

      Step 1 — Cloning and Packaging the Application

      To use our application with Kubernetes, we will need to package it so that the kubelet agent can pull the image. Before packaging the application, however, we will need to modify the MongoDB connection URI in the application code to ensure that our application can connect to the members of the replica set that we will create with the Helm mongodb-replicaset chart.

      Our first step will be to clone the node-mongo-docker-dev repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in Containerizing a Node.js Application for Development With Docker Compose, which uses a demo Node.js application with a MongoDB database to demonstrate how to set up a development environment with Docker Compose. You can find more information about the application itself in the series From Containers to Kubernetes with Node.js.

      Clone the repository into a directory called node_project:

      • git clone https://github.com/do-community/node-mongo-docker-dev.git node_project

      Navigate to the node_project directory:

      • cd node_project

      The node_project directory contains files and directories for a shark information application that works with user input. It has been modernized to work with containers: sensitive and specific configuration information has been removed from the application code and refactored to be injected at runtime, and the application's state has been offloaded to a MongoDB database.

      For more information about designing modern, containerized applications, please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes.

      When we deploy the Helm mongodb-replicaset chart, it will create:

      • A StatefulSet object with three Pods — the members of the MongoDB replica set. Each Pod will have an associated PersistentVolumeClaim and will maintain a fixed identity in the event of rescheduling.
      • A MongoDB replica set made up of the Pods in the StatefulSet. The set will include one primary and two secondaries. Data will be replicated from the primary to the secondaries, ensuring that our application data remains highly available.

      For our application to interact with the database replicas, the MongoDB connection URI in our code will need to include both the hostnames of the replica set members as well as the name of the replica set itself. We therefore need to include these values in the URI.

      The file in our cloned repository that specifies database connection information is called db.js. Open that file now using nano or your favorite editor:

      • nano db.js

      Currently, the file includes constants that are referenced in the database connection URI at runtime. The values for these constants are injected using Node's process.env property, which returns an object with information about your user environment at runtime. Setting values dynamically in our application code allows us to decouple the code from the underlying infrastructure, which is necessary in a dynamic, stateless environment. For more information about refactoring application code in this way, see Step 2 of Containerizing a Node.js Application for Development With Docker Compose and the relevant discussion in The 12-Factor App.

      The constants for the connection URI and the URI string itself currently look like this:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      ...
      
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
      ...
      

      In keeping with a 12FA approach, we do not want to hard code the hostnames of our replica instances or our replica set name into this URI string. The existing MONGO_HOSTNAME constant can be expanded to include multiple hostnames — the members of our replica set — so we will leave that in place. We will need to add a replica set constant to the options section of the URI string, however.

      Add MONGO_REPLICASET to both the URI constant object and the connection string:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB,
        MONGO_REPLICASET
      } = process.env;
      
      ...
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?replicaSet=${MONGO_REPLICASET}&authSource=admin`;
      ...
      

      Using the replicaSet option in the options section of the URI allows us to pass in the name of the replica set, which, along with the hostnames defined in the MONGO_HOSTNAME constant, will allow us to connect to the set members.

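      To make the shape of the final connection string concrete, here is roughly what the URI will look like once the environment variables are injected, using the database name and replica set name you will configure later in this tutorial (illustrative values only; hosts listed without an explicit port default to 27017):

      mongodb://your_database_username:your_database_password@host-0,host-1,host-2:27017/sharkinfo?replicaSet=db&authSource=admin
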
      Save and close the file when you are finished editing.

      With your database connection information modified to work with replica sets, you can now package your application, build the image with the docker build command, and push it to Docker Hub.

      Build the image with docker build and the -t flag, which allows you to tag the image with a memorable name. In this case, tag the image with your Docker Hub username and name it node-replicas or a name of your own choosing:

      • docker build -t your_dockerhub_username/node-replicas .

      The . in the command specifies that the build context is the current directory.

      It will take a minute or two to build the image. Once it is complete, check your images:

      • docker images

      You will see the following output:

      Output

      REPOSITORY                              TAG         IMAGE ID       CREATED         SIZE
      your_dockerhub_username/node-replicas   latest      56a69b4bc882   7 seconds ago   90.1MB
      node                                    10-alpine   aa57b0242b33   6 days ago      71MB

      Next, log in to the Docker Hub account you created in the prerequisites:

      • docker login -u your_dockerhub_username

      When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your non-root user's home directory with your Docker Hub credentials.

      Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_username with your own Docker Hub username:

      • docker push your_dockerhub_username/node-replicas

      You now have an application image that you can pull to run your replicated application with Kubernetes. The next step will be to configure specific parameters to use with the MongoDB Helm chart.

      Step 2 — Creating Secrets for the MongoDB Replica Set

      The stable/mongodb-replicaset chart provides different options when it comes to using Secrets, and we will create two to use with our chart deployment:

      • A Secret for our replica set keyfile that will function as a shared password between replica set members, allowing them to authenticate other members.
      • A Secret for our MongoDB admin user, who will be created as a root user on the admin database. This role will allow you to create subsequent users with limited permissions when deploying your application to production.

      With these Secrets in place, we will be able to set our preferred parameter values in a dedicated values file and create the StatefulSet object and MongoDB replica set with the Helm chart.

      First, let's create the keyfile. We will use the openssl command with the rand option to generate a 756 byte random string for the keyfile:

      • openssl rand -base64 756 > key.txt

      The output generated by the command will be base64 encoded, ensuring uniform data transmission, and redirected to a file called key.txt, following the guidelines stated in the mongodb-replicaset chart authentication documentation. The key itself must be between 6 and 1024 characters long, consisting only of characters in the base64 set.

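      If you want to sanity-check the key before creating the Secret, you can count its base64 characters, stripping the newlines that openssl adds when wrapping its output. 756 random bytes encode to 1008 base64 characters, which falls within the 6 to 1024 character limit:

      • tr -d '\n' < key.txt | wc -c
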
      You can now create a Secret called keyfilesecret using this file with kubectl create:

      • kubectl create secret generic keyfilesecret --from-file=key.txt

      This will create a Secret object in the default namespace, since we have not created a specific namespace for our setup.

      You will see the following output indicating that your Secret has been created:

      Output

      secret/keyfilesecret created

      Remove key.txt:

      • rm key.txt

      Alternatively, if you would like to save the file, be sure to restrict its permissions and add it to your .gitignore file to keep it out of version control.

      Next, create the Secret for your MongoDB admin user. The first step will be to convert your desired username and password to base64.

      Convert your database username:

      • echo -n 'your_database_username' | base64

      Note down the value you see in the output.

      Next, convert your password:

      • echo -n 'your_database_password' | base64

      Take note of the value in the output here as well.

      Open a file for the Secret:

      • nano secret.yaml

      Note: Kubernetes objects are typically defined using YAML, which strictly forbids tabs and requires two spaces for indentation. If you would like to check the formatting of any of your YAML files, you can use a linter or test the validity of your syntax using kubectl create with the --dry-run and --validate flags:

      • kubectl create -f your_yaml_file.yaml --dry-run --validate=true

      In general, it is a good idea to validate your syntax before creating resources with kubectl.

      Add the following code to the file to create a Secret that will define a user and password with the encoded values you just created. Be sure to replace the dummy values here with your own encoded username and password:

      ~/node_project/secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: mongo-secret
      data:
        user: your_encoded_username
        password: your_encoded_password
      

      Here, we're using the key names that the mongodb-replicaset chart expects: user and password. We have named the Secret object mongo-secret, but you are free to name it anything you would like.

      Save and close the file when you are finished editing.

      Create the Secret object with the following command:

      • kubectl create -f secret.yaml

      You will see the following output:

      Output

      secret/mongo-secret created

      Again, you can either remove secret.yaml or restrict its permissions and add it to your .gitignore file.

      With your Secret objects created, you can move on to specifying the parameter values you will use with the mongodb-replicaset chart and creating the MongoDB deployment.

      Step 3 — Configuring the MongoDB Helm Chart and Creating a Deployment

      Helm comes with an actively maintained repository called stable that contains the chart we will be using: mongodb-replicaset. To use this chart with the Secrets we've just created, we will create a file with configuration parameter values called mongodb-values.yaml and then install the chart using this file.

      Our mongodb-values.yaml file will largely mirror the default values.yaml file in the mongodb-replicaset chart repository. We will, however, make the following changes to our file:

      • We will set the auth parameter to true to ensure that our database instances start with authorization enabled. This means that all clients will be required to authenticate for access to database resources and operations.
      • We will add information about the Secrets we created in the previous Step so that the chart can use these values to create the replica set keyfile and admin user.
      • We will decrease the size of the PersistentVolumes associated with each Pod in the StatefulSet to use the minimum viable DigitalOcean Block Storage unit, 1GB, though you are free to modify this to meet your storage requirements.

      Before writing the mongodb-values.yaml file, however, you should first check that you have a StorageClass created and configured to provision storage resources. Each of the Pods in your database StatefulSet will have a sticky identity and an associated PersistentVolumeClaim, which will dynamically provision a PersistentVolume for the Pod. If a Pod is rescheduled, the PersistentVolume will be mounted to whichever node the Pod is scheduled on (though each Volume must be manually deleted if its associated Pod or StatefulSet is permanently deleted).

      Because we are working with DigitalOcean Kubernetes, our default StorageClass provisioner is set to dobs.csi.digitalocean.com — DigitalOcean Block Storage — which we can check by typing:

      • kubectl get storageclass

      If you are working with a DigitalOcean cluster, you will see the following output:

      Output

      NAME                         PROVISIONER                 AGE
      do-block-storage (default)   dobs.csi.digitalocean.com   21m

      If you are not working with a DigitalOcean cluster, you will need to create a StorageClass and configure a provisioner of your choice. For details about how to do this, please see the official documentation.

      Now that you have ensured that you have a StorageClass configured, open mongodb-values.yaml for editing:

      • nano mongodb-values.yaml

      You will set values in this file that will do the following:

      • Enable authorization.
      • Reference your keyfilesecret and mongo-secret objects.
      • Specify 1Gi for your PersistentVolumes.
      • Set your replica set name to db.
      • Specify 3 replicas for the set.
      • Pin the mongo image to the latest version at the time of writing: 4.1.9.

      Paste the following code into the file:

      ~/node_project/mongodb-values.yaml

      replicas: 3
      port: 27017
      replicaSetName: db
      podDisruptionBudget: {}
      auth:
        enabled: true
        existingKeySecret: keyfilesecret
        existingAdminSecret: mongo-secret
      imagePullSecrets: []
      installImage:
        repository: unguiculus/mongodb-install
        tag: 0.7
        pullPolicy: Always
      copyConfigImage:
        repository: busybox
        tag: 1.29.3
        pullPolicy: Always
      image:
        repository: mongo
        tag: 4.1.9
        pullPolicy: Always
      extraVars: {}
      metrics:
        enabled: false
        image:
          repository: ssalaues/mongodb-exporter
          tag: 0.6.1
          pullPolicy: IfNotPresent
        port: 9216
        path: /metrics
        socketTimeout: 3s
        syncTimeout: 1m
        prometheusServiceDiscovery: true
        resources: {}
      podAnnotations: {}
      securityContext:
        enabled: true
        runAsUser: 999
        fsGroup: 999
        runAsNonRoot: true
      init:
        resources: {}
        timeout: 900
      resources: {}
      nodeSelector: {}
      affinity: {}
      tolerations: []
      extraLabels: {}
      persistentVolume:
        enabled: true
        #storageClass: "-"
        accessModes:
          - ReadWriteOnce
        size: 1Gi
        annotations: {}
      serviceAnnotations: {}
      terminationGracePeriodSeconds: 30
      tls:
        enabled: false
      configmap: {}
      readinessProbe:
        initialDelaySeconds: 5
        timeoutSeconds: 1
        failureThreshold: 3
        periodSeconds: 10
        successThreshold: 1
      livenessProbe:
        initialDelaySeconds: 30
        timeoutSeconds: 5
        failureThreshold: 3
        periodSeconds: 10
        successThreshold: 1
      

      The persistentVolume.storageClass parameter is commented out here: removing the comment and setting its value to "-" would disable dynamic provisioning. Because we are leaving this value undefined, the chart will use the default provisioner, which in our case is dobs.csi.digitalocean.com.

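      If you prefer to pin the class explicitly instead of relying on the cluster default, the relevant section of mongodb-values.yaml would look something like this, using the DigitalOcean class name from the earlier output; substitute your own StorageClass name if it differs:

      persistentVolume:
        enabled: true
        storageClass: "do-block-storage"
        accessModes:
          - ReadWriteOnce
        size: 1Gi
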
      Also note the accessMode associated with the persistentVolume key: ReadWriteOnce means that the provisioned volume can be mounted as read-write by a single node only. Please see the documentation for more information about different access modes.

      To learn more about the other parameters included in the file, see the configuration table included with the repo.

      Save and close the file when you are finished editing.

      Before deploying the mongodb-replicaset chart, you will want to update the stable repo with the helm repo update command:

      • helm repo update

      This will get the latest chart information from the stable repository.

      Finally, install the chart with the following command:

      • helm install --name mongo -f mongodb-values.yaml stable/mongodb-replicaset

      Note: Before installing a chart, you can run helm install with the --dry-run and --debug options to check the generated manifests for your release:

      • helm install --name your_release_name -f your_values_file.yaml --dry-run --debug your_chart

      Note that we are naming the Helm release mongo. This name will refer to this particular deployment of the chart with the configuration options we've specified. We've pointed to these options by including the -f flag and our mongodb-values.yaml file.

      Also note that because we did not include the --namespace flag with helm install, our chart objects will be created in the default namespace.

      Once you have created the release, you will see output about its status, along with information about the created objects and instructions for interacting with them:

      Output

      NAME:   mongo
      LAST DEPLOYED: Tue Apr 16 21:51:05 2019
      NAMESPACE: default
      STATUS: DEPLOYED

      RESOURCES:
      ==> v1/ConfigMap
      NAME                              DATA  AGE
      mongo-mongodb-replicaset-init     1     1s
      mongo-mongodb-replicaset-mongodb  1     1s
      mongo-mongodb-replicaset-tests    1     0s
      ...

      You can now check on the creation of your Pods with the following command:

      • kubectl get pods

      You will see output like the following as the Pods are being created:

      Output

      NAME                         READY   STATUS     RESTARTS   AGE
      mongo-mongodb-replicaset-0   1/1     Running    0          67s
      mongo-mongodb-replicaset-1   0/1     Init:0/3   0          8s

      The READY and STATUS outputs here indicate that the Pods in our StatefulSet are not fully ready: the Init Containers associated with the Pod's containers are still running. Because StatefulSet members are created in sequential order, each Pod in the StatefulSet must be Running and Ready before the next Pod will be created.

      Once the Pods have been created and all of their associated containers are running, you will see this output:

      Output

      NAME                         READY   STATUS    RESTARTS   AGE
      mongo-mongodb-replicaset-0   1/1     Running   0          2m33s
      mongo-mongodb-replicaset-1   1/1     Running   0          94s
      mongo-mongodb-replicaset-2   1/1     Running   0          36s

      The Running STATUS indicates that your Pods are bound to nodes and that the containers associated with those Pods are running. READY indicates how many containers in a Pod are running. For more information, please consult the documentation on Pod lifecycles.

      Note:
      If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:

      • kubectl describe pods your_pod
      • kubectl logs your_pod

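      Because each StatefulSet member claims its own storage, you can also confirm that a PersistentVolumeClaim was provisioned for each of the three Pods. The claim names are generated by the chart, so yours may differ slightly:

      • kubectl get pvc
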
      Each of the Pods in your StatefulSet has a name that combines the name of the StatefulSet with the ordinal index of the Pod. Because we created three replicas, our StatefulSet members are numbered 0-2, and each has a stable DNS entry comprised of the following elements: $(statefulset-name)-$(ordinal).$(service name).$(namespace).svc.cluster.local.

      In our case, the StatefulSet and the Headless Service created by the mongodb-replicaset chart have the same names. Check the StatefulSet:

      • kubectl get statefulset

      Output

      NAME                       READY   AGE
      mongo-mongodb-replicaset   3/3     4m2s

      And check the Services:

      • kubectl get svc

      Output

      NAME                              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
      kubernetes                        ClusterIP   10.245.0.1   <none>        443/TCP     42m
      mongo-mongodb-replicaset          ClusterIP   None         <none>        27017/TCP   4m35s
      mongo-mongodb-replicaset-client   ClusterIP   None         <none>        27017/TCP   4m35s

      This means that the first member of our StatefulSet will have the following DNS entry:

      mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local
      

      Because we need our application to connect to each MongoDB instance, it's essential that we have this information so that we can communicate directly with the Pods, rather than with the Service. When we create our custom application Helm chart, we will pass the DNS entries for each Pod to our application using environment variables.

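      If you would like to confirm that these per-Pod DNS records resolve inside the cluster, you can run a short-lived Pod to query one of them. This is just a sketch; the Pod name is arbitrary, and the busybox:1.28 image tag is chosen because its nslookup implementation is reliable:

      • kubectl run -it --rm --restart=Never dns-test --image=busybox:1.28 -- nslookup mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local
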
      With your database instances up and running, you are ready to create the chart for your Node application.

      Step 4 — Creating a Custom Application Chart and Configuring Parameters

      We will create a custom Helm chart for our Node application and modify the default files in the standard chart directory so that our application can work with the replica set we have just created. We will also create files to define ConfigMap and Secret objects for our application.

      First, create a new chart directory called nodeapp with the following command:

      • helm create nodeapp

      This will create a directory called nodeapp in your ~/node_project folder with the following resources:

      • A Chart.yaml file with basic information about your chart.
      • A values.yaml file that allows you to set specific parameter values, as you did with your MongoDB deployment.
      • A .helmignore file with file and directory patterns that will be ignored when packaging charts.
      • A templates/ directory with the template files that will generate Kubernetes manifests.
      • A templates/tests/ directory for test files.
      • A charts/ directory for any charts that this chart depends on.

      The first file we will modify out of these default files is values.yaml. Open that file now:

      • nano nodeapp/values.yaml

      The values that we will set here include:

      • The number of replicas.
      • The application image we want to use. In our case, this will be the node-replicas image we created in Step 1.
      • The ServiceType. In this case, we will specify LoadBalancer to create a point of access to our application for testing purposes. Because we are working with a DigitalOcean Kubernetes cluster, this will create a DigitalOcean Load Balancer when we deploy our chart. In production, you can configure your chart to use Ingress Resources and Ingress Controllers to route traffic to your Services.
      • The targetPort to specify the port on the Pod where our application will be exposed.

      We will not enter environment variables into this file. Instead, we will create templates for ConfigMap and Secret objects and add these values to our application Deployment manifest, located at ~/node_project/nodeapp/templates/deployment.yaml.

      Configure the following values in the values.yaml file:

      ~/node_project/nodeapp/values.yaml

      # Default values for nodeapp.
      # This is a YAML-formatted file.
      # Declare variables to be passed into your templates.
      
      replicaCount: 3
      
      image:
        repository: your_dockerhub_username/node-replicas
        tag: latest
        pullPolicy: IfNotPresent
      
      nameOverride: ""
      fullnameOverride: ""
      
      service:
        type: LoadBalancer
        port: 80
        targetPort: 8080
      ...
      

      Save and close the file when you are finished editing.

      Next, open a secret.yaml file in the nodeapp/templates directory:

      • nano nodeapp/templates/secret.yaml

      In this file, add values for your MONGO_USERNAME and MONGO_PASSWORD application constants. These are the constants that your application will expect to have access to at runtime, as specified in db.js, your database connection file. As you add the values for these constants, remember to use the base64-encoded values that you used earlier in Step 2 when creating your mongo-secret object. If you need to recreate those values, you can return to Step 2 and run the relevant commands again.

      Add the following code to the file:

      ~/node_project/nodeapp/templates/secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: {{ .Release.Name }}-auth
      data:
        MONGO_USERNAME: your_encoded_username
        MONGO_PASSWORD: your_encoded_password
      

      The name of this Secret object will depend on the name of your Helm release, which you will specify when you deploy the application chart.

      Save and close the file when you are finished.

      Next, open a file to create a ConfigMap for your application:

      • nano nodeapp/templates/configmap.yaml

      In this file, we will define the remaining variables that our application expects: MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET. Our MONGO_HOSTNAME variable will include the DNS entry for each instance in our replica set, since this is what the MongoDB connection URI requires.

      According to the Kubernetes documentation, when an application implements liveness and readiness checks, SRV records should be used when connecting to the Pods. As discussed in Step 3, our Pod SRV records follow this pattern: $(statefulset-name)-$(ordinal).$(service name).$(namespace).svc.cluster.local. Since our MongoDB StatefulSet implements liveness and readiness checks, we should use these stable identifiers when defining the values of the MONGO_HOSTNAME variable.

      Add the following code to the file to define the MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET variables. You are free to use another name for your MONGO_DB database, but your MONGO_HOSTNAME and MONGO_REPLICASET values must be written as they appear here:

      ~/node_project/nodeapp/templates/configmap.yaml

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: {{ .Release.Name }}-config
      data:
        MONGO_HOSTNAME: "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local"  
        MONGO_PORT: "27017"
        MONGO_DB: "sharkinfo"
        MONGO_REPLICASET: "db"
      

      Because we have already created the StatefulSet object and replica set, the hostnames that are listed here must be listed in your file exactly as they appear in this example. If you destroy these objects and rename your MongoDB Helm release, then you will need to revise the values included in this ConfigMap. The same applies for MONGO_REPLICASET, since we specified the replica set name with our MongoDB release.

      Also note that the values listed here are quoted, which is the expectation for environment variables in Helm.

      Save and close the file when you are finished editing.

      With your chart parameter values defined and your Secret and ConfigMap manifests created, you can edit the application Deployment template to use your environment variables.

      Step 5 — Integrating Environment Variables into Your Helm Deployment

      With the files for our application Secret and ConfigMap in place, we will need to make sure that our application Deployment can use these values. We will also customize the liveness and readiness probes that are already defined in the Deployment manifest.

      Open the application Deployment template for editing:

      • nano nodeapp/templates/deployment.yaml

      Though this is a YAML file, Helm templates use a different syntax from standard Kubernetes YAML files in order to generate manifests. For more information about templates, see the Helm documentation.

      In the file, first add an env key to your application container specifications, below the imagePullPolicy key and above ports:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              env:
              ports:
      

      Next, add the following keys to the list of env variables:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              env:
              - name: MONGO_USERNAME
                valueFrom:
                  secretKeyRef:
                    key: MONGO_USERNAME
                    name: {{ .Release.Name }}-auth
              - name: MONGO_PASSWORD
                valueFrom:
                  secretKeyRef:
                    key: MONGO_PASSWORD
                    name: {{ .Release.Name }}-auth
              - name: MONGO_HOSTNAME
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_HOSTNAME
                    name: {{ .Release.Name }}-config
              - name: MONGO_PORT
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_PORT
                    name: {{ .Release.Name }}-config
              - name: MONGO_DB
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_DB
                    name: {{ .Release.Name }}-config      
              - name: MONGO_REPLICASET
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_REPLICASET
                    name: {{ .Release.Name }}-config        
      

      Each variable includes a reference to its value, defined either by a secretKeyRef key, in the case of Secret values, or configMapKeyRef for ConfigMap values. These keys point to the Secret and ConfigMap files we created in the previous Step.

      Next, under the ports key, modify the containerPort definition to specify the port on the container where our application will be exposed:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
          ...
            env:
          ...
            ports:
              - name: http
                containerPort: 8080
                protocol: TCP
            ...
      

      Next, let's modify the liveness and readiness checks that are included in this Deployment manifest by default. These checks ensure that our application Pods are running and ready to serve traffic:

      • Readiness probes assess whether or not a Pod is ready to serve traffic, stopping all requests to the Pod until the checks succeed.
      • Liveness probes check basic application behavior to determine whether or not the application in the container is running and behaving as expected. If a liveness probe fails, Kubernetes will restart the container.

      For more about both, see the relevant discussion in Architecting Applications for Kubernetes.

      In our case, we will build on the httpGet request that Helm has provided by default and test whether or not our application is accepting requests on the /sharks endpoint. The kubelet service will perform the probe by sending a GET request to the Node server running in the application Pod's container and listening on port 8080. If the status code for the response is at least 200 and below 400, the kubelet will conclude that the container is healthy. Otherwise, in the case of a 4xx or 5xx status, the kubelet will either stop traffic to the container, in the case of the readiness probe, or restart the container, in the case of the liveness probe.

      Add the following modification to the stated path for the liveness and readiness probes:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
          ...
            env:
          ...
            ports:
              - name: http
                containerPort: 8080
                protocol: TCP
            livenessProbe:
              httpGet:
                path: /sharks
                port: http
            readinessProbe:
              httpGet:
                path: /sharks
                port: http
      

      Save and close the file when you are finished editing.

      You are now ready to create your application release with Helm. Run the following helm install command, which includes the name of the release and the location of the chart directory:

      • helm install --name nodejs ./nodeapp

      Remember that you can run helm install with the --dry-run and --debug options first, as discussed in Step 3, to check the generated manifests for your release.
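
      For example, to render and inspect the generated manifests without actually creating any objects, you could run something like the following before the real install:

      • helm install --dry-run --debug --name nodejs ./nodeapp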

      Again, because we are not including the --namespace flag with helm install, our chart objects will be created in the default namespace.

      You will see the following output indicating that your release has been created:

      Output

      NAME:   nodejs
      LAST DEPLOYED: Wed Apr 17 18:10:29 2019
      NAMESPACE: default
      STATUS: DEPLOYED

      RESOURCES:
      ==> v1/ConfigMap
      NAME           DATA  AGE
      nodejs-config  4     1s

      ==> v1/Deployment
      NAME            READY  UP-TO-DATE  AVAILABLE  AGE
      nodejs-nodeapp  0/3    3           0          1s

      ...

      Again, the output will indicate the status of the release, along with information about the created objects and how you can interact with them.

      Check the status of your Pods:
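
      • kubectl get pods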

      Output

      NAME                              READY   STATUS    RESTARTS   AGE
      mongo-mongodb-replicaset-0        1/1     Running   0          57m
      mongo-mongodb-replicaset-1        1/1     Running   0          56m
      mongo-mongodb-replicaset-2        1/1     Running   0          55m
      nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          117s
      nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          117s
      nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          117s

      Once your Pods are up and running, check your Services:
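
      • kubectl get svc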

      Output

      NAME                              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
      kubernetes                        ClusterIP      10.245.0.1     <none>        443/TCP        96m
      mongo-mongodb-replicaset          ClusterIP      None           <none>        27017/TCP      58m
      mongo-mongodb-replicaset-client   ClusterIP      None           <none>        27017/TCP      58m
      nodejs-nodeapp                    LoadBalancer   10.245.33.46   your_lb_ip    80:31518/TCP   3m22s

      The EXTERNAL-IP associated with the nodejs-nodeapp Service is the IP address where you can access the application from outside of the cluster. If you see a <pending> status in the EXTERNAL-IP column, this means that your load balancer is still being created.

      Once you see an IP in that column, navigate to it in your browser: http://your_lb_ip.

      You should see the following landing page:

      Application Landing Page

      Now that your replicated application is working, let's add some test data to ensure that replication is working between members of the replica set.

      Step 6 — Testing MongoDB Replication

      With our application running and accessible through an external IP address, we can add some test data and ensure that it is being replicated between the members of our MongoDB replica set.

      First, make sure you have navigated your browser to the application landing page:

      Application Landing Page

      Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark's general character:

      Shark Info Form

      In the form, add an initial shark of your choosing. To demonstrate, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

      Filled Shark Form

      Click on the Submit button. You will see a page with this shark information displayed back to you:

      Shark Output

      Now head back to the shark information form by clicking on Sharks in the top navigation bar:

      Shark Info Form

      Enter a new shark of your choosing. We'll go with Whale Shark and Large:

      Enter New Shark

      Once you click Submit, you will see that the new shark has been added to the shark collection in your database:

      Complete Shark Collection

      Let's check that the data we've entered has been replicated between the primary and secondary members of our replica set.

      Get a list of your Pods:
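
      • kubectl get pods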

      Output

      NAME                              READY   STATUS    RESTARTS   AGE
      mongo-mongodb-replicaset-0        1/1     Running   0          74m
      mongo-mongodb-replicaset-1        1/1     Running   0          73m
      mongo-mongodb-replicaset-2        1/1     Running   0          72m
      nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          5m4s
      nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          5m4s
      nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          5m4s

      To access the mongo shell on your Pods, you can use the kubectl exec command and the username you used to create your mongo-secret in Step 2. Access the mongo shell on the first Pod in the StatefulSet with the following command:

      • kubectl exec -it mongo-mongodb-replicaset-0 -- mongo -u your_database_username -p --authenticationDatabase admin

      When prompted, enter the password associated with this username:

      Output

      MongoDB shell version v4.1.9
      Enter password:

      You will be dropped into an administrative shell:

      Output

      MongoDB server version: 4.1.9
      Welcome to the MongoDB shell.
      ...
      db:PRIMARY>

      Though the prompt itself includes this information, you can manually check to see which replica set member is the primary with the rs.isMaster() method:

      You will see output like the following, indicating the hostname of the primary:

      Output

      db:PRIMARY> rs.isMaster()
      {
              "hosts" : [
                      "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                      "mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                      "mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local:27017"
              ],
              ...
              "primary" : "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
              ...

      Next, switch to your sharkinfo database:
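
      • use sharkinfo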

      Output

      switched to db sharkinfo

      List the collections in the database:
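
      • show collections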

      Output

      sharks

      Output the documents in the collection:
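
      • db.sharks.find()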

      You will see the following output:

      Output

      { "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 } { "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }

      Exit the MongoDB Shell:
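
      • exit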

      Now that we have checked the data on our primary, let's check that it's being replicated to a secondary. kubectl exec into mongo-mongodb-replicaset-1 with the following command:

      • kubectl exec -it mongo-mongodb-replicaset-1 -- mongo -u your_database_username -p --authenticationDatabase admin

      Once in the administrative shell, we will need to use the db.setSlaveOk() method to permit read operations from the secondary instance:
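
      • db.setSlaveOk()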

      Switch to the sharkinfo database:
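
      • use sharkinfo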

      Output

      switched to db sharkinfo

      Permit the read operation of the documents in the sharks collection:
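
      • db.setSlaveOk()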

      Output the documents in the collection:
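
      • db.sharks.find()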

      You should now see the same information that you saw when running this method on your primary instance:

      Output

      db:SECONDARY> db.sharks.find()
      { "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 }
      { "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }

      This output confirms that your application data is being replicated between the members of your replica set.

      Conclusion

      You have now deployed a replicated, highly-available shark information application on a Kubernetes cluster using Helm charts. This demo application and the workflow outlined in this tutorial can act as a starting point as you build custom charts for your application and take advantage of Helm's stable repository and other chart repositories.

      As you move toward production, consider implementing the following:

      To learn more about Helm, see An Introduction to Helm, the Package Manager for Kubernetes, How To Install Software on Kubernetes Clusters with the Helm Package Manager, and the Helm documentation.




      How to Deploy a Resilient Go Application to DigitalOcean Kubernetes


      The author selected Girls Who Code to receive a donation as part of the Write for DOnations program.

      Introduction

      Docker is a containerization tool used to provide applications with a filesystem holding everything they need to run, ensuring that the software will have a consistent run-time environment and will behave the same way regardless of where it is deployed. Kubernetes is an open-source container orchestration platform for automating the deployment, scaling, and management of containerized applications.

      By leveraging Docker, you can deploy an application on any system that supports Docker with the confidence that it will always work as intended. Kubernetes, meanwhile, allows you to deploy your application across multiple nodes in a cluster. Additionally, it handles key tasks such as bringing up new containers should any of your containers crash. Together, these tools streamline the process of deploying an application, allowing you to focus on development.

      In this tutorial, you will build an example application written in Go and get it up and running locally on your development machine. Then you’ll containerize the application with Docker, deploy it to a Kubernetes cluster, and create a load balancer that will serve as the public-facing entry point to your application.

      Prerequisites

      Before you begin this tutorial, you will need the following:

      • A development server or local machine from which you will deploy the application. Although the instructions in this guide will largely work for most operating systems, this tutorial assumes that you have access to an Ubuntu 18.04 system configured with a non-root user with sudo privileges, as described in our Initial Server Setup for Ubuntu 18.04 tutorial.
      • The docker command-line tool installed on your development machine. To install this, follow Steps 1 and 2 of our tutorial on How to Install and Use Docker on Ubuntu 18.04.
      • The kubectl command-line tool installed on your development machine. To install this, follow this guide from the official Kubernetes documentation.
      • A free account on Docker Hub to which you will push your Docker image. To set this up, visit the Docker Hub website, click the Get Started button at the top-right of the page, and follow the registration instructions.
      • A Kubernetes cluster. You can provision a DigitalOcean Kubernetes cluster by following our Kubernetes Quickstart guide. You can still complete this tutorial if you provision your cluster from another cloud provider. Wherever you procure your cluster, be sure to set up a configuration file and ensure that you can connect to the cluster from your development server.

      Step 1 — Building a Sample Web Application in Go

      In this step, you will build a sample application written in Go. Once you containerize this app with Docker, it will serve My Awesome Go App in response to requests to your server’s IP address at port 3000.

      Get started by updating your server’s package lists if you haven’t done so recently:
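
      • sudo apt update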

      Then install Go by running:

      Next, make sure you're in your home directory and create a new directory which will contain all of your project files:
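
      • mkdir go-app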

      Then navigate to this new directory:
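
      • cd go-app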

      Use nano or your preferred text editor to create a file named main.go which will contain the code for your Go application:
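
      • nano main.go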

      The first line in any Go source file is always a package statement that defines which code bundle the file belongs to. For executable files like this one, the package statement must point to the main package:

      go-app/main.go

      package main
      

      Following that, add an import statement where you can list all the libraries the application will need. Here, include fmt, which handles formatted text input and output, and net/http, which provides HTTP client and server implementations:

      go-app/main.go

      package main
      
      import (
        "fmt"
        "net/http"
      )
      

      Next, define a homePage function which will take in two arguments: http.ResponseWriter and a pointer to http.Request. In Go, a ResponseWriter interface is used to construct an HTTP response, while http.Request is an object representing an incoming request. Thus, this block reads incoming HTTP requests and then constructs a response:

      go-app/main.go

      . . .
      
      import (
        "fmt"
        "net/http"
      )
      
      func homePage(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "My Awesome Go App")
      }
      

      After this, add a setupRoutes function which will map incoming requests to their intended HTTP handler functions. In the body of this setupRoutes function, add a mapping of the / route to your newly defined homePage function. This tells the application to print the My Awesome Go App message even for requests made to unknown endpoints:

      go-app/main.go

      . . .
      
      func homePage(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "My Awesome Go App")
      }
      
      func setupRoutes() {
        http.HandleFunc("/", homePage)
      }
      

      And finally, add the following main function. This will print out a string indicating that your application has started. It will then call the setupRoutes function before listening and serving your Go application on port 3000.

      go-app/main.go

      . . .
      
      func setupRoutes() {
        http.HandleFunc("/", homePage)
      }
      
      func main() {
        fmt.Println("Go Web App Started on Port 3000")
        setupRoutes()
        http.ListenAndServe(":3000", nil)
      }
      

      After adding these lines, this is how the final file will look:

      go-app/main.go

      package main
      
      import (
        "fmt"
        "net/http"
      )
      
      func homePage(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "My Awesome Go App")
      }
      
      func setupRoutes() {
        http.HandleFunc("/", homePage)
      }
      
      func main() {
        fmt.Println("Go Web App Started on Port 3000")
        setupRoutes()
        http.ListenAndServe(":3000", nil)
      }
      

      Save and close this file. If you created this file using nano, do so by pressing CTRL + X, Y, then ENTER.

      Next, run the application using the following go run command. This will compile the code in your main.go file and run it locally on your development machine:
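
      • go run main.go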

      Output

      Go Web App Started on Port 3000

      This output confirms that the application is working as expected. It will run indefinitely, however, so close it by pressing CTRL + C.

      Throughout this guide, you will use this sample application to experiment with Docker and Kubernetes. To that end, continue reading to learn how to containerize your application with Docker.

      Step 2 — Dockerizing Your Go Application

      In its current state, the Go application you just created is only running on your development server. In this step, you'll make this new application portable by containerizing it with Docker. This will allow it to run on any machine that supports Docker containers. You will build a Docker image and push it to a central public repository on Docker Hub. This way, your Kubernetes cluster can pull the image back down and deploy it as a container within the cluster.

      The first step towards containerizing your application is to create a special script called a Dockerfile. A Dockerfile typically contains a list of instructions and arguments that run in sequential order so as to automatically perform certain actions on a base image or create a new one.

      Note: In this step, you will configure a simple Docker container that will build and run your Go application in a single stage. If, in the future, you want to reduce the size of the container where your Go applications will run in production, you may want to look into multi-stage builds.
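
      To give a rough sense of that approach, here is a minimal multi-stage sketch based on the single-stage Dockerfile you are about to write. It is included for reference only and makes the same assumptions as this step (a main package in the project root); the builder stage compiles the binary, and only the finished executable is copied into a small runtime image:

      go-app/Dockerfile (multi-stage example)

      # Stage 1: compile the Go binary
      FROM golang:1.12.0-alpine3.9 AS builder
      RUN mkdir /app
      ADD . /app
      WORKDIR /app
      RUN go build -o main .

      # Stage 2: copy only the compiled binary into a small runtime image
      FROM alpine:3.9
      COPY --from=builder /app/main /app/main
      CMD ["/app/main"]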

      Create a new file named Dockerfile:
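
      • nano Dockerfile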

      At the top of the file, specify the base image needed for the Go app:

      go-app/Dockerfile

      FROM golang:1.12.0-alpine3.9
      

      Then create an app directory within the container that will hold the application's source files:

      go-app/Dockerfile

      FROM golang:1.12.0-alpine3.9
      RUN mkdir /app
      

      Below that, add the following line, which copies everything in the build context (your project's root directory) into the app directory:

      go-app/Dockerfile

      FROM golang:1.12.0-alpine3.9
      RUN mkdir /app
      ADD . /app
      

      Next, add the following line which changes the working directory to app, meaning that all the following commands in this Dockerfile will be run from that location:

      go-app/Dockerfile

      FROM golang:1.12.0-alpine3.9
      RUN mkdir /app
      ADD . /app
      WORKDIR /app
      

      Add a line instructing Docker to run the go build -o main command, which compiles the binary executable of the Go app:

      go-app/Dockerfile

      FROM golang:1.12.0-alpine3.9
      RUN mkdir /app
      ADD . /app
      WORKDIR /app
      RUN go build -o main .
      

      Then add the final line, which will run the binary executable:

      go-app/Dockerfile

      FROM golang:1.12.0-alpine3.9
      RUN mkdir /app
      ADD . /app
      WORKDIR /app
      RUN go build -o main .
      CMD ["/app/main"]
      

      Save and close the file after adding these lines.

      Now that you have this Dockerfile in the root of your project, you can create a Docker image based off of it using the following docker build command. This command includes the -t flag which, when passed the value go-web-app, will name the Docker image go-web-app and tag it.

      Note: In Docker, tags allow you to convey information specific to a given image, such as its version number. The following command doesn't provide a specific tag, so Docker will tag the image with its default tag: latest. If you want to give an image a custom tag, you would append the image name with a colon and the tag of your choice, like so:

      • docker build -t sammy/image_name:tag_name .

      Tagging an image like this can give you greater control over your images. For example, you could deploy an image tagged v1.1 to production, but deploy another tagged v1.2 to your pre-production or testing environment.

      The final argument you'll pass is the path, a single dot (.), which specifies that you wish to build the Docker image from the contents of the current working directory. Also, be sure to update sammy to your Docker Hub username:

      • docker build -t sammy/go-web-app .

      This build command will read all of the lines in your Dockerfile, execute them in order, and then cache them, allowing future builds to run much faster:

      Output

      . . .
      Successfully built 521679ff78e5
      Successfully tagged go-web-app:latest

      Once this command finishes building the image, you will be able to see it when you run the docker images command like so:
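
      • docker images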

      Output

      REPOSITORY          TAG      IMAGE ID       CREATED         SIZE
      sammy/go-web-app    latest   4ee6cf7a8ab4   3 seconds ago   355MB

      Next, use the following command to create and start a container based on the image you just built. This command includes the -it flag, which specifies that the container will run in interactive mode. It also has the -p flag, which maps the port on which the Go application is running on your development machine, port 3000, to port 3000 in your Docker container:

      • docker run -it -p 3000:3000 sammy/go-web-app

      Output

      Go Web App Started on Port 3000

      If there is nothing else running on that port, you'll be able to see the application in action by opening up a browser and navigating to the following URL:

      http://your_server_ip:3000
      

      Note: If you're following this tutorial from your local machine instead of a server, visit the application by instead going to the following URL:

      http://localhost:3000
      

      Your containerized Go App

      After checking that the application works as expected in your browser, stop it by pressing CTRL + C in your terminal.

      When you deploy your containerized application to your Kubernetes cluster, you'll need to be able to pull the image from a centralized location. To that end, you can push your newly created image to your Docker Hub image repository.

      Run the following command to log in to Docker Hub from your terminal:
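
      • docker login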

      This will prompt you for your Docker Hub username and password. After entering them correctly, you will see Login Succeeded in the command's output.

      After logging in, push your new image up to Docker Hub using the docker push command, like so:

      • docker push sammy/go-web-app

      Once this command has successfully completed, you will be able to open up your Docker Hub account and see your Docker image there.

      Now that you've pushed your image to a central location, you're ready to deploy it to your Kubernetes cluster. First, though, we will walk through a brief process that will make it much less tedious to run kubectl commands.

      Step 3 — Improving Usability for kubectl

      By this point, you've created a functioning Go application and containerized it with Docker. However, the application still isn't publicly accessible. To resolve this, you will deploy your new Docker image to your Kubernetes cluster using the kubectl command line tool. Before doing this, though, let's make a small change to the Kubernetes configuration file that will help to make running kubectl commands less laborious.

      By default, when you run commands with the kubectl command-line tool, you have to specify the path of the cluster configuration file using the --kubeconfig flag. However, if your configuration file is named config and is stored in a directory named ~/.kube, kubectl will know where to look for the configuration file and will be able to pick it up without the --kubeconfig flag pointing to it.
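
      For reference, the more verbose form looks something like the following sketch, assuming your configuration file is still named clusterconfig.yaml as in the mv command below:

      • kubectl --kubeconfig=clusterconfig.yaml get nodes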

      To that end, if you haven't already done so, create a new directory called ~/.kube:
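
      • mkdir -p ~/.kube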

      Then move your cluster configuration file to this directory, and rename it config in the process:

      • mv clusterconfig.yaml ~/.kube/config

      Moving forward, you won't need to specify the location of your cluster's configuration file when you run kubectl, as the command will be able to find it now that it's in the default location. Test out this behavior by running the following get nodes command:
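
      • kubectl get nodes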

      This will display all of the nodes that reside within your Kubernetes cluster. In the context of Kubernetes, a node is a server or a worker machine on which one or more pods can be deployed:

      Output

      NAME                                        STATUS   ROLES    AGE   VERSION
      k8s-1-13-5-do-0-nyc1-1554148094743-1-7lfd   Ready    <none>   1m    v1.13.5
      k8s-1-13-5-do-0-nyc1-1554148094743-1-7lfi   Ready    <none>   1m    v1.13.5
      k8s-1-13-5-do-0-nyc1-1554148094743-1-7lfv   Ready    <none>   1m    v1.13.5

      With that, you're ready to move on and deploy your application to your Kubernetes cluster. You will do this by creating two Kubernetes objects: one that will deploy the application to some pods in your cluster and another that will create a load balancer, providing an access point to your application.

      Step 4 — Creating a Deployment

      RESTful resources make up all the persistent entities within a Kubernetes system, and in this context they're commonly referred to as Kubernetes objects. It's helpful to think of Kubernetes objects as the work orders you submit to Kubernetes: you list what resources you need and how they should work, and then Kubernetes will constantly work to ensure that they exist in your cluster.

      One kind of Kubernetes object, known as a deployment, is a set of identical, indistinguishable pods. In Kubernetes, a pod is a grouping of one or more containers which are able to communicate over the same shared network and interact with the same shared storage. A deployment runs more than one replica of the parent application at a time and automatically replaces any instances that fail, ensuring that your application is always available to serve user requests.

      In this step, you'll create a Kubernetes object description file, also known as a manifest, for a deployment. This manifest will contain all of the configuration details needed to deploy your Go app to your cluster.

      Begin by creating a deployment manifest in the root directory of your project: go-app/. For small projects such as this one, keeping them in the root directory minimizes the complexity. For larger projects, however, it may be beneficial to store your manifests in a separate subdirectory so as to keep everything organized.

      Create a new file called deployment.yml:
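
      • nano deployment.yml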

      Different versions of the Kubernetes API contain different object definitions, so at the top of this file you must define the apiVersion you're using to create this object. For the purpose of this tutorial, you will be using the apps/v1 grouping as it contains many of the core Kubernetes object definitions that you'll need in order to create a deployment. Add a field below apiVersion describing the kind of Kubernetes object you're creating. In this case, you're creating a Deployment:

      go-app/deployment.yml

      ---
      apiVersion: apps/v1
      kind: Deployment
      

      Then define the metadata for your deployment. A metadata field is required for every Kubernetes object as it contains information such as the unique name of the object. This name is useful as it allows you to distinguish different deployments from one another and identify them using names that are human-readable:

      go-app/deployment.yml

      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
          name: go-web-app
      

      Next, you'll build out the spec block of your deployment.yml. A spec field is a requirement for every Kubernetes object, but its precise format differs for each type of object. In the case of a deployment, it can contain information such as the number of replicas you want to run. In Kubernetes, replicas are identical copies of a pod that run in your cluster. Here, set the number of replicas to 5:

      go-app/deployment.yml

      . . .
      metadata:
          name: go-web-app
      spec:
        replicas: 5
      

      Next, create a selector block nested under the spec block. This will serve as a label selector for your pods. Kubernetes uses label selectors to define how the deployment finds the pods which it must manage.

      Within this selector block, define matchLabels and add the name label. Essentially, the matchLabels field tells Kubernetes which pods the deployment applies to. In this example, the deployment will apply to any pods carrying the label name: go-web-app:

      go-app/deployment.yml

      . . .
      spec:
        replicas: 5
        selector:
          matchLabels:
            name: go-web-app
      

      After this, add a template block. Every deployment creates a set of pods using the labels specified in a template block. The first subfield in this block is metadata which contains the labels that will be applied to all of the pods in this deployment. These labels are key/value pairs that are used as identifying attributes of Kubernetes objects. When you define your service later on, you can specify that you want all the pods with this name label to be grouped under that service. Set this name label to go-web-app:

      go-app/deployment.yml

      . . .
      spec:
        replicas: 5
        selector:
          matchLabels:
            name: go-web-app
        template:
          metadata:
            labels:
              name: go-web-app
      

      The second part of this template block is the spec block. This is different from the spec block you added previously, as this one applies only to the pods created by the template block, rather than the whole deployment.

      Within this spec block, add a containers field and once again define a name attribute. This name field defines the name of any containers created by this particular deployment. Below that, define the image you want to pull down and deploy. Be sure to change sammy to your own Docker Hub username:

      go-app/deployment.yml

      . . .
        template:
          metadata:
            labels:
              name: go-web-app
          spec:
            containers:
            - name: application
              image: sammy/go-web-app
      

      Following that, add an imagePullPolicy field set to IfNotPresent which will direct the deployment to only pull an image if it has not already done so before. Then, lastly, add a ports block. There, define the containerPort which should match the port number that your Go application listens on. In this case, the port number is 3000:

      go-app/deployment.yml

      . . .
          spec:
            containers:
            - name: application
              image: sammy/go-web-app
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 3000
      

      The full version of your deployment.yml will look like this:

      go-app/deployment.yml

      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: go-web-app
      spec:
        replicas: 5
        selector:
          matchLabels:
            name: go-web-app
        template:
          metadata:
            labels:
              name: go-web-app
          spec:
            containers:
            - name: application
              image: sammy/go-web-app
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 3000
      

      Save and close the file.

      Next, apply your new deployment with the following command:

      • kubectl apply -f deployment.yml

      Note: For more information on all of the configuration available to you for deployments, please check out the official Kubernetes documentation here: Kubernetes Deployments
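
      If you'd like to confirm that the Deployment and its five pods came up as expected before moving on, you can check with a couple of standard kubectl commands; go-web-app here is the Deployment name set in the manifest's metadata:

      • kubectl get deployment go-web-app
      • kubectl get pods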

      In the next step, you'll create another kind of Kubernetes object which will manage how you access the pods that exist in your new deployment. This service will create a load balancer which will then expose a single IP address, and requests to this IP address will be distributed to the replicas in your deployment. This service will also handle port forwarding rules so that you can access your application over HTTP.

      Step 5 — Creating a Service

      Now that you have a successful Kubernetes deployment, you're ready to expose your application to the outside world. In order to do this, you'll need to define another kind of Kubernetes object: a service. This service will expose the same port on all of your cluster's nodes. Your nodes will then forward any incoming traffic on that port to the pods running your application.

      Note: For clarity, we will define this service object in a separate file. However, it is possible to group multiple resource manifests in the same YAML file, as long as they're separated by ---. See this page from the Kubernetes documentation for more details.

      Create a new file called service.yml:
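
      • nano service.yml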

      Start this file off by again defining the apiVersion and the kind fields in a similar fashion to your deployment.yml file. This time, point the apiVersion field to v1, the Kubernetes API commonly used for services:

      go-app/service.yml

      ---
      apiVersion: v1
      kind: Service
      

      Next, add the name of your service in a metadata block as you did in deployment.yml. This could be anything you like, but for clarity we will call it go-web-service:

      go-app/service.yml

      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: go-web-service
      

      Next, create a spec block. This spec block will be different than the one included in your deployment, and it will contain the type of this service, as well as the port forwarding configuration and the selector.

      Add a field defining this service's type and set it to LoadBalancer. This will automatically provision a load balancer that will act as the main entry point to your application.

      Warning: The method for creating a load balancer outlined in this step will only work for Kubernetes clusters provisioned from cloud providers that also support external load balancers. Additionally, be advised that provisioning a load balancer from a cloud provider will incur additional costs. If this is a concern for you, you may want to look into exposing an external IP address using an Ingress.

      go-app/service.yml

      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: go-web-service
      spec:
        type: LoadBalancer
      

      Then add a ports block where you'll define how you want your apps to be accessed. Nested within this block, add the following fields:

      • name, pointing to http
      • port, pointing to port 80
      • targetPort, pointing to port 3000

      This will take incoming HTTP requests on port 80 and forward them to the targetPort of 3000. This targetPort is the same port on which your Go application is running:

      go-app/service.yml

      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: go-web-service
      spec:
        type: LoadBalancer
        ports:
        - name: http
          port: 80
          targetPort: 3000
      

      Lastly, add a selector block as you did in the deployment.yml file. This selector block is important, as it maps any deployed pods labeled name: go-web-app to this service:

      go-app/service.yml

      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: go-web-service
      spec:
        type: LoadBalancer
        ports:
        - name: http
          port: 80
          targetPort: 3000
        selector:
          name: go-web-app
      

      After adding these lines, save and close the file. Following that, apply this service to your Kubernetes cluster by once again using the kubectl apply command like so:

      • kubectl apply -f service.yml

      This command will apply the new Kubernetes service as well as create a load balancer. This load balancer will serve as the public-facing entry point to your application running within the cluster.

      To view the application, you will need the new load balancer's IP address. Find it by running the following command:
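
      • kubectl get services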

      Output

      NAME             TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
      go-web-service   LoadBalancer   10.245.107.189   203.0.113.20   80:30533/TCP   10m
      kubernetes       ClusterIP      10.245.0.1       <none>         443/TCP        3h4m

      You may have more than one service running, but find the one labeled go-web-service. Find the EXTERNAL-IP column and copy the IP address associated with the go-web-service. In this example output, this IP address is 203.0.113.20. Then, paste the IP address into the URL bar of your browser to view the application running on your Kubernetes cluster.

      Note: When Kubernetes creates a load balancer in this manner, it does so asynchronously. Consequently, the kubectl get services command's output may show the EXTERNAL-IP address of the LoadBalancer remaining in a <pending> state for some time after running the kubectl apply command. If this is the case, wait a few minutes and try re-running the command to ensure that the load balancer was created and is functioning as expected.
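
      If you would rather not re-run the command manually, you can add the standard -w (watch) flag to kubectl get services, which keeps the output open and prints an update once the external IP is assigned:

      • kubectl get services -w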

      The load balancer will take in the request on port 80 and forward it to one of the pods running within your cluster.

      Your working Go App!

      With that, you've created a Kubernetes service coupled with a load balancer, giving you a single, stable entry point to your application.

      Conclusion

      In this tutorial, you've built a Go application, containerized it with Docker, and then deployed it to a Kubernetes cluster. You then created a load balancer that provides a resilient entry point to this application, ensuring that it will remain highly available even if one of the nodes in your cluster fails. You can use this tutorial to deploy your own Go application to a Kubernetes cluster, or continue learning other Kubernetes and Docker concepts with the sample application you created in Step 1.

      Moving forward, you could map your load balancer's IP address to a domain name that you control so that you can access the application through a human-readable web address rather than the load balancer IP. Additionally, the following Kubernetes tutorials may be of interest to you:

      Finally, if you'd like to learn more about Go, we encourage you to check out our series on How To Code in Go.


