
      How To Set Up a Node.js Application for Production on Debian 10


      Introduction

      Node.js is an open-source JavaScript runtime environment for building server-side and networking applications. The platform runs on Linux, macOS, FreeBSD, and Windows. Though you can run Node.js applications at the command line, this tutorial will focus on running them as a service. This means that the applications will restart on reboot or failure and are safe for use in a production environment.

      In this tutorial, you will set up a production-ready Node.js environment on a single Debian 10 server. This server will run a Node.js application managed by PM2, and provide users with secure access to the application through an Nginx reverse proxy. The Nginx server will offer HTTPS, using a free certificate provided by Let’s Encrypt.

      Prerequisites

This guide assumes that you have the following:

      • One Debian 10 server set up according to the initial server setup guide, including a non-root user with sudo privileges and a firewall.
      • A registered domain name pointed at your server's public IP address.
      • Nginx installed and configured with a server block for your domain, secured with a free Let's Encrypt TLS certificate.

      When you’ve completed the prerequisites, you will have a server serving your domain’s default placeholder page at https://your_domain/.

      Step 1 — Installing Node.js

      Let’s begin by installing the latest LTS release of Node.js, using the NodeSource package archives.

      To install the NodeSource PPA and access its contents, you will first need to update your package index and install curl:

      • sudo apt update
      • sudo apt install curl

      Make sure you’re in your home directory, and then use curl to retrieve the installation script for the Node.js 10.x archives:

      • cd ~
      • curl -sL https://deb.nodesource.com/setup_10.x -o nodesource_setup.sh

You can inspect the contents of this script with nano or your preferred text editor:

      • nano nodesource_setup.sh

      When you're done inspecting the script, run it under sudo:

      • sudo bash nodesource_setup.sh

The PPA will be added to your configuration and your local package cache will be updated automatically. After running the setup script from NodeSource, you can install the Node.js package:

      • sudo apt install nodejs

To check which version of Node.js you have installed after these initial steps, type:

      • nodejs -v

      Output

      v10.16.0

      Note: When installing from the NodeSource PPA, the Node.js executable is called nodejs, rather than node.

      The nodejs package contains the nodejs binary as well as npm, a package manager for Node modules, so you don't need to install npm separately.

npm uses a configuration file in your home directory to keep track of updates. It will be created the first time you run npm. Execute this command to verify that npm is installed and to create the configuration file:

      • npm -v

      Output

      6.9.0

      In order for some npm packages to work (those that require compiling code from source, for example), you will need to install the build-essential package:

      • sudo apt install build-essential

      You now have the necessary tools to work with npm packages that require compiling code from source.

      With the Node.js runtime installed, we can move on to writing a Node.js application.

      Step 2 — Creating a Node.js Application

Let's write a Hello World application that returns "Hello World!" to any HTTP request. This sample application will help you get Node.js set up. You can replace it with your own application — just make sure that you modify your application to listen on the appropriate IP addresses and ports.

First, let's create a sample application called hello.js:

      • nano hello.js

      Insert the following code into the file:

      ~/hello.js

      const http = require('http');
      
      const hostname = 'localhost';
      const port = 3000;
      
      const server = http.createServer((req, res) => {
        res.statusCode = 200;
        res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World!\n');
      });
      
      server.listen(port, hostname, () => {
        console.log(`Server running at http://${hostname}:${port}/`);
      });
      

      Save the file and exit the editor.

      This Node.js application listens on the specified address (localhost) and port (3000), and returns "Hello World!" with a 200 HTTP success code. Since we're listening on localhost, remote clients won't be able to connect to our application.
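
      If you later deploy the same file in an environment where the address or port differs, one option is to read both values from environment variables instead of hard coding them. This is not required for this tutorial; a minimal sketch, assuming hypothetical HOST and PORT variables:

      ~/hello.js — Optional

      // Read the address from the environment, falling back to the tutorial's defaults.
      const hostname = process.env.HOST || 'localhost';
      const port = parseInt(process.env.PORT, 10) || 3000;

      You could then run, for example, PORT=4000 node hello.js to listen on a different port without editing the file.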

To test your application, type:

      • node hello.js

      You will see the following output:

      Output

      Server running at http://localhost:3000/

      Note: Running a Node.js application in this manner will block additional commands until you kill the application by pressing CTRL+C.

      To test the application, open another terminal session on your server, and connect to localhost with curl:

      • curl http://localhost:3000

      If you see the following output, the application is working properly and listening on the correct address and port:

      Output

      Hello World!

      If you do not see the expected output, make sure that your Node.js application is running and configured to listen on the proper address and port.

      Once you're sure it's working, kill the application (if you haven't already) by pressing CTRL+C.

      Step 3 — Installing PM2

      Next let's install PM2, a process manager for Node.js applications. PM2 makes it possible to daemonize applications so that they will run in the background as a service.

Use npm to install the latest version of PM2 on your server:

      • sudo npm install pm2@latest -g

      The -g option tells npm to install the module globally, so it's available system-wide.

Let's first use the pm2 start command to run the hello.js application in the background:

      • pm2 start hello.js

This also adds your application to PM2's process list, which is printed every time you start an application:

      Output

[PM2] Spawning PM2 daemon with pm2_home=/home/sammy/.pm2
      [PM2] PM2 Successfully daemonized
      [PM2] Starting /home/sammy/hello.js in fork_mode (1 instance)
      [PM2] Done.
      ┌──────────┬────┬──────┬──────┬────────┬─────────┬────────┬─────┬─────────┬───────┬──────────┐
      │ App name │ id │ mode │ pid  │ status │ restart │ uptime │ cpu │ mem     │ user  │ watching │
      ├──────────┼────┼──────┼──────┼────────┼─────────┼────────┼─────┼─────────┼───────┼──────────┤
      │ hello    │ 0  │ fork │ 1338 │ online │ 0       │ 0s     │ 0%  │ 23.0 MB │ sammy │ disabled │
      └──────────┴────┴──────┴──────┴────────┴─────────┴────────┴─────┴─────────┴───────┴──────────┘
      Use `pm2 show <id|name>` to get more details about an app

      As you can see, PM2 automatically assigns an App name based on the filename without the .js extension, along with a PM2 id. PM2 also maintains other information, such as the PID of the process, its current status, and memory usage.
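
      PM2 can also read process definitions from an ecosystem file, which becomes convenient once you manage more than one application. This tutorial does not use one, but a minimal sketch of a hypothetical ecosystem.config.js would look like this:

      ~/ecosystem.config.js — Optional

      // Hypothetical example; start it with: pm2 start ecosystem.config.js
      module.exports = {
        apps: [
          {
            name: 'hello',        // the App name shown in pm2 list
            script: './hello.js', // the entry point, relative to this file
            instances: 1,         // run a single process in fork mode
            env: {
              NODE_ENV: 'production'
            }
          }
        ]
      };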

Applications that are running under PM2 will be restarted automatically if the application crashes or is killed, but we can take an additional step to get the application to launch on system startup using the startup subcommand. This subcommand generates and configures a startup script to launch PM2 and its managed processes on server boot. Type the following:

      • pm2 startup systemd

      You will see output that looks like this, describing the service configuration that PM2 has generated:

      Output

[PM2] Init System found: systemd
      Platform systemd
      Template
      [Unit]
      Description=PM2 process manager
      Documentation=https://pm2.keymetrics.io/
      After=network.target

      [Service]
      Type=forking
      User=root
      LimitNOFILE=infinity
      LimitNPROC=infinity
      LimitCORE=infinity
      Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
      Environment=PM2_HOME=/root/.pm2
      PIDFile=/root/.pm2/pm2.pid
      Restart=on-failure

      ExecStart=/usr/lib/node_modules/pm2/bin/pm2 resurrect
      ExecReload=/usr/lib/node_modules/pm2/bin/pm2 reload all
      ExecStop=/usr/lib/node_modules/pm2/bin/pm2 kill

      [Install]
      WantedBy=multi-user.target

      Target path
      /etc/systemd/system/pm2-root.service
      Command list
      [ 'systemctl enable pm2-root' ]
      [PM2] Writing init configuration in /etc/systemd/system/pm2-root.service
      [PM2] Making script booting at startup...
      [PM2] [-] Executing: systemctl enable pm2-root...
      Created symlink /etc/systemd/system/multi-user.target.wants/pm2-root.service → /etc/systemd/system/pm2-root.service.
      [PM2] [v] Command successfully executed.
      +---------------------------------------+
      [PM2] Freeze a process list on reboot via:
      $ pm2 save
      [PM2] Remove init script via:
      $ pm2 unstartup systemd

      You have now created a systemd unit that runs pm2 on boot. This pm2 instance, in turn, runs hello.js.

      Start the service with systemctl:

      • sudo systemctl start pm2-root.service

      Check the status of the systemd unit:

      • systemctl status pm2-root.service

      You should see output like the following:

      Output

● pm2-root.service - PM2 process manager
         Loaded: loaded (/etc/systemd/system/pm2-root.service; enabled; vendor preset: enabled)
         Active: active (running) since Fri 2019-07-12 16:09:54 UTC; 4s ago

      For a detailed overview of systemd, see Systemd Essentials: Working with Services, Units, and the Journal.

      In addition to those we have covered, PM2 provides many subcommands that allow you to manage or look up information about your applications.

Stop an application with this command (specify the PM2 App name or id):

      • pm2 stop app_name_or_id

      Restart an application:

      • pm2 restart app_name_or_id

List the applications currently managed by PM2:

      • pm2 list

Get information about a specific application using its App name:

      • pm2 info app_name

The PM2 process monitor can be pulled up with the monit subcommand. This displays the application status, CPU, and memory usage:

      • pm2 monit

      Note that running pm2 without any arguments will also display a help page with example usage.

      Now that your Node.js application is running and managed by PM2, let's set up the reverse proxy.

      Step 4 — Setting Up Nginx as a Reverse Proxy Server

      Your application is running and listening on localhost, but you need to set up a way for your users to access it. We will set up the Nginx web server as a reverse proxy for this purpose.

      In the prerequisite tutorial, you set up your Nginx configuration in the /etc/nginx/sites-available/your_domain file. Open this file for editing:

      • sudo nano /etc/nginx/sites-available/your_domain

Within the server block, you should have an existing location / block. Replace the contents of that block with the following configuration. If your application is set to listen on a different port, update the port number in the proxy_pass directive to match:

      /etc/nginx/sites-available/your_domain

      server {
      ...
          location / {
              proxy_pass http://localhost:3000;
              proxy_http_version 1.1;
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection 'upgrade';
              proxy_set_header Host $host;
              proxy_cache_bypass $http_upgrade;
          }
      ...
      }
      

      This configures the server to respond to requests at its root. Assuming our server is available at your_domain, accessing https://your_domain/ via a web browser would send the request to hello.js, listening on port 3000 at localhost.

      You can add additional location blocks to the same server block to provide access to other applications on the same server. For example, if you were also running another Node.js application on port 3001, you could add this location block to allow access to it via https://your_domain/app2:

      /etc/nginx/sites-available/your_domain — Optional

      server {
      ...
          location /app2 {
              proxy_pass http://localhost:3001;
              proxy_http_version 1.1;
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection 'upgrade';
              proxy_set_header Host $host;
              proxy_cache_bypass $http_upgrade;
          }
      ...
      }
      

      Once you are done adding the location blocks for your applications, save the file and exit your editor.

Make sure you didn't introduce any syntax errors by typing:

      • sudo nginx -t

      Restart Nginx:

      • sudo systemctl restart nginx

      Assuming that your Node.js application is running and your application and Nginx configurations are correct, you should now be able to access your application via the Nginx reverse proxy. Try it out by accessing your domain in the browser: https://your_domain.

      Conclusion

      Congratulations! You now have your Node.js application running behind an Nginx reverse proxy on a Debian 10 server. This reverse proxy setup is flexible enough to provide your users access to other applications or static web content that you want to share.




      How To Scale a Node.js Application with MongoDB on Kubernetes Using Helm


      Introduction

      Kubernetes is a system for running modern, containerized applications at scale. With it, developers can deploy and manage applications across clusters of machines. And though it can be used to improve efficiency and reliability in single-instance application setups, Kubernetes is designed to run multiple instances of an application across groups of machines.

      When creating multi-service deployments with Kubernetes, many developers opt to use the Helm package manager. Helm streamlines the process of creating multiple Kubernetes resources by offering charts and templates that coordinate how these objects interact. It also offers pre-packaged charts for popular open-source projects.

      In this tutorial, you will deploy a Node.js application with a MongoDB database onto a Kubernetes cluster using Helm charts. You will use the official Helm MongoDB replica set chart to create a StatefulSet object consisting of three Pods, a Headless Service, and three PersistentVolumeClaims. You will also create a chart to deploy a multi-replica Node.js application using a custom application image. The setup you will build in this tutorial will mirror the functionality of the code described in Containerizing a Node.js Application with Docker Compose and will be a good starting point to build a resilient Node.js application with a MongoDB data store that can scale with your needs.

      Prerequisites

To complete this tutorial, you will need:

      • A Kubernetes cluster (this tutorial uses a DigitalOcean Kubernetes cluster).
      • The kubectl command-line tool installed and configured to connect to your cluster.
      • Helm installed on your local machine, with Tiller installed on your cluster.
      • Docker installed on the machine where you will build images, and an account on Docker Hub.

      Step 1 — Cloning and Packaging the Application

      To use our application with Kubernetes, we will need to package it so that the kubelet agent can pull the image. Before packaging the application, however, we will need to modify the MongoDB connection URI in the application code to ensure that our application can connect to the members of the replica set that we will create with the Helm mongodb-replicaset chart.

      Our first step will be to clone the node-mongo-docker-dev repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in Containerizing a Node.js Application for Development With Docker Compose, which uses a demo Node.js application with a MongoDB database to demonstrate how to set up a development environment with Docker Compose. You can find more information about the application itself in the series From Containers to Kubernetes with Node.js.

      Clone the repository into a directory called node_project:

      • git clone https://github.com/do-community/node-mongo-docker-dev.git node_project

      Navigate to the node_project directory:

      The node_project directory contains files and directories for a shark information application that works with user input. It has been modernized to work with containers: sensitive and specific configuration information has been removed from the application code and refactored to be injected at runtime, and the application's state has been offloaded to a MongoDB database.

      For more information about designing modern, containerized applications, please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes.

      When we deploy the Helm mongodb-replicaset chart, it will create:

      • A StatefulSet object with three Pods — the members of the MongoDB replica set. Each Pod will have an associated PersistentVolumeClaim and will maintain a fixed identity in the event of rescheduling.
      • A MongoDB replica set made up of the Pods in the StatefulSet. The set will include one primary and two secondaries. Data will be replicated from the primary to the secondaries, ensuring that our application data remains highly available.

      For our application to interact with the database replicas, the MongoDB connection URI in our code will need to include both the hostnames of the replica set members as well as the name of the replica set itself. We therefore need to include these values in the URI.

The file in our cloned repository that specifies database connection information is called db.js. Open that file now using nano or your favorite editor:

      • nano db.js

      Currently, the file includes constants that are referenced in the database connection URI at runtime. The values for these constants are injected using Node's process.env property, which returns an object with information about your user environment at runtime. Setting values dynamically in our application code allows us to decouple the code from the underlying infrastructure, which is necessary in a dynamic, stateless environment. For more information about refactoring application code in this way, see Step 2 of Containerizing a Node.js Application for Development With Docker Compose and the relevant discussion in The 12-Factor App.

      The constants for the connection URI and the URI string itself currently look like this:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      ...
      
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
      ...
      

      In keeping with a 12FA approach, we do not want to hard code the hostnames of our replica instances or our replica set name into this URI string. The existing MONGO_HOSTNAME constant can be expanded to include multiple hostnames — the members of our replica set — so we will leave that in place. We will need to add a replica set constant to the options section of the URI string, however.

      Add MONGO_REPLICASET to both the URI constant object and the connection string:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB,
        MONGO_REPLICASET
      } = process.env;
      
      ...
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?replicaSet=${MONGO_REPLICASET}&authSource=admin`;
      ...
      

      Using the replicaSet option in the options section of the URI allows us to pass in the name of the replica set, which, along with the hostnames defined in the MONGO_HOSTNAME constant, will allow us to connect to the set members.
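
      To make this concrete, here is a sketch of what the template evaluates to, using hypothetical placeholder values rather than the real hostnames you will configure later:

      ~/node_project/db.js — Illustration only

      // Hypothetical values, for illustration:
      const MONGO_USERNAME = 'sammy';
      const MONGO_PASSWORD = 'password';
      const MONGO_HOSTNAME = 'host-0,host-1,host-2'; // comma-separated replica set members
      const MONGO_PORT = 27017;
      const MONGO_DB = 'sharkinfo';
      const MONGO_REPLICASET = 'db';

      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?replicaSet=${MONGO_REPLICASET}&authSource=admin`;
      // -> mongodb://sammy:password@host-0,host-1,host-2:27017/sharkinfo?replicaSet=db&authSource=admin
      // Hosts without an explicit port fall back to MongoDB's default port, 27017.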

      Save and close the file when you are finished editing.

      With your database connection information modified to work with replica sets, you can now package your application, build the image with the docker build command, and push it to Docker Hub.

      Build the image with docker build and the -t flag, which allows you to tag the image with a memorable name. In this case, tag the image with your Docker Hub username and name it node-replicas or a name of your own choosing:

      • docker build -t your_dockerhub_username/node-replicas .

      The . in the command specifies that the build context is the current directory.

It will take a minute or two to build the image. Once it is complete, check your images:

      • docker images

      You will see the following output:

      Output

REPOSITORY                              TAG         IMAGE ID       CREATED         SIZE
      your_dockerhub_username/node-replicas   latest      56a69b4bc882   7 seconds ago   90.1MB
      node                                    10-alpine   aa57b0242b33   6 days ago      71MB

      Next, log in to the Docker Hub account you created in the prerequisites:

      • docker login -u your_dockerhub_username

      When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your non-root user's home directory with your Docker Hub credentials.

      Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_username with your own Docker Hub username:

      • docker push your_dockerhub_username/node-replicas

      You now have an application image that you can pull to run your replicated application with Kubernetes. The next step will be to configure specific parameters to use with the MongoDB Helm chart.

      Step 2 — Creating Secrets for the MongoDB Replica Set

      The stable/mongodb-replicaset chart provides different options when it comes to using Secrets, and we will create two to use with our chart deployment:

      • A Secret for our replica set keyfile that will function as a shared password between replica set members, allowing them to authenticate other members.
      • A Secret for our MongoDB admin user, who will be created as a root user on the admin database. This role will allow you to create subsequent users with limited permissions when deploying your application to production.

      With these Secrets in place, we will be able to set our preferred parameter values in a dedicated values file and create the StatefulSet object and MongoDB replica set with the Helm chart.

First, let's create the keyfile. We will use the openssl command with the rand option to generate a 756-byte random string for the keyfile:

      • openssl rand -base64 756 > key.txt

      The output generated by the command will be base64 encoded, ensuring uniform data transmission, and redirected to a file called key.txt, following the guidelines stated in the mongodb-replicaset chart authentication documentation. The key itself must be between 6 and 1024 characters long, consisting only of characters in the base64 set.
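
      If you would like to sanity-check the generated key against those guidelines from Node.js, a quick sketch (the script name is hypothetical; the file name key.txt matches the command above):

      ~/node_project/check-key.js — Optional

      // Verify that key.txt is 6-1024 characters long and uses only base64 characters.
      const fs = require('fs');

      const key = fs.readFileSync('key.txt', 'utf8').replace(/\s/g, '');
      const valid = key.length >= 6 && key.length <= 1024 && /^[A-Za-z0-9+/=]+$/.test(key);
      console.log(valid ? 'key looks valid' : 'key is invalid');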

      You can now create a Secret called keyfilesecret using this file with kubectl create:

      • kubectl create secret generic keyfilesecret --from-file=key.txt

      This will create a Secret object in the default namespace, since we have not created a specific namespace for our setup.

      You will see the following output indicating that your Secret has been created:

      Output

      secret/keyfilesecret created

Remove key.txt:

      • rm key.txt

Alternatively, if you would like to save the file, be sure to restrict its permissions and add it to your .gitignore file to keep it out of version control.

      Next, create the Secret for your MongoDB admin user. The first step will be to convert your desired username and password to base64.

      Convert your database username:

      • echo -n 'your_database_username' | base64

      Note down the value you see in the output.

      Next, convert your password:

      • echo -n 'your_database_password' | base64

      Take note of the value in the output here as well.
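
      If you prefer to stay in Node.js for this step, the built-in Buffer class produces the same encoding as the echo | base64 pipeline; a small sketch (the script name is hypothetical):

      ~/node_project/encode.js — Optional

      // Equivalent of `echo -n 'value' | base64`, using Node's built-in Buffer class.
      const encodedUsername = Buffer.from('your_database_username').toString('base64');
      const encodedPassword = Buffer.from('your_database_password').toString('base64');
      console.log(encodedUsername);
      console.log(encodedPassword);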

Open a file for the Secret:

      • nano secret.yaml

      Note: Kubernetes objects are typically defined using YAML, which strictly forbids tabs and requires two spaces for indentation. If you would like to check the formatting of any of your YAML files, you can use a linter or test the validity of your syntax using kubectl create with the --dry-run and --validate flags:

      • kubectl create -f your_yaml_file.yaml --dry-run --validate=true

      In general, it is a good idea to validate your syntax before creating resources with kubectl.

      Add the following code to the file to create a Secret that will define a user and password with the encoded values you just created. Be sure to replace the dummy values here with your own encoded username and password:

      ~/node_project/secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: mongo-secret
      data:
        user: your_encoded_username
        password: your_encoded_password
      

      Here, we're using the key names that the mongodb-replicaset chart expects: user and password. We have named the Secret object mongo-secret, but you are free to name it anything you would like.

      Save and close the file when you are finished editing.

      Create the Secret object with the following command:

      • kubectl create -f secret.yaml

      You will see the following output:

      Output

      secret/mongo-secret created

      Again, you can either remove secret.yaml or restrict its permissions and add it to your .gitignore file.

      With your Secret objects created, you can move on to specifying the parameter values you will use with the mongodb-replicaset chart and creating the MongoDB deployment.

      Step 3 — Configuring the MongoDB Helm Chart and Creating a Deployment

      Helm comes with an actively maintained repository called stable that contains the chart we will be using: mongodb-replicaset. To use this chart with the Secrets we've just created, we will create a file with configuration parameter values called mongodb-values.yaml and then install the chart using this file.

      Our mongodb-values.yaml file will largely mirror the default values.yaml file in the mongodb-replicaset chart repository. We will, however, make the following changes to our file:

      • We will set the auth parameter to true to ensure that our database instances start with authorization enabled. This means that all clients will be required to authenticate for access to database resources and operations.
      • We will add information about the Secrets we created in the previous Step so that the chart can use these values to create the replica set keyfile and admin user.
      • We will decrease the size of the PersistentVolumes associated with each Pod in the StatefulSet to use the minimum viable DigitalOcean Block Storage unit, 1GB, though you are free to modify this to meet your storage requirements.

      Before writing the mongodb-values.yaml file, however, you should first check that you have a StorageClass created and configured to provision storage resources. Each of the Pods in your database StatefulSet will have a sticky identity and an associated PersistentVolumeClaim, which will dynamically provision a PersistentVolume for the Pod. If a Pod is rescheduled, the PersistentVolume will be mounted to whichever node the Pod is scheduled on (though each Volume must be manually deleted if its associated Pod or StatefulSet is permanently deleted).

Because we are working with DigitalOcean Kubernetes, our default StorageClass provisioner is set to dobs.csi.digitalocean.com — DigitalOcean Block Storage — which we can check by typing:

      • kubectl get storageclass

      If you are working with a DigitalOcean cluster, you will see the following output:

      Output

NAME                         PROVISIONER                 AGE
      do-block-storage (default)   dobs.csi.digitalocean.com   21m

      If you are not working with a DigitalOcean cluster, you will need to create a StorageClass and configure a provisioner of your choice. For details about how to do this, please see the official documentation.

Now that you have ensured that you have a StorageClass configured, open mongodb-values.yaml for editing:

      • nano mongodb-values.yaml

      You will set values in this file that will do the following:

      • Enable authorization.
      • Reference your keyfilesecret and mongo-secret objects.
      • Specify 1Gi for your PersistentVolumes.
      • Set your replica set name to db.
      • Specify 3 replicas for the set.
      • Pin the mongo image to the latest version at the time of writing: 4.1.9.

      Paste the following code into the file:

      ~/node_project/mongodb-values.yaml

      replicas: 3
      port: 27017
      replicaSetName: db
      podDisruptionBudget: {}
      auth:
        enabled: true
        existingKeySecret: keyfilesecret
        existingAdminSecret: mongo-secret
      imagePullSecrets: []
      installImage:
        repository: unguiculus/mongodb-install
        tag: 0.7
        pullPolicy: Always
      copyConfigImage:
        repository: busybox
        tag: 1.29.3
        pullPolicy: Always
      image:
        repository: mongo
        tag: 4.1.9
        pullPolicy: Always
      extraVars: {}
      metrics:
        enabled: false
        image:
          repository: ssalaues/mongodb-exporter
          tag: 0.6.1
          pullPolicy: IfNotPresent
        port: 9216
        path: /metrics
        socketTimeout: 3s
        syncTimeout: 1m
        prometheusServiceDiscovery: true
        resources: {}
      podAnnotations: {}
      securityContext:
        enabled: true
        runAsUser: 999
        fsGroup: 999
        runAsNonRoot: true
      init:
        resources: {}
        timeout: 900
      resources: {}
      nodeSelector: {}
      affinity: {}
      tolerations: []
      extraLabels: {}
      persistentVolume:
        enabled: true
        #storageClass: "-"
        accessModes:
          - ReadWriteOnce
        size: 1Gi
        annotations: {}
      serviceAnnotations: {}
      terminationGracePeriodSeconds: 30
      tls:
        enabled: false
      configmap: {}
      readinessProbe:
        initialDelaySeconds: 5
        timeoutSeconds: 1
        failureThreshold: 3
        periodSeconds: 10
        successThreshold: 1
      livenessProbe:
        initialDelaySeconds: 30
        timeoutSeconds: 5
        failureThreshold: 3
        periodSeconds: 10
        successThreshold: 1
      

      The persistentVolume.storageClass parameter is commented out here: removing the comment and setting its value to "-" would disable dynamic provisioning. In our case, because we are leaving this value undefined, the chart will choose the default provisioner — in our case, dobs.csi.digitalocean.com.

Also note the accessMode associated with the persistentVolume key: ReadWriteOnce means that the provisioned volume can be mounted as read-write by a single node only. Please see the documentation for more information about different access modes.

      To learn more about the other parameters included in the file, see the configuration table included with the repo.

      Save and close the file when you are finished editing.

Before deploying the mongodb-replicaset chart, you will want to update the stable repo with the helm repo update command:

      • helm repo update

      This will get the latest chart information from the stable repository.

      Finally, install the chart with the following command:

      • helm install --name mongo -f mongodb-values.yaml stable/mongodb-replicaset

      Note: Before installing a chart, you can run helm install with the --dry-run and --debug options to check the generated manifests for your release:

      • helm install --name your_release_name -f your_values_file.yaml --dry-run --debug your_chart

      Note that we are naming the Helm release mongo. This name will refer to this particular deployment of the chart with the configuration options we've specified. We've pointed to these options by including the -f flag and our mongodb-values.yaml file.

      Also note that because we did not include the --namespace flag with helm install, our chart objects will be created in the default namespace.

      Once you have created the release, you will see output about its status, along with information about the created objects and instructions for interacting with them:

      Output

NAME:   mongo
      LAST DEPLOYED: Tue Apr 16 21:51:05 2019
      NAMESPACE: default
      STATUS: DEPLOYED

      RESOURCES:
      ==> v1/ConfigMap
      NAME                              DATA  AGE
      mongo-mongodb-replicaset-init     1     1s
      mongo-mongodb-replicaset-mongodb  1     1s
      mongo-mongodb-replicaset-tests    1     0s
      ...

You can now check on the creation of your Pods with the following command:

      • kubectl get pods

      You will see output like the following as the Pods are being created:

      Output

NAME                         READY   STATUS     RESTARTS   AGE
      mongo-mongodb-replicaset-0   1/1     Running    0          67s
      mongo-mongodb-replicaset-1   0/1     Init:0/3   0          8s

      The READY and STATUS outputs here indicate that the Pods in our StatefulSet are not fully ready: the Init Containers associated with the Pod's containers are still running. Because StatefulSet members are created in sequential order, each Pod in the StatefulSet must be Running and Ready before the next Pod will be created.

      Once the Pods have been created and all of their associated containers are running, you will see this output:

      Output

NAME                         READY   STATUS    RESTARTS   AGE
      mongo-mongodb-replicaset-0   1/1     Running   0          2m33s
      mongo-mongodb-replicaset-1   1/1     Running   0          94s
      mongo-mongodb-replicaset-2   1/1     Running   0          36s

      The Running STATUS indicates that your Pods are bound to nodes and that the containers associated with those Pods are running. READY indicates how many containers in a Pod are running. For more information, please consult the documentation on Pod lifecycles.

      Note:
      If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:

      • kubectl describe pods your_pod
      • kubectl logs your_pod

Each of the Pods in your StatefulSet has a name that combines the name of the StatefulSet with the ordinal index of the Pod. Because we created three replicas, our StatefulSet members are numbered 0-2, and each has a stable DNS entry composed of the following elements: $(statefulset-name)-$(ordinal).$(service name).$(namespace).svc.cluster.local.

In our case, the StatefulSet and the Headless Service created by the mongodb-replicaset chart have the same names. List your StatefulSets:

      • kubectl get statefulset

      Output

NAME                       READY   AGE
      mongo-mongodb-replicaset   3/3     4m2s

      List your Services:

      • kubectl get svc

      Output

NAME                              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
      kubernetes                        ClusterIP   10.245.0.1   <none>        443/TCP     42m
      mongo-mongodb-replicaset          ClusterIP   None         <none>        27017/TCP   4m35s
      mongo-mongodb-replicaset-client   ClusterIP   None         <none>        27017/TCP   4m35s

      This means that the first member of our StatefulSet will have the following DNS entry:

      mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local
      

      Because we need our application to connect to each MongoDB instance, it's essential that we have this information so that we can communicate directly with the Pods, rather than with the Service. When we create our custom application Helm chart, we will pass the DNS entries for each Pod to our application using environment variables.
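
      Because the entries follow a fixed pattern, you can also generate the comma-separated hostname list programmatically instead of typing it out. A sketch (the script name is hypothetical), assuming the names used in this tutorial:

      ~/node_project/hostnames.js — Optional

      // Build the MONGO_HOSTNAME value from the StatefulSet naming pattern:
      // $(statefulset-name)-$(ordinal).$(service name).$(namespace).svc.cluster.local
      const statefulSetName = 'mongo-mongodb-replicaset';
      const serviceName = 'mongo-mongodb-replicaset';
      const namespace = 'default';
      const replicas = 3;

      const hostnames = Array.from({ length: replicas }, (_, ordinal) =>
        `${statefulSetName}-${ordinal}.${serviceName}.${namespace}.svc.cluster.local`
      ).join(',');

      console.log(hostnames);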

      With your database instances up and running, you are ready to create the chart for your Node application.

      Step 4 — Creating a Custom Application Chart and Configuring Parameters

      We will create a custom Helm chart for our Node application and modify the default files in the standard chart directory so that our application can work with the replica set we have just created. We will also create files to define ConfigMap and Secret objects for our application.

First, create a new chart directory called nodeapp with the following command:

      • helm create nodeapp

      This will create a directory called nodeapp in your ~/node_project folder with the following resources:

      • A Chart.yaml file with basic information about your chart.
      • A values.yaml file that allows you to set specific parameter values, as you did with your MongoDB deployment.
      • A .helmignore file with file and directory patterns that will be ignored when packaging charts.
      • A templates/ directory with the template files that will generate Kubernetes manifests.
      • A templates/tests/ directory for test files.
      • A charts/ directory for any charts that this chart depends on.

The first file we will modify out of these default files is values.yaml. Open that file now:

      • nano nodeapp/values.yaml

      The values that we will set here include:

      • The number of replicas.
      • The application image we want to use. In our case, this will be the node-replicas image we created in Step 1.
      • The ServiceType. In this case, we will specify LoadBalancer to create a point of access to our application for testing purposes. Because we are working with a DigitalOcean Kubernetes cluster, this will create a DigitalOcean Load Balancer when we deploy our chart. In production, you can configure your chart to use Ingress Resources and Ingress Controllers to route traffic to your Services.
      • The targetPort to specify the port on the Pod where our application will be exposed.

      We will not enter environment variables into this file. Instead, we will create templates for ConfigMap and Secret objects and add these values to our application Deployment manifest, located at ~/node_project/nodeapp/templates/deployment.yaml.

      Configure the following values in the values.yaml file:

      ~/node_project/nodeapp/values.yaml

      # Default values for nodeapp.
      # This is a YAML-formatted file.
      # Declare variables to be passed into your templates.
      
      replicaCount: 3
      
      image:
        repository: your_dockerhub_username/node-replicas
        tag: latest
        pullPolicy: IfNotPresent
      
      nameOverride: ""
      fullnameOverride: ""
      
      service:
        type: LoadBalancer
        port: 80
        targetPort: 8080
      ...
      

      Save and close the file when you are finished editing.

      Next, open a secret.yaml file in the nodeapp/templates directory:

      • nano nodeapp/templates/secret.yaml

In this file, add values for your MONGO_USERNAME and MONGO_PASSWORD application constants. These are the constants that your application will expect to have access to at runtime, as specified in db.js, your database connection file. As you add the values for these constants, remember to use the base64-encoded values that you used earlier in Step 2 when creating your mongo-secret object. If you need to recreate those values, you can return to Step 2 and run the relevant commands again.

      Add the following code to the file:

      ~/node_project/nodeapp/templates/secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: {{ .Release.Name }}-auth
      data:
        MONGO_USERNAME: your_encoded_username
        MONGO_PASSWORD: your_encoded_password
      

      The name of this Secret object will depend on the name of your Helm release, which you will specify when you deploy the application chart.

      Save and close the file when you are finished.

      Next, open a file to create a ConfigMap for your application:

      • nano nodeapp/templates/configmap.yaml

      In this file, we will define the remaining variables that our application expects: MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET. Our MONGO_HOSTNAME variable will include the DNS entry for each instance in our replica set, since this is what the MongoDB connection URI requires.

      According to the Kubernetes documentation, when an application implements liveness and readiness checks, SRV records should be used when connecting to the Pods. As discussed in Step 3, our Pod SRV records follow this pattern: $(statefulset-name)-$(ordinal).$(service name).$(namespace).svc.cluster.local. Since our MongoDB StatefulSet implements liveness and readiness checks, we should use these stable identifiers when defining the values of the MONGO_HOSTNAME variable.

      Add the following code to the file to define the MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET variables. You are free to use another name for your MONGO_DB database, but your MONGO_HOSTNAME and MONGO_REPLICASET values must be written as they appear here:

      ~/node_project/nodeapp/templates/configmap.yaml

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: {{ .Release.Name }}-config
      data:
        MONGO_HOSTNAME: "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local"  
        MONGO_PORT: "27017"
        MONGO_DB: "sharkinfo"
        MONGO_REPLICASET: "db"
      

      Because we have already created the StatefulSet object and replica set, the hostnames that are listed here must be listed in your file exactly as they appear in this example. If you destroy these objects and rename your MongoDB Helm release, then you will need to revise the values included in this ConfigMap. The same applies for MONGO_REPLICASET, since we specified the replica set name with our MongoDB release.

      Also note that the values listed here are quoted, which is the expectation for environment variables in Helm.

      Save and close the file when you are finished editing.

      With your chart parameter values defined and your Secret and ConfigMap manifests created, you can edit the application Deployment template to use your environment variables.

      Step 5 — Integrating Environment Variables into Your Helm Deployment

      With the files for our application Secret and ConfigMap in place, we will need to make sure that our application Deployment can use these values. We will also customize the liveness and readiness probes that are already defined in the Deployment manifest.

      Open the application Deployment template for editing:

      • nano nodeapp/templates/deployment.yaml

      Though this is a YAML file, Helm templates use a different syntax from standard Kubernetes YAML files in order to generate manifests. For more information about templates, see the Helm documentation.

      In the file, first add an env key to your application container specifications, below the imagePullPolicy key and above ports:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              env:
              ports:
      

      Next, add the following keys to the list of env variables:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              env:
              - name: MONGO_USERNAME
                valueFrom:
                  secretKeyRef:
                    key: MONGO_USERNAME
                    name: {{ .Release.Name }}-auth
              - name: MONGO_PASSWORD
                valueFrom:
                  secretKeyRef:
                    key: MONGO_PASSWORD
                    name: {{ .Release.Name }}-auth
              - name: MONGO_HOSTNAME
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_HOSTNAME
                    name: {{ .Release.Name }}-config
              - name: MONGO_PORT
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_PORT
                    name: {{ .Release.Name }}-config
              - name: MONGO_DB
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_DB
                    name: {{ .Release.Name }}-config      
              - name: MONGO_REPLICASET
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_REPLICASET
                    name: {{ .Release.Name }}-config        
      

      Each variable includes a reference to its value, defined either by a secretKeyRef key, in the case of Secret values, or configMapKeyRef for ConfigMap values. These keys point to the Secret and ConfigMap files we created in the previous Step.

      Next, under the ports key, modify the containerPort definition to specify the port on the container where our application will be exposed:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
          ...
            env:
          ...
            ports:
              - name: http
                containerPort: 8080
                protocol: TCP
            ...
      

      Next, let's modify the liveness and readiness checks that are included in this Deployment manifest by default. These checks ensure that our application Pods are running and ready to serve traffic:

      • Readiness probes assess whether or not a Pod is ready to serve traffic, stopping all requests to the Pod until the checks succeed.
      • Liveness probes check basic application behavior to determine whether or not the application in the container is running and behaving as expected. If a liveness probe fails, Kubernetes will restart the container.

      For more about both, see the relevant discussion in Architecting Applications for Kubernetes.

In our case, we will build on the httpGet request that Helm has provided by default and test whether or not our application is accepting requests on the /sharks endpoint. The kubelet service will perform the probe by sending a GET request to the Node server running in the application Pod's container and listening on port 8080. If the status code of the response is at least 200 and below 400, the kubelet will conclude that the container is healthy. Otherwise, in the case of a 4xx or 5xx status, the kubelet will either stop traffic to the container (for the readiness probe) or restart the container (for the liveness probe).
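
      For reference, the demo application serves this endpoint with Express. The real handler in the repository renders a full page, but a simplified sketch of a route that would satisfy these probes looks like this:

      Simplified sketch — not the repository's actual handler

      // Any route on /sharks that returns a 2xx or 3xx status passes the probes.
      const express = require('express');
      const app = express();

      app.get('/sharks', (req, res) => {
        res.status(200).send('Shark info');
      });

      app.listen(8080);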

      Add the following modification to the stated path for the liveness and readiness probes:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
          ...
            env:
          ...
            ports:
              - name: http
                containerPort: 8080
                protocol: TCP
            livenessProbe:
              httpGet:
                path: /sharks
                port: http
            readinessProbe:
              httpGet:
                path: /sharks
                port: http
      

      Save and close the file when you are finished editing.

      You are now ready to create your application release with Helm. Run the following helm install command, which includes the name of the release and the location of the chart directory:

      • helm install --name nodejs ./nodeapp

      Remember that you can run helm install with the --dry-run and --debug options first, as discussed in Step 3, to check the generated manifests for your release.

      Again, because we are not including the --namespace flag with helm install, our chart objects will be created in the default namespace.

      You will see the following output indicating that your release has been created:

      Output

NAME:   nodejs
      LAST DEPLOYED: Wed Apr 17 18:10:29 2019
      NAMESPACE: default
      STATUS: DEPLOYED

      RESOURCES:
      ==> v1/ConfigMap
      NAME           DATA  AGE
      nodejs-config  4     1s

      ==> v1/Deployment
      NAME            READY  UP-TO-DATE  AVAILABLE  AGE
      nodejs-nodeapp  0/3    3           0          1s
      ...

      Again, the output will indicate the status of the release, along with information about the created objects and how you can interact with them.

Check the status of your Pods:

      • kubectl get pods

      Output

NAME                              READY   STATUS    RESTARTS   AGE
      mongo-mongodb-replicaset-0        1/1     Running   0          57m
      mongo-mongodb-replicaset-1        1/1     Running   0          56m
      mongo-mongodb-replicaset-2        1/1     Running   0          55m
      nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          117s
      nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          117s
      nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          117s

Once your Pods are up and running, check your Services:

      • kubectl get svc

      Output

NAME                              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
      kubernetes                        ClusterIP      10.245.0.1     <none>        443/TCP        96m
      mongo-mongodb-replicaset          ClusterIP      None           <none>        27017/TCP      58m
      mongo-mongodb-replicaset-client   ClusterIP      None           <none>        27017/TCP      58m
      nodejs-nodeapp                    LoadBalancer   10.245.33.46   your_lb_ip    80:31518/TCP   3m22s

The EXTERNAL-IP associated with the nodejs-nodeapp Service is the IP address where you can access the application from outside of the cluster. If you see a <pending> status in the EXTERNAL-IP column, this means that your load balancer is still being created.

      Once you see an IP in that column, navigate to it in your browser: http://your_lb_ip.

      You should see the following landing page:

      Application Landing Page

      Now that your replicated application is working, let's add some test data to ensure that replication is working between members of the replica set.

      Step 6 — Testing MongoDB Replication

      With our application running and accessible through an external IP address, we can add some test data and ensure that it is being replicated between the members of our MongoDB replica set.

      First, make sure you have navigated your browser to the application landing page:

      Application Landing Page

      Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark's general character:

      Shark Info Form

      In the form, add an initial shark of your choosing. To demonstrate, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

      Filled Shark Form

      Click on the Submit button. You will see a page with this shark information displayed back to you:

      Shark Output

      Now head back to the shark information form by clicking on Sharks in the top navigation bar:

      Shark Info Form

      Enter a new shark of your choosing. We'll go with Whale Shark and Large:

      Enter New Shark

      Once you click Submit, you will see that the new shark has been added to the shark collection in your database:

      Complete Shark Collection

      Let's check that the data we've entered has been replicated between the primary and secondary members of our replica set.

Get a list of your Pods:

      • kubectl get pods

      Output

NAME                              READY   STATUS    RESTARTS   AGE
      mongo-mongodb-replicaset-0        1/1     Running   0          74m
      mongo-mongodb-replicaset-1        1/1     Running   0          73m
      mongo-mongodb-replicaset-2        1/1     Running   0          72m
      nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          5m4s
      nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          5m4s
      nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          5m4s

      To access the mongo shell on your Pods, you can use the kubectl exec command and the username you used to create your mongo-secret in Step 2. Access the mongo shell on the first Pod in the StatefulSet with the following command:

      • kubectl exec -it mongo-mongodb-replicaset-0 -- mongo -u your_database_username -p --authenticationDatabase admin

      When prompted, enter the password associated with this username:

      Output

MongoDB shell version v4.1.9
      Enter password:

      You will be dropped into an administrative shell:

      Output

MongoDB server version: 4.1.9

      Welcome to the MongoDB shell.
      ...
      db:PRIMARY>

      Though the prompt itself includes this information, you can manually check to see which replica set member is the primary with the rs.isMaster() method:

      You will see output like the following, indicating the hostname of the primary:

      Output

db:PRIMARY> rs.isMaster()
      {
              "hosts" : [
                      "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                      "mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                      "mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local:27017"
              ],
              ...
              "primary" : "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
              ...

Next, switch to your sharkinfo database:

      • use sharkinfo

      Output

      switched to db sharkinfo

List the collections in the database:

      • show collections

      Output

      sharks

Output the documents in the collection:

      • db.sharks.find()

      You will see the following output:

      Output

{ "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 }
      { "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }

Exit the MongoDB Shell:

      • exit

      Now that we have checked the data on our primary, let's check that it's being replicated to a secondary. kubectl exec into mongo-mongodb-replicaset-1 with the following command:

      • kubectl exec -it mongo-mongodb-replicaset-1 -- mongo -u your_database_username -p --authenticationDatabase admin

Once in the administrative shell, we will need to use the db.setSlaveOk() method to permit read operations from the secondary instance:

      • db.setSlaveOk(1)

Switch to the sharkinfo database:

      • use sharkinfo

      Output

      switched to db sharkinfo

Permit the read operation of the documents in the sharks collection:

      • db.setSlaveOk(1)

Output the documents in the collection:

      • db.sharks.find()

      You should now see the same information that you saw when running this method on your primary instance:

      Output

db:SECONDARY> db.sharks.find()
      { "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 }
      { "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }

      This output confirms that your application data is being replicated between the members of your replica set.

      Conclusion

You have now deployed a replicated, highly available shark information application on a Kubernetes cluster using Helm charts. This demo application and the workflow outlined in this tutorial can act as a starting point as you build custom charts for your application and take advantage of Helm's stable repository and other chart repositories.

      As you move toward production, consider implementing the following:

      To learn more about Helm, see An Introduction to Helm, the Package Manager for Kubernetes, How To Install Software on Kubernetes Clusters with the Helm Package Manager, and the Helm documentation.




      How To Mock Services Using Mountebank and Node.js


      The author selected the Open Internet/Free Speech Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      In complex service-oriented architectures (SOA), programs often need to call multiple services to run through a given workflow. This is fine once everything is in place, but if the code you are working on requires a service that is still in development, you can be stuck waiting for other teams to finish their work before beginning yours. Additionally, for testing purposes you may need to interact with external vendor services, like a weather API or a record-keeping system. Vendors usually don’t give you as many environments as you need, and often don’t make it easy to control test data on their systems. In these situations, unfinished services and services outside of your control can make code testing frustrating.

      The solution to all of these problems is to create a service mock. A service mock is code that simulates the service that you would use in the final product, but is lighter weight, less complex, and easier to control than the actual service you would use in production. You can set a mock service to return a default response or specific test data, then run the software you’re interested in testing as if the dependent service were really there. Because of this, having a flexible way to mock services can make your workflow faster and more efficient.

      In an enterprise setting, making mock services is sometimes called service virtualization. Service virtualization is often associated with expensive enterprise tools, but you don’t need an expensive tool to mock a service. Mountebank is a free and open source service-mocking tool that you can use to mock HTTP services, including REST and SOAP services. You can also use it to mock SMTP or TCP requests.

      In this guide, you will build two flexible service-mocking applications using Node.js and Mountebank. Both of the mock services will listen to a specific port for REST requests in HTTP. In addition to this simple mocking behavior, the service will also retrieve mock data from a comma-separated values (CSV) file. After this tutorial, you’ll be able to mock all kinds of service behavior so you can more easily develop and test your applications.
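
      To give a sense of the shape of what you will build, Mountebank mocks are defined as "imposters": JSON objects that pair request predicates with canned responses. A sketch with hypothetical values (the tutorial defines the real imposters in later steps):

      // Sketch of a Mountebank imposter definition (hypothetical values):
      const imposter = {
        port: 5001,        // the port the mock service will listen on
        protocol: 'http',
        stubs: [
          {
            predicates: [{ equals: { method: 'GET', path: '/hello' } }],
            responses: [
              { is: { statusCode: 200, body: JSON.stringify({ message: 'hello world' }) } }
            ]
          }
        ]
      };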

      Prerequisites

To follow this tutorial, you will need the following:

      • A development machine with Node.js installed.

      Step 1 — Starting a Node.js Application

      In this step, you are going to create a basic Node.js application that will serve as the base of your Mountebank instance and the mock services you will create in later steps.

      Note: Mountebank can be used as a standalone application by installing it globally using the command npm install -g mountebank. You can then run it with the mb command and add mocks using REST requests.

      While this is the fastest way to get Mountebank up and running, building the Mountebank application yourself allows you to run a set of predefined mocks when the app starts up, which you can then store in source control and share with your team. This tutorial will build the Mountebank application manually to take advantage of this.

First, create a new directory to put your application in. You can name it whatever you want, but in this tutorial we’ll name it app:

• mkdir app

Move into your newly created directory with the following command:

• cd app

To start a new Node.js application, run npm init and fill out the prompts:

• npm init

      The data from these prompts will be used to fill out your package.json file, which describes what your application is, what packages it relies on, and what different scripts it uses. In Node.js applications, scripts define commands that build, run, and test your application. You can go with the defaults for the prompts or fill in your package name, version number, etc.

      After you finish this command, you'll have a basic Node.js application, including the package.json file.

      Now install the Mountebank npm package using the following:

• npm install --save mountebank

This command downloads the Mountebank package and installs it into your application. Make sure to use the --save flag in order to update your package.json file with Mountebank as a dependency.

      Next, add a start script to your package.json that runs the command node src/index.js. This script defines the entry point of your app as index.js, which you'll create in a later step.

Open up package.json in a text editor. You can use whatever text editor you want, but this tutorial will use nano:

• nano package.json

      Navigate to the "scripts" section and add the line "start": "node src/index.js". This will add a start command to run your application.

      Your package.json file should look similar to this, depending on how you filled in the initial prompts:

      app/package.json

      {
        "name": "diy-service-virtualization",
        "version": "1.0.0",
        "description": "An application to mock services.",
        "main": "index.js",
        "scripts": {
          "start": "node src/index.js"
        },
        "author": "Dustin Ewers",
        "license": "MIT",
        "dependencies": {
          "mountebank": "^2.0.0"
        }
      }
      

      You now have the base for your Mountebank application, which you built by creating your app, installing Mountebank, and adding a start script. Next, you'll add a settings file to store application-specific settings.

      Step 2 — Creating a Settings File

      In this step, you will create a settings file that determines which ports the Mountebank instance and the two mock services will listen to.

      Each time you run an instance of Mountebank or a mock service, you will need to specify what network port that service will run on (e.g., http://localhost:5000/). By putting these in a settings file, the other parts of your application will be able to import these settings whenever they need to know the port number for the services and the Mountebank instance. While you could directly code these into your application as constants, changing the settings later will be easier if you store them in a file. This way, you will only have to change the values in one place.

Begin by making a directory called src from your app directory:

• mkdir src

Navigate to the folder you just created:

• cd src

Create a file called settings.js and open it in your text editor:

• nano settings.js

      Next, add settings for the ports for the main Mountebank instance and the two mock services you'll create later:

      app/src/settings.js

      module.exports = {
          port: 5000,
          hello_service_port: 5001,
          customer_service_port: 5002
      }
      

      This settings file has three entries: port: 5000 assigns port 5000 to the main Mountebank instance, hello_service_port: 5001 assigns port 5001 to the Hello World test service that you will create in a later step, and customer_service_port: 5002 assigns port 5002 to the mock service app that will respond with CSV data. If the ports here are occupied, feel free to change them to whatever you want. module.exports = makes it possible for your other files to import these settings.
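Any other module in the src directory can now pull these values in with a require call. A minimal sketch of how that import looks:

// a minimal sketch: reading the shared settings from another module in src
const settings = require('./settings');

console.log(settings.port);               // 5000 — the main Mountebank instance
console.log(settings.hello_service_port); // 5001 — the Hello World mock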

      In this step, you used settings.js to define the ports that Mountebank and your mock services will listen to and made these settings available to other parts of your app. In the next step, you will build an initialization script with these settings to start Mountebank.

      Step 3 — Building the Initialization Script

      In this step, you're going to create a file that starts an instance of Mountebank. This file will be the entry point of the application, meaning that, when you run the app, this script will run first. You will add more lines to this file as you build new service mocks.

From the src directory, create a file called index.js and open it in your text editor:

• nano index.js

      To start an instance of Mountebank that will run on the port specified in the settings.js file you created in the last step, add the following code to the file:

      app/src/index.js

      const mb = require('mountebank');
      const settings = require('./settings');
      
      const mbServerInstance = mb.create({
              port: settings.port,
              pidfile: '../mb.pid',
              logfile: '../mb.log',
              protofile: '../protofile.json',
              ipWhitelist: ['*']
          });
      

      This code does three things. First, it imports the Mountebank npm package that you installed earlier (const mb = require('mountebank');). Then, it imports the settings module you created in the previous step (const settings = require('./settings');). Finally, it creates an instance of the Mountebank server with mb.create().

      The server will listen at the port specified in the settings file. The pidfile, logfile, and protofile parameters are for files that Mountebank uses internally to record its process ID, specify where it keeps its logs, and set a file to load custom protocol implementations. The ipWhitelist setting specifies what IP addresses are allowed to communicate with the Mountebank server. In this case, you're opening it up to any IP address.
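The wildcard is convenient for a tutorial, but if you want to experiment with restricting access, you could list specific addresses instead. Here is a hedged sketch of the same call; the address strings are assumptions, since the exact loopback format your system reports (for example, '::ffff:127.0.0.1') can vary:

// a sketch: the same mb.create() call, limited to loopback connections;
// the address strings below are assumptions and may differ on your system
const restrictedInstance = mb.create({
    port: settings.port,
    pidfile: '../mb.pid',
    logfile: '../mb.log',
    protofile: '../protofile.json',
    ipWhitelist: ['127.0.0.1', '::ffff:127.0.0.1']
});

This tutorial keeps the wildcard so you don't have to worry about address formats.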

      Save and exit from the file.

After this file is in place, enter the following command to run your application:

• npm start

      The command prompt will disappear, and you will see the following:

Output

info: [mb:5000] mountebank v2.0.0 now taking orders - point your browser to http://localhost:5000/ for help

      This means your application is open and ready to take requests.

      Next, check your progress. Open up a new terminal window and use curl to send the following GET request to the Mountebank server:

      • curl http://localhost:5000/

      This will return the following JSON response:

      Output

      { "_links": { "imposters": { "href": "http://localhost:5000/imposters" }, "config": { "href": "http://localhost:5000/config" }, "logs": { "href": "http://localhost:5000/logs" } } }

The JSON that Mountebank returns describes the three different endpoints you can use to add or remove objects in Mountebank. By using curl to send requests to these endpoints, you can interact with your Mountebank instance.
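For example, the imposters endpoint also answers GET and DELETE requests, so while the server is running you can list every loaded mock or clear them all out:

• curl http://localhost:5000/imposters

• curl -X DELETE http://localhost:5000/imposters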

      When you're done, switch back to your first terminal window and exit the application using CTRL + C. This exits your Node.js app so you can continue adding to it.

      Now you have an application that successfully runs an instance of Mountebank. In the next step, you will create a Mountebank client that uses REST requests to add mock services to your Mountebank application.

      Step 4 — Building a Mountebank Client

Mountebank communicates using a REST API. You can manage the resources of your Mountebank instance by sending HTTP requests to the different endpoints mentioned in the last step. To add a mock service, you send an HTTP POST request to the imposters endpoint. An imposter is Mountebank's name for a mock service. Imposters can be simple or complex, depending on the behaviors you want in your mock.

      In this step, you will build a Mountebank client to automatically send POST requests to the Mountebank service. You could send a POST request to the imposters endpoint using curl or Postman, but you'd have to send that same request every time you restart your test server. If you're running a sample API with several mocks, it will be more efficient to write a client script to do this for you.

      Begin by installing the node-fetch library:

• npm install --save node-fetch

      The node-fetch library gives you an implementation of the JavaScript Fetch API, which you can use to write shorter HTTP requests. You could use the standard http library, but using node-fetch is a lighter weight solution.
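To get a feel for the library before wiring it into a helper module, here is a minimal node-fetch sketch that calls the config endpoint you saw in the last step (it assumes the Mountebank server is running):

// a minimal sketch: fetch returns a promise that resolves to a response object,
// and res.json() parses the JSON body (assumes Mountebank is up on port 5000)
const fetch = require('node-fetch');

fetch('http://localhost:5000/config')
    .then(res => res.json())
    .then(json => console.log(json))
    .catch(err => console.error(err));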

      Now, create a client module to send requests to Mountebank. You only need to post imposters, so this module will have one method.

      Use nano to create a file called mountebank-helper.js:

      • nano mountebank-helper.js

      To set up the client, put the following code in the file:

      app/src/mountebank-helper.js

      const fetch = require('node-fetch');
      const settings = require('./settings');
      
      function postImposter(body) {
          const url = `http://127.0.0.1:${settings.port}/imposters`;
      
          return fetch(url, {
                          method:'POST',
                          headers: { 'Content-Type': 'application/json' },
                          body: JSON.stringify(body)
                      });
      }
      
      module.exports = { postImposter };
      

This code starts off by pulling in the node-fetch library and your settings file. The module then exposes a function called postImposter that posts service mocks to Mountebank. The function takes body, a JavaScript object describing what you're going to POST to the Mountebank service, and JSON.stringify(body) serializes it into the JSON that Mountebank expects. Since this method is running locally, you run your request against 127.0.0.1 (localhost). The fetch method sends the POST request to the url, using the HTTP method, headers, and body given in its second argument.
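As a quick sanity check, you could call this helper from a scratch script and log the status code Mountebank returns; it replies with 201 Created when an imposter is accepted. A sketch, with the port number chosen arbitrarily for illustration:

// a sketch: post an empty imposter and log the response status
const mbHelper = require('./mountebank-helper');

mbHelper.postImposter({ port: 5009, protocol: 'http', stubs: [] })
    .then(res => console.log(res.status)); // expect 201 when creation succeeds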

      In this step, you created a Mountebank client to post new mock services to the Mountebank server. In the next step, you'll use this client to create your first mock service.

      Step 5 — Creating Your First Mock Service

      In previous steps, you built an application that creates a Mountebank server and code to call that server. Now it's time to use that code to build an imposter, or a mock service.

      In Mountebank, each imposter contains stubs. Stubs are configuration sets that determine the response that an imposter will give. Stubs can be further divided into combinations of predicates and responses. A predicate is the rule that triggers the imposter's response. Predicates can use lots of different types of information, including URLs, request content (using XML or JSON), and HTTP methods.

      Looked at from the point of view of a Model-View-Controller (MVC) app, an imposter acts like a controller and the stubs like actions within that controller. Predicates are routing rules that point toward a specific controller action.
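To make the analogy concrete, here is a hedged sketch of a stub whose predicate routes on more than just the path — an assumed POST endpoint that only fires when the request body contains a given string:

// a sketch: a stub matching POST /sharks requests whose body contains "Megalodon";
// the /sharks path is hypothetical, chosen only for illustration
const stub = {
    predicates: [{
        and: [
            { equals: { method: 'POST', path: '/sharks' } },
            { contains: { body: 'Megalodon' } }
        ]
    }],
    responses: [{
        is: { statusCode: 201, body: 'created' }
    }]
};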

      To create your first mock service, create a file called hello-service.js. This file will contain the definition of your mock service.

Open hello-service.js in your text editor:

• nano hello-service.js

      Then add the following code:

      app/src/hello-service.js

      const mbHelper = require('./mountebank-helper');
      const settings = require('./settings');
      
      function addService() {
          const response = { message: "hello world" }
      
          const stubs = [
              {
                  predicates: [ {
                      equals: {
                          method: "GET",
                          "path": "/"
                      }
                  }],
                  responses: [
                      {
                          is: {
                              statusCode: 200,
                              headers: {
                                  "Content-Type": "application/json"
                              },
                              body: JSON.stringify(response)
                          }
                      }
                  ]
              }
          ];
      
          const imposter = {
              port: settings.hello_service_port,
              protocol: 'http',
              stubs: stubs
          };
      
          return mbHelper.postImposter(imposter);
      }
      
      module.exports = { addService };
      
      

      This code defines an imposter with a single stub that contains a predicate and a response. Then it sends that object to the Mountebank server. This code will add a new mock service that listens for GET requests to the root url and returns { message: "hello world" } when it gets one.

Let's take a look at the addService() function that the preceding code creates. First, it defines a response object with the message hello world:

          const response = { message: "hello world" }
      ...
      

      Then, it defines a stub:

      ...
              const stubs = [
              {
                  predicates: [ {
                      equals: {
                          method: "GET",
                          "path": "/"
                      }
                  }],
                  responses: [
                      {
                          is: {
                              statusCode: 200,
                              headers: {
                                  "Content-Type": "application/json"
                              },
                              body: JSON.stringify(response)
                          }
                      }
                  ]
              }
          ];
      ...
      

This stub has two parts. The predicate part looks for a GET request to the root (/) URL. This means that the stub will return its response when someone sends a GET request to the root URL of the mock service. The second part of the stub is the responses array. In this case, there is one response, which returns a JSON result with an HTTP status code of 200.
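One detail worth knowing: when a responses array holds more than one entry, Mountebank cycles through them on successive matching requests, which is handy for simulating an unreliable service. A sketch of a two-response stub:

// a sketch: alternate between success and a simulated failure on repeated requests
const flakyStub = {
    predicates: [{ equals: { method: 'GET', path: '/' } }],
    responses: [
        { is: { statusCode: 200, body: JSON.stringify({ message: 'hello world' }) } },
        { is: { statusCode: 500, body: JSON.stringify({ error: 'simulated failure' }) } }
    ]
};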

      The final step defines an imposter that contains that stub:

      ...
          const imposter = {
              port: settings.hello_service_port,
              protocol: 'http',
              stubs: stubs
          };
      ...
      

      This is the object you're going to send to the /imposters endpoint to create an imposter that mocks a service with a single endpoint. The preceding code defines your imposter by setting the port to the port you determined in the settings file, setting the protocol to HTTP, and assigning stubs as the imposter's stubs.

      Now that you have a mock service, the code sends it to the Mountebank server:

      ...
          return mbHelper.postImposter(imposter);
      ...
      

      As mentioned before, Mountebank uses a REST API to manage its objects. The preceding code uses the postImposter() function that you defined earlier to send a POST request to the server to activate the service.

      Once you are finished with hello-service.js, save and exit from the file.

Next, call the newly created addService() function in index.js. Open the file in your text editor:

• nano index.js

To make sure that the function is called when the Mountebank instance is created, add the new require line and the mbServerInstance.then() block shown in the following file:

      app/src/index.js

      const mb = require('mountebank');
      const settings = require('./settings');
      const helloService = require('./hello-service');
      
      const mbServerInstance = mb.create({
              port: settings.port,
              pidfile: '../mb.pid',
              logfile: '../mb.log',
              protofile: '../protofile.json',
              ipWhitelist: ['*']
          });
      
      mbServerInstance.then(function() {
          helloService.addService();
      });
      

When a Mountebank instance is created, it returns a promise. A promise is an object that represents a value that isn't available yet but will be resolved later, which simplifies working with asynchronous calls. In the preceding code, the .then(function(){...}) callback executes once the Mountebank server has initialized, which happens when the promise resolves.
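You don't need to change the file further, but if you prefer async/await syntax, an equivalent sketch of the same startup flow looks like this:

// a sketch: the same startup logic written with async/await instead of .then()
async function start() {
    await mb.create({
        port: settings.port,
        pidfile: '../mb.pid',
        logfile: '../mb.log',
        protofile: '../protofile.json',
        ipWhitelist: ['*']
    });
    await helloService.addService();
}

start();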

      Save and exit index.js.

To test that the mock service is created when Mountebank initializes, start the application:

• npm start

      The Node.js process will occupy the terminal, so open up a new terminal window and send a GET request to http://localhost:5001/:

      • curl http://localhost:5001

      You will receive the following response, signifying that the service is working:

      Output

      {"message": "hello world"}

      Now that you tested your application, switch back to the first terminal window and exit the Node.js application using CTRL + C.

      In this step, you created your first mock service. This is a test service mock that returns hello world in response to a GET request. This mock is meant for demonstration purposes; it doesn't really give you anything you couldn't get by building a small Express application. In the next step, you'll create a more complex mock that takes advantage of some of Mountebank's features.

      Step 6 — Building a Data-Backed Mock Service

      While the type of service you created in the previous step is fine for some scenarios, most tests require a more complex set of responses. In this step, you're going to create a service that takes a parameter from the URL and uses it to look up a record in a CSV file.

First, move back to the main app directory:

• cd ..

Create a folder called data:

• mkdir data

Open a file for your customer data called customers.csv:

• nano data/customers.csv

      Add in the following test data so that your mock service has something to retrieve:

      app/data/customers.csv

id,first_name,last_name,email,favorite_color
      1,Erda,Birkin,ebirkinb@google.com.hk,Aquamarine
      2,Cherey,Endacott,cendacottc@freewebs.com,Fuscia
      3,Shalom,Westoff,swestoffd@about.me,Red
      4,Jo,Goulborne,jgoulbornee@example.com,Red
      

This is fake customer data generated with Mockaroo, a test data generator, similar to the fake data you'd load into a customers table in the real service.

      Save and exit the file.

      Then, create a new module called customer-service.js in the src directory:

      • nano src/customer-service.js

      To create an imposter that listens for GET requests on the /customers/ endpoint, add the following code:

      app/src/customer-service.js

      const mbHelper = require('./mountebank-helper');
      const settings = require('./settings');
      
      function addService() {
          const stubs = [
              {
                  predicates: [{
                      and: [
                          { equals: { method: "GET" } },
                          { startsWith: { "path": "/customers/" } }
                      ]
                  }],
                  responses: [
                      {
                          is: {
                              statusCode: 200,
                              headers: {
                                  "Content-Type": "application/json"
                              },
                              body: '{ "firstName": "${row}[first_name]", "lastName": "${row}[last_name]", "favColor": "${row}[favorite_color]" }'
                          },
                          _behaviors: {
                              lookup: [
                                  {
                                      "key": {
                                        "from": "path",
                                        "using": { "method": "regex", "selector": "/customers/(.*)$" },
                                        "index": 1
                                      },
                                      "fromDataSource": {
                                        "csv": {
                                          "path": "data/customers.csv",
                                          "keyColumn": "id"
                                        }
                                      },
                                      "into": "${row}"
                                    }
                              ]
                          }
                      }
                  ]
              }
          ];
      
          const imposter = {
              port: settings.customer_service_port,
              protocol: 'http',
              stubs: stubs
          };
      
          return mbHelper.postImposter(imposter);
      }
      
      module.exports = { addService };
      
      

      This code defines a service mock that looks for GET requests with a URL format of customers/<id>. When a request is received, it will query the URL for the id of the customer and then return the corresponding record from the CSV file.

      This code uses a few more Mountebank features than the hello service you created in the last step. First, it uses a feature of Mountebank called behaviors. Behaviors are a way to add functionality to a stub. In this case, you're using the lookup behavior to look up a record in a CSV file:

      ...
        _behaviors: {
            lookup: [
                {
                    "key": {
                      "from": "path",
                      "using": { "method": "regex", "selector": "/customers/(.*)$" },
                      "index": 1
                    },
                    "fromDataSource": {
                      "csv": {
                        "path": "data/customers.csv",
                        "keyColumn": "id"
                      }
                    },
                    "into": "${row}"
                  }
            ]
        }
      ...
      

The key property uses a regular expression to parse the incoming path. In this case, you're taking the id that comes after customers/ in the URL; the index of 1 selects the first capture group of the regular expression, which holds that id.
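You can see exactly what the selector extracts by running the same pattern in plain JavaScript:

// a quick sketch: the regex Mountebank applies to the request path
const path = '/customers/3';
const match = path.match(/\/customers\/(.*)$/);

console.log(match[1]); // "3" — the key looked up in the CSV's id column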

      The fromDataSource property points to the file you're using to store your test data.

      The into property injects the result into a variable ${row}. That variable is referenced in the following body section:

      ...
        is: {
            statusCode: 200,
            headers: {
                "Content-Type": "application/json"
            },
            body: '{ "firstName": "${row}[first_name]", "lastName": "${row}[last_name]", "favColor": "${row}[favorite_color]" }'
        },
      ...
      

      The row variable is used to populate the body of the response. In this case, it's a JSON string with the customer data.

      Save and exit the file.

Next, open index.js to add the new service mock to your initialization function:

• nano src/index.js

Add the new require line for customer-service and the customerService.addService() call:

      app/src/index.js

      const mb = require('mountebank');
      const settings = require('./settings');
      const helloService = require('./hello-service');
      const customerService = require('./customer-service');
      
      const mbServerInstance = mb.create({
              port: settings.port,
              pidfile: '../mb.pid',
              logfile: '../mb.log',
              protofile: '../protofile.json',
              ipWhitelist: ['*']
          });
      
      mbServerInstance.then(function() {
          helloService.addService();
          customerService.addService();
      });
      
      

      Save and exit the file.

Now start Mountebank with npm start. The Node.js process will occupy the terminal again, so open up another terminal window. Test your service by sending a GET request to localhost:5002/customers/3, which looks up the customer record with id 3:

      • curl localhost:5002/customers/3

      You will see the following response:

      Output

      { "firstName": "Shalom", "lastName": "Westoff", "favColor": "Red" }

      In this step, you created a mock service that read data from a CSV file and returned it as a JSON response. From here, you can continue to build more complex mocks that match the services you need to test.
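For instance, beyond the is responses and lookup behavior used here, Mountebank also supports proxy responses, which forward a request to a real service and can record the answer for replay. A hedged sketch pointing at a hypothetical downstream URL:

// a sketch: forward requests to a real (hypothetical) service at localhost:8080;
// proxyOnce records the first answer and replays it on later matching requests
const proxyStub = {
    responses: [{
        proxy: {
            to: 'http://localhost:8080',
            mode: 'proxyOnce'
        }
    }]
};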

      Conclusion

      In this article you created your own service-mocking application using Mountebank and Node.js. Now you can build mock services and share them with your team. Whether it's a complex scenario involving a vendor service you need to test around or a simple mock while you wait for another team to finish their work, you can keep your team moving by creating mock services.

      If you want to learn more about Mountebank, check out their documentation. If you'd like to containerize this application, check out Containerizing a Node.js Application for Development With Docker Compose. If you'd like to run this application in a production-like environment, check out How To Set Up a Node.js Application for Production on Ubuntu 18.04.


