

      How To Install MongoDB on Ubuntu 18.04


      The author selected the Creative Commons Corporation to receive a $100 donation as part of the Write for DOnations program.

      Introduction

      MongoDB is a free and open-source document database commonly used in modern web applications.

      In this tutorial, you will install MongoDB, manage its service, and optionally enable remote access.

      Prerequisites

      To complete this tutorial, you will need the following:

      Step 1: Installing MongoDB

      An up-to-date version of MongoDB is included in the official Ubuntu package repositories, which means we can install the necessary packages using apt.

      First, update the package list so you have the most recent version of the repository listings:
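
      • sudo apt update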

      Next, install the MongoDB package itself:

      • sudo apt install -y mongodb

      This command installs several packages containing the latest stable version of MongoDB, along with helpful management tools for the MongoDB server. The database server starts automatically after installation.

      Next, let's verify that the server is running and working correctly.

      Step 2: Checking the Service and Database

      The installation process started MongoDB automatically, but let's verify that the service is started and that the database is working.

      First, check the service's status:

      • sudo systemctl status mongodb

      You'll see output like this:

      Output

      mongodb.service - An object/document-oriented database
         Loaded: loaded (/lib/systemd/system/mongodb.service; enabled; vendor preset: enabled)
         Active: active (running) since Sat 2018-05-26 07:48:04 UTC; 2min 17s ago
           Docs: man:mongod(1)
       Main PID: 2312 (mongod)
          Tasks: 23 (limit: 1153)
         CGroup: /system.slice/mongodb.service
                 └─2312 /usr/bin/mongod --unixSocketPrefix=/run/mongodb --config /etc/mongodb.conf

      According to systemd, the MongoDB server is up and running.

      We can verify this further by connecting to the database server and executing a diagnostic command.

      Execute this command:

      • mongo --eval 'db.runCommand({ connectionStatus: 1 })'

      This will output the current database version, the server address and port, and the output of the status command:

      Output

      MongoDB shell version v3.6.3
      connecting to: mongodb://127.0.0.1:27017
      MongoDB server version: 3.6.3
      {
          "authInfo" : {
              "authenticatedUsers" : [ ],
              "authenticatedUserRoles" : [ ]
          },
          "ok" : 1
      }

      A value of 1 for the ok field in the response indicates that the server is working properly.

      Next, we'll look at how to manage the server instance.

      Step 3: Managing the MongoDB Service

      MongoDB installs as a systemd service, which means you can manage it using standard systemd commands alongside all other system services in Ubuntu.

      To verify the status of the service, type:

      • sudo systemctl status mongodb

      You can stop the server at any time by typing:

      • sudo systemctl stop mongodb

      To start the server when it is stopped, type:

      • sudo systemctl start mongodb

      You can also restart the server with a single command:

      • sudo systemctl restart mongodb

      By default, MongoDB is configured to start automatically with the server. If you wish to disable automatic startup, type:

      • sudo systemctl disable mongodb

      It's just as easy to enable it again. To do so, use:

      • sudo systemctl enable mongodb

      Next, let's adjust the firewall settings for our MongoDB installation.

      Step 4: Adjusting the Firewall (Optional)

      Assuming you followed the initial server setup tutorial instructions to enable the firewall on your server, the MongoDB server will not be accessible from the internet.

      If you intend to use the MongoDB server only locally with applications running on the same server, this is the recommended and secure setting. However, if you would like to be able to connect to your MongoDB server from the internet, you have to allow incoming connections in ufw.

      To allow access to MongoDB on its default port 27017 from anywhere, you could use sudo ufw allow 27017. However, enabling internet access to the MongoDB server on a default installation gives unrestricted access to the database server and its data.

      In most cases, MongoDB should be accessed only from certain trusted locations, such as another server hosting an application. To accomplish this, you can allow access on MongoDB's default port while specifying the IP address of another server that will be explicitly allowed to connect:

      • sudo ufw allow from your_other_server_ip/32 to any port 27017

      You can verify the change in firewall settings with ufw:
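
      • sudo ufw status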

      You should see traffic to port 27017 allowed in the output:

      Output

      Status: active
      
      To                         Action      From
      --                         ------      ----
      OpenSSH                    ALLOW       Anywhere
      27017                      ALLOW       Anywhere
      OpenSSH (v6)               ALLOW       Anywhere (v6)
      27017 (v6)                 ALLOW       Anywhere (v6)
      

      If you decided to allow only a certain IP address to connect to the MongoDB server, the IP address of the allowed location will be listed instead of Anywhere in the output.

      You can find more advanced firewall settings for restricting access to services in UFW Essentials: Common Firewall Rules and Commands.

      Even though the port is open, MongoDB is currently only listening on the local address 127.0.0.1. To allow remote connections, add your server's public IP address to the mongod.conf file.

      Open the MongoDB configuration file in your editor:

      • sudo nano /etc/mongodb.conf

      Add your server's IP address to the bind_ip value:

      ...
      logappend=true
      
      bind_ip = 127.0.0.1,your_server_ip
      #port = 27017
      
      ...
      

      Be sure to place a comma between the existing IP address and the one you added.

      Save the file, exit your editor, and restart MongoDB:

      • sudo systemctl restart mongodb

      MongoDB is now listening for remote connections, but anyone can access it. Follow part 2 of How to Install and Secure MongoDB on Ubuntu 16.04 to add an administrative user and lock things down further.

      Conclusion

      You can find more detailed tutorials on configuring and using MongoDB in these DigitalOcean community articles. The official MongoDB documentation is also a great resource on the possibilities MongoDB offers.




      How To Set Up Flask with MongoDB and Docker


      The author selected the Internet Archive to receive a donation as part of the Write for DOnations program.

      Introduction

      Developing web applications can become complex and time consuming when building and maintaining a number of different technologies. Considering lighter-weight options designed to reduce complexity and time-to-production for your application can result in a more flexible and scalable solution. As a micro web framework built on Python, Flask provides an extensible way for developers to grow their applications through extensions that can be integrated into projects. To continue the scalability of a developer’s tech stack, MongoDB is a NoSQL database designed to scale and work with frequent changes. Developers can use Docker to simplify the process of packaging and deploying their applications.

      Docker Compose has further simplified the development environment by allowing you to define your infrastructure, including your application services, networks, volumes, and bind mounts, in a single file. Using Docker Compose provides ease of use over running multiple docker container run commands. It allows you to define all your services in a single Compose file, and with a single command you create and start all the services from your configuration. This ensures that there is version control throughout your container infrastructure. Docker Compose uses a project name to isolate environments from each other, which allows you to run multiple environments on a single host.

      In this tutorial you will build, package, and run your to-do web application with Flask, Nginx, and MongoDB inside of Docker containers. You will define the entire stack configuration in a docker-compose.yml file, along with configuration files for Python, MongoDB, and Nginx. Flask requires a web server to serve HTTP requests, so you will also use Gunicorn, which is a Python WSGI HTTP Server, to serve the application. Nginx acts as a reverse proxy server that forwards requests to Gunicorn for processing.

      Prerequisites

      To follow this tutorial, you will need the following:

      Step 1 — Writing the Stack Configuration in Docker Compose

      Building your applications on Docker allows you to version infrastructure easily depending on configuration changes you make in Docker Compose. The infrastructure can be defined in a single file and built with a single command. In this step, you will set up the docker-compose.yml file to run your Flask application.

      The docker-compose.yml file lets you define your application infrastructure as individual services. The services can be connected to each other and each can have a volume attached to it for persistent storage. Volumes are stored in a part of the host filesystem managed by Docker (/var/lib/docker/volumes/ on Linux).

      Volumes are the best way to persist data in Docker, as the data in the volumes can be exported or shared with other applications. For additional information about sharing data in Docker, you can refer to How To Share Data Between the Docker Container and the Host.

      To get started, create a directory for the application in the home directory on your server:
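
      • mkdir ~/flaskapp

      The directory name flaskapp is not specified in this tutorial; it is used here only as an example, so feel free to choose another name.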

      Move into the newly created directory:
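
      • cd ~/flaskapp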

      Next, create the docker-compose.yml file:
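
      • nano docker-compose.yml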

      The docker-compose.yml file starts with a version number that identifies the Docker Compose file version. Docker Compose file version 3 targets Docker Engine version 1.13.0+, which is a prerequisite for this setup. You will also add the services tag that you will define in the next step:

      docker-compose.yml

      version: '3'
      services:
      

      You will now define flask as the first service in your docker-compose.yml file. Add the following code to define the Flask service:

      docker-compose.yml

      . . .
        flask:
          build:
            context: app
            dockerfile: Dockerfile
          container_name: flask
          image: digitalocean.com/flask-python:3.6
          restart: unless-stopped
          environment:
            APP_ENV: "prod"
            APP_DEBUG: "False"
            APP_PORT: 5000
            MONGODB_DATABASE: flaskdb
            MONGODB_USERNAME: flaskuser
            MONGODB_PASSWORD: your_mongodb_password
            MONGODB_HOSTNAME: mongodb
          volumes:
            - appdata:/var/www
          depends_on:
            - mongodb
          networks:
            - frontend
            - backend
      

      The build property defines the context of the build: in this case, the app folder, which will contain the Dockerfile.

      You use the container_name property to define a name for each container. The image property specifies the image name and what the Docker image will be tagged as. The restart property defines how the container should be restarted—in your case it is unless-stopped. This means your containers will only be stopped when the Docker Engine is stopped/restarted or when you explicitly stop the containers. The benefit of using the unless-stopped property is that the containers will start automatically once the Docker Engine is restarted or any error occurs.

      The environment property contains the environment variables that are passed to the container. You need to provide a secure password for the environment variable MONGODB_PASSWORD. The volumes property defines the volumes the service is using. In your case the volume appdata is mounted inside the container at the /var/www directory. The depends_on property defines a service that Flask depends on to function properly. In this case, the flask service will depend on mongodb since the mongodb service acts as the database for your application. depends_on ensures that the flask service only runs if the mongodb service is running.

      The networks property specifies frontend and backend as the networks the flask service will have access to.

      With the flask service defined, you’re ready to add the MongoDB configuration to the file. In this example, you will use the official 4.0.8 version mongo image. Add the following code to your docker-compose.yml file following the flask service:

      docker-compose.yml

      . . .
        mongodb:
          image: mongo:4.0.8
          container_name: mongodb
          restart: unless-stopped
          command: mongod --auth
          environment:
            MONGO_INITDB_ROOT_USERNAME: mongodbuser
            MONGO_INITDB_ROOT_PASSWORD: your_mongodb_root_password
            MONGO_INITDB_DATABASE: flaskdb
            MONGODB_DATA_DIR: /data/db
            MONDODB_LOG_DIR: /dev/null
          volumes:
            - mongodbdata:/data/db
          networks:
            - backend
      

      The container_name for this service is mongodb with a restart policy of unless-stopped. You use the command property to define the command that will be executed when the container is started. The command mongod --auth will disable logging into the MongoDB shell without credentials, which will secure MongoDB by requiring authentication.

      The environment variables MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD create a root user with the given credentials, so be sure to replace the placeholder with a strong password.

      MongoDB stores its data in /data/db by default, so the data in the /data/db folder will be written to the named volume mongodbdata for persistence. As a result, you won’t lose your databases in the event of a restart. The mongodb service does not expose any ports, so the service will only be accessible through the backend network.

      Next, you will define the web server for your application. Add the following code to your docker-compose.yml file to configure Nginx:

      docker-compose.yml

      . . .
        webserver:
          build:
            context: nginx
            dockerfile: Dockerfile
          image: digitalocean.com/webserver:latest
          container_name: webserver
          restart: unless-stopped
          environment:
            APP_ENV: "prod"
            APP_NAME: "webserver"
            APP_DEBUG: "false"
            SERVICE_NAME: "webserver"
          ports:
            - "80:80"
            - "443:443"
          volumes:
            - nginxdata:/var/log/nginx
          depends_on:
            - flask
          networks:
            - frontend
      

      Here you have defined the context of the build, which is the nginx folder containing the Dockerfile. With the image property, you specify the image used to tag and run the container. The ports property configures the Nginx service to be publicly accessible through :80 and :443, and volumes mounts the nginxdata volume inside the container at the /var/log/nginx directory.

      You’ve defined flask as the service that the web server depends_on. Finally, the networks property gives the web server service access to the frontend network.

      Next, you will create bridge networks to allow the containers to communicate with each other. Append the following lines to the end of your file:

      docker-compose.yml

      . . .
      networks:
        frontend:
          driver: bridge
        backend:
          driver: bridge
      

      You defined two networks—frontend and backend—for the services to connect to. The front-end services, such as Nginx, will connect to the frontend network since it needs to be publicly accessible. Back-end services, such as MongoDB, will connect to the backend network to prevent unauthorized access to the service.

      Next, you will use volumes to persist the database, application, and configuration files. Since your application will use the databases and files, it is imperative to persist the changes made to them. The volumes are managed by Docker and stored on the filesystem. Add this code to the docker-compose.yml file to configure the volumes:

      docker-compose.yml

      . . .
      volumes:
        mongodbdata:
          driver: local
        appdata:
          driver: local
        nginxdata:
          driver: local
      

      The volumes section declares the volumes that the application will use to persist data. Here you have defined the volumes mongodbdata, appdata, and nginxdata for persisting your MongoDB databases, Flask application data, and the Nginx web server logs, respectively. All of these volumes use the local driver to store the data locally. Without these volumes, data like your MongoDB databases and Nginx web server logs would be lost once you restart the containers.

      Your complete docker-compose.yml file will look like this:

      docker-compose.yml

      version: '3'
      services:
      
        flask:
          build:
            context: app
            dockerfile: Dockerfile
          container_name: flask
          image: digitalocean.com/flask-python:3.6
          restart: unless-stopped
          environment:
            APP_ENV: "prod"
            APP_DEBUG: "False"
            APP_PORT: 5000
            MONGODB_DATABASE: flaskdb
            MONGODB_USERNAME: flaskuser
            MONGODB_PASSWORD: your_mongodb_password
            MONGODB_HOSTNAME: mongodb
          volumes:
            - appdata:/var/www
          depends_on:
            - mongodb
          networks:
            - frontend
            - backend
      
        mongodb:
          image: mongo:4.0.8
          container_name: mongodb
          restart: unless-stopped
          command: mongod --auth
          environment:
            MONGO_INITDB_ROOT_USERNAME: mongodbuser
            MONGO_INITDB_ROOT_PASSWORD: your_mongodb_root_password
            MONGO_INITDB_DATABASE: flaskdb
            MONGODB_DATA_DIR: /data/db
            MONDODB_LOG_DIR: /dev/null
          volumes:
            - mongodbdata:/data/db
          networks:
            - backend
      
        webserver:
          build:
            context: nginx
            dockerfile: Dockerfile
          image: digitalocean.com/webserver:latest
          container_name: webserver
          restart: unless-stopped
          environment:
            APP_ENV: "prod"
            APP_NAME: "webserver"
            APP_DEBUG: "false"
            SERVICE_NAME: "webserver"
          ports:
            - "80:80"
            - "443:443"
          volumes:
            - nginxdata:/var/log/nginx
          depends_on:
            - flask
          networks:
            - frontend
      
      networks:
        frontend:
          driver: bridge
        backend:
          driver: bridge
      
      volumes:
        mongodbdata:
          driver: local
        appdata:
          driver: local
        nginxdata:
          driver: local
      

      Save the file and exit the editor after verifying your configuration.

      You’ve defined the Docker configuration for your entire application stack in the docker-compose.yml file. You will now move on to writing the Dockerfiles for Flask and the web server.

      Step 2 — Writing the Flask and Web Server Dockerfiles

      With Docker, you can build containers to run your applications from a file called Dockerfile. The Dockerfile is a tool that enables you to create custom images that you can use to install the software required by your application and configure your containers based on your requirements. You can push the custom images you create to Docker Hub or any private registry.

      In this step, you’ll write the Dockerfiles for the Flask and web server services. To get started, create the app directory for your Flask application:
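
      • mkdir app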

      Next, create the Dockerfile for your Flask app in the app directory:
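
      • nano app/Dockerfile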

      Add the following code to the file to customize your Flask container:

      app/Dockerfile

      FROM python:3.6.8-alpine3.9
      
      LABEL MAINTAINER="FirstName LastName <example@domain.com>"
      
      ENV GROUP_ID=1000 \
          USER_ID=1000
      
      WORKDIR /var/www/
      

      In this Dockerfile, you are creating an image on top of the 3.6.8-alpine3.9 image that is based on Alpine 3.9 with Python 3.6.8 pre-installed.

      The ENV directive is used to define the environment variables for our group and user ID.
      Linux Standard Base (LSB) specifies that UIDs and GIDs 0-99 are statically allocated by the system, UIDs 100-999 are supposed to be allocated dynamically for system users and groups, and UIDs 1000-59999 are supposed to be dynamically allocated for user accounts. Keeping this in mind, you can safely assign a UID and GID of 1000; furthermore, you can change the UID/GID by updating GROUP_ID and USER_ID to match your requirements.

      The WORKDIR directive defines the working directory for the container. Be sure to replace the LABEL MAINTAINER field with your name and email address.

      Add the following code block to copy the Flask application into the container and install the necessary dependencies:

      app/Dockerfile

      . . .
      ADD ./requirements.txt /var/www/requirements.txt
      RUN pip install -r requirements.txt
      ADD . /var/www/
      RUN pip install gunicorn
      

      This code uses the ADD directive to copy the requirements.txt file into /var/www and then, after installing the dependencies, to copy the rest of the local app directory into the container. The RUN directives install Gunicorn and the packages specified in the requirements.txt file, which you will create later in the tutorial.

      The following code block adds a new user and group and initializes the application:

      app/Dockerfile

      . . .
      RUN addgroup -g $GROUP_ID www
      RUN adduser -D -u $USER_ID -G www www -s /bin/sh
      
      USER www
      
      EXPOSE 5000
      
      CMD [ "gunicorn", "-w", "4", "--bind", "0.0.0.0:5000", "wsgi"]
      

      By default, Docker containers run as the root user. The root user has access to everything in the system, so the implications of a security breach can be disastrous. To mitigate this security risk, this will create a new user and group that will only have access to the /var/www directory.

      This code will first use the addgroup command to create a new group named www. The -g flag will set the group ID to the ENV GROUP_ID=1000 variable that is defined earlier in the Dockerfile.

      The adduser -D -u $USER_ID -G www www -s /bin/sh line creates a www user with a user ID of 1000, as defined by the ENV variable. The -D flag creates the user without a password, and the -s flag sets the default login shell to /bin/sh. The -G flag is used to set the user’s initial login group to www, which was created by the previous command.

      The USER directive specifies that the programs in the container will run as the www user. Gunicorn will listen on :5000, so you will expose this port with the EXPOSE directive.

      Finally, the CMD [ "gunicorn", "-w", "4", "--bind", "0.0.0.0:5000", "wsgi"] line runs the command to start the Gunicorn server with four workers listening on port 5000. The number of workers should generally be between two and four per core on the server; the Gunicorn documentation recommends (2 x $num_cores) + 1 as a starting point.
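      If you want a quick way to apply that rule of thumb on your own machine, the following short Python sketch (not part of the Dockerfile, purely an illustration) computes the suggested worker count:

      import multiprocessing

      # (2 x number of CPU cores) + 1, the starting point suggested by the Gunicorn docs
      workers = (2 * multiprocessing.cpu_count()) + 1
      print(workers)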

      Your completed Dockerfile will look like the following:

      app/Dockerfile

      FROM python:3.6.8-alpine3.9
      
      LABEL MAINTAINER="FirstName LastName <example@domain.com>"
      
      ENV GROUP_ID=1000 \
          USER_ID=1000
      
      WORKDIR /var/www/
      
      ADD ./requirements.txt /var/www/requirements.txt
      RUN pip install -r requirements.txt
      ADD . /var/www/
      RUN pip install gunicorn
      
      RUN addgroup -g $GROUP_ID www
      RUN adduser -D -u $USER_ID -G www www -s /bin/sh
      
      USER www
      
      EXPOSE 5000
      
      CMD [ "gunicorn", "-w", "4", "--bind", "0.0.0.0:5000", "wsgi"]
      

      Save the file and exit the text editor.

      Next, create a new directory to hold your Nginx configuration:
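
      • mkdir nginx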

      Then create the Dockerfile for your Nginx web server in the nginx directory:
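
      • nano nginx/Dockerfile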

      Add the following code to the file to create the Dockerfile that will build the image for your Nginx container:

      nginx/Dockerfile

      FROM digitalocean.com/alpine:latest
      
      LABEL MAINTAINER="FirstName LastName <example@domain.com>"
      
      RUN apk --update add nginx && \
          ln -sf /dev/stdout /var/log/nginx/access.log && \
          ln -sf /dev/stderr /var/log/nginx/error.log && \
          mkdir /etc/nginx/sites-enabled/ && \
          mkdir -p /run/nginx && \
          rm -rf /etc/nginx/conf.d/default.conf && \
          rm -rf /var/cache/apk/*
      
      COPY conf.d/app.conf /etc/nginx/conf.d/app.conf
      
      EXPOSE 80 443
      CMD ["nginx", "-g", "daemon off;"]
      

      This Nginx Dockerfile uses an alpine base image, which is a tiny Linux distribution with a minimal attack surface built for security.

      In the RUN directive you are installing nginx as well as creating symbolic links to publish the error and access logs to the standard error (/dev/stderr) and output (/dev/stdout). Publishing errors to standard error and output is a best practice since containers are ephemeral; doing this ships the logs to docker logs, and from there you can forward them to a logging service like the Elastic stack for persistence. After this is done, commands are run to remove the default.conf and /var/cache/apk/* to reduce the size of the resulting image. Executing all of these commands in a single RUN decreases the number of layers in the image, which also reduces the size of the resulting image.

      The COPY directive copies the app.conf web server configuration inside of the container. The EXPOSE directive documents that the container listens on ports :80 and :443, as your application will run on :80 with :443 as the secure port.

      Finally, the CMD directive defines the command to start the Nginx server.

      Save the file and exit the text editor.

      Now that the Dockerfile is ready, you are ready to configure the Nginx reverse proxy to route traffic to the Flask application.

      Step 3 — Configuring the Nginx Reverse Proxy

      In this step, you will configure Nginx as a reverse proxy to forward requests to Gunicorn on :5000. A reverse proxy server is used to direct client requests to the appropriate back-end server. It provides an additional layer of abstraction and control to ensure the smooth flow of network traffic between clients and servers.

      Get started by creating the nginx/conf.d directory:
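
      • mkdir nginx/conf.d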

      To configure Nginx, you need to create an app.conf file with the following configuration in the nginx/conf.d/ folder. The app.conf file contains the configuration that the reverse proxy needs to forward the requests to Gunicorn.

      • nano nginx/conf.d/app.conf

      Put the following contents into the app.conf file:

      nginx/conf.d/app.conf

      upstream app_server {
          server flask:5000;
      }
      
      server {
          listen 80;
          server_name _;
          error_log  /var/log/nginx/error.log;
          access_log /var/log/nginx/access.log;
          client_max_body_size 64M;
      
          location / {
              try_files $uri @proxy_to_app;
          }
      
          location @proxy_to_app {
              gzip_static on;
      
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header X-Forwarded-Proto $scheme;
              proxy_set_header Host $http_host;
              proxy_buffering off;
              proxy_redirect off;
              proxy_pass http://app_server;
          }
      }
      

      This will first define the upstream server, which is commonly used to specify a web or app server for routing or load balancing.

      Your upstream server, app_server, defines the server address with the server directive, which is identified by the container name flask:5000.

      The configuration for the Nginx web server is defined in the server block. The listen directive defines the port number on which your server will listen for incoming requests. The error_log and access_log directives define the files for writing logs. The proxy_pass directive is used to set the upstream server for forwarding the requests to http://app_server.

      Save and close the file.

      With the Nginx web server configured, you can move on to creating the Flask to-do API.

      Step 4 — Creating the Flask To-do API

      Now that you’ve built out your environment, you’re ready to build your application. In this step, you will write a to-do API application that will save and display to-do notes sent in from a POST request.

      Get started by creating the requirements.txt file in the app directory:

      • nano app/requirements.txt

      This file is used to install the dependencies for your application. The implementation of this tutorial will use Flask, Flask-PyMongo, and requests. Add the following to the requirements.txt file:

      app/requirements.txt

      Flask==1.0.2
      Flask-PyMongo==2.2.0
      requests==2.20.1
      

      Save the file and exit the editor after entering the requirements.

      Next, create the app.py file to contain the Flask application code in the app directory:
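
      • nano app/app.py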

      In your new app.py file, enter in the code to import the dependencies:

      app/app.py

      import os
      from flask import Flask, request, jsonify
      from flask_pymongo import PyMongo
      

      The os package is used to import the environment variables. From the flask library you imported the Flask, request, and jsonify objects to instantiate the application, handle requests, and send JSON responses, respectively. From flask_pymongo you imported the PyMongo object to interact with the MongoDB.

      Next, add the code needed to connect to MongoDB:

      app/app.py

      . . .
      application = Flask(__name__)
      
      application.config["MONGO_URI"] = 'mongodb://' + os.environ['MONGODB_USERNAME'] + ':' + os.environ['MONGODB_PASSWORD'] + '@' + os.environ['MONGODB_HOSTNAME'] + ':27017/' + os.environ['MONGODB_DATABASE']
      
      mongo = PyMongo(application)
      db = mongo.db
      

      The Flask(__name__) loads the application object into the application variable. Next, the code builds the MongoDB connection string from the environment variables using os.environ. Passing the application object into the PyMongo() method will give you the mongo object, which in turn gives you the db object from mongo.db.

      Now you will add the code to create an index message:

      app/app.py

      . . .
      @application.route('/')
      def index():
          return jsonify(
              status=True,
              message='Welcome to the Dockerized Flask MongoDB app!'
          )
      

      The @application.route('/') defines the / GET route of your API. Here your index() function returns a JSON string using the jsonify method.

      Next, add the /todo route to list all to-dos:

      app/app.py

      . . .
      @application.route('/todo')
      def todo():
          _todos = db.todo.find()
      
          item = {}
          data = []
          for todo in _todos:
              item = {
                  'id': str(todo['_id']),
                  'todo': todo['todo']
              }
              data.append(item)
      
          return jsonify(
              status=True,
              data=data
          )
      

      The @application.route('/todo') defines the /todo GET route of your API, which returns the to-dos in the database. The db.todo.find() method returns all the to-dos in the database. You then iterate over the _todos, building an item that includes only the id and todo fields from each object, appending each item to a data array, and finally returning the array as JSON.

      Next, add the code for creating the to-do:

      app/app.py

      . . .
      @application.route('/todo', methods=['POST'])
      def createTodo():
          data = request.get_json(force=True)
          item = {
              'todo': data['todo']
          }
          db.todo.insert_one(item)
      
          return jsonify(
              status=True,
              message='To-do saved successfully!'
          ), 201
      

      The @application.route('/todo') defines the /todo POST route of your API, which creates a to-do note in the database. The request.get_json(force=True) gets the JSON that you post to the route, and item is used to build the JSON that will be saved in the to-do. The db.todo.insert_one(item) is used to insert one item into the database. After the to-do is saved in the database you return a JSON response with a status code of 201 CREATED.

      Now you add the code to run the application:

      app/app.py

      . . .
      if __name__ == "__main__":
          ENVIRONMENT_DEBUG = os.environ.get("APP_DEBUG", True)
          ENVIRONMENT_PORT = os.environ.get("APP_PORT", 5000)
          application.run(host='0.0.0.0', port=ENVIRONMENT_PORT, debug=ENVIRONMENT_DEBUG)
      

      The condition __name__ == "__main__" checks whether the module is the entry point to your program. If __name__ equals "__main__", the code inside the if block runs the application with application.run(host='0.0.0.0', port=ENVIRONMENT_PORT, debug=ENVIRONMENT_DEBUG).

      Before that, the values for ENVIRONMENT_DEBUG and ENVIRONMENT_PORT are read from the environment variables using os.environ.get(), with the key as the first parameter and a default value as the second parameter. The application.run() call sets the host, port, and debug values for the application.

      The completed app.py file will look like this:

      app/app.py

      import os
      from flask import Flask, request, jsonify
      from flask_pymongo import PyMongo
      
      application = Flask(__name__)
      
      application.config["MONGO_URI"] = 'mongodb://' + os.environ['MONGODB_USERNAME'] + ':' + os.environ['MONGODB_PASSWORD'] + '@' + os.environ['MONGODB_HOSTNAME'] + ':27017/' + os.environ['MONGODB_DATABASE']
      
      mongo = PyMongo(application)
      db = mongo.db
      
      @application.route('/')
      def index():
          return jsonify(
              status=True,
              message='Welcome to the Dockerized Flask MongoDB app!'
          )
      
      @application.route('/todo')
      def todo():
          _todos = db.todo.find()
      
          item = {}
          data = []
          for todo in _todos:
              item = {
                  'id': str(todo['_id']),
                  'todo': todo['todo']
              }
              data.append(item)
      
          return jsonify(
              status=True,
              data=data
          )
      
      @application.route('/todo', methods=['POST'])
      def createTodo():
          data = request.get_json(force=True)
          item = {
              'todo': data['todo']
          }
          db.todo.insert_one(item)
      
          return jsonify(
              status=True,
              message='To-do saved successfully!'
          ), 201
      
      if __name__ == "__main__":
          ENVIRONMENT_DEBUG = os.environ.get("APP_DEBUG", True)
          ENVIRONMENT_PORT = os.environ.get("APP_PORT", 5000)
          application.run(host='0.0.0.0', port=ENVIRONMENT_PORT, debug=ENVIRONMENT_DEBUG)
      

      Save the file and exit the editor.

      Next, create the wsgi.py file in the app directory.
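
      • nano app/wsgi.py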

      The wsgi.py file creates an application object (or callable) so that the server can use it. Each time a request comes, the server uses this application object to run the application’s request handlers upon parsing the URL.

      Put the following contents into the wsgi.py file, save the file, and exit the text editor:

      app/wsgi.py

      from app import application
      
      if __name__ == "__main__":
        application.run()
      

      This wsgi.py file imports the application object from the app.py file and exposes it so that the Gunicorn server can serve the application.

      The to-do app is now in place, so you’re ready to start running the application in containers.

      Step 5 — Building and Running the Containers

      Now that you have defined all of the services in your docker-compose.yml file and their configurations, you can start the containers.

      Since the services are defined in a single file, you need to issue a single command to start the containers, create the volumes, and set up the networks. This command also builds the image for your Flask application and the Nginx web server. Run the following command to build the containers:
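
      • docker-compose up -d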

      When running the command for the first time, it will download all of the necessary Docker images, which can take some time. Once the images are downloaded and stored in your local machine, docker-compose will create your containers. The -d flag daemonizes the process, which allows it to run as a background process.

      Use the following command to list the running containers once the build process is complete:
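
      • docker ps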

      You will see output similar to the following:

      Output

      CONTAINER ID   IMAGE                               COMMAND                  CREATED       STATUS       PORTS                                      NAMES
      f20e9a7fd2b9   digitalocean.com/webserver:latest   "nginx -g 'daemon of…"   2 weeks ago   Up 2 weeks   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   webserver
      3d53ea054517   digitalocean.com/flask-python:3.6   "gunicorn -w 4 --bin…"   2 weeks ago   Up 2 weeks   5000/tcp                                   flask
      96f5a91fc0db   mongo:4.0.8                         "docker-entrypoint.s…"   2 weeks ago   Up 2 weeks   27017/tcp                                  mongodb

      The CONTAINER ID is a unique identifier that is used to access containers. IMAGE defines the image name for the given container, and NAMES shows the name under which each container was created; like the CONTAINER ID, these names can be used to access containers. Finally, STATUS provides information on the state of the container: whether it is running, restarting, or stopped.

      You’ve used the docker-compose command to build your containers from your configuration files. In the next step, you will create a MongoDB user for your application.

      Step 6 — Creating a User for Your MongoDB Database

      By default, MongoDB allows users to log in without credentials and grants unlimited privileges. In this step, you will secure your MongoDB database by creating a dedicated user to access it.

      To do this, you will need the root username and password that you set in the docker-compose.yml file environment variables MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD for the mongodb service. In general, it’s better to avoid using the root administrative account when interacting with the database. Instead, you will create a dedicated database user for your Flask application, as well as a new database that the Flask app will be allowed to access.

      To create a new user, first start an interactive shell on the mongodb container:

      • docker exec -it mongodb bash

      You use the docker exec command in order to run a command inside a running container along with the -it flag to run an interactive shell inside the container.

      Once inside the container, log in to the MongoDB root administrative account:
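
      • mongo -u mongodbuser -p --authenticationDatabase admin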

      You will be prompted for the password that you entered as the value for the MONGO_INITDB_ROOT_PASSWORD variable in the docker-compose.yml file. The password can be changed by setting a new value for the MONGO_INITDB_ROOT_PASSWORD in the mongodb service, in which case you will have to re-run the docker-compose up -d command.

      Run the show dbs; command to list all databases:
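
      • show dbs;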

      You will see the following output:

      Output

      admin   0.000GB
      config  0.000GB
      local   0.000GB

      The admin database is a special database that grants administrative permissions to users. If a user has read access to the admin database, they will have read and write permissions to all other databases. Since the output lists the admin database, the user has access to this database and can therefore read and write to all other databases.

      Saving the first to-do note will automatically create the MongoDB database. MongoDB allows you to switch to a database that does not exist using the use database command. It creates a database when a document is saved to a collection. Therefore the database is not created here; that will happen when you save your first to-do note in the database from the API. Execute the use command to switch to the flaskdb database:
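
      • use flaskdb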

      Next, create a new user that will be allowed to access this database:

      • db.createUser({user: 'flaskuser', pwd: 'your_mongodb_password', roles: [{role: 'readWrite', db: 'flaskdb'}]})
      • exit

      This command creates a user named flaskuser with readWrite access to the flaskdb database. Be sure to use a secure password in the pwd field. The user and pwd here are the values you defined in the docker-compose.yml file under the environment variables section for the flask service.

      Log in to the authenticated database with the following command:

      • mongo -u flaskuser -p your_mongodb_password --authenticationDatabase flaskdb

      Now that you have added the user, log out of the database.
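
      • exit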

      And finally, exit the container:
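
      • exit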

      You’ve now configured a dedicated database and user account for your Flask application. The database components are ready, so now you can move on to running the Flask to-do app.

      Step 7 — Running the Flask To-do App

      Now that your services are configured and running, you can test your application by navigating to http://your_server_ip in a browser. Additionally, you can run curl to see the JSON response from Flask:

      • curl -i http://your_server_ip

      You will receive the following response:

      Output

      {"message":"Welcome to the Dockerized Flask MongoDB app!","status":true}

      The configuration for the Flask application is passed to the application from the docker-compose.yml file. The configuration regarding the database connection is set using the MONGODB_* variables defined in the environment section of the flask service.

      To test everything out, create a to-do note using the Flask API. You can do this with a POST curl request to the /todo route:

      • curl -i -H "Content-Type: application/json" -X POST -d '{"todo": "Dockerize Flask application with MongoDB backend"}' http://your_server_ip/todo

      This request results in a response with a status code of 201 CREATED when the to-do item is saved to MongoDB:

      Output

      {"message":"To-do saved successfully!","status":true}

      You can list all of the to-do notes from MongoDB with a GET request to the /todo route:

      • curl -i http://your_server_ip/todo

      Output

      {"data":[{"id":"5c9fa25591cb7b000a180b60","todo":"Dockerize Flask application with MongoDB backend"}],"status":true}

      With this, you have Dockerized a Flask API with a MongoDB backend and Nginx as a reverse proxy deployed on your server. For a production environment you can use sudo systemctl enable docker to ensure the Docker service starts automatically at boot.

      Conclusion

      In this tutorial, you deployed a Flask application with Docker, MongoDB, Nginx, and Gunicorn. You now have a functioning modern stateless API application that can be scaled. Although you can achieve this result by using a command like docker container run, the docker-compose.yml simplifies your job as this stack can be put into version control and updated as necessary.

      From here you can also take a look at our further Python Framework tutorials.




      How To Scale a Node.js Application with MongoDB on Kubernetes Using Helm


      Introduction

      Kubernetes is a system for running modern, containerized applications at scale. With it, developers can deploy and manage applications across clusters of machines. And though it can be used to improve efficiency and reliability in single-instance application setups, Kubernetes is designed to run multiple instances of an application across groups of machines.

      When creating multi-service deployments with Kubernetes, many developers opt to use the Helm package manager. Helm streamlines the process of creating multiple Kubernetes resources by offering charts and templates that coordinate how these objects interact. It also offers pre-packaged charts for popular open-source projects.

      In this tutorial, you will deploy a Node.js application with a MongoDB database onto a Kubernetes cluster using Helm charts. You will use the official Helm MongoDB replica set chart to create a StatefulSet object consisting of three Pods, a Headless Service, and three PersistentVolumeClaims. You will also create a chart to deploy a multi-replica Node.js application using a custom application image. The setup you will build in this tutorial will mirror the functionality of the code described in Containerizing a Node.js Application with Docker Compose and will be a good starting point to build a resilient Node.js application with a MongoDB data store that can scale with your needs.

      Prerequisites

      To complete this tutorial, you will need:

      Step 1 — Cloning and Packaging the Application

      To use our application with Kubernetes, we will need to package it so that the kubelet agent can pull the image. Before packaging the application, however, we will need to modify the MongoDB connection URI in the application code to ensure that our application can connect to the members of the replica set that we will create with the Helm mongodb-replicaset chart.

      Our first step will be to clone the node-mongo-docker-dev repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in Containerizing a Node.js Application for Development With Docker Compose, which uses a demo Node.js application with a MongoDB database to demonstrate how to set up a development environment with Docker Compose. You can find more information about the application itself in the series From Containers to Kubernetes with Node.js.

      Clone the repository into a directory called node_project:

      • git clone https://github.com/do-community/node-mongo-docker-dev.git node_project

      Navigate to the node_project directory:
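
      • cd node_project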

      The node_project directory contains files and directories for a shark information application that works with user input. It has been modernized to work with containers: sensitive and specific configuration information has been removed from the application code and refactored to be injected at runtime, and the application's state has been offloaded to a MongoDB database.

      For more information about designing modern, containerized applications, please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes.

      When we deploy the Helm mongodb-replicaset chart, it will create:

      • A StatefulSet object with three Pods — the members of the MongoDB replica set. Each Pod will have an associated PersistentVolumeClaim and will maintain a fixed identity in the event of rescheduling.
      • A MongoDB replica set made up of the Pods in the StatefulSet. The set will include one primary and two secondaries. Data will be replicated from the primary to the secondaries, ensuring that our application data remains highly available.

      For our application to interact with the database replicas, the MongoDB connection URI in our code will need to include both the hostnames of the replica set members as well as the name of the replica set itself. We therefore need to include these values in the URI.

      The file in our cloned repository that specifies database connection information is called db.js. Open that file now using nano or your favorite editor:
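
      • nano db.js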

      Currently, the file includes constants that are referenced in the database connection URI at runtime. The values for these constants are injected using Node's process.env property, which returns an object with information about your user environment at runtime. Setting values dynamically in our application code allows us to decouple the code from the underlying infrastructure, which is necessary in a dynamic, stateless environment. For more information about refactoring application code in this way, see Step 2 of Containerizing a Node.js Application for Development With Docker Compose and the relevant discussion in The 12-Factor App.

      The constants for the connection URI and the URI string itself currently look like this:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      ...
      
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
      ...
      

      In keeping with a 12FA approach, we do not want to hard code the hostnames of our replica instances or our replica set name into this URI string. The existing MONGO_HOSTNAME constant can be expanded to include multiple hostnames — the members of our replica set — so we will leave that in place. We will need to add a replica set constant to the options section of the URI string, however.

      Add MONGO_REPLICASET to both the URI constant object and the connection string:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB,
        MONGO_REPLICASET
      } = process.env;
      
      ...
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?replicaSet=${MONGO_REPLICASET}&authSource=admin`;
      ...
      

      Using the replicaSet option in the options section of the URI allows us to pass in the name of the replica set, which, along with the hostnames defined in the MONGO_HOSTNAME constant, will allow us to connect to the set members.

      Save and close the file when you are finished editing.

      With your database connection information modified to work with replica sets, you can now package your application, build the image with the docker build command, and push it to Docker Hub.

      Build the image with docker build and the -t flag, which allows you to tag the image with a memorable name. In this case, tag the image with your Docker Hub username and name it node-replicas or a name of your own choosing:

      • docker build -t your_dockerhub_username/node-replicas .

      The . in the command specifies that the build context is the current directory.

      It will take a minute or two to build the image. Once it is complete, check your images:
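
      • docker images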

      You will see the following output:

      Output

      REPOSITORY                              TAG         IMAGE ID       CREATED         SIZE
      your_dockerhub_username/node-replicas   latest      56a69b4bc882   7 seconds ago   90.1MB
      node                                    10-alpine   aa57b0242b33   6 days ago      71MB

      Next, log in to the Docker Hub account you created in the prerequisites:

      • docker login -u your_dockerhub_username

      When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your non-root user's home directory with your Docker Hub credentials.

      Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_username with your own Docker Hub username:

      • docker push your_dockerhub_username/node-replicas

      You now have an application image that you can pull to run your replicated application with Kubernetes. The next step will be to configure specific parameters to use with the MongoDB Helm chart.

      Step 2 — Creating Secrets for the MongoDB Replica Set

      The stable/mongodb-replicaset chart provides different options when it comes to using Secrets, and we will create two to use with our chart deployment:

      • A Secret for our replica set keyfile that will function as a shared password between replica set members, allowing them to authenticate other members.
      • A Secret for our MongoDB admin user, who will be created as a root user on the admin database. This role will allow you to create subsequent users with limited permissions when deploying your application to production.

      With these Secrets in place, we will be able to set our preferred parameter values in a dedicated values file and create the StatefulSet object and MongoDB replica set with the Helm chart.

      First, let's create the keyfile. We will use the openssl command with the rand option to generate a 756 byte random string for the keyfile:

      • openssl rand -base64 756 > key.txt

      The output generated by the command will be base64 encoded, ensuring uniform data transmission, and redirected to a file called key.txt, following the guidelines stated in the mongodb-replicaset chart authentication documentation. The key itself must be between 6 and 1024 characters long, consisting only of characters in the base64 set.

      You can now create a Secret called keyfilesecret using this file with kubectl create:

      • kubectl create secret generic keyfilesecret --from-file=key.txt

      This will create a Secret object in the default namespace, since we have not created a specific namespace for our setup.

      You will see the following output indicating that your Secret has been created:

      Output

      secret/keyfilesecret created

      Remove key.txt:
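
      • rm key.txt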

      Alternatively, if you would like to save the file, be sure to restrict its permissions and add it to your .gitignore file to keep it out of version control.

      Next, create the Secret for your MongoDB admin user. The first step will be to convert your desired username and password to base64.

      Convert your database username:

      • echo -n 'your_database_username' | base64

      Note down the value you see in the output.

      Next, convert your password:

      • echo -n 'your_database_password' | base64

      Take note of the value in the output here as well.

      Open a file for the Secret:
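
      • nano secret.yaml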

      Note: Kubernetes objects are typically defined using YAML, which strictly forbids tabs and requires two spaces for indentation. If you would like to check the formatting of any of your YAML files, you can use a linter or test the validity of your syntax using kubectl create with the --dry-run and --validate flags:

      • kubectl create -f your_yaml_file.yaml --dry-run --validate=true

      In general, it is a good idea to validate your syntax before creating resources with kubectl.

      Add the following code to the file to create a Secret that will define a user and password with the encoded values you just created. Be sure to replace the dummy values here with your own encoded username and password:

      ~/node_project/secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: mongo-secret
      data:
        user: your_encoded_username
        password: your_encoded_password
      

      Here, we're using the key names that the mongodb-replicaset chart expects: user and password. We have named the Secret object mongo-secret, but you are free to name it anything you would like.

      Save and close the file when you are finished editing.

      Create the Secret object with the following command:

      • kubectl create -f secret.yaml

      You will see the following output:

      Output

      secret/mongo-secret created

      Again, you can either remove secret.yaml or restrict its permissions and add it to your .gitignore file.

      With your Secret objects created, you can move on to specifying the parameter values you will use with the mongodb-replicaset chart and creating the MongoDB deployment.

      Step 3 — Configuring the MongoDB Helm Chart and Creating a Deployment

      Helm comes with an actively maintained repository called stable that contains the chart we will be using: mongodb-replicaset. To use this chart with the Secrets we've just created, we will create a file with configuration parameter values called mongodb-values.yaml and then install the chart using this file.

      Our mongodb-values.yaml file will largely mirror the default values.yaml file in the mongodb-replicaset chart repository. We will, however, make the following changes to our file:

      • We will set the auth parameter to true to ensure that our database instances start with authorization enabled. This means that all clients will be required to authenticate for access to database resources and operations.
      • We will add information about the Secrets we created in the previous Step so that the chart can use these values to create the replica set keyfile and admin user.
      • We will decrease the size of the PersistentVolumes associated with each Pod in the StatefulSet to use the minimum viable DigitalOcean Block Storage unit, 1GB, though you are free to modify this to meet your storage requirements.

      Before writing the mongodb-values.yaml file, however, you should first check that you have a StorageClass created and configured to provision storage resources. Each of the Pods in your database StatefulSet will have a sticky identity and an associated PersistentVolumeClaim, which will dynamically provision a PersistentVolume for the Pod. If a Pod is rescheduled, the PersistentVolume will be mounted to whichever node the Pod is scheduled on (though each Volume must be manually deleted if its associated Pod or StatefulSet is permanently deleted).

      Because we are working with DigitalOcean Kubernetes, our default StorageClass provisioner is set to dobs.csi.digitalocean.com — DigitalOcean Block Storage — which we can check by typing:

      • kubectl get storageclass

      If you are working with a DigitalOcean cluster, you will see the following output:

      Output

      NAME                         PROVISIONER                 AGE
      do-block-storage (default)   dobs.csi.digitalocean.com   21m

      If you are not working with a DigitalOcean cluster, you will need to create a StorageClass and configure a provisioner of your choice. For details about how to do this, please see the official documentation.

      Now that you have ensured that you have a StorageClass configured, open mongodb-values.yaml for editing:

      • nano mongodb-values.yaml

      You will set values in this file that will do the following:

      • Enable authorization.
      • Reference your keyfilesecret and mongo-secret objects.
      • Specify 1Gi for your PersistentVolumes.
      • Set your replica set name to db.
      • Specify 3 replicas for the set.
      • Pin the mongo image to the latest version at the time of writing: 4.1.9.

      Paste the following code into the file:

      ~/node_project/mongodb-values.yaml

      replicas: 3
      port: 27017
      replicaSetName: db
      podDisruptionBudget: {}
      auth:
        enabled: true
        existingKeySecret: keyfilesecret
        existingAdminSecret: mongo-secret
      imagePullSecrets: []
      installImage:
        repository: unguiculus/mongodb-install
        tag: 0.7
        pullPolicy: Always
      copyConfigImage:
        repository: busybox
        tag: 1.29.3
        pullPolicy: Always
      image:
        repository: mongo
        tag: 4.1.9
        pullPolicy: Always
      extraVars: {}
      metrics:
        enabled: false
        image:
          repository: ssalaues/mongodb-exporter
          tag: 0.6.1
          pullPolicy: IfNotPresent
        port: 9216
        path: /metrics
        socketTimeout: 3s
        syncTimeout: 1m
        prometheusServiceDiscovery: true
        resources: {}
      podAnnotations: {}
      securityContext:
        enabled: true
        runAsUser: 999
        fsGroup: 999
        runAsNonRoot: true
      init:
        resources: {}
        timeout: 900
      resources: {}
      nodeSelector: {}
      affinity: {}
      tolerations: []
      extraLabels: {}
      persistentVolume:
        enabled: true
        #storageClass: "-"
        accessModes:
          - ReadWriteOnce
        size: 1Gi
        annotations: {}
      serviceAnnotations: {}
      terminationGracePeriodSeconds: 30
      tls:
        enabled: false
      configmap: {}
      readinessProbe:
        initialDelaySeconds: 5
        timeoutSeconds: 1
        failureThreshold: 3
        periodSeconds: 10
        successThreshold: 1
      livenessProbe:
        initialDelaySeconds: 30
        timeoutSeconds: 5
        failureThreshold: 3
        periodSeconds: 10
        successThreshold: 1
      

      The persistentVolume.storageClass parameter is commented out here: removing the comment and setting its value to "-" would disable dynamic provisioning. Because we are leaving the value undefined, the chart will fall back to the cluster's default StorageClass, which for our DigitalOcean cluster is backed by the dobs.csi.digitalocean.com provisioner.

      Also note the accessMode associated with the persistentVolume key: ReadWriteOnce means that the provisioned volume can be mounted as read-write by a single node only. Please see the documentation for more information about different access modes.

      To learn more about the other parameters included in the file, see the configuration table included with the repo.

      Save and close the file when you are finished editing.

      Before deploying the mongodb-replicaset chart, you will want to update the stable repo with the helm repo update command:

      • helm repo update

      This will get the latest chart information from the stable repository.

      Finally, install the chart with the following command:

      • helm install --name mongo -f mongodb-values.yaml stable/mongodb-replicaset

      Note: Before installing a chart, you can run helm install with the --dry-run and --debug options to check the generated manifests for your release:

      • helm install --name your_release_name -f your_values_file.yaml --dry-run --debug your_chart

      Note that we are naming the Helm release mongo. This name will refer to this particular deployment of the chart with the configuration options we've specified. We've pointed to these options by including the -f flag and our mongodb-values.yaml file.

      Also note that because we did not include the --namespace flag with helm install, our chart objects will be created in the default namespace.

      Once you have created the release, you will see output about its status, along with information about the created objects and instructions for interacting with them:

      Output

      NAME:   mongo
      LAST DEPLOYED: Tue Apr 16 21:51:05 2019
      NAMESPACE: default
      STATUS: DEPLOYED

      RESOURCES:
      ==> v1/ConfigMap
      NAME                              DATA  AGE
      mongo-mongodb-replicaset-init     1     1s
      mongo-mongodb-replicaset-mongodb  1     1s
      mongo-mongodb-replicaset-tests    1     0s
      ...
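
      You can revisit this status information at any time while the release exists by querying Helm directly, an optional check that does not change the release:

      • helm status mongo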

      You can now check on the creation of your Pods with the following command:

      • kubectl get pods

      You will see output like the following as the Pods are being created:

      Output

      NAME                         READY   STATUS     RESTARTS   AGE
      mongo-mongodb-replicaset-0   1/1     Running    0          67s
      mongo-mongodb-replicaset-1   0/1     Init:0/3   0          8s

      The READY and STATUS outputs here indicate that the Pods in our StatefulSet are not fully ready: the Init Containers associated with the Pod's containers are still running. Because StatefulSet members are created in sequential order, each Pod in the StatefulSet must be Running and Ready before the next Pod will be created.

      Once the Pods have been created and all of their associated containers are running, you will see this output:

      Output

      NAME                         READY   STATUS    RESTARTS   AGE
      mongo-mongodb-replicaset-0   1/1     Running   0          2m33s
      mongo-mongodb-replicaset-1   1/1     Running   0          94s
      mongo-mongodb-replicaset-2   1/1     Running   0          36s

      The Running STATUS indicates that your Pods are bound to nodes and that the containers associated with those Pods are running. READY indicates how many containers in a Pod are running. For more information, please consult the documentation on Pod lifecycles.
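
      Because the chart provisions a PersistentVolumeClaim for each replica dynamically, as discussed earlier in this Step, you can also confirm that a volume was bound for each Pod; this is an optional check, and the claim names will follow the chart's own naming pattern:

      • kubectl get pvc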

      Note:
      If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:

      • kubectl describe pods your_pod
      • kubectl logs your_pod

      Each of the Pods in your StatefulSet has a name that combines the name of the StatefulSet with the ordinal index of the Pod. Because we created three replicas, our StatefulSet members are numbered 0-2, and each has a stable DNS entry comprised of the following elements: $(statefulset-name)-$(ordinal).$(service name).$(namespace).svc.cluster.local.

      In our case, the StatefulSet and the Headless Service created by the mongodb-replicaset chart have the same names, which you can confirm by listing the StatefulSets and Services in your cluster:

      • kubectl get statefulset

      Output

      NAME                       READY   AGE
      mongo-mongodb-replicaset   3/3     4m2s

      • kubectl get svc

      Output

      NAME                              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
      kubernetes                        ClusterIP   10.245.0.1   <none>        443/TCP     42m
      mongo-mongodb-replicaset          ClusterIP   None         <none>        27017/TCP   4m35s
      mongo-mongodb-replicaset-client   ClusterIP   None         <none>        27017/TCP   4m35s

      This means that the first member of our StatefulSet will have the following DNS entry:

      mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local
      

      Because we need our application to connect to each MongoDB instance, it's essential that we have this information so that we can communicate directly with the Pods, rather than with the Service. When we create our custom application Helm chart, we will pass the DNS entries for each Pod to our application using environment variables.

      With your database instances up and running, you are ready to create the chart for your Node application.

      Step 4 — Creating a Custom Application Chart and Configuring Parameters

      We will create a custom Helm chart for our Node application and modify the default files in the standard chart directory so that our application can work with the replica set we have just created. We will also create files to define ConfigMap and Secret objects for our application.

      First, create a new chart directory called nodeapp with the following command:

      • helm create nodeapp

      This will create a directory called nodeapp in your ~/node_project folder with the following resources:

      • A Chart.yaml file with basic information about your chart.
      • A values.yaml file that allows you to set specific parameter values, as you did with your MongoDB deployment.
      • A .helmignore file with file and directory patterns that will be ignored when packaging charts.
      • A templates/ directory with the template files that will generate Kubernetes manifests.
      • A templates/tests/ directory for test files.
      • A charts/ directory for any charts that this chart depends on.
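
      If you would like to inspect the generated scaffolding yourself, you can list the new directory recursively; the exact contents will vary slightly depending on your Helm version:

      • ls -R nodeapp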

      The first file we will modify out of these default files is values.yaml. Open that file now:

      • nano nodeapp/values.yaml

      The values that we will set here include:

      • The number of replicas.
      • The application image we want to use. In our case, this will be the node-replicas image we created in Step 1.
      • The ServiceType. In this case, we will specify LoadBalancer to create a point of access to our application for testing purposes. Because we are working with a DigitalOcean Kubernetes cluster, this will create a DigitalOcean Load Balancer when we deploy our chart. In production, you can configure your chart to use Ingress Resources and Ingress Controllers to route traffic to your Services.
      • The targetPort to specify the port on the Pod where our application will be exposed.

      We will not enter environment variables into this file. Instead, we will create templates for ConfigMap and Secret objects and add these values to our application Deployment manifest, located at ~/node_project/nodeapp/templates/deployment.yaml.

      Configure the following values in the values.yaml file:

      ~/node_project/nodeapp/values.yaml

      # Default values for nodeapp.
      # This is a YAML-formatted file.
      # Declare variables to be passed into your templates.
      
      replicaCount: 3
      
      image:
        repository: your_dockerhub_username/node-replicas
        tag: latest
        pullPolicy: IfNotPresent
      
      nameOverride: ""
      fullnameOverride: ""
      
      service:
        type: LoadBalancer
        port: 80
        targetPort: 8080
      ...
      

      Save and close the file when you are finished editing.

      Next, open a secret.yaml file in the nodeapp/templates directory:

      • nano nodeapp/templates/secret.yaml

      In this file, add values for your MONGO_USERNAME and MONGO_PASSWORD application constants. These are the constants that your application will expect to have access to at runtime, as specified in db.js, your database connection file. As you add the values for these constants, remember to use the base64-encoded values that you used earlier in Step 2 when creating your mongo-secret object. If you need to recreate those values, you can return to Step 2 and run the relevant commands again.

      Add the following code to the file:

      ~/node_project/nodeapp/templates/secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: {{ .Release.Name }}-auth
      data:
        MONGO_USERNAME: your_encoded_username
        MONGO_PASSWORD: your_encoded_password
      

      The name of this Secret object will depend on the name of your Helm release, which you will specify when you deploy the application chart.

      Save and close the file when you are finished.

      Next, open a file to create a ConfigMap for your application:

      • nano nodeapp/templates/configmap.yaml

      In this file, we will define the remaining variables that our application expects: MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET. Our MONGO_HOSTNAME variable will include the DNS entry for each instance in our replica set, since this is what the MongoDB connection URI requires.

      According to the Kubernetes documentation, when an application implements liveness and readiness checks, SRV records should be used when connecting to the Pods. As discussed in Step 3, our Pod SRV records follow this pattern: $(statefulset-name)-$(ordinal).$(service name).$(namespace).svc.cluster.local. Since our MongoDB StatefulSet implements liveness and readiness checks, we should use these stable identifiers when defining the values of the MONGO_HOSTNAME variable.

      Add the following code to the file to define the MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET variables. You are free to use another name for your MONGO_DB database, but your MONGO_HOSTNAME and MONGO_REPLICASET values must be written as they appear here:

      ~/node_project/nodeapp/templates/configmap.yaml

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: {{ .Release.Name }}-config
      data:
        MONGO_HOSTNAME: "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local"  
        MONGO_PORT: "27017"
        MONGO_DB: "sharkinfo"
        MONGO_REPLICASET: "db"
      

      Because we have already created the StatefulSet object and replica set, the hostnames that are listed here must be listed in your file exactly as they appear in this example. If you destroy these objects and rename your MongoDB Helm release, then you will need to revise the values included in this ConfigMap. The same applies for MONGO_REPLICASET, since we specified the replica set name with our MongoDB release.

      Also note that the values listed here are quoted: ConfigMap values must be strings, so quoting prevents YAML from interpreting entries like the port number as integers when Helm renders the manifest.

      Save and close the file when you are finished editing.
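
      Taken together with the Secret values from secret.yaml, these variables map onto a standard MongoDB connection URI. The exact string and query options are assembled in db.js, so treat the following as an illustrative sketch rather than the application's literal output:

      mongodb://your_database_username:your_database_password@mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017,mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local:27017,mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local:27017/sharkinfo?replicaSet=db&authSource=admin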

      With your chart parameter values defined and your Secret and ConfigMap manifests created, you can edit the application Deployment template to use your environment variables.

      Step 5 — Integrating Environment Variables into Your Helm Deployment

      With the files for our application Secret and ConfigMap in place, we will need to make sure that our application Deployment can use these values. We will also customize the liveness and readiness probes that are already defined in the Deployment manifest.

      Open the application Deployment template for editing:

      • nano nodeapp/templates/deployment.yaml

      Though this is a YAML file, Helm templates use a different syntax from standard Kubernetes YAML files in order to generate manifests. For more information about templates, see the Helm documentation.

      In the file, first add an env key to your application container specifications, below the imagePullPolicy key and above ports:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              env:
              ports:
      

      Next, add the following keys to the list of env variables:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              env:
              - name: MONGO_USERNAME
                valueFrom:
                  secretKeyRef:
                    key: MONGO_USERNAME
                    name: {{ .Release.Name }}-auth
              - name: MONGO_PASSWORD
                valueFrom:
                  secretKeyRef:
                    key: MONGO_PASSWORD
                    name: {{ .Release.Name }}-auth
              - name: MONGO_HOSTNAME
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_HOSTNAME
                    name: {{ .Release.Name }}-config
              - name: MONGO_PORT
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_PORT
                    name: {{ .Release.Name }}-config
              - name: MONGO_DB
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_DB
                    name: {{ .Release.Name }}-config      
              - name: MONGO_REPLICASET
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_REPLICASET
                    name: {{ .Release.Name }}-config        
      

      Each variable includes a reference to its value, defined either by a secretKeyRef key, in the case of Secret values, or configMapKeyRef for ConfigMap values. These keys point to the Secret and ConfigMap files we created in the previous Step.

      Next, under the ports key, modify the containerPort definition to specify the port on the container where our application will be exposed:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
          ...
            env:
          ...
            ports:
              - name: http
                containerPort: 8080
                protocol: TCP
            ...
      

      Next, let's modify the liveness and readiness checks that are included in this Deployment manifest by default. These checks ensure that our application Pods are running and ready to serve traffic:

      • Readiness probes assess whether or not a Pod is ready to serve traffic, stopping all requests to the Pod until the checks succeed.
      • Liveness probes check basic application behavior to determine whether or not the application in the container is running and behaving as expected. If a liveness probe fails, Kubernetes will restart the container.

      For more about both, see the relevant discussion in Architecting Applications for Kubernetes.

      In our case, we will build on the httpGet request that Helm has provided by default and test whether or not our application is accepting requests on the /sharks endpoint. The kubelet service will perform the probe by sending a GET request to the Node server running in the application Pod's container and listening on port 8080. If the status code for the response is greater than or equal to 200 and less than 400, the kubelet will conclude that the container is healthy. Otherwise, in the case of a 4xx or 5xx status, the kubelet will either stop traffic to the container, in the case of the readiness probe, or restart the container, in the case of the liveness probe.

      Add the following modification to the stated path for the liveness and readiness probes:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
          ...
            env:
          ...
            ports:
              - name: http
                containerPort: 8080
                protocol: TCP
            livenessProbe:
              httpGet:
                path: /sharks
                port: http
            readinessProbe:
              httpGet:
                path: /sharks
                port: http
      

      Save and close the file when you are finished editing.

      You are now ready to create your application release with Helm. Run the following helm install command, which includes the name of the release and the location of the chart directory:

      • helm install --name nodejs ./nodeapp

      Remember that you can run helm install with the --dry-run and --debug options first, as discussed in Step 3, to check the generated manifests for your release.

      Again, because we are not including the --namespace flag with helm install, our chart objects will be created in the default namespace.

      You will see the following output indicating that your release has been created:

      Output

      NAME:   nodejs
      LAST DEPLOYED: Wed Apr 17 18:10:29 2019
      NAMESPACE: default
      STATUS: DEPLOYED

      RESOURCES:
      ==> v1/ConfigMap
      NAME           DATA  AGE
      nodejs-config  4     1s

      ==> v1/Deployment
      NAME            READY  UP-TO-DATE  AVAILABLE  AGE
      nodejs-nodeapp  0/3    3           0          1s
      ...

      Again, the output will indicate the status of the release, along with information about the created objects and how you can interact with them.

      Check the status of your Pods:

      • kubectl get pods

      Output

      NAME                              READY   STATUS    RESTARTS   AGE
      mongo-mongodb-replicaset-0        1/1     Running   0          57m
      mongo-mongodb-replicaset-1        1/1     Running   0          56m
      mongo-mongodb-replicaset-2        1/1     Running   0          55m
      nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          117s
      nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          117s
      nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          117s
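
      If you would like to confirm that the Secret and ConfigMap values reached the application containers, you can print the environment of one of the application Pods and filter for the MongoDB variables; substitute one of your own Pod names for the example name shown here:

      • kubectl exec nodejs-nodeapp-577df49dcc-b5fq5 -- env | grep MONGO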

      Once your Pods are up and running, check your Services:

      • kubectl get svc

      Output

      NAME                              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
      kubernetes                        ClusterIP      10.245.0.1     <none>        443/TCP        96m
      mongo-mongodb-replicaset          ClusterIP      None           <none>        27017/TCP      58m
      mongo-mongodb-replicaset-client   ClusterIP      None           <none>        27017/TCP      58m
      nodejs-nodeapp                    LoadBalancer   10.245.33.46   your_lb_ip    80:31518/TCP   3m22s

      The EXTERNAL-IP associated with the nodejs-nodeapp Service is the IP address where you can access the application from outside of the cluster. If you see a <pending> status in the EXTERNAL-IP column, this means that your load balancer is still being created.

      Once you see an IP in that column, navigate to it in your browser: http://your_lb_ip.

      You should see the following landing page:

      Application Landing Page
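
      If you prefer the command line, you can also send a request to the /sharks endpoint that the probes use, substituting your own load balancer IP; the response should be the HTML for the shark information page:

      • curl http://your_lb_ip/sharks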

      Now that your replicated application is working, let's add some test data to ensure that replication is working between members of the replica set.

      Step 6 — Testing MongoDB Replication

      With our application running and accessible through an external IP address, we can add some test data and ensure that it is being replicated between the members of our MongoDB replica set.

      First, make sure you have navigated your browser to the application landing page:

      Application Landing Page

      Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark's general character:

      Shark Info Form

      In the form, add an initial shark of your choosing. To demonstrate, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

      Filled Shark Form

      Click on the Submit button. You will see a page with this shark information displayed back to you:

      Shark Output

      Now head back to the shark information form by clicking on Sharks in the top navigation bar:

      Shark Info Form

      Enter a new shark of your choosing. We'll go with Whale Shark and Large:

      Enter New Shark

      Once you click Submit, you will see that the new shark has been added to the shark collection in your database:

      Complete Shark Collection

      Let's check that the data we've entered has been replicated between the primary and secondary members of our replica set.

      Get a list of your Pods:

      • kubectl get pods

      Output

      NAME                              READY   STATUS    RESTARTS   AGE
      mongo-mongodb-replicaset-0        1/1     Running   0          74m
      mongo-mongodb-replicaset-1        1/1     Running   0          73m
      mongo-mongodb-replicaset-2        1/1     Running   0          72m
      nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          5m4s
      nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          5m4s
      nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          5m4s

      To access the mongo shell on your Pods, you can use the kubectl exec command and the username you used to create your mongo-secret in Step 2. Access the mongo shell on the first Pod in the StatefulSet with the following command:

      • kubectl exec -it mongo-mongodb-replicaset-0 -- mongo -u your_database_username -p --authenticationDatabase admin

      When prompted, enter the password associated with this username:

      Output

      MongoDB shell version v4.1.9
      Enter password:

      You will be dropped into an administrative shell:

      Output

      MongoDB server version: 4.1.9
      Welcome to the MongoDB shell.
      ...
      db:PRIMARY>

      Though the prompt itself includes this information, you can manually check to see which replica set member is the primary with the rs.isMaster() method:

      • rs.isMaster()

      You will see output like the following, indicating the hostname of the primary:

      Output

      db:PRIMARY> rs.isMaster()
      {
              "hosts" : [
                      "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                      "mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                      "mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local:27017"
              ],
              ...
              "primary" : "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
              ...
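
      For a fuller picture of the replica set, including each member's state and health, you can also run the rs.status() method; this is optional and does not modify the set:

      • rs.status()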

      Next, switch to your sharkinfo database:

      • use sharkinfo

      Output

      switched to db sharkinfo

      List the collections in the database:

      • show collections

      Output

      sharks

      Output the documents in the collection:

      • db.sharks.find()

      You will see the following output:

      Output

      { "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 } { "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }

      Exit the MongoDB shell:

      • exit

      Now that we have checked the data on our primary, let's check that it's being replicated to a secondary. kubectl exec into mongo-mongodb-replicaset-1 with the following command:

      • kubectl exec -it mongo-mongodb-replicaset-1 -- mongo -u your_database_username -p --authenticationDatabase admin

      Once in the administrative shell, we will need to use the db.setSlaveOk() method to permit read operations from the secondary instance.

      Switch to the sharkinfo database:

      • use sharkinfo

      Output

      switched to db sharkinfo

      Permit the read operation of the documents in the sharks collection:

      • db.setSlaveOk()

      Output the documents in the collection:

      • db.sharks.find()

      You should now see the same information that you saw when running this method on your primary instance:

      Output

      db:SECONDARY> db.sharks.find()
      { "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 }
      { "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }

      This output confirms that your application data is being replicated between the members of your replica set.
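
      If you would like, you can repeat this check on the remaining secondary by connecting to mongo-mongodb-replicaset-2 with the same credentials, then running db.setSlaveOk() followed by db.sharks.find() against the sharkinfo database:

      • kubectl exec -it mongo-mongodb-replicaset-2 -- mongo -u your_database_username -p --authenticationDatabase admin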

      Conclusion

      You have now deployed a replicated, highly-available shark information application on a Kubernetes cluster using Helm charts. This demo application and the workflow outlined in this tutorial can act as a starting point as you build custom charts for your application and take advantage of Helm's stable repository and other chart repositories.

      As you move toward production, consider implementing the following:

      To learn more about Helm, see An Introduction to Helm, the Package Manager for Kubernetes, How To Install Software on Kubernetes Clusters with the Helm Package Manager, and the Helm documentation.


