
      February 2019

      Understanding Managed Databases


      Introduction

      Secure, reliable data storage is a must for nearly every modern application. However, the infrastructure needed for a self-managed, on-premises database can be prohibitively expensive for many teams. Similarly, employees who have the skills and experience needed to maintain a production database effectively can be difficult to come by.

      The spread of cloud computing services has lowered the barriers to entry associated with provisioning a database, but many developers still lack the time or expertise needed to manage and tune a database to suit their needs. For this reason, many businesses are turning to managed database services to help them build and scale their databases in line with their growth.

      In this conceptual article, we will go over what managed databases are and how they can be beneficial to many organizations. We will also cover some practical considerations one should make before building their next application on top of a managed database solution.

      Managed Databases in a Nutshell

      A managed database is a cloud computing service in which the end user pays a cloud service provider for access to a database. Unlike a typical database, users don’t have to set up or maintain a managed database on their own; rather, it’s the provider’s responsibility to oversee the database’s infrastructure. This allows the user to focus on building their application instead of spending time configuring their database and keeping it up to date.

The process of provisioning a managed database varies by provider, but in general it’s similar to that of any other cloud-based service. After registering an account and logging in to the dashboard, the user reviews the available database options — such as the database engine and cluster size — and then chooses the setup that’s right for them. After provisioning the managed database, they can connect to it through a GUI or client and can then begin loading data and integrating the database with their application.
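
For example, connecting to a newly provisioned managed MySQL database from a terminal might look something like the following sketch, in which the hostname, port, username, and database name are hypothetical placeholder values that a provider would supply:

• mysql -u doadmin -p -h your-cluster-host.example.com -P 25060 --ssl-mode=REQUIRED defaultdb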

Managed database solutions simplify the process of provisioning and maintaining a database. Instead of running commands from a terminal to install and set one up, you can deploy a production-ready database with just a few clicks in your browser. By simplifying and automating database management, cloud providers make it easier for anyone, even novice database users, to build data-driven applications and websites. This is the result of a decades-long trend towards simplifying, automating, and abstracting various database management tasks, which was itself a response to pain points long felt by database administrators.

      Pain Points of On-Premises and Self-Managed Databases

      Prior to the rise of the cloud computing model, any organization in need of a data center had to supply all the time, space, and resources that went into setting one up. Once their database was up and running, they also had to maintain the hardware, keep its software updated, hire a team to manage the database, and train their employees on how to use it.

      As cloud computing services grew in popularity in the 2000s, it became easier and more affordable to provision server infrastructure, since the hardware and the space required for it no longer had to be owned or managed by those using it. Likewise, setting up a database entirely within the cloud became far less difficult; a business or developer would just have to requisition a server, install and configure their chosen database management system, and begin storing data.

      While cloud computing did make the process of setting up a traditional database easier, it didn’t address all of its problems. For instance, in the cloud it can still be difficult to pinpoint the ideal size of a database’s infrastructure footprint before it begins collecting data. This is important because cloud consumers are charged based on the resources they consume, and they risk paying for more than what they require if the server they provision is larger than necessary. Additionally, as with traditional on-premises databases, managing one’s database in the cloud can be a costly endeavor. Depending on your needs, you may still need to hire an experienced database administrator or spend a significant amount of time and money training your existing staff to manage your database effectively.

      Many of these issues are compounded for smaller organizations and independent developers. While a large business can usually afford to hire employees with a deep knowledge of databases, smaller teams usually have fewer resources available, leaving them with only their existing institutional knowledge. This makes tasks like replication, migrations, and backups all the more difficult and time consuming, as they can require a great deal of on-the-job learning as well as trial and error.

      Managed databases help to resolve these pain points with a host of benefits to businesses and developers. Let’s walk through some of these benefits and how they can impact development teams.

      Benefits of Managed Databases

Managed database services can help to reduce many of the headaches associated with provisioning and managing a database. For one thing, building an application on top of a managed database service drastically speeds up the process of provisioning a database server. With a self-managed solution, you must obtain a server (either on-premises or in the cloud), connect to it from a client or terminal, configure and secure it, and then install and set up the database management software before you can begin storing data. With a managed database, you only have to decide on the initial size of the database server and configure any additional provider-specific options, and you’ll have a new database ready to integrate with your app or website. This can usually be done in just a few minutes through the provider’s user interface.

      Another appeal of managed databases is automation. Self-managed databases can consume a large amount of an organization’s resources because its employees have to perform every administrative task — from scaling to performing updates, running migrations, and creating backups — manually. With a managed database, however, these and other tasks are done either automatically or on-demand, which markedly reduces the risk of human error.

Relatedly, managed database services help to streamline the process of database scaling. Scaling a self-managed database can be very time- and resource-intensive. Whether you choose sharding, replication, load balancing, or something else as your scaling strategy, if you manage the infrastructure yourself then you’re responsible for ensuring that no data is lost in the process and that the application will continue to work properly. If you integrate your application with a managed database service, however, you can scale the database cluster on demand. Rather than having to work out the optimal server size or CPU usage beforehand, you can quickly provision more resources on the fly. This helps you avoid using unnecessary resources, meaning you also won’t pay for what you don’t need.

Managed solutions also tend to have high availability built in. In the context of cloud computing, a service is said to be highly available if it is stable and likely to run without failure for long periods of time. Most reputable cloud providers’ products come with a service level agreement (SLA), a commitment between the provider and its customers that guarantees the availability and reliability of their services. A typical SLA will specify how much downtime the customer should expect, and many also define the compensation for customers if those service levels are not met. This assures customers that their database is unlikely to fail and that, if it does, they can at least expect some kind of reparation from the provider.

In general, managed databases simplify the tasks associated with provisioning and maintaining a database. Depending on the provider, you or your team will still likely need some level of experience working with databases in order to provision a database and interact with it as you build and scale your application. Ultimately, though, the database-specific experience needed to administer a managed database will be much less than with a self-managed solution.

      Of course, managed databases aren’t able to solve every problem, and may prove to be a less-than-ideal choice for some. Next, we’ll go over a few of the potential drawbacks one should consider before provisioning a managed database.

      Practical Considerations

      A managed database service can ease the stress of deploying and maintaining a database, but there are still a few things to keep in mind before committing to one. Recall that a principal draw of managed databases is that they abstract away most of the more tedious aspects of database administration. To this end, a managed database provider aims to deliver a rudimentary database that will satisfy the most common use cases. Accordingly, their database offerings won’t feature tons of customization options or the unique features included in more specialized database software. Because of this, you won’t have as much freedom to tailor your database and you’ll be limited to what the cloud provider has to offer.

A managed database is almost always more expensive than a self-managed one. This makes sense, since you’re paying for the cloud provider to support you in managing the database, but it can be a cause for concern for teams with limited resources. Moreover, pricing for managed databases is usually based on how much storage and RAM the database uses, how many reads it handles, and how many backups of the database the user creates. As a result, any application that handles large amounts of data or traffic will be more expensive to run on a managed database service than on a self-managed cloud database.

      One should also reflect on the impact switching to a managed database will have on their internal workflows and whether or not they’ll be able to adjust to those changes. Every provider differs, and depending on their SLA they may shoulder responsibility for only some administration tasks, which would be problematic for developers looking for a full-service solution. On the other hand, some providers could have a prohibitively restrictive SLA or make the customer entirely dependent on the provider in question, a situation known as vendor lock-in.

      Lastly, and perhaps most importantly, one should carefully consider whether or not any managed database service they’re considering using will meet their security needs. All databases, including on-premises databases, are prone to certain security threats, like SQL injection attacks or data leaks. However, the security dynamic is far different for databases hosted in the cloud. Managed database users can’t control the physical location of their data or who has access to it, nor can they ensure compliance with specific security standards. This can be especially problematic if your client has heightened security needs.

To illustrate, imagine that you’re hired by a bank to build an application where its clients can access financial records and make payments. The bank may stipulate that the app must encrypt data at rest, use appropriately scoped user permissions, and comply with certain regulatory standards like PCI DSS. Not all managed database providers adhere to the same regulatory standards or maintain the same security practices, and they’re unlikely to adopt new standards or practices for just one of their customers. For this reason, it’s critical that you ensure any managed database provider you rely on for such an application is able to meet your security needs as well as the needs of your clients.

      Conclusion

      Managed databases have many features that appeal to a wide variety of businesses and developers, but a managed database may not solve every problem or suit everyone’s needs. Some may find that a managed database’s limited feature set and configuration options, increased cost, and reduced flexibility outweigh any of its potential advantages. However, compelling benefits like ease of use, scalability, automated backups and upgrades, and high availability have led to increased adoption of managed database solutions in a variety of industries.

      If you’re interested in learning more about DigitalOcean Managed Databases, we encourage you to check out our Managed Databases product documentation.




How To Use Traefik as a Reverse Proxy for Docker Containers on Ubuntu 18.04


The author selected Girls Who Code to receive a donation as part of the Write for DOnations program.

Introduction

Docker can be an efficient way to run web applications in production, but you may want to run multiple applications on the same Docker host. In this situation, you’ll need to set up a reverse proxy, since you only want to expose ports 80 and 443 to the rest of the world.

Traefik is a Docker-aware reverse proxy that includes its own monitoring dashboard. In this tutorial, you’ll use Traefik to route requests to two different web application containers: a WordPress container and an Adminer container, each talking to a MySQL database. You will configure Traefik to serve everything over HTTPS using Let’s Encrypt.

Prerequisites

To follow this tutorial, you will need the following:

Step 1 — Setting Up and Running Traefik

The Traefik project has an official Docker image, so we will use it to run Traefik in a Docker container.

Before we get our Traefik container up and running, though, we need to create a configuration file and set up an encrypted password so we can access the monitoring dashboard.

We’ll use the htpasswd utility to create this encrypted password. First, install the utility, which is included in the apache2-utils package:

      • sudo apt-get install apache2-utils

Next, generate the password with htpasswd. Replace senha_segura with the password you’d like to use for the Traefik admin user:

      • htpasswd -nb admin senha_segura

The output from the program will look like this:

      Output

      admin:$apr1$ruca84Hq$mbjdMZBAG.KWn7vfN/SNK/

You’ll use this output in the Traefik configuration file to set up HTTP Basic Authentication for the Traefik health check and monitoring dashboard. Copy the entire output line so you can paste it later.

To configure the Traefik server, we’ll create a new configuration file called traefik.toml using the TOML format. TOML is a configuration language similar to INI files, but standardized. This file lets us configure the Traefik server and the various integrations, or providers, we want to use. In this tutorial, we’ll use three of Traefik’s available providers: api, docker, and acme, which is used to support TLS using Let’s Encrypt.

Open your new file in nano or your favorite text editor:
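
• nano traefik.toml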

First, add two named entry points, http and https, which all backends will have access to by default:

      traefik.toml

      defaultEntryPoints = ["http", "https"]
      

We’ll configure the http and https entry points later in this file.

Next, configure the api provider, which gives you access to a dashboard interface. This is where you’ll paste the output from the htpasswd command:

      traefik.toml

      
      ...
      [entryPoints]
        [entryPoints.dashboard]
          address = ":8080"
          [entryPoints.dashboard.auth]
            [entryPoints.dashboard.auth.basic]
              users = ["admin:sua_senha_criptografada"]
      
      [api]
      entrypoint="dashboard"
      

The dashboard is a separate web application that will run within the Traefik container. We’ll set the dashboard to run on port 8080.

The entryPoints.dashboard section configures how we’ll connect with the api provider, and the entryPoints.dashboard.auth.basic section configures HTTP Basic Authentication for the dashboard. Use the output from the htpasswd command you just ran for the value of the users entry. You can specify additional logins by separating them with commas.

We’ve defined our first entryPoint, but we’ll need to define others to handle standard HTTP and HTTPS communication that isn’t directed at the api provider. The entryPoints section configures the addresses that Traefik and the proxied containers can listen on. Add these lines to the file just below the entryPoints heading:

      traefik.toml

      
      ...
        [entryPoints.http]
          address = ":80"
            [entryPoints.http.redirect]
              entryPoint = "https"
        [entryPoints.https]
          address = ":443"
            [entryPoints.https.tls]
      ...
      

The http entry point handles port 80, while the https entry point uses port 443 for TLS/SSL. We automatically redirect all traffic on port 80 to the https entry point to force secure connections for all requests.

Next, add this section to configure Traefik’s Let’s Encrypt certificate support:

      traefik.toml

      ...
      [acme]
      email = "seu_email@seu_domínio"
      storage = "acme.json"
      entryPoint = "https"
      onHostRule = true
        [acme.httpChallenge]
        entryPoint = "http"
      

This section is called acme because ACME is the name of the protocol used to communicate with Let’s Encrypt to manage certificates. The Let’s Encrypt service requires registration with a valid email address, so in order to have Traefik generate certificates for our hosts, set the email key to your email address. We then specify that we’ll store the information we receive from Let’s Encrypt in a JSON file called acme.json. The entryPoint key needs to point to the entry point that handles port 443, which in our case is the https entry point.

The onHostRule key dictates how Traefik should generate certificates. We want to fetch our certificates as soon as our containers with the specified hostnames are created, and that’s what the onHostRule setting will do.

The acme.httpChallenge section lets us specify how Let’s Encrypt can verify that the certificate should be generated. We’re configuring it to serve a file as part of the challenge through the http entry point.

Finally, configure the docker provider by adding these lines to the file:

      traefik.toml

      
      ...
      [docker]
      domain = "seu_domínio"
      watch = true
      network = "web"
      

The docker provider enables Traefik to act as a proxy in front of Docker containers. We’ve configured the provider to watch for new containers on the web network (which we’ll create shortly) and expose them as subdomains of seu_domínio.

At this point, traefik.toml should have the following contents:

      traefik.toml

      defaultEntryPoints = ["http", "https"]
      
      [entryPoints]
        [entryPoints.dashboard]
          address = ":8080"
          [entryPoints.dashboard.auth]
            [entryPoints.dashboard.auth.basic]
              users = ["admin:sua_senha_criptografada"]
        [entryPoints.http]
          address = ":80"
            [entryPoints.http.redirect]
              entryPoint = "https"
        [entryPoints.https]
          address = ":443"
            [entryPoints.https.tls]
      
      [api]
      entrypoint="dashboard"
      
      [acme]
      email = "seu_email@seu_domínio"
      storage = "acme.json"
      entryPoint = "https"
      onHostRule = true
        [acme.httpChallenge]
        entryPoint = "http"
      
      [docker]
      domain = "seu_domínio"
      watch = true
      network = "web"
      

Save the file and exit the editor. With all of this configuration in place, we can start Traefik.

Step 2 — Running the Traefik Container

Next, create a Docker network for the proxy to share with containers. We need the Docker network so that we can use it with applications that are run using Docker Compose. Let’s call this network web.

      • docker network create web

When the Traefik container starts, we will add it to this network. Then we can add additional containers to this network later for Traefik to proxy to.

Next, create an empty file that will hold our Let’s Encrypt information. We’ll share this into the container so Traefik can use it:
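
• touch acme.json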

Traefik will only be able to use this file if the root user inside the container has unique read and write access to it. To do this, lock down the permissions on acme.json so that only the file’s owner has read and write permission:
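
• chmod 600 acme.json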

Once the file is passed to Docker, the owner automatically changes to the root user inside the container.

Finally, create the Traefik container with this command:

• docker run -d \
• -v /var/run/docker.sock:/var/run/docker.sock \
• -v $PWD/traefik.toml:/traefik.toml \
• -v $PWD/acme.json:/acme.json \
• -p 80:80 \
• -p 443:443 \
• -l traefik.frontend.rule=Host:monitor.seu_domínio \
• -l traefik.port=8080 \
• --network web \
• --name traefik \
• traefik:1.7.2-alpine

The command is a bit long, so let’s break it down. We use the -d flag to run the container in the background as a daemon. We then share our docker.sock file into the container so that the Traefik process can listen for changes to containers. We also share the traefik.toml configuration file and the acme.json file we created into the container.

Next, we map ports :80 and :443 of our Docker host to the same ports in the Traefik container, so Traefik receives all HTTP and HTTPS traffic to the server.

Then we set up two Docker labels that tell Traefik to direct traffic to monitor.seu_domínio to port :8080 within the Traefik container, exposing the monitoring dashboard.

We set the network of the container to web, and we name the container traefik.

Finally, we use the traefik:1.7.2-alpine image for this container because it’s small.

A Docker image’s ENTRYPOINT is a command that always runs when a container is created from the image. In this case, the command is the traefik binary within the container. You can pass additional arguments to that command when you launch the container, but we’ve defined all of our settings in the traefik.toml file.

With the container started, you now have a dashboard you can access to see the health of your containers. You can also use this dashboard to visualize the frontends and backends that Traefik has registered. Access the monitoring dashboard by pointing your browser to https://monitor.seu_domínio. You will be prompted for your username and password, which are admin and the password you configured in Step 1.

Once logged in, you’ll see an interface similar to this:

There isn’t much to see just yet, but leave this window open, and you’ll see the contents change as you add containers for Traefik to work with.

We now have our Traefik proxy running, configured to work with Docker, and ready to monitor other Docker containers. Let’s start some containers for Traefik to proxy.
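
Step 3 — Registering Containers with Traefik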

With the Traefik container running, you’re ready to run applications behind it. Let’s launch the following containers behind Traefik:

1. A blog using the official WordPress image.

2. A database management server using the official Adminer image.

We’ll manage both of these applications with Docker Compose using a docker-compose.yml file. Open the docker-compose.yml file in your editor:
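
• nano docker-compose.yml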

Add the following lines to the file to specify the version and the networks we’ll use:

      docker-compose.yml

version: "3"

networks:
  web:
    external: true
  internal:
    external: false

We use version 3 of Docker Compose because it’s the newest major version of the Compose file format.

For Traefik to recognize our applications, they must be part of the same network, and since we created the network manually, we pull it in by specifying the network name web and setting external to true. Then we define another network so that we can connect our exposed containers to a database container that we won’t expose through Traefik. We’ll call this network internal.

Next, we’ll define each of our services, one at a time. Let’s start with the blog container, which we’ll base on the official WordPress image. Add this configuration to the file:

      docker-compose.yml

      
      version: "3"
      ...
      
      services:
        blog:
          image: wordpress:4.9.8-apache
          environment:
            WORDPRESS_DB_PASSWORD:
          labels:
            - traefik.backend=blog
            - traefik.frontend.rule=Host:blog.seu_domínio
            - traefik.docker.network=web
            - traefik.port=80
          networks:
            - internal
            - web
          depends_on:
            - mysql
      

The environment key lets you specify environment variables that will be set inside the container. By not setting a value for WORDPRESS_DB_PASSWORD, we’re telling Docker Compose to get the value from our shell and pass it through when we create the container. We’ll define this environment variable in our shell before starting the containers. This way we don’t hard-code passwords into the configuration file.

The labels section is where you specify configuration values for Traefik. Docker labels don’t do anything by themselves, but Traefik reads them so it knows how to treat containers. Here’s what each of these labels does:

• traefik.backend specifies the name of the backend service in Traefik (which points to the actual blog container).

• traefik.frontend.rule=Host:blog.seu_domínio tells Traefik to examine the host requested, and if it matches the pattern of blog.seu_domínio, it should route the traffic to the blog container.

• traefik.docker.network=web specifies which network Traefik should look under to find the internal IP for this container. Since our Traefik container has access to all of the Docker information, it could potentially take the IP for the internal network if we didn’t specify this.

• traefik.port specifies the exposed port that Traefik should use to route traffic to this container.

With this configuration, all traffic sent to our Docker host’s port 80 will be routed to the blog container.

We’re assigning this container to two different networks so that Traefik can find it through the web network and can communicate with the database container through the internal network.

Lastly, the depends_on key tells Docker Compose that this container needs to start after its dependencies are running. Since WordPress needs a database to run, we have to run our mysql container before starting our blog container.

Next, configure the MySQL service by adding this configuration to your file:

      docker-compose.yml

      
      services:
      ...
        mysql:
          image: mysql:5.7
          environment:
            MYSQL_ROOT_PASSWORD:
          networks:
            - internal
          labels:
            - traefik.enable=false
      

We’re using the official MySQL 5.7 image for this container. You’ll notice that we’re once again using an environment item without a value. The MYSQL_ROOT_PASSWORD and WORDPRESS_DB_PASSWORD variables will need to be set to the same value to make sure that our WordPress container can communicate with MySQL. We don’t want to expose the mysql container to Traefik or the outside world, so we’re only assigning this container to the internal network. Since Traefik has access to the Docker socket, the process will still expose a frontend for the mysql container by default, so we’ll add the label traefik.enable=false to specify that Traefik should not expose this container.

Finally, add this configuration to define the Adminer container:

      docker-compose.yml

      
      services:
      ...
        adminer:
          image: adminer:4.6.3-standalone
          labels:
            - traefik.backend=adminer
            - traefik.frontend.rule=Host:db-admin.seu_domínio
            - traefik.docker.network=web
            - traefik.port=8080
          networks:
            - internal
            - web
          depends_on:
            - mysql
      

This container is based on the official Adminer image. The network and depends_on configuration for this container exactly matches what we’re using for the blog container.

However, since we’re directing all of the traffic to port 80 on our Docker host directly to the blog container, we need to configure this container differently in order for traffic to make it to our adminer container. The line traefik.frontend.rule=Host:db-admin.seu_domínio tells Traefik to examine the host requested. If it matches the pattern of db-admin.seu_domínio, Traefik will route the traffic to the adminer container.

At this point, docker-compose.yml should have the following contents:

      docker-compose.yml

      
      version: "3"
      
      networks:
        web:
          external: true
        internal:
          external: false
      
      services:
        blog:
          image: wordpress:4.9.8-apache
          environment:
            WORDPRESS_DB_PASSWORD:
          labels:
            - traefik.backend=blog
            - traefik.frontend.rule=Host:blog.seu_domínio
            - traefik.docker.network=web
            - traefik.port=80
          networks:
            - internal
            - web
          depends_on:
            - mysql
        mysql:
          image: mysql:5.7
          environment:
            MYSQL_ROOT_PASSWORD:
          networks:
            - internal
          labels:
            - traefik.enable=false
        adminer:
          image: adminer:4.6.3-standalone
          labels:
            - traefik.backend=adminer
            - traefik.frontend.rule=Host:db-admin.seu_domínio
            - traefik.docker.network=web
            - traefik.port=8080
          networks:
            - internal
            - web
          depends_on:
            - mysql
      

Save the file and exit your text editor.

Next, set values in your shell for the WORDPRESS_DB_PASSWORD and MYSQL_ROOT_PASSWORD variables before you start your containers:

      • export WORDPRESS_DB_PASSWORD=senha_segura_do_banco_de_dados
      • export MYSQL_ROOT_PASSWORD=senha_segura_do_banco_de_dados

Replace senha_segura_do_banco_de_dados with your desired database password. Remember to use the same password for both WORDPRESS_DB_PASSWORD and MYSQL_ROOT_PASSWORD.

With these variables set, run the containers using docker-compose:
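
• docker-compose up -d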

Now take another look at the Traefik admin dashboard. You’ll see that there is now a backend and a frontend for the two exposed servers:

Navigate to blog.seu_domínio, substituting seu_domínio with your domain. You’ll be redirected to a TLS connection and can now complete the WordPress setup:

Now access Adminer by visiting db-admin.seu_domínio in your browser, again substituting seu_domínio with your domain. The mysql container isn’t exposed to the outside world, but the adminer container has access to it through the internal Docker network that they share, using the mysql container name as a hostname.

On the Adminer login screen, use the username root, use mysql for the server, and use the value you set for MYSQL_ROOT_PASSWORD for the password. Once logged in, you’ll see the Adminer user interface:

Both sites are now working, and you can use the dashboard at monitor.seu_domínio to keep an eye on your applications.

Conclusion

In this tutorial, you configured Traefik to proxy requests to other applications in Docker containers.

Traefik’s declarative configuration at the application container level makes it easy to configure more services, and there’s no need to restart the traefik container when you add new applications to proxy, since Traefik notices the changes immediately through the Docker socket file it’s watching.

To learn more about what you can do with Traefik, check out the official Traefik documentation.

By Keith Thompson




      How To Integrate MongoDB with Your Node Application


      Introduction

      As you work with Node.js, you may find yourself developing a project that stores and queries data. In this case, you will need to choose a database solution that makes sense for your application’s data and query types.

      In this tutorial, you will integrate a MongoDB database with an existing Node application. NoSQL databases like MongoDB can be useful if your data requirements include scalability and flexibility. MongoDB also integrates well with Node since it is designed to work asynchronously with JSON objects.

      To integrate MongoDB into your project, you will use the Object Document Mapper (ODM) Mongoose to create schemas and models for your application data. This will allow you to organize your application code following the model-view-controller (MVC) architectural pattern, which lets you separate the logic of how your application handles user input from how your data is structured and rendered to the user. Using this pattern can facilitate future testing and development by introducing a separation of concerns into your codebase.

      At the end of the tutorial, you will have a working shark information application that will take a user’s input about their favorite sharks and display the results in the browser:

      Shark Output

      Prerequisites

      Step 1 — Creating a Mongo User

      Before we begin working with the application code, we will create an administrative user that will have access to our application’s database. This user will have administrative privileges on any database, which will give you the flexibility to switch and create new databases as needed.

      First, check that MongoDB is running on your server:

      • sudo systemctl status mongodb

      The following output indicates that MongoDB is running:

      Output

● mongodb.service - An object/document-oriented database
   Loaded: loaded (/lib/systemd/system/mongodb.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-01-31 21:07:25 UTC; 21min ago
...

      Next, open the Mongo shell to create your user:
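
• mongo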

      This will drop you into an administrative shell:

      Output

MongoDB shell version v3.6.3
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.3
...
>

      You will see some administrative warnings when you open the shell due to your unrestricted access to the admin database. You can learn more about restricting this access by reading How To Install and Secure MongoDB on Ubuntu 16.04, for when you move into a production setup.

      For now, you can use your access to the admin database to create a user with userAdminAnyDatabase privileges, which will allow password-protected access to your application's databases.

      In the shell, specify that you want to use the admin database to create your user:
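
• use admin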

Next, create your user by providing a username, password, and role with the db.createUser command. After you type this command, the shell will prepend three dots before each line until the command is complete. Be sure to replace the user and password provided here with your own username and password:

      • db.createUser(
      • {
      • user: "sammy",
      • pwd: "your_password",
      • roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
      • }
      • )

      This creates an entry for the user sammy in the admin database. The username you select and the admin database will serve as identifiers for your user.

      The output for the entire process will look like this, including the message indicating that the entry was successful:

      Output

> db.createUser(
... {
... user: "sammy",
... pwd: "your_password",
... roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
... }
... )
Successfully added user: {
        "user" : "sammy",
        "roles" : [
                {
                        "role" : "userAdminAnyDatabase",
                        "db" : "admin"
                }
        ]
}

      With your user and password created, you can now exit the Mongo shell:
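
• exit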

      Now that you have created your database user, you can move on to cloning the starter project code and adding the Mongoose library, which will allow you to implement schemas and models for the collections in your databases.

      Step 2 — Adding Mongoose and Database Information to the Project

      Our next steps will be to clone the application starter code and add Mongoose and our MongoDB database information to the project.

      In your non-root user's home directory, clone the nodejs-image-demo repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Build a Node.js Application with Docker.

      Clone the repository into a directory called node_project:

      • git clone https://github.com/do-community/nodejs-image-demo.git node_project

      Change to the node_project directory:
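
• cd node_project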

      Before modifying the project code, let's take a look at the project's structure using the tree command.

      Tip: tree is a useful command for viewing file and directory structures from the command line. You can install it with the following command:
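
• sudo apt install tree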

      To use it, cd into a given directory and type tree. You can also provide the path to the starting point with a command like:

      • tree /home/sammy/sammys-project

      Type the following to look at the node_project directory:
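
• tree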

      The structure of the current project looks like this:

      Output

├── Dockerfile
├── README.md
├── app.js
├── package-lock.json
├── package.json
└── views
    ├── css
    │   └── styles.css
    ├── index.html
    └── sharks.html

      We will be adding directories to this project as we move through the tutorial, and tree will be a useful command to help us track our progress.

      Next, add the mongoose npm package to the project with the npm install command:
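
• npm install mongoose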

      This command will create a node_modules directory in your project directory, using the dependencies listed in the project's package.json file, and will add mongoose to that directory. It will also add mongoose to the dependencies listed in your package.json file. For a more detailed discussion of package.json, please see Step 1 in How To Build a Node.js Application with Docker.

      Before creating any Mongoose schemas or models, we will add our database connection information so that our application will be able to connect to our database.

      In order to separate your application's concerns as much as possible, create a separate file for your database connection information called db.js. You can open this file with nano or your favorite editor:
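
• nano db.js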

      First, import the mongoose module using the require function:

      ~/node_project/db.js

      const mongoose = require('mongoose');
      

      This will give you access to Mongoose's built-in methods, which you will use to create the connection to your database.

      Next, add the following constants to define information for Mongo's connection URI. Though the username and password are optional, we will include them so that we can require authentication for our database. Be sure to replace the username and password listed below with your own information, and feel free to call the database something other than 'sharkinfo' if you would prefer:

      ~/node_project/db.js

      const mongoose = require('mongoose');
      
      const MONGO_USERNAME = 'sammy';
      const MONGO_PASSWORD = 'your_password';
      const MONGO_HOSTNAME = '127.0.0.1';
      const MONGO_PORT = '27017';
      const MONGO_DB = 'sharkinfo';
      

      Because we are running our database locally, we have used 127.0.0.1 as the hostname. This would change in other development contexts: for example, if you are using a separate database server or working with multiple nodes in a containerized workflow.

      Finally, define a constant for the URI and create the connection using the mongoose.connect() method:

      ~/node_project/db.js

      ...
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
      
      mongoose.connect(url, {useNewUrlParser: true});
      

      Note that in the URI we've specified the authSource for our user as the admin database. This is necessary since we have specified a username in our connection string. Using the useNewUrlParser flag with mongoose.connect() specifies that we want to use Mongo's new URL parser.

      Save and close the file when you are finished editing.

      As a final step, add the database connection information to the app.js file so that the application can use it. Open app.js:
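
• nano app.js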

      The first lines of the file will look like this:

      ~/node_project/app.js

      const express = require('express');
      const app = express();
      const router = express.Router();
      
      const path = __dirname + '/views/';
      ...
      

      Below the router constant definition, located near the top of the file, add the following line:

      ~/node_project/app.js

      ...
      const router = express.Router();
      const db = require('./db');
      
      const path = __dirname + '/views/';
      ...
      

      This tells the application to use the database connection information specified in db.js.

      Save and close the file when you are finished editing.

      With your database information in place and Mongoose added to your project, you are ready to create the schemas and models that will shape the data in your sharks collection.

      Step 3 — Creating Mongoose Schemas and Models

      Our next step will be to think about the structure of the sharks collection that users will be creating in the sharkinfo database with their input. What structure do we want these created documents to have? The shark information page of our current application includes some details about different sharks and their behaviors:

      Shark Info Page

      In keeping with this theme, we can have users add new sharks with details about their overall character. This goal will shape how we create our schema.

      To keep your schemas and models distinct from the other parts of your application, create a models directory in the current project directory:
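
• mkdir models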

      Next, open a file called sharks.js to create your schema and model:
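
• nano models/sharks.js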

      Import the mongoose module at the top of the file:

      ~/node_project/models/sharks.js

      const mongoose = require('mongoose');
      

      Below this, define a Schema object to use as the basis for your shark schema:

      ~/node_project/models/sharks.js

      const mongoose = require('mongoose');
      const Schema = mongoose.Schema;
      

      You can now define the fields you would like to include in your schema. Because we want to create a collection with individual sharks and information about their behaviors, let's include a name key and a character key. Add the following Shark schema below your constant definitions:

      ~/node_project/models/sharks.js

      ...
      const Shark = new Schema ({
              name: { type: String, required: true },
              character: { type: String, required: true },
      });
      

      This definition includes information about the type of input we expect from users — in this case, a string — and whether or not that input is required.

      Finally, create the Shark model using Mongoose's model() function. This model will allow you to query documents from your collection and validate new documents. Add the following line at the bottom of the file:

      ~/node_project/models/sharks.js

      ...
      module.exports = mongoose.model('Shark', Shark)
      

      This last line makes our Shark model available as a module using the module.exports property. This property defines the values that the module will export, making them available for use elsewhere in the application.

      The finished models/sharks.js file looks like this:

      ~/node_project/models/sharks.js

      const mongoose = require('mongoose');
      const Schema = mongoose.Schema;
      
      const Shark = new Schema ({
              name: { type: String, required: true },
              character: { type: String, required: true },
      });
      
      module.exports = mongoose.model('Shark', Shark)
      

      Save and close the file when you are finished editing.

      With the Shark schema and model in place, you can start working on the logic that will determine how your application will handle user input.

      Step 4 — Creating Controllers

      Our next step will be to create the controller component that will determine how user input gets saved to our database and returned to the user.

      First, create a directory for the controller:
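
• mkdir controllers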

      Next, open a file in that folder called sharks.js:

      • nano controllers/sharks.js

      At the top of the file, we'll import the module with our Shark model so that we can use it in our controller's logic. We'll also import the path module to access utilities that will allow us to set the path to the form where users will input their sharks.

      Add the following require functions to the beginning of the file:

      ~/node_project/controllers/sharks.js

      const path = require('path');
      const Shark = require('../models/sharks');
      

      Next, we'll write a sequence of functions that we will export with the controller module using Node's exports shortcut. These functions will include the three tasks related to our user's shark data:

      • Sending users the shark input form.
      • Creating a new shark entry.
      • Displaying the sharks back to users.

      To begin, create an index function to display the sharks page with the input form. Add this function below your imports:

      ~/node_project/controllers/sharks.js

      ...
      exports.index = function (req, res) {
          res.sendFile(path.resolve('views/sharks.html'));
      };
      

      Next, below the index function, add a function called create to make a new shark entry in your sharks collection:

      ~/node_project/controllers/sharks.js

      ...
exports.create = function (req, res) {
    var newShark = new Shark(req.body);
    console.log(req.body);
    newShark.save(function (err) {
        if (err) {
            res.status(400).send('Unable to save shark to database');
        } else {
            res.redirect('/sharks/getshark');
        }
    });
};
      

      This function will be called when a user posts shark data to the form on the sharks.html page. We will create the route with this POST endpoint later in the tutorial when we create our application's routes. With the body of the POST request, our create function will make a new shark document object, here called newShark, using the Shark model that we've imported. We've added a console.log method to output the shark entry to the console in order to check that our POST method is working as intended, but you should feel free to omit this if you would prefer.

      Using the newShark object, the create function will then call Mongoose's model.save() method to make a new shark document using the keys you defined in the Shark model. This callback function follows the standard Node callback pattern: callback(error, results). In the case of an error, we will send a message reporting the error to our users, and in the case of success, we will use the res.redirect() method to send users to the endpoint that will render their shark information back to them in the browser.

      Finally, the list function will display the collection's contents back to the user. Add the following code below the create function:

      ~/node_project/controllers/sharks.js

      ...
exports.list = function (req, res) {
    Shark.find({}).exec(function (err, sharks) {
        if (err) {
            return res.status(500).send(err);
        }
        res.render('getshark', {
            sharks: sharks
        });
    });
};
      

      This function uses the Shark model with Mongoose's model.find() method to return the sharks that have been entered into the sharks collection. It does this by returning the query object — in this case, all of the entries in the sharks collection — as a promise, using Mongoose's exec() function. In the case of an error, the callback function will send a 500 error.

      The returned query object with the sharks collection will be rendered in a getshark page that we will create in the next step using the EJS templating language.

      The finished file will look like this:

      ~/node_project/controllers/sharks.js

      const path = require('path');
      const Shark = require('../models/sharks');
      
      exports.index = function (req, res) {
          res.sendFile(path.resolve('views/sharks.html'));
      };
      
exports.create = function (req, res) {
    var newShark = new Shark(req.body);
    console.log(req.body);
    newShark.save(function (err) {
        if (err) {
            res.status(400).send('Unable to save shark to database');
        } else {
            res.redirect('/sharks/getshark');
        }
    });
};
      
exports.list = function (req, res) {
    Shark.find({}).exec(function (err, sharks) {
        if (err) {
            return res.status(500).send(err);
        }
        res.render('getshark', {
            sharks: sharks
        });
    });
};
      

      Keep in mind that though we are not using arrow functions here, you may wish to include them as you iterate on this code in your own development process.

      Save and close the file when you are finished editing.

      Before moving on to the next step, you can run tree again from your node_project directory to view the project's structure at this point. This time, for the sake of brevity, we'll tell tree to omit the node_modules directory using the -I option:
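
• tree -I node_modules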

      With the additions you've made, your project's structure will look like this:

      Output

├── Dockerfile
├── README.md
├── app.js
├── controllers
│   └── sharks.js
├── db.js
├── models
│   └── sharks.js
├── package-lock.json
├── package.json
└── views
    ├── css
    │   └── styles.css
    ├── index.html
    └── sharks.html

      Now that you have a controller component to direct how user input gets saved and returned to the user, you can move on to creating the views that will implement your controller's logic.

      Step 5 — Using EJS and Express Middleware to Collect and Render Data

      To enable our application to work with user data, we will do two things: first, we will include a built-in Express middleware function, urlencoded(), that will enable our application to parse our user's entered data. Second, we will add template tags to our views to enable dynamic interaction with user data in our code.

      To work with Express's urlencoded() function, first open your app.js file:
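
• nano app.js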

      Above your express.static() function, add the following line:

      ~/node_project/app.js

      ...
      app.use(express.urlencoded({ extended: true }));
      app.use(express.static(path));
      ...
      

      Adding this function will enable access to the parsed POST data from our shark information form. We are specifying true with the extended option to enable greater flexibility in the type of data our application will parse (including things like nested objects). Please see the function documentation for more information about options.

      Save and close the file when you are finished editing.

      Next, we will add template functionality to our views. First, install the ejs package with npm install:
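
• npm install ejs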

      Next, open the sharks.html file in the views folder:
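
• nano views/sharks.html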

      In Step 3, we looked at this page to determine how we should write our Mongoose schema and model:

      Shark Info Page

      Now, rather than having a two column layout, we will introduce a third column with a form where users can input information about sharks.

      As a first step, change the dimensions of the existing columns to 4 to create three equal-sized columns. Note that you will need to make this change on the two lines that currently read <div class="col-lg-6">. These will both become <div class="col-lg-4">:

      ~/node_project/views/sharks.html

      ...
      <div class="container">
          <div class="row">
              <div class="col-lg-4">
                  <p>
                      <div class="caption">Some sharks are known to be dangerous to humans, though many more are not. The sawshark, for example, is not considered a threat to humans.
                      </div>
                      <img src="https://assets.digitalocean.com/articles/docker_node_image/sawshark.jpg" alt="Sawshark">
                  </p>
              </div>
              <div class="col-lg-4">
                  <p>
                      <div class="caption">Other sharks are known to be friendly and welcoming!</div>
                      <img src="https://assets.digitalocean.com/articles/docker_node_image/sammy.png" alt="Sammy the Shark">
                  </p>
              </div>
          </div>
        </div>
      
       </html> 
      

      For an introduction to Bootstrap's grid system, including its row and column layouts, please see this introduction to Bootstrap.

      Next, add another column that includes the named endpoint for the POST request with the user's shark data and the EJS template tags that will capture that data. This column will go below the closing </p> and </div> tags from the preceding column and above the closing tags for the row, container, and HTML document. These closing tags are already in place in your code; they are also marked below with comments. Leave them in place as you add the following code to create the new column:

      ~/node_project/views/sharks.html

      ...
             </p> <!-- closing p from previous column -->
         </div> <!-- closing div from previous column -->
      <div class="col-lg-4">
                  <p>
                      <form action="/sharks/addshark" method="post">
                          <div class="caption">Enter Your Shark</div>
                          <input type="text" placeholder="Shark Name" name="name" <%=sharks[i].name; %>
                          <input type="text" placeholder="Shark Character" name="character" <%=sharks[i].character; %>
                          <button type="submit">Submit</button>
                      </form>
                  </p>
              </div> 
          </div> <!-- closing div for row -->
      </div> <!-- closing div for container -->
      
      </html> <!-- closing html tag -->
      

      In the form tag, you are adding a "/sharks/addshark" endpoint for the user's shark data and specifying the POST method to submit it. In the input fields, you are specifying fields for "Shark Name" and "Shark Character", aligning with the Shark model you defined earlier.

      Because each input's name attribute (name and character) matches a field in your Shark schema, the form data parsed by the express.urlencoded() middleware will map directly onto the properties of the newly created document. For more about JavaScript objects, please see our article on Understanding JavaScript Objects.
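
      When a user submits this form, the urlencoded request body that your create controller function receives will look like the following for the example entries we will use later in this tutorial:

      Output

      { name: 'Megalodon Shark', character: 'Ancient' }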

      The entire container with all three columns, including the column with your shark input form, will look like this when finished:

      ~/node_project/views/sharks.html

      ...
      <div class="container">
          <div class="row">
              <div class="col-lg-4">
                  <p>
                      <div class="caption">Some sharks are known to be dangerous to humans, though many more are not. The sawshark, for example, is not considered a threat to humans.
                      </div>
                      <img src="https://assets.digitalocean.com/articles/docker_node_image/sawshark.jpg" alt="Sawshark">
                  </p>
              </div>
              <div class="col-lg-4">
                  <p>
                      <div class="caption">Other sharks are known to be friendly and welcoming!</div>
                      <img src="https://assets.digitalocean.com/articles/docker_node_image/sammy.png" alt="Sammy the Shark">
                  </p>
              </div>
          <div class="col-lg-4">
                  <p>
                      <form action="/sharks/addshark" method="post">
                          <div class="caption">Enter Your Shark</div>
                          <input type="text" placeholder="Shark Name" name="name" <%=sharks[i].name; %>
                          <input type="text" placeholder="Shark Character" name="character" <%=sharks[i].character; %>
                          <button type="submit">Submit</button>
                      </form>
                  </p>
              </div>
          </div>
        </div>
      
      </html>
      

      Save and close the file when you are finished editing.

      Now that you have a way to collect your user's input, you can create an endpoint to display the returned sharks and their associated character information.

      Copy the newly modified sharks.html file to a file called getshark.html:

      • cp views/sharks.html views/getshark.html

      Open getshark.html:
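
      • nano views/getshark.html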

      Inside the file, we will modify the column that we used to create our sharks input form by replacing it with a column that will display the sharks in our sharks collection. Again, your code will go between the existing </p> and </div> tags from the preceding column and the closing tags for the row, container, and HTML document. Remember to leave these tags in place as you add the following code to create the column:

      ~/node_project/views/getshark.html

      ...
                  </p> <!-- closing p from previous column -->
              </div> <!-- closing div from previous column -->
              <div class="col-lg-4">
                  <p>
                      <div class="caption">Your Sharks</div>
                      <ul>
                          <% sharks.forEach(function(shark) { %>
                              <p>Name: <%= shark.name %></p>
                              <p>Character: <%= shark.character %></p>
                          <% }); %>
                      </ul>
                  </p>
              </div>
          </div> <!-- closing div for row -->
      </div> <!-- closing div for container -->
      
      </html> <!-- closing html tag -->
      

      Here you are using EJS template tags and the forEach() method to output each value in your sharks collection, including information about the most recently added shark. For more on EJS template tags, please see the EJS documentation.
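
      For instance, once a shark named Megalodon Shark with the character Ancient has been added (as we will do later in this tutorial), the template above will render markup along these lines:

      <ul>
         <p>Name: Megalodon Shark</p>
         <p>Character: Ancient</p>
      </ul>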

      The entire container with all three columns, including the column with your sharks collection, will look like this when finished:

      ~/node_project/views/getshark.html

      ...
      <div class="container">
          <div class="row">
              <div class="col-lg-4">
                  <p>
                      <div class="caption">Some sharks are known to be dangerous to humans, though many more are not. The sawshark, for example, is not considered a threat to humans.
                      </div>
                      <img src="https://assets.digitalocean.com/articles/docker_node_image/sawshark.jpg" alt="Sawshark">
                  </p>
              </div>
              <div class="col-lg-4">
                  <p>
                      <div class="caption">Other sharks are known to be friendly and welcoming!</div>
                      <img src="https://assets.digitalocean.com/articles/docker_node_image/sammy.png" alt="Sammy the Shark">
                  </p>
              </div>
          <div class="col-lg-4">
                  <p>
                    <div class="caption">Your Sharks</div>
                        <ul>
                           <% sharks.forEach(function(shark) { %>
                              <p>Name: <%= shark.name %></p>
                              <p>Character: <%= shark.character %></p>
                           <% }); %>
                        </ul>
                  </p>
              </div>
          </div>
        </div>
      
      </html>
      

      Save and close the file when you are finished editing.

      In order for the application to use the templates you've created, you will need to add a few lines to your app.js file. Open it again:
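
      • nano app.js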

      Above where you added the express.urlencoded() function, add the following lines:

      ~/node_project/app.js

      ...
      app.engine('html', require('ejs').renderFile);
      app.set('view engine', 'html');
      app.use(express.urlencoded({ extended: true }));
      app.use(express.static(path));
      
      ...
      

      The app.engine method tells the application to render HTML files using EJS's renderFile function, while app.set registers html as the application's default view engine.

      Your app.js file should now look like this:

      ~/node_project/app.js

      const express = require('express');
      const app = express();
      const router = express.Router();
      const db = require('./db');
      
      const path = __dirname + '/views/';
      const port = 8080;
      
      router.use(function (req,res,next) {
        console.log('/' + req.method);
        next();
      });
      
      router.get('/',function(req,res){
        res.sendFile(path + 'index.html');
      });
      
      router.get('/sharks',function(req,res){
        res.sendFile(path + 'sharks.html');
      });
      
      app.engine('html', require('ejs').renderFile);
      app.set('view engine', 'html');
      app.use(express.urlencoded({ extended: true }));
      app.use(express.static(path));
      app.use('/', router);
      
      app.listen(port, function () {
        console.log('Example app listening on port 8080!')
      })
      

      Now that you have created views that can work dynamically with user data, it's time to create your project's routes to bring together your views and controller logic.

      Step 6 — Creating Routes

      The final step in bringing the application's components together will be creating routes. We will separate our routes by function, including a route to our application's landing page and another route to our sharks page. Our sharks route will be where we integrate our controller's logic with the views we created in the previous step.

      First, create a routes directory:
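
      • mkdir routes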

      Next, open a file called index.js in this directory:
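
      • nano routes/index.js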

      This file will first import the express and path modules and create a router object, allowing us to define the routes we want to export and making it possible to work dynamically with file paths. Add the following code at the top of the file:

      ~/node_project/routes/index.js

      const express = require('express');
      const router = express.Router();
      const path = require('path');
      

      Next, add the following router.use function, which loads a middleware function that will log the router's requests and pass them on to the application's routes:

      ~/node_project/routes/index.js

      ...
      
      router.use(function (req,res,next) {
        console.log('/' + req.method);
        next();
      });
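
      With this middleware in place, every request that reaches the router will log its HTTP method to the console before moving on to the matching route. Loading a page in the browser, for example, will produce output like the following:

      Output

      /GET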
      

      Requests to our application's root will pass through this middleware first; from there, users will be sent to our application's landing page, the route we will define next. Add the following code below the router.use function to define the route to the landing page:

      ~/node_project/routes/index.js

      ...
      
      router.get('/',function(req,res){
        res.sendFile(path.resolve('views/index.html'));
      });
      

      When users visit our application, the first place we want to send them is to the index.html landing page that we have in our views directory.

      Finally, to make these routes accessible as importable modules elsewhere in the application, add a closing expression to the end of the file to export the router object:

      ~/node_project/routes/index.js

      ...
      
      module.exports = router;
      

      The finished file will look like this:

      ~/node_project/routes/index.js

      const express = require('express');
      const router = express.Router();
      const path = require('path');
      
      router.use(function (req,res,next) {
        console.log('/' + req.method);
        next();
      });
      
      router.get('/',function(req,res){
        res.sendFile(path.resolve('views/index.html'));
      });
      
      module.exports = router;
      

      Save and close this file when you are finished editing.

      Next, open a file called sharks.js to define how the application should use the different endpoints and views we've created to work with our user's shark input:
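
      • nano routes/sharks.js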

      At the top of the file, import the express and router objects:

      ~/node_project/routes/sharks.js

      const express = require('express');
      const router = express.Router();
      

      Next, import a module called shark that will allow you to work with the exported functions you defined with your controller:

      ~/node_project/routes/sharks.js

      const express = require('express');
      const router = express.Router();
      const shark = require('../controllers/sharks');
      

      Now you can create routes using the index, create, and list functions you defined in your sharks controller file. Each route will be associated with the appropriate HTTP method: GET in the case of rendering the main sharks information landing page and returning the list of sharks to the user, and POST in the case of creating a new shark entry:

      ~/node_project/routes/sharks.js

      ...
      
      router.get('/', function(req, res){
          shark.index(req,res);
      });
      
      router.post('/addshark', function(req, res) {
          shark.create(req,res);
      });
      
      router.get('/getshark', function(req, res) {
          shark.list(req,res);
      });
      

      Each route makes use of the related function in controllers/sharks.js, since we have made that module accessible by importing it at the top of this file. Because this router will be mounted at the /sharks path in app.js, these routes will handle requests to /sharks, /sharks/addshark, and /sharks/getshark respectively.

      Finally, add a closing expression at the end of the file to export the router object and its attached routes, just as you did in index.js:

      ~/node_project/routes/sharks.js

      ...
      
      module.exports = router;
      

      The finished file will look like this:

      ~/node_project/routes/sharks.js

      const express = require('express');
      const router = express.Router();
      const shark = require('../controllers/sharks');
      
      router.get('/', function(req, res){
          shark.index(req,res);
      });
      
      router.post('/addshark', function(req, res) {
          shark.create(req,res);
      });
      
      router.get('/getshark', function(req, res) {
          shark.list(req,res);
      });
      
      module.exports = router;
      

      Save and close the file when you are finished editing.

      The last step in making these routes accessible to your application will be to add them to app.js. Open that file again:
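
      • nano app.js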

      Below your db constant, add the following import for your routes:

      ~/node_project/app.js

      ...
      const db = require('./db');
      const sharks = require('./routes/sharks');
      

      Next, replace the app.use function that currently mounts your router object with the following line, which will mount the sharks router module:

      ~/node_project/app.js

      ...
      app.use(express.static(path));
      app.use('/sharks', sharks);
      
      app.listen(port, function () {
              console.log("Example app listening on port 8080!")
      })
      

      You can now delete the routes that were previously defined in this file, since you are importing your application's routes using the sharks router module. The landing page will remain available at the application's root, because the express.static middleware serves index.html from the views directory by default.

      The final version of your app.js file will look like this:

      ~/node_project/app.js

      const express = require('express');
      const app = express();
      const router = express.Router();
      const db = require('./db');
      const sharks = require('./routes/sharks');
      
      const path = __dirname + '/views/';
      const port = 8080;
      
      app.engine('html', require('ejs').renderFile);
      app.set('view engine', 'html');
      app.use(express.urlencoded({ extended: true }));
      app.use(express.static(path));
      app.use('/sharks', sharks);
      
      app.listen(port, function () {
        console.log('Example app listening on port 8080!')
      })
      

      Save and close the file when you are finished editing.

      You can now run tree again to see the final structure of your project:
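
      • tree -I node_modules

      The -I flag excludes the node_modules directory, which would otherwise dominate the listing.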

      Your project structure will now look like this:

      Output

      ├── Dockerfile
      ├── README.md
      ├── app.js
      ├── controllers
      │   └── sharks.js
      ├── db.js
      ├── models
      │   └── sharks.js
      ├── package-lock.json
      ├── package.json
      ├── routes
      │   ├── index.js
      │   └── sharks.js
      └── views
          ├── css
          │   └── styles.css
          ├── getshark.html
          ├── index.html
          └── sharks.html

      With all of your application components created and in place, you are now ready to add a test shark to your database!

      If you followed the initial server setup tutorial in the prerequisites, you will need to modify your firewall, since it currently only allows SSH traffic. To permit traffic to port 8080, run:
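
      • sudo ufw allow 8080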

      Start the application:
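
      • node app.js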

      Next, navigate your browser to http://your_server_ip:8080. You will see the following landing page:

      Application Landing Page

      Click on the Get Shark Info button. You will see the following information page, with the shark input form added:

      Shark Info Form

      In the form, add a shark of your choosing. For the purpose of this demonstration, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

      Filled Shark Form

      Click on the Submit button. You will see a page with this shark information displayed back to you:

      Shark Output

      You will also see output in your console indicating that the shark has been added to your collection:

      Output

      Example app listening on port 8080!
      { name: 'Megalodon Shark', character: 'Ancient' }

      If you would like to create a new shark entry, head back to the Sharks page and repeat the process of adding a shark.
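
      If you prefer to test from the command line, you can exercise the same endpoint with curl; the field values in this example are illustrative, and the command assumes the application is still running:

      • curl -L -d "name=Hammerhead&character=Curious" http://your_server_ip:8080/sharks/addshark

      The -d flag sends the fields as form-encoded data, just as the browser form does, and -L follows the redirect so that the response includes the rendered getshark page.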

      You now have a working shark information application that allows users to add information about their favorite sharks.

      Conclusion

      In this tutorial, you built out a Node application by integrating a MongoDB database and rewriting the application's logic using the MVC architectural pattern. This application can act as a good starting point for a fully-fledged CRUD application.

      For more resources on the MVC pattern in other contexts, please see our Django Development series or How To Build a Modern Web Application to Manage Customer Information with Django and React on Ubuntu 18.04.

      For more information on working with MongoDB, please see our library of tutorials on MongoDB.


