
      How to Deploy a Static Site on Linode Kubernetes Engine

      Updated by Linode. Contributed by Linode.


      Linode Kubernetes Engine (LKE) is currently in Private Beta, and you may not have access to LKE through the Cloud Manager or other tools. To request access to the Private Beta, sign up here. Beta access awards you $100/month in free credits for the duration of the beta, which is automatically applied to your account when an LKE cluster is in use. Additionally, you will have access to the Linode Green Light community, a new program connecting beta users with our product and engineering teams.

      Because LKE is in Beta, there may be breaking changes to how you access and manage LKE. This guide will be updated to reflect these changes if and when they occur.

      Linode Kubernetes Engine (LKE) allows you to easily create, scale, and manage Kubernetes clusters to meet your application’s demands, reducing the often complicated cluster set-up process to just a few clicks. Linode manages your Kubernetes master node, and you select how many Linodes you want to add as worker nodes to your cluster.

      Deploying a static site using an LKE cluster is a great example to follow when learning Kubernetes. A container image for a static site can be written in less than ten lines, and only one container image is needed, so it’s less complicated to deploy a static site on Kubernetes than some other applications that require multiple components.


      Following the instructions in this guide will create billable resources on your account in the form of Linodes and NodeBalancers. You will be billed an hourly rate for the time that these resources exist on your account. Be sure to follow the tear-down section at the end of this guide if you do not wish to continue using these resources.

      In this Guide

      This guide will show you how to:

      • Install kubectl, Git, Docker, and Hugo on your workstation
      • Create a static site using the Hugo static site generator
      • Build a Docker image for the site and push it to Docker Hub
      • Deploy the containerized site to your LKE cluster with a Deployment and a Service

      Before You Begin

      • You should have a working knowledge of Kubernetes’ key concepts, including master and worker nodes, Pods, Deployments, and Services. For more information on Kubernetes, see our Beginner’s Guide to Kubernetes series.

      • You will also need to prepare your workstation with some prerequisite software:

      • Finally, you will need to create a cluster on LKE, if you do not already have one:

      Install kubectl

      You should have kubectl installed on your local workstation. kubectl is the command line interface for Kubernetes, and allows you to remotely connect to your Kubernetes cluster to perform tasks.
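      kubectl locates your cluster through a kubeconfig file, which LKE generates for you to download when you create a cluster. As a rough sketch of that file's shape (the field names are standard Kubernetes, but every value below is a placeholder, not actual LKE output):

```yaml
# kubeconfig sketch: the structure is standard, all values are placeholders.
apiVersion: v1
kind: Config
clusters:
- name: lke-example
  cluster:
    server: https://192.0.2.10:443
    certificate-authority-data: BASE64-ENCODED-CA-CERT
contexts:
- name: lke-example-ctx
  context:
    cluster: lke-example
    user: lke-example-admin
current-context: lke-example-ctx
users:
- name: lke-example-admin
  user:
    token: BEARER-TOKEN-PLACEHOLDER
```

      kubectl reads whichever file the KUBECONFIG environment variable (or ~/.kube/config) points at, which is why the tear-down section at the end of this guide mentions removing the KUBECONFIG line from your shell profile.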


      macOS

      Install via Homebrew:

      brew install kubernetes-cli

      If you don’t have Homebrew installed, visit the Homebrew home page for instructions. Alternatively, you can manually install the binary; visit the Kubernetes documentation for instructions.


      Linux

      1. Download the latest kubectl release:

        curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
      2. Make the downloaded file executable:

        chmod +x ./kubectl
      3. Move the command into your PATH:

        sudo mv ./kubectl /usr/local/bin/kubectl



      Windows

      Visit the Kubernetes documentation for a link to the most recent Windows release.

      Install Git

      To perform some of the commands in this guide you will need to have Git installed on your workstation. Git is a version control system that allows you to save your codebase in various states to ease development and deployment. Follow our How to Install Git on Linux, Mac or Windows guide for instructions on how to install Git.

      Install Docker

      These steps install Docker Community Edition (CE) using the official Ubuntu repositories. To install on another distribution, or to install on Mac or Windows, see the official installation page.

      1. Remove any older installations of Docker that may be on your system:

        sudo apt remove docker docker-engine
      2. Make sure you have the necessary packages to allow the use of Docker’s repository:

        sudo apt install apt-transport-https ca-certificates curl software-properties-common
      3. Add Docker’s GPG key:

        curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
      4. Verify the fingerprint of the GPG key:

        sudo apt-key fingerprint 0EBFCD88

        You should see output similar to the following:

        pub   4096R/0EBFCD88 2017-02-22
                Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
        uid                  Docker Release (CE deb) <docker@docker.com>
        sub   4096R/F273FCD8 2017-02-22
      5. Add the stable Docker repository:

        sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"


        For Ubuntu 19.04, if you get an E: Package 'docker-ce' has no installation candidate error, this is because a stable version of Docker is not yet available for that release. In that case, use the edge / test repository:

        sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable edge test"
      6. Update your package index and install Docker CE:

        sudo apt update
        sudo apt install docker-ce
      7. Add your limited Linux user account to the docker group:

        sudo usermod -aG docker $USER


        After entering the usermod command, you will need to close your SSH session and open a new one for this change to take effect.

      8. Check that the installation was successful by running the built-in “Hello World” program:

        docker run hello-world

      Sign up for a Docker Hub Account

      You will use Docker Hub to store your Docker image. If you don’t already have a Docker Hub account, create one now.

      Install Hugo

      A static site generator (SSG) is usually a command line tool that takes text files written in a markup language like Markdown, applies a stylized template to the content, and produces valid HTML, CSS, and JavaScript files. Static sites are prized for their simplicity and speed, as they do not generally have to interact with a database.

      The Linode documentation website, and this guide, employ Hugo. Hugo is a powerful and fast SSG written in the Go programming language, but you can choose one that best suits your needs by reading our How to Choose a Static Site Generator guide.

      The steps in this guide are generally the same across SSGs: install a static site generator, create some content in a text file, and then generate your site’s HTML through a build process.
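      The transformation at the heart of any SSG can be caricatured in a single shell pipeline. This toy sed example only illustrates the Markdown-to-HTML idea; it is nothing like Hugo's actual pipeline, which handles templates, partials, taxonomies, and asset processing:

```shell
# Toy illustration only: rewrite a Markdown heading and a list item as HTML.
printf '# Hello, LKE\n- Fast\n' \
  | sed -E -e 's|^# (.*)|<h1>\1</h1>|' -e 's|^- (.*)|<li>\1</li>|'
# → <h1>Hello, LKE</h1>
#   <li>Fast</li>
```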

      To download and install Hugo, you can use a package manager.

      • For Debian and Ubuntu:

        sudo apt-get install hugo
      • For Red Hat, Fedora, and CentOS:

        sudo dnf install hugo
      • For macOS, use Homebrew:

        brew install hugo
      • For Windows, use Chocolatey:

        choco install hugo

      For more information on downloading Hugo, you can visit the official Hugo website.

      Create a Static Site Using Hugo

      In this section you will create a static site on your workstation using Hugo.

      1. Use Hugo to scaffold a new site. This command will create a new directory with the name you provide, and inside that directory it will create the default Hugo directory structure and configuration files:

        hugo new site lke-example
      2. Move into the new directory:

        cd lke-example
      3. Initialize the directory as a Git repository. This will allow you to track changes to your website and save it in version control.

        git init
      4. Hugo allows for custom themes. For the sake of this example, you will install the Ananke theme as a Git submodule.

        git submodule add https://github.com/budparr/gohugo-theme-ananke.git themes/ananke


        Git submodules allow you to include one Git repository within another, each maintaining their own version history. To view a collection of Hugo themes, visit the Hugo theme collection.
      5. In the text editor of your choice, open the config.toml file and add the following line to the end:

        theme = "ananke"

        This line instructs Hugo to search for a folder named ananke in the themes directory and apply the templating it finds to the static site.

      6. Add an example first post to your Hugo site:

        hugo new posts/first_post.md

        This will create a Markdown file in the content/posts/ directory with the name first_post.md. You will see output like the following:

        /Users/linode/k8s/lke/lke-example/content/posts/first_post.md created
      7. Open the file in the text editor of your choosing. You will see a few lines of front matter, a format Hugo uses for extensible metadata, at the top of the file:

        ---
        title: "First_post"
        date: 2019-07-29T14:22:04-04:00
        draft: true
        ---

        Change the title to your desired value, and change draft to false. Then, add some example Markdown text to the bottom of the file, like the example below:

        ---
        title: "First Post About LKE Clusters"
        date: 2019-07-29T14:22:04-04:00
        draft: false
        ---

        ## LKE Clusters

        Linode Kubernetes Engine (LKE) clusters are:

        - Fast
        - Affordable
        - Scalable
      8. You can preview your changes by starting the local Hugo server:

        hugo server

        You should see output like the following:

                           | EN
          Pages              |  8
          Paginator pages    |  0
          Non-page files     |  0
          Static files       |  3
          Processed images   |  0
          Aliases            |  0
          Sitemaps           |  1
          Cleaned            |  0
        Total in 6 ms
        Watching for changes in /Users/linode/k8s/lke/lke-example/{content,data,layouts,static,themes}
        Watching for config changes in /Users/linode/k8s/lke/lke-example/config.toml
        Serving pages from memory
        Running in Fast Render Mode. For full rebuilds on change: hugo server --disableFastRender
        Web Server is available at http://localhost:1313/ (bind address 127.0.0.1)
        Press Ctrl+C to stop
      9. Visit the URL that Hugo is running on. In the above example, the URL is http://localhost:1313. This server automatically updates whenever you make a change to a file in the Hugo site directory. To stop this server, enter CTRL-C on your keyboard in your terminal.

      10. When you are satisfied with your static site, you can generate the HTML, CSS, and JavaScript for your site by building the site:

        hugo -v

        Hugo creates the site’s files in the public/ directory. View the files by listing them:

        ls public
      11. You can build the site at any time from your source Markdown content files, so it’s common practice to keep built files out of a Git repository. This practice keeps the size of the repository to a minimum.

        You can instruct Git to ignore certain files within a repository by adding them to a .gitignore file. Add the public/ directory to your .gitignore file to exclude these files from the repository:

        echo 'public/' >> .gitignore
      12. Add and commit the source files to the Git repository:

        git add .
        git commit -m "Initial commit. Includes all of the source files, configuration, and first post."

        You are now ready to create a Docker image from the static site you’ve just created.
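      If you want to confirm that the ignore rule from the steps above behaves as expected, git check-ignore can verify it against a throwaway repository (a sketch; assumes git is installed):

```shell
# Create a scratch repository and confirm that public/ is ignored.
tmpdir=$(mktemp -d)
cd "$tmpdir"
git init -q
echo 'public/' >> .gitignore
mkdir public
git check-ignore -q public/ && echo 'public/ is ignored'
# → public/ is ignored
```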

      Create a Docker Image

      In this section you will create a Docker container for your static site, which you will then run on your LKE cluster. Before deploying it on your cluster, you’ll test its functionality on your workstation.

      1. In your Hugo static site folder, create a new text file named Dockerfile and open it in the text editor of your choosing. A Dockerfile tells Docker how to create the container.

      2. Add the following contents to the Dockerfile. Each command has accompanying comments that describe their function:

        # Install the latest Debian operating system.
        FROM debian:latest as HUGO
        # Install Hugo.
        RUN apt-get update -y
        RUN apt-get install hugo -y
        # Copy the contents of the current working directory to the
        # static-site directory.
        COPY . /static-site
        # Command Hugo to build the static site from the source files,
        # setting the destination to the public directory.
        RUN hugo -v --source=/static-site --destination=/static-site/public
        # Install NGINX, remove the default NGINX index.html file, and
        # copy the built static site files to the NGINX html directory.
        FROM nginx:stable-alpine
        RUN mv /usr/share/nginx/html/index.html /usr/share/nginx/html/old-index.html
        COPY --from=HUGO /static-site/public/ /usr/share/nginx/html/
        # Instruct the container to listen for requests on port 80 (HTTP).
        EXPOSE 80

        Save the Dockerfile and return to the command prompt.

      3. Create a new text file named .dockerignore in your Hugo static site folder and add the following lines:

        public/
        .git/
        .gitmodules
        .gitignore

        This file, similar to the .gitignore file you created in the previous section, allows you to ignore certain files within the working directory that you would like to leave out of the container. Because you want the container to be the smallest size possible, the .dockerignore file will include the public/ folder and some hidden folders that Git creates.

      4. Run the Docker build command. Replace mydockerhubusername with your Docker Hub username. The period at the end of the command tells Docker to use the current directory as its build context.

        docker build -t mydockerhubusername/lke-example:v1 .


        In this example, the container image is named lke-example and has been given a version tag of v1. Feel free to change these values.

      5. Docker will download the required Debian and NGINX images, as well as install Hugo into the image. Once complete, you should see output similar to the following:

        Successfully built 320ae416c940
        Successfully tagged mydockerhubusername/lke-example:v1
      6. You can view the image by listing all local images:

        docker images
        REPOSITORY                       TAG   IMAGE ID       CREATED             SIZE
        mydockerhubusername/lke-example  v1    320ae416c940   About an hour ago   20.8MB

      Test the Docker Image

      1. You can test your new image by creating a container with it locally. To do so, enter the following run command:

        docker run -p 8080:80 -d mydockerhubusername/lke-example:v1

        The -p flag instructs Docker to forward port 8080 on localhost to port 80 on the container. The -d flag instructs Docker to run in detached mode so that you are returned to the command prompt once the container initializes.

      2. Once the container has started, open your browser and navigate to localhost:8080. You should see your static site.

      3. You can stop the running container by finding the ID of the container and issuing the stop command. To find the ID of the container, use the ps command:

        docker ps

        You should see a list of actively running containers, similar to the following:

        CONTAINER ID        IMAGE                                COMMAND                  CREATED             STATUS              PORTS                  NAMES
        b4a7b959a6c7        mydockerhubusername/lke-example:v1   "nginx -g 'daemon of…"   5 hours ago         Up 5 hours          0.0.0.0:8080->80/tcp   romantic_mahavira
      4. Note the random string of numbers and letters next to the image name. In the above example, the string is b4a7b959a6c7. Issue the stop command, supplying the string of numbers and letters:

        docker stop b4a7b959a6c7

      Upload the Image to Docker Hub

      1. Now that you have a working container image, you can push that image to Docker Hub. First, log in to Docker Hub from your workstation’s terminal:

        docker login
      2. Next, push the image, with version tag, to Docker Hub, using the push command:

        docker push mydockerhubusername/lke-example:v1

        You can now view your image on Docker Hub as a repository. To view all of your repositories, navigate to the Docker Hub repository listing page.

      3. Lastly, add the Dockerfile and .dockerignore file to your Git repository:

        git add .
        git commit -m "Add Dockerfile and .dockerignore."

        You are now ready to deploy the container to your LKE cluster.

      Deploying the Container to LKE

      In this section, you will create a Deployment from the container you created in the previous section, and a Service to load balance the deployment.

      1. Begin by navigating to a location outside of your static site directory. You will not need your static site directory for the remainder of this guide.

        cd ..
      2. Create a new directory to house your Kubernetes manifests, and move into that directory:

        mkdir manifests && cd manifests

      Create a Deployment

      1. In the text editor of your choice, create a new YAML manifest file for your Deployment. Name the file static-site-deployment.yaml, save it to your manifests directory, and enter the contents of this snippet:

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: static-site-deployment
          labels:
            app: static-site
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: static-site
          template:
            metadata:
              labels:
                app: static-site
            spec:
              containers:
              - name: static-site
                image: mydockerhubusername/lke-example:v1
                imagePullPolicy: Always
                ports:
                - containerPort: 80
        • In this example the number of replica Pods is set to 3 on line 8. This value can be changed to meet the needs of your website.
        • The spec.containers.image field on line 19 should be changed to match the name of the container image you pushed to Docker Hub. Be sure to include the proper version tag at the end of the container name.
        • imagePullPolicy: Always on line 20 ensures that each time a Pod is created, the most recent version of the container image will be pulled from Docker Hub.
      2. Once you have a Deployment manifest, you can apply the deployment to the LKE cluster with kubectl:

        kubectl apply -f static-site-deployment.yaml
      3. You can check on the progress of your Deployment by listing the available pods:

        kubectl get pods

        If your Deployment was successful, you should see output like the following:

        NAME                                    READY   STATUS   RESTARTS   AGE
        static-site-deployment-cdb88b5bb-7pbjc  1/1     Running  0          1h
        static-site-deployment-cdb88b5bb-gx9h5  1/1     Running  0          1h
        static-site-deployment-cdb88b5bb-lzdvh  1/1     Running  0          1h

      Create a Service

      1. Create a Service manifest file to provide load balancing for the deployment. Load balancing ensures that traffic is balanced efficiently across multiple backend nodes, improving site performance and ensuring that your static site will be accessible should a node go down.

        Specifically, the Service manifest that will be used in this guide will trigger the creation of a Linode NodeBalancer.


      2. Name the file static-site-service.yaml, save it to your manifests directory, and enter the contents of this snippet:

        apiVersion: v1
        kind: Service
        metadata:
          name: static-site-service
          labels:
            app: static-site
        spec:
          type: LoadBalancer
          ports:
          - name: http
            port: 80
            protocol: TCP
            targetPort: 80
          selector:
            app: static-site
          sessionAffinity: None
      3. Once you’ve created your Service manifest file, you can apply it to the LKE cluster:

        kubectl apply -f static-site-service.yaml
      4. You can check on the status of your Service by listing the Services currently running on your server:

        kubectl get services

        You should see output similar to the following:

        NAME                 TYPE          CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
        kubernetes           ClusterIP     10.128.0.1      <none>           443/TCP        20h
        static-site-service  LoadBalancer  10.128.171.88   192.0.2.155      80:32648/TCP   100m
      5. Note the external IP address of the Service you created. This is the IP address of the NodeBalancer, and you can use it to view your static site.

      6. In the example above, this is the value in the EXTERNAL-IP column. Navigate to the external IP address in the browser of your choice to view your static site. You should see the same content as when you tested your Docker image on your workstation.

      Next Steps

      If you’d like to continue using the static site that you created in this guide, you may want to assign a domain to it. Review the DNS Records: An Introduction and DNS Manager guides for help with setting up DNS. When setting up your DNS record, use the external IP address that you noted at the end of the previous section.

      If you would rather not continue using the cluster you just created, review the tear-down section to remove the billable Linode resources that were generated.

      Tear Down your LKE Cluster and NodeBalancer

      • To remove the NodeBalancer you created, all you need to do is delete the underlying Service. From your workstation:

        kubectl delete service static-site-service

        Alternatively, you can use the manifest file you created to delete the Service. From your workstation:

        kubectl delete -f static-site-service.yaml
      • To remove the LKE Cluster and the associated nodes from your account, navigate to the Linode Cloud Manager:

        1. Click on the Kubernetes link in the sidebar. A new page with a table which lists your clusters will appear.

        2. Click on the more options ellipsis next to the cluster you would like to delete, and select Delete.

        3. You will be prompted to enter the name of the cluster to confirm the action. Enter the cluster name and click Delete.

      • Lastly, remove the KUBECONFIG line you added to your Bash profile to remove the LKE cluster from your available contexts.

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.


      A Guide to Evaluating Cloud Vendor Lock-In with Site Reliability Engineering

      Adoption of cloud services from the Big 3—AWS, Azure and Google—has outpaced new platforms and smaller niche vendors, prompting evolving discussion around the threat of “lock-in.” Thorough evaluation using common tenets can help determine the risk of adopting a new cloud service.

      Vendor lock-in is a service delivery method that makes a customer reliant on the vendor and limited third-party vendor partners. A critical aspect of vendor lock-in revolves around the risk that a cloud service will inhibit the velocity of teams building upon cloud platforms. Vendor lock-in can immobilize a team, and changing vendors comes with substantial switching costs. That's why even when a customer does commit to a vendor, they're increasingly demanding spend portability: the ability to shift spend between infrastructure solutions within that vendor's portfolio.

      There are common traits and tradeoffs to cloud vendor lock-ins, but how the services are consumed by an organization or team dictates how risky a lock-in may be. Just like evaluating the risks and rewards of any critical infrastructure choice, the evaluation should be structured around the team’s principles to make it more likely that the selection and adoption of a cloud service will be successful.

      The Ethos of Site Reliability Engineering

      Site Reliability Engineering (SRE), as both an ethos and job role, has gained a following across startups and enterprises. While teams don't have to strictly adhere to it as a framework, the lessons and approaches documented in Google's Site Reliability Engineering book provide an invaluable common language for the ownership and operation of production services in modern cloud infrastructure platforms.

      [Figure: Product life cycle. The lifecycle of a typical service. The PRR can be started at any point of the service lifecycle, but the stages at which SRE engagement is applied have expanded over time.]

      The SRE ethos can provide significant guidance for the adoption of new platforms and infrastructure, particularly in its description of the Product Readiness Review (PRR). While the PRR as described in the Google SRE Book focuses on the onboarding of internally developed services for adoption of support by the SRE role, its steps and lessons can equally apply to an externally provided service.

      The SRE ethos also provides a shared language for understanding the principles of configuration and maintenance as code, or at least automation. If "Infrastructure as Code" is an operational tenet of running production systems on Cloud platforms, then a code "commit" can be thought of as a logical unit of change required to enact an operational change to a production or planned service. The number of commits required to enact adoption of a service can be considered a measurement of the amount of actions required to adopt a given cloud service, functionally representing a measurement of lock-in required for adoption of the cloud service.
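      As a toy illustration of commits-as-a-measure (everything here is hypothetical: the repository, the infra/ directory name, and the file contents), counting the commits that touched platform-specific code gives a first approximation of the adoption effort a service demanded:

```shell
# Build a scratch repo with two commits touching a hypothetical infra/ directory,
# then count them as a rough proxy for platform-specific change effort.
tmpdir=$(mktemp -d)
cd "$tmpdir"
git init -q
git config user.email sre@example.com
git config user.name "SRE Demo"
mkdir infra
echo 'module "container_service" {}' > infra/main.tf
git add . && git commit -qm "provision hypothetical container service"
echo 'variable "scaling_policy" {}' >> infra/main.tf
git add . && git commit -qm "wire the service into CI/CD"
git rev-list --count HEAD -- infra/
# → 2
```

      In a real evaluation you would compare this count (and the review effort behind each commit) across candidate platforms rather than treat any absolute number as meaningful.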

      Combining the principles of SRE and its PRR model with the measurement of commits is a relatively simple and transparent way to evaluate the degrees of lock-in the organization and team will be exposed to during adoption of a cloud service.

      Eliminating toil, a key tenet of SRE, should also be considered. Toil is a necessary but undesirable artifact of managing and supporting production infrastructure and has real cost to deployment and ongoing operations. Any evaluation or readiness review of a service must stick to the tenets of the SRE framework and acknowledge toil. If not, the reality of supporting that service will not be realized and may prevent a successful adoption.

      Putting Site Reliability Engineering and Product Readiness Review to Work

      Let’s create a hypothetical organization composed of engineers looking to release a new SaaS product feature as a set of newly developed services. The team is part of an engineering organization that has a relatively modern and capable toolset:

      • Systems infrastructure is provisioned and managed by Change Management backed by version control
      • Application Services are developed and deployed as containers
      • CI/CD is trusted and used by the organization to manage code release lifecycles
      • Everything and anything can be instrumented for monitoring and on-call alarms are sane
      • All underlying infrastructure is “reliable,” and all data sets are “protected” according to their requirements

      The team can deploy the new application feature as a set of services through its traditional toolset. But this release represents a unique opportunity to evaluate new infrastructure platforms that may potentially leverage new features or paradigms of that platform. For example, a hosted container service might simplify horizontal scaling or improve self-healing behaviors over the existing toolsets and infrastructure, and because the new services are greenfield, they can be developed to best fit the platform of choice.

      However, the adoption of a new platform exposes the organization to substantial risk, both from the inevitable growing pains in adopting any new service, and from exposing the organization to unforeseen and unacceptable amounts of vendor lock-in given its tradeoffs. This necessitates a risk discovery process as part of the other operational concerns of the PRR.

      Risk Discovery During a PRR

      Using the SRE PRR model and code commits as a measurement of change, we should have everything we need to evaluate the viability of a service along with its exposure to lock-in. The PRR model can be used to discover and map specifics of the service to the SRE practices and principles.

      Let’s again consider the hypothetical hosted container service that has unique and advantageous auto-scaling qualities. While it may simplify the operational burden of scaling, the new service presents a substantial new barrier to application delivery lifecycles.

      Each of the following items represent commits discovered during a PRR of the container service:

      • The in-place change management toolset may not have the library or features to provision and manage the new container service, and several commits may be required to augment change management to reflect that.
      • More than just eliminating toil, those change management commits are required to make the new container service a responsibly managed part of the CI/CD pipeline.
      • The new container service likely has new constraints for network security or API authorization and RBAC, which need to be accounted for in order to minimize security risk to the organization. Representing these as infrastructure as code can require a non-trivial effort represented by more commits associated with the service.
      • The new container service may have network boundaries that present a barrier to log aggregation or access to other internal services required as a dependency for the new feature. Circumventing the boundaries or mirroring dependencies will likely require commits to implement and can be critical to the tenet of minimizing toil.
      • Persisting and accessing data in a reliable and performant way could be significantly more effort than any other task in the evaluation. The SRE book has a chapter on Data Integrity that is uniquely suited to cloud services, highlighting the amount of paradigms and effort required to make data available reliably in cloud services. Adoption of those paradigms and associated effort would have to be represented as commits to the overall adoption, continuing the exposure of risk.
      • Monitoring and instrumentation, though often overlooked, are anchor tenets of Site Reliability Engineering. Monitoring of distributed systems comes with a steep learning curve and can present a significant risk to the rest of the organization if not appropriately architected and committed to the monitoring toolset.
      • Finally, how will the new service and infrastructure be decommissioned? No production service lives forever, and eventually a feature service will either be re-engineered or potentially removed. Not only does the team need to understand how to decommission the platform, but they must also have the tooling and procedures to remove sensitive data, de-couple dependencies or make other efforts that are likely to require commits to enact.

      Takeaways from a PRR Using the SRE Framework

      While certainly not a complete list of things to discover during a PRR, the list above highlights key areas where changes must be made to the organization’s existing toolset and infrastructure in order to accommodate the adoption of a new container hosting service. The engineering time, as measured by commits, can evaluate the amount of work required. Since the work is specific to the container service, it functionally represents measurable vendor lock-in.

      Another hosting service with less elegant features may have APIs or tooling that require less adoption effort, minimizing lock-in. Or the amount of potential lock-in may be so significant that other solutions, such as a self-hosted container service, represent more value for less effort and lock-in risk than adopting an outside service. The evaluation can only be made by the team responsible for the decision. The right mode of evaluation, however, can make that decision a whole lot easier.

      Dan Lotterman



      Como Configurar um Banco de Dados Remoto para Otimizar o Desempenho do Site com o MySQL no Ubuntu 18.04


      À medida que sua aplicação ou site cresce, pode chegar um momento em que você superou a configuração atual do seu servidor. Se você estiver hospedando o seu servidor web e o back-end do banco de dados na mesma máquina, pode ser uma boa ideia separar essas duas funções para que cada uma possa operar em seu próprio hardware e compartilhar a carga de responder às solicitações dos visitantes.

      In this guide, we'll go over how to configure a remote MySQL database server that your web application can connect to. We will use WordPress as an example in order to have something to work with, but the technique is widely applicable to any application backed by MySQL.


      Prerequisites

      Before beginning this tutorial, you will need:

      • Two Ubuntu 18.04 servers. Each should have a non-root user with sudo privileges and a UFW firewall enabled, as described in our Initial Server Setup with Ubuntu 18.04 tutorial. One of these servers will host your MySQL backend, and throughout this guide we will refer to it as the database server. The other will connect to your database server remotely and act as your web server; likewise, we will refer to it as the web server throughout this guide.
      • Nginx and PHP installed on your web server. Our tutorial How To Install Linux, Nginx, MySQL, PHP (LEMP stack) in Ubuntu 18.04 will guide you through the process, but note that you should skip Step 2 of that tutorial, which focuses on installing MySQL, as you will install MySQL on your database server instead.
      • MySQL installed on your database server. Follow the How To Install MySQL on Ubuntu 18.04 tutorial to set this up.
      • Optionally (but strongly recommended), TLS/SSL certificates from Let's Encrypt installed on your web server. You'll need to purchase a domain name and have DNS records set up for your server, but the certificates themselves are free. Our guide How To Secure Nginx with Let's Encrypt on Ubuntu 18.04 will show you how to obtain these certificates.

      Step 1 — Configuring MySQL to Listen for Remote Connections

      Having your data stored on a separate server is a good way to expand gracefully after hitting the performance ceiling of a one-machine configuration. It also provides the basic structure necessary to load balance and expand your infrastructure even more at a later time. After installing MySQL by following the prerequisite tutorial, you'll need to change some configuration values to allow connections from other computers.

      Most of the changes to the MySQL server's configuration can be made in the mysqld.cnf file, which is stored in the /etc/mysql/mysql.conf.d/ directory by default. Open this file on your database server with root privileges in your preferred editor. Here, we'll use nano:

      • sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf

      This file is divided into sections denoted by labels in square brackets ([ and ]). Find the section labeled mysqld:


      . . .
      [mysqld]
      . . .

      Within this section, look for a parameter called bind-address. This tells the database software which network address to listen for connections on.

      By default, this is set to 127.0.0.1, meaning that MySQL is configured to only listen for local connections. You need to change this to reference an external IP address where your server can be reached.

      If both of your servers are in a datacenter with private networking capabilities, use your database server's private network IP. Otherwise, you can use its public IP address:


      . . .
      bind-address = db_server_ip
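If you manage many servers, this edit can also be scripted. The following is a minimal sketch (an addition, not part of the original guide) that performs the same substitution with sed; it operates on a local stand-in file here, and the IP is a placeholder you would replace with your database server's address:

```shell
# Sketch: script the bind-address change instead of editing by hand.
# Operates on a local stand-in file for demonstration; point CNF at
# /etc/mysql/mysql.conf.d/mysqld.cnf (with sudo) to apply it for real.
DB_SERVER_IP="203.0.113.5"   # placeholder: your database server's IP
CNF="mysqld.cnf"

# Minimal stand-in for the stock Ubuntu 18.04 config.
printf '[mysqld]\nbind-address = 127.0.0.1\n' > "$CNF"

# Replace the existing bind-address value with the new IP.
sed -i "s/^bind-address.*/bind-address = ${DB_SERVER_IP}/" "$CNF"

grep '^bind-address' "$CNF"
```

Because sed anchors on `^bind-address`, the substitution works regardless of what value was there before, which makes the script safe to re-run.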

      Because you will connect to your database over the internet, it's recommended that you require encrypted connections to keep your data secure. If you don't encrypt your MySQL connection, anybody on the network could sniff sensitive information between your web and database servers. To encrypt MySQL connections, add the following line after the bind-address line you just updated:


      . . .
      require_secure_transport = on
      . . .

      Save and close the file when you are finished. If you're using nano, do this by pressing CTRL+X, Y, and then ENTER.

      For SSL connections to work, you will need to create some keys and certificates. MySQL comes with a command that automatically sets these up. Run the following command, which creates the necessary files. It also makes them readable by the MySQL server by specifying the UID of the mysql user:

      • sudo mysql_ssl_rsa_setup --uid=mysql
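mysql_ssl_rsa_setup ships with MySQL 5.7 and later. If it is unavailable on your build, roughly equivalent files can be generated by hand with openssl. The following is an illustrative, self-signed sketch only (the subject names and validity period are arbitrary choices, not values from this guide); production deployments should use properly managed certificates:

```shell
# Sketch: manually generate a CA plus a server certificate/key pair,
# similar in shape to what mysql_ssl_rsa_setup produces.
# Self-signed and for illustration only.

# Create the CA key and self-signed CA certificate.
openssl genrsa -out ca-key.pem 2048
openssl req -new -x509 -nodes -days 365 -key ca-key.pem \
    -subj "/CN=MySQL_CA" -out ca.pem

# Create the server key and a signing request, then sign it with the CA.
openssl req -newkey rsa:2048 -nodes \
    -subj "/CN=MySQL_Server" \
    -keyout server-key.pem -out server-req.pem
openssl x509 -req -in server-req.pem -days 365 \
    -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem

# Verify that the server certificate chains to the CA.
openssl verify -CAfile ca.pem server-cert.pem
```

The resulting ca.pem, server-cert.pem, and server-key.pem would then need to be placed in MySQL's data directory (/var/lib/mysql by default), made readable by the mysql user, and referenced from mysqld.cnf via the ssl-ca, ssl-cert, and ssl-key directives.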

      To force MySQL to update its configuration and read the new SSL information, restart the database:

      • sudo systemctl restart mysql

      To confirm that the server is now listening on the external interface, run the following netstat command:

      • sudo netstat -plunt | grep mysqld


      tcp        0      0 db_server_ip:3306     0.0.0.0:*      LISTEN      27328/mysqld

      netstat prints statistics about your server's networking system. This output shows us that a process called mysqld is attached to db_server_ip at port 3306, the standard MySQL port, confirming that the server is listening on the appropriate interface.

      Next, open up that port on your firewall to allow traffic through it:

      • sudo ufw allow mysql

      Those are all the configuration changes you need to make to MySQL. Next, we will go over how to set up a database and some user profiles, one of which you will use to access the server remotely.

      Step 2 — Setting Up a WordPress Database and Remote Credentials

      Even though MySQL itself is now listening on an external IP address, there are currently no remote-enabled users or databases configured. Let's create a database for WordPress, and a pair of users that can access it.

      Begin by connecting to MySQL as the MySQL root user:

      • sudo mysql

      Note: If you have password authentication enabled, as described in Step 3 of the prerequisite MySQL tutorial, you will instead need to use the following command to access the MySQL shell:

      • mysql -u root -p

      After running this command, you will be asked for your MySQL root password and, after entering it, you'll be given a new mysql> prompt.

      From the MySQL prompt, create a database that WordPress will use. It may be helpful to give this database a recognizable name so that you can easily identify it later on. Here, we will name it wordpress:

      • CREATE DATABASE wordpress;

      Now that you've created your database, you'll need to create a pair of users. We will create a local-only user as well as a remote user tied to the web server's IP address.

      First, create your local user, wpuser, and make this account only match local connection attempts by using localhost in the declaration:

      • CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'password';

      Then grant this account full access to the wordpress database:

      • GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'localhost';

      This user can now perform any operation on the database for WordPress, but this account cannot be used remotely, as it only matches connections from the local machine. With this in mind, create a companion account that will match connections exclusively from your web server. For this, you'll need your web server's IP address.

      Please note that you must use an IP address that utilizes the same network that you configured in your mysqld.cnf file. This means that if you specified a private networking IP in the mysqld.cnf file, you'll need to include the private IP of your web server in the following two commands. If you configured MySQL to use the public internet, you should match that with the web server's public IP address.

      • CREATE USER 'remotewpuser'@'web_server_ip' IDENTIFIED BY 'password';

      After creating your remote account, grant it the same privileges as your local user:

      • GRANT ALL PRIVILEGES ON wordpress.* TO 'remotewpuser'@'web_server_ip';
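If you prefer to provision the database and both users in one pass, the statements from this step can be collected into a single SQL script. This sketch (an addition, not part of the original guide) just writes the script out and checks its contents; the IP and password below are the placeholders used throughout this guide, and in practice you would apply the script with sudo mysql < provision.sql:

```shell
# Sketch: gather this step's provisioning SQL into one script.
# All values below are placeholders from this guide; substitute your own.
WEB_SERVER_IP="203.0.113.10"
DB_PASS="password"

cat > provision.sql <<EOF
CREATE DATABASE wordpress;
CREATE USER 'wpuser'@'localhost' IDENTIFIED BY '${DB_PASS}';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'localhost';
CREATE USER 'remotewpuser'@'${WEB_SERVER_IP}' IDENTIFIED BY '${DB_PASS}';
GRANT ALL PRIVILEGES ON wordpress.* TO 'remotewpuser'@'${WEB_SERVER_IP}';
FLUSH PRIVILEGES;
EOF

# Sanity check: the script should contain exactly two GRANT statements.
grep -c 'GRANT ALL PRIVILEGES' provision.sql
```

Keeping the statements in a file makes the setup repeatable and reviewable, which is useful if you later rebuild the database server.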

      Finally, flush the privileges so MySQL knows to begin using them:

      • FLUSH PRIVILEGES;

      Then exit the MySQL prompt by typing:

      • exit

      Now that you've set up a new database and a remote-enabled user, you can move on to testing whether you're able to connect to the database from your web server.

      Step 3 — Testing Remote and Local Connections

      Before continuing, it's best to verify that you can connect to your database both from the local machine (your database server) and from your web server.

      First, test the local connection from your database server by attempting to log in with your new account:

      • mysql -u wpuser -p

      When prompted, enter the password that you set up for this account.

      If you are given a MySQL prompt, then the local connection was successful. You can exit out again by typing:

      • exit

      Next, log in to your web server to test remote connections:

      • ssh sammy@ip_do_servidor_web

      You'll need to install some client tools for MySQL on your web server in order to access the remote database. First, update your local package cache if you haven't done so recently:

      • sudo apt update

      Then install the MySQL client utilities:

      • sudo apt install mysql-client

      Following that, connect to your database server using the following syntax:

      • mysql -u remotewpuser -h db_server_ip -p

      Again, you must make sure that you are using the correct IP address for the database server. If you configured MySQL to listen on the private network, enter your database's private network IP. Otherwise, enter your database server's public IP address.

      You'll be asked for the password for your remotewpuser account. After entering it, and if everything is working as expected, you will see the MySQL prompt. Verify that the connection is using SSL with the following command:

      • status

      If the connection is indeed using SSL, the SSL: line will indicate this, as shown here:


      --------------
      mysql  Ver 14.14 Distrib 5.7.18, for Linux (x86_64) using  EditLine wrapper

      Connection id:          52
      Current database:
      Current user:           remotewpuser@
      SSL:                    Cipher in use is DHE-RSA-AES256-SHA
      Current pager:          stdout
      Using outfile:          ''
      Using delimiter:        ;
      Server version:         5.7.18-0ubuntu0.16.04.1 (Ubuntu)
      Protocol version:       10
      Connection:             via TCP/IP
      Server characterset:    latin1
      Db     characterset:    latin1
      Client characterset:    utf8
      Conn.  characterset:    utf8
      TCP port:               3306
      Uptime:                 3 hours 43 min 40 sec

      Threads: 1  Questions: 1858  Slow queries: 0  Opens: 276  Flush tables: 1  Open tables: 184  Queries per second avg: 0.138
      --------------

      After verifying that you can connect remotely, go ahead and exit the prompt:

      • exit

      With that, you've verified local access and access from the web server, but you have not verified that other connections will be refused. For an additional check, try doing the same thing from a third server for which you did not configure a specific user account in order to make sure that this other server is not granted access.

      Note that before running the following command to attempt the connection, you may need to install the MySQL client utilities as you did above:

      • mysql -u wordpressuser -h db_server_ip -p

      This should not complete successfully, and should throw back an error that looks similar to this:


      ERROR 1130 (HY000): Host '' is not allowed to connect to this MySQL server

      This is expected, since you haven't created a MySQL user that's allowed to connect from this server, and it's also desirable, since you want to be sure that your database server will deny unauthorized users access to your MySQL server.

      After successfully testing your remote connection, you can proceed to installing WordPress on your web server.

      Step 4 — Installing WordPress

      To demonstrate the capabilities of your new remote-capable MySQL server, we will go through the process of installing and configuring WordPress, the popular content management system, on your web server. This will require you to download and extract the software, configure your connection information, and then run through WordPress's web-based installation.

      On your web server, download the latest release of WordPress to your home directory:

      • cd ~
      • curl -O https://wordpress.org/latest.tar.gz

      Extract the files, which will create a directory called wordpress in your home directory:

      • tar xzvf latest.tar.gz

      WordPress includes a sample configuration file which we'll use as a starting point. Make a copy of this file, removing -sample from the filename so it will be loaded by WordPress:

      • cp ~/wordpress/wp-config-sample.php ~/wordpress/wp-config.php

      When you open the file, your first order of business will be to adjust some secret keys to provide more security to your installation. WordPress provides a secure generator for these values so that you don't have to come up with good values on your own. These are only used internally, so it won't hurt usability to have complex, secure values here.

      To grab secure values from WordPress's secret key generator, type:

      • curl -s https://api.wordpress.org/secret-key/1.1/salt/

      This will print some keys to your output. You will add these to your wp-config.php file in a moment:

      Warning! It is important that you request unique values each time. Do NOT copy the values shown here!


      define('AUTH_KEY',         'L4|2Yh(giOtMLHg3#] DO NOT COPY THESE VALUES %G00o|te^5YG@)');
      define('SECURE_AUTH_KEY',  'DCs-k+MwB90/-E(=!/ DO NOT COPY THESE VALUES +WBzDq:7U[#Wn9');
      define('LOGGED_IN_KEY',    '*0kP!|VS.K=;#fPMlO DO NOT COPY THESE VALUES +&[%8xF*,18c @');
      define('NONCE_KEY',        'fmFPF?UJi&(j-{8=$- DO NOT COPY THESE VALUES CCZ?Q+_~1ZU~;G');
      define('AUTH_SALT',        '@qA7f}2utTEFNdnbEa DO NOT COPY THESE VALUES t}Vw+8=K%20s=a');
      define('SECURE_AUTH_SALT', '%BW6s+d:7K?-`C%zw4 DO NOT COPY THESE VALUES 70U}PO1ejW+7|8');
      define('LOGGED_IN_SALT',   '-l>F:-dbcWof%4kKmj DO NOT COPY THESE VALUES 8Ypslin3~d|wLD');
      define('NONCE_SALT',       '4J(<`4&&F (WiK9K#] DO NOT COPY THESE VALUES ^ZikS`es#Fo:V6');

      Copy the output you received to your clipboard, then open the configuration file in your text editor:

      • nano ~/wordpress/wp-config.php

      Find the section that contains the dummy values for those settings. It will look something like this:


      . . .
      define('AUTH_KEY',         'put your unique phrase here');
      define('SECURE_AUTH_KEY',  'put your unique phrase here');
      define('LOGGED_IN_KEY',    'put your unique phrase here');
      define('NONCE_KEY',        'put your unique phrase here');
      define('AUTH_SALT',        'put your unique phrase here');
      define('SECURE_AUTH_SALT', 'put your unique phrase here');
      define('LOGGED_IN_SALT',   'put your unique phrase here');
      define('NONCE_SALT',       'put your unique phrase here');
      . . .

      Delete those lines and paste in the values you copied from the command line.
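The delete-and-paste step can also be automated. The sketch below (an addition, not part of the original guide) works on a two-line mock of wp-config.php and fills each placeholder with a locally generated random hex string; when doing this for real, you would use the output of the api.wordpress.org salt generator instead, and run against a backup copy of the actual file:

```shell
# Sketch: replace WordPress's placeholder secret keys with random
# values, using a local mock of wp-config.php for demonstration.
cfg="wp-config.php"

# Two-line mock fragment standing in for the real file.
cat > "$cfg" <<'EOF'
define('AUTH_KEY',         'put your unique phrase here');
define('SECURE_AUTH_KEY',  'put your unique phrase here');
EOF

# Give every placeholder line its own random hex value: replace the
# first remaining occurrence, then loop until none are left.
while grep -q 'put your unique phrase here' "$cfg"; do
    key=$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')
    sed -i "0,/put your unique phrase here/s//${key}/" "$cfg"
done

if ! grep -q 'put your unique phrase here' "$cfg"; then
    echo "all placeholders replaced"
fi
```

The loop ensures each define gets a distinct value, unlike a single global sed substitution, which would reuse one value for every key.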

      Next, enter the connection information for your remote database. These configuration lines are at the top of the file, just above where you pasted in your keys. Remember to use the same IP address you used in your remote database test earlier:


      . . .
      /** The name of the database for WordPress */
      define('DB_NAME', 'wordpress');
      /** MySQL database username */
      define('DB_USER', 'remotewpuser');
      /** MySQL database password */
      define('DB_PASSWORD', 'password');
      /** MySQL hostname */
      define('DB_HOST', 'db_server_ip');
      . . .

      And finally, anywhere in the file, add the following line, which tells WordPress to use an SSL connection to our MySQL database:


      define('MYSQL_CLIENT_FLAGS', MYSQLI_CLIENT_SSL);

      Save and close the file.

      Next, copy the files and directories found in your ~/wordpress directory to Nginx's document root. Note that this command includes the -a flag to make sure all the existing permissions are carried over:

      • sudo cp -a ~/wordpress/* /var/www/html

      Following this, the only thing left to do is modify the file ownership. Change the ownership of all the files in the document root over to www-data, Ubuntu's default web server user:

      • sudo chown -R www-data:www-data /var/www/html

      With that, WordPress is installed and you're ready to run through its web-based setup routine.

      Step 5 — Setting Up WordPress Through the Web Interface

      WordPress has a web-based setup process. As you go through it, it will ask a few questions and install all the tables it needs in your database. Here, we will cover the initial steps of setting up WordPress, which you can use as a starting point for building your own custom website that uses a remote database backend.

      Navigate to the domain name (or public IP address) associated with your web server:

      You will see a language selection screen for the WordPress installer. Select the appropriate language and click through to the main installation screen:

      WordPress install screen

      Once you have submitted your information, you will need to log in to the WordPress admin interface using the account you just created. You will then be taken to a dashboard where you can customize your new WordPress site.


      Conclusion

      By following this tutorial, you've set up a MySQL database to accept SSL-protected connections from a remote WordPress installation. The commands and techniques used in this guide are applicable to any web application written in any programming language, but the specific implementation details will differ. Refer to your application or language's database documentation for more information.
