
      How to Install and Secure phpMyAdmin with Nginx on a Debian 9 server


      Introduction

While many users need the functionality of a database system like MySQL, interacting with it solely through the MySQL command-line client requires familiarity with SQL, so it may not be everyone's preferred interface.

      phpMyAdmin was created so that users can interact with MySQL through an intuitive web interface, running alongside a PHP development environment. In this guide, we’ll discuss how to install phpMyAdmin on top of an Nginx server, and how to configure the server for increased security.

Note: There are important security considerations when using software like phpMyAdmin: it runs on the database server, it deals with database credentials, and it enables users to easily execute arbitrary SQL queries against your database. Because phpMyAdmin is a widely deployed PHP application, it is frequently targeted for attack. We will go over some security measures you can take in this tutorial so that you can make informed decisions.

      Prerequisites

      Before you get started with this guide, you’ll need the following available to you:

      Because phpMyAdmin handles authentication using MySQL credentials, it is strongly advisable to install an SSL/TLS certificate to enable encrypted traffic between server and client. If you don’t have an existing domain configured with a valid certificate, you can follow the guide on How to Secure Nginx with Let’s Encrypt on Debian 9.

      Warning: If you don’t have an SSL/TLS certificate installed on the server and you still want to proceed, please consider enforcing access via SSH Tunnels as explained in Step 5 of this guide.

      Once you have met these prerequisites, you can go ahead with the rest of the guide.

      Step 1 — Installing phpMyAdmin

      The first thing we need to do is install phpMyAdmin on the LEMP server. We’re going to use the default Debian repositories to achieve this goal.

Let’s start by updating the server’s package index:

      • sudo apt update

      Now you can install phpMyAdmin with:

      • sudo apt install phpmyadmin

During the installation process, you will be prompted to choose a web server (either Apache or lighttpd) to configure. Because we are using Nginx as our web server, we shouldn't make a choice here. Press TAB and then OK to advance to the next step.

      Next, you’ll be prompted whether to use dbconfig-common for configuring the application database. Select Yes. This will set up the internal database and administrative user for phpMyAdmin. You will be asked to define a new password for the phpmyadmin MySQL user. You can also leave it blank and let phpMyAdmin randomly create a password.

      The installation will now finish. For the Nginx web server to find and serve the phpMyAdmin files correctly, we’ll need to create a symbolic link from the installation files to Nginx's document root directory:

      • sudo ln -s /usr/share/phpmyadmin /var/www/html

      Your phpMyAdmin installation is now operational. To access the interface, go to your server's domain name or public IP address followed by /phpmyadmin in your web browser:

      https://server_domain_or_IP/phpmyadmin
      

      phpMyAdmin login screen

      As mentioned before, phpMyAdmin handles authentication using MySQL credentials, which means you should use the same username and password you would normally use to connect to the database via console or via an API. If you need help creating MySQL users, check this guide on How To Manage an SQL Database.
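If you'd like a quick reference, a dedicated, less-privileged account can be created from the MySQL console along these lines (a sketch only; the user, password, and database names below are placeholders you should replace with your own):

```sql
-- Create a dedicated account and grant it access to a single database
CREATE USER 'sammy'@'localhost' IDENTIFIED BY 'strong_password_here';
GRANT ALL PRIVILEGES ON example_database.* TO 'sammy'@'localhost';
FLUSH PRIVILEGES;
```

You can then log into phpMyAdmin with that username and password instead of a more privileged account.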

      Note: Logging into phpMyAdmin as the root MySQL user is discouraged because it represents a significant security risk. We'll see how to disable root login in a subsequent step of this guide.

      Your phpMyAdmin installation should be completely functional at this point. However, by installing a web interface, we've exposed our MySQL database server to the outside world. Because of phpMyAdmin's popularity, and the large amounts of data it may provide access to, installations like these are common targets for attacks. In the following sections of this guide, we'll see a few different ways in which we can make our phpMyAdmin installation more secure.

      Step 2 — Changing phpMyAdmin's Default Location

One of the most basic ways to protect your phpMyAdmin installation is by making it harder to find. Bots will scan for common paths such as /phpmyadmin, /pma, /admin, and /mysql. Changing the interface's URL from /phpmyadmin to something non-standard will make it much harder for automated scripts to find your phpMyAdmin installation and attempt brute-force attacks.

      With our phpMyAdmin installation, we've created a symbolic link pointing to /usr/share/phpmyadmin, where the actual application files are located. To change phpMyAdmin's interface URL, we will rename this symbolic link.

First, let's navigate to the Nginx document root directory and list the files it contains to get a better sense of the change we'll make:

      • cd /var/www/html/
      • ls -l

      You’ll receive the following output:

      Output

total 8
-rw-r--r-- 1 root root 612 Apr  8 13:30 index.nginx-debian.html
lrwxrwxrwx 1 root root  21 Apr  8 15:36 phpmyadmin -> /usr/share/phpmyadmin

      The output shows that we have a symbolic link called phpmyadmin in this directory. We can change this link name to whatever we'd like. This will in turn change phpMyAdmin's access URL, which can help obscure the endpoint from bots hardcoded to search common endpoint names.

      Choose a name that obscures the purpose of the endpoint. In this guide, we'll name our endpoint /nothingtosee, but you should choose an alternate name. To accomplish this, we'll rename the link:

      • sudo mv phpmyadmin nothingtosee
      • ls -l

      After running the above commands, you’ll receive this output:

      Output

total 8
-rw-r--r-- 1 root root 612 Apr  8 13:30 index.nginx-debian.html
lrwxrwxrwx 1 root root  21 Apr  8 15:36 nothingtosee -> /usr/share/phpmyadmin

      Now, if you go to the old URL, you'll get a 404 error:

      https://server_domain_or_IP/phpmyadmin
      

      phpMyAdmin 404 error

      Your phpMyAdmin interface will now be available at the new URL we just configured:

      https://server_domain_or_IP/nothingtosee
      

      phpMyAdmin login screen

By obscuring phpMyAdmin's real location on the server, you're shielding its interface from automated scans and making manual brute-force attempts considerably harder.

      Step 3 — Disabling Root Login

In MySQL, as in regular Linux systems, the root account is a special administrative account with unrestricted access to the system. In addition to being a privileged account, it's a known login name, which makes it an obvious target for brute-force attacks. To minimize this risk, we'll configure phpMyAdmin to deny any login attempt coming from the user root. This way, even if you provide valid credentials for the user root, you'll still get an "access denied" error and won't be allowed to log in.

      Because we chose to use dbconfig-common to configure and store phpMyAdmin settings, the default configuration is currently stored in the database. We'll need to create a new config.inc.php file to define our custom settings.

      Even though the PHP files for phpMyAdmin are located inside /usr/share/phpmyadmin, the application uses configuration files located at /etc/phpmyadmin. We will create a new custom settings file inside /etc/phpmyadmin/conf.d, and name it pma_secure.php:

      • sudo nano /etc/phpmyadmin/conf.d/pma_secure.php

      The following configuration file contains the necessary settings to disable passwordless logins (AllowNoPassword set to false) and root login (AllowRoot set to false):

      /etc/phpmyadmin/conf.d/pma_secure.php

      <?php
      
      # PhpMyAdmin Settings
      # This should be set to a random string of at least 32 chars
      $cfg['blowfish_secret'] = '3!#32@3sa(+=_4?),5XP_:U%%834sdfSdg43yH#{o';
      
      $i=0;
      $i++;
      
      $cfg['Servers'][$i]['auth_type'] = 'cookie';
      $cfg['Servers'][$i]['AllowNoPassword'] = false;
      $cfg['Servers'][$i]['AllowRoot'] = false;
      
      ?>
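The blowfish_secret value shown above is only an example; you should use your own random string of at least 32 characters. One convenient way to generate one (assuming OpenSSL is installed, as it is on most Debian systems) is:

```shell
# Print 32 random bytes encoded as base64, yielding a 44-character string
# suitable for use as blowfish_secret (any string of 32+ characters works).
openssl rand -base64 32
```

Paste the resulting string between the quotes of the $cfg['blowfish_secret'] line.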
      

      Save the file when you're done editing by pressing CTRL + X then y to confirm changes and ENTER. The changes will apply automatically. If you reload the login page now and try to log in as root, you will get an Access Denied error:

      access denied

      Root login is now prohibited on your phpMyAdmin installation. This security measure will block brute-force scripts from trying to guess the root database password on your server. Moreover, it will enforce the usage of less-privileged MySQL accounts for accessing phpMyAdmin's web interface, which by itself is an important security practice.

      Step 4 — Creating an Authentication Gateway

Hiding your phpMyAdmin installation at an unusual location might sidestep some automated bots scanning the network, but it's useless against targeted attacks. To better protect a web application with restricted access, it's generally more effective to stop attackers before they can even reach the application. This way, they'll be unable to use generic exploits and brute-force attacks to guess access credentials.

      In the specific case of phpMyAdmin, it's even more important to keep the login interface locked away. By keeping it open to the world, you're offering a brute-force platform for attackers to guess your database credentials.

      Adding an extra layer of authentication to your phpMyAdmin installation enables you to increase security. Users will be required to pass through an HTTP authentication prompt before ever seeing the phpMyAdmin login screen. Most web servers, including Nginx, provide this capability natively.

To set this up, we first need to create a password file to store the authentication credentials. Nginx requires that passwords be hashed using the crypt() function. The OpenSSL suite, which should already be installed on your server, includes this functionality.

To create an encrypted password, type:

      • openssl passwd

      You will be prompted to enter and confirm the password that you wish to use. The utility will then display an encrypted version of the password that will look something like this:

      Output

      O5az.RSPzd.HE

      Copy this value, as you will need to paste it into the authentication file we'll be creating.

      Now, create an authentication file. We'll call this file pma_pass and place it in the Nginx configuration directory:

      • sudo nano /etc/nginx/pma_pass

      In this file, you’ll specify the username you would like to use, followed by a colon (:), followed by the encrypted version of the password you received from the openssl passwd utility.

      We are going to name our user sammy, but you should choose a different username. The file should look like this:

      /etc/nginx/pma_pass

      sammy:O5az.RSPzd.HE
      

      Save and close the file when you're done.

      Now we're ready to modify the Nginx configuration file. For this guide, we'll use the configuration file located at /etc/nginx/sites-available/example.com. You should use the relevant Nginx configuration file for the web location where phpMyAdmin is currently hosted. Open this file in your text editor to get started:

      • sudo nano /etc/nginx/sites-available/example.com

      Locate the server block, and the location / section within it. We need to create a new location section within this block to match phpMyAdmin's current path on the server. In this guide, phpMyAdmin's location relative to the web root is /nothingtosee:

/etc/nginx/sites-available/example.com

      server {
          . . .
      
              location / {
                      try_files $uri $uri/ =404;
              }
      
              location /nothingtosee {
                      # Settings for phpMyAdmin will go here
              }
      
          . . .
      }
      

Within this block, we'll need to set up two different directives: auth_basic, which defines the message that will be displayed on the authentication prompt, and auth_basic_user_file, which points to the file we just created. This is how your configuration file should look when you're finished:

/etc/nginx/sites-available/example.com

      server {
          . . .
      
              location /nothingtosee {
                      auth_basic "Admin Login";
                      auth_basic_user_file /etc/nginx/pma_pass;
              }
      
      
          . . .
      }
      

Save and close the file when you're done. To check if the configuration file is valid, you can run:

      • sudo nginx -t

      The following output is expected:

      Output

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

      To activate the new authentication gate, you must reload the web server:

      • sudo systemctl reload nginx

      Now, if you visit the phpMyAdmin URL in your web browser, you should be prompted for the username and password you added to the pma_pass file:

      https://server_domain_or_IP/nothingtosee
      

      Nginx authentication page

      Once you enter your credentials, you'll be taken to the standard phpMyAdmin login page.

      Note: If refreshing the page does not work, you may have to clear your cache or use a different browser session if you've already been using phpMyAdmin.

      In addition to providing an extra layer of security, this gateway will help keep your MySQL logs clean of spammy authentication attempts.

      Step 5 — Setting Up Access via Encrypted Tunnels (Optional)

      For increased security, it is possible to lock down your phpMyAdmin installation to authorized hosts only. You can whitelist authorized hosts in your Nginx configuration file, so that any request coming from an IP address that is not on the list will be denied.

      Even though this feature alone can be enough in some use cases, it's not always the best long-term solution, mainly due to the fact that most people don't access the Internet from static IP addresses. As soon as you get a new IP address from your Internet provider, you'll be unable to get to the phpMyAdmin interface until you update the Nginx configuration file with your new IP address.

      For a more robust long-term solution, you can use IP-based access control to create a setup in which users will only have access to your phpMyAdmin interface if they're accessing from either an authorized IP address or localhost via SSH tunneling. We'll see how to set this up in the sections below.

      Combining IP-based access control with SSH tunneling greatly increases security because it fully blocks access coming from the public internet (except for authorized IPs), in addition to providing a secure channel between user and server through the use of encrypted tunnels.

      Setting Up IP-Based Access Control on Nginx

      On Nginx, IP-based access control can be defined in the corresponding location block of a given site, using the directives allow and deny. For instance, if we want to only allow requests coming from a given host, we should include the following two lines, in this order, inside the relevant location block for the site we would like to protect:

      allow hostname_or_IP;
      deny all;
      

You can allow as many hosts as you want; you only need to include one allow line for each authorized host/IP inside the respective location block for the site you're protecting. The directives will be evaluated in the same order as they are listed, until a match is found or the request is finally denied due to the deny all directive.
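For example, a location block admitting two authorized hosts while rejecting everyone else might look like this (a sketch only; the path and IP addresses below are placeholders):

```nginx
location /nothingtosee {
    # Rules are evaluated top to bottom; the first match wins,
    # and deny all catches every other source with a 403 error.
    allow 203.0.113.10;   # first authorized host (placeholder IP)
    allow 198.51.100.25;  # second authorized host (placeholder IP)
    deny all;
}
```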

      We'll now configure Nginx to only allow requests coming from localhost or your current IP address. First, you'll need to know the current public IP address your local machine is using to connect to the Internet. There are various ways to obtain this information; for simplicity, we're going to use the service provided by ipinfo.io. You can either open the URL https://ipinfo.io/ip in your browser, or run the following command from your local machine:

      • curl https://ipinfo.io/ip

      You should get a simple IP address as output, like this:

      Output

      203.0.113.111

      That is your current public IP address. We'll configure phpMyAdmin's location block to only allow requests coming from that IP, in addition to localhost. We'll need to edit once again the configuration block for phpMyAdmin inside /etc/nginx/sites-available/example.com.

      Open the Nginx configuration file using your command-line editor of choice:

      • sudo nano /etc/nginx/sites-available/example.com

      Because we already have an access rule within our current configuration, we need to combine it with IP-based access control using the directive satisfy all. This way, we can keep the current HTTP authentication prompt for increased security.

This is how your phpMyAdmin Nginx configuration should look after you're done editing:

      /etc/nginx/sites-available/example.com

      server {
          . . .
      
          location /nothingtosee {
              satisfy all; #requires both conditions
      
              allow 203.0.113.111; #allow your IP
              allow 127.0.0.1; #allow localhost via SSH tunnels
              deny all; #deny all other sources
      
              auth_basic "Admin Login";
              auth_basic_user_file /etc/nginx/pma_pass;
          }
      
          . . .
      }
      

Remember to replace nothingtosee with the actual path where phpMyAdmin can be found, and 203.0.113.111 with your current public IP address.

Save and close the file when you're done. To check if the configuration file is valid, you can run:

      • sudo nginx -t

      The following output is expected:

      Output

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

      Now reload the web server so the changes take effect:

      • sudo systemctl reload nginx

      Because your IP address is explicitly listed as an authorized host, your access shouldn't be disturbed. Anyone else trying to access your phpMyAdmin installation will now get a 403 error (Forbidden):

      https://server_domain_or_IP/nothingtosee
      

      403 error

      In the next section, we'll see how to use SSH tunneling to access the web server through local requests. This way, you'll still be able to access phpMyAdmin's interface even when your IP address changes.

      Accessing phpMyAdmin Through an Encrypted Tunnel

SSH tunneling works as a way of redirecting network traffic through encrypted channels. By running an ssh command similar to the one you would use to log into a server, you can create a secure "tunnel" between your local machine and that server. All traffic arriving at a given local port can then be redirected through the encrypted tunnel, using the remote server as a proxy before reaching out to the internet. It's similar to what happens when you use a VPN (Virtual Private Network); however, SSH tunneling is much simpler to set up.

      We'll use SSH tunneling to proxy our requests to the remote web server running phpMyAdmin. By creating a tunnel between your local machine and the server where phpMyAdmin is installed, you can redirect local requests to the remote web server, and what's more important, traffic will be encrypted and requests will reach Nginx as if they're coming from localhost. This way, no matter what IP address you're connecting from, you'll be able to securely access phpMyAdmin's interface.

      Because the traffic between your local machine and the remote web server will be encrypted, this is a safe alternative for situations where you can't have an SSL/TLS certificate installed on the web server running phpMyAdmin.

      From your local machine, run this command whenever you need access to phpMyAdmin:

      • ssh user@server_domain_or_IP -L 8000:localhost:80 -L 8443:localhost:443 -N

      Let's examine each part of the command:

      • user: SSH user to connect to the server where phpMyAdmin is running
• server_domain_or_IP: SSH host where phpMyAdmin is running
      • -L 8000:localhost:80 redirects HTTP traffic on port 8000
      • -L 8443:localhost:443 redirects HTTPS traffic on port 8443
      • -N: do not execute remote commands

Note: This command will block the terminal until interrupted with CTRL+C, which ends the SSH connection and stops the port redirection. If you'd prefer to run this command in the background, you can use the SSH option -f.

      Now, go to your browser and replace server_domain_or_IP with localhost:PORT, where PORT is either 8000 for HTTP or 8443 for HTTPS:

      http://localhost:8000/nothingtosee
      
https://localhost:8443/nothingtosee
      

      phpMyAdmin login screen

      Note: If you're accessing phpMyAdmin via https, you might get an alert message questioning the security of the SSL certificate. This happens because the domain name you're using (localhost) doesn't match the address registered within the certificate (domain where phpMyAdmin is actually being served). It is safe to proceed.

      All requests on localhost:8000 (HTTP) and localhost:8443 (HTTPS) are now being redirected through a secure tunnel to your remote phpMyAdmin application. Not only have you increased security by disabling public access to your phpMyAdmin, you also protected all traffic between your local computer and the remote server by using an encrypted tunnel to send and receive data.

If you'd like to enforce the use of SSH tunneling for anyone who wants access to your phpMyAdmin interface (including you), you can do so by removing any other authorized IPs from the Nginx configuration file, leaving 127.0.0.1 as the only allowed host for that location. Considering nobody will be able to make direct requests to phpMyAdmin, it is safe to remove HTTP authentication in order to simplify your setup. This is how your configuration file would look in such a scenario:

      /etc/nginx/sites-available/example.com

      server {
          . . .
      
          location /nothingtosee { 
              allow 127.0.0.1; #allow localhost only
              deny all; #deny all other sources
          }
      
          . . .
      }
      

      Once you reload Nginx's configuration with sudo systemctl reload nginx, your phpMyAdmin installation will be locked down and users will be required to use SSH tunnels in order to access phpMyAdmin's interface via redirected requests.

      Conclusion

In this tutorial, we saw how to install phpMyAdmin on Debian 9 running Nginx as the web server. We also covered advanced methods to secure a phpMyAdmin installation, such as disabling root login, creating an extra layer of authentication, and using SSH tunneling to access a phpMyAdmin installation via local requests only.

      After completing this tutorial, you should be able to manage your MySQL databases from a reasonably secure web interface. This user interface exposes most of the functionality available via the MySQL command line. You can browse databases and schema, execute queries, and create new data sets and structures.




      How To Set Up an Nginx Ingress on DigitalOcean Kubernetes Using Helm


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

Kubernetes Ingresses offer you a flexible way of routing traffic from beyond your cluster to internal Kubernetes Services. Ingress Resources are objects in Kubernetes that define rules for routing HTTP and HTTPS traffic to Services. For these to work, an Ingress Controller must be present; its role is to implement the rules by accepting traffic (most likely via a Load Balancer) and routing it to the appropriate Services. Most Ingress Controllers use only one global Load Balancer for all Ingresses, which is more efficient than creating a Load Balancer for every Service you wish to expose.
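To preview what such routing rules look like, here is a minimal sketch of an Ingress Resource (the host and backend Service names are placeholders, and the apiVersion reflects the v1beta1 API era this guide targets; you'll create a real Ingress in a later step):

```yaml
apiVersion: extensions/v1beta1   # v1beta1-era Ingress API
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: hw1.example.com        # route requests for this hostname...
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first   # ...to this Service
          servicePort: 80
```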

Helm is a package manager for Kubernetes. Using Helm Charts with your cluster provides configurability and lifecycle management for updating, rolling back, and deleting Kubernetes applications.

      In this guide, you’ll set up the Kubernetes-maintained Nginx Ingress Controller using Helm. You’ll then create an Ingress Resource to route traffic from your domains to example Hello World back-end services. Once you’ve set up the Ingress, you’ll install Cert-Manager to your cluster to be able to automatically provision Let’s Encrypt TLS certificates to secure your Ingresses.

      Prerequisites

• A DigitalOcean Kubernetes cluster with your connection configuration set as the kubectl default. Instructions on how to configure kubectl are shown under the Connect to your Cluster step when you create your cluster. To learn how to create a Kubernetes cluster on DigitalOcean, see Kubernetes Quickstart.

      • The Helm package manager installed on your local machine, and Tiller installed on your cluster. Complete steps 1 and 2 of the How To Install Software on Kubernetes Clusters with the Helm Package Manager tutorial.

      • A fully registered domain name with two available A records. This tutorial will use hw1.example.com and hw2.example.com throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.

      Step 1 — Setting Up Hello World Deployments

      In this section, before you deploy the Nginx Ingress, you will deploy a Hello World app called hello-kubernetes to have some Services to which you’ll route the traffic. To confirm that the Nginx Ingress works properly in the next steps, you’ll deploy it twice, each time with a different welcome message that will be shown when you access it from your browser.

      You’ll store the deployment configuration on your local machine. The first deployment configuration will be in a file named hello-kubernetes-first.yaml. Create it using a text editor:

      • nano hello-kubernetes-first.yaml

      Add the following lines:

      hello-kubernetes-first.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: hello-kubernetes-first
      spec:
        type: ClusterIP
        ports:
        - port: 80
          targetPort: 8080
        selector:
          app: hello-kubernetes-first
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello-kubernetes-first
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: hello-kubernetes-first
        template:
          metadata:
            labels:
              app: hello-kubernetes-first
          spec:
            containers:
            - name: hello-kubernetes
              image: paulbouwer/hello-kubernetes:1.5
              ports:
              - containerPort: 8080
              env:
              - name: MESSAGE
                value: Hello from the first deployment!
      

      This configuration defines a Deployment and a Service. The Deployment consists of three replicas of the paulbouwer/hello-kubernetes:1.5 image, and an environment variable named MESSAGE—you will see its value when you access the app. The Service here is defined to expose the Deployment in-cluster at port 80.

      Save and close the file.

      Then, create this first variant of the hello-kubernetes app in Kubernetes by running the following command:

      • kubectl create -f hello-kubernetes-first.yaml

      You’ll see the following output:

      Output

service/hello-kubernetes-first created
deployment.apps/hello-kubernetes-first created

      To verify the Service’s creation, run the following command:

      • kubectl get service hello-kubernetes-first

      The output will look like this:

      Output

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
hello-kubernetes-first   ClusterIP   10.245.85.236   <none>        80:31623/TCP   35s

      You’ll see that the newly created Service has a ClusterIP assigned, which means that it is working properly. All traffic sent to it will be forwarded to the selected Deployment on port 8080. Now that you have deployed the first variant of the hello-kubernetes app, you’ll work on the second one.

      Open a file called hello-kubernetes-second.yaml for editing:

      • nano hello-kubernetes-second.yaml

      Add the following lines:

      hello-kubernetes-second.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: hello-kubernetes-second
      spec:
        type: ClusterIP
        ports:
        - port: 80
          targetPort: 8080
        selector:
          app: hello-kubernetes-second
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello-kubernetes-second
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: hello-kubernetes-second
        template:
          metadata:
            labels:
              app: hello-kubernetes-second
          spec:
            containers:
            - name: hello-kubernetes
              image: paulbouwer/hello-kubernetes:1.5
              ports:
              - containerPort: 8080
              env:
              - name: MESSAGE
                value: Hello from the second deployment!
      

      Save and close the file.

      This variant has the same structure as the previous configuration; the only differences are in the Deployment and Service names, to avoid collisions, and the message.

      Now create it in Kubernetes with the following command:

      • kubectl create -f hello-kubernetes-second.yaml

      The output will be:

      Output

service/hello-kubernetes-second created
deployment.apps/hello-kubernetes-second created

Verify that the second Service is up and running by listing all of your services:

      • kubectl get service

      The output will be similar to this:

      Output

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
hello-kubernetes-first    ClusterIP   10.245.85.236   <none>        80:31623/TCP   54s
hello-kubernetes-second   ClusterIP   10.245.99.130   <none>        80:30303/TCP   12s
kubernetes                ClusterIP   10.245.0.1      <none>        443/TCP        5m

      Both hello-kubernetes-first and hello-kubernetes-second are listed, which means that Kubernetes has created them successfully.

You've created two deployments of the hello-kubernetes app with accompanying Services. Each one has a different message set in its deployment specification, which allows you to differentiate them during testing. In the next step, you'll install the Nginx Ingress Controller itself.

      Step 2 — Installing the Kubernetes Nginx Ingress Controller

Now you'll install the Kubernetes-maintained Nginx Ingress Controller using Helm. Note that there are several Nginx Ingress Controller implementations; this guide uses the one maintained by the Kubernetes community.

      The Nginx Ingress Controller consists of a Pod and a Service. The Pod runs the Controller, which constantly polls the /ingresses endpoint on the API server of your cluster for updates to available Ingress Resources. The Service is of type LoadBalancer, and because you are deploying it to a DigitalOcean Kubernetes cluster, the cluster will automatically create a DigitalOcean Load Balancer, through which all external traffic will flow to the Controller. The Controller will then route the traffic to appropriate Services, as defined in Ingress Resources.

      Only the LoadBalancer Service knows the IP address of the automatically created Load Balancer. Some apps (such as ExternalDNS) need to know its IP address, but can only read the configuration of an Ingress. The Controller can be configured to publish the IP address on each Ingress by setting the controller.publishService.enabled parameter to true during helm install. It is recommended to enable this setting to support applications that may depend on the IP address of the Load Balancer.

      To install the Nginx Ingress Controller to your cluster, run the following command:

      • helm install stable/nginx-ingress --name nginx-ingress --set controller.publishService.enabled=true

      This command installs the Nginx Ingress Controller from the stable charts repository, names the Helm release nginx-ingress, and sets the publishService parameter to true.

      The output will look like:

      Output

      NAME:   nginx-ingress
      LAST DEPLOYED: ...
      NAMESPACE: default
      STATUS: DEPLOYED

      RESOURCES:
      ==> v1/ConfigMap
      NAME                      DATA  AGE
      nginx-ingress-controller  1     0s

      ==> v1/Pod(related)
      NAME                                            READY  STATUS             RESTARTS  AGE
      nginx-ingress-controller-7658988787-npv28       0/1    ContainerCreating  0         0s
      nginx-ingress-default-backend-7f5d59d759-26xq2  0/1    ContainerCreating  0         0s

      ==> v1/Service
      NAME                           TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)                     AGE
      nginx-ingress-controller       LoadBalancer  10.245.9.107   <pending>    80:31305/TCP,443:30519/TCP  0s
      nginx-ingress-default-backend  ClusterIP     10.245.221.49  <none>       80/TCP                      0s

      ==> v1/ServiceAccount
      NAME           SECRETS  AGE
      nginx-ingress  1        0s

      ==> v1beta1/ClusterRole
      NAME           AGE
      nginx-ingress  0s

      ==> v1beta1/ClusterRoleBinding
      NAME           AGE
      nginx-ingress  0s

      ==> v1beta1/Deployment
      NAME                           READY  UP-TO-DATE  AVAILABLE  AGE
      nginx-ingress-controller       0/1    1           0          0s
      nginx-ingress-default-backend  0/1    1           0          0s

      ==> v1beta1/Role
      NAME           AGE
      nginx-ingress  0s

      ==> v1beta1/RoleBinding
      NAME           AGE
      nginx-ingress  0s

      NOTES: ...

      Helm has logged what resources in Kubernetes it created as a part of the chart installation.

      You can watch the Load Balancer become available by running:

      • kubectl get services -o wide -w nginx-ingress-controller

      You've installed the Nginx Ingress maintained by the Kubernetes community. It will route HTTP and HTTPS traffic from the Load Balancer to appropriate back-end Services, configured in Ingress Resources. In the next step, you'll expose the hello-kubernetes app deployments using an Ingress Resource.

      Step 3 — Exposing the App Using an Ingress

      Now you're going to create an Ingress Resource and use it to expose the hello-kubernetes app deployments at your desired domains. You'll then test it by accessing it from your browser.

      You'll store the Ingress in a file named hello-kubernetes-ingress.yaml. Create it using your editor:

      • nano hello-kubernetes-ingress.yaml

      Add the following lines to your file:

      hello-kubernetes-ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: hello-kubernetes-ingress
        annotations:
          kubernetes.io/ingress.class: nginx
      spec:
        rules:
        - host: hw1.example.com
          http:
            paths:
            - backend:
                serviceName: hello-kubernetes-first
                servicePort: 80
        - host: hw2.example.com
          http:
            paths:
            - backend:
                serviceName: hello-kubernetes-second
                servicePort: 80
      

      In the code above, you define an Ingress Resource with the name hello-kubernetes-ingress. Then, you specify two host rules, so that hw1.example.com is routed to the hello-kubernetes-first Service, and hw2.example.com is routed to the Service from the second deployment (hello-kubernetes-second).

      Remember to replace the highlighted domains with your own, then save and close the file.

      Create it in Kubernetes by running the following command:

      • kubectl create -f hello-kubernetes-ingress.yaml

      Next, you'll need to ensure that your two domains are pointed to the Load Balancer via A records. This is done through your DNS provider. To configure your DNS records on DigitalOcean, see How to Manage DNS Records.
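      While you wait for DNS to propagate, you can exercise the Ingress host rules early by pointing a request directly at the Load Balancer. This sketch only builds and prints the curl command; the IP 203.0.113.10 and the host name are placeholders for your own values:

      ```shell
      # Placeholder values -- substitute your Load Balancer's external IP
      # (from kubectl get services) and one of your Ingress hosts.
      LB_IP="203.0.113.10"
      HOST="hw1.example.com"

      # --resolve pins the host name to the given IP, so the request
      # exercises the Ingress host rule without relying on DNS.
      echo "curl --resolve ${HOST}:80:${LB_IP} http://${HOST}/"
      ```

      Once the Controller is running, the printed command should return the hello-kubernetes page for that host.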

      You can now navigate to hw1.example.com in your browser. You will see the following:

      Hello Kubernetes - First Deployment

      The second variant (hw2.example.com) will show a different message:

      Hello Kubernetes - Second Deployment

      With this, you have verified that the Ingress Controller correctly routes requests; in this case, from your two domains to two different Services.

      You've created and configured an Ingress Resource to serve the hello-kubernetes app deployments at your domains. In the next step, you'll set up Cert-Manager, so you'll be able to secure your Ingress Resources with free TLS certificates from Let's Encrypt.

      Step 4 — Securing the Ingress Using Cert-Manager

      To secure your Ingress Resources, you'll install Cert-Manager, create a ClusterIssuer for production, and modify the configuration of your Ingress to take advantage of the TLS certificates. ClusterIssuers are Cert-Manager Resources in Kubernetes that provision TLS certificates. Once installed and configured, your app will be running behind HTTPS.

      Before installing Cert-Manager to your cluster via Helm, you'll manually apply the required CRDs (Custom Resource Definitions) from the jetstack/cert-manager repository by running the following command:

      • kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.8/deploy/manifests/00-crds.yaml

      You will see the following output:

      Output

      customresourcedefinition.apiextensions.k8s.io/certificates.certmanager.k8s.io created
      customresourcedefinition.apiextensions.k8s.io/challenges.certmanager.k8s.io created
      customresourcedefinition.apiextensions.k8s.io/clusterissuers.certmanager.k8s.io created
      customresourcedefinition.apiextensions.k8s.io/issuers.certmanager.k8s.io created
      customresourcedefinition.apiextensions.k8s.io/orders.certmanager.k8s.io created

      This shows that Kubernetes has applied the custom resources you require for cert-manager.

      Note: If you've followed this tutorial and the prerequisites, you haven't created a Kubernetes namespace called cert-manager, so you won't have to run the command in this note block. However, if this namespace does exist on your cluster, you'll need to inform Cert-Manager not to validate it with the following command:

      • kubectl label namespace cert-manager certmanager.k8s.io/disable-validation="true"

      The Webhook component of Cert-Manager requires TLS certificates to securely communicate with the Kubernetes API server. In order for Cert-Manager to generate certificates for it for the first time, resource validation must be disabled on the namespace it is deployed in. Otherwise, it would be stuck in an infinite loop, unable to contact the API and unable to generate the TLS certificates.

      The output will be:

      Output

      namespace/cert-manager labeled

      Next, you'll need to add the Jetstack Helm repository to Helm, which hosts the Cert-Manager chart. To do this, run the following command:

      • helm repo add jetstack https://charts.jetstack.io

      Helm will display the following output:

      Output

      "jetstack" has been added to your repositories

      Finally, install Cert-Manager into the cert-manager namespace:

      • helm install --name cert-manager --namespace cert-manager jetstack/cert-manager

      You will see the following output:

      Output

      NAME:   cert-manager
      LAST DEPLOYED: ...
      NAMESPACE: cert-manager
      STATUS: DEPLOYED

      RESOURCES:
      ==> v1/ClusterRole
      NAME                                    AGE
      cert-manager-edit                       3s
      cert-manager-view                       3s
      cert-manager-webhook:webhook-requester  3s

      ==> v1/Pod(related)
      NAME                                     READY  STATUS             RESTARTS  AGE
      cert-manager-5d669ffbd8-rb6tr            0/1    ContainerCreating  0         2s
      cert-manager-cainjector-79b7fc64f-gqbtz  0/1    ContainerCreating  0         2s
      cert-manager-webhook-6484955794-v56lx    0/1    ContainerCreating  0         2s

      ...

      NOTES:
      cert-manager has been deployed successfully!

      In order to begin issuing certificates, you will need to set up a ClusterIssuer or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).

      More information on the different types of issuers and how to configure them can be found in our documentation:

      https://docs.cert-manager.io/en/latest/reference/issuers.html

      For information on how to configure cert-manager to automatically provision Certificates for Ingress resources, take a look at the `ingress-shim` documentation:

      https://docs.cert-manager.io/en/latest/reference/ingress-shim.html

      The output shows that the installation was successful. As listed in the NOTES in the output, you'll need to set up an Issuer to issue TLS certificates.

      You'll now create one that issues Let's Encrypt certificates, and you'll store its configuration in a file named production_issuer.yaml. Create it and open it for editing:

      • nano production_issuer.yaml

      Add the following lines:

      production_issuer.yaml

      apiVersion: certmanager.k8s.io/v1alpha1
      kind: ClusterIssuer
      metadata:
        name: letsencrypt-prod
      spec:
        acme:
          # The ACME server URL
          server: https://acme-v02.api.letsencrypt.org/directory
          # Email address used for ACME registration
          email: your_email_address
          # Name of a secret used to store the ACME account private key
          privateKeySecretRef:
            name: letsencrypt-prod
          # Enable the HTTP-01 challenge provider
          http01: {}
      

      This configuration defines a ClusterIssuer that contacts Let's Encrypt in order to issue certificates. You'll need to replace your_email_address with your email address in order to receive possible urgent notices regarding the security and expiration of your certificates.

      Save and close the file.

      Roll it out with kubectl:

      • kubectl create -f production_issuer.yaml

      You will see the following output:

      Output

      clusterissuer.certmanager.k8s.io/letsencrypt-prod created

      With Cert-Manager installed, you're ready to introduce the certificates to the Ingress Resource defined in the previous step. Open hello-kubernetes-ingress.yaml for editing:

      • nano hello-kubernetes-ingress.yaml

      Add the highlighted lines:

      hello-kubernetes-ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: hello-kubernetes-ingress
        annotations:
          kubernetes.io/ingress.class: nginx
          certmanager.k8s.io/cluster-issuer: letsencrypt-prod
      spec:
        tls:
        - hosts:
          - hw1.example.com
          - hw2.example.com
          secretName: letsencrypt-prod
        rules:
        - host: hw1.example.com
          http:
            paths:
            - backend:
                serviceName: hello-kubernetes-first
                servicePort: 80
        - host: hw2.example.com
          http:
            paths:
            - backend:
                serviceName: hello-kubernetes-second
                servicePort: 80
      

      The tls block under spec defines the Secret (named by secretName) in which the certificates for your sites, listed under hosts, will be stored. The letsencrypt-prod ClusterIssuer issues these certificates. The secretName must be different for every Ingress you create.

      Remember to replace the hw1.example.com and hw2.example.com with your own domains. When you've finished editing, save and close the file.

      Re-apply this configuration to your cluster by running the following command:

      • kubectl apply -f hello-kubernetes-ingress.yaml

      You will see the following output:

      Output

      ingress.extensions/hello-kubernetes-ingress configured

      You'll need to wait a few minutes for the Let's Encrypt servers to issue a certificate for your domains. In the meantime, you can track its progress by inspecting the output of the following command:

      • kubectl describe certificate hello-kubernetes

      The end of the output will look similar to this:

      Output

      Events:
        Type    Reason              Age   From          Message
        ----    ------              ----  ----          -------
        Normal  Generated           56s   cert-manager  Generated new private key
        Normal  GenerateSelfSigned  56s   cert-manager  Generated temporary self signed certificate
        Normal  OrderCreated        56s   cert-manager  Created Order resource "hello-kubernetes-1197334873"
        Normal  OrderComplete       31s   cert-manager  Order "hello-kubernetes-1197334873" completed successfully
        Normal  CertIssued          31s   cert-manager  Certificate issued successfully

      When your last line of output reads Certificate issued successfully, you can exit by pressing CTRL + C. Navigate to one of your domains in your browser to test. You'll see the padlock to the left of the address bar in your browser, signifying that your connection is secure.
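      If you'd rather script the wait than watch the output by hand, the check reduces to grepping the describe output for the success event. A sketch; the polling line shown in the comment assumes the certificate name hello-kubernetes from the command above:

      ```shell
      # Matches the final event cert-manager emits on success.
      is_issued() {
          grep -q 'Certificate issued successfully'
      }

      # On the cluster you would poll (not run here):
      #   until kubectl describe certificate hello-kubernetes | is_issued; do sleep 10; done

      # Demonstrated against a captured line of the expected Events output:
      printf 'Normal CertIssued 31s cert-manager Certificate issued successfully\n' \
          | is_issued && echo 'certificate ready'
      ```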

      In this step, you have installed Cert-Manager using Helm and created a Let's Encrypt ClusterIssuer. After, you updated your Ingress Resource to take advantage of the Issuer for generating TLS certificates. In the end, you have confirmed that HTTPS works correctly by navigating to one of your domains in your browser.

      Conclusion

      You have now successfully set up the Nginx Ingress Controller and Cert-Manager on your DigitalOcean Kubernetes cluster using Helm. You are now able to expose your apps to the Internet, at your domains, secured using Let's Encrypt TLS certificates.

      For further information about the Helm package manager, read this introduction article.




      How To Install Nginx on Ubuntu 18.04


      Introduction

      Nginx is one of the most popular web servers in the world and is responsible for hosting some of the largest and highest-traffic sites on the Internet. It is more resource-friendly than Apache in most cases and can be used as a web server or as a reverse proxy.

      In this guide, we'll discuss how to install Nginx on your Ubuntu 18.04 server.

      Prerequisites

      Before you begin this guide, you should have a regular, non-root user with sudo privileges configured on your server. You can learn how to set up a regular user account by following our initial server setup guide for Ubuntu 18.04.

      When you have an account available, log in as your non-root user to begin.

      Step 1 — Installing Nginx

      Because Nginx is available in Ubuntu's default repositories, it is possible to install it from these repositories using the apt packaging system.

      Since this is our first interaction with the apt packaging system in this session, let's update our local package index so that we have access to the most recent package listings. Afterwards, we can install nginx:

      • sudo apt update
      • sudo apt install nginx

      After accepting the procedure, apt will install Nginx and any required dependencies to your server.

      Step 2 — Adjusting the Firewall

      Before testing Nginx, the firewall software needs to be adjusted to allow access to the service. Nginx registers itself as a service with ufw upon installation, making it straightforward to allow Nginx access.

      List the application configurations that ufw knows how to work with by typing:

      • sudo ufw app list

      You should get a listing of the application profiles:

      Output

      Available applications:
        Nginx Full
        Nginx HTTP
        Nginx HTTPS
        OpenSSH

      As you can see, there are three profiles available for Nginx:

      • Nginx Full: This profile opens both port 80 (normal, unencrypted web traffic) and port 443 (TLS/SSL encrypted traffic)
      • Nginx HTTP: This profile opens only port 80 (normal, unencrypted web traffic)
      • Nginx HTTPS: This profile opens only port 443 (TLS/SSL encrypted traffic)

      It is recommended that you enable the most restrictive profile that will still allow the traffic you've configured. Since we haven't configured SSL for our server yet in this guide, we only need to allow traffic on port 80.

      You can enable this by typing:

      • sudo ufw allow 'Nginx HTTP'

      You can verify the change by typing:

      • sudo ufw status

      You should see HTTP traffic allowed in the displayed output:

      Output

      Status: active

      To                         Action      From
      --                         ------      ----
      OpenSSH                    ALLOW       Anywhere
      Nginx HTTP                 ALLOW       Anywhere
      OpenSSH (v6)               ALLOW       Anywhere (v6)
      Nginx HTTP (v6)            ALLOW       Anywhere (v6)

      Step 3 — Checking Your Web Server

      At the end of the installation process, Ubuntu 18.04 starts Nginx. The web server should already be up and running.

      We can check with the systemd init system to make sure the service is running by typing:

      • systemctl status nginx

      Output

      ● nginx.service - A high performance web server and a reverse proxy server
         Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
         Active: active (running) since Fri 2018-04-20 16:08:19 UTC; 3 days ago
           Docs: man:nginx(8)
       Main PID: 2369 (nginx)
          Tasks: 2 (limit: 1153)
         CGroup: /system.slice/nginx.service
                 ├─2369 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
                 └─2380 nginx: worker process

      As you can see above, the service appears to have started successfully. However, the best way to test this is to actually request a page from Nginx.

      You can access the default Nginx landing page to confirm that the software is running properly by navigating to your server's IP address. If you do not know your server's IP address, you can get it a few different ways.

      Try typing this at your server's command prompt:

      • ip addr show eth0 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'

      You will get back a few lines. You can try each in your web browser to see if they work.
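      To see what that pipeline does, here it is applied to a captured `ip addr` line (the address 203.0.113.24 is a made-up example): awk keeps the second field, and sed strips the /20 CIDR suffix, leaving a bare address:

      ```shell
      # A captured `ip addr show eth0` line, piped through the same filters:
      printf '    inet 203.0.113.24/20 brd 203.0.113.255 scope global eth0\n' \
          | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'
      # prints: 203.0.113.24
      ```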

      Alternatively, you can type the following, which should give you your public IP address as seen from another location on the Internet:

      When you have your server's IP address, enter it into your browser's address bar:

      http://your_server_ip
      

      You should see the default Nginx landing page:

      default Nginx page

      This page is included with Nginx to show you that the server is running correctly.

      Step 4 — Managing the Nginx Process

      Now that you have your web server up and running, let's review some basic management commands.

      To stop your web server, type:

      • sudo systemctl stop nginx

      To start the web server when it is stopped, type:

      • sudo systemctl start nginx

      To stop and then start the service again, type:

      • sudo systemctl restart nginx

      If you are simply making configuration changes, Nginx can often reload without dropping connections. To do this, type:

      • sudo systemctl reload nginx

      By default, Nginx is configured to start automatically when the server boots. If this is not what you want, you can disable this behavior by typing:

      • sudo systemctl disable nginx

      To re-enable the service to start up at boot, you can type:

      • sudo systemctl enable nginx

      Step 5 — Setting Up Server Blocks (Recommended)

      When using the Nginx web server, server blocks (similar to virtual hosts in Apache) can be used to encapsulate configuration details and host more than one domain from a single server. We will set up a domain called example.com, but you should replace this with your own domain name. To learn more about setting up a domain name with DigitalOcean, see our Introduction to DigitalOcean DNS.

      Nginx on Ubuntu 18.04 has one server block enabled by default that is configured to serve documents out of a directory at /var/www/html. While this works well for a single site, it can become unwieldy if you are hosting multiple sites. Instead of modifying /var/www/html, let's create a directory structure within /var/www for our example.com site, leaving /var/www/html in place as the default directory to be served if a client request doesn't match any other sites.

      Create the directory for example.com as follows, using the -p flag to create any necessary parent directories:

      • sudo mkdir -p /var/www/example.com/html

      Next, assign ownership of the directory with the $USER environment variable:

      • sudo chown -R $USER:$USER /var/www/example.com/html

      The permissions of your web roots should be correct if you haven't modified your umask value, but you can make sure by typing:

      • sudo chmod -R 755 /var/www/example.com

      Next, create a sample index.html page using nano or your favorite editor:

      • nano /var/www/example.com/html/index.html

      Inside, add the following sample HTML:

      /var/www/example.com/html/index.html

      <html>
          <head>
              <title>Welcome to Example.com!</title>
          </head>
          <body>
              <h1>Success!  The example.com server block is working!</h1>
          </body>
      </html>
      

      Save and close the file when you are finished.
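      The steps so far (create the directory, fix permissions, write a test page) can be combined into one short script. This is a sketch run against a scratch directory so it is safe to try anywhere; on the real server you would use /var/www as the base, prefix the commands with sudo, and chown the tree to your user as shown above:

      ```shell
      # Scratch base directory so the sketch is harmless to run;
      # on a real server this would be /var/www (with sudo and chown).
      BASE="$(mktemp -d)"
      DOMAIN="example.com"     # illustrative domain -- substitute your own
      WEBROOT="${BASE}/${DOMAIN}/html"

      mkdir -p "${WEBROOT}"                 # -p creates parent directories as needed
      chmod -R 755 "${BASE}/${DOMAIN}"      # world-readable, owner-writable

      # Write a minimal test page into the new document root.
      cat > "${WEBROOT}/index.html" <<'EOF'
      <html>
          <head><title>Welcome to Example.com!</title></head>
          <body><h1>Success!  The example.com server block is working!</h1></body>
      </html>
      EOF

      grep -c 'Success' "${WEBROOT}/index.html"   # prints 1 if the page was written
      ```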

      In order for Nginx to serve this content, it's necessary to create a server block with the correct directives. Instead of modifying the default configuration file directly, let's make a new one at /etc/nginx/sites-available/example.com:

      • sudo nano /etc/nginx/sites-available/example.com

      Paste in the following configuration block, which is similar to the default, but updated for our new directory and domain name:

      /etc/nginx/sites-available/example.com

      server {
              listen 80;
              listen [::]:80;
      
              root /var/www/example.com/html;
              index index.html index.htm index.nginx-debian.html;
      
              server_name example.com www.example.com;
      
              location / {
                      try_files $uri $uri/ =404;
              }
      }
      

      Notice that we've updated the root configuration to our new directory, and the server_name to our domain name.

      Next, let's enable the file by creating a link from it to the sites-enabled directory, which Nginx reads from during startup:

      • sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/

      Two server blocks are now enabled and configured to respond to requests based on their listen and server_name directives (you can read more about how Nginx processes these directives here):

      • example.com: Will respond to requests for example.com and www.example.com.
      • default: Will respond to any requests on port 80 that do not match the other two blocks.
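      Enabling a site is nothing more than creating that symlink. The following dry-run sketch mirrors the sites-available/sites-enabled layout in a scratch directory, so you can see the mechanism without touching /etc/nginx:

      ```shell
      # Scratch copy of the layout; the real directories live under /etc/nginx.
      DEMO="$(mktemp -d)"
      mkdir -p "${DEMO}/sites-available" "${DEMO}/sites-enabled"
      printf 'server { listen 80; }\n' > "${DEMO}/sites-available/example.com"

      # Equivalent of:
      #   sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
      ln -s "${DEMO}/sites-available/example.com" "${DEMO}/sites-enabled/example.com"

      # Nginx only reads sites-enabled at startup; confirm the link resolves.
      readlink -f "${DEMO}/sites-enabled/example.com"
      ```

      Deleting the symlink (leaving the file in sites-available untouched) is how a site is later disabled.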

      To avoid a possible hash bucket memory problem that can arise from adding additional server names, it is necessary to adjust a single value in the /etc/nginx/nginx.conf file. Open the file:

      • sudo nano /etc/nginx/nginx.conf

      Find the server_names_hash_bucket_size directive and remove the # symbol to uncomment the line:

      /etc/nginx/nginx.conf

      ...
      http {
          ...
          server_names_hash_bucket_size 64;
          ...
      }
      ...
      

      Save and close the file when you are finished.

      Next, test to make sure that there are no syntax errors in any of your Nginx files:

      • sudo nginx -t

      If there aren't any problems, restart Nginx to enable your changes:

      • sudo systemctl restart nginx

      Nginx should now be serving your domain name. You can test this by navigating to http://example.com, where you should see something like this:

      First Nginx server block

      Step 6 — Getting Familiar with Important Nginx Files and Directories

      Now that you know how to manage the Nginx service itself, you should take a few minutes to familiarize yourself with a few important directories and files.

      Content

      • /var/www/html: The actual web content, which by default only consists of the default Nginx page you saw earlier, is served out of the /var/www/html directory. This can be changed by altering Nginx configuration files.

      Server Configuration

      • /etc/nginx: The Nginx configuration directory. All of the Nginx configuration files reside here.
      • /etc/nginx/nginx.conf: The main Nginx configuration file. This can be modified to make changes to the Nginx global configuration.
      • /etc/nginx/sites-available/: The directory where per-site server blocks can be stored. Nginx will not use the configuration files found in this directory unless they are linked to the sites-enabled directory. Typically, all server block configuration is done in this directory and then enabled by linking to the other directory.
      • /etc/nginx/sites-enabled/: The directory where enabled per-site server blocks are stored. Typically, these are created by linking to configuration files found in the sites-available directory.
      • /etc/nginx/snippets: This directory contains configuration fragments that can be included elsewhere in the Nginx configuration. Potentially repeatable configuration segments are good candidates for refactoring into snippets.

      Server Logs

      • /var/log/nginx/access.log: Every request to your web server is recorded in this log file unless Nginx is configured to do otherwise.
      • /var/log/nginx/error.log: Any Nginx errors will be recorded in this log.
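      As a quick example of working with these logs, here is a sketch that tallies HTTP status codes from an access log. In Nginx's default "combined" log format the status code is the ninth whitespace-separated field; the sample lines below are fabricated for the demonstration:

      ```shell
      # Count occurrences of each HTTP status code in an access log.
      count_statuses() {
          awk '{ print $9 }' "$1" | sort | uniq -c | sort -rn
      }

      # Demonstration against fabricated lines in the default log format;
      # on a real server: count_statuses /var/log/nginx/access.log
      LOG="$(mktemp)"
      cat > "${LOG}" <<'EOF'
      203.0.113.5 - - [20/Apr/2018:16:08:19 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.58.0"
      203.0.113.5 - - [20/Apr/2018:16:08:25 +0000] "GET /missing HTTP/1.1" 404 178 "-" "curl/7.58.0"
      EOF
      count_statuses "${LOG}"
      ```

      A spike of 404s or 5xx codes in this tally is often the first hint to go read /var/log/nginx/error.log.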

      Conclusion

      Now that you have your web server installed, you have many options for the type of content to serve and the technologies you can use to create a richer experience.

      If you'd like to build out a more complete application stack, check out this article on how to configure a LEMP stack on Ubuntu 18.04.


