

      How to Install Tinc and Set Up a Basic VPN on Ubuntu 18.04


      Introduction

      Tinc is an open-source Virtual Private Network (VPN) daemon with useful features like encryption, optional compression, and automatic mesh routing that can opportunistically route VPN traffic directly between servers. These features differentiate tinc from other VPN solutions, and make it a good choice for creating a VPN out of many small, geographically distributed networks.

      In this tutorial, we will go over how to use tinc to create a secure VPN on which your servers can communicate as if they were on a local network. We will also demonstrate how to use tinc to set up a secure tunnel into a private network. We will be using Ubuntu 18.04 servers, but the configurations can be adapted for use with any other OS.

      Goals

      In order to cover multiple use cases, this tutorial outlines how to connect one client node to the VPN over a private network interface and another over a public one. You can, however, adapt this setup to suit your own needs. You’ll just need to plan out how you want your servers to access each other and adapt the examples presented in this tutorial to your own needs. If you are adapting this to your own setup, be sure to substitute the highlighted values in the examples with your own values. It may be in your interest, though, to first follow the tutorial as it’s written to make sure you understand the components and processes involved before modifying these instructions.

      To help keep things clear, this tutorial will refer to the servers like this:

      • server-01: All of the VPN nodes will connect to this machine, and the connection must be maintained for proper VPN functionality. Additional servers can be configured in the same way as this one to provide redundancy, if desired
      • client-01: Connects to the server-01 VPN node using its private network interface
      • client-02: Connects to the server-01 VPN node over the public network interface

      Note: Tinc itself doesn’t differentiate between servers (machines that host and deliver VPN services) and clients (the machines that connect to and use the secure private network), but it can be helpful to understand and visualize how tinc works by thinking of your servers like this.

      Here is a diagram of the VPN that we want to set up:

      Tinc VPN Setup

      The blue box represents our VPN and the pink represents the underlying private network. All three servers can communicate on the VPN, even though the private network is otherwise inaccessible to client-02.

      Prerequisites

      If you would like to follow this tutorial exactly, provision two Ubuntu 18.04 servers (server-01 and client-01) in the same datacenter and enable private networking on each. Then, create another Ubuntu 18.04 server (client-02) in a separate datacenter. Each server should have an administrative user and a firewall configured with ufw. To set this up, follow our initial server setup guide for Ubuntu 18.04.

      Additionally, later on in this tutorial we’ll need to transfer a few files between each machine using scp. Because of this, you’ll need to generate SSH keys on each of your servers, add both client-01 and client-02’s SSH keys to server-01’s authorized_keys file, and then add server-01’s SSH key to both client-01 and client-02’s authorized_keys files. For help setting this up, see our guide on How to Set Up SSH Keys on Ubuntu 18.04.

      Step 1 — Installing Tinc

      Tinc is available from the default Ubuntu APT repositories, which means we can install it with just a few commands.

      If you’ve not done so recently, run the following command on each server to update their respective package indexes:

      All servers
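
      • sudo apt update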

      Then install tinc on each server by running the following command:

      All servers
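
      • sudo apt install tinc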

      With that, you’ve installed tinc on each of your servers. However, you’ll need to make some changes to tinc’s configuration on each machine in order to get your VPN up and running. Let’s begin with updating server-01.

      Step 2 — Configuring the Tinc Server

      Tinc requires that every machine that will be part of the VPN has the following three configuration components:

      • Tinc configuration files: There are three distinct files that configure the tinc daemon:
        • tinc.conf, which defines the netname, the network device over which the VPN will run, and other VPN options;
        • tinc-up, a script that activates the network device defined in tinc.conf after tinc is started;
        • tinc-down, which deactivates the network device whenever tinc stops.
      • Public/private key pairs: Tinc uses public/private key pairs to ensure that only users with valid keys are able to access the VPN.
      • Host configuration files: Each machine (or host) on the VPN has its own configuration file that holds the host’s actual IP address and the subnet where tinc will serve it

      Tinc uses a netname to distinguish one tinc VPN from another. This is helpful in cases where you want to set up multiple VPNs, but it’s recommended that you use a netname even if you are only planning on configuring one VPN. You can give your VPN whatever netname you like, but for simplicity we will call our VPN netname.

      On server-01, create the configuration directory structure for the VPN:

      server-01

      • sudo mkdir -p /etc/tinc/netname/hosts

      Use your preferred text editor to create a tinc.conf file. Here, we’ll use nano:

      server-01

      • sudo nano /etc/tinc/netname/tinc.conf

      Add the following lines to the empty file. These configure a tinc node named server_01 with a network interface called tun0 which will use IPv4:

      server-01:/etc/tinc/netname/tinc.conf

      Name = server_01
      AddressFamily = ipv4
      Interface = tun0
      

      Warning: Note how the value after the Name directive includes an underscore (_) rather than a hyphen (-). This is important, since tinc requires that the Name value contain only alphanumeric or underscore characters. If you use a hyphen here, you’ll encounter an error when you try to start the VPN later in this guide.

      Save and close the file after adding these lines. If you used nano, do so by pressing CTRL+X, Y, then ENTER.

      Next, create a host configuration file named server_01 in the hosts subdirectory. Ultimately, the client nodes will use this file to communicate with server-01:

      server-01

      • sudo nano /etc/tinc/netname/hosts/server_01

      Again, note that the name of this file contains an underscore rather than a hyphen. This way, it aligns with the Name directive in the tinc.conf file, which will allow tinc to automatically append the server’s public RSA key to this file when we generate it later on.

      Add the following lines to the file, making sure to include server-01’s public IP address:

      server-01:/etc/tinc/netname/hosts/server_01

      Address = server-01_public_IP_address
      Subnet = 10.0.0.1/32
      

      The Address field specifies how other nodes will connect to this server, and Subnet specifies which subnet this daemon will serve. Save and close the file.

      Next, generate a pair of public and private RSA keys for this host with the following command:

      server-01

      • sudo tincd -n netname -K4096

      After running this command, you’ll be prompted to enter filenames where tinc will save the public and private RSA keys:

      Output

      . . .
      Please enter a file to save private RSA key to [/etc/tinc/netname/rsa_key.priv]:
      Please enter a file to save public RSA key to [/etc/tinc/netname/hosts/server_01]:

      Press ENTER to accept the default locations at each prompt; doing so will tell tinc to store the private key in a file named rsa_key.priv and append the public key to the server_01 host configuration file.

      Next, create tinc-up, the script that will run whenever the netname VPN is started:

      server-01

      • sudo nano /etc/tinc/netname/tinc-up

      Add the following lines:

      server-01:/etc/tinc/netname/tinc-up

      #!/bin/sh
      ip link set $INTERFACE up
      ip addr add 10.0.0.1/32 dev $INTERFACE
      ip route add 10.0.0.0/24 dev $INTERFACE
      

      Here’s what each of these lines does:

      • ip link …: sets the status of tinc’s virtual network interface as up
      • ip addr …: adds the IP address 10.0.0.1 with a netmask of 32 to tinc’s virtual network interface, which will cause the other machines on the VPN to see server-01’s IP address as 10.0.0.1
      • ip route …: adds a route (10.0.0.0/24) which can be reached on tinc’s virtual network interface

      Save and close the file after adding these lines.

      Next, create a script to remove the virtual network interface when your VPN is stopped:

      server-01

      • sudo nano /etc/tinc/netname/tinc-down

      Add the following lines:

      server-01:/etc/tinc/netname/tinc-down

      #!/bin/sh
      ip route del 10.0.0.0/24 dev $INTERFACE
      ip addr del 10.0.0.1/32 dev $INTERFACE
      ip link set $INTERFACE down
      

      These lines have the opposite effect of those in the tinc-up script:

      • ip route …: deletes the 10.0.0.0/24 route
      • ip addr …: deletes the IP address 10.0.0.1 from tinc’s virtual network interface
      • ip link …: sets the status of tinc’s virtual network interface as down

      Save and close the file, then make both of these new network scripts executable:

      server-01

      • sudo chmod 755 /etc/tinc/netname/tinc-*

      As a final step of configuring server-01, add a firewall rule that will allow traffic through port 655, tinc’s default port:

      server-01
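
      • sudo ufw allow 655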

      server-01 is now fully configured and you can move on to setting up your client nodes.

      Step 3 — Configuring the Client Nodes

      Both of your client machines will require a slightly different configuration than the server, although the process will generally be quite similar.

      Because of the setup we’re aiming for in this guide, we will configure client-01 and client-02 almost identically, with only a few slight differences between them. Hence, many of the commands given in this step must be run on both machines. Note, though, that if client-01 or client-02 requires a specific command or special configuration, those instructions will be shown in a command block labeled with that machine’s name.

      On both client-01 and client-02, replicate the directory structure you created on server-01:

      client-01 & client-02

      • sudo mkdir -p /etc/tinc/netname/hosts

      Then create a tinc.conf file:

      client-01 & client-02

      • sudo nano /etc/tinc/netname/tinc.conf

      Add the following lines to the file on both machines:

      client-01 & client-02 /etc/tinc/netname/tinc.conf

      Name = node_name
      AddressFamily = ipv4
      Interface = tun0
      ConnectTo = server_01
      

      Be sure to substitute node_name with the respective client node’s name. Again, make sure this name uses an underscore (_) rather than a hyphen.

      Note that this file contains a ConnectTo directive pointing to server_01, while server-01’s tinc.conf file didn’t include this directive. Because server-01 doesn’t have a ConnectTo statement, it will only listen for incoming connections. This works for our setup since it won’t initiate connections to any other machines.

      Save and close the file.

      Next, create a host configuration file on each client node. Again, make sure the file name is spelled with an underscore instead of a hyphen:

      client-01 & client-02

      • sudo nano /etc/tinc/netname/hosts/node_name

      For client-01, add this line:

      client-01:/etc/tinc/netname/hosts/client_01

      Subnet = 10.0.0.2/32
      

      For client-02, add this line:

      client-02:/etc/tinc/netname/hosts/client_02

      Subnet = 10.0.0.3/32
      

      Note that each client has a different subnet that tinc will serve. Save and close the file.

      Next, generate the keypairs on each client machine:

      client-01 & client-02

      • sudo tincd -n netname -K4096

      As you did with server-01, press ENTER when prompted to select files to store the RSA keys, thereby accepting the default choices.

      Following that, create the network interface start script on each client:

      client-01 & client-02

      • sudo nano /etc/tinc/netname/tinc-up

      For client-01, add these lines:

      client-01:/etc/tinc/netname/tinc-up

      #!/bin/sh
      ip link set $INTERFACE up
      ip addr add 10.0.0.2/32 dev $INTERFACE
      ip route add 10.0.0.0/24 dev $INTERFACE
      

      For client-02, add the following:

      client-02:/etc/tinc/netname/tinc-up

      #!/bin/sh
      ip link set $INTERFACE up
      ip addr add 10.0.0.3/32 dev $INTERFACE
      ip route add 10.0.0.0/24 dev $INTERFACE
      

      Save and close each file.

      Next, create the network interface stop script on each client:

      client-01 & client-02

      • sudo nano /etc/tinc/netname/tinc-down

      On client-01, add the following content to the empty file:

      client-01:/etc/tinc/netname/tinc-down

      #!/bin/sh
      ip route del 10.0.0.0/24 dev $INTERFACE
      ip addr del 10.0.0.2/32 dev $INTERFACE
      ip link set $INTERFACE down
      

      On client-02, add the following:

      client-02:/etc/tinc/netname/tinc-down

      #!/bin/sh
      ip route del 10.0.0.0/24 dev $INTERFACE
      ip addr del 10.0.0.3/32 dev $INTERFACE
      ip link set $INTERFACE down
      

      Save and close the files.

      Make the networking scripts executable by running the following command on each client machine:

      client-01 & client-02

      • sudo chmod 755 /etc/tinc/netname/tinc-*

      Lastly, open up port 655 on each client:

      client-01 & client-02
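
      • sudo ufw allow 655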

      At this point, the client nodes are almost, although not quite, set up. They still need the public key that we created on server-01 in the previous step in order to authenticate the connection to the VPN.

      Step 4 — Distributing the Keys

      Each node that wants to communicate directly with another node must have exchanged public keys, which are inside of the host configuration files. In our case, server-01 needs to exchange public keys with the other nodes.

      Exchange Keys Between server-01 and client-01

      On client-01, copy its host configuration file to server-01. Because both client-01 and server-01 are in the same data center and both have private networking enabled, you can use server-01’s private IP address here:

      client-01

      • scp /etc/tinc/netname/hosts/client_01 sammy@server-01_private_IP:/tmp

      Then on server-01, copy the client-01 host configuration file into the /etc/tinc/netname/hosts/ directory:

      server-01

      • sudo cp /tmp/client_01 /etc/tinc/netname/hosts/

      Then, while still on server-01, copy its host configuration file to client-01:

      server-01

      • scp /etc/tinc/netname/hosts/server_01 sammy@client-01_private_IP:/tmp

      On client-01, copy server-01’s file to the appropriate location:

      client-01

      • sudo cp /tmp/server_01 /etc/tinc/netname/hosts/

      On client-01, edit server-01’s host configuration file so the Address field is set to server-01’s private IP address. This way, client-01 will connect to the VPN via the private network:

      client-01

      • sudo nano /etc/tinc/netname/hosts/server_01

      Change the Address directive to point to server-01’s private IP address:

      client-01:/etc/tinc/netname/hosts/server_01

      Address = server-01_private_IP
      Subnet = 10.0.0.1/32
      

      Save and quit. Now let’s move on to our remaining node, client-02.

      Exchange Keys Between server-01 and client-02

      On client-02, copy its host configuration file to server-01:

      client-02

      • scp /etc/tinc/netname/hosts/client_02 sammy@server-01_public_IP:/tmp

      Then on server-01, copy the client_02 host configuration file into the appropriate location:

      server-01

      • sudo cp /tmp/client_02 /etc/tinc/netname/hosts/

      Then copy server-01’s host configuration file to client-02:

      server-01

      • scp /etc/tinc/netname/hosts/server_01 sammy@client-02_public_IP:/tmp

      On client-02, copy server-01’s file to the appropriate location:

      client-02

      • sudo cp /tmp/server_01 /etc/tinc/netname/hosts/

      Assuming you’re only setting up two client nodes, you’re finished distributing public keys. If, however, you’re creating a larger VPN, now is a good time to exchange the keys between those other nodes. Remember that if you want two nodes to directly communicate with each other (without a forwarding server between), they need to have exchanged their keys/hosts configuration files, and they need to be able to access each other’s real network interfaces. Also, it is fine to just copy each host’s configuration file to every node in the VPN.
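
      For example, if you also wanted client-01 and client-02 to communicate directly rather than routing through server-01, you could exchange their host configuration files in the same way. The following is a sketch, assuming the sammy user from the prerequisites and that each client can reach the other’s public IP address:

      client-01

      • scp /etc/tinc/netname/hosts/client_01 sammy@client-02_public_IP:/tmp

      client-02

      • sudo cp /tmp/client_01 /etc/tinc/netname/hosts/
      • scp /etc/tinc/netname/hosts/client_02 sammy@client-01_public_IP:/tmp

      client-01

      • sudo cp /tmp/client_02 /etc/tinc/netname/hosts/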

      Step 5 — Testing the Configuration

      On each node, starting with server-01, start tinc with the following command:

      All servers

      • sudo tincd -n netname -D -d3

      This command includes the -n flag, which points to the netname for our VPN, netname. This is useful if you have more than one VPN set up and you need to specify which one you want to start. It also includes the -D flag, which prevents tinc from forking and detaching, as well as disables tinc’s automatic restart mechanism. Lastly, it includes the -d flag, which tells tinc to run in debug mode, with a debug level of 3.

      Note: When it comes to the tinc daemon, a debug level of 3 will show every request exchanged between any two of the servers, including authentication requests, key exchanges, and connection list updates. Higher debug levels show more information regarding network traffic, but for now we’re only concerned with whether the nodes can communicate with one another, so a level of 3 will suffice. In a production scenario, though, you would want to change to a lower debug level so as not to fill disks with log files.

      You can learn more about tinc’s debug levels by reviewing the official documentation.

      After starting the daemon on each node, you should see output with the names of each node as they connect to server-01. Now let’s test the connection over the VPN.

      In a separate window, on client-02, ping client-01’s VPN IP address. We assigned this to be 10.0.0.2 earlier:

      client-02
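
      • ping 10.0.0.2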

      The ping should work correctly, and you should see some debug output in the other windows about the connection on the VPN. This indicates that client-02 is able to communicate over the VPN through server-01 to client-01. Press CTRL+C to quit pinging.

      You may also use the VPN interfaces to do any other network communication, like application connections, copying files, and SSH.
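
      For example, you could open an SSH session from client-02 to client-01 over the VPN. This is a quick sketch, assuming client-01 accepts SSH logins for the sammy user:

      client-02

      • ssh sammy@10.0.0.2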

      On each tinc daemon debug window, quit the daemon by pressing CTRL+\.

      Step 6 — Configuring Tinc To Start Up on Boot

      Ubuntu servers use systemd as the default system manager to control starting and running processes. Because of this, we can enable the netname VPN to start up automatically at boot with a single systemctl command.

      Run the following command on each node to set the tinc VPN to start up whenever the machines boot:

      All servers

      • sudo systemctl enable tinc@netname

      Tinc is configured to start at boot on each of your machines and you can control it with the systemctl command. If you would like to start it now, run the following command on each of your nodes:

      All servers

      • sudo systemctl start tinc@netname

      Note: If you have multiple VPNs, you can enable or start each of them at once, like this:

      All servers

      • sudo systemctl start tinc@netname_01 tinc@netname_02 … tinc@netname_n

      With that, your tinc VPN is fully configured and running on each of your nodes.

      Conclusion

      Now that you have gone through this tutorial, you should have a good foundation to build out your VPN to meet your needs. Tinc is very flexible, and any node can be configured to connect to any other node (that it can access over the network) so it can act as a mesh VPN without relying on one individual node.




      Containerizing a Laravel 6 Application for Development with Docker Compose on Ubuntu 18.04


      Introduction

      Containerizing an application refers to the process of adapting an application and its components so that it can run in lightweight environments known as containers. Such environments are isolated and disposable, and can be leveraged for developing, testing, and deploying applications to production.

      In this guide, we’ll use Docker Compose to containerize a Laravel 6 application for development. When you’re finished, you’ll have a demo Laravel application running on three separate service containers:

      • An app service running PHP 7.4-FPM;
      • A db service running MySQL 5.7;
      • An nginx service that uses the app service to parse PHP code before serving the Laravel application to the final user.

      To allow for a streamlined development process and facilitate application debugging, we’ll keep application files in sync by using shared volumes. We’ll also see how to use docker-compose exec commands to run Composer and Artisan on the app container.

      Prerequisites

      Step 1 — Obtaining the Demo Application

      To get started, we’ll fetch the demo Laravel application from its GitHub repository. We’re interested in the tutorial-01 branch, which contains the basic Laravel application we created in the first guide of this series.

      To obtain the application code that is compatible with this tutorial, download release tutorial-1.0.1 to your home directory with:

      • cd ~
      • curl -L https://github.com/do-community/travellist-laravel-demo/archive/tutorial-1.0.1.zip -o travellist.zip

      We’ll need the unzip command to unpack the application code. In case you haven’t installed this package before, do so now with:

      • sudo apt update
      • sudo apt install unzip

      Now, unzip the contents of the application and rename the unpacked directory for easier access:

      • unzip travellist.zip
      • mv travellist-laravel-demo-tutorial-1.0.1 travellist-demo

      Navigate to the travellist-demo directory:

      In the next step, we’ll create a .env configuration file to set up the application.

      Step 2 — Setting Up the Application’s .env File

      The Laravel configuration files are located in a directory called config, inside the application’s root directory. Additionally, a .env file is used to set up environment-dependent configuration, such as credentials and any information that might vary between deploys. This file is not included in revision control.

      Warning: The environment configuration file contains sensitive information about your server, including database credentials and security keys. For that reason, you should never share this file publicly.

      The values contained in the .env file will take precedence over the values set in the regular configuration files located in the config directory. Each installation on a new environment requires a tailored environment file to define things such as database connection settings, debug options, and the application URL, among other items that may vary depending on which environment the application is running in.

      We’ll now create a new .env file to customize the configuration options for the development environment we’re setting up. Laravel comes with a .env.example file that we can copy to create our own:
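
      • cp .env.example .env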

      Open this file using nano or your text editor of choice:
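
      • nano .env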

      The current .env file from the travellist demo application contains settings to use a local MySQL database, with 127.0.0.1 as database host. We need to update the DB_HOST variable so that it points to the database service we will create in our Docker environment. In this guide, we’ll call our database service db. Go ahead and replace the listed value of DB_HOST with the database service name:

      .env

      APP_NAME=Travellist
      APP_ENV=dev
      APP_KEY=
      APP_DEBUG=true
      APP_URL=http://localhost:8000
      
      LOG_CHANNEL=stack
      
      DB_CONNECTION=mysql
      DB_HOST=db
      DB_PORT=3306
      DB_DATABASE=travellist
      DB_USERNAME=travellist_user
      DB_PASSWORD=password
      ...
      

      Feel free to also change the database name, username, and password, if you wish. These variables will be leveraged in a later step where we’ll set up the docker-compose.yml file to configure our services.

      Save the file when you’re done editing. If you used nano, you can do that by pressing Ctrl+x, then Y and Enter to confirm.

      Step 3 — Setting Up the Application’s Dockerfile

      Although both our MySQL and Nginx services will be based on default images obtained from the Docker Hub, we still need to build a custom image for the application container. We’ll create a new Dockerfile for that.

      Our travellist image will be based on the php:7.4-fpm official PHP image from Docker Hub. On top of that basic PHP-FPM environment, we’ll install a few extra PHP modules and the Composer dependency management tool.

      We’ll also create a new system user; this is necessary to execute artisan and composer commands while developing the application. The uid setting ensures that the user inside the container has the same uid as your system user on your host machine, where you’re running Docker. This way, any files created by these commands are replicated in the host with the correct permissions. This also means that you’ll be able to use your code editor of choice in the host machine to develop the application that is running inside containers.
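
      If you’re not sure which uid your host user has, you can check it with the id command. In this guide we’ll assume a value of 1000, which is the default for the first user created on an Ubuntu system:

      • id -u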

      Create a new Dockerfile with:
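
      • nano Dockerfile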

      Copy the following contents to your Dockerfile:

      Dockerfile

      FROM php:7.4-fpm
      
      # Arguments defined in docker-compose.yml
      ARG user
      ARG uid
      
      # Install system dependencies
      RUN apt-get update && apt-get install -y \
          git \
          curl \
          libpng-dev \
          libonig-dev \
          libxml2-dev \
          zip \
          unzip
      
      # Clear cache
      RUN apt-get clean && rm -rf /var/lib/apt/lists/*
      
      # Install PHP extensions
      RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
      
      # Get latest Composer
      COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
      
      # Create system user to run Composer and Artisan Commands
      RUN useradd -G www-data,root -u $uid -d /home/$user $user
      RUN mkdir -p /home/$user/.composer && \
          chown -R $user:$user /home/$user
      
      # Set working directory
      WORKDIR /var/www
      
      USER $user
      
      

      Don’t forget to save the file when you’re done.

      Our Dockerfile starts by defining the base image we’re using: php:7.4-fpm.

      After installing system packages and PHP extensions, we install Composer by copying the composer executable from its latest official image to our own application image.

      A new system user is then created and set up using the user and uid arguments that were declared at the beginning of the Dockerfile. These values will be injected by Docker Compose at build time.

      Finally, we set the default working dir as /var/www and change to the newly created user. This will make sure you’re connecting as a regular user, and that you’re on the right directory, when running composer and artisan commands on the application container.

      Step 4 — Setting Up Nginx Configuration and Database Dump Files

      When creating development environments with Docker Compose, it is often necessary to share configuration or initialization files with service containers, in order to set up or bootstrap those services. This practice facilitates making changes to configuration files to fine-tune your environment while you’re developing the application.

      We’ll now set up a folder with files that will be used to configure and initialize our service containers.

      To set up Nginx, we’ll share a travellist.conf file that will configure how the application is served. Create the docker-compose/nginx folder with:

      • mkdir -p docker-compose/nginx

      Open a new file named travellist.conf within that directory:

      • nano docker-compose/nginx/travellist.conf

      Copy the following Nginx configuration to that file:

      docker-compose/nginx/travellist.conf

      
      server {
          listen 80;
          index index.php index.html;
          error_log  /var/log/nginx/error.log;
          access_log /var/log/nginx/access.log;
          root /var/www/public;
          location ~ \.php$ {
              try_files $uri =404;
              fastcgi_split_path_info ^(.+\.php)(/.+)$;
              fastcgi_pass app:9000;
              fastcgi_index index.php;
              include fastcgi_params;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_param PATH_INFO $fastcgi_path_info;
          }
          location / {
              try_files $uri $uri/ /index.php?$query_string;
              gzip_static on;
          }
      }
      

      This file will configure Nginx to listen on port 80 and use index.php as default index page. It will set the document root to /var/www/public, and then configure Nginx to use the app service on port 9000 to process *.php files.

      Save and close the file when you’re done editing.

      To set up the MySQL database, we’ll share a database dump that will be imported when the container is initialized. This is a feature provided by the MySQL 5.7 image we’ll be using on that container.

      Create a new folder for your MySQL initialization files inside the docker-compose folder:

      • mkdir docker-compose/mysql

      Open a new .sql file:

      • nano docker-compose/mysql/init_db.sql

      The following MySQL dump is based on the database we’ve set up in our Laravel on LEMP guide. It will create a new table named places. Then, it will populate the table with a set of sample places.

      Add the following code to the file:

      docker-compose/mysql/init_db.sql

      DROP TABLE IF EXISTS `places`;
      
      CREATE TABLE `places` (
        `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
        `name` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
        `visited` tinyint(1) NOT NULL DEFAULT '0',
        PRIMARY KEY (`id`)
      ) ENGINE=InnoDB AUTO_INCREMENT=12 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
      
      INSERT INTO `places` (name, visited) VALUES ('Berlin',0),('Budapest',0),('Cincinnati',1),('Denver',0),('Helsinki',0),('Lisbon',0),('Moscow',1),('Nairobi',0),('Oslo',1),('Rio',0),('Tokyo',0);
      

      The places table contains three fields: id, name, and visited. The visited field is a flag used to mark whether a place has already been visited. Feel free to change the sample places or include new ones. Save and close the file when you’re done.

      We’ve finished setting up the application’s Dockerfile and the service configuration files. Next, we’ll set up Docker Compose to use these files when creating our services.

      Step 5 — Creating a Multi-Container Environment with Docker Compose

      Docker Compose enables you to create multi-container environments for applications running on Docker. It uses service definitions to build fully customizable environments with multiple containers that can share networks and data volumes. This allows for a seamless integration between application components.

      To set up our service definitions, we’ll create a new file called docker-compose.yml. Typically, this file is located at the root of the application folder, and it defines your containerized environment, including the base images you will use to build your containers, and how your services will interact.

      We’ll define three different services in our docker-compose.yml file: app, db, and nginx.

      The app service will build an image called travellist, based on the Dockerfile we’ve previously created. The container defined by this service will run a php-fpm server to parse PHP code and send the results back to the nginx service, which will be running on a separate container. The db service defines a container running a MySQL 5.7 server. Our services will share a bridge network named travellist.

      The application files will be synchronized on both the app and the nginx services via bind mounts. Bind mounts are useful in development environments because they allow for a performant two-way sync between host machine and containers.

      Create a new docker-compose.yml file at the root of the application folder:
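
      • nano docker-compose.yml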

      A typical docker-compose.yml file starts with a version definition, followed by a services node, under which all services are defined. Shared networks are usually defined at the bottom of that file.

      To get started, copy this boilerplate code into your docker-compose.yml file:

      docker-compose.yml

      version: "3.7"
      services:
      
      
      networks:
        travellist:
          driver: bridge
      

      We’ll now edit the services node to include the app, db and nginx services.

      The app Service

      The app service will set up a container named travellist-app. It builds a new Docker image based on a Dockerfile located in the same path as the docker-compose.yml file. The new image will be saved locally under the name travellist.

      Even though the document root served to end users is located in the nginx container, we need the application files somewhere inside the app container as well so that we’re able to execute command line tasks with the Laravel Artisan tool.

      Copy the following service definition under your services node, inside the docker-compose.yml file:

      docker-compose.yml

        app:
          build:
            args:
              user: sammy
              uid: 1000
            context: ./
            dockerfile: Dockerfile
          image: travellist
          container_name: travellist-app
          restart: unless-stopped
          working_dir: /var/www/
          volumes:
            - ./:/var/www
          networks:
            - travellist
      

      These settings do the following:

      • build: This configuration tells Docker Compose to build a local image for the app service, using the specified path (context) and Dockerfile for instructions. The arguments user and uid are injected into the Dockerfile to customize user creation commands at build time.
      • image: The name that will be used for the image being built.
      • container_name: Sets up the container name for this service.
      • restart: Always restart, unless the service is stopped.
      • working_dir: Sets the default directory for this service as /var/www.
      • volumes: Creates a shared volume that will synchronize contents from the current directory to /var/www inside the container. Notice that this is not your document root, since that will live in the nginx container.
      • networks: Sets up this service to use a network named travellist.

      The db Service

      The db service uses a pre-built MySQL 5.7 image from Docker Hub. Because Docker Compose automatically loads .env variable files located in the same directory as the docker-compose.yml file, we can obtain our database settings from the Laravel .env file we created in a previous step.

      Include the following service definition in your services node, right after the app service:

      docker-compose.yml

        db:
          image: mysql:5.7
          container_name: travellist-db
          restart: unless-stopped
          environment:
            MYSQL_DATABASE: ${DB_DATABASE}
            MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
            MYSQL_PASSWORD: ${DB_PASSWORD}
            MYSQL_USER: ${DB_USERNAME}
            SERVICE_TAGS: dev
            SERVICE_NAME: mysql
          volumes:
            - ./docker-compose/mysql:/docker-entrypoint-initdb.d
          networks:
            - travellist
      

      These settings do the following:

      • image: Defines the Docker image that should be used for this container. In this case, we’re using a MySQL 5.7 image from Docker Hub.
      • container_name: Sets up the container name for this service: travellist-db.
      • restart: Always restart this service, unless it is explicitly stopped.
      • environment: Defines environment variables in the new container. We’re using values obtained from the Laravel .env file to set up our MySQL service, which will automatically create a new database and user based on the provided environment variables.
      • volumes: Creates a volume to share a .sql database dump that will be used to initialize the application database. The MySQL image will automatically import .sql files placed in the /docker-entrypoint-initdb.d directory inside the container.
      • networks: Sets up this service to use a network named travellist.

      The nginx Service

      The nginx service uses a pre-built Nginx image on top of Alpine, a lightweight Linux distribution. It creates a container named travellist-nginx, and it uses the ports definition to create a redirection from port 8000 on the host system to port 80 inside the container.

      Include the following service definition in your services node, right after the db service:

      docker-compose.yml

        nginx:
          image: nginx:1.17-alpine
          container_name: travellist-nginx
          restart: unless-stopped
          ports:
            - 8000:80
          volumes:
            - ./:/var/www
            - ./docker-compose/nginx:/etc/nginx/conf.d
          networks:
            - travellist
      

      These settings do the following:

      • image: Defines the Docker image that should be used for this container. In this case, we’re using the Alpine Nginx 1.17 image.
      • container_name: Sets up the container name for this service: travellist-nginx.
      • restart: Always restart this service, unless it is explicitly stopped.
      • ports: Sets up a port redirection that will allow external access via port 8000 to the web server running on port 80 inside the container.
      • volumes: Creates two shared volumes. The first one will synchronize contents from the current directory to /var/www inside the container. This way, when you make local changes to the application files, they will be quickly reflected in the application being served by Nginx inside the container. The second volume will make sure our Nginx configuration file, located at docker-compose/nginx/travellist.conf, is copied to the container’s Nginx configuration folder.
      • networks: Sets up this service to use a network named travellist.

      Finished docker-compose.yml File

      This is how our finished docker-compose.yml file looks:

      docker-compose.yml

      version: "3.7"
      services:
        app:
          build:
            args:
              user: sammy
              uid: 1000
            context: ./
            dockerfile: Dockerfile
          image: travellist
          container_name: travellist-app
          restart: unless-stopped
          working_dir: /var/www/
          volumes:
            - ./:/var/www
          networks:
            - travellist
      
        db:
          image: mysql:5.7
          container_name: travellist-db
          restart: unless-stopped
          environment:
            MYSQL_DATABASE: ${DB_DATABASE}
            MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
            MYSQL_PASSWORD: ${DB_PASSWORD}
            MYSQL_USER: ${DB_USERNAME}
            SERVICE_TAGS: dev
            SERVICE_NAME: mysql
          volumes:
            - ./docker-compose/mysql:/docker-entrypoint-initdb.d
          networks:
            - travellist
      
        nginx:
          image: nginx:1.17-alpine
          container_name: travellist-nginx
          restart: unless-stopped
          ports:
            - 8000:80
          volumes:
            - ./:/var/www
            - ./docker-compose/nginx:/etc/nginx/conf.d/
          networks:
            - travellist
      
      networks:
        travellist:
          driver: bridge
      

      Make sure you save the file when you’re done.

      Step 6 — Running the Application with Docker Compose

      We’ll now use docker-compose commands to build the application image and run the services we specified in our setup.

      Build the app image with the following command:
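
      • docker-compose build app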

      This command might take a few minutes to complete. You’ll see output similar to this:

      Output

      Building app
      Step 1/11 : FROM php:7.4-fpm
       ---> fa37bd6db22a
      Step 2/11 : ARG user
       ---> Running in f71eb33b7459
      Removing intermediate container f71eb33b7459
       ---> 533c30216f34
      Step 3/11 : ARG uid
       ---> Running in 60d2d2a84cda
      Removing intermediate container 60d2d2a84cda
       ---> 497fbf904605
      Step 4/11 : RUN apt-get update && apt-get install -y git curl libpng-dev libonig-dev ...
      Step 7/11 : COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
       ---> e499f74896e3
      Step 8/11 : RUN useradd -G www-data,root -u $uid -d /home/$user $user
       ---> Running in 232ef9c7dbd1
      Removing intermediate container 232ef9c7dbd1
       ---> 870fa3220ffa
      Step 9/11 : RUN mkdir -p /home/$user/.composer &&     chown -R $user:$user /home/$user
       ---> Running in 7ca8c0cb7f09
      Removing intermediate container 7ca8c0cb7f09
       ---> 3d2ef9519a8e
      Step 10/11 : WORKDIR /var/www
       ---> Running in 4a964f91edfa
      Removing intermediate container 4a964f91edfa
       ---> 00ada639da21
      Step 11/11 : USER $user
       ---> Running in 9f8e874fede9
      Removing intermediate container 9f8e874fede9
       ---> fe176ff4702b
      Successfully built fe176ff4702b
      Successfully tagged travellist:latest

      When the build is finished, you can run the environment in background mode with:
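
      • docker-compose up -d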

      Output

      Creating travellist-db    ... done
      Creating travellist-app   ... done
      Creating travellist-nginx ... done

      This will run your containers in the background. To show information about the state of your active services, run:
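
      • docker-compose ps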

      You’ll see output like this:

      Output

      Name               Command                          State   Ports
      -------------------------------------------------------------------------------
      travellist-app     docker-php-entrypoint php-fpm    Up      9000/tcp
      travellist-db      docker-entrypoint.sh mysqld      Up      3306/tcp, 33060/tcp
      travellist-nginx   nginx -g daemon off;             Up      0.0.0.0:8000->80/tcp

      Your environment is now up and running, but we still need to execute a couple commands to finish setting up the application. You can use the docker-compose exec command to execute commands in the service containers, such as an ls -l to show detailed information about files in the application directory:

      • docker-compose exec app ls -l

      Output

      total 256
      -rw-rw-r-- 1 sammy 1001    738 Jan 15 16:46 Dockerfile
      -rw-rw-r-- 1 sammy 1001    101 Jan  7 08:05 README.md
      drwxrwxr-x 6 sammy 1001   4096 Jan  7 08:05 app
      -rwxr-xr-x 1 sammy 1001   1686 Jan  7 08:05 artisan
      drwxrwxr-x 3 sammy 1001   4096 Jan  7 08:05 bootstrap
      -rw-rw-r-- 1 sammy 1001   1501 Jan  7 08:05 composer.json
      -rw-rw-r-- 1 sammy 1001 179071 Jan  7 08:05 composer.lock
      drwxrwxr-x 2 sammy 1001   4096 Jan  7 08:05 config
      drwxrwxr-x 5 sammy 1001   4096 Jan  7 08:05 database
      drwxrwxr-x 4 sammy 1001   4096 Jan 15 16:46 docker-compose
      -rw-rw-r-- 1 sammy 1001   1015 Jan 15 16:45 docker-compose.yml
      -rw-rw-r-- 1 sammy 1001   1013 Jan  7 08:05 package.json
      -rw-rw-r-- 1 sammy 1001   1405 Jan  7 08:05 phpunit.xml
      drwxrwxr-x 2 sammy 1001   4096 Jan  7 08:05 public
      -rw-rw-r-- 1 sammy 1001    273 Jan  7 08:05 readme.md
      drwxrwxr-x 6 sammy 1001   4096 Jan  7 08:05 resources
      drwxrwxr-x 2 sammy 1001   4096 Jan  7 08:05 routes
      -rw-rw-r-- 1 sammy 1001    563 Jan  7 08:05 server.php
      drwxrwxr-x 5 sammy 1001   4096 Jan  7 08:05 storage
      drwxrwxr-x 4 sammy 1001   4096 Jan  7 08:05 tests
      -rw-rw-r-- 1 sammy 1001    538 Jan  7 08:05 webpack.mix.js

      We’ll now run composer install to install the application dependencies:

      • docker-compose exec app composer install

      You’ll see output like this:

      Output

      Loading composer repositories with package information
      Installing dependencies (including require-dev) from lock file
      Package operations: 85 installs, 0 updates, 0 removals
        - Installing doctrine/inflector (1.3.1): Downloading (100%)
        - Installing doctrine/lexer (1.2.0): Downloading (100%)
        - Installing dragonmantank/cron-expression (v2.3.0): Downloading (100%)
        - Installing erusev/parsedown (1.7.4): Downloading (100%)
        - Installing symfony/polyfill-ctype (v1.13.1): Downloading (100%)
        - Installing phpoption/phpoption (1.7.2): Downloading (100%)
        - Installing vlucas/phpdotenv (v3.6.0): Downloading (100%)
        - Installing symfony/css-selector (v5.0.2): Downloading (100%)
      …
      Generating optimized autoload files
      > Illuminate\Foundation\ComposerScripts::postAutoloadDump
      > @php artisan package:discover --ansi
      Discovered Package: facade/ignition
      Discovered Package: fideloper/proxy
      Discovered Package: laravel/tinker
      Discovered Package: nesbot/carbon
      Discovered Package: nunomaduro/collision
      Package manifest generated successfully.

      The last thing we need to do before testing the application is to generate a unique application key with the artisan Laravel command-line tool. This key is used to encrypt user sessions and other sensitive data:

      • docker-compose exec app php artisan key:generate

      Output

      Application key set successfully.

      Now go to your browser and access your server’s domain name or IP address on port 8000:

      http://server_domain_or_IP:8000
      

      You’ll see a page like this:

      Demo Laravel Application

      You can use the logs command to check the logs generated by your services:

      • docker-compose logs nginx
      Attaching to travellist-nginx
      travellist-nginx | 192.168.160.1 - - [23/Jan/2020:13:57:25 +0000] "GET / HTTP/1.1" 200 626 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36"
      travellist-nginx | 192.168.160.1 - - [23/Jan/2020:13:57:26 +0000] "GET /favicon.ico HTTP/1.1" 200 0 "http://localhost:8000/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36"
      travellist-nginx | 192.168.160.1 - - [23/Jan/2020:13:57:42 +0000] "GET / HTTP/1.1" 200 626 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36"
      …
      

      If you want to pause your Docker Compose environment while keeping the state of all its services, run:
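
      • docker-compose pause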

      Output

      Pausing travellist-db    ... done
      Pausing travellist-nginx ... done
      Pausing travellist-app   ... done

      You can then resume your services with:
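
      • docker-compose unpause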

      Output

      Unpausing travellist-app   ... done
      Unpausing travellist-nginx ... done
      Unpausing travellist-db    ... done

      To shut down your Docker Compose environment and remove all of its containers, networks, and volumes, run:
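
      • docker-compose down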

      Output

      Stopping travellist-nginx ... done
      Stopping travellist-db    ... done
      Stopping travellist-app   ... done
      Removing travellist-nginx ... done
      Removing travellist-db    ... done
      Removing travellist-app   ... done
      Removing network travellist-laravel-demo_travellist

      For an overview of all Docker Compose commands, please check the Docker Compose command-line reference.

      Conclusion

      In this guide, we’ve set up a Docker environment with three containers using Docker Compose to define our infrastructure in a YAML file.

      From this point on, you can work on your Laravel application without needing to install and set up a local web server for development and testing. Moreover, you’ll be working with a disposable environment that can be easily replicated and distributed, which can be helpful while developing your application and also when moving towards a production environment.




      How To Create a Kubernetes Cluster Using Kubeadm on Ubuntu 16.04


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Kubernetes is a container orchestration system that manages containers at scale. Initially developed by Google based on its experience running containers in production, it is open source and actively developed by a community around the world.

      Note: This tutorial uses version 1.14 of Kubernetes, the official supported version at the time of this article’s publication. For up-to-date information on the latest version, please see the current release notes in the official Kubernetes documentation.

      Kubeadm automates the installation and configuration of Kubernetes components such as the API server, Controller Manager, and Kube DNS. It does not, however, create users or handle the installation of operating-system-level dependencies and their configuration. For these preliminary tasks, it is possible to use a configuration management tool like Ansible or SaltStack. Using these tools makes creating additional clusters or recreating existing clusters much simpler and less error prone.

      In this guide, you will set up a Kubernetes cluster from scratch using Ansible and Kubeadm, and then deploy a containerized Nginx application to it.

      Goals

      Your cluster will include the following physical resources:

      The master node (a node in Kubernetes refers to a server) is responsible for managing the state of the cluster. It runs Etcd, which stores cluster data among components that schedule workloads to worker nodes.

      Worker nodes are the servers where your workloads (i.e. containerized applications and services) will run. A worker will continue to run your workload once it’s assigned to it, even if the master goes down after scheduling is complete. A cluster’s capacity can be increased by adding workers.

      After completing this guide, you will have a cluster ready to run containerized applications, provided that the servers in the cluster have sufficient CPU and RAM resources for your applications to consume. Almost any traditional Unix application, including web applications, databases, daemons, and command line tools, can be containerized and made to run on the cluster. The cluster itself will consume around 300 to 500 MB of memory and 10% of CPU on each node.

      Once the cluster is set up, you will deploy the Nginx web server to it to ensure that it is running workloads correctly.

      Prerequisites

      Step 1: Setting Up the Workspace Directory and Ansible Inventory File

      In this section, you will create a directory on your local machine that will serve as your workspace. You will also configure Ansible locally so that it can communicate with and execute commands on your remote servers. To do this, you’ll create a hosts file containing inventory information such as the IP addresses of your servers and the groups that each server belongs to.

      Out of your three servers, one will be the master, with an IP displayed as master_ip. The other two servers will be workers and will have the IPs worker_1_ip and worker_2_ip.

      Create a directory named ~/kube-cluster in the home directory of your local machine and cd into it:

      • mkdir ~/kube-cluster
      • cd ~/kube-cluster

      This directory will be your workspace for the rest of the tutorial and will contain all of your Ansible playbooks. It will also be the directory inside which you will run all local commands.

      Create a file named ~/kube-cluster/hosts using nano or your favorite text editor:

      • nano ~/kube-cluster/hosts

      Add the following text to the file, which will specify information about the logical structure of your cluster:

      ~/kube-cluster/hosts

      [masters]
      master ansible_host=master_ip ansible_user=root
      
      [workers]
      worker1 ansible_host=worker_1_ip ansible_user=root
      worker2 ansible_host=worker_2_ip ansible_user=root
      
      [all:vars]
      ansible_python_interpreter=/usr/bin/python3
      

      You may recall that inventory files in Ansible are used to specify server information such as IP addresses, remote users, and groupings of servers to target as a single unit for executing commands. ~/kube-cluster/hosts will be your inventory file, and you’ve added two Ansible groups (masters and workers) to it specifying the logical structure of your cluster.

      In the masters group, there is a server entry named “master” that lists the master node’s IP (master_ip) and specifies that Ansible should run remote commands as the root user.

      Similarly, in the workers group, there are two entries for the worker servers (worker_1_ip and worker_2_ip) that also specify the ansible_user as root.

      The last line of the file tells Ansible to use the remote servers’ Python 3 interpreters for its management operations.

      Save and close the file after you’ve added the text.
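
      If you’d like to confirm that Ansible can reach all three servers before continuing, you can optionally run Ansible’s ping module against the inventory. This sketch assumes your local machine already has SSH access to the servers as root:

      • ansible all -i hosts -m ping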

      Having set up the server inventory with groups, let’s move on to installing operating-system-level dependencies and creating configuration settings.

      Step 2: Creating a Non-Root User on All Remote Servers

      In this section you will create a non-root user with sudo privileges on all servers so that you can SSH into them manually as an unprivileged user. This can be useful if, for example, you would like to see system information with commands such as top/htop, view a list of running containers, or change configuration files owned by root. These operations are routinely performed during the maintenance of a cluster, and using a non-root user for such tasks minimizes the risk of modifying or deleting important files or unintentionally performing other dangerous operations.

      Create a file named ~/kube-cluster/initial.yml in the workspace:

      • nano ~/kube-cluster/initial.yml

      Next, add the following play to the file to create a non-root user with sudo privileges on all of the servers. A play in Ansible is a collection of steps to be performed that target specific servers and groups. The following play will create a non-root sudo user:

      ~/kube-cluster/initial.yml

      - hosts: all
        become: yes
        tasks:
          - name: create the 'ubuntu' user
            user: name=ubuntu append=yes state=present createhome=yes shell=/bin/bash
      
          - name: allow 'ubuntu' to have passwordless sudo
            lineinfile:
              dest: /etc/sudoers
              line: 'ubuntu ALL=(ALL) NOPASSWD: ALL'
              validate: 'visudo -cf %s'
      
          - name: set up authorized keys for the ubuntu user
            authorized_key: user=ubuntu key="{{item}}"
            with_file:
              - ~/.ssh/id_rsa.pub
      

      Here’s a breakdown of what this playbook does:

      • Creates the non-root user ubuntu.

      • Configures the sudoers file to allow the ubuntu user to run sudo commands without a password prompt.

      • Adds the public key of your local machine (usually ~/.ssh/id_rsa.pub) to the remote ubuntu user’s authorized key list. This will allow you to SSH into each server as the ubuntu user.

      Save and close the file after you’ve added the text.

      Next, execute the playbook by locally running:

      • ansible-playbook -i hosts ~/kube-cluster/initial.yml

      The command will finish running within two to five minutes. When it completes, you will see output similar to the following:

      Output

      PLAY [all] ****

      TASK [Gathering Facts] ****
      ok: [master]
      ok: [worker1]
      ok: [worker2]

      TASK [create the 'ubuntu' user] ****
      changed: [master]
      changed: [worker1]
      changed: [worker2]

      TASK [allow 'ubuntu' user to have passwordless sudo] ****
      changed: [master]
      changed: [worker1]
      changed: [worker2]

      TASK [set up authorized keys for the ubuntu user] ****
      changed: [worker1] => (item=ssh-rsa AAAAB3...
      changed: [worker2] => (item=ssh-rsa AAAAB3...
      changed: [master] => (item=ssh-rsa AAAAB3...

      PLAY RECAP ****
      master  : ok=5 changed=4 unreachable=0 failed=0
      worker1 : ok=5 changed=4 unreachable=0 failed=0
      worker2 : ok=5 changed=4 unreachable=0 failed=0

      Now that the preliminary setup is complete, you can move on to installing Kubernetes-specific dependencies.

      Step 3: Installing Kubernetes' Dependencies

      In this section, you will install the operating-system-level packages required by Kubernetes with Ubuntu's package manager. These packages are:

      • Docker: a container runtime. It is the component that runs your containers. Support for other runtimes such as rkt is under active development in Kubernetes.

      • kubeadm: a CLI tool that will install and configure the various components of a cluster in a standard way.

      • kubelet: a system service/program that runs on all nodes and handles node-level operations.

      • kubectl: a CLI tool used to issue commands to the cluster through its API server.

      Create a file named ~/kube-cluster/kube-dependencies.yml in the workspace:

      • nano ~/kube-cluster/kube-dependencies.yml

      Add the following plays to the file to install these packages on your servers:

      ~/kube-cluster/kube-dependencies.yml

      - hosts: all
        become: yes
        tasks:
         - name: install Docker
           apt:
             name: docker.io
             state: present
             update_cache: true
      
         - name: install APT Transport HTTPS
           apt:
             name: apt-transport-https
             state: present
      
         - name: add Kubernetes apt-key
           apt_key:
             url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
             state: present
      
         - name: add Kubernetes' APT repository
           apt_repository:
            repo: deb http://apt.kubernetes.io/ kubernetes-xenial main
            state: present
            filename: 'kubernetes'
      
         - name: install kubelet
           apt:
             name: kubelet=1.14.0-00
             state: present
             update_cache: true
      
         - name: install kubeadm
           apt:
             name: kubeadm=1.14.0-00
             state: present
      
      - hosts: master
        become: yes
        tasks:
         - name: install kubectl
           apt:
             name: kubectl=1.14.0-00
             state: present
             force: yes
      

      The first play in the playbook does the following:

      • Installs Docker, the container runtime.

      • Installs apt-transport-https, allowing you to add external HTTPS sources to your APT sources list.

      • Adds the Kubernetes APT repository's apt-key for key verification.

      • Adds the Kubernetes APT repository to your remote servers' APT sources list.

      • Installs kubelet and kubeadm.

      The second play consists of a single task that installs kubectl on your master node.

      Note: While the Kubernetes documentation recommends that you use the latest stable release of Kubernetes for your environment, this tutorial uses a specific version. This will ensure that you can follow the steps successfully, as Kubernetes changes rapidly and the latest version may not work with this tutorial.
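
      If you do decide to experiment with a different release later, one way to keep the pinned version in a single place is an Ansible variable. The following is only a minimal sketch of that idea (the variable name kube_version is an example and is not used elsewhere in this tutorial); it is not required for the steps that follow:

      # Sketch only: pin the Kubernetes package version with a single variable.
      - hosts: all
        become: yes
        vars:
          kube_version: "1.14.0-00"   # example value; change it here to install a different release
        tasks:
          - name: install kubelet
            apt:
              name: "kubelet={{ kube_version }}"
              state: present
              update_cache: true

          - name: install kubeadm
            apt:
              name: "kubeadm={{ kube_version }}"
              state: present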

      Save and close the file when you are finished.

      Next, execute the playbook by running the following locally:

      • ansible-playbook -i hosts ~/kube-cluster/kube-dependencies.yml

      On completion, you will see output similar to the following:

      Output

      PLAY [all] ****

      TASK [Gathering Facts] ****
      ok: [worker1]
      ok: [worker2]
      ok: [master]

      TASK [install Docker] ****
      changed: [master]
      changed: [worker1]
      changed: [worker2]

      TASK [install APT Transport HTTPS] *****
      ok: [master]
      ok: [worker1]
      changed: [worker2]

      TASK [add Kubernetes apt-key] *****
      changed: [master]
      changed: [worker1]
      changed: [worker2]

      TASK [add Kubernetes' APT repository] *****
      changed: [master]
      changed: [worker1]
      changed: [worker2]

      TASK [install kubelet] *****
      changed: [master]
      changed: [worker1]
      changed: [worker2]

      TASK [install kubeadm] *****
      changed: [master]
      changed: [worker1]
      changed: [worker2]

      PLAY [master] *****

      TASK [Gathering Facts] *****
      ok: [master]

      TASK [install kubectl] ******
      ok: [master]

      PLAY RECAP ****
      master                     : ok=9    changed=5    unreachable=0    failed=0
      worker1                    : ok=7    changed=5    unreachable=0    failed=0
      worker2                    : ok=7    changed=5    unreachable=0    failed=0

      After execution, Docker, kubeadm, and kubelet will be installed on all of the remote servers. kubectl is not a required component and is only needed for executing cluster commands. Installing it only on the master node makes sense in this context, since you will run kubectl commands only from the master. Note, however, that kubectl commands can be run from any of the worker nodes or from any machine where it can be installed and configured to point to a cluster.
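
      For example, if you wanted to run kubectl from your local machine instead, one possible approach is to fetch the cluster's kubeconfig from the master with a small Ansible play and point kubectl at the downloaded file. The following is only a sketch of that idea (the local path ~/kube-cluster/admin.conf is an arbitrary example, not something created elsewhere in this tutorial):

      # Sketch: download the cluster's admin kubeconfig to the local machine.
      - hosts: master
        become: yes
        tasks:
          - name: fetch admin.conf to the local machine
            fetch:
              src: /etc/kubernetes/admin.conf
              dest: ~/kube-cluster/admin.conf   # example local destination
              flat: yes

      A locally installed kubectl could then be pointed at that file, for instance by setting the KUBECONFIG environment variable to its path.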

      All system dependencies are now installed. Let's set up the master node and initialize the cluster.

      Step 4: Setting Up the Master Node

      In this section, you will set up the master node. Before creating any playbooks, however, it's worth covering a few concepts such as Pods and Pod network plugins, since your cluster will include both.

      A pod is an atomic unit that runs one or more containers. These containers share resources such as file volumes and network interfaces. Pods are the basic unit of scheduling in Kubernetes: all containers in a pod are guaranteed to run on the same node that the pod is scheduled on.

      Each pod has its own IP address, and a pod on one node should be able to access a pod on another node using the pod's IP. Containers on a single node can communicate easily through a local interface. Communication between pods is more complicated, however, and requires a separate networking component that can transparently route traffic from a pod on one node to a pod on another.

      Pod network plugins provide this functionality. For this cluster you will use Flannel, a stable and performant option.
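
      To make the Pod concept more concrete, here is a minimal, hypothetical Pod manifest. It is shown only as an illustration of what a Pod definition looks like; you do not need to create it for this tutorial, and the names used in it are examples:

      # Illustration only: a minimal Pod running a single nginx container.
      apiVersion: v1
      kind: Pod
      metadata:
        name: example-pod        # example name
        labels:
          app: example
      spec:
        containers:
          - name: nginx
            image: nginx          # image pulled from the Docker registry
            ports:
              - containerPort: 80 # port the container listens on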

      Create an Ansible playbook named master.yml on your local machine:

      • nano ~/kube-cluster/master.yml

      Add the following play to the file to initialize the cluster and install Flannel:

      ~/kube-cluster/master.yml

      - hosts: master
        become: yes
        tasks:
          - name: initialize the cluster
            shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.txt
            args:
              chdir: $HOME
              creates: cluster_initialized.txt
      
          - name: create .kube directory
            become: yes
            become_user: ubuntu
            file:
              path: $HOME/.kube
              state: directory
              mode: 0755
      
          - name: copy admin.conf to user's kube config
            copy:
              src: /etc/kubernetes/admin.conf
              dest: /home/ubuntu/.kube/config
              remote_src: yes
              owner: ubuntu
      
          - name: install Pod network
            become: yes
            become_user: ubuntu
            shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml >> pod_network_setup.txt
            args:
              chdir: $HOME
              creates: pod_network_setup.txt
      

      Here's a breakdown of this play:

      • The first task initializes the cluster by running kubeadm init. Passing the argument --pod-network-cidr=10.244.0.0/16 specifies the private subnet from which the pod IPs will be assigned. Flannel uses the above subnet by default; we're telling kubeadm to use the same subnet.

      • The second task creates a .kube directory at /home/ubuntu. This directory will hold configuration information such as the admin key files, which are needed to connect to the cluster, and the cluster's API address.

      • The third task copies the /etc/kubernetes/admin.conf file that was generated by kubeadm init to your non-root user's home directory. This will allow you to use kubectl to access the newly created cluster.

      • The last task runs kubectl apply to install Flannel. kubectl apply -f descriptor.[yml|json] is the syntax for telling kubectl to create the objects described in the descriptor.[yml|json] file. The kube-flannel.yml file contains the descriptions of the objects required to set up Flannel in the cluster.

      Save and close the file when you are finished.

      Execute the playbook locally by running:

      • ansible-playbook -i hosts ~/kube-cluster/master.yml

      On completion, you will see output similar to the following:

      Output

      PLAY [master] ****

      TASK [Gathering Facts] ****
      ok: [master]

      TASK [initialize the cluster] ****
      changed: [master]

      TASK [create .kube directory] ****
      changed: [master]

      TASK [copy admin.conf to user's kube config] *****
      changed: [master]

      TASK [install Pod network] *****
      changed: [master]

      PLAY RECAP ****
      master                     : ok=5    changed=4    unreachable=0    failed=0

      To check the status of the master node, SSH into it with the following command:

      • ssh ubuntu@master_ip

      Once inside the master node, execute:

      • kubectl get nodes

      You will now see the following output:

      Output

      NAME      STATUS    ROLES     AGE       VERSION
      master    Ready     master    1d        v1.14.0

      The output states that the master node has completed all initialization tasks and is in a Ready state, from which it can start accepting worker nodes and executing tasks sent to the API server. You can now add the workers from your local machine.

      Step 5: Setting Up the Worker Nodes

      Adding workers to the cluster involves executing a single command on each. This command includes the necessary cluster information, such as the IP address and port of the master's API server, and a secure token. Only nodes that pass in the secure token will be able to join the cluster.

      Navigate back to your workspace and create a playbook named workers.yml:

      • nano ~/kube-cluster/workers.yml

      Add the following text to the file to add the workers to the cluster:

      ~/kube-cluster/workers.yml

      - hosts: master
        become: yes
        gather_facts: false
        tasks:
          - name: get join command
            shell: kubeadm token create --print-join-command
            register: join_command_raw
      
          - name: set join command
            set_fact:
              join_command: "{{ join_command_raw.stdout_lines[0] }}"
      
      
      - hosts: workers
        become: yes
        tasks:
          - name: join cluster
            shell: "{{ hostvars['master'].join_command }} >> node_joined.txt"
            args:
              chdir: $HOME
              creates: node_joined.txt
      

      Here's what the playbook does:

      • The first play gets the join command that needs to be run on the worker nodes. This command will be in the following format: kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>. Once it gets the actual command with the proper token and hash values, the task sets it as a fact so that the next play will be able to access that information.

      • The second play has a single task that runs the join command on all worker nodes. On completion of this task, the two worker nodes will be part of the cluster.

      Save and close the file when you are finished.

      Execute the playbook by running the following locally:

      • ansible-playbook -i hosts ~/kube-cluster/workers.yml

      On completion, you will see output similar to the following:

      Output

      PLAY [master] ****

      TASK [get join command] ****
      changed: [master]

      TASK [set join command] *****
      ok: [master]

      PLAY [workers] *****

      TASK [Gathering Facts] *****
      ok: [worker1]
      ok: [worker2]

      TASK [join cluster] *****
      changed: [worker1]
      changed: [worker2]

      PLAY RECAP *****
      master                     : ok=2    changed=1    unreachable=0    failed=0
      worker1                    : ok=2    changed=1    unreachable=0    failed=0
      worker2                    : ok=2    changed=1    unreachable=0    failed=0

      With the addition of the worker nodes, your cluster is now fully set up and functional, with workers ready to run workloads. Before scheduling applications, let's verify that the cluster is working as intended.

      Step 6: Verifying the Cluster

      A cluster can sometimes fail during setup because a node is down or network connectivity between the master and the workers is not working correctly. Let's verify the cluster and ensure that the nodes are operating correctly.

      You will need to check the current state of the cluster from the master node to ensure that the nodes are ready. If you disconnected from the master node, you can SSH back into it with the following command:

      • ssh ubuntu@master_ip

      Then execute the following command to get the status of the cluster:

      • kubectl get nodes

      You will see output similar to the following:

      Output

      NAME      STATUS    ROLES     AGE       VERSION
      master    Ready     master    1d        v1.14.0
      worker1   Ready     <none>    1d        v1.14.0
      worker2   Ready     <none>    1d        v1.14.0

      If all of your nodes have the value Ready for STATUS, it means that they're part of the cluster and ready to run workloads.

      If, however, a few of the nodes have NotReady as the STATUS, it could mean that the worker nodes haven't finished their setup yet. Wait for around five to ten minutes before re-running kubectl get nodes and inspecting the new output. If a few nodes still have NotReady as the status, you might have to verify and re-run the commands in the previous steps.
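
      If you would rather automate this wait than check manually, one option is an Ansible task that polls kubectl get nodes until no node reports NotReady. The following is only a sketch of that idea and is not part of this tutorial's playbooks; the retry and delay values are arbitrary examples:

      # Sketch: poll the cluster until every node reports Ready.
      - hosts: master
        become: yes
        become_user: ubuntu
        tasks:
          - name: wait for all nodes to become Ready
            shell: kubectl get nodes --no-headers
            register: node_status
            until: node_status.stdout_lines | length > 0 and 'NotReady' not in node_status.stdout
            retries: 20   # example: check up to 20 times
            delay: 30     # example: wait 30 seconds between checks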

      Now that your cluster has been verified successfully, let's schedule an example Nginx application on the cluster.

      Step 7: Running an Application on the Cluster

      You can now deploy any containerized application to your cluster. To keep things simple, let's deploy Nginx using Deployments and Services to see how this application can be deployed to the cluster. You can use the commands below for other containerized applications as well, provided you change the Docker image name and any relevant flags (such as ports and volumes).

      Still on the master node, execute the following command to create a deployment named nginx:

      • kubectl create deployment nginx --image=nginx

      A Deployment is a type of Kubernetes object that ensures there is always a specified number of pods running based on a defined template, even if a pod crashes during the cluster's lifetime. The above deployment will create a pod with one container from the Docker registry's Nginx Docker image.
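
      For reference, the imperative command above corresponds roughly to applying a declarative Deployment manifest like the one below. This is only an illustration of the kind of object the command creates; you do not need to write or apply this file in this tutorial:

      # Illustration only: a Deployment that keeps one nginx pod running.
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx
        labels:
          app: nginx
      spec:
        replicas: 1               # desired number of pods
        selector:
          matchLabels:
            app: nginx            # manages pods carrying this label
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
              - name: nginx
                image: nginx      # image from the Docker registry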

      Next, run the following command to create a service named nginx that will expose the application publicly. It will do so through a NodePort, a scheme that will make the pod accessible through an arbitrary port opened on each node of the cluster:

      • kubectl expose deploy nginx --port 80 --target-port 80 --type NodePort

      Services are another type of Kubernetes object that expose internal cluster services to both internal and external clients. They are also capable of load balancing requests across multiple pods, and are an integral component of Kubernetes that frequently interacts with other components.
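
      Again for reference, the expose command above corresponds roughly to a Service manifest like the following illustration; you do not need to create it yourself:

      # Illustration only: a NodePort Service in front of the nginx Deployment's pods.
      apiVersion: v1
      kind: Service
      metadata:
        name: nginx
      spec:
        type: NodePort
        selector:
          app: nginx            # routes traffic to pods with this label
        ports:
          - port: 80            # port exposed inside the cluster
            targetPort: 80      # port the nginx container listens on
            # nodePort is picked automatically from 30000-32767 when omitted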

      Run the following command:

      • kubectl get services

      This will output text similar to the following:

      Output

      NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
      kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP             1d
      nginx        NodePort    10.109.228.209   <none>        80:nginx_port/TCP   40m

      From the third line of the above output, you can retrieve the port that Nginx is running on. Kubernetes will automatically assign a random port that is greater than 30000, while ensuring that it is not already bound by another service.

      To test that everything is working, visit http://worker_1_ip:nginx_port or http://worker_2_ip:nginx_port through a browser on your local machine. You will see Nginx's familiar welcome page.

      If you would like to remove the Nginx application, first delete the nginx service from the master node:

      • kubectl delete service nginx

      Run the following to make sure that the service has been deleted:

      • kubectl get services

      You will see the following output:

      Output

      NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
      kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   1d

      Then delete the deployment:

      • kubectl delete deployment nginx

      Run the following to confirm that it worked:

      • kubectl get deployments

      Output

      No resources found.

      Conclusion

      In this guide, you've successfully set up a Kubernetes cluster on Ubuntu 16.04 using Kubeadm and Ansible for automation.

      If you're wondering what to do with the cluster now that it's set up, a good next step would be to get comfortable deploying your own applications and services onto the cluster. Here's a list of links with further information that can guide you in the process:

      • Dockerizing applications: lists examples that detail how to containerize applications using Docker.

      • Pod Overview: describes in detail how Pods work and their relationship with other Kubernetes objects. Pods are ubiquitous in Kubernetes, so understanding them will make your work easier.

      • Deployments Overview: provides an overview of Deployments. It is useful to understand how controllers such as Deployments work, since they are used frequently in stateless applications for scaling and for the automated healing of unhealthy applications.

      • Services Overview: covers Services, another frequently used object in Kubernetes clusters. Understanding the types of Services and the options they include is essential for running both stateless and stateful applications.

      Other important concepts that you can look into are Volumes, Ingresses, and Secrets, all of which come in handy when deploying production applications.

      Kubernetes has a lot of functionality and features to offer. The official Kubernetes documentation is the best place to learn about concepts, find task-specific guides, and look up API references for various objects.


