
      December 2019

      How To Install Git on Ubuntu 18.04 [Quickstart]


      Introducción

      Version control systems let you contribute to and collaborate on software development projects. Git is one of the most popular version control systems available today.

      This tutorial will walk you through installing and configuring Git on an Ubuntu 18.04 server. For a more detailed version of this tutorial, with fuller explanations of each step, see How To Install Git on Ubuntu 18.04.

      Step 1 — Update Default Packages

      Logged into your Ubuntu 18.04 server as a non-root sudo user, first update your default packages.
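
      On Ubuntu 18.04 this is typically done with apt:

      • sudo apt update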

      Step 2 — Install Git
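
      You can then install Git with apt:

      • sudo apt install git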

      Step 3 — Confirm Successful Installation

      You can confirm that you have installed Git correctly by running the following command and receiving output similar to what is shown here:
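
      • git --version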

      Output

      git version 2.17.1

      Step 4 — Set Up Git

      Now that you have Git installed, and to prevent warnings, you should configure it with your information.

      • git config --global user.name "Your Name"
      • git config --global user.email "youremail@domain.com"

      If you need to edit this file, you can use a text editor such as nano:
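
      • nano ~/.gitconfig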

      ~/.gitconfig contents

      [user]
        name = Your Name
        email = youremail@domain.com
      

      Related Tutorials

      Here are links to more detailed tutorials related to this guide:




      How to Use Ansible to Install and Set Up LEMP on Ubuntu 18.04


      Introduction

      Server automation now plays an essential role in systems administration, due to the disposable nature of modern application environments. Configuration management tools such as Ansible are typically used to streamline the process of automating server setup by establishing standard procedures for new servers while also reducing human error associated with manual setups.

      Ansible offers a simple architecture that doesn’t require special software to be installed on nodes. It also provides a robust set of features and built-in modules which facilitate writing automation scripts.

      This guide explains how to use Ansible to automate the steps contained in our guide on How To Install Linux, Nginx, MySQL and PHP (LEMP) on Ubuntu 18.04. The LEMP software stack is a group of software that can be used to serve dynamic web pages and web applications. The name is an acronym for a Linux operating system with an Nginx (pronounced like “Engine-X”) web server, where backend data is stored in a MySQL database and dynamic processing is handled by PHP.

      Prerequisites

      In order to execute the automated setup provided by the playbook we’re discussing in this guide, you’ll need an Ansible control node with Ansible installed and configured, as well as one or more Ansible hosts (the remote Ubuntu 18.04 servers you want to set up).

      Before proceeding, you first need to make sure your Ansible control node is able to connect and execute commands on your Ansible host(s). For a connection test, please check step 3 of How to Install and Configure Ansible on Ubuntu 18.04.

      What Does this Playbook Do?

      This Ansible playbook provides an alternative to manually running through the procedure outlined in our guide on How To Install Linux, Nginx, MySQL, PHP (LEMP stack) on Ubuntu 18.04.

      Running this playbook will perform the following actions on your Ansible hosts:

      1. Install aptitude, which is preferred by Ansible as an alternative to the apt package manager.
      2. Install the required LEMP packages.
      3. Set up the Nginx configuration file using the provided template.
      4. Enable the new Nginx configuration and disable the default one.
      5. Set the password for the MySQL root user.
      6. Remove anonymous MySQL accounts and the test database.
      7. Set up UFW to allow HTTP traffic on the configured port (80 by default).
      8. Set up a PHP test script using the provided template.

      Once the playbook has finished running, you will have a PHP web environment running on top of Nginx, based on the options you defined within your configuration variables.

      How to Use this Playbook

      The first thing we need to do is obtain the LEMP playbook and its dependencies from the do-community/ansible-playbooks repository. We need to clone this repository to a local folder inside the Ansible Control Node.

      In case you have cloned this repository before while following a different guide, access your existing ansible-playbooks copy and run a git pull command to make sure you have the latest contents:

      • cd ~/ansible-playbooks
      • git pull

      If this is your first time using the do-community/ansible-playbooks repository, you should start by cloning the repository to your home folder with:

      • cd ~
      • git clone https://github.com/do-community/ansible-playbooks.git
      • cd ansible-playbooks

      The files we’re interested in are located inside the lemp_ubuntu1804 folder, which has the following structure:

      lemp_ubuntu1804
      ├── files
      │   ├── info.php.j2
      │   └── nginx.conf.j2
      ├── vars
      │   └── default.yml
      ├── playbook.yml
      └── readme.md
      

      Here is what each of these files does:

      • files/info.php.j2: Template file for setting up a PHP test page in the web server’s root directory.
      • files/nginx.conf.j2: Template file for setting up the Nginx server.
      • vars/default.yml: Variable file for customizing playbook settings.
      • playbook.yml: The playbook file, containing the tasks to be executed on the remote server(s).
      • readme.md: A text file containing information about this playbook.

      We’ll edit the playbook’s variable file to customize the configurations of both MySQL and Nginx. Access the lemp_ubuntu1804 directory and open the vars/default.yml file using your command line editor of choice:

      • cd lemp_ubuntu1804
      • nano vars/default.yml

      This file contains a few variables that require your attention:

      vars/default.yml

      ---
      mysql_root_password: "mysql_root_password"
      http_host: "your_domain"
      http_conf: "your_domain.conf"
      http_port: "80"
      

      The following list contains a brief explanation of each of these variables and how you might want to change them:

      • mysql_root_password: The desired password for the root MySQL account.
      • http_host: The host name or IP address for this web server.
      • http_conf: The name of the configuration file to be created inside /etc/nginx/sites-available, typically set to the host or application name for easier identification.
      • http_port: The port Nginx will use to serve this site. This is port 80 by default, but if you want to serve your site or application on a different port, enter it here.
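
      For example, a filled-in file might look like the following; the password and domain are placeholder values you would replace with your own:

      ---
      mysql_root_password: "a_strong_password_here"
      http_host: "example.com"
      http_conf: "example.com.conf"
      http_port: "80"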

      Once you’re done updating the variables inside vars/default.yml, save and close this file. If you used nano, do so by pressing CTRL + X, Y, then ENTER.

      You’re now ready to run this playbook on one or more servers. Most playbooks are configured to be executed on every server in your inventory by default. We can use the -l flag to make sure that only a subset of servers, or a single server, is affected by the playbook. We can also use the -u flag to specify which user on the remote server we’re using to connect and execute the playbook commands on the remote hosts.

      To execute the playbook only on server1, connecting as root, you can use the following command:

      • ansible-playbook playbook.yml -l server1 -u root
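
      If your hosts are defined in a custom inventory file, or if you connect as a regular sudo user rather than root, you can adjust the flags accordingly. The inventory file name and username below are placeholders:

      • ansible-playbook -i inventory playbook.yml -l server1 -u sammy -K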

      You will get output similar to this:

      Output

      PLAY [all] *****************************************************************

      TASK [Gathering Facts] *****************************************************
      ok: [server1]

      TASK [Install Prerequisites] ***********************************************
      changed: [server1] => (item=aptitude)

      ...

      TASK [UFW - Allow HTTP on port 80] *****************************************
      changed: [server1]

      TASK [Sets Up PHP Info Page] ***********************************************
      changed: [server1]

      RUNNING HANDLER [Reload Nginx] *********************************************
      changed: [server1]

      PLAY RECAP *****************************************************************
      server1                    : ok=12   changed=9    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

      Note: For more information on how to run Ansible playbooks, check our Ansible Cheat Sheet Guide.

      When the playbook is finished running, go to your web browser and access the host or IP address of the server, as configured in the playbook variables, followed by /info.php:

      http://server_host_or_IP/info.php
      

      You will see a page like this:

      phpinfo page

      Because this page contains sensitive information about your PHP environment, it is recommended that you remove it from the server by running an rm -f /var/www/html/info.php command once you have finished setting it up.

      The Playbook Contents

      You can find the LEMP server setup featured in this tutorial in the lemp_ubuntu1804 folder inside the DigitalOcean Community Playbooks repository. To copy or download the script contents directly, click the Raw button towards the top of each script.

      The full contents of the playbook as well as its associated files are also included here for your convenience.

      vars/default.yml

      The default.yml variable file contains values that will be used within the playbook tasks, such as the password for the MySQL root account and the domain name to configure within Nginx.

      vars/default.yml

      ---
      mysql_root_password: "mysql_root_password"
      http_host: "your_domain"
      http_conf: "your_domain.conf"
      http_port: "80"
      

      files/nginx.conf.j2

      The nginx.conf.j2 file is a Jinja 2 template file that configures the Nginx web server. The variables used within this template are defined in the vars/default.yml variable file.

      files/nginx.conf.j2

      server {
             listen {{ http_port }};
             root /var/www/html;
             index index.php index.html index.htm index.nginx-debian.html;
             server_name {{ http_host }};
      
             location / {
                     try_files $uri $uri/ =404;
             }
      
              location ~ \.php$ {
                     include snippets/fastcgi-php.conf;
                     fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
             }
      
              location ~ /\.ht {
                     deny all;
             }
      }
      
      

      files/info.php.j2

      The info.php.j2 file is another Jinja template, used to set up a test PHP script in the document root of the newly configured LEMP server.

      files/info.php.j2

      <?php
      phpinfo();
      
      

      playbook.yml

      The playbook.yml file is where all tasks from this setup are defined. It starts by defining the group of servers that should be the target of this setup (all), after which it uses become: true to define that tasks should be executed with privilege escalation (sudo) by default. Then, it includes the vars/default.yml variable file to load configuration options.

      playbook.yml

      ---
      - hosts: all
        become: true
        vars_files:
         - vars/default.yml
      
        tasks:
         - name: Install Prerequisites
           apt: name={{ item }} update_cache=yes state=latest force_apt_get=yes
           loop: [ 'aptitude' ]
      
         - name: Install LEMP Packages
           apt: name={{ item }} update_cache=yes state=latest
           loop: [ 'nginx', 'mysql-server', 'python3-pymysql', 'php-fpm', 'php-mysql' ]
      
      # Nginx Configuration
      
         - name: Sets Nginx conf file
           template:
             src: "files/nginx.conf.j2"
             dest: "/etc/nginx/sites-available/{{ http_conf }}"
      
         - name: Enables new site
           file:
             src: "/etc/nginx/sites-available/{{ http_conf }}"
             dest: "/etc/nginx/sites-enabled/{{ http_conf }}"
             state: link
           notify: Reload Nginx
      
         - name: Removes "default" site
           file:
             path: "/etc/nginx/sites-enabled/default"
             state: absent
           notify: Reload Nginx
      
      # MySQL Configuration
      
         - name: Sets the root password
           mysql_user:
             name: root
             password: "{{ mysql_root_password }}"
             login_unix_socket: /var/run/mysqld/mysqld.sock
      
         - name: Removes all anonymous user accounts
           mysql_user:
             name: ''
             host_all: yes
             state: absent
             login_user: root
             login_password: "{{ mysql_root_password }}"
      
         - name: Removes the MySQL test database
           mysql_db:
             name: test
             state: absent
             login_user: root
             login_password: "{{ mysql_root_password }}"
      
      # UFW Configuration
      
         - name: "UFW - Allow HTTP on port {{ http_port }}"
           ufw:
             rule: allow
             port: "{{ http_port }}"
             proto: tcp
      
      # Sets Up PHP Info Page
      
         - name: Sets Up PHP Info Page
           template:
             src: "files/info.php.j2"
             dest: "/var/www/html/info.php"
      
      # Handlers
      
        handlers:
         - name: Reload Nginx
           service:
             name: nginx
             state: reloaded
      
         - name: Restart Nginx
           service:
             name: nginx
             state: restarted
      
      

      Feel free to modify these files to best suit your individual needs within your own workflow.

      Conclusion

      In this guide, we used Ansible to automate the process of installing and setting up a LEMP environment on a remote server. Because each individual typically has different needs when working with MySQL databases and users, we encourage you to check out the official Ansible documentation for more information and use cases of the mysql_user Ansible module.
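
      As a minimal sketch (the user, database, and variable names below are placeholders, not part of the playbook in this guide), an additional task using the mysql_user module to create a dedicated application database user might look like this:

          - name: Create application database user
            mysql_user:
              name: app_user
              password: "{{ app_db_password }}"
              priv: "app_db.*:ALL"
              state: present
              login_user: root
              login_password: "{{ mysql_root_password }}"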

      If you’d like to include other tasks in this playbook to further customize your server setup, please refer to our introductory Ansible guide Configuration Management 101: Writing Ansible Playbooks.




      Getting Started with Containers and Kubernetes: A DigitalOcean Meetup Kit


      Getting Started with Containers and Kubernetes Meetup Kit Materials

      This meetup kit is designed to help a technical audience become familiar with core Kubernetes concepts and practices.

      The aim is to provide a complete set of resources for a speaker to host an event and deliver an introductory talk on containers and Kubernetes. It includes slides and speaker notes for the talk, as well as this tutorial and the demo app’s GitHub repository.

      This tutorial is intended to supplement the talk demo with additional detail and elucidation. It also serves as a reference for readers seeking to get a minimal containerized Flask app up and running on DigitalOcean Kubernetes.

      Introduction

      In the past decade, containerized applications and container clusters have rapidly replaced the old paradigm of scaling applications using virtual machines. Containers offer the same process isolation, but are generally more lightweight, portable, and performant than full virtualization. Container clusters, which can be used to manage thousands of running containers across a set of physical machines, abstract away much of the work of rolling out new versions of applications, scaling them, and efficiently scheduling workloads. Out of these, Kubernetes has emerged as a mature, production-ready system. It provides a rich set of features like rolling deployments, health checking, self-monitoring, workload autoscaling, and much, much more.

      This tutorial, designed to accompany the Slides and speaker notes for the Getting Started with Kubernetes Meetup Kit, will show you how to harness these technologies and deploy the “Hello World” Flask app onto a DigitalOcean Kubernetes cluster.

      Prerequisites

      To follow this tutorial, you will need:

      • A Kubernetes 1.10+ cluster with role-based access control (RBAC) enabled. This setup will use a DigitalOcean Kubernetes cluster.
      • The kubectl command-line tool installed on your local machine or development server and configured to connect to your cluster. You can read more about installing kubectl in the official documentation.
      • Docker installed on your local machine or development server. If you are working with Ubuntu 18.04, follow Steps 1 and 2 of How To Install and Use Docker on Ubuntu 18.04; otherwise, follow the official documentation for information about installing on other operating systems. Be sure to add your non-root user to the docker group, as described in Step 2 of the linked tutorial.
      • A Docker Hub account (optional). For an overview of how to set this up, refer to this introduction to Docker Hub. You’ll only need a Docker Hub account if you plan on modifying the Flask Docker image described in this tutorial.

      Step 1 — Cloning the App Repository and Building the Flask Image

      To begin, clone the demo Flask app repo onto your machine, navigate into the directory, and list the directory contents:

      • git clone https://github.com/do-community/k8s-intro-meetup-kit.git
      • cd k8s-intro-meetup-kit
      • ls

      Output

      LICENSE README.md app k8s

      The app directory contains the Flask demo app code, as well as the Dockerfile for building its container image. The k8s directory contains Kubernetes manifest files for a Pod, Deployment, and Service. To learn more about these Kubernetes objects, consult the slide deck or An Introduction to Kubernetes.

      Print out the contents of the app.py file in the app directory:
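
      • cat app/app.py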

      Output

      from flask import Flask

      app = Flask(__name__)


      @app.route('/')
      def hello_world():
          return 'Hello, World!'


      if __name__ == "__main__":
          app.run(debug=True, host='0.0.0.0')

      This code defines a single default route that will print “Hello World.” Additionally, the app runs in debug mode to enable verbose output.

      In a similar fashion, cat out the contents of the app’s Dockerfile:
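
      • cat app/Dockerfile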

      Output

      FROM python:3-alpine

      WORKDIR /app

      COPY requirements.txt .
      RUN pip install -r requirements.txt

      COPY . .

      EXPOSE 5000
      CMD ["python", "app.py"]

      This Dockerfile first sources a lightweight Alpine Linux Python parent image. It then copies in the Python requirements file, installs Flask, copies the app code into the container image, defines port 5000 as the container port, and finally sets the default command to python app.py.

      Next, build the app image:

      • cd app
      • docker build -t flask_demo:v0 .

      We give the image a name, flask_demo, and a tag, v0, using the -t option.

      After Docker finishes the build, run the container using docker run:

      • docker run -p 5000:5000 flask_demo:v0

      This command runs a container using the flask_demo:v0 image, and forwards local port 5000 to container port 5000.

      If you’re running Docker on your local machine, navigate to http://localhost:5000 in your web browser. You should see “Hello World,” generated by the dockerized Flask app.

      If you’re running Docker on a dev server, navigate instead to http://dev_server_external_IP:5000. If you’re running a firewall like UFW, be sure to allow external access on port 5000. To learn more about doing this with UFW, consult UFW Essentials: Common Firewall Rules and Commands.

      At this point you can experiment with Docker commands like docker ps, docker top, and docker images to practice working with images and containers on your system.

      In the next step, we’ll deploy this demo app to your Kubernetes cluster. We’ll use a prebuilt image shared publicly on Docker Hub. If you’d like to customize the Flask app and use your own image, you should create a Docker Hub account and follow the steps in this introduction to push your image to a public repository. From there, Kubernetes will be able to pull and deploy the container image into your cluster.
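
      As a rough sketch, pushing your own copy of the image typically involves logging in, tagging the image under your account, and pushing it; your_dockerhub_username below is a placeholder:

      • docker login
      • docker tag flask_demo:v0 your_dockerhub_username/flask-helloworld:latest
      • docker push your_dockerhub_username/flask-helloworld:latest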

      Step 2 — Deploying the Flask App on Kubernetes

      The app and Docker image described in the previous step have already been built and made publicly available in the flask-helloworld Docker Hub repository. You can optionally create your own repository for the app and substitute it for flask-helloworld throughout this step.

      We’ll first deploy this demo “Hello World” app into our cluster as a standalone Pod, then as a multi-pod Deployment, which we’ll finally expose as a LoadBalancer Service. At the end of this tutorial, the “Hello World” app will be publicly accessible from outside of the Kubernetes cluster.

      Before we launch any workloads into the cluster, we’ll create a Namespace in which the objects will run. Namespaces allow you to segment your cluster and limit scope for running workloads.

      Create a Namespace called flask:

      • kubectl create namespace flask

      Now, list all the Namespaces in your cluster:
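
      • kubectl get namespaces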

      You should see your new Namespace as well as some default Namespaces like kube-system and default. In this tutorial, we are going to exclusively work within the flask Namespace.

      Navigate back out to the k8s directory in the demo repo:
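
      • cd ../k8s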

      In this directory, you’ll see three Kubernetes manifest files:

      • flask-pod.yaml: The app Pod manifest
      • flask-deployment.yaml: The app Deployment manifest
      • flask-service.yaml: The app LoadBalancer Service manifest

      Let’s take a look at the Pod manifest:
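
      • cat flask-pod.yaml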

      Output

      apiVersion: v1
      kind: Pod
      metadata:
        name: flask-pod
        labels:
          app: flask-helloworld
      spec:
        containers:
          - name: flask
            image: hjdo/flask-helloworld:latest
            ports:
              - containerPort: 5000

      Here, we define a minimal Pod called flask-pod and label it with the app: flask-helloworld key-value pair.

      We then name the single container flask and set the image to flask-helloworld:latest from the hjdo/flask-helloworld Docker Hub repository. If you’re using an image stored in a different Docker Hub repo, you can reference it using the image field here. Finally, we open up port 5000 to accept incoming connections.

      Deploy this Pod into the flask Namespace using kubectl apply -f and the -n Namespace flag:

      • kubectl apply -f flask-pod.yaml -n flask

      After ten or so seconds, the Pod should be up and running in your cluster:
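
      • kubectl get pod -n flask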

      Output

      NAME        READY   STATUS    RESTARTS   AGE
      flask-pod   1/1     Running   0          4s

      Since this Pod is running inside of the Kubernetes cluster, we need to forward a local port to the Pod’s containerPort to access the running app locally:

      • kubectl port-forward pods/flask-pod -n flask 5000:5000

      Here we use port-forward to forward local port 5000 to the Pod’s containerPort 5000.

      Navigate to http://localhost:5000, where you should once again see the “Hello World” text generated by the Flask app. If you’re running kubectl on a remote dev server, replace localhost with your dev server’s external IP address.

      Feel free to play around with kubectl commands like kubectl describe to explore the Pod resource. When you’re done, delete the Pod using kubectl delete:

      • kubectl delete pod flask-pod -n flask

      Next, we’ll roll out this Pod in a scalable fashion using the Deployment resource. Print out the contents of the flask-deployment.yaml manifest file:

      • cat flask-deployment.yaml

      Output

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: flask-dep
        labels:
          app: flask-helloworld
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: flask-helloworld
        template:
          metadata:
            labels:
              app: flask-helloworld
          spec:
            containers:
              - name: flask
                image: hjdo/flask-helloworld:latest
                ports:
                  - containerPort: 5000

      Here, we define a Deployment called flask-dep with an app: flask-helloworld Label. Next, we request 2 replicas of a Pod template identical to the template we previously used to deploy the Flask app Pod. The selector field matches the app: flask-helloworld Pod template to the Deployment.

      Roll out the Deployment using kubectl apply -f:

      • kubectl apply -f flask-deployment.yaml -n flask

      After a brief moment, the Deployment should be up and running in your cluster:

      • kubectl get deploy -n flask

      Output

      NAME        READY   UP-TO-DATE   AVAILABLE   AGE
      flask-dep   2/2     2            2           5s

      You can also pull up the individual Pods that are managed by the Deployment controller:
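
      • kubectl get pods -n flask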

      Output

      NAME                        READY   STATUS    RESTARTS   AGE
      flask-dep-876bd7677-bl4lg   1/1     Running   0          76s
      flask-dep-876bd7677-jbfpb   1/1     Running   0          76s

      To access the app, we have to forward a port inside of the cluster:

      • kubectl port-forward deployment/flask-dep -n flask 5000:5000

      This will forward local port 5000 to containerPort 5000 on one of the running Pods.

      You should be able to access the app at http://localhost:5000. If you’re running kubectl on a remote dev server, replace localhost with your dev server’s external IP address.

      At this point you can play around with commands like kubectl rollout and kubectl scale to experiment with rolling back Deployments and scaling them. To learn more about these and other kubectl commands, consult a kubectl Cheat Sheet.
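
      For example, the following commands scale the Deployment to three replicas and then watch the rollout status (three is just an illustrative value):

      • kubectl scale deployment/flask-dep -n flask --replicas=3
      • kubectl rollout status deployment/flask-dep -n flask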

      In the final step, we’ll expose this app to outside users using the LoadBalancer Service type, which will automatically provision a DigitalOcean cloud Load Balancer for the Flask app Service.

      Step 3 — Creating the App Service

      A Kubernetes Deployment allows the operator to flexibly scale a Pod template up or down, as well as manage rollouts and template updates. To create a stable network endpoint for this set of running Pod replicas, you can create a Kubernetes Service, which we’ll do here.

      Begin by inspecting the Service manifest file:
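
      • cat flask-service.yaml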

      Output

      apiVersion: v1
      kind: Service
      metadata:
        name: flask-svc
        labels:
          app: flask-helloworld
      spec:
        type: LoadBalancer
        ports:
          - port: 80
            targetPort: 5000
            protocol: TCP
        selector:
          app: flask-helloworld

      This manifest defines a Service called flask-svc. We set the type to LoadBalancer to provision a DigitalOcean cloud Load Balancer that will route traffic to the Deployment Pods. To select the already running Deployment, the selector field is set to the Deployment’s app: flask-helloworld Label. Finally, we open up port 80 on the Load Balancer and instruct it to route traffic to the Pods’ containerPort 5000.

      To create the Service, use kubectl apply -f:

      • kubectl apply -f flask-service.yaml -n flask

      It may take a bit of time for Kubernetes to provision the cloud Load Balancer. You can track progress using the -w watch flag:
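
      • kubectl get svc -w -n flask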

      Once you see an external IP for the flask-svc Service, navigate to it using your web browser. You should see the “Hello World” Flask app page.

      Conclusion

      This brief tutorial demonstrates how to containerize a minimal Flask app and deploy it to a Kubernetes cluster. It accompanies the meetup kit’s slides and speaker notes and GitHub repository.


