

      Automating Server Setup with Ansible: A DigitalOcean Workshop Kit

      Automating Server Setup with Ansible Workshop Kit Materials

      This workshop kit is designed to help a technical audience become familiar with configuration management concepts and how to use Ansible to automate server infrastructure setup.

      The aim is to provide a complete set of resources for a speaker to host an event and deliver an introductory talk on Ansible. It includes:

      • Slides and speaker notes including short demo videos and commands for running an optional live demo. This talk runs for roughly 50 minutes.
      • A GitHub repository containing the demo app code and the necessary Ansible scripts to deploy that application to an Ubuntu server.
      • This tutorial, which walks a user through rolling out the Travellist demo Laravel application on a remote server.

      This tutorial is intended to supplement the talk demo with additional detail and elucidation. It also serves as a reference for readers seeking to deploy a Laravel application to a remote Ubuntu server using Ansible.


      Server automation now plays an essential role in systems administration, due to the disposable nature of modern application environments. Configuration management tools such as Ansible are typically used to streamline the process of automating server setup by establishing standard procedures for new servers. This has the benefit of reducing human error associated with manual setups.

      Ansible offers a simplified architecture that doesn’t require special software to be installed on nodes. It also provides a robust set of features and built-in modules which facilitate writing automation scripts.

      This tutorial, designed to accompany the Slides and speaker notes for the Automating Server Setup with Ansible Workshop Kit, will show you how to set up an inventory file and execute a set of provisioning scripts to fully automate the process of setting up a remote LEMP server (Linux, (E)Nginx, MariaDB and PHP-FPM) on Ubuntu 18.04 and to deploy a demo Laravel application to this system.

      Note: This material is intended to demonstrate how to use playbooks to automate server setup with Ansible. Although our demo consists of a Laravel application running on a LEMP server, readers are encouraged to modify and adapt the included setup to suit their own needs.


      To follow this tutorial, you will need:

      • One Ansible control node: an Ubuntu 18.04 machine with Ansible installed and configured to connect to your Ansible hosts using SSH keys. Make sure the control node has a regular user with sudo permissions and a firewall enabled, as explained in our Initial Server Setup guide, and a set of valid SSH keys. To set up Ansible, please follow our guide on How to Install and Configure Ansible on Ubuntu 18.04.
      • One or more Ansible Hosts: one or more remote Ubuntu 18.04 servers. Each host must have the control node’s public key added to its authorized_keys file, as explained in Step 2 of the How to Set Up SSH Keys on Ubuntu 18.04 guide. In case you are using DigitalOcean Droplets as nodes, you can use the control panel to add your public key to your Ansible hosts.

      Step 1 — Cloning the Demo Repository

      The first thing we need to do is clone the repository containing the Ansible provisioning scripts and the demo Laravel application that we’ll deploy to the remote servers. All the necessary files can be found at the do-community/ansible-laravel-demo GitHub repository.

      After logging in to your Ansible control node as your sudo user, clone the repository and navigate to the directory created by the git command:

      • git clone
      • cd ansible-laravel-demo

      Now, you can run an ls command to inspect the contents of the cloned repository:

      • ls -l --group-directories-first

      You’ll see output like this:


      drwxrwxr-x 3 sammy sammy 4096 Mar 24 15:24 application
      drwxrwxr-x 2 sammy sammy 4096 Mar 24 15:24 group_vars
      drwxrwxr-x 7 sammy sammy 4096 Mar 24 15:24 roles
      -rw-rw-r-- 1 sammy sammy  102 Mar 24 15:24 inventory-example
      -rw-rw-r-- 1 sammy sammy 1987 Mar 24 15:24 laravel-deploy.yml
      -rw-rw-r-- 1 sammy sammy  794 Mar 24 15:24 laravel-env.j2
      -rw-rw-r-- 1 sammy sammy  920 Mar 24 15:24 readme.md
      -rw-rw-r-- 1 sammy sammy  318 Mar 24 15:24 server-setup.yml

      Here’s an overview of each of these folders and files:

      • application/: This directory contains the demo Laravel application that is going to be deployed on the remote server by the end of the workshop.
      • group_vars/: This directory holds variable files containing custom options for the application setup, such as database credentials and where to store the application files on the remote server.
      • roles/: This directory contains the different Ansible roles that handle the provisioning of an Ubuntu LEMP server.
      • inventory-example: This file can be used as a base to create a custom inventory for your infrastructure.
      • laravel-deploy.yml: This playbook will deploy the demo Laravel application to the remote server.
      • laravel-env.j2: This template is used by the laravel-deploy.yml playbook to set up the application environment file.
      • readme.md: This file contains general information about the provisioning contained in this repository.
      • server-setup.yml: This playbook will provision a LEMP server using the roles defined in the roles/ directory.

      Step 2 — Setting Up the Inventory File and Testing Connection to Nodes

      We’ll now create an inventory file to list the hosts we want to manage using Ansible. First, copy the inventory-example file to a new file called hosts:

      • cp inventory-example hosts

      Now, use your text editor of choice to open the new inventory file and update it with your own servers. Here, we’ll use nano:

      • nano hosts

      The example inventory that comes with the workshop kit contains two Ansible groups: dev and production. This is meant to demonstrate how to use group variables to customize deployment in multiple environments. If you wish to test this setup with a single node, you can use either the dev or the production group and remove the other one from the inventory file.
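      A minimal inventory following that structure might look like this (the IP addresses below are hypothetical placeholders, not values from the original repository; replace them with your own servers):

```ini
[dev]
203.0.113.10

[production]
203.0.113.20

[all:vars]
ansible_python_interpreter=/usr/bin/python3
```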



      Note: the ansible_python_interpreter variable defines the path to the Python executable on the remote host. Here, we’re telling Ansible to set this variable for all hosts in this inventory file.

      Save and close the file when you’re done. If you are using nano, you can do that by hitting CTRL+X, then Y and ENTER to confirm.

      Once you’re done adjusting your inventory file, you can execute the ping Ansible module to test whether the control node is able to connect to the hosts:

      • ansible all -i hosts -m ping -u root

      Let’s break down this command:

      • all: This option tells Ansible to run the following command on all hosts from the designated inventory file.
      • -i hosts: Specifies which inventory should be used. When this option is not provided, Ansible will try to use the default inventory, which is typically located at /etc/ansible/hosts.
      • -m ping: This will execute the ping Ansible module, which will test connectivity to nodes and whether or not the Python executable can be found on the remote systems.
      • -u root: This option specifies which remote user should be used to connect to the nodes. We’re using the root account here as an example because this is typically the only account available on fresh servers. Other connection options might be necessary depending on your infrastructure provider and SSH configuration.

      If your SSH connection to the nodes is properly set up, you’ll get the following output:

      Output
       | SUCCESS => {
          "changed": false,
          "ping": "pong"
      }
       | SUCCESS => {
          "changed": false,
          "ping": "pong"
      }

      The pong response means your control node is able to connect to your managed nodes, and that Ansible is able to execute Python commands on the remote hosts.

      Step 3 — Setting Up Variable Files

      Before running the playbooks that are included in this workshop kit, you’ll first need to edit the variable file that contains settings such as the name of the remote user to create and the database credentials to set up with MariaDB.

      Open the group_vars/all.yml file using your text editor of choice:

      • nano group_vars/all.yml


      # Initial Server Setup
      remote_user: sammy
      # MySQL Setup
      mysql_root_password: MYSQL_ROOT_PASSWORD
      mysql_app_db: travellist
      mysql_app_user: travellist_user
      mysql_app_pass: DB_PASSWORD
      # Web Server Setup
      http_host: "{{ ansible_facts.eth0.ipv4.address }}"
      remote_www_root: /var/www
      app_root_dir: travellist-demo
      document_root: "{{ remote_www_root }}/{{ app_root_dir }}/public"
      # Laravel Env Variables
      app_name: Travellist
      app_env: dev
      app_debug: true
      app_url: "http://{{ http_host }}"
      db_host: localhost
      db_port: 3306
      db_database: "{{ mysql_app_db }}"
      db_user: "{{ mysql_app_user }}"
      db_pass: "{{ mysql_app_pass }}"

      The variables that need your attention are:

      • remote_user: The specified user will be created on the remote server and granted sudo privileges.
      • mysql_root_password: This variable defines the database root password for the MariaDB server. Note that this should be a secure password of your own choosing.
      • mysql_app_db: The name of the database to create for the Laravel application. You don’t need to change this value, but you are free to do so if you wish. This value will be used to set up the .env Laravel configuration file.
      • mysql_app_user: The name of the database user for the Laravel application. Again, you are not required to change this value, but you are free to do so.
      • mysql_app_pass: The database password for the Laravel application. This should be a secure password of your choosing.
      • http_host: The domain name or IP address of the remote host. Here, we’re using an Ansible fact that contains the ipv4 address for the eth0 network interface. In case you have domain names pointing to your remote hosts, you may want to create separate variable files for each of them, overwriting this value so that the Nginx configuration contains the correct hostname for each server.

      When you are finished editing these values, save and close the file.

      Creating additional variable files for multiple environments

      If you’ve set up your inventory file with multiple nodes, you might want to create additional variable files to configure each node accordingly. In our example inventory, we have created two distinct groups: dev and production. To avoid having the same database credentials and other settings in both environments, we need to create a separate variable file to hold production values.

      You might want to copy the default variable file and use it as a base for your production values:

      • cp group_vars/all.yml group_vars/production.yml
      • nano group_vars/production.yml

      Because the all.yml file contains the default values that should be valid for all environments, you can remove all the variables that won’t need changing from the new production.yml file. The variables that you should update for each environment are highlighted here:


      # Initial Server Setup
      remote_user: prod_user
      # MySQL Setup
      mysql_root_password: MYSQL_PROD_ROOT_PASSWORD
      mysql_app_pass: MYSQL_PROD_APP_PASSWORD
      # Laravel Env Variables
      app_env: prod
      app_debug: false

      Notice that we’ve changed the app_env value to prod and set the app_debug value to false. These are recommended Laravel settings for production environments.
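      Conceptually, the way Ansible layers these group variable files resembles a dictionary merge, where the more specific group file wins on conflicting keys. A minimal Python sketch, using the variable names from the files above:

```python
# Sketch: group_vars/all.yml is applied first, then the more specific
# group file (group_vars/production.yml) overrides matching keys.
all_vars = {
    "remote_user": "sammy",
    "app_env": "dev",
    "app_debug": True,
    "mysql_app_db": "travellist",
}
production_vars = {
    "remote_user": "prod_user",
    "app_env": "prod",
    "app_debug": False,
}

# The later dict wins on conflicting keys, like higher-precedence group vars.
effective = {**all_vars, **production_vars}

print(effective["app_env"])       # prod (overridden)
print(effective["mysql_app_db"])  # travellist (inherited from all.yml)
```

      This is why production.yml only needs to contain the handful of values that actually differ between environments.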

      Once you’re finished customizing your production variables, save and close the file.

      Encrypting variable files with Ansible Vault

      If you plan on sharing your Ansible setup with other users, it is important to keep the database credentials and other sensitive data in your variable files safe. This is possible with Ansible Vault, a feature that is included with Ansible by default. Ansible Vault allows you to encrypt variable files so that only users with access to the vault password can view, edit or decrypt these files. The vault password is also necessary to run a playbook or a command that makes use of encrypted files.

      To encrypt your production variable file, run:

      • ansible-vault encrypt group_vars/production.yml

      You will be prompted to provide a vault password and confirm it. Once you’re finished, if you check the contents of that file, you’ll see that the data is now encrypted.

      If you want to view the variable file without changing its contents, you can use the view command:

      • ansible-vault view group_vars/production.yml

      You will be prompted to provide the same password you defined when encrypting that file with ansible-vault. After providing the password, the file’s contents will appear in your terminal. To exit the file view, type q.

      To edit a file that was previously encrypted with Ansible Vault, use the ansible-vault edit command:

      • ansible-vault edit group_vars/production.yml

      This command will prompt you to provide the vault password for that file. Your default terminal editor will then be used to open the file for editing. After making the desired changes, save and close the file, and it will be automatically encrypted again by Ansible Vault.

      You have now finished setting up your variable files. In the next step, we’ll run the playbook to set up Nginx, PHP-FPM, and MariaDB (which, along with a Linux-based operating system like Ubuntu, form the LEMP stack) on your remote server(s).

      Step 4 — Running the LEMP Playbook

      Before deploying the demo Laravel app to the remote server(s), we need to set up a LEMP environment that will serve the application. The server-setup.yml playbook includes the Ansible roles necessary to set this up. To inspect its contents, run:

      • cat server-setup.yml


      - hosts: all
        become: true
        roles:
          - { role: setup, tags: ['setup'] }
          - { role: mariadb, tags: ['mysql', 'mariadb', 'db', 'lemp'] }
          - { role: php, tags: ['php', 'web', 'php-fpm', 'lemp'] }
          - { role: nginx, tags: ['nginx', 'web', 'http', 'lemp'] }
          - { role: composer, tags: ['composer'] }

      Here’s an overview of all the roles included within this playbook:

      • setup: Contains the tasks necessary to create a new system user and grant them sudo privileges, as well as enabling the ufw firewall.
      • mariadb: Installs the MariaDB database server and creates the application database and user.
      • php: Installs php-fpm and PHP modules that are necessary in order to run a Laravel application.
      • nginx: Installs the Nginx web server and enables access on port 80.
      • composer: Installs Composer globally.

      Notice that we’ve set up a few tags within each role. This makes it easy to re-run only parts of this playbook when necessary. If you make changes to your Nginx template file, for instance, you can re-run only the Nginx role by passing --tags nginx to ansible-playbook.

      The following command will execute this playbook on all servers from your inventory file. The --ask-vault-pass option is only necessary if you used ansible-vault to encrypt variable files in the previous step:

      • ansible-playbook -i hosts server-setup.yml -u root --ask-vault-pass

      You’ll get output similar to this:


      PLAY [all] **********************************************************************************************

      TASK [Gathering Facts] **********************************************************************************
      ok: []
      ok: []

      TASK [setup : Install Prerequisites] ********************************************************************
      changed: []
      changed: []

      ...

      RUNNING HANDLER [nginx : Reload Nginx] ******************************************************************
      changed: []
      changed: []

      PLAY RECAP **********************************************************************************************
      : ok=31   changed=27   unreachable=0    failed=0    skipped=0    rescued=0    ignored=1
      : ok=31   changed=27   unreachable=0    failed=0    skipped=0    rescued=0    ignored=1

      Your node(s) are now ready to serve PHP applications using Nginx and PHP-FPM, with MariaDB as the database server. In the next step, we’ll deploy the included demo Laravel app with the laravel-deploy.yml Ansible playbook.

      Step 5 — Deploying the Laravel Application

      Now that you have a working LEMP environment on your remote server(s), you can execute the laravel-deploy.yml playbook. This playbook will execute the following tasks:

      1. Create the application document root on the remote server, if it hasn’t already been created.
      2. Synchronize the local application folder to the remote server using the synchronize module.
      3. Use the acl module to set permissions for the www-data user on the storage folder.
      4. Set up the .env application file based on the laravel-env.j2 template.
      5. Install application dependencies with Composer.
      6. Generate application security key.
      7. Set up a public link for the storage folder.
      8. Run database migrations and seeders.
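      Step 4 in the list above fills Jinja2-style {{ variable }} placeholders in laravel-env.j2 with values from your variable files. The substitution itself can be sketched in a few lines of Python (the template fragment and values here are illustrative, not the full laravel-env.j2):

```python
import re

# Illustrative fragment of a Jinja2-style template, using variable
# names from group_vars/all.yml (not the complete laravel-env.j2 file).
template = "APP_NAME={{ app_name }}\nAPP_ENV={{ app_env }}\nDB_DATABASE={{ db_database }}"
variables = {"app_name": "Travellist", "app_env": "dev", "db_database": "travellist"}

def render(text, values):
    # Replace each "{{ name }}" placeholder with its value.
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: str(values[m.group(1)]), text)

print(render(template, variables))
```

      Ansible’s template module performs this kind of substitution (via the full Jinja2 engine) and writes the result to the remote server as the application’s .env file.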

      This playbook should be executed by a non-root user with sudo permissions. This user should have been created when you executed the server-setup.yml playbook in the previous step, using the name defined by the remote_user variable.

      When you’re ready, run the laravel-deploy.yml playbook with:

      • ansible-playbook -i hosts laravel-deploy.yml -u sammy --ask-vault-pass

      The --ask-vault-pass option is only necessary if you used ansible-vault to encrypt variable files in the previous step.

      You’ll get output similar to this:


      PLAY [all] **********************************************************************************************

      TASK [Gathering Facts] **********************************************************************************
      ok: []
      ok: []

      TASK [Make sure the remote app root exists and has the right permissions] *******************************
      ok: []
      ok: []

      TASK [Rsync application files to the remote server] *****************************************************
      ok: []
      ok: []

      ...

      TASK [Run Migrations + Seeders] *************************************************************************
      ok: []
      ok: []

      PLAY RECAP **********************************************************************************************
      : ok=10   changed=9    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
      : ok=10   changed=9    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

      When the execution is finished, you can access the demo application by pointing your browser to your node’s domain name or IP address:


      You will see a page like this:

      [Screenshot: Laravel Travellist demo application]


      This tutorial demonstrates how to set up an Ansible inventory file and connect to remote nodes, and how to run Ansible playbooks to set up a LEMP server and deploy a Laravel demo application to it. This guide complements the Automating Server Setup with Ansible Workshop Kit’s slides and speaker notes, and is accompanied by a demo GitHub repository containing all necessary files to follow along with the demo component of this workshop.


      Getting Started with Containers and Kubernetes: A DigitalOcean Workshop Kit

      Getting Started with Containers and Kubernetes Workshop Kit Materials

      This meetup kit is designed to help a technical audience become familiar with core Kubernetes concepts and practices.

      The aim is to provide a complete set of resources for a speaker to host an event and deliver an introductory talk on containers and Kubernetes. It includes slides and speaker notes, a GitHub repository containing the demo Flask app code and Kubernetes manifests, and this tutorial.

      This tutorial is intended to supplement the talk demo with additional detail and elucidation. It also serves as a reference for readers seeking to get a minimal containerized Flask app up and running on DigitalOcean Kubernetes.


      In the past decade, containerized applications and container clusters have rapidly replaced the old paradigm of scaling applications using virtual machines. Containers offer the same process isolation, but are generally more lightweight, portable, and performant than full virtualization. Container clusters, which can be used to manage thousands of running containers across a set of physical machines, abstract away much of the work of rolling out new versions of applications, scaling them, and efficiently scheduling workloads. Out of these, Kubernetes has emerged as a mature, production-ready system. It provides a rich set of features like rolling deployments, health checking, self-monitoring, workload autoscaling, and much, much more.

      This tutorial, designed to accompany the Slides and speaker notes for the Getting Started with Kubernetes Meetup Kit, will show you how to harness these technologies and deploy the “Hello World” Flask app onto a DigitalOcean Kubernetes cluster.


      To follow this tutorial, you will need:

      • A Kubernetes 1.10+ cluster with role-based access control (RBAC) enabled. This setup will use a DigitalOcean Kubernetes cluster.
      • The kubectl command-line tool installed on your local machine or development server and configured to connect to your cluster. You can read more about installing kubectl in the official documentation.
      • Docker installed on your local machine or development server. If you are working with Ubuntu 18.04, follow Steps 1 and 2 of How To Install and Use Docker on Ubuntu 18.04; otherwise, follow the official documentation for information about installing on other operating systems. Be sure to add your non-root user to the docker group, as described in Step 2 of the linked tutorial.
      • A Docker Hub account (optional). For an overview of how to set this up, refer to this introduction to Docker Hub. You’ll only need a Docker Hub account if you plan on modifying the Flask Docker image described in this tutorial.

      Step 1 — Cloning the App Repository and Building the Flask Image

      To begin, clone the demo Flask app repo onto your machine, navigate into the directory, and list the directory contents:

      • git clone
      • cd k8s-intro-meetup-kit
      • ls


      LICENSE app k8s

      The app directory contains the Flask demo app code, as well as the Dockerfile for building its container image. The k8s directory contains Kubernetes manifest files for a Pod, Deployment, and Service. To learn more about these Kubernetes objects, consult the slide deck or An Introduction to Kubernetes.

      Navigate into the app directory and print out the contents of the app.py file:


      from flask import Flask

      app = Flask(__name__)

      @app.route('/')
      def hello_world():
          return 'Hello, World!'

      if __name__ == "__main__":
          app.run(debug=True, host='0.0.0.0')

      This code defines a single default route that will print “Hello World.” Additionally, the app runs in debug mode to enable verbose output.

      In a similar fashion, cat out the contents of the app’s Dockerfile:

      • cat Dockerfile


      FROM python:3-alpine
      WORKDIR /app
      COPY requirements.txt .
      RUN pip install -r requirements.txt
      COPY . .
      EXPOSE 5000
      CMD ["python", "app.py"]

      This Dockerfile first sources a lightweight Alpine Linux Python parent image. It then copies in the Python requirements file, installs Flask, copies the app code into the container image, defines port 5000 as the container port, and finally sets the default command to python app.py.

      Next, build the app image:

      • cd app
      • docker build -t flask_demo:v0 .

      We give the image a name, flask_demo, and a tag, v0, using the -t option.

      After Docker finishes the build, run the container using docker run:

      • docker run -p 5000:5000 flask_demo:v0

      This command runs a container using the flask_demo:v0 image, and forwards local port 5000 to container port 5000.

      If you’re running Docker on your local machine, navigate to http://localhost:5000 in your web browser. You should see “Hello World,” generated by the dockerized Flask app.

      If you’re running Docker on a dev server, navigate instead to http://dev_server_external_IP:5000. If you’re running a firewall like UFW, be sure to allow external access on port 5000. To learn more about doing this with UFW, consult UFW Essentials: Common Firewall Rules and Commands.

      At this point you can experiment with Docker commands like docker ps, docker top, and docker images to practice working with images and containers on your system.

      In the next step, we’ll deploy this demo app to your Kubernetes cluster. We’ll use a prebuilt image shared publicly on Docker Hub. If you’d like to customize the Flask app and use your own image, you should create a Docker Hub account and follow the steps in this introduction to push your image to a public repository. From there, Kubernetes will be able to pull and deploy the container image into your cluster.

      Step 2 — Deploying the Flask App on Kubernetes

      The app and Docker image described in the previous step have already been built and made publicly available in the flask-helloworld Docker Hub repository. You can optionally create your own repository for the app and substitute it for flask-helloworld throughout this step.

      We’ll first deploy this demo “Hello World” app into our cluster as a standalone Pod, then as a multi-pod Deployment, which we’ll finally expose as a LoadBalancer Service. At the end of this tutorial, the “Hello World” app will be publicly accessible from outside of the Kubernetes cluster.

      Before we launch any workloads into the cluster, we’ll create a Namespace in which the objects will run. Namespaces allow you to segment your cluster and limit scope for running workloads.

      Create a Namespace called flask:

      • kubectl create namespace flask

      Now, list all the Namespaces in your cluster:

      • kubectl get namespace

      You should see your new Namespace as well as some default Namespaces like kube-system and default. In this tutorial, we are going to exclusively work within the flask Namespace.

      Navigate back out to the k8s directory in the demo repo:

      In this directory, you’ll see three Kubernetes manifest files:

      • flask-pod.yaml: The app Pod manifest
      • flask-deployment.yaml: The app Deployment manifest
      • flask-service.yaml: The app LoadBalancer Service manifest

      Let’s take a look at the Pod manifest:

      • cat flask-pod.yaml


      apiVersion: v1
      kind: Pod
      metadata:
        name: flask-pod
        labels:
          app: flask-helloworld
      spec:
        containers:
        - name: flask
          image: hjdo/flask-helloworld:latest
          ports:
          - containerPort: 5000

      Here, we define a minimal Pod called flask-pod and label it with the app: flask-helloworld key-value pair.

      We then name the single container flask and set the image to flask-helloworld:latest from the hjdo/flask-helloworld Docker Hub repository. If you’re using an image stored in a different Docker Hub repo, you can reference it using the image field here. Finally, we open up port 5000 to accept incoming connections.

      Deploy this Pod into the flask Namespace using kubectl apply -f and the -n Namespace flag:

      • kubectl apply -f flask-pod.yaml -n flask

      After ten or so seconds, the Pod should be up and running in your cluster:

      • kubectl get pod -n flask


      NAME        READY   STATUS    RESTARTS   AGE
      flask-pod   1/1     Running   0          4s

      Since this Pod is running inside of the Kubernetes cluster, we need to forward a local port to the Pod’s containerPort to access the running app locally:

      • kubectl port-forward pods/flask-pod -n flask 5000:5000

      Here we use port-forward to forward local port 5000 to the Pod’s containerPort 5000.

      Navigate to http://localhost:5000, where you should once again see the “Hello World” text generated by the Flask app. If you’re running kubectl on a remote dev server, replace localhost with your dev server’s external IP address.

      Feel free to play around with kubectl commands like kubectl describe to explore the Pod resource. When you’re done, delete the Pod using kubectl delete:

      • kubectl delete pod flask-pod -n flask

      Next, we’ll roll out this Pod in a scalable fashion using the Deployment resource. Print out the contents of the flask-deployment.yaml manifest file:

      • cat flask-deployment.yaml


      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: flask-dep
        labels:
          app: flask-helloworld
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: flask-helloworld
        template:
          metadata:
            labels:
              app: flask-helloworld
          spec:
            containers:
            - name: flask
              image: hjdo/flask-helloworld:latest
              ports:
              - containerPort: 5000

      Here, we define a Deployment called flask-dep with an app: flask-helloworld Label. Next, we request 2 replicas of a Pod template identical to the template we previously used to deploy the Flask app Pod. The selector field matches the app: flask-helloworld Pod template to the Deployment.

      Roll out the Deployment using kubectl apply -f:

      • kubectl apply -f flask-deployment.yaml -n flask

      After a brief moment, the Deployment should be up and running in your cluster:

      • kubectl get deploy -n flask


      NAME        READY   UP-TO-DATE   AVAILABLE   AGE
      flask-dep   2/2     2            2           5s

      You can also pull up the individual Pods that are managed by the Deployment controller:

      • kubectl get pods -n flask


      NAME                        READY   STATUS    RESTARTS   AGE
      flask-dep-876bd7677-bl4lg   1/1     Running   0          76s
      flask-dep-876bd7677-jbfpb   1/1     Running   0          76s

      To access the app, we have to forward a port inside of the cluster:

      • kubectl port-forward deployment/flask-dep -n flask 5000:5000

      This will forward local port 5000 to containerPort 5000 on one of the running Pods.

      You should be able to access the app at http://localhost:5000. If you’re running kubectl on a remote dev server, replace localhost with your dev server’s external IP address.

      At this point you can play around with commands like kubectl rollout and kubectl scale to experiment with rolling back Deployments and scaling them. To learn more about these and other kubectl commands, consult a kubectl Cheat Sheet.

      In the final step, we’ll expose this app to outside users using the LoadBalancer Service type, which will automatically provision a DigitalOcean cloud Load Balancer for the Flask app Service.

      Step 3 — Creating the App Service

      A Kubernetes Deployment allows the operator to flexibly scale a Pod template up or down, as well as manage rollouts and template updates. To create a stable network endpoint for this set of running Pod replicas, you can create a Kubernetes Service, which we’ll do here.

      Begin by inspecting the Service manifest file:

      • cat flask-service.yaml


      apiVersion: v1
      kind: Service
      metadata:
        name: flask-svc
        labels:
          app: flask-helloworld
      spec:
        type: LoadBalancer
        ports:
        - port: 80
          targetPort: 5000
          protocol: TCP
        selector:
          app: flask-helloworld

      This manifest defines a Service called flask-svc. We set the type to LoadBalancer to provision a DigitalOcean cloud Load Balancer that will route traffic to the Deployment Pods. To select the already running Deployment, the selector field is set to the Deployment’s app: flask-helloworld Label. Finally, we open up port 80 on the Load Balancer and instruct it to route traffic to the Pods’ containerPort 5000.
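      The label-matching behind the selector field is a subset check: a Pod backs the Service when its labels include every key-value pair in the selector. A minimal Python sketch (the first two Pod names match the Deployment output above; the third is a hypothetical non-matching Pod for contrast):

```python
# A Service selects backend Pods whose labels contain every
# key/value pair in its selector (a subset match).
service_selector = {"app": "flask-helloworld"}

pods = [
    {"name": "flask-dep-876bd7677-bl4lg", "labels": {"app": "flask-helloworld"}},
    {"name": "flask-dep-876bd7677-jbfpb", "labels": {"app": "flask-helloworld"}},
    {"name": "unrelated-pod", "labels": {"app": "redis"}},  # hypothetical, for contrast
]

def matches(selector, labels):
    # True only if every selector key/value pair appears in the Pod's labels.
    return all(labels.get(key) == value for key, value in selector.items())

backends = [pod["name"] for pod in pods if matches(service_selector, pod["labels"])]
print(backends)
```

      Because the Deployment’s Pod template carries the app: flask-helloworld label, both replicas are selected automatically, and any new replicas created by scaling the Deployment will be picked up as well.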

      To create the Service, use kubectl apply -f:

      • kubectl apply -f flask-service.yaml -n flask

      It may take a bit of time for Kubernetes to provision the cloud Load Balancer. You can track progress using the -w watch flag:

      • kubectl get svc -w -n flask

      Once you see an external IP for the flask-svc Service, navigate to it using your web browser. You should see the “Hello World” Flask app page.


      This brief tutorial demonstrates how to containerize a minimal Flask app and deploy it to a Kubernetes cluster. It accompanies the meetup kit’s slides and speaker notes and GitHub repository.
