      How To Install and Use Istio


      Introduction

      A service mesh is an infrastructure layer that allows you to manage communication between your application’s microservices. As more developers work with microservices, service meshes have evolved to make that work easier and more effective by consolidating common management and administrative tasks in a distributed setup.

      Using a service mesh like Istio can simplify tasks like service discovery, routing and traffic configuration, encryption and authentication/authorization, and monitoring and telemetry. Istio, in particular, is designed to work without major changes to pre-existing service code. When working with Kubernetes, for example, it is possible to add service mesh capabilities to applications running in your cluster by building out Istio-specific objects that work with existing application resources.

      In this tutorial, you will install Istio using the Helm package manager for Kubernetes. You will then use Istio to expose a demo Node.js application to external traffic by creating Gateway and Virtual Service resources. Finally, you will access the Grafana telemetry addon to visualize your application traffic data.

      Prerequisites

      To complete this tutorial, you will need:

      Note: We highly recommend a cluster with at least 8GB of available memory and 4vCPUs for this setup. This tutorial will use three of DigitalOcean’s standard 4GB/2vCPU Droplets as nodes.

      Step 1 — Packaging the Application

      To use our demo application with Kubernetes, we will need to clone the code and package it so that the kubelet agent can pull the image.

Our first step will be to clone the nodejs-image-demo repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Build a Node.js Application with Docker, which describes how to build an image for a Node.js application and how to create a container using this image. You can find more information about the application itself in the series From Containers to Kubernetes with Node.js.

      To get started, clone the nodejs-image-demo repository into a directory called istio_project:

      • git clone https://github.com/do-community/nodejs-image-demo.git istio_project

Navigate to the istio_project directory:

• cd istio_project

      This directory contains files and folders for a shark information application that offers users basic information about sharks. In addition to the application files, the directory contains a Dockerfile with instructions for building a Docker image with the application code. For more information about the instructions in the Dockerfile, see Step 3 of How To Build a Node.js Application with Docker.

      To test that the application code and Dockerfile work as expected, you can build and tag the image using the docker build command, and then use the image to run a demo container. Using the -t flag with docker build will allow you to tag the image with your Docker Hub username so that you can push it to Docker Hub once you've tested it.

      Build the image with the following command:

      • docker build -t your_dockerhub_username/node-demo .

      The . in the command specifies that the build context is the current directory. We've named the image node-demo, but you are free to name it something else.

Once the build process is complete, you can list your images with the docker images command:

• docker images

      You will see the following output confirming the image build:

      Output

REPOSITORY                          TAG         IMAGE ID       CREATED         SIZE
your_dockerhub_username/node-demo   latest      37f1c2939dbf   5 seconds ago   77.6MB
node                                10-alpine   9dfa73010b19   2 days ago      75.3MB

      Next, you'll use docker run to create a container based on this image. We will include three flags with this command:

      • -p: This publishes the port on the container and maps it to a port on our host. We will use port 80 on the host, but you should feel free to modify this as necessary if you have another process running on that port. For more information about how this works, see this discussion in the Docker docs on port binding.
      • -d: This runs the container in the background.
      • --name: This allows us to give the container a customized name.

Run the following command to create and start the container:

      • docker run --name node-demo -p 80:8080 -d your_dockerhub_username/node-demo

Inspect your running containers with docker ps:

• docker ps

      You will see output confirming that your application container is running:

      Output

CONTAINER ID   IMAGE                               COMMAND                  CREATED         STATUS         PORTS                  NAMES
49a67bafc325   your_dockerhub_username/node-demo   "docker-entrypoint.s…"   8 seconds ago   Up 6 seconds   0.0.0.0:80->8080/tcp   node-demo

      You can now visit your server IP to test your setup: http://your_server_ip. Your application will display the following landing page:

      Application Landing Page

Now that you have tested the application, you can stop the running container. Use docker ps again to get your CONTAINER ID:

• docker ps

      Output

CONTAINER ID   IMAGE                               COMMAND                  CREATED              STATUS              PORTS                  NAMES
49a67bafc325   your_dockerhub_username/node-demo   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:80->8080/tcp   node-demo

Stop the container with docker stop. Be sure to replace the CONTAINER ID listed here with your own application CONTAINER ID:

• docker stop 49a67bafc325

      Now that you have tested the image, you can push it to Docker Hub. First, log in to the Docker Hub account you created in the prerequisites:

      • docker login -u your_dockerhub_username

      When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your non-root user's home directory with your Docker Hub credentials.
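
If you'd like to confirm that the login succeeded, you can inspect that file; note that, depending on your configuration, the credentials it stores may be base64-encoded rather than encrypted, so treat the file as sensitive:

• cat ~/.docker/config.json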

      Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_username with your own Docker Hub username:

      • docker push your_dockerhub_username/node-demo

      You now have an application image that you can pull to run your application with Kubernetes and Istio. Next, you can move on to installing Istio with Helm.

      Step 2 — Installing Istio with Helm

      Although Istio offers different installation methods, the documentation recommends using Helm to maximize flexibility in managing configuration options. We will install Istio with Helm and ensure that the Grafana addon is enabled so that we can visualize traffic data for our application.

      First, add the Istio release repository:

      • helm repo add istio.io https://storage.googleapis.com/istio-release/releases/1.1.7/charts/

      This will enable you to use the Helm charts in the repository to install Istio.

Check that you have the repo:

• helm repo list

      You should see the istio.io repo listed:

      Output

NAME       URL
stable     https://kubernetes-charts.storage.googleapis.com
local      http://127.0.0.1:8879/charts
istio.io   https://storage.googleapis.com/istio-release/releases/1.1.7/charts/

      Next, install Istio's Custom Resource Definitions (CRDs) with the istio-init chart using the helm install command:

      • helm install --name istio-init --namespace istio-system istio.io/istio-init

      Output

NAME:   istio-init
LAST DEPLOYED: Fri Jun  7 17:13:32 2019
NAMESPACE: istio-system
STATUS: DEPLOYED
...

      This command commits 53 CRDs to the kube-apiserver, making them available for use in the Istio mesh. It also creates a namespace for the Istio objects called istio-system and uses the --name option to name the Helm release istio-init. A release in Helm refers to a particular deployment of a chart with specific configuration options enabled.
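
If you'd like to confirm that the release was created, you can ask Helm to list it (this sketch assumes Helm 2, which the --name syntax used in this tutorial implies). The istio-init release should appear with a STATUS of DEPLOYED:

• helm ls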

      To check that all of the required CRDs have been committed, run the following command:

• kubectl get crds | grep 'istio.io\|certmanager.k8s.io' | wc -l

      This should output the number 53.

      You can now install the istio chart. To ensure that the Grafana telemetry addon is installed with the chart, we will use the --set grafana.enabled=true configuration option with our helm install command. We will also use the installation protocol for our desired configuration profile: the default profile. Istio has a number of configuration profiles to choose from when installing with Helm that allow you to customize the Istio control plane and data plane sidecars. The default profile is recommended for production deployments, and we'll use it to familiarize ourselves with the configuration options that we would use when moving to production.
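
If you would like to preview everything the chart will render before committing to an installation, Helm 2 supports a dry run; the following sketch uses the same options as the installation command below:

• helm install --name istio --namespace istio-system --set grafana.enabled=true istio.io/istio --dry-run --debug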

      Run the following helm install command to install the chart:

      • helm install --name istio --namespace istio-system --set grafana.enabled=true istio.io/istio

      Output

NAME:   istio
LAST DEPLOYED: Fri Jun  7 17:18:33 2019
NAMESPACE: istio-system
STATUS: DEPLOYED
...

      Again, we're installing our Istio objects into the istio-system namespace and naming the release — in this case, istio.

      We can verify that the Service objects we expect for the default profile have been created with the following command:

      • kubectl get svc -n istio-system

      The Services we would expect to see here include istio-citadel, istio-galley, istio-ingressgateway, istio-pilot, istio-policy, istio-sidecar-injector, istio-telemetry, and prometheus. We would also expect to see the grafana Service, since we enabled this addon during installation:

      Output

NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                                                                                                                                      AGE
grafana                  ClusterIP      10.245.85.162    <none>            3000/TCP                                                                                                                                     3m26s
istio-citadel            ClusterIP      10.245.135.45    <none>            8060/TCP,15014/TCP                                                                                                                           3m25s
istio-galley             ClusterIP      10.245.46.245    <none>            443/TCP,15014/TCP,9901/TCP                                                                                                                   3m26s
istio-ingressgateway     LoadBalancer   10.245.171.39    174.138.125.110   15020:30707/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:30285/TCP,15030:31668/TCP,15031:32297/TCP,15032:30853/TCP,15443:30406/TCP   3m26s
istio-pilot              ClusterIP      10.245.56.97     <none>            15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                       3m26s
istio-policy             ClusterIP      10.245.206.189   <none>            9091/TCP,15004/TCP,15014/TCP                                                                                                                 3m26s
istio-sidecar-injector   ClusterIP      10.245.223.99    <none>            443/TCP                                                                                                                                      3m25s
istio-telemetry          ClusterIP      10.245.5.215     <none>            9091/TCP,15004/TCP,15014/TCP,42422/TCP                                                                                                       3m26s
prometheus               ClusterIP      10.245.100.132   <none>            9090/TCP                                                                                                                                     3m26s

      We can also check for the corresponding Istio Pods with the following command:

      • kubectl get pods -n istio-system

      The Pods corresponding to these services should have a STATUS of Running, indicating that the Pods are bound to nodes and that the containers associated with the Pods are running:

      Output

NAME                                     READY   STATUS      RESTARTS   AGE
grafana-67c69bb567-t8qrg                 1/1     Running     0          4m25s
istio-citadel-fc966574d-v5rg5            1/1     Running     0          4m25s
istio-galley-cf776876f-5wc4x             1/1     Running     0          4m25s
istio-ingressgateway-7f497cc68b-c5w64    1/1     Running     0          4m25s
istio-init-crd-10-bxglc                  0/1     Completed   0          9m29s
istio-init-crd-11-dv5lz                  0/1     Completed   0          9m29s
istio-pilot-785694f946-m5wp2             2/2     Running     0          4m25s
istio-policy-79cff99c7c-q4z5x            2/2     Running     1          4m25s
istio-sidecar-injector-c8ddbb99c-czvwq   1/1     Running     0          4m24s
istio-telemetry-578b6f967c-zk56d         2/2     Running     1          4m25s
prometheus-d8d46c5b5-k5wmg               1/1     Running     0          4m25s

      The READY field indicates how many containers in a Pod are running. For more information, please consult the documentation on Pod lifecycles.

      Note:
      If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:

      • kubectl describe pods your_pod -n pod_namespace
      • kubectl logs your_pod -n pod_namespace

      The final step in the Istio installation will be enabling the creation of Envoy proxies, which will be deployed as sidecars to services running in the mesh.

      Sidecars are typically used to add an extra layer of functionality in existing container environments. Istio's mesh architecture relies on communication between Envoy sidecars, which comprise the data plane of the mesh, and the components of the control plane. In order for the mesh to work, we need to ensure that each Pod in the mesh will also run an Envoy sidecar.

      There are two ways of accomplishing this goal: manual sidecar injection and automatic sidecar injection. We'll enable automatic sidecar injection by labeling the namespace in which we will create our application objects with the label istio-injection=enabled. This will ensure that the MutatingAdmissionWebhook controller can intercept requests to the kube-apiserver and perform a specific action — in this case, ensuring that all of our application Pods start with a sidecar.

      We'll use the default namespace to create our application objects, so we'll apply the istio-injection=enabled label to that namespace with the following command:

      • kubectl label namespace default istio-injection=enabled

      We can verify that the command worked as intended by running:

      • kubectl get namespace -L istio-injection

      You will see the following output:

      Output

NAME              STATUS   AGE   ISTIO-INJECTION
default           Active   47m   enabled
istio-system      Active   16m
kube-node-lease   Active   47m
kube-public       Active   47m
kube-system       Active   47m

      With Istio installed and configured, we can move on to creating our application Service and Deployment objects.

      Step 3 — Creating Application Objects

      With the Istio mesh in place and configured to inject sidecar Pods, we can create an application manifest with specifications for our Service and Deployment objects. Specifications in a Kubernetes manifest describe each object's desired state.

      Our application Service will ensure that the Pods running our containers remain accessible in a dynamic environment, as individual Pods are created and destroyed, while our Deployment will describe the desired state of our Pods.

Open a file called node-app.yaml with nano or your favorite editor:

• nano node-app.yaml

      First, add the following code to define the nodejs application Service:

      ~/istio_project/node-app.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: nodejs
        labels: 
          app: nodejs
      spec:
        selector:
          app: nodejs
        ports:
        - name: http
          port: 8080 
      

      This Service definition includes a selector that will match Pods with the corresponding app: nodejs label. We've also specified that the Service will target port 8080 on any Pod with the matching label.

      We are also naming the Service port, in compliance with Istio's requirements for Pods and Services. The http value is one of the values Istio will accept for the name field.
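
As a rough guide to Istio's naming scheme, a port name is either a bare protocol or a protocol followed by a hyphenated suffix; ports with unrecognized names are treated as plain TCP. This sketch shows two acceptable variants (the http-web name and second port are hypothetical):

ports:
- name: http        # bare protocol: accepted
  port: 8080
- name: http-web    # protocol plus suffix: also accepted
  port: 8081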

      Next, below the Service, add the following specifications for the application Deployment. Be sure to replace the image listed under the containers specification with the image you created and pushed to Docker Hub in Step 1:

      ~/istio_project/node-app.yaml

      ...
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nodejs
        labels:
          version: v1
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nodejs
        template:
          metadata:
            labels:
              app: nodejs
              version: v1
          spec:
            containers:
            - name: nodejs
              image: your_dockerhub_username/node-demo
              ports:
              - containerPort: 8080
      

      The specifications for this Deployment include the number of replicas (in this case, 1), as well as a selector that defines which Pods the Deployment will manage. In this case, it will manage Pods with the app: nodejs label.

      The template field contains values that do the following:

      • Apply the app: nodejs label to the Pods managed by the Deployment. Istio recommends adding the app label to Deployment specifications to provide contextual information for Istio's metrics and telemetry.
      • Apply a version label to specify the version of the application that corresponds to this Deployment. As with the app label, Istio recommends including the version label to provide contextual information.
      • Define the specifications for the containers the Pods will run, including the container name and the image. The image here is the image you created in Step 1 and pushed to Docker Hub. The container specifications also include a containerPort configuration to point to the port each container will listen on. If ports remain unlisted here, they will bypass the Istio proxy. Note that this port, 8080, corresponds to the targeted port named in the Service definition.

      Save and close the file when you are finished editing.

      With this file in place, we can move on to editing the file that will contain definitions for Gateway and Virtual Service objects, which control how traffic enters the mesh and how it is routed once there.

      Step 4 — Creating Istio Objects

      To control access to a cluster and routing to Services, Kubernetes uses Ingress Resources and Controllers. Ingress Resources define rules for HTTP and HTTPS routing to cluster Services, while Controllers load balance incoming traffic and route it to the correct Services.

      For more information about using Ingress Resources and Controllers, see How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes.
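
For a point of comparison only, a minimal Ingress Resource routing traffic to this tutorial's nodejs Service might look like the following sketch, using the extensions/v1beta1 API current at the time of writing; it is not part of the Istio setup:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nodejs-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nodejs
          servicePort: 8080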

      Istio uses a different set of objects to achieve similar ends, though with some important differences. Instead of using a Controller to load balance traffic, the Istio mesh uses a Gateway, which functions as a load balancer that handles incoming and outgoing HTTP/TCP connections. The Gateway then allows for monitoring and routing rules to be applied to traffic entering the mesh. Specifically, the configuration that determines traffic routing is defined as a Virtual Service. Each Virtual Service includes routing rules that match criteria with a specific protocol and destination.

      Though Kubernetes Ingress Resources/Controllers and Istio Gateways/Virtual Services have some functional similarities, the structure of the mesh introduces important differences. Kubernetes Ingress Resources and Controllers offer operators some routing options, for example, but Gateways and Virtual Services make a more robust set of functionalities available since they enable traffic to enter the mesh. In other words, the limited application layer capabilities that Kubernetes Ingress Controllers and Resources make available to cluster operators do not include the functionalities — including advanced routing, tracing, and telemetry — provided by the sidecars in the Istio service mesh.

To allow external traffic into our mesh and configure routing to our Node app, we will need to create an Istio Gateway and Virtual Service. Open a file called node-istio.yaml for the manifest:

• nano node-istio.yaml

      First, add the definition for the Gateway object:

~/istio_project/node-istio.yaml

      apiVersion: networking.istio.io/v1alpha3
      kind: Gateway
      metadata:
        name: nodejs-gateway
      spec:
        selector:
          istio: ingressgateway 
        servers:
        - port:
            number: 80
            name: http
            protocol: HTTP
          hosts:
          - "*"
      

      In addition to specifying a name for the Gateway in the metadata field, we've included the following specifications:

      • A selector that will match this resource with the default Istio IngressGateway controller that was enabled with the configuration profile we selected when installing Istio.
      • A servers specification that specifies the port to expose for ingress and the hosts exposed by the Gateway. In this case, we are specifying all hosts with an asterisk (*) since we are not working with a specific secured domain.

      Below the Gateway definition, add specifications for the Virtual Service:

      ~/istio_project/node-istio.yaml

      ...
      ---
      apiVersion: networking.istio.io/v1alpha3
      kind: VirtualService
      metadata:
        name: nodejs
      spec:
        hosts:
        - "*"
        gateways:
        - nodejs-gateway
        http:
        - route:
          - destination:
              host: nodejs
      

      In addition to providing a name for this Virtual Service, we're also including specifications for this resource that include:

      • A hosts field that specifies the destination host. In this case, we're again using a wildcard value (*) to enable quick access to the application in the browser, since we're not working with a domain.
      • A gateways field that specifies the Gateway through which external requests will be allowed. In this case, it's our nodejs-gateway Gateway.
      • The http field that specifies how HTTP traffic will be routed.
      • A destination field that indicates where the request will be routed. In this case, it will be routed to the nodejs service, which implicitly expands to the Service's Fully Qualified Domain Name (FQDN) in a Kubernetes environment: nodejs.default.svc.cluster.local. It's important to note, though, that the FQDN will be based on the namespace where the rule is defined, not the Service, so be sure to use the FQDN in this field when your application Service and Virtual Service are in different namespaces. To learn about Kubernetes Domain Name System (DNS) more generally, see An Introduction to the Kubernetes DNS Service.
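
For example, if the nodejs Service lived in a hypothetical shark-app namespace while this Virtual Service remained in the default namespace, the destination would need to spell out the FQDN, as in this sketch:

...
  http:
  - route:
    - destination:
        host: nodejs.shark-app.svc.cluster.local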

      Save and close the file when you are finished editing.

      With your yaml files in place, you can create your application Service and Deployment, as well as the Gateway and Virtual Service objects that will enable access to your application.

      Step 5 — Creating Application Resources and Enabling Telemetry Access

      Once you have created your application Service and Deployment objects, along with a Gateway and Virtual Service, you will be able to generate some requests to your application and look at the associated data in your Istio Grafana dashboards. First, however, you will need to configure Istio to expose the Grafana addon so that you can access the dashboards in your browser.

      We will enable Grafana access with HTTP, but when you are working in production or in sensitive environments, it is strongly recommended that you enable access with HTTPS.

      Because we set the --set grafana.enabled=true configuration option when installing Istio in Step 2, we have a Grafana Service and Pod in our istio-system namespace, which we confirmed in that Step.

      With those resources already in place, our next step will be to create a manifest for a Gateway and Virtual Service so that we can expose the Grafana addon.

Open the file for the manifest:

• nano node-grafana.yaml

      Add the following code to the file to create a Gateway and Virtual Service to expose and route traffic to the Grafana Service:

      ~/istio_project/node-grafana.yaml

      apiVersion: networking.istio.io/v1alpha3
      kind: Gateway
      metadata:
        name: grafana-gateway
        namespace: istio-system
      spec:
        selector:
          istio: ingressgateway
        servers:
        - port:
            number: 15031
            name: http-grafana
            protocol: HTTP
          hosts:
          - "*"
      ---
      apiVersion: networking.istio.io/v1alpha3
      kind: VirtualService
      metadata:
        name: grafana-vs
        namespace: istio-system
      spec:
        hosts:
        - "*"
        gateways:
        - grafana-gateway
        http:
        - match:
          - port: 15031
          route:
          - destination:
              host: grafana
              port:
                number: 3000
      

      Our Grafana Gateway and Virtual Service specifications are similar to those we defined for our application Gateway and Virtual Service in Step 4. There are a few differences, however:

      • Grafana will be exposed on the http-grafana named port (port 15031), and it will run on port 3000 on the host.
      • The Gateway and Virtual Service are both defined in the istio-system namespace.
      • The host in this Virtual Service is the grafana Service in the istio-system namespace. Since we are defining this rule in the same namespace that the Grafana Service is running in, FQDN expansion will again work without conflict.

Note: Because our current MeshPolicy is configured to run TLS in permissive mode, we do not need to apply a Destination Rule to our manifest. If you selected a different profile with your Istio installation, then you will need to add a Destination Rule to disable mutual TLS when enabling access to Grafana with HTTP. For more information on how to do this, you can refer to the official Istio documentation on enabling access to telemetry addons with HTTP.

      Save and close the file when you are finished editing.

      Create your Grafana resources with the following command:

      • kubectl apply -f node-grafana.yaml

      The kubectl apply command allows you to apply a particular configuration to an object in the process of creating or updating it. In our case, we are applying the configuration we specified in the node-grafana.yaml file to our Gateway and Virtual Service objects in the process of creating them.

      You can take a look at the Gateway in the istio-system namespace with the following command:

      • kubectl get gateway -n istio-system

      You will see the following output:

      Output

NAME              AGE
grafana-gateway   47s

      You can do the same thing for the Virtual Service:

      • kubectl get virtualservice -n istio-system

      Output

NAME         GATEWAYS            HOSTS   AGE
grafana-vs   [grafana-gateway]   [*]     74s

      With these resources created, we should be able to access our Grafana dashboards in the browser. Before we do that, however, let's create our application Service and Deployment, along with our application Gateway and Virtual Service, and check that we can access our application in the browser.

      Create the application Service and Deployment with the following command:

      • kubectl apply -f node-app.yaml

Wait a few seconds, and then check your application Pods with the following command:

• kubectl get pods

      Output

NAME                      READY   STATUS    RESTARTS   AGE
nodejs-7759fb549f-kmb7x   2/2     Running   0          40s

      Your application containers are running, as you can see in the STATUS column, but why does the READY column list 2/2 if the application manifest from Step 3 only specified 1 replica?

      This second container is the Envoy sidecar, which you can inspect with the following command. Be sure to replace the pod listed here with the NAME of your own nodejs Pod:

      • kubectl describe pod nodejs-7759fb549f-kmb7x

      Output

Name:           nodejs-7759fb549f-kmb7x
Namespace:      default
...
Containers:
  nodejs:
  ...
  istio-proxy:
    Container ID:  docker://f840d5a576536164d80911c46f6de41d5bc5af5152890c3aed429a1ee29af10b
    Image:         docker.io/istio/proxyv2:1.1.7
    Image ID:      docker-pullable://istio/proxyv2@sha256:e6f039115c7d5ef9c8f6b049866fbf9b6f5e2255d3a733bb8756b36927749822
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
    ...

      Next, create your application Gateway and Virtual Service:

      • kubectl apply -f node-istio.yaml

You can inspect the Gateway with the following command:

• kubectl get gateway

      Output

NAME             AGE
nodejs-gateway   7s

      And the Virtual Service:

      • kubectl get virtualservice

      Output

NAME     GATEWAYS           HOSTS   AGE
nodejs   [nodejs-gateway]   [*]     28s

      We are now ready to test access to the application. To do this, we will need the external IP associated with our istio-ingressgateway Service, which is a LoadBalancer Service type.

      Get the external IP for the istio-ingressgateway Service with the following command:

      • kubectl get svc -n istio-system

      You will see output like the following:

      Output

NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP         PORT(S)                                                                                                                                      AGE
grafana                  ClusterIP      10.245.85.162    <none>              3000/TCP                                                                                                                                     42m
istio-citadel            ClusterIP      10.245.135.45    <none>              8060/TCP,15014/TCP                                                                                                                           42m
istio-galley             ClusterIP      10.245.46.245    <none>              443/TCP,15014/TCP,9901/TCP                                                                                                                   42m
istio-ingressgateway     LoadBalancer   10.245.171.39    ingressgateway_ip   15020:30707/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:30285/TCP,15030:31668/TCP,15031:32297/TCP,15032:30853/TCP,15443:30406/TCP   42m
istio-pilot              ClusterIP      10.245.56.97     <none>              15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                       42m
istio-policy             ClusterIP      10.245.206.189   <none>              9091/TCP,15004/TCP,15014/TCP                                                                                                                 42m
istio-sidecar-injector   ClusterIP      10.245.223.99    <none>              443/TCP                                                                                                                                      42m
istio-telemetry          ClusterIP      10.245.5.215     <none>              9091/TCP,15004/TCP,15014/TCP,42422/TCP                                                                                                       42m
prometheus               ClusterIP      10.245.100.132   <none>              9090/TCP                                                                                                                                     42m

      The istio-ingressgateway should be the only Service with the TYPE LoadBalancer, and the only Service with an external IP.
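
If you'd rather extract just that external IP instead of scanning the table, kubectl's jsonpath output format can do it:

• kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'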

      Navigate to this external IP in your browser: http://ingressgateway_ip.

      You should see the following landing page:

      Application Landing Page

Next, generate some load by refreshing the page five or six times.
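
Alternatively, you can generate the same traffic from a terminal with a short curl loop; replace ingressgateway_ip with your external IP:

• for i in $(seq 1 20); do curl -s -o /dev/null http://ingressgateway_ip; done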

      You can now check the Grafana dashboard to look at traffic data.

      In your browser, navigate to the following address, again using your istio-ingressgateway external IP and the port you defined in your Grafana Gateway manifest: http://ingressgateway_ip:15031.

      You will see the following landing page:

      Grafana Home Dash

      Clicking on Home at the top of the page will bring you to a page with an istio folder. To get a list of dropdown options, click on the istio folder icon:

      Istio Dash Options Dropdown Menu

      From this list of options, click on Istio Service Dashboard.

      This will bring you to a landing page with another dropdown menu:

      Service Dropdown in Istio Service Dash

      Select nodejs.default.svc.cluster.local from the list of available options.

      You will now be able to look at traffic data for that service:

      Nodejs Service Dash

      You now have a functioning Node.js application running in an Istio service mesh with Grafana enabled and configured for external access.

      Conclusion

      In this tutorial, you installed Istio using the Helm package manager and used it to expose a Node.js application Service using Gateway and Virtual Service objects. You also configured Gateway and Virtual Service objects to expose the Grafana telemetry addon, in order to look at traffic data for your application.

      As you move toward production, you will want to take steps like securing your application Gateway with HTTPS and ensuring that access to your Grafana Service is also secure.

      You can also explore other telemetry-related tasks, including collecting and processing metrics, logs, and trace spans.




      How to Install and Secure phpMyAdmin with Nginx on a Debian 9 server


      Introduction

      While many users need the functionality of a database system like MySQL, interacting with the system solely from the MySQL command-line client requires familiarity with the SQL language, so it may not be the preferred interface for some.

      phpMyAdmin was created so that users can interact with MySQL through an intuitive web interface, running alongside a PHP development environment. In this guide, we’ll discuss how to install phpMyAdmin on top of an Nginx server, and how to configure the server for increased security.

Note: There are important security considerations when using software like phpMyAdmin: it runs on the database server, it handles database credentials, and it allows users to easily execute arbitrary SQL queries against your database. Because phpMyAdmin is a widely-deployed PHP application, it is frequently targeted for attack. We will go over some security measures you can take in this tutorial so that you can make informed decisions.

      Prerequisites

      Before you get started with this guide, you’ll need the following available to you:

      Because phpMyAdmin handles authentication using MySQL credentials, it is strongly advisable to install an SSL/TLS certificate to enable encrypted traffic between server and client. If you don’t have an existing domain configured with a valid certificate, you can follow the guide on How to Secure Nginx with Let’s Encrypt on Debian 9.

      Warning: If you don’t have an SSL/TLS certificate installed on the server and you still want to proceed, please consider enforcing access via SSH Tunnels as explained in Step 5 of this guide.

      Once you have met these prerequisites, you can go ahead with the rest of the guide.

      Step 1 — Installing phpMyAdmin

      The first thing we need to do is install phpMyAdmin on the LEMP server. We’re going to use the default Debian repositories to achieve this goal.

Let’s start by updating the server’s package index with:

• sudo apt update

      Now you can install phpMyAdmin with:

      • sudo apt install phpmyadmin

During the installation process, you will be prompted to choose a web server (either Apache or Lighttpd) to configure. Because we are using Nginx as our web server, we shouldn't make a choice here. Press TAB and then OK to advance to the next step.

      Next, you’ll be prompted whether to use dbconfig-common for configuring the application database. Select Yes. This will set up the internal database and administrative user for phpMyAdmin. You will be asked to define a new password for the phpmyadmin MySQL user. You can also leave it blank and let phpMyAdmin randomly create a password.

      The installation will now finish. For the Nginx web server to find and serve the phpMyAdmin files correctly, we’ll need to create a symbolic link from the installation files to Nginx's document root directory:

      • sudo ln -s /usr/share/phpmyadmin /var/www/html

      Your phpMyAdmin installation is now operational. To access the interface, go to your server's domain name or public IP address followed by /phpmyadmin in your web browser:

      https://server_domain_or_IP/phpmyadmin
      

      phpMyAdmin login screen

      As mentioned before, phpMyAdmin handles authentication using MySQL credentials, which means you should use the same username and password you would normally use to connect to the database via console or via an API. If you need help creating MySQL users, check this guide on How To Manage an SQL Database.

      Note: Logging into phpMyAdmin as the root MySQL user is discouraged because it represents a significant security risk. We'll see how to disable root login in a subsequent step of this guide.

      Your phpMyAdmin installation should be completely functional at this point. However, by installing a web interface, we've exposed our MySQL database server to the outside world. Because of phpMyAdmin's popularity, and the large amounts of data it may provide access to, installations like these are common targets for attacks. In the following sections of this guide, we'll see a few different ways in which we can make our phpMyAdmin installation more secure.

      Step 2 — Changing phpMyAdmin's Default Location

      One of the most basic ways to protect your phpMyAdmin installation is by making it harder to find. Bots will scan for common paths, like /phpmyadmin, /pma, /admin, /mysql and such. Changing the interface's URL from /phpmyadmin to something non-standard will make it much harder for automated scripts to find your phpMyAdmin installation and attempt brute-force attacks.

      With our phpMyAdmin installation, we've created a symbolic link pointing to /usr/share/phpmyadmin, where the actual application files are located. To change phpMyAdmin's interface URL, we will rename this symbolic link.

First, let's navigate to the Nginx document root directory and list the files it contains to get a better sense of the change we'll make:

• cd /var/www/html
• ls -l

      You’ll receive the following output:

      Output

total 8
-rw-r--r-- 1 root root 612 Apr  8 13:30 index.nginx-debian.html
lrwxrwxrwx 1 root root  21 Apr  8 15:36 phpmyadmin -> /usr/share/phpmyadmin

      The output shows that we have a symbolic link called phpmyadmin in this directory. We can change this link name to whatever we'd like. This will in turn change phpMyAdmin's access URL, which can help obscure the endpoint from bots hardcoded to search common endpoint names.

      Choose a name that obscures the purpose of the endpoint. In this guide, we'll name our endpoint /nothingtosee, but you should choose an alternate name. To accomplish this, we'll rename the link:

      • sudo mv phpmyadmin nothingtosee
      • ls -l

      After running the above commands, you’ll receive this output:

      Output

total 8
-rw-r--r-- 1 root root 612 Apr  8 13:30 index.nginx-debian.html
lrwxrwxrwx 1 root root  21 Apr  8 15:36 nothingtosee -> /usr/share/phpmyadmin

      Now, if you go to the old URL, you'll get a 404 error:

      https://server_domain_or_IP/phpmyadmin
      

      phpMyAdmin 404 error

      Your phpMyAdmin interface will now be available at the new URL we just configured:

      https://server_domain_or_IP/nothingtosee
      

      phpMyAdmin login screen

      By obfuscating phpMyAdmin's real location on the server, you're securing its interface against automated scans and manual brute-force attempts.

      Step 3 — Disabling Root Login

      On MySQL as well as within regular Linux systems, the root account is a special administrative account with unrestricted access to the system. In addition to being a privileged account, it's a known login name, which makes it an obvious target for brute-force attacks. To minimize risks, we'll configure phpMyAdmin to deny any login attempts coming from the user root. This way, even if you provide valid credentials for the user root, you'll still get an "access denied" error and won't be allowed to log in.

      Because we chose to use dbconfig-common to configure and store phpMyAdmin settings, the default configuration is currently stored in the database. We'll need to create a new config.inc.php file to define our custom settings.

      Even though the PHP files for phpMyAdmin are located inside /usr/share/phpmyadmin, the application uses configuration files located at /etc/phpmyadmin. We will create a new custom settings file inside /etc/phpmyadmin/conf.d, and name it pma_secure.php:

      • sudo nano /etc/phpmyadmin/conf.d/pma_secure.php

      The following configuration file contains the necessary settings to disable passwordless logins (AllowNoPassword set to false) and root login (AllowRoot set to false):

      /etc/phpmyadmin/conf.d/pma_secure.php

      <?php
      
      # PhpMyAdmin Settings
      # This should be set to a random string of at least 32 chars
      $cfg['blowfish_secret'] = '3!#32@3sa(+=_4?),5XP_:U%%834sdfSdg43yH#{o';
      
      $i=0;
      $i++;
      
      $cfg['Servers'][$i]['auth_type'] = 'cookie';
      $cfg['Servers'][$i]['AllowNoPassword'] = false;
      $cfg['Servers'][$i]['AllowRoot'] = false;
      
      ?>
      

Save the file when you're done editing by pressing CTRL + X, then Y and ENTER to confirm the changes. The changes will apply automatically. If you reload the login page now and try to log in as root, you will get an Access Denied error:

      access denied

      Root login is now prohibited on your phpMyAdmin installation. This security measure will block brute-force scripts from trying to guess the root database password on your server. Moreover, it will enforce the usage of less-privileged MySQL accounts for accessing phpMyAdmin's web interface, which by itself is an important security practice.

      Step 4 — Creating an Authentication Gateway

      Hiding your phpMyAdmin installation on an unusual location might sidestep some automated bots scanning the network, but it's useless against targeted attacks. To better protect a web application with restricted access, it's generally more effective to stop attackers before they can even reach the application. This way, they'll be unable to use generic exploits and brute-force attacks to guess access credentials.

      In the specific case of phpMyAdmin, it's even more important to keep the login interface locked away. By keeping it open to the world, you're offering a brute-force platform for attackers to guess your database credentials.

      Adding an extra layer of authentication to your phpMyAdmin installation enables you to increase security. Users will be required to pass through an HTTP authentication prompt before ever seeing the phpMyAdmin login screen. Most web servers, including Nginx, provide this capability natively.

To set this up, we first need to create a password file to store the authentication credentials. Nginx requires that passwords be hashed using the crypt() function. The OpenSSL suite, which should already be installed on your server, includes this functionality.

To create a hashed password, type:

• openssl passwd

You will be prompted to enter and confirm the password that you wish to use. The utility will then display a hashed version of the password that will look something like this:

      Output

      O5az.RSPzd.HE

      Copy this value, as you will need to paste it into the authentication file we'll be creating.

      Now, create an authentication file. We'll call this file pma_pass and place it in the Nginx configuration directory:

      • sudo nano /etc/nginx/pma_pass

      In this file, you’ll specify the username you would like to use, followed by a colon (:), followed by the encrypted version of the password you received from the openssl passwd utility.

      We are going to name our user sammy, but you should choose a different username. The file should look like this:

      /etc/nginx/pma_pass

      sammy:O5az.RSPzd.HE
      

      Save and close the file when you're done.

      Now we're ready to modify the Nginx configuration file. For this guide, we'll use the configuration file located at /etc/nginx/sites-available/example.com. You should use the relevant Nginx configuration file for the web location where phpMyAdmin is currently hosted. Open this file in your text editor to get started:

      • sudo nano /etc/nginx/sites-available/example.com

      Locate the server block, and the location / section within it. We need to create a new location section within this block to match phpMyAdmin's current path on the server. In this guide, phpMyAdmin's location relative to the web root is /nothingtosee:

/etc/nginx/sites-available/example.com

      server {
          . . .
      
              location / {
                      try_files $uri $uri/ =404;
              }
      
              location /nothingtosee {
                      # Settings for phpMyAdmin will go here
              }
      
          . . .
      }
      

Within this block, we'll need to set up two different directives: auth_basic, which defines the message that will be displayed on the authentication prompt, and auth_basic_user_file, which points to the file we just created. This is how your configuration file should look when you're finished:

/etc/nginx/sites-available/example.com

      server {
          . . .
      
              location /nothingtosee {
                      auth_basic "Admin Login";
                      auth_basic_user_file /etc/nginx/pma_pass;
              }
      
      
          . . .
      }
      

Save and close the file when you're done. To check if the configuration file is valid, you can run:

• sudo nginx -t

      The following output is expected:

      Output

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

      To activate the new authentication gate, you must reload the web server:

      • sudo systemctl reload nginx

      Now, if you visit the phpMyAdmin URL in your web browser, you should be prompted for the username and password you added to the pma_pass file:

      https://server_domain_or_IP/nothingtosee
      

      Nginx authentication page

      Once you enter your credentials, you'll be taken to the standard phpMyAdmin login page.

      Note: If refreshing the page does not work, you may have to clear your cache or use a different browser session if you've already been using phpMyAdmin.

      In addition to providing an extra layer of security, this gateway will help keep your MySQL logs clean of spammy authentication attempts.

      Step 5 — Setting Up Access via Encrypted Tunnels (Optional)

      For increased security, it is possible to lock down your phpMyAdmin installation to authorized hosts only. You can whitelist authorized hosts in your Nginx configuration file, so that any request coming from an IP address that is not on the list will be denied.

      Even though this feature alone can be enough in some use cases, it's not always the best long-term solution, mainly due to the fact that most people don't access the Internet from static IP addresses. As soon as you get a new IP address from your Internet provider, you'll be unable to get to the phpMyAdmin interface until you update the Nginx configuration file with your new IP address.

      For a more robust long-term solution, you can use IP-based access control to create a setup in which users will only have access to your phpMyAdmin interface if they're accessing from either an authorized IP address or localhost via SSH tunneling. We'll see how to set this up in the sections below.

      Combining IP-based access control with SSH tunneling greatly increases security because it fully blocks access coming from the public internet (except for authorized IPs), in addition to providing a secure channel between user and server through the use of encrypted tunnels.

      Setting Up IP-Based Access Control on Nginx

      On Nginx, IP-based access control can be defined in the corresponding location block of a given site, using the directives allow and deny. For instance, if we want to only allow requests coming from a given host, we should include the following two lines, in this order, inside the relevant location block for the site we would like to protect:

      allow hostname_or_IP;
      deny all;
      

You can allow as many hosts as you want; simply include one allow line for each authorized host/IP inside the respective location block for the site you're protecting. The directives will be evaluated in the order they are listed, until a match is found or the request is finally denied by the deny all directive.
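
For example, a rule set authorizing two hypothetical addresses, evaluated top to bottom, would look like this:

allow 203.0.113.111;    # first authorized host
allow 198.51.100.24;    # second authorized host
deny all;               # reject every other source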

      We'll now configure Nginx to only allow requests coming from localhost or your current IP address. First, you'll need to know the current public IP address your local machine is using to connect to the Internet. There are various ways to obtain this information; for simplicity, we're going to use the service provided by ipinfo.io. You can either open the URL https://ipinfo.io/ip in your browser, or run the following command from your local machine:

      • curl https://ipinfo.io/ip

      You should get a simple IP address as output, like this:

      Output

      203.0.113.111

      That is your current public IP address. We'll configure phpMyAdmin's location block to only allow requests coming from that IP, in addition to localhost. We'll need to edit once again the configuration block for phpMyAdmin inside /etc/nginx/sites-available/example.com.

      Open the Nginx configuration file using your command-line editor of choice:

      • sudo nano /etc/nginx/sites-available/example.com

      Because we already have an access rule within our current configuration, we need to combine it with IP-based access control using the directive satisfy all. This way, we can keep the current HTTP authentication prompt for increased security.

This is how your phpMyAdmin Nginx configuration should look after you're done editing:

      /etc/nginx/sites-available/example.com

      server {
          . . .
      
          location /nothingtosee {
              satisfy all; #requires both conditions
      
              allow 203.0.113.111; #allow your IP
              allow 127.0.0.1; #allow localhost via SSH tunnels
              deny all; #deny all other sources
      
              auth_basic "Admin Login";
              auth_basic_user_file /etc/nginx/pma_pass;
          }
      
          . . .
      }
      

      Remember to replace nothingtosee with the actual path where phpMyAdmin can be found, and the highlighted IP address with your current public IP address.

Save and close the file when you're done. To check if the configuration file is valid, you can run:

• sudo nginx -t

      The following output is expected:

      Output

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

      Now reload the web server so the changes take effect:

      • sudo systemctl reload nginx

      Because your IP address is explicitly listed as an authorized host, your access shouldn't be disturbed. Anyone else trying to access your phpMyAdmin installation will now get a 403 error (Forbidden):

      https://server_domain_or_IP/nothingtosee
      

      403 error

      In the next section, we'll see how to use SSH tunneling to access the web server through local requests. This way, you'll still be able to access phpMyAdmin's interface even when your IP address changes.

      Accessing phpMyAdmin Through an Encrypted Tunnel

      SSH tunneling works as a way of redirecting network traffic through encrypted channels. By running an ssh command similar to what you would use to log into a server, you can create a secure "tunnel" between your local machine and that server. All traffic coming in on a given local port can now be redirected through the encrypted tunnel and use the remote server as a proxy, before reaching out to the internet. It's similar to what happens when you use a VPN (Virtual Private Network), however SSH tunneling is much simpler to set up.

We'll use SSH tunneling to proxy our requests to the remote web server running phpMyAdmin. By creating a tunnel between your local machine and the server where phpMyAdmin is installed, you can redirect local requests to the remote web server and, more importantly, traffic will be encrypted and requests will reach Nginx as if they were coming from localhost. This way, no matter what IP address you're connecting from, you'll be able to securely access phpMyAdmin's interface.

      Because the traffic between your local machine and the remote web server will be encrypted, this is a safe alternative for situations where you can't have an SSL/TLS certificate installed on the web server running phpMyAdmin.

      From your local machine, run this command whenever you need access to phpMyAdmin:

      • ssh user@server_domain_or_IP -L 8000:localhost:80 -L 8443:localhost:443 -N

      Let's examine each part of the command:

• user: the SSH user to connect to the server where phpMyAdmin is running
• server_domain_or_IP: the SSH host where phpMyAdmin is running
• -L 8000:localhost:80: redirects HTTP traffic on local port 8000
• -L 8443:localhost:443: redirects HTTPS traffic on local port 8443
• -N: do not execute remote commands

Note: This command will block the terminal until interrupted with CTRL+C, which will end the SSH connection and stop the packet redirection. If you'd prefer to run the command in the background, you can use the SSH option -f, as shown below.
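
That background variant is identical to the command above plus the -f option; the tunnel stays up until you kill the ssh process:

• ssh user@server_domain_or_IP -L 8000:localhost:80 -L 8443:localhost:443 -f -N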

      Now, go to your browser and replace server_domain_or_IP with localhost:PORT, where PORT is either 8000 for HTTP or 8443 for HTTPS:

      http://localhost:8000/nothingtosee
      
https://localhost:8443/nothingtosee
      

      phpMyAdmin login screen

      Note: If you're accessing phpMyAdmin via https, you might get an alert message questioning the security of the SSL certificate. This happens because the domain name you're using (localhost) doesn't match the address registered within the certificate (domain where phpMyAdmin is actually being served). It is safe to proceed.

      All requests on localhost:8000 (HTTP) and localhost:8443 (HTTPS) are now being redirected through a secure tunnel to your remote phpMyAdmin application. Not only have you increased security by disabling public access to your phpMyAdmin, you also protected all traffic between your local computer and the remote server by using an encrypted tunnel to send and receive data.

If you'd like to enforce the usage of SSH tunneling for anyone who wants access to your phpMyAdmin interface (including you), you can do so by removing any other authorized IPs from the Nginx configuration file, leaving 127.0.0.1 as the only allowed host. Considering that nobody will be able to make direct requests to phpMyAdmin, it is safe to remove HTTP authentication in order to simplify your setup. This is how your configuration file would look in such a scenario:

      /etc/nginx/sites-available/example.com

      server {
          . . .
      
          location /nothingtosee { 
              allow 127.0.0.1; #allow localhost only
              deny all; #deny all other sources
          }
      
          . . .
      }
      

      Once you reload Nginx's configuration with sudo systemctl reload nginx, your phpMyAdmin installation will be locked down and users will be required to use SSH tunnels in order to access phpMyAdmin's interface via redirected requests.

      Conclusion

In this tutorial, we saw how to install phpMyAdmin on a Debian 9 server running Nginx as the web server. We also covered advanced methods to secure a phpMyAdmin installation, such as disabling root login, creating an extra layer of authentication, and using SSH tunneling to access a phpMyAdmin installation via local requests only.

      After completing this tutorial, you should be able to manage your MySQL databases from a reasonably secure web interface. This user interface exposes most of the functionality available via the MySQL command line. You can browse databases and schema, execute queries, and create new data sets and structures.




      How to Use Ansible to Install and Set Up Docker on Ubuntu 18.04


      Introduction

      With the popularization of containerized applications and microservices, server automation now plays an essential role in systems administration. It is also a way to establish standard procedures for new servers and reduce human error.

      This guide explains how to use Ansible to automate the steps contained in our guide on How To Install and Use Docker on Ubuntu 18.04. Docker is an application that simplifies the process of managing containers, resource-isolated processes that behave in a similar way to virtual machines, but are more portable, more resource-friendly, and depend more heavily on the host operating system.

      While you can complete this setup manually, using a configuration management tool like Ansible to automate the process will save you time and establish standard procedures that can be repeated through tens to hundreds of nodes. Ansible offers a simple architecture that doesn’t require special software to be installed on nodes, and it provides a robust set of features and built-in modules which facilitate writing automation scripts.

      Pre-Flight Check

      In order to execute the automated setup provided by the playbook discussed in this guide, you’ll need:

      Testing Connectivity to Nodes

To make sure Ansible is able to execute commands on your nodes, run the following command from your Ansible Control Node:

• ansible all -m ping

      This command will use Ansible's built-in ping module to run a connectivity test on all nodes from your default inventory file, connecting as the current system user. The ping module will test whether:

      • your Ansible hosts are accessible;
      • your Ansible Control Node has valid SSH credentials;
      • your hosts are able to run Ansible modules using Python.

      If you installed and configured Ansible correctly, you will get output similar to this:

      Output

server1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
server2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
server3 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

      Once you get a pong reply back from a host, it means you're ready to run Ansible commands and playbooks on that server.

      Note: If you are unable to get a successful response back from your servers, check our Ansible Cheat Sheet Guide for more information on how to run Ansible commands with custom connection options.

      What Does this Playbook Do?

      This Ansible playbook provides an alternative to manually running through the procedure outlined in our guide on How To Install and Use Docker on Ubuntu 18.04.

      Running this playbook will perform the following actions on your Ansible hosts:

      1. Install aptitude, which is preferred by Ansible as an alternative to the apt package manager.
      2. Install the required system packages.
      3. Install the Docker GPG APT key.
      4. Add the official Docker repository to the apt sources.
      5. Install Docker.
      6. Install the Python Docker module via pip.
      7. Pull the default image specified by default_container_image from Docker Hub.
      8. Create the number of containers defined by the create_containers variable, each named with default_container_name followed by a sequential number, using the image defined by default_container_image, and run the command defined by default_container_command in each new container.

      Once the playbook has finished running, you will have a number of containers created based on the options you defined within your configuration variables.

      How to Use this Playbook

      To get started, we'll download the contents of the playbook to your Ansible Control Node. For your convenience, the contents of the playbook are also included in the next section of this guide.

      Use curl to download this playbook from the command line:

      • curl -L https://raw.githubusercontent.com/do-community/ansible-playbooks/master/docker/ubuntu1804.yml -o docker_ubuntu.yml

      This will download the contents of the playbook to a file named docker_ubuntu.yml in your current working directory. You can examine the contents of the playbook by opening the file with your command-line editor of choice:
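      • nano docker_ubuntu.yml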

      Once you've opened the playbook file, you should notice a section named vars with variables that require your attention:

      docker_ubuntu.yml

      . . .
      vars:
        create_containers: 4
        default_container_name: docker
        default_container_image: ubuntu
        default_container_command: sleep 1d
      . . .
      

      Here's what these variables mean:

      • create_containers: The number of containers to create.
      • default_container_name: The base name for new containers; a sequence number is appended to it as each container is created (docker1, docker2, and so on).
      • default_container_image: Default Docker image to be used when creating containers.
      • default_container_command: Default command to run on new containers.
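      For instance, a hypothetical variation that creates two Nginx containers instead of four idle Ubuntu ones might look like this; the names and values here are only illustrative:

      docker_ubuntu.yml

      . . .
      vars:
        create_containers: 2
        default_container_name: webserver
        default_container_image: nginx
        default_container_command: nginx -g 'daemon off;'
      . . .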

      Once you're done updating the variables inside docker_ubuntu.yml, save and close the file. If you used nano, do so by pressing CTRL + X, Y, then ENTER.

      You're now ready to run this playbook on one or more servers. By default, most playbooks are configured to execute on every server in your inventory. You can use the -l flag to make sure that only a subset of servers, or a single server, is affected by the playbook. To execute the playbook only on server1, use the following command:

      • ansible-playbook docker_ubuntu.yml -l server1

      You will get output similar to this:

      Output

      ...

      TASK [Add Docker GPG apt Key] *****************************************
      changed: [server1]

      TASK [Add Docker Repository] ******************************************
      changed: [server1]

      TASK [Update apt and install docker-ce] *******************************
      changed: [server1]

      TASK [Install Docker Module for Python] *******************************
      changed: [server1]

      TASK [Pull default Docker image] **************************************
      changed: [server1]

      TASK [Create default containers] **************************************
      changed: [server1] => (item=1)
      changed: [server1] => (item=2)
      changed: [server1] => (item=3)
      changed: [server1] => (item=4)

      PLAY RECAP ************************************************************
      server1 : ok=9 changed=8 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

      Note: For more information on how to run Ansible playbooks, check our Ansible Cheat Sheet Guide.

      When the playbook is finished running, log in via SSH to the server provisioned by Ansible and run docker ps -a to check if the containers were successfully created:
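      • docker ps -a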

      You should see output similar to this:

      Output

      CONTAINER ID   IMAGE    COMMAND      CREATED         STATUS    PORTS   NAMES
      a3fe9bfb89cf   ubuntu   "sleep 1d"   5 minutes ago   Created           docker4
      8799c16cde1e   ubuntu   "sleep 1d"   5 minutes ago   Created           docker3
      ad0c2123b183   ubuntu   "sleep 1d"   5 minutes ago   Created           docker2
      b9350916ffd8   ubuntu   "sleep 1d"   5 minutes ago   Created           docker1

      This means the containers defined in the playbook were created successfully. Since this was the last task in the playbook, it also confirms that the playbook was fully executed on this server.

      The Playbook Contents

      You can find the Docker playbook featured in this tutorial in the ansible-playbooks repository within the DigitalOcean Community GitHub organization. To copy or download the script contents directly, click the Raw button towards the top of the script, or click here to view the raw contents directly.

      The full contents are also included here for your convenience:

      docker_ubuntu.yml

      
      ---
      - hosts: all
        become: true
        vars:
          create_containers: 4
          default_container_name: docker
          default_container_image: ubuntu
          default_container_command: sleep 1d
      
        tasks:
          - name: Install aptitude using apt
            apt: name=aptitude state=latest update_cache=yes force_apt_get=yes
      
          - name: Install required system packages
            apt: name={{ item }} state=latest update_cache=yes
            loop: [ 'apt-transport-https', 'ca-certificates', 'curl', 'software-properties-common', 'python3-pip', 'virtualenv', 'python3-setuptools']
      
          - name: Add Docker GPG apt Key
            apt_key:
              url: https://download.docker.com/linux/ubuntu/gpg
              state: present
      
          - name: Add Docker Repository
            apt_repository:
              repo: deb https://download.docker.com/linux/ubuntu bionic stable
              state: present
      
          - name: Update apt and install docker-ce
            apt: update_cache=yes name=docker-ce state=latest
      
          - name: Install Docker Module for Python
            pip:
              name: docker
      
          # Pull the image specified by the variable default_container_image from Docker Hub
          - name: Pull default Docker image
            docker_image:
              name: "{{ default_container_image }}"
              source: pull
      
          # Creates the number of containers defined by the variable create_containers, using default values
          - name: Create default containers
            docker_container:
              name: "{{ default_container_name }}{{ item }}"
              image: "{{ default_container_image }}"
              command: "{{ default_container_command }}"
              state: present
            with_sequence: count={{ create_containers }}
      
      

      Feel free to modify this playbook to best suit your individual needs within your own workflow. For example, you could use the docker_image module to push images to Docker Hub or the docker_container module to set up container networks.
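      As a sketch of the networking idea, the following tasks use Ansible's docker_network module alongside docker_container to create a user-defined network and attach a container to it; the network name app_net and the task names are illustrative, and the tasks assume the same variables used in the playbook above:

          - name: Create a container network
            docker_network:
              name: app_net

          - name: Create a container attached to the network
            docker_container:
              name: "{{ default_container_name }}net"
              image: "{{ default_container_image }}"
              command: "{{ default_container_command }}"
              state: present
              networks:
                - name: app_net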

      Conclusion

      Automating your infrastructure setup not only saves you time, but also helps ensure that your servers follow a standard configuration that can be customized to your needs. With the distributed nature of modern applications and the need for consistency between different staging environments, automation like this has become a central component in many teams' development processes.

      In this guide, we demonstrated how to use Ansible to automate the process of installing and setting up Docker on a remote server. Because each individual typically has different needs when working with containers, we encourage you to check out the official Ansible documentation for more information and use cases of the docker_container Ansible module.

      If you'd like to include other tasks in this playbook to further customize your initial server setup, please refer to our introductory Ansible guide Configuration Management 101: Writing Ansible Playbooks.


