
      How to Use Ansible to Install and Set Up Docker on Ubuntu 18.04


      Introduction

      With the popularization of containerized applications and microservices, server automation now plays an essential role in systems administration. It is also a way to establish standard procedures for new servers and reduce human error.

      This guide explains how to use Ansible to automate the steps contained in our guide on How To Install and Use Docker on Ubuntu 18.04. Docker is an application that simplifies the process of managing containers, resource-isolated processes that behave in a similar way to virtual machines, but are more portable, more resource-friendly, and depend more heavily on the host operating system.

      While you can complete this setup manually, using a configuration management tool like Ansible to automate the process will save you time and establish standard procedures that can be repeated through tens to hundreds of nodes. Ansible offers a simple architecture that doesn’t require special software to be installed on nodes, and it provides a robust set of features and built-in modules which facilitate writing automation scripts.

      Pre-Flight Check

      In order to execute the automated setup provided by the playbook discussed in this guide, you'll need an Ansible control node with Ansible installed and an inventory file configured, as well as one or more Ubuntu 18.04 servers (your Ansible hosts) that the control node can reach over SSH.

      Testing Connectivity to Nodes

      To make sure Ansible is able to execute commands on your nodes, run the following command from your Ansible Control Node:
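
      • ansible all -m ping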

      This command will use Ansible's built-in ping module to run a connectivity test on all nodes from your default inventory file, connecting as the current system user. The ping module will test whether:

      • your Ansible hosts are accessible;
      • your Ansible Control Node has valid SSH credentials;
      • your hosts are able to run Ansible modules using Python.

      If you installed and configured Ansible correctly, you will get output similar to this:

      Output

      server1 | SUCCESS => {
          "changed": false,
          "ping": "pong"
      }
      server2 | SUCCESS => {
          "changed": false,
          "ping": "pong"
      }
      server3 | SUCCESS => {
          "changed": false,
          "ping": "pong"
      }

      Once you get a pong reply back from a host, it means you're ready to run Ansible commands and playbooks on that server.

      Note: If you are unable to get a successful response back from your servers, check our Ansible Cheat Sheet Guide for more information on how to run Ansible commands with custom connection options.

      What Does this Playbook Do?

      This Ansible playbook provides an alternative to manually running through the procedure outlined in our guide on How To Install and Use Docker on Ubuntu 18.04.

      Running this playbook will perform the following actions on your Ansible hosts:

      1. Install aptitude, which is preferred by Ansible as an alternative to the apt package manager.
      2. Install the required system packages.
      3. Install the Docker GPG APT key.
      4. Add the official Docker repository to the apt sources.
      5. Install Docker.
      6. Install the Python Docker module via pip.
      7. Pull the default image specified by default_container_image from Docker Hub.
      8. Create the number of containers defined by the create_containers variable, each using the image defined by default_container_image, and execute the command defined by default_container_command in each new container.

      Once the playbook has finished running, you will have a number of containers created based on the options you defined within your configuration variables.

      How to Use this Playbook

      To get started, we'll download the contents of the playbook to your Ansible Control Node. For your convenience, the contents of the playbook are also included in the next section of this guide.

      Use curl to download this playbook from the command line:

      • curl -L https://raw.githubusercontent.com/do-community/ansible-playbooks/master/docker/ubuntu1804.yml -o docker_ubuntu.yml

      This will download the contents of the playbook to a file named docker_ubuntu.yml in your current working directory. You can examine the contents of the playbook by opening the file with your command-line editor of choice:
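
      • nano docker_ubuntu.yml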

      Once you've opened the playbook file, you should notice a section named vars with variables that require your attention:

      docker_ubuntu.yml

      . . .
      vars:
        create_containers: 4
        default_container_name: docker
        default_container_image: ubuntu
        default_container_command: sleep 1d
      . . .
      

      Here's what these variables mean:

      • create_containers: The number of containers to create.
      • default_container_name: Default container name.
      • default_container_image: Default Docker image to be used when creating containers.
      • default_container_command: Default command to run on new containers.

      Once you're done updating the variables inside docker_ubuntu.yml, save and close the file. If you used nano, do so by pressing CTRL + X, Y, then ENTER.

      You're now ready to run this playbook on one or more servers. By default, most playbooks are configured to execute on every server in your inventory. You can use the -l flag to make sure that only a subset of servers, or a single server, is affected by the playbook. To execute the playbook only on server1, you can use the following command:

      • ansible-playbook docker_ubuntu.yml -l server1

      You will get output similar to this:

      Output

      ...
      TASK [Add Docker GPG apt Key] *****************************************
      changed: [server1]

      TASK [Add Docker Repository] ******************************************
      changed: [server1]

      TASK [Update apt and install docker-ce] *******************************
      changed: [server1]

      TASK [Install Docker Module for Python] *******************************
      changed: [server1]

      TASK [Pull default Docker image] **************************************
      changed: [server1]

      TASK [Create default containers] **************************************
      changed: [server1] => (item=1)
      changed: [server1] => (item=2)
      changed: [server1] => (item=3)
      changed: [server1] => (item=4)

      PLAY RECAP *************************************************************
      server1 : ok=9   changed=8   unreachable=0   failed=0   skipped=0   rescued=0   ignored=0

      Note: For more information on how to run Ansible playbooks, check our Ansible Cheat Sheet Guide.

      When the playbook is finished running, log in via SSH to the server provisioned by Ansible and run docker ps -a to check if the containers were successfully created:
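
      • docker ps -a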

      You should see output similar to this:

      Output

      CONTAINER ID        IMAGE     COMMAND      CREATED         STATUS    PORTS     NAMES
      a3fe9bfb89cf        ubuntu    "sleep 1d"   5 minutes ago   Created             docker4
      8799c16cde1e        ubuntu    "sleep 1d"   5 minutes ago   Created             docker3
      ad0c2123b183        ubuntu    "sleep 1d"   5 minutes ago   Created             docker2
      b9350916ffd8        ubuntu    "sleep 1d"   5 minutes ago   Created             docker1

      This means the containers defined in the playbook were created successfully. Since this was the last task in the playbook, it also confirms that the playbook was fully executed on this server.

      The Playbook Contents

      You can find the Docker playbook featured in this tutorial in the ansible-playbooks repository within the DigitalOcean Community GitHub organization. To copy or download the script contents directly, click the Raw button towards the top of the script, or click here to view the raw contents directly.

      The full contents are also included here for your convenience:

      docker_ubuntu.yml

      
      ---
      - hosts: all
        become: true
        vars:
          create_containers: 4
          default_container_name: docker
          default_container_image: ubuntu
          default_container_command: sleep 1d
      
        tasks:
          - name: Install aptitude using apt
            apt: name=aptitude state=latest update_cache=yes force_apt_get=yes
      
          - name: Install required system packages
            apt: name={{ item }} state=latest update_cache=yes
            loop: [ 'apt-transport-https', 'ca-certificates', 'curl', 'software-properties-common', 'python3-pip', 'virtualenv', 'python3-setuptools']
      
          - name: Add Docker GPG apt Key
            apt_key:
              url: https://download.docker.com/linux/ubuntu/gpg
              state: present
      
          - name: Add Docker Repository
            apt_repository:
              repo: deb https://download.docker.com/linux/ubuntu bionic stable
              state: present
      
          - name: Update apt and install docker-ce
            apt: update_cache=yes name=docker-ce state=latest
      
          - name: Install Docker Module for Python
            pip:
              name: docker
      
          # Pull the image specified by the variable default_container_image from Docker Hub
          - name: Pull default Docker image
            docker_image:
              name: "{{ default_container_image }}"
              source: pull
      
          # Creates the number of containers defined by the variable create_containers, using default values
          - name: Create default containers
            docker_container:
              name: "{{ default_container_name }}{{ item }}"
              image: "{{ default_container_image }}"
              command: "{{ default_container_command }}"
              state: present
            with_sequence: count={{ create_containers }}
      
      

      Feel free to modify this playbook to best suit your individual needs within your own workflow. For example, you could use the docker_image module to push images to Docker Hub or the docker_container module to set up container networks.
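
      For instance, a couple of extra tasks along these lines (a minimal sketch; the app_network name and task wording are illustrative, not part of the original playbook) would create a user-defined bridge network and attach the containers to it:

          # Create a user-defined bridge network for the containers
          - name: Create a network for the default containers
            docker_network:
              name: app_network

          # Same container-creation task as before, now attached to the network
          - name: Create default containers on the network
            docker_container:
              name: "{{ default_container_name }}{{ item }}"
              image: "{{ default_container_image }}"
              command: "{{ default_container_command }}"
              state: present
              networks:
                - name: app_network
            with_sequence: count={{ create_containers }}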

      Conclusion

      Automating your infrastructure setup can not only save you time, but it also helps to ensure that your servers will follow a standard configuration that can be customized to your needs. With the distributed nature of modern applications and the need for consistency between different staging environments, automation like this has become a central component in many teams' development processes.

      In this guide, we demonstrated how to use Ansible to automate the process of installing and setting up Docker on a remote server. Because each individual typically has different needs when working with containers, we encourage you to check out the official Ansible documentation for more information and use cases of the docker_container Ansible module.

      If you'd like to include other tasks in this playbook to further customize your initial server setup, please refer to our introductory Ansible guide Configuration Management 101: Writing Ansible Playbooks.




      How To Set Up an Nginx Ingress on DigitalOcean Kubernetes Using Helm


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Kubernetes Ingresses offer you a flexible way of routing traffic from beyond your cluster to internal Kubernetes Services. Ingress Resources are objects in Kubernetes that define rules for routing HTTP and HTTPS traffic to Services. For these to work, an Ingress Controller must be present; its role is to implement the rules by accepting traffic (most likely via a Load Balancer) and routing it to the appropriate Services. Most Ingress Controllers use only one global Load Balancer for all Ingresses, which is more efficient than creating a Load Balancer for every Service you wish to expose.

      Helm is a package manager for Kubernetes. Using Helm Charts with your cluster provides configurability and lifecycle management for updating, rolling back, and deleting Kubernetes applications.

      In this guide, you’ll set up the Kubernetes-maintained Nginx Ingress Controller using Helm. You’ll then create an Ingress Resource to route traffic from your domains to example Hello World back-end services. Once you’ve set up the Ingress, you’ll install Cert-Manager to your cluster to be able to automatically provision Let’s Encrypt TLS certificates to secure your Ingresses.

      Prerequisites

      • A DigitalOcean Kubernetes cluster with your connection configuration set as the kubectl default. Instructions on how to configure kubectl are shown under the Connect to your Cluster step when you create your cluster. To learn how to create a Kubernetes cluster on DigitalOcean, see the Kubernetes Quickstart.

      • The Helm package manager installed on your local machine, and Tiller installed on your cluster. Complete steps 1 and 2 of the How To Install Software on Kubernetes Clusters with the Helm Package Manager tutorial.

      • A fully registered domain name with two available A records. This tutorial will use hw1.example.com and hw2.example.com throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.

      Step 1 — Setting Up Hello World Deployments

      In this section, before you deploy the Nginx Ingress, you will deploy a Hello World app called hello-kubernetes to have some Services to which you’ll route the traffic. To confirm that the Nginx Ingress works properly in the next steps, you’ll deploy it twice, each time with a different welcome message that will be shown when you access it from your browser.

      You’ll store the deployment configuration on your local machine. The first deployment configuration will be in a file named hello-kubernetes-first.yaml. Create it using a text editor:

      • nano hello-kubernetes-first.yaml

      Add the following lines:

      hello-kubernetes-first.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: hello-kubernetes-first
      spec:
        type: ClusterIP
        ports:
        - port: 80
          targetPort: 8080
        selector:
          app: hello-kubernetes-first
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello-kubernetes-first
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: hello-kubernetes-first
        template:
          metadata:
            labels:
              app: hello-kubernetes-first
          spec:
            containers:
            - name: hello-kubernetes
              image: paulbouwer/hello-kubernetes:1.5
              ports:
              - containerPort: 8080
              env:
              - name: MESSAGE
                value: Hello from the first deployment!
      

      This configuration defines a Deployment and a Service. The Deployment consists of three replicas of the paulbouwer/hello-kubernetes:1.5 image, and an environment variable named MESSAGE—you will see its value when you access the app. The Service here is defined to expose the Deployment in-cluster at port 80.

      Save and close the file.

      Then, create this first variant of the hello-kubernetes app in Kubernetes by running the following command:

      • kubectl create -f hello-kubernetes-first.yaml

      You’ll see the following output:

      Output

      service/hello-kubernetes-first created
      deployment.apps/hello-kubernetes-first created

      To verify the Service’s creation, run the following command:

      • kubectl get service hello-kubernetes-first

      The output will look like this:

      Output

      NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
      hello-kubernetes-first   ClusterIP   10.245.85.236   <none>        80:31623/TCP   35s

      You’ll see that the newly created Service has a ClusterIP assigned, which means that it is working properly. All traffic sent to it will be forwarded to the selected Deployment on port 8080. Now that you have deployed the first variant of the hello-kubernetes app, you’ll work on the second one.

      Open a file called hello-kubernetes-second.yaml for editing:

      • nano hello-kubernetes-second.yaml

      Add the following lines:

      hello-kubernetes-second.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: hello-kubernetes-second
      spec:
        type: ClusterIP
        ports:
        - port: 80
          targetPort: 8080
        selector:
          app: hello-kubernetes-second
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello-kubernetes-second
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: hello-kubernetes-second
        template:
          metadata:
            labels:
              app: hello-kubernetes-second
          spec:
            containers:
            - name: hello-kubernetes
              image: paulbouwer/hello-kubernetes:1.5
              ports:
              - containerPort: 8080
              env:
              - name: MESSAGE
                value: Hello from the second deployment!
      

      Save and close the file.

      This variant has the same structure as the previous configuration; the only differences are in the Deployment and Service names, to avoid collisions, and the message.

      Now create it in Kubernetes with the following command:

      • kubectl create -f hello-kubernetes-second.yaml

      The output will be:

      Output

      service/hello-kubernetes-second created
      deployment.apps/hello-kubernetes-second created

      Verify that the second Service is up and running by listing all of your services:
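
      • kubectl get service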

      The output will be similar to this:

      Output

      NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
      hello-kubernetes-first    ClusterIP   10.245.85.236   <none>        80:31623/TCP   54s
      hello-kubernetes-second   ClusterIP   10.245.99.130   <none>        80:30303/TCP   12s
      kubernetes                ClusterIP   10.245.0.1      <none>        443/TCP        5m

      Both hello-kubernetes-first and hello-kubernetes-second are listed, which means that Kubernetes has created them successfully.

      You've created two deployments of the hello-kubernetes app with accompanying Services. Each one has a different message set in its deployment specification, which allows you to differentiate them during testing. In the next step, you'll install the Nginx Ingress Controller itself.

      Step 2 — Installing the Kubernetes Nginx Ingress Controller

      Now you'll install the Kubernetes-maintained Nginx Ingress Controller using Helm. Note that there are several Nginx Ingress Controllers available; this guide uses the one maintained by the Kubernetes community.

      The Nginx Ingress Controller consists of a Pod and a Service. The Pod runs the Controller, which constantly polls the /ingresses endpoint on the API server of your cluster for updates to available Ingress Resources. The Service is of type LoadBalancer, and because you are deploying it to a DigitalOcean Kubernetes cluster, the cluster will automatically create a DigitalOcean Load Balancer, through which all external traffic will flow to the Controller. The Controller will then route the traffic to appropriate Services, as defined in Ingress Resources.

      Only the LoadBalancer Service knows the IP address of the automatically created Load Balancer. Some apps (such as ExternalDNS) need to know its IP address, but can only read the configuration of an Ingress. The Controller can be configured to publish the IP address on each Ingress by setting the controller.publishService.enabled parameter to true during helm install. It is recommended to enable this setting to support applications that may depend on the IP address of the Load Balancer.

      To install the Nginx Ingress Controller to your cluster, run the following command:

      • helm install stable/nginx-ingress --name nginx-ingress --set controller.publishService.enabled=true

      This command installs the Nginx Ingress Controller from the stable charts repository, names the Helm release nginx-ingress, and sets the publishService parameter to true.

      The output will look like:

      Output

      NAME:   nginx-ingress
      LAST DEPLOYED: ...
      NAMESPACE: default
      STATUS: DEPLOYED

      RESOURCES:
      ==> v1/ConfigMap
      NAME                      DATA  AGE
      nginx-ingress-controller  1     0s

      ==> v1/Pod(related)
      NAME                                            READY  STATUS             RESTARTS  AGE
      nginx-ingress-controller-7658988787-npv28       0/1    ContainerCreating  0         0s
      nginx-ingress-default-backend-7f5d59d759-26xq2  0/1    ContainerCreating  0         0s

      ==> v1/Service
      NAME                           TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)                     AGE
      nginx-ingress-controller       LoadBalancer  10.245.9.107   <pending>    80:31305/TCP,443:30519/TCP  0s
      nginx-ingress-default-backend  ClusterIP     10.245.221.49  <none>       80/TCP                      0s

      ==> v1/ServiceAccount
      NAME           SECRETS  AGE
      nginx-ingress  1        0s

      ==> v1beta1/ClusterRole
      NAME           AGE
      nginx-ingress  0s

      ==> v1beta1/ClusterRoleBinding
      NAME           AGE
      nginx-ingress  0s

      ==> v1beta1/Deployment
      NAME                           READY  UP-TO-DATE  AVAILABLE  AGE
      nginx-ingress-controller       0/1    1           0          0s
      nginx-ingress-default-backend  0/1    1           0          0s

      ==> v1beta1/Role
      NAME           AGE
      nginx-ingress  0s

      ==> v1beta1/RoleBinding
      NAME           AGE
      nginx-ingress  0s

      NOTES:
      ...

      Helm has logged what resources in Kubernetes it created as a part of the chart installation.

      You can watch the Load Balancer become available by running:

      • kubectl get services -o wide -w nginx-ingress-controller

      You've installed the Nginx Ingress maintained by the Kubernetes community. It will route HTTP and HTTPS traffic from the Load Balancer to appropriate back-end Services, configured in Ingress Resources. In the next step, you'll expose the hello-kubernetes app deployments using an Ingress Resource.

      Step 3 — Exposing the App Using an Ingress

      Now you're going to create an Ingress Resource and use it to expose the hello-kubernetes app deployments at your desired domains. You'll then test it by accessing it from your browser.

      You'll store the Ingress in a file named hello-kubernetes-ingress.yaml. Create it using your editor:

      • nano hello-kubernetes-ingress.yaml

      Add the following lines to your file:

      hello-kubernetes-ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: hello-kubernetes-ingress
        annotations:
          kubernetes.io/ingress.class: nginx
      spec:
        rules:
        - host: hw1.example.com
          http:
            paths:
            - backend:
                serviceName: hello-kubernetes-first
                servicePort: 80
        - host: hw2.example.com
          http:
            paths:
            - backend:
                serviceName: hello-kubernetes-second
                servicePort: 80
      

      In the code above, you define an Ingress Resource with the name hello-kubernetes-ingress. Then, you specify two host rules, so that hw1.example.com is routed to the hello-kubernetes-first Service, and hw2.example.com is routed to the Service from the second deployment (hello-kubernetes-second).

      Remember to replace the highlighted domains with your own, then save and close the file.

      Create it in Kubernetes by running the following command:

      • kubectl create -f hello-kubernetes-ingress.yaml

      Next, you'll need to ensure that your two domains are pointed to the Load Balancer via A records. This is done through your DNS provider. To configure your DNS records on DigitalOcean, see How to Manage DNS Records.

      You can now navigate to hw1.example.com in your browser. You will see the following:

      Hello Kubernetes - First Deployment

      The second variant (hw2.example.com) will show a different message:

      Hello Kubernetes - Second Deployment

      With this, you have verified that the Ingress Controller correctly routes requests; in this case, from your two domains to two different Services.

      You've created and configured an Ingress Resource to serve the hello-kubernetes app deployments at your domains. In the next step, you'll set up Cert-Manager, so you'll be able to secure your Ingress Resources with free TLS certificates from Let's Encrypt.

      Step 4 — Securing the Ingress Using Cert-Manager

      To secure your Ingress Resources, you'll install Cert-Manager, create a ClusterIssuer for production, and modify the configuration of your Ingress to take advantage of the TLS certificates. ClusterIssuers are Cert-Manager Resources in Kubernetes that provision TLS certificates. Once installed and configured, your app will be running behind HTTPS.

      Before installing Cert-Manager to your cluster via Helm, you'll manually apply the required CRDs (Custom Resource Definitions) from the jetstack/cert-manager repository by running the following command:

      • kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.8/deploy/manifests/00-crds.yaml

      You will see the following output:

      Output

      customresourcedefinition.apiextensions.k8s.io/certificates.certmanager.k8s.io created
      customresourcedefinition.apiextensions.k8s.io/challenges.certmanager.k8s.io created
      customresourcedefinition.apiextensions.k8s.io/clusterissuers.certmanager.k8s.io created
      customresourcedefinition.apiextensions.k8s.io/issuers.certmanager.k8s.io created
      customresourcedefinition.apiextensions.k8s.io/orders.certmanager.k8s.io created

      This shows that Kubernetes has applied the custom resources you require for cert-manager.

      Note: If you've followed this tutorial and the prerequisites, you haven't created a Kubernetes namespace called cert-manager, so you won't have to run the command in this note block. However, if this namespace does exist on your cluster, you'll need to inform Cert-Manager not to validate it with the following command:

      • kubectl label namespace cert-manager certmanager.k8s.io/disable-validation="true"

      The Webhook component of Cert-Manager requires TLS certificates to securely communicate with the Kubernetes API server. In order for Cert-Manager to generate certificates for it for the first time, resource validation must be disabled on the namespace it is deployed in. Otherwise, it would be stuck in an infinite loop; unable to contact the API and unable to generate the TLS certificates.

      The output will be:

      Output

      namespace/cert-manager labeled

      Next, you'll need to add the Jetstack Helm repository to Helm, which hosts the Cert-Manager chart. To do this, run the following command:

      • helm repo add jetstack https://charts.jetstack.io

      Helm will display the following output:

      Output

      "jetstack" has been added to your repositories

      Finally, install Cert-Manager into the cert-manager namespace:

      • helm install --name cert-manager --namespace cert-manager jetstack/cert-manager

      You will see the following output:

      Output

      NAME:   cert-manager
      LAST DEPLOYED: ...
      NAMESPACE: cert-manager
      STATUS: DEPLOYED

      RESOURCES:
      ==> v1/ClusterRole
      NAME                                    AGE
      cert-manager-edit                       3s
      cert-manager-view                       3s
      cert-manager-webhook:webhook-requester  3s

      ==> v1/Pod(related)
      NAME                                     READY  STATUS             RESTARTS  AGE
      cert-manager-5d669ffbd8-rb6tr            0/1    ContainerCreating  0         2s
      cert-manager-cainjector-79b7fc64f-gqbtz  0/1    ContainerCreating  0         2s
      cert-manager-webhook-6484955794-v56lx    0/1    ContainerCreating  0         2s

      ...

      NOTES:
      cert-manager has been deployed successfully!

      In order to begin issuing certificates, you will need to set up a ClusterIssuer
      or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).

      More information on the different types of issuers and how to configure them
      can be found in our documentation:

      https://docs.cert-manager.io/en/latest/reference/issuers.html

      For information on how to configure cert-manager to automatically provision
      Certificates for Ingress resources, take a look at the `ingress-shim`
      documentation:

      https://docs.cert-manager.io/en/latest/reference/ingress-shim.html

      The output shows that the installation was successful. As listed in the NOTES in the output, you'll need to set up an Issuer to issue TLS certificates.

      You'll now create one that issues Let's Encrypt certificates, and you'll store its configuration in a file named production_issuer.yaml. Create it and open it for editing:

      • nano production_issuer.yaml

      Add the following lines:

      production_issuer.yaml

      apiVersion: certmanager.k8s.io/v1alpha1
      kind: ClusterIssuer
      metadata:
        name: letsencrypt-prod
      spec:
        acme:
          # The ACME server URL
          server: https://acme-v02.api.letsencrypt.org/directory
          # Email address used for ACME registration
          email: your_email_address
          # Name of a secret used to store the ACME account private key
          privateKeySecretRef:
            name: letsencrypt-prod
          # Enable the HTTP-01 challenge provider
          http01: {}
      

      This configuration defines a ClusterIssuer that contacts Let's Encrypt in order to issue certificates. You'll need to replace your_email_address with your email address in order to receive possible urgent notices regarding the security and expiration of your certificates.

      Save and close the file.

      Roll it out with kubectl:

      • kubectl create -f production_issuer.yaml

      You will see the following output:

      Output

      clusterissuer.certmanager.k8s.io/letsencrypt-prod created

      With Cert-Manager installed, you're ready to introduce the certificates to the Ingress Resource defined in the previous step. Open hello-kubernetes-ingress.yaml for editing:

      • nano hello-kubernetes-ingress.yaml

      Add the highlighted lines:

      hello-kubernetes-ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: hello-kubernetes-ingress
        annotations:
          kubernetes.io/ingress.class: nginx
          certmanager.k8s.io/cluster-issuer: letsencrypt-prod
      spec:
        tls:
        - hosts:
          - hw1.example.com
          - hw2.example.com
          secretName: letsencrypt-prod
        rules:
        - host: hw1.example.com
          http:
            paths:
            - backend:
                serviceName: hello-kubernetes-first
                servicePort: 80
        - host: hw2.example.com
          http:
            paths:
            - backend:
                serviceName: hello-kubernetes-second
                servicePort: 80
      

      The tls block under spec defines the Secret in which the certificates for your sites (listed under hosts) will be stored. The letsencrypt-prod ClusterIssuer issues these certificates. The Secret name must be different for every Ingress you create.

      Remember to replace hw1.example.com and hw2.example.com with your own domains. When you've finished editing, save and close the file.

      Re-apply this configuration to your cluster by running the following command:

      • kubectl apply -f hello-kubernetes-ingress.yaml

      You will see the following output:

      Output

      ingress.extensions/hello-kubernetes-ingress configured

      You'll need to wait a few minutes for the Let's Encrypt servers to issue a certificate for your domains. In the meantime, you can track its progress by inspecting the output of the following command:

      • kubectl describe certificate hello-kubernetes

      The end of the output will look similar to this:

      Output

      Events:
        Type    Reason              Age   From          Message
        ----    ------              ----  ----          -------
        Normal  Generated           56s   cert-manager  Generated new private key
        Normal  GenerateSelfSigned  56s   cert-manager  Generated temporary self signed certificate
        Normal  OrderCreated        56s   cert-manager  Created Order resource "hello-kubernetes-1197334873"
        Normal  OrderComplete       31s   cert-manager  Order "hello-kubernetes-1197334873" completed successfully
        Normal  CertIssued          31s   cert-manager  Certificate issued successfully

      When your last line of output reads Certificate issued successfully, you can exit by pressing CTRL + C. Navigate to one of your domains in your browser to test. You'll see the padlock to the left of the address bar in your browser, signifying that your connection is secure.
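
      If you prefer to check from the command line, you can also make a request against one of your domains with curl (substituting your own domain); a clean HTTP response with no certificate error indicates the certificate is in place:

      • curl -I https://hw1.example.com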

      In this step, you installed Cert-Manager using Helm and created a Let's Encrypt ClusterIssuer. Afterward, you updated your Ingress Resource to take advantage of the Issuer for generating TLS certificates. Finally, you confirmed that HTTPS works correctly by navigating to one of your domains in your browser.

      Conclusion

      You have now successfully set up the Nginx Ingress Controller and Cert-Manager on your DigitalOcean Kubernetes cluster using Helm. You are now able to expose your apps to the Internet, at your domains, secured using Let's Encrypt TLS certificates.

      For further information about the Helm package manager, read this introduction article.




      How To Set Up a CD Pipeline with Spinnaker on DigitalOcean Kubernetes


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Spinnaker is an open-source resource management and continuous delivery application for fast, safe, and repeatable deployments, using a powerful and customizable pipeline system. Spinnaker allows for automated application deployments to many platforms, including DigitalOcean Kubernetes. When deploying, you can configure Spinnaker to use built-in deployment strategies, such as Highlander and Red/black, with the option of creating your own deployment strategy. It can integrate with other DevOps tools, like Jenkins and TravisCI, and can be configured to monitor GitHub repositories and Docker registries.

      Spinnaker is managed by Halyard, a tool specifically built for configuring and deploying Spinnaker to various platforms. Spinnaker requires external storage for persisting your application’s settings and pipelines. It supports different platforms for this task, like DigitalOcean Spaces.

      In this tutorial, you’ll deploy Spinnaker to DigitalOcean Kubernetes using Halyard, with DigitalOcean Spaces as the underlying back-end storage. You’ll also configure Spinnaker to be available at your desired domain, secured using Let’s Encrypt TLS certificates. Then, you will create a sample application in Spinnaker, create a pipeline, and deploy a Hello World app to your Kubernetes cluster. After testing it, you’ll introduce authentication and authorization via GitHub Organizations. By the end, you will have a secured and working Spinnaker deployment in your Kubernetes cluster.

      Note: This tutorial has been specifically tested with Spinnaker 1.13.5.

      Prerequisites

      • Halyard installed on your local machine, according to the official instructions. Please note that using Halyard on Ubuntu versions higher than 16.04 is not supported. In such cases, you can use it via Docker.

      • A DigitalOcean Kubernetes cluster with your connection configured as the kubectl default. The cluster must have at least 8GB RAM and 4 CPU cores available for Spinnaker (more will be required in the case of heavier use). Instructions on how to configure kubectl are shown under the Connect to your Cluster step shown when you create your cluster. To create a Kubernetes cluster on DigitalOcean, see the Kubernetes Quickstart.

      • An Nginx Ingress Controller and cert-manager installed on the cluster. For a guide on how to do this, see How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes.

      • A DigitalOcean Space with API keys (access and secret). To create a DigitalOcean Space and API keys, see How To Create a DigitalOcean Space and API Key.

      • A domain name with three DNS A records pointed to the DigitalOcean Load Balancer used by the Ingress. If you’re using DigitalOcean to manage your domain’s DNS records, consult How to Create DNS Records to create A records. In this tutorial, we’ll refer to the A records as spinnaker.example.com, spinnaker-api.example.com, and hello-world.example.com.

      • A GitHub account, added to a GitHub Organization with admin permissions and public visibility. The account must also be a member of a Team in the Organization. This is required to complete Step 5.

      Step 1 — Adding a Kubernetes Account with Halyard

      In this section, you will add a Kubernetes account to Spinnaker via Halyard. An account, in Spinnaker’s terms, is a named credential it uses to access a cloud provider.

      As part of the prerequisite, you created the echo1 and echo2 services and an echo_ingress ingress for testing purposes; you will not need these in this tutorial, so you can now delete them.

      Start off by deleting the ingress by running the following command:

      • kubectl delete -f echo_ingress.yaml

      Then, delete the two test services:

      • kubectl delete -f echo1.yaml && kubectl delete -f echo2.yaml

      The kubectl delete command accepts the file to delete when passed the -f parameter.

      Next, from your local machine, create a folder that will serve as your workspace:
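
      • mkdir ~/spinnaker-k8s

      (The directory name ~/spinnaker-k8s used here is only an example; any empty directory will do.)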

      Navigate to your workspace by running the following command:
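
      • cd ~/spinnaker-k8s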

      Halyard does not yet know where it should deploy Spinnaker. Enable the Kubernetes provider with this command:

      • hal config provider kubernetes enable

      You'll receive the following output:

      Output

      + Get current deployment
        Success
      + Edit the kubernetes provider
        Success
      Problems in default.provider.kubernetes:
      - WARNING Provider kubernetes is enabled, but no accounts have been
        configured.

      + Successfully enabled kubernetes

      Halyard logged all the steps it took to enable the Kubernetes provider, and warned that no accounts are defined yet.

      Next, you'll create a Kubernetes service account for Spinnaker, along with RBAC. A service account is a type of account that is scoped to a single namespace. It is used by software, which may perform various tasks in the cluster. RBAC (Role Based Access Control) is a method of regulating access to resources in a Kubernetes cluster. It limits the scope of action of the account to ensure that no important configurations are inadvertently changed on your cluster.

      Here, you will grant Spinnaker cluster-admin permissions to allow it to control the whole cluster. If you wish to create a more restrictive environment, consult the official Kubernetes documentation on RBAC.

      First, create the spinnaker namespace by running the following command:

      • kubectl create ns spinnaker

      The output will look like:

      Output

      namespace/spinnaker created

      Run the following command to create a service account named spinnaker-service-account:

      • kubectl create serviceaccount spinnaker-service-account -n spinnaker

      You've used the -n flag to specify that kubectl create the service account in the spinnaker namespace. The output will be:

      Output

      serviceaccount/spinnaker-service-account created

      Then, bind it to the cluster-admin role:

      • kubectl create clusterrolebinding spinnaker-service-account --clusterrole cluster-admin --serviceaccount=spinnaker:spinnaker-service-account

      You will see the following output:

      Output

      clusterrolebinding.rbac.authorization.k8s.io/spinnaker-service-account created

      Halyard uses the local kubectl to access the cluster. You'll need to configure it to use the newly created service account before deploying Spinnaker. Kubernetes accounts authenticate using usernames and tokens. When a service account is created, Kubernetes makes a new secret and populates it with the account token. To retrieve the token for the spinnaker-service-account, you'll first need to get the name of the secret. You can fetch it into a console variable, named TOKEN_SECRET, by running:

      • TOKEN_SECRET=$(kubectl get serviceaccount -n spinnaker spinnaker-service-account -o jsonpath='{.secrets[0].name}')

      This gets information about the spinnaker-service-account from the namespace spinnaker, and fetches the name of the first secret it contains by passing in a JSON path.

      Fetch the contents of the secret into a variable named TOKEN by running:

      • TOKEN=$(kubectl get secret -n spinnaker $TOKEN_SECRET -o jsonpath='{.data.token}' | base64 --decode)

      You now have the token available in the environment variable TOKEN. Next, you'll need to set credentials for the service account in kubectl:

      • kubectl config set-credentials spinnaker-token-user --token $TOKEN

      You will see the following output:

      Output

      User "spinnaker-token-user" set.

      Then, you'll need to set the user of the current context to the newly created spinnaker-token-user by running the following command:

      • kubectl config set-context --current --user spinnaker-token-user

      By setting the current user to spinnaker-token-user, kubectl is now configured to use the spinnaker-service-account, but Halyard does not know anything about that. Add an account to its Kubernetes provider by executing:

      • hal config provider kubernetes account add spinnaker-account --provider-version v2

      The output will look like this:

      Output

      + Get current deployment
        Success
      + Add the spinnaker-account account
        Success
      + Successfully added account spinnaker-account for provider
        kubernetes.

      This command adds a Kubernetes account to Halyard, named spinnaker-account, and marks it as a service account.

      Generally, Spinnaker can be deployed in two ways: distributed installation or local installation. Distributed installation is what you're completing in this tutorial—you're deploying it to the cloud. Local installation, on the other hand, means that Spinnaker will be downloaded and installed on the machine Halyard runs on. Because you're deploying Spinnaker to Kubernetes, you'll need to mark the deployment as distributed, like so:

      • hal config deploy edit --type distributed --account-name spinnaker-account

      Since your Spinnaker deployment will be building images, it is necessary to enable artifacts in Spinnaker. You can enable them by running the following command:

      • hal config features edit --artifacts true

      Here you've enabled artifacts to allow Spinnaker to store more metadata about the objects it creates.

      You've added a Kubernetes account to Spinnaker, via Halyard. You enabled the Kubernetes provider, configured RBAC roles, and added the current kubectl config to Spinnaker, thus adding an account to the provider. Now you'll set up your back-end storage.

      Step 2 — Configuring the Space as the Underlying Storage

      In this section, you will configure the Space as the underlying storage for the Spinnaker deployment. Spinnaker will use the Space to store its configuration and pipeline-related data.

      To configure S3 storage in Halyard, run the following command:

      • hal config storage s3 edit --access-key-id your_space_access_key --secret-access-key --endpoint spaces_endpoint_with_region_prefix --bucket space_name --no-validate

      Remember to replace your_space_access_key with your Space access key and spaces_endpoint_with_region_prefix with the endpoint of your Space. This is usually region-id.digitaloceanspaces.com, where region-id is the region of your Space. You can replace space_name with the name of your Space. The --no-validate flag tells Halyard not to validate the settings given right away, because DigitalOcean Spaces validation is not supported.
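
      As an illustration only (the access key, region, and Space name below are hypothetical), a filled-in version of the command could look like this:

      • hal config storage s3 edit --access-key-id EXAMPLEACCESSKEY --secret-access-key --endpoint nyc3.digitaloceanspaces.com --bucket my-spinnaker-space --no-validate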

      Once you've run this command, Halyard will ask you for your secret access key. Enter it to continue and you'll then see the following output:

      Output

      + Get current deployment
        Success
      + Get persistent store
        Success
      + Edit persistent store
        Success
      + Successfully edited persistent store "s3".

      Now that you've configured s3 storage, you'll ensure that your deployment will use this as its storage by running the following command:

      • hal config storage edit --type s3

      The output will look like this:

      Output

      + Get current deployment
        Success
      + Get persistent storage settings
        Success
      + Edit persistent storage settings
        Success
      + Successfully edited persistent storage.

      You've set up your Space as the underlying storage that your instance of Spinnaker will use. Now you'll deploy Spinnaker to your Kubernetes cluster and expose it at your domains using the Nginx Ingress Controller.

      Step 3 — Deploying Spinnaker to Your Cluster

      In this section, you will deploy Spinnaker to your cluster using Halyard, and then expose its UI and API components at your domains using an Nginx Ingress. First, you'll configure your domain URLs: one for Spinnaker's user interface and one for the API component. Then you'll pick your desired version of Spinnaker and deploy it using Halyard. Finally, you'll create an Ingress and configure it to use the Nginx Ingress Controller.

      First, you'll need to edit Spinnaker's UI and API URL config values in Halyard and set them to your desired domains. To set the API endpoint to your desired domain, run the following command:

      • hal config security api edit --override-base-url https://spinnaker-api.example.com

      The output will look like:

      Output

      + Get current deployment
        Success
      + Get API security settings
        Success
      + Edit API security settings
        Success
      ...

      To set the UI endpoint to your domain, which is where you will access Spinnaker, run:

      • hal config security ui edit --override-base-url https://spinnaker.example.com

      The output will look like:

      Output

      + Get current deployment
        Success
      + Get UI security settings
        Success
      + Edit UI security settings
        Success
      + Successfully updated UI security settings.

      Remember to replace spinnaker-api.example.com and spinnaker.example.com with your domains. These are the domains you have pointed to the Load Balancer that you created during the Nginx Ingress Controller prerequisite.

      You've created and secured Spinnaker's Kubernetes account, configured your Space as its underlying storage, and set its UI and API endpoints to your domains. Now you can list the available Spinnaker versions:
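
      • hal version list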

      Your output will show a list of available versions. At the time of writing this article, 1.13.5 was the latest version:

      Output

      + Get current deployment
        Success
      + Get Spinnaker version
        Success
      + Get released versions
        Success
      + You are on version "", and the following are available:
       - 1.11.12 (Cobra Kai):
         Changelog: https://gist.github.com/spinnaker-release/29a01fa17afe7c603e510e202a914161
         Published: Fri Apr 05 14:55:40 UTC 2019
         (Requires Halyard >= 1.11)
       - 1.12.9 (Unbreakable):
         Changelog: https://gist.github.com/spinnaker-release/7fa9145349d6beb2f22163977a94629e
         Published: Fri Apr 05 14:11:44 UTC 2019
         (Requires Halyard >= 1.11)
       - 1.13.5 (BirdBox):
         Changelog: https://gist.github.com/spinnaker-release/23af06bc73aa942c90f89b8e8c8bed3e
         Published: Mon Apr 22 14:32:29 UTC 2019
         (Requires Halyard >= 1.17)

      To select a version to install, run the following command:

      • hal config version edit --version 1.13.5

      It is recommended to always select the latest version, unless you encounter some kind of regression.

      You will see the following output:

      Output

      + Get current deployment
        Success
      + Edit Spinnaker version
        Success
      + Spinnaker has been configured to update/install version "version".
        Deploy this version of Spinnaker with `hal deploy apply`.

      You have now fully configured Spinnaker's deployment. You'll deploy it with the following command:
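
      • hal deploy apply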

      This command could take a few minutes to finish.

      The final output will look like this:

      Output

      + Get current deployment
        Success
      + Prep deployment
        Success
      + Preparation complete... deploying Spinnaker
      + Get current deployment
        Success
      + Apply deployment
        Success
      + Deploy spin-redis
        Success
      + Deploy spin-clouddriver
        Success
      + Deploy spin-front50
        Success
      + Deploy spin-orca
        Success
      + Deploy spin-deck
        Success
      + Deploy spin-echo
        Success
      + Deploy spin-gate
        Success
      + Deploy spin-rosco
        Success
      ...

      Halyard is showing you the deployment status of each of Spinnaker's microservices. Behind the scenes, it calls kubectl to install them.

      Kubernetes will take some time—ten minutes on average—to bring all of the containers up, especially for the first time. You can watch the progress by running the following command:

      • kubectl get pods -n spinnaker -w

      You've deployed Spinnaker to your Kubernetes cluster, but it can't yet be accessed beyond your cluster. Next, you'll expose it at your domains with an Ingress.

      You'll be storing the ingress configuration in a file named spinnaker-ingress.yaml. Create it using your text editor:

      • nano spinnaker-ingress.yaml

      Add the following lines:

      spinnaker-ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: spinnaker-ingress
        namespace: spinnaker
        annotations:
          kubernetes.io/ingress.class: nginx
          certmanager.k8s.io/cluster-issuer: letsencrypt-prod
      spec:
        tls:
        - hosts:
          - spinnaker-api.example.com
          - spinnaker.example.com
          secretName: spinnaker
        rules:
        - host: spinnaker-api.example.com
          http:
            paths:
            - backend:
                serviceName: spin-gate
                servicePort: 8084
        - host: spinnaker.example.com
          http:
            paths:
            - backend:
                serviceName: spin-deck
                servicePort: 9000
      

      Remember to replace spinnaker-api.example.com with your API domain, and spinnaker.example.com with your UI domain.

      The configuration file defines an ingress called spinnaker-ingress. The annotations specify that the controller for this ingress will be the Nginx controller, and that the letsencrypt-prod cluster issuer will generate the TLS certificates, defined in the prerequisite tutorial.

      Then, it specifies that TLS will secure the UI and API domains. It sets up routing by directing the API domain to the spin-gate service (Spinnaker's API containers), and the UI domain to the spin-deck service (Spinnaker's UI containers) at the appropriate ports 8084 and 9000.

      Save and close the file.

      Create the Ingress in Kubernetes by running:

      • kubectl create -f spinnaker-ingress.yaml

      You'll see the following output:

      Output

      ingress.extensions/spinnaker-ingress created

      Wait a few minutes for Let's Encrypt to provision the TLS certificates, and then navigate to your UI domain, spinnaker.example.com, in a browser. You will see Spinnaker's user interface.

      Spinnaker's home page

      You've deployed Spinnaker to your cluster, exposed the UI and API components at your domains, and tested if it works. Now you'll create an application in Spinnaker and run a pipeline to deploy the Hello World app.

      Step 4 — Creating an Application and Running a Pipeline

      In this section, you will use your access to Spinnaker at your domain to create an application with it. You'll then create and run a pipeline to deploy a Hello World app, which can be found at paulbouwer/hello-kubernetes. You'll access the app afterward.

      Navigate to your domain where you have exposed Spinnaker's UI. In the upper right corner, press on Actions, then select Create Application. You will see the New Application form.

      Creating a new Application in Spinnaker

      Type in hello-world as the name, input your email address, and press Create.

      When the page loads, navigate to Pipelines by clicking the first tab in the top menu. You will see that there are no pipelines defined yet.

      No pipelines defined in Spinnaker

      Press on Configure a new pipeline and a new form will open.

      Creating a new Pipeline in Spinnaker

      Fill in Deploy Hello World Application as your pipeline's name, and press Create.

      On the next page, click the Add Stage button. As the Type, select Deploy (Manifest), which is used for deploying Kubernetes manifests you specify. For the Stage Name, type in Deploy Hello World. Scroll down, and in the textbox under Manifest Configuration, enter the following lines:

      Manifest Configuration

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: hello-world-ingress
        namespace: spinnaker
        annotations:
          kubernetes.io/ingress.class: nginx
          certmanager.k8s.io/cluster-issuer: letsencrypt-prod
      spec:
        tls:
        - hosts:
          - hello-world.example.com
          secretName: hello-world
        rules:
        - host: hello-world.example.com
          http:
            paths:
            - backend:
                serviceName: hello-kubernetes
                servicePort: 80
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: hello-kubernetes
        namespace: spinnaker
      spec:
        type: ClusterIP
        ports:
        - port: 80
          targetPort: 8080
        selector:
          app: hello-kubernetes
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello-kubernetes
        namespace: spinnaker
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: hello-kubernetes
        template:
          metadata:
            labels:
              app: hello-kubernetes
          spec:
            containers:
            - name: hello-kubernetes
              image: paulbouwer/hello-kubernetes:1.5
              ports:
              - containerPort: 8080
      

      Remember to replace hello-world.example.com with your domain, which is also pointed at your Load Balancer.

      In this configuration, you define a Deployment, consisting of three replicas of the paulbouwer/hello-kubernetes:1.5 image. You also define a Service to be able to access it and an Ingress to expose the Service at your domain.

      Press Save Changes in the bottom right corner of the screen. When it finishes, navigate back to Pipelines. On the right side, select the pipeline you just created and press the Start Manual Execution link. When asked to confirm, press Run.

      This pipeline will take a short time to complete. You will see the progress bar complete when it has successfully finished.

      Successfully ran a Pipeline

      You can now navigate to the domain you defined in the configuration. You will see the Hello World app, which Spinnaker just deployed.

      Hello World App

      You've created an application in Spinnaker, run a pipeline to deploy a Hello World app, and accessed it. In the next step, you will secure Spinnaker by enabling GitHub Organizations authorization.

      Step 5 — Enabling Role-Based Access with GitHub Organizations

      In this section, you will enable GitHub OAuth authentication and GitHub Organizations authorization. Enabling GitHub OAuth authentication forces Spinnaker users to log in via GitHub, therefore preventing anonymous access. Authorization via GitHub Organizations restricts access only to those in an Organization. A GitHub Organization can contain Teams (named groups of members), which you will be able to use to restrict access to resources in Spinnaker even further.

      For OAuth authentication to work, you'll first need to set up the authorization callback URL, which is where the user will be redirected after authorization. This is your API domain ending with /login. You need to specify this manually to prevent Spinnaker and other services from guessing. To configure this, run the following command:

      • hal config security authn oauth2 edit --pre-established-redirect-uri https://spinnaker-api.example.com/login

      You will see this output:

      Output

      + Get current deployment
        Success
      + Get authentication settings
        Success
      + Edit oauth2 authentication settings
        Success
      + Successfully edited oauth2 method.

      To set up OAuth authentication with GitHub, you'll need to create an OAuth application for your Organization. To do so, navigate to your Organization on GitHub, go to Settings, click on Developer Settings, and then select OAuth Apps from the left-hand menu. Afterward, click the New OAuth App button on the right. You will see the Register a new OAuth application form.

      Creating a new OAuth App on GitHub

      Enter spinnaker-auth as the name. For the Homepage URL, enter https://spinnaker.example.com, and for the Authorization callback URL, enter https://spinnaker-api.example.com/login. Then, press Register Application.

      You'll be redirected to the settings page for your new OAuth app. Note the Client ID and Client Secret values—you'll need them for the next command.

      With the OAuth app created, you can configure Spinnaker to use the OAuth app by running the following command:

      • hal config security authn oauth2 edit --client-id client_id --client-secret client_secret --provider GitHub

      Remember to replace client_id and client_secret with the values shown on the GitHub settings page.

      Your output will be similar to the following:

      Output

      + Get current deployment
        Success
      + Get authentication settings
        Success
      + Edit oauth2 authentication settings
        Success
      Problems in default.security.authn:
      - WARNING An authentication method is fully or partially configured, but not enabled. It must be enabled to take effect.
      + Successfully edited oauth2 method.

      You've configured Spinnaker to use the OAuth app. Now, to enable it, execute:

      • hal config security authn oauth2 enable

      The output will look like:

      Output

      + Get current deployment
        Success
      + Edit oauth2 authentication settings
        Success
      + Successfully enabled oauth2

      You've configured and enabled GitHub OAuth authentication. Now users will be forced to log in via GitHub in order to access Spinnaker. However, right now, everyone who has a GitHub account can log in, which is not what you want. To overcome this, you'll configure Spinnaker to restrict access to members of your desired Organization.

      You'll need to set this up semi-manually via local config files, because Halyard does not yet have a command for setting this. During deployment, Halyard will use the local config files to override the generated configuration.

      Halyard looks for custom configuration under ~/.hal/default/profiles/. Files named service-name-*.yml are picked up by Halyard and used to override the settings of a particular service. The service that you'll override is called gate, and serves as the API gateway for the whole of Spinnaker.

      Create a file under ~/.hal/default/profiles/ named gate-local.yml:

      • nano ~/.hal/default/profiles/gate-local.yml

      Add the following lines:

      gate-local.yml

      security:
        oauth2:
          providerRequirements:
            type: GitHub
            organization: your_organization_name
      

      Replace your_organization_name with the name of your GitHub Organization. Save and close the file.

      With this bit of configuration, only members of your GitHub Organization will be able to access Spinnaker.

      Note: Only those members of your GitHub Organization whose membership is set to Public will be able to log in to Spinnaker. This setting can be changed on the member list page of your Organization.

      Now, you'll integrate Spinnaker with a more granular access-control mechanism: GitHub Teams. This will enable you to specify which Team(s) have access to resources created in Spinnaker, such as applications.

      To achieve this, you'll need to have a GitHub Personal Access Token for an admin account in your Organization. To create one, visit Personal Access Tokens and press the Generate New Token button. On the next page, give it a description of your choice and be sure to check the read:org scope, located under admin:org. When you are done, press Generate token and note it down when it appears—you won't be able to see it again.

      To configure GitHub Teams role authorization in Spinnaker, run the following command:

      • hal config security authz github edit --accessToken access_token --organization organization_name --baseUrl https://api.github.com

      Be sure to replace access_token with the personal access token you generated, and organization_name with the name of your Organization.

      The output will be:

      Output

      + Get current deployment
        Success
      + Get GitHub group membership settings
        Success
      + Edit GitHub group membership settings
        Success
      + Successfully edited GitHub method.

      You've updated your GitHub group settings. Now, you'll set the authorization provider to GitHub by running the following command:

      • hal config security authz edit --type github

      The output will look like:

      Output

      + Get current deployment
        Success
      + Get group membership settings
        Success
      + Edit group membership settings
        Success
      + Successfully updated roles.

      After updating these settings, enable them by running:

      • hal config security authz enable

      You'll see the following output:

      Output

      + Get current deployment
        Success
      + Edit authorization settings
        Success
      + Successfully enabled authorization

      With all the changes in place, you can now apply them to your running Spinnaker deployment. Execute the following command to do this:

      • hal deploy apply

      Once Halyard has finished, wait for Kubernetes to propagate the changes. This can take quite some time; you can watch the progress by running:

      • kubectl get pods -n spinnaker -w

      When all the pods show Running in the STATUS column and 1/1 under READY, navigate to your Spinnaker UI domain. You will be redirected to GitHub and asked to log in, if you aren't already. If the account you logged in with is a member of the Organization, you will be redirected back to Spinnaker and logged in. Otherwise, you will be denied access with a message that looks like this:

      {"error":"Unauthorized", "message":"Authentication Failed: User's provider info does not have all required fields.", "status":401, "timestamp":...}
      

      As a result of the GitHub Teams integration, Spinnaker now translates your Teams into roles. You can use these roles in Spinnaker to impose additional access restrictions on members of particular Teams. If you try to add another application, you'll notice that you can now also specify permissions for it, each of which pairs a level of access (read only, or read and write) with a role.
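
      As an illustration only (this fragment is not part of the tutorial's configuration), the attributes of such an application end up shaped roughly like the following, shown here as YAML for readability and assuming a hypothetical application named hello-app and a Team whose role name is devops:

      # Hypothetical Spinnaker application attributes with role-based permissions.
      name: hello-app            # example application name
      permissions:
        READ:                    # roles allowed to view the application
          - devops
        WRITE:                   # roles allowed to modify it
          - devops

      With a restriction like this in place, members of the devops Team can view and modify the application, while users without that role generally cannot see it at all.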

      You've set up GitHub authentication and authorization. You have also configured Spinnaker to restrict access to members of your Organization, learned about roles and permissions, and seen how GitHub Teams fit into Spinnaker's access model.

      Conclusion

      You have successfully configured and deployed Spinnaker to your DigitalOcean Kubernetes cluster. You can now manage and use your cloud resources more easily, from a central place. You can use triggers to automatically start a pipeline; for example, when a new Docker image has been added to the registry. To learn more about Spinnaker's terms and architecture, visit the official documentation. If you wish to deploy a private Docker registry to your cluster to hold your images, visit How To Set Up a Private Docker Registry on Top of DigitalOcean Spaces and Use It with DO Kubernetes.


