
      How To Manage Multiple Servers with Ansible Ad Hoc Commands


      Introduction

      Ansible is a modern configuration management tool that facilitates the task of setting up and maintaining remote servers. With a minimalist design intended to get users up and running quickly, it allows you to control one to hundreds of systems from a central location with either playbooks or ad hoc commands.

      Unlike playbooks — which consist of collections of tasks that can be reused — ad hoc commands are tasks that you don’t perform frequently, such as restarting a service or retrieving information about the remote systems that Ansible manages.

      In this cheat sheet guide, you’ll learn how to use Ansible ad hoc commands to perform common tasks such as installing packages, copying files, and restarting services on one or more remote servers, from an Ansible control node.

      Prerequisites

      In order to follow this guide, you’ll need:

      • One Ansible control node. This guide assumes your control node is an Ubuntu 20.04 machine with Ansible installed and configured to connect to your Ansible hosts using SSH keys. Make sure the control node has a regular user with sudo permissions and a firewall enabled, as explained in our Initial Server Setup guide. To set up Ansible, please follow our guide on How to Install and Configure Ansible on Ubuntu 20.04.
      • Two or more Ansible hosts. An Ansible host is any machine that your Ansible control node is configured to automate. This guide assumes your Ansible hosts are remote Ubuntu 20.04 servers. Make sure each Ansible host has:
        • The Ansible control node’s SSH public key added to the authorized_keys of a system user. This user can be either root or a regular user with sudo privileges. To set this up, you can follow Step 2 of How to Set Up SSH Keys on Ubuntu 20.04.
      • An inventory file set up on the Ansible control node. Make sure you have a working inventory file containing all your Ansible hosts. To set this up, please refer to the guide on How To Set Up Ansible Inventories. Then, make sure you’re able to connect to your nodes by running the connection test outlined in the section Testing Connection to Ansible Hosts.

      Testing Connection to Ansible Hosts

The following command will test connectivity between your Ansible control node and all of your Ansible hosts. It uses the current system user and its corresponding SSH key as the remote login, and includes the -m option, which tells Ansible to run the ping module. It also features the -i flag, which tells Ansible to ping the hosts listed in the specified inventory file:

      • ansible all -i inventory -m ping

      If this is the first time you’re connecting to these servers via SSH, you’ll be asked to confirm the authenticity of the hosts you’re connecting to via Ansible. When prompted, type yes and then hit ENTER to confirm.

      You should get output similar to this:

      Output

server1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
server2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

      Once you get a "pong" reply back from a host, it means the connection is live and you’re ready to run Ansible commands on that server.

      Adjusting Connection Options

      By default, Ansible tries to connect to the nodes as a remote user with the same name as your current system user, using its corresponding SSH keypair.

To connect as a different remote user, include the -u flag followed by the name of the intended user:

      • ansible all -i inventory -m ping -u sammy

      If you’re using a custom SSH key to connect to the remote servers, you can provide it at execution time with the --private-key option:

      • ansible all -i inventory -m ping --private-key=~/.ssh/custom_id

      Note: For more information on how to connect to nodes, please refer to our How to Use Ansible guide, which demonstrates more connection options.

      Once you’re able to connect using the appropriate options, you can adjust your inventory file to automatically set your remote user and private key, in case they are different from the default values assigned by Ansible. Then, you won’t need to provide those parameters in the command line.

      The following example inventory file sets up the ansible_user variable only for the server1 server:

      ~/ansible/inventory

      server1 ansible_host=203.0.113.111 ansible_user=sammy
      server2 ansible_host=203.0.113.112
      

      Ansible will now use sammy as the default remote user when connecting to the server1 server.

      To set up a custom SSH key, include the ansible_ssh_private_key_file variable as follows:

      ~/ansible/inventory

      server1 ansible_host=203.0.113.111 ansible_ssh_private_key_file=/home/sammy/.ssh/custom_id
      server2 ansible_host=203.0.113.112
      

      In both cases, we have set up custom values only for server1. If you want to use the same settings for multiple servers, you can use a child group for that:

      ~/ansible/inventory

      [group_a]
      203.0.113.111
      203.0.113.112
      
      [group_b]
      203.0.113.113
      
      
      [group_a:vars]
      ansible_user=sammy
      ansible_ssh_private_key_file=/home/sammy/.ssh/custom_id
      

      This example configuration will assign a custom user and SSH key only for connecting to the servers listed in group_a.
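
To verify that these group variables are being picked up as expected, you can run the connection test against that group only:

• ansible group_a -i inventory -m ping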

      Defining Targets for Command Execution

      When running ad hoc commands with Ansible, you can target individual hosts, as well as any combination of groups, hosts and subgroups. For instance, this is how you would check connectivity for every host in a group named servers:

      • ansible servers -i inventory -m ping

      You can also specify multiple hosts and groups by separating them with colons:

      • ansible server1:server2:dbservers -i inventory -m ping

To include an exception in a pattern, use an exclamation mark, prefixed by the \ escape character, as follows. This command will run on all servers from group1, except server2:

• ansible group1:\!server2 -i inventory -m ping

In case you’d like to run a command only on servers that are part of both group1 and group2, for instance, you should use & instead. Don’t forget to prefix it with the \ escape character:

• ansible group1:\&group2 -i inventory -m ping

      For more information on how to use patterns when defining targets for command execution, please refer to Step 5 of our guide on How to Set Up Ansible Inventories.

      Running Ansible Modules

Ansible modules are pieces of code that can be invoked from playbooks as well as from the command line to facilitate executing procedures on remote nodes. Examples include the apt module, used to manage system packages on Ubuntu, and the user module, used to manage system users. The ping command used throughout this guide is also a module, typically used to test the connection from the control node to the hosts.

      Ansible ships with an extensive collection of built-in modules, some of which require the installation of additional software in order to provide full functionality. You can also create your own custom modules using your language of choice.

      To execute a module with arguments, include the -a flag followed by the appropriate options in double quotes, like this:

      ansible target -i inventory -m module -a "module options"
      

      As an example, this will use the apt module to install the package tree on server1:

      • ansible server1 -i inventory -m apt -a "name=tree"
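
Module arguments are space-separated inside the quotes, so several options can be combined in a single call. As a sketch, assuming your remote user is allowed to escalate privileges, this refreshes the package cache before installing tree:

• ansible server1 -i inventory -m apt -a "name=tree update_cache=yes" --become -K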

      Running Bash Commands

      When a module is not provided via the -m option, the command module is used by default to execute the specified command on the remote server(s).

      This allows you to execute virtually any command that you could normally execute via an SSH terminal, as long as the connecting user has sufficient permissions and there aren’t any interactive prompts.

      This example executes the uptime command on all servers from the specified inventory:

      • ansible all -i inventory -a "uptime"

      Output

server1 | CHANGED | rc=0 >>
 14:12:18 up 55 days,  2:15,  1 user,  load average: 0.03, 0.01, 0.00
server2 | CHANGED | rc=0 >>
 14:12:19 up 10 days,  6:38,  1 user,  load average: 0.01, 0.02, 0.00
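
The default command module does not process shell features such as pipes or redirection. When you need those, you can switch to the shell module with -m shell; a minimal sketch:

• ansible all -i inventory -m shell -a "free -m | head -n 2"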

      Using Privilege Escalation to Run Commands with sudo

      If the command or module you want to execute on remote hosts requires extended system privileges or a different system user, you’ll need to use Ansible’s privilege escalation module, become. This module is an abstraction for sudo as well as other privilege escalation software supported by Ansible on different operating systems.

      For instance, if you wanted to run a tail command to output the latest log messages from Nginx’s error log on a server named server1 from inventory, you would need to include the --become option as follows:

      • ansible server1 -i inventory -a "tail /var/log/nginx/error.log" --become

      This would be the equivalent of running a sudo tail /var/log/nginx/error.log command on the remote host, using the current local system user or the remote user set up within your inventory file.

Privilege escalation systems such as sudo often require that you confirm your credentials by prompting you for your user’s password, which would cause an Ansible command or playbook execution to fail. You can use the --ask-become-pass or -K option to make Ansible prompt you for that sudo password:

      • ansible server1 -i inventory -a "tail /var/log/nginx/error.log" --become -K
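
Privilege escalation can also target a system user other than root. Assuming the remote sudo configuration allows it, you could add the --become-user option to run a command as, for example, the www-data user:

• ansible server1 -i inventory -a "whoami" --become --become-user=www-data -K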

      Installing and Removing Packages

      The following example uses the apt module to install the nginx package on all nodes from the provided inventory file:

      • ansible all -i inventory -m apt -a "name=nginx" --become -K

To remove a package, include the state argument and set it to absent:

      • ansible all -i inventory -m apt -a "name=nginx state=absent" --become -K
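
The apt module can also upgrade packages that are already installed. As a sketch, the following refreshes the package cache and then performs a distribution upgrade on all hosts:

• ansible all -i inventory -m apt -a "upgrade=dist update_cache=yes" --become -K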

      Copying Files

With the copy module, you can copy files from the Ansible control node to the managed nodes. The following command copies a local text file to all remote hosts in the specified inventory file:

      • ansible all -i inventory -m copy -a "src=./file.txt dest=~/myfile.txt"

If the source file already resides on the remote server, include the remote_src option so the copy happens entirely on that host:

      • ansible all -i inventory -m copy -a "src=~/myfile.txt remote_src=yes dest=./file.txt"
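
To pull files from the remote hosts back to the control node, the fetch module is typically used instead; it stores each host’s copy in a subdirectory named after that host. A minimal sketch, assuming a hypothetical ./backups/ directory as the destination:

• ansible all -i inventory -m fetch -a "src=~/myfile.txt dest=./backups/"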

      Changing File Permissions

      To modify permissions on files and directories on your remote nodes, you can use the file module.

The following command will adjust permissions on a file named file.txt located at /var/www on the remote host. It will set the file’s mode to 600, which enables read and write permissions only for the file’s owner. Additionally, it will set the ownership of that file to a user and a group called sammy:

      • ansible all -i inventory -m file -a "dest=/var/www/file.txt mode=600 owner=sammy group=sammy" --become -K

      Because the file is located in a directory typically owned by root, we might need sudo permissions to modify its properties. That’s why we include the --become and -K options. These will use Ansible’s privilege escalation system to run the command with extended privileges, and it will prompt you to provide the sudo password for the remote user.
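
The file module can also create directories and manage their ownership and permissions in one step. As a sketch, using a hypothetical path:

• ansible all -i inventory -m file -a "path=/var/www/html/assets state=directory mode=0755 owner=sammy group=sammy" --become -K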

      Restarting Services

      You can use the service module to manage services running on the remote nodes managed by Ansible. This will require extended system privileges, so make sure your remote user has sudo permissions and you include the --become option to use Ansible’s privilege escalation system. Using -K will prompt you to provide the sudo password for the connecting user.

To restart the nginx service on all hosts in a group called webservers, for instance, you would run:

      • ansible webservers -i inventory -m service -a "name=nginx state=restarted" --become -K
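
The same module can also make sure a service is enabled at boot time. For example, to ensure Nginx is both running and enabled on that group of hosts:

• ansible webservers -i inventory -m service -a "name=nginx state=started enabled=yes" --become -K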

      Restarting Servers

      Although Ansible doesn’t have a dedicated module to restart servers, you can issue a bash command that calls the /sbin/reboot command on the remote host.

      Restarting the server will require extended system privileges, so make sure your remote user has sudo permissions and you include the --become option to use Ansible’s privilege escalation system. Using -K will prompt you to provide the sudo password for the connecting user.

      Warning: The following command will fully restart the server(s) targeted by Ansible. That might cause temporary disruption to any applications that rely on those servers.

      To restart all servers in a webservers group, for instance, you would run:

      • ansible webservers -i inventory -a "/sbin/reboot" --become -K
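
Depending on your Ansible version, a dedicated reboot module may also be available; unlike calling /sbin/reboot directly, it waits until the hosts are reachable again before returning. A sketch, assuming your installation ships that module:

• ansible webservers -i inventory -m reboot --become -K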

      Gathering Information About Remote Nodes

      The setup module returns detailed information about the remote systems managed by Ansible, also known as system facts.

      To obtain the system facts for server1, run:

      • ansible server1 -i inventory -m setup

      This will print a large amount of JSON data containing details about the remote server environment. To print only the most relevant information, include the "gather_subset=min" argument as follows:

      • ansible server1 -i inventory -m setup -a "gather_subset=min"

      To print only specific items of the JSON, you can use the filter argument. This will accept a wildcard pattern used to match strings, similar to fnmatch. For example, to obtain information about both the ipv4 and ipv6 network interfaces, you can use *ipv* as filter:

      • ansible server1 -i inventory -m setup -a "filter=*ipv*"

      Output

server1 | SUCCESS => {
    "ansible_facts": {
        "ansible_all_ipv4_addresses": [
            "203.0.113.111",
            "10.0.0.1"
        ],
        "ansible_all_ipv6_addresses": [
            "fe80::a4f5:16ff:fe75:e758"
        ],
        "ansible_default_ipv4": {
            "address": "203.0.113.111",
            "alias": "eth0",
            "broadcast": "203.0.113.111",
            "gateway": "203.0.113.1",
            "interface": "eth0",
            "macaddress": "a6:f5:16:75:e7:58",
            "mtu": 1500,
            "netmask": "255.255.240.0",
            "network": "203.0.113.0",
            "type": "ether"
        },
        "ansible_default_ipv6": {}
    },
    "changed": false
}

      If you’d like to check disk usage, you can run a Bash command calling the df utility, as follows:

      • ansible all -i inventory -a "df -h"

      Output

server1 | CHANGED | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           798M  624K  798M   1% /run
/dev/vda1       155G  2.3G  153G   2% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vda15      105M  3.6M  101M   4% /boot/efi
tmpfs           798M     0  798M   0% /run/user/0
server2 | CHANGED | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
udev            2.0G     0  2.0G   0% /dev
tmpfs           395M  608K  394M   1% /run
/dev/vda1        78G  2.2G   76G   3% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/vda15      105M  3.6M  101M   4% /boot/efi
tmpfs           395M     0  395M   0% /run/user/0

      Conclusion

      In this guide, we demonstrated how to use Ansible ad hoc commands to manage remote servers, including how to execute common tasks such as restarting a service or copying a file from the control node to the remote servers managed by Ansible. We’ve also seen how to gather information from the remote nodes using limiting and filtering parameters.

      As an additional resource, you can check Ansible’s official documentation on ad hoc commands.




      How To Manage Your Kubernetes Configurations with Kustomize


      The author selected the Diversity in Tech Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Deploying applications to Kubernetes can sometimes feel cumbersome. You deploy some Pods, backed by a Deployment, with accessibility defined in a Service. All of these resources require YAML files for proper definition and configuration.

      On top of this, your application might need to communicate with a database, manage web content, or set logging verbosity. Further, these parameters may need to differ depending on the environment to which you are deploying. All of this can result in a sprawling codebase of YAML definitions, each with one- or two-line changes that are difficult to pinpoint.

      Kustomize is an open-source configuration management tool developed to help address these concerns. Since Kubernetes 1.14, kubectl fully supports Kustomize and kustomization files.

      In this guide, you will build a small web application and then use Kustomize to manage your configuration sprawl. You will deploy your app to development and production environments with different configurations. You will also layer these variable configurations using Kustomize’s bases and overlays so that your code is easier to read and thus easier to maintain.

      Prerequisites

For this tutorial, you will need:

• A Kubernetes cluster and the kubectl command-line tool installed and configured to communicate with it. Since kubectl has supported Kustomize since Kubernetes 1.14, make sure you are using version 1.14 or later.

      Step 1 — Deploying Your Application without Kustomize

      Before deploying your app with Kustomize, you will first deploy it more traditionally. In this case, you will deploy a development version of sammy-app—a static web application hosted on Nginx. You will store your web content as data in a ConfigMap, which you will mount on a Pod in a Deployment. Each of these will require a separate YAML file, which you will now create.

      First, make a folder for your application and all of its configuration files. This is where you’ll run all of the commands in this tutorial.

      Create a new folder in your home directory and navigate inside:

      • mkdir ~/sammy-app && cd ~/sammy-app

Now use your preferred text editor to create and open a file called configmap.yml:

• nano configmap.yml

      Add the following content:

      ~/sammy-app/configmap.yml

      ---
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: sammy-app
        namespace: default
      data:
        body: >
          <html>
            <style>
              body {
                background-color: #222;
              }
              p {
                font-family:"Courier New";
                font-size:xx-large;
                color:#f22;
                text-align:center;
              }
            </style>
            <body>
              <p>DEVELOPMENT</p>
            </body>
          </html>
      

      This specification creates a new ConfigMap object. You are naming it sammy-app and saving some HTML web content inside data:.

      Save and close the file.

Now create and open a second file called deployment.yml:

• nano deployment.yml

      Add the following content:

      ~/sammy-app/deployment.yml

      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: sammy-app
        namespace: default
        labels:
          app: sammy-app
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: sammy-app
        template:
          metadata:
            labels:
              app: sammy-app
          spec:
            containers:
            - name: server
              image: nginx:1.17
              volumeMounts:
                - name: sammy-app
                  mountPath: /usr/share/nginx/html
              ports:
              - containerPort: 80
                protocol: TCP
              resources:
                requests:
                  cpu: 100m
                  memory: "128M"
                limits:
                  cpu: 100m
                  memory: "256M"
              env:
              - name: LOG_LEVEL
                value: "DEBUG"
            volumes:
            - name: sammy-app
              configMap:
                name: sammy-app
                items:
                - key: body
                  path: index.html
      

      This specification creates a new Deployment object. You are adding the name and label of sammy-app, setting the number of replicas to 1, and specifying the object to use the Nginx version 1.17 container image. You are also setting the container’s port to 80, defining cpu and memory requests and limitations, and setting your logging level to DEBUG.

      Save and close the file.

      Now deploy these two files to your Kubernetes cluster. To create multiple Objects from stdin, pipe the cat command to kubectl:

      • cat configmap.yml deployment.yml | kubectl apply -f -

      Wait a few moments and then use kubectl to check the status of your application:

      • kubectl get pods -l app=sammy-app

      You will eventually see one Pod with your application running and 1/1 containers in the READY column:

      Output

NAME                         READY   STATUS    RESTARTS   AGE
sammy-app-56bbd86cc9-chs75   1/1     Running   0          8s

      Your Pod is running and backed by a Deployment, but you still cannot access your application. First, you need to add a Service.

Create and open a third YAML file called service.yml:

• nano service.yml

      Add the following content:

      ~/sammy-app/service.yml

      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: sammy-app
        labels:
          app: sammy-app
      spec:
        type: LoadBalancer
        ports:
        - name: sammy-app-http
          port: 80
          protocol: TCP
          targetPort: 80
        selector:
          app: sammy-app
      

      This specification creates a new Service object called sammy-app. For most cloud providers, setting spec.type to LoadBalancer will provision a load balancer. DigitalOcean Managed Kubernetes (DOKS), for instance, will provision a DigitalOcean LoadBalancer to make your application available to the Internet. spec.ports will target TCP port 80 for any Pod with the sammy-app label.

      Save and close the file.

      Now deploy the Service to your Kubernetes cluster:

      • kubectl apply -f service.yml

Wait a few moments and then use kubectl to check the status of your application. The -w flag will watch for changes:

• kubectl get services -l app=sammy-app -w

      Eventually, a public IP will appear for your Service under the EXTERNAL-IP column. A unique IP will appear in the place of your_external_ip:

      Output

NAME         TYPE           CLUSTER-IP       EXTERNAL-IP        PORT(S)        AGE
kubernetes   ClusterIP      10.245.0.1       <none>             443/TCP        7h26m
sammy-app    LoadBalancer   10.245.186.235   <pending>          80:30303/TCP   65s
sammy-app    LoadBalancer   10.245.186.235   your_external_ip   80:30303/TCP   2m29s

      Copy the IP address that appears and enter it in your web browser. You will see the DEVELOPMENT version of your application.

      sammy-app in development

      From your terminal, type CTRL + C to stop watching your Services.

      In this step, you deployed a development version of sammy-app to Kubernetes. In Steps 2 and 3, you will use Kustomize to redeploy a development version of sammy-app and then deploy a production version with slightly different configurations. Using this new workflow, you will see how well Kustomize can manage configuration changes and simplify your development workflow.

      Step 2 — Deploying Your Application with Kustomize

      In this step, you will deploy the exact same application, but in the form that Kustomize expects instead of the default Kubernetes manner.

      Your filesystem currently looks like this:

      sammy-app/
      ├── configmap.yml
      ├── deployment.yml
      └── service.yml
      

To make this application deployable with Kustomize, you need to add one file, kustomization.yml. Do so now:

• nano kustomization.yml

      At a minimum, this file should specify what resources to manage when running kubectl with the -k option, which will direct kubectl to process the kustomization file.

      Add the following content:

      ~/sammy-app/kustomization.yml

      ---
      resources:
      - configmap.yml
      - deployment.yml
      - service.yml
      

      Save and close the file.

      Now, before deploying again, delete your existing Kubernetes resources from Step 1:

      • kubectl delete deployment/sammy-app service/sammy-app configmap/sammy-app

And deploy them again, but this time with Kustomize:

• kubectl apply -k .

      Instead of providing the -f option to kubectl to direct Kubernetes to create resources from a file, you provide -k and a directory (in this case, . denotes the current directory). This instructs kubectl to use Kustomize and to inspect that directory’s kustomization.yml.

      This creates all three resources: the ConfigMap, Deployment, and Service. Use the get pods command to check your deployment:

      • kubectl get pods -l app=sammy-app

      You will again see one Pod with your application running and 1/1 containers in the READY column.

      Now rerun the get services command. You will also see your Service with a publicly-accessible EXTERNAL-IP:

      • kubectl get services -l app=sammy-app
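
If you want to inspect exactly what the -k option will send to the cluster, you can also ask kubectl to render the Kustomize output without applying anything; assuming kubectl 1.14 or later:

• kubectl kustomize .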

      You are now successfully using Kustomize to manage your Kubernetes configurations. In the next step, you will deploy sammy-app to production with a slightly different configuration. You will also use Kustomize to manage these variances.

      Step 3 — Managing Application Variance with Kustomize

      Configuration files for Kubernetes resources can really start to sprawl once you start dealing with multiple resource types, especially when there are small differences between environments (like development versus production, for example). You might have a deployment-development.yml and deployment-production.yml instead of just a deployment.yml. The situation might be similar for all of your other resources, too.

      Imagine what might happen when a new version of the Nginx Docker image is released, and you want to start using it. Perhaps you test the new version in deployment-development.yml and want to proceed, but then you forget to update deployment-production.yml with the new version. Suddenly, you’re running a different version of Nginx in development than you are in production. Small configuration errors like this can quickly break your application.

      Kustomize can greatly simplify these management issues. Remember that you now have a filesystem with your Kubernetes configuration files and a kustomization.yml:

      sammy-app/
      ├── configmap.yml
      ├── deployment.yml
      ├── kustomization.yml
      └── service.yml
      

      Imagine that you are now ready to deploy sammy-app to production. You’ve also decided that the production version of your application will differ from its development version in the following ways:

      • replicas will increase from 1 to 3.
      • container resource requests will increase from 100m CPU and 128M memory to 250m CPU and 256M memory.
      • container resource limits will increase from 100m CPU and 256M memory to 1 CPU and 1G memory.
      • the LOG_LEVEL environment variable will change from DEBUG to INFO.
      • ConfigMap data will change to display slightly different web content.

To begin, create some new directories to organize things in a more Kustomize-specific way. First, make a directory to hold your "default" configuration, known as the base:

• mkdir base

In this example, the base is the development version of sammy-app.

      Now move your current configuration in sammy-app/ into this directory:

      • mv configmap.yml deployment.yml service.yml kustomization.yml base/

      Then make a new directory for your production configuration. Kustomize calls this an overlay. Think of overlays as layers on top of the base—they always require a base to function:

      • mkdir -p overlays/production

      Create another kustomization.yml file to define your production overlay:

      • nano overlays/production/kustomization.yml

      Add the following content:

      ~/sammy-app/overlays/production/kustomization.yml

      ---
      bases:
      - ../../base
      patchesStrategicMerge:
      - configmap.yml
      - deployment.yml
      

This file will specify a base for the overlay and what strategy Kustomize will use to patch the resources. In this example, you will specify a strategic-merge-style patch to update the ConfigMap and Deployment resources.

      Save and close the file.

      And finally, add new deployment.yml and configmap.yml files into the overlays/production/ directory.

      Create the new deployment.yml file first:

      • nano overlays/production/deployment.yml

Add the following to your file. Compared to your development configuration, it changes the replica count, the container resource requests and limits, and the LOG_LEVEL value:

      ~/sammy-app/overlays/production/deployment.yml

      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: sammy-app
        namespace: default
      spec:
        replicas: 3
        template:
          spec:
            containers:
            - name: server
              resources:
                requests:
                  cpu: 250m
                  memory: "256M"
                limits:
                  cpu: 1
                  memory: "1G"
              env:
              - name: LOG_LEVEL
                value: "INFO"
      

      Notice the contents of this new deployment.yml. It contains only the TypeMeta fields used to identify the resource that changed (in this case, the Deployment of your application), and just enough remaining fields to step into the nested structure to specify a new field value, e.g., the container resource requests and limits.

      Save and close the file.

      Now create a new configmap.yml for your production overlay:

• nano overlays/production/configmap.yml

      Add the following content:

      ~/sammy-app/overlays/production/configmap.yml

      ---
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: sammy-app
        namespace: default
      data:
        body: >
          <html>
            <style>
              body {
                background-color: #222;
              }
              p {
                font-family:"Courier New";
                font-size:xx-large;
                color:#22f;
                text-align:center;
              }
            </style>
            <body>
              <p>PRODUCTION</p>
            </body>
          </html>
      

      Here you have changed the text to display PRODUCTION instead of DEVELOPMENT. Note that you also changed the text color from a red hue #f22 to a blue hue #22f. Consider how difficult it could be to locate and track such minor changes if you were not using a configuration management tool like Kustomize.

      Your directory structure now looks like this:

      sammy-app/
      ├── base
      │   ├── configmap.yml
      │   ├── deployment.yml
      │   ├── kustomization.yml
      │   └── service.yml
      └── overlays
          └── production
              ├── configmap.yml
              ├── deployment.yml
              └── kustomization.yml
      
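
Before deploying, you can optionally render the production overlay locally to confirm that the patches merge with the base as expected (for example, that replicas is now 3 and the page body reads PRODUCTION); assuming kubectl 1.14 or later:

• kubectl kustomize overlays/production/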

      You are ready to deploy using your base configuration. First, delete the existing resources:

      • kubectl delete deployment/sammy-app service/sammy-app configmap/sammy-app

Deploy your base configuration to Kubernetes:

• kubectl apply -k base/

      Inspect your deployment:

      • kubectl get pods,services -l app=sammy-app

      You will see the expected base configuration, with the development version visible on the EXTERNAL-IP of the Service:

      Output

NAME                             READY   STATUS    RESTARTS   AGE
pod/sammy-app-5668b6dc75-rwbtq   1/1     Running   0          21s

NAME                TYPE           CLUSTER-IP       EXTERNAL-IP        PORT(S)        AGE
service/sammy-app   LoadBalancer   10.245.110.172   your_external_ip   80:31764/TCP   7m43s

      Now deploy your production configuration:

      • kubectl apply -k overlays/production/

      Inspect your deployment again:

      • kubectl get pods,services -l app=sammy-app

      You will see the expected production configuration, with the production version visible on the EXTERNAL-IP of the Service:

      Output

NAME                             READY   STATUS    RESTARTS   AGE
pod/sammy-app-86759677b4-h5ndw   1/1     Running   0          15s
pod/sammy-app-86759677b4-t2dml   1/1     Running   0          17s
pod/sammy-app-86759677b4-z56f8   1/1     Running   0          13s

NAME                TYPE           CLUSTER-IP       EXTERNAL-IP        PORT(S)        AGE
service/sammy-app   LoadBalancer   10.245.110.172   your_external_ip   80:31764/TCP   8m59s

      Notice in the production configuration that there are 3 Pods in total instead of 1. You can view the Deployment resource to confirm that the less-apparent changes have taken effect, too:

      • kubectl get deployments -l app=sammy-app -o yaml

      Visit your_external_ip in a browser to view the production version of your site.

      sammy-app in production

      You are now using Kustomize to manage application variance. Thinking back to one of your original problems, if you now wanted to change the Nginx image version, you would only need to modify deployment.yml in the base, and your overlays that use that base will also receive that change through Kustomize. This greatly simplifies your development workflow, improves readability, and reduces the likelihood of errors.

      Conclusion

      In this tutorial, you built a small web application and deployed it to Kubernetes. You then used Kustomize to simplify the management of your application’s configuration for different environments. You reorganized a set of nearly duplicate YAML files into a layered model. This will reduce errors, reduce manual configuration, and keep your work more recognizable and maintainable.

      This, however, only scratches the surface of what Kustomize offers. There are dozens of official examples and plenty of in-depth technical documentation to explore if you are interested in learning more.




      How To Manage a Redis Database eBook



      Introduction to the eBook

This book aims to provide an approachable introduction to Redis concepts by outlining many of the key-value store’s commands so that readers can learn their patterns and syntax, gradually building up their understanding. The goal of this book is to serve as an introduction to Redis for those interested in getting started with it, or with key-value stores in general. For more experienced users, this book can function as a collection of helpful cheat sheets and an in-depth reference.

      This book is based on the How To Manage a Redis Database tutorial series found on DigitalOcean Community. The topics that it covers include how to:

      1. Connect to a Redis database

      2. Create and use a variety of Redis data types, including strings, sets, hashes, and lists

      3. Manage Redis clients and replicas

      4. Run transactions in Redis

      5. Troubleshoot issues in a Redis installation

      Each chapter is self-contained and can be followed independently of the others. By reading through this book, you’ll become acquainted with many of Redis’s most widely used commands, which will help you as you begin to build applications that take advantage of its power and speed.

      Download the eBook

      You can download the eBook in either the EPUB or PDF format by following the links below.

      Download the Complete eBook!

      How To Manage a Redis Database eBook in EPUB format

      How To Manage a Redis Database eBook in PDF format

      If you’d like to learn more about how to use Redis, visit the DigitalOcean Community’s Redis section. Alternatively, if you want to learn about other open-source database management systems, like MySQL, PostgreSQL, MariaDB, or MongoDB, we encourage you to check out our full library of database-related content.


