
      Learning to Use the ss Tool to its Full Potential


      Updated by Linode

      Contributed by Mihalis Tsoukalos

      Introduction

      The study of socket connections is important for every
      UNIX and network administrator because it helps you better understand your Linux system's status. Written by Alexey Kuznetsov to replace the famous netstat utility, the more capable ss (socket statistics) utility allows you to monitor TCP, UDP, and UNIX sockets. The purpose of this guide is to help you learn the ss utility and use it as productively as possible.

      Note

      Running ss without using the sudo utility will result in different output. Practically, this means that running ss without root privileges will show the results available to the current user only. If you are not familiar with the sudo command,
      see the Users and Groups guide.

      Command Line Options

      The ss(8) binary supports many command line options, including the following:

      Option Definition
      -h The -h option shows a summary of all options.
      -V The -V option displays the version of ss.
      -H The -H option tells ss to suppress the header line; this is useful when you want to process the generated output with a scripting language.
      -t The -t parameter tells ss to show TCP connections only.
      -u The -u parameter tells ss to show UDP connections only.
      -d The -d parameter tells ss to show DCCP sockets only.
      -S The -S parameter tells ss to show SCTP sockets only.
      -a The -a option tells ss to display both listening and non-listening sockets of every kind.
      -l The -l parameter tells ss to display listening sockets, which are omitted by default.
      -e The -e option tells ss to display detailed socket information.
      -x The -x parameter tells ss to display UNIX domain sockets only.
      -A The -A option allows you to select the socket types that you want to see. The -A option accepts the following set of identifiers that can be combined and separated by commas: all, inet, tcp, udp, raw, unix, packet, netlink, unix_dgram, unix_stream, unix_seqpacket, packet_raw and packet_dgram.
      -4 The -4 command line option tells ss to display IPv4 connections only.
      -6 The -6 command line option tells ss to display IPv6 connections only.
      -f FAMILY The -f option tells ss to display sockets of type FAMILY. The supported values are unix, inet, inet6, and netlink.
      -s The -s option displays useful statistics about the current connections.
      -o The -o option displays timer information. There are five types of timers: on, which is either a TCP retransmission timer, a TCP early retransmission timer, or a tail loss probe timer; keepalive, the TCP keepalive timer; timewait, the timewait stage timer; persist, the zero window probe timer; and unknown, a timer that is none of the above.
      -n The -n option tells ss to disable the resolving of service names.
      -r The -r option tells ss to enable DNS resolving in the output, which is turned off by default.
      -m The -m parameter tells ss to display socket memory usage information.
      -p The -p parameter tells ss to display the process that is using a socket.
      -D FILE The -D parameter tells ss to save the output in the FILE file.
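      Several of these options combine well in scripts. The following is a hedged sketch of how -H fits into a pipeline: because -H suppresses the header row, ss output can be piped straight into text tools. The here-document stands in for the output of ss -H -t -a (using lines shaped like the samples later in this guide) so the example is self-contained:

      ```shell
      #!/bin/sh
      # Count sockets per state from headerless ss-style output.
      # On a live system you would run the real command instead:
      #   ss -H -t -a | awk '{print $1}' | sort | uniq -c | sort -nr
      awk '{print $1}' <<'EOF' | sort | uniq -c | sort -nr
      LISTEN   0   80    127.0.0.1:mysql      *:*
      LISTEN   0   128   *:ssh                *:*
      ESTAB    0   204   109.74.193.253:ssh   2.86.7.61:55137
      EOF
      ```

      With the sample lines above, the pipeline prints the LISTEN count first, followed by the ESTAB count.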

      Note

      The -A tcp option is equivalent to -t, the -A udp option is equivalent to -u, and the -A unix
      option is equivalent to -x.

      Installing ss

      The ss tool is part of the iproute2 utility suite. Since the ss command line tool is usually
      installed by default, you will not need to install it yourself. On a Debian Linux system, you can
      find the ss executable inside /bin.

      If for some reason ss is not installed on your Linux system, you should install the iproute2
      package using your favorite package manager.
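      A quick way to check availability is sketched below. (The package name iproute2 is the Debian/Ubuntu one; other distributions may name the package differently, so verify for your system.)

      ```shell
      #!/bin/sh
      # Report whether ss is on the PATH; if not, suggest installing iproute2.
      # (Package name assumed for Debian/Ubuntu; adjust for your distribution.)
      if command -v ss >/dev/null 2>&1; then
          echo "ss found at: $(command -v ss)"
      else
          echo "ss not found; try: sudo apt install iproute2"
      fi
      ```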

      Examples

      Basic Usage

      The simplest way to use ss is without any command line parameters. When ss is
      used without any command line arguments, it prints all TCP, UDP, and UNIX socket connections.
      The list can get long on busy machines, which makes it harder to parse; the output of wc(1), a word count utility, shows that the list is long yet manageable:

      ss | wc
      
        
           94     750    7926
      
      

      If you also use the -a parameter to show all listening and non-listening sockets, the output will be much longer:

      ss -a | wc
      
        
          224    1682   19562
      
      

      Listing Sockets

      TCP

      The following command displays all listening and non-listening (-a) TCP (-t) sockets:

      ss -t -a
      
        
      State    Recv-Q  Send-Q  Local Address:Port   Peer Address:Port
      LISTEN   0       80      127.0.0.1:mysql      *:*
      LISTEN   0       128     *:ssh                *:*
      LISTEN   0       100     *:smtp               *:*
      ESTAB    0       204     109.74.193.253:ssh   2.86.7.61:55137
      LISTEN   0       128     :::http              :::*
      LISTEN   0       128     :::ssh               :::*
      LISTEN   0       128     :::https             :::*
      
      

      The output is separated into columns. The first column, state, shows the state of the TCP connection. As the example is using the -a
      option, both listening and non-listening states are included in the output.
      The second and third columns, Recv-Q and Send-Q, show the amount of data queued for receive and
      transmit operations. The Local Address:Port column shows the IP address the process
      listens to, as well as the port number used; you can map a service name to its numeric port
      by looking at the /etc/services file. The last column, Peer Address:Port, is useful when there is an active connection,
      because it shows the address and port number of the client machine; for TCP connections in the
      LISTEN state it holds no real values.
      As the -r option is not used, you only see IP addresses in the output.
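      To see the name-to-port mapping directly, you can query /etc/services with a one-liner; a minimal sketch for the ssh entry (assuming /etc/services is present, as it is on most Linux systems):

      ```shell
      #!/bin/sh
      # /etc/services maps service names (as printed by ss) to numeric ports.
      # Print the ssh entries; expect lines such as "ssh 22/tcp".
      awk '$1 == "ssh" {print $1, $2}' /etc/services
      ```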

      Running ss -t without -a will display established TCP connections only:

      ss -t
      
        
      State  Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
      ESTAB  0       204     109.74.193.253:ssh  2.86.7.61:55137
      
      

      UDP

      The following command displays all UDP (-u) sockets:

      ss -u -a
      
        
      State    Recv-Q  Send-Q  Local Address:Port                  Peer Address:Port
      UNCONN   0       0       *:mdns                              *:*
      UNCONN   1536    0       109.74.193.253:syslog               *:*
      UNCONN   0       0       *:54087                             *:*
      UNCONN   0       0       *:bootpc                            *:*
      UNCONN   0       0       109.74.193.253:ntp                  *:*
      UNCONN   0       0       127.0.0.1:ntp                       *:*
      UNCONN   0       0       *:ntp                               *:*
      UNCONN   0       0       :::mdns                             :::*
      UNCONN   0       0       :::48582                            :::*
      UNCONN   0       0       fe80::f03c:91ff:fe69:1381%eth0:ntp  :::*
      UNCONN   0       0       2a01:7e00::f03c:91ff:fe69:1381:ntp  :::*
      UNCONN   0       0       ::1:ntp                             :::*
      UNCONN   0       0       :::ntp                              :::*
      
      

      Running ss -u without -a will display established UDP connections only. In this case, there are no established UDP connections:

      ss -u
      
        
      Recv-Q Send-Q  Local Address:Port  Peer Address:Port
      
      

      Display Statistics

      You can display statistics about the current connections using the -s option:

      ss -s
      
        
      Total: 199 (kernel 228)
      TCP:   9 (estab 1, closed 2, orphaned 0, synrecv 0, timewait 0/0), ports 0
      
      Transport  Total  IP  IPv6
      *          228    -   -
      RAW        0      0   0
      UDP        13     7   6
      TCP        7      4   3
      INET       20     11  9
      FRAG       0      0   0
      
      

      Filter by TCP State

      ss allows you to filter its output by state using the state and exclude keywords
      followed by a state identifier. The state keyword displays output that matches the
      provided identifier, whereas the exclude keyword displays everything except the output
      that matches the identifier.

      The use of state is illustrated in the next example:

      ss -t4 state established
      
        
      Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
      0       0       109.74.193.253:ssh  2.86.7.61:55137
      
      

      The use of exclude is illustrated in the next example:

      ss -t4 exclude established
      
        
      State      Recv-Q  Send-Q  Local Address:Port   Peer Address:Port
      LISTEN     0       80      127.0.0.1:mysql      *:*
      LISTEN     0       128     *:ssh                *:*
      LISTEN     0       100     *:smtp               *:*
      TIME-WAIT  0       0       109.74.193.253:smtp  103.89.91.73:55668
      
      

      The -t4 command option returns IPv4 TCP connections.

      Filter Output by IP Address and Port Number

      The more you filter the output of ss, the more accurate and relevant the information you receive. Two filtering keywords allow
      you to restrict the output to connections involving certain IP addresses and port numbers.

      The following command shows traffic from a given IP address only, using the
      dst keyword:

      ss -nt dst 2.86.7.61
      
        
      State        Recv-Q  Send-Q  Local Address:Port         Peer Address:Port
      ESTAB        0       0       109.74.193.253:22          2.86.7.61:55137
      FIN-WAIT-1   0       32      ::ffff:109.74.193.253:443  ::ffff:2.86.7.61:56075
      ESTAB        0       0       ::ffff:109.74.193.253:443  ::ffff:2.86.7.61:56077
      ESTAB        0       0       ::ffff:109.74.193.253:443  ::ffff:2.86.7.61:56074
      ESTAB        0       0       ::ffff:109.74.193.253:443  ::ffff:2.86.7.61:56078
      
      

      If you want to display traffic from an entire network, you can replace the IP address with
      a network address such as 2.86.7/24.

      The following command displays information about the HTTP and the HTTPS protocols, which
      are associated with port numbers 80 and 443 as defined in /etc/services:

      ss -at '( dport = :http or dport = :https or sport = :http or sport = :https )'
      
        
      State      Recv-Q  Send-Q  Local Address:Port           Peer Address:Port
      LISTEN     0       128     :::http                      :::*
      LISTEN     0       128     :::https                     :::*
      ESTAB      0       0       ::ffff:109.74.193.253:https  ::ffff:2.86.7.61:56046
      ESTAB      0       0       ::ffff:109.74.193.253:https  ::ffff:2.86.7.61:56055
      ESTAB      0       0       ::ffff:109.74.193.253:https  ::ffff:2.86.7.61:56047
      ESTAB      0       0       ::ffff:109.74.193.253:https  ::ffff:2.86.7.61:56054
      ESTAB      0       0       ::ffff:109.74.193.253:https  ::ffff:2.86.7.61:56056
      ESTAB      0       0       ::ffff:109.74.193.253:https  ::ffff:2.86.7.61:56057
      TIME-WAIT  0       0       ::ffff:109.74.193.253:http   ::ffff:54.39.151.52:59854
      
      

      dport means destination port and sport means source port.

      The following command is equivalent to the previous command:

      ss -at '( dport = :80 or dport = :443 or sport = :80 or sport = :443 )'
      

      Display Timer Information

      The -o option displays timer information:

      ss -nt dst 2.86.7.61 -o
      
        
      State  Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
      ESTAB  0       0       109.74.193.253:22   2.86.7.61:55137     timer:(keepalive,72min,0)
      
      

      Enable IP Address Resolving

      The -r parameter enables IP address resolving, which returns the domain names of the IP addresses:

      ss -r -t
      
        
      State  Recv-Q  Send-Q    Local Address:Port                  Peer Address:Port
      ESTAB  0       168       li140-253.members.linode.com:ssh    ppp-2-86-7-61.home.otenet.gr:50939
      ESTAB  0       0         li140-253.members.linode.com:https  ::ffff:216.244.66.228:37668
      
      

      Note

      A side effect of the -r command line option is that it slows the execution of
      the ss command due to the DNS lookups that need to be performed.

      Display Detailed Socket Information

      The -e option tells ss to display detailed socket information. The -e option
      is illustrated in the following example:

      ss -t -e
      
        
      State  Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
      ESTAB  0       0       109.74.193.253:ssh  2.86.7.61:62897    timer:(keepalive,54min,0) ino:10195329 sk:11e 
      
      

      Show a Connection’s UNIX Process

      The -p option displays the process ID(s) and the process name of a connection:

      ss -t -p
      
        
      State  Recv-Q  Send-Q  Local Address:Port           Peer Address:Port
      ESTAB  0       204     109.74.193.253:ssh           2.86.7.61:55137            users:(("sshd",pid=3964,fd=3),("sshd",pid=3951,fd=3))
      ESTAB  0       51      ::ffff:109.74.193.253:https  ::ffff:176.9.146.74:57536  users:(("apache2",pid=30871,fd=29))
      
      

      The following command shows SSH-related processes on the current machine:

      ss -t -p -a | grep ssh
      
        
      LISTEN  0  128  *:ssh               *:*                    users:(("sshd",pid=812,fd=3))
      ESTAB   0  36   109.74.193.253:ssh  2.86.7.61:55137        users:(("sshd",pid=3964,fd=3),("sshd",pid=3951,fd=3))
      ESTAB   0  0    109.74.193.253:ssh  138.197.140.194:41992  users:(("sshd",pid=8538,fd=3),("sshd",pid=8537,fd=3))
      LISTEN  0  128  :::ssh              :::*                   users:(("sshd",pid=812,fd=4))
      
      

      Find Which Process is Using a Given Port Number

      With the help of ss and grep(1), you can discover which process is using
      a given port number:

      ss -tunap | grep :80
      
        
      tcp  LISTEN  0  128  :::80 :::*  users:(("apache2",pid=8772,fd=4),("apache2",pid=8717,fd=4),("apache2",pid=8715,fd=4),("apache2",pid=8714,fd=4),("apache2",pid=8713,fd=4),("apache2",pid=8712,fd=4),("apache2",pid=8711,fd=4),("apache2",pid=8709,fd=4))
      
      

      As Apache uses multiple child processes, you receive a list of processes for port number 80.

      The next command will do exactly the same thing without using grep(1):

      ss -tup -a sport = :80
      
        
      Netid  State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
      tcp    LISTEN  0       128     :::http             :::*               users:(("apache2",pid=8715,fd=4),("apache2",pid=8714,fd=4),("apache2",pid=8713,fd=4),("apache2",pid=8712,fd=4),("apache2",pid=8711,fd=4),("apache2",pid=8709,fd=4))
      
      

      Find Open Ports Above Port Number 1024

      ss supports ranges when working with port numbers. This feature is illustrated in
      the following example, which finds open ports above port number 1024:

      ss -t -a sport > :1024
      
        
      State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
      LISTEN  0       80      127.0.0.1:mysql     *:*
      
      

      Note

      The ss -t -a sport > :1024 command is safer written as ss -t -a sport '> :1024' (or with the > escaped as \>), because most shells otherwise interpret the unquoted > as output redirection.
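      The quoting matters because of the shell rather than ss: an unquoted > is consumed by the shell as output redirection before the command runs. A small demonstration of the shell behavior, with echo standing in for ss:

      ```shell
      #!/bin/sh
      # The unquoted > is taken by the shell as redirection, so the command
      # never sees it; a file literally named ":1024" appears instead.
      echo sport > :1024       # shell redirects into the file ":1024"
      ls :1024                 # the file exists
      rm :1024
      echo sport '>' :1024     # quoted: > reaches the command as an argument
      ```

      The last line prints sport > :1024, showing that quoting passes > through as an ordinary argument.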

      Search for Specific TCP Characteristics

      The following command shows all IPv4 TCP connections in the
      listening state, together with the name of the process using each socket,
      without resolving IP addresses or port numbers:

      ss -t -4nlp
      
        
      State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
      LISTEN  0       80      127.0.0.1:3306      *:*                users:(("mysqld",pid=1003,fd=17))
      LISTEN  0       128     *:22                *:*                users:(("sshd",pid=812,fd=3))
      LISTEN  0       100     *:25                *:*                users:(("smtpd",pid=9011,fd=6),("master",pid=1245,fd=13))
      
      

      The following command shows all SSH-related connections and sockets:

      ss -at '( dport = :ssh or sport = :ssh )'
      
        
      State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
      LISTEN  0       128     *:ssh               *:*
      ESTAB   0       0       109.74.193.253:ssh  2.86.7.61:64363
      LISTEN  0       128     :::ssh              :::*
      
      

      Show Sockets in a Listening State

      The following command shows TCP sockets in listening (-l) state:

      ss -l -t
      
        
      State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
      LISTEN  0       80      127.0.0.1:mysql     *:*
      LISTEN  0       128     *:ssh               *:*
      LISTEN  0       100     *:smtp              *:*
      LISTEN  0       128     :::http             :::*
      LISTEN  0       128     :::ssh              :::*
      LISTEN  0       128     :::https            :::*
      
      

      The following command shows IPv4 UDP sockets in listening state:

      ss -l -u -4
      
        
      State   Recv-Q  Send-Q  Local Address:Port      Peer Address:Port
      UNCONN  0       0       *:mdns                  *:*
      UNCONN  1536    0       109.74.193.253:syslog   *:*
      UNCONN  0       0       *:54087                 *:*
      UNCONN  0       0       *:bootpc                *:*
      UNCONN  0       0       109.74.193.253:ntp      *:*
      UNCONN  0       0       127.0.0.1:ntp           *:*
      UNCONN  0       0       *:ntp                   *:*
      
      

      Advanced Filtering with ss

      The following ss command lists all TCP sockets that are in the ESTABLISHED state, use HTTP or HTTPS on the local machine, and belong to the 2.86.7/24 network, and displays their timers:

      ss -o state established '( sport = :http or sport = :https )' dst 2.86.7/24
      
        
      Netid  Recv-Q  Send-Q  Local Address:Port           Peer Address:Port
      tcp    0       0       ::ffff:109.74.193.253:https  ::ffff:2.86.7.61:63057  timer:(keepalive,119min,0)
      tcp    0       0       ::ffff:109.74.193.253:https  ::ffff:2.86.7.61:63053  timer:(keepalive,119min,0)
      tcp    0       0       ::ffff:109.74.193.253:https  ::ffff:2.86.7.61:63055  timer:(keepalive,119min,0)
      tcp    0       0       ::ffff:109.74.193.253:https  ::ffff:2.86.7.61:63054  timer:(keepalive,119min,0)
      tcp    0       0       ::ffff:109.74.193.253:https  ::ffff:2.86.7.61:63052  timer:(keepalive,119min,0)
      tcp    0       0       ::ffff:109.74.193.253:https  ::ffff:2.86.7.61:63056  timer:(keepalive,119min,0)
      
      

      Apart from the standard TCP state names (established,
      syn-sent, syn-recv, fin-wait-1, fin-wait-2, time-wait, closed, close-wait, last-ack,
      listen and closing), you can also use the following states:

      • all: For all the states.
      • bucket: For TCP minisockets (the TIME-WAIT and SYN-RECV states).
      • big: For all states except minisockets; this is the opposite of bucket.
      • connected: For all states except closed and listening.
      • synchronized: For all connected states except SYN-SENT.

      Using AWK to Process ss Output

      The following command displays a summary of all TCP and UDP sockets by their transport protocol (the stray Netid entry is the header line, which grep -v State does not remove):

      ss -t -u -a | awk '{print $1}' | grep -v State | sort | uniq -c | sort -nr
      
        
           13 udp
            7 tcp
            1 Netid
      
      

      The following command displays a summary of all sockets based on their protocol:

      ss -a | awk '{print $1}' | grep -v State | sort | uniq -c | sort -nr
      
        
          133 u_str
           37 u_dgr
           34 nl
           13 udp
            8 tcp
            1 u_seq
            1 p_raw
            1 Netid
      
      

      The last command creates a summary of all IPv6 TCP connections in the
      connected state:

      ss -t6 state connected | awk '{print $1}' | grep -v State | sort | uniq -c | sort -nr
      

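      The pipelines above can be generalized into a small helper. A sketch follows (the NR > 1 condition replaces grep -v State for skipping the header row, and the here-document stands in for live ss output):

      ```shell
      #!/bin/sh
      # Summarize the first column of ss-style output, skipping the header row.
      # NR > 1 plays the role of `grep -v State` in the pipelines above.
      summarize_col1() {
          awk 'NR > 1 {print $1}' | sort | uniq -c | sort -nr
      }
      # Sample input standing in for `ss -a`; on a live system:
      #   ss -a | summarize_col1
      summarize_col1 <<'EOF'
      Netid  State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
      tcp    ESTAB   0       0       127.0.0.1:ssh       10.0.0.1:55137
      udp    UNCONN  0       0       *:ntp               *:*
      tcp    LISTEN  0       128     *:ssh               *:*
      EOF
      ```

      With the sample lines above, the helper reports two tcp sockets and one udp socket.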
      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.




      Webinar Series: GitOps Tool Sets on Kubernetes with CircleCI and Argo CD


      Webinar Series

      This article supplements a webinar series on doing CI/CD with Kubernetes. The series discusses how to take a cloud native approach to building, testing, and deploying applications, covering release management, cloud native tools, service meshes, and CI/CD tools that can be used with Kubernetes. It is designed to help developers and businesses that are interested in integrating CI/CD best practices with Kubernetes into their workflows.

      This tutorial includes the concepts and commands from the last session of the series, GitOps Tool Sets on Kubernetes with CircleCI and Argo CD.

      Warning: The procedures in this tutorial are meant for demonstration purposes only. As a result, they don’t follow the best practices and security measures necessary for a production-ready deployment.

      Introduction

      Using Kubernetes to deploy your application can provide significant infrastructural advantages, such as flexible scaling, management of distributed components, and control over different versions of your application. However, with the increased control comes an increased complexity that can make CI/CD systems of cooperative code development, version control, change logging, and automated deployment and rollback particularly difficult to manage manually. To account for these difficulties, DevOps engineers have developed several methods of Kubernetes CI/CD automation, including the system of tooling and best practices called GitOps. GitOps, as proposed by Weaveworks in a 2017 blog post, uses Git as a “single source of truth” for CI/CD processes, integrating code changes in a single, shared repository per project and using pull requests to manage infrastructure and deployment.

      There are many tools that use Git as a focal point for DevOps processes on Kubernetes, including Gitkube developed by Hasura, Flux by Weaveworks, and Jenkins X, the topic of the second webinar in this series. In this tutorial, you will run through a demonstration of two additional tools that you can use to set up your own cloud-based GitOps CI/CD system: The Continuous Integration tool CircleCI and Argo CD, a declarative Continuous Delivery tool.

      CircleCI uses GitHub or Bitbucket repositories to organize application development and to automate building and testing on Kubernetes. By integrating with the Git repository, CircleCI projects can detect when a change is made to the application code and automatically test it, sending notifications of the change and the results of testing over email or other communication tools like Slack. CircleCI keeps logs of all these changes and test results, and the browser-based interface allows users to monitor the testing in real time, so that a team always knows the status of their project.

      As a sub-project of the Argo workflow management engine for Kubernetes, Argo CD provides Continuous Delivery tooling that automatically synchronizes and deploys your application whenever a change is made in your GitHub repository. By managing the deployment and lifecycle of an application, it provides solutions for version control, configurations, and application definitions in Kubernetes environments, organizing complex data with an easy-to-understand user interface. It can handle several types of Kubernetes manifests, including ksonnet applications, Kustomize applications, Helm charts, and YAML/json files, and supports webhook notifications from GitHub, GitLab, and Bitbucket.

      In this last article of the CI/CD with Kubernetes series, you will try out these GitOps tools.

      By the end of this tutorial, you will have a basic understanding of how to construct a CI/CD pipeline on Kubernetes with a GitOps tool set.

      Prerequisites

      To follow this tutorial, you will need:

      • An Ubuntu 16.04 server with 16 GB of RAM or above. Since this tutorial is meant for demonstration purposes only, commands are run from the root account. Note that the unrestrained privileges of this account do not adhere to production-ready best practices and could affect your system. For this reason, it is suggested to follow these steps in a test environment such as a virtual machine or a DigitalOcean Droplet.

      • A Docker Hub Account. For an overview on getting started with Docker Hub, please see these instructions.

      • A GitHub account and basic knowledge of GitHub. For a primer on how to use GitHub, check out our How To Create a Pull Request on GitHub tutorial.

      • Familiarity with Kubernetes concepts. Please refer to the article An Introduction to Kubernetes for more details.

      • A Kubernetes cluster with the kubectl command line tool. This tutorial has been tested on a simulated Kubernetes cluster, set up in a local environment with Minikube, a program that allows you to try out Kubernetes tools on your own machine without having to set up a true Kubernetes cluster. To create a Minikube cluster, follow Step 1 of the second webinar in this series, Kubernetes Package Management with Helm and CI/CD with Jenkins X.

      Step 1 — Setting Up your CircleCI Workflow

      In this step, you will put together a standard CircleCI workflow that involves three jobs: testing code, building an image, and pushing that image to Docker Hub. In the testing phase, CircleCI will use pytest to test the code for a sample RSVP application. Then, it will build the image of the application code and push the image to Docker Hub.

      First, give CircleCI access to your GitHub account. To do this, navigate to https://circleci.com/ in your favorite web browser:

      CircleCI Landing Page

      In the top right of the page, you will find a Sign Up button. Click this button, then click Sign Up with GitHub on the following page. The CircleCI website will prompt you for your GitHub credentials:

      Sign In to GitHub CircleCI Page

      Entering your username and password here gives CircleCI the permission to read your GitHub email address, deploy keys and add service hooks to your repository, create a list of your repositories, and add an SSH key to your GitHub account. These permissions are necessary for CircleCI to monitor and react to changes in your Git repository. If you would like to read more about the requested permissions before giving CircleCI your account information, see the CircleCI documentation.

      Once you have reviewed these permissions, enter your GitHub credentials and click Sign In. CircleCI will then integrate with your GitHub account and redirect your browser to the CircleCI welcome page:

      Welcome page for CircleCI

      Now that you have access to your CircleCI dashboard, open up another browser window and navigate to the GitHub repository for this webinar, https://github.com/do-community/rsvpapp-webinar4. If prompted to sign in to GitHub, enter your username and password. In this repository, you will find a sample RSVP application created by the CloudYuga team. For the purposes of this tutorial, you will use this application to demonstrate a GitOps workflow. Fork this repository to your GitHub account by clicking the Fork button at the top right of the screen.

      When you’ve forked the repository, GitHub will redirect you to https://github.com/your_GitHub_username/rsvpapp-webinar4. On the left side of the screen, you will see a Branch: master button. Click this button to reveal the list of branches for this project. Here, the master branch refers to the current official version of the application. On the other hand, the dev branch is a development sandbox, where you can test changes before promoting them to the official version in the master branch. Select the dev branch.

      Now that you are in the development section of this demonstration repository, you can start setting up a pipeline. CircleCI requires a YAML configuration file in the repository that describes the steps it needs to take to test your application. The repository you forked already has this file at .circleci/config.yml; in order to practice setting up CircleCI, delete this file and make your own.

      To create this configuration file, click the Create new file button and make a file named .circleci/config.yml:

      GitHub Create a new file Page

      Once you have this file open in GitHub, you can configure the workflow for CircleCI. To learn about this file’s contents, you will add the sections piece by piece. First, add the following:

      .circleci/config.yml

      version: 2
      jobs:
        test:
          machine:
            image: circleci/classic:201808-01
            docker_layer_caching: true
          working_directory: ~/repo
      
      . . .
      

      In the preceding code, version refers to the version of CircleCI that you will use. jobs:test: means that you are setting up a test for your application, and machine:image: indicates where CircleCI will do the testing, in this case a virtual machine based on the circleci/classic:201808-01 image.

      Next, add the steps you would like CircleCI to take during the test:

      .circleci/config.yml

      . . .
          steps:
            - checkout
            - run:
                name: install dependencies
                command: |
                  sudo rm /var/lib/dpkg/lock
                  sudo dpkg --configure -a
                  sudo apt-get install software-properties-common
                  sudo add-apt-repository ppa:fkrull/deadsnakes
                  sudo apt-get update
                  sleep 5
                  sudo rm /var/lib/dpkg/lock
                  sudo dpkg --configure -a
                  sudo apt-get install python3.5
                  sleep 5
                  python -m pip install -r requirements.txt
      
            # run tests!
            # this example uses Django's built-in test-runner
            # other common Python testing frameworks include pytest and nose
            # https://pytest.org
            # https://nose.readthedocs.io
      
            - run:
                name: run tests
                command: |
                  python -m pytest tests/test_rsvpapp.py  
      
      . . .
      

      The steps of the test are listed out after steps:, starting with - checkout, which will check out your project's source code and copy it into the job's space. Next, the - run: name: install dependencies step runs the listed commands to install the dependencies required for the test. In this case, you will be using the Django Web framework's built-in test-runner and the testing tool pytest. After CircleCI downloads these dependencies, the - run: name: run tests step will instruct CircleCI to run the tests on your application.

      With the test job completed, add in the following contents to describe the build job:

      .circleci/config.yml

      . . .
        build:
      
          machine:
            image: circleci/classic:201808-01
            docker_layer_caching: true
          working_directory: ~/repo
      
          steps:
            - checkout 
            - run:
                name: build image
                command: |
                  docker build -t $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1 .
      
        push:
          machine:
            image: circleci/classic:201808-01
            docker_layer_caching: true
          working_directory: ~/repo
          steps:
            - checkout 
            - run:
                name: Push image
                command: |
                  docker build -t $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1 .
                  echo $DOCKERHUB_PASSWORD | docker login --username $DOCKERHUB_USERNAME --password-stdin
                  docker push $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1    
      
      . . .
      

      As before, machine: image: means that CircleCI will build the application in a virtual machine based on the specified image. Under steps:, you will find - checkout again, followed by a - run: step named build image. This step builds a Docker image from the Dockerfile in your repository and tags it as rsvpapp with your Docker Hub username and the current commit’s SHA ($CIRCLE_SHA1). You will set the $DOCKERHUB_USERNAME environment variable in the CircleCI interface, which the tutorial will cover after this YAML file is complete.
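      The tag produced by this step follows Docker's username/image:tag convention, with one image per commit. A quick sketch, using hypothetical values standing in for the two variables:

```shell
# Hypothetical stand-ins for the real values: CircleCI supplies CIRCLE_SHA1
# automatically for each commit, and DOCKERHUB_USERNAME comes from the
# DOCKERHUB context you will create later.
DOCKERHUB_USERNAME=alice
CIRCLE_SHA1=3f4a9c1

# The tag that `docker build -t` receives:
echo "$DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1"
# → alice/rsvpapp:3f4a9c1
```

      Tagging with the commit SHA makes every build uniquely addressable, so the push job can later publish exactly the image that was built from a given commit.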

      After the build job is done, the push job will push the resulting image to your Docker Hub account.

      Finally, add the following lines to determine the workflows that coordinate the jobs you defined earlier:

      .circleci/config.yml

      . . .
      workflows:
        version: 2
        build-deploy:
          jobs:
            - test:
                context: DOCKERHUB
                filters:
                  branches:
                    only: dev        
            - build:
                context: DOCKERHUB 
                requires:
                  - test
                filters:
                  branches:
                    only: dev
            - push:
                context: DOCKERHUB
                requires:
                  - build
                filters:
                  branches:
                    only: dev
      

      These lines ensure that CircleCI executes the test, build, and push jobs in the correct order: build runs only after test succeeds, and push only after build. context: DOCKERHUB refers to the context in which these jobs will run; you will create this context after finalizing this YAML file. The only: dev line restricts the workflow to trigger only when there is a change to the dev branch of your repository, ensuring that CircleCI builds and tests only the code from dev.

      Now that you have added all the code for the .circleci/config.yml file, its contents should be as follows:

      .circleci/config.yml

      version: 2
      jobs:
        test:
          machine:
            image: circleci/classic:201808-01
            docker_layer_caching: true
          working_directory: ~/repo
      
          steps:
            - checkout
            - run:
                name: install dependencies
                command: |
                  sudo rm /var/lib/dpkg/lock
                  sudo dpkg --configure -a
                  sudo apt-get install software-properties-common
                  sudo add-apt-repository ppa:fkrull/deadsnakes
                  sudo apt-get update
                  sleep 5
                  sudo rm /var/lib/dpkg/lock
                  sudo dpkg --configure -a
                  sudo apt-get install python3.5
                  sleep 5
                  python -m pip install -r requirements.txt
      
            # run tests!
      # this example runs the tests with pytest
      # other common Python testing frameworks include Django's built-in test-runner and nose
            # https://pytest.org
            # https://nose.readthedocs.io
      
            - run:
                name: run tests
                command: |
                  python -m pytest tests/test_rsvpapp.py  
      
        build:
      
          machine:
            image: circleci/classic:201808-01
            docker_layer_caching: true
          working_directory: ~/repo
      
          steps:
            - checkout 
            - run:
                name: build image
                command: |
                  docker build -t $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1 .
      
        push:
          machine:
            image: circleci/classic:201808-01
            docker_layer_caching: true
          working_directory: ~/repo
          steps:
            - checkout 
            - run:
                name: Push image
                command: |
                  docker build -t $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1 .
                  echo $DOCKERHUB_PASSWORD | docker login --username $DOCKERHUB_USERNAME --password-stdin
                  docker push $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1    
      
      workflows:
        version: 2
        build-deploy:
          jobs:
            - test:
                context: DOCKERHUB
                filters:
                  branches:
                    only: dev        
            - build:
                context: DOCKERHUB 
                requires:
                  - test
                filters:
                  branches:
                    only: dev
            - push:
                context: DOCKERHUB
                requires:
                  - build
                filters:
                  branches:
                    only: dev
      

      Once you have added this file to the dev branch of your repository, return to the CircleCI dashboard.

      Next, you will create a CircleCI context to house the environment variables needed for the workflow that you outlined in the preceding YAML file. On the left side of the screen, you will find a SETTINGS button. Click this, then select Contexts under the ORGANIZATION heading. Finally, click the Create Context button on the right side of the screen:

      Create Context Screen for CircleCI

      CircleCI will then ask you for the name of this context. Enter DOCKERHUB, then click Create. Once you have created the context, select the DOCKERHUB context and click the Add Environment Variable button. For the first variable, enter the name DOCKERHUB_USERNAME, and in the Value field enter your Docker Hub username.

      Add Environment Variable Screen for CircleCI

      Then add another environment variable, but this time, name it DOCKERHUB_PASSWORD and fill in the Value field with your Docker Hub password.

      When you’ve created the two environment variables for your DOCKERHUB context, create a CircleCI project for the test RSVP application. To do this, select the ADD PROJECTS button from the left-hand side menu. This will yield a list of GitHub projects tied to your account. Select rsvpapp-webinar4 from the list and click the Set Up Project button.

      Note: If rsvpapp-webinar4 does not show up in the list, reload the CircleCI page. Sometimes it can take a moment for the GitHub projects to show up in the CircleCI interface.

      You will now find yourself on the Set Up Project page:

      Set Up Project Screen for CircleCI

      At the top of the screen, CircleCI instructs you to create a config.yml file. Since you have already done this, scroll down to find the Start Building button on the right side of the page. By selecting this, you will tell CircleCI to start monitoring your application for changes.

      Click on the Start Building button. CircleCI will redirect you to a build progress/status page, which as yet has no build.

      To test the pipeline trigger, go to the recently forked repository at https://github.com/your_GitHub_username/rsvpapp-webinar4 and make some changes in the dev branch only. Since you have added the branch filter only: dev to your .circleci/config.yml file, CircleCI will build only when there is a change in the dev branch. Commit a change to the dev branch code, and you will find that CircleCI has triggered a new workflow in the user interface. Click on the running workflow and you will find the details of what CircleCI is doing:

      CircleCI Project Workflow Page

      With your CircleCI workflow taking care of the Continuous Integration aspect of your GitOps CI/CD system, you can install and configure Argo CD on top of your Kubernetes cluster to address Continuous Deployment.

      Step 2 — Installing and Configuring Argo CD on your Kubernetes Cluster

      Just as CircleCI uses GitHub to trigger automated testing on changes to source code, Argo CD connects your Kubernetes cluster to your GitHub repository, listening for changes and automatically deploying the updated application. To set this up, you must first install Argo CD into your cluster.

      First, create a namespace named argocd:

      • kubectl create namespace argocd

      Within this namespace, Argo CD will run all the services and resources it needs to create its Continuous Deployment workflow.

      Next, apply the Argo CD installation manifest from the official GitHub repository for Argo:

      • kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v0.9.2/manifests/install.yaml

      In this command, the -n flag directs kubectl to apply the manifest to the namespace argocd, and -f specifies the file name for the manifest that it will apply, in this case the one downloaded from the Argo repository.

      By using the kubectl get command, you can find the pods that are now running in the argocd namespace:

      • kubectl get pod -n argocd

      Using this command will yield output similar to the following:

      NAME                                      READY     STATUS    RESTARTS   AGE
      application-controller-6d68475cd4-j4jtj   1/1       Running   0          1m
      argocd-repo-server-78f556f55b-tmkvj       1/1       Running   0          1m
      argocd-server-78f47bf789-trrbw            1/1       Running   0          1m
      dex-server-74dc6c5ff4-fbr5g               1/1       Running   0          1m
      

      Now that Argo CD is running on your cluster, download the Argo CD CLI tool so that you can control the program from your command line:

      • curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v0.9.2/argocd-linux-amd64

      Once you’ve downloaded the file, use chmod to make it executable:

      • chmod +x /usr/local/bin/argocd

      To find the Argo CD service, run the kubectl get command in the namespace argocd:

      • kubectl get svc -n argocd argocd-server

      You will get output similar to the following:

      Output

      NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
      argocd-server   ClusterIP   10.109.189.243   <none>        80/TCP,443/TCP   8m

      Now, access the Argo CD API server. This server does not automatically have an external IP, so you must first expose the API so that you can access it from the browser on your local workstation. To do this, use kubectl port-forward to forward port 8080 on your local workstation to TCP port 80 of the argocd-server service from the preceding output:

      • kubectl port-forward svc/argocd-server -n argocd 8080:80

      The output will be:

      Output

      Forwarding from 127.0.0.1:8080 -> 8080
      Forwarding from [::1]:8080 -> 8080

      Once you run the port-forward command, your command prompt will disappear from your terminal. To enter more commands for your Kubernetes cluster, open a new terminal window and log onto your remote server.

      To complete the connection, use ssh to forward the 8080 port from your local machine. First, open up an additional terminal window and, from your local workstation, enter the following command, with remote_server_IP_address replaced by the IP address of the remote server on which you are running your Kubernetes cluster:

      • ssh -L 8080:localhost:8080 root@remote_server_IP_address

      To make sure that the Argo CD server is exposed to your local workstation, open up a browser and navigate to the URL localhost:8080. You will see the Argo CD landing page:

      Sign In Page for ArgoCD

      Now that you have installed Argo CD and exposed its server to your local workstation, you can continue to the next step, in which you will connect GitHub into your Argo CD service.

      Step 3 — Connecting Argo CD to GitHub

      To allow Argo CD to listen to GitHub and synchronize deployments with your repository, you first have to connect Argo CD to GitHub. To do this, log in to Argo CD.

      By default, the password for your Argo CD account is the name of the pod for the Argo CD API server. Switch back to the terminal window that is logged into your remote server but is not handling the port forwarding. Retrieve the password with the following command:

      • kubectl get pods -n argocd -l app=argocd-server -o name | cut -d'/' -f 2

      You will get the name of the pod running the Argo API server:

      Output

      argocd-server-b686c584b-6ktwf
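      Here, the -o name flag makes kubectl print each resource as pod/<name>, and cut -d'/' -f 2 keeps only the part after the slash. You can verify the filter against a sample value:

```shell
# The same cut filter applied to a sample line in kubectl's `-o name` format:
# split on '/' and keep the second field, i.e. the bare pod name.
echo "pod/argocd-server-b686c584b-6ktwf" | cut -d'/' -f 2
# → argocd-server-b686c584b-6ktwf
```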

      Enter the following command to log in from the CLI:

      • argocd login localhost:8080

      You will receive the following prompt:

      Output

      WARNING: server certificate had error: x509: certificate signed by unknown authority. Proceed insecurely (y/n)?

      For the purposes of this demonstration, type y to proceed without a secure connection. Argo CD will then prompt you for your username and password. Enter admin for username and the complete argocd-server pod name for your password. Once you put in your credentials, you’ll receive the following message:

      Output

      'admin' logged in successfully
      Context 'localhost:8080' updated

      Now that you have logged in, use the following command to change your password:

      • argocd account update-password

      Argo CD will ask you for your current password and the password you would like to change it to. Choose a secure password and enter it at the prompts. Once you have done this, use your new password to log in again:

      • argocd login localhost:8080

      Enter your password again, and you will get:

      Output

      Context 'localhost:8080' updated

      If you were deploying an application on a cluster external to the Argo CD cluster, you would need to register the application cluster's credentials with Argo CD. If, as is the case with this tutorial, Argo CD and your application are on the same cluster, then you will use https://kubernetes.default.svc as the Kubernetes API server when connecting Argo CD to your application.

      To demonstrate how one might register an external cluster, first get a list of your Kubernetes contexts:

      • kubectl config get-contexts

      You'll get:

      Output

      CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
      *         minikube   minikube   minikube

      To add a cluster, enter the following command, with the name of your cluster in place of the highlighted name:

      • argocd cluster add minikube

      In this case, the preceding command would yield:

      Output

      INFO[0000] ServiceAccount "argocd-manager" created
      INFO[0000] ClusterRole "argocd-manager-role" created
      INFO[0000] ClusterRoleBinding "argocd-manager-role-binding" created, bound "argocd-manager" to "argocd-manager-role"
      Cluster 'minikube' added

      Now that you have set up your log in credentials for Argo CD and tested how to add an external cluster, move over to the Argo CD landing page and log in from your local workstation. Argo CD will direct you to the Argo CD applications page:

      Argo CD Applications Screen

      From here, click the Settings icon from the left-side tool bar, click Repositories, then click CONNECT REPO. Argo CD will present you with three fields for your GitHub information:

      Argo CD Connect Git Repo Page

      In the field for Repository URL, enter https://github.com/your_GitHub_username/rsvpapp-webinar4, then enter your GitHub username and password. Once you've entered your credentials, click the CONNECT button at the top of the screen.

      Once you've connected your repository containing the demo RSVP app to Argo CD, choose the Apps icon from the left-side tool bar, click the + button in the top right corner of the screen, and select New Application. From the Select Repository page, select your GitHub repository for the RSVP app and click next. Then choose CREATE APP FROM DIRECTORY to go to a page that asks you to review your application parameters:

      Argo CD Review application parameters Page

      The Path field designates where the YAML file for your application resides in your GitHub repository. For this project, type k8s. For Application Name, type rsvpapp, and for Cluster URL, select https://kubernetes.default.svc from the dropdown menu, since Argo CD and your application are on the same Kubernetes cluster. Finally, enter default for Namespace.

      Once you have filled out your application parameters, click on CREATE at the top of the screen. A box will appear, representing your application:

      Argo CD APPLICATIONS Page with rsvpapp

      After Status:, you will see that your application is OutOfSync with your GitHub repository. To deploy your application as it is on GitHub, click ACTIONS and choose Sync. After a few moments, your application status will change to Synced, meaning that Argo CD has deployed your application.

      Once your application has been deployed, click your application box to find a detailed diagram of your application:

      Argo CD Application Details Page for rsvpapp

      To find this deployment on your Kubernetes cluster, switch back to the terminal window for your remote server and enter:

      • kubectl get pod

      You will receive output with the pods that are running your app:

      Output

      NAME                      READY     STATUS    RESTARTS   AGE
      rsvp-755d87f66b-hgfb5     1/1       Running   0          12m
      rsvp-755d87f66b-p2bsh     1/1       Running   0          12m
      rsvp-db-54996bf89-gljjz   1/1       Running   0          12m

      Next, check the services:

      • kubectl get svc

      You'll find a service for the RSVP app and your MongoDB database, in addition to the number of the port from which your app is running, highlighted in the following:

      NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
      kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        2h
      mongodb      ClusterIP   10.102.150.54   <none>        27017/TCP      25m
      rsvp         NodePort    10.106.91.108   <none>        80:31350/TCP   25m
      

      You can find your deployed RSVP app by navigating to your_remote_server_IP_address:app_port_number in your browser, using the preceding highlighted number for app_port_number:
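      In the PORT(S) column, an entry like 80:31350/TCP means that service port 80 is exposed on node port 31350. If you ever need that number in a script, a small sketch using the sample value above:

```shell
# In a NodePort entry of the form "servicePort:nodePort/protocol",
# the number between ':' and '/' is the host-facing node port.
PORTS="80:31350/TCP"
echo "$PORTS" | cut -d':' -f 2 | cut -d'/' -f 1
# → 31350
```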

      RSVP Application

      Now that you have deployed your application using Argo CD, you can test your Continuous Deployment system and adjust it to automatically sync with GitHub.

      Step 4 — Testing your Continuous Deployment Setup

      With Argo CD set up, test out your Continuous Deployment system by making a change in your project and triggering a new build of your application.

      In your browser, navigate to https://github.com/your_GitHub_username/rsvpapp-webinar4, click into the master branch, and update the k8s/rsvp.yaml file to deploy your app using the image built by CircleCI as a base. Add dev after image: nkhare/rsvpapp:, as shown in the following:

      rsvpapp-webinar4/k8s/rsvp.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: rsvp
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: rsvp
        template:
          metadata:
            labels:
              app: rsvp
          spec:
            containers:
            - name: rsvp-app
              image: nkhare/rsvpapp:dev
              imagePullPolicy: Always
              livenessProbe:
                httpGet:
                  path: /
                  port: 5000
                periodSeconds: 30
                timeoutSeconds: 1
                initialDelaySeconds: 50
              env:
              - name: MONGODB_HOST
                value: mongodb
              ports:
              - containerPort: 5000
                name: web-port
      . . .
      

      Instead of pulling the original image from Docker Hub, Argo CD will now use the dev image created in the Continuous Integration system to build the application.

      Commit the change, then return to the Argo CD UI. You will notice that nothing has changed yet; this is because you have not activated automatic synchronization and must sync the application manually.

      To manually sync the application, click the blue circle in the top right of the screen, and click Sync. A new menu will appear, with a field to name your new revision and a checkbox labeled PRUNE:

      Synchronization Page for Argo CD

      Clicking this checkbox will ensure that, once Argo CD spins up your new application, it will destroy the outdated version. Click the PRUNE box, then click SYNCHRONIZE at the top of the screen. You will see the old elements of your application spinning down, and the new ones spinning up with your CircleCI-made image. If the new image included any changes, you would find these new changes reflected in your application at the URL your_remote_server_IP_address:app_port_number.

      As mentioned before, Argo CD also has an auto-sync option that will incorporate changes into your application as you make them. To enable this, open up your terminal for your remote server and use the following command:

      • argocd app set rsvpapp --sync-policy automated

      To make sure that revisions are not accidentally deleted, the default for automated sync has prune turned off. To turn automated pruning on, simply add the --auto-prune flag at the end of the preceding command.
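      In recent Argo CD versions, the same policy can also be recorded declaratively in the Application manifest itself. A sketch of the relevant fields for illustration only (field names per the Argo CD Application spec; this fragment is not part of the tutorial's k8s/ directory):

```yaml
# Sketch: declarative equivalent of
# `argocd app set rsvpapp --sync-policy automated --auto-prune`
spec:
  syncPolicy:
    automated:
      prune: true
```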

      Now that you have added Continuous Deployment capabilities to your Kubernetes cluster, you have completed the demonstration GitOps CI/CD system with CircleCI and Argo CD.

      Conclusion

      In this tutorial, you created a pipeline with CircleCI that triggers tests and builds updated images when you change code in your GitHub repository. You also used Argo CD to deploy an application, automatically incorporating the changes integrated by CircleCI. You can now use these tools to create your own GitOps CI/CD system that uses Git as its organizing theme.

      If you'd like to learn more about Git, check out our An Introduction to Open Source series of tutorials. To explore more DevOps tools that integrate with Git repositories, take a look at How To Install and Configure GitLab on Ubuntu 18.04.


