
      Getting Started with Load Balancing on a Linode Kubernetes Engine (LKE) Cluster


Updated by Linode | Contributed by Linode

      The Linode Kubernetes Engine (LKE) is Linode’s managed Kubernetes service. When you deploy an LKE cluster, you receive a Kubernetes Master which runs your cluster’s control plane components, at no additional cost. The control plane includes Linode’s Cloud Controller Manager (CCM), which provides a way for your cluster to access additional Linode services. Linode’s CCM provides access to Linode’s load balancing service, Linode NodeBalancers.

NodeBalancers provide your Kubernetes cluster with a reliable way of exposing resources to the public internet. The LKE control plane handles the creation and deletion of NodeBalancers and correctly identifies the resources, and their networking, that each NodeBalancer routes traffic to. Whenever a Kubernetes Service of the LoadBalancer type is created, your cluster creates a Linode NodeBalancer with the help of the Linode CCM.

      Note

      Adding external Linode NodeBalancers to your LKE cluster will incur additional costs. See Linode’s Pricing page for details.

      Note

      All existing LKE clusters receive CCM updates automatically every two weeks when a new LKE release is deployed. See the LKE Changelog for information on the latest LKE release.

      Note

      The Linode Terraform K8s module also deploys a Kubernetes cluster with the Linode CCM installed by default. Any Kubernetes cluster with a Linode CCM installation can make use of Linode NodeBalancers in the ways described in this guide.

      In this Guide

This guide will show you how to:

• Add Linode NodeBalancers to your Kubernetes cluster by creating a Service of type LoadBalancer.
• View Linode NodeBalancer details.
• Configure your Linode NodeBalancers with annotations.
• Configure your Linode NodeBalancers for TLS encryption.
• Configure session affinity for cluster Pods.
• Remove Linode NodeBalancers from your Kubernetes cluster.

      Before You Begin

This guide assumes you have a working Kubernetes cluster that was deployed using the Linode Kubernetes Engine (LKE). You can deploy an LKE cluster with the Linode Cloud Manager, the Linode API, or the Linode CLI.

      Adding Linode NodeBalancers to your Kubernetes Cluster

To add an external load balancer to your Kubernetes cluster, add the example lines below to a new configuration file or, more commonly, to an existing Service file. When the configuration is applied to your cluster, a Linode NodeBalancer is created and added to your Kubernetes cluster. Your cluster is then accessible via a public IP address, and the NodeBalancer routes external traffic to a Service running on healthy nodes in your cluster.

      Note

Billing for Linode NodeBalancers begins as soon as the example configuration is successfully applied to your Kubernetes cluster.

      
      spec:
        type: LoadBalancer
        ports:
        - name: http
          port: 80
          protocol: TCP
          targetPort: 80
      • The spec.type of LoadBalancer is responsible for telling Kubernetes to create a Linode NodeBalancer.
• The remaining lines define the ports for your Service's Pods, mapping an incoming port to a container's targetPort.
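
For context, the snippet above fits into a complete Service manifest like the following sketch. The example-service name and the app: example-app selector are illustrative assumptions; your selector must match the labels on the Pods you want the NodeBalancer to route traffic to:

      apiVersion: v1
      kind: Service
      metadata:
        name: example-service      # assumed name for illustration
      spec:
        type: LoadBalancer         # instructs the CCM to create a NodeBalancer
        selector:
          app: example-app         # assumed Pod label; match your Deployment's labels
        ports:
        - name: http
          port: 80
          protocol: TCP
          targetPort: 80

Applying this manifest, for example with kubectl apply -f example-service.yaml, triggers the NodeBalancer creation described above.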

      Viewing Linode NodeBalancer Details

      To view details about running NodeBalancers on your cluster:

      1. Get the services running on your cluster:

        kubectl get services
        

        You will see a similar output:

          
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
kubernetes        ClusterIP      10.128.0.1      <none>         443/TCP        3h5m
example-service   LoadBalancer   10.128.171.88   45.79.246.55   80:30028/TCP   36m
              
        
        • Viewing the entry for the example-service, you can find your NodeBalancer’s public IP under the EXTERNAL-IP column.
• The PORT(S) column displays the example-service's incoming port and NodePort.
      2. View details about the example-service to retrieve information about the deployed NodeBalancers:

        kubectl describe service example-service
        
          
Name:                     example-service
        Namespace:                default
        Labels:                   app=nginx
        Annotations:              service.beta.kubernetes.io/linode-loadbalancer-throttle: 4
        Selector:                 app=nginx
        Type:                     LoadBalancer
        IP:                       10.128.171.88
        LoadBalancer Ingress:     192.0.2.0
        Port:                     http  80/TCP
        TargetPort:               80/TCP
        NodePort:                 http  30028/TCP
        Endpoints:                10.2.1.2:80,10.2.1.3:80,10.2.2.2:80
        Session Affinity:         None
        External Traffic Policy:  Cluster
Events:                   <none>
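
  If you only need the NodeBalancer's public IP address, for example in a script, you can query the Service's status directly with a jsonpath expression; this sketch assumes the Service is named example-service:

    kubectl get service example-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'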
        

      Configuring your Linode NodeBalancers with Annotations

      The Linode CCM accepts annotations that configure the behavior and settings of your cluster’s underlying NodeBalancers.

      • The table below provides a list of all available annotation suffixes.
      • Each annotation must be prefixed with service.beta.kubernetes.io/linode-loadbalancer-. For example, the complete value for the throttle annotation is service.beta.kubernetes.io/linode-loadbalancer-throttle.
      • Annotation values such as http are case-sensitive.
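
As a sketch of how the prefix and suffix combine, the following metadata block sets two of the annotations listed below on a Service; the values shown are illustrative:

      metadata:
        annotations:
          # prefix service.beta.kubernetes.io/linode-loadbalancer- plus suffix throttle
          service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
          service.beta.kubernetes.io/linode-loadbalancer-default-protocol: "http"

Note that annotation values are strings, so numeric values like 4 must be quoted.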

      Annotations Reference

throttle
  • Values: integer, 0-20 (0 disables the throttle)
  • Default: 20
  • Description: The client connection throttle limits the number of new connections-per-second from the same client IP.

default-protocol
  • Values: string; one of tcp, http, https
  • Default: tcp
  • Description: Specifies the protocol for the NodeBalancer to use.

port-*
  • Values: a JSON object of port configurations, for example: { "tls-secret-name": "prod-app-tls", "protocol": "https" }
  • Default: None
  • Description: Specifies a NodeBalancer port to configure, i.e. port-443. Ports 1-65534 are available for balancing. The available port configurations are:
      • "tls-secret-name": use this key to provide a Kubernetes Secret name when setting up TLS termination for a Service to be accessed over HTTPS. The Secret type should be kubernetes.io/tls.
      • "protocol": specifies the protocol to use for this port, i.e. tcp, http, or https. The default protocol is tcp, unless you provided a different configuration for the default-protocol annotation.

check-type
  • Values: string; one of none, connection, http, http_body
  • Default: None
  • Description: The type of health check to perform on Nodes to ensure that they are serving requests. The behavior for each check is the following:
      • none: no check is performed
      • connection: checks for a valid TCP handshake
      • http: checks for a 2xx or 3xx response code
      • http_body: checks for a specific string within the response body of the health check URL. Use the check-body annotation to provide the string to use for the check.

check-path
  • Values: string
  • Default: None
  • Description: The URL path that the NodeBalancer will use to check on the health of the back-end Nodes.

check-body
  • Values: string
  • Default: None
  • Description: The string that must be present in the response body of the URL path used for health checks. Requires the check-type annotation to be configured for an http_body check.

check-interval
  • Values: integer
  • Default: None
  • Description: The duration, in seconds, between health checks.

check-timeout
  • Values: integer, between 1 and 30
  • Default: None
  • Description: The duration, in seconds, to wait for a health check to succeed before it is considered a failure.

check-attempts
  • Values: integer, between 1 and 30
  • Default: None
  • Description: The number of health checks to perform before removing a back-end Node from service.

check-passive
  • Values: boolean
  • Default: false
  • Description: When true, 5xx status codes will cause the health check to fail.

preserve
  • Values: boolean
  • Default: false
  • Description: When true, deleting a LoadBalancer Service does not delete the underlying NodeBalancer.
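
For example, to have your NodeBalancer run HTTP health checks against the backend Nodes, you could combine several of the check annotations above; the /healthz path and the numeric values are assumptions for illustration:

      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-check-type: "http"
          service.beta.kubernetes.io/linode-loadbalancer-check-path: "/healthz"   # assumed health endpoint
          service.beta.kubernetes.io/linode-loadbalancer-check-interval: "10"     # seconds between checks
          service.beta.kubernetes.io/linode-loadbalancer-check-attempts: "3"      # checks before removing a Node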


      Configuring Linode NodeBalancers for TLS Encryption

      This section describes how to set up TLS termination on your Linode NodeBalancers so a Kubernetes Service can be accessed over HTTPS.

      Generating a TLS type Secret

Kubernetes allows you to store sensitive information, like passwords and API tokens, in a Secret object for use within your cluster. In the context of the Linode CCM, Secrets are useful for storing Transport Layer Security (TLS) certificates and keys: the tls-secret-name port configuration requires the certificate and key to be stored as a Kubernetes Secret with the type kubernetes.io/tls. Follow the steps in this section to create a Kubernetes TLS Secret, which you will then use to configure TLS termination on your Linode NodeBalancers.


      1. Generate a TLS key and certificate using a TLS toolkit like OpenSSL. Be sure to change the CN and O values to those of your own website domain.

  openssl req -newkey rsa:4096 \
      -x509 \
      -sha256 \
      -days 3650 \
      -nodes \
      -out tls.crt \
      -keyout tls.key \
      -subj "/CN=mywebsite.com/O=mywebsite.com"
        
2. Create the Secret using the create secret tls command. Ensure you substitute $SECRET_NAME with the name you’d like to give your Secret; this is how you will reference the Secret in your Service manifest. (A declarative alternative is sketched after these steps.)

        kubectl create secret tls $SECRET_NAME --cert tls.crt --key tls.key
        
      3. You can check to make sure your Secret has been successfully stored by using describe:

        kubectl describe secret $SECRET_NAME
        

        You should see output like the following:

          
  Name:         my-secret
  Namespace:    default
  Labels:       <none>
  Annotations:  <none>
        
        Type:  kubernetes.io/tls
        
        Data
        ====
        tls.crt:  1164 bytes
        tls.key:  1704 bytes
        
        

        If your key is not formatted correctly you’ll receive an error stating that there is no PEM formatted data within the key file.
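
As an alternative to the imperative command in step 2, the same Secret can be declared in a manifest file. This is a sketch, assuming a Secret named example-secret; the data values must be the base64-encoded contents of your tls.crt and tls.key files (for example, from base64 -w0 tls.crt):

      apiVersion: v1
      kind: Secret
      metadata:
        name: example-secret
      type: kubernetes.io/tls
      data:
        tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t...   # base64-encoded certificate (truncated placeholder)
        tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t...   # base64-encoded private key (truncated placeholder)

Apply it with kubectl apply -f example-secret.yaml.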

      Configuring TLS within a Service

To use HTTPS, you’ll need to instruct the Service to use the correct port via the required annotations. You can add the following code snippet to a Service file to enable TLS termination on your NodeBalancers:

example-service.yaml
      
      ...
      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-default-protocol: http
          service.beta.kubernetes.io/linode-loadbalancer-port-443: '{ "tls-secret-name": "example-secret", "protocol": "https" }'
      ...
      • The service.beta.kubernetes.io/linode-loadbalancer-default-protocol annotation configures the NodeBalancer’s default protocol.

      • service.beta.kubernetes.io/linode-loadbalancer-port-443 specifies port 443 as the port to be configured. The value of this annotation is a JSON object designating the TLS secret name to use (example-secret) and the protocol to use for the port being configured (https).

      If you have multiple Secrets and ports for different environments (testing, staging, etc.), you can define more than one secret and port pair:

example-service.yaml
      
      ...
      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-default-protocol: http
          service.beta.kubernetes.io/linode-loadbalancer-port-443: '{ "tls-secret-name": "example-secret", "protocol": "https" }'
          service.beta.kubernetes.io/linode-loadbalancer-port-8443: '{ "tls-secret-name": "example-secret-staging", "protocol": "https" }'
      ...

      Configuring Session Affinity for Cluster Pods

kube-proxy will always attempt to proxy traffic to a random backend Pod. To direct traffic to the same Pod on every request, you can use the sessionAffinity mechanism. When set to ClientIP, sessionAffinity ensures that all traffic from the same IP address is directed to the same Pod. You can add the example lines below to a Service configuration file to enable session affinity:

      
      spec:
        type: LoadBalancer
        selector:
          app: example-app
        sessionAffinity: ClientIP
        sessionAffinityConfig:
          clientIP:
            timeoutSeconds: 100
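
After applying the configuration, you can confirm that the setting took effect by querying the Service's spec; this sketch assumes the Service is named example-service:

  kubectl get service example-service -o jsonpath='{.spec.sessionAffinity}'

The command should print ClientIP.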

      Removing Linode NodeBalancers from your Kubernetes Cluster

To delete a NodeBalancer and the Service that it represents, you can use the Service manifest file you used to create the NodeBalancer. Simply use the delete command and supply your file name with the -f flag:

      kubectl delete -f example-service.yaml
      

      Similarly, you can delete the Service by name:

      kubectl delete service example-service
      

      After deleting your service, its corresponding NodeBalancer will be removed from your Linode account.
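
You can confirm the removal by listing your cluster's Services again; the example-service entry should no longer appear:

  kubectl get services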

      Note

      If your Service file used the preserve annotation, the underlying NodeBalancer will not be removed from your Linode account. See the annotations reference for details.

      This guide is published under a CC BY-ND 4.0 license.




      How to Configure Load Balancing with TLS Encryption on a Kubernetes Cluster


Updated by Linode | Contributed by Linode

This guide will use an example Kubernetes Deployment and Service to demonstrate how to route external traffic to a Kubernetes application over HTTPS. This is accomplished using the NGINX Ingress Controller, cert-manager, and Linode NodeBalancers. The NGINX Ingress Controller uses Linode NodeBalancers, Linode’s load balancing service, to route a Kubernetes Service’s traffic to the appropriate backend Pods over HTTP and HTTPS. cert-manager creates a Transport Layer Security (TLS) certificate from the Let’s Encrypt certificate authority (CA), providing secure HTTPS access to a Kubernetes Service.


      Before you Begin

1. This guide assumes that your Kubernetes cluster has the Linode Cloud Controller Manager (CCM) installed. The Linode CCM is installed by default on clusters deployed with the Linode Kubernetes Engine and the Linode Terraform K8s module.

        Note

The recommended way to deploy a Kubernetes cluster on Linode is using the Linode Kubernetes Engine (managed) or the Linode Terraform K8s module (unmanaged). However, to learn how to install the Linode CCM on a cluster that was not deployed in one of those two ways, see the Installing the Linode CCM on an Unmanaged Kubernetes Cluster guide.
2. Install Helm 3 and kubectl in your local environment.

3. Purchase a domain name from a reliable domain registrar. In a later section, you will use Linode’s DNS Manager to create a new Domain and to add DNS “A” records for two subdomains, one named blog and another named shop. Your subdomains will point to the example Kubernetes Services you will create in this guide. The example domain names used throughout this guide are blog.example.com and shop.example.com.

        Note

Optionally, you can create a Wildcard DNS record, *.example.com, and point it to your NodeBalancer’s external IP address. Using a Wildcard DNS record will allow you to expose your Kubernetes Services without requiring further configuration in the Linode DNS Manager.

      Create an Example Application

      The primary focus of this guide is to show you how to use the NGINX Ingress Controller and cert-manager to route traffic to a Kubernetes application over HTTPS. In this section, you will create two example applications that you will route external traffic to in a later section. The example application displays a page that returns information about the Deployment’s current backend Pod. This sample application is built using NGINX’s demo Docker image, nginxdemos/hello. You can replace the example applications used in this section with your own.

      Create your Application Service and Deployment

      Each example manifest file creates three Pods to serve the application.

      1. Using a text editor, create a new file named hello-one.yaml with the contents of the example file.

        hello-one.yaml
        
        apiVersion: v1
        kind: Service
        metadata:
          name: hello-one
        spec:
          type: ClusterIP
          ports:
          - port: 80
            targetPort: 80
          selector:
            app: hello-one
        ---
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: hello-one
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: hello-one
          template:
            metadata:
              labels:
                app: hello-one
            spec:
              containers:
              - name: hello-ingress
                image: nginxdemos/hello
                ports:
                - containerPort: 80
                
      2. Create a second Service and Deployment manifest file named hello-two.yaml with the contents of the example file.

        hello-two.yaml
        
        apiVersion: v1
        kind: Service
        metadata:
          name: hello-two
        spec:
          type: ClusterIP
          ports:
          - port: 80
            targetPort: 80
          selector:
            app: hello-two
        ---
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: hello-two
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: hello-two
          template:
            metadata:
              labels:
                app: hello-two
            spec:
              containers:
              - name: hello-ingress
                image: nginxdemos/hello
                ports:
                - containerPort: 80
                
      3. Use kubectl to create the Services and Deployments for your example applications.

        kubectl create -f hello-one.yaml
        kubectl create -f hello-two.yaml
        

        You should see a similar output:

          
        service/hello-one created
        deployment.apps/hello-one created
        service/hello-two created
        deployment.apps/hello-two created
            
        
      4. Verify that the Services are running.

        kubectl get svc
        

        You should see a similar output:

          
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
hello-one    ClusterIP   10.128.94.166    <none>        80/TCP    6s
hello-two    ClusterIP   10.128.102.187   <none>        80/TCP    6s
kubernetes   ClusterIP   10.128.0.1       <none>        443/TCP   18m
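
Because the example Services are of type ClusterIP, they are not yet reachable from outside the cluster. For a quick smoke test before installing the Ingress Controller, kubectl port-forward can proxy a local port to one of the Services:

  kubectl port-forward svc/hello-one 8080:80

While the command runs, browsing to http://localhost:8080 (or curling it from a second terminal) should return the NGINX demo page.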
            
        

      Install the NGINX Ingress Controller

      In this section you will use Helm to install the NGINX Ingress Controller on your Kubernetes Cluster. Installing the NGINX Ingress Controller will create Linode NodeBalancers that your cluster can make use of to load balance traffic to your example application.


      1. Add the Google stable Helm charts repository to your Helm repos.

        helm repo add stable https://kubernetes-charts.storage.googleapis.com/
        
      2. Update your Helm repositories.

        helm repo update
        
      3. Install the NGINX Ingress Controller. This installation will result in a Linode NodeBalancer being created.

        helm install nginx-ingress stable/nginx-ingress
        

        You will see a similar output:

          
        NAME: nginx-ingress
        LAST DEPLOYED: Mon Jul 20 10:27:03 2020
        NAMESPACE: default
        STATUS: deployed
        REVISION: 1
        TEST SUITE: None
        NOTES:
        The nginx-ingress controller has been installed.
        It may take a few minutes for the LoadBalancer IP to be available.
        You can watch the status by running 'kubectl --namespace default get services -o wide -w nginx-ingress-controller'
        ...
           
        

Create Subdomain DNS Entries for your Example Applications

Now that Linode NodeBalancers have been created by the NGINX Ingress Controller, you can point subdomain DNS entries at the NodeBalancer’s public IPv4 address. Since this guide uses two example applications, it requires two subdomain entries.

      1. Access your NodeBalancer’s assigned external IP address.

        kubectl --namespace default get services -o wide -w nginx-ingress-controller
        

        The command will return a similar output:

          
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE     SELECTOR
nginx-ingress-controller   LoadBalancer   10.128.169.60   192.0.2.0     80:32401/TCP,443:30830/TCP   7h51m   app.kubernetes.io/component=controller,app=nginx-ingress,release=nginx-ingress
            
        
2. Copy the IP address from the EXTERNAL-IP field, then navigate to Linode’s DNS Manager and add two “A” records, one for the blog subdomain and one for shop. Ensure you point each record to the NodeBalancer’s IPv4 address you retrieved in the previous step.

      Now that your NGINX Ingress Controller has been deployed and your subdomains’ A records have been created, you are ready to enable HTTPS on each subdomain.

      Create a TLS Certificate Using cert-manager

      Note

      Before performing the commands in this section, ensure that your DNS has had time to propagate across the internet. This process can take a while. You can query the status of your DNS by using the following command, substituting blog.example.com for your domain.

      dig +short blog.example.com
      

      If successful, the output should return the IP address of your NodeBalancer.

      To enable HTTPS on your example application, you will create a Transport Layer Security (TLS) certificate from the Let’s Encrypt certificate authority (CA) using the ACME protocol. This will be facilitated by cert-manager, the native Kubernetes certificate management controller.

      In this section you will install cert-manager using Helm and the required cert-manager CustomResourceDefinitions (CRDs). Then, you will create a ClusterIssuer resource to assist in creating a cluster’s TLS certificate.

      Note

If you would like a deeper dive into cert-manager, see our cert-manager guide.

      Install cert-manager

      1. Install cert-manager’s CRDs.

        kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.1/cert-manager.crds.yaml
        
      2. Create a cert-manager namespace.

        kubectl create namespace cert-manager
        
      3. Add the Helm repository which contains the cert-manager Helm chart.

        helm repo add jetstack https://charts.jetstack.io
        
      4. Update your Helm repositories.

        helm repo update
        
      5. Install the cert-manager Helm chart. These basic configurations should be sufficient for many use cases, however, additional cert-manager configurable parameters can be found in cert-manager’s official documentation.

  helm install cert-manager jetstack/cert-manager \
    --namespace cert-manager \
    --version v0.15.1
        
      6. Verify that the corresponding cert-manager pods are now running.

        kubectl get pods --namespace cert-manager
        

        You should see a similar output:

          
        NAME                                       READY   STATUS    RESTARTS   AGE
        cert-manager-579d48dff8-84nw9              1/1     Running   3          1m
        cert-manager-cainjector-789955d9b7-jfskr   1/1     Running   3          1m
        cert-manager-webhook-64869c4997-hnx6n      1/1     Running   0          1m
            
        

        Note

        You should wait until all cert-manager pods are ready and running prior to proceeding to the next section.

      Create a ClusterIssuer Resource

1. Create a manifest file named acme-issuer-prod.yaml that will be used to create a ClusterIssuer resource on your cluster. Ensure you replace user@example.com with your own email address.

        acme-issuer-prod.yaml
        
        apiVersion: cert-manager.io/v1alpha2
        kind: ClusterIssuer
        metadata:
          name: letsencrypt-prod
        spec:
          acme:
    email: user@example.com
            server: https://acme-v02.api.letsencrypt.org/directory
            privateKeySecretRef:
              name: letsencrypt-secret-prod
            solvers:
            - http01:
                ingress:
                  class: nginx
        
              
        • This manifest file creates a ClusterIssuer resource that will register an account on an ACME server. The value of spec.acme.server designates Let’s Encrypt’s production ACME server, which should be trusted by most browsers.

          Note

          Let’s Encrypt provides a staging ACME server that can be used to test issuing trusted certificates, while not worrying about hitting Let’s Encrypt’s production rate limits. The staging URL is https://acme-staging-v02.api.letsencrypt.org/directory.
        • The value of privateKeySecretRef.name provides the name of a secret containing the private key for this user’s ACME server account (this is tied to the email address you provide in the manifest file). The ACME server will use this key to identify you.

• To ensure that you own the domain for which you will create a certificate, the ACME server issues a challenge to a client. cert-manager provides two options for solving challenges, http01 and dns01. In this example, the http01 challenge solver is used, configured in the solvers array. cert-manager spins up challenge solver Pods to solve the issued challenges and uses Ingress resources to route each challenge to the appropriate Pod.

      2. Create the ClusterIssuer resource:

        kubectl create -f acme-issuer-prod.yaml
        

        You should see a similar output:

          
        clusterissuer.cert-manager.io/letsencrypt-prod created
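
  Before moving on, you can verify that the ClusterIssuer successfully registered an account with the ACME server by describing it and checking for a Ready condition in its status:

    kubectl describe clusterissuer letsencrypt-prod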
              
        

      Enable HTTPS for your Application

      Create the Ingress Resource

      1. Create an Ingress resource manifest file named hello-app-ingress.yaml. If you assigned a different name to your ClusterIssuer, ensure you replace letsencrypt-prod with the name you used. Replace all hosts and host values with your own application’s domain name.

        hello-app-ingress.yaml
        
        apiVersion: networking.k8s.io/v1beta1
        kind: Ingress
        metadata:
          name: hello-app-ingress
          annotations:
            kubernetes.io/ingress.class: "nginx"
            cert-manager.io/cluster-issuer: "letsencrypt-prod"
        spec:
          tls:
          - hosts:
            - blog.example.com
            - shop.example.com
            secretName: example-tls
          rules:
          - host: blog.example.com
            http:
              paths:
              - backend:
                  serviceName: hello-one
                  servicePort: 80
          - host: shop.example.com
            http:
              paths:
              - backend:
                  serviceName: hello-two
                  servicePort: 80
            

This resource defines how traffic coming from the Linode NodeBalancers is handled. In this case, NGINX accepts these connections and routes the traffic to each of your Services by its domain name. The tls section of the Ingress resource manifest handles routing HTTPS traffic to the hostnames that are defined there.

      2. Create the Ingress resource.

        kubectl create -f hello-app-ingress.yaml
        

        You should see a similar output:

          
        ingress.networking.k8s.io/hello-app-ingress created
            
        
3. Navigate to your app’s domain, or, if you have been following along with the example, navigate to blog.example.com and then to shop.example.com. You should see the demo NGINX page load and display information about the Pod being used to serve your request.


        Use your browser to view your TLS certificate. You should see that the certificate was issued by Let’s Encrypt Authority X3.

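  You can also check certificate issuance from the command line. cert-manager creates a Certificate resource named after the secretName defined in the Ingress's tls section (example-tls in this guide); its READY column turns True once the ACME challenge completes:

    kubectl get certificate example-tls
    kubectl describe certificate example-tls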


      This guide is published under a CC BY-ND 4.0 license.




      Multi-Location Load Balancing for DigitalOcean


      Video

Snapt CEO and industry expert Dave Blakey unpacks the current and future state of ADCs and load balancers, and how they solve the challenges of delivering and securing multi-location and cloud-native applications on DigitalOcean.

      Topics covered

• Comparing the differences in load balancing architectures for traditional vs. cloud-native and microservices-based applications
      • Understanding how multi-location load balancing works with GSLB and intelligent DNS routing
• How to quickly scale and configure ADCs using a centrally managed ADC platform
• Applying AI & ML to better scale, secure, and improve the availability of multi-location applications
      • Using advanced real-time telemetry to observe and secure critical apps deployed into multiple locations

      Resources

      Nova ADC from Snapt is a powerful and scalable ADC for modern networks and users.


