
      New Survey Reveals the Big 4 Reasons Behind Cloud Migrations and the Off-Premise Exodus


      Organizations continue to migrate their IT infrastructure off-premise, and this trend is not slowing down any time soon. INAP’s 2019 State of IT Infrastructure Management survey, released in November, revealed that nearly 9 in 10 organizations (88 percent) will be migrating at least some workloads off-premise in the next three years.

      This already high percentage climbs even higher among organizations that already have off-premise environments. Ninety-six percent of this group plans to move more workloads off-prem in the near future.

      For this second annual survey, participants—500 IT leaders and infrastructure managers—were asked why they are moving workloads off-prem, and where those workloads are going. The results indicate four big reasons for moving off-prem, as well as the share of companies migrating workloads to cloud, colo and bare metal.

      If you haven’t checked out the survey report yet, read up on it here, or download the full report below.

      Why Companies are Moving Off-Premise

      [Chart: The Big 4 reasons for moving off-premise, 2018 vs. 2019]

      The data above compares the 2018 and 2019 results of the survey. The clear top four reasons for 2019—network, scalability, resiliency and security—make a compelling case for why the demise of the on-premise data center is perhaps inevitable within the next decade. Let’s explore this year’s top reasons in greater depth.

      Improve Network Performance

      Jumping up six points from 2018, where it ranked No. 3 overall, network performance has claimed the No. 1 spot for reasons to move off-premise. Ultra-low latency and high availability are critically important to end users (think multi-player gamers, streaming customers, Ad Tech and financial service consumers, etc.) and can make the difference between retaining customers and losing them to a competitor. Additionally, conversations around the importance of Edge Computing and Edge Networking strategies continue to grow in prominence, meaning network performance will only grow more important.

      Accomplishing both low latency and high availability requires not only more bandwidth, but better infrastructure to serve end users at the edge. Alternatively, organizations can take advantage of network route optimization technologies to ensure traffic is always sent along the lowest-latency path.

      Whichever path an organization chooses to take, it’s clear that traditional on-premise data centers are increasingly incapable of delivering the performance today’s digital economy demands. To truly compete in the current and future tech landscape, strong network performance is a requirement, not a luxury.

      Application Scalability and Resiliency

      The No. 2 (scalability) and No. 3 (resiliency) reasons are both essential components of running workloads and applications in the digital economy. Infrastructure as a Service (IaaS) solutions deliver these attributes via on-demand deployment, and colocation data centers deliver them via greater space and power capacity, each offering companies the ability to add resources more easily and efficiently than they can with the average on-premise facility.

      Additionally, any quality off-premise solution will include component and utility redundancy, enabling uptime of 99.999 percent annually. A single point of failure related to power or cooling, for example, will not disrupt your service in a Tier 3 data center run and maintained by a reliable partner.

      Infrastructure and Data Center Security

      Cyberattacks continue to grow in sophistication and frequency. Earlier this year, we rounded up sobering statistics on the state of cybersecurity. Did you know that it takes an organization 34 days on average to patch for critical common vulnerabilities and exposures? And did you also know that a successful phishing attempt will cost a small or medium-sized business $1.6 million on average? It’s no wonder security ranked No. 4 as a reason to move off-premise.

      As information security initiatives attract more attention from the C-suite, in-house IT infrastructure and operations leaders will continue to view on-premise facilities as a vulnerability. The levels of physical and network security offered by leading cloud and data center providers go above and beyond what an on-premise operation can achieve. The security and compliance-ready attributes of Tier 3 data center facilities instantly tick several “best-practice” boxes, giving IT pros and the C-suite alike peace of mind, not to mention assurance that their infrastructure will be ready to stand up to future threats.

      Additionally, more companies are choosing to offload the day-to-day management of common security functions like monitoring, log management and patch management. For instance, a fully managed hosted private cloud offers the scalability and security that organizations are looking for and allows them to work with a trusted partner to deploy the best-fit solution for applications and workloads.

      The Future of IT Infrastructure is Hybrid

      Now that we understand why organizations are headed off-prem, let’s explore where they are moving. We asked survey participants to choose from non-Software as a Service (SaaS) and non-Platform as a Service (PaaS) environments, including hosted private cloud, hyperscale public cloud, colocation data center and hosted bare metal or dedicated servers. The participants were able to select all options that applied.

      [Chart: Hybrid IT: where off-premise workloads are headed]

      Based on the results, we learned that it’s not a uniform journey to the major hyperscale cloud providers. Organizations plan to spread workloads across a variety of different environments, including colocation and hosted private clouds, with the latter just outpacing the hyperscalers at 77 percent.

      Between colocation and the different types of clouds, organizations have plenty of choice when it comes to their off-premise infrastructure, and there is no one-size-fits-all best solution. In this hybrid era of IT, it will be important for organizations to evaluate their infrastructure strategies to ensure that their chosen solutions (cloud, colo, etc.) are best meeting workload requirements.

      “All of that infrastructure spread out across multiple data centers and clouds needs to be centrally managed, monitored and secured with efficiency,” says Jennifer Curry, INAP’s SVP, Global Cloud Services in TechRepublic’s coverage of the report. “This is no easy feat, so my recommendation would be to seek out partners who understand how to design performance-driven architectures and partners who can provide you the flexibility and support required to give your teams the peace of mind and the ability to focus on what matters most.”

      The Off-Premise Acceleration

      Just how fast will the on-premise data center exodus be? In our latest survey, respondents were asked to estimate the percentage share of workloads on-premise today and three years from now. Overall, IT pros expect a 38 percent reduction in on-premise workloads by 2022. While there are myriad reasons to move away from on-prem infrastructures, the “Big 4” reasons present a compelling case for this migration.

      Laura Vietmeyer






      Deploy NodeBalancers with the Linode Cloud Controller Manager


      Updated by Linode. Written by Linode Community.

      The Linode Cloud Controller Manager (CCM) allows Kubernetes to deploy Linode NodeBalancers whenever a Service of the “LoadBalancer” type is created. This provides the Kubernetes cluster with a reliable way of exposing resources to the public internet. The CCM handles the creation and deletion of the NodeBalancer and correctly identifies the resources, and their networking, that the NodeBalancer will service.

      This guide will explain how to:

      • Create a service with the type “LoadBalancer.”
      • Use annotations to control the functionality of the NodeBalancer.
      • Use the NodeBalancer to terminate TLS encryption.

      Caution

      Using the Linode Cloud Controller Manager to create NodeBalancers will create billable resources on your Linode account. A NodeBalancer costs $10 a month. Be sure to follow the instructions at the end of the guide if you would like to delete these resources from your account.

      Before You Begin

      You should have a working knowledge of Kubernetes and familiarity with the kubectl command line tool before attempting the instructions found in this guide. For more information about Kubernetes, consult our Kubernetes Beginner’s Guide and our Getting Started with Kubernetes guide.

      When using the CCM for the first time, it’s highly suggested that you create a new Kubernetes cluster, as there are a number of issues that prevent the CCM from running on Nodes that are in the “Ready” state. For a completely automated install, you can use the Linode CLI’s k8s-alpha command line tool, which utilizes Terraform to fully bootstrap a Kubernetes cluster on Linode. It includes the Linode Container Storage Interface (CSI) Driver plugin, the Linode CCM plugin, and the ExternalDNS plugin. For more information on creating a Kubernetes cluster with the Linode CLI, review our How to Deploy Kubernetes on Linode with the k8s-alpha CLI guide.
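
      As a sketch, a bootstrap with the k8s-alpha plugin might look like the following. The cluster name, node type, node count and region are example values, not requirements; adjust them to suit your needs:

      linode-cli k8s-alpha create example-cluster --node-type g6-standard-2 --nodes 3 --region us-east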

      Note

      To manually add the Linode CCM to your cluster, you must start kubelet with the --cloud-provider=external flag. kube-apiserver and kube-controller-manager must NOT supply the --cloud-provider flag. For more information, visit the upstream Cloud Controller documentation.
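
      For illustration, on a kubeadm-based node one common way to pass this flag is through a kubelet systemd drop-in; the file path below is only an example and varies by distribution and install method:

      # /etc/systemd/system/kubelet.service.d/20-cloud-provider.conf (example path)
      [Service]
      Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external"

      After adding the drop-in, run systemctl daemon-reload and restart kubelet so the flag takes effect.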

      If you’d like to add the CCM to a cluster by hand, and you are using macOS, you can use the generate-manifest.sh file in the deploy folder of the CCM repository to generate a CCM manifest file that you can later apply to your cluster. Use the following command:

      ./generate-manifest.sh $LINODE_API_TOKEN us-east
      

      Be sure to replace $LINODE_API_TOKEN with a valid Linode API token, and replace us-east with the region of your choosing.

      To view a list of regions, you can use the Linode CLI, or you can view the Regions API endpoint.
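
      For example, the following Linode CLI command prints the available regions and their IDs:

      linode-cli regions list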

      If you are not using macOS, you can copy the ccm-linode-template.yaml file and change the values of the data.apiToken and data.region fields manually.
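
      As a rough sketch of the portion you would edit (the exact structure in the repository’s template may differ slightly), the manifest contains a Secret whose API token and region values you fill in; values under data must be base64-encoded:

      apiVersion: v1
      kind: Secret
      metadata:
        name: ccm-linode
        namespace: kube-system
      data:
        apiToken: "<base64-encoded Linode API token>"
        region: "<base64-encoded region ID, e.g. us-east>"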

      Using the CCM

      To use the CCM, you must have a collection of Pods that need to be load balanced, usually from a Deployment. For this example, you will create a Deployment that deploys three NGINX Pods, and then create a Service to expose those Pods to the internet using the Linode CCM.

      1. Create a Deployment manifest describing the desired state of the three replica NGINX containers:

        nginx-deployment.yaml
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: nginx-deployment
          labels:
            app: nginx
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
              - name: nginx
                image: nginx
                ports:
                - containerPort: 80
      2. Use the create command to apply the manifest:

        kubectl create -f nginx-deployment.yaml
        
      3. Create a Service for the Deployment:

        nginx-service.yaml
        apiVersion: v1
        kind: Service
        metadata:
          name: nginx-service
          annotations:
            service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
          labels:
            app: nginx
        spec:
          type: LoadBalancer
          ports:
          - name: http
            port: 80
            protocol: TCP
            targetPort: 80
          selector:
            app: nginx
          sessionAffinity: None

        The above Service manifest includes a few key concepts.

        • The first is the spec.type of LoadBalancer. This LoadBalancer type is responsible for telling the Linode CCM to create a Linode NodeBalancer, and will provide the Deployment it services a public facing IP address with which to access the NGINX Pods.
        • There is additional information being passed to the CCM in the form of metadata annotations (service.beta.kubernetes.io/linode-loadbalancer-throttle in the example above), which are discussed in the next section.
      4. Use the create command to create the Service, and in turn, the NodeBalancer:

        kubectl create -f nginx-service.yaml
        

      You can log in to the Linode Cloud Manager to view your newly created NodeBalancer.
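
      Alternatively, you can check on the Service from the command line; once the NodeBalancer has been provisioned, its public IP address appears in the EXTERNAL-IP column:

      kubectl get service nginx-service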

      Annotations

      There are a number of settings, called annotations, that you can use to further customize the functionality of your NodeBalancer. Each annotation should be included in the annotations section of the Service manifest file’s metadata, and all of the annotations are prefixed with service.beta.kubernetes.io/linode-loadbalancer-.

      Annotation (suffix) | Values | Default Value | Description
      throttle | 0-20 (0 disables the throttle) | 20 | Client Connection Throttle. This limits the number of new connections per second from the same client IP.
      protocol | tcp, http, https | tcp | Specifies the protocol for the NodeBalancer.
      tls | Example value: [ { "tls-secret-name": "prod-app-tls", "port": 443 } ] | None | A JSON array (formatted as a string) that specifies which ports use TLS and their corresponding secrets. The secret type should be kubernetes.io/tls. For more information, see the TLS Encryption section.
      check-type | none, connection, http, http_body | None | The type of health check to perform on Nodes to ensure that they are serving requests. connection checks for a valid TCP handshake, http checks for a 2xx or 3xx response code, http_body checks for a certain string within the response body of the health check URL.
      check-path | string | None | The URL path that the NodeBalancer will use to check on the health of the back-end Nodes.
      check-body | string | None | The text that must be present in the body of the page used for health checks. For use with a check-type of http_body.
      check-interval | integer | None | The duration, in seconds, between health checks.
      check-timeout | integer (a value between 1-30) | None | Duration, in seconds, to wait for a health check to succeed before it is considered a failure.
      check-attempts | integer (a value between 1-30) | None | Number of health checks to perform before removing a back-end Node from service.
      check-passive | boolean | false | When true, 5xx status codes will cause the health check to fail.

      To learn more about checks, please see our reference guide to NodeBalancer health checks.
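
      As an illustrative snippet, a Service that asks the NodeBalancer to run HTTP health checks might include annotations like the following. The /healthz path and the interval and attempt values are placeholders, not requirements:

      ...
      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-check-type: "http"
          service.beta.kubernetes.io/linode-loadbalancer-check-path: "/healthz"
          service.beta.kubernetes.io/linode-loadbalancer-check-interval: "10"
          service.beta.kubernetes.io/linode-loadbalancer-check-attempts: "3"
      ...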

      TLS Encryption

      This section will describe how to set up TLS termination for a Service so that the Service can be accessed over https.

      Generating a TLS type Secret

      Kubernetes allows you to store secret information in a Secret object for use within your cluster. This is useful for storing things like passwords and API tokens. In the context of the Linode CCM, Secrets are useful for storing Transport Layer Security (TLS) certificates and keys. The linode-loadbalancer-tls annotation requires TLS certificates and keys to be stored as Kubernetes Secrets with the type of tls. Follow the next steps to create a valid tls type Secret:

      1. Generate a TLS key and certificate using a TLS toolkit like OpenSSL. Be sure to change the CN and O values to those of your own website domain.

        openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout key.pem -out cert.crt -subj "/CN=mywebsite.com/O=mywebsite.com"
        
      2. To create the secret, you can issue the create secret tls command, being sure to replace $SECRET_NAME with the name you’d like to give to your secret. This will be how you reference the secret in your Service manifest.

        kubectl create secret tls $SECRET_NAME --key key.pem --cert cert.crt
        
      3. You can check to make sure your Secret has been successfully stored by using describe:

        kubectl describe secret $SECRET_NAME
        

        You should see output like the following:

          
        kubectl describe secret my-secret
        Name:         my-secret
        Namespace:    default
        Labels:       <none>
        Annotations:  <none>
        
        Type:  kubernetes.io/tls
        
        Data
        ====
        tls.crt:  1164 bytes
        tls.key:  1704 bytes
        
        

        If your key is not formatted correctly you’ll receive an error stating that there is no PEM formatted data within the key file.
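
        If you want to verify the key and certificate before creating the Secret, OpenSSL can inspect them directly; for example:

        openssl rsa -in key.pem -check -noout
        openssl x509 -in cert.crt -noout -subject -dates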

      Defining TLS within a Service

      In order to use https you’ll need to instruct the Service to use the correct port through the proper annotations. Take the following code snippet as an example:

      nginx-service.yaml
      ...
      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-protocol: https
          service.beta.kubernetes.io/linode-loadbalancer-tls: '[ { "tls-secret-name": "my-secret",
            "port": 443 } ]'
      ...

      The linode-loadbalancer-protocol annotation identifies the https protocol. Then, the linode-loadbalancer-tls annotation defines which Secret and port to use for serving https traffic. If you have multiple Secrets and ports for different environments (testing, staging, etc.), you can define more than one secret and port pair:

      nginx-service-two-environments.yaml
      ...
          service.beta.kubernetes.io/linode-loadbalancer-tls: |
            [ { "tls-secret-name": "my-secret", "port": 443 }, { "tls-secret-name": "my-secret-staging", "port": 8443 } ]
      ...

      Next, you’ll need to set up your Service to expose the https port. The whole example might look like the following:

      nginx-service.yaml
      apiVersion: v1
      kind: Service
      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-protocol: https
          service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
          service.beta.kubernetes.io/linode-loadbalancer-tls: '[ { "tls-secret-name": "my-secret",
            "port": 443 } ]'
        labels:
          app: nginx
        name: nginx-service
      spec:
        ports:
        - name: https
          port: 443
          protocol: TCP
          targetPort: 80
        selector:
          app: nginx
        type: LoadBalancer

      Note that here the NodeBalancer created by the Service is terminating the TLS encryption and proxying that to port 80 on the NGINX Pod. If you had a Pod that listened on port 443, you would set the targetPort to that value.

      Session Affinity

      kube-proxy will always attempt to proxy traffic to a random backend Pod. To ensure that traffic is directed to the same Pod, you can use the sessionAffinity mechanism. When set to ClientIP, sessionAffinity will ensure that all traffic from the same IP will be directed to the same Pod:

      session-affinity.yaml
      apiVersion: v1
      kind: Service
      metadata:
        name: nginx-service
        labels:
          app: nginx
      spec:
        type: LoadBalancer
        selector:
          app: nginx
        sessionAffinity: ClientIP
        sessionAffinityConfig:
          clientIP:
            timeoutSeconds: 100

      You can set the timeout for the session by using the spec.sessionAffinityConfig.clientIP.timeoutSeconds field.

      Troubleshooting

      If you are having problems with the CCM, such as the NodeBalancer not being created, you can check the CCM’s error logs. First, you’ll need to find the name of the CCM Pod in the kube-system namespace:

      kubectl get pods -n kube-system
      

      The Pod will be named ccm-linode- with five random characters at the end, like ccm-linode-jrvj2. Once you have the Pod name, you can view its logs. The --tail=n flag is used to return the last n lines, where n is the number of your choosing. The below example returns the last 100 lines:

      kubectl logs ccm-linode-jrvj2 -n kube-system --tail=100
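
      If you’d rather not copy the Pod name by hand, you can combine the two commands; this sketch assumes exactly one CCM Pod is running:

      kubectl logs -n kube-system --tail=100 $(kubectl get pods -n kube-system -o name | grep ccm-linode)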
      

      Note

      Currently the CCM only supports https ports within a manifest’s spec when the linode-loadbalancer-protocol is set to https. For regular http traffic, you’ll need to create an additional Service and NodeBalancer. For example, if you had the following in the Service manifest:

      unsupported-nginx-service.yaml
      ...
      spec:
        ports:
        - name: https
          port: 443
          protocol: TCP
          targetPort: 80
        - name: http
          port: 80
          protocol: TCP
          targetPort: 80
      ...

      The NodeBalancer would not be created and you would find an error similar to the following in your logs:

      ERROR: logging before flag.Parse: E0708 16:57:19.999318       1 service_controller.go:219] error processing service default/nginx-service (will retry): failed to ensure load balancer for service default/nginx-service: [400] [configs[0].protocol] The SSL private key and SSL certificate must be provided when using 'https'
      ERROR: logging before flag.Parse: I0708 16:57:19.999466       1 event.go:221] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx-service", UID:"5d1afc22-a1a1-11e9-ad5d-f23c919aa99b", APIVersion:"v1", ResourceVersion:"1248179", FieldPath:""}): type: 'Warning' reason: 'CreatingLoadBalancerFailed' Error creating load balancer (will retry): failed to ensure load balancer for service default/nginx-service: [400] [configs[0].protocol] The SSL private key and SSL certificate must be provided when using 'https'
      

      Removing the http port would allow you to create the NodeBalancer.
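
      If you also need to serve plain http traffic, a second Service (and therefore a second NodeBalancer) can carry it. The manifest below is only an example; the Service name is a placeholder:

      nginx-service-http.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: nginx-service-http
        labels:
          app: nginx
      spec:
        type: LoadBalancer
        ports:
        - name: http
          port: 80
          protocol: TCP
          targetPort: 80
        selector:
          app: nginx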

      Delete a NodeBalancer

      To delete a NodeBalancer and the Service that it represents, you can use the Service manifest file you used to create the NodeBalancer. Simply use the delete command and supply your file name with the -f flag:

      kubectl delete -f nginx-service.yaml
      

      Similarly, you can delete the Service by name:

      kubectl delete service nginx-service
      

      Updating the CCM

      The easiest way to update the Linode CCM is to edit the DaemonSet that creates the Linode CCM Pod. To do so, you can run the edit command.

      kubectl edit ds -n kube-system ccm-linode
      

      The CCM DaemonSet manifest will appear in vim. Press i to enter insert mode. Navigate to the image field (under spec.template.spec.containers) and change its value to the desired version tag. For instance, if you had the following image:

      image: linode/linode-cloud-controller-manager:v0.2.2
      

      You could update the image to v0.2.3 by changing the image tag:

      image: linode/linode-cloud-controller-manager:v0.2.3
      

      For a complete list of CCM version tags, visit the CCM DockerHub page.

      Caution

      The CCM DaemonSet manifest may list latest as the image version tag. This tag may or may not point to the most recent release. To ensure you are running the latest version, first check the CCM DockerHub page, then use the most recent release tag.

      Press escape to exit insert mode, then type :wq and press enter to save your changes. A new Pod will be created with the new image, and the old Pod will be deleted.
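
      If you prefer a non-interactive update, kubectl set image can change the tag in one step. The container name used below (ccm-linode) is an assumption; confirm it first with kubectl get ds ccm-linode -n kube-system -o jsonpath='{.spec.template.spec.containers[*].name}':

      kubectl set image daemonset/ccm-linode ccm-linode=linode/linode-cloud-controller-manager:v0.2.3 -n kube-system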

      Next Steps

      To further take advantage of Linode products through Kubernetes, check out our guide on how to use the Linode Container Storage Interface (CSI), which allows you to create persistent volumes backed by Linode Block Storage.


      This guide is published under a CC BY-ND 4.0 license.




      Survey: How Do IT Leaders Grade Their Data Center and Cloud Infrastructure Strategies?


      We’re still only entering the hybrid and multicloud era of information technology, but according to INAP’s latest survey research, the transformation is about to hit warp speed. Nearly 9 in 10 organizations with on-premise data centers plan to move at least some of their workloads off-premise into cloud, managed hosting or colocation in the next three years.

      As more companies diversify their infrastructure mix, how confident are IT leaders and managers that they’re taking the right approach?

      For INAP’s second annual installment of the State of IT Infrastructure Management survey, we asked 500 IT leaders and infrastructure managers to assess their data center and cloud strategies, assign a letter grade and give us their thoughts on why they chose a particular rating.

      How do the grades stack up among participants? What factors are most closely associated with A-grade infrastructures? And why do some infrastructure strategies fall short?

      Making the Grade in the Hybrid IT and Multicloud Era

      [Chart: How IT pros grade their infrastructure strategies]

      Instead of the classic bell curve so many of us were subject to during our years in academia, most IT infrastructure management professionals say their infrastructure strategy deserves an above-average grade, with the majority (56.3 percent of respondents) giving their infrastructures a B. Roughly 19 percent think they deserve a C or below. While the results can be read as a vote of confidence for multiplatform, hybrid cloud and multicloud strategies, most respondents say there’s still plenty of room for improvement: Only 1 in 4 participants (25.2 percent) gave their infrastructure strategies an A.

      Factors Most Associated with A-Grade Infrastructure

      Still, it’s worth asking: What factors distinguish A’s from the rest of the crowd?

      Four groups in the data, regardless of company size, industry and headcount, are strongly correlated with high marks:

      Off-Premise Migrators

      A’s have a significantly smaller portion of their workloads on-premise (30 percent of workloads, on average) compared to C’s and below (45 percent).

      Colocation Customers

      Thirty-one percent of IT pros who have colocation as part of their infrastructure mix give themselves an A. This is six points higher than the total population.

      Cloud Diversifiers

      For companies already in the cloud, those who only host with public cloud platforms (AWS, Azure, Google) are less likely to give themselves A’s than those who adopt multicloud platform strategies—18 percent vs. 29 percent, respectively.

      Managed Services Super Users

      The more companies rely on third parties or cloud providers to fully manage their hosted environments (up to the application layer), the more likely they are to assign their infrastructure strategy an A. The average share of workloads fully managed: A’s (71 percent), B’s (62 percent), C’s (54 percent).

      Why Some IT Infrastructures Strategies Fall Short

      [Chart: Reasons infrastructure strategies fall short of top marks]

      In the results above, no single explanation for why strategies did not earn top marks was selected by fewer than a fifth of respondents, but two clearly lead the pack:

      • Infrastructure not fully optimized for applications
      • Too much time managing and maintaining the infrastructure

      The first leading factor speaks to a simultaneous benefit and challenge of the multicloud and hybrid IT era. It’s more economical than ever to find a mix of infrastructure solutions that match the needs of individual workloads and applications. The flip side to that benefit is the simple fact that adopting new platforms can quickly lead to environment sprawl and raise the complexity of the overall strategy—making the goal of application optimization a tougher bar to clear.

      The second leading factor—improper time allocation—underscores a central theme of IT infrastructure management that will be discussed in greater depth in a future blog.

      Senior Leaders vs. Non-Senior IT Pros

      As previously noted, only 1 in 4 participants gave their infrastructure strategies an A. That number falls to 1 in 8 (12.6 percent) if we remove senior IT leaders from the mix. Non-senior infrastructure managers are also two times more likely to grade their infrastructure strategy a C. In other areas of the State of IT Infrastructure Management survey, senior leaders generally held a more optimistic outlook, and the infrastructure grades were no exception.

      Why might this be? We can only speculate, but senior leaders may be loath to give a low grade to a strategy they had a large part in shaping. Or perhaps it’s that non-senior leaders deal with more of the day-to-day tasks associated with infrastructure upkeep and don’t feel as positive about the strategy. Whatever the reason, these two groups are not seeing eye to eye.

      Strategizing to Earn the A-Grade

      When considering solutions—be it cloud, colocation and/or managed services—a lesson or two can be taken from those A-grade infrastructure strategies, and maybe from the C’s and below, as well.

      If you’re ready to level-up your strategy, but unsure where to start, INAP can help. We offer high-performance data center, cloud, network and managed services solutions that will earn your infrastructure strategy an A+.

      Laura Vietmeyer




