
      Deploying Microsoft SQL Servers in a Private Cloud with High Availability


      There are many considerations to account for when implementing Microsoft SQL Server in a private cloud environment. Today’s SQL-dependent applications have different performance and high availability (HA) requirements. In Part 1 of this series, we covered implementing Microsoft SQL Server in a private cloud for maximum performance. Here in Part 2, we’ll explore deployment in a private cloud with HA.

      HA Options Available Within Microsoft SQL Server

      When designing Microsoft SQL Server deployments for HA, we must consider the requirements and features of SQL-dependent applications to ensure they work with highly available deployments. Below is a list of a few HA options natively available within Microsoft SQL Server, organized by deployment type.

      Physical SQL Server Deployment with Single SAN and SQL Clustering

      This HA option pairs Windows Server native clustering with SQL Server clustering and a single SAN or direct-attached storage subsystem. In this option, we run two SQL Server nodes with a single copy of the database residing on the SAN. Only one SQL Server instance is active and attached to the databases at any given time.

      Pros: This option protects against a single SQL node failure. The surviving node will start its SQL instance and attach to the same databases to continue serving client requests. It offers the best compute performance and good storage performance.

      Cons: This option does not protect against database corruption or storage subsystem issues. Both corruption and SAN issues will impact the entire cluster. Licensing costs may be an issue if you license by CPU.

      Virtual SQL Server Deployment with SQL on SAN and VMware HA

      Standard Windows clustering is not recommended in this option. VMware and other hypervisors provide a level of HA that protects against hypervisor node failure by restarting the SQL Server VM on other nodes.

      Pros: This is an easy way to implement HA without having to configure and support Windows Server clustering. Per-CPU SQL licensing costs are reduced. This option offers good compute performance and good storage performance.

      Cons: This option does not protect against database corruption or storage subsystem issues. Both problems will cause an outage of the SQL Server.

      Virtual SQL Server Deployment with SQL on Local SSD Disk and SQL Native AlwaysOn HA

      In this option, we utilize two SQL Server virtual machines running on local SSD drives configured in RAID10 directly on the hypervisor nodes. Each SQL VM runs on a separate node with local storage. The database that requires HA is protected using SQL Server AlwaysOn features, which maintain two copies of the database on two separate VMs and two separate, high-speed storage subsystems.

      Pros: This option offers strong HA protection against single SQL Server VM failure, single hardware node failure and single storage system failure. Automatic database page corruption protection is provided by the AlwaysOn technology, which keeps two separate copies of the database in sync. It offers good compute performance and the best storage performance in a virtual environment. The virtualized SQL Server provides savings on per-CPU licensing costs by assigning just the amount of CPU that your SQL Server needs. Only the active SQL Server instance requires licensing.

      Cons: VMware vMotion should not be used while the SQL VM is powered on. In this design, however, vMotion is not needed, since the AlwaysOn-protected database server will not need to vMotion during a failover. By design, other VMware HA and resource management services will not be used in this option. Local storage has to be scoped with growth in mind. My rule of thumb is to scope three years of required growth for local storage per server.
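
      For reference, creating the availability group itself is a short piece of T-SQL. The following is a minimal sketch only: the server names, database name and endpoint URLs are hypothetical placeholders, and it assumes failover clustering and the AlwaysOn feature are already enabled on both VMs.

        -- Run on the primary replica; the secondary joins afterward with
        -- ALTER AVAILABILITY GROUP AppAG JOIN.
        CREATE AVAILABILITY GROUP AppAG
        FOR DATABASE AppDB
        REPLICA ON
            N'SQLVM1' WITH (
                ENDPOINT_URL = N'TCP://sqlvm1.example.local:5022',
                AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                FAILOVER_MODE = AUTOMATIC),
            N'SQLVM2' WITH (
                ENDPOINT_URL = N'TCP://sqlvm2.example.local:5022',
                AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                FAILOVER_MODE = AUTOMATIC);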

      Physical SQL Server Deployment with Local SSD Storage and SQL Native AlwaysOn HA

      We utilize two physical SQL Servers in this option. Each hardware server has SSD RAID10 for local storage, and each SQL Server runs on a separate bare metal server. The database that requires HA is protected using SQL Server AlwaysOn features, which maintain two copies of the database on two separate bare metal servers and two separate, high-speed storage subsystems.

      Pros: This option provides strong HA protection against single SQL Server failure and single storage system failure. Automatic database page corruption protection is provided by the AlwaysOn technology, which keeps two separate copies of the database in sync. This option offers the best compute performance and the best storage performance. Only the active SQL Server instance requires licensing.

      Cons: Per-CPU licensing costs could get pricey depending on CPU core count. Local storage must be scoped with growth in mind. My rule of thumb is to scope three years of required local storage growth per server.
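
      With either AlwaysOn design, you can verify that both copies of each protected database are healthy and in sync using the HADR dynamic management views. A minimal sketch:

        -- Confirm each replica copy is SYNCHRONIZED and HEALTHY.
        SELECT ag.name AS ag_name,
               ar.replica_server_name,
               drs.synchronization_state_desc,
               drs.synchronization_health_desc
        FROM sys.dm_hadr_database_replica_states AS drs
        JOIN sys.availability_replicas AS ar ON ar.replica_id = drs.replica_id
        JOIN sys.availability_groups AS ag ON ag.group_id = drs.group_id;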



      Closing Thoughts on Microsoft SQL Server HA Options

      Compare your workload requirements with the abilities of each option and your budgetary considerations to determine what works best for you. Development or test SQL servers and production workloads can easily run inside virtualized environments with SAN storage. Some of your more demanding production workloads may need to be placed into virtual or physical environments with local SSD-based storage for best performance and HA needs.

      Integrating these options into your private cloud environments is simple and can save on costs down the line. When working with local storage, be sure to provision for disk space growth the first time. Future-proofing your local storage for growth will save on maintenance headaches and costs in the long run.

      In this series, we have looked at basic SQL Server concepts and the performance factors to consider when designing Microsoft SQL Server deployments for HA and high performance. Download the whitepaper to get this series in its entirety.

      High performance is measured differently for different applications and is greatly dependent on the end user’s expectations as they interact with your supported application. By collecting these simple measurements and requirements up-front, you will be able to make decisions to right size the environment for your end user and to help you stick to your budget.

      Read Part 1: How to Implement Microsoft SQL Servers in a Private Cloud for Maximum Performance

      Read: The Basics of a Microsoft SQL Server Architecture


      Laura Vietmeyer






      How to Implement Microsoft SQL Servers in a Private Cloud for Maximum Performance


      There are many considerations to take into account when implementing a Microsoft SQL Server in a private cloud environment. Today’s SQL-dependent applications have different performance and high availability (HA) requirements. As a solutions architect, my goal is to provide our customers with the best-performing, highly available designs while managing budgetary concerns, scalability, supportability and total cost of ownership. Like so many tasks in IT infrastructure strategy, success is all about planning. There are many moving pieces, and balancing everything to reach our goal becomes a challenge if we don’t ask the right questions up front.

      In this two-part series, I’ll share my approach to scoping, sizing and designing private cloud infrastructure capable of migrating or standing up new Microsoft SQL Server environments. In part one, I’ll identify performance considerations and provide real world examples to make sure that your SQL Server environment is ready to meet application, growth and DR requirements of your organization. In part two, I’ll focus on SQL Server deployment options with high availability.

      If you need a review of SQL Server basics before we dive into the private cloud design, you can brush up here.


      SQL Server Performance Considerations: RAM, IOPS, CPU and More

      What follows are the basic performance considerations to take into account when designing a Microsoft SQL Server environment.

      RAM—These requirements are based on database size and developer recommendations. Ideally, you’ll have enough RAM to put the entire database into RAM. However, that’s not always possible with large DB sizes. RAM is delicious to SQL and the server will eat it up, so be generous if your budget allows. Leave 20 percent of RAM reserved on the server for OS and other services.
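
      As a concrete, hypothetical sketch: on a dedicated server with 64 GB of RAM, reserving 20 percent for the OS works out to a ceiling of roughly 52,428 MB for SQL Server, which you can enforce with sp_configure:

        -- 64 GB server: cap SQL Server at ~80% of RAM, leaving ~20% for the OS.
        EXEC sys.sp_configure N'show advanced options', 1;
        RECONFIGURE;
        EXEC sys.sp_configure N'max server memory (MB)', 52428;
        RECONFIGURE;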

      IOPS—The SQL Server is mostly a read/write machine, and its performance is dependent on disk IO and storage latency. With SSDs becoming more affordable, many SQL DBAs now prefer to run on SSDs due to high IOPS provided by SSD and NVMe drives. In the past, many 15K SAS disks in RAID10 were the norm for data volumes.

      Low latency is essential to SQL performance. By today’s standards, keeping latency below 5ms per IO is the norm. Sub 1ms latency is very common with local SSD storage. However, 10ms is still a good response time for most medium performance applications.
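
      To see how your current storage measures up against these latency targets, SQL Server’s file-stats DMV reports cumulative I/O stall per database file. A minimal sketch:

        -- Average read/write latency (ms) per database file since the last restart.
        SELECT DB_NAME(vfs.database_id) AS database_name,
               mf.physical_name,
               1.0 * vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_ms,
               1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        JOIN sys.master_files AS mf
            ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
        ORDER BY avg_read_ms DESC;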

      CPU—Bare metal CPU allocation is easy. Your server has a number of CPUs, and all of them can be allocated to SQL with no negative considerations other than licensing costs. However, allocating CPU to a SQL Server in a VMware private cloud environment should be done with caution. Licensing is based on CPU cores. Too much assigned, but unused, vCPU capacity in a VMware VM can negatively impact performance. Rightsizing is key.

      The SQL Server is not normally a CPU hog. Unexpected and prolonged high CPU usage during production time means something is not right with the database or SQL code. Adding CPU in those cases may not solve the issue. These unexpected conditions should be checked by SQL DBAs and developers. Higher than normal CPU utilization is to be expected during maintenance time. In instances where a database may require high CPU utilization as a norm, a developer or SQL DBA should check the database for missing indexes or other issues before adding more vCPUs to virtual servers.
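
      As a starting point for that investigation, the missing-index DMVs summarize where the optimizer wanted an index it didn’t have. A sketch (treat the output as hints for a DBA to investigate, not as changes to apply blindly):

        -- Top candidate missing indexes, weighted by estimated impact and seek count.
        SELECT TOP 10
               mid.statement AS table_name,
               migs.user_seeks,
               migs.avg_user_impact,
               mid.equality_columns,
               mid.inequality_columns,
               mid.included_columns
        FROM sys.dm_db_missing_index_group_stats AS migs
        JOIN sys.dm_db_missing_index_groups AS mig
            ON mig.index_group_handle = migs.group_handle
        JOIN sys.dm_db_missing_index_details AS mid
            ON mid.index_handle = mig.index_handle
        ORDER BY migs.avg_user_impact * migs.user_seeks DESC;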

      SQL as a VM

      Based on the above considerations, when designing a SQL Server environment, budgetary and licensing considerations may start leaning your design toward SQL as a VM on a private cloud with a restricted number of assigned vCPUs. Running SQL on a private cloud is a great way to save on licensing costs. It’s also a good performer for most application loads, and because SQL is more of an IO-dependent system, its performance will greatly depend on the latency and IO availability of your storage system.

      Storage Systems IO Availability

      SAN storage will normally provide great amounts of IO based on the amount of disks and disk types on the SAN. The norm to expect from many SAN systems is 5ms to 15ms of latency per IO. When your application desires sub 1ms latency, such as gaming, financial or ad-tech systems, a local SSD RAID10 disk array will save the day. These local disk arrays are easy to set up and use with both VMs and bare metal server implementations. You may ask, “What about the HA and extra redundancy features of using a SAN instead of local disk?” I will be discussing High Availability in a performance-demanding environment in Part 2: Deploying Microsoft SQL Servers in a Private Cloud with High Availability. Check back soon.

      In the end, your application’s performance requirements and budget will drive your decision whether to run a private cloud or a bare metal SQL deployment. Both VM and bare metal deployments can be configured with high availability and will easily integrate into your private cloud environments.

      With these performance considerations in mind, let’s discuss other scoping and sizing matrices that we’ll need to design our high-performing and future-proofed SQL deployment.

      Scoping and Sizing for Microsoft SQL Deployments in a Private Cloud

      As a Solution Engineer at INAP, I collect numerous measurements during the SQL scoping process. Before delving into numbers, I start by evaluating pain points clients may have with their current SQL Servers. I want to know what hurts.

      Asking these and other questions helps me understand how to resolve issues and future proof your next SQL deployment:

      • Is it performing to your expectation?
      • Are you meeting your SQL database maintenance time window?
      • Do your users complain that their SQL dependent application freezes sometimes for no reason?
      • When was the last time you restored a database from your backup and how long did that take?
      • What’s your failover plan in case your production database server dies?
      • What backup software is being used for SQL?
      • Are your databases running in the simple or full recovery model?

      Once I know the exact problem, or if I’m designing for a new application, I start by collecting specific technical details. The most common specific measurements and requirements collected during the scoping phase are:

      • DB sizes for all databases per SQL instance
      • Instances (Is there more than one SQL instance, and why?)
      • Growth requirements (Usually measured in daily change rate to help with other considerations like backups and replication for DR purposes)
      • RAM requirements & CPU requirements
      • Performance requirements, such as latency/IO and IOPS requirements
      • HA requirements (Can your customer base wait for you to restore your SQL Server in case of an outage? How long does it take to restore?)
      • Regulatory/Compliance requirements such as HIPAA and PCI
      • Maintenance schedule (Do the maintenance jobs complete on time every day?)
      • Replication requirements
      • Reporting requirements (Does this deployment need a reporting server so as to not interrupt production workloads?)
      • Backup (What is used today to backup production data? Has it been tested? What are the challenges?)

      In environments requiring high-performance database response times, we utilize native Windows Server tools, such as perfmon, to collect very detailed performance metrics from existing SQL Servers to identify performance bottlenecks and other considerations that help us resolve these issues in current or future database deployments. Because a SQL Server’s performance is heavily dependent on the disk subsystems, we will go deeper into disk design recommendations for private clouds, VMs and bare metal deployments.
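
      If you want a quick spot check without a full perfmon collection, SQL Server also exposes its perfmon counters through a DMV. A minimal sketch (exact counter names can vary by version and instance):

        -- Sample SQL Server's perfmon counters from inside the engine.
        SELECT object_name, counter_name, cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name IN (N'Page life expectancy', N'Batch Requests/sec');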



      SQL Server Disk Layout for Performance, HA and DR

      Separating database files into different disks is a best practice. It helps performance and helps your DBA easily identify performance issues when troubleshooting. For example, you could have a runaway query beating up your TempDB. If your TempDB is on a separate disk from your prod database files, you can easily identify that the TempDB disk is being thrashed and that same TempDB issue is not stepping on other workloads in your environments.

      For high performance requirements, SSDs are recommended. The following is a basic disk layout for performance:

      • Data (MDF, NDF files): Fast read/write disk, many drives in an array preferred for best performance
      • Index (NDF files, not often used): Fast read/write disk, many drives in an array preferred for best performance
      • Log (LDF files): Fast write performance, not much reading happens here
      • TempDB (Temp database to crunch numbers and formulas): Should be a fast disk. SSD is preferred. Do not combine with other data or log files.
      • Page File (NTFS): Keep this on a separate disk, LUN or array if possible. The C: drive is not a good place for a page file on a SQL Server.

      In a SQL deployment that requires DR, the above disk layout allows you to granularly replicate just the SQL data and the OS that need to be replicated, leaving TempDB and the page file out of the replication design, since those files are reset during reboot. TempDB and the page file also produce lots of noisy IO; replicating those files will result in unnecessarily heavy bandwidth and disk utilization.
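
      To make the layout concrete, here is a hypothetical T-SQL sketch that places data, log and TempDB files on dedicated volumes (the drive letters stand in for the separate disks described above):

        -- Hypothetical volumes: D: data, L: logs, T: TempDB.
        CREATE DATABASE SalesDB
        ON PRIMARY (NAME = SalesDB_data, FILENAME = 'D:\SQLData\SalesDB.mdf')
        LOG ON (NAME = SalesDB_log, FILENAME = 'L:\SQLLogs\SalesDB.ldf');

        -- Move TempDB to its own disk; takes effect after the next service restart.
        ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
        ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');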

      Check back soon for Part 2: Deploying Microsoft SQL Servers in a Private Cloud with High Availability

      Rob Lerner






      Deploy NodeBalancers with the Linode Cloud Controller Manager


      Updated by Linode. Written by the Linode Community.

      The Linode Cloud Controller Manager (CCM) allows Kubernetes to deploy Linode NodeBalancers whenever a Service of the “LoadBalancer” type is created. This provides the Kubernetes cluster with a reliable way of exposing resources to the public internet. The CCM handles the creation and deletion of the NodeBalancer, and correctly identifies the resources, and their networking, that the NodeBalancer will service.

      This guide will explain how to:

      • Create a service with the type “LoadBalancer.”
      • Use annotations to control the functionality of the NodeBalancer.
      • Use the NodeBalancer to terminate TLS encryption.

      Caution

      Using the Linode Cloud Controller Manager to create NodeBalancers will create billable resources on your Linode account. A NodeBalancer costs $10 a month. Be sure to follow the instructions at the end of the guide if you would like to delete these resources from your account.

      Before You Begin

      You should have a working knowledge of Kubernetes and familiarity with the kubectl command line tool before attempting the instructions found in this guide. For more information about Kubernetes, consult our Kubernetes Beginner’s Guide and our Getting Started with Kubernetes guide.

      When using the CCM for the first time, it’s highly suggested that you create a new Kubernetes cluster, as there are a number of issues that prevent the CCM from running on Nodes that are in the “Ready” state. For a completely automated install, you can use the Linode CLI’s k8s-alpha command line tool. The Linode CLI’s k8s-alpha command line tool utilizes Terraform to fully bootstrap a Kubernetes cluster on Linode. It includes the Linode Container Storage Interface (CSI) Driver plugin, the Linode CCM plugin, and the ExternalDNS plugin. For more information on creating a Kubernetes cluster with the Linode CLI, review our How to Deploy Kubernetes on Linode with the k8s-alpha CLI guide.

      Note

      To manually add the Linode CCM to your cluster, you must start kubelet with the --cloud-provider=external flag. kube-apiserver and kube-controller-manager must NOT supply the --cloud-provider flag. For more information, visit the upstream Cloud Controller documentation.

      If you’d like to add the CCM to a cluster by hand, and you are using macOS, you can use the generate-manifest.sh file in the deploy folder of the CCM repository to generate a CCM manifest file that you can later apply to your cluster. Use the following command:

      ./generate-manifest.sh $LINODE_API_TOKEN us-east
      

      Be sure to replace $LINODE_API_TOKEN with a valid Linode API token, and replace us-east with the region of your choosing.

      To view a list of regions, you can use the Linode CLI, or you can view the Regions API endpoint.

      If you are not using macOS, you can copy the ccm-linode-template.yaml file and change the values of the data.apiToken and data.region fields manually.
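
      As an illustration only (the template file in the CCM repository is authoritative; the field names come from the text above and the values are placeholders), the fragment you would edit looks roughly like this:

      data:
        apiToken: "YOUR_LINODE_API_TOKEN"
        region: "us-east"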

      Using the CCM

      To use the CCM, you must have a collection of Pods that need to be load balanced, usually from a Deployment. For this example, you will create a Deployment that deploys three NGINX Pods, and then create a Service to expose those Pods to the internet using the Linode CCM.

      1. Create a Deployment manifest describing the desired state of the three replica NGINX containers:

        nginx-deployment.yaml
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: nginx-deployment
          labels:
            app: nginx
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
              - name: nginx
                image: nginx
                ports:
                - containerPort: 80
      2. Use the create command to apply the manifest:

        kubectl create -f nginx-deployment.yaml
        
      3. Create a Service for the Deployment:

        nginx-service.yaml
        apiVersion: v1
        kind: Service
        metadata:
          name: nginx-service
          annotations:
            service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
          labels:
            app: nginx
        spec:
          type: LoadBalancer
          ports:
          - name: http
            port: 80
            protocol: TCP
            targetPort: 80
          selector:
            app: nginx
          sessionAffinity: None

        The above Service manifest includes a few key concepts.

        • The first is the spec.type of LoadBalancer. This LoadBalancer type is responsible for telling the Linode CCM to create a Linode NodeBalancer, and will provide the Deployment it services a public facing IP address with which to access the NGINX Pods.
        • There is additional information being passed to the CCM in the form of metadata annotations (service.beta.kubernetes.io/linode-loadbalancer-throttle in the example above), which are discussed in the next section.
      4. Use the create command to create the Service, and in turn, the NodeBalancer:

        kubectl create -f nginx-service.yaml
        

      You can log in to the Linode Cloud Manager to view your newly created NodeBalancer.
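
      You can also watch for the NodeBalancer’s public IP address directly from kubectl; the Service’s EXTERNAL-IP column changes from pending to the NodeBalancer’s address once provisioning completes:

      kubectl get service nginx-service --watch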

      Annotations

      There are a number of settings, called annotations, that you can use to further customize the functionality of your NodeBalancer. Each annotation should be included in the annotations section of the Service manifest file’s metadata, and all of the annotations are prefixed with service.beta.kubernetes.io/linode-loadbalancer-.

      Each annotation below is listed by its suffix, with its accepted values, default value and purpose:

      • throttle: Accepts 0 to 20 (0 disables the throttle); default 20. Client Connection Throttle. This limits the number of new connections per second from the same client IP.
      • protocol: Accepts tcp, http or https; default tcp. Specifies the protocol for the NodeBalancer.
      • tls: Example value: [ { "tls-secret-name": "prod-app-tls", "port": 443 } ]; no default. A JSON array (formatted as a string) that specifies which ports use TLS and their corresponding secrets. The secret type should be kubernetes.io/tls. For more information, see the TLS Encryption section.
      • check-type: Accepts none, connection, http or http_body; no default. The type of health check to perform on Nodes to ensure that they are serving requests. connection checks for a valid TCP handshake, http checks for a 2xx or 3xx response code, and http_body checks for a certain string within the response body of the health check URL.
      • check-path: Accepts a string; no default. The URL path that the NodeBalancer will use to check on the health of the back-end Nodes.
      • check-body: Accepts a string; no default. The text that must be present in the body of the page used for health checks. For use with a check-type of http_body.
      • check-interval: Accepts an integer; no default. The duration, in seconds, between health checks.
      • check-timeout: Accepts an integer between 1 and 30; no default. Duration, in seconds, to wait for a health check to succeed before it is considered a failure.
      • check-attempts: Accepts an integer between 1 and 30; no default. Number of health checks to perform before removing a back-end Node from service.
      • check-passive: Accepts a boolean; default false. When true, 5xx status codes will cause the health check to fail.

      To learn more about checks, please see our reference guide to NodeBalancer health checks.
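
      For example, a Service that should receive traffic only while the back end answers on a health endpoint might combine the check annotations like this (the /healthz path is a hypothetical stand-in for your application’s own health URL):

      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-check-type: "http"
          service.beta.kubernetes.io/linode-loadbalancer-check-path: "/healthz"
          service.beta.kubernetes.io/linode-loadbalancer-check-interval: "5"
          service.beta.kubernetes.io/linode-loadbalancer-check-attempts: "3"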

      TLS Encryption

      This section will describe how to set up TLS termination for a Service so that the Service can be accessed over https.

      Generating a TLS type Secret

      Kubernetes allows you to store secret information in a Secret object for use within your cluster. This is useful for storing things like passwords and API tokens. In the context of the Linode CCM, Secrets are useful for storing Transport Layer Security (TLS) certificates and keys. The linode-loadbalancer-tls annotation requires TLS certificates and keys to be stored as Kubernetes Secrets with the type of tls. Follow the next steps to create a valid tls type Secret:

      1. Generate a TLS key and certificate using a TLS toolkit like OpenSSL. Be sure to change the CN and O values to those of your own website domain.

        openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout key.pem -out cert.crt -subj "/CN=mywebsite.com/O=mywebsite.com"
        
      2. To create the secret, you can issue the create secret tls command, being sure to replace $SECRET_NAME with the name you’d like to give your secret. This will be how you reference the secret in your Service manifest.

        kubectl create secret tls $SECRET_NAME --key key.pem --cert cert.crt
        
      3. You can check to make sure your Secret has been successfully stored by using describe:

        kubectl describe secret $SECRET_NAME
        

        You should see output like the following:

          
        kubectl describe secret my-secret
        Name:         my-secret
        Namespace:    default
        Labels:       <none>
        Annotations:  <none>
        
        Type:  kubernetes.io/tls
        
        Data
        ====
        tls.crt:  1164 bytes
        tls.key:  1704 bytes
        
        

        If your key is not formatted correctly you’ll receive an error stating that there is no PEM formatted data within the key file.

      Defining TLS within a Service

      In order to use https you’ll need to instruct the Service to use the correct port through the proper annotations. Take the following code snippet as an example:

      nginx-service.yaml
      ...
      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-protocol: https
          service.beta.kubernetes.io/linode-loadbalancer-tls: '[ { "tls-secret-name": "my-secret",
            "port": 443 } ]'
      ...

      The linode-loadbalancer-protocol annotation identifies the https protocol. Then, the linode-loadbalancer-tls annotation defines which Secret and port to use for serving https traffic. If you have multiple Secrets and ports for different environments (testing, staging, etc.), you can define more than one secret and port pair:

      nginx-service-two-environments.yaml
      ...
          service.beta.kubernetes.io/linode-loadbalancer-tls: |
            [ { "tls-secret-name": "my-secret", "port": 443 }, { "tls-secret-name": "my-secret-staging", "port": 8443 } ]
      ...

      Next, you’ll need to set up your Service to expose the https port. The whole example might look like the following:

      nginx-service.yaml
      apiVersion: v1
      kind: Service
      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-protocol: https
          service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
          service.beta.kubernetes.io/linode-loadbalancer-tls: '[ { "tls-secret-name": "my-secret",
            "port": 443 } ]'
        labels:
          app: nginx
        name: nginx-service
      spec:
        ports:
        - name: https
          port: 443
          protocol: TCP
          targetPort: 80
        selector:
          app: nginx
        type: LoadBalancer

      Note that here the NodeBalancer created by the Service is terminating the TLS encryption and proxying that to port 80 on the NGINX Pod. If you had a Pod that listened on port 443, you would set the targetPort to that value.

      Session Affinity

      kube-proxy will always attempt to proxy traffic to a random backend Pod. To ensure that traffic is directed to the same Pod, you can use the sessionAffinity mechanism. When set to clientIP, sessionAffinity will ensure that all traffic from the same IP will be directed to the same Pod:

      session-affinity.yaml
      apiVersion: v1
      kind: Service
      metadata:
        name: nginx-service
        labels:
          app: nginx
      spec:
        type: LoadBalancer
        selector:
          app: nginx
        sessionAffinity: ClientIP
        sessionAffinityConfig:
          clientIP:
            timeoutSeconds: 100

      You can set the timeout for the session by using the spec.sessionAffinityConfig.clientIP.timeoutSeconds field.

      Troubleshooting

      If you are having problems with the CCM, such as the NodeBalancer not being created, you can check the CCM’s error logs. First, you’ll need to find the name of the CCM Pod in the kube-system namespace:

      kubectl get pods -n kube-system
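
      To pick the CCM Pod out of that listing directly, a simple filter works as well:

      kubectl get pods -n kube-system | grep ccm-linode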
      

      The Pod will be named ccm-linode- with five random characters at the end, like ccm-linode-jrvj2. Once you have the Pod name, you can view its logs. The --tail=n flag is used to return the last n lines, where n is the number of your choosing. The below example returns the last 100 lines:

      kubectl logs ccm-linode-jrvj2 -n kube-system --tail=100
      

      Note

      Currently the CCM only supports https ports within a manifest’s spec when the linode-loadbalancer-protocol is set to https. For regular http traffic, you’ll need to create an additional Service and NodeBalancer. For example, if you had the following in the Service manifest:

      unsupported-nginx-service.yaml
      ...
      spec:
        ports:
        - name: https
          port: 443
          protocol: TCP
          targetPort: 80
        - name: http
          port: 80
          protocol: TCP
          targetPort: 80
      ...

      The NodeBalancer would not be created and you would find an error similar to the following in your logs:

      ERROR: logging before flag.Parse: E0708 16:57:19.999318       1 service_controller.go:219] error processing service default/nginx-service (will retry): failed to ensure load balancer for service default/nginx-service: [400] [configs[0].protocol] The SSL private key and SSL certificate must be provided when using 'https'
      ERROR: logging before flag.Parse: I0708 16:57:19.999466       1 event.go:221] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx-service", UID:"5d1afc22-a1a1-11e9-ad5d-f23c919aa99b", APIVersion:"v1", ResourceVersion:"1248179", FieldPath:""}): type: 'Warning' reason: 'CreatingLoadBalancerFailed' Error creating load balancer (will retry): failed to ensure load balancer for service default/nginx-service: [400] [configs[0].protocol] The SSL private key and SSL certificate must be provided when using 'https'
      

      Removing the http port would allow you to create the NodeBalancer.

      Delete a NodeBalancer

      To delete a NodeBalancer and the Service that it represents, you can use the Service manifest file you used to create the NodeBalancer. Simply use the delete command and supply your file name with the -f flag:

      kubectl delete -f nginx-service.yaml
      

      Similarly, you can delete the Service by name:

      kubectl delete service nginx-service
      

      Updating the CCM

      The easiest way to update the Linode CCM is to edit the DaemonSet that creates the Linode CCM Pod. To do so, you can run the edit command.

      kubectl edit ds -n kube-system ccm-linode
      

      The CCM Daemonset manifest will appear in vim. Press i to enter insert mode. Navigate to spec.template.spec.image and change the field’s value to the desired version tag. For instance, if you had the following image:

      image: linode/linode-cloud-controller-manager:v0.2.2
      

      You could update the image to v0.2.3 by changing the image tag:

      image: linode/linode-cloud-controller-manager:v0.2.3
      

      For a complete list of CCM version tags, visit the CCM DockerHub page.

      Caution

      The CCM Daemonset manifest may list latest as the image version tag. This may or may not be pointed at the latest version. To ensure the latest version, it is recommended to first check the CCM DockerHub page, then use the most recent release.

      Press escape to exit insert mode, then type :wq and press enter to save your changes. A new Pod will be created with the new image, and the old Pod will be deleted.
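
      If you’d rather avoid the interactive editor, kubectl set image performs the same tag change in one step. This assumes the container inside the DaemonSet is also named ccm-linode; confirm the container name in the manifest first:

      kubectl set image -n kube-system daemonset/ccm-linode ccm-linode=linode/linode-cloud-controller-manager:v0.2.3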

      Next Steps

      To further take advantage of Linode products through Kubernetes, check out our guide on how to use the Linode Container Storage Interface (CSI), which allows you to create persistent volumes backed by Linode Block Storage.


      This guide is published under a CC BY-ND 4.0 license.


