
      How To Set Up a Jupyter Notebook with Python 3 on Debian 10


      Introduction

      Jupyter Notebook offers a command shell for interactive computing as a web application so that you can share and communicate code. The tool can be used with several languages, including Python, Julia, R, Haskell, and Ruby. It is often used for working with data, statistical modeling, and machine learning.

      This tutorial will walk you through setting up Jupyter Notebook to run from a Debian 10 server, as well as teach you how to connect to and use the Notebook. Jupyter Notebooks (or just “Notebooks”) are documents produced by the Jupyter Notebook app which contain both computer code and rich text elements (paragraphs, equations, figures, links, etc.) that aid in presenting and sharing reproducible research.

      By the end of this guide, you will be able to run Python 3 code using Jupyter Notebook running on a remote Debian 10 server.

      Prerequisites

      In order to complete this guide, you should have a fresh Debian 10 server instance with a basic firewall and a non-root user with sudo privileges configured. You can learn how to set this up by running through our Initial Server Setup with Debian 10 guide.

      Step 1 — Install Pip and Python Headers

      To begin the process, we’ll download and install all of the items we need from the Debian repositories. We will use the Python package manager pip to install additional components a bit later.

      We first need to update the local apt package index:

      • sudo apt update

      Next, install pip and the Python header files, which are used by some of Jupyter’s dependencies:

      • sudo apt install python3-pip python3-dev

      Debian 10 (“Buster”) comes preinstalled with Python 3.7.
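
      You can confirm the version that is available by checking it directly:

      • python3 --version

      The output should report a 3.7.x release.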

      We can now move on to setting up a Python virtual environment into which we’ll install Jupyter.

      Step 2 — Create a Python Virtual Environment for Jupyter

      Now that we have Python 3, its header files, and pip ready to go, we can create a Python virtual environment for easier management. We will install Jupyter into this virtual environment.

      To do this, we first need access to the virtualenv command. We can install this with pip.

      Upgrade pip and install the package by typing:

      • sudo -H pip3 install --upgrade pip
      • sudo -H pip3 install virtualenv

      With virtualenv installed, we can start forming our environment. Create and move into a directory where we can keep our project files:

      • mkdir ~/myprojectdir
      • cd ~/myprojectdir

      Within the project directory, create a Python virtual environment by typing:

      • virtualenv myprojectenv

      This will create a directory called myprojectenv within your myprojectdir directory. Inside, it will install a local version of Python and a local version of pip. We can use this to install and configure an isolated Python environment for Jupyter.

      Before we install Jupyter, we need to activate the virtual environment. You can do that by typing:

      • source myprojectenv/bin/activate

      Your prompt should change to indicate that you are now operating within a Python virtual environment. It will look something like this: (myprojectenv)user@host:~/myprojectdir$.

      You’re now ready to install Jupyter into this virtual environment.

      Step 3 — Install Jupyter

      With your virtual environment active, install Jupyter with the local instance of pip:

      • pip install jupyter

      Note: When the virtual environment is activated (when your prompt has (myprojectenv) preceding it), use pip instead of pip3, even if you are using Python 3. The virtual environment’s copy of the tool is always named pip, regardless of the Python version.
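
      You can confirm that the virtual environment’s copy of pip is the one in use by checking its path; it should point inside ~/myprojectdir/myprojectenv/bin:

      • which pip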

      At this point, you’ve successfully installed all the software needed to run Jupyter. We can now start the Notebook server.

      Step 4 — Run Jupyter Notebook

      You now have everything you need to run Jupyter Notebook! To run it, execute the following command:

      • jupyter notebook

      A log of the activities of the Jupyter Notebook will be printed to the terminal. When you run Jupyter Notebook, it runs on a specific port number. The first Notebook you run will usually use port 8888. To check the specific port number Jupyter Notebook is running on, refer to the output of the command used to start it:

      Output

      [I 21:23:21.198 NotebookApp] Writing notebook server cookie secret to /run/user/1001/jupyter/notebook_cookie_secret
      [I 21:23:21.361 NotebookApp] Serving notebooks from local directory: /home/sammy/myprojectdir
      [I 21:23:21.361 NotebookApp] The Jupyter Notebook is running at:
      [I 21:23:21.361 NotebookApp] http://localhost:8888/?token=1fefa6ab49a498a3f37c959404f7baf16b9a2eda3eaa6d72
      [I 21:23:21.361 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
      [W 21:23:21.361 NotebookApp] No web browser found: could not locate runnable browser.
      [C 21:23:21.361 NotebookApp] Copy/paste this URL into your browser when you connect for the first time, to login with a token:
          http://localhost:8888/?token=1fefa6ab49a498a3f37c959404f7baf16b9a2eda3eaa6d72

      If you are running Jupyter Notebook on a local Debian computer (not on a Droplet), you can simply navigate to the displayed URL to connect to Jupyter Notebook. If you are running Jupyter Notebook on a Droplet, you will need to connect to the server using SSH tunneling as outlined in the next section.

      At this point, you can keep the SSH connection open and keep Jupyter Notebook running, or you can exit the app and re-run it once you set up SSH tunneling. Let’s keep it simple and stop the Jupyter Notebook process. We will run it again once we have SSH tunneling working. To stop the Jupyter Notebook process, press CTRL+C, type Y, and hit ENTER to confirm. The following will be displayed:

      Output

      [C 21:28:28.512 NotebookApp] Shutdown confirmed
      [I 21:28:28.512 NotebookApp] Shutting down 0 kernels

      We’ll now set up an SSH tunnel so that we can access the Notebook.

      Step 5 — Connect to the Server Using SSH Tunneling

      In this section we will learn how to connect to the Jupyter Notebook web interface using SSH tunneling. Since Jupyter Notebook will run on a specific port on the server (such as :8888, :8889 etc.), SSH tunneling enables you to connect to the server’s port securely.

      The next two subsections describe how to create an SSH tunnel from 1) a Mac or Linux and 2) Windows. Please refer to the subsection for your local computer.

      SSH Tunneling with a Mac or Linux

      If you are using a Mac or Linux, the steps for creating an SSH tunnel are similar to using SSH to log in to your remote server, except that there are additional parameters in the ssh command. This subsection will outline the additional parameters needed in the ssh command to tunnel successfully.

      SSH tunneling can be done by running the following SSH command in a new local terminal window:

      • ssh -L 8888:localhost:8888 your_server_username@your_server_ip

      The ssh command opens an SSH connection, but -L specifies that the given port on the local (client) host is to be forwarded to the given host and port on the remote side (server). This means that whatever is running on the second port number (e.g. 8888) on the server will appear on the first port number (e.g. 8888) on your local computer.

      Optionally change port 8888 to one of your choosing to avoid using a port already in use by another process.
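
      For example, to forward local port 8889 instead, the command would be (a sketch; substitute your own username and server IP):

      • ssh -L 8889:localhost:8888 your_server_username@your_server_ip

      You would then connect to the Notebook at http://localhost:8889 rather than http://localhost:8888.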

      your_server_username is your username (e.g. sammy) on the server you created, and your_server_ip is the IP address of your server.

      For example, for the username sammy and the server address 203.0.113.0, the command would be:

      • ssh -L 8888:localhost:8888 sammy@203.0.113.0

      If no error shows up after running the ssh -L command, you can move into your programming environment and run Jupyter Notebook:

      • jupyter notebook

      You’ll receive output with a URL. From a web browser on your local machine, open the Jupyter Notebook web interface with the URL that starts with http://localhost:8888. Ensure that the token number is included, or enter the token number string when prompted at http://localhost:8888.

      SSH Tunneling with Windows and PuTTY

      If you are using Windows, you can create an SSH tunnel using PuTTY.

      First, enter the server URL or IP address as the hostname as shown:

      Set Hostname for SSH Tunnel

      Next, click SSH on the bottom of the left pane to expand the menu, and then click Tunnels. Enter the local port number to use to access Jupyter on your local machine. Choose 8000 or greater to avoid ports used by other services, and set the destination as localhost:8888 where :8888 is the number of the port that Jupyter Notebook is running on.

      Now click the Add button, and the ports should appear in the Forwarded ports list:

      Forwarded ports list

      Finally, click the Open button to connect to the server via SSH and tunnel the desired ports. Navigate to http://localhost:8000 (or whatever port you chose) in a web browser to connect to Jupyter Notebook running on the server. Ensure that the token number is included, or enter the token number string when prompted at http://localhost:8000.

      Step 6 — Using Jupyter Notebook

      This section goes over the basics of using Jupyter Notebook. If you don’t currently have Jupyter Notebook running, start it with the jupyter notebook command.

      You should now be connected to it using a web browser. Jupyter Notebook is a very powerful tool with many features. This section will outline a few of the basic features to get you started using the Notebook. Jupyter Notebook will show all of the files and folders in the directory it is run from, so when you’re working on a project make sure to start it from the project directory.

      To create a new Notebook file, select New > Python 3 from the top right pull-down menu:

      Create a new Python 3 notebook

      This will open a Notebook. We can now run Python code in the cell or change the cell to Markdown. For example, change the first cell to accept Markdown by clicking Cell > Cell Type > Markdown from the top navigation bar. We can now write notes using Markdown and even include equations written in LaTeX by placing them between the $$ symbols. For example, type the following into the cell after changing it to Markdown:

      # First Equation
      
      Let us now implement the following equation:
      $$ y = x^2$$
      
      where $x = 2$
      

      To turn the Markdown into rich text, press CTRL+ENTER. The result should look like the following:

      results of markdown

      You can use the markdown cells to make notes and document your code. Let’s implement that equation and print the result. Click on the top cell, then press ALT+ENTER to add a cell below it. Enter the following code in the new cell.

      x = 2
      y = x**2
      print(y)
      

      To run the code, press CTRL+ENTER. You’ll receive the following results:

      first equation results

      You now have the ability to import modules and use the Notebook as you would with any other Python development environment!
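
      For instance, a new cell could compute the same result using Python’s standard library (a minimal sketch using the math module; math.pow returns a float, so this prints 4.0):

      import math
      
      x = 2
      y = math.pow(x, 2)
      print(y)
      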

      Conclusion

      At this point, you should be able to write reproducible Python code and notes in Markdown using Jupyter Notebook. To get a quick tour of Jupyter Notebook from within the interface, select Help > User Interface Tour from the top navigation menu to learn more.

      From here, you can begin a data analysis and visualization project by reading Data Analysis and Visualization with pandas and Jupyter Notebook in Python 3.




      The Flagship Series: INAP Atlanta Data Center Market Overview


      Atlanta is the Southeast’s leading economic hub. It may be the ninth-largest metro area in the U.S., but it ranks in the top five markets for bandwidth access, in the top three for headquarters of Fortune 500 companies, and is home to the world’s busiest airport[i]. It should be no surprise that Atlanta is also a popular and growing destination for data centers.

      There’s plenty about Atlanta that is attractive to businesses looking for data center space: low power costs, a low risk of natural disasters and a pro-business climate. The major industries in Atlanta include technology, mobility and IoT, bioscience, supply chain and manufacturing.

      Low power and high connectivity are strong factors driving data center growth in this area. On average, the cost of power is under five cents per kilowatt-hour, a cost even more favorable than the 5.2 cents per kWh in Northern Virginia—the U.S.’s No. 1 data center market. Atlanta is on an Integrated Transmission System (ITS) for power, meaning all providers have access to the same grid, ensuring that transmission is efficient and reliable, according to Georgia Transmission Corp.

      As noted in the Newmark Knight Frank 2Q18 Atlanta Data Center Market Trends report, the City of Atlanta is invested in becoming a sustainable smart city. Plans include a Smart City Command Center at Georgia Tech’s High Performance Computing Center and a partnership with Google Fiber.

      Economic incentives also aid Atlanta’s data center market growth. According to Bisnow, legislators in Georgia passed House Bill 696, giving “data center operators a sales and use tax break on the expensive equipment installed in their facilities.” This incentive extends to 2028, and providers have taken notice. 451 Research noted that if all the providers who have declared plans to build follow through, Atlanta will overtake Dallas for the No. 3 spot for data center growth.

      Considering Atlanta for a colocation, network or cloud solution? There are several reasons why we’re confident you’ll call INAP your future partner in this growing market.

      The INAP Atlanta Data Center Difference

      With a choice of providers in this thriving metro, why choose INAP?

      Our Atlanta Data Centers—one located downtown and the other on the perimeter of the city—offer a reliable, high-performing backbone connection to Washington, D.C. and Dallas through our private fiber. INAP customers in these flagship data centers avoid single points of failure with our high-capacity metro network rings. Metro Connect provides multiple points of egress for traffic and is Performance IP® enabled. The metro rings are built on dark fiber and use state-of-the-art networking gear from Ciena.

      Atlanta Data Centers ACS Building
      Our downtown Atlanta flagship data center.

      These flagship data centers offer cages, cabinets and private suites, all housed in facilities designed with Tier 3 compliant attributes. For customers looking to manage large-scale deployments, Atlanta is a great fit: our private colocation suites give them the flexibility they need. And for customers looking to reduce their footprint, our high-density data centers allow them to fit more gear into a smaller space.

      To find a best-fit configuration within the data center, customers can work with INAP’s expert support technicians to adopt the optimal architecture specific to the needs of their applications. Engineers are onsite in the Atlanta data centers and are dedicated to keeping your infrastructure online, secure and always operating at peak efficiency.


      INAP’s extensive product portfolio supports businesses and allows customers to customize their infrastructure environments.

      At a glance, our Atlanta Data Centers feature:

      • Power: 8 MW of power capacity, 20+ kW per cabinet
      • Space: Over 200,000 square feet of leased space with 45,000 square feet of raised floor
      • Facilities: Tier 3 compliant attributes, located outside of flood plain and seismic zones
      • Energy Efficient Cooling: 4,205 tons of cooling capacity, N+1 with concurrent maintainability
      • Security: 24/7/365 onsite staff, video surveillance, key card and biometric authentication
      • Network: INAP Performance IP® mix, carrier-neutral connectivity, geographic redundancy
      • Compliance: PCI DSS, HIPAA, SOC 2 Type II, Green Globes and ENERGY STAR

      Download the Atlanta Data Center spec sheet here [PDF].

      INAP’s Content Delivery Network Performs with 18 Edge Locations

      Atlanta is one of INAP’s 18 high-capacity CDN edge locations. Our Content Delivery Network substantially improves end users’ online experience for our customers, regardless of their distance from the content’s origin server. We also have 100 global POPs to support this network.

      Content is delivered along the lowest-latency path, from the origin server, to the edge, to the user, using INAP’s proven and patented route-optimization technology. We also use GeoDNS technology, which identifies a user’s latitude and longitude and directs requests to the nearest INAP CDN cache, for seamless delivery.

      INAP’s CDN also gives customers control of all aspects of content caching at their CDN edges—automated purges, cache warming, cache bypass, connection limits, request disabling, URL token stripping and much more.

      Learn more about INAP’s CDN here.

      [i] Newmark Knight Frank, 2Q18 – Atlanta Data Center Market Trends


      Laura Vietmeyer






      Deploy NodeBalancers with the Linode Cloud Controller Manager


      Updated by Linode. Written by Linode Community.

      The Linode Cloud Controller Manager (CCM) allows Kubernetes to deploy Linode NodeBalancers whenever a Service of the “LoadBalancer” type is created. This provides the Kubernetes cluster with a reliable way of exposing resources to the public internet. The CCM handles the creation and deletion of the NodeBalancer and correctly identifies the resources, and their networking, that the NodeBalancer will service.

      This guide will explain how to:

      • Create a service with the type “LoadBalancer.”
      • Use annotations to control the functionality of the NodeBalancer.
      • Use the NodeBalancer to terminate TLS encryption.

      Caution

      Using the Linode Cloud Controller Manager to create NodeBalancers will create billable resources on your Linode account. A NodeBalancer costs $10 a month. Be sure to follow the instructions at the end of the guide if you would like to delete these resources from your account.

      Before You Begin

      You should have a working knowledge of Kubernetes and familiarity with the kubectl command line tool before attempting the instructions found in this guide. For more information about Kubernetes, consult our Kubernetes Beginner’s Guide and our Getting Started with Kubernetes guide.

      When using the CCM for the first time, it’s highly suggested that you create a new Kubernetes cluster, as there are a number of issues that prevent the CCM from running on Nodes that are already in the “Ready” state. For a completely automated install, you can use the Linode CLI’s k8s-alpha command line tool, which utilizes Terraform to fully bootstrap a Kubernetes cluster on Linode. It includes the Linode Container Storage Interface (CSI) Driver plugin, the Linode CCM plugin, and the ExternalDNS plugin. For more information on creating a Kubernetes cluster with the Linode CLI, review our How to Deploy Kubernetes on Linode with the k8s-alpha CLI guide.
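
      As a rough sketch, bootstrapping a small cluster with that tool might look like the following (the cluster name and the --nodes flag are illustrative assumptions; consult the k8s-alpha guide linked above for the exact options):

      linode-cli k8s-alpha create example-cluster --nodes=3
      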

      Note

      To manually add the Linode CCM to your cluster, you must start kubelet with the --cloud-provider=external flag. kube-apiserver and kube-controller-manager must NOT supply the --cloud-provider flag. For more information, visit the upstream Cloud Controller documentation.

      If you’d like to add the CCM to a cluster by hand, and you are using macOS, you can use the generate-manifest.sh file in the deploy folder of the CCM repository to generate a CCM manifest file that you can later apply to your cluster. Use the following command:

      ./generate-manifest.sh $LINODE_API_TOKEN us-east
      

      Be sure to replace $LINODE_API_TOKEN with a valid Linode API token, and replace us-east with the region of your choosing.

      To view a list of regions, you can use the Linode CLI, or you can view the Regions API endpoint.

      If you are not using macOS, you can copy the ccm-linode-template.yaml file and change the values of the data.apiToken and data.region fields manually.
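
      For instance, after editing, the relevant fields of ccm-linode-template.yaml would look something like this sketch (the values are placeholders; check the file itself to see whether it expects plain or base64-encoded strings):

      data:
        apiToken: "your-linode-api-token"
        region: "us-east"
      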

      Using the CCM

      To use the CCM, you must have a collection of Pods that need to be load balanced, usually from a Deployment. For this example, you will create a Deployment that deploys three NGINX Pods, and then create a Service to expose those Pods to the internet using the Linode CCM.

      1. Create a Deployment manifest describing the desired state of the three replica NGINX containers:

        nginx-deployment.yaml
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: nginx-deployment
          labels:
            app: nginx
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
              - name: nginx
                image: nginx
                ports:
                - containerPort: 80
      2. Use the create command to apply the manifest:

        kubectl create -f nginx-deployment.yaml
        
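        After a few moments, you can verify that the three Pods are running. The -l flag filters for the app: nginx label applied by the manifest:

        kubectl get pods -l app=nginx
        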
      3. Create a Service for the Deployment:

        nginx-service.yaml
        apiVersion: v1
        kind: Service
        metadata:
          name: nginx-service
          annotations:
            service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
          labels:
            app: nginx
        spec:
          type: LoadBalancer
          ports:
          - name: http
            port: 80
            protocol: TCP
            targetPort: 80
          selector:
            app: nginx
          sessionAffinity: None

        The above Service manifest includes a few key concepts.

        • The first is the spec.type of LoadBalancer. This LoadBalancer type is responsible for telling the Linode CCM to create a Linode NodeBalancer, and will provide the Deployment it services a public facing IP address with which to access the NGINX Pods.
        • There is additional information being passed to the CCM in the form of metadata annotations (service.beta.kubernetes.io/linode-loadbalancer-throttle in the example above), which are discussed in the next section.
      4. Use the create command to create the Service, and in turn, the NodeBalancer:

        kubectl create -f nginx-service.yaml
        

      You can log in to the Linode Cloud Manager to view your newly created NodeBalancer.
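
      You can also retrieve the NodeBalancer’s public IP address from the command line by inspecting the Service; once provisioning completes, it appears in the EXTERNAL-IP column:

      kubectl get service nginx-service
      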

      Annotations

      There are a number of settings, called annotations, that you can use to further customize the functionality of your NodeBalancer. Each annotation should be included in the annotations section of the Service manifest file’s metadata, and all of the annotations are prefixed with service.beta.kubernetes.io/linode-loadbalancer-.

      Each annotation suffix is listed below with its accepted values, its default, and a description:

      • throttle (values: 0-20, where 0 disables the throttle; default: 20): Client Connection Throttle. Limits the number of new connections per second from the same client IP.
      • protocol (values: tcp, http, https; default: tcp): Specifies the protocol for the NodeBalancer.
      • tls (example value: [ { "tls-secret-name": "prod-app-tls", "port": 443 } ]; default: none): A JSON array (formatted as a string) that specifies which ports use TLS and their corresponding secrets. The secret type should be kubernetes.io/tls. For more information, see the TLS Encryption section.
      • check-type (values: none, connection, http, http_body; default: none): The type of health check to perform on Nodes to ensure that they are serving requests. connection checks for a valid TCP handshake, http checks for a 2xx or 3xx response code, and http_body checks for a certain string within the response body of the health check URL.
      • check-path (value: string; default: none): The URL path that the NodeBalancer will use to check on the health of the back-end Nodes.
      • check-body (value: string; default: none): The text that must be present in the body of the page used for health checks. For use with a check-type of http_body.
      • check-interval (value: integer; default: none): The duration, in seconds, between health checks.
      • check-timeout (value: an integer between 1 and 30; default: none): Duration, in seconds, to wait for a health check to succeed before it is considered a failure.
      • check-attempts (value: an integer between 1 and 30; default: none): Number of health checks to perform before removing a back-end Node from service.
      • check-passive (value: boolean; default: false): When true, 5xx status codes will cause the health check to fail.

      To learn more about checks, please see our reference guide to NodeBalancer health checks.
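
      As an illustration, a Service that enables HTTP health checks might carry annotations like the following (the /healthz path and the interval are assumptions made for this sketch; use values appropriate to your application):

      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-check-type: "http"
          service.beta.kubernetes.io/linode-loadbalancer-check-path: "/healthz"
          service.beta.kubernetes.io/linode-loadbalancer-check-interval: "5"
      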

      TLS Encryption

      This section will describe how to set up TLS termination for a Service so that the Service can be accessed over https.

      Generating a TLS type Secret

      Kubernetes allows you to store secret information in a Secret object for use within your cluster. This is useful for storing things like passwords and API tokens. In the context of the Linode CCM, Secrets are useful for storing Transport Layer Security (TLS) certificates and keys. The linode-loadbalancer-tls annotation requires TLS certificates and keys to be stored as Kubernetes Secrets with the type of tls. Follow the next steps to create a valid tls type Secret:

      1. Generate a TLS key and certificate using a TLS toolkit like OpenSSL. Be sure to change the CN and O values to those of your own website domain.

        openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout key.pem -out cert.crt -subj "/CN=mywebsite.com/O=mywebsite.com"
        
      2. To create the secret, you can issue the create secret tls command, being sure to substitute $SECRET_NAME for the name you’d like to give to your secret. This will be how you reference the secret in your Service manifest.

        kubectl create secret tls $SECRET_NAME --key key.pem --cert cert.crt
        
      3. You can check to make sure your Secret has been successfully stored by using describe:

        kubectl describe secret $SECRET_NAME
        

        You should see output like the following:

          
        kubectl describe secret my-secret
        Name:         my-secret
        Namespace:    default
        Labels:       <none>
        Annotations:  <none>
        
        Type:  kubernetes.io/tls
        
        Data
        ====
        tls.crt:  1164 bytes
        tls.key:  1704 bytes
        
        

        If your key is not formatted correctly, you’ll receive an error stating that there is no PEM-formatted data within the key file.
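
        If you want to sanity-check the certificate and key before creating the Secret, OpenSSL can inspect both (a quick verification sketch):

        openssl x509 -in cert.crt -noout -subject
        openssl rsa -in key.pem -check -noout
        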

      Defining TLS within a Service

      In order to use https you’ll need to instruct the Service to use the correct port through the proper annotations. Take the following code snippet as an example:

      nginx-service.yaml
      ...
      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-protocol: https
          service.beta.kubernetes.io/linode-loadbalancer-tls: '[ { "tls-secret-name": "my-secret",
            "port": 443 } ]'
      ...

      The linode-loadbalancer-protocol annotation identifies the https protocol. Then, the linode-loadbalancer-tls annotation defines which Secret and port to use for serving https traffic. If you have multiple Secrets and ports for different environments (testing, staging, etc.), you can define more than one secret and port pair:

      nginx-service-two-environments.yaml
      ...
          service.beta.kubernetes.io/linode-loadbalancer-tls: |
            [ { "tls-secret-name": "my-secret", "port": 443 }, { "tls-secret-name": "my-secret-staging", "port": 8443 } ]
      ...

      Next, you’ll need to set up your Service to expose the https port. The whole example might look like the following:

      nginx-service.yaml
      apiVersion: v1
      kind: Service
      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-protocol: https
          service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
          service.beta.kubernetes.io/linode-loadbalancer-tls: '[ { "tls-secret-name": "my-secret",
            "port": 443 } ]'
        labels:
          app: nginx
        name: nginx-service
      spec:
        ports:
        - name: https
          port: 443
          protocol: TCP
          targetPort: 80
        selector:
          app: nginx
        type: LoadBalancer

      Note that here the NodeBalancer created by the Service is terminating the TLS encryption and proxying that to port 80 on the NGINX Pod. If you had a Pod that listened on port 443, you would set the targetPort to that value.
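
      In that case, only the targetPort changes; the rest of the ports entry stays the same:

      ports:
      - name: https
        port: 443
        protocol: TCP
        targetPort: 443
      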

      Session Affinity

      kube-proxy will always attempt to proxy traffic to a random backend Pod. To ensure that traffic is directed to the same Pod, you can use the sessionAffinity mechanism. When set to ClientIP, sessionAffinity will ensure that all traffic from the same IP will be directed to the same Pod:

      session-affinity.yaml
      apiVersion: v1
      kind: Service
      metadata:
        name: nginx-service
        labels:
          app: nginx
      spec:
        type: LoadBalancer
        selector:
          app: nginx
        sessionAffinity: ClientIP
        sessionAffinityConfig:
          clientIP:
            timeoutSeconds: 100

      You can set the timeout for the session by using the spec.sessionAffinityConfig.clientIP.timeoutSeconds field.

      Troubleshooting

      If you are having problems with the CCM, such as the NodeBalancer not being created, you can check the CCM’s error logs. First, you’ll need to find the name of the CCM Pod in the kube-system namespace:

      kubectl get pods -n kube-system
      

      The Pod will be named ccm-linode- with five random characters at the end, like ccm-linode-jrvj2. Once you have the Pod name, you can view its logs. The --tail=n flag is used to return the last n lines, where n is the number of your choosing. The below example returns the last 100 lines:

      kubectl logs ccm-linode-jrvj2 -n kube-system --tail=100
      
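      Since these logs can be noisy, you might filter them for errors (a simple shell sketch reusing the example Pod name from above):

      kubectl logs ccm-linode-jrvj2 -n kube-system --tail=100 | grep -i error
      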

      Note

      Currently the CCM only supports https ports within a manifest’s spec when the linode-loadbalancer-protocol is set to https. For regular http traffic, you’ll need to create an additional Service and NodeBalancer. For example, if you had the following in the Service manifest:

      unsupported-nginx-service.yaml
      ...
      spec:
        ports:
        - name: https
          port: 443
          protocol: TCP
          targetPort: 80
        - name: http
          port: 80
          protocol: TCP
          targetPort: 80
      ...

      The NodeBalancer would not be created and you would find an error similar to the following in your logs:

      ERROR: logging before flag.Parse: E0708 16:57:19.999318       1 service_controller.go:219] error processing service default/nginx-service (will retry): failed to ensure load balancer for service default/nginx-service: [400] [configs[0].protocol] The SSL private key and SSL certificate must be provided when using 'https'
      ERROR: logging before flag.Parse: I0708 16:57:19.999466       1 event.go:221] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx-service", UID:"5d1afc22-a1a1-11e9-ad5d-f23c919aa99b", APIVersion:"v1", ResourceVersion:"1248179", FieldPath:""}): type: 'Warning' reason: 'CreatingLoadBalancerFailed' Error creating load balancer (will retry): failed to ensure load balancer for service default/nginx-service: [400] [configs[0].protocol] The SSL private key and SSL certificate must be provided when using 'https'
      

      Removing the http port would allow you to create the NodeBalancer.

      Delete a NodeBalancer

      To delete a NodeBalancer and the Service that it represents, you can use the Service manifest file you used to create the NodeBalancer. Simply use the delete command and supply your file name with the -f flag:

      kubectl delete -f nginx-service.yaml
      

      Similarly, you can delete the Service by name:

      kubectl delete service nginx-service
      

      Updating the CCM

      The easiest way to update the Linode CCM is to edit the DaemonSet that creates the Linode CCM Pod. To do so, you can run the edit command.

      kubectl edit ds -n kube-system ccm-linode
      

      The CCM DaemonSet manifest will open in your default editor (typically vim). Press i to enter insert mode. Navigate to the container’s image field (under spec.template.spec.containers) and change its value to the desired version tag. For instance, if you had the following image:

      image: linode/linode-cloud-controller-manager:v0.2.2
      

      You could update the image to v0.2.3 by changing the image tag:

      image: linode/linode-cloud-controller-manager:v0.2.3
      

      For a complete list of CCM version tags, visit the CCM DockerHub page.

      Caution

      The CCM DaemonSet manifest may list latest as the image version tag, but this tag does not necessarily point to the most recent release. To be sure, first check the CCM DockerHub page, then pin the most recent release explicitly.

      Press escape to exit insert mode, then type :wq and press enter to save your changes. A new Pod will be created with the new image, and the old Pod will be deleted.
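
      Alternatively, you can update the image without opening an editor by using kubectl set image. The container name ccm-linode used here is an assumption; confirm it by inspecting the DaemonSet with kubectl get ds -n kube-system ccm-linode -o yaml:

      kubectl set image ds/ccm-linode -n kube-system ccm-linode=linode/linode-cloud-controller-manager:v0.2.3
      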

      Next Steps

      To further take advantage of Linode products through Kubernetes, check out our guide on how to use the Linode Container Storage Interface (CSI), which allows you to create persistent volumes backed by Linode Block Storage.


      This guide is published under a CC BY-ND 4.0 license.


