
      How to Deploy a New Linode Using a StackScript


      Updated by Linode

      Written by Linode

      What are StackScripts?

      StackScripts provide Linode users with the ability to automate the deployment of custom systems on top of Linode’s default Linux distribution images. For example, every time you deploy a new Linode you might execute the same tasks, like updating your system’s software, installing your favorite Linux tools, and adding a limited user account. These tasks can be automated using a StackScript that will perform these actions for you as part of your Linode’s first boot process.
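      For illustration only, the sketch below shows what such a StackScript might look like: a hypothetical Bash script that updates the system, installs a couple of common tools, and adds a limited user account on a Debian or Ubuntu image. The UDF comment tags declare the options (here, USERNAME and USER_PASSWORD) that the Cloud Manager prompts you for at deployment time; these names and the package choices are examples, not part of this guide.

        #!/bin/bash
        # Hypothetical example StackScript; adjust it to your own needs.
        # <UDF name="USERNAME" label="Limited sudo user to create" />
        # <UDF name="USER_PASSWORD" label="Password for the limited user" />

        # Update the system's software (assumes a Debian or Ubuntu image).
        apt-get update && apt-get upgrade -y

        # Install a few favorite tools.
        apt-get install -y htop vim

        # Add a limited user account with sudo access.
        useradd -m -G sudo -s /bin/bash "$USERNAME"
        echo "$USERNAME:$USER_PASSWORD" | chpasswd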

      All StackScripts are stored in the Linode Cloud Manager and can be accessed whenever you deploy a Linode. A StackScript authored by you is an Account StackScript, while a Community StackScript is one that a member of the Linode community has created and made publicly available in the Linode Cloud Manager.

      In this Guide

      This guide will show you how to do the following:

      • Deploy a Linode from an Account StackScript
      • Access an Account StackScript
      • Deploy a Linode from a Community StackScript
      • Access a Community StackScript

      Account StackScripts

      An Account StackScript is any StackScript that you create. It will be stored in the Linode Cloud Manager where you can access it to use when deploying a new Linode. By default, your Account StackScripts are only visible on your account and can only be used by you to deploy a new Linode.

      This section will show you how to deploy a new Linode using an Account StackScript and how to access your Account StackScripts.


      Deploy a Linode from an Account StackScript

      1. Log into the Linode Cloud Manager.

      2. Click on the Create button at the top of the Linode Cloud Manager and select Linode. This will take you to the Linodes Create page.

        Select Linode from the Create menu.

      3. Click on the My Images tab to access the different sources from which you can create a Linode.

      4. Viewing the My Images page, click on the Account StackScripts tab. On this page you will be able to see a list of all of your account’s StackScripts.

        Access the Account StackScripts tab.

      5. From the list, select the StackScript you would like to use to deploy your new Linode instance.



        View this StackScript’s Details

        To view the details of a StackScript prior to using it to deploy your Linode, click on its Show Details link.

        View details about this StackScript

      6. In the StackScript Options section, fill in values for your StackScript’s Options. Not all StackScripts are written to accept option values, so your StackScript might not present this section.


      7. From the Select an Image dropdown menu, select the Linux distribution to use. This list will be limited to the distributions your StackScript supports.

      8. Choose the region where you would like your Linode to reside. If you’re not sure which to select, see our How to Choose a Data Center guide. You can also generate MTR reports for a deeper look at the route path between you and a data center in each specific region.

      9. Select a Linode plan.

      10. Give your Linode a label. This is a name to help you easily identify it within the Cloud Manager’s Dashboard. If desired, assign a tag to the Linode in the Add Tags field.

      11. Create a root password for your Linode in the Root Password field. This password must be provided when you log in to your Linode via SSH. It must be at least 6 characters long and contain characters from two of the following categories:

        • lowercase and uppercase letters
        • numbers
        • punctuation characters
      12. Click Create. You will be directed back to your new Linode’s Summary page which will report the status of your Linode as it boots up.
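      If you prefer to script this process, the same deployment can also be performed with the Linode CLI. The following is only a sketch: it assumes the CLI is installed and configured, and the StackScript ID (123456), option values, label, plan, region, image, and passwords are placeholders to replace with your own.

        linode-cli linodes create \
          --label stackscript-example \
          --region us-east \
          --type g6-nanode-1 \
          --image linode/debian10 \
          --stackscript_id 123456 \
          --stackscript_data '{"USERNAME": "example_user", "USER_PASSWORD": "examplePassword123!"}' \
          --root_pass 'exampleRootPassword123!'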

      Access an Account StackScript

      1. Log into the Linode Cloud Manager.

      2. Click on the StackScripts link in the left-hand navigation menu. You will be brought to the StackScripts page.

        Click on the StackScripts link in the left-hand navigation menu.

      3. Viewing the Account StackScripts tab, you will see a list of all of your account’s StackScripts.

      4. To view the details and contents of an Account StackScript, click on the StackScript you would like to view to access its StackScript detail page.

        View the details and contents of an Account StackScript.

      5. If you would like to deploy a new Linode from the Account StackScript you are viewing, click on Deploy New Linode at the top of the StackScript detail page.

        Deploy a new Linode from your Account StackScript.

        You will be brought to the Linodes Create page which will have your Account StackScript selected. Continue to provide the rest of the required configurations to create your Linode. See step 6 in the Deploy a Linode from an Account StackScript section for details on the remaining configurations.
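      If you use the Linode CLI, you can also list StackScripts from the command line. This is a hedged sketch: it assumes the CLI is installed and configured, and the --is_public filter flag is an assumption based on the API's is_public field, used here to narrow the output to your Account StackScripts.

        linode-cli stackscripts list --is_public false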

      Community StackScripts

      Community StackScripts are any scripts that have been created by a Linode community member and are publicly available via the Linode Cloud Manager. You can deploy a new Linode using any Community StackScript.

      This section will show you how to deploy a new Linode using a Community StackScript and how to access the contents of a Community StackScript.

      Note

      Linode does not verify the accuracy of any Linode Community member submitted StackScripts. Prior to deploying a Linode using a Community StackScript, you should ensure you understand what the script will execute on your Linode.

      Deploy a Linode from a Community StackScript

      1. Log in to your Linode Cloud Manager account.

      2. At the top of the page, click Create and select Linode.

        Select Linode from the Create menu.

      3. Click on the One-Click tab to access the Create From options.

      4. Viewing the Create From: options, click on the Community StackScripts tab. On this page, you will see a list of all available Community StackScripts.

        View a list of all available Community StackScripts

        You can scroll through the list of StackScripts or you can use the Search field to locate the Community StackScript you’d like to use. You can search by StackScript username, label or description. For example, to search for a Community StackScript by username, you can enter username:LinodeApps into the search field.

      5. From the list, select the Community StackScript you would like to use to deploy your new Linode instance.



        View this Community StackScript’s Details

        To view the details of a StackScript prior to using it to deploy your Linode, click on its Show Details link.

        View details about this StackScript

      6. In the StackScript Options section, fill in values for your StackScript’s Options. Not all StackScripts are written to accept option values, so your StackScript might not present this section.


      7. From the Select an Image dropdown menu, select the Linux distribution to use. This list will be limited to the distributions your StackScript supports.

      8. Choose the region where you would like your Linode to reside. If you’re not sure which to select, see our How to Choose a Data Center guide. You can also generate MTR reports for a deeper look at the route path between you and a data center in each specific region.

      9. Select a Linode plan.

      10. Give your Linode a label. This is a name to help you easily identify it within the Cloud Manager’s Dashboard. If desired, assign a tag to the Linode in the Add Tags field.

      11. Create a root password for your Linode in the Root Password field. This password must be provided when you log in to your Linode via SSH. It must be at least 6 characters long and contain characters from two of the following categories:

        • lowercase and uppercase letters
        • numbers
        • punctuation characters
      12. Click Create. You will be directed back to your new Linode’s Summary page which will report the status of your Linode as it boots up.

      Access a Community StackScript

      1. Log into the Linode Cloud Manager.

      2. Click on the StackScripts link in the left-hand navigation menu. You will be brought to the StackScripts page.

        Click on the StackScripts link in the left-hand navigation menu.

      3. Click on the Community StackScripts tab. You will see a list of all available Community StackScripts.

        List all Community StackScripts.

        You can scroll through the list of StackScripts or you can use the Search field to locate the Community StackScript you’d like to use. You can search by StackScript username, label or description. For example, to search for a Community StackScript by username, you can enter username:LinodeApps into the search field.

      4. To view the details and contents of a Community StackScript, click on the StackScript you would like to view to access its StackScript detail page.

        View the details of a Community StackScript.

      5. If you would like to deploy a new Linode from the Community StackScript you are viewing, click on Deploy New Linode at the top of the StackScript detail page.

        Deploy a new Linode using this Community StackScript.

        You will be brought to the Linodes Create page which will have your Community StackScript selected. Continue to provide the rest of the required configurations to create your Linode. See step 6 in the Deploy a Linode from a Community StackScript section for details on the remaining configurations.

      Next Steps

      • After deploying a new Linode using a StackScript, you can connect to your Linode via SSH and verify that the StackScript has executed as expected. These steps will vary depending on the StackScript that you used when deploying your Linode.

        Note

        Depending on the contents of your StackScript, it may take a few minutes for the script to finish executing.

      • To learn how to create your own StackScript, see the A Tutorial for Creating and Managing StackScripts guide.

      This guide is published under a CC BY-ND 4.0 license.




      How to Deploy K3s on Linode


      Updated by Linode

      Written by Rajakavitha Kodhandapani


      K3s is a lightweight, easy-to-install Kubernetes distribution. Built for the edge, K3s includes an embedded SQLite database as the default datastore and supports external datastores such as PostgreSQL, MySQL, and etcd. K3s includes a command line cluster controller, a local storage provider, a service load balancer, a Helm controller, and the Traefik ingress controller. It also automates and manages complex cluster operations such as distributing certificates. With K3s, you can run a highly available, certified Kubernetes distribution designed for production workloads on resource-light machines like Nanodes.

      Note

      • While you can deploy a K3s cluster on just about any flavor of Linux, K3s is officially tested on Ubuntu 16.04 and Ubuntu 18.04. If you are deploying K3s on CentOS where SELinux is enabled by default, then you must ensure that proper SELinux policies are installed. For more information, see Rancher’s documentation on SELinux support.
      • Nanode instances are suitable for low-duty workloads where performance isn’t critical. Depending on your requirements, you can choose to use Linodes with greater resources for your K3s cluster.

      Before You Begin

      1. Familiarize yourself with our Getting Started guide.

      2. Create two Linodes in the same region that are running Ubuntu 18.04.

      3. Complete the steps for setting the hostname and timezone for both Linodes. When setting hostnames, it may be helpful to identify one Linode as a server and the other as an agent.

      4. Follow our Securing Your Server guide to create a standard user account, harden SSH access, remove unnecessary network services, and create firewall rules to allow all outgoing traffic and deny all incoming traffic except SSH traffic on both Linodes (a minimal ufw sketch follows this list).

        Note

        This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you’re not familiar with the sudo command, visit our Users and Groups guide.

        All configuration files should be edited with elevated privileges. Remember to include sudo before running your text editor.

      5. Ensure that your Linodes are up to date:

        sudo apt update && sudo apt upgrade
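      For step 4, if you use ufw for your firewall (as the Securing Your Server guide does), a minimal baseline on each Linode might look like the following sketch; the exact tooling and rules are up to you:

        sudo ufw default deny incoming
        sudo ufw default allow outgoing
        sudo ufw allow ssh
        sudo ufw enable
        sudo ufw status verbose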
        

      Install K3s Server

      First, you will install the K3s server on a Linode, from which you will manage your K3s cluster.

      1. Connect to the Linode where you want to install the K3s server.

      2. Open port 6443/tcp on your firewall to make it accessible by other nodes in your cluster:

        sudo ufw allow 6443/tcp
        
      3. Open port 8472/udp on your firewall to enable Flannel VXLAN:

        Note

        Replace 192.0.2.1 with the IP address of your K3s Agent Linode.

        As detailed in Rancher’s Installation Requirements, port 8472 should not be accessible outside of your cluster for security reasons.

        sudo ufw allow from 192.0.2.1 to any port 8472 proto udp
        
      4. (Optional) Open port 10250/tcp on your firewall to utilize the metrics server:

        sudo ufw allow 10250/tcp
        
      5. Set environment variables used for installing the K3s server:

        export K3S_KUBECONFIG_MODE="644"
        export K3S_NODE_NAME="k3s-server-1"
        
      6. Execute the following command to install K3s server:

        curl -sfL https://get.k3s.io | sh -
        
      7. Verify the status of the K3s server:

        sudo systemctl status k3s
        
      8. Retrieve the access token to connect a K3s Agent Linode to your K3s Server Linode:

        sudo cat /var/lib/rancher/k3s/server/node-token
        

        The expected output is similar to:

        abcdefABCDEF0123456789::server:abcdefABCDEF0123456789
        
      9. Copy the access token and save it in a secure location.

      Install K3s Agent

      Next, you will install the K3s agent on your second Linode.

      1. Connect to the Linode where you want to install the K3s agent.

      2. Open port 8472/udp on your firewall to enable Flannel VXLAN:

        Note

        Replace 192.0.2.0 with the IP address of your K3s Server Linode.

        As detailed in Rancher’s Installation Requirements, port 8472 should not be accessible outside of your cluster for security reasons.

        sudo ufw allow from 192.0.2.0 to any port 8472 proto udp
        
      3. (Optional) Open port 10250/tcp on your firewall to utilize the metrics server:

        sudo ufw allow 10250/tcp
        
      4. Set environment variables used for installing the K3s agent:

        Note

        Replace 192.0.2.0 with the IP address of your K3s Server Linode and abcdefABCDEF0123456789::server:abcdefABCDEF0123456789 with its access token.

        export K3S_KUBECONFIG_MODE="644"
        export K3S_NODE_NAME="k3s-agent-1"
        export K3S_URL="https://192.0.2.0:6443"
        export K3S_TOKEN="abcdefABCDEF0123456789::server:abcdefABCDEF0123456789"
        
      5. Execute the following command to install the K3s agent:

        curl -sfL https://get.k3s.io | sh -
        
      6. Verify the status of the K3s agent:

        sudo systemctl status k3s-agent
        

      Manage K3s

      Your K3s installation includes kubectl, a command-line interface for managing Kubernetes clusters.

      From your K3s Server Linode, use kubectl to get the details of the nodes in your K3s cluster.

      kubectl get nodes
      

      The expected output is similar to:

      NAME           STATUS   ROLES    AGE   VERSION
      k3s-server-1   Ready    master   95s   v1.18.2+k3s1
      k3s-agent-1    Ready    <none>   21s   v1.18.2+k3s1
      

      Note

      To manage K3s from outside the cluster, copy the contents of /etc/rancher/k3s/k3s.yaml from your K3s Server Linode to ~/.kube/config on an external machine where you have installed kubectl, replacing 127.0.0.1 with the IP address of your K3s Server Linode.
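      For example, assuming a user named example_user on the K3s Server Linode at 192.0.2.0 and kubectl installed on your local machine, the copy might look like the following sketch (the file is readable by non-root users because K3S_KUBECONFIG_MODE="644" was set during installation):

        # Copy the kubeconfig from the K3s Server Linode to your local machine.
        mkdir -p ~/.kube
        scp example_user@192.0.2.0:/etc/rancher/k3s/k3s.yaml ~/.kube/config

        # Point kubectl at the server's public IP instead of the loopback address.
        sed -i 's/127.0.0.1/192.0.2.0/' ~/.kube/config

        kubectl get nodes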

      Test K3s

      Here, you will test your K3s cluster with a simple NGINX website deployment.

      1. On your K3s Server Linode, create a manifest file labeled nginx.yaml, open it with a text editor, and add the following text that describes a single-instance deployment of NGINX that is exposed to the public using a K3s service load balancer:

        nginx.yaml
        ---
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: nginx
          labels:
            app: nginx
        spec:
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
              - name: nginx
                image: nginx:latest
                ports:
                - containerPort: 80
        ---
        apiVersion: v1
        kind: Service
        metadata:
          name: nginx
          labels:
            app: nginx
        spec:
          ports:
            - protocol: TCP
              port: 8081
              targetPort: 80
          selector:
            app: nginx
          type: LoadBalancer
      2. Save and close the nginx.yaml file.

      3. Deploy the NGINX website on your K3s cluster:

        kubectl apply -f ./nginx.yaml
        

        The expected output is similar to:

        deployment.apps/nginx created
        service/nginx created
        
      4. Verify that the pods are running:

        kubectl get pods
        

        The expected output is similar to:

        NAME                    READY   STATUS    RESTARTS   AGE
        svclb-nginx-c6rvg       1/1     Running   0          21s
        svclb-nginx-742gb       1/1     Running   0          21s
        nginx-cc7df4f8f-2q7vf   1/1     Running   0          22s
        
      5. Verify that your deployment is ready:

        kubectl get deployments
        

        The expected output is similar to:

        NAME    READY   UP-TO-DATE   AVAILABLE   AGE
        nginx   1/1     1            1           57s
        
      6. Verify that the load balancer service is running:

        kubectl get services nginx
        

        The expected output is similar to:

        NAME       TYPE           CLUSTER-IP    EXTERNAL-IP       PORT(S)          AGE
        nginx      LoadBalancer   10.0.0.89     192.0.2.1         8081:31809/TCP   33m
        
      7. In a web browser's address bar, enter the IP address listed under EXTERNAL-IP in your output and append the port number :8081 to reach the default NGINX welcome page (a command line alternative is sketched after this list).

      8. Delete your test NGINX deployment:

        kubectl delete -f ./nginx.yaml
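      As a command line alternative to the browser check in step 7, you can also query the service with curl before deleting the deployment in step 8. This sketch uses the example EXTERNAL-IP value 192.0.2.1 and port 8081 from the sample output above; substitute the values from your own output:

        curl http://192.0.2.1:8081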
        

      Tear Down K3s

      To uninstall your K3s cluster:

      1. Connect to your K3s Agent Linode and run the following commands:

        sudo /usr/local/bin/k3s-agent-uninstall.sh
        sudo rm -rf /var/lib/rancher
        
      2. Connect to your K3s Server Linode and run the following commands:

        sudo /usr/local/bin/k3s-uninstall.sh
        sudo rm -rf /var/lib/rancher
        


      This guide is published under a CC BY-ND 4.0 license.


