
      Getting Started with Kubernetes: Use kubeadm to Deploy a Cluster on Linode


      Updated and contributed by Linode

      Linode offers several pathways for users to easily deploy a Kubernetes cluster. If you prefer the command line, you can create a Kubernetes cluster with one command using the Linode CLI’s k8s-alpha plugin and Terraform. Or, if you prefer a full-featured GUI, Linode’s Rancher integration enables you to deploy and manage Kubernetes clusters with a simple web interface. The Linode Kubernetes Engine, currently under development with an early access beta version on its way this summer, allows you to spin up a Kubernetes cluster with Linode handling the management and maintenance of your control plane. These are all great options for production-ready deployments.

      Kubeadm is a cloud-provider-agnostic tool that automates many of the tasks required to get a cluster up and running. Users of kubeadm can run a few simple commands on individual servers to turn them into a Kubernetes cluster consisting of a master node and worker nodes. This guide will walk you through installing kubeadm and using it to deploy a Kubernetes cluster on Linode. While the kubeadm approach requires more manual steps than other Kubernetes cluster creation pathways offered by Linode, it is covered here as a way to dive deeper into the various components that make up a Kubernetes cluster and the ways in which they interact with each other to provide a scalable and reliable container orchestration mechanism.

      Note

      This guide’s example instructions will result in the creation of three billable Linodes. Information on how to tear down the Linodes is provided at the end of the guide. Interacting with the Linodes via the command line provides the most opportunity for learning; however, this guide is written so that users can also benefit by reading along.

      Before You Begin

      1. Deploy three Linodes running Ubuntu 18.04 with the following system requirements:

        • One Linode to use as the master node with 4GB RAM and 2 CPU cores.
        • Two Linodes to use as the worker nodes, each with 1GB RAM and 1 CPU core.
      2. Follow the Getting Started and the Securing Your Server guides for instructions on setting up your Linodes. The steps in this guide assume the use of a limited user account with sudo privileges.

      Note

      When following the Getting Started guide, make sure that each Linode is using a different hostname. Not following this guideline will leave you unable to join some or all nodes to the cluster in a later step.
      3. Disable swap memory on your Linodes. Kubernetes requires that you disable swap memory on any cluster nodes to prevent the Kubernetes scheduler (kube-scheduler) from ever sending a pod to a node that has run out of CPU/memory or reached its designated CPU/memory limit.

        sudo swapoff -a
        

        Verify that your swap has been disabled. You should expect to see a value of 0 returned.

        cat /proc/meminfo | grep 'SwapTotal'
        

        To learn more about managing compute resources for containers, see the official Kubernetes documentation.
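
        The swapoff command above disables swap only until the next reboot. If you also want the change to persist across reboots, one common approach (an addition to the original steps; your /etc/fstab layout may differ, so review the file before and after editing) is to comment out any swap entries in /etc/fstab:

        sudo sed -i '/ swap / s/^/#/' /etc/fstab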

      4. Read the Beginner’s Guide to Kubernetes to familiarize yourself with the major components and concepts of Kubernetes. The current guide assumes a working knowledge of common Kubernetes concepts and terminology.

      Build a Kubernetes Cluster

      Kubernetes Cluster Architecture

      A Kubernetes cluster consists of a master node and worker nodes. The master node hosts the control plane, the combination of all the components that maintain the desired cluster state. This cluster state is defined by manifest files and the kubectl tool. While the control plane components can be run on any cluster node, it is a best practice to isolate the control plane on its own node and to run any application containers on a separate worker node. A cluster can have a single worker node or up to 5000. Each worker node must be able to maintain running containers in a pod and be able to communicate with the master node’s control plane.

      The list below describes the Kubernetes tooling you will need to install on your master and worker nodes in order to meet the minimum requirements for a functioning Kubernetes cluster as described above.

      • kubeadm (master and worker nodes): This tool provides a simple way to create a Kubernetes cluster by automating the tasks required to get a cluster up and running. New Kubernetes users with access to a cloud hosting provider, like Linode, can use kubeadm to build out a playground cluster. kubeadm is also used as a foundation for more mature Kubernetes deployment tooling.
      • Container runtime (master and worker nodes): A container runtime is responsible for running the containers that make up a cluster’s pods. This guide will use Docker as the container runtime.
      • kubelet (master and worker nodes): kubelet ensures that all pod containers running on a node are healthy and meet the specifications for a pod’s desired behavior.
      • kubectl (master and worker nodes): A command line tool used to manage a Kubernetes cluster.
      • Control plane (master node only): The series of services that form the Kubernetes master structure and allow it to control the cluster. kubeadm allows the control plane services to run as containers on the master node. The control plane will be created when you initialize kubeadm later in this guide.

      Install the Container Runtime: Docker

      Docker is the software responsible for running the pod containers on each node. You can use other container runtime software with Kubernetes, such as Containerd and CRI-O. You will need to install Docker on all three Linodes.

      These steps install Docker Community Edition (CE) using the official Ubuntu repositories. To install on another distribution, see the official installation page.

      1. Remove any older installations of Docker that may be on your system:

        sudo apt remove docker docker-engine docker.io
        
      2. Make sure you have the necessary packages to allow the use of Docker’s repository:

        sudo apt install apt-transport-https ca-certificates curl software-properties-common
        
      3. Add Docker’s GPG key:

        curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
        
      4. Verify the fingerprint of the GPG key:

        sudo apt-key fingerprint 0EBFCD88
        

        You should see output similar to the following:

          
        pub   4096R/0EBFCD88 2017-02-22
                Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
        uid                  Docker Release (CE deb) <docker@docker.com>
        sub   4096R/F273FCD8 2017-02-22
        
        
      5. Add the stable Docker repository:

        sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
        
      6. Update your package index and install Docker CE:

        sudo apt update
        sudo apt install docker-ce
        
      7. Add your limited Linux user account to the docker group. Replace $USER with your username:

        sudo usermod -aG docker $USER
        

        Note

        After entering the usermod command, you will need to close your SSH session and open a new one for this change to take effect.

      8. Check that the installation was successful by running the built-in “Hello World” program:

        sudo docker run hello-world
        
      9. Set up the Docker daemon to use systemd as the cgroup driver, instead of the default cgroupfs. This is a recommended step so that the kubelet and Docker both use the same cgroup manager, which makes it easier for Kubernetes to know which resources are available on your cluster’s nodes.

        sudo bash -c 'cat > /etc/docker/daemon.json <<EOF
        {
          "exec-opts": ["native.cgroupdriver=systemd"],
          "log-driver": "json-file",
          "log-opts": {
            "max-size": "100m"
          },
          "storage-driver": "overlay2"
        }
        EOF'
        
      10. Create a systemd directory for Docker:

        sudo mkdir -p /etc/systemd/system/docker.service.d
        
      11. Restart Docker:

        sudo systemctl daemon-reload
        sudo systemctl restart docker
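
        As a quick sanity check (not part of the original steps), you can confirm that Docker picked up the systemd cgroup driver by inspecting the daemon’s info output, which should report "Cgroup Driver: systemd":

        sudo docker info | grep -i "cgroup driver"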
        

      Install kubeadm, kubelet, and kubectl

      Complete the steps outlined in this section on all three Linodes.

      1. Update the system and install the required dependencies for installation:

        sudo apt-get update && sudo apt-get install -y apt-transport-https curl
        
      2. Add the required GPG key to your apt-sources keyring to authenticate the Kubernetes related packages you will install:

        curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
        
      3. Add Kubernetes to the package manager’s list of sources:

        sudo bash -c "cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
        deb https://apt.kubernetes.io/ kubernetes-xenial main
        EOF"
        
      4. Update apt, install Kubeadm, Kubelet, and Kubectl, and hold the installed packages at their installed versions:

        sudo apt-get update
        sudo apt-get install -y kubelet kubeadm kubectl
        sudo apt-mark hold kubelet kubeadm kubectl
        
      5. Verify that kubeadm, kubelet, and kubectl have installed by retrieving their version information. Each command should return version information about each package. Note that kubectl version also attempts to contact a cluster; since you have not yet created one, it will print the client version along with a connection error, which is expected at this stage.

        kubeadm version
        kubelet --version
        kubectl version
        

      Set up the Kubernetes Control Plane

      After installing the Kubernetes related tooling on all your Linodes, you are ready to set up the Kubernetes control plane on the master node. The control plane is responsible for allocating resources to your cluster, maintaining the health of your cluster, and ensuring that it meets the minimum requirements you designate for the cluster.

      The primary components of the control plane are the kube-apiserver, kube-controller-manager, kube-scheduler, and etcd. kubeadm provides a way to easily initialize the Kubernetes master node with all the necessary control plane components. For more information on each control plane component, see the Beginner’s Guide to Kubernetes.

      In addition to the baseline control plane components, there are several add-ons that can be installed on the master node to access additional cluster features. You will need to install a networking and network policy provider add-on that will implement Kubernetes’ network model on the cluster’s pod network.

      This guide will use Calico as the pod network add-on. Calico is a secure and open source L3 networking and network policy provider for containers. There are several other network and network policy providers to choose from. To view a full list of providers, refer to the official Kubernetes documentation.

      Note

      kubeadm only supports Container Network Interface (CNI) based networks. CNI consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers.

      1. Initialize kubeadm on the master node. This command runs checks against the node to ensure it contains all required Kubernetes dependencies. If the checks pass, it then installs the control plane components.

        When issuing this command, it is necessary to set the pod network range that Calico will use to allow your pods to communicate with each other. It is recommended to use the private IP address space, 10.2.0.0/16.

        Note

        The pod network IP range should not overlap with the service IP network range. The default service IP address range is 10.96.0.0/12. You can provide an alternative service IP address range using the --service-cidr=10.97.0.0/12 option when initializing kubeadm. Replace 10.97.0.0/12 with the desired service IP range.

        For a full list of available kubeadm initialization options, see the official Kubernetes documentation.

        sudo kubeadm init --pod-network-cidr=10.2.0.0/16
        

        You should see a similar output:

          
        Your Kubernetes control-plane has initialized successfully!
        
        To start using your cluster, you need to run the following as a regular user:
        
          mkdir -p $HOME/.kube
          sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
          sudo chown $(id -u):$(id -g) $HOME/.kube/config
        
        You should now deploy a pod network to the cluster.
        Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
          https://kubernetes.io/docs/concepts/cluster-administration/addons/
        
        Then you can join any number of worker nodes by running the following on each as root:
        
        kubeadm join 192.0.2.0:6443 --token udb8fn.nih6n1f1aijmbnx5 \
            --discovery-token-ca-cert-hash sha256:b7c01e83d63808a4a14d2813d28c127d3a1c4e1b6fc6ba605fe4d2789d654f26
              
        

        The kubeadm join command will be used in the Join a Worker Node to the Cluster section of this guide to bootstrap the worker nodes to the Kubernetes cluster. This command should be kept handy for later use. Below is a description of the required options you will need to pass in with the kubeadm join command:

        • The master node’s IP address and the Kubernetes API server’s port number. In the example output, this is 192.0.2.0:6443. The Kubernetes API server’s port number is 6443 by default on all Kubernetes installations.
        • A bootstrap token. The bootstrap token has a 24-hour TTL (time to live). A new bootstrap token can be generated if your current token expires.
        • A CA key hash. This is used to verify the authenticity of the data retrieved from the Kubernetes API server during the bootstrap process.
      2. Copy the admin.conf configuration file to your limited user account. This file allows you to communicate with your cluster via kubectl and provides superuser privileges over the cluster. It contains a description of the cluster, users, and contexts. Copying the admin.conf to your limited user account will provide you with administrative privileges over your cluster.

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
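
        As an optional check (not part of the original steps), you can confirm that kubectl can reach the new control plane; this should print the Kubernetes master and KubeDNS endpoints:

        kubectl cluster-info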
        
      3. Install the necessary Calico manifests to your master node and apply them using kubectl. The first file, rbac-kdd.yaml, works with Kubernetes’ role-based access control (RBAC) to provide Calico components access to necessary parts of the Kubernetes API. The second file, calico.yaml, configures a self-hosted Calico installation that uses the Kubernetes API directly as the datastore (instead of etcd).

        kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
        kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
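
        If you would like to watch the Calico and DNS pods start up (an optional step), you can follow the pods in the kube-system namespace until they reach the Running state, then press Ctrl+C to stop watching:

        kubectl get pods -n kube-system --watch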
        

      Inspect the Master Node with Kubectl

      After completing the previous section, your Kubernetes master node is ready with all the necessary components to manage a cluster. To gain a better understanding of all the parts that make up the master’s control plane, this section will walk you through inspecting your master node. If you have not yet reviewed the Beginner’s Guide to Kubernetes, it will be helpful to do so prior to continuing with this section as it relies on the understanding of basic Kubernetes concepts.

      1. View the current state of all nodes in your cluster. At this stage, the only node you should expect to see is the master node, since worker nodes have yet to be bootstrapped. A STATUS of Ready indicates that the master node contains all necessary components, including the pod network add-on, to start managing clusters.

        kubectl get nodes
        

        Your output should resemble the following:

          
        NAME        STATUS     ROLES     AGE   VERSION
        kube-master   Ready     master      1h    v1.14.1
            
        
      2. Inspect the available namespaces in your cluster.

        kubectl get namespaces
        

        Your output should resemble the following:

          
        NAME              STATUS   AGE
        default           Active   23h
        kube-node-lease   Active   23h
        kube-public       Active   23h
        kube-system       Active   23h
            
        

        Below is an overview of each namespace installed by default on the master node by kubeadm:

        • default: The default namespace contains objects with no other assigned namespace. By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of Pods, Services, and Deployments used by the cluster.
        • kube-system: The namespace for objects created by the Kubernetes system. This includes all resources used by the master node.
        • kube-public: This namespace is created automatically and is readable by all users. It contains information, like certificate authority data (CA), that helps kubeadm join and authenticate worker nodes.
        • kube-node-lease: The kube-node-lease namespace contains lease objects that are used by kubelet to determine node health. kubelet creates and periodically renews a Lease on a node. The node lifecycle controller treats this lease as a health signal. kube-node-lease was released to beta in Kubernetes 1.14.
      3. View all resources available in the kube-system namespace. The kube-system namespace contains the widest range of resources, since it houses all control plane resources. Replace kube-system with another namespace to view its corresponding resources.

        kubectl get all -n kube-system
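
        To dig into a single control plane component (an optional exploration step), you can describe one of its pods. Control plane pod names end with the master node’s hostname, so adjust the example below, which assumes the kube-master hostname shown earlier in this guide:

        kubectl describe pod kube-apiserver-kube-master -n kube-system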
        

      Join a Worker Node to the Cluster

      Now that your Kubernetes master node is set up, you can join worker nodes to your cluster. In order for a worker node to join a cluster, it must trust the cluster’s control plane, and the control plane must trust the worker node. This trust is managed via a shared bootstrap token and a certificate authority (CA) key hash. kubeadm handles the exchange between the control plane and the worker node. At a high level, the worker node bootstrap process is as follows:

      1. kubeadm retrieves information about the cluster from the Kubernetes API server. The bootstrap token and CA key hash are used to ensure the information originates from a trusted source.

      2. kubelet can take over and begin the bootstrap process, since it has the necessary cluster information retrieved in the previous step. The bootstrap token is used to gain access to the Kubernetes API server and submit a certificate signing request (CSR), which is then signed by the control plane.

      3. The worker node’s kubelet is now able to connect to the Kubernetes API server using the node’s established identity.

      Before continuing, you will need to make sure that you know your Kubernetes API server’s IP address, that you have a bootstrap token, and a CA key hash. This information was provided when kubeadm was initialized on the master node in the Set up the Kubernetes Control Plane section of this guide. If you no longer have this information, you can regenerate the necessary information from the master node.


      Regenerate a Bootstrap Token

      These commands should be issued from your master node.

      1. Generate a new bootstrap token and display the kubeadm join command with the necessary options to join a worker node to the master node’s control plane:

        kubeadm token create --print-join-command
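
        If you only need to recompute the CA key hash (for example, to reuse it with an existing token), one way to derive it from the cluster’s CA certificate on the master node is the openssl pipeline below; prefix the resulting hex string with sha256: when passing it to kubeadm join:

        openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
            openssl dgst -sha256 -hex | sed 's/^.* //'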
        

      Follow the steps below on each node you would like to bootstrap to the cluster as a worker node.

      1. SSH into the Linode that will be used as a worker node in the Kubernetes cluster. Replace username with your limited user account and 192.0.2.1 with the worker node’s IP address:

        ssh username@192.0.2.1
        
      2. Join the node to your cluster using kubeadm. Ensure you replace 192.0.2.0:6443 with the IP address for your master node along with its Kubernetes API server’s port number, udb8fn.nih6n1f1aijmbnx5 with your bootstrap token, and sha256:b7c01e83d63808a4a14d2813d28c127d3a1c4e1b6fc6ba605fe4d2789d654f26 with your CA key hash. The bootstrap process will take a few moments.

        sudo kubeadm join 192.0.2.0:6443 --token udb8fn.nih6n1f1aijmbnx5 \
            --discovery-token-ca-cert-hash sha256:b7c01e83d63808a4a14d2813d28c127d3a1c4e1b6fc6ba605fe4d2789d654f26
        

        When the bootstrap process has completed, you should see a similar output:

          
          This node has joined the cluster:
        * Certificate signing request was sent to apiserver and a response was received.
        * The Kubelet was informed of the new secure connection details.
        
        Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
              
        
      3. Repeat the steps outlined above on the second worker node to bootstrap it to the cluster.

      4. SSH into the master node and verify the worker nodes have joined the cluster:

         kubectl get nodes
        

        You should see a similar output.

          
        NAME          STATUS   ROLES    AGE     VERSION
        kube-master   Ready    master   1d22h   v1.14.1
        kube-node-1   Ready    <none>   1d22h   v1.14.1
        kube-node-2   Ready    <none>   1d22h   v1.14.1
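
        Worker nodes report a ROLES value of <none> by default. If you would like the output to label them as workers (a purely cosmetic touch, not part of the original steps), you can add a role label yourself, adjusting the node names to match your own:

        kubectl label node kube-node-1 node-role.kubernetes.io/worker=worker
        kubectl label node kube-node-2 node-role.kubernetes.io/worker=worker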
              
        

      Next Steps

      Now that you have a Kubernetes cluster up and running, you can begin experimenting with the various ways to configure pods, group resources, and deploy services that are exposed to the public internet. To help you get started with this, move on to follow along with the Deploy a Static Site on Linode using Kubernetes guide.
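
      Before moving on, you may want to run a quick smoke test from the master node to confirm that the cluster can schedule work onto the worker nodes (an optional extra, not part of the original guide). The commands below create a throwaway NGINX Deployment, show which node its pod lands on, and then remove it:

        kubectl create deployment nginx --image=nginx
        kubectl get pods -o wide
        kubectl delete deployment nginx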

      Tear Down Your Cluster

      If you are done experimenting with your Kubernetes Cluster, be sure to remove the Linodes you have running in order to avoid being further billed for them. See the Removing Services section of the Billing and Payments guide.


      Host a Static Site using Linode Object Storage


      Updated and contributed by Linode

      Note

      Object Storage is currently in a closed early access Beta, and you may not have access to Object Storage through the Cloud Manager or other tools. To gain access to the Early Access Program (EAP), open up a Customer Support ticket noting that you’d like to be included in the program, or e-mail [email protected] – beta access is completely free.

      Additionally, because Object Storage is in Beta, there may be breaking changes to how you access and manage Object Storage. This guide will be updated to reflect these changes if and when they occur.

      Why Host a Static Site on Object Storage?

      Static site generators are a popular solution for creating simple, fast, flexible, and attractive websites that are easy to update. You can contribute new pages and content to a static site in two steps:

      1. First, write the content for your site’s new page using Markdown, an easy-to-learn and light-weight markup language.

      2. Then, tell your static site generator to compile your Markdown (along with other relevant assets, like CSS styling, images, and JavaScript) into static HTML files.

      The second compilation step only needs to happen once for each time that you update your content. This is in contrast with a dynamic website framework like WordPress or Drupal, which will reference a relational database and compile your HTML every time a visitor loads your site.

      Benefits of Hosting on Object Storage

      Traditionally, these static HTML files would be served by a web server (like NGINX or Apache) running on a Linode. Using Object Storage to host your static site files means you do not have to worry about maintaining your site’s infrastructure. It is no longer necessary to perform typical server maintenance tasks, like software upgrades, web server configuration, and security upkeep.

      Object Storage provides an HTTP REST gateway to objects, which means a unique URL over HTTP is available for every object. Once your static site is built, making it available publicly over the Internet is as easy as uploading files to an Object Storage bucket.

      Object Storage Hosting Workflow

      At a high-level, the required steps to host a static site using Object Storage are:

      1. Install the static site generator of your choice to your local computer.

      2. Create the desired content and build the site (using your static site generator).

      3. Upload the static files to your Object Storage bucket to make the content publicly available over the Internet.

      This guide will use Hugo to demonstrate how to create a static site and host it on Linode Object Storage. However, there are many other static site generators to choose from, such as Jekyll and Gatsby, and the general steps outlined in this guide can be adapted to them. For more information on choosing a static site generator, see the How to Choose a Static Site Generator guide.

      Before You Begin

      1. Read the How to Use Linode Object Storage guide to familiarize yourself with Object Storage on Linode. Specifically, be sure that you have:

        • Created your Object Storage access and secret keys.
        • Installed and configured the s3cmd tool.
      2. Install and configure Git on your local computer.

      Install the Hugo Static Site Generator

      Hugo is written in Go and is known for compiling sites extremely quickly, even very large ones. It is well-supported, well-documented, and has an active community. Some useful Hugo features include shortcodes, an easy way to include predefined templates inside your Markdown, and a built-in LiveReload web server, which allows you to preview your site changes locally as you make them.

      1. Install Hugo on your computer:

        macOS:
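
        If you use Homebrew (an assumption; install Hugo with whichever package manager you prefer on macOS), you can install it from the Homebrew formula:

          brew install hugo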

        Linux/Ubuntu:

        • Determine your Linux kernel’s architecture:

          uname -r
          

          Your output will resemble the following:

            
          4.9.0-8-amd64
          
          
        • Navigate to Hugo’s GitHub releases page and download the appropriate version for your platform. This example command downloads version 0.55, but a newer release may be available:

          wget https://github.com/gohugoio/hugo/releases/download/v0.55.0/hugo_0.55.0_Linux-64bit.deb
          
        • Install the package using dpkg:

          sudo dpkg -i hugo*.deb
          
      2. Verify that Hugo is installed. You should see output indicating your installed Hugo’s version number:

        hugo version
        

      Create a Hugo Site

      In this section, you will use the Hugo CLI (command line interface) to create your Hugo site, initialize a Hugo theme, and add content to your site. Hugo’s CLI provides several useful commands for common tasks needed to build, configure, and interact with your Hugo site.

      1. Create a new Hugo site on your local computer. This command will create a folder named example-site and scaffold Hugo’s directory structure inside it:

        hugo new site example-site
        
      2. Move into your Hugo site’s root directory:

        cd example-site
        

        Note

        All commands in this section of the guide should be issued from your site’s root directory.

      3. You will use Git to add a theme to your Hugo site’s directory. Initialize your Hugo site’s directory as a Git repository:

        git init
        
      4. Install the Ananke theme as a submodule of your Hugo site’s Git repository. Git submodules allow one Git repository to be stored as a subdirectory of another Git repository, while still being able to maintain each repository’s version control information separately. The Ananke theme’s repository will be located in the ~/example-site/themes/ananke directory of your Hugo site.

        git submodule add https://github.com/budparr/gohugo-theme-ananke.git themes/ananke
        

        Note

        Hugo has many available themes that can be installed as a submodule of your Hugo site’s directory.
      5. Add the theme to your Hugo site’s configuration file. The configuration file (config.toml) is located at the root of your Hugo site’s directory.

        echo 'theme = "ananke"' >> config.toml
        
      6. Create a new content file for your site. This command will generate a Markdown file with an auto-populated date and title:

        hugo new posts/my-first-post.md
        
      7. You should see a similar output. Note that the file is located in the content/posts/ directory of your Hugo site:

          
        /home/username/example-site/content/posts/my-first-post.md created
        
        
      8. Open the Markdown file in the text editor of your choice to begin modifying its content; you can copy and paste the example snippet into your file, which contains an updated front matter section at the top and some example Markdown body text.

        Set your desired value for title. Then, set the draft state to false and add your content below the --- in Markdown syntax, if desired:

        /home/username/example-site/content/posts/my-first-post.md
        
        ---
        title: "My First Post"
        date: 2019-04-11T11:25:11-04:00
        draft: false
        ---
        
        # Host a Static Site on Linode Object Storage
        
        There are many benefits to using a static site generator. Here is a list of a few of them:
        
        - Run your own website without having to manage a Linode.
        - You don't need to worry about running a web server like Apache or NGINX.
        - Static website performance is typically very fast.
        - Use Git to version control your static website's content.


        About front matter

        Front matter is a collection of metadata about your content, and it is embedded at the top of your file within opening and closing --- delimiters.

        Front matter is a powerful Hugo feature that provides a mechanism for passing data that is attached to a specific piece of content to Hugo’s rendering engine. Hugo accepts front matter in TOML, YAML, and JSON formats. In the example snippet, there is YAML front matter for the title, date, and draft state of the Markdown file. These variables will be referenced and displayed by your Hugo theme.

      9. Once you have added your content, you can preview your changes by building and serving the site using Hugo’s built-in webserver:

        hugo server
        
      10. You will see a similar output:

          
                           | EN
        +------------------+----+
          Pages            | 11
          Paginator pages  |  0
          Non-page files   |  0
          Static files     |  3
          Processed images |  0
          Aliases          |  1
          Sitemaps         |  1
          Cleaned          |  0
        
        Total in 7 ms
        Watching for changes in /home/username/example-site/{content,data,layouts,static,themes}
        Watching for config changes in /home/username/example-site/config.toml
        Serving pages from memory
        Running in Fast Render Mode. For full rebuilds on change: hugo server --disableFastRender
        Web Server is available at http://localhost:1313/ (bind address 127.0.0.1)
        Press Ctrl+C to stop
        
        
      11. The output will provide a URL to preview your site. Copy and paste the URL into a browser to access the site. In the above example Hugo’s web server URL is http://localhost:1313/.

      12. When you are happy with your site’s content you can build your site:

        hugo -v
        

        Hugo will generate your site’s static HTML files and store them in a public directory that it will create inside your project. The static files that are generated by Hugo are the files that you will upload to your Object Storage bucket to make your site accessible via the Internet.

      13. View the contents of your site’s public directory:

        ls public
        

        Your output should resemble the following example. When you built the site, the Markdown file you created and edited in steps 6 and 8 was used to generate a corresponding static HTML file at public/posts/my-first-post/index.html.

          
          404.html    categories  dist        images      index.html  index.xml   posts       sitemap.xml tags
            
        


        Track your Static Site Files with Git

        It’s not necessary to version control your site files in order to host them on Object Storage, but we still recommend that you do so:

        1. Display the state of your current working directory (root of your Hugo site):

          git status
          
        2. Stage all your files to be committed:

          git add -A
          
        3. Commit all your changes and add a meaningful commit message:

          git commit -m 'Add my first post.'
          

        Once you have used Git to track your local Hugo site files, you can easily push them to a remote Git repository, like GitHub or GitLab. Storing your static site files on a remote Git repository opens up many possibilities for collaboration and automating your static site’s deployment to Linode Object Storage. To learn more about Git, see the Getting Started with Git guide.
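
        As a minimal sketch of that workflow (the repository URL below is a placeholder; substitute the remote you created on GitHub, GitLab, or another host), you would add the remote and push your local commits to it:

          git remote add origin git@github.com:username/example-site.git
          git push -u origin master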

      Upload your Static Site to Linode Object Storage

      Before proceeding with this section ensure that you have already created your Object Storage access and secret keys and have installed the s3cmd tool.

      1. Create a new Object Storage bucket; prepend s3:// to the beginning of the bucket’s name:

        s3cmd mb s3://my-bucket
        

        Note

        Bucket names must be unique within the Object Storage cluster. You might find the bucket name my-bucket is already in use by another Linode customer, in which case you will need to choose a new bucket name.

      2. Initialize your Object Storage bucket as a website. You must tell your bucket which files to serve as the index page and the error page for your static site. This is done with the --ws-index and --ws-error options:

        s3cmd ws-create --ws-index=index.html --ws-error=404.html s3://my-bucket
        

        In our Hugo example, the site’s index file is index.html and the error file is 404.html. Whenever a user visits your static site’s URL, the Object Storage service will serve the index.html page. If a site visitor tries to access an invalid path, they will be presented with the 404.html page.

      3. The command will return the following message:

          
            Bucket 's3://my-bucket/': website configuration created.
              
        
      4. Display information about your Object Storage’s website configuration to obtain your site’s URL:

        s3cmd ws-info s3://my-bucket
        
      5. You should see a similar output. Be sure to take note of your Object Storage bucket’s URL:

          
              Bucket s3://my-bucket/: Website configuration
        Website endpoint: http://website-us-east-1.linodeobjects.com/
        Index document:   index.html
        Error document:   404.html
            
        

        Note

        The Linode Object Storage early access Beta provides SSL enabled by default. This means you can access your Object Storage bucket using https, as well.

      6. Use s3cmd’s sync command to upload the contents of your static site’s public directory to your Object Storage bucket. This step will make your site available publicly on the Internet. Ensure you are in your site’s root directory on your computer (e.g. /home/username/example-site):

        s3cmd --no-mime-magic --acl-public --delete-removed --delete-after sync public/ s3://my-bucket
        
        The options used with the sync command are:

        • --no-mime-magic: Tells s3cmd not to use file signatures when guessing the object’s MIME type.
        • --acl-public: Sets the access control level of the objects to public.
        • --delete-removed: Deletes any destination objects with no corresponding source file.
        • --delete-after: Deletes destination files that are no longer found at the source only after all files have been uploaded to the bucket.
      7. Use a browser to navigate to your Object Storage bucket’s URL to view your Hugo site:

        Hugo Index Page

        Note

        It may take a minute or two after your s3cmd sync completes for the page to appear at your bucket’s website URL.

      8. If needed, you can continue to update your static site locally and upload any changes using s3cmd’s sync command from step 6 of this section.

      (Optional) Next Steps

      After uploading your static site to Linode Object Storage, you may want to use a custom domain for your site. To do this, you can add a CNAME entry to your domain’s DNS records that aliases it to your Object Storage bucket’s website URL. To learn about managing DNS records on Linode, see the DNS Manager and DNS Records: An Introduction guides.
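
      As an illustration only (www.example.com and my-bucket are placeholders for your own domain and bucket name), the resulting DNS record would look something like this zone-file entry:

        www.example.com.    IN    CNAME    my-bucket.website-us-east-1.linodeobjects.com.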

      As noted before, it’s possible to trigger automatic deployments to the Object Storage service when you push new content updates to GitHub or GitLab. This is done by leveraging a CI/CD (continuous integration/continuous delivery) tool like Travis CI. Essentially, you would build your Hugo site within the Travis environment and then run the s3cmd sync command from it to your bucket.
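
      As a rough sketch of what such a CI job would run (assuming the build environment has Hugo and s3cmd installed and your Object Storage keys configured; my-bucket is the same placeholder used above), the deployment step amounts to rebuilding the site and re-syncing the public directory:

        hugo
        s3cmd --no-mime-magic --acl-public --delete-removed --delete-after sync public/ s3://my-bucket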


      How to Use Linode Object Storage


      Updated and contributed by Linode

      Note

      Object Storage is currently in a closed early access Beta, and you may not have access to Object Storage through the Cloud Manager or other tools. To gain access to the Early Access Program (EAP), open up a Customer Support ticket noting that you’d like to be included in the program, or e-mail [email protected] – beta access is completely free.

      Additionally, because Object Storage is in Beta, there may be breaking changes to how you access and manage Object Storage. This guide will be updated to reflect these changes if and when they occur.

      Linode’s Object Storage is a globally-available, S3- and Swift-compatible method for storing and accessing data. Object Storage differs from traditional hierarchical data storage (as in a Linode’s disk) and Block Storage Volumes. Under Object Storage, files (also called objects) are stored in flat data structures (referred to as buckets) alongside their own rich metadata.

      Additionally, Object Storage does not require the use of a Linode. Instead, Object Storage gives each object a unique URL with which you can access your data. An object can be publicly accessible, or you can set it to be private and only visible to you. This makes Object Storage great for sharing and storing unstructured data like images, documents, archives, streaming media assets, and file backups, and the amount of data you store can range from small collections of files up to massive libraries of information. Lastly, Linode Object Storage has the built-in ability to host a static site.

      Below you will find instructions on how to connect to Object Storage, and how to upload and access objects:

      1. First, you’ll need to create a key pair to access the service.

      2. Then, you’ll choose from a variety of available first-party and third-party tools to access and use the service.

      Object Storage Key Pair

      The first step towards using Object Storage is to create a pair of keys for the service. This pair is composed of an access key and a secret key:

      • The access key allows you to access any objects that you set to have private read permissions.

        Note

        To use your access key when viewing a private object, you first need to generate a signed URL for the object. The signed URL is much like the standard URL for your object, but some extra URL parameters are appended to it, including the access key. Instructions for generating a signed URL can be found for each of the tools outlined in this guide.

      • Your secret key is used together with your access key to authenticate the various Object Storage tools with your Linode account. You should not share the secret key.

        Note

        Each Object Storage key pair on your Linode account has complete access to all of the buckets on your account.

      Generate a Key Pair

      1. Log in to the Linode Cloud Manager.

        Note

        Object Storage is not available in the Linode Classic Manager.

      2. Click on the Object Storage link in the sidebar, click the Access Keys tab, and then click the Create an Access Key link.

        Click on the 'Access Keys' tab.

      3. The Create an Access Key menu will appear.

        The 'Create an Access Key' menu.

      4. Enter a label for the key pair. This label will be how you reference your key pair in the Linode Cloud Manager. Then, click Submit.

      5. A window will appear that contains your access key and your secret key. Write these down somewhere secure. The access key will be visible in the Linode Cloud Manager, but you will not be able to retrieve your secret key again once you close the window.

        Your access key and secret key.

        You now have the credentials needed to connect to Linode Object Storage.

      There are a number of tools that are available to help manage Linode Object Storage. This guide explains how to install and use the following options:

      • The Linode Cloud Manager can be used to create buckets (you are currently not able to upload objects to a bucket from the Cloud Manager).

      • s3cmd is a powerful command line utility that can be used with any S3-compatible object storage service, including Linode’s. s3cmd can be used to create and remove buckets, add and remove objects, convert a bucket into a static site from the command line, plus other functions like syncing entire directories up to a bucket.

      • Cyberduck is a graphical utility available for Windows and macOS and is a great option if you prefer a GUI tool.

      Cloud Manager

      Create a Bucket

      The Cloud Manager provides a web interface for creating buckets. To create a bucket:

      1. If you have not already, log in the Linode Cloud Manager.

      2. Click on the Object Storage link in the sidebar, and then click on Add a Bucket.

        The Object Storage menu.

      3. The Create a Bucket menu will appear.

        The Create a Bucket menu.

      4. Add a label for your bucket. A bucket’s label needs to be unique within the cluster that it lives in, and this includes buckets of the same name on different Linode accounts. If the label you enter is already in use, you will have to choose a different label.

      5. Choose a cluster location for the bucket to reside in.

      6. Click Submit. You are now ready to upload objects to your bucket using one of the other tools outlined in this guide.

      s3cmd

      s3cmd is a command line utility that you can use for any S3-compatible Object Storage.

      Install and Configure s3cmd

      1. s3cmd can be downloaded using apt on Debian and Ubuntu, and Homebrew on macOS. To download s3cmd using Homebrew, run the following command:

        brew install s3cmd
        

        Note

        On macOS, s3cmd might fail to install if you do not have XCode command line tools installed. If that is the case, run the following command:

        xcode-select --install
        

        You will be prompted to agree to the terms and conditions.

        To install s3cmd on Debian or Ubuntu, run the following command:

        sudo apt install s3cmd
        
      2. Once s3cmd has been installed, you will need to configure it:

        s3cmd --configure
        

        You will be presented with a number of questions. To accept the default answer that appears within the brackets, press enter. Here is an example of the answers you will need to provide:

        Access Key: 4TQ5CJGZS92LLEQHLXB3
        Secret Key: enteryoursecretkeyhere
        Default Region: US
        S3 Endpoint: us-east-1.linodeobjects.com
        DNS-style bucket+hostname:port template for accessing a bucket: us-east-1.linodeobjects.com
        Encryption password: YOUR_GPG_KEY
        Path to GPG program: /usr/local/bin/gpg
        Use HTTPS protocol: False
        HTTP Proxy server name:
        HTTP Proxy server port: 0
        

        Note

        It is not necessary to supply a GPG key when configuring s3cmd, though it will allow you to store and retrieve encrypted files. If you do not wish to configure GPG encryption, you can leave the Encryption password and Path to GPG program fields blank.

      3. When you are done, enter Y to save your configuration.

        Note

        s3cmd offers a number of additional configuration options that are not presented as prompts by the s3cmd --configure command. One of those options is website_endpoint, which instructs s3cmd on how to construct an appropriate URL for a bucket that is hosting a static site, similar to the S3 Endpoint in the above configuration. This step is optional, but will ensure that any commands that contain your static site’s URL will output the right text. To edit this configuration file, open the ~/.s3cfg file on your local computer:

        nano ~/.s3cfg
        

        Scroll down until you find the website_endpoint, then add the following value:

        http://%(bucket)s.website-us-east-1.linodeobjects.com/
        

      You are now ready to use s3cmd to create a bucket in Object Storage.
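
      As a quick optional check that your configuration works, you can list the buckets on your account; the command returns nothing if you have not created any buckets yet:

      s3cmd ls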

      Create a Bucket with s3cmd

      You can create a bucket with s3cmd by issuing the following mb command, replacing my-example-bucket with the name of the bucket you would like to create. Bucket names need to be unique within the same cluster, including buckets on other Linode accounts. If you choose a name for your bucket that someone else has already created, you will have to choose a different name:

      s3cmd mb s3://my-example-bucket
      

      To remove a bucket, you can use the rb command:

      s3cmd rb s3://my-example-bucket
      

      Caution

      To delete a bucket that has files in it, include the --recursive (or -r) option and the --force (or -f) option. Use caution when using this command:

      s3cmd rb -r -f s3://my-example-bucket/
      

      Upload, Download, and Delete an Object with s3cmd

      1. As an example object, create a text file and fill it with some example text.

        echo 'Hello World!' > example.txt
        
      2. Now, transfer the text file object to your bucket using s3cmd’s put command, replacing my-example-bucket with the label of the bucket you gave in the last section:

        s3cmd put example.txt s3://my-example-bucket -P
        

        Note

        The -P flag at the end of the command instructs s3cmd to make the object public. To make the object private, which means you will only be able to access it from a tool such as s3cmd, simply leave the -P flag out of the command.

        Note

        If you chose to enable encryption when configuring s3cmd, you can store encrypted objects by supplying the -e flag:

        s3cmd put -e encrypted_example.txt s3://my-example-bucket
        
      3. The object will be uploaded to your bucket, and s3cmd will provide a public URL for the object:

        upload: 'example.txt' -> 's3://my-example-bucket/example.txt'  [1 of 1]
        13 of 13   100% in    0s   485.49 B/s  done
        Public URL of the object is: http://us-east-1.linodeobjects.com/my-example-bucket/example.txt
        

        Note

        The URL for the object that s3cmd provides is one of two valid ways to access your object. The first, which s3cmd provides, places the name of your bucket after the domain name. You can also access your object by affixing your bucket name as a subdomain: http://my-example-bucket.us-east-1.linodeobjects.com/example.txt. The latter URL is generally favored.

      4. To retrieve a file, issue the get command:

        s3cmd get s3://my-example-bucket/example.txt
        

        If the file you are attempting to retrieve is encrypted, you can retrieve it using the -e flag:

        s3cmd get -e s3://my-example-bucket/encrypted_example.txt
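
        If the object is private and you want to share it temporarily without handing out your keys, you can also generate a signed URL for it with the signurl command (this is the mechanism referenced in the signed URL note earlier in this guide). The example below produces a URL that expires in one hour:

        s3cmd signurl s3://my-example-bucket/example.txt +3600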
        
      5. To delete a file, you can issue the rm command:

         s3cmd rm s3://my-example-bucket/example.txt
        

        Caution

        To delete all files in a bucket, include the --recursive (or -r) option and the --force (or -f) option. Use caution when using this command:

        s3cmd rm -r -f s3://my-example-bucket/
        
      6. To list all available buckets, issue the ls command:

        s3cmd ls
        
      7. To list all objects in a bucket, issue the ls command and supply a bucket:

        s3cmd ls s3://my-example-bucket
        

      Create a Static Site with s3cmd

      You can also create a static website using Object Storage and s3cmd:

      1. To create a website from a bucket, issue the ws-create command:

        s3cmd ws-create --ws-index=index.html --ws-error=404.html s3://my-example-bucket
        

        The --ws-index and --ws-error flags specify which objects the bucket should use to serve the static site’s index page and error page, respectively.

      2. You will need to separately upload the index.html and 404.html files (or however you have named the index and error pages) to your bucket:

        echo 'Index page' > index.html
        echo 'Error page' > 404.html
        s3cmd put index.html 404.html s3://my-example-bucket
        
      3. Your static site is accessed from a different URL than the generic URL for your Object Storage bucket. Static sites are available at the website-us-east-1 subdomain. Using my-example-bucket as an example, you would navigate to http://my-example-bucket.website-us-east-1.linodeobjects.com.

      For more information on hosting a static website with Object Storage, read our Host a Static Site using Linode Object Storage guide.

      Other s3cmd Commands

      To upload an entire directory of files, you can use the sync command, which will automatically sync all new or changed files. Navigate to the directory you would like to sync, then enter the following:

      s3cmd sync . s3://my-example-bucket -P
      

      This can be useful for uploading the contents of a static site to your bucket.

      Note

      The period in the above command instructs s3cmd to upload the current directory. If you do not want to first navigate to the directory you wish to upload, you can supply a path to the directory instead of the period.

      Cyberduck

      Cyberduck is a desktop application that facilitates file transfer over FTP, SFTP, and a number of other protocols, including S3.

      Install and Configure Cyberduck

      1. Download Cyberduck by visiting their website.

      2. Once you have Cyberduck installed, open the program and click on Open Connection.

      3. At the top of the Open Connection dialog, select Amazon S3 from the dropdown menu.

        Open Cyberduck and click on 'Open Connection' to open the connection menu.

      4. For the Server address, enter us-east-1.linodeobjects.com.

      5. Enter your access key in the Access Key ID field, and your secret key in the Secret Access Key field.

      6. Click Connect.

      You are now ready to create a bucket in Object Storage.

      Create a Bucket with Cyberduck

      To create a bucket in Cyberduck:

      1. Right click within the window frame, or click Action, then click New Folder:

        Right click or click 'Action', then click 'New Folder'

      2. Enter your bucket’s name and then click Create. Bucket names need to be unique within the same cluster, including buckets on other Linode accounts. If the name of your bucket is already in use, you will have to choose a different name.

      To delete the bucket using Cyberduck, right click on the bucket and select Delete.

      Upload, Download, and Delete an Object with Cyberduck

      1. To upload objects with Cyberduck, you can simply drag and drop the object, or directory of objects, to the bucket you’d like to upload them to, and Cyberduck will do the rest. Alternatively, you can click on the Action button and select Upload from the menu:

        Click on the 'Action' button to use the file upload dialog.

      2. To make your objects publicly accessible, meaning that you can access them from the object’s URL, you need to set the proper READ permissions. Right click on the object and select Info.

      3. Click on the Permissions tab.

      4. Click the gear icon at the bottom of the window and select Everyone.

        Open the file permissions prompt by right clicking on the file and selecting.

      5. A new entry for Everyone will appear in the Access Control List. Next to Everyone, under the Permissions column heading, select READ from the drop down menu.

        Set the permissions for 'Everyone' to READ.

        Your object is now accessible via the internet, at the URL http://my-example-bucket.us-east-1.linodeobjects.com/example.txt, where my-example-bucket is the name of your bucket, and example.txt is the name of your object.

      6. To download an object, right click on the object and select Download, or click Download As if you’d like to specify the location of the download.

      7. To delete an object, right click the object name and select Delete.

      Create a Static Site with Cyberduck

      To create a static site from your bucket:

      1. Select a bucket, then right click on that bucket or select the Action button at the top of the menu.

      2. Click on Info, and then select the Distribution (CDN) tab.

      3. Check the box that reads Enable Website Configuration (HTTP) Distribution:

        Check the box labeled 'Enable Website Configuration (HTTP) Distribution'

      4. You will need to separately upload the index.html and 404.html files (or however you have named the index and error pages) to your bucket. Follow the instructions from the Upload, Download, and Delete an Object with Cyberduck section to upload these files.

      5. Your static site is accessed from a different URL than the generic URL for your Object Storage bucket. Static sites are available at the website-us-east-1 subdomain. Using my-example-bucket as an example, you would navigate to http://my-example-bucket.website-us-east-1.linodeobjects.com.

        For more information on hosting a static website with Object Storage, read our Host a Static Site using Linode Object Storage guide.

      Next Steps

      There are S3 bindings available for a number of programming languages, including the popular Boto library for Python, that allow you to interact with Object Storage programmatically.
