
      Deploy and Manage a Cluster with Linode Kubernetes Engine – A Tutorial


      Updated by Linode. Contributed by Linode.

      Note

      Linode Kubernetes Engine (LKE) is currently in Private Beta, and you may not have access to LKE through the Cloud Manager or other tools. To request access to the Private Beta, sign up here. Beta access awards you $100/month in free credits for the duration of the beta, which is automatically applied to your account when an LKE cluster is in use. Additionally, you will have access to the Linode Green Light community, a new program connecting beta users with our product and engineering teams.

      Additionally, because LKE is in Beta, there may be breaking changes to how you access and manage LKE. This guide will be updated to reflect these changes if and when they occur.

      What is the Linode Kubernetes Engine (LKE)?

      The Linode Kubernetes Engine (LKE) is a fully-managed container orchestration engine for deploying and managing containerized applications and workloads. LKE combines Linode’s ease of use and simple pricing with the infrastructure efficiency of Kubernetes. When you deploy an LKE cluster, you receive a Kubernetes Master at no additional cost; you only pay for the Linodes (worker nodes), NodeBalancers (load balancers), and Block Storage Volumes. Your LKE cluster’s Master node runs the Kubernetes control plane processes – including the API, scheduler, and resource controllers.

      Additional LKE features

      • etcd Backups : A snapshot of your cluster’s metadata is backed up continuously, so your cluster is automatically restored in the event of a failure.
      • High Availability : All of your control plane components are monitored and will automatically recover if they fail.

      In this Guide

      In this guide you will learn how to:

      • Create a cluster with the Linode Kubernetes Engine using the Cloud Manager.
      • Connect to your cluster with kubectl.
      • Modify your cluster’s node pools.
      • Delete your cluster.

      Caution

      This guide’s example instructions will create several billable resources on your Linode account. If you do not want to keep using the example cluster that you create, be sure to remove it when you have finished the guide.

      If you remove the resources afterward, you will only be billed for the hour(s) that the resources were present on your account.

      Before You Begin

      Enable Network Helper

      In order to use the Linode Kubernetes Engine, you will need to have Network Helper enabled globally on your account. Network Helper is a Linode-provided service that automatically sets a static network configuration for your Linode when it boots. To enable this global account setting, follow these instructions.

      If you don’t want to use Network Helper on some Linodes that are not part of your LKE clusters, the service can also be disabled on a per-Linode basis; see instructions here.

      Note

      If you have already deployed an LKE cluster and did not enable Network Helper, you can add a new node pool with the same type, size, and count as your initial node pool. Once your new node pool is ready, you can then delete the original node pool.

      Install kubectl

      You will need to install the kubectl client to your computer before proceeding. Follow the steps corresponding to your computer’s operating system.

      macOS:

      Install via Homebrew:

      brew install kubernetes-cli
      

      If you don’t have Homebrew installed, visit the Homebrew home page for instructions. Alternatively, you can manually install the binary; visit the Kubernetes documentation for instructions.

      Linux:

      1. Download the latest kubectl release:

        curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
        
      2. Make the downloaded file executable:

        chmod +x ./kubectl
        
      3. Move the command into your PATH:

        sudo mv ./kubectl /usr/local/bin/kubectl
        


      Windows:

      Visit the Kubernetes documentation for a link to the most recent Windows release.

      Create an LKE Cluster

      1. Log into your Linode Cloud Manager account.

        Note

        LKE is not available in the Linode Classic Manager.

      2. From the Linode dashboard, click the Create button in the top left-hand side of the screen and select Kubernetes from the dropdown menu.

        Create a Kubernetes Cluster Screen

      3. The Create a Kubernetes Cluster page will appear. Select the region where you would like your cluster to reside.

        Select your cluster's region

      4. In the Add Node Pools section, select the hardware resources for the Linode worker node(s) that make up your LKE cluster. If you decide that you need more or fewer hardware resources after you deploy your cluster, you can always edit your Node Pool.

        Select your cluster's resources

      5. Under Number of Linodes, input the number of Linode worker nodes you would like to add to your Node Pool. These worker nodes will have the hardware resources selected from the Add Node Pools section.

        Select the number of Linode worker nodes

      6. Click on the Add Node Pool button to add the pool to your cluster’s configuration. You will see a Cluster Summary appear on the right-hand side of the Cloud Manager detailing your cluster’s hardware resources and monthly cost.

        A list of pools also appears below the Add Node Pool button with quick edit Node Count fields. You can easily change the number of nodes by typing a new number in the field, or use the up and down arrows to increment or decrement the number in the field. Each row in this table also has a Remove link if you want to remove the node pool.

        Add a node pool to your Kubernetes cluster

      7. In the Cluster Label field, provide a name for your cluster. The name must be unique among all of the clusters on your account. This name will be how you identify your cluster in the Cloud Manager’s Dashboard.

        Provide a name for your cluster

      8. From the Version dropdown menu, select a Kubernetes version to deploy to your cluster.

        Select a Kubernetes version

      9. When you are satisfied with the configuration of your cluster, click the Create button on the right hand side of the screen. Your cluster’s detail page will appear where you will see your Node Pools listed. From this page, you can edit your existing Node Pools, add new Node Pools to your cluster, access your Kubeconfig file, and view an overview of your cluster’s resource details.

      Connect to your LKE Cluster with kubectl

      After you’ve created your LKE cluster using the Cloud Manager, you can begin interacting with and managing your cluster. You connect to it using the kubectl client on your computer. To configure kubectl, you’ll download your cluster’s kubeconfig file.

      Access and Download your kubeconfig

      Anytime after your cluster is created you can download its kubeconfig. The kubeconfig is a YAML file that will allow you to use kubectl to communicate with your cluster. Here is an example kubeconfig file:

      example-cluster-kubeconfig.yaml
      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: LS0tLS1CRUd...
          server: https://192.0.2.0:6443
        name: kubernetes
      contexts:
      - context:
          cluster: kubernetes
          user: kubernetes-admin
        name: kubernetes-admin@kubernetes
      current-context: kubernetes-admin@kubernetes
      kind: Config
      preferences: {}
      users:
      - name: kubernetes-admin
        user:
          client-certificate-data: LS0tLS1CRUd...
          client-key-data: LS0tLS1CRUd...

      This configuration file defines your cluster, users, and contexts.
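
      As a quick sanity check, you can point kubectl at the downloaded file and list the contexts it defines before making it your default configuration. This is a minimal example; adjust the path to wherever you saved the file:

        kubectl config get-contexts --kubeconfig ~/Downloads/kubeconfig.yaml
        kubectl config view --kubeconfig ~/Downloads/kubeconfig.yaml --minify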

      1. To access your cluster’s kubeconfig, log into your Cloud Manager account and navigate to the Kubernetes section.

      2. From the Kubernetes listing page, click on your cluster’s more options ellipsis and select Download kubeconfig. The file will be saved to your computer’s Downloads folder.

        Download your cluster's kubeconfig

        Download and view your Kubeconfig from the cluster’s details page

        You can also download the kubeconfig from the Kubernetes cluster’s details page.

        1. When viewing the Kubernetes listing page, click on the cluster for which you’d like to download a kubeconfig file.

        2. On the cluster’s details page, under the kubeconfig section, click the Download button. The file will be saved to your Downloads folder.

          Kubernetes Cluster Download kubeconfig from Details Page

        3. To view the contents of your kubeconfig file, click on the View button. A pane will appear with the contents of your cluster’s kubeconfig file.

          View the contents of your kubeconfig file

      3. Open a terminal shell and save your kubeconfig file’s path to the $KUBECONFIG environment variable. In the example command, the kubeconfig file is located in the Downloads folder; alter this path to match the file’s location on your computer:

        export KUBECONFIG=~/Downloads/kubeconfig.yaml
        

        Note

        It is common practice to store your kubeconfig files in the ~/.kube directory. By default, kubectl will search for a kubeconfig file named config that is located in the ~/.kube directory. You can specify other kubeconfig files by setting the $KUBECONFIG environment variable, as done in the step above.

      4. View your cluster’s nodes using kubectl.

        kubectl get nodes
        

        Note

        If your kubectl commands are not returning the resources and information you expect, then your client may be assigned to the wrong cluster context. Visit our Troubleshooting Kubernetes guide to learn how to switch cluster contexts.

        You are now ready to manage your cluster using kubectl. For more information about using kubectl, see Kubernetes’ Overview of kubectl guide.

      Persist the Kubeconfig Context

      If you create a new terminal window, it will not have access to the context that you specified using the previous instructions. This context information can be made persistent between new terminals by setting the KUBECONFIG environment variable in your shell’s configuration file.

      Note

      These instructions will persist the context for users of the Bash terminal. They will be similar for users of other terminals:

      1. Navigate to the $HOME/.kube directory:

        cd $HOME/.kube
        
      2. Create a directory called configs within $HOME/.kube. You can use this directory to store your kubeconfig files.

        mkdir configs
        
      3. Copy your kubeconfig.yaml file to the $HOME/.kube/configs directory.

        cp ~/Downloads/kubeconfig.yaml $HOME/.kube/configs/kubeconfig.yaml
        

        Note

        Alter the above line with the location of the Downloads folder on your computer.

        Optionally, you can give the copied file a different name to help distinguish it from other files in the configs directory.

      4. Open up your Bash profile (e.g. ~/.bash_profile) in the text editor of your choice and add your configuration file to the $KUBECONFIG PATH variable.

        If an export KUBECONFIG line is already present in the file, append to the end of this line as follows; if it is not present, add this line to the end of your file:

        export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config:$HOME/.kube/configs/kubeconfig.yaml
        
      5. Close your terminal window and open a new window to receive the changes to the $KUBECONFIG variable.

      6. Use the config get-contexts command for kubectl to view the available cluster contexts:

        kubectl config get-contexts
        

        You should see output similar to the following:

          
        CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
        *         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
        
        
      7. If your context is not already selected (denoted by an asterisk in the CURRENT column), switch to this context using the config use-context command. Supply the full name of the cluster (including the authorized user and the cluster):

        kubectl config use-context kubernetes-admin@kubernetes
        

        You should see output like the following:

          
        Switched to context "kubernetes-admin@kubernetes".
        
        
      8. You are now ready to interact with your cluster using kubectl. You can test the ability to interact with the cluster by retrieving a list of Pods in the kube-system namespace:

        kubectl get pods -n kube-system
        

      Modify a Cluster’s Node Pools

      You can use the Linode Cloud Manager to modify a cluster’s existing node pools by adding or removing nodes. You can also add or remove entire node pools from your cluster. This section will cover completing those tasks. For any other changes to your LKE cluster, you should use kubectl.
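
      For example, after resizing a node pool you can confirm from a terminal that the new worker nodes have registered with the cluster. This assumes your kubeconfig is already configured as described above:

        kubectl get nodes
        kubectl get pods --all-namespaces -o wide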

      Access your Cluster’s Details Page

      1. Click the Kubernetes link in the sidebar. The Kubernetes listing page will appear and you will see all your clusters listed.

        Kubernetes cluster listing page

      2. Click the cluster that you wish to modify. The Kubernetes cluster’s details page will appear.

        Kubernetes cluster's details page

      Edit or Remove Existing Node Pools

      1. On your cluster’s details page, click the Resize tab at the top of the page.

        Access your cluster's resize page

      2. Under the cluster’s Resize tab, you can now edit your existing node pool or remove it entirely:

        • The Node Count fields are now editable text boxes.

        • To remove a node pool, click the Remove link to the right.

        • As you make changes you will see an Updated Monthly Estimate; contrast this to the current Monthly Pricing under the Details panel on the right.

          Edit your cluster's node pool

      3. Click the Save button to save your changes; click the Clear Changes button to revert back to the cluster state before you started editing; or click the Cancel button to cancel editing.

      Add Node Pools

      1. On your cluster’s details page, click the Resize tab at the top of the page.

        Access your cluster's resize page

      2. Under the cluster’s Resize tab, navigate to the Add Node Pools panel. Select the type and size of Linode(s) you want to add to your new pool.

        Select a plan size for your new node pool

      3. Under Number of Linodes, input the number of Linode worker nodes you’d like to add to the pool in the text box; you can also use the arrow keys to increment or decrement this number. Click the Add Node Pool button.

        Add a new node pool to your cluster

      4. The new node pool appears in the Node Pools list which you can now edit, if desired.

        Kubernetes Cluster New Node Pool Created

      Delete a Cluster

      You can delete an entire cluster using the Linode Cloud Manager. These changes cannot be reverted once completed.

      1. On your cluster’s details page, click the Resize tab at the top of the page.

        Access your cluster's resize page

      2. Under the cluster’s Resize tab, scroll to the bottom and click on the Delete Cluster button.

        Delete your LKE cluster

      3. A confirmation pop-up will appear. Enter in your cluster’s name and click the Delete button to confirm.

        Kubernetes Delete Confirmation Dialog

      4. The Kubernetes listing page will appear and you will no longer see your deleted cluster.

      Next Steps

      Now that you have a running LKE cluster, you can start deploying workloads to it. Refer to our other guides to learn more:

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.




      Deploy and Manage a Cluster with Linode Kubernetes Engine and the Linode API – A Tutorial


      Updated by Linode. Contributed by Linode.

      Note

      Linode Kubernetes Engine (LKE) is currently in Private Beta, and you may not have access to LKE through the Cloud Manager or other tools. To request access to the Private Beta, sign up here. Beta access awards you $100/month in free credits for the duration of the beta, which is automatically applied to your account when an LKE cluster is in use. Additionally, you will have access to the Linode Green Light community, a new program connecting beta users with our product and engineering teams.

      Additionally, because LKE is in Beta, there may be breaking changes to how you access and manage LKE. This guide will be updated to reflect these changes if and when they occur.

      What is the Linode Kubernetes Engine (LKE)?

      The Linode Kubernetes Engine (LKE) is a fully-managed container orchestration engine for deploying and managing containerized applications and workloads. LKE combines Linode’s ease of use and simple pricing with the infrastructure efficiency of Kubernetes. When you deploy an LKE cluster, you receive a Kubernetes Master at no additional cost; you only pay for the Linodes (worker nodes), NodeBalancers (load balancers), and Block Storage Volumes. Your LKE cluster’s Master node runs the Kubernetes control plane processes – including the API, scheduler, and resource controllers.

      Additional LKE features

      • etcd Backups : A snapshot of your cluster’s metadata is backed up continuously, so your cluster is automatically restored in the event of a failure.
      • High Availability : All of your control plane components are monitored and will automatically recover if they fail.

      You can easily deploy an LKE cluster in several ways, including the Linode Cloud Manager and the Linode API, which is the focus of this guide.

      These Linode-provided interfaces can be used to create, delete, and update the structural elements of your cluster, including:

      • The number of nodes that make up a cluster’s node pools.
      • The region where your node pools are deployed.
      • The hardware resources for each node in your node pools.
      • The Kubernetes version deployed to your cluster’s Master node and worker nodes.

      The Kubernetes API and kubectl are the primary ways you will interact with your LKE cluster once it’s been created. These tools can be used to configure, deploy, inspect, and secure your Kubernetes workloads, deploy applications, create services, configure storage and networking, and define controllers.
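
      As a brief illustration of the kind of workflow kubectl enables once your cluster exists, the commands below create a Deployment and expose it as a Service. The nginx image and the example-nginx name are only examples, not part of LKE itself:

        kubectl create deployment example-nginx --image=nginx
        kubectl scale deployment example-nginx --replicas=3
        kubectl expose deployment example-nginx --port=80 --type=NodePort
        kubectl get services example-nginx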

      In this Guide

      This guide will cover how to use the Linode API to:

      • Create an LKE cluster.
      • Connect to your cluster with kubectl.
      • Inspect your cluster and its node pools.
      • Modify your cluster’s label and node pools.
      • Delete your cluster and its node pools.

      Before You Begin

      1. Familiarize yourself with the Linode Kubernetes Engine service. This information will help you understand the benefits and limitations of LKE.

      2. Create an API Token. You will need this to access the LKE service; the curl examples in this guide assume it is stored in an environment variable, as shown in the example after this list.

      3. Install kubectl on your computer. You will use kubectl to interact with your cluster once it’s deployed.

      4. If you are new to Kubernetes, refer to our A Beginner’s Guide to Kubernetes series to learn about general Kubernetes concepts. This guide assumes a general understanding of core Kubernetes concepts.
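
      The curl examples throughout this guide reference a $TOKEN environment variable. One way to provide it, substituting your own API token, is to export it in your shell before running the commands; you can optionally verify the token with a simple request to the API’s profile endpoint:

        export TOKEN='<your-linode-api-token>'
        curl -H "Authorization: Bearer $TOKEN" https://api.linode.com/v4/profile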

      Enable Network Helper

      In order to use the Linode Kubernetes Engine, you will need to have Network Helper enabled globally on your account. Network Helper is a Linode-provided service that automatically sets a static network configuration for your Linode when it boots. To enable this global account setting, follow these instructions.

      If you don’t want to use Network Helper on some Linodes that are not part of your LKE clusters, the service can also be disabled on a per-Linode basis; see instructions here.

      Note

      If you have already deployed an LKE cluster and did not enable Network Helper, you can add a new node pool with the same type, size, and count as your initial node pool. Once your new node pool is ready, you can then delete the original node pool.

      Install kubectl

      macOS:

      Install via Homebrew:

      brew install kubernetes-cli
      

      If you don’t have Homebrew installed, visit the Homebrew home page for instructions. Alternatively, you can manually install the binary; visit the Kubernetes documentation for instructions.

      Linux:

      1. Download the latest kubectl release:

        curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
        
      2. Make the downloaded file executable:

        chmod +x ./kubectl
        
      3. Move the command into your PATH:

        sudo mv ./kubectl /usr/local/bin/kubectl
        


      Windows:

      Visit the Kubernetes documentation for a link to the most recent Windows release.

      Create an LKE Cluster

      Required Parameters Description
      region The data center region where your cluster will be deployed. Currently, us-central is the only available region for LKE clusters.
      label A human readable name to identify your cluster. This must be unique. If no label is provided, one will be assigned automatically. Labels must start with an alpha [a-z][A-Z] character, must only consist of alphanumeric characters and dashes, and must not contain two dashes in a row.
      node_pools The collections of Linodes that serve as the worker nodes in your LKE cluster.
      version The desired version of Kubernetes for this cluster.

      1. To create an LKE Cluster, send a POST request to the /lke/clusters endpoint. The example below displays all possible request body parameters. Note that tags is an optional parameter.

        curl -H "Content-Type: application/json" 
             -H "Authorization: Bearer $TOKEN" 
             -X POST -d '{
                "label": "cluster12345",
                "region": "us-central",
                "version": "1.16",
                "tags": ["ecomm", "blogs"],
                "node_pools": [
                  { "type": "g6-standard-2", "count": 2},
                  { "type": "g6-standard-4", "count": 3}
                ]
             }' https://api.linode.com/v4/lke/clusters
        

        You will receive a response similar to:

          
        {"version": "1.16", "updated": "2019-08-02T17:17:49", "region": "us-central", "tags": ["ecomm", "blogs"], "label": "cluster12345", "id": 456, "created": "2019-22-02T17:17:49"}%
            
        
      2. Make note of your cluster’s ID, as you will need it to continue to interact with your cluster in the next sections. In the example above, the cluster’s ID is "id": 456. You can also access your cluster’s ID by listing all LKE Clusters on your account, or capture it directly from the API response, as shown in the example after this list.

        Note

        Each Linode account has a limit to the number of Linode resources they can deploy. This includes services, like Linodes, NodeBalancers, Block Storage, etc. If you run into issues deploying the number of nodes you designate for a given cluster’s node pool, you may have run into a limit on the number of resources allowed on your account. Contact Linode Support if you believe this may be the case.
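
      If you are scripting against the API, you can capture the new cluster’s ID directly from the creation response rather than copying it by hand. This is a sketch that assumes the jq utility is installed; the request body is the same as in step 1:

        CLUSTER_ID=$(curl -s -H "Content-Type: application/json" \
             -H "Authorization: Bearer $TOKEN" \
             -X POST -d '{
                "label": "cluster12345",
                "region": "us-central",
                "version": "1.16",
                "node_pools": [{ "type": "g6-standard-2", "count": 2 }]
             }' https://api.linode.com/v4/lke/clusters | jq -r '.id')
        echo $CLUSTER_ID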

      Connect to your LKE Cluster

      Now that your LKE cluster is created, you can access and manage your cluster using kubectl on your computer. This will give you the ability to interact with the Kubernetes API, and to create and manage Kubernetes objects in your cluster.

      To communicate with your LKE cluster, kubectl requires a copy of your cluster’s kubeconfig. In this section, you will access the contents of your kubeconfig using the Linode API and then set up kubectl to communicate with your LKE cluster.

      1. Access your LKE cluster’s kubeconfig file by sending a GET request to the /lke/clusters/{clusterId}/kubeconfig endpoint. Ensure you replace 12345 with your cluster’s ID that you recorded in the previous section:

        curl -H "Authorization: Bearer $TOKEN" 
          https://api.linode.com/v4/lke/clusters/12345/kubeconfig
        

        The API returns a base64 encoded string (a useful format for automated pipelines) representing your kubeconfig; a combined one-line pipeline is sketched at the end of this section. Your output will resemble the following:

          
        {"kubeconfig": "YXBpVmVyc2lvbjogdjEKY2x1c3RlcnM6Ci0gY2x1c3RlcjoKICAgIGNlcnRpZmljYXRlLWF1dGhvcml0eS1kYXRhOiBMUzB0TFMxQ1JVZEpUaUJEUlZKVVNVWkpRMEZVUlMwdExTMHRDazFKU1VONVJFTkRRV0pEWjBGM1NVSkJaMGxDUVVSQlRrSm5hM0ZvYTJsSE9YY3dRa0ZSYzBaQlJFRldUVkpOZDBWUldVUldVVkZFUlhkd2NtUlhTbXdLWTIwMWJHUkhWbnBOUWpSWVJGUkZOVTFFWjNkTmFrVXpUVlJqTVUxV2IxaEVWRWsx ... 0TFMwdExRbz0K"}%
        
        
      2. Copy the kubeconfig field’s value from the response body, since you will need it in the next step.

        Note

        Make sure you only copy the long string inside the quotes following "kubeconfig": in your output. Do not copy the curly braces or anything outside of them. You will receive an error if you use the full output in later steps.

      3. Save the base64 kubeconfig to an environment variable:

        KUBE_VAR='YXBpVmVyc2lvbjogdjEK ... 0TFMwdExRbz0K'
        
      4. Navigate to your computer’s ~/.kube directory. This is where kubectl looks for kubeconfig files by default.

        cd ~/.kube
        
      5. Create a directory called configs within ~/.kube. You can use this directory to store your kubeconfig files.

        mkdir configs
        cd configs
        
      6. Decode the contents of $KUBE_VAR and save it to a new YAML file:

        echo $KUBE_VAR | base64 -D > cluster12345-config.yaml
        

        Note

        The YAML file that you decode to (cluster12345-config.yaml here) can have any name of your choosing. Also note that the -D flag in this example is accepted by the base64 tool included with macOS; on most Linux distributions, use base64 -d (or --decode) instead.

      7. Add the kubeconfig file to your $KUBECONFIG environment variable.

        export KUBECONFIG=cluster12345-config.yaml
        
      8. Verify that your cluster is selected as kubectl’s current context:

        kubectl config get-contexts
        
      9. View the contents of the configuration:

        kubectl config view
        


      10. View all nodes in your LKE cluster using kubectl:

        kubectl get nodes
        

        Your output will resemble the following example, but will vary depending on your own cluster’s configurations.

          
        NAME                      STATUS   ROLES    AGE     VERSION
        lke166-193-5d44703cd092   Ready    <none>   2d22h   v1.14.0
        lke166-194-5d44703cd780   Ready    <none>   2d22h   v1.14.0
        lke166-195-5d44703cd691   Ready    <none>   2d22h   v1.14.0
        lke166-196-5d44703cd432   Ready    <none>   2d22h   v1.14.0
        lke166-197-5d44703cd211   Ready    <none>   2d22h   v1.14.0
        
        

        Now that you are connected to your LKE cluster, you can begin using kubectl to deploy applications, inspect and manage cluster resources, and view logs.
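
      Because the API returns the kubeconfig as a base64 encoded string, steps 1, 3, and 6 above can also be collapsed into a single pipeline, which is convenient for automation. This is a sketch that assumes jq is installed, that 12345 is your cluster’s ID, and that a GNU base64 is in use (on macOS, replace -d with -D):

        curl -s -H "Authorization: Bearer $TOKEN" \
            https://api.linode.com/v4/lke/clusters/12345/kubeconfig \
            | jq -r '.kubeconfig' | base64 -d > ~/.kube/configs/cluster12345-config.yaml
        export KUBECONFIG=~/.kube/configs/cluster12345-config.yaml
        kubectl get nodes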

      Persist the Kubeconfig Context

      If you create a new terminal window, it will not have access to the context that you specified using the previous instructions. This context information can be made persistent between new terminals by setting the KUBECONFIG environment variable in your shell’s configuration file.

      Note

      These instructions will persist the context for users of the Bash terminal. They will be similar for users of other terminals:

      1. Open up your Bash profile (e.g. ~/.bash_profile) in the text editor of your choice and add your configuration file to the $KUBECONFIG PATH variable.

        If an export KUBECONFIG line is already present in the file, append to the end of this line as follows; if it is not present, add this line to the end of your file:

        export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config:$HOME/.kube/configs/cluster12345-config.yaml
        

        Note

        Alter the $HOME/.kube/configs/cluster12345-config.yaml path in the above line with the name of the file you decoded to in the previous section.

      2. Close your terminal window and open a new window to receive the changes to the $KUBECONFIG variable.

      3. Use the config get-contexts command for kubectl to view the available cluster contexts:

        kubectl config get-contexts
        

        You should see output similar to the following:

          
        CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
        *         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
        
        
      4. If your context is not already selected (denoted by an asterisk in the CURRENT column), switch to this context using the config use-context command. Supply the full name of the cluster (including the authorized user and the cluster):

        kubectl config use-context kubernetes-admin@kubernetes
        

        You should see output like the following:

          
        Switched to context "kubernetes-admin@kubernetes".
        
        
      5. You are now ready to interact with your cluster using kubectl. You can test the ability to interact with the cluster by retrieving a list of Pods in the kube-system namespace:

        kubectl get pods -n kube-system
        

      Inspect your LKE Cluster

      Once you have created an LKE Cluster, you can access information about its structural configuration using the Linode API.

      List LKE Clusters

      To view a list of all your LKE clusters, send a GET request to the /lke/clusters endpoint.

      curl -H "Authorization: Bearer $TOKEN" 
          https://api.linode.com/v4/lke/clusters
      

      The returned response body will display the number of clusters deployed to your account and general details about your LKE clusters:

        
      {"results": 2, "data": [{"updated": "2019-08-02T17:17:49", "region": "us-central", "id": 456, "version": "1.16", "label": "cluster-12345", "created": "2019-08-02T17:17:49", "tags": ["ecomm", "blogs"]}, {"updated": "2019-08-05T17:00:04", "region": "us-central", "id": 789, "version": "1.16", "label": "cluster-56789", "created": "2019-08-05T17:00:04", "tags": ["ecomm", "marketing"]}], "pages": 1, "page": 1}%
      
      

      View an LKE Cluster

      You can use the Linode API to access details about an individual LKE cluster. You will need your cluster’s ID to access information about this resource. If you don’t know your cluster’s ID, see the List LKE Clusters section.

      Required Parameters Description
      clusterId ID of the LKE cluster to look up.

      To view your LKE cluster, send a GET request to the /lke/clusters/{clusterId} endpoint. In this example, ensure you replace 12345 with your cluster’s ID:

      curl -H "Authorization: Bearer $TOKEN" 
              https://api.linode.com/v4/lke/clusters/12345
      

      Your output will resemble the following:

        
      {"created": "2019-08-02T17:17:49", "updated": "2019-08-02T17:17:49", "version": "1.16", "tags": ["ecomm", "blogs"], "label": "cluster-12345", "id": 456, "region": "us-central"}%
      
      

      List a Cluster’s Node Pools

      A node pool consists of one or more Linodes (worker nodes). Each node in the pool has the same plan type. Your LKE cluster can have several node pools. Each pool is assigned its own plan type and number of nodes. To view a list of an LKE cluster’s node pools, you need your cluster’s ID. If you don’t know your cluster’s ID, see the List LKE Clusters section.

      Required Parameters Description
      clusterId ID of the LKE cluster to look up.

      To list your cluster’s node pools, send a GET request to the /lke/clusters/{clusterId}/pools endpoint. In this example, replace 12345 with your cluster’s ID:

      curl -H "Authorization: Bearer $TOKEN" 
          https://api.linode.com/v4/lke/clusters/12345/pools
      

      The response body will include information on each node pool’s pool ID, Linode type, and node count; and each node’s individual ID and status.

        
      {"pages": 1, "page": 1, "data": [{"count": 2, "id": 193, "type": "g6-standard-2", "linodes": [{"id": "13841932", "status": "ready "}, {"id": "13841933", "status": "ready"}]}, {"count": 3, "id": 194, "type": "g6-standard-4", "linodes": [{"id": "13841934", "status": "ready"}, {"id": "13841935", "status": "ready"}, {"id": "13841932", "status": "ready"}]}], "results": 2}%
      
      

      View a Node Pool

      You can use the Linode API to access details about a specific node pool in an LKE cluster. You will need your cluster’s ID and node pool ID to access information about this resource. To retrieve your cluster’s ID, see the List LKE Clusters section. To find a node pool’s ID, see the List a Cluster’s Node Pools section.

      Required Parameters Description
      clusterId ID of the LKE cluster to look up.
      poolId ID of the LKE node pool to look up.

      To view a specific node pool, send a GET request to the /lke/clusters/{clusterId}/pools/{poolId} endpoint. In this example, replace 12345 with your cluster’s ID and 456 with the node pool’s ID:

      curl -H "Authorization: Bearer $TOKEN" 
          https://api.linode.com/v4/lke/clusters/12345/pools/456
      

      The response body provides information about the number of nodes in the node pool, the node pool’s ID, and type. You will also retrieve information about each individual node in the node pool, including the Linode’s ID and status.

        
      {"count": 2, "id": 193, "type": "g6-standard-2", "linodes": [{"id": "13841932", "status": "ready"}, {"id": "13841933", "status": "ready"}]}%
      
      

      Note

      If desired, you can use your node pool’s Linode ID(s) to get more details about each node in the pool. Send a GET request to the /linode/instances/{linodeId} endpoint. In this example, ensure you replace 13841932 with your Linode’s ID.

      curl -H "Authorization: Bearer $TOKEN" 
          https://api.linode.com/v4/linode/instances/13841932
      

      Although you have access to your cluster’s nodes, it is recommended that you only interact with them via Linode’s LKE interfaces (like the LKE endpoints in Linode’s API, or the Kubernetes section in the Linode Cloud Manager), or via the Kubernetes API and kubectl.
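
      For example, rather than logging into a worker Linode directly, you can inspect it through the Kubernetes API. The node name below is taken from the earlier example output; substitute one of your own node names from kubectl get nodes:

        kubectl get nodes -o wide
        kubectl describe node lke166-193-5d44703cd092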

      Modify your LKE Cluster

      Once an LKE cluster is created, you can modify two aspects of it: the cluster’s label, and the cluster’s node pools. In this section you will learn how to modify each of these parts of your cluster.

      Update your LKE Cluster Label

      Required Parameters Description
      clusterId ID of the LKE cluster to look up.

      To update your LKE cluster’s label, send a PUT request to the /lke/clusters/{clusterId} endpoint. In this example, ensure you replace 12345 with your cluster’s ID:

      curl -H "Content-Type: application/json" 
              -H "Authorization: Bearer $TOKEN" 
              -X PUT -d '{
              "label": "updated-cluster-name"
              }' https://api.linode.com/v4/lke/clusters/12345
      

      The response body will display the updated cluster label:

        
      {"created": "2019-08-02T17:17:49", "updated": "2019-08-05T19:11:19", "version": "1.16", "tags": ["ecomm", "blogs"], "label": "updated-cluster-name", "id": 456, "region": "us-central"}%
      
      

      Add a Node Pool to your LKE Cluster

      A node pool consists of one or more Linodes (worker nodes). Each node in the pool has the same plan type and configuration. Your LKE cluster can have several node pools, each pool with its own plan type and number of nodes.

      You will need your cluster’s ID in order to add a node pool to it. If you don’t know your cluster’s ID, see the List LKE Clusters section.

      Required Parameters Description
      clusterId ID of the LKE cluster to look up.
      type The Linode plan type to use for all the nodes in the pool. Linode plans designate the type of hardware resources applied to your instance.
      count The number of nodes to include in the node pool. Each node will have the same plan type.

      To add a node pool to an existing LKE cluster, send a POST request to the /lke/clusters/{clusterId}/pools endpoint. The request body must include the type and count parameters. In the URL of this example, ensure you replace 12345 with your own cluster’s ID:

      curl -H "Content-Type: application/json" 
              -H "Authorization: Bearer $TOKEN" 
              -X POST -d '{
              "type": "g6-standard-1",
              "count": 5
              }' https://api.linode.com/v4/lke/clusters/12345/pools
      

      The response body will resemble the following:

        
      {"count": 5, "id": 196, "type": "g6-standard-1", "linodes": [{"id": "13841945", "status": "ready"}, {"id": "13841946", "status": "ready"}, {"id": "13841947", "status": "ready"}, {"id": "13841948", "status": "ready"}, {"id": "13841949", "status": "ready"}]}%
      
      

      Note

      Each Linode account has a limit to the number of Linode resources they can deploy. This includes services, like Linodes, NodeBalancers, Block Storage, etc. If you run into issues deploying the number of nodes you designate for a given cluster’s node pool, you may have run into a limit on the number of resources allowed on your account. Contact Linode Support if you believe this may be the case.

      Resize your LKE Node Pool

      You can resize an LKE cluster’s node pool to increase or decrease its number of nodes. You will need your cluster’s ID and the node pool’s ID in order to resize it. If you don’t know your cluster’s ID, see the List LKE Clusters section. If you don’t know your node pool’s ID, see the List a Cluster’s Node Pools section.

      Note

      You cannot modify an existing node pool’s plan type. If you would like your LKE cluster to use a different node pool plan type, you can add a new node pool to your cluster with the same number of nodes to replace the current node pool. You can then delete the node pool that is no longer needed.

      Required Parameters Description
      clusterId ID of the LKE cluster to look up.
      poolId ID of the LKE node pool to look up.
      count The number of Linodes in the node pool.

      To update your node pool’s node count, send a PUT request to the /lke/clusters/{clusterId}/pools/{poolId} endpoint. In the URL of this example, replace 12345 with your cluster’s ID and 196 with your node pool’s ID:

      curl -H "Content-Type: application/json" 
          -H "Authorization: Bearer $TOKEN" 
          -X PUT -d '{
              "type": "g6-standard-4",
              "count": 6
          }' https://api.linode.com/v4/lke/clusters/12345/pools/196
      

      Note

      Each Linode account has a limit to the number of Linode resources they can deploy. This includes services, like Linodes, NodeBalancers, Block Storage, etc. If you run into issues deploying the number of nodes you designate for a given cluster’s node pool, you may have run into a limit on the number of resources allowed on your account. Contact Linode Support if you believe this may be the case.

      Delete a Node Pool from an LKE Cluster

      When you delete a node pool you also delete the Linodes (nodes) and routes to them. The Pods running on those nodes are evicted and rescheduled. If you have assigned Pods to the deleted Nodes, the Pods might remain in an unschedulable condition if no other node in the cluster satisfies the node selector.
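
      If you want workloads moved off of a pool’s nodes gracefully before deleting it, you can cordon and drain those nodes with kubectl first. The node name below is only an example taken from earlier output; substitute your own node names from kubectl get nodes:

        kubectl get nodes
        kubectl drain lke166-196-5d44703cd432 --ignore-daemonsets --delete-local-data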

      Required Parameters Description
      clusterId ID of the LKE cluster to look up.
      poolId ID of the LKE node pool to look up.

      To delete a node pool from an LKE cluster, send a DELETE request to the /lke/clusters/{clusterId}/pools/{poolId} endpoint. In the URL of this example, replace 12345 with your cluster’s ID and 196 with your cluster’s node pool ID:

      Caution

      This step is permanent and will result in the loss of data.

      curl -H "Authorization: Bearer $TOKEN" 
          -X DELETE 
          https://api.linode.com/v4/lke/clusters/12345/pools/196
      

      Delete an LKE Cluster

      Deleting an LKE cluster will delete the Master node, all worker nodes, and all NodeBalancers created by the cluster. However, it will not delete any Volumes created by the LKE cluster.
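
      Because Volumes are not removed automatically, you may want to list the Block Storage Volumes on your account after deleting the cluster so you can decide whether to keep or remove them. A minimal example using the API’s volumes endpoint:

        curl -H "Authorization: Bearer $TOKEN" \
            https://api.linode.com/v4/volumes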

      To delete an LKE Cluster, send a DELETE request to the /lke/clusters/{clusterId} endpoint. In the URL of this example, replace 12345 with your cluster’s ID:

      Caution

      This step is permanent and will result in the loss of data.

      curl -H "Authorization: Bearer $TOKEN" 
          -X DELETE 
          https://api.linode.com/v4/lke/clusters/12345
      

      Where to Go From Here?

      Now that you have created an LKE cluster, you can start deploying workloads to it. Review these guides for further help:


      This guide is published under a CC BY-ND 4.0 license.




      How To Configure a Galera Cluster with MariaDB on Debian 10 Servers


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Clustering adds high availability to your database by distributing changes to different servers. In the event that one of the instances fails, others are quickly available to continue serving.

      Clusters come in two general configurations, active-passive and active-active. In active-passive clusters, all writes are done on a single active server and then copied to one or more passive servers that are poised to take over only in the event of an active server failure. Some active-passive clusters also allow SELECT operations on passive nodes. In an active-active cluster, every node is read-write and a change made to one is replicated to all.

      MariaDB is an open source relational database system that is fully compatible with the popular MySQL RDBMS system. You can read the official documentation for MariaDB at this page. Galera is a database clustering solution that enables you to set up multi-master clusters using synchronous replication. Galera automatically handles keeping the data on different nodes in sync while allowing you to send read and write queries to any of the nodes in the cluster. You can learn more about Galera at the official documentation page.

      In this guide, you will configure an active-active MariaDB Galera cluster. For demonstration purposes, you will configure and test three Debian 10 servers that will act as nodes in the cluster. This is the smallest configurable cluster.

      Prerequisites

      To follow along, you will need a DigitalOcean account, in addition to the following:

      • Three Debian 10 servers with private networking enabled, each with a non-root user with sudo privileges.

      While the steps in this tutorial have been written for and tested against DigitalOcean Droplets, much of this guide should also be applicable to non-DigitalOcean servers with private networking enabled.

      Step 1 — Adding the MariaDB Repositories to All Servers

      In this step, you will add the relevant MariaDB package repositories to each of your three servers so that you will be able to install the right version of MariaDB used in this tutorial. Once the repositories are updated on all three servers, you will be ready to install MariaDB.

      One thing to note about MariaDB is that it originated as a drop-in replacement for MySQL, so in many configuration files and startup scripts, you’ll see mysql rather than mariadb. For consistency’s sake, we will use mysql in this guide where either could work.

      In this tutorial, you will use MariaDB version 10.4. Since this version isn’t included in the default Debian repositories, you’ll start by adding the external Debian repository maintained by the MariaDB project to all three of your servers.

      To add the repository, you will first need to install the dirmngr and software-properties-common packages. dirmngr is a server for managing repository certificates and keys. software-properties-common is a package that allows easy addition and updates of source repository locations. Install the two packages by running:

      • sudo apt install dirmngr software-properties-common

      Note: MariaDB is a well-respected provider, but not all external repositories are reliable. Be sure to install only from trusted sources.

      You’ll add the MariaDB repository key with the apt-key command, which the APT package manager will use to verify that the package is authentic:

      • sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8

      Once you have the trusted key in the database, you can add the repository with the following command:

      • sudo add-apt-repository 'deb [arch=amd64] http://nyc2.mirrors.digitalocean.com/mariadb/repo/10.4/debian buster main'

      After adding the repository, run apt update in order to include package manifests from the new repository:

      • sudo apt update

      Once you have completed this step on your first server, repeat for your second and third servers.

      Now that you have successfully added the package repository on all three of your servers, you're ready to install MariaDB in the next section.

      Step 2 — Installing MariaDB on All Servers

      In this step, you will install the actual MariaDB packages on your three servers.

      Beginning with version 10.1, the MariaDB Server and MariaDB Galera Server packages are combined, so installing mariadb-server will automatically install Galera and several dependencies:

      • sudo apt install mariadb-server

      You will be asked to confirm whether you would like to proceed with the installation. Enter yes to continue with the installation.

      From MariaDB version 10.4 onwards, the root MariaDB user does not have a password by default. To set a password for the root user, start by logging into MariaDB:

      • sudo mysql -uroot

      Once you're inside the MariaDB shell, change the password by executing the following statement:

      • set password = password("your_password");

      You will see the following output indicating that the password was set correctly:

      Output

      Query OK, 0 rows affected (0.001 sec)

      Exit the MariaDB shell by running the following command:

      • quit;

      If you would like to learn more about SQL or need a quick refresher, check out our MySQL tutorial.

      You now have all of the pieces necessary to begin configuring the cluster, but since you'll be relying on rsync in later steps, make sure it's installed:

      • sudo apt install rsync

      This will confirm that the newest version of rsync is already available or prompt you to upgrade or install it.

      Once you have installed MariaDB and set the root password on your first server, repeat these steps for your other two servers.

      Now that you have installed MariaDB successfully on each of the three servers, you can proceed to the configuration step in the next section.

      Step 3 — Configuring the First Node

      In this step you will configure your first node. Each node in the cluster needs to have a nearly identical configuration. Because of this, you will do all of the configuration on your first machine, and then copy it to the other nodes.

      By default, MariaDB is configured to check the /etc/mysql/conf.d directory to get additional configuration settings from files ending in .cnf. Create a file in this directory with all of your cluster-specific directives:

      • sudo nano /etc/mysql/conf.d/galera.cnf

      Add the following configuration into the file. The configuration specifies different cluster options, details about the current server and the other servers in the cluster, and replication-related settings. Note that the IP addresses in the configuration are the private addresses of your respective servers; replace the placeholder values with the appropriate IP addresses.

      /etc/mysql/conf.d/galera.cnf

      [mysqld]
      binlog_format=ROW
      default-storage-engine=innodb
      innodb_autoinc_lock_mode=2
      bind-address=0.0.0.0
      
      # Galera Provider Configuration
      wsrep_on=ON
      wsrep_provider=/usr/lib/galera/libgalera_smm.so
      
      # Galera Cluster Configuration
      wsrep_cluster_name="test_cluster"
      wsrep_cluster_address="gcomm://First_Node_IP,Second_Node_IP,Third_Node_IP"
      
      # Galera Synchronization Configuration
      wsrep_sst_method=rsync
      
      # Galera Node Configuration
      wsrep_node_address="This_Node_IP"
      wsrep_node_name="This_Node_Name"
      
      • The first section modifies or re-asserts MariaDB/MySQL settings that will allow the cluster to function correctly. For example, Galera won’t work with MyISAM or similar non-transactional storage engines, and mysqld must not be bound to the IP address for localhost. You can learn about the settings in more detail on the Galera Cluster system configuration page.
      • The "Galera Provider Configuration" section configures the MariaDB components that provide a WriteSet replication API. This means Galera in your case, since Galera is a wsrep (WriteSet Replication) provider. You specify the general parameters to configure the initial replication environment. This doesn't require any customization, but you can learn more about Galera configuration options.
      • The "Galera Cluster Configuration" section defines the cluster, identifying the cluster members by IP address or resolvable domain name and creating a name for the cluster to ensure that members join the correct group. You can change the wsrep_cluster_name to something more meaningful than test_cluster or leave it as-is, but you must update wsrep_cluster_address with the private IP addresses of your three servers.
      • The "Galera Synchronization Configuration" section defines how the cluster will communicate and synchronize data between members. This is used only for the state transfer that happens when a node comes online. For your initial setup, you are using rsync, because it's commonly available and does what you'll need for now.
      • The "Galera Node Configuration" section clarifies the IP address and the name of the current server. This is helpful when trying to diagnose problems in logs and for referencing each server in multiple ways. The wsrep_node_address must match the address of the machine you're on, but you can choose any name you want in order to help you identify the node in log files.

      When you are satisfied with your cluster configuration file, copy the contents into your clipboard, save and close the file. With the nano text editor, you can do this by pressing CTRL+X, typing y, and pressing ENTER.
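
      If you would rather not copy and paste, you can also transfer the file to the other nodes over SSH. This is a rough sketch that assumes your user can reach the other servers and that the placeholder user and IPs are replaced with your own; you will still need to edit the Galera Node Configuration section on each node afterward:

      • scp /etc/mysql/conf.d/galera.cnf your_user@Second_Node_IP:/tmp/galera.cnf
      • scp /etc/mysql/conf.d/galera.cnf your_user@Third_Node_IP:/tmp/galera.cnf

      On each of the other nodes, move the file into place with sudo mv /tmp/galera.cnf /etc/mysql/conf.d/galera.cnf before making the node-specific edits described in the next step.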

      Now that you have configured your first node successfully, you can move on to configuring the remaining nodes in the next section.

      Step 4 — Configuring the Remaining Nodes

      In this step, you will configure the remaining two nodes. On your second node, open the configuration file:

      • sudo nano /etc/mysql/conf.d/galera.cnf

      Paste in the configuration you copied from the first node, then update the Galera Node Configuration to use the IP address or resolvable domain name for the specific node you're setting up. Finally, update its name, which you can set to whatever helps you identify the node in your log files:

      /etc/mysql/conf.d/galera.cnf

      . . .
      # Galera Node Configuration
      wsrep_node_address="This_Node_IP"
      wsrep_node_name="This_Node_Name"
      . . .
      

      Save and exit the file.

      Once you have completed these steps, repeat them on the third node.

      You're almost ready to bring up the cluster, but before you do, make sure that the appropriate ports are open in your firewall.

      Step 5 — Opening the Firewall on Every Server

      In this step, you will configure your firewall so that the ports required for inter-node communication are open. On every server, check the status of the firewall by running:

      • sudo ufw status

      In this case, only SSH is allowed through:

      Output

      Status: active

      To                         Action      From
      --                         ------      ----
      OpenSSH                    ALLOW       Anywhere
      OpenSSH (v6)               ALLOW       Anywhere (v6)

      Since only SSH traffic is permitted in this case, you’ll need to add rules for MySQL and Galera traffic. If you tried to start the cluster, it would fail because of firewall rules.

      Galera can make use of four ports:

      • 3306 For MySQL client connections and State Snapshot Transfer that use the mysqldump method.
      • 4567 For Galera Cluster replication traffic. Multicast replication uses both UDP transport and TCP on this port.
      • 4568 For Incremental State Transfer.
      • 4444 For all other State Snapshot Transfer.

      In this example, you’ll open all four ports while you do your setup. Once you've confirmed that replication is working, you'd want to close any ports you're not actually using and restrict traffic to just servers in the cluster.

      Open the ports with the following command:

      • sudo ufw allow 3306,4567,4568,4444/tcp
      • sudo ufw allow 4567/udp

      Note: Depending on what else is running on your servers you might want to restrict access right away. The UFW Essentials: Common Firewall Rules and Commands guide can help with this.
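
      Once you have confirmed replication at the end of this guide, one way to tighten these rules is to allow the Galera ports only from the other cluster members and then remove the broad rules added above. This is only a sketch; replace the placeholder IP with each peer's private address and repeat for every peer on every server:

      • sudo ufw allow from Other_Node_IP to any port 3306,4567,4568,4444 proto tcp
      • sudo ufw allow from Other_Node_IP to any port 4567 proto udp
      • sudo ufw delete allow 3306,4567,4568,4444/tcp
      • sudo ufw delete allow 4567/udp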

      After you have configured your firewall on the first node, create the same firewall settings on the second and third node.

      Now that you have configured the firewalls successfully, you're ready to start the cluster in the next step.

      Step 6 — Starting the Cluster

      In this step, you will start your MariaDB cluster. To begin, you need to stop the running MariaDB service so that you can bring your cluster online.

      Stop MariaDB on All Three Servers

      Use the following command on all three servers to stop MariaDB so that you can bring them back up in a cluster:

      • sudo systemctl stop mysql

      systemctl doesn't display the outcome of all service management commands, so to be sure you succeeded, use the following command:

      • sudo systemctl status mysql

      If the last line looks something like the following, the command was successful:

      Output

      . . . Apr 26 03:34:23 galera-node-01 systemd[1]: Stopped MariaDB 10.4.4 database server.

      Once you've shut down mysql on all of the servers, you're ready to proceed.

      Bring Up the First Node

      To bring up the first node, you'll need to use a special startup script. The way you've configured your cluster, each node that comes online tries to connect to at least one other node specified in its galera.cnf file to get its initial state. Without using the galera_new_cluster script that allows systemd to pass the --wsrep-new-cluster parameter, a normal systemctl start mysql would fail because there are no nodes running for the first node to connect with.

      To do this, run the galera_new_cluster script on your first node:

      • sudo galera_new_cluster

      This command will not display any output on successful execution. When this script succeeds, the node is registered as part of the cluster, and you can see it with the following command:

      • mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

      You will see the following output indicating that there is one node in the cluster:

      Output

      +--------------------+-------+
      | Variable_name      | Value |
      +--------------------+-------+
      | wsrep_cluster_size | 1     |
      +--------------------+-------+

      On the remaining nodes, you can start mysql normally. They will search for any member of the cluster list that is online, so when they find one, they will join the cluster.

      Bring Up the Second Node

      Now you can bring up the second node. Start mysql:

      • sudo systemctl start mysql

      No output will be displayed on successful execution. You will see your cluster size increase as each node comes online:

      • mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

      You will see the following output indicating that the second node has joined the cluster and that there are two nodes in total.

      Output

      +--------------------+-------+
      | Variable_name      | Value |
      +--------------------+-------+
      | wsrep_cluster_size | 2     |
      +--------------------+-------+

      Bring Up the Third Node

      It's now time to bring up the third node. Start mysql:

      • sudo systemctl start mysql

      Run the following command to find the cluster size:

      • mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

      You will see the following output, which indicates that the third node has joined the cluster and that the total number of nodes in the cluster is three.

      Output

      +--------------------+-------+
      | Variable_name      | Value |
      +--------------------+-------+
      | wsrep_cluster_size | 3     |
      +--------------------+-------+

      At this point, the entire cluster is online and communicating successfully. Now, you can ensure the working setup by testing replication in the next section.

      Step 7 — Testing Replication

      You've gone through the steps up to this point so that your cluster can perform replication from any node to any other node, known as active-active replication. Follow the steps below to test and see if the replication is working as expected.

      Write to the First Node

      You'll start by making database changes on your first node. The following commands will create a database called playground and a table inside of this database called equipment.

      • mysql -u root -p -e 'CREATE DATABASE playground;
      • CREATE TABLE playground.equipment ( id INT NOT NULL AUTO_INCREMENT, type VARCHAR(50), quant INT, color VARCHAR(25), PRIMARY KEY(id));
      • INSERT INTO playground.equipment (type, quant, color) VALUES ("slide", 2, "blue");'

      In the previous command, the CREATE DATABASE statement creates a database named playground. The CREATE statement creates a table named equipment inside the playground database having an auto-incrementing identifier column called id and other columns. The type column, quant column, and color column are defined to store the type, quantity, and color of the equipment respectively. The INSERT statement inserts an entry of type slide, quantity 2, and color blue.

      You now have one value in your table.

      Read and Write on the Second Node

      Next, look at the second node to verify that replication is working:

      • mysql -u root -p -e 'SELECT * FROM playground.equipment;'

      If replication is working, the data you entered on the first node will be visible here on the second:

      Output

      +----+-------+-------+-------+
      | id | type  | quant | color |
      +----+-------+-------+-------+
      |  1 | slide |     2 | blue  |
      +----+-------+-------+-------+

      From this same node, you can write data to the cluster:

      • mysql -u root -p -e 'INSERT INTO playground.equipment (type, quant, color) VALUES ("swing", 10, "yellow");'

      Read and Write on the Third Node

      From the third node, you can read all of this data by querying the table again:

      • mysql -u root -p -e 'SELECT * FROM playground.equipment;'

      You will see the following output showing the two rows:

      Output

      +----+-------+-------+--------+
      | id | type  | quant | color  |
      +----+-------+-------+--------+
      |  1 | slide |     2 | blue   |
      |  2 | swing |    10 | yellow |
      +----+-------+-------+--------+

      Again, you can add another value from this node:

      • mysql -u root -p -e 'INSERT INTO playground.equipment (type, quant, color) VALUES ("seesaw", 3, "green");'

      Read on the First Node

      Back on the first node, you can verify that your data is available everywhere:

      • mysql -u root -p -e 'SELECT * FROM playground.equipment;'

      You will see the following output which indicates that the rows are available on the first node.

      Output

      +----+--------+-------+--------+
      | id | type   | quant | color  |
      +----+--------+-------+--------+
      |  1 | slide  |     2 | blue   |
      |  2 | swing  |    10 | yellow |
      |  3 | seesaw |     3 | green  |
      +----+--------+-------+--------+

      You've successfully verified that you can write to all of the nodes and that replication is being performed properly.

      Conclusion

      At this point, you have a working three-node Galera test cluster configured. If you plan on using a Galera cluster in a production situation, it’s recommended that you begin with no fewer than five nodes.

      Before production use, you may want to take a look at some of the other state snapshot transfer (sst) agents like xtrabackup, which allows you to set up new nodes very quickly and without large interruptions to your active nodes. This does not affect the actual replication, but is a concern when nodes are being initialized.
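
      For reference, changing the state snapshot transfer method later is a matter of updating the Galera Synchronization Configuration section of galera.cnf on every node and restarting the nodes. The snippet below is only a sketch, assuming you switch to MariaDB's mariabackup SST method with a dedicated backup user; consult the MariaDB and Galera documentation for the exact privileges that user requires:

      # Galera Synchronization Configuration
      wsrep_sst_method=mariabackup
      wsrep_sst_auth=sst_user:sst_password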


