
      Troubleshooting Linode Longview


      Updated by Linode

      Written by Linode

      This guide discusses basic troubleshooting steps to help you diagnose and resolve issues you may encounter while using Longview. If you’re experiencing problems with the Longview client, follow the steps outlined below to determine the cause.

      Basic Diagnostics

      1. Ensure that your system is fully updated.

        Note

        Longview requires Perl 5.8 or later.

      2. Verify that the Longview client is running. Use the command that is appropriate for your distribution’s initialization system:

        CentOS, Debian, and Ubuntu

        sudo systemctl status longview   # For distributions with systemd.
        

        Other Distributions

        sudo service longview status     # For distributions without systemd.
        

        If the Longview client is not running, start it with the command appropriate for your distribution’s initialization system:

        CentOS, Debian, and Ubuntu

        sudo systemctl start longview
        

        Other Distributions

        sudo service longview start
        

        If the service fails to start, check Longview’s log for errors. The log file is located at /var/log/linode/longview.log.
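
        To quickly check for recent errors, you can view the end of the log with tail:

        tail -n 20 /var/log/linode/longview.log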

      Debug Mode

      Restart the Longview client in debug mode for increased logging verbosity.

      1. First stop the Longview client:

        CentOS, Debian, and Ubuntu

        sudo systemctl stop longview   # For distributions with systemd.
        

        Other Distributions

        sudo service longview stop     # For distributions without systemd.
        
      2. Then restart Longview with the debug flag:

        sudo /etc/init.d/longview debug
        
      3. When you’re finished collecting information, stop Longview again and restart it normally, without the debug flag.

        If Longview does not close properly, find the process ID and kill the process:

        ps aux | grep longview   # Find the Longview process ID (PID).
        sudo kill $PID           # Replace $PID with the ID from the previous command.
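
        Alternatively, pkill (part of the standard procps tools on most distributions) can find and signal the process in one step:

        sudo pkill -f longview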
        

      Firewall Rules

      If your Linode has a firewall, it must allow communication with Longview’s aggregation host at longview.linode.com (IPv4: 96.126.119.66). You can view your firewall rules with one of the commands below, depending on the firewall controller used by your Linux distribution:

      firewalld

      sudo firewall-cmd --list-all
      


      iptables

      sudo iptables -S
      


      ufw

      sudo ufw show added
      


      If the output of those commands shows no rule for the Longview domain (or for 96.126.119.66, the IP address of the Longview domain), you must add one. A sample iptables rule that allows outbound HTTPS traffic to Longview would be the following:

      sudo iptables -A OUTPUT -p tcp --dport 443 -d longview.linode.com -j ACCEPT
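
      If you use firewalld or ufw instead, rules similar to the following illustrative examples would allow the same traffic (note that many default configurations already permit outbound traffic):

      sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" destination address="96.126.119.66/32" port port="443" protocol="tcp" accept'
      sudo firewall-cmd --reload
      sudo ufw allow out to 96.126.119.66 port 443 proto tcp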
      

      Note

      If you use iptables, you should also make sure to persist any of your firewall rule changes. Otherwise, your changes will not be enforced if your Linode is rebooted. Review the iptables-persistent section of our iptables guide for help with this.
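
      For example, on Debian or Ubuntu with the iptables-persistent package installed, the following command saves the active ruleset so it survives a reboot:

      sudo netfilter-persistent save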

      Verify API Key

      The API key given in the Linode Cloud Manager should match that on your system in /etc/linode/longview.key.

      1. In the Linode Cloud Manager, the API key is located in the Installation tab of your Longview Client instance’s detailed view.

      2. SSH into your Linode. The Longview key is located at /etc/linode/longview.key. Use cat to view the contents of that file and compare it to what’s shown in the Linode Cloud Manager:

        cat /etc/linode/longview.key
        

        The two should be the same. If they are not, paste the key from the Linode Cloud Manager into longview.key, overwriting anything already there.
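
        For example, you can overwrite the file in one step with tee, then restart the Longview client. The key below is a placeholder; substitute the key shown in the Cloud Manager:

        echo 'YOUR-LONGVIEW-API-KEY' | sudo tee /etc/linode/longview.key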

      Cloned Keys

      If you clone a Linode which has Longview installed, you may encounter the following error:

      Multiple clients appear to be posting data with this API key. Please check your clients' configuration.

      This is caused by both Linodes posting data using the same Longview key. To resolve it:

      1. Uninstall the Longview agent on the cloned system.

        CentOS:

        sudo yum remove linode-longview
        

        Debian or Ubuntu:

        sudo apt-get remove linode-longview
        

        Other Distributions:

        sudo rm -rf /opt/linode/longview
        
      2. Add a new Linode Longview Client instance. This will create a new Longview API key independent of the system it was cloned from.

        Note

        The GUID provided in the Longview Client’s installation URL is not the same as the Longview API key.

      3. Install the Longview Agent on the cloned Linode.
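
        The installation command for the new instance is shown in its Installation tab in the Cloud Manager. It will look similar to the following, where the URL contains your new instance’s GUID (shown here as a placeholder):

        curl -s https://lv.linode.com/XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXXXXXX | sudo bash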

      If you still need assistance after performing these checks, please open a support ticket.

      This guide is published under a CC BY-ND 4.0 license.




      Troubleshooting Kubernetes


      Updated by Linode

      Written by Linode Community

      Troubleshooting issues with Kubernetes can be complex, and it can be difficult to account for all the possible error conditions you may see. This guide tries to equip you with the core tools that can be useful when troubleshooting, and it introduces some situations that you may find yourself in.


      Where to go for help outside this guide

      If your issue is not covered by this guide, we also recommend researching and posting in the Linode Community Questions site and in #linode on the Kubernetes Slack, where other Linode users (and the Kubernetes community) can offer advice.

      If you are running a cluster on Linode’s managed LKE service, and you are experiencing an issue related to your master/control plane components, you can report these issues to Linode by contacting Linode Support. Examples in this category include:

      • Kubernetes’ API server not running. If kubectl does not respond as expected, this can indicate problems with the API server.

      • The CCM, CSI, Calico, or kube-dns pods are not running.

      • Annotations on LoadBalancer services aren’t functioning.

      • PersistentVolumes are not re-attaching.

      Please note that the kube-apiserver and etcd pods will not be visible for LKE clusters; this is expected.

      In this guide we will review general troubleshooting strategies using kubectl, explain how to view the logs generated by Kubernetes components on your nodes, and walk through some common troubleshooting examples.

      General Troubleshooting Strategies

      To troubleshoot issues with the applications running on your cluster, you can rely on the kubectl command to gather debugging information. kubectl includes a set of subcommands that can be used to research issues with your cluster, and this guide will highlight four of them: get, describe, logs, and exec.

      To troubleshoot issues with your cluster, you may need to directly view the logs that are generated by Kubernetes’ components.

      kubectl get

      Use the get command to list different kinds of resources in your cluster (nodes, pods, services, etc.). The output will show the status for each resource returned. For example, this output shows that a pod is in the CrashLoopBackOff status, which means it should be investigated further:

      kubectl get pods
      NAME              READY     STATUS             RESTARTS   AGE
      ghost-0           0/1       CrashLoopBackOff   34         2h
      mariadb-0         1/1       Running            0          2h
      
      • Use the --namespace flag to show resources in a certain namespace:

        # Show pods in the `kube-system` namespace
        kubectl get pods --namespace kube-system
        

        Note

        If you’ve set up Kubernetes using automated solutions like Linode’s Kubernetes Engine, k8s-alpha CLI, or Rancher, you’ll see csi-linode and ccm-linode pods in the kube-system namespace. This is normal as long as they’re in the Running status.
      • Use the -o flag to return the resources as YAML or JSON. The Kubernetes API’s complete description for the returned resources will be shown:

        # Get pods as YAML API objects
        kubectl get pods -o yaml
        
      • Sort the returned resources with the --sort-by flag:

        # Sort by name
        kubectl get pods --sort-by=.metadata.name
        
      • Use the --selector or -l flag to get resources that match a label. This is useful for finding all pods for a given service:

        # Get pods which match the app=ghost selector
        kubectl get pods -l app=ghost
        
      • Use the --field-selector flag to return resources which match different resource fields:

        # Get all pods that are Pending
        kubectl get pods --field-selector status.phase=Pending
        
        # Get all pods that are not in the kube-system namespace
        kubectl get pods --field-selector metadata.namespace!=kube-system
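
      These flags can be combined. For example, the following illustrative command lists pods in the kube-system namespace that match a label, with additional detail:

      kubectl get pods --namespace kube-system -l k8s-app=kube-dns -o wide

      The get command also works with cluster events, which often add context for a failing pod. Sorting by timestamp places the most recent events last:

      kubectl get events --sort-by=.lastTimestamp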
        

      kubectl describe

      Use the describe command to return a detailed report of the state of one or more resources in your cluster. Pass a resource type to the describe command to get a report for each of those resources:

      kubectl describe nodes
      

      Pass the name of a resource to get a report for just that object:

      kubectl describe pods ghost-0
      

      You can also use the --selector (-l) flag to filter the returned resources, as with the get command.
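
      For example, to describe every pod belonging to the ghost app from the earlier example:

      kubectl describe pods -l app=ghost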

      kubectl logs

      Use the logs command to print logs collected by a pod:

      kubectl logs mariadb-0
      
      • Use the --selector (-l) flag to print logs from all pods that match a selector:

        kubectl logs -l app=ghost
        
      • If a pod’s container was killed and restarted, you can view the previous container’s logs with the --previous or -p flag:

        kubectl logs -p ghost-0
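
      Two other commonly useful flags: --tail limits output to the last N lines, and -f streams new log lines as they are written:

      kubectl logs mariadb-0 --tail=20
      kubectl logs -f mariadb-0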
        

      kubectl exec

      You can run arbitrary commands on a pod’s container by passing them to kubectl’s exec command:

      kubectl exec mariadb-0 -- ps aux
      

      The full syntax for the command is:

      kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
      

      Note

      The -c flag is optional, and is only needed when the specified pod is running more than one container.

      It is possible to run an interactive shell on an existing pod/container. Pass the -it flags to exec and run the shell:

      kubectl exec -it mariadb-0 -- /bin/bash
      

      Enter exit to leave this shell.
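
      Note that minimal container images may not include bash. If the command above fails, try /bin/sh instead:

      kubectl exec -it mariadb-0 -- /bin/sh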

      Viewing Master and Worker Logs

      If the Kubernetes API server isn’t working normally, then you may not be able to use kubectl to troubleshoot. When this happens, or if you are experiencing other more fundamental issues with your cluster, you can instead log directly into your nodes and view the logs present on your filesystem.

      Non-systemd systems

      If your nodes do not run systemd, the location of logs on your master nodes should be:

      /var/log/kube-apiserver.log
      /var/log/kube-scheduler.log
      /var/log/kube-controller-manager.log

      On your worker nodes:

      /var/log/kubelet.log
      /var/log/kube-proxy.log

      systemd systems

      If your nodes run systemd, you can access the logs that kubelet generates with journalctl:

      journalctl --unit kubelet
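
      journalctl can also filter and follow these logs. For example, to stream new kubelet entries from the last hour:

      journalctl --unit kubelet --since "1 hour ago" --follow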
      

      Logs for your other Kubernetes software components can be found through your container runtime. When using Docker, you can use the docker ps and docker logs commands to investigate. For example, to find the container running your API server:

      docker ps | grep apiserver
      

      The output will display a list of information separated by tabs:

      2f4e6242e1a2    cfdda15fbce2    "kube-apiserver --au…"  2 days ago  Up 2 days   k8s_kube-apiserver_kube-apiserver-k8s-trouble-1-master-1_kube-system_085b2ab3bd6d908bde1af92bd25e5aaa_0
      

      The first entry (in this example: 2f4e6242e1a2) will be an alphanumeric string, and it is the ID for the container. Copy this string and pass it to docker logs to view the logs for your API server:

      docker logs ${CONTAINER_ID}
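
      docker logs accepts flags for narrowing its output. For example, to follow only the last 50 lines:

      docker logs --tail 50 --follow ${CONTAINER_ID}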
      

      Troubleshooting Examples

      Viewing the Wrong Cluster

      If your kubectl commands are not returning the resources and information you expect, then your client may be assigned to the wrong cluster context. To view all of the cluster contexts on your system, run:

      kubectl config get-contexts
      

      An asterisk will appear next to the active context:

      CURRENT   NAME                                        CLUSTER            AUTHINFO
                my-cluster-kayciZBRO5s@my-cluster           my-cluster         my-cluster-kayciZBRO5s
      *         other-cluster-kaUzJOMWJ3c@other-cluster     other-cluster      other-cluster-kaUzJOMWJ3c
      

      To switch to another context, run:

      kubectl config use-context ${CLUSTER_NAME}
      

      For example:

      kubectl config use-context my-cluster-kayciZBRO5s@my-cluster
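
      To confirm which context is now active without listing all of them:

      kubectl config current-context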
      

      Can’t Provision Cluster Nodes

      If you are not able to create new nodes in your cluster, you may see an error message similar to:

      Error creating a Linode Instance: [400] Account Limit reached. Please open a support ticket.

      This is a reference to the total number of Linode resources that can exist on your account. To create new Linode instances for your cluster, you will need to either remove other instances on your account, or request a limit increase. To request a limit increase, contact Linode Support.

      Insufficient CPU or Memory

      If one of your pods requests more memory or CPU than is available on your worker nodes, then one of these scenarios may happen:

      • The pod will remain in the Pending state, because the scheduler cannot find a node to run it on. This will be visible when running kubectl get pods.

        If you run the kubectl describe command on your pod, the Events section may list a FailedScheduling event, along with a message like Failed for reason PodExceedsFreeCPU and possibly others. You can run kubectl describe nodes to view information about the allocated resources for each node.

      • The pod may continually crash. For example, the Ghost pod specified by Ghost’s Helm chart will show the following error in its logs when not enough memory is available:

        kubectl logs ghost --tail=5
        1) SystemError
        
        Message: You are recommended to have at least 150 MB of memory available for smooth operation. It looks like you have ~58 MB available.
        

      If your cluster has insufficient resources for a new pod, you will need to:

      • Reduce the number of other pods/deployments/applications running on your cluster,
      • Resize the Linode instances that represent your worker nodes to a higher-tier plan, or
      • Add a new worker node to your cluster.
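
      Alternatively, you can lower the workload’s resource requests so it fits on your existing nodes. As an illustrative sketch (the deployment name ghost here is an assumption based on the example above), kubectl set resources updates the requests and limits on a workload’s containers:

      kubectl set resources deployment ghost --requests=cpu=100m,memory=256Mi --limits=cpu=500m,memory=512Mi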


      This guide is published under a CC BY-ND 4.0 license.


