

      Troubleshooting DNS Records

      Updated by Linode

      Written by Linode

      Having problems with your DNS records? This guide helps you get your DNS settings back on track. Follow these tips to troubleshoot DNS issues.

      Before You Begin

      The Domains section of the Linode Cloud Manager is a comprehensive DNS management interface that allows you to add DNS records for all of your domain names. For an introduction to DNS Manager including setting up DNS records, see the DNS Manager guide.


      Linode’s DNS service employs Cloudflare to provide distributed denial of service (DDoS) mitigation, load balancing, and increased geographic distribution for our name servers. These factors make our service reliable, fast, and a great choice for your DNS needs.


      To use the Linode DNS Manager to serve your domains, you must have an active Linode on your account. If you remove all active Linodes, your domains will no longer be served.

      Wait for Propagation

      DNS updates will take effect, or propagate, within the time period set by your zone file’s TTL. If you’ve just made a DNS change and aren’t seeing it reflected yet, the new information may not be available for up to 48 hours.

      While you can’t control DNS caching at every point on the Internet, you do have control over your web browser. Try holding down the Shift key or the Control key (depending on your browser) while you refresh the page to bypass your browser’s cache of the old DNS data. You can also try bringing up your site in an alternate browser or editing your hosts file to preview your website without DNS.

      Set the Time To Live or TTL

      In the context of DNS, Time to Live (TTL) tells internet servers how long to cache particular DNS entries. The default TTL for Linode domain zone files is 24 hours. This is fine for most situations because most people don’t update their IP addresses often.

      However, there are times when you’ll want the TTL to be as low as possible. For instance, when you make a DNS change, you’ll want that change to propagate quickly. Otherwise, some people will see the new site right away, and others (who had the old data cached) will still be visiting the website at your old server. Long caching times can be even more problematic when it comes to email, because some messages will be sent to the new server and some to the old one.

      The solution is to lower the TTL before you make any other DNS changes. Here’s a general overview of what should happen during a smooth DNS update:


      TTL is always written out in seconds, so 24 hours = 86400 seconds.

      1. Check the TTL value for the DNS record you will be updating. Typically, this will be 24 or 48 hours.
      2. Lower the TTL on the relevant DNS records 48 to 96 hours in advance (for a 24-48 hour record), taking into account any intermediate DNS servers. Set the TTL to five minutes (300 seconds, or the lowest allowed value). Do not make any other changes at this time.
      3. Wait out the original 48 to 96 hours.
      4. Visit your domain’s DNS records in the Linode Cloud Manager again to update your IP address and anything else needed.
      5. The DNS changes should propagate within 30 minutes.
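
      You can confirm the TTL that resolvers currently see: it is the second column of dig's answer section. A minimal sketch that extracts it from a saved answer line (the domain, IP, and TTL values below are illustrative):

```shell
# A dig answer line has the form: name, remaining TTL (seconds), class, type, data
answer="example.com.    86400   IN  A   203.0.113.10"

# Field 2 is the TTL; awk splits on whitespace
ttl=$(echo "$answer" | awk '{print $2}')
echo "TTL is $ttl seconds ($((ttl / 3600)) hours)"
```

      Against a live zone, running dig with the +noall +answer options prints lines in this format, and the TTL value counts down as a cached entry ages.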

      Find Current DNS Information

      Sometimes you may need to find the current DNS information for a domain. There are two great tools for doing this:

      • dig: Look up individual DNS entries. For example, you can find the IP address where your domain resolves.

      • whois: Find your registrar and nameserver information for your domain.

      If you’re using a computer that runs macOS or Linux, you can use these tools from the command line. To find your domain’s IP (the primary A record), run the following, replacing example.com with your domain:

      dig example.com

      Look in the answer section of the output to locate your IP address. You can also query for other types of records. For example, to see the mail records for a domain, run:

      dig mx example.com

      This returns all of your domain’s MX records.
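
      Each MX answer includes a priority number before the mail host, and the lowest number wins. As a sketch, sample answer lines can be sorted by priority like this (the hosts and priorities are made up):

```shell
# Two sample MX answer lines; fields: name, TTL, class, type, priority, host
mx="example.com. 300 IN MX 10 backup.example.com.
example.com. 300 IN MX 5 mail.example.com."

# Sort numerically on field 5 (the priority) and print priority + host
echo "$mx" | sort -n -k5 | awk '{print $5, $6}'
```

      The first line of output is the server that receiving mail servers will try first.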

      To find your domain’s registrar and nameserver information, run:

      whois example.com

      This generates a large amount of information about the domain. The basic information you need will be near the top of the output, so you might have to scroll back to see it.

      For a web-based tool, you can use one of the many online services that perform dig and WHOIS requests. Note that since you’re running these lookups from a third-party website, the information they find is not necessarily what your local computer has cached. There should be a difference only if you’ve made recent changes to your DNS information.

      For more information and examples on how to use dig, see our Use dig to Perform Manual DNS Queries guide.

      Name Resolution Failures

      If you have DNSSEC enabled at your domain’s registrar, it will cause name resolution failures such as NXDOMAIN when an attempt is made to access the DNS. This is because the Linode DNS Manager does not support DNSSEC at this time.

      This guide is published under a CC BY-ND 4.0 license.


      Troubleshooting Linode Longview

      Updated by Linode

      Written by Linode

      This guide discusses basic troubleshooting steps to help you diagnose and resolve any issues you may encounter while using Longview. If you’re experiencing problems with the Longview client, follow the steps outlined in this guide to help determine the cause.

      Basic Diagnostics

      1. Ensure that your system is fully updated.


        Longview requires Perl 5.8 or later.

      2. Verify that the Longview client is running. Use the command that is appropriate for your distribution’s initialization system:

        CentOS, Debian, and Ubuntu

        sudo systemctl status longview   # For distributions with systemd.

        Other Distributions

        sudo service longview status     # For distributions without systemd.

        If the Longview client is not running, start it with the command appropriate for your distribution’s initialization system:

        CentOS, Debian, and Ubuntu

        sudo systemctl start longview

        Other Distributions

        sudo service longview start

        If the service fails to start, check Longview’s log for errors. The log file is located in /var/log/linode/longview.log.
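
      A quick way to surface problems in that log is to filter it for error lines. The sketch below runs the filter against made-up sample text; on a real system, point grep at /var/log/linode/longview.log instead:

```shell
# grep -i makes the match case-insensitive, catching ERROR, Error, etc.
# (these log lines are invented for illustration)
printf 'longview INFO client started\nlongview ERROR Failed to post data\n' \
  | grep -i 'error'
```

      Only the error line survives the filter, which keeps long logs manageable.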

      Debug Mode

      Restart the Longview client in debug mode for increased logging verbosity.

      1. First stop the Longview client:

        CentOS, Debian, and Ubuntu

        sudo systemctl stop longview   # For distributions with systemd.

        Other Distributions

        sudo service longview stop     # For distributions without systemd.
      2. Then restart Longview with the debug flag:

        sudo /etc/init.d/longview debug
      3. When you’re finished collecting information, repeat the first two steps to stop Longview and restart it again without the debug flag.

        If Longview does not close properly, find the process ID and kill the process:

        ps aux | grep longview
        sudo kill $PID
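
      You can rehearse this find-and-kill sequence safely against a stand-in background process; the sleep command here is just a harmless placeholder, and $! holds the PID of the most recent background job:

```shell
# Start a harmless background process to practice on
sleep 300 &
pid=$!

# Terminate it (the guide's sudo kill $PID step, without sudo here)
kill "$pid"
wait "$pid" 2>/dev/null || true

# kill -0 only probes for existence; the process should now be gone
kill -0 "$pid" 2>/dev/null || echo "process $pid terminated"
```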

      Firewall Rules

      If your Linode has a firewall, it must allow communication with Longview’s aggregation host. You can view your firewall rules with one of the commands below, depending on the firewall controller used by your Linux distribution:


      firewalld:

      sudo firewall-cmd --list-all

      iptables:

      sudo iptables -S

      ufw:

      sudo ufw show added


      If the output of those commands shows no rules for the Longview domain (or for its IP address), you must add them. A sample iptables rule that allows outbound HTTPS traffic to Longview would be the following:

      iptables -A OUTPUT -p tcp --dport 443 -d ${LONGVIEW_HOST} -j ACCEPT

      Replace ${LONGVIEW_HOST} with the address of Longview’s aggregation host.


      If you use iptables, you should also make sure to persist any of your firewall rule changes. Otherwise, your changes will not be enforced if your Linode is rebooted. Review the iptables-persistent section of our iptables guide for help with this.

      Verify API key

      The API key given in the Linode Cloud Manager should match that on your system in /etc/linode/longview.key.

      1. In the Linode Cloud Manager, the API key is located in the Installation tab of your Longview Client instance’s detailed view.

      2. SSH into your Linode. The Longview key is located at /etc/linode/longview.key. Use cat to view the contents of that file and compare it to what’s shown in the Linode Cloud Manager:

        cat /etc/linode/longview.key

        The two should be the same. If they are not, paste the key from the Linode Cloud Manager into longview.key, overwriting anything already there.
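
      The comparison itself is a plain string-equality check. A sketch using a stand-in file (the real path is /etc/linode/longview.key, and the key value below is invented):

```shell
# Stand-in for /etc/linode/longview.key
keyfile=$(mktemp)
echo "0123ABCD-4567-89EF-0123456789ABCDEF" > "$keyfile"

# The key copied from the Cloud Manager's Installation tab (made up here)
manager_key="0123ABCD-4567-89EF-0123456789ABCDEF"

# Compare the file contents against the Cloud Manager's key
if [ "$(cat "$keyfile")" = "$manager_key" ]; then
  echo "keys match"
else
  echo "keys differ"
fi
rm -f "$keyfile"
```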

      Cloned Keys

      If you clone a Linode which has Longview installed, you may encounter the following error:

      Multiple clients appear to be posting data with this API key. Please check your clients' configuration.

      This is caused by both Linodes posting data using the same Longview key. To resolve it:

      1. Uninstall the Longview agent on the cloned system.


        CentOS:

        sudo yum remove linode-longview

        Debian or Ubuntu:

        sudo apt-get remove linode-longview

        Other Distributions:

        sudo rm -rf /opt/linode/longview
      2. Add a new Linode Longview Client instance. This will create a new Longview API key independent of the system it was cloned from.


        The GUID provided in the Longview Client’s installation URL is not the same as the Longview API key.

      3. Install the Longview Agent on the cloned Linode.

      If you still need assistance after performing these checks, please open a support ticket.

      This guide is published under a CC BY-ND 4.0 license.


      Troubleshooting Kubernetes

      Updated by Linode

      Written by Linode Community

      Troubleshooting issues with Kubernetes can be complex, and it can be difficult to account for all the possible error conditions you may see. This guide tries to equip you with the core tools that can be useful when troubleshooting, and it introduces some situations that you may find yourself in.

      Where to go for help outside this guide

      If your issue is not covered by this guide, we also recommend researching and posting in the Linode Community Questions site and in #linode on the Kubernetes Slack, where other Linode users (and the Kubernetes community) can offer advice.

      If you are running a cluster on Linode’s managed LKE service, and you are experiencing an issue related to your master/control plane components, you can report these issues to Linode by contacting Linode Support. Examples in this category include:

      • Kubernetes’ API server not running. If kubectl does not respond as expected, this can indicate problems with the API server.

      • The CCM, CSI, Calico, or kube-dns pods are not running.

      • Annotations on LoadBalancer services aren’t functioning.

      • PersistentVolumes are not re-attaching.

      Please note that the kube-apiserver and etcd pods will not be visible for LKE clusters; this is expected.


      General Troubleshooting Strategies

      To troubleshoot issues with the applications running on your cluster, you can rely on the kubectl command to gather debugging information. kubectl includes a set of subcommands that can be used to research issues with your cluster, and this guide will highlight four of them: get, describe, logs, and exec.

      To troubleshoot issues with your cluster, you may need to directly view the logs that are generated by Kubernetes’ components.

      kubectl get

      Use the get command to list different kinds of resources in your cluster (nodes, pods, services, etc). The output will show the status for each resource returned. For example, this output shows that a pod is in the CrashLoopBackOff status, which means it should be investigated further:

      kubectl get pods
      NAME              READY     STATUS             RESTARTS   AGE
      ghost-0           0/1       CrashLoopBackOff   34         2h
      mariadb-0         1/1       Running            0          2h
      • Use the --namespace flag to show resources in a certain namespace:

        # Show pods in the `kube-system` namespace
        kubectl get pods --namespace kube-system


        If you’ve set up Kubernetes using automated solutions like Linode’s Kubernetes Engine, k8s-alpha CLI, or Rancher, you’ll see csi-linode and ccm-linode pods in the kube-system namespace. This is normal as long as they’re in the Running status.
      • Use the -o flag to return the resources as YAML or JSON. The Kubernetes API’s complete description for the returned resources will be shown:

        # Get pods as YAML API objects
        kubectl get pods -o yaml
      • Sort the returned resources with the --sort-by flag:

        # Sort pods by name
        kubectl get pods --sort-by=.metadata.name
      • Use the --selector or -l flag to get resources that match a label. This is useful for finding all pods for a given service:

        # Get pods which match the app=ghost selector
        kubectl get pods -l app=ghost
      • Use the --field-selector flag to return resources which match different resource fields:

        # Get all pods that are Pending
        kubectl get pods --field-selector status.phase=Pending
        # Get all pods that are not in the kube-system namespace
        kubectl get pods --field-selector metadata.namespace!=kube-system

      kubectl describe

      Use the describe command to return a detailed report of the state of one or more resources in your cluster. Pass a resource type to the describe command to get a report for each of those resources:

      kubectl describe nodes

      Pass the name of a resource to get a report for just that object:

      kubectl describe pods ghost-0

      You can also use the --selector (-l) flag to filter the returned resources, as with the get command.

      kubectl logs

      Use the logs command to print logs collected by a pod:

      kubectl logs mariadb-0
      • Use the --selector (-l) flag to print logs from all pods that match a selector:

        kubectl logs -l app=ghost
      • If a pod’s container was killed and restarted, you can view the previous container’s logs with the --previous or -p flag:

        kubectl logs -p ghost-0

      kubectl exec

      You can run arbitrary commands on a pod’s container by passing them to kubectl’s exec command:

      kubectl exec mariadb-0 -- ps aux

      The full syntax for the command is:

      kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}


      The -c flag is optional, and is only needed when the specified pod is running more than one container.

      It is possible to run an interactive shell on an existing pod/container. Pass the -it flags to exec and run the shell:

      kubectl exec -it mariadb-0 -- /bin/bash

      Enter exit to leave this shell when you are finished.

      Viewing Master and Worker Logs

      If the Kubernetes API server isn’t working normally, then you may not be able to use kubectl to troubleshoot. When this happens, or if you are experiencing other more fundamental issues with your cluster, you can instead log directly into your nodes and view the logs present on your filesystem.

      Non-systemd systems

      If your nodes do not run systemd, the location of logs on your master nodes should be:

      /var/log/kube-apiserver.log
      /var/log/kube-scheduler.log
      /var/log/kube-controller-manager.log

      On your worker nodes:

      /var/log/kubelet.log
      /var/log/kube-proxy.log

      systemd systems

      If your nodes run systemd, you can access the logs that kubelet generates with journalctl:

      journalctl --unit kubelet

      Logs for your other Kubernetes software components can be found through your container runtime. When using Docker, you can use the docker ps and docker logs commands to investigate. For example, to find the container running your API server:

      docker ps | grep apiserver

      The output will display a list of information separated by tabs:

      2f4e6242e1a2    cfdda15fbce2    "kube-apiserver --au…"  2 days ago  Up 2 days   k8s_kube-apiserver_kube-apiserver-k8s-trouble-1-master-1_kube-system_085b2ab3bd6d908bde1af92bd25e5aaa_0

      The first entry (in this example: 2f4e6242e1a2) will be an alphanumeric string, and it is the ID for the container. Copy this string and pass it to docker logs to view the logs for your API server:

      docker logs ${CONTAINER_ID}

      Troubleshooting Examples

      Viewing the Wrong Cluster

      If your kubectl commands are not returning the resources and information you expect, then your client may be assigned to the wrong cluster context. To view all of the cluster contexts on your system, run:

      kubectl config get-contexts

      An asterisk will appear next to the active context:

      CURRENT   NAME                                        CLUSTER            AUTHINFO
                my-cluster-kayciZBRO5s@my-cluster           my-cluster         my-cluster-kayciZBRO5s
      *         other-cluster-kaUzJOMWJ3c@other-cluster     other-cluster      other-cluster-kaUzJOMWJ3c
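
      kubectl also provides kubectl config current-context to print just the active context’s name. As a sketch, the starred row can equally be pulled out of the table output with awk (the sample text below mirrors the example above):

```shell
# Sample get-contexts output; the asterisk marks the active context
contexts='CURRENT   NAME                                      CLUSTER
          my-cluster-kayciZBRO5s@my-cluster         my-cluster
*         other-cluster-kaUzJOMWJ3c@other-cluster   other-cluster'

# Print field 2 (the context name) of the row whose first field is "*"
echo "$contexts" | awk '$1 == "*" {print $2}'
```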

      To switch to another context, run:

      kubectl config use-context ${CLUSTER_NAME}


      kubectl config use-context my-cluster-kayciZBRO5s@my-cluster

      Can’t Provision Cluster Nodes

      If you are not able to create new nodes in your cluster, you may see an error message similar to:

      Error creating a Linode Instance: [400] Account Limit reached. Please open a support ticket.

      This is a reference to the total number of Linode resources that can exist on your account. To create new Linode instances for your cluster, you will need to either remove other instances on your account, or request a limit increase. To request a limit increase, contact Linode Support.

      Insufficient CPU or Memory

      If one of your pods requests more memory or CPU than is available on your worker nodes, then one of these scenarios may happen:

      • The pod will remain in the Pending state, because the scheduler cannot find a node to run it on. This will be visible when running kubectl get pods.

        If you run the kubectl describe command on your pod, the Events section may list a FailedScheduling event, along with a message like Failed for reason PodExceedsFreeCPU and possibly others. You can run kubectl describe nodes to view information about the allocated resources for each node.

      • The pod may continually crash. For example, the Ghost pod specified by Ghost’s Helm chart will show the following error in its logs when not enough memory is available:

        kubectl logs ghost --tail=5
        1) SystemError
        Message: You are recommended to have at least 150 MB of memory available for smooth operation. It looks like you have ~58 MB available.
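
      How much memory or CPU a pod "requests" is declared in the resources block of its manifest; a minimal illustration follows, with entirely arbitrary values. The scheduler needs a node with at least the requests amounts free, while limits cap what the container may actually consume:

```yaml
# Hypothetical container spec fragment (values are placeholders)
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"
```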

      If your cluster has insufficient resources for a new pod, you will need to:

      • Reduce the number of other pods/deployments/applications running on your cluster,
      • Resize the Linode instances that represent your worker nodes to a higher-tier plan, or
      • Add a new worker node to your cluster.

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.

      Find answers, ask questions, and help others.

      This guide is published under a CC BY-ND 4.0 license.
