

      An Introduction to Service Meshes


      A service mesh is an infrastructure layer that allows you to manage communication between your application’s microservices. As more developers work with microservices, service meshes have evolved to make that work easier and more effective by consolidating common management and administrative tasks in a distributed setup.

      Taking a microservice approach to application architecture involves breaking your application into a collection of loosely-coupled services. This approach offers certain benefits: teams can iterate designs and scale quickly, using a wider range of tools and languages. On the other hand, microservices pose new challenges for operational complexity, data consistency, and security.

      Service meshes are designed to address some of these challenges by offering a granular level of control over how services communicate with one another. Specifically, they offer developers a way to manage:

      • Service discovery
      • Routing and traffic configuration
      • Encryption and authentication/authorization
      • Metrics and monitoring

      Though it is possible to do these tasks natively with container orchestrators like Kubernetes, this approach involves a greater amount of up-front decision-making and administration when compared to what service mesh solutions like Istio and Linkerd offer out of the box. In this sense, service meshes can streamline and simplify the process of working with common components in a microservice architecture. In some cases they can even extend the functionality of these components.

      Why Service Meshes?

      Service meshes are designed to address some of the challenges inherent to distributed application architectures.

      These architectures grew out of the three-tier application model, which broke applications into a web tier, application tier, and database tier. At scale, this model has proved challenging to organizations experiencing rapid growth. Monolithic application code bases can grow to be unwieldy “big balls of mud”, posing challenges for development and deployment.

      In response to this problem, organizations like Google, Netflix, and Twitter developed internal “fat client” libraries to standardize runtime operations across services. These libraries provided load balancing, circuit breaking, routing, and telemetry — precursors to service mesh capabilities. However, they also imposed limitations on the languages developers could use and required changes across services when they themselves were updated or changed.

      A microservice design avoids some of these issues. Instead of having a large, centralized application codebase, you have a collection of discretely managed services that represent a feature of your application. Benefits of a microservice approach include:

      • Greater agility in development and deployment, since teams can work on and deploy different application features independently.
      • Better options for CI/CD, since individual microservices can be tested and redeployed independently.
      • More options for languages and tools. Developers can use the best tools for the tasks at hand, rather than being restricted to a given language or toolset.
      • Ease in scaling.
      • Improvements in uptime, user experience, and stability.

      At the same time, microservices have also created challenges:

      • Distributed systems require different ways of thinking about latency, routing, asynchronous workflows, and failures.
      • Microservice setups cannot necessarily meet the same requirements for data consistency as monolithic setups.
      • Greater levels of distribution necessitate more complex operational designs, particularly when it comes to service-to-service communication.
      • Distribution of services increases the surface area for security vulnerabilities.

      Service meshes are designed to address these issues by offering coordinated and granular control over how services communicate. In the sections that follow, we’ll look at how service meshes facilitate service-to-service communication through service discovery, routing and internal load balancing, traffic configuration, encryption, authentication and authorization, and metrics and monitoring. We will use Istio’s Bookinfo sample application — four microservices that together display information about particular books — as a concrete example to illustrate how service meshes work.

      Service Discovery

      In a distributed framework, it’s necessary to know how to connect to services and whether or not they are available. Service instance locations are assigned dynamically on the network and information about them is constantly changing as containers are created and destroyed through autoscaling, upgrades, and failures.

      Historically, there have been a few tools for doing service discovery in a microservice framework. Key-value stores like etcd were paired with other tools like Registrator to offer service discovery solutions. Tools like Consul iterated on this by combining a key-value store with a DNS interface that allows users to work directly with their DNS server or node.

      Taking a similar approach, Kubernetes offers DNS-based service discovery by default. With it, you can look up services and service ports, and do reverse IP lookups using common DNS naming conventions. In general, an A record for a Kubernetes service matches this pattern: service.namespace.svc.cluster.local. Let’s look at how this works in the context of the Bookinfo application. If, for example, you wanted information on the details service from the Bookinfo app, you could look at the relevant entry in the Kubernetes dashboard:

      Details Service in Kubernetes Dash

      This will give you relevant information about the Service name, namespace, and ClusterIP, which you can use to connect with your Service even as individual containers are destroyed and recreated.
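      As a sketch of how that DNS name is derived, here is what a manifest for the details Service might look like (the port and labels follow the Bookinfo sample application; the namespace is assumed to be default):

```yaml
# Hypothetical manifest for the Bookinfo details Service.
apiVersion: v1
kind: Service
metadata:
  name: details
  namespace: default
  labels:
    app: details
spec:
  selector:
    app: details          # matches the details Pods
  ports:
    - name: http
      port: 9080          # Bookinfo services listen on 9080
# Cluster DNS resolves this Service at the A record:
#   details.default.svc.cluster.local
```

      Other Pods in the same namespace can reach it simply as details; the fully qualified name works from any namespace.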

      A service mesh like Istio also offers service discovery capabilities. To do service discovery, Istio relies on communication between the Kubernetes API, Istio’s own control plane, managed by the traffic management component Pilot, and its data plane, managed by Envoy sidecar proxies. Pilot interprets data from the Kubernetes API server to register changes in Pod locations. It then translates that data into a canonical Istio representation and forwards it onto the sidecar proxies.

      This means that service discovery in Istio is platform agnostic, which we can see by using Istio’s Grafana add-on to look at the details service again in Istio’s service dashboard:

      Details Service Istio Dash

      Our application is running on a Kubernetes cluster, so once again we can see the relevant DNS information about the details Service, along with other performance data.

      In a distributed architecture, it’s important to have up-to-date, accurate, and easy-to-locate information about services. Both Kubernetes and service meshes like Istio offer ways to obtain this information using DNS conventions.

      Routing and Traffic Configuration

      Managing traffic in a distributed framework means controlling how traffic gets to your cluster and how it’s directed to your services. The more control and specificity you have in configuring external and internal traffic, the more you will be able to do with your setup. For example, in cases where you are working with canary deployments, migrating applications to new versions, or stress testing particular services through fault injection, having the ability to decide how much traffic your services are getting and where it is coming from will be key to the success of your objectives.

      Kubernetes offers different tools, objects, and services that allow developers to control external traffic to a cluster: kubectl proxy, NodePort, Load Balancers, and Ingress Controllers and Resources. Both kubectl proxy and NodePort allow you to quickly expose your services to external traffic: kubectl proxy creates a proxy server that allows access to static content with an HTTP path, while NodePort exposes a randomly assigned port on each node. Though this offers quick access, drawbacks include having to run kubectl as an authenticated user, in the case of kubectl proxy, and a lack of flexibility in ports and node IPs, in the case of NodePort. And though a Load Balancer optimizes for flexibility by attaching to a particular Service, each Service requires its own Load Balancer, which can be costly.
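      For example, exposing the Bookinfo productpage Service for quick external access through a NodePort might look like the following sketch (the Service name here is hypothetical; if nodePort is omitted, Kubernetes assigns a random port from the 30000–32767 range):

```yaml
# Hypothetical NodePort Service for quick external access to productpage.
apiVersion: v1
kind: Service
metadata:
  name: productpage-nodeport
spec:
  type: NodePort
  selector:
    app: productpage
  ports:
    - port: 9080          # Service port inside the cluster
      targetPort: 9080    # container port on the productpage Pods
      # nodePort: 30080   # optional; omitted, Kubernetes picks one at random
```

      The service then becomes reachable at any node's IP on the assigned port, which illustrates the flexibility drawback noted above: you cannot control which node IPs are exposed, and the port range is constrained.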

      An Ingress Resource and Ingress Controller together offer a greater degree of flexibility and configurability over these other options. Using an Ingress Controller with an Ingress Resource allows you to route external traffic to Services and configure internal routing and load balancing. To use an Ingress Resource, you need to configure your Services, the Ingress Controller and LoadBalancer, and the Ingress Resource itself, which will specify the desired routes to your Services. Currently, Kubernetes supports its own Nginx Controller, but there are other options you can choose from as well, managed by Nginx, Kong, and others.
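      A minimal Ingress Resource for the Bookinfo frontend might look like the sketch below (the host name is a placeholder, and the extensions/v1beta1 API version matches the Kubernetes releases contemporary with this discussion):

```yaml
# Illustrative Ingress Resource; assumes an Nginx Ingress Controller
# is already running in the cluster.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bookinfo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # route through the Nginx controller
spec:
  rules:
    - host: bookinfo.example.com         # placeholder host
      http:
        paths:
          - path: /productpage
            backend:
              serviceName: productpage
              servicePort: 9080
```

      The controller watches for Resources like this one and reconfigures its proxy to route matching external requests to the productpage Service.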

      Istio iterates on the Kubernetes Controller/Resource pattern with Istio Gateways and VirtualServices. Like an Ingress Controller, a Gateway defines how incoming traffic should be handled, specifying exposed ports and protocols to use. It works in conjunction with a VirtualService, which defines routes to Services within the mesh. Both of these resources communicate information to Pilot, which then forwards that information to the Envoy proxies. Though they are similar to Ingress Controllers and Resources, Gateways and VirtualServices offer a different level of control over traffic: instead of combining Open Systems Interconnection (OSI) layers and protocols, Gateways and VirtualServices allow you to differentiate between OSI layers in your settings. For example, by using VirtualServices, teams working with application layer specifications could have a separation of concerns from security operations teams working with different layer specifications. VirtualServices make it possible to separate work on discrete application features or within different trust domains, and can be used for things like canary testing, gradual rollouts, A/B testing, etc.
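      As a sketch of this pattern, the manifests below define a Gateway that accepts HTTP traffic on port 80 and a VirtualService that splits traffic to the Bookinfo reviews service 90/10 between two versions, the kind of weighting used for canary rollouts. The subset names assume a corresponding DestinationRule (not shown) that labels the two versions:

```yaml
# Illustrative Gateway: expose port 80 on Istio's ingress gateway.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway     # use Istio's default ingress gateway Pods
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
# Illustrative VirtualService: weighted routing between reviews versions.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1        # assumes a DestinationRule defining subsets
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

      Pilot pushes this configuration to the Envoy sidecars, so the traffic split takes effect without changing application code.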

      To visualize the relationship between Services, you can use Istio’s Servicegraph add-on, which produces a dynamic representation of the relationship between Services using real-time traffic data. The Bookinfo application might look like this without any custom routing applied:

      Bookinfo service graph

      Similarly, you can use a visualization tool like Weave Scope to see the relationship between your Services at a given time. The Bookinfo application without advanced routing might look like this:

      Weave Scope Service Map

      When configuring application traffic in a distributed framework, there are a number of different solutions — from Kubernetes-native options to service meshes like Istio — that offer various options for determining how external traffic will reach your application resources and how these resources will communicate with one another.

      Encryption and Authentication/Authorization

      A distributed framework presents opportunities for security vulnerabilities. Instead of communicating through local internal calls, as they would in a monolithic setup, services in a microservice architecture communicate information, including privileged information, over the network. Overall, this creates a greater surface area for attacks.

      Securing Kubernetes clusters involves a range of procedures; we will focus on authentication, authorization, and encryption. Kubernetes offers native approaches to each of these:

      • Authentication: API requests in Kubernetes are tied to user or service accounts, which need to be authenticated. There are several different ways to manage the necessary credentials: Static Tokens, Bootstrap Tokens, X509 client certificates, and external tools like OpenID Connect.
      • Authorization: Kubernetes has different authorization modules that allow you to determine access based on things like roles, attributes, and other specialized functions. Since all requests to the API server are denied by default, each part of an API request must be defined by an authorization policy.
      • Encryption: This can refer to any of the following: connections between end users and services, secret data, endpoints in the Kubernetes control plane, and communication between worker and master cluster components. Kubernetes offers a different solution for each of these.
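      To illustrate the authorization point above: with Kubernetes' RBAC module, access is granted by binding roles to users or service accounts. A minimal sketch, in which the user name, role name, and namespace are all hypothetical:

```yaml
# Illustrative RBAC policy: allow the user "jane" to read Pods
# in the default namespace, and nothing else.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]                  # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]  # read-only access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane                       # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

      Because all API requests are denied by default, any verb or resource not listed in a bound role remains inaccessible to that user.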

      Configuring individual security policies and protocols in Kubernetes requires administrative investment. A service mesh like Istio can consolidate some of these activities.

      Istio is designed to automate some of the work of securing services. Its control plane includes several components that handle security:

      • Citadel: manages keys and certificates.
      • Pilot: oversees authentication and naming policies and shares this information with Envoy proxies.
      • Mixer: manages authorization and auditing.

      For example, when you create a Service, Citadel receives that information from the kube-apiserver and creates SPIFFE certificates and keys for this Service. It then transfers this information to Pods and Envoy sidecars to facilitate communication between Services.

      You can also implement some security features by enabling mutual TLS during the Istio installation. These include strong service identities for cross- and inter-cluster communication, secure service-to-service and user-to-service communication, and a key management system that can automate key and certificate creation, distribution, and rotation.
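      In the Istio releases this article describes, mesh-wide mutual TLS can also be expressed declaratively. A sketch, assuming Istio's v1alpha1 authentication API: a MeshPolicy requires mTLS on the receiving side, while a DestinationRule tells clients to present Istio-issued certificates:

```yaml
# Illustrative mesh-wide mTLS policy (Istio v1alpha1 authentication API).
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default               # "default" makes this the mesh-wide policy
spec:
  peers:
    - mtls: {}                # require mutual TLS for service-to-service calls
---
# Client-side counterpart: sidecars use Istio-managed certificates.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
spec:
  host: "*.local"             # apply to all services in the mesh
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL      # use certificates issued by Citadel
```

      With both pieces in place, Citadel handles certificate issuance and rotation, and the Envoy sidecars encrypt traffic transparently to the application.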

      By iterating on how Kubernetes handles authentication, authorization, and encryption, service meshes like Istio are able to consolidate and extend some of the recommended best practices for running a secure Kubernetes cluster.

      Metrics and Monitoring

      Distributed environments have changed the requirements for metrics and monitoring. Monitoring tools need to be adaptive, accounting for frequent changes to services and network addresses, and comprehensive, allowing for the amount and type of information passing between services.

      Kubernetes includes some internal monitoring tools by default. These resources belong to its resource metrics pipeline, which ensures that the cluster runs as expected. The cAdvisor component collects network usage, memory, and CPU statistics from individual containers and nodes and passes that information to kubelet; kubelet in turn exposes that information via a REST API. The Metrics Server gets this information from the API and then passes it to the kube-aggregator for formatting.

      You can extend these internal tools and monitoring capabilities with a full metrics solution. Using a service like Prometheus as a metrics aggregator allows you to build directly on top of the Kubernetes resource metrics pipeline. Prometheus integrates directly with cAdvisor through its own agents, located on the nodes. Its main aggregation service collects and stores data from the nodes and exposes it through dashboards and APIs. Additional storage and visualization options are also available if you choose to integrate your main aggregation service with backend storage, logging, and visualization tools like InfluxDB, Grafana, ElasticSearch, Logstash, Kibana, and others.
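      As a sketch of how a Prometheus aggregator builds on this pipeline, the scrape job below discovers every node in the cluster and pulls cAdvisor metrics through the kubelet. The job name and relabeling are illustrative assumptions; real deployments typically use a packaged configuration such as the Prometheus Helm chart:

```yaml
# prometheus.yml (fragment) -- illustrative scrape job for cAdvisor metrics.
scrape_configs:
  - job_name: kubernetes-cadvisor     # hypothetical job name
    scheme: https
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    kubernetes_sd_configs:
      - role: node                    # discover every node via the API server
    relabel_configs:
      # Scrape the kubelet's embedded cAdvisor endpoint on each node.
      - target_label: __metrics_path__
        replacement: /metrics/cadvisor
```

      With a job like this in place, per-container CPU, memory, and network statistics flow into Prometheus without any per-node agent configuration.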

      In a service mesh like Istio, the structure of the full metrics pipeline is part of the mesh’s design. Envoy sidecars operating at the Pod level communicate metrics to Mixer, which manages policies and telemetry. Additionally, Prometheus and Grafana services are enabled by default (though if you are installing Istio with Helm you will need to specify grafana.enabled=true during installation). As is the case with the full metrics pipeline, you can also configure other services and deployments for logging and viewing options.
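      If you install with Helm, the Grafana toggle can be set in a values override file instead of on the command line. A minimal sketch, assuming all other chart values keep Istio's defaults:

```yaml
# custom-values.yaml -- hypothetical minimal override for the Istio chart.
# Pass with: helm install ... --values custom-values.yaml
grafana:
  enabled: true     # Grafana is disabled by default in the Helm chart
```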

      With these metric and visualization tools in place, you can access current information about services and workloads in a central place. For example, a global view of the Bookinfo application might look like this in the Istio Grafana dashboard:

      Bookinfo services from Grafana dash

      By replicating the structure of a Kubernetes full metrics pipeline and simplifying access to some of its common components, service meshes like Istio streamline the process of data collection and visualization when working with a cluster.


      Microservice architectures are designed to make application development and deployment fast and reliable. Yet an increase in inter-service communication has changed best practices for certain administrative tasks. This article discusses some of those tasks, how they are handled in a Kubernetes-native context, and how they can be managed using a service mesh — in this case, Istio.

      For more information on the Kubernetes and Istio topics covered here, the Kubernetes and Istio documentation hubs are great places to find detailed guidance.


      The Linode Backup Service

      Updated by Linode

      Written by Alex Fornuto


      The Linode Backup Service is a subscription service add-on that automatically performs daily, weekly, and biweekly backups of your Linode. It’s affordable, easy to use, and provides peace of mind. This guide explains how to enable and schedule your backups, make a manual backup snapshot, restore from a backup, and disable the Backup Service.


      Pricing is per Linode and varies depending upon your Linode’s plan:

      Standard Plans

      Service      | Backups Hourly Rate | Backups Monthly
      Linode 1GB   | $0.003/hr           | $2/mo
      Linode 2GB   | $0.004/hr           | $2.50/mo
      Linode 4GB   | $0.008/hr           | $5/mo
      Linode 8GB   | $0.016/hr           | $10/mo
      Linode 16GB  | $0.03/hr            | $20/mo
      Linode 32GB  | $0.06/hr            | $40/mo
      Linode 64GB  | $0.12/hr            | $80/mo
      Linode 96GB  | $0.18/hr            | $120/mo
      Linode 128GB | $0.24/hr            | $160/mo
      Linode 192GB | $0.36/hr            | $240/mo

      High Memory Plans

      Service      | Backups Hourly Rate | Backups Monthly
      Linode 24GB  | $0.0075/hr          | $5/mo
      Linode 48GB  | $0.015/hr           | $10/mo
      Linode 90GB  | $0.03/hr            | $20/mo
      Linode 150GB | $0.06/hr            | $40/mo
      Linode 300GB | $0.12/hr            | $80/mo

      Enable the Backup Service

      Use the Linode Cloud Manager to enable the Backup Service on a Linode. Here’s how:

      1. Log in to the Linode Cloud Manager.

      2. From the Linodes screen, select the Linode you want to back up.

      3. Click the Backups tab.

        Enable Linode Backups by navigating to the individual Linode's backup menu.

      4. Click Enable Backups.

      The Linode Backup Service is now enabled for the selected Linode.

      Auto Enroll New Linodes in the Backup Service

      You can automatically enroll all new Linodes in the Backup Service. To do so, navigate to the Account page in the left-hand navigation, then select the Global Settings tab.

      Under Backup Auto Enrollment click on the toggle button to enable backups on all new Linodes.

      Auto enroll all new Linodes in the Backup Service by navigating to the Global Settings tab in the Account settings and enabling Backups.


      Enabling this setting does not retroactively enroll any previously created Linodes in the Backup Service.

      Manage Backups

      You’ll manage your backups with a simple web interface in the Linode Cloud Manager. There’s no software to install, and there are no commands to run. Just log in to the Cloud Manager, navigate to the Linodes screen, select a Linode, and then click the Backups tab. The backups interface is shown below.

      The Linode Backup Service interface

      1. A list of available backups. Listed in this view are the date created, the label, how long the backup took to be created, the disks imaged, and the size of the resulting image.

      2. Manually create a backup by taking a manual snapshot. For more information, see the Take a Manual Snapshot section.

      3. Configure backup schedule settings. For more information, see the Schedule Backups section.

      4. Cancel backups. After cancelling your backups, you will have to wait 24 hours before you can re-enable them.

      How Linode Backups Work

      Backups are stored on a separate system in the same data center as your Linode. The space required to store the backups is not subtracted from your storage space. You can store four backups of your Linode, three of which are automatically generated and rotated:

      • Daily backup: Automatically initiated daily within the backup window you select. Less than 24 hours old.
      • Current week’s backup: Automatically initiated weekly within the backup window, on the day you select. Less than 7 days old.
      • Last week’s backup: Automatically initiated weekly within the backup window, on the day you select. Between 8 and 14 days old.
      • Manual Snapshot: A user-initiated snapshot that stays the same until another snapshot is initiated.

      The daily and weekly backups are automatically erased when a new backup is performed. The Linode Backup Service does not keep automated backups older than 14 days.

      Schedule Backups

      You can configure when automatic backups are initiated. Here’s how:

      1. From the Linodes page, select the Linode.

      2. Click the Backups tab.

      3. Under Settings, select a time interval from the Time of Day menu. The Linode Backup Service will generate all backups between these hours.

      4. Select a day from the Day of Week menu. This is the day whose backup will be promoted to the weekly slot. The backup will be performed within the time period you specified in step 3.

      5. Click Save Changes.

      The Linode Backup Service will backup your Linode according to the schedule you specified.

      Take a Manual Snapshot

      You can make a manual backup of your Linode by taking a snapshot. Here’s how:

      1. From the Linodes page, select the Linode.

      2. Click the Backups tab.

      3. Under Manual Snapshot, give your snapshot a name and click Take Snapshot.


        Taking a new snapshot will overwrite a previously saved snapshot.

      The Linode Backup Service initiates the manual snapshot. Creating the manual snapshot can take several minutes, depending on the size of your Linode and the amount of data you have stored on it. Other Linode Manager jobs for this Linode will not run until the snapshot job has been completed.

      Restore from a Backup

      This section shows how to restore a backup to a new Linode, or to an existing Linode.

      Restoring a backup will create a new configuration profile and a new set of disks on your Linode. The restore process does not restore single files or directories automatically. Restoring particular files can be done by completing a normal restore, copying the files off of the new disks, and then removing the disks afterward.


      The size of the disk(s) created by the restore process will only be slightly larger than the total size of the files restored. This means that the disk(s) created will be ‘full’.

      Some applications, like databases, need some amount of free unused space inside the disk in order to run. As a result, you may want to increase your disk(s) size after the restore process is completed.

      To restore a backup to a different data center, first restore to a Linode in the same data center, creating a new one if necessary. Once the restore is complete, use the Clone tab to copy the disk(s) to a Linode in a different data center.

      Restore to a New Linode

      This section covers how to restore a backup to a new Linode that does not have any disks deployed to it. The new Linode will be located in the same data center. If you instead wish to restore your backup to an existing Linode, see the next section.

      1. From the Linodes page, select the Linode whose backups you intend to restore, and then click on the Backups tab. Select the ellipsis (three dots) next to the backup you would like to restore, and click Deploy New Linode.

        Click on the ellipsis menu icon to restore to a new Linode.

      2. You will be taken to the Create New Linode screen. The Create from Backup tab will already be selected for you, as will the fields corresponding to the Linode and backup that you are restoring from. Choose a Linode plan, enter a label for the new Linode, select any other options you prefer, and click Create. The new Linode will be created with the same password and SSH keys (if any) as the original.

        The backup disks and configuration profiles will be restored to the Linode you selected. Watch the notifications area for updates on the process. Restoring from a backup can take several minutes depending on the size of your Linode and the amount of data you have stored on it.

      Restore to an Existing Linode

      You can restore a backup to any Linode located in the same data center, even if the target does not have the Backup Service enabled. To restore a backup to an existing Linode, you will need to make sure that you have enough storage space that is not currently assigned to disk images.


      If you are attempting to restore a disk to the same Linode the backup was created from, the restoration process will not delete the original disk for you. Manually delete the original disk to make room for the backup, if desired.

      1. From the Linodes page, select the Linode whose backups you intend to restore, and then click on the Backups tab. Observe the size of the backup you would like to restore, which is visible in the Space Required column. You will need at least this amount of unallocated disk space on the target Linode to complete the restore.

      2. Select the ellipsis (three dots) next to the backup you would like to restore, and click Restore to Existing Linode.

        Click on the ellipsis menu icon to restore to an existing Linode.

      3. A menu will open with the Linodes that you can restore to. Select a Linode and click Restore.

        Select the Linode you would like to restore your backup to.

        You will be notified if you do not have enough space on your Linode to restore your backup. Optionally, you can choose to overwrite the Linode you are restoring to.

      4. If the amount of unallocated space available is greater than the size of the backup, you can proceed with restoring. If the amount of unallocated space is less than the size of the backup, you can stop the restoration workflow, resize your existing disks on the target Linode to make room for it, and then come back to the restore page after the disk resize operation has finished.


        In some cases, you will not be able to shrink your disks enough to fit the restored backup. As an alternative, you can change your Linode’s plan to a higher tier that offers more disk space.
      5. From the Restore to Existing Linode menu, click Restore.

        Your backup will begin restoring to your Linode, and you can monitor its progress in the notifications area. Note that the time it takes to restore your backup will vary depending upon the restore size, and the number of files being restored.

      Boot from a Backup

      After the backup has been restored, the disks and configuration profiles will be available to the destination Linode you selected. Select the restored configuration profile and reboot your Linode to start up from the restored disks:

      1. From the Linodes page, select the Linode that you restored the backup to. Navigate to the Settings tab and open the Advanced Configurations section.

      2. Select the ellipsis icon (three dots) next to the configuration profile that was restored and select Boot This Config.

        Navigate to the Advanced Configurations section of your Linode's Settings tab.

      The Linode will start from the backup disks. Monitor the notifications area for progress.

      Cancel the Backup Service

      You can cancel the Backup Service at any time. From your Linode’s dashboard, choose the Backups tab and click the Cancel Backups link at the bottom of the page. Cancelling the service will remove your saved backups from the Linode platform.


      There are some limitations to what the Linode Backup Service can back up. Here are some things you should be aware of:

      • The Backup Service must be able to mount your disks. If you’ve created partitions, configured full disk encryption, or made other changes that prevent us from mounting the disk as a filesystem, you will likely not be able to use the Linode Backup Service. The backup system operates at the file level, not at the block level.
      • Because the Backup Service is file-based, the number of files stored on disk will impact both the time it takes for backups and restores to complete, and your ability to successfully take and restore backups. Customers who need to permanently store a large number of files may want to archive bundles of smaller files into a single file, or consider other backup services.
      • Backups taken of ext4 or ext3 filesystems will be restored as ext4. Backups taken of other mountable filesystem types will have their contents restored using ext4.
      • Files that have been modified but have the same size and modify time will not be considered “changed” during a subsequent backup. ACLs and extended attributes are not tracked.
      • The Backup Service uses a snapshot of your disks to take consistent backups while your Linode is running. This method is very reliable, but can fail to properly back up the data files for database services like MySQL. If the snapshot occurs during a transaction, the database’s files may be backed up in an unclean state. We recommend scheduling routine dumps of your database to a file on the filesystem. The resulting file will then be backed up, allowing you to restore the contents of the database if you need to restore from a backup.


      This guide is published under a CC BY-ND 4.0 license.


      How Your Online Business Can Nail Customer Service During the Holiday Rush

      The holiday season is upon us once more, and that means many things for your business. On the one hand, you’re about to enter the most lucrative period of the year. However, you’ll also be considerably busier than usual, and will likely need to deal with a much higher number of customer support queries.

      To make sure your support can cope with the holiday rush, you’ll want to plan ahead. Strengthening and preparing your support team is key to helping them provide assistance for a huge influx of stressed customers. If you do that, you’ll be able to reap the benefits of the season more effectively.

      In this article, we’ll discuss why it’s particularly important to provide quality customer service throughout the holidays. We’ll also offer some tips for how you can prepare your business and support team in advance. Let’s get started!

      Why Customer Service Matters Most During the Holidays

      If you’re anything like us, you’re getting busier by the day preparing for the holiday season. However, this isn’t just a time for buying gifts and eating good food. It’s also the most critical period for businesses, as many companies make the bulk of their yearly sales during the last few weeks of the year.

      However, to make sure your business takes full advantage of this period, you’ll need to plan ahead carefully. There are plenty of ways to ensure that you’re ready for the holiday rush, and one of the most crucial is making sure your customer service will function flawlessly.

      Of course, providing high-quality customer support is always necessary. During the holiday rush, however, you will most likely be inundated with even more support queries, questions, and confused customers than at any other time of the year. And because of high stress levels, you’re also more likely to end up dealing with some frustrated and potentially antagonistic customers.

      This might sound intimidating. By preparing in advance and making a solid plan, however, you can ensure that your customer service will remain top-notch even under less-than-ideal circumstances. Not only will this help your customers, but it will be a huge benefit to you and your customer service agents as well.

      10 Ways to Prepare Your Customer Service for the Holiday Rush

      If you’re wondering: “When should I start to prepare for the holidays?”, our answer is right now! It’s never too early to start planning for the year’s final month, but having a plan in place at least before the beginning of December is highly recommended.

With that in mind, we’re going to guide you through some of the most important steps you’ll want to take. Here are 10 things you can do to prepare your customer service before Santa arrives!

      1. Analyze Last Year’s Data

      A perfect place to start your planning is to look back at the previous year. This will involve examining the volume of calls and messages you received, finding out what the most common pain points were, and trying to understand where your service may have been lacking.

      Having this data at hand will be a huge help when formulating a plan for the upcoming rush. You’ll be able to improve in areas where you’ve struggled previously, and you can also preemptively provide information for the most common customer questions. In turn, this will cut down on the number of queries your team has to field.

      How you go about doing this analysis will naturally depend on your toolset. If you’re using software like Zendesk or Awesome Support, you can just view the statistics and queries from previous years. You should also liaise with your support and marketing teams, as they’ll be best equipped to tell you where you need to focus your attention.

      Here are some vital questions you’ll want to be able to answer:

      • How much larger is the volume of support queries you receive during the holiday shopping period, compared with the rest of the year?
      • What are the most common questions customers have?
      • How are most people choosing to contact you — via email, phone, chat, or some other medium?

      Of course, this is by no means an exhaustive list. However, these answers will help you immensely throughout the rest of your preparations.
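As a rough sketch of how you might answer those questions, here is a minimal analysis over a hypothetical export of last year’s query log. The data, dates, and channel names are all made up for illustration; in practice you would pull the equivalent records from your help desk tool’s reporting export.

```python
from collections import Counter
from datetime import date

# Hypothetical export of last year's support queries: (date, channel).
queries = [
    (date(2022, 3, 14), "email"),
    (date(2022, 7, 2), "phone"),
    (date(2022, 11, 25), "chat"),
    (date(2022, 12, 1), "email"),
    (date(2022, 12, 5), "email"),
    (date(2022, 12, 12), "phone"),
    (date(2022, 12, 18), "email"),
    (date(2022, 12, 20), "chat"),
]

# Most common contact channels -- tells you where to focus staffing.
by_channel = Counter(channel for _, channel in queries)
print(by_channel.most_common())

# How much of the yearly volume lands in the holiday shopping period?
holiday = sum(1 for d, _ in queries if d.month in (11, 12))
rest = len(queries) - holiday
print(f"holiday share: {holiday / len(queries):.0%}")
```

Even a quick tally like this tells you which channels need the most coverage and how much extra volume to budget for.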

      2. Decide Which Support Channels to Focus On

It’s essential that you know where to focus your attention during the holiday season. At first glance, it might seem best to use every conceivable method of contact, but this can lead to spreading yourself too thin.

      Imagine that you have to simultaneously juggle phone lines, live chat, emails, and social media, in addition to updating your content and dealing with orders and shipping. In this scenario, you’ll likely see most — if not all — of those channels suffer in quality. This is especially true if you only have a small support team.

      To avoid this problem, you’ll need to consider which channels of communication to focus on. The best way to start is by looking at which channels are most commonly used by your customers. As we mentioned in the previous section, looking at earlier years’ support queries will give you a good baseline to work from. However, you’ll also want to consider which channels are most popular during the rest of the year.

      For example, if you find that your customers are primarily calling in or using your contact form throughout the year, it’s fair to assume that these will be the busiest channels during the holidays as well. Knowing this will let you assign more people to handle those channels, and avoid keeping customers waiting.

      3. Prepare for Quick Scaling

      The truth is that no matter how well you plan, the holidays are never completely predictable. This means you’ll need to have a contingency plan, in case you need to scale up or down with little notice.

      For example, what if you face twice as many support requests as you anticipated? You’ll need to be able to assign more time and manpower to deal with them, while also keeping the rest of your operations afloat. In this scenario, you might consider hiring remote seasonal workers to help out.

      This is something many companies do to handle the increased volume of work during the holidays. Hiring temporary workers gives you the freedom to change the size of your team at almost a moment’s notice. For example, you could use a service like PartnerHero to outsource some or all of your customer support work during this period.

      Naturally, you’ll need to ensure that these seasonal workers have all the assets and information they need, which is something we’ll discuss later on. With the right preparations in place, they should be able to slot into your normal operations with little friction and help you deal with almost any unexpected situation.

      4. Keep Your Customers Informed

Arguably the most significant way to avoid customer frustration is to manage customers’ expectations. If your support is changing during the holidays, you need to make that clear as early as possible. Customers will need to know when and how they can contact you.

      It’s also smart to let customers know how your other operations are likely to alter. For example, will returns take longer to process, and will they need to wait a bit for responses to their emails? By letting them know what to expect, you can keep them informed and minimize the risk of frustration or hostility.

      One strategy you can use to your advantage is sometimes referred to as “underpromise and overdeliver.” The idea is that you prepare customers for potential issues that may arise, but then work to avoid those problems anyway. This lets you exceed their expectations.

      Overall, our recommendation is to be honest about what customers can expect and to make any changes clear through as many channels as possible. That includes on your website, social media, and even your email list. This will ensure that the bulk of your customers know what to expect.

      5. Use Automation to Your Advantage

      When the season gets going and you find yourself swamped in tasks, every second will count. To make sure you can use your available time most efficiently, you’ll want to consider automating tasks whenever it’s possible to do so.

      For instance, you can create an automated workflow using software like Help Scout. This can be set up to redirect customer queries to the person or team best suited to deal with them. Not only will this save time on your end, but it will also keep waiting times down for your stressed customers.

      Workflows also let you handle plenty of other tasks automatically, such as tracking products to let you know right away when stocks are low. You can then deal with the potential issue before it becomes a full-blown problem.

      There are plenty of other ways you can use automation during the holidays. One of the best strategies is to set up an AI-driven chatbot that can help you deal with the most common questions. This can dramatically cut down on the amount of time the human members of your team need to spend on customer support requests.
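To make the automated-workflow idea above concrete, here is a minimal keyword-based router of the kind a workflow tool applies behind the scenes. The team names and keyword sets are purely illustrative assumptions, not a real product’s configuration:

```python
import re

# Hypothetical routing rules: if a query mentions any of these keywords,
# send it to the matching team. Rules are checked in order.
ROUTES = [
    ({"refund", "charged", "payment"}, "billing"),
    ({"shipping", "delivery", "tracking"}, "fulfillment"),
    ({"password", "login", "account"}, "accounts"),
]

def route(message, default="general"):
    """Pick the team best suited to a customer message."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    for keywords, team in ROUTES:
        if words & keywords:
            return team
    return default

print(route("Where is my delivery?"))  # routed to "fulfillment"
```

Real help desk software expresses the same logic through a rules UI rather than code, but the principle is identical: match on the content of the query, then hand it to the right people automatically.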

      6. Implement a Triage System for Support Queries

      In addition to automating parts of your support system, you can also optimize it by introducing a triage process. This involves sorting tasks and support queries into categories depending on their urgency. You can then prioritize more urgent matters first, while non-emergency tasks can be dealt with later.

      Implementing triage into your customer service will let you focus your attention on what matters most at any given time. The most pressing and time-sensitive tasks can be dealt with right away, minimizing the risk of making your customers feel frustrated and hostile.

      An easy way to do this is to simply categorize each customer query according to priority. If an issue needs to be dealt with immediately, you might label it as “critical,” while if it needs to be looked at within 1-2 hours it could be labeled “urgent.” Issues that can wait a day or two, on the other hand, can be noted as “low priority.”

      However, you need to remember that you’ll still have to actually deal with all requests. If you find that you’re never getting around to handling low-priority tasks, you may need to consider scaling your team up temporarily by assigning additional personnel.
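The triage labels described above map naturally onto a small priority queue: tickets come out in order of urgency, and equally urgent tickets come out first-in, first-out. This is a sketch with made-up labels and ticket texts, not a prescribed implementation:

```python
import heapq
from itertools import count

# Hypothetical triage labels; lower number = handled sooner.
PRIORITY = {"critical": 0, "urgent": 1, "low": 2}

_order = count()  # tie-breaker so equal-priority tickets stay FIFO
queue = []

def file_ticket(queue, label, description):
    """Add a support query under one of the triage labels."""
    heapq.heappush(queue, (PRIORITY[label], next(_order), description))

def next_ticket(queue):
    """Pop the most urgent outstanding query."""
    _, _, description = heapq.heappop(queue)
    return description

file_ticket(queue, "low", "Question about gift wrapping")
file_ticket(queue, "critical", "Payment charged twice")
file_ticket(queue, "urgent", "Order stuck in processing")

print(next_ticket(queue))  # "Payment charged twice" comes out first
```

The key property is that “low” items are never lost, only deferred, which is exactly why you still need enough staffing to drain the queue eventually.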

      7. Update Your Content and Knowledge Base

      Earlier, we discussed the importance of keeping your customers informed. However, this extends beyond just letting them know about changes to your schedule. By making sure that all of your content and assets are up-to-date, you can save both customers and yourself a lot of time and hassle.

      For example, if you provide a knowledge base with information about your products and services, you can use it to answer most of the most commonly asked questions during the holiday period. In many cases, your support team can simply refer customers to relevant knowledge base articles, answering their queries quickly.

      For this to work, you’ll obviously need to ensure that you provide as much documentation and information as possible. It also needs to be thoroughly updated, to ensure that you don’t cause additional confusion among your customers.

      If you need to set up a knowledge base, you can use a plugin such as Heroic Knowledge Base. If you already have one, on the other hand, you should perform a content audit well before the holiday rush kicks in. This can also involve reviewing similar resources, such as your FAQ page.

      8. Learn How to Help Stressed Customers

The holidays are intended to offer relaxation and fun, but we all know they can also be a thoroughly stressful period. As such, you’re likely to deal with a few customers who are particularly difficult, frustrated, or even outright antagonistic.

      Naturally, you’ll need to prepare in order to help them out and avoid angering them further. Dealing with difficult customers is a delicate task. The most valuable advice we can offer is to train your support team to stay calm and professional at all times, no matter what a customer might say.

      In addition, here are some ways you can approach particularly challenging customers:

      • Listen. If the customer feels like they’re being deflected or ignored, they’re only going to get angrier and less responsive.
      • Be quick. Naturally, your goal is to be as fast as possible with all support queries. However, it can be worth prioritizing more stressed customers, to avoid further incident.
      • Treat them like people. We discussed the value of automation earlier, but in tough cases, it’s better to take a personal approach. Make it clear to the customer that you’re handling their issue and care about their frustration, so they don’t feel like they’re being treated as a nuisance.

      In short, by listening to the customer and being prepared to meet them halfway, you can usually solve even the most heated of issues.

      9. Prepare to Provide Compensation to Customers

      In some situations, you may need to compensate customers. Especially in the most volatile or challenging cases, a simple gift can help to smooth things over immensely. Some customers might even demand this kind of treatment.

      Providing compensation can help to soften even the most upset customers. It can also win back some goodwill. Your goal is to ensure that the customer considers using your business again in the future, despite their current grievances.

      Naturally, you’ll want to be very careful about how and when you compensate customers. In some cases, such as when they’ve received a faulty product, you may be legally obligated to provide a new item or a refund.

      However, you can also provide compensation if a customer has had a particularly difficult experience, either with your business or your customer service. This could be in the form of a small gift, a coupon, a discount, or anything else that’s convenient but useful to the customer.

      10. Take Care of Your Support Team

      Finally, while it’s obviously necessary to take care of your customers, you shouldn’t ignore the people on your own front lines. Beginning on Black Friday and Cyber Monday, the holiday rush is a stressful experience for everyone, especially those who have to field questions and requests from wound-up customers.

      Depending on the size of your business, you can take care of your support team in several ways. Naturally, you should make sure they have everything they’ll need to do their jobs without incident.

      However, it’s also nice to reward your support team further, to show your appreciation for all their hard work. Even something as simple as the occasional gift, like seasonally appropriate sweets and drinks, can do a lot to raise morale during this hectic season.

      Holiday Shopping Made Easy

      The holidays are meant to be a time of joy, but it can be hard to feel merry if your customer service is strained. By preparing well in advance, you can put a plan into place, train your team, and inform your customers — providing effective and efficient support as a result.

      Do you have any questions about how to handle customer support during the holiday rush? Find us on social and let’s start the conversation!
