      Kubernetes Reference Guide

      Contributed by Linode

      This is a reference for common Kubernetes terminology.


      Calico

      Calico is an implementation of Kubernetes’ networking model, and it served as the original reference for the Kubernetes NetworkPolicy API during its development. Calico also provides advanced policy enforcement capabilities that extend beyond Kubernetes’ NetworkPolicy API, and these can be used by administrators alongside that API. Calico uses BGP to distribute routes for Kubernetes Pods, without the need for overlays.


      Cluster

      A group of servers containing at least one master node and one or more worker nodes.


      Containers

      Similar to virtual machines, containers are isolated runtimes that you can run your applications and services inside. Containers consume fewer resources than virtual machines, as they do not attempt to emulate a full operating system running on dedicated hardware. Instead, containers only bundle the files, environment variables, and libraries needed by the applications they run, and they share the other resources of the operating system that hosts them.


      Containerization

      Containerization is a software architecture practice that organizes applications and their dependencies in containers. Containerizing an application requires a base image that can be used to create an instance of a container. Once an application’s image exists, you can push it to a centralized container registry. Docker, Kubernetes, and other orchestration tools can download images from a registry to deploy container instances.

      Container Storage Interface

      The Container Storage Interface (CSI) specification provides a common storage interface for container orchestrators like Kubernetes (and others, like Mesos). The interface is used by an orchestrator to attach storage volumes to containers and to manage the lifecycle of those volumes.

      The objective of this specification is to allow cloud computing platforms to develop a single storage plugin that works with any container orchestrator. Linode has authored a CSI driver for Linode’s Block Storage service, which makes Block Storage Volumes available to your containers.


      Controller

      A Kubernetes Controller is a control loop that continuously watches the Kubernetes API and tries to manage the desired state of certain aspects of the cluster. Examples of different Controllers include:

      • ReplicaSets, which manage the number of running instances of a particular Pod.
      • Deployments, which manage the number of running instances of a particular Pod and can perform upgrades of Pods to new versions.
      • Jobs, which manage Pods that perform one-off tasks.

      Control Plane

      kube-apiserver, kube-controller-manager, kube-scheduler, and etcd form what is known as the Control Plane of a Kubernetes cluster. The Control Plane is responsible for keeping a record of the state of a cluster, making decisions about the cluster, and pushing the cluster towards new desired states.


      Deployment

      A Deployment can manage a ReplicaSet, so it shares the ability to keep a defined number of replica Pods up and running. A Deployment can also update those Pods to resemble the desired state by means of rolling updates. For example, if you wanted to update a container image to a newer version, you would create a Deployment, and the Controller would update the container images one by one until the desired state is achieved. This ensures that there is no downtime when updating or altering your Pods.
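      As an illustrative sketch (the names, image, and replica count here are hypothetical), a minimal Deployment manifest looks like this; editing the image tag and re-applying the file triggers the rolling update described above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # hypothetical application name
spec:
  replicas: 3                  # keep three replica Pods running at all times
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: nginx:1.25      # bump this tag to roll out a new version
```

      Applying an edited manifest with kubectl apply -f deployment.yaml replaces Pods one by one until every replica runs the new image.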


      Docker

      Docker is a tool that allows quick deployment of apps in containers using operating-system-level virtualization. While Kubernetes supports several container runtimes, Docker is a very popular option.


      Dockerfile

      A Dockerfile contains all commands, in their required order of execution, needed to build a given Docker image. For example, a Dockerfile might contain instructions for:

      • Installing a specific operating system by referencing another image,
      • Installing an application’s dependencies, and
      • Executing configuration commands in the running container.

      Docker Hub

      Docker Hub is a centralized container image registry that can host your images and make them available for sharing and deployment. You can also find and use official Docker images and vendor-specific images. When combined with a remote version control service, like GitHub, Docker Hub can automate the build process for images and can trigger actions for further automation with other services and tooling.


      etcd

      etcd is a highly available key-value store that provides the backend database for Kubernetes.


      Flannel

      Flannel is a networking overlay that implements the Kubernetes networking model. Flannel supplies a layer 3 network fabric and is relatively easy to set up.


      Helm

      Helm is a tool that assists with installing and managing applications on Kubernetes clusters. It is often referred to as “the package manager for Kubernetes,” and it provides functions that are similar to a package manager for an operating system.

      Helm Charts

      The software packaging format for Helm. A Helm chart specifies a file and directory structure for packaging your Kubernetes manifests.
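      As a sketch, the top level of a chart directory contains a Chart.yaml metadata file (the name and versions below are hypothetical), a values.yaml file of default configuration, and a templates/ directory holding the Kubernetes manifests:

```yaml
# Chart.yaml — metadata describing the chart itself
apiVersion: v1                # chart API version used by Helm 2 (Helm 3 uses v2)
name: example-chart           # hypothetical chart name
version: 0.1.0                # version of the chart packaging
appVersion: "1.0.0"           # version of the application being packaged
description: A chart that packages an example application
```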

      Helm Client

      The Helm client software issues commands to your cluster that can install new applications, upgrade them, and delete them. You run the client software on your computer, in your CI/CD environment, or anywhere else you’d like.

      Helm Tiller

      A server component that runs on your cluster and receives commands from the Helm client software. Tiller is responsible for directly interacting with the Kubernetes API (which the client software does not do). Tiller maintains the state for your Helm releases.


      Job

      A Job is a Controller that manages Pods created for a single task or a set of tasks. This is handy if you need to create a Pod that performs a single function, or calculates a value. Deleting a Job will also delete the Pods it created.


      kube-apiserver

      The kube-apiserver is the front end for the Kubernetes API. It validates and configures data for Kubernetes’ API objects, including Pods, Services, Deployments, and more.


      kube-controller-manager

      The kube-controller-manager is a daemon that manages the Kubernetes control loop. It watches the shared state of the cluster through the Kubernetes API server.


      kube-proxy

      The kube-proxy is a network proxy that runs on each node and proxies UDP, TCP, and SCTP traffic, and it provides load balancing. It is used only for connecting to Services.


      kube-scheduler

      The kube-scheduler is a component that watches for newly created Pods that have not yet been assigned a node. kube-scheduler then assigns each Pod to a node based on a host of requirements.


      kubeadm

      kubeadm is a cloud provider-agnostic tool that automates many of the tasks required to get a cluster up and running. Users of kubeadm can run a few simple commands on individual servers to turn them into a Kubernetes cluster consisting of a master node and worker nodes.


      kubectl

      kubectl is a command line tool used to interact with a Kubernetes cluster. It offers a host of features, including:

      • Creating, stopping, and deleting resources
      • Describing active resources
      • Autoscaling resources


      kubelet

      kubelet is an agent that receives descriptions of the desired state of a Pod from the API server and ensures the Pod is healthy and running on its node.


      Kubernetes

      Kubernetes, often referred to as “k8s”, is an open source container orchestration system that helps deploy and manage containerized applications. Developed by Google starting in 2014 and written in the Go language, Kubernetes is quickly becoming the standard way to architect horizontally-scalable applications.

      Kubernetes Manifests

      Files, often written in YAML, used to create, modify, and delete Kubernetes resources such as Pods, Deployments, and Services.
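      For example, a minimal manifest describing a single Pod might look like the following (the names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example              # Services select Pods by labels like this one
spec:
  containers:
  - name: web
    image: nginx:1.25         # illustrative container image
    ports:
    - containerPort: 80
```

      Running kubectl apply -f pod.yaml submits the manifest to the API server, which creates the resource; kubectl delete -f pod.yaml removes it.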

      Linode Cloud Controller Manager

      The Linode Cloud Controller Manager (CCM) creates a fully supported Kubernetes experience on Linode:

      • Linode NodeBalancers are automatically deployed when a Kubernetes Service of type “LoadBalancer” is deployed. This is the most reliable way to allow services running in your cluster to be reachable from the Internet.

      • Linode hostnames and network addresses (private/public IPs) are automatically associated with their corresponding Kubernetes resources, forming the basis for a variety of Kubernetes features.

      • Node resources are put into the correct state when Linodes are shut down, allowing Pods to be appropriately rescheduled.

      • Nodes are annotated with the Linode region, which is the basis for scheduling based on failure domains.

      Linode k8s-alpha CLI

      The Linode k8s-alpha CLI is a plugin for the Linode CLI that offers quick, single-command deployments of Kubernetes clusters on your Linode account.

      Linode NodeBalancer

      NodeBalancers are highly available, managed, cloud-based “load balancers as a service”. They intelligently route incoming requests to backend Linodes to help your application cope with load, and to increase your application’s availability.

      Master Server

      The Kubernetes Master is normally a separate server in a Kubernetes cluster responsible for maintaining the desired state of the cluster. It does this by telling the nodes how many instances of your application they should run, and where.


      Namespaces

      Namespaces are virtual clusters that exist within the Kubernetes cluster that help to group and organize Kubernetes API objects. Every cluster has at least three Namespaces: default, kube-system, and kube-public. When interacting with the cluster it is important to know which Namespace the object you are looking for is in, as many commands will default to only showing you what exists in the default Namespace. Resources created without an explicit Namespace will be added to the default Namespace.


      Orchestration

      Orchestration is the automated configuration, coordination, and management of computer systems, software, middleware, and services. It takes advantage of automated tasks to execute processes. The subject of orchestration is often discussed in reference to lifecycle management for containers, a practice known as container orchestration.


      Pods

      A Pod is the smallest deployable unit of computing in the Kubernetes architecture. A Pod is a group of one or more containers with shared resources and a specification for how to run these containers. Each Pod has its own IP address in the cluster. Pods are “mortal,” which means that they are created and destroyed depending on the needs of the application.


      Rancher

      Rancher is a web application that provides a graphical interface for creating and managing clusters. Rancher also provides easy interfaces for deploying and scaling apps on your clusters, and it has a built-in catalog of curated apps to choose from.


      ReplicaSet

      A ReplicaSet is one of the Controllers responsible for keeping a given number of replica Pods running. If one Pod goes down in a ReplicaSet, another will be created to replace it. In this way, Kubernetes is self-healing. However, for most use cases it is recommended to use a Deployment instead of a ReplicaSet.


      REST

      REST stands for REpresentational State Transfer. It is an architectural style for network based software that requires stateless, cacheable, client-server communication via a uniform interface between components. The HTTP protocol is most often used in RESTful applications.


      Services

      Services group identical Pods together to provide a consistent means of accessing them. Each service is given an IP address and a corresponding DNS entry. Services exist across nodes. There are four types of Services:

      • ClusterIP: exposes the Service internally to the cluster; this is the default type of Service.

      • NodePort: exposes the Service to the internet from the IP address of the node at the specified port number, which is in the range 30000-32767.

      • LoadBalancer: creates a load balancer assigned to a fixed IP address in the cloud if the cloud provider supports it. For clusters deployed on Linode, this is the responsibility of the Linode’s Cloud Controller Manager (CCM), which will create NodeBalancers for each of your LoadBalancer services. This is the best way to expose your cluster to the internet.

      • ExternalName: maps the Service to a DNS name by returning a CNAME record redirect. ExternalName is good for directing traffic to outside resources, such as a database hosted on another cloud.
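      As a sketch, a ClusterIP Service that routes to Pods labeled app: example (the labels and ports here are illustrative) looks like this; swapping the type field for NodePort or LoadBalancer changes how the Service is exposed:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: ClusterIP             # the default; internal to the cluster
  selector:
    app: example              # forward traffic to Pods carrying this label
  ports:
  - port: 80                  # port the Service listens on
    targetPort: 8080          # port the container accepts traffic on
```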


      Terraform

      Terraform by HashiCorp is a software tool that allows you to represent your Linode instances and other resources with declarative code inside configuration files, instead of manually creating those resources via the Linode Manager or API. This practice is referred to as Infrastructure as Code, and Terraform is a popular example of this methodology.


      Volumes

      A Volume in Kubernetes is a way to share file storage between containers in a Pod. Kubernetes Volumes differ from Docker volumes because they exist inside the Pod rather than inside the container.
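      A common pattern is an emptyDir Volume shared by two containers in the same Pod; this sketch (container names and images are illustrative) mounts one Volume into both:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  volumes:
  - name: shared-data
    emptyDir: {}              # scratch storage that lives as long as the Pod does
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data        # the reader sees the file the writer created
```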

      Worker Nodes

      Worker nodes in a Kubernetes cluster are servers that run your applications’ Pods. The number of nodes in your cluster is determined by the cluster administrator.


      This guide is published under a CC BY-ND 4.0 license.


      How to Use Ansible: A Reference Guide

      Ansible Cheat Sheet


      Ansible is a modern configuration management tool that facilitates the task of setting up and maintaining remote servers.

      This cheat sheet-style guide provides a quick reference to commands and practices commonly used when working with Ansible. For an overview of Ansible and how to install and configure it, please check our guide on how to install and configure Ansible on Ubuntu 18.04.

      How to Use This Guide:

      • This guide is in cheat sheet format with self-contained command-line snippets.
      • Jump to any section that is relevant to the task you are trying to complete.
      • When you see highlighted text in this guide’s commands, substitute it with the hosts, usernames, and IP addresses from your own inventory.

      Ansible Glossary

      The following Ansible-specific terms are used throughout this guide:

      • Control Machine / Node: a system where Ansible is installed and configured to connect and execute commands on nodes.
      • Node: a server controlled by Ansible.
      • Inventory File: a file that contains information about the servers Ansible controls, typically located at /etc/ansible/hosts.
      • Playbook: a file containing a series of tasks to be executed on a remote server.
      • Role: a collection of playbooks and other files that are relevant to a goal such as installing a web server.
      • Play: a full Ansible run. A play can have several playbooks and roles, included from a single playbook that acts as the entry point.

      If you’d like to practice the commands used in this guide with a working Ansible playbook, you can use this playbook from our guide on Automating Initial Server Setup with Ansible on Ubuntu 18.04. You’ll need at least one remote server to use as a node.
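      For reference, a playbook is a YAML file along these lines (the host group, package, and task names are hypothetical):

```yaml
---
- name: Example play                # a playbook contains one or more plays
  hosts: all                        # host group from your inventory file
  become: true                      # escalate privileges with sudo
  tasks:
    - name: Install vim
      apt:                          # the apt module manages Debian/Ubuntu packages
        name: vim
        state: present
```

      You would run it with ansible-playbook myplaybook.yml, as covered in the sections that follow.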

      Testing Connectivity to Nodes

      To test that Ansible is able to connect and run commands and playbooks on your nodes, you can use the following command:

      • ansible all -m ping

      The ping module will test whether you have valid credentials for connecting to the nodes defined in your inventory file, in addition to testing whether Ansible is able to run Python scripts on the remote server. A “pong” reply means Ansible is ready to run commands and playbooks on that node.

      Connecting as a Different User

      By default, Ansible tries to connect to the nodes as your current system user, using its corresponding SSH keypair. To connect as a different user, append the command with the -u flag and the name of the intended user:

      • ansible all -m ping -u sammy

      The same is valid for ansible-playbook:

      • ansible-playbook myplaybook.yml -u sammy

      Using a Custom SSH Key

      If you're using a custom SSH key to connect to the remote servers, you can provide it at execution time with the --private-key option:

      • ansible all -m ping --private-key=~/.ssh/custom_id

      This option is also valid for ansible-playbook:

      • ansible-playbook myplaybook.yml --private-key=~/.ssh/custom_id

      Using Password-Based Authentication

      If you need to use password-based authentication in order to connect to the nodes, you need to append the option --ask-pass to your Ansible command.

      This will make Ansible prompt you for the password of the user on the remote server that you're attempting to connect as:

      • ansible all -m ping --ask-pass

      This option is also valid for ansible-playbook:

      • ansible-playbook myplaybook.yml --ask-pass

      Providing the sudo Password

      If the remote user needs to provide a password in order to run sudo commands, you can include the option --ask-become-pass in your Ansible command. This will prompt you to provide the remote user’s sudo password:

      • ansible all -m ping --ask-become-pass

      This option is also valid for ansible-playbook:

      • ansible-playbook myplaybook.yml --ask-become-pass

      Using a Custom Inventory File

      The default inventory file is typically located at /etc/ansible/hosts, but you can also use the -i option to point to custom inventory files when running Ansible commands and playbooks. This is useful for setting up per-project inventories that can be included in version control systems such as Git:

      • ansible all -m ping -i my_custom_inventory

      The same option is valid for ansible-playbook:

      • ansible-playbook myplaybook.yml -i my_custom_inventory

      Using a Dynamic Inventory File

      Ansible supports inventory scripts for building dynamic inventory files. This is useful if your inventory fluctuates, with servers being created and destroyed often.

      You can find a number of open source inventory scripts on the official Ansible GitHub repository. After downloading the desired script to your Ansible control machine and setting up any required information — such as API credentials — you can use the executable as custom inventory with any Ansible command that supports this option.

      The following command uses Ansible's DigitalOcean inventory script with a ping command to check connectivity to all current active servers:

      • ansible all -m ping -i digital_ocean.py

      For more details on how to use dynamic inventory files, please refer to the official Ansible documentation.

      Running ad-hoc Commands

      To execute any command on a node, use the -a option followed by the command you want to run, in quotes.

      This will execute uname -a on all the nodes in your inventory:

      • ansible all -a "uname -a"

      It is also possible to run Ansible modules with the option -m. The following command would install the package vim on server1 from your inventory:

      • ansible server1 -m apt -a "name=vim"

      Before making changes to your nodes, you can conduct a dry run to predict how the servers would be affected by your command. This can be done by including the --check option:

      • ansible server1 -m apt -a "name=vim" --check

      Running Playbooks

      To run a playbook and execute all the tasks defined within it, use the ansible-playbook command:

      • ansible-playbook myplaybook.yml

      To override the default hosts option in the playbook and limit execution to a certain group or host, include the option -l in your command:

      • ansible-playbook -l server1 myplaybook.yml

      Getting Information about a Play

      The option --list-tasks is used to list all tasks that would be executed by a play without making any changes to the remote servers:

      • ansible-playbook myplaybook.yml --list-tasks

      Similarly, it is possible to list all hosts that would be affected by a play, without running any tasks on the remote servers:

      • ansible-playbook myplaybook.yml --list-hosts

      You can use tags to limit the execution of a play. To list all tags available in a play, use the option --list-tags:

      • ansible-playbook myplaybook.yml --list-tags

      Controlling Playbook Execution

      You can use the option --start-at-task to define a new entry point for your playbook. Ansible will then skip anything that comes before the specified task, executing the remainder of the play from that point on. This option requires a valid task name as argument:

      • ansible-playbook myplaybook.yml --start-at-task="Set Up Nginx"

      To only execute tasks associated with specific tags, you can use the option --tags. For instance, if you'd like to only execute tasks tagged as nginx or mysql, you can use:

      • ansible-playbook myplaybook.yml --tags=mysql,nginx

      If you want to skip all tasks that are under specific tags, use --skip-tags. The following command would execute myplaybook.yml, skipping all tasks tagged as mysql:

      • ansible-playbook myplaybook.yml --skip-tags=mysql
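      Tags are declared on the tasks (or plays) inside the playbook itself; a sketch of two tagged tasks (the package names are illustrative) looks like:

```yaml
tasks:
  - name: Install MySQL server
    apt:
      name: mysql-server
      state: present
    tags:
      - mysql                 # selected by --tags=mysql, skipped by --skip-tags=mysql
  - name: Install Nginx
    apt:
      name: nginx
      state: present
    tags:
      - nginx
```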

      Using Ansible Vault to Store Sensitive Data

      If your Ansible playbooks deal with sensitive data like passwords, API keys, and credentials, it is important to keep that data safe by using an encryption mechanism. Ansible provides ansible-vault to encrypt files and variables.

      Even though it is possible to encrypt any Ansible data file as well as binary files, it is more common to use ansible-vault to encrypt variable files containing sensitive data. After encrypting a file with this tool, you'll only be able to execute, edit or view its contents by providing the relevant password defined when you first encrypted the file.

      Creating a New Encrypted File

      You can create a new encrypted Ansible file with:

      • ansible-vault create credentials.yml

      This command will perform the following actions:

      • First, it will prompt you to enter a new password. You'll need to provide this password whenever you access the file contents, whether it's for editing, viewing, or just running playbooks or commands using those values.
      • Next, it will open your default command-line editor so you can populate the file with the desired contents.
      • Finally, when you're done editing, ansible-vault will save the file as encrypted data.

      Encrypting an Existing Ansible File

      To encrypt an existing Ansible file, you can use the following syntax:

      • ansible-vault encrypt credentials.yml

      This will prompt you for a password that you'll need to enter whenever you access the file credentials.yml.

      Viewing the Contents of an Encrypted File

      If you want to view the contents of a file that was previously encrypted with ansible-vault and you don't need to change its contents, you can use:

      • ansible-vault view credentials.yml

      This will prompt you to provide the password you selected when you first encrypted the file with ansible-vault.

      Editing an Encrypted File

      To edit the contents of a file that was previously encrypted with Ansible Vault, run:

      • ansible-vault edit credentials.yml

      This will prompt you to provide the password you chose when first encrypting the file credentials.yml with ansible-vault. After password validation, your default command-line editor will open with the unencrypted contents of the file, allowing you to make your changes. When finished, you can save and close the file as you would normally, and the updated contents will be saved as encrypted data.

      Decrypting Encrypted Files

      If you wish to permanently revert a file that was previously encrypted with ansible-vault to its unencrypted version, you can do so with this syntax:

      • ansible-vault decrypt credentials.yml

      This will prompt you to provide the same password used when first encrypting the file credentials.yml with ansible-vault. After password validation, the file contents will be saved to the disk as unencrypted data.

      Using Multiple Vault Passwords

      Ansible supports multiple vault passwords grouped by different vault IDs. This is useful if you want to have dedicated vault passwords for different environments, such as development, testing, and production environments.

      To create a new encrypted file using a custom vault ID, include the --vault-id option along with a label and the location where ansible-vault can find the password for that vault. The label can be any identifier, and the location can either be prompt, meaning that the command should prompt you to enter a password, or a valid path to a password file.

      • ansible-vault create --vault-id dev@prompt credentials_dev.yml

      This will create a new vault ID named dev that uses prompt as password source. By combining this method with group variable files, you'll be able to have separate ansible vaults for each application environment:

      • ansible-vault create --vault-id prod@prompt credentials_prod.yml

      We used dev and prod as vault IDs to demonstrate how you can create separate vaults per environment, but you can create as many vaults as you want, and you can use any identifier of your choice as vault ID.

      Now to view, edit, or decrypt these files, you'll need to provide the same vault ID and password source along with the ansible-vault command:

      • ansible-vault edit credentials_dev.yml --vault-id dev@prompt

      Using a Password File

      If you need to automate the process of provisioning servers with Ansible using a third-party tool, you'll need a way to provide the vault password without being prompted for it. You can do that by using a password file with ansible-vault.

      A password file can be a plain text file or an executable script. If the file is an executable script, the output produced by this script will be used as the vault password. Otherwise, the raw contents of the file will be used as vault password.

      To use a password file with ansible-vault, you need to provide the path to a password file when running any of the vault commands:

      • ansible-vault create --vault-id dev@path/to/passfile credentials_dev.yml

      Ansible doesn't make a distinction between content that was encrypted using prompt or a password file as password source, as long as the input password is the same. In practical terms, this means it is OK to encrypt a file using prompt and then later use a password file to store the same password used with the prompt method. The opposite is also true: you can encrypt content using a password file and later use the prompt method, providing the same password when prompted by Ansible.

      For extended flexibility and security, instead of having your vault password stored in a plain text file, you can use a Python script to obtain the password from other sources. The official Ansible repository contains a few examples of vault scripts that you can use for reference when creating a custom script that suits the particular needs of your project.

      Running a Playbook with Data Encrypted via Ansible Vault

      Whenever you run a playbook that uses data previously encrypted via ansible-vault, you'll need to provide the vault password to your playbook command.

      If you used default options and the prompt password source when encrypting the data used in this playbook, you can use the option --ask-vault-pass to make Ansible prompt you for the password:

      • ansible-playbook myplaybook.yml --ask-vault-pass

      If you used a password file instead of prompting for the password, you should use the option --vault-password-file instead:

      • ansible-playbook myplaybook.yml --vault-password-file path/to/passfile

      If you're using data encrypted under a vault ID, you'll need to provide the same vault ID and password source you used when first encrypting the data:

      • ansible-playbook myplaybook.yml --vault-id dev@prompt

      If using a password file with your vault ID, you should provide the label followed by the full path to the password file as password source:

      • ansible-playbook myplaybook.yml --vault-id dev@path/to/passfile

      If your play uses multiple vaults, you should provide a --vault-id parameter for each of them, in no particular order:

      • ansible-playbook myplaybook.yml --vault-id dev@prompt --vault-id test@prompt --vault-id ci@prompt


      Debugging

      If you run into errors while executing Ansible commands and playbooks, it’s a good idea to increase output verbosity in order to get more information about the problem. You can do that by including the -v option in the command:

      • ansible-playbook myplaybook.yml -v

      If you need more detail, you can use -vvv to increase output verbosity further. If you’re unable to connect to the remote nodes via Ansible, use -vvvv to get connection debugging information:

      • ansible-playbook myplaybook.yml -vvvv


      Conclusion

      This guide covers some of the most common Ansible commands you may use when provisioning servers, such as how to execute remote commands on your nodes and how to run playbooks using a variety of custom settings.

      There are other command variations and flags that you may find useful for your Ansible workflow. To get an overview of all available options, you can use the help command:

      • ansible --help

      If you want a more comprehensive view of Ansible and all its available commands and features, please refer to the official Ansible documentation.


      Infrastructure for Online Gaming: Bare Metal and Colocation Reference Architecture

      Bare Metal is powerful, fast and, most importantly, easily scalable—all qualities that make it perfect for resource-intensive, dynamic applications like massive online games. It’s a single-tenant environment, meaning you can harness all the computing power of the hardware for yourself (and without the need for virtualization).

      And beyond that, it offers all that performance and functionality at a competitive price, even when fully customized to your performance needs and unique requirements.

      Given all this, it’s easy to see why Bare Metal has quickly become the infrastructure solution of choice for gaming applications. So what does a comprehensive gaming deployment look like?

      Bare Metal for Gaming: Reference Architecture

      Here’s an example of what a Bare Metal deployment for gaming might look like.

      Download this Bare Metal reference architecture [PDF].

      1. Purpose-Built Configurations: Standard configurations are available, but one strength of Bare Metal is its customizability for specific performance needs or unique requirements.

      2. Access the Edge: Solution flexibility and wide reach across a global network puts gaming platforms closer to end users for better performance.

      3. Critical Services: Infrastructure designed for the needs of your application, combined with environment monitoring and support, enables the consistent performance your players expect from any high-quality gaming experience.

      4. Content Delivery Networks: CDNs are perfect for executing software downloads and patch updates or for delivering cut scenes and other static embedded content quickly, while reducing loads on main servers. Read our recent blog about CDN to learn more.

      5. Automated Route Optimization: Your infrastructure is nothing without a solid network to connect it to your players. Ours is powered by our proprietary Performance IP service, which ensures outbound traffic takes the lowest-latency path, reducing lag and packet loss. For more on this technology, read below.

      6. Cloud Connect: On-ramp to hyperscale cloud providers—ideal for test deployments and traffic bursting. If you’re not sure what kind of cloud is right for you, our cloud experts can help you craft a flexible multicloud deployment that meets the needs of your applications and integrates seamlessly into your other infrastructure solutions.

      7. Enterprise SAN Storage: Connect to a high-speed storage area network (SAN) for reliable, secure storage.


      The Need for Ultra-Low Latency

      In online games, latency plays a huge role in the overall gaming experience. Just a few milliseconds of lag can mean the difference between winning and losing—between an immersive experience and something that people stop playing after a few frustrated minutes.

      Minimizing latency is always an ongoing battle, which is why INAP is proud of our automated route optimization engine Performance IP and its proven ability to put outbound traffic on the lowest-latency route possible.

      • Enhances default Border Gateway Protocol (BGP) by automatically routing outbound traffic along the lowest-latency path
      • Millions of optimizations made per location every hour
      • Carrier-diverse IP blend creates network redundancy (up to 7 carriers per location)
      • Supported by complex network security to protect client data and purchases



      Colocation for Gaming: Reference Architecture

      If a hosted model isn’t right for you—maybe you want or need to bring your own hardware—Colocation might be a good way to bring the power, resiliency and availability of modern data centers to your gaming application.

      Download this Colocation reference architecture [PDF].

      1. Purpose-Built Configurations: Secure cabinets, cages and private suites can be configured to your needs.

      High-Density Colocation: High power density means more bang for your footprint. INAP environments support 20+ kW per rack for efficiency and ease of scalability.

      Designed for Concurrent Maintainability: Tier 3-design data centers provide component redundancy and superior availability.

      2. Automated Route Optimization: Your infrastructure is nothing without a solid network to connect it to your players. Ours is powered by our proprietary Performance IP service, which ensures outbound traffic takes the lowest-latency path, reducing lag and packet loss.

      3. Cloud Connect: On-ramp to hyperscale cloud providers—ideal for test deployments and traffic bursting. If you’re not sure what kind of cloud is right for you, our cloud experts can help you craft a flexible multicloud deployment that meets the needs of your applications and integrates seamlessly into your other infrastructure solutions.

      4. Integrated With Private Cloud & Bare Metal: Run auxiliary or back-office applications in right-sized Private Cloud and/or Bare Metal environments engineered to meet your needs. Get onboarding and support from experts.

      5. Enterprise SAN Storage: Connect to a high-speed storage area network (SAN) for reliable, secure storage.


      Josh Williams

      Josh Williams is Vice President of Solutions Engineering. His team enables enterprises and service providers in the design, deployment and management of a wide range of data center and cloud IT solutions.
