

      How to Manage DigitalOcean and Kubernetes Infrastructure with Pulumi

      The author selected the Diversity in Tech Fund to receive a donation as part of the Write for DOnations program.


Pulumi is a tool for creating, deploying, and managing infrastructure using code written in general-purpose programming languages. It supports automating all of DigitalOcean’s managed services—such as Droplets, managed databases, DNS records, and Kubernetes clusters—in addition to application configuration. Deployments are performed from an easy-to-use command-line interface that also integrates with a wide variety of popular CI/CD systems.

Pulumi supports multiple languages, but in this tutorial you will use TypeScript, a statically typed superset of JavaScript that runs on the Node.js runtime. This means you will get IDE support and compile-time checking that will help ensure you’ve configured the right resources, used correct slugs, etc., while still being able to access any NPM modules for utility tasks.

In this tutorial, you will provision a DigitalOcean Kubernetes cluster, a load balanced Kubernetes application, and a DigitalOcean DNS domain that makes your application available at a stable domain name of your choosing. This can all be provisioned in 60 lines of infrastructure-as-code and a single pulumi up command. After this tutorial, you’ll be ready to productively build powerful cloud architectures using Pulumi infrastructure-as-code that leverages the full surface area of DigitalOcean and Kubernetes.


Prerequisites

To follow this tutorial, you will need:

      • A DigitalOcean Account to deploy resources to. If you do not already have one, register here.
      • A DigitalOcean API Token to perform automated deployments. Generate a personal access token here and keep it handy as you’ll use it in Step 2.
      • Because you’ll be creating and using a Kubernetes cluster, you’ll need to install kubectl. Don’t worry about configuring it further — you’ll do that later.
      • You will write your infrastructure-as-code in TypeScript, so you will need Node.js 8 or later. Download it here or install it using your system’s package manager.
      • You’ll use Pulumi to deploy infrastructure, so you’ll need to install the open source Pulumi SDK.
      • To perform the optional Step 5, you will need a domain name configured to use DigitalOcean nameservers. This guide explains how to do this for your registrar of choice.

      Step 1 — Scaffolding a New Project

      The first step is to create a directory that will store your Pulumi project. This directory will contain the source code for your infrastructure definitions, in addition to metadata files describing the project and its NPM dependencies.

First, create the directory:

• mkdir do-k8s

Next, move into the newly created directory:

• cd do-k8s

      From now on, run commands from your newly created do-k8s directory.

Next, create a new Pulumi project. There are different ways to accomplish this, but the easiest way is to use the pulumi new command with the typescript project template. This command will first prompt you to log in to Pulumi so that your project and deployment state are saved, and will then create a simple TypeScript project in the current directory:

• pulumi new typescript -y

Here you have passed the -y option to the new command, which tells it to accept default project options. For example, the project name is taken from the current directory’s name, and so will be do-k8s. If you’d like to choose different options, such as the project name, omit the -y flag and the command will prompt you for them.
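For reference, the generated Pulumi.yaml is a small metadata file describing the project. With the defaults accepted, it will look roughly like the following sketch (the description string may differ depending on your template version):

```yaml
name: do-k8s
runtime: nodejs
description: A minimal TypeScript Pulumi program
```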

After running the command, list the contents of the directory with ls:

• ls

      The following files will now be present:


      Pulumi.yaml index.ts node_modules package-lock.json package.json tsconfig.json

      The primary file you’ll be editing is index.ts. Although this tutorial only uses this single file, you can organize your project any way you see fit using Node.js modules. This tutorial also describes one step at a time, leveraging the fact that Pulumi can detect and incrementally deploy only what has changed. If you prefer, you can just populate the entire program, and deploy it all in one go using pulumi up.

      Now that you’ve scaffolded your new project, you are ready to add the dependencies needed to follow the tutorial.

      Step 2 — Adding Dependencies

The next step is to install and add dependencies on the DigitalOcean and Kubernetes packages. First, install them using NPM:

• npm install @pulumi/digitalocean @pulumi/kubernetes

This will download the NPM packages and Pulumi plugins, and save them as dependencies in your package.json.

Next, open the index.ts file with your favorite editor. This tutorial will use nano:

• nano index.ts

      Replace the contents of your index.ts with the following:


      import * as digitalocean from "@pulumi/digitalocean";
      import * as kubernetes from "@pulumi/kubernetes";

      This makes the full contents of these packages available to your program. If you type "digitalocean." using an IDE that understands TypeScript and Node.js, you should see a list of DigitalOcean resources supported by this package, for instance.

      Save and close the file after adding the content.

      Note: We will be using a subset of what’s available in those packages. For complete documentation of resources, properties, and associated APIs, please refer to the relevant API documentation for the @pulumi/digitalocean and @pulumi/kubernetes packages.

      Next, you will configure your DigitalOcean token so that Pulumi can provision resources in your account:

      • pulumi config set digitalocean:token YOUR_TOKEN_HERE --secret

Notice the --secret flag, which uses Pulumi’s encryption service to encrypt your token, ensuring that it is stored in ciphertext. If you prefer, you can use the DIGITALOCEAN_TOKEN environment variable instead, but you’ll need to remember to set it every time you update your program, whereas using configuration automatically stores and uses it for your project.
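Because of --secret, only ciphertext is stored on disk. The stack configuration file (Pulumi.dev.yaml for a stack named dev) will contain something like the following sketch, where the secure value is an illustrative placeholder for your encrypted token:

```yaml
config:
  digitalocean:token:
    secure: AAABANlUnnLK...   # encrypted ciphertext (placeholder), safe to commit
```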

      In this step you added the necessary dependencies and configured your API token with Pulumi so that you can provision your Kubernetes cluster.

      Step 3 — Provisioning a Kubernetes Cluster

Now you’re ready to create a DigitalOcean Kubernetes cluster. Get started by reopening the index.ts file:

• nano index.ts

      Add these lines at the end of your index.ts file:


const cluster = new digitalocean.KubernetesCluster("do-cluster", {
    region: digitalocean.Regions.SFO2,
    version: "latest",
    nodePool: {
        name: "default",
        size: digitalocean.DropletSlugs.DropletS2VCPU2GB,
        nodeCount: 3,
    },
});

export const kubeconfig = cluster.kubeConfigs[0].rawConfig;

      This new code allocates an instance of digitalocean.KubernetesCluster and sets a number of properties on it. This includes using the sfo2 region slug, the latest supported version of Kubernetes, the s-2vcpu-2gb Droplet size slug, and states your desired count of three Droplet instances. Feel free to change any of these, but be aware that DigitalOcean Kubernetes is only available in certain regions at the time of this writing. You can refer to the product documentation for updated information about region availability.

      For a complete list of properties you can configure on your cluster, please refer to the KubernetesCluster API documentation.

The final line in that code snippet exports the resulting Kubernetes cluster’s kubeconfig file so that it’s easy to use. Exported variables are printed to the console and also accessible to tools. You will use this momentarily to access your cluster from standard tools like kubectl.

Now you’re ready to deploy your cluster. To do so, run pulumi up:

• pulumi up

      This command takes the program, generates a plan for creating the infrastructure described, and carries out a series of steps to deploy those changes. This works for the initial creation of infrastructure in addition to being able to diff and update your infrastructure when subsequent updates are made. In this case, the output will look something like this:


Previewing update (dev):

     Type                                      Name        Plan
 +   pulumi:pulumi:Stack                       do-k8s-dev  create
 +   └─ digitalocean:index:KubernetesCluster   do-cluster  create

Resources:
    + 2 to create

Do you want to perform this update?
  yes
> no
  details

      This says that proceeding with the update will create a single Kubernetes cluster named do-cluster. The yes/no/details prompt allows us to confirm that this is the desired outcome before any changes are actually made. If you select details, a full list of resources and their properties will be shown. Choose yes to begin the deployment:


Updating (dev):

     Type                                      Name        Status
 +   pulumi:pulumi:Stack                       do-k8s-dev  created
 +   └─ digitalocean:index:KubernetesCluster   do-cluster  created

Outputs:
    kubeconfig: "…"

Resources:
    + 2 created

Duration: 6m5s

Permalink: …/do-k8s/dev/updates/1

      It takes a few minutes to create the cluster, but then it will be up and running, and the full kubeconfig will be printed out to the console. Save the kubeconfig to a file:

      • pulumi stack output kubeconfig > kubeconfig.yml

      And then use it with kubectl to perform any Kubernetes command:

      • KUBECONFIG=./kubeconfig.yml kubectl get nodes

      You will receive output similar to the following:


NAME           STATUS   ROLES    AGE     VERSION
default-o4sj   Ready    <none>   4m5s    v1.14.2
default-o4so   Ready    <none>   4m3s    v1.14.2
default-o4sx   Ready    <none>   3m37s   v1.14.2

At this point you’ve set up infrastructure-as-code and have a repeatable way to bring up and configure new DigitalOcean Kubernetes clusters. In the next step, you will build on top of this to define the Kubernetes application infrastructure in code and learn how to deploy and manage it similarly.

      Step 4 — Deploying an Application to Your Cluster

      Next, you will describe a Kubernetes application’s configuration using infrastructure-as-code. This will consist of three parts:

      1. A Provider object, which tells Pulumi to deploy Kubernetes resources to the DigitalOcean cluster, rather than the default of whatever kubectl is configured to use.
      2. A Kubernetes Deployment, which is the standard Kubernetes way of deploying a Docker container image that is replicated across any number of Pods.
      3. A Kubernetes Service, which is the standard way to tell Kubernetes to load balance access across a target set of Pods (in this case, the Deployment above).

      This is a fairly standard reference architecture for getting up and running with a load balanced service in Kubernetes.

To deploy all three of these, open your index.ts file again:

• nano index.ts

      Once the file is open, append this code to the end of the file:


const provider = new kubernetes.Provider("do-k8s", { kubeconfig });

const appLabels = { "app": "app-nginx" };
const app = new kubernetes.apps.v1.Deployment("do-app-dep", {
    spec: {
        selector: { matchLabels: appLabels },
        replicas: 5,
        template: {
            metadata: { labels: appLabels },
            spec: {
                containers: [{
                    name: "nginx",
                    image: "nginx",
                }],
            },
        },
    },
}, { provider });

const appService = new kubernetes.core.v1.Service("do-app-svc", {
    spec: {
        type: "LoadBalancer",
        selector: app.spec.template.metadata.labels,
        ports: [{ port: 80 }],
    },
}, { provider });

export const ingressIp = appService.status.loadBalancer.ingress[0].ip;

      This code is similar to standard Kubernetes configuration, and the behavior of objects and their properties is equivalent, except that it’s written in TypeScript alongside your other infrastructure declarations.
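For comparison, the Deployment and Service above correspond roughly to the following standard Kubernetes YAML. This is a sketch: Pulumi generates suffixed resource names automatically, so the metadata names here are illustrative rather than the names you will see in your cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: do-app-dep
spec:
  selector:
    matchLabels:
      app: app-nginx
  replicas: 5
  template:
    metadata:
      labels:
        app: app-nginx
    spec:
      containers:
        - name: nginx
          image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: do-app-svc
spec:
  type: LoadBalancer
  selector:
    app: app-nginx
  ports:
    - port: 80
```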

      Save and close the file after making the changes.

Just like before, run pulumi up to preview and then deploy the changes:

• pulumi up

      After selecting yes to proceed, the CLI will print out detailed status updates, including diagnostics around Pod availability, IP address allocation, and more. This will help you understand why your deployment might be taking time to complete or getting stuck.

      The full output will look something like this:


Updating (dev):

     Type                             Name        Status
     pulumi:pulumi:Stack              do-k8s-dev
 +   ├─ pulumi:providers:kubernetes   do-k8s      created
 +   ├─ kubernetes:apps:Deployment    do-app-dep  created
 +   └─ kubernetes:core:Service       do-app-svc  created

Outputs:
  + ingressIp : ""

Resources:
    + 3 created
    2 unchanged

Duration: 2m52s

Permalink: …/do-k8s/dev/updates/2

      After this completes, notice that the desired number of Pods are running:

      • KUBECONFIG=./kubeconfig.yml kubectl get pods


NAME                                   READY   STATUS    RESTARTS   AGE
do-app-dep-vyf8k78z-758486ff68-5z8hk   1/1     Running   0          1m
do-app-dep-vyf8k78z-758486ff68-8982s   1/1     Running   0          1m
do-app-dep-vyf8k78z-758486ff68-94k7b   1/1     Running   0          1m
do-app-dep-vyf8k78z-758486ff68-cqm4c   1/1     Running   0          1m
do-app-dep-vyf8k78z-758486ff68-lx2d7   1/1     Running   0          1m

      Similar to how the program exports the cluster’s kubeconfig file, this program also exports the Kubernetes service’s resulting load balancer’s IP address. Use this to curl the endpoint and see that it is up and running:

      • curl $(pulumi stack output ingressIp)


<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href=""></a>.<br/>
Commercial support is available at
<a href=""></a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

From here, you can easily edit and redeploy your application infrastructure. For example, try changing the replicas: 5 line to say replicas: 7, and then rerun pulumi up:

• pulumi up

      Notice that it just shows what has changed, and that selecting details displays the precise diff:


Previewing update (dev):

     Type                            Name        Plan     Info
     pulumi:pulumi:Stack             do-k8s-dev
 ~   └─ kubernetes:apps:Deployment   do-app-dep  update   [diff: ~spec]

Resources:
    ~ 1 to update
    4 unchanged

Do you want to perform this update? details

  pulumi:pulumi:Stack: (same)
    [urn=urn:pulumi:dev::do-k8s::pulumi:pulumi:Stack::do-k8s-dev]
  ~ kubernetes:apps/v1:Deployment: (update)
      [id=default/do-app-dep-vyf8k78z]
      [urn=urn:pulumi:dev::do-k8s::kubernetes:apps/v1:Deployment::do-app-dep]
      [provider=urn:pulumi:dev::do-k8s::pulumi:providers:kubernetes::do-k8s::80f36105-337f-451f-a191-5835823df9be]
    ~ spec: {
        ~ replicas: 5 => 7
      }

      Now you have both a fully functioning Kubernetes cluster and a working application. With your application up and running, you may want to configure a custom domain to use with your application. The next step will guide you through configuring DNS with Pulumi.

      Step 5 — Creating a DNS Domain (Optional)

      Although the Kubernetes cluster and application are up and running, the application’s address is dependent upon the whims of automatic IP address assignment by your cluster. As you adjust and redeploy things, this address might change. In this step, you will see how to assign a custom DNS name to the load balancer IP address so that it’s stable even as you subsequently change your infrastructure.

Note: To complete this step, ensure you have a domain using DigitalOcean’s DNS nameservers: ns1.digitalocean.com, ns2.digitalocean.com, and ns3.digitalocean.com. Instructions to configure this are available in the Prerequisites section.

      To configure DNS, open the index.ts file and append the following code to the end of the file:


const domain = new digitalocean.Domain("do-domain", {
    name: "your_domain",
    ipAddress: ingressIp,
});
      This code creates a new DNS entry with an A record that refers to your Kubernetes service’s IP address. Replace your_domain in this snippet with your chosen domain name.

It is common to want additional subdomains, like www, to point at the web application. This is easy to accomplish using a DigitalOcean DNS record. To make this example more interesting, also add a CNAME record that points www at your root domain:


const cnameRecord = new digitalocean.DnsRecord("do-domain-cname", {
    domain: domain.name,
    type: "CNAME",
    name: "www",
    value: "@",
});
      Save and close the file after making these changes.

Finally, run pulumi up to deploy the DNS changes to point at your existing application and cluster:

• pulumi up


Updating (dev):

     Type                              Name             Status
     pulumi:pulumi:Stack               do-k8s-dev
 +   ├─ digitalocean:index:Domain      do-domain        created
 +   └─ digitalocean:index:DnsRecord   do-domain-cname  created

Resources:
    + 2 created
    5 unchanged

Duration: 6s

Permalink: …/do-k8s/dev/updates/3

After the DNS changes have propagated, you will be able to access your content at your custom domain:

• curl your_domain

      You will receive output similar to the following:


<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href=""></a>.<br/>
Commercial support is available at
<a href=""></a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

      With that, you have successfully set up a new DigitalOcean Kubernetes cluster, deployed a load balanced Kubernetes application to it, and given that application’s load balancer a stable domain name using DigitalOcean DNS, all in 60 lines of code and a pulumi up command.

      The next step will guide you through removing the resources if you no longer need them.

      Step 6 — Removing the Resources (Optional)

      Before concluding the tutorial, you may want to destroy all of the resources created above. This will ensure you don’t get charged for resources that aren’t being used. If you prefer to keep your application up and running, feel free to skip this step.

Run the following command to destroy the resources. Be careful using this, as it cannot be undone!

• pulumi destroy

      Just as with the up command, destroy displays a preview and prompt before taking action:


Previewing destroy (dev):

     Type                                      Name             Plan
 -   pulumi:pulumi:Stack                       do-k8s-dev       delete
 -   ├─ digitalocean:index:DnsRecord           do-domain-cname  delete
 -   ├─ digitalocean:index:Domain              do-domain        delete
 -   ├─ kubernetes:core:Service                do-app-svc       delete
 -   ├─ kubernetes:apps:Deployment             do-app-dep       delete
 -   ├─ pulumi:providers:kubernetes            do-k8s           delete
 -   └─ digitalocean:index:KubernetesCluster   do-cluster       delete

Resources:
    - 7 to delete

Do you want to perform this destroy?
  yes
> no
  details

      Assuming this is what you want, select yes and watch the deletions occur:


Destroying (dev):

     Type                                      Name             Status
 -   pulumi:pulumi:Stack                       do-k8s-dev       deleted
 -   ├─ digitalocean:index:DnsRecord           do-domain-cname  deleted
 -   ├─ digitalocean:index:Domain              do-domain        deleted
 -   ├─ kubernetes:core:Service                do-app-svc       deleted
 -   ├─ kubernetes:apps:Deployment             do-app-dep       deleted
 -   ├─ pulumi:providers:kubernetes            do-k8s           deleted
 -   └─ digitalocean:index:KubernetesCluster   do-cluster       deleted

Resources:
    - 7 deleted

Duration: 7s

Permalink: …/do-k8s/dev/updates/4

At this point, nothing remains: the DNS entries and the Kubernetes cluster, along with the application running inside of it, are gone. The permalink is still available, so you can still go back and see the full history of updates for this stack. This could help you recover if the destruction was a mistake, since the service keeps full state history for all resources.

If you’d like to destroy your project in its entirety, remove the stack:

• pulumi stack rm dev

      You will receive output asking you to confirm the deletion by typing in the stack’s name:


      This will permanently remove the 'dev' stack! Please confirm that this is what you'd like to do by typing ("dev"):

Unlike the destroy command, which deletes the cloud infrastructure resources, removing a stack completely erases your stack’s full history from Pulumi’s purview.


Conclusion

In this tutorial, you’ve deployed DigitalOcean infrastructure resources—a Kubernetes cluster and a DNS domain with A and CNAME records—in addition to the Kubernetes application configuration that uses this cluster. You have done so using infrastructure-as-code written in a familiar programming language, TypeScript, that works with existing editors, tools, and libraries, and leverages existing communities and packages. You’ve done it all using a single command-line workflow for deployments that span your application and infrastructure.

From here, there are a number of next steps you might take.

      The entire sample from this tutorial is available on GitHub. For extensive details about how to use Pulumi infrastructure-as-code in your own projects today, check out the Pulumi Documentation, Tutorials, or Getting Started guides. Pulumi is open source and free to use.


      How to Keep your IT Infrastructure Safe from Natural Disasters

Costly natural disasters—think disasters that cost over $1 billion—are occurring with increased frequency. According to the National Oceanic and Atmospheric Administration, there was an average of 6.3 annual billion-dollar events from 1980 to 2018, yet in the last five years alone, the average doubled to 12.6.

      Last year, natural disasters cost the U.S. $91 billion, and there were 30 events in total over 2017 and 2018 with losses exceeding $1 billion.

      Whether the event is a hurricane, flood, tornado or wildfire, businesses can be blindsided when they do happen. And many businesses are woefully unprepared. As many as 50 percent of organizations affected won’t survive these kinds of events, according to IDC’s State of IT Resilience white paper.

      Of those businesses that do survive, IDC found that the average cost of downtime is $250,000 per hour across all industries and organizational sizes.

      Imagine what would happen if your business takes a direct hit and your data, applications and infrastructure are disabled. We all know that these events are unpredictable, but that doesn’t mean that we can’t do something now to prepare for any eventuality.

      Here are a few basic steps you should take to protect your IT infrastructure and keep your business up and running after a natural disaster.


      Perform a Self-Evaluation

      The first step in protecting your sensitive information is to determine exactly what needs to be safeguarded.

For most companies, the biggest risk is data loss. Determine how many instances of your data exist and where they are located. If your company only performs backups onsite, or even stores data off-site with no additional backup, you need to reevaluate your strategy. Putting all your eggs in one basket makes it easy for your information to be wiped out by natural disasters.

      Think About Off-Site Backups in Different Locations

      If you do use off-site backups for your information, you’re taking a step in the right direction, but depending on their physical location, your data might not yet be fully protected.

      Consider this scenario: Your business is headquartered in San Francisco and you back up your data in nearby Silicon Valley. A massive earthquake strikes the Bay Area (seismologists say California is overdue for the next “big one”), disabling your building as well as the data center where your backup data is located. Depending on the size of the disaster it could take hours, days or even weeks before your data is accessible. Would your company be able to survive this disruption?

      A smarter option would be to select a backup site that’s not in the same geographic region, reducing the chances that both locations would be impacted by the same disaster.

      Consider the Cloud

An option becoming more popular with businesses is to utilize cloud storage as their backup solution. INAP provides a cost-effective, scalable, and dependable cloud storage solution.

      Another dependable and more robust option, Disaster Recovery as a Service (DRaaS) replicates mission-critical data and applications so your business does not suffer any downtime during natural disasters. DRaaS provides an automatic failover to a secondary site should your main environment go down, while allowing your IT teams to monitor and control your replicated infrastructure without your end users knowing anything is wrong.

Think of DRaaS as facility redundancy in your infrastructure, but rather than running your servers simultaneously from multiple sites, one is just standing by ready to go in case of an emergency.

      To learn more about INAP’s backup and disaster recovery solutions, sign up to receive your free consultation with a data center services expert today.

      Don’t Wait Until It’s Too Late

      It’s never a bad time to evaluate your disaster recovery strategy. But if you’re waiting for a natural disaster to come barreling toward your city, then you’re waiting too long to establish and activate your backup strategy.

      It’s just up to you and your IT team to determine which services are most appropriate for your business needs.

      Explore INAP Cloud.


      Laura Vietmeyer



      Infrastructure for Online Gaming: Bare Metal and Colocation Reference Architecture

      Bare Metal is powerful, fast and, most importantly, easily scalable—all qualities that make it perfect for resource-intensive, dynamic applications like massive online games. It’s a single-tenant environment, meaning you can harness all the computing power of the hardware for yourself (and without the need for virtualization).

      And beyond that, it offers all that performance and functionality at a competitive price, even when fully customized to your performance needs and unique requirements.

      Given all this, it’s easy to see why Bare Metal has quickly become the infrastructure solution of choice for gaming applications. So what does a comprehensive gaming deployment look like?

      Bare Metal for Gaming: Reference Architecture

      Here’s an example of what a Bare Metal deployment for gaming might look like.

[Figure: Bare Metal gaming reference architecture]
      Download this Bare Metal reference architecture [PDF].

      1. Purpose-Built Configurations: Standard configurations are available, but one strength of Bare Metal is its customizability for specific performance needs or unique requirements.

      2. Access the Edge: Solution flexibility and wide reach across a global network puts gaming platforms closer to end users for better performance.

      3. Critical Services: Infrastructure designed for the needs of your application, combined with environment monitoring and support, enables the consistent performance your players expect from any high-quality gaming experience.

      4. Content Delivery Networks: CDNs are perfect for executing software downloads and patch updates or for delivering cut scenes and other static embedded content quickly, while reducing loads on main servers. Read our recent blog about CDN to learn more.

      5. Automated Route Optimization: Your infrastructure is nothing without a solid network to connect it to your players. Ours is powered by our proprietary Performance IP service, which ensures outbound traffic takes the lowest-latency path, reducing lag and packet loss. For more on this technology, read below.

      6. Cloud Connect: On-ramp to hyperscale cloud providers—ideal for test deployments and traffic bursting. If you’re not sure what kind of cloud is right for you, our cloud experts can help you craft a flexible multicloud deployment that meets the needs of your applications and integrates seamlessly into your other infrastructure solutions.

      7. Enterprise SAN Storage: Connect to a high-speed storage area network (SAN) for reliable, secure storage.


      The Need for Ultra-Low Latency

      In online games, latency plays a huge role in the overall gaming experience. Just a few milliseconds of lag can mean the difference between winning and losing—between an immersive experience and something that people stop playing after a few frustrated minutes.

      Minimizing latency is always an ongoing battle, which is why INAP is proud of our automated route optimization engine Performance IP and its proven ability to put outbound traffic on the lowest-latency route possible.

      • Enhances default Border Gateway Protocol (BGP) by automatically routing outbound traffic along the lowest-latency path
      • Millions of optimizations made per location every hour
      • Carrier-diverse IP blend creates network redundancy (up to 7 carriers per location)
      • Supported by complex network security to protect client data and purchases

Learn more about how it works, or jump into a demo to see for yourself the difference that it makes.


      If a hosted model isn’t right for you—maybe you want or need to bring your own hardware—Colocation might be a good way to bring the power, resiliency and availability of modern data centers to your gaming application.

[Figure: Colocation gaming reference architecture]
      Download this Colocation reference architecture [PDF].

      1. Purpose-Built Configurations: Secure cabinets, cages and private suites can be configured to your needs.

      High-Density Colocation: High power density means more bang for your footprint. INAP environments support 20+ kW per rack for efficiency and ease of scalability.

      Designed for Concurrent Maintainability: Tier 3-design data centers provide component redundancy and superior availability.

      2. Automated Route Optimization: Your infrastructure is nothing without a solid network to connect it to your players. Ours is powered by our proprietary Performance IP service, which ensures outbound traffic takes the lowest-latency path, reducing lag and packet loss.

      3. Cloud Connect: On-ramp to hyperscale cloud providers—ideal for test deployments and traffic bursting. If you’re not sure what kind of cloud is right for you, our cloud experts can help you craft a flexible multicloud deployment that meets the needs of your applications and integrates seamlessly into your other infrastructure solutions.

      4. Integrated With Private Cloud & Bare Metal: Run auxiliary or back-office applications in right-sized Private Cloud and/or Bare Metal environments engineered to meet your needs. Get onboarding and support from experts.

      5. Enterprise SAN Storage: Connect to a high-speed storage area network (SAN) for reliable, secure storage.

      Interested in learning more about INAP Bare Metal?


      Josh Williams

      Josh Williams is Vice President of Solutions Engineering. His team enables enterprises and service providers in the design, deployment and management of a wide range of data center and cloud IT solutions. READ MORE
