

      Dedicated Private Cloud vs. Virtual Private Cloud: What’s the Difference?

      What is the difference between a dedicated private cloud and a virtual private cloud? As solutions architects, my teammates and I hear this question often. Simply put:

      • Dedicated Private Cloud (DPC) is defined as a physically isolated, single-tenant collection of compute, network and sometimes storage resources provisioned exclusively to a single organization or application.
      • Virtual Private Cloud (VPC) is defined as a multi-tenant but virtually isolated collection of compute, network and storage resources.

      A simple analogy comparing the two would be choosing between a single-family private home (DPC) versus a condo building (VPC).

      Despite the differences, both dedicated and virtual private clouds offer secure environments with flexible management options, which allow you to concentrate on your core business instead of struggling to keep up with daily infrastructure monitoring and maintenance.

      Let’s discuss each cloud product in greater depth and review use cases for dedicated vs. virtual private clouds. I’ll use INAP’s dedicated private cloud (DPC) and virtual private cloud (VPC) products as examples for the DPC and VPC differentiators.

      Dedicated Private Cloud (DPC)

      DPCs are scalable, isolated computing environments that are tailored to fit unique requirements and right-sized for any workload or application. DPCs are ideal for mission-critical or legacy applications. When applications can’t easily be refactored for the cloud, a DPC can be a viable solution. A DPC is also ideal for organizations seeking to reduce time spent maintaining infrastructure. You do not need to sacrifice control, compliance or performance with a DPC. INAP DPCs are built with trusted enterprise-class technologies powered by VMware or Hyper-V.

      DPC use cases:

      • Compliance and audit requirements, such as PCI or HIPAA
      • Stringent security requirements
      • Large scale applications with rigorous performance and/or data storage requirements
      • Legacy applications, which may require hardware keys or specific software licensing components
      • Data center migration — scale physical compute, network and storage capacity as needed without significant investments in data center build outs
      • Complex network requirements, which may include MPLS, SD-WAN, or private layer 2 connections to customers, vendors or partners
      • Fully-integrated active or hot-standby disaster recovery environments
      • Infrastructure Management Services, all the way to the operating system
      • High CPU/GPU/RAM requirements
      • AI environments
      • Big Data
      • Always-on applications that are not a fit for hyper-scale providers

      INAP’s DPC differentiators:

      • Designed and “right-sized” to fit your application, economics and compliance requirements
      • Built with enterprise-class technologies and powered by VMware or Hyper-V.
      • Utilize 100 percent isolated compute and highly secure, single-tenant environments perfect for PCI or HIPAA compliance.
      • Flexible compute and data storage options which allow you to meet any application performance and growth requirements.
      • OS Managed services free up time otherwise spent on routine patching tasks.
      • Transparency into the core infrastructure technology gives you complete visibility into the inner workings of the environment.
      • No restrictions on sizing of the VMs or application workloads, because the infrastructure is custom designed for your organization’s specific technology needs.
      • SDN switching for flexible, quick and easy network management or dedicated switching for complex network configurations to meet any network requirements.
      • MDR security services available, which include vulnerability scanning, IDS/IPS, log management with SOC (Security Operations Center)
      • Off-site cloud backups and fully integrated and managed DRaaS available.

      Virtual Private Cloud (VPC)

      VPCs are ideal for applications with variable resource requirements and for organizations seeking to reduce time spent maintaining infrastructure without sacrificing control of their virtual machines, compliance or elasticity. They provide a customized landscape of users, groups, computing resources and a virtual network that you define. Different organizations or users of VPC resources do not have access to the underlying hypervisor for customization or monitoring plugin installation.

      VPCs are pre-designed for smaller to medium workloads and provide management and monitoring tools. They allow for very fast application deployment because the highly available compute, security, storage and hypervisors are already deployed and ready for your workload.

      VPC use cases:

      • Small to medium sized workloads with 10 to 25 VMs and simple network requirements
      • Applications with lower RAM requirements
      • Ideal for additional capacity needed for projects. Deploy in hours—not days.
      • Quickly spin up unlimited Virtual Machines (VMs) per host to support new projects or to add resources on demand during peak business cycles

      INAP’s VPC differentiators:

      • Designed for fast deployments enabling you to eliminate lengthy sourcing and procurement timelines
      • Shield Managed Security services included
        • 24/7 physical security in SSAE 16/SOC 2 certified Data Centers
        • Private networks & segmentation
        • Account security for secure portal access
        • DDoS protection & Mitigation
      • OS Managed services free up time otherwise spent on routine patching tasks
      • Easy to use interface simplifies management and reduces operational expense of training IT staff
      • Off-site Cloud Backups and Fully integrated On-Demand (Paygo) DRaaS available
      • MDR security services available, which include vulnerability scanning, IDS/IPS, log management with SOC (Security Operations Center)

      Next Steps

      Do you know which private cloud model will work with your company’s workload and applications? Whether you’re certain that a DPC or VPC will be a good fit or you’re still unsure, INAP’s experts can help take your cloud infrastructure to the next level. Chat today to talk all things private cloud.

      Explore INAP Private Cloud.


      Rob Lerner



      Use Cases for Linode Dedicated CPU Instances

      Updated by Linode

      Written by Ryan Syracuse

      Why Dedicated CPU

      Dedicated CPU Linodes are well suited to CPU-intensive tasks, and can significantly reduce issues that arise in shared cloud hosting environments. Normally, when creating a Linode via our standard plan, you are paying for access to virtualized CPU cores, which are allocated to you from a host’s shared physical CPU. While a standard plan is designed to maximize performance, the reality of a shared virtualized environment is that your processes are scheduled to use the same physical CPU cores as other customers. This can produce a level of competition that results in CPU steal, or a higher wait time from the underlying hypervisor to the physical CPU.

      CPU Steal can be defined more strictly as a measure of expected CPU cycles against actual CPU cycles as your virtualized environment is scheduled access to the physical CPU. Although this number is generally small enough that it does not heavily impact standard workloads and use cases, if you are expecting high and constant consumption of CPU resources, you are at risk of being negatively impacted by CPU Steal.
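      On a Linux guest, steal time is exposed as the st column in top and as the eighth counter on the cpu line of /proc/stat. The sketch below (a minimal illustration, not an official Linode tool) computes the steal fraction from such a line; the sample counter values are made up:

```python
def steal_fraction(stat_line: str) -> float:
    """Fraction of CPU time lost to steal, from a /proc/stat 'cpu' line.

    Counters after the 'cpu' label are, in order:
    user nice system idle iowait irq softirq steal guest guest_nice
    """
    fields = [int(v) for v in stat_line.split()[1:]]
    steal = fields[7]           # eighth counter: time stolen by the hypervisor
    total = sum(fields[:8])     # guest time is already folded into user/nice
    return steal / total

# Hypothetical counter values (jiffies accumulated since boot)
line = "cpu  10000 200 3000 80000 500 0 100 700 0 0"
print(f"{steal_fraction(line):.2%}")
```

      On a live system you would read the line with open('/proc/stat').readline() and compare two samples taken a few seconds apart, since the counters are cumulative.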

      Dedicated CPU Linodes have private access to entire physical CPU cores, meaning no other Linodes will have any processes on the same cores you’re using. Dedicated CPUs are therefore exempt from any competition for CPU resources and the potential problems that could arise because of CPU steal. Depending on your workload, you can experience an improvement in performance by using Dedicated CPU.

      Dedicated CPU Use Cases

      While a standard plan is usually a good fit for most use cases, a Dedicated CPU Linode may be recommended for a number of workloads related to high and constant CPU processing. Such examples include:

      CI/CD Toolchains and Build Servers

      CI and CD are abbreviations for Continuous Integration and Continuous Delivery, respectively, and refer to an active approach to DevOps that reduces overall workloads by automatically testing and regularly implementing small changes. This can help to prevent last-minute conflicts and bugs, and keeps tasks on schedule. For more information on the specifics of CI and CD, see our Introduction to CI/CD Guide.

      In many cases, the CI/CD pipeline can become resource-intensive if many new code changes are built and tested against your build server. When a Linode is used as a remote server and is expected to be regularly active, a Dedicated CPU Linode can add an additional layer of speed and reliability to your toolchain.

      Game Servers

      Depending on the intensity of the demands they place on your Linode, game servers may benefit from a Dedicated CPU. Modern multiplayer games need to coordinate with a high number of clients and require syncing entire game worlds for each player. If CPU resources are not available, then players will experience issues like stuttering and lag.

      Audio and Video Transcoding

      Audio and Video Transcoding (AKA Video/Audio Encoding) is the process of taking a video or audio file from its original or source format and converting it to another format for use with a different device or tool. Because this is often a time-consuming and resource-intensive task, a Dedicated CPU or Dedicated GPU Linode is suggested to maximize performance. FFmpeg is a popular open source tool used specifically for the manipulation of audio and video, and is recommended for a wide variety of encoding tasks.
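      As a concrete example, a common FFmpeg invocation re-encodes a source file to H.264 video with AAC audio. The helper below only assembles that command (the file names are placeholders, and actually running it requires ffmpeg to be installed):

```python
import shlex

def transcode_cmd(src: str, dst: str, vcodec: str = "libx264", crf: int = 23) -> list:
    """Build an ffmpeg argument list for a basic H.264/AAC transcode."""
    return ["ffmpeg", "-i", src,
            "-c:v", vcodec,      # video codec
            "-crf", str(crf),    # constant-rate-factor quality setting
            "-c:a", "aac",       # audio codec
            dst]

cmd = transcode_cmd("talk.mov", "talk.mp4")
print(shlex.join(cmd))
# To execute it: subprocess.run(cmd, check=True)
```

      Building the command as a list (rather than one shell string) avoids quoting problems with file names that contain spaces.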

      Big Data and Data Analysis

      Big Data and Data Analysis is the process of analyzing and extracting meaningful insights from datasets so large they often require specialized software and hardware. Big data is most easily recognized by the “three V’s” of big data:

      • Volume: Generally, if you are working with terabytes, petabytes, exabytes, or more of information, you are in the realm of big data.
      • Velocity: With Big Data, you are using data that is being created, called, moved, and interacted with at a high velocity. One example is the real time data generated on social media platforms by their users.
      • Variety: Variety refers to the many different types of data formats with which you may need to interact. Photos, video, audio, and documents can all be written and saved in a number of different formats. It is important to consider the variety of data that you will collect in order to appropriately categorize it.

      Processing big data is often especially hardware-dependent. A Dedicated CPU can give you access to the isolated resources often required to complete these tasks.

      The following tools can be extremely useful when working with big data:

      • Hadoop – an Apache project for the creation of parallel processing applications on large data sets, distributed across networked nodes.

      • Apache Spark – a unified analytics engine for large-scale data processing designed with speed and ease of use in mind.

      • Apache Storm – a distributed computation system that processes streaming data in real time.
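      These frameworks all build on the same split-map-merge idea: divide the data, process the pieces in parallel, then combine the partial results. A toy version of that pattern, using only Python's standard library (the text chunks are invented stand-ins for real data splits):

```python
from collections import Counter
from multiprocessing import Pool

def count_words(chunk: str) -> Counter:
    """Map step: count words within a single chunk of text."""
    return Counter(chunk.split())

def parallel_word_count(chunks, workers: int = 2) -> Counter:
    """Run the map step across worker processes, then merge (reduce)."""
    with Pool(workers) as pool:
        partials = pool.map(count_words, chunks)
    total = Counter()
    for partial in partials:
        total += partial
    return total

if __name__ == "__main__":
    chunks = ["big data big", "data velocity", "big variety"]
    print(parallel_word_count(chunks).most_common(1))
```

      Hadoop and Spark apply the same shape of computation, but distribute the chunks across many machines instead of local worker processes.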

      Scientific Computing

      Scientific Computing is a term used to describe the process of using computing power to solve complex scientific problems that are either impossible, dangerous, or otherwise inconvenient to solve via traditional means. Often considered the “Third Pillar” of modern science behind Theoretical Analysis and Experimentation, Scientific Computing has quickly become a prevalent tool in scientific spaces.

      Scientific Computing involves many intersecting skills and tools for a wide array of more specific use cases, though solving complex mathematical formulas dependent on significant computing power is considered to be standard. A large number of general purpose open source software tools are available to help you get started with Scientific Computing.

      It’s worth keeping in mind that, beyond general use cases, there are many more examples of tools and software available, often designed for individual fields of science.

      Machine Learning

      Machine learning is a powerful approach to data science that uses large sets of data to build prediction algorithms. These prediction algorithms are commonly used in “recommendation” features on many popular music and video applications, online shops, and search engines. When you receive intelligent recommendations tailored to your own tastes, machine learning is often responsible. Other areas where you might find machine learning being used are in self-driving cars, process automation, security, marketing analytics, and health care.
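      At its core, a recommendation feature of this kind measures how similar users are and borrows ratings from the closest neighbor. Here is a deliberately tiny sketch of that idea, with invented users and ratings (production systems use the libraries listed below rather than hand-rolled code):

```python
from math import sqrt

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse rating vectors (item -> rating)."""
    dot = sum(a[i] * b[i] for i in set(a) & set(b))
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(target: dict, others: dict) -> list:
    """Suggest items rated by the most similar user that target hasn't seen."""
    nearest = max(others, key=lambda user: cosine(target, others[user]))
    return sorted(item for item in others[nearest] if item not in target)

users = {
    "ana": {"jazz": 5, "rock": 1, "folk": 4},
    "ben": {"rock": 5, "metal": 4},
}
print(recommend({"jazz": 4, "folk": 5}, users))  # ana is nearest -> ['rock']
```

      Real prediction algorithms train over millions of such vectors, which is where sustained CPU (or GPU) capacity matters.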

      Below is a list of common tools used for machine learning and AI that can be installed on a Linode CPU instance:

      • TensorFlow – a free, open-source, machine learning framework and deep learning library. Tensorflow was originally developed by Google for internal use and later fully released to the public under the Apache License.

      • PyTorch – a machine learning library for Python that uses the popular GPU-optimized Torch framework.

      • Apache Mahout – a scalable library of machine learning algorithms and distributed linear algebra framework designed to let mathematicians, statisticians, and data scientists quickly implement their own algorithms.

      Where to Go From Here

      If you’re ready to get started with a Dedicated CPU Linode, our Getting Started With Dedicated CPU guide will walk you through the process of an initial installation. Additionally, see our Pricing Page for a rundown of both hourly and monthly costs.


      This guide is published under a CC BY-ND 4.0 license.


      Getting Started with Dedicated CPUs

      Updated by Linode

      Written by Ryan Syracuse

      This guide will serve as a brief introduction into what a Dedicated CPU Linode is and how to add one to your Linode account. Review our Use Cases for Dedicated CPUs guide for more information about the tasks that work well on this instance type.

      What is a Dedicated CPU Linode?

      In contrast with a Standard Linode, which gives you access to shared virtual CPU cores, a Dedicated CPU Linode offers entire physical CPU cores that are accessible only by your instance. Because your cores will be isolated to your Linode, no other Linodes can schedule processes on them, so your instance will never have to wait for another process to complete its execution, and your software can run at peak speed and efficiency.

      While a Standard Linode is a good fit for most use cases, a Dedicated CPU Linode is recommended for a number of workloads related to high, sustained CPU processing, such as those covered in the Use Cases for Dedicated CPUs guide.

      Deploying a Dedicated CPU Linode

      Create a Dedicated CPU Linode in the Cloud Manager

      1. Log in to the Linode Cloud Manager.

      2. Click on the Create dropdown menu at the top left of the page, and select the Linode option.

      3. Select a Distribution, One-Click App, or Image to deploy from.


      4. Choose the region where you would like your Linode to reside. If you’re not sure which to select, see our How to Choose a Data Center guide. You can also generate MTR reports for a deeper look at the network route between you and each of our data centers.

      5. At the top of the Linode Plan section, click on the Dedicated CPU tab and select the Dedicated CPU plan you would like to use.

      6. Enter a label for your new Linode under the Linode Label field.

      7. Enter a strong root password for your Linode in the Root Password field. This password must be at least six characters long and contain characters from at least two of the following categories:

        • lowercase letters
        • uppercase letters
        • numbers
        • punctuation characters


        You will not be prompted to enter a root password if you are cloning another Linode or restoring from the Linode Backups service.

      8. Optionally, add an SSH key, Backups, or a Private IP address.

      9. Click the Create button when you have finished completing this form. You will be redirected to the overview page for your new Linode. This page will show a progress bar which will indicate when the Linode has been provisioned and is ready for use.
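      The password rule from step 7 (at least six characters, drawing on at least two of the four character categories) can be sketched as a quick local check. This is only an illustration of the stated policy, not Linode's actual validator:

```python
import string

CATEGORIES = (
    string.ascii_lowercase,
    string.ascii_uppercase,
    string.digits,
    string.punctuation,
)

def meets_policy(password: str) -> bool:
    """True if password is >= 6 chars and spans >= 2 character categories."""
    if len(password) < 6:
        return False
    categories_hit = sum(
        any(char in category for char in password) for category in CATEGORIES
    )
    return categories_hit >= 2

print(meets_policy("hunter2"))   # True: lowercase letters plus a digit
print(meets_policy("abcdef"))    # False: long enough, but only one category
```

      Passing this check is a floor, not a recommendation; a long randomly generated password from a password manager is a better choice for root.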

      Next Steps

      See our Getting Started guide for help with connecting to your Linode for the first time and configuring the software on it. Then visit the How to Secure Your Server guide for a collection of security best practices for your new Linode.


