
      The Value of Virtualization

      You’re probably familiar with many arguments for virtualizing your systems. Virtualization can make your systems more secure by reducing the number of applications and users on a single machine. It can make it easier to scale, utilizes your resources more efficiently, reduces costs, is faster to set up, and shields you from hardware failures, giving you better uptime. VMs have another advantage over conventional servers, though, which is less commonly listed but still pretty important: they’re automatically instrumented.

      Let me explain what I mean. Let’s say you’re having some problems running your website on a traditional server. Your traffic has gone up, and now you’re having outages during peak times. You speak to your tech support staff, and the admins agree that there’s a problem, but they’re not exactly sure what the cause is.

      Usually at this point, the admins will start ‘keeping an eye’ on the system in question. This often means being logged in and running top or vmstat. If the problem recurs, hopefully the admins will catch it, and the output from the monitoring tools will give them hints as to what went wrong. If no admin is around when the problem happens, though, they might not get the data they need, and the process will have to start all over again.
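If you wanted to automate that ‘keeping an eye’ step yourself, a few lines of Python polling the same load averages top reads will do it. This is just a sketch; the file name, interval, and sample count are arbitrary choices of mine, not anything our admins actually run:

```python
import os
import time
from datetime import datetime

def sample_load(logfile="load.log", interval=60, samples=5):
    """Append periodic load-average samples to a log file, so a fault
    that strikes while no admin is watching still leaves evidence."""
    with open(logfile, "a") as f:
        for _ in range(samples):
            # os.getloadavg() returns the same 1/5/15-minute numbers top shows
            one, five, fifteen = os.getloadavg()
            f.write(f"{datetime.now().isoformat()} "
                    f"{one:.2f} {five:.2f} {fifteen:.2f}\n")
            f.flush()
            time.sleep(interval)
```

Even something this crude beats hoping someone is logged in at the moment of failure, though, as the next paragraph notes, it still competes for resources on the very machine that is struggling.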

      Another solution is to start watching the server with a monitoring program like Cacti or Ganglia. This is a little more reliable than manual monitoring because the software won’t get bored or distracted and miss the fault event. But monitoring software has its own problems. It is often a hassle to set up. It requires punching holes in your firewall, making your server less secure. It takes up resources on an already precarious machine, possibly making downtime more likely. And if the problem affects the network, the remote monitoring machine might not be able to communicate with the troubled server to get any useful data at the exact time when the data is needed.

      This is where virtualization comes to the rescue. The hypervisor — the software which makes virtualization possible — already has a lot of statistics about the virtual machine. Our Cascade cloud platform automatically gathers such statistics for every VM, storing the data in approximately five-minute increments in our own internal logging database. The data gathering happens in the context of the node, not the VM, meaning that the VM will not see a performance impact from the monitoring. Also, the fact that every VM is already monitored means that if a fault occurs, you won’t have to wait for a second fault to figure out what went wrong. The data to analyze the original fault might already be there.
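Cascade’s internal collector isn’t public, but the idea of rolling node-side samples into roughly five-minute buckets per VM can be sketched like this. The tuple layout, field names, and bucket size here are my own illustrative choices, not Cascade’s actual schema:

```python
from collections import defaultdict

BUCKET = 300  # seconds, i.e. the ~five-minute increments mentioned above

def bucket_samples(samples):
    """Roll up raw hypervisor samples taken on the node (outside the
    guest, so the VM sees no overhead) into per-VM time buckets.

    samples: iterable of (vm_id, unix_ts, bytes_tx) tuples.
    Returns {(vm_id, bucket_start): total_bytes_tx}.
    """
    totals = defaultdict(int)
    for vm_id, ts, tx in samples:
        totals[(vm_id, ts - ts % BUCKET)] += tx
    return dict(totals)
```

The key point the paragraph makes survives in the sketch: the data is collected continuously for every VM, so when a fault happens the history is already sitting in the database.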

      Let me give you a concrete example. Just today, we had a problem with a customer’s VM; his PHP site went offline and the VM required a reboot to bring the site back online. The admins had a hunch that the VM was overloaded and couldn’t handle the traffic, but they didn’t know what resource was running low.

      Here are some graphs generated by our internal system, Manage, which allowed our admins to get to the bottom of the problem. First, let’s start with a graph of network bandwidth for the VM:


      This graph illustrates the problem precisely. Around 8:50 PM, the VM stopped serving requests, or at least the amount of data served dropped precipitously. When admins logged in, they saw this in the kernel logs:

      Oct 31 20:58:03 vm1 kernel: INFO: task php:41632 blocked for more than 120 seconds.

      But why did this happen? Maybe the CPU usage for the VM was too high? Well, we can answer this question using our CPU graphs, which display CPU usage data for both the VM as a whole on its node and for each virtual CPU inside the VM:



      Sure enough, the VM is pretty busy. It is making good use of all of its virtual CPUs, and its overall load on its node is often over 100%. However, the VM has 4 VCPUs, which means that if CPU were the limiting resource, the load would be as high as 400%. It looks like each VCPU is only about 25% utilized. Also, the fault occurred at 8:50 PM, and we don’t see a CPU spike around that time. In fact, CPU usage for some of the virtual CPUs appears to drop around 8:50; VCPU 0, at least, had nothing much to do during the outage.
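The arithmetic behind that conclusion is worth spelling out: with load reported for the VM as a whole and a known vCPU count, per-vCPU utilization is just the ratio. A tiny sketch:

```python
def per_vcpu_utilization(total_load_pct, vcpus):
    """Average per-vCPU utilization, given the VM's overall load on its
    node. A 4-vCPU VM maxes out at 400%, so an overall load hovering
    around 100% means each vCPU is only about a quarter busy."""
    return total_load_pct / vcpus

# The case from the graphs above: 4 vCPUs, load around 100%.
assert per_vcpu_utilization(100, 4) == 25.0
# The CPU-bound ceiling for the same VM would be 400% overall.
assert per_vcpu_utilization(400, 4) == 100.0
```

That headroom is exactly why the admins ruled out CPU as the bottleneck and kept digging.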

      So what could be the problem? For an answer, let’s turn to yet another batch of data we are able to get from the hypervisor: disk statistics.



      The VM is not a particularly heavy user of disk IO: mostly steady writes consistent with saving log activity, with a few read spikes which might indicate someone searching through the file system or perhaps a scheduled backup. But here’s something interesting: right around the time the VM experienced its failure, swap file usage skyrocketed. Now we know exactly why the VM failed: it ran out of memory, and swap was too slow to keep up with the heavy traffic the VM was trying to serve.
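If you wanted to confirm a diagnosis like this from inside the guest, swap pressure is easy to compute from /proc/meminfo. This parser is a hedged sketch of my own, not part of Cascade’s tooling:

```python
def swap_used_pct(meminfo_text):
    """Parse /proc/meminfo-style text and return the percentage of swap
    in use. A sudden jump in this number, like the one in the disk
    graphs above, is a classic sign the VM is out of RAM and thrashing."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, rest = line.split(":", 1)
        fields[key.strip()] = int(rest.split()[0])  # values are in kB
    total, free = fields["SwapTotal"], fields["SwapFree"]
    return 100.0 * (total - free) / total if total else 0.0
```

On a live Linux box you would feed it `open("/proc/meminfo").read()`; anything creeping toward 100% while traffic is high tells the same story the hypervisor graphs told here.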

      Getting data like this on a regular, traditional server would have required complex monitoring software, a steady stream of network traffic, a whole separate monitoring server, and skilled labor to set the whole thing up. On a Cascade VM, you get this kind of data for free, automatically. You won’t see these exact graphs in LEAP, our customer portal, as these are generated for our internal interfaces only. But the data behind these graphs is also available to LEAP, which generates much prettier, more usable visualizations that let you easily drill down and explore what’s happening with your virtual machine.

      The conclusion to this story is that we increased the amount of memory available to the VM from 4 GB to 8 GB. This required just a quick reboot of the VM, with none of the downtime or stress required for pulling a physical server out of the racks and opening it up. This solved the customer’s problems, with better performance and no outages. So here is yet another way virtualization with Cascade and LEAP makes your life easier.

      Visuals for the World of Virtualization

      I am admittedly very new to the world of web hosting, but I’m playing the role of SpongeKevin SquarePants in terms of soaking up every bit of knowledge my fellow SingleHoppers throw my way!

      Fortunately, the other guys around the pond here have been amazing when it comes to making sense of the intergalactic-like world of web hosting with great visuals. And by great I don’t mean these works of art are heading to the Museum of Modern Art anytime soon; they’re actually just doodles, but man do they help!

      Want to see what a server looks like without making a trip to a data center? It’s simple:


      Talk about simple, right? Obviously, within this server there are thousands of moving parts, but for now, this is a server in its entirety.

      So if that’s a server, then what’s a virtual server, or VM? How can something that’s physical become virtual?

      Let’s start with the picture:


      Software actually tells (or tricks, if you ask me) the server into thinking it’s 3 separate machines, all using Server #1’s resources as their own. Sneaky, right?

      If you’re a web hosting veteran this stuff is second nature, but if you’re a rookie, visuals drive understanding. If you know of any other great visuals (pictures or videos), please share!

      Cloud Hosting Series via the Layman


      If I had a dollar for every time I said, read, or heard the lovely little buzzword ‘cloud’ on a daily basis, I would no doubt have enough cash to buy a pint for every SingleHopper in the office. Aside from being Mr. Popular, would I be any wiser for it? Just because it’s being used, doesn’t mean it’s being used correctly, or that its definition is universally known.

      So that’s where this blog comes into play… This will be an ongoing series of blog entries where I discuss all things cloud.

      • What is cloud hosting?
      • Is it right for my business?
      • Is it just a buzzword, here today, gone tomorrow?
      • Can I touch it?
      • Is it cost-effective?
      • Who’s driving the demand?
      • Can I ride it?

      As we all know, there are a million sources for this type of content. What I hope makes this series a bit different from the rest is its top-line, straight-to-the-point, and occasionally humorous commentary.

      Firstly, the term ‘cloud’ is perfect! It literally floats above us (or our servers), it’s able to rapidly change its size, and it’s physically (and sometimes mentally) impossible to grasp. When explaining the theory behind cloud computing, especially to a novice, keep it simple… Start with the familiar and mention that most users have used a cloud before, unbeknownst to them; it’s called Gmail.

      Consumers have become much more open to allowing their photos, music, and words to live all alone out there on a cloud, whether they know it or not. Companies have now decided they like the functionality and scalability of the cloud concept and are running and jumping to it!

      As it stands right now, a cloud is a number of elements (data and processes, to name a couple) stretching from server to server, or even datacenter to datacenter, that are accessible at any given time, rate, or place. The cloud’s functions go far beyond storage: it also allows the sharing of content in real time across multiple users, given they’re all online. It’s this type of transparency (and functionality) that has exposed most people to “the cloud” without their ever knowing it exists. {insert evil laugh here}

      Be sure to check back for the next installment of cloud talk!