I recently published an eBook, “Hyper-consolidation: Unleash the power and get the performance you paid for,” in the Kindle library. It can be found here. In this book, I point out that organizations invest a great deal in servers yet seldom get all of the performance they pay for.
If we examine why this is true, we soon learn that, because of the way the system software governing these machines works, they deliver only a fraction of the performance they could. This also means that enterprises that use past performance as a guide for future system purchases are perpetuating the problem. While expedient, that approach means most organizations are paying too much. Given the right software, the systems they already own could do much more work than they currently deliver.
What most organizations don’t realize is that most systems process inputs and outputs serially, and that serial processing becomes the limiting factor on how many virtual machines can do computing simultaneously. Systems software and hypervisors have done a great job of finding ways to make the most of systems’ computational power, but they have done little to address the biggest bottleneck holding back full utilization: input/output (IO).
Systems with a given level of processing power should be able to support more applications and more virtual machines simply because their IO requests could be processed simultaneously rather than being serviced one at a time. By adding the necessary software technology, the systems currently installed in organizations’ data centers could then do significantly more work.
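To make the queueing effect concrete, here is a toy sketch (the request latencies and function names are hypothetical, not drawn from any vendor's implementation) of why a serial IO path caps throughput: with a single service loop, pending requests wait in line, and total wall time grows with the sum of the individual latencies no matter how many CPU cores sit idle.

```python
import time
from collections import deque

def service_io_serially(requests):
    """Service queued IO requests one at a time, as a serial IO path does.

    Each request is a simulated device latency in seconds. The total wall
    time is roughly the SUM of all latencies, however many cores are idle.
    """
    queue = deque(requests)
    completed = []
    start = time.perf_counter()
    while queue:
        latency = queue.popleft()
        time.sleep(latency)          # stand-in for one device round trip
        completed.append(latency)
    return completed, time.perf_counter() - start

# Eight pending requests of 20 ms each take roughly 160 ms on the serial
# path, even on a machine with 16 cores available.
done, elapsed = service_io_serially([0.02] * 8)
print(f"serviced {len(done)} requests in {elapsed:.3f}s")
```

The point of the sketch is that the applications issuing those eight requests were all blocked for the full duration, which is exactly the wait that adding more CPU does nothing to shorten.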
The vision of “scale-out computing” or “Web-scale computing,” rather than trying to get the most out of each individual system, isn’t really an answer either. Adding more servers when applications begin to run slowly doesn’t address the problem of applications waiting for IO.
Before reaching for the checkbook to buy more servers, it is wise to take the time to look more closely at what the installed systems are really doing. I suspect, for example, that enterprise IT organizations would likely discover that many of their CPUs are idle and their IO ports lightly loaded despite sluggish application response. Serial IO processing limits these systems; they simply can’t put their resources to work.
My friends at DataCore would point out that their parallel IO software could provide the answer. This technology can be adopted merely by installing the right software on target systems. Once it is installed, processing power that currently sits unused in today’s multi-core systems can be put to work processing IOs in parallel, allowing more work to be accomplished.
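As an illustration only (a generic fan-out using Python's standard concurrent.futures module, not DataCore's actual implementation), the sketch below shows what happens when otherwise-idle cores each take a pending IO request: wall time approaches the slowest single request rather than the sum of all of them, so the same hardware completes more IOs per second.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def service_io_in_parallel(requests, workers=8):
    """Fan simulated IO requests out to a pool of workers.

    Each request is a simulated device latency in seconds. With enough
    workers, wall time approaches the MAX single latency instead of the
    sum, which is the effect of servicing IOs in parallel.
    """
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # time.sleep stands in for a device round trip per request.
        list(pool.map(time.sleep, requests))
    return len(requests), time.perf_counter() - start

# Eight 20 ms requests finish in roughly 20 ms of wall time rather than
# the 160 ms a one-at-a-time path would take.
count, elapsed = service_io_in_parallel([0.02] * 8)
print(f"serviced {count} requests in {elapsed:.3f}s")
```

The design choice being illustrated is simply that IO servicing, like computation, can be spread across cores; any production implementation would of course involve far more than a thread pool.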
When systems are not put to their fullest use, much of that investment goes to waste, as I pointed out in the article “Storage Virtualization and the Question of Balance.”
Harnessing available CPU cores to work on IO offers organizations a number of productivity and cost-saving benefits, including the following:
- More applications or virtual machines can be supported on the currently installed systems, so organizations won’t need to purchase unnecessary hardware or replace systems prematurely.
- When an organization can purchase fewer systems or smaller system configurations and use them for a longer period of time, it saves on system acquisitions, purchasing processes, power and cooling, data center floor space, and the planning and implementation of data center transitions.
- Failing to tap these unharnessed resources means organizations are simply paying too much, deploying far more servers than necessary and upgrading to more expensive systems just to get by.
Download and read the eBook to get the complete picture.