IT Starts with Information, Part 2: Capacity Planning, Improved ROI, Increased Productivity

Posted by RF Code

In part one of our blog series we discussed real-time monitoring’s position within the data center management chain. In this second blog, we analyze how to use this data to dramatically reduce TCO through continuous visibility, management, and accurate provisioning of IT assets.

Our recent white paper addresses these concepts in more depth, explaining how real-time monitoring, capacity planning and predictive analysis technologies help businesses improve data center agility and efficiency to ensure higher performance at a lower cost.

Asset management is a key component of data center capacity planning; however, for many organizations the activity amounts to basic record keeping using outdated collection methods.

Recording the name and location of every piece of IT equipment in the data center into a spreadsheet, by hand, is a labor-intensive, expensive and error-prone way to track valuable assets, especially as the data center is a dynamic environment.

Equipment is moved every day; devices are taken offline for maintenance, and new equipment is deployed. Many data centers that choose to track assets manually are attempting to solve a modern problem using a methodology that dates back to ancient Mesopotamia.

A data center that employs a static, manually maintained system like this requires staff to physically walk the facility when conducting inventory audits. If a device is missing from its last recorded location, or if information about the device conflicts with existing records, staff must investigate without any indication of where to start.

This lack of visibility compromises productivity, lowers morale, and increases costs, and it leaves capacity decisions being made in the dark.

Keeping the Lights On

Now consider a data center with a real-time, wire-free asset management system. In this data center, the user knows the exact location of every piece of equipment in the facility. They can drill down to specifications, maintenance and warranty histories for every device. Assets can be visible on a floor plan and in context, with power paths, network connections and dependencies clearly mapped.

This business fully understands the current position and status of every piece of equipment in the data center, whether or not it is connected to a rack.

Real-time monitoring data can be correlated with asset information to detect stranded capacity (for example, power is available in a given area but cooling is at its limit). “What if?” scenarios for new configurations can be modelled, and the impact of an asset failure can be predicted before it happens. This is a data center that is available to perform today and ready to meet the demands of tomorrow.
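As a rough illustration, stranded-capacity detection of the kind described above can be sketched as a per-zone comparison of power and cooling headroom. All zone names, readings, and thresholds here are hypothetical; in a real deployment they would come from live sensor and asset-management feeds.

```python
# Hypothetical per-zone readings: available power (kW) and cooling headroom (kW).
zones = {
    "row-A": {"power_available_kw": 40.0, "cooling_headroom_kw": 2.0},
    "row-B": {"power_available_kw": 5.0,  "cooling_headroom_kw": 30.0},
    "row-C": {"power_available_kw": 25.0, "cooling_headroom_kw": 28.0},
}

def find_stranded_capacity(zones, min_kw=10.0):
    """Flag zones where one resource is plentiful but the other is nearly exhausted."""
    stranded = {}
    for name, z in zones.items():
        power_ok = z["power_available_kw"] >= min_kw
        cooling_ok = z["cooling_headroom_kw"] >= min_kw
        if power_ok != cooling_ok:  # exactly one resource is the bottleneck
            stranded[name] = "cooling" if power_ok else "power"
    return stranded

print(find_stranded_capacity(zones))
# row-A has ample power but cooling at its limit; row-B is the reverse
```

A production system would of course draw these figures from continuous monitoring rather than a static dictionary, but the underlying comparison is the same.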

On a tactical level, real-time asset management means less time wasted in inventory reconciliation, fewer penalties from late lease returns, and smaller equipment replacement budgets. Warranty and depreciation information is readily available, audits are streamlined, and change management is simplified.

On a more strategic level, asset management systems facilitate more effective capacity planning, prevent over-provisioning and allow the redeployment of assets that could otherwise be sitting in storage areas unused, depreciating in value.

Those who question whether capacity planning is an important concern should consider the data. A recent study found that 63% of those interviewed expected to run out of data center capacity within the next 2-5 years. Building new capacity is expensive – a data center can cost $5-10 million per MW – and co-location prices do not include electricity, bandwidth, staff or migration costs.
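To put the build-out figures above in context, a back-of-the-envelope sketch using the $5-10 million per MW range cited here (the 2 MW expansion is an illustrative assumption):

```python
def build_cost_range(capacity_mw, cost_per_mw_low=5e6, cost_per_mw_high=10e6):
    """Estimated construction cost range for new data center capacity."""
    return capacity_mw * cost_per_mw_low, capacity_mw * cost_per_mw_high

# A modest 2 MW expansion, before electricity, bandwidth, staff, or migration costs.
low, high = build_cost_range(2.0)
print(f"${low / 1e6:.0f}M - ${high / 1e6:.0f}M")
```

Even at this rough granularity, the numbers make clear why reclaiming existing capacity is usually cheaper than building new capacity.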

A company that buys new compute capacity simply because it is unable to identify whether sufficient capacity already exists is making two expensive mistakes: it is wasting money on something it does not need, and it is taking funds from other initiatives that could drive business growth.

There is an answer to this challenge, and as we explore in the final part of the series, it resides in the predictive analysis of data center monitoring data.