
Recently proposed metrics such as CADE and DC FVER offer an interesting opportunity to re-evaluate the ways in which businesses measure data center efficiency. Where PUE (and DCiE) approach data center efficiency strictly from a power availability and consumption perspective, and subsequent metrics like CUE, WUE, RCI, and RTI attempt to address PUE’s shortcomings by narrowing the focus to specific measurable aspects of the data center environment, these newer metrics focus on the business value or services a data center provides, and on how efficiently it provides them. This is defined as the “Useful Work” a data center delivers.
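For context, PUE is simply total facility energy divided by the energy delivered to IT equipment, and DCiE is its reciprocal expressed as a percentage; a Useful Work-oriented metric instead puts some measure of delivered work in the numerator. The short sketch below contrasts the two views. The energy figures and the transaction count are invented purely for illustration.

```python
# Illustrative comparison of an energy-only view (PUE/DCiE) with a
# useful-work-per-energy ratio. All figures are invented for the example.

total_facility_kwh = 1_800_000   # annual energy drawn by the whole facility
it_equipment_kwh = 1_000_000     # annual energy drawn by the IT equipment alone
transactions_served = 4.2e9      # "Useful Work", defined here as transactions

pue = total_facility_kwh / it_equipment_kwh          # lower is better
dcie = it_equipment_kwh / total_facility_kwh * 100   # higher is better, in %
work_per_kwh = transactions_served / total_facility_kwh

print(f"PUE:  {pue:.2f}")
print(f"DCiE: {dcie:.1f}%")
print(f"Useful Work per kWh: {work_per_kwh:,.0f} transactions")
```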

There’s a fly in the ointment with these emerging metrics, however. Unlike more easily quantifiable factors such as air temperature, IT equipment operational efficiency, and compute load balancing, which can be measured and monitored with a high degree of accuracy, Useful Work is both highly subjective and business-specific. For example, while a commercial web-based business’s Useful Work might best be measured by the maximum number of transactions per hour, the Useful Work of a business that streams multimedia content to subscribers might be better judged by the maximum number of connections and throughput. In other words, while energy consumption is concrete and measurable, the more holistic concept of Useful Work inherently involves a bit of guesswork. As Bob Landstrom notes in his industry blog Notes from the Consultant’s Jungle, “The notion of Useful Work is a conceptual challenge, and has been well solved by only a few.”
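To make that subjectivity concrete, here is a hypothetical sketch in which the only thing that changes between two businesses is the function defining Useful Work; the machinery around it stays the same. The function names and numbers are invented for illustration.

```python
# Hypothetical sketch: the same productivity ratio, parameterized by a
# business-specific definition of "Useful Work". All names and numbers are
# invented for illustration.

def web_store_useful_work(stats):
    # A transactional business might count peak completed orders per hour.
    return stats["peak_transactions_per_hour"]

def streaming_useful_work(stats):
    # A streaming business might weight concurrent connections by throughput.
    return stats["peak_connections"] * stats["avg_mbps_per_connection"]

def productivity(useful_work_fn, stats, facility_kwh):
    # Useful Work delivered per kWh consumed by the facility.
    return useful_work_fn(stats) / facility_kwh

web_stats = {"peak_transactions_per_hour": 250_000}
stream_stats = {"peak_connections": 80_000, "avg_mbps_per_connection": 6.5}

print(productivity(web_store_useful_work, web_stats, facility_kwh=1_800_000))
print(productivity(streaming_useful_work, stream_stats, facility_kwh=1_800_000))
```

Of course, the two ratios are not comparable across businesses, which is exactly the conceptual challenge: each organization has to decide for itself what belongs in the numerator.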

This got me thinking about other factors that affect the efficiency of a data center as a whole, and thereby its ability to deliver Useful Work. If we’re going to shift the discussion of data center efficiency away from asking: “Is my data center consuming energy in the most cost-effective way possible?” to: “Is the cost of running my data center acceptable compared to the Useful Work it provides?” then don’t we need to include factors outside of energy consumption and efficiency that significantly contribute to the cost of running a data center?

As anyone who has experienced a data center outage over a weekend or holiday can tell you, one of the most significant contributors to overall data center uptime is personnel. There’s no question that the presence of trained staff directly impacts the overall efficiency of a data center. And clearly, personnel can be expensive: in fact, staffing accounts for as much as 40% of annual data center operational expenses. That’s a pretty significant contributor to OpEx to leave out of any equation that attempts to measure data center performance from a monetary perspective.
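As a rough illustration of how much that omission can distort a cost-per-Useful-Work figure, the sketch below compares the ratio with and without staffing in the denominator, with staffing set at roughly 40% of OpEx per the figure above. Every dollar amount is hypothetical.

```python
# Hypothetical annual OpEx breakdown, with staffing at roughly 40% of the
# total as cited above. All dollar figures are invented.

energy_cost = 1_500_000
maintenance_and_other = 1_200_000
staffing_cost = 1_800_000          # ~40% of the $4.5M total OpEx
transactions_served = 4.2e9        # the facility's "Useful Work"

opex_without_staffing = energy_cost + maintenance_and_other
opex_with_staffing = opex_without_staffing + staffing_cost

print(f"Cost per million transactions, energy and maintenance only: "
      f"${opex_without_staffing / (transactions_served / 1e6):.2f}")
print(f"Cost per million transactions, staffing included:           "
      f"${opex_with_staffing / (transactions_served / 1e6):.2f}")
```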

Besides the costs associated with staffing data centers for day-to-day operations, staffing as a contributor to operational efficiency extends to exception processing (that is, handling exceptional cases and avoiding or minimizing downtime). Well-optimized "nominal" staffing, both in numbers and in skills, may be fine for normal operations but wholly inadequate in the event of a severe failure. Conversely, maintaining large numbers of highly skilled personnel on an ongoing basis as a way to avoid potential downtime can be considerably expensive.
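One way to frame that trade-off is as an expected-cost comparison: the carrying cost of a deeper bench of skilled staff versus the expected cost of longer outages without them. The probabilities, recovery times, and costs below are entirely hypothetical, not industry benchmarks.

```python
# Hypothetical expected-cost comparison of two staffing models. Every figure
# here is an assumption made for illustration.

downtime_cost_per_hour = 100_000     # revenue and penalty impact of an outage
severe_failures_per_year = 2         # expected severe failure events per year

def annual_cost(staff_cost, mean_hours_to_recover):
    expected_downtime = severe_failures_per_year * mean_hours_to_recover
    return staff_cost + expected_downtime * downtime_cost_per_hour

lean_staffing = annual_cost(staff_cost=900_000, mean_hours_to_recover=8)
deep_bench = annual_cost(staff_cost=1_800_000, mean_hours_to_recover=2)

print(f"Lean staffing, slower recovery: ${lean_staffing:,.0f}/year")
print(f"Deep bench, faster recovery:    ${deep_bench:,.0f}/year")
```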

Which raises the question: Can the efficiency of data center staffing be measured as yet another contributor to data center efficiency? And should metrics that attempt to measure a data center’s Useful Work efficiency ignore such a large contributor to operational expenditures?
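One can at least imagine what such a measurement might look like: Useful Work per staff-hour, sitting alongside the more familiar Useful Work per kWh. The sketch below is purely a thought experiment, with every input invented for illustration.

```python
# Hypothetical staffing-efficiency view: Useful Work per staff-hour, shown
# alongside the more familiar Useful Work per kWh. All inputs are invented.

useful_work = 4.2e9          # e.g. transactions served over the year
facility_kwh = 1_800_000
staff_hours = 25 * 2080      # e.g. 25 full-time equivalents at 2,080 h/year

work_per_kwh = useful_work / facility_kwh
work_per_staff_hour = useful_work / staff_hours

print(f"Useful Work per kWh:        {work_per_kwh:,.0f}")
print(f"Useful Work per staff-hour: {work_per_staff_hour:,.0f}")
```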

Obviously, it’s not a simple problem to solve. Any proposed data center staffing efficiency metric would have to account for a huge number of potential variables. But as new metrics are proposed that attempt to shift the focus from energy consumption to Useful Work, it stands to reason that the personnel responsible for the deployment, maintenance, and day-to-day operation of these facilities, and the costs associated with them, should be taken into account as well.