This week RF Code is pleased to feature a guest blog by Dr. Magnus Herrlin, President of ANCIS Inc. Prior to establishing ANCIS, Dr. Herrlin was a Principal Scientist at Telcordia Technologies (Bellcore), where he optimized the network environment to minimize service downtime and operating costs. He has authored numerous papers, reports, and standards on topics including thermal management, energy management, mechanical system design and operation, and network reliability.
You might ask yourself what data, metrics, and training have in common in the data center domain. Data is everywhere, and increasingly so. The goal of collecting data is to do something intelligent with it, at least eventually. Metrics are fantastic tools for compressing a large amount of "raw" data into understandable, actionable numbers. However, data and metrics alone will not necessarily make us design and operate data centers in a more energy-efficient way. The missing link is training for those involved in the design and day-to-day operation of the data center.
We have the capability to collect a nearly endless amount of operational infrastructure data. Specifically, many Data Center Infrastructure Management (DCIM) offerings have that capability when linked to a sensor network. Computational Fluid Dynamics (CFD) modeling has a similar data-handling challenge. On the infrastructure side of the business, the data we are talking about are those related to energy and those related to environmental conditions in the equipment space. The data can be collected with wired or wireless technologies and linked to DCIM software. Of utmost importance is that the collection and software technologies are flexible: data centers are dynamic environments, which require frequent reconfigurations of the data collection system. Many of the commercial data collection systems are phenomenal data-generating machines.
But data alone is not sufficient for achieving operational efficiency. The growth of the DCIM market has not met the industry projections made a few years ago. One reason may be that DCIM vendors have been less than successful in communicating the benefits of their systems. Another may be that the industry is young and poorly organized and standardized. But the lack of tools to convert raw data into actionable information, which the user can actually use to improve the operation of the data center, may also be a major contributor to this slow growth.
Metrics are fantastic tools for compressing a large amount of data into useful numbers. A metric is typically calculated with a formula, generating an output that is simple to understand. Raw data often requires analysis and interpretation to make it useful. “Rich” data on the other hand can be used for predicting, planning, and decision making. And well selected metrics help produce rich data.
Since a metric generally is a single number, it can also easily be tracked over time. After all, how do you effectively track 200 or 2000 raw data points over time? Tracking performance is imperative because it shows progress (or lack thereof). And don’t underestimate the business value of this data: the guy in the corner office simply loves this type of information.
Maybe the most well-known metric in data centers is Power Usage Effectiveness (PUE). It is a measure of the energy (not power) premium paid to condition the equipment space. A PUE well above 1 means a large infrastructure overhead, whereas a PUE of 1 would mean no overhead whatsoever. Clearly, calculating even this single-number metric requires raw data as well as some data analysis (using a formula).
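As a concrete illustration of how a formula compresses raw data into a single number, here is a minimal sketch of the PUE calculation (total facility energy divided by IT equipment energy, both over the same period); the function name and figures are illustrative, not from the article:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    IT equipment energy, measured over the same period.

    1.0 means zero infrastructure overhead; higher values mean more
    energy spent on cooling, power distribution, lighting, etc."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative figures: 1,800 MWh drawn by the facility to deliver
# 1,200 MWh to the IT equipment.
print(round(pue(1800.0, 1200.0), 2))  # 1.5
```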
Energy efficiency needs to be balanced with equipment reliability, which generally has higher priority for data center owners and operators than energy efficiency. It’s a balancing act. An important part of equipment reliability is the air intake temperatures. The trick is to increase intake temperatures and thereby decrease energy costs without risking equipment reliability. In other words, energy and thermal management. Several organizations have developed guidelines for intake temperatures, for example, ASHRAE and NEBS.
Leading network services provider CenturyLink set out to find a way to cut its spending on energy and cooling. But without appropriate monitoring and analysis, increased temperatures could lead to costly equipment failures. Based on a pilot project with the environmental monitoring and asset management company RF Code, CenturyLink is projected to save nearly three million dollars annually at full implementation by balancing the temperature increase against equipment reliability.
There are plenty of intake temperatures to measure in a data center: one wherever a piece of equipment has a cooling fan. Even a fairly small data center can produce data that becomes next to worthless without data management. One metric specifically designed to help with such data overload is the Rack Cooling Index (RCI). It shows compliance with the ASHRAE and NEBS temperature guidelines: an RCI of 100% means perfect compliance. RF Code's software, used by CenturyLink, automatically calculates this metric to help ensure thermal compliance.
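To show how a metric like this compresses hundreds of intake readings into one compliance number, here is a sketch of the high-side RCI calculation as commonly described (over-temperatures above the maximum recommended limit, normalized by the gap to the maximum allowable limit). The default thresholds below follow the ASHRAE recommended/allowable maximums for air-cooled equipment; treat the exact limits and this simplified formula as assumptions for illustration, not as RF Code's implementation:

```python
def rci_hi(intake_temps_c, t_max_rec=27.0, t_max_all=32.0):
    """Rack Cooling Index, high side.

    Sums how far each equipment intake temperature exceeds the maximum
    recommended limit, normalizes by the span between the recommended
    and allowable maximums times the number of intakes, and expresses
    the result as a percentage. 100% = no intake above the recommended
    maximum (perfect compliance)."""
    n = len(intake_temps_c)
    if n == 0:
        raise ValueError("need at least one intake temperature")
    over = sum(t - t_max_rec for t in intake_temps_c if t > t_max_rec)
    return (1.0 - over / ((t_max_all - t_max_rec) * n)) * 100.0

# All intakes within the recommended range: perfect compliance.
print(rci_hi([22.0, 24.5, 26.0]))  # 100.0
# One intake 1 degree C over the recommended maximum drags the index down.
print(rci_hi([28.0, 26.0]))        # 90.0
```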
There are a number of training opportunities for the data center industry. I will limit myself to a training program called the Data Center Energy Practitioner (DCEP) program. It was developed by the Department of Energy’s (DOE) Advanced Manufacturing Office in partnership with industry stakeholders. This certificate training program for data center energy experts has now been re-initiated with help from the Federal Energy Management Program (FEMP), Lawrence Berkeley National Laboratory (LBNL), and three professional training organizations. A number of training dates are upcoming across the country, beginning on October 27, 2014 in New York.
The main objective of the DCEP Program is to raise the standards of those involved in energy assessments of data centers. Training events, lasting one, two, or three days, equip attendees with the knowledge and skills required to perform accurate energy assessments of HVAC, electrical, and IT systems in data centers, including the use of the DOE DC Pro suite of energy-assessment software tools. Day-one energy efficiency will unquestionably decline over time if the staff does not understand how the energy and environmental systems are supposed to work. The more sophisticated the systems, the greater the need for trained staff.
For more information about the DCEP training, please visit DOE's Center of Expertise for Energy Efficiency in Data Centers. This website also maintains a list of over 300 recognized DCEPs, who are available to perform standardized energy assessments.
With flexible data collection, efficient data management, powerful metrics, and trained staff, we can actually do something useful with the data. Nice!
Get a closer look at how CenturyLink used intelligent sensor networks to drive data center efficiency and savings: Watch this presentation by CenturyLink's Joel Stone and John Alaimo today!