
This piece by RF Code was originally published in CIO Dive.


It takes a variety of metrics working together in the right ways for a data center to function: power consumption, cooling efficiency, delta T, temperature and humidity levels, and more. The array of solutions available to monitor and provide data on all of these areas is impressive.

Because minutes of downtime equate to thousands of dollars lost, it's natural to occasionally lose sight of the bigger picture while trying to utilize all of these tools. That's why providing C-level visibility into the data center is so important.


Greater visibility and insight into operations, along with assurance of compliance with regulatory and audit standards, enable executives to make decisions that have a bigger impact on the bottom line, such as those around inventory control and labor costs.

Ultimately, every data center is a business with a bottom line. Dozens, if not hundreds, of decisions need to be made every day to maintain uptime (at a minimum) and optimize operations (the ideal). These decisions are based on a vast array of information provided by sensors and monitoring systems, which can be challenging to interpret simply due to the volume of output.

While talented engineers keep their facilities running, higher-level management must take a step back to strategize on how best to minimize expenses, optimize processes to drive efficiency and reinvest where it makes sense.

Regular audits and inspections of physical assets and environmental conditions play a key role in ensuring optimal performance, maintenance and management of the data center. Data centers must comply with a variety of standards, depending on their industry, such as the Payment Card Industry Data Security Standard (PCI DSS) to protect cardholder information, the Health Insurance Portability and Accountability Act (HIPAA) to safeguard personal medical information, or Service Organization Control (SOC) reports to ensure that the information housed in the data center, especially that relevant to financial reporting, is secure, available and private.

Compliance is a critical piece of what executives must consider when looking at the overall picture of how the data center impacts the bottom line.

The important question is: "How do we get the C-level the information it needs to make the decisions that will make the business more profitable and agile?" The answer lies in providing business insights instead of massive amounts of raw data.

As the data center industry continues to evolve, the healthiest facilities are moving at the speed of business. Solutions that can provide real-time information in a format that promotes fast and easy assimilation allow CIOs and CEOs to make the necessary changes to promote business process optimization and a healthy bottom line. Let's look at a few examples.

Inventory Control and Planning

Often overlooked until a key piece of equipment goes missing or audit costs get out of control, proper inventory management and the planning that comes from it are important key performance indicators (KPIs) for data centers of all sizes.

Improvements made to both areas result in real savings and improved performance metrics in other parts of the facility. Unfortunately, a substantial number of data centers still use time-intensive and error-prone manual data collection methods, which is why inventory control and planning are critical areas in which the C-level can find improvements. Increasing visibility will:

  • Improve inventory and auditing accuracy. Are critical assets actually available and in their assigned storage areas when a crisis happens? Understanding how accurate the data center inventory is will lead to process changes that reduce downtime in the future. Refinements in this area will help reduce or eliminate the financial penalties incurred when customer servers aren't operational, as well as produce insights into how assets can be stored to enable faster changes when problems arise. Additionally, simply utilizing a solution that provides real-time information on all data center assets will make routine audits proceed faster and more efficiently, saving tremendous numbers of man-hours otherwise spent manually counting stock, investigating discrepancies and forwarding copious amounts of data for evaluation. (A sketch of this kind of reconciliation check follows this list.)

  • Make intelligent decisions for asset retirement planning. Planning when to remove and replace assets before they fail or become obsolete is a delicate task. From the C-level, giving data center managers the ability to utilize equipment to its fullest before discarding it allows the business to recoup maximum value. Equally important is giving data center executives the information that allows them to plan for these switches so as to minimize disruption, downtime and labor costs. Examining decommissioning data, such as the time it takes to remove and recycle equipment, allows for business process changes that improve efficiency in this task. The end result is financial savings from reduced power usage by assets no longer in service, lower storage costs, and man-hours that can be put to better use elsewhere.
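
As a rough illustration of the kind of check that real-time asset data makes possible, the sketch below reconciles where an inventory system expects each asset to be against where tracking last observed it, then reports an audit-accuracy figure and a follow-up list. The data structures, field names and sample records are hypothetical; in practice the records would come from an asset-tracking or DCIM system rather than hard-coded lists.

```python
# Minimal sketch: reconcile expected vs. observed asset locations.
# All names and sample data here are hypothetical placeholders.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AssetRecord:
    asset_id: str
    expected_location: str            # where the inventory system says the asset belongs
    observed_location: Optional[str]  # where real-time tracking last saw it (None = not seen)


def audit_accuracy(records):
    """Fraction of assets found in their assigned location."""
    if not records:
        return 1.0
    matched = sum(1 for r in records if r.observed_location == r.expected_location)
    return matched / len(records)


def discrepancies(records):
    """Assets that are missing or sitting somewhere other than their assigned spot."""
    return [r for r in records if r.observed_location != r.expected_location]


if __name__ == "__main__":
    inventory = [
        AssetRecord("SRV-1001", "Row A / Rack 3", "Row A / Rack 3"),
        AssetRecord("SRV-1002", "Storage Cage 2", None),              # missing
        AssetRecord("SRV-1003", "Row B / Rack 7", "Storage Cage 1"),  # misplaced
    ]
    print(f"Audit accuracy: {audit_accuracy(inventory):.0%}")
    for item in discrepancies(inventory):
        print(f"Follow up on {item.asset_id}: expected in {item.expected_location}, "
              f"last seen {item.observed_location or 'nowhere'}")
```

Even a simple report like this turns raw location events into an at-a-glance number an executive can track from one audit to the next.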

Acquisition and Deployment

Whether building out new whitespace or performing maintenance in an established data center, an important KPI is the efficient and accurate deployment of new assets. While the data center staff is tagging new racks and connecting servers, executives should be able to view a variety of broader metrics to find trends and make improvements to reduce deployment times. This will ultimately save money in terms of:

  • Reducing the depreciating value of equipment. Often, C-level staff are unaware that too many assets are sitting in storage for too long, whether at one site or several. Understanding the asset composition will help minimize the amount of equipment in storage and how long it stays there, preventing over- and under-purchasing of assets and ensuring that existing assets are being properly utilized.
  • Reducing the labor cost of installs. By drilling down on deployment times, trends can emerge that show which assets are taking too long to deploy. By examining how efficient deployment processes are, the C-level can organize targeted training to make improvements and save on man-hours in the future. Reducing improper deployments is an excellent area for business process optimization, not only in terms of improving existing procedures, but also in reducing security risks, e.g., preventing servers hosting sensitive data (financial or medical) from being deployed to vulnerable sections of the data center. Simply avoiding the penalties from SLA violations, fines and damage to the brand and reputation can make an impactful difference. (A sketch of this kind of deployment-time analysis follows this list.)
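
To make the deployment-time idea concrete, here is a small sketch that groups logged deployment durations by asset type and flags the types whose average exceeds an internal target. The record format, the sample figures and the eight-hour target are all assumptions for illustration; real numbers would come from whatever work-order or asset-tracking system logs deployment start and finish times.

```python
# Minimal sketch: flag asset types whose average deployment time exceeds a target.
# The sample data and the 8-hour target are hypothetical.
from collections import defaultdict
from statistics import mean

# (asset_type, hours_to_deploy) pairs -- placeholder sample data
deployments = [
    ("1U server", 3.5), ("1U server", 4.0), ("1U server", 9.5),
    ("blade chassis", 12.0), ("blade chassis", 14.5),
    ("top-of-rack switch", 2.0),
]

TARGET_HOURS = 8.0  # assumed internal target per deployment

by_type = defaultdict(list)
for asset_type, hours in deployments:
    by_type[asset_type].append(hours)

for asset_type, hours_list in sorted(by_type.items()):
    avg = mean(hours_list)
    flag = "REVIEW" if avg > TARGET_HOURS else "ok"
    print(f"{asset_type:20s} avg {avg:5.1f} h over {len(hours_list)} installs  [{flag}]")
```

A report like this makes it obvious where targeted training or a process change would pay back the fastest.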

As data centers are being tasked with accommodating increasing amounts of information at faster speeds, the plans, methodologies and audit compliance to accomplish this need to be up to the challenge. Maintaining uptime in the facility is always the priority, but optimizing how it's done is the only way to guarantee long-term success.

The ability of data center executives to interpret large swaths of business intelligence on their inventory and processes will result in informed decisions on how to become more efficient, which is the fastest way to create a healthy bottom line.