If your business relies on data centers, any downtime hurts your bottom line. Are you measuring and monitoring their performance? By tracking Data Center Downtime (DCD) as a business metric, you’ll quickly see whether your service provider is helping or harming your business, and whether changes are needed to improve performance.
The Hidden Costs of Data Center Downtime
We’re all familiar with visible losses such as damaged inventory, product recalls, and theft. Data Center Downtime is a less obvious cost, but it drains both time and money. Measuring it allows you to determine the best course of action for recovery.
According to Business Insider, a single minute of downtime costs a data center over $8,000. Now, consider how much that minute costs your business, including:
- The number of customers lost
- Transactions canceled or locked
- Tarnished brand
Every minute is an opportunity lost and a sunk cost. For instance, Fortune 1000 companies lose as much as $2.5 billion a year to application downtime. If your applications are hosted on servers in a data center that goes down, you lose access to those applications until the outage is resolved.
According to one study by Ponemon Institute, the average cost of a data center outage was $740,357 as of 2015, with the higher end of the scale rising to over $2 million.
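A quick back-of-envelope check shows how these two figures relate. The outage duration below is a hypothetical example, not a number from either study:

```python
# Per-minute downtime cost quoted by Business Insider.
cost_per_minute = 8_000

# A hypothetical 90-minute outage at that rate lands close to the
# Ponemon Institute's 2015 average outage cost of $740,357.
outage_minutes = 90
total_cost = cost_per_minute * outage_minutes
print(total_cost)  # 720000
```

In other words, an outage measured in minutes, not hours, is enough to reach the average cost Ponemon reports.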
Hoping For The Best, Preparing For The Worst
Businesses and the general public all know that downtime is just a fact of life, even in a technology-dependent society. Natural disasters, equipment failures, and hackers all cause unexpected downtime for data centers. While you might think equipment causes the majority of problems, IT equipment failures only account for 4% of outages, according to the same Ponemon Institute study.
Knowing the causes of Data Center Downtime helps you gain at least some control over them. You’ll be better prepared for outages, discover the most common causes for your business, and even find ways to prevent them.
Weather accounts for 10% of outages, and heating and cooling issues account for another 11%. It shouldn’t come as any surprise that 22% of data center outages are caused by human error, which is also what typically leaves the door open to cyberattacks.
According to David Boston, director of facility operations at TiePoint-bkm Engineering, two-thirds of data center outages are related to processes, not infrastructure systems. Infrastructure itself typically isn’t the issue, since everyone’s aware of the need to consistently upgrade equipment and components. It’s the processes for implementing new infrastructure that are the challenge. For instance, a team might focus on upgrading equipment without accounting for the added strain on the electrical system, resulting in an outage.
The correct process would be to test before implementation, prepare the electrical systems, and then perform the upgrades. This issue mainly affects smaller or in-house data centers.
Measuring Downtime As A Business Metric
Measuring this business metric is tricky, and the exact formula varies by industry. For instance, a business that mainly operates during the day won’t see much effect if a data center goes down in the middle of the night. A gaming company, on the other hand, would notice problems no matter what time the outage occurs.
Some things to consider when measuring downtime as a business metric include:
- Average data center usage during any given hour
- Number of customers affected
- Number of employees affected
- Loss of production
- Negative impact on reputation and brand
- Length of downtime
- Time for recovery
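The factors above can be combined into a rough cost estimate. The function and all the rates below are hypothetical placeholders, a minimal sketch assuming you can pull real figures from your own accounting and monitoring data:

```python
def downtime_cost(minutes_down, revenue_per_minute,
                  customers_affected, cost_per_lost_customer,
                  employees_idle, hourly_wage,
                  recovery_hours, recovery_hourly_cost):
    """Rough estimate of one outage's cost; all inputs are assumptions."""
    lost_revenue = minutes_down * revenue_per_minute      # loss of production
    churn_cost = customers_affected * cost_per_lost_customer
    idle_labor = employees_idle * hourly_wage * (minutes_down / 60)
    recovery = recovery_hours * recovery_hourly_cost      # time for recovery
    return lost_revenue + churn_cost + idle_labor + recovery

# Example: a 30-minute outage with illustrative figures.
cost = downtime_cost(
    minutes_down=30, revenue_per_minute=500,
    customers_affected=200, cost_per_lost_customer=50,
    employees_idle=40, hourly_wage=35,
    recovery_hours=2, recovery_hourly_cost=150,
)
print(cost)  # 26000.0
```

Harder-to-quantify factors, such as damage to reputation and brand, would sit on top of this number.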
For each of these factors, you’re looking at both time and money, which makes it vital to understand exactly how data center outages affect your business. Regular outages could mean it’s time to switch providers. They could also mean your current equipment can’t handle the demands of your business and customers.
Tired of the high costs of major data center outages? Reap the benefits of better reliability along with flexible and scalable services with Zenworks.
Image: Bryan Goff