The fact is there has been much discussion on this topic over the past 20 years or more, so it might be helpful to put things in perspective. After 9/11, several white papers and regulatory bodies strongly recommended that a company's secondary site be "out of region." The term was generally defined as more than 250 miles from the primary site, and it became achievable with the ability to replicate data over long distances. Some organizations implemented this concept by moving their secondary site cross-country or overseas. But the long distance between sites came at a cost. That cost was data loss: perhaps only minutes' worth, but too high a price for many financial firms and companies with highly critical workloads. As a result, the majority of the financial community chose to keep its second data center within synchronous distance to ensure no data loss during an outage.

That choice is compelling, especially when combined with data from NOAA and the USGS indicating that the largest non-coastal disaster covered only 25 miles and the largest coastal disaster radius was only 44 miles. Based on this data, one could conclude that distances as low as the 44- to 60-mile range would be entirely acceptable for primary and secondary data centers and would stand the test of time. Having been in the business continuity and disaster recovery industry for more than 20 years, with access to many colleagues and customers, I thought I would run my own informal poll. I asked, "In your entire career, do you know of any organization that has lost both its primary and secondary site in a regional disaster?" So far, I have not found one. I welcome feedback from anyone who wants to weigh in on this issue.

So what has changed recently to prompt an adjustment in data center distances? For the first time in my career, companies are implementing truly Active/Active data centers at distances from 44 to 120 miles apart. The new secret sauce here is EMC's VPLEX Metro, integrated with some "off the shelf" technologies to provide, for the very first time, a continuously available architecture. The ability to completely eliminate downtime and data loss is so compelling that I think we will see several organizations migrate from their long-distance strategy to this one. For those still concerned about distance risk, you can still bunker a copy of your data at an unlimited distance away and rely on a "quick ship" recovery in the unlikely event of an area-wide outage.
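To see why "synchronous distance" tops out in the tens of miles while a 250-mile-plus site forces an asynchronous (and therefore lossy) approach, a rough latency calculation helps. The sketch below is my own back-of-the-envelope illustration, not anything from a vendor specification: it assumes light travels through fiber at roughly two-thirds of its speed in a vacuum and that the fiber route runs about 1.5x the straight-line distance, both of which are illustrative assumptions rather than measured values.

```python
# Illustrative sketch: round-trip delay added to every synchronous write at
# various inter-site distances. The propagation-speed and route-length factors
# below are assumptions for the sake of the example, not measured figures.

SPEED_OF_LIGHT_KM_PER_MS = 300.0   # ~300,000 km/s, expressed per millisecond
FIBER_FACTOR = 2.0 / 3.0           # assumed propagation speed in fiber vs. vacuum
ROUTE_FACTOR = 1.5                 # assumed ratio of fiber path to straight-line distance
MILES_TO_KM = 1.609

def sync_write_penalty_ms(miles: float) -> float:
    """Estimated round-trip time a synchronous write must wait on, in ms."""
    one_way_km = miles * MILES_TO_KM * ROUTE_FACTOR
    one_way_ms = one_way_km / (SPEED_OF_LIGHT_KM_PER_MS * FIBER_FACTOR)
    # A synchronous write is not acknowledged to the host until the remote
    # site confirms it, so the penalty is a full round trip.
    return 2 * one_way_ms

if __name__ == "__main__":
    for distance in (25, 44, 60, 120, 250, 1000):
        print(f"{distance:>5} miles: ~{sync_write_penalty_ms(distance):.2f} ms added per write")
```

Under these assumptions, a 44- to 60-mile separation adds roughly a millisecond of round-trip delay per write, which most critical applications can tolerate, while 250 miles or more adds several milliseconds and quickly pushes organizations toward asynchronous replication, with the minutes of potential data loss described above.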
