Choosing a Disaster Recovery Center Location

When preparing a Disaster Recovery Center, one of the most important decisions is where to locate it. Until 9/11, many companies kept their DR centers in an adjacent building; right after 9/11, everyone wanted to move as far from the primary data center as possible.

One common misconception in Disaster Recovery planning is that a longer distance guarantees better protection. Increasing the distance between data centers does reduce the likelihood that both are affected by the same disaster, but distance alone may not be sufficient protection. In reality, the best distance for a DR location is dictated by a multitude of factors:

  • Minimum distances dictated by regulators - certain businesses, especially in telecom and finance, must maintain regulatory compliance. It is not unusual for regulators to mandate a minimum distance between the primary and the Disaster Recovery location. You must comply with these requirements
  • Corporate RTO parameters - the company has decided that the Disaster Recovery Center must be up and running within the Recovery Time Objective (RTO). This time includes the travel time to the Disaster Recovery center as well as the system activation time, so always take this parameter into account when choosing a Disaster Recovery site
  • Telecommunications services - a greater distance between the primary and DR site means higher telecommunication costs and limits the choice of remote copy technology. For instance, synchronous replication is still very difficult to achieve beyond roughly 40 km. Choose a location that is sufficiently distant but can still deliver the bandwidth and latency required by the chosen replication/remote copy technology
  • Geophysical conditions - moving your Disaster Recovery center a fixed distance from the primary center is not always enough to avoid a natural disaster. Most natural disasters deliver high impact in areas whose terrain configuration or other geophysical conditions support their spread. For instance, a safe hurricane impact distance was considered to be 150 km, yet Hurricane Katrina only lost strength more than 240 km inland because no terrain feature stopped it. Ideally, the site should be in a separate flood basin, off a seismic fault line (or at least on a different one), and with a large mountain between the primary and the DR site
  • Means of transportation - increased distance between the primary and DR site may make it difficult for employees to travel to the recovery site. This is especially true in a crisis, when roads may be damaged or blocked, or public transport is stopped by strikes. Choose a site with multiple travel options - railroad, motorway, even river boat
  • Vicinity of strategic objects - it is never smart to place your Disaster Recovery center near objects of strategic importance to the country. Such locations are prone to terrorist attacks, and to attacks by opposing forces in a military conflict. Even during natural disasters, strategic locations will have a strong military presence that may limit access to your Disaster Recovery center. Strategic objects include military bases, airports, refineries, oil depots, etc. Keep a safe distance from such locations
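The replication-distance constraint above can be sanity-checked with back-of-the-envelope math. The sketch below is illustrative only: it assumes light travels at roughly 200,000 km/s in optical fiber (about 5 microseconds per km) and that every synchronous write waits for one network round trip; real links add switch, protocol and routing overhead on top of this floor.

```python
# Rough feasibility check for synchronous replication distance.
# Assumption: ~200,000 km/s signal speed in fiber, one round trip per
# acknowledged write. Real-world latency will be higher than this floor.

def round_trip_latency_ms(distance_km, fiber_speed_km_per_s=200_000):
    """Minimum added network latency (ms) per synchronous write."""
    one_way_s = distance_km / fiber_speed_km_per_s
    return 2 * one_way_s * 1000

for d in (10, 40, 150):
    print(f"{d:>4} km -> at least +{round_trip_latency_ms(d):.2f} ms per write")
```

At 40 km this is only about 0.4 ms per write, but the penalty grows linearly with distance and is paid on every acknowledged write, which is why synchronous replication becomes impractical long before asynchronous replication does.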

There is no such thing as an ideal Disaster Recovery location. The optimal location is the one that minimizes the risks at an acceptable cost and meets the required SLAs and authorities' regulations.

Talkback and comments are most welcome

Related posts
Mitigating Risks of the IT Disaster Recovery Test
iPhone Failed - Disaster Recovery Practical Insight
Business Continuity Analysis - Communication During Power Failure
Business Continuity Plan for Brick & Mortar Businesses
Example Business Continuity Plan For Online Business

Fuzzing with OWASP's JBroFuzz

I decided to search out a good web fuzzer for some testing needs. I wanted a fuzzer that was capable, customizable and could support my testing. The last thing I wanted was some sort of all-in-one application security scanner (since the false positives can just get ridiculous at times). Nope, all I needed was some automation assistance.

First, a simple definition: fuzzing, or fuzz testing, is a software testing technique that provides invalid, unexpected, or random data to the inputs of a program. If the program fails (for example, by crashing or failing built-in code assertions), the defects can be noted.
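The definition above can be illustrated with a minimal, self-contained harness (this is not JBroFuzz itself, and the parser and helper names are hypothetical): random byte strings are fed to a deliberately fragile parser, and any crash or failed assertion is recorded as a defect.

```python
import random

def parse_record(data: bytes) -> int:
    """Hypothetical toy parser under test: expects b'length:payload'."""
    length, payload = data.split(b":", 1)   # crashes if there is no ':'
    assert len(payload) == int(length)      # fails on a bad length field
    return len(payload)

def fuzz(target, iterations=1000, seed=42):
    """Throw random byte strings at `target` and collect the failures."""
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    defects = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            target(data)
        except Exception as exc:            # any crash/assertion is a defect
            defects.append((data, type(exc).__name__))
    return defects

found = fuzz(parse_record)
print(f"{len(found)} defects found in 1000 runs")
```

A real web fuzzer such as JBroFuzz works on the same principle, but generates HTTP requests from curated payload lists rather than purely random bytes.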

I came across OWASP's JBroFuzz and think I've found a good match. The tool provides a variety of brute force options and includes some nice graphing and statistics for analyzing the results. I was also happy to see good documentation, so I could quickly get up and running. My only complaint at the moment is that the proxy setup is a little clunky and unintuitive at first. But again, as long as you follow the guide, it shouldn't be an issue.

When do I plan to use this newfound fuzzer?
1. Sites where I don't have the source for some reason. This is actually a rarity. If you want someone to assess the security of your web app, you should really give them the source code. Quick aside: if the consultants you select for an assessment aren't asking for source code, an alarm should go off in your head. If they don't do source code analysis, then they aren't doing their job.

2. When a site relies heavily on complex regular expressions for input validation and has weak output encoding. Yes, we can argue straight away that this is an issue, but it's very powerful to make your case with a working exploit. Otherwise, you are trying to justify a fix for an issue that may or may not be currently exploitable. This can be a tough sell if developers are heavily loaded with feature enhancements, new functionality, upcoming releases, etc.
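A hypothetical example of the regex-validation gap described above: Python's `re.match` anchors only at the start of the string, so a validator that forgets to anchor the end will wave through a valid prefix followed by an attack payload. A fuzzer's payload list finds exactly this class of bug quickly.

```python
import re

# Hypothetical validator with a classic flaw: re.match anchors only at the
# start of the string, so anything appended after a valid prefix passes.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

def is_valid_username(value: str) -> bool:
    return USERNAME_RE.match(value) is not None  # bug: should use fullmatch

print(is_valid_username("alice"))                           # True, as intended
print(is_valid_username("alice<script>alert(1)</script>"))  # also True!
```

If the output encoding downstream is weak, that second input becomes a working cross-site scripting exploit, which is a far stronger argument for a fix than a theoretical finding.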

This is a guest post by Michael Coates, a senior application security consultant with extensive experience in application security, security code review and penetration assessments. He has conducted numerous security assessments for financial, enterprise and cellular customers world-wide.
The original text is published on ...Application Security...


Related posts
Skipfish - New Web Security Tool from Google
Tutorial - Using Ratproxy for Web Site Vulnerability Analysis
How To - Malicious Web Site Analysis Environment
Web Site that is not that easy to hack - Part 1 HOWTO - the bare necessities
Checking web site security - the quick approach
