At the Gartner Data Center Conference in Las Vegas this week, Acronis’ CMO Scott Crenshaw was among the great technology minds attending (and tweeting!) on the future of the IT landscape. The hashtag #GartnerDC was created to let attendees share their thoughts, engage in conversations and discuss industry trends – and to let those of us not fortunate enough to be in attendance follow along. Some of the trending topics from this week’s show included Big Data, data storage and, of course, the cloud.

Make the Most of Your IT Budget in Q4

Welcome to the fourth quarter, the time when most IT departments are working hard to tie up loose ends and complete projects before the end of the fiscal year. Economic conditions have caused organizations to do more with less. They are tasked with protecting a growing amount of mission-critical data, often with a shrinking budget.

Now is the time to review and adjust your current technology budget to make room for investing in new services and new technology. It’s also the time of year when you’ll want to spend any remaining 2012 budget; failing to do so may result in that budget being reallocated in 2013. The clock is ticking, so make the most of your 2012 budget in the last quarter and brush up on some best practices for technology budgeting.

Never Underestimate Your Resource & Skillset Requirements

The final post in this series of five hidden DR hazards involves underestimating your resource and skillset requirements. When creating a do-it-yourself disaster recovery solution, you must consider your team’s personal priorities and also their ability to access your remote site.

One of the biggest mistakes you can make is to assume that your staff will be available during a disaster. Because DIY disaster recovery depends on an interlocking set of skillsets, it is virtually impossible to guarantee that your entire team will be available to work during a medium- to large-scale incident. In a massive geographic disaster, your top IT employees’ priorities will be personal ones, like the safety and well-being of their families.

Failed Testing: Another Hidden Hazard of DIY DR

Building a disaster recovery site can be an exciting project for ambitious IT teams. It involves a great deal of planning and delivers a high degree of satisfaction once the budget is justified, the equipment is procured and configured, and the environment is complete. Overall it is a gratifying experience – until it’s time to test the solution.

Designing a DR testing scenario can be a project in itself. Significant capital expense has to be justified to build the DR site, and the return on that investment cannot be demonstrated until a test has been developed. Most companies will plan scheduled downtime, take production systems offline, and test the DR site while production is down. This typically means many hours of late-night and weekend work for the IT staff and for the application owners.
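To give a sense of what a first pass at test automation can look like, here is a minimal sketch of a DR smoke test. The host names, ports and service list are hypothetical placeholders, not anything from the original post, and a real test plan would also verify data currency and application behavior, not just connectivity.

```python
# Minimal sketch of an automated DR smoke test (hypothetical hosts and ports).
# It checks that key services at the DR site answer on their expected ports
# before anyone signs off on the test window.
import socket

# Hypothetical inventory of DR-site services to verify: (name, host, port)
DR_SERVICES = [
    ("database", "db.dr.example.com", 5432),
    ("web frontend", "web.dr.example.com", 443),
    ("file share", "files.dr.example.com", 445),
]

def port_is_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_smoke_test() -> bool:
    """Check every DR service and print a pass/fail summary."""
    all_ok = True
    for name, host, port in DR_SERVICES:
        ok = port_is_open(host, port)
        print(f"{name:15s} {host}:{port} -> {'OK' if ok else 'UNREACHABLE'}")
        all_ok = all_ok and ok
    return all_ok

if __name__ == "__main__":
    raise SystemExit(0 if run_smoke_test() else 1)
```

Even a simple script like this can be scheduled to run against the DR site regularly, so the formal test window isn’t the first time anyone notices a dead service.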

How to Avoid Hardware/Software Drift

I’ve been writing about the hidden hazards of do-it-yourself disaster recovery. One of these hazards is hardware/software drift. Since your disaster recovery site is meant to be a working replica of the production environment, it needs to be maintained on an ongoing basis. There are several strategies for provisioning hardware and software at your DR site, and the strategies you choose will determine how much maintenance is needed to keep your DR site running at an optimal level.

There are two main techniques for acquiring hardware for a disaster recovery site, although you may want to use a combination of both. One technique is to replace hardware (e.g., a server) that is no longer covered by warranty with new hardware and repurpose the old hardware for disaster recovery. The other is to buy or lease new equipment to use at the DR site.
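Software drift is where the gap between production and DR most often creeps in unnoticed. As a rough illustration, here is a minimal sketch of a drift check that compares package manifests exported from the production and DR servers. The file names and the "name version" manifest format are assumptions for the example, not part of the original post.

```python
# Minimal sketch of a software-drift check (hypothetical file names).
# It compares package manifests exported from the production and DR servers,
# e.g. the output of `dpkg-query -W` or `rpm -qa --qf '%{NAME} %{VERSION}\n'`
# saved to a text file on each host.
import sys

def load_manifest(path: str) -> dict:
    """Parse 'name version' lines into a {package: version} mapping."""
    packages = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                packages[parts[0]] = parts[1]
    return packages

def report_drift(prod_path: str, dr_path: str) -> None:
    """Print every package whose presence or version differs between sites."""
    prod, dr = load_manifest(prod_path), load_manifest(dr_path)
    for name in sorted(prod.keys() | dr.keys()):
        p, d = prod.get(name), dr.get(name)
        if p != d:
            print(f"DRIFT  {name}: production={p or 'missing'}  dr={d or 'missing'}")

if __name__ == "__main__":
    # Usage (hypothetical): python drift_check.py prod-packages.txt dr-packages.txt
    report_drift(sys.argv[1], sys.argv[2])
```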

How to Choose a Remote Colocation Site

Choosing a disaster recovery location is critical to the success of any DR project. One of the biggest mistakes you can make is to choose a colocation site that is too close to your production site. It’s all too easy for a power grid failure to knock out both your primary site and your colocation site if the two are located in the same metropolitan area. (It happens!) But the challenge of in-house DR is that the technical team responsible for bringing your DR site online still needs access to that site, which means the approach and technology used to deliver a disaster recovery solution must be capable of remote activation. Learn more about remote activation in the nScaled white paper, The 5 Things That Can Go Wrong With DIY Disaster Recovery.
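As a quick sanity check on geographic separation, here is a minimal sketch that computes the great-circle distance between the primary data center and a candidate colocation site. The coordinates and the 300 km threshold are illustrative assumptions only; how far is “far enough” depends on the shared risks (power grid, flood plain, seismic zone) you are actually trying to avoid.

```python
# Minimal sketch of a site-separation check (hypothetical coordinates).
# It computes the great-circle distance between the primary data center and a
# candidate colocation site so you can see whether they sit in the same metro
# area, and possibly on the same power grid.
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle (haversine) distance between two lat/lon points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical example: primary site in Boston, candidate colo in Denver.
primary = (42.36, -71.06)
candidate = (39.74, -104.99)

km = distance_km(*primary, *candidate)
# 300 km is an arbitrary illustrative threshold, not an industry rule.
print(f"Separation: {km:.0f} km -> {'probably far enough' if km > 300 else 'too close'}")
```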

The Hidden Hazards of SAN-to-SAN Replication

It might be tempting to create a “do-it-yourself” disaster recovery solution by purchasing additional hardware and installing it in a branch office or colocation facility. But creating an effective disaster recovery solution is a complex project and there are several unplanned costs and other hidden hazards associated with it. I’ll identify some of these hidden hazards over the next few weeks.

I’ll begin with SAN-to-SAN replication. Virtually every serious SAN manufacturer offers some kind of SAN-to-SAN replication, but not all SAN-to-SAN replication is alike. When creating an effective DR solution, you have to weigh several architectural considerations for replicating data between production and DR, including virtualization, application-specific agents and snapshot storage requirements.
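Snapshot storage requirements in particular are easy to underestimate. Here is a minimal back-of-the-envelope sketch of how that math might work; the data set size, change rate and retention window are purely hypothetical figures, and a real SAN’s snapshot overhead will depend on its block size and snapshot implementation.

```python
# Back-of-the-envelope estimate of snapshot storage at the DR site.
# All figures are hypothetical assumptions for illustration only.
protected_tb = 20.0        # TB of production data being replicated
daily_change_rate = 0.03   # assume 3% of the data set changes per day
retention_days = 30        # how many daily snapshots are kept at the DR site

# Each retained daily snapshot roughly consumes that day's changed blocks.
snapshot_overhead_tb = protected_tb * daily_change_rate * retention_days
total_dr_capacity_tb = protected_tb + snapshot_overhead_tb

print(f"Snapshot overhead: {snapshot_overhead_tb:.1f} TB")        # 18.0 TB
print(f"Total DR SAN capacity needed: {total_dr_capacity_tb:.1f} TB")  # 38.0 TB
```

Under these assumptions the snapshots nearly double the raw capacity you need at the DR site, which is exactly the kind of unplanned cost this series is about.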

Recovery-in-a-Cave

Just got done reading the funniest white paper ever, from our friends at Iron Mountain. It asks the question, “…what happens when the devastation is so fierce that it hits the backups too? Don’t panic.”

Don’t panic? Really? I think you damn well better panic – about your career. You stashed your backup tapes so close to your primary data center that they got clobbered by the same natural disaster? C’mon man! Better update that resume.

And while we’re at it, why on earth are you still using tape? Iron Mountain, whose business is storing tape, points out just how fragile tape is:

“…if tapes suffer major temperature fluctuations every night, they will weaken and become more likely to snap. When your staff fails to maintain drive heads’ cleanliness, or cleans equipment incorrectly, tapes may be more prone to breaking. To avoid such problems, know the cleaning cycle and storage conditions recommended by your vendor.”

Really?

Disco Stu has moved from Springfield to the world of backup and DR.

Earlier this week, eVault announced 4-hour recovery times for their mid-size business customers. This is one of those mind-boggling announcements that gets a lot of us in the business saying, “Really?”

Because the thing is, in 2012, a 4-hour recovery time is terrible. They never explain whether the 4-hour guarantee applies to a single system or to an entire data center. In either case, eVault’s claim is just plain archaic.

For the sake of comparison, nScaled offers customers a 15-minute RTO for any single system, and a 2-hour RTO for an entire data center.

Boston Fire Leads to Customer Failover

You may have read about the fire in Boston yesterday that took out power for a section of the city.

The power outage affected the Boston office of one of our large disaster recovery customers. We protect 70 servers in 11 offices around the US for this customer.

Our support guys were up late last night, getting the customer failed over so they could keep working through the interruption, so I haven’t had a chance to get the detailed story from them. I’ll post again when I have more news.