Enterprises Need Clouds, Not Droplets

I just read an interesting story about the Apache LibCloud project. It’s a notable development in cloud computing for certain groups. But I don’t think mainstream enterprises are one of those groups. Here’s why.

LibCloud, and a lot of other cloud technology providers, are all about providing a la carte menus of commodity-priced, on-demand computing and storage that the customer can assemble into the solution they want.

In other words, these providers aren’t actually supplying clouds. They’re supplying droplets of water vapor, requiring the customer to assemble them and turn them into clouds.

How To Not Lose Your Data

Once again I’ll start by citing a very nice story that came out today. This time, it’s David Gewirtz at ZDNet writing about the data that was lost in the Amazon outage last week.

Gewirtz lays out some excellent advice for his readers, including something that I would like to expand upon. He suggests, “If all your data’s in the cloud, back up to a local environment.” This is a great idea, but not totally obvious to many businesses thinking about shifting their IT data centers to the cloud. Having backups, and backups of backups, is vital, and many companies will never feel safer than when they have one of those copies on their premises.
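Keeping a local copy of cloud-hosted data, as Gewirtz suggests, can be as simple as a scheduled mirror job. Below is a minimal, hypothetical sketch (the function names and directory layout are my own, not any vendor’s tool) that copies new or changed files from an exported cloud dataset into a local backup directory, using content hashes to skip files that haven’t changed:

```python
import hashlib
import shutil
from pathlib import Path


def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def mirror(source: Path, backup: Path) -> list:
    """Copy files from source to backup, skipping unchanged ones.

    Returns the relative paths of files that were actually copied.
    In practice `source` would be data pulled down from the cloud
    provider's export or API; here it is just a local directory.
    """
    copied = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source)
        dst = backup / rel
        if dst.exists() and file_digest(src) == file_digest(dst):
            continue  # unchanged since the last run
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)  # copy2 preserves timestamps
        copied.append(str(rel))
    return copied
```

Run something like this from cron or a scheduled task, and you always have an on-premises copy of the data whose authoritative home is the cloud.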

Why the Amazon Outage is Not the End of the Cloud

Businesses are rightly concerned about what Amazon’s outage last week means to them, in terms of their plans to move some or all of their IT operations to cloud data centers. We hope they keep cool heads and don’t panic. Too many of the stories being written about Amazon imply that all cloud data centers are alike, and thus prone to the same problems. Of course this is not true.

In a thorough story, Patrick Thibodeau at Computerworld observes, “…supporters are going to have a tough time arguing that the uptime delivered by cloud services is superior to anything corporate IT can deliver.”

On the back of our success in the US market, nScaled formally launched its European operations earlier this year.

Like any new entrant to a market, we see spending time speaking to leading figures in the industry and gaining valuable feedback as a critical part of the strategy. What has quickly become evident through these conversations is that thinking has moved on: organisations are realising that the benefits of owning and managing their own datacentres are diminishing.

A few years ago, the phrase “we want to be out of the data centre management business” was not one which would have readily been associated with law firm IT directors. Now it is. These are not quick decisions, but commercially law firms are realising that there are significant benefits to adopting a shared approach to infrastructure, and that security within these environments is now at a level which often exceeds what they can provide themselves.

Preventing Disasters

I talk with a lot of CIOs and IT Directors around the US about how they go about protecting their firms from the risks of downtime. We usually cover topics such as:

  • RTO and RPO,
  • SLAs,
  • Clouds,
  • Budgets,
  • Data,
  • Storage, etc.
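Two of those terms, RPO (recovery point objective, how much data loss is tolerable) and RTO (recovery time objective, how long a restore may take), translate directly into checks you can run against your own backup logs. A rough, illustrative sketch (the target values in the usage below are invented for the example, not recommendations):

```python
from datetime import datetime, timedelta


def meets_rpo(last_backup: datetime, now: datetime, rpo: timedelta) -> bool:
    """RPO check: if the most recent recovery point is older than the
    RPO, a failure right now would lose more data than the business
    agreed to accept."""
    return now - last_backup <= rpo


def meets_rto(restore_started: datetime, restore_finished: datetime,
              rto: timedelta) -> bool:
    """RTO check: did the restore complete within the downtime window
    the business agreed to accept?"""
    return restore_finished - restore_started <= rto
```

For example, with a four-hour RPO, a backup taken three hours ago passes and one taken five hours ago fails; running checks like these continuously is how the SLA conversation becomes concrete.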

Recently, the topic has turned from a bits-and-bytes discussion into one focused on the reality of keeping their users working and “preventing disasters.”

I realize that “preventing disasters” may seem like an impossibility, and from the traditional view of a disaster (earthquake, tornado, water leak, etc.), you would be right. But what if you think in terms of keeping your users working with their content (documents, emails, appointments) so that your customers continue to come to you for their needs? When you are successful, you avoid the bigger, more personal disaster: losing customers and, ultimately, the business itself.

Time for Change – From Backup to Availability

It’s all about application and site availability

Backup is no substitute for site and application availability

Henry David Thoreau clarified our present situation when he said, “Men have become the tools of their tools.”

CIOs, storage administrators and IT professionals are constantly challenged by limited time, talent, and budget to adequately deal with the real-world issues of data growth, data protection, remote offices, compliance, migrations, disaster recovery, and server failure. Little time is left over to ensure their applications and data are constantly available. As a result, organizations end up with several disparate point solutions, which makes data and application availability during a disaster very challenging and in most cases impossible to achieve.

nScaled predictions for 2011

1. Organizations will stop throwing big money at big storage
The cost per GB of cloud storage will fuel the growing realization that the days of making million-dollar investments in privately managed, top-tier SAN storage are coming to an end.

2. Tiered storage, and smarter use of storage tiers will become a standard
Organizations will take a harder look at the data residing on their expensive primary storage, and put solutions in place to manage its migration to appropriate media and locations. Factors for consideration will include access speeds, retrieval times and redundancy. Older, less accessed data will be moved to more cost effective storage tiers automatically. Primary, expensive storage will be viewed more as a ‘data cache’ – used only for current data. This will slow the needs for continued growth of primary, high performance SAN facilities.
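The policy described above, moving older, less-accessed data to cheaper tiers automatically, reduces to a simple rule engine: a file’s last-access age determines its tier. A toy sketch (the tier names and age thresholds here are illustrative, not any vendor’s defaults):

```python
from datetime import datetime, timedelta

# Illustrative tiers, ordered from most to least expensive.
# Each entry: (tier name, maximum data age kept on that tier).
TIERS = [
    ("primary-san", timedelta(days=30)),      # hot "data cache"
    ("nearline", timedelta(days=365)),        # cheaper disk
    ("cloud-archive", timedelta.max),         # coldest, cheapest
]


def tier_for(last_access: datetime, now: datetime) -> str:
    """Pick the storage tier for a file based on how recently it was used."""
    age = now - last_access
    for name, max_age in TIERS:
        if age <= max_age:
            return name
    return TIERS[-1][0]  # unreachable with timedelta.max, kept as a guard
```

A nightly job applying this rule, plus a migration step to actually move the data, is the essence of what commercial tiering products automate; factors like access speed and redundancy requirements would refine the thresholds.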

Video Short – Local File Recovery from Snapshot

The following short video shows the quick and simple process of recovering a file locally, provided you have protected it using the on-premises integrated tools that are part of nScaled’s Total Data Protection Disaster Recovery and Business Continuity solution.

Click here to view

1.0.13 beta update of nScaled Cloud Console

On August 18th, 2010, we released the 1.0.13 update for the nScaled Cloud Console.

Here’s what’s new in version 1.0.13 since the previous release:

The nScaled Cloud Computing Console – Part 2

In today’s installment of my review of the nScaled console, I present a recent update that begins to surface important data about what you have purchased and are using: the “Data Growth” widget, and the “Data Center Connectivity” widget, which shows how much data you are sending to and receiving from your business continuity site over time.

On the left in the screenshot above you can clearly see how much capacity you have pre-purchased from nScaled for the purpose of storing your business continuity data. Recall that with nScaled, business continuity data in the cloud essentially means the ability to recover any file, folder, disk, or entire server at any point in time, according to your defined retention schedule.
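Which points in time you can recover to depends on which snapshots the retention schedule keeps. One common scheme (and only one possible scheme — nScaled’s actual policy may differ) is tiered retention: keep every snapshot for a day, then one per day for a month, and drop the rest. A minimal pruning sketch:

```python
from datetime import datetime, timedelta


def snapshots_to_keep(snapshots, now,
                      keep_all=timedelta(days=1),
                      keep_daily=timedelta(days=30)):
    """Decide which snapshots a simple retention schedule preserves.

    - Everything newer than `keep_all` is kept.
    - Between `keep_all` and `keep_daily`, keep one snapshot per
      calendar day (the newest of that day).
    - Anything older is dropped.
    """
    keep = set()
    seen_days = set()
    for snap in sorted(snapshots, reverse=True):  # newest first
        age = now - snap
        if age <= keep_all:
            keep.add(snap)
        elif age <= keep_daily and snap.date() not in seen_days:
            keep.add(snap)
            seen_days.add(snap.date())
    return keep
```

The set this returns is exactly the menu of recovery points a console like the one above can offer; the `keep_all`/`keep_daily` windows are the knobs a defined retention schedule exposes.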