Another Look at File Virtualization
It is customary to treat file virtualization as a discipline in its own right. In fact, file virtualization is a Hungarian goulash of supporting technologies: mirroring, replication, disk images, point-in-time copying, data movement, archiving, elements of hierarchical storage management (HSM) and data abstraction across all storage tiers. These features can be implemented as software running on off-the-shelf servers.
Features can also be implemented on custom, proprietary hardware that focuses on mirroring, snapshot technology or one of the other technologies mentioned above. With all that variety, there is no "one size fits all" way to implement file virtualization in your installation. Every business has its own unique needs, which are (or should be) reflected in its IT operation.
Once file virtualization is implemented, it should operate transparently. It should also respect the IT timetable, leaving broad windows for transactional and other priority processing applications. Depending on the business's duty cycles, virtualization tasks may be best run as a third-shift, back-end operation.
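As an illustration of the third-shift approach, a replication or data-movement pass can be scheduled with an ordinary cron entry. The script name, path and time window below are hypothetical placeholders for whatever tool the vendor or an in-house team provides:

```
# Illustrative crontab entry: run the file-virtualization replication
# pass at 2 a.m., when transactional load is lowest.
# "sync-virtual-namespace.sh" is a hypothetical placeholder script.
0 2 * * * /usr/local/bin/sync-virtual-namespace.sh >> /var/log/fv-sync.log 2>&1
```

The point is not the specific tool but the discipline: heavy back-end data movement is confined to a window the business has explicitly set aside for it.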
Just about every virtualization vendor will promise that its solution, hardware or software, is totally nondisruptive, but at some point something has to change. For example, a data center might need to alter the automounter or DNS configuration on its servers. The trick is to apply business common sense: ask whether the changes are really necessary, and whether they are worthwhile for the data center and the organization.
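To make the automounter point concrete, here is a hedged sketch of the kind of change involved: redirecting an autofs map entry so that clients mount through the virtualization layer instead of a physical filer. The hostnames, export paths and map file name are illustrative, not taken from any particular product:

```
# /etc/auto.projects -- illustrative autofs map entry
#
# Before: clients mount directly from a physical filer.
#   eng  -ro,hard  filer01.example.com:/vol/eng
#
# After: clients mount through the file-virtualization gateway,
# which presents one namespace across the back-end filers.
eng  -ro,hard  fv-gateway.example.com:/global/eng
```

A one-line change like this is small, but pushing it to every client is exactly the kind of "nondisruptive" disruption the vendors gloss over, which is why it deserves the cost-benefit scrutiny described above.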