CIOs today face a gathering storm of data challenges, driven by an unstoppable deluge of new data and compounded by vast tracts of unstructured legacy data, often dating all the way back to the organisation’s launch. To cut business risk, support better, faster decision making, enable robust data governance and improve operational efficiency while delivering ongoing cost savings, CIOs increasingly need more agile, scalable and cost-effective enterprise file storage solutions.
Selecting the best solution is becoming more difficult, as the market evolves with new entrants constantly arriving on the scene.
Traditional, Cloud or Hybrid?
Traditionally, enterprise file storage solutions have employed scale-out hardware. Independent, self-sufficient storage devices were brought together into clusters by a distributed file system. Newer solutions use cloud-based devices and software, serving up data stored in “objects” via RESTful HTTP APIs such as Amazon Simple Storage Service (S3).
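To make the object model concrete, here is a toy, in-memory sketch of how such a store behaves. The `ObjectStore` class and its method names are illustrative inventions for this article, not the real S3 SDK; each method simply mirrors the HTTP verb that the real REST API exposes at URLs of the form `https://<bucket>.s3.amazonaws.com/<key>`.

```python
class ObjectStore:
    """Toy, in-memory stand-in for an object store (illustration only).

    Objects are opaque blobs of bytes addressed by a (bucket, key) pair;
    there is no hierarchy, locking or partial update as in a file system.
    """

    def __init__(self):
        self._buckets = {}  # bucket name -> {key -> bytes}

    def put_object(self, bucket, key, body):
        # Mirrors: HTTP PUT https://<bucket>.s3.amazonaws.com/<key>
        self._buckets.setdefault(bucket, {})[key] = bytes(body)

    def get_object(self, bucket, key):
        # Mirrors: HTTP GET https://<bucket>.s3.amazonaws.com/<key>
        return self._buckets[bucket][key]

    def delete_object(self, bucket, key):
        # Mirrors: HTTP DELETE https://<bucket>.s3.amazonaws.com/<key>
        del self._buckets[bucket][key]


# Usage: write and read back a blob by bucket and key.
store = ObjectStore()
store.put_object("designs", "projects/bridge/rev3.dwg", b"CAD data")
data = store.get_object("designs", "projects/bridge/rev3.dwg")
```

The point of the sketch is the addressing model: every object is fetched whole by key over HTTP, which is what gives object storage its near-limitless scalability, and also why file-system semantics have to be layered on top by solutions such as those discussed below.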
Even looking solely at newer, cloud-based solutions, there are many different approaches to consider. Some systems, for example, use the cloud as second tier storage supporting traditional on-premise storage, while others, taking a “cloud first” approach, use object storage as the primary tier, with stateless edge appliances delivering high-performance access to active data.
Strengths and Weaknesses
On-premise scale-out NAS can suffer performance degradation as capacity is scaled. It also proves cumbersome when multi-site collaboration is needed, demanding expensive replication technology and duplicated file infrastructures. More traditional technologies such as Windows file servers and NetApp arrays deliver excellent performance within certain constraints on location, and file and volume size, but are poorly suited to multi-site collaboration and global file sharing.
File sync and share solutions such as Dropbox and Box, designed for the cloud, provide some version control and collaboration features. However, they were never designed to support shared access to large files via file servers and NAS protocols.
Newer, hybrid cloud solutions such as Nasuni and Panzura may at first glance seem similar, addressing many of the same NAS file system issues by combining the limitless capacity, geo-redundancy and agility of cloud object storage with the flexibility of local, on-premise NAS. However, they are architected quite differently.
Panzura is a hybrid-cloud system, designed for the edge and then pushed to the cloud. Nasuni is a cloud-native file system, originating in the cloud and pushed down to the edge, enabling true cloud-scale performance.
Nasuni’s file system, including both WORM copies of files and all metadata, resides natively in object storage, while a local filer caches the most frequently accessed files. As a result, cache hit rates average 98%, the on-site storage footprint is dramatically reduced and cloud access charges are cut.
In contrast, Panzura stores file system metadata for the entire dataset on each controller. It must therefore continuously replicate this metadata, as changes occur, to other controllers and to a cloud container. Maintaining performance over multiple sites with this architecture can require significant hardware investment. These scalability limitations are similar to various device-centric file systems, such as EMC Isilon, IBM Spectrum Scale and NetApp.
Nasuni and Panzura also differ markedly in terms of backup. Nasuni, supporting unlimited numbers of files of any size, can restore any file version from any point in time. Panzura, placing restrictions on numbers of file versions and retention periods, cannot offer such robust file restore capabilities.
To learn more about MTI’s Datacentre Maturity Assessment (DCMA) services and how you can address enterprise file storage challenges in your organisation, contact MTI at firstname.lastname@example.org or on 01483 520 200.