Network Attached Storage has come a long way from its humble beginnings as a file-storage tool. Now, a new generation of NAS systems is needed in the cloud to meet enterprise demands.

Andres Rodriguez, CEO, Nasuni

March 31, 2015

5 Min Read


The world runs on files. Every bit of digital information eventually ends up in one or more files: from the simplest text file to the files that capture the intricacies of a product or the blueprint for a high-rise. It's all files, and files are a pain to store and manage.

Beneath the calm humming of any file-intensive organization, there is a storage engineer shoveling coal into the engine in order to ensure that there is always plenty of capacity, that the files are protected, and that they are available conveniently and quickly to the people who need them. The sheer volume of files and the expectations for anytime, anywhere access have never been higher than they are today. And all of a sudden, NAS is sexy -- again.

Network Attached Storage (NAS) is a dedicated storage server whose sole purpose is to host files and make them available to the Local Area Network (LAN). NAS has become the way IT stores, protects, and makes files available to end-users and to applications.
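Because a NAS share speaks standard protocols such as NFS or SMB, applications see it as nothing more than a path in the filesystem. The sketch below illustrates that idea in Python; the mount point `/mnt/nas` is hypothetical, and a temporary directory stands in for the mounted share so the example is self-contained.

```python
import tempfile
from pathlib import Path

def save_report(mount_point: Path, name: str, data: bytes) -> Path:
    """Write a file to a NAS share. To the application, the share is
    just a directory path -- the NFS/SMB details live in the OS mount."""
    target = mount_point / name
    target.write_bytes(data)
    return target

# In production this would be a real mount point, e.g. /mnt/nas
# (hypothetical). Here a temp directory stands in for the share.
with tempfile.TemporaryDirectory() as share:
    path = save_report(Path(share), "q1-report.txt", b"revenue: up")
    print(path.read_bytes())  # b'revenue: up'
```

The point of the sketch is that NAS requires no special client library: any program that can open a file can use it, which is exactly why it became the default way to serve files to a LAN.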

At the heart of a great NAS is a rock-solid file system. The first great migration of files into the network defined the performance and functional requirements of NAS. Monolithic NAS architectures enjoyed a decade of strong growth with single, powerful controllers. It would take the rise of the Internet to create the next set of requirements, which strained even the most powerful monolithic architectures beyond their technical limits.

[Want to learn more about how storage is being reorganized? See One Big Idea For Data Center Storage.]

The Internet took file storage and sharing to a whole other level. I happened to be on the frontlines working with several large media companies, including The New York Times, as we struggled to create an infrastructure that would scale to millions of users. First came throngs of users, and soon after came an explosion in the number and size of files as everything went digital: pictures, music, movies, everything. Overnight, the physical world was transformed into files. Monolithic NAS architectures couldn't support the access load, and IT struggled to increase capacity fast enough.

In order to conquer scale, the industry chose to divide the monolithic NAS controller into clusters of smaller NAS controllers. The new architecture became Scale-Out NAS (SONAS), and the leader that emerged at that time was Isilon, now part of EMC, with its OneFS file system. History repeats, and progress has a way of pushing every technology to its breaking point. The Achilles' heel of SONAS is its insistence that the file system and the hardware clusters be one and the same. This works well while the files are concentrated in a single location, but modern organizations need a NAS platform that helps them span the globe.
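One common way such a cluster spreads files across its controllers is consistent hashing: each file path hashes onto a ring of nodes, so adding a node absorbs a share of the files without reshuffling everything. The toy below is purely illustrative (it is not how OneFS actually places data), but it shows why capacity scales with node count while the cluster still lives in one location.

```python
import bisect
import hashlib

class FileCluster:
    """Toy scale-out placement: hash each file path onto a ring of
    virtual nodes. Illustrative only -- not any vendor's real layout."""

    def __init__(self, nodes, vnodes=100):
        # Each physical node gets many points on the ring so load
        # spreads evenly across controllers.
        self.ring = sorted(
            (self._hash(f"{n}:{i}"), n) for n in nodes for i in range(vnodes)
        )
        self.keys = [k for k, _ in self.ring]

    @staticmethod
    def _hash(s: str) -> int:
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, path: str) -> str:
        # Walk clockwise to the first ring point at or after the hash.
        i = bisect.bisect(self.keys, self._hash(path)) % len(self.ring)
        return self.ring[i][1]

cluster = FileCluster(["node-a", "node-b", "node-c"])
print(cluster.node_for("/projects/bridge/blueprint.dwg"))
```

The same path always maps to the same controller, which is what lets clients find a file without a central lookup, and it is also why the scheme stops helping once your users are spread across continents rather than across racks.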

Another great file migration is underway, and this time files are going to the cloud. The scalability problems that began in media companies have become commonplace in all of the file-heavy industries, including engineering, healthcare, architecture, legal, and life sciences. Unlike traditional media outlets, these organizations need their NAS everywhere.

A couple of modern approaches put files in the cloud in order to enable massive scale and the global synchronization of files. It is still too early to tell which approach will emerge as the winner, but distributed organizations can already benefit from a wide range of choices within these two camps. The software plays -- such as Dropbox, Box, Microsoft's OneDrive, and Google Drive -- are terrific in their support of files for mobile users, but they lack the performance, the scale, and the standard protocols used in the data center.

Today, the software approach represents a storage island, great for mobile files but largely incompatible with the existing infrastructure. However, vendors in this space have their eyes on the lucrative enterprise market, and they will evolve.

There is also a third wave of cloud-based NAS (including Avere, CTERA, Nasuni, and Panzura) that uses dedicated hardware appliances to deliver performance to the data center while leveraging the cloud for scale and the global synchronization of files. Today, most of these providers offer limited mobile support, but that, too, is changing.
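The appliance model can be reduced to a simple idea: the cloud holds the authoritative copy of every file, while each site's appliance caches the hot ones and writes changes through to the cloud so every other site sees them. The sketch below is a conceptual model under those assumptions, not any vendor's actual design; `CloudStore` stands in for an S3-style object store.

```python
class CloudStore:
    """Stand-in for an object store; holds the authoritative copies."""
    def __init__(self):
        self.objects = {}
    def put(self, key, data):
        self.objects[key] = data
    def get(self, key):
        return self.objects[key]

class EdgeAppliance:
    """Conceptual cache appliance: serves hot files from a local cache
    and writes through to the cloud, so all sites share one namespace."""
    def __init__(self, cloud, capacity=2):
        self.cloud = cloud
        self.capacity = capacity
        self.cache = {}

    def write(self, key, data):
        self.cloud.put(key, data)   # cloud copy is authoritative
        self._cache_put(key, data)  # keep a local copy for fast reads

    def read(self, key):
        if key not in self.cache:   # cache miss: fetch from the cloud
            self._cache_put(key, self.cloud.get(key))
        return self.cache[key]

    def _cache_put(self, key, data):
        if key not in self.cache and len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # evict oldest entry
        self.cache[key] = data

cloud = CloudStore()
boston, london = EdgeAppliance(cloud), EdgeAppliance(cloud)
boston.write("design.cad", b"v1")
print(london.read("design.cad"))  # the London site reads Boston's write
```

Real products must also handle locking, conflict resolution, and encryption, but the cache-plus-cloud split is what lets a small appliance front a store of effectively unlimited size.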

Files are not going away. In fact, the opposite is true. The race is on for a single storage platform that can reliably store all of the files required and deliver them quickly to any location or device. The data center remains relevant, but files are moving to where they can be most useful and least painful: a cloud-based core -- a giant engine for storage and data synchronization. The cloud expands NAS in the data center into a technology able to store billions of files and make them available worldwide with the same appliance-like simplicity that made NAS successful in the first place. This generation of NAS is about scale, performance, and the agility to make data available anywhere on earth. This is not your grandfather's NAS.


About the Author(s)

Andres Rodriguez

CEO, Nasuni

Andres Rodriguez is CEO of Nasuni, a supplier of enterprise storage using on-premises hardware and cloud services, accessed via an appliance. He previously co-founded Archivas, a company that developed an enterprise-class cloud storage system and was acquired by Hitachi Data Systems in 2006. After the acquisition, he served as Hitachi's CTO of Files Services. Prior to founding Archivas, Rodriguez was CTO of The New York Times. He received a Bachelor of Science in engineering and a Master's in physics from Boston University. He holds several patents for system designs.

