So today, when one of our customers sent me a link to this article and pointed out the solution that Arizona State University came up with, I was pretty surprised.
"Some users looking for new disaster recovery tools have encountered frustration. ASU is charged with storing half a petabyte of video footage from the Apollo space missions and replicating that data between two sites for disaster recovery. ASU ran into two limitations on its NetApp filers: a 16 TB file system limit and the fact that researchers move file directories around frequently, something that can wreak havoc on performance when hundreds of terabytes of data are attached.
"ASU tried NetApp's OnTap GX product to overcome the file system limit and provide a global namespace, but Scheib grew weary of the fact that OnTap GX doesn't support NetApp disaster recovery tools, such as SnapMirror. Scheib found other clustered NAS systems beyond his budget, especially when it meant losing investments in the NetApp storage already on the floor.
"In the end, Scheib designed his own architecture using open source ZFS layered over the NetApp filers, spanning the two locations. ZFS is a 128-bit file system with an exponentially larger namespace, and its performance allows for the quick movement of folders and directories. Because ZFS spans the two sites, data replication can be accomplished by dragging folders from one directory to another. The actual migration of data over trunked Ethernet pipes takes much longer, but users still have access during the process.
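The article doesn't spell out how the ZFS layer was actually built, but one plausible reading is a pool whose mirror halves are LUNs exported by the NetApp filers at each site, so every write lands in both locations. A minimal sketch, assuming iSCSI- or FC-attached LUNs with purely illustrative device names (the pool and dataset names are my invention, not ASU's):

```shell
# Hypothetical sketch: a ZFS pool mirrored across LUNs from two sites.
# c2* devices stand in for LUNs from the local filer, c3* for LUNs
# from the remote site; every block is written to both locations.
zpool create apollo \
  mirror /dev/dsk/c2t0d0 /dev/dsk/c3t0d0 \
  mirror /dev/dsk/c2t1d0 /dev/dsk/c3t1d0

# Reorganizing directory trees is then cheap: a ZFS dataset rename is
# a metadata operation, not a copy of the underlying terabytes.
zfs create apollo/footage
zfs rename apollo/footage apollo/missions
```

This would explain the "dragging folders" remark: the namespace change is instant, while the blocks themselves trickle across the trunked Ethernet links in the background.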
"The irony of pairing ZFS with NetApp, when ZFS creator Sun Microsystems Inc. and NetApp are suing each other over the file system, isn't lost on Scheib, but he isn't concerned about that. 'By the time that's settled, I hope there will be more prepackaged alternatives to meet my particular needs.'"
This really surprises me, because NetApp considers ASU one of its quotable reference customers.
"Cost savings and reliability prompted us to look to NetApp to provide a scalable storage infrastructure for the IDEAL project," said Samuel DiGangi, assistant vice provost of Information Technology at Arizona State University. "ASU continually strives to give back to the surrounding communities, and IDEAL is the next step of that initiative, encouraging lifelong learning and eliminating educational barriers."
"Supporting the nomination for the award was Network Appliance, Inc., a technology vendor for the project. NetApp extended its congratulations to ASU in a news release the day the award was announced. Elisa Steel, vice president of Worldwide Integrated Marketing at Network Appliance, said, 'We are thrilled that ASU's innovative approach to creating the IDEAL project has garnered them recognition.' The company's file storage servers are one of several components of the IDEAL system, which includes equipment from IBM, Sun, Cisco, and F5 Networks."
Perhaps NetApp started getting too expensive to maintain in 2007:
"Google has economies of scale that we don't have," Page said.
"The university has already been able to transition two of its four full-time engineers who had managed the 4 terabyte (TB) NetApp storage system to other functions, according to Page. As soon as the migration is complete, the other two will be reassigned as well. Once the NetApp filer is also reassigned, Page said, the switch to Gmail will save the university $350,000 per year in storage, maintenance and personnel costs."
As architectures grow more complicated, storage managers are going to be under a lot of pressure to reduce costs while maintaining reliability. Will NetApp be able to compete when it is being squeezed on multiple sides, by technologies like ZFS and by services like Gmail? We live in interesting times for technology companies and storage managers alike.