Thursday, July 30, 2009

What is next?

Over the last couple of months, as I watched the bidding war between EMC and NetApp for Data Domain unfold, it seemed to me that many folks have forgotten what got NetApp to where it is today. Dave Hitz and his team came up with a great idea for a Keep It Simple file storage system. They executed their plan well and brought to market a niche solution that grew into a $3 billion a year company. When I met Dave in 1998, NetApp was selling the FAS630s and about to come out with the F700 series filers. The FAS630s suffered from a power-supply problem, and many of them were retired after a few years, but many F740s and F760s are still running reliably. The DEC Alpha processors were awesome, and those early versions of OnTap were slim and efficient.

How many other servers, routers, and network devices do you know of that are still reliably serving content to customers ten years after they were delivered? Dave Hitz's team developed an incredibly reliable device, and the market embraced it.

But NetApp diversified as it grew and now looks toward acquisitions rather than internal product development. As this article highlights:

"In the process of losing its bidding battle with EMC for de-duplication market leader Data Domain, Network Appliance (NetApp) has exposed its weakness in the de-duplication (de-dupe) market sector. It had previously developed its own technology (A-SIS) but evidently accepted that Data Domain provided a better bet. "

If I were in charge of NetApp, I would bring Dave back into the lab and let him and his team create a new platform that will deliver data reliably well into the next decade. Keep It Simple, make it reliable, and customers will return to NetApp again and again.

If Dave can create a category killer like the F700 series again, I think the future will be bright for NetApp.

Tuesday, July 28, 2009

Is Zerowait a Solution Provider?

Of course we are, but we don't fit the standard model. Zerowait provides creative solutions to customers' storage requirements every day. For example, our Legacy NetApp service contracts give customers who are redeploying superseded NetApp equipment into a secondary or tertiary role the support they need at an affordable price. For other customers, we provide the confidence that they will have the parts and technical service required to maintain high availability on older equipment. While I was in Texas last week I spoke to a customer who is still using their R150s for a critical application; they have about 30 newer filers, but the R150s are stable and running reliably. They are happy, and so are their customers. Their Solution Provider is Zerowait.

There are many alternatives to purchasing new equipment available to the savvy network and storage administrator. Zerowait can provide storage additions, upgrades and support services to customers who are looking for creative solutions at an affordable price.

Our business continues to grow in this tough economic environment, which means that more customers are embracing solutions provided by Zerowait.

Friday, July 24, 2009

Another week of travel

This week I visited customers in the Southeast. It is always great to stop by and see how our customers are coping with the economic environment. Most of the customers I saw are expanding their use of Zerowait's services and asking us to look at taking over additional pieces of their data infrastructure.

While I was in the office of one customer this week he asked if I had time to meet with a friend of his who needs NetApp support and equipment. He called his friend up while I was in the office and made an appointment for me to visit his friend's company the next day.

Providing high availability service and support takes a dedicated team of professionals. And while I am on the road visiting with customers I hear stories about how our team has saved their bacon time and time again.

Thursday, July 16, 2009

Vendor TCO estimates might not be realistic

I was not surprised to read this article on Byte and Switch today. I have long thought that performance and benchmark numbers in the storage and networking world were suspect.

Highlights of the article:
Driven by a general sense that benchmarking practices in the areas of file and storage systems are lacking, we conducted an extensive survey of the benchmarks that were published in relevant conference papers in recent years. We decided to evaluate the evaluators, if you will. Our May 2008 ACM Transactions on Storage article, entitled "A Nine Year Study of File System and Storage Benchmarking," surveyed 415 file system and storage benchmarks from 106 papers that were published in four highly-regarded conferences (SOSP, OSDI, USENIX, and FAST) between 1999 and 2007.

Our suspicions were confirmed. We found that most popular benchmarks are flawed, and many research papers used poor benchmarking practices and did not provide a clear indication of the system's true performance. We evaluated benchmarks qualitatively as well as quantitatively: we conducted a set of experiments to show how some widely used benchmarks can conceal or overemphasize overheads. Finally, we provided a set of guidelines that we hope will improve future performance evaluations. An updated version of the guidelines is available.

We believe that the current state of performance evaluations has much room for improvement. This belief is supported by the evidence presented in our survey. Computer Science is still a relatively young field, and the experimental evaluations need to move further in the direction of precise science. One part of the solution is that standards clearly need to be raised and defined. This will have to be done both by reviewers putting more emphasis on a system's evaluation, and by researchers conducting experiments. Another part of the solution is that this information needs to be better disseminated to all. We hope that this article, as well as our continuing work, will help researchers and others to understand the problems that exist with file and storage system benchmarking. The final aspect of the solution to this problem is creating standardized benchmarks, or benchmarking suites, based on open discussion among file system and storage researchers.

Our article focused on benchmark results that are published in venues such as conferences and journals. Another aspect is standardized industrial benchmarks. Here, how the benchmark is run or chosen, or how the results are presented is of little interest, as these are all standardized. An interesting question, though, is how effective these benchmarks are, and how the standards shape the products that are being sold today (for better or worse).

The goal of this project is to raise awareness of issues relating to proper benchmarking practices of file and storage systems. We hope that with greater awareness, standards will be raised, and more rigorous and scientific evaluations will be performed and published. Since this article was published in May 2008, we held a workshop on storage benchmarking at UCSC, and we presented a BoF session at the 2009 7th USENIX Conference on File and Storage Technologies (FAST). We have also set up a mailing list for information on future events, as well as discussions. More information can be found on our Website, http://fsbench.filesystems.org/.
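
The article's point about benchmarks concealing overheads is easy to demonstrate. Here is a minimal sketch of my own (not from the article): a naive write benchmark that, without an fsync, mostly times the operating system's page cache rather than the disk, and so can report throughput the hardware cannot actually sustain.

```python
import os
import time

def write_benchmark(path, size_mb=256, sync=False):
    """Write size_mb of 1 MB buffers and return apparent throughput in MB/s."""
    buf = b"x" * (1024 * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(buf)
        if sync:
            f.flush()
            os.fsync(f.fileno())  # force the data to stable storage
    elapsed = time.time() - start
    os.remove(path)
    return size_mb / elapsed

# The unsynced number is usually far higher than the disk can sustain,
# because most of the data is still sitting in the page cache.
print("no fsync: %6.0f MB/s" % write_benchmark("bench.tmp"))
print("fsync:    %6.0f MB/s" % write_benchmark("bench.tmp", sync=True))
```

A vendor quoting the first number is not lying, exactly, but they are not measuring the storage either.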

Monday, July 06, 2009

Green Shoots or Dead Roots?

Over the holiday weekend I got together with a bunch of friends who own businesses or are executives at companies. We had a chance to talk about what they are seeing in their businesses, and there was not a lot of optimism about economic growth in the USA. Many of their companies have stabilized their businesses, but they are not expecting domestic growth this year. The consensus was that a lack of capital expenditure budgets is forcing companies to spend more on maintenance instead of upgrades and replacements, but even operating budgets have been curtailed.

My informal research matches a Reuters report I saw today. Some points from the report are below.

* The services sector represents about 80 percent of U.S. economic activity, including businesses such as banks, airlines, hotels and restaurants.

* Both manufacturing and service sector reports show signs that the 18-month-old U.S. recession, the most protracted in decades, may soon end.

* The Conference Board, a private research organization, said its Employment Trends Index slipped to 88.4 from a downwardly revised 89.1 in May. It was originally reported at 89.9.

* "Compared with the beginning of the year, the decline in the Employment Trends Index has significantly moderated," said Gad Levanon, senior economist at the Conference Board. "We therefore expect job growth to resume around the end of the year. "However," he added, "over the last month, leading indicators of employment were mostly disappointing, suggesting the Employment Trends Index is still seeking a bottom."


Green Shoots or Dead Roots? I think it depends on what sector of the economy and what business you are in.

Thursday, July 02, 2009

Analysts project storage growth

According to an article by Chris Mellor:

The Enterprise Strategy Group is forecasting a sixfold growth in file archive capacity, from a little over 10,000PB in 2008 to 62,000PB in 2012.
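
Back-of-the-envelope (my arithmetic, not ESG's), that works out to roughly 58 percent compound annual growth:

```python
# 10,000 PB in 2008 growing to 62,000 PB in 2012 (four years of growth)
start_pb, end_pb, years = 10000, 62000, 4
cagr = (end_pb / start_pb) ** (1.0 / years) - 1
print("implied growth: %.0f%% per year" % (cagr * 100))  # about 58%
```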

That forecast seems to be one of the few bright spots I have seen in the last few months when it comes to economic growth. I see a few interesting effects that this growth will have on ever-tightening IT budgets.

1) If budgets remain tight, customers will have to seek cost savings on legacy equipment in order to purchase new hardware to handle the growth. Many OEMs raise support prices on legacy equipment as an incentive for current customers to purchase new equipment. Since budgets are tight, customers will be seeking more third-party support to reduce their operating expenses on legacy equipment. That should be good for Zerowait.

2) Customers will be looking for better ways to manage their RAID arrays to squeeze more efficiency out of current hardware. Whether they choose compression techniques or de-duplication technologies (there is a toy sketch of the de-dupe idea after this list), Storage Resource Management is still going to be a headache.

3) My recent overseas trip clearly demonstrated that customers around the world are seeking ways to squeeze more out of their storage budgets. Large international companies and smaller regional enterprises alike are looking at which state or country offers the lowest-cost data centers.
Clearly, storing and serving more data is going to take more power, so there are advantages to storing data in locations that offer cheap, reliable, and abundant electrical power and connectivity.
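
For what it's worth, the core idea behind block-level de-dupe is simple, even if production implementations like A-SIS or Data Domain's are far more sophisticated. A toy sketch of the general idea, assuming fixed-size blocks and SHA-256 fingerprints:

```python
import hashlib
import os

def dedupe_ratio(data, block_size=4096):
    """Fraction of blocks that are unique; lower means better dedupe savings."""
    seen = set()
    total = 0
    for off in range(0, len(data), block_size):
        # Store a fingerprint of each block; duplicate blocks hash identically.
        seen.add(hashlib.sha256(data[off:off + block_size]).digest())
        total += 1
    return len(seen) / total if total else 1.0

repetitive = (b"A" * 4096) * 100             # the same 4 KB block, 100 times
print(dedupe_ratio(repetitive))              # 0.01: one unique block in a hundred
print(dedupe_ratio(os.urandom(4096 * 100)))  # ~1.0: random data won't dedupe
```

Real systems add variable-size chunking, collision handling, and metadata management, which is where the engineering (and the headaches) live.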

Most of the companies we deal with have three tiers of data: primary, secondary, and tertiary data sets. Primary data needs to be 'connectivity near' the users, but as data becomes less critical it can be 'connectivity farther' away.

In the future we may see more data centers in low cost locations being filled with high availability legacy equipment that is cheaper to maintain than new equipment for the secondary and tertiary data sets.

Economics will force folks to ask "why are we serving data from the most expensive locations?"

If headlines like the one below continue, companies are going to have to cut costs to survive:

Unemployment flirts with an ugly truth--the possibility of hitting a new post-Depression high.