Wednesday, December 23, 2009

The Long Slog Forward - A Dialog Begins

Jon Toigo and I have been chatting about tactical and strategic storage decision making for many years. He recently posted a thread from one of our conversations on his blog, and we decided to cross-post it here.


The Long Slog Forward - A Dialog Begins

My previous post about the long slog ahead for IT business prompted a communique from my friend and fellow blogger, Mike Linett, over at Zerowait. We have been chatting back and forth via email over the past few days and I thought I would post the emails here so that others can chime in if so inclined.

ZEROWAIT:

There is a fundamental shift occurring in the enterprise storage and networking business. Enterprise network and storage consumers have encountered an uncertainty tax driven by unpredictable government regulatory and tax policies, as well as shifting vendor models for hardware supersession and maintenance costs. Taken together, these make it impossible to accurately forecast operational and support costs over a 24- to 36-month time frame. Because they cannot forecast future costs, enterprise customers are reducing the time horizons of their hardware upgrade and support ROI calculations.

I feel that this “uncertainty tax” is pushing enterprise network and storage consumers toward third-party support vendors. Corporate finance is asking which costs can be reduced for FY 2010. I suspect that in FY 2010 organizations will try to find a rational predictive model for their network operational costs. The uncertain financial and regulatory environment is leading corporate financial types to keep a stronger cash position, so CAPEX spending is rationally being curbed out of caution.
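To make the forecasting problem concrete, here is a toy sketch of why uncertain maintenance repricing wrecks a multi-year budget. All the figures (the $100k base fee and the 3% versus 25% escalation rates) are invented for illustration, not taken from any vendor's actual pricing.

```python
def support_cost(annual_base, years, escalation):
    """Total support cost over a horizon, with the annual fee
    compounding at a fixed escalation rate each year."""
    return sum(annual_base * (1 + escalation) ** y for y in range(years))

base = 100_000  # illustrative first-year OEM support fee

# With a predictable 3% escalation, a 3-year budget is easy to set.
low = support_cost(base, 3, 0.03)

# If the vendor might reprice maintenance by 25% a year, the same
# horizon carries a far wider band of possible outcomes.
high = support_cost(base, 3, 0.25)

print(f"3-year cost band: ${low:,.0f} to ${high:,.0f}")
```

The width of that band, not the base fee itself, is the "uncertainty tax": a finance team that cannot narrow it will rationally shorten its ROI horizon or hold cash instead.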

Corporations are acting like rational consumers. Just as individuals are saving more and shopping at Walmart and Costco to cut their weekly operating expenses and live within their paychecks, companies are now living within their cash flow statements, managing risk with the security of a stronger cash position, and shopping for competitive support pricing.

DRUNKENDATA:

I agree with your view that corporate evaluations are becoming more rational. But if that is the case, how is EMC showing $1.1B in equipment-sales revenue, per IDC? That is double its nearest competitor's number, for what are arguably inferior products in many cases. Even if you are an EMC fan, the long-tail cost of their gear comes in the form of a warranty and maintenance contract that rivals NetApp's in price.

Some might argue, as one financial analyst recently did with me, that the key to enterprise sales is really predictability. The big-league consumers want an enterprise storage rig from a vendor who will still be around in three years. The CAPEX spend is either a write-off, or they lease it, which is also a write-off. All things being equal, they are taking Gartner's cue that the most important criterion in vendor selection is “ability to fulfill.”

Everywhere else but Wall Street, I see what you are describing: close attention to economical purchasing. I have heard repeatedly that companies are dropping maintenance from the OEM and shopping it out to third parties where (1) the expertise is there (often because the aftermarket service company is staffed by techs who were laid off by the OEM), (2) maintenance can be performed remotely (so the company doesn’t need to increase its own staff headcount), and (3) the support service provider has been around for a few years and is likely to exist over the life of the maintenance agreement.

As for equipment itself, the other-than-Wall-Street enterprises seem more interested than ever before in Red Hat, Gluster, ZFS, and other Linux/Open Source wares to provide value-add services around storage. They are not shopping for integrated one-stop-shop rigs with lots of value-add features embedded on the array controller that they may or may not use but must pay for nonetheless. Plus, the value add stuff tends to break down and requires patching, in some cases, with the same frequency as Microsoft software.

A big issue in every shop I go to is the mismatch between existing storage infrastructure and server virtualization. The latter is a huge driver of server consolidation, to be sure, but hypervisors are also gumming up I/O performance and creating real opportunities for third-party I/O monitoring and measurement wares (like Virtual Instruments), for off-box I/O routing (Xsigo comes to mind), and for simply-connected “network storage” rigs – and in many cases Direct Attached Storage. Also, given the failure potential of virtual server-based hosting, I am seeing virtualization create a lot of mindshare for data protection software. Even the hardware-independent virtualization software firms seem to be garnering revenue from the decisions of CIOs to embrace VMware and Hyper-V.

What seems to be missing in all of this is a unified architectural model, complete with open management standards, that will enable cheaper storage to be integrated with virtual server environments in a predictable and cost effective way. Initially, I think that companies will use open source operating and file system features to wed storage to virtual servers. Going forward, however, I am a huge advocate of a Web Services management framework leveraging REST.
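To give a flavor of what such a REST-based management framework might look like, here is a minimal sketch of a provisioning call. Everything in it is hypothetical: the host, the `/api/v1/pools/.../volumes` endpoint, and the field names are invented for illustration, since no such standard existed at the time. The point of a Web Services approach is that, if the endpoints and payloads were standardized, any vendor's array could answer the same call.

```python
import json

def build_provision_request(host, pool, size_gb, export="nfs"):
    """Construct (but do not send) a hypothetical REST request asking a
    storage controller to carve a volume out of a pool. A standardized
    schema for requests like this is what would let cheap storage plug
    into virtual server environments predictably."""
    return {
        "method": "POST",
        "url": f"https://{host}/api/v1/pools/{pool}/volumes",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"size_gb": size_gb, "export": export}),
    }

req = build_provision_request("array01.example.com", "sata-pool", 500)
print(req["method"], req["url"])
```

The design choice worth noting is that REST models the array as plain addressable resources (pools, volumes) rather than vendor-specific verbs, which is exactly what keeps the management layer open.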

What are your thoughts? Is there an opportunity to turn a frown upside down and to get consumers thinking about the current situation as strategic, rather than a temporary response to a temporary economic downturn?

ZEROWAIT:

As a consequence of the financial panic of 2008 and 2009, there has been a tectonic shift in customers’ perceptions of proprietary features and their associated value. This shift will have deleterious effects on enterprise OEMs’ pricing models. For example, enterprise customers have recognized that open source tools such as Nagios and Cacti are excellent, reliable, and affordable for monitoring enterprise systems, and Apache is the choice of many enterprises for their web servers. With tightened budgets, enterprises have recognized the value proposition of open source when a reliable support organization, like Red Hat, stands behind it.

The new model for enterprise customers is value investing in network and storage equipment. Customers will buy as they need it, and they do not want to pay outrageous licensing fees based on marketing limitations. Software management costs should not trend up per terabyte but should trend down; this can be accomplished with open source, but not with proprietary hardware and software products. Proprietary vendors’ sales models are still based on unit sales of boxes and licenses, and this is a dying branch of the evolutionary storage tree. Conversely, Zerowait’s business is based on long-term service and support, which aligns perfectly with our customers’ long-term ROI needs.

The new reality of storage is that customers are not going to buy over-provisioned arrays or pay the 300% markups that enterprise OEMs charge for hard drives. Although the major storage OEMs and Gartner may not realize it, customers are changing their data models based on usage. Primary storage will probably stay on high-priced OEM solutions for years to come, but the other 80% of data will slowly migrate to open source solutions that have a reliable service and support organization behind them.

Proprietary vendors’ claims of interoperability often require a toxic mix of patches and software upgrades to continue to work together. Over time, these patches and band-aids have accumulated to the point that no two networks are running the same code and revisions; each one is unique. Interoperability was supposed to leverage the idea of interchangeable parts, but now very few components can be unplugged from one network and plugged right into another without some revision to firmware or software. Network engineering has turned back into an art more than a science as complexity has grown. Sometimes it seems that many of our network engineering customers have more in common with alchemists than with computer theorists.

If interchangeable parts made the industrial revolution possible, why has computer hardware become less interchangeable over time? Perhaps the proprietary hardware vendors have wanted to lock customers in to their high-priced solutions. The financial panic has broken that model, though, and the marketplace is quickly embracing open source software and POSIX because of the cost and restrictions of proprietary hardware. You can’t fight the marketplace; you must embrace its lessons.

DRUNKENDATA:

Mike, I follow you on the bulk of what you are suggesting. My concerns are perhaps further down in the weeds, or in the clouds, depending on your perception.

First, while you may be right that simpler technology based on open source may trump purchases of overbuilt Enterprise OEM rigs in the short term, both for reasons of acquisition cost and cost of ownership, I am worried about the way that companies are/aren’t considering the bigger drivers of cost.

Storage growth is predicated upon data growth. Data growth is out of control because data itself is so poorly managed. If our assessments are correct, and literally 70% of the data being written to disk is more appropriately stored in a cheap removable-media archive (or in the trash bin), then shouldn’t firms be looking at discovery and archive tools in a much bigger way?

When firms do deploy disk, it is critical that it be managed in accordance with some sort of framework. That way, you become proactive in fault resolution and develop a knowledge of trends that can help you plan your acquisitions and better utilize the investments you make. Without effective management, you again see the hardware cost curve accelerate. Yet, again, I do not see a huge push to get a hardware management paradigm in place in the companies I visit.

Finally, I don’t see companies adequately addressing the long-tail cost of storage acquisition – the cost of the redundancy required to safeguard data. The redundant infrastructure components – a second array, a tape library, and so on – required to ensure the availability of data stored on the new primary rig are simply not being purchased. That RAID 5 continues to exist at all is testimony to the fact that companies are not thinking through the ineffectiveness of that scheme when applied to large-capacity drives. That said, mirroring is considered by many to be a cost too dear in the current economy.
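The RAID 5 point can be made with simple arithmetic. A minimal sketch follows; the numbers are illustrative assumptions (a 10^-14 unrecoverable read error rate, which was a commonly quoted spec for consumer-class disks of the era, and an 8-drive set of 1 TB disks), not measurements from any particular array.

```python
import math

def rebuild_failure_probability(surviving_drives, drive_tb, ure_per_bit=1e-14):
    """Probability of hitting at least one unrecoverable read error (URE)
    while reading every surviving drive end to end during a RAID 5
    rebuild. Assumes independent errors at the quoted URE rate."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8  # total bits to read
    # P(at least one URE) = 1 - (1 - p)^n, computed stably in log space
    return 1.0 - math.exp(bits_read * math.log1p(-ure_per_bit))

# An 8-drive RAID 5 set of 1 TB disks: rebuilding after one failure
# means reading all 7 surviving drives in full.
p = rebuild_failure_probability(surviving_drives=7, drive_tb=1.0)
print(f"{p:.0%}")  # roughly a 43% chance the rebuild hits a URE
```

Under these assumptions, nearly half of all single-drive rebuilds encounter a second error, and the odds get worse as drive capacities grow, which is exactly why the scheme stops making sense for large drives.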

In short, even with the advent of cheaper storage and open source software, I find myself wondering whether this is ultimately strategic in the absence of effective hardware and data management…

ZEROWAIT:

You have pinpointed the problem. The proprietary vendors are driven to simplify management of their own hardware while adding complexity to the management of a mixed hardware environment. For hardware vendors, who are judged by VCs and public stock prices, units sold is a number that makes sense to investors looking for short-term return on investment. Customers, however, want both a quick ROI and a long, affordable service duty cycle out of capital equipment. Most enterprise storage consumers that we work with yearn to be freed from high-priced proprietary licensing schemes and single-vendor lock-in support programs. Organizations that embrace open source reduce the opportunities for proprietary hardware manufacturers to lock them in. Once freed from vendor lock-in schemes, consumers can and will judge hardware and software by their reliability as well as their acquisition and support costs. Open source hardware and software solutions will commoditize pricing and force vendors to compete on the true value and price of their service and support. Once lock-in is gone, a truly competitive market can flourish, which inevitably will reduce prices and improve products and services.

Since there is not an open market for storage decision making, enterprise consumers have to weigh their changeover risk against the cost of remaining locked in to a particular vendor. Lock-in and the cost of change are well covered by the economist Hal Varian. Our customers recognize that there can be competition within an OEM’s market for legacy support, and the OEMs will often reduce their prices once we have a service contract in place within a customer’s data center.

In summary, the financial panic of 2008 has opened the door to a tectonic shift in the way organizations look at storage. The over-hyped concept of cloud storage is being considered by many because it is an easy first baby step toward a new storage pragmatism. But everyone who has looked into the cloud model sees dangerous cells in the towering cumulonimbus of cloud storage; issues of security, accessibility, and speed are the first concerns people voice. I predict that 2010 will be the beginning of the end of the reign of the proprietary hardware and software vendor. 2010 should also signal the beginning of storage pragmatism, as organizations rationalize their storage costs with open source hardware and storage management solutions.

(MORE TO COME)

Avatars and Symmetry

One OEM has been a big believer in the use of avatars, or personas, for a few years now, as this story from 2007 highlights. The second link shows how the movie industry has embraced the idea, and may have created a product from it.

http://www.thehindubusinessline.com/ew/2007/06/25/stories/2007062550110800.htm
Network Appliance (NetApp) is a $2.8-billion storage networking firm up against the likes of EMC Corporation and Hewlett-Packard. Maintaining the culture of innovation amongst its engineers so products are creative and useful is a prime focus to reach up the market share ladder.
One way of developing better products is by tuning into clients’ data management woes, understanding requirements, and shaping products to reflect needs. However, the knowledge that after-sales and marketing folks get from the field does not always permeate to the engineers/developers.
And that is why the company created Mike Raja and Joe and four other personas. Joe could be a DBA (database administrator)/Chief Information Officer/any other user of NetApp’s data management products.
“This is a new approach that we are trying. Innovation is important and these personas for users of our products help continually innovate,” Louis H. Selincourt, Vice-President, SMAI & Offshore Operations, told eWorld over lunch at the company’s headquarters in Sunnyvale, California, recently.
The idea came from the book, “The Inmates Are Running the Asylum,” penned by Alan Cooper, whose company makes software more user-friendly.
“Personas create a consistent ID of our users, so we can discuss them across the company while brainstorming,” says Selincourt.

http://www.theregister.co.uk/2009/12/21/avatar_storage_effects/
Weta used NetApp kit to store the incoming data, then used a huge number of workstations and bladed servers - with 30,000 cores in total - to work on it. The NetApp filers were fitted with up to five 160GB DRAM cache accelerator cards in their controllers, the PAM (Performance Acceleration Modules) caches, to speed file access by the Weta creative people and the servers.

There is a certain balance to this when we recognize that the same OEM that has used avatars for years had its equipment used in the production of a movie named Avatar. At Zerowait, we still view our customers as individuals with specific needs. Probably there won't ever be a movie based on our customer service and support models.

Monday, December 14, 2009

A Milestone for Spectra Logic

I have known Nathan Thompson of Spectra Logic - www.spectralogic.com - for many years; we share a common interest in aviation. Nathan's company is celebrating its thirtieth anniversary. I had dinner with the Spectra folks recently at the Supercomputing conference in Portland.

Here is a snippet from an article on the story in his local paper:

Data-storage company started in a dorm room 30 years ago
By Alicia Wallace
Daily Camera
Posted: 12/14/2009 01:00:00 AM MST

As the founder and chief executive of a high-tech company, Nathan Thompson is used to looking forward, not back.

But he's been doing a lot of the latter recently as his Spectra Logic Corp. — a Boulder-based data-storage company whose roots trace back to Thompson's dorm room at the University of Colorado — marks 30 years in business.

In 1979, Thompson was a sophomore engineering major at CU when he withdrew the $500 that was in his bank account to start a memory-board manufacturer called Western Automation. In 1987, Western Automation bought the assets of a company called Spectra Logic, and the company reincorporated as Spectra Logic in 1993 because it was a better-known name in data storage, Thompson said.

The 30-year milestone is significant, Thompson and others say, considering the dynamic nature of the storage industry.

"It is kind of amazing . . . the degree to which companies have started and died — (companies) that were competitors or the hot company around here," he said. "We've just sort of steadily marched along."

Congratulations Nathan and all of Spectra Logic.