Thursday, June 29, 2006

StorageWiki - A very cool resource

I like the idea of a Free NAS solution, especially for the SMB market. Buy some cheap drives, put free software on a cheap server, and invest the money for corporate growth! I know that logic makes sense for my dentist and my buddies with small businesses around the country. They love these projects. Storage commoditization is coming. Are you ready? Here are some interesting projects to keep an eye on.

First is a project that might help demonstrate that not all storage devices have to be proprietary. The Openfiler efforts are quite fascinating, and I wonder whether they had any reason to name it the way they did.

If you are a systems administrator looking for a way to take control of your storage resources without having to pull off the modern equivalent of The Great Train Robbery in order to afford it, Openfiler is the answer to your prayers. Openfiler is a serious tool meant for professional systems administrators with a keen desire for the ability to manage network storage in an efficient and cost-effective manner.

Second is the project for a Free NAS solution based on FreeBSD. I seem to remember hearing that Network Appliance Corporation's Dave Hitz started with a FreeBSD kernel when he and his team began working on their software after leaving Auspex.

Everything old is new again!

By the way, Dave Hitz wrote to me a while ago, but I guess I mis-moderated his comment. Here are his remarks, posted by Dave Hitz to Zerowait High Availability at 5/21/2006 08:16:55 PM:

People who are interested in verifying these TCO (Total Cost of Ownership) claims might be interested in looking at the detailed report from Mercer, the management consulting company that did the study for us.

They identified three categories of cost:

(1) Product Acquisition & Ongoing Vendor Costs (hardware, software, implementation, training, service, support)

(2) Internal Operational Costs (labor, facilities, environmental)

(3) Quantifiable Business Cost of Downtime

So it looks like they were trying to be pretty thorough when it comes to capturing all the costs you have.

In their summary of why NetApp is lower, they point to several factors. For the same size DB, people tend to use less storage with NetApp, because of features like snapshots and cloning. NetApp tends to take fewer people to manage. And snapshots let you recover from errors faster.

Tuesday, June 27, 2006

The coming Data Explosion - will there be a need for short-term caching?

I have been reading a lot more about the coming of RFID. It seems everyone agrees that this will cause a data explosion in manufacturing, distribution and supply chain applications. But no one seems to be addressing the requirements of data scanning and accumulation within the networks. For low-volume, high-value merchandise the data might be small; however, for high-volume, middle-value products, I can see where some bottlenecks might be created within our current network & data infrastructures.

Will companies build store-and-forward caching networks to capture information before sending production information batches to the accounting & inventory systems, or will they try to build real-time inventory systems to capture all of the RFID information? I imagine at first companies will try to cobble together solutions using existing technology, but most of the information used to track an RFID tag's progress through a production or transport system will be transitory. Will the Government step in with impossible-to-enforce HIPAA and Sarbanes-Oxley-like regulations about RFID data tracking and data retention?
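To make the store-and-forward idea concrete, here is a minimal sketch in Python. The batch size and the forward() callback are my own illustrative assumptions, not any real RFID middleware API; the point is simply that transitory scan data can be buffered locally and shipped upstream in batches, rather than one network round trip per tag read.

```python
from collections import deque

class StoreAndForwardCache:
    """Buffer RFID tag reads locally and forward them upstream in batches."""

    def __init__(self, forward, batch_size=100):
        self.forward = forward        # callable that ships one batch upstream
        self.batch_size = batch_size
        self.buffer = deque()

    def record(self, tag_id, reader_id, timestamp):
        """Capture one scan locally; forward when the batch fills."""
        self.buffer.append((tag_id, reader_id, timestamp))
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Ship everything buffered so far as one batch."""
        if self.buffer:
            batch = list(self.buffer)
            self.buffer.clear()
            self.forward(batch)

# Toy usage: seven scans with a batch size of three produce two full
# batches plus a final partial batch on the closing flush.
batches = []
cache = StoreAndForwardCache(batches.append, batch_size=3)
for i in range(7):
    cache.record(f"TAG-{i}", "dock-door-1", i)
cache.flush()                        # drain the remainder
print([len(b) for b in batches])     # → [3, 3, 1]
```

A real accounting or inventory back end would presumably also want time-based flushing and retry on failed forwards, but even this skeleton shows why the captured data can stay transitory at the edge.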

Technology has come up with a short-term answer in new perpendicular recording drive technology. RFID may cause a whole new wave of technological evolution. For one thing, it should certainly change the bar code readers in tape libraries. RFID will also probably cause an evolution in the addressing & classification of storage. Interesting stuff!

Thursday, June 22, 2006

Zerowait High Availability
Over the last 17 years in business, our company has earned a reputation for High Availability in networking and storage design. There have been many technology changes in the marketplace and in the equipment we use to build and maintain our customers' High Availability infrastructure, but there has always been one constant: Zerowait's High Availability commitment to its customers. We don't just talk about it; we provide it 24/7, every day of the year, and have for many years.

In the mid-1990s, Zerowait's customers needed to deploy vast web server networks on the Internet, and so Zerowait began to install Radware's load balancing products. After we installed these products we found that our customers could not quickly deploy their storage, so we went to NetApp's offices on San Tomas and spoke to Dave Hitz about becoming a dealer for them in our High Availability niche. The mix of load balancing and NetApp storage was a lucrative one, and Zerowait was very successful in installing a lot of equipment for both vendors, as can be seen in this link.

The Radware equipment was unable to keep up with our customers' high traffic loads, and the folks from Arrowpoint came to us to help them introduce their ASIC-based switching product to the marketplace. We became an OEM and private-labeled the products; they told us that we represented 10% of all their sales prior to the Cisco purchase. After Cisco purchased Arrowpoint, we had several customers who were left without hardware support for their load balancers, and so we developed a High Availability service and support program for these customers.

During that same period of time NetApp canceled our reseller agreement and we started our third party parts, service & support for our NetApp customer base which has become the largest part of our business. Now that NetApp has canceled support for the NetApp F760 line this business is growing quickly. Over the next few weeks we plan to open our European service operation, which will help our European customers maintain their legacy NetApp equipment more affordably. Exciting things are happening at Zerowait!

Tuesday, June 20, 2006

Pittsburgh, Spinnaker and roads not taken.

I have to go to Pittsburgh today to visit some customers. I used to go to Pittsburgh quite often before NetApp purchased Spinnaker Networks. A few years ago, Zerowait helped Spinnaker with its early ideas about marketing its products to certain niche markets. Our engineers traveled to Pittsburgh to meet with Mike Kazar's team, and the Spinnaker folks visited our offices; they were very interested in gaining entry into our larger accounts to talk to our customers about their products.

This was all occurring near the time that NetApp was taking all of Zerowait's largest and best Filer accounts and making them into direct accounts for NetApp's sales department. I recall that our NetApp reseller manager told us that it was too expensive to have a reseller on the bigger accounts; once we had established the accounts, NetApp figured they could make more money without us. In hindsight, it was an interesting twist of events when Spinnaker was sold to NetApp, because we really thought Spinnaker was going to be a viable alternative to NetApp for the Zerowait customers that NetApp had taken away from us. But it did not turn out that way.

In another interesting turn of events, the customers that NetApp took direct had their service and support prices raised almost immediately by NetApp. These were among the customers who asked us to create an affordable third party support organization to support their filers once our affiliation with NetApp had ended. They became our first independent service and support customers for Network Appliance products, and many of them are still our customers.

The Spinnaker technology was way ahead of its time, in a similar way to the Pick Operating System back in the 1980s. Being absorbed by NetApp was probably good for the VCs involved in Spinnaker, but it left a void in the marketplace which is still unfilled. Our engineers felt certain that the AFS-based Spinnaker was a superior platform for 'grid storage' when compared to the BSD-based Ontap platform. Whether NetApp was able to shoehorn the AFS capabilities into Ontap will be seen as the market absorbs the technology.
Can the Spinnaker Technology Turbocharge Ontap? I doubt that the new NetApp OS will provide all of the possibilities that the Spinnaker technology would have provided. But time, experience and marketing dollars will ultimately answer these questions.

Zerowait was forced to take the road toward independent legacy service, support and maintenance of NetApp equipment. At the time there was not the option of traveling both the road of third party support and new NetApp sales. There was a divergence in the paths, and Zerowait took the road less traveled, and it has made all the difference.

With homage to Robert Frost....

Monday, June 19, 2006

Myopia and SMB Storage

I went to the local eye doctor last week to get my new eyeglasses. The doctor knows that Zerowait has something to do with computers and storage, and he started peppering me with questions. First he wanted to know if I could help him with his backup, so I asked what he was doing now for networking and storage infrastructure. I explained that we specialize in high-end storage and high availability networks, but I would hear him out. I learned that he has three networks in his office because he is afraid of hackers; two of the networks can't get to the Internet, the three networks can't communicate with each other, and all are backed up to tapes. I asked if he ever tried to test his data with a restore from tape. He said that he had not, but he had five copies of his tapes. I asked how he could tell if he was backing up anything useful since he had not tested a restore. He paused and said that he did not know how reliable his backups were.

As we proceeded with my eyeglass examination, other questions surfaced. I tried to explain why he might want to install a firewall and centralize his storage onto a single system, which would also be simpler to back up or mirror. He wanted to know what it would cost. I explained that the cost would be in the thousands for a firewall and centralized storage and backup to meet his data storage requirements. He looked at me and assured me that he could not spend more than $1,000.00 for his backup system, and that is why he does not want any firewalls or a centralized storage solution.

The doctor is a nice guy, and he has at least as many employees as we do at Zerowait. I imagine his gross sales are similar to mine on a yearly basis, although I don't know. But he has the problem that I see so many times when I discuss computer storage with my friends who are the fabled 'SMB customers': they don't see the value of secure backups and centralized storage. I have a close friend who has a land planning and engineering company with 30 people. He does not understand the value of centralized storage either, but we have at least finally convinced him that he should have a firewall.

After over 20 years in the business of High Availability networking and storage, I have seen a very small percentage of small and medium businesses that see the long-term ROI in investing in centralized storage, backups and coordinated networks. Perhaps my definition of small and medium sized business is wrong. But most of my friends work at companies with fewer than 50 people; the rarity among our friends is the folks working for companies with more than 500 people.

When I read articles about the huge SMB marketplace for High Availability storage, I am left wondering where it is.

Wednesday, June 14, 2006

Large Margin Technology

Robin Harris uses the term Enhanced Margin Technology to describe the markups that storage vendors make on reselling their commodity-based hardware. But perhaps we should call it Venture Capital Margin Technology (VCMT), because many of these companies were founded with venture capital and need a healthy return on investment to justify the initial investors' interest. Mining profits when you are the first viable enterprise in a newly identified market and technology niche depends on attracting attention and gaining quick market acceptance; it is not easy, but it can be quite profitable. Many of these companies find it difficult or even impossible to maintain a healthy margin when their technology commoditizes. I don't want to say it matures, since storage technologies are evolving so quickly.

Looking over the storage landscape today, I see some companies that identified storage market niches and are now abandoning those niches, trying to go up market or down market. This might be a good strategy for corporate egos, similar to VW's attempt to market its $100,000.00 Phaeton. But when customers don't perceive the value add, there is little chance of gaining new customers and a big chance of losing old customer relationships.

The downward spiral starts with a trickle as customers begin to break vendor lock-in by seeking other sources to maintain or replace their technology, and the early adopters and influencers in the marketplace can be quite effective in eroding customer allegiance to proprietary technologies.

Robin Harris understands finance much better than I do. In a recent conversation, he said to watch what the insiders are doing with their stock purchases and sales. According to Robin, the big shots at the large storage vendors are sellers of stock currently. What does this mean for customers who are seeking a five year duty cycle out of their storage arrays? Perhaps, it is time to be much more aggressive in their negotiations with their storage vendors for price and service concessions?

Tuesday, June 13, 2006

Pearls of wisdom from Storage Mojo
Last night I got a chance to speak to Robin Harris, and I asked permission to cross-post his blog entries on mine. We had a great discussion, and I encourage folks to watch his blog entries, because he is looking out for the storage user.

Below are some excerpts from his blog.

NetApp’s Battle Shots

June 12th, 2006 by Robin Harris in Enterprise, NAS, IP, iSCSI

NetApp’s announcement of multi-petabyte namespace support in its Data Ontap GX 7G storage operating system - my, doesn’t that just roll off the tongue! - should allow it to corner several shrinking HPC markets. Industrial Light & Magic used the Spinnaker version to store 6 petabytes of Star Wars v3 special effects, including 300 TB of the opening battle shots. If only they’d stopped there.

Raquel Welch in One Million BC vs Net App In One Million IOPS
If the block storage people are wondering about NetApp’s intentions, wonder no more. They are gunning for the high-end data center storage market now dominated by EMC, IBM and Hitachi. One clue: the 1,000,000 I/O per second SPEC mark. True, it was mostly a stunt that flaunted the ripped abs of their 768 GB cache, and performance started degrading fast around 900k, but so what? This is about bragging rights, not real life.

As Greg Schulz points out NAS is an increasingly popular data center option for ease of management and scalability. Block storage isn’t going away anytime soon, but as the divergent stock prices of NTAP and EMC proclaim, Wall St. is more interested in your future growth rate than your current market share.

NetApp Goes Hollywood?
As Silicon Graphics can attest, Hollywood has zero brand loyalty. So the ILM endorsement means only that they haven’t found anything cheaper that will do the job. That will change when the cluster version of ZFS rolls out. Nor is geophysical modeling for finding oil a growth industry. HPC is traditionally a graveyard for companies that focus on it: too few customers who are too demanding and unreliable. Ask Cray Research.

Yet the six petabyte array is a significant technical achievement. I hope their marketing does a good job of selling the benefits of large namespaces and storage pools, because right now people are still caught up in the whole disks and volumes mindset. NetApp can legitimize the storage pool concept in data centers, paving the way for software solutions like ZFS to grow.

NetApp’s web site notes that Yahoo Mail uses NetApp equipment. They also claim in one of their 10-K reports:

NetApp's success to date has been in delivering cost-effective enterprise storage solutions that reduce the complexity associated with managing conventional storage systems. Our goal is to deliver exceptional value to our customers by providing products and services that set the standard for simplicity and ease of operation.

Uh-huh. Like those 520 byte sector disk drives with the Advanced Margin Enhancement Technology?

Second, the problem of read failures. As this note in NetApp’s Dave’s Blog explains, complete disk failures are not the only issue. The other is when the drive is unable to read a chunk of data. The drive is working, but for some reason that chunk on the drive is unreadable (& yes, drives automatically try and try again). It may be an unimportant or even vacant chunk, but then again, it may not be. According to Dave’s calculations, if you have a four 400GB drive RAID 5 group, there is about a 10% chance that you will lose a chunk of data as the data is recovered onto the replacement drive. As Dave notes, even a 1% chance seems high.

Where Dave and I part company is in our response to this problem. Dave suggests insisting on something called RAID 6, which maintains TWO copies of the recovery data. Compared to our RAID 5 example above, this means that instead of having 2000GB of usable capacity, you would have 1600GB. And now RAID 1 would only have 25% less capacity. I say drop RAID 5 and 6 and go to RAID 1+0, which is both faster and more reliable.
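Dave's 10% figure and the capacity trade-offs above are easy to check with a little arithmetic. The sketch below assumes a typical unrecoverable read error (URE) rate for drives of that era of one error per 10^14 bits read, applied to Dave's four-drive group of 400 GB disks; the exact capacities in the quoted example may reflect a different group size.

```python
URE_RATE = 1e-14     # assumed: unrecoverable read errors per bit read
DRIVE_GB = 400
GROUP_SIZE = 4

def rebuild_failure_probability(drives, drive_gb, ure_rate=URE_RATE):
    """Chance of hitting at least one URE while reading every surviving
    drive in full to rebuild a failed member of a RAID 5 group."""
    bits_read = (drives - 1) * drive_gb * 1e9 * 8
    return 1 - (1 - ure_rate) ** bits_read

p = rebuild_failure_probability(GROUP_SIZE, DRIVE_GB)
print(f"RAID 5 rebuild URE risk: {p:.1%}")   # on the order of 10%

# Usable capacity for the same four 400 GB drives under each scheme:
raw = GROUP_SIZE * DRIVE_GB
print("RAID 5 usable:  ", raw - DRIVE_GB, "GB")      # one drive of parity
print("RAID 6 usable:  ", raw - 2 * DRIVE_GB, "GB")  # two drives of parity
print("RAID 1+0 usable:", raw // 2, "GB")            # mirrored pairs
```

With four drives, RAID 6 and RAID 1+0 happen to give the same usable capacity, which is part of why Robin argues for the mirrored option: at equal cost you get faster rebuilds and better read performance. At larger group sizes RAID 6 pulls ahead on capacity, and the trade-off becomes a real decision.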

Monday, June 12, 2006

When NetApp purchased Spinnaker I was startled, as I could not understand the reason behind the purchase. Spinnaker, like Panasas, could have been a viable company, but both were late to the marketplace and could not get the market acceptance that the Internet boom provided EMC and NetApp. But Spinnaker had identified a few niche markets, as has Panasas. Additionally, the Spinnaker technology was based on the Andrew File System & the NetApp system is based on BSD, so they really could not be easily integrated. In my humble opinion, Panasas would have made more sense for NetApp to purchase from a technology point of view. So I was very interested in reading this article by Chris Mellor over the weekend.

Gigabit Ethernet clustering just doesn't give you the performance and future headroom that Infiniband does.

Anderson says: "NetApp uses Infiniband to cluster two nodes. When NetApp bought Spinnaker it then made a mistake. It tried to add features out of the Spinnaker product into ONTAP. But clustering can't be done that way; it has to be in the DNA of the system. NetApp's approach didn't work. Two years ago NetApp reversed direction. Dave Hitz (NetApp CEO) announced that Data ONTAP GX is a Spinnaker foundation with NetApp features added to it."

Anderson added this comment: "(Data ONTAP GX) is namespace organisation. It's not clustering. It's RAID behind the veil and can still take eight hours to rebuild a disk. There'll be performance problems downstream. It's a bandaid. It's a total kluge."

With Isilon, file data and parity data are striped across up to nine nodes. A failed disk can be rebuilt in 30 minutes to an hour. In effect, Isilon's striping technology renders RAID redundant.

Anderson says suppliers like Acopia 'do it in the switch layer. It's not rich, it's lightweight.' Again there will be performance problems downstream.

A virtualised pool of NAS resource requires the NAS nodes to be clustered for smooth performance scaling. It also requires N + 2 protection so that the system can recover from two failed disks and not just one. (NetApp's RAID DP provides protection against two disk failures.)

Friday, June 09, 2006

The Disk Cleaning & Sanitization issue:

Recently a growing number of customers have been asking us to help them SANITIZE their disks after they retire their storage equipment. We do this with our proprietary solution at a daily on-site rate, because we can never tell going in how many hours it will take to cleanse all the disks in the FC arrays. I hope to have a forum on this during the next Disaster Recovery conference, because I am not the only one who considers the possibility of private data getting into the wrong hands a disaster!

Wikipedia actually has a very good summary of the problem, and this piece is really interesting to many of the customers we speak with:

The bad track problem

A compromise of sensitive data may occur if media is released when an addressable segment of a storage device (such as unusable or "bad" tracks in a disk drive or inter-record gaps in tapes) is not receptive to an overwrite. As an example, a disk platter may develop unusable tracks or sectors; however, sensitive data may have been previously recorded in these areas. It may be difficult to overwrite these unusable tracks. Before sensitive information is written to a disk, all unusable tracks, sectors, or blocks should be identified (mapped). During the life cycle of a disk, additional unusable areas may be identified. If this occurs and these tracks cannot be overwritten, then sensitive information may remain on these tracks. In this case, overwriting is not an acceptable purging method and the media should be degaussed or destroyed.
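The verification step implied by that passage can be sketched in a few lines. Everything here is hypothetical for illustration: write_block and read_block stand in for real device I/O, and a real sanitization pass would follow a published standard (multiple overwrite patterns, and degaussing or destruction where overwriting fails). The point is that any block that cannot be overwritten and read back must disqualify the media from overwrite-only purging.

```python
def verify_overwrite(device, pattern=b"\x00"):
    """Overwrite every block, read it back, and return the blocks that
    could not be confirmed overwritten."""
    fill = pattern * device.block_size
    unverified = []
    for block in range(device.block_count):
        try:
            device.write_block(block, fill)
            if device.read_block(block) != fill:
                unverified.append(block)
        except IOError:                 # an unwritable/unreadable "bad" block
            unverified.append(block)
    return unverified

class FakeDisk:
    """Toy stand-in for a real block device, with one bad block."""
    block_size, block_count = 4, 5
    def __init__(self):
        self.data = {}
    def write_block(self, n, buf):
        if n == 3:
            raise IOError("bad track")
        self.data[n] = buf
    def read_block(self, n):
        if n == 3:
            raise IOError("bad track")
        return self.data[n]

bad = verify_overwrite(FakeDisk())
print(bad)   # → [3]: overwriting alone cannot purge this disk
```

If the returned list is non-empty, the media should be degaussed or destroyed rather than released, exactly as the summary above recommends.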

Here are two links that address the issues:

What is your corporate policy on excess equipment and disk sanitization?

Thursday, June 08, 2006

The Small Business perspective and a Goliath's

Last night I was reading the Wall Street Journal of June 6th, 2006, and on page B5 there is an article by Gwendolyn Bounds about how an independent radio station in Philadelphia maintains its market leadership under private ownership. The article ends with this statement by the station's owner: "Whenever there's a decision to be made, I ask myself two questions: 'Will I make money in the next 12 months?' and 'Will I make money in 5 years?'" In the spirit of someone without public shareholders to consider, Mr. Lee adds, "The five-year one is the only one that matters." :-)

While I was in Tampa last week I got into a discussion about how a small business like Zerowait can compete against a leviathan like NetApp for service and support of legacy NetApp equipment. I tried to explain that being small means we can discuss tactical and strategic ideas between our departments and make decisions quickly based on our customers' requests and emerging requirements. But the reality is much simpler: ask your NetApp, Hitachi, or EMC salesman and management what their five-year expectation is for their employment and the duty cycle of their products. Then call Zerowait and compare the answers. Zerowait is always working toward the strategic five-year time frame, while most enterprise storage manufacturers are looking at the quarterly sales figures.

Mr. Lee beautifully summed up my thoughts on our business.

Wednesday, June 07, 2006

Strategic and tactical thinking about your storage infrastructure.

When you consider your enterprise storage strategic plan, do you consider an ROI of three years, or five years? What does your vendor consider the term of your strategic alliance to be? What happens to your service and support budget and QoS if your chosen vendor's agreement falls apart with their suddenly not-so-strategic partner? What happens if a vendor cancels support after 18 months for the product you just purchased?

Many enterprise storage customers face this problem, and when the vendors say that the only choice is to upgrade, spend even more money, and get an even more proprietary solution, customers often agree because they see no other option. But there are other viable solutions, and avoiding vendor lock-in is an enterprise customer's best defense.

I was at a conference recently in Tampa, speaking to some managers of big data centers, and every one of them was interested in how to avoid vendor lock-in. Each of them had a nightmare story about their storage vendor and hidden lock-in costs. We had a great discussion about the different tactics there are to fight vendor lock-in. Fighting vendor lock-in starts at the negotiation stage: it is very important that you add addenda to the vendor's RTU (Right To Use) license agreement, and also make certain your PO reflects the special changes you want, starting with the right to a transferable license and the right to use third party support without any changes to your warranty. Tactically you can save tens of thousands of dollars at purchase, but strategically you can save hundreds of thousands by negotiating aggressively with your storage manufacturer.

Tuesday, June 06, 2006

NetApp & IBM agreements

We get a lot of calls from customers who are trying to figure out how to get the best deal on NetApp equipment, and we advise them to shop around. The agreement between NetApp and IBM provides a way to negotiate a better deal on your new NetApp equipment, because now you have two competing manufacturer sales forces offering identical equipment. Since both sales forces have to meet their quotas, a savvy customer can play them against each other.
Additionally each company has a reseller channel which can also be contacted to get quotes from.

Since both NetApp and IBM are offering NetApp software and support services, there is no differentiation in product or service other than price. It is very similar to going between different car dealers and negotiating.

I recommend some caution in purchasing the IBM-branded equipment, because NetApp in the past had a similar OEM agreement with Dell, and when it fell apart the Dell customers were left without support. As the article shows, IBM just discontinued its last NAS head; how long will it be before they discontinue support for the NetApp brand?

Monday, June 05, 2006

During last week's Disaster Recovery Summit in Tampa I was surprised to see so few storage vendors and storage resellers. I would have expected to see a whole bunch of storage resellers at the conference, since recovering data is such an important part of business continuity. It seemed very odd, although the attendees were probably very happy not to see them. Zerowait was the only storage support and services company in attendance, and I was the only non-manufacturer on the panel discussions.

In a strange coincidence, today in a press release by Tech Data announcing their NetApp distribution agreement they say: "Whether it's ensuring critical data is accessible in case of disaster or complying with regulations that mandate increased digital document retention, businesses of all sizes are turning to IT resellers to develop innovative, cost-effective storage solutions," said Pete Peterson, Tech Data's vice president, Systems Product Marketing.

Tech Data is in Clearwater and the conference was in Tampa, at the Airport Marriott. If data availability in the face of a disaster is so important to Tech Data and NetApp, I wonder why they did not drive over the bridge to the conference? It would have been a great opportunity for them to introduce their new product.

Thursday, June 01, 2006

The 2006 Disaster Recovery Summit is history now, and it was the best conference I have ever attended. Here is why:
1) Attendees were really interested in the topics.
2) End user experiences were clear, enlightening and well presented.
3) Vendors were not allowed to give their standard powerpoint commercials.
4) When Vendors made unverifiable claims, Toigo questioned them on their statements.
5) The after-conference dinner party was outstanding.

And for Zerowait, we found a lot more customers who are interested in our service and support offerings.

I wish other conferences were as well run, focused and informative.