Thursday, April 27, 2006

" We simply can't afford NetApp pricing and have moved in a different direction"

This is what a former NetApp customer told me as he was trying to sell us his old NetApp system, which he loved. And since his purchasing agent did not write "Transferable licenses required" on the PO when the system was purchased from NetApp, it now has only residual parts value.

So many people call us up after they have turned off their filers. They complain about the price of continuing maintenance from NetApp and how NetApp's pricing forced them to move to another system. Although they love their filers, they move to an inferior product simply because it is affordable.

When we speak to them and they hear that Zerowait provides an affordable alternative to NetApp for service and support, they are quite dismayed that they did not call us sooner. Before you give up on your filers, CALL ZEROWAIT at 888.811.0808. We offer affordable service and support for NetApp.



Wednesday, April 26, 2006

Customer comments

On Monday, we had a customer call us up because they were having a NetApp problem in their New York City Data Center. They wanted us to take care of their problem for them because they are physically located in Russia.

We sent an engineer up on the train on Tuesday morning and took care of their problem. Taking care of problems with NetApp systems is what we do, so it all seemed pretty routine to us. Imagine how happy we were to receive the following comment from the customer via email this morning:
As of yesterday afternoon EDT, NYDR is back in operation.

I would like to point out the service we received from Zerowait was prompt,
flexible, and 100% professional -- they truly met our needs the best it can
be done. The downtime was minimized as much as possible.

Providing an affordable alternative to NetApp for service, support and upgrades is what Zerowait specializes in, and we really appreciate it when customers notice.

Tuesday, April 25, 2006

It is always interesting to see who hits the Zerowait websites and Blog.

Lately NetApp (nat-198.95.226.224.netapp.com) has been stopping by a lot, and so have their attorneys, Bowman and Brooke (12.162.212.128). The folks at Hitachi stop by a lot as well (px2.hitachi.co.jp), and we also see many visits from EMC (psuedo-nat-29.isus.emc.com). The most popular site that people hit us from is www.netapp.org, and second is www.drunkendata.com. Google is the most popular search engine that sends people our way.

I am always amazed at the number of hits we get from overseas, and at how many people are interested in my opinions. I appreciate your visits and your comments, and I hope that you find my perspectives worthwhile. I remember that John Wanamaker said something like: his advertising was 50% effective, he just did not know which 50%. Sometimes I feel the same way.



Monday, April 24, 2006

Zerowait and many of our customers will be at Jon Toigo's Disaster Recovery and Data Protection Summit. At Zerowait we use load balancing and mirroring to protect ourselves from data loss, though we also use our Spectra Logic library for a weekly backup. You can't be too careful!

By the way, customers also need to be aware of how to cleanse their disks when they get ready to dispose of them. The other side of data protection is knowing how to cleanse your data.

I am not certain whether there will be a session on disk cleansing techniques or not, but I hope that in the future these will be included. There are many aspects to data protection, and this summit is a step in the right direction. I look forward to seeing you there!

Saturday, April 22, 2006

Finally --- An answer of sorts from NetApp on the BCS & ZCS issue

The answers below give a great perspective on the switch back to 512-byte sectors from 520-byte sectors, and they are worth reading all the way through.

But I still think NetApp should release reliable, repeatable and verifiable performance data so that customers can make informed, economical business decisions based on the costs and risk factors of storing D/R data on ATA disk as compared to FC disk. Additionally, since running dual-parity RAID to protect against a second disk failure requires extra disks and sacrifices usable space, customers need to know what percentage of disk space is lost in these configurations and what it costs them. Is it possible that, because you don't need to run RAID-DP with FC disks, certain smaller RAID configurations could be cheaper to run on FC than on ATA on NetApp filers?

Finally, is there a read or write penalty to running databases on ATA disks with RAID-DP and ZCS formatting, as compared to the faster Fibre Channel disks with BCS formatting?
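To put rough numbers on the parity question, here is a quick back-of-the-envelope sketch in Python. The group sizes are my own illustrative assumptions, not NetApp's published defaults:

def parity_overhead(group_size, parity_disks):
    """Fraction of a RAID group's raw capacity consumed by parity."""
    return parity_disks / group_size

# Illustrative configurations (assumed group sizes, not NetApp defaults):
configs = [
    ("RAID-4 (single parity), group of 8",  8, 1),
    ("RAID-DP (dual parity), group of 16", 16, 2),
    ("RAID-DP (dual parity), group of 8",   8, 2),
]

for name, size, parity in configs:
    print(f"{name}: {parity_overhead(size, parity):.1%} of raw capacity is parity")

# Prints 12.5%, 12.5%, and 25.0% respectively: in small RAID groups,
# dual parity doubles the capacity penalty, which is exactly why the
# FC-vs-ATA economics deserve real numbers.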

April 22nd, 2006 at 10:57 am
Cross Posted from the previous thread: From Dave Hitz, CTO, Network Appliance:
Let me take a shot at this. I asked one of our engineers to take a look at this thread as well, so if I mess up the details, hopefully he can set me right. (Hi Steve.)
Reformatting the disk drives from 512-byte blocks to 520-byte blocks and putting the checksum right in each individual block is the best solution, because it doesn’t take any extra seeks or reads to get the checksum data you need. This is called BCS or Block Checksum. (Most high-end storage vendors have something similar. EMC and Hitachi certainly do.)
Unfortunately, we aren’t able to format ATA drives with 520 byte blocks. Maybe someday, but not yet. So with ATA we use a different technology called Zoned Checksum (or ZCS) where we steal every Nth block on the disk and use it for the checksums. (I think N is 64, but can’t remember for sure.) This is less efficient because you have to read extra data, but it allows you to get the reliability benefits of checksums even with ATA drives, which is important because ATA drives are less reliable.
And what about the RAID-DP (DP = “double parity”)? I think that RAID-DP is a wise choice for all drives, Fibre Channel or ATA, but given that ATA drives are less reliable we make RAID-DP the default there. I’m wondering if it’s time to make it the default for Fibre Channel drives as well, but as far as I know, we haven’t done that yet.
Why sell less reliable drives? ATA drives are cheaper! If you’ve got the money, then by all means keep buying Fibre Channel drives and keep using block checksums.
On the other hand, if you want to save money, and your application can get by with a bit less performance, then the combination of RAID-DP and Zoned Checksums can make ATA drives very safe. We used to recommend ATA only for disk-based backup or for archival storage, but now that we have RAID-DP and ZCS, we see lots of customers using it for primary storage, which is why we are starting to support ATA through the entire product line, and not just in the R-Series.
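
To picture what "stealing every Nth block" looks like, here is a minimal sketch in Python (the N of 64 is Dave's recollection above, and the block addressing is my own illustration, not ONTAP internals):

N = 64  # Dave's recollection: every Nth 4K block holds checksums

def zcs_read(logical_block):
    """Physical 4K blocks touched by one logical read under ZCS: the
    data block itself, plus the group's shared checksum block. Only
    N-1 of every N physical blocks carry data."""
    group, slot = divmod(logical_block, N - 1)
    data_block = group * N + slot
    checksum_block = group * N + (N - 1)
    return data_block, checksum_block

# Two physical locations per logical read -- the extra I/O that makes
# ZCS "less efficient":
print(zcs_read(0))    # (0, 63)
print(zcs_read(100))  # (101, 127)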

************************
  1. Steve Strange Says:

    Let me see if I can fill in a few more details (Hi Dave).

    First, let me try to clear up the confusion about BCS vs. ZCS, and provide a little history. As Dave says, ZCS works by taking every 64th 4K block in the filesystem and using it to store a checksum on the preceding 63 4K blocks. We originally did it this way so we could do on-the-fly upgrades of WAFL volumes (from not-checksum-protected to checksum-protected). Clearly, reformatting each drive from 512 sectors to 520 would not make for an easy, on-line upgrade.

    As Dave says above, the primary drawback to ZCS is performance, particularly on reads. Since the data does not always live adjacent to its checksum, a 4K read from WAFL often turns into two I/O requests to the disk. Thus was born the NetApp 520-byte-formatted drive and Block Checksums (BCS). For newly-created volumes, this is the preferred checksum method. Note that a volume cannot use a combination of both methods — a volume is either ZCS or BCS.

    Pq65 provides some spare-disk output from a filer running ONTAP 7.x showing spares that could be used in either a BCS or a ZCS volume. The FC drive shown here is formatted with 520-byte sectors. If it is used in a ZCS volume, ONTAP will simply not use those extra 8 bytes in each sector.

    When ATA drives came along, we were stuck with 512-byte sectors. But we wanted to use BCS for performance reasons. So rather than going back to using ZCS, we use what we call an “8/9ths” scheme down in the storage layer of the software stack (underneath RAID). Every 9th 512-byte sector is deemed a checksum sector that contains checksums for each of the previous 8 512-byte sectors (which is a single 4K WAFL block). This scheme allows RAID to treat the disk as if it were formatted with 520-byte sectors, and therefore they are considered BCS drives. And because the checksum data lives adjacent to the data it protects, a single disk I/O can read both the data and checksum, so it really does perform similarly to a 520-byte sector FC drive (modulo the fact that ATA drives have slower seek times and data transfer/rotational speeds).

    Starting in ONTAP 7.0, the default RAID type for aggregates is RAID-DP, regardless of disk type. For traditional volumes, the default is still RAID-4 for FC drives, but RAID-DP for ATA drives. You cannot mix FC drives and ATA drives in the same traditional volume or aggregate.

    The default RAID group size for RAID-DP is typically double the number of disks as for RAID-4, so if you are deploying large aggregates, the cost of parity is quite similar for either RAID type. But the ability to protect you from a single media error during a reconstruct is of course far superior with RAID-DP (the topic of one of Dave’s recent blogs on the NetApp website).

    You can easily upgrade a RAID-4 aggregate to RAID-DP, or downgrade a RAID-DP aggregate to RAID-4. But you cannot shrink a RAID group, so you do want to be careful about how you configure your RAID groups before you populate them with data (assuming you don’t like the defaults).

    There was an implication earlier in this blog that we used to use RAID 4, but on newer systems we use RAID 5. That’s not the case — we do not use RAID 5 on any of our systems (though an HDS system sitting behind a V-series gateway might use it internally). This is a whole topic in itself, but the reason, stated briefly, is that RAID-4 is more flexible when it comes to adding drives to a RAID group, and because of WAFL, RAID-4 does not present a performance penalty for us, as it does for most other storage vendors. RAID-DP looks much like RAID-4, but with a second parity drive.

    Our “lost-writes” protection capability was also mentioned. Though it is rare, disk drives occasionally indicate that they have written a block (or series of blocks) of data, when in fact they have not. Or, they have written it in the wrong place! Because we control both the filesystem and RAID, we have a unique ability to catch these errors when the blocks are subsequently read. In addition to the checksum of the data, we also store some WAFL metadata in each checksum block, which can help us determine if the block we are reading is valid. For example, we might store the inode number of the file containing the block, along with the offset of that block in the file, in the checksum block. If it doesn’t match what WAFL was expecting, RAID can reconstruct the data from the other drives and see if that result is what is expected. With RAID-DP, this can be done even if a disk is currently missing!

    We’re constantly looking for opportunities for adding features to ONTAP RAID and WAFL that can hide some of the deficiencies and quirks of disk drives from clients. I think NetApp is in a unique position to be able to do this sort of thing. It’s great to see that you guys are noticing!

    Steve

  2. Administrator Says:

    Wow: great historical and technical clarification. My hat is off to Dave and Steve for jumping to the task of helping clear up the confusion around this 512/520 issue.

    Truly appreciated by all.


I agree with Jon on this; I guess it is my turn to invite Dave out for dinner to thank him for clarifying the issues so well.
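
Steve's point about lost writes is worth a sketch of its own. Here is the general idea as I understand it from his description, in Python; the field names and checksum choice are mine, not WAFL's:

import zlib
from dataclasses import dataclass

@dataclass
class ChecksumContext:
    """Stored alongside each block, per Steve's description: a checksum
    of the data plus WAFL identity metadata."""
    checksum: int
    inode: int    # file this block belongs to
    offset: int   # block offset within that file

def validate(data: bytes, ctx: ChecksumContext, inode: int, offset: int) -> str:
    if zlib.crc32(data) != ctx.checksum:
        return "corrupt: reconstruct from parity"
    if (ctx.inode, ctx.offset) != (inode, offset):
        # The checksum is fine, but this is not the block WAFL expected:
        # the drive lost a write or put it in the wrong place.
        return "lost/misplaced write: reconstruct from parity"
    return "ok"

# A lost write leaves the *old* block on disk, checksum and all, so a
# plain checksum passes and only the identity check catches it.
old_data = b"old contents"
old_ctx = ChecksumContext(zlib.crc32(old_data), inode=42, offset=7)

print(validate(old_data, old_ctx, inode=42, offset=7))  # ok
print(validate(old_data, old_ctx, inode=42, offset=9))  # caught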

Friday, April 21, 2006

The ZCS (512) & BCS (520) Saga continues

Jon Toigo asked me to respond to a post on his blog.

Hi Jon:

I find PQ65's comments very interesting, but I don't understand the reasoning behind them. If BCS (520) is better than ZCS (512) and NetApp uses ZCS on their Nearstore products, doesn't this mean that customers' D/R and backup drives are more vulnerable to corruption? It would seem so, because NetApp recommends dual parity on ZCS drives. This seems to leave customers relying on a less resilient technology for their backups. How much less reliable are ZCS systems than BCS systems, and is it worth the risk? That is what my customers and I are trying to find out.

Can NetApp provide reliable, repeatable and verifiable data to show their customers that the Nearstore products that use ZCS drives are as reliable as NetApp's products that use BCS technology? Does NetApp keep its own financial data on ZCS drives or BCS drives? Why not let customers judge their cost-to-risk ratio by disclosing test results that can be duplicated and verified?

Clearly there are performance and cost advantages to each technology and drive type. If NetApp disclosed accurate and repeatable test results, customers could make informed and economical decisions about where to store their D/R and backup data. And everyone would be a winner.

Thursday, April 20, 2006

Yesterday, we received a call from a customer complaining about how unreliable the Maxtor drives in his R150 were. We had a long discussion about the drives' reliability problems, and he could not understand why NetApp uses unreliable drives. He had been reading my blog about the BCS and ZCS issues and was also steamed that NetApp is selling drives that are less reliable and have less error checking than the drives they used to sell.

I explained that he was not alone, and we agreed that NetApp is much more focused on profitability than reliability. Anyone can read NetApp's founder's blog and see that he writes about the size of the company and moving engineering to India, but avoids issues related to reliability.

But what do I know? NetApp is growing, so perhaps customers don't care about reliability anymore.

Wednesday, April 19, 2006

Do Seagate drives sold by NetApp have some mystical powers? Not according to Hitachi!

I have been confused for years about the benefits of 512-sector (ZCS) drives as compared to 520-sector (BCS) drives. And as one of our clients pointed out, the R200 uses the older 512-sector (ZCS) format, even though according to NetApp ZCS technology is inferior. So why would a newer unit, for example the R200, use superseded technology? I was confused and could not find an answer to this question from anyone at NetApp or from anywhere on their website. So I wrote a note to the guys at HDS. Since HDS works with NetApp on their Gfilers, I figured I could get a clear answer from them.


According to the HDS engineer:
"There is no measurable difference in reliability between 512-byte and 520-byte sectors.
SCSI and FC drives can be low-level formatted with different sector sizes (within a small range, with 512 at the low end).
The reason for this is that the payload of 512 bytes is sometimes augmented, as it is in Hitachi's case on the RAID and DF, by additional check bytes appended to the end of the sector. I think Hitachi calls these CRC check bytes, but they also identify which sector it is (perhaps by an "ID-less" technique where, when the CRC bytes for the 512-byte sector are computed, a virtual ID field is included in the check bytes). That way, that sector will only read correctly when it is read from the right place.
So the bottom line is that this is a case of a subsystem-vendor redundancy code applied on top of what the disk drive already supplies."
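
To picture the "ID-less" technique the HDS engineer describes, here is a tiny Python sketch. The CRC choice and field sizes are my guesses at the general idea, not Hitachi's actual format:

import zlib

SECTOR = 512

def check_bytes(payload: bytes, lba: int) -> bytes:
    """Compute check bytes over the 512-byte payload plus the sector's
    logical address, so the sector verifies only when it is read back
    from the place it was written."""
    assert len(payload) == SECTOR
    crc = zlib.crc32(payload + lba.to_bytes(8, "little"))
    return crc.to_bytes(4, "little")

payload = bytes(SECTOR)
stored = check_bytes(payload, lba=1000)

print(check_bytes(payload, 1000) == stored)  # True: read from the right place
print(check_bytes(payload, 1001) == stored)  # False: misplaced sector is caught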

It leaves me with the feeling that all this drive jargon NetApp has been marketing to us for several years is simply marketing spin, and another way to make their systems more proprietary.

Comments - from Jon Toigo on this post

Hey Mike, the really interesting thing to me is that NetApp contacted me through two different routes to describe how popular their solutions are for data protection. One fellow claimed that their gear was more popular than anyone else's for disaster recovery. When I invited them to make their case as a sponsor of the Disaster Recovery and Data Protection Summit, they demurred. Guess they were afraid that you would be there...

Jon Toigo
Managing Principal, Toigo Partners International
Founder, Data Management Institute
1538 Patricia Avenue
Dunedin, FL 34698 USA
O: 727-736-5367
F: 727-736-8353
C: 727-504-9311
jtoigo@toigopartners.com

Tuesday, April 18, 2006

Does NetApp charge so much for its software because customers have incomplete information to make an informed decision?

According to an article in the April 15, 2006 edition of the Economist (page 78), physicians' and lawyers' wives have far fewer surgeries than the general population. The article says this is because surgeons have better information than the general population, and that surgeons fear operating on lawyers' wives. The same article points out that real estate agents on commission sell clients' houses for less than they would sell their own.

If customers had complete information about NetApp's hardware and software policies, their cost of storage would inevitably fall. And while there is competition for NetApp in the general storage market, once a customer buys into the NetApp value proposition, Zerowait is the only viable option for transferably licensed filers and for service and support of NetApp's equipment.

The best time to purchase from NetApp is at the end of the quarter or the end of their fiscal year, because the sales department wants to make as much money as possible and will cut prices to sell more equipment. Additionally, the NetApp salesperson is in his commission accelerator period at the end of the year, so his commission rate is higher. Just like a real estate agent, a NetApp salesperson will cut price to sell more product and raise his income.

As NetApp's end of year is fast approaching, the best way to get price concessions from NetApp is to show them competitive quotes, either from Zerowait or from one of the other storage vendors. And remember to insist on getting your transferable software licenses from NetApp.

Monday, April 17, 2006

"History shows that where ethics and economics come in conflict, victory is always with economics." - B.R. Ambedkar

Why does NetApp tax their users more to manage 24TB of data on an R200 than 12TB of data?
In short, NetApp charges more because they think they can, but in the long run they will lose customers to lower-cost alternatives. It is the way of the world.

One of our customers asked us this the other day, and I had to tell him that I could not figure out a reason NetApp would charge more to manage more data. It is the same software, with the same amount of programming. So why does it cost more?

Perhaps NetApp views software the way the IRS views income: the more you have, the higher the tax on it. It seems ironic that NetApp would put a progressive software tax on storage, since the one thing all enterprises want to do is lower their marginal tax costs.

How can a NetApp user lower their marginal storage tax burden? There are a few ways:
1) Purchase transferably licensed systems and then max out the storage on them.
2) Use storage systems that don't tax storage as much as NetApp does.
3) Get a quote from a company like Zerowait and show it to NetApp; that usually has a downward effect on NetApp's pricing.

Over time, NetApp's high marginal tax rate on storage will cause customers to look to other sources of storage that are cheaper to maintain. Think of it as the globalization of storage. Just as NetApp is using India for engineering, customers will sooner or later embrace lower-cost alternatives to NetApp.

Friday, April 14, 2006

Why does an R200 Nearstore use 512-sector drives? What is going on? Maybe the reason that Dave Hitz is advocating dual parity now is that NetApp is using less reliable drives and drive technology?
A few years ago NetApp switched to 520-sector drives because they were more resilient. But on their backup and archiving units they currently use 512-sector drives. Are the Maxtor ATA drives less error-prone than the Seagate FC drives?

NetApp Disk Formatting: ZCS and BCS
Network Appliance filers in the early days came with "normal" 512-sector disk drives. But the drive firmware's built-in checksumming occasionally caused parity inconsistency, or data corruption.

So NetApp created their own method of checksumming and introduced it in Data ONTAP 6.0. With this new method, every 64th stripe on disk was a checksum of the previous 63. Unfortunately, performance suffered terribly, some say up to 30% overhead, until the engineers remembered the mainframe technology of old, where 520-byte-sector disk drives had the checksumming built right into each sector. No more 64th stripe, and performance was vastly improved.

As a result, the old "normal" 512 drives became ZCS, or Zone Check Sum, named after how the software splits the drive into zones of data stripes and checksum stripes. The new drives with the 520-byte sector became BCS, or Block Check Sum, drives because the checksumming is "built in".
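
For a sense of the capacity cost of each scheme, a quick bit of arithmetic (my numbers, derived from the layouts described above):

# Raw-capacity overhead of each checksum scheme described above.
zcs = 1 / 64     # every 64th 4K stripe holds checksums for the other 63
bcs = 8 / 520    # 8 checksum bytes carried in every 520-byte sector

print(f"ZCS capacity overhead: {zcs:.2%}")   # ~1.56%
print(f"BCS capacity overhead: {bcs:.2%}")   # ~1.54%

# The capacity cost is nearly identical; the difference is I/O. Under
# ZCS a read may need a second seek to fetch the checksum stripe (the
# reported ~30% hit), while BCS reads data and checksum together.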

Thursday, April 13, 2006

MAY 31 & JUNE 1 - Join us in Tampa!

Jon Toigo's group is putting on a Disaster Recovery Conference, and we will be there! Of course, we prefer to advocate disaster prevention techniques, rather than waiting for a problem to occur.

It looks like it is going to be quite an event, and the interest we are getting from our customers is quite good. I think a good cross-section of our customers will be there. If you want to attend, you can go to the website that I have linked to at the top of this entry.


I hope to see you there!

Wednesday, April 12, 2006

It has been a very busy couple of days. We have started to get NetApp FAS960s in stock, and a lot of people who want to upgrade from their 800-series units to the FAS960 have been waiting for our stock levels to increase. Additionally, we are now getting calls from NetApp customers who are looking at new systems. Typically, a NetApp customer gets a quote from NetApp for a FAS3020, looks at the price - faints - recovers - and gives us a call. We tell them to go to www.spec.org, look at the performance of the FAS3020 or FAS3050, and compare them to the specs on the FAS960. Many are left wondering why they should buy a 3000-series unit from NetApp. We have been trying to understand the reason to upgrade from a 940 to a 3020 for many months, and because of recent articles in the trade press, end-user customers are also scratching their heads.

Additionally, our customers are embracing our ZHA exception reporter. During this morning's staff meeting I was told that yesterday my staff received two calls from customers who are using it to improve their filers' performance. If you are using NetApp filers, you should try our exception reporter; it is a really useful tool.

Tuesday, April 11, 2006

Morphic Resonance

Last night I phoned Jon Toigo about the SearchStorage article, and we chatted about it at length. This morning I read his blog. It seems that the press and the competition are now starting to ask the same questions that Jon, Zerowait, and our customers have been asking for a couple of years now.

When is NetApp going to provide reliable, repeatable and verifiable performance numbers? Competition in storage is strong and growing, and I predict that NetApp's well-funded competitors will soon start to push for verification of NetApp's claims.

It is not only Zerowait and our customers who are looking for real numbers, but also the trade press.

Monday, April 10, 2006

NetApp - we used to be a contender

It is eerie to watch the decline of NetApp. Here is a company that spends hundreds of millions of dollars to purchase Spinnaker, Alacritus and Decru, and yet does not spend money on listening to its customers. It is like watching GM's decline in many ways. GM thought that by purchasing Hummer, part of Fiat, and Saab, it could stop the decline of the company. But the problem is not the brands; it is that the company does not listen to its customers. NetApp likens itself to a New Economy company, but it is really the same old story: listen to your customers, and they will tell you what they want and what they are willing to pay for it.

So what will happen over the next few years? Competition will force NetApp to lower its prices, and I predict that because they have very high overhead they will slowly cut into their currently very high gross margins. As current customers sense that their salesmen are getting more desperate, the customers will demand higher discounts. This will cause NetApp to reduce its workforce and offload unprofitable divisions and projects. Will NetApp's shareholders ever see a return on their investment in the Spinnaker purchase? I strongly doubt it.

It will take a couple of years, but competition will reduce NetApp to a memory, just like Wang, DEC, Compaq, and so many others.

Tuesday, April 04, 2006

I was in Charlotte, NC yesterday visiting a fast-growing racing technology company that we provide service and support to for their NetApp filers. Even this fast-growing, fast-paced business needs help controlling its NetApp costs for service, support and expansion.

They love their NetApp equipment, but they get heartburn when they look at the prices NetApp wants to charge. So they called us, and they are very happy with our affordable alternative to NetApp.