Monday, March 16, 2009

Cisco enters the storage broom closet

Cisco is looking to reduce the TCO (Total Cost of Ownership) of the complete data center, and it appears to be partnering with EMC and NetApp to deliver an integrated solution. NetApp is famous for talking up interoperability while in practice locking customers into its own stack. It will be interesting to see whether Cisco can herd NetApp and EMC salesmen into the same room with a customer and get them to cooperate on closing a sale. I predict a collision of interests, since both companies are fighting for market share and floor space within the customer's data center.

Cisco has a lot of leverage, but salesmen never change.

Let's watch this as it develops.

Starting in the second quarter of 2009, Cisco plans to offer complete systems of up to 320 compute nodes housed in 40 chassis (eight blades per chassis), with data flowing across 10 Gigabit Ethernet.

Critical to its challenge will be its ability to draw on the expertise of key partners. Its compute capability, the UCS B-Series blades, will be based on Intel's Nehalem processors, the next generation of Xeon; VMware will supply the critical virtualization software; BMC will enable “a single management environment for all data centre devices”; EMC and NetApp will be responsible for the storage systems; Emulex and QLogic will contribute storage networking technology; Oracle will deliver middleware; and key systems software will come from Microsoft and Red Hat.

The company is already making bold claims for the savings that clients will reap from such an integrated fabric. UCS “reduces total cost of ownership: up to 20% reduction in capital expenditure and up to 30% reduction in operational expenditures”.
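To make those percentages concrete, here is a minimal sketch of what the blended saving would look like over three years. The 20% and 30% figures come from Cisco's claim; the baseline dollar amounts are invented purely for illustration.

```python
# Hypothetical illustration of Cisco's claimed UCS savings.
# The 20% capex and 30% opex reductions are Cisco's numbers;
# the baseline dollar figures below are invented for this example.

capex_baseline = 1_000_000      # up-front hardware spend ($), assumed
annual_opex_baseline = 400_000  # yearly power/cooling/admin ($), assumed
years = 3

capex_saving = 0.20   # Cisco's "up to 20%" capital expenditure claim
opex_saving = 0.30    # Cisco's "up to 30%" operational expenditure claim

tco_before = capex_baseline + annual_opex_baseline * years
tco_after = (capex_baseline * (1 - capex_saving)
             + annual_opex_baseline * (1 - opex_saving) * years)

print(f"TCO before: ${tco_before:,.0f}")   # $2,200,000
print(f"TCO after:  ${tco_after:,.0f}")    # $1,640,000
print(f"Blended saving: {1 - tco_after / tco_before:.1%}")  # 25.5%
```

Note that even at the "up to" ceiling, the blended TCO saving lands in the mid-twenties, not 30%, because capex is a one-time cost while opex compounds over the life of the gear.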

1 comment:

Anonymous said...

The Cisco "solution" baffles me: it seems they want you to entrust the decisions about what goes into your enormously complicated infrastructure to a bunch of sales & marketing folks who have agreed to team up on you.

The only thing I found remotely interesting in this is that their servers will accept up to 384GB of memory per server. That is a big deal when you are talking about features like VMware's Distributed Power Management: you need a couple of boxes that can absorb a massive load of idle VMs overnight to see benefits from that sort of thing, and memory (even with overcommit) is currently the big constraint.
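To illustrate the commenter's point, here is a minimal sketch of why the per-host memory ceiling determines how few hosts must stay awake overnight. The VM sizes and the overcommit ratio are invented for the example, and the simple first-fit packing below is just a stand-in, not VMware DPM's actual placement algorithm.

```python
# Toy illustration: overnight, idle VMs get consolidated onto a few
# big-memory hosts so the rest of the cluster can be powered off.
# First-fit-decreasing packing; all workload sizes are invented.

host_capacity_gb = 384     # per-server ceiling cited in the article
overcommit_ratio = 1.5     # hypothetical memory overcommit factor
effective_capacity = host_capacity_gb * overcommit_ratio

# Invented idle-VM memory footprints (GB) across a small cluster.
idle_vms_gb = [64, 48, 48, 32, 32, 32, 16, 16, 16, 16, 8, 8, 8, 4, 4]

hosts = []  # running memory total per powered-on host
for vm in sorted(idle_vms_gb, reverse=True):
    for i, used in enumerate(hosts):
        if used + vm <= effective_capacity:
            hosts[i] = used + vm   # VM fits on an already-awake host
            break
    else:
        hosts.append(vm)           # no room anywhere: keep another host on

total_gb = sum(idle_vms_gb)
print(f"{total_gb} GB of idle VMs fit on {len(hosts)} host(s) "
      f"of {host_capacity_gb} GB (x{overcommit_ratio} overcommit)")
```

With a 384GB ceiling, this entire invented cluster's idle footprint lands on a single host; shrink the per-host ceiling and more machines must stay awake, which is exactly why the 384GB figure matters for DPM-style consolidation.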