If you happen to read my writing (as infrequent as it is these days), you know that I am a networking-focused person. I live my day-to-day within the walls of routing, switching, wireless, and other “network centric” platforms and technologies. The days of Unix, Windows, and other generalist administration duties are gone for me. However, like many IT professionals, I have a strong desire to understand all of the different areas in order to enhance my capabilities within the networking space. If you wish to implement IT in any particular silo, it helps to understand all the different pieces. With that in mind, I happily accepted my invitation to the Cisco UCS Grand Slam event in New York City a few weeks ago. My involvement with Cisco UCS usually stops at the fabric interconnect, and occasionally extends down into the virtual networking piece as well.
I mention that because while I understand the moving parts within storage, compute, and virtualization, I DON’T understand them at the level of people who live in those worlds full time. In light of that, I have to point out that I may be completely wrong in my predictions or thoughts around this particular launch. Then again, I may be 100% right about where this is all headed. Time will tell, and right or wrong, this post will be available on the Internet until I am shamed into the void of abandoned blogs, or offered a very lucrative gig shilling for one of the billion flash storage companies.
UCS Mini
Coming into the Cisco UCS Grand Slam event, I knew about the UCS Mini. Everyone knew about it: a fabric interconnect (FIC) for UCS that fits into the Cisco 5108 blade chassis. Great for smaller customers that didn’t want to go all in and buy the larger 6200 series FICs for a handful of servers. Not so great for customers that needed a ton of UCS servers and already had the larger 6200 series FICs.
Hooray! The mid-market customer finally got some UCS love apart from owning a handful of C-Series UCS boxes. The use case put forth was the large branch office, and since I spend a lot of my time in healthcare environments, I can see that use case in hospitals. However, I still think the larger opportunity is in the data centers of smaller companies.
Here is a video I shot of one of these 6300 series FICs at the event. I can tell you that this little guy was not light, but then again, they had to pack a fair amount of technology into this smaller form factor.
But Wait, There’s More
A couple of interesting things were also announced at the event.
First, there was the M4308 modular server chassis. It is a 2U box that can hold up to 8 M142 compute cartridges. Each cartridge actually contains two different servers, although each “server” is really just a processor and memory. The M4308 provides shared network (2x40Gbps uplinks) and storage (up to 4 SSDs). Cisco has effectively decoupled everything from the server itself other than processor and memory. Why would you want to do something like this, you ask? Well, the way I see it, it gives you the potential for a lot of distributed computing power without the typical expense involved in buying regular servers. Maybe you don’t need anything but a lot of processing horsepower for a particular application. Maybe you just need small servers to run a bunch of smaller applications that require their own dedicated box. It could be used for any number of things, I suppose. (I sketch out this decoupling idea after the pictures below.)
M4308 Front Picture
M4308 – Rear Portion Open
M4308 Rear Picture Showing Drive Bays and Network Connections
M142 Compute Cartridge
M142 Cartridge Opened
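To make that decoupling a bit more concrete, here is a toy model of this kind of chassis. This is purely my own illustration of the concept, not anything based on Cisco’s actual management model, and the core/RAM figures per server are invented; only the cartridge, uplink, and SSD counts mirror the specs mentioned above.

```python
from dataclasses import dataclass, field

@dataclass
class ComputeCartridge:
    """A cartridge is just processors and memory; nothing else lives here."""
    name: str
    servers: int = 2            # an M142-style cartridge holds two servers
    cores_per_server: int = 4   # hypothetical core count
    ram_gb_per_server: int = 32 # hypothetical memory size

@dataclass
class ModularChassis:
    """Network and storage are chassis-level, shared resources."""
    uplinks_gbps: tuple = (40, 40)            # 2x40Gbps shared uplinks
    shared_ssds: int = 4                      # up to 4 shared SSDs
    cartridges: list = field(default_factory=list)

    def add_cartridge(self, cartridge: ComputeCartridge) -> None:
        if len(self.cartridges) >= 8:         # up to 8 cartridges per 2U box
            raise ValueError("chassis is full")
        self.cartridges.append(cartridge)

    def total_servers(self) -> int:
        return sum(c.servers for c in self.cartridges)

chassis = ModularChassis()
for i in range(8):
    chassis.add_cartridge(ComputeCartridge(name=f"cart-{i}"))
print(chassis.total_servers())  # 16 servers sharing one network/storage pool
```

The point of the model is simply that the cartridge objects carry nothing but compute, while connectivity and disks belong to the chassis they plug into.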
Second, the C3160 server was announced. Basically, this is a big storage box. It has 64 drive bays and can hold up to 360TB of storage. While Cisco isn’t the first to release a server with tons of storage space like this, it does make their compute offering a little more complete.
C3160 Server
Is That All There Is?
Okay, so we have some new hardware that gives us more options. That’s always a good thing, right? Other, more qualified server/storage/virtualization folks will have a lot more content regarding these products, and you can find their posts linked at the bottom of this page. I would normally end things here with a basic piece about the new UCS offerings.
But then I read this piece from Stephen Foskett, where he discusses virtualized and distributed storage…
That added to what I had already been pondering regarding the future of UCS. I also ran across this post from Colin Lynch, and he makes some very interesting statements that caught my eye:
“You need to embrace the concept that UCS is not a Chassis Centric architecture”
“There is no intelligence or hardware switching that goes on inside a UCS Chassis.”
Now consider the rise of solutions like Nutanix and Scale Computing. Consider how they differ from the traditional big storage and big compute silos that we tend to pack into data centers. They converge it all down into nodes that intelligently link together. It’s a clever way to provide somewhat similar services, but with the ability to scale out linearly in both storage and compute within the same box, from the same vendor.
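To show what “scale out linearly” means here, consider a back-of-the-napkin sketch. The per-node numbers are invented for the example; the point is only that every node added grows compute and storage together:

```python
# Hypothetical hyperconverged node; the figures are made up for illustration.
NODE_CORES = 16   # compute per node
NODE_TB = 8       # usable storage per node

def cluster_capacity(nodes: int) -> tuple[int, int]:
    """Capacity grows linearly on both axes as identical nodes are added."""
    return nodes * NODE_CORES, nodes * NODE_TB

for n in (4, 8, 16):
    cores, tb = cluster_capacity(n)
    print(f"{n} nodes -> {cores} cores, {tb} TB")
```

Contrast that with the traditional model, where growing storage means a bigger array and growing compute means more servers, purchased and scaled on separate curves.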
Here’s where I am going to take a wild guess. I think that in the coming years, Cisco will be able to provide the compute, storage, and networking, but in a variety of different building block sizes. From the compute perspective, they already have an interesting array of products. On the networking side within the data center, they have already demonstrated their ability to provide a variety of platforms to suit every need from 1Gbps up to 100Gbps. The missing piece is storage. Maybe that is where Invicta (Whiptail) comes in. If Stephen is right, distributed storage is the future. Instead of very large centralized storage platforms, we’ll see lots of smaller platforms spread out across the data center.
As long as the distributed systems can provide the same or similar services that the large centralized storage platforms do, I think it can work. Since I am not a storage guy by trade, I have to assume that there are features and capabilities in the larger centralized storage platforms that would be hard for Cisco to duplicate with UCS. This would be similar to how larger chassis switches such as the Nexus 7000s offer things that smaller 1RU switches typically do not. If I were to assume that less than a quarter of storage implementations utilize the largest arrays available, that leaves a considerable chunk of the storage market that can be served with a highly distributed model. I just made that 25% number up. I have no idea what the real number is of organizations that use something like VMAX from EMC. Even if that number is 50%, that is still a lot of customers that don’t need the largest storage platform.
Closing Thoughts
I’ll admit that there is a LOT that I don’t understand when it comes to storage and compute. However, I think at a basic level, we can all understand what the various pieces of the infrastructure puzzle are within the data center. If there is something to be gained by using smaller components, while managing it all centrally so that it isn’t much different from having massive compute, storage, and network blocks, then how bad can that be? I suppose it all hinges on the performance required for the business to function properly. Perhaps, if I look at this from an SDN perspective, it will make more sense. If I can get the same reliability and performance from a bunch of distributed switches throughout a data center and manage them centrally (not just NOC-type monitoring, but distributed forwarding intelligence), as opposed to nailing up all the 10/40/100Gbps connections to a monster chassis, how is that a bad thing? It should be cheaper, and it should allow for more flexibility.
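As a toy version of that model, think of a controller that holds policy centrally and pushes state down to each small switch, while the actual forwarding decisions stay local to every box. This is just a conceptual sketch of the idea, not any real controller’s API:

```python
# Conceptual sketch of centralized control with distributed forwarding.
# No real controller or switch software is being modeled here.

class LeafSwitch:
    def __init__(self, name: str):
        self.name = name
        self.fib = {}  # local forwarding table; lookups happen here, not centrally

    def install_route(self, prefix: str, next_hop: str) -> None:
        self.fib[prefix] = next_hop

class Controller:
    """Central policy and provisioning; forwarding stays on the switches."""
    def __init__(self):
        self.switches = []

    def register(self, switch: LeafSwitch) -> None:
        self.switches.append(switch)

    def push_route(self, prefix: str, next_hop: str) -> None:
        for sw in self.switches:   # one change, programmed everywhere
            sw.install_route(prefix, next_hop)

ctrl = Controller()
for i in range(4):
    ctrl.register(LeafSwitch(f"leaf-{i}"))
ctrl.push_route("10.0.0.0/24", "spine-1")
```

Swap “switch” for “storage node” and the appeal is the same: many small elements, one point of management.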
If I were Cisco, I would want to own everything from the network port to the hardware the data lives on and is processed on. Provided it could all be managed and provisioned from a central location, that is a compelling offer. Vendor interoperability is a good thing, but outside of a single vendor, the single pane of glass concept remains largely unrealized.
I’ll end this post here, because I have started to ramble, and I am not entirely sure I have made a whole lot of sense. What I am certain of is that Cisco has started creeping closer to the storage vendors’ territory. Will they end up making another acquisition in the storage world soon, or will the Whiptail acquisition provide them with as much of the storage piece as they want? I have no idea. What I do know is that they have managed to make a dent in the compute/server market with UCS in just a few short years. It seems to me that storage is the logical next step for them. If storage as we know it is changing into a more distributed model, I wouldn’t rule out some additional offerings from them. I have no firm insider information regarding their future plans. Just a hunch.
Disclaimer: My travel, lodging, and food expenses were covered by the Tech Field Day crew (thanks again!), and I assume that Cisco ultimately footed the bill for my accommodations. I wasn’t asked to write anything in return, and based on the timing of this post (which I haven’t had time to finish until tonight, in a hotel room), I can assure you that they have probably given up on me by now if they were expecting something. 😉