Simplifying scale out DataCenter design with UCS Manager 2.1

I’ve been designing and deploying UCS since the product was released a couple of years ago (technically I was involved in the pre-release, so we will say since UCSM v0.8). From the start I was constantly pushing up against the scalability and design constraints of UCS. The benefits of the system outweighed the challenges, but those design constraints created real challenges when building external systems to meet the needs of large UCS customers.

Don’t get me wrong: of all the server platforms out there, I prefer UCS. That being said, there are a few areas that have really caused headaches for me over the years.

Headaches solved with the release of UCSM 2.1

Headache #1 – Once I scale past a certain number of servers, I have to establish a new UCS domain

This has been a huge challenge for large single data center deployments as well as multi data center deployments (such as DR). In both cases I would have to resort to tricks like placing MAC address pools, WWN pools, and other “unique” identifiers into a CMDB (Configuration Management Database) outside of UCS. Even with an external CMDB, there was still a fair amount of design work needed to lay out UCS domains in a fashion that would support eventual integration without overlapping configuration elements.

All of this work was done to ensure that if two servers were instantiated in two different UCS domains, they wouldn’t conflict if they wound up on the same segment. Handling this logically by encoding the UCS domain ID into certain resource pools wasn’t terribly complicated, but in my opinion it was unnecessary work (though integration with CMDBs can get very complicated).
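As a rough illustration of that workaround (my own sketch, not anything UCSM generates for you), here is the kind of per-domain carving we did: embed a hypothetical domain ID into one octet of the MAC pool prefix so two domains can never hand out the same address.

```python
# Sketch: derive non-overlapping MAC pool ranges per UCS domain by embedding
# the domain ID in the fourth octet of the pool prefix. The prefix, domain
# IDs, and pool size are illustrative assumptions, not UCSM behavior.

def mac_pool_for_domain(domain_id: int, pool_size: int = 256) -> tuple[str, str]:
    """Return (first, last) MAC of a block reserved for one UCS domain."""
    if not 0 <= domain_id <= 0xFF:
        raise ValueError("domain_id must fit in one octet")
    base = 0x0025B5000000          # 00:25:B5 is the prefix commonly used for UCS pools
    start = base | (domain_id << 16)
    end = start + pool_size - 1
    fmt = lambda v: ":".join(f"{(v >> s) & 0xFF:02X}" for s in range(40, -8, -8))
    return fmt(start), fmt(end)

# Domains 1 and 2 get disjoint blocks, so service profiles from either
# domain can land on the same L2 segment without MAC conflicts.
for dom in (1, 2):
    print(dom, mac_pool_for_domain(dom))
```

With only 256 addresses per block this is obviously toy-sized; real deployments carved much larger ranges, but the principle was the same.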

This got even more complicated if you wanted to have a DR site. Making something simple happen, like having a server boot from the DR site’s SAN during an outage, involved external tools or scripts. In my opinion this is something that should be handled by UCSM or by a manager of UCSM.

Headache solved – UCS Central Manager of Managers

For those in the know, this has been in the works for a VERY long time. In fact, the large early install (1,000+ servers) I mentioned above, where we had to use external CMDBs to glue UCS domains together in the first year of UCS, is what generated this feature request.

UCS Central is, in a sense, a manager of managers. It allows you to aggregate the pools and policies of multiple independent UCS domains into one central management platform.

Specifically, it addresses:

  • resource conflicts across pools
  • mobility of service profiles between UCS domains
  • centralized access logs
  • centralized access to console servers
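To make the pool piece concrete: conceptually, allocation moves out of the individual domains and into one global pool that every registered domain draws from. The sketch below is purely illustrative of that model (my own Python, not UCS Central’s actual API), but it shows why uniqueness now comes for free instead of from naming conventions.

```python
# Illustrative model of a global pool: one allocator hands out identifiers to
# every registered UCS domain, so two domains can never be assigned the same
# value. This mirrors the UCS Central concept, not its actual API.

class GlobalMacPool:
    def __init__(self, prefix: str = "00:25:B5", start: int = 0):
        self.prefix = prefix
        self.next = start
        self.assignments: dict[str, list[str]] = {}   # domain -> MACs handed out

    def allocate(self, domain: str) -> str:
        suffix = self.next
        self.next += 1
        mac = (f"{self.prefix}:{(suffix >> 16) & 0xFF:02X}"
               f":{(suffix >> 8) & 0xFF:02X}:{suffix & 0xFF:02X}")
        self.assignments.setdefault(domain, []).append(mac)
        return mac

pool = GlobalMacPool()
print(pool.allocate("domain-sjc"))   # 00:25:B5:00:00:00
print(pool.allocate("domain-rtp"))   # 00:25:B5:00:00:01 -- never collides
```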

Headache #2 – Even when Cisco released code to manage C-Series 19″ rack mounts under UCSM, it still required a bunch of extra cables and equipment to make it work.

70% of the world’s x86 servers are in a 19″ rack mount form factor. Recently Cisco enabled these to be managed under UCSM, with a data path that runs through the fabric interconnects. This allowed a couple of key things to happen. First, it gave a data center’s administrative staff a unified view of their systems. Second, it allowed a clean data path from, say, a storage caching engine running on a B250 blade to a compute node housed on a C240 rack mount. All of that communication stays within the fabric interconnects and no longer has to exit northbound as it did in the past.

I was happy with that release. It allowed the C-Series servers to be managed under UCSM with the same tools, techniques, and APIs that we manage the blades with. However, the code was not updated to allow all that magic to happen over a single wire.

You would end up with beautiful cabling on the backs of your blade chassis, and a giant mess of cables coming out of your rack mounts, since you needed separate cables for the data path versus the management plane. Call me a neatnik, but I like my racks to be pretty and clean (and I don’t like buying extra switches, cables, and adapters).

Headache solved – Single wire management for ALL UCS servers

With the 2.1 release, all you need is a single Cisco Virtual Interface Card in your UCS 19″ rack mount (two if you want redundancy) to get the full feature set you have available on a UCS blade. For me this not only simplifies my designs, but also allows flexibility in things like designing Hadoop and OpenStack Swift object storage clusters, where redundancy is handled at the application level and dual 10 Gig interfaces are not needed.
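Blade or rack mount, everything registered to the fabric interconnects is reachable through the same UCSM XML API. As a rough sketch (the hostname and credentials are placeholders, and disabling certificate verification only makes sense against a lab FI pair with a self-signed cert), pulling the rack-server inventory looks roughly like this:

```python
# Sketch: query UCS Manager's XML API for C-Series rack servers (computeRackUnit).
# UCSM_HOST, USER, and PASSWORD are placeholders for your environment.
import requests
import xml.etree.ElementTree as ET

UCSM_HOST = "ucsm.example.com"
USER, PASSWORD = "admin", "password"
URL = f"https://{UCSM_HOST}/nuova"          # UCSM XML API endpoint

def xml_call(body: str) -> ET.Element:
    resp = requests.post(URL, data=body, verify=False, timeout=30)
    resp.raise_for_status()
    return ET.fromstring(resp.text)

# Log in and grab a session cookie.
login = xml_call(f'<aaaLogin inName="{USER}" inPassword="{PASSWORD}" />')
cookie = login.attrib["outCookie"]

# Pull every rack unit UCSM knows about.
racks = xml_call(
    f'<configResolveClass cookie="{cookie}" classId="computeRackUnit" inHierarchical="false" />'
)
for unit in racks.iter("computeRackUnit"):
    print(unit.get("dn"), unit.get("model"), unit.get("operState"))

xml_call(f'<aaaLogout inCookie="{cookie}" />')
```

Swap computeRackUnit for computeBlade and the exact same call returns the blades, which is the whole point of unified management.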

Headache #3 – For certain topologies I want storage locally attached to my pods using whatever protocol I want

Here is a dirty little secret. Even though you can abstract a bunch of storage functions into UCS, most server guys are still a bit impatient with their peers on the storage teams. There are many times when the server guys want to consolidate a bunch of boot disks into an array and connect it directly to the fabric interconnects.

Over time Cisco released support for additional protocols connected this way, but it was never ubiquitous. That created problems because you could not build one standard topology that supported flexible protocol consumption across your network; you would end up with two or three variants of supported topologies. In my opinion this complicates operational procedures, stretches out outages, and generally leads to inefficient designs.

Headache solved – Flexible and consistent storage topology options no matter what protocol is being used.

With UCS 2.1, no matter which protocol floats your boat, you can implement it in a consistent manner. That may mean directly connecting Fibre Channel storage to your fabric interconnects and zoning it there, or it may mean utilizing multi-hop FCoE (I’ll save the argument about whether you SHOULD use that for later).

Either way, the most important thing to me is that no matter what the design requirements are, you now have the tools to meet them in a consistent fashion without changing your entire network and systems topology.

Colin’s Thoughts

Quite often there is a lot of glitz and glamor when a new product is released. Press conferences are held where everybody looks at the shiny blinky things and oohs and aahs. But when new software comes out that makes the things you already use every day work better, or lets them do new things, nobody notices.

In this case the 2.1 release of UCSM takes a product that many people already have (Unified Computing System) and makes it do more. There aren’t going to be press conferences about this, but it is worth taking a closer look at. It will make my life easier, and I hope it does the same for you.
