I’m spending the day at VMware’s beautiful Palo Alto campus headquarters to attend and discuss the launch of their new vCloud Hybrid Service offering. Both Pat Gelsinger, CEO, and Bill Fathers, GM of Hybrid Cloud Services, are giving the press, users, investors, and even many of their employees a first look at the future of VMware’s software-defined data center (SDDC) solution. Not to be confused with the VMware Cloud Evaluation, which was made available to the public in a “test drive” mode earlier this year, the vCloud Hybrid Service (vCHS, informally “vChess”) offering is fully production ready and offers two different consumption models: Dedicated Cloud and Virtual Private Cloud. During the presentation, Gelsinger noted that he expects customers to consume both offerings in parallel.
Note: All travel and incidentals were paid for by VMware to attend this event. No other compensation was received.
Cloud Offering Details
The Dedicated Cloud offers a sizable chunk of compute and storage in an isolated environment for customers with use cases that require it. The Virtual Private Cloud, on the other hand, is a multitenant environment with burstable compute. Both plans offer similar production support, but the dedicated model requires a subscription commitment of 12 months or more. Some ideas for use cases could be … read more…
The idea behind a management cluster is to form a pool of compute that manages the other pool of compute. Much like a tug boat pulling along some massive tanker, it exists as a separate entity to provide management in an out-of-band fashion. These are often a great idea, akin to the rationale for sticking iLO cards in an HP server – sometimes bad things happen, and you need a way to fix them that doesn’t live inside the smoking crater left behind. But there is no reason to reserve this idea for large-scale, production-ready, enterprise environments – your own home lab can (and should) enjoy these features!
Here are three big reasons to have a management cluster, be it a single standalone host or an HA pair of them, for your home lab.
1) Home Labs Explode … Often
In fact, that’s sort of the point of a home lab. Unlike a real production environment, a lab is an area where you can go in with guns blazing and have no fear of causing harm. If something breaks, you’ve learned a valuable lesson that will save you the same headache, usually amplified, in your production environment. But there’s a darker side to this – what if you broke your storage array and lost all your data? Rebuilding an entire lab environment, even from backups, takes a non-trivial amount of time in most cases. You have to have all of the software and/or ISOs available, potentially rebuild a domain controller or two, re-create accounts – the list goes on.
By protecting the guts of your home lab with a management cluster, you can focus on read more…
The use of a Link Aggregation Group (LAG) with Link Aggregation Control Protocol (LACP) is fairly standard for converged infrastructure northbound uplinks. It grants additional link redundancy and avoids even brief interruptions in the event of a single link failure, and when coupled with a virtual port channel (vPC) it can also provide protection against switch failure. However, I have found that the use of a LAG, often referred to as a port channel, can cause some confusion when configuring the vSphere switch side of the equation. Nearly all documentation in the wild focuses on the need to use an IP Hash teaming policy whenever a LAG is present.
Does this also mean that you have to use IP Hash for vSphere switches inside of converged infrastructure, such as Cisco UCS or HP Virtual Connect?
With a traditional rack mount server design, a port channel is created between the upstream switch and the vSphere host itself. Two or more NICs that live inside the hypervisor become member ports in the port channel. In this case read more…
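As a concrete illustration of the vSphere side, here is a minimal pyVmomi sketch (the vCenter hostname and credentials are placeholders, not from the original post) that reports the teaming policy on each standard vSwitch of every host. In the vSphere API, IP Hash shows up as the string `loadbalance_ip`:

```python
# Minimal pyVmomi sketch: report the NIC teaming policy of each standard
# vSwitch, flagging IP Hash ("loadbalance_ip"), which vSphere expects
# whenever the physical uplinks form a LAG / port channel.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="changeme", sslContext=ctx)  # placeholder credentials
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for vswitch in host.config.network.vswitch:
            teaming = vswitch.spec.policy.nicTeaming
            policy = teaming.policy if teaming else "inherited"
            note = "  <-- IP Hash (expects a LAG upstream)" \
                if policy == "loadbalance_ip" else ""
            print(f"{host.name} / {vswitch.name}: {policy}{note}")
    view.Destroy()
finally:
    Disconnect(si)
```

Actually switching a vSwitch over to IP Hash would go through the same networkSystem object via its UpdateVirtualSwitch method; the read-only report above is the safer place to start poking around.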
Virtual environments are excellent at providing an elastic, scalable environment for a myriad of workloads to thrive. Unfortunately, it’s often all too easy to over-provision resources or leave discarded, unwanted workloads behind, constraining the environment through virtual sprawl. But did you know that this can also negatively impact the performance and growth of your critical workloads?
I’d like to extend a welcome to all of my readers (and your friends!) to pop on over to BrightTALK and register for my upcoming presentation, entitled “The Devastating Impacts of Virtual Sprawl And Ways To Mitigate Them” – it kicks off at 3PM CST on Thursday, May 16th. The high-level agenda includes:
- Identify virtual sprawl using advanced technical methods
- Dig into performance-related issues that can be caused by imbalanced or top-heavy provisioning
- Understand how to translate real world sprawl data into a solid business case for change
- Exercise recommended practices to avoid falling deeper into the virtual sprawl trap
It’s interesting to watch the position of flash technology among various vendor stacks, along with the messaging and goals. One of the final presentations at Storage Field Day 3 was by a well-known storage company: NetApp. This was a nice change of pace from a partial week spent largely with start-up companies looking to change the world with radical new designs, and I found myself really looking forward to seeing exactly how NetApp wanted to improve their Data ONTAP design with flash.
There are many places you can put flash to accelerate performance: in the server for server-side caching, in the array to provide write-back or write-through caching, or even as an entirely flash-based array. NetApp seems to believe they can offer solutions in all of these places – but is that the reality?
Note: All travel and incidentals were paid for by Gestalt IT to attend Storage Field Day 3. No other compensation was given.
Creating a Hybrid Flash Array
NetApp has enjoyed a lot of success with their use of Flash Cache technology. This is the process of putting a PCIe card inside of a traditional FAS storage array and using it to cache hot read data, thus alleviating the amount of spindle activity necessary to serve up read IOs. This has an added bonus of giving those same spindles additional time to serve up write IOs. As a former NetApp customer, I definitely used Flash Cache in my arrays to offload 20%+ of my spindle activity to the cache.
What I didn’t realize until the Storage Field Day 3 presentation was that the technology uses a first-in-first-out (FIFO) eviction method – I’m still not entirely sure why this was chosen over something like least-frequently-read eviction (essentially least-frequently-used, or LFU, applied to a read cache), as FIFO means that read more…
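To see why the eviction choice matters for a read cache, here’s a toy Python simulation – my own sketch with an invented access pattern and cache size, not NetApp code – comparing FIFO against LFU eviction when a small hot set dominates the reads:

```python
from collections import OrderedDict, Counter
import random

def simulate(policy, accesses, capacity=100):
    """Return the cache hit rate for a given eviction policy."""
    cache, freq, hits = OrderedDict(), Counter(), 0
    for block in accesses:
        freq[block] += 1
        if block in cache:
            hits += 1
            continue
        if len(cache) >= capacity:
            if policy == "fifo":
                cache.popitem(last=False)  # evict oldest insertion, hot or not
            else:  # lfu: evict the least-frequently-read resident block
                del cache[min(cache, key=freq.__getitem__)]
        cache[block] = True
    return hits / len(accesses)

# 90% of reads hit a 50-block hot set that easily fits in the 100-block
# cache; the other 10% sweep a large cold region and pollute the cache.
random.seed(42)
accesses = [random.randrange(50) if random.random() < 0.9
            else random.randrange(50, 5000) for _ in range(20000)]
for policy in ("fifo", "lfu"):
    print(f"{policy.upper()} hit rate: {simulate(policy, accesses):.1%}")
```

Under FIFO, every cold block that streams through eventually ages a hot block out of the cache regardless of how often it’s being read; LFU keeps the hot set resident, and the simulated hit rates reflect the difference.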