One term that seems to get a bad rap in the IT industry is “best practices.” The phrase is intended to convey trust in a process, built on the experiences, triumphs, and failures of numerous others. To some it is a shield that can be used to protect a weak or outdated idea. Others wield it like a sword to attack new ideas or anything that challenges the status quo. And then there are the few who really seem to grasp the essence behind a best practice.
Jason Nash, VCDX, discusses VMware best practices in his blog post on passing the VCAP-DCD – a good source to get ideas on some positive applications of the concept.
The real problem, as I see it, is that a best practice no longer holds its value for as long as it used to. Technology is simply changing too rapidly for most (not all) best practices to stay relevant long enough to become a “best” practice.
Many of the best practices preached today are like pulling a card from the Monopoly Chance pile: while you can occasionally get a bank dividend of $50, you also have a pretty good chance of drawing the “Go Directly To Jail” card.
“Officer, you can’t do this to me! It’s a best practice!”
Some best practices are really just common sense wearing a fancy suit and will easily stand the test of time. Take this one for example.
Perform changes on your infrastructure during a maintenance window, even when the change promises to be non-invasive.
Sure. Makes sense. The risk is easily defined – either take a chance with the change and hope it goes smoothly, or put some boundaries around it to avoid updating the resume. But what about those other cases?
It’s All About The “Why”
The art of information technology is fluid and highly experience-based. I see a lot of environments with similar hardware set up in completely different ways, with different naming structures and object groupings, because the architect came from a different background or decided to solve the problem in her or his own way. Of course, the use cases aren’t exactly identical, but the point I’m driving at is that there is a lot of diversity in the decision-making process.
Here’s a great example from Scott Lowe of how varied our opinions on best practices can be. The thread involves a discussion around RAID recommendations. I think Scott handled it extremely well and turned it into a positive example for the community.
I don’t hold it against anyone for citing best practices, as long as they can explain why it is a best practice and why they are applying it to the specific situation – and you should expect the same. Take some of the older VMware documentation as an easy example. If I read the best practices (or recommended practices, as they seem to be called these days) guides from vSphere 4, and you read them from vSphere 5, we could both claim to know VMware’s best practices, yet mine would be outdated.
Fast forward to a vSphere 5 design scenario. I may never use PVSCSI controllers for my boot disks or low I/O workloads, citing the older vSphere 4 best practice, while you end up using them everywhere, citing the newer one. In reality, since things have changed since I last did my research, I should update my knowledge and see whether the guidance has improved.
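As a purely illustrative aside, here’s a minimal sketch of what “updating my knowledge against what’s actually deployed” could look like, assuming pyVmomi and a reachable vCenter – the host name, credentials, and the choice to report only SCSI controller types are my own placeholders, not anything prescribed by VMware. It simply lists the SCSI controller type behind each VM so you can compare the environment against whatever the current guidance says:

```python
# Illustrative sketch only: list the SCSI controller type of each VM so the
# environment can be compared against the *current* recommendation, not the
# one you remember. Host, user, and password below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience only; validate certs in production
si = SmartConnect(host="vcenter.example.local", user="user", pwd="password", sslContext=ctx)

try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:  # skip templates or VMs with no readable config
            continue
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualSCSIController):
                # ParaVirtualSCSIController is the PVSCSI adapter type
                kind = ("PVSCSI"
                        if isinstance(dev, vim.vm.device.ParaVirtualSCSIController)
                        else type(dev).__name__)
                print(f"{vm.name}: {kind}")
    view.Destroy()
finally:
    Disconnect(si)
```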
Thoughts
Take the time to research your design decisions beyond the idea of “it’s a best practice.” Understand why it’s a best practice for your specific use case and functional requirements. This will lead to wiser choices and an open mind that is constantly trying to learn and evolve. I especially recommend and emphasize this for those who aspire to the VCDX.
We all know the GOOD best practices are in the Community Chest!
Do you have any stories or examples to share in the comments below? I’d really like to hear them.