That fun game-changer in the room (I’m looking at you, x86 virtualization) has really shaken things up over the past decade. It seems like every single piece of technology in the datacenter has been altered by the presence, and the ripple effects, of virtualized workloads. For the many companies out there who have embraced the wave and converted their army of small physical boxes into a few behemoth servers running VMware, it’s been a mixed bag of good and bad. Mostly good, but there are a few … let’s call them annoyances. One of them is our old buddy: storage.

I’ll be absolutely blunt – I don’t find storage all that interesting as a standalone topic (sorry Robin!). I don’t really care much about giant petabyte arrays or big fiber mesh networks – they’re certainly cool to think about, but to me they’re simply the foundation of the datacenter stack. What does interest me is using storage for virtualization, and using it well. I spend a lot of time thinking about my storage and how to make it ultra-efficient, organized, highly deduplicated, and, well, “smooth” for the VMs that ride upon it.
A number of super smart people who have been working with storage for a long time are saying that traditional storage models are not going to work as we continue down the path of virtual workloads. Bold statement, I know, but after hearing their reasons, I’ll acknowledge that they have some valid points. Is this a problem for this year, or even next year? Probably not. I’m confident that today’s arrays will do the job for the near future.
However, what about 5 or 10 years down the line? What about the now-possible “Monster VM” workloads? How do you perform triage on a scaled-out VMware cluster with so many moving pieces and parts? These questions go beyond “speeds and feeds” and matter if you want to get everything you can out of a VMware environment.

In this post, I’ll go over some of the technology from a storage vendor that wants to change the game for virtualization, how they intend to make their mark, and the changes needed to embrace their approach.
Tintri: The “VM-Aware” Hybrid NAS Datastore
I’ve been watching Tintri for a while and have made sure to get some demos of their product from various places. Their claim to fame is “VM-Aware” storage, in which the storage array has some really granular and detailed information about the virtual workload that it is hosting. Talking to CEO Dr. Kieran Harty, who incidentally spent several years as Executive Vice President of R&D at VMware, reveals that he saw a fundamental flaw in storage design when used in a virtual environment and decided to do something about it.

Let’s take a dive into what it looks like to manage a Tintri unit, as it will probably be the most visible part of any administrator’s experience.
Not “Another Single Glass of Pain”
The array has a lot of information at its disposal: statistics gathered directly from the array, along with hooks directly into vCenter, resulting in details on a VM that are just sick! It covers everything from host/network/disk latency values and historical performance trends to IOPS, flash hit rates, and even esxtop-like values such as %CPU ready. Many of these statistics are unique to Tintri, as the array is truly “VM-Aware” and can give you data that you can’t gather from vCenter alone.
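As an aside, if you’ve never mapped vCenter’s raw CPU ready counter onto the %RDY figure that esxtop shows, the math is simple. Here’s a minimal Python sketch of that conversion (the numbers are illustrative, not pulled from a Tintri array):

```python
# Converting vCenter's cpu.ready "summation" counter (milliseconds of wait
# time per sampling interval) into an esxtop-style %RDY percentage.
# Real-time stats use a 20-second interval; the sample value is made up.

def percent_cpu_ready(ready_ms: float, interval_s: int = 20) -> float:
    """%RDY = time a vCPU spent waiting for a physical CPU, as a share of
    the sampling interval. For multi-vCPU VMs, divide by the vCPU count
    to get a per-vCPU figure."""
    return (ready_ms / (interval_s * 1000.0)) * 100.0

# 1,200 ms of ready time in a 20 s sample -> 6% ready, worth a closer look
print(percent_cpu_ready(1200))  # 6.0
```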
From a triage standpoint, the management interface does a great job of presenting the administrator with a holistic set of values to isolate and resolve issues. The storage also performs auto-alignment on the fly: misaligned vdisks and partitions are discovered and corrected at the storage level, automagically. This behavior is enabled by default, requires no configuration, and can save a lot of wasted I/O when retrieving blocks.
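To see why alignment matters, here’s a quick sketch of the arithmetic (the 4 KB block size is my assumption for illustration, not a statement about Tintri’s internals):

```python
# A guest partition whose starting offset isn't a multiple of the array's
# block size forces the array to touch two back-end blocks for a single
# guest I/O. The block size below is an assumed value for illustration.

ARRAY_BLOCK = 4096  # bytes

def is_aligned(partition_offset_bytes: int, block: int = ARRAY_BLOCK) -> bool:
    return partition_offset_bytes % block == 0

# Classic offender: older Windows guests start the first partition at
# sector 63, i.e. 63 * 512 = 32,256 bytes into the disk.
print(is_aligned(63 * 512))    # False -> extra I/O on every block retrieval
print(is_aligned(2048 * 512))  # True  -> 1 MiB offset, cleanly aligned
```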
Here are some screenshots from a live system that I was given a tour of.
What’s In The Box?
Ok, enough about me geeking out over the software that Tintri provides; let’s discuss the hardware itself. At the time of this writing, Tintri is working on (or has started to ship) the T540 unit, which is an HA pair of heads. Details on the new model include:
- It’s now 3U in size (thinner than before).
- Over 50% increase in usable storage.
- The amount of SSD flash has been roughly doubled when compared to the T445.

The array uses a tiered approach to disk: a layer of MLC SSD flash drives sitting on top of SATA disks. The usable storage value counts only the SATA disks – the SSDs are not considered in any free-space calculations. I was kind of surprised to hear that something can be smaller, faster, and have dual heads, but I saw it with my own peepers and can vouch that it’s true. 🙂
Not Just Storage “With Some SSDs” Added
Their strategy is to give flash-level performance to all VMs without having to buy an entire array of flash drives. During the demo, almost all of the running VMs were showing a 99%+ flash hit rate. The few that were not hitting flash were “staged” in such a way as to show poor performance and highlight the ease of resolving issues.
In order to service such a high flash hit rate using a small set of flash SSDs, Tintri uses inline deduplication on the SSDs. The logic is common in the SSD world: it’s faster to calculate deduplication tables than to write more data to flash drives. It also lengthens the life of the SSDs, since they handle fewer writes.
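Conceptually, inline dedup boils down to “fingerprint first, write only if new.” Here’s a tiny Python sketch of the idea (purely illustrative – this is not Tintri’s implementation):

```python
import hashlib

BLOCK_SIZE = 4096   # assumed block size for illustration
flash_index = {}    # fingerprint -> location of the already-stored block

def write_block(data: bytes) -> str:
    """Store a block on 'flash', skipping the write if an identical block
    already exists. Duplicate hits save both capacity and SSD wear."""
    fp = hashlib.sha256(data).hexdigest()
    if fp in flash_index:
        return flash_index[fp]                  # dedup hit: no new flash write
    location = f"ssd-slot-{len(flash_index)}"   # stand-in for a real allocator
    flash_index[fp] = location
    # ...the actual write to flash would happen here...
    return location

first = write_block(b"\x00" * BLOCK_SIZE)
second = write_block(b"\x00" * BLOCK_SIZE)  # identical block
print(first == second)  # True -> only one copy ever hit the flash
```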
Enterprise Ready?
As with any design, it depends on your functional requirements. There are two items still being worked on by Tintri that may hold you back:
- VM Snapshots
- Replication
There’s also some skepticism around their use of MLC flash, as opposed to SLC “enterprise” flash. I tend to disagree that this is an issue, as do some others who have written articles on the point. No one wants to deal with drive failures (and the resulting rebuild times and degraded performance), but it’s not as if a single failure takes out the whole flash tier – the system is built to handle drive failures. I’m also less concerned about the MLC part because I see many vendors going this route.
Another point is the use of 7.2K SATA drives for the back-end storage. This is a road already traveled by a few vendors (NetApp comes to mind) and it was not always well received. Not to pick on NetApp (I’m a big fan of their arrays), but their FlashCache cards only service reads, whereas the Tintri flash services both reads and writes. Whether you like this decision will probably depend on how you view storage tiering.
The Right Fit
I do see this unit being a great fit for many types of virtualization:
- The new shop that wants to get started with VMware. NFS is extremely easy to configure, and the Tintri storage is presented as a single datastore (see the sketch after this list). How easy is that?
- An existing VMware shop that wants to add a new project / layer to virtualization without expanding their current storage.
- Virtual desktop (VDI) projects using VMware View.
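For that first “new shop” scenario, mounting the single NFS datastore really is a one-call affair. Here’s a hedged pyVmomi sketch – every hostname, path, and credential below is a placeholder, not a real Tintri value:

```python
# Mount an NFS export as a vSphere datastore via pyVmomi.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()  # lab shortcut; use proper certs in prod
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)

# Placeholder ESXi host name
esx = si.content.searchIndex.FindByDnsName(dnsName="esx01.example.com",
                                           vmSearch=False)

spec = vim.host.NasVolume.Specification(
    remoteHost="tintri01.example.com",  # the array's data interface
    remotePath="/tintri",               # the exported file system
    localPath="Tintri-DS01",            # datastore name as vSphere sees it
    accessMode="readWrite",
)
esx.configManager.datastoreSystem.CreateNasDatastore(spec)
Disconnect(si)
```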
Thoughts

The bottom line is that Tintri is on the right path. Their interface and simplicity are exactly what the virtualization space needs. There’s still more work to be done, but I applaud their efforts to date and think that owners of their system will be very happy with the amount of monitoring and stats they can pull from the array. If you have spent time with a unit, attended a demo, or have other thoughts to share, please add your comments!
I’d like to thank the Tintri team for letting me into their HQ for a tour, answering all my questions, and being clear about their product and what it can do. I hope to see more great things from Tintri in the future.