18 Responses

  1. G

    Splendid read.

    The issue with the Synology devices is that you need dual SSDs to take advantage of your “Hybrid and Server-Side Cache Approach”, which means you really need an 1813+ or bigger, and that works out very expensive.

    An excellent read though; thanks for taking the time to write this up.


  2. Jason

    Don’t forget about Nutanix, the winner of VMworld 2013. It seems to give both of your software solutions a run for their money.

  3. Chris Conlan

    I’m contemplating doing the Hybrid approach. I have a DS1812+ (I want the DS1813+). I might just yank the 1TB RAID0 array out and replace it with 128GB SSDs, then look at putting the same in each host (128GB in each).

    I still need to read up more on vFlash, because from what I’ve read there are caveats if there isn’t enough space left on the host SSDs when you vMotion and whatnot.

  4. Paul Braren

    Fantastic work Chris, thank you for a great read. I’ve been hooked on SSDs, and have been using an LSI 9265-8i for SSD caching of reads and writes to my RAID5 since January, but getting that middle tier of storage performing like I wanted was admittedly a multi-year struggle:

  5. darkfader

    My current host uses a two-SSD RAID0 as a fronting write-around cache in front of a 4-disk RAID10 made up of green drives. Small-IO performance has been nice ever since I added this “read cache”.
    I’m unhappy about moving the IO layer into a VM, both in terms of data consistency (it could crash and do bad things) and performance (more round trips for the IO).

    My next route will probably be to venture into using EnhanceIO instead of the more mature, yet older, Flashcache.
    If it weren’t just a lab I’d go straight for a better HBA, better SSDs (*) and CacheCade 2.0 instead of any virtual appliance or block-layer caching solution.
    It’s probably time to spin up another, VMware-based, lab to check out what’s in store.

    (*) I use Samsungs, which are almost guaranteed to lose cache data on power loss.

    1. Chris Conlan

      So you are using the Flash Cache in 5.5 on your hosts? My two Kingston 240GB drives should be arriving shortly. I haven’t decided whether to put them in the hosts and utilize Flash Cache, or put them in RAID0 in my Synology and do the caching there.

      1. darkfader

        No 5.5 😉
        I’m running Alpine Linux with Xen 4.3 on my home box.
        If the Synology can handle it CPU-wise you could put them in there. The network will be a bit of a bottleneck, but I like the idea of a more central cache.

        If you want to cache on the (Linux) host side:
        Carlo Daffara from Cloud Weavers told me they use EnhanceIO now, since it can be plugged in and removed on the fly. It can even cover NFS.
        Sounds really good.

      2. Chris Conlan

        The Synology’s CPU barely gets touched, to be honest. It has a dual-core 2.13GHz Intel CPU with 3GB of DDR3.

        I’m waiting to see if I can get the NFR license for Pernix FVP.

      3. darkfader

        Central caching is really nice with NFS, and wow, if you get Pernix FVP that will be quite fun too, yes 🙂

      4. Chris Conlan

        Yeah, I just need to move the VMs off the RAID0 array I have them on (two 1TB Samsung Spinpoints) and back onto the RAID10 (6x 3TB WD Reds). Hopefully I can toss the Kingston HyperX 240GB drives in today. I like the Samsung 840 Pros, but these were $50 cheaper and got great reviews.

        I don’t see needing that much cache on the host side, so I might look at 120–128GB drives for those.

        Too bad my wife doesn’t see that I’m spending my hard earned money wisely.

      5. darkfader

        It’s a lot wiser than what other men might spend it on, err – no don’t say that 🙂

        Anyway, I think the HyperX drives are likewise not protected against cache loss on power failure.

        It will be fun, and yeah, you only need very little actual cache. Setting aside 50%+ for wear leveling is just fine.

  6. Welcome to vSphere-land! » Home Lab Links

    […] Network) My Lab (Wahl Network) Designing A Home Lab? Here’s My Three Favorite Tips (Wahl Network) Three Example Home Lab Storage Designs using SSDs and Spinning Disk (Wahl Network) New Super Quiet Supermicro X8SIL VMWare ESXi Server […]

  7. vnelsontx

    Great post! I ended up going with Scenario #1:
    Synology DS3612xs
    Storage Group 1: (4) Intel 520 Series 240GB SSD
    Storage Group 2: (8) Seagate Constellation ES 4TB

    Reason: it’s a 2-host environment, which means VSAN was out, and Pernix hasn’t responded yet on NFR keys for FVP. Also, it’s kinda hard to fit 3.5″ drives into 2.5″ bays 🙂

    Yes, the initial investment in the Synology was tough, but after weighing different storage scenarios, the traditional approach made the most sense for our use case.

  8. Chris Conlan

    Just tossed in the two Kingston 240GB drives, and it keeps telling me I can only use 125GB of cache. The DS1812+ I have has 3GB of RAM. I’m trying to remember where I read about the algorithm that lets you know the max.

    Looks like I’ll be tossing in two 120GB Samsung 830s and then finding a place to repurpose the 240GB drives, because I’m not sure I’d see a huge benefit in doing SSD caching on the hosts themselves (since I’m running Enterprise Plus).

    1. Chris Conlan

      I might look at the DS3612xs, since that can handle up to 8GB, and hand the 1812+ to my parents for off-site backup. A baby on the way is making this a difficult decision.

  9. The New Haswell Fueled ESXi 5.5 Home Lab Build

    […] deeper into home lab storage design with my “Three Example Home Lab Storage Designs using SSDs and Spinning Disk” […]

  10. TinkerTry IT @ home | Superguide: Home virtualization server enthusiasts’ colorful variety of ESXi whiteboxes

    […] by Chris Wahl on Oct 14 2013 wahlnetwork.com/2013/10/14/three-example-home-lab-storage-designs-using-ssds-spinning-disk/#more-851… […]

Share your point of view!