Cloud Sync to Google Nearline with Synology’s DSM 5.2 Beta

Inspired by Todd “Don’t call me Scott” Scalzott and with thanks to this forum discussion, I decided to experiment with the DSM 5.2 Beta’s ability to leverage Cloud Sync with Google Nearline storage. This nudged me to update the DSM code on my ioSafe 1513+ (more details on that here) to DSM 5.2-5532U1 so that I could kick the tires with the fancy new magic in DSM 5.2.

First off, it’s worth noting that using Google cloud storage of any tier isn’t free – if you want to tinker with this, you can certainly snag a trial account from Google, which gives you 60 days of usage (up to $300 worth) before you start shelling out your hard-earned monies. If you choose that route, you’ll get a friendly little email from Google with more details, like the one I received below:

google-trial-details

As for Nearline Storage itself, you’re paying the lowest cost per gigabyte, but you’re also required to pay data retrieval fees. This is a similar model to Amazon Glacier (which I’ve written about previously), but Nearline is much “warmer” storage, meaning you can get to your data fairly quickly. Glacier, on the other hand, takes a much longer time to rehydrate your data. You can view all of the cost details on the pricing page, but I’ve captured the storage pricing below.

nearline-pricing
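If you want to ballpark the bill, the math is simple enough to do in a few lines of Python. The rates below are only an assumption based on Nearline’s launch pricing (a penny per GB-month stored and a penny per GB retrieved) – check the pricing page for whatever Google is charging today.

```python
# Back-of-the-envelope Nearline cost estimate.
# ASSUMED rates (Nearline launch pricing); check Google's pricing page for current numbers.
STORAGE_PER_GB_MONTH = 0.01   # USD per GB stored, per month
RETRIEVAL_PER_GB = 0.01       # USD per GB retrieved

def monthly_cost(stored_gb: float, retrieved_gb: float) -> float:
    """Rough monthly bill: storage plus retrieval (ignores operations and egress)."""
    return stored_gb * STORAGE_PER_GB_MONTH + retrieved_gb * RETRIEVAL_PER_GB

# 500 GB parked in the bucket, 20 GB pulled back down during the month
print(f"${monthly_cost(500, 20):.2f}")   # -> $5.20
```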

Note: It’s ultimately up to you to decide whether this method of storage is cheaper than alternatives such as Dropbox; that will largely depend on your access patterns. It’s also worth noting that Cloud Sync is not meant for backups per se, although you could use it to sync files in one direction as a poor man’s backup. As an example, I use Cloud Sync as a master node for Dropbox files in the home lab, sharing files among my devices without having to install a Dropbox client on all of my home desktops and servers. If you’re looking to back up to the cloud, I’d suggest Synology backup to Glacier as written about here and here.

Building a Google Nearline Storage Bucket

Google does a good job at describing how to build a bucket in their documentation. I’ll cover the broad strokes that I followed, but I’m going to assume that you can set up an account and create a new project. Once that’s done, select the Create a storage bucket task from the Project Dashboard.

project-dashboard

Making a storage bucket is dead simple. Give it a name (lower case, unique), pick a class, and define the location. You can go with a higher tier of storage if Nearline isn’t your cup of tea, but I stuck with the cheap tier. 🙂

nearline-bucket
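If you’d rather script the bucket creation than click through the console, here’s a minimal sketch using the google-cloud-storage Python client (a newer library than anything from the DSM 5.2 era). The project ID and bucket name are placeholders, and it assumes you’ve already set up application credentials for your project.

```python
# Create a Nearline bucket from code instead of the console.
# Requires: pip install google-cloud-storage, plus application credentials
# (e.g. gcloud auth application-default login).
from google.cloud import storage

client = storage.Client(project="my-project-id")   # hypothetical project ID

bucket = client.bucket("my-nearline-bucket")       # lower case, globally unique
bucket.storage_class = "NEARLINE"                  # the cheap tier used in this post
client.create_bucket(bucket, location="US")        # pick whatever location suits you

print(bucket.name, bucket.storage_class)
```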

With the bucket created, scoot on over to the Storage access section, choose Interoperability, and enable interoperability access so that your Synology NAS can do its work.

interoperability

After interoperability is enabled, you can create access and secret keys to grant your Synology a way to authenticate to the bucket. Click the Create a new key button, then write down the access and secret keys for future use. These are somewhat analogous to a username and password, and the secret key is often forever masked after it is created – so make sure you keep it somewhere safe!

storage-keys
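Before pointing the Synology at anything, you can sanity-check those keys from a workstation. Here’s a rough sketch using Python’s boto3, since interoperability mode speaks the same S3-style XML API that Cloud Sync will be using; the key values are placeholders for the ones you just wrote down.

```python
# Quick sanity check of the interoperability (HMAC) keys against
# Google's XML API endpoint. Requires: pip install boto3.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.googleapis.com",
    aws_access_key_id="GOOG...your-access-key",    # placeholder
    aws_secret_access_key="your-secret-key",       # placeholder
)

# Listing the service endpoint returns the buckets these keys can see
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```

If the keys are good, your new Nearline bucket should show up in that list.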

Setting Up Cloud Sync to Google Storage

Toggle on over to your Synology NAS and open the Cloud Sync application. Choose the Amazon S3 setting and fill in the boxes below:

  1. Use the API for Google Storage, which is storage.googleapis.com
  2. Enter your access key for the bucket
  3. Enter your secret key for the bucket
  4. Choose the bucket name created earlier from the drop down
cloud-sync-to-nearline

Everything else is just like a vanilla Cloud Sync setup job – fill in a name for the sync task, tell the task where to sync your files, and decide whether you want the sync to be two-way (bidirectional), upload only (up into the cloud), or download only (down into your Synology array). I also set up encryption – which will require a name and password to decrypt the files – just for the sake of variety.

cloud-sync-task-setup

Toss in files as you usually would, or upload them directly into the bucket on your Google project page. Nothing too fancy here – you’re storing stuff in the cloud. If you want to share the file, and didn’t enable encryption, you can select the Public Link box to provide a URL to your friends or co-workers.

nearline-upload
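For what it’s worth, you can also push files into the bucket from code and grab a shareable URL, which roughly mirrors the Public Link behavior. This is just a sketch with made-up paths and names, using the google-cloud-storage client again, and it only makes sense if you skipped encryption (and your bucket allows per-object ACLs).

```python
# Upload a local file into the Nearline bucket and make it publicly readable.
from google.cloud import storage

client = storage.Client(project="my-project-id")   # hypothetical project ID
bucket = client.bucket("my-nearline-bucket")

blob = bucket.blob("docs/example.pdf")             # object name in the bucket
blob.upload_from_filename("/tmp/example.pdf")      # local file to push up

blob.make_public()        # grants allUsers read on this one object
print(blob.public_url)    # shareable URL, much like the Public Link box
```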

Thoughts and Use Cases

deletion

One thing that I found handy was the ability to delete the entire bucket with a click on the website. Normally, you can’t delete a bucket unless you’ve also deleted all of the objects within the bucket. Google does that for you. Snazzy.
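The same trick works from code, if you ever need it. In the google-cloud-storage client, a minimal sketch looks like the below; force=True empties the bucket before deleting it, though the client only does that for buckets holding a few hundred objects or fewer, so larger buckets need to be emptied in batches first. Names are placeholders.

```python
# Delete a bucket and its contents in one go.
from google.cloud import storage

client = storage.Client(project="my-project-id")   # hypothetical project ID
bucket = client.bucket("my-nearline-bucket")
bucket.delete(force=True)   # removes the objects first, then the bucket itself
```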

This exercise was more to prove to myself that it could be done than to meet any serious requirements. I’m not even sure how much official support is behind this configuration. But I can see a few opportunities to take advantage of:

  • Public cloud diversity – set up sync tasks to multiple cloud providers for additional high availability and potentially less concern about lock-in.
  • Tiered public storage – use Google Nearline as a higher tier of file storage for a certain set of data (such as files created or accessed in the last 30 days), and then demote files down to Glacier beyond that time frame (a rough sketch of this idea follows the list).
  • Price models – it may be that your access patterns fit quite well with Nearline, keeping costs quite low, but you still want to retain the ability to snag a file quickly for one-off data pulls.
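As for the tiering idea, here’s the sort of thing I have in mind, purely as a sketch: walk a synced share and flag anything that hasn’t been modified in 30 days as a candidate to demote down to Glacier. Cloud Sync won’t do this for you, and the share path below is made up.

```python
# Flag files older than 30 days as candidates for demotion to a colder tier.
import os
import time

CUTOFF_DAYS = 30
cutoff = time.time() - CUTOFF_DAYS * 24 * 60 * 60

for root, _dirs, files in os.walk("/volume1/cloudsync"):   # hypothetical synced share
    for name in files:
        path = os.path.join(root, name)
        if os.path.getmtime(path) < cutoff:
            print("demote candidate:", path)
```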