I’ve continued to tinker around with Synology’s DSM 5.2 Beta, although this time I’ve switched gears from Google Nearline via Cloud Sync to deploying Docker on my ioSafe. That’s crazy just to write in a single sentence – running containers on a NAS, woah there! Beyond this, I’ve also noticed a number of new applications appearing in the Package Center that run on Docker, such as GitLab. In case you’re wondering if GitLab is any good, see this note to self.
Keep in mind that these packages are all beta for now, so I don’t even know if this is just an experiment that will vaporize upon GA or a serious endeavour. And of course this wouldn’t be officially supported in production, so this is purely for fun and learning adventures, highlighted by the fact that there’s a Report Bugs button at the bottom of the DSM dashboard. 🙂
The full list can be found by typing docker into the Package Center’s search box.
If you’re looking to drop the big blue whale on your Synology NAS, and perhaps want to build out a social coding platform for your snazzy team, here’s the nuts and bolts.
Installing Docker onto a Synology NAS is the normal click click click sort of workflow. The bits will download and the package will install. Enjoy a relaxing tea and come back in a minute or two. You’ll see a new item appear in the Main Menu at the top shortly.
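If you like poking at things from a shell, you can also SSH into the NAS (assuming SSH is enabled in DSM’s Terminal settings) and confirm the daemon is alive. A quick sanity check, nothing more:

```shell
# Assumes SSH access to the Synology; these are standard Docker CLI commands
docker version   # prints client and server versions if the daemon is running
docker info      # daemon details: storage driver, image count, running containers
```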
Once the Docker install was complete, I decided to install an application that requires Docker: GitLab. This has an additional dependency on MariaDB, which is another package available in the Package Center. However, you don’t have to install it individually; just start the install for GitLab and it will automatically prompt to install and configure MariaDB for you. Make sure to use a password for the DB that you can remember – I forgot mine (must have been some tasty tea) and had to reset the password, which wasn’t that hard but still made me feel silly. 🙂
At this point, I took a look at the two containers running in Docker: one called synology_gitlab and another called synology_gitlab_redis. The first one spends a fair number of CPU cycles as it sets up the database for the first time, but calms down in a few minutes. Here’s a snapshot showing 26.52% CPU usage and a tiny bit of RAM. Most of the work needed to set up the DB is compute heavy, so RAM shouldn’t be a significant factor.
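The same numbers are visible from a shell, if you prefer. A rough sketch, assuming the container names shown in the DSM UI:

```shell
# List running containers and stream their CPU/RAM usage (Ctrl-C to exit)
docker ps
docker stats synology_gitlab synology_gitlab_redis
```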
Switching over to the Process view, I can drill down into exactly what’s going on in the container. At the point of this screenshot, you can see that PID 26165 is doing some Ruby magic and eating up my CPU time.
There’s also an Overview panel that shows more general statistics about the container, such as uptime, priorities, limits, network port usage, and assigned variables.
I checked in on the container after about half an hour and found that it was mostly idle. The logs showed that the database was migrated successfully and I was ready to start using GitLab. If you are unable to view anything on the GitLab port (30000), watch the logs to see when the migration is complete, or just give it some time. It’s really going to boil down to how powerful your Synology is, and my 1513+ is pretty great but certainly not an XS/XS+ series with some beefy procs.
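If you’d rather script the wait than keep refreshing a browser, here’s a minimal bash sketch that polls a TCP port until it answers. The NAS IP is a made-up example; 30000 is the GitLab port mentioned above:

```shell
#!/bin/bash
# Poll a TCP port until it accepts connections or we run out of tries.
# Uses bash's /dev/tcp redirection, so this needs bash (not plain sh).
wait_for_port() {
  local host=$1 port=$2 tries=$3
  local i
  for ((i = 0; i < tries; i++)); do
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0   # port is open – the service is answering
    fi
    sleep 1
  done
  return 1       # gave up after $tries attempts
}

# Example (IP is an assumption): wait up to 10 minutes for GitLab
# wait_for_port 192.168.1.50 30000 600 && echo "GitLab is up"
```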
Now that the containers are purring, I can check out the application. At this point I also wondered: is this how Synology plans to deliver future application packages? Just have folks deploy Docker containers instead of thick installs? It would certainly make upgrades simpler, and potentially partition apps away from DSM, perhaps?
Test Driving GitLab
After pointing a browser to the GitLab port, 30000, I was greeted with this stoic-looking fox man. A little creepy, but OK.
Unfortunately I didn’t see any documentation on how to log in, and I certainly had no account because this was a fresh installation on my Synology. I cruised on over to the GitHub project page for GitLab and found this little informative nugget:
You can access a new installation with the login root and password 5iveL!fe; after login you are required to set a unique password. (source)
This ended up working and allowed me to log in.
I spent a few hours tinkering with my profile and setting up projects. Everything was nice and peppy with a single user on there (me), and DSM’s Resource Monitor reported that the NAS had plenty of horsepower to spare.
Deploying Jenkins from Docker Hub
While the Package Center is great and all, the real strength of Docker is the Docker Hub with all sorts of official and community driven contributions. I wanted to try deploying something simple from the Hub, so I selected Jenkins as my target container. You can find the Jenkins repository here.
There are two ways you can add images to Docker: by URL or via the Registry. I’ll show the URL method first, then the Registry.
Method 1: Add images via URL or File
Head over to Docker in Synology DSM and pull up the Images section. Add a new image from URL and paste in the Jenkins repository link. There’s no need for a username or password because this is a public and open repository (not private).
Pick a tag, which corresponds to a release. I just selected the latest version.
The image will start transferring over to your Synology. The little disk drive icon on the right will fill up like a progress bar.
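Behind the scenes, this maps to a plain docker pull. From an SSH session, the equivalent would be roughly:

```shell
# Pull the official Jenkins image at the latest tag, then confirm it landed
docker pull jenkins:latest
docker images jenkins
```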
Method 2: Add images via the Registry
Alternatively, the Docker Hub comes pre-configured as a Registry source. You can click on the Registry link in the Docker window and search for whatever you’re looking for. You can also add other sources if the Hub isn’t your go-to location. Below I’ve searched for jenkins and you can see the official image with the gold medal on the right.
Regardless of the method you choose, the system will fire off an event when the image has finished downloading.
Launching Jenkins with Docker Run
Now that the image is downloaded, it can be launched. I chose to use Docker Run because the repository had some baked-in examples for me.
Paste the full docker run command into the parser. I chose a simple one that builds a temporary instance based on the examples on this page.
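For reference, a minimal command along those lines looks like this, assuming the Jenkins image’s standard ports (8080 for the web UI, 50000 for build agents):

```shell
# Temporary Jenkins instance: no volume mapped, so data vanishes with the container
docker run -p 8080:8080 -p 50000:50000 jenkins
```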
There are a few other questions to answer on naming, ports, priorities, and resources. I used the container name of myjenkins and left all other settings at default.
Once that’s done, there will be a new Docker container with your chosen name in the Container section of Docker. Locate your container and click the toggle switch on the right to flip it over to running.
The container will fire up and begin the run sequence. You can watch the process or log details to see what’s going on in the container. Note that the CPU value shown below was during the initial docker run setup; the CPU calmed down to about 3% once that finished (took a few minutes).
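That toggle switch is just the GUI face of the standard start/stop commands; from a shell the same dance would be roughly:

```shell
docker start myjenkins    # flip the switch on
docker logs -f myjenkins  # tail the startup log (Ctrl-C to stop watching)
docker stop myjenkins     # and back off again
```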
Open up a browser to your NAS with the port shown (8080). And there’s that happy little Jenkins page, waiting to blow your mind.
Easy enough, right?
I’m really impressed with the ability to use Docker containers on the Synology NAS. Coupling the ridiculously easy power of Docker with the simple interface of DSM is a no-brainer. It exponentially builds upon the value of this NAS, making it a powerful little jack-of-all-trades box.
Docker seems like a great fit for this box: applications can use the NAS for the underlying persistent storage, but can be blown away and updated to new versions easily. I doubt all applications in the Package Center will migrate to this model, or at least not very quickly, but it doesn’t really matter much if you can load Docker images from anywhere else!