As VMware vCenter continues to rise in importance in most people’s data center stack, the need to protect it from disaster becomes more important. To me, vCenter was traditionally more of a management tool that provided central administration of the cluster. Outages were not that big of a deal, and impacted (mostly) the Admin team. Now, with all sorts of various plugins (such as VAAI for NFS) and applications (such as VMware View) relying on vCenter, it may be a good time to analyze your vCenter’s availability and adjust your plans.
While there are many different ways to accomplish this, I thought it was worth covering one of the simpler methods: Log Shipping. A cost-effective and relatively easy way to provide both a backup of the database and a warm standby of the SQL instance, Log Shipping is basically what it sounds like: transaction logs are copied over to a staging folder on a defined interval, and those transactions are then restored to the secondary database on another defined interval. In the event of a disaster affecting your database, you can recover the secondary database (recover is a fancy word for “make it active and available for changes”) and point your vCenter server to it.
This post will cover the process to get Log Shipping configured to protect your vCenter database in Standby (Read Only) mode, so that you can query the secondary database to validate that it continues to receive updates. The alternative is No Recovery mode, which I’m not as much of a fan of, since you cannot peek into the DB.
I also have covered using a SQL Maintenance Plan if you want just a backup.
Initial Requirements
In order to start working with SQL Log Shipping, you’ll need the following items.
SQL Servers
A primary (production) and secondary (disaster recovery) SQL server. The secondary server will not need any additional SQL licensing if it is only being used as a Log Shipping target (license guide PDF). If you have users hitting the server for other reasons (other databases, reporting server, etc.) it will need to be licensed.
You will also need a Windows license for both servers.
In the example scenario I cover here, I am using two servers running SQL Server 2008 R2 Enterprise on Windows Server 2008 R2 Data Center Edition. You do not need SQL Enterprise or Windows Data Center edition to do Log Shipping; these are just what I have in the lab.
Log Shipping Folders
Create a folder on the primary server and share it out so that the secondary server can reach it. This is where transaction logs will be written to by the primary and pulled from the secondary.
Make sure the service account (next section) has the ability to access the shared folder with Change permissions or higher. In my lab, I’ve simply given all Domain Admins the Full Control permission (a tad overkill in my opinion).
Additionally, make a folder on the secondary server to store transaction logs that were pulled over. This will be a local folder that is not shared.
Service Accounts
A domain service account for the SQL Server and SQL Server Agent services. Make sure it has permissions to modify contents of the shared folder (previous section) and modify the vCenter database on both servers. It may be easiest to make it a sysadmin for the initial configuration, then revoke that permission when finished.
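If you prefer to grant the database permissions with T-SQL rather than through the GUI, a minimal sketch looks like the following. The account name `LAB\svc-sqlagent` is a made-up example from my lab; substitute your own domain service account, and run it on both servers (the vCenter database won’t exist on the secondary until the wizard restores it, so run that portion there afterward).

[sourcecode language="sql"]-- Example only: LAB\svc-sqlagent is a hypothetical service account name.
USE master
GO
CREATE LOGIN [LAB\svc-sqlagent] FROM WINDOWS
GO
USE vCenter
GO
CREATE USER [LAB\svc-sqlagent] FOR LOGIN [LAB\svc-sqlagent]
GO
-- db_owner is a simple catch-all for the initial configuration;
-- you can tighten this up once Log Shipping is working.
EXEC sp_addrolemember N'db_owner', N'LAB\svc-sqlagent'
GO[/sourcecode]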
Configuring Log Shipping
In this scenario, I have two SQL Servers: SQL (primary, production) and SQL2 (secondary, disaster recovery). I find it easiest to open the SQL Server Management Studio on the primary server and then connect to both servers.
Assuming you have completed the Initial Requirements steps above (setting the SQL services to run as the domain service account, creating a folder on both servers, and sharing the folder on the primary server) you should be ready to begin the Log Shipping wizard.
Find your vCenter Database (mine is called vCenter) in the list of Databases. Right click it and select Properties. Navigate to the “Transaction Log Shipping” page.
Check the box next to “Enable this as a primary database in a log shipping configuration”. This will light up the “Backup Settings…” button.
Click the “Backup Settings…” button. This will bring up a number of fields that need values.
- Network path: Enter the network path to your shared folder on the primary server.
- Local path: The local path to the same shared folder on your primary server.
- Delete files older than: How many hours to keep transaction logs on your primary server. I went with 6 hours in my lab (to save space). You may want this to be 24 hours or longer, depending on how long the secondary server may possibly be down for maintenance or whatnot.
- Set backup compression: Compression greatly decreases file size, so if you have this option, it may be best to enable it (Compress backup). This will increase the CPU usage on your primary server during the backups, but reduce disk space; ultimately, the choice is yours as to which resource is more scarce (CPU or disk).
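For reference, the wizard settings above correspond roughly to the `sp_add_log_shipping_primary_database` stored procedure if you’d rather script the primary side. This is a sketch using my lab’s names (server `SQL`, share `logship`, 6-hour retention); swap in your own values.

[sourcecode language="sql"]-- Run on the primary server. Paths, share, and job name are lab examples.
EXEC master.dbo.sp_add_log_shipping_primary_database
    @database = N'vCenter',
    @backup_directory = N'E:\logship',        -- local path to the shared folder
    @backup_share = N'\\SQL\logship',         -- network path the secondary pulls from
    @backup_job_name = N'LSBackup_vCenter',
    @backup_retention_period = 360,           -- minutes (6 hours)
    @backup_compression = 1                   -- 1 = compress backup
GO[/sourcecode]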
Click OK, then navigate to the Secondary databases section and click “Add…”
The options you want to set here are:
- Secondary server instance: Click the “Connect…” button and enter the credentials to connect to your secondary server.
- Secondary database: Enter the name you want for your secondary database. I went with the same name as the primary.
On the Initialize Secondary Database tab:
- Choose the first “Yes…” option, as it will create a new full backup and restore it for you. Very handy.
On the Copy Files tab:
- Destination folder: This is the local path on your secondary server that will be used for copying over transaction logs. Use a local path (I used E:\logship on both servers to keep things easy).
- Delete copied files after: Pick a time that is greater than the time set on the primary server.
Finally, on the Restore Transaction Log tab:
- Database state when restoring backups: Select Standby Mode, which puts the database in a read only state that allows queries. Using No Recovery mode will place the database in a state where it is not readable until restored.
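The secondary-side settings from these three tabs map roughly to two stored procedures, `sp_add_log_shipping_secondary_primary` and `sp_add_log_shipping_secondary_database`, if you want to script them instead. A sketch using my lab names (primary `SQL`, secondary `SQL2`, folder `E:\logship`):

[sourcecode language="sql"]-- Run on the secondary server (SQL2). Server and path names are lab examples.
EXEC master.dbo.sp_add_log_shipping_secondary_primary
    @primary_server = N'SQL',
    @primary_database = N'vCenter',
    @backup_source_directory = N'\\SQL\logship',    -- shared folder on the primary
    @backup_destination_directory = N'E:\logship',  -- local folder on the secondary
    @copy_job_name = N'LSCopy_sql_vCenter',
    @restore_job_name = N'LSRestore_sql_vCenter',
    @file_retention_period = 720                    -- minutes; longer than the primary's
GO

EXEC master.dbo.sp_add_log_shipping_secondary_database
    @secondary_database = N'vCenter',
    @primary_server = N'SQL',
    @primary_database = N'vCenter',
    @restore_mode = 1,       -- 1 = Standby (read only); 0 = No Recovery
    @disconnect_users = 1    -- drop read-only sessions so restores can run
GO[/sourcecode]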
Click OK. Optionally, you can set a Monitor server instance to watchdog over the Log Shipping process. I chose to use the primary server for this task.
Click OK to finish the Log Shipping wizard. A window will open showing the status of the process.
Check the Log Shipping Jobs
At this point I suggest checking all the jobs manually to ensure that all is working properly.
Primary Server
On the primary server, check to make sure the new job appears on the SQL Server Agent. It will be listed under SQL Server Agent > Jobs. In my case, the job was “LSBackup_vCenter”. I then run the job manually by right clicking on the job and choosing “Start Job at Step…“. It should be successful and you can see at least 1 transaction log dumped into the shared folder (possibly more if the job already ran on the schedule).
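If you’d rather kick off and inspect the job from a query window instead of the Object Explorer, the standard `msdb` job procedures do the same thing. The job name below is what the wizard generated in my lab; yours may differ.

[sourcecode language="sql"]-- Run on the primary server.
EXEC msdb.dbo.sp_start_job @job_name = N'LSBackup_vCenter'
GO
-- Review recent run history for success/failure details.
EXEC msdb.dbo.sp_help_jobhistory @job_name = N'LSBackup_vCenter'
GO[/sourcecode]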
Secondary Server
On the secondary server, make sure the new vCenter database appears with the description of “(Standby / Read-Only)” next to the name.
Then, check to make sure you have two new jobs on the secondary server: “LSCopy_sql_vCenter” and “LSRestore_sql_vCenter” (names will change based on your environment’s names). Run both jobs manually and verify that they complete successfully, and investigate the local transaction log folder to locate the copied transaction logs.
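You can also confirm the standby state with a quick query against `sys.databases`, and prove the database is readable by selecting from it. `VPX_HOST` is one of the vCenter schema’s tables in my environment; any table you know exists will do.

[sourcecode language="sql"]-- Run on the secondary server. is_in_standby should come back as 1.
SELECT name, state_desc, is_read_only, is_in_standby
FROM sys.databases
WHERE name = N'vCenter'
GO
-- The standby copy accepts read-only queries, e.g. a simple row count:
SELECT COUNT(*) FROM vCenter.dbo.VPX_HOST
GO[/sourcecode]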
Congratulations, you’ve successfully set up Log Shipping to protect your vCenter database.
Restoring the Database
Because Log Shipping isn’t a cluster, the database on the secondary server has no idea if the primary has failed. You will have to invoke a restore manually.
To do this, first disable the copy and restore jobs on the secondary server. Right click on the two jobs and choose Disable.
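Disabling the jobs can also be done with `sp_update_job` if you’re already in a query window. The job names below are from my lab and will vary with your server names.

[sourcecode language="sql"]-- Run on the secondary server; @enabled = 0 disables the job.
EXEC msdb.dbo.sp_update_job @job_name = N'LSCopy_sql_vCenter', @enabled = 0
GO
EXEC msdb.dbo.sp_update_job @job_name = N'LSRestore_sql_vCenter', @enabled = 0
GO[/sourcecode]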
Then, restore the database so that it becomes writable. I use this very simple query.
[sourcecode language="sql"]USE master
GO
ALTER DATABASE vCenter SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
-- WITH RECOVERY rolls back any uncommitted transactions and brings
-- the standby database fully online and writable
RESTORE DATABASE vCenter WITH RECOVERY
GO
ALTER DATABASE vCenter SET MULTI_USER
GO[/sourcecode]
Additional Informative Resources
If you want to read more on Log Shipping, here are some helpful links that I have bookmarked:
- How to: Enable Log Shipping (SQL Server Management Studio)
- Failing Over to a Log Shipping Secondary
- RESTORE (Transact-SQL)
- Using Warm Standby Servers
- Restore With Standby (Includes Video)
Thoughts
While not quite as robust as SQL Mirroring or Clustering, Log Shipping is a very straightforward and relatively cheap method of offering some disaster recovery to your vCenter database. It can also serve as a backup of the database, but it doesn’t necessarily guard against corruption: the most recently shipped log may contain the corruption and be restored into the secondary database. You can slow the schedule on the restore job to help combat this if you wish. Have you used Log Shipping in your environment, and if so, what are your thoughts? Good experiences, bad experiences, or better ways to accomplish this are all welcome.
Additionally, I plan to write a follow up to this post with details on how to switch your vCenter server over to the secondary database if the primary goes offline.