Windows Home Server as a backup server

In case you don’t know: [MS] Windows Home Server (WHS) went RTM a while ago. The problem is that you can’t find it in a store yet (unless you are located in New Zealand), nor does it appear on MSDN due to some odd decision. For now the only way to get it is through Connect, where you can apply for the RC version.

So, why would one bother with WHS? There are actually plenty of reasons, but I have to point out the most appealing one, at least to me: its backup system combined with its RAID-like storage system.

  1. RAID like storage system
    This is a kind of software RAID, but not exactly. While RAID requires an array of disks of the same type, this isn’t true for WHS. Instead you can mount whatever disks you like (internal or external), as many as you can connect to your computer, and WHS will add them to its storage system. You can add additional disks later. The failover functionality is achieved by balancing the data among more than one disk – data is always duplicated somewhere (you can decide which folders require duplication) – and this is done automatically by a “balancing” process that fires up now and then. If duplication can’t be achieved (perhaps due to limited disk space), WHS will notify you that there is a problem. Balancing is perhaps slower than RAID, but speed doesn’t matter that much for a backup/storage system. The big plus is that disks are a lot easier to manage and you are freer in your choice of disks – just add whatever disk you want (RAID requires identical disks on the same bus – if a disk fails after several years, it can be hard to find a replacement).
  2. Backup system
    You install the WHS client software (Connector) on the target computer and that’s it. WHS will back up the computer’s disks (you can specify which ones) daily, at a predefined time, to the server. If a disk in the target computer fails, you replace it and restore the system. This would be very similar to what other backup software, such as Acronis True Image, offers – except that WHS is more intelligent and conservative with the storage space used. It boasts a great feature: a duplicated file doesn’t get stored twice. What does this mean? Imagine you have two computers with the same OS. If you run WHS backup on both, only one copy of each identical file (since the same file resides on both computers) will actually be stored. In plain words, this feature greatly reduces the storage space required on the server. Needless to say, I love this feature. Ah, and it doesn’t suffer from things like incremental-file merging. It is also as fast as one would imagine.
    So that’s all lovely, but there are some problems, too. Currently the client software won’t run on a 64-bit OS (again, this shows how little 64-bit is really being pushed). A proper client is supposed to arrive later.
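To illustrate the single-instance idea behind the backup system, here is a toy sketch (purely hypothetical – not WHS’s actual implementation): files are keyed by a hash of their content, so identical files backed up from different computers occupy storage only once.

```python
import hashlib

class DedupStore:
    """Toy single-instance store: identical content is kept only once."""
    def __init__(self):
        self.blobs = {}    # content hash -> file bytes (stored once)
        self.catalog = {}  # (computer, path) -> content hash

    def backup(self, computer, path, data):
        digest = hashlib.sha256(data).hexdigest()
        # Store the bytes only if this exact content is new.
        self.blobs.setdefault(digest, data)
        self.catalog[(computer, path)] = digest

    def used_bytes(self):
        return sum(len(b) for b in self.blobs.values())

store = DedupStore()
# The same OS file backed up from two machines...
store.backup("pc1", r"C:\Windows\notepad.exe", b"same contents")
store.backup("pc2", r"C:\Windows\notepad.exe", b"same contents")
# ...occupies the space of a single copy.
print(store.used_bytes())  # 13 bytes, not 26
```

With thousands of identical OS and application files across a household’s computers, this is exactly where the big space savings come from.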

Did I mention that WHS actually runs on top of the Windows 2003 OS? The logical question is: will there be add-in software for Windows 2003 that provides both features mentioned above? I don’t see any “physical” problem. Political decisions… that’s another world.

Anyway, I am currently testing WHS on my Windows 2003 x64 server. How? Simply – I am using [VMWare] Server with a dedicated 750GB disk. A word of caution here: the WHS storage system went FUBAR on me at one point. I am not sure how, but I think it was due to OS hibernation. It required a WHS reinstallation, but luckily the backup data survived (it seems only the system partition had a problem). Note also that [VMWare] Server doesn’t officially support WHS yet. I bet they’ll add official support soon.
So, that’s it. I see WHS as a perfect backup system for my work/home network (WHS is a lot more, I just looked at it from the backup server perspective). Flexible, powerful, yet easy to manage. I just miss the same features in Windows 2003, and an x64 client. Longhorn perhaps?


Team Foundation Server 2008 and all of the 64 bits

[MS] has lately been trying to go 64-bit with their OSes. Vista x64 isn’t going anywhere due to the lack of 3rd-party drivers (hey, it is hard to obtain even 32-bit drivers for Vista; you can forget about 64-bit for now). At least that’s my experience. But Vista is not a server OS. The server transition to 64 bits is a different story. Servers need huge amounts of RAM, and you can use more than 4GB only with a 64-bit (or more) OS (running on a 64-bit CPU, of course). Though the truth is that with the current PC architecture you can hardly use even 3.5GB on an x86 OS. Let’s get back to servers. Servers usually use serious, expensive hardware, not the desktop junk we all use. That hardware comes with better drivers, and 64-bit drivers should be more easily accessible. The bottom line is that servers are more motivated to go 64-bit while desktops aren’t (one of the reasons being that there is no need for such huge memory sizes, at least for an average user).
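The 4GB ceiling is simple arithmetic – a 32-bit address space covers 2^32 bytes, and chipsets map device apertures (PCI, graphics) below that boundary, which is where the ~3.5GB usable figure comes from. The 512MB reservation below is illustrative; the actual amount varies by board:

```python
# A 32-bit address space can distinguish 2**32 byte addresses.
addressable = 2**32
print(addressable)           # 4294967296
print(addressable / 2**30)   # 4.0 GiB

# Chipsets map device address space below the 4GB boundary, so on a
# typical 2007-era board a chunk of RAM is shadowed and unusable.
reserved = 512 * 2**20       # assume ~512MB reserved (illustrative)
usable = addressable - reserved
print(usable / 2**30)        # 3.5 GiB visible to a 32-bit OS
```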

Anyway, with such a push for a 64-bit OS one would think that [MS] would provide their server products in both 32- and 64-bit flavors (if not just 64-bit). But hey, one of their big server products, Team Foundation Server 2005, won’t run on a 64-bit OS. No way, its application tier needs good old x86. WTF? [MS] is pushing for 64 bits but at the same time their servers won’t run on a 64-bit OS. OK, that was in the past, and the only excuse I saw at the time was the lack of time for testing. Fast forward three years and meet TFS 2008 (currently in beta 2). So far I haven’t found any indication (feature list posted here) that it is going to fully support a 64-bit OS; even worse, I think I saw the same limitations in the help file. I really hope it will support a pure 64-bit OS, otherwise [MS] will just shoot itself in the other foot. I wonder what the excuse will be this time…


Vista fixed?

I saw on Matevz’s blog that [MS] released a couple of Vista pre-SP1 reliability updates. Of course I installed them both without hesitation (you should install them only if they fix a problem you have; see the KBs for the performance update and the reliability update). Soon after, I found a funny new bug. If you read what’s been improved or fixed, you will see this line in the KB article for the performance update:

“When you copy or move a large file, the “estimated time remaining” takes a long time to be calculated and displayed.”

This was one of my problems before the updates. Now, it really calculates the remaining time almost immediately. However, after a large copy to a network disk, this is what happens:



Note that 2,13GB of data is remaining, yet it calculates that it’ll take just about 0 seconds. Now, that’s a performance update – according to the calculation my disk is now light-speed fast!
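For what it’s worth, here is one speculative way such an estimate can collapse to zero: if the dialog naively divides remaining bytes by the most recent throughput sample, a burst absorbed by the OS write cache makes the sampled rate absurdly high. All numbers below are made up purely for illustration – this is a guess at the failure mode, not Vista’s actual code:

```python
def eta_seconds(bytes_remaining, recent_rate):
    """Naive estimate: remaining bytes / most recent throughput sample."""
    return bytes_remaining / recent_rate

remaining = 2.13 * 2**30        # 2.13GB still to copy
# Writes landing in the OS cache complete almost instantly, so a
# sampled "rate" can be absurdly high for a moment...
cached_burst_rate = 50 * 2**40  # 50TB/s (made-up burst figure)
print(round(eta_seconds(remaining, cached_burst_rate), 3))  # 0.0 seconds
```

A smoothed average over the whole transfer would avoid this, at the cost of reacting slowly – which sounds a lot like the behavior the update was supposed to fix.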


The good and the bad about Acronis True Image

I have been using Acronis True Image Workstation 9.1 (TI) for a couple of months now (I do disk-image backups of my computers) and have both good and bad news.

Perhaps I should start with the good. I had hoped that I would never really need the restore functionality. I was wrong. Today I installed [VMWare] Server on the same Windows 2003 machine that already had [MS] Virtual Server 2005 R2 SP1 installed. They mostly work side by side. It is just that the Virtual Server management web application broke for some reason after [VMWare] Server was installed (spitting Service Unavailable in the web browser), preventing me from doing any administration. Fortunately I was smart enough to foresee a possible issue and had made a disk-image backup of my server’s OS disk (using TI, of course). Because I didn’t have days to find the cause of and solution to the mentioned problem, I decided to simply restore the disk as it was just before the [VMWare] Server installation. And it worked as smoothly as it should. My server is back in its full glory. Great work, Acronis.

Now the bad side of TI. It sucks for backing up large disk images, such as my 600GB one. Actually, the backup itself works fine and is relatively fast (~4hrs over a 1Gb switch for my disk – my backup storage is a 750GB disk on the server), and subsequent incremental backups are many times faster (depending on how many changes there are to back up). So far so good. The big problem lies in the fact that the backup storage isn’t infinitely large, meaning you have to merge the full backup with subsequent backup(s) sooner or later. I.e., I have a full backup (~250GB) and 7 incremental ones (each ~10GB) and there is no space for the next incremental backup. Thus the full backup is consolidated with the oldest incremental, and so on, until there is enough space (you can configure various parameters). That is still fine. But here is the showstopper: consolidation of such a huge backup is unbelievably slow. It literally takes days, if not weeks, and makes the whole disk-image backup process useless. Unfortunately. A workaround would be to mount another backup disk (duplication!) and from time to time (before consolidation happens) move the current full and incremental backup files there. This workaround has downsides:

  • yet another disk (cost, power usage, heat, noise, …) to have 
  • forcing more full backups (they aren’t exactly fast, remember – 4hrs).
  • more administration
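The consolidation step itself can be illustrated with a toy model (hypothetical – not Acronis’s actual on-disk format): each incremental records only the changed blocks, and folding the oldest incremental into the full backup means rewriting the full image, which is why it is so painfully slow for a ~250GB archive.

```python
def consolidate(full, incrementals):
    """Merge the oldest incremental into the full backup.

    full: dict of block_index -> data; incrementals: list of such dicts,
    oldest first. Rewriting the (huge) full image is the expensive part
    in the real world.
    """
    oldest = incrementals.pop(0)
    full.update(oldest)  # changed blocks overwrite the old ones
    return full, incrementals

full = {0: "a0", 1: "b0", 2: "c0"}     # toy "full backup" of 3 blocks
incs = [{1: "b1"}, {2: "c2"}]          # two incrementals, oldest first
full, incs = consolidate(full, incs)
print(full)  # {0: 'a0', 1: 'b1', 2: 'c0'}
print(incs)  # [{2: 'c2'}]
```

The dict update is instant here, but on disk the equivalent operation rereads and rewrites hundreds of gigabytes – hence the days-long consolidation.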

Yuck, eh? So I am reconsidering my backup strategy. Luckily, [MS] just RTMed Windows Home Server (WHS), which includes great backup technology (a topic for another post) – as if it was made for my requirements :-). It is a shame, though, that this backup technology isn’t available as an add-in for Windows 2003. More about WHS and its backup capabilities in another post.