
Desires of a Backup System by Bob Hyatt

You invited me to write more on this topic. I believe I will do so, but in parts, rather than as a 20-page university paper. I will start with an overview of the ‘desired’ home server backup versus what seems to be available.

For most homes and small businesses, the primary desires / goals are (in my view):

A. Secure backups. In this case I will add: secure from failure for at least 5 years!

B. Backups that are easy to do. I do NOT mean automatically scheduled backups. Those automatically back up things you are NOT interested in, or that will be useless to you if there is a problem. For example, do you really want to back up all the operating system files, settings, and so on? Things that change a lot, and will occupy a lot of space on your backup drives? I do NOT.

C. Allow you to determine what is important to You!
Not every photo or video is important to you. Not every PDF is valuable to you. Not every MP3 or FLAC is valuable to you. So you need a way to differentiate between the valued data and the chaff, and for the ‘tool’ to remember that choice. Or you keep the data you want to keep separate from the chaff, and have the ‘tool’ look only at the desired data. (A small sketch of this idea follows the list.)

D. Allow you to add storage, as needed, easily. Adding a new hard drive should not be ‘a hope and a prayer’ kind of task.

E. If I should desire to change from one backup product or tool to another, the existing one should NOT act as a barrier or gatekeeper (as in keeping your data prisoner).

F. The running of the backup device or server should NOT wear out the disks, ever! The issue with the video I described in the earlier ‘letter’ here was that the one that was always busy was attempting to make the ‘distribution’ of data on the drives “perfect”.
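
To make item C concrete, here is a minimal sketch in Python of the "keep the valued data separate and have the tool look only at that" idea. The paths and file types are only examples, and a real tool would remember these choices for you.

    import pathlib
    import shutil

    # Hand-maintained lists: everything not listed here is treated as chaff.
    # These folders and extensions are examples, not a recommendation.
    KEEP_DIRS = [r"C:\Users\me\Photos\keepers", r"C:\Users\me\Documents"]
    KEEP_EXTS = {".jpg", ".cr2", ".pdf", ".flac"}
    DEST = pathlib.Path(r"E:\backup")          # e.g. an external or network drive

    for d in KEEP_DIRS:
        for src in pathlib.Path(d).rglob("*"):
            if src.is_file() and src.suffix.lower() in KEEP_EXTS:
                target = DEST / src.relative_to(src.anchor)   # mirror the folder layout
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, target)                     # copy2 preserves timestamps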

If you are a corporation or a government agency, such wearing out of the disks would be considered a ‘cost of doing business’, and they have the budget for it. A home user or a small office does NOT have a budget for this willful destruction. Further, “perfect distribution” is NOT what a home user or small office is looking for. They are looking for a safe place to put their data; the exact location is not important.

Home or small business users will seldom read the backups. Perhaps not ‘seldom’, but the reality is that you will do plenty of writes to update the backup. In some cases you might do more reads than writes (if you share the data with media players, other computers on your network, etc.).

One more difference between the products most sold in this arena: large organizations are also looking for quick access to the data. For the home and small business, this kind of device is used for backup or archival purposes, whereas large organizations treat the data put onto such a device as production data and expect lots of access and lots of updates.

SAN and/or NAS systems in large organizations are for processing production data, not for backups. So the issues begin with a fundamental difference in how these devices are used, which is partly why the components tend to be expensive.

The good news for the home or small business is that the time-tested components used by the large organizations, while expensive, are likely to last longer than you will need them.

If I get some feedback on this introduction, I will introduce ‘task vs tool’ in the next ‘letter’ I write.

Designing the home NAS

In an earlier installment, I pointed out that popular branded solutions are surprisingly expensive for what reviews indicate is rather low-performing hardware. So for my comparison, I’ll use the Synology DiskStation DS1513+, which reportedly has good performance, more than a single-link gigabit Ethernet connection can handle.

It has quite a few things in common with the home-made solution: multiple Ethernet ports that can be used for redundancy or increased throughput, the ability to host related servers as “apps”, and an initial setup that is not friendly enough for novices to do on their own.

While I was doing this, the Synology DiskStation could be found for $830.  It contains a dual-core Atom D2700 running at 2.13 GHz and 2GB of DDR3 RAM.

Now, there are two ways to approach this.  Clearly a competent file server can run on low-end x86_64 processors with a small (by today’s desktop standards) amount of RAM.  The original FreeNAS was commonly used with hand-me-down hardware.

But times have changed. The new FreeNAS, a rewrite by iXsystems, was designed with more modern concerns in mind: RAM is much cheaper now, and the system can be more capable and easier to write if it doesn’t have to cope with low-RAM installations. In addition, the safety of ZFS against mysterious data corruption relies on the RAM not having mysterious corruption too, so it should be used with ECC RAM. Then come dire warnings about Windows file shares (CIFS) being single-threaded and thus needing a fast CPU (as opposed to multiple slower cores), and about features such as encryption demanding ever more CPU performance. Oh, and the Realtek NIC used on many consumer motherboards is not good for FreeNAS; it needs an Intel NIC.

In short, I’m looking at a server-grade system, not a typical desktop or “gamer” enthusiast system. What you don’t need is fancy overclocking support, sound, lots of slots, multi-video-card support, and so on, so a low-end server board is actually about the same price as a “fancy” desktop motherboard.

In particular, the Supermicro brand comes highly recommended. I could have gotten an X9-series server motherboard and put a Xeon E3 v2 CPU on it. But why stop there? I spent more to go with the newer X10-series board and a Xeon E3 v3 “Haswell” CPU. The X10SL7-F in fact contains an 8-channel SAS controller as well as the usual 6 SATA channels, sprouting a whopping 14 SATA connectors on the motherboard. It also features IPMI 2.0 on its own dedicated network port, which is a wonderful feature I’ll have more to say about later.

So without further ado, here is the breakdown of my build:

Parts List

Item Description   Price
ICY DOCK MB153SP-B 3 in 2 SATA Internal Backplane RAID Cage Module   $63.99
Intel Xeon E3-1245V3 Haswell 3.4GHz LGA 1150 84W Quad-Core Server Processor   $289.99
SUPERMICRO MBD-X10SL7-F-O uATX Server Motherboard   $239.99
SeaSonic SSR-360GP 360W ATX12V v2.31 80 PLUS GOLD Certified Active PFC Power Supply, New 4th Gen CPU Certified Haswell Ready   $59.99
Fractal Design Define R4 Black Pearl w/ USB 3.0 ATX Mid Tower Silent PC Computer Case   $99.99
ZALMAN CNPS5X Performa 92mm FSB (Fluid Shield Bearing) Powerful Cooling Performance CPU Cooler   $19.99
2 × 8GB PC3-12800 DDR3-1600MHz ECC Unbuffered CL11 Hynix Memory   $178.92
Total without drives   $952.86
WD Red WD40EFRX 4TB IntelliPower 64MB Cache SATA 6.0Gb/s 3.5" NAS Internal Hard Drive (bulk)   3 × $189.99 = $569.97
Seagate ST4000DM000 Desktop 4TB 64MB Cache   2 × $155.49 = $310.98
Total for Build   $1833.81

The raw power seriously outclasses the DiskStation, and is only $120 more.  With the X9/v2 option, it would have actually been less.

[Photo: Oort build, inside]

Above is the result, Oort, with the side open.  You can see the stack of 8 drive trays, and the large heat sink over the CPU.

[Photo: front view]

Here is a front view. The grille along the edges allows air intake from the front. The blank front face is imposing and mysterious… I wonder if I can get some artwork over it?

[Photo: front panel open]

And finally, with the front panel open.  There is foam sound-dampening on all the case surfaces including the inside of this door.  The ICY-Dock hot-swap bays are now accessible.  I plan to use these for backing up and mounting off-site volumes while they are resident.  The main drives require side access, which is simply a matter of removing two thumb screws.

Now back to the details. The X10 (rather than X9) series mainboard allows the use of the newer Haswell processors, which run cooler and save power. The onboard SAS saves what would be a hundred-dollar add-in card, and is much easier to deal with as well since it provides common SATA-compatible connectors. And finally, this motherboard has the wonderful IPMI 2.0 with full KVM-over-LAN.

For the CPU, I looked at the chart in Wikipedia, along with the prices and availability at NewEgg. I chose the lowest (cheapest) Xeon E3 that had onboard graphics and hyperthreading. Why do I need onboard graphics if the system doesn’t have a monitor? I think that the monitor-over-LAN feature still requires an actual VGA adapter; it doesn’t emulate one, but just captures its output. There is a more primitive remote management feature that allows for a TTY-style console (also over LAN), but I don’t think that helps with the initial BIOS screens. Also, with the standard built-in GPU I can use it for computation other than drawing graphics. Maybe it will accelerate other software I run on the box at some point.

I’m keeping the box in a closet which, besides building up heat from the machines, gets afternoon sun on its outside wall; the closet is warm in the summer. My experience with the stock cooler that comes with a CPU is that it’s loud or even inadequate. Browsing NewEgg, I looked for this style of cooler with low noise and a good price. I normally like this style in part because it takes a standard square fan which can be upgraded and replaced, but Zalman is known for quiet fans too. I mounted it not with the thermal grease it came with, but with Phobya HeGrease, carefully applied and spread.

The RAM was not available at NewEgg. Apparently memory that is ECC but not buffered/registered is uncommon. Buffering is used to facilitate having many more memory sticks on a board, which is not the case on this server-grade but desktop-sized board. I found the RAM at a specialty site, www.memoryamerica.com, which has a wide selection. To be on the safe side, I looked at the brands that Supermicro had tested on this board, and took the cheaper of the two. 16 GiB uses two of the four memory slots, so it can be doubled in the future.

I use Seasonic power supplies, and that’s another story.  I looked for “Haswell support”, which enables a new improved stand-by mode.

Now for the case: some mentions on the FreeNAS web forum led me to Fractal Design. I followed up by reading reviews and the manufacturer’s web site. There are a couple of models that are so similar that I wonder what the difference is! Since there is no direct explanation, it takes reading the specs very carefully and comparing the dimensions to spot the real differences. This R4 features an internal stack of 8 HDD trays (with anti-vibration mounting) plus two half-height 5¼″ external bays. If you include two SSDs stuck elsewhere, that is 13 drives total, which is nicely close to the motherboard’s support for 14.

I chose an option with two external bays so I could fit a 3-disk hot-swap backplane. Here I went with the name-brand ICY-Dock and a with-tray design, because I had trouble with a trayless one on Mercury. As it turns out, using the front-loading drive bays requires two mounting screws per drive, which is not very handy.

Worse, the “2 half-height bays” claim is a little exaggerated. It’s more like 1.95 half-height bays, as a large bulge protrudes into the area where the bottom bay should be. I had to remove the bottom piece of sheet metal from the ICY-Dock in order to squeeze it in; this also got rid of the normal mounting ears. I’ll make a bracket some day (a perfect job for a 3D printer), but it fits tightly and is not heavily used, so I left it without screws for the time being.

Other than that, assembling was easy and straightforward.  Testing proved interesting and adventuresome, and I’ll tell you about that later.

Network File Sharing

What are the ramifications of using NAS (Network Attached Storage) instead of DAS (Direct-Attached Storage)? Will I want to work directly off of it, or keep work locally and only use it for backing up? The answer will vary with the nature of the work, I think.

File Copy

Let’s start with simple file copying.  This was performed using the command line, not the File Explorer GUI.
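
The actual runs were plain command-line copies; as a rough illustration only, a timed copy of this sort could also be scripted like the Python sketch below (the source and destination paths, including the share name, are hypothetical).

    import pathlib
    import shutil
    import time

    src = pathlib.Path(r"D:\testdata\large")     # hypothetical local test set
    dst = pathlib.Path(r"\\oort\bench\large")    # hypothetical share on the NAS

    start = time.perf_counter()
    shutil.copytree(src, dst)                    # recursive copy of the whole tree
    elapsed = time.perf_counter() - start

    total_bytes = sum(f.stat().st_size for f in src.rglob("*") if f.is_file())
    print(f"{elapsed:.0f} s, {total_bytes / elapsed / 2**20:.1f} MiB/s")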

Large files

17 files totaling 273.6 GiBytes

Windows (CIFS) share              49 min 41 sec   94 MiBytes/second
NFS share
Different drive on same machine   48 min 2 sec    97 MiBytes/second

Small files

14,403 files in 1,582 directories totaling 2.53 GiBytes

CIFS share                        64 min 53 sec   683 KiBytes/second
Same drive                        3 h 25 min      216 KiBytes/second
Different drive on same machine   56 min 50 sec   780 KiBytes/second

For large files, the transfer rate of 94 MiBytes/second (or 98.6 MBytes/second) is respectable.  Everything I could find on real-world speed of Gigabit Ethernet is outdated, with home PCs being limited by the hard drive speed.  Note that I’m going through two cheap switches between the test machines.

The small-file case is two orders of magnitude slower!  This bears out the common wisdom that it’s faster to zip up a large collection of files first, and then transfer the zip file over the network, and unzip on the receiving side.
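
For example, a sketch of the zip-first approach (the paths are hypothetical) is just two calls: build the archive locally, then push one big file across the network at the large-file rate.

    import shutil

    # Zip locally first, then make one large sequential transfer
    # instead of ~14,000 small ones.
    archive = shutil.make_archive(r"C:\temp\smallfiles", "zip", r"C:\work\smallfiles")
    shutil.copy(archive, r"\\oort\bench\smallfiles.zip")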

I think that the speed is limited by the individual remote file open/close operations, which are slow on Windows, and the network adds latency if these are synchronous operations. The DAS (different drive on the same computer) is only 14% faster than the NAS in this case. The data transfer time of the file content is only 0.7% of the time involved. The real limiting factor seems to be the ~4 files or directories processed per second. That does not sound at all realistic, as I’ve seen programs process many more files than that; there must be some quantity after which it slows down to the observed rate. Since it is similar for DAS and NAS, it must be a Windows problem. I’ll have to arrange some tests using other operating systems later.
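
For anyone checking my figures, the arithmetic behind those two numbers is simple:

    # Small-file CIFS case: 14,403 files + 1,582 directories in 64 min 53 sec.
    objects = 14_403 + 1_582
    seconds = 64 * 60 + 53
    print(objects / seconds)       # ~4.1 files/directories handled per second

    # Time to move the bytes themselves at the large-file rate of 94 MiB/s:
    content_mib = 2.53 * 1024      # 2.53 GiB of content
    transfer = content_mib / 94    # ~28 seconds
    print(transfer / seconds)      # ~0.007, i.e. roughly 0.7% of the elapsed time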

Working with Files

Compiler

This is what I do all day.  What are the ramifications of keeping my work on the NAS, as compared with other options?

Compile Build Job

Project located on local drive (a regular directory or a VHD makes no difference)   4 min 9 sec
Project located on NAS, accessed via CIFS (normal Windows share)                     10 min 29 sec
Using an NFS share instead                                                           N/A

The Microsoft Visual Studio project reads a few thousand files, writes around 1500, reads those again, and writes some more.  When the project is located on a local drive, the CPU usage reads 100% most of the time, indicating that the job is CPU bound.

When the project is located on the NAS, the situation is quite different.  Given that the actual work performed is the same, the difference in time is due to file I/O.  And the extra I/O takes more time than the job did originally; that is, the time more than doubled.  It was observed that the CPU utilization was not maxed out in this case.  The file I/O dominated.

The same job was performed again immediately afterwards, giving the computer a chance to use cached file data it had already read recently.  That made no difference to the time.  It appears that with Windows (CIFS) shares, even on Windows 7 (the sharing protocol was significantly reworked as of Vista), file data is not cached in memory but is re-read each time it is needed.  That, or the “lots of small files speed limit”, or both, kills performance.
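
A crude way to check that caching suspicion, sketched below in Python (the UNC path is hypothetical), is to read the same file from the share twice in a row and compare the times.

    import time

    PATH = r"\\oort\projects\some\large\file.obj"    # hypothetical file on the CIFS share

    def timed_read(path):
        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(1 << 20):                   # read in 1 MiB chunks
                pass
        return time.perf_counter() - start

    print("first read :", timed_read(PATH))
    print("second read:", timed_read(PATH))          # no speedup suggests no client-side caching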

I tried to repeat that using a NFS share instead of the CIFS share.  However, I could not get it to work at all.  The Windows machine could see the file names and navigate the directories, but could not read any file.

Video Encoding

Encoding a video entailed reading one very large file and writing one smaller file.  The process’s performance metrics indicate reading only 2.5MB/s and writing merely 80KB/s.  I would not expect it to matter if the input file, output file, or both were on the NAS.

Likewise, video editing and programs like Photoshop will read in the files and maintain the contents in memory or manage their own overflow swap space (which you put on a local drive). It’s harder to do actual timing here, but the impression is that various programs are perfectly responsive when the files are on directly attached storage. If that changes when using the NAS instead, I’ll note the circumstances.

Caveat

All of the performance characteristics above are made with the assumption that the storage unit and the network links are all mine for the duration of the test.  If multiple people and pets in the household are using the NAS, you have the added issue of having to divide up the performance among the simultaneous users.

Note that FreeNAS does support link aggregation, so I could plug in two gigabit Ethernet cables if I replaced the switch with one that also understood aggregation.

I need a (home made) NAS!

I ran out of space on my RAID-5 array, which I built several years ago. At the time, 1TB drives were as large as you could get before the price increased disproportionately to the capacity. I bought 2 “enterprise” grade drives for around $250 each, and a consumer drive for half that. The usable capacity is 2TB because of the redundancy.

I decided I was not going to lose data ever again.  Having redundancy against drive failure is one component of this.  So, all my photos and system backups are stored there, along with anything else I want to keep safe, including Virtual Machine images.

It turns out that a lot of the space is taken by the daily backups.  Even with a plan that uses occasional full backups and mostly incremental backups, they just keep on coming.  I also need to better tune the quota for each backup task, but with multiple tasks on multiple machines there is no feature to coordinate the quotas across everything.
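
In the meantime, even something as simple as the following sketch (the backup folder layout is assumed, one subfolder per machine or task) can report how much space each backup task actually uses, to help tune the quotas by hand.

    import pathlib

    BACKUP_ROOT = pathlib.Path(r"E:\Backups")    # assumed layout: one folder per machine/task

    def folder_size(path):
        return sum(f.stat().st_size for f in path.rglob("*") if f.is_file())

    for task in sorted(BACKUP_ROOT.iterdir()):
        if task.is_dir():
            print(f"{task.name:24s} {folder_size(task) / 2**30:8.1f} GiB")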

Meanwhile, a new drive I installed had a capacity of 3TB by itself.  Lots of room for intermediate files and things I don’t need to keep safe.  But that’s more to be backed up!

Now I could simply replace the drives with larger ones, using the same directly attached controller and chassis space. But there are reasons for looking at Network Attached Storage. They even have Drobo NAS units at WalMart now, so it must be quite the mainstream thing.

Besides future upgradability to more drives (rather than just replacing the drives with larger ones again), better compatibility with the different operating systems used in the home, and specialized media services for “smart” TVs and tablets, a compelling reason for me is the reason I’m using RAID-5 in the first place: to preserve my data. As I’ve noted elsewhere, silent data loss is a bigger problem than generally realized, and it is actually growing. Individual files somehow go bad, and are never noticed until long after you have reused the backups from that period.

Direct-Attached Storage, simply having the drive in my main PC, limits the choice of file systems. In particular, Windows 7 doesn’t have full support for anything other than NTFS and the various flavors of FAT, and newer, more advanced file systems are only available on a few operating systems, as they are not widely used yet.

A file system that specifically maintains data integrity and guards against silent errors is ZFS. When I first learned about it, it was only available for BSD-family operating systems. A NAS appliance distribution built on FreeBSD, called FreeNAS, started up in 2005. More generally, someone could run a FreeBSD or Linux system with attached drives using ZFS or btrfs or whatever special thing is needed, and put that box on the network.
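
To illustrate the idea only (this is not how ZFS works internally; ZFS checksums every block automatically and can repair from its redundancy), here is a toy Python sketch that records a checksum for every file in an archive folder and flags anything that changes later. It only makes sense for data that is not supposed to change, and the folder path is hypothetical.

    import hashlib
    import json
    import pathlib

    ROOT = pathlib.Path(r"D:\archive")           # hypothetical folder of data to protect
    MANIFEST = ROOT / "checksums.json"

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    known = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    for f in ROOT.rglob("*"):
        if f.is_file() and f != MANIFEST:
            rel, digest = str(f.relative_to(ROOT)), sha256(f)
            if known.get(rel) not in (None, digest):
                print("changed since last scan (possible silent corruption):", rel)
            known[rel] = digest
    MANIFEST.write_text(json.dumps(known, indent=2, sort_keys=True))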

As I write this, a Drobo 5N (without disks) sells for $550. It reportedly uses a multi-core ARM CPU, and is still underpowered according to reviews. Most 2-disk systems seem to use one of two ARM-based SoCs that cost about $30. Now you could put something like the Addonics RAID-5 SATA port multiplier ($62) on that to control more disks at a low price. Most 5-disk home/SOHO NAS systems seem to be based on x86 Atom boards.

Anyway, if you used hand-me-down hardware, such as the previous desktop PC you just replaced with a newer model, you’d have a much more powerful platform for free. Buying a modest PC motherboard, CPU, and RAM for the purpose (supposing you had a case and power supply lying around) could be done for … (perusing the NewEgg website for current prices) … $225.

So basically, if you know what you’re doing (or can hire someone to do it for you for a hundred dollars) you can get hardware substantially more powerful for a fraction of the price.

Being an enthusiast who’s never bought a pre-made desktop PC, it’s a no-brainer for me to put something together from parts, even if it only had the features of these home/SOHO appliances that have become so common, and even if I don’t re-use anything I have on hand.

But, none of the NAS boxes I see advertised discuss anything like the silent data corruption problem.  They don’t say what kind of file system is being used, or how the drives might be mounted on a different system in the event of a board (not drive) failure if a replacement for the exact model is no longer available.  I would think that if a NAS had advanced data integrity features then it would feature prominently in the advertising.  So, build I must, to meet the requirements.

In future posts I’ll discuss the silent corruption problem at more length, and of course show what I actually built.  (I’ve named the NAS server OORT, by the way.)