Author Archives: JDługosz

Designing the home NAS

In an earlier installment, I pointed out that the popular branded solutions are surprisingly expensive for what reviews indicate is rather low-performing hardware.  So for my comparison I’ll use the Synology DiskStation DS1513+, which reportedly has good performance: more than a single-link gigabit Ethernet connection can handle.

It has quite a few things in common with the home-made solution:  multiple Ethernet ports that can be used for redundancy or increased throughput, the ability to host related servers as “apps”, and an initial setup that is not quite user-friendly enough for novices.

While I was doing this, the Synology DiskStation could be found for $830.  It contains a dual-core Atom D2700 running at 2.13 GHz and 2GB of DDR3 RAM.

Now, there are two ways to approach this.  Clearly a competent file server can run on low-end x86_64 processors with a small (by today’s desktop standards) amount of RAM.  The original FreeNAS was commonly used with hand-me-down hardware.

But, times have changed.  The new FreeNAS, a rewrite by iXsystems, was designed with more modern concerns in mind:  RAM is much cheaper now, and the system can be more capable and easier to write if it doesn’t have to cope with low-RAM installations.  In addition, the safety of ZFS against mysterious data corruption relies on the RAM not having mysterious corruption too, so it should be used with ECC RAM.  Then come dire warnings about Windows file shares (CIFS) being single-threaded and thus needing a fast CPU (as opposed to multiple slower cores), and features such as encryption demanding ever more CPU performance.  Oh, and the Realtek NIC used on many consumer motherboards is not good for FreeNAS; it needs an Intel NIC.

In short, I’m looking at a server-grade system, not a typical desktop or “gamer” enthusiast system.  What you don’t need is fancy overclocking support, sound, lots of slots, multi-video-card support, and so on, so a low-end server board is actually about the same price as a “fancy” desktop motherboard.

In particular, the Supermicro brand comes highly recommended.  I could have gotten an X9-series server motherboard and put a Xeon E3 v2 CPU on it.  But why stop there?  I spent more to go with the newer X10-series board and a Xeon E3 v3 “Haswell” CPU.  The X10SL7-F in fact contains an 8-channel SAS controller as well as the usual 6 SATA channels, sprouting a whopping 14 SATA connectors on the motherboard.  It also features IPMI 2.0 on its own dedicated network port, which is a wonderful feature that I’ll have more to say about later.

So without further ado, here is the breakdown of my build:

Parts List

Item Description | Price
ICY DOCK MB153SP-B 3-in-2 SATA Internal Backplane RAID Cage Module | $63.99
Intel Xeon E3-1245 v3 Haswell 3.4 GHz LGA 1150 84 W Quad-Core Server Processor | $289.99
SUPERMICRO MBD-X10SL7-F-O uATX Server Motherboard | $239.99
SeaSonic SSR-360GP 360 W ATX12V v2.31 80 PLUS Gold Certified Active PFC Power Supply (Haswell ready) | $59.99
Fractal Design Define R4 Black Pearl w/ USB 3.0 ATX Mid Tower Silent PC Case | $99.99
Zalman CNPS5X Performa 92 mm FSB (Fluid Shield Bearing) CPU Cooler | $19.99
2 × 8 GB PC3-12800 DDR3-1600 ECC Unbuffered CL11 Hynix Memory | $178.92
Total without drives | $952.86
WD Red WD40EFRX 4 TB IntelliPower 64 MB Cache SATA 6.0 Gb/s 3.5″ NAS Internal Hard Drive (bulk) | 3 × $189.99 = $569.97
Seagate ST4000DM000 Desktop 4 TB 64 MB Cache | 2 × $155.49 = $310.98
Total for build | $1,833.81

The raw power seriously outclasses the DiskStation, and is only $120 more.  With the X9/v2 option, it would have actually been less.

[Photo: Oort build, inside]

Above is the result, Oort, with the side open.  You can see the stack of 8 drive trays, and the large heat sink over the CPU.

[Photo: front view]

Here is a front view.  The grille along the edges allows air intake from the front.  The blank front face is imposing and mysterious… I wonder if I can get some artwork over it?

[Photo: front panel open]

And finally, with the front panel open.  There is foam sound-dampening on all the case surfaces including the inside of this door.  The ICY-Dock hot-swap bays are now accessible.  I plan to use these for backing up and mounting off-site volumes while they are resident.  The main drives require side access, which is simply a matter of removing two thumb screws.

Now back to the details.  The X10 (rather than X9) series mainboard allows the use of the newer Haswell processors, which run cooler and save power.  The onboard SAS saves what would otherwise be a hundred-dollar add-in card, and is much easier to deal with as well, since it provides common SATA-compatible connectors.  And finally, this motherboard has the wonderful IPMI 2.0 with full KVM-over-LAN.

For the CPU, I looked at the chart in Wikipedia, along with the prices and availability at NewEgg.  I chose the lowest (cheapest) Xeon E3 that had onboard graphics and hyperthreading.  Why do I need onboard graphics if the system doesn’t have a monitor?  I think the monitor-over-LAN feature still requires an actual VGA device; it doesn’t emulate one, but just captures its output.  There is a more primitive remote-management feature that allows for a TTY-style console (also over LAN), but I don’t think that helps with the initial BIOS screens.  Also, the built-in GPU can be used for computation other than drawing graphics; maybe it will accelerate other software I run on the box at some point.

I’m keeping the box in a closet which, besides building up heat from the machines, gets afternoon sun on the outside wall, so it is warm in the summer.  My experience with the stock cooler that comes with a CPU is that it’s loud or even inadequate.  Browsing NewEgg, I looked for this style of cooler with low noise and a good price.  I normally like this style in part because it takes a standard square fan, which can be upgraded and replaced, but the Zalman is known for quiet fans too.  I mounted it, not with the thermal grease it came with, but with Phobya HeGrease, carefully applied and spread.

The RAM was not available at NewEgg.  Apparently ECC RAM that is unbuffered (not Registered) is uncommon; buffering is used to allow many more memory sticks on a board, which is not the case for this desktop-sized server board.  I found it at a specialty RAM site, www.memoryamerica.com, which has a wide selection.  To be on the safe side, I stuck to the brands that Supermicro had tested on this board, and took the cheaper of the two.  16 GiB uses two of the four memory slots, so it can be doubled in the future.

I use Seasonic power supplies, and that’s another story.  I looked for “Haswell support”, which enables a new improved stand-by mode.

Now for the case:  Some mentions on the FreeNAS web forum led me to Fractal Design.  I followed up by reading reviews and the manufacturer’s web site.  There are a couple of models that are so similar that I wonder what the difference is!  Since there is no direct explanation, it takes reading the specs very carefully and comparing the dimensions to spot the real differences.  This R4 features an internal stack of 8 HDD trays (with anti-vibration mounting) plus two half-height 5¼″ external bays.  If you include two SSDs stuck elsewhere, that is 13 drives total, which is nicely close to the motherboard’s support for 14.

I chose an option with two external bays so I could fit a 3-disk hot-swap backplane.  Here I went with the name-brand ICY DOCK and a with-tray design, because I had trouble with a trayless unit on Mercury.  As it turns out, loading a drive into the front bay requires two mounting screws, which is not very handy.

Worse, the “2 half-height bays” claim is a little exaggerated.  It’s more like 1.95 half-height bays, as a large bulge protrudes into the area where the bottom bay should be.  I had to remove the bottom piece of sheet metal from the ICY DOCK in order to squeeze it in; this also got rid of the normal mounting ears.  I’ll make a bracket some day (a perfect job for a 3D printer), but it fits tightly and is not heavily used, so I left it without screws for the time being.

Other than that, assembling was easy and straightforward.  Testing proved interesting and adventuresome, and I’ll tell you about that later.

Network File Sharing

What are the ramifications of using NAS (Network Attached Storage) instead of DAS (Direct-Attached Storage)?  Will I want to work directly off of it, or keep work locally and only use it for backing up?  The answer will vary with the nature of the work, I think.

File Copy

Let’s start with simple file copying.  This was performed using the command line, not the File Explorer GUI.

Large files

17 files totaling 273.6 GiBytes

Windows (CIFS) share | 49 min 41 sec | 94 MiBytes/second
NFS share | N/A
Different drive on same machine | 48 min 2 sec | 97 MiBytes/second

Small files

14,403 files in 1,582 directories totaling 2.53 GiBytes

CIFS share | 64 min 53 sec | 683 KiBytes/second
Same drive | 3 h 25 min | 216 KiBytes/second
Different drive on same machine | 56 min 50 sec | 780 KiBytes/second

For large files, the transfer rate of 94 MiBytes/second (or 98.6 MBytes/second) is respectable.  Everything I could find on real-world speed of Gigabit Ethernet is outdated, with home PCs being limited by the hard drive speed.  Note that I’m going through two cheap switches between the test machines.

The small-file case is two orders of magnitude slower!  This bears out the common wisdom that it’s faster to zip up a large collection of files, transfer the single zip file over the network, and unzip it on the receiving side.

I think the speed is limited by the individual remote file open/close operations, which are slow on Windows, and the network adds latency if this is a synchronous operation.  The DAS (different drive on the same computer) is only 14% faster than the NAS in this case, and the data transfer time of the file content is only 0.7% of the time involved.  The real limiting factor seems to be the roughly 4 files or directories processed per second.  That seems unreasonably slow, as I’ve seen programs process far more files than that; there must be some quantity beyond which it slows down to the observed rate.  Since it is similar for DAS and NAS, it must be a Windows problem.  I’ll have to arrange some tests using other operating systems later.
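For the curious, here is the back-of-envelope arithmetic behind those percentages, written out as a few lines of Perl using the measured figures from the tables above:

    #!/usr/bin/perl
    # Back-of-envelope check of the small-file numbers quoted above.
    use strict;
    use warnings;

    my $files       = 14_403;
    my $nas_seconds = 64*60 + 53;      # CIFS share: 64 min 53 sec
    my $das_seconds = 56*60 + 50;      # different local drive: 56 min 50 sec

    printf "NAS: %.1f files/sec   DAS: %.1f files/sec\n",
           $files / $nas_seconds, $files / $das_seconds;            # ~3.7 vs ~4.2
    printf "DAS is %.0f%% faster\n",
           100 * ($nas_seconds / $das_seconds - 1);                 # ~14%

    my $payload   = 2.53 * 1024**3;    # 2.53 GiB of file content, in bytes
    my $wire_rate = 94 * 1024**2;      # the large-file rate of 94 MiB/s
    printf "raw transfer: %.0f sec, or %.1f%% of the NAS time\n",
           $payload / $wire_rate,
           100 * ($payload / $wire_rate) / $nas_seconds;            # ~28 sec, ~0.7%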

Working with Files

Compiler

This is what I do all day.  What are the ramifications of keeping my work on the NAS, as compared with other options?

Compile Build Job

Project located on local drive (a regular directory or a VHD makes no difference) | 4 min 9 sec
Project located on NAS, accessed via CIFS (normal Windows share) | 10 min 29 sec
Project located on NAS, accessed via NFS share | N/A

The Microsoft Visual Studio project reads a few thousand files, writes around 1500, reads those again, and writes some more.  When the project is located on a local drive, the CPU usage reads 100% most of the time, indicating that the job is CPU bound.

When the project is located on the NAS, the situation is quite different.  Given that the actual work performed is the same, the difference in time is due to file I/O.  And the extra I/O takes more time than the job did originally; that is, the time more than doubled.  It was observed that the CPU utilization was not maxed out in this case.  The file I/O dominated.

The same job was performed again immediately afterwards, giving the computer a chance to use cached file data it had already read recently.  That made no difference to the time.  It appears that with Windows (CIFS) shares, even on Windows 7 (the sharing protocol was significantly reworked as of Vista), file data is not cached in memory but is re-read each time it is needed.  That, or the “lots of small files speed limit”, or both, kills performance.

I tried to repeat that using a NFS share instead of the CIFS share.  However, I could not get it to work at all.  The Windows machine could see the file names and navigate the directories, but could not read any file.

Video Encoding

Encoding a video entailed reading one very large file and writing one smaller file.  The process’s performance metrics indicate reading only 2.5MB/s and writing merely 80KB/s.  I would not expect it to matter if the input file, output file, or both were on the NAS.

Likewise, video editing and programs like Photoshop will read in the files and maintain the contents in memory, or manage their own overflow swap space (which you put on a local drive).  It’s harder to do actual timing here, but the impression is that various programs are perfectly responsive when the files are directly attached.  If that changes when using the NAS instead, I’ll note the circumstances.

Caveat

All of the performance characteristics above are made with the assumption that the storage unit and the network links are all mine for the duration of the test.  If multiple people and pets in the household are using the NAS, you have the added issue of having to divide up the performance among the simultaneous users.

Note that FreeNAS does support link aggregation, so I could plug in two gigabit Ethernet cables if I replaced the switch with one that also understood aggregation.

I need a (home made) NAS!

I ran out of space on my RAID-5 drive, which I built several years ago.  At the time, 1TB drives were as large as you could get before the price increased disproportionately to the capacity.  I bought 2 “enterprise” grade drives for around $250 each, and a consumer drive for half that.  The usable capacity is 2TB because of the redundancy.

I decided I was not going to lose data ever again.  Having redundancy against drive failure is one component of this.  So, all my photos and system backups are stored there, along with anything else I want to keep safe, including Virtual Machine images.

It turns out that a lot of the space is taken by the daily backups.  Even with a plan that uses occasional full backups and mostly incremental backups, they just keep on coming.  I also need to better tune the quota for each backup task, but with multiple tasks on multiple machines there is no feature to coordinate the quotas across everything.

Meanwhile, a new drive I installed had a capacity of 3TB by itself.  Lots of room for intermediate files and things I don’t need to keep safe.  But that’s more to be backed up!

Now I could simply replace the drives with larger ones, using the same directly attached controller and chassis space.  But there are reasons for looking at Network Attached Storage.  They even have Drobo NAS units at WalMart now, so it must be quite the mainstream thing.

Besides future upgradability to more drives (rather than just replacing the drives with larger ones again), better compatibility with the different operating systems used in the home, and specialized media services for “smart” TVs and tablets, a compelling reason for me is the same reason I’m using RAID-5 in the first place:  to preserve my data.  As I’ve noted elsewhere, silent data loss is a bigger problem than generally realized, and it is actually growing.  Individual files somehow go bad, and it is never noticed until long after the backups from that period have been reused.

Direct-Attached Storage — simply having the drive on my main PC — limits the choice of file systems.  In particular, Windows 7 doesn’t have full support for anything other than NTFS and various varieties of FAT, and newer, more advanced file systems are only available on a few operating systems since they are not widely used yet.

A file system that specifically maintains data integrity and guards against silent errors is ZFS.  When I first learned about it, it was only available for BSD-family operating systems.  A NAS appliance distribution (using FreeBSD) called FreeNAS started up in 2005.  More generally, someone could run a FreeBSD or Linux box with attached drives formatted with ZFS or btrfs or whatever special file system is needed, and put that box on the network.

As I write this, a Drobo 5N (without disks) sells for $550.  It reportedly uses a multi-core ARM CPU, and is still underpowered according to reviews.  Most 2-disk systems seem to use one of two ARM-based SoCs that cost about $30.  You could put something like the Addonics RAID-5 SATA port multiplier ($62) on that to control more disks at a low price.  Most 5-disk home/SOHO NAS systems seem to be based on x86 Atom boards.

Anyway, if you used hand-me-down hardware, such as the previous desktop PC you just replaced with a newer model, you’d have a much more powerful platform for free.  A modest PC motherboard, CPU, and RAM bought for the purpose (supposing you had a case and power supply lying around) could be had for … (perusing the NewEgg website for current prices) … $225.

So basically, if you know what you’re doing (or can hire someone to do it for you for a hundred dollars) you can get hardware substantially more powerful for a fraction of the price.

Being an enthusiast who’s never bought a pre-made desktop PC, it’s a no-brainer for me to put something together from parts, even if it only had the features of these now-common home/SOHO appliances, and even if I don’t re-use anything I have on hand.

But none of the NAS boxes I see advertised discuss anything like the silent data corruption problem.  They don’t say what kind of file system is being used, or how the drives might be mounted on a different system in the event of a board (not drive) failure when a replacement for the exact model is no longer available.  I would think that if a NAS had advanced data integrity features, they would be featured prominently in the advertising.  So, build I must, to meet the requirements.

In future posts I’ll discuss the silent corruption problem at more length, and of course show what I actually built.  (I’ve named the NAS server OORT, by the way.)


Archival File Storage

More people these days have important computer data that they want to keep.  More (most?) household business is done with computer files now.  Hobbies and interests are likely to involve computer files.  And the all-important family photos and videos are now digital.

Around the year 2000 I stopped using 35mm film, as digital cameras were getting good enough to be interesting.  I also realized that the capacity of hard drives was growing faster than the resolution of the cameras, so I should be able to keep all my photos immediately available on the computer’s drive, as opposed to storing them in drawers full of discs and having to put in the right disc to get the desired files.

We want to keep these files safe and sound.

So what exactly does “safe” mean?

It is easy to understand the threat of a drive dying, media being damaged or unreadable after being stored for a long time, or accidentally deleting or saving over the wrong file.

Having multiple copies protects against these things.  Automated replication/backing up won’t necessarily protect you against destroying files by mistake, but there are specific ways of guarding against that which I’ll discuss in another post.  It’s replication and redundancy that I want to discuss here.

Let me relate an anecdote.  One time I wanted to do something with an old project I had worked on previously, but when I opened the file I found it was filled with gibberish!  I looked at my backup, and it was the same.  Whatever had happened to the file had occurred some time ago, without being noticed at the time.  I had kept backing up the now-corrupted file, and any good copy was by then beyond my retention period for the backups.

Fortunately, I found a copy on another disc, where I had made an ad-hoc backup of the work and put it in a drawer to be forgotten.  Many years later, large data centers report that “silent corruption” affects more stored data than previously realized.  Just as with my backups, a RAID-5 verification won’t detect anything amiss.

So, I worry about the integrity of all my saved photos, old projects, and whatever else I’ve saved for a long time.

I’ve thought about ways to perform a cryptographic hash of each file’s contents, to be used as a checksum.  Repeating this will verify that the files are still readable and unchanged.  For example, when putting backup files on a disc I’ve used a command-line script to generate a text file of filenames and hashes, and include that on the disc too.

With files being stored on always-available hard drives, it is possible to automatically and periodically check these.  For example, do it just before performing a backup so you don’t back up a corrupted file, and you are alerted to restore it instead!  This is complicated by the fact that some files are changed on purpose, and how can an automated tool know which are not expected to be changed and which are really in current use?  Also, with arbitrary files — large numbers of files in a deep directory tree — it is more difficult to store the hash of each file.

So, I’ve never gotten around to making an automated system that does this.

My plans and dabbling had been to use XML to store information for each file, including the date/time stamp, hash, and where it was last backed up to and when, and any necessary overrides to the backup policy.  That would then be used to perform incremental backups, and re-checking would be part of the back-up process.

The problem can be simplified in light of current habits, to address just one single issue.  It won’t track backup generations and such, but will only give the hash of each file.  The directory will be purely for archival files, so any change at all other than adding files will be something to yell about.  And finally, don’t worry about bloat in the hash file due to repeating the directory name over and over — it will be insignificant compared to the files anyway, or can be handled by running a standard compression tool on the file.

Originally, my hash-generation program for media was written using the 4NT command shell.  However, that tool has evolved in ways I don’t care for, so I stopped buying upgrades.  Now it (the plain command-line version) is known as TCC/LE, and this free version blocks the MD5 hashing function.  So I’ve lost features I previously paid for because I don’t want to buy features I don’t want.  I suppose I could dig out and preserve an old copy, but that wouldn’t be useful to you unless you also bought the tool, and it only runs on Windows.

Writing such a file is (or was) trivial.  Just write the formatted directory listing to a file, along with a note of exactly the command arguments used.  (Include the hash and the file name as a relative, not fully-qualified, name, and skip the hash file itself.)  Now recall that this was to be placed on saved media, which would then not change again.  So repeating the same command on the top directory of the media would generate exactly the same file.  If a dumb byte-by-byte compare found a difference, any common DIFF tool could be used to show the details.

An example sha256.txt file from 2007. This is placed on the backup media along with the files. Note that the file begins with instructions on how it was created, so it may be re-run to check the files for corruption even if you don’t remember how you did it.

Result of running:
     pdir /(f z @sha256[*]) *.tib |tee sha256.txt


Pluto-1.tib 2186522624 B5D19101E8821CE886E56738A9207E00E6C324DB4EC9A01111E9469A6FA2C233
Pluto-2.tib 2301180416 8DF3BDD0F4A1390A56537BE0D1BD93628BD1EB52D15B49BCFC272C7869C2CC53
Pluto-3.tib 2959220224 0E1FA34E0DCB9D1EE480EE75F1394DE9C95347D247A84EE52C746F175DC579D9
Pluto-41.tib 4660039168 9813846E427B4AFF8996A6AE275E4F9DB7C4897A46362A7B2FCA849A7E948E8F
Pluto-42.tib   11684864 2ADD81B304A19EE753EA8E868562B95A959214897F4846905CD7CD65F23EB817
Pluto-51.tib 4660039168 224B85F067026F0360E4ECEEB03E1EE49CE751BF37180453C6732935032BE0C9
Pluto-52.tib 1935769600 F10764774D18A1469CD48B4A605575D21DE558DE3129201F0DE1E82CCD6D6D1B

To check a live replication destination, the tool needs to be fully automated:  it must handle added files, and let me confirm that any deletion was done on purpose before removing the matching hash data.

So, the most difficult part is reading in the hash data.  Exactly how that’s approached would depend on the programming language and/or framework used.  It doesn’t have to be XML, but can be minimally simple.

As I’ve implied, I want to produce such a tool (at long last!) for my own use and as something I can give away and encourage everyone else to use, too.  It should be available on all platforms, not just Windows.  So, what should I write it in?

Using Perl would be very easy, as it has common library code available for performing the hash and for traversing a directory structure, and also makes it dead simple to read and parse the text file containing the results to compare against.  The only real drawback would be the difficulty for non-technical Windows users to install, since Perl is not included on Windows by default.
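To make that concrete, here is a minimal sketch in Perl of what I have in mind.  This is not the finished tool; the reporting and the decision to simply note added files are placeholders, and it reuses the manifest name and “name  size  HASH” line format from my old sha256.txt listings shown above.

    #!/usr/bin/perl
    # Minimal sketch: walk a directory tree, hash every file, and compare the
    # results against an existing manifest (sha256.txt) if one is present.
    use strict;
    use warnings;
    use File::Find;
    use File::Spec;
    use Digest::SHA;

    my $root     = shift // '.';
    my $manifest = File::Spec->catfile($root, 'sha256.txt');

    # Read the old manifest, if any:  one "relative-name  size  HASH" per line.
    my %expected;
    if (open my $in, '<', $manifest) {
        while (<$in>) {
            next unless /^(\S.*?)\s+(\d+)\s+([0-9A-Fa-f]{64})\s*$/;
            $expected{$1} = $3;
        }
        close $in;
    }

    # Walk the tree and hash everything except the manifest itself.
    my %actual;
    find({ no_chdir => 1, wanted => sub {
        return unless -f $File::Find::name;
        my $rel = File::Spec->abs2rel($File::Find::name, $root);
        return if $rel eq 'sha256.txt';
        my $sha = Digest::SHA->new(256);
        $sha->addfile($File::Find::name, 'b');     # binary mode
        $actual{$rel} = uc $sha->hexdigest;
    }}, $root);

    # Anything changed or missing is something to yell about;
    # newly added files are merely noted.
    for my $file (sort keys %expected) {
        if    (!exists $actual{$file})             { print "MISSING  $file\n" }
        elsif ($actual{$file} ne $expected{$file}) { print "CHANGED  $file\n" }
    }
    for my $file (sort keys %actual) {
        print "ADDED    $file\n" unless exists $expected{$file};
    }

Generating a fresh manifest is then just a matter of printing the %actual table in the same format and, once I have confirmed that any deletions were intentional, letting it replace the old file.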

Writing it in standard C++ using common portable library code means that it could be compiled for any platform to produce an executable that can be fully stand-alone.  Assuming that the SHA-256 code is obtainable easily enough, it would still need code to traverse the directory and to suck back in the results; the very things that are trivial in Perl.

To be continued…

[Citation Needed] — the intelligent heckler

The xkcd comics are normally funny, but this one strikes me as particularly humorous because of other things I’ve seen recently.  In particular, the YouTube contributor potholer54 is a former science journalist who not only gives his comments on various bunk, but explains why you don’t have to take his word for it and how to spot possible bunk through journalistic techniques.  Most importantly, follow up on the sources.  See if the proponent is just repeating (and further distorting) a mis-reported story, and find the original that started it.

In this Internet age, it is easier than ever.  Just click on the link, or use Google.  Long before I saw these techniques spelled out, I recall reading something that seemed fishy.  In a minute or two I figured out that all reports were just repeating the company’s own white paper (and each other).

In order for “news” outlets to return to some standard of accuracy and integrity, their readers need to care.  There are resources like Snopes that people can easily check, and you can well imagine browser extensions that automatically indicate the credibility of an article.  Tools like that would make it easier for more people to care, and that in turn should push back on the providers.

So really, if someone is telling you something (or posting, or publishing), you shouldn’t necessarily believe him.  Is he just making things up?  Did he get the facts wrong?  Is he deliberately distorting the picture?  For important issues like climate change and GMO foods, you can and should find out for yourself who to trust on the subject.

Happiness is having a good backup

I’ve had my share of hard drive failures and software accidents, and more often than not I’ve been able to recover.  Here are my current back-up provisions:

  • Daily backups using Acronis
  • Windows 7 “Previous Versions” feature
  • Backup copies of important files on different drives
  • Backup copies of important files on different machines at home
  • Annual off-site backup of entire machine
  • Important files stored using RAID-5

The technology of back-up media has changed over the years.  Once upon a time I used a stack of 50–100 5¼″ floppy disks!  I also remember when I went to DAT tape, which could hold a full backup and incremental backups every day for a month on one 400 MB cartridge.  Eventually came recordable CDs, and later DVD-RAM.  Along the way were 20MB “floptical” disks, Jaz drives, and Zip discs.

Today, the best backup medium is another hard drive.  A cheap on-sale hard drive has a better price per gigabyte than optical media of reputable quality, and nothing else is even close.  Desktop HDDs are also quite robust—I’ve heard of data recovery companies reading a hard drive after a house fire had destroyed the computer.  Plastic optical media would be toast!

Partition vs File

There are two fundamentally different kinds of backup.  For typical data applications, “documents” are files and can be easily copied elsewhere for safe keeping.  Any manner of copying a file (and copying it back where it came from) will serve to back up a word processing or any kind of office document, photo, video, etc.

But the “system” is different.  The operating system and the arrangement of installed programs have files you don’t understand, and even special things in special places on the hard drive.  The way to back that up is to make an exact sector-by-sector image of the partition.  This requires specialized software both to make and to restore.

That is also one reason why I still keep my data separate from the system.  My C: drive is for the operating system and installed programs, and my files are on different partitions (say, E: and G:).  On Windows this means ignoring the prepared My Documents locations, or taking steps to point them to another partition.

It has definite advantages, and I’ve made good use of it recently.  When updating some program caused problems, I simply restored to the previous day’s system backup of the entire C: drive.  My work, which was on E:, was not affected.  Had my work been on C: also, this step would have erased my efforts that were performed since that backup point.

Multiple Methods

Besides using different tools for the system backup and your day-to-day work files, you can use a variety of overlapping techniques all at the same time.  You don’t have to pick just one:  you can use a 3rd-party backup suite and casually replicate your work to your spouse’s computer.  Even with a single tool, you can have automated daily incremental backups to another drive and make monthly full backups to Blu-ray to store off-site.

Automatic and Frequent

I used to boot the computer specially in order to do a complete partition back up of the normal C: drive.  I would do so before making significant changes, and was supposed to do so once a month regardless.  But it was a chore and a bother.

Now, Windows can reliably back up the running C: drive using an operating system feature called Volume Shadow Copy.  Being able to perform the backup while running the regular system is liberating, because it can be done automatically on a timer, and it can be done in the background.  So I have Acronis True Image perform daily backups of the C: drive.

Likewise, the same technology applies to data files.  Even if I happened to be still working at the odd hour at which I scheduled the daily file backup, using the files would not conflict with the backing up.

Windows Previous Versions feature

Windows 7 has a feature called Previous Versions that can be handy.  You can turn on System Protection and also enable it for your data drive:  use the System control panel applet, which has a tab for System Protection.

Windows 8 File History

Previous Versions is deprecated but still available on Windows 8.  Windows 8 revamps the general idea with something that’s said to be more like the Mac’s Time Machine.  It backs up to an external drive or network location, hourly (or at a customizable interval).

Search for file history on the Windows 8 Start screen to get to the applet.  However, there seems to be no way to specify which files are backed up!  It only and always applies to places that are part of a Library.  So I worked around it by adding the directories of interest to a Library in File Explorer.

Windows Restore

I tried (on Windows 7) using the supplied System Backup feature, and was less than thrilled with it.  It backs up to a hidden directory on the same drive, and I don’t know how it handles multiple versions stored there.  And I can’t simply copy the backup file elsewhere.  It’s actually the same feature that Previous Versions uses, so I imagine it’s also better on Windows 8.

Drill!  Be confident

Make sure you know how to restore files, and that it actually works.  When an urgent deadline coincides with a messed up file, that is not the time to be figuring out an unfamiliar system.

So, after you initiate your automated backup system for work files, also create one or more scratch files of the same kind you normally work with.  A silly word processor document containing a stupid joke, perhaps.  After a couple days, when the automated system has had time to do its thing, delete the file.

Now, get it back.

Make notes, and keep them on actual paper, to refer to when this is not a drill.

Then, you can be happy.  Be smug even, especially when someone else has “an incident”.


Jehovah’s Witnesses at the door, again

Late this Saturday morning I eagerly answered the door, expecting a visit from a landscaper.  Instead, I met two religious evangelists.

Although they did not identify themselves until I asked for details, they turned out to be Jehovah’s Witnesses.  Not too long ago I had a similar visit, and was not very prepared.  I wanted to know more about them, so later I checked on the Internet.  The Wikipedia article has a section on beliefs, but it wasn’t really the kind of practical detail I was interested in.  I also tried YouTube, since I knew there was a lot posted in reaction to other evangelists.  Perhaps if Jehovah’s Witnesses themselves posted some videos instead of just going door to door, there would be ten times as many posted in response, dissecting every statement and analyzing every point made.  But all I found were a few odd cartoons expounding on what JWs believed and how they acted.

I’m still interested in a definitive (yet brief!) summary.  But here are some things I’ve learned about them:

  • The world is ending!  This time for sure!
  • No blood transfusions allowed!
  • Probably no modern medicine, but strictness varies.
  • Against education.  Instead of college, young people go door knocking, and don’t get paid for it.
  • No holidays or celebrations at all, not even birthdays.
  • Creationists.
  • Hostile toward homosexuals.

So this time, I at least knew what their group stood for.  I could probably argue over several points of their doctrine, if the matter came up.  But, what did he want?

The implicit assumption is that the real goal, in the long term, is “Join our cult; believe the same things we do; do exactly as we do.”  But that’s not what they come out and say.

This one (the talkative one of the pair) seemed to be saying that he wanted to “spread the good news”, and had some thesis about making the world a better place and Man alone being unable to manage it.  In the half hour that I talked with him, I learned that they feel compelled to do their characteristic door-to-door stuff because of some passage in the Bible.

He also thinks that the Holy Bible is the first and oldest Holy book, even older than the Torah!  That the Old Testament is, in fact, older than the Christian collection is not something he was willing to accept.

But it still leaves me wondering what I should have said.  Any (serious) pointers or suggestions?