SaltThePass.com

March 26th, 2013

As many geeks do, I have a collection of about 30-odd domain names that I’ve purchased over the past few years for awesome-at-the-time ideas that I just never found the time to work on.

Last month, I resolved to stop collecting these domains and instead make some visible progress on them, one at a time.

SaltThePass is my first project.  Do you have an account on LinkedIn, Evernote, or Yahoo?  All of these sites had password breaches in the last year that compromised their users’ logins and passwords.  One big problem people face today is managing all of the passwords they use for all of the sites that they visit.  People often re-use the same password on many sites because it would be impossible to remember hundreds of different passwords.  Unfortunately, this means that if a single site is hacked and your password is revealed, the attacker may have access to your accounts on all of the other sites you visit.

To help solve this problem, I created SaltThePass.com.  Salt The Pass is a password generator that will help you generate unique, secure passwords for all of the websites you visit based on a single Master Password that you remember.  You don’t need to install any additional software, and you can access your passwords from anywhere you have internet access.
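
The general idea behind this kind of generator is to derive each site’s password from a hash of your Master Password combined with something unique to the site, such as its domain name.  As a rough, hypothetical illustration of the concept (this is not SaltThePass’s exact algorithm), you could derive a site-specific password on a command line like this:

echo -n "MyMasterPassword:example.com" | openssl dgst -sha256 -binary | openssl base64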

Check it out at https://saltthepass.com and let me know what you think!

Using Modern Browser APIs to Improve the Performance of Your Web Applications

February 26th, 2013

Last night I gave a short presentation on Using Modern Browser APIs to Improve the Performance of Your Web Applications at GrWebDev.

It’s available on SlideShare:

Using Modern Browser APIs SlideShare deck

Two other presentations I gave late last year are available on SlideShare as well.

Switch your HTPC back to Media Center after logging out of Remote Desktop

May 23rd, 2012

I have a Windows 7 Media Center PC hooked up to the TV in our living room.  It’s paired to a 4-stream Ceton CableCard adapter and is great for watching both TV and movies.

Sometimes I need to Remote Desktop (RDP) into the machine to install updates or make other changes.  During this, and after logging out, the Media Center PC is left at the login-screen on the TV.  So the next time I sit down to watch TV, I have to find the wireless keyboard and enter my password to log back in.

Since this can get annoying, I’ve created a small script on the desktop that automatically switches the console session (what’s shown on the TV) back to the primary user and re-starts Media Center.  This way, the next person that uses the TV doesn’t have to log back in.  When I’m done in the RDP session, I simply start the batch script and it logs me out of RDP and logs the TV back in.

Here’s the simple script:

rem Reconnect session 1 (the logged-in user's session) to the physical console (the TV)
call %windir%\system32\tscon.exe 1 /dest:console
rem Re-launch Media Center maximized
start "Media Center" /max %windir%\ehome\ehshell.exe
exit /b 1

PngOutBatch: Optimize your PNGs by running PngOut multiple times

May 15th, 2012

PngOut is a command-line tool that can losslessly reduce the file size of your PNGs. In many cases, it can reduce the size of a PNG by 10-15%. I’ve even seen some cases where it was able to reduce the file size by over 50%.

There are several other PNG compression utilities out there, such as pngcrush and AdvanceCOMP, but I’ve found PngOut to be the best optimizer most of the time.

There’s an excellent tutorial on PngOut for first-timers.  Running PngOut is pretty easy: simply run it once against your PNG:

PngOut.exe [image.png]

However, to get the best optimization of your images, you can run PngOut multiple times with different block sizes (eg, /b1024) and randomized initial tables (/r).

There’s a commercial program, PngOutWin that can run through all of the block sizes using multiple CPU cores, but I wanted something free that I could run from the command line.

To aid in this, I created a simple DOS batch script that runs PngOut through 9 different block sizes (from 0 to 8192), with each block size run multiple times with random initial tables.
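
At its core, the script is just a pair of nested loops.  Here’s a simplified sketch of the idea (the actual script on Gist also measures the file size after each run so it can report the savings):

@echo off
rem Simplified sketch of PngOutBatch: try each block size, and run each block
rem size several times with randomized initial tables (/r)
setlocal
set ITERATIONS=%2
if "%ITERATIONS%"=="" set ITERATIONS=5
for %%b in (0 128 192 256 512 1024 2048 4096 8192) do (
    echo Blocksize: %%b
    for /L %%i in (1,1,%ITERATIONS%) do (
        PngOut.exe "%~1" /b%%b /r
    )
)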

While the first iteration of PngOut does all of the heavy lifting, I’ve sometimes found that using the different block sizes can eke out a few extra bytes (sometimes 100 bytes or more beyond the initial pass).  You may not care about optimizing your PNG down to the absolute last byte, but I try to run any new PNGs headed for production on my websites and mobile apps through this batch script before they’re committed to the wild.

Running PngOutBatch is as easy as running PngOut:

PngOutBatch.cmd [image.png] [number of iterations per block size - defaults to 5]

PngOutBatch will show progress as it reduces the file size.  Here’s a sample compressing the PNG logo from libpng.org:

Blocksize: 0
Iteration #1: Saved 2529 bytes
Iteration #2: No savings
Iteration #3: No savings
Iteration #4: No savings
Iteration #5: No savings
Blocksize: 128
Iteration #1: Saved 606 bytes
Iteration #2: Saved 10 bytes
Iteration #3: No savings
Iteration #4: Saved 2 bytes
Iteration #5: No savings
Blocksize: 192
Iteration #1: No savings
Iteration #2: No savings
Iteration #3: No savings
Iteration #4: No savings
Iteration #5: No savings
Blocksize: 256
Iteration #1: Saved 1 bytes
Iteration #2: No savings
Iteration #3: Saved 5 bytes
Iteration #4: Saved 11 bytes
Iteration #5: No savings
Blocksize: 512
Iteration #1: No savings
Iteration #2: No savings
Iteration #3: No savings
Iteration #4: No savings
Iteration #5: No savings
Blocksize: 1024
Iteration #1: No savings
Iteration #2: No savings
Iteration #3: No savings
Iteration #4: No savings
Iteration #5: No savings
Blocksize: 2048
Iteration #1: No savings
Iteration #2: No savings
Iteration #3: No savings
Iteration #4: No savings
Iteration #5: No savings
Blocksize: 4096
Iteration #1: No savings
Iteration #2: No savings
Iteration #3: No savings
Iteration #4: No savings
Iteration #5: No savings
Blocksize: 8192
Iteration #1: No savings
Iteration #2: No savings
Iteration #3: No savings
Iteration #4: No savings
Iteration #5: No savings
D:\temp\test.png: SUCCESS: 17260 bytes originally, 14096 bytes final: 3164 bytes saved

The first block size (0) reduced the file by 2529 bytes, then the 128-byte block size further reduced it by 606, 10 and 2 bytes. The 192-byte block size didn’t help, but a 256-byte block size reduced the file size by 1, 5 and 11 more bytes.  Larger block sizes didn’t help, but in the end we reduced the PNG by 3164 bytes (18%), which is 635 bytes (25%) more than if we had only run PngOut once.

The PngOutBatch.cmd script is hosted at Gist.Github if you want to use it or contribute changes.

DIY Cloud Backup using Amazon EC2 and EBS

February 20th, 2012

I’ve created a small set of scripts that allows you to use Amazon Web Services to backup files to your own personal “cloud”. It’s available at GitHub for you to download or fork.

Features

  • Uses rsync over ssh to securely backup your Windows machines to Amazon’s EC2 (Elastic Compute Cloud) cloud, with persistent storage provided by Amazon EBS (Elastic Block Store)
  • Rsync efficiently mirrors your data to the cloud by only transmitting changed deltas, not entire files
  • An Amazon EC2 instance is used as a temporary server inside Amazon’s data center to backup your files, and it is only running while you are actively performing the rsync
  • An Amazon EBS volume holds your backup and is only attached during the rsync, though you could attach it to any other EC2 instance later for data retrieval, or snapshot it to S3 for point-in-time backup

Introduction

There are several online backup services available, from Mozy to Carbonite to Dropbox. They all provide various levels of backup services for little or no cost. They usually require you to run one of their apps on your machine, which backs up your files periodically to their “cloud” of storage.

While these services may suffice for the majority of people, you may wish to take a little more control of your backup process. For example, you are trusting their client app to do the right thing, and trusting that your files are stored securely in their data centers. They may also limit the rate at which your backups are uploaded, change their pricing, or even go out of business.

On the other hand, one of the simplest tools to backup files is a program called rsync, which has been around for a long time. It efficiently transfers files over a network, and can be used to only transfer the parts of a file that have changed since the last sync. Rsync can be run on Linux or Windows machines through Cygwin. It can be run over SSH, so backups are performed with encryption. The problem is you need a Linux rsync server somewhere as the remote backup destination.

Instead of relying on one of the commercial backup services, I wanted to create a DIY backup “cloud” that I had complete control of. This script uses Amazon Web Services, a service from Amazon that offers on-demand compute instances (EC2) and storage volumes (EBS). It uses the amazingly simple, reliable and efficient rsync protocol to back up your documents quickly to Amazon’s data centers, only using an EC2 instance for the duration of the rsync. Your backups are stored on EBS volumes in Amazon’s data center, and you have complete control over them. By using this DIY method of backup, you get complete control of your backup experience. No upload rate-limiting, no client program constantly running on your computer. You can even do things like encrypt the volume you’re backing up to.

The only service you’re paying for is Amazon EC2 and EBS, which is pretty cheap, and not likely to disappear any time soon. For example, my monthly EC2 costs for performing a weekly backup are less than a dollar, and EBS costs at this time are as cheap as $0.10/GB/mo.

These scripts are provided to give you a simple way to backup your files via rsync to Amazon’s infrastructure, and can be easily adapted to your needs.

How It Works

This script is a simple DOS batch script that can be run to launch an EC2 instance, perform the rsync, stop the instance, and check on the status of your instances.

After you’ve created your personal backup “cloud” (see Amazon “Cloud” Setup) and have the Required Tools, you simply run amazon-cloud-backup.cmd -start to start up a new EC2 instance. Internally, this uses the Amazon API Developer Tools to start the instance via ec2-run-instances. There’s a custom boot script for the instance, amazon-cloud-backup.bootscript.sh, that works well with the Amazon Linux AMIs to enable root access to the machine over SSH (they initially only offer SSH access as ec2-user). We need root access to perform the mount of the volume.

After the instance is started, the script attaches your personal EBS volume to the instance.  The instance’s remote address is queried via ec2-describe-instances, and SSH is used to mount the EBS volume at a backup mount point (eg, /backup).  Once this is completed, your remote EC2 instance and EBS volume are ready for you to rsync.

To start the rsync, you simply need to run amazon-cloud-backup.cmd -rsync [options]. Rsync is started over SSH, and your files are backed up to the remote volume.
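
Under the hood, that step boils down to an rsync-over-SSH command along these lines (the key file, paths, user and host here are only illustrative; the script builds the real command from your configuration):

rsync -avz --delete -e "ssh -i amazon-cloud-backup-rsync.pem" /cygdrive/c/Users/bob/Documents/ root@ec2-1-2-3-4.us-west-2.compute.amazonaws.com:/backup/documents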

Once the backup is complete, you can stop the EC2 instance at any time by running amazon-cloud-backup.cmd -stop, or get the status of the instance by running amazon-cloud-backup.cmd -status. You can also check on the free space on the volume by running amazon-cloud-backup.cmd -volumestatus.

There are a couple things you will need to configure to set this all up. First you need to sign up for Amazon Web Services and generate the appropriate keys and certificates. Then you need a few helper programs on your machine, for example rsync.exe and ssh.exe. Finally, you need to set a few settings in amazon-cloud-backup.cmd so the backup is tailored to your keys and requirements.

Amazon “Cloud” Setup

To use this script, you need to have an Amazon Web Services account. You can sign up for one at https://aws.amazon.com/. Once you have an Amazon Web Services account, you will also need to sign up for Amazon EC2.

Once you have access to EC2, you will need to do the following.

  1. Create an X.509 Certificate so we can enable API access to the Amazon Web Services API. You can get this in your Security Credentials page. Click on the X.509 Certificates tab, then Create a new Certificate. Download both the X.509 Private Key and Certificate files (pk-xyz.pem and cert-xyz.pem).
  2. Determine which Amazon Region you want to work out of. See their Reference page for details. For example, I’m in the Pacific Northwest so I chose us-west-2 (Oregon) as the Region.
  3. Create an EC2 Key Pair so you can log into your EC2 instance via SSH. You can do this in the AWS Management Console. Click on Create a Key Pair, name it (for example, “amazon-cloud-backup-rsync”) and download the .pem file.
  4. Create an EBS Volume in the AWS Management Console. Click on Volumes and then Create Volume. You can create whatever size volume you want, though you should note that you will pay monthly charges for the volume size, not the size of your backed up files.
  5. Determine which EC2 AMI (Amazon Machine Image) you want to use. I’m using the Amazon Linux AMI: EBS Backed 32-bit image. This is a Linux image provided and maintained by Amazon. You’ll need to pick the appropriate AMI ID for your region. If you do not use one of the Amazon-provided AMIs, you may need to modify amazon-cloud-backup.bootscript.sh for the backup to work.
  6. Create a new EC2 Security Group that allows SSH access. In the AWS Management Console, under EC2, open the Security Groups pane. Select Create Security Group and name it “ssh” or something similar. Once added, edit its Inbound rules to allow port 22 from all sources ("0.0.0.0/0"). If you know what your remote IP address is ahead of time, you could limit the source to that IP.
  7. Launch an EC2 instance with the “ssh” Security Group. After you launch the instance, you can use the Attach Volume button in the Volumes pane to attach your new volume as /dev/sdb.
  8. Log in to your EC2 instance using ssh (see Required Tools below), then fdisk the volume and create a filesystem. For example:
    ssh -i my-rsync-key.pem ec2-user@ec2-1-2-3-4.us-west-1.compute.amazonaws.com
    [ec2-user] sudo fdisk /dev/sdb
    ...
    [ec2-user] sudo mkfs.ext4 /dev/sdb1
  9. Your Amazon personal “Cloud” is now setup.

Many of the choices you’ve made in this section will need to be set as configuration options in the amazon-cloud-backup.cmd script.

Required Tools

You will need a couple tools on your Windows machine to perform the rsync backup and query the Amazon Web Services API.

  1. First, you’ll need a few binaries (rsync.exe, ssh.exe) on your system to facilitate the ssh/rsync transfer. Cygwin can be used to accomplish this. You can easily install Cygwin from http://www.cygwin.com/. After installing, pluck a couple files from the bin/ folder and put them into this directory. The binaries you need are:
    rsync.exe
    ssh.exe
    sleep.exe

    You may also need a couple libraries to ensure those binaries run:

    cygcrypto-0.9.8.dll
    cyggcc_s-1.dll
    cygiconv-2.dll
    cygintl-8.dll
    cygpopt-0.dll
    cygspp-0.dll
    cygwin1.dll
    cygz.dll
  2. You will need the Amazon API Developer Tools, downloaded from http://aws.amazon.com/developertools/. Place them in a sub-directory called amazon-tools\

Script Configuration

Now you simply have to configure amazon-cloud-backup.cmd.

Most of the settings can be left at their defaults, but you will likely need to change the locations and name of your X.509 Certificate and EC2 Key Pair.
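
For example, after downloading your keys, the certificate and key pair settings end up looking something like this (the variable names and paths here are illustrative; use whatever names the script actually defines):

set EC2_PRIVATE_KEY=C:\backup\keys\pk-xyz.pem
set EC2_CERT=C:\backup\keys\cert-xyz.pem
set KEYPAIR_NAME=amazon-cloud-backup-rsync
set KEYPAIR_FILE=C:\backup\keys\amazon-cloud-backup-rsync.pem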

Usage

Once you’ve done the steps in Amazon “Cloud” Setup, Required Tools and Script Configuration, you just need to run the amazon-cloud-backup.cmd script.

These simple steps will launch your EC2 instance, perform the rsync, and then stop the instance.

amazon-cloud-backup.cmd -launch
amazon-cloud-backup.cmd -rsync
amazon-cloud-backup.cmd -stop

After -stop, your EC2 instance will stop and the EBS volume will be detached.

Source

The source code is available at GitHub. Feel free to send pull requests for improvements!

Windows command-line regular expression renaming tool: RenameRegex

January 30th, 2012

Every once in a while, I need to rename a bunch of files.  Instead of hand-typing all of the new names, sometimes a nice regular expression would get the job done a lot faster.  While there are a couple Windows GUI regular expression file renamers, I enjoy doing as much as I can from the command-line.

Since .NET exposes an easy to use library for regular expressions, I created a small C# command-line app that can rename files via any regular expression.

Usage:

RR.exe file-match search replace [/p]
  /p: pretend (show what will be renamed)

You can use .NET regular expressions for the search and replacement strings, including substitutions (for example, “$1” is the 1st capture group in the search term).

Examples:

Simple rename without a regular expression:

RR.exe * .ext1 .ext2

Renaming with a replacement of all “-” characters to “_”:

RR.exe * "-" "_"

Remove all numbers from the file names:

RR.exe * "[0-9]+" ""

Rename files in the pattern of “123_xyz.txt” to “xyz_123.txt”:

RR.exe *.txt "([0-9]+)_([a-z]+)" "$2_$1"

Download

You can download RenameRegex (RR.exe) from here.  The full source of RenameRegex is also available at GitHub if you want to fork or modify it. If you make changes, let me know!

Auto-ban website spammers via the Apache access_log

January 24th, 2012

During the past few months, several of my websites have been the target of some sort of SPAM attack.  After being alerted (by Cacti) that my servers were under high load, I found that a small number of IP addresses were loading and re-loading or POSTing to the same pages over and over again.  In one of the attacks, they were simply reloading a page several times a second from multiple IP addresses.  In another attack, they were POSTing several megabytes of data to a form (which spent time validating the input), several times a second. I’m not sure of their motives – my guess is that they’re either trying to game search rankings (the POSTings), or it’s someone with an improperly configured robot.

Since I didn’t have anything in place to automatically drop requests from these rogue SPAMmers, the servers were coming under increasing load and causing real visitors’ page loads to slow down.

After looking at the server’s Apache access_log, I was able to narrow down the IPs causing the issue.  With their IPs in hand, I simply created a few iptables rules to drop all packets from those addresses. Within a few seconds, the load on the server returned to normal.

I didn’t want to play catch-up the next time this happened, so I created a small script to automatically parse my server’s access_logs and auto-ban any IP address that appears to be doing inappropriate things.

The script is pretty simple.  It uses tail to look at the last $LINESTOSEARCH lines of the access_log, grabs all of the IPs via awk, sorts and counts them via uniq, then looks to see if any of these IPs had loaded more than $THRESHOLD pages.  If so, it does a quick query of iptables to see if the IP is already banned.  If not, it adds a single INPUT rule to DROP packets from that IP.

Here’s the code:

#!/bin/bash

#
# Config
#

# if more than the threshold, the IP will be banned
THRESHOLD=100

# search this many recent lines of the access log
LINESTOSEARCH=50000

# term to search for
SEARCHTERM=POST

# logfile to search
LOGFILE=/var/log/httpd/access_log

# email to alert upon banning
ALERTEMAIL=foo@foo.com

#
# Get the last n lines of the access_log, and search for the term.  Sort and count by IP, outputting the IP if it's
# larger than the threshold.
#
for ip in `tail -n $LINESTOSEARCH $LOGFILE | grep "$SEARCHTERM" | awk "{print \\$1}" | sort | uniq -c | sort -rn | head -20 | awk "{if (\\$1 > $THRESHOLD) print \\$2}"`
do
    # Look in iptables to see if this IP is already banned
    if ! iptables -L INPUT -n | grep -q $ip
    then
        # Ban the IP
        iptables -A INPUT -s $ip -j DROP
        
        # Notify the alert email
        iptables -L -n | mail -s "Apache access_log banned '$SEARCHTERM': $ip" $ALERTEMAIL
    fi
done

You can put this in your crontab, so it runs every X minutes. The script will probably need root access to use iptables.

I have the script in /etc/cron.10minutes, along with an entry in /etc/crontab that runs all files in that directory every 10 minutes:
0,10,20,30,40,50 * * * * root run-parts /etc/cron.10minutes

Warning: Ensure that the $SEARCHTERM you use will not match a wide set of pages that a web crawler (for example, Google) would see. In my case, I set SEARCHTERM=POST, because I know that Google will not be POSTing to my website, as all of the forms are excluded from crawling via robots.txt.

The full code is also available at Gist.GitHub if you want to fork or modify it. It’s a rather simplistic, brute-force approach to banning rogue IPs, but it has worked for my needs. You could easily update the script to be a bit smarter. If you do, let me know!

Mounting VHDs in Windows 7 from a command-line script

January 4th, 2012

Windows 7 has native support for VHDs (virtual hard disks) built into the OS. VHDs are great for virtual machines, native VHD booting into recent Windows OSs, or even moving whole file systems around.

While you can mount VHDs from the Windows 7 diskmgmt.msc GUI, or via vhdmount, if you need support for mounting or unmounting VHDs from the command-line on a vanilla Windows 7 / Server 2008 install, you have to use diskpart.

diskpart’s mount commands are pretty simple:

C:\> diskpart
DISKPART> sel vdisk file="[location of vhd]"
DISKPART> attach vdisk

Unmounting is just as simple:

C:\> diskpart
DISKPART> sel vdisk file="[location of vhd]"
DISKPART> detach vdisk

These commands work fine on an ad-hoc basis, but I had the need to automate loading a VHD from a script.  Luckily, diskpart takes a single parameter, /s, which specifies a diskpart “script”.  The script is simply the command you would have typed in above:

C:\> diskpart /s [diskpart script file]

I’ve created two simple scripts, MountVHD.cmd and UnmountVHD.cmd that create a “diskpart script”, run it, then remove the temporary file.  This way, you can simply run MountVHD.cmd and point it to your VHD:

C:\> MountVHD.cmd [location of vhd] [drive letter - optional]

Or unmount the same VHD:

C:\> UnMountVHD.cmd [location of vhd]
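
For reference, here’s roughly what MountVHD.cmd boils down to (a minimal sketch, without the optional drive-letter handling or any error checking):

@echo off
rem Build a temporary diskpart script, run it, then clean up
set DPSCRIPT=%TEMP%\mountvhd-%RANDOM%.txt
echo sel vdisk file="%~1" > "%DPSCRIPT%"
echo attach vdisk >> "%DPSCRIPT%"
diskpart /s "%DPSCRIPT%"
del "%DPSCRIPT%"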

These files are hosted at Gist.Github if you want to use them or contribute changes.

Backing up Windows computers to a Synology NAS via SSH and rsync

January 4th, 2012

I recently purchased a Synology DS1511+ to act as a NAS (network attached storage) for my home network. The 5-drive, Linux powered device is beautiful – small, sleek and quiet. What sold me was the amazing web-based configuration interface they provide, and the ability to access the device remotely via the web or from mobile apps Synology provides in the iTunes App Store and Android Market.

After setting it up with a couple 2TB and 3TB drives, I wanted to use the device to back up documents from several Windows computers I manage (my own, my wife’s netbook and my parents’ computers thousands of miles away). Local network backup is pretty easy – you can use the Synology Data Replicator to back up Windows hosts to your Synology on your local network. However, it seemed pretty slow to me, and it doesn’t use the highly-optimized rsync protocol for backing up files. Since I was previously using rsync over SSH to back up to a Linux server I run at home, I figured the Linux-based Synology should be able to do the same.

All it takes is a few updates to the Synology server and a few scripts on the Windows computers you want to back up. This works both for computers on your home network and for external computers, as long as they know the address of the remote server. You can use a dynamic-IP service such as TZO.com or DynDNS.org so your remote Windows clients know how to contact your home Synology.

Once I got it all working, I figured the process and scripts I created could be used by others with a Synology NAS (or any server or NAS running Linux). I’ve created a GitHub repository with the scripts and instructions so you can setup your own secure backup for local and remote Windows computers:

https://github.com/nicjansma/synology-windows-ssh-rsync-backup

Features

  • Uses rsync over ssh to securely backup your Windows hosts to a Synology NAS.
  • Each Windows host gets a unique SSH private/public key that can be revoked at any time on the server.
  • The server limits the SSH private/public keys so they can only run rsync, and can’t be used to log into the server.
  • The server also limits the SSH private/public keys to a valid path prefix, so rsync can’t destroy other parts of the file system.
  • Windows hosts can backup to the Synology NAS if they’re on the local network or on a remote network, as long as the outside IP/port are known.

NOTE: The backups are performed via the Synology root user’s credentials, to simplify permissions. The SSH keys are only valid for rsync, and are limited to the path prefix you specify. You could change the scripts to backup as another user if you want (config.csv).

Synology NAS Setup

  1. Enable SSH on your Synology NAS if you haven’t already. Go to Control Panel – Terminal, and check “Enable SSH service”.
  2. Log into your Synology via SSH.
  3. Create a /root/.ssh directory if it doesn’t already exist:
    mkdir /root/.ssh
    chmod 700 /root/.ssh
  4. Upload server/validate-rsync.sh to your /root/.ssh/validate-rsync.sh. Then chmod it so it can be run:
    chmod 755 /root/.ssh/validate-rsync.sh
  5. Create an authorized_keys file for later use:
    touch /root/.ssh/authorized_keys
    chmod 600 /root/.ssh/authorized_keys
  6. Ensure private/public key logins are enabled in /etc/ssh/sshd_config.
    vi /etc/ssh/sshd_config

    You want to ensure the following lines are uncommented:

    PubkeyAuthentication yes
    AuthorizedKeysFile .ssh/authorized_keys
  7. You should reboot your Synology to ensure the settings are applied:
    reboot
  8. Setup a share on your Synology NAS for backups (eg, ‘backup’).

Client Package Preparation

Before you backup any clients, you will need to make a couple changes to the files in the client/ directory.

  1. First, you’ll need a few binaries (rsync, ssh, chmod, ssh-keygen) on your system to facilitate the ssh/rsync transfer. Cygwin can be used to accomplish this. You can easily install Cygwin from http://www.cygwin.com/. After installing, pluck a couple files from the bin/ folder and put them into the client/ directory. The binaries you need are:
    chmod.exe
    rsync.exe
    ssh.exe
    ssh-keygen.exe

    You may also need a couple libraries to ensure those binaries run:

    cygcrypto-0.9.8.dll
    cyggcc_s-1.dll
    cygiconv-2.dll
    cygintl-8.dll
    cygpopt-0.dll
    cygspp-0.dll
    cygwin1.dll
    cygz.dll
  2. Next, you should update config.csv for your needs (a filled-in example follows this list):
    rsyncServerRemote - The address clients can connect to when remote (eg, a dynamic IP host)
    rsyncPortRemote - The port clients connect to when remote (eg, 22)
    rsyncServerHome - The address clients can connect to when on the local network (for example, 192.168.0.2)
    rsyncPortHome - The port clients connect to when on the local network (eg, 22)
    rsyncUser - The Synology user to backup as (eg, root)
    rsyncRootPath - The root path to back up to (eg, /volume1/backup)
    vcsUpdateCmd - If set, the version control system command to use prior to backing up (eg, svn up)
  3. The version control update command (%vcsUpdateCmd%) can be set to run a version control update on your files prior to backing up. This can be useful if you have a VCS repository that clients can connect to. It allows you to make remote changes to the backup scripts, and have the clients get the updated scripts without you having to log into them. The scripts are updated each time start-backup.cmd is run. For example, you could use this command to update from a svn repository:
    vcsUpdateCmd,svn up

    If you are using a VCS system, you should ensure you have the proper command-line .exes and .dlls in the client/ directory. I’ve used Collab.net’s svn.exe and lib*.dll files from their distribution (http://www.collab.net/downloads/subversion/).

    During client setup, you simply need to log into the machine, checkout the repository, and setup a scheduled task to do the backups (see below). Each time a backup is run, the client will update its backup scripts first.
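
Putting the step 2 settings together, a filled-in config.csv might look something like this (the addresses and paths are just examples):

rsyncServerRemote,myhouse.dyndns.org
rsyncPortRemote,22
rsyncServerHome,192.168.0.2
rsyncPortHome,22
rsyncUser,root
rsyncRootPath,/volume1/backup
vcsUpdateCmd,svn up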

The client package is now setup! If you’re using %vcsUpdateCmd%, you can check the client/ directory into your remote repository.

Client Setup

For each client you want to backup, you will need to do the following:

  1. Generate a private/public key pair for the computer. You can do this by running ssh-keygen.exe, or have generate-client-keys.cmd do it for you:
    generate-client-keys.cmd

    or

    generate-client-keys.cmd [computername]

    If you run ssh-keygen.exe on your own, you should name the files rsync-keys-[computername]:

    ssh-keygen.exe -t dsa -f rsync-keys-[computername]

    If you run ssh-keygen.exe on your own, do not specify a password, or clients will need to enter it every time they backup.

  2. Grab the public key out of rsync-keys-[computername].pub, and put it into your Synology backup user’s .ssh/authorized_keys:
    vi ~/.ssh/authorized_keys

    You will want to prefix the authorized key with your validation command. It should look something like this:

    command="[validate-rsync.sh location] [backup volume root]" [contents of rsync-keys-x.pub]

    For example:

    command="/root/.ssh/validate-rsync.sh /volume1/backup/MYCOMPUTER" ssh-dss AAAdsadasds...

    This ensures that the public/private key is only used for rsync (and can’t be used as a shell login), and that the rsync starts at the specified root path and no higher (so it can’t destroy the rest of the filesystem).

  3. Copy backup-TEMPLATE.cmd to backup-[computername].cmd
  4. Edit the backup-[computername].cmd file to ensure %rsyncPath% is correct. The following DOS environment variable is available to you, which is set in config.csv:
    %rsyncRootPath% - Remote root rsync path

    You should set rsyncPath to the root remote rsync path you want to use. For example:

    set rsyncPath=%rsyncRootPath%/%COMPUTERNAME%

    or

    set rsyncPath=%rsyncRootPath%/bob/%COMPUTERNAME%

    %rsyncRootPath% is set in config.csv to your Synology backup volume (eg, /volume1/backup), so %rsyncPath% would evaluate to this if your current computer’s name is MYCOMPUTER:

    /volume1/backup/MYCOMPUTER

    You can see this is the same path that you put in the authorized_keys file.

  5. Edit the backup-[computername].cmd file to run the appropriate rsync commands. The following DOS environment variables are available to you, which are set in start-backup.cmd:
    %rsyncStandardOpts% - Standard rsync command-line options
    %rsyncConnectionString% - Rsync connection string

    For example:

    set cmdArgs=rsync %rsyncStandardOpts% "/cygdrive/c/users/bob/documents/" %rsyncConnectionString%:%rsyncPath%/documents
    echo Starting %cmdArgs%
    call %cmdArgs%
  6. Copy the client/ directories to the target computer, say C:\backup. If you are using %vcsUpdateCmd%, you can checkout the client directory so you can push remote updates (see above).
  7. Setup a scheduled task (via Windows Task Scheduler) to run start-backup.cmd as often as you wish (see the schtasks example below).
  8. Create the computer’s backup directory on your Synology NAS:
    mkdir /volume1/backup/MYCOMPUTER

The client is now setup!
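
For step 7, one easy way to create the scheduled task is with schtasks from an elevated command prompt.  For example, to run the backup nightly at 2 AM (the task name and path are just examples):

schtasks /create /tn "Synology rsync backup" /tr "C:\backup\start-backup.cmd" /sc daily /st 02:00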

Source

As noted above, the source for these scripts is available on Github:

https://github.com/nicjansma/synology-windows-ssh-rsync-backup

If you have any suggestions, find a bug or want to make contributions, please head over to GitHub!

Unofficial LEGO Minifigure Catalog App now available in Apple AppStore

December 16th, 2011

Our Unofficial LEGO Minifigure Catalog App was just approved by Apple and is now available in the AppStore!