Archive for October, 2012

Restart time syncing in Mac OS X with Remote Desktop

Tuesday, October 30th, 2012

As batteries die in older Macs, their ability to keep the computer’s clock ticking dies with them. Slight interruptions in power can reset the date to January 1, 1970, or to January 1, 2000, for newer machines.

Syncing the computer’s clock to a network NTP time server can quickly return it to the current time without any effort. However, Macs may not begin syncing right away. That’s a problem for users in Active Directory environments, where a time discrepancy of more than five minutes prevents login.

Using Apple Remote Desktop (or an SSH connection to the computer), a remote administrator can issue two simple commands to restart time syncing.

First, verify the time of the remote computer using ARD’s Send UNIX command to get the current time and date. Simply enter the date command and run it as root.

Date command in ARD

This will return something like: Thu Jan 1 10:56:37 CDT 1970. With the clock that far off, Active Directory won’t allow any logins.
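That 1970 date isn’t random: it’s the Unix epoch (time zero), which is where the clock lands when it loses power. You can see it for yourself with date (GNU syntax shown below; on macOS/BSD the equivalent is date -u -r 0):

```shell
# Print the Unix epoch, i.e., what a Mac with a dead clock battery
# thinks "now" is (GNU date syntax; on macOS/BSD use: date -u -r 0)
date -u -d @0
```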

To correct the time use the systemsetup command to turn time syncing off and then turn it on again:

systemsetup -setusingnetworktime off
systemsetup -setusingnetworktime on


Run the date command again and the clock should now show the current time: Tue Oct 30 11:09:26 CDT 2012. Active Directory users should be able to log in immediately.

To store this quick command for later, select Save as Template… from the Template drop-down menu and assign it a name.

Rename files en masse

Friday, October 26th, 2012

There are more than a few shareware utilities for both Windows and Mac that give a user the ability to rename a bunch of files according to certain criteria. GUI utilities are always nice, but what if you’re logged into a web server and need to rename all .JPG files to .jpg?

There’s a simple Perl script that gives you the ability to rename all files in a directory according to the powerful rules of regular expressions. Here are some example ways to use the script.


The following renames all files ending in .JPG to .jpg.

% rename 's/\.JPG$/.jpg/' *.JPG

The next one converts all uppercase filenames to lowercase, except for Makefiles.

% rename 'tr/A-Z/a-z/ unless /^Make/' *

The next one removes the leading dot in front of a filename unless it’s a .DS_Store file.

% rename 's/^\.// unless /^\.DS_Store/' *

The next one appends the date to all text files in the current directory.

% rename '$_ .= ".2012-10-26"' *.txt

The last and arguably most useful way to use this tool is to pipe it through find.  This example renames all files in /var/www from .JPG to .jpg.

% find /var/www -name '*.JPG' -print | rename 's/\.JPG$/\.jpg/'

Note: There are two variables you can set in the script. The first is a list of files to ignore, and the second turns on “dry run” mode, which lets you see what the script is going to do before it irreversibly changes any file names.
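If the Perl script isn’t handy, the first example (.JPG to .jpg) can be approximated with a plain shell loop and parameter expansion — a sketch that only handles the simple extension swap, not arbitrary regular expressions:

```shell
# Rename *.JPG to *.jpg in the current directory using only the shell
for f in *.JPG; do
  [ -e "$f" ] || continue            # no matches: the glob stays literal
  mv -- "$f" "${f%.JPG}.jpg"         # strip the .JPG suffix, append .jpg
done
```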


Here is the script

Basic for loops

Thursday, October 25th, 2012

One of the first things anyone taking the leap into programming is going to learn is the ever-present, ever-useful for loop. One of the reasons a for loop is so useful is that it implements exactly what computers do best and what people do worst: tedious, repetitive tasks. Imagine having to grab a specific piece of data from 100 different vCards or manually update 1000 rows in someone’s SQL database. Not fun, and that’s where for loops come in, giving you the ability to program a specific set of tasks and then let your computer crunch away while you relax and sip your favourite coffee product. It’s like the old saying goes: laziness is the mother of efficiency, and a good for loop will help you accomplish both.

The basic for loop consists of a set of values (either numbers or strings), a temporary variable you use to access the data you’re iterating through, and a condition that tells the loop when to stop. It’ll probably make more sense with a live example, so look below to get a better understanding.

Here is a basic for loop written in Perl that iterates through a given set of IPs to see which ones are responding.

# Declare the current subnet
$subnet = "192.168.0.";
# Initialize the counter ($var) at 1, keep looping while $var <= 11, and add one each pass ($var++)
for (my $var = 1; $var <= 11; $var++) {
    # Send one ping per IP
    `ping -c 1 $subnet$var`;
    # And finally print which hosts are up ($? holds ping's exit status)
    print "$subnet$var is up\n" if ($? == 0);
}
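The same counting structure translates directly to a shell for loop. Here’s the loop skeleton on its own, with echo standing in for the ping so each iteration is visible:

```shell
# Count from 1 to 11, just like the Perl loop above; the body is where
# a command such as `ping -c 1 192.168.0.$var` would go
for var in $(seq 1 11); do
  echo "iteration $var"
done
```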


Installing Python On Windows

Tuesday, October 9th, 2012


Python, although standard on all Macs and most Unix / Linux distributions, doesn’t come preinstalled on Windows machines. Thankfully, getting Python to play nice with Bill Gates is very straightforward, and you’ll be done in less time than it takes to run a Windows update.

Get Python 2.7.3

The first step is to go to the main Python website and get the correct Python version for your needs. Although Python 3.2 is out, Python 2.7.3 is the most compatible: version 3.2 isn’t 100% backwards compatible, so unless you’re writing code from scratch that won’t need any external modules, version 2.7.3 is the way to go.

Get the specific Python installer for your hardware here:

Installing Python

Installing is simple. Open the MSI package like so:

Install Python 2.7.3

Choose the install folder. Default is C:\Python27

Choose folder

Default customizations are fine
Default Customizations
Then watch some progress bars…
Progress Bars
All done!


Updating Your Path (optional & recommended)

The only other thing you may need to do is update your path to include the Python executable. This isn’t strictly necessary, since the installer associates all .py files with the Python exe, but if you ever want to test something or just run Python from the shell, this update is a handy one.

First, right-click the My Computer icon and go to Properties.


Then go to advanced.
Advanced Properties

And then Environment Variables

Environment Variables

Append ;C:\Python27 to the path section like so.

Append Path
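The Windows PATH edit above works the same way as appending a directory to PATH in any shell. Here’s the idea demonstrated with Unix syntax, using a hypothetical directory as a stand-in for C:\Python27:

```shell
# Append a directory to PATH for the current session only
# (/opt/python27/bin is a stand-in for C:\Python27)
export PATH="$PATH:/opt/python27/bin"
echo "$PATH" | grep -o '/opt/python27/bin'
```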

All done!

(Sysadmin) Software Design Decisions

Wednesday, October 3rd, 2012

When approaching a task with an inkling to automate, sometimes you find an open source project that fits the bill. But every creator works within constraints, and often expresses an opinion of what’s important to ‘solve’ as a problem and prioritizes accordingly: a deployment tool is not necessarily a patch management tool is not necessarily a configuration management tool, and so on. One of the things I’ve dealt with is trying to gauge the intent of a developer and deciding if they are interested in further discussion/support/development of a given project. Knowing why one decision or another was made can be helpful in these situations. In that category of things I wish someone else had written so I could read it, here are the design decisions behind the sonOfBackupRestoreScripts project I’ve been toying with as an add-on to DeployStudio (heretofore DS). After reading the following, you can hopefully understand why I am not releasing it as an official, supportable tool in its current bash form.
I’ve adapted some of the things Google used in their outline for Simian as a model, to give this some structure.

Project Objective:

To move user home folders and local authentication/cached credentials between workstations in a customizable and optimized manner, preserving the integrity of the data/user records as much as possible.


For speed and data integrity, rsync is used to move selections of the user’s home folder (minus caches, trash, and common exclusions made by Time Machine). To increase portability and preserve Mac-specific attributes, a disk image is generated to enclose the data. The user account information is copied separately, and helpful information is displayed at the critical points as the process moves from one stage to another and during the backup itself.

Requirements: DeployStudio Server / NetBoot

DS, as a service, provides an infrastructure to run the script in, and automounts a repository to interact with over the network. It’s meant to work optimally with or without a NetBoot environment; an architecture assumption made during development/testing is wired Ethernet, with the use of USB/Thunderbolt adapters if clients are MacBook Airs. Even old minis can function fine as the server, assuming the repo is located on a volume with enough space available to accept the uncompressed backups.

Implementation Details: Major Components / Underlying Programs

- source/destination variables

Parameters can be passed to the script to change the source/destination of backups/restores with the -s (for source) and -d (for destination) switches, followed by a path that is reachable by the NetBooted system.

- hdiutil

A simple sparse disk image, which can expand up to 100GB, is created with the built-in hdiutil binary. The file system format of that container is JHFS+, and a bunch of other best practices, cobbled together from Bombich’s Carbon Copy Cloner (heretofore CCC) and InstaDMG, are employed.

- cp

The cp binary is used to copy the user records from the directory service the data resides on to the root of the sparse image, and the admin group’s record is copied into a ‘group’ folder. If hashes exist in /var/db/shadow/hash, which is how passwords were stored prior to 10.7, those are moved to a ‘hashes’ folder.

- rsync

A custom, even more current build of rsync could be generated if the instructions listed here are followed. Ideally, a battle-tested version like the one bundled with CCC (/Applications/Carbon\ Copy\, which is actually a heavily customized rsync version 3.0.6) could be used, but its output isn’t easy to adapt for showing an overview of the progress during a CLI transfer. Regardless, the recommended switches are employed in hopes of getting a passing grade on the backupBouncer test. The 3.0.7 version bundled with DS itself (/Applications/Utilities/DeployStudio\, which for whatever reason is excluded when the assistant creates NetBoot sets) was used during development/testing.


The Users folder on the workstation being backed up is what’s targeted directly, so any deleted users or subfolders can be removed with the exclusions file fed to the rsync command. Without a catch-all asterisk (*) ‘file glob’, you need to be specific about the types of files you want to exclude if they’re in certain directories. For example, to not back up any mp3 files, no matter where they are in the user folders, you’d add - *.mp3. Additional catch-all excludes could be used, as detailed in the script, which specifically excludes ipsw’s (iOS firmware/OS installers) like this: --exclude='*.ipsw'
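As an illustration, an exclusions file combining the cases above might look like the following. These patterns are examples only, not the script’s actual Excludes.txt:

```
# rsync exclusion patterns, one per line
.Trash
Library/Caches
*.mp3
*.ipsw
```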


Pretty much everything done via rsync and cp is also done in reverse, utilizing the source/destination options, so a backup taken from one machine can easily be chosen to restore to another.

Security Considerations:

Very little security is applied during storage. Files are transferred over password-protected AFP, so a separate server and repo could be used to minimize potential access by whoever can reach the main DS service. Nothing encrypts the files inside the sparse images, and, if present, the older password format is a hash that could potentially be cracked over a great length of time. The home folder ACLs and ownership/permissions are preserved, so in that respect the data is only as secure as the local file systems on the server and client.

Excluded/Missing Features:
(Don’t You Wish Every Project Said That?)

Hopefully this won’t sound like a soul-baring confession, but here goes:
No checks are in place for whether there is enough space on destinations, nor for whether a folder to back up is larger than the currently hard-coded 100GB sparse image cap (after exclusions). Minimal redirection of logs is performed, so the main DS log can quickly hit its 2MB cap and stop updating the DS NetBoot log window/GUI if there’s a boatload of progress echoed to stdout. The process to restore a user’s admin group membership (or any other group on the original source) is not performed, although the group’s admin.plist can be queried after the fact. Nor is there any reporting on deleted users’ orphaned home folders if they do actually need to be preserved; by default they’re just part of the things rsync excludes. All restrictions are performed in the Excludes.txt file fed to rsync, which cannot be passed as a parameter to the script.
And the biggest possible unpleasantness is also the #1 reason I’m not considering continuing development in bash: UID collisions. If you restore a 501 user to an image with a pre-existing 501 user that was the only admin… bad things will happen. (We’ve changed our default admin user’s UID as a result.) If you get lucky, you can change one user’s UID or the other and chown to fix things as admin before all heck breaks loose… If this isn’t a clean image, there’s no checking for duplicate users with newer data. There’s no FileVault 1 or 2 handling; no prioritization (so that, if only a few home folders would fit, it could back those up and warn about the one(s) that wouldn’t); no version checking on the binaries in case different NetBoot sets are used; no fixing of ByHostPrefs (although DS’s finalize script should handle that); and no die-function checks if the restore destination doesn’t have enough space, since the common case is restoring to the same HD or a newer, presumably larger computer. Phew!


The moral of the story is that the data structures available in most other scripting languages are better suited for these checks and for taking evasive action as necessary. Bash does really ungainly approximations of tuples/dictionaries/hash tables, which forced the previous version of this project to perform all necessary checks and actions during a single per-user loop to keep things functional without growing exponentially longer and more complex.

Let’s look forward to the distant future when this makes its way into Python for the next installment in this project. Of course I’ve already got the name of the successor to SonOfBackupRestoreScripts: BrideOfBackupRestoreScripts!