Archive for the ‘Scripts’ Category

Pulling Report Info from MunkiWebAdmin

Wednesday, November 6th, 2013

Alright, you’ve fallen in love with the Dashboard in MunkiWebAdmin – we don’t blame you, it’s quite the sight. Now you know one day you’ll hack on Django and the client pre/postflight scripts until you can add that perfect view to further extend its reporting and output functionality, but in the meantime you just want to export a list of all those machines still running 10.6.8. Mavericks is free, and them folks still on Snow Leo are long overdue. If you’ve only got a handful of clients, maybe you set up MunkiWebAdmin using sqlite (since nothing all that large is actually stored in the database itself).

MunkiWebAdmin in action

Let’s go spelunking and try to output just those clients in a more digestible format than HTML, so I’d use the csv output option for starters. We could tool around in an interactive session with the sqlite binary, but in this example we’ll just run the query through that binary and cherry-pick the info we want. Most often we’ll use the information submitted as a report by the pre- and postflight scripts munki runs, which lands in the reports_machine table. And the final part is as simple as you’d expect: we just select everything from that particular table where the OS version equals exactly 10.6.8. Here’s the one-liner:

sqlite3 -csv /Users/Shared/munkiwebadmin_env/munkiwebadmin/munkiwebadmin.db \
 "SELECT * FROM reports_machine WHERE os_version='10.6.8';"


And the resultant output:
b8:f6:b1:00:00:00,Berlin,"","",,"MacBookPro10,1","Intel Core i7","2.6 GHz",x86_64,"8 GB"...

You can then open that in your favorite spreadsheet editing application and parse it for whatever is in store for it next!
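If all you need is a single column, you can even skip the spreadsheet and slice the CSV right in the shell. A small sketch (the helper name is my own, and field 2 holds the hostname going by the sample row above):

```shell
# Pull field 2 (the hostname, per the sample row) from the sqlite3 CSV output.
# Caveat: cut splits on every comma, so quoted fields that themselves contain
# commas (e.g. "MacBookPro10,1") will shift the later columns.
hostnames_from_csv() {
  cut -d, -f2
}

# e.g. sqlite3 -csv munkiwebadmin.db \
#   "SELECT * FROM reports_machine WHERE os_version='10.6.8';" | hostnames_from_csv
```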

Add OS X Network Settings Remotely (Without Breaking Stuff)

Monday, September 23rd, 2013

So you’re going to send a computer off to a colocation facility, where it’ll use a static IP and DNS – info it’ll need configured before it arrives. Since it’s bound for colo, you’re already accessing this computer remotely to prepare it for its trip, but you don’t want to knock it off the network while prepping this info; you want to verify it’s good to go and then shut it down.

It’s the type of thing, like setting up email accounts programmatically, that somebody should have figured out and shared with the community at some point. But even if my google-fu is weak, I guess I can deal with having tomatoes thrown at me, so here’s a rough mock-up:


#!/bin/bash
# purpose: add a network location with manual IP info without switching
#   This script lets you fill in settings and apply them on en0 (assuming that's active)
#   but only interrupts current connectivity long enough to apply the settings,
#   then immediately switches back. (It also assumes a 'Static' location doesn't already exist...)
#   Use at your own risk! No warranty granted or implied! Tell us we're doing it rong on twitter!
# author: Allister Banks, 318 Inc.

# set -x

declare -xr networksetup="/usr/sbin/networksetup"

declare -xr MYIP=""
declare -xr MYMASK=""
declare -xr MYROUTER=""
declare -xr DNSSERVERS=""

# grab the hardware port name for en0 (the line above the 'Device: en0' line)
declare -x PORTANDSERVICE=$("$networksetup" -listallhardwareports | awk '/en0/{print x};{x=$0}' | cut -d ' ' -f 3)

"$networksetup" -createlocation "Static" populate
"$networksetup" -switchtolocation "Static"
"$networksetup" -setmanual "$PORTANDSERVICE" "$MYIP" "$MYMASK" "$MYROUTER"
"$networksetup" -setdnsservers "$PORTANDSERVICE" "$DNSSERVERS"
"$networksetup" -switchtolocation Automatic

exit 0

Caveats: The script assumes the interface you want active in the future is en0, just for ease of testing before deployment. It also assumes there isn’t already a network location called ‘Static’, and that you do want all interfaces populated upon creation (because I couldn’t think of a particularly good reason why not).
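If that last assumption worries you, a guard could be bolted on before the createlocation step. A minimal sketch – the helper name is my own, and on a real box you’d feed it the output of `networksetup -listlocations`:

```shell
# Return success if the location name ($1) appears in a list of location
# names supplied on stdin (one per line, as -listlocations prints them).
has_location() {
  grep -qx "$1"
}

# Real-world usage sketch:
# if /usr/sbin/networksetup -listlocations | has_location "Static"; then
#   echo "Location 'Static' already exists; aborting." >&2
#   exit 1
# fi
```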

If you find the need, give it a try and tweet at us with your questions/comments!

Write and run scripts through BBEdit and TextWrangler

Wednesday, September 4th, 2013

Part of my job is writing shell scripts. These are scripts for administrative work rather than end-user applications and they’re usually short and non-interactive. I can’t justify purchasing expensive code-writing tools for this type and frequency of work but I do prefer something more than just TextEdit. I could write a website using TextEdit but that would be painful. The same applies to scripts.

Two of my favorite script-writing tools are from Bare Bones Software: BBEdit and TextWrangler. These are actually text editors but that’s really all that’s needed for basic scripting. I recommend TextWrangler because it’s free and really powerful. For those who want more I recommend purchasing BBEdit, which is the big brother to TextWrangler.

Here’s just one thing I like about each when writing scripts.


Part of script writing is testing the code. Sometimes I’m just writing a snippet and only need to test a line or two. TextWrangler (and BBEdit) include a shebang (#!) menu to let me run code from the text I’ve just typed.

I can open a new TextWrangler document and enter a simple script to tell me today’s date:

date "+Today's date is %m/%d/%Y"
exit 0

To run this script I don’t even need to save the document. I can just choose #! > Run:

Shebang > Run

and the result opens in a new window.

Shebang > Run output

If my script syntax were incorrect, such as omitting the final double-quote on the first line, the result would be the same as if I had saved the file, made it executable and run it in Terminal.

Shebang > Run error


BBEdit has a feature called Shell Worksheets, which act kind of like an interactive script. I can create a new worksheet by choosing File menu > New > Shell Worksheet.

A new shell worksheet is based on the default UNIX shell, which is generally bash. It doesn’t require a shebang at the beginning of my code. I can enter my date command and then press Enter (on an extended keyboard) or Command-Return and that one line is not only executed but the result is displayed below.

Shell worksheet

If I have several lines of code I can highlight any one or multiple lines and press Enter or Command-Return to execute those lines. All output will appear after the last line of highlighted commands.

Better than TextEdit

Working within TextWrangler or BBEdit enables me to write and quickly test code without having to save my script and make it executable. In addition to quickly executing commands both applications feature line numbering and syntax highlighting to make reading and debugging scripts much easier.

For a better understanding of these tools consult the User Manual under the Help menu in each application.

Cat skinning technique #12 or “Convert to plist and then read”

Tuesday, August 13th, 2013

I’ve seen amazing things done to extract data from most anything with command-line tools such as awk, sed and regex. Just like “there’s more than one way to skin a cat”, there’s more than one way to get a result.

During some recent scripting research I noticed in the man page for the command I was using an option that allowed me to convert the data to an easier to parse format. Although the output for this option was much longer than normal output, I was able to avoid devising a complex regex for getting the data I needed.

Enough babble! I present yet another way to extract information from a blob of data, or “cat skinning technique #12”.

This command, when run in the Terminal, returned a load of information about my OS X user account:

dscl . read /Users/tempuser

I appended an attribute called “Comment” and I gave the attribute a value of “Temporary account.”

sudo dscl . append /Users/tempuser Comment "Temporary account."

I could read this attribute quickly using:

dscl . read /Users/tempuser Comment

The result was:

 Temporary account.

I added a second and third comment by running the append command a couple more times:

 Temporary account.
 Expires: July 31, 2013.
 Manager: Martin Moose.

Now, how could I go about getting the expiration date from the comment? This is where awk-, sed- and regex-loving scripters would begin piping the results into something like:

dscl . read /Users/tempuser Comment | sed -n '3p'

The problem with this command was that it left a leading blank space (note how the values for the comment were slightly indented in the result above).

I could pipe this again into another sed command along with some complicated regex magic to remove the leading space, which actually gave me what I wanted:

dscl . read /Users/tempuser Comment | sed -n '3p' | sed -e 's/^[ \t]*//'

As an administrator needing to get the job done I would be happy with this solution. If I were to post that one-liner into a forum, though, I’d be ridiculed for using the same command multiple times or for piping more than once.
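For what it’s worth, the line selection and the whitespace strip can happen in a single sed invocation by putting the line address on the substitution itself. A sketch (the function name is my own; on a real box you’d pipe `dscl . read /Users/tempuser Comment` into it):

```shell
# Select line 3 and strip its leading whitespace in one sed call; with -n,
# the trailing p flag prints only the line where the substitution ran.
third_line_trimmed() {
  sed -n '3s/^[[:space:]]*//p'
}
```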

I learned a few years back to try to exhaust the options provided by a single command rather than snipping away at results using a centipede of short commands. After viewing the man page for dscl I found a useful option—it could output the result in plist format. That’s the same format for preference files. Administrators familiar with managing preferences are also familiar with command line tools like defaults and PlistBuddy.

I added the extra option:

dscl -plist . read /Users/tempuser Comment

Although it returned lengthier output, I now had structure to the information:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>dsAttrTypeStandard:Comment</key>
	<array>
		<string>Temporary account.</string>
		<string>Expires: July 31, 2013.</string>
		<string>Manager: Martin Moose.</string>
	</array>
</dict>
</plist>

Both the defaults and PlistBuddy command line tools only read plist files from disk, which meant I needed to redirect this information into a file. The /private/tmp folder was a convenient place to store transient stuff:

dscl -plist . read /Users/tempuser Comment > /private/tmp/myfile.plist

All I needed to do was read the file. Because this plist file contained an array, PlistBuddy was much better suited to reading it than defaults. After a little trial and error I put a two-liner together:

dscl -plist . read /Users/tempuser Comment > /private/tmp/myfile.plist
/usr/libexec/PlistBuddy -c "print :dsAttrTypeStandard\:Comment:1" /private/tmp/myfile.plist

In plain language the PlistBuddy command said: “Read the value for the key ‘dsAttrTypeStandard:Comment’ and return index 1 (indexes start at 0) from the file myfile.plist.” The result returned was:

Expires: July 31, 2013.

InstaDMG Issues, and Workflow Automation via Your Friendly Butler, Jenkins

Thursday, January 17th, 2013

“It takes so long to run.”

“One change happens and I need to redo the whole thing”

“I copy-paste the newest catalogs I see posted on the web, the formatting breaks, and I continually have to go back and check to make sure it’s the newest one”

These are the issues commonly experienced by those who want to take advantage of InstaDMG, and for some, they may be enough to keep them from giving up their Golden Master ways. Of course there are a few options to address each of these in turn, but you may have noticed a theme in the blog posts I’ve penned recently, and that is:


(We’ll get to how automation takes over shortly.) First, to review: a customized InstaDMG build commonly consists of a few parts – the user account, a function to answer the setup assistant steps, and the bootstrap parts for your patch and/or configuration management system. To take advantage of the (hopefully) well-QA’d vanilla catalogs, you can nest them in your custom catalog via an include-file line, so you only update your custom software parts listed above in one place. (And preferably you keep those projects and catalogs under version control as well.)

All the concerns paraphrased at the start of this post happen to have been discussed recently on The Graham Gilbert Dot Com. Go there now and hear what he has to say about it. Check out his other posts too, I can wait.

Graham Gilberts Blog
Back? Cool. Now you may think those are all the answers you need. You’re mostly right, you smarty you! SSDs are not so out-of-reach for normal folk, and they really do help speed up the I/O-bound process, so there’s less cost to create and repeat builds in general. But then there’s the other manual interaction and regular repetition – how can we limit it to as little as possible? Yes, the InstaDMG robot’s going to do the heavy lifting for us by speedily building an image, and using version control on our catalogs helps us track change over time, but what if Integrating the changes from the vanilla catalogs was Continuous? (Answers within!)

If It’s Worth Doing, It’s Worth Doing At Least Three Times

Monday, January 14th, 2013

In my last post about web-driven automation, we took on the creation of Apple IDs in a way that would require a credit card before actually letting you download apps (even free ones). This is fine to speed up the creation process when actual billing will be applied to each account one at a time, but for education or training purposes where non-volume-license purchases wouldn’t be a factor, there is the aforementioned ‘BatchAppleIDCreator‘ applescript. It hasn’t been updated recently, though, and I still had more automation tools I wanted to let have a crack at a repetitive workflow like this use case.

SikuliScript was born out of MIT research in screen reading, which roughly approximates what humans do as they scan the screen for a pattern and then take action. One can build a Sikuli script from scratch by taking screenshots and then tying together the actions you’d like to take in its IDE (which essentially renders HTML pages of the ‘code’). You can integrate Python or Java, although it needs (system) Java and the Sikuli tools in place in the Applications folder to work at all. For Apple ID creation in iTunes, which is the documented way to create an ID with the “None” payment method, Apple endorses the steps in this knowledge base document.

Sikuli AutoAppleID Creator Project

When running, the script does a search for iBooks, clicks the “Free” button to trigger Apple ID login, clicks the Create Apple ID button, clicks through a splash screen, accepts the terms and conditions, and proceeds to type in information for you. It gets this info from a spreadsheet (ids.csv) that I adapted from the BatchAppleIDCreator project, but currently hard-codes just the security questions and answers. There is guidance in the first row on how to enter each field, and you must leave that instruction row in, although the NOT IMPLEMENTED section will not be used as of this first version.

It’s fastest to type selections and use the tab and/or arrow keys to navigate between the many fields in the two forms (first the ID selection/password/security question/birthdate options, then the user’s purchase information), so I didn’t screenshot every question and make conditionals. It takes less than 45 seconds to do one Apple ID creation, and I made a 12-second timeout between each step in case of a slow network when running. It’s available on Github, please give us feedback with what you think.

Bash Tidbits

Friday, November 23rd, 2012

If you’re like me you have a fairly customized shell environment full of aliases, functions and other goodies to assist with the various sysadmin tasks you need to do. This makes being a sysadmin easy when you’re up and running on your primary machine, but what happens when your main machine crashes?

Last weekend my laptop started limping through the day and finally dropped dead, leaving me with a pile of work and only my secondary machine. Little to no customization was present on that machine, which made me nearly pull out my hair on more than one occasion.

Below is a list of my personal shell customizations and other goodies that you may find useful to have as well. They’re easily installed into your ~/.bashrc or ~/.bash_profile file to run every time you open a new shell.


# Useful Variables
export CLICOLOR=1
export LSCOLORS=GxFxCxDxBxegedabagaced
export SN=`netstat -nr | grep -m 1 -iE 'default' | awk '{print \$2}' | sed 's/\.[0-9]*$//'`
export ph=""

PS1='\[\033[0;37m\]\u\[\033[0m\]@\[\033[1;35m\]\h\[\033[0m\]:\[\033[1;36m\]\w\[\033[0m\]\$ '

# Aliases
alias arin='whois -h whois.arin.net'
alias grep='grep --color'
alias locate='locate -i'
alias ls='ls -lh'
alias ns='nslookup'
alias nsmx='nslookup -q=mx'
alias pg='ping'
alias ph='ping'
alias phobos='ssh -i ~/.ssh/identity -p 2200 -X -C -t screen -R'
alias pr='ping `netstat -nr | grep -m 1 -iE "default" | awk "{print \$2}"`'
alias py='ping'

At the top of the file there are two variables that set nice looking colors in the terminal to make it more readable.

One of my favourite little shortcuts comes next. You’ll notice a variable called SN, which is a shortcut for the subnet that you happen to be on. I find myself having to do stuff to the various hosts on my subnet, so if I can save having to type 192.168.25 fifty times a day then that’s definitely useful. Here are a few examples of how to use it:

ping $SN.10
nmap -p 80 $SN.*
ssh admin@$SN.40

Also related is the alias named pr.  This finds the router and pings it to make sure it’s up.
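The SN derivation can also be wrapped as a function so it’s testable against canned routing-table output. A sketch (the function name is my own):

```shell
# Given `netstat -nr` output on stdin, print the default gateway with its
# last octet dropped -- the same trick the SN variable above uses.
subnet_prefix() {
  awk '/^default/{print $2; exit}' | sed 's/\.[0-9]*$//'
}

# e.g. SN=$(netstat -nr | subnet_prefix); ping "$SN.10"
```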

Continuing down the list there is the variable ph, which points to my personal server. Useful for all sorts of shortcuts and can save a fair amount of typing. Examples:

ssh alt229@$ph
scp ./test.txt alt229@$ph:~/

There are a bunch of other useful aliases there too so feel free to poach some of these for your own environment!

Introducing Splunk: Funny name, serious logging

Thursday, November 15th, 2012

So, my boss says:

“Write an article called ‘Getting Started with Splunk.’”

I reply:

“What, you think I know all this stuff? This really would be a getting started article.”

But here it is and WOW is Splunk cool!

My only experience with Splunk up to a couple of days ago was seeing a T-shirt that said “Log is my copilot”. I knew it had something to do with gathering log files and making them easier to read and search. In about an hour I had gone to Splunk’s website to research the product, downloaded and installed it, and started viewing logs from my own system. The Splunk folks have made getting their product into their customers’ hands easy and getting started even easier.

What is Splunk?

Simply put, Splunk can gather just about any kind of data that goes into a log (system logs, website metrics, etc.) into one place and make viewing that data easy. It’s accessed via web browser so it’s accessible on any computer or mobile device such as an iPad.

What do I need to run Splunk?

Practically any common operating system today can run Splunk: Mac OS X, Linux, Windows, FreeBSD and more.

How much does Splunk cost?

Don’t worry about that right now. Download and install the free version. It takes minutes to install and is a no-brainer. Let’s get started.

Getting Splunk

IT managers and directors may be interested in watching the introductory and business case videos with the corporate speak (“operational intelligence” anyone?) and company endorsements. Techs will be interested in getting started. Right on their home page is a big green Free Download button. Go there, click it and locate the downloader for your OS of choice. I downloaded the Mac OS X 10.7 installer to test (and installed it on OS X 10.8 without any issues).

Splunk home

This does require a sign-up to create an account. It takes less than a minute to complete. After submitting the information the 100 MB download begins right away.

While waiting for the download…

When the download is on its way the Splunk folks kindly redirect to a page with some short videos to watch while waiting. Watch this first one called Getting data into Splunk. It’s only a few minutes and this is the first thing to do after getting into Splunk.

Installing and starting Splunk

The download arrives as a double-clickable Apple Installer package. Double-click and install it. Toward the end it opens a simple TextEdit window with instructions for how to start, stop and access the newly installed Splunk site.

Install done

Files are installed in /Applications/splunk and resemble a UNIX file system.

Splunk application folder

Open the Terminal application found in /Applications/Utilities and run the command /Applications/splunk/bin/splunk start. If this is the first time running Splunk it prompts to accept its license agreement. Tap the spacebar to scroll through and read the agreement or type “q” to quit and agree to the license.


Accepting the agreement continues to start Splunk where it displays some brief setup messages.

Starting Splunk

The setup then provides the local HTTP address for the newly installed Splunk site. Open this in a web browser to get to the login screen. The first login requires that the administrator account password be reset.

Splunk login

Following along with the Getting data into Splunk video, Splunk will need some information. Mac OS X stores its own log files. Let’s point to those.

Click the Add Data link to begin.

New Splunk home

Since Mac OS X’s log files are local to the machine, click A file or directory of files.

Add files

Click Next to specify local files.

Add local logs

This opens a window that exposes not only Mac OS X’s visible folders but its invisible folders as well. Browse to /var/log/system.log and click the Select button.

Browse logs folder

For now, opt to skip previewing the log file and click Continue.

Path to system.log

Now, let’s opt to monitor not only the system.log file but the entire /var/log folder containing dozens of other log files as well. Note that Splunk can watch rotated and zipped log files too. Click Save to finish adding logs.

Add /var/log folder

Let’s start searching!

Success, start searching

The Search window initially displays a list of all logs Splunk is monitoring. To narrow the search change the time filter drop down menu to Last 60 minutes. This will make the results a little easier to see on a system that’s only been running a short while.

Last 24 hours

Now, search for install*. Without the asterisk as a wildcard character, Splunk will only search for the literal word “install”. Splunk supports not only wildcard searches but booleans, parentheses, quotes, etc. It will return every instance recorded in the logs that matches the search criteria. It also creates an interactive bar chart along the top of the page to indicate the number of occurrences found for the search at particular times.

Search for install

To further refine the search, Option+click most any word in the log entries below and Splunk will automatically add the necessary syntax to remove an item. In this case the install* search returned install, installer and installd. Option+clicking installd changed the search criteria to install* NOT installd.

Modified search

Now what?

Continue exploring the videos to understand Splunk’s possibilities and take advantage of its Splunk Tutorial, which is available online as well as in PDF format for offline viewing. They do a great job leading users through setup and creating reports.

Still asking about price? Good.

The free version remains free but doesn’t include many features that really make it sing such as monitoring and alerts, multiple user accounts and support beyond the Splunk website. Cost depends primarily on the amount of data you want to suck into Splunk and have it watch. It’s not cheap but for an enterprise needing to meet certain service level requirements it beats browsing through multiple servers trying to find the right log with the right information.

FYI, putting together this 1,000-word article probably took me 10 times longer than performing the Splunk install itself and beginning to learn it. It’s really well-done and easy to use. Splunk makes getting started simple.

Rename files en masse

Friday, October 26th, 2012

There are more than a few shareware utilities for both Windows and Mac that give a user the ability to rename a bunch of files according to certain criteria. GUI utilities are always nice, but what if you’re logged into a webserver and need to rename all .JPG files to .jpg?

There’s a simple perl script that gives you the ability to rename all files in a directory according to the powerful rules of regular expressions. Here are some example ways to use the script.


The following renames all files ending in .JPG to .jpg.

% rename 's/\.JPG$/.jpg/' *.JPG

The next one converts all uppercase filenames to lowercase except for Makefiles.

% rename 'tr/A-Z/a-z/ unless /^Make/' *

The next one removes the preceding dot in front of a filename unless it’s a .DS_Store file.

% rename 's/^\.// unless /^\.DS_Store/' *

The next one appends the date to all text files in the current directory.

% rename '$_ .= ".2012-10-26"' *.txt

The last and arguably most useful way to use this tool is to pipe it through find.  This example renames all files in /var/www from .JPG to .jpg.

% find /var/www -name '*.JPG' -print | rename 's/\.JPG$/\.jpg/'

Note: There are two variables you can set in the script. The first is a list of files to ignore and the second turns on “dry run” mode, which lets you see what the script is going to do before irreversibly changing any file names.
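If you don’t have the perl script handy, the simple .JPG-to-.jpg case can be handled with plain shell parameter expansion. A sketch (the function name is my own):

```shell
# Rename every *.JPG in the given directory to *.jpg using ${f%.JPG}
# parameter expansion -- no regex or external rename tool required.
lowercase_jpg() {
  for f in "$1"/*.JPG; do
    [ -e "$f" ] || continue   # glob matched nothing; skip
    mv "$f" "${f%.JPG}.jpg"
  done
}
```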


Here is the script

Installing Python On Windows

Tuesday, October 9th, 2012


Python, although a standard language on all Macs and most Unix/Linux distributions, doesn’t come preinstalled on Windows machines. Thankfully, getting Python to play nice with Bill Gates is very straightforward and you’ll be done in less time than it takes to run a Windows update.

Get Python 2.7.3

First step is to go to the main Python website and get the correct Python version for your needs. Although Python 3.2 is out, 2.7.3 is the most compatible, and 3.2 isn’t 100% backwards compatible – so unless you’re writing code from scratch that won’t need any external modules, version 2.7.3 is the way to go.

Get the specific python installer for your hardware here:

Installing Python

Installing is simple. Open the MSI package like so:

Install Python 2.7.3

Choose the install folder. Default is C:\Python27

Choose folder

Default customizations are fine.

Default Customizations

Then watch some progress bars…

Progress Bars

All done!


Updating Your Path (optional & recommended)

The only other thing you may need to do is update your path to include the Python executable. This isn’t strictly necessary, since the installer associates all .py files with the Python exe, but if you ever want to test something or just run python from the shell, this update is a handy one.

First, right click the My Computer icon and go to properties.


Then go to advanced.
Advanced Properties

And then Environment Variables

Environment Variables

Append ;C:\Python27 to the path section like so.

Append Path

All done!

(Sysadmin) Software Design Decisions

Wednesday, October 3rd, 2012

When approaching a task with an inkling to automate, sometimes you find an open source project that fits the bill. But the creator will work within constraints, and often express their opinion of what’s important to ‘solve’ as a problem and therefore prioritize: a deployment tool is not necessarily a patch management tool is not necessarily a configuration management tool, and so on. One of the things I’ve dealt with is trying to gauge the intent of a developer and deciding if they are interested in further discussion/support/development of a given project. Knowing why one decision was made or another can be helpful in these situations. In that category of things I wish someone had written so I could read it, here are the design decisions behind the sonOfBackupRestoreScripts project I’ve been toying with as an add-on to DeployStudio (hereafter DS); after reading them, you’ll hopefully understand why I am not releasing it as an official, supportable tool in its current bash form.
I’ve adapted some of the things Google used in their outline for Simian as a model, to give this some structure.

Project Objective:

To move user home folders and local authentication/cached credentials between workstations in a customizable and optimized manner, preserving the integrity of the data/user records as much as possible


For speed and data integrity, rsync is used to move selections of the user’s home folder (minus caches, trash, and common exclusions made by Time Machine). To increase portability and preserve Mac-specific attributes, a disk image is generated to enclose the data. The user account information is copied separately, and helpful information is displayed at the critical points as it moves from one stage to another and during the backup itself.

Requirements: DeployStudio Server / NetBoot

DS, as a service, provides an infrastructure to run the script in, and automounts a repository to interact with over the network. It’s meant to work optimally with or without a NetBoot environment; an architecture assumption made during development/testing is wired ethernet, with USB/Thunderbolt adapters if the clients are MacBook Airs. Even old minis can function fine as the server, assuming the repo is located on a volume with enough space available to accept the uncompressed backups.

Implementation Details: Major Components / Underlying Programs

- source/destination variables

Parameters can be passed to the script to change the source/destination of backups/restores with the -s (for source) and -d (for destination) switches, each followed by a path reachable by the NetBooted system.
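Switch parsing like that might be sketched with getopts – a hedged illustration, not the actual project code, with variable names of my own choosing:

```shell
# Parse -s (source) and -d (destination) switches into SOURCE and DEST,
# leaving them empty if a switch wasn't supplied so callers can apply defaults.
parse_args() {
  SOURCE="" DEST=""
  local opt OPTIND=1
  while getopts "s:d:" opt; do
    case "$opt" in
      s) SOURCE="$OPTARG" ;;
      d) DEST="$OPTARG" ;;
    esac
  done
}
```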

- hdiutil

A simple sparse disk image is created which can expand up to 100GB with the built-in binary hdiutil. The file system format of that container is JHFS+, and a bunch of other best practices, cobbled together from Bombich’s Carbon Copy Cloner (hereafter CCC) and InstaDMG, are employed.

- cp

The cp binary is used to copy the user records from the directory service node the data resides on to the root of the sparseimage, and the admin group’s record is copied into a ‘group’ folder. If hashes exist in /var/db/shadow/hash, which is how passwords were stored prior to 10.7, those are moved to a ‘hashes’ folder.

- rsync

A custom, even more current build of rsync can be generated if the instructions listed here are followed. Ideally, a battle-tested version like the one bundled with CCC (/Applications/Carbon\ Copy\, which is actually a heavily customized rsync version 3.0.6) could be used, but its output isn’t easy to adapt to show an overview of progress during a CLI transfer. Regardless, the recommended switches are employed in hopes of getting a passing grade on the backupBouncer test. The 3.0.7 version bundled with DS itself (/Applications/Utilities/DeployStudio\, which for whatever reason is excluded when the assistant creates NetBoot sets) was used during development/testing.


The Users folder on the workstation being backed up is what’s targeted directly, so any deleted users or unwanted subfolders can be removed with the exclusions file fed to the rsync command. Without catch-all, asterisk (*) ‘file globbing’, you’d need to be specific about certain types of files you want to exclude if they’re in certain directories. For example, to not back up any mp3 files, no matter where they are in the user folders being backed up, you’d add - *.mp3. Additional catch-all excludes can be used, as detailed in the script, which specifically excludes ipsw’s (iOS firmware/OS installers) like this: --exclude='*.ipsw'
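A sketch of how an excludes file like the one described might be assembled – the entries here are illustrative, not the project’s actual list, and the helper name is my own:

```shell
# Write a bare-pattern excludes file suitable for rsync's --exclude-from
# (each line is one pattern; *.mp3 matches mp3s anywhere in the tree).
write_excludes() {
  cat > "$1" <<'EOF'
*.mp3
*.ipsw
.Trash/
Library/Caches/
EOF
}

# Then something like: rsync -a --exclude-from=Excludes.txt /Users/ "$DEST"
```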


Pretty much everything done via rsync and cp can be done in reverse, utilizing the source/destination options, so a backup taken from one machine can easily be chosen to restore to another.

Security Considerations:

Very little security is applied during storage. Files are transferred over password-protected AFP, so a separate server and repo could be used to minimize potential access by whoever can reach the main DS service. Nothing encrypts the files inside the sparseimages, and if present, the older password format is a hash that could potentially be cracked given a great length of time. The home folder ACLs and ownership/permissions are preserved, so in that respect it’s only as secure as the local file systems on the server and client.

Excluded/Missing Features:
(Don’t You Wish Every Project Said That?)

Hopefully this won’t sound like a soul-baring confession, but here goes:
No checks are in place for whether there’s enough space on destinations, nor whether a folder to back up is larger than the currently hard-coded 100GB sparseimage cap (after exclusions.) Minimal redirection of logs is performed, so the main DS log can quickly hit a 2MB cap and stop updating the DS NetBoot log window/GUI if there’s a boatload of progress echo’d to stdout. The process of restoring a user’s admin group membership (or any other group on the original source) is not performed, although the group’s admin.plist can be queried after the fact. Nor is there any reporting on Deleted Users’ orphaned home folders if they do actually need to be preserved; by default they’re just part of the things rsync excludes. All restrictions are performed in the Excludes.txt file fed to rsync, so they cannot be passed as parameters to the script.
And the biggest possible unpleasantness is also the #1 reason I’m not considering continuing development in bash: UID collisions. If you restore a 501 user to an image with a pre-existing 501 user that was the only admin… bad things will happen. (We’ve changed our default admin user’s UID as a result.) If you get lucky, you can change one user’s UID or the other and chown to fix things as admin before all heck breaks loose… If this isn’t a clean image, there’s no checking for duplicate users with newer data; there’s no FileVault 1 or 2 handling; no prioritization so that, if it can only fit a few home folders, it would do so and warn about the one(s) that wouldn’t fit; no version checking on the binaries in case different NetBoot sets are used; no fixing of ByHostPrefs (although DS’s finalize script should handle that); and no die-function check if the restore destination doesn’t have enough space, since the common case is restoring to the same HD or a newer, presumably larger computer. Phew!
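For what it’s worth, the missing free-space check wouldn’t be much code even in bash; here’s a rough sketch, with the destination path and required size as hypothetical placeholders:

```shell
#!/bin/bash
# Hypothetical values; a real script would compute these per home folder
needed_kb=1024              # required space in 1K blocks (placeholder)
dest="/tmp"                 # backup/restore destination (placeholder)

# df -P guarantees POSIX column layout; field 4 is available 1K blocks
avail_kb=$(df -Pk "$dest" | awk 'NR==2 { print $4 }')

if [ "$avail_kb" -lt "$needed_kb" ]; then
  echo "die: only ${avail_kb}KB free on ${dest}, need ${needed_kb}KB" >&2
  exit 1
fi
echo "ok: ${avail_kb}KB free on ${dest}"
```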


The moral of the story is that the data structures available in most of the other scripting languages are more suited for these checks and to perform evasive action, as necessary. Bash does really ungainly approximations of tuples/dictionaries/hash tables, and forced the previous version of this project to perform all necessary checks and actions during a single loop per-user to keep things functional without growing exponentially longer and more complex.

Let’s look forward to the distant future when this makes its way into Python for the next installment in this project. Of course I’ve already got the name of the successor to SonOfBackupRestoreScripts: BrideOfBackupRestoreScripts!

DeployStudio Scripting Tips

Tuesday, September 25th, 2012

I’ve given a presentation on my affinity for DeployStudio, yet with it being closed source, it’s sometimes like an enigma (wrapped in a mystery (wrapped in bacon)). However, a workflow exists to enable scripting within it, although the only option besides automatically running it when dropped into a workflow is non-interactively passing arguments to a script. Even with little in the way of documented information, projects have popped up to take advantage of the framework provided.

Most notably, Rusty Myers’ BackupRestore scripts enabled quite an interesting workflow: first, you could run one workflow to tar(or ditto) the user folders to a new Backup directory in the repo, with a few customizable exceptions. And then, when either specified or dropped into a workflow that had a restore action precede it, you could put the users and their associated passwords back into place. This is obviously pretty darn useful for clean(er) migrations and/or OS upgrades, or simply refreshing existing workstations with a new base set of software. Many folks in the MacEnterprise community contributed features, including FileVault(version 1) support, and updates were made for the curveball Lion introduced with respect to how passwords are stored(nested inside the user record plist itself.)

I’m in the process of creating a successor to this project, so I thought I’d share some of the experiences I’ve had and pointers I’ve come across as a sysadmin, not a software developer, attempting to get a repeatable task accomplished inside this framework. Tip number zero is the same advice given to all students of programming in scripting languages: don’t write a lot before running and getting feedback. So, I booted a laptop with a few small user folders to my DeployStudio-generated netboot set, authenticated to the interface, and opened terminal. That netboot set includes the optional Python framework(Ruby is another option if you’d like access to that scripting language), which I’ll be using in the future. Along with selecting “Display Runtime log window by default”, I extended the “Quit runtime automatically after__” number to 90 seconds, so when testing inside of a workflow I wouldn’t be kicked out as I iterated and repeated the process.

To get started, I made an “admin” folder inside the repo, put an updated version of the rsync binary in it(since the one in /usr/bin that ships with OS X is pretty ancient), and started writing a script in that directory which I could therefore run from terminal on the netbooted laptop over VNC/ARD. For starters, here’s tip #1:
DeployStudio mounts the repo in /tmp/DSNetworkRepository. While /tmp isn’t read only, you will get out-of-space errors and general unhappiness if you use it for anything except mountpoints.
Tip #2:
No, you can’t use symlinks in those subfolders to point somewhere else on the DeployStudio server, since it assumes the directory the symlink would point to is relative to the root of the netboot set instead. (No, really, it wouldn’t even work when using ln -s on the machine the repo’s hosted from instead of Finder’s aliases, which definitely don’t work in scripts.)
tip #3:
For persnickety folks like myself who MUST use a theme in Terminal and can’t stand not having option set as the meta key, you’re probably bummed that the Preferences menu item is greyed out and command-comma doesn’t seem to work. There is a way, though: from the Shell menu choose Show Inspector. Then from the Settings tab, double-click the theme you prefer. The full settings window will appear, and you can have your modern conveniences again.
tip #4:
How does DeployStudio decide what is the first mounted volume, you may wonder? I invite(dare?) you to ‘bikeshed‘(find a more optimized way to accomplish a relatively trivial task) this particular piece of code:
system_profiler SPSerialATADataType | awk -F': ' '/Mount Point/ { print $2}' | head -n1
In the case of the restore script, hard-coding the DS_LAST_RESTORED_VOLUME variable (on line 44 in Rusty’s current Lion version), or changing the argument in the workflow to pass that path with the -v flag will remove the dependency on restoring an image before putting user folders back in place.

Two more tips before I go, which are both specific to the task I’m trying to accomplish. Ownership on files will not be preserved when moved to the repo with rsync, so you can create a sparse image or sparsebundle as a destination, and it will even retain ACL’s (certain patched rsync binaries complain about smb_acl’s, like the one I used, which is bundled in DeployStudio’s Tools folder.) As mentioned about /tmp in the NetBoot environment earlier, sparseimages should be created in a subfolder of the repo, or you could easily run into ‘out of space’ errors.

When logging, you can use tee or simply redirect output, but more customized feedback in the actual log window in the Deploy Studio netboot runtime is helpful. There’s a “custom_logger” function used in some of the bundled scripts… which literally does nothing but echo $1 – pretty rudimentary. For output that doesn’t display when run as part of a script, you can redirect that output to /dev/stdout and have it shown instead of fooling around with echo or printf.
e.g. rsync -avz /source /destination >/dev/stdout
There may be a lag if verbose output happens in a short amount of time, as the main log file in the repo is being written to simultaneously to what is printed onscreen.
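If you wanted custom_logger to earn its keep, a timestamped variant is only a line or two more; this sketch (with a made-up SCRIPT_NAME tag) is one way:

```shell
#!/bin/bash
# Sketch: prepend a timestamp and a tag instead of custom_logger's bare echo
custom_logger() {
  echo "$(date '+%Y-%m-%d %H:%M:%S') [${SCRIPT_NAME:-deploy}] $1"
}

SCRIPT_NAME="backupScript"   # hypothetical tag for this run
custom_logger "starting rsync pass"
```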

Oh, and the silliest thing I’ve noticed: your script needs to be owned by root:wheel and 777 in the scripts folder of the repo in order to show up in the workflow interface for selection… It’s got its quirks, but it’s just about the best out there!

MacSysAdmin 2012 Slides and Videos are Live!

Thursday, September 20th, 2012

318 Inc. CTO Charles Edge and Solutions Architect alumni Zack Smith were back at the MacSysAdmin Conference in Sweden again this year, and the slides and videos are now available! All the 2012 presentations can be found here, and past years are at the bottom of this page.

A Bash Quicky

Thursday, August 30th, 2012

In our last episode spelunking a particularly shallow trough of bash goodness, we came across dollar sign substitution, which I said mimics some uses of regular expressions. Regex’s are often thought of as thick or dense with meaning. One of my more favorite descriptions goes something like, if you measured each character used in code for a regex in cups of coffee, you’d find the creators of this particular syntax the most primo, industrial-strength-caffeinated folks around. I’m paraphrasing, of course.

Now copy-pasta-happy, cargo-culting-coders like myself tend to find working code samples and reuse salvaged pieces almost without thinking, often recognizing the shape of the lines of code more than the underlying meaning. Looping back around to dollar sign substitution, we can actually interpret this commonly used value, assigned to a variable meaning the name of the script:
Okay children, what does it all mean? Well, let’s start at the very beginning(a very good place to start):
${0}: The dollar sign and curly braces force an evaluation of the symbols contained inside, often used for returning complex series of variables. As an aside, counting in programming languages starts with zero, and each space-separated part of the text is assigned a number per place in the order, also known as positional parameters. The entire path to our script is given the special ‘seat’ of zero, so this puts the focus on that zero position.

Regrouping quickly, our objective is to pull out the path leading up to the script’s name. So we’re essentially gathering up all the stuff up to and including the last forward slash before our scripts filename, and chuckin’ them in the lorry bin.
${0##*}: To match all of the instances of a pattern, in our case the forward slashes in our path, we double up the number signs (or pound sign for telecom fans, or hash for our friends on the fairer side of the puddle.) This performs a “greedy” match, gobbling up all instances, with a star “globbing” to indiscriminately mop up any matching characters encountered along the way.
${0##*/}: Then we cap the whole mess off by telling it to stop when it hits the last occurrence of a character, in this case the forward slash. And that’s that!
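Laid end to end in a runnable snippet (with a made-up path standing in for ${0}, since an interactive shell’s $0 isn’t a script path):

```shell
#!/bin/bash
# A stand-in for ${0}, the full path to a script
path="/Users/Shared/admin/backup.sh"

# Single # is the lazy match: strip the SHORTEST leading match of */
first="${path#*/}"     # drops just the leading slash's worth

# Double ## is the greedy match: strip the LONGEST leading match of */
name="${path##*/}"     # leaves only the filename

echo "$first"
echo "$name"
```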

Pardon the tongue-in-cheek tone of this quick detour into a bash-style regex-analogue… but to reward the masochists, here’s another joke from Puppet-gif-contest-award-winner @pmbuko:

Email from a linux user: “Slash is full.” I wanted to respond: “Did he enjoy his meal?”

Evaluating the Tokens, or Order of Expansion in Bash

Monday, July 23rd, 2012

Previously in our series on commonly overlooked things in bash, we spoke about being specific with the binaries our script will call, and mentioned conventions to use after deciding what to variable-ize. The rigidity and lack of convenience afforded by bash starts to poke through when we’re trying to abstract re-usable inputs through making them into variables, and folks are commonly tripped up when trying to have everything come out as intended on the other side, when that line runs. You may already know to put quotes around just about every variable to catch the possibility of spaces messing things up, and we’re not even touching on complex ‘sanitization’ of things like non-roman alphabets and/or UTF-8 encoding. Knowing the ‘order of operation expansion’ that the interpreter will use when running our scripts is important. It’s not all drudgery, though, as we’ll uncover features available to bash that you may not have realized exist.

For instance, you may know curly braces can be used for grouping, but did you know there’s syntax to, for example, expand to multiple extensions for the same filename by putting them in curly braces, comma-separated? An interactive example (with set -x):
cp veryimportantconfigfile{,-backup}
+ cp veryimportantconfigfile veryimportantconfigfile-backup

That’s referred to as filename (or just) brace expansion, and is the first in order of the (roughly) six types of expansion the bash interpreter goes through when evaluating lines and ‘token-ized’ variables in a script.

Since you’re CLI-curious and go command line (trademark @thespider) all the time, you’re probably familiar not only with using tilde (~) as a shortcut to the current logged-in user’s home directory, but also that cd alone will assume you meant that home directory. A user’s home gets a lot of traffic, and while the builtin $HOME variable is probably more reliable if you must include interaction with home directories in your script, tilde expansion (including any subdirectories tagged onto the end) is the next in our expansion order.

Now things get (however underwhelmingly) more interesting. Third in the hit parade, each with semi-equal weighting, are
a. the standard “variable=foo, echo $variable” style ‘variable expressions’ we all know and love,
b. backtick-extracted results of commands, which can also be achieved with $(command) (and, if worst comes to worst, you could force another expansion of a variable with the eval command),
c. arithmetic expressions (like -gt for greater than, -eq for equal, -lt for less than, etc.) as we commonly use in comparison tests,
and an interesting set of features that are actually convenient (and mimic some uses of regular expressions), called (misleadingly)
$. dollar sign substitution. All of the different shorthand included under this category has been written about elsewhere in detail, but one in particular is an ad-hoc twist on a catchall that you could use via the ‘shell options’, or shopt command (originally created to expand on ‘set‘, which we mentioned in our earlier article when adding a debug option with ‘set -x‘). All of the options available with shopt are a bit too numerous to cover now, but one that you’ll see particularly strict folks use is ‘nounset‘, to ensure that variables have always been defined if they’re going to be evaluated as the script runs. It’s only slightly confusing that a variable can have an empty string for a value, which would pass this check. Often it’s the other way around, and we’ll have variables that are defined without being used; the thing we’d really like to look out for is when a variable is supposed to have a ‘real’ value, and the script could cause ill effects by running without one – so the question becomes: how do we check for those important variables as they’re expanded?
A symbol used in bash that will come up later when we cover getopt is the colon, which refers to the existence of an argument, or the variable’s value (text or otherwise) that you’d be expecting to have set. Dollar sign substitution mimics this concept by allowing you to ad hoc check for empty (or ‘null’) variables by following a standard ‘$variable‘ with ‘:?’ (finished product: ${variable:?}) – in other words, it’s a test of whether the $variable expanded into a ‘real’ value, and it will exit the script at that point with an error if unset, like an ejector seat.
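Here’s the ejector seat in a runnable sketch; the checks run in subshells so the demo itself survives to report the exit statuses:

```shell
#!/bin/bash
# ':?' aborts the (sub)shell with an error if the variable is unset or empty
unset required_setting

( : "${required_setting:?required_setting must be set}" ) 2>/dev/null
status_unset=$?      # non-zero: the subshell got ejected

required_setting="10.6.8"
( : "${required_setting:?required_setting must be set}" )
status_set=$?        # zero: expansion produced a real value

echo "unset: $status_unset, set: $status_set"
```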

Moving on to the less heavy expansions, the next is… command lookups in the run environment’s PATH, which are evaluated like regular (western) sentences, from left to right.
As it traipses along down a line running a command, the interpreter follows that command’s rules regarding whether it’s supposed to expect certain switches and arguments, and assumes those are split by some sort of separator (whitespace by default), referred to as the Internal Field Separator. The order of expansion continues with this ‘word splitting’.
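A tiny illustration of the separator at work, splitting an invented comma-delimited record instead of the default whitespace:

```shell
#!/bin/bash
# read honors IFS when splitting a line into fields
line="Berlin,10.6.8,MacBookPro10"
IFS=',' read -r host os model <<< "$line"
echo "host=$host os=$os model=$model"
```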

And finally, there’s regular old pathname pattern matching – if you’re processing a file or folder in a directory, it finds the first instance that matches and evaluates that – pretty straightforward. You may notice we’re often linking to the Bash Guide for Beginners site, as hosted by The Linux Documentation Project. Beyond that resource, there’s also videos from 2011(iTunesU link) and 2012(youtube) Penn State Mac Admins conference on this topic if you need a refresher before we forge ahead for a few more posts.

Back to Basics with Bash

Tuesday, July 17th, 2012

The default shell for Macs has been bash for as long as some of us can remember(as long as we forget it was tcsh through 10.2.8… and before that… there was no shell, it was OS 9!) Bash as a scripting language doesn’t get the best reputation, as it is certainly suboptimal for modern workflows. To get common things done you need to care about procedural tasks, and things can become very ‘heavy’ very quickly. With more modern programming languages that have niceties like APIs and libraries, the catchphrase you’ll hear is that you get loads of functionality ‘for free’, but it’s good to know how far we can get, and why those object-oriented folks keep telling us we’re missing out. And, although most of us are using bash every time we open a shell(zsh users probably know all this stuff anyway), there are things a lot of us aren’t doing in scripts that could be better. Bash is not going away, and is plenty serviceable for ‘lighter’, one-off tasks, so over the course of a few posts we’ll touch on bash-related topics.

Something even a long-time scripter may easily overlook is how we might set variables more smartly and _often_, making good decisions and being specific about what we choose to variable-ize. If the purpose of a script is to customize things in a way that’s reusable, making a variable out of that customization (say, for example, a hostname or notification email address) allows us to easily re-set that variable in the future. And in our line of work, if you do something once, it is highly probable you’ll do it again.

Something else you may have seen in certain scripts is the PATH variable being explicitly set or overridden, under the assumption that it may not be set in the environment the script runs in, or that the droids binaries we’re looking for will definitely be found once we set the path directories specifically. This is well-intentioned, but imprecise to put it one way, clunky to put it another. Setting a custom path, or having customized binaries that could end up interacting with our script, may cause unintended issues, so some paranoia should be exhibited. As scientists and troubleshooters, being as specific as possible always pays returns, so a guiding principle we should consider adopting is, instead of setting the path and assuming, to make a variable for each binary called as part of a script.

Now would probably be a good time to mention a few things that assist us when setting variables for binaries. Oh, and as conventions go, it helps to leave variable names that are set for binaries lowercase, and use all caps for the customizations we’re shoving in, which helps us visually pick out only our customized info as we debug/inspect the script and when we go in to update those variables for a new environment. /usr/bin/which tells us the path to the binary that is currently first discovered in our path; for example, ‘which which’ tells us we first found a version of ‘which’ in /usr/bin. Similarly, you may realize from its name what /usr/bin/whereis does. Man pages as a mini-topic is also discussed here. However, a more useful way to tell if you’re using the most efficient version of a binary is to check it with /usr/bin/type. If it’s a shell builtin, like echo, it may be faster than alternatives found at other paths, and you may not even find it necessary to make a variable for it, since there is little chance someone has decided to replace bash’s builtin ‘cd’…
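A rough sketch of that guiding principle in practice; the notification address is a placeholder customization, and awk stands in for any binary your script leans on:

```shell
#!/bin/bash
# Convention: lowercase variables for binaries, ALL CAPS for customizations
NOTIFY_EMAIL="admin@example.com"   # placeholder site customization

awk="$(type -p awk)"               # full path to the first awk in our PATH
type echo                          # reports that echo is a shell builtin

# Calls then go through the variable rather than a bare PATH lookup
"$awk" 'BEGIN { print "called via explicit path" }'
```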

The last practice we’ll try to spread the adoption of is using declare when setting variables. Again, while a lazy sysadmin is a good sysadmin, a precise one doesn’t have to worry about as many failures. A lack of portability across shells has helped folks overlook declare, but it’s useful even if it is bash-specific. When you use declare with -r for read-only, you’re ensuring your variable doesn’t accidentally get overwritten later in the script. Just like the tool ‘set’ for shell settings, which is used to debug scripts when using the xtrace option for tracing how variables are expanded and executed, you can remove a designation with a +, e.g. set +x. Integers can be ensured by using -i (which frees us from using ‘let’ when simply setting a number), arrays with -a, and when you need a variable to stick around longer than the individual script it’s set in or the current environment, you can export the variable with -x. Alternately, if you must use the same exact variable name with a different value inside a nested script, you can set the variable as local so you don’t ‘cross the streams’. We hope this starts a conversation on proper bash-ing; look forward to more ‘back to basics’ posts like this one.
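Condensing those designations into one runnable sketch (the paths and names are invented for the demo):

```shell
#!/bin/bash
declare -r REPO_PATH="/Volumes/Backup"   # read-only: reassignment now errors
declare -i count=5                        # integer: math without 'let'
count+=1                                  # arithmetic add, thanks to -i
declare -a users=(alice bob)              # indexed array
declare -x EXPORTED_FLAG="yes"            # exported to child processes

bump() {
  local count=100                         # local: shadows without clobbering
  echo "inside: $count"
}

bump
echo "outside: $count"
echo "second user: ${users[1]}"
```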

Video on Setting up TheLuggage

Friday, July 13th, 2012

The Luggage is shaping up to be the go-to packaging software for Mac Admins. Getting started can be daunting for some, though, so I’ve narrated a video taking you through the steps required to set it up. Not included:
- Getting a Mac account (while this process can mostly be done for free, it’s the best and easiest way if you do have access)
- Downloading the tools from the Mac Dev Center (Command Line Tools and Auxiliary Tools for Xcode)
- Choosing your favorite text editor (no emacs vs vi wars, thanks)

Setting up The Luggage from Allister Banks on Vimeo.

Happy Packaging! Please find us on Twitter or leave a comment if you have any feedback.

Creating a binding script to join Windows 7 clients to Active Directory

Tuesday, July 3rd, 2012

There are some different ways to join Windows 7 to a domain.  You can do it manually, use djoin.exe to do it offline, use powershell, or use netdom.exe.

  • Doing so manually can get cumbersome when you have a lot of different computers to do it on.
  • With djoin.exe you will have to run it on a member computer already joined to the domain for EACH computer you want to join, since it will create a computer object in AD for each computer beforehand.
  • PowerShell is OK to use, but you have to set the execution policy to unrestricted beforehand on EACH computer.
  • Netdom is the way to go since you prep once for the domain, then run the script with Administrator privileges on whatever computers you want to join to the domain.  Netdom doesn’t come on most versions of Windows 7 by default.  There are two versions of netdom.exe, one for x86 and one for x64.  You can obtain netdom.exe by installing Remote Server Administration Tools (RSAT) for Windows 7, and then copying netdom.exe to a share.

A quick way to deal with both x86 and x64 architectures in the same domain would be to make two scripts.  One for x86 and one for x64 and have the appropriate netdom.exe in two different spots \\server\share\x86\ and \\server\share\x64\.

You’ll need to either grab netdom.exe from a version of Windows 7 that already has it, or install RSAT for either x64 or x86 Windows 7 from here, whichever you will be working with.  Install that on a staging computer.   The following steps are how to get netdom.exe from the RSAT installation.

  1. Download and install RSAT for either x64 or x86.
  2. Follow the help file that opens after install for enabling features.
  3. Enable the following feature: Remote Server Administration Tools > Role Administration Tools > AD DS and AD LDS Tools > AD DS Tools > AD DS Snap-ins and Command-Line Tools

netdom.exe will now be under C:\windows\system32

Create a share readable by everybody on the domain, and drop netdom.exe there.

Create a script with the following:

@echo off
SET netdomPath=c:\windows\system32
SET sourcePath=\\fileshare\folder\
SET domain=mydomain.local
::BATCH.BAT sets the %adminUser% and %passwd% credentials
CALL BATCH.BAT

::If necessary, copy netdom and its helpers to the local machine
IF EXIST c:\windows\system32\netdom.exe GOTO join
COPY %sourcePath%netdom.exe %netdomPath%
COPY %sourcePath%dsquery.exe %netdomPath%
COPY %sourcePath%dsrm.exe %netdomPath%

::Join PC to the domain (%computerName% is the machine's built-in name variable)
:join
NETDOM JOIN %computerName% /d:%domain% /UD:%adminUser% /PD:%passwd%

SHUTDOWN -r -t 0

Change domain and sourcePath to their real values.  Remove dsquery.exe and dsrm.exe if not needed.  If you’re just joining a domain, and not running anything after, then you don’t need them.

Create another script called “BATCH.BAT” that will hold your credentials that have access to joining computers to the domain.  Put BATCH.BAT in both places that house your Join-To-Domain script (…/x86 and …/x64)

@echo off
SET passwd=thisismypassword
SET adminuser=thisismyadminusername

  1. Ensure you have the scripts in the same directory.
  2. Open up a command prompt with Administrator privileges and change directory to the location of your scripts.

Running the first script will:

  1. Run a check to see if netdom, dsquery, and dsrm are installed under system32; if they are, it will then join the domain, and if not it will attempt to copy them from your share.
  2. Once it ensures it has the files it needs, it will join the computer to the domain under the “Computers” OU with its current computer name using the credentials set by BATCH.BAT.
  3. It will reboot when done.

This will work on both Server 2003 and Server 2008.

Building A Custom CrashPlan PROe Installer

Friday, April 13th, 2012

CrashPlan PROe installation can be customized for various deployment scenarios

Customization of implementations for over 10,000 clients is considered a special case by Code 42, the makers of CrashPlan, and requires that you contact their sales department. Likewise, re-branding the client application to hide the CrashPlan logo also requires a special license.

Planning Your Deployment

A large scale deployment of CrashPlan PROe clients requires a certain level of planning and setup before you can proceed. This usually means a test environment to iron out the details that you wish to configure. Multiple locations, bandwidth, and storage are obvious concerns that will need a certain amount of tuning before and after the service ‘goes live’. Also, an LDAP server populated with the expected information, or a prepared XML document with identifiable machine information, needs to be matched with account and registration data. Not just account credentials, but also filing computers and accounts into groups through the use of Organizations (which directly relate to the registration information used) should be considered.

Which Files to Change

The CrashPlan PROe installer has different files for Windows and Mac OS X, but the gist is largely the same for either. There is a customizable script (or .bat file) that you can use to specify variables to feed deployment-specific information into a template. The script can be customized to reference LDAP information, or even a shared data source that can provide account information based on an identifiable resource such as a MAC address.

Mac OS X 

Download the installer DMG and make a copy of it. The path we’ll be working in is:

Install CrashPlanPRO.mpkg/Contents/Resources/

Inside the Resources directory there is a Custom-example folder that contains the template and script to customize.

Duplicate the Custom-example folder to Custom. The configuration script inside has (commented-out by default) sections for parsing usernames from the current home folder, hostname, or from LDAP. This would also be where one could gather other machine information (such as MAC address) and match it to data in a shared document on a file server.

In the same folder as the script is the folder “conf”, which contains the file default.service.xml. The contents of this file can be fed variable information from the configuration script to set the user name, computer name, LDAP specifics, and password that will be used upon installation. It is advisable to test new user creation when using LDAP and CrashPlan organizations, to ensure users are created as expected. It is possible to specify those properties in this XML list.

So the process breaks down like this: edit the configuration script to populate default.service.xml, let the installer run and make contact with the server, and let the organization policies set all non-custom settings.

XML Parameters

default.service.xml has the following properties

By supplying the address, registrationKey, username and password, the user will bypass the registration / login screen. The following tables describe authority attributes that you can specify and their corresponding parameters.

Authority Attributes
  • The primary address and port to the server that manages the accounts and issues licenses. If you are running multiple PRO Servers, enter the address for the Master PRO Server.
  • (optional) The secondary address and port to the authority that manages the accounts and issues licenses. Note: this is an advanced setting; use only if you are familiar with its use and results.
  • A valid Registration Key for an organization within your Master PRO Server. Hides the Registration Key field on the register screen if a value is given.
  • The username to use when authorizing the computer; can use the params listed below.
  • The password used when authorizing the computer; can use the params listed below.
  • (true/false) Do not prompt or allow the user to change the address (default is false).
  • (true/false) Allow the user to change the server address on the Settings > Account page. (Do not set if hideAddress=“true”.)

Authority Parameters
  • Determined from the CP_USER_NAME command-line argument, the CP_USER_NAME environment variable, or the “” Java system property from the user interface once it launches.
  • The system computer name.
  • Random 8 characters, typically used for the password.
  • For LDAP and Auto register only! This allows clients to register without manually entering a password, requiring the user to log in to the desktop the first time.
  • Set to false to turn off the inbound backup listener by default.

Sample Usage
All of these samples are for larger installations where you know the address of the PRO Server and want to specify a Registration Key for your users.
Note: NONE of these schemes require you to create the user accounts on your PRO Server ahead of time.

  • Random Password: Your users will end up with a random 8-character password. In order to access their account they will have to use the Reset My Password feature OR have their password reset by an admin.
  • Fixed Password: All users will end up with the same password. This is appropriate if your users will not have access to the CrashPlan Desktop UI and the credentials will be held by an admin.
  • Deferred Password: FOR LDAP ONLY! This scheme allows the client to begin backing up, but it is not officially “logged in”. The first time the user opens the Desktop UI they will be prompted with a login screen and they will have to supply their LDAP username/password to successfully use CrashPlan to change their settings or restore data.

Changing CrashPlan PRO’s Appearance (Co-branding)

This information pertains to editing the installer for co-branding. Skip this section if you are not co-branding your CrashPlan PRO.
Co-Branding: Changing the Skin and Images Contents

You can modify any of the images that appear in the PRO Server admin console as well as those that appear in the email header. Here are the graphics you may substitute:
Custom/skin folder contents:

Filename – Description
logo_splash.png – splash screen logo
splash.png – transparent splash background (Windows XP only)
splash_default.png – splash background, must NOT be transparent (Windows Vista, Mac, Linux, Solaris, etc.)
logo_main.png – main application logo that appears on the upper right of the desktop
window_bg.jpg – main application background
icon_app_16x16.png – icons that appear on desktop, customizable with Private Label agreement only

View examples
In the Custom/skin folder, locate the image you wish to replace.
Create another image that is the same size with your logo on it.
For best results, we recommend using the same dimensions as the graphics files we’ve supplied.
Place your customized version into the Content-custom folder you created.
Make sure not to change the filename or folder structure, so that CrashPlan PRO will be able to find the file.
Co-Branding: Editing the Text Properties File

You can change the text that appears as the application name or product name in CrashPlan PRO Client. Make your changes in files in the Custom/lang folder.
The default file is in English.
Each file contains the text for one language; the language is identified in the comments at the beginning of the file. Please refer to the Internationalization document from Sun for details.
When you change the application or product name, keep in mind that using very long names could affect the flow / layout of the text in a window or message box.
Text Property – Description
Product.B42_PRO – The name of the product as it appears on the Settings > Account page, such as CrashPlan PRO. The application name appears in error messages, instructions, and descriptions throughout the UI.
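As a sketch of what an override might look like, here is a hypothetical entry for a file in the Custom/lang folder. Only the Product.B42_PRO key comes from the table above; the value is a made-up product name:

```
# Hypothetical lang-file override -- the key is documented above,
# the product name is a placeholder for your own branding.
Product.B42_PRO=Acme Backup PRO
```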

Creating an Installer

Make the customizations that you want as part of your deployment, then follow the instructions to build a self-installing .exe file.
How It Works – Windows Installs

Test your settings by running the CrashPlan_[date].exe installer.
Make sure the installer.exe file and the Custom folder reside in the same parent folder.
Re-zip the contents of your Custom folder so you have a new zip file that contains:
Custom (includes the skin and conf folders)
Turn your zip file into a self-extracting / installing file for your users.
For example, download the zip2secureexe from
The premium version is not required; however, it does have some nice features and they certainly deserve your support if you use their utility.
Launch zip2secureexe, then:
specify the zip file
specify the name of the program to run after unzipping: CrashPlan_[date].exe
check the Build an EXE option to automatically unzip to a temporary directory
specify the app title: CrashPlan Installer
specify the icon file: cpinstall.ico
click Create to create your self-extracting zip file
Windows Push Installs

Review / edit cp_silent_install.bat and cp_silent_uninstall.bat.
These show how the push installation system needs to execute the Windows installer.
If your push install software requires an MSI, download the 32-bit MSI or the 64-bit MSI.
If you have made customizations, place the Custom directory that contains your customizations next to the MSI file.
To apply the customizations, run the msiexec with Administrator rights:
Right-click CMD.EXE, and select Run as Administrator.
Enter msiexec /i



REM The LDAP login user name and the CrashPlan user name.
Echo UserName: %CP_USER_NAME%

REM The users home directory, used in backup selection path variables.
SET CP_USER_HOME=C:\Documents and Settings\crashplan
Echo UserHome: %CP_USER_HOME%

REM Tells the installer not to run CrashPlan client interface following the installation.
Echo Silent: %CP_SILENT%

Echo Arguments: %CP_ARGS%

REM You can use any of the msiexec command-line options.
ECHO Installing CrashPlan…
CrashPlanPRO_2008-09-15.exe /qn /l* install.log CP_ARGS=%CP_ARGS% CP_SILENT=%CP_SILENT%



REM Tells the installer to remove ALL CrashPlan files under C:/Program Files/CrashPlan.

ECHO Uninstalling CrashPlan…

How It Works – Mac OS X Installer

PRO Server customers who have a lot of Mac clients often want to push out and run the installer for many clients at a time. Because we don’t offer a push installation solution, you’ll need to use other software to push-install CrashPlan, such as Apple’s ARD.
Run Install CrashPlanPRO.mpkg to test your settings:
At the command line, type open Install\ CrashPlanPRO.mpkg (from /Volumes/CrashPlanPRO/), or simply launch Install CrashPlanPRO.mpkg from the Finder.
Unmount the resulting disk image and distribute to users.
Note: If you do not want the user interface to start up after installation or you want to run the installer as root (instead of user), change the file as described in next section.
Understanding the File
This Mac-specific file is in the Custom-example folder inside the installer metapackage. Edit this file to set the user name and home variables if you wish to run the installer from an account other than root, such as user, and/or you wish to prevent the user interface from starting up after installation.
Be sure to read the comments inside the file.
How It Works – Linux Installer
Edit your install script as needed.
Run the install script to test your settings.
Tar/gzip the crashplan folder and share it with other users.
Custom Folder Contents
When you open the installer zip file or resource contents and view the Custom-example folder, the structure looks like this:
Contents of resource folder
Custom (folder)
skin (folder)
lang (folder)
conf (folder)
cpinstall.ico (Windows only)
must be created using an icon editor (Mac only)

Customizing the PRO Server Admin Console

You can also change the appearance of the PRO Server admin console and email headers and footers.
In the ./content/Manage folder, locate the images and macros you wish to modify and copy them into ./content-custom/Manage-custom using the same sub-folder and file names as the originals. Placing them there protects your changes from being wiped during the next upgrade.
Our HTML macros are written with Apache Velocity. If your site stops working after you’ve changed a macro, delete or move the customized version to get it working again.
Location of Key PRO Server Files
These locations may change in a future release, so you will be responsible for moving your customized versions to keep your images working.
macros/cppStartHeader.vm ++ (see below)
macros/cppFooterDiv.vm ++ (see below)
Email images are:
++ These files are web macros. You’ll need to update these in place instead of copying them to the custom folder. They won’t work under the custom folder. Remember that our upgrade process will overwrite your changes.

The (Distributed) Version Control Hosting Landscape

Monday, March 19th, 2012

When working with complex code, configuration files, or just plain text, using version control (or VC for short) should be like brushing your teeth. You should do it regularly, and getting into a routine with it will protect you from yourself. Our internet age has dragged us into more modern ways of tracking changes to and collaborating on source code, and in this article we’ll discuss the web-friendly and social ways of hosting and discovering code.

One of the earliest sites to rise to prominence was Sourceforge, which is now owned by the company behind Slashdot and Thinkgeek. Focused around projects instead of individuals, and offering more basic VC systems, like… CVS, Sourceforge became a site many open source developers would host and/or distribute their software through. Lately, Sourceforge seems to be on the wane, as it has become redirect- and advertising-heavy.

When Google wanted to attract more attention to its open source projects and give outsiders a way to contribute, it opened its project-hosting site in 2005. In addition to SVN, Mercurial (a.k.a. Hg) became available as an alternative VC option in 2009, as it was the system adopted by the Python language, whose creator, Guido van Rossum, is an employee at Google. Hg was one of the original Distributed Version Control Systems (DVCS for short), but the complexity of such a system could feel ‘bolted-on’ when using Google for hosting (especially in the cloning interface), and the introduction of Git as an option mid last year brings this feeling out even more.

Bitbucket was another prominent early champion of Hg, and its focus, like those previously mentioned, is also on projects. Atlassian, the company behind it, is a real titan in the industry, as the steward of the Jira bug-tracking software, the Confluence wiki, and the HipChat web-based IM/chatroom service, and it has recently purchased the Mac DVCS GUI client SourceTree. Even more indicative of the fast-paced and free-thinking approach of how Atlassian does business is its adoption of Git late last year as an option for Bitbucket, going so far as to guide folks to move their Hg projects to it.

But the 900-pound gorilla in comparison to all of these is Github, with their motto, ‘Social Coding’. Collaboration can tightly couple developers and make open source dependent on the approval or contributions of others. In contrast, ‘Forking’ as a central concept to Git makes this interdependency less pronounced, and abstracts the project away to put more focus on the individual creators. Many words have already been spent on the phenomenon that is Git and Github by extension, just as its Rails engine enjoyed in years past, so we’ll just sign off here by recommending you sign up somewhere and join the social coding movement!

Microsoft’s System Center Configuration Manager 2012

Sunday, March 18th, 2012

Microsoft has released the Beta 2 version of System Center Configuration Manager (SCCM), aka System Center 2012. SCCM is a powerful tool that Microsoft has been developing for over a decade. It started as an automation tool and has grown into a full-blown management tool that allows you to manage, update, and distribute software, licenses, policies and a plethora of other amazing features to users, workstations, servers, and devices, including mobile devices and tablets. The new version has a simplified infrastructure, without losing functionality compared to previous versions.

SCCM provides end-users with an easy-to-use web portal that allows them to choose the software they want, triggering installation of the application in a timely manner. For mobile devices, the management console has an Exchange connector and will support any device that can use the Exchange ActiveSync protocol. It will allow you to push policies and settings to your devices (i.e. encryption configurations, security settings, etc.). Windows Phone 7 features are also manageable through SCCM.

The Exchange component sits natively with the configuration manager and does not have to interface with Exchange directly to be utilized. You can also define minimal rights for people to just install and/or configure what they need and nothing more. The bandwidth usage can be throttled to govern its impact on the local network.

SCCM will also interface with Unix and Linux devices, allowing multi-platform device management. At this point, many 3rd party tools such as the Casper Suite and Absolute Manage also plug into SCCM nicely. Overall, this is a robust tool for the multi-platform networks that have become so common in today’s businesses.

Microsoft allows you to try the software. For more information, contact your 318 Professional Services Manager, or 318 directly if you do not yet have one.

Test-Driven Sysadmin with a Russo-Australian Accent

Friday, March 16th, 2012

One of the jokes in the Computer Science field goes like this: there are only 2 hard problems: cache invalidation, naming things, and off-by-one errors. Please do pardon the pun.

Besides the proclivity to name things strangely in the tech community, we often latch on to acronyms and terms that show our pride in being proficient with cutting-edge (or obscure) concepts. As with fashion, there is an ebb and flow to what’s new, but one thing that is here to stay is testing code, exemplified by the concept of TDD, or Test-Driven Development. When you work with complex systems, dependencies can become a fragile house of cards, but here’s another take on that concept: “here in Australia, ‘babushka doll’ is the colloquial term for Russian nesting dolls. Deps” (short for dependencies) “are intended to be small, tidy chunks of code, nested within each other – hence the name”.

Babushka is the name of a tool, for Mac OS X and Linux, that tests for the software or settings your system relies on – and if something isn’t present, it goes about changing that for you. Its claim of “no job too small” hints at how atomic and for-mere-mortals the tool was made to be. In comparison to configuration management tools like Puppet and Chef, which are also written in Ruby, it’s much more humble, with a proportionally smaller community. The larger tools strive to deliver the ‘holy trinity’: a package, a configuration file, and a service (gathered in modules in Puppet parlance, or recipes in Chef). Babushka can just deliver the package and let you build from there.

It was originally released a few years ago, and has recently been refreshed with new capabilities and approachable, comprehensive documentation. Unlike centralized business systems that require curation to take into account things like volume licensing, Babushka can let you reach right out to publicly available freeware. For developers it affords more conveniences like the command line tools that used to require Xcode, package managers like homebrew, and support for Ubuntu’s standard package manager as well.

Git and Github both play a big part in Babushka, and not just because Git’s the version control system it uses and Github is the site it can be downloaded from. If you decide you’d like to use someone else’s ‘Deps’ to set up your workstation, there is a simplified syntax not only to specify a user on Github whose repository you’d like to work out of, but also to search across Github for all of the repositories Babushka knows about.

One way of getting started super fast is just running this simple command: bash -c “`curl`”

Now installing via this method is not the most secure, but you can audit the code since it is open source and make your own assurances that your network communication is secure before using it. For examples, you can look at the creator’s deps or your humble author’s.

Monitoring Xsan with Nagios and SNMP

Monday, December 12th, 2011

Monitoring a system or device using SNMP (a SonicWALL, for instance) is simple enough, provided you have the right MIB. XSNMP is an Open Source project that provides a simple Preference Pane to manage SNMP on OS X, and it also includes an MIB developed by LithiumCorp. This MIB allows OS X’s SNMP agent to gather and categorize information relating specifically to Mac OS X, Mac OS X Server, and Xsan.

XSNMP-MIB can be downloaded from GitHub, or directly from Lithium.

Download the XSNMP-MIB.txt file and put it in /usr/share/snmp/mibs. You can verify that the MIB is loaded by running snmpwalk on the system, specifying the XSNMP Version OID. If snmpwalk returns the version, the MIB is installed correctly. If it returns an error about an “Unknown Object Identifier”, then the MIB isn’t installed in the right spot.

bash$ snmpwalk -c public -v 1 my.server.address XSNMP-MIB::xsnmpVersion
XSNMP-MIB::xsnmpVersion.0 = Gauge32: 1

The fact that the MIB was developed by Lithium doesn’t stop us from using it with Nagios, though. You can define a Nagios service to gather the free space available on your Xsan volume by adding the following to a file called xsan_usage.cfg. Put the file in your Nagios config directory.

define service{
host_name xsan_controller
service_description Xsan Volume Free Space
check_command check_snmp!-C public -o xsanVolumeFreeMBytes.1 -m XSNMP-MIB
}

The host_name should match the Nagios host definition for your Xsan Controller. The service_description can be any arbitrary string that makes sense and describes the service.
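If you don’t already have a matching host definition for the controller, a minimal sketch might look like the following. The alias and address are placeholders, and it assumes the stock generic-host template from Nagios’s sample configs:

```
define host{
use generic-host
host_name xsan_controller
alias Xsan Metadata Controller
address my.server.address
}
```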

The check_command definition is the actual command that’s run. The -C flag defines the SNMP community string, the -m flag defines which MIB should be loaded (you can use “-m all” to just load them all), and the -o flag defines which OID we should return. “xsanVolumeFreeMBytes.1” should return the free space, in MB, of the first Xsan volume.

Building a Mac and iOS App Store Software Update Service

Wednesday, November 9th, 2011

Let’s say you run a network with a large number of Mac OS X or iOS (or, more likely, both) devices. Software Update and the two App Stores (Mac App Store and iOS App Store) make keeping all those devices up-to-date a pretty straightforward process. They are a huge improvement compared with the rather old-fashioned practice of looking through applications, visiting the web site for each one and manually downloading updated versions. When updating two or more similar machines, of course, one only needed to download the updated version once, then copy it to each other machine. Better, but a process that when performed across a lot of machines requires a lot of work.

However, even though the App Store and Software Update Server in Mac OS X Server make things easier, there’s no simple way to download things once and distribute the downloaded files to multiple machines for items purchased on the App Store. When large updates come out (such as a new version of iOS), you’re essentially downloading huge amounts of data to each and every machine, and if machines are set to automatically download updates, you could even have a large number of them downloading simultaneously.

Of course you can run your own Software Update service in Mac OS X Server, but this requires that every client machine be configured to use the local server. This works well for machines under your control, but for all those people who bring in their own laptops this doesn’t help.

What’s worse is that there’s currently no way whatsoever to run a Software Update-like service for App Store purchases. Imagine if you have a lab of dozens or hundreds of Macs with Final Cut X, or iPads (or iPhones, iPod Touches, whatever comes out next) with iMovie. Any time there’s an update you’re potentially downloading over a gigabyte per machine in the case of Final Cut X, or 70 megabytes or so in the case of iMovie. That can easily add up to a tremendous amount of traffic and the congestion, complaints and headaches which go with it.

What’s needed is an easy way to cache App Store downloads. While we’re at it, it would also be nice to transparently have machines use our own Software Update server. Let’s be even a little more ambitious and do this without needing Mac OS X Server. Aw, heck – let’s make it work on any reasonably Unix-like OS.

So how do we do this? The App Stores and Software Update services use http for fetching files. So what we need to do is to capture those http requests and either redirect them to a local store of Software Update files or locally cached App Store files.

Just as an aside, it’d be tremendously difficult to create a local store of App Store files if for no other reason than the fact that there are currently more than half a million applications. Add to this the rate at which updates become available and your machine would probably never be finished attempting to download all of the applications! Considering this, we’re looking at running Apache and squid on our Unix-like machine and doing a little redirection magic on whatever device does NAT or routes for us.

Note: There’s no reason that the same machine can’t do both NAT/routing and Apache/squid, although in most environments we are assuming that the machine would simply be a proxy for Mac or iOS-based devices. To make this example end-to-end though, we’ll run the router on the host.

Our example uses a Mac OS X (non-Server) machine running Leopard which is doing both NAT and running our Apache and squid software. We’re simply using the Internet Sharing service, the public network interface is en0 (which we don’t use anywhere) and the interface which will serve our iOS and Apple clients is en1 and has the address

Everyone has their own favorite way of installing software on Unix-like OSes, and a discussion about which is best and why would certainly be outside the scope of this article. In these examples we’re using NetBSD’s pkgsrc for no other reason than the fact that it will compile packages from source with a base directory which is easily configurable (feel free to use ports or some other automated tool according to what platform you are using). Get pkgsrc (usually via cvs); we’ll assume it’s put into /usr, which can be as simple as:

cd /usr ; setenv CVSROOT ; cvs checkout -P pkgsrc

And then run /usr/pkgsrc/bootstrap/bootstrap like so:

cd /usr/pkgsrc/bootstrap/
./bootstrap --prefix /usr/local --pkgdbdir /usr/local/var/db/pkg --sysconfdir /usr/local/etc --varbase /usr/local/var --ignore-case-check

This puts all files into /usr/local including logs and configuration files, so keeping your system clean is simple and keeping track of the differences between built-in and pkgsrc software is easy. Next, install pkgsrc’s www/squid and www/apache (and net/wget if your Unix doesn’t already have it):

cd /usr/pkgsrc/www/squid
bmake update
cd /usr/pkgsrc/www/apache22
bmake update
cd /usr/pkgsrc/net/wget
bmake update

Note that on systems like Mac OS X which come with GNU make by default, pkgsrc uses bmake; if you have BSD make already, just use make. Another note: /usr/local/sbin is not in Mac OS X’s path by default, so add /usr/local/sbin to /etc/paths if you’re going to use it.

Now that the software is installed in consistent locations we can configure it. The squid.conf file only needs one line to be changed; everything else is added. Find the line which says:

http_port 3128

And change it to:

http_port 3128 intercept

Then add the following lines:

maximum_object_size_in_memory 4096 KB
cache_replacement_policy heap LFUDA
cache_dir ufs /usr/local/var/squid/cache 16384 16 256
maximum_object_size 2097152 KB
refresh_pattern -i .ipa$ 360 90% 10800 override-expire ignore-no-cache ignore-no-store ignore-private ignore-reload ignore-must-revalidate
refresh_pattern -i .pkg$ 360 90% 10080 override-expire ignore-no-cache ignore-no-store ignore-private ignore-reload ignore-must-revalidate
acl no_cache_local dstdomain
cache deny no_cache_local
redirect_program /usr/local/bin/

These settings are chosen to cache large files up to 2 gigabytes in size in a 16 gig cache on disk and to ignore cache directives with regards to .pkg and .ipa files. Adjust to your own liking. Of course, replace with the private IP of your machine. The cache deny with that address is used to make sure that redirected Software Update files are not cached in squid which would just take up room which better used for App Store files.

The URL rewriting script (created at /usr/local/bin/) just changes Apple Software Update URLs to point to our server:

#!/usr/bin/env perl
# Minimal squid redirector: squid hands us one request per line and expects
# the (possibly rewritten) URL back. The Apple host pattern and the local
# server name below are illustrative -- substitute your own server's address.
$| = 1;                      # squid requires unbuffered replies
while (<>) {
    my ($url) = split;       # the first field on each line is the URL
    $url =~ s|^http://swscan\.apple\.com|http://sw.update.local|;
    print "$url\n";
}
Next we configure Apache. The location you choose for the Software Update files can be anywhere (in our example, they’re on a FireWire attached drive mounted at /Volumes/sw_updates/) which needs to be allowed in the Apache configuration.

Add to /usr/local/etc/httpd/httpd.conf:

<Directory "/Volumes/sw_updates/">
Options Indexes FollowSymLinks
AllowOverride None
Order allow,deny
Allow from all
</Directory>

<VirtualHost *:80>
DocumentRoot "/Volumes/sw_updates"
ErrorLog "/usr/local/var/log/httpd/swupdate_error_log"
CustomLog "/usr/local/var/log/httpd/swupdate_access_log" common
</VirtualHost>

The log lines are purely optional. If you don’t add them, logs will still be written at /usr/local/var/log/httpd/access_log and error_log.

Next, we configure ipfw (in the case of Mac OS X or FreeBSD) to redirect all port 80 traffic transparently to our squid instance. If you’re using a different device for NAT/routing or different firewalling software such as ipfilter, see the examples listed below.

ipfw add 333 fwd,3128 tcp from any to any 80 recv en1

Note that on Snow Leopard and Lion you’ll need to make this change, too:

sysctl -w net.inet.ip.scopedroute=0

ipfilter would look like this for the same task (on platforms that use ipf instead of ipfw):

rdr en1 port 80 -> port 3128 tcp

Again, the local private IP is and the local private interface is en1; substitute your IP and interface.
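To avoid typos when plugging the address into the rule, you can also generate the rule from ifconfig output. This is a hedged sketch using a canned sample line so the transformation is visible; on a real host you would swap in the live `ifconfig en1` output:

```shell
# Pull the IPv4 address out of (sample) ifconfig output, then emit the
# matching ipfw rule. Replace the canned line with `ifconfig en1` output.
sample='inet 192.168.2.1 netmask 0xffffff00 broadcast 192.168.2.255'
ip=$(echo "$sample" | awk '/inet / {print $2}')
echo "ipfw add 333 fwd ${ip},3128 tcp from any to any 80 recv en1"
# prints: ipfw add 333 fwd 192.168.2.1,3128 tcp from any to any 80 recv en1
```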

Finally, we need to mirror all Apple Software Updates. A simple shell script can do this. Save this file somewhere (named, for instance) and run it from cron now and then, perhaps once a night:



#!/bin/sh

location=$1 # This is the root of our Software Update tree
mkdir -p "$location"
cd "$location"

for index in index-leopard-snowleopard.merged-1.sucatalog index-leopard.merged-1.sucatalog index-lion-snowleopard-leopard.merged-1.sucatalog
do
    wget --mirror $index

    for swfile in `cat $index | grep "http://" | awk -F">" '{ print $2 }' | awk -F"<" '{ print $1 }'`
    do
        echo "$swfile"
        wget --mirror "$swfile"
    done
done

Invoke this with the top of the tree of your Software Update files as you’ve used in the Apache config, like so:

./ /Volumes/sw_updates

Expect this to run for a long time the first time you run this because you’ll be downloading around 60 gigabytes of updates. Every time it runs afterwards, though, files won’t be downloaded again unless they change (which they won’t; new updates will show up as new files).
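As a quick sanity check of the grep/awk extraction used in the mirror script above, you can feed the pipeline a single sample line of catalog XML (the URL here is made up) and confirm the bare URL falls out:

```shell
# One sample <string> element, as found in a sucatalog file.
line='<string>http://example.com/updates/Update.pkg</string>'
# Same pipeline as the mirror script: keep http lines, strip the XML tags.
url=$(echo "$line" | grep "http://" | awk -F">" '{ print $2 }' | awk -F"<" '{ print $1 }')
echo "$url"
# prints: http://example.com/updates/Update.pkg
```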

Start squid and Apache, then tail your Apache log and run Software Update to test:

/usr/local/share/examples/rc.d/apache start
/usr/local/share/examples/rc.d/squid start
tail -f /usr/local/var/log/httpd/swupdate_access_log

At this point, you can redirect your software updates to the host. Updates for both the Mac App Store and iOS are also now cached. In the next article we’ll look at using some squid extensions to enable you to block applications from the App Stores or block updates in the event that an update is problematic.

Using Squidman as a Web Proxy for OS X

Thursday, October 27th, 2011

Squid is an open source package that caches web files on a local server, increasing throughput for users and decreasing the amount of traffic on WAN connections. A Mac OS X software package named SquidMan, which includes Squid, makes installing and using Squid much easier, giving you nice buttons for management rather than making you edit Squid’s configuration files.

Once SquidMan is downloaded, copy the SquidMan application bundle to the /Applications directory. Then open it. At the Helper Tool Installation screen click on the Yes button.

At the Squid Missing screen click on the OK button to install squid itself.

The Preferences screen then opens. Click on the Clients tab and, if you would like to restrict access to only a set of IP addresses, define them (or use the net mask to define a range).

Click on the General tab. Here, provide the following information:

  • HTTP Port: The port number that the proxy will run on.
  • Visible hostname: The hostname of the server.
  • Cache size: The total amount of space used for the proxy’s cache.
  • Maximum object size: The maximum size of single cached files.
  • Rotate logs: The frequency with which log files are rotated (I usually use Manually here).
  • Start Squid on launch: Automatically start squid when SquidMan is launched, and delay start by x number of seconds.
  • Quit Squid on logout: Define whether logging out of the server also stops squid.
  • Show errors produced by Squid: Displays squid’s errors in SquidMan.

Click on the Parent tab and define a proxy server that this one will use (if there is one; otherwise it just accesses the web directly). This feature is only used if you are daisy-chaining multiple squid servers.

Click on the Direct tab and enter any sites that should not be proxied. Internal staging environments are a great example of sites that should bypass proxy servers.

At the Template tab, enter any custom variables.

Squid is usually used to cache and speed up web access, so the default configuration file is optimized for small files. In order to cache larger files effectively, change the configuration to allow for larger files (up to 64 megabytes) and allow for more total disk storage of cached files (up to 8 gigabytes in our tests for a few specific projects, but much larger is fine). This usually depends on the total available disk space on the machine which will run squid.

These are some of the options which we updated for a specific project we’re working on in the squid.conf (Template):

http_port 3128 transparent (add transparent if using NAT to redirect http requests):
maximum_object_size_in_memory 65536 KB
cache_dir ufs /usr/local/var/squid/cache 8192 16 256
maximum_object_size 65536 KB

These days, we prefer to use squid running in NetBSD’s pkgsrc, although any method of installation (such as the SquidMan approach) should be acceptable.

Next, click on the SquidMan application which should have been running the whole time and click Start Squid.

The squid daemon then starts. Looking at the processes running on the host reveals that it is run as follows:

/usr/local/squid/sbin/squid -f /Users/admin/Library/Preferences/squid.conf

Client systems can then be configured to use the squid proxy manually, or a PAC (proxy auto-config) file can be used to configure clients automatically. Another option is transparent proxying:

rdr de0 port 80 -> (local Squid server) port 3128 tcp

Disabling Spanning Tree on Cisco Switches

Monday, February 21st, 2011

Spanning Tree Protocol has always been a problem with Mac OS X Server. This goes back to the early days when OS’s whacked each other over the head with rocks to go from Alpha to Beta. This usually manifests itself in weird speed and connectivity issues. You can mitigate by changing timing values, but when testing, it is often easiest to start by disabling Spanning Tree Protocol, seeing if the problems you have go away and then working from there.

By default, Spanning Tree is enabled on all Cisco switches. In this article we’ll look at disabling Spanning Tree Protocol. Keep in mind that, once it is disabled, creating an additional VLAN automatically runs another instance of the protocol, so you may need to repeat this process in the future.

First backup the device. Then, ssh into the device:

ssh admin@

You should be prompted for credentials at this time if using telnet; if you are using SSH you should only be prompted for the password. Once connected to the device you will need to go into enable mode by typing en at the command prompt and hitting enter:


It may prompt you for a password, which you will need to know. Once complete, you will notice that the prompt turns from a > to a # symbol. Now that you have administrative access, you will need to go into global configuration mode using the config t command:

config t

Now let’s actually disable spanning tree protocol. Enter in the no verb followed by spanning-tree, the protocol we’re disabling, followed by VLAN, followed by the VLAN identifier:

no spanning-tree vlan vlan-id

Repeat for each VLAN if you need to do this on multiple VLANs. When done, exit config mode by entering the end command:


You can then enter the show command along with the spanning-tree option to see whether any spanning tree instances are still active and verify that your command took:

show spanning-tree

If the command took, spanning tree is no longer enabled. Run the copy command, followed by running-config and then startup-config, which copies your running configuration to your startup configuration, making your change permanent:

copy running-config startup-config
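Putting the steps above together, a full session might look like the following. The prompt and the VLAN IDs (10 and 20) are hypothetical; substitute your own:

```
switch# config t
switch(config)# no spanning-tree vlan 10
switch(config)# no spanning-tree vlan 20
switch(config)# end
switch# show spanning-tree
switch# copy running-config startup-config
```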

It is then usually recommended to go ahead and reboot servers and clients prior to testing.

Install Powerchute Using a Script

Friday, February 11th, 2011

Here’s a little shell script that can be deployed from ARD to install and configure APC’s Powerchute Network software for Mac OS X clients. It’s currently only been tested with 2.2.4, but was used to deploy Powerchute to 7 servers and can be quite a time saver. The only prereq is that the APC tar file be located at the path specified by the variable ‘apcfile’ and that the other variables in the script be completed.

Let us know if you have any questions!

#!/bin/bash
### sends keystrokes to configure APC Powerchute software.

## Fill in these variables before deploying (the values below are placeholders):
apcfile="/tmp/powerchute.tar.gz"    # path to the APC tar file
nictoregister="en0"                 # network interface whose IP will be registered
localadminpassword=""               # local admin password for the installer prompt
apcip=""                            # IP address of the APC unit
apcadmin=""                         # APC admin username
apcpassword=""                      # APC admin password
apcsharedsecret=""                  # APC shared secret

## start script
mkdir /tmp/apc_temp &> /dev/null
cd /tmp/apc_temp
tar -xf "$apcfile"

## get our IP
IP="$(ifconfig $nictoregister | awk '/inet / {print $2}' | head -1)"

open /tmp/apc_temp/install.command
sleep 3

osascript <<EOF
tell application "System Events"
keystroke "$localadminpassword"
delay .2
keystroke return
delay 2
keystroke space
delay 1
keystroke space
delay 1
keystroke space
delay 1
keystroke space
delay 1
keystroke "$apcip"
delay .1
keystroke tab
delay .1
keystroke tab
delay .1
keystroke tab
delay .1
keystroke tab
delay .1
keystroke tab
delay .1
keystroke space
delay 1
keystroke tab
delay .1
keystroke "$IP"
delay .1
keystroke tab
delay .1
keystroke tab
delay .1
keystroke tab
delay .1
keystroke tab
delay .1
keystroke space
delay 1
keystroke "$apcadmin"
delay .1
keystroke tab
delay .1
keystroke "$apcpassword"
delay .1
keystroke tab
delay .1
keystroke "$apcsharedsecret"
delay .1
keystroke tab
delay .1
keystroke tab
delay .1
keystroke tab
delay .1
keystroke space

end tell
EOF
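The script above depends on one shell feature worth calling out: because the heredoc delimiter is unquoted, the shell expands variables such as $localadminpassword and $IP before the text ever reaches osascript. A minimal, standalone sketch of that expansion (the variable name here is made up):

```shell
# Unquoted heredoc: the shell substitutes $name before cat ever sees the text.
name="World"
cat <<EOF
keystroke "$name"
EOF
# Prints: keystroke "World"
```

If you quote the delimiter (<<'EOF'), expansion is suppressed and the literal $name would be passed through instead.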

Voice Dictation on iPhone and iPad

Monday, November 29th, 2010

The iPhone has built-in voice controls that allow you to speak to the phone and have it perform certain tasks, such as dialing a given contact, skipping to the next track when playing music and even starting playback. This allows you to control the device hands free and perform basic tasks. Have you ever wanted to use that same kind of technology to dictate emails, notes and documents while on the go? Well, Dragon Dictation, from Nuance Communications, has got ya’ covered!

Using Dragon Dictation, you can press a button and dictate text. You can then review and edit the text if needed. That text can then be emailed, posted to your wall on Facebook, posted to Twitter, sent as an SMS and yes, even copied to the clipboard. If you find yourself in any situation where you cannot use the keyboard for extended periods of time then Dragon Dictation is a must have! And you can’t beat the price; Dragon Dictation is currently free!

Nuance also has desktop products: Dragon Dictate for Mac OS X and Dragon NaturallySpeaking for Windows. You can also use the desktop applications to control the computer itself, allowing you to name it Jarvis, KITT, GERTY, HAL, Mother or just plain old Computer. If you link it up to Automator or do a little scripting, you can even control other applications, allowing you to tell the computer to turn the lights on, make you coffee and even turn off those Christmas lights.

MySQL Backup Options

Thursday, July 8th, 2010

MySQL bills itself as the world’s most popular open source database. It turns up all over, including most installations of WordPress. Packages for multiple platforms make installation easy and online resources are plentiful. Web-based admin tools like phpMyAdmin are very popular and there are many stand-alone options for managing MySQL databases as well.

When it comes to back-up, though, are you prepared? Backup plug-ins for WordPress databases are fairly common, but what other techniques can be used? Scripting to the rescue!

On Unix-type systems, it’s easy to find one of the many example scripts online, customize them to your needs, then add the script to a nightly cron job (or launchd on Mac OS X systems). Most of these scripts use the mysqldump command to create a text file that contains the structure and data from your database. More advanced scripts can loop through multiple databases on the same server, compress the output and email you copies.
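For example, the nightly cron job described above might be wired up with a crontab entry like this (the script path is hypothetical):

```
# m   h   dom mon dow   command
30    2   *   *   *     /usr/local/bin/mysql_backup.sh
```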

Here is an example we found online a long time ago and modified (thanks to the unknown author):


#!/bin/bash

# List all of the MySQL databases that you want to backup in here,
# each separated by a space
databases="database1 database2 database3"

# Directory where you want the backup files to be placed
backupdir=/var/backups/mysql

# MySQL dump command, use the full path name here
mysqldumpcmd=/usr/bin/mysqldump

# MySQL Username and password
userpassword=" --user=myusername --password=mypassword"

# MySQL dump options
dumpoptions=" --quick --add-drop-table --add-locks --extended-insert --lock-tables"

# Unix Commands
gzip=/usr/bin/gzip

# Create our backup directory if not already there
mkdir -p ${backupdir}
if [ ! -d ${backupdir} ]; then
   echo "Not a directory: ${backupdir}"
   exit 1
fi

# Dump all of our databases
echo "Dumping MySQL Databases"
for database in $databases; do
   $mysqldumpcmd $userpassword $dumpoptions $database > ${backupdir}/${database}.sql
done

# Compress all of our backup files
echo "Compressing Dump Files"
for database in $databases; do
   rm -f ${backupdir}/${database}.sql.gz
   $gzip ${backupdir}/${database}.sql
done

# And we're done
ls -l ${backupdir}
echo "Dump Complete!"

Once you verify that your backup script is giving you valid backup files, these should be added to your other backup routines, such as CrashPlan, Mozy, Retrospect, Time Machine, Backup Exec, PresSTORE, etc. It never hurts to have too many copies of your critical data files.
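One quick way to verify that the compressed dumps are at least structurally sound is gzip’s built-in integrity test. A small sketch, using a throwaway file rather than a real dump:

```shell
# Create a stand-in "dump", compress it, then verify the archive.
echo "CREATE TABLE t (id INT);" > /tmp/demo.sql
gzip -f /tmp/demo.sql
# gzip -t exits nonzero if the archive is corrupt.
if gzip -t /tmp/demo.sql.gz; then
  echo "backup archive OK"
fi
```

Note that gzip -t only checks the archive itself, not the SQL inside it; periodically restoring a dump into a scratch database is the only real test of a backup.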

To make sure your organization is prepared, contact your 318 account manager today, or email for assistance.

Script for Populating Jabber Buddy Lists in iChat

Monday, February 22nd, 2010

Note: Uses a Jabber server hosted on yourfqdn.

The OS X 10.6 iChat server has an autobuddy feature, but it only works with a user’s original shortname: if a user has multiple shortname aliases, those additional shortnames will not have a buddy list associated with them at login, because the jabber database keys off the logged-in name: each shortname maintains its own buddy list, and aliases are not handled by autobuddy population.

To get around this limitation I have created a shell script residing at: /usr/local/bin/ This script, when run, traverses the Open Directory user database and inits jabber accounts for all user shortnames (using /usr/bin/jabber_autobuddy --inituser shortname@yourfqdn). This creates an active record for each shortname. After this is done for all shortnames in the system, the script then calls /usr/bin/jabber_autobuddy -m, which creates a buddy list for every user containing an entry for all active records.
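The two jabber_autobuddy calls described above, sketched standalone with a hypothetical shortname and domain:

```
jabber_autobuddy --inituser "jdoe@chat.example.com"   # create an active record for one shortname
jabber_autobuddy -m                                   # buddy all active records together
```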

Unfortunately there is no way to auto-fire this script when a new user alias is added; it must be run by hand. To do so, after creating a new user account (or adding a new shortname to an existing account), simply open a terminal window and type the following command:

sudo /usr/local/bin/

You will then be prompted for authentication. Once you authenticate, the script will process and create/init the appropriate accounts and ensure that they are buddied with all existing users.

Contents of /usr/local/bin/


#!/bin/bash

## Specify search base
declare -x SEARCHBASE="/LDAPv3/"

## Specify our jabber domain
declare -x JABBERDOMAIN="yourFQDN"

## Iterate through all of our OD users
for user in $(dscl $SEARCHBASE list /Users); do
   case "$user" in
      ## Skip system accounts (this pattern is a reconstruction; adjust as needed)
      _*|root|daemon|nobody)
         continue
         ;;
      *)
         echo "Resolving aliases for: $user"
         ## Read all shortnames for the user
         for shortname in $(dscl -url $SEARCHBASE read "/Users/$user" RecordName | grep -v RecordName | sed -e 's/^\ //g'); do
            echo "Initing jabber for username: $shortname"
            ## Init the shortname
            jabber_autobuddy --inituser "${shortname//%20/ }@$JABBERDOMAIN"
         done
         ;;
   esac
done

## Populate all inited accounts
jabber_autobuddy -m