Posts Tagged ‘scripting’

Pulling Report Info from MunkiWebAdmin

Wednesday, November 6th, 2013

Alright, you’ve fallen in love with the Dashboard in MunkiWebAdmin – we don’t blame you, it’s quite the sight. Now you know one day you’ll hack on Django and the client pre/postflight scripts until you can add that perfect view to further extend its reporting and output functionality, but in the meantime you just want to export a list of all those machines still running 10.6.8. Mavericks is free, and them folks still on Snow Leo are long overdue. If you’ve only got a handful of clients, maybe you set up MunkiWebAdmin using sqlite (since nothing all that large is actually stored in the database itself).

MunkiWebAdmin in action

Let’s go spelunking and try to output just those clients in a more digestible format than HTML – the csv output option is a good place to start. We could tool around in an interactive session with the sqlite binary, but in this example we’ll just run the query against the database file and cherry-pick the info we want. Most often, we’ll use the information submitted as a report by the pre- and postflight scripts munki runs, which dumps into the reports_machine table. And the final part is as simple as you’d expect: we just select everything from that table where the OS version equals exactly 10.6.8. Here’s the one-liner:

$ sqlite3 -csv /Users/Shared/munkiwebadmin_env/munkiwebadmin/munkiwebadmin.db \
 "SELECT * FROM reports_machine WHERE os_version='10.6.8';"

And the resultant output:
b8:f6:b1:00:00:00,Berlin,"","",192.168.222.100,"MacBookPro10,1","Intel Core i7","2.6 GHz",x86_64,"8 GB"...

You can then open that in your favorite spreadsheet editing application and parse it for whatever is in store for it next!
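
If you only need a couple of columns rather than whole rows, you can ask sqlite for the table’s schema first and then name columns in the SELECT. The column names in the second line below are just a guess – go by whatever .schema actually reports:

$ sqlite3 /Users/Shared/munkiwebadmin_env/munkiwebadmin/munkiwebadmin.db ".schema reports_machine"
$ sqlite3 -csv /Users/Shared/munkiwebadmin_env/munkiwebadmin/munkiwebadmin.db \
 "SELECT hostname, os_version FROM reports_machine WHERE os_version LIKE '10.6%';"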

Add OS X Network Settings Remotely (Without Breaking Stuff)

Monday, September 23rd, 2013

So you’re going to send a computer off to a colocation facility, where it’ll use a static IP and DNS configuration – info it needs to have in place before it arrives. Just like at the colo, you access this computer remotely to prepare it for its trip, but you don’t want to knock it off the network while applying that info; you’d rather verify it’s good to go and then shut it down.

It’s the type of thing, like setting up email accounts programmatically, that somebody should have figured out and shared with the community at some point. But even if my google-fu is weak, I guess I can deal with having tomatoes thrown at me, so here’s a rough mock-up:

#!/bin/bash
# purpose: add a network location with manual IP info without switching 
#   This script lets you fill in settings and apply them on en0 (assuming that's active)
#   but only interrupts current connectivity long enough to apply the settings,
#   it then immediately switches back. (It also assumes a 'Static' location doesn't already exist...)
#   Use at your own risk! No warranty granted or implied! Tell us we're doing it rong on twitter!
# author: Allister Banks, 318 Inc.

# set -x

declare -xr networksetup="/usr/sbin/networksetup"

declare -xr MYIP="192.168.111.177"
declare -xr MYMASK="255.255.255.0"
declare -xr MYROUTER="192.168.111.1"
declare -xr DNSSERVERS="8.8.8.8 8.8.4.4"

declare -x PORTANDSERVICE="$($networksetup -listallhardwareports | awk '/en0/{print x};{x=$0}' | cut -d ' ' -f 3)"

$networksetup -createlocation "Static" populate
$networksetup -switchtolocation "Static"
$networksetup -setmanual "$PORTANDSERVICE" "$MYIP" "$MYMASK" "$MYROUTER"
$networksetup -setdnsservers "$PORTANDSERVICE" $DNSSERVERS
$networksetup -switchtolocation Automatic

exit 0

Caveats: The script assumes the interface you want to be active in the future is en0, just for ease of testing before deployment. Also, that there isn’t already a network location called ‘Static’, and that you do want all interfaces populated upon creation (because I couldn’t think of a particularly good reason why not).
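
If you want to eyeball what actually got stored before the machine ships, something like this (assuming the network service ended up being named ‘Ethernet’) switches over, prints the settings, and hops right back:

networksetup -switchtolocation "Static"
networksetup -getinfo "Ethernet"
networksetup -getdnsservers "Ethernet"
networksetup -switchtolocation Automatic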

If you find the need, give it a try and tweet at us with your questions/comments!


Cat skinning technique #12 or “Convert to plist and then read”

Tuesday, August 13th, 2013

I’ve seen amazing things done to extract data from most anything with command-line tools such as awk, sed and regex. Just like “there’s more than one way to skin a cat”, there’s more than one way to get a result.

During some recent scripting research I noticed, in the man page for the command I was using, an option that let me convert the data to an easier-to-parse format. Although the output for this option was much longer than the normal output, I was able to avoid devising a complex regex for getting the data I needed.

Enough babble! I present yet another way to extract information from a blob of data or “cat skinning technique #12”.

This command, when run in Terminal, returned a load of information about my OS X user account:

dscl . read /Users/tempuser

I appended an attribute called “Comment” and I gave the attribute a value of “Temporary account.”

sudo dscl . append /Users/tempuser Comment "Temporary account."

I could read this attribute quickly using:

dscl . read /Users/tempuser Comment

The result was:

Comment:
 Temporary account.

I added a second and third comment by running the append command a couple more times:

Comment:
 Temporary account.
 Expires: July 31, 2013.
 Manager: Martin Moose.
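
For reference, those extra runs were just more of the same append command, using the values shown above:

sudo dscl . append /Users/tempuser Comment "Expires: July 31, 2013."
sudo dscl . append /Users/tempuser Comment "Manager: Martin Moose."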

Now, how could I go about getting the expiration date from the comment? This is where awk-, sed- and regex-loving scripters would begin piping the results into something like:

dscl . read /Users/tempuser Comment | sed -n '3p'

The problem with this command was that it left a leading space (note how the values for the comment were slightly indented in the result above).

I could pipe this again into another sed command along with some complicated regex magic to remove the leading space, which actually gave me what I wanted:

dscl . read /Users/tempuser Comment | sed -n '3p' | sed -e 's/^[ \t]*//'

As an administrator needing to get the job done I would be happy with this solution. If I were to post that one-liner into a forum, though, I’d be ridiculed for using the same command multiple times or for piping more than once.

I learned a few years back to try to exhaust the options provided by a single command rather than snipping away at results using a centipede of short commands. After viewing the man page for dscl I found a useful option—it could output the result in plist format. That’s the same format for preference files. Administrators familiar with managing preferences are also familiar with command line tools like defaults and PlistBuddy.

I added the extra option:

dscl -plist . read /Users/tempuser Comment

Although it returned lengthier output, I now had structure to the information:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>dsAttrTypeStandard:Comment</key>
	<array>
		<string>Temporary account.</string>
		<string>Expires: July 31, 2013.</string>
		<string>Manager: Martin Moose.</string>
	</array>
</dict>
</plist>

Both the defaults and PlistBuddy command line tools only read plist files, which meant I needed to redirect this information into a file. The /private/tmp folder was a convenient place to store transient stuff:

dscl -plist . read /Users/tempuser Comment > /private/tmp/myfile.plist

All I needed to do was read the file. Because this plist file contained an array, PlistBuddy was much better suited to reading it than defaults. After a little trial and error I put a two-liner together:

dscl -plist . read /Users/tempuser Comment > /private/tmp/myfile.plist
/usr/libexec/PlistBuddy -c "print :dsAttrTypeStandard\:Comment:1" /private/tmp/myfile.plist

In plain language the PlistBuddy command said: “Read the value for the key ‘dsAttrTypeStandard:Comment’ and return index 1 (indexes start at 0) from the file myfile.plist.” The result returned was:

Expires: July 31, 2013.
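
If this were headed into a script, a minimal sketch using mktemp (so the throwaway plist gets a unique name and gets cleaned up afterward) might look like:

#!/bin/bash
tmpfile=$(mktemp /private/tmp/comment.XXXXXX)
dscl -plist . read /Users/tempuser Comment > "$tmpfile"
/usr/libexec/PlistBuddy -c "print :dsAttrTypeStandard\:Comment:1" "$tmpfile"
rm -f "$tmpfile"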

(Sysadmin) Software Design Decisions

Wednesday, October 3rd, 2012

When approaching a task with an inkling to automate, sometimes you find an open source project that fits the bill. But the creator will work within constraints, and often express their opinion of what’s important to ‘solve’ as a problem and therefore prioritize on: a deployment tool is not necessarily a patch management tool is not necessarily a configuration management tool, and so on. One of the things I’ve dealt with is trying to gauge the intent of a developer and deciding if they are interested in further discussion/support/development of a given project. Knowing why one decision was made or another can be helpful in these situations. So, in the category of things I wish someone else had written so I could read it, here are the design decisions behind the sonOfBackupRestoreScripts project I’ve been toying with as an add-on to DeployStudio (heretofore DS) – after reading them, you’ll hopefully understand why I’m not releasing it as an official, supportable tool in its current bash form.
I’ve adapted some of the things Google used in their outline for Simian as a model, to give this some structure.

Project Objective:

To move user home folders and local authentication/cached credentials between workstations in a customizable and optimized manner, preserving the integrity of the data/user records as much as possible

Overview:

For speed and data integrity, rsync is used to move selections of the user’s home folder (minus caches, trash, and common exclusions made by Time Machine). To increase portability and preserve Mac-specific attributes, a disk image is generated to enclose the data. The user account information is copied separately, and helpful information is displayed at the critical points as it moves from one stage to another and during the backup itself.

Requirements: DeployStudio Server / NetBoot

DS, as a service, provides an infrastructure to run the script in, and automounts a repository to interact with over the network. It’s meant to work optimally with or without a NetBoot environment; the architecture assumption made during development/testing is wired ethernet, with USB/Thunderbolt adapters if the clients are MacBook Airs. Even old minis can function fine as the server, assuming the repo is located on a volume with enough space available to accept the uncompressed backups.

Implementation Details: Major Components / Underlying Programs

- source/destination variables

Parameters can be passed to the script to change the source/destination of backups/restores with the -s (for source) and -d (for destination) switches, each followed by a path that is reachable by the NetBooted system.

- hdiutil

A simple sparse disk image, which can expand up to 100GB, is created with the built-in binary hdiutil. The file system format of that container is JHFS+, and a bunch of other best practices, cobbled together from Bombich’s Carbon Copy Cloner (heretofore CCC) and InstaDMG, are employed.
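
For illustration, creating and attaching such a container might look roughly like this – the path and volume name are placeholders, not the script’s actual variables:

hdiutil create -size 100g -type SPARSE -fs JHFS+ -volname "UserBackup" /tmp/DSNetworkRepository/Backups/machinename.sparseimage
hdiutil attach /tmp/DSNetworkRepository/Backups/machinename.sparseimage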

- cp

The cp binary is used to just copy the user records from the directory service the data resides on to the root of the sparseimage, and the admin group’s record is copied into a ‘group’ folder. If hashes exist in /var/db/shadow/hash, which is how passwords were stored previous to 10.7, those are moved to a ‘hashes’ folder.

- rsync

A custom, even more current build of rsync could be generated if the instructions listed here are followed. Ideally, a battle-tested version like the one bundled with CCC’s ccc_helper.app (/Applications/Carbon\ Copy\ Cloner.app/Contents/MacOS/ccc_helper.app/Contents/MacOS/rsync, which is actually a heavily customized rsync version 3.0.6) could be used, but its output isn’t easy to adapt to show an overview of the progress during a CLI transfer. Regardless, the recommended switches are employed in hopes of getting a passing grade on the backupBouncer test. The 3.0.7 version bundled with DS itself (/Applications/Utilities/DeployStudio\ Admin.app/Contents/Frameworks/DSCore.framework/Versions/A/Resources/Tools/rsync, which for whatever reason is excluded when the assistant creates NetBoot sets) was used during development/testing.

- Exclusions

The Users folder on the workstation being backed up is what’s targeted directly, so any deleted users’ folders or specific subfolders can be skipped with the exclusions file fed to the rsync command. Without a catch-all asterisk (*) ‘file glob’, you’d need to be specific about certain types of files you want to exclude if they live in certain directories. For example, to not back up any mp3 files, no matter where they are in the user folders being backed up, you’d add a line like *.mp3 to that file. Additional catch-all excludes could also be used, as detailed in the script, which specifically excludes ipsw’s (iOS firmware/OS installers) like this: --exclude='*.ipsw'
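
A minimal sketch of how those pieces fit together, assuming a simple pattern-per-line exclusions file and illustrative paths (the real script’s variables and switches differ):

# Excludes.txt – one pattern per line
*.mp3
.Trash
Library/Caches

rsync -av --exclude-from=Excludes.txt --exclude='*.ipsw' "/Volumes/Macintosh HD/Users/" /Volumes/UserBackup/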

- Restore

Pretty much everything done via rsync and cp is done in reverse, utilizing the source/destination options, so a backup taken from one machine can easily be chosen to restore to another.

Security Considerations:

Very little security is applied during storage. Files are transferred over password-protected AFP, so a separate server and repo could be used to minimize potential access by whoever can reach the main DS service. Nothing encrypts the files inside the sparseimages, and if present, the older password format is a hash that could potentially be cracked given a great length of time. The home folder ACLs and ownership/perms are preserved, so in that respect it’s only as secure as access to the local file systems on the server and client.

Excluded/Missing Features:
(Don’t You Wish Every Project Said That?)

Hopefully this won’t sound like a soul-baring confession, but here goes:
No checks are in place for whether there’s enough space on the destination, nor whether a folder to back up is larger than the currently hard-coded 100GB sparseimage cap (after exclusions). Minimal redirection of logs is performed, so the main DS log can quickly hit its 2MB cap and stop updating the DS NetBoot log window/GUI if a boatload of progress is echoed to stdout. The process of restoring a user’s admin group membership (or membership in any other group on the original source) is not performed, although the group’s admin.plist can be queried after the fact. Nor is there any reporting on deleted users’ orphaned home folders if they do actually need to be preserved; by default they’re just part of the things rsync excludes. All restrictions are performed in the Excludes.txt file fed to rsync, so the exclusion list cannot be passed as a parameter to the script.
And the biggest possible unpleasantness is also the #1 reason I’m not considering continuing development in bash: UID collisions. If you restore a 501 user to an image with a pre-existing 501 user that was the only admin… bad things will happen. (We’ve changed our default admin user’s UID as a result.) If you get lucky, you can change one user’s UID or the other and chown to fix things as admin before all heck breaks loose. If this isn’t a clean image, there’s no checking for duplicate users with newer data; there’s no FileVault 1 or 2 handling; there’s no prioritization (so that if only a few home folders would fit, it could back those up and warn about the one(s) that wouldn’t); no version checking on the binaries in case different NetBoot sets are used; no fixing of ByHostPrefs (although DS’s finalize script should handle that); and no checks with a die function if the restore destination doesn’t have enough space, since the common case is restoring to the same HD or a newer, presumably larger computer. Phew!

Wrapup:

The moral of the story is that the data structures available in most other scripting languages are better suited for these checks and for taking evasive action as necessary. Bash does really ungainly approximations of tuples/dictionaries/hash tables, which forced the previous version of this project to perform all necessary checks and actions during a single per-user loop to keep things functional without growing exponentially longer and more complex.

Let’s look forward to the distant future when this makes its way into Python for the next installment in this project. Of course I’ve already got the name of the successor to SonOfBackupRestoreScripts: BrideOfBackupRestoreScripts!

DeployStudio Scripting Tips

Tuesday, September 25th, 2012

I’ve given a presentation on my affinity for DeployStudio, yet with it being closed source, it’s sometimes like an enigma (wrapped in a mystery (wrapped in bacon)). However, a workflow exists to enable scripting within it, although the only option besides automatically running a script when it’s dropped into a workflow is to non-interactively pass arguments to it. Even with little in the way of documented information, projects have popped up to take advantage of the framework provided.

Most notably, Rusty Myers’ BackupRestore scripts enabled quite an interesting workflow: first, you could run one workflow to tar (or ditto) the user folders to a new Backup directory in the repo, with a few customizable exceptions. And then, when either specified or dropped into a workflow that had a restore action precede it, you could put the users and their associated passwords back into place. This is obviously pretty darn useful for clean(er) migrations and/or OS upgrades, or simply refreshing existing workstations with a new base set of software. Many folks in the MacEnterprise community contributed features, including FileVault (version 1) support, and updates were made for the curveball Lion introduced with respect to how passwords are stored (nested inside the user record plist itself).

I’m in the process of creating a successor to this project, so I thought I’d share some of the experiences I’ve had and pointers I’ve come across as a sysadmin (not a software developer) attempting to get a repeatable task accomplished inside this framework. Tip number zero is the same advice given to all students of programming in scripting languages: don’t write a lot before running and getting feedback. So, I booted a laptop with a few small user folders to my DeployStudio-generated netboot set, authenticated to the interface, and opened Terminal. That netboot set includes the optional Python framework (Ruby is another option if you’d like access to that scripting language), which I’ll be using in the future. Along with selecting “Display Runtime log window by default”, I extended the “Quit runtime automatically after__” number to 90 seconds, so when testing inside of a workflow I wouldn’t be kicked out as I iterated and repeated the process.

To get started, I made an “admin” folder inside the repo, put an updated version of the rsync binary in it (since the one in /usr/bin that ships with OS X is pretty ancient), and started writing a script in that directory, which I could therefore run from Terminal on the netbooted laptop over VNC/ARD. For starters, here’s tip #1:
DeployStudio mounts the repo in /tmp/DSNetworkRepository. While /tmp isn’t read only, you will get out-of-space errors and general unhappiness if you use it for anything except mountpoints.
Tip #2:
No, you can’t use symlinks in those subfolders to point somewhere else on the DeployStudio server, since it assumes the directory the symlink would point to is relative to the root of the netboot set instead. (No, really, it wouldn’t even work when using ln -s on the machine the repo’s hosted from instead of Finder’s aliases, which definitely don’t work in scripts.)
Tip #3:
For persnickety folks like myself that MUST use a theme in Terminal and can’t stand not having option set as the meta key, you’re probably bummed that the Preferences menu item is greyed out and command-comma doesn’t seem to work. There is a way, though: from the Shell menu, choose Show Inspector. Then from the Settings tab, double-click the theme you prefer. The full settings window will appear, and you can have your modern conveniences again.
Tip #4:
How does DeployStudio decide what is the first mounted volume, you may wonder? I invite (dare?) you to ‘bikeshed’ (find a more optimized way to accomplish a relatively trivial task) this particular piece of code:
system_profiler SPSerialATADataType | awk -F': ' '/Mount Point/ { print $2}' | head -n1
In the case of the restore script, hard-coding the DS_LAST_RESTORED_VOLUME variable (on line 44 in Rusty’s current Lion version), or changing the argument in the workflow to pass that path with the -v flag will remove the dependency on restoring an image before putting user folders back in place.

Two more tips before I go, which are both specific to the task I’m trying to accomplish. Ownership on files will not be preserved when they’re moved to the repo with rsync, so you can create a sparse image or sparsebundle as a destination, and it will even retain ACLs (certain patched rsync binaries complain about smb_acl’s, like the one I used, which is bundled in DeployStudio’s Tools folder). As mentioned about /tmp in the NetBoot environment earlier, sparseimages should be created in a subfolder of the repo, or you could easily run into ‘out of space’ errors.

When logging, you can use tee or simply redirect output, but more customized feedback in the actual log window of the DeployStudio netboot runtime is helpful. There’s a “custom_logger” function used in some of the bundled scripts… which literally does nothing but echo $1 – pretty rudimentary. For output that doesn’t display when run as part of a script, you can redirect that output to /dev/stdout and have it shown instead of fooling around with echo or printf.
e.g. rsync -avz /source /destination >/dev/stdout
There may be a lag if verbose output happens in a short amount of time, as the main log file in the repo is being written to simultaneously to what is printed onscreen.
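
If you want something a touch less rudimentary than the bundled function, a sketch along these lines (the log path is just an example) echoes to the runtime window and also keeps a timestamped copy of your own:

custom_logger() {
  # show the message in the DS runtime log window...
  echo "${1}"
  # ...and append a timestamped copy to a log of our own in the repo
  echo "$(date '+%Y-%m-%d %H:%M:%S') ${1}" >> /tmp/DSNetworkRepository/Logs/backuprestore.log
}

custom_logger "Starting rsync of /Users to sparseimage"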

Oh, and the silliest thing I’ve noticed: your script needs to be owned by root:wheel and set to 777 in the scripts folder of the repo in order to show up in the workflow interface for selection… It’s got its quirks, but it’s just about the best out there!

Evaluating the Tokens, or Order of Expansion in Bash

Monday, July 23rd, 2012

Previously in our series on commonly overlooked things in bash, we spoke about being specific with the binaries our script will call, and mentioned conventions to use after deciding what to variable-ize. The rigidity and lack of convenience afforded by bash start to poke through when we’re trying to abstract re-usable inputs by making them into variables, and folks are commonly tripped up when trying to have everything come out as intended on the other side, when that line runs. You may already know to put quotes around just about every variable to catch the possibility of spaces messing things up, and we’re not even touching on complex ‘sanitization’ of things like non-roman alphabets and/or UTF-8 encoding. Knowing the order of expansion that the interpreter will use when running our scripts is important. It’s not all drudgery, though, as we’ll uncover features available to bash that you may not have realized exist.

For instance, you may know curly braces can be used in scripts, but did you know there’s syntax to, for example, expand to multiple extensions for the same filename by putting them in curly braces, comma-separated? An interactive example (with set -x):
cp veryimportantconfigfile{,-backup}
+ cp veryimportantconfigfile veryimportantconfigfile-backup

That’s referred to as filename (or just) brace expansion, and is the first in order of the (roughly) six types of expansion the bash interpreter goes through when evaluating lines and ‘token-ized’ variables in a script.
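And the ‘multiple extensions’ case mentioned above works the same way on the end of a filename:

touch logfile.{log,txt,bak}
+ touch logfile.log logfile.txt logfile.bak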

Since you’re CLI-curious and go command line (trademark @thespider) all the time, you’re probably familiar not only with the tilde (~) as a shortcut to the current logged-in user’s home directory, but also with the fact that cd alone will assume you meant to go to that home directory. A user’s home gets a lot of traffic, and while the builtin $HOME variable is probably more reliable if you must include interaction with home directories in your script, tilde expansion (including any subdirectories tagged onto the end) is next in our expansion order.

Now things get (however underwhelmingly) more interesting. Third in the hit parade, each with semi-equal weighting, are
a. the standard “variable=foo, echo $variable” style ‘variable expressions’ we all know and love,
b. backtick-extracted results of commands, which can also be achieved with $(command) (and, if worst comes to worst, you could force another expansion of a variable with the eval command),
c. arithmetic expressions (like -gt for greater than, equal, less than, etc.) as we commonly use for comparison tests,
and an interesting set of features that are actually convenient (and mimic some uses of regular expressions), called (misleadingly)
$. dollar sign substitution. All of the different shorthand included under this category has been written about elsewhere in detail, but one in particular is an ad-hoc twist on a catchall that you could otherwise enable via the ‘shell options’, or shopt command (originally created to expand on ‘set‘, which we mentioned in our earlier article when adding a debug option with ‘set -x‘). All of the options available with shopt are a bit too numerous to cover now, but one that you’ll see particularly strict folks use is ‘nounset‘, to ensure that variables have always been defined if they’re going to be evaluated as the script runs. It’s only slightly confusing that a variable can have an empty string for a value, which would pass this check. Often, it’s the other way around, and we’ll have variables that are defined without being used; the thing we’d really like to look out for is when a variable is supposed to have a ‘real’ value, and the script could cause ill effects by running without one – so the question becomes: how do we check for those important variables as they’re expanded?
A symbol used in bash that will come up later when we cover getopt is the colon, which refers to the existence of an argument, or the variable’s value (text or otherwise) that you’d be expecting to have set. Dollar sign substitution mimics this concept by letting you check ad hoc for empty (or ‘null’) variables: follow a standard ‘$variable‘ with ‘:?’ (finished product: ${variable:?}). In other words, it’s a test of whether the $variable expanded into a ‘real’ value, and it will exit the script at that point with an error if unset, like an ejector seat.
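
A quick illustration (the variable name and error message are arbitrary):

#!/bin/bash
# exits right here with an error if no argument was passed to the script
destination="${1:?no destination path supplied}"
echo "Backing up to ${destination}"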

Moving on to the less heavy expansions, the next is… command lookups in the run environment’s PATH, which are evaluated like regular (western) sentences, from left to right.
As bash traipses along down a line running a command, it follows that command’s rules regarding whether it’s supposed to expect certain switches and arguments, and assumes those are split by some sort of separator (whitespace by default), referred to as the Internal Field Separator. The order of expansion continues with this ‘word splitting’.

And finally, there’s regular old pathname pattern matching – if you’re processing files or folders in a directory, it finds the first instance that matches and evaluates that – pretty straightforward. You may notice we’re often linking to the Bash Guide for Beginners site, hosted by The Linux Documentation Project. Beyond that resource, there are also videos from the 2011 (iTunesU link) and 2012 (YouTube) Penn State Mac Admins conferences on this topic if you need a refresher before we forge ahead for a few more posts.

Back to Basics with Bash

Tuesday, July 17th, 2012

The default shell for Macs has been bash for as long as some of us can remember (as long as we forget it was tcsh through 10.2.8… and before that… there was no shell, it was OS 9!). Bash as a scripting language doesn’t get the best reputation, as it is certainly suboptimal and generally unoptimized for modern workflows. To get common things done you need to care about procedural tasks, and things can become very ‘heavy’ very quickly. With more modern programming languages that have niceties like APIs and libraries, the catchphrase you’ll hear is that you get loads of functionality ‘for free,’ but it’s good to know how far we can get with bash, and why those object-oriented folks keep telling us we’re missing out. And although most of us are using bash every time we open a shell (zsh users probably know all this stuff anyway), there are things a lot of us aren’t doing in scripts that could be better. Bash is not going away, and it’s plenty serviceable for ‘lighter’, one-off tasks, so over the course of a few posts we’ll touch on bash-related topics.

Something even a long-time scripter may easily overlook is how we might set variables more smartly and more often, making good decisions about and being specific with what we choose to variable-ize. If the purpose of a script is to customize things in a way that’s reusable, making a variable out of that customization (say, for example, a hostname or notification email address) allows us to easily re-set that variable in the future. And in our line of work, if you do something once, it is highly probable you’ll do it again.

Something else you may have seen in certain scripts is the PATH variable being explicitly set or overridden, under the assumption that it may not be set in the environment the script runs in, or that the droids (binaries) we’re looking for will definitely be found once we set the path directories specifically. This is well-intentioned, but imprecise to put it one way, clunky to put it another. Setting a custom path, or having customized binaries that could end up interacting with our script, may cause unintended issues, so some paranoia should be exhibited. As scientists and troubleshooters, being as specific as possible always pays returns, so a guiding principle we should consider adopting is, instead of setting the path and assuming, making a variable for each binary called as part of a script.

Now would probably be a good time to mention a few things that assist us when setting variables for binaries. Oh, and as conventions go, it helps to leave variable names that are set for binaries as lowercase, and use all caps for the customizations we’re shoving in, which helps us visually pick out only our customized info as we debug/inspect the script and when we go in to update those variables for a new environment. /usr/bin/which tells us the path to the binary that is currently the first discovered in our path; for example, ‘which which’ tells us we first found a version of ‘which’ in /usr/bin. Similarly, you may realize from its name what /usr/bin/whereis does. Man pages as a mini-topic are also discussed here. However, a more useful way to tell if you’re using the most efficient version of a binary is to check it with /usr/bin/type. If it’s a shell builtin, like echo, it may be faster than alternatives found at other paths, and you may not even find it necessary to make a variable for it, since there is little chance someone has decided to replace bash’s builtin ‘cd’…
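
For example:

$ which which
/usr/bin/which
$ type echo
echo is a shell builtin
$ type cd
cd is a shell builtin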

The last practice we’ll try to spread the adoption of is using declare when setting variables. Again, while a lazy sysadmin is a good sysadmin, a precise one doesn’t have to worry about as many failures. A lack of portability across shells has helped folks overlook it, but it’s useful even if it is bash-specific. When you use declare and -r for read-only, you’re ensuring your variable doesn’t accidentally get overwritten later in the script. Just like the tool ‘set’ for shell settings, which is used to debug scripts with the xtrace option and turns it back off with set +x, declare can remove an attribute from a variable with a + (declare +x, for example, stops exporting it). Integers can be ensured by using -i (which frees us from using ‘let’ when we’re simply setting a number), arrays with -a, and when you need a variable to stick around for longer than the individual script it’s set in or the current environment, you can export the variable with -x. Alternately, if you must use the same exact variable name with a different value inside a function, you can set the variable as local so you don’t ‘cross the streams’. We hope this starts a conversation on proper bash-ing; look forward to more ‘back to basics’ posts like this one.
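
To see a few of those flags in one place (the values here are just examples):

declare -xr networksetup="/usr/sbin/networksetup"   # read-only and exported
declare -ri MAXRETRIES=5                            # read-only integer
declare -a VOLUMES=("/" "/Volumes/Data")            # array
declare -x NOTIFYEMAIL="admin@example.com"          # exported to child processes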

Building A Custom CrashPlan PROe Installer

Friday, April 13th, 2012

CrashPlan PROe installation can be customized for various deployment scenarios

Customization of implementations for over 10,000 clients is considered a special case by Code 42, the makers of CrashPlan, and requires that you contact their sales department. Likewise, re-branding the client application to hide the CrashPlan logo also requires a special license.

Planning Your Deployment

A large scale deployment of CrashPlan PROe clients requires a certain level of planning and setup before you can proceed. This usually means a test environment to iron out the details that you wish to configure. Multiple locations, bandwidth, and storage are obvious concerns that will need a certain amount of tuning before and after the service ‘goes live’. Also, an LDAP server populated with the expected information, or a prepared xml document with identifiable machine information, needs to be matched with account and registration data. Not just account credentials, but also the filing of computers and accounts into groups through the use of Organizations (which directly relate to the registration information used), should be considered.

Which Files to Change

The CrashPlan PROe installer has different files for Windows and Mac OS X, but the gist is largely the same for either. There is a customizable script (or .bat file) that you can use to specify variables that feed information into a template specific to your deployment. The script can be customized to reference LDAP information, or even a shared data source that can provide account information based on an identifiable resource such as a MAC address.

Mac OS X 

Download the installer DMG and make a copy of it. The path we’ll be working in is:

Install CrashPlanPRO.mpkg/Contents/Resources/

Inside the Resources directory there is a Custom-example folder that contains the template and script to customize.

Duplicate the Custom-example to Custom

userinfo.sh is a configuration script that has (commented-out by default) sections for parsing usernames from the current home folder, hostname, or from LDAP. This would also be where one could gather other machine information ( such as mac address ) and match it to data in a shared document on a file server.

In the same folder as userinfo.sh is the folder “conf”, which contains the file default.service.xml. The contents of this file can be fed variable information from the configuration script to set the user name, computer name, LDAP specifics, and password that will be used upon installation. It is advisable to test new user creation when using LDAP and CrashPlan organizations, to ensure users end up where you expect; it is possible to specify those properties in this xml file.

So the process breaks down like this: edit userinfo.sh to populate default.service.xml, let the installer run and make contact with the server, and let the organization policies set all non-custom settings.
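
As a rough idea of what the Mac side might look like, here’s a hypothetical fragment that derives a username from the most recently modified home folder. The variable names mirror the Windows .bat examples further down; check the commented-out sections in your copy of userinfo.sh for the exact names it expects:

#!/bin/sh
# hypothetical userinfo.sh fragment – adjust to match the commented examples in the real file
CP_USER_HOME=$(ls -dt /Users/* | grep -v "Shared" | head -n 1)
CP_USER_NAME=$(basename "${CP_USER_HOME}")
export CP_USER_HOME CP_USER_NAME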

XML Parameters

default.service.xml has the following property:

config.servicePeerConfig.authority – By supplying the address, registrationKey, username and password, the user will bypass the registration/login screen. The following tables describe the authority attributes you can specify and their corresponding parameters.

Authority Attributes

address – the primary address and port to the server that manages the accounts and issues licenses. If you are running multiple PRO Servers, enter the address for the Master PRO Server.
secondaryAddress – (optional) the secondary address and port to the authority that manages the accounts and issues licenses. Note: This is an advanced setting. Use only if you are familiar with its use and results.
registrationKey – a valid Registration Key for an organization within your Master PRO Server. Hides the Registration Key field on the register screen if a value is given.
username – the username to use when authorizing the computer; can use the params listed below.
password – the password used when authorizing the computer; can use the params listed below.
hideAddress – (true/false) do not prompt or allow the user to change the address (default is false).
locked – (true/false) allow the user to change the server address on the Settings > Account page (do not set if hideAddress="true").

Authority Parameters

${username} – determined from the CP_USER_NAME command-line argument, the CP_USER_NAME environment variable, or the "user.name" Java system property from the user interface once it launches.
${computername} – system computer name
${generated} – random 8 characters, typically used for the password
${uniqueId} – GUID
${deferred} – for LDAP and Auto register only! This allows clients to register without manually entering a password and requires the user to log in to the desktop the first time.

servicePeerConfig.listenForBackup – set to false to turn off the inbound backup listener by default.

Sample Usage
All of these samples are for larger installations where you know the address of the PRO Server and want to specify a Registration Key for your users.
Note: NONE of these schemes require you to create the user accounts on your PRO Server ahead of time.

  • Random Password: Your users will end up with a random 8-character password. In order to access their account they will have to use the Reset My Password feature OR have their password reset by an admin.
  • Fixed Password: All users will end up with the same password. This is appropriate if your users will not have access to the CrashPlan Desktop UI and the credentials will be held by an admin.
  • Deferred Password: FOR LDAP ONLY! This scheme allows the client to begin backing up, but it is not officially “logged in”. The first time the user opens the Desktop UI they will be prompted with a login screen and they will have to supply their LDAP username/password to successfully use CrashPlan to change their settings or restore data.
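
Pulling those pieces together, a deferred-password authority entry in default.service.xml might look something along these lines – the server address and registration key are placeholders, and the surrounding XML in your installer’s template should be left as-is:

<authority address="pro-server.example.com:4282"
           registrationKey="XXXX-XXXX-XXXX-XXXX"
           username="${username}"
           password="${deferred}"
           hideAddress="true"/>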

Changing CrashPlan PRO’s Appearance (Co-branding)

This information pertains to editing the installer for co-branding. Skip this section if you are not co-branding your CrashPlan PRO.
Co-Branding: Changing the Skin and Images Contents

You can modify any of the images that appear in the PRO Server admin console as well as those that appear in the email header. Here are the graphics you may substitute:
Custom/skin folder contents:

logo_splash.png – splash screen logo
splash.png – transparent splash background (Windows XP only)
splash_default.png – splash background, must NOT be transparent (Windows Vista, Mac, Linux, Solaris, etc.)
logo_main.png – main application logo that appears on the upper right of the desktop
window_bg.jpg – main application background
icon_app_128x128.png, icon_app_64x64.png, icon_app_32x32.png, icon_app_16x16.png – icons that appear on the desktop, customizable with a Private Label agreement only

In the Custom/skin folder, locate the image you wish to replace.
Create another image that is the same size with your logo on it.
For best results, we recommend using the same dimensions as the graphics files we’ve supplied.
Place your customized version into the Content-custom folder you created.
Make sure not to change the filename or folder structure, so that CrashPlan PRO will be able to find the file.
Co-Branding: Editing the Text Properties File

You can change the text that appears as the application name or product name in the CrashPlan PRO Client. Make your changes in the txt_.properties files in the Custom/lang folder.
The txt.properties file is English and is the default language.
Each file contains the text for a language. Please refer to the Internationalization document from Sun for details (http://java.sun.com/developer/technicalArticles/J2SE/locale/).
The language is identified in the comments at the beginning of the file.
When you change the application or product name, keep in mind that using very long names could affect the flow / layout of the text in a window or message box.
Product.B42_PRO – the name of the product as it would appear on the Settings > Account page, such as CrashPlan PRO
application.name – the application name that appears in error messages, instructions, and descriptions throughout the UI

Creating an Installer

Make the customizations that you want as part of your deployment, then follow the instructions to build a self-installing .exe file.
How It Works – Windows Installs

Test your settings by running the CrashPlan_[date].exe installer.
Make sure the installer.exe file and the Custom folder reside in the same parent folder.
Re-zip the contents of your Custom folder so you have a new customized.zip that contains:
Crashplan_[date].exe
Custom (includes the skin and conf folders)
cpinstall.ico
Turn your zip file into a self-extracting / installing file for your users.
For example, download the zip2secureexe from http://www.chilkatsoft.com/ChilkatSfx.asp
The premium version is not required; however, it does have some nice features and they certainly deserve your support if you use their utility.
Launch zip2secureexe, then:
specify the zip file: customized.zip
specify the name of the program to run after unzipping: CrashPlan_[date].exe
check the Build an EXE option to automatically unzip to a temporary directory
specify the app title: CrashPlan Installer
specify the icon file: cpinstall.ico
click Create to create your self-extracting zip file
Windows Push Installs

Review / edit cp_silent_install.bat and cp_silent_uninstall.bat.
These show how the push installation system needs to execute the Windows installer.
If your push install software requires an MSI, download the 32-bit MSI or the 64-bit MSI.
If you have made customizations, place the Custom directory that contain your customizations next to the MSI file.
To apply the customizations, run the msiexec with Administrator rights:
Right-click CMD.EXE, and select Run as Administrator.
Enter msiexec /i

cp_silent_install.bat

@ECHO OFF

REM The LDAP login user name and the CrashPlan user name.
SET CP_USER_NAME=colt
Echo UserName: %CP_USER_NAME%

REM The users home directory, used in backup selection path variables.
SET CP_USER_HOME=C:\Documents and Settings\crashplan
Echo UserHome: %CP_USER_HOME%

REM Tells the installer not to run CrashPlan client interface following the installation.
SET CP_SILENT=true
Echo Silent: %CP_SILENT%

SET CP_ARGS="CP_USER_NAME=%CP_USER_NAME%&CP_USER_HOME=%CP_USER_HOME%"
Echo Arguments: %CP_ARGS%

REM You can use any of the msiexec command-line options.
ECHO Installing CrashPlan…
CrashPlanPRO_2008-09-15.exe /qn /l* install.log CP_ARGS=%CP_ARGS% CP_SILENT=%CP_SILENT%

cp_silent_uninstall.bat

@ECHO OFF

REM Tells the installer to remove ALL CrashPlan files under C:/Program Files/CrashPlan.
SET CP_REMOVE_ALL_FILES=true
ECHO CP_REMOVE_ALL_FILES=%CP_REMOVE_ALL_FILES%

ECHO Uninstalling CrashPlan…
msiexec /x {AC7EB437-982A-47C0-BC9A-E7FBD06B1ED6} /qn CP_REMOVE_ALL_FILES=%CP_REMOVE_ALL_FILES%

How It Works – Mac OS X Installer

PRO Server customers who have a lot of Mac clients often want to push out and run the installer for many clients at a time. Because we don’t offer a push installation solution, you’ll need to use other software to push-install CrashPlan, such as Apple’s ARD.
Run Install CrashPlanPRO.mpkg to test your settings:
At the command line, type open Install\ CrashPlanPRO.mpkg (from /Volumes/CrashPlanPRO/), or simply double-click Install CrashPlanPRO.mpkg.
Unmount the resulting disk image and distribute it to users.
Note: If you do not want the user interface to start up after installation or you want to run the installer as root (instead of user), change the userInfo.sh file as described in next section.
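
If you are pushing the installer with something like ARD rather than having users run it themselves, the command sent to clients would be along these lines, assuming the customized mpkg was first copied to /tmp on each client:

# run as root (for example via ARD's "Send UNIX Command" specifying user root)
/usr/sbin/installer -pkg "/tmp/Install CrashPlanPRO.mpkg" -target /
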
Understanding the userInfo.sh File
This Mac-specific file is in the Custom-example folder inside the installer metapackage. Edit this file to set the user name and home variables if you wish to run the installer from an account other than root, such as user, and/or you wish to prevent the user interface from starting up after installation.
Be sure to read the comments inside the file.
How It Works – Linux Installer
Edit your install script as needed.
Run the install script to test your settings.
Tar/gzip the crashplan folder and share it with other users.
Custom Folder Contents
When you open the installer zip file or resource contents and view the Custom-example folder, the structure looks like this:
Contents of resource folder:
  Custom (folder)
    skin (folder)
      logo_splash.png
      splash.png
      splash_default.png
      logo_main.png
      window_bg.jpg
      icon_app_128x128.png
      icon_app_64x64.png
      icon_app_32x32.png
      icon_app_16x16.png
    lang (folder)
      txt_.properties
    conf (folder)
      default.service.xml
    cpinstall.ico (Windows only; must be created using an icon editor)
    userInfo.sh (Mac only)

Customizing the PRO Server Admin Console

You can also change the appearance of the PRO Server admin console and email headers and footers.
In the ./content/Manage folder, locate the images and macros you wish to modify and copy them into ./content-custom/Manage-custom using the same sub-folder and file names as the originals. Placing them there protects your changes from being wiped out during the next upgrade.
Our HTML macros are written with Apache Velocity. If your site stops working after you’ve changed a macro, delete or move the customized version to get it working again.
Location of Key PRO Server Files
These locations may change in a future release, so you will be responsible for moving your customized versions to keep your images working.
CrashPlanPRO/images/login_background.jpg
CrashPlanPRO/images/header_background.gif
CrashPlanPRO/styles/main_override.css
macros/cppStartHeader.vm ++ (see below)
macros/cppFooterDiv.vm ++ (see below)
Email images are:
content/Default/emails/images/header/proserver_banner.gif
content/Default/emails/images/header/proserver_banner_backup_report.gif
++ These files are web macros. You’ll need to update these in place instead of copying them to the custom folder. They won’t work under the custom folder. Remember that our upgrade process will overwrite your changes.

Test-Driven Sysadmin with a Russo-Australian Accent

Friday, March 16th, 2012

One of the jokes in the Computer Science field goes like this: there are only 2 hard problems: cache invalidation, naming things, and off-by-one errors. Please do pardon the pun.

Besides the proclivity to name things strangely in the tech community, we often latch on to acronyms and terms that show our pride in being proficient with cutting-edge (or obscure) concepts. As with fashion, there is an ebb and flow to what’s new, but one thing that is here to stay is tests for code, exemplified by the concept of TDD, or Test-Driven Development. When you work with complex systems, dependencies can become a fragile house of cards, but here’s another take on that concept: “here in Australia, ‘babushka doll’ is the colloquial term for Russian nesting dolls. Deps” (short for dependencies) “are intended to be small, tidy chunks of code, nested within each other – hence the name.”

Babushka is the name of a tool, for Mac OS X and Linux, that tests for the software or settings your system relies on – and if something isn’t present, it goes about changing that for you. Its claim of “no job too small” hints at how atomic and for-mere-mortals the tool was made to be. In comparison to configuration management tools like Puppet and Chef, which are also written in Ruby, it’s much more humble, with a proportionally smaller community. The larger tools strive to deliver the ‘holy trinity’ of a package, a configuration file, and a service (gathered in modules in Puppet parlance, or recipes in Chef). Babushka can just deliver the package and lets you build from there.

It was originally released a few years ago, and has recently been refreshed with new capabilities and approachable, comprehensive documentation. Unlike centralized business systems that require curation to take into account things like volume licensing, Babushka can let you reach right out to publicly available freeware. For developers it affords more conveniences like the command line tools that used to require Xcode, package managers like homebrew, and support for Ubuntu’s standard package manager as well.

Git and Github.com both play a big part in Babushka – and not just in that Git’s the version control system it uses and Github is the site it can be downloaded from. If you decide you’d like to use someone else’s ‘Deps’ to set up your workstation, there is a simplified syntax to specify a user on Github whose repository you’d like to work out of, and you can now search across Github for all of the repositories Babushka knows about.

One way of getting started super fast is just running this simple command: bash -c "`curl babushka.me/up`"

Now installing via this method is not the most secure, but you can audit the code since it is open source and make your own assurances that your network communication is secure before using it. For examples, you can look at the creator’s deps or your humble author’s.

New article on Xsan Scripting by 318

Saturday, April 11th, 2009

318 has published another article on Xsanity, covering scripting of various notifications and monitors for Xsan, packaged up into a nice package installer. You can find it here:
http://www.xsanity.com/article.php/20090407150134377.