Posts Tagged ‘bash’

Bash Tidbits

Friday, November 23rd, 2012

If you’re like me, you have a fairly customized shell environment full of aliases, functions, and other goodies to assist with the various sysadmin tasks you need to do.  This makes being a sysadmin easy when you’re up and running on your primary machine, but what happens when your main machine crashes?

Last weekend my laptop started limping through the day and finally dropped dead, leaving me with a pile of work and only my secondary machine.  Little to no customization was present on that machine, which made me nearly pull out my hair on more than one occasion.

Below is a list of my personal shell customizations and other goodies that you may find useful to have as well.  This is easily installed into your ~/.bashrc or ~/.bash_profile file to run every time you open a shell.


# Useful Variables
export CLICOLOR=1
export LSCOLORS=GxFxCxDxBxegedabagaced
# SN: the local subnet, i.e. the gateway address minus its last octet
# (the original grep pattern was partially garbled; 'default' covers the common case)
export SN=`netstat -nr | grep -m 1 -iE 'default' | awk '{print \$2}' | sed 's/\.[0-9]*$//'`
export ph=""
PS1='\[\033[0;37m\]\u\[\033[0m\]@\[\033[1;35m\]\h\[\033[0m\]:\[\033[1;36m\]\w\[\033[0m\]\$ '

# Aliases
alias arin='whois -h'
alias grep='grep --color'
alias locate='locate -i'
alias ls='ls -lh'
alias ns='nslookup'
alias nsmx='nslookup -q=mx'
alias pg='ping'
alias ph='ping'
alias phobos='ssh -i ~/.ssh/identity -p 2200 -X -C -t screen -R'
alias pr='ping `netstat -nr | grep -m 1 -iE '\''default'\'' | awk '\''{print $2}'\''`'
alias py='ping'

At the top of the file you have two variables that set nice-looking colors in the terminal to make it more readable.

One of my favorite little shortcuts comes next.  You’ll notice a variable called SN there: it’s a shortcut for the subnet that you happen to be on.  I find myself having to do stuff to the various hosts on my subnet, so if I can save having to type 192.168.25 fifty times a day then that’s definitely useful.  Here are a few examples of how to use it:

ping $SN.10
nmap -p 80 $SN.*
ssh admin@$SN.40

Also related is the alias named pr.  This finds the router and pings it to make sure it’s up.

Continuing down the list there is ph, which points at my personal server (set as a variable so it can be reused).  Useful for all sorts of shortcuts and can save a fair amount of work.  Examples:

ssh alt229@$ph
scp ./test.txt alt229@$ph:~/

There are a bunch of other useful aliases there too so feel free to poach some of these for your own environment!

A Bash Quicky

Thursday, August 30th, 2012

In our last episode spelunking a particularly shallow trough of bash goodness, we came across dollar sign substitution, which I said mimics some uses of regular expressions. Regexes are often thought of as thick, or dense with meaning. One of my more favorite descriptions goes something like: if you measured each character used in a regex in cups of coffee, you’d find the creators of this particular syntax the most primo, industrial-strength-caffeinated folks around. I’m paraphrasing, of course.

Now copy-pasta-happy, cargo-culting-coders like myself tend to find working code samples and reuse salvaged pieces almost without thinking, often recognizing the shape of the lines of code more than the underlying meaning. Looping back around to dollar sign substitution, we can actually interpret this commonly used value, assigned to a variable meaning the name of the script:

${0##*/}

Okay children, what does it all mean? Well, let’s start at the very beginning (a very good place to start):

${0}

The dollar sign and curly braces force an evaluation of the symbols contained inside, often used for returning complex series of variables. As an aside, counting in programming languages starts with zero, and each space-separated part of the text is defined with a number per place in the order, also known as positional parameters. The entire path to our script is given the special ‘seat’ of zero, so this puts the focus on that zero position.

Regrouping quickly, our objective is to pull out the path leading up to the script’s name. So we’re essentially gathering up all the stuff up to and including the last forward slash before our script’s filename, and chuckin’ them in the lorry bin.

${0##*}

To match all of the instances of a pattern, in our case the forward slashes in our path, we double up the number signs (or pound signs for telecom fans, or hashes for our friends on the fairer side of the puddle). This performs a “greedy” match, gobbling up all instances, with a star “globbing”, to indiscriminately mop up any matching characters encountered along the way.

${0##*/}

Then we cap the whole mess off by telling it to stop when it hits the last occurrence of a character, in this case the forward slash. And that’s that!
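To see the whole progression at once, here’s a quick sketch using a hypothetical script path in a plain variable instead of ${0} (the path and name are made up for illustration):

```shell
# hypothetical path, just to show the machinery
script="/usr/local/bin/"
echo "${script##*/}"   # strips through the last slash:
echo "${script%/*}"    # the flip side keeps the directory: /usr/local/bin
```

The second form, %/*, is the complementary expansion: it trims from the end instead of the beginning, handy when you want the directory rather than the filename.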

Pardon the tongue-in-cheek tone of this quick detour into a bash-style regex-analogue… but to reward the masochists, here’s another joke from Puppet-gif-contest-award-winner @pmbuko:

Email from a linux user: “Slash is full.” I wanted to respond: “Did he enjoy his meal?”

Evaluating the Tokens, or Order of Expansion in Bash

Monday, July 23rd, 2012

Previously in our series on commonly overlooked things in bash, we spoke about being specific with the binaries our script will call, and mentioned conventions to use after deciding what to variable-ize. The rigidity and lack of convenience afforded by bash starts to poke through when we’re trying to abstract re-usable inputs by making them into variables, and folks are commonly tripped up when trying to have everything come out as intended on the other side, when that line runs. You may already know to put quotes around just about every variable to catch the possibility of spaces messing things up (and we’re not even touching on complex ‘sanitization’ of things like non-Roman alphabets and/or UTF-8 encoding). Knowing the ‘order of expansion’ that the interpreter will use when running our scripts is important. It’s not all drudgery, though, as we’ll uncover features available to bash that you may not have realized exist.

For instance, you may know curly braces can be used in the shell, but did you know there’s syntax to, for example, expand to multiple extensions for the same filename by putting them in curly braces, comma-separated? An interactive example (with set -x):
cp veryimportantconfigfile{,-backup}
+ cp veryimportantconfigfile veryimportantconfigfile-backup

That’s referred to as filename (or just plain) brace expansion, and it is the first in the order of the (roughly) six types of expansion the bash interpreter goes through when evaluating lines and ‘token-ized’ variables in a script.

Since you’re CLI-curious and go command line (trademark @thespider) all the time, you probably know not only that you can use tilde (~) as a shortcut to the current logged-in user’s home directory, but also that cd alone will assume you meant that home directory. A user’s home gets a lot of traffic, and while the builtin $HOME variable is probably more reliable if you must include interaction with home directories in your script, tilde expansion (including any subdirectories tagged to the end) is next in our expansion order.
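A minimal way to see the relationship between the two, sketched interactively:

```shell
echo ~          # the current user's home, e.g. /Users/admin
echo "$HOME"    # same value, but survives double quotes
echo ~/Library  # tilde expansion with a subdirectory tagged on the end
```

Note that tilde is not expanded inside quotes, which is exactly why "$HOME" tends to be the safer spelling inside scripts.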

Now things get (however underwhelmingly) more interesting. Third in the hit parade, each with semi-equal weighting, are:
a. the standard “variable=foo, echo $variable” style ‘variable expressions’ we all know and love,
b. backtick-extracted results of commands, which can also be achieved with $(command) (and, if worst comes to worst, you could force another expansion of a variable with the eval command),
c. arithmetic expressions (along with comparison operators like -gt for greater than, -eq for equal, and so on, as we commonly use for comparison tests),
and an interesting set of features that are actually convenient (and mimic some uses of regular expressions), called (misleadingly)
$. dollar sign substitution. All of the different shorthand included under this category has been written about elsewhere in detail, but one in particular is an ad-hoc twist on a catchall that you could use via the ‘shell options’, or shopt command (originally created to expand on ‘set‘, which we mentioned in our earlier article when adding a debug option with ‘set -x‘). The options available with shopt are a bit too numerous to cover now, but one that you’ll see particularly strict folks use is ‘nounset‘, to ensure that variables have always been defined if they’re going to be evaluated as the script runs. It’s only slightly confusing that a variable can have an empty string for a value, which would pass this check. Often it’s the other way around, and we’ll have variables that are defined without being used; the thing we’d really like to look out for is when a variable is supposed to have a ‘real’ value, and the script could cause ill effects by running without one. So the question becomes: how do we check for those important variables as they’re expanded?
A symbol used in bash that will come up later when we cover getopt is the colon, which refers to the existence of an argument, or the variable’s value (text or otherwise) that you’d be expecting to have set. Dollar sign substitution mimics this concept by letting you check ad hoc for empty (or ‘null’) variables: follow a standard ‘$variable‘ with ‘:?’ (finished product: ${variable:?}). In other words, it tests whether $variable expanded into a ‘real’ value, and it will exit the script at that point with an error if unset, like an ejector seat.

Moving on to the less heavy expansions, the next is… command lookups in the run environment’s PATH, which are evaluated like regular (Western) sentences, from left to right. As the interpreter traipses along down a line running a command, it follows that command’s rules regarding whether it expects certain switches and arguments, and assumes those are split by some sort of separator (whitespace by default), referred to as the Internal Field Separator. The order of expansion continues with this ‘word splitting’.
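Word splitting can be bent to your will by changing the Internal Field Separator; a small sketch (the disk names are invented), scoping IFS to a single read so the rest of the shell is untouched:

```shell
# word splitting honors IFS; prefixing the assignment to 'read' parses a
# comma-separated line without changing IFS for anything else
line="disk0s2,disk1s2,disk2s2"
IFS=',' read -r first rest <<< "$line"
echo "$first"   # disk0s2
echo "$rest"    # disk1s2,disk2s2
```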

And finally, there’s regular old pathname pattern matching: if you’re processing a file or folder in a directory, it finds the first instance that matches and evaluates that, which is pretty straightforward. You may notice we’re often linking to the Bash Guide for Beginners site, as hosted by The Linux Documentation Project. Beyond that resource, there are also videos from the 2011 (iTunesU link) and 2012 (YouTube) Penn State Mac Admins conferences on this topic, if you need a refresher before we forge ahead for a few more posts.

Back to Basics with Bash

Tuesday, July 17th, 2012

The default shell for Macs has been bash for as long as some of us can remember (as long as we forget it was tcsh through 10.2.8… and before that… there was no shell, it was OS 9!). Bash as a scripting language doesn’t have the best reputation, as it is certainly suboptimal and generally unoptimized for modern workflows. To get common things done you need to care about procedural tasks, and things can become very ‘heavy’ very quickly. With more modern programming languages that have niceties like APIs and libraries, the catchphrase you’ll hear is that you get loads of functionality ‘for free,’ but it’s good to know how far we can get with bash, and why those object-oriented folks keep telling us we’re missing out. And although most of us are using bash every time we open a shell (zsh users probably know all this stuff anyway), there are things a lot of us aren’t doing in scripts that could be better. Bash is not going away, and is plenty serviceable for ‘lighter’, one-off tasks, so over the course of a few posts we’ll touch on bash-related topics.

Something even a long-time scripter may easily overlook is how we might set variables more smartly and more often, making good decisions and being specific about what we choose to variable-ize. If the purpose of a script is to customize things in a way that’s reusable, making a variable out of that customization (say, for example, a hostname or notification email address) allows us to easily re-set that variable in the future. And in our line of work, if you do something once, it is highly probable you’ll do it again.

Something else you may have seen in certain scripts is the PATH variable being explicitly set or overridden, under the assumption that it may not be set in the environment the script runs in, or that the droids (er, binaries) we’re looking for will definitely be found once we set the path directories specifically. This is well-intentioned, but imprecise to put it one way, clunky to put it another. Setting a custom path, or having customized binaries that could end up interacting with our script, may cause unintended issues, so some paranoia should be exhibited. As scientists and troubleshooters, being as specific as possible always pays returns, so a guiding principle we should consider adopting is: instead of setting the path and assuming, make a variable for each binary called as part of a script.
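A sketch of that principle (the paths shown are typical defaults, but treat them as assumptions and verify them on your own system):

```shell
#!/bin/bash
# one variable per binary, pinned to a full path, rather than overriding PATH
# for the whole script
date="/bin/date"
sed="/usr/bin/sed"

# fail loudly up front if a pinned binary isn't where we expect it
[ -x "$date" ] || { echo "date not found at $date" >&2; exit 1; }

"$date" +%Y   # e.g. 2012
```

The up-front existence check means a mis-pinned path fails on line one with a clear message, instead of halfway through the script’s real work.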

Now would probably be a good time to mention a few things that assist us when setting variables for binaries. Oh, and as conventions go, it helps to leave variable names that are set for binaries in lowercase, and reserve all caps for the customizations we’re shoving in, which helps us visually pick out only our customized info as we debug/inspect the script and when we go in to update those variables for a new environment. /usr/bin/which tells us the path to the binary that is currently the first discovered in our path; for example, ‘which which’ tells us we first found a version of ‘which’ in /usr/bin. Similarly, you may realize from its name what /usr/bin/whereis does. Man pages as a mini-topic are also discussed here. However, a more useful way to tell if you’re using the most efficient version of a binary is to check it with /usr/bin/type. If it’s a shell builtin, like echo, it may be faster than alternatives found at other paths, and you may not even find it necessary to make a variable for it, since there is little chance someone has decided to replace bash’s builtin ‘cd’…
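Comparing the two lookups side by side (note that type -t is bash-specific, and command -v is the portable cousin of which):

```shell
type -t echo      # builtin
type -t ls        # file
command -v ls     # the path 'which' would report, without spawning a process
```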

The last practice we’ll try to spread the adoption of is using declare when setting variables. Again, while a lazy sysadmin is a good sysadmin, a precise one doesn’t have to worry about as many failures. A lack of portability across shells has helped folks overlook it, but this is useful even if it is bash-specific. When you use declare with -r for read-only, you’re ensuring your variable doesn’t accidentally get overwritten later in the script. Just like the tool ‘set’ for shell settings (which is used to debug scripts when using the xtrace option, for tracing how variables are expanded and executed), you can remove a designation with a +, e.g. set +x or declare +r. Integers can be ensured by using -i (which frees us from using ‘let’ when we are simply setting a number), arrays with -a, and when you need a variable to stick around for longer than the individual script it’s set in or the current environment, you can export the variable with -x. Alternatively, if you must use the same exact variable name with a different value inside a nested script, you can set the variable as local so you don’t ‘cross the streams’. We hope this starts a conversation on proper bash-ing; look forward to more ‘back to basics’ posts like this one.

Basic Script for Creating Mirrors

Monday, September 12th, 2011

Moving a volume to a mirror is often one of the first things people do to a new server that shows up out of the box. While this script reads input about two volumes and creates a mirror based on that input, it’s easily migrated into something akin to a DeployStudio or scripted workflow:

#!/bin/bash
# Converts a standalone disk to a RAID 1 and automates adding the second member.
echo -n "Enter the name of the first volume to be placed in the mirror: "
read disk_1
export disk_1nv=`echo $disk_1 | sed 's:/Volumes/::g'`
echo "
creating the $disk_1 mirror"
sleep 2
export disk_1slice=`diskutil list "$disk_1" | grep -m 1 "$disk_1nv" | grep -o "disk..."`
diskutil appleRAID enable mirror $disk_1slice
echo -n "Enter the name of the second volume to be placed in the mirror: "
read disk_2
export disk_2nv=`echo $disk_2 | sed 's:/Volumes/::g'`
export disk_2root=`diskutil list "$disk_2" | grep -m 1 "$disk_2nv" | grep -o "disk."`
export raid_uuid=`diskutil info $disk_1slice | grep "Parent RAID Set UUID" | sed -e 's_Parent RAID Set UUID:__g;s_^[ \t]*__'`
diskutil AppleRAID add member $disk_2root $raid_uuid

Script for Populating Jabber Buddy Lists in iChat

Monday, February 22nd, 2010

Note: Uses a Jabber server hosted on yourfqdn.

The 10.6 OS X iChat server has an autobuddy feature, but it only works with a user’s original shortname: if they have multiple shortname aliases, those additional shortnames will not have a buddy list associated with them when they log in, because the jabber database keys off of the logged-in name. Each shortname maintains its own buddy list, and aliases are not handled by autobuddy population.

To get around this limitation I have created a shell script residing at /usr/local/bin/. This script, when run, traverses the Open Directory user database and inits jabber accounts for all user shortnames (using /usr/bin/jabber_autobuddy --inituser shortname@yourfqdn). This creates an active record for that shortname. After this is done for all shortnames in the system, the script then calls /usr/bin/jabber_autobuddy -m, which creates a buddy list for all users that contains an entry for every active record.

Unfortunately there is no way to auto-fire this script when a new user alias is added; it must be run by hand. To do so, after creating a new user account (or adding a new shortname to an existing account), simply open a terminal window and type the following command:

sudo /usr/local/bin/

You will then be prompted for authentication. Once you authenticate, the script will process and create/init the appropriate accounts and ensure that they are buddied with all existing users.

Contents of /usr/local/bin/


#!/bin/bash

## Specify search base (the server address was elided in the original)
declare -x SEARCHBASE="/LDAPv3/"

## Specify our jabber domain
declare -x JABBERDOMAIN="yourFQDN"

## Iterate through all of our OD users
for user in $(dscl $SEARCHBASE list /Users); do
	## Note: the original post's case patterns for skipping system
	## accounts were lost; "_*" here is a stand-in — adjust as needed
	case "$user" in
		_*) continue ;;
	esac
	echo "Resolving aliases for: $user"
	## Read all shortnames for the user
	for shortname in $(dscl -url $SEARCHBASE read /Users/$user RecordName | grep -v RecordName | sed -e 's/^\ //g'); do
		echo "Initing jabber for username: $shortname"
		## Init the shortname
		jabber_autobuddy --inituser "${shortname//%20/ }@$JABBERDOMAIN"
	done
done

## Populate all inited accounts
jabber_autobuddy -m

LoginHooks and Network Home Directories: till death do ye part

Sunday, December 27th, 2009

The term login/logout script is fairly self-explanatory; a login script is simply an executable script which is run immediately following the successful authentication of a user at an OS X GUI login window. In OS X, GUI authentication is handled by the loginwindow process, which, upon logging in a user, has facilities to launch specific scripts as specified by its configuration. Likewise, upon logging out of a user session, the loginwindow process can call logout scripts at the end of the process. These scripts are often called hooks, due to the manner in which the script is caught by the login or logout processes.

Login and logout hooks are functionally identical, and are executed under uid 0. That is, they run with root privileges. In order to properly identify the user environment in which they are running, the system passes the user’s short name as the first argument to the script.

Hooks can be extremely useful in a number of scenarios. In one way or another, they are generally used to “prep” a user environment. Such preparations might include configuring a particular program, user environment optimizations, configuration changes, software installation, mounting of a network sharepoint, etc. Anything you can script, you can turn into a loginhook; you have access to the same tools that you would have in an OS X shell environment: Perl, Python, and bash scripts, which can pretty much do all of your bidding, short of double-bagging your groceries.

That being said, the majority of the environments that I feel are prime candidates for loginhooks are operating with Network Home Directories. That is, the entirety of a user’s environment is loaded and accessed live across the network from any client station on the network. When a user with a network home directory logs into a computer, it looks and acts pretty much the same as a standard OS X session; however, when they save a document to their Desktop or Documents folders, it’s actually being saved on a hard drive attached to a file server, rather than the hard drive in the computer in front of the user. I say that it acts “pretty much the same”, and therein lies the rub. It’s not the same. If not planned properly, Network Home Directories can be an absolutely crippling experience. Very rarely do people realize the burden that switching to a network-home model bears on a file server. First and foremost, say you have 30 users on your network, all using local home directories. Now, because you heard Network Homes are cool (and they are), you want to implement them for all your users. Downtime due to problematic equipment is reduced, data can be better secured, and backups are easier to manage. You have a file server, so why not?
Well, the problem is that your 3-drive Xserve running OD, AFP, Wikis, and Retrospect is a poor substitute for the 30 individual hard drives currently being utilized by your client stations.
But I digress; this article assumes this is not you. This article assumes you are smart, that you have a good server distribution, planning for about 60-75 concurrent users per server, and a fast RAID backend (RAID 10 recommended). This article assumes you have a server and storage solution capable of sustaining your environment, but you just aren’t quite happy with the end-user results: more beachballs, crashy programs, and general headaches.

If this is you, then loginhooks can definitely relieve some stress. The nature of Network Home Directories means that some programs will simply not function properly. For instance, programs which use a database for storage may find that network home directories perform too poorly to operate properly. Conversely, you may find that applications which deal with media files are detrimental to server disk performance, negatively impacting your user base; 30 simultaneous users editing movies in iMovie over your network probably isn’t going to provide a very good experience, I’m afraid. Other programs may just be coded poorly and not know how to deal with network home directory paths. Locking issues are also not uncommon. Older versions of Firefox used an IP-encoded .parentlock file in ~/Library/Application Support/Firefox/Profiles that could cause locking issues when switching between computers. iPhoto also has its own locking data, stored at ~/Pictures/iPhoto\ Library/. If iPhoto launches and that file is present, it will complain about the lock and die. Not much fun for your end users, and something perfectly fixable through the deployment of loginhooks.

These examples are fairly typical of problems which you can negate through the implementation of loginhooks. The first two issues, both performance-related, are the easiest ones to combat: if a program’s access patterns make it unstable or resource-unfriendly when its data lives on a network home directory, why not store that data on the local disk instead? For instance, you may have noticed that Adobe Reader 9 simply will not work on Network Home Directories. You launch the program and it dies shortly thereafter. Well, Reader stores its support data at ~/Library/Application Support/Adobe/Acrobat/. What happens if we place this directory on local storage?

## list my current directory
helyx:~ hunterbj$ pwd

## remove the Acrobat folder
helyx:~ hunterbj$ rm -r ~/Library/Application\ Support/Adobe/Acrobat

## make a temp folder on the local drive and create a symlink
helyx:~ hunterbj$ mkdir /tmp/myacrobat/
helyx:~ hunterbj$ ln -s /tmp/myacrobat/ ~/Library/Application\ Support/Adobe/Acrobat

Now, if we open up Reader, the program miraculously works! This is because when Reader tries to access its data at ~/Library/Application Support/Adobe/Acrobat/, which is in my home directory on the network, it is actually redirected to the local drive’s temp folder at /tmp/myacrobat/. Acrobat is no longer using the network storage for its own data, and is now functional to boot! Redirection like this can be very handy, but the biggest caveat is that the data is now stored on the local drive, not the network. If the user moves to a different computer, that data will no longer be there: the folder /tmp/myacrobat won’t even exist on the next computer, so the symlink will be broken, and in turn, Reader will be broken. Even if /tmp/myacrobat did exist, it would contain different data than the last computer. This isn’t a big deal for Acrobat, but let’s say you were instead redirecting ~/Movies so that iMovie footage doesn’t crush your server. Now a user can only access those movies from the one computer. Similarly, you might be tempted to redirect ~/Documents/Microsoft User Data or ~/Library/Mail, but if your users use POP accounts, that could cause issues. This certainly can have implications that may not work in your environment, so plan carefully.

loginhooks!!! get yer loginhooks!!!

Well, if you’re still reading, you might be saying to yourself, “Gee, these loginhooks sure sound nice, I wish I knew how to script!” Luckily, we can help here. Attached you will find login and logout scripts which contain everything you will need to deploy loginhooks to your environment: Login and Logout Hook Examples

The attached loginhook script has numerous provisions for redirecting folders, and includes a few application-specific tweaks that can be used in your environment.

First and foremost, these scripts focus on local folder redirection, which will fix 90% of the problems that crop up in Network home directory environments. The script ships by default with a list of recommended folders to redirect, but you will want to examine your environment and redirect what fits best for you.

If you want to tweak which folders are redirected, you can open the attached script and search for the following lines (hint: it’s close to the top after the intro header)

## Specify our Redirects
## example: redirectDirs=”Library/Caches:deleteExisting,Library/Fonts:syncExisting”
redirectDirs+=”,Library/Application Support/Adobe/Acrobat:deleteExisting”

The basic syntax for an entry is path/to/folder:action. The path should be entered relative to the user’s home directory, and action can be one of the following:

a. deleteExisting (the specified folder will be deleted)
b. syncExisting (the specified folder will be renamed to “folder (network)”, and its contents will be copied to the local redirect folder)
c. moveExisting (the specified folder will be renamed to “folder (network)”)

Each redirection entry should be separated by a comma. For instance, if you wanted to redirect users’ Mail and Caches folders, you would have the following entries (and remove all others):

## actions shown here are illustrative; pick whichever action fits your environment
redirectDirs="Library/Caches:deleteExisting,Library/Mail:syncExisting"

Please note, this is the same thing as:

redirectDirs="Library/Caches:deleteExisting"
redirectDirs+=",Library/Mail:syncExisting"

Each folder specified above will be redirected to a folder on the local drive. For instance, if user “testuser” logs in with the above entries, they will have a local folder created at /Users/Local/testuser, which will mirror the user’s home directory, but will only contain items specified by your redirectDirs entries (in this example, ~/Library/Caches and ~/Library/Mail).

You can change /Users/Local to any directory of your choosing, simply modify the appropriate line at the top of the configuration file:

## Global folder where all local users cache data will reside
declare -x localDataDir="/Users/Local"

The associated logouthook ensures that all specified redirects will be undone when a user logs out. For entries specifying the actions moveExisting or syncExisting, upon logout the “folder (network)” folder will be renamed back to “folder”. You do not need to specify any redirectDirs entries in the logouthook; it reads those in from a preference file. The only time you will need to modify the logouthook is if you change your localDataDir entry to something other than /Users/Local.

On top of this, the loginhook does some client-side tweaking; namely, it has provisions for fixing Firefox and iPhoto lock-file problems that can occur in network homedir environments. If you are hit by either of these bugs (you’ll know if you are), then the attached scripts can help with that. Simply open the script and look for the following section:

## Look for and clear iPhoto locks. set to 1 for true
declare -x -i iPhotoLockFix=0

## look for and clear FireFox locks. set to 1 for true
declare -x -i firefoxLockFix=0

Each of these application-specific tweaks can help you resolve these issues. To enable a fix, simply change the =0 to =1, and then deploy the script via Workgroup Manager.

Deploying loginhooks

You can deploy loginhooks a few different ways. It is possible to configure a loginhook locally for use by a single computer. Local configuration for loginhooks is stored in the file /Library/Preferences/ You can use the defaults command to configure loginwindow to fire login and logout hooks:

sudo defaults write LoginHook /Library/Loginhooks/
sudo defaults write LogoutHook /Library/Loginhooks/

For the above to work properly, you will need to have created the folder /Library/Loginhooks and saved the appropriate loginhooks in it. They will need to be executable to fire; you can use chmod +x for this:

chmod +x /Library/Loginhooks/*

Alternatively (and preferably), we can deploy login and logout hooks en masse to our clients using Open Directory and MCX. By selecting the “login” managed preference pane on any computer or computer group record in Workgroup Manager, you will have access to a “Scripts” tab which provides the ability to select both a login and a logout script to deploy to your computers. You can select any executable and then click save, which will encode the script in base64 and store it in Open Directory. If you’re having trouble selecting your script, the word executable is key here: if your loginscript is not executable, you won’t be able to select it from this interface. Also, make sure your script properly starts with a hash-bang statement (#!/bin/bash).

I would like to say that’s all there is to it, but unfortunately that’s not the case. The first problem is that, by default, an OS X client is set to ignore MCX login scripts. To turn them on, run the following command on the client:

defaults write /var/root/Library/Preferences/ EnableMCXLoginScripts -bool true

The second problem is that once again by default an OS X client will likely not trust MCX scripts deployed from your OD environment. In order for a client to do so, we must specify the trust level that a client needs to establish with an OD server in order to trust the scripts that it deploys. To specify this trust, we must first determine the type of OD bind that is being used for client machines. This can be done by running the following command on a client desktop:

dscl localhost read /LDAPv3/ dsAttrTypeStandard:TrustInformation
TrustInformation: Authenticated Encryption

In this example, the client is using an authenticated bind over SSL. To enable MCX scripts with this directory, we can run the following:

defaults write /var/root/Library/Preferences/ MCXScriptTrust -string Authenticated

In most environments untrusted binds are utilized, and so the following command is most applicable:

defaults write /var/root/Library/Preferences/ MCXScriptTrust -string Anonymous

Here is a list of all of the trust levels and their details:

  • FullTrust: The client will only trust scripts specified by directory servers to which the client has performed a trusted bind. A FullTrust relationship also requires that the options to block man-in-the-middle attacks and to digitally sign every packet are checked.
  • Authenticated: The client will trust a server only if it has successfully authenticated via a trusted bind.
  • PartialTrust: Like a full trust, a partial trust requires a trusted bind. Packets here must also be digitally signed. Active Directory bindings typically occur at this level.
  • Encryption: The client will trust only servers supporting ldaps:// connections.
  • DHCP: The client will trust only servers specified in Option 95 of their active DHCP packet.
  • Anonymous: The client will trust scripts deployed by any configured directory server.
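As a rough sketch, the TrustInformation string reported by dscl could be mapped to a matching MCXScriptTrust value with a small helper. The trust_level function name and the precedence order below are my assumptions, based on the list above, so verify the mapping against your own directory setup before relying on it:

```shell
#!/bin/bash
# Hypothetical helper: pick an MCXScriptTrust value from the TrustInformation
# string reported by dscl. More specific levels are matched first, falling
# back to Anonymous for untrusted binds.
trust_level() {
    case "$1" in
        *FullTrust*)     echo "FullTrust" ;;
        *PartialTrust*)  echo "PartialTrust" ;;
        *Authenticated*) echo "Authenticated" ;;
        *Encryption*)    echo "Encryption" ;;
        *DHCP*)          echo "DHCP" ;;
        *)               echo "Anonymous" ;;
    esac
}
trust_level "Authenticated Encryption"
```

The result could then be fed as the -string argument to the defaults write command shown above.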

Why not use MCX Redirects?

You may be thinking that all of the above is a lot of work when MCX Redirects can accomplish the same thing. Well, you can certainly use MCX redirects to accomplish folder redirection. However, in my experience, they are fairly limited. There’s no ownership testing, no path validation, and if you want to use a folder other than /tmp for your redirects, you’ll need to create that folder yourself on each of your clients, as the built-in facilities won’t do any folder path creation (other than the top-level user folder).

MCX redirects also run in user space, and by default an OS X home directory has ACL deny entries that prevent users from deleting ~/Movies, for instance; these ACLs will interfere with MCX redirects. The “deleteExisting” option is really the only reliable MCX redirect function; its facilities to rename/replace are pretty lacking in my opinion. Additionally, the MCX teardown process has no facilities to detect simultaneous user logins, and as such could tear down an environment that is actively in use on another computer. The attached scripts have provisions to specifically address all of these issues. On top of this, MCX redirects are only supported by 10.5 and later, while the provided script works with 10.4-10.6. Of these OS’s, 10.4 is the least efficient in terms of network home directory usage, and is therefore a more ideal candidate for folder redirection (specifically ~/Library/Caches). So if you have a lot of 10.4 clients, login script solutions such as this are really your only option.

There’s really only one “gotcha” to this setup, which would be true of MCX redirects as well: if a user logs into one computer that has been configured for redirects and then, prior to logging out, logs into a machine that is not, the user environment will be a little wonky. The symlinks redirecting to local storage will likely be in the user’s home directory, but the local redirect data will not be on the local drive, resulting in broken functionality on that client. Symptoms could include heavy beachballing and application instability (highly dependent on the application).

Moral of the story: users will need to ensure that they log out prior to logging into an older machine (Apple Menu->Shut Down is fine), and your admins should ensure that all machines running network home directories are properly configured.

Script to Change Passwords in Mac OS X

Tuesday, July 11th, 2006

Changing a password in Mac OS X with a script is a straightforward process. Here is a script that will do so. Just replace currentpasswordhere with the desired password and 318admin with the name of the account whose password you wish to change.


#!/bin/bash
# Changes the password for the 318admin account
password="currentpasswordhere"

/usr/bin/dscl . passwd /Users/318admin "$password"
status=$?

if [ $status -eq 0 ]; then
	echo "Password was changed successfully."
else
	echo "An error was encountered while attempting to change the password. /usr/bin/dscl exited $status."
	exit $status
fi
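If you’d rather not hard-code the password in the script, a hedged variant can prompt for it at run time. The get_password helper below is an illustrative assumption (read -s is standard bash); its output would simply replace the hard-coded value in the dscl line:

```shell
#!/bin/bash
# Hypothetical helper: read a password from stdin without echoing it,
# and refuse an empty value.
get_password() {
    local pw
    IFS= read -rs pw
    [ -n "$pw" ] || return 1
    printf '%s' "$pw"
}
# usage sketch (Mac OS X only):
#   /usr/bin/dscl . passwd /Users/318admin "$(get_password)"
```

This keeps the password out of the script file itself and out of your shell history.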

How to build a ‘for loop’

Tuesday, October 18th, 2005

How to build a for loop

There are many tools in programming to control when certain commands are run, and different tools operate in different ways. A for loop is a programming language statement which allows code to be executed over and over until a specific condition is met.  This particular statement explicitly maintains a loop counter or variable.  This means that the loop does not wait for a test condition to start operating; it runs until a condition tells it to stop executing.

The syntax for this loop is as follows:

for  item in some_iterable_object:

A for statement has a similar structure in many programming languages (the line above shows the generic “for each item” form). The classic counted form of the loop uses an INTEGER counter variable, and in many languages you do not need to declare that counter as an integer before the loop.

The first part of the statement is the initial value statement: an integer value, usually the number 1.
It is followed by the test condition statement, which uses comparison operators such as < (less than), == (equal to), or > (greater than).
It is ended by the increment statement, which increases the value of the counter, typically by one.

for (counter = 1; counter <=5; counter++)

In the example above, the initial value statement sets the variable counter equal to the integer 1. The test condition then checks whether that counter variable still qualifies as less than or equal to 5, which it does, so the commands in the body of the statement are executed, and only then is the variable incremented to 2.

The loop then tests the value of counter again (2 is less than or equal to 5), runs the commands again, and increments it to 3.

In its simplest form this basically tells the program to execute the statement 5 times.   In practice a programmer will engineer the executed statement to use the  counter variable to run through a series of values.
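In bash, the language this blog usually deals with, the same counted loop uses arithmetic for syntax. The count_to wrapper function here is just for illustration:

```shell
#!/bin/bash
# C-style counted loop in bash arithmetic syntax: print one line per iteration
count_to() {
    local n="$1" counter
    for (( counter = 1; counter <= n; counter++ )); do
        echo "iteration $counter"
    done
}
count_to 5
```

The double-parenthesis form follows the same initialize/test/increment pattern described above, so it reads almost identically to the C example.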

So if I have a series of variables, a1, a2, a3, a4, a5, etc., I can write the loop to apply some operator or math function to each of them. Since most languages can’t build a variable name on the fly, the usual way to express this is an array indexed by the counter. For example, in pseudocode:

for (counter = 1; counter <= 5; counter++)
    a[counter] = random(100);

This example would iterate through all 5 elements and set each one to a random number between 0 and 99 (most random(n) implementations return a value from 0 up to, but not including, n).
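In bash you would express the a1…a5 series as an array and use the built-in $RANDOM variable; this sketch fills five elements with values from 0 to 99:

```shell
#!/bin/bash
# Fill an array with five random values between 0 and 99,
# using the counter as the array index
declare -a a
for (( counter = 1; counter <= 5; counter++ )); do
    a[counter]=$(( RANDOM % 100 ))
done
echo "${a[@]}"
```

Using an array instead of numbered variable names is what makes the loop counter useful: the index does the work that the variable names were doing.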

Through more complex statements you can nest loops and other logical commands inside one another.