Archive for the ‘General Technology’ Category

BSD as a useful tool

Monday, December 17th, 2012

Whether or not you know it, the world runs on BSD. You can’t send a packet more than a few hops without a BSD-derived TCP/IP stack getting involved. Heck, you’d be hard pressed to find a machine which doesn’t already have BSD code throughout the OS.

Why BSD? Many companies don’t want to deal with GPL code and the BSD license allows any use so long as the BSD group is acknowledged. This is why Windows has BSD code, Mac OS X is based on BSD (both in its current incarnation which pulls much code from FreeBSD and NetBSD as well as via code which came from NeXTStep, which in turn was derived from 4.3BSD), GNU/Linux has lots of code which was written while looking at BSD code, and most TCP/IP stacks on routers and Internet devices are BSD code.

In the context of IT tools, BSD excels due to its cleanliness and consistency. GNU/Linux, on the other hand, has so many different distributions and versions that it's extremely difficult to perform certain tasks consistently across distributions. Furthermore, the hardware requirements of GNU/Linux preclude using anything but a typical x86 PC with a full complement of associated resources. Managing GNU/Linux on non-x86 hardware is a hobby in its own right and not the kind of thing anyone would want to do in a production environment.

NetBSD in particular stands in stark contrast to GNU/Linux when deploying on machines of varying size and capacity. One could just as easily run NetBSD on an old Pentium 4 machine as on a tiny ARM-based SheevaPlug, a retired PowerPC Macintosh, or a new 16-core AMD Interlagos machine. A usable system could have 32 megs of memory or 32 gigs. Disk space could be a 2 gig USB flash drive or tens of terabytes of RAID.

Configuration files are completely consistent across architectures and hardware. You may need to know a little about the hardware when you first install (wd, sd, ld for disks; ex, wm, fxp, et cetera for NICs), but after that everything works the same no matter the underlying system.

Some instances where a BSD system can be invaluable are situations where the installed tools are too limited in scope to diagnose problems, where problematic hardware needs to be replaced or augmented quickly with whatever’s at hand, or where secure connectivity needs to be established quickly. Some examples where BSD has come in handy are:

In a warehouse where an expensive firewall device was flaky, BSD provided a quick backup. Removing the device outright would have left the building with no Internet connection, so an unused Celeron machine with a USB flash drive and an extra Ethernet card became a quick and easy NetBSD NAT / DHCP / DNS server for the building while the firewall device was diagnosed.

At another business, an expensive firewall device was in use which could not show network utilization in any detail without a separate monitoring computer (and even then it gave only very general and broad information), nor was it flexible when it came to routing all traffic through alternate methods such as gre or ssh tunnels. Setting up an old Pentium 4 with a four-port Ethernet card gave us a router / NAT device which let us pass all traffic through a single tunnel to an upstream provider, testing the ISP's suggestion that too many connections were running simultaneously (which wasn't the case, but sometimes you have to appease the responsible party before they'll take the next step). The staff can also now monitor network traffic quickly and easily using darkstat, watch packet loss, see who on the local networks is causing congestion, et cetera. The machine serves three separate local network segments which can talk with each other. One segment is normally blocked from accessing the Internet because it contains Windows systems running Avid, but access can be turned on momentarily for software activation and similar tasks.

When another business needed a place to securely host their own WordPress blog, an unused Celeron machine was set up with a permissions scheme which regular web hosting providers typically won't allow. WordPress is configured so that neither the running PHP code nor the www user can write to areas which allow script execution, eliminating almost all of the instances where WordPress flaws give attackers full hosting abilities; that is how WordPress so often ends up hosting phishing sites and advertising redirectors.
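The general shape of such a scheme can be sketched as commands. The path names and modes below are hypothetical, not the actual deployment described above; the real setup would also keep the code owned by a user other than www so a compromised PHP process cannot chmod its way back in.

```shell
# Hypothetical docroot layout: the code tree is read-only,
# and only the uploads area stays writable (with PHP execution
# denied there in the web server configuration).
mkdir -p blog/wp-content/uploads
chmod -R 555 blog                  # code areas: read + traverse only
chmod 755 blog/wp-content/uploads  # uploads remain writable by the owner
ls -ld blog blog/wp-content/uploads
```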

DNS hosting, NAT or routing can be set up in minutes, a bridge can be configured to do tcpdump capture, or a web proxy can be installed to save bandwidth and perform filtering. An SMTP relay can be locally installed to save datacenter bandwidth.
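For the bridge-capture case, a sketch of the NetBSD commands involved (interface names wm0 and wm1 are assumptions, and this is from memory rather than a tested transcript):

```shell
# Create a bridge spanning two NICs, then capture on one member.
ifconfig bridge0 create
brconfig bridge0 add wm0 add wm1 up
ifconfig wm0 up
ifconfig wm1 up
tcpdump -n -i wm0 -w /tmp/capture.pcap
```

With the box inserted inline on both interfaces, traffic flows through transparently while tcpdump records it.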

So let’s say you think that a NetBSD machine could help you. But how? If you haven’t used NetBSD yet, then here are some tips.

The latest version is 6.0. The ISOs from NetBSD's FTP server typically weigh in at around 250 to 400 megabytes, so CDs are fine. The installer is pretty straightforward, and the mechanics of installing on the various architectures are not germane here.

After boot, the system is pretty bare, so here are things you’ll want to do:

Let’s look at a sample /etc/rc.conf:
dhcpcd=YES
dhcpcd_flags="-C resolv.conf -C mtu"
ifconfig_wm1="inet netmask"
ip6mode=router
ipnat=YES
ipfilter=YES
dhcpd=YES
dhcpd_flags="wm1 -cf /etc/dhcpd.conf"
rtadvd=YES
rtadvd_flags="-c /etc/rtadvd.conf wm1"
named9=YES
named_flags="-c /etc/namedb/named.conf"
sshd=YES

So what we have here are a number of somewhat obvious and a few not-so-obvious options. Let's assume you know what hostname, sshd, named9, ipnat and dhcpd are for. You can even make guesses about many of the options. What about ifconfig_wm1 (and its flags), ip6mode and other not-so-obvious rc.conf options? First, obviously, you can:

man rc.conf

dhcpcd is a neat DHCP client which is lightweight, supports IPv6 auto discovery and is very configurable. man dhcpcd to see all the options; the example above gets a lease on wm0 but ignores any attempts by the DHCP server to set our resolvers or our interface’s MTU. ifconfig_wm1 should be pretty self-explanatory.

ipnat and ipfilter enable NetBSD's built-in ipfilter (also known as ipf) and its NAT. Configuration files may often be as simple as this for NAT in /etc/ipnat.conf:

map wm0 -> 0/32 proxy port ftp ftp/tcp
map wm0 -> 0/32 portmap tcp/udp 10000:50000
map wm0 -> 0/32
rdr wm0 port 5900 -> port 5900

And lines which look like this for ipfilter in /etc/ipf.conf:

block in quick from to any

There’s tons of documentation on the Internet, particularly here:

To quickly summarize, the first three lines set up NAT for the subnet. The ftp line is necessary because of the mess which is FTP. The second line says to only use port numbers in that range for NAT connections. The third line is for non-TCP and non-UDP protocols such as ICMP or IPSec. The fourth redirects port 5900 of the public facing IP to a host on the local network.

The ipf.conf line is straightforward; ipf in many instances is used to block attackers since you wouldn’t turn on or redirect services which you didn’t intend to be public. Other examples are in the documentation and include stateful inspection (including stateful UDP; I’ll let you think for a while about how that might work), load balancing, transparent filtering (on a bridge), port spanning, and so on. It’s really quite handy.
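To make that more concrete, here is a sketch of a slightly fuller /etc/ipf.conf; the addresses and interfaces are invented for illustration, with wm0 as the outside interface and wm1 inside:

```
# Default-deny inbound on the outside interface.
block in on wm0 all
# Stateful pass for outbound traffic.
pass out quick on wm0 proto tcp from any to any keep state
pass out quick on wm0 proto udp from any to any keep state
# Trust the inside interface entirely.
pass in quick on wm1 all
# Drop a specific attacker early.
block in quick from 192.0.2.25 to any
```

The keep state keyword makes ipf track sessions so replies are allowed back in without explicit inbound pass rules.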

Next is BIND. It comes with NetBSD and if you know BIND, you know BIND. Simple, huh?

rtadvd is the IPv6 version of a DHCP daemon and ip6mode=router tells the system you intend to route IPv6 which does a few things for you such as setting net.inet6.ip6.forwarding=1. You’re probably one of those, “We don’t need that yet” people, so we’ll leave that for another time. IPv6 is easier than you think.

dhcpd is for the ISC DHCP server. man dhcpd and check out the options, but most should already look familiar.

So you have a system up and running. What next? You may want to run some software which isn't included with the OS, such as Apache (although bozohttpd is included if you just want to set up simple hosting), PHP, or MySQL, or perhaps some additional tools such as emacs, nmap, mtr, perl, vim, et cetera.

To get the pkgsrc tree in a way which makes updating later much easier, use CVS. Put this into your .cshrc:

setenv CVSROOT


cd /usr
cvs checkout -P pkgsrc

After that’s done (or while it’s running), set up /etc/mk.conf to your liking. Here’s one I use most places:

PKG_RCD_SCRIPTS=YES
CLEANDEPENDS=YES
PKG_OPTIONS.sendmail=sasl starttls

Set LOCALBASE if you prefer a destination other than /usr/pkg/. PKG_RCD_SCRIPTS tells pkgsrc to install rc.d scripts when installing packages. PKG_OPTIONS.whatever differs per package; I put this one in here as an example. To see what options a given package has, run make show-options in its directory. CLEANDEPENDS tells pkgsrc to clean up working directories after a package has been compiled.
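For example, checking a package's available options from its pkgsrc directory (mail/sendmail here):

```shell
cd /usr/pkgsrc/mail/sendmail
make show-options
```

The output lists supported and currently selected options, which you can then toggle via PKG_OPTIONS.sendmail in /etc/mk.conf.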

After the CVS has finished, you have a tree of Makefiles (and other files) which you can use as simply as:

cd /usr/pkgsrc/editors/vim
make update

That will automatically download, compile and install all prerequisites (if any) for the vim package, then download, compile and install vim. I personally use "make update" in case I'm updating an older package, FYI.

With software installed, the rc.conf system works similarly to the above. After adding Apache, for instance (www/apache24/), you can just append apache=YES to /etc/rc.conf. That sets Apache to launch at boot; to start it without rebooting, just run /etc/rc.d/apache start.
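Putting those two steps together (the script name apache here is whatever the package installed into /etc/rc.d/):

```shell
echo "apache=YES" >> /etc/rc.conf   # launch at boot
/etc/rc.d/apache start              # start it right now
/etc/rc.d/apache status             # confirm it is running
```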

One package which comes in very handy when trying to keep a collection of packages up to date is pkg_rolling-replace (/usr/pkgsrc/pkgtools/pkg_rolling-replace). After performing a cvs update in /usr/pkgsrc, one can simply run pkg_rolling-replace -ru and come back a little later; everything which has been updated in the CVS tree will be compiled and updated in the system.

Finally, to update the entire OS, there are just a handful of steps:

cd /usr
cvs checkout -P -rnetbsd-6 src

In this instance, the netbsd-6 tag specifies the release branch (as opposed to current) of NetBSD.

I keep a wrapper script in /usr/src so I don't need to remember options. This makes sure that all the CPUs are used when compiling and the destinations of files are in tidy, easy to find places.

./ -j `sysctl -n hw.ncpu` -D ../dest-$1 -O ../obj-$1 -T ../tools -R ../sets -m $*

An example of a complete OS update would be:

./ amd64 tools
./ amd64 kernel=GENERIC
./ amd64 distribution
./ amd64 install=/


mv /netbsd /netbsd.old
mv /usr/obj/sys/arch/amd64/compile/GENERIC/netbsd /
shutdown -r now

Updating the OS is usually only necessary once every several years or when there’s an important security update. Security updates which pertain to the OS or software which comes with the OS are listed here:

The security postings have specific instructions on how to update just the relevant parts of the OS so that in most instances a complete rebuild and reboot are not necessary.

Security regarding installed packages can be checked using built-in tools. One of the package tools is called pkg_admin; this tool can compare installed packages with a list of packages known to have security issues. To do this, one can simply run:

pkg_admin fetch-pkg-vulnerabilities
pkg_admin audit

A sample of output might look like this:

Package mysql-server-5.1.63 has a unknown-impact vulnerability, see
Package mysql-server-5.1.63 has a multiple-vulnerabilities vulnerability, see
Package drupal-6.26 has a information-disclosure vulnerability, see

You can then decide whether the security issue may affect you or whether the packages need to be updated. This can be automated by adding a crontab entry for root:

# download vulnerabilities file
0 3 * * * /sbin/pkg_admin fetch-pkg-vulnerabilities >/dev/null 2>&1
5 3 * * * /sbin/pkg_admin audit

All in all, BSD is a wonderful tool for quick emergency fixes, for permanent low maintenance servers and anything in between.

iOS Backups Continued, and Configuration Profiles

Friday, December 14th, 2012

In our previous discussion of iOS Backups, the topic of configuration profiles being the ‘closest to the surface’ on a device was hinted at. What that means is, when Apple Configurator restores a backup, that’s the last thing to be applied to the device. For folks hoping to use Web Clips as a kind of app deployment, they need to realize that trying to restore a backup that has the web clip in a particular place doesn’t work – the backup that designates where icons on the home screen line up gets laid down before the web clip gets applied by the profile. It gets bumped to whichever would be the next home screen after the apps take their positions.

This makes a great segue into the topic of configuration profiles. Here’s a ‘secret’ hiding in plain sight: Apple Configurator can make profiles that work on 10.7+ Macs. (But please, don’t use it for that – see below.) iPCU possibly could generate usable ones as well, although one should consider the lack of full screen mode in the interface as a hint: it may not see much in the way of updates on the Mac from now on. iPCU is all you have in the way of an Apple-supported tool on Windows, though. (Protip: activate the iOS device before you try to put profiles on it – credit @bruienne for this reminder.)

Also thanks to @bruienne for the recommendation of the slick p4merge tool.


Now why would you avoid making, for example, a Wi-Fi configuration profile for use on a Mac with Apple Configurator? Well, there's one humongous difference between iOS and Macs: individual users. Managing devices with profiles shows Apple tipping their cards: they seem to be saying you should think of only one user per device, and that if a setting is important enough to manage at all, it should always be enforced. The Profile Manager service in Lion and Mountain Lion Server has an extra twist, though: you can push out settings for Mac users or the devices they own. If you want to manage a setting across all users of a device, you can do so at the Device Group level, which generates extra keys beyond those present in a profile generated by Apple Configurator. The end result is that a Configurator-generated profile will be user-specific, and will fail with deployment methods that need to target the System. (Enlarge the above screenshot to see the differences – and yes, there's a poorly obscured password in there. Bring it on, hax0rs!)

These are just more of the ‘potpourri’ type topics that we find time to share after being caught by peculiarities out in the field.

CrashPlan PROe Refresher

Thursday, December 13th, 2012

It seems that grokking the enterprise edition of Code 42's CrashPlan backup service is confusing for everyone at first. I recall several months of reviewing presentations and having conversations with elusive sales staff before the arrangement of the moving parts and the management of its lifecycle clicked.

There's a common early hangup for sysadmins trying to understand deployment to multi-user systems: the only current way to protect each user from another's data is to lock the client interface (if instituted as an implementation requirement). What could be considered an inflexibility could just as easily be interpreted as a design decision that directly relates to licensing and workflow. The expected model these days is that a single user may have multiple devices, but enabling end users to restore files (as we understand it) requires that one user be granted access to the backup for an entire device. If that responsibility is designated to the IT staff, then the end user must rely on IT to assist with a restore, instead of healing thyself. This isn't exactly the direction business tech has been going for quite some time. The deeper point is, backup archives and 'seats' are tied to devices; encryption keys cascade down from a user, and interacting with the management of a device is, at this point, all or nothing.

This may be old hat to some, and just after the Pro name took on a new meaning (Code 42 hosted-only), the E-for-Enterprise version had seemingly been static for a spell – until things really picked up this year. With the 3.0 era came the phrase "Cold Storage", which is neither a separate location in the file hierarchy nor intended for long-term retention (like one may use Amazon's new Glacier tier of storage for). After a device is 'deactivated', its former archives are marked for deletion, just as in previous versions; this is simply a new designation for the state of the archives. The actual configuration which determines when the deactivated device's backup will finally be deleted can be set deployment-wide or more granularly per organization. (Yes, you can find the offending GUID-tagged folder of the archives in the PROe server's filesystem and nuke it from orbit instead, if so inclined.)

ComputerBlock from the PROe API


Confusion could arise from the term that looks similar to deactivation: 'deauthorization'. Again, you need to notice the separation between a user and their associated device. Deauthorization operates at the device level to put a temporary hold on its ability to log in and perform restores from the client. In API terms it's most similar to a ComputerBlock. This still only affects licensing in that you'd need to deactivate the device to get back its license for use elsewhere (although jiggery-pokery may be able to resurrect a backup archive if the user still exists…). As always: test, test, test, distribute your eggs across multiple baskets, proceed with caution, and handle with care.

Getting your feet wet with ACLs

Monday, December 3rd, 2012

As an old-school Unix geek, I have to admit that I dragged my feet in my efforts to learn and really grasp the idea of Access Control Lists (ACLs) until embarrassingly recently. As you may know, *nix OSes of the past had only basic levels of access control over files on a system, and for a surprisingly long time these simple controls were enough to get by. You were given three permission scopes per file, representing the ability to assign specific permissions for the file's owner, a single group, and everyone else. This was enough for smallish deployments and implementations, but when you start having thousands of users, setting specific permissions per user gets needlessly complicated.

Enter ACLs, which grant extremely granular control over every operation you can perform on a file. Need a folder to propagate a single user's permissions to all files but not folders? No problem. Need to give read-only access and disallow deletes for a set of folders? No problem there as well. It's this fine level of control that makes using ACLs important, even mandatory, in some specific cases.

Just a few days ago, I encountered a program that was behaving strangely. It was crunching a large number of files and creating the correct output files, but for some strange reason it was deleting them immediately after creating them. I noticed this as I Control-C'd the program and saw my file, only for it to be deleted once the process resumed. If only there were a way for the OS to disallow the offending program from removing its output files…

This is where ACLs come in and why they are so powerful. I was able to tell the OS to block a program from deleting anything in its output folder. Here are the commands I used to check and set the ACLs on my Mac:

>ls -le

As you can see, there are no ACLs set. To deny deletes, I typed the following:

>chmod +a 'alt229 deny delete' 'output folder'

You can see the ACL has been set. I'll try to delete something now.

What gives? Unlike on Linux systems, ACL inheritance isn't enabled by default when set from the command line. We'll need to tweak our original command to enable it.

Clear old permissions first:
>chmod -R -N *
>chmod +a 'alt229 deny delete,file_inherit,directory_inherit' 'output folder'

Now permissions will inherit, but only to newly created folders. You'll see that the extra permissions have only been set on the newly created folder named 'subfolder3'.

Rerun the command like this to apply it to existing folders.

>chmod -R +a 'alt229 deny delete,file_inherit,directory_inherit'

Now you won't be able to delete any file contained within the main folder and its subfolders.

There are many other special permissions available to tweak your system and help pull you out of strange binds you may find yourself in. Here's a list of some of the other ACL permissions available in OS X that you can use to customize your environment. This is straight from the chmod man page.


The following permissions are applicable to all filesystem objects:

delete
        Delete the item.  Deletion may be granted by either this
        permission on an object or the delete_child right on the
        containing directory.

readattr
        Read an object's basic attributes.  This is implicitly
        granted if the object can be looked up and not explicitly
        denied.

writeattr
        Write an object's basic attributes.

readextattr
        Read extended attributes.

writeextattr
        Write extended attributes.

readsecurity
        Read an object's extended security information (ACL).

writesecurity
        Write an object's security information (ownership, mode,
        ACL).

chown
        Change an object's ownership.

The following permissions are applicable to directories:

list
        List entries.

search
        Look up files by name.

add_file
        Add a file.

add_subdirectory
        Add a subdirectory.

delete_child
        Delete a contained object.  See the file delete permission
        above.

The following permissions are applicable to non-directory filesystem
objects:

read    Open for reading.

write   Open for writing.

append  Open for writing, but in a fashion that only allows writes
        into areas of the file not previously written.

execute
        Execute the file as a script or program.

ACL inheritance is controlled with the following permissions words,
which may only be applied to directories:

file_inherit
        Inherit to files.

directory_inherit
        Inherit to directories.

limit_inherit
        This flag is only relevant to entries inherited by
        subdirectories; it causes the directory_inherit flag to be
        cleared in the entry that is inherited, preventing further
        nested subdirectories from also inheriting the entry.

only_inherit
        The entry is inherited by created items but not considered
        when processing the ACL.



Thursday, November 29th, 2012

It was our privilege to be contacted by Bizappcenter to take part in a demo of their 'Business App Store' solution. They have been active on the Simian mailing list for some time, and have a product to help the adoption of the technologies pioneered by Greg Neagle of Disney Animation Studios (Munki) and the Google Mac Operations Team. Our experience with the product is as follows.

To start, we were given admin logins to our portal. The instructions guide you through getting started with a normal software patch management workflow, although certain setup steps need to be taken into account. First, you must add users and groups manually; there are no hooks for LDAP or Active Directory at present (although those are on the road map). Admins can enter the serial number of each user's computer, which allows a package to be generated with the proper certificates. Then invitations can be sent to users, who must install the client software that manages the apps specified by the admin from that point forward.


Sample applications are already loaded into the ‘App Catalog’, which can be configured to be installed for a group or a specific user. Uploading a drag-and-drop app in a zip archive worked without a hitch, as did uninstallation. End users can log into the web interface with the credentials emailed to them as part of the invitation, and can even ‘approve’ optional apps to become managed installs. This is a significant twist on the features offered by the rest of the web interfaces built on top of Munki, and more features (including cross-platform support) are supposedly planned.


If you’d like to discuss Mac application and patch management options, including options such as BizAppCenter for providing a custom app store for your organization, please contact

Outlook Mailbox Maintenance and Search Troubleshooting

Thursday, November 29th, 2012

How to keep an Outlook database tidy: from the get-go, it's important to lay the foundation for Outlook and the user so that their database doesn't grow out of hand.  This is done by:

  1. Organizing Folders the way the user would like them
  2. Creating Rules for users (if they need them)
  3. Creating an Archive Policy that moves their email to another database (PST).
  4. Mounting the archive PST in Outlook so that it’s searchable.
  5. Checking the size of the archive PST every quarter or half year to ensure it hasn't grown above its maximum.
  6. Creating a new archive folder for every year.

Organizing Folders the way the user would like them.

Sit down with the user and see how they would like to organize their folders.  If they don't know, then revisit this with them in a couple of weeks or months.  Speak to them about their workflow and make recommendations to streamline their productivity as necessary.  Creating a folder is as simple as right-clicking the directory tree in Outlook and clicking "Create Folder".  The same steps can be used to create subfolders.

Creating Rules for users (if they need them).

Some users use rules, others don't, and some don't even know they exist.  Start up a conversation with a user and see if they know what Outlook rules are, and whether they would like to learn more about them, use some, or give them a test run for a day or so.  In a nutshell, Outlook rules move email from the Inbox to any mail-enabled folder based on a set of, well, rules.  You can filter by sender, subject, keywords, etc.  Where to create rules is a little different depending on the version of Outlook you're using:

Creating Rules in Outlook 2003:

Creating Rules in Outlook 2007:

Creating Rules in Outlook 2010:

Try to create rules that run from the Exchange server when possible.  This allows the rules to run on the server and organize messages before they ever hit the Outlook mail client.

Creating an Archive Policy that moves email to another database (PST)

NOTE: If autoarchiving from Outlook, the e-mail will not be available in Outlook Web Access / ActiveSync.  If archiving in Exchange 2010 for a user, the archive databases can be available in Outlook Web Access.  Proper licensing on Exchange and Outlook applies:

There are some defaults that Outlook uses.

  • Generally, it will automatically archive to an archive PST called archive.pst.
  • By default, it will run every 14 days and archive all messages older than 6 months.
  • The archive.pst will be on the local workstation.
  • Microsoft best practice is NOT to store the PST file on the network; PST files are fragile, and any incomplete write can corrupt them.
  • You cannot put the PST in read-only mode; if you do, you will not be able to mount it until you take it out of read-only mode.

Setting up AutoArchive, or manually archiving for Outlook 2003:

Setting up AutoArchive, or manually archiving for Outlook 2007:

Auto Archive Explained for Outlook 2010:

Turning off AutoArchive, or manually archiving for Outlook 2010:

Outlook PST Size limitations:

Outlook 2003 default is 20GB, but it can be changed:

Outlook 2007 default is 20GB, but it can be changed:

Outlook 2010 default is 50GB, but it can be changed:

Searching PSTs

For Outlook 2003 with latest updates:

  1. Open PST in Outlook
    1. File > Open > Outlook Data File
    2. When using Advanced Find, make sure the archive.pst file is selected to be searched.

For Outlook 2007:

  1. Ensure Windows Search is installed
  2. Go to Control Panel > Indexing Options and ensure your archive.pst is selected to be indexed.
  3. Now when you run a search, ensure “search all Outlook folders” is selected.  This will now allow the user to search ALL folders in Outlook at once, including the archive.pst.

For Outlook 2010

  1. Ensure archive.pst is open in Outlook
  2. Search using Instant Search in Outlook or Windows Search

Searching doesn't work in Outlook 2007 and 2010: troubleshooting steps you can take:

  1. Check the Event Logs for anything unusual with Office, Outlook, or Windows Search, and troubleshoot the errors that you find.
  2. Ensure that the PST file has been marked for indexing:
    1. Outlook 2007: Tools > Options > Search Options
    2. Outlook 2010: File > Options > section Search > Indexing Options > Modify > Microsoft Outlook
  3. Ensure the PST hasn't gone over its maximum limit. If it has, you will need to run scanpst.exe to repair it (you will lose some data within the PST, and there's no way to control what will be removed); if it hasn't, skip to step 4. Scanpst.exe can be found in different places depending on the version of Outlook you have:
    1. Outlook 2010
      i. Windows: C:\Program Files\Microsoft Office\Office14
      ii. Windows 64-bit: C:\Program Files (x86)\Microsoft Office\Office14
      iii. Outlook x64: C:\Program Files\Microsoft Office\Office14
    2. Outlook 2007
      i. Windows: C:\Program Files\Microsoft Office\Office12
      ii. Windows 64-bit: C:\Program Files (x86)\Microsoft Office\Office12
    After the repair has completed, open Outlook again and allow it to index (how long this takes depends on how big the PST is). If you check the Indexing Status, you should see it update at least every half hour. Then proceed to step 4.
      i. Check Indexing Status in Outlook 2010: click in the Search field > click the Search Tools button > select Indexing Status
      ii. Check Indexing Status in Outlook 2007: Tools > Instant Search > Indexing Status
  4. Disable and then re-enable the file for indexing.  Go to Search Options and remove the checkmark for the PST that is giving you issues.  Close Outlook and wait a couple of minutes.  Open Task Manager and ensure Outlook.exe is not running anymore.  Once you've confirmed it has stopped running on its own, open Outlook again, go back to Search Options, and put a check mark back on the PST that was giving you issues.  Leave Outlook open and alone and allow it to index until the Indexing Status says "0 items remaining".
  5. If after indexing it still doesn't go down to "0 items remaining", or isn't even close, or the search STILL isn't working properly, it's possible the search index is corrupt.  To rebuild it, go to Control Panel > Indexing Options > Advanced > Rebuild.  This is best done overnight, as it will slow down not only Outlook but the computer as well.
  6. If rebuilding the search index still doesn't work, you may need to "Restore Defaults".  On Windows 7, this can be done by clicking the "Troubleshoot search and indexing" link under Control Panel > Indexing Options > Advanced, then clicking "E-mail doesn't appear in search results".
  7. If after all of that it still doesn't work, it's possible you have a corrupt PST.  In that case, go back to step 3 and repair it with scanpst.exe.
  8. If that still doesn't work, consider patching Microsoft Office up to its latest updates.
  9. If that doesn't work, consider repairing Microsoft Office by going to Control Panel > Uninstall a Program > Microsoft Office 2010 > click the Modify button > click Repair, then proceed to step 4.
  10. If that still doesn't work, create a new PST and import the data (using the Import function, or drag and drop) from the bad PST into the new PST, then return to step 3.


LifeSize: Establishing A 3-Way Call

Tuesday, November 27th, 2012
I'm becoming pretty fond of LifeSize video conferencing units, mostly because they're so easy for end users that I rarely get any support calls about them. LifeSize units support 3-and-more-way video conference dialing. When I've done a 3-way call in the past, I've just done the following:
  • Establish the first call.
  • Use the Call button on the remote to bring up the address book screen (aka Call Manager).
  • Highlight the requested call to add.
  • Click OK on the remote.
  • The second call added will appear side-by-side with the video of your existing call on the 2nd monitor. Your call should then appear on the first monitor of each of the two callers, with their screen side-by-side with the first caller you added on their second monitor.
  • When the call is finished, click the hang-up button on the remote to bring up Call Manager.
  • Click the Hang Up button again to disconnect all users.
  • OR at this point you could also add another call, bandwidth permitting.
  • If you start a presentation while on the call, all callers will be tiled on the main screen and the presentation will play on the second screen.

Repeat this process to add more and more callers. If you have an RJ-11 w/ POTS line you can also add voice callers. Granted, they can’t see anything you’re piping over the video, but they can still participate in the parts of the call where they don’t need video.

Monitor Apache Load Times

Saturday, November 24th, 2012

When troubleshooting Apache issues it sometimes becomes necessary to turn up the level of logging so that we can further determine what a given server is doing and why.  One handy new feature of the Apache 2 series is the ability to log how long it takes to serve a page.  This allows us to track load times throughout the entire website and pipe them into our favourite analytical tool, such as Splunk or, for you old-school admins, Webalizer or AWStats.

Adding this new variable is straightforward.  Just navigate over to your httpd.conf file and look for the section that defines the various log formats.  We’re going to add the %D variable there, which logs the time taken to serve each request in microseconds.  Here is my httpd.conf, for example:

LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_com
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %O" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent

The quick and dirty way to get this installed is to look for the type of log that your server is configured to use (usually common or combined) and add the %D to the end (although you could put it anywhere).  As you can see below, I’ve added it to the combined format.

LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %D" combined

The other option is to make a new type of log and put it in there.  I’m going to make a new LogFormat named custom_log below.  Note that you’ll have to make sure that your vhost is set to use this log format.

LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %D" custom_log
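To actually use the new format, point a CustomLog directive at it from inside the vhost. A minimal sketch, assuming a hypothetical hostname and log path (not from the original config):

```apache
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example

    # Write access logs using the custom_log format defined above,
    # so each entry ends with the %D serve time in microseconds.
    CustomLog /var/log/apache2/example_access.log custom_log
</VirtualHost>
```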


Bash Tidbits

Friday, November 23rd, 2012

If you’re like me you have a fairly customized shell environment full of aliases, functions and other goodies to assist with the various sysadmin tasks you need to do.  This makes being a sysadmin easy when you’re up and running on your primary machine, but what happens when your main machine crashes?

Last weekend my laptop started limping through the day and finally dropped dead, and I was left with a pile of work and only my secondary machine.  Little to no customization was present on that machine, which made me nearly pull out my hair on more than one occasion.

Below is a list of my personal shell customizations and other goodies that you may find useful to have as well.  It is easily installed into your ~/.bashrc or ~/.bash_profile file to run every time you open a shell.


# Useful Variables
export CLICOLOR=1
export LSCOLORS=GxFxCxDxBxegedabagaced
export SN=`netstat -nr | grep -m 1 -iE 'default|' | awk '{print \$2}' | sed 's/\.[0-9]*$//'`
export ph=""
PS1='\[\033[0;37m\]\u\[\033[0m\]@\[\033[1;35m\]\h\[\033[0m\]:\[\033[1;36m\]\w\[\033[0m\]\$ '

# Aliases
alias arin='whois -h'
alias grep='grep --color'
alias locate='locate -i'
alias ls='ls -lh'
alias ns='nslookup'
alias nsmx='nslookup -q=mx'
alias pg='ping'
alias ph='ping'
alias phobos='ssh -i ~/.ssh/identity -p 2200 -X -C -t screen -R'
alias pr='ping `netstat -nr | grep -m 1 -iE '\''default|'\'' | awk '\''{print $2}'\''`'
alias py='ping'

At the top of the file there are two variables that set nice-looking colors in the terminal, to make it more readable.

One of my favourite little shortcuts comes next.  You’ll notice the variable called SN, which is a shortcut for the subnet that you happen to be on.  I find myself having to do stuff to the various hosts on my subnet, so if I can save having to type 192.168.25 fifty times a day then that’s definitely useful.  Here are a few examples of how to use it:

ping $SN.10
nmap -p 80 $SN.*
ssh admin@$SN.40

Also related is the alias named pr.  This finds the router and pings it to make sure it’s up.

Continuing down the list there is the variable ph, which points to my personal server.  It’s useful for all sorts of shortcuts and can save a fair amount of typing.  Examples:

ssh alt229@$ph
scp ./test.txt alt229@$ph:~/

There are a bunch of other useful aliases there too so feel free to poach some of these for your own environment!

Playing Taps

Wednesday, November 21st, 2012

It seems like the whole world’s gone mobile, and along with it the tools to transition the stampede of devices coming through businesses’ doors into something manageable. For iOS, it wasn’t long ago that activation was through iTunes only (*gasp!*) and MDM was a hand-coded webpage with XML and redeemable code links on it. Back then Apple IDs were a monumental headache (no change there) and Palm wasn’t dead yet. It could cause one to reminisce about the first coming of Palm. Folklore has it there was a job duty at Palm called ‘tap counter’, to ensure nothing took longer than three taps to achieve. If you’ve deployed any number of iOS devices like iPads, you may be painfully aware just how many more taps than that it takes to get one of these devices out of the box and into a usable state:
Manually doing each individual device “over the air”, you need to tap 16 times to activate and use the device with an open wireless network (17 if it’s a newer iPad with Siri integration).

And the ‘iTunes Store Activation Mode’ method leaves 9 taps, since it skips the language selection and time zone choices along with the option to bypass Wi-Fi setup.

If you have access to a Mac running Apple Configurator, it takes only 13 taps after you ‘Prepare’ the device for use. It would seem like things haven’t actually improved. But Apple Configurator has more tricks than just the newer one we discussed recently, which is getting Apple TVs on a wireless network. When you want to do iOS’s version of Managed Preferences, configuration profiles (a.k.a. .mobileconfig files), that’s another two taps PER PROFILE. This is an opportunity to really learn to love Apple Configurator, though, as it shows two of its huge advantages here (the third being the fact that it can do multi-user assignment on a single iPad, including checking sets of applications out and reclaiming the app licenses as desired):

- You can restore a backup of an activated device (or as many as 30 at once), which answers all of the setup questions in one automated step (along with any other manual customizations you may want)

- If you put the device in Supervision mode, you can even apply configuration profiles WITHOUT tapping “accept” and “install” for each and every one

There are so many things to consider with all the different ownership models for apps and devices, and the scenarios regarding MDM and BYOD, that I thought it was worth having a mini-topic on ‘how do folks approach getting these iPads out of the box and into a usable state?’

OS X Server backup in Mountain Lion (and beyond)

Monday, November 19th, 2012

Data backup is a touchy subject. Nobody does it because they want to. They do it because sometimes bad things happen, and we need some way to take a dead server and transform it into a working one again. For Mac OS X Server, that wasn’t always easy. Because of its basic nature – a mixture of open source components and proprietary Apple technology – backing up OS X Server effectively usually meant coming up with at least two backup solutions.

To help with all of this, 318 put together the sabackup package. Its purpose was to use Apple’s built-in server management command line tool (serveradmin) to export service settings in such a way that you could turn around and import them with serveradmin and get your server working again. I know that having those backed up settings not only allowed me to resurrect more than one server, but I also have used them to find out when a specific change was made. (Usually after we realized that said change had broken something.)

With Lion and Mountain Lion, Apple decided to address the problem of properly backing up services and service data, and Time Machine now includes a mechanism for backing up a running OS X Server. Inside the bundle, in the ServerRoot folder that is now the faux root for all services, you’ll find a ServerBackup command. This tool uses a selection of backup scripts in /Applications/ that allow for backup and restore of specific services. There’s also a collection of SysV-style scripts in /Applications/ that contain the parameters that ServerBackup will use when backing up services. As with all things Apple, they’re XML plists. Certain services merit their own specific backup scripts: Open Directory, PostgreSQL, File Sharing (called “sharePoints” in this context), Web, and Message Server. The OD script produces an Open Directory archive in /var/backups, the PostgreSQL script produces a dump of all your databases, and Message Server will give you a backup of the Jabber database. Web backs up settings, but it’s important to note that it doesn’t back up data. And then there’s the ServerSettings script, which produces a serveradmin dump of all settings for all services. Everything is logged in /var/log/server_backup.

This is what sabackup was designed to do, only Apple has done it in a more modular, more robust, and 100% more Apple-supported way. With that in mind, we’ve decided to cease development on sabackup. Relying on Apple’s tools means that as new services are added, they should be backed up without any additional work on your part – ServerBackup will be updated along with the rest of the server software.

ServerBackup has its quirks, mind you. It’s tied to Time Machine, which means Time Machine has to be enabled for it to work. That doesn’t mean you have to use Time Machine for anything else. If you exclude all the Finder-visible folders, you’ll still get a .ServerBackup folder at the root of the volume backup, with all the server backups. You’ll also get /private, including var (where backups and logs are), and etc, where a lot of config files live. You can dedicate a small drive to Time Machine, let Time Machine handle the backup of settings and data from services, and make sure that drive is a part of your primary backup solution. You do have a primary backup solution, don’t you?

Custom dynamic dns updater

Sunday, November 18th, 2012

Serving pages over a dynamic IP can be frustrating, especially if you try to use a free dynamic DNS account.  Many of them expire if not used in X number of days, some cost more money than your actual domain, and the built-in clients in many of today’s popular routers don’t work reliably.

This is where some custom script foo comes in.  Using industry standards like SSH, SSIs and cron jobs, we can set up a super lightweight script that sends your dynamic IP to a webserver so it can direct visitors to your in-house server.


The graphic below should help visualize:


As you can see from the diagram, the script runs, gathers a single variable, and then pushes it out to a server via SSH.  From there the server reads that file and uses the IP as a variable to pass along to clients visiting the website, by way of a simple meta refresh.


Dynamic IP Configuration

After getting SSH keys set up there are really only two steps to get this script working.  If you haven’t set up keys before, refer to this guide for help.


Step 1.  Download the script here and change the following four variables to match your setup:

IDENTITY = path to your ssh identity file (usually ~/.ssh/identity or ~/.ssh/id_rsa)

DEST_SERVER = ip or hostname of the server you’re sending your ip to

DEST_FILE = temp file on the server that holds your ip (/tmp/myip)

USERNAME = username to log on as
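A minimal sketch of what such an updater script can look like, using the four variables above. The values and the ip-lookup URL here are placeholders, not taken from the original script:

```shell
#!/bin/sh
# Hypothetical sketch of the dynamic-ip updater described above.
# Change all four values for your environment.
IDENTITY="$HOME/.ssh/id_rsa"   # path to your ssh identity file
DEST_SERVER="example.com"      # ip or hostname of the server receiving your ip
DEST_FILE="/tmp/myip"          # temp file on the server that holds your ip
USERNAME="alt229"              # username to log on as

# Crude IPv4 shape check so we never publish garbage to the web server.
is_ipv4() {
    echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

# Gather the current WAN address. Any "what is my ip" service that
# returns a bare address will do; this URL is a placeholder.
MYIP=$(curl -s --max-time 10 "http://example.com/myip" 2>/dev/null)

if is_ipv4 "$MYIP"; then
    # Push the address into DEST_FILE on the web server over ssh.
    echo "$MYIP" | ssh -i "$IDENTITY" "$USERNAME@$DEST_SERVER" "cat > $DEST_FILE"
fi
```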

Step 2.  Set up a crontab to run this script at regular intervals.

Here is my sample crontab, which runs the script once an hour, at 15 minutes past:

# m    h    dom    mon    dow    command
15     *    *      *      *      /home/alt229/bin/


Web Server Configuration

Configuration of the webserver is nearly as simple.  Just make sure that server side includes are enabled first.  Then create a file named index.shtml with the following contents:
<html>
    <head>
        <title>Zync Catalog Redirect</title>
        <meta http-equiv="refresh" content="0;URL='http://<!--#include virtual="myip" -->'" />
    </head>
    <body bgcolor="#000000">
        <p>Redirecting you to <!--#include virtual="myip" --></p>
    </body>
</html>

When clients hit your server and get served this page they will automatically get redirected to your dynamic address.
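Enabling server side includes is an Apache configuration step. A minimal sketch, assuming a hypothetical document root:

```apache
# Allow SSI processing for .shtml files in the site's document root
<Directory "/var/www/html">
    Options +Includes
</Directory>

# Run the INCLUDES output filter on .shtml files
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml
```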

Owncloud: Finally a Dropbox that can sync to your local LAN

Saturday, November 17th, 2012

If I had a megabyte for every time I praised a cloud provider’s service while simultaneously lamenting that my data had to leave my LAN to live out the rest of its days on their servers, I’d have collected enough MBs to fill a DVD.

That statement is definitely a tad hyperbolic, but good “cloud” software that average users can administer and control is definitely much sought after.

Enter Owncloud, which has the lofty goal of making all your data accessible everywhere and across all your devices. I said it was lofty, right? I was more than a tad skeptical when I read this too, but Owncloud delivers.

Setup is almost deceptively straightforward: all you have to do is download a tar file and extract it into your web root folder.  From there, make sure everything is owned by the Apache daemon user (www-data for Ubuntu, apache for Red Hat / CentOS).  The only really tricky part is making sure you have all the prerequisites installed and SSL running properly.  Check the official site for a list of prerequisites.

tar -jxvf owncloud-4.5.3.tar.bz2
mv owncloud /var/www
chown -R www-data /var/www/owncloud/

You’ll want to make sure that AllowOverride is set to All, or else the custom .htaccess that comes with Owncloud won’t work and certain modifications (such as moving the data folder outside the web root) will need to be made due to security concerns.
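In Apache terms, that setting looks like the following sketch (the directory path assumes the /var/www/owncloud location used above):

```apache
<Directory /var/www/owncloud>
    # Let Owncloud's bundled .htaccess take effect
    AllowOverride All
</Directory>
```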

The next step is to log in to your domain and create a username and password.  You also have the option of connecting to a MySQL database or using SQLite.  If unsure, choose SQLite as it’s the easiest to set up and the most compatible.

Create Admin User


Next, you have to install the sync client on your local machine.  Grab the latest version from the official website.

Run the installer and open the app.  The first time you run it, it’ll ask for your connection settings.  Enter them like so:

Hit next and with any luck you’ll be off to the races!  The default folder is ~/ownCloud and you can start syncing files immediately simply by dragging and dropping.


Next time we’ll go over some more in-depth configurations, such as contact/calendar syncing as well as syncing to an Amazon S3 bucket.

If you get stuck anywhere in the process, please refer to the official install guide.

Introducing Splunk: Funny name, serious logging

Thursday, November 15th, 2012

So, my boss says:

“Write an article called ‘Getting Started with Splunk.’”

I reply:

“What, you think I know all this stuff? This really would be a getting started article.”

But here it is and WOW is Splunk cool!

My only experience with Splunk until a couple of days ago was seeing a T-shirt with “Log is my copilot”. I knew it had something to do with gathering log files and making them easier to read and search. In about an hour I had gone to Splunk’s website to research the product, downloaded and installed it, and started viewing logs from my own system. The Splunk folks have made getting their product into their customers’ hands easy and getting started even easier.

What is Splunk?

Simply put, Splunk can gather just about any kind of data that goes into a log (system logs, website metrics, etc.) into one place and make viewing that data easy. It’s accessed via web browser so it’s accessible on any computer or mobile device such as an iPad.

What do I need to run Splunk?

Practically any common operating system today can run Splunk: Mac OS X, Linux, Windows, FreeBSD and more.

How much does Splunk cost?

Don’t worry about that right now. Download and install the free version. It takes minutes to install and is a no-brainer. Let’s get started.

Getting Splunk

IT managers and directors may be interested in watching the introductory and business case videos with the corporate speak (“operational intelligence” anyone?) and company endorsements. Techs will be interested in getting started. Right on their home page is a big green Free Download button. Go there, click it and locate the downloader for your OS of choice. I downloaded the Mac OS X 10.7 installer to test (and installed it on OS X 10.8 without any issues).

Splunk home

This does require a sign-up to create an account. It takes less than a minute to complete. After submitting the information the 100 MB download begins right away.

While waiting for the download…

When the download is on its way the Splunk folks kindly redirect to a page with some short videos to watch while waiting. Watch this first one called Getting data into Splunk. It’s only a few minutes and this is the first thing to do after getting into Splunk.

Installing and starting Splunk

The download arrives as a double-clickable Apple Installer package. Double-click and install it. Toward the end it opens a simple TextEdit window with instructions for how to start, stop and access the newly installed Splunk site.

Install done

Files are installed in /Applications/splunk and resemble a UNIX file system.

Splunk application folder

Open the Terminal application found in /Applications/Utilities and run the command /Applications/splunk/bin/splunk start. If this is the first time running Splunk it prompts to accept its license agreement. Tap the spacebar to scroll through and read the agreement or type “q” to quit and agree to the license.


Accepting the agreement continues to start Splunk where it displays some brief setup messages.

Starting Splunk

The setup then provides the local HTTP address for the newly installed Splunk site. Open this in a web browser to get to the login screen. The first login requires that the administrator account password be reset.

Splunk login

Following along with the Getting data into Splunk video, Splunk will need some information. Mac OS X stores its own log files. Let’s point to those.

Click the Add Data link to begin.

New Splunk home

Since Mac OS X’s log files are local to the machine, click A file or directory of files.

Add files

Click Next to specify local files.

Add local logs

This opens a window that exposes not only Mac OS X’s visible folders but its invisible folders as well. Browse to /var/log/system.log and click the Select button.

Browse logs folder

For now, opt to skip previewing the log file and click Continue.

Path to system.log

Now, let’s opt to monitor not only the system.log file but the entire /var/log folder containing dozens of other log files as well. Note that Splunk can watch rotated and zipped log files too. Click Save to finish adding logs.

Add /var/log folder

Let’s start searching!

Success, start searching

The Search window initially displays a list of all logs Splunk is monitoring. To narrow the search change the time filter drop down menu to Last 60 minutes. This will make the results a little easier to see on a system that’s only been running a short while.

Last 24 hours

Now, search for install*. Without the asterisk as a wildcard character, Splunk will only search for the word “install”. Splunk supports not only wildcard searches but Booleans, parentheses, quotes, etc. It will return every instance recorded in the logs that matches the search criteria. It also creates an interactive bar chart along the top of the page to indicate the number of occurrences found for the search at particular times.

Search for install

To further refine the search, Option+click most any word in the log entries below and Splunk will automatically add the necessary syntax to remove an item. In this case the install* search returned install, installer and installd. Option+clicking installd changed the search criteria to install* NOT installd.

Modified search

Now what?

Continue exploring the videos to understand Splunk’s possibilities and take advantage of its Splunk Tutorial, which is available online as well as in PDF format for offline viewing. They do a great job leading users through setup and creating reports.

Still asking about price? Good.

The free version remains free but doesn’t include many features that really make it sing such as monitoring and alerts, multiple user accounts and support beyond the Splunk website. Cost depends primarily on the amount of data you want to suck into Splunk and have it watch. It’s not cheap but for an enterprise needing to meet certain service level requirements it beats browsing through multiple servers trying to find the right log with the right information.

FYI, putting together this 1,000-word article probably took me 10 times longer than performing the Splunk install itself and beginning to learn it. It’s really well-done and easy to use. Splunk makes getting started simple.

Recover Data From Crashed SharePoint Server

Thursday, November 1st, 2012

If you ever find yourself in the unfortunate situation of having to recover a corrupted SharePoint server fear not!  What used to be a manual and very tedious process is now quite manageable with a little bit of code and basic knowledge of SharePoint server.

The reason this process can be so tricky is that SharePoint stores all its files in a SQL database, and while that provides much more functionality than a straight file server, it also increases the complexity of backing up and recovering the files stored within it.

Luckily, there is a small script that can be run on the server that exports all data within a SharePoint database.  The following are the steps you can use to recover your documents from a crashed SharePoint server.


Here are the basic steps to getting your docs.

  1. Backup your database(s)
  2. Create a temp database in your default SQL container
  3. Download and customize this code
  4. Compile the code
  5. Run the program


Step 1:  

The first thing you’ll need to do is open up your SQL Manager and create a backup of the DB you want to save.  Normally you need to connect to \\.\pipe\MSSQL$Microsoft##ssee\sql\query, and then you’ll see the correct SharePoint databases.  In this example the database is called STS_SERVER_1, but yours will likely be different.  Right-click this database and back it up to a single file.  Telling it to go to two backup files can cause problems.


Step 2:  

Close and reopen the SQL Manager, but this time connect to the default server.  In my case it is “Server\SQLEXPRESS”.  Once inside, navigate to Databases, right-click, and hit Restore.  I named my database “TEMP_DB”, but feel free to name it whatever you like.  Select the backup file you just created and start the restore.


Step 3:  

Download this code to your desktop and save it as spdbex.cs.  You’ll need to change two variables inside the code.  Look for this part near the top of the code.

string DBConnString = 
"Server=ServerName\\SQLEXPRESS;" +

Yours may look like this:

string DBConnString = 
"Server=YourServer\\SQLEXPRESS;" +

Step 4:

To compile the code, run this command in a command prompt.  It’s assumed that spdbex.cs is in the current folder.

%WINDIR%\Microsoft.NET\Framework\v2.0.50727\csc /target:exe /out:spdbex.exe spdbex.cs

Step 5:

Assuming everything went OK, you should be able to just type in the program name and you’ll be good to go.  This will put all the files that were stored in your SharePoint database into the current folder and subfolders.

Note: Metadata and file versions are not preserved during this restore.

(Sysadmin) Software Design Decisions

Wednesday, October 3rd, 2012

When approaching a task with an inkling to automate, sometimes you find an open source project that fits the bill. But the creator works within constraints, and often expresses an opinion of what’s important to ‘solve’ as a problem and therefore prioritize: a deployment tool is not necessarily a patch management tool is not necessarily a configuration management tool, and so on. One of the things I’ve dealt with is trying to gauge the intent of a developer and deciding if they are interested in further discussion/support/development of a given project. Knowing why one decision was made or another can be helpful in these situations. In that category of things I wish someone else had written so I could read it, here are the design decisions behind the sonOfBackupRestoreScripts project I’ve been toying with as an add-on to DeployStudio (heretofore DS). After reading the following, you can hopefully understand why I am not releasing it as an official, supportable tool in its current bash form.
I’ve adapted some of the things Google used in their outline for Simian as a model, to give this some structure.

Project Objective:

To move user home folders and local authentication/cached credentials between workstations in a customizable and optimized manner, preserving the integrity of the data/user records as much as possible


For speed and data integrity, rsync is used to move selections of the user’s home folder (minus caches, trash, and common exclusions made by Time Machine). To increase portability and preserve Mac-specific attributes, a disk image is generated to enclose the data. The user account information is copied separately, and helpful information is displayed at the critical points as the process moves from one stage to another and during the backup itself.

Requirements: DeployStudio Server / NetBoot

DS, as a service, enables an infrastructure to run the script in, and automounts a repository to interact with over the network. It’s meant to work optimally with or without a NetBoot environment; an architecture assumption made during development/testing is wired Ethernet, with USB/Thunderbolt adapters if the clients are MacBook Airs. Even old minis can function fine as the server, assuming the repo is located on a volume with enough space available to accept the uncompressed backups.

Implementation Details: Major Components / Underlying Programs

- source/destination variables

Parameters can be passed to the script to change the source/destination of backups/restores with the -s (source) and -d (destination) switches, followed by a path that is reachable by the NetBooted system.

- hdiutil

A simple sparse disk image, which can expand up to 100GB, is created with the built-in binary hdiutil. The file system format of that container is JHFS+, and a bunch of other best practices, cobbled together from Bombich’s Carbon Copy Cloner (heretofore CCC) and InstaDMG, are employed.

- cp

The cp binary is used to copy the user records from the directory service the data resides on to the root of the sparseimage, and the admin group’s record is copied into a ‘group’ folder. If hashes exist in /var/db/shadow/hash, which is how passwords were stored prior to 10.7, those are moved to a ‘hashes’ folder.

- rsync

A custom, even more current build of rsync could be generated if the instructions listed here are followed. Ideally, a battle-tested version like the one bundled with CCC (/Applications/Carbon\ Copy\, which is actually a heavily customized rsync version 3.0.6) could be used, but its output isn’t easy to adapt to show an overview of progress during a CLI transfer. Regardless, the recommended switches are employed in hopes of getting a passing grade on the backupBouncer test. The 3.0.7 version bundled with DS itself (/Applications/Utilities/DeployStudio\, which for whatever reason is excluded when the assistant creates NetBoot sets) was used during development/testing.


The Users folder on the workstation being backed up is what’s targeted directly, so any users that have been deleted, or unwanted subfolders, can be skipped via the exclusions file fed to the rsync command. Without catch-all, asterisk (*) ‘file globbing’, you’d need to be specific about the types of files you want to exclude in particular directories. For example, to not back up any mp3 files, no matter where they are in the user folders being backed up, you’d add - *.mp3 to the exclusions file. Additional catch-all excludes can also be used on the command line, as detailed in the script, which specifically excludes ipsw’s (iOS firmware/OS installers) like this: --exclude='*.ipsw'
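As a concrete illustration (the paths and file layout here are hypothetical, not the script’s actual structure), the exclusions-file and command-line styles can be combined like so:

```shell
# Build a throwaway source tree to demonstrate the exclusion patterns.
mkdir -p /tmp/demo/src/Users/alice/Music /tmp/demo/dst
touch /tmp/demo/src/Users/alice/notes.txt
touch /tmp/demo/src/Users/alice/Music/song.mp3
touch /tmp/demo/src/Users/alice/Restore.ipsw

# Per-type patterns live in the exclusions file ("- " marks an exclude)...
cat > /tmp/demo/Excludes.txt <<'EOF'
- *.mp3
EOF

# ...while catch-all excludes can also be passed on the command line.
rsync -a --exclude-from=/tmp/demo/Excludes.txt --exclude='*.ipsw' \
    /tmp/demo/src/Users/ /tmp/demo/dst/
```

After the run, notes.txt is copied while the mp3 and ipsw files are skipped.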


Pretty much everything done via rsync and cp can also be done in reverse, utilizing the source/destination options, so a backup taken from one machine can easily be restored to another.

Security Considerations:

Very little security is applied during storage. Files are transferred over password-protected AFP, so a separate server and repo could be used to minimize potential access by whoever can reach the main DS service. Nothing encrypts the files inside the sparseimages, and if present, the older password format is a hash that could potentially be cracked given a great length of time. The home folder ACLs and ownership/perms are preserved, so in that respect it’s as secure as the local file systems on the server and client.

Excluded/Missing Features:
(Don’t You Wish Every Project Said That?)

Hopefully this won’t sound like a soul-bearing confession, but here goes:
No checks are in place for whether there is enough space on destinations, nor whether a folder to back up is larger than the currently hard-coded 100GB sparseimage cap (after exclusions). Minimal redirection of logs is performed, so the main DS log can quickly hit its 2MB cap and stop updating the DS NetBoot log window/GUI if there’s a boatload of progress echoed to stdout. The process to restore a user’s admin group membership (or any other group on the original source) is not performed, although the group’s admin.plist can be queried after the fact. Nor is there any reporting on deleted users’ orphaned home folders if they do actually need to be preserved; by default they’re just part of the things rsync excludes. All exclusions live in the Excludes.txt file fed to rsync, so they cannot be passed as a parameter to the script.
And the biggest possible unpleasantness is also the #1 reason I’m not considering continuing development in bash: UID collisions. If you restore a 501 user to an image with a pre-existing 501 user that was the only admin… bad things will happen. (We’ve changed our default admin user’s UID as a result.) If you get lucky, you can change one user’s UID or the other and chown to fix things as admin before all heck breaks loose. If this isn’t a clean image, there’s no checking for duplicate users with newer data; there’s no FileVault 1 or 2 handling; no prioritization (so that, if only a few home folders fit, it could take those and warn about the one(s) that wouldn’t); no version checking on the binaries in case different NetBoot sets are used; no fixing of ByHostPrefs (although DS’s finalize script should handle that); and no checks with the die function if the restore destination doesn’t have enough space, since the common case is restoring to the same HD or a newer, presumably larger computer. Phew!


The moral of the story is that the data structures available in most other scripting languages are better suited to these checks and to taking evasive action as necessary. Bash does really ungainly approximations of tuples/dictionaries/hash tables, which forced the previous version of this project to perform all necessary checks and actions in a single per-user loop to keep things functional without growing exponentially longer and more complex.
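To illustrate the ungainliness: the bash 3.2 that ships with OS X has no associative arrays, so a "dictionary" has to be faked, typically by welding the key into a variable name with eval. A minimal sketch (the keys and values here are invented):

```shell
# Fake key/value storage in bash 3.x: the key becomes part of the
# variable name, so keys must be valid identifier characters only.
dict_set() { eval "dict_$1=\"\$2\""; }
dict_get() { eval "printf '%s\n' \"\$dict_$1\""; }

dict_set uid501 "admin"
dict_set uid502 "guest"
dict_get uid501    # prints: admin
```

Compare that with a one-line dict literal in Python and the motivation for porting becomes obvious.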

Let's look forward to the distant future when this makes its way into Python for the next installment in this project. Of course I've already got the name of the successor to SonOfBackupRestoreScripts: BrideOfBackupRestoreScripts!

MacSysAdmin 2012 Slides and Videos are Live!

Thursday, September 20th, 2012

318 Inc. CTO Charles Edge and Solutions Architect alumnus Zack Smith were back at the MacSysAdmin Conference in Sweden again this year, and the slides and videos are now available! All the 2012 presentations can be found here, and past years are at the bottom of this page.

Unity Best Practices In AVID Environments

Thursday, September 6th, 2012

Avid Unity environments are still common these days because the price of Avid's ISIS SAN is tremendously high. While a Unity typically started anywhere from $50,000 to $100,000, a typical ISIS starts around the same price even though the ISIS is based on more typical, less expensive commodity hardware: the ISIS uses common gigabit networking, whereas the Unity is based on fibre channel SCSI.

Avid Unity systems come in two flavors. Both can be accessed by fibre channel or by gigabit ethernet. The first flavor is all fibre channel hardware. The second uses a hardware RAID card in a server enclosure with a sixteen drive array and shares that storage over fibre channel and/or gigabit ethernet.

Components in a fibre channel only Unity can be broken down so:

  • Avid Unity clients
  • Fibre channel switch
  • Fibre channel storage
  • Avid Unity head

Components in a chassis-based Unity are:

  • Avid Unity clients
  • Fibre channel switch
  • Avid Unity controller with SATA RAID

The fibre-channel-only setup can be more easily upgraded. Because such setups are generally older, they typically came with a 2U rackmount dual Pentium 3 (yes, Pentium 3!) server. They use a 2 gigabit ATTO fibre channel card, and reliability can be questionable after a decade.

The Unity head can be swapped for a no-frills Intel machine (AMD doesn't work, and there's not enough time in the world to figure out why), but one must be careful about video drivers. Several different integrated video chips and several video cards have drivers which somehow conflict with Unity software, so sometimes it's easier to simply not install any drivers since nothing depends on them. The other requirements/recommendations are a working parallel port (for the Unity dongle), a PCIe slot (for a 4 gigabit ATTO fibre channel card) and 4 gigs of memory (so that Avid File Manager can use a full 3 gigabytes).

The fibre channel switch is typically either a 2 gigabit Vixel switch or a 4 gigabit Qlogic 5200 or 5600 switch. The older Vixel switches have a tendency to fail because there are little heat sinks attached to each port chip facing downward, and after a while a heat sink or two can fall off and the chip dies. Since Vixel is no longer in business, the only replacement is a Qlogic.

The fibre channel storage can be swapped for a SATA-fibre RAID chassis so long as the chassis supports chopping up RAID sets into many smaller logical drives on separate LUNs. Drives which Avid sells can be as large as 1 TB if using the latest Unity software, so dividing up the storage into LUNs no larger than 1 TB is a good idea.

Changing storage configuration while the Unity has data is typically not done due to the complexity and lack of proper understanding of what it entails. If it’s to be done, it’s typically safer to use a client or multiple clients to back up all the Unity workspaces to normal storage, then reconfigure the Unity’s storage from scratch. If that is what is done, that’s the best opportunity to add storage, change from fibre channel drives to RAID, take advantage of RAID-6, et cetera.

Next up is how Avid uses storage. The Unity essentially thinks that it’s given a bunch of drives. Drives cannot easily be added, so the only time to change total storage is when the Unity will be reconfigured from scratch.

The group of all available drives is called the Data Drive Set. There is only one Data Drive Set and it has a certain number of drives. You can create a Data Drive Set with different sized drives, but there needs to be a minimum of four drives of the same size to make an Allocation Group. Spares can be added so that detected disk failures can trigger a copy of a failing drive to a spare.

Once a Data Drive Set is created, the File Manager can be started and Allocation Groups can be created. The reasoning behind Allocation Groups is so that groups of drives can be kept together and certain workspaces can be put on certain Allocation Groups to maximize throughput and/or I/O.

There are pretty much two different families of file access patterns. One is pure video streaming, which is, as one might guess, just a continuous stream of data with very little other file I/O. Sometimes caching parameters on fibre-SATA RAID are configured so that large video-only or video-primary drive sets (sets of logical volumes cut up from a single RAID set) are optimized for streams. The other file access pattern is handling lots of little files such as audio, stills, render files and project files. Caching parameters set to optimize lots of small random file I/O can show a noticeable improvement, particularly for the Allocation Group which holds the workspace on which the projects are kept.

Workspaces are what they sound like. When creating a workspace, you decide which Allocation Group that workspace will exist on. Workspaces can be expanded and contracted even while clients are actively working in them. The one workspace which matters most when it comes to performance is the projects workspace. Because Avid projects tend to have hundreds or thousands of little files, an overloaded Unity can end up taking tens of seconds simply to open a bin in Media Composer, which will certainly affect editors trying to work. The Attic is kept on the projects workspace, too, unless explicitly set to a different destination.

Although Unity systems can have ridiculously long uptimes, like any filesystem there can be problems. Sometimes lock files won’t go away when they’re supposed to, sometimes there can be namespace collisions, and sometimes a Unity workspace can simply become slow without explanation. The simplest way to handle filesystem problems, especially since there are no filesystem repair tools, is to create a new workspace, copy everything out of the old workspace, then delete the old workspace. Fragmentation is not checkable in any way, so this is a good way to make a heavily used projects workspace which has been around for ages a bit faster, too.

Avids have always had issues when there are too many files in a single directory. Since the media scheme on Avids involves Media Composer creating media files in workspaces on its own, one should take care to make sure that there aren't any single directories in media workspaces (heck, any workspaces) which have more than 5,000 files. Media directories are created based on the client computer's name in the context of the Unity, so if a particular media folder has too many items, that folder can be renamed to the same name with a "-1" at the end (or "-(n+1)").
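A sweep for overfull directories is easy to sketch in shell. The 5,000-file threshold comes from above; the function name and the workspace path in the usage comment are placeholders:

```shell
# Report any directory under $1 that directly holds more than 5,000 files.
check_media_dirs() {
  find "$1" -type d | while read -r dir; do
    count=$(find "$dir" -maxdepth 1 -type f | wc -l)
    [ "$count" -gt 5000 ] && echo "$dir: $count files"
  done
}
# Usage against a mounted workspace (path is a placeholder):
# check_media_dirs "/Volumes/MediaWorkspace"
```

Anything it prints is a candidate for the rename-with-"-1" treatment described above.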

Avid has said that the latest Media Composer (6.0.3 at the time of this writing) is not compatible with the latest Unity client (5.5.3). This is not true and while certain exotic actions might not work well (uncompressed HD, large number of simultaneous multicam, perhaps), all basic editing functions work just fine.

Finally, it should be pointed out that when planning ways to back up Unity workspaces, Windows clients are bad candidates. Because the number of simultaneously mounted workspaces depends on the number of drive letters available, Windows clients can back up at most 25 workspaces at a time. Macs have no limitation on the number of workspaces they can mount simultaneously, plus Macs have rsync built into the OS, so they're a more natural candidate for performing backups.

Digital Forensics – Best Practices

Thursday, September 6th, 2012

Best Practices for Seizing Electronic Evidence

A joint project of the International Association of Chiefs of Police
The United States Secret Service

Recognizing Potential Evidence:

Computers and digital media are increasingly involved in unlawful activities. The computer may be contraband, fruits of the crime, a tool of the offense, or a storage container holding evidence of the offense. Investigation of any criminal activity may produce electronic evidence. Computers and related evidence range from the mainframe computer to the pocket-sized personal data assistant to the smallest electronic chip storage device. Images, audio, text, and other data on these media can be easily altered or destroyed. It is imperative that investigators recognize, protect, seize, and search such devices in accordance with applicable statutes, policies, best practices, and guidelines.

Answers to the following questions will better determine the role of the computer in the crime:

  1. Is the computer contraband or fruits of a crime?
    For example, was the computer software or hardware stolen?
  2. Is the computer system a tool of the offense?
    For example, was the system actively used by the defendant to commit the offense? Were fake IDs or other counterfeit documents prepared using the computer, scanner, or printer?
  3. Is the computer system only incidental to the offense, i.e., being used to store evidence of the offense?
    For example, is a drug dealer maintaining his trafficking records in his computer?
  4. Is the computer system both instrumental to the offense and a storage device for the evidence?
    For example, did the computer hacker use her computer to attack other systems and also use it to store stolen credit card information?

Once the computer’s role is known and understood, the following essential questions should be answered:

  1. Is there probable cause to seize the hardware?
  2. Is there probable cause to seize the software?
  3. Is there probable cause to seize the data?
  4. Where will this search be conducted?
    For example, is it practical to search the computer system on site or must the examination be conducted at a field office or lab?
    If Law Enforcement officers remove the computer system from the premises to conduct the search, must they return the computer system, or copies of the seized data, to its owner/user before trial?
    Considering the incredible storage capacities of computers, how will experts search this data in an efficient and timely manner?

Preparing For The Search and/or Seizure

Using evidence obtained from a computer in a legal proceeding requires:

  1. Probable cause for issuance of a warrant or an exception to the warrant requirement.
    CAUTION: If you encounter potential evidence that may be outside of the scope of your existing warrant or legal authority, contact your agency’s legal advisor or the prosecutor as an additional warrant may be necessary.
  2. Use of appropriate collection techniques so as not to alter or destroy evidence.
  3. Forensic examination of the system completed by trained personnel in a speedy fashion, with expert testimony available at trial.

Conducting The Search and/or Seizure

Once the computer's role is understood and all legal requirements are fulfilled:

1. Secure The Scene

  • Officer Safety is Paramount.
  • Preserve Area for Potential Fingerprints.
  • Immediately Restrict Access to Computers/Systems; Isolate from Phone, Network, as well as Internet, because data can be accessed remotely on the system in question.

2. Secure The Computer As Evidence

  • If the computer is powered “OFF”, DO NOT TURN IT ON, under any circumstances!
  • If the computer is still powered “ON”…

Stand-alone computer (non-networked):

  1. Photograph the screen, then disconnect all power sources; unplug from the back of the computer first, then from the outlet. The system may be connected to a UPS, which would prevent it from shutting off if unplugged at the wall first.
  2. Place evidence tape over each drive slot.
  3. Photograph/Diagram and label back of computer components with existing connections.
  4. Label all connectors/cable ends to allow for reassembly, as needed.
  5. If transport is required, package components and transport/store components always as fragile cargo.
  6. Keep away from magnets, radio transmitters, and otherwise hostile environments.

Networked or Business Computers: Consult a Computer Specialist for Further Assistance!

  1. Pulling the plug on a networked computer could severely damage the system.
  2. It could disrupt legitimate business.
  3. It could create liability for investigators or law enforcement personnel.



A Bash Quicky

Thursday, August 30th, 2012

In our last episode spelunking a particularly shallow trough of bash goodness, we came across dollar sign substitution, which I said mimics some uses of regular expressions. Regexes are often thought of as thick or dense with meaning. One of my favorite descriptions goes something like: if you measured each character used in the code for a regex in cups of coffee, you'd find the creators of this particular syntax the most primo, industrial-strength-caffeinated folks around. I'm paraphrasing, of course.

Now copy-pasta-happy, cargo-culting coders like myself tend to find working code samples and reuse salvaged pieces almost without thinking, often recognizing the shape of the lines of code more than the underlying meaning. Looping back around to dollar sign substitution, we can actually interpret this commonly used value, assigned to a variable to capture the name of the script:
Okay children, what does it all mean? Well, let's start at the very beginning (a very good place to start):

${0}

The dollar sign and curly braces force an evaluation of the symbols contained inside, often used for returning complex series of variables. As an aside, counting in programming languages starts with zero, and each space-separated part of the text is assigned a number for its place in the order; these are known as positional parameters. The entire path to our script is given the special 'seat' of zero, so this puts the focus on that zero position.

Regrouping quickly, our objective is to pull out the path leading up to the script's name. So we're essentially gathering up all the stuff up to and including the last forward slash before our script's filename, and chuckin' it in the lorry bin.
${0##*}

To match all of the instances of a pattern, in our case the forward slashes in our path, we double up the number signs (or pound signs for telecom fans, or hashes for our friends on the fairer side of the puddle). This performs a "greedy" match, gobbling up all instances, with a star "globbing" to indiscriminately mop up any matching characters encountered along the way.
${0##*/}

Then we cap the whole mess off by telling it to stop when it hits the last occurrence of a character, in this case the forward slash. And that's that!
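Here's the walkthrough above in runnable form, with a hypothetical path standing in for ${0} so it can be tried anywhere:

```shell
# Hypothetical stand-in for ${0}, so the expansion can be shown directly:
path="/usr/local/bin/myscript.sh"
echo "${path}"       # the full path, untouched
echo "${path##*/}"   # myscript.sh: the greedy */ match is stripped away
# In a real script the same idiom yields the script's own name:
# scriptname="${0##*/}"
```
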

Pardon the tongue-in-cheek tone of this quick detour into a bash-style regex-analogue… but to reward the masochists, here’s another joke from Puppet-gif-contest-award-winner @pmbuko:

Email from a linux user: “Slash is full.” I wanted to respond: “Did he enjoy his meal?”

Evaluating the Tokens, or Order of Expansion in Bash

Monday, July 23rd, 2012

Previously in our series on commonly overlooked things in bash, we spoke about being specific with the binaries our script will call, and mentioned conventions to use after deciding what to variable-ize. The rigidity and lack of convenience afforded by bash start to poke through when we're trying to abstract re-usable inputs by making them into variables, and folks are commonly tripped up when trying to have everything come out as intended on the other side, when that line runs. You may already know to put quotes around just about every variable to catch the possibility of spaces messing things up, and we're not even touching on complex 'sanitization' of things like non-roman alphabets and/or UTF-8 encoding. Knowing the order of expansion the interpreter will use when running our scripts is important. It's not all drudgery, though, as we'll uncover features available to bash that you may not have realized exist.

For instance, you may know curly braces can be used in the shell, but did you know there's syntax to, for example, expand to multiple extensions for the same filename by putting them in curly braces, comma-separated? An interactive example (with set -x):
cp veryimportantconfigfile{,-backup}
+ cp veryimportantconfigfile veryimportantconfigfile-backup

That's referred to as brace expansion, and it's the first of the (roughly) six types of expansion the bash interpreter goes through when evaluating lines and 'tokenized' variables in a script.

Since you're CLI-curious and go command line (trademark @thespider) all the time, you're probably familiar not only with using tilde (~) as a shortcut to the current logged-in user's home directory, but also with the fact that a bare cd will assume you meant to go to that home directory. A user's home gets a lot of traffic, and while the builtin $HOME variable is probably more reliable if you must include interaction with home directories in your script, tilde expansion (including any subdirectories tagged onto the end) is next in our expansion order.
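For the record, the three roads to the same place (the output naturally depends on who's logged in):

```shell
# Tilde expansion vs. the $HOME builtin variable:
echo ~           # the current user's home directory
echo "$HOME"     # same value, and it survives double quotes
( cd && pwd )    # a bare cd also lands in the home directory
```
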

Now things get (however underwhelmingly) more interesting. Third in the hit parade, each with semi-equal weighting, are
a. the standard “variable=foo, echo $variable” style ‘variable expressions’ we all know and love,
b. backtick-extracted results of commands, which can also be achieved with $(command) (and, if worst came to worst, you could force another expansion of a variable with the eval command)
c. arithmetic expressions (like -gt for greater than, equal, less than, etc.) as we commonly use for comparison tests,
and an interesting set of features that are actually convenient (and mimic some uses of regular expressions), called (misleadingly)
$. dollar sign substitution. All of the different shorthand included under this category has been written about elsewhere in detail, but one in particular is an ad-hoc twist on a catchall that you could otherwise apply via the 'shell options', or shopt command (originally created to expand on 'set', which we mentioned in our earlier article when adding a debug option with 'set -x'). The options available with shopt are also a bit too numerous to cover now, but one that you'll see particularly strict folks use is 'nounset', to ensure that variables have always been defined if they're going to be evaluated as the script runs. It's only slightly confusing that a variable can have an empty string for a value, which would pass this check. Often it's the other way around, and we'll have variables that are defined without being used; the thing we'd really like to look out for is when a variable is supposed to have a 'real' value, and the script could cause ill effects by running without one. So the question becomes: how do we check for those important variables as they're expanded?
A symbol used in bash that will come up later when we cover getopt is the colon, which refers to the existence of an argument, or the variable's value (text or otherwise) that you'd be expecting to have set. Dollar sign substitution mimics this concept by letting you check ad hoc for empty (or 'null') variables: follow a standard '$variable' with ':?' (finished product: ${variable:?}). In other words, it tests whether $variable expanded into a 'real' value, and it will exit the script at that point with an error if unset, like an ejector seat.
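Here's a minimal sketch of that ejector seat; the variable names are illustrative, not from any particular script:

```shell
# ${variable:?} in action; 'server' and 'empty' are made-up names.
server="mdc1.example.com"
echo "${server:?server must be set}"   # value is set, so the script continues
# The same expansion on an empty variable aborts; here the subshell
# contains the blast and its stderr message is suppressed:
( empty=""; echo "${empty:?empty must be set}" ) 2>/dev/null || echo "ejected"
```
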

Moving on to the less heavy expansions, the next is command lookup in the run environment's PATH, with lines evaluated like regular (western) sentences, from left to right.
As it traipses along down a line running a command, the shell follows that command's rules regarding whether it expects certain switches and arguments, and assumes those are split by some sort of separator (whitespace by default), referred to as the Internal Field Separator. The order of expansion continues with this 'word splitting'.
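Word splitting is easy to see with set and a deliberately unquoted variable (the string contents here are arbitrary):

```shell
# An arbitrary string, split on the default IFS (space, tab, newline):
line="-avz --delete /src /dest"
set -- $line    # deliberately unquoted, so the shell splits it into words
echo "$#"       # number of resulting words: 4
echo "$1"       # first word: -avz
```

This is exactly why unquoted variables with spaces in them bite people: every space becomes a word boundary.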

And finally, there's regular old pathname pattern matching: if you're processing a file or folder in a directory, it finds the first instance that matches and evaluates that. Pretty straightforward. You may notice we're often linking to the Bash Guide for Beginners site, as hosted by The Linux Documentation Project. Beyond that resource, there are also videos from the 2011 (iTunesU link) and 2012 (YouTube) Penn State Mac Admins conferences on this topic if you need a refresher before we forge ahead for a few more posts.

Back to Basics with Bash

Tuesday, July 17th, 2012

The default shell for Macs has been bash for as long as some of us can remember (as long as we forget it was tcsh through 10.2.8… and before that… there was no shell, it was OS 9!). Bash as a scripting language doesn't get the best reputation, as it is certainly suboptimal and generally unoptimized for modern workflows. To get common things done you need to care about procedural tasks, and things can become very 'heavy' very quickly. With more modern programming languages that have niceties like APIs and libraries, the catchphrase you'll hear is that you get loads of functionality 'for free', but it's good to know how far we can get with bash, and why those object-oriented folks keep telling us we're missing out. And although most of us are using bash every time we open a shell (zsh users probably know all this stuff anyway), there are things a lot of us aren't doing in scripts that could be better. Bash is not going away, and is plenty serviceable for 'lighter', one-off tasks, so over the course of a few posts we'll touch on bash-related topics.

Something even a long-time scripter may easily overlook is how we might set variables more smartly and more often, making good decisions and being specific about what we choose to variable-ize. If the purpose of a script is to customize things in a way that's reusable, making a variable out of that customization (say, for example, a hostname or notification email address) allows us to easily re-set that variable in the future. And in our line of work, if you do something once, it is highly probable you'll do it again.

Something else you may have seen in certain scripts is the PATH variable being explicitly set or overridden, under the assumption that it may not be set in the environment the script runs in, or that the droids… er, binaries we're looking for will definitely be found once we set the path directories specifically. This is well-intentioned, but imprecise to put it one way, clunky to put it another. Setting a custom path, or having customized binaries that could end up interacting with our script, may cause unintended issues, so some paranoia should be exhibited. As scientists and troubleshooters, being as specific as possible always pays returns, so a guiding principle we should consider adopting is, instead of setting the path and assuming, to make a variable for each binary called as part of a script.

Now would probably be a good time to mention a few things that assist us when setting variables for binaries. As conventions go, it helps to leave variable names that are set for binaries in lowercase, and use ALL CAPS for the customizations we're shoving in; that way only our customized info jumps out in all caps as we debug/inspect the script and when we go in to update those variables for a new environment. /usr/bin/which tells us the path to the binary which is currently the first discovered in our path; for example, 'which which' tells us we first found a version of 'which' in /usr/bin. Similarly, you may guess from its name what /usr/bin/whereis does. Man pages as a mini-topic are also discussed here. However, a more useful way to tell if you're using the most efficient version of a binary is to check it with /usr/bin/type. If it's a shell builtin, like echo, it may be faster than alternatives found at other paths, and you may not even find it necessary to make a variable for it, since there is little chance someone has decided to replace bash's builtin 'cd'…
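To sketch the convention (the email address is a placeholder, and awk is just an arbitrary example binary):

```shell
awk_bin="$(/usr/bin/which awk)"     # lowercase: the path to a binary we call
NOTIFY_EMAIL="admin@example.com"    # ALL CAPS: our per-environment customization
echo "awk found at: $awk_bin"
type echo                           # a builtin, so no variable needed for it
```
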

The last practice we'll try to spread the adoption of is using declare when setting variables. Again, while a lazy sysadmin is a good sysadmin, a precise one doesn't have to worry about as many failures. A lack of portability across shells helped folks overlook it, but this is useful even if it is bash-specific. When you use declare with -r for read-only, you're ensuring your variable doesn't accidentally get overwritten later in the script. Just like the tool 'set' for shell settings (which is used to debug scripts with the xtrace option, tracing how variables are expanded and executed), you can remove the type designation from variables with a +, the same way set +x turns tracing back off. Integers can be ensured by using -i (which frees us from using 'let' when we are simply setting a number), arrays with -a, and when you need a variable to stick around for longer than the individual script it's set in or the current environment, you can export the variable with -x. Alternately, if you must use the same exact variable name with a different value inside a nested script, you can set the variable as local so you don't 'cross the streams'. We hope this starts a conversation on proper bash-ing; look forward to more 'back to basics' posts like this one.
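The declare flags above, gathered into one bash-only sketch (all the names and values are invented for illustration):

```shell
#!/bin/bash
declare -r suffix=".example.com"    # -r: read-only, can't be clobbered later
declare -i retries=3                # -i: integer, arithmetic without 'let'
declare -a volumes=("/" "/Users")   # -a: array
declare -x BACKUP_ROOT="/Backups"   # -x: exported to child processes
retries=retries-1                   # evaluated arithmetically thanks to -i
echo "$retries"                     # now 2
echo "${#volumes[@]}"               # 2 elements
```

Trying to reassign suffix after the declare -r fails with a "readonly variable" error, which is exactly the safety net we're after.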

Video on Setting up TheLuggage

Friday, July 13th, 2012

The Luggage is shaping up to be the go-to packaging software for Mac Admins. Getting started can be daunting for some, though, so I’ve narrated a video taking you through the steps required to set it up. Not included:
- Getting a Mac developer account (while this process can mostly be done for free, it's the best and easiest way if you do have access)
- Downloading the tools from the Mac Dev Center (Command Line Tools and Auxiliary Tools for Xcode)
- Choosing your favorite text editor (no emacs vs vi wars, thanks)

Setting up The Luggage from Allister Banks on Vimeo.

Happy Packaging! Please find us on Twitter or leave a comment if you have any feedback.

Download Another Copy of Office 2010

Tuesday, July 3rd, 2012

Did your Office 2010 DVD go missing? Let’s see, you open the drawer it’s supposed to be in and an evil Gremlin jumps out at you, using broken pieces of the DVD as shanks flying this way and that, trying to cut your eyes out! Well, we tried to tell you not to feed the cute little guys… Or maybe it got scratched while being prodded by aliens who abducted it to try and steal Microsoft’s source code. Maybe it’s just stuck inside that huge Lego castle that you just can’t bring yourself to tear down to get at it…

Whatever the problem, fret not (once you seek medical attention for the fireball that crashed to Earth, burning just your disc, or escape from the black hole that sucked your DVD into a vortex, miraculously leaving that New Kids on the Block CD in its place)! Microsoft has a solution for you. To download a fresh, new file that you can burn to a DVD, just go to this site and enter your serial number:

Within minutes (or hours if your bandwidth isn’t so great) you’ll be reunited with your old pal Clippy!

Pass the Time With Easily-Accessible Conference Videos

Thursday, June 7th, 2012

It was the pleasure of two 318'ers to attend and present at the PSU Mac Admins Conference last month, and lickety-split, the videos have been made available! (Slides should appear shortly.) Not only can you pass the time away with on-demand streaming from YouTube, you can also download them for offline access (like on the plane) from the iTunesU channel. Enjoy the current most popular video!

MacPorts new-ish tricks, and a new-ish trickster, Rudix

Monday, May 14th, 2012

As the bucket-loads of package providers in Puppet may lead you to believe, if we do not study history we are doomed to repeat it. Or more to the point, there is no shortage of projects focused on solving the same 'how do I get the bits of code I want to execute on a machine installed' issue. Mac sysadmins have used Fink and MacPorts (originally named DarwinPorts) to acquire various open source software and unix tools not bundled with the operating system. A disadvantage many people found in those projects was the reliance on developer tools and compile time to actually go through the build-from-source process, which brings us to the news that was brought to our attention this weekend, via the Twitter: MacPorts now hosts pre-built archives for Lion, which are used automatically when available. There are a few caveats (e.g. it's only available for projects with compatible licensing), but this functionality was added for Snow Leopard mid-last year, along with another interesting development: you can host your own custom pre-built archives on a local network, as described here.

All of this is to say that if you thought the game was over and competing projects like Homebrew had won… then you haven’t been paying attention to all those innovators, putting more tools in our belts.

Speaking of optimizations in package management, while MacPorts can generate packages once you’ve acquired the source or binary archive, another project called Rudix goes one step further and hosts packages of the software it offers on googlecode. It specifically won’t build from source, but its packages are meant to include all the necessary dependencies, and like other managers it can be driven from the command line, and uninstall as necessary. No more excuses not to have iperf or mtr when you need it, and if you’d rather have a little more control over the version of ssh-copy-id than what Homebrew provides, you can use a project like the Luggage.

Xsan Deployment Checklist

Tuesday, April 10th, 2012

One of the harder aspects of building systems consistently in a repeatable fashion is that you often need a checklist to follow in order to maintain that consistency. Therefore, we’ve started an Xsan Installation Checklist, which we hope will help keep all the i’s dotted and t’s crossed. Feel free to submit any items we should add to the checklist and also feel free to use it to verify the configuration of your own Xsans.


[ ] Work out ahead of time how permissions will be dealt with:

  • Active Directory
  • Open Directory
  • Local Clients in same group with different UIDs.

[ ] If Active Directory is already in place, verify that systems are bound properly.

[ ] If Open Directory is already in place, verify that systems are bound properly.

[ ] If Open Directory is not already in place, configure Open Directory.

[ ] All client Public interfaces should have working forward and reverse DNS resolution.

Fibre Channel (Qlogic)

[ ] Update Qlogic firmware to latest on all switches.

[ ] Set nicknames for all devices in the fabric.

[ ] Export the nicknames.xml file and give to customer or import to workstation running Qlogic San Surfer.

[ ] Set the domain IDs on the Qlogic. Different Domain ID for each switch.

[ ] Set port speed manually on Qlogic and clients. Don’t use auto-negotiation.

[ ] Configure the appropriate Qlogic port properties for Targets (Storage) and Initiators (Clients).

  Targets (Storage):
  • Device Scan On
  • I/O Streamguard Off

  Initiators (Clients):
  • Device Scan Off
  • I/O Streamguard On

[ ] Avoid fully populating Qlogic 9200 blades; only use 8-12 ports of each blade to avoid flooding the backplane.

[ ] If the switch has redundant power, plug each power supply into a different circuit.

[ ] Split HBA (client) and storage ports across switches, e.g. port 0 on switch 1, port 1 on switch 2.

Storage (Promise)

[ ] Update Controller firmware to latest version

[ ] If the client has a spare controller, update that as well. Also label the box with the updated firmware number.

[ ] Work out LUNs for MetaData/Journal and Data (MD should be RAID 1, Data should be RAID 5 or 6)

[ ] Adjust script for formatting Promise RAIDs – refer to this link

[ ] Start formatting LUNS according to strategy – this can take up to 24 hours.
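The MetaData-versus-Data split above comes down to how much raw capacity each RAID level sacrifices; a rough back-of-the-envelope calculator (a sketch that ignores hot spares, formatting overhead, and controller reservations):

```python
def usable_capacity(level, disks, disk_tb):
    """Approximate usable TB for the RAID levels mentioned above."""
    if level == 1:                    # MetaData/Journal: mirrored pair
        return disk_tb
    if level == 5:                    # Data: one disk's worth of parity
        return (disks - 1) * disk_tb
    if level == 6:                    # Data: two disks' worth of parity
        return (disks - 2) * disk_tb
    raise ValueError("unexpected RAID level: %r" % level)

# e.g. an 8-drive RAID 6 set of 2 TB disks:
print(usable_capacity(6, 8, 2))  # -> 12
```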

Metadata Network

[ ] If customer has Spanning Tree enabled, make sure Portfast is enabled as well. If possible, disable ST.

[ ] Verify that both clients and servers have GigE connections.

General Client/Server

[ ] Label your NICs clearly: Public LAN and Metadata LAN.

[ ] Configure Metadata network with IP and Subnet Mask only. No router or DNS.

[ ] Disable unused network interfaces.

[ ] Make sure Public Interface is top interface in System Preferences/Network

[ ] Disable IPv6 on all interfaces.

[ ] Energy Saver settings: Make sure “put hard disks to sleep when possible” is disabled.

[ ] Make sure Startup Disk is set to the proper local boot volume.

Metadata Controllers

[ ] Install Xsan on Snow Leopard machines and below (Xsan is included with Lion)

[ ] All MDCs should have mirrored boot drives, with AutoRebuild enabled.

[ ] Sync the clocks via NTP. Make sure all clients and MDCs point to same NTP server.

[ ] Add MDCs to the Xsan

Volume Configuration

[ ] Label all the LUNs clearly.

[ ] Configure the MetaData LUN as a mirrored RAID 1.

[ ] Use an even number of LUNs per pool.

[ ] Use Apple defaults for block size and stripe breadth and test to see if performance is acceptable.

[ ] Do NOT enable Extended Attributes.

[ ] Verify email notification is turned on.

[ ] Make sure the customer knows not to go below 20% free space.

Xsan Creation/Management

[ ] Verify that the same version of Xsan is running on all MDCs and clients.

[ ] For 10.6 and below – add Xsan serial numbers to Xsan Admin

[ ] Add clients to the Xsan

[ ] Verify performance of the Xsan

  • Test speed
  • Test IO
  • Test sustained throughput
  • Test with different file types
  • Test within applications (real world testing)
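For the raw-speed tests above, a crude first pass is a sequential dd write to the mounted volume (the target path is an assumption — point it at a scratch file on the Xsan volume; wrap the dd in time(1) for MB/s figures, and use far larger counts than this to defeat caching):

```shell
# Sequential-write smoke test; TARGET defaults to the working directory
# so the sketch can be tried anywhere.
TARGET="${TARGET:-./xsan_ddtest}"

# 64 x 1 MiB blocks of zeros; dd's stats line (on stderr) is discarded here.
dd if=/dev/zero of="$TARGET" bs=1048576 count=64 2>/dev/null

SIZE=$(wc -c < "$TARGET")
echo "wrote $SIZE bytes"
rm -f "$TARGET"
```

Repeat with different file types and from within the real applications, per the checklist, since synthetic writes only establish a baseline.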

[ ] Document the Xsan for the client

[ ] Upload documentation


Filemaker 12 New Features & Key Changes

Friday, April 6th, 2012

FileMaker Pro 12, Go and Server were all released to the public in early April 2012. Each product brings its own set of new features. First and foremost is the new .fmp12 file format. It is the first file format update since version 7 of FileMaker which added multiple tables per file. This file format update feels more incremental but will introduce a number of changes for environments as they upgrade into the latest version. All the recently released products require this new file format.

FileMaker Pro 12 and Pro 12 Advanced
These are the workhorses of the FileMaker world. Much of the interface remains familiar to users of FileMaker 11 and earlier. Most of the updates in the FileMaker Pro client are related to layout and display. Version 12 provides new visual updates including gradients, alpha channel support, rounded ends on data fields and image slicing. Guides for common screen sizes for both desktop and iOS devices will make layout designers much happier by reducing the number of times you’ll need to go back and forth between Layout and Browse while tweaking a layout to see if you’ve exceeded the display dimensions. Additional visual niceties in the new version include rounded buttons and hover states. All these visual goodies make FileMaker 12 layouts look much like CSS3 web pages.

Containers are now treated a bit differently. You can specify default locations for files stored in containers. This option is selected in File > Manage > Containers. Container files also have additional options when defining them as fields in the database. In Field Options > Storage, there is a new section for Containers where you can specify the default location, and whether or not the file is encrypted (by choosing Secure Storage or Open Storage).

Real World Performance
Working on a client file, conversion from .fp7 to .fmp12 took about 15 minutes for a 650MB file with around 700K records in it. Conversion was smooth, and the resulting file opened, displayed, and parsed correctly in terms of schema, data, scripts, and security. A script for parsing through some text fields for an automated data migration takes about 13 minutes to run in both FileMaker Advanced 11 and FileMaker Advanced 12. Performance appears to be substantially similar between the clients without making further changes, although given some of the new features of 12, it is entirely possible to get far better performance, especially if you have a 64-bit system.

FileMaker Server and Server Advanced
FileMaker Server packs perhaps the biggest change: a 64-bit engine on the back end. This will make FileMaker Server admins much happier, as FileMaker Server will be able to address much larger datasets natively in RAM without paging them to disk. Also of interest to the FileMaker database administrator are new progressive backups, which should allow a better balance between database performance and data protection. Backups and plugins have now been spun out into their own processes, so a problem with a backup or a misbehaving plugin won’t take down your whole FileMaker Server.

Containers in databases hosted on the server will also now support progressive downloads so that you won’t need to wait for an entire video to download before you can start watching it. This will be a boon to iOS users. Which leads me to the final piece of the new FileMaker 12 triumvirate.

FileMaker Go
FileMaker Go also sports many of the new features of its siblings. Support for the .fmp12 format is the biggest change, but not the only one. Also of interest is the ability to both print and export records. This will make FileMaker Go much more attractive as a client for users out in the field. No longer will you need to have FileMaker on a laptop or desktop to get outputs for clients or hard copies for signatures. The final coup de grace for FileMaker Go is its price: free from the App Store. FileMaker Go still requires a database created with FileMaker Pro or Advanced 12. FileMaker Go doesn’t provide the tools for developing a database, as that’s not really what it’s meant to be. Once developed, the database can be hosted either on the iOS device itself or on FileMaker Server for collaboration with other users (both iOS and FileMaker client users). Databases hosted locally, as may be the case if you have users going offline, can then be synchronized to the server when the device comes back online (which may require some custom work to get just right).

FileMaker 12 Pro, Advanced, Server, and Server Advanced are available as either a boxed product or a download; FileMaker Go is available as a free download from the App Store. 318 is a FileMaker partner and our staff are enthusiasts of the product. If you need help or want to discuss a migration to the latest version of FileMaker, please contact your Professional Services Manager, or contact us directly if you do not yet have one.

Using Nagios MIBs with ESX

Thursday, March 22nd, 2012

What is a MIB

A MIB is a Management Information Base. It is an index based upon a network standard that categorizes data for a specific device so that SNMP servers can read the data.
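Because a MIB arranges its data as a tree of dotted-integer OIDs, checking whether one OID falls under another’s subtree is just a component-wise prefix test. A small illustration (the helper function is ours; 1.3.6.1.4.1.6876 is VMware’s registered enterprise subtree):

```python
def oid_is_under(oid, prefix):
    """True if `oid` sits beneath `prefix` in the MIB tree (sketch)."""
    o = [int(part) for part in oid.strip(".").split(".")]
    p = [int(part) for part in prefix.strip(".").split(".")]
    return o[:len(p)] == p

# Any OID defined in VMware's MIBs lives under its enterprise subtree:
print(oid_is_under("1.3.6.1.4.1.6876.3.2.1", "1.3.6.1.4.1.6876"))  # True
```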

Where to Obtain VMware vSphere MIBs

VMware MIBs are specific to the VMware version, though you can try using the ESX MIBs for ESXi. They can be downloaded from VMware’s download site: click on VMware vSphere, then find the version of ESX that you are running under “Other versions of VMware vSphere” (the latest version will be the page that you’re on). Click on “Drivers & Tools”, then click on “VMware vSphere x SNMP MIBs”, where “x” is your version.

How to add VMware vSphere MIBs into Nagios

  • Download the VMware vSphere MIBs as described above
  • Copy the MIB files to /usr/share/snmp/mibs/
  • Run check_snmp -m ALL so it detects the new MIBs

Editing snmpd.conf and starting snmpd on ESX

  • Stop snmpd: service snmpd stop
  • Backup snmp.xml: cp /etc/vmware/snmp.xml /etc/vmware/snmp.xml.old
  • Edit snmp.xml with your favorite CLI text editor to have the following:


  • Backup snmpd.conf: cp /etc/snmp/snmpd.conf /etc/snmp/snmpd.conf.old
  • Use your favorite CLI text editor and edit /etc/snmp/snmpd.conf
  • Erase everything in it.
  • Add in the following and save it:

load  99 99 99
syslocation ServerRoom
syscontact  "ESX Administrator"
rocommunity  public
view systemview included .
proxy -v 1 -c public .

  • Change “syslocation” and “syscontact” to whatever you want
  • Save your work
  • Configure snmpd to autostart: chkconfig snmpd on
  • Allow SNMP through firewall: esxcfg-firewall -e snmpd
  • Start the SNMP daemon: service snmpd start
  • Restart the mgmt-vmware service: service mgmt-vmware restart
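The config-file portion of the steps above can be collected into one small script (a sketch: CONF defaults to a local file so it can be tried safely off the host, and the truncated view/proxy lines are omitted here because their OIDs were cut off above):

```shell
# On the ESX host, set CONF=/etc/snmp/snmpd.conf before running.
CONF="${CONF:-./snmpd.conf.new}"

# Back up any existing file, as in the steps above.
if [ -f "$CONF" ]; then
    cp "$CONF" "$CONF.old"
fi

# Replace the contents with the minimal config from the article.
cat > "$CONF" <<'EOF'
load  99 99 99
syslocation ServerRoom
syscontact  "ESX Administrator"
rocommunity  public
EOF
echo "wrote $CONF"

# Then, on the ESX host itself (not run here):
#   chkconfig snmpd on
#   esxcfg-firewall -e snmpd
#   service snmpd start
#   service mgmt-vmware restart
```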

Determining OID

OIDs are MIB-specific variables that you can instruct an SNMP monitor to look for. These variables can be determined by reading the MIBs. One tool that assists with this is MIB Browser by iReasoning, which runs on Windows, Mac OS X, and Linux/UNIX. To obtain the appropriate OIDs:

  • Load the MIBs in MIB Browser by going to File > Load Mibs
  • Manually comb through to find the OID you want (it will be attached to a string similar to the wording used in vSphere).


  • As an example, the SNMP MIBs for ESX 4.1 were downloaded as above
  • Loaded MIB for VMWARE-RESOURCES-MIB into MIB Browser
  • Searched for “Mem” (Edit > Find in MIB Tree), found “vmwMemAvail”, the OID for this is . (use the OID shown in the dropdown that is near the menu in the MIB Browser – it will show the full OID, which will sometimes include a “0” at the end that the OID listed toward the bottom of the window will not)
  • Add OID into remotehost.cfg (or linux config file) file in Nagios

define service{
    use                 generic-service ; Inherit values from a template
    host_name           ESX4_1
    service_description Memory Available
    check_command       check_snmp!-C public -o . -m all
}

host_name: the name of the device (whatever you want to call it)
service_description: the name of the service you are monitoring (whatever you want to call it)
check_command: -C is to define the community SNMP string, -o is to define the OID to read, -m is to define which MIB files to load – to be more specific, for this example you can narrow “-m all” to “-m VMWARE-RESOURCES-MIB.MIB”
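If you end up monitoring many OIDs, stamping out these stanzas by hand gets tedious; a tiny generator for them (the helper function is ours, not part of Nagios, and sysUpTime.0 — a standard SNMP OID — is used as a stand-in for the OID you find in MIB Browser):

```python
def nagios_service(host_name, description, oid,
                   community="public", mibs="all"):
    """Render an SNMP service stanza like the one above (sketch)."""
    return (
        "define service{\n"
        "    use                 generic-service\n"
        f"    host_name           {host_name}\n"
        f"    service_description {description}\n"
        f"    check_command       check_snmp!-C {community} -o {oid} -m {mibs}\n"
        "}\n"
    )

# Substitute the OID you located in MIB Browser for sysUpTime.0 here:
print(nagios_service("ESX4_1", "Memory Available", "1.3.6.1.2.1.1.3.0"))
```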

Once you’ve done the above you should be able to monitor “Memory Available” for ESX through Nagios.  Repeat the procedure, changing steps where applicable for the specific OID you want to monitor.  If you have questions, or need assistance, please contact 318, Inc. at 1-877-318-1318.

The (Distributed) Version Control Hosting Landscape

Monday, March 19th, 2012

When working with complex code, configuration files, or just plain text, using version control (VC for short) should be like brushing your teeth: you should do it regularly, and getting into a routine with it will protect you from yourself. The internet age has dragged us into more modern ways of tracking changes to and collaborating on source code, and in this article we’ll discuss the web-friendly and social ways of hosting and discovering code.

One of the earliest sites to rise to prominence was Sourceforge, which is now owned by the company behind Slashdot and Thinkgeek. Focused around projects instead of individuals, and offering more basic VC systems, like… CVS, Sourceforge became a site many open source developers would host and/or distribute their software through. Lately, Sourceforge seems to be on the wane, as it has become redirect- and advertising-heavy.

When Google wanted to attract more attention to its open source projects and give outsiders a way to contribute, it opened Google Code in 2005. In addition to SVN, Mercurial (a.k.a. Hg) became available as an alternative VC option in 2009, as it was the system adopted by the Python language, whose creator, Guido van Rossum, was an employee at Google. Hg was one of the original Distributed Version Control Systems, DVCS for short, and the complexity of such a system could feel ‘bolted-on’ when using Google for hosting (especially in the cloning interface); the introduction of Git as an option mid last year brings this feeling out even more.

Bitbucket was another prominent early champion of Hg, and its focus, like those previously mentioned, is also on projects. Atlassian, the company behind it, is a real titan in the industry, as the steward of the Jira bug-tracking software, the Confluence wiki, and the HipChat web-based IM/chatroom service, and it has recently purchased the Mac DVCS GUI client SourceTree. Even more indicative of the fast-paced and free-thinking approach of how Atlassian does business is its adoption of Git late last year as an option for Bitbucket, going so far as to guide folks to move their Hg projects to it.

But the 900-pound gorilla in comparison to all of these is Github, with their motto, ‘Social Coding’. Collaboration can tightly couple developers and make open source dependent on the approval or contributions of others. In contrast, ‘Forking’ as a central concept to Git makes this interdependency less pronounced, and abstracts the project away to put more focus on the individual creators. Many words have already been spent on the phenomenon that is Git and Github by extension, just as its Rails engine enjoyed in years past, so we’ll just sign off here by recommending you sign up somewhere and join the social coding movement!