Archive for the ‘Mass Deployments’ Category

Mavericks Is Here, And It’s Free!

Tuesday, October 22nd, 2013

At 318 we’ve been hard at work preparing for the release of OS X 10.9, Mavericks and OS X Server 3.0. We’ve spent a lot of time writing, testing and filing our findings away. And now the time is here. Mavericks is available on the App Store, with OS X Server and iOS 7.0.3. Additionally, JAMF has released Casper 9.2 and other vendors are releasing patches as quickly as they can.

With new updates to Safari, the addition of iBooks and the new Maps app, better Calendar integration, Finder tags, a new multiple-display manager, new automation features for FileVault, iCloud Keychain, and revamped Notifications, Mavericks is sure to inspire a lot of people to upgrade immediately. Especially now that Apple has announced it’s free! Before you have hundreds of people upgrade in your environment, though, check out the new Caching Server 2 in the Mavericks Server app. It’s awesome and will help make sure you can still stream some Howard Stern while everyone is doing the upgrade.

OS X Server also gets a bunch of new updates, with Profile Manager 3 adding more that you can control and deploy, Xcode Server providing code sharing, new command line options, new options within existing services, etc. Definitely check it out!

And if you need any help with this stuff, we’re more than happy to help prepare for and implement upgrades, whether it’s just for a few clients or for a few thousand!

Wishes Granted! Apple Configurator 1.4 and iOS 7

Wednesday, September 25th, 2013

Back in June, we posted about common irritations of iOS(6) device deployment, especially in schools or other environments trying to remove features that could distract students. Just like with the Genie, we asked for three wishes:

- 1. Prevent the addition of other email accounts, and 2. prevent the sign-in to (or creation of!) Twitter/Facebook (/Vimeo/Flickr, etc.) accounts

Yay! Rejoice in the implementation of your feature requests! At least when on a supervised device, you can now have these options show up as greyed out.

- 3. Disable the setting of a password lock…

Boo! This is still in the realm of things only an MDM can do for you. But at least it’s not something new that MDMs need to implement. More agile ways to interact with App Lock should be showing up in a lot more vendors’ products for a ‘do not pass go, do not collect $200’ way to lead a group of iPads through the exact app they should be using. Something new we’re definitely looking forward to MDM vendors implementing is…

Over-the-Air Supervision!

Won’t it be neat when we don’t need to tether all these devices to get those extra management features?

And One More Thing


Oh, and one last feature I made reference to in passing: you can now sync a supervised device to a computer! …With the caveat that you need to designate that functionality at the time you move the device into supervised mode, and the specific Restrictions payload needs to be set appropriately.


We hope you enjoy the bounty that a new OS and updated admin tools bring.

Increase Shared Memory for Postgres

Friday, August 16th, 2013

The default installation of Postgres in OS X Server can be pretty useful. You may find that as your databases grow, you need to increase the amount of shared memory those databases can access. This is a kernel setting, so it requires sysctl to get just right. You can set it manually just to get your heavy lifting done, and oftentimes you won’t need the settings to persist across a restart. Before doing anything I like to grab a snapshot of all my kernel MIBs:

sysctl -a > ~/Desktop/kernmibs

I like increasing these incrementally, so to bring the maximum shared memory up to 16 MB and increase some of the other settings proportionally, you might do something like this:

sysctl -w kern.sysv.shmmax=16777216
sysctl -w kern.sysv.shmmni=256
sysctl -w kern.sysv.shmseg=64
sysctl -w kern.sysv.shmall=393216

To change back, just restart (or use sysctl -w to load them back in). If you need more for things other than loading and converting databases or patching postgres, then you can bring them up even higher (I like to increment in multiples):

sysctl -w kern.sysv.shmmax=268435456
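
If you’d rather roll things back without a reboot, you can pull the original values out of that snapshot from earlier. Here’s a rough sketch, assuming the snapshot lines look like ‘kern.sysv.shmmax: 4194304’ (the format sysctl -a has used on recent OS X releases):

for key in kern.sysv.shmmax kern.sysv.shmmni kern.sysv.shmseg kern.sysv.shmall ; do
  # pull the saved value for this key out of the snapshot and re-apply it
  value=$( awk -F': ' -v k="$key" '$1 == k { print $2 ; exit }' ~/Desktop/kernmibs )
  [ -n "$value" ] && sysctl -w "${key}=${value}"
done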

Inconsistent Upgrade Behavior on Software-Mirrored RAID Volumes

Thursday, August 8th, 2013

It came up again recently, so this post is to warn folks treading the same path in the near future. First, a little ‘brass tacks’ background: as you probably know, with 10.7 Lion’s Mac App Store-only distribution, you can choose to extract the InstallESD.dmg from the Install Mac OS X (insert big cat name here) application, and avoid redundant downloads and manual Apple ID logins. One could even automate the process on a network that supports NetInstall with a redundantly named NetInstall set to essentially ‘virtualize’ or serve up the installer app on the network.

We’ve found recently that more than a few environments are just getting around to upgrading after taking a ‘wait and see’ approach to Lion, and jumping straight to 10.8 Mountain Lion. Getting to the meat after all this preamble… it was also, at one time, considered best practice to use RAID to mirror the boot disk, even without a hardware card to offload the CPU overhead. (It hadn’t been considered a factor then, but even modern storage virtualization *cough*Drobo*cough* can perform… poorly. I personally recommend what I call a ‘lazy mirror’: having CCC clone the volume, which puts fewer writes on the disk over time and gets you the redundancy of CCC reporting SMART status of the source and destination.)

When upgrading the OS on a software-mirrored boot drive, you get a message about features being unavailable, namely FileVault 2 and the Recovery Partition it relies upon. If it detects the machine being upgraded is running (a relic of a bygone era, a separate OS quaintly called) ‘Mac OS X Server,’ it additionally warns that the server functionality will be suspended until Server.app 2.x can be installed via… the Mac App Store. We’ve found the upgrade can pause those services (at least the ones still provided by the 2.2.1 version of the Server application) and pick up where it left off without incident after Server.app is installed and launched.

If, however, you use a Mac App Store-downloaded application to perform the process, we’ve seen higher success rates for a stable upgrade. If instead you tried to save time with either the InstallESD.dmg or NetInstall set methods mentioned earlier, a failure symptom occurred: post-update, the disk would never complete its first boot (verbose boot was not conclusive as to the reason, either). Moving the application bundle to another machine (volume license codes have, of course, been applied to the appropriate Apple ID on the machines designated for upgrades) hasn’t been as successful, although the recommended repackaging of the Install app, as Apple has referred to in certain documentation, wasn’t attempted this particular time. In some cases even breaking the software mirror didn’t allow the disk to complete an upgrade successfully. Another symptom, before we could tell it was going to fail, was that the drop-down sheet warning of the loss of server functionality would immediately cause the entire window to lose focus while about to initiate the update. A radar has not been filed, due to the fact that a supported (albeit somewhat time-intensive) method exists and has been more consistently successful.

LOPSA-East 2013

Monday, March 18th, 2013

This is the first year I’ll be speaking at the newly-rebranded League of Extraordinary Gentlemen League of Professional System Administrators conference in New Brunswick, New Jersey! It’s May 3rd and 4th, and should be a change from the Mac-heavy conferences we’ve been associated with as of late. I’ll be giving a training class, Intro to Mac and iOS Lifecycle Management, and a talk on Principled Patch Management with Munki. Registration is open now! Jersey is lovely that time of year, please consider attending!

 

LOPSA-East '13

Regarding FileVault 2, Part One, In Da Club

Monday, January 28th, 2013


IT needs to have a way to access FileVault 2(just called FV2 from here on) encrypted volumes in the case of a forgotten password or just getting control over a machine we’re asked to support. Usually an institution will employ a key escrow system to manage FDE(Full Disk Encryption) when working at scale. One technique, employed by Google’s previously mentioned Cauliflower Vest, is based on the ‘personal’ recovery key(a format I’ll refer to as the ‘license plate’, since it looks like this: RZ89-A79X-PZ6M-LTW5-EEHL-45BY.) The other involves putting a certificate in place, and is documented in Apple’s white paper on the topic. That paper only goes into the technical details later in the appendix, and I thought I’d review some of the salient points briefly.

There are three layers to the FV2 cake, divided by the keys interacted with when unlocking the drive:
Derived Encryption Keys(plural), the Key Encrypting Key(from the department of redundancy department) and the Volume Encrypting Key. Let’s use a (well-worn) abstraction so your eyes don’t glaze over. There’s the guest list and party promoter(DEKs), the bouncer(KEK), and the key to the FV2 VIP lounge(VEK). User accounts on the system can get on the (DEK) guest list for eventual entry to the VIP, and the promoter may remove those folks with skinny jeans, ironic nerd glasses without lenses, or Ugg boots with those silly salt-stained, crumpled-looking heels from the guest list, since they have that authority.

The club owner has his name on the lease(the ‘license plate’ key or cert-based recovery), and the bouncer’s paycheck. Until drama pop off, and the cops raid the joint, and they call the ambulance and they burn the club down… and there’s a new lease and ownership and staff, the bouncer knows which side of his bread is buttered.

The bouncer is a simple lad. He gets the message when folks are removed from the guest list, but if you tell him there’s a new owner(cert or license plate), he’s still going to allow the old owner to sneak anybody into the VIP for bottle service like it’s your birthday, shorty. Sorry about the strained analogy, but I hope you get the spirit of the issue at hand.

The moral of the story is, there’s an expiration method(re-wrapping the KEK based on added/modified/removed DEKs) for the(in this case, user…) passphrase-based unlock. ONLY. The FilevaultMaster.keychain cert has a password you can change, but if access has been granted to a previous version with a known password, that combination will continue to work until the drive is decrypted and re-encrypted. And the license plate version can’t be regenerated or invalidated after initial encryption.
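
To put that in concrete terms, the passphrase (DEK) side of things is what you rotate with Apple’s fdesetup tool on 10.8. A quick sketch of checking and pruning the guest list (the user name here is made up, obviously):

sudo fdesetup list                           # who can currently unlock the FV2 volume
sudo fdesetup remove -user formeremployee    # drop a user; the KEK gets re-wrapped, so their passphrase no longer unlocks the drive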

So the two institutional-scale methods previously mentioned still get past the bouncer (that is, unlock the drive) until you tear the roof off the mofo and tear the club up (that is, de- and re-encrypt the volume).

But here’s an interesting point, there’s another type of DEK/passphrase-based unlock that can be expired/rotated besides per-user: a disk-based passphrase. I’ll get to describing that in Part Deux…

Configure network printers via command line on Macs

Wednesday, December 26th, 2012

In a recent Twitter conversation with other folks supporting Macs we discussed ways to programmatically deploy printers. Larger environments that have a management solution like JAMF Software’s Casper can take advantage of its ability to capture printer information and push to machines. Smaller environments, though, may only have Apple Remote Desktop or even nothing at all.

Because Mac OS X incorporates CUPS printing, administrators can utilize the lpadmin and lpoptions command line tools to programmatically configure new printers for users.

lpadmin

A basic command for configuring a new printer using lpadmin looks something like this:

lpadmin -p "salesbw" -v "lpd://192.168.1.10/" -D "Sales B&W" -L "2nd floor print center" -P "/Library/Printers/PPDs/Contents/Resources/HP LaserJet 8150 Series.gz" -E

Several options follow the lpadmin command:

  • -p = Printer name (queue name if sharing the printer)
  • -v = IP address or DNS name of the printer
  • -D = Description of the printer (appears in the Printers list)
  • -L = Location of the printer
  • -P = Path to the printer PPD file
  • -E = Enable this printer

The result of running this command in the Terminal application as an administrator looks like this:

New printer

lpoptions

Advanced printer models may have duplex options, multiple trays, additional memory or special features such as stapling or binding. Consult the printer’s guide or its built-in web page for a list of these installed printer features.

After installing a test printer, use the lpoptions command with the -l option in the Terminal to “list” the feature names from the printer:

lpoptions -p "salesbw" -l

The result is usually a long list of features that may look something like:

HPOption_Tray4/Tray 4: True *False
HPOption_Tray5/Tray 5: True *False
HPOption_Duplexer/Duplex Unit: True *False
HPOption_Disk/Printer Disk: *None RAMDisk HardDisk
HPOption_Envelope_Feeder/Envelope Feeder: True *False
...

Each line is an option. The first line above displays the option for Tray 4 and shows the default setting is False. If the printer has the optional Tray 4 drawer installed then enable this option when running the lpadmin command by following it with:

-o HPOption_Tray4=True

Be sure to use the option name to the left of the slash, not the friendly name with spaces after the slash.

To add the duplex option listed on the third line add:

-o HPOption_Duplexer=True

And to add the envelope feeder option listed on the fifth line add:

-o HPOption_Envelope_Feeder=True

Add as many options as necessary by stringing them together at the end of the lpadmin command:

lpadmin -p "salesbw" -v "lpd://192.168.1.10/" -D "Sales B&W" -L "2nd floor print center" -P "/Library/Printers/PPDs/Contents/Resources/HP LaserJet 8150 Series.gz" -E -o HPOption_Tray4=True -o HPOption_Duplexer=True -o HPOption_Envelope_Feeder=True

The result of running the lpadmin command with the -o options enables these available features when viewing the Print & Scan preferences:

Printer options

With these features enabled for the printer in Print & Scan, they also appear as selectable items in all print dialogs:

Printer dialog
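
If you’re pushing this out over Apple Remote Desktop or a login script, it helps to make the command safe to run more than once. Here’s a minimal sketch along those lines, reusing the queue name and PPD path from above (verify that lpstat’s exit status behaves this way on your OS version before relying on it):

#!/bin/sh
# Only add the Sales B&W queue if it isn't already configured
if ! /usr/bin/lpstat -p salesbw > /dev/null 2>&1 ; then
     /usr/sbin/lpadmin -p "salesbw" -v "lpd://192.168.1.10/" -D "Sales B&W" -L "2nd floor print center" -P "/Library/Printers/PPDs/Contents/Resources/HP LaserJet 8150 Series.gz" -E -o HPOption_Duplexer=True
fi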

 

Create A Package To Enable SNMP On OS X

Friday, November 16th, 2012

Building on the Tech Journal post “Enable SNMP On Multiple Mac Workstations Using A Script“, let’s take the snmpd.conf file and put it into an Apple Installer package for easier deployment.

Most any packaging tool for Mac OS X, such as Apple’s PackageMaker, JAMF Software’s Composer, Absolute Software’s InstallEase or Stéphane Sudre’s Iceberg, can create a simple package combining a couple of scripts with a payload file. For this example, the payload will be the snmpd.conf file itself and the scripts will be preflight and postflight scripts to protect existing data and start the SNMP service.

First, create the snmpd.conf file using the instructions in the Create the snmpd.conf file section from the prior post.
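
For reference, a bare-bones snmpd.conf that matches the read-only community string used in the verification step later in this post might look something like the following. This is only a sketch with placeholder location and contact values; see the prior post for the full syntax:

# /usr/share/snmp/snmpd.conf
rocommunity talkingmoose-read
syslocation "2nd floor server room"
syscontact it@example.com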

Next, create the preflight and postflight scripts using a plain text editor such as TextEdit.app or BBEdit.app. Save each script as “preflight” or “postflight” without any file extensions.

Preflight script

The preflight script stops the SNMP service if it’s running and renames any existing /usr/share/snmp/snmpd.conf file to snmpd.bak followed by a unique date and time.

#!/bin/sh
# Preflight

# Stop the SNMP service if it's running
/bin/launchctl list | /usr/bin/grep org.net-snmp.snmpd
if [ $? = 0 ] ; then
     /bin/launchctl unload -w /System/Library/LaunchDaemons/org.net-snmp.snmpd.plist
     /usr/bin/logger SNMP service stopped. # Appears in /private/var/log/system.log
fi

# Rename the snmpd.conf file if it exists
if [ -f /usr/share/snmp/snmpd.conf ] ; then
     /bin/mv /usr/share/snmp/snmpd.conf /usr/share/snmp/snmpd.bak$( /bin/date "+%Y%m%d%H%M%S" )
fi

exit 0

Postflight script

The postflight script starts the SNMP service and logs the result.

#!/bin/sh
# Postflight

# Start the SNMP service and log success
/bin/launchctl load -w /System/Library/LaunchDaemons/org.net-snmp.snmpd.plist
if [ $? = 0 ] ; then
     /usr/bin/logger SNMP service started. # Appears in /private/var/log/system.log
fi

exit 0

The elements are ready to add to the packaging application. Using Iceberg as an example, create a new project and select Package from the Core Templates. Name the project “Enable SNMP” and select a location to store the project files such as ~/Iceberg. Copy the snmpd.conf file and the preflight and postflight scripts to the ~/Iceberg/Enable SNMP folder for easier access.

Iceberg folder

Edit any information in the Settings pane to add clarity or leave the defaults automatically populated.

Settings

Select the Scripts pane and drag the preflight and postflight scripts into the Installation Scripts window, being sure to match each script file to its corresponding preflight or postflight slot.

Scripts

Select the Files pane. Right-click the top-level root folder in the files list and select New Folder. Name this new folder “usr”. It should appear at the same level as the Applications, Library and System folders. Continue creating new folders until the /usr/share/snmp folder hierarchy is complete. Then drag in the snmpd.conf file so that it falls under the snmp folder.

Select Archive menu –> Show Info to display each folder’s ownership and permissions. Adjust ownership of the new folders and the snmpd.conf file to owner:root and group:wheel. Adjust permissions to 755 for folders and 644 for the file (see screenshot).

Files

The package is ready. Select Build menu –> Build to create the package. Iceberg places new packages into the project folder: ~/Iceberg/Enable SNMP/build/Enable SNMP.pkg.

Copy the newly created package to a test machine and double-click to run it. Verify that everything worked correctly by running the snmpget command:

snmpget -c talkingmoose-read localhost system.sysDescr.0

It should return something like:

SNMPv2-MIB::sysDescr.0 = STRING: Darwin TMI 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

When satisfied the installer works correctly use a deployment tool such as Apple Remote Desktop, Casper or munki to distribute the package to Mac workstations.

Introducing Splunk: Funny name, serious logging

Thursday, November 15th, 2012

So, my boss says:

“Write an article called ‘Getting Started with Splunk.’”

I reply:

“What, you think I know all this stuff? This really would be a getting started article.”

But here it is and WOW is Splunk cool!

My only experience with Splunk up to a couple days ago was seeing a T-shirt with “Log is my copilot”. I knew it had something to do with gathering log files and making them easier to read and search. In about an hour I had gone to Splunk’s website to research the product, downloaded and installed it, and started viewing logs from my own system. The Splunk folks have made getting their product into their customer’s hands easy and getting started even easier.

What is Splunk?

Simply put, Splunk can gather just about any kind of data that goes into a log (system logs, website metrics, etc.) into one place and make viewing that data easy. It’s accessed via web browser so it’s accessible on any computer or mobile device such as an iPad.

What do I need to run Splunk?

Practically any common operating system today can run Splunk: Mac OS X, Linux, Windows, FreeBSD and more.

How much does Splunk cost?

Don’t worry about that right now. Download and install the free version. It takes minutes to install and is a no-brainer. Let’s get started.

Getting Splunk

IT managers and directors may be interested in watching the introductory and business case videos with the corporate speak (“operational intelligence” anyone?) and company endorsements. Techs will be interested in getting started. Right on their home page is a big green Free Download button. Go there, click it and locate the downloader for your OS of choice. I downloaded the Mac OS X 10.7 installer to test (and installed it on OS X 10.8 without any issues).

Splunk home

This does require a sign-up to create an account. It takes less than a minute to complete. After submitting the information the 100 MB download begins right away.

While waiting for the download…

When the download is on its way the Splunk folks kindly redirect to a page with some short videos to watch while waiting. Watch this first one called Getting data into Splunk. It’s only a few minutes and this is the first thing to do after getting into Splunk.

Installing and starting Splunk

The download arrives as a double-clickable Apple Installer package. Double-click and install it. Toward the end it opens a simple TextEdit window with instructions for how to start, stop and access the newly installed Splunk site.

Install done

Files are installed in /Applications/splunk and resemble a UNIX file system.

Splunk application folder

Open the Terminal application found in /Applications/Utilities and run the command /Applications/splunk/bin/splunk start. If this is the first time running Splunk it prompts to accept its license agreement. Tap the spacebar to scroll through and read the agreement or type “q” to quit and agree to the license.

EULA

Accepting the agreement continues to start Splunk where it displays some brief setup messages.

Starting Splunk

The setup then provides the local HTTP address for the newly installed Splunk site. Open this in a web browser to get to the login screen. The first login requires that the administrator account password be reset.

Splunk login

Following along with the Getting data into Splunk video, Splunk will need some information. Mac OS X stores its own log files. Let’s point to those.

Click the Add Data link to begin.

New Splunk home

Since Mac OS X’s log files are local to the machine, click A file or directory of files.

Add files

Click Next to specify local files.

Add local logs

This opens a window that exposes not only Mac OS X’s visible folders but its invisible folders as well. Browse to /var/log/system.log and click the Select button.

Browse logs folder

For now, opt to skip previewing the log file and click Continue.

Path to system.log

Now, let’s opt to monitor not only the system.log file but the entire /var/log folder containing dozens of other log files as well. Note that Splunk can watch rotated and zipped log files too. Click Save to finish adding logs.

Add /var/log folder
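
Prefer to skip the clicking? The same monitor can be added with Splunk’s command line tool instead; a quick sketch assuming the default install path used earlier (it will prompt for the admin credentials set at first login):

/Applications/splunk/bin/splunk add monitor /var/log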

Let’s start searching!

Success, start searching

The Search window initially displays a list of all logs Splunk is monitoring. To narrow the search change the time filter drop down menu to Last 60 minutes. This will make the results a little easier to see on a system that’s only been running a short while.

Last 24 hours

Now, search for install*. Splunk will only search for the word “install” without providing the asterisk as a wildcard character. Splunk supports not only wildcard searches but booleans, parentheses, quotes, etc. It will return every instance recorded in the logs that matches the search criteria. It also creates an interactive bar chart along the top of the page to indicate the number of occurrences found for the search at particular times.

Search for install

To further refine the search, Option+click most any word in the log entries below and Splunk will automatically add the necessary syntax to remove an item. In this case the install* search returned matches such as installer and installd. Option+clicking installd changed the search criteria to install* NOT installd.

Modified search

Now what?

Continue exploring the videos to understand Splunk’s possibilities and take advantage of its Splunk Tutorial, which is available online as well as in PDF format for offline viewing. They do a great job leading users through setup and creating reports.

Still asking about price? Good.

The free version remains free but doesn’t include many features that really make it sing such as monitoring and alerts, multiple user accounts and support beyond the Splunk website. Cost depends primarily on the amount of data you want to suck into Splunk and have it watch. It’s not cheap but for an enterprise needing to meet certain service level requirements it beats browsing through multiple servers trying to find the right log with the right information.

FYI, putting together this 1,000-word article probably took me 10 times longer than performing the Splunk install itself and beginning to learn it. It’s really well-done and easy to use. Splunk makes getting started simple.

DeployStudio Scripting Tips

Tuesday, September 25th, 2012

I’ve given a presentation on my affinity for DeployStudio, yet with it being closed source, it’s sometimes like an enigma (wrapped in a mystery (wrapped in bacon)). However, a workflow action exists to enable scripting within it, although the only option besides running a script automatically when dropped into a workflow is to non-interactively pass arguments to it. Even with little in the way of documented information, projects have popped up to take advantage of the framework provided.

Most notably, Rusty Myers’ BackupRestore scripts enabled quite an interesting workflow: first, you could run one workflow to tar (or ditto) the user folders to a new Backup directory in the repo, with a few customizable exceptions. And then, when either specified or dropped into a workflow that had a restore action precede it, you could put the users and their associated passwords back into place. This is obviously pretty darn useful for clean(er) migrations and/or OS upgrades, or simply refreshing existing workstations with a new base set of software. Many folks in the MacEnterprise community contributed features, including FileVault (version 1) support, and updates were made for the curveball Lion introduced with respect to how passwords are stored (nested inside the user record plist itself).

I’m in the process of creating a successor to this project, so I thought I’d share some of the experiences I’ve had and pointers I’ve come across as a sysadmin, not a software developer, attempting to get a repeatable task accomplished inside this framework. Tip number zero is the same advice given to all students of programming in scripting languages: don’t write a lot before running and getting feedback. So, I booted a laptop with a few small user folders to my DeployStudio-generated netboot set, authenticated to the interface, and opened terminal. That netboot set includes the optional Python framework(Ruby is another option if you’d like access to that scripting language), which I’ll be using in the future. Along with selecting “Display Runtime log window by default”, I extended the “Quit runtime automatically after__” number to 90 seconds, so when testing inside of a workflow I wouldn’t be kicked out as I iterated and repeated the process.

To get started, I made an “admin” folder inside the repo, put an updated version of the rsync binary in it(since the one in /usr/bin that ships with OS X is pretty ancient), and started writing a script in that directory which I could therefore run from terminal on the netbooted laptop over VNC/ARD. For starters, here’s tip #1:
DeployStudio mounts the repo in /tmp/DSNetworkRepository. While /tmp isn’t read only, you will get out-of-space errors and general unhappiness if you use it for anything except mountpoints.
Tip #2:
No, you can’t use symlinks in those subfolders to point somewhere else on the DeployStudio server, since it assumes the directory the symlink would point to is relative to the root of the netboot set instead. (No, really, it wouldn’t even work when using ln -s on the machine the repo’s hosted from instead of Finder’s aliases, which definitely don’t work in scripts.)
tip #3:
For persnickety folks like myself who MUST use a theme in Terminal and can’t stand not having option set as the meta key, you’re probably bummed that the Preferences menu item is greyed out and command-comma doesn’t seem to work. There is a way, though: from the Shell menu choose Show Inspector. Then from the settings tab, double-click the theme you prefer. The full settings window will appear, and you can have your modern conveniences again.
tip #4:
How does DeployStudio decide what is the first mounted volume, you may wonder? I invite(dare?) you to ‘bikeshed‘(find a more optimized way to accomplish a relatively trivial task) this particular piece of code:
system_profiler SPSerialATADataType | awk -F': ' '/Mount Point/ { print $2}' | head -n1
In the case of the restore script, hard-coding the DS_LAST_RESTORED_VOLUME variable (on line 44 in Rusty’s current Lion version), or changing the argument in the workflow to pass that path with the -v flag will remove the dependency on restoring an image before putting user folders back in place.

Two more tips before I go, which are both specific to the task I’m trying to accomplish. Ownership on files will not be preserved when moved to the repo with rsync, so you can create a sparse image or sparsebundle as a destination, and it will even retain ACLs (certain patched rsync binaries complain about smb_acl’s, like the one I used, which is bundled in DeployStudio’s Tools folder). As mentioned about /tmp in the NetBoot environment earlier, sparseimages should be created in a subfolder of the repo, or you could easily run into ‘out of space’ errors.
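
As an example of what that can look like end to end, here’s a rough sketch with made-up sizes and paths; note that the exact rsync flags for preserving extended attributes and ACLs differ between Apple’s 2.6.9 build and patched 3.x builds, so test with the binary you actually bundle:

# create a sparsebundle inside a subfolder of the mounted repo (not /tmp itself)
hdiutil create -size 50g -type SPARSEBUNDLE -fs "HFS+J" -volname UserBackups /tmp/DSNetworkRepository/Backups/UserBackups.sparsebundle
hdiutil attach /tmp/DSNetworkRepository/Backups/UserBackups.sparsebundle -mountpoint /Volumes/UserBackups
# copy a user folder in with the updated rsync from the admin folder mentioned earlier
/tmp/DSNetworkRepository/admin/rsync -av "/Volumes/Macintosh HD/Users/localuser/" /Volumes/UserBackups/localuser/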

When logging, you can use tee or simply redirect output, but more customized feedback in the actual log window in the Deploy Studio netboot runtime is helpful. There’s a “custom_logger” function used in some of the bundled scripts… which literally does nothing but echo $1 – pretty rudimentary. For output that doesn’t display when run as part of a script, you can redirect that output to /dev/stdout and have it shown instead of fooling around with echo or printf.
e.g. rsync -avz /source /destination >/dev/stdout
There may be a lag if verbose output happens in a short amount of time, as the main log file in the repo is being written to at the same time as what is printed onscreen.

Oh, and the silliest thing I’ve noticed: your script needs to be owned by root:wheel and 777 in the scripts folder of the repo in order to show up in the workflow interface for selection… It’s got its quirks, but it’s just about the best out there!

MacSysAdmin 2012 Slides and Videos are Live!

Thursday, September 20th, 2012

318 Inc. CTO Charles Edge and Solutions Architect alumni Zack Smith were back at the MacSysAdmin Conference in Sweden again this year, and the slides and videos are now available! All the 2012 presentations can be found here, and past years are at the bottom of this page.

Configuration Profiles: The Future is… Soon

Tuesday, September 4th, 2012

Let’s say, hypothetically, you’ve been in the Mac IT business for a couple revisions of the ol’ OS, and are familiar with centralized management from a directory service. While we’re having hypothetical conversations about our identity, let’s also suppose we’re not too keen on going the way of the dinosaur via planned obsolescence, so we embrace the fact that Configuration Profiles are the future. Well, I’m here to say that knowing how the former mechanism, Managed Preferences (née Managed Client for OS X, or MCX), interacted with your system from a directory service is still important.

 
Does the technique of nesting a faux directory service on each computer locally (ergo LocalMCX), and utilizing it to apply management to the entire system, still work in 10.8 Mountain Lion? Yes. Do applied Profiles show settings in the same directory Managed Preferences did? Yes… which can possibly cause conflicts. So while living in the age of Profiles is great in practice, now that things like 802.1x are no longer so hard to manage, there are pragmatic concerns as well. Not everyone upgraded to Lion the moment it was released, just over a year ago, so we’re wise to continue using MCX first wherever the population is significantly mixed.

 
And then there’s a show-stopper through which Apple opened up a Third-Party Opportunity (trademark Arek Dreyer): Profiles, when coming straight out of Mac OS X Server’s 10.7 or 10.8 Profile Manager service, can only apply management with the Always frequency. Just like the former IBM Thinkpads, you could have any color you wanted as long as it was black. No friendly defaults set by your administrator that you can change later; Profile settings, once applied, were basically frozen in carbonite.

 
So what party stepped in to address this plight? ##osx-server discussions were struck up by the always-generous, never-timid @gregneagle about the fact that Profiles can actually, although it’s undocumented, contain settings that enable Once or Often frequency. Certain preferences can ONLY be managed at the Often level, because they aren’t made to be manageable system-wide, like certain application (e.g. Office 2011) and Screen Saver preferences (since those live in the user’s ~/Library/ByHost folder).

 
The end result was Tim Sutton’s mcxToProfile script, hosted on GitHub, which works like a champ for both of the examples just listed. Note: this script utilizes a Profile’s Custom Settings section only, so for the things already supported by Profiles (like loginwindow) it’s certainly best to get onboard with what the $20 Server.app can already provide. But another big plus of the script is… you can use it without having Profile Manager set up anywhere on your network.
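
For illustration only (the flag names below are from the project’s README at the time and may have changed since), converting one of those ByHost screen saver plists into an Often-frequency profile looked roughly like this, with the hardware UUID portion of the filename standing in as a placeholder:

./mcxToProfile.py --plist ~/Library/Preferences/ByHost/com.apple.screensaver.UUID.plist --identifier ScreenSaverSettings --manage Often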

 
So there you go, consider using mcxToProfile to update your management, and give feedback to Tim on the twitters or the GitHubs!

Mountain Lion Now Available On The App Store

Wednesday, July 25th, 2012

Mountain Lion and Mountain Lion Server are now available on the OS X App Store. With a host of new features, notably including the ability to mirror desktops to Apple TV, Notification Center, Messages and other features, Mountain Lion is sure to be a boon to many an organization. But some features are also now missing, such as DHCP and Podcast Producer in OS X Server and of all things, negative mode.

Mountain Lion is one of the more incremental OS updates that Apple has released and so migrating from Lion to Mountain Lion is sure to be a breeze for many environments. But it’s still a major OS update and so should be taken carefully, as new features such as Gatekeeper are likely to give some environments fits. Before jumping into the App Store, be aware of all the various aspects of deployment. 318 can work with customers to plan and implement such migrations. Call your Professional Services Manager or contact sales@318.com for more information.

Evaluating the Tokens, or Order of Expansion in Bash

Monday, July 23rd, 2012

Previously in our series on commonly overlooked things in bash, we spoke about being specific with the binaries our script will call, and mentioned conventions to use after deciding what to variable-ize. The rigidity and lack of convenience afforded by bash starts to poke through when we’re trying to abstract re-usable inputs through making them into variables, and folks are commonly tripped up when trying to have everything come out as intended on the other side, when that line runs. You may already know to put quotes around just about every variable to catch the possibility of spaces messing things up, and we’re not even touching on complex ‘sanitization’ of things like non-roman alphabets and/or UTF-8 encoding. Knowing the ‘order of operation expansion’ that the interpreter will use when running our scripts is important. It’s not all drudgery, though, as we’ll uncover features available to bash that you may not have realized exist.

For instance, you may know curly braces can be used for grouping, but did you know there’s syntax to, for example, expand to multiple extensions for the same filename by putting them in curly braces, comma-separated? An interactive example (with set -x):
cp veryimportantconfigfile{,-backup}
+ cp veryimportantconfigfile veryimportantconfigfile-backup

That’s referred to as brace expansion (sometimes called filename brace expansion), and it’s the first in the order of the (roughly) six types of expansion the bash interpreter goes through when evaluating lines and ‘token-ized’ variables in a script.

Since you’re CLI-curious and go command line (trademark @thespider) all the time, you’re probably familiar not only with the fact that you can use tilde (~) as a shortcut to the current logged-in user’s home directory, but also that cd alone will assume you meant you wanted to go to that home directory. A user’s home gets a lot of traffic, and while the builtin $HOME variable is probably more reliable if you must include interaction with home directories in your script, tilde expansion (including any subdirectories tacked onto the end) is the next in our expansion order.

Now things get (however underwhelmingly) more interesting. Third in the hit parade, each with semi-equal weighting, are
a. the standard “variable=foo, echo $variable” style ‘variable expressions’ we all know and love,
b. backtick-extracted results of commands, which can also be achieved with $(command) (and, if worst came to worst, you could force another expansion of a variable with the eval command)
c. arithmetic expressions (like -gt for greater than, equal, less than, etc.) as we commonly use for comparison tests,
and an interesting set of features that are actually convenient (and mimic some uses of regular expressions), called (misleadingly)
$. dollar sign substitution. All of the different shorthand included under this category has been written about elsewhere in detail, but one in particular is an ad-hoc twist on a catchall that you could use via the ‘shell options’, or shopt command (originally created to expand on ‘set‘, which we mentioned in our earlier article when adding a debug option with ‘set -x‘). All of the options available with shopt are also a bit too numerous to cover now, but one that you’ll see particularly strict folks use is ‘nounset‘, to ensure that variables have always been defined if they’re going to be evaluated as the script runs. It’s only slightly confusing that a variable can have an empty string for a value, which would pass this check. Often, it’s the other way around, and we’ll have variables that are defined without being used; the thing we’d really like to look out for is when a variable is supposed to have a ‘real’ value, and the script could cause ill effects by running without one – so the question becomes how do we check for those important variables as they’re expanded?
A symbol used in bash that will come up later when we cover getopt is the colon, which refers to the existence of an argument, or the variable’s value (text or otherwise) that you’d be expecting to have set. Dollar sign substitution mimics this concept by allowing you to ad hoc check for empty (or ‘null’) variables by following a standard ‘$variable‘ with ‘:?’ (finished product: ${variable:?}) – in other words, it’s a test of whether the $variable expanded into a ‘real’ value, and it will exit the script at that point with an error if unset, like an ejector seat.
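
A tiny, hypothetical example of that ejector seat in action (the variable name is made up):

required_server="${1:?usage: pass the server address as the first argument}"
echo "Connecting to ${required_server}"   # never reached if $1 was empty or unset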

Moving on to the less heavy expansions, the next is… command lookups in the run environment’s PATH, which are evaluated like regular (Western) sentences, from left to right.
As it traipses along down a line running a command, it follows that command’s rules regarding whether it’s supposed to expect certain switches and arguments, and assumes those are split by some sort of separation (whitespace by default), referred to as the Internal Field Separator. The order of expansion continues with this ‘word splitting’.

And finally, there’s regular old pathname pattern matching – if you’re processing a file or folder in a directory, it finds the first instance that matches and evaluates that – pretty straightforward. You may notice we’re often linking to the Bash Guide for Beginners site, as hosted by The Linux Documentation Project. Beyond that resource, there are also videos from the 2011 (iTunesU link) and 2012 (YouTube) Penn State Mac Admins conferences on this topic if you need a refresher before we forge ahead for a few more posts.

Back to Basics with Bash

Tuesday, July 17th, 2012

The default shell for Macs has been bash for as long as some of us can remember(as long as we forget it was tcsh through 10.2.8… and before that… there was no shell, it was OS 9!) Bash as a scripting language doesn’t get the best reputation as it is certainly suboptimal and generally unoptimized for modern workflows. To get common things done you need to care about procedural tasks and things can become very ‘heavy’ very quickly. With more modern programming languages that have niceties like API’s and libraries, the catchphrase you’ll hear is you get loads of functionality ‘for free,’ but it’s good to know how far we can get, and why those object-oriented folks keep telling us we’re missing out. And, although most of us are using bash every time we open a shell(zsh users probably know all this stuff anyway) there are things a lot of us aren’t doing in scripts that could be better. Bash is not going away, and is plenty serviceable for ‘lighter’, one-off tasks, so over the course of a few posts we’ll touch on bash-related topics.

Something even a long-time scripter may easily overlook is how we might set variables more smartly and more often, making good decisions and being specific about what we choose to variable-ize. If the purpose of a script is to customize things in a way that’s reusable, making a variable out of that customization (say, for example, a hostname or notification email address) allows us to easily re-set that variable in the future. And in our line of work, if you do something once, it is highly probable you’ll do it again.

Something else you may have seen in certain scripts is the PATH variable being explicitly set or overridden, under the assumption that it may not be set in the environment the script runs in, or that the droids (binaries) we’re looking for will definitely be found once we set the path directories specifically. This is well-intentioned, but imprecise to put it one way, clunky to put it another. Setting a custom path, or having customized binaries that could end up interacting with our script, may cause unintended issues, so some paranoia should be exhibited. As scientists and troubleshooters, being as specific as possible always pays returns, so a guiding principle we should consider adopting is, instead of setting the path and assuming, to make a variable for each binary called as part of a script.

Now would probably be a good time to mention a few things that assist us when setting variables for binaries. Oh, and as conventions go, it helps to leave variable names that are set for binaries as lowercase, and all caps for the customizations we’re shoving in, which helps us visually see only our customized info in all caps as we debug/inspect the script and when we go in to update those variables for a new environment. /usr/bin/which tells us what is the path to the binary which is currently the first discovered in our path, for example ‘which which’ tells us we first found a version of ‘which’ in /usr/bin. Similarly, you may realize from its name what /usr/bin/whereis does. Man pages as a mini-topic is also discussed here. However, a more useful way to tell if you’re using the most efficient version of a binary is to check it with /usr/bin/type. If it’s a shell builtin, like echo, it may be faster than alternatives found at other paths, and you may not even find it necessary to make a variable for it, since there is little chance someone has decided to replace bash’s builtin ‘cd’…

The last practice we’ll try to spread the adoption of is using declare when setting variables. Again, while a lazy sysadmin is a good sysadmin, a precise one doesn’t have to worry about as many failures. A lack of portability across shells helped folks overlook it, but this is useful even if it is bash-specific. When you use declare and -r for read-only, you’re ensuring your variable doesn’t accidentally get overwritten later in the script. Just like the tool ‘set’ for shell settings, which is used to debug scripts when using the xtrace option for tracing how variables are expanded and executed, you can remove the type designation from variables with a +, e.g. set +x. Integers can be ensured by using -i (which frees us from using ‘let’ when we are just simply setting a number), arrays with -a, and when you need a variable to stick around for longer than the individual script it’s set in or the current environment, you can export the variable with -x. Alternately, if you must use the same exact variable name with a different value inside a nested script, you can set the variable as local so you don’t ‘cross the streams’. We hope this starts a conversation on proper bash-ing, look forward to more ‘back to basics’ posts like this one.
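
To put those recommendations in one place, here’s a small, hypothetical sketch of the conventions described above (every name is made up for illustration):

#!/bin/bash
declare -r NOTIFY_EMAIL="admin@example.com"      # read-only customization, all caps by convention
declare -i retry_count=3                         # integer-only, no 'let' needed
declare -a target_volumes=("/" "/Volumes/Data")  # array
declare -x DEPLOY_PHASE="testing"                # exported to child processes
declare -r rsync_bin="$(/usr/bin/which rsync)"   # lowercase for a binary's path

do_cleanup() {
  local scratch_dir="/tmp/scratch"               # local keeps it from leaking into the caller
  echo "cleaning ${scratch_dir} and notifying ${NOTIFY_EMAIL}"
}
do_cleanup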

Creating a binding script to join Windows 7 clients to Active Directory

Tuesday, July 3rd, 2012

There are a few different ways to join Windows 7 to a domain.  You can do it manually, use djoin.exe to do it offline, use PowerShell, or use netdom.exe.

  • Doing so manually can get cumbersome when you have a lot of different computers to do it on.
  • With Djoin.exe you will have to run it on a member computer already joined to the domain for EACH computer you want to join, since it will create a computer object in AD for each computer beforehand.
  • PowerShell is OK to use, but you have to set the execution policy to unrestricted beforehand on EACH computer.
  • Netdom is the way to go since you prep once for the domain, then run the script with Administrator privileges on whatever computers you want to join to the domain.  Netdom doesn’t come on most versions of Windows 7 by default.  There are two versions of netdom.exe, one for x86 and one for x64.  You can obtain netdom.exe by installing Remote Server Administration Tools (RSAT) for Windows 7, and then copying netdom.exe to a share.

A quick way to deal with both x86 and x64 architectures in the same domain would be to make two scripts.  One for x86 and one for x64 and have the appropriate netdom.exe in two different spots \\server\share\x86\ and \\server\share\x64\.

You’ll need to either grab netdom.exe from a version of Windows 7 that already has it, or install RSAT for either x64 or x86 Windows 7 from here: http://www.microsoft.com/en-us/download/details.aspx?id=7887, whichever you will be working with.  Install that on a staging computer.  The following steps are how to get netdom.exe from the RSAT installation.

  1. Download and install RSAT for either x64 or x86.
  2. Follow the help file that opens after install for enabling features.
  3. Enable the following feature: Remote Server Administration Tools > Role Administration Tools > AD DS and AD LDS Tools > AD DS Tools > AD DS Snap-ins and Command-Line Tools

netdom.exe will now be under C:\windows\system32

Create a share readable by everybody on the domain, and drop netdom.exe there.

Create a script with the following (From: http://social.technet.microsoft.com/Forums/en/ITCG/thread/6039153c-d7f1-4011-b9cd-a1f111d099aa):

@echo off
SET netdomPath=c:\windows\system32
SET domain=domain.net
CALL BATCH.BAT
:: BATCH.BAT sets %passwd% and %adminUser% (see below)
SET sourcePath=\\fileshare\folder\

::If necessary, copy netdom to the local machine
IF EXIST c:\windows\system32\netdom.exe goto join
COPY %sourcePath%netdom.exe %netdomPath%
COPY %sourcePath%dsquery.exe %netdomPath%
COPY %sourcePath%dsrm.exe %netdomPath%

:Join
::Join PC to the domain
NETDOM JOIN %computerName% /d:%domain% /UD:%adminUser% /PD:%passwd%

SHUTDOWN -r -t 0

Change domain and sourcePath to their real values.  Remove dsquery.exe and dsrm.exe if not needed.  If you’re just joining a domain, and not running anything after, then you don’t need them.

Create another script called “BATCH.BAT” that holds credentials with rights to join computers to the domain.  Put BATCH.BAT in both places that house your Join-To-Domain script (…/x86 and …/x64)

@echo off
SET passwd=thisismypassword
SET adminuser=thisismyadminusername

  1. Ensure you have the scripts in the same directory.
  2. Open up a command prompt with Administrator privileges and change directory to the location of your scripts.

Running the first script will:

  1. Run a check to see if netdom, dsquery, and dsrm are installed under system32; if they are, it will then join the domain, and if not it will attempt to copy them from your share.
  2. Once it ensures it has the files it needs, it will join the computer to the domain under the “Computers” OU with its current computer name using the credentials set by BATCH.BAT.
  3. It will reboot when done.

This will work on both Server 2003 and Server 2008.

Pass the Time With Easily-Accessible Conference Videos

Thursday, June 7th, 2012

It was the pleasure of two 318'ers to attend and present at the PSU Mac Admins Conference last month, and lickety-split, the videos have been made available! (Slides should appear shortly.) Not only can you pass the time away with on-demand streaming from YouTube, you can also download them for offline access (like on the plane) at the iTunesU channel. Enjoy the current most popular video!

Video On Setting Up Lion Server As A Software Update Server

Monday, May 14th, 2012

Video On Setting Up File Sharing In Lion Server

Friday, May 11th, 2012

Video on Setting Up Profile Manager in Lion Server

Wednesday, May 9th, 2012

Microsoft’s System Center Configuration Manager 2012

Sunday, March 18th, 2012

Microsoft has released the Beta 2 version of System Center Configuration Manager (SCCM), aka System Center 2012. SCCM is a powerful tool that Microsoft has been developing for over a decade. It started as an automation tool and has grown into a full-blown management tool that allows you to manage, update, and distribute software, licenses, policies and a plethora of other amazing features to users, workstations, servers, and devices including mobile devices and tablets. The new version has been simplified infrastructure-wise, without losing functionality compared to previous versions.

SCCM provides end-users with an easy-to-use web portal that allows them to choose the software they want, providing an instant response to install the application in a timely manner. For mobile devices, the management console has an Exchange connector and will support any device that can use the Exchange ActiveSync protocol. It will allow you to push policies and settings to your devices (i.e. encryption configurations, security settings, etc…). Windows Phone 7 features are also manageable through SCCM.

The Exchange component sits natively within Configuration Manager and does not have to interface with Exchange directly to be utilized. You can also define minimal rights so people can just install and/or configure what they need and nothing more. Bandwidth usage can be throttled to govern its impact on the local network.

SCCM will also interface with Unix and Linux devices, allowing multi-platform device management. At this point, many third-party tools such as the Casper Suite and Absolute Manage also plug into SCCM nicely. Overall this is a robust tool for the multi-platform networks that have become so common in today’s businesses.

Microsoft allows you to try the software at http://www.microsoft.com/en-us/server-cloud/system-center/default.aspx. For more information, contact your 318 Professional Services Manager or sales@318.com if you do not yet have one.

More fuel for the Simian fire – how does free sound?

Thursday, March 15th, 2012

Well we’ve been busy keeping our finger on the pulse of the Mac-managing open source community, and that genuine interest and participation continues to pay off. Earlier, we highlighted how inexpensive and mature the Simian project running on Google App Engine (GAE for short) is, although as of this writing refreshed documentation is still forthcoming. In that article we mentioned only one tool needs to be run on a Mac as part of maintaining packages posted to the service, and an attempt is being made to remove even the need for that. This new project was originally announced here, and has a growing number of collaborators. But that isn’t the biggest news about Managed Software Update (Munki) and Simian we have to announce today.

A technique that had been previously overlooked is now proven to be functional: it allows you to use Simian as the repository of all of your configurations, but serve the actual packages from an arbitrary URL. Theoretically, you can take the publicly available pkginfo files, modify them to point to a web server on your LAN (or even the vendor’s website directly, if you want them to be available from anywhere), and your GAE service would fall under the free utilization limits with very little maintenance effort. This is big for institutions with a tight budget and/or multiple locations that want to take advantage of the App Engine platform’s availability and Simian’s great interface. Beyond helping you save on bandwidth usage, this can also help control where your licensed software is stored.

Previously, people have wished they could adapt Google’s code to run on their local network with the TyphoonAE beta project; compared to the recommended and supported method of deploying the server component, this technique is a great middle ground that brings down a barrier for folks having difficulty forecasting costs.

It’s an exciting time, with many fully-featured offerings to consider.

Munki’s Missing Link, the Simian Server Component from Google

Tuesday, March 13th, 2012

At MacWorld 2011, Ed Marczak and Clay Caviness gave a presentation called A Week in the life of Google IT. It included quite the bombshell, that Google was open-sourcing its Managed Software Update (Munki) server component for use on the Google App Engine (or GAE). Some began immediately evaluating the solution, but Munki itself was still young, and the enterprise-intent of the tool made it hard for smaller environments to consider evaluating. Luckily, the developers at Google kept at it, and just like GAE graduated from beta and other Google products got a facelift, a new primate now stands in our midst (mist?): Simian 2.0!

With enhancements more than skin deep, this release ups the ante for competing ‘munkiweb’ admin components, with rich logs and text editor-less manifest generation. For every package you’d like to distribute, only one run of the Munki makepkginfo tool is required – the rest can be done with web forms. No more ritual running of makecatalogs, just click the snazzy buttons in the interface!

Unlike the similarly GAE-based Cauliflower Vest, Simian does not require a Google account for per-client secure transmission, which makes evaluation easier. While GAE has ‘billable‘ levels, the free version allows for 1GB of storage with 1GB of upload and… yup, 1GB of download. While GAE may not be quite as straightforward to calculate the cost of as other ‘Platform as a Service’ offerings, it is, to use a phrase, ‘dumb cheap’. The only time the server’s instance would cost you during billable operation is when admins are maintaining the stored packages, or when clients are actively checking in (by default once a day) and pulling packages down. As Google ‘dogfoods’ the product, they have reported $.75/client per YEAR in the way of GAE-related costs.

Getting started with Simian is not a walk in the park, however: you must wrap your brain around the concept of a certificate authority (or CA), understand why the configuration files are a certain way based on the Simian way of managing Munki, and then pay close attention as you deploy your customized server and clients. Planning your Simian deployment starts with either creating or reusing an existing certificate authority, which would be a great way to leverage Puppet if it’s already running in your environment. Your server just needs to have its private key and public certificate signed by the same authority as the clients to secure their communication. Small or proof-of-concept deployments can use this guide to step you through a quick Certificate Authority setup.

When it comes to the server configuration, it’s good to specify who will be granted admin access, in addition to the email contact info for your support team. The GAE instance requires a Google account for authentication, and it is recommended that access is restricted to users from a particular Google Apps domain (free or otherwise). One tripping point is when allowing domain access to the GAE instance, you need to go to a somewhat obscure location in your GoogleApps dashboard (linked from above where the current services are listed on the dashboard tab, as pictured):

Ready to take the plunge? Once configurations have been set in the three files specified in the wiki, and the certs you’ll use to identify and authenticate your server, CA, and a client are stowed in the appropriate directories, go ahead and send it up to the great App Engine in the sky.
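The upload itself is handled by the App Engine SDK; assuming the SDK is installed and you are in the directory containing your customized app.yaml, the deploy step is typically along these lines (check the Simian wiki for the exact invocation it recommends for your release):

# Push the customized application to your App Engine instance
appcfg.py update .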

See our follow-up article

Introduction to Centralized Configurations with Puppet

Thursday, March 8th, 2012

One of the hardest things for IT to tackle at large scale is workstation lifecycle management. Machines need to be deployed, maintained, and re-provisioned based on the needs of the business. Many of the solutions provided by vendors need to be driven by people, pulling levers and applying changes in realtime. Since Macs have a Unix foundation, they can take advantage of an automation tool used for Linux and other platforms, Puppet. It can be used to cut down on a lot of the manual interaction present in other systems, and is based on the concept that configuration should be expressed in readable text, which can then be checked into a version control system.
To quickly bootstrap a client-server setup, the Puppet Enterprise product is recommended, but we’ll be doing things in a scaled-down fashion for this post. We’ll use Macs, and it won’t matter what OS either the puppetmaster (server) or client is running, nor whether either is a virtual machine. First, install Facter, a complementary tool that collects specifications about your system, and then Puppet, from the PuppetLabs download site. Then, open Terminal and run this command to begin configuring the server, which adds the ‘puppet’ user and group:

sudo /usr/sbin/puppetmasterd --mkusers

Then, we’ll create a configuration file to specify a few default directories and the hostname of the server, so it can begin securing communication with the SSL certificates it will generate. I’m using computers’ Bonjour names throughout this example, but DNS and networking/firewalls should be configured as appropriate for production setups, among other optimizations.

sudo vim /etc/puppet/puppet.conf
#/etc/puppet/puppet.conf
[master]
vardir = /var/lib/puppet
libdir = $vardir/lib
ssldir = /etc/puppet/ssl
certname = mini.local

Before we move on, an artifact of the --mkusers command above is that the puppet process may have been started in the background. For us to apply the changes we’ve made and start over with the server in verbose mode, you can just kill the ruby process started by the puppet user, either in Activity Monitor or otherwise. Now, let’s move on to telling the server what we’d like to see passed down to each client, or ‘node’:
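A minimal way to do that from Terminal (the PID below is a placeholder; substitute whatever you find in the second column):

ps aux | grep [p]uppetmasterd   # note the PID of the ruby process owned by the puppet user
sudo kill 12345                 # replace 12345 with that PID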

sudo vim /etc/puppet/manifests/site.pp
# /etc/puppet/manifests/site.pp
import "classes/*"
import "nodes"
sudo vim /etc/puppet/manifests/nodes.pp
# /etc/puppet/manifests/nodes.pp
node '318admins-macbook-air.local' {
  include testing
}
sudo vim /etc/puppet/manifests/classes/testing.pp
# /etc/puppet/manifests/classes/testing.pp
class testing {
  exec { "Run Recon, Run":
    command => "/usr/sbin/jamf recon -username '318admin' -passhash 'GOBBLEDEGOOK' -sshUsername 'casperadmin' -sshPasshash 'GOOBLEDEBOK' -swu -skipFonts -skipPlugins",
  }
}

Here we’ve created three files as we customized them to serve a laptop with the Bonjour name 318admins-macbook-air.local. site.pp points the server to the configurations and clients it can manage, nodes.pp allows a specific client to receive a certain set of configurations (although you could use a ‘node default’ block to affect everyone, as sketched below), and the actual configuration we’d like to enforce is present in testing.pp.
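For reference, that catch-all form would look something like this in nodes.pp (company_wide is a hypothetical class name, standing in for whatever you want every machine to receive):

node default {
  include company_wide
}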

One last tweak and our server is ready:

sudo chown -R puppet:puppet /etc/puppet

and we actually run the server, with some extra feedback turned on, with this:

sudo puppet master --no-daemonize --onetime --verbose --debug

Now, we can move on to setting up our client. Besides installing the same packages (in the same order) as above, we need to add a few directories and one file before we’re ready to go:

sudo mkdir -p /var/lib/puppet/var
sudo mkdir /var/lib/puppet/ssl
sudo vim /etc/puppet/puppet.conf
# /etc/puppet/puppet.conf
[main]
server = mini.local
[agent]
vardir = /var/lib/puppet
ssldir = /var/lib/puppet/ssl
certname = 318admins-macbook-air.local

Then we’re ready to connect our client.

sudo puppet agent --no-daemonize --onetime --verbose --debug

You should see something like this on the server, “notice: 318admins-macbook-air.local has a waiting certificate request”. On the server we go ahead and sign it like this:

sudo puppet cert --sign 318admins-macbook-air.local

Running puppet agent again should result in a successful connection this time, with the configuration being passed down from the server for the client to apply.

This is just a small sample of how you can quickly start using Puppet, and we hope to share more of its benefits when integrated with other systems in the future.

Patch Management (and More) for Macs with Managed Software Update

Friday, March 2nd, 2012

When compared to Linux distributions, Mac OS X has lacked a standard, built-in package management system. Although network-based software management is still a possibility with OS X’s Unix foundation, in practice it is used in very few environments. The fact that developer tools are not included by default raises the barrier to entry for any system that purports to allow simplified installation of software, and much has been made of the Mac App Store filling the void for mere mortals.

Businesses, however, have engaged software companies to acquire volume licenses, which simplify asset tracking and deployment concerns. Employees expect certain tools to be available to them, and support personnel carefully monitor the workstations under their purview to proactively address security concerns and stability issues. The Mac App Store was not designed with these concerns in mind, and even projects like MacPorts and Homebrew lack the centralization that configuration and patch management systems provide.

Managed Software Update (MSU for short) is an application developed by Greg Neagle of Walt Disney Animation Studios to provide an end-user interface to a business’s centrally managed software repository. It relies upon a larger project called Munki (calling to mind helper monkeys) that requires little infrastructure to implement. Workstations can be managed at a company-wide, department, and individual level, with as much overlap as makes sense. And just as the thin or modular imaging methods utilize packages as their building blocks to modify an image’s configuration, MSU can enforce settings just as well as it can ensure security patches are installed in a timely fashion.
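As a rough illustration of that layering (the manifest and item names below are hypothetical), a department-level Munki manifest is just a property list that pulls in a company-wide manifest and adds or removes its own items:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>catalogs</key>
    <array>
        <string>production</string>
    </array>
    <key>included_manifests</key>
    <array>
        <string>company_wide</string>
    </array>
    <key>managed_installs</key>
    <array>
        <string>Firefox</string>
    </array>
    <key>managed_uninstalls</key>
    <array>
        <string>RetiredPlugin</string>
    </array>
</dict>
</plist>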

Among other benefits, MSU gives IT the power to uninstall software when it would be better provisioned elsewhere, provides a self-service interface to approved software, and takes away the number one source of friction for employees: “Why can’t I be an administrator so I can install my own software and updates, like I can at home?”

With Managed Software Update, businesses can now safely and efficiently address this concern.

Mac OS X 10.7.3 and 10.7.3 Server Now Available

Wednesday, February 1st, 2012

Mac OS X 10.7.3 and Mac OS X Server 10.7.3 are now available for download through software update:


The update comes with fixes for language support, smart cards, ServerBackup, Profile Manager, opendirectoryd/directory services, file sharing, and a number of other aspects of the OS. Some specific improvements include disconnecting specific users with Server.app, more ACL information in Server.app, setting login greetings, etc.

The client update and available information is available at OS X Lion Update 10.7.3 (Client)

The client combo update and available information is available at OS X Lion Update 10.7.3 (Client Combo)

The server update is available at OS X Lion Update 10.7.3 (Server)

The server combo update is available at OS X Lion Update 10.7.3 (Server) Combo

The Server Admin Tools are available at Server Admin Tools 10.7.3

Also, ARD has been revved up to 3.5.2. It is available at Apple Remote Desktop 3.5.2 Client

Also, of note, AirPort Utility also got an update yesterday. It is available at AirPort Utility 6.0 for Mac OS X Lion

Building a Mac and iOS App Store Software Update Service

Wednesday, November 9th, 2011

Let’s say you run a network with a large number of Mac OS X or iOS (or, more likely, both) devices. Software Update and the two App Stores (Mac App Store and iOS App Store) make keeping all those devices up-to-date a pretty straightforward process. They are a huge improvement compared with the rather old-fashioned practice of looking through applications, visiting the web site for each one and manually downloading updated versions. When updating two or more similar machines, of course, one only needed to download the updated version once, then copy it to each of the other machines. Better, but still a process that requires a lot of work when performed across a lot of machines.

However, even though the App Store and Software Update Server in Mac OS X Server make things easier, there’s no simple way to download things once and distribute the downloaded files to multiple machines for items purchased on the App Store. When large updates come out (such as a new version of iOS), you’re essentially downloading huge amounts of data to each and every machine, and if machines are set to automatically download updates, you could even have a large number of them downloading simultaneously.

Of course you can run your own Software Update service in Mac OS X Server, but this requires that every client machine be configured to use the local server. This works well for machines under your control, but for all those people who bring in their own laptops this doesn’t help.

What’s worse is that there’s currently no way whatsoever to run a Software Update-like service for App Store purchases. Imagine if you have a lab of dozens or hundreds of Macs with Final Cut Pro X, or iPads (or iPhones, iPod Touches, or whatever comes out next) with iMovie. Any time there’s an update you’re potentially downloading over a gigabyte per machine in the case of Final Cut Pro X, or 70 megabytes or so in the case of iMovie. That can easily add up to a tremendous amount of traffic and the congestion, complaints and headaches which go with it.

What’s needed is an easy way to cache App Store downloads. While we’re at it, it would also be nice to transparently have machines use our own Software Update server. Let’s be even a little more ambitious and do this without needing Mac OS X Server. Aw, heck – let’s make it work on any reasonably Unix-like OS.

So how do we do this? The App Stores and Software Update services use HTTP for fetching files. So what we need to do is capture those HTTP requests and either redirect them to a local store of Software Update files or serve locally cached App Store files.

Just as an aside, it’d be tremendously difficult to create a local store of App Store files if for no other reason than the fact that there are currently more than half a million applications. Add to this the rate at which updates become available and your machine would probably never be finished attempting to download all of the applications! Considering this, we’re looking at running Apache and squid on our Unix-like machine and doing a little redirection magic on whatever device does NAT or routes for us.

Note: There’s no reason that the same machine can’t do both NAT/routing and Apache/squid, although in most environments we are assuming that the machine would simply be a proxy for Mac or iOS-based devices. To make this example end-to-end though, we’ll run the router on the host.

Our example uses a Mac OS X (non-Server) machine running Leopard which is doing both NAT and running our Apache and squid software. We’re simply using the Internet Sharing service, the public network interface is en0 (which we don’t use anywhere) and the interface which will serve our iOS and Apple clients is en1 and has the address 10.0.2.1.

Everyone has their own favorite way of installing software on Unix-like OSes, and a discussion about which is best and why would certainly be outside the scope of this article. In these examples we’re using NetBSD’s pkgsrc for no other reason than the fact that it will compile packages from source with a base directory which is easily configurable (feel free to use ports or some other automated tool according to what platform you are using). Get pkgsrc (usually via cvs; we’ll assume it’s put into /usr), which can be as simple as:

cd /usr ; export CVSROOT=:pserver:anoncvs@anoncvs.netbsd.org:/cvsroot ; cvs checkout -P pkgsrc

And then run /usr/pkgsrc/bootstrap/bootstrap like so:

cd /usr/pkgsrc/bootstrap/
./bootstrap --prefix /usr/local --pkgdbdir /usr/local/var/db/pkg --sysconfdir /usr/local/etc --varbase /usr/local/var --ignore-case-check

This puts all files into /usr/local including logs and configuration files, so keeping your system clean is simple and keeping track of the differences between built-in and pkgsrc software is easy. Next, install pkgsrc’s www/squid and www/apache (and net/wget if your Unix doesn’t already have it):

cd /usr/pkgsrc/www/squid
bmake update
cd /usr/pkgsrc/www/apache22
bmake update
cd /usr/pkgsrc/net/wget
bmake update

Note that on systems like Mac OS X, which come with GNU make by default, pkgsrc uses bmake; if you have BSD make already, just use make. Another note: /usr/local/sbin is not in Mac OS X’s path by default, so add /usr/local/sbin to /etc/paths if you’re going to use it.
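One way to do that, assuming you want it appended system-wide for path_helper to pick up:

sudo sh -c 'echo /usr/local/sbin >> /etc/paths'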

Now that the software is installed in consistent locations we can configure it. The squid.conf file only needs one line to be changed; everything else is added. Find the line which says:

http_port 3128

And change it to:

http_port 3128 intercept

Then add the following lines:

maximum_object_size_in_memory 4096 KB
cache_replacement_policy heap LFUDA
cache_dir ufs /usr/local/var/squid/cache 16384 16 256
maximum_object_size 2097152 KB
refresh_pattern -i .ipa$ 360 90% 10800 override-expire ignore-no-cache ignore-no-store ignore-private ignore-reload ignore-must-revalidate
refresh_pattern -i .pkg$ 360 90% 10080 override-expire ignore-no-cache ignore-no-store ignore-private ignore-reload ignore-must-revalidate
acl no_cache_local dstdomain 10.0.2.1
cache deny no_cache_local
redirect_program /usr/local/bin/rewrite.pl

These settings are chosen to cache large files up to 2 gigabytes in size in a 16 GB cache on disk and to ignore cache directives with regard to .pkg and .ipa files. Adjust to your own liking. Of course, replace 10.0.2.1 with the private IP of your machine. The cache deny with that address is used to make sure that redirected Software Update files are not cached in squid, which would just take up room better used for App Store files.
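One easy-to-miss step: before squid is started for the first time, it needs to build the cache directory structure defined by cache_dir (the path below assumes the pkgsrc prefix used in this article):

sudo /usr/local/sbin/squid -z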

The URL rewriting script (create /usr/local/bin/rewrite.pl) just changes Apple Software Update URLs to point to our server:

#!/usr/bin/env perl
$|=1;
while (<>) {
s@http://swscan.apple.com@http://10.0.2.1/swscan.apple.com@;
s@http://swcdn.apple.com@http://10.0.2.1/swcdn.apple.com@;
s@http://swquery.apple.com@http://10.0.2.1/swquery.apple.com@;
print;
}
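Squid runs the redirect program itself, so the script needs to be executable, and it’s worth a quick sanity check before wiring everything together (the catalog URL is just an example; the output should come back pointing at 10.0.2.1):

sudo chmod +x /usr/local/bin/rewrite.pl
echo "http://swscan.apple.com/content/catalogs/others/index-lion-snowleopard-leopard.merged-1.sucatalog" | /usr/local/bin/rewrite.pl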

Next we configure Apache. The location you choose for the Software Update files can be anywhere (in our example, they’re on a FireWire-attached drive mounted at /Volumes/sw_updates/), which needs to be allowed in the Apache configuration.

Add to /usr/local/etc/httpd/httpd.conf:

<Directory "/Volumes/sw_updates/">
Options Indexes FollowSymLinks
AllowOverride None
Order allow,deny
Allow from all
</Directory>
<VirtualHost *:80>
ServerAdmin hostmaster@318.com
DocumentRoot "/Volumes/sw_updates"
ErrorLog "/usr/local/var/log/httpd/swupdate_error_log"
CustomLog "/usr/local/var/log/httpd/swupdate_access_log" common
</VirtualHost>

The log lines are purely optional. If you don’t add them, logs will still be written at /usr/local/var/log/httpd/access_log and error_log.

Next, we configure ipfw (in the case of Mac OS X or FreeBSD) to redirect all port 80 traffic transparently to our squid instance. If you’re using a different device for NAT/routing or different firewalling software such as ipfilter, see the examples listed below.

ipfw add 333 fwd 10.0.2.1,3128 tcp from any to any 80 recv en1

Note that on Snow Leopard and Lion you’ll need to make this change, too:

sysctl -w net.inet.ip.scopedroute=0
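That sysctl change does not survive a reboot on its own; if this machine will run the service long-term, it can usually be made persistent via /etc/sysctl.conf (assuming your OS X release reads that file at startup):

echo "net.inet.ip.scopedroute=0" | sudo tee -a /etc/sysctl.conf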

An ipfilter rule for the same redirection as the ipfw rule above would look like this (ipfilter is found on some BSDs and Solaris; on Linux the equivalent would be an iptables REDIRECT rule):

rdr en1 0.0.0.0/0 port 80 -> 10.0.2.1 port 3128 tcp

Again, the local private IP is 10.0.2.1 and the local private interface is en1; substitute your IP and interface.

Finally, we need to mirror all Apple Software Updates. A simple shell script can do this. Save this file somewhere (named mirror_swupdate.sh, for instance) and run it from cron now and then, perhaps once a night:

#!/bin/sh

location=$1 # This is the root of our Software Update tree
mkdir -p "$location"
cd "$location"

for index in index-leopard-snowleopard.merged-1.sucatalog index-leopard.merged-1.sucatalog index-lion-snowleopard-leopard.merged-1.sucatalog
do
  wget --mirror http://swscan.apple.com/content/catalogs/others/$index

  for swfile in `cat swscan.apple.com/content/catalogs/others/$index | grep "http://" | awk -F">" '{ print $2 }' | awk -F"<" '{ print $1 }'`
  do
    echo $swfile
    wget --mirror "$swfile"
  done
done

Invoke this with the top of the tree of your Software Update files as you’ve used in the Apache config, like so:

./mirror_swupdate.sh /Volumes/sw_updates

Expect this to run for a long time the first time you run it, because you’ll be downloading around 60 gigabytes of updates. Every time it runs afterwards, though, files won’t be downloaded again unless they change (which they won’t; new updates will show up as new files).
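A sketch of the nightly cron entry (the script location is illustrative; point it at wherever you saved mirror_swupdate.sh):

# Mirror Apple Software Updates every night at 2:30 AM
30 2 * * * /usr/local/bin/mirror_swupdate.sh /Volumes/sw_updates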

Start squid and Apache, then tail your Apache log and run Software Update to test:

/usr/local/share/examples/rc.d/apache start
/usr/local/share/examples/rc.d/squid start
tail -f /usr/local/var/log/httpd/swupdate_access_log

At this point, you can redirect your software updates to the host. Updates for both the Mac App Store and iOS are also now cached. In the next article we’ll look at using some squid extensions to enable you to block applications from the App Stores or block updates in the event that an update is problematic.

Deploying Font Servers

Friday, October 21st, 2011

Mac OS X has come with the ability to activate and deactivate fonts on the fly since 10.5, via Font Book. Font Book allows a single user to manage their fonts easily. But many will find that managing fonts on a per-computer basis ends up not being enough. Which raises the question: who needs a font server? A very simplistic answer is any organization with more than 5 users working in a collaborative environment. This could include creative print shops, editorial, motion graphics, advertising agencies and other creative environments. But corporate environments where font licensing and compliance are important are also great candidates.

Lack of font management is a cost center for many organizations. There is a loss of productivity every time a user has to manually add fonts when opening co-workers’ documents, or the cost of a job going out with the wrong version of a font. Some of the other benefits of font servers are separate font sets for different workgroups and the ability to isolate corrupt fonts when cleaning up large font libraries, along with quick searching and identification of fonts.

Font Management and Best Practices

Anyone who uses fonts in their daily workflow needs font management. This could be a standalone product such as Suitcase Fusion or Font Agent Pro. But larger environments invariably need to collaborate and share fonts between users, meaning many environments need font servers. Two such products include Extensis Universal Type Server and Font Agent Pro Server. But before adding font management products, users should clean up any fonts loaded or installed prior to moving to a managed font environment. Places to look for fonts when cleaning them up include the following:

  • ~/Library/Fonts
  • /Library/Fonts
  • /System/Library/Fonts

Leave any necessary system, Microsoft Web Core, and required Adobe fonts in place.

The best resource for this process is the Extensis Font Best Practices in OS X guide (v.7), which can be found at: http://www.extensis.com/en/downloads/document_download.jsp?docId=5600039

Types of Font Server Products Available

There are two major font server publishers: Extensis and Insider Software. Both have workgroup and enterprise products. All server products from both publishers work on a client/server model. Both can sync entire font sets or serve fonts on demand. The break point for the Extensis Universal Type Server line is at 10 clients: Universal Type Server Lite is a 10-client product which lacks Enterprise features, such as the ability to use a SQL database or integrate with Open Directory or Active Directory. The full Universal Type Server Professional adds directory integration, external database use, and font compliance features, and is sold as a 10-user license with additional per-seat licenses.

Insider Software offers two levels of font servers. The first is FontAgent Pro Team Server, designed for small workgroups and sold in a 5- or 10-client configuration. The next level of product is FontAgent Pro Enterprise Server. This adds the same directory services integration as Universal Type Server Professional. This product also has Kerberos single sign-on, server replication and failover. It uses the same per-seat pricing structure as Universal Type Server Professional.

A third tool is also available in Monotype Font Explorer, at http://www.fontexplorerx.com, which we will look at later in this article.

Pre-Deployment Strategies and Projects

Before any font server deployment, there are a few things to take into consideration. First is the number of clients. This will guide you to which product will be appropriate for installation. Also note whether directory integration and compliance are needed, and whether failover or a robust database is important. The most important part of any font server installation is the fonts. How many are there, where are they coming from, and are separate workgroups needed? Are all your fonts legal? In my experience, probably not. Is legal compliance required for your organization or your clients? What is the preferred font type, PostScript Type 1 or OpenType? What version are the fonts? Most fonts have been “acquired” over time, with some PostScript fonts dating back to the early-to-mid nineties. As a font server is just a database, the axiom “garbage in, garbage out” is true here as well. This should lead to a pre-deployment font library consolidation and cleanup. This can either be done by 318, or we can train you to perform this task. If compliance is an issue, this is where we would weed out unlicensed fonts, which in my experience is about 90% of all fonts. A clean, organized font set is the most important part of pre-deployment.

A major part of any font server rollout should be compliance and licensing. This allows for the tracking and reporting of font licenses and helps make sure the organization stays in licensing compliance.

Extensis

Universal Type Server includes the ability to generate and export reports to help you determine if you are complying with your font licenses. The font compliance feature only allows you to track your licensing compliance and does not restrict access to noncompliant fonts. To help you understand how font licensing compliance works, let’s look at the following typical example of how to use licenses and the font compliance report in your environment.

Say you are starting up your own design shop and need a good group of licensed fonts for your designers to create projects that will bring you fame and fortune. You know that fonts are valuable, and you want to be sure that you have purchased enough licenses for your requirements. So, you purchase a 10-user license of a sizable font library. Using the Universal Type Client, these fonts are added to a Type Server workgroup as a set. A font license is then created and the Number of Seats field is set to 10. This license is then applied to all fonts in the set.

When you run the font compliance report, Universal Type Server compares the number of seats allowed to the total number of unique users who have access to the workgroup. If more users have access than licenses available, the fonts are listed as “non-compliant.” You can now either remove users from the workgroup or purchase more font licenses to become compliant.

Universal Type Server is unique among these products in that it uses a checksum process to catalog fonts; others just use file names and paths.

Universal Type Server can limit users to downloading only fonts installed by administrators. For initial deployment, each user does not need to download all of the fonts, which helps in environments where you have a lot of fonts (e.g. more than 5 GB) that need to be distributed to several hundred clients; if each user had to download all of the fonts (e.g. each time they get imaged), you could lose a production system for some time.

Universal Type Server Deployment

Universal Type Server system requirements include the following:

Macintosh Server

•          Mac OS X v10.5.7 or 10.6, or Mac OS X Server 10.5 or 10.6
•          1.6 GHz or faster 32-bit (x86) or 64-bit (x64) processor (PowerPC is not supported)
•          1 GB available RAM
•          250 MB of hard disk space + space for fonts
•          Safari 3.0 or Firefox 3.0 or higher*
•          Adobe Flash Player 10 or higher*

Windows Server

•          Windows XP SP3 (32-bit only), Server 2003 SP2, Server 2008 SP2 (32 or 64-bit version**)
•          P4 or faster processor***
•          1 GB available RAM
•          250 MB of hard disk space + space for fonts
•          Internet Explorer 7 or Firefox 3.0 or higher*
•          Adobe Flash Player 10 or higher*
•          Adobe Reader 7 to read PDF documentation*
•          Microsoft .NET 3.5 or higher

Universal Type Server Installation Process:

1.         Verify server system requirements
2.         Run the installer on the target server machine
3.         Login to the Server Administration web interface
4.         Serialize the server
5.         Set the Bonjour Name
6.         Resolve any port conflicts
7.         Set any desired server configuration options, including backup schedule, log file configuration, secure connection options, and any other necessary server settings.
8.         After installing the server, configure workgroups, roles and add users.

The basic user and workgroup configuration steps include:

1.   Plan your configuration
2.   Create workgroups
3.   Create new users
4.   Add users to workgroups
5.   Assign workgroup roles to users
6.   Modify user settings as required

Optional Setup:

  1. Managing System Fonts with System Font Policy. The System Font Policy feature allows Universal Type Server administrators to create a list of system fonts that are allowed in a user’s system font folder.
  2. Font Compliance Reporting. The font compliance feature only allows you to track your licensing compliance and does not restrict access to noncompliant fonts.
  3. Directory Integration. Directory integration allows network administrators to automatically synchronize users from an LDAP service (Active Directory on Windows or Open Directory on Mac OS X) with Universal Type Server workgroups.

* UTS Documentation:

http://tinyurl.com/4xgn9rr

Both Universal Type Server Professional and Font Agent Pro Enterprise can be configured for Open Directory, Active Directory, and LDAP integration. Both can also utilize Kerberos single sign-on. Universal Type Server Professional directory integration instructions can be found in the UTS 2 Users and Workgroups Administration Guide at http://tinyurl.com/4xgn9rr. Some users have reported issues connecting to Open Directory (which happens with all products, not just this one).

Universal Type Server’s administrative functions run in Flash, which many do not like.

Monotype Font Explorer

Monotype Font Explorer is a third tool that can be used to manage fonts, available at http://www.fontexplorerx.com. There are some things that some environments do not like about Universal Type Server or Font Agent Pro; let’s face it, the reason there are multiple products and multiple workflows is that some work better for some environments and others work better for other environments and workflows. For example, Font Agent Pro stores master fonts on one client machine, which is then synchronized to the server, and from there to the rest of the clients; not everyone wants a client system acting as a master to the server. Font Explorer keeps the master on the server, groups and synchronization work well, and the administration is in the same window as font management. And best of all, Font Explorer is also typically cheaper than its server-based competitors in the font management space.

Extensis publishes a guide as to which fonts to include in the system and which to handle in the font management software. According to Apple documentation, fonts in my ~/Library/Fonts folder take precedence over fonts in /Library/Fonts, which again take precedence over /System/Library/Fonts. That means that if I install Times in my ~/Library/Fonts folder, it will be used instead of the font with the same name in /Library/Fonts or in /System/Library/Fonts. So why should I care which font is installed where, if the font management application should simply take precedence over the others? If it does not take precedence, then where in the chain is it actually activating fonts? Maybe fonts are handled in these solutions in parallel with the system mechanism? That’s the only explanation I can find, but is it then only valid for UTS, or is it also valid for the other solutions?

End User Training and Font Czar

No font server installation would be complete without end user training and the appointment of a Font Czar. User training can be a fairly easy endeavor if client systems are using the same publisher’s stand-alone font client. Other times it could entail discussing licensing and compliance concepts along with adding metadata to fonts. An onsite Font Czar (or more than one) is very important to font server installations. The Font Czar cleans up and ingests new fonts, adds new users to the font server, and in general acts as the font admin. This is usually a senior designer or technical point of contact for the creative environment.

Conclusion

Font Book is adequate for most users who don’t need a server. Universal Type Server, Font Agent Pro and FontExplorer are all great products if you need a font server. They are all installed centrally and allow end users to administer fonts based on the server configuration and group memberships. They all work with directory services (some better than others) and can be mass deployed. In big workgroups or enterprises, where only a few people are handling the administration of fonts for a lot of people, a centralized font management solution is a must. But in much smaller organizations, it requires care and feeding, which represents a soft cost that often rivals the cost of purchasing the solution.

Finally, test all of the tools available. Each exists for a reason. Find the one that works with the workflow of your environment before purchasing and installing anything.

Note: Thanks to Søren Theilgaard of Humac for some of the FontExplorer text!

Performing a CrashPlan PROe Server Installation

Wednesday, April 13th, 2011

This is a checklist for installing CrashPlan PROe Server.

Prepare your deployment:  Before you install the server software you should have the following ready:

  1. A static IP address. If this is a shared server, whenever possible, CrashPlan should have a dedicated network interface.
  2. (Recommended) A fully qualified host name in DNS. IP addresses will work, but for ease of management internally (and even more importantly, externally), working DNS pointing to the service is best.
  3. Firewall port forwards for network connections. Ports 4280 and 4282 are needed for client-server communication, and to send software updates. 4285 is also needed if you wish to manage the server via HTTPS from the WAN.
  4. There should be a dedicated storage volume (preferably with a secure level of RAID) for backup data.
  5. Although a second server install (as server/destination licenses are free) is best for near-full redundancy, secondary destination volumes can be configured on external drives for offsite backup.
  6. LDAP connection. If you will be reading user account information from an LDAP server, make sure you  have the credentials and server information to access it from the CrashPlan Server install.
  7. If you’d like multiple locations to backup to local servers, ensure that your first master is installed in the most ideal environment for your anticipated usage. This is referred to as the Master server, which requires higher uptime and accessibility, as all licensing/user additions and removals rely upon it.

Installation

  1.  Go to https://www.crashplan.com/enterprise/download.html
  2. If you have not purchased CrashPlan licenses through a reseller, you can fill out the web form to be issued a trial master license key. Otherwise, check the “I already have a master key” checkbox to be presented with the downloads.
  3. Download the CrashPlan PROe server installer (the client software is located further down on the page.)  Choose the appropriate installer for your server (Mac, Windows, Linux, or Solaris.)
  4. Run the installer. When the installation completes you will be asked to enter the master key in order to activate the software.  If you don’t have it at that time, you can enter it later via the web interface.

Configuration

  1. Initial Setup. On the server, from a web browser, connect to http://127.0.0.1:4285. This is the web interface of the CrashPlan PROe Server. If you did not enter the master key during installation, you will be prompted to enter it here.
  2. Log into the server using the default admin user credentials provided on the screen.  Immediately change the username and password for the ‘Superuser’ by going to Settings Tab > Edit Server Settings in the sidebar > then Superuser in the sidebar. Just as with Directory Administrator user names, customizing the user name is also recommended.
  3. Assign networking information. Click on the Settings tab > Edit Server Settings > Network Addresses. You will see fields in which to enter the Primary and Secondary network addresses or DNS name(s). This information determines how clients attempt to connect to the server, so for ease of management, using an IP address for the primary and DNS for the secondary may make the most sense. Changes to the server’s address would then propagate immediately for clients instead of waiting for DNS, although TTL preparation would help. Another consideration is where the majority of the clients will be accessing the server from.
  4. Assign the default storage volume: By default, CrashPlan PROe will assign a directory on the boot volume as the storage volume. Navigate to the Settings tab > Add Storage. You will be presented with a page that has links to Add Custom Directory, Add Pro Server, or Unused Volumes. If the data volume is attached to the file system with a UNC path it will be listed as an Unused Volume. Select the new storage volume, optionally with a subdirectory. Finally, to make this new volume the default storage volume for new clients, navigate to the Settings tab > Edit Server Settings; the third line has a drop-down menu for Mount Point for New Computers. You can then remove the default storage location on the boot volume.
  5. Create Organizations. At installation time there will be one default organization. All new users created will be added to this group. You can create an arbitrary number of organizations and sub-organizations if you believe client settings should be propagated differently for certain departments. At least one sub-organization can be helpful in complex environments, especially with Slave servers. Each division can have managers assigned for managing, alerting, and/or reporting purposes, as well.
  6. Create User Accounts. Users can be created manually in the web interface, during the deployment of the client software, or through LDAP lookups.
  7. Set Client Backup Defaults. If you’d like to restrict certain files or locations from the clients’ backups, you may do so from the Settings tab > Edit Client Settings. By default, nothing is excluded, but only the user’s home folder is included. It may be useful to restrict file types that the company is not concerned about, or modify the time period for keeping old versions. If storage space is a concern and customers are including very large files in the backup, you may want to purge deleted files on an accelerated schedule (the default is never). Allowing reports to be sent to each individual customer can also be enabled, or settings may optionally be locked down to read-only. In particular, especially if multiple computers share the same account, forcing the entry of a password to open the desktop interface may be useful to turn on and lock so it cannot be changed. These changes can be propagated for the entire Master server, the organization, or an individual client/user installation.
  8. Install CrashPlan PROe on a test machine for final testing. The installation of a client will require the Registration key that is generated for the organization that the user should be ‘filed’ into, the Master server’s network information, the creation of a username (usually the customer’s email address, or the function that computer performs), and a password. Once complete, the client will register with the server and begin backing up the home folder of the currently logged-in customer (by default).

Disabling Spanning Tree on Cisco Switches

Monday, February 21st, 2011

Spanning Tree Protocol has always been a problem with Mac OS X Server. This goes back to the early days when OS’s whacked each other over the head with rocks to go from Alpha to Beta. This usually manifests itself in weird speed and connectivity issues. You can mitigate by changing timing values, but when testing, it is often easiest to start by disabling Spanning Tree Protocol, seeing if the problems you have go away and then working from there.

By default, Spanning Tree is enabled on all Cisco switches. In this article we’ll look at disabling Spanning Tree Protocol. It is important to keep in mind, though, that once it is disabled, creating an additional VLAN automatically runs another instance of Spanning Tree Protocol, so you may need to repeat this process in the future.

First backup the device. Then, ssh into the device:

ssh admin@64.32.49.172

You should be prompted for a username and password at this time if using telnet; if you are using SSH you should only be prompted for the password. Once connected to the device you will need to go into enable mode by typing en at the command prompt and hitting enter:

en

It may prompt you for a password, which you will need to know. Once complete, you will notice that the prompt turns from a > to a # symbol. Now that you have administrative access, you will need to go into global configuration mode using the config t command:

config t

Now let’s actually disable spanning tree. Enter the no verb, followed by spanning-tree (the protocol we’re disabling), followed by vlan and the VLAN identifier:

no spanning-tree vlan vlan-id
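For example, to disable it on VLAN 10 (the VLAN ID here is purely illustrative):

no spanning-tree vlan 10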

Repeat for each VLAN if you need to do this on multiple. When done, exit config mode by entering the end command:

end

You can then enter the show command along with the spanning-tree option to see if there are any remaining spanning tree instances still active and verify whether your command took:

show spanning-tree

If the command took and spanning tree is no longer enabled, run the copy command, followed by running-config and then startup-config, which copies your running configuration to your startup configuration, making your change permanent:

copy running-config startup-config

It is then usually recommended to go ahead and reboot servers and clients prior to testing.