Special Considerations for ‘Supervised’ iPads

July 18th, 2013 by Allister Banks

Apple Configurator brings two key features. First, when Supervision is applied, you can pull back paid App ‘codes’ redeemed to a device when you decide the App should be used on another iPad, or simply remove it so it doesn’t distract the current user. Second, when you move to the ‘Assign’ stage, an iPad can become a multi-user device, which helps firewall each user’s data from the others and facilitates handing out documents to Apps that support document transfer within Configurator. These are somewhat specialized use cases, so many people use Configurator for basic setup functionality that can be found in other tools, just in a less-optimized workflow than Configurator provides.

Supervised iPads get people most of the way toward their common goals, so they sometimes find that Assignment isn’t necessary. Perhaps they use Google Apps with Drive, or deploy a webclip that points folks to a site where they fill out forms as a way of working on documents or collaborating. Even so, two events are inevitable given the random bumps and mishaps that may befall an iPad deployment:
1. What if an iPad is dropped and the screen gets cracked?
2. What if the iPad becomes non-responsive due to a bath in water, or worse, gets lost/forgotten/stolen?

For the first issue, a damaged iPad would lose all of the paid App codes redeemed on it when wiped during repair, so it should be connected to Configurator and unsupervised before being sent out. Do so by highlighting the supervised iPad and choosing ‘Unsupervise’ from the Device menu, as shown:
Unsupervise from Device menu
To reclaim the inventory number used by a lost or otherwise non-contactable iPad, remove it by following the same process, but hold down the Option key while the unrecoverable iPad is highlighted and choose Remove from the Device menu. Unfortunately, new App codes would need to be purchased to replace the Apps on the unretrievable iPad.


Troubleshoot network port connectivity in SonicWall devices

June 12th, 2013 by William Smith

Few things are as aggravating to a technician or customer as two vendors blaming each other for a problem.

I recently ran into this when I was unable to establish communication from the Internet to a client’s internal server through a SonicWall TZ 100 firewall device. The customer’s server would accept connections internally but not externally. The firewall had the necessary ports open to allow communication but the application just couldn’t reach the server.

Other applications worked just fine—just this one was failing… somewhere. Two of my co-workers verified my settings and couldn’t find any problems. The problem lay either with the ISP blocking the port or a malfunction with the SonicWall.

When I reported the problem to the ISP, the technician quickly said, “We don’t block any ports.” He checked a few items and reaffirmed nothing on their end was causing our problem. OK, so that left the SonicWall device. It was under a maintenance contract, so I called for technical support.

The SonicWall technician remote controlled my computer to view my setup. He likewise found nothing wrong and said the problem was the ISP blocking the port. I replied that the ISP said nothing was blocked.

The technician proceeded to prove the SonicWall was working correctly, which made my day!

Packet Monitor

SonicWall devices have a Packet Monitor feature that works independently of any configured settings. It can capture incoming traffic as it enters the firewall before routing it to the local network. It can also filter for traffic on specific ports making the results easier to examine.

Assume the port I need to verify is 445, which is Windows file sharing (CIFS/SMB). This is commonly blocked for security reasons. Assume its Mac counterpart, port 548 (Apple Filing Protocol), is working correctly, and that a completely random port such as 54321 is not configured at all. This random port will be the “control” in my testing.

  1. Log in to the SonicWall device and select System –> Packet Monitor in the lefthand navigation pane.

    Packet Monitor menu
  2. In the right pane click the Configure button.
    Packet Monitor Configure
  3. Under the Settings tab of the Packet Monitor Configuration window enable all options under the Exclude Filter section. This prevents management traffic from contaminating the results.
    Configure Settings
  4. Under the Monitor Filter tab enter the following information and enable the following items:
    • Interface Name(s): X1 — This is the Internet-facing port of the SonicWall device.
    • Ether Type(s): IP — By specifying IP we eliminate any ARP or PPPoE traffic.
    • IP Type(s): TCP — This eliminates UDP, ICMP and other types of IP traffic.
    • Source Port(s): 548 — For now, we’ll test with a known working port.
    • Destination IP Address(es): The public IP address of the SonicWall device.
    • Enable Bidirectional Address and Port Matching: Enabled
    • Forwarded packets only: Enabled
    • Dropped packets only: Enabled
    Click the OK button to save the settings and close the window.
    Monitor filter
  5. Finally, click the Start Capture button. The window reflects that tracing is active.
    Start Capture

Now that the SonicWall device is monitoring Mac file sharing traffic, use telnet in the Terminal application to verify this type of traffic is actually reaching the destination.

  1. In the Terminal application enter:
    telnet 97.XXX.XXX.14 548
    Telnet 548
  2. In the Captured Packets window below, an entry appears in blue indicating the packet was forwarded to the destination server inside the local network.
    Captured 548 packet

With connectivity verified on port 548, test next with port 54321. This port should not be open but the SonicWall should at least register the attempt.

  1. Revisit the Monitor Filter and change the Source Port from 548 to 54321. Click the OK button and then click the Clear button to erase the captured packets.
    Monitor filter 54321
  2. In the Terminal application enter:
    telnet 97.XXX.XXX.14 54321
    Telnet 54321

    Terminal should reflect that it cannot connect on port 54321. The SonicWall doesn’t accept traffic on this port.

  3. However, the SonicWall packet filter will at least acknowledge the attempt and report the packet was dropped.
    Dropped 54321 packet

The SonicWall packet filter is clearly registering attempts for open ports and closed ports. So, what happens when the ISP is blocking a port?

  1. Revisit the Monitor Filter and change the Source Port from 54321 to 445 (the suspected ISP-blocked port). Click the OK button and click the Clear button to erase captured packets.
    Monitor filter 445
  2. In the Terminal application enter:
    telnet 97.XXX.XXX.14 445
    Telnet 445

    This time Terminal acts differently. It neither succeeds nor fails. It just keeps trying.

  3. The SonicWall shows nothing because it never receives the packet.
    No 445 packet received

This concludes the test and proves the SonicWall is functioning normally. Convincing the ISP it’s still blocking the port… that’s another story.
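The same three-way distinction telnet surfaced above (connected, refused, silent timeout) can be reproduced from any shell without touching the SonicWall UI. This is a rough sketch: it requires bash (for /dev/tcp) and the coreutils timeout command, and the host/port below are placeholders to swap for your firewall’s public IP and the port under test.

```shell
#!/bin/sh
# Classify a TCP probe the way the capture results do: forwarded,
# dropped at the firewall (refused), or never arriving (timeout,
# suggesting an upstream block such as the ISP's).
probe() {
  host=$1; port=$2
  timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
  case $? in
    0)   echo "$port: open (packet forwarded)" ;;
    124) echo "$port: timeout (packet never arrived; suspect an upstream block)" ;;
    *)   echo "$port: refused (the firewall saw the packet and rejected it)" ;;
  esac
}

# Against localhost, an unused port answers with an immediate refusal.
probe 127.0.0.1 54321
```

A refusal still proves packets are reaching the endpoint; only the silent timeout points upstream.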

Spelunking An iTunes Backup

June 12th, 2013 by Allister Banks

Say you’re excited about installing a particular beta of a particular mobile operating system, and foolhardy enough to put it on a phone that was in use for business purposes. Let’s go even further, hypothetically, and say you had been using iCloud Backup, but made a backup with iTunes before upgrading… leaving a gap of about half a day, during which contacts were added. This is a phone that’s often used for testing and little else, so no accounts besides iCloud are configured, and you don’t encrypt the backup because there are no passwords you want or need restored. After the beta upgrade completes, you restore the iCloud backup, which leaves out that one phone number that’s the direct line to a level two support group at a certain backup company. iTunes is just no fun to plug into, though, so let’s go spelunking in the backup it created.

First, I need to get the backup into a state I can interact with. For that I chose the product with the best domain name, http://supercrazyawesome.com, and its iOS Backup Extractor. I chose to put it all in /tmp, so it gets dumped sooner rather than later, and found a promising database to sift through:

in tmp

Following basic sqlite3 commands I found on @tvsutton’s site, I spotted a promising table, ABPersonFullTextSearch_content. Sure enough, the contact info I was missing was there, and I could pull it out to restore just that one contact I’d created.
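To give a flavor of the sqlite3 moves without the actual backup in hand, here’s a throwaway database mimicking the table of interest. The table name is from the real backup; the column names (c0First, c1Last, c2Phone) are invented for this sketch and won’t match the actual schema, so run .schema against the real file to see what you’ve got.

```shell
# Build a stand-in database, then pull a "lost" contact back out --
# the same sequence used against the extracted AddressBook database.
db=$(mktemp)
sqlite3 "$db" <<'SQL'
CREATE TABLE ABPersonFullTextSearch_content (docid INTEGER, c0First TEXT, c1Last TEXT, c2Phone TEXT);
INSERT INTO ABPersonFullTextSearch_content VALUES (1, 'Level2', 'Support', '555-0123');
SQL

# Query for the missing contact's number.
sqlite3 "$db" "SELECT c0First, c2Phone FROM ABPersonFullTextSearch_content WHERE c1Last = 'Support';"
```

Against the real file, the same SELECT (with the real column names) hands you just the one record to re-enter, no full restore required.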


Reduce email forgery by using SPF DNS records

June 10th, 2013 by William Smith

Here’s the problem:

Super Uber Bank is the largest bank in the world and spammers are using its name because of its recognizability. They’re sending email that looks like it’s coming from “Super Uber Bank Technical Support”. Super Uber Bank’s customers are clicking on links in those spam messages, which take them to a credit card phishing site, and those customers are handing over all their account information and passwords.

Email forgery is simple. Nothing stops me from setting my email address to “Super Uber Bank Technical Support <techsupport@superuberbank.com>” and sending you a message with a link to my credit card phishing site. Few email service providers require their customers to use valid email addresses when sending mail. Spammers just use their own servers anyway.

This puts the onus on the recipient’s mail server to validate incoming messages before passing them to you to read. Spam filtering is as much an art as a science, and the software has to balance between legitimate email that looks like spam and spam that looks legitimate.

Sender Policy Framework (SPF) records are DNS entries for a domain that provide the names of its authoritative email servers and sending domains. Continuing the Super Uber Bank story:

Super Uber Bank often uses a third party marketing company that specializes in mass-marketing emails. So, it decides to implement an SPF record for its superuberbank.com domain. In that record, Super Uber Bank includes its marketing company’s domain “supercrazymarketing.com” as authoritative for sending mail on its behalf.

An SPF record is a simple text (TXT) resource record added to the authoritative DNS servers for a domain. It sits alongside any A host records, MX records, CNAMEs, etc.

ubermail.superuberbank.com   A
superuberbank.com            MX      ubermail.superuberbank.com
@                            TXT     v=spf1 ip4: include:supercrazymarketing.com -all

In this case the third line is the SPF record. The “@” symbol is shorthand for the current domain (superuberbank.com). “TXT” denotes this is a text record, which requires a value: here, “v=spf1 ip4: include:supercrazymarketing.com -all”.

The text of the value breaks down like this:

v=spf1                              This is the version of SPF record being used
ip4:                  Allow mail from my own server at this IP address
include:supercrazymarketing.com     Include this as a valid sending domain
-all                                Reject anything that doesn't match these criteria
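The breakdown above can be walked mechanically, token by token. This little loop does just that against the example record; the 192.0.2.25 address is a documentation-range placeholder standing in for the bank’s real server IP, which the example elides.

```shell
# Walk the mechanisms of an SPF string and print what each one means.
spf="v=spf1 ip4:192.0.2.25 include:supercrazymarketing.com -all"
for mech in $spf; do
  case $mech in
    v=spf1)    echo "version: spf1" ;;
    ip4:*)     echo "allow mail from IP: ${mech#ip4:}" ;;
    include:*) echo "also allow senders authorized by: ${mech#include:}" ;;
    -all)      echo "hard-fail everything else" ;;
  esac
done
```

Against a live domain you’d feed this the output of `dig +short TXT superuberbank.com` rather than a hard-coded string.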

And so the story ends:

The spammers are using open relays from a variety of seedy ISPs to send their phishing scams. However, when a Super Uber Bank customer’s email server receives one of these bogus Super Uber Bank Technical Support messages, it compares the address of the sending server to the addresses and domains Super Uber Bank has in its SPF record. Because the server is not in the SPF record the customer’s email server rejects the message without sending it to the customer.

Incorrectly implemented SPF records can stop mail flowing altogether; therefore, customers should consult their ISPs and, likewise, ISPs should consult with their customers before implementation. Some services such as Google Apps and Office 365 have published SPF record information for their customers.

The SPF Project offers a plethora of information for end-users, service providers and support technicians. Their site also includes links for mailing lists and references for SPF consultants who can assist with more complex email scenarios.

iOS 7 Management API and Apple Configurator Wishlist Quicky

June 6th, 2013 by Allister Banks

We feel privileged to be living in the modern era: iOS device activation can happen over-the-air, and use of iTunes has been almost completely eclipsed by Apple Configurator. But it isn’t uncommon to hear sysadmins referred to as ‘the haters,’ since things can never be easy or nice enough for us. (And in reality, there’s still plenty of conflict and stress to go around without worrying about the reliability or functionality of our tools.) Besides the fact that enrollment profiles can always be removed at any time by end users, there are still a surprising number of things that require manual interaction to manage, plus missing integration with other Apple products. With something that could be called iOS 7 potentially around the corner, and with no inside information, here are some of the things that still trip up a modern iOS deployment in certain environments.

As of this point in time, through the official management API and payloads documented in the canonical reference Apple provides, you cannot do the following:

- Disable the setting of a password lock
Especially in education, the accidental turning on of this ‘feature’ has probably sold more MDM than anything else
- Prevent the addition of other email accounts
File transfer and content distribution is still by no means a solved problem, and email has always been a ubiquitous option – but in certain environments we probably don’t want accounts added nilly-willy… (er, strike that, reverse…)
- Prevent the sign-in (or creation!) of Twitter or Facebook accounts
Yay for social media integration! Boo for education or other environments where these devices aren’t to be used ‘socially.’

Account addition OR creation

Apple Configurator can allow the handing out of documents to an app like Adobe Reader (which still has an unfortunate number of Adobe’s interruptions in its first-time use experience), and you can collect documents as well when assigned devices are checked back in. The two apps you CAN’T currently add content/documents to? Apple’s own iTunes U and iBooks apps! Nor can you pull in iMovie projects or pictures from the Camera Roll.

The longer you work with these things, the more corner/edge cases you notice – like the fact that you can’t use two MDM services on the same device. It makes sense once you know the moving parts and think about the ramifications, but it can still surprise folks, because the documentation doesn’t seem to warn against it. (That I’ve found, at least – feel free to correct us on the Twitter or elsewhere!) We mention these things not to say it’s a horrible experience to deploy the devices in most use cases, just to point out there’s always room for improvement, and we’re excited to see what the next version might offer.

Allister’s Talks From Penn State MacAdmins

June 5th, 2013 by Charles Edge

IPv6: Quick start for administrators

May 26th, 2013 by William Smith

Networking support folks have been buzzing about IPv6 since it was first formally introduced in December 1998. This is the IP addressing system designed to augment the current IPv4 system, in use since the 1970s. It promises a much bigger address space for the world’s increasing number of Internet-connected devices.

Addresses will go from looking like this (IPv4):

192.0.2.101

to looking something like this (IPv6):

2001:0db8:85a3:0000:0000:8a2e:0370:7334

We’re experimenting with IPv6 in our offices, so I thought I’d compile a short list of things administrators may find useful to know.


Casper Focus Now Available

April 29th, 2013 by Charles Edge

For a long time I’ve been saying that the #1 challenge with regard to using iOS is content distribution. Others have mirrored that by saying that the device is a content aggregator, etc. The challenge is keeping everyone on the same page, with the same content and distributing administration of all of that to those who need it.

Well, our friends at JAMF software are, as usual, right in the middle of resolving the more challenging issues of the day with regard to iOS and OS X. In this case they’ve released a new tool called Casper Focus that enables rudimentary administrative tasks by teachers.



Now, I don’t want anyone to take the word rudimentary as a bad thing. You see, accessing and remotely controlling devices can be a big challenge, and the learning curve can be steep. By giving delegated administrators only a few options, that learning curve can be drastically reduced. Lock, enable, distribute data: these are the very basic tasks teachers need.

Overall, this is yet another great addition to the Casper family of products, and 318 is excited to work with our customers to integrate Casper Focus into their environments where appropriate. Call your Professional Services Manager today for more information!

10 Windows 8 Keyboard Combinations

April 23rd, 2013 by Charles Edge

Some helpful tips (in the form of keyboard combinations) on getting wizardly fast with navigating around Windows 8:

  • Windows key: Brings up the Start menu. On a touch screen keyboard you can then swipe through the charms in the Start menu.
  • Windows-x: Brings up a menu with many of the systems administration tools you’ll need in Windows 8, including Disk Management, a command prompt, Device Manager, etc.
  • Windows-r: Brings up a Run dialog
  • Windows-c: Brings up the sidebar that allows you to search, access devices or tap/click Settings to bring up a Shut Down menu
  • Windows-l: Locks the screen
  • Windows-k: Brings up Devices
  • Windows-h: Brings up Sharing
  • Windows-f: Brings up Files in the Start Menu search
  • Windows-i: Brings up Settings (Control Panel, personalization, desktop, Power menu, network selection for Wi-Fi, etc.)
  • Windows-q: Brings up the apps screen so you can select a program to open
  • And for extra credit, most of the Alt keys still work; I find I now use Alt-F4, which closes a window, more than I used to

Xsan & Media Composer

April 22nd, 2013 by Charles Edge

There’s a product out there called SANFusion that allows Media Composer to read disk images on Xsan as though each were a separate workspace in AvidFS. This lets environments leverage existing pools of storage sitting on Xsan as though they were sitting on an Isis or a Terablock. Check it out at SANFusion:

SAN Fusion is an easy and cost-effective solution that allows you to use Avid Media Composer with your existing Xsan infrastructure.

By using an existing Xsan volume as a backing-store for SAN Fusion’s virtualized workspaces, editors can experience authentic Unity-style bin sharing and locking from within Media Composer 5.5.3 or later. Unlike other solutions that purport to make Avid work with Xsan, SAN Fusion is a client-side application with no additional server or minimum number of seats to buy.

SAN Fusion clients mount and unmount workspaces on demand through our intuitive GUI to provide Media Composer a real-time translation layer between it and Apple’s Xsan filesystem. The end result is the best of both worlds: Avid’s first class bin and media sharing combined with Apple’s storage-agnostic low (or no) cost cluster filesystem.

The $1,500 price tag shouldn’t scare you off. Just compare the cost of a Promise X30 to the same amount of storage from Avid and you’ll see why. And if you need any help with such things, give us a shout at sales@318.com.

Installing OpenXenManager on Ubuntu

April 21st, 2013 by Charles Edge

OpenXenManager is a nice graphical environment for managing the Xen virtualization environment. You still need to use QEMU or virt-install/virt-manager to manage domains themselves, but OpenXenManager takes some of the guesswork out of things. Having said that, xm is still the way to go for the most part.

Before you install OpenXenManager, you’ll need svn, which isn’t baked into a default Ubuntu 12.04 or 12.10 installation. We’ll use apt-get to install subversion and the Python frameworks required for OpenXenManager:

sudo apt-get install subversion
sudo apt-get install python-glade2 python-gtk-vnc

Next, grab openxenmanager using subversion:

svn co https://openxenmanager.svn.sourceforge.net/svnroot/openxenmanager openxenmanager

Hop into your openxenmanager trunk and fire it up (window.py is the GUI’s entry point at the time of writing):

cd openxenmanager/trunk
python window.py

Now you have a GUI for managing domU. Good luck!

How TED’s introduction to Google Glass Really Happened

April 12th, 2013 by Allister Banks

It starts with a replay of the Google Glass commercial, but then an uncomfortable founder of an internet company rambles uncomfortably. At 13:12 he jokes about recording from the stage without people knowing. Strange times we live in.

BCC Mail In OS X Server

April 4th, 2013 by Charles Edge

OS X Server has the ability to BCC mail that flows through it. This can be a good way to keep a copy of mail for purposes like legal requirements. Once upon a time you could enable this feature in the OS X Server GUI. These days the feature is still there, but it’s now accessed through the command line as the always_bcc_enabled option within serveradmin’s mail settings. To enable this option, use the following command:

sudo serveradmin settings mail:postfix:always_bcc_enabled = yes

Once enabled, you will also need to supply an actual address to bcc mail to, which is done using always_bcc as follows:

sudo serveradmin settings mail:postfix:always_bcc = "backup@318.com"

Next, you’ll want to restart the mail service:

sudo serveradmin stop mail
sudo serveradmin start mail

Finally, if there are any issues, putting the postfix logging facility into debug mode can help you triangulate. That’s done using the following command (and restarting the mail service again):

sudo serveradmin settings mail:postfix:log_level = "debug"

Quick Update to a Radiotope Guide for Built-In Mac OS X VPN Connections

March 26th, 2013 by Allister Banks

Just a note for those folks with well-worn bookmarks to this post on Ed Marczak’s blog, Radiotope.com, about authenticating VPN connections with Mac OS X Server’s Open Directory, which is still valid today. When trying to use the System Preferences VPN client/network adapter with the built-in L2TP server in a SonicWall, though, I was puzzled that OD auth wasn’t working for me while users local to the SonicWall could connect. It having been a while since I last set this up, I went search-engine spelunking and found a link that did the trick.

In particular, a comment by Ted Dively brought to my attention that you need to change the authentication type the L2TP service is configured to use: PAP instead of the more standard MSCHAPv2. (In the VPN sidebar item, choose L2TP Server, click the Configure button, and look under the PPP tab.)

Where it's done

We hope that is of help to current and future generations.

LOPSA-East 2013

March 18th, 2013 by Allister Banks

For the first year I’ll be speaking at the newly-rebranded League of Extraordinary Gentlemen League of Professional System Administrators conference in New Brunswick, New Jersey! It’s May 3rd and 4th, and should be a change from the Mac-heavy conferences we’ve been associated with as of late. I’ll be giving a training class, Intro to Mac and iOS Lifecycle Management, and a talk on Principled Patch Management with Munki. Registration is open now! Jersey is lovely that time of year, please consider attending!


LOPSA-East '13

PSU MacAdmins Conference 2013

February 27th, 2013 by Allister Banks

It's Secret!

For the third year, I’ll be presenting at PSU MacAdmins Conference! This year I’m lucky enough to be able to present two talks, “Backup, Front to Back” and “Enough Networking to be Dangerous”. But I’m really looking forward to what I can learn from those speaking for the first time, like Pepijn Bruienne and Graham Gilbert, among others. The setting and venue are top-notch. It’s taking place May 22nd through the 24th, with a Boot Camp for more foundational topics on May 21st. Hope you can join us!

Spin passwords using Apple Remote Desktop

February 18th, 2013 by William Smith

We routinely need to change our administrative passwords on multiple computers as part of our security policy. Since we already have remote access to many of our Mac OS X computers through Apple Remote Desktop (ARD), changing that administrator password is quick and simple.

First, a short shell script:

#!/bin/sh

# Change an account's password
ACCOUNT="ladmin"        # account whose password to change
PASSWORD="newpassword"  # new password to set

/usr/bin/dscl . passwd /Users/$ACCOUNT $PASSWORD

if [ $? = 0 ] ; then
    echo "Password reset."
else
    echo "Password not reset."
fi

In ARD, click the Send UNIX Command button and paste the script into the top field. Choose to run this command as a specific user and specify root.

Send UNIX Command

From the Template drop down menu in the upper right corner select Save as Template… and save these settings with a descriptive name such as Spin ladmin password.

Save as template

To use and reuse this template, select the workstations with the old account password and click the Send UNIX Command button in ARD’s toolbar. Choose the Spin ladmin password template from the Template drop down menu. Adjust the account name and password accordingly in the script and then click the Send button.

ARD can spin dozens or hundreds of account passwords in just a few seconds without having to know the original.

Windows 7: What a difference time makes!

February 6th, 2013 by William Smith

While visiting our Santa Monica office I called on a client having difficulties connecting his Windows 7 computer to his file server. I expected this to be a pretty straight-forward issue since the environment was pretty simple:

  • Only his PC wouldn’t connect—everyone else was connecting without issue
  • Just a handful of users (about five)
  • All wired connections (no wireless)
  • A Windows Server 2003 file server
  • No domain—just a workgroup
  • A small simple router for DHCP and Internet

I went through basic troubleshooting steps and verified that the client could indeed browse devices on the network (including the server), had no unusual ping times or network settings, and that his account password for the server was correct. Every attempt to connect prompted for his name and password, but the server wouldn’t accept his credentials and simply prompted again.

Likewise, a new server account that I created myself would work on other PCs but not his. I narrowed my focus to his machine and started asking questions:

Q: Was this a new machine?

A: Yes, pretty new.

Q: Had it ever connected to the server?

A: Yes, it had.

Q: Any recent unusual activity in the office?

A: Yes, the office had to be shut down for some recent power maintenance in the building a week ago. When he restarted the server it wouldn’t power up. Hardware technicians had diagnosed a few failed components. Only yesterday afternoon had the server been back online.

Q: Did he have the only Windows 7 PC in the office?

A: Yes.

The last question led me to believe he was having an issue with security negotiations between his PC and the server: the two OS versions were nearly 10 years apart, and Windows 7 has considerably stricter security. Nothing in the PC’s local policy or the server’s local policy for Microsoft Networking looked unusual. I tested a little registry change that seemed logical:

Go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\LSA

Create Dword LmCompatibilityLevel with a value of 1

Restart computer and try to connect again.

No change.

After more research I found another potential solution on Microsoft Answers, though I was skeptical it would solve my issue since the computers were in a workgroup and not a domain: check the time.

Just as soon as I was ready to pursue that idea the office manager asked me to check the backups on the server because they were reporting dates from October 2010. That was a smack to the face!

A quick resync to the time.windows.com NTP server corrected the time and the Windows 7 user logged in immediately.

QuickTip: Retrospect

February 2nd, 2013 by Charles Edge

False-positive errors muddy our view, so just as many strive to quiet noisy syslogs, we can easily overcome a common complaint we see Retrospect make when doing differential SQL backups: ‘the master DB can’t be DIFF’d’.

Complicating matters, Retrospect’s SQL connector doesn’t seem to allow you to exclude that database as a source. Simply make an exclusion for ‘file or folder name matches: master’ and you’ll see it get to the master database in the log and decide no files need backing up, keeping the result of that script error-free.

A Little About Xen

February 1st, 2013 by Charles Edge

Xen is one of those cool open source projects which seems like the kind of thing you’d probably want to run if it weren’t for the fact that everyone forces you to run ESX(i). It’s free, it’s well documented, and no matter how irrational salespeople can be, nobody can say there’s no support or documentation for it. So how does Xen work, and how does it compare with ESXi?

For starters, there’s no need to be overly concerned with what hardware is supported. Instead of being dependent on a specific OS, Xen is a driverless hypervisor which runs in conjunction with a host OS. This host OS might be GNU/Linux, NetBSD, Solaris or others. Since the host OS handles talking with hardware, any hardware which is supported by the host OS can be used with Xen. In a nutshell, Xen can be run on any x86 box, although full virtualization requires Intel’s VT-x or AMD’s AMD-V hardware support.

So let’s say you want to set up Xen. How? In many instances it’s as simple as installing a GNU/Linux distribution or a NetBSD distribution. Straightforward directions can be found here:


Let’s say you’ve already done all of that and you’re sitting at the command prompt of your Xen dom0. How do we create virtual machines? We’ll make an example using Windows 2012 Server since that just happens to be what I’m installing today.

Just like the how-to for installing a Xen dom0 instance, there are lots of how-tos for installing Windows and other OSes under Xen. We’ll summarize the important points using a minimal VM configuration file:

name = "win2012"
builder = "hvm"
memory = "8192"
disk = [ 'file:/usr/xen/win2012/win2012.img,ioemu:hda,w', 'file:/usr/xen/win2012/winserver2012.iso,ioemu:hdb:cdrom,r' ]
vif = [ 'bridge=bridge0', ]
vcpus = 1
sdl = 0
vncconsole = 1
vfb = [ 'type=vnc,vncdisplay=12,vncpasswd=booboo' ]
on_reboot = 'destroy'
on_crash = 'destroy'

# boot on floppy (a), hard disk (c) or CD-ROM (d)
# default: hard disk, cd-rom, floppy
usbdevice = 'tablet' # Helps with mouse pointer positioning

Some of the options are pretty self-explanatory, such as name, memory, vcpus, on_reboot and on_crash. Others may need a little explanation. builder identifies the kind of virtualization: for paravirtualization no builder need be specified, but you must be running a Xen-aware domU; for full virtualization (needed for Windows), use hvm. device_model determines the executable run to emulate the devices the virtual machine will use; the default Xen qemu device model appears to work well nowadays with all versions of Windows. The disk line should make sense, but be aware that you’ll need to make the disk image yourself. If you wish to preallocate the entire file, use something like:

dd if=/dev/zero of=/usr/xen/win2012/win2012.img bs=1048576 count=32768

This writes 32 gigs of zeros to the disk image file. If you don’t care about preallocating the space, you can create a sparse image instead:

dd if=/dev/zero of=/usr/xen/win2012/win2012.img seek=32767 bs=1048576 count=1
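The sparse trick is worth seeing in action: seeking past (almost) the full size and writing a single block makes the file report its full size while consuming almost no disk. Here it’s demonstrated at 32 MB rather than 32 GB, with a throwaway temp file, just to keep it quick:

```shell
# Create a sparse image: seek to the 31 MB mark and write one 1 MB
# block, giving a file that reports 32 MB but occupies ~1 MB on disk.
img=$(mktemp)
dd if=/dev/zero of="$img" seek=31 bs=1048576 count=1 2>/dev/null

echo "apparent size: $(wc -c < "$img") bytes"
du -k "$img" | awk '{print "disk blocks actually used: " $1 " KB"}'
```

The guest sees a full-size disk either way; preallocation just trades disk space up front for avoiding fragmentation as the image fills.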

vif is the virtual network interface. A bridge to a real NIC is simplest, although other options are available if desired.

sdl presents the graphics screen in a local SDL window on the dom0, which I don’t use. vnc, on the other hand, can be run on localhost and sent over X11 very easily. My options above create a VNC listener on display 12 (localhost:5912) with a simple password. One can forward X11, with compression if preferred, and run vncviewer on the Xen dom0. If X11 scares you, you can also use ssh to forward port 5912 from localhost of the dom0 to a port on your local machine and run VNC or Screen Sharing there.

There are good reasons to do it that way, like the ability to talk directly to the ssh daemon on the dom0 instance. But whatever works for you. We’re now running Windows in Xen…

Set Splunk MySql Monitor To Start On Boot (CentOS)

January 31st, 2013 by Erin Scott

Back in the old days of UNIX there was an easy way to start a daemon or script every time a computer booted. Simply put it in one of the /etc/rc.? text files and it would start all the services in the order specified. Later, this was made more flexible by having different startup folders based on which runlevel you were on. Even later still, these rc[1-6].d startup folders became deprecated, yet they are still used to some extent by legacy programs, and now things are all managed with new commands.


To put it bluntly, it’s messy, non-intuitive and definitely not as easy as it should be. There is hope, however: getting a script or daemon to run “the right way” at startup isn’t too terribly daunting, and I’ll walk you through the process now.


In our instance we need a program called splunkmysqlmonitor.py to run on boot. It takes one of three arguments (start, stop or restart) and is located in /opt/splunk/etc/apps/mysqlmonitor/bin/daemon/. It’s almost ready to run at startup, but first we should look at the command we’ll use to register splunkmysqlmonitor.py, and that’s the chkconfig command.

The chkconfig command takes a script that’s located in /etc/init.d and creates all the necessary symlinks for it in the rc[1-6].d folders, which tell the system what order to start the services in and which runlevels start which services. Runlevels are mostly deprecated in Linux these days, but as an FYI, the runlevels you need to pay attention to are 2, 3, 4 and 5, and they are almost always identical. The only thing you really need to worry about is the order in the boot process that the scripts get started, and, less so, the order they get shut down in when rebooted. For example, a program that relies on NFS needs to run after the NFS service has mounted its drives successfully. Lower numbers in the list start first, and the list goes from 1–99. Since Splunk is at priority 90 and this monitor needs to start after Splunk, I’ll give it a priority of 95. As for shutdown, this service should turn off quickly, since it relies on other services and may spit out errors if those dependent services are turned off before it. I’ll give the shutdown a priority of 5, which means it’ll be one of the first processes to shut down.


So now that we know when in the boot process the script should run (priority 95) and which runlevels it should run in (2, 3, 4 and 5), we just need to put this info into the system somehow. We do this by adding specially formatted comment lines to our script located in /etc/init.d. Here’s what our example looks like with the new comments added:


#!/usr/bin/env python
#         run level  startup  shutdown
# chkconfig: 2345      95        5 
# description: monitors local mysql processes for splunk
# processname: splunkmysqlmonitor
import sys, time, os, socket...
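For context, beneath those chkconfig comment lines a script like this typically just dispatches on its single argument. A minimal sketch of that pattern in Python (the returned strings are placeholders, not the monitor’s actual daemon logic):

```python
import sys

def dispatch(action):
    """Map the init system's single argument to a daemon operation."""
    handlers = {
        "start": lambda: "starting",
        "stop": lambda: "stopping",
        "restart": lambda: "restarting",
    }
    if action not in handlers:
        return "usage: splunkmysqlmonitor.py start|stop|restart"
    return handlers[action]()

if __name__ == "__main__":
    print(dispatch(sys.argv[1] if len(sys.argv) > 1 else ""))
```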

Now we have to put the script into the /etc/init.d folder and that is best done with a symlink.

     ln -s /opt/splunk/etc/apps/mysqlmonitor/bin/daemon/splunkmysqlmonitor.py /etc/init.d

And finally the chkconfig command itself

     chkconfig --add splunkmysqlmonitor.py

This should add the script to startup and next time you reboot it’ll launch automagically.

[More Splunk: Part 4] Narrow search results to create an alert

January 30th, 2013 by William Smith

This post continues [More Splunk: Part 3] Report on remote server activity.

Now that we have Splunk generating reports and turning raw data into useful information, let’s use that information to trigger something to happen automatically such as sending an email alert.

In the prior posts a Splunk Forwarder was gathering information using a shell script and sending the results to the Splunk Receiver. To find those results we used this search string:

host="TMI" source="/Applications/splunkforwarder/etc/apps/talkingmoose/bin/counters.sh"

It returned data every 60 seconds that looked something like:

2012-11-20 14:34:45-08:00 MySQLCPU=23.2 ApacheCount=1

Using the timechart function of Splunk we extracted the MySQLCPU field to get its value 23.2 and put that into a graph for easier viewing.

Area graph
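Splunk extracts key=value pairs like these as fields automatically; purely for illustration, the same parse expressed in Python (a sketch, not how Splunk does it internally):

```python
import re

def extract(line, field):
    """Pull a numeric key=value field out of a raw event line."""
    m = re.search(r"%s=([0-9.]+)" % re.escape(field), line)
    return float(m.group(1)) if m else None

line = "2012-11-20 14:34:45-08:00 MySQLCPU=23.2 ApacheCount=1"
extract(line, "MySQLCPU")  # 23.2
```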

Returning to view that graph every few minutes, hours or days can get tedious if nothing really changes. Ideally, Splunk would watch the data for us and let us know when something is out of the ordinary. That’s where alerts are useful.

For example, the graph above shows the highest spike in activity to be around 45% and we can assume that a spike at 65% would be unusual. We want to know about that before processor usage gets out of control.

Configuring Splunk for email alerts

Before Splunk can send email alerts it needs basic email server settings for outgoing mail (SMTP). Click the Manager link in the upper right corner and then click System Settings. Click on Email alert settings. Enter public or private outgoing mail server settings for Splunk. If using a public mail server such as Gmail then include a user name and password to authenticate to the server and select the option for either SSL or TLS. Be sure to append port number 465 for SSL or 587 for TLS to the mail server name.

Splunk email server settings

In the same settings area Splunk includes some additional basic settings. Modify them as needed or just accept the defaults.

Splunk additional email server settings

Click the Save button when done.

Refining the search

Next, select Search from the App menu. Let’s refine the search to find only those results that may be out of the ordinary. Our first search found all results for the MySQLCPU field but now we want to limit its results to anything at 65% or higher. The where function is our new friend.

host="TMI" source="/Applications/splunkforwarder/etc/apps/talkingmoose/bin/counters.sh" | where MySQLCPU >= 65

This takes the results from the Forwarder and pipes them into an operation that returns only values of the MySQLCPU field that are greater than or equal to 65. The search results, we hope, are empty. To verify the search is working correctly, temporarily change the value from 65 to something lower such as 30 or 40. The lower values should return multiple results.

On a side note, unrelated to our current need: if we wanted an alert for a range of values, an AND operator connecting two statements will limit the results to something between those values:

host="TMI" source="/Applications/splunkforwarder/etc/apps/talkingmoose/bin/counters.sh" | where MySQLCPU >= 55 AND MySQLCPU <= 65
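To make the pipe semantics concrete, here’s the same filtering expressed in Python over already-parsed events (sample values invented for illustration; Splunk of course does this server-side):

```python
def where(events, predicate):
    """Keep only the events that satisfy the predicate, like Splunk's where."""
    return [e for e in events if predicate(e)]

events = [{"MySQLCPU": 23.2}, {"MySQLCPU": 58.0}, {"MySQLCPU": 67.5}]

over_65 = where(events, lambda e: e["MySQLCPU"] >= 65)
in_range = where(events, lambda e: 55 <= e["MySQLCPU"] <= 65)
```

over_65 keeps only the 67.5 event and in_range only the 58.0 one — the same results the two searches above would return.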

Creating an alert

An alert will evaluate this search as frequently as Splunk receives new data, and if the search returns any results at all then it can do something automatically.

With the search results in view (or lack of them), select Alert… from the Create drop down menu in the upper right corner. Name the search “MySQL CPU Usage Over 65%” or something that’s recognizable later. One drawback with Splunk is that it won’t allow renaming the search later. To do that requires editing more .conf files. Leave the Schedule at its default Trigger in real-time whenever a result matches. Click the Next button.

Schedule an alert

Enable Send email and enter one or more addresses to receive the alerts. Also, enable Throttling by selecting Suppress for results with the same field value and enter the MySQLCPU field name. Set the suppression time to five minutes, which is pretty aggressive. Remember, the script on the Forwarder server is sending new values every minute. Without throttling Splunk would send an alert every minute as well. This will allow an administrator to keep some sanity. Click the Next button.

Enable alert actions

Finally, select whether to keep the alert private or share it with other users on the Splunk system. This only applies to the Enterprise version of Splunk. Click the Finish button.

Share an alert

Splunk is now looking for new data to come from a Forwarder and as it receives that new data it’s going to evaluate it against the saved search. Any result other than no results found will trigger an email.

Note that alerts don’t need to just trigger emails. They can also run scripts. For example, an advanced Splunk search may look for multiple Java processes on a server running a Java-based application. If it found more than 20 spawned processes it could trigger a script to send a killall command to stop them before they consumed the server’s resources and then issue a start command to the application.
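The decision such a script would make can be sketched simply; everything here (the threshold, the process-list input) is hypothetical, and the actual killall/start plumbing is left out:

```python
def should_restart(process_names, limit=20):
    """True when more spawned java processes are running than the limit allows."""
    return sum(1 for name in process_names if "java" in name) > limit

# e.g. fed from the output of `ps ax` on the application server
procs = ["java -jar app.jar"] * 25 + ["sshd", "splunkd"]
should_restart(procs)  # True
```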

MySQL Monitoring with Splunk

January 30th, 2013 by Erin Scott

MySQL Logging with Splunk

Getting Splunk running and monitoring common log formats, such as Apache logs and system logs, is a pretty straightforward process. Some would even call it intuitive, but setting up some of the optional plugins can be tricky the first time through. The following is a quick and dirty guide to getting the MySQL monitor from remora up and running in your Splunk instance.

This article assumes you have a splunk server as well as a separate database server running a splunk forwarder that is pushing logs to the main splunk server.

The first step is to prepare your splunk server for the incoming mysql stats. We’ll need to make a custom index (called mysql in our case) on both the server and the database host.  See below:

create mysql index on splunk server

Once that’s done, we’ll also need to create a custom TCP listener on the Splunk server. This is different from the standard listener that runs on port 9997. Go to Manager and then Data inputs to create it:

add listener1


add listener2


set raw tcp listener on splunk server


As you can see, we used port 9936 for a listener that automatically imports into the mysql index. You’ll want to ensure that this port is reachable from your database server and that no firewalls are blocking your connection. You can test this with a simple telnet command. If you see a prompt that says “Escape character is” then you’re good to go.

telnet to port 9936 to test
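The same reachability check can be scripted if you’d rather not telnet by hand — a small Python sketch (the host name below is hypothetical; substitute your Splunk server):

```python
import socket

def port_open(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("splunk.example.com", 9936)
```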


Once we have verified the listener is up and running, the next step is to get the MySQL monitor installed on all the machines. It’s easily available via the Splunk marketplace; all you need is to create a username and password.

go to marketplace to install apps

Once in the marketplace, locate the MySQL monitor

install mysql monitor on splunk server and db servers

And then restart splunk

restart splunk

Now that that’s installed we need to make sure all the dependencies for the mysql monitor are setup on the database servers that will be pushing data to the main splunk server.

To install on a Debian-based OS, use this command:

    apt-get install python-mysqldb

For a Red Hat-based OS, use this:

    yum install MySQL-python

Accept all the dependencies and, assuming there were no issues, you’re just about ready.

Next on the list is to make sure your Splunk monitoring daemon can talk to the local MySQL server. On the machine in our test, we only have MySQL running on the internal IP, and we have to ensure that the MySQL user splunk@ can connect and has permission. You may need to run the following command to grant that permission.

     grant all privileges on *.* to 'splunk'@'mysql_ip' identified by 'your-password';

To verify that splunk can access your tables, use the following command:

     mysql -u splunk -h mysql_ip -p

Once you’ve got that down the last step is to configure the mysql monitor’s config.ini. Here’s the config.ini we used:


As of this writing, the place to put that config file is: /opt/splunk/etc/apps/mysqlmonitor/bin/daemon

To start the mysql monitor type this on the db server: /opt/splunk/etc/apps/mysqlmonitor/bin/daemon/splunkmysqlmonitor.py

That’s it!  If you check the Splunk server then you should start seeing the mysql logs popping in immediately.

view mysql logs


mysql host overview


Pretty nice eh?


Next time I’ll show you how to make the splunk monitor daemon start on boot.

FileVault 2 Part Deux, Enter the Dragon

January 30th, 2013 by Allister Banks

The Godfather of FileVault, Rich Trouton, has probably encrypted more Macs than you. It’s literally a safe bet, horrible pun intended. But even he hadn’t taken into account a particular method of institution-wide deployment of recovery keys: disk-based passwords.

As an exercise, imagine you have tier-one techs that need to get into machines as part of their duties. They would rather not boot the recovery partition over target disk mode (thanks to Greg Neagle for clearing up confusion regarding how to apply that method), slide a valuable certificate into place, and whisper an incantation into its ear to operate on an un-booted volume; nor do they want to reset someone’s password with a ‘license plate’ code. They just want to unlock a machine that doesn’t necessarily have your admin enabled for FV2 on it. Back in 10.7, before the csfde command line tool (Google’s reverse-engineered CLI FileVault initialization tool, mostly applicable to 10.7 since 10.8 has fdesetup), the process of adding users was labor-intensive as well. Even in fdesetup times, you cannot specify multiple users without having their passwords and passing them in an unencrypted plist or via stdin.

In this scenario, it’s less a ‘get out of jail free’ card for users that forget passwords, and more of a functional, day-to-day let-me-in secret knock. How do I get me one of those?

Enter the disk password. (Meaning like Enter the Dragon or Enter the Wu, not really ‘enter your disk password’, this is a webpage, not the actual pre-boot authentication screen.)




How did we get here? No advanced black magic; we just run diskutil cs (short for coreStorage, the name of the quacks-like-a-duck-so-call-it-a-duck logical volume manager built in to 10.7 Lion and later) with the convert verb and -passphrase option, pointing it at the root volume. We could encrypt any accessible drive, but the changes to login are what we’re focusing on now.

The end result, once the process finishes and the machine next reboots, is that this (un-customizable) icon appears at the login window:


Remember that this scenario is about ‘shave and a haircut, two bits’, not necessarily the institution-wide systems meant to securely manage recovery options. Why haven’t you (or the Godfather) heard of this having been implemented for institutions until now-ish? (Was he too busy meticulously grooming his links to anything a mac admin could possibly need to know, or composing the copious content to later link to? Say that three times fast!) (Yes, the disk password functionality has been around for a bit, but we’ve just gotten a report of it being deployed, which prompted this post.) Well, there are two less attractive parts of this setup that systems like Cauliflower Vest and commercial solutions like Credant or Casper sidestep:

1. The password (for one or many hosts) needs to be sent to a shell on the local workstation’s command line in some way, and rotating the password requires the previous one to be passed to stdin
2. It can be confusing that, at the pre-boot login window, there seems to be a user account called Disk Password visible

What’s the huge advantage over the other systems? Need to rotate the password? No decrypt/re-encrypt time! (Unlike the ‘license plate’ method.) Old passwords are properly ‘expired’! (Unlike the ‘Institutional Recovery Key’ method of using a certificate.) I hope this can be of use to the environments that may be looking for more ‘middle ground’ between complex systems and manual interaction. Usability is always a factor when discussing security products, so the additional method is a welcome one to consider the benefits of and, as always, test.

Regarding FileVault 2, Part One, In Da Club

January 28th, 2013 by Allister Banks


IT needs a way to access FileVault 2 (just called FV2 from here on) encrypted volumes in the case of a forgotten password, or just to get control over a machine we’re asked to support. Usually an institution will employ a key escrow system to manage FDE (Full Disk Encryption) when working at scale. One technique, employed by Google’s previously mentioned Cauliflower Vest, is based on the ‘personal’ recovery key (a format I’ll refer to as the ‘license plate’, since it looks like this: RZ89-A79X-PZ6M-LTW5-EEHL-45BY.) The other involves putting a certificate in place, and is documented in Apple’s white paper on the topic. That paper only goes into the technical details in the appendix, and I thought I’d review some of the salient points briefly.

There are three layers to the FV2 cake, divided by the keys interacted with when unlocking the drive:
Derived Encryption Keys (plural), the Key Encrypting Key (from the department of redundancy department) and the Volume Encrypting Key. Let’s use a (well-worn) abstraction so your eyes don’t glaze over. There’s the guest list and party promoter (DEKs), the bouncer (KEK), and the key to the FV2 VIP lounge (VEK). User accounts on the system can get on the (DEK) guest list for eventual entry to the VIP, and the promoter may remove those folks with skinny jeans, ironic nerd glasses without lenses, or Ugg boots with those silly salt-stained, crumpled-looking heels from the guest list, since they have that authority.

The club owner has his name on the lease (the ‘license plate’ key or cert-based recovery) and signs the bouncer’s paycheck. Until drama pops off, and the cops raid the joint, and they call the ambulance and they burn the club down… and there’s a new lease and ownership and staff, the bouncer knows which side of his bread is buttered.

The bouncer is a simple lad. He gets the message when folks are removed from the guest list, but if you tell him there’s a new owner (cert or license plate), he’s still going to allow the old owner to sneak anybody into the VIP for bottle service like it’s your birthday, shorty. Sorry about the strained analogy, but I hope you get the spirit of the issue at hand.

The moral of the story is, there’s an expiration method (re-wrapping the KEK based on added/modified/removed DEKs) for the (in this case, user…) passphrase-based unlock ONLY. The FileVaultMaster.keychain cert has a password you can change, but if access has been granted to a previous version with a known password, that combination will continue to work until the drive is decrypted and re-encrypted. And the license plate version can’t be regenerated or invalidated after initial encryption.

So the two institutional-scale methods previously mentioned still get through the bouncer (that is, unlock the drive) until you tear the club up (de- and re-encrypt the volume).

But here’s an interesting point: there’s another type of DEK/passphrase-based unlock that can be expired/rotated besides the per-user kind: a disk-based passphrase. I’ll describe that in Part Deux…

Sure, We Have a Mac Client, We Use Java!

January 24th, 2013 by Allister Banks

We all have our favorite epithets to invoke for certain software vendors and the practices they use. Some of our peers go downright apoplectic when speaking about those companies and the lack of advances we perceive in the name of manageable platforms. Not good, life is too short.

I wouldn’t have even imagined APC would be forgiving in this respect; they are quite obviously a hardware company. You may ask yourself, though (‘is your refrigerator running?’), is the software actually listening for a safe shutdown signal from the network card installed in the UPS? Complicating matters:
- The reason we install this Network Shutdown software from APC on our server is to receive this signal over ethernet, not USB, so it’s not detected by Energy Saver like other, directly cabled models

- The shutdown notifier client doesn’t have a windowed process/menubar icon

- The process itself identifies as “Java” in Activity Monitor (just like… CrashPlan – although we can kind of guess which one is using 400+ MBs of virtual memory while idle…)

Which sucks. (Seriously, it installs in /Users/Shared/Applications! And runs at boot with a StartupItem! In 2013! OMGWTFBBQ!)

Calm, calm, not to fear! ps sprinkled with awk to the rescue:

ps avx | awk '/java/&&/Notifier/&&!/awk/{print $17,$18}'

To explain the ps flags: a allows for all users’ processes, v prints in long format with more criteria, and x includes processes even if they have no ‘controlling console.’ Then awk looks for both Java and the ‘Notifier’ jar name, minus our awk itself, and prints the relevant fields, highlighted below (trimmed and rewrapped for readability):



So at least we can tell that something is running, and we can appreciate the thoughtful development process APC followed while we aren’t fashioning our own replacement out of booster serial cables and middleware. Thanks to the googles and the overflown’ stacks for the proper flags to pass ps.

InstaDMG Issues, and Workflow Automation via Your Friendly Butler, Jenkins

January 17th, 2013 by Allister Banks

“It takes so long to run.”

“One change happens and I need to redo the whole thing”

“I copy-paste the newest catalogs I see posted on the web, the formatting breaks, and I continually have to go back and check to make sure it’s the newest one”

These are the issues commonly experienced by those who want to take advantage of InstaDMG, and for some, they may be enough to prevent them from being rid of their Golden Master ways. Of course there are a few options to address each of these in turn, but you may have noticed a theme in the blog posts I’ve penned recently, and that is:


(We’ll get to how automation takes over shortly.) First, to review, a customized InstaDMG build commonly consists of a few parts: the user account, a function to answer the setup assistant steps, and the bootstrap parts for your patch and/or configuration management system. To take advantage of the (hopefully) well-QA’d vanilla catalogs, you can nest them in your custom catalog via an include-file line, so you only update your custom software parts listed above in one place. (And preferably you keep those projects and catalogs under version control as well.)

All the concerns paraphrased at the start of this post just happen to be discussed recently on The Graham Gilbert Dot Com. Go there now, and hear what he has to say about it. Check out his other posts, I can wait.

Graham Gilberts Blog
Back? Cool. Now you may think those are all the answers you need. You’re mostly right, you smarty you! SSDs are not so out-of-reach for normal folk, and they really do help speed up the I/O-bound process, so there’s less cost to create and repeat builds in general. But then there are the other manual-interaction and regular-repetition parts – how can we limit them to as little as possible? Yes, the InstaDMG robot’s going to do the heavy lifting for us by speedily building an image, and using version control on our catalogs helps us track change over time, but what if Integrating the changes from the vanilla catalogs was Continuous? (Answers within!)

FileMaker Server 12 Console + Java 7 Issue

January 16th, 2013 by Charles Edge

The latest Java 7 installer for OS X places a new control panel on the system, replacing the Java Preferences app. If you disable Java, the server console for FileMaker Server 12 will not open and no log entries are created.

If Java has previously been disabled, you can resolve this issue by turning Java back on. There is a new System Preferences pane for Java 7. Click the Java icon in System Preferences and, unlike most preference panes, a separate Java Control Panel application opens.

Screen Shot 2013-01-16 at 9.05.30 AM

At the new Java Control Panel application, make sure that “Enable Java content in the browser” is checked.

Screen Shot 2013-01-16 at 9.06.44 AM

This is an important note for FileMaker Server 12 administrators to consider, as Java 6 reaches End Of Life next month, prompting many vendors to update their code. Note that disabling Java in Safari has no impact on using FileMaker Server.

If It’s Worth Doing, It’s Worth Doing At Least Three Times

January 14th, 2013 by Allister Banks

In my last post about web-driven automation, we took on the creation of Apple IDs in a way that would require a credit card before actually letting you download apps (even free ones). That’s fine for speeding up the creation process when actual billing will be applied to each account one at a time, but for education or training purposes where non-volume-license purchases aren’t a factor, there is the aforementioned ‘BatchAppleIDCreator‘ AppleScript. It hasn’t been updated recently, though, and I still had more automation tools I wanted to let have a crack at a repetitive workflow like this use case.

SikuliScript was born out of MIT research in screen reading, which roughly approximates what humans do as they scan the screen for a pattern and then take action. One can build a Sikuli script from scratch by taking screenshots and then tying together the actions you’d like to take in its IDE (which essentially renders HTML pages of the ‘code’). You can integrate Python or Java, although it needs (system) Java and the Sikuli tools to be in place in the Applications folder to work at all. For Apple ID creation in iTunes, which is the documented way to create an ID with the “None” payment method, Apple endorses the steps in this knowledge base document.
Sikuli AutoAppleID Creator Project

When running, the script does a search for iBooks, clicks the “Free” button to trigger Apple ID login, clicks the Create Apple ID button, clicks through a splash screen, accepts the terms and conditions, and proceeds to type in information for you. It gets this info from a spreadsheet (ids.csv) that I adapted from the BatchAppleIDCreator project, but it currently hard-codes just the security questions and answers. There is guidance in the first row on how to enter each field, and you must leave that instruction row in, although the NOT IMPLEMENTED section will not be used as of this first version.
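Reading that spreadsheet is the unglamorous part of the script; in Python it amounts to something like this (a sketch — the real script runs under Sikuli, and the column layout is whatever your adapted ids.csv uses):

```python
import csv

def load_accounts(path):
    """Return the data rows of ids.csv, skipping the instruction row."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    return rows[1:]  # the first row holds per-field guidance, not account data
```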

It’s fastest to type selections and use the tab and/or arrow keys to navigate between the many fields in the two forms (first the ID selection/password/security question/birthdate options, then the user’s purchase information), so I didn’t screenshot every question and make conditionals. It takes less than 45 seconds to do one Apple ID creation, and I added a 12-second timeout between each step in case of a slow network. It’s available on GitHub; please give us feedback with what you think.

Change PresStore’s port number to avoid conflicts with other services

January 10th, 2013 by William Smith

PresStore by Archiware is a multi-platform data backup and archive solution. Rather than writing a GUI control panel application for each platform Archiware uses a web-based front end.

By default PresStore uses port 8000 for access:


This is a common port number, though, for many applications such as Splunk, HTTP proxies, games and applications that communicate with remote server services. 8000 isn’t a special port—it’s just a popular one.
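Before settling on a replacement port, it’s worth confirming nothing else on the host has already claimed it. A quick, PresStore-agnostic check sketched in Python:

```python
import socket

def port_in_use(port):
    """True when something on this host is already bound to the TCP port."""
    probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        probe.bind(("127.0.0.1", port))
    except OSError:
        return True
    finally:
        probe.close()
    return False

# e.g. pick 8001 only if port_in_use(8001) is False
```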

If PresStore is installed on a UNIX-based server with another application also using port 8000, changing its port number to something else is as simple as renaming a file. This file is located in PresStore’s install directory and is called lexxserv:8000:


A local administrator can change the name of this file using the mv command. Assuming he wants to change it to port 8001, he’d use:

sudo mv /usr/local/aw/conf/lexxserv:8000 /usr/local/aw/conf/lexxserv:8001

After changing the port, stop the PresStore service:

sudo /usr/local/aw/stop-server

And start it again:

sudo /usr/local/aw/start-server

Or just use the restart-server command:

sudo /usr/local/aw/restart-server

Windows administrators will need to open the PresStore Server Manager utility and change the port number in the Service Functions section.