Archive for the ‘General Technology’ Category

We Love The AFP548 Podcast

Friday, March 7th, 2014

Archion Management In AVID Unity Environments

Thursday, March 6th, 2014

Archion SATA to fibre channel arrays are very similar to most other SATA to fibre arrays, except that Archion units are optimized for Avid Unity use. They are typically sixteen-drive units which come preconfigured as three 5-drive RAID-5 sets with one spare. Each RAID-5 set is divided into four logical drives, each approximately the size of a physical drive, so that the Unity thinks it is dealing with real individual drives.

Certain organizations (Keycode, most notably) disable SMART checking and auto-replacement of failing drives. When working on an Archion, this is the first thing to check and re-enable. If a persistently failing drive stays in use, the pauses during access of its failing blocks can cause an Avid Unity system to drop all clients and the File Manager to stop operating.

Like most other SATA to fibre channel arrays, Archion units also support sending alerts via SMTP. This should also be configured so that any warnings or failures can be handled as soon as possible after an event occurs.

The drives are numbered going across from one through sixteen. RAID set 1 would be drives one through five, RAID set 2 would be six through ten, RAID set 3 would be eleven through fifteen, and drive sixteen would be the spare. You can generally gauge the amount of time an Archion has been in use based on how many drives are no longer in the proper order. As drives fail and the spare gets used, the replaced drive becomes the spare, so after a few years the numbering will be quite inconsistent.

Since Avid Unity SANs are typically used 24/7, it may not be feasible to ask everyone to stop working so the File Manager can be brought down. Hot swapping a failed drive should work, but the File Manager will fail if a certain number of I/O operations get queued without being dispatched. To minimize the possibility of the File Manager failing, attempt the hot swap of the failed drive when Unity activity is relatively quiet. This is especially true if the drive being replaced is in an active RAID set, since the rescan of the drives will trigger an automatic rebuild of that set.

The Archion has a Java-based GUI which is accessible through a web browser. The default IP address for an Archion is 192.168.1.123 (based on Avid Unity defaults in the 192.168.1.0/24 subnet). Additional Archion units will have addresses immediately above .123.
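If you are not sure which units are on the wire, a quick reachability check from any workstation on the Unity subnet will show which addresses are answering (a sketch; adjust the range to match the number of Archion units present):

for last in 123 124 125; do ping -c 1 192.168.1.$last > /dev/null 2>&1 && echo "192.168.1.$last is up"; done

Any address that answers should then serve the GUI when entered in a browser.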

The default password is 00000000 (eight zeros). After logging in you will typically want to check the event log, check the SMART status of all drives, wait until the Unity is relatively idle, swap the failed drive, then check the event logs to make sure everything happened as planned.

Because rebuilding a RAID set reads every block on every disk in that set, it is recommended that SMART status and the event log be checked again after the rebuild is finished. If the failed drive is replaced after the array has had time to finish rebuilding, then the checks performed while swapping the drive suffice.

Archion units generally do not require manual intervention when swapping drives. Like most other arrays, the unit will sound an annoying alarm when a drive fails and show a red LED on the drive which has failed. When you replace the failed drive with a new drive, the unit will automatically turn that drive into the hot spare without any intervention.

It is recommended to log in and check the status during and after the swap, simply to ensure that the array which was rebuilding as a result of the failed drive has had no errors or warnings during rebuild. It is entirely possible that exercising the drives during rebuild can cause another drive to begin failing, or to fail outright. Checking the event log after the rebuild has completed is always recommended.

Upgrade Lifesize Video Conferencing Units

Wednesday, March 5th, 2014

Updating a Lifesize Head Unit requires Internet Explorer on Windows with Flash installed. To run the update, first log in to www.lifesize.com to download updates. From there, click on Support and then Software Downloads. Then click on the serial number of your unit and choose the file to download from the column on the right.

Once downloaded, log in to the Head Unit using its IP address and password (by default the password for a Lifesize is 1234). Once logged in, click on the Maintenance tab on the right and select System Upgrade. Browse to your update (which is a .cmg file) and then click on the Upgrade button. Now wait until the unit restarts and test a connection.

Setting up iMedica

Saturday, March 1st, 2014

iMedica is an Electronic Health Record ("EHR") Electronic Data Interchange ("EDI") used for Partner Relationship Management ("PRM"). It is used in the health industry to manage the front- and back-office activities of a typical medical practice. It utilizes SQL databases, Active Directory authentication, and streamlined OCR input.

Note: The following assumes that the server is already set up as a DC, and ALL workstations have already been joined to the domain.

Non-cache clients are clients that are constantly in communication with the iMedica server while iMedica is running. To use non-cache clients, first verify users can log onto the machine using an Active Directory account. Cache clients require MSDE or MS SQL Server Personal with Enterprise Manager.

Once all requirements have been completed, an iMedica tech will need access to the server. Create ANOTHER account with admin privileges. Call iMedica, or follow the schedule, and they will install the iMedica software on the server REMOTELY. This will take about 4-5 hours. Once the server is ready, the workstations will need to be configured to communicate with iMedica via the iMedica client.

The iMedica technician who installed the server portion should e-mail the client instructions for installing the iMedica client and attaching the appropriate databases to it. Please keep in mind that THIS NEEDS TO BE DONE FOR EACH USER THAT WILL BE USING A COMPUTER. So, if there's a Windows computer that will have 5 nurses logging in with their AD credentials, then you need to install the iMedica client 5 TIMES, once under each user account. The user accounts DO NOT need to be local administrators.

The following needs to be completed WITH a local administrator account:

  • Go to “\\servername\imedica_install\PreInstall_items\tabletPCRuntime” and run setup.exe
  • Go to “\\servername\imedica_install\PreInstall_items\Tablet PC SDK 1.7” and run setup.exe

Client Setup: Initially Installing and Configuring the Client:

  • Go to “\\servername\imedica_client_install\” and run iMedica.Prm.Client.exe
  • After it is installed, it will automatically open. Once it is open, do the following:
  • Click on “Advanced >>”
  • Click on the magnifying glass that has now revealed itself below the “Advanced >>” button
  • Click “New”
  • You will now create two database connections: one to the Production Database and another to the Training Database.
  • Production Database
  • Enter the ID: "Dr's initials"-DB
  • Enter the NAME: “Dr’s name” -Server PRM
  • Enter the Application Server: “servername” (ex. this-server)
  • Enter the SQL Server: “servername”
  • Enter the DATABASE: PRM
  • Click “OK”
  • For the training Database fill out the fields as follows:
  • Enter the ID: Training
  • Enter the NAME: Training DB
  • Enter the Application Server: “servername” (ex. this-server)
  • Enter the SQL Server: “servername”
  • DATABASE: PRMTraining
  • Click “OK”
  • Highlight the “Training” database, and click “OK”
  • Remove the checkmark from the box labeled “Use Windows login user”
  • Login with the imedica admin credentials that you created during server prep/setup.

A successful configuration will allow you to log in to the program without any errors. For the client setup, follow Steps 1-2 (a, b, & c) and 7-9 from Stage 2: Phase B. You will need to log in as each user on the computer and initiate the install. Ignore steps 3-6 from Stage 2: Phase B, since the computer has already made the connection.

There can also be two types of scanners in an iMedica deployment. One will be a "card scanner" for driver's licenses and insurance cards. The second will be a multi-page scanner for medical records. To set up the scanners:

  • Determine which workstation will be the "card scanner" and which will be used for "medical records". There doesn't have to be anything special about the workstations, other than their having been prepped for iMedica and already having iMedica installed.
  • Connect the card scanner to the workstation.
  • Login to the workstation using an administrator account
  • Insert the CD that the card scanner came with.
  • Install the drivers automatically from the CD.
  • Go to Control Panel -> Scanners and double-click on the scanner.
  • Proceed as if you were going to scan; it will now ask to calibrate.
  • Place a black and white calibration sheet (the scanner should have come with "calibration papers") in the scanner.
  • Run calibration, and close the scanner dialogue box.
  • Login as a user that will be using iMedica on the card scanner workstation
  • Login to iMedica using the Training Database.
  • Click on “Find Patient”
  • Click the patient’s name
  • On the new window, click on the patient’s name again.
  • Wait about 30 seconds for the “Driver’s License Number” to become a hyperlink.
  • Click on the “Driver’s License Number” hyperlink.
  • Click on Import
  • Place anyone's driver's license into the scanner (picture side first, face down)
  • Choose “CSSN Driver License Import”
  • The driver's license is now scanned in. Click OK.
  • The driver's license information is now bound to the patient's record: the picture is tied to the patient's profile, and all of the info from the license is OCR'd into the appropriate fields in iMedica.

To install the medical record scanner:

  1. Connect the medical record scanner to the workstation.
  2. Login to the workstation using an administrator account.
  3. Insert the CD that the scanner came with.
  4. Install the drivers automatically from the CD.
  5. Go to Control Panel -> Scanners and double-click on the scanner.
  6. Proceed as if you were going to scan; it will now ask to calibrate.
  7. Place a black and white calibration sheet (the scanner should have come with "calibration papers") in the scanner.
  8. Run calibration, and close the scanner dialogue box.
  9. If calibration is not available, that's OK. Some scanners will not have it.

Once this has been done, iMedica has been rolled out. All that's left is training and the possible transfer of data from the old records system to the new one, which will be done by iMedica themselves.

When a new employee arrives, they will need an account in Active Directory and iMedica installed on EVERY workstation the user may work at. When a new workstation is purchased, the installation steps from earlier will have to be followed to prep the workstation and add the users.

Pulling Report Info from MunkiWebAdmin

Wednesday, November 6th, 2013

Alright, you've fallen in love with the Dashboard in MunkiWebAdmin – we don't blame you, it's quite the sight. Now you know one day you'll hack on Django and the client pre/postflight scripts until you can add that perfect view to further extend its reporting and output functionality, but in the meantime you just want to export a list of all those machines still running 10.6.8. Mavericks is free, and them folks still on Snow Leo are long overdue. If you've only got a handful of clients, maybe you set up MunkiWebAdmin using sqlite (since nothing all that large is actually stored in the database itself).

MunkiWebAdmin in action

Let's go spelunking and try to output just those clients in a more digestible format than HTML; the csv output option is a good place to start. We could tool around in an interactive session with the sqlite binary, but in this example we'll just run the query with that binary and cherry-pick the info we want. Most often, we'll use the information submitted as a report by the pre- and postflight scripts munki runs, which dumps into the reports_machine table. And the final part is as simple as you'd expect: we just select all from that particular table where the OS version equals exactly 10.6.8. Here's the one-liner:

sqlite3 -csv /Users/Shared/munkiwebadmin_env/munkiwebadmin/munkiwebadmin.db \
 "SELECT * FROM reports_machine WHERE os_version='10.6.8';"

 


And the resultant output:
b8:f6:b1:00:00:00,Berlin,"","",192.168.222.100,"MacBookPro10,1","Intel Core i7","2.6 GHz",x86_64,"8 GB"...

You can then open that in your favorite spreadsheet editing application and parse it for whatever is in store for it next!
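If the full row dump is more than you need, sqlite3 will also happily emit just the columns you care about, with headers – a sketch, assuming reports_machine has hostname and os_version columns (run ".schema reports_machine" first if you want to be sure):

sqlite3 -csv -header /Users/Shared/munkiwebadmin_env/munkiwebadmin/munkiwebadmin.db \
 "SELECT hostname, os_version FROM reports_machine WHERE os_version LIKE '10.6%';"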

Wishes Granted! Apple Configurator 1.4 and iOS 7

Wednesday, September 25th, 2013

Back in June, we posted about common irritations of iOS (6) device deployment, especially in schools or other environments trying to remove features that could distract students. Just like with the Genie, we asked for three wishes:

- 1. Prevent the addition of other email accounts, or 2. the sign-in (or creation!) of Twitter/Facebook(/Vimeo/Flickr, etc.) accounts

Yay! Rejoice in the implementation of your feature requests! At least when on a supervised device, you can now have these options show up as greyed out.

- 3. Disable the setting of a password lock…

Boo! This is still in the realm of things only an MDM can do for you. But at least it's not something new that MDMs need to implement. More agile ways to interact with App Lock should be showing up in a lot more vendors' products for a 'do not pass go, do not collect $200' way to lead a group of iPads through the exact app they should be using. Something new we're definitely looking forward to MDM vendors implementing is…

Over-the-Air Supervision!

Won’t it be neat when we don’t need to tether all these devices to get those extra management features?

And One More Thing


Oh, and one last feature I made reference to in passing: you can now sync a Supervised device to a computer! …With the caveat that you need to designate that functionality at the time you move the device into Supervised mode, and the specific Restriction payload needs setting appropriately.


We hope you enjoy the bounty that a new OS and updated admin tools bring.

Add OS X Network Settings Remotely (Without Breaking Stuff)

Monday, September 23rd, 2013

So you're going to send a computer off to a colocation facility, and it'll use a static IP and DNS when it gets there – info it'll need before it arrives. As is typical with colo, you access this computer remotely to prepare it for its trip, but you don't want to knock it off the network while putting this info in place, so you can verify it's good to go and shut it down.

It's the type of thing, like setting up email accounts programmatically, that somebody should have figured out and shared with the community at some point. But even if my google-fu is weak, I guess I can deal with having tomatoes thrown at me, so here's a rough mock-up:

 

#!/bin/bash
# purpose: add a network location with manual IP info without switching 
#   This script lets you fill in settings and apply them on en0 (assuming that's active)
#   but only interrupts current connectivity long enough to apply the settings,
#   it then immediately switches back. (It also assumes a 'Static' location doesn't already exist...)
#   Use at your own risk! No warranty granted or implied! Tell us we're doing it rong on twitter!
# author: Allister Banks, 318 Inc.

# set -x

declare -xr networksetup="/usr/sbin/networksetup"

declare -xr MYIP="192.168.111.177"
declare -xr MYMASK="255.255.255.0"
declare -xr MYROUTER="192.168.111.1"
declare -xr DNSSERVERS="8.8.8.8 8.8.4.4"

# grab the Hardware Port name from the line above en0 (may be multi-word, e.g. "Thunderbolt Ethernet")
declare -x PORTANDSERVICE="$($networksetup -listallhardwareports | awk '/en0/{print x};{x=$0}' | cut -d ' ' -f 3-)"

$networksetup -createlocation "Static" populate
$networksetup -switchtolocation "Static"
$networksetup -setmanual "$PORTANDSERVICE" "$MYIP" "$MYMASK" "$MYROUTER"
# DNSSERVERS stays unquoted so each server passes as its own argument
$networksetup -setdnsservers "$PORTANDSERVICE" $DNSSERVERS
$networksetup -switchtolocation Automatic

exit 0

Caveats: The script assumes the interface you want active in the future is en0, just for ease of testing before deployment. Also, that there isn't already a network location called 'Static', and that you do want all interfaces populated upon creation (because I couldn't think of particularly good reasons why not).

If you find the need, give it a try and tweet at us with your questions/comments!


The ‘Hidden’ Summary Tab

Friday, August 9th, 2013

Do you want AirPort Utility to look how it used to? Howsabout something akin to the Logs interface you could use to see connected clients? Well, mashing the option key has paid off again! As alerted to me on the Twitter via an @dmnelson re-tweet: https://twitter.com/jeff_lamarche/status/364905545272012800


This doesn’t really get you more in the way of features, but when change is scary and goes jingly-jangly in our pockets, seeing a familiar modal dialog makes us feel at ease.


Apple Mail 6.2 – Unexpectedly Quits When Selecting Messages

Friday, July 19th, 2013

I recently ran into an interesting issue, with Apple Mail seeming to randomly crash.

While browsing Apple Mail and selecting certain e-mails, Apple Mail would 'Unexpectedly Quit'. The messages that caused these quits were seemingly random: none were from the same contact, they had no shared elements (attachments, subject line, invalid characters, etc.), and there was nothing to help distinguish the cause of these crashes.

The only element that seemed to line up was that each had multiple recipients in the To: field, perhaps indicating a corrupted recipients file. To test this, I tried to open and review the previous recipients.


On selecting previous recipients, Mail would become non-responsive and hang indefinitely. This appeared to indicate that the cause of the unexpected quits was a corrupted MailRecents-V4.abcdmr file.

This was fixed with the following steps:

- Close out of Mail

- Open Finder

- From the Go menu, select Go To Folder… and paste in the following: ~/Library/Application Support/AddressBook/

 


 

- When the window opens, select MailRecents-V4.abcdmr and rename it to "OldMailRecents-V4.abcdmr"


- Reopen Mail, and breathe a sigh of relief.

That’s it!
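If you'd rather do the rename from Terminal, the same fix is a one-liner (quit Mail first):

mv ~/Library/Application\ Support/AddressBook/MailRecents-V4.abcdmr \
   ~/Library/Application\ Support/AddressBook/OldMailRecents-V4.abcdmr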

Note: The above steps will erase your autocomplete for previous recipients. If you have a backup of your MailRecents-V4.abcdmr through Time Machine or CrashPlan, restore that file (from a point prior to the date you started experiencing the issue) to the above Library path and complete the rest of the above steps.

Special Considerations for ‘Supervised’ iPads

Thursday, July 18th, 2013

Apple Configurator brings two key features. First, when Supervision is applied, you can pull back App 'codes' redeemed to devices when it's decided the App should be used on another iPad, or simply removed so as not to distract the current user. Second, when you move to the 'Assign' stage, a device can become multi-user, which helps firewall off multiple users' data from one another and facilitates handing documents out to Apps that support document transfer within Configurator. These are somewhat specialized use cases, so many use it for the basic setup functionality, which can be found in other tools, but in a less-optimized workflow than Configurator provides.

Supervised iPads get people a lot of the way toward their common goals, so they sometimes find that Assignment isn't as necessary. Perhaps they use Google Apps with Drive, or deploy a webclip to point folks to a site and have them fill out forms as a way of working on documents or collaborating. That still leaves two inevitable events among the random bumps or mishaps that may befall an iPad deployment:
1. What if an iPad is dropped and the screen gets cracked?
2. What if the iPad becomes non-responsive due to a bath in water, or worse, gets lost/forgotten/stolen?

For the first issue, a damaged iPad would lose all of the paid App codes redeemed on that device when wiped during repair, so it should be connected to Configurator and unsupervised before being sent out. You do so by highlighting the supervised iPad and choosing 'Unsupervise' from the Device menu, as shown:
Unsupervise from Device menu
To reclaim the inventory number used by a lost or otherwise non-contactable iPad, remove it by following the same process, but hold down the Option key while the unrecoverable iPad is highlighted and choose Remove from the Device menu. Unfortunately, new App codes would need to be purchased to replace the Apps used on the non-retrievable iPad.


Spelunking An iTunes Backup

Wednesday, June 12th, 2013

Say you're excited about installing a particular beta of a particular mobile operating system, and are foolhardy enough to put it on a phone that was in use for business purposes. Let's go even further, hypothetically, and say you had been using iCloud Backup, but made a backup with iTunes before upgrading… leaving about a half-day gap, during which contacts were added. This is a phone that's often used for testing and little else, so no accounts besides iCloud are configured, and you don't encrypt the backup because you don't have passwords you want/need restored. After the beta upgrade completes, you restore the iCloud Backup, leaving out that one phone number that's the direct line to a level-two support group at a certain backup company. iTunes is just not fun to plug into, though, so let's go spelunking in the backup it created.

First, I need to put the backup into a state I can interact with. For that I chose the product with the best domain name, http://supercrazyawesome.com, and its iOS Backup Extractor. I chose to put it all in tmp, so it gets dumped sooner rather than later, and found a promising database to sift through:

in tmp

Following basic sqlite3 commands I found on @tvsutton's site, I saw a promising table, ABPersonFullTextSearch_content. Sure enough, the contact info I was missing was there, and I could pull it out to restore just that one contact I'd created.
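For the curious, the poking around looked roughly like this (a sketch – the database filename in your extracted backup may differ):

sqlite3 AddressBook.sqlitedb
sqlite> .tables
sqlite> .schema ABPersonFullTextSearch_content
sqlite> SELECT * FROM ABPersonFullTextSearch_content LIMIT 5;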


Reduce email forgery by using SPF DNS records

Monday, June 10th, 2013

Here’s the problem:

Super Uber Bank is the largest bank in the world and spammers are using its name because of its recognizability. They’re sending email that looks like it’s coming from “Super Uber Bank Technical Support”. Super Uber Bank’s customers are clicking on links in those spam messages, which take them to a credit card phishing site, and those customers are handing over all their account information and passwords.

Email forgery is simple. Nothing stops me from setting my email address to “Super Uber Bank Technical Support <techsupport@superuberbank.com>” and sending you a message with a link to my credit card phishing site. Few email service providers require their customers to use valid email addresses when sending mail. Spammers just use their own servers anyway.

This puts the onus on the recipient’s mail server to validate incoming messages before passing them to you to read. Spam filtering is an art as much as a science and the software has to balance between what may be legitimate email but looks like spam and what is spam but looks legitimate.

Sender Policy Framework (SPF) records are DNS entries for a domain that provide the names of its authoritative email servers and sending domains. Continuing the Super Uber Bank story:

Super Uber Bank often uses a third-party marketing company that specializes in mass-marketing emails. So, it decides to implement an SPF record for its superuberbank.com domain. In that record, Super Uber Bank includes its marketing company's domain "supercrazymarketing.com" as authoritative for sending mail on its behalf.

An SPF record is a simple text (TXT) resource record added to the authoritative DNS servers for a domain. It sits alongside any A host records, MX records, CNAMEs, etc.

ubermail.superuberbank.com   A       50.100.200.254
superuberbank.com            MX      10 ubermail.superuberbank.com
@                            TXT     "v=spf1 ip4:50.100.200.254 include:supercrazymarketing.com -all"

In this case the third line is the SPF record. The "@" symbol is shorthand for the current domain (superuberbank.com). "TXT" denotes that this is a text record, whose value here is "v=spf1 ip4:50.100.200.254 include:supercrazymarketing.com -all".

The text of the value breaks down like this:

v=spf1                              This is the version of SPF record being used
ip4:50.100.200.254                  Allow mail from my own server at this IP address
include:supercrazymarketing.com     Include this as a valid sending domain
-all                                Reject anything that doesn't match these criteria
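Once the record is published, it is easy to spot-check from any machine with dig (hypothetical domain; allow for caching up to the record's TTL):

dig +short TXT superuberbank.com
"v=spf1 ip4:50.100.200.254 include:supercrazymarketing.com -all"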

And so the story ends:

The spammers are using open relays from a variety of seedy ISPs to send their phishing scams. However, when a Super Uber Bank customer’s email server receives one of these bogus Super Uber Bank Technical Support messages, it compares the address of the sending server to the addresses and domains Super Uber Bank has in its SPF record. Because the server is not in the SPF record the customer’s email server rejects the message without sending it to the customer.

Incorrectly implemented SPF records can stop mail flowing altogether; therefore, customers should consult their ISPs and, likewise, ISPs should consult with their customers before implementation. Some services such as Google Apps and Office 365 have published SPF record information for their customers.

The SPF Project offers a plethora of information for end-users, service providers and support technicians. Their site also includes links for mailing lists and references for SPF consultants who can assist with more complex email scenarios.

iOS 7 Management API and Apple Configurator Wishlist Quicky

Thursday, June 6th, 2013

We feel privileged to be living in the modern era: iOS device activation can happen over-the-air, and use of iTunes has almost completely been eclipsed by Apple Configurator. But it isn't uncommon to hear sysadmins referred to as 'the haters,' since things can never be easy or nice enough for us. (And in reality, there's still plenty of conflict and stress to go around without worrying about the reliability or functionality of our tools.) Besides the fact that enrollment profiles can always be removed at any time by end users, there are still surprisingly many things that require manual interaction to manage, and missing integrations with other Apple products. With something that could be called iOS 7 potentially around the corner, and with no inside information, here are some of the things that still trip up the modern iOS deployment in certain environments.

As of this point in time, through the official management API and payloads documented in the canonical reference Apple provides, you cannot do the following:

Restrictions
- Disable the setting of a password lock
Especially in education, the accidental turning on of this ‘feature’ has probably sold MDM more than anything else
- Prevent the addition of other email accounts
File transfer and content distribution is still by no means a solved problem, and email has always been a ubiquitous option – but in certain environments we probably don’t want accounts added nilly-willy… (er, strike that, reverse…)
- Prevent the sign-in (or creation!) of Twitter or Facebook accounts
Yay for social media integration! Boo for education or other environments where these devices aren’t to be used ‘socially.’

Account addition OR creation

Apple Configurator can allow the handing out of documents to an app like Adobe Reader (which still has an unfortunate amount of Adobe's interruptions in its first-time use experience), and you can collect documents as well when assigned devices get checked back in. The two apps you CAN'T at present add content/documents to? Apple's own iTunes U and iBooks apps! Nor can you pull in iMovie projects or pictures from the Camera Roll.

The longer you work with these things, the more corner/edge cases you notice – like the fact that you can't use two MDM services on the same device. It makes sense when you know the moving parts and think about the ramifications, but it still can surprise folks, because documentation doesn't seem to warn against it. (That I've found, at least; feel free to correct us on the Twitter or elsewhere!) We mention these things not to say it's a horrible experience to deploy the devices in most use cases, just to point out there's always room for improvement, and we're excited to see what the next version might offer.

How TED’s introduction to Google Glass Really Happened

Friday, April 12th, 2013


It starts with a replay of the Google Glass commercial, but then an uncomfortable founder of an internet company rambles for a while. At 13:12 he jokes about recording from the stage without people knowing. Strange times we live in.

Quick Update to a Radiotope Guide for Built-In Mac OS X VPN Connections

Tuesday, March 26th, 2013

Just a note for those folks with well-worn bookmarks to this post on Ed Marczak's blog, Radiotope.com, about authenticating VPN connections with Mac OS X Server's Open Directory – it's still valid today. When trying to use the System Preferences VPN client/network adapter with the built-in L2TP server in a Sonicwall, though, I was curious why OD auth wasn't working for me while users local to the Sonicwall worked fine. It had been a while since I'd last set this up, so I went search-engine spelunking and found a link that did the trick.

In particular, a comment by Ted Dively brought to my attention that you need to change the authentication type the L2TP service is configured to use: PAP instead of the more standard MSCHAPv2. (In the VPN sidebar item, choose L2TP Server, click the Configure button, and it's under the PPP tab.)

Where it's done

We hope that is of help to current and future generations.

LOPSA-East 2013

Monday, March 18th, 2013

For the first year I’ll be speaking at the newly-rebranded League of Extraordinary Gentlemen League of Professional System Administrators conference in New Brunswick, New Jersey! It’s May 3rd and 4th, and should be a change from the Mac-heavy conferences we’ve been associated with as of late. I’ll be giving a training class, Intro to Mac and iOS Lifecycle Management, and a talk on Principled Patch Management with Munki. Registration is open now! Jersey is lovely that time of year, please consider attending!

 

LOPSA-East '13

PSU MacAdmins Conference 2013

Wednesday, February 27th, 2013

It's Secret!

For the third year, I'll be presenting at the PSU MacAdmins Conference! This year I'm lucky enough to be able to present two talks, "Backup, Front to Back" and "Enough Networking to be Dangerous". But I'm really looking forward to what I can learn from those speaking for the first time, like Pepijn Bruienne and Graham Gilbert, among others. The setting and venue are top-notch. It's taking place May 22nd through the 24th, with a Boot Camp for more foundational topics on May 21st. Hope you can join us!

Set Splunk MySql Monitor To Start On Boot (CentOS)

Thursday, January 31st, 2013

Back in the old days of Unix there was an easy way to start a daemon or script every time a computer booted: simply put it in one of the /etc/rc.? text files and it would start all the services in the order specified. Later, this was made more flexible by having different startup folders based on which runlevel you were on. Later still, these rc[1-6].d startup folders became deprecated, yet they're still used to some extent by legacy programs, and now things are all managed with new commands.

 

To put it bluntly, it's messy, non-intuitive and definitely not as easy as it should be. There is hope, however: getting a script or daemon to run "the right way" at startup isn't too terribly daunting, and I'll walk you through the process now.

 

In our instance we need a program called splunkmysqlmonitor.py to run on boot. It takes one of three arguments (start, stop, or restart) and is located in /opt/splunk/etc/apps/mysqlmonitor/bin/daemon/. It's almost ready to run at startup, but first we should look at the command we'll use to register it: chkconfig.

The chkconfig command takes a script that's located in /etc/init.d and creates all the necessary symlinks for it in the rc[1-6].d folders, which tell the system what order to start all the services in and which runlevels start which services. Runlevels are mostly deprecated in Linux these days, but as an FYI, the runlevels you need to pay attention to are 2, 3, 4 and 5, and they are almost always identical. The only thing you really need to worry about is the order in the boot process that the scripts get started, and, less so, the order in which they get shut down on reboot. For example, a program that relies on NFS needs to start after NFS has mounted its drives successfully. Numbers lower in the list start first, and the list goes from 1-99. Since splunk is at priority 90 and this monitor needs to start after splunk, I'll give it a priority of 95. As for shutdown, this service should turn off quickly, since it relies on other services and may spit out errors if those dependent services are turned off before it. I'll give the shutdown a priority of 5, which means it'll be one of the first processes to shut down.

 

So now that we know where in the boot process the script should run (priority 95) and which runlevels it should run in (2, 3, 4, 5), we just need to put this info into the system somehow. We do this by adding specially formatted comment lines to our script located in /etc/init.d. Here's what our example looks like with the new comments added:

 

#!/usr/bin/env python
#         run level  startup  shutdown
# chkconfig: 2345      95        5 
# description: monitors local mysql processes for splunk
# processname: splunkmysqlmonitor
#
import sys, time, os, socket...

Now we have to put the script into the /etc/init.d folder and that is best done with a symlink.

     ln -s /opt/splunk/etc/apps/mysqlmonitor/bin/daemon/splunkmysqlmonitor.py /etc/init.d

And finally the chkconfig command itself

     chkconfig --add splunkmysqlmonitor.py

This should add the script to startup and next time you reboot it’ll launch automagically.
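To double-check that the symlinks landed in the right runlevels, ask chkconfig for a listing; it should show something like this (the service name is just the script's filename):

     chkconfig --list splunkmysqlmonitor.py
     splunkmysqlmonitor.py  0:off  1:off  2:on   3:on   4:on   5:on   6:off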

MySQL Monitoring with Splunk

Wednesday, January 30th, 2013

MySQL Logging with Splunk

Getting Splunk running and monitoring common log formats, such as apache logs and system logs, is a pretty straightforward process. Some would even call it intuitive, but setting up some of the optional plugins can be tricky the first time around. The following is a quick and dirty guide to getting the MySQL monitor from remora up and running in your splunk instance.

This article assumes you have a splunk server as well as a separate database server running a splunk forwarder that is pushing logs to the main splunk server.

The first step is to prepare your splunk server for the incoming mysql stats. We’ll need to make a custom index (called mysql in our case) on both the server and the database host.  See below:

create mysql index on splunk server

Once that's done we'll also need to create a custom TCP listener on the splunk server. This is different from the standard listener that runs on port 9997. Go to Manager, then Data Inputs, to create it:

add listener1

 

add listener2

 

set raw tcp listener on splunk server

 

As you can see, we used port 9936 for a listener that automatically imports into the mysql index. You'll want to ensure that this port is reachable from your database server and that no firewalls are blocking the connection. You can test this with a simple telnet command; if you see a prompt that says "Escape character is" then you're good to go.
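For example, run from the database server (using the splunk server address and listener port from the config later in this post):

     telnet 172.16.154.250 9936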

telnet to port 9936 to test

 

Once we have verified the listener is up and running the next step is to get the mysql monitor installed on all the machines. It’s easily available via the splunk marketplace. All you need is to create a username and password.

go to marketplace to install apps

Once in the marketplace, locate the MySQL monitor

install mysql monitor on splunk server and db servers

And then restart splunk

restart splunk

Now that that's installed, we need to make sure all the dependencies for the mysql monitor are set up on the database servers that will be pushing data to the main splunk server.

To install them on a Debian-based OS, use this command:

    apt-get install python-mysqldb

For a Red Hat-based OS, use this:

    yum install MySQL-python

Accept all the dependencies and, assuming there were no issues, you're just about ready.

Next on the list is to make sure your splunk monitoring daemon can talk to the local mysql server. On our test machine, mysql is only listening on the internal IP, and we have to ensure that the mysql user splunk@172.16.154.141 can connect and has permission. You may need to run the following command to grant yourself permission:

     grant all privileges on *.* to 'splunk'@'mysql_ip' identified by 'your-password';

To verify that splunk can access your tables, use the following command:

     mysql -u splunk -h mysql_ip -p
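Once connected, running SHOW GRANTS; at the mysql prompt is a quick sanity check that the grant above took effect for the current user:

     SHOW GRANTS;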

Once you've got that down, the last step is to configure the mysql monitor's config.ini. Here's the config.ini we used:

[mysql]
 host=172.16.154.141
 port=3306
 username=splunk
 password=your-password
[splunk]
 host=172.16.154.250
 port=9936
[statusvars]
 interval=10
[slavestatus]
 interval=10
[tablestats]
 interval=3600
[processlist]
 interval=10

As of this writing, the place to put that config file is: /opt/splunk/etc/apps/mysqlmonitor/bin/daemon

To start the mysql monitor, run this on the db server: /opt/splunk/etc/apps/mysqlmonitor/bin/daemon/splunkmysqlmonitor.py

That’s it!  If you check the Splunk server then you should start seeing the mysql logs popping in immediately.

view mysql logs

 

mysql host overview

 

Pretty nice eh?

 

Next time I’ll show you how to make the splunk monitor daemon start on boot.

FileVault 2 Part Deux, Enter the Dragon

Wednesday, January 30th, 2013

The Godfather of FileVault, Rich Trouton, has probably encrypted more Macs than you. It’s literally a safe bet, horrible pun intended. But even he hadn’t taken into account a particular method of institution-wide deployment of recovery keys: disk-based passwords.

As an exercise, imagine you have tier-one techs that need to get into machines as part of their duties. They would rather not boot the recovery partition over target disk mode (thanks to Greg Neagle for clearing up confusion regarding how to apply that method) and slide a valuable certificate into place and whisper an incantation into its ear to operate on an un-booted volume, nor do they want to reset someone's password with a 'license plate' code; they just want to unlock a machine that doesn't necessarily have your admin enabled for FV2 on it. Back in 10.7, before the csfde command line tool (Google's reverse-engineered CLI FileVault initialization tool, mostly applicable to 10.7 since 10.8 has fdesetup), the process of adding users was labor-intensive as well. Even in fdesetup times, you cannot specify multiple users without having their passwords and passing them in an unencrypted plist or via stdin.

In this scenario, it’s less a ‘get out of jail free’ card for users that forget passwords, and more of a functional, day-to-day let-me-in secret knock. How do I get me one of those?

Enter the disk password. (Meaning like Enter the Dragon or Enter the Wu, not literally 'enter your disk password'; this is a webpage, not the actual pre-boot authentication screen.)

 

diskPasswordification

 

How did we get here? No advanced black magic; we just run diskutil cs (short for coreStorage, the name of the quacks-like-a-duck-so-call-it-a-duck logical volume manager built in to 10.7 Lion and later) with the convert verb and -passphrase option, pointing it at the root volume. We could encrypt any accessible drive, but the changes to login are what we're focusing on now.
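Concretely, that looks something like the following (a sketch – it kicks off background encryption of the boot volume immediately, so test on a sacrificial machine; omitting the passphrase argument should make diskutil prompt for it rather than leaving it in shell history):

sudo diskutil cs convert / -passphrase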

The end result, once the process finishes and the machine next reboots, is that this (un-customizable) icon appears at the login window:

diskPassicon

Remember that this scenario is about 'shave and a haircut, two bits', not necessarily the institution-wide systems meant to securely manage recovery options. Why haven't you (or the Godfather) heard of this having been implemented for institutions until now-ish? (Was he too busy meticulously grooming his links to anything a mac admin could possibly need to know, or composing the copious content to later link to? Say that three times fast!) (Yes, the disk password functionality has been around for a bit, but we've gotten a report of it being deployed, which prompted this post.) Well, there are two less attractive parts of this setup that systems like Cauliflower Vest and commercial solutions like Credant or Casper sidestep:

1. The password (for one or many hosts) needs to be sent TO a shell on the local workstation's command line in some way, and rotating the password requires the previous one to be passed to stdin
2. It can be confusing that the pre-boot login window appears to have a user account called Disk Password

What's the huge advantage over the other systems? Need to rotate the password? No decrypt/re-encrypt time! (Unlike the 'license plate' method.) Old passwords are properly 'expired'! (Unlike the 'Institutional Recovery Key' method of using a certificate.) I hope this can be of use to environments looking for a 'middle ground' between complex systems and manual interaction. Usability is always a factor when discussing security products, so this additional method is a welcome one to consider the benefits of and, as always, test.

Sure, We Have a Mac Client, We Use Java!

Thursday, January 24th, 2013

We all have our favorite epithets to invoke for certain software vendors and the practices they use. Some of our peers go downright apoplectic when speaking about those companies and the lack of advances we perceive in the name of manageable platforms. Not good, life is too short.

I wouldn't have even imagined APC would be any different in this respect; they are quite obviously a hardware company. You may ask yourself, though ('is your refrigerator running?'-style): is the software actually listening for a safe shutdown signal from the network card installed in the UPS? Complicating matters is:
- The reason we install this Network Shutdown software from APC on our server is to receive this signal over ethernet, not USB, so it’s not detected by Energy Saver like other, directly cabled models

- The shutdown notifier client doesn’t have a windowed process/menubar icon

- The process itself identifies as "Java" in Activity Monitor (just like… CrashPlan – although we can kind of guess which one is using 400+ MBs of virtual memory while idle…)

Which sucks. (Seriously, it installs in /Users/Shared/Applications! And runs at boot with a StartupItem! In 2013! OMGWTFBBQ!)

Calm, calm, not to fear! ps sprinkled with awk to the rescue:

ps avx | awk '/java/&&/Notifier/&&!/awk/{print $17,$18}'

To explain the ps flags: a allows for all users' processes, v prints a long format with more criteria, and x includes processes even if they have no controlling terminal. Then awk looks for both Java and the 'Notifier' jar name, minus our awk itself, and prints the relevant fields, highlighted below (trimmed and rewrapped for readability):

:./comp/pcns.jar:./comp/Notifier.jar: 

com.apcc.m11.arch.application.Application
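On 10.8 and later, pgrep can do the same spot-check with less typing (assuming the jar name hasn't changed between versions):

pgrep -fl Notifier.jar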

So at least we can tell that something is running, and appreciate the thoughtful development process APC followed, at least while we aren’t fashioning our own replacement with booster serial cables and middleware. Thanks to the googles and the overflown’ stacks for the proper flags to pass ps.

InstaDMG Issues, and Workflow Automation via Your Friendly Butler, Jenkins

Thursday, January 17th, 2013

“It takes so long to run.”

“One change happens and I need to redo the whole thing”

“I copy-paste the newest catalogs I see posted on the web, the formatting breaks, and I continually have to go back and check to make sure it’s the newest one”

These are the issues commonly experienced by those who want to take advantage of InstaDMG, and for some, they may be enough to keep them from being rid of their Golden Master ways. Of course there are a few options to address each of these in turn, but you may have noticed a theme in the blog posts I've penned recently, and that is:

BETTER LIVING THROUGH AUTOMATION!

(We'll get to how automation takes over shortly.) First, to review: a customized InstaDMG build commonly consists of a few parts – the user account, a function to answer the setup assistant steps, and the bootstrap parts for your patch and/or configuration management system. To take advantage of the (hopefully) well-QA'd vanilla catalogs, you can nest one in your custom catalog via an include-file line, and you only update your custom software parts listed above in one place. (And preferably you keep those projects and catalogs under version control as well.)

All the concerns paraphrased at the start of this post just happen to have been discussed recently on The Graham Gilbert Dot Com. Go there now, and hear what he has to say about it. Check out his other posts, I can wait.

Graham Gilbert's Blog
Back? Cool. Now you may think those are all the answers you need. You're mostly right, you smarty you! SSDs are not so out-of-reach for normal folk, and they really do help speed up the I/O-bound process, so there's less cost to create and repeat builds in general. But then there's the other manual interaction and regular repetition – how can we limit it to as little as possible? Yes, the InstaDMG robot's going to do the heavy lifting for us by speedily building an image, and using version control on our catalogs helps us track change over time, but what if Integrating the changes from the vanilla catalogs was Continuous? (Answers within!)

If It’s Worth Doing, It’s Worth Doing At Least Three Times

Monday, January 14th, 2013

In my last post about web-driven automation, we took on the creation of Apple IDs in a way that would require a credit card before actually letting you download apps (even free ones). This is fine to speed up the creation process when actual billing will be applied to each account one at a time, but for education or training purposes where non-volume-license purchases wouldn't be a factor, there is the aforementioned 'BatchAppleIDCreator' AppleScript. It hasn't been updated recently, though, and I still had more automation tools I wanted to let have a crack at a repetitive workflow like this use case.

SikuliScript was born out of MIT research in screen reading, which roughly approximates what humans do as they scan the screen for a pattern and then take action. One can build a Sikuli script from scratch by taking screenshots and then tying together the actions you'd like to take in its IDE (which essentially renders HTML pages of the 'code'). You can integrate Python or Java, although it needs (system) Java and the Sikuli tools to be in place in the Applications folder to work at all. For Apple ID creation in iTunes, which is the documented way to create an ID with the "None" payment method, Apple endorses the steps in this knowledge base document.

Sikuli AutoAppleID Creator Project

When running, the script does a search for iBooks, clicks the "Free" button to trigger Apple ID login, clicks the Create Apple ID button, clicks through a splash screen, accepts the terms and conditions, and proceeds to type in information for you. It gets this info from a spreadsheet (ids.csv) that I adapted from the BatchAppleIDCreator project, but it currently hard-codes the security questions and answers. There is guidance in the first row on how to enter each field, and you must leave that instruction row in, although the NOT IMPLEMENTED section will not be used as of this first version.

It's fastest to type selections and use the tab and/or arrow keys to navigate between the many fields in the two forms (first the ID selection/password/security question/birthdate options, then the user's purchase information), so I didn't screenshot every question and make conditionals. It takes less than 45 seconds to do one Apple ID creation, and I made a 12-second timeout between each step in case of a slow network. It's available on GitHub; please give us feedback with what you think.

Change PresStore’s port number to avoid conflicts with other services

Thursday, January 10th, 2013

PresStore by Archiware is a multi-platform data backup and archive solution. Rather than writing a GUI control panel application for each platform Archiware uses a web-based front end.

By default PresStore uses port 8000 for access:

http://localhost:8000

This is a common port number, though, for many applications such as Splunk, HTTP proxies, games and applications that communicate with remote server services. 8000 isn’t a special port—it’s just a common port.

If PresStore is installed on a UNIX-based server with another application also using port 8000, changing its port number to something else is as simple as renaming a file. This file is located in PresStore’s install directory and is called lexxserv:8000:

/usr/local/aw/conf/lexxserv:8000

A local administrator can change the name of this file using the mv command. Assuming he wants to change it to port 8001, he’d use:

sudo mv /usr/local/aw/conf/lexxserv:8000 /usr/local/aw/conf/lexxserv:8001

After changing the port, stop the PresStore service:

sudo /usr/local/aw/stop-server

And start it again:

sudo /usr/local/aw/start-server

Or just use the restart-server command:

sudo /usr/local/aw/restart-server
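After the restart, a quick way to confirm the web interface is answering on the new port is an HTTP request against it:

curl -I http://localhost:8001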

Windows administrators will need to open the PresStore Server Manager utility and change the port number in the Service Functions section.

25 Tips For Technical Writers

Wednesday, January 9th, 2013

At 318, we write a pretty good amount of content. We have 5 or so authors on staff, write tons of technical documentation for customers and develop a fair amount of courseware. These days, I edit almost as much as I write. And in doing so, I’ve picked up on some interesting trends in how people write, prompting me to write up some tips for the blossoming technical writer out there:

  1. Define the goal. What do you want to say? The text on the back jacket of most of my books was written before I ever wrote an outline. Sometimes I update the text when I’m done with a book because the message can change slightly with technical writing as you realize some things you’d hoped to accomplish aren’t technically possible (or maybe not in the amount of time you need to use).
  2. Make an outline. Before you sit down to write a single word, you should know a goal and have an outline that matches to that goal. The outline should be broken down in much the same way you’d lay out chapters and then sections within the chapter.
  3. Keep your topics separate. A common trap is to point at other chapters too frequently. Technical writing does have a little bit of a choose-your-own-adventure aspect, but referencing other chapters is often overused.
  4. Clearly differentiate between section orders within a chapter. Most every modern word processing tool (from WordPress to Word) provides the ability to have a Header or Heading 1 and a Header or Heading 2. Be careful not to confuse yourself. I like to take my outline and put it into my word processing program and then build out my headers from the very beginning. When I do so, I like for each section to have a verb and a subject that defines what we’re going to be doing. For example, I might have Header 1 as Install OS X, with Header 2 as Formatting Drives followed by Header 2 as Using the Recovery Partition followed by Header 3 of Installing the Operating System.
  5. Keep your paragraphs and sentences structured. Beyond the headings structure, make sure that each sentence only has one thought (and that sentences aren’t running on and on and on). Also, make sure that each paragraph illustrates a sequence of thoughts. Structure is much more important with technical writing than with, let’s say, science fiction. Varying sentence structure can keep people awake.
  6. Use good grammar. Bad grammar makes things hard to read and most importantly gets in the way of your message getting to your intended audience. Strunk and White’s Elements of Style is very useful if you hit a place where you’re not sure what to write. Grammar rules are a lot less stringent with online writing, such as a website. When it comes to purposefully breaking grammatical rules, I like to make an analogy with fashion. If you show up to a very formal company in $400 jeans, they don’t care that your jeans cost more than most of their slacks; they just get cranky you’re wearing jeans. Not everyone will pick up on purposeful grammatical lapses. Many will just judge you harshly. Especially if they hail from the midwest.
  7. Define your audience. Are you writing for non-technical users trying to use a technical product? Are you writing for seasoned Unix veterans trying to get acquainted with a new version of Linux? Are you writing for hardened programmers? The more clearly you define the audience the easier it is to target a message to that audience. The wider the scope of the audience the more people are going to get lost, feel they’re reading content below their level, etc.
  8. Know your style guide. Whoever you're writing for probably has a style guide of some sort. This style guide will lay out how you write, specific grammar styles they want used, and hopefully includes a template with styles pre-defined. I've completed several writing gigs, only to discover I needed to go back and reapply styles to the entire content. When you do that, something will always get missed…
  9. Quoting is important when writing code. It's also important to quote some text. If you have a button or text on a screen with one word that begins with a capped letter, you don't need to quote that in most style guides. But if there's more than one word, or any of the words use a non-capped letter or have a special character, then the text should all be quoted. It's also important to quote and attribute text from other locations. Each style guide does this differently.
  10. Be active. No, I’m not saying you should run on a treadmill while trying to dictate the chapter of a book to Siri. Use an active voice. For example, don’t say “When installing an operating system on a Mac you should maybe consider using a computer that is capable of running that operating system.” Instead say something like “Check the hardware compatibility list for the operating system before installation.”
  11. Be careful with pronouns. When I’m done writing a long document I’ll do a find for all instances of it (and a few other common pronouns) and look for places to replace with the correct noun.
  12. Use examples. Examples help to explain an otherwise intangible idea. It’s easy to tell a reader they should enable alerts on a system, but much more impactful to show a reader how to receive an alert when a system exceeds 80 percent of disk capacity.
  13. Use bullets or numbered lists. I love writing in numbered lists and bullets (as with these tips). Doing so allows an author to most succinctly go through steps and portray a lot of information that is easily digestible to the audience. Also, if one of your bullets ends with a period, they all must. And the tense of each must match.
  14. Use tables. If bullets are awesome then tables are the coolest. You can impart a lot of information using tables. Each needs some text explaining what is in the table and a point that you’re usually trying to make by including the table.
  15. Judiciously use screen shots. If there’s only one button in a screen shot then you probably don’t need the screen shot. If there are two buttons you still probably don’t need the screen shot. If there are 20 and it isn’t clear in the text which to use, you might want to show the screen. It’s easy to use too many or not enough screen shots. I find most of my editors have asked for more and more screens until we get to the point that we’re cutting actual content to fit within a certain page count window. But I usually have a good idea of what I want to be a screen shot and what I don’t want to be a screen shot from the minute I look at the outline for a given chapter. Each screen shot should usually be called out within your text.
  16. Repetition is not a bad thing. This is one of those spots where I disagree with some of my editors from time to time. Editors will say "but you said that earlier" and I'll say "it's important." Repetition is a bad thing when you're just rehashing content, but intentionally repeating something to drive home a point is fine. Note: I like to use notes/callouts when I repeat things.
  17. White space is your friend. Margins, space between headers, kerning of fonts. Don’t pack too much crap into too little space or the reader won’t be able to see what you want them to see.
  18. Proofread, proofread, proofread. And have someone else proofread your stuff.
  19. Jargon, acronyms and abbreviations need to be explained. If you use APNS, you only have to define it once, but it does need to be defined.
  20. Keep your personality, carefully. I keep having editors say "put some personality into it," but then they invariably edit out the personality. I'm not sure if this just means I have a crappy personality, but it brings up a point: while you may want to liven up text, don't take away from the meaning by doing so.
  21. Don’t reinvent the wheel. Today I was asked again to have an article from krypted included in a book. I never have a problem with contributing an article to a book, especially since I know how long it takes to write all this stuff. If I can save another author a few hours or days then they can push the envelope of their book that much further.
  22. Technical writing is not a conversation. Long, comma-spliced sentences are probably bad. The word "um" is definitely bad. Technical writing should not ramble; it should be somewhat formal. You can put some flourish in, but make sure the sentences and arguments are meaningful, as with a thesis.
  23. Be accurate. Technical reviewers and technical editors help make sure you're accurate, but test everything yourself: code, steps, and so on. Make sure that what you're saying is correct up to the patch level, and not just for a specific environment like your company or school.
  24. Use smooth transitions between chapters. This means each chapter gets a conclusion that at least introduces the chapter that follows. Don't overdo the transitions or get into the weeds re-explaining an entire topic.
  25. Real writers publish. If you write a 300 page document and no one ever sees it, did that document happen? If the document isn't released in a timely manner, the content might be out of date before it gets into a reader's hands. I like to take my outline (step 2) and establish a budget for each part (a week, 20 hours, or something like that).
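
To make tip 12 concrete, here is a minimal sketch of the kind of example that turns "enable alerts" into something a reader can actually run. It's illustrative only: the 80 percent threshold comes from the tip above, the path is a stand-in, and a real monitor would send mail or page someone rather than print.

    # Minimal sketch: warn when a volume passes 80 percent of capacity.
    # The path and the print-based "alert" are hypothetical stand-ins.
    import shutil

    THRESHOLD = 0.80  # 80 percent, per tip 12
    PATH = "/"        # volume to check

    usage = shutil.disk_usage(PATH)
    fraction_used = usage.used / usage.total

    if fraction_used > THRESHOLD:
        print(f"WARNING: {PATH} is {fraction_used:.0%} full")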

Quickly forward individual emails using Outlook for Mac

Tuesday, January 8th, 2013

Forwarding an email message is fairly simple, but forwarding multiple messages can be inconvenient for either the sender or the receiver.

If the sender forwards multiple messages as attachments, the recipient receives one message with a variety of potentially unrelated information. This also makes sorting by subject or Date Sent impossible. If the recipient wants individual messages, the forwarder has no option but to send each message individually. This is time-consuming.

Like most email clients for Mac OS X, Outlook for Mac can forward messages, but it has a unique feature that makes automating the forwarding of individual messages easy without resorting to scripting: it can forward using a rule.

But Apple’s Mail, Thunderbird and practically any other email client for Mac has rules too! What makes Outlook different?

Outlook can run disabled rules individually. Both Mail and Thunderbird support creating rules and then disabling them so that they won't be applied to incoming messages; however, for either of those clients to run a single rule manually, it must run all rules, whether they're enabled or disabled. Running a long list of rules is potentially troublesome.

To configure a rule in Outlook:

  1. Select Tools menu –> Rules… and select the type of email account using this rule (POP, IMAP or Exchange).
  2. Click the + (plus) button to add a new rule.
  3. Give the rule a descriptive name such as “Forward to <email address>”.
  4. Set the rule to apply to All Messages.
  5. Set the rule to Forward To <email address>.
  6. Deselect the Enabled option. This prevents the rule from firing when new mail arrives.

[Screenshot: Rule settings]

To use this rule to forward multiple messages individually:

  1. Select one or more messages in Outlook’s message list.
  2. Right-click or Control-click anywhere within the selected messages.
  3. Select Rules –> Apply –> Forward to <email address>.

[Screenshot: Forward rule]

The rule runs against each message and should take only a few seconds. The recipient receives individually forwarded messages. Both sides save time.

…’Til You Make It

Monday, January 7th, 2013

Say you need a bunch of Apple IDs, and you need them pronto. There's a form you can fill out, a bunch of questions floating in a window in some application; it can feel very… manual. A gentleman on the Enterprise iOS site entered, filling the void with an AppleScript that could batch-create IDs with iTunes (and it has seen updates thanks to Aaron Friemark).

That bikeshed, though, was just not quite the color I was looking for. I decided to Fake it. Are we not Professional Computer Operators?

Before I go into the details, a different hypothetical use case: say you just migrated mail servers, and didn’t do quite enough archiving previously. Client-side moves may be impractical or resource-intensive. So you’d rather archive server-side, but can’t manipulate the mail server directly, and the webmail GUI is a touch cumbersome: are we relegated to ‘select all -> move -> choose folder -> confirm’ while our life-force drains away?

Fake is described as a tool for web automation and testing. It's been around for a bit, but it took an 'Aha!' moment while pondering these use cases for me to realize its power. What makes it genius is that you don't need to scour HTML source to find the ID of the element you want to interact with: just Control-drag to the element and specify what you want to do with it. (There are top-notch videos describing these options on the website.) And it can loop. And delay (either globally or between tasks), and the tasks can be grouped, disabled in sections, organized into a workflow, and saved for later use. (Can you tell I'm a bit giddy about it?)
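
For contrast, here's roughly what that same chore looks like when scripted by hand, as a minimal Selenium sketch in Python. This is exactly the element-hunting and manual waiting that Fake's Control-drag recording spares you; the URL and element IDs below are hypothetical stand-ins.

    # The sort of hand-rolled browser automation Fake saves you from.
    # URL and element IDs are hypothetical stand-ins.
    import time
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Safari()
    driver.get("https://webmail.example.com")  # hypothetical webmail

    for _ in range(100):  # loop over pages of messages
        driver.find_element(By.ID, "select-all").click()  # IDs dug out of the HTML
        driver.find_element(By.ID, "move-menu").click()
        driver.find_element(By.LINK_TEXT, "Archive 2012").click()
        time.sleep(2)  # the kind of delay Fake sets globally or per task

    driver.quit()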

[Screenshot: Fake in action with Mail]

So that mail archive can loop away while you do dishes. Got to the end of a date range? Pause it, change the destination folder mid-loop, and keep it going. (There is a way to look at the elements and build a conditional when it reads a date stamp, but I didn't get that crazy with it… yet.)

And now even verifying the email addresses used with the Apple ID can be automated! Blessed be the lazy sysadmin.

The State of Tablets in Schools

Thursday, January 3rd, 2013

Any managed IT environment needs policies. One of the obvious ones is to refresh the hardware on some sort of schedule so that the tools people need are available and they aren’t hampered by running new software on old hardware. Commonly, security updates are available exclusively on the newest release of an operating system. Tablets are just the same, and education has been seeing as much of an influx of iOS devices as anywhere else.

Fraser Speirs has just gone through the process of evaluating replacements for iPads used in education, and he discusses the criteria he's come up with and his conclusions on his blog.

A Simple, Yet Cautionary Tale

Friday, December 28th, 2012

While we don't normally cover web development security basics, or find much to report when poking around in iOS apps, a great example of independent investigative tech journalism related to these topics broke late last week. On his blog Neglected Potential, Nick Arnott (@noir) expands on a previous post about how data is stored within an app (nice shout-out to a personal fave, PhoneView by Ecamm) to talk about how an app communicates with whatever services it may be hooked up to. Generally speaking, SSL and PKI don't magically solve all our issues (as comically referred to here: "This is 2012 and we're still stitching together little microcomputers with HTTPS and ssh and calling it revolutionary"), and end users reflexively clicking 'accept' on self-signed certificate warnings is the front line of the convenience-versus-security battle. No, you shouldn't send auth in plaintext just 'cause it's SSL. (Yes, you should be seeding any straggler self-signed certs on the devices in your purview so you never need to say 'just for this ONE site's self-signed cert, please just click Continue'.) The fact that a banking user's SSN was being sent to the app on every communication was… surprising, and it was corrected immediately after the heightened interest resulting from the aforementioned blog post.

[Image: Security via public trust]

After the publicity surrounding the post, however, folks were reassured when Mr. Arnott got an immediate audience with Simple's Director of Engineering, Brian Merritt (@btmerr). Perhaps the flaw was considered too contrived for the traditional channel (read: an email to their security team) to respond in a way that satisfied Mr. Arnott before he went ahead and published his post. "If only Jimmy had gone to the police," the saying goes, "none of this would have happened." Please do note that while responsible disclosure was attempted, the issue is with PKI and not with Simple itself, and updates were added to the post when clarifications were worth mentioning, to present the facts in an even-handed manner. A key take-away is that there is no live, zero-day exploit going on, just the relative ineffectiveness of PKI being exposed.

[Screenshot: Simple refusing a session when talking to Charles Proxy]

Although a process can enable the snooping of traffic, by default proxied SSL wouldn't be allowed to start a session.

But even more importantly, the fact that observing the traffic was possible at all (thanks to Charles Proxy, also recently mentioned on @tvsutton's MacOps blog) highlights the ease with which basic internet security can be thwarted, and how much progress is left to be made. Of the improvements out there, certificate pinning is one of those 'new to me' enhancements to PKI, and it luckily already has proposals in for review with the IETF. (An interesting contender from about a year ago is expounded on at the tack.io site.) There are quite a few variables involved that make intelligent discussion of the topic difficult for amateurs, but the take-away should be that you can inspect these things yourself, as convoluted as it may be to get to the root cause of security issues. Hopefully we'll have easier-to-deploy systems that'll let us never 'give up' and use autosign again.
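
To make the pinning idea concrete, here's a minimal sketch in Python of what an app-side check might look like: the client remembers a known-good fingerprint of the server's certificate and refuses to talk if it ever changes, proxied MITM included. The host and fingerprint are hypothetical placeholders, and real proposals (such as TACK, above) pin keys rather than whole certificates.

    # Minimal certificate-pinning sketch: compare the server's DER-encoded
    # certificate against a known-good SHA-256 fingerprint.
    # HOST and PINNED_SHA256 are hypothetical placeholders.
    import hashlib
    import socket
    import ssl

    HOST = "bank.example.com"
    PINNED_SHA256 = "0" * 64  # replace with a fingerprint recorded on first trusted use

    context = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            der_cert = tls.getpeercert(binary_form=True)
            fingerprint = hashlib.sha256(der_cert).hexdigest()
            if fingerprint != PINNED_SHA256:
                raise ssl.SSLError("certificate fingerprint changed; refusing to proceed")

With a Charles-style proxy in the middle, the proxy's forged certificate produces a different fingerprint and the check above fails, which is the whole point.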

Thanks to Mr. Merritt, Michael Lynn and Jeff McCune for reviewing drafts of this post.

Manually delete data from Splunk

Thursday, December 27th, 2012

By default Splunk doesn't delete the logging data it has gathered. Without taking some action to remove data, an All time search will continue to return results from day one. That may be desirable when preserving and searching the data for historical purposes is necessary, but when using Splunk only as a monitoring tool, the older data becomes superfluous over time.

Manually deleting information from Splunk is irreversible and doesn’t necessarily free disk space. Splunk users should only delete when they’ve verified their search results return the information they expect.

Enabling “can_delete”

No user can delete data until granted the can_delete role; not even the admin account in the free version of Splunk has this capability enabled by default. To enable can_delete:

  1. Click the Manager link and then click the Access Controls link in the Users and authentication section.
  2. Click the Users link. If running the free version of Splunk then the admin account is the only account available. Click the admin link. Otherwise, consider creating a special account just for sensitive procedures, such as deleting data, and assigning the can_delete role only to that user.
  3. For the admin account move the can_delete role from the Available roles to the Selected roles section.
    [Screenshot: Enable can_delete]
  4. Click the Save button to keep the changes.

Finding data

Before deleting data, be sure that a search returns the exact data to be deleted. This is as simple as performing a regular search using the Time range drop-down menu to the right of the Search field.

The Time range menu offers numerous choices for limiting results by time, including Week to date, Year to date and Yesterday. In this case, let's search for data from the Previous month:

[Screenshot: Search time range]

Wait for the search to complete, then verify that the results returned are the results that need deleting.

Deleting data

Deleting the found data is as simple as performing a search and then piping it into a delete command:

[Screenshot: Search and delete]
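
As a concrete (and hypothetical) example, using Splunk's own search language: suppose the verified search for last month's web access logs looked like this, with the sourcetype a stand-in for whatever your data uses:

    sourcetype=access_combined earliest=-1mon@mon latest=@mon

Rerunning the same search with the delete command piped onto the end removes those events from future search results:

    sourcetype=access_combined earliest=-1mon@mon latest=@mon | delete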


This runs the search again, deleting found results on the fly, which is why searching first before deleting is important. Keep in mind that a "search and delete" pass takes as long as or longer than the initial search and consumes processing power on the Splunk server. If deleting numerous records, proceed with one search/delete at a time to avoid overtaxing the server.