Archive for January, 2013

Set Splunk MySql Monitor To Start On Boot (CentOS)

Thursday, January 31st, 2013

Back in the old days of Unix there was an easy way to start a daemon or script every time a computer booted: simply put it in one of the /etc/rc.? text files and the system would start the services in the order specified.  Later, this was made more flexible with separate startup folders for each runlevel.  Later still, these rc[1-6].d startup folders were deprecated, though they're still used to some extent by legacy programs, and these days things are managed with new commands.

 

To put it bluntly, it’s messy, non-intuitive and definitely not as easy as it should be.  There is hope, however: getting a script or daemon to run “the right way” at startup isn’t terribly daunting, and I’ll walk you through the process now.

 

In our case we need a program called splunkmysqlmonitor.py to run on boot.  It takes one of three arguments (start, stop or restart) and is located in /opt/splunk/etc/apps/mysqlmonitor/bin/daemon/.  It’s almost ready to run at startup, but first let’s look at the command we’ll use to register it: chkconfig.

The chkconfig command takes a script located in /etc/init.d and creates all the necessary symlinks for it in the rc[1-6].d folders, which tell the system what order to start the services in and which runlevels start which services.  Runlevels are mostly deprecated in Linux these days, but as an FYI, the runlevels you need to pay attention to are 2, 3, 4 and 5, and they are almost always identical.  The main thing to worry about is the order in the boot process that the scripts get started, and to a lesser extent the order they get shut down on reboot.  For example, a program that relies on NFS necessarily needs to run after the NFS service has mounted its drives successfully.  Lower numbers start first, and priorities run from 1 to 99.  Since splunk is at priority 90 and this monitor needs to start after splunk, I’ll give it a priority of 95.  As for shutdown, this service should turn off early since it relies on other services to run and may spit out errors if those dependent services are turned off before it, so I’ll give it a shutdown priority of 5, which makes it one of the first processes to shut down.

 

So now that we know when in the boot process the script should run (priority 95) and which runlevels it should run from (2, 3, 4 and 5), we just need to put this info into the system somehow.  We do this by adding specially formatted comment lines to the script we’ll place in /etc/init.d.  Here’s what our example looks like with the new comments added:

 

#!/usr/bin/env python
#         run level  startup  shutdown
# chkconfig: 2345      95        5 
# description: monitors local mysql processes for splunk
# processname: splunkmysqlmonitor
#
import sys, time, os, socket...

Now we have to put the script into the /etc/init.d folder, and that is best done with a symlink:

     ln -s /opt/splunk/etc/apps/mysqlmonitor/bin/daemon/splunkmysqlmonitor.py /etc/init.d

And finally, the chkconfig command itself:

     chkconfig --add /etc/init.d/splunkmysqlmonitor.py

This should add the script to startup and next time you reboot it’ll launch automagically.
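To double-check the runlevel links and start the monitor without rebooting, the standard CentOS tools can be used against the script we just registered. This is only a quick sanity-check sketch; the service name simply matches the script we linked into /etc/init.d.

     # confirm the runlevel links that chkconfig created
     chkconfig --list splunkmysqlmonitor.py

     # start the monitor now rather than waiting for the next reboot
     service splunkmysqlmonitor.py start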

[More Splunk: Part 4] Narrow search results to create an alert

Wednesday, January 30th, 2013

This post continues [More Splunk: Part 3] Report on remote server activity.

Now that we have Splunk generating reports and turning raw data into useful information, let’s use that information to trigger something to happen automatically such as sending an email alert.

In the prior posts a Splunk Forwarder was gathering information using a shell script and sending the results to the Splunk Receiver. To find those results we used this search string:

hosts="TMI" source="/Applications/splunkforwarder/etc/apps/talkingmoose/bin/counters.sh"

It returned data every 60 seconds that looked something like:

2012-11-20 14:34:45-08:00 MySQLCPU=23.2 ApacheCount=1

Using the timechart function of Splunk we extracted the MySQLCPU field to get its value 23.2 and put that into a graph for easier viewing.

Area graph

Returning to view that graph every few minutes, hours or days can get tedious if nothing really changes. Ideally, Splunk would watch the data and alert us when something is out of the ordinary. That’s where alerts come in.

For example, the graph above shows the highest spike in activity to be around 45% and we can assume that a spike at 65% would be unusual. We want to know about that before processor usage gets out of control.

Configuring Splunk for email alerts

Before Splunk can send email alerts it needs basic email server settings for outgoing mail (SMTP). Click the Manager link in the upper right corner and then click System Settings. Click on Email alert settings. Enter public or private outgoing mail server settings for Splunk. If using a public mail server such as Gmail then include a user name and password to authenticate to the server and select the option for either SSL or TLS. Be sure to append port number 465 for SSL or 587 for TLS to the mail server name.

Splunk email server settings

In the same settings area Splunk includes some additional basic settings. Modify them as needed or just accept the defaults.

Splunk additional email server settings

Click the Save button when done.

Refining the search

Next, select Search from the App menu. Let’s refine the search to find only those results that may be out of the ordinary. Our first search found all results for the MySQLCPU field but now we want to limit its results to anything at 65% or higher. The where function is our new friend.

hosts="TMI" source="/Applications/splunkforwarder/etc/apps/talkingmoose/bin/counters.sh" | where MySQLCPU >= 65

This takes the result from the Forwarder and pipes it into an operation that returns only values of the MySQLCPU field that are greater than or equal to 65. The search results, we hope, are empty. To verify the search is working correctly, temporarily change the value from 65 to something lower, such as 30 or 40. The lower values should return multiple results.

On a side note, unrelated to our need here: if we wanted an alert for a range of values, an AND operator connecting two conditions will limit the results to values between the two:

hosts="TMI" source="/Applications/splunkforwarder/etc/apps/talkingmoose/bin/counters.sh" | where MySQLCPU >= 55 AND MySQLCPU <= 65

Creating an alert

An alert evaluates this search as frequently as Splunk receives new data, and if the search returns any results at all, it can do something automatically.

With the search results (or lack of them) in view, select Alert… from the Create drop-down menu in the upper right corner. Name the search “MySQL CPU Usage Over 65%” or something that’s recognizable later. One drawback with Splunk is that it won’t allow renaming the search later; doing that requires editing more .conf files. Leave the Schedule at its default, Trigger in real-time whenever a result matches. Click the Next button.

Schedule an alert

Enable Send email and enter one or more addresses to receive the alerts. Also, enable Throttling by selecting Suppress for results with the same field value and enter the MySQLCPU field name. Set the suppression time to five minutes, which is pretty aggressive. Remember, the script on the Forwarder server is sending new values every minute. Without throttling Splunk would send an alert every minute as well. This will allow an administrator to keep some sanity. Click the Next button.

Enable alert actions

Finally, select whether to keep the alert private or share it with other users on the Splunk system. This only applies to the Enterprise version of Splunk. Click the Finish button.

Share an alert

Splunk is now looking for new data to come from a Forwarder and as it receives that new data it’s going to evaluate it against the saved search. Any result other than no results found will trigger an email.

Note that alerts don’t need to just trigger emails. They can also run scripts. For example, an advanced Splunk search may look for multiple Java processes on a server running a Java-based application. If it found more than 20 spawned processes it could trigger a script to send a killall command to stop them before they consumed the server’s resources and then issue a start command to the application.
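To make that idea concrete, here is roughly the shape such an alert script could take. This is only a sketch: the process-count threshold, the restart command and the script name are hypothetical, and the directory Splunk expects alert scripts to live in should be confirmed for your version before relying on it.

     #!/bin/bash
     # hypothetical Splunk alert script: reap runaway java processes, then restart the app
     COUNT=$(pgrep -f java | wc -l)
     if [ "$COUNT" -gt 20 ]; then
         # stop the runaway processes before they consume the server's resources
         killall java
         # placeholder restart command for the Java-based application
         /etc/init.d/myjavaapp start
     fi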

MySQL Monitoring with Splunk

Wednesday, January 30th, 2013

MySQL Logging with Splunk

Getting Splunk running and monitoring common log formats, such as apache logs and system logs, is a pretty straightforward process. Some would even call it intuitive, but setting up some of the optional plugins can be tricky the first time around. The following is a quick and dirty guide to getting the MySQL monitor from remora up and running in your splunk instance.

This article assumes you have a splunk server as well as a separate database server running a splunk forwarder that is pushing logs to the main splunk server.

The first step is to prepare your splunk server for the incoming mysql stats. We’ll need to make a custom index (called mysql in our case) on both the server and the database host.  See below:

create mysql index on splunk server
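If you’d rather not click through the web UI, the index can also be created from the Splunk CLI. A minimal sketch, assuming the default install path of /opt/splunk; run it on both hosts, matching the screenshot above:

     # create the custom mysql index from the command line
     /opt/splunk/bin/splunk add index mysql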

Once that’s done we’ll also need to create a custom tcp listener on the splunk server.  This is different from the standard listener that runs on port 9997.  Go to Manager and then Data inputs to create it:

add listener1

 

add listener2

 

set raw tcp listener on splunk server
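For the CLI-inclined, a raw TCP input can likewise be added from the command line instead of through the screens above; again a sketch, with the port and index matching what we configure here:

     # add a raw tcp input on port 9936 that feeds the mysql index
     /opt/splunk/bin/splunk add tcp 9936 -index mysql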

 

As you can see, we used port 9936 for a listener that automatically imports into the mysql index. You’ll want to make sure this port is reachable from your database server and that no firewalls are blocking the connection. You can test this with a simple telnet command; if you see a prompt that says “Escape character is” then you’re good to go.

telnet to port 9936 to test
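From the database server, that test looks something like this (the address is our splunk server from the config further down; substitute your own):

     # confirm the raw tcp listener on the splunk server is reachable
     telnet 172.16.154.250 9936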

 

Once we’ve verified the listener is up and running, the next step is to get the mysql monitor installed on all the machines. It’s easily available via the splunk marketplace; all you need is to create a username and password.

go to marketplace to install apps

Once in the marketplace, locate the MySQL monitor:

install mysql monitor on splunk server and db servers

And then restart splunk

restart splunk

Now that that’s installed, we need to make sure all the dependencies for the mysql monitor are set up on the database servers that will be pushing data to the main splunk server.

To install them on a Debian-based OS, use this command:

    apt-get install python-mysqldb

For a Red Hat-based OS, use this:

    yum install MySQL-python

Accept all the dependencies and, assuming there were no issues, you’re just about ready.

Next on the list is to make sure the splunk monitoring daemon can talk to the local mysql server. On our test machine, mysql is only listening on the internal IP, so we have to ensure that the mysql user splunk@172.16.154.141 can connect and has permission. You may need to run the following command to grant that access:

     grant all privileges on *.* to 'splunk'@'mysql_ip' identified by 'your-password';

To verify that splunk can access your tables, use the following command:

     mysql -u splunk -h mysql_ip -p
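To go one step further, a quick query confirms the grant actually covers the data the monitor pulls (status variables, the process list and so on); the statement below is just an illustrative check:

     # quick sanity check from the db server, using the splunk mysql user
     mysql -u splunk -h mysql_ip -p -e "SHOW GLOBAL STATUS LIKE 'Threads_connected';"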

Once you’ve got that down the last step is to configure the mysql monitor’s config.ini. Here’s the config.ini we used:

[mysql]
 host=172.16.154.141
 port=3306
 username=splunk
 password=your-password
[splunk]
 host=172.16.154.250
 port=9936
[statusvars]
 interval=10
[slavestatus]
 interval=10
[tablestats]
 interval=3600
[processlist]
 interval=10

As of this writing, the place to put that config file is: /opt/splunk/etc/apps/mysqlmonitor/bin/daemon

To start the mysql monitor, run this on the db server:

     /opt/splunk/etc/apps/mysqlmonitor/bin/daemon/splunkmysqlmonitor.py
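The daemon also accepts start, stop and restart arguments (as covered in the post above on starting it at boot), so an explicit start looks like:

     # start the monitor daemon explicitly; stop and restart work the same way
     /opt/splunk/etc/apps/mysqlmonitor/bin/daemon/splunkmysqlmonitor.py start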

That’s it!  If you check the Splunk server then you should start seeing the mysql logs popping in immediately.

view mysql logs

 

mysql host overview

 

Pretty nice eh?

 

Next time I’ll show you how to make the splunk monitor daemon start on boot.

FileVault 2 Part Deux, Enter the Dragon

Wednesday, January 30th, 2013

The Godfather of FileVault, Rich Trouton, has probably encrypted more Macs than you. It’s literally a safe bet, horrible pun intended. But even he hadn’t taken into account a particular method of institution-wide deployment of recovery keys: disk-based passwords.

As an exercise, imagine you have tier one techs who need to get into machines as part of their duties. They would rather not do a target-disk or recovery-partition boot (thanks to Greg Neagle for clearing up confusion regarding how to apply that method) and slide a valuable certificate into place and whisper an incantation into its ear to operate on an un-booted volume, nor do they want to reset someone’s password with a ‘license plate’ code; they just want to unlock a machine that doesn’t necessarily have your admin enabled for FV2 on it. Back in 10.7, before the csfde command line tool (Google’s reverse-engineered CLI FileVault initialization tool, mostly applicable to 10.7 since 10.8 has fdesetup), the process of adding users was labor-intensive as well. Even in fdesetup times, you cannot specify multiple users without having their passwords and passing them in an unencrypted plist or via stdin.

In this scenario, it’s less a ‘get out of jail free’ card for users that forget passwords, and more of a functional, day-to-day let-me-in secret knock. How do I get me one of those?

Enter the disk password. (Meaning like Enter the Dragon or Enter the Wu, not really ‘enter your disk password’; this is a webpage, not the actual pre-boot authentication screen.)

 

diskPasswordification

 

How did we get here? No advanced black magic: we just run diskutil cs (short for coreStorage, the name of the quacks-like-a-duck-so-call-it-a-duck logical volume manager built in to 10.7 Lion and later) with the convert and -passphrase options, pointing it at root. We could encrypt any accessible drive, but the changes to login are what we’re focusing on now.
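Concretely, that invocation looks something like the lines below; the passphrase is obviously a placeholder, and the conversion continues in the background after the command returns:

     # convert the boot volume to encrypted CoreStorage with a disk-based passphrase
     sudo diskutil cs convert / -passphrase 'sooper-sekret-disk-password'

     # check on conversion progress
     diskutil cs list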

The end result, once the process finishes and the machine next reboots, is that this (un-customizable) icon appears at the login window:

diskPassicon

Remember that this scenario is about ‘shave and a haircut, two bits’, not necessarily the institution-wide systems meant to securely manage recovery options. Why haven’t you (or the Godfather) heard of this having been implemented for institutions until now-ish?  (Was he too busy meticulously grooming his links to anything a Mac admin could possibly need to know, or composing the copious content to later link to? Say that three times fast!) (Yes, the disk password functionality has been around for a bit, but we’ve gotten a report of this being deployed, which prompted this post.) Well, there are two less attractive parts of this setup that systems like Cauliflower Vest and commercial solutions like Credant or Casper sidestep:

1. The password (for one or many hosts) needs to be sent to a shell on the local workstation’s command line in some way, and rotating the password requires the previous one to be passed to stdin
2. It can be confusing at the pre-boot login window that what appears to be a user account called Disk Password is visible

What’s the huge advantage over the other systems? Need to rotate the password? No decrypt/re-encrypt time! (Unlike the ‘license plate’ method.) Old passwords are properly ‘expired’! (Unlike the ‘Institutional Recovery Key’ method of using a certificate.) I hope this can be of use to environments looking for more ‘middle ground’ between complex systems and manual interaction. Usability is always a factor when discussing security products, so this additional method is a welcome one to consider the benefits of and, as always, test.

Regarding FileVault 2, Part One, In Da Club

Monday, January 28th, 2013

FileVaultIcon

IT needs to have a way to access FileVault 2 (just called FV2 from here on) encrypted volumes in the case of a forgotten password, or just to get control over a machine we’re asked to support. Usually an institution will employ a key escrow system to manage FDE (Full Disk Encryption) when working at scale. One technique, employed by Google’s previously mentioned Cauliflower Vest, is based on the ‘personal’ recovery key (a format I’ll refer to as the ‘license plate’, since it looks like this: RZ89-A79X-PZ6M-LTW5-EEHL-45BY.) The other involves putting a certificate in place, and is documented in Apple’s white paper on the topic. That paper only goes into the technical details later in the appendix, and I thought I’d review some of the salient points briefly.

There are three layers to the FV2 cake, divided by the keys interacted with when unlocking the drive:
Derived Encryption Keys (plural), the Key Encrypting Key (from the department of redundancy department) and the Volume Encrypting Key. Let’s use a (well-worn) abstraction so your eyes don’t glaze over. There’s the guest list and party promoter (DEKs), the bouncer (KEK), and the key to the FV2 VIP lounge (VEK). User accounts on the system can get on the (DEK) guest list for eventual entry to the VIP, and the promoter may remove those folks with skinny jeans, ironic nerd glasses without lenses, or Ugg boots with those silly salt-stained, crumpled-looking heels from the guest list, since they have that authority.

The club owner has his name on the lease (the ‘license plate’ key or cert-based recovery), and on the bouncer’s paycheck. Until drama pop off, and the cops raid the joint, and they call the ambulance and they burn the club down… and there’s a new lease and ownership and staff, the bouncer knows which side of his bread is buttered.

The bouncer is a simple lad. He gets the message when folks are removed from the guest list, but if you tell him there’s a new owner (cert or license plate), he’s still going to allow the old owner to sneak anybody into the VIP for bottle service like it’s your birthday, shorty. Sorry about the strained analogy, but I hope you get the spirit of the issue at hand.

The moral of the story is, there’s an expiration method (re-wrapping the KEK based on added/modified/removed DEKs) for the (in this case, user) passphrase-based unlock. ONLY. The FileVaultMaster.keychain cert has a password you can change, but if access has been granted to a previous version with a known password, that combination will continue to work until the drive is decrypted and re-encrypted. And the license plate version can’t be regenerated or invalidated after initial encryption.

So the two institutional-scale methods previously mentioned still get through the bouncer (read: unlock the drive) until you tear the roof off the mofo and tear the club up (read: de- and re-encrypt the volume).

But here’s an interesting point, there’s another type of DEK/passphrase-based unlock that can be expired/rotated besides per-user: a disk-based passphrase. I’ll get to describing that in Part Deux…

Sure, We Have a Mac Client, We Use Java!

Thursday, January 24th, 2013

We all have our favorite epithets to invoke for certain software vendors and the practices they use. Some of our peers go downright apoplectic when speaking about those companies and the lack of advances we perceive in the name of manageable platforms. Not good; life is too short.

I wouldn’t have even imagined APC would be forgiving in this respect; they are quite obviously a hardware company. You may ask yourself, though, ‘is your refrigerator running’: is the software actually listening for a safe shutdown signal from the network card installed in the UPS? Complicating matters:
- The reason we install this Network Shutdown software from APC on our server is to receive this signal over ethernet, not USB, so it’s not detected by Energy Saver like other, directly cabled models

- The shutdown notifier client doesn’t have a windowed process/menubar icon

- The process itself identifies as “Java” in Activity Monitor (just like… CrashPlan, although we can kind of guess which one is using 400+ MB of virtual memory while idle…)

Which sucks. (Seriously, it installs in /Users/Shared/Applications! And runs at boot with a StartupItem! In 2013! OMGWTFBBQ!)

Calm, calm, not to fear! ps sprinkled with awk to the rescue:

ps avx | awk '/java/&&/Notifier/&&!/awk/{print $17,$18}'

To explain the ps flags: a allows for all users’ processes, v prints in long format with more criteria, and x includes processes even if they have no controlling terminal. Then awk looks for both Java and the ‘Notifier’ jar name, minus our awk itself, and prints the relevant fields, highlighted below (trimmed and rewrapped for readability):

:./comp/pcns.jar:./comp/Notifier.jar: 

com.apcc.m11.arch.application.Application

So at least we can tell that something is running, and appreciate the thoughtful development process APC followed, at least while we aren’t fashioning our own replacement with booster serial cables and middleware. Thanks to the googles and the overflown’ stacks for the proper flags to pass ps.

InstaDMG Issues, and Workflow Automation via Your Friendly Butler, Jenkins

Thursday, January 17th, 2013

“It takes so long to run.”

“One change happens and I need to redo the whole thing”

“I copy-paste the newest catalogs I see posted on the web, the formatting breaks, and I continually have to go back and check to make sure it’s the newest one”

These are the issues commonly experienced by those who want to take advantage of InstaDMG, and for some, they may be enough to keep them from giving up their Golden Master ways. Of course there are a few options to address each of these in turn, but you may have noticed a theme in the blog posts I’ve penned recently, and that is:

BETTER LIVING THROUGH AUTOMATION!

(We’ll get to how automation takes over shortly.) First, to review, a customized InstaDMG build commonly consists of a few parts: the user account, a function to answer the setup assistant steps, and the bootstrap parts for your patch and/or configuration management system. To take advantage of the (hopefully) well-QA’d vanilla catalogs, you can nest one in your custom catalog via an include-file line, so you only update the custom software parts listed above in one place. (And preferably you keep those projects and catalogs under version control as well.)

All the concerns paraphrased at the start of this post just happen to have been discussed recently on The Graham Gilbert Dot Com. Go there now, and hear what he has to say about it. Check out his other posts; I can wait.

Graham Gilbert’s Blog
Back? Cool. Now you may think those are all the answers you need. You’re mostly right, you smarty you! SSDs are not so out-of-reach for normal folk, and they really do help to speed up the I/O-bound process, so there’s less cost to create and repeat builds in general. But then there are the other manual-interaction and regular-repetition parts: how can we limit them to as little as possible? Yes, the InstaDMG robot’s going to do the heavy lifting for us by speedily building an image, and using version control on our catalogs helps us track change over time, but what if Integrating the changes from the vanilla catalogs was Continuous? (Answers within!)

FileMaker Server 12 Console + Java 7 Issue

Wednesday, January 16th, 2013

The latest Java 7 installer for OS X places a new control panel on the system, replacing the Java Preferences app. If you disable Java, the server console for FileMaker Server 12 will not open and no log entries are created.

If Java has previously been customized or disabled, you can resolve this issue by turning Java back on. Java 7 adds a new System Preferences pane. Click the Java icon in System Preferences and, unlike most preference panes, a separate Java Control Panel application opens.

Screen Shot 2013-01-16 at 9.05.30 AM

At the new Java Control Panel application, make sure that “Enable Java content in the browser” is checked.

Screen Shot 2013-01-16 at 9.06.44 AM

This is an important note for FileMaker Server 12 administrators, as Java 6 reaches End of Life next month, which has prompted many vendors to update their code recently or in the not-so-distant future. Disabling Java in Safari has no impact on using FileMaker Server.

If It’s Worth Doing, It’s Worth Doing At Least Three Times

Monday, January 14th, 2013

In my last post about web-driven automation, we took on the creation of Apple IDs in a way that would require a credit card before actually letting you download apps (even free ones). This is fine for speeding up the creation process when actual billing will be applied to each account one at a time, but for education or training purposes where non-volume license purchases wouldn’t be a factor, there is the aforementioned ‘BatchAppleIDCreator‘ AppleScript. It hasn’t been updated recently, though, and I still had more automation tools I wanted to let have a crack at a repetitive workflow like this use case.

SikuliScript was born out of MIT research in screen reading, which roughly approximates what humans do as they scan the screen for a pattern and then take action. One can build a Sikuli script from scratch by taking screenshots and then tying together the actions you’d like to take in its IDE (which essentially renders HTML pages of the ‘code’). You can integrate Python or Java, although it needs (system) Java and the Sikuli tools to be in place in the Applications folder to work at all. For Apple ID creation in iTunes, which is the documented way to create an ID with the “None” payment method, Apple endorses the steps in this knowledge base document.

Sikuli AutoAppleID Creator Project

When running, the script does a search for iBooks, clicks the “Free” button to trigger Apple ID login, clicks the Create Apple ID button, clicks through a splash screen, accepts the terms and conditions, and proceeds to type in information for you. It gets this info from a spreadsheet (ids.csv) that I adapted from the BatchAppleIDCreator project, but it currently hard-codes just the security questions and answers. There is guidance in the first row on how to enter each field, and you must leave that instruction row in, although the NOT IMPLEMENTED section will not be used as of this first version.

It’s fastest to type selections and use the tab and/or arrow keys to navigate between the many fields in the two forms (first the ID selection/password/security question/birthdate options, then the user’s purchase information), so I didn’t screenshot every question and make conditionals. It takes less than 45 seconds to create one Apple ID, and I added a 12-second timeout between each step in case of a slow network when running. It’s available on GitHub; please give us feedback with what you think.

Change PresStore’s port number to avoid conflicts with other services

Thursday, January 10th, 2013

PresStore by Archiware is a multi-platform data backup and archive solution. Rather than writing a GUI control panel application for each platform Archiware uses a web-based front end.

By default PresStore uses port 8000 for access:

http://localhost:8000

This is a common port number, though, used by many applications such as Splunk, HTTP proxies, games and applications that communicate with remote server services. There’s nothing special about port 8000; it’s just a popular default.
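To see whether something else on the server has already claimed the port, a quick check such as the following works on most UNIX-based systems:

     # list any process currently listening on port 8000
     sudo lsof -i :8000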

If PresStore is installed on a UNIX-based server with another application also using port 8000, changing its port number to something else is as simple as renaming a file. This file is located in PresStore’s install directory and is called lexxserv:8000:

/usr/local/aw/conf/lexxserv:8000

A local administrator can change the name of this file using the mv command. Assuming he wants to change it to port 8001, he’d use:

sudo mv /usr/local/aw/conf/lexxserv:8000 /usr/local/aw/conf/lexxserv:8001

After changing the port, stop the PresStore service:

sudo /usr/local/aw/stop-server

And start it again:

sudo /usr/local/aw/start-server

Or just use the restart-server command:

sudo /usr/local/aw/restart-server

Windows administrators will need to open the PresStore Server Manager utility and change the port number in the Service Functions section.

25 Tips For Technical Writers

Wednesday, January 9th, 2013

At 318, we write a pretty good amount of content. We have 5 or so authors on staff, write tons of technical documentation for customers and develop a fair amount of courseware. These days, I edit almost as much as I write. And in doing so, I’ve picked up on some interesting trends in how people write, prompting me to write up some tips for the blossoming technical writer out there:

  1. Define the goal. What do you want to say? The text on the back jacket of most of my books was written before I ever wrote an outline. Sometimes I update the text when I’m done with a book because the message can change slightly with technical writing as you realize some things you’d hoped to accomplish aren’t technically possible (or at least not possible in the amount of time you have).
  2. Make an outline. Before you sit down to write a single word, you should know a goal and have an outline that matches to that goal. The outline should be broken down in much the same way you’d lay out chapters and then sections within the chapter.
  3. Keep your topics separate. A common trap is to point at other chapters too frequently. Technical writing does have a little bit of a choose-your-own-adventure aspect, but referencing other chapters is often overused.
  4. Clearly differentiate between section orders within a chapter. Most every modern word processing tool (from WordPress to Word) provides the ability to have a Header or Heading 1 and a Header or Heading 2. Be careful not to confuse yourself. I like to take my outline and put it into my word processing program and then build out my headers from the very beginning. When I do so, I like for each section to have a verb and a subject that defines what we’re going to be doing. For example, I might have Header 1 as Install OS X, with Header 2 as Formatting Drives, followed by Header 2 as Using the Recovery Partition, followed by Header 2 as Installing the Operating System.
  5. Keep your paragraphs and sentences structured. Beyond the headings structure, make sure that each sentence only has one thought (and that sentences aren’t running on and on and on). Also, make sure that each paragraph illustrates a sequence of thoughts. Structure is much more important with technical writing than with, let’s say, science fiction. Varying sentence structure can keep people awake.
  6. Use good grammar. Bad grammar makes things hard to read and most importantly gets in the way of your message getting to your intended audience. Strunk and White’s Elements of Style is very useful if you hit a place where you’re not sure what to write. Grammar rules are a lot less stringent with online writing, such as a website. When it comes to purposefully breaking grammatical rules, I like to make an analogy with fashion. If you show up to a very formal company in $400 jeans, they don’t care that your jeans cost more than most of their slacks; they just get cranky you’re wearing jeans. Not everyone will pick up on purposeful grammatical lapses. Many will just judge you harshly. Especially if they hail from the midwest.
  7. Define your audience. Are you writing for non-technical users trying to use a technical product? Are you writing for seasoned Unix veterans trying to get acquainted with a new version of Linux? Are you writing for hardened programmers? The more clearly you define the audience the easier it is to target a message to that audience. The wider the scope of the audience the more people are going to get lost, feel they’re reading content below their level, etc.
  8. Know your style guide. Whoever you are writing for probably has a style guide of some sort. This style guide will lay out how you write, specific grammar styles they want used, hopefully a template with styles pre-defined, etc. I’ve completed several writing gigs only to discover I needed to go back and reapply styles to the entire content. When you do that, something will always get missed…
  9. Quoting is important when writing code. It’s also important to quote some text. If you have a button or text on a screen with one word that begins with a capped letter, you don’t need to quote that in most style guides. But if there’s more than one word, or any of the words use a non-capped letter or have a special character, then the text should all be quoted. It’s also important to quote and attribute text from other locations. Each style guide does this differently.
  10. Be active. No, I’m not saying you should run on a treadmill while trying to dictate the chapter of a book to Siri. Use an active voice. For example, don’t say “When installing an operating system on a Mac you should maybe consider using a computer that is capable of running that operating system.” Instead say something like “Check the hardware compatibility list for the operating system before installation.”
  11. Be careful with pronouns. When I’m done writing a long document I’ll do a find for all instances of “it” (and a few other common pronouns) and look for places to replace them with the correct noun.
  12. Use examples. Examples help to explain an otherwise intangible idea. It’s easy to tell a reader they should enable alerts on a system, but much more impactful to show a reader how to receive an alert when a system exceeds 80 percent of disk capacity.
  13. Use bullets or numbered lists. I love writing in numbered lists and bullets (as with these tips). Doing so allows an author to most succinctly go through steps and portray a lot of information that is easily digestible to the audience. Also, if one of your bullets ends with a period, they all must. And the tense of each must match.
  14. Use tables. If bullets are awesome then tables are the coolest. You can impart a lot of information using tables. Each needs some text explaining what is in the table and a point that you’re usually trying to make by including the table.
  15. Judiciously use screen shots. If there’s only one button in a screen shot then you probably don’t need the screen shot. If there are two buttons you still probably don’t need the screen shot. If there are 20 and it isn’t clear in the text which to use, you might want to show the screen. It’s easy to use too many or not enough screen shots. I find most of my editors have asked for more and more screens until we get to the point that we’re cutting actual content to fit within a certain page count window. But I usually have a good idea of what I want to be a screen shot and what I don’t want to be a screen shot from the minute I look at the outline for a given chapter. Each screen shot should usually be called out within your text.
  16. Repetition is not a bad thing. This is one of those spots where I disagree with some of my editors from time to time. Editors will say “but you said that earlier” and I’ll say “it’s important.” Repetition can be a bad thing, if you’re just rehashing content, but if you intentionally repeat something to drive home a point then repetition isn’t always a bad thing. Note: I like to use notes/callouts when I repeat things. 
  17. White space is your friend. Margins, space between headers, kerning of fonts. Don’t pack too much crap into too little space or the reader won’t be able to see what you want them to see.
  18. Proofread, proofread, proofread. And have someone else proofread your stuff.
  19. Jargon, acronyms and abbreviations need to be explained. If you use APNS you only have to define it once, but it needs to be defined.
  20. I keep having editors say “put some personality into it” but then they invariably edit out the personality. Not sure if this just means I have a crappy personality, but it brings up a point: while you may want to liven up text, don’t take away from the meaning by doing so.
  21. Don’t reinvent the wheel. Today I was asked again to have an article from krypted included in a book. I never have a problem with contributing an article to a book, especially since I know how long it takes to write all this stuff. If I can save another author a few hours or days then they can push the envelope of their book that much further.
  22. Technical writing is not a conversation. Commas are probably bad. The word um is definitely bad. Technical writing should not ramble but be somewhat formal. You can put some flourish in, but make sure the sentences and arguments are meaningful, as with a thesis.
  23. Be accurate. Technical reviewers or technical editors help to make sure you’re accurate, but test everything. Code, steps, etc. Make sure that what you’re saying is correct up to the patch level and not just for a specific environment, like your company or school.
  24. Use smooth transitions between chapters. This means a conclusion that at least introduces the next chapter in each. Don’t overdo the transitions or get into the weeds of explaining an entire topic again.
  25. Real writers publish. If you write a 300 page document and no one ever sees it, did that document happen? If the document isn’t released in a timely manner then the content might be out of date before getting into a reader’s hands. I like to take my outline (step 2) and establish a budget (a week, 20 hours, or something like that).

Quickly forward individual emails using Outlook for Mac

Tuesday, January 8th, 2013

Forward message

Forwarding an email message is fairly simple, but forwarding multiple messages can be inconvenient for either the sender or the receiver.

If the sender forwards multiple messages as attachments then the recipient receives one message with a variety of potentially unrelated information. This also makes sorting by subject or Date Sent impossible. If the recipient wants individual messages then the forwarder has no option but to send each message individually, which is time-consuming.

Like most email clients for Mac OS X, Outlook for Mac can forward messages, but it has a unique feature that makes automating the forwarding of individual messages easy without resorting to scripting: it can forward using a rule.

But Apple’s Mail, Thunderbird and practically any other email client for Mac has rules too! What makes Outlook different?

Outlook can run disabled rules individually. Both Mail and Thunderbird support creating rules and then disabling them so that they won’t be applied to incoming messages; however, neither can run a single rule manually without running all rules, whether they’re enabled or disabled. Running a long list of rules is potentially troublesome.

To configure a rule in Outlook:

  1. Select Tools menu –> Rules… and select the type of email account using this rule (POP, IMAP or Exchange).
  2. Click the + (plus) button to add a new rule.
  3. Give the rule a descriptive name such as “Forward to <email address>”.
  4. Set the rule to apply to All Messages.
  5. Set the rule to Forward To <email address>.
  6. Deselect the Enabled option. This prevents the rule from firing when new mail arrives.

Rule settings

To use this rule to forward multiple messages individually:

  1. Select one or more messages in Outlook’s message list.
  2. Right-click or Control-click anywhere within the selected messages.
  3. Select Rules –> Apply –> Forward to <email address>.

Forward rule

The rule will run for each message and should take only a few seconds. The recipient will receive individually forwarded messages. Both sides save time.

…’Til You Make It

Monday, January 7th, 2013

Say you need a bunch of Apple IDs, and you need them pronto. There’s a form you can fill out, a bunch of questions floating in a window in some application; it can feel very… manual. A gentleman on the Enterprise iOS site entered, filling the void with an AppleScript that could batch create IDs with iTunes (and it has seen updates thanks to Aaron Friemark.)

That bikeshed, though, was just not quite the color I was looking for. I decided to Fake it. Are we not Professional Computer Operators?

Before I go into the details, a different hypothetical use case: say you just migrated mail servers, and didn’t do quite enough archiving previously. Client-side moves may be impractical or resource-intensive. So you’d rather archive server-side, but can’t manipulate the mail server directly, and the webmail GUI is a touch cumbersome: are we relegated to ‘select all -> move -> choose folder -> confirm’ while our life-force drains away?

Fake is described as a tool for web automation and testing. It’s been around for a bit, but it took an ‘Aha!’ moment while pondering these use cases for me to realize its power. What makes it genius is you don’t need to scour HTML source to find the id of the element you want to interact with! Control-drag to the element, specify what you want to do with it. (There are top-notch videos describing these options on the website.) And it can loop. And delay (either globally or between tasks), and the tasks can be grouped and disabled in sections and organized in a workflow and saved for later use. (Can you tell I’m a bit giddy about it?)

Fakeinaction-Mail

So that mail archive can loop away while you do dishes. Got to the end of a date range? Pause it, change the destination folder mid-loop, and keep it going. (There is a way to look at the elements and make a conditional when it reads a date stamp, but I didn’t get that crazy with it… yet.)

And now even verifying the email addresses used with the Apple ID can be automated! Blessed be the lazy sysadmin.

The State of Tablets in Schools

Thursday, January 3rd, 2013

Any managed IT environment needs policies. One of the obvious ones is to refresh the hardware on some sort of schedule so that the tools people need are available and they aren’t hampered by running new software on old hardware. Commonly, security updates are available exclusively on the newest release of an operating system. Tablets are just the same, and education has been seeing as much of an influx of iOS devices as anywhere else.

Fraser Speirs has just gone through the process of evaluating replacements for iPads used in education, and discusses the criteria he’s come up with and his conclusions on his blog