Archive for the ‘IT Management’ Category

We Love The AFP548 Podcast

Friday, March 7th, 2014

Upgrade Lifesize Video Conferencing Units

Wednesday, March 5th, 2014

Updating a Lifesize Head Unit requires Internet Explorer on Windows with Flash installed. To run the update, first log in to www.lifesize.com to download updates. From here, click on Support and then Software Downloads. Then click on the serial number of your unit and choose the file to download from the column on the right.

Once the file is downloaded, log in to the Head Unit using its IP address and password (by default, the password for a Lifesize is 1234). Click on the Maintenance tab on the right and select System Upgrade. Browse to your update (which is a .cmg file) and then click on the Upgrade button. Now wait until the unit restarts and test a connection.

Pulling Report Info from MunkiWebAdmin

Wednesday, November 6th, 2013

Alright, you’ve fallen in love with the Dashboard in MunkiWebAdmin – we don’t blame you, it’s quite the sight. You know that one day you’ll hack on Django and the client pre/postflight scripts until you can add that perfect view to further extend its reporting and output functionality, but in the meantime you just want to export a list of all those machines still running 10.6.8. Mavericks is free, and them folks still on Snow Leo are long overdue. If you’ve only got a handful of clients, maybe you set up MunkiWebAdmin using sqlite (since nothing all that large is actually stored in the database itself).

MunkiWebAdmin in action

Let’s go spelunking and try to output just those clients in a more digestible format than HTML – we’ll use the CSV output option for starters. We could tool around in an interactive session with the sqlite binary, but in this example we’ll just run the query against the database file and cherry-pick the info we want. Most often, we’ll use the information submitted as a report by the pre- and postflight scripts munki runs, which dumps into the reports_machine table. The final part is as simple as you’d expect: we just select all from that particular table where the OS version equals exactly 10.6.8. Here’s the one-liner:

$sqlite3 -csv /Users/Shared/munkiwebadmin_env/munkiwebadmin/munkiwebadmin.db\
 "SELECT * FROM reports_machine WHERE os_version='10.6.8';"

And the resultant output:
b8:f6:b1:00:00:00,Berlin,"","",192.168.222.100,"MacBookPro10,1","Intel Core i7","2.6 GHz",x86_64,"8 GB"...
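
If you only need a couple of fields instead of the whole row, you can name the columns in the SELECT. A minimal sketch – we’re assuming the table has a hostname column based on the output above, so check the schema with .schema reports_machine first – with -header added so the CSV gains a title row:

sqlite3 -header -csv /Users/Shared/munkiwebadmin_env/munkiwebadmin/munkiwebadmin.db\
 "SELECT hostname, os_version FROM reports_machine WHERE os_version='10.6.8';"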

You can then open that in your favorite spreadsheet editing application and parse it for whatever is in store for it next!

Who’s Really in Charge of Your Company’s Tech: Customers

Friday, November 1st, 2013

In June of 2007, Gartner analysts were warning all IT managers to stay away from Apple’s iPhone, concluding that it wasn’t ready for business environments. In April 2010, the same advice could be heard about Apple’s iPad. What has changed since then? Overwhelming customer demand.

Technologies such as cloud computing, mobile (smartphones and tablets), and social media have infiltrated businesses and have had an impact on how consumers and employees use technology, how they interact and engage with others, and how they create, use, and digest information. These trends are as much social as they are economic and technological.

CUSTOMERS ARE NOW IN CONTROL

At one point, you as the CEO or CFO of your organization, along with your IT manager, could dictate how employees and customers did business with you from a technology standpoint. That’s just not the case anymore. Now, technology-based products and services are designed with the consumer in mind, and businesses will just have to figure out how to use them within their organizations. As an example, simply look at the fact that, according to IDC research, tablets will outsell PCs in the 4th quarter of 2013.

KEY CONCEPTS FOR YOU TO KNOW

Below are a few ideas to consider that could help you stay ahead or on pace with the intersection of technology innovation and business growth.

  • Understand what is expected of you: the technology that customers and employees use at home or in their personal lives is so easy and sophisticated that when they do business with you or when they work for you, they will “demand” and expect more choices, instant responses, greater personalization, flexibility, and functionality.
  • Customers want increasingly more control and access – not less. The trend is moving with customers: they will want to own their own information as well as gain access to their work applications. And if they can’t do that, frustration will set in.
  • Technology use by customers is driven by social context, not just business: in other words, the wave that is moving the adoption, expectations, and use of technology is about how we as a society are using technology. Therefore, it is not going to stop; it’s going to get crazier, and for those who ignore it and try to keep doing “business as usual”, it could be disastrous.

The recent issues surrounding the Affordable Care Act’s website exemplify these concepts. People expect a website that works like Amazon when they are ordering their books or Facebook when they are posting their kids’ photos. If your technology is not live and working 24/7, it could be a serious blow to your reputation (or poll numbers, if you are a politician).

WHAT CAN YOU DO?

The good news is that the same technology innovation driving you crazy from a business standpoint could also be used to help you serve existing and new customers in new ways that are beneficial both to them and to you as a business.

Here are a few things you can do:

  • Determine how your customers are using technology now. Are they using PCs, tablets, smartphones? Are they virtual? Do they still use paper, or are they going digital?
  • How can you give your customers more choices? Can you let them control what messages they receive from you? Can you give them access to parts of your internal information that could help them in their business? Can you share more info with them?
  • Do you have the right infrastructure in place to accommodate your customers? Do you have systems that they can access securely and via the devices they use, such as mobile (smartphones, tablets), desktop, etc.? Can you share with them easily and securely?
  • Are you proactively planning how you can get ahead of the curve (and competitors), and researching how you can use mobile, social, and personal devices to do business with your customers?

Another thing to consider is outsourcing your technology. A good outsourced IT provider can manage your IT environment in a way that saves you money and allows you to focus on your core competencies, as well as meet your customers’ expectations by ensuring that your systems run smoothly.

Just as important, however: choose a provider with the thought leadership and innovation expertise to help you stay ahead of the technology curve, so you are a leader in your industry and not playing catch-up with competitors.

The Cost Considerations of Managed Services

Wednesday, October 30th, 2013

With the economic, social, and technology trends pushing IT outsourcing concepts into the spotlight for businesses, it has become increasingly important for CFOs to take a serious look at managed services. For companies, managed services have been a boon for reducing or avoiding cost, helping businesses shift from a capex to an opex model, reducing complexity and risk, and accessing new IT expertise without taking on additional employee headcount.

For many CFOs, eliminating or reducing upfront hardware or monthly employee costs is usually seen as the most important factor when choosing to move toward a managed service or IT outsourcing model. However, we at 318 recommend that you undertake a thorough analysis of all the costs involved in your current in-house model and see how it compares to outsourcing.

Below are some typical costs that should be taken into consideration:

Labor

  • Salaries and wages
  • Overtime/Benefits
  • Payroll Taxes
  • Travel

Hardware/Software

  • Purchases & Upgrades
  • Sales Tax (Local/State)
  • Shipping
  • Installation
  • Write Offs
  • Disposal Cost
  • Licensing
  • Implementation

Maintenance

  • Hardware and Software
  • Other products/services
  • Staffing

Access Control/Security

  • Infrastructure
  • Administration
  • Monitoring

Recovery/Disaster Recovery

  • Staffing
  • Hardware/Software
  • Testing
  • Insurance/Compliance

Technical Expertise

  • Internal/External Resources
  • Training
  • Staffing

Facilities

  • Building and Floor Space
  • Property Taxes
  • Utilities and Security
  • Furniture/Equipment

Ongoing support

  • Insurance
  • Audits
  • Legal
  • Service levels

These are just some of the issues that CFOs need to consider. So when all is said, done, and analyzed, is it worth it? From a financial perspective, it may be. One thing is for sure: the trends are definitely moving in a direction where, sooner rather than later, it will not only be a financially smart move to outsource your IT needs and focus on your expertise, but an expected business practice.

Let us know what you think, we would love to receive your feedback. You can reach us at sales@318.com.

Wishes Granted! Apple Configurator 1.4 and iOS 7

Wednesday, September 25th, 2013

Back in June, we posted about common irritations of iOS 6 device deployment, especially in schools or other environments trying to remove features that could distract students. Just like with the Genie, we asked for three wishes:

- 1. Prevent the addition of other email accounts, or 2. the sign-in (or creation!) of Twitter/Facebook(/Vimeo/Flickr, etc.) accounts

Yay! Rejoice in the implementation of your feature requests! At least when on a supervised device, you can now have these options show up as greyed out.

- 3. Disable the setting of a password lock…

Boo! This is still in the realm of things only an MDM can do for you. But at least it’s not something new that MDMs need to implement. More agile ways to interact with App Lock should be showing up in a lot more vendors’ products, for a ‘do not pass go, do not collect $200’ way to lead a group of iPads through the exact app they should be using. Something new we’re definitely looking forward to MDM vendors implementing is…

Over-the-Air Supervision!

Won’t it be neat when we don’t need to tether all these devices to get those extra management features?

And One More Thing


Oh, and one last feature I made reference to in passing: you can now sync a Supervised device to a computer! …With the caveat that you need to designate that functionality at the time you move the device into Supervised mode, and the specific Restriction payload needs to be set appropriately.
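
For reference, our understanding (an assumption from reading the iOS 7 restriction keys – verify against Apple’s Configuration Profile Reference before relying on it) is that the relevant supervised-only setting is the allowHostPairing key in a com.apple.applicationaccess payload; leaving it true is what keeps tethered syncing possible:

<key>PayloadType</key>
<string>com.apple.applicationaccess</string>
<key>allowHostPairing</key>
<true/>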


We hope you enjoy the bounty that a new OS and updated admin tools bring.

Add OS X Network Settings Remotely (Without Breaking Stuff)

Monday, September 23rd, 2013

So you’re going to send a computer off to a colocation facility, where it’ll use a static IP and DNS configuration – info it’ll need to have in place before it arrives. As is typical with colo, you access this computer remotely to prepare it for its trip, but you don’t want to knock it off the network while staging that info; you want to verify it’s good to go and then shut it down.

It’s the type of thing, like setting up email accounts programmatically, that somebody should have figured out and shared with the community at some point. But even if my google-fu is weak, I guess I can deal with having tomatoes thrown at me, so here’s a rough mock-up:

 

#!/bin/bash
# purpose: add a network location with manual IP info without switching
#   This script lets you fill in settings and apply them on en0 (assuming that's active),
#   but only interrupts current connectivity long enough to apply the settings;
#   it then immediately switches back. (It also assumes a 'Static' location doesn't already exist...)
#   Use at your own risk! No warranty granted or implied! Tell us we're doing it rong on twitter!
# author: Allister Banks, 318 Inc.

# set -x

declare -xr networksetup="/usr/sbin/networksetup"

declare -xr MYIP="192.168.111.177"
declare -xr MYMASK="255.255.255.0"
declare -xr MYROUTER="192.168.111.1"
declare -xr DNSSERVERS="8.8.8.8 8.8.4.4"

declare -x PORTANDSERVICE=`$networksetup -listallhardwareports | awk '/en0/{print x};{x=$0}' | cut -d ' ' -f 3`

$networksetup -createlocation "Static" populate
$networksetup -switchtolocation "Static"
$networksetup -setmanual $PORTANDSERVICE $MYIP $MYMASK $MYROUTER
$networksetup -setdnsservers $PORTANDSERVICE $DNSSERVERS
$networksetup -switchtolocation Automatic

exit 0

Caveats: The script assumes the interface you want active in the future is en0, just for ease of testing before deployment. Also, that there isn’t already a network location called ‘Static’, and that you do want all interfaces populated upon creation (because I couldn’t think of a particularly good reason why not).
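
Before the box ships, you can sanity-check what got stored without leaving it stranded on the wrong location for long – a quick sketch, assuming the detected service was plain ‘Ethernet’:

networksetup -switchtolocation "Static"
networksetup -getinfo "Ethernet"    # prints the manual IP, subnet mask and router it will use
networksetup -switchtolocation "Automatic"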

If you find the need, give it a try and tweet at us with your questions/comments!


Inconsistent Upgrade Behavior on Software-Mirrored RAID Volumes

Thursday, August 8th, 2013

It came up again recently, so this post is to warn folks treading the same path in the near future. First a little ‘brass tacks’ background: As you probably know, as of 10.7 Lion’s Mac App Store-only distribution, you can choose the option to extract the InstallESD.dmg from the Install Mac OS X (insert big cat name here) application, and avoid duplicate downloads and manual Apple ID logins. One could even automate the process on a network that supports NetInstall, with a redundantly named NetInstall set to essentially ‘virtualize’ or serve up the installer app on the network.

We’ve found recently that more than a few environments are just getting around to upgrading after taking a ‘wait and see’ approach to Lion, and jumping straight to 10.8 Mountain Lion. Getting to the meat after all this preamble… it was also, at one time, considered best practice to use RAID to mirror the boot disk, even without a hardware card to remove the CPU overhead. (It hadn’t been considered a factor then, but even modern storage virtualization *cough*Drobo*cough* can perform… poorly. I personally recommend what I call a ‘lazy mirror’: having CCC clone the volume, putting fewer writes on the disk over time, and getting the redundancy of CCC reporting SMART status of the source and destination.)

When upgrading a software-mirrored boot drive’s OS, you get a message about features being unavailable, namely FileVault 2 and the Recovery Partition it relies upon. If it detects the machine being upgraded is running (a relic of a bygone era, a separate OS called quaintly) ‘Mac OS X Server,’ it additionally warns that the server functionality will be suspended until Server.app 2.x can be installed via… the Mac App Store. We’ve found it can do an upgrade of those paused services (at least those that are still provided by the 2.2.1 version of the Server application) and pick up where it left off without incident after being installed and launched.

If, however, you use a Mac App Store-downloaded application to perform the process, we’ve seen higher success rates of a stable upgrade. If instead you tried to save time with either the InstallESD.dmg or NetInstall set methods mentioned earlier, a failure symptom occurred where, post-update, the disk would never complete its first boot (verbose boot was not conclusive as to reasons, either). Moving the application bundle to another machine (volume license codes have, of course, been applied to the appropriate Apple ID on the machines designated for upgrades) hasn’t been as successful, although the recommended repackaging of the Install app, as Apple has referred to in certain documentation, wasn’t attempted this particular time. In some cases even breaking the software mirror didn’t allow the disk to complete an upgrade successfully. Another symptom, before we could tell it was going to fail, was that the drop-down sheet warning of the loss of server functionality would immediately cause the entire window to lose focus while about to initiate the update. A radar has not been filed, due to the fact that a supported (albeit semi time-intensive) method exists and has been more consistently successful.

Casper Focus Now Available

Monday, April 29th, 2013

For a long time I’ve been saying that the #1 challenge with regard to using iOS is content distribution. Others have mirrored that by saying that the device is a content aggregator, etc. The challenge is keeping everyone on the same page, with the same content and distributing administration of all of that to those who need it.

Well, our friends at JAMF software are, as usual, right in the middle of resolving the more challenging issues of the day with regard to iOS and OS X. In this case they’ve released a new tool called Casper Focus that enables rudimentary administrative tasks by teachers.


Now, I don’t want anyone to take the word rudimentary to be a bad thing. You see, accessing and remotely controlling devices can be a big challenge, and the learning curve can be steep. By only giving delegated administrators a few options, that learning curve can be drastically reduced. Lock, enable, distribute data: these are the very basic tasks teachers need.

Overall, this is yet another great addition to the Casper family of products, and 318 is excited to work with our customers to integrate Casper Focus into their environments where appropriate. Call your Professional Services Manager today for more information!

Quick Update to a Radiotope Guide for Built-In Mac OS X VPN Connections

Tuesday, March 26th, 2013

Just a note for those folks with well-worn bookmarks to this post on Ed Marczak’s blog, Radiotope.com, about authenticating VPN connections with Mac OS X Server’s Open Directory: it’s still valid today. When trying to use the System Preferences VPN client/network adapter with the built-in L2TP server in a Sonicwall, though, I was curious why OD auth wasn’t working for me when users local to the Sonicwall worked fine. It had been a while since I last set this up, so I went search-engine spelunking and found a link that did the trick.

In particular, a comment by Ted Dively brought to my attention that you need to change the authentication type the L2TP service is configured to use: PAP instead of the more standard MSCHAPv2. (In the VPN sidebar item, under L2TP Server, click the Configure button and it’s under the PPP tab.)

Where it's done

We hope that is of help to current and future generations.

LOPSA-East 2013

Monday, March 18th, 2013

For the first year I’ll be speaking at the newly-rebranded League of Extraordinary Gentlemen League of Professional System Administrators conference in New Brunswick, New Jersey! It’s May 3rd and 4th, and should be a change from the Mac-heavy conferences we’ve been associated with as of late. I’ll be giving a training class, Intro to Mac and iOS Lifecycle Management, and a talk on Principled Patch Management with Munki. Registration is open now! Jersey is lovely that time of year, please consider attending!

 

LOPSA-East '13

PSU MacAdmins Conference 2013

Wednesday, February 27th, 2013

It's Secret!

For the third year, I’ll be presenting at PSU MacAdmins Conference! This year I’m lucky enough to be able to present two talks, “Backup, Front to Back” and “Enough Networking to be Dangerous”. But I’m really looking forward to what I can learn from those speaking for the first time, like Pepijn Bruienne and Graham Gilbert, among others. The setting and venue are top-notch. It’s taking place May 22nd through the 24th, with a Boot Camp for more foundational topics May 21st. Hope you can join us!

Regarding FileVault 2, Part One, In Da Club

Monday, January 28th, 2013

FileVaultIcon

IT needs to have a way to access FileVault 2 (just called FV2 from here on) encrypted volumes in the case of a forgotten password, or just to get control over a machine we’re asked to support. Usually an institution will employ a key escrow system to manage FDE (Full Disk Encryption) when working at scale. One technique, employed by Google’s previously mentioned Cauliflower Vest, is based on the ‘personal’ recovery key (a format I’ll refer to as the ‘license plate’, since it looks like this: RZ89-A79X-PZ6M-LTW5-EEHL-45BY). The other involves putting a certificate in place, and is documented in Apple’s white paper on the topic. That paper only goes into the technical details in the appendix, and I thought I’d review some of the salient points briefly.

There are three layers to the FV2 cake, divided by the keys interacted with when unlocking the drive:
Derived Encryption Keys (plural), the Key Encrypting Key (from the department of redundancy department), and the Volume Encrypting Key. Let’s use a (well-worn) abstraction so your eyes don’t glaze over. There’s the guest list and party promoter (DEKs), the bouncer (KEK), and the key to the FV2 VIP lounge (VEK). User accounts on the system can get on the (DEK) guest list for eventual entry to the VIP, and the promoter may remove those folks with skinny jeans, ironic nerd glasses without lenses, or Ugg boots with those silly salt-stained, crumpled-looking heels from the guest list, since they have that authority.

The club owner has his name on the lease (the ‘license plate’ key or cert-based recovery) and signs the bouncer’s paycheck. Until drama pops off, and the cops raid the joint, and they call the ambulance and they burn the club down… and there’s a new lease and ownership and staff, the bouncer knows which side of his bread is buttered.

The bouncer is a simple lad. He gets the message when folks are removed from the guest list, but if you tell him there’s a new owner (cert or license plate), he’s still going to allow the old owner to sneak anybody into the VIP for bottle service like it’s your birthday, shorty. Sorry about the strained analogy, but I hope you get the spirit of the issue at hand.

The moral of the story is, there’s an expiration method (re-wrapping the KEK based on added/modified/removed DEKs) for the (in this case, user) passphrase-based unlock ONLY. The FileVaultMaster.keychain cert has a password you can change, but if access has been granted to a previous version with a known password, that combination will continue to work until the drive is decrypted and re-encrypted. And the license plate version can’t be regenerated or invalidated after initial encryption.
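
To make the guest-list maintenance concrete, here’s a minimal sketch using 10.8’s fdesetup (run as root; the username is made up). Removing a user re-wraps the KEK, so that user’s passphrase no longer unlocks the volume:

fdesetup list                      # show which accounts can currently unlock the disk
fdesetup remove -user djpromoter   # off the guest list; their passphrase stops working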

So the two institutional-scale methods previously mentioned still get through the bouncer (unlock the drive) until you tear the roof off the mofo, tear the club up – that is, de- and re-encrypt the volume.

But here’s an interesting point, there’s another type of DEK/passphrase-based unlock that can be expired/rotated besides per-user: a disk-based passphrase. I’ll get to describing that in Part Deux…

Sure, We Have a Mac Client, We Use Java!

Thursday, January 24th, 2013

We all have our favorite epithets to invoke for certain software vendors and the practices they use. Some of our peers go downright apoplectic when speaking about those companies and the lack of advances we perceive in the name of manageable platforms. Not good, life is too short.

I wouldn’t have even imagined APC would be an offender in this respect; they are quite obviously a hardware company. You may ask yourself, though (‘is your refrigerator running?’-style), whether the software is actually listening for a safe shutdown signal from the network card installed in the UPS. Complicating matters:
- The reason we install this Network Shutdown software from APC on our server is to receive this signal over Ethernet, not USB, so it’s not detected by Energy Saver like other, directly cabled models

- The shutdown notifier client doesn’t have a windowed process/menubar icon

- The process itself identifies as “Java” in Activity Monitor (just like… CrashPlan – although we can kind of guess which one is using 400+ MBs of virtual memory idle…)

Which sucks. (Seriously, it installs in /Users/Shared/Applications! And runs at boot with a StartupItem! In 2013! OMGWTFBBQ!)

Calm, calm, not to fear! ps sprinkled with awk to the rescue:

ps avx | awk '/java/&&/Notifier/&&!/awk/{print $17,$18}'

To explain the ps flags: a includes all users’ processes, v prints in a longer format with more criteria, and x includes processes even if they have no ‘controlling console.’ Then awk looks for both Java and the ‘Notifier’ jar name, minus our awk itself, and prints the relevant fields, highlighted below (trimmed and rewrapped for readability):

:./comp/pcns.jar:./comp/Notifier.jar: 

com.apcc.m11.arch.application.Application

So at least we can tell that something is running, and appreciate the thoughtful development process APC followed, at least while we aren’t fashioning our own replacement with booster serial cables and middleware. Thanks to the googles and the overflown’ stacks for the proper flags to pass ps.
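
If you’d rather have a machine do the looking, here’s a minimal sketch that wraps that one-liner into a Nagios-style check (the script name and exit codes are our choice, nothing APC-specific):

#!/bin/bash
# check_apc_notifier.sh - complain if the APC shutdown Notifier isn't running
if ps avx | awk '/java/&&/Notifier/&&!/awk/' | grep -q .; then
    echo "OK: APC Notifier is running"
    exit 0
else
    echo "CRITICAL: APC Notifier is not running"
    exit 2
fi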

InstaDMG Issues, and Workflow Automation via Your Friendly Butler, Jenkins

Thursday, January 17th, 2013

“It takes so long to run.”

“One change happens and I need to redo the whole thing”

“I copy-paste the newest catalogs I see posted on the web, the formatting breaks, and I continually have to go back and check to make sure it’s the newest one”

These are the issues commonly experienced by those who want to take advantage of InstaDMG, and for some, they may be enough to prevent them from being rid of their Golden Master ways. Of course there are a few options to address each of these in turn, but you may have noticed a theme in blog posts I’ve penned recently, and that is:

BETTER LIVING THROUGH AUTOMATION!

(We’ll get to how automation takes over shortly.) First, to review: a customized InstaDMG build commonly consists of a few parts – the user account, a function to answer the setup assistant steps, and the bootstrap parts for your patch and/or configuration management system. To take advantage of the (hopefully) well-QA’d vanilla catalogs, you can nest one in your custom catalog via an include-file line, so you only update your custom software parts listed above in one place, as sketched below. (And preferably you keep those projects and catalogs under version control as well.)
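
As a rough sketch only (we’re abbreviating from memory – compare against a vanilla catalog in your InstaUp2Date checkout before trusting the file and field names, which are hypothetical here), a custom catalog nesting the vanilla one might look like:

# our_custom_10.8.catalog (hypothetical)
include-file: 10.8_vanilla.catalog
# ...then your own entries: the local admin account package, setup assistant
# answers, and the munki bootstrap, each under the stock section headings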

All the concerns paraphrased at the start of this post just happen to be discussed recently on The Graham Gilbert Dot Com. Go there now, and hear what he has to say about it. Check out his other posts, I can wait.

Graham Gilbert’s Blog
Back? Cool. Now you may think those are all the answers you need. You’re mostly right, you smarty you! SSDs are not so out-of-reach for normal folk, and they really do help speed up the I/O-bound process, so there’s less cost to create and repeat builds in general. But then there’s the other manual interaction and regular repetition – how can we limit it to as little as possible? Yes, the InstaDMG robot’s going to do the heavy lifting for us by speedily building an image, and using version control on our catalogs helps us track change over time, but what if Integrating the changes from the vanilla catalogs was Continuous? (Answers within!)

If It’s Worth Doing, It’s Worth Doing At Least Three Times

Monday, January 14th, 2013

In my last post about web-driven automation, we took on the creation of Apple IDs in a way that would require a credit card before actually letting you download apps (even free ones). This is fine to speed up the creation process when actual billing will be applied to each account one at a time, but for education or training purposes where non-volume-license purchases wouldn’t be a factor, there is the aforementioned ‘BatchAppleIDCreator‘ AppleScript. It hasn’t been updated recently, though, and I still had more automation tools I wanted to let have a crack at a repetitive workflow like this use case.

SikuliScript was born out of MIT research in screen reading, which roughly approximates what humans do as they scan the screen for a pattern and then take action. One can build a Sikuli script from scratch by taking screenshots and then tying together the actions you’d like to take in its IDE (which essentially renders HTML pages of the ‘code’). You can integrate Python or Java, although it needs (system) Java and the Sikuli tools to be in place in the Applications folder to work at all. For Apple ID creation in iTunes, which is the documented way to create an ID with the “None” payment method, Apple endorses the steps in this knowledge base document.

Sikuli AutoAppleID Creator Project

When running, the script does a search for iBooks, clicks the “Free” button to trigger Apple ID login, clicks the Create Apple ID button, clicks through a splash screen, accepts the terms and conditions, and proceeds to type in information for you. It gets this info from a spreadsheet (ids.csv) that I adapted from the BatchAppleIDCreator project, but it currently hard-codes just the security questions and answers. There is guidance in the first row on how to enter each field, and you must leave that instruction row in, although the NOT IMPLEMENTED section will not be used as of this first version.

It’s fastest to type selections and use the tab and/or arrow keys to navigate between the many fields in the two forms (first the ID selection/password/security question/birthdate options, then the user’s purchase information), so I didn’t screenshot every question and make conditionals. It takes less than 45 seconds to do one Apple ID creation, and I made a 12-second timeout between each step in case of a slow network. It’s available on GitHub; please give us feedback with what you think.

25 Tips For Technical Writers

Wednesday, January 9th, 2013

At 318, we write a pretty good amount of content. We have 5 or so authors on staff, write tons of technical documentation for customers and develop a fair amount of courseware. These days, I edit almost as much as I write. And in doing so, I’ve picked up on some interesting trends in how people write, prompting me to write up some tips for the blossoming technical writer out there:

  1. Define the goal. What do you want to say? The text on the back jacket of most of my books was written before I ever wrote an outline. Sometimes I update the text when I’m done with a book because the message can change slightly with technical writing as you realize some things you’d hoped to accomplish aren’t technically possible (or maybe not in the amount of time you need to use).
  2. Make an outline. Before you sit down to write a single word, you should know a goal and have an outline that matches to that goal. The outline should be broken down in much the same way you’d lay out chapters and then sections within the chapter.
  3. Keep your topics separate. A common trap is to point at other chapters too frequently. Technical writing does have a little bit of a choose-your-own-adventure aspect, but referencing other chapters is often overused.
  4. Clearly differentiate between section orders within a chapter. Most every modern word processing tool (from WordPress to Word) provides the ability to have a Header or Heading 1 and a Header or Heading 2. Be careful not to confuse yourself. I like to take my outline and put it into my word processing program and then build out my headers from the very beginning. When I do so, I like for each section to have a verb and a subject that defines what we’re going to be doing. For example, I might have Header 1 as Install OS X, with Header 2 as Formatting Drives followed by Header 2 as Using the Recovery Partition followed by Header 3 of Installing the Operating System.
  5. Keep your paragraphs and sentences structured. Beyond the headings structure, make sure that each sentence only has one thought (and that sentences aren’t running on and on and on). Also, make sure that each paragraph illustrates a sequence of thoughts. Structure is much more important with technical writing than with, let’s say, science fiction. Varying sentence structure can keep people awake.
  6. Use good grammar. Bad grammar makes things hard to read and most importantly gets in the way of your message getting to your intended audience. Strunk and White’s Elements of Style is very useful if you hit a place where you’re not sure what to write. Grammar rules are a lot less stringent with online writing, such as a website. When it comes to purposefully breaking grammatical rules, I like to make an analogy with fashion. If you show up to a very formal company in $400 jeans, they don’t care that your jeans cost more than most of their slacks; they just get cranky you’re wearing jeans. Not everyone will pick up on purposeful grammatical lapses. Many will just judge you harshly. Especially if they hail from the Midwest.
  7. Define your audience. Are you writing for non-technical users trying to use a technical product? Are you writing for seasoned Unix veterans trying to get acquainted with a new version of Linux? Are you writing for hardened programmers? The more clearly you define the audience the easier it is to target a message to that audience. The wider the scope of the audience the more people are going to get lost, feel they’re reading content below their level, etc.
  8. Know your style guide. Whoever you are writing for probably has a style guide of some sort. This style guide will lay out how you write, specific grammar styles they want used, hopefully a template with styles pre-defined, etc. I’ve completed several writing gigs, only to discover I need to go back and reapply styles to the entire content. When you do that, something will always get missed…
  9. Quoting is important when writing code. It’s also important to quote some text. If you have a button or text on a screen with one word that begins with a capped letter, you don’t need to quote that in most style guides. But if there’s more than one word, or any of the words use a non-capped letter or have a special character, then the text should all be quoted. It’s also important to quote and attribute text from other locations. Each style guide does this differently.
  10. Be active. No, I’m not saying you should run on a treadmill while trying to dictate the chapter of a book to Siri. Use an active voice. For example, don’t say “When installing an operating system on a Mac you should maybe consider using a computer that is capable of running that operating system.” Instead say something like “Check the hardware compatibility list for the operating system before installation.”
  11. Be careful with pronouns. When I’m done writing a long document I’ll do a find for all instances of it (and a few other common pronouns) and look for places to replace with the correct noun.
  12. Use examples. Examples help to explain an otherwise intangible idea. It’s easy to tell a reader they should enable alerts on a system, but much more impactful to show a reader how to receive an alert when a system exceeds 80 percent of disk capacity.
  13. Use bullets or numbered lists. I love writing in numbered lists and bullets (as with these tips). Doing so allows an author to most succinctly go through steps and portray a lot of information that is easily digestible to the audience. Also, if one of your bullets ends with a period, they all must. And the tense of each must match.
  14. Use tables. If bullets are awesome then tables are the coolest. You can impart a lot of information using tables. Each needs some text explaining what is in the table and a point that you’re usually trying to make by including the table.
  15. Judiciously use screen shots. If there’s only one button in a screen shot then you probably don’t need the screen shot. If there are two buttons you still probably don’t need the screen shot. If there are 20 and it isn’t clear in the text which to use, you might want to show the screen. It’s easy to use too many or not enough screen shots. I find most of my editors have asked for more and more screens until we get to the point that we’re cutting actual content to fit within a certain page count window. But I usually have a good idea of what I want to be a screen shot and what I don’t want to be a screen shot from the minute I look at the outline for a given chapter. Each screen shot should usually be called out within your text.
  16. Repetition is not a bad thing. This is one of those spots where I disagree with some of my editors from time to time. Editors will say “but you said that earlier” and I’ll say “it’s important.” Repetition can be a bad thing, if you’re just rehashing content, but if you intentionally repeat something to drive home a point then repetition isn’t always a bad thing. Note: I like to use notes/callouts when I repeat things. 
  17. White space is your friend. Margins, space between headers, kerning of fonts. Don’t pack too much crap into too little space or the reader won’t be able to see what you want them to see.
  18. Proofread, proofread, proofread. And have someone else proofread your stuff.
  19. Jargon, acronyms and abbreviations need to be explained. If you use APNS you only have to define it once, but it needs to be defined.
  20. I keep having editors say “put some personality into it” but then they invariably edit out the personality. Not sure if this just means I have a crappy personality, but it brings up a point: while you may want to liven up text, don’t take away from the meaning by doing so.
  21. Don’t reinvent the wheel. Today I was asked again to have an article from krypted included in a book. I never have a problem with contributing an article to a book, especially since I know how long it takes to write all this stuff. If I can save another author a few hours or days then they can push the envelope of their book that much further.
  22. Technical writing is not a conversation. Commas are probably bad. The word um is definitely bad. Technical writing should not ramble but be somewhat formal. You can put some flourish in, but make sure the sentences and arguments are meaningful, as with a thesis.
  23. Be accurate. Technical reviewers or technical editors help to make sure you’re accurate, but test everything. Code, steps, etc. Make sure that what you’re saying is correct up to the patch level and not just for a specific environment, like your company or school.
  24. Use smooth transitions between chapters. This means a conclusion that at least introduces the next chapter in each. Don’t overdo the transitions or get into the weeds of explaining an entire topic again.
  25. Real writers publish. If you write a 300 page document and no one ever sees it, did that document happen? If the document isn’t released in a timely manner then the content might be out of date before getting into a reader’s hands. I like to take my outline (step 2) and establish a budget (a week, 20 hours, or something like that).

…’Til You Make It

Monday, January 7th, 2013

Say you need a bunch of Apple IDs, and you need them pronto. There’s a form you can fill out, a bunch of questions floating in a window in some application – it can feel very… manual. A gentleman on the Enterprise iOS site entered, filling the void with an AppleScript that could batch-create IDs with iTunes (and it has seen updates thanks to Aaron Friemark).

That bikeshed, though, was just not quite the color I was looking for. I decided to Fake it. Are we not Professional Computer Operators?

Before I go into the details, a different hypothetical use case: say you just migrated mail servers, and didn’t do quite enough archiving previously. Client-side moves may be impractical or resource-intensive. So you’d rather archive server-side, but can’t manipulate the mail server directly, and the webmail GUI is a touch cumbersome: are we relegated to ‘select all -> move -> choose folder -> confirm’ while our life-force drains away?

Fake is described as a tool for web automation and testing. It’s been around for a bit, but it took an ‘Aha!’ moment while pondering these use cases for me to realize its power. What makes it genius is you don’t need to scour HTML source to find the id of the element you want to interact with! Control-drag to the element, specify what you want to do with it. (There are top-notch videos describing these options on the website.) And it can loop. And delay (either globally or between tasks), and the tasks can be grouped and disabled in sections and organized in a workflow and saved for later use. (Can you tell I’m a bit giddy about it?)

Fake in action – Mail

So that mail archive can loop away while you do dishes. Got to the end of a date range? Pause it, change the destination folder mid-loop, and keep it going. (There is a way to look at the elements and make a conditional when it reads a date stamp, but I didn’t get that crazy with it… yet.)

And now even verifying the email addresses used with the Apple ID can be automated! Blessed be the lazy sysadmin.

The State of Tablets in Schools

Thursday, January 3rd, 2013

Any managed IT environment needs policies. One of the obvious ones is to refresh the hardware on some sort of schedule so that the tools people need are available and they aren’t hampered by running new software on old hardware. Commonly, security updates are available exclusively on the newest release of an operating system. Tablets are just the same, and education has been seeing as much of an influx of iOS devices as anywhere else.

Fraser Speirs has just gone through the process of evaluating replacements for iPads used in education, and he discusses the criteria he’s come up with and his conclusions on his blog.

Manually delete data from Splunk

Thursday, December 27th, 2012

By default Splunk doesn’t delete the logging data that it’s gathered. Without taking some action to remove data, it will continue to process all results from day one when running an “All time” search. That may be desirable in some cases where preserving and searching the data for historical purposes is necessary, but when using Splunk only as a monitoring tool, the older data becomes superfluous after a time.

Manually deleting information from Splunk is irreversible and doesn’t necessarily free disk space. Splunk users should only delete when they’ve verified their search results return the information they expect.

Enabling “can_delete”

No user can delete data until he’s been provided the can_delete role. Not even the admin account in the free version of Splunk has this capability enabled. To enable can_delete:

  1. Click the Manager link and then click the Access Controls link in the Users and authentication section.
  2. Click the Users link. If running the free version of Splunk then the admin account is the only account available. Click the admin link. Otherwise, consider creating a special account just for sensitive procedures, such as deleting data, and assigning the can_delete role only to that user.
  3. For the admin account move the can_delete role from the Available roles to the Selected roles section.
    Enable can_delete
  4. Click the Save button to keep the changes.

Finding data

Before deleting data, be sure that a search returns the exact data to be deleted. This is as simple as performing a regular search using the Time range drop-down menu to the right of the Search field.

The Time range menu offers numerous choices for limiting results by time, including Week to date, Year to date and Yesterday. In this case, let’s search for data from the Previous month:

Search time range

Wait for the search to complete, then verify the results returned are the results that need deleting.

Deleting data

Deleting the found data is as simple as performing a search and then piping it into a delete command:

Search and delete
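
In plain search language, that amounts to something like the line below (the sourcetype is hypothetical – reuse whatever search you just verified; “Previous month” corresponds to these earliest/latest modifiers):

sourcetype=syslog earliest=-1mon@mon latest=@mon | delete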

 

This runs the search again, deleting found results on the fly, which is why searching first before deleting is important. Keep in mind that a “delete and search” routine takes as long as or longer than the initial search to run and consumes processing power on the Splunk server. If deleting numerous records, proceed with one search/delete at a time to avoid overtaxing the server.

Configure network printers via command line on Macs

Wednesday, December 26th, 2012

In a recent Twitter conversation with other folks supporting Macs we discussed ways to programmatically deploy printers. Larger environments that have a management solution like JAMF Software’s Casper can take advantage of its ability to capture printer information and push to machines. Smaller environments, though, may only have Apple Remote Desktop or even nothing at all.

Because Mac OS X incorporates CUPS printing, administrators can utilize the lpadmin and lpoptions command line tools to programmatically configure new printers for users.

lpadmin

A basic command for configuring a new printer using lpadmin looks something like this:

lpadmin -p "salesbw" -v "lpd://192.168.1.10/" -D "Sales B&W" -L "2nd floor print center" -P "/Library/Printers/PPDs/Contents/Resources/HP LaserJet 8150 Series.gz" -E

Several options follow the lpadmin command:

  • -p = Printer name (queue name if sharing the printer)
  • -v = IP address or DNS name of the printer
  • -D = Description of the printer (appears in the Printers list)
  • -L = Location of the printer
  • -P = Path to the printer PPD file
  • -E = Enable this printer

The result of running this command in the Terminal application as an administrator looks like this:

New printer

lpoptions

Advanced printer models may have duplex options, multiple trays, additional memory or special features such as stapling or binding. Consult the printer’s guide or its built-in web page for a list of these installed printer features.

After installing a test printer, use the lpoptions command with the -l option in the Terminal to “list” the feature names from the printer:

lpoptions -p "salesbw" -l

The result is usually a long list of features that may look something like:

HPOption_Tray4/Tray 4: True *False
HPOption_Tray5/Tray 5: True *False
HPOption_Duplexer/Duplex Unit: True *False
HPOption_Disk/Printer Disk: *None RAMDisk HardDisk
HPOption_Envelope_Feeder/Envelope Feeder: True *False
...

Each line is an option. The first line above displays the option for Tray 4 and shows the default setting is False. If the printer has the optional Tray 4 drawer installed then enable this option when running the lpadmin command by following it with:

-o HPOption_Tray4=True

Be sure to use the option name to the left of the slash, not the friendly name with spaces after the slash.

To add the duplex option listed on the third line add:

-o HPOption_Duplexer=True

And to add the envelope feeder option listed on the fifth line add:

-o HPOption_Envelope_Feeder=True

Add as many options as necessary by stringing them together at the end of the lpadmin command:

lpadmin -p "salesbw" -v "lpd://192.168.1.10/" -D "Sales B&W" -L "2nd floor print center" -P "/Library/Printers/PPDs/Contents/Resources/HP LaserJet 8150 Series.gz" -E -o HPOption_Tray4=True -o HPOption_Duplexer=True -o HPOption_Envelope_Feeder=True

The result of running the lpadmin command with the -o options enables these available features when viewing the Print & Scan preferences:

Printer options

With these features enabled for the printer in Print & Scan, they also appear as selectable items in all print dialogs:

Printer dialog
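
And if a test queue needs to go away when you’re finished experimenting, lpadmin can remove it too:

lpadmin -x "salesbw"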

 

BSD as a useful tool

Monday, December 17th, 2012

Whether or not you know it, the world runs on BSD. You can’t send a packet more than a few hops without a BSD-derived TCP/IP stack getting involved. Heck, you’d be hard pressed to find a machine which doesn’t already have BSD code throughout the OS.

Why BSD? Many companies don’t want to deal with GPL code and the BSD license allows any use so long as the BSD group is acknowledged. This is why Windows has BSD code, Mac OS X is based on BSD (both in its current incarnation which pulls much code from FreeBSD and NetBSD as well as via code which came from NeXTStep, which in turn was derived from 4.3BSD), GNU/Linux has lots of code which was written while looking at BSD code, and most TCP/IP stacks on routers and Internet devices are BSD code.

In the context of IT tools, BSD excels due to its cleanliness and consistency. GNU/Linux, on the other hand, has so many different distributions and versions that it’s extremely difficult to do certain tasks across distributions in any consistent way. Furthermore, the hardware requirements of GNU/Linux preclude using anything but a typical x86 PC with a full complement of associated resources. Managing GNU/Linux on non-x86 hardware is a hobby in its own right and not the kind of thing anyone would want to do in a production environment.

NetBSD in particular stands in stark contrast to GNU/Linux when deploying on machines of varying size and capacity. One could just as easily run NetBSD from an old Pentium 4 machine as a tiny StrongARM SheevaPlug, a retired PowerPC Macintosh, or a new 16 core AMD Interlagos machine. A usable system could have 32 megs of memory or 32 gigs. Disk space could be a 2 gig USB flash drive or tens of terabytes of RAID.

Configuration files are completely consistent across architecture and hardware. You may need to know a little about the hardware when you first install (wd, sd, ld for disks, ex, wm, fxp, et cetera for NICs, for example), but after that everything works the same no matter the underlying system.

Some instances where a BSD system can be invaluable are situations where the installed tools are too limited in scope to diagnose problems, where problematic hardware needs to be replaced or augmented quickly with whatever’s at hand, or where secure connectivity needs to be established quickly. Some examples where BSD has come in handy are:

In a warehouse where an expensive firewall device was flaky, BSD provided a quick backup. Removing the flaky device would have left the building with no Internet connection. An unused Celeron machine with a USB flash drive and an extra ethernet card made for a quick and easy NetBSD NAT / DHCP / DNS server for the building while the firewall device was diagnosed.

At another business an expensive firewall device was in use which is not capable of showing network utilization in any detail without setting up a separate computer for monitoring (and even then it is limited to giving very general and broad information), nor is it flexible when it comes to routing all traffic through alternate methods such as gre or ssh tunnels. Setting up an old Pentium 4 with a four port ethernet card gave us a router / NAT device which allowed us to do tests where we passed all traffic through a single tunnel to an upstream provider to test the ISP’s suggestion that too many connections were running simultaneously (which wasn’t the case, but sometimes you have to appease the responsible party before they’ll take the next step). They can also now monitor network traffic quickly and easily using darkstat (http://unix4lyfe.org/darkstat/), monitor packet loss, see who on the local networks is causing network congestion, et cetera. The machine serves three separate local network segments which can talk with each other. One segment is blocked from accessing the Internet because it contains Windows systems running Avid, but can be turned on momentarily to allow for software activation and similar things.

When another business needed a place to securely host their own WordPress blog, an unused Celeron machine was set up with a permissions scheme which regular web hosting providers won’t typically allow. WordPress is set up so that neither the running php code nor the www user can write to areas which allow for script execution, eliminating almost all instances where WordPress flaws can give full hosting abilities to attackers, which is how WordPress is so often used to host phishing sites and advertising redirectors.

DNS hosting, NAT or routing can be set up in minutes, a bridge can be configured to do tcpdump capture, or a web proxy can be installed to save bandwidth and perform filtering. An SMTP relay can be locally installed to save datacenter bandwidth.

So let’s say you think that a NetBSD machine could help you. But how? If you haven’t used NetBSD yet, then here are some tips.

The latest version is 6.0. The ISOs from NetBSD’s FTP server typically weigh in at around 250 to 400 megabytes, so CDs are fine. The installer is pretty straightforward, and the mechanisms for installing on various architectures are not germane here.

After boot, the system is pretty bare, so here are some things you’ll want to do.

Let’s look at a sample /etc/rc.conf:

hostname=wopr.example.com
sshd=YES
ifconfig_wm0="dhcp"
dhcpcd_flags="-C resolv.conf -C mtu"
ifconfig_wm1="inet 192.168.50.1 netmask 255.255.255.0"
ipfilter=YES
ipnat=YES
dhcpd=YES
dhcpd_flags="wm1 -cf /etc/dhcpd.conf"
ip6mode=router
rtadvd=YES
rtadvd_flags="-c /etc/rtadvd.conf wm1"
named9=YES
named_chrootdir="/var/chroot/named"
named_flags="-c /etc/namedb/named.conf"

So what we have here are a number of somewhat obvious and a few not-so-obvious options. Let’s assume you know what hostname, sshd, named9, ipnat and dhcpd are for. You can even make guesses about many of the options. What about ifconfig_wm0 (and its flags), ip6mode and other not-so-obvious rc.conf options? First, obviously, you can:

man rc.conf

dhcpcd is a neat DHCP client which is lightweight, supports IPv6 auto discovery and is very configurable. man dhcpcd to see all the options; the example above gets a lease on wm0 but ignores any attempts by the DHCP server to set our resolvers or our interface’s MTU. ifconfig_wm1 should be pretty self-explanatory.

ipnat and ipfilter enable NetBSD’s built in ipfilter (also known as ipf) and its NAT. Configuration files may often be as simple as this for NAT in /etc/ipnat.conf:

map wm0 192.168.50.0/24 -> 0/32 proxy port ftp ftp/tcp
map wm0 192.168.50.0/24 -> 0/32 portmap tcp/udp 10000:50000
map wm0 192.168.50.0/24 -> 0/32
rdr wm0 0.0.0.0/0 port 5900 -> 192.168.50.175 port 5900

And lines which look like this for ipfilter in /etc/ipf.conf:

block in quick from 78.159.112.198/32 to any

There’s tons of documentation on the Internet, particularly here:

http://coombs.anu.edu.au/~avalon/

To quickly summarize, the first three lines set up NAT for the 192.168.50.0/24 subnet. The ftp line is necessary because of the mess which is FTP. The second line says to only use port numbers in that range for NAT connections. The third line is for non-TCP and non-UDP protocols such as ICMP or IPSec. The fourth redirects port 5900 of the public facing IP to a host on the local network.

The ipf.conf line is straightforward; ipf in many instances is used to block attackers since you wouldn’t turn on or redirect services which you didn’t intend to be public. Other examples are in the documentation and include stateful inspection (including stateful UDP; I’ll let you think for a while about how that might work), load balancing, transparent filtering (on a bridge), port spanning, and so on. It’s really quite handy.
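
For a taste of the stateful side, a minimal default-deny ruleset might look like this sketch (the interface and port are examples, not a recommendation):

# block everything inbound, keep state on our own outbound traffic
block in on wm0 all
pass out quick on wm0 proto tcp from any to any flags S keep state
pass out quick on wm0 proto udp from any to any keep state
pass out quick on wm0 proto icmp from any to any keep state
# allow inbound ssh from anywhere
pass in quick on wm0 proto tcp from any to any port = 22 flags S keep state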

Next is BIND. It comes with NetBSD and if you know BIND, you know BIND. Simple, huh?

rtadvd is the IPv6 version of a DHCP daemon and ip6mode=router tells the system you intend to route IPv6 which does a few things for you such as setting net.inet6.ip6.forwarding=1. You’re probably one of those, “We don’t need that yet” people, so we’ll leave that for another time. IPv6 is easier than you think.

dhcpd is for the ISC DHCP server. man dhcpd and check out the options, but most should already look familiar.

So you have a system up and running. What next? You may want to run some software which isn’t included with the OS such as Apache (although bozohttpd is included if you just want to set up simple hosting), PHP, MySQL, or if you’d like some additional tools such as emacs, nmap, mtr, perl, vim, et cetera.

To get the pkgsrc tree in a way which makes updating later much easier, use CVS. Put this into your .cshrc:

setenv CVSROOT :pserver:anoncvs@anoncvs.netbsd.org:/cvsroot

Then,

cd /usr
cvs checkout -P pkgsrc

After that’s done (or while it’s running), set up /etc/mk.conf to your liking. Here’s one I use most places:

LOCALBASE=/usr/local
FAILOVER_FETCH=YES
SKIP_LICENSE_CHECK=YES
SMART_MESSAGES=YES
IRSSI_USE_PERL=YES
PKG_RCD_SCRIPTS=YES
PKG_OPTIONS.sendmail=sasl starttls
CLEANDEPENDS=yes

Set LOCALBASE if you prefer a destination other than /usr/pkg/. PKG_RCD_SCRIPTS tells pkgsrc to install rc.d scripts when installing packages. PKG_OPTIONS.whatever might be different for various packages; I put this one in here as an example. To see what options you have, look at the options.mk for the package you’re curious about. CLEANDEPENDS tells pkgsrc to clean up working directories after a package has been compiled.

Once the checkout has finished, you have a tree of Makefiles (and other files) which you can use as simply as:

cd /usr/pkgsrc/editors/vim
make update

That will automatically download, compile and install all prerequisites (if any) for the vim package, then download, compile and install vim itself. I personally use “make update” rather than “make install” in case I’m updating an older package, FYI.

With software installed, the rc.conf system works similarly to the above. After adding Apache, for instance (www/apache24/), you can append apache=YES to /etc/rc.conf to set Apache to launch at boot; to start it without rebooting, just run its rc.d script:
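
echo "apache=YES" >> /etc/rc.conf
/etc/rc.d/apache start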

One package which comes in very handy when trying to keep a collection of packages up to date is pkg_rolling-replace (/usr/pkgsrc/pkgtools/pkg_rolling-replace). After performing a cvs update in /usr/pkgsrc, one can simply run pkg_rolling-replace -ru and come back a little later; everything which has been updated in the CVS tree will be compiled and updated in the system.
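
In practice, that routine is just:

cd /usr/pkgsrc && cvs update -dP
pkg_rolling-replace -ru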

Finally, to update the entire OS, there are just a handful of steps:

cd /usr
cvs checkout -P -rnetbsd-6 src

In this instance, the netbsd-6 tag specifies the release branch (as opposed to current) of NetBSD.

I keep a wrapper called go.sh in /usr/src so I don’t need to remember options. It makes sure all the CPUs are used when compiling and that build products end up in tidy, easy-to-find places.

#!/bin/sh
# usage: ./go.sh <arch> <build.sh operation(s)>, e.g. ./go.sh amd64 tools
# -j: one job per CPU; -D/-O/-T/-R: tidy destdir, objdir, tools and sets paths
./build.sh -j `sysctl -n hw.ncpu` -D ../dest-$1 -O ../obj-$1 -T ../tools -R ../sets -m $*

An example of a complete OS update would be:

./go.sh amd64 tools
./go.sh amd64 kernel=GENERIC
./go.sh amd64 distribution
./go.sh amd64 install=/

Then,

mv /netbsd /netbsd.old
mv /usr/obj-amd64/sys/arch/amd64/compile/GENERIC/netbsd /
shutdown -r now

Updating the OS is usually only necessary once every several years or when there’s an important security update. Security updates which pertain to the OS or software which comes with the OS are listed here:

http://www.netbsd.org/support/security/

The security postings have specific instructions on how to update just the relevant parts of the OS so that in most instances a complete rebuild and reboot are not necessary.

Security regarding installed packages can be checked using built-in tools. One of the package tools is called pkg_admin; this tool can compare installed packages with a list of packages known to have security issues. To do this, one can simply run:

pkg_admin fetch-pkg-vulnerabilities
pkg_admin audit

A sample of output might look like this:

Package mysql-server-5.1.63 has a unknown-impact vulnerability, see http://secunia.com/advisories/47894/
Package mysql-server-5.1.63 has a multiple-vulnerabilities vulnerability, see http://secunia.com/advisories/51008/
Package drupal-6.26 has a information-disclosure vulnerability, see http://secunia.com/advisories/49131/

You can then decide whether the security issue may affect you or whether the packages need to be updated. This can be automated by adding a crontab entry for root:

# download vulnerabilities file
0 3 * * * /sbin/pkg_admin fetch-pkg-vulnerabilities >/dev/null 2>&1
5 3 * * * /sbin/pkg_admin audit

All in all, BSD is a wonderful tool for quick emergency fixes, for permanent low maintenance servers and anything in between.

iOS Backups Continued, and Configuration Profiles

Friday, December 14th, 2012

In our previous discussion of iOS backups, we hinted that configuration profiles are the ‘closest to the surface’ on a device. What that means is that when Apple Configurator restores a backup, the profile is the last thing applied to the device. Folks hoping to use Web Clips as a kind of app deployment need to realize that restoring a backup with the web clip in a particular place doesn’t work: the backup that designates where icons on the home screen line up gets laid down before the web clip gets applied by the profile, so the clip gets bumped to whichever would be the next home screen after the apps take their positions.

This makes a great segue into the topic of configuration profiles. Here’s a ‘secret’ hiding in plain sight: Apple Configurator can make profiles that work on 10.7+ Macs. (But please, don’t use it for that – see below.) iPCU could possibly generate usable ones as well, although one should consider the lack of full-screen mode in its interface as a hint: it may not see much in the way of updates on the Mac from now on. iPCU is all you have in the way of an Apple-supported tool on Windows, though. (Protip: activate the iOS device before you try to put profiles on it – credit @bruienne for this reminder.)

Also thanks to @bruienne for the recommendation of the slick p4merge tool.

Now why would you avoid making, for example, a Wi-Fi configuration profile for use on a Mac with Apple Configurator? Well, there’s one humongous difference between iOS and Macs: individual users. Managing devices with profiles shows Apple tipping their cards: they seem to be saying you should think of only one user per device, and that if a setting is important enough to manage at all, it should be always enforced. The Profile Manager service in Lion and Mountain Lion Server has an extra twist, though: you can push out settings for Mac users or the devices they own. If you want to manage a setting across all users of a device, you can do so at the Device Group level, which generates extra keys beyond those present in a profile generated by Apple Configurator. The end result is that a Configurator-generated profile will be user-specific, and will fail with deployment methods that need to target the System. (Enlarge the above screenshot to see the differences – and yes, there’s a poorly obscured password in there. Bring it on, hax0rs!)

These are just more of the ‘potpourri’ type topics that we find time to share after being caught by peculiarities out in the field.

CrashPlan PROe Refresher

Thursday, December 13th, 2012

It seems that grokking the enterprise edition of Code 42’s CrashPlan backup service is confusing for everyone at first. I recall several months of reviewing presentations and having conversations with elusive sales staff before the arrangement of the moving parts and the management of its lifecycle clicked.

There’s a common early hangup for sysadmins trying to understand deployment to multi-user systems: the only current way to protect each user from another’s data is to lock the client interface (if instituted as an implementation requirement). What could be considered an inflexibility could just as easily be interpreted as a design decision that directly relates to licensing and workflow. The expected model these days is that a single user may have multiple devices, but enabling end users to restore files (as we understand it) requires that one user be granted access to the backup for an entire device. If that responsibility is assigned to the IT staff, then the end user must rely on IT to assist with a restore instead of healing thyself. This isn’t exactly the direction business tech has been going for quite some time. The deeper point is that backup archives and ‘seats’ are tied to devices – encryption keys cascade down from a user, and interacting with the management of a device is, at this point, all or nothing.

This may be old hat to some, and just after the Pro name took on a new meaning (Code 42 hosted-only), the E for Enterprise version had seemingly been static for a spell – until things really picked up this year. With the 3.0 era came the phrase “Cold Storage”, which is neither a separate location in the file hierarchy nor intended for long-term retention (like one may use Amazon’s new Glacier tier of storage for). After a device is ‘deactivated’, its former archives are marked for deletion, just as in previous versions – this is just a new designation for the state of the archives. The actual configuration which determines when the deactivated device’s backup will finally be deleted can be set deployment-wide or more granularly per organization. (Yes, you can find the offending GUID-tagged folder of archives in the PROe server’s filesystem and nuke it from orbit instead, if so inclined.)

ComputerBlock from the PROe API

Confusion could arise from a term that looks similar to deactivation: ‘deauthorization’. Again, you need to notice the separation between a user and their associated device. Deauthorization operates at the device level to put a temporary hold on its ability to log in and perform restores on the client. In API terms it’s most similar to a ComputerBlock. This still only affects licensing in that you’d need to deactivate the device to get back its license for use elsewhere (although jiggery-pokery may be able to resurrect a backup archive if the user still exists…). As always: test, test, test, distribute your eggs across multiple baskets, proceed with caution, and handle with care.
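
For the API-inclined, blocking and unblocking a device might look roughly like the following sketch. The hostname, port and device ID are placeholders, and the exact endpoint shape should be verified against the PROe API documentation for your server version:

curl -k -u admin -X PUT https://proe.example.com:4285/api/ComputerBlock/12345
curl -k -u admin -X DELETE https://proe.example.com:4285/api/ComputerBlock/12345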

iOS and Backups

Wednesday, December 12th, 2012

If you’re like us, you’re a fan of our modern era, as we are (for the most part) better off than we previously were for managing iOS devices. One such example is bootstrapping, although we’re still a ways away from traditional ‘imaging’. You no longer need Xcode to update the OS in parallel, iPCU to generate configuration profiles, and iTunes to restore backups. Nowadays, in our Apple Configurator world, you don’t interact with iTunes much at all (although it needs to be present to assist in loading apps, and it takes a part in activation).

So what are backups like now, and what are the differences between a restore from, say, iCloud versus Apple Configurator? Well, as it was under the previous administration, iTunes has all our stuff; practically our entire base belongs to it. It knows about our Apple ID, it has the ‘firmware’ or OS itself cached, we can rearrange icons with our pointing human interface device… good times. Backups with iTunes are pretty close to imaging, as an IT admin would possibly define it. The new kids on the block (iCloud, Apple Configurator), however, have a different approach.

iOS devices maintain a heavily structured and segmented environment. Configuration profiles are bolted on top (more on this in a future episode), ‘userspace’ and many settings are closer to the surface, apps live further down towards the core, and the OS is the nougat-y center. Apple Configurator interacts with all of these modularly, and backups take the stage after the OS and apps have been laid down. This means that if your backup includes apps that Apple Configurator did not provide for you, the apps (and their corresponding sandboxed data) are no longer with us; the backup it makes cannot restore the apps or their placement on the home screen.

iCloud therefore stands head and shoulders above the rest (even if iTunes might be faster). It’s proven to be a reliable repository of backups, while managing a cornucopia of other data – mail, contacts, calendars, etc. It’s a pretty sweet deal that all you need is to plug in to power for a backup to kick off, which makes testing devices by wiping them just about as easy as it can get. (Assuming the apps have the right iCloud compatibility, so the saved games and other sandbox data can be backed up…) Could it be better? Of course. What’s your radar for restoring a single app? (At this point, that can be accomplished with iTunes and manual interaction only.) How about more control over frequency/retention? Never satisfied, these IT folk.

How to Configure basic High Availability (Hardware Failover) on a SonicWALL

Friday, November 30th, 2012

Configuring High Availability (Hardware Failover) on a SonicWALL requires the following:

1. Two SonicWALLs of the same model (TZ 200 and up).

2. Both SonicWALLs need to be registered at MySonicWALL.com (regular registration, and then one as HF Primary, one as HF Secondary).

3. The same firmware version needs to be on both SonicWALLs.

4. Static IP addresses are required for the WAN Virtual IP interface (you can’t use DHCP).

5. Three LAN IP addresses (one for Virtual IP, one for the management IP, and one for the Backup management IP).

6. A crossover cable (to connect the SonicWALLs to each other) on the last ethernet interfaces.

7. 1 hub or switch for the WAN port on each SonicWALL to connect to.

8. 1 hub or switch for the LAN port on each SonicWALL to connect to.

Caveats

1. High Availability cannot be configured if built-in wireless is enabled.

2. On NSA 2400MX units, High Availability cannot be configured if PortShield is enabled.

3. Stateful HA is not supported for connections on which DPI-SSL is applied.

4. On TZ210 units the HA port/Interface must be UNASSIGNED before setting up HA (last available copper ethernet interfaces).

 

Setup

1. Register both SonicWALLs at MySonicWALL as High Availability Pairs BEFORE connecting them to each other:

• “Associating an Appliance at First Registration”: http://www.fuzeqna.com/sonicwallkb/consumer/kbdetail.asp?kbid=6233#Associating_an_Appliance_at_First_Registration_

• “Associating Pre-Registered Appliances”: http://www.fuzeqna.com/sonicwallkb/consumer/kbdetail.asp?kbid=6235#Associating_Pre-Registered_Appliances

• “Associating a New Unit to a Pre-Registered Appliance”: http://www.fuzeqna.com/sonicwallkb/consumer/kbdetail.asp?kbid=6236#Associating_a_New_Unit_to_a_Pre-Registered_Appliance

2. Login to Primary HF and configure the SonicWALL (firewall rules, VPN, etc).

3. Connect the SonicWALLs to each other on their last ethernet ports using a cross over cable.

4. Connect the WAN port on both SonicWALLs to a switch or hub using straight through (standard) ethernet cables, and then connect the switch to your upstream device (modem, router, ADTRAN, etc.)

5. Ensure the Primary HF can still communicate to the Internet.

6. Connect the LAN port on both SonicWALLs to a switch or hub using straight through (standard) ethernet cables, and then connect them to your main LAN switch (if you don’t have one, you should purchase one. This will be the switch that all your LAN nodes connect to.).

7. Go to High Availability > Settings.

8. Select the Enable High Availability checkbox.

9. Under SonicWALL Address Settings, type in the serial number for the Secondary HF (Backup SonicWALL). You can find the serial number on the back of the SonicWALL security appliance, or in the System > Status screen of the backup unit. The serial number for the Primary SonicWALL is automatically populated.

10. Click Accept to save these settings.

 

Configuring Advanced High Availability Settings

1. Click High Availability > Advanced.

2. Put a check mark for Enable Preempt Mode.

3. Put a check mark for Generate / Overwrite Backup Firmware and Settings when Upgrading Firmware.

4. Put a check mark for Enable Virtual MAC.

5. Leave the Heartbeat Interval at default (5000ms).

6. Leave the Probe Interval at default (no less than 5 seconds).

7. Leave Probe Count and Election Delay Time at default.

8. Ensure there’s a checkmark for Include Certificates/Keys.

9. Press Synchronize settings.

 

Configuring High Availability > Monitoring Setting

(Only do the following on the primary unit; the settings will be synchronized to the secondary unit.)

1. Login as the administrator on the Primary SonicWALL.

2. Click High Availability > Monitoring.

3. Click the Configure icon for an interface on the LAN (ex. X0).

4. To enable link detection between the designated HA interface on the Primary and Backup units, leave the Enable Physical Interface monitoring checkbox selected.

5. In the Primary IP Address field, enter the unique LAN management IP address of the primary unit.

6. In the Backup IP Address field, enter the unique LAN management IP address of the backup unit.

7. Select the Allow Management on Primary/Backup IP Address checkbox.

8. In the Logical Probe IP Address field, enter the IP address of a downstream device on the LAN network that should be monitored for connectivity (something that has an address that’s always turned on like a server or managed switch).

9. Click OK.

10. To configure monitoring on any of the other interfaces, repeat the above steps.

11. When finished with all High Availability configuration, click Accept. All changes will be synchronized to the idle HA device automatically.

 

Testing the Configuration

1. Allow some time for the configuration to sync (at least a few minutes). Power off the Primary SonicWALL. The Backup SonicWALL should quickly take over.

2. Test to ensure Internet access is OK.

3. Test to ensure LAN access is OK.

4. Log into the Backup SonicWALL using the unique LAN address you configured.

5. The management interface should now display “Logged Into: Backup SonicWALL Status: (green ball)”. If all licenses are not already synchronized with the Primary SonicWALL, go to System > Licenses and register this SonicWALL on mysonicwall.com. This allows the SonicWALL licensing server to synchronize the licenses.

6. Power the Primary SonicWALL back on, wait a few minutes, then log back into the management interface. The management interface should again display “Logged Into: Primary SonicWALL Status: (green ball)”.

NOTE: Successful High Availability synchronization is not logged, only failures are logged.

BizAppCenter

Thursday, November 29th, 2012

It was our privilege to be contacted by BizAppCenter to take part in a demo of their ‘Business App Store’ solution. They have been active on the Simian mailing list for some time, and have a product to help the adoption of the technologies pioneered by Greg Neagle of Disney Animation Studios (Munki) and the Google Mac Operations Team. Our experience with the product is as follows.

To start, we were given admin logins to our portal. The instructions guide you through getting started with a normal software patch management workflow, although certain setup steps need to be taken into account. First, you must add users and groups manually; there are no hooks for LDAP or Active Directory at present (although those are in the road map for the future). Admins can enter the serial number of each user’s computer, which allows a package to be generated with the proper certificates. Then invitations can be sent to users, who must install the client software that manages the apps specified by the admin from that point forward.

Email invitation

Sample applications are already loaded into the ‘App Catalog’, which can be configured to be installed for a group or a specific user. Uploading a drag-and-drop app in a zip archive worked without a hitch, as did uninstallation. End users can log into the web interface with the credentials emailed to them as part of the invitation, and can even ‘approve’ optional apps to become managed installs. This is a significant twist on the features offered by the rest of the web interfaces built on top of Munki, and more features (including cross-platform support) are supposedly planned.

Sample optional install

If you’d like to discuss Mac application and patch management options, including solutions such as BizAppCenter for providing a custom app store for your organization, please contact sales@318.com.

Introducing Splunk: Funny name, serious logging

Thursday, November 15th, 2012

So, my boss says:

“Write an article called ‘Getting Started with Splunk.’”

I reply:

“What, you think I know all this stuff? This really would be a getting started article.”

But here it is and WOW is Splunk cool!

My only experience with Splunk up to a couple of days ago was seeing a T-shirt that read “Log is my copilot”. I knew it had something to do with gathering log files and making them easier to read and search. In about an hour I had gone to Splunk’s website to research the product, downloaded and installed it, and started viewing logs from my own system. The Splunk folks have made getting their product into their customers’ hands easy and getting started even easier.

What is Splunk?

Simply put, Splunk can gather just about any kind of data that goes into a log (system logs, website metrics, etc.) into one place and make viewing that data easy. It’s accessed via web browser so it’s accessible on any computer or mobile device such as an iPad.

What do I need to run Splunk?

Practically any common operating system today can run Splunk: Mac OS X, Linux, Windows, FreeBSD and more.

How much does Splunk cost?

Don’t worry about that right now. Download and install the free version. It takes minutes to install and is a no-brainer. Let’s get started.

Getting Splunk

IT managers and directors may be interested in watching the introductory and business case videos with the corporate speak (“operational intelligence” anyone?) and company endorsements. Techs will be interested in getting started. Right on their home page is a big green Free Download button. Go there, click it and locate the downloader for your OS of choice. I downloaded the Mac OS X 10.7 installer to test (and installed it on OS X 10.8 without any issues).

Splunk home

This does require a sign-up to create an account. It takes less than a minute to complete. After submitting the information the 100 MB download begins right away.

While waiting for the download…

When the download is on its way the Splunk folks kindly redirect to a page with some short videos to watch while waiting. Watch this first one called Getting data into Splunk. It’s only a few minutes and this is the first thing to do after getting into Splunk.

Installing and starting Splunk

The download arrives as a double-clickable Apple Installer package. Double-click and install it. Toward the end it opens a simple TextEdit window with instructions for how to start, stop and access the newly installed Splunk site.

Install done
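
Those instructions boil down to a few one-liners (the path assumes the default install location):

/Applications/splunk/bin/splunk start
/Applications/splunk/bin/splunk stop
/Applications/splunk/bin/splunk restart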

Files are installed in /Applications/splunk and resemble a UNIX file system.

Splunk application folder

Open the Terminal application found in /Applications/Utilities and run the command /Applications/splunk/bin/splunk start. If this is the first time running Splunk it prompts to accept its license agreement. Tap the spacebar to scroll through and read the agreement, or type “q” to skip to the end and agree to the license.

EULA

Accepting the agreement continues to start Splunk where it displays some brief setup messages.

Starting Splunk

The setup then provides the local HTTP address for the newly installed Splunk site. Open this in a web browser to get to the login screen. The first login requires that the administrator account password be reset.

Splunk login

Following along with the Getting data into Splunk video, let’s give Splunk some information to work with. Mac OS X stores its own log files; let’s point to those.

Click the Add Data link to begin.

New Splunk home

Since Mac OS X’s log files are local to the machine, click A file or directory of files.

Add files

Click Next to specify local files.

Add local logs

This opens a window that exposes not only Mac OS X’s visible folders but its invisible folders as well. Browse to /var/log/system.log and click the Select button.

Browse logs folder

For now, opt to skip previewing the log file and click Continue.

Path to system.log

Now, let’s opt to monitor not only the system.log file but the entire /var/log folder containing dozens of other log files as well. Note that Splunk can watch rotated and zipped log files too. Click Save to finish adding logs.

Add /var/log folder

Let’s start searching!

Success, start searching

The Search window initially displays a list of all logs Splunk is monitoring. To narrow the search, change the time filter drop-down menu to Last 60 minutes. This will make the results a little easier to see on a system that’s only been running a short while.

Last 24 hours

Now, search for install*. Without the asterisk as a wildcard character, Splunk would only search for the literal word “install”. Splunk supports not only wildcard searches but booleans, parentheses, quotes, etc. It will return every instance recorded in the logs that matches the search criteria. It also creates an interactive bar chart along the top of the page to indicate the number of occurrences found for the search at particular times.

Search for install
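
A few more illustrative search strings in that vein (the terms themselves are arbitrary examples; note that the boolean operators are uppercase):

error OR fail*
(sshd OR sudo) NOT debug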

To further refine the search, Option+click most any word in the log entries below and Splunk will automatically add the necessary syntax to exclude an item. In this case the install* search returned install, installer and installd. Option+clicking installd changed the search criteria to install* NOT installd.

Modified search

Now what?

Continue exploring the videos to understand Splunk’s possibilities and take advantage of its Splunk Tutorial, which is available online as well as in PDF format for offline viewing. They do a great job leading users through setup and creating reports.

Still asking about price? Good.

The free version remains free but doesn’t include many features that really make it sing, such as monitoring and alerts, multiple user accounts, and support beyond the Splunk website. Cost depends primarily on the amount of data you want to suck into Splunk and have it watch. It’s not cheap, but for an enterprise needing to meet certain service level requirements it beats browsing through multiple servers trying to find the right log with the right information.

FYI, putting together this 1,000-word article probably took me 10 times longer than performing the Splunk install itself and beginning to learn it. It’s really well-done and easy to use. Splunk makes getting started simple.

MacSysAdmin 2012 Slides and Videos are Live!

Thursday, September 20th, 2012

318 Inc. CTO Charles Edge and Solutions Architect alumnus Zack Smith were back at the MacSysAdmin Conference in Sweden again this year, and the slides and videos are now available! All the 2012 presentations can be found here, and past years are at the bottom of this page.

Unity Best Practices In AVID Environments

Thursday, September 6th, 2012

Avid Unity environments are still common these days because the price for Avid’s ISIS SAN is tremendously high. While a Unity typically started anywhere from $50,000 to $100,000, a typical ISIS starts around the same price even though the ISIS is based on more typical, less expensive commodity hardware. The ISIS is based on common gigabit networking, whereas the Unity is based on fibre channel SCSI.

Avid Unity systems come in two flavors. Both can be accessed by fibre channel or by gigabit ethernet. The first flavor is all fibre channel hardware. The second uses a hardware RAID card in a server enclosure with a sixteen drive array and shares that storage over fibre channel and/or gigabit ethernet.

Components in a fibre channel only Unity can be broken down so:

  • Avid Unity clients
  • Fibre channel switch
  • Fibre channel storage
  • Avid Unity head

Components in a chassis-based Unity are:

  • Avid Unity clients
  • Fibre channel switch
  • Avid Unity controller with SATA RAID

The fibre channel only setup can be more easily upgraded. Because such setups are generally older, they typically came with a 2U rackmount dual Pentium 3 (yes, Pentium 3!) server. They use a 2 gigabit ATTO fibre channel card and reliability can be questionable after a decade.

The Unity head can be swapped for a no-frills Intel machine (AMD doesn’t work, and there’s not enough time in the world to figure out why), but one must be careful about video drivers. Several different integrated video chips and several video cards have drivers which somehow conflict with Unity software, so sometimes it’s easier to simply not install any drivers, since nothing depends on them. The other requirements / recommendations are a working parallel port (for the Unity dongle), a PCIe slot (for a 4 gigabit ATTO fibre channel card) and 4 gigs of memory (so that Avid File Manager can use a full 3 gigabytes).

The fibre channel switch is typically either a 2 gigabit Vixel switch or a 4 gigabit Qlogic 5200 or 5600 switch. The older Vixel switches have a tendency to fail because there are little heat sinks attached to each port chip which face downward, and after a while sometimes a heat sink or two fall off and the chip dies. Since Vixel is not in business, the only replacement is a Qlogic.

The fibre channel storage can be swapped for a SATA-fibre RAID chassis so long as the chassis supports chopping up RAID sets into many smaller logical drives on separate LUNs. Drives which Avid sells can be as large as 1 TB if using the latest Unity software, so dividing up the storage into LUNs no larger than 1 TB is a good idea.

Changing storage configuration while the Unity has data is typically not done due to the complexity and lack of proper understanding of what it entails. If it’s to be done, it’s typically safer to use a client or multiple clients to back up all the Unity workspaces to normal storage, then reconfigure the Unity’s storage from scratch. If that is what is done, that’s the best opportunity to add storage, change from fibre channel drives to RAID, take advantage of RAID-6, et cetera.

Next up is how Avid uses storage. The Unity essentially thinks that it’s given a bunch of drives. Drives cannot easily be added, so the only time to change total storage is when the Unity will be reconfigured from scratch.

The group of all available drives is called the Data Drive Set. There is only one Data Drive Set and it has a certain number of drives. You can create a Data Drive Set with different sized drives, but there needs to be a minimum of four drives of the same size to make an Allocation Group. Spares can be added so that detected disk failures can trigger a copy of a failing drive to a spare.

Once a Data Drive Set is created, the File Manager can be started and Allocation Groups can be created. The reasoning behind Allocation Groups is so that groups of drives can be kept together and certain workspaces can be put on certain Allocation Groups to maximize throughput and/or I/O.

There are pretty much two different families of file access patterns. One is pure video streaming, which is, as one might guess, just a continuous stream of data with very little other file I/O. Sometimes caching parameters on fibre-SATA RAID are configured so that large video-only or video-primary drive sets (sets of logical volumes cut up from a single RAID set) are optimized for streams. The other file access pattern is handling lots of little files such as audio, stills, render files and project files. Caching parameters set for optimizing lots of small random file I/O can show a noticeable improvement, particularly for the Allocation Group which holds the projects workspace.

Workspaces are what they sound like. When creating a workspace, you decide which Allocation Group that workspace will live on. Workspaces can be expanded and contracted even while clients are actively working in them. The one workspace which matters most when it comes to performance is the projects workspace. Because Avid projects tend to have hundreds or thousands of little files, an overloaded Unity can end up taking tens of seconds to simply open a bin in Media Composer, which will certainly affect editors trying to work. The Attic is kept on the projects workspace, too, unless explicitly set to a different destination.

Although Unity systems can have ridiculously long uptimes, like any filesystem there can be problems. Sometimes lock files won’t go away when they’re supposed to, sometimes there can be namespace collisions, and sometimes a Unity workspace can simply become slow without explanation. The simplest way to handle filesystem problems, especially since there are no filesystem repair tools, is to create a new workspace, copy everything out of the old workspace, then delete the old workspace. Fragmentation is not checkable in any way, so this is a good way to make a heavily used projects workspace which has been around for ages a bit faster, too.

Avids have always had issues when there are too many files in a single directory. Since the media scheme on Avids involves Media Composer creating media files in workspaces on its own, one should take care to make sure that there aren’t any single directories in media workspaces (heck, any workspaces) which have more than 5,000 files. Media directories are created based on the client computer’s name in the context of the Unity, so if a particular media folder has too many items, that folder can be renamed to the same name with a “-1” at the end (or “-(n+1)”).

Avid has said that the latest Media Composer (6.0.3 at the time of this writing) is not compatible with the latest Unity client (5.5.3). This is not true; while certain exotic actions might not work well (uncompressed HD or a large number of simultaneous multicam streams, perhaps), all basic editing functions work just fine.

Finally, it should be pointed out that when planning ways to back up Unity workspaces, Windows clients are bad candidates. Because the number of simultaneously mounted workspaces is limited by the number of available drive letters, Windows clients can back up at most 25 workspaces at a time. Macs have no limitation on the number of workspaces they can mount simultaneously, plus Macs have rsync built in to the OS, so they’re a more natural candidate for performing backups.
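
As a minimal sketch of such a backup from a Mac client (the volume names here are hypothetical; the trailing slashes matter to rsync):

rsync -av /Volumes/ProjectsWS/ /Volumes/BackupRAID/ProjectsWS/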