Archive for September, 2012

DeployStudio Scripting Tips

Tuesday, September 25th, 2012

I’ve given a presentation on my affinity for DeployStudio, yet with it being closed source, it’s sometimes like an enigma (wrapped in a mystery (wrapped in bacon)). However, a workflow action exists to enable scripting within it, although the only options are running a script automatically when it’s dropped into a workflow and passing arguments to it non-interactively. Even with little in the way of documented information, projects have popped up to take advantage of the framework provided.

Most notably, Rusty Myers’ BackupRestore scripts enabled quite an interesting workflow: first, you could run one workflow to tar (or ditto) the user folders to a new Backup directory in the repo, with a few customizable exceptions. Then, when either specified or dropped into a workflow that had a restore action precede it, you could put the users and their associated passwords back into place. This is obviously pretty darn useful for clean(er) migrations and/or OS upgrades, or simply refreshing existing workstations with a new base set of software. Many folks in the MacEnterprise community contributed features, including FileVault (version 1) support, and updates were made for the curveball Lion introduced with respect to how passwords are stored (nested inside the user record plist itself).

I’m in the process of creating a successor to this project, so I thought I’d share some of the experiences I’ve had and pointers I’ve come across as a sysadmin, not a software developer, attempting to get a repeatable task accomplished inside this framework. Tip number zero is the same advice given to all students of programming in scripting languages: don’t write a lot before running and getting feedback. So, I booted a laptop with a few small user folders to my DeployStudio-generated netboot set, authenticated to the interface, and opened Terminal. That netboot set includes the optional Python framework (Ruby is another option if you’d like access to that scripting language), which I’ll be using in the future. Along with selecting “Display Runtime log window by default”, I extended the “Quit runtime automatically after__” number to 90 seconds, so when testing inside of a workflow I wouldn’t be kicked out as I iterated and repeated the process.

To get started, I made an “admin” folder inside the repo, put an updated version of the rsync binary in it (since the one in /usr/bin that ships with OS X is pretty ancient), and started writing a script in that directory which I could therefore run from Terminal on the netbooted laptop over VNC/ARD. For starters, here’s tip #1:
DeployStudio mounts the repo at /tmp/DSNetworkRepository. While /tmp isn’t read-only, you will get out-of-space errors and general unhappiness if you use it for anything except mountpoints.
Tip #2:
No, you can’t use symlinks in those subfolders to point somewhere else on the DeployStudio server, since it assumes the directory the symlink points to is relative to the root of the netboot set instead. (No, really, it wouldn’t even work when using ln -s on the machine the repo’s hosted from instead of Finder’s aliases, which definitely don’t work in scripts.)
Tip #3:
For persnickety folks like myself who MUST use a theme in Terminal and can’t deal with not having Option set as the meta key, you’re probably bummed that the Preferences menu item is greyed out and command-comma doesn’t seem to work. There is a way, though: from the Shell menu, choose Show Inspector. Then, from the Settings tab, double-click the theme you prefer. The full settings window will appear, and you can have your modern conveniences again.
Tip #4:
How does DeployStudio decide what the first mounted volume is, you may wonder? I invite (dare?) you to ‘bikeshed’ (find a more optimized way to accomplish a relatively trivial task) this particular piece of code:
system_profiler SPSerialATADataType | awk -F': ' '/Mount Point/ { print $2}' | head -n1
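If you take the dare, here’s one possible alternative sketch that leans on diskutil instead of system_profiler. The assumption that the first Apple_HFS partition listed is the one you care about is mine, so test it in your own netboot environment before trusting it:
diskutil info "$(diskutil list | awk '/Apple_HFS/ {print $NF; exit}')" | sed -n 's/^ *Mount Point: *//p'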
In the case of the restore script, hard-coding the DS_LAST_RESTORED_VOLUME variable (on line 44 in Rusty’s current Lion version), or changing the argument in the workflow to pass that path with the -v flag, will remove the dependency on restoring an image before putting user folders back in place.

Two more tips before I go, which are both specific to the task I’m trying to accomplish. Ownership on files will not be preserved when they’re moved to the repo with rsync, so you can create a sparse image or sparsebundle as a destination, and it will even retain ACLs (certain patched rsync binaries complain about smb_acls, like the one I used, which is bundled in DeployStudio’s Tools folder). As mentioned about /tmp in the NetBoot environment earlier, sparse images should be created in a subfolder of the repo, or you could easily run into ‘out of space’ errors.
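To make that concrete, a minimal sketch of the idea might look like the following. The 100g size, the Backups subfolder name, the “Macintosh HD” target volume, and the path to the patched rsync are all placeholders for whatever your repo and workflow actually use, and the -A/-X flags assume an rsync 3.x build:
# create the sparse image inside the repo (not in /tmp itself), then mount it
hdiutil create -size 100g -type SPARSE -fs HFS+J -volname UserBackup /tmp/DSNetworkRepository/Backups/UserBackup.sparseimage
hdiutil attach /tmp/DSNetworkRepository/Backups/UserBackup.sparseimage
# rsync 3.x: -A keeps ACLs, -X keeps extended attributes
/tmp/DSNetworkRepository/admin/rsync -aAX "/Volumes/Macintosh HD/Users/" /Volumes/UserBackup/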

When logging, you can use tee or simply redirect output, but more customized feedback in the actual log window of the DeployStudio netboot runtime is helpful. There’s a “custom_logger” function used in some of the bundled scripts… which literally does nothing but echo $1 – pretty rudimentary. For output that doesn’t display when run as part of a script, you can redirect that output to /dev/stdout and have it shown instead of fooling around with echo or printf.
e.g. rsync -avz /source /destination >/dev/stdout
There may be a lag if a lot of verbose output happens in a short amount of time, as the main log file in the repo is being written to simultaneously with what is printed onscreen.
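If you’d rather dress up the bundled approach a little, a minimal sketch of a timestamped variant (the function and message are just examples, not anything DeployStudio ships) could be:
custom_logger() {
    # prepend a timestamp so entries line up with the repo's main log
    echo "$(date '+%Y-%m-%d %H:%M:%S') ${1}"
}
custom_logger "Starting user folder backup..."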

Oh, and the silliest thing I’ve noticed: your script needs to be owned by root:wheel and set to 777 in the Scripts folder of the repo in order to show up in the workflow interface for selection… It’s got its quirks, but it’s just about the best out there!
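For the record, that boils down to something like the following, run on the machine hosting the repo (the repo path and script name here are hypothetical):
sudo chown root:wheel /Volumes/DeployStudioRepo/Scripts/restore_users.sh
sudo chmod 777 /Volumes/DeployStudioRepo/Scripts/restore_users.sh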

MacSysAdmin 2012 Slides and Videos are Live!

Thursday, September 20th, 2012

318 Inc. CTO Charles Edge and Solutions Architect alumnus Zack Smith were back at the MacSysAdmin Conference in Sweden again this year, and the slides and videos are now available! All of the 2012 presentations can be found here, and past years’ are at the bottom of this page.

Unity Best Practices In AVID Environments

Thursday, September 6th, 2012

Avid Unity environments are still common these days because the price of Avid’s ISIS SAN is tremendously high. While a Unity typically started anywhere from $50,000 to $100,000, a typical ISIS starts around the same price even though it is based on more ordinary, less expensive commodity hardware. The ISIS is built on common gigabit networking, whereas the Unity is based on fibre channel SCSI.

Avid Unity systems come in two flavors. Both can be accessed by fibre channel or by gigabit ethernet. The first flavor is all fibre channel hardware. The second uses a hardware RAID card in a server enclosure with a sixteen drive array and shares that storage over fibre channel and/or gigabit ethernet.

Components in a fibre-channel-only Unity break down as follows:

  • Avid Unity clients
  • Fibre channel switch
  • Fibre channel storage
  • Avid Unity head

Components in a chassis-based Unity are:

  • Avid Unity clients
  • Fibre channel switch
  • Avid Unity controller with SATA RAID

The fibre-channel-only setup can be more easily upgraded. Because such setups are generally older, they typically came with a 2U rackmount dual Pentium 3 (yes, Pentium 3!) server. They use a 2-gigabit ATTO fibre channel card, and reliability can be questionable after a decade.

The Unity head can be swapped for a no-frills Intel machine (AMD doesn’t work, and there’s not enough time in the world to figure out why), but one must be careful about video drivers. Several different integrated video chips and several video cards have drivers which somehow conflict with Unity software, so sometimes it’s easier to simply not install any drivers, since nothing depends on them. The other requirements/recommendations are a working parallel port (for the Unity dongle), a PCIe slot (for a 4-gigabit ATTO fibre channel card), and 4 GB of memory (so that Avid File Manager can use a full 3 GB).

The fibre channel switch is typically either a 2-gigabit Vixel switch or a 4-gigabit Qlogic 5200 or 5600 switch. The older Vixel switches have a tendency to fail because little heat sinks are attached to each port chip facing downward, and after a while a heat sink or two sometimes falls off and the chip dies. Since Vixel is no longer in business, the only replacement is a Qlogic.

The fibre channel storage can be swapped for a SATA-fibre RAID chassis so long as the chassis supports chopping up RAID sets into many smaller logical drives on separate LUNs. Drives which Avid sells can be as large as 1 TB if using the latest Unity software, so dividing up the storage into LUNs no larger than 1 TB is a good idea.

Changing the storage configuration while the Unity holds data is typically not done, due to the complexity and a general lack of understanding of what it entails. If it’s to be done, it’s typically safer to use one or more clients to back up all the Unity workspaces to normal storage, then reconfigure the Unity’s storage from scratch. That is also the best opportunity to add storage, change from fibre channel drives to RAID, take advantage of RAID-6, et cetera.

Next up is how Avid uses storage. The Unity essentially thinks that it’s given a bunch of drives. Drives cannot easily be added, so the only time to change total storage is when the Unity will be reconfigured from scratch.

The group of all available drives is called the Data Drive Set. There is only one Data Drive Set and it has a certain number of drives. You can create a Data Drive Set with different sized drives, but there needs to be a minimum of four drives of the same size to make an Allocation Group. Spares can be added so that detected disk failures can trigger a copy of a failing drive to a spare.

Once a Data Drive Set is created, the File Manager can be started and Allocation Groups can be created. The reasoning behind Allocation Groups is so that groups of drives can be kept together and certain workspaces can be put on certain Allocation Groups to maximize throughput and/or I/O.

There are essentially two families of file access patterns. One is pure video streaming, which is, as one might guess, just a continuous stream of data with very little other file I/O. Sometimes caching parameters on a fibre-SATA RAID are configured so that large video-only or video-primary drive sets (sets of logical volumes cut up from a single RAID set) are optimized for streaming. The other file access pattern is handling lots of little files such as audio, stills, render files, and project files. Caching parameters set for optimizing lots of small random file I/O can show a noticeable improvement, particularly for the Allocation Group which holds the workspace where the projects are kept.

Workspaces are what they sound like. When creating a workspace, you decide which Allocation Group that workspace will live on. Workspaces can be expanded and contracted even while clients are actively working in them. The one workspace which matters most when it comes to performance is the projects workspace. Because Avid projects tend to have hundreds or thousands of little files, an overloaded Unity can end up taking tens of seconds simply to open a bin in Media Composer, which will certainly affect editors trying to work. The Attic is kept on the projects workspace, too, unless explicitly set to a different destination.

Although Unity systems can have ridiculously long uptimes, like any filesystem there can be problems. Sometimes lock files won’t go away when they’re supposed to, sometimes there can be namespace collisions, and sometimes a Unity workspace can simply become slow without explanation. The simplest way to handle filesystem problems, especially since there are no filesystem repair tools, is to create a new workspace, copy everything out of the old workspace, then delete the old workspace. Fragmentation is not checkable in any way, so this is a good way to make a heavily used projects workspace which has been around for ages a bit faster, too.

Avids have always had issues when there are too many files in a single directory. Since the media scheme on Avids involves Media Composer creating media files in workspaces on its own, one should take care to make sure that there aren’t any single directories in media workspaces (heck, any workspaces) which have more than 5,000 files. Media directories are created based on the client computer’s name in the context of the Unity, so if a particular media folder has too many items, that folder can be renamed to the same name with a “-1” at the end (or “-(n+1)” for each subsequent folder).
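A quick way to spot offending directories from a Mac client is something along these lines (the workspace mount point is a placeholder, and the 5,000 threshold comes from the rule of thumb above):
# print the item count and path for any directory over the threshold
find "/Volumes/MediaWorkspace" -type d -exec sh -c 'count=$(ls -1 "$1" | wc -l); [ "$count" -gt 5000 ] && echo "$count $1"' _ {} \;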

Avid has said that the latest Media Composer (6.0.3 at the time of this writing) is not compatible with the latest Unity client (5.5.3). This is not true and while certain exotic actions might not work well (uncompressed HD, large number of simultaneous multicam, perhaps), all basic editing functions work just fine.

Finally, it should be pointed out that when planning ways to back up Unity workspaces, Windows clients are bad candidates. Because the number of simultaneously mounted workspaces is limited by the number of drive letters available, Windows clients can back up at most 25 workspaces at a time. Macs have no limit on the number of workspaces they can mount simultaneously, plus Macs have rsync built in to the OS, so they’re a more natural candidate for performing backups.
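As a sketch of that approach, a Mac client with a workspace and some backup storage mounted could do something like the following (the workspace and destination names are hypothetical; -E preserves resource forks and extended attributes on the rsync Apple bundles with OS X):
rsync -aE --delete "/Volumes/ProjectsWorkspace/" "/Volumes/BackupStorage/ProjectsWorkspace/"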

Digital Forensics – Best Practices

Thursday, September 6th, 2012

Best Practices for Seizing Electronic Evidence

A joint project of the International Association of Chiefs of Police
and
The United States Secret Service

Recognizing Potential Evidence:

Computers and digital media are increasingly involved in unlawful activities. The computer may be contraband, fruits of the crime, a tool of the offense, or a storage container holding evidence of the offense. Investigation of any criminal activity may produce electronic evidence. Computers and related evidence range from the mainframe computer to the pocket-sized personal digital assistant to the smallest electronic chip storage device. Images, audio, text, and other data on these media can be easily altered or destroyed. It is imperative that investigators recognize, protect, seize, and search such devices in accordance with applicable statutes, policies, best practices, and guidelines.

Answers to the following questions will better determine the role of the computer in the crime:

  1. Is the computer contraband or fruits of a crime?
    For example, was the computer software or hardware stolen?
  2. Is the computer system a tool of the offense?
    For example, was the system actively used by the defendant to commit the offense? Were fake IDs or other counterfeit documents prepared using the computer, scanner, or printer?
  3. Is the computer system only incidental to the offense, i.e., being used to store evidence of the offense?
    For example, is a drug dealer maintaining his trafficking records in his computer?
  4. Is the computer system both instrumental to the offense and a storage device for the evidence?
    For example, did the computer hacker use her computer to attack other systems and also use it to store stolen credit card information?

Once the computer’s role is known and understood, the following essential questions should be answered:

  1. Is there probable cause to seize the hardware?
  2. Is there probable cause to seize the software?
  3. Is there probable cause to seize the data?
  4. Where will this search be conducted?
    For example, is it practical to search the computer system on site or must the examination be conducted at a field office or lab?
If law enforcement officers remove the computer system from the premises to conduct the search, must they return the computer system, or copies of the seized data, to its owner/user before trial?
    Considering the incredible storage capacities of computers, how will experts search this data in an efficient and timely manner?

Preparing For The Search and/or Seizure

Using evidence obtained from a computer in a legal proceeding requires:

  1. Probable cause for issuance of a warrant or an exception to the warrant requirement.
    CAUTION: If you encounter potential evidence that may be outside of the scope of your existing warrant or legal authority, contact your agency’s legal advisor or the prosecutor as an additional warrant may be necessary.
  2. Use of appropriate collection techniques so as not to alter or destroy evidence.
  3. Forensic examination of the system completed by trained personnel in a speedy fashion, with expert testimony available at trial.

Conducting The Search and/or Seizure

Once the computer’s role is understood and all legal requirements are fulfilled:

1. Secure The Scene

  • Officer Safety is Paramount.
  • Preserve Area for Potential Fingerprints.
  • Immediately Restrict Access to Computers/Systems; Isolate from Phone, Network, and Internet connections, because data on the system in question can be accessed remotely.

2. Secure The Computer As Evidence

  • If the computer is powered “OFF”, DO NOT TURN IT ON, under any circumstances!
  • If the computer is still powered “ON”…

Stand-alone computer (non-networked):

  1. Photograph the screen, then disconnect all power sources; unplug from the back of the computer first, then from the outlet. The system may be connected to a UPS, which would prevent it from shutting off.
  2. Place evidence tape over each drive slot.
  3. Photograph/Diagram and label back of computer components with existing connections.
  4. Label all connectors/cable ends to allow for reassembly, as needed.
  5. If transport is required, package components and transport/store components always as fragile cargo.
  6. Keep away from magnets, radio transmitters, and otherwise hostile environments.

Networked or Business Computers: Consult a Computer Specialist for Further Assistance!

  1. Pulling the plug on a networked computer could severely damage the system.
  2. It could disrupt legitimate business operations.
  3. It could create liability for investigators or law enforcement personnel.


DISCLOSURE:
THIS TECH JOURNAL ENTRY BY 318, INC. IS FOR REFERENCE ONLY.
THIS DOCUMENT SHOULD NOT BE USED VERBATIM TO CONDUCT ACTIVE FORENSIC INVESTIGATIONS, NOR BE USED AS A LEGAL PRECEDENT OR A REPLACEMENT FOR FORENSIC PRACTICES ESTABLISHED IN YOUR JURISDICTION. PLEASE FOLLOW PROPER LEGAL AND FORENSIC INVESTIGATION PROCEDURES AS ESTABLISHED BY YOUR CITY, COUNTY, AND STATE!
318, INC. SHALL NOT BE HELD LIABLE. USE OF THIS DOCUMENT IS AT YOUR OWN RISK!

Configuration Profiles: The Future is… Soon

Tuesday, September 4th, 2012

Let’s say, hypothetically, you’ve been in the Mac IT business for a couple revisions of the ol’ OS, and are familiar with centralized management from a directory service. While we’re having hypothetical conversations about our identity, let’s also suppose we’re not too keen on going the way of the dinosaur via planned obsolescence, so we embrace the fact that Configuration Profiles are the future. Well, I’m here to say that knowing how the former mechanism, Managed Preferences (née Managed Client for OS X, or MCX) applied from a directory service, interacted with your system is still important.

Does the technique of nesting a faux directory service on each computer locally (hence ‘LocalMCX’), and using it to apply management to the entire system, still work in 10.8 Mountain Lion? Yes. Do applied Profiles show settings in the same directory Managed Preferences did? Yes… which can possibly cause conflicts. So while, practically, living in the age of Profiles is great (802.1x used to be so hard to manage), there are pragmatic concerns as well. Not everyone upgraded to Lion the moment it was released, just over a year ago, so we’re wise to continue using MCX first wherever the population is significantly mixed.
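If you want to see that overlap for yourself, a minimal sketch is just to peek at where both mechanisms land their compiled settings and at what the profiles tool reports (standard 10.7/10.8 locations; nothing site-specific is assumed here):
ls -lR "/Library/Managed Preferences/"   # compiled settings from both MCX and Configuration Profiles end up here
sudo profiles -P                         # list the profiles installed for the device and each user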

And then there’s a show-stopper through which Apple opened up a Third-Party Opportunity (trademark Arek Dreyer): Profiles, when coming straight out of the Profile Manager service in Mac OS X Server 10.7 or 10.8, can only apply management with the Always frequency. Just like the former IBM ThinkPads, you can have any color you want as long as it’s black. No friendly defaults set by your administrator that you can change later: Profile settings, once applied, are basically frozen in carbonite.

So what party stepped in to address this plight? ##osx-server discussions were struck up by the always-generous, never-timid @gregneagle about the fact that Profiles can actually, although it’s undocumented, contain settings that enable the Once or Often frequency. Certain preferences can ONLY be managed at the Often level, because they aren’t made to be manageable system-wide, like certain application preferences (e.g. Office 2011) and Screen Saver preferences (since those live in the user’s ~/Library/ByHost folder).

The end result was Tim Sutton’s mcxToProfile script, hosted on GitHub, which works like a champ for both of the examples just listed. Note: this script utilizes a profile’s Custom Settings section only, so for the things already supported by Profiles (like loginwindow) it’s certainly best to get onboard with what the $20 Server.app can already provide. But another big plus of the script is… you can use it without having Profile Manager set up anywhere on your network.
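As a rough sketch of the round trip (the flag names below are from my reading of the project’s README, and the plist path and identifier are placeholders, so double-check against the current version before relying on it):
./mcxToProfile.py --plist /path/to/exported_preferences.plist --identifier MyAppSettings --manage Often
# then, on a client, install the resulting .mobileconfig
sudo profiles -I -F MyAppSettings.mobileconfig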

So there you go, consider using mcxToProfile to update your management, and give feedback to Tim on the twitters or the GitHubs!