Posts Tagged ‘CLI’

Add OS X Network Settings Remotely (Without Breaking Stuff)

Monday, September 23rd, 2013

So you’re going to send a computer off to a colocation facility, where it’ll use a static IP and DNS configuration that needs to be in place before it arrives. Just as you will at the colo, you access this computer remotely to prepare it for its trip, but you don’t want to knock it off the network while applying that info; you want to verify it’s good to go and then shut it down.

It’s the type of thing, like setting up email accounts programmatically, that somebody should have figured out and shared with the community at some point. But even if my google-fu is weak, I can deal with having tomatoes thrown at me, so here’s a rough mock-up:

 

#!/bin/bash
# purpose: add a network location with manual IP info without switching
#   This script lets you fill in settings and apply them on en0 (assuming that's active),
#   but only interrupts current connectivity long enough to apply the settings;
#   it then immediately switches back. (It also assumes a 'Static' location doesn't already exist...)
#   Use at your own risk! No warranty granted or implied! Tell us we're doing it rong on twitter!
# author: Allister Banks, 318 Inc.

# set -x

declare -xr networksetup="/usr/sbin/networksetup"

declare -xr MYIP="192.168.111.177"
declare -xr MYMASK="255.255.255.0"
declare -xr MYROUTER="192.168.111.1"
declare -xr DNSSERVERS="8.8.8.8 8.8.4.4"

declare -x PORTANDSERVICE=$($networksetup -listallhardwareports | awk '/en0/{print x};{x=$0}' | cut -d ' ' -f 3)

$networksetup -createlocation "Static" populate
$networksetup -switchtolocation "Static"
$networksetup -setmanual $PORTANDSERVICE $MYIP $MYMASK $MYROUTER
$networksetup -setdnsservers $PORTANDSERVICE $DNSSERVERS
$networksetup -switchtolocation Automatic

exit 0

Caveats: The script assumes the interface you want to be active in the future is en0, just for ease of testing before deployment. It also assumes there isn’t already a network location called ‘Static’, and that you do want all interfaces populated upon creation (because I couldn’t think of a particularly good reason why not).
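Before shutting the machine down, it’s worth sanity-checking what got applied. Something like the following should do it; the service name “Ethernet” is an assumption here, so substitute whatever -listallnetworkservices reports for your en0 (and switch to the ‘Static’ location first if you want to inspect its settings):

networksetup -listallnetworkservices
networksetup -getinfo "Ethernet"
networksetup -getdnsservers "Ethernet"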

If you find the need, give it a try and tweet at us with your questions/comments!


Sure, We Have a Mac Client, We Use Java!

Thursday, January 24th, 2013

We all have our favorite epithets to invoke for certain software vendors and the practices they use. Some of our peers go downright apoplectic when speaking about those companies and the lack of advances we perceive in the name of manageable platforms. Not good; life is too short.

I wouldn’t have even imagined APC would be an exception in this respect; they are quite obviously a hardware company. You may ask yourself, though, in the spirit of ‘is your refrigerator running’: is the software actually listening for a safe shutdown signal from the network card installed in the UPS? Complicating matters is:
- The reason we install this Network Shutdown software from APC on our server is to receive this signal over ethernet, not USB, so it’s not detected by Energy Saver like other, directly cabled models

- The shutdown notifier client doesn’t have a windowed process/menubar icon

- The process itself identifies as “Java” in Activity Monitor (just like… CrashPlan – although we can kind of guess which one is using 400+ MB of virtual memory while idle…)

Which sucks. (Seriously, it installs in /Users/Shared/Applications! And runs at boot with a StartupItem! In 2013! OMGWTFBBQ!)

Calm, calm, not to fear! ps sprinkled with awk to the rescue:

ps avx | awk '/java/&&/Notifier/&&!/awk/{print $17,$18}'

To explain the ps flags: a allows for all users’ processes, v prints a longer format with more fields, and x includes processes even if they have no ‘controlling console.’ Then awk looks for lines matching both ‘java’ and the ‘Notifier’ jar name, minus our awk process itself, and prints the relevant fields, shown below (trimmed and rewrapped for readability):

:./comp/pcns.jar:./comp/Notifier.jar: 

com.apcc.m11.arch.application.Application

So at least we can tell that something is running, and appreciate the thoughtful development process APC followed, at least while we aren’t fashioning our own replacement with booster serial cables and middleware. Thanks to the googles and the overflown’ stacks for the proper flags to pass ps.
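If you wanted to alert when the notifier dies, a minimal sketch along these lines (reusing the same ps/awk match from above, with logger standing in for your alerting mechanism of choice) could run periodically:

#!/bin/bash
# Alert if the APC shutdown notifier's Java process isn't running
if ! ps avx | awk '/java/&&/Notifier/&&!/awk/{found=1} END{exit !found}'; then
    echo "APC shutdown notifier is NOT running" | logger -t apc-check
fi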

Configure network printers via command line on Macs

Wednesday, December 26th, 2012

In a recent Twitter conversation with other folks supporting Macs we discussed ways to programmatically deploy printers. Larger environments that have a management solution like JAMF Software’s Casper can take advantage of its ability to capture printer information and push to machines. Smaller environments, though, may only have Apple Remote Desktop or even nothing at all.

Because Mac OS X incorporates CUPS printing, administrators can utilize the lpadmin and lpoptions command line tools to programmatically configure new printers for users.

lpadmin

A basic command for configuring a new printer using lpadmin looks something like this:

lpadmin -p "salesbw" -v "lpd://192.168.1.10/" -D "Sales B&W" -L "2nd floor print center" -P "/Library/Printers/PPDs/Contents/Resources/HP LaserJet 8150 Series.gz" -E

Several options follow the lpadmin command:

  • -p = Printer name (queue name if sharing the printer)
  • -v = IP address or DNS name of the printer
  • -D = Description of the printer (appears in the Printers list)
  • -L = Location of the printer
  • -P = Path to the printer PPD file
  • -E = Enable this printer

The result of running this command in the Terminal application as an administrator looks like this:

[Screenshot: the new printer listed in Print & Scan]

lpoptions

Advanced printer models may have duplex options, multiple trays, additional memory or special features such as stapling or binding. Consult the printer’s guide or its built-in web page for a list of these installed printer features.

After installing a test printer, use the lpoptions command with the -l option in the Terminal to “list” the feature names from the printer:

lpoptions -p "salesbw" -l

The result is usually a long list of features that may look something like:

HPOption_Tray4/Tray 4: True *False
HPOption_Tray5/Tray 5: True *False
HPOption_Duplexer/Duplex Unit: True *False
HPOption_Disk/Printer Disk: *None RAMDisk HardDisk
HPOption_Envelope_Feeder/Envelope Feeder: True *False
...

Each line is an option. The first line above displays the option for Tray 4 and shows the default setting is False. If the printer has the optional Tray 4 drawer installed, then enable this option when running the lpadmin command by following it with:

-o HPOption_Tray4=True

Be sure to use the option name to the left of the slash, not the friendly name with spaces after the slash.

To add the duplex option listed on the third line add:

-o HPOption_Duplexer=True

And to add the envelope feeder option listed on the fifth line add:

-o HPOption_Envelope_Feeder=True

Add as many options as necessary by stringing them together at the end of the lpadmin command:

lpadmin -p "salesbw" -v "lpd://192.168.1.10/" -D "Sales B&W" -L "2nd floor print center" -P "/Library/Printers/PPDs/Contents/Resources/HP LaserJet 8150 Series.gz" -E -o HPOption_Tray4=True -o HPOption_Duplexer=True -o HPOption_Envelope_Feeder=True

The result of running the lpadmin command with the -o options enables these available features when viewing the Print & Scan preferences:

[Screenshot: the enabled features in the printer’s options pane]

With these features enabled for the printer in Print & Scan, they also appear as selectable items in all print dialogs:

[Screenshot: the options as selectable items in a print dialog]
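If you’re pushing this out with ARD or a login script, a small wrapper that skips machines where the queue already exists saves re-running lpadmin needlessly. This is just a sketch reusing the example queue from above:

#!/bin/bash
# Add the sales printer only if it isn't already configured
if ! lpstat -p salesbw > /dev/null 2>&1; then
    lpadmin -p "salesbw" -v "lpd://192.168.1.10/" -D "Sales B&W" \
        -L "2nd floor print center" \
        -P "/Library/Printers/PPDs/Contents/Resources/HP LaserJet 8150 Series.gz" \
        -E -o HPOption_Duplexer=True
fi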

 

(Sysadmin) Software Design Decisions

Wednesday, October 3rd, 2012

When approaching a task with an inkling to automate, sometimes you find an open source project that fits the bill. But every creator works within constraints, and often expresses an opinion of what’s important to ‘solve’ as a problem and therefore prioritize: a deployment tool is not necessarily a patch management tool is not necessarily a configuration management tool, and so on. One of the things I’ve dealt with is trying to gauge the intent of a developer and deciding if they are interested in further discussion/support/development of a given project. Knowing why one decision was made or another can be helpful in these situations. In the category of things I wish someone else had written so I could read it, here are the design decisions behind the sonOfBackupRestoreScripts project I’ve been toying with as an add-on to DeployStudio (heretofore DS); after reading the following, you can hopefully understand why I am not releasing it as an official, supportable tool in its current bash form.
I’ve adapted some of the things Google used in their outline for Simian as a model, to give this some structure.

Project Objective:

To move user home folders and local authentication/cached credentials between workstations in a customizable and optimized manner, preserving the integrity of the data/user records as much as possible

Overview:

For speed and data integrity, rsync is used to move selections of the user’s home folder (minus caches, trash, and common exclusions made by Time Machine). To increase portability and preserve Mac-specific attributes, a disk image is generated to enclose the data. The user account information is copied separately, and helpful information is displayed at the critical points as the process moves from one stage to another and during the backup itself.

Requirements: DeployStudio Server / NetBoot

DS, as a service, provides an infrastructure to run the script in, and automounts a repository to interact with over the network. It’s meant to work optimally with or without a NetBoot environment; an architectural assumption made during development/testing is wired ethernet, with USB/Thunderbolt adapters if the clients are MacBook Airs. Even old minis can function fine as the server, assuming the repo is located on a volume with enough space available to accept the uncompressed backups.

Implementation Details: Major Components / Underlying Programs

- source/destination variables

Parameters can be passed to the script to change the source/destination of backups/restores with the -s(for source) and -d(…) switches and then a path that is reachable by the NetBooted system.

- hdiutil

A simple sparse disk image that can expand up to 100GB is created with the built-in hdiutil binary. The file system format of that container is JHFS+, and a bunch of other best practices, cobbled together from Bombich’s Carbon Copy Cloner (heretofore CCC) and InstaDMG, are employed.
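As a rough sketch of what that creation step looks like (the volume name and repo path here are stand-ins, not the script’s actual values):

hdiutil create -size 100g -type SPARSE -fs "Journaled HFS+" -volname "UserBackup" /tmp/DSNetworkRepository/Backups/UserBackup

hdiutil appends the .sparseimage extension for you, and the image only consumes space as data actually lands in it.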

- cp

The cp binary is used to copy the user records from the directory service node the data resides on to the root of the sparse image, and the admin group’s record is copied into a ‘group’ folder. If hashes exist in /var/db/shadow/hash, which is how passwords were stored prior to 10.7, those are moved to a ‘hashes’ folder.

- rsync

A custom, even more current build of rsync could be generated if the instructions listed here are followed. Ideally, a battle-tested version like the one bundled with CCC’s ccc_helper.app (/Applications/Carbon\ Copy\ Cloner.app/Contents/MacOS/ccc_helper.app/Contents/MacOS/rsync, which is actually a heavily customized rsync version 3.0.6) could be used, but its output isn’t easy to adapt for an overview of progress during a CLI transfer. Regardless, the recommended switches are employed in hopes of getting a passing grade on the backupBouncer test. The 3.0.7 version bundled with DS itself (/Applications/Utilities/DeployStudio\ Admin.app/Contents/Frameworks/DSCore.framework/Versions/A/Resources/Tools/rsync, which for whatever reason is excluded when the assistant creates NetBoot sets) was used during development/testing.

- Exclusions

The Users folder on the workstation that’s being backed up is targeted directly, so any users that have been deleted, or specific subfolders, can be skipped with the exclusions file fed to the rsync command. Without catch-all, asterisk (*) ‘file globbing’, you’d need to be specific about the types of files you want to exclude per directory. For example, to not back up any mp3 files no matter where they are in the user folders, you’d add: - *.mp3. Additional catch-all excludes could be used, as detailed in the script, which specifically excludes ipsw’s (iOS firmware/OS installers) like this: --exclude='*.ipsw'
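A stripped-down sketch of the transfer invocation, with made-up paths standing in for the script’s variables (the -A and -X flags for ACLs and extended attributes need one of the rsync 3.x builds mentioned above, not the stock /usr/bin/rsync):

rsync -aHAX --exclude-from=/tmp/DSNetworkRepository/admin/Excludes.txt /Volumes/Macintosh\ HD/Users/ /Volumes/UserBackup/Users/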

- Restore

Pretty much everything done via rsync and cp is done in reverse, utilizing the source/destination options, so a backup taken from one machine can easily be chosen for restore to another.

Security Considerations:

Very little security is applied during storage. Files are transferred over password-protected AFP, so a separate server and repo could be used to minimize potential access by whoever can reach the main DS service. Nothing encrypts the files inside the sparse images, and if present, the older password format is a hash that could potentially be cracked, given a great length of time. The home folder ACLs and ownership/permissions are preserved, so in that respect it’s only as secure as the local file systems on the server and client.

Excluded/Missing Features:
(Don’t You Wish Every Project Said That?)

Hopefully this won’t sound like a soul-baring confession, but here goes:
No checks are in place for whether there is enough space on destinations, nor for whether a folder to back up is larger than the currently hard-coded 100GB sparse image cap (after exclusions). Minimal redirection of logs is performed, so the main DS log can quickly hit a 2MB cap and stop updating the DS NetBoot log window/GUI if there’s a boatload of progress echo’d to stdout. The process to restore a user’s admin group membership (or any other group on the original source) is not performed, although the group’s admin.plist can be queried after the fact. Nor is there any reporting on deleted users’ orphaned home folders if they do actually need to be preserved; by default they’re just part of the things rsync excludes. All exclusions live in the Excludes.txt file fed to rsync, and that file cannot be passed as a parameter to the script.
And the biggest possible unpleasantness is also the #1 reason I’m not considering continuing development in bash: UID collisions. If you restore a 501 user to an image with a pre-existing 501 user that was the only admin… bad things will happen. (We’ve changed our default admin user’s UID as a result.) If you get lucky, you can change one user’s UID or the other and chown to fix things as admin before all heck breaks loose. If this isn’t a clean image, there’s no checking for duplicate users with newer data; there’s no FileVault 1 or 2 handling; there’s no prioritization such that, if it can only fit a few home folders, it would do so and warn about the one(s) that wouldn’t fit; there’s no version checking on the binaries in case different NetBoot sets are used; no fixing of ByHostPrefs (although DS’s finalize script should handle that); and no die-function checks if the restore destination doesn’t have enough space, since the common case is restoring to the same HD or a newer, presumably larger computer. Phew!

Wrapup:

The moral of the story is that the data structures available in most other scripting languages are better suited for these checks, and for taking evasive action as necessary. Bash does really ungainly approximations of tuples/dictionaries/hash tables, which forced the previous version of this project to perform all necessary checks and actions during a single per-user loop to keep things functional without growing exponentially longer and more complex.

Let’s look forward to the distant future when this makes its way into Python for the next installment in this project. Of course I’ve already got the name of the successor to SonOfBackupRestoreScripts: BrideOfBackupRestoreScripts!

DeployStudio Scripting Tips

Tuesday, September 25th, 2012

I’ve given a presentation on my affinity for DeployStudio, yet with it being closed source, it’s sometimes like an enigma (wrapped in a mystery (wrapped in bacon)). However, a workflow exists to enable scripting within it, although the only option besides automatically running a script when it’s dropped into a workflow is non-interactively passing arguments to it. Even with little in the way of documented information, projects have popped up to take advantage of the framework provided.

Most notably, Rusty Myers’ BackupRestore scripts enabled quite an interesting workflow: first, you could run one workflow to tar (or ditto) the user folders to a new Backup directory in the repo, with a few customizable exceptions. Then, when either specified or dropped into a workflow that had a restore action precede it, you could put the users and their associated passwords back into place. This is obviously pretty darn useful for clean(er) migrations and/or OS upgrades, or simply refreshing existing workstations with a new base set of software. Many folks in the MacEnterprise community contributed features, including FileVault (version 1) support, and updates were made for the curveball Lion introduced in respect to how passwords are stored (nested inside the user record plist itself).

I’m in the process of creating a successor to this project, so I thought I’d share some of the experiences and pointers I’ve come across as a sysadmin (not a software developer) attempting to get a repeatable task accomplished inside this framework. Tip number zero is the same advice given to all students of programming in scripting languages: don’t write a lot before running and getting feedback. So, I booted a laptop with a few small user folders to my DeployStudio-generated netboot set, authenticated to the interface, and opened Terminal. That netboot set includes the optional Python framework (Ruby is another option, if you’d like access to that scripting language), which I’ll be using in the future. Along with selecting “Display Runtime log window by default”, I extended the “Quit runtime automatically after__” number to 90 seconds, so when testing inside of a workflow I wouldn’t be kicked out as I iterated and repeated the process.

To get started, I made an “admin” folder inside the repo, put an updated version of the rsync binary in it (since the one in /usr/bin that ships with OS X is pretty ancient), and started writing a script in that directory, which I could then run from Terminal on the netbooted laptop over VNC/ARD. For starters, here’s tip #1:
DeployStudio mounts the repo in /tmp/DSNetworkRepository. While /tmp isn’t read-only, you will get out-of-space errors and general unhappiness if you use it for anything except mountpoints.
Tip #2:
No, you can’t use symlinks in those subfolders to point somewhere else on the DeployStudio server, since it assumes the directory the symlink would point to is relative to the root of the netboot set instead. (No, really, it wouldn’t even work when using ln -s on the machine the repo’s hosted from instead of Finder’s aliases, which definitely don’t work in scripts.)
Tip #3:
For persnickety folks like myself who MUST use a theme in Terminal and can’t bear not having option set as the meta key, you’re probably bummed that the Preferences menu item is greyed out and command-comma doesn’t seem to work. There is a way, though: from the Shell menu, choose Show Inspector. Then, from the Settings tab, double-click the theme you prefer. The full settings window will appear, and you can have your modern conveniences again.
Tip #4:
How does DeployStudio decide what the first mounted volume is, you may wonder? I invite (dare?) you to ‘bikeshed’ (find a more optimized way to accomplish a relatively trivial task) this particular piece of code:
system_profiler SPSerialATADataType | awk -F': ' '/Mount Point/ { print $2}' | head -n1
In the case of the restore script, hard-coding the DS_LAST_RESTORED_VOLUME variable (on line 44 in Rusty’s current Lion version), or changing the argument in the workflow to pass that path with the -v flag, will remove the dependency on restoring an image before putting user folders back in place.

Two more tips before I go, both specific to the task I’m trying to accomplish. Ownership on files will not be preserved when they’re moved to the repo with rsync, so you can create a sparse image or sparsebundle as a destination, and it will even retain ACLs (certain patched rsync binaries complain about smb_acl’s, like the one I used, which is bundled in DeployStudio’s Tools folder). As mentioned about /tmp in the NetBoot environment earlier, sparse images should be created in a subfolder of the repo, or you could easily run into ‘out of space’ errors.

When logging, you can use tee or simply redirect output, but more customized feedback in the actual log window of the DeployStudio netboot runtime is helpful. There’s a “custom_logger” function used in some of the bundled scripts… which literally does nothing but echo $1 – pretty rudimentary. For output that doesn’t display when run as part of a script, you can redirect that output to /dev/stdout and have it shown, instead of fooling around with echo or printf.
e.g. rsync -avz /source /destination >/dev/stdout
There may be a lag if verbose output happens in a short amount of time, as the main log file in the repo is being written to simultaneously to what is printed onscreen.
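If you want something a touch less rudimentary than the bundled custom_logger, a sketch like this adds timestamps and appends to a file of your own (the log path here is hypothetical):

custom_logger() {
    # Timestamp, echo to the DS runtime log window, and append to our own log
    local msg="$(date '+%Y-%m-%d %H:%M:%S') $1"
    echo "$msg"
    echo "$msg" >> /tmp/DSNetworkRepository/admin/backup.log
}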

Oh, and the silliest thing I’ve noticed: your script needs to be owned by root:wheel and set to 777 in the scripts folder of the repo in order to show up in the workflow interface for selection… It’s got its quirks, but it’s just about the best out there!

A Bash Quicky

Thursday, August 30th, 2012

In our last episode spelunking a particularly shallow trough of bash goodness, we came across dollar-sign substitution, which I said mimics some uses of regular expressions. Regexes are often thought of as thick or dense with meaning. One of my favorite descriptions goes something like this: if you measured each character used in the code for a regex in cups of coffee, you’d find the creators of this particular syntax the most primo, industrial-strength-caffeinated folks around. I’m paraphrasing, of course.

Now copy-pasta-happy, cargo-culting coders like myself tend to find working code samples and reuse salvaged pieces almost without thinking, often recognizing the shape of the lines of code more than the underlying meaning. Looping back around to dollar-sign substitution, we can actually interpret this commonly used value, assigned to a variable meaning the name of the script:
${0##*/}
Okay children, what does it all mean? Well, let’s start at the very beginning(a very good place to start):
${0} – The dollar sign and curly braces force an evaluation of the symbols contained inside, often used for returning complex series of variables. As an aside, counting in programming languages starts with zero, and each space-separated part of the command line is given a number per its place in the order, also known as positional parameters. The entire path to our script is given the special ‘seat’ of zero, so this puts the focus on that zero position.

Regrouping quickly, our objective is to pull out the path leading up to the script’s name. So we’re essentially gathering up all the stuff up to and including the last forward slash before our scripts filename, and chuckin’ them in the lorry bin.
${0##*} – To match all instances of a pattern, in our case everything through the forward slashes in our path, we double up the number signs (or pound sign for telcom fans, or hash for our friends on the fairer side of the puddle). Doubling makes the match “greedy”, gobbling up the longest match it can from the front, with a star “globbing”, to indiscriminately mop up any characters encountered along the way.
${0##*/} – Then we cap the whole mess off by telling it which character to stop at: the pattern now ends in a forward slash, so the longest match runs through the last slash in the path, everything before the filename gets chopped, and we’re left with just the script’s name. And that’s that!
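A quick sanity check you can run in any bash shell, with a made-up path:

SCRIPTPATH="/usr/local/bin/createAutoBuddyLists.sh"
echo "${SCRIPTPATH##*/}"   # prints: createAutoBuddyLists.sh
echo "${SCRIPTPATH%/*}"    # the flip side: trims the filename, prints the directory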

Pardon the tongue-in-cheek tone of this quick detour into a bash-style regex-analogue… but to reward the masochists, here’s another joke from Puppet-gif-contest-award-winner @pmbuko:

Email from a linux user: “Slash is full.” I wanted to respond: “Did he enjoy his meal?”

Creating a binding script to join Windows 7 clients to Active Directory

Tuesday, July 3rd, 2012

There are a few different ways to join Windows 7 to a domain. You can do it manually, use djoin.exe to do it offline, use PowerShell, or use netdom.exe.

  • Doing so manually can get cumbersome when you have a lot of different computers to do it on.
  • With djoin.exe you will have to run it on a member computer already joined to the domain for EACH computer you want to join, since it will create a computer object in AD for each computer beforehand.
  • PowerShell is OK to use, but you have to set the execution policy to unrestricted beforehand on EACH computer.
  • Netdom is the way to go, since you prep once for the domain, then run the script with Administrator privileges on whatever computers you want to join to the domain. Netdom doesn’t come on most versions of Windows 7 by default. There are two versions of netdom.exe, one for x86 and one for x64. You can obtain netdom.exe by installing Remote Server Administration Tools (RSAT) for Windows 7, and then copying netdom.exe to a share.

A quick way to deal with both x86 and x64 architectures in the same domain is to make two scripts, one for x86 and one for x64, and keep the appropriate netdom.exe in two different spots: \\server\share\x86\ and \\server\share\x64\.

You’ll need to either grab netdom.exe from a version of Windows 7 that already has it, or install RSAT for either x64 or x86 Windows 7, whichever you will be working with, from here: http://www.microsoft.com/en-us/download/details.aspx?id=7887. Install that on a staging computer. The following steps are how to get netdom.exe from the RSAT installation.

  1. Download and install RSAT for either x64 or x86.
  2. Follow the help file that opens after install for enabling features.
  3. Enable the following feature: Remote Server Administration Tools > Role Administration Tools > AD DS and AD LDS Tools > AD DS Tools > AD DS Snap-ins and Command-Line Tools

netdom.exe will now be under C:\windows\system32

Create a share readable by everybody on the domain, and drop netdom.exe there.

Create a script with the following (From: http://social.technet.microsoft.com/Forums/en/ITCG/thread/6039153c-d7f1-4011-b9cd-a1f111d099aa):

@echo off
SET netdomPath=c:\windows\system32
SET domain=domain.net
::BATCH.BAT sets the %passwd% and %adminUser% variables used below
CALL BATCH.BAT
SET sourcePath=\\fileshare\folder\

::If necessary, copy netdom to the local machine
IF EXIST c:\windows\system32\netdom.exe goto join
COPY %sourcePath%netdom.exe %netdomPath%
COPY %sourcePath%dsquery.exe %netdomPath%
COPY %sourcePath%dsrm.exe %netdomPath%

:Join
::Join PC to the domain
NETDOM JOIN %computerName% /d:%domain% /UD:%adminUser% /PD:%passwd%

SHUTDOWN -r -t 0

Change domain and sourcePath to their real values. Remove dsquery.exe and dsrm.exe if not needed; if you’re just joining a domain and not running anything after, then you don’t need them.

Create another script called “BATCH.BAT” that will hold the credentials that have access to join computers to the domain. Put BATCH.BAT in both places that house your join-to-domain script (…/x86 and …/x64):

@echo off
SET passwd=thisismypassword
SET adminuser=thisismyadminusername

  1. Ensure you have the scripts in the same directory.
  2. Open up a command prompt with Administrator privileges and change directory to the location of your scripts.

Running the first script will:

  1. Run a check to see if netdom, dsquery, and dsrm are installed under system32; if they are, it will join the domain, and if not it will attempt to copy them from your share.
  2. Once it ensures it has the files it needs, it will join the computer to the domain under the “Computers” OU with its current computer name using the credentials set by BATCH.BAT.
  3. It will reboot when done.

This will work on both Server 2003 and Server 2008.

Setting up Netboot helpers on a Cisco device

Tuesday, April 3rd, 2012

Configuring a Cisco device to forward bootp requests is a pretty straightforward process. First off, this only applies to Cisco routers and some switches. You will need to verify that your device supports the IP helper command; the Cisco ASA, for example, will not forward bootp requests.

By default the IP helper command will forward several types of UDP traffic, the two important ones here being ports 67 and 68 for DHCP and BOOTP requests. Other ports can be customized to forward with some other commands as well. But it is quite simple: pretty much, if you have a NetBoot server, you can configure the IP helper command to point at that server’s IP address.

Here is an example. Let’s say your NetBoot server has an IP address of 10.0.0.200. You would simply go into global configuration mode, switch to the interface you want to utilize, and type "ip helper-address 10.0.0.200" to relay those requests to that address. Depending on your situation you also might want to set the device to ignore BOOTP requests (in cases where you have DHCP and BOOTP on the same network); that command is "ip dhcp bootp ignore". Using the IP helper and bootp ignore commands together will ensure that those bootp requests are forwarded out the interface to the specified address.

Lastly, if you have multiple subnets, you can set up multiple IP helper-address statements on your device to do the same forwarding on each.
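Put together, a session on a router might look something like this (the interface name and addressing are examples only):

Router# configure terminal
Router(config)# ip dhcp bootp ignore
Router(config)# interface Vlan10
Router(config-if)# ip helper-address 10.0.0.200
Router(config-if)# end
Router# write memory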

Auditing Email in Google Apps

Thursday, March 22nd, 2012

In order to address situations where a Google Apps admin needs access to a user’s mail data, Google provides an Email Audit API. It allows administrators to audit a user’s email and chats, and also download a user’s complete mailbox. While Google provides this API, third-party tools are required in order to make use of the functionality. While there are some add-ons in the Google Apps Marketplace that make email auditing available, the most direct method of gaining access to this is with a command-line tool called Google Apps Manager. GAM is a very powerful management tool for Google Apps, but here we will focus on just what’s required to use the Email Audit API.

Using GAM requires granting access, with a Google Apps admin account, to a specific system. An OAuth token for the domain is stored in the GAM folder. Also, if you’re going to download email exports, it’s necessary to generate a GPG key and upload that to Google Apps. In light of both of these factors, it’s best to designate a specific system as the GAM management system. GAM is a collection of Python modules, so whatever system you designate should be something that has a recent version of Python. We’ll assume that we’re using a fairly recent Mac.

What we’ll do is download GPG and generate a GPG key, and then download GAM and get it connected to Google Apps.

Generating a GPG key

The GPGTools installer is here: http://www.gpgtools.org/installer/index.html

After installation, open up Terminal in the account that you’ll be using to manage Google Apps.

Run the command:

$ gpg --gen-key --expert

For type of key, choose “RSA and RSA (default)”. For key size, you can probably safely choose a smaller key. Bear in mind that all your mailbox exports will be encrypted with this key and then will need to be decrypted after download. This can take a non-trivial amount of time, especially for larger mailboxes, and a larger key will mean much longer encryption and decryption times. A 1024-bit key should be fine in most cases.

When asked for how long the key should be valid, choose 0 so that the key does not expire.

Next you’ll be prompted for your name, email address and a comment. This information is not, at the moment, used by Google for anything. However, in the interests of long-term usability, I would recommend using the email address and name of an actual admin for the Google Apps domain.

Finally, you’ll be asked for a passphrase. This passphrase will be required in order to decrypt the downloaded mailboxes. Do not forget it. You will be unable to decrypt the downloads without it.

When key creation is complete, you’ll see something like this:

pub 1024R/0660D980 2012-03-22
Key fingerprint = A642 0721 2D4A 9150 6ED1 DBD7 AFFF 992F 0660 D980
uid Apps Admin
sub 1024R/6D1C197B 2012-03-22

Make a note of the ID of the public key, which in this case is 0660D980. You’ll need the ID to upload the key to Google.

Installing GAM

Prior to installing GAM, you’ll want to open up your default browser and log in to your Google Apps domain as an administrator. It’s not technically necessary (you can log in as an admin when the GAM install needs access), but you’ll find it authenticates more reliably if you log in in advance.

GAM can be found here: http://code.google.com/p/google-apps-manager/downloads/list

Download the python-src package, and put it somewhere in the home directory of the same user that generated the GPG key. The most reliable way to invoke GAM is using the python command to call the script:

$ python ~/Desktop/gam-2/gam.py

This assumes it was unzipped to the Desktop of the user account; change the path where appropriate. In order to make this a bit easier, you can create an alias that will allow you to call it with just “gam”:

$ alias gam="python ~/Desktop/gam-2/gam.py"

From here on, we’ll assume you did this. Bear in mind that aliases created this way only last until the session ends (i.e. the Terminal window gets closed).

The first command you’ll need to run is:

$ gam info domain

You’ll be asked to enter your Google Apps domain, and then you’ll be asked for a client ID and secret. These are only necessary if you’ll be using Group Settings commands, which we won’t. Press enter to continue. You’ll now be presented with a list of scopes that this GAM install will be authorized for. You can just enter “16” to continue with all selected, or you can select just Audit Monitors, Activity and Mailbox Exports for Email Audit functions. When you continue, you’ll see this:

You should now see a web page asking you to grant Google Apps Manager access. If you’re not logged in as an administrator, you can do that now, though you may experience some odd behavior. Once you grant access, return to the Terminal window and press Enter. At this point, GAM will retrieve information about your domain from Google Apps, and you’ll be returned to a shell prompt. GAM is installed and almost ready to use.

Uploading the GPG Key

There’s one final step to take before mailbox export requests are possible. The GPG key you generated earlier must be uploaded to Google. What you can do is have gpg export the key and pipe that directly to GAM. You’ll need the ID of the key so that you export the correct one to GAM. If you didn’t make a note of the ID earlier, you can see all the available keys with:

$ gpg --list-keys

pub 1024R/0660D980 2012-03-22
uid Apps Admin
sub 1024R/6D1C197B 2012-03-22

The ID you want is that of the public key. In this case, 0660D980. Now export an ASCII armored key and pipe it to GAM.

$ gpg --export --armor 0660D980 | gam audit uploadkey

Now you’re ready to request mailbox exports.

Dealing with mailbox exports

To request a mailbox export, use:

$ gam audit export <username> includedeleted

This will submit a request for a mailbox export, including all drafts, chats, and trash. You can leave off “includedeleted” if you don’t want their trash. GAM will show you a request ID, which you can use to check the status of a request.

To check the status of one request, use:

$ gam audit export status <username> <request ID>

If you leave off either username or request ID, you’ll be shown the status of all requests, pending and completed. To download a request you can use:

$ gam audit export download <username> <request ID>

You must specify both the username and the request ID. Please note that GAM will download the files to the current working directory. The files will be named export-<username>-<request ID>-<file number>.mbox.gpg, with the file numbers starting at 0. In order to decrypt the downloaded files, you’ll need to use GPG.

$ gpg --output <filename>.mbox --decrypt <filename>.mbox.gpg

This will decrypt one of the files. The predictability of the names makes it easy to programmatically decrypt all the files. For instance, if the username were bob, the ID were 53521381, and there were 8 files, you could use this command:

$ for i in {0..7}; do gpg --output export-bob-53521381-$i.mbox --decrypt export-bob-53521381-$i.mbox.gpg; done

When decryption is complete, you can take the resulting mbox files and import them into any mail client that supports mbox – Thunderbird is a good choice, though Mail.app should work as well – or you can just look at them in a text editor.

Further Reading

For more details about using GAM or the Email Audit API, please consult the official documentation.

Google Apps Manager Wiki: http://code.google.com/p/google-apps-manager/wiki/GettingStarted

Google’s Email Audit API reference: https://developers.google.com/google-apps/email-audit/

Adding incoming and outgoing access rules on a Cisco ASA

Saturday, March 17th, 2012

To understand incoming and outgoing rules there are a couple of things to know before you can define your rules. Let’s start with an understanding of traffic flow on an ASA. All incoming rules are meant to define traffic that comes inbound to an ASA interface. Outgoing rules are for all traffic that goes outbound from an ASA interface. It does not matter which interface it is, since this is a matter of data flow, and each active interface on an ASA has its own unique address.

To explain this further, let’s say we have an internal interface with an IP address of 10.0.0.1 that your local area network connects to. You can add a permit or deny rule to this interface specifying whether incoming or outgoing traffic will be permitted or not. This allows you to control which computers can communicate past that interface. Essentially you would define most of your rules for the local area network on the internal interface, governing which systems/devices could access the internet, certain protocols, or not.

Now if you know about the basic configuration of an ASA, you know that you have to set the security levels of the internal and external interfaces. By default these devices allow traffic from a higher security interface to a lower security interface. NAT/PAT will need to be configured depending on whether you want to define port traffic for specified protocols.

For this article I will just mention that there are several types of Access Control Lists (ACLs) that you can create on an ASA: Standard, Extended, EtherType, Webtype, and IPv6. For this example we will use Extended, because that is most likely what everyone will use the most. With an extended ACL, not only can you specify IP addresses in the access control list, you can also specify port traffic to match the protocol that might be required.

Let’s look at the example below:

You will see we are in configuration terminal mode:

ASA(config)# access-list acl extended permit tcp any host 192.0.43.10 eq 80

- The first part, “access-list acl”, means the access list will be named “acl”.
- Next you have a choice of access list type; we are using extended for this example.
- The next portion is the permit or deny option, and we have permit selected for this statement.
- The next selection, “any”, refers to the inside traffic (simply meaning that any internal source is allowed). If you don’t use “any”, you can specify specific devices by using “host” and an IP address, like the last part of this ACL statement.
- The last part specifies a destination host address of 192.0.43.10 and a port equal to 80.

So this example tells us that our access control list named “acl” will allow any inside traffic out to the host address 192.0.43.10 on port 80 – that is, web traffic.

Later you will notice that your statement will look like this on the ASA:

ASA(config)# access-list acl extended permit tcp any host 192.0.43.10 www
(Notice how “eq 80”, default HTTP traffic, changed automatically to “www”. This is common on Cisco ASA devices.)
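One closing note: an ACL does nothing until it’s applied to an interface with the access-group command. Assuming the typical “inside” interface naming, that would look something like:

ASA(config)# access-group acl in interface inside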

Configuring a Cisco ASA 5505 with the basics

Thursday, March 1st, 2012

The Cisco ASA 5505 is great for small to medium businesses. Below are the steps you will have to complete to configure your ASA to communicate with the internet. There are many more steps, options, and features to these devices (later articles will cover some of them).

Bring your device into configuration mode
318ASA>en
Brings the device into enable mode

318ASA#config t
Change to configuration terminal mode

318ASA(config)#
The ASA is now ready to be configured when you see (config)#

Configure the internal interface VLAN (ASAs use VLANs for added security by default):
318ASA(config)# interface Vlan 1

Configure interface VLAN 1
318ASA(config-if)# nameif inside
Name the interface inside

318ASA(config-if)#security-level 100

Sets the security level to 100

318ASA(config-if)#ip address 192.168.5.1 255.255.255.0
Assign your IP address

318ASA(config-if)#no shut
Make sure the interface is enabled and active

Configure the external interface VLAN (This is your WAN\internet connection)
318ASA(config)#interface Vlan 2
Creates the VLAN2 interface

318ASA(config-if)# nameif outside
Names the interface outside

318ASA(config-if)#security-level 0
Assigns the lowest trust level to the outside interface (the lower the number, the less trusted the interface).

318ASA(config-if)#ip address 76.79.219.82 255.255.255.0
Assign your Public Address to the outside interface

318ASA(config-if)#no shut
Enable the outside interface to be active.

Enable and assign the external WAN to Ethernet 0/0 using VLAN2
318ASA(config)#interface Ethernet0/0
Go to the Ethernet 0/0 interface settings

318ASA(config-if)#switchport access vlan 2
Assign the interface to use VLAN2

318ASA(config-if)#no shut
Enable the interface to be active.

Enable and assign the internal LAN interface Ethernet 0/1 (note: ports 0/1 through 0/7 act as a switch, but all interfaces are disabled by default).
318ASA(config)#interface Ethernet0/1
Go to the Ethernet 0/1 interface settings

318ASA(config-if)#no shut
Enable the interface to be active.
If you need multiple LAN ports you can do the same for Ethernet0/2 to 0/7.

To have traffic route from LAN to WAN you must configure Network Address Translation on the outside interface
318ASA(config)#global (outside) 1 interface
318ASA(config)#nat (inside) 1 0.0.0.0 0.0.0.0

***NOTE for ASA Version 8.3 and later***
Cisco announced the new Cisco ASA software version 8.3. This version introduces several important configuration changes, especially on the NAT/PAT mechanism. The “global” command is no longer supported. NAT (static and dynamic) and PAT are configured under network objects. The PAT configuration below is for ASA 8.3 and later:

318ASA(config)# object network obj_any
318ASA(config-network-object)# subnet 0.0.0.0 0.0.0.0
318ASA(config-network-object)# nat (inside,outside) dynamic interface

For more info you can reference this article from Cisco with regards to the changes – http://www.cisco.com/en/US/docs/security/asa/asa83/upgrading/migrating.html

Configure the default route (for this example default gateway is 76.79.219.81)
318ASA(config)#route outside 0.0.0.0 0.0.0.0 76.79.219.81 2 1

Last but not least, verify and save your configurations.

Verify your settings are working. Once you have verified your configurations, write to memory to save them. If you do not write to memory, your configurations will be lost upon the next reboot.

318ASA(config)#wr mem

Upgrading VMware ESX 3i to ESXi 4.1i

Friday, November 18th, 2011

Requirements:

a. Workstation – Windows or Linux based, either real or VM.

b. The VM host – should be backed up, and running ESX 3i (3.5.0)

c. A new serial number for ESXi 4

1. Go to vmware.com and find the vSphere Hypervisor download area. Download the most current vSphere CLI tools.

2. Download the appropriate upgrade package for the server. The file should be named something similar to “

3. Stop all VMs on the hypervisor and place it in maintenance mode.

4. From your workstation, run the vSphere CLI.

5. In the CLI, run “cd bin”.

6. In the CLI, run “vihostupgrade.pl --server <hostname> -i -b <path-to-your-upgrade.zip>”, substituting your host’s address and the path to the upgrade zip you downloaded.

MySQL Backup Options

Thursday, July 8th, 2010

MySQL bills itself as the world’s most popular open source database. It turns up all over, including most installations of WordPress. Packages for multiple platforms make installation easy and online resources are plentiful. Web-based admin tools like phpMyAdmin are very popular and there are many stand-alone options for managing MySQL databases as well.

When it comes to back-up, though, are you prepared? Backup plug-ins for WordPress databases are fairly common, but what other techniques can be used? Scripting to the rescue!

On Unix-type systems, it’s easy to find one of the many example scripts online, customize them to your needs, then add the script to a nightly cron job (or launchd on Mac OS X systems). Most of these scripts use the mysqldump command to create a text file that contains the structure and data from your database. More advanced scripts can loop through multiple databases on the same server, compress the output and email you copies.
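For example, a crontab entry along these lines (the script path and 2am schedule are just illustrative) would run such a script nightly:

0 2 * * * /usr/local/bin/mysql_backup.sh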

Here is an example we found online a long time ago and modified (thanks to the unknown author):


#!/bin/sh

# List all of the MySQL databases that you want to backup in here,
# each separated by a space
databases="database1 database2 database3"

# Directory where you want the backup files to be placed
backupdir=/mydatabasebackups

# MySQL dump command, use the full path name here
mysqldumpcmd=/usr/local/mysql/bin/mysqldump

# MySQL Username and password
userpassword=" --user=myusername --password=mypassword"

# MySQL dump options
dumpoptions=" --quick --add-drop-table --add-locks --extended-insert --lock-tables"

# Unix Commands
gzip=/usr/bin/gzip
uuencode=/usr/bin/uuencode   # used by variants of this script that email the dumps; unused below

# Create our backup directory if not already there
mkdir -p ${backupdir}
if [ ! -d ${backupdir} ]
then
echo "Not a directory: ${backupdir}"
exit 1
fi

# Dump all of our databases
echo "Dumping MySQL Databases"
for database in $databases
do
$mysqldumpcmd $userpassword $dumpoptions $database > ${backupdir}/${database}.sql
done

# Compress all of our backup files
echo "Compressing Dump Files"
for database in $databases
do
rm -f ${backupdir}/${database}.sql.gz
$gzip ${backupdir}/${database}.sql
done

# And we're done
ls -l ${backupdir}
echo "Dump Complete!"
exit

Once you verify that your backup script is giving you valid backup files, these should be added to your other backup routines, such as CrashPlan, Mozy, Retrospect, Time Machine, Backup Exec, PresSTORE, etc. It never hurts to have too many copies of your critical data files.

To make sure your organization is prepared, contact your 318 account manager today, or email sales@318.com for assistance.

Script for Populating Jabber Buddy Lists in iChat

Monday, February 22nd, 2010

Note: Uses a Jabber server hosted on yourfqdn.

The 10.6 OS X iChat server has an autobuddy feature, but this feature only works with a user’s original shortname: if they have multiple shortname aliases, these additional shortnames will not have a buddy list associated with them when they log in, as the jabber database keys off of the logged-in name: each shortname maintains its own buddy list, and aliases are not handled by autobuddy population.

To get around this limitation I have created a shell script residing at /usr/local/bin/createAutoBuddyLists.sh. When run, this script traverses the Open Directory user database and inits jabber accounts for all user shortnames (using /usr/bin/jabber_autobuddy --inituser shortname@yourfqdn). This creates an active record for each shortname. After this is created for all shortnames in the system, the script then calls /usr/bin/jabber_autobuddy -m, which creates a buddy list for all users that contains an entry for all active records.

Unfortunately there is no way to auto-fire this script when a new user alias is added; it must be run by hand. To do so, after creating a new user account (or adding a new shortname to an existing account), simply open a terminal window and type the following command:

sudo /usr/local/bin/createAutoBuddyLists.sh

You will then be prompted for authentication. Once you authenticate, the script will process and create/init the appropriate accounts and ensure that they are buddied with all existing users.

Contents of /usr/local/bin/createAutoBuddyLists.sh:
#!/bin/bash

PATH=/usr/bin

## Specify search base
declare -x SEARCHBASE="/LDAPv3/127.0.0.1"

## Specify our jabber domain
declare -x JABBERDOMAIN="yourFQDN"

## Iterate through all of our OD users
for user in $(dscl $SEARCHBASE list /Users); do
    case "$user" in
        "root")
            continue;;
        vpn_*)
            continue;;
    esac

    echo "Resolving aliases for: $user"
    ## Read all shortnames for the user
    for shortname in $(dscl -url $SEARCHBASE read /Users/$user RecordName | grep -v RecordName | sed -e 's/^\ //g'); do
        echo "Initing jabber for username: $shortname"
        ## Init the shortname
        jabber_autobuddy --inituser "${shortname//%20/ }@$JABBERDOMAIN"
    done
done

## Populate all inited accounts
jabber_autobuddy -m

Preparing for Exchange 2007

Wednesday, January 27th, 2010

Make sure you have a fully updated Windows 2008 64-bit install for the following commands to work. Note that Windows 2008 R2 will NOT work with Exchange 2007.

Exchange 2007 has a lot of prerequisites that need to be installed before you can install it. Instead of going through a bunch of wizards and using trial and error to make sure you have everything, you can set the prerequisites up from the command line.

The first command that should be run is:

ServerManagerCmd -i PowerShell

This will install and configure everything that Exchange 2007 needs for PowerShell.

IIS has several components that need to be installed to use Exchange 2007. You can create a quick batch script that includes them all. The following commands need to be run:

ServerManagerCmd -i Web-Server
ServerManagerCmd -i Web-ISAPI-Ext
ServerManagerCmd -i Web-Metabase
ServerManagerCmd -i Web-Lgcy-Mgmt-Console
ServerManagerCmd -i Web-Basic-Auth
ServerManagerCmd -i Web-Digest-Auth
ServerManagerCmd -i Web-Windows-Auth
ServerManagerCmd -i Web-Dyn-Compression

If you plan on using RPC over HTTP (Outlook Anywhere) you will need to run this command after all of the IIS commands have finished:

ServerManagerCmd -i RPC-over-HTTP-proxy

After running these commands you should be ready to run the actual setup files. When you run setup.exe you should see that everything before option 4 is greyed out; option 4 is what triggers the install. If anything has not finished, look back through the command-line output to make sure no errors have shown up.

Screen Shots & ARD

Tuesday, December 8th, 2009

For what it’s worth, we take ours from the command line. It helps keep proper track of the names of screenshots. Simply open up a terminal window on a remote server via Apple Remote Desktop (ARD) and run the following command:

sleep 3; screencapture -iw ~/Desktop/filename.png

When you run that full string as a command you’ll have 3 seconds after hitting enter to bring your target window forward, at which point your cursor will switch to the camera icon in window-selection mode. Alternatively, you can run:

sleep 3; screencapture -iwc

This will capture the picture to the remote machine’s clipboard; it can then be copied via ARD and opened in Preview (File -> New from Clipboard).

New CLI Options in Final Cut Server

Wednesday, November 25th, 2009

For those of us who thought that the Final Cut Server 1.5.1 update was just a couple of minor bug fixes, there’s a little more than meets the eye. If you run /Library/Application Support/Final Cut Server/Final Cut Server.bundle/Contents/MacOS/fcsvr_client then you’ll note that there are a few fun new features. While there hasn’t been enough time to thoroughly put the new options through their paces, we do hope to do further reporting on them as we become more comfortable with leveraging them for automations. Stay tuned!

Reading Virtual Memory Stats

Thursday, October 29th, 2009

The vm_stat command in Mac OS X will show you the free, active, inactive, wired down, copy-on-write, zero-filled, and reactivated pages for virtual memory utilization. You will also see the pageins and pageouts. If you wish to watch these statistics on an ongoing basis, you can run vm_stat followed by an integer. For example, to see the virtual memory statistics every 5 seconds:

vm_stat 5

Greylisting and Snow Leopard Server

Thursday, October 8th, 2009

10.6 introduced the use of greylisting as a spam prevention mechanism. In short, it denies the first attempt by an MTA to deliver a message; once the server tries a second time (after an acceptable delay, proving it’s not an overeager spammer), it can be added to a temporary approval list so future emails are delivered without a delay.

The problem with this is that many popular mail systems, including Gmail, don’t exactly behave as expected, so messages may take hours to be delivered. To get around this, the people championing greylisting suggest maintaining a whitelist of these popular but ‘non-standard’ mail servers, allowing them to bypass the greylist process entirely and accepting their messages the first time around. The other problem is for companies that send mail through mxlogic and similar services: the mail goes out from the first available server, potentially causing delays because the messages are sent by a different mxlogic box each time.

The problem with this under 10.6 is that there is no GUI to inform you that greylisting is enabled (it gets turned on when you enable spam filtering), so it just takes forever for messages to hit your inbox. You can start managing the whitelist/greylist system, or you can just turn it off:

cp /etc/postfix/main.cf /etc/postfix/main.cf.bak

vi /etc/postfix/main.cf

Change line 667 from:

smtpd_recipient_restrictions = permit_sasl_authenticated permit_mynetworks reject_unauth_destination check_policy_service unix:private/policy permit

To the following (removing check_policy_service unix:private/policy):

smtpd_recipient_restrictions = permit_sasl_authenticated permit_mynetworks reject_unauth_destination permit

You can then run postfix with the reload verb to reload the config files, as follows:

postfix reload
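If you’d rather script the change than edit by hand (the line number can drift between builds, so matching on the text is safer), a sed one-liner along these lines should work; it assumes the default spacing shown above, so test on a copy first:

sudo sed -i.bak 's/ check_policy_service unix:private\/policy//' /etc/postfix/main.cf
sudo postfix reload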

Troubleshooting File Replication Pro

Saturday, May 2nd, 2009

Check the Web GUI.

To check the logs via the Web GUI:
- On the server, open Safari, go to http://localhost:9100 and authenticate
- Go to Scheduled Jobs and view the logs for the 2-Way Replication Job

You can also tail the logs on the server. They are in /Applications/FileReplicationPro/logs, and among the various logs in that location the most useful is the synchronization log.

Many times the logs show that the servers’ clocks are too far apart and the date and time are not correct. Each server has a script you can run to resync the time. To run this script, open Terminal on both the first and second servers and run:
sudo /scripts/updatetime.sh

You should see output in the Terminal window and in the Console indicating that the time and date are now in sync with the time server.

To Stop and Restart the Replication Service

Open Terminal and run the following commands with sudo:
systemstarter stop FRPRep
systemstarter stop FRPHelp
systemstarter stop FRPMgmt
Once the services are stopped, start them up again in the following order:
systemstarter start FRPRep
systemstarter start FRPHelp
systemstarter start FRPMgmt

You should also restart the second (or tertiary) client:
Open Terminal and run the following commands with sudo:
systemstarter stop FRPRep
Wait for the service to stop, then start it again with this command:
systemstarter start FRPRep

ESX Patch Management

Tuesday, April 14th, 2009

VMware’s ESX Server, like any system, needs to be updated regularly. To see what patches have been installed on your ESX server use the following command:

esxupdate -query

Once you know what updates have already been applied to your system, it’s time to go find the updates that still need to be applied. You can download the updates that have not yet been run at http://support.vmware.com/selfsupport/download/. Here you will see a bevy of information about each patch and can determine whether you consider it an important patch to run. At a minimum, all security patches should be run as often as your change control environment allows. Once downloaded, make sure you have enough free space for the software you’ve just downloaded, and then copy the patches to the server (using ssh, scp or whatever tool you prefer to use to copy files to your ESX host). Now extract the patches prior to running them. To do so use the tar command, as follows:

tar xvzf <patch-name>.tgz

Once extracted, cd into the patch directory and then use the esxupdate command with the update flag and then the test flag, as follows:

esxupdate --test update

Provided that the update tests clean, run the update itself with the following command (still with a working directory inside the extracted tarball from a couple of steps ago):

esxupdate update

There are a couple of flags that can be used with esxupdate. Chief amongst them are --noreboot (which doesn’t reboot after a given update), and -d, -b and -l (which are used for working with bundles and depots).

If esxupdate fails with an error code these can be cross referenced using the ESX Patch Management Guide.

You can also run patches without copying the updates to the server manually, although this will require you to know the URL of the patch. To do so, first locate the patch number that you would like to run. Then, open outgoing ports on the server as follows:

esxcfg-firewall -allowOutgoing

Next, issue the esxupdate command with the URL embedded:

esxupdate --noreboot -r http://<server>/<path-to-patch> update

Once you’ve looped through all the updates you are looking to run, lock down your ESX firewall again using the following command:

esxcfg-firewall -blockOutgoing

New article on Xsan Scripting by 318

Saturday, April 11th, 2009

318 has published another article on Xsanity, on scripting various notifications and monitors for Xsan, packaged up into a nice installer. You can find it here:
http://www.xsanity.com/article.php/20090407150134377.

Sleeping Windows from the Command Line

Friday, April 10th, 2009

Windows, like Mac OS X, can be put to sleep, locked or suspended from the command line. To suspend a host you would run the following command:

rundll32 powrprof.dll,SetSuspendState

To lock a Windows computer from the command line, use the following command:

rundll32 user32.dll,LockWorkStation

To put a machine in Hibernation mode:

rundll32 powrprof.dll,SetSuspendState Hibernate

If you would rather simply shut the computer down, then there is also the shutdown command, which can be issued at the command line. You can also use tsshutdn, which provides a few more options than the traditional shutdown command. All of these commands can also be scripted: for example, using the at command to run one at a given time (scheduling is actually a feature built into tsshutdn and shutdown as well). Another way to automate these in Windows would be the schtasks command (or simply write a batch file and use the GUI).
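As an illustrative example, scheduling a nightly hibernate via schtasks might look like this (the task name and time are arbitrary):

schtasks /create /tn "NightlyHibernate" /tr "rundll32 powrprof.dll,SetSuspendState Hibernate" /sc daily /st 23:30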

Kerio Mail Server 6.6 and IMAP

Friday, February 27th, 2009

Update 6.6 introduced many changes to Kerio; one of them changes how IMAP reacts to deleted items.

Typically, when an item is deleted from an IMAP client, that item has a strike through to show that it has been deleted. To further delete this item, you must purge or expunge the deleted items to completely remove them.

The 6.6 update changed the way Kerio reacts to deletions by completely deleting the item for good. No moving to deleted items, no warning, just a hard delete.

This can be changed, but it must be done globally. It cannot be done on a per user basis.

Log in to the mail server. On a Mac:
1. Stop the mail server.
2. sudo or su to root.
3. Go to /usr/local/kerio/mailserver.
4. Edit mailserver.cfg.
5. Change the value of the relevant IMAP deletion setting from 1 to 0.

When I spoke to Kerio tech support, they said that this change was made due to overwhelming requests from customers. They acknowledged that this goes against the RFC. KMS 6.7 should move deleted items to the Deleted Items folder instead of trashing them completely.

Xsanity article on Configuring Network Settings using the Command Line

Tuesday, February 10th, 2009

We have posted another article to Xsanity on “Setting up the Network Stack from the Command Line”. An excerpt from the article is as follows:

Interconnectivity with Xsan is usually a pretty straightforward beast. Make sure you can communicate in an unfettered manner on a house network, on a metadata network and on a fibre channel network and you’re pretty much good to go. One thing that seems to confuse a lot of people when they’re first starting out is how to configure the two ethernet interfaces. We’re going to do two things at once: explain how to configure the interfaces, and show how to automate said configuration from the command line, so you can quickly deploy and then subsequently troubleshoot issues that you encounter from the perspective of the Ethernet networks.

View the full article here.

Xsanity Article on Managing Fibre Channel from the Command Line

Friday, January 23rd, 2009

We have posted another article on Xsanity. This one is on managing Fibre Channel settings from the command line. The following is an excerpt from the article:

Once upon a time there was Fibre Channel Utility. Then there was a System Preference pane. But the command line utility, fibreconfig, is the quickest, most verbose way of obtaining information and setting various parameters for your Apple branded cards.

To get started with fibreconfig, it’s easiest to simply ask the binary to display all the information available about the fibre channel environment. This can be done by using the -l option as follows:

View the full article here.

Xsanity Article on Labeling LUNs from the Command Line

Monday, January 19th, 2009

We have published another article on Xsanity. This one on using removable media with Xsan. More importantly this article shows how to label a LUN using the command line tool cvlabel. An excerpt is as follows:

Sometimes you just need a small SAN for testing… Times when you don’t have fibre channel and you don’t need to provide access to multiple clients, like maybe if you’re writing an article for Xsanity on an airplane. Now this volume is not going to be supported (or supportable) by anyone and nor should it be (so don’t use it for real data), but you can use USB and FireWire drives for a small test Xsan…

View the full article here.

Xsanity – Article on cvadmin

Thursday, January 15th, 2009

We wrote an article on using cvadmin to manage Xsan from the command line. It’s available on Xsanity here.

Mac OS X 10.5: Time Machine at the CLI

Saturday, October 18th, 2008

You can customize what Time Machine does not back up by using the following plist:

/System/Library/CoreServices/backupd.bundle/Contents/Resources/StdExclusions.plist

Simply add strings for the locations you don’t want backed up, and Time Machine will no longer back them up; remove the strings to re-add those locations at a later date. In the UserPathsExcluded key, you can exclude paths relative to each user’s home directory.
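To eyeball the current exclusions without opening the plist in an editor, defaults can read it directly (note the path is given minus the .plist extension):

defaults read /System/Library/CoreServices/backupd.bundle/Contents/Resources/StdExclusions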