Archive for November, 2012

How to Configure basic High Availability (Hardware Failover) on a SonicWALL

Friday, November 30th, 2012

Configuring High Availability (Hardware Failover) on a SonicWALL requires the following:

1. Two SonicWALLs of the same model (TZ 200 and up).

2. Both SonicWALLs need to be registered at MySonicWALL.com (regular registration, and then one as HF Primary, one as HF Secondary).

3. The same firmware versions need to be on both SonicWALLs.

4. Static IP addresses are required for the WAN Virtual IP interface (you can’t use DHCP).

5. Three LAN IP addresses (one for Virtual IP, one for the management IP, and one for the Backup management IP).

6. A crossover cable (to connect the SonicWALLs to each other) on the last Ethernet interfaces.

7. 1 hub or switch for the WAN port on each SonicWALL to connect to.

8. 1 hub or switch for the LAN port on each SonicWALL to connect to.

Caveats

1. High Availability cannot be configured if “built-in wireless is enabled”.

2. On NSA 2400MX units, High Availability cannot be configured if PortShield is enabled.

3. Stateful HA is not supported for connections on which DPI-SSL is applied.

4. On TZ210 units the HA port/Interface must be UNASSIGNED before setting up HA (last available copper ethernet interfaces).

 

Setup

1. Register both SonicWALLs at MySonicWALL as High Availability Pairs BEFORE connecting them to each other:

• “Associating an Appliance at First Registration”: http://www.fuzeqna.com/sonicwallkb/consumer/kbdetail.asp?kbid=6233#Associating_an_Appliance_at_First_Registration_

• “Associating Pre-Registered Appliances”: http://www.fuzeqna.com/sonicwallkb/consumer/kbdetail.asp?kbid=6235#Associating_Pre-Registered_Appliances

• “Associating a New Unit to a Pre-Registered Appliance”: http://www.fuzeqna.com/sonicwallkb/consumer/kbdetail.asp?kbid=6236#Associating_a_New_Unit_to_a_Pre-Registered_Appliance

2. Log in to the Primary HF and configure the SonicWALL (firewall rules, VPN, etc.).

3. Connect the SonicWALLs to each other on their last Ethernet ports using a crossover cable.

4. Connect the WAN port on both SonicWALLs to a switch or hub using straight through (standard) ethernet cables, and then connect the switch to your upstream device (modem, router, ADTRAN, etc.)

5. Ensure the Primary HF can still communicate to the Internet.

6. Connect the LAN port on both SonicWALLs to a switch or hub using straight through (standard) ethernet cables, and then connect them to your main LAN switch (if you don’t have one, you should purchase one. This will be the switch that all your LAN nodes connect to.).

7. Go to High Availability > Settings.

8. Select the Enable High Availability checkbox.

9. Under SonicWALL Address Settings, type in the serial number for the Secondary HF (Backup SonicWALL). You can find the serial number on the back of the SonicWALL security appliance, or in the System > Status screen of the backup unit. The serial number for the Primary SonicWALL is automatically populated.

10. Click Accept to save these settings.

 

Configuring Advanced High Availability Settings

1. Click High Availability > Advanced.

2. Put a check mark for Enable Preempt Mode.

3. Put a check mark for Generate / Overwrite Backup Firmware and Settings when Upgrading Firmware.

4. Put a check mark for Enable Virtual MAC.

5. Leave the Heartbeat Interval at default (5000ms).

6. Leave the Probe Interval at default (no less than 5 seconds).

7. Leave Probe Count and Election Delay Time at default.

8. Ensure there’s a checkmark for Include Certificates/Keys.

9. Press Synchronize settings.

 

Configuring High Availability > Monitoring Setting

(Only do the following on the Primary unit; the settings will be synchronized to the Secondary unit.)

1. Login as the administrator on the Primary SonicWALL.

2. Click High Availability > Monitoring.

3. Click the Configure icon for an interface on the LAN (ex. X0).

4. To enable link detection between the designated HA interface on the Primary and Backup units, leave the Enable Physical Interface monitoring checkbox selected.

5. In the Primary IP Address field, enter the unique LAN management IP address.

6. In the Backup IP Address field, enter the unique LAN management IP address of the backup unit.

7. Select the Allow Management on Primary/Backup IP Address checkbox.

8. In the Logical Probe IP Address field, enter the IP address of a downstream device on the LAN network that should be monitored for connectivity (a device that is always on and has a fixed address, such as a server or managed switch).

9. Click OK.

10. To configure monitoring on any of the other interfaces, repeat the above steps.

11. When finished with all High Availability configuration, click Accept. All changes will be synchronized to the idle HA device automatically.

 

Testing the Configuration

1. Allow some time for the configuration to sync (at least a few minutes). Power off the Primary SonicWALL. The Backup SonicWALL should quickly take over.

2. Test to ensure Internet access is OK.

3. Test to ensure LAN access is OK.

4. Log into the Backup SonicWALL using the unique LAN address you configured.

5. The management interface should now display “Logged Into: Backup SonicWALL Status: (green ball)”. If all licenses are not already synchronized with the Primary SonicWALL, go to System > Licenses and register this SonicWALL on mysonicwall.com. This allows the SonicWALL licensing server to synchronize the licenses.

6. Power the Primary SonicWALL back on, wait a few minutes, then log back into the management interface. The management interface should again display “Logged Into: Primary SonicWALL Status: (green ball)”.

NOTE: Successful High Availability synchronization is not logged, only failures are logged.

BizAppCenter

Thursday, November 29th, 2012

It was our privilege to be contacted by Bizappcenter to take part in a demo of their 'Business App Store' solution. They have been active on the Simian mailing list for some time, and have a product to help the adoption of the technologies pioneered by Greg Neagle of Disney Animation Studios (Munki) and the Google Mac Operations Team. Our experience with the product is as follows.

To start, we were given admin logins to our portal. The instructions guide you through getting started with a normal software patch management workflow, although certain setup steps need to be taken into account. First, you must add users and groups manually; there are no hooks for LDAP or Active Directory at present (although those are on the road map). Admins can enter the serial number of each user's computer, which allows a package to be generated with the proper certificates. Then invitations can be sent to users, who must install the client software that manages the apps specified by the admin from that point forward.

emailInvite

Sample applications are already loaded into the ‘App Catalog’, which can be configured to be installed for a group or a specific user. Uploading a drag-and-drop app in a zip archive worked without a hitch, as did uninstallation. End users can log into the web interface with the credentials emailed to them as part of the invitation, and can even ‘approve’ optional apps to become managed installs. This is a significant twist on the features offered by the rest of the web interfaces built on top of Munki, and more features (including cross-platform support) are supposedly planned.

sampleOptionalinstall

If you’d like to discuss Mac application and patch management options, including options such as BizAppCenter for providing a custom app store for your organization, please contact sales@318.com

Outlook Mailbox Maintenance and Search Troubleshooting

Thursday, November 29th, 2012

How to keep an Outlook database tidy: from the get-go, it's important to lay the foundation for Outlook and the user so that their database doesn't grow out of control. This is done by:

  1. Organizing Folders the way the user would like them
  2. Creating Rules for users (if they need them)
  3. Creating an Archive Policy that moves their email to another database (PST).
  4. Mounting the archive PST in Outlook so that it’s searchable.
  5. Checking the size of the archive PST every quarter or half year to ensure the size hasn't grown above its maximum.
  6. Creating a new archive folder for every year.

Organizing Folders the way the user would like them.

Sit down with the user and see how they would like to organize their folders. If they don't know, then revisit this with them in a couple of weeks or months. Speak to them about their workflow and make recommendations to streamline their productivity as necessary. Creating a folder is as simple as right-clicking the directory tree in Outlook and clicking "Create Folder"; the same steps work for creating subfolders.

Creating Rules for users (if they need them).

Some users use rules, others don't, and some don't even know they exist. Start up a conversation with a user and see if they know what Outlook rules are, and whether they would like to learn more about them, use some, or give them a test run for a day or so. In a nutshell, Outlook rules move email from the Inbox to any mail-enabled folder based on a set of, well, rules. You can filter by sender, subject, keywords, etc. Where to create rules differs a little depending on the version of Outlook you're using:

Creating Rules in Outlook 2003: http://www2.lse.ac.uk/intranet/LSEServices/itservices/guides/softwareAndTraining/microsoftOffice/outlook/email-rules.aspx

Creating Rules in Outlook 2007: http://uis.georgetown.edu/email/clients/outlook2007/outlook2007.createrule.html

Creating Rules in Outlook 2010: http://office.microsoft.com/en-us/outlook-help/manage-email-messages-by-using-rules-HA010355682.aspx

Try to create rules that run on the Exchange server when possible. This allows the rules to organize messages on the server before they ever reach the Outlook client.

Creating an Archive Policy that moves email to another database (PST)

NOTE: If autoarchiving from Outlook, the e-mail will not be available in Outlook Web Access / ActiveSync. If archiving in Exchange 2010 for a user, the archive databases can be available in Outlook Web Access. Proper licensing on Exchange and Outlook applies: http://www.microsoft.com/exchange/en-us/licensing-exchange-server-email.aspx

There are some defaults that Outlook uses.

  • By default, Outlook automatically archives to an archive PST called archive.pst.
  • By default, AutoArchive runs every 14 days and archives all messages older than 6 months.
  • The archive.pst will be on the local workstation.
  • Microsoft best practice is NOT to store the PST file on the network; PST files are fragile, and any incomplete write can corrupt them.
  • Do not put the PST in read-only mode; if you do, you will not be able to mount it until you take it out of read-only mode.

Setting up AutoArchive, or manually archiving for Outlook 2003: http://office.microsoft.com/en-us/outlook-help/back-up-or-delete-items-using-autoarchive-HP005243393.aspx

Setting up AutoArchive, or manually archiving for Outlook 2007: http://office.microsoft.com/en-us/outlook-help/automatically-move-or-delete-older-items-with-autoarchive-HA010105706.aspx

Auto Archive Explained for Outlook 2010: http://office.microsoft.com/en-us/outlook-help/autoarchive-settings-explained-HA010362337.aspx

Turning off AutoArchive, or manually archiving for Outlook 2010: http://office.microsoft.com/en-us/outlook-help/archive-items-manually-HA010355564.aspx

Outlook PST Size limitations:

Outlook 2003 default is 20GB, but it can be changed: http://support.microsoft.com/kb/832925

Outlook 2007 default is 20GB, but it can be changed: http://support.microsoft.com/kb/832925

Outlook 2010 default is 50GB, but it can be changed: http://support.microsoft.com/kb/832925
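If you need a different limit, the KB article above describes the registry values that control it. As a rough, hedged sketch (the hive, the version key, and whether you use the per-user or policy branch are assumptions you should verify against KB 832925 for your Office version), raising the Outlook 2010 limit to 75 GB with a warning at 70 GB might look like this from a command prompt; Outlook typically needs to be restarted before the new limits take effect:

rem Hypothetical example: values are in MB, under the policy branch for Office 14.0 (Outlook 2010)
reg add HKCU\Software\Policies\Microsoft\Office\14.0\Outlook\PST /v MaxLargeFileSize /t REG_DWORD /d 76800 /f
reg add HKCU\Software\Policies\Microsoft\Office\14.0\Outlook\PST /v WarnLargeFileSize /t REG_DWORD /d 71680 /f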

Searching PSTs

For Outlook 2003 with latest updates:

  1. Open the PST in Outlook: File > Open > Outlook Data File
  2. When using Advanced Find, make sure the archive.pst file is selected to be searched.

For Outlook 2007:

  1. Ensure Windows Search is installed
  2. Go to Control Panel > Indexing Options and ensure your archive.pst is selected to be indexed.
  3. Now when you run a search, ensure “search all Outlook folders” is selected.  This will now allow the user to search ALL folders in Outlook at once, including the archive.pst.

For Outlook 2010

  1. Ensure archive.pst is open in Outlook
  2. Search using Instant Search in Outlook or Windows Search

If searching doesn't work in Outlook 2007 and 2010, here are troubleshooting steps you can take:

  1. Check the Event Logs for anything unusual with Office, Outlook, or Windows Search, and troubleshoot the errors that you find.
  2. Ensure that the PST file has been marked for indexing:
    1. Outlook 2007: Tools > Options > Search Options
    2. Outlook 2010: File > Options > Search section > Indexing Options > Modify > Microsoft Outlook
  3. Ensure the PST hasn't gone over its maximum size limit. If it has, you will need to run scanpst.exe to repair it (you will lose some data within the PST, and there's no way to control what will be removed); if not, skip to Step #4. Scanpst.exe can be found in different places depending on the version of Outlook you have:
    1. Outlook 2010
      i. 32-bit Windows: C:\Program Files\Microsoft Office\Office14
      ii. 64-bit Windows (32-bit Office): C:\Program Files (x86)\Microsoft Office\Office14
      iii. 64-bit Outlook: C:\Program Files\Microsoft Office\Office14
    2. Outlook 2007
      i. 32-bit Windows: C:\Program Files\Microsoft Office\Office12
      ii. 64-bit Windows (32-bit Office): C:\Program Files (x86)\Microsoft Office\Office12
    After the repair has completed, open Outlook again and allow it to index (how long this takes depends on how big the PST is). If you check the Indexing Status, you should see it update at least every half hour.
      i. Check Indexing Status in Outlook 2010: click in the Search field > click the Search Tools button > select Indexing Status
      ii. Check Indexing Status in Outlook 2007: click Tools > Instant Search > Indexing Status
    Then proceed to Step #4.
  4. Disable and then re-enable the file for indexing. Go to Search Options and remove the checkmark for the PST that is giving you issues. Close Outlook and wait a couple of minutes. Open Task Manager and ensure Outlook.exe is no longer running. Once you've confirmed it has stopped on its own, open Outlook again, go back to Search Options, and put a check mark back on the PST that was giving you issues. Leave Outlook open and alone and allow it to index until the Indexing Status says "0 items remaining".
  5. If after indexing it still doesn't go down to "0 items remaining", or isn't even close, or the search STILL isn't working properly, it's possible the search index is corrupt. To rebuild it, go to Control Panel > Indexing Options > Advanced > Rebuild. This is best done overnight, as it will slow down not only Outlook but the computer as well.
  6. If rebuilding the Search Index still doesn't work, you may need to "Restore Defaults". On Windows 7, this can be done by clicking the "Troubleshoot search and indexing" link under Control Panel > Indexing Options > Advanced, then clicking "E-mail doesn't appear in search results".
  7. If after all of that it still doesn't work, it's possible you have a corrupt PST; in that case, follow through with Step #3.
  8. If that still doesn't work, consider patching Microsoft Office to its latest updates.
  9. If that doesn't work, consider repairing Microsoft Office by going to Control Panel > Uninstall a Program > Microsoft Office 2010 > click the Modify button > click Repair. Then proceed to Step #4.
  10. If that still doesn't work, create a new PST and import the data (using the Import function, or drag and drop) from the bad PST into the new PST. Then proceed to Step #3.

 

[More Splunk: Part 3] Report on remote server activity

Wednesday, November 28th, 2012

This post continues [More Splunk: Part 2] Configure a simple Splunk Forwarder.

With data flowing from the Splunk Forwarders into the Splunk Receiver server, the last step toward getting meaningful information is to create a search for specific data and put it into a report.

Splunk searches range from simple strings such as "error" to complex phrases that resemble Excel formulas mixed with shell scripting. Extracting the data gathered from a remote server requires narrowing down the location of the data from host to source to field, and then manipulating the field values to get meaning from them.

Creating a search

After logging in to the Splunk Receiver server, select Search from the App menu.

Choose Search

This presents a page with a seemingly simple search field at the top with three panels below called “Sources”, “Source Types” and “Hosts”. The window is actually a very helpful formula builder for creating complex searches. Locate the Hosts area. This lists both the local computer as well as all Splunk Forwarders.

Hosts

Clicking any of the host names, in this case “TMI”, begins building the search formula. It automatically inserts a correctly formatted string into the Search field:

host="TMI"

At the same time Splunk displays a table of data from that host and begins displaying a dynamic graph based on that data. Without any filtering or refining it’s displaying the count of records from log files it has gathered. Interesting but not very useful.

Host search

Now that the data shown is narrowed down to the server, let’s narrow it down to the data coming from the counters.sh script running on the server. The script is considered the “source” of the data and the path to the script is the value:

hosts="TMI" source="/Applications/splunkforwarder/etc/apps/talkingmoose/bin/counters.sh"

This search narrows Splunk's results considerably. Note that Splunk highlights the host and source information in the textual data. Also, note how the graph consistently shows "1" across its scope. This indicates it's reporting one record for each reporting interval. Again, not very useful.

Source search

What we really want are the values of the results displayed over time. This is handled by the “timechart” function in Splunk. The formula now pipes the data returned from the host and source into a function:

host="TMI" source="/applications/splunkforwarder/etc/apps/talkingmoose/bin/counters.sh" | timechart avg(MySQLCPU)

Remember that the counters.sh script was written to denote “fields” called “MySQLCPU” and “ApacheCount”. Using the field name in the timechart function returns the values over time. Using “avg” returns the average of the values (really, just the average of the one value). The final result is a simple table of data, which is all that’s needed to create a report.

Timechart
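The same approach works for the Apache process count. Assuming the ApacheCount field written by counters.sh, a single search can chart both values at once (a hedged sketch; adjust the host and source to match your own forwarder):

host="TMI" source="/Applications/splunkforwarder/etc/apps/talkingmoose/bin/counters.sh" | timechart avg(MySQLCPU), avg(ApacheCount)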

Creating a report

Now, we can graph this table of data. From the Create menu select Report… Splunk creates a rough graph, which is useful but not very easy to read.

Initial graph

Using the formatting options above the graph, adjust these items:

  • Chart type: area
  • Chart title: MySQL CPU Usage

Area graph

To save this graph so that it’s easily accessible without having to recreate the search each time, let’s add it to a dashboard. A dashboard is a single Splunk page that can act as an overview for multiple related or unrelated processes or servers.

From the Create drop down menu select Dashboard panel… Name the new panel “MySQL CPU Usage” and click the Next button. If an appropriate dashboard already exists simply choose to add the panel to that existing dashboard. Otherwise, name the new dashboard itself “Servers Dashboard” and click the Next button. Click the Finish button when done.

To view the report panel without having to recreate the search each time, locate the Dashboards & Views menu and select the Servers Dashboard.

Select dashboard

A dashboard can hold any number of report graphs for one or multiple machines. Create a new search and then create a new report based on that search. When done save it to the dashboard. Drag and drop panels on the page to reorder them or put higher priority panels toward the top or left of the page.

LifeSize: Establishing A 3-Way Call

Tuesday, November 27th, 2012
I'm becoming pretty fond of LifeSize video conferencing units, mostly because they're so easy for end users that I rarely get any support calls about them. LifeSize units support 3-way (and more) video conference dialing. When I've done a 3-way call in the past, I've just done the following:
  • Establish the first call.
  • Use the Call button on the remote to bring up the address book screen (aka Call Manager).
  • Highlight the requested call to add.
  • Click OK on the remote.
  • The second call added will appear side-by-side with your own video on the 2nd monitor. Your call should then appear on the first monitor of each of the two other callers, side-by-side on their second monitor with the first caller you added.
  • When the call is finished, click on the hang up button on the remote to bring up Call Manager.
  • Click on the Hang Up button again to disconnect all users.
  • OR at this point you could also add another call, bandwidth permitting.
  • If you start a presentation while on the call then all callers will be tiled on the main screen and the presentation will play on the second screen.

Repeat this process to add more and more callers. If you have an RJ-11 with POTS you can also add voice callers. Granted, they can't see anything you're piping over the video, but they can still participate in the parts of the call where video isn't needed.

[More Splunk: Part 2] Configure a simple Splunk Forwarder

Monday, November 26th, 2012

This post continues [More Splunk: Part 1] Monitor specific processes on remote servers.

So far, I have a simple shell script called counters.sh that will return two pieces of information I want fed into my Splunk indexer server:

  • MySQL CPU usage
  • Count of Apache web server processes

It’s going to return a result that looks something like:

2012-11-20 14:34:45-08:00 MySQLCPU=23.2 ApacheCount=1

Install the Forwarder

For each server, I need a Splunk agent called a Forwarder installed. The Forwarder's purpose is to send the data collected on the local server to a remote Splunk server for indexing and reporting. Splunk offers three types of Forwarders, but I want the one with the lightest weight and overhead: a Universal Forwarder. For my testing I downloaded the Mac OS X 10.7 installer and installed it onto OS X 10.8 without any noticeable issues.

At this point the Forwarder service hasn’t been started yet. I first want to add my script and a couple of configuration files. The configuration files are necessary because the Universal Forwarder has no web interface to facilitate point and click configuration.

Create and populate the app directory

First, I want to create a folder for my “app”. An app is a directory of scripts and configuration files. By creating my own app directory I can control the behavior of its contents, overriding preset server defaults if I choose.

mkdir /Applications/splunkforwarder/etc/apps/talkingmoose/

Inside my app folder I’ll create two more called bin and local:

mkdir /Applications/splunkforwarder/etc/apps/talkingmoose/bin
mkdir /Applications/splunkforwarder/etc/apps/talkingmoose/local

The bin folder is a Splunk security requirement. Any executable, such as a script, must reside in this folder. This is where I’ll place my counters.sh script and make it executable using chmod +x.
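For example (assuming counters.sh currently sits in your home directory, which is an assumption on my part), copying it into place and marking it executable looks like this:

cp ~/counters.sh /Applications/splunkforwarder/etc/apps/talkingmoose/bin/
chmod +x /Applications/splunkforwarder/etc/apps/talkingmoose/bin/counters.sh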

The local folder will contain two plain text configuration (.conf) files:

  • inputs.conf
  • outputs.conf

Put simply, inputs.conf is the configuration file that controls executing the script and getting its data into the Splunk Forwarder. And outputs.conf is the configuration file that controls sending the data out to the indexing server or “Splunk Receiver”. These files can be very simple or very complex depending on the needs. I like simple.

Contents of inputs.conf

[script:///Applications/splunkforwarder/etc/apps/talkingmoose/bin/counters.sh]
disabled = false
interval = 60.0

This .conf file tells the Splunk Forwarder where to find the script to execute and then executes it every 60 seconds.

Contents of outputs.conf

[tcpout:group1]
server=192.168.5.42:9997

This .conf file tells the Splunk Forwarder to send its collected script data to a specific IP address on port 9997 where the Splunk Receiver is listening.
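As an aside, the same forwarding target can also be defined from the forwarder's command line instead of hand-editing outputs.conf; the CLI writes the equivalent configuration for you (shown here with Splunk's default admin credentials, mentioned again below):

sudo /Applications/splunkforwarder/bin/splunk add forward-server 192.168.5.42:9997 -auth admin:changeme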

Configure the Splunk Receiver to listen

All that’s left to do is configure the Splunk Receiver to listen for data coming in from Splunk Forwarders on port 9997 via its web interface and start the Splunk Forwarder’s service via its command line utility.

Enable receiving

On the Splunk Receiver server, the server accepting all the data for searching later, click the Manager link in the upper right corner and then click Forwarding and receiving. Click on Configure receiving and then click the New button to create a new listening port. Enter 9997 or another port number not commonly used. Click the Save button.

Enable forwarding

On each Splunk Forwarder the necessary files are already in place. The only task left is to start the Forwarder’s service.

sudo /Applications/splunkforwarder/bin/splunk start

If this is the first time running the start command then press the spacebar repeatedly to read the license agreement or press “q” to quit and immediately accept the agreement.

To test that the Forwarder is working run the list command:

sudo /Applications/splunkforwarder/bin/splunk list forward-server

If prompted for credentials use Splunk’s defaults:

Splunk username: admin
Password: changeme

It should return something that looks like this:

Active forwards:
192.168.5.42:9997
Configured but inactive forwards:
None

Searching on the Splunk Receiver should also return results from the Forwarders. Search for host="<forwarderHostName>".

Now that remote server data is flowing into the Splunk indexer machine the last step is to search for it and turn it into meaningful reports.

[More Splunk: Part 3] Report on remote server activity

Monitor Apache Load Times

Saturday, November 24th, 2012

When troubleshooting Apache issues, it sometimes becomes necessary to turn up the level of logging so that we can further determine what a given server is doing and why. One handy new feature of the Apache 2 series is the ability to log how long it takes to serve a page. This allows us to track load times throughout the entire website and pipe them into our favourite analytical tool, such as Splunk or, for you old admins, Webalizer or AWStats.

Adding this new variable is straightforward. Just navigate over to your httpd.conf file and look for the section that defines the various log formats. We're going to add the %D variable there, which represents the time it takes to serve a page in microseconds. Here is my httpd.conf, for example:

LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %O" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent

The quick and dirty way to get this mod installed is to look for the type of log that your server is configured to use (usually common or combined) and add %D to the end (although you could put it anywhere). As you can see below, I've added it to the combined log format.

LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %D" combined

The other option is to make a new type of log and put it in there. I'm going to make a new LogFormat named custom_log and define it below. Note that you'll have to make sure that your vhost is set to use this type of log.

LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %D" custom_log
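Whichever name you pick, the format does nothing until a CustomLog directive references it, so in the vhost you would point the access log at the new format along these lines (the log path here is just an example, not from the original post):

CustomLog /var/log/apache2/example.com-access.log custom_log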

 

Bash Tidbits

Friday, November 23rd, 2012

If you're like me you have a fairly customized shell environment full of aliases, functions and other goodies to assist with the various sysadmin tasks you need to do. This makes being a sysadmin easy when you're up and running on your primary machine, but what happens when your main machine crashes?

Last weekend my laptop started limping through the day and finally dropped dead and I was left with a pile of work yet on my secondary machine.  Little to no customization was present on this machine which made me nearly pull out my hair on more than one occasion.

Below is a list of my personal shell customizations and other goodies that you may find useful to have as well. This is easily installed into your ~/.bashrc or ~/.bash_profile file to run every time you open a new shell.

 

# Useful Variables
export CLICOLOR=1
export LSCOLORS=GxFxCxDxBxegedabagaced
export SN=`netstat -nr | grep -m 1 -iE 'default|0.0.0.0' | awk '{print \$2}' | sed 's/\.[0-9]*$//'`
export ph="phobos.crapnstuff.com"
PS1='\[\033[0;37m\]\u\[\033[0m\]@\[\033[1;35m\]\h\[\033[0m\]:\[\033[1;36m\]\w\[\033[0m\]\$ '

# Aliases
alias arin='whois -h whois.arin.net'
alias grep='grep --color'
alias locate='locate -i'
alias ls='ls -lh'
alias ns='nslookup'
alias nsmx='nslookup -q=mx'
alias pg='ping google.com'
alias ph='ping phobos.crapnstuff.com'
alias phobos='ssh -i ~/.ssh/identity -p 2200 -X -C -t alt229@phobos.crapnstuff.com screen -R'
alias pr="ping \`netstat -nr | grep -m 1 -iE 'default|0.0.0.0' | awk '{print \$2}'\`"
alias py='ping yahoo.com'

At the top of the file you have two variables that set nice-looking colors in the terminal to make it more readable.

One of my favourite little shortcuts comes next. You'll notice that there is a variable called SN there; it is a shortcut for the subnet that you happen to be on. I find myself having to do stuff to the various hosts on my subnet, so if I can save having to type in 192.168.25 fifty times a day then that's definitely useful. Here are a few examples of how to use it:

ping $SN.10
nmap -p 80 $SN.*
ssh admin@$SN.40

Also related is the alias named pr.  This finds the router and pings it to make sure it’s up.

Continuing down the list there is the ph variable (and matching alias), which points to my personal server. It's useful for all sorts of shortcuts and can save a fair amount of typing. Examples:

ssh alt229@$ph
scp ./test.txt alt229@$ph:~/

There are a bunch of other useful aliases there too so feel free to poach some of these for your own environment!

[More Splunk: Part 1] Monitor specific processes on remote servers

Thursday, November 22nd, 2012

I was given a simple Splunk project: Monitor MySQL CPU usage and Apache web server processes on multiple servers.

Splunk is an amazing product but it's also a beast! While it may be just a tool in one administrator's arsenal of gadgets, it could very well be another administrator's full-time job. Installing the software is a breeze and getting interesting reports is child's play. Getting the meaningful reports you want, on the other hand, requires skills in the realms of system administration, scripting, statistics and formula building (think Excel).

My first project with the software was to monitor two things on remote servers:

  • MySQL CPU usage
  • Count of Apache web server processes

It sounds simple but involves a few pieces:

  • Writing a script to get the data
  • Configuring servers as Splunk Forwarders
  • Forwarding the data to a central server
  • Creating a search to populate a meaningful chart

Create the script

This is the easy part but it requires some special formatting to get Splunk to recognize the data it returns.

First, Splunk parses most any log file based on a time stamp and it can recognize many different versions of timestamps. The data following the timestamp constitutes the rest of the row or record. When Splunk gets to a second timestamp it considers that information to be another record.

So, my script output needed a timestamp. I followed the RFC-3339 specs (one of many formats), which describes something that looks like this:

2012-11-20 14:10:14-08:00

That's a simple calendar date followed by a time and its offset from GMT. In this case the -08:00 denotes Pacific Standard Time, or PST.

Next, I needed to collect two pieces of data: MySQL CPU usage and the number of active Apache web server processes. I started with a couple of shell script ps commands.

MySQL CPU usage

ps aux | grep mysqld | grep -v grep | awk '{ print $3 }'

Count of Apache web processes

ps ax | grep httpd | grep -v grep | wc -l

While Splunk can understand a standard timestamp as a timestamp it needs some metadata to describe the information that these commands are returning. That means each piece of information needs a name or “field”. This creates a key/value pair it can use when searching on the information later.

In other words, the MySQL command above will return a number like "23.2". Splunk needs a name like "MySQLCPU". The key/value pair then needs to be in the form of:

MySQLCPU=23.2

This is the entire script to return the timestamp and two key/value pairs separated by tabs:

#!/bin/sh

# RFC-3339 date format, Pacific
TIMESTAMP=$( date "+%Y-%m-%d %T-08:00" )

# Get CPU usage of the mysqld process
CPUPERCENTAGE=$( ps aux | grep mysqld | grep -v grep | awk '{ print $3 }' )

# Get count of httpd processes
APACHECOUNTRAW=$( ps ax | grep httpd | grep -v grep | wc -l )
APACHECOUNT=$( echo $APACHECOUNTRAW | sed -e 's/^[ \t]*//' )

echo "$TIMESTAMP\\tMySQLCPU=$CPUPERCENTAGE\\tApacheCount=$APACHECOUNT"

It will return a result that looks something like this:

2012-11-20 14:34:45-08:00 MySQLCPU=23.2 ApacheCount=1

Save this script with a descriptive name such as counters.sh. Each Splunk Forwarder server will run it to gather information at specified time intervals and send those results to the Splunk Indexer server. For that see:

[More Splunk: Part 2] Configure a simple Splunk Forwarder

Playing Taps

Wednesday, November 21st, 2012

It seems like the whole world's gone mobile, and along with it the tools to turn the stampede of devices coming through businesses' doors into something manageable. For iOS, it wasn't long ago that activation was through iTunes only (*gasp!*) and MDM was a hand-coded webpage with XML and redeemable code links on it. Back then Apple IDs were a monumental headache (no change there) and Palm wasn't dead yet. It could cause one to reminisce back to the first coming of Palm. Folklore has it there was a job duty at Palm called 'tap counter', to ensure nothing took longer than 3 taps to achieve. If you've deployed any number of iOS devices like iPads, you may be painfully aware of just how many more taps than that it takes to get one of these devices out of the box and into a usable state:
Manually doing each individual device “over the air”, you need to tap 16 times to activate and use the device with an open wireless network (17 if it’s a newer iPad with Siri integration)

And the ‘iTunes Store Activation Mode’ method leaves 9 taps, since it skips the language selection and time zone choices along with the option to bypass Wi-Fi setup.

If you have access to a Mac running Apple Configurator, it takes only 13 taps after you 'Prepare' the device for use. It would seem like things haven't actually improved. But Apple Configurator has more tricks than just the newer one we discussed recently, which is getting Apple TVs on a wireless network. When you want to do iOS's version of Managed Preferences, configuration profiles (a.k.a. .mobileconfig files), that's another two taps PER PROFILE. This is an opportunity to really learn to love Apple Configurator, though, as it shows two of its huge advantages here (the third being the fact that it can do multi-user assignment on a single iPad, including checking sets of applications out and reclaiming the app licenses as desired):

- You can restore a backup of an activated device (or as many as 30 at once), which answers all of the setup questions in one automated step (along with any other manual customizations you may want)

- If you put the device in Supervision mode, you can even apply configuration profiles WITHOUT tapping “accept” and “install” for each and every one

There are so many things to consider with all the different ownership models for apps, devices, and the scenarios regarding MDM and BYOD, that I thought it was worth having a mini-topic of 'how do folks approach getting them iPads out of the box and into a usable state?'

Create Empty Files Of Arbitrary File Sizes In Windows

Wednesday, November 21st, 2012
The fsutil command in Windows can be used to create empty files of arbitrary sizes. To create a file at a path, use fsutil along with the file option, followed by createnew, then the path and then the size in bytes. For example, to create a 100MB file called myfile.txt at c:\testfiles:
fsutil file createnew c:\testfiles\myfile.txt 100000000
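Because the size argument is in bytes, a roughly 1 GB (1073741824 byte) test file at the same hypothetical location would be created with:

fsutil file createnew c:\testfiles\bigfile.txt 1073741824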

Export Exchange 2007 Mailbox Users Sorted By size

Tuesday, November 20th, 2012

Let’s say you need to run a report in Exchange 2007 containing the following items:

  • AD Display Name
  • Mailbox Size
  • Mailbox Item Count
  • Current Storage Limit (if applicable)

And you need this list sorted by Mailbox Size, descending. You would run the following command in the Exchange Management Shell, on the Exchange 2007 server, using AD/Exchange admin rights:

Get-MailboxStatistics | where {$_.ObjectClass -eq "Mailbox"} | Sort-Object TotalItemSize -Descending | ft @{label="User";expression={$_.DisplayName}},@{label="Total Size(MB)";expression={$_.TotalItemSize.Value.ToMB()}},@{label="Items";expression={$_.ItemCount}},@{label="StorageLimit";expression={$_.StorageLimitStatus}} -auto >c:\mx_size_report.txt

The above will output to a txt file called “mx_size_report.txt”

If you want to view this in Excel, simply open Excel and import this TXT file. You will now have an Excel-manageable file with the report values you just generated.
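If you'd rather skip the import step, a lightly modified version of the same pipeline (a sketch along the same lines, swapping ft for Select-Object) can write a CSV file that Excel opens directly:

Get-MailboxStatistics | where {$_.ObjectClass -eq "Mailbox"} | Sort-Object TotalItemSize -Descending | Select-Object @{n="User";e={$_.DisplayName}},@{n="Total Size(MB)";e={$_.TotalItemSize.Value.ToMB()}},@{n="Items";e={$_.ItemCount}},@{n="StorageLimit";e={$_.StorageLimitStatus}} | Export-Csv c:\mx_size_report.csv -NoTypeInformation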

OS X Server backup in Mountain Lion (and beyond)

Monday, November 19th, 2012

Data backup is a touchy subject. Nobody does it because they want to. They do it because sometimes bad things happen, and we need some way to take a dead server and transform it into a working one again. For Mac OS X Server, that wasn't always easy. Because of its basic nature – a mixture of Open Source components and proprietary Apple technology – backing up OS X Server effectively would usually mean coming up with at least two backup solutions.

To help with all of this, 318 put together the sabackup package. Its purpose was to use Apple’s built-in server management command line tool (serveradmin) to export service settings in such a way that you could turn around and import them with serveradmin and get your server working again. I know that having those backed up settings not only allowed me to resurrect more than one server, but I also have used them to find out when a specific change was made. (Usually after we realized that said change had broken something.)

With Lion and Mountain Lion, Apple decided to address the problem of properly backing up services and service data, and Time Machine now includes a mechanism for backing up services running on OS X Server. Inside the Server.app bundle, in the ServerRoot folder that is now the faux root for all Server.app services, you'll find a ServerBackup command. This tool uses a selection of backup scripts in /Applications/Server.app/Contents/ServerRoot/usr/libexec that allow for backup and restore of specific services. There's also a collection of SysV-style scripts in /Applications/Server.app/Contents/ServerRoot/etc/server_backup that contain the parameters that ServerBackup will use when backing up services. As with all things Apple, they're XML Plists. Certain services merit their own specific backup scripts: Open Directory, PostgreSQL, File Sharing (called "sharePoints" in this context), Web, and Message Server. The OD script produces an Open Directory archive in /var/backups, the PostgreSQL script produces a dump of all your databases, and Message Server will give you a backup of the Jabber database. Web backs up settings, but it's important to note that it doesn't back up data. And then there's the ServerSettings script, which produces a serveradmin dump of all settings for all services. Everything is logged in /var/log/server_backup.

This is what sabackup was designed to do, only Apple has done it in a more modular, more robust, and 100% more Apple-supported way. With that in mind, we’ve decided to cease development on sabackup. Relying on Apple’s tools means that as new services are added, they should be backed up without any additional work on your part – ServerBackup will be updated along with Server.app.

ServerBackup has its quirks, mind you. It’s tied to Time Machine, which means Time Machine has to be enabled for it to work. That doesn’t mean you have to use Time Machine for anything else. If you exclude all the Finder-visible folders, you’ll still get a .ServerBackup folder at the root of the volume backup, with all the server backups. You’ll also get /private, including var (where backups and logs are), and etc, where a lot of config files live. You can dedicate a small drive to Time Machine, let Time Machine handle the backup of settings and data from Server.app services, and make sure that drive is a part of your primary backup solution. You do have a primary backup solution, don’t you?

Custom dynamic dns updater

Sunday, November 18th, 2012

Serving pages over a dynamic IP can be frustrating, especially if you try to use a free dynamic DNS account. Many of them expire if not used in X number of days, some cost more money than your actual domain, and a lot of the built-in clients in many of today's popular routers don't work reliably.

This is where some custom script foo comes in. Using industry standards like SSH, SSIs and cron jobs we can set up a super lightweight script that sends your dynamic IP to a webserver so that it can direct visitors to your in-house server.

 

The graphic below should help visualize:

 

As you can see from the diagram this script runs, gathers a single variable and then pushes it out to a server via ssh.  From there the server calls that file and uses the ip as a variable to pass along to clients visiting the website by using a simple meta refresh.

 

Dynamic IP Configuration

After getting ssh keys setup there are really only 2 steps to getting this script to work.  If you haven’t set those up before refer to this guide for help.

 

Step 1. Download the ip_updater.sh script here and change the following 4 variables to match your own setup (a minimal sketch of such a script appears after the list):

IDENTITY = path to your ssh identity file (usually ~/.ssh/identity or ~/.ssh/id_rsa)

DEST_SERVER = ip or hostname of the server you’re sending your ip to

DEST_FILE = temp file on the server that holds your ip (/tmp/myip)

USERNAME = username to logon as
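Since the download link may not work for everyone, here's a minimal sketch of what an ip_updater.sh along these lines could look like. It is not the original script from the post: the use of icanhazip.com to discover the public IP and the placeholder values for the four variables are my assumptions; treat it as a starting point only.

#!/bin/sh
# Hypothetical ip_updater.sh sketch - not the original script from the download link
IDENTITY=~/.ssh/identity          # path to your ssh identity file
DEST_SERVER=web.example.com       # server you're sending your ip to (placeholder)
DEST_FILE=/tmp/myip               # temp file on the server that holds your ip
USERNAME=alt229                   # username to log on as

# Grab the current public IP (icanhazip.com is one of several such services)
MYIP=`curl -s http://icanhazip.com`

# Push it to the web server over ssh so the SSI page can include it
echo "$MYIP" | ssh -i "$IDENTITY" "$USERNAME@$DEST_SERVER" "cat > $DEST_FILE"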

Step 2.  Setup a crontab to run this script at certain intervals

Here is my sample crontab which runs this script once an hour at 15 after:

# m    h    dom    mon    dow    command
15     *    *      *      *      /home/alt229/bin/ip_update.sh

 

Web Server Configuration

Configuration of the webserver is nearly as simple.  Just make sure that server side includes are enabled first.  Then, create a file named index.shtml with the following contents.
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Zync Catalog Redirect</title>
	<meta http-equiv="refresh" content="0;URL='http://<!--#include virtual="myip" -->'" />
    </head>
    <body bgcolor="#000000">
	<p>Redirecting you to <!--#include virtual="myip" --></p> 
    </body> 
</html>

When clients hit your server and get served this page they will automatically get redirected to your dynamic address.

Owncloud: Finally a Dropbox that can sync to your local LAN

Saturday, November 17th, 2012

If I had a megabyte for the number of times I praised a cloud provider's service while simultaneously lamenting that my data had to leave my LAN to live out the rest of its days on their servers, then I'd have collected enough MBs to fill a DVD.

That statement is definitely a tad hyperbolic, but good "cloud" software that average users can administer and control is definitely much sought after and needed.

Enter Owncloud, which has the lofty goal of making all your data accessible everywhere and on all your devices. I said it was lofty, right? I was more than a tad skeptical when I read this too, but Owncloud delivers.

Setup is almost deceptively straightforward: all you have to do is download a tar file and extract it to your web root folder. From there, make sure the files are all owned by the Apache daemon user (www-data for Ubuntu and apache for Red Hat / CentOS). The only real tricky part is making sure you have all the prerequisites installed and have SSL running properly. Check the official site for a list of prerequisites: http://owncloud.org/support/install/

wget http://mirrors.owncloud.org/releases/owncloud-4.5.3.tar.bz2
tar -jxvf owncloud-4.5.3.tar.bz2
mv owncloud /var/www
chown -R www-data /var/www/owncloud/

You'll want to make sure that the AllowOverride directive is set to All, or else the custom .htaccess that comes with Owncloud won't work and certain modifications (such as moving the data folder outside the web root) will need to be made due to security concerns.
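On a stock Ubuntu/Apache layout, for example, that usually means something like the following in the site or virtual host configuration (a sketch; the exact config file depends on your distribution and web root):

<Directory /var/www/owncloud>
    AllowOverride All
</Directory>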

The next step is to log in to your domain and create a username and password. You also have the option of connecting to a MySQL database or using SQLite. If unsure, choose SQLite as it's the easiest to set up and most compatible.

Create Admin User

 

Next, you have to install the sync client on your local machine.  Grab the latest version from the official website: https://owncloud.com/download.

Run the installer and open the app. The first time you run it, it'll ask for your connection settings. Enter them like so:

Hit next and with any luck you’ll be off to the races!  The default folder is ~/ownCloud and you can start syncing files immediately simply by dragging and dropping.

 

Next time we'll go over some more in-depth configurations, such as configuring contact / calendar syncing as well as syncing to an Amazon S3 bucket.

If you get stuck anywhere in the process please refer to the official install guide located here: http://owncloud.org/support/install/

Create A Package To Enable SNMP On OS X

Friday, November 16th, 2012

Building on the Tech Journal post “Enable SNMP On Multiple Mac Workstations Using A Script“, let’s take the snmpd.conf file and put it into an Apple Installer package for easier deployment.

Most any packaging tool for Mac OS X, such as Apple’s PackageMaker, JAMF Software’s Composer, Absolute Software’s InstallEase or Stéphane Sudre’s Iceberg, can create a simple package combining a couple of scripts with a payload file. For this example, the payload will be the snmpd.conf file itself and the scripts will be preflight and postflight scripts to protect existing data and start the SNMP service.

First, create the snmpd.conf file using the instructions in the Create the snmpd.conf file section from the prior post.

Next, create the preflight and postflight scripts using a plain text editor such as TextEdit.app or BBEdit.app. Save each script as “preflight” or “postflight” without any file extensions.

Preflight script

The preflight script stops the SNMP service if it’s running and renames any existing /usr/share/snmp/snmpd.conf file to snmpd.bak followed by a unique date and time.

#!/bin/sh
# Preflight

# Stop the SNMP service if it's running
/bin/launchctl list | /usr/bin/grep org.net-snmp.snmpd
if [ $? = 0 ] ; then
     /bin/launchctl unload -w /System/Library/LaunchDaemons/org.net-snmp.snmpd.plist
     /usr/bin/logger SNMP service stopped. # Appears in /private/var/log/system.log
fi

# Rename the snmpd.conf file if it exists
if [ -f /usr/share/snmp/snmpd.conf ] ; then
     /bin/mv /usr/share/snmp/snmpd.conf /usr/share/snmp/snmpd.bak$( /bin/date "+%Y%m%d%H%M%S" )
fi

exit 0

Postflight script

The postflight script starts the SNMP service

#!/bin/sh
# Postflight

# Start the SNMP service
/bin/launchctl load -w /System/Library/LaunchDaemons/org.net-snmp.snmpd.plist
if [ $? = 0 ] ; then
     /usr/bin/logger SNMP service started. # Appears in /private/var/log/system.log
fi

exit 0

The elements are ready to add to the packaging application. Using Iceberg as an example, create a new project and select Package from the Core Templates. Name the project "Enable SNMP" and select a location to store the project files such as ~/Iceberg. Copy the snmpd.conf file and the preflight and postflight scripts to the ~/Iceberg/Enable SNMP folder for easier access.

Iceberg folder

Edit any information in the Settings pane to add clarity or leave the defaults automatically populated.

Settings

Select the Scripts pane and drag the preflight and postflight scripts into the Installation Scripts window being sure to match the preflight script to the preflight script file and the postflight script to the postflight script file.

Scripts

Select the Files pane. Right-click the top-level root folder in the files list and select New Folder. Name this new folder “usr”. It should appear at the same level as the Applications, Library and System folders. Continue creating new folders until the /usr/share/snmp folder hierarchy is complete. Then drag in the snmpd.conf file so that it falls under the snmp folder.

Select Archive menu –> Show Info to display each folder’s ownership and permissions. Adjust ownership of the new folders and the snmpd.conf file to owner:root and group:wheel. Adjust permissions to 755 for folders and 644 for the file (see screenshot).

Files

The package is ready. Select Build menu –> Build to create the package. Iceberg places new packages into the project folder: ~/Iceberg/Enable SNMP/build/Enable SNMP.pkg.

Copy the newly created package to a test machine and double-click to run it. Verify that everything worked correctly by running the snmpget command:

snmpget -c talkingmoose-read localhost system.sysDescr.0

It should return something like:

SNMPv2-MIB::sysDescr.0 = STRING: Darwin TMI 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

When satisfied the installer works correctly use a deployment tool such as Apple Remote Desktop, Casper or munki to distribute the package to Mac workstations.

Introducing Splunk: Funny name, serious logging

Thursday, November 15th, 2012

So, my boss says:

“Write an article called ‘Getting Started with Splunk.’”

I reply:

“What, you think I know all this stuff? This really would be a getting started article.”

But here it is and WOW is Splunk cool!

My only experience with Splunk up to a couple days ago was seeing a T-shirt with “Log is my copilot”. I knew it had something to do with gathering log files and making them easier to read and search. In about an hour I had gone to Splunk’s website to research the product, downloaded and installed it, and started viewing logs from my own system. The Splunk folks have made getting their product into their customer’s hands easy and getting started even easier.

What is Splunk?

Simply put, Splunk can gather just about any kind of data that goes into a log (system logs, website metrics, etc.) into one place and make viewing that data easy. It’s accessed via web browser so it’s accessible on any computer or mobile device such as an iPad.

What do I need to run Splunk?

Practically any common operating system today can run Splunk: Mac OS X, Linux, Windows, FreeBSD and more.

How much does Splunk cost?

Don’t worry about that right now. Download and install the free version. It takes minutes to install and is a no-brainer. Let’s get started.

Getting Splunk

IT managers and directors may be interested in watching the introductory and business case videos with the corporate speak (“operational intelligence” anyone?) and company endorsements. Techs will be interested in getting started. Right on their home page is a big green Free Download button. Go there, click it and locate the downloader for your OS of choice. I downloaded the Mac OS X 10.7 installer to test (and installed it on OS X 10.8 without any issues).

Splunk home

This does require a sign-up to create an account. It takes less than a minute to complete. After submitting the information the 100 MB download begins right away.

While waiting for the download…

When the download is on its way the Splunk folks kindly redirect to a page with some short videos to watch while waiting. Watch this first one called Getting data into Splunk. It’s only a few minutes and this is the first thing to do after getting into Splunk.

Installing and starting Splunk

The download arrives as a double-clickable Apple Installer package. Double-click and install it. Toward the end it opens a simple TextEdit window with instructions for how to start, stop and access the newly installed Splunk site.

Install done

Files are installed in /Applications/splunk and resemble a UNIX file system.

Splunk application folder

Open the Terminal application found in /Applications/Utilities and run the command /Applications/splunk/bin/splunk start. If this is the first time running Splunk it prompts to accept its license agreement. Tap the spacebar to scroll through and read the agreement or type “q” to quit and agree to the license.

EULA

Accepting the agreement continues to start Splunk where it displays some brief setup messages.

Starting Splunk

The setup then provides the local HTTP address for the newly installed Splunk site. Open this in a web browser to get to the login screen. The first login requires that the administrator account password be reset.

Splunk login

Following along with the Getting data into Splunk video, Splunk will need some information. Mac OS X stores its own log files. Let’s point to those.

Click the Add Data link to begin.

New Splunk home

Since Mac OS X’s log files are local to the machine, click A file or directory of files.

Add files

Click Next to specify local files.

Add local logs

This opens a window that exposes not only Mac OS X’s visible folders but its invisible folders as well. Browse to /var/log/system.log and click the Select button.

Browse logs folder

For now, opt to skip previewing the log file and click Continue.

Path to system.log

Now, let’s opt to monitor not only the system.log file but the entire /var/log folder containing dozens of other log files as well. Note that Splunk can watch rotated and zipped log files too. Click Save to finish adding logs.

Add /var/log folder

Let’s start searching!

Success, start searching

The Search window initially displays a list of all logs Splunk is monitoring. To narrow the search change the time filter drop down menu to Last 60 minutes. This will make the results a little easier to see on a system that’s only been running a short while.

Last 24 hours

Now, search for install*. Splunk will only search for the word “install” without providing the asterisk as a wildcard character. Splunk supports not only wildcard searches but booleans, parentheses, quotes, etc. It will return every instance recorded in the logs that matches the search criteria. It also creates an interactive bar chart along the top of the page to indicate the number of occurrences found for the search at particular times.

Search for install

To further refine the search, Option+click most any word in the log entries below and Splunk will automatically add the necessary syntax to remove an item. In this case the install* search returned install, installer and installd. Option+clicking installd changed the search criteria to install* NOT installd.

Modified search

Now what?

Continue exploring the videos to understand Splunk’s possibilities and take advantage of its Splunk Tutorial, which is available online as well as in PDF format for offline viewing. They do a great job leading users through setup and creating reports.

Still asking about price? Good.

The free version remains free but doesn’t include many features that really make it sing such as monitoring and alerts, multiple user accounts and support beyond the Splunk website. Cost depends primarily on the amount of data you want to suck into Splunk and have it watch. It’s not cheap but for an enterprise needing to meet certain service level requirements it beats browsing through multiple servers trying to find the right log with the right information.

FYI, putting together this 1,000-word article probably took me 10 times longer than performing the Splunk install itself and beginning to learn it. It’s really well-done and easy to use. Splunk makes getting started simple.

Connect Casper to Active Directory

Wednesday, November 14th, 2012

Integrating any system into Active Directory can seem like a daunting task, especially for someone who's not an AD administrator or doesn't even have access to the directory service. JAMF Software has supported connecting Casper to AD for several versions of its product and has refined the connection process to be simple enough for someone with little or no AD experience to complete.

Connecting Casper to AD allows it to take advantage of existing user and group accounts, eliminating the tedium of creating them manually, and the user himself has one less password to remember. When his password changes the new password works immediately in Casper. Likewise, when a user’s account expires or is disabled then access to Casper ceases.

Gather the following information for the connection process:

  • Service account. This should be an AD account dedicated for Casper to use to authenticate to AD. It should be set not to expire and not to require changing at first login. This requires both the account name and its AD password.
  • The name of an AD Domain Controller (same as a Windows Global Catalog server, which assumes the role of an LDAP server).
  • The name of the organization’s NetBIOS domain.
  • The login names for any two user accounts in AD. Passwords aren’t required; these are used for testing lookups only.
  • The names for any two security groups in AD that include one or both test user accounts. These are used for testing lookups only. (Domain Users and Domain Admins are two common security groups.)
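
Before stepping through the JSS screens, the gathered details can be sanity-checked from any Mac with the built-in ldapsearch tool (a rough sketch; the Domain Controller, bind account and test user names below are placeholders):

# Bind as the service account and look up one of the test users
ldapsearch -LLL -H ldap://dc01.example.com \
  -D "svc-casper@example.com" -W \
  -b "dc=example,dc=com" \
  "(sAMAccountName=testuser1)" cn mail memberOf

If this returns the user's cn, mail and group memberships, the same values should work in the JSS.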

To connect Casper to AD do the following:

  1. Log in to the JAMF Software Server (JSS) for Casper using a local user account.
  2. Navigate to Settings tab –> LDAP Server Connections. Click on the Add LDAP Server Connection button. This begins a process that verifies the service account’s credentials and creates the user and group mappings between Casper and AD.
    New LDAP Server Connection button
  3. Select Active Directory as the LDAP server type and click the Continue button.
    LDAP server connection type
  4. For Host name enter the fully qualified domain name or IP address of the Domain Controller.
  5. For AD Domain enter the Windows NetBIOS domain name. Click the Continue button.
    Domain information
  6. Enter the name of the service account and its password that the JSS will use to authenticate and connect to AD. Click the Continue button.
    Service account
  7. If the Enter Test Accounts page appears then AD has accepted the service account’s credentials. Now, enter the account names of two AD users. These can be your own and a co-worker’s account. For the best results pick two users who are in very different parts of the organization. Click the Continue button.
    Test accounts
  8. The Verify Attribute Mappings page should display information about each user the JSS found in AD. Mappings are the pairing of attributes and values for an object in AD. In this case, verify the Username shown is actually the user's short account name, verify Real Name shows the user's first and last name, verify that Email displays the correct email address for each user, etc.
    New mappings
  9. Some fields may not be populated. That's typically because the AD information is incomplete. If one user has information for a field but the other doesn't, verify that the information shown is correct or at least in the correct format.
  10. Casper may have wrongly mapped an attribute. For example, the telephoneNumber attribute may actually be phone in AD. To change the mapping click the edit button (ellipsis) to the right of the mapping and review the LDAP Attributes to see if another one is more suitable. Changing the attribute immediately changes the values for each user to help quickly identify better choices. Click the Return to Verify Mappings button when done.
    Edit mappings
  11. The new mappings appear in the list. Click the Continue button.
    New mappings
  12. Enter the two domain security groups and verify whether the test users are members. They may be members of one, both or none. Click the Continue button.
    Verify groups
  13. Finally, click the Save button to save the settings.
    Complete

Now, when adding new users to Casper, the JSS can pull the user information from AD.

  1. Navigate to Settings tab –> Accounts. Click on the Add Account from LDAP button.
    New Account button
  2. Enter the name of an AD user who should have privileges in the JSS. Click the Next button.
    Add User from LDAP Account
  3. If the lookup returns more than one result then locate the correct result and click the Add… link to the right.
    Result
  4. Grant the necessary privileges to the JSS and click the Save button.

At this point the newly added user should be able to log in to the JSS using his AD credentials. The JSS will also use the AD information for email alerts and other functions.

If the LDAP connection is ever deleted then existing LDAP user accounts will fail to work, even if the LDAP connection is recreated. Re-enabling users to log in will require adding their accounts and privileges again under the new LDAP connection.

Install A Profile On Apple TV Using Apple Configurator

Tuesday, November 13th, 2012

With a recent software update administrators gained the ability to apply network management profiles to 2nd and 3rd generation Apple TV devices. Apple TV supports applying profiles via HTTP download or using the most recent update to Apple Configurator, which had previously supported iOS devices only.

Applying a profile to an Apple TV using Apple Configurator requires a Mac running the updated version of Apple Configurator and a micro USB cable to connect the Apple TV to the Mac.

Create a Wifi configuration

Most options in Apple Configurator apply only to iOS devices but wifi settings will apply to Apple TV as well. Creating a profile to configure wireless network settings can be useful for deploying multiple Apple TVs and preventing network changes.

  1. Launch Apple Configurator and make sure the Prepare pane in the toolbar is selected.
  2. Enter a meaningful name for this configuration.
    Start Apple Configurator
  3. Click the ” + ” button (plus) at the bottom of the window and select Create New Profile… from the drop down menu.
  4. Under the General settings payload of the new profile complete the mandatory fields of the payload.
    Profile General Settings
  5. Be sure to scroll to the bottom of the General settings payload window to include additional security information for allowing removal of the profile.
    General security settings
  6. Select Wi-Fi settings payload in the left column and populate the settings to join the Apple TV to a wireless network.
    Wi-Fi settings
  7. Click the Save button and the new profile will appear in the Profiles list for the configuration.
    Profiles list
  8. At this point connect the Mac to the Apple TV using the USB cable. If necessary, unplug the video cable leading to the television.
  9. Click the Prepare button at the bottom of the Apple Configurator window and click the Apply button when prompted.
    Apply
  10. The name of the Apple TV will appear in the right column and details about the progress will flash below the name. Applying the configuration should take just a few seconds. When the progress indicator to the right shows complete, disconnect the USB cable from the Apple TV.
    Applying settings

Verify the profile

The profile name appears on the Apple TV under Settings.

  1. Navigate to the main menu of the Apple TV.
    Apple TV main screen
  2. Select Settings –> General.
    Apple TV settings
  3. Scroll to the bottom and select Profiles. Select the Apple TV – Wifi profile that was uploaded via Apple Configurator to view its details.
    Apple TV profile

If the profile allows removal then the Remove Profile button is available at the top of the profile information screen. Otherwise, it's dimmed. Apple Configurator can overwrite this profile with another one that allows removal.

Publishing the profile to a website

Apple Configurator can also create a .mobileconfig file to publish to a website for download directly to the Apple TV. To create the file return to the Profiles list on the Prepare pane and highlight one or more profiles. Click the Share icon at the bottom to save the file.
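
As a rough sketch of the hosting side (assuming OS X's built-in Apache with its default document root; the file name and folder are placeholders), the profile just needs to be reachable over HTTP and served with the configuration-profile MIME type:

# Copy the exported profile into the web server's document root
sudo mkdir -p /Library/WebServer/Documents/profiles
sudo cp "Apple TV - Wifi.mobileconfig" /Library/WebServer/Documents/profiles/

# Serve .mobileconfig files with the profile MIME type, then reload Apache
echo 'AddType application/x-apple-aspen-config .mobileconfig' | sudo tee /etc/apache2/other/mobileconfig.conf
sudo apachectl graceful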

Enable SNMP On Multiple Mac Workstations Using A Script

Monday, November 12th, 2012

SNMP can be a valuable tool for monitoring the health of unattended Mac workstations acting as a farm to process information for remote users. If the health of a farm member degrades because its hard drive gets full or a process gets stuck then SNMP can send traps to a Network Management Station to alert the administrator.

Before SNMP will return any useful information an administrator must configure the Mac using the snmpconf command. By default this command runs interactively and prompts him for basic information to create the /usr/share/snmp/snmpd.conf file. However, he can use this file to script the same configuration for other machines without interaction. The script can also run a simple launchd command afterward to start the snmp service.

Create the snmpd.conf file

Creating the snmpd.conf file is as simple as running a command in the Terminal and answering a few questions.

  1. Launch the Terminal application found in /Applications/Utilities.
  2. The Terminal defaults to the current user’s home folder. Verify this using the pwd command. This is where the snmpconf command will create the snmpd.conf file.
  3. Enter snmpconf in the Terminal and press return.
  4. This begins a series of simple questions. The first question is:

    The following installed configuration files were found:

    1: /etc/snmp/snmpd.conf

    Would you like me to read them in? Their content will be merged with the output files created by this session.

    Valid answer examples: "all", "none", "3", "1,2,5"

    Read in which (default = all):

    Press return to accept the default answer “all”.

  5. The next question is:

    I can create the following types of configuration files for you.
    Select the file type you wish to create:
    (you can create more than one as you run this program)

    1: snmpd.conf
    2: snmptrapd.conf
    3: snmp.conf

    Other options: quit

    Select File:

    Enter 1 to choose to create the snmpd.conf file.

  6. Next, choose 1 for Access Control Setup. This sets the community names for both read/write and read-only access. For monitoring purposes an administrator should configure read-only communities such as talkingmoose-read. Set a name for both the SNMPv3 read-only user and the SNMPv1/SNMPv2 read-only access community; these may be the same name.
  7. When the read-only communities are set, type finished to exit the access control setup and proceed to the rest of the sections.

Some questions cover more advanced SNMP settings, which some administrators may want to partially or fully customize. For basic SNMP functionality either accept the defaults or skip the questions. At minimum, though, complete the Access Control Setup and System Information Setup sections.

After answering the questions and returning to the top level section type quit to complete creating the snmpd.conf file. The snmpconf command places this file in the current working directory in Terminal.
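
For reference, the Access Control Setup and System Information Setup answers end up in snmpd.conf as plain directives, roughly like the following excerpt (the community name, contact and location are only examples):

# SNMPv1/SNMPv2c read-only community
rocommunity talkingmoose-read
# SNMPv3 read-only user
rouser talkingmoose-read
# System Information Setup answers
syslocation "Saint Paul"
syscontact "William Smith"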

Load snmpd.conf onto another Mac

Loading these settings on another machine requires the same snmpconf command but with some instructions to use the newly created file. Do the following:

  1. Copy the snmpd.conf file to the new machine.
  2. Run the following command on the new machine:
    sudo snmpconf -R /path/to/snmpd.conf -a -f -i snmpd.conf

This snmpconf command reads in the supplied snmpd.conf file (-R /path/to/snmpd.conf), runs without asking any questions (-a), overwrites anything already configured (-f) and installs the resulting snmpd.conf (-i) in the correct location, which is /usr/share/snmp/.

Start SNMP

After the settings are loaded and a newly created snmpd.conf file exists in /usr/share/snmp/, start the SNMP service:

sudo launchctl load -w /System/Library/LaunchDaemons/org.net-snmp.snmpd.plist

Test using snmpwalk

To verify the settings are applied correctly use the snmpwalk command to read SNMP data from the Mac using the read-only user or community name created when completing the Access Control Setup section earlier:

snmpwalk -v1 -c talkingmoose-read localhost

This should return a lengthy amount of information that begins with something like:

SNMPv2-MIB::sysDescr.0 = STRING: Darwin TMServer.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386
SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.255
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (751563) 2:05:15.63
SNMPv2-MIB::sysContact.0 = STRING: "William Smith"
SNMPv2-MIB::sysName.0 = STRING: TMServer.local
SNMPv2-MIB::sysLocation.0 = STRING: "Saint Paul"
SNMPv2-MIB::sysServices.0 = INTEGER: 12

Deployment

The most efficient deployment method for current and future Mac farm machines is an Apple Installer package. Add the snmpd.conf file as a resource file to the package and add a postflight script to load the file and start the SNMP service.
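
A minimal postflight script for such a package might look like the following (a sketch, assuming the package keeps snmpd.conf in its Resources folder next to the script; Installer runs postflight scripts as root):

#!/bin/sh

# Copy the prepared configuration into place
SCRIPT_DIR=$( dirname "$0" )
mkdir -p /usr/share/snmp
cp "$SCRIPT_DIR/snmpd.conf" /usr/share/snmp/snmpd.conf

# Load and enable the SNMP launch daemon so it starts now and at boot
launchctl load -w /System/Library/LaunchDaemons/org.net-snmp.snmpd.plist

exit 0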

Use Casper to collect Mac App Store IDs

Sunday, November 11th, 2012

An administrator may need to allow his users access to the Mac App Store but might prefer they download software only under sanctioned Apple IDs. Using an extension attribute in Casper, he can compile a list of all Apple IDs used on every Mac.

When a user enters an Apple ID to access the Mac App Store, it gets stored in his Home folder in:

~/Library/Preferences/com.apple.storeagent.plist

So long as he doesn’t sign out of the Mac App Store (he’ll probably just quit when he’s done) the ID remains in the file. Multiple users on a machine may use multiple Apple IDs because the credentials are stored for each user rather than once for the computer.
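
To spot-check a single account, the same key can be read directly in Terminal while logged in as that user:

defaults read ~/Library/Preferences/com.apple.storeagent AppleID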

The following script gathers a list of unique Apple IDs from all user accounts and then returns the list to the Casper JSS as an extension attribute.

#!/bin/sh

# Get a list of existing .plist files
USERFOLDERS=$( find /Users/*/Library/Preferences \
  -name com.apple.storeagent.plist )

# Make a list of Apple IDs found in the .plists
for APLIST in $USERFOLDERS
do
  IDLIST=$( echo "$IDLIST\n$( defaults read "$APLIST" AppleID )" )
done

# Remove blank lines, then sort so duplicate Apple IDs collapse
IDLIST=$( echo "$IDLIST" | sed '/^$/d' | sort | uniq )

# Return the result
echo "<result>$IDLIST</result>"

To add this script as an extension attribute in the JSS:

  1. Navigate to Settings tab –> Inventory Options –> Inventory Collection Preferences –> Extension Attributes tab –> Add Extension Attribute.
  2. Name this Extension Attribute “Mac App Store Apple IDs”.
  3. Set the Data Type to String.
  4. Set the Input Type to Script and paste in the script.
  5. Click the OK button and then Save the Extension Attribute.

TRIPPing On Lync

Saturday, November 10th, 2012

Microsoft Lync can require as much or as little bandwidth as you can give it, depending on what you are using Lync for. At its most basic, Lync is a tool for instant messaging. At its most complicated, Lync can plug in to Microsoft Outlook, schedule a video conference with 10 of your coworkers (without posting the fact that you had said video conference to your Google+ timeline, btw), share your screen so you can step your parents through setting up Windows RDP to fix a problem on their computer, and pass PBX-style traffic to provide voice services; all the while still letting you instant message your wife that you'll be late coming home because you are stuck on the video conference, a screen share and a phone call, all also being managed with Lync.

Because you can do so much with Lync, as you start to do some of the more bandwidth-intensive tasks you might notice performance issues, especially if you have an office of people running Office 365 and Lync Online to communicate with customers and one another. There are two types of performance to be concerned with for any video or VoIP-based teleconference solution: the first is latency and the second is speed. TRIPP stands for the Transport Reliability IP Probe. TRIPP can be used to test your connection and report what kind of performance you can expect to have.

TRIPP is easy to use. Open a browser to http://trippdb3.online.lync.com and click on Start Test.

When prompted, provide a Session ID (if you don’t have one, simply enter 0 and hit the Return key).

The test then runs. The first step is to look at latency. Wait for the rest to complete.

When finished, you’ll see a summary page that outlines the kind of performance you can expect from Lync.

If you have latency issues then it's often due to too many hops for various sessions. This can be difficult to troubleshoot as it's often up to an ISP to resolve routing table issues or provide better services. Bandwidth problems can be addressed by reducing the number of services on your network or increasing your throughput. You can also assign a higher priority to this type of traffic. Consistency of service often comes down to QoS.

So far, I’ve managed to run TRIPP on Windows, Linux and as you can see from these screens, OS X.

Capture Network Device Information Using Casper

Friday, November 9th, 2012

JAMF Software’s Casper suite is designed to capture and store information about Mac and Windows clients. However, it can also store information about network resources such as printers and routers by using a server or workstation as a pseudo SNMP Network Management Station. The following example illustrates how to use a Casper Extension Attribute to store the uptime of an Airport Extreme base station in a managed client’s record in the JAMF Software Server (JSS).

Uptime is the length of time a device has been active since its last reboot. An Airport Extreme base station should have a relatively long uptime (weeks or months) compared to a workstation (days). If the uptime of a base station is always just a few days then that may indicate hardware failure or power problems.

First, using the snmpwalk command, a server or workstation can poll the public community of any Airport base station at its IP address:

snmpwalk -v1 -c public -M /usr/share/snmp/mibs -m AIRPORT-BASESTATION-3-MIB 192.168.5.1

This command will return a lot of information. Applying grep to return just the sysUpTime line and cut to trim away everything but its value, the final result looks something like:

$ snmpwalk -v1 -c public -M /usr/share/snmp/mibs -m AIRPORT-BASESTATION-3-MIB 192.168.5.1 | grep sysUpTime | cut -d \) -f 2

286 days, 10:38:38.70

An extension attribute is simply a shell script that runs a command to gather information and then returns that information to be stored in the JSS. Every managed computer in the JSS runs these scripts during routine inventories, but only one computer should be dedicated to polling the base station and storing the uptime information.

During a routine inventory this script verifies whether the name of the computer in the script matches the name of the current computer. If they match then it runs the snmpwalk command to poll the base station for its uptime.
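
A minimal version of that logic might look like the following sketch (the computer name and base station IP address are placeholders; edit them as described in steps 5 and 6 below):

#!/bin/sh

# Only the designated polling computer queries the base station;
# every other Mac returns an empty result.
POLLINGCOMPUTER="PollingMac"
BASESTATION="192.168.5.1"

UPTIME=""
if [ "$( scutil --get ComputerName )" = "$POLLINGCOMPUTER" ]; then
  UPTIME=$( snmpwalk -v1 -c public -M /usr/share/snmp/mibs \
    -m AIRPORT-BASESTATION-3-MIB "$BASESTATION" | \
    grep sysUpTime | cut -d \) -f 2 )
fi

echo "<result>$UPTIME</result>"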

To add this as an extension attribute in the JSS:

  1. Navigate to Settings tab –> Inventory Options –> Inventory Collection Preferences –> Extension Attributes tab –> Add Extension Attribute.
  2. Name this Extension Attribute “Airport Uptime”.
  3. Set the Data Type to String.
  4. Set the Input Type to Script and paste in the script.
  5. Edit the script by entering the name of the computer that should poll the Airport base station.
  6. Enter the IP address of the Airport base station in the script as well.
  7. Click the OK button and then Save the Extension Attribute.

Run the Recon application on the polling computer to update its inventory in the JSS. When done the EA should return the uptime for the base station to the computer’s record.
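
If the Casper agent's jamf binary is installed on that computer, the same inventory update can also be kicked off from Terminal:

sudo jamf recon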

To view the information search for the computer in the JSS and click its Details link. Click the Extension Attributes section on the next page and locate the “Airport Uptime” Extension Attribute on the right.

Update: John C. Welch has written a companion piece to this post outlining some better and more efficient ways to accomplish the SNMP polling: A companion post to a 318 post. Thanks for the writeup, John!

Recover Data From Crashed SharePoint Server

Thursday, November 1st, 2012

If you ever find yourself in the unfortunate situation of having to recover a corrupted SharePoint server fear not!  What used to be a manual and very tedious process is now quite manageable with a little bit of code and basic knowledge of SharePoint server.

The reason that this process can be so tricky is because SharePoint stores all its files in a SQL database, and while that provides much more functionality than a straight file server, it also increases the complexity of backing up and recovering files located within it.

Luckily, there is a small script that can be run on the server that exports all data within a SharePoint database.  The following are the steps you can use to recover your documents from a crashed SharePoint server.

 

Here are the basic steps to getting your docs.

  1. Backup your database(s)
  2. Create a temp database in your default SQL instance
  3. Download and customize this code
  4. Compile the code
  5. Run the program

 

Step 1:  

The first thing you'll need to do is open up your SQL Manager and create a backup of the DB you want to save.  Normally you need to connect to \\.\pipe\MSSQL$Microsoft##ssee\sql\query and then you'll see the correct SharePoint databases.  In this example, the database is called STS_SERVER_1 but yours will likely be different.  Right-click this database and back it up to a single file.  Splitting the backup across two files can cause problems.

 

Step 2:  

Close and reopen the SQL Manager, but this time connect to the default instance.  In my case it is "Server\SQLEXPRESS".  Once connected, navigate to Databases, right-click and then hit Restore.  I named my database "TEMP_DB" but feel free to name it whatever you like.  Select the backup file you just created and start the restore.

 

Step 3:  

Download this code to your desktop and save it as spdbex.cs.  You'll need to change two values inside the code: the server name and the database name in the connection string.  Look for this part near the top of the code.

string DBConnString = 
"Server=ServerName\\SQLEXPRESS;" +
"Database=TEMP_DB;Trusted_Connection=True;";

Yours may look like this:

string DBConnString = 
"Server=YourServer\\SQLEXPRESS;" +
"Database=RESTORED_DB;Trusted_Connection=True;";

Step 4:

To compile the code, run this command in a command prompt.  It's assumed that spdbex.cs is in the current folder.

%WINDIR%\Microsoft.NET\Framework\v2.0.50727\csc /target:exe /out:spdbex.exe spdbex.cs

Step 5:

Assuming everything went OK, you should be able to just type in the program name and you'll be good to go.  This will put all files that were stored in your SharePoint database into the current folder and its subfolders.

Note: Metadata and file versions are not preserved during this restore.