Archive for the ‘Network Architecture’ Category

Test Access to Microsoft Resources

Friday, January 17th, 2014

Microsoft provides a tool, the Remote Connectivity Analyzer, to test access to their servers and cloud services. Using it you can test connections to Lync, review message headers, verify Autodiscover records are working properly, test outbound access to POP/SMTP/IMAP, verify mail flow from an IP, exercise single sign-on and, of course, test ActiveSync.


Overall, the Remote Connectivity Analyzer is a great tool for any Microsoft tech and a valuable weapon in the Mac Admin’s batbelt as well!

Add OS X Network Settings Remotely (Without Breaking Stuff)

Monday, September 23rd, 2013

So you’re going to send a computer off to a colocation facility, where it will use a static IP address and DNS servers, and it needs that information configured before it arrives. You access this computer remotely while preparing it for its trip, so you don’t want to knock it off the network while applying this info; you want to verify it’s good to go and then shut it down.

It’s the type of thing, like setting up email accounts programmatically, that somebody should have figured out and shared with the community at some point. But even if my google-fu is weak, I guess I can deal with having tomatoes thrown at me, so here’s a rough mock-up:


#!/bin/bash
# purpose: add a network location with manual IP info without switching to it.
#   This script lets you fill in settings and apply them on en0 (assuming that's active),
#   but only interrupts current connectivity long enough to apply the settings;
#   it then immediately switches back. (It also assumes a 'Static' location doesn't already exist...)
#   Use at your own risk! No warranty granted or implied! Tell us we're doing it rong on twitter!
# author: Allister Banks, 318 Inc.

# set -x

declare -xr networksetup="/usr/sbin/networksetup"

# Fill these in before running
declare -xr MYIP=""
declare -xr MYMASK=""
declare -xr MYROUTER=""
declare -xr DNSSERVERS=""

# Grab the hardware port name tied to en0 (awk prints the line preceding the
# en0 match, e.g. "Hardware Port: Wi-Fi"; cut takes its third field)
declare -x PORTANDSERVICE=$("$networksetup" -listallhardwareports | awk '/en0/{print x};{x=$0}' | cut -d ' ' -f 3)

"$networksetup" -createlocation "Static" populate
"$networksetup" -switchtolocation "Static"
"$networksetup" -setmanual "$PORTANDSERVICE" "$MYIP" "$MYMASK" "$MYROUTER"
"$networksetup" -setdnsservers "$PORTANDSERVICE" "$DNSSERVERS"
"$networksetup" -switchtolocation "Automatic"

exit 0

Caveats: The script assumes the interface you want active in the future is en0, just for ease of testing before deployment. It also assumes there isn’t already a network location called ‘Static’, and that you do want all interfaces populated upon creation (because I couldn’t think of a particularly good reason why not).
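If you want to sanity-check the PORTANDSERVICE pipeline before running the script on a live box, you can run the same awk/cut stages over sample `networksetup -listallhardwareports` output (the hardware details below are made up):

```shell
# Simulated `networksetup -listallhardwareports` output (values are made up)
output='Hardware Port: Wi-Fi
Device: en0
Ethernet Address: aa:bb:cc:dd:ee:ff'

# Same stages as the script: awk prints the line *before* the en0 match
# ("Hardware Port: Wi-Fi"), and cut takes its third space-separated field.
portandservice=$(printf '%s\n' "$output" | awk '/en0/{print x};{x=$0}' | cut -d ' ' -f 3)
echo "$portandservice"
```

Note that a multi-word port name such as “Thunderbolt Ethernet” would get truncated by the `cut -f 3`, which is one more reason to test before deployment.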

If you find the need, give it a try and tweet at us with your questions/comments!

Increase Shared Memory for Postgres

Friday, August 16th, 2013

The default installation of Postgres in OS X Server can be pretty useful. You may find that as your databases grow you need to increase the amount of shared memory those databases can access. This is a kernel setting, so it requires sysctl to get just right. You can change it manually just to get your heavy lifting done, and oftentimes you won’t need the settings to persist across a restart. Before doing anything I like to grab a snapshot of all my kernel MIBs:

sysctl -a > ~/Desktop/kernmibs

I like increasing these incrementally, so to bring the maximum shared memory segment size up to 16 MB and increase some of the other settings proportionally, you might do something like this:

sysctl -w kern.sysv.shmmax=16777216
sysctl -w kern.sysv.shmmni=256
sysctl -w kern.sysv.shmseg=64
sysctl -w kern.sysv.shmall=393216

To change back, just restart (or use sysctl -w to load them back in). If you need more for things other than loading and converting databases or patching postgres, then you can bring them up even higher (I like to increment in multiples):

sysctl -w kern.sysv.shmmax=268435456
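Restoring from the snapshot can also be scripted rather than done by hand. A minimal sketch, parsing sample lines in the `name: value` format that `sysctl -a` writes (the values below are made up); the generated commands would be run as root to actually apply them:

```shell
# A couple of lines as they appear in the saved snapshot (sample values)
sample='kern.sysv.shmmax: 4194304
kern.sysv.shmall: 1024'

# Rewrite each "name: value" line as a "sysctl -w name=value" command
restore=$(printf '%s\n' "$sample" | sed -E 's/^([^:]+): (.*)$/sysctl -w \1=\2/')
printf '%s\n' "$restore"
```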

The ‘Hidden’ Summary Tab

Friday, August 9th, 2013

Do you want AirPort Utility to look how it used to? Howsabout something akin to the Logs interface you could use to see connected clients? Well, mashing the Option key has paid off again, as alerted to me on Twitter via an @dmnelson retweet.

This doesn’t really get you more in the way of features, but when change is scary and goes jingly-jangly in our pockets, seeing a familiar modal dialog makes us feel at ease.


Troubleshoot network port connectivity in SonicWall devices

Wednesday, June 12th, 2013

Few things are as aggravating to a technician or customer as two vendors blaming each other for a problem.

I recently ran into this when I was unable to establish communication from the Internet to a client’s internal server through a SonicWall TZ 100 firewall device. The customer’s server would accept connections internally but not externally. The firewall had the necessary ports open to allow communication but the application just couldn’t reach the server.

Other applications worked just fine—just this one was failing… somewhere. Two of my co-workers verified my settings and couldn’t find any problems. The problem lay either with the ISP blocking the port or a malfunction with the SonicWall.

When I reported the problem to the ISP the technician quickly said “we don’t block any ports.” He checked a few items and reaffirmed nothing on their end was causing our problem. OK, so that left the SonicWall device. It was under maintenance and I called for technical support.

The SonicWall technician remote controlled my computer to view my setup. He, too, found nothing wrong and said the problem was the ISP blocking the port. I replied that the ISP said nothing was blocked.

The technician proceeded to prove the SonicWall was working correctly, which made my day!

Packet Monitor

SonicWall devices have a Packet Monitor feature that works independently of any configured settings. It can capture incoming traffic as it enters the firewall before routing it to the local network. It can also filter for traffic on specific ports making the results easier to examine.

Assume the port I need to verify is 445, which is Windows file sharing (CIFS/SMB). This is commonly blocked for security reasons. Assume its Mac counterpart, port 548, is working correctly and a completely random port such as port 54321 is not configured at all. This random port will be my “control” in my testing.

  1. Log in to the SonicWall device and select System –> Packet Monitor in the lefthand navigation pane.

    Packet Monitor menu
  2. In the right pane click the Configure button.
    Packet Monitor Configure
  3. Under the Settings tab of the Packet Monitor Configuration window enable all options under the Exclude Filter section. This eliminates any management traffic from contaminating the results.
    Configure Settings
  4. Under the Monitor Filter tab enter the following information and enable the following items:
    • Interface Name(s): X1 — This is the Internet facing port of the SonicWall device.
    • Ether Type(s): IP — By specifying IP we eliminate any ARP or PPPOE traffic.
    • IP Type(s): TCP — This eliminates any UDP, ICMP or other types of IP traffic.
    • Source Port(s): 548 — For now, we’ll test with a known working port.
    • Destination IP Address(es): The public IP address of the SonicWall device.
    • Enable Bidirectional Address and Port Matching: Enabled
    • Forwarded packets only: Enabled
    • Dropped packets only: Enabled
    Click the OK button to save the settings and close the window.
    Monitor filter
  5. Finally, click the Start Capture button. The window reflects that tracing is active.
    Start Capture

Now that the SonicWall device is monitoring Mac file sharing traffic, use telnet in the Terminal application to verify this type of traffic is actually reaching the destination.

  1. In the Terminal application enter:
    telnet 97.XXX.XXX.14 548
    Telnet 548
  2. In the Captured Packets window below, an entry appears in blue and indicates the packet was forwarded to the destination server inside the local network.
    Captured 548 packet

With connectivity verified on port 548, test next with port 54321. This port should not be open but the SonicWall should at least register the attempt.

  1. Revisit the Monitor Filter and change the Source Port from 548 to 54321. Click the OK button and then click the Clear button to erase the captured packets.
    Monitor filter 54321
  2. In the Terminal application enter:
    telnet 97.XXX.XXX.14 54321
    Telnet 54321

    Terminal should reflect it cannot connect on port 54321; the SonicWall doesn’t accept traffic on this port.

  3. However, the SonicWall packet filter will at least acknowledge the attempt and report the packet was dropped.
    Dropped 54321 packet

The SonicWall packet filter is clearly registering attempts for open ports and closed ports. So, what happens when the ISP is blocking a port?

  1. Revisit the Monitor Filter and change the Source Port from 54321 to 445 (the suspected ISP-blocked port). Click the OK button and click the Clear button to erase captured packets.
    Monitor filter 445
  2. In the Terminal application enter:
    telnet 97.XXX.XXX.14 445
    Telnet 445

    This time Terminal acts differently. It neither succeeds nor fails. It just keeps trying.

  3. The SonicWall shows nothing because it never receives the packet.
    No 445 packet received

This concludes the test and proves the SonicWall is functioning normally. Convincing the ISP it’s still blocking the port… that’s another story.
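As an aside, the same three outcomes telnet shows (connect, refuse, hang) can be checked from a script using bash’s built-in `/dev/tcp` redirection; the host and port below are placeholders for the firewall’s public IP and the port under test, not values from this setup:

```shell
# Placeholder target; substitute the firewall's public IP and the port to test
host=127.0.0.1
port=9

# bash opens a TCP connection via /dev/tcp; success means something answered
if bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
  status="open"
else
  status="closed or filtered"
fi
echo "$host:$port is $status"
```

A port blocked upstream typically hangs rather than failing fast, so wrapping the check in a timeout (e.g. `timeout 5 bash -c …`) is wise when testing across the Internet.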

Allister’s Talks From Penn State MacAdmins

Wednesday, June 5th, 2013

IPv6: Quick start for administrators

Sunday, May 26th, 2013

Networking support folks have been buzzing about IPv6 since it was first formally introduced in December 1998. This is the IP addressing system to augment the current IPv4 system in use since the 1970s. It promises a much bigger address space for the world’s increasing number of Internet-connected devices.

Addresses will go from looking like this (IPv4):

192.168.1.10

to looking something like this (IPv6):

2001:0db8:85a3:0000:0000:8a2e:0370:7334

We’re experimenting with IPv6 in our offices, so I thought I’d compile a short list of things administrators may find useful to know.


[More Splunk: Part 4] Narrow search results to create an alert

Wednesday, January 30th, 2013

This post continues [More Splunk: Part 3] Report on remote server activity.

Now that we have Splunk generating reports and turning raw data into useful information, let’s use that information to trigger something to happen automatically such as sending an email alert.

In the prior posts a Splunk Forwarder was gathering information using a shell script and sending the results to the Splunk Receiver. To find those results we used this search string:

host="TMI" source="/Applications/splunkforwarder/etc/apps/talkingmoose/bin/"

It returned data every 60 seconds that looked something like:

2012-11-20 14:34:45-08:00 MySQLCPU=23.2 ApacheCount=1

Using the timechart function of Splunk we extracted the MySQLCPU field to get its value 23.2 and put that into a graph for easier viewing.

Area graph

Returning to view that graph every few minutes, hours or days can get tedious if nothing really changes. Ideally, Splunk would watch the data and alert us when something is out of the ordinary. That’s where alerts are useful.

For example, the graph above shows the highest spike in activity to be around 45% and we can assume that a spike at 65% would be unusual. We want to know about that before processor usage gets out of control.

Configuring Splunk for email alerts

Before Splunk can send email alerts it needs basic email server settings for outgoing mail (SMTP). Click the Manager link in the upper right corner and then click System Settings. Click on Email alert settings. Enter public or private outgoing mail server settings for Splunk. If using a public mail server such as Gmail then include a user name and password to authenticate to the server and select the option for either SSL or TLS. Be sure to append port number 465 for SSL or 587 for TLS to the mail server name.

Splunk email server settings

In the same settings area Splunk includes some additional basic settings. Modify them as needed or just accept the defaults.

Splunk additional email server settings

Click the Save button when done.

Refining the search

Next, select Search from the App menu. Let’s refine the search to find only those results that may be out of the ordinary. Our first search found all results for the MySQLCPU field but now we want to limit its results to anything at 65% or higher. The where function is our new friend.

host="TMI" source="/Applications/splunkforwarder/etc/apps/talkingmoose/bin/" | where MySQLCPU >= 65

This takes the result from the Forwarder and pipes it into an operation that returns only values of the MySQLCPU field greater than or equal to 65. The search results, we hope, are empty. To verify the search is working correctly, temporarily change the value from 65 to something lower such as 30 or 40. The lower values should return multiple results.

On a side note, unrelated to our need here: if we wanted an alert for a range of values, an AND operator connecting two statements limits the results to values between the two:

host="TMI" source="/Applications/splunkforwarder/etc/apps/talkingmoose/bin/" | where MySQLCPU >= 55 AND MySQLCPU <= 65

Creating an alert

An alert will evaluate this search as frequently as Splunk receives new data, and if it spots any results at all it can do something automatically, such as send an email alert.

With the search results in view (or lack thereof), select Alert… from the Create drop down menu in the upper right corner. Name the search “MySQL CPU Usage Over 65%” or something that’s recognizable later. One drawback of Splunk is that it won’t allow renaming the search later; doing that requires editing more .conf files. Leave the Schedule at its default Trigger in real-time whenever a result matches. Click the Next button.

Schedule an alert

Enable Send email and enter one or more addresses to receive the alerts. Also, enable Throttling by selecting Suppress for results with the same field value and enter the MySQLCPU field name. Set the suppression time to five minutes, which is pretty aggressive. Remember, the script on the Forwarder server is sending new values every minute. Without throttling Splunk would send an alert every minute as well. This will allow an administrator to keep some sanity. Click the Next button.

Enable alert actions

Finally, select whether to keep the alert private or share it with other users on the Splunk system. This only applies to the Enterprise version of Splunk. Click the Finish button.

Share an alert

Splunk is now looking for new data to come from a Forwarder and as it receives that new data it’s going to evaluate it against the saved search. Any result other than no results found will trigger an email.

Note that alerts don’t need to just trigger emails. They can also run scripts. For example, an advanced Splunk search may look for multiple Java processes on a server running a Java-based application. If it found more than 20 spawned processes it could trigger a script to send a killall command to stop them before they consumed the server’s resources and then issue a start command to the application.
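A sketch of what such an alert-triggered script might look like; the process name, the threshold of 20, and the commented-out restart command are assumptions for illustration, not details from a real deployment:

```shell
# Count running Java processes ([j]ava keeps grep from matching itself)
count=$(ps ax | grep '[j]ava' | wc -l | tr -d ' ')

# Over the threshold: kill the runaways, then restart the app
if [ "$count" -gt 20 ]; then
  killall java
  # /path/to/app start    # placeholder restart command
fi
echo "java processes: $count"
```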

How to Configure basic High Availability (Hardware Failover) on a SonicWALL

Friday, November 30th, 2012

Configuring High Availability (Hardware Failover) on a SonicWALL requires the following:

1. Two SonicWALLs of the same model (TZ 200 and up).

2. Both SonicWALLs need to be registered at MySonicWALL (regular registration, and then one as HF Primary, one as HF Secondary).

3. The same firmware versions need to be on both SonicWALLs.

4. Static IP addresses are required for the WAN Virtual IP interface (you can’t use DHCP).

5. Three LAN IP addresses (one for Virtual IP, one for the management IP, and one for the Backup management IP).

6. A crossover cable (to connect the SonicWALLs to each other) on the last ethernet interfaces.

7. 1 hub or switch for the WAN port on each SonicWALL to connect to.

8. 1 hub or switch for the LAN port on each SonicWALL to connect to.


Caveats:

1. High Availability cannot be configured if built-in wireless is enabled.

2. On NSA 2400MX units, High Availability cannot be configured if PortShield is enabled.

3. Stateful HA is not supported for connections on which DPI-SSL is applied.

4. On TZ210 units the HA port/Interface must be UNASSIGNED before setting up HA (last available copper ethernet interfaces).



Configuring High Availability:

1. Register both SonicWALLs at MySonicWALL as a High Availability Pair BEFORE connecting them to each other:

• “Associating an Appliance at First Registration”:

• “Associating Pre-Registered Appliances”:

• “Associating a New Unit to a Pre-Registered Appliance”:

2. Login to Primary HF and configure the SonicWALL (firewall rules, VPN, etc).

3. Connect the SonicWALLs to each other on their last ethernet ports using a cross over cable.

4. Connect the WAN port on both SonicWALLs to a switch or hub using straight through (standard) ethernet cables, and then connect the switch to your upstream device (modem, router, ADTRAN, etc.)

5. Ensure the Primary HF can still communicate to the Internet.

6. Connect the LAN port on both SonicWALLs to a switch or hub using straight through (standard) ethernet cables, and then connect them to your main LAN switch (if you don’t have one, you should purchase one. This will be the switch that all your LAN nodes connect to.).

7. Go to High Availability > Settings.

8. Select the Enable High Availability checkbox.

9. Under SonicWALL Address Settings, type in the serial number for the Secondary HF (Backup SonicWALL). You can find the serial number on the back of the SonicWALL security appliance, or in the System > Status screen of the backup unit. The serial number for the Primary SonicWALL is automatically populated.

10. Click Accept to save these settings.


Configuring Advanced High Availability Settings

1. Click High Availability > Advanced.

2. Put a check mark for Enable Preempt Mode.

3. Put a check mark for Generate / Overwrite Backup Firmware and Settings when Upgrading Firmware.

4. Put a check mark for Enable Virtual MAC.

5. Leave the Heartbeat Interval at default (5000ms).

6. Leave the Probe Interval at default (no less than 5 seconds).

7. Leave Probe Count and Election Delay Time at default.

8. Ensure there’s a checkmark for Include Certificates/Keys.

9. Press Synchronize settings.


Configuring High Availability > Monitoring Setting

(Only do the following on the primary unit; the settings will be synchronized to the secondary unit.)

1. Login as the administrator on the Primary SonicWALL.

2. Click High Availability > Monitoring.

3. Click the Configure icon for an interface on the LAN (ex. X0).

4. To enable link detection between the designated HA interface on the Primary and Backup units, leave the Enable Physical Interface monitoring checkbox selected.

5. In the Primary IP Address field, enter the unique LAN management IP address.

6. In the Backup IP Address field, enter the unique LAN management IP address of the backup unit.

7. Select the Allow Management on Primary/Backup IP Address checkbox.

8. In the Logical Probe IP Address field, enter the IP address of a downstream device on the LAN network that should be monitored for connectivity (something that has an address that’s always turned on like a server or managed switch).

9. Click OK.

10. To configure monitoring on any of the other interfaces, repeat the above steps.

11. When finished with all High Availability configuration, click Accept. All changes will be synchronized to the idle HA device automatically.


Testing the Configuration

1. Allow some time for the configuration to sync (at least a few minutes). Power off the Primary SonicWALL. The Backup SonicWALL should quickly take over.

2. Test to ensure Internet access is OK.

3. Test to ensure LAN access is OK.

4. Log into the Backup SonicWALL using the unique LAN address you configured.

5. The management interface should now display “Logged Into: Backup SonicWALL Status: (green ball)”. If all licenses are not already synchronized with the Primary SonicWALL, go to System > Licenses and register this SonicWALL; this allows the SonicWALL licensing server to synchronize the licenses.

6. Power the Primary SonicWALL back on, wait a few minutes, then log back into the management interface. The management interface should again display “Logged Into: Primary SonicWALL Status: (green ball)”.

NOTE: Successful High Availability synchronization is not logged, only failures are logged.

[More Splunk: Part 3] Report on remote server activity

Wednesday, November 28th, 2012

This post continues [More Splunk: Part 2] Configure a simple Splunk Forwarder.

With data flowing from the Splunk Forwarders into the Splunk Receiver server, the last step toward getting meaningful information is to create a search for specific data and put it into a report.

Splunk searches range from simplistic strings such as “error” to complex phrases that resemble Excel formulas mixed with shell scripting. Extracting the data gathered from a remote server requires narrowing down the location of the data from host to source to field, then manipulating the field values to get meaning from them.

Creating a search

After logging in to the Splunk Receiver server, select Search from the App menu.

Choose Search

This presents a page with a seemingly simple search field at the top with three panels below called “Sources”, “Source Types” and “Hosts”. The window is actually a very helpful formula builder for creating complex searches. Locate the Hosts area. This lists both the local computer as well as all Splunk Forwarders.


Clicking any of the host names, in this case “TMI”, begins building the search formula. It automatically inserts a correctly formatted string into the Search field:


At the same time Splunk displays a table of data from that host and begins displaying a dynamic graph based on that data. Without any filtering or refining it’s displaying the count of records from log files it has gathered. Interesting but not very useful.

Host search

Now that the data shown is narrowed down to the server, let’s narrow it down to the data coming from the script running on the server. The script is considered the “source” of the data and the path to the script is the value:

host="TMI" source="/Applications/splunkforwarder/etc/apps/talkingmoose/bin/"

This search narrows Splunk’s results considerably. Note that Splunk highlights the host and source information in the textual data. Also note how the graph consistently shows 1 across its scope, indicating it’s reporting one record for each reported time. Again, not very useful.

Source search

What we really want are the values of the results displayed over time. This is handled by the “timechart” function in Splunk. The formula now pipes the data returned from the host and source into a function:

host="TMI" source="/Applications/splunkforwarder/etc/apps/talkingmoose/bin/" | timechart avg(MySQLCPU)

Remember that the script was written to denote “fields” called “MySQLCPU” and “ApacheCount”. Using the field name in the timechart function returns the values over time. Using “avg” returns the average of the values (really, just the average of the one value). The final result is a simple table of data, which is all that’s needed to create a report.
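For a feel of what `timechart avg(MySQLCPU)` computes, here is the same extraction and averaging done in the shell over two made-up records in the script’s output format:

```shell
# Two sample records in the script's output format (values are made up)
records='2012-11-20 14:34:45-08:00 MySQLCPU=23.2 ApacheCount=1
2012-11-20 14:35:45-08:00 MySQLCPU=26.8 ApacheCount=2'

# Pull the MySQLCPU value out of each record and average them
avg=$(printf '%s\n' "$records" | awk 'match($0, /MySQLCPU=[0-9.]+/) {
  sum += substr($0, RSTART + 9, RLENGTH - 9); n++
} END { printf "%.1f", sum / n }')
echo "$avg"   # 25.0
```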


Creating a report

Now, we can graph this table of data. From the Create menu select Report… Splunk creates a rough graph, which is useful but not very easy to read.

Initial graph

Using the formatting options above the graph, adjust these items:

  • Chart type: area
  • Chart title: MySQL CPU Usage

Area graph

To save this graph so that it’s easily accessible without having to recreate the search each time, let’s add it to a dashboard. A dashboard is a single Splunk page that can act as an overview for multiple related or unrelated processes or servers.

From the Create drop down menu select Dashboard panel… Name the new panel “MySQL CPU Usage” and click the Next button. If an appropriate dashboard already exists simply choose to add the panel to that existing dashboard. Otherwise, name the new dashboard itself “Servers Dashboard” and click the Next button. Click the Finish button when done.

To view the report panel without having to recreate the search each time, locate the Dashboards & Views menu and select the Servers Dashboard.

Select dashboard

A dashboard can hold any number of report graphs for one or multiple machines. Create a new search and then create a new report based on that search. When done save it to the dashboard. Drag and drop panels on the page to reorder them or put higher priority panels toward the top or left of the page.

[More Splunk: Part 2] Configure a simple Splunk Forwarder

Monday, November 26th, 2012

This post continues [More Splunk: Part 1] Monitor specific processes on remote servers.

So far, I have a simple shell script that will return two pieces of information I want fed into my Splunk indexer server:

  • MySQL CPU usage
  • Count of Apache web server processes

It’s going to return a result that looks something like:

2012-11-20 14:34:45-08:00 MySQLCPU=23.2 ApacheCount=1

Install the Forwarder

For each server I need a Splunk agent called a Forwarder installed. The Forwarder’s purpose is to send the data collected on the local server to a remote Splunk server for indexing and reporting. Splunk offers three types of Forwarders, but I want the one with the lightest weight and overhead: a Universal Forwarder. For my testing I downloaded the Mac OS X 10.7 installer and installed it onto OS X 10.8 without any noticeable issues.

At this point the Forwarder service hasn’t been started yet. I first want to add my script and a couple of configuration files. The configuration files are necessary because the Universal Forwarder has no web interface to facilitate point and click configuration.

Create and populate the app directory

First, I want to create a folder for my “app”. An app is a directory of scripts and configuration files. By creating my own app directory I can control the behavior of its contents, overriding preset server defaults if I choose.

mkdir /Applications/splunkforwarder/etc/apps/talkingmoose/

Inside my app folder I’ll create two more called bin and local:

mkdir /Applications/splunkforwarder/etc/apps/talkingmoose/bin
mkdir /Applications/splunkforwarder/etc/apps/talkingmoose/local

The bin folder is a Splunk security requirement. Any executable, such as a script, must reside in this folder. This is where I’ll place my script and make it executable using chmod +x.

The local folder will contain two plain text configuration (.conf) files:

  • inputs.conf
  • outputs.conf

Put simply, inputs.conf is the configuration file that controls executing the script and getting its data into the Splunk Forwarder. And outputs.conf is the configuration file that controls sending the data out to the indexing server or “Splunk Receiver”. These files can be very simple or very complex depending on the needs. I like simple.

Contents of inputs.conf

disabled = false
interval = 60.0

This .conf file tells the Splunk Forwarder where to find the script to execute and then executes it every 60 seconds.
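For reference, a complete inputs.conf of this shape includes a `[script://…]` stanza naming the script to run; the filename below is a placeholder, not the script’s real name:

```
[script:///Applications/splunkforwarder/etc/apps/talkingmoose/bin/myscript.sh]
disabled = false
interval = 60.0
```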

Contents of outputs.conf


This .conf file tells the Splunk Forwarder to send its collected script data to a specific IP address on port 9997 where the Splunk Receiver is listening.
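A minimal outputs.conf for that might look like the following; the group name and IP address here are placeholders:

```
[tcpout]
defaultGroup = splunkreceiver

[tcpout:splunkreceiver]
server = 192.168.1.5:9997
```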

Configure the Splunk Receiver to listen

All that’s left to do is configure the Splunk Receiver to listen for data coming in from Splunk Forwarders on port 9997 via its web interface and start the Splunk Forwarder’s service via its command line utility.

Enable receiving

On the Splunk Receiver server, the server accepting all the data for searching later, click the Manager link in the upper right corner and then click Forwarding and receiving. Click on Configure receiving and then click the New button to create a new listening port. Enter 9997 or another port number not commonly used. Click the Save button.

Enable forwarding

On each Splunk Forwarder the necessary files are already in place. The only task left is to start the Forwarder’s service.

sudo /Applications/splunkforwarder/bin/splunk start

If this is the first time running the start command, press the spacebar repeatedly to read through the license agreement, or press “q” to skip it, and then accept the agreement.

To test that the Forwarder is working run the list command:

sudo /Applications/splunkforwarder/bin/splunk list forward-server

If prompted for credentials use Splunk’s defaults:

Splunk username: admin
Password: changeme

It should return something that looks like this:

Active forwards:
Configured but inactive forwards:

Searching on the Splunk Receiver should also return results from the Forwarders. Search for host="<forwarderHostName>".

Now that remote server data is flowing into the Splunk indexer machine the last step is to search for it and turn it into meaningful reports.

[More Splunk: Part 3] Report on remote server activity

[More Splunk: Part 1] Monitor specific processes on remote servers

Thursday, November 22nd, 2012

I was given a simple Splunk project: Monitor MySQL CPU usage and Apache web server processes on multiple servers.

Splunk is an amazing product but it’s also a beast! While it may be just a tool in one administrator’s arsenal of gadgets, it could very well be another administrator’s full-time job. Installing the software is a breeze and getting interesting reports is child’s play. Getting the meaningful reports you want, on the other hand, requires skills in the realms of system administration, scripting, statistics and formula building (think Excel).

My first project with the software was to monitor two things on remote servers:

  • MySQL CPU usage
  • Count of Apache web server processes

It sounds simple but involves a few pieces:

  • Writing a script to get the data
  • Configuring servers as Splunk Forwarders
  • Forwarding the data to a central server
  • Creating a search to populate a meaningful chart

Create the script

This is the easy part but it requires some special formatting to get Splunk to recognize the data it returns.

First, Splunk parses most any log file based on a time stamp and it can recognize many different versions of timestamps. The data following the timestamp constitutes the rest of the row or record. When Splunk gets to a second timestamp it considers that information to be another record.

So, my script output needed a timestamp. I followed the RFC 3339 spec (one of many formats), which describes something that looks like this:

2012-11-20 14:10:14-08:00

That’s a simple calendar date followed by a time denoted by its offset from UTC. In this case the -08:00 denotes Pacific Standard Time, or PST.

Next, I needed to collect two pieces of data: MySQL CPU usage and the number of active Apache web server processes. I started with a couple of shell script ps commands.

MySQL CPU usage

ps aux | grep mysqld | grep -v grep | awk '{ print $3 }'

Count of Apache web processes

ps ax | grep httpd | grep -v grep | wc -l

While Splunk can recognize a standard timestamp on its own, it needs some metadata to describe the information these commands return. That means each piece of information needs a name or “field”. This creates a key/value pair Splunk can use when searching the information later.

In other words, the MySQL command above will return a number like “23.2”. Splunk needs a name for it like “MySQLCPU”. The key/value pair then needs to be in the form of:

MySQLCPU=23.2
This is the entire script to return the timestamp and two key/value pairs separated by tabs:


#!/bin/bash

# RFC-3339 date format, Pacific
TIMESTAMP=$( date "+%Y-%m-%d %T-08:00" )

# Get CPU usage of the mysqld process
CPUPERCENTAGE=$( ps aux | grep mysqld | grep -v grep | awk '{ print $3 }' )

# Get count of httpd processes
APACHECOUNTRAW=$( ps ax | grep httpd | grep -v grep | wc -l )
APACHECOUNT=$( echo $APACHECOUNTRAW | sed -e 's/^[ \t]*//' )

# Return the timestamp and both key/value pairs separated by tabs
echo -e "$TIMESTAMP\tMySQLCPU=$CPUPERCENTAGE\tApacheCount=$APACHECOUNT"


It will return a result that looks something like this:

2012-11-20 14:34:45-08:00 MySQLCPU=23.2 ApacheCount=1
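
Once indexed, those field names make the data searchable and chartable directly. For example, a search like the following (the 20% threshold is just an illustration) would chart average MySQL CPU over time:

```
MySQLCPU>20 | timechart avg(MySQLCPU)
```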

Save this script with a descriptive name. Each Splunk Forwarder server will run it to gather information at specified time intervals and send those results to the Splunk Indexer server. For that see:

[More Splunk: Part 2] Configure a simple Splunk Forwarder

Introducing Splunk: Funny name, serious logging

Thursday, November 15th, 2012

So, my boss says:

“Write an article called ‘Getting Started with Splunk.’”

I reply:

“What, you think I know all this stuff? This really would be a getting started article.”

But here it is and WOW is Splunk cool!

My only experience with Splunk up to a couple days ago was seeing a T-shirt with “Log is my copilot”. I knew it had something to do with gathering log files and making them easier to read and search. In about an hour I had gone to Splunk’s website to research the product, downloaded and installed it, and started viewing logs from my own system. The Splunk folks have made getting their product into their customer’s hands easy and getting started even easier.

What is Splunk?

Simply put, Splunk can gather just about any kind of data that goes into a log (system logs, website metrics, etc.) into one place and make viewing that data easy. It’s accessed via web browser so it’s accessible on any computer or mobile device such as an iPad.

What do I need to run Splunk?

Practically any common operating system today can run Splunk: Mac OS X, Linux, Windows, FreeBSD and more.

How much does Splunk cost?

Don’t worry about that right now. Download and install the free version. It takes minutes to install and is a no-brainer. Let’s get started.

Getting Splunk

IT managers and directors may be interested in watching the introductory and business case videos with the corporate speak (“operational intelligence” anyone?) and company endorsements. Techs will be interested in getting started. Right on their home page is a big green Free Download button. Go there, click it and locate the downloader for your OS of choice. I downloaded the Mac OS X 10.7 installer to test (and installed it on OS X 10.8 without any issues).

Splunk home

This does require a sign-up to create an account. It takes less than a minute to complete. After submitting the information the 100 MB download begins right away.

While waiting for the download…

When the download is on its way the Splunk folks kindly redirect to a page with some short videos to watch while waiting. Watch this first one called Getting data into Splunk. It’s only a few minutes and this is the first thing to do after getting into Splunk.

Installing and starting Splunk

The download arrives as a double-clickable Apple Installer package. Double-click and install it. Toward the end it opens a simple TextEdit window with instructions for how to start, stop and access the newly installed Splunk site.

Install done

Files are installed in /Applications/splunk and resemble a UNIX file system.

Splunk application folder

Open the Terminal application found in /Applications/Utilities and run the command /Applications/splunk/bin/splunk start. If this is the first time running Splunk it prompts to accept its license agreement. Tap the spacebar to scroll through and read the agreement or type “q” to quit and agree to the license.
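
The same splunk binary handles the rest of the basic lifecycle; these subcommands are part of the standard Splunk CLI:

```
/Applications/splunk/bin/splunk start
/Applications/splunk/bin/splunk stop
/Applications/splunk/bin/splunk restart
/Applications/splunk/bin/splunk status
```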


Accepting the agreement continues to start Splunk where it displays some brief setup messages.

Starting Splunk

The setup then provides the local HTTP address for the newly installed Splunk site. Open this in a web browser to get to the login screen. The first login requires that the administrator account password be reset.

Splunk login

Following along with the Getting data into Splunk video, Splunk will need some information. Mac OS X stores its own log files. Let’s point to those.

Click the Add Data link to begin.

New Splunk home

Since Mac OS X’s log files are local to the machine, click A file or directory of files.

Add files

Click Next to specify local files.

Add local logs

This opens a window that exposes not only Mac OS X’s visible folders but its invisible folders as well. Browse to /var/log/system.log and click the Select button.

Browse logs folder

For now, opt to skip previewing the log file and click Continue.

Path to system.log

Now, let’s opt to monitor not only the system.log file but the entire /var/log folder containing dozens of other log files as well. Note that Splunk can watch rotated and zipped log files too. Click Save to finish adding logs.

Add /var/log folder

Let’s start searching!

Success, start searching

The Search window initially displays a list of all logs Splunk is monitoring. To narrow the search change the time filter drop down menu to Last 60 minutes. This will make the results a little easier to see on a system that’s only been running a short while.

Last 24 hours

Now, search for install*. Splunk will only search for the word “install” without providing the asterisk as a wildcard character. Splunk supports not only wildcard searches but booleans, parentheses, quotes, etc. It will return every instance recorded in the logs that matches the search criteria. It also creates an interactive bar chart along the top of the page to indicate the number of occurrences found for the search at particular times.

Search for install

To further refine the search, Option+click most any word in the log entries below and Splunk will automatically add the necessary syntax to remove an item. In this case the install* search returned entries from installer and installd. Option+clicking installd changed the search criteria to install* NOT installd.

Modified search

Now what?

Continue exploring the videos to understand Splunk’s possibilities and take advantage of its Splunk Tutorial, which is available online as well as in PDF format for offline viewing. They do a great job leading users through setup and creating reports.

Still asking about price? Good.

The free version remains free but doesn’t include many of the features that really make it sing, such as monitoring and alerts, multiple user accounts and support beyond the Splunk website. Cost depends primarily on the amount of data you want to suck into Splunk and have it watch. It’s not cheap, but for an enterprise needing to meet certain service level requirements it beats browsing through multiple servers trying to find the right log with the right information.

FYI, putting together this 1,000-word article probably took me 10 times longer than performing the Splunk install itself and beginning to learn it. It’s really well-done and easy to use. Splunk makes getting started simple.

Capture Network Device Information Using Casper

Friday, November 9th, 2012

JAMF Software’s Casper suite is designed to capture and store information about Mac and Windows clients. However, it can also store information about network resources such as printers and routers by using a server or workstation as a pseudo SNMP Network Management Station. The following example illustrates how to use a Casper Extension Attribute to store the uptime of an Airport Extreme base station in a managed client’s record in the JAMF Software Server (JSS).

Uptime is the length of time a device has been active since its last reboot. An Airport Extreme base station should have a relatively long uptime (weeks or months) compared to a workstation (days). If the uptime of a base station is always just a few days then that may indicate hardware failure or power problems.

First, using the snmpwalk command, a server or workstation can poll the public community of any Airport base station at its IP address:

snmpwalk -v1 -c public -M /usr/share/snmp/mibs

This command will return a lot of information. Applying grep to return just the sysUpTime information and cut to trim away everything but the value of sysUpTime, the final result looks something like:

$ snmpwalk -v1 -c public -M /usr/share/snmp/mibs
-m AIRPORT-BASESTATION-3-MIB | grep sysUpTime | cut -d \) -f 2

286 days, 10:38:38.70

An extension attribute is simply a shell script that runs a command to gather information and then returns that information to be stored in the JSS. Every managed computer in the JSS runs these scripts during routine inventories. But only one should be dedicated to polling the base station and storing the uptime information.

During a routine inventory this script verifies whether the name of the computer in the script matches the name of the current computer. If they match then it runs the snmpwalk command to poll the base station for its uptime.
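
The post doesn’t reproduce the script itself, but a minimal sketch of that logic might look like the following. The designated computer name, the base station address and the <result> wrapper are placeholder assumptions here, not values from the post:

```shell
#!/bin/bash
# Hypothetical sketch of the "Airport Uptime" extension attribute logic.
# The polling computer name and base station IP are placeholder assumptions.

poll_uptime() {
    # $1 = base station IP address; same snmpwalk pipeline as above
    snmpwalk -v1 -c public -M /usr/share/snmp/mibs \
        -m AIRPORT-BASESTATION-3-MIB "$1" | grep sysUpTime | cut -d \) -f 2
}

report_uptime() {
    # $1 = this computer's name, $2 = designated polling computer, $3 = base station IP
    if [ "$1" = "$2" ]; then
        echo "<result>$( poll_uptime "$3" )</result>"
    else
        echo "<result>Not the polling computer</result>"
    fi
}

# On a managed Mac this would be driven by something like:
# report_uptime "$( scutil --get ComputerName )" "polling-mac" "10.0.0.1"
```

Only the one designated computer ever runs the snmpwalk; every other client simply reports that it is not the poller.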

To add this as an extension attribute in the JSS:

  1. Navigate to Settings tab –> Inventory Options –> Inventory Collection Preferences –> Extension Attributes tab –> Add Extension Attribute.
  2. Name this Extension Attribute “Airport Uptime”.
  3. Set the Data Type to String.
  4. Set the Input Type to Script and paste in the script.
  5. Edit the script by entering the name of the computer that should poll the Airport base station.
  6. Enter the IP address of the Airport base station in the script as well.
  7. Click the OK button and then Save the Extension Attribute.

Run the Recon application on the polling computer to update its inventory in the JSS. When done the EA should return the uptime for the base station to the computer’s record.

To view the information search for the computer in the JSS and click its Details link. Click the Extension Attributes section on the next page and locate the “Airport Uptime” Extension Attribute on the right.

Update: John C. Welch has written a companion piece to this post outlining some better and more efficient ways to accomplish the SNMP polling: A companion post to a 318 post. Thanks for the writeup, John!

Emailing A File To

Wednesday, April 18th, 2012

The service has a number of features that can be used for workflow automation. One such feature is the ability to have an email address that is tied to a folder. Most services support the ability for that email address to be used to inform users of updates to directories. However, a somewhat unique feature here is the ability to assign an email address to the folder so that any time you send mail to that address, the attached file is added to the folder. For example, if I scan a contract and email it to a vendor, I can also bcc a folder called contracts and the contract will appear in the folder.

To set up an email address for a folder, open the web app and click on a folder that you’d like to have an email address assigned to. Then click on the disclosure triangle on the right side of the screen for Folder Options and click on Email Options.

At the Email Options tab of the Folder Properties overlay screen, check the box for Allow uploads to this folder via email. Here, you can also use the Only allow uploads from collaborators in this folder checkbox to restrict who is able to email files to the folder.

While emailing files to get them into a folder isn’t for everyone, it is a great new take on a dropbox type of folder. You can also then sync these folders with folders in Mac OS X and Windows. This type of functionality is also a great way to do student submissions of coursework, file-based workflows for iOS and various automated workflows based on emails.

Secure Site-to-Site VPN tunnel using the ASA

Sunday, April 8th, 2012

Site to Site VPN enables an encrypted connection between private networks over a public network (i.e. the Internet).

Basic steps to configure a site-to-site VPN with a Cisco ASA begin with defining the ISAKMP Policy. An ISAKMP/IKE policy defines how a connection is to be created, authenticated, and protected. You can have multiple policies on your Cisco ASA. You might need to do this if your ASA needs to connect to multiple devices with different policy configurations.

  • Authentication: specifies the method to use for device authentication
  • Hash: specifies the HMAC function to use
  • Encryption: specifies which algorithm to use
  • Group: specifies the DH key group to use

Next, you will need to establish the IPsec transform set. Different firmware versions and different Cisco devices have different options for the following:

  • Esp-md5-hmac: ESP with the MD5 (HMAC variant) authentication algorithm
  • Esp-aes: ESP with the 128-bit Advanced Encryption Standard (AES) encryption algorithm.
  • Esp-des: ESP with the 56-bit Data Encryption Standard (DES) encryption algorithm.
  • Esp-3des: ESP with the 168-bit DES encryption algorithm (3DES or Triple DES)
  • Ah-md5-hmac: AH with the MD5 (HMAC variant) authentication algorithm
  • Ah-sha-hmac: AH with the SHA (HMAC variant) authentication algorithm

3. Configure the crypto access list

Crypto ACLs are used to identify which traffic is to be encrypted and which traffic is not. After the ACL is defined, the crypto maps use the ACL to identify the type of traffic that IPsec protects.

It’s not recommended to use the permit ip any any command. It causes all outbound traffic to be encrypted, and sends all traffic to the specified peer.

4. Configure the crypto map

The crypto map ties together the previously defined parameters: the crypto ACL, the transform set and the peer address.

5. Now apply the crypto map to the outside interface.


Configuration of ASA-1

You might have to enable ISAKMP on your device

ASA-1(config)#crypto isakmp enable

First, define the IKE policies on ASA-1:

ASA-1(config)#crypto isakmp policy 10

The lower the policy number, the higher the priority of the ISAKMP policy, which affects which policies will be used between sites.

General rule of thumb is to give the most secure policy the lowest number (like 1) and the least secure policy the highest number (like 10000)

ASA-1(config-isakmp)#encryption des

(enable encryption des)

ASA-1(config-isakmp)#hash md5

(enable algorithm md5 for hashing)

ASA-1(config-isakmp)#authentication pre-share

(enable Pre-shared method)

ASA-1(config-isakmp)#group 2

(enable Diffie-Hellman group 2)


ASA-1(config-isakmp)#exit

(Exit from crypto isakmp mode)

  • The next step is to create a pre-shared key (password) on ASA-1.

ASA-1(config)#crypto isakmp key office address

(Here the key is “office”, followed by the ASA-2 peer address)

  • Now create an access list to define only interesting traffic.

ASA-1(config)#access-list 100 permit ip host host

(100 is the access list number, followed by the source and destination host addresses.)

  • Now create the transform-set for encryption and hashing.

ASA-1(config)#crypto ipsec transform-set ts2 esp-des esp-md5-hmac

(Here encryption type is des and hashing method is md5-hmac)

ASA-1(config)#crypto map testcryp 10 ipsec-isakmp

(crypto map name testcryp)

ASA-1(config)# crypto map testcryp 10 match address 100

(apply the access list)

ASA-1(config)# crypto map testcryp 10 set transform-set ts2

(apply the transform set)

ASA-1(config)# crypto map testcryp 10 set peer

(Set remote peer address)

  • Now apply the crypto map to the ASA-1 outside interface

ASA-1(config)# crypto map testcryp interface outside

(Apply crypto map on outside interface)

ASA-1(config)# crypto isakmp enable outside

(To enable crypto isakmp on ASA)

Configuration of ASA-2

First, define the IKE policies on ASA-2:

ASA-2(config)#crypto isakmp policy 10

(10 is isakmp policy number)

ASA-2(config-isakmp)#encryption des

(enable encryption des)

ASA-2(config-isakmp)#hash md5

(enable algorithm md5 for hashing)

ASA-2(config-isakmp)#authentication pre-share

(enable Pre-shared method)

ASA-2(config-isakmp)#group 2

(enable Diffie-Hellman group 2)


ASA-2(config-isakmp)#exit

(Exit from crypto isakmp mode)

  • The next step is to create a pre-shared key (password) on ASA-2.

ASA-2(config)#crypto isakmp key office address

(Here the key is “office”, followed by the ASA-1 peer address)

  • Now create an access list to define only interesting traffic.

ASA-2(config)#access-list 100 permit ip host host

(100 is the access list number, followed by the source and destination host addresses.)

  • Now create the transform-set for encryption and hashing.

ASA-2(config)#crypto ipsec transform-set ts2 esp-des esp-md5-hmac

(Here encryption type is des and hashing technique is md5-hmac)

ASA-2(config)#crypto map testcryp 10 ipsec-isakmp

(crypto map name testcryp)

ASA-2(config)# crypto map testcryp 10 match address 100

(apply the access list)

ASA-2(config)# crypto map testcryp 10 set transform-set ts2

(apply the transform set)

ASA-2(config)# crypto map testcryp 10 set peer

(Set remote peer address)

  • Now apply the crypto map to the ASA-2 outside interface

ASA-2(config)# crypto map testcryp interface outside

(Apply crypto map on outside interface)

ASA-2(config)# crypto isakmp enable outside

(To enable crypto isakmp on ASA)

Now, to verify the secure tunnel, ping the other remote location.

ASA-2(config)# ping
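
Beyond a ping, the tunnel can be checked from either ASA with the standard show commands:

```
ASA-2# show crypto isakmp sa

(shows the Phase 1 IKE security associations)

ASA-2# show crypto ipsec sa

(shows the Phase 2 IPsec SAs, including packet encrypt/decrypt counters)
```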

Setting up Netboot helpers on a Cisco device

Tuesday, April 3rd, 2012

Configuring a Cisco device to forward BOOTP requests is a pretty straightforward process. First off, this will only apply to Cisco routers and some switches; you will need to verify whether your device supports the IP Helper command. The Cisco ASA, for example, will not forward BOOTP requests.

By default the IP Helper command will forward several types of UDP traffic, the two important ones being ports 67 and 68 for DHCP and BOOTP requests. Other ports can be customized to forward with additional commands as well. It is quite simple: if you have a NetBoot server, you can configure the IP Helper command to point to that server’s IP address.

Here is an example. Let’s say you know your NetBoot server’s IP address. You would simply go into global configuration mode, switch to the interface you want to utilize, and type “ip helper-address” followed by that address to relay those requests. Depending on your situation you also might want to set up the device to ignore BOOTP requests (in cases where you have DHCP and BOOTP on the same network). That command is “ip dhcp bootp ignore”. Using the IP Helper and BOOTP ignore commands together will ensure that those BOOTP requests are forwarded out the interface to the specified address.

Lastly, if you have multiple subnets you can set up multiple IP Helper address statements on your device to forward to multiple destinations.
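
Sketching the commands above in an IOS configuration (the interface name and the 192.0.2.10 server address are hypothetical placeholders):

```
Router(config)# interface GigabitEthernet0/1
Router(config-if)# ip helper-address 192.0.2.10
Router(config-if)# exit
Router(config)# ip dhcp bootp ignore
```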

Installing a SonicWALL ViewPoint Virtual Machine

Monday, April 2nd, 2012

When installing a Viewpoint VM machine you will need to download three items.

First is the SonicWALL_ViewPoint_Virtual_Appliance_GSG.pdf, which will be your step-by-step instruction manual for installing the ViewPoint VM.
Next you will need to identify which version your ESXi host is running, and then download the client that matches your host’s version.
Lastly you will need to log in and download the sw_gmsvp_vm_eng_6.0.6022.1243.950GB.ova image.

When you have all three of these downloaded, open the SonicWALL_ViewPoint_Virtual_Appliance_GSG and start going through the step-by-step instructions.
You will first install the VM client, where you may run into the first gotcha: depending on the machine’s setup, the downloaded .exe may be blocked from running. Get properties on the file and unblock it if it is blocked.
After the install of the VM client, follow the instructions in the PDF until you get to page 18, step 2.

2. When the console window opens, click inside the window, type snwlcli at the login: prompt and then press Enter. Your mouse pointer disappears when you click in the console window. To release it, press Ctrl+Alt.

Here is where you will run into the biggest gotcha.

You will be asked to log in with a name and password. On first login use the name snwlcli with no password, then use the default name and password and continue.

Microsoft’s System Center Configuration Manager 2012

Sunday, March 18th, 2012

Microsoft has released the Beta 2 version of System Center Configuration Manager (SCCM), aka System Center 2012. SCCM is a powerful tool that Microsoft has been developing for over a decade. It started as an automation tool and has grown into a full-blown management tool that allows you to manage, update, and distribute software, licenses, policies and a plethora of other amazing features to users, workstations, servers, and devices, including mobile devices and tablets. The new version has a simplified infrastructure without losing functionality compared to previous versions.

SCCM provides end users with an easy-to-use web portal that allows them to choose the software they want, providing an instant response to install the application in a timely manner. For mobile devices, the management console has an Exchange connector and will support any device that can use the Exchange ActiveSync protocol. It will allow you to push policies and settings to your devices (i.e. encryption configurations, security settings, etc.). Windows Phone 7 features are also manageable through SCCM.

The Exchange component sits natively with the configuration manager and does not have to interface with Exchange directly to be utilized. You can also define minimal rights for people to just install and/or configure what they need and nothing more. The bandwidth usage can be throttled to govern its impact on the local network.

SCCM will also interface with Unix and Linux devices, allowing multiple platform and device management. At this point, many third-party tools such as the Casper Suite and Absolute Manage also plug into SCCM nicely. Overall this is a robust tool for the multi-platform networks that are so common in today’s businesses.

Microsoft allows you to try the software as a free download. For more information, contact your 318 Professional Services Manager, or reach out to 318 if you do not yet have one.

Adding incoming and outgoing access rules on a Cisco ASA

Saturday, March 17th, 2012

To understand incoming and outgoing rules there are a couple of things to know before you can define your rules. Let’s start with an understanding of traffic flow on an ASA. Incoming rules define traffic that comes inbound to the ASA’s interface. Outgoing rules are for all traffic going outbound of an ASA’s interface. It does not matter which interface it is, since this is a matter of data flow, and each active interface on an ASA will have its own unique address.

To explain this further, let’s say we have an internal interface that your local area network connects to. You can add a permit or deny rule to this interface specifying whether incoming or outgoing traffic will be permitted. This allows you to control which computers can communicate past that interface. Essentially you would define most of your rules for the local area network on the internal interface, governing which systems/devices can access the Internet, certain protocols, or not.

Now if you know about the basic configuration of an ASA you know that you have to set the security level of the Internal and External ports. So by default these devices allow traffic from a higher security interface to a lower security interface. NAT/PAT will need to be configured depending on if you want to define port traffic for specified protocols.

For this article I will just mention that there are several types of Access Control Lists (ACLs) that you can create on an ASA: Standard, Extended, EtherType, Webtype, and IPv6. For this example we will use Extended, because that is most likely what everyone will use the most. With an extended ACL, not only can you specify IP addresses in the access control list, but you can also specify port traffic to match the protocol that might be required.

Let’s look at the example below:

You will see we are in the configuration terminal mode

ASA(config)# access-list acl extended permit tcp any host eq 80

-So the first part, “access-list acl”, means the access list will be named “acl”.
-Next you have a choice of access list type. We are using Extended for this example.
-The next portion is the permit or deny option; we have permit selected for this statement.
-The next selection, “any”, refers to inside traffic (simply meaning that any internal traffic is allowed). If you don’t use “any”, you can specify specific devices by using “host” and an IP address, like the last part of this ACL statement.
-The last part specifies a specific destination host address, with a port equal to 80.

So this example tells us that our access control list named “acl” will allow any inside traffic out to the specified host address on port 80, i.e. web traffic.

Later you will notice that your statement will look like this on the ASA:

ASA(config)# access-list acl extended permit tcp any host www

(Notice how “eq 80”, the default HTTP port, changed automatically to “www”. This is common on Cisco ASA devices.)
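
One more step not shown above: an ACL does nothing until it is bound to an interface with the access-group command. A sketch, assuming the ACL should filter traffic arriving on the inside interface:

```
ASA(config)# access-group acl in interface inside
```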

Lost a password to your Cisco Device and need to recover the settings?

Friday, March 9th, 2012

Most of us know that Cisco can be a bit complicated and sometimes things happen that are not so forgiving. One of those is losing a password on a Cisco device. If you did not know that you could reset the password using a console cable, you might be freaking out thinking you have to reset to factory defaults. Thankfully, Cisco provides a backdoor to their devices. For each device the commands and procedures can be slightly different, so you will want to look up Cisco’s password recovery steps for your specific device. In this example I will show you the steps to reset the password on a Cisco ASA 5505 using Terminal from a MacBook.

First thing you will need to have on all the Cisco devices is Console port access. For this reason it is important to ensure there are strict physical security measures in place. Access to the device allows someone to have access to the procedures that I am about to list, which can give them unwanted entry to your device.

1. Connect to the device using the console port/cable. The cable is usually RJ-45 to serial; my MacBook doesn’t have a serial port, so I use a serial-to-USB adapter. All my configuration is then done in Terminal. If you’re on a PC you can use your telnet application or the MS-DOS CMD window.

Using a MacBook with the serial-to-USB adapter requires I use the “screen /dev/tty.KeySerial1 9600” command to be able to use Terminal as my console window. This will allow you to view the boot-up of the device as soon as it has power.
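
As a sketch of that console session (the tty device name depends on your particular USB-serial driver):

```
# open the console session at 9600 baud
screen /dev/tty.KeySerial1 9600
# to end the session later: press Ctrl-A, then Ctrl-\
```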

2. Now shutdown the ASA, and power it back up. During the startup messages, press and hold the “Escape” key when prompted to enter ROMMON.

3. To update the configuration register value, enter the following command:

rommon #1> confreg 0x41

4. To have the ASA ignore the startup configuration during its startup, enter the following command

rommon #1> confreg

The ASA will display the current configurations register value, and will prompt you to change the value:

Current Configuration Register: 0x00000011
Configuration Summary:
boot TFTP image, boot default image from Flash on netboot failure
Do you wish to change this configuration? y/n [n]:

5. Take note of the current configuration register value (it will be used to restore later). At the prompt enter “Y” for yes and hit enter.

The ASA will prompt you for new values.

6. Accept all the defaults, except for the “disable system configuration?” value; at that prompt, enter “Y” for yes and hit enter.

7. Reload the ASA by entering:

rommon #2> boot

The ASA loads a default configuration instead of the startup configuration.

8. Enter privileged EXEC mode by entering:

hostname> en

9. When prompted for the password press “Enter” so the password will be blank.

10. Next, load the startup config by entering:

hostname# copy startup-config running-config

11. Enter global configuration mode by using this command:

hostname# config t

12. Change the passwords in the configuration by using these commands, as necessary:

hostname(config)# password newpassword
hostname(config)# enable password newpassword
hostname(config)# username newusername password newpassword

13. Change the configuration register to load the startup configuration at the next reload by entering:

hostname(config)# config-register 0x00000011

* Note: 0x00000011 is the configuration register value you noted in step 5.

14. Save the new passwords to the startup configuration by entering:

hostname(config)# wr mem


The commands used in the example above were referenced from a Cisco article.

Virtual Desktop Infrastructure (VDI) for Mac OS X

Thursday, March 8th, 2012

What is Virtual Desktop Infrastructure (VDI)? VDI is technology that enables you to connect to a host’s shared repository of virtualized environments and then allows you to run them on your computer or device, but still utilizing the host’s resources. In other words, it allows you to connect to an OS dedicated to you using your local device as a remote (read: thin) client.

The difference between VDI and Terminal Services or a traditional Citrix setup is that in a Terminal Server or Citrix setup, many users connect to a server, share the resources of the server, and are all still under the same end-user OS layer and hardware ecosystem. Using VDI, each user has a dedicated virtual machine running a workstation OS, sharing only the underlying hardware ecosystem. Some VDI tools can then be synchronized to the local workstation and run offline as well, leveraging the local system’s resources.

Mac OS X was initially left out of the virtual desktop infrastructure space. But with the introduction of VMware View 4.5, users of the Apple-based platform get a chance to dabble in leveraging a virtualized desktop infrastructure in much the same way that users of other platforms can. With VMware View Client for Tech Preview, Mac users can leverage PCoIP (PC over IP) instead of only relying on Remote Desktop for connecting to their virtual desktops. The current offerings of the VMware View Client for OS X do not offer the same type of features as the Windows version, but VMware is working on matching those features across their clients.

Citrix has its own implementation of VDI called XenDesktop. XenDesktop is similar in its offerings to VMware View and is another enterprise-class option for VDI implementation. OS X can connect to the virtual desktop through Citrix Receiver. A difference between the two is the protocol used to deliver the best virtualized desktop experience. While VMware View uses PCoIP (UDP based), Citrix XenDesktop uses HDX (High Definition Experience), which is TCP based. Both do a good job of connecting to their respective virtual desktops using different protocols, and both also support using Remote Desktop to connect to the virtual desktop.

Mokafive is a newcomer to the VDI scene, geared specifically to the Mac OS X platform. Mokafive takes a different spin on VDI, and sets up the virtual desktop to utilize the resources of the local device instead of a centralized server (it should be noted, though, that both XenDesktop and VMware View now offer that same capability, each with its own unique implementation). Mokafive does so from a Mokafive server using a desktop virtual machine called a LivePC that it uses as a “golden image” (a master virtual machine that’s used for deployment). One of its main strengths is that it’s easy to understand and use.

With all of the VDI options that are out there, an acronym that’s being used is BYOC (Bring Your Own Computer). With this idea, companies may begin to allow more employees to bring their MacBooks to work and run the corporate virtual desktop on them, without the IT staff having to be too concerned about line-of-business application compatibility on OS X, since everything will just run on the corporate virtual desktop. Choosing the VDI solution to do this for your company seems to be more a question of which option lines up best with your current infrastructure and familiarity versus simplicity. If you would like to discuss VDI or other forms of virtualization with 318, please contact your Professional Services Manager, or reach out to 318 if you do not yet have one.


NAT vs. PAT

Wednesday, March 7th, 2012

In the routing world, NAT stands for Network Address Translation, while PAT stands for Port Address Translation. To many, the two look pretty similar; to others, they couldn’t be more different.

When you have an Internet connection for your business network, you are usually given a range of public static IP addresses. With these addresses, you can use NAT on your Cisco router to map an external address to an internal address (NAT is one-to-one addressing). Your NAT router translates traffic coming into and leaving your private network, so it works in both directions.

Let’s say your computer has a private IP address and the router has a public IP address. When you go to the Internet from your computer, its private address is translated to the public address using NAT, which allows you to communicate outside your network. NAT also handles the return trip: when data comes back, the router translates the destination back to your private address so your system receives the information.
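On a Cisco IOS router, this kind of one-to-one mapping can be sketched as follows. The interface names and both addresses (an inside host at and a public address of from your assigned range) are hypothetical examples:

! Mark which interface faces the LAN and which faces the Internet
interface GigabitEthernet0/1
 ip nat inside
interface GigabitEthernet0/0
 ip nat outside
! One-to-one: the inside host always appears outside as
ip nat inside source static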

Port Address Translation is almost the same thing, but it allows you to specify the TCP or UDP port to be used. Let’s say you need to access a mail server on your network from outside. Most likely it will use the standard SMTP port, 25. Assuming it does, you would configure the router to allow port 25 traffic from outside your network through to your mail server’s port 25, thus sending and receiving e-mail. You can also use PAT to translate traffic from a specific port to a different port. For example, suppose external mail clients must use port 25, but internally your mail server listens on a custom port, 26. You can define a static PAT rule so that all outside port 25 traffic is routed to port 26 internally, allowing port 25 traffic to reach your mail server on port 26.

*Note: PAT works hand in hand with NAT and is linked to the public and internal IP addresses. With PAT you may route many-to-one addressing (i.e. all internal addresses go out a single public IP address for Internet traffic on port 80).
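As a sketch, the mail-server scenario above might look like this on a Cisco IOS router; the server address ( and access list 1 are hypothetical examples:

! Many-to-one PAT: all inside hosts share the outside interface's public IP
access-list 1 permit
ip nat inside source list 1 interface GigabitEthernet0/0 overload
! Static PAT: outside port 25 maps to the internal mail server's port 26
ip nat inside source static tcp 26 interface GigabitEthernet0/0 25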

Configuring a Cisco ASA 5505 with the basics

Thursday, March 1st, 2012

The Cisco ASA 5505 is great for small to medium businesses. Below are the steps you will have to complete to configure your ASA to communicate with the Internet. There are many more steps, options, and features on these devices; later articles will cover some of them.

Bring your device into configuration mode

318ASA>enable
Brings the device into enable mode

318ASA#config t
Change to configuration terminal mode

The ASA is now ready to be configured when you see (config)#

Configure the internal interface VLAN (ASA’s use VLAN’s for added security by default)
318ASA(config)# interface Vlan 1

Configure interface VLAN 1
318ASA(config-if)# nameif inside
Name the interface inside

318ASA(config-if)#security-level 100

Sets the security level to 100

318ASA(config-if)#ip address
Assign your IP address

318ASA(config-if)#no shut
Make sure the interface is enabled and active

Configure the external interface VLAN (This is your WAN\internet connection)
318ASA(config)#interface Vlan 2
Creates the VLAN2 interface

318ASA(config-if)# nameif outside
Names the interface outside

318ASA(config-if)#security-level 0
Assigns the least trusted security level to the outside interface (the lower the number, the less trusted the interface).

318ASA(config-if)#ip address
Assign your Public Address to the outside interface

318ASA(config-if)#no shut
Enable the outside interface to be active.

Enable and assign the external WAN to Ethernet 0/0 using VLAN2
318ASA(config)#interface Ethernet0/0
Go to the Ethernet 0/0 interface settings

318ASA(config-if)#switchport access vlan 2
Assign the interface to use VLAN2

318ASA(config-if)#no shut
Enable the interface to be active.

Enable and assign the internal LAN interface Ethernet 0/1 (note ports 0/1-0/7 act as a switch but all interfaces are disabled by default).
318ASA(config)#interface Ethernet0/1
Go to the Ethernet 0/1 interface settings

318ASA(config-if)#no shut
Enable the interface to be active.
If you need multiple LAN ports you can do the same for Ethernet0/2 to 0/7.

To have traffic route from LAN to WAN you must configure Network Address Translation on the outside interface
318ASA(config)#global (outside) 1 interface
318ASA(config)#nat (inside) 1

***NOTE for ASA Version 8.3 and later***
Cisco announced the new Cisco ASA software version 8.3. This version introduces several important configuration changes, especially on the NAT/PAT mechanism. The “global” command is no longer supported. NAT (static and dynamic) and PAT are configured under network objects. The PAT configuration below is for ASA 8.3 and later:

318ASA(config)#nat (inside,outside) dynamic interface

For more info, you can reference Cisco’s migration documentation regarding these changes.
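As a sketch, on 8.3 and later the dynamic PAT rule shown above lives inside a network object; the object name and inside subnet here are hypothetical examples:

318ASA(config)# object network inside-net
318ASA(config-network-object)# subnet
318ASA(config-network-object)# nat (inside,outside) dynamic interface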

Configure the default route, pointing at your ISP’s gateway address
318ASA(config)#route outside 0 0 <gateway IP> 1

Last but not least, verify and save your configuration.

Verify your settings are working. Once you have verified your configuration, write to memory to save it. If you do not write to memory, your configuration will be lost upon the next reboot.

318ASA(config)#wr mem

Final Cut Server EOL’d – What do we do now?

Friday, December 9th, 2011

318 has been working to provide our clients with a strategy to replace Final Cut Server, now that FCS has been EOL’d by Apple. We are proud to announce a comprehensive strategy and solution in the form of CatDV Enterprise Server and Client, by Square Box Systems, LTD.

The first question should always be, “Do we need to implement a new solution?” In many cases, and at least for now, the answer may be “No, not yet.” There will come a time, however, when the needs of the workflow, software, hardware, or some other factor will necessitate a new Digital Asset Management (DAM) System implementation.

Once the decision has been made to deploy a new DAM, many additional questions will arise. How do we keep our metadata intact? Can we re-use our clip and edit proxies? How do we keep our current automations? 318 can work with you to address these issues. We are asking ourselves the same questions with an eye towards minimizing the hassles associated with migrating such a major piece of infrastructure.

318 has spent the last year evaluating many of the DAM solutions in the marketplace, with an emphasis on whether or not the solution is an appropriate replacement for Final Cut Server in terms of cost, functionality and scalability, and after many internal discussions, CatDV best matched these criteria. In terms of cost, CatDV is one of the most affordable solutions in the marketplace. In terms of functionality, CatDV matches or exceeds the functionality of Final Cut Server. In terms of scalability, CatDV far exceeds the capabilities of Final Cut Server.

The final link in the chain is migrating data and recreating workflows from Final Cut Server to CatDV. 318 has the facility and ability to migrate your metadata with a minimum of user intervention. We also have the ability to analyze your Final Cut Server workflows and re-create the functionality in CatDV, including shell scripting and highly customized workflow integrations for ingest and archive.

We are a CatDV authorized reseller, and have staff trained by CatDV personnel. 318 stands ready to spec, deploy, configure and maintain your CatDV solution and help you with the transition from your Final Cut Server to CatDV. Please don’t hesitate to contact us for a demo and discussion of what CatDV can do for your video workflows.

Finally, 318 is working with other vendors to continue expanding our portfolio of SAN and DAM solutions. Keep on the lookout for what will hopefully be a few other additions once our thorough vetting process has been completed! If you would like further information on any of this, please feel free to contact your Professional Services Manager, or contact us directly if you do not yet have one.

Building a Mac and iOS App Store Software Update Service

Wednesday, November 9th, 2011

Let’s say you run a network with a large number of Mac OS X or iOS (or, more likely, both) devices. Software Update and the two App Stores (Mac App Store and iOS App Store) make keeping all those devices up-to-date a pretty straightforward process. They are a huge improvement compared with the rather old-fashioned practice of looking through applications, visiting the web site for each one and manually downloading updated versions. When updating two or more similar machines, of course, one only needed to download the updated version once, then copy it to each other machine. Better, but a process that when performed across a lot of machines requires a lot of work.

However, even though the App Store and Software Update Server in Mac OS X Server make things easier, there’s no simple way to download things once and distribute the downloaded files to multiple machines for items purchased on the App Store. When large updates come out (such as a new version of iOS), you’re essentially downloading huge amounts of data to each and every machine, and if machines are set to automatically download updates, you could even have a large number of them downloading simultaneously.

Of course you can run your own Software Update service in Mac OS X Server, but this requires that every client machine be configured to use the local server. This works well for machines under your control, but for all those people who bring in their own laptops this doesn’t help.
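For the machines that are under your control, pointing Software Update at a local server is a one-line change per client; the URL below is a placeholder for your own server and catalog file:

sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate CatalogURL "http://your.server/index-leopard-snowleopard.merged-1.sucatalog"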

What’s worse is that there’s currently no way whatsoever to run a Software Update-like service for App Store purchases. Imagine if you have a lab of dozens or hundreds of Macs with Final Cut X, or iPads (or iPhones, iPod touches, or whatever comes out next) with iMovie. Any time there’s an update you’re potentially downloading over a gigabyte per machine in the case of Final Cut X, or 70 megabytes or so in the case of iMovie. That can easily add up to a tremendous amount of traffic and the congestion, complaints and headaches which go with it.

What’s needed is an easy way to cache App Store downloads. While we’re at it, it would also be nice to transparently have machines use our own Software Update server. Let’s be even a little more ambitious and do this without needing Mac OS X Server. Aw, heck – let’s make it work on any reasonably Unix-like OS.

So how do we do this? The App Stores and Software Update services use http for fetching files. So what we need to do is to capture those http requests and either redirect them to a local store of Software Update files or locally cached App Store files.

Just as an aside, it’d be tremendously difficult to create a local store of App Store files if for no other reason than the fact that there are currently more than half a million applications. Add to this the rate at which updates become available and your machine would probably never be finished attempting to download all of the applications! Considering this, we’re looking at running Apache and squid on our Unix-like machine and doing a little redirection magic on whatever device does NAT or routes for us.

Note: There’s no reason that the same machine can’t do both NAT/routing and Apache/squid, although in most environments we are assuming that the machine would simply be a proxy for Mac or iOS-based devices. To make this example end-to-end though, we’ll run the router on the host.

Our example uses a Mac OS X (non-Server) machine running Leopard which is doing both NAT and running our Apache and squid software. We’re simply using the Internet Sharing service; the public network interface is en0 (which we don’t use anywhere), and the interface which will serve our iOS and Mac clients is en1, which has a private address on the shared network.

Everyone has their own favorite way of installing software on Unix-like OSes, and a discussion about which is best and why would certainly be outside the scope of this article. In these examples we’re using NetBSD’s pkgsrc for no other reason than the fact that it will compile packages from source with a base directory which is easily configurable (feel free to use ports or some other automated tool according to what platform you are using). Get pkgsrc (usually via cvs; we’ll assume it’s put into /usr), which can be as simple as:

cd /usr ; setenv CVSROOT ; cvs checkout -P pkgsrc

And then run /usr/pkgsrc/bootstrap/bootstrap like so:

cd /usr/pkgsrc/bootstrap/
./bootstrap --prefix /usr/local --pkgdbdir /usr/local/var/db/pkg --sysconfdir /usr/local/etc --varbase /usr/local/var --ignore-case-check

This puts all files into /usr/local including logs and configuration files, so keeping your system clean is simple and keeping track of the differences between built-in and pkgsrc software is easy. Next, install pkgsrc’s www/squid and www/apache (and net/wget if your Unix doesn’t already have it):

cd /usr/pkgsrc/www/squid
bmake update
cd /usr/pkgsrc/www/apache22
bmake update
cd /usr/pkgsrc/net/wget
bmake update

Note that on systems like Mac OS X which come with GNU make by default, pkgsrc uses bmake; if you have BSD make already, just use make. Another note: /usr/local/sbin is not in Mac OS X’s path by default, so add /usr/local/sbin to /etc/paths if you’re going to use it.

Now that the software is installed in consistent locations we can configure it. The squid.conf file only needs one line to be changed; everything else is added. Find the line which says:

http_port 3128

And change it to:

http_port 3128 intercept

Then add the following lines:

maximum_object_size_in_memory 4096 KB
cache_replacement_policy heap LFUDA
cache_dir ufs /usr/local/var/squid/cache 16384 16 256
maximum_object_size 2097152 KB
refresh_pattern -i \.ipa$ 360 90% 10800 override-expire ignore-no-cache ignore-no-store ignore-private ignore-reload ignore-must-revalidate
refresh_pattern -i \.pkg$ 360 90% 10080 override-expire ignore-no-cache ignore-no-store ignore-private ignore-reload ignore-must-revalidate
acl no_cache_local dstdomain
cache deny no_cache_local
redirect_program /usr/local/bin/

These settings are chosen to cache large files up to 2 gigabytes in size in a 16 gigabyte on-disk cache, and to ignore cache directives for .pkg and .ipa files. Adjust to your own liking. Of course, the address in the no_cache_local acl should be the private IP of your machine. The cache deny with that address makes sure that redirected Software Update files are not cached in squid, which would just take up room better used for App Store files.

The URL rewriting script (create it at the path you gave to redirect_program above) just changes Apple Software Update URLs to point to our server. Since the exact hosts and address depend on your setup, the loop body below is a sketch:

#!/usr/bin/env perl
# Read URLs from squid on stdin; rewrite Apple Software Update URLs to
# point at our local Apache mirror, and pass everything else through.
$| = 1; # unbuffered, so squid gets each rewritten URL immediately
while (<>) {
    s|^http://swscan\.apple\.com|http://<your server IP>|; # adjust host and address to match your mirror
    print;
}

Next we configure Apache. The location you choose for the Software Update files can be anywhere (in our example, they’re on a FireWire attached drive mounted at /Volumes/sw_updates/) which needs to be allowed in the Apache configuration.

Add to /usr/local/etc/httpd/httpd.conf:

<Directory "/Volumes/sw_updates/">
Options Indexes FollowSymLinks
AllowOverride None
Order allow,deny
Allow from all
</Directory>

<VirtualHost *:80>
DocumentRoot "/Volumes/sw_updates"
ErrorLog "/usr/local/var/log/httpd/swupdate_error_log"
CustomLog "/usr/local/var/log/httpd/swupdate_access_log" common
</VirtualHost>

The log lines are purely optional. If you don’t add them, logs will still be written at /usr/local/var/log/httpd/access_log and error_log.

Next, we configure ipfw (in the case of Mac OS X or FreeBSD) to redirect all port 80 traffic transparently to our squid instance. If you’re using a different device for NAT/routing or different firewalling software such as ipfilter, see the examples listed below.

ipfw add 333 fwd <local IP>,3128 tcp from any to any 80 recv en1

Note that on Snow Leopard and Lion you’ll need to make this change, too:

sysctl -w net.inet.ip.scopedroute=0

On Solaris or other systems using ipfilter, the same redirection as the ipfw rule above would look like this (on Linux, an equivalent iptables REDIRECT rule does the job):

rdr en1 port 80 -> <local IP> port 3128 tcp

Again, en1 is the local private interface; substitute your own private IP and interface.

Finally, we need to mirror all Apple Software Updates. A simple shell script can do this. Save this file somewhere (named, for instance, and run it from cron now and then, perhaps once a night:



#!/bin/sh
# Mirror Apple's Software Update catalogs and every package they reference.
# Set this to the base URL the .sucatalog files are served from:
catalog_base=""

location=$1 # This is the root of our Software Update tree
mkdir -p "$location"
cd "$location" || exit 1

for index in index-leopard-snowleopard.merged-1.sucatalog index-leopard.merged-1.sucatalog index-lion-snowleopard-leopard.merged-1.sucatalog
do
    wget --mirror "$catalog_base/$index"

    # Pull each package URL out of the mirrored catalog and mirror it too
    for swfile in `cat $(find . -name "$index") | grep "http://" | awk -F">" '{ print $2 }' | awk -F"<" '{ print $1 }'`
    do
        echo "$swfile"
        wget --mirror "$swfile"
    done
done
Invoke this with the top of the tree of your Software Update files as you’ve used in the Apache config, like so:

./ /Volumes/sw_updates

Expect this to run for a long time the first time you run this because you’ll be downloading around 60 gigabytes of updates. Every time it runs afterwards, though, files won’t be downloaded again unless they change (which they won’t; new updates will show up as new files).
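The grep/awk pipeline in the script simply keeps whatever sits between a > and the next < on lines containing http://, which is how package URLs appear inside the XML catalogs. A quick way to sanity-check that extraction against a fabricated catalog fragment (the file name and URL here are made-up examples):

```shell
#!/bin/sh
# Build a fake one-entry catalog fragment and run the same extraction
# pipeline the mirror script uses.
cat > /tmp/sample.sucatalog <<'EOF'
<key>URL</key>
<string>http://example.com/pkg/Update.pkg</string>
EOF

for swfile in `cat /tmp/sample.sucatalog | grep "http://" | awk -F">" '{ print $2 }' | awk -F"<" '{ print $1 }'`
do
    echo "$swfile"   # prints http://example.com/pkg/Update.pkg
done
```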

Start squid and Apache, then tail your Apache log and run Software Update to test:

/usr/local/share/examples/rc.d/apache start
/usr/local/share/examples/rc.d/squid start
tail -f /usr/local/var/log/httpd/swupdate_access_log

At this point, you can redirect your software updates to the host. Updates for both the Mac App Store and iOS are also now cached. In the next article we’ll look at using some squid extensions to enable you to block applications from the App Stores or block updates in the event that an update is problematic.

Creating an Access List on a Cisco ASA

Tuesday, November 8th, 2011

Cisco provides basic traffic filtering capabilities with access control lists (also referred to as access lists). Access lists can be configured for all routed network protocols (IP, AppleTalk, and so on) to filter the packets of those protocols as they pass through a router. You can configure access lists on your ASA to control access to a network: access lists can prevent certain traffic from entering or exiting a network, filtering by port or by IP address.

The access control list (ACL) methodology on the Cisco ASA is interface-based. Therefore, each interface must have a specified security level (0-100), with 100 being most secure and 0 being least secure. Once configurations are in place, traffic from a more secure interface is allowed to access less secure interfaces by default. Conversely, less secure interfaces are blocked from accessing more secure interfaces.

Some common commands used to configure Cisco ASA interfaces include:

  • nameif – used to name the interface
  • security-level – used to configure the interface’s security level
  • access-list – used to permit or deny traffic
  • access-group – applies an ACL to an interface

We can configure an access list to permit or deny traffic based on a specific port or protocol. The ASA denies by default: everything is automatically blocked and must be explicitly allowed (on routers it is the opposite, where everything is allowed and you have to deny ports or protocols to block them).

Let’s say we want to configure an ACL on an ASA to permit all FTP traffic from any host to a particular server. To do this, we input the following ACL, substituting the server’s address:

ASA(config)# access-list OUTBOUND permit tcp any host <server IP> eq ftp

Now let’s say we want to configure an ACL on an ASA to deny all FTP traffic from any host to that same server. To do this, we input the following ACL:

ASA(config)# access-list OUTBOUND deny tcp any host <server IP> eq ftp
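Either way, an access list has no effect until it is bound to an interface with the access-group command. A sketch, assuming we want the OUTBOUND list to filter traffic arriving on the inside interface:

ASA(config)# access-group OUTBOUND in interface inside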

Access lists are also used when defining rate limits in QoS settings. Here is a helpful guide to assist in choosing the right number to associate with an ACL:

Protocols with Access Lists Specified by Numbers

  • IP: 1-99, 1300-1999
  • Extended IP: 100-199, 2000-2699
  • Ethernet type code: 200-299
  • Ethernet address: 700-799
  • Transparent bridging (protocol type): 200-299
  • Transparent bridging (vendor code): 700-799
  • Extended transparent bridging: 1100-1199
  • DECnet and extended DECnet: 300-399
  • XNS: 400-499
  • Extended XNS: 500-599
  • AppleTalk: 600-699
  • Source-route bridging (protocol type): 200-299
  • Source-route bridging (vendor code): 700-799
  • IPX: 800-899
  • Extended IPX: 900-999
  • IPX SAP: 1000-1099
  • Standard VINES: 1-100
  • Extended VINES: 101-200
  • Simple VINES: 201-300

Basic SonicWALL Router Setups

Tuesday, October 11th, 2011

A work in progress…

1. Register the SonicWALL appliance on SonicWALL’s support portal. A new account may be created for this purpose

2. Download the latest firmware from the same portal

3. Disable popup blocking on your browser

4. Connect to the SonicWALL at its factory-default LAN IP address (you need to adjust your Ethernet NIC’s config to match the SonicWALL’s default network settings)

5. Follow the setup wizard and define a WAN IP, LAN IP, and DHCP range of IPs

6. Upload the newer firmware downloaded above and boot from it

7. In the https://[Sonicwall IP Address]/diag.html screen, uncheck the box “Enforce Host Tag Search for CFS”

8. Use the Public Server Wizard to create additional systems on the LAN that need to be publicly accessible. Note that the default WAN IP address provided in the wizard is the SonicWALL’s, but you can enter a different WAN IP; this creates a NAT policy using a new Address Object in the WAN zone

9. If more than one service needs to be visible for a system (i.e., a mail server needing 993, 587, 465, etc.), just select a single service during the wizard setup and then modify the “Service Group” that the wizard creates to include the additional services that you want visible

10. For site-to-site VPN, follow the documentation in the SonicOS Administrators guide. Typically we have found that setting the VPN policy up in Aggressive Mode works more reliably than Main Mode

Performing a CrashPlan PROe Server Installation

Wednesday, April 13th, 2011

This is a checklist for installing CrashPlan PROe Server.

Prepare your deployment:  Before you install the server software you should have the following ready:

  1. A static IP address. If this is a shared server, whenever possible, CrashPlan should have a dedicated network interface.
  2. (Recommended) A fully qualified host name in DNS. IP addresses will work, but for ease of management internally (and even more importantly, externally), working DNS pointing to the service is best.
  3. Firewall port forwards for network connections. Ports 4280 and 4282 are needed for client-server communication, and to send software updates. 4285 is also needed if you wish to manage the server via HTTPS from the WAN.
  4. A dedicated storage volume (preferably with a secure level of RAID) for backup data.
  5. Although a second server install (as server/destination licenses are free) is best for near-full redundancy, secondary destination volumes can be configured on external drives for offsite backup.
  6. LDAP connection. If you will be reading user account information from an LDAP server, make sure you  have the credentials and server information to access it from the CrashPlan Server install.
  7. If you’d like multiple locations to back up to local servers, ensure that your first server is installed in the most ideal environment for your anticipated usage. This is referred to as the Master server; it requires higher uptime and accessibility, as all licensing and user additions and removals rely upon it.


  1. Go to the CrashPlan PROe downloads page.
  2. If you have not purchased CrashPlan licenses through a reseller, you can fill out the web form to be issued a trial master license key. Otherwise, check the “I already have a master key” checkbox to be presented with the downloads.
  3. Download the CrashPlan PROe server installer (the client software is located further down on the page.)  Choose the appropriate installer for your server (Mac, Windows, Linux, or Solaris.)
  4. Run the installer. When the installation completes you will be asked to enter the master key in order to activate the software.  If you don’t have it at that time, you can enter it later via the web interface.


  1. Initial Setup. On the server, from a web browser, connect to the server’s web console (HTTPS on port 4285, as noted above). This is the web interface of the CrashPlan PROe Server. If you did not enter the master key during installation, you will be prompted to enter it here.
  2. Log into the server using the default admin user credentials provided on the screen.  Immediately change the username and password for the ‘Superuser’ by going to Settings Tab > Edit Server Settings in the sidebar > then Superuser in the sidebar. Just as with Directory Administrator user names, customizing the user name is also recommended.
  3. Assign networking information. Click on the Settings tab > Edit Server Settings > Network Addresses. You will see fields in which to enter the Primary and Secondary  network addresses or DNS name(s). This information will match how clients attempt to connect to the server, so for ease of management, using an IP address for the primary and DNS for the secondary may make the most sense. Changes to the servers address would therefore immediately propagate for clients instead of waiting for DNS, although TTL preparation would help. Another consideration is where the majority of the clients will be accessing the server from.
  4. Assign the default storage volume: By default, CrashPlan PROe will assign a directory on the boot volume as the storage volume. Navigate to the Settings tab > Add Storage. You will be presented with a page that has links to Add Custom Directory, Add Pro Server, or Unused Volumes. If the data volume is attached to the file system with a UNC path it will be listed as an Unused Volume. Select the new storage volume, optionally with a subdirectory. Finally, to make this new volume the default storage volume for new clients, navigate to the Settings tab > Edit Server Settings; the third line has a drop-down menu for Mount Point for New Computers. You can then remove the default storage location on the boot volume.
  5. Create Organizations. At installation time there will be one default organization. All new users created will be added to this group. You can create an arbitrary number of organizations and sub organizations, if you believe client settings should be propagated differently for certain departments. At least one sub-organization can be helpful in complex environments, especially with Slave servers. Each division can have managers assigned for managing, alerting, and/ or reporting purposes, as well.
  6. Create User Accounts. Users can be created manually in the web interface, during the deployment of the client software, or through LDAP lookups.
  7. Set Client Backup Defaults. If you’d like to restrict certain files or locations from the clients’ backups, you may do so from the Settings tab > Edit Client Settings. By default, nothing is excluded, but only the user’s home folder is included. It may be useful to restrict file types that the company is not concerned about, or to modify the time period for keeping old versions. If storage space is a concern and customers are including very large files in the backup, you may want to purge deleted files on an accelerated schedule (the default is never). Allowing reports to be sent to each individual customer can also be enabled, and optionally settings may be locked down to read-only. In particular, if multiple computers share the same account, forcing the entry of a password to open the desktop interface may be useful to turn on and lock so it cannot be changed. These changes can be propagated for the entire Master server, the organization, or an individual client/user installation.
  8. Install CrashPlan PROe on a test machine for final testing. The installation of a client will require the Registration key generated for the organization the user should be ‘filed’ into, the Master server’s network information, the creation of a username (usually the customer’s email address, or the function that computer performs) and a password. Once complete, the client will register with the server and begin backing up the home folder of the currently logged-in customer (by default).

Restricting Outgoing Email To a 3rd Party SMTP Relay Host on SonicWALLs

Friday, November 12th, 2010

Oftentimes it is necessary to lock down outbound SMTP traffic so that only MX Logic receives it. MX Logic can provide outbound filtering, which helps keep you from getting blacklisted while also scanning your outgoing e-mail for malware. Also, limiting outbound mail to only the server communicating with MX Logic ensures that no rogue mail servers can send out e-mail (often done by infected devices).

This guide assumes you have already used the Wizard to setup port forwarding, firewall rules, and NAT policies for allowing the mail server to be accessed via the SonicWALL.

To Lock Down Outbound Email to MX Logic on a SonicWALL
1. Determine what port you will be sending out on. If you are using a non-standard port, you will first need to make a custom service object on the SonicWALL for the port.
2. Create an Address Group containing the Address Objects for MX Logic:
   1. Go to Network
   2. Go to Address Objects
   3. Add Address Object
      1. Name: MX Logic 1
      2. Zone Assignment: WAN
      3. Type: Network
      4. Network: IP from MX Logic
      5. Netmask: Subnet from MX Logic
      NOTE: You will need to do this for each subnet that MX Logic offers. Name them sequentially. The address info can be found on MX Logic’s portal.
   4. Go to Address Objects
   5. Create Address Object Group
   6. Add all of your MX Logic Address Objects to the Address Object Group, and call it “MX Logic”
   7. Save all your changes.
3. Go to Firewall
4. Go to LAN to WAN
5. Click Add
6. Create a rule that allows the mail server on the LAN to send out only to MX Logic on the WAN:
   1. Action: Allow
   2. From Zone: LAN
   3. To Zone: WAN
   4. Service: SMTP (or whatever you named your custom one)
   5. Source: the Address Object representing your mail server
   6. Destination: MX Logic (the Address Object Group you created previously)
   7. Save your changes.
7. Create another rule to block all other outbound e-mail:
   1. Go to Firewall
   2. Go to LAN to WAN
   3. Click Add
   4. Action: Deny
   5. From Zone: LAN
   6. To Zone: WAN
   7. Service: SMTP (or whatever you named your custom one)
   8. Source: Any
   9. Destination: Any
   10. Save your changes.
8. Adjust rule order:
   1. Ensure that the MX Logic outbound rule is above the rule that blocks all other devices from sending SMTP traffic out to the Internet.
   2. Apply the changes.
NOTE: By doing this, any laptop users, or other portable device users, that may try to send email over port 25 through other servers (Gmail, Yahoo, AOL, etc.) will be DENIED by the SonicWALL.