Archive for the ‘Linux’ Category

Checking ClamAV Logs

Thursday, February 20th, 2014

As with all antivirus packages, we should be checking logs routinely. The following steps lay out what to do:

1. Log into servers

2. There are 3 log files we are concerned with:

  • /var/log/clamd.log – the system log for the ClamAV daemon
  • /var/log/freshclam.log – the log for ClamAV virus definition updates
  • /var/log/clamscan.log – the output of the weekly scheduled ClamAV scan

3. To read the log files perform the following commands with elevated privileges:

  • a. cat /var/log/clamscan.log – each weekly scan is separated by a complete line of ‘—-‘
  • b. cat /var/log/freshclam.log – make sure that ClamAV is using a current virus definition database, and no errors are occurring
  • c. cat /var/log/clamd.log – confirm there are no errors causing the service to crash

4. The virus scans are scheduled to run every Saturday at 3am.

5. ClamAV only supports installing virus definition updates on releases up to three versions behind the current ClamAV, and freshclam will report when the installed ClamAV is out of date. To update ClamAV, follow the instructions here:
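The checks in step 3 can be wrapped in a small script. This is a hedged sketch, not an official ClamAV tool: the log paths are the defaults from step 2, and the log directory is a parameter so you can try it against a scratch directory first.

```shell
# Sketch of the weekly ClamAV log review. Pass a different log directory
# as the first argument to test; defaults to /var/log as in step 2.
LOGDIR="${1:-/var/log}"

check_log() {
    if [ ! -r "$1" ]; then
        echo "MISSING: $1"
    elif grep -Eiq 'error|warning' "$1"; then
        echo "ATTENTION: $1 contains errors or warnings"
    else
        echo "OK: $1"
    fi
}

check_log "$LOGDIR/clamd.log"
check_log "$LOGDIR/freshclam.log"
check_log "$LOGDIR/clamscan.log"
```

Anything flagged ATTENTION deserves a manual cat of that file, per step 3.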

Installing OpenXenManager on Ubuntu

Sunday, April 21st, 2013

OpenXenManager is a nice graphical environment for managing the Xen virtualization environment. You still need to use QEMU or virt-install/virt-manager to manage domains themselves, but OpenXenManager takes some of the guessing games out of things. Having said that, xm is still the way to go for the most part.

Before you install OpenXenManager, you’ll need svn, which isn’t baked into a default Ubuntu 12.04 or 12.10 installation. We’ll just use apt-get to install subversion and the python frameworks required for openxenmanager:

sudo apt-get install subversion
sudo apt-get install python-glade2 python-gtk-vnc

Next, grab openxenmanager using subversion:

svn co openxenmanager

Hop into your openxenmanager trunk and fire it up:

cd openxenmanager/trunk
python window.py

Now you have a GUI for managing domU. Good luck!

A Little About Xen

Friday, February 1st, 2013

Xen is one of those cool open source projects which seems like the kind of thing you’d probably want to run if it weren’t for the fact that everyone forces you to run ESX(i). It’s free, it’s well documented, and no matter how irrational salespeople can be, nobody can say there’s no support or documentation for it. So how does Xen work, and how does it compare with ESXi?

For starters, there’s no need to be overly concerned with what hardware is supported. Instead of being dependent on a specific OS, Xen is a driverless hypervisor which runs in conjunction with a host OS. This host OS might be GNU/Linux, NetBSD, Solaris or others. Since the host OS handles talking with hardware, any hardware which is supported by the host OS can be used with Xen. In a nutshell, Xen can be run on any x86 box, although full virtualization requires Intel’s VT-x or AMD’s AMD-V hardware support.

So let’s say you want to set up Xen. How? In many instances it’s as simple as installing a GNU/Linux distribution or a NetBSD distribution. Straightforward directions can be found here:

Let’s say you’ve already done all of that and you’re sitting at the command prompt of your Xen dom0. How do we create virtual machines? We’ll make an example using Windows 2012 Server since that just happens to be what I’m installing today.

Just like the how-to for installing a Xen dom0 instance, there are lots of how-tos for installing Windows and other OSes under Xen. We’ll summarize the important points using a minimal VM configuration file:

name = "win2012"
memory = "8192"
disk = [ 'file:/usr/xen/win2012/win2012.img,ioemu:hda,w', 'file:/usr/xen/win2012/winserver2012.iso,ioemu:hdb:cdrom,r' ]
vif = [ 'bridge=bridge0', ]
vcpus = 1
sdl = 0
vncconsole = 1
vfb = [ 'type=vnc,vncdisplay=12,vncpasswd=booboo' ]
on_reboot = 'destroy'
on_crash = 'destroy'

# boot on floppy (a), hard disk (c) or CD-ROM (d)
# default: hard disk, cd-rom, floppy
usbdevice = 'tablet' # Helps with mouse pointer positioning

Some of the options are pretty self-explanatory, such as name, memory, vcpus, on_reboot and on_crash. Others may need a little explanation. builder identifies the kind of virtualization: for paravirtualization, no builder need be specified, but you must be running a Xen-aware domU; for full virtualization, use hvm. device_model determines the executable run to emulate the devices which the virtual machine will use; the default Xen qemu device model appears to work well nowadays with all versions of Windows. The disk line should make sense, but be aware that you’ll need to make the disk image yourself. If you wish to preallocate the entire file, use something like:

dd if=/dev/zero of=/usr/xen/win2012/win2012.img bs=1048576 count=32768

This writes 32 gigs of zeros to the disk image file. If you don’t care about preallocating the space, you can create a sparse file instead:

dd if=/dev/zero of=/usr/xen/win2012/win2012.img seek=32767 bs=1048576 count=1
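To confirm the second form really is sparse, compare the file’s apparent size with the blocks actually allocated on disk. A small sketch, scaled down to 1 MiB (instead of 32 GiB) and a temp file so it finishes instantly:

```shell
# Same seek-past-the-end technique as above, scaled down:
# seek past 1023 KiB, then write a single 1 KiB block.
img=$(mktemp)
dd if=/dev/zero of="$img" seek=1023 bs=1024 count=1 2>/dev/null

echo "apparent size: $(wc -c < "$img") bytes"   # 1048576: the full 1 MiB
echo "blocks actually on disk:"
du -k "$img"                                    # far less than 1024 KiB
rm -f "$img"
```

The hole in the middle of the file is never stored, which is why the sparse dd returns immediately while the preallocating one grinds through all 32 GiB.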

vif is the virtual network interface. A bridge to a real NIC is simplest, although other options are available if desired.

sdl is some other way of presenting the graphics screen, but I don’t know how that works. vnc, on the other hand, can be run on localhost and sent over X11 very easily. My options above create a vnc listener on display 12 (localhost:5912) with a simple password. One can forward X11, with compression if preferred, and run vncviewer on the Xen dom0. If X11 scares you, you can also use ssh to forward port 5912 from localhost of the dom0 to a port on your local machine and run VNC or Screen Sharing on your local machine.

There are great reasons to do something like this, like the ability to talk directly to the ssh daemon on the dom0 instance… But whatever… Blah blah blah. We’re now running Windows in Xen…

Set Splunk MySql Monitor To Start On Boot (CentOS)

Thursday, January 31st, 2013

Back in the old days of Unix there was an easy way to start a daemon or script every time a computer booted: simply put it in one of the /etc/rc.? text files and it would start the services in the order specified.  Later, this was made more flexible with separate startup folders per runlevel.  Later still, these rc[1-6].d startup folders were deprecated, though they are still used to some extent by legacy programs, and things are now managed with new commands.


To put it bluntly, it’s messy, non-intuitive and definitely not as easy as it should be.  There is hope, however: getting a script or daemon to run “the right way” at startup isn’t too terribly daunting, and I’ll walk you through the process now.


In our instance we need a program to run on boot.  It takes one of three arguments (start, stop or restart) and is located in /opt/splunk/etc/apps/mysqlmonitor/bin/daemon/.  It’s almost ready to run at startup, but first we should look at the command we’ll use to register it: chkconfig.

The chkconfig command takes a script located in /etc/init.d and creates all the necessary symlinks for it in the rc[1-6].d folders, which tell the system what order to start the services in and which runlevels start which services.  Runlevels are mostly deprecated in Linux these days, but as an FYI, the runlevels you need to pay attention to are 2, 3, 4 and 5, and they are almost always identical.  The only thing you really need to worry about is the order in the boot process in which the scripts get started, and to a lesser extent the order in which they get shut down on reboot.  For example, a program that relies on NFS must be started after the NFS service has mounted its drives successfully.  Lower numbers start first, and priorities run from 1 to 99.  Since Splunk is at priority 90 and this monitor needs to start after Splunk, I’ll give it a priority of 95.  As for shutdown, this service should turn off early since it relies on other services and may spit out errors if those dependencies are stopped before it, so I’ll give it a shutdown priority of 5, which makes it one of the first processes to shut down.


So now that we know when in the boot process the script should run (priority 95) and which runlevels it should run from (2, 3, 4 and 5), we just need to put this info into the system.  We do this by adding specially formatted comment lines to our script in /etc/init.d.  Here’s what our example looks like with the new comments added:


#!/usr/bin/env python
#         run level  startup  shutdown
# chkconfig: 2345      95        5 
# description: monitors local mysql processes for splunk
# processname: splunkmysqlmonitor
import sys, time, os, socket...

Now we have to put the script into the /etc/init.d folder and that is best done with a symlink.

     ln -s /opt/splunk/etc/apps/mysqlmonitor/bin/daemon/ /etc/init.d

And finally the chkconfig command itself

     chkconfig --add /etc/init.d/

This should add the script to startup and next time you reboot it’ll launch automagically.
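Behind the scenes, chkconfig translates that header into S (start) and K (kill) symlinks in the rc[1-6].d folders, and init simply runs them in lexical order. A hypothetical illustration in a scratch directory (the real links live in /etc/rc[2-5].d, and the names below are just stand-ins):

```shell
# Simulate what 'chkconfig: 2345 95 5' would materialize in a runlevel folder.
rcdir=$(mktemp -d)
touch "$rcdir/init-script"
ln -s "$rcdir/init-script" "$rcdir/S90splunk"             # splunk: priority 90
ln -s "$rcdir/init-script" "$rcdir/S95splunkmysqlmonitor" # monitor starts later
ln -s "$rcdir/init-script" "$rcdir/K05splunkmysqlmonitor" # killed early on shutdown

# Boot order is just sort order, so S90 runs before S95.
ls "$rcdir" | grep '^S' | sort
rm -rf "$rcdir"
```

This is why the priority numbers matter: init doesn’t know about dependencies, only about the sort order of these names.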

Introducing Splunk: Funny name, serious logging

Thursday, November 15th, 2012

So, my boss says:

“Write an article called ‘Getting Started with Splunk.’”

I reply:

“What, you think I know all this stuff? This really would be a getting started article.”

But here it is and WOW is Splunk cool!

My only experience with Splunk up to a couple days ago was seeing a T-shirt with “Log is my copilot”. I knew it had something to do with gathering log files and making them easier to read and search. In about an hour I had gone to Splunk’s website to research the product, downloaded and installed it, and started viewing logs from my own system. The Splunk folks have made getting their product into their customer’s hands easy and getting started even easier.

What is Splunk?

Simply put, Splunk can gather just about any kind of data that goes into a log (system logs, website metrics, etc.) into one place and make viewing that data easy. It’s accessed via web browser so it’s accessible on any computer or mobile device such as an iPad.

What do I need to run Splunk?

Practically any common operating system today can run Splunk: Mac OS X, Linux, Windows, FreeBSD and more.

How much does Splunk cost?

Don’t worry about that right now. Download and install the free version. It takes minutes to install and is a no-brainer. Let’s get started.

Getting Splunk

IT managers and directors may be interested in watching the introductory and business case videos with the corporate speak (“operational intelligence” anyone?) and company endorsements. Techs will be interested in getting started. Right on their home page is a big green Free Download button. Go there, click it and locate the downloader for your OS of choice. I downloaded the Mac OS X 10.7 installer to test (and installed it on OS X 10.8 without any issues).

Splunk home

This does require a sign-up to create an account. It takes less than a minute to complete. After submitting the information the 100 MB download begins right away.

While waiting for the download…

When the download is on its way the Splunk folks kindly redirect to a page with some short videos to watch while waiting. Watch this first one called Getting data into Splunk. It’s only a few minutes and this is the first thing to do after getting into Splunk.

Installing and starting Splunk

The download arrives as a double-clickable Apple Installer package. Double-click and install it. Toward the end it opens a simple TextEdit window with instructions for how to start, stop and access the newly installed Splunk site.

Install done

Files are installed in /Applications/splunk and resemble a UNIX file system.

Splunk application folder

Open the Terminal application found in /Applications/Utilities and run the command /Applications/splunk/bin/splunk start. If this is the first time running Splunk it prompts to accept its license agreement. Tap the spacebar to scroll through and read the agreement or type “q” to quit and agree to the license.


Accepting the agreement continues to start Splunk where it displays some brief setup messages.

Starting Splunk

The setup then provides the local HTTP address for the newly installed Splunk site. Open this in a web browser to get to the login screen. The first login requires that the administrator account password be reset.

Splunk login

Following along with the Getting data into Splunk video, Splunk will need some information. Mac OS X stores its own log files. Let’s point to those.

Click the Add Data link to begin.

New Splunk home

Since Mac OS X’s log files are local to the machine, click A file or directory of files.

Add files

Click Next to specify local files.

Add local logs

This opens a window that exposes not only Mac OS X’s visible folders but its invisible folders as well. Browse to /var/log/system.log and click the Select button.

Browse logs folder

For now, opt to skip previewing the log file and click Continue.

Path to system.log

Now, let’s opt to monitor not only the system.log file but the entire /var/log folder containing dozens of other log files as well. Note that Splunk can watch rotated and zipped log files too. Click Save to finish adding logs.

Add /var/log folder

Let’s start searching!

Success, start searching

The Search window initially displays a list of all logs Splunk is monitoring. To narrow the search change the time filter drop down menu to Last 60 minutes. This will make the results a little easier to see on a system that’s only been running a short while.

Last 24 hours

Now, search for install*. Without the asterisk as a wildcard character, Splunk would only find the exact word “install”. Splunk supports not only wildcard searches but booleans, parentheses, quotes, etc. It will return every instance recorded in the logs that matches the search criteria, and it also creates an interactive bar chart along the top of the page to indicate the number of occurrences found for the search at particular times.

Search for install

To further refine the search, Option+click most any word in the log entries below and Splunk will automatically add the necessary syntax to remove an item. In this case the install* search returned install, installer and installd. Option+clicking installd changed the search criteria to install* NOT installd.

Modified search

Now what?

Continue exploring the videos to understand Splunk’s possibilities and take advantage of its Splunk Tutorial, which is available online as well as in PDF format for offline viewing. They do a great job leading users through setup and creating reports.

Still asking about price? Good.

The free version remains free but doesn’t include many features that really make it sing such as monitoring and alerts, multiple user accounts and support beyond the Splunk website. Cost depends primarily on the amount of data you want to suck into Splunk and have it watch. It’s not cheap but for an enterprise needing to meet certain service level requirements it beats browsing through multiple servers trying to find the right log with the right information.

FYI, putting together this 1,000-word article probably took me 10 times longer than performing the Splunk install itself and beginning to learn it. It’s really well-done and easy to use. Splunk makes getting started simple.

Rename files en masse

Friday, October 26th, 2012

There are more than a few shareware utilities for both Windows and Mac that give a user the ability to rename a bunch of files according to certain criteria.  GUI utilities are always nice, but what if you’re logged into a webserver and need to rename all .JPG files to .jpg?

There’s a simple perl script that gives you the ability to rename all files in a directory according to the powerful rules of regular expressions.  Here are some example ways to use the script.


The following renames all files ending in .JPG to .jpg.

% rename 's/\.JPG$/.jpg/' *.JPG

The next one converts all uppercase filenames to lowercase, except for Makefiles.

% rename 'tr/A-Z/a-z/ unless /^Make/' *

The next one removes the preceding dot in front of a filename unless it’s a .DS_Store file.

% rename 's/^\.// unless /^\.DS_Store/' *

The next one appends the date to all text files in the current directory.

% rename '$_ .= ".2012-10-26"' *.txt

The last and arguably most useful way to use this tool is to feed it filenames from find.  This example renames all files under /var/www from .JPG to .jpg.

% find /var/www -name '*.JPG' -print | rename 's/\.JPG$/\.jpg/'

Note: There are two variables you can set in the script.  The first is a list of files to ignore and the second turns on “dry run” mode, which lets you see what the script is going to do before it irreversibly changes file names.
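If the perl script isn’t available, the first example (the .JPG to .jpg case) can also be done with a plain shell loop and parameter expansion. A sketch, run here against a scratch directory:

```shell
# Rename every *.JPG in a scratch directory to *.jpg,
# using ${f%.JPG} to strip the old suffix.
dir=$(mktemp -d)
touch "$dir/beach.JPG" "$dir/sunset.JPG"

for f in "$dir"/*.JPG; do
    mv "$f" "${f%.JPG}.jpg"    # strip .JPG, append .jpg
done

ls "$dir"    # beach.jpg  sunset.jpg
rm -rf "$dir"
```

It lacks the regex flexibility and dry-run mode of the perl version, but it works everywhere /bin/sh does.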


Here is the script

Evaluating the Tokens, or Order of Expansion in Bash

Monday, July 23rd, 2012

Previously in our series on commonly overlooked things in bash, we spoke about being specific with the binaries our script will call, and mentioned conventions to use after deciding what to variable-ize. The rigidity and lack of convenience afforded by bash starts to poke through when we’re trying to abstract re-usable inputs through making them into variables, and folks are commonly tripped up when trying to have everything come out as intended on the other side, when that line runs. You may already know to put quotes around just about every variable to catch the possibility of spaces messing things up, and we’re not even touching on complex ‘sanitization’ of things like non-roman alphabets and/or UTF-8 encoding. Knowing the ‘order of operation expansion’ that the interpreter will use when running our scripts is important. It’s not all drudgery, though, as we’ll uncover features available to bash that you may not have realized exist.

For instance, you may know curly braces can be used in the shell, but did you know there’s syntax to, for example, expand to multiple extensions for the same filename by putting them in curly braces, comma-separated? An interactive example (with set -x):
cp veryimportantconfigfile{,-backup}
+ cp veryimportantconfigfile veryimportantconfigfile-backup

That’s referred to as brace expansion, and it is the first of the (roughly) six types of expansion the bash interpreter goes through when evaluating lines and ‘token-ized’ variables in a script.

Since you’re CLI-curious and go command line (trademark @thespider) all the time, you’re probably familiar not only with tilde (~) as a shortcut to the logged-in user’s home directory, but also with the fact that cd alone will take you to that home directory. A user’s home gets a lot of traffic, and while the builtin $HOME variable is probably more reliable if you must interact with home directories in your script, tilde expansion (including any subdirectories tacked onto the end) is next in our expansion order.

Now things get (however underwhelmingly) more interesting. Third in the hit parade, each with semi-equal weighting, are:
a. the standard “variable=foo, echo $variable” style ‘variable expressions’ we all know and love,
b. backtick-extracted results of commands, which can also be achieved with $(command) (and, if worst comes to worst, you can force another round of expansion with the eval command),
c. arithmetic expressions (like -gt for greater than, -eq for equal, and so on) as we commonly use in comparison tests,
d. and an interesting set of features that are actually convenient (and mimic some uses of regular expressions), called (misleadingly) dollar sign substitution. All of the shorthand included under this category has been written about elsewhere in detail, but one in particular is an ad-hoc twist on a catchall that you can reach via the ‘shell options’, or shopt command (originally created to expand on ‘set’, which we mentioned in our earlier article when adding a debug option with ‘set -x’). The options available with shopt are too numerous to cover now, but one that you’ll see particularly strict folks use is ‘nounset’, which ensures variables have been defined before they are evaluated as the script runs. It’s only slightly confusing that a variable can have an empty string for a value, which would pass this check. Often it’s the other way around, and we’ll have variables that are defined without being used; the thing we’d really like to look out for is a variable that is supposed to have a ‘real’ value, where the script could cause ill effects by running without one. So the question becomes: how do we check for those important variables as they’re expanded?
A symbol that will come up later when we cover getopt is the colon, which refers to the existence of an argument, or of the value (text or otherwise) you’d expect a variable to have. Dollar sign substitution borrows this concept to let you check ad hoc for empty (or ‘null’) variables: follow a standard ‘$variable’ with ‘:?’ (finished product: ${variable:?}). In other words, it tests whether $variable expanded into a ‘real’ value, and it will exit the script at that point with an error if the variable is unset, like an ejector seat.
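A minimal sketch of the ejector seat in action (the variable name is hypothetical; the failing case runs in a subshell so only the subshell dies):

```shell
# ${var:?message} expands normally when var has a value...
required="hello"
echo "${required:?required is unset}"    # prints: hello

# ...and aborts with the message when it doesn't. Run the failing case
# in a subshell so only the subshell is ejected, not this demo:
if ! ( unset required; echo "${required:?required is unset}" ) 2>/dev/null; then
    echo "ejector seat fired"
fi
```

In a real script you would just write ${required:?} inline wherever the value is consumed; the script then cannot run past that point without it.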

Moving on to the lighter expansions, next comes command lookup in the run environment’s PATH, evaluated like regular (Western) sentences, from left to right.
As the interpreter traipses along down a line running a command, it follows that command’s rules about which switches and arguments to expect, and assumes those are split by some sort of separator (whitespace by default), referred to as the Internal Field Separator. The order of expansion continues with this ‘word splitting’.
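Word splitting is easiest to see by counting arguments; it is also why unquoted variables misbehave around spaces. A quick sketch:

```shell
# $# is the number of arguments a function received, so it shows us
# exactly how the shell split an expansion.
count_args() { echo $#; }

value="one two three"
count_args $value      # unquoted: IFS splits it into 3 arguments
count_args "$value"    # quoted: passed through as 1 argument
```

The first call prints 3 and the second prints 1, which is the whole argument for quoting just about every variable.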

And finally, there’s regular old pathname pattern matching: if you’re processing a file or folder in a directory, the shell finds the first instance that matches and evaluates that, which is pretty straightforward. You may notice we’re often linking to the Bash Guide for Beginners site, hosted by The Linux Documentation Project. Beyond that resource, there are also videos from the 2011 (iTunesU link) and 2012 (YouTube) Penn State Mac Admins conferences on this topic if you need a refresher before we forge ahead for a few more posts.

Back to Basics with Bash

Tuesday, July 17th, 2012

The default shell for Macs has been bash for as long as some of us can remember (as long as we forget it was tcsh through 10.2.8… and before that… there was no shell, it was OS 9!). Bash as a scripting language doesn’t get the best reputation, as it is certainly suboptimal and generally unoptimized for modern workflows. To get common things done you need to care about procedural tasks, and things can become very ‘heavy’ very quickly. With more modern programming languages that have niceties like APIs and libraries, the catchphrase you’ll hear is that you get loads of functionality ‘for free’, but it’s good to know how far we can get with bash, and why those object-oriented folks keep telling us we’re missing out. And although most of us are using bash every time we open a shell (zsh users probably know all this stuff anyway), there are things a lot of us aren’t doing in scripts that could be done better. Bash is not going away, and is plenty serviceable for ‘lighter’, one-off tasks, so over the course of a few posts we’ll touch on bash-related topics.

Something even a long-time scripter may easily overlook is how we might set variables more smartly and more often, making good decisions and being specific about what we choose to variable-ize. If the purpose of a script is to customize things in a way that’s reusable, making a variable out of that customization (say, for example, a hostname or notification email address) allows us to easily re-set that variable in the future. And in our line of work, if you do something once, it is highly probable you’ll do it again.

Something else you may have seen in certain scripts is the PATH variable being explicitly set or overridden, under the assumption that it may not be set in the environment the script runs in, or that the binaries we’re looking for (the droids, if you will) will definitely be found once we set the path directories explicitly. This is well-intentioned, but imprecise to put it one way, clunky to put it another. Setting a custom path, or having customized binaries that could end up interacting with our script, may cause unintended issues, so some paranoia should be exhibited. As scientists and troubleshooters, being as specific as possible always pays returns, so a guiding principle we should consider adopting is, instead of setting the path and assuming, to make a variable for each binary called as part of a script.

Now would probably be a good time to mention a few things that assist us when setting variables for binaries. As conventions go, it helps to leave variable names set for binaries in lowercase, and use all caps for the customizations we’re shoving in; that way only our customized info stands out visually as we debug and inspect the script, and when we go in to update those variables for a new environment. /usr/bin/which tells us the path to the binary that is currently first in our path; for example, ‘which which’ tells us we first find a version of ‘which’ in /usr/bin. Similarly, you may guess from its name what /usr/bin/whereis does. Man pages as a mini-topic is also discussed here. However, a more useful way to tell if you’re using the most efficient version of a binary is to check it with /usr/bin/type. If it’s a shell builtin, like echo, it may be faster than alternatives found at other paths, and you may not even find it necessary to make a variable for it, since there is little chance someone has decided to replace bash’s builtin ‘cd’…
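For example (exact paths vary by system, so treat the outputs as representative):

```shell
# 'type' reports how the shell will resolve a name, builtins included.
type echo     # -> echo is a shell builtin
type ls       # -> ls is /bin/ls (found via $PATH)

# 'which' only searches $PATH, so it can disagree with what actually runs:
which echo    # -> /bin/echo (or /usr/bin/echo), though the builtin wins
```

This disagreement between which and type is exactly why type is the more trustworthy of the two when deciding whether a binary needs a variable at all.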

The last practice we’ll try to spread the adoption of is using declare when setting variables. Again, while a lazy sysadmin is a good sysadmin, a precise one doesn’t have to worry about as many failures. A lack of portability across shells helped folks overlook it, but this is useful even if it is bash-specific. When you use declare with -r for read-only, you’re ensuring your variable doesn’t accidentally get overwritten later in the script. Integers can be ensured by using -i (which frees us from using ‘let’ when we are simply setting a number), arrays with -a, and when you need a variable to stick around beyond the individual script it’s set in or the current environment, you can export it with -x. (Just as ‘set’, the tool for shell settings used when debugging scripts with the xtrace option, turns options off with a plus sign, e.g. set +x, declare can remove most designations the same way, e.g. declare +x to stop exporting.) Alternately, if you must use the same exact variable name with a different value inside a nested function or script, you can declare the variable as local so you don’t ‘cross the streams’. We hope this starts a conversation on proper bash-ing; look forward to more ‘back to basics’ posts like this one.
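A minimal sketch of those declare flags (the variable names and email address are hypothetical, and this is bash-specific):

```shell
#!/bin/bash
# -r: read-only; -i: integer. Both are bash declare attributes.
declare -r NOTIFY_EMAIL="ops@example.com"   # hypothetical customization

# A read-only variable refuses reassignment; try it in a subshell
# so the failure doesn't take the demo down with it.
if ! ( NOTIFY_EMAIL="other@example.com" ) 2>/dev/null; then
    echo "NOTIFY_EMAIL is protected"
fi

declare -i count=5
count=count+1            # arithmetic happens automatically; no 'let' needed
echo "count is $count"   # prints: count is 6
```

The -i behavior is a nice small win: assignments to the variable are evaluated arithmetically, so counters stay terse without let or $(( )).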

Microsoft’s System Center Configuration Manager 2012

Sunday, March 18th, 2012

Microsoft has released the Beta 2 version of System Center Configuration Manager (SCCM), aka System Center 2012. SCCM is a powerful tool that Microsoft has been developing for over a decade. It started as an automation tool and has grown into a full-blown management tool that allows you to manage, update, and distribute software, licenses, policies and a plethora of other amazing features to users, workstations, servers, and devices, including mobile devices and tablets. The new version has a simplified infrastructure without losing functionality compared to previous versions.

SCCM provides end-users with an easy-to-use web portal that allows them to choose the software they want and have it installed in a timely manner. For mobile devices, the management console has an Exchange connector and will support any device that can use the Exchange ActiveSync protocol. It will allow you to push policies and settings to your devices (i.e. encryption configurations, security settings, etc.). Windows Phone 7 features are also manageable through SCCM.

The Exchange component sits natively with the configuration manager and does not have to interface with Exchange directly to be utilized. You can also define minimal rights for people to just install and/or configure what they need and nothing more. The bandwidth usage can be throttled to govern its impact on the local network.

SCCM will also interface with Unix and Linux devices, allowing multiple platform and device management. At this point, many 3rd party tools such as the Casper Suite and Absolute Manage also plug into SCCM nicely. Overall this is a robust tool for the multi platform networks that have so commonly developed in today’s business needs everywhere.

Microsoft allows you to try the software at For more information, contact your 318 Professional Services Manager or if you do not yet have one.

Lion, SSH And Special Characters

Tuesday, August 16th, 2011

At 318, we spend a pretty good bit of time SSH’d into Linux systems from Mac OS X. Therefore, whether we’re losing our color settings when SSH’ing into Ubuntu or unable to transfer files via SSH, when OS X has a problem with Linux/SSH we notice it pretty quickly. One such problem that has come up since we started moving many of our client systems over to Lion is that special characters don’t work by default when using SSH, which is funny, because they’re so much easier to type in Lion.

This is due to a small setting in /etc/ssh_config. To correct the setting, open ssh_config in your favorite text editor. Then look for the following line:

SendEnv LANG LC_*

Then remove LC_* from the line. I like to run the reset command any time I make a change like this.
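The edit also lends itself to a one-line sed. A hedged sketch that works on a scratch copy rather than the live file (the stock path on Lion is /etc/ssh_config; back it up before editing for real):

```shell
# Strip ' LC_*' from the SendEnv line of an ssh_config copy.
cfg=$(mktemp)
printf 'Host *\n    SendEnv LANG LC_*\n' > "$cfg"

sed 's/^\( *SendEnv LANG\) LC_\*/\1/' "$cfg" > "$cfg.fixed"

grep SendEnv "$cfg.fixed"    # now reads: SendEnv LANG
rm -f "$cfg" "$cfg.fixed"
```

Once you are happy with the output, apply the same expression to /etc/ssh_config with elevated privileges.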


Serial Adaptors, screen and OS X

Thursday, June 9th, 2011

Many of us use a Keyspan serial adapter to manage devices with serial ports on them. Those who need to console into devices but hate having to either use ZTerm (which is no longer maintained) or boot a Windows virtual machine will find an application called goSerial pretty handy. GoSerial makes a Keyspan serial-to-USB adapter, connected with a null modem cable, genuinely useful: you will be in CLI heaven in moments. goSerial can be downloaded here.

You can also use the screen command. The screen command will open a virtual terminal and provide the functionality of an old DEC VT100 terminal. Screen is one of the more useful tools when dealing with several servers concurrently, or several VT sessions as the case may be.

To open a screen session into an APC:

screen /dev/tty.KeySerial1 2400

To open a screen session into a Qlogic:

screen /dev/tty.KeySerial1 9600

To open a screen session into a Promise RAID:

screen /dev/tty.KeySerial1 115200

To see your active screens:

screen -ls

The output will show screens similar to the following:
6077.ttys001.krypted2 (Detached)

When you list the screens you’ll note that some can be detached. You can also start a screen detached. To do so, use the -d flag when invoking screen (or -D if you don’t want to fork the process). To attach to a detached screen, use the -r option:

screen -r 6077.ttys001.krypted2

Or, if you only have one active screen that has been detached, -R will automatically reconnect to it. It can also be useful to have friendlier names when working with multiple screen sessions (more on that in a moment). To attach to a screen session that is already attached elsewhere, use -x:

screen -x 6077.ttys001.krypted2

To provide an easy-to-remember name, use the -S option (note the capital; lowercase -s sets the shell instead). To initiate a screen session named simply Qlogic, using the Qlogic baud rate from above:

screen -S Qlogic /dev/tty.KeySerial1 9600

By creating a .screenrc file in your home directory you can also set many of the options for screen.
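A hypothetical starter ~/.screenrc, using a few common screen options (adjust to taste):

```
# skip the copyright splash at startup
startup_message off
# keep a deeper scrollback buffer
defscrollback 5000
# show a status line at the bottom with the host and window list
hardstatus alwayslastline
hardstatus string "%H | %-w%n %t%+w"
```

The hardstatus line in particular helps when juggling several serial sessions, since each window’s title shows up in the bar.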

While the screen command is useful in connecting to external devices via the command line, that’s only a small part of what screen can do. Those using the Terminal application that comes with Mac OS X have been using an environment that acts like screen for some time. You invoke tabs and new terminal windows in order to leave, for example, a session tailing logs or editing a configuration file open, while using a separate session to read a man page or start a process. Screen takes all of this and packs it into one terminal screen for environments without such an interactive command line management tool. For example, if you ssh into a Linux host in a data center, you would have to initiate 2 sessions into hosts in order to have 2 concurrently running screens, whereas you would only need to invoke one ssh session (and you may be limited to one) and still have the flexibility you have with the Terminal screen, albeit in a single window perhaps.

For example, let’s say you ssh into a RHEL box and you want to invoke an emacs editor:

screen emacs prog.c

Now let’s say that you’ve typed a few lines of a new samba config file and you want to tail the samba logs to make sure you’re editing the correct options. Detach from the emacs session with Ctrl-a d, then start a second session:

screen tail -f /var/log/samba/log.smbd

To then switch back to emacs, detach again (Ctrl-a d) and reattach to the earlier session (with more than one session detached, pass its name to -r):

screen -r

There’s lots more you can do with screen, but this should get ya’ started!

Suppressing the PHP Version

Thursday, April 28th, 2011

Yesterday, we looked at hiding the version of Apache being run on a web server. Today we’re going to look at suppressing the version of PHP.

By default, the PHP configuration file, php.ini, is stored at /etc/php5/apache2/php.ini (in most distributions of Linux) or just at /etc/php.ini (as with Mac OS X). Open this file in an editor:

vi /etc/php.ini

Then locate the expose_php variable within the file. Once found, set it to Off as follows:

expose_php = Off
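If you prefer to script the change, a sed one-liner can flip the setting. This sketch works on a scratch copy so it can be run safely; point PHP_INI at the real php.ini for your distribution when you use it in earnest:

```shell
# Demonstrate the edit on a scratch copy; set PHP_INI to your real php.ini path.
PHP_INI=$(mktemp)
printf 'expose_php = On\n' > "$PHP_INI"

# GNU sed in-place edit; on Mac OS X the syntax is: sed -i '' ...
sed -i 's/^expose_php[[:space:]]*=.*/expose_php = Off/' "$PHP_INI"

grep '^expose_php' "$PHP_INI"   # prints: expose_php = Off
```

After changing the real file, restart Apache so the new setting takes effect.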

Doing so will not meaningfully improve the overall security of a system (unless you believe in security through obscurity), but it is a good idea and will defeat a number of vulnerability scanners. If you suppress the Apache and PHP version information in order to pass a vulnerability scan on a distribution that backports fixes, it is also a good idea to check the CVEs for the version you are actually running and verify that you are secure.

Hiding the Apache Software Version

Wednesday, April 27th, 2011

By default, Apache displays version information when queried. One aspect of securing Apache servers is to suppress this information from being shown to clients. This also helps immensely with vulnerability scanners that only look at the http header, as many vendors now backport or fork the code for Apache (e.g. Red Hat and Apple).

To do so, one need only make a small change to the httpd.conf file. By default, Apache on Linux stores its configuration in /etc/httpd/conf/httpd.conf; in Mac OS X it can be found at /private/etc/apache2/httpd.conf. Here, you will find the ServerTokens and ServerSignature directives. These should be set to ProductOnly and Off respectively, as follows:

ServerTokens ProductOnly
ServerSignature Off

Once these have been changed, you will need to restart the httpd service. One way to do so is to use init.d:

/etc/init.d/httpd restart

To verify that the version number has been suppressed, use telnet to connect to the web server on port 80 (substitute your server’s hostname) and request the headers:

telnet www.example.com 80
HEAD / HTTP/1.0

Press Return twice after the request; the Server header should now read only “Apache”.

Performing a CrashPlan PROe Server Installation

Wednesday, April 13th, 2011

This is a checklist for installing CrashPlan PROe Server.

Prepare your deployment:  Before you install the server software you should have the following ready:

  1. A static IP address. If this is a shared server, whenever possible, CrashPlan should have a dedicated network interface.
  2. (Recommended) A fully qualified host name in DNS. IP addresses will work, but for ease of management internally (and even more so externally), working DNS pointing to the service is best.
  3. Firewall port forwards for network connections. Ports 4280 and 4282 are needed for client-server communication, and to send software updates. 4285 is also needed if you wish to manage the server via HTTPS from the WAN.
  4. There should be a dedicated storage (preferably with a secure level of RAID) volume for backup data.
  5. Although a second server install (as server/destination licenses are free) is best for near-full redundancy, secondary destination volumes can be configured on external drives for offsite backup.
  6. LDAP connection. If you will be reading user account information from an LDAP server, make sure you  have the credentials and server information to access it from the CrashPlan Server install.
  7. If you’d like multiple locations to backup to local servers, ensure that your first master is installed in the most ideal environment for your anticipated usage. This is referred to as the Master server, which requires higher uptime and accessibility, as all licensing/user additions and removals rely upon it.


  1.  Go to
  2. If you have not purchased CrashPlan licenses through a reseller, you can fill out the web form to be issued a trial master license key. Otherwise, check the “I already have a master key” checkbox to be presented with the downloads.
  3. Download the CrashPlan PROe server installer (the client software is located further down on the page). Choose the appropriate installer for your server (Mac, Windows, Linux, or Solaris).
  4. Run the installer. When the installation completes you will be asked to enter the master key in order to activate the software.  If you don’t have it at that time, you can enter it later via the web interface.


  1. Initial Setup. On the server, from a web browser, connect to the web interface of the CrashPlan PROe Server. If you did not enter the master key during installation, you will be prompted to enter it here.
  2. Log into the server using the default admin user credentials provided on the screen.  Immediately change the username and password for the ‘Superuser’ by going to Settings Tab > Edit Server Settings in the sidebar > then Superuser in the sidebar. Just as with Directory Administrator user names, customizing the user name is also recommended.
  3. Assign networking information. Click on the Settings tab > Edit Server Settings > Network Addresses. You will see fields in which to enter the Primary and Secondary network addresses or DNS name(s). This information determines how clients attempt to connect to the server, so for ease of management, using an IP address for the primary and DNS for the secondary may make the most sense. Changes to the server’s address would then immediately propagate to clients instead of waiting for DNS, although TTL preparation would help. Another consideration is where the majority of the clients will be accessing the server from.
  4. Assign the default storage volume. By default, CrashPlan PROe will assign a directory on the boot volume as the storage volume. Navigate to the Settings tab > Add Storage. You will be presented with a page that has links to Add Custom Directory, Add Pro Server, or Unused Volumes. If the data volume is attached to the file system with a UNC path it will be listed as an Unused Volume. Select the new storage volume, optionally with a subdirectory. Finally, to designate this new volume as the default storage volume for new clients, navigate to the Settings tab > Edit Server Settings; the third line has a drop-down menu for Mount Point for New Computers. You can then remove the default storage location on the boot volume.
  5. Create Organizations. At installation time there will be one default organization. All new users created will be added to this group. You can create an arbitrary number of organizations and sub organizations, if you believe client settings should be propagated differently for certain departments. At least one sub-organization can be helpful in complex environments, especially with Slave servers. Each division can have managers assigned for managing, alerting, and/ or reporting purposes, as well.
  6. Create User Accounts. Users can be created manually in the web interface, during the deployment of the client software, or through LDAP lookups.
  7. Set Client Backup Defaults. If you’d like to exclude certain files or locations from client backups, you may do so from the Settings tab > Edit Client Settings. By default, nothing is excluded, but only the user’s home folder is included. It may be useful to restrict file types that the company is not concerned about, or to modify the time period for keeping old versions. If storage space is a concern and customers are including very large files in the backup, you may want to purge deleted files on an accelerated schedule (the default is never). Allowing reports to be sent to each individual customer can also be enabled, or optionally settings may be locked down to read-only. In particular, if multiple computers share the same account, forcing a password to open the desktop interface may be useful to turn on and lock so it cannot be changed. These changes can be propagated for the entire Master server, the organization, or an individual client/user installation.
  8. Install CrashPlan PROe on a test machine for final testing. The installation of a client will require the registration key generated for the organization the user should be ‘filed’ into, the Master server’s network information, the creation of a username (usually the customer’s email address, or the function that computer performs), and a password. Once complete, the client will register with the server and begin backing up the home folder of the currently logged-in customer (by default).

Backing Up Cisco Configurations Using Mac OS X

Friday, February 18th, 2011

Before you make configuration changes on a device you should make a backup of it. You can use basically any platform to back up Cisco devices. Doing so in Mac OS X starts with the Terminal. So to back up a Cisco device you must first connect to the device in Terminal, either through SSH or Telnet.

Then SSH to the device using the ssh command, followed by the username, an @ symbol and then the IP address or hostname of your device. Here, we’ll use the placeholder address 192.0.2.1:

ssh admin@192.0.2.1

Note: One could also use telnet using the same type of string, but ssh is more secure.

Next, provide the password and you will see a prompt with the device name. Once connected to the device you will need to go into enable mode by typing “en” at the command prompt and hit enter. It may prompt you for an elevated privileges password, which you will need to know.

Once complete you will notice that the prompt turns from a > to a # symbol. The # symbol is akin to having root access. Now to back up the configuration of this device you will enter “show run”, which is short for show running-config:

show run

You will see a ←-more→ prompt at the bottom of the page. Just hit the space bar until you are back at the prompt. Then use your mouse to highlight all of the text that was just generated in the Terminal and hit Command-C to copy it. Open your favorite text editor and hit Command-V to paste the text. Be careful to use plain text here (I prefer to just use pico or vi rather than Word or TextEdit). Save the file as your configuration backup for the device.

NOTE: If you also want the IOS (IOS is different than iOS) version info, you can run “show version” instead of the “show run” command, and use the same steps to copy and paste.

If you cannot log into a device remotely, you can use a Keyspan adapter to use the serial port to connect to the device.

Use SSH Tunneling to Access Firewalled Devices

Friday, December 4th, 2009

Many environments have numerous Desktops or Servers which we may need to support remotely, but lack a full-fledged VPN solution. If the client has a server on a DMZ, or is forwarding SSH ports to a specific server, you can use SSH to then access other machines otherwise protected by the firewall.

For instance, say client MyCo has two servers, which I’ll call dmz.myco.com and backup.myco.com (placeholder names). In this scenario, the backup server has no remote access, and dmz.myco.com has SSH access available over port 22. Let’s say the need arises to provide remote support for the backup server, which has both SSH and ARD/VNC enabled.

In this scenario, it is possible to open up a remote ARD session to the backup server from my remote laptop by utilizing ssh tunneling. To do so, I run the following command from my laptop (dmz.myco.com and backup.myco.com are placeholder names for the public and internal servers):

ssh -L 5901:backup.myco.com:5900 mycoadmin@dmz.myco.com -N

This command tells ssh to open up local port 5901 and tunnel it to dmz.myco.com, which will in turn forward the traffic to the backup server over port 5900.

Once I have run this command, I can open up a VNC connection to my local machine, which will then be forwarded through ssh to the client’s private backup server:

open vnc://localhost:5901

Alternatively, you may only want shell access to the firewalled server. To accomplish that, we can instead forward a local port to ssh (once again, from my laptop):

ssh -L 50022:backup.myco.com:22 mycoadmin@dmz.myco.com -N

From here I can ssh to the local port, which will once again forward to the backup server (this time over port 22):

ssh mycoadmin@localhost -p 50022


In order for this to work, ssh must be enabled on any client or server that you want to access. Also, the publicly accessible server must be able to resolve the target name that you provide. For instance, in the above example, if “backup.myco.com” doesn’t properly resolve on dmz.myco.com, then the solution will not work. In this instance, you could specify the internal IP of the backup server (10.0.1.20 here is a placeholder):

ssh -L 50022:10.0.1.20:22 mycoadmin@dmz.myco.com -N

The local port is somewhat arbitrary (5901 and 50022 in my examples); you just want to make sure that the port is not already in use, which can be determined by looking at the output of `netstat -a -p TCP`, or through `lsof -i TCP:50022` where ‘50022’ is the local port you want to open.
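If you tunnel to the same site often, the forwards can live in ~/.ssh/config instead of being retyped. This is a sketch; the host names are placeholders standing in for the public and internal servers discussed above:

```
# Hypothetical ~/.ssh/config entry; host names are placeholders.
Host myco-tunnel
    HostName dmz.myco.com
    User mycoadmin
    # VNC to the internal backup server
    LocalForward 5901 backup.myco.com:5900
    # SSH to the internal backup server
    LocalForward 50022 backup.myco.com:22
```

With that in place, `ssh -N myco-tunnel` opens both forwards at once.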

BRU Server 2.0 Now Available

Friday, July 24th, 2009

BRU Server 2.0 was released this week, offering a long-anticipated update to the popular cross-platform backup suite. The two main features that the TOLIS Group is highlighting are encryption of backup target sets and client-initiated backup.

Whether you are a BRU, Atempo, Bakbone, Backup Exec or Retrospect environment, 318 can assist you with planning, testing, verifying or restoring backups. Contact your 318 account manager today for more details.

Oracle Buys Sun

Monday, April 20th, 2009

Sun had been in merger talks with IBM, but those talks fell through. Today, the Sun website says “Oracle to Buy Sun.” Oracle is the largest database company in the world and has been tinkering with selling support contracts for Linux alongside the Oracle suite of database products, which already includes PeopleSoft, Hyperion and Siebel. This merger, valued at $7.4 billion, will give Oracle the ability to sell bundled hardware solutions, further the Oracle development product offerings and give Oracle one of the best operating systems for running databases on the planet.

Oracle doesn’t just get hardware and Solaris, though. This move also solidifies a plan for Oracle customers to integrate Sun storage. Oracle had previously been working with HP in a partnership that never seemed to gain traction. Then there are Java, MySQL, VirtualBox, GlassFish and more. A number of the Sun contributions will remain open source projects, but overall it’s possible to see a strategy emerging from a new Oracle + Sun organization.

As a Sun partner, 318 can assist its clients through this transition, be it with storage, MySQL, Java, Solaris or Oracle middleware scripting.  Overall, this deal makes a lot of sense and 318 is behind doing whatever possible to ease our clients through the transition.

Finally, for those concerned that Oracle might just be buying Sun to kill off MySQL, keep in mind that the open source community built MySQL in the first place (or was integral to building it) and it can build another in its place just as easily, this time faster and with less required legacy support. MySQL is not a fluke. PostgreSQL or a newer solution would take its place if MySQL were to fall by the wayside under the Oracle helm. Oracle is not going to make MySQL into a martyr of sorts, and is going to want to capitalize on their investment (a billion-dollar purchase by Sun and obviously part of this purchase), especially with a clear business plan for MySQL to be profitable (which is why Sun bought them for such a lofty price in the first place). Overall, Oracle has no reason to kill MySQL; instead, with Siebel, MySQL, Oracle, PeopleSoft, etc., they can simply tout “All Your Databasen Are Belong To Us!”

EMC Celerra NX4 Defaults

Wednesday, April 15th, 2009

The EMC Celerra NX4 comes with a number of IPs (and other settings) set from the factory. The IP addressing, by default, is as follows:

  • Primary Internal Network –
  • Backup Internal Network –
  • Netmask
  • IP of Storage Processor A –
  • IP of Storage Processor B –
  • Gateway IP of Storage Processor A –
  • Gateway IP of Storage Processor B –

ESX Patch Management

Tuesday, April 14th, 2009

VMware’s ESX Server, like any system, needs to be updated regularly. To see what patches have been installed on your ESX server use the following command:

esxupdate query

Once you know what updates have already been applied to your system, it’s time to find the updates that still need to be applied. You can download the patches that have not yet been run from VMware, where you will see a bevy of information about each patch and can determine whether you consider it important to run. At a minimum, all security patches should be run as often as your change control environment allows. Once downloaded, make sure you have enough free space to install the software you’ve just downloaded, and then copy the patches to the server (using ssh, scp or whatever tool you prefer to use to copy files to your ESX host). Now extract the patches prior to running them. To do so use the tar command, as follows:

tar xvzf <patch bundle>.tgz

Once extracted, cd into the patch directory and then run esxupdate with the update command in test mode, as follows:

esxupdate --test update

Provided that the update tests clean, run the update itself with the following command (still with a working directory inside the extracted tarball from a couple of steps ago):

esxupdate update

There are a couple of flags that can be used with esxupdate. Chief amongst them are --noreboot (which skips the reboot after a given update) and -d, -b and -l (which are used for working with bundles and depots).

If esxupdate fails with an error code, it can be cross-referenced in the ESX Patch Management Guide.

You can also run patches without copying the updates to the server manually, although this will require you to know the URL of the patch. To do so, first locate the patch number that you would like to run. Then, open outgoing ports on the server as follows:

esxcfg-firewall --allowOutgoing

Next, issue the esxupdate command with the path embedded:

esxupdate --noreboot -r http://<patch URL> update

Once you’ve looped through all the updates you are looking to run, lock down your ESX firewall again using the following command:

esxcfg-firewall --blockOutgoing

File Replication Pro Story About 318

Wednesday, March 25th, 2009

The File Replication Pro folks have published a customer success story outlining some of the ways we’re using their product. Check it out and if you have any questions about what we’re doing with it feel free to drop us a line!

Unraveling Unified Messaging

Friday, March 13th, 2009

There’s been a lot of talk the past year or two about unified messaging. You may remember the old AT&T All in One commercial where a person out golfing never misses his important call because the call finds him. Or have you ever had a job where every morning you had to check your e-mail, then the voicemail on your phones, and then walk to the fax machine to check your faxes? Well, this week Google released a new service called Google Voice, a revamp of its earlier GrandCentral system. You have one number that people call, and Google routes the call to all of your phones to try to locate you, allowing you to accept or essentially ignore the call. You can also search your emails, voicemails, and SMS messages from the web. Microsoft Exchange offers a system that puts all your email, voicemail and faxes in one centralized location. Weaver just released a service in February that allows Asterisk users to have their voicemail transcribed automatically and e-mailed to them. Below is a chart of services offered by Google, Asterisk, and Microsoft Exchange 2007 Unified Messaging to give you a better understanding of what technology route you may want to go.

Microsoft Exchange 2007 Unified Messaging
Microsoft’s Exchange 2007 Unified Messaging goal is to tie in Email, Fax and Phone into one manageable place. An example that Microsoft uses is that first thing in the morning most people check their email, then check their voicemail, and after check their faxes. Exchange Unified Messaging has the ability to tie together all three of these communication technologies into a single place for management.

Exchange Unified Messaging on its own cannot serve a PBX function, but it harnesses a current PBX infrastructure so that end users have a seamless place in Exchange to manage their communications. The current iteration of Exchange Unified Messaging ships with Exchange 2007. To leverage the entire suite of features, you must use Outlook 2007.

Google Voice
Google Voice is a communication infrastructure much like Exchange Unified Messaging, but seems to be targeted for non-business consumers. Google Voice is the current iteration of what was once known as Google GrandCentral. Its purpose is unified messaging as well, as it ties in your Gmail, SMS and incoming phone calls into your phone account created on Google Voice. Google Voice is an IP-PBX (VoIP) that allows you to make and receive calls with unified messaging capabilities.

Receiving calls can be done through any cell phone that you have, or through the Google Voice web interface. Making calls can be done via Google Voice (web-based) or through any other phone (landline or cell). The price point is very good (as in free): all calls made to US numbers are free (long distance charges to other countries apply, of course). It requires no additional hardware.

Asterisk
Asterisk is an open source IP-PBX (VoIP) platform based on Linux. It requires a computer to run on and can tie your existing land line in with almost any VoIP provider of your choice. Call pricing depends on your phone carrier.


The chart compares the three offerings feature by feature:

Voicemail
  • Google Voice: Yes, stored on Google’s PBX server.
  • Asterisk: Yes, stored on the PBX server.
  • Exchange 2007: Yes, originating from the current PBX, but forwarded to and stored in Exchange.

Email
  • Google Voice: Yes, integrated with Gmail.
  • Asterisk: Yes, SMTP’d to the host of your choice.
  • Exchange 2007: Yes, integrated with Exchange and Outlook.

Transcribing voicemail
  • Google Voice: Not specified.
  • Asterisk: Yes, though not natively; it uses VoiceScribe[1] and then emails you the transcript.
  • Exchange 2007: No, but the user can take notes (including manually transcribing voicemail) so that voicemail is searchable via Outlook.

Price
  • Google Voice: Free to use, and calls to US numbers are free. Your cell provider’s rates still apply, and Google has its own pricing for long distance calling[2].
  • Asterisk: Free to install, use and configure. Call rates depend on your local and/or VoIP carrier.
  • Exchange 2007: Call rates are based on your PBX/call provider. Only certain PBXs are supported[3]. Exchange itself is $699 for Standard or $3,999 for Enterprise, depending on how many storage groups and databases per mailbox server role you need[4]. Both come with Unified Messaging.

Can call more than one of your phones at a time to try to locate you
  • Google Voice: Yes.
  • Asterisk: Yes, but you need to purchase additional trunks (VoIP or PSTN).
  • Exchange 2007: Depends on PBX.

Can automatically locate you and route calls based on Bluetooth proximity
  • Google Voice: Not specified.
  • Asterisk: Not specified.
  • Exchange 2007: Not specified.

Native address book
  • Google Voice: Yes, integrated with your Google account.
  • Asterisk: Not specified.
  • Exchange 2007: Yes, integrated with Exchange Contacts.

Call management
  • Google Voice: Yes, via your phones (and possibly through Google Voice).
  • Asterisk: Yes, via your phones or through HUD.
  • Exchange 2007: Yes, through Outlook and possibly through your PBX software.

Fax
  • Google Voice: Not specified.
  • Asterisk: Yes, but it’s through VoIP, and not reliable[5].
  • Exchange 2007: Yes, through a standard fax line.

Listen to voice messages without switching to another application
  • Google Voice: Yes, integrated with Google Voice.
  • Asterisk: No – you need to use whatever sound application is installed on your computer.
  • Exchange 2007: Yes, integrated with Outlook.

Platform support
  • Google Voice: Unknown, but since it’s web based it may work on Linux, Mac and Windows.
  • Asterisk: Yes – Linux, Mac and Windows.
  • Exchange 2007: No, just Windows with Outlook 2007. You can play messages in Entourage, but you may either have to change the file type in Exchange from *.wma to *.wav, or have Mac users install WMP 9 for OS X[6].

Configure individual voicemail settings
  • Google Voice: Via phone or web.
  • Asterisk: Via phone or web.
  • Exchange 2007: Yes, integrated with Outlook.

View all voicemail in one location
  • Google Voice: Not specified.
  • Asterisk: Not specified.
  • Exchange 2007: Not specified.

Distinguish voice and fax messages from email messages within the mailbox
  • Google Voice: No, just voicemail from email, and only through Google Voice.
  • Asterisk: Not specified.
  • Exchange 2007: Yes, integrated with Outlook.

Determine whether a voice message has already been played
  • Google Voice: Not specified.
  • Asterisk: Not specified.
  • Exchange 2007: Yes, integrated with Outlook.

Add notes to a voicemail message natively
  • Google Voice: Not specified.
  • Asterisk: Not specified.
  • Exchange 2007: Yes, integrated with Outlook.

Reply to a voicemail with email
  • Google Voice: Unknown – not sure if it can work with blocked numbers or telephone numbers not in contacts.
  • Asterisk: Not specified.
  • Exchange 2007: Yes, integrated with Outlook.

Add telephone numbers received to Contacts natively
  • Google Voice: Not specified.
  • Asterisk: Not specified.
  • Exchange 2007: Yes, integrated with Outlook.

Share voicemail
  • Google Voice: Not specified.
  • Asterisk: Not specified.
  • Exchange 2007: Not specified.

Adding a user
  • Google Voice: Free. Requires that each user is registered with a Google account.
  • Asterisk: Free. Just create a new extension for IP phones. For non-IP hard phones, you must buy an FXS card (or connect a regular phone to an ATA).
  • Exchange 2007: You must buy CALs for each user. For Unified Messaging you must have both the Exchange Standard AND Enterprise CAL. The Exchange Standard CAL is $67 and the Exchange Enterprise CAL is $35[7]; you must purchase both for each user. You also need to add the user to your PBX – pricing and licensing depend on the PBX provider.

There are some things that may catch your eye (or not) when you first see this chart. Exchange Unified Messaging is expensive, but offers a lot of features that the other two don’t. From a bird’s-eye view it may also fit your enterprise better if your company’s locations use different types of PBXs, but you want to “unify” all of the communication in Exchange.

If you have a heterogeneous environment or non Windows environment, Asterisk or Google Voice may be a better route for you.

If you are concerned with regulatory compliance, Google Voice may not be your best choice since you do not have a centralized location of all your communication readily available.

When determining which choice is a better fit for your business, carefully weigh your options (price, compliance and room for expansion to name a few). It will be exciting to see how the technologies are managed, and what the future holds for unified communications. If you plan to roll out any of these services, or are in need of consultation, please don’t hesitate to let us know. We’re here to help.

File Replication

Thursday, February 19th, 2009

Performing replication between physical locations is always an interesting task. Perhaps you’re only using your second location for a hot/cold site, or maybe it’s a full blown branch office. In many cases, file replication can be achieved with no scripting, using off the shelf products such as Retrospect or even Carbon Copy Cloner. Other times, the needs are more granular and you may choose to script a solution, as is often done using rsync.

However, a number of customers have found these solutions to leave something to be desired. Enter File Replication Pro. File Replication Pro allows administrators to replicate data between two locations in a variety of fashions and across a variety of operating systems in a highly configurable manner. Furthermore, File Replication Pro provides delta synchronization rather than full file copies, which means that you’re only pushing changes to files and not the full file over your replication medium, greatly reducing required bandwidth. File Replication Pro is also multi-platform (built on Java), allowing administrators to synchronize Sun, Windows, Mac OS X, etc.

If you struggle with File Replication issues, then we can help. Whatever the medium may be, give us a call and we can help you to determine the best solution for your needs!

Configuring a SonicWALL for Fonality/Trixbox

Thursday, August 7th, 2008

The Fonality/Trixbox server and phones should be on the same subnet, separated from the data network.

On the SonicWall:

Under Network/Interfaces, create a new interface for the phone system. Under the Zone option, create a new zone and name it Phone System. Under the “Switch Ports” tab, assign it a port on the SonicWALL. Label this port for the phone system (both in the SonicWALL OS and physically).

Ubuntu 8.04 Released

Sunday, May 11th, 2008

Ubuntu 8.04 is now available – the first major release since 7.10. Code-named Hardy Heron, 8.04 will look familiar to long-time Ubuntu users. But under the hood, 8.04 sports a new kernel (2.6.24-12.13), a new rev of GNOME (2.22), improved graphical elements (such as X.Org 7.3), a spiffy new installer (Wubi), the latest and greatest in software, enhanced security and, of course, more intelligent default settings. The desktop version is free to download.

The new Ubuntu installer comes with a new utility called Wubi. Wubi can run as a Windows application, which means that Windows users will be able to more easily transition to and learn about Ubuntu. Wubi can perform a full installation of Ubuntu as a file on a Windows hard drive. This means that you no longer need to install a second drive or perform complicated partitioning on an existing drive. When you boot up Ubuntu, the system reads and writes to the disk image as though it were a standard drive letter, much like VMware would do. Ubuntu can also be uninstalled as though it were a standard Windows application, using Add/Remove Programs.

The new application set is solid. Firefox 3.0 comes pre-installed. Brasero provides an easier interface for burning CDs and DVDs. PulseAudio now gets installed by default (which is arguably a questionable decision, but we found it worked great for us). The Transmission BitTorrent client is now included by default. Vinagre provides a very nice and streamlined VNC client for remote administration (although the latency for remote users is still a bit of a pain compared to the Microsoft RDP protocol). Inkscape, the popular Adobe Illustrator-like application, has always been easy to install and use, but it now comes bundled with Ubuntu.

In order to play nicer in the enterprise, the security infrastructure of Ubuntu has also had a nice upgrade. The Active Directory plug-in is provided using Likewise Open (unlike Mac OS X, which uses a custom package specifically for this purpose). There is a new PolicyKit, which provides policies similar to GPOs in Windows or MCX in Mac OS X. The default settings in 8.04 are also chosen with more of a security mindset. New memory protection is built into 8.04, primarily to make exploits harder to develop and to prevent rootkits. Finally, UFW (Uncomplicated Firewall) is now built into the system to make firewall administration more accessible to the everyday *nix fan.

Network Administrators will be impressed by the inclusion of many new features. KVM is included in the kernel, and libvirt and virt-manager are provided to make Ubuntu a very desirable virtualization platform. iSCSI support provides more targets on which to store those virtual machines, as well as expanded storage for larger filers (e.g. using Samba 3). Postfix and Dovecot provide a standardized mail server infrastructure out of the box. CUPS in 8.04 now supports Bonjour and Zeroconf protocols as well as the solid standbys of SMB, LPD, JetDirect and of course IPP. Those building web servers will be happy to see Apache 2, PHP 5, Perl, Python and Ruby on Rails (with GEM) and of course Sun’s OpenJDK (community supported). If you need the database side of things there’s MySQL, PostgreSQL, DB2 and Oracle Database Express.

However, if you are just starting out, keep in mind that Ubuntu Server does not come with a windowing system by default – so beef up those command line skills sooner rather than later! We are also still waiting for a roadmap for integrating many of the more enterprise- or network-oriented packages. For example, we now have the PolicyKit and a solid Active Directory client. But how do we push out, en masse, the policies that we want our users to have post-imaging?

So if you use Ubuntu or are interested in getting to know the Linux platform then 8.04 is likely a great move. It’s solid, stable and much improved over 7. It’s easier to migrate, virtualize and work in. The developers should be proud!

Open XML Draft Approved

Saturday, April 12th, 2008

The Microsoft Open XML standard is what Microsoft is hoping will become the standard in document formats. The first step in that process is now complete, with Office Open XML being accepted as a draft standard by ISO, the International Organization for Standardization. ISO is the world’s largest developer of standards and has no governmental affiliation. Office 2007 created a stir by omitting the Open Document Format (ODF), which is already an ISO standard. Many had hoped that ODF would help to spark an uptick in interest in applications such as OpenOffice.org as a replacement for the Microsoft Office Suite of applications. However, the ODF standard has seen slow adoption, in large part due to Microsoft’s omission of it from Office. If Microsoft’s Open XML format receives ratification from ISO as a standard, it would introduce a pair of rival standards into the document community. In many ways, the unofficial standardization of documents around the Microsoft doc format over the past decade has led to an unparalleled ability for organizations to trade information freely. However, many (especially in the open source community) feel that allowing Microsoft to hold all the cards is a dangerous thing, and that by bringing about a truly open standard such as ODF there will be more options in the word processing suites that organizations can use.

The battle between ODF and Open XML is likely to rage on for years as the appeals, votes and red tape continue to drag on. Just to put things in perspective, ISO rejected the Open XML proposal in September of 2007, and after a rewrite based on input from vendors and members of ISO, it was voted in as a draft standard in March. The appeals process doesn’t close until June, but we’re likely to see more red tape for a while given the interests of the parties involved.

Setting Up VPN Clients in OS X, Vista and Windows XP

Thursday, November 29th, 2007

The steps for setting up VPN connections are straightforward for both Macs and PCs. Here are the steps to follow for setting up a new VPN connection from a client desktop or laptop to the server:

Mac OS X (Tiger):

* First, open the ‘Applications’ folder by going to the Finder and choosing “New Finder Window” from the “File” menu. Click on the “Applications” icon, then scroll down until you see the “Internet Connect” icon.
* Click on the “Internet Connect” icon.
* Next, go to the ‘File’ menu and select “New VPN Connection Window.”
* In the window that pops up prompting you to choose which type of VPN, click ‘PPTP,’ then click ‘Continue.’
* In the new window, for the configuration, click on ‘Other’ and select ‘Edit Configurations…’
* A new window will come up. Type in a description of the VPN connection in the Description text field.
* Type in the DNS name of the server you want to connect to as the ‘Server Address.’
* Type in the username you will use to access the server. This username should have already been created on the server.
* In the next text box, enter your VPN password. The password should also have been previously set.
* Un-check ‘Enable VPN on demand’, and leave ‘Encryption’ set to ‘Automatic’.
* Click the ‘OK’ button. Your configuration is saved, and you are ready to connect.

Mac OS X (Leopard):

* Go to the Apple menu in the upper left-hand corner of the top menu bar.
* Click on System Preferences from the drop-down menu.
* Click on the ‘Network’ icon.
* In the right-hand pane, click on the drop-down menu next to ‘Configuration’ (which currently says ‘Default’) and select ‘Add Configuration’.
* Type in a name for the configuration, such as CITES VPN or an alternate name of your choosing.
* In the right-hand pane, enter the following information:

Configuration: DAS VPN (or a name of your choosing)
Server Address:
Account Name: Your guest ID
Encryption: Maximum (128 bit only) from the drop-down menu

* Check the box next to ‘Show VPN status in menu bar’.

Mac OS X (Lion):

* Go to the Apple menu in the upper left-hand corner of the top menu bar.
* Click on System Preferences from the drop-down menu.
* Click on the ‘Network’ icon.
* Click on the ‘plus’ button at the bottom of the left column and choose VPN from the Interface drop-down menu.
* Choose the type of connection from the ‘VPN Type’ menu (typically PPTP).
* Label the connection with a name of your choosing in the ‘Service Name’ field.
* Enter the proper information in the ‘Server Address’ and ‘Account Name’ fields.
* If you are not using a shared computer, you can click on the ‘Authentication Settings’ button and enter your password to store it for future sessions.
* Check the box labeled ‘Show VPN status in menu bar’.
* From the menu, choose Connect yourchosenVPNlabel – the status of the connection will update and start counting seconds once you are connected.
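Once the VPN service is defined in the Network preference pane, OS X (Lion and later) also lets you drive it from the shell with scutil — handy for scripting. A small sketch; the service name below is a placeholder for whatever you typed in the ‘Service Name’ field:

```shell
# List configured VPN services and their current state
scutil --nc list

# Start, check and stop a connection by its Service Name ("My VPN" is hypothetical)
scutil --nc start "My VPN"
scutil --nc status "My VPN"
scutil --nc stop "My VPN"
```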

Windows Vista:

1. From the Start Menu, right-click on Network and select Properties. This will open the Network and Sharing Center.
2. On the left side, click on Set up a connection or network.
3. Select Connect to a workplace.
4. Click on the Next button.
5. Select Use my Internet connection (VPN).
6. Replace the example address with the actual WAN IP address of the VPN server you will be connecting to. You can also change the name from VPN Connection to something more meaningful.
7. Click on the Next button.
8. Enter the User Name and Password of your VPN account.
9. Now, from the Network and Sharing Center, you can go to Manage Network Connections to see the new VPN connection. This is also where you disconnect. To reconnect later, go to the Network and Sharing Center and click Connect to a network.
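On the Windows side, once the connection entry exists you can also connect and disconnect from a command prompt with rasdial — useful for batch files. The entry name and credentials below are placeholders:

```shell
rem Dial the stored VPN entry; credentials can be passed inline
rasdial "VPN Connection" myuser mypassword

rem Tear the connection down when finished
rasdial "VPN Connection" /disconnect
```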

Citrix and Open Source

Friday, November 2nd, 2007

It seems like everyone wants to dabble in the Open Source market these days. First came RedHat, VA Linux and other public companies using Open Source technologies to ramp up. Then IT giants such as Novell, Sun and Apple started to come to market with products faster thanks to their newfound Open Source roots. Now a lot of other companies are jumping on the bandwagon, introducing products based on Open Source technologies or purchasing other companies to help them do so quickly.

Citrix has purchased XenSource, a company that provided virtualization products based on the Xen Open Source virtualization platform. XenSource is now a product of Citrix meant to compete directly with VMware on the virtualization scene. Why use something like XenSource instead of just building a virtual cluster based on the actual Open Source Xen packages? Citrix offers annual support plans for Standard Edition, which allow customers to receive support. In addition, Citrix is providing free web-based resources, including online product documentation, a knowledge base and discussion forums, as is done with their popular Metaframe products. And of course, XenSource becomes the preferred platform on which to run Citrix clusters. Not that VMware won’t do a fine job, but support will be a lot easier if you’re using XenSource.

Leopard: The New Terminal

Saturday, October 27th, 2007

Apple has been slowly winning over a lot of traditional Unix and Linux converts. This new breed of switcher is after a cool shell environment. In Leopard, Apple has upgraded Terminal to provide a whole slew of new features that are sure to continue winning new converts. Let’s just take a look at a few of them:

* Secure Keyboard Entry – Prevents other applications from detecting keystrokes used in Terminal. Enable this using the Terminal menu.
* Tabbed Interface – I always have 3 shell windows open. That’s how I roll. But with the new tabbed interface (which you can access using the Command-T keystroke) I find that I’m using two shell windows with 3 tabs each. This gives me the ability to have a man page or process list on one side of my screen while being able to run other commands on the other side. You can fire up 2 shell windows and then open as many tabs as you like.
* Export Settings – This isn’t new in Leopard, but what is new is that the tabs get exported along with window positions, layouts, themes and backgrounds.
* Themes – Glass, Homebrew, Novel, Red Sands – these themes give you prebuilt templates for how you view your shell, including background, text color and transparency. Can you imagine Steve sitting in his office at Apple dinking around with the Homebrew theme?
* Window Groups – A group of windows with a saved location, tabbed layout, shell configuration and settings.
* Terminal Inspector – Switch themes on the fly, view running processes and increase the columns and rows of a shell environment.
* Titles – Set titles for your terminal windows so you can remember what was where.
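Titles can also be set straight from the shell, since Terminal honors the standard xterm-style title escape sequence. A small sketch (the helper function name is ours, not Apple’s):

```shell
# Set the Terminal window/tab title using the xterm OSC 0 escape sequence,
# which Leopard's Terminal understands: ESC ] 0 ; <title> BEL
set_title() {
  printf '\033]0;%s\007' "$1"
}

# Label this window so you remember what was where
set_title "build server"
```

Drop the function into your .bash_profile and you can retitle windows per project or per host.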