Archive for the ‘Web Development’ Category

…’Til You Make It

Monday, January 7th, 2013

Say you need a bunch of Apple IDs, and you need them pronto. There’s a form you can fill out, a bunch of questions floating in a window in some application; it can feel very… manual. A gentleman on the Enterprise iOS site stepped in, filling the void with an AppleScript that could batch-create IDs with iTunes (and it has seen updates thanks to Aaron Freimark).

That bikeshed, though, was just not quite the color I was looking for. I decided to Fake it. Are we not Professional Computer Operators?

Before I go into the details, a different hypothetical use case: say you just migrated mail servers, and didn’t do quite enough archiving previously. Client-side moves may be impractical or resource-intensive. So you’d rather archive server-side, but can’t manipulate the mail server directly, and the webmail GUI is a touch cumbersome: are we relegated to ‘select all -> move -> choose folder -> confirm’ while our life-force drains away?

Fake is described as a tool for web automation and testing. It’s been around for a bit, but it took an ‘Aha!’ moment while pondering these use cases for me to realize its power. What makes it genius is that you don’t need to scour HTML source to find the id of the element you want to interact with: just Control-drag to the element and specify what you want to do with it. (There are top-notch videos describing these options on the website.) And it can loop. And delay (either globally or between tasks), and the tasks can be grouped and disabled in sections, organized into a workflow, and saved for later use. (Can you tell I’m a bit giddy about it?)

So that mail archive can loop away while you do the dishes. Got to the end of a date range? Pause it, change the destination folder mid-loop, and keep it going. (There is a way to inspect the elements and build a conditional on a date stamp, but I didn’t get that crazy with it… yet.)

And now even verifying the email addresses used with the Apple ID can be automated! Blessed be the lazy sysadmin.


Thursday, November 29th, 2012

It was our privilege to be contacted by Bizappcenter to take part in a demo of their ‘Business App Store’ solution. They have been active on the Simian mailing list for some time, and have a product to help the adoption of the technologies pioneered by Greg Neagle of Disney Animation Studios (Munki) and the Google Mac Operations Team. Our experience with the product is as follows.

To start, we were given admin logins to our portal. The instructions guide you through getting started with a normal software patch management workflow, although certain setup steps need to be taken into account. First, you must add users and groups manually; there are no hooks for LDAP or Active Directory at present (although those are on the road map). Admins can enter the serial number of each user’s computer, which allows a package to be generated with the proper certificates. Invitations can then be sent to users, who must install the client software that manages the apps specified by the admin from that point forward.


Sample applications are already loaded into the ‘App Catalog’, which can be configured to be installed for a group or a specific user. Uploading a drag-and-drop app in a zip archive worked without a hitch, as did uninstallation. End users can log into the web interface with the credentials emailed to them as part of the invitation, and can even ‘approve’ optional apps to become managed installs. This is a significant twist on the features offered by the rest of the web interfaces built on top of Munki, and more features (including cross-platform support) are supposedly planned.


If you’d like to discuss Mac application and patch management options, including options such as BizAppCenter for providing a custom app store for your organization, please contact us.

Bash Tidbits

Friday, November 23rd, 2012

If you’re like me, you have a fairly customized shell environment full of aliases, functions and other goodies to assist with the various sysadmin tasks you need to do.  This makes being a sysadmin easy when you’re up and running on your primary machine, but what happens when your main machine crashes?

Last weekend my laptop started limping through the day and finally dropped dead, and I was left with a pile of work to finish on my secondary machine.  Little to no customization was present on that machine, which made me nearly pull out my hair on more than one occasion.

Below is a list of my personal shell customizations and other goodies that you may find useful to have as well.  This is easily installed into your ~/.bashrc or ~/.bash_profile file to run every time you open a new shell.


# Useful Variables
export CLICOLOR=1
export LSCOLORS=GxFxCxDxBxegedabagaced
# SN holds the local subnet prefix, e.g. 192.168.25 (gateway from netstat, last octet stripped)
export SN=`netstat -nr | grep -m 1 -iE 'default|0.0.0.0' | awk '{print $2}' | sed 's/\.[0-9]*$//'`
export ph=""
PS1='\[\033[0;37m\]\u\[\033[0m\]@\[\033[1;35m\]\h\[\033[0m\]:\[\033[1;36m\]\w\[\033[0m\]\$ '

# Aliases
alias arin='whois -h'
alias grep='grep --color'
alias locate='locate -i'
alias ls='ls -lh'
alias ns='nslookup'
alias nsmx='nslookup -q=mx'
alias pg='ping'
alias ph='ping'
alias phobos='ssh -i ~/.ssh/identity -p 2200 -X -C -t screen -R'
alias pr='ping `netstat -nr | grep -m 1 -iE '\''default|0.0.0.0'\'' | awk '\''{print $2}'\''`'
alias py='ping'

At the top of the file there are two variables that set nice-looking colors in the Terminal to make it more readable.

One of my favourite little shortcuts comes next.  You’ll notice there is a variable called SN: it’s a shortcut for the subnet that you happen to be on.  I find myself having to do things to the various hosts on my subnet, so if I can save having to type 192.168.25 fifty times a day then that’s definitely useful.  Here are a few examples of how to use it:

ping $SN.10
nmap -p 80 $SN.*
ssh admin@$SN.40

Also related is the alias named pr, which finds the router and pings it to make sure it’s up.
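To see what those pipelines are actually doing, here is the same field extraction run against a canned netstat-style line (the addresses are made up), so nothing depends on your live routing table:

```shell
# A sample routing-table line, as `netstat -nr` might print it on a Mac:
line="default            192.168.25.1       UGSc           en0"

gw=$(echo "$line" | awk '{print $2}')    # second whitespace-separated field: the gateway
sn=$(echo "$gw" | sed 's/\.[0-9]*$//')   # chop the final octet to get the subnet prefix

echo "$gw"   # 192.168.25.1
echo "$sn"   # 192.168.25
```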

Continuing down the list there is ph, which points at my personal server (the hostname is blanked out above).  Useful for all sorts of shortcuts, and it can save a fair amount of typing.  Examples:

ssh alt229@$ph
scp ./test.txt alt229@$ph:~/

There are a bunch of other useful aliases there too so feel free to poach some of these for your own environment!

Introducing Splunk: Funny name, serious logging

Thursday, November 15th, 2012

So, my boss says:

“Write an article called ‘Getting Started with Splunk.’”

I reply:

“What, you think I know all this stuff? This really would be a getting started article.”

But here it is and WOW is Splunk cool!

My only experience with Splunk until a couple of days ago was seeing a T-shirt with “Log is my copilot”. I knew it had something to do with gathering log files and making them easier to read and search. In about an hour I had gone to Splunk’s website to research the product, downloaded and installed it, and started viewing logs from my own system. The Splunk folks have made getting their product into their customers’ hands easy and getting started even easier.

What is Splunk?

Simply put, Splunk can gather just about any kind of data that goes into a log (system logs, website metrics, etc.) into one place and make viewing that data easy. It’s accessed via a web browser, so it’s available on any computer or mobile device such as an iPad.

What do I need to run Splunk?

Practically any common operating system today can run Splunk: Mac OS X, Linux, Windows, FreeBSD and more.

How much does Splunk cost?

Don’t worry about that right now. Download and install the free version. It takes minutes to install and is a no-brainer. Let’s get started.

Getting Splunk

IT managers and directors may be interested in watching the introductory and business case videos with the corporate speak (“operational intelligence” anyone?) and company endorsements. Techs will be interested in getting started. Right on their home page is a big green Free Download button. Go there, click it and locate the downloader for your OS of choice. I downloaded the Mac OS X 10.7 installer to test (and installed it on OS X 10.8 without any issues).

Splunk home

This does require a sign-up to create an account. It takes less than a minute to complete. After submitting the information the 100 MB download begins right away.

While waiting for the download…

When the download is on its way the Splunk folks kindly redirect to a page with some short videos to watch while waiting. Watch this first one called Getting data into Splunk. It’s only a few minutes and this is the first thing to do after getting into Splunk.

Installing and starting Splunk

The download arrives as a double-clickable Apple Installer package. Double-click and install it. Toward the end it opens a simple TextEdit window with instructions for how to start, stop and access the newly installed Splunk site.

Install done

Files are installed in /Applications/splunk and resemble a UNIX file system.

Splunk application folder

Open the Terminal application found in /Applications/Utilities and run the command /Applications/splunk/bin/splunk start. If this is the first time running Splunk, it prompts to accept its license agreement. Tap the spacebar to scroll through the agreement, or type “q” to skip to the end, and then agree to the license.
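For later reference, the same splunk binary handles the rest of the daemon’s lifecycle too. A quick sketch, assuming the default /Applications/splunk install path:

```shell
/Applications/splunk/bin/splunk start     # start splunkd and the web interface
/Applications/splunk/bin/splunk status    # check whether Splunk is running
/Applications/splunk/bin/splunk restart   # bounce it after configuration changes
/Applications/splunk/bin/splunk stop      # shut it down cleanly
```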


Accepting the agreement continues starting Splunk, which displays some brief setup messages.

Starting Splunk

The setup then provides the local HTTP address for the newly installed Splunk site. Open this in a web browser to get to the login screen. The first login requires that the administrator account password be reset.

Splunk login

Following along with the Getting data into Splunk video, Splunk will need some information. Mac OS X stores its own log files; let’s point Splunk at those.

Click the Add Data link to begin.

New Splunk home

Since Mac OS X’s log files are local to the machine, click A file or directory of files.

Add files

Click Next to specify local files.

Add local logs

This opens a window that exposes not only Mac OS X’s visible folders but its invisible folders as well. Browse to /var/log/system.log and click the Select button.

Browse logs folder

For now, opt to skip previewing the log file and click Continue.

Path to system.log

Now, let’s opt to monitor not only the system.log file but the entire /var/log folder containing dozens of other log files as well. Note that Splunk can watch rotated and zipped log files too. Click Save to finish adding logs.

Add /var/log folder

Let’s start searching!

Success, start searching

The Search window initially displays a list of all logs Splunk is monitoring. To narrow the search, change the time filter drop-down menu to Last 60 minutes. This will make the results a little easier to see on a system that’s only been running a short while.

Last 24 hours

Now, search for install*. Without the asterisk as a wildcard character, Splunk would only search for the literal word “install”. Splunk supports not only wildcard searches but booleans, parentheses, quotes, etc. It returns every instance recorded in the logs that matches the search criteria, and also draws an interactive bar chart along the top of the page indicating the number of occurrences found at particular times.

Search for install

To further refine the search, Option+click most any word in the log entries below and Splunk will automatically add the necessary syntax to remove an item. In this case the install* search returned install, installer and installd. Option+clicking installd changed the search criteria to install* NOT installd.
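A few more search strings in the same vein, just to show the shape of the syntax: a wildcard with a boolean exclusion, an OR of two terms, a quoted exact phrase, and a parenthesized group (the terms themselves are arbitrary examples):

```
install* NOT installd
error OR fail*
"backup complete"
(sudo OR su) user*
```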

Modified search

Now what?

Continue exploring the videos to understand Splunk’s possibilities and take advantage of its Splunk Tutorial, which is available online as well as in PDF format for offline viewing. They do a great job leading users through setup and creating reports.

Still asking about price? Good.

The free version remains free but doesn’t include many of the features that really make it sing, such as monitoring and alerts, multiple user accounts, and support beyond the Splunk website. Cost depends primarily on the amount of data you want Splunk to ingest and watch. It’s not cheap, but for an enterprise needing to meet certain service level requirements it beats browsing through multiple servers trying to find the right log with the right information.

FYI, putting together this 1,000-word article probably took me 10 times longer than performing the Splunk install itself and beginning to learn it. It’s really well-done and easy to use. Splunk makes getting started simple.

Using Squidman as a Web Proxy for OS X

Thursday, October 27th, 2011

Squid is an open source package that caches web files on a local server, increasing throughput for users and decreasing the amount of traffic on WAN connections. A Mac OS X software package named SquidMan, which includes Squid, makes installing and using Squid much easier, giving you nice buttons for management rather than hand-editing Squid’s configuration files.

Once SquidMan is downloaded, copy the SquidMan application bundle to the /Applications directory. Then open it. At the Helper Tool Installation screen click on the Yes button.

At the Squid Missing screen click on the OK button to install squid itself.

The Preferences screen then opens. Click on the Clients tab and, if you would like to restrict access to only a set of IP addresses, define them (or use the net mask to define a range).

Click on the General tab. Here, provide the following information:

  • HTTP Port: The port number that the proxy will run on.
  • Visible hostname: The hostname of the server.
  • Cache size: The total amount of space used for the proxy’s cache.
  • Maximum object size: The maximum size of a single cached file.
  • Rotate logs: The frequency with which log files are rotated (I usually use Manually here).
  • Start Squid on launch: Automatically start squid when SquidMan is launched, optionally delayed by x seconds.
  • Quit Squid on logout: Define whether logging out of the server also stops squid.
  • Show errors produced by Squid: Displays squid’s errors in SquidMan.

Click on the Parent tab and define a proxy server that this one will use, if there is one; otherwise squid just accesses the web directly. This feature is only used if you are daisy-chaining multiple squid servers.

Click on the Direct tab and enter any sites that should not be proxied. Internal staging environments are a great example of sites that should bypass proxy servers.

At the Template tab, enter any custom variables.

Squid is usually used to cache and speed up web access, so the default configuration file is optimized for small files. In order to cache larger files effectively, change the configuration to allow for larger files (up to 64 megabytes) and allow for more total disk storage of cached files (up to 8 gigabytes in our tests for a few specific projects, but much larger is fine). This usually depends on the total available disk space on the machine which will run squid.

These are some of the options we updated in squid.conf (via the Template tab) for a specific project we’re working on:

http_port 3128 transparent (add transparent if using NAT to redirect HTTP requests)
maximum_object_size_in_memory 65536 KB
cache_dir ufs /usr/local/var/squid/cache 8192 16 256
maximum_object_size 65536 KB
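After editing the Template, it’s worth letting squid validate the file before restarting. A sketch, assuming the binary and configuration paths from this SquidMan install:

```shell
# Ask squid to parse the config and report any syntax errors, then reload it
/usr/local/squid/sbin/squid -k parse -f /Users/admin/Library/Preferences/squid.conf
/usr/local/squid/sbin/squid -k reconfigure
```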

These days, we prefer to run squid from NetBSD’s pkgsrc, although any method of installation (such as the SquidMan approach) should be acceptable.

Next, switch back to the SquidMan application (which should have been running the whole time) and click Start Squid.

The squid daemon then starts. Looking at the processes running on the host reveals that it is run as follows:

/usr/local/squid/sbin/squid -f /Users/admin/Library/Preferences/squid.conf

Client systems can then be configured to use the squid proxy, or a PAC (proxy auto-config) file can be used to configure clients. Another option is transparent proxying:

rdr de0 port 80 -> (local Squid server) port 3128 tcp
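That rule is in ipfilter/ipnat syntax; on systems that use pf instead, the equivalent redirect might look like the following sketch (the interface and address are hypothetical placeholders for your LAN interface and squid host):

```
# pf.conf sketch: send LAN web traffic arriving on en1 to squid on port 3128
rdr pass on en1 inet proto tcp from any to any port 80 -> 192.168.25.5 port 3128
```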

Deploying Font Servers

Friday, October 21st, 2011

Mac OS X has come with the ability to activate and deactivate fonts on the fly since 10.3, when Font Book was introduced. Font Book allows a single user to manage their fonts easily, but many will find that managing fonts on a per-computer basis ends up not being enough. Which begs the question: who needs a font server? A simplistic answer is any organization with more than 5 users working in a collaborative environment. This could be creative print shops, editorial, motion graphics, advertising agencies and other creative environments. But corporate environments where font licensing and compliance are important are also great candidates.

Lack of font management is a cost center for many organizations: there is a loss of productivity every time a user has to manually add fonts when opening a co-worker’s documents, or the cost of a job going out with the wrong version of a font. Other benefits of font servers are separate font sets for different workgroups and the ability to isolate corrupt fonts when cleaning up large font libraries, along with quick searching and identification of fonts.

Font Management and Best Practices

Anyone who uses fonts for daily workflow needs font management. This could be a standalone product such as Suitcase Fusion or Font Agent Pro. But larger environments invariably need to collaborate and share fonts between users, meaning many environments need font servers. Two such products are Extensis Universal Type Server and Font Agent Pro Server. Before adding font management products, users should clean up any fonts loaded or installed prior to moving to a managed font environment. Places to look for fonts when cleaning them up include the following:

  • ~/Library/Fonts
  • /Library/Fonts
  • /System/Library/Fonts

Leave in place any necessary system fonts, the Microsoft Web Core fonts, and any required Adobe fonts.

The best resource for this process is the Extensis guide Font Best Practices in OS X v.7.

Types of Font Server Products Available

There are two major font server publishers: Extensis and Insider Software. Both have workgroup and enterprise products, and all of their server products work on a client/server model. Both can sync entire font sets or serve fonts on demand. Extensis splits its line at 10 clients: Universal Type Server Lite is a 10-client product that lacks Enterprise features, such as the ability to use a SQL database or integrate with Open Directory or Active Directory. The full Universal Type Server Professional adds directory integration, external database support, and font compliance features, and is sold as a 10-user license with additional per-seat licenses.

Insider Software offers two levels of font servers. The first, FontAgent Pro Team Server, is designed for small workgroups and sold in 5- or 10-client configurations. The next level, Font Agent Pro Enterprise Server, adds the same directory services integration as Universal Type Server Professional, plus Kerberos single sign-on, server replication and failover. It uses the same per-seat pricing structure as Universal Type Server Professional.

A third tool is also available in Monotype Font Explorer, which we will look at later in this article.

Pre-Deployment Strategies and Projects

Before any font server deployment, there are a few things to take into consideration. First is the number of clients; this will guide you to the appropriate product. Also note whether directory integration and compliance are needed, and whether failover or a robust database is important. The most important part of any font server installation is the fonts themselves. How many are there, where are they coming from, are separate workgroups needed? Are all your fonts legal? In my experience, probably not. Is legal compliance required for your organization or your clients? What is the preferred font type: PostScript Type 1, OpenType? What version are the fonts? Most fonts have been “acquired” over time, with some PostScript fonts dating back to the early-to-mid nineties. As a font server is just a database, the axiom “garbage in, garbage out” is true here as well. This should lead to a pre-deployment font library consolidation and cleanup, which can either be done by 318 or we can train you to perform this task. If compliance is an issue, this is where we would weed out unlicensed fonts, which in my experience is about 90% of all fonts. A clean, organized font set is the most important part of pre-deployment.

A major part of any font server rollout should be compliance and licensing. This allows for the tracking and reporting of font licenses to make sure the organization stays in compliance.


Universal Type Server includes the ability to generate and export reports to help you determine whether you are complying with your font licenses. The font compliance feature only tracks your licensing compliance; it does not restrict access to noncompliant fonts. To understand how font licensing compliance works, let’s look at a typical example of using licenses and the font compliance report in your environment.

Say you are starting up your own design shop and need a good group of licensed fonts for your designers to create projects that will bring you fame and fortune. You know that fonts are valuable, and you want to be sure that you have purchased enough licenses for your requirements. So, you purchase a 10-user license of a sizable font library. Using the Universal Type Client, these fonts are added to a Type Server workgroup as a set. A font license is then created and the Number of Seats field is set to 10. This license is then applied to all fonts in the set.

When you run the font compliance report, Universal Type Server compares the number of seats allowed to the total number of unique users who have access to the workgroup. If more users have access than licenses available, the fonts are listed as “non-compliant.” You can now either remove users from the workgroup or purchase more font licenses to become compliant.

Universal Type Server is unique among these products in that it uses a checksum process to catalog fonts; the others just use file names and paths.

Universal Type Server can limit users to downloading only fonts installed by administrators. For initial deployment, each user does not need to download all of the fonts, which helps in environments with a lot of fonts (e.g. more than 5 GB) that need to be distributed to several hundred clients; if each user had to download all of the fonts (e.g. each time they get imaged), they could lose a production system for some time.

Universal Type Server Deployment

Universal Type Server system requirements include the following:

Macintosh Server

•          Mac OS X v10.5.7 or 10.6; Mac OS X Server 10.5 or 10.6
•          1.6 GHz or faster 32-bit (x86) or 64-bit (x64) processor (PowerPC is not supported)
•          1 GB available RAM
•          250 MB of hard disk space + space for fonts
•          Safari 3.0 or Firefox 3.0 or higher*
•          Adobe Flash Player 10 or higher*

Windows Server

•          Windows XP SP3 (32-bit only), Server 2003 SP2, Server 2008 SP2 (32 or 64-bit version**)
•          P4 or faster processor***
•          1 GB available RAM
•          250 MB of hard disk space + space for fonts
•          Internet Explorer 7 or Firefox 3.0 or higher*
•          Adobe Flash Player 10 or higher*
•          Adobe Reader 7 to read PDF documentation*
•          Microsoft .NET 3.5 or higher

Universal Type Server Installation Process:

1.         Verify server system requirements
2.         Run the installer on the target server machine
3.         Login to the Server Administration web interface
4.         Serialize the server
5.         Set the Bonjour Name
6.         Resolve any port conflicts
7.         Set any desired server configuration options, including backup schedule, log file configuration, secure connection options, and any other necessary server settings.
8.         After installing the server, configure workgroups and roles, and add users.

The basic user and workgroup configuration steps include:

1.   Plan your configuration
2.   Create workgroups
3.   Create new users
4.   Add users to workgroups
5.   Assign workgroup roles to users
6.   Modify user settings as required

Optional Setup:

  1. Managing System Fonts with a System Font Policy
    The System Font Policy feature allows Universal Type Server administrators
    to create a list of system fonts that are allowed in a user’s system font folder.
  2. Font Compliance Reporting
    The font compliance feature only allows you to track your licensing
    compliance and does not restrict access to noncompliant fonts.
  3. Directory Integration
    Directory integration allows network administrators to automatically
    synchronize users from an LDAP service
    (Active Directory on Windows or Open Directory on Mac OS X) with Universal Type Server workgroups.

* UTS Documentation:

Both Universal Type Server Professional and Font Agent Pro Enterprise can be configured for Open Directory, Active Directory, and LDAP integration, and both can use Kerberos single sign-on. Universal Type Server Professional directory integration instructions can be found in the UTS 2 Users and Workgroups Administration Guide. Some users have reported issues connecting to Open Directory (which happens with all products, not just this one).

Universal Type Server runs its administrative functions in Flash, which many do not like.

Monotype Font Explorer

Monotype Font Explorer is a third tool that can be used to manage fonts, and it addresses some things that some environments do not like about Universal Type Server or Font Agent Pro. Let’s face it, the reason there are multiple products and multiple workflows is that some work better for some environments and workflows than others. For example, Font Agent Pro stores master fonts on one client machine, which is then synchronized to the server, and from there to the rest of the clients; not everyone wants a client system acting as a master to the server. Font Explorer keeps the master on the server, groups and synchronization work well, and administration is in the same window as font management. Best of all, Font Explorer is also typically cheaper than its server-based competitors in the font management space.

Extensis publishes a guide as to which fonts to include in the system and which to handle in the font management software. According to Apple documentation, fonts in my ~/Library/Fonts folder take precedence over fonts in /Library/Fonts, which in turn take precedence over /System/Library/Fonts. That means that if I install Times in my ~/Library/Fonts folder, it will be used instead of the font with the same name in /Library/Fonts or /System/Library/Fonts. So why should I care which font is installed where; shouldn’t the font management application simply take precedence over the others? If it does not take precedence, where in the chain is it actually activating fonts? Maybe fonts are handled in these solutions in parallel with the system mechanism? That’s the only explanation I can find, but is it only valid for UTS, or is it also valid for the other solutions?

End User Training and Font Czar

No font server installation would be complete without end user training and the appointment of a Font Czar. User training can be a fairly easy endeavor if client systems are using the same publisher’s stand-alone font client. Other times it could entail discussing licensing and compliance concepts along with adding metadata to fonts. An onsite Font Czar (or more than one) is very important to font server installations. The Font Czar cleans up and ingests new fonts, adds new users to the font server, and in general acts as the Font Admin. This is usually a senior designer or the technical point of contact for the creative environment.


Font Book is adequate for most users who don’t need a server. Universal Type Server, Font Agent Pro and FontExplorer are all great products if you need a font server. They are all installed centrally and allow end users to administer fonts, based on the server configuration and group memberships. They all work with directory services (some better than others) and can be mass deployed. In big workgroups or enterprises, where only a few people handle the administration of fonts for a lot of people, a centralized font management solution is a must. But in much smaller organizations it requires care and feeding, a soft cost that often rivals the cost of purchasing the solution.

Finally, test all of the tools available. Each exists for a reason. Find the one that works with the workflow of your environment before purchasing and installing anything.

Note: Thanks to Søren Theilgaard of Humac for some of the FontExplorer text!

Final Cut Pro X

Tuesday, September 20th, 2011

Version 10.0.1 of Final Cut Pro X is now out. This update returns the ability to use Final Cut Pro X projects and Events on Xsan, a must for multi-user environments. Users can now see each other’s media and projects, and edit them from any system on the SAN, as with previous versions of Final Cut.

Additionally, there are some other new features, including custom starting timecode, the new Tribute theme, GPU-accelerated exports, one-step transitions, media stem exports and, of course, XML support. XML support is very important as it introduces the ability to integrate Final Cut Pro X with asset management systems or APIs from other applications. The ability to interact with other tools helps in planning and implementing an automated workflow, reducing the labor for recurring tasks common in media environments.

Apple also now provides a free 30-day trial of Final Cut Pro X. If your organization is considering migrating from Final Cut Studio to Final Cut Pro X, or if you have a Final Cut Server based asset management solution that you would like to migrate to something newer and supported, please contact your 318 Professional Services Manager.

Suppressing the PHP Version

Thursday, April 28th, 2011

Yesterday, we looked at hiding the version of Apache being run on a web server. Today we’re going to look at suppressing the version of PHP.

By default, the PHP configuration file, php.ini, is stored at /etc/php5/apache2/php.ini (in most distributions of Linux) or just /etc/php.ini (as with Mac OS X). Edit this file:

vi /etc/php.ini

Then locate the expose_php variable within the file. Once found, set it to Off as follows:

expose_php = Off

Doing so will not improve the overall security of a system (unless you believe in security through obscurity). However, it is a good idea and will help defeat a number of vulnerability scanners. If you do suppress the Apache and PHP version information for the sake of passing a vulnerability scan on a backported distribution of one of the packages, it would be a good idea to check the CVEs for the version you are running and verify that you are actually secure.
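The edit itself can also be scripted, which is handy across several servers. A minimal sketch that works on a scratch copy in /tmp so it is safe to try anywhere; in practice, point it at your real php.ini and restart Apache afterward:

```shell
# Demo on a scratch copy: a php.ini with the version banner enabled
printf 'engine = On\nexpose_php = On\n' > /tmp/php.ini

# Flip expose_php to Off (keeping a .bak of the original), then confirm
sed -i.bak 's/^expose_php *= *On/expose_php = Off/' /tmp/php.ini
grep '^expose_php' /tmp/php.ini   # expose_php = Off
```

The `-i.bak` form of in-place editing is accepted by both GNU and BSD sed, so the same line works on Linux and Mac OS X.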

Configuring PHP in IIS on Windows Server 2003

Thursday, August 19th, 2010

By default, a site configured in IIS 6 will not support PHP. An extension mapping must be created so that IIS will know how to handle PHP scripts.

This assumes that PHP has been installed on the server in question.

1. Right-click on the site in question and choose Properties.
2. In the Properties box, click on Home Directory, and then Configuration.
3. Under Application Extensions, click Add.
4. Either enter or browse to the PHP executable, php5isapi.dll.
5. Under extension, enter “.php”

You have the option to limit the HTTP verbs that PHP scripts will respond to. The limitations you impose depend on the security requirements of the client, but GET, HEAD, and POST should be enough for most PHP applications. Verbs should be separated by commas, for example: GET,HEAD,POST

6. Save your changes, and restart IIS.

WordPress 3.0 “Thelonious” Now Available

Thursday, June 24th, 2010

As we wrote back in March, version 3.0 of WordPress was coming soon. Soon happened last week, as thousands of people started downloading and upgrading (more than 1 million downloads in the first week!).

Our first site upgrade went smoothly, although the automatic upgrade of certain plugins still fails and must be done manually. As always, it’s a good idea to make complete, tested backups of your site files and database dumps before starting the upgrade process.

WordPress 3.0 offers a lot of new online publishing options – call your 318 account manager today, or email for more information.

UPDATE: Our second site upgrade had to be reverted to version 2.9.2 temporarily due to an issue with the WP Events Calendar plugin. Since we had our site backups readily available, this was a quick and easy process.

In other WordPress community news, Automattic (the company behind WordPress.com) announced a beta program for VaultPress, an online, automated method of backing up and securing your WordPress-based website.

The beta plans start at $15/month and include protection for the entire WordPress environment, including the WordPress files (theme, plugins, options), content (posts, photos, videos, audio files), and all comments. Implementation is via the VaultPress plug-in.

WordPress 3.0 on the Horizon

Wednesday, March 24th, 2010

According to the current project schedule, WordPress 3.0 will hit the release candidate milestone next month, with a target release date of May 1, 2010. Lots of people have been writing about the new features of this release, but here are some of the highlights from the official list:

  • The merge with WordPress Multi-User (and multisite capabilities)
  • Better support for custom post types
  • Better menu management
  • New default theme “Twenty Ten”
  • Custom Navigation
  • Custom Backgrounds
  • Choose username for the first account, rather than using ‘admin’
  • New template files for custom post types
  • Author specific templates
  • jQuery updated to 1.4.2

The code merge between “normal” WordPress and WordPress MU is a major undertaking. This will also allow BuddyPress to officially be installed on non-multiuser sites. The new default theme and ability to easily modify backgrounds should allow non-designers to create a nice custom site without having to know too much about theme design. The WooNav addition looks great and the elimination of the default admin user is something we talked about in our last post on WordPress security.

This will be a major release that should be tested in a non-production environment before being deployed on your server. 318 will be ready to help you with this upgrade when it’s released – call us at 877.318.1318 to schedule an appointment today!

WordPress Security Auditing

Thursday, March 11th, 2010

After reading Sarah Gooding’s article, 7 Quick Strategies to Beef Up Your Security, we decided to take a look at our own WordPress settings here on the 318 Tech Journal.

Deleting the Default Admin User

Creating a new user with admin permissions, then logging in as that user and deleting the default “admin” account is great advice. Just make sure you assign all of the old admin user’s posts and links to the new account. Another caveat: if you are using the WPG2 plugin with a Gallery2 installation, make sure to remove the Gallery2 user links before deleting the old admin account.

Don’t Use the Default “wp_” Table Prefix

SQL injection attacks are very real, and this tip can help mitigate the risk. The WP Security Scan plug-in mentioned in the article has a built-in tool to help automate this change, but the change can also lock you out of your dashboard. The trick is to make sure each user’s meta_key values in the usermeta table match whatever prefix you choose:

wp_capabilities –> newprefix_capabilities
wp_usersettings –> newprefix_usersettings
wp_usersettingstime –> newprefix_usersettingstime
wp_user_level –> newprefix_user_level
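The meta_key fix boils down to a single UPDATE that swaps the prefix. Here is an illustrative sketch using an in-memory SQLite table standing in for WordPress’s usermeta table; on a real site you would run the equivalent UPDATE against MySQL (the table and column names are the standard WordPress ones, the rest is my own scaffolding):

```python
import sqlite3

# Stand-in for WordPress's usermeta table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usermeta (umeta_id INTEGER PRIMARY KEY, meta_key TEXT)")
conn.executemany(
    "INSERT INTO usermeta (meta_key) VALUES (?)",
    [("wp_capabilities",), ("wp_user_level",), ("session_tokens",)],
)

old, new = "wp_", "newprefix_"
# Replace the old prefix with the new one, leaving unprefixed keys alone.
conn.execute(
    "UPDATE usermeta SET meta_key = ? || substr(meta_key, ?) "
    "WHERE meta_key LIKE ? || '%'",
    (new, len(old) + 1, old),
)

print([r[0] for r in conn.execute("SELECT meta_key FROM usermeta ORDER BY umeta_id")])
```

Keys that never carried the wp_ prefix (like session_tokens above) pass through untouched, which is exactly the behavior you want when renaming a live table.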

Whitelisting Access to wp-admin by IP Address

This is typically done via .htaccess files, and the AskApache Password Protection for WordPress plug-in mentioned in the article can help get the settings correct, although that plug-in has specific server requirements in order to run (it will run some tests to see whether your server qualifies). If you do set this up, beware of dynamic IP address changes, which can lock you out later.
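For reference, a minimal hand-written version of the same restriction, placed in a wp-admin/.htaccess file, might look like the fragment below (Apache 2.2-era syntax; the IP address is a placeholder to replace with your own static address):

```apache
# Allow only one static IP into wp-admin; everyone else gets a 403.
Order Deny,Allow
Deny from all
Allow from 203.0.113.10
```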

Other Items to Consider

  • Consider using a local MySQL application like Sequel Pro or the command line mysql tools for database configuration instead of public web-facing tools like phpMyAdmin. If you do use PMA, you should lock down access as much as possible using .htaccess controls (or other methods).
  • Tools like the WP Security Scan plug-in mentioned above or Donncha O Caoimh’s WordPress Exploit Scanner plug-in can help identify file permission issues in your WordPress setup.
  • Using SSH/SFTP instead of FTP to access your server is always good advice, even when you are using whitelists.
  • Stay up to date on both WordPress core files and all of your plug-ins.

318 is here to help you with all of your WordPress needs – call us today at 877.318.1318!

Adding a User and Folder to FTP Running Active Directory in Isolation Mode

Thursday, January 21st, 2010

Note: For the purpose of these directions the username is MyUser

First, create a user in Active Directory (assuming, also, that there is an FTP users container in AD)

Next, create a home directory in the FTP share (for MyUser it might be D:\Company Data\FTP\MyUser *naming the home folder the same as the user name*)

From the command line, use these commands to map the directories to the account:

iisftp /SetADProp MyUser FTPRoot “D:\company data\ftp”

*note the use of quotation marks around the path to specify this directory, since there is a space between Company and Data*

iisftp /SetADProp MyUser FTPDir MyUser

You can verify this by running ftp localhost from the command line and logging in with the new user’s credentials.

You can also create and delete a file to make sure it correctly edits the folder.

Note: If the password for the domain administrator account changes, you must also change it in IIS.

Searching All Sites

Thursday, July 30th, 2009

318 is often called on to help bring together a comprehensive and cohesive web presence and strategy for organizations, and we also strive to continually refine our own presence. As such, 318 has added a new search engine to our sites, which we hope will allow maximum visibility into the information that we have provided to the community, and will continue to provide as time goes on. This portal can be used to search all outwardly facing aspects of our web presence (and it’s pretty sexy lookin’ to boot, if we do say so ourselves).

Oracle Buys Sun

Monday, April 20th, 2009

Sun was in merger talks with IBM – talks that had fallen through. Today, the Sun website says “Oracle to Buy Sun.” Oracle is the largest database company in the world and has been tinkering with selling support contracts for Linux alongside the Oracle suite of database products, which already includes PeopleSoft, Hyperion and Siebel. This merger, valued at $7.4 billion, will give Oracle the ability to sell bundled hardware solutions, further the Oracle development product offerings and give Oracle one of the best operating systems on the planet for running databases.

Oracle doesn’t just get hardware and Solaris, though. This move also solidifies a plan for Oracle customers to integrate Sun storage. Oracle had previously been working with HP in a partnership that never seemed to gain traction. Then there are Java, MySQL, VirtualBox, GlassFish and others. A number of the Sun contributions will remain open source projects, but overall it’s possible to see a strategy emerging from a new Oracle + Sun organization.

As a Sun partner, 318 can assist its clients through this transition, be it with storage, MySQL, Java, Solaris or Oracle middleware scripting.  Overall, this deal makes a lot of sense and 318 is behind doing whatever possible to ease our clients through the transition.

Finally, for those concerned that Oracle might just be buying Sun to kill off MySQL, keep in mind that the open source community built MySQL in the first place (or was integral to building it) and could build a replacement just as easily – this time faster and with less legacy support required. MySQL is not a fluke; PostgreSQL or a newer solution would take its place if MySQL were to fall by the wayside under the Oracle helm. Oracle is not going to make a martyr of MySQL; it is going to want to capitalize on its investment (a billion-dollar purchase by Sun, and obviously part of this purchase), especially given a clear business plan for MySQL to be profitable (which is why Sun bought it for such a lofty price in the first place). Overall, Oracle has no reason to kill MySQL; instead, with Siebel, MySQL, Oracle, PeopleSoft and the rest, it can simply tout, “All Your Databasen Are Belong To Us!”

Restoring Data From Rackspace

Wednesday, April 1st, 2009

Rackspace provides a managed backup solution. Backups are available for up to one month back: two weeks of backups are stored on their premises, and the previous two weeks are stored offsite. If the files to restore fall in the offsite period, the restore will take longer, as the tapes must be moved from the offsite location back onsite before the restore process can begin.

Restores can be performed either from Rackspace’s web portal or via a support phone call.

To restore by phone:

  • Call Rackspace and supply your account name and password.
  • State that you want to restore files, and whether the machine is a Windows or Linux computer.
  • Give the backup operator the file path and the date to restore from.
  • A ticket will be created and updated throughout the restore process. The ticket will be updated when the restore is complete, and will include the directory of the restored data.

The New Facebook

Thursday, March 12th, 2009

Facebook released some major updates today. A number of people have complained that they don’t like the new layout, but the changes have you clicking less to find things, which conserves Facebook’s bandwidth and lets you get to things faster. The newer graphics are sleeker and, honestly, a bit more like what you’d expect to see on an iPhone. You also now have the ability to simply eliminate friends from your news feed, which lets you see the most up-to-date information on the friends you actually want to know about.

Facebook seems to be more and more popular by the day. 318 has had a group on Facebook for a couple of years, and manages the Mac OS X Server group and the Xsan group, among others. Now we’ve added a fan page as well! Check it out and become a fan.

Ubuntu 8.04 Released

Sunday, May 11th, 2008

Ubuntu 8.04 is now available – the first major release since 7.10. Code-named Hardy Heron, 8.04 will look familiar to long-time Ubuntu users. But under the hood, 8.04 sports a new kernel (2.6.24-12.13), a new rev of GNOME (2.22), improved graphical elements (such as X.Org 7.3), a spiffy new installer (Wubi), the latest and greatest in software, enhanced security and, of course, more intelligent default settings. The desktop version is free to download.

The new Ubuntu installer comes with a new utility called Wubi. Wubi runs as a Windows application, which means that Windows users will be able to more easily transition to, and learn about, Ubuntu. Wubi can perform a full installation of Ubuntu into a file on a Windows hard drive, so you no longer need to install a second drive or perform complicated partitioning on an existing drive. When you boot Ubuntu, the system reads and writes to the disk image as though it were a standard drive letter, much as VMware would. Ubuntu can also be uninstalled as though it were a standard Windows application, using Add/Remove Programs.

The new application set is solid. Firefox 3.0 comes pre-installed. Brasero provides an easier interface for burning CDs and DVDs. PulseAudio is now installed by default (arguably a questionable decision, but we found it worked great for us). The Transmission BitTorrent client is now included by default. Vinagre provides a very nice, streamlined VNC client for remote administration (although the latency for remote users is still a bit of a pain compared to the Microsoft RDP protocol). And Inkscape, the popular Adobe Illustrator-like application, has always been easy to install and use, but it now comes bundled with Ubuntu.

In order to play nicer in the enterprise, the security infrastructure of Ubuntu has also had a nice upgrade. Active Directory support is provided using Likewise Open (unlike Mac OS X, which uses a custom package specifically for this purpose). There is a new PolicyKit, which provides policies similar to GPOs in Windows or MCX in Mac OS X. The default settings in 8.04 are also chosen with more of a security mindset. New memory protection is built into 8.04, primarily to make exploits harder to develop and to prevent rootkits. Finally, UFW (Uncomplicated Firewall) is now built into the system to make firewall administration more accessible to the everyday *nix fan.

Network administrators will be impressed by the inclusion of many new features. KVM is included in the kernel, and libvirt and virt-manager are provided, making Ubuntu a very desirable virtualization platform. iSCSI support provides more targets on which to store those virtual machines, as well as expanded storage for larger filers (e.g., using Samba 3). Postfix and Dovecot provide a standardized mail server infrastructure out of the box. CUPS in 8.04 now supports the Bonjour and Zeroconf protocols as well as the solid standbys of SMB, LPD, JetDirect and, of course, IPP. Those building web servers will be happy to see Apache 2, PHP 5, Perl, Python and Ruby on Rails (with gems), and Sun’s OpenJDK (community supported). If you need the database side of things, there are MySQL, PostgreSQL, DB2 and Oracle Database Express.

However, if you are just starting out, keep in mind that Ubuntu Server does not come with a windowing system by default – so beef up those command-line skills sooner rather than later! We are also still waiting for a roadmap for integrating many of the more enterprise- or network-oriented packages. For example, we now have PolicyKit and a solid Active Directory client, but how do we push out, en masse, the policies that we want our users to have after imaging?

So if you use Ubuntu, or are interested in getting to know the Linux platform, then 8.04 is likely a great move. It’s solid, stable and much improved over the 7.x releases. It’s easier to migrate to, virtualize and work in. The developers should be proud!

Microsoft Office Live Workspace

Wednesday, January 30th, 2008

Microsoft Office Live Workspace is a portal that allows you to view your Microsoft Office documents online. This includes the ability to share documents and do desktop presentations of Microsoft Office documents. Microsoft Office Live Workspace is in beta and free, so why not give it a try? That’s what Microsoft is asking now that Google Docs and Zoho are moving towards commoditizing the document and spreadsheet space.

So, first impressions? Office Live Workspace doesn’t let you edit documents in the browser. Anyone who has used Google Docs or Zoho will be looking for that feature. There is a free plug-in that allows you to save up to 500 megabytes of new or existing files to the Workspace portal, and to edit documents that are located on the portal from within Office. You can also create multiple locations for others to access, called workspaces, and sync task lists or online events with Microsoft Outlook (a feature most Outlook Web Access users are already using). If you don’t have Office, though, you can only view files and create notes about them. Changes are automatically synchronized, so you can easily work offline without a lot of headache.

There’s also SharedView. SharedView is part of Microsoft Office Live Workspace and gives other users the ability to view or take over your desktop, as part of the collaboration benefits of the suite. This is already available through other Microsoft technologies, but SharedView is a little more user-friendly and ties in nicely with the document editing process. All in all, users of Microsoft Office just got a host of new features with Microsoft Office Live Workspace, so we might as well make use of this new technology since Microsoft was so nice to give it to us. However, if we’re looking for something that mirrors the functionality of Google Docs, this isn’t it. It’s more of a meeting half-way between Google Docs and Microsoft Office.

Leopard Server: Introduction to Ruby on Rails

Sunday, October 28th, 2007

So, Ruby on Rails… what does this mean for me, and what exactly is Ruby on Rails from a systems administration standpoint? Ruby on Rails was created by David Heinemeier Hansson of 37signals, extracted from his work on Basecamp, a web-based project-management tool, and was first released to the public in July 2004. Ruby on Rails is a web application framework designed to support the development of dynamic websites, and plenty of high-profile sites have been built with it.

Ruby is the object-oriented programming language that Rails is built on. To access Rails, you can use the rails command.

The Ruby on Rails framework is built into Leopard Server, and a Rails application can be started with the mongrel_rails start command and stopped with mongrel_rails stop. Mongrel is a fast HTTP library and server for Ruby, and mongrel_rails is the command-line tool used to control the Mongrel web server.

Some options to the mongrel_rails command include the following:

  • -d – daemonize
  • -p – assign a custom port
  • -a – assign an address for the HTTP listener
  • -l – assign a log file to use
  • -t – customize the timeout variable
  • -m – use additional MIME types
  • -r – change the document root
  • -B – enable debugging
  • -C – use a configuration file
  • -S – define an additional config script
  • -h – access the help libraries
  • -G – generate a config file
  • --user – define who the server will run as
  • --version – get the version information for Mongrel

But that’s not all you can do with mongrel_rails. The actual file is not compiled, so you can read it in clear text and learn more about what it is doing behind the scenes; just cd into the /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/bin/ folder to find it. One item of note is the inclusion of mongrel_rails_persist, a wrapper for mongrel_rails that allows admins to register the Mongrel server with Bonjour and create a launchd plist in /Library/LaunchAgents to run Mongrel.

So let’s say that you have a Ruby application that lives at the following location /Library/WebServer/MyRubyApp. You can run the following command to launch it over port 8001 in a persistent manner: mongrel_rails_persist start -p 8001 -c /Library/WebServer/MyRubyApp

To access it from a web browser, you would enter the server’s address followed by :8001.

From here you’ll be able to daemonize Mongrel and provide the Rails development framework to developers in your environment. There are already a lot of projects for using Ruby with FileMaker and other database systems, so keep an eye out for more information about this piece of Leopard Server!

FileMaker / XML Exploits Security Concerns

Tuesday, May 1st, 2007

When running FileMaker Custom Web Publishing, consider carefully whether External Authentication against a domain should be used for the FileMaker system, because the following exploit allows a savvy tech to grab domain user IDs and passwords for the entire domain:

An XML document fragment is loaded into the request-query parameter in the following grammar:

Note The query information is defined to be in the namespace fmq=””. Make sure you include a declaration of the fmq namespace in the stylesheet element at the start of your XSLT stylesheet. See “About namespaces and prefixes for FileMaker XSLT stylesheets” on page 55.

For example, suppose you want to access the query commands and query parameters in this request: &-findall

If you include the statement before the template section, the Web Publishing Engine will store this XML document fragment in that parameter:
<query action="-findall"><parameter name="-token.1">abc123</parameter></query>

You can then use the request-query parameter to access the value of a token that was passed in a URL by using an XPath expression. For example:
$request-query/fmq:query/fmq:parameter[@name = '-token.1']

Obtaining client information
You can use the following FileMaker XSLT parameters to obtain information from the Web Publishing Engine about a web client’s IP address, user name, and password:

Include these parameter statements in your XSLT stylesheet before the top element.
These parameters provide the web user’s credentials when a stylesheet programmatically loads additional password-protected XML documents. See “Loading additional documents” on page 61. The web user must provide the user name and password initially via the HTTP Basic Authentication dialog box. See “When web users use Custom Web Publishing to access a protected database” on page 19.

For more information and examples of using these three FileMaker XSLT parameters, see the FileMaker XSLT Extension Function Reference.

This information is excerpted from the XSLT Reference database.

This FileMaker XSLT parameter provides the web client’s IP address.
Note: needs to be inserted before the top element. For example, this:

returns this:
Client IP: [your browser machine's IP address]

This FileMaker XSLT parameter provides the web client’s user name. You can use the parameter if you need to provide the web user’s credentials when programmatically loading additional password-protected XML documents while processing the original request.

When accessing a password-protected database, you must provide a username and password within the URL. The syntax, as defined in the HTTP standard, is:

http://username:password@host/fmi/xml/[FileMaker XML grammar name].xml?query_string

Note: needs to be inserted before the top element. For example, in order to obtain the layout information from the source database, you can use client-user-name and password in the following manner.

This FileMaker XSLT parameter provides the web client’s password. You can use the parameter if you need to provide the web user’s credentials when programmatically loading additional password-protected XML documents while processing the original request.

When accessing a password-protected database, you must provide a username and password within the URL. The syntax, as defined in the HTTP standard, is:

http://username:password@host/fmi/xml/[FileMaker XML grammar name].xml?query_string

Note: needs to be inserted before the top element. For example, in order to obtain the layout information from the source database, you can use client-user-name and password in the following manner.

Posted in Directory Services, FileMaker, Security, Web Development

Installing Joomla in OS X Server

Tuesday, July 4th, 2006

1. Enable MySQL.
2. Create a database in MySQL called joomladb.
3. Create a new user called jadmin with full privileges to this database (the user does not need to be called jadmin, but that is the username we will use for this walkthrough).
4. Download the latest stable release of Joomla.
5. Extract the tar files into a new folder (for this example we are going to call it joomla to keep things easy).
6. Make the required folders writeable for Joomla.
7. Move the joomla folder onto a web server.
8. From your web server, visit the site or the subfolder that you placed the joomla files into.
9. Make sure PHP is enabled for the domain and globally.
10. At the Joomla Pre-Installation check page, you will see either a notice that you can install Joomla or a notice that your system does not meet the minimum requirements for installation. If your system does not meet the requirements, install the modules listed in red to make Joomla work, then click the Check Again button. Once the dependencies are all installed, click Next.
11. Read the license agreement and click on Next.
12. Fill in the appropriate fields for your MySQL environment and click Next >>. The fields that are used:
a. Host Name: If the server you are currently using is a MySQL server then enter localhost. Otherwise enter the name or IP of your MySQL server.
b. MySQL User Name: Either enter the root User Name for your MySQL server or another username if desired.
c. MySQL Password: Either enter the root password for your MySQL server or the password for another user if desired.
d. MySQL Database Name: The name of the database on the MySQL server you would like the Joomla files saved to. In our example, we will use joomladb.
13. Enter the name you would like to use for your Joomla site – this is the name users will see when logging into your Joomla site – and click the Next button.
14. At the next screen you will be asked to enter some site specific information and then click Next.
a. URL: Enter the URL that users will use to access your site.
b. Path: Enter the full path to the Joomla directory on your server.
c. Email: This will be used for administrative logins.
d. Admin password: This will be the administrative password used to access your Joomla site.
15. cd into the Joomla directory and remove the directory called installation.
16. Click on the View Site button. If you see the Default Joomla site then you are almost done.
17. Go back to the previous screen and click on the Administration button.
18. Enter admin as your username and the administrative password you gave Joomla in field 14.d.
19. You now have Joomla configured and are now ready to customize it.
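Steps 1 through 3 can also be done from the mysql command line rather than a GUI. A sketch using MySQL 5-era syntax (the password here is a placeholder; substitute your own):

```sql
CREATE DATABASE joomladb;
-- Grant the jadmin user full privileges on the new database only.
GRANT ALL PRIVILEGES ON joomladb.* TO 'jadmin'@'localhost' IDENTIFIED BY 'changeme';
FLUSH PRIVILEGES;
```

Scoping the grant to joomladb.* (rather than *.*) keeps a compromised Joomla install from touching other databases on the server.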

Open Source Code Development

Monday, June 5th, 2006

Developers have always been fairly open with their tips and tricks. New advancements in the web sphere come fast, and many of them come from the open source community. Led by people like Linus Torvalds, the original author of Linux, the open source community has rewritten many of the most popular proprietary applications on the market and made them freely available to the world, asking only that, since they don’t sell the code, you don’t turn around and sell it either.

This was the foundation for the web. Apache, the most popular web server in use, is a product of the open source community. Recently, thanks to a large pool of code to draw upon and the entry of many proprietary products into the open source community, we have been seeing advancements come at a more rapid rate than ever. OpenOffice.org, a project for replacing Microsoft Office; Eclipse, a project supposedly named because it was going to “eclipse” Sun; and a list of projects almost as long as the postings on a popular open source software site have all emerged.

This is changing the way people write code. Programmers today are often charged with assembling and integrating code more than with actually writing new code. Many organizations have found that using online – and in some cases searchable – code repositories is more efficient than writing new code. In many cases, software developers and architects spend more time finding, downloading and evaluating available code than anything else.

Some programmers sell their code, but many just post it online, giving back to the community that helped them find the code they have been using and, in some cases, learn their craft. Finding the appropriate code for a given task, and making sure the licensing and documentation are taken care of, can be a tough task. This is where a new type of search engine comes into play. One such engine currently offers over 225,000,000 lines of code in languages including PHP, Python, SQL and many others. Krugle is another search engine that offers much more information on code, although it is currently in beta. If you would rather pay for the ability to search code, you can sign up for the protexIP/OnDemand service from Black Duck. Anyone who will be writing a lot of code should get to know all of the options for trolling around for code.

Backing Up and Restoring MySQL

Tuesday, May 9th, 2006

Backing up MySQL
Do you need to change your web host or switch your database server? This is probably the only time you really think about backing up your MySQL data. If you have a website with a database, or a custom database running for your applications, it is imperative that you make regular backups of the database. In this article, I will outline two easy ways of backing up and restoring databases in MySQL.
The easiest way to back up your database is to connect to your database server via telnet or SSH and use the mysqldump command to dump the whole database to a backup file. If you do not have telnet or shell access to your server, don’t worry: I shall outline a method that uses the phpMyAdmin web interface, which you can set up on any web server that executes PHP scripts.

If you have shell or telnet access to your database server, you can back up the database using mysqldump. By default, the command dumps the contents of the database as SQL statements to your console. This output can then be piped or redirected anywhere you want. To back up your database, redirect the output to a .sql file, which will contain the SQL statements needed to recreate and repopulate the tables when you wish to restore. There are more adventurous ways to use the output of mysqldump, too.

mysqldump can create a simple backup of your database using the following syntax:
mysqldump -u [username] -p[password] [databasename] > [backupfile.sql]

Username is your database username, password is the password for that user (note that there is no space after -p; with a space, mysqldump prompts for the password and treats the next word as a database name), databasename is the name of the database you want to back up, and backupfile.sql is the file the backup will be written to.

The dump file will contain all the SQL statements needed to recreate the tables and populate the data on a new database server. To back up your database ‘Customers’ with the username ‘sadmin’ and password ‘pass21’ to a file custback.sql, you would issue the command:

mysqldump -u sadmin -ppass21 Customers > custback.sql

You can also ask mysqldump to add a DROP TABLE command before every CREATE by using the --add-drop-table option. This is useful if you would like to create a backup file that can overwrite an existing database without your having to delete the older database manually first.

mysqldump --add-drop-table -u sadmin -ppass21 Customers > custback.sql

If you’d like to restrict the backup to only certain tables of your database, you can specify the tables you want to back up. Let’s say that you want to back up only customer_master and customer_details from the Customers database; you do that by issuing:

mysqldump --add-drop-table -u sadmin -ppass21 Customers customer_master customer_details > custback.sql

So the syntax of the command is:

mysqldump -u [username] -p[password] [databasename] [table1 table2 ...]

[tables] – This is a list of tables to back up, each separated by a space.
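If you end up scripting these dumps, it can be safer to build the argument list programmatically and let -p prompt for the password rather than leave it in your shell history. A hedged sketch in Python (the function name and structure are my own; only the mysqldump flags come from the examples above):

```python
def dump_command(user, database, tables=(), add_drop_table=False):
    """Build a mysqldump argument list for use with subprocess.run().

    A bare -p (no value) makes mysqldump prompt for the password
    interactively instead of exposing it on the command line.
    """
    cmd = ["mysqldump", "-u", user, "-p"]
    if add_drop_table:
        cmd.append("--add-drop-table")
    cmd.append(database)
    cmd.extend(tables)
    return cmd

print(dump_command("sadmin", "Customers", ("customer_master", "customer_details")))
```

Pass the resulting list to subprocess.run() with stdout redirected to your backup file.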

If you are a database administrator who has to look after multiple databases, you’ll need to back up more than one database at a time. Here’s how you can backup multiple databases in one shot.

If you want to specify the databases to backup, you can use the –databases parameter followed by the list of databases you would like to backup. Each database name has to be separated by at least one space when you type in the command. So if you have to backup 3 databases, let say Customers, Orders and Comments, you can issue the following command to back them up. Make sure the username you specify has permissions to access the databases you would like to backup.

mysqldump -u root -ppass21 --databases Customers Orders Comments > multibackup.sql

This is fine if you have a small set of databases you want to backup. But how about backing up all the databases on the server? That’s an easy one: just use the --all-databases parameter to backup every database on the server in one step.

mysqldump --all-databases > alldatabases.sql
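To run such a full backup on a schedule, a crontab entry along these lines is one option. The 2 a.m. timing and /var/backups path are assumptions to adapt; note that % must be escaped as \% inside a crontab, and credentials are better kept in a ~/.my.cnf file than typed on the command line:

```
# Nightly at 2:00 -- dump everything, date-stamped (crontab fragment)
0 2 * * * /usr/bin/mysqldump --all-databases > /var/backups/all-$(date +\%F).sql
```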

Backing up only the Database Structure

Most developers need to backup only the database structure while they are developing their applications. You can backup only the database structure by telling mysqldump not to back up the data. You can do this by using the --no-data parameter when you call mysqldump.

mysqldump --no-data --databases Customers Orders Comments > structurebackup.sql

Compressing your Backup file on the Fly

Backups of databases take up a lot of space. You can compress the output of mysqldump to save valuable space while you’re backing up your databases. Since mysqldump sends its output to the console, we can pipe the output through gzip or bzip2 and send the compressed dump to the backup file. Here’s how you would do that with bzip2 and gzip respectively.

mysqldump --all-databases | bzip2 -c > databasebackup.sql.bz2
mysqldump --all-databases | gzip > databasebackup.sql.gz
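You can sanity-check a compressed dump without touching the server. A quick sketch using a stand-in file (the SQL content here is made up purely for illustration):

```shell
# Stand-in for a dump file, so the check can be run anywhere
printf 'CREATE TABLE t (id INT);\nINSERT INTO t VALUES (1);\n' > sample.sql
gzip -c sample.sql > sample.sql.gz

# Verify archive integrity, then confirm a round trip preserves the contents
gzip -t sample.sql.gz && echo "archive OK"
gunzip -c sample.sql.gz | diff - sample.sql && echo "contents match"
```

The same -t check works on bzip2 archives via bzip2 -t.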

It is also always a good idea to backup your MySQL configuration file. This file is typically called my.cnf.

You can also use the mysqlhotcopy command to make raw backups of MyISAM tables. Mysqlhotcopy handles locks on the database and has support for regular expressions. Mysqlhotcopy can also truncate indexes to allow administrators to save space on their backup media. Mysqlhotcopy is included with a default installation of MySQL and can typically be found in the /usr/bin folder of your system. Mysqlhotcopy use is as follows:
mysqlhotcopy databasename /path/to/backup_directory

Mysqlhotcopy is a Perl script, so you must be able to execute Perl scripts on your server in order to use it. It requires the DBI, Getopt::Long, Data::Dumper, File::Basename, File::Path and Sys::Hostname Perl modules in order to run. You must also have SELECT, RELOAD and LOCK TABLES privileges for the tables you are backing up. For more information on the mysqlhotcopy command, enter perldoc mysqlhotcopy at a command prompt.

When you enter mysql at a command prompt you will enter an interactive command-line mode. From within this mode you can use the backup table statement, but only for MyISAM tables; the statement has been deprecated in later versions of MySQL, so you should only use it for small, low-volume tables. The backup command works as follows:
1. Enter mysql at a command prompt
2. Enter backup table tablename to ‘/path/to/backup_directory’

This copies the definition (.frm) and data (.MYD) files. The indexes will be rebuilt when restoring, which is done as follows:
1. Enter mysql at a command prompt
2. Enter restore table tablename from ‘/path/to/backup_directory’

Additionally, it is possible to make MySQL backups using third-party software packages.

Restoring MySQL
Now that you’ve got backups of your database, let’s learn how to restore them in case your database goes down. Here’s how you can restore your backed up database using the mysql command.

If you have to re-build your database from scratch, you can easily restore the mysqldump file by using the mysql command.

Here’s how you would restore your custback.sql file to the Customers database.

mysql -u sadmin -ppass21 Customers < custback.sql

Easy, isn’t it? Here’s the syntax you would follow to restore data.

mysql -u [username] -p[password] [database_to_restore] < [backupfile]

Now how about those zipped files? You can restore your zipped backup files by first uncompressing its contents and then sending it to mysql.

gunzip < custback.sql.gz | mysql -u sadmin -ppass21 Customers

You can also combine two or more backup files to restore at the same time, using the cat command. Here’s how you can do that.

cat backup1.sql backup2.sql | mysql -u sadmin -ppass21
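To see exactly what mysql receives from such a pipe, here is a quick sketch using stand-in files (the INSERT statements are made up for illustration):

```shell
# Stand-in backup files
printf 'INSERT INTO a VALUES (1);\n' > backup1.sql
printf 'INSERT INTO b VALUES (2);\n' > backup2.sql

# cat concatenates them in order; this combined stream is what mysql would execute
cat backup1.sql backup2.sql
```

Order matters: statements from the first file run before those from the second.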

How would you like to replicate your present database to a new location? When you are shifting web hosts or database servers, you can copy data directly to the new database without having to create a database backup on your machine and restore it on the new server. mysql allows you to connect to a remote database server to run SQL commands. Using this feature, we can pipe the output from mysqldump and ask mysql to connect to the remote database server to populate the new database. Let’s say we want to recreate the Customers database on a new database server located at, we can run the following set of commands to replicate the present database at the new server.

mysqldump -u sadmin -ppass21 Customers | mysql --host= -C Customers

To restore the MySQL configuration file, you can usually just copy and paste the settings from a previously saved copy of the my.cnf file into the newly created my.cnf file from when you reinstalled MySQL.

Choosing the Right Web Host

Monday, April 10th, 2006

Managing Your Hosting Environment

When you start a new hosting environment, you will probably handle many of the tasks that you will likely want your clients to handle later down the road. There are many products that help to ease the administrative burden of a shared hosting environment. These products empower users of your services to create their own accounts and perform other administrative tasks using easy to navigate web portals.

• cPanel and Plesk are server management software solutions designed to allow administrators to create Reseller accounts, Domain accounts and email features. Administrators have the ability to assign users rights to various aspects of their hosting environment. This saves time for the hosting provider and allows for clients to receive a wider variety of features without the hosting provider having to set these up for each individual client. These include web support, adding features to web sites, domain control, DNS control, email account control, spam filtering, virus filtering and other features. While cPanel and Plesk are not the only products that allow for these types of functions they have risen to be what most sites now use.
• Webmin is an open source solution that allows for managing web sites, DNS, email, spam filtering and virus filtering from a web portal. Webmin is not meant specifically to be used in a web hosting environment but can be used to obtain some of the features that are available in the commercial packages, cPanel and Plesk.

One of the main reasons that many web-hosting ventures don’t work out is support. When we think of supporting clients in a web-hosting environment we typically think of the phone calls where we help clients troubleshoot FTP, mail and web issues. But the overall level of support that you provide also includes setting up email accounts, web features and other settings that clients can set up themselves. The first time they need to do this they may call, but if you have a support department dedicated to helping them use the tools you provide, you can drastically cut down on the support calls you receive.

Rather than just offer tools that help users on a technical level, the makers of Plesk also offer tools to help run your entire web hosting company. HSPcomplete integrates billing, provisioning and marketing using control panels that integrate with their Plesk control panel. If you are planning on moving from simple web hosting into colocation for clients, you can use PEM to manage an entire data center.

Network Bandwidth Monitoring enables network administrators to identify how their network is being used. This allows for the optimization or blocking of certain network services that are creating bottlenecks. By monitoring bandwidth, web hosts are also able to plan for the future development of their network services.

Securing Your Hosting Environment
Many hosting environments are started using a single server that is plugged directly into a network port provided by a colocation company. Over time, new servers are added but the need for a firewall to protect these servers is often overlooked. Many administrators will choose to use the firewall that is built into their servers rather than a physical firewall. Once you have a multi-server environment it is going to become important to start considering your network architecture and the security of this network. This includes patch management, firewalling, intrusion detection and security audits.

A Network Intrusion Detection System (NIDS) is a security system designed to identify intrusive or malicious behavior by monitoring network activity. NIDS identify suspicious patterns that may indicate an attempt to attack, break in to, or otherwise compromise a system. Many networks have a hard exterior that is tough to penetrate; many companies have invested time and manpower to make the perimeter of their network as secure as possible using firewalls. In this scenario, though, if a single system is compromised, it is often easy for attackers to exploit other systems on the network. Intrusion detection systems help to mitigate this by scanning traffic inside the perimeter for known attacks.

If you are processing credit card transactions then at some point you are likely to go through an automated security audit using an application like Nessus, so the bank can limit its exposure to the legal ramifications of data theft. Whether required or not, security audits can help organizations ensure that they are meeting security best-practice minimums.

Contingency planning is a critical aspect of security. Implementing industry-standard tiered storage and backup procedures helps ensure that your data is fully redundant. Disaster recovery goes beyond backup and requires you to ask many questions about what you would do in certain situations. Many organizations have redundant hardware, the software required to restore in case of a failure, and redundant locations that ensure their clients the 99.999% uptime that many organizations now require in their Service Level Agreements.

Whether you are just getting started, adding new servers to your hosting environment, switching to a new colocation facility or bringing your servers in house, Three18 can help you. You are not alone. We have been there many times over and can work with you to define the systems and procedures that will get your hosting environment profitable, secure and stable.

Using Webmin in Mac OS X Server

Monday, March 20th, 2006

In case y’all haven’t used it yet, Webmin is a pretty cool program. You can actually get more finely grained control over not only OS X Server services but also over OS X Workstation services. This includes performing tasks like hosting multiple domains on a single OS X Workstation host, configuring FTP and installing CVS. Before you can get started with it you will need to install it. Once you have it installed, you can play around with it and learn plenty of new stuff. The features extend beyond services and into working on network stacks and bandwidth monitoring.

Check it out… Here is a walkthrough for installing it.

To install Webmin:

* Download the latest version of Webmin that is appropriate to your Operating System from

* Read the appropriate setup file for your operating system located in the root of the new webmin folder.

* Run the script logged in as root. The command for this would be

* At the config file directory prompt, enter the full path of the directory you would like the configuration files to be stored in, or press enter if you would like them to be stored in /etc/webmin.

* At the log file directory prompt enter the full path of the directory you would like Webmin to store its logs in or press enter if you would like them to be stored in /var/webmin.

* If your Perl interpreter is not located at /usr/bin/perl, type in the location of Perl and press Enter.

* After a quick check of Perl, you will be prompted for the TCP port that Webmin will run on. By default this is 10000 but feel free to change it if you would like, just make sure you are not changing it to a port that is already in use on the system. Once you have done so press enter.

* At the Login Name prompt, enter the name you would like to use when logging into Webmin and press enter.

* At the Login Password prompt, enter the password you would like to use when logging into Webmin and press enter.

* Reenter the password at the prompt and press enter.

* Choose whether or not you want Webmin to start at boot by entering a ‘y’ to signify yes or an ‘n’ to signify no, and press enter.

* Webmin will take a moment to complete installation and build its permissions. When the installation is complete you will see a message telling you that Webmin has successfully been installed and started. When it is done, open a web browser from the server and go to to verify that Webmin is running. If you have customized the port then enter your custom port after the colon (:).
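The unpack-and-run portion of the steps above can be exercised end to end with a stand-in tarball. The version number is hypothetical and the setup.sh here just reports success (the real one asks the questions described above and should be run as root):

```shell
# Build a stand-in Webmin tarball whose setup.sh just reports success
mkdir -p webmin-1.260
printf '#!/bin/sh\necho "setup complete"\n' > webmin-1.260/setup.sh
chmod +x webmin-1.260/setup.sh
tar -czf webmin-1.260.tar.gz webmin-1.260
rm -r webmin-1.260

# The install flow from the walkthrough, condensed
tar -xzf webmin-1.260.tar.gz
(cd webmin-1.260 && ./setup.sh)
```

With the real archive, the same two commands at the end are all you need before answering the prompts.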

What is Web 2.0 Anyways?

Tuesday, February 28th, 2006

Chances are, with all of the hubbub surrounding overnight success giants and Flickr, you’ve undoubtedly heard about the second coming of the internet, commonly referred to as “Web 2.0”. Bloggers are frequently commenting on “Wiki” this and “tagging” that. But what is this Web 2.0 phenomenon and how can it improve how we manage our lives and businesses in a digital world? While there may not be a simple answer to these questions, there are a few suppositions that can be made as to what Web 2.0 is shaping up to look like and how it’s changing the way we exchange information.

In very general terms, Web 2.0 is commonly described as the upsurge in development of web-based services and applications utilizing open-source development platforms such as Ruby on Rails and Ajax. That doesn’t really mean very much to you and me, the non-developer community, except that what these development tools actually allow us to do on the internet is shaping up to be rather interesting indeed. For instance, last year, using their own Ruby on Rails technology, a company called 37signals released a completely internet-based project management and collaboration suite called Basecamp. For a rather nominal licensing fee, small businesses can manage projects and the people assigned to them in real time, all within a web browser. No more confusing licensing issues with project management software. One licensing fee, unlimited users. That’s it. Simple, easy. It’s the perfect example of what many developers are banking on: no more confusing licensing issues and expensive support.

What makes this technology so alluring, besides cost-effectiveness, is the collaborative capability inherent in tagging technology. In a nutshell, “tagging” is the ability for users to link information to make it available to whomever they see fit. For example, Flickr, one of the more successful Web 2.0 outcroppings, gives users the ability to upload their pictures to their own personal Flickr website. They then tag their pictures, inserting keywords that describe the picture, which are then enabled as hyperlinks, making them searchable to other users that have similar tags. Other users have the ability to tag your photos if you so desire, and you can accept or deny these tags, thereby giving your pictures less or more visibility depending on what your level of participation might be. Essentially, the more you contribute, the more visible you become.

Taking online collaboration to a more global level, Wikipedia, a free online encyclopedia, allows registered users to contribute to articles in encyclopedic entries, essentially tagging them with additional information they deem important to that article. Volunteers, or Wikipedians, as they’re referred to in the wiki-sphere, edit these entries and collaborate on whether they should be included or not. True global collaboration.

But this technology is not just reserved for the internet. Software developers are feverishly developing web 2.0 applications for the enterprise. SocialText, a Palo Alto based developer has just released server software that will facilitate easy online collaboration for documents and projects in an enterprise environment. Companies like design firms and media firms that rely heavily on collaboration for the success of their enterprise will probably want to take a good hard look at these kinds of collaborative solutions. Another interesting development comes from Joyent, a Marin County, CA start-up that is targeting small businesses with a completely web-based network server solution, literally, in a box. For just around $5K and a $65 monthly service fee for updates and support, this “out-of-the-box” server plugs into a company’s intranet and via a web-browser, hosts email, file-sharing, contact management, and calendar publishing, with tagging supported across the whole suite allowing for a true online collaborative environment.

If this kind of solution catches on, software development of this sort won’t be going away any time soon and is the stuff that might make server giants such as Microsoft and Apple rethink their strategies toward the small business market. Web 2.0 is still in its infancy; we’ll have to wait and see which of the many services and technologies being offered catch on and which will waste away in the cloud of cyberspace obscurity. But one thing is for certain, Web 2.0 development is paving the road for the future of online collaboration and productivity.

Installing AWStats on Mac OS X Server

Friday, February 3rd, 2006

Here are the steps for setting up AWStats on Mac OS X 10.4 Tiger Server.

1. Download the last stable release of AWStats from to your desktop.
2. In the Finder, navigate to /var/log/httpd
3. Backup and remove any old web logs.
4. Open Server Admin.
5. Select Web:Settings:Modules
6. Make sure the “perl_module” and “php4_module” are enabled.
7. Click Save.
8. Select the “Sites” pane.
9. Double-click the entry for the site you are going to enable stats on.
10. Select the “Options” pane.
11. Enable CGI Execution and Server Side Includes (SSI).
12. Click Save.
13. Select the “Realms” pane.
14. Create a new Realm called “awstats_data” in the site’s root directory or “Web Folder”. If necessary, within the Finder, navigate to the /Library/WebServer/Documents directory and create a new folder called “awstats_data”. (i.e. /Library/WebServer/Documents/awstats_data).
15. Enable Browse/Author access for the local Administrator and the “www” user only.
16. Click Save.
17. Select the “Logging” pane.
18. Change the access logging Format to “combined”
19. Change the access log Location to /var/log/httpd/awstats_access_log
20. Change the error log Location to /var/log/httpd/awstats_error_log
21. Click Save.
22. Select the “Aliases” pane and add as an alias.
23. Click Save.
24. Click the left-arrow icon to exit Editing the site.
25. Make sure the site is enabled and Web Services are running.
26. Open Workgroup Manager.
27. Verify ACLs are enabled on the volume containing the “awstats_data” directory you created earlier.
28. Change the posix permissions of the “awstats_data” directory to allow Read/Write access for the admin group.
29. Create an ACL to allow Read/Write access for the “www” user.
30. Click Save.
31. Close Server Admin and Workgroup Manager.
32. Expand the archive you downloaded earlier to your desktop.
33. Create a new folder named “awstats” in the /Library/WebServer directory.
34. Copy the contents of ~/Desktop/awstats-6.5/ to /Library/WebServer/awstats
35. Open a Terminal session.
36. Type cd /Library/WebServer/awstats/tools
37. Press Return
38. Type sudo perl
39. Follow the prompts…

—– AWStats awstats_configure 1.0 (build 1.6) (c) Laurent Destailleur —–
This tool will help you to configure AWStats to analyze statistics for
one web server. You can try to use it to let it do all that is possible
in AWStats setup, however following the step by step manual setup
documentation (docs/index.html) is often a better idea. Above all if:
- You are not an administrator user,
- You want to analyze downloaded log files without web server,
- You want to analyze mail or ftp log files instead of web log files,
- You need to analyze load balanced servers log files,
- You want to ‘understand’ all possible ways to use AWStats…
Read the AWStats documentation (docs/index.html).

—–> Running OS detected: Mac OS

—–> Check for web server install
Found Web server Apache config file ‘/etc/httpd/httpd.conf’

—–> Check and complete web server config file ‘/etc/httpd/httpd.conf’
AWStats directives already present.

—–> Update model config file ‘/Library/WebServer/awstats/wwwroot/cgi-bin/awstats.model.conf’
File awstats.model.conf updated.

—–> Need to create a new config file ?
Do you want me to build a new AWStats config/profile
file (required if first install) [y/N] ?

40. Enter y

—–> Define config file name to create
What is the name of your web site or profile analysis ?
Example: demo
Your web site, virtual server or profile name:

—–> Create config file ‘/Library/WebServer/awstats/wwwroot/cgi-bin/’
Config file /Library/WebServer/awstats/wwwroot/cgi-bin/ created.

—–> Add update process inside a scheduler
Sorry, does not support automatic add to cron yet.
You can do it manually by adding the following command to your cron:
/Library/WebServer/CGI-Executables/ -update
Or if you have several config files and prefer having only one command:
/Library/WebServer/Documents/tools/ now
42. Press ENTER to continue…

A SIMPLE config file has been created: /Library/WebServer/awstats/wwwroot/cgi-bin/
You should have a look inside to check and change manually main parameters.
You can then manually update your statistics for’ with command:
> sudo perl -update
You will also read your statistics for ‘’ with URL:
> http://localhost/cgi-bin/

43. Press ENTER to finish…
44. Edit the file (in your favorite text editor, as root) and add these lines or augment existing lines for these variables.
46. Move the remaining contents of /Library/WebServer/awstats/wwwroot to /Library/WebServer/Documents
47. Move the “tools” directory of /Library/WebServer/awstats to /Library/WebServer/Documents
48. Open Terminal
49. Type cd /Library/Webserver/CGI-Executables/
50. Type sudo perl -update
51. From the server, open a browser and go to the site http://localhost/cgi-bin/
52. If you see the data then you know that both your configuration and log file format are good.
53. Now it’s time to tell the system to update awstats on a regular basis.
Create a CRON job to run the command /Library/WebServer/CGI-Executables/ -update
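That cron job could look like the following crontab entry. Here “example.com” stands in for whatever profile name you entered during configuration, and awstats.pl is the standard AWStats script name (the walkthrough’s elided script is assumed to be the same):

```
# Update AWStats statistics at ten past every hour
10 * * * * /Library/WebServer/CGI-Executables/awstats.pl -update -config=example.com
```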

Installing MediaWiki on Mac OS X

Wednesday, August 17th, 2005

Installing MediaWiki

1. Create a database in MySQL called wikidb.
2. Create a new user called wikiserver that has full privileges to this database (the user does not need to be called wikiserver, but that is the username we will be using for this walkthrough).
3. Download the latest stable release of MediaWiki from
4. Extract the tar files into a new folder (for this example we are going to call it wiki to keep things easy). This can be done using the command tar -xvzf mediawiki.tar.gz (or substitute your file name for mediawiki.tar.gz).
5. Make the configuration files writeable using the command chmod a+w config while in the new wiki folder
6. Move the wiki folder onto a web server
7. From your web server, visit the site or the subfolder that you placed the wiki files into
8. At the MediaWiki Installation page, you will either see a notice that you can install MediaWiki or a notice that your system does not meet the minimum requirements for installation. If your system does not meet the requirements, install the modules that are listed. If it does, move on to the next steps.
9. At the MediaWiki Installation page, scroll down to the Site Config section. Here, fill in the fields for:
a. Wiki name: The name assigned to your wiki.
b. Contact e-mail: Displayed when error notices are encountered.
c. Language: The language to be used for your Wiki
d. Copyright: The copyright type, typically leave this as the default setting
e. Admin Username: The username to use for administering the Wiki
f. Admin Password: The password to use for administering the Wiki
g. Shared Memory caching: Decide whether to use memcached
10. Fill in the appropriate values for the Email and authentication setup section:
a. Email (General): Enable or disable the global use of email for your Wiki
b. User-to-User email: Allow users to email one another
c. Email Notification: Allows users to be notified if there is a change in a folder or page
d. Email Authentication: Enable email authentication for the wiki. Sends request for users to click a link to authenticate into the wiki.
11. Database Configuration options:
a. Database Type: Most users use MySQL, but Oracle is an option as well, although experimental.
b. SQLServerHost: The address of the MySQL Server. If MySQL is on the system you are currently using then leave this field as localhost.
c. Database Name: The name of the database you will be using in MySQL to store your wiki’s data.
d. DB Username: If you used wikiserver in step 2 then use wikiserver here; otherwise use the username you chose in step 2.
e. DB Password: The password you assigned for your wikidb user.
f. Database Table Prefix: Use this option if the wiki database will be shared with tables from other applications.
g. Database Character set: Leave this as the default unless you know you need a different character set.
h. Superuser account: The MySQL SuperUser account – typically root
i. Superuser Password: The MySQL SuperUser or root account password
12. Click on Install MediaWiki!
13. Move the LocalSettings.php file from the /config directory of the wiki installation into the root directory of the wiki installation
14. Go to the wiki folder in your browser and the default MediaWiki Main Page will open.
15. Customize the wiki to work for your organization
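Steps 4 and 5 above can be exercised from the command line. The tarball below is a stand-in built on the spot so the commands can run anywhere (substitute the real mediawiki.tar.gz you downloaded); --strip-components is an extra convenience, not part of the walkthrough, that drops the versioned top-level folder so everything lands directly in wiki:

```shell
# Stand-in archive so the steps can be exercised without a real download
mkdir -p mediawiki-1.5/config
touch mediawiki-1.5/config/index.php
tar -czf mediawiki.tar.gz mediawiki-1.5
rm -r mediawiki-1.5

# Steps 4 and 5 from the walkthrough
mkdir wiki
tar -xvzf mediawiki.tar.gz -C wiki --strip-components=1
chmod a+w wiki/config
ls -ld wiki/config
```

After this, the wiki folder is what you move onto your web server in step 6.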


Thursday, June 30th, 2005

iTunes Latest Buzz Tops One Million Downloads In First Two Days.

Podcasting is the latest evolutionary branch of the iPod. With Podcasting, anyone with an iPod can download and listen to books, editorials, unofficial museum walkthroughs, and a variety of other audio commentaries.

In its simplest form, Podcasting is simply an audio download from a website. Utilizing a few new web technologies, iTunes can now check the site for new content and automatically download it to your iPod. Many industries have taken advantage of this technology to provide news services to their clients. For example, several software companies publish Podcasts informing clients of upcoming products or company events. Many companies and people are creating unofficial talk radio stations, with topics ranging from tutorials and techniques to simple editorials voicing their opinions.

Three18 is well versed in Podcasting, and the underlying technologies. If you feel that your company could benefit from Podcasting, or you would like more information, feel free to contact any staff member.