Archive for April, 2009

10, 40, 100 and 1,000 Gigabit Ethernet

Thursday, April 30th, 2009

Bob Metcalfe should be proud.  From 3 whole megabits at its inception in 1973, Ethernet went to 10 megabits, then 100, and at the desktop currently sits at predominantly gigabit speeds.  But in the data center, a push toward 10 gigabit Ethernet deployments has been going on since 2002.  One of our favorite products is the Cisco Catalyst 4948, which has two 10 gigabit ports and 48 gigabit ports, allowing for a couple of servers at 10 gigabit or stacking as a core switch in a medium sized organization.

Of course, as an industry addicted to speed, 10 gigabit Ethernet simply isn't going to be enough; 40 gigabit and 100 gigabit Ethernet products are already being announced, although primarily for stacking switching fabrics together.  While the standard for 40 gigabit networking has not yet been ratified, we've been seeing a number of products come onto the market, and standardization by the IEEE is expected in 2010 for 40 and possibly 100 gigabit networking.

The jump from 100 gigabit to terabit speeds is expected to take a little less time than the 7 to 8 year window between the release of 10 gigabit and the expected arrival of 40/100 gigabit Ethernet.  Terabit networking is expected by 2015, which means that those 10, 40 and 100 gigabit interfaces will not be outdated all that quickly, providing a nice return on the investment.

Overall, 10 gigabit and up can be fairly costly (although with a 40 gigabit release, expect 10 gigabit products to come down in price a bit). However, it can increase the performance of a network environment dramatically when used in the proper locations and with a comprehensive strategy in place. 318 has experience with 10+ gigabit networking and can help in devising such a strategy. Feel free to contact us and we will be happy to review options and potential uses for your organization.

Mass Deploying Firefox Preferences for Mac OS X

Friday, April 24th, 2009

Firefox has a number of preferences.  Not all are available in the GUI.  To access these preferences, you can simply open Firefox and type the following in the address bar:

about:config

This will allow you to customize preferences, including ones not exposed in the GUI, line by line.  These can then be copied between users by inserting lines into the preferences file.

Like with most applications on Mac OS X, the preferences for Firefox can be deployed en masse, though it is a bit more complicated than deploying preferences for some other applications.  The reason for this is that the path to the preference file isn't the same for all users.  The file lives in the ~/Library/Application Support/Firefox/Profiles directory, inside a folder whose name is an 8 character string followed by .default (for example, lzwntwo9.default).  In this folder is a file called prefs.js, which contains all of the preferences for Firefox.  For example, the following line will disable the check for whether you wish Firefox to be the default web browser for a user:

user_pref("", false);

Once you know what preferences you’d like to push out there are two options to do so (there might be more, but these are the two we’ve used):

  • The first is to edit items in the bundle.  Most of these can be edited using the /Applications/ file, although the home page will be set using the /Applications/ file.  One note is that when you go to customize the prefs.js file it will give you a fairly nasty warning, but then it will push changes out to new accounts; however, don’t make any changes while the application is open.  Additionally, this method requires deleting the existing preferences, so if you simply want to push out updates you’ll need to resort to the second method.
  • For the second method, we use a script that finds the name of the directory located in ~/Library/Application Support/Firefox/Profiles for the user (or all users, for computer-based policies) of the system and sets it as a variable.  For example, the output of ls ~/Library/Application\ Support/Firefox/Profiles/ could be stored in a variable called FFPREFSFOLDER, making ~/Library/Application\ Support/Firefox/Profiles/$FFPREFSFOLDER/prefs.js the actual path of the file for a user.

Now you can insert (or replace) the line that makes up the specific preference.  This isn't nearly as clean as using defaults to push out Safari preferences, but it does provide a way to push out Firefox preferences, be it as a file drop to replace the preferences in the application bundle or as a line edit to alter the settings of an existing user's browser.
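Putting the second method together, here is a minimal sketch.  It runs against a scratch folder so nothing real is touched; point PROFILES at a user's actual ~/Library/Application Support/Firefox/Profiles directory to use it for real.  The pref shown, browser.shell.checkDefaultBrowser, is the default-browser check discussed above; the profile folder name is illustrative.

```shell
# Build a scratch Profiles folder with a sample prefs.js (demo only)
PROFILES="$(mktemp -d)"
mkdir -p "$PROFILES/lzwntwo9.default"
echo 'user_pref("browser.startup.page", 1);' > "$PROFILES/lzwntwo9.default/prefs.js"

# Find the profile folder name and build the path to prefs.js
FFPREFSFOLDER=$(ls "$PROFILES" | grep '\.default$' | head -n 1)
PREFSFILE="$PROFILES/$FFPREFSFOLDER/prefs.js"

# Drop any existing copy of the pref, then append the desired value
grep -v 'browser.shell.checkDefaultBrowser' "$PREFSFILE" > "$PREFSFILE.tmp"
echo 'user_pref("browser.shell.checkDefaultBrowser", false);' >> "$PREFSFILE.tmp"
mv "$PREFSFILE.tmp" "$PREFSFILE"
```

Wrapped in a loop over /Users, the same approach works for computer-based policies.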

VMware vSphere 4 is Here!

Thursday, April 23rd, 2009

At a VMUG meeting in Minneapolis in December, VMware employees mentioned that Virtual Infrastructure would be getting a new name, vSphere.  A few days ago, VMware officially announced vSphere, the successor to the Virtual Infrastructure (VI) product line.  VMware is hailing vSphere as the first true cloud-based operating system, hoping to capitalize on the hype that surrounds cloud computing.

VMware has had products available for years that allow administrators to cluster resources and place virtual machines on a virtualized abstraction layer that spans multiple hosts, pooling RAM, CPU and other system resources.  When we heard there was a raging debate about whether a private cloud was possible, we immediately thought of all of our successful implementations of the VI product.  vSphere is designed from the ground up to sit on low cost and energy efficient computing resources and allow for the flexible deployment of systems onto the cluster.  This allows organizations ranging from small businesses to enterprises, from education to government, to deploy new data protection and high availability resources and to pool IT assets in a manner not previously available.

The key components of vSphere are not all new.  ESX and ESXi are the hypervisors.  These sit on the physical machines (aka the hosts) and provide the virtualization layer.  Sitting on top of the hypervisors is vCenter Server, which handles provisioning, monitoring, the physical-to-virtual conversion process and centralized management.  The vCenter Update Manager keeps all of the ESX systems updated (as well as some of the VMs themselves, to help reduce the surface area of update management).  The VMware High Availability piece provides failover between hosts.  VMsafe is another component that provides security APIs; while offerings from third-party developers are fairly immature, expect this to grow rapidly as the virtualization industry moves into its next stage.

vSphere was built with modern microprocessors in mind.  The Nehalem and its successor, Westmere, were designed in collaboration with VMware; as such, they are built for virtualization.  When planning a potential upgrade to vSphere, it's important to keep in mind that each member of a vSphere cloud is going to run at the speed of the slowest host.  Therefore, in larger environments you will have tiers of VMware virtualized clouds, each with a class of system in it.  The Nehalem and Westmere are designed for 8GB of RAM, so you'll want to make sure to put plenty of memory into the cluster nodes, which have a diminishing return on investment (in terms of memory) around 120GB (so don't be afraid of going hog wild on the memory front, those VMs need it!).

Overall, our tests of vSphere have shown a considerable performance gain for guest operating systems running on hosts with newer hardware.  Older assets see a smaller gain, but a gain nonetheless.  The biggest management features we're finding useful are an upgraded vCenter (for converting those physical systems over to virtual hosts), enhancements to VMotion and automation.  With the latest tools it is fairly straightforward to automate nearly every task using vCenter, including deploying new virtual machines based on templates, restarting virtual machines and migrating them using VMotion.

While the vSphere product may seem overwhelming at first, it begins to bring into focus a contained and mature VMware based infrastructure.  There are a lot of new features, but there is also bound to be a lot of marketing spin; out of the box, vSphere will not do your laundry.  To help guide you through the planning phases of the next generation of the data center (which is, after all, the true target of vSphere 4), 318 is here to provide the experience you need with regards to VMware licensing, architecture and of course support, be it with the guests, the hosts, the storage layer or the virtualization layer itself!

Exchange 2010 Beta Now Available

Wednesday, April 22nd, 2009

Exchange 2010 has been announced and should be available later this year!  The first public beta has some of the feature set and shows the direction Microsoft will be taking Exchange. Three things stand out about Exchange 2010: a continued push toward further integrated communications, client self-service and enterprise clustering. Additionally, Exchange 2010 includes improvements to the database design, which should reduce overall disk I/O by up to 50% and allow the databases to run on lower tier DAS storage (with a target of SATA, even in larger environments).  While a move to reduce errors in the database and make it less I/O dependent is a good start for compelling features, it does not speak to active-active clustering.  These new options are more similar to the LCR options introduced in 2007, just with 16 replicas now available, which allows for a lot of disaster recovery flexibility.

Exchange 2010 includes server-side email archival, which will be a big boon to many Mac environments (Entourage still doesn’t have an auto-archive feature). Server-side email archiving also allows enterprise organizations to gain further control over archives and enforce better policy management for mailboxes.

Exchange 2010 allows users to manage many of their own common tasks rather than opening a service request.  Exchange will also warn users (and allow administrators to make policies based on these types of events) before they make common mistakes such as sending mail to large distribution groups, to recipients who are out of the office or to recipients outside the organization.  Overall, this move towards self-service should reduce overall support costs.

Text based voice mail preview, voice mail rules and further integrated Outlook Web Access (OWA) and Outlook Mobile dominate the theme of Exchange 2010.  Users of the Microsoft unified communications environment will be able to see text previews of voice mail using Outlook, delete voice mails out of Outlook without picking up a handset and even create rules for dealing with certain types of messages (for example, if a voice mail is less than 1 second long it should probably just be deleted). There are a number of other features, most of which (such as a message indicator light, caller ID and voice control over voice mail) are already present in other modern phone systems.  The key word here is other, as Microsoft now has what amounts to a phone system built into Exchange.

As always, many of the new features of Exchange will revolve around new features within the Office product line, which will also receive a refresh in 2010.  Public folders (not shared folders) will more than likely be moved into SharePoint, which will also see an update in 2010.  There will also be a number of upgraded PowerShell commands that will further automate the use of Exchange with the upcoming Windows 7 operating system.

Overall, for many environments, Exchange 2010 should represent a lower Total Cost of Ownership (TCO) than previous releases.  However, it will need to be strategically planned well in advance, especially if your organization will be skipping Exchange 2007 and upgrading from 2003 into Exchange 2010.  If you need help with the strategy and assistance, please feel free to contact 318 and we will do whatever possible to aid in the planning of this transition.

Uninstalling Service Pack 2 from Windows XP In Fusion (Due to Blue Screens)

Wednesday, April 22nd, 2009

1. Grab your Windows install CD.
2. Download the SCSI Disk Driver (it's a Zip file)
3. Extract the contents, it should be an *.fld file.
4. Add a floppy drive to the VM: Settings, Other Devices, +, Floppy, then point the floppy at the *.fld file.
5. Boot XP in Fusion. Press Esc to get to boot menu
6. Boot to CD.
7. Press F6 to add a driver (it won't do it immediately; it will cycle through some screens first).
8. Press S to add the driver (it will now read the floppy)
9. Choose the VMware SCSI driver.
10. Press Enter
11. Boot into Recovery Mode ("R").
12. Choose your install location (most likely “1”)
13. Authenticate to Windows with the Administrator account
14. Get to command prompt.
15. Type: cd $ntservicepackuninstall$\spuninst and hit Enter
16. Type: batch spuninst.txt and hit Enter (errors and file copies will scroll through)
17. Disconnect the floppy once it has finished scrolling.
18. Type: exit and then Enter (this’ll reboot it)
19. Hit F8 to boot into Safe Mode (it WILL take a while to let you through, if it takes longer than 10 minutes, power cycle VM)
If no icons or Start button appear (black screen for longer than 10 minutes), proceed to the next step. If explorer.exe IS running, go to step 25.
20. Send a CTRL+ALT+DEL
21. File > New Task (Run…)
22. In Open, type regedit
23. Go to HKLM>System>CurrentControlSet\Services\RpcSs
24. Right click “ObjectName”, click Modify, type in LocalSystem in the “Value data” box, and then click OK
25. Restart computer in Normal Mode.
26. Re-install VMware Tools to get your mouse back.
27. Find out why SP2 didn't install correctly, and try it again.

Recovering FileMaker and FileMaker Server Databases

Tuesday, April 21st, 2009


The most common thing that happens to FileMaker databases is file corruption. In this case, the local or server files will not be accessible, and customers will report issues.

Normally, one specific file is down and inoperable in FileMaker or FileMaker Server, but sometimes multiple files are affected. You will either have to grab the affected items from a recent backup or otherwise recover the files.


If you have to recover files, you will need FileMaker Pro. If you are recovering .fp5 files (FileMaker 5 databases), either version 5 or 6 is appropriate. If the files are .fp7 files (FileMaker 7 databases), then versions 7, 8, 9 and 10 will work. Open FileMaker, choose the menu command "File, Recover", and select the damaged database file. FileMaker will save a recovered copy.

**Important** For .fp5 (FileMaker 5) files, after recovering, each file’s shared hosting status might revert to Single User Mode. To fix this, open the file in FileMaker Pro 5 or 6, go to “File, Sharing” and set the file to either Multi User or Multi User (Hidden), depending on whether or not you want it to be selectable in FileMaker Server. (If you do not have a version of FileMaker Pro 5 or 6 to work with, most likely a 318 developer will.)

Oracle Buys Sun

Monday, April 20th, 2009

Sun had been in merger talks with IBM, but those talks fell through.  Today, the Sun website says "Oracle to Buy Sun." Oracle is the largest database company in the world and has been tinkering with selling support contracts for Linux and the Oracle suite of database products, which already includes PeopleSoft, Hyperion and Siebel. This merger, valued at $7.4 billion, will give Oracle the ability to sell hardware bundled solutions, further the Oracle development product offerings and give Oracle one of the best operating systems for running databases on the planet.

Oracle doesn't just get hardware and Solaris, though.  This move also solidifies a plan for Oracle customers to integrate Sun storage.  Oracle had previously been working with HP in a partnership that never seemed to gain traction.  Then there are Java, MySQL, VirtualBox and GlassFish.  A number of the Sun contributions will remain Open Source projects, but overall it's possible to see a strategy emerging from a new Oracle + Sun organization.

As a Sun partner, 318 can assist its clients through this transition, be it with storage, MySQL, Java, Solaris or Oracle middleware scripting.  Overall, this deal makes a lot of sense and 318 is behind doing whatever possible to ease our clients through the transition.

Finally, for those concerned that Oracle might just be buying Sun to kill off MySQL, keep in mind that the Open Source community built MySQL in the first place (or was integral to building it) and could build another in its place just as easily, this time faster and with less legacy support required.  MySQL is not a fluke; PostgreSQL or a newer solution would take its place if MySQL were to fall by the wayside under the Oracle helm. Oracle is not going to make MySQL into a martyr, and is going to want to capitalize on the investment (a billion dollar purchase for Sun and obviously part of this purchase), especially with a clear business plan for MySQL to be profitable (which is why Sun bought it for such a lofty price in the first place). Overall, Oracle has no reason to kill MySQL; instead, with Siebel, MySQL, Oracle, PeopleSoft, etc., they can simply tout "All Your Databasen Are Belong To Us!"

Using LCR for Exchange 2007 Disaster Recovery

Thursday, April 16th, 2009

Local Continuous Replication (LCR) is a high availability feature built into Exchange Server 2007.  LCR allows admins to create and maintain a replica of a storage group on a SAN or DAS volume.  This can be anything from a NetApp to an inexpensive jump drive or even a removable sled. In Exchange 2007, log file sizes have been reduced, and those logs are copied to the LCR location (known as log shipping) and then used to "replay" data into the replica database (aka change propagation).

LCR can be used to reduce recovery time in disaster recovery scenarios for the whole database: instead of restoring a database, you can simply mount the replica.  However, this is not meant for day-to-day mailbox recovery, message restores, etc.  It's there to end those horrific eseutil repair and defrag scenarios.  Given the sizes that Exchange databases can reach in Exchange 2003 and Exchange 2007, this alone is worth the drive space used.

Like many other things in Windows, LCR can be configured using a wizard.  The Local Continuous Backup wizard (I know, it should be the LCR wizard) can be accessed using the Exchange Management Console.  From here, browse to the storage group you would like to replicate and then click on the Enable Local Continuous Backup button.  The wizard will ask you for the path to back up to and allow you to set a schedule.  Once done, changes will replicate, but the initial copy will not.  Creating that initial copy is known as seeding and will require a little PowerShell.  Using the name of the storage group (in this example "First Storage Group"), you will stop LCR, manually update the seed, then start it again; the commands, respectively, are:

Suspend-StorageGroupCopy -Identity "First Storage Group"

Update-StorageGroupCopy -Identity "First Storage Group"

Resume-StorageGroupCopy -Identity "First Storage Group"

Now that your database is seeded, click on the Storage Group in the Exchange Management Console and you should see Healthy listed in the Copy Status column for the database you’re using LCR with.  Loop through this process with all of your databases and you’ll have a nice disaster recovery option to use next time you would have instead done a time consuming defrag of the database.

EMC Celerra NX4 Defaults

Wednesday, April 15th, 2009

The EMC Celerra NX4 comes with a number of IPs (and other settings) set from the factory. The IP addressing, by default, is as follows:

  • Primary Internal Network –
  • Backup Internal Network –
  • Netmask
  • IP of Storage Processor A –
  • IP of Storage Processor B –
  • Gateway IP of Storage Processor A –
  • Gateway IP of Storage Processor B –

ESX Patch Management

Tuesday, April 14th, 2009

VMware's ESX Server, like any system, needs to be updated regularly. To see what patches have been installed on your ESX server, use the following command:

esxupdate query

Once you know what updates have already been applied to your system, it's time to find the updates that still need to be applied, which can be downloaded from the VMware patch download page. There you will see a bevy of information about each patch and can determine whether you consider it an important patch to run. At a minimum, all security patches should be run as often as your change control environment allows. Once downloaded, make sure you have enough free space to install the software you've just downloaded, and then copy the patches to the server (using ssh, scp or whatever tool you prefer to use to copy files to your ESX host). Now extract the patches prior to running them. To do so, use the tar command, as follows:

tar xvzf .tgz

Once extracted, cd into the patch directory and then use the esxupdate command with the update flag and then the test flag, as follows:

esxupdate --test update

Provided that the update tests clean, run the update itself with the following command (still with a working directory inside the extracted tarball from a couple of steps ago):

esxupdate update

There are a couple of flags that can be used with esxupdate. Chief amongst them are --noreboot (which doesn't reboot after a given update) and -d, -b and -l (which are used for working with bundles and depots).

If esxupdate fails with an error code, it can be cross-referenced in the ESX Patch Management Guide.

You can also run patches without copying the updates to the server manually, although this will require you to know the URL of the patch. To do so, first locate the patch number that you would like to run. Then, open outgoing ports on the server as follows:

esxcfg-firewall --allowOutgoing

Next, issue the esxupdate command with the path embedded:

esxupdate --noreboot -r http:// update

Once you’ve looped through all the updates you are looking to run, lock down your ESX firewall again using the following command:

esxcfg-firewall --blockOutgoing
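For a batch of downloaded patches, the manual extract-and-apply steps above can be wrapped in a loop. This is a hedged sketch: the patch bundle is a stand-in built on the spot, and the esxupdate calls are echoed rather than executed so the sketch runs anywhere; on a real ESX host, drop the echo and run the commands from within each extracted patch directory.

```shell
# Build a fake patch bundle in a scratch directory (demo only)
WORK="$(mktemp -d)"
cd "$WORK"
mkdir ESX-1234567 && touch ESX-1234567/contents.txt
tar czf ESX-1234567.tgz ESX-1234567 && rm -r ESX-1234567

# Extract each bundle and run the test/apply pair from inside it
for PATCH in *.tgz; do
    tar xzf "$PATCH"                 # extract the patch bundle
    cd "${PATCH%.tgz}"               # work from the extracted directory
    echo "esxupdate --test update && esxupdate --noreboot update"
    cd ..
done
```

Reboot once at the end (rather than per patch) by keeping --noreboot on each pass.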

FileMaker IWP Database Development

Monday, April 13th, 2009

Centering IWP

If you turn off the status area in IWP and edit the file iwp.css (inside the “FM Web Publishing” package), you can get the entire site to center. Add this code to the file iwp.css:

.part, #part { margin-left: auto; margin-right: auto; max-width: 600px }

(Hint: Make sure that the max-width property is smaller than your smallest FileMaker layout width.)

Then to have FileMaker center all the layouts, select everything on each layout, and using the Object Info box, set the anchor on the top and deselect any other anchors. (This is for fixed-width solutions.)

List View issues

If you were planning to use a list view, FORGET IT! List views only show up to 25 records and then FileMaker wants you to page through them. There is no way to specify a smaller or larger paging size.

Instead, use a portal to show the same information. Portals have unlimited records.

If you need to perform a search, you can still list the results in a portal:
1) Get your found set of records
2) Copy all record IDs as a return-delimited list into a global field.
3) Create a relationship from that global field to the table where you got your search results from.
4) Set up a new layout with the background table being the table with the global field, and create a portal using the new relationship.

If you want pages as well (like max 100 records per page so that not all of them loads), set up two new fields:
1) page_number
2) page_records — A calculation field using the return-delimited list global field:
Let ( m = 100 ; MiddleValues ( global_records ; ( ( page_number - 1 ) * m ) + 1 ; m ) )
Change the relationship to use the page_records field instead of the global_records field.
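The paging arithmetic in that calculation can be sketched in shell to show what MiddleValues is doing: pull page 2 of a return-delimited list, 3 records per page. The list contents and variable names here are illustrative, not from a real solution.

```shell
# A return-delimited list of record IDs (stand-in for the global field)
GLOBAL_RECORDS=$'id1\nid2\nid3\nid4\nid5\nid6\nid7'
PAGE_NUMBER=2
M=3                                   # records per page

# Same math as the Let calculation: start = ((page - 1) * m) + 1
START=$(( (PAGE_NUMBER - 1) * M + 1 ))
END=$(( START + M - 1 ))

# Extract lines START through END (the MiddleValues analogue)
PAGE_RECORDS=$(echo "$GLOBAL_RECORDS" | sed -n "${START},${END}p")
echo "$PAGE_RECORDS"
```

Page 2 of this seven-record list yields id4, id5 and id6, just as MiddleValues would return values 4 through 6.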

No preview mode

Forget about subsummary reports altogether. You’ll need to be very creative to accomplish anything like this. If you do need to, consider scripting the subsummary totals and storing them into fields. There is no sliding either, so this needs to be taken into account as well.

If you need to print things out, this is possible. Create a layout that fits on all browsers (about 700×885) and make sure to center IWP. However, you can’t create visible “Go Back” buttons for navigation (since they’ll print out), so you’ll either have to make certain areas of the layout clickable to return you, or you’ll have to do a PHP solution (see “No new windows” section).

Portal issues

If you are trying to submit data to a script by placing a button inside a portal, you might run into problems. For example, if the button or field that has the script on it is an image or field stored in a different table than the portal itself, data from the portal table will not be passed to the script.

No new windows

If you want the solution to open a new page in another window, it is nearly impossible to accomplish. IWP will open new “virtual windows”, but they’re useless. You can use Open URL to spawn a new window or tab, but if you use it to access IWP using a link, the session will then think that you’re in a new layout, and so the old window will have problems.

If you need the new window for display purposes, such as a print-out, create this using PHP designed in HTML instead. You can generate pages from FileMaker using whatever table you want (using fields from a specific layout) and then just borrow the code from that.

Image (container field) boundaries

You should make the boundaries of fixed-size image fields (containers) one pixel larger on each side and set it to crop, and then center it. This will make the graphic look correct in FileMaker and in all browsers.

If you need to stretch an image, just make sure that the boundaries are also one pixel larger on each side and set it to Reduce/Enlarge.

If you need lines to connect to or line up with graphics, DO NOT use FileMaker vector lines! The lines will not necessarily align properly in all browsers, and there will be a difference between a FileMaker session and IWP. Instead, save a one-pixel container image and then stretch this on a page (making the container field 3 pixels wide).

Image speed

If you want your images to load faster in IWP, you should set up web viewers instead of container fields. Web viewers would point to an image located on a web site.

Web viewers will show up in individual iframes (frames within pages on the web site). This way, the browser will cache the image, making it faster to load.

Here are a couple caveats to using web viewers:
1) You cannot use web viewers within portals.
2) iframes by default have a gray border. To fix this, edit your iwp.css file to include this line:
iframe {border:0px solid #FFFFFF; }

Text field boundaries

In IWP, you should make text fields larger than you normally would, especially the height. This is because different browsers have different text sizes, and the text easily gets cut off. Also, many fields do not justify properly (especially vertically), so you might have to toy around with the layout to find a happy medium between FileMaker and the different target browsers.

View Mode vs Edit Mode

IWP Browse Mode actually has two modes: View Mode and Edit Mode. When a record is not yet being edited, you will be in View Mode. IWP will enter Edit Mode as soon as you click into a field that is editable. You can also script it to go into edit mode by using the script step Open Record. Once in edit mode, IWP acts like a regular web site submit form, where nothing is actually changed until you hit a button that submits (or commits) the record.

Be aware that when you change the contents of a field in IWP, field calculations based on this will not change until the form is submitted (the record is committed). So the page is static until a scripted button is pressed!

No custom formatting

Custom formatting unfortunately doesn’t work in IWP at all. If you want text to change colors, make it part of a calculation field.

Disabled script steps

There are many script steps that you cannot use in IWP, most of them having to do with windowing, exporting to files, displaying dialog boxes, or dictionary features. Make sure that “Web Compatibility” is selected at the bottom right of the script editor when writing scripts.

New article on Xsan Scripting by 318

Saturday, April 11th, 2009

318 has published another article on Xsanity covering scripts for various notifications and monitors for Xsan, packaged up into a nice package installer. You can find it here.

Sleeping Windows from the Command Line

Friday, April 10th, 2009

Windows, like Mac OS X, can be put to sleep, locked or suspended from the command line. To suspend a host, you would run the following command:

rundll32 powrprof.dll,SetSuspendState

To lock a Windows computer from the command line, use the following command:

rundll32 user32.dll,LockWorkStation

To put a machine in Hibernation mode:

rundll32 powrprof.dll,SetSuspendState Hibernate

If you would rather simply shut the computer down, there is also the shutdown command, which can be issued at the command line. You can also use tsshutdn, which provides a few more options than the traditional shutdown command. All of these commands can be scripted as well; for example, using the at command to schedule a one-time run (which is actually a feature built into tsshutdn and shutdown). Another way to automate these in Windows is to issue the schtasks command (or simply write a batch file and use the GUI).

Setting Up Folders and Rules in Outlook

Friday, April 10th, 2009

In Outlook, to create a new folder, right click on Mailbox – Username on the left side and select New Folder. Type FooBar E-mail for the name. For "Folder Contains", choose Mail and Post Items (which should be the default).

Now that you have the folder created, a rule needs to be set up so that all e-mail addressed to the e-mail address goes into that folder. To start, go to Tools and then Rules and Alerts. Click on New Rule. Select "Move messages from someone to a folder" and click Next. Uncheck anything that is currently checked, then put a check mark in "with specific words in the recipient's address". In the lower window, click on the blue text that says "specific words". Another box should pop up. In the top thin box, type the user's e-mail address and then click Add. If they have any sort of alias, add that one as well. Click OK when done. Now click on "specified folder". It will bring up another window. Find the FooBar folder that was created earlier, highlight it and click OK. Once the blue highlighted words are correct, you should be able to click Finish and be done.

Now any e-mail that comes into the Exchange server addressed to that e-mail address will be directed to the user's new folder.

Conficker Redux

Thursday, April 9th, 2009

Conficker Part II: we're not trying to beat a dead horse here, nor be fear mongers; our goal is realistic risk management.  Conficker was set to go active on April 1st, but not a lot happened.  Infection estimates tended toward the millions, some as high as 15 million.  That's a sleeping bear that you likely don't want to stir.  Now, as we get a bit further into April and the thaw is upon us, the hibernation appears to be over, even if the only result is a still sleepy bear, rubbing its eyes and, with a big yawn, wandering out of its cave.  As though part of a bad April Fools prank, it appears that Conficker is starting to stir, with reports from security researchers that it is just beginning to send out a payload to infected hosts that, while heavily encrypted, is likely logging keystrokes and designed to steal personal information.

Because Conficker is able to communicate with other infected hosts and download updates to itself (in the form of new payloads), it is able to morph into a new virus, able to do more damage to a system or be used for distributed attacks against larger environments. Because Conficker disables anti-virus software and Automatic Updates from Windows, the best fix is to download and run a tool designed for the task. You can download a free removal tool at

New Intel Xserves: Nehalem

Wednesday, April 8th, 2009

The new Nehalem Xserve is out.  We’ve waited a couple of days to digest the information, so here it is!  The new Xserve is named for the next-generation chip it carries, which makes it the fastest Xserve Apple has shipped to date (this isn’t to say it has the fastest Xeon, but overall it is roughly twice as fast as its predecessor).  To quote Apple:

Its single-die, 64-bit architecture makes 8MB of fully shared L3 cache readily available to each of the four processor cores. The result is fast access to cache data, reduced traffic between processors, and greater application performance. Combine that with the other technological advances and you get an Xserve that’s up to 2x faster than the previous generation.

But the processor being twice the speed isn’t the only thing that got a major upgrade.  The new Xserve can take up to 12 slots (or 6 slots on the quad-core) of 1066MHz DDR3 ECC SDRAM.  The RAM is faster, and the new processor has an integrated memory controller, which reduces the latency between RAM and processor, again increasing speed.  Each processor can control 3 banks of 1066MHz RAM, removing more bottlenecks between the chip and the I/O hub (which is also faster in the latest model, btw).

Everyone else has been overclocking for years.  It’s not quite overclocking, but close enough: enter Turbo Boost.  If the other cores of the chip aren’t doing anything, the Nehalem will let the CPU spike from 2.93GHz up to 3.33GHz.  So rather than telling the CPU to always run faster (and thus hotter), you’re telling it that if the other cores aren’t needed, it can wind them down and move that thermal headroom over to the core that needs the power.

The new Xserve also has some very nice storage options.  While we have been able to install 3 drives in the past, there is now a fourth drive option (similar to the original Xserve).  Rather than loading into the front, though, this drive is installed inside the system, and it’s a 128GB solid state drive (SSD).  You can also purchase a RAID 5 controller for the Xserve.  This seems to indicate that installing the operating system on the SSD and placing data (be it mail, files, etc.) on the RAID 5 (which doesn’t require a PCI slot) will be a common architectural choice.  The Apple Drive Modules (ADMs) can now go up to 1 terabyte each.  These are SATA and are not interchangeable with older Xserves.  If you want to use SAS with the new Xserve, Promise will now be handling all SAS drive modules (be it for VTrak or Xserve) for Apple.

A couple of points about the new Xserve:

  • The RAID 5 controller: ZFS is more efficient than RAID 5.  Provided you are using ZFS, you can get more usable disk capacity with equal throughput.
  • If you’re not into headless serving, the dongle doesn’t come with the server any more (like with MacBooks), so make sure to order it, or just steal the one off your neighbor’s MacBook.
  • Expect it to be a few weeks to ship these things (understandably, it’s a whole new gen of Xserve).
  • Because it’s a new generation, your old spare parts kit likely won’t get you far with these things, and don’t expect to be swapping ADMs between the servers either.
  • The quad-core is only $500 cheaper than the octo-core…  Double the possible memory alone will potentially make the octo-core last a year or more longer than the quad-core as a viable production node…
  • The SSD is nice and all, but they crash too.  Just because there are no moving parts doesn’t mean that they can’t die.  I’m all for using it as your boot volume, but make sure to have a bare metal backup, preferably one that offers 1-button restore.
  • One of the compelling aspects of this server is the processing per unit of rack density.  The power requirements have been lowered, the firepower increased and overall the server is a blazing rocket ship.  For the first time in a long time it has a very compelling story in the 1U server space: it’s similarly priced to other 1U systems, can run Windows/Linux and is way sexier than any other rack mount server (I know the data center isn’t supposed to be a fashion show, but come on, the only other vendor that even seems to care about rack chassis looks is Sun, whose strategy is to make them look a little like a Mac Pro).

Moving Exchange Public Folders Between Information Stores

Wednesday, April 8th, 2009

Moving the Public Folders in Exchange 2003 from one Information Store to another located on the same server.

Previously, the only way to do this was to create another Exchange server and either use pfadmin to transfer the public folders, or to set up replication to the second server and then replicate again to the target Information Store. Either way, you would need another Exchange server.

Setting up and using PFADMIN:

Setting up Public Folder Replicas: (towards middle of page)

The steps outlined below will allow you to do this with only one Exchange server.

1. Ensure there are no connections to Exchange (OWA, Outlook, etc.)
2. Login to Exchange System Manager (ESM)
3. Drill down to the Public Folder that you want to move. Make note of the application
4. Install adsiedit
a. <- For Windows 2003 SP2
5. Drill down in ADSIedit to the public folder
a. Configuration
b. Services
c. Microsoft Exchange
d. Administrative Groups
e. Server
f. Information Store
6. Right mouse click the public folder on the right side that you want to move. Select “Move”
7. A new window will appear, drill down again to the information store that you wish to move the public folder to, and move it.
8. Go back to ESM
9. Go to Mail Box on originating Information Store (where you are moving from)
10. Right mouse click, and re-associate the public folder with the mailbox store. It will automatically redirect itself to the newly moved public folder in the new information store.
11. Reboot Exchange or Restart Exchange Services.

The process above was used to migrate data from one Information Store to another located on a SAN that was connected to an Exchange server. The migration process included first the mailboxes, then the system mailboxes, and lastly the public folders. If following that process, you can then safely delete the mailbox store from the originating Information Store, and then delete the original Information Store (ensure there are no lingering accounts with mailboxes associated to the old store).

Changing Passwords on Windows Computers

Tuesday, April 7th, 2009

For a Domain Password:
1. Go to Active Directory Users and Computers
2. Locate user account
3. Change Password for user account
4. Wait 15 minutes for changes to propagate in a large domain with more than 2 DCs
5. Done

Local Password Change on Windows Computers on a Domain:
1. Create batch file with following script:

net user <username> <newpassword>

2. Edit/Create GPO for OU that has computers in question
3. Place the script as Computer startup/shutdown script GPO
4. Wait for computer GPO to propagate, and users to shutdown/startup later that evening.
5. Done

Stand-alone Workstations:
1. Ensure workstations are XP Pro (won’t work on XP Home – you’ll have to use sneakernet for password changes)
2. Ensure Simple File Sharing is TURNED OFF (if not, then Sneakernet)
3. Get PsPasswd
4. Make a list of all windows computers on your network, and save it to a file (a computer on each line)
5. run: pspasswd @file -u localadministrator -p password username newpassword
6. Done
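As a sketch of steps 4 and 5, the computer list is just a plain text file with one hostname per line. The hostnames below are hypothetical examples, and the pspasswd invocation is shown as a comment since it has to be run from a Windows machine with PsPasswd in its PATH:

```shell
# Build the computer list for pspasswd, one hostname per line
# (WS-01 through WS-03 are hypothetical example names).
printf '%s\n' WS-01 WS-02 WS-03 > computers.txt

# From a Windows host with PsPasswd available, you would then run:
#   pspasswd @computers.txt -u localadministrator -p password username newpassword
cat computers.txt
```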

Ensure the credentials you are changing are not being used for any services (On Server and Workstation):
1. Start > run > services.msc
2. Click on “Standard “Tab
3. Sort by “Log On As”
4. Note which ones are being used by non-system accounts. Ensure your changes are not going to affect them. If they are, consider making separate service user accounts for the services in question, or change the password for the service as well.
a) Get to the Properties of the service
b) Click on the Log On tab
c) Enter in the correct changed password, and confirm it.

Enable and Disable Root from the Command Line

Monday, April 6th, 2009

In Tiger and below you used NetInfo Manager to enable and disable the root account in Mac OS X.  However, in Leopard and above you use Directory Utility.  But you can also use the command line.  In /usr/sbin there is a handy little tool called dsenableroot.  To use it, simply open up Terminal and type dsenableroot.  It will then prompt you for your password.  Provided you type it correctly, it will prompt you twice for the password you want the root account to have.  Assuming the target passwords match, you should then see something similar to the following in your secure.log file:

Apr  6 09:38 client162[22]: checkpw() succeeded, creating credential for user root

There are other options you can use with the dsenableroot command.  The -u, -p and -r flags can be used to put the username, password and root password into the command, so that it is not interactive.  For example, the following would set the root password on a machine to TANSTAAFL! and use the username of Mike with a password of WyomingKnott:
dsenableroot -u Mike -p WyomingKnott -r TANSTAAFL!
The dsenableroot command can also disable the root account.  To do so, simply use the -d flag.  This can be done interactively with just dsenableroot followed by -d.  It can also be done as in the above example in a non-interactive manner (useful for scripting or sending via ARD):
dsenableroot -d -u Mike -p WyomingKnott
You can also use dsenableroot to change the password of the root account, or stick with the passwd command for that.
There is an undocumented option with dsenableroot, but it’s simply a very unexciting way to get a version:
dsenableroot -appleversion
This should spit out comma-delimited output (well, almost) that can be used, for example, to verify that the dsenableroot command hasn’t been tampered with (although a checksum might be better suited to that):
dsenableroot, Apple Computer, Inc., Version 112
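If you wanted to check that version string from a script (say, before pushing commands out via ARD), the last comma-separated field can be pulled out like this. Note the sample string below is hardcoded from the output above rather than taken from a live call:

```shell
# Sample -appleversion output, hardcoded here; on a live system you would
# capture it with something like: version_line=$(dsenableroot -appleversion 2>&1)
version_line='dsenableroot, Apple Computer, Inc., Version 112'

# Split on ", " and keep the last field, i.e. the version.
version=$(printf '%s' "$version_line" | awk -F', ' '{print $NF}')
echo "$version"
```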

Retrospect 8 Warning

Friday, April 3rd, 2009

For users of Retrospect 8 with Tape Libraries, EMC has issued a bug advisory regarding precautions to be taken to make sure you don’t erase all of the data in the library. The warning is as follows:

This issue only applies to environments where EMC Retrospect 8.0 is used with a tape library. If that configuration applies to you, please read the following notice carefully, so that you can take necessary precautions.

Problem Description:
When highlighting a group of tape slots or a magazine and clicking Erase, EMC Retrospect 8.0 incorrectly sends the Erase All command, commanding the tape library to erase ALL the tapes contained in the library, instead of only those tapes in the group/magazine.

Immediate Workaround:
To prevent the accidental erasure of tapes that contain valid data, either erase one tape at a time, or remove all the tapes that you do not want erased from your library before performing an erase operation on a group of tapes.

Resolution Pending:
This issue is being investigated with the highest priority, and a fix will be provided via automatic updates as soon as possible.

Mac OS X: Show Only Active Apps in the Dock

Thursday, April 2nd, 2009

The Dock should hold the applications you commonly need to get to.  However, some simply want it to show only the applications that are open.  You can do this by running the following command:

defaults write com.apple.dock static-only -bool TRUE

Once run, reboot, or just restart your dock with the following command:

killall Dock

To undo it:

defaults write com.apple.dock static-only -bool FALSE
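For anyone flipping this back and forth, a small helper might read the current value and toggle it. This is a sketch: the defaults and killall lines are commented out so the toggle logic can be followed without touching a live Dock, and com.apple.dock is the Dock’s preference domain:

```shell
# Toggle helper (sketch): flips between TRUE and FALSE so one script can
# switch the Dock's static-only preference on and off.
toggle() {
  # `defaults read` reports booleans as 1/0, so accept both spellings.
  if [ "$1" = "TRUE" ] || [ "$1" = "1" ]; then
    echo FALSE
  else
    echo TRUE
  fi
}

# On a Mac you would wire it up roughly like this (commented out here):
#   current=$(defaults read com.apple.dock static-only 2>/dev/null)
#   defaults write com.apple.dock static-only -bool "$(toggle "$current")"
#   killall Dock
toggle TRUE
```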

Restoring Data From Rackspace

Wednesday, April 1st, 2009

Rackspace provides a managed backup solution. Backups are available going back up to one month: the most recent two weeks are kept on their premises, and the two weeks prior are stored offsite. If the files to restore fall within the offsite period, the restore will take longer, as the tapes have to be moved from the offsite location back onsite before the restore process can start.

Restores can be performed either from Rackspace’s Web Portal or via a support phone call.

To restore by phone:
1. Call Rackspace
2. Supply your account name and password
3. State that you want to restore files, and whether the machine is Windows or Linux
4. Give the backup operator the file path and the date to restore from
5. A ticket will be created and updated throughout the restore process. The ticket will be updated when the restore is complete and will include the directory of the restored data.