Posts Tagged ‘tutorial’

Video on Setting up TheLuggage

Friday, July 13th, 2012

The Luggage is shaping up to be the go-to packaging software for Mac Admins. Getting started can be daunting for some, though, so I’ve narrated a video taking you through the steps required to set it up. Not included:
- Getting a Mac Dev Center account (while this process can mostly be done for free, having an account is the best and easiest route if you do have access)
- Downloading the tools from the Mac Dev Center (Command Line Tools and Auxiliary Tools for Xcode)
- Choosing your favorite text editor (no emacs vs vi wars, thanks)

Setting up The Luggage from Allister Banks on Vimeo.

Happy Packaging! Please find us on Twitter or leave a comment if you have any feedback.

Hiding a Restore Partition With jamf

Monday, August 9th, 2010

The jamf binary installed in the /usr/sbin directory has a number of things it does really well. Many of the tasks exposed in Casper Admin can be tapped into using shell scripts.

One nice option that the Casper Suite has for the mobile users in many an enterprise is the ability to restore a given machine to a known good working state. Casper addresses this using a concept known as a restore partition. The restore partition can be used to deploy a base set of packages to a client, or maybe just a functional operating system that hooks back into the JSS, or JAMF Software Server. Because you want the restore partition to be somewhat undefiled, you can hide it. Then, if a user needs to boot to the restore partition, they would simply boot the computer holding down the option key and select Restore (or whatever you have named it).

The /usr/sbin/jamf command can then be used to hide that restore partition using the hideRestore option. For example, assuming that the restore partition is named Restore, the following command will hide it:

/usr/sbin/jamf hideRestore

But, you might find that you want to deploy multiple hidden partitions. So let’s say that you had another for running disk tools. In our environment we could call it 318Tools. So to hide it as well, we would use the same command, but with the -name option followed by the name of the other partition we would like to hide, like so:

/usr/sbin/jamf hideRestore -name 318Tools
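If you maintain more than one hidden partition, a small shell wrapper can keep the invocations in one place. This is a hypothetical sketch, not part of the Casper Suite: it only echoes the jamf commands it would run (a dry run), and the partition names are the examples from this post.

```shell
# hide_partitions: print the jamf command that would hide each named
# partition (dry run). "Restore" is the default name, so it needs no
# -name flag; anything else gets -name appended.
hide_partitions() {
  for p in "$@"; do
    if [ "$p" = "Restore" ]; then
      echo "/usr/sbin/jamf hideRestore"
    else
      echo "/usr/sbin/jamf hideRestore -name $p"
    fi
  done
}

hide_partitions Restore 318Tools
```

Replacing the echo statements with the commands themselves (and running as root) would execute them for real.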

Overall, there are a number of uses other than simple patch management with the Casper Suite, and this is just one of the small things you can do with the jamf command, an integral part of the Suite.

Adding Windows Services Monitoring in Zenoss

Thursday, July 22nd, 2010

1. Under devices find the server
2. Go to Configuration Properties
3. Scroll down until you find zWinUser and zWinPassword, and enter in admin username and password.
4. Click on the first item under Components on the left hand side
5. Click on the “+” Sign
6. Click Add Win Service
7. Choose the service from the drop down menu.
8. Click on Service if status says “Unknown”
9. Find server under Display
10. Change Set Local value to Yes
11. Click SAVE (from light testing, this seems to only have to be done once per service).

MySQL Backup Options

Thursday, July 8th, 2010

MySQL bills itself as the world’s most popular open source database. It turns up all over, including most installations of WordPress. Packages for multiple platforms make installation easy and online resources are plentiful. Web-based admin tools like phpMyAdmin are very popular and there are many stand-alone options for managing MySQL databases as well.

When it comes to back-up, though, are you prepared? Backup plug-ins for WordPress databases are fairly common, but what other techniques can be used? Scripting to the rescue!

On Unix-type systems, it’s easy to find one of the many example scripts online, customize them to your needs, then add the script to a nightly cron job (or launchd on Mac OS X systems). Most of these scripts use the mysqldump command to create a text file that contains the structure and data from your database. More advanced scripts can loop through multiple databases on the same server, compress the output and email you copies.

Here is an example we found online a long time ago and modified (thanks to the unknown author):


#!/bin/sh

# List all of the MySQL databases that you want to backup in here,
# each separated by a space
databases="database1 database2 database3"

# Directory where you want the backup files to be placed
backupdir=/var/backups/mysql

# MySQL dump command, use the full path name here
mysqldumpcmd=/usr/bin/mysqldump

# MySQL Username and password
userpassword=" --user=myusername --password=mypassword"

# MySQL dump options
dumpoptions=" --quick --add-drop-table --add-locks --extended-insert --lock-tables"

# Unix Commands
gzip=/usr/bin/gzip

# Create our backup directory if not already there
mkdir -p ${backupdir}
if [ ! -d ${backupdir} ]; then
  echo "Not a directory: ${backupdir}"
  exit 1
fi

# Dump all of our databases
echo "Dumping MySQL Databases"
for database in $databases
do
  $mysqldumpcmd $userpassword $dumpoptions $database > ${backupdir}/${database}.sql
done

# Compress all of our backup files
echo "Compressing Dump Files"
for database in $databases
do
  rm -f ${backupdir}/${database}.sql.gz
  $gzip ${backupdir}/${database}.sql
done

# And we're done
ls -l ${backupdir}
echo "Dump Complete!"
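To run a script like this nightly via cron, as mentioned above, an entry along these lines could be added with crontab -e. The script path, log path, and schedule here are assumptions; adjust them to wherever you saved the script.

```shell
# m  h  dom mon dow  command
30 2 * * * /usr/local/bin/mysql_backup.sh >> /var/log/mysql_backup.log 2>&1
```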

Once you verify that your backup script is giving you valid backup files, these should be added to your other backup routines, such as CrashPlan, Mozy, Retrospect, Time Machine, Backup Exec, PresSTORE, etc. It never hurts to have too many copies of your critical data files.

To make sure your organization is prepared, contact your 318 account manager today, or email for assistance.

Changing The Password Policy on Windows Server 2008 Domain Controllers

Wednesday, June 2nd, 2010

There seems to be a bug (maybe feature?) in Windows Server 2008 where you cannot change the default password policies on at least the first Domain Controller in a new Domain via Group Policy Management and editing the Default Domain Controller security policy.

You must make the changes in the Local Policies section of Active Directory on the Windows Server 2008 Domain Controller.
1. Start > All Programs > Administrative Tools > Local Security Policy
2. Security Settings > Password Policy

NOTE: You will see that the Password Policy for the domain controller is populated, unlike in GPMC.MSC where everything is “Not Configured” but has a confusing note about default settings being other than “Not Configured”.

To further confuse the issue, in Windows Server 2008 R2 using the Local Security Policy to change the password policy on the DC will NOT work: the settings are grayed out. The Domain Controller policy then seems to fall back to the Default Domain Security Policy (not the Default Domain CONTROLLER Security Policy). After changing the password policies under GPMC.MSC for the Default Domain Policy, I was able to successfully get the needed password configuration settings onto the Domain Controller. It seems that the Default Domain Controller Security Policy password settings are either no longer separate from the Default Domain Security Policy, or the Default Domain Security Policy now overrides the Default Domain Controller Policy. This happened on a fully patched Windows Server 2008 R2 x64 OS.
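One way to sanity-check which settings actually took effect, regardless of which policy won, is the built-in net accounts command, which prints the effective password policy (minimum length, maximum age, lockout threshold, and so on) as the server sees it. Run it from an elevated command prompt; the /domain switch queries the domain-level values.

```batch
net accounts /domain
```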

Evaluating Backup Exec Jobs

Tuesday, April 13th, 2010

[ ] Assess the Job Setup tab and review its listing to determine which jobs are currently configured on the system.
[ ] Review the selection list to ensure that all relevant data and file shares are being backed up and copied
[ ] Assess the Job Monitor tab to confirm that the jobs that are setup and configured are actually running as scheduled.
[ ] Review the job logs (Job History) to ensure that all data is being backed up or if there are minor errors, note what caused those errors to correct later.
[ ] Ensure that the job did not fail due to lack of space (or other chronic issues); if it did, the client most likely needs larger storage, or media and jobs must be set to allow overwriting of data.

Backup Agents are needed for special data such as SQL and Exchange databases, or files located on remote computers. Many open files will not back up unless the Open File Agent is present, installed and licensed on the data source.

Media Sets (Under the Media tab) are collections of backup media that share common properties. In Backup Exec, media sets can be adjusted under their properties to allow for overwrite and appends infinitely or after a certain period of time. This allows you to manage how media is managed when space begins to come into play. Verify these settings to ensure proper retention.

[ ] Review the Alerts tab and check under Active Alerts sub-tab and ensure that no jobs have been waiting on media or needed human interaction or response.
[ ] Review the Alert History sub-tab and verify that no jobs in the past were waiting for interaction or response.
[ ] Check backup notifications under each job and under the default preferences (Tools >> Recipients… & Tools >> Email and Pager Notification…), to ensure that the proper individuals are being notified about backups and alert items.
[ ] Review the Devices tab and verify that there are no devices/destination that are Offline.
[ ] Ensure that any device currently listed as a backup destination (unless it is a member of a device pool) is online. If the device is a member of a device pool and the backup job references that pool, jobs will continue once at least one of the pool's devices is online.

Typically backup jobs will have destinations of either tape, local or network storage. Most likely an external backup device will fall under the tree as a Backup-to-Disk Folder. If the drive/device is not connected it may show up as Offline. If you are sure that the device is connected, right-click on the entry and ensure the device is both confirmed as Online and also Enabled.

To learn more about Backup Exec – here are some additional links:
Symantec Backup Exec website

Datasheets on usage of Backup Exec 2010 (Applications,Features, Agents)

Wikipedia on the architecture and history of Backup Exec

Checking Backup Jobs in Atempo’s Time Navigator

Wednesday, March 31st, 2010

Time Navigator is a powerful enterprise-level backup software suite. It is also one of the most complex backup suites you can manage.

In order for an ADB to be successful you need to check the following:

  • Whether the scheduled backups were successful or not.
  • If they were unsuccessful is intervention required?
  • Check the available storage for future backups
  • Did test restore succeed or not
  • Are Critical files backed up?
  • General log review

Section 1. Check whether the scheduled backup was successful

To begin you need to know the username and password for the local user on the host computer, which needs to have admin rights, as well as the username and password for the tina catalog.

Step 1: Open the Time Navigator Administrative Console.
On Mac it is /Applications/Atempo/tina/Administrative Console
On Windows c:\Program Files\Tina\Administrative Console

When the Administrative Console starts it will initiate a connection to the Time Navigator Catalog indicated in the config files.

It will prompt you for a username and password. Once you enter the proper username and password you will gain access to the Administrative Console.

This program interface is the main access point to the various programs that let you control Time Navigator.

Step 2: From the "Monitor" menu, select "Job Manager"; this will open the Time Navigator Job Manager. The initial view will show all active jobs. Go to the View menu and choose "Historic" to show past jobs.

From here you will be able to review the recent backup jobs to find out whether they were successful or not.

Section 2. If the backups were unsuccessful do they require intervention?

Determining whether intervention is required depends largely on the reason for the backup failure.

From within the Job Monitor you can select a job from the historic menu and double click it to access the job detail window.

From this window you will have access to several tabs. The tab of interest here will be the one called “Events”. This is a filtered view of the logs so it shows only the log entries that are connected to this job number.

Making the determination of whether intervention is warranted requires some knowledge of the errors you find. To that end, the errors are color coded. Yellow errors are considered minor and can likely be overlooked if they are the only errors present, while orange and red errors are higher priority and should warrant the attention of a tech trained in Time Navigator.

Section 3. Check the available storage capacity for future backup executions.

Time Navigator treats all forms of storage as a tape library. Your backup destination will either be a Virtual Tape Library, in the case of backing up to hard drives, or a specific physical tape library.

This means that we will need to view the Library Manager application.

Start with the Admin Console. Choose the host to which the library is attached (all libraries are attached to a host). Select the host icon and choose the "Devices" menu; from there choose "Library", then "Operations", then "Management".

This will spawn the Library Manager application. You will be presented with a dialogue containing a list of available Libraries.

Once chosen, you will get a window that shows the number of drives (virtual or real) and the tape cartridges in their slots (also virtual or real). From this display you will be able to determine which tapes have been used and which are free for use.
If a cartridge has been used, it will be labeled for the tape pool to which it belongs. If it is free for use, it will be labeled SPARE, ?????, or, in rare cases, Lost & Found. Lost & Found cartridges should be reported to the administrator.

A comprehensive determination of how much space is left would take some math: know how much data each tape holds, how much data is backed up nightly, and so on.

A quick rule of thumb is percentages: if less than 10% of cartridges are free and available, it might be worthwhile to notify the administrator. It will take some experience to tell whether this is a problem, as some tapes can hold hundreds of gigs and two tapes might take months to fill.

Section 4. Test Restore. Success or Failure.

This section implies that you will attempt a restoration of some files.
File restoration with Time Navigator is both its most powerful feature and its most complex in comparison to other backup software.

First, a word about the process. While the Administrative Console and associated applications can be run on any computer that participates in the Time Navigator backup system, the Restore and Archive Manager application will attempt to make a connection to the host from which files were backed up. This means you will need credentials for that host which allow read/write access to the directories that were backed up. To this end, it is often simpler to open the Administrative Console on the host in question before you open the Restore and Archive Manager application.

To restore files from the backup of a host you will need to select the host from the Administrative Console. From the “Platform” menu choose “Restore and Archive Manager”. You will then be challenged for a username and password for the host in question.

Once you have entered legitimate credentials for the host, you will be presented with the window for the Restore and Archive Manager. It will show the host name and the username by which you are connecting. It will also show you the complete file system on this host as expandable trees, with a checkbox beside each element.

Furthermore, this view shows you the file system in the present and has the capacity to show the file system at some point in the past.

This element is where the program gets its name: the "Time Navigator" allows you to navigate through time to look at the file system and select files for restoration.

The idea here is that you know what time period you are looking for. You select the date beside the “past” radio button and it will then show you what files are available for that time period.

The second feature shown on this interface is the ability to isolate files that have been deleted, meaning you can adjust the view to show files that were present in the past but are not present now, spanning back an arbitrary amount of time as determined by the form element for days, weeks, months, etc.

While this is very useful, it will not filter out non-deleted files, meaning you have to know which directory you want to look in before this becomes useful.

A third, and in my opinion the most useful, method of restoring files is called versioning.
If you right-click (Control-click) on a file that has been backed up, you will be presented with a contextual menu containing the word "versions".

Once selected it will open a dialogue window with every version of the file that is currently within the backup catalog.

Once you have selected a file from that list, you will need to click the "synchronize" button at the bottom of this versions dialogue. This will set the past date and time marker to the point in time when this file was backed up. You can then check mark the file to be restored.

Finally, you can search the catalog for files from this host. From within the Restore and Archive Manager, choose the "Backup" menu and choose Find.

You will be presented with the search interface, with the current host already selected as the search base. From here you can search by pathname and filename, and specify how far back in time to search and how many results to show.

The search forms will accept wildcards for more creative searching. Once a file is located in the results window you will need to select the “synchronize” button at the bottom in a manner similar to the versions window mentioned above.


All of the above techniques are methods of locating the files you wish to restore and putting check marks beside them. Now it is time to restore them.

Once you have all the files you wish to restore check marked, we can proceed via the "Restore" menu item. If there is any question as to what you have selected, there is an option here to "view checked objects", which will filter the view to show only objects that have been check marked for restoration.

Next we can choose to test the restore or run it for real. If there is any question as to whether the media for a file is available, you should run it as a test first.

When you select test, you will be greeted with a warning dialogue saying that this operation will perform all operations except for the writing of data itself. This means drives and/or tape cartridges will be engaged and network throughput will be used.

After you agree, the restore dialogue will show. You will have two tabs to choose from, the first of which is labeled "Parameters".

From here you can choose whether to restore the files to their original locations or to a new location on the same file system (restoring to another host is possible, but not covered here).

Now you must choose what level of restore you wish. Here you are presented with several radio buttons that allow you to choose whether to restore data with or without directory and object information. This may seem like splitting hairs, but in some environments it is nice that your backup system can restore the user permissions for objects in your directory tree instead of just restoring everything.

The checkbox for "restore all file versions" will restore everything in the "versions" list discussed above. It is not used very often.

Now to the second tab, "Behavior". The first selection to be made here is what to do if a file with the same name already exists at the destination path.

You will see options to restore the file and overwrite, to rename either the existing file or the restored file, or to not restore if certain conditions are met.

Keep this in mind: if you need to restore a large number of files and you don't know whether you should overwrite existing files, restore them to a neutral location and review them by hand.

If an error occurs while restoring files, should Time Navigator skip, cancel, or ask the user? This selection is important if you are not monitoring the process: if you choose skip, you will need to review the logs afterward; if you choose cancel, you could come back to very little data having been restored.

Finally, the section "if required cartridges are off-line": you run into this if you are dealing with physical tapes that are no longer within the library.

Issue Operator Requests for each missing cartridge: the software will prompt you each time a tape is missing.
Ignore files indicated on those cartridges: self-explanatory.
Display offline cartridge list: this is the one I have learned to check. It checks the availability of the tapes against the current library listing, which means that if you put new tapes in, you have to scan the bar codes before this list updates. This method avoids a lot of headaches and is my recommendation if you are dealing with physical tape.

Finally, you get to press Restore, where you will be presented with the dialogue for the restore process. You will see the progress bar, the path of files being restored, and the option to monitor restore events.

If after all of this you have problems restoring you should contact a Time Navigator Admin.

Section 5. Did critical files back up?

At first glance this is similar to "did backups succeed?". You can back up the system state for Windows servers, which covers critical files, but you should also check that the catalog for Time Navigator itself is being backed up. In the Administrative Console there is a host icon called CATALOG. It is very important that this gets backed up nightly: if this file becomes corrupt or non-functional, the entire backup is effectively lost, and even a good Time Navigator tech can spend a huge amount of time pulling data from the tapes.

Section 6. General Logs Review

This section covers looking for things that seem out of place. From the Administrative Console, choose Monitor Events.

This will open the event monitor. If you see errors such as "Environment error" or "catalog error", they need to be reported.

WordPress Security Auditing

Thursday, March 11th, 2010

After reading Sarah Gooding’s article, 7 Quick Strategies to Beef Up Your Security, we decided to take a look at our own WordPress settings here on the 318 Tech Journal.

Deleting the Default Admin User

Creating a new user with admin permissions, then logging in as that user and deleting the default "admin" account is great advice. Just make sure you assign all of the old admin user's posts and links to the new account. Another caveat: if you are using the WPG2 plugin with a Gallery2 installation, make sure to remove the Gallery2 user links before deleting the old admin account.

Don’t Use the Default “wp_” Table Prefix

SQL injection attacks are very real, and this tip can help mitigate risk of infection. The WP Security Scan plug-in mentioned in the article has a built-in tool to help automate this change, but it can also lock you out of your dashboard. The trick is to make sure each user’s meta_key settings in the usermeta table match whatever prefix you choose:

wp_capabilities –> newprefix_capabilities
wp_usersettings –> newprefix_usersettings
wp_usersettingstime –> newprefix_usersettingstime
wp_user_level –> newprefix_user_level
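For reference, the usermeta fix can also be applied directly in SQL after the tables themselves have been renamed. This is a hypothetical sketch, not the plug-in's own method: "newprefix_" is a placeholder for whatever prefix you chose, and you should back up the database before running it.

```sql
-- Rewrite any remaining wp_-prefixed meta keys to the new prefix.
-- The underscore is escaped in LIKE so it matches literally.
UPDATE newprefix_usermeta
   SET meta_key = REPLACE(meta_key, 'wp_', 'newprefix_')
 WHERE meta_key LIKE 'wp\_%';
```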

Whitelisting Access to wp-admin by IP Address

This is typically done via .htaccess files and the AskApache Password Protection For WordPress plug-in mentioned in the article can help get the settings correct, although that plug-in has specific server requirements in order to run (it will run some tests for you to see if your server qualifies). If you do set this up, beware of dynamic IP address changes, which can lock you out in the future.

Other Items to Consider

  • Consider using a local MySQL application like Sequel Pro or the command line mysql tools for database configuration instead of public web-facing tools like phpMyAdmin. If you do use PMA, you should lock down access as much as possible using .htaccess controls (or other methods).
  • Tools like the WP Security Scan plug-in mentioned above or Donncha O Caoimh’s WordPress Exploit Scanner plug-in can help identify file permission issues in your WordPress setup.
  • Using SSH/SFTP instead of FTP to access your server is always good advice, even when you are using whitelists.
  • Stay up to date on both WordPress core files and all of your plug-ins.

318 is here to help you with all of your WordPress needs – call us today at 877.318.1318!

Setting Up SonicWALL’s SonicPoints

Tuesday, February 23rd, 2010

99% of this is from Page 23 of the SonicWALL Network Security Appliances – SonicPoint-N Dual-Band Getting Started Guide, the other 1% makes it worth reprinting.

Configuring Wireless Access

This section describes how to configure SonicPoints with a
SonicWALL UTM appliance.

SonicWALL SonicPoints are wireless access points specially engineered to work with SonicWALL UTM appliances. Before you can manage SonicPoints in the management interface, perform the following steps:
-Configuring Provision Profiles
-Configuring a Wireless Zone
-Configuring the Network Interface

Configuring Provision Profiles
A SonicPoint Profile defines settings that can be configured on a SonicPoint, such as radio SSIDs and channels of operation.

These profiles make it easy to apply basic settings to a wireless zone, especially when that zone contains multiple SonicPoints. When a SonicPoint is connected to a zone, it is automatically provisioned with the profile assigned to that zone. If a SonicPoint is connected to a zone that does not have a custom profile assigned to it, the default profile "SonicPoint-N" is used.

To add a new profile:
1. Navigate to the SonicPoint > SonicPoints page in the SonicOS interface.
2. Click Add SonicPointN below the list of SonicPoint provisioning profiles.
3. The Add/Edit SonicPoint Profile window displays settings you can enable and/or modify.

Settings Tab:
1. Select Enable SonicPoint
2. Enter a Name Prefix to be used internally as the first part of the name for each SonicPoint provisioned
3. Select the Country Code for the area of operation

802.11n Radio Tab
1. Select Enable Radio
2. Optionally, select a schedule for the radio to be enabled from the drop-down list. The most common work and weekend hour schedules are pre-populated for selection.
3. Select a Radio Mode to dictate the radio frequency band(s). The default setting is 2.4GHz 802.11n/g/b Mixed.
4. Enter an SSID. This is the access point name that will appear in clients' lists of available wireless connections.
5. Select a Primary Channel and Secondary Channel. You may choose AutoChannel unless you have a reason to use or avoid specific channels.
6. Under WEP/WPA Encryption, select the Authentication Type of your wireless network. SonicWALL recommends using WPA2 as the authentication type.
7. Fill in the fields specific to the authentication type that you selected. The remaining fields change depending on the selected authentication type.
8. Optionally, under ACL Enforcement, select Enable MAC Filter List to enforce Access Control by allowing or denying traffic from specific devices. Select a MAC address object group from the Allow List or Deny List to automatically allow or deny traffic to and from all devices with MAC addresses in the group. The Deny List is enforced before the Allow List.

Advanced Tab:
Configure the advanced radio settings for the 802.11n radio. For most 802.11n advanced options, the default settings give optimum performance. For a full description of the fields on this tab, see the SonicOS Enhanced Administrator’s Guide.

Configuring a Wireless Zone

You can configure a wireless zone on the Network > Zones page. Typically, you will configure the WLAN zone for use with SonicPoints.

To configure a standard WLAN zone:
1. On the Network > Zones page in the WLAN row, click the icon in the Configure column.
2. Click on General tab.
3. Select the Allow Interface Trust setting to automate the creation of Access Rules to allow traffic to flow between the interfaces within the zone, regardless of which interfaces the zone is applied to. For example, if the WLAN Zone has both the X2 and X3 interfaces assigned to it, selecting the Allow Interface Trust checkbox on the WLAN Zone creates the necessary Access Rules to allow hosts on these interfaces to communicate with each other.
4. Select the check boxes for the security services to enable on this zone. Typically, you would enable Gateway Anti-Virus, IPS, and Anti-Spyware (IF YOU HAVE THE LICENSES). If your wireless clients are all running SonicWALL Client Anti-Virus, select Enable Client AV Enforcement Service.
5. Click on the Wireless Tab.
6. Select Only allow traffic generated by a SonicPoint to allow only traffic from SonicWALL SonicPoints to enter the WLAN Zone interface. This provides the maximum security on your WLAN.
7. Optionally, click the Guest Services tab to configure guest Internet access solely, or in tandem with secured access. For information about configuring Guest Services, see the SonicOS Enhanced Administrator’s Guide.
8. When finished, click OK.

Configuring the Network Interface

Each SonicPoint or group of SonicPoints must be connected to a physical network interface that is configured for Wireless. SonicOS by default provides a standard wireless zone (WLAN), which can be applied to any available interface.

To configure a network interface using the standard wireless (WLAN) zone:
1. Navigate to the Network > Interfaces page and click the Configure button for the interface to which your SonicPoints will be connected.
2. Select WLAN for the Zone type.
3. Select Static for the IP Assignment.
4. Enter a static IP Address in the field. Any private IP is appropriate for this field, as long as it does not interfere with the IP address range of any of your other interfaces.
5. Enter a Subnet Mask.
6. Optionally, choose a SonicPoint Limit for this interface. This option helps limit resources on a port-by-port basis when using SonicPoints across multiple ports.
7. Optionally, choose to allow Management and User Login mechanisms if they make sense in your deployment. Remember that allowing login from a wireless zone can pose a security threat, especially if you or your users have not set strong passwords.

Verifying Operation

To verify that the SonicPoint is provisioned and operational, navigate to the SonicPoint > SonicPoints page in the SonicOS management interface. The SonicPoint displays an “operational” status in the SonicPointNs table.

Connect to WIFI and ensure that you can browse the Internet.

Blackberry BIS Setup, Websites and Providers

Wednesday, February 3rd, 2010

You will want to create an IMAP or POP account, *not* an OWA account. If you create an OWA account it will not sync in real time.

To set up an IMAP or POP account you must:

1. Create an account on one of the websites listed below.

2. Enter the PIN# and the ESN# (located under the battery and on the outside of the box).

3. Fill in the user name (usually their e-mail address) and then the wrong password twice for the site to give you more options.

4. Next, go through the setup using your own configurations and settings or it will default to OWA. Once finished, the user should get an activation e-mail. From there you should be able to test.

A list of wireless providers with BIS sites:

  • Bell Canada
  • Cellular South
  • Cincinnati Bell
  • Dobson Cellular
  • Earthlink Wireless
  • Edge Wireless
  • Rogers Wireless
  • TeleCommunication Systems
  • T-Mobile Austria
  • T-Mobile Germany
  • T-Mobile UK
  • T-Mobile USA
  • US Cellular
  • Verizon Wireless
  • Vodafone Germany

Preparing for Exchange 2007

Wednesday, January 27th, 2010

Make sure you have a fully updated Windows 2008 64bit install setup for the following commands to work. Note that Windows 2008 R2 will NOT work with Exchange 2007.

Exchange 2007 has a number of prerequisites that must be in place before you can install it. Instead of going through a bunch of wizards and using trial and error to make sure you have everything installed, you can set them up from the command line.

The first command that should be run is:

ServerManagerCmd -i PowerShell

This will install and configure everything that Exchange 2007 needs for PowerShell.

IIS has several components that need to be installed to use Exchange 2007. You can create a quick batch script that includes them all. The following commands need to be run:

ServerManagerCmd -i Web-Server
ServerManagerCmd -i Web-ISAPI-Ext
ServerManagerCmd -i Web-Metabase
ServerManagerCmd -i Web-Lgcy-Mgmt-Console
ServerManagerCmd -i Web-Basic-Auth
ServerManagerCmd -i Web-Digest-Auth
ServerManagerCmd -i Web-Windows-Auth
ServerManagerCmd -i Web-Dyn-Compression

If you plan on using RPC over HTTP (Outlook Anywhere) you will need to run this command after all of the IIS commands have finished:

ServerManagerCmd -i RPC-over-HTTP-proxy

After running these commands you should be ready to run the actual setup files. When you run setup.exe, you should see that everything before option 4 is greyed out; option 4 is what triggers the install. If anything has not finished, look back through the command-line output to make sure no errors have shown up.

Adding a User and Folder to FTP Running Active Directory in Isolation Mode

Thursday, January 21st, 2010

Note: For the purpose of these directions the username is MyUser

First, create a user in Active Directory (assuming, also, that there is an FTP users container in AD)

Next, create a home directory in the FTP share (for MyUser it might be D:\Company Data\FTP\MyUser *naming the home folder the same as the user name*)

Go to the command line use these commands to map the directories to the accounts:

iisftp /SetADProp MyUser FTPRoot "D:\company data\ftp"

*note the quotation marks around the path, required because of the space between "company" and "data"*

iisftp /SetADProp MyUser FTPDir MyUser

You can verify this by running ftp localhost from the command line and logging in with the new user's credentials.

You can also create and delete a file to make sure the mapping correctly edits the folder.

Note: If the password for the domain administrator account changes, you must change it in IIS as well.

Installing Zenoss

Wednesday, December 30th, 2009

To monitor a device over the WAN, there needs to be a 1-to-1 firewall rule allowing SNMP traffic from the WAN to the device on the LAN. To monitor multiple devices, each device will need a dedicated WAN IP with its own firewall rule. SNMP runs over UDP on port 161.

Installing the SNMP service from Windows components will require the i386 directory for the .dll files.
Download and install the additional SNMP .dll files provided by SNMP Informant.
Once installed, right-click on the SNMP service, click Properties, and go to the Agent tab:
Contact: (e.g. a contact e-mail address)
Location: (e.g. 830 Colorado Ave. Santa Monica, CA)
Check all services listed below that
Move to the next tab, Traps:
Community Name: (e.g. 318zenoss)
Click Add to list
Then click Add and enter the Zenoss server address
Move to the next tab, Security:
Make sure Send authentication trap is checked
Add the community name 318zenoss as read-only
Check Accept SNMP packets from any host
Click Apply and OK

Restart the Service.

Add two firewall rules allowing traffic from the device (LAN) to the WAN address of the Zenoss server.

Next, add the device in Zenoss:
Log in as your user
Click Add Device
Enter the device's WAN IP address for Device Name
SNMP Community: 318zenoss
Select the Server Class:
/Servers/Windows – Windows Server
/Servers/Darwin – Mac Server
/Servers/Unix – Linux/Unix Server

Add or select a Location Path

Add or select the client name as the Location

Select your team as the Group

Use SSH Tunneling to Access Firewalled Devices

Friday, December 4th, 2009

Many environments have numerous desktops or servers that we may need to support remotely but no full-fledged VPN solution. If the client has a server on a DMZ, or is forwarding SSH ports to a specific server, you can use SSH to then access other machines otherwise protected by the firewall.

For instance, say client MyCo has two servers: a publicly reachable gateway and a backup server. In this scenario, the backup server has no remote access, and the gateway has SSH access available over port 22. Let's say the need arises to provide remote support for the backup server, which has both SSH and ARD/VNC enabled.

In this scenario, it is possible to open up a remote ARD session to the backup server from my remote laptop by utilizing ssh tunneling. To do so, I run the following command from my laptop:

ssh -L -N

This command tells ssh to open up local port 5901 and tunnel it to the gateway, which will in turn forward traffic to the backup server over port 5900.

Once I have run this command, I can open up a VNC connection to my local machine, which will then be forwarded through ssh to the client's private backup server:

open vnc://

Alternatively, you may only want shell access to the firewalled server. To accomplish that, we can instead open up a local port with ssh (once again, from my laptop):

ssh -L -N

From here I can ssh to the local port, which will once again forward to the backup server (this time over port 22):

ssh mycoadmin@ -p 50022


In order for this to work, ssh must be enabled on any client or server that you want to access. Also, the publicly accessible server must be able to resolve the target name that you provide. For instance, in the above example, if the backup server's hostname doesn't properly resolve on the gateway, the solution will not work. In this instance, you could specify the internal IP of the backup server:

ssh -L 50022: -N

The local port is somewhat arbitrary (5901 and 50022 in my examples); you just want to make sure that the port is not already in use, which can be determined by looking at the output of `netstat -a -p TCP`, or through `lsof -i TCP:50022`, where '50022' is the local port you want to open.
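As a sketch, here is a quick way to confirm a local port is free before binding the tunnel to it. It uses bash's built-in /dev/tcp rather than lsof or netstat, so it works even on minimal systems; the port number is just an example:

```shell
port=5901
# A successful connect on bash's /dev/tcp means something is already
# listening locally; a failed connect means the port is free for ssh -L.
if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
  status="port $port is already in use"
else
  status="port $port is free"
fi
echo "$status"
```

If the port is taken, just pick a different local port; the remote end of the tunnel is unaffected.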

Resolving Qmaster Problems with Xsan

Thursday, November 5th, 2009

When using Qmaster in an Xsan environment, it is often desirable to use the Xsan volume for Qmaster cluster storage; this allows all Qmaster render nodes on the Xsan to access assets for rendering directly, rather than having to pull the assets over NFS. However, a race condition exists: when qmasterd fires prior to an Xsan volume being mounted, Qmaster will create a folder structure at the volume's mount path, which prevents proper mounting of the Xsan volume.

To resolve the issue, you can set a delay on the qmaster daemon to give the Xsan volumes sufficient time to mount. This can be done by editing the launch daemon file located at /Library/LaunchDaemons/ and changing its contents to match the following, which adds a 60-second delay prior to qmasterd starting:


/bin/sleep 60; /usr/sbin/qmasterd
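Reconstructed as a complete launch daemon, the edit might look like the sketch below. The Label value and surrounding keys are assumptions for illustration, so match them to the qmasterd plist actually present in /Library/LaunchDaemons/:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.apple.qmasterd</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/sh</string>
        <string>-c</string>
        <!-- the 60-second sleep gives Xsan volumes time to mount first -->
        <string>/bin/sleep 60; /usr/sbin/qmasterd</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```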


Greylisting and Snow Leopard Server

Thursday, October 8th, 2009

10.6 has introduced greylisting as a spam prevention mechanism. In short, it denies the first attempt by an MTA to deliver a message; once the sending server tries a second time (after an acceptable delay, proving it's not an overeager spammer), it can be added to a temporary approval list so future e-mails are delivered without a delay.

The problem with this is that many popular mail systems, including Gmail, don't exactly behave as expected, so messages may take hours to be delivered. To get around this, the people championing greylisting suggest maintaining a whitelist of these popular but 'non-standard' mail servers, allowing them to bypass the greylist process entirely so their messages are accepted the first time around. The other problem is for companies that send mail through MXLogic and similar services: mail goes out from the first available server, so messages can be delayed repeatedly because each attempt comes from a different MXLogic box.

The problem with this under 10.6 is that there is no GUI to inform you that greylisting is enabled (it gets turned on when you enable spam filtering), so it just takes forever for messages to hit your inbox. You can start managing the whitelist/greylist system, or you can just turn it off:

cp /etc/postfix/ /etc/postfix/

vi /etc/postfix/

change line 667 from:

smtpd_recipient_restrictions = permit_sasl_authenticated permit_mynetworks reject_unauth_destination check_policy_service unix:private/policy permit

To the following (removing check_policy_service unix:private/policy):

smtpd_recipient_restrictions = permit_sasl_authenticated permit_mynetworks reject_unauth_destination permit

You can then run postfix with the reload verb to reload the config files, as follows:

postfix reload
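If you prefer to script the edit rather than make it in vi, the change amounts to a one-line substitution. The sketch below runs against a sample string rather than the live Postfix config, so it is safe to try anywhere; test against a copy of the real file first:

```shell
line='smtpd_recipient_restrictions = permit_sasl_authenticated permit_mynetworks reject_unauth_destination check_policy_service unix:private/policy permit'

# Drop the greylisting policy service from the restriction list
newline=$(echo "$line" | sed 's| check_policy_service unix:private/policy||')
echo "$newline"
```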

Using kmsrecover to Restore Kerio Backups

Monday, September 28th, 2009

Using kmsrecover to restore a mail server/user
This command will overwrite the existing config and modify the message store, which is why you need another machine for this, with adequate hard drive space.

[ ] Install KMS locally on your computer (skip the wizard)
[ ] Rename your laptop's volume to the same name as the volume where the KMS store lives
(e.g. Mail Server HD, Server HD, or Macintosh HD)
[ ] Copy the KMS backups to an external drive and plug it into the laptop.
[ ] Navigate to the mail server path in Terminal or DOS.
Mac: /usr/local/kerio/mailserver
PC: C:\Program Files\KerioMailServer
[ ] Start the recovery
Mac: ./kmsrecover |
For a full recovery, point to the backup location:
./kmsrecover /Volumes/backup
For a specific recovery, use the filename:
./kmsrecover /Volumes/backup/
PC: kmsrecover |
For a full recovery, point to the backup location:
kmsrecover E:\backup
For a specific recovery, use the filename:
kmsrecover E:\backup\
Warning: If the parameter contains a space in a directory name, it must be enclosed in quotes: kmsrecover "E:\backup 2"

Setup HP OfficeJet Printers Using Terminal Services

Wednesday, September 9th, 2009

Oftentimes remote users have OfficeJet printers and would like them redirected in Terminal Services. Prior to the new version of Remote Desktop, this was difficult to do; most times, the user had to lose functionality on their local printer in order to get it to work. With the latest version of Remote Desktop for Windows (version 6), this is no longer an issue: the printer will redirect as it's supposed to. The following are the steps to accomplish this successfully.

1. Download the drivers. You must ensure the server has the drivers before redirection will take place. You can open up the printer control panel, and open up the print server properties from there and search for the driver. If the driver is not there, you must install it. If, for example, HP does not have just the driver, but the entire install suite, install only the printer portion, and choose the option to install even though the printer is not plugged in (sometimes this will require that the server be rebooted). Open up the print server menu from the printer control panel again, and confirm the printer is there.

2. Ensure the remote client is running Windows XP SP2; if not, they will not be able to upgrade Remote Desktop to version 6. Once you have confirmed they are running SP2, have the user go to: and select the appropriate version for their OS. It will then ask the user to validate their version of Windows. Once this is done, install the new version of Remote Desktop and test. They should be good to go now.

Restoring Kerio Mail Server Data Without Using KMS Restore

Tuesday, September 8th, 2009

This article will cover the why and the how of restoring mail files without using the KMS recover tool.

WHY you might not want to use the KMS recover tool:
1. The KMS recover tool requires that you stop KMS in order to restore. This is an interruption on a live mail server that can bring a company to a halt, and we do not always have the opportunity to wait until off-hours to restore important data.
2. When the KMS tool restores a specific folder, it overwrites that folder. For example, if Julia asks me to restore all the messages in her inbox from before September 2nd, restoring that inbox using KMS recover from September 2nd would erase the entire contents of the inbox and replace it with the contents as of September 2nd. This is not always what is desired.

How to do this properly.

1. Isolate and decompress the archive zip file for the client. This is often the most time-consuming part of the restore, especially if there are quite a number of zip files to look through. I suggest you install some sort of utility to look at the zip contents, such as the zip Quick Look plugin found at:

Without such a utility you will need to decompress as many zip files as it takes to isolate the user folder in question. It is important to know that if the user account is over 1 GB, the backup process will split it across multiple zip files.

2. Once we have isolated the files to be restored, copy them to the Archive directory as indicated in the Administrative Console under the Archive and Backup tab. Once these files are copied to the Archive directory, KMS will index them so they become available in the mail admin web interface.

3. Log into the mail admin web interface. Expand the Archive folder and you should see a listing for the files you copied to the archive folder.

4. Now we need to get these files into the target folder. This can be accomplished several ways:
A. The easiest way to restore these files is when you have the password for the user in question. If so, you can access the web interface for that user. In the admin account, create a public folder of the type required: a mail folder for mail, a contact folder if you are transferring contacts, or a calendar folder for transferring calendar items. Right-click on the new public folder and change the access rights so that the admin account and the user account in question both have administrative rights. Once this public folder has been created, copy the files from the archive folder to the public folder, usually by right-clicking on the archive folder and choosing "Move or Copy all" and choosing the public folder as the destination. Once that copy process is done, you can log into the web interface for the account in question and proceed to copy the messages out of the public folder into the appropriate folder in their account.

This gives you the option to see what messages are there so you don't overwrite them.

Repeat this process as necessary to restore the messages to the folders required.

B. If you don't have the password to the user account, you can still accomplish a lot, but it will require the use of the terminal on the mail server. I would suggest you try your best to get the mail password for the account, or change it to grant yourself access. If you cannot get the password, or changing it would cause real problems, you can proceed in the following manner.

It is important that the actual message copying take place in the web interface, so that Kerio can properly name the messages and keep the index files correct.

In the admin web interface, create a new folder named temp_restore. This folder will be empty, with nothing important in it. At this point you will need SSH access to the mail server, and you will need to grant yourself root access. Navigate to the mail store directory and then to the account of the person you are restoring messages for. Use the ditto command to copy the contents of the target restore folder (in this example, Renee's inbox) into temp_restore.

When you refresh the mail admin web interface, temp_restore will gain all of the properties of Renee's inbox. You can then proceed to copy the files from your archive folder into temp_restore; this preserves message numbers and index files. Once the copy is complete, return to the terminal and reverse the direction of the copy. This will make Renee's inbox the same as your temp_restore.
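The ditto step can be sketched as follows. The store layout, account names, and message filename are all illustrative, and `cp -R` is shown as a portable stand-in for ditto (which behaves similarly for this purpose); the sketch builds a throwaway stand-in store so it is safe to run anywhere:

```shell
# Stand-in mail store built in a temp directory; on the real server the
# store lives under the KMS install directory.
store=$(mktemp -d)
mkdir -p "$store/renee/INBOX" "$store/admin/temp_restore"
touch "$store/renee/INBOX/00000001.eml"

# Mirror the inbox contents into temp_restore
# (on the macOS server: ditto <inbox path> <temp_restore path>)
cp -R "$store/renee/INBOX/." "$store/admin/temp_restore/"

restored=$(ls "$store/admin/temp_restore")
echo "$restored"
```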

This method is trickier with the inbox, as messages may have come in while you were making the copies; other folders are not as sensitive to this process.

This process can be time-consuming, but it is sometimes best to work slowly rather than stop the mail server for everyone.

Hosting Your Mail Store on a Non Booted Volume Using Kerio Mail Server

Monday, September 7th, 2009

There is a bug in KMS when the mail store is at the root level of a non-boot volume.

When the path to the mail store exactly matches the mount point for the hard drive, KMS cannot determine whether the volume is properly mounted. This leads to the creation of a folder in the /Volumes directory that causes the mount point of the intended drive to have the number 1 appended to its name.

For example, if your second internal drive is named KERIO_DATA, the full path to the drive is /Volumes/KERIO_DATA.

If the path to the mail store is set to /Volumes/KERIO_DATA in the administrative console, KMS will not be able to test whether the drive is mounted.

It should now be considered best practice to create a folder within the drive called mailstore, so that the full path to the mail store is /Volumes/KERIO_DATA/mailstore.

This allows KMS to test whether that path is valid before the mail server daemon starts.

How to Fix It
If you come across a scenario where this mount point problem has occurred, you will see the following in the /Volumes folder:

/Volumes/KERIO_DATA and /Volumes/KERIO_DATA1

Since Kerio works on path names, it will ignore /Volumes/KERIO_DATA1 and work with /Volumes/KERIO_DATA.

To fix this:

1. Stop KMS
2. Move /Volumes/KERIO_DATA to another location. This is a folder and can be moved.
3. Unmount /Volumes/KERIO_DATA1 so neither KERIO_DATA nor KERIO_DATA1 is present.
4. Remount the drive so that it properly mounts as /Volumes/KERIO_DATA
5. Start KMS

What about the messages that were received during the time that KMS was working with that folder?

You can't just move those message files into the current mail store. Each folder contains mail messages numbered using a hexadecimal naming convention; if you just copy the messages over, you can overwrite existing messages. In addition, the status.fld file indicates the next safe file name for a mail message.

If these are out of sync, it will take hours for KMS to catch up.
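To illustrate why blind copies collide, here is a small sketch of the hexadecimal naming scheme. The .eml extension and flat directory layout are assumptions for illustration; the real store also tracks the next name in status.fld:

```shell
dir=$(mktemp -d)
touch "$dir/0000000a.eml" "$dir/0000000b.eml"

# The next safe filename is one past the highest existing hex number;
# reusing an existing number would silently overwrite a message.
last=$(ls "$dir" | sed 's/\.eml$//' | sort | tail -n 1)
next=$(printf '%08x.eml' $(( 0x$last + 1 )))
echo "next safe name: $next"
```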

The best practice is to locate the mail archive folder as indicated in the admin console and move the erroneously created mailstore folder into the Archive folder.
This folder structure will then be available in the admin user account's web interface, allowing the admin to access all aspects of the mail store, including contacts and calendar items.

You will then need to move the messages, contacts, and calendars to the appropriate users. The precise technique for restoring these items is covered more fully in my next article, on restoring items without using the kmsrestore application.

New Video on System Image Utility in Snow Leopard

Tuesday, September 1st, 2009

Now that NetRestore has been moved into Mac OS X Server (kinda), we have created a new video on creating a NetRestore image for Snow Leopard.

Yet Another Spyware Article

Monday, August 31st, 2009

First and foremost, it’s called MS Antivirus, or MS Antispyware:

From Wikipedia:

MS Antivirus has a number of other names. It is also known as XP Antivirus,[2] Vitae Antivirus, Windows Antivirus, Win Antivirus, Antivirus Pro, Antivirus Pro 2009, Antivirus 2007, 2008, 2009, 2010, and 360, Internet Antivirus Plus, System Antivirus, Spyware Guard 2008 and 2009, Spyware Protect 2009, Winweb Security 2008, System Security, Malware Defender 2009, Ultimate Antivirus2008, Vista Antivirus, General Antivirus, AntiSpywareMaster, Antispyware 2008, XP AntiSpyware 2008 and 2009, WinPCDefender, Antivirus XP Pro, and Anti-Virus-1

It can be spread through the following vectors:
Most have a Trojan horse component, which users are misled into installing. The Trojan may be disguised as:

* A browser plug-in or extension (typically toolbar)
* An image, screensaver or archive file attached to an e-mail message
* Multimedia codec required to play a certain video clip
* Software shared on peer-to-peer networks
* A free online malware scanning service

Lately, with the infections I've seen this year, it seems to spread by tricking the user into downloading a CODEC to play a video. Sometimes the link will appear within a frame (say, the AOL main web site with an article directed somewhere else). It will also bypass web filtering applications (i.e. SurfControl) as long as the site that carries the malware is not banned for any reason. I was reading of an instance where a graphic designer was looking for a CODEC for their software, downloaded one that they thought was good from a site that hosted graphic design templates, and got infected from there.

I also read of an instance in an enterprise environment where a business person was looking for info on an article, and happened to find what he thought was a news video on the subject, and got infected from there.

The following are ways to decrease a company's chances of being a target for this infection:
1. Begin updating all Windows workstations with current security patches from Microsoft, and update them regularly.
2. User education (especially: don't download codecs!)
3. Keep AV up to date.

Damage control consists of cleaning the computers with free tools we have at hand.

I have had success (meaning clean system with no nuke and pave) using the following strategy:
1. Download and Install CCleaner: Run it in regular mode and clear out the temp files, and unneeded registry entries.
2. Go to Control Panel, Add/Remove Programs and attempt to remove Malware from there.
3. Turn off system restore to delete all system restores that are probably compromised now.
4. Download and install Malwarebytes. Open it in regular mode, update it, and then run it in safe mode (no networking). If you can't run it, go to step 12.
5. Reboot
6. Run Malwarebytes in regular mode until it reports no issues. If there are viruses still present, run it in safe mode. If you can't run Malwarebytes at all, or it's still not fully clean after 3 passes, continue to step 7. If no spyware is present but Google redirects, skip to step 12.
7. Download Superantispyware:
8. Update it in regular mode for Windows.
9. Run it in safe mode to remove more malware.
10. Reboot
11. Repeat step 6, if step 6 fails, continue to step 12.
12. Download Combofix:, and update it in regular mode.
13. Run it in safe mode. If Combofix will not run, continue to step 14.
14. Find the Malwarebytes executable by going to the shortcut it placed on the desktop, and rename the executable from *.exe to *.com.
15. Boot into regular mode and update Malwarebytes.
16. Boot into safe mode and see if it will run (ensure it's still named *.com). Repeat step 6 until the system is clean, then rename it back to *.exe. If this fails, continue on to step 17.
17. Rename combofix.exe to combo-fix.exe, then run it. After it's finished, repeat step 6.
18. If all of these fail, back up your registry again and try running IceSword. IceSword's GUI is in Chinese; if this is unacceptable, back up, nuke and pave, reinstall the OS plus data, and rejoin to the domain if necessary.

The above steps go from the least intrusive software to the more dangerous, with Combofix and IceSword being the ones that can cause the most damage if used improperly (they can delete needed items in the registry or muck up Microsoft Office applications). Personally, Combofix seems to do the trick and is the only one that will take care of the Google link redirects. IceSword is the worst-case scenario, and I've only had to run it once since I first became aware of it 2 years ago.

Links on the subject for your reference:

Uninstalling Service Pack 2 from Windows XP In Fusion (Due to Blue Screens)

Wednesday, April 22nd, 2009

1. Grab your Windows install CD.
2. Go to and download the SCSI Disk Driver (it’s a Zip file)
3. Extract the contents, it should be an *.fld file.
4. Add a floppy drive to the image in VMware: Settings, Other Devices, +, Floppy, and point the floppy at the *.fld file.
5. Boot XP in Fusion. Press Esc to get to the boot menu.
6. Boot to the CD.
7. Press F6 to add a driver (it won't immediately do it; it will cycle through some stuff first).
8. Press S to add the driver (it will now hit the floppy).
9. Choose the VMware SCSI drive.
10. Press Enter.
11. Boot in Recovery Mode ("R").
12. Choose your install location (most likely “1”)
13. Authenticate to Windows with the Administrator account
14. Get to command prompt.
15. Type: cd $ntservicepackuninstall$\spuninst and hit Enter
16. Type: batch spuninst.txt and hit Enter (errors and file copies will scroll through)
17. Disconnect floppy once it finished scrolling.
18. Type: exit and then Enter (this’ll reboot it)
19. Hit F8 to boot into Safe Mode (it WILL take a while to let you through; if it takes longer than 10 minutes, power cycle the VM).
If no icons or Start button appear (black screen for longer than 10 minutes), proceed to the next step. If explorer.exe IS running, go to step 25.
20. Send a CTRL+ALT+DEL
21. File > New Task (Run…)
22. In Open, type regedit
23. Go to HKLM\System\CurrentControlSet\Services\RpcSs
24. Right click “ObjectName”, click Modify, type in LocalSystem in the “Value data” box, and then click OK
25. Restart computer in Normal Mode.
26. Re-install VMWare tools to get your mouse back.
27. Find out why SP2 didn’t install right, and try it again

Recovering FileMaker and FileMaker Server Databases

Tuesday, April 21st, 2009


The most common thing that happens to FileMaker databases is file corruption. In this case, the local or server files will not be accessible, and customers will report issues.

Normally, one specific file is down and inoperable in FileMaker or FileMaker Server, but sometimes multiple files are affected. You will either have to grab the affected items from a recent backup or otherwise recover the files.


If you have to recover files, you will need FileMaker Pro. If you are recovering .fp5 files (FileMaker 5 databases), either version 5 or 6 would be appropriate. If the files are .fp7 files (FileMaker 7 databases), then versions 7, 8, 9 and 10 will work. Open FileMaker, choose menu command “File, Recover”, and select the damaged database file. FileMaker will save a recovered copy.

**Important** For .fp5 (FileMaker 5) files, after recovering, each file’s shared hosting status might revert to Single User Mode. To fix this, open the file in FileMaker Pro 5 or 6, go to “File, Sharing” and set the file to either Multi User or Multi User (Hidden), depending on whether or not you want it to be selectable in FileMaker Server. (If you do not have a version of FileMaker Pro 5 or 6 to work with, most likely a 318 developer will.)

ESX Patch Management

Tuesday, April 14th, 2009

VMware’s ESX Server, like any system, needs to be updated regularly. To see what patches have been installed on your ESX server use the following command:

esxupdate -query

Once you know what updates have already been applied to your system, it's time to find the updates that still need to be applied; you can download any that have not yet been run from VMware's patch download page. There you will see a bevy of information about each patch and can determine whether you consider it important to run. At a minimum, all security patches should be run as often as your change control environment allows. Once downloaded, make sure you have enough free space to install the software, and then copy the patches to the server (using ssh, scp, or whatever tool you prefer for copying files to your ESX host). Now extract the patches prior to running them. To do so, use the tar command, as follows:

tar xvzf .tgz

Once extracted, cd into the patch directory and then use the esxupdate command with the update flag and then the test flag, as follows:

esxupdate --test update

Provided that the update tests clean, run the update itself with the following command (still with a working directory inside the extracted tarball from a couple of steps ago):

esxupdate update
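The extract-test-update cycle above can be scripted. Here is a safe sketch: the patch filename is made up, a dummy tarball is created so the loop has input, and echo stands in for the real esxupdate call, so it can be dry-run anywhere:

```shell
# Build a dummy patch tarball in a scratch directory; on a real ESX host
# you would skip this and use the downloaded .tgz files instead.
cd "$(mktemp -d)"
mkdir ESX350-200904401
tar czf ESX350-200904401.tgz ESX350-200904401
rm -r ESX350-200904401

# Extract each patch tarball and run the update test from inside it
# (replace echo with the real esxupdate on the ESX host).
for patch in ESX*.tgz; do
  tar xzf "$patch"
  result=$( cd "${patch%.tgz}" && echo "esxupdate --test update" )
  echo "$result"
done
```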

There are a couple of flags that can be used with esxupdate. Chief amongst them are --noreboot (which doesn't reboot after a given update) and -d, -b, and -l (which are used for working with bundles and depots).

If esxupdate fails with an error code these can be cross referenced using the ESX Patch Management Guide.

You can also run patches without copying the updates to the server manually, although this will require you to know the URL of the patch. To do so, first locate the patch number that you would like to run. Then, open outgoing ports on the server as follows:

esxcfg-firewall -allowOutgoing

Next, issue the esxupdate command with the path embedded:

esxupdate --noreboot -r http:// update

Once you’ve looped through all the updates you are looking to run, lock down your ESX firewall again using the following command:

esxcfg-firewall -blockOutgoing

New article on Xsan Scripting by 318

Saturday, April 11th, 2009

318 has published another article on Xsanity, for scripting various notifications and monitors for Xsan and packaged up into a nice package installer. You can find it here

Sleeping Windows from the Command Line

Friday, April 10th, 2009

Windows, like Mac OS X, can be put to sleep, locked, or suspended from the command line. To suspend a host you would run the following command:

rundll32 powrprof.dll,SetSuspendState

To lock a Windows computer from the command line, use the following command:

rundll32 user32.dll,LockWorkStation

To put a machine in Hibernation mode:

rundll32 powrprof.dll,SetSuspendState Hibernate

If you would rather simply shut the computer down, there is also the shutdown command, which can be issued at the command line. You can also use tsshutdn, which provides a few more options than the traditional shutdown command. All of these commands can also be scripted, for example using the at command for a one-time run (which is actually a feature built into tsshutdn and shutdown). Another way to automate these in Windows is to use the schtasks command (or simply write a batch file and use the GUI).

Setting Up Folders and Rules in Outlook

Friday, April 10th, 2009

In Outlook, to create a new folder, right-click on Mailbox – Username on the left side and select New Folder. Type FooBar E-mail for the name. For "Folder contains," choose Mail and Post Items (which should be the default).

Now that the folder is created, a rule needs to be set up so that all e-mail addressed to the address in question goes into that folder. To start, go to Tools, then Rules and Alerts, and click New Rule. Select "Move messages from someone to a folder" and click Next. Uncheck anything that is currently checked, then put a check mark next to "with specific words in the recipient's address." Down in the lower window, click the blue text that says "specific words." Another box should pop up; in the top thin box, type the user's e-mail address and click Add. If they have any sort of alias, add that as well. Click OK when done. Now click on "specified folder." It will bring up another window; find the FooBar folder that was created earlier, highlight it, and click OK. Once the blue highlighted words are correct, you should be able to click Finish and be done.

Any e-mail that arrives on the Exchange server addressed to that e-mail address will now be directed to the user’s new folder.

Changing Passwords on Windows Computers

Tuesday, April 7th, 2009

For a Domain Password:
1. Go to Active Directory Users and Computers
2. Locate user account
3. Change Password for user account
4. Wait 15 minutes for the change to propagate in a large domain with more than 2 DCs
5. Done
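The same reset can be done without opening Active Directory Users and Computers. A minimal sketch, run from a domain-joined machine with domain admin rights; the username "jdoe" and the password are placeholders for your environment:

```batch
rem Reset a domain account's password from the command line
net user jdoe N3wP@ssw0rd! /domain
```

The /domain switch directs the change at a domain controller rather than the local account database.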

Local Password Change on Windows Computers on a Domain:
1. Create batch file with following script:

net user usernameyouwanttochange newpassword

2. Edit/create a GPO for the OU that contains the computers in question
3. Place the script as a computer startup/shutdown script in the GPO
4. Wait for the computer GPO to propagate and for users to shut down/start up later that evening.
5. Done
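The steps above can be sketched as a small startup script. The account name, password, and log share below are all placeholders you would substitute for your environment:

```batch
@echo off
rem resetlocalpw.bat - deployed as a computer startup script via GPO
rem "backupadmin", the password, and \\server\logs are placeholders
net user backupadmin N3wP@ssw0rd!
if errorlevel 1 echo Password reset failed on %COMPUTERNAME% >> \\server\logs\pwreset.log
```

Remember to unlink or disable the GPO once it has applied, so the script does not keep resetting the password at every boot.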

Stand-alone Workstations:
1. Ensure workstations are XP Pro (this won’t work on XP Home – you’ll have to use sneakernet for those password changes)
2. Ensure Simple File Sharing is TURNED OFF (if not, then sneakernet)
3. Download PsPasswd (part of the Sysinternals PsTools suite)
4. Make a list of all Windows computers on your network and save it to a file (one computer per line)
5. Run: pspasswd @file -u localadministrator -p password username newpassword
6. Done
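Putting steps 4 and 5 together, a minimal sketch: the computer names, admin credentials, and target account below are all hypothetical examples:

```batch
rem computers.txt holds one computer name per line, for example:
rem   PC-FRONTDESK
rem   PC-ACCOUNTING
pspasswd @computers.txt -u localadministrator -p AdminP@ss jdoe N3wP@ssw0rd!
```

PsPasswd reports success or failure per machine, so any workstation that was offline can be retried from a trimmed-down list.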

Ensure the credentials you are changing are not being used for any services (On Server and Workstation):
1. Start > run > services.msc
2. Click on the “Standard” tab
3. Sort by “Log On As”
4. Note which ones run as non-system accounts. Ensure your changes are not going to affect them. If they are, consider creating separate service user accounts for the services in question, or change the password stored by the service as well:
a) Get to the Properties of the service
b) Click on the Log On tab
c) Enter in the correct changed password, and confirm it.
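The same check and fix can be sketched from the command line. The service name "SomeService" and the account/password are placeholders; the findstr filter simply hides the built-in system accounts:

```batch
rem List services whose Log On As account is not a built-in system account
wmic service get Name,StartName | findstr /v /i "LocalSystem LocalService NetworkService"

rem Update the credentials stored by a service (placeholders throughout)
sc config SomeService obj= ".\jdoe" password= N3wP@ssw0rd!
```

The service must be restarted before the new credentials take effect.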

Restoring Data From Rackspace

Wednesday, April 1st, 2009

Rackspace provides a managed backup solution. Backups are retained for up to one month: the most recent two weeks are stored on their premises, and the two weeks before that are stored offsite. If the files to restore fall within the offsite period, the restore will take longer, as the tapes must be moved from the offsite location back on-site before the restore process can start.

Restores can be performed either from Rackspace’s web portal or via a support phone call.

Calling Rackspace:
1. Supply your account name and password.
2. State that you want to restore files, and whether the machine is a Windows or Linux computer.
3. Give the backup operator the file path and the date to restore from.
4. A ticket will be created and updated throughout the restore process. The ticket will be updated when the restore is complete and will include the directory containing the restored data.