Posts Tagged ‘backup’

Spelunking An iTunes Backup

Wednesday, June 12th, 2013

Say you’re excited about installing a particular beta of a particular mobile operating system, and are foolhardy enough to put it on a phone that was in use for business purposes. Let’s go even further, hypothetically, and say you had been using iCloud Backup, but made a backup with iTunes before upgrading… leaving a gap of about half a day, during which contacts were added. This is a phone that’s often used for testing and little else, so no accounts besides iCloud are configured, and you don’t encrypt the backup because you don’t have passwords you want or need restored. After the beta upgrade completes, you restore the iCloud Backup, leaving out that one phone number that’s the direct line to a level two support group at a certain backup company. iTunes is just not fun to plug into, though, so let’s go spelunking in the backup it created.

First, I needed to get the backup into a state I could interact with. For that I chose the product with the best domain name, and its iOS Backup Extractor. I chose to put it all in tmp, so it gets dumped sooner rather than later, and found a promising database to sift through:

in tmp

Following basic sqlite3 commands I found on @tvsutton’s site, I saw a promising table, ABPersonFullTextSearch_content. Sure enough, the contact info I was missing was there and I could pull it out to restore just that one contact I’d created.
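For the curious, the spelunking session looked something like this sketch. The database filename, table columns, and sample row here are assumptions and fabrications (so the snippet can run anywhere); the real file came out of wherever the extractor dumped the backup:

```shell
# Stand-in for the extracted backup's AddressBook database; the real path
# and column names will differ, this just mimics the shape of the hunt.
DB=/tmp/AddressBook.sqlitedb
rm -f "$DB"

# Fabricate the table so the queries below run anywhere:
sqlite3 "$DB" "CREATE TABLE ABPersonFullTextSearch_content \
  (docid INTEGER PRIMARY KEY, c0First TEXT, c1Last TEXT, c16Phone TEXT);"
sqlite3 "$DB" "INSERT INTO ABPersonFullTextSearch_content (c0First, c1Last, c16Phone) \
  VALUES ('Level', 'Two', '+1-555-0100');"

# The actual spelunking: list tables, then pull out the missing contact.
sqlite3 "$DB" ".tables"
sqlite3 "$DB" "SELECT c0First, c1Last, c16Phone FROM ABPersonFullTextSearch_content;"
```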

 I never use this theme

PSU MacAdmins Conference 2013

Wednesday, February 27th, 2013

It's Secret!

For the third year, I’ll be presenting at PSU MacAdmins Conference! This year I’m lucky enough to be able to present two talks, “Backup, Front to Back” and “Enough Networking to be Dangerous”. But I’m really looking forward to what I can learn from those speaking for the first time, like Pepijn Bruienne and Graham Gilbert among others. The setting and venue are top-notch. It’s taking place May 22nd through the 24th, with a Boot Camp for more foundational topics May 21st. Hope you can join us!

iOS Backups Continued, and Configuration Profiles

Friday, December 14th, 2012

In our previous discussion of iOS backups, the topic of configuration profiles being the ‘closest to the surface’ on a device was hinted at. What that means is, when Apple Configurator restores a backup, the profile is the last thing to be applied to the device. Folks hoping to use Web Clips as a kind of app deployment need to realize that restoring a backup with the web clip in a particular place doesn’t work: the backup that designates where icons on the home screen line up gets laid down before the web clip gets applied by the profile. The web clip gets bumped to whichever would be the next home screen after the apps take their positions.

This makes a great segue into the topic of configuration profiles. Here’s a ‘secret’ hiding in plain sight: Apple Configurator can make profiles that work on 10.7+ Macs. (But please, don’t use it for that – see below.) iPCU could possibly generate usable ones as well, although one should consider the lack of full screen mode in its interface as a hint: it may not see much in the way of updates on the Mac from now on. iPCU is all you have in the way of an Apple-supported tool on Windows, though. (Protip: activate the iOS device before you try to put profiles on it – credit @bruienne for this reminder.)

Also thanks to @bruienne for the recommendation of the slick p4merge tool


Now why would you avoid making, for example, a Wi-Fi configuration profile for use on a Mac with Apple Configurator? Well, there’s one humongous difference between iOS and Macs: individual users. Managing devices with profiles shows Apple tipping their cards: they seem to be saying you should think of only one user per device, and if a setting is important enough to manage at all, it should be an always-enforced setting. The Profile Manager service in Lion and Mountain Lion Server has an extra twist, though: you can push out settings for Mac users or the devices they own. If you want to manage a setting across all users of a device, you can do so at the Device Group level, which generates additional keys beyond those present in a profile generated by Apple Configurator. The end result is that a Configurator-generated profile will be user-specific, and will fail with deployment methods that need to target the System. (Enlarge the above screenshot to see the differences – and yes, there’s a poorly obscured password in there. Bring it on, hax0rs!)

These are just more of the ‘potpourri’ type topics that we find time to share after being caught by peculiarities out in the field.

CrashPlan PROe Refresher

Thursday, December 13th, 2012

It seems that grokking the enterprise edition of Code 42’s CrashPlan backup service is confusing for everyone at first. I recall several months of reviewing presentations and having conversations with elusive sales staff before the arrangement of the moving parts and the management of its lifecycle clicked.

There’s a common early hangup for sysadmins trying to understand deployment to multi-user systems: currently the only way to protect each user from another’s data is to lock the client interface (if instituted as an implementation requirement). What could be considered an inflexibility could just as easily be interpreted as a design decision that directly relates to licensing and workflow. The expected model these days is that a single user may have multiple devices, but enabling end users to restore files (as we understand it) requires that one user be granted access to the backup for an entire device. If that responsibility is designated to the IT staff, then the end user must rely on IT to assist with a restore instead of healing thyself, which isn’t exactly the direction business tech has been going for quite some time. The deeper point is that backup archives and ‘seats’ are tied to devices: encryption keys cascade down from a user, and interacting with the management of a device is, at this point, all or nothing.

This may be old hat to some, and just after the Pro name took on a new meaning (Code 42 hosted-only), the E-for-Enterprise version had seemingly been static for a spell – until things really picked up this year. With the 3.0 era came the phrase “Cold Storage”, which is neither a separate location in the file hierarchy nor intended for long-term retention (like one may use Amazon’s new Glacier tier of storage for). After a device is ‘deactivated’, its former archives are marked for deletion, just as in previous versions – this is just a new designation for the state of the archives. The actual configuration that determines when the deactivated device’s backup will finally be deleted can be designated deployment-wide or, more granularly, per organization. (Yes, you can find the offending GUID-tagged folder of the archives in the PROe server’s filesystem and nuke it from orbit instead, if so inclined.)

ComputerBlock from the PROe API


Confusion could arise from a term that looks similar to deactivation: ‘deauthorization’. Again, you need to notice the separation between a user and their associated device. Deauthorization operates at the device level to put a temporary hold on its ability to log in and perform restores on the client. In API terms it’s most similar to a ComputerBlock. This still only affects licensing in that you’d need to deactivate the device to get back its license for use elsewhere (although jiggery-pokery may be able to resurrect a backup archive if the user still exists…). As always: test, test, test, distribute your eggs across multiple baskets, proceed with caution, and handle with care.
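For API spelunkers, blocking a device amounts to a request shaped roughly like this. The resource name comes from the API vocabulary above; the host, port, and exact path are assumptions, so check the PROe API docs for your server version before relying on it:

```
PUT /api/ComputerBlock/1234 HTTP/1.1
Host: proserver.example.com:4285
Authorization: Basic <admin credentials>
```

A DELETE against the same resource would presumably lift the block, mirroring deauthorize/reauthorize in the console.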

iOS and Backups

Wednesday, December 12th, 2012

If you’re like us, you’re a fan of our modern era: we are (for the most part) better off than we previously were for managing iOS devices. One such example is bootstrapping, although we’re still a ways away from traditional ‘imaging’. You don’t need Xcode to update the OS in parallel, iPCU to generate configuration profiles, and iTunes for restoring backups anymore. Nowadays, in our Apple Configurator world, you don’t interact with iTunes much at all (although it needs to be present for assisting in loading apps, and takes a part in activation).

So what are backups like now? What are the differences between a restore from, say, iCloud versus Apple Configurator? Well, as it was under the previous administration, iTunes has all our stuff; practically our entire base belongs to it. It knows about our Apple ID, it has the ‘firmware’ or OS itself cached, we can rearrange icons with our pointing human interface device… good times. Backups with iTunes are pretty close to imaging, as an IT admin might define it. The new kids on the block (iCloud, Apple Configurator), however, have a different approach.

iOS devices maintain a heavily structured and segmented environment. Configuration profiles are bolted on top (more on this in a future episode), ‘userspace’ and many settings are closer to the surface, apps live further down towards the core, and the OS is the nougat-y center. Apple Configurator interacts with all of these modularly, and backups take the stage after the OS and apps have been laid down. This means that if your backup includes apps Apple Configurator did not provide for you, those apps (and their corresponding sandboxed data) are no longer with us: the backup it makes cannot restore the apps or their placement on the home screen.

iCloud therefore stands head and shoulders above the rest (even if iTunes might be faster). It’s proven to be a reliable repository of backups, while managing a cornucopia of other data – mail, contacts, calendars, etc. It’s a pretty sweet deal that all you need is to plug in to power for a backup to kick off, which makes testing devices by wiping them just about as easy as it can get. (Assuming the apps have the right iCloud compatibility, so the saved games and other sandbox data can be backed up…) Could it be better? Of course. What’s your radar for restoring a single app? (At this point, that can be accomplished with iTunes and manual interaction only.) How about more control over frequency/retention? Never satisfied, these IT folk.

(Sysadmin) Software Design Decisions

Wednesday, October 3rd, 2012

When approaching a task with an inkling to automate, sometimes you find an open source project that fits the bill. But the creator will work within constraints, and will often express their opinion of what’s important to ‘solve’ as a problem and therefore prioritize on: a deployment tool is not necessarily a patch management tool is not necessarily a configuration management tool, and so on. One of the things I’ve dealt with is trying to gauge the intent of a developer and deciding if they are interested in further discussion/support/development of a given project. Knowing why one decision or another was made can be helpful in these situations. So, in the category of things I wish someone else had written so I could read it, here are the design decisions behind the sonOfBackupRestoreScripts project I’ve been toying with as an add-on to DeployStudio (heretofore DS). After reading the following, you can hopefully understand why I am not releasing it as an official, supportable tool in its current bash form.
I’ve adapted some of the things Google used in their outline for Simian as a model, to give this some structure.

Project Objective:

To move user home folders and local authentication/cached credentials between workstations in a customizable and optimized manner, preserving the integrity of the data/user records as much as possible


For speed and data integrity, rsync is used to move selections of the user’s home folder (minus caches, trash, and common exclusions made by Time Machine). To increase portability and preserve Mac-specific attributes, a disk image is generated to enclose the data. The user account information is copied separately, and helpful information is displayed at the critical points as it moves from one stage to another and during the backup itself.

Requirements: DeployStudio Server / NetBoot

DS, as a service, enables an infrastructure to run the script in, and automounts a repository to interact with over the network. It’s meant to work optimally with or without a NetBoot environment; an architecture assumption made during development/testing is wired ethernet, with USB/Thunderbolt adapters if the clients are MacBook Airs. Even old minis can function fine as the server, assuming the repo is located on a volume with enough space available to accept the uncompressed backups.

Implementation Details: Major Components / Underlying Programs

- source/destination variables

Parameters can be passed to the script to change the source/destination of backups/restores with the -s (for source) and -d (…) switches, followed by a path that is reachable by the NetBooted system.
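The switch handling amounts to something like this sketch; the defaults and variable names here are my own illustrations, not necessarily the script’s:

```shell
# Sketch of -s/-d parsing; defaults assume the automounted DS repo layout.
parse_args() {
  OPTIND=1
  SOURCE="/Volumes/Macintosh HD/Users"       # default source (illustrative)
  DEST="/tmp/DSNetworkRepository/Backups"    # default destination (illustrative)
  while getopts "s:d:" opt; do
    case "$opt" in
      s) SOURCE="$OPTARG" ;;
      d) DEST="$OPTARG" ;;
    esac
  done
}

parse_args -s /Users -d /tmp/repo
echo "backing up $SOURCE to $DEST"
```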

- hdiutil

A simple sparse disk image, which can expand up to 100GB, is created with the built-in binary hdiutil. The file system format of that container is JHFS+, and a bunch of other best practices, cobbled together from Bombich’s Carbon Copy Cloner (heretofore CCC) and InstaDMG, are employed.
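The hdiutil invocation is along these lines. Since hdiutil only exists on OS X, this sketch just assembles the command rather than running it; the volume name and path are made up:

```shell
# Build the hdiutil command the script would run (drop the echo on a Mac).
IMG="/tmp/DSNetworkRepository/Backups/UserBackup"
CMD="hdiutil create -size 100g -type SPARSE -fs 'Journaled HFS+' -volname UserBackup $IMG"
echo "$CMD"
```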

- cp

The cp binary is used to simply copy the user records from the directory service node where the data resides to the root of the sparseimage, and the admin group’s record is copied into a ‘group’ folder. If hashes exist in /var/db/shadow/hash, which is how passwords were stored previous to 10.7, those are moved to a ‘hashes’ folder.
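In shape, the copies look like this. The tree is fabricated here so the snippet stands alone; on a real client the records live under /var/db/dslocal/nodes/Default on the mounted system volume:

```shell
# Fabricate a stand-in local DS node and image mountpoint for illustration:
SRC=/tmp/demo/var/db/dslocal/nodes/Default
MNT=/tmp/demo/image            # stands in for the mounted sparseimage
mkdir -p "$SRC/users" "$SRC/groups" "$MNT/group" "$MNT/hashes"
echo '<plist/>' > "$SRC/users/alice.plist"
echo '<plist/>' > "$SRC/groups/admin.plist"

# The copies the script performs:
cp "$SRC/users/alice.plist" "$MNT/"          # user record to image root
cp "$SRC/groups/admin.plist" "$MNT/group/"   # admin group record
# (pre-10.7 password hashes from /var/db/shadow/hash would land in $MNT/hashes)
```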

- rsync

A custom, even more current build of rsync could be generated if the instructions listed here are followed. Ideally, a battle-tested version like the one bundled with CCC (/Applications/Carbon\ Copy\, which is actually a heavily customized rsync version 3.0.6) could be used, but its output isn’t easy to adapt to show an overview of the progress during a CLI transfer. Regardless, the recommended switches are employed in hopes of getting a passing grade on the backupBouncer test. The 3.0.7 version bundled with DS itself (/Applications/Utilities/DeployStudio\, which for whatever reason is excluded when the assistant creates NetBoot sets) was used during development/testing.


The Users folder on the workstation being backed up is what’s targeted directly, so any users that have been deleted, or specific subfolders, can be removed with the exclusions file fed to the rsync command. Without catch-all, asterisk (*) ‘file globbing’, you’d need to be specific about certain types of files you want to exclude if they’re only in certain directories. For example, to not back up any mp3 files, no matter where they are in the user folders, you’d add - *.mp3. Additional catch-all excludes can be used, as detailed in the script, which specifically excludes ipsw’s (iOS firmware/OS installers) like this: --exclude='*.ipsw'
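Here’s a self-contained demo of that exclusions mechanism; the file tree and patterns are fabricated for illustration:

```shell
# An exclusions file in the '- pattern' form rsync's --exclude-from honors:
cat > /tmp/Excludes.txt <<'EOF'
- *.mp3
- *.ipsw
EOF

# Fabricated Users tree to back up:
mkdir -p /tmp/demo2/Users/alice/Music
echo song  > /tmp/demo2/Users/alice/Music/track.mp3
echo notes > /tmp/demo2/Users/alice/notes.txt

# Archive-mode copy; the mp3 stays behind, everything else comes along.
rsync -a --exclude-from=/tmp/Excludes.txt /tmp/demo2/Users/ /tmp/demo2/backup/
```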


Pretty much everything done via both rsync and cp are done in reverse, utilizing the source/destination options, so a backup taken from one machine can easily be chosen to restore to another.

Security Considerations:

Very little security is applied during storage. Files are transferred over password-protected AFP, so a separate server and repo could be used to minimize potential access by whoever can reach the main DS service. Nothing encrypts the files inside the sparseimages, and if present, the older password format is a hash that could potentially be cracked, given a great length of time. The home folder ACLs and ownership/perms are preserved, so in that respect it’s only as secure as access to the local file systems on the server and client.

Excluded/Missing Features:
(Don’t You Wish Every Project Said That?)

Hopefully this won’t sound like a soul-baring confession, but here goes:
No checks are in place for whether there’s enough space on destinations, nor whether a folder to back up is larger than the currently hard-coded 100GB sparseimage cap (after exclusions). Minimal redirection of logs is performed, so the main DS log can quickly hit a 2MB cap and stop updating the DS NetBoot log window/GUI if there’s a boatload of progress echoed to stdout. The process to restore a user’s admin group membership (or any other group on the original source) is not performed, although the group’s admin.plist can be queried after the fact. There’s no reporting on deleted users’ orphaned home folders if they do actually need to be preserved; by default they’re just part of the things rsync excludes. All exclusions live in the Excludes.txt file fed to rsync, so they cannot be passed as a parameter to the script.
And the biggest possible unpleasantness is also the #1 reason I’m not considering continuing development in bash: UID collisions. If you restore a 501 user to an image with a pre-existing 501 user that was the only admin… bad things will happen. (We’ve changed our default admin user’s UID as a result.) If you get lucky, you can change one user’s UID or the other and chown to fix things as admin before all heck breaks loose… If this isn’t a clean image, there’s no checking for duplicate users with newer data. There’s no FileVault 1 or 2 handling; no prioritization (so that, if it could only fit a few home folders, it would do so and warn about the one(s) that wouldn’t fit); no version checking on the binaries in case different NetBoot sets are used; no fixing of ByHostPrefs (although DS’s finalize script should handle that); and no checks with the die function if the restore destination doesn’t have enough space, since the common case is restoring to the same HD or a newer, presumably larger computer. Phew!


The moral of the story is that the data structures available in most of the other scripting languages are more suited for these checks and to perform evasive action, as necessary. Bash does really ungainly approximations of tuples/dictionaries/hash tables, and forced the previous version of this project to perform all necessary checks and actions during a single loop per-user to keep things functional without growing exponentially longer and more complex.
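To make that concrete, here’s the bash-3-friendly shape of just one such check, duplicate UIDs, using parallel strings instead of a real dictionary (names and UIDs fabricated):

```shell
# No associative arrays in bash 3: parallel whitespace-delimited 'lists'.
existing_uids="501"                 # UIDs already present on the target image
restore_names="alice bob"           # users we want to restore...
restore_uids="501 502"              # ...and their UIDs, by position

i=1
collisions=""
for name in $restore_names; do
  uid=$(echo "$restore_uids" | cut -d' ' -f"$i")
  for e in $existing_uids; do
    if [ "$uid" = "$e" ]; then
      collisions="$collisions $name"
      echo "collision: $name wants UID $uid"
    fi
  done
  i=$((i+1))
done
```

With a real dictionary type, this whole dance collapses to a membership test, which is the point of the paragraph above.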

Let’s look forward to the distant future when this makes its way into Python for the next installment in this project. Of course I’ve already got the name of the successor to SonOfBackupRestoreScripts: BrideOfBackupRestoreScripts!

DeployStudio Scripting Tips

Tuesday, September 25th, 2012

I’ve given a presentation on my affinity for DeployStudio, yet with it being closed source, it’s sometimes like an enigma (wrapped in a mystery (wrapped in bacon)). However, a workflow exists to enable scripting within it, although the only option, besides automatically running a script when it’s dropped into a workflow, is non-interactively passing arguments to it. Even with little in the way of documented information, projects have popped up to take advantage of the framework provided.

Most notably, Rusty Myers’ BackupRestore scripts enabled quite an interesting workflow: first, you could run one workflow to tar (or ditto) the user folders to a new Backup directory in the repo, with a few customizable exceptions. Then, when either specified or dropped into a workflow that had a restore action precede it, you could put the users and their associated passwords back into place. This is obviously pretty darn useful for clean(er) migrations and/or OS upgrades, or simply refreshing existing workstations with a new base set of software. Many folks in the MacEnterprise community contributed features, including FileVault (version 1) support, and updates were made for the curveball Lion introduced with respect to how passwords are stored (nested inside the user record plist itself).

I’m in the process of creating a successor to this project, so I thought I’d share some of the experiences I’ve had and pointers I’ve come across as a sysadmin (not a software developer) attempting to get a repeatable task accomplished inside this framework. Tip number zero is the same advice given to all students of programming in scripting languages: don’t write a lot before running and getting feedback. So, I booted a laptop with a few small user folders to my DeployStudio-generated netboot set, authenticated to the interface, and opened terminal. That netboot set includes the optional Python framework (Ruby is another option, if you’d like access to that scripting language), which I’ll be using in the future. Along with selecting “Display Runtime log window by default”, I extended the “Quit runtime automatically after __” number to 90 seconds, so when testing inside of a workflow I wouldn’t be kicked out as I iterated and repeated the process.

To get started, I made an “admin” folder inside the repo, put an updated version of the rsync binary in it (since the one in /usr/bin that ships with OS X is pretty ancient), and started writing a script in that directory, which I could then run from terminal on the netbooted laptop over VNC/ARD. For starters, here’s Tip #1:
DeployStudio mounts the repo in /tmp/DSNetworkRepository. While /tmp isn’t read only, you will get out-of-space errors and general unhappiness if you use it for anything except mountpoints.
Tip #2:
No, you can’t use symlinks in those subfolders to point somewhere else on the DeployStudio server, since it assumes the directory the symlink would point to is relative to the root of the netboot set instead. (No, really, it wouldn’t even work when using ln -s on the machine the repo’s hosted from instead of Finder’s aliases, which definitely don’t work in scripts.)
Tip #3:
For persnickety folks like myself who MUST use a theme in terminal and can’t bear not having option set as the meta key, you’re probably bummed that the Preferences menu item is greyed out and command-comma doesn’t seem to work. There is a way, though: from the Shell menu, choose Show Inspector. Then, from the Settings tab, double-click the theme you prefer. The full settings window will appear, and you can have your modern conveniences again.
Tip #4:
How does DeployStudio decide what the first mounted volume is, you may wonder? I invite (dare?) you to ‘bikeshed’ (find a more optimized way to accomplish a relatively trivial task) this particular piece of code:
system_profiler SPSerialATADataType | awk -F': ' '/Mount Point/ { print $2}' | head -n1
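To see what that pipeline actually latches onto, here it is run over a canned scrap standing in for system_profiler’s output (the real command only exists on OS X):

```shell
# Canned input mimicking `system_profiler SPSerialATADataType` lines;
# the awk/head stage is identical to the original pipeline.
printf '    Mount Point: /\n    Mount Point: /Volumes/Data\n' |
  awk -F': ' '/Mount Point/ { print $2 }' | head -n1
```

The first “Mount Point” line wins, so the boot volume of the first SATA device is what DS calls the first mounted volume.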
In the case of the restore script, hard-coding the DS_LAST_RESTORED_VOLUME variable (on line 44 in Rusty’s current Lion version), or changing the argument in the workflow to pass that path with the -v flag will remove the dependency on restoring an image before putting user folders back in place.

Two more tips before I go, which are both specific to the task I’m trying to accomplish. Ownership on files will not be preserved when moved to the repo with rsync, so you can create a sparse image or sparsebundle as a destination, and it will even retain ACLs (certain patched rsync binaries, like the one I used, which is bundled in DeployStudio’s Tools folder, complain about smb_acls). As mentioned about /tmp in the NetBoot environment earlier, sparseimages should be created in a subfolder of the repo, or you could easily run into ‘out of space’ errors.

When logging, you can use tee or simply redirect output, but more customized feedback in the actual log window of the DeployStudio netboot runtime is helpful. There’s a “custom_logger” function used in some of the bundled scripts… which literally does nothing but echo $1 – pretty rudimentary. For output that doesn’t display when run as part of a script, you can redirect it to /dev/stdout and have it shown instead of fooling around with echo or printf.
e.g. rsync -avz /source /destination >/dev/stdout
There may be a lag if verbose output happens in a short amount of time, as the main log file in the repo is being written to simultaneously with what is printed onscreen.
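For reference, the bundled custom_logger function is essentially this (reconstructed from memory, so treat it as a sketch):

```shell
# The whole of custom_logger, give or take:
custom_logger() {
  echo "$1"
}

custom_logger "Backing up alice..."
```

Which is why redirecting to /dev/stdout, as above, buys you just as much as calling it.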

Oh, and the silliest thing I’ve noticed: your script needs to be owned by root:wheel and 777 in the scripts folder of the repo in order to show up in the workflow interface for selection… It’s got its quirks, but it’s just about the best out there!

Building A Custom CrashPlan PROe Installer

Friday, April 13th, 2012

CrashPlan PROe installation can be customized for various deployment scenarios

Customization of implementations for over 10,000 clients is considered a special case by Code 42, the makers of CrashPlan, and requires that you contact their sales department. Likewise, re-branding the client application to hide the CrashPlan logo also requires a special license.

Planning Your Deployment

A large scale deployment of CrashPlan PROe clients requires a certain level of planning and setup before you can proceed. This usually means a test environment to iron out the details that you wish to configure. Multiple locations, bandwidth, and storage are obvious concerns that will need a certain amount of tuning before and after the service ‘goes live’. Also, an LDAP server populated with the expected information, or a prepared XML document with identifiable machine information, needs to be matched with account and registration data. Not just account credentials: filing computers and accounts into groups through the use of Organizations (which directly relate to the registration information used) should also be considered.

Which Files to Change

The CrashPlan PROe installer has different files for Windows and Mac OS X, but the gist is largely the same for either. There is a customizable script (or .bat file) that you can use to feed variables specific to your deployment into a template. The script can be customized to reference LDAP information, or even a shared data source that can provide account information based on an identifiable resource such as a MAC address.

Mac OS X 

Download the installer DMG and make a copy of it. The path we’ll be working in is:

Install CrashPlanPRO.mpkg/Contents/Resources/

Inside the Resources directory there is a Custom-example folder that contains the template and script to customize.

Duplicate the Custom-example folder to Custom. Inside is a configuration script that has (commented-out by default) sections for parsing usernames from the current home folder, hostname, or from LDAP. This is also where one could gather other machine information (such as MAC address) and match it to data in a shared document on a file server.

In the same folder as the script is a folder “conf”, which contains the file default.service.xml. The contents of this file can be fed variable information from the configuration script to set the user name, computer name, LDAP specifics, and password that will be used upon installation. It is advisable to test new user creation when using LDAP and CrashPlan organizations, to ensure users are created as expected. It is possible to specify those properties in this XML list.

So the process breaks down like this: edit the script to populate default.service.xml, let the installer run and make contact with the server, and let the organization policies set all non-custom settings.
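Pulling that together, the authority element in default.service.xml ends up shaped roughly like this. The element layout and placeholder values are assumptions for illustration; only address, registrationKey, username, password, and hideAddress are attribute names the documentation below actually mentions:

```xml
<authority address=""
           registrationKey="XXXX-XXXX-XXXX-XXXX"
           username="${username}"
           password="${generated}"
           hideAddress="false"/>
```

The ${…} tokens stand in for values the configuration script substitutes at install time.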

XML Parameters

default.service.xml has the following properties

By supplying the address, registrationKey, username, and password, the user will bypass the registration / login screen. The following sections describe the authority attributes you can specify and their corresponding parameters.

Authority Attributes

  • the primary address and port to the server that manages the accounts and issues licenses. If you are running multiple PRO Servers, enter the address of the Master PRO Server.
  • (optional) the secondary address and port to the authority that manages the accounts and issues licenses. Note: This is an advanced setting. Use only if you are familiar with its use and results.
  • a valid Registration Key for an organization within your Master PRO Server. Hides the Registration Key field on the register screen if a value is given.
  • the username to use when authorizing the computer; can use the params listed below.
  • the password to use when authorizing the computer; can use the params listed below.
  • hideAddress (true/false): do not prompt or allow the user to change the address (default is false).
  • (true/false) allow the user to change the server address on the Settings > Account page (do not set if hideAddress=“true”).

Authority Parameters

  • determined from the CP_USER_NAME command-line argument, the CP_USER_NAME environment variable, or the “” Java system property from the user interface once it launches.
  • the system computer name.
  • random 8 characters, typically used for the password.
  • for LDAP and Auto register only! Allows clients to register without manually entering a password and without requiring the user to log in to the desktop the first time.
  • set to false to turn off the inbound backup listener by default.

Sample Usage
All of these samples are for larger installations where you know the address of the PRO Server and want to specify a Registration Key for your users.
Note: NONE of these schemes require you to create the user accounts on your PRO Server ahead of time.

  • Random Password: Your users will end up with a random 8-character password. In order to access their account they will have to use the Reset My Password feature OR have their password reset by an admin.
  • Fixed Password: All users will end up with the same password. This is appropriate if your users will not have access to the CrashPlan Desktop UI and the credentials will be held by an admin.
  • Deferred Password: FOR LDAP ONLY! This scheme allows the client to begin backing up, but it is not officially “logged in”. The first time the user opens the Desktop UI they will be prompted with a login screen and they will have to supply their LDAP username/password to successfully use CrashPlan to change their settings or restore data.

Changing CrashPlan PRO’s Appearance (Co-branding)

This information pertains to editing the installer for co-branding. Skip this section if you are not co-branding your CrashPlan PRO.
Co-Branding: Changing the Skin and Images Contents

You can modify any of the images that appear in the PRO Server admin console as well as those that appear in the email header. Here are the graphics you may substitute:
Custom/skin folder contents:
  • logo_splash.png – splash screen logo
  • splash.png – transparent splash background (Windows XP only)
  • splash_default.png – splash background, must NOT be transparent (Windows Vista, Mac, Linux, Solaris, etc.)
  • logo_main.png – main application logo that appears on the upper right of the desktop
  • window_bg.jpg – main application background
  • icon_app_16x16.png – icons that appear on the desktop, customizable with Private Label agreement only

  1. In the Custom/skin folder, locate the image you wish to replace.
  2. Create another image of the same size with your logo on it. For best results, we recommend using the same dimensions as the graphics files we’ve supplied.
  3. Place your customized version into the Content-custom folder you created.
  4. Make sure not to change the filename or folder structure, so that CrashPlan PRO will be able to find the file.
Co-Branding: Editing the Text Properties File

You can change the text that appears as the application name or product name in CrashPlan PRO Client. Make your changes in files in the Custom/lang folder.
The default file is in English. Each file contains the text for one language, identified in the comments at the beginning of the file; refer to the Internationalization document from Sun for details. When you change the application or product name, keep in mind that using very long names could affect the flow / layout of the text in a window or message box.
Text properties:
  • Product.B42_PRO – the name of the product as it would appear on the Settings > Account page, such as CrashPlan PRO.
  • the application name, which appears in error messages, instructions, and descriptions throughout the UI.

Creating an Installer

Make the customizations that you want as part of your deployment, then follow the instructions to build a self-installing .exe file.
How It Works – Windows Installs

Test your settings by running the CrashPlan_[date].exe installer.
Make sure the installer .exe file and the Custom folder reside in the same parent folder.
Re-zip the contents of your Custom folder so you have a new zip that contains:
Custom (includes the skin and conf folders)
Turn your zip file into a self-extracting / self-installing file for your users.
For example, download the zip2secureexe from
The premium version is not required; however, it does have some nice features and they certainly deserve your support if you use their utility.
Launch zip2secureexe, then:
  • Specify the zip file.
  • Specify the name of the program to run after unzipping: CrashPlan_[date].exe
  • Check the Build an EXE option to automatically unzip to a temporary directory.
  • Specify the app title: CrashPlan Installer
  • Specify the icon file: cpinstall.ico
  • Click Create to create your self-extracting zip file.
Windows Push Installs

Review / edit cp_silent_install.bat and cp_silent_uninstall.bat.
These show how the push installation system needs to execute the Windows installer.
If your push install software requires an MSI, download the 32-bit MSI or the 64-bit MSI.
If you have made customizations, place the Custom directory that contains your customizations next to the MSI file.
To apply the customizations, run the msiexec with Administrator rights:
Right-click CMD.EXE, and select Run as Administrator.
Enter msiexec /i



REM The LDAP login user name and the CrashPlan user name.
SET CP_USER_NAME=%USERNAME%
Echo UserName: %CP_USER_NAME%

REM The user's home directory, used in backup selection path variables.
SET CP_USER_HOME=C:\Documents and Settings\crashplan
Echo UserHome: %CP_USER_HOME%

REM Tells the installer not to run the CrashPlan client interface following the installation.
SET CP_SILENT=true
Echo Silent: %CP_SILENT%

REM Arguments handed to the installer (example values).
SET CP_ARGS="CP_USER_NAME=%CP_USER_NAME%&CP_USER_HOME=%CP_USER_HOME%"
Echo Arguments: %CP_ARGS%

REM You can use any of the msiexec command-line options.
ECHO Installing CrashPlan…
CrashPlanPRO_2008-09-15.exe /qn /l* install.log CP_ARGS=%CP_ARGS% CP_SILENT=%CP_SILENT%



REM Tells the installer to remove ALL CrashPlan files under C:/Program Files/CrashPlan.

ECHO Uninstalling CrashPlan…

How It Works – Mac OS X Installer

PRO Server customers who have a lot of Mac clients often want to push out and run the installer for many clients at a time. Because we don’t offer a push installation solution, you’ll need to use other software to push-install CrashPlan, such as Apple’s ARD.
Run Install CrashPlanPRO.mpkg to test your settings:
At the command line, type open Install\ CrashPlanPRO.mpkg (from /Volumes/CrashPlanPRO/), or simply double-click the package.
Unmount the resulting disk image and distribute it to users.
Note: If you do not want the user interface to start up after installation, or you want to run the installer as root (instead of user), change the file as described in the next section.
Understanding the File
This Mac-specific file is in the Custom-example folder inside the installer metapackage. Edit this file to set the user name and home variables if you wish to run the installer from an account other than root, such as user, and/or you wish to prevent the user interface from starting up after installation.
Be sure to read the comments inside the file.
How It Works – Linux Installer
Edit your install script as needed.
Run the install script to test your settings.
Tar/gzip the crashplan folder and share it with other users.
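The tar/gzip step can be sketched as below; the crashplan folder name comes from the article, while its contents here are placeholders:

```shell
#!/bin/sh
# Sketch of packaging the customized crashplan folder for distribution.
# The folder contents are placeholders; use your real install tree.
mkdir -p crashplan
touch crashplan/install.sh
tar -czf crashplan-custom.tar.gz crashplan
tar -tzf crashplan-custom.tar.gz    # list the archive to verify contents
```

Recipients would unpack with tar -xzf and run the install script from the extracted folder.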
Custom Folder Contents
When you open the installer zip file or resource contents and view the Custom-example folder, the structure looks like this:
Contents of resource folder
Custom (folder)
    skin (folder)
    lang (folder)
    conf (folder)
cpinstall.ico (Windows only)
must be created using an icon editor (Mac only)

Customizing the PRO Server Admin Console

You can also change the appearance of the PRO Server admin console and email headers and footers.
In the ./content/Manage folder, locate the images and macros you wish to modify and copy them into ./content-custom/Manage-custom, using the same sub-folder and file names as the originals. Placing them there protects your changes from being wiped out during the next upgrade.
Our HTML macros are written with Apache Velocity. If your site stops working after you’ve changed a macro, delete or move the customized version to get it working again.
Location of Key PRO Server Files
These locations may change in a future release, so you will be responsible for moving your customized versions to keep your images working.
macros/cppStartHeader.vm ++ (see below)
macros/cppFooterDiv.vm ++ (see below)
Email images are:
++ These files are web macros. You’ll need to update these in place instead of copying them to the custom folder. They won’t work under the custom folder. Remember that our upgrade process will overwrite your changes.

Preparing for a Business CrashPlan Deployment

Sunday, March 11th, 2012

Knowing the Software

It is important to remember that of the two components of the software, the CrashPlan client does all the heavy lifting. It scans the local file system, filters and applies other rules as set on the server, compresses and encrypts the data, and finally transfers it either to a destination across the network or to a local ‘folder’ (attached drive, etc.). The second component is the server process, which accepts data from each of the clients and tracks everything in a database.

Knowing Your Requirements

Scaling an environment that backs up to near-unlimited, cloud-based storage is just a matter of having sufficient licenses and internet bandwidth to maintain uploads from multiple clients at once. CrashPlan Pro also allows businesses to store smaller sets of data, with per-computer pricing. Organizationally, however, the Pro version is not meant for environments with over 200 users, and it lacks other features, including integration with directory services, backup seeding, guest restores, and reporting flexibility.

Embrace the Enterprise with PROe

In addition to getting those features which are missing from the ‘Pro’ level, CrashPlan PROe can work well in environments that are concerned about disaster recovery and would like to host secondary destinations. In these situations there are further considerations to take into account:

Data: Even with the compression applied to files, you’ll need to plan for a significantly larger amount of storage than will be backed up at the time of deployment, and have an understanding of how your retention policy will affect your storage needs as time goes on and/or clients are added. A great feature of the REST API, available only in the PROe version, is that usage can be gauged dynamically.

‘User’ Accounts: It is often the case that there is a subset of pre-approved users for inclusion, which can easily be imported into the CrashPlan PROe server’s database or linked from LDAP. For certain computers and situations, however, the software would more appropriately be allocated by the role the computer performs. Alerting and monitoring are one concern when changing how the account is tied to the computer, but more crucial to understand is that, when customers are allowed to restore their own files, backing up many computers under the same account can become a security liability (this can be administratively locked out).

Master-Slave Configuration: For multiple locations, a slave server can be deployed within an organization to more flexibly distribute computers. Just like seeding a backup, an entire slave server can be seeded with the contents of any other server under a Master, and clients will pick up right where they left off.

These are just a few examples of the considerations to take into account when deciding if CrashPlan PROe is right for your environment. For more information, please contact your Professional Services Manager or if you do not yet have one.

Performing a CrashPlan PROe Server Installation

Wednesday, April 13th, 2011

This is a checklist for installing CrashPlan PROe Server.

Prepare your deployment:  Before you install the server software you should have the following ready:

  1. A static IP address. If this is a shared server, whenever possible, CrashPlan should have a dedicated network interface.
  2. (Recommended) A fully qualified host name in DNS. IP addresses will work, but for ease of management internally (and even more so externally), working DNS pointing to the service is best.
  3. Firewall port forwards for network connections. Ports 4280 and 4282 are needed for client-server communication and to send software updates. Port 4285 is also needed if you wish to manage the server via HTTPS from the WAN.
  4. A dedicated storage volume (preferably with a secure level of RAID) for backup data.
  5. Although a second server install (server/destination licenses are free) is best for near-full redundancy, secondary destination volumes can be configured on external drives for offsite backup.
  6. An LDAP connection. If you will be reading user account information from an LDAP server, make sure you have the credentials and server information to access it from the CrashPlan Server install.
  7. If you’d like multiple locations to back up to local servers, ensure that your first master is installed in the environment best suited to your anticipated usage. This is referred to as the Master server, which requires higher uptime and accessibility, as all licensing and user additions and removals rely upon it.


  1.  Go to
  2. If you have not purchased CrashPlan licenses through a reseller, you can fill out the web form to be issued a trial master license key. Otherwise, check the “I already have a master key” checkbox to be presented with the downloads.
  3. Download the CrashPlan PROe server installer (the client software is located further down on the page.)  Choose the appropriate installer for your server (Mac, Windows, Linux, or Solaris.)
  4. Run the installer. When the installation completes you will be asked to enter the master key in order to activate the software.  If you don’t have it at that time, you can enter it later via the web interface.


  1. Initial Setup. On the server, from a web browser, connect to the web interface of the CrashPlan PROe Server. If you did not enter the master key during installation, you will be prompted to enter it here.
  2. Log into the server using the default admin user credentials provided on the screen. Immediately change the username and password for the ‘Superuser’ by going to the Settings tab > Edit Server Settings > Superuser in the sidebar. Just as with Directory Administrator user names, customizing this user name is also recommended.
  3. Assign networking information. Click the Settings tab > Edit Server Settings > Network Addresses. You will see fields in which to enter the Primary and Secondary network addresses or DNS name(s). This information will match how clients attempt to connect to the server, so for ease of management, using an IP address for the primary and DNS for the secondary may make the most sense: changes to the server’s address would then propagate to clients immediately instead of waiting for DNS (although TTL preparation would help). Another consideration is where the majority of the clients will be accessing the server from.
  4. Assign the default storage volume. By default, CrashPlan PROe will assign a directory on the boot volume as the storage volume. Navigate to the Settings tab > Add Storage. You will be presented with a page that has links to Add Custom Directory, Add Pro Server, or Unused Volumes. If the data volume is attached to the file system with a UNC path, it will be listed as an Unused Volume. Select the new storage volume, optionally with a subdirectory. Finally, to make this new volume the default storage volume for new clients, navigate to the Settings tab > Edit Server Settings, and use the Mount Point for New Computers drop-down menu on the third line. You can then remove the default storage location on the boot volume.
  5. Create Organizations. At installation time there will be one default organization, and all new users created will be added to this group. You can create an arbitrary number of organizations and sub-organizations if you believe client settings should be propagated differently for certain departments. At least one sub-organization can be helpful in complex environments, especially with Slave servers. Each division can have managers assigned for management, alerting, and/or reporting purposes as well.
  6. Create User Accounts. Users can be created manually in the web interface, during the deployment of the client software, or through LDAP lookups.
  7. Set Client Backup Defaults. If you’d like to exclude certain files or locations from clients’ backups, you may do so from the Settings tab > Edit Client Settings. By default, nothing is excluded, but only the user’s home folder is included. It may be useful to exclude file types the company is not concerned about, or to modify the time period for keeping old versions. If storage space is a concern and customers are including very large files in the backup, you may want to purge deleted files on an accelerated schedule (the default is never). Sending reports to each individual customer can also be enabled, and settings may optionally be locked down to read-only. In particular, if multiple computers share the same account, it may be useful to force a password to open the desktop interface and not allow that setting to be changed. These changes can be propagated for the entire Master server, an organization, or an individual client/user installation.
  8. Install CrashPlan PROe on a test machine for final testing. The installation of a client will require the Registration key generated for the organization the user should be ‘filed’ into, the Master server’s network information, the creation of a username (usually the customer’s email address, or the function that computer performs), and a password. Once complete, the client will register with the server and begin backing up the home folder of the currently logged-in customer (by default).

Backing Up Cisco Configurations Using Mac OS X

Friday, February 18th, 2011

Before you make configuration changes on devices, you should make a backup of the device. You can use basically any platform to back up Cisco devices. Doing so in Mac OS X starts with the Terminal: to back up a Cisco device, you must first connect to it in Terminal, either through SSH or Telnet.

Then SSH to the device using the ssh command, followed by the username, an @ symbol and then the IP address or hostname of your device. Here, we’ll use an example of

ssh admin@

Note: One could also use telnet using the same type of string, but ssh is more secure.

Next, provide the password and you will see a prompt with the device name. Once connected to the device, you will need to enter enable mode by typing “en” at the command prompt and hitting Enter. It may prompt you for an elevated-privileges password, which you will need to know.

Once complete, you will notice that the prompt changes from a > to a # symbol. The # symbol is akin to having root access. Now, to back up the configuration of this device, enter “show run”, which is short for show running-config:

show run

You will see a ←-more→ prompt at the bottom of the page. Just hit the space bar until you are back at the prompt. Once you are at the prompt, highlight with your mouse all the text that was just generated in the terminal and, after it’s all highlighted, press Command-C to copy the contents. Open your favorite text editor and press Command-V to paste the text. Be careful to use plain text here (I prefer to just use pico or vi rather than Word or TextEdit). Save the file as your configuration backup file for the device.

NOTE: If you also want the IOS version info (IOS is different than iOS), you can run the “show version” command instead of “show run”, and use the same steps to copy and paste.

If you cannot log into a device remotely, you can use a Keyspan adapter to use the serial port to connect to the device.
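For the serial route on Mac OS X, the built-in screen command can open the session; the device node below is a typical Keyspan name and will vary by adapter and driver version:

```
screen /dev/tty.KeySerial1 9600
```

From there the same en / show run steps apply.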

PresSTORE Article on Xsanity

Tuesday, November 16th, 2010

We have posted a short article on the availability of PresSTORE 4.1 on Xsanity. Enjoy!

Thinking Outside the Box: CrashPlan Pro

Monday, November 8th, 2010

There are a lot of organizations who are rethinking some basic concepts in Information Technology. One of these concepts is that you need to own, duplicate and even replicate user data between each of your sites so that you can have roaming profiles in Windows and mobile home directories in Mac OS X. For organizations with a large number of labs and users who roam between them, these challenges, which have dominated the infrastructure side of IT have been cumbersome for the past 15 to 20 years. But let’s rethink the “why.”

If you have labs, common in K12 and Higher Education but not so common in the corporate world, you need network home folders on the Mac OS X side, or its sister, portable home directories. On the Windows side, you need folder redirection. But a growing number of education environments are practicing the art of the one-to-one deployment, which strongly resembles what can be seen in the corporate world.

Between the big iron, massive SANs attached to the core switches, licensing for DFS heads and the like, it can all get cost prohibitive. But we still do it because we think we need our data replicated. And some of us do. But one thing that we often say is that this data is not a backup. So if it isn’t a backup, then how do we back these systems up? And if we do need to back these systems up, then why are we also performing a layer of redundant synchronization? Does all of this result in 3 or 4 copies of the data, all in a form that cannot be deduplicated?

The end of the Xserve is nigh, and now for something completely different?

A while back, someone told me that you could back an unlimited amount of data up to the cloud for a price that was so cheap that I was stunned. There were a couple of products that I reviewed: CrashPlan and Backblaze. Both are pretty darn awesome. But the bandwidth to back 3,000 users up to someone else’s cloud can become pretty darn cost prohibitive. Enter CrashPlan Pro: you can host that cloud in your own location, or in multiple locations if you have the need to do so, all on relatively inexpensive hardware, either leveraging the hardware that you already own or the CrashPlan Pro appliances, rack-mountable goodness that scales to store up to 72TB of data per unit. Data gets deduplicated before it is copied to the device over the wire, providing substantial storage savings, not to mention reduced congestion on your wire (or wireless).

And to top it all off, CrashPlan Pro offers extensibility in the form of a REST-based API that allows building that which you may need but which the developers have not yet thought (or more likely had time) to build. The API actually makes CrashPlan Pro a possible destination for Final Cut, amongst other things.

Oh, and did we mention the client can run on Mac OS X, Windows, Linux and Solaris?!?!

318 partners with a number of vendors to help you rethink your IT conundrum, leveraging the best advances of today and tomorrow. We are pleased to add CrashPlan as the latest in a long list of valued partners. Contact your 318 Professional Services Manager now for more information.

MySQL Backup Options

Thursday, July 8th, 2010

MySQL bills itself as the world’s most popular open source database. It turns up all over, including most installations of WordPress. Packages for multiple platforms make installation easy and online resources are plentiful. Web-based admin tools like phpMyAdmin are very popular and there are many stand-alone options for managing MySQL databases as well.

When it comes to back-up, though, are you prepared? Backup plug-ins for WordPress databases are fairly common, but what other techniques can be used? Scripting to the rescue!

On Unix-type systems, it’s easy to find one of the many example scripts online, customize them to your needs, then add the script to a nightly cron job (or launchd on Mac OS X systems). Most of these scripts use the mysqldump command to create a text file that contains the structure and data from your database. More advanced scripts can loop through multiple databases on the same server, compress the output and email you copies.
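A nightly cron entry for such a script might look like the following; the script path and log location here are assumptions, not values from the original article:

```
# m  h  dom mon dow  command
 30  2   *   *   *   /usr/local/bin/mysql_backup.sh >> /var/log/mysql_backup.log 2>&1
```

Redirecting both stdout and stderr to a log makes it easy to spot a failed dump the next morning.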

Here is an example we found online a long time ago and modified (thanks to the unknown author):


#!/bin/sh

# List all of the MySQL databases that you want to backup in here,
# each separated by a space
databases="database1 database2 database3"

# Directory where you want the backup files to be placed (example path)
backupdir=/var/backups/mysql

# MySQL dump command, use the full path name here (example path)
mysqldumpcmd=/usr/bin/mysqldump

# MySQL Username and password
userpassword=" --user=myusername --password=mypassword"

# MySQL dump options
dumpoptions=" --quick --add-drop-table --add-locks --extended-insert --lock-tables"

# Unix Commands (example path)
gzip=/usr/bin/gzip

# Create our backup directory if not already there
mkdir -p ${backupdir}
if [ ! -d ${backupdir} ]; then
  echo "Not a directory: ${backupdir}"
  exit 1
fi

# Dump all of our databases
echo "Dumping MySQL Databases"
for database in $databases; do
  $mysqldumpcmd $userpassword $dumpoptions $database > ${backupdir}/${database}.sql
done

# Compress all of our backup files
echo "Compressing Dump Files"
for database in $databases; do
  rm -f ${backupdir}/${database}.sql.gz
  $gzip ${backupdir}/${database}.sql
done

# And we're done
ls -l ${backupdir}
echo "Dump Complete!"

Once you verify that your backup script is giving you valid backup files, these should be added to your other backup routines, such as CrashPlan, Mozy, Retrospect, Time Machine, Backup Exec, PresSTORE, etc. It never hurts to have too many copies of your critical data files.
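A quick way to verify a compressed dump before trusting it is to test the gzip integrity and confirm the file is non-empty; the filename below matches the example script, and a stand-in dump is generated here so the check can be demonstrated:

```shell
#!/bin/sh
# Sketch: sanity-check a compressed dump before handing it to the rest of
# your backup routine. A stand-in dump is created so the check is runnable.
f=database1.sql.gz
printf 'CREATE TABLE t (id INT);\n' | gzip > "$f"
if gzip -t "$f" 2>/dev/null && [ -s "$f" ]; then
  echo "OK: $f"
else
  echo "BAD: $f"
fi
```

A truncated or zero-byte dump fails either test, which is exactly the kind of silent failure a nightly cron job can otherwise hide.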

To make sure your organization is prepared, contact your 318 account manager today, or email for assistance.


Wednesday, June 30th, 2010

Last week, German software company ARCHIWARE released version 4.0 of its enterprise backup solution, PresSTORE. This version is for new installations only – version 4.1, planned for release in October, will support upgrades from existing 3.x deployments.

The new features of PresSTORE 4 can be found on the company’s website, but here are some highlights:

  • New interface to simplify management
  • iPhone app for remote monitoring of jobs
  • New desktop notification system to alert users of actions
  • Progressive backup – “backup without full backup”

Aaron Freimark also wrote a post about the new version on the Xsanity site that talks more about the Xsan-specific features.

As before, PresSTORE is supported on Mac OS X (10.4 and higher), Windows (2003, 2008, XP, Vista and 7), Linux and Solaris. Backup2Go Server is only supported on OS X and Solaris.

PresSTORE support is great – during testing of the new version, the iPhone monitoring app was crashing. Within a day, a new version was available in the App Store that addressed the exact issue. Bravo!

To learn more about PresSTORE (including pricing options), please contact your 318 account manager today, or email for more information.

Uninstalling Retrospect 6.3 Clients and Changing Passwords

Wednesday, May 12th, 2010

Open the Retrospect client and turn it off. Then close it and delete the /Library/Preferences/retroclient.state file. Now you have two options. To completely uninstall, just trash the app from the Applications folder. Or, if you just needed to reset the password, you can rerun the installer and it will prompt you for a password.
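The manual steps can be sketched as below; PREFIX defaults to a scratch directory here so the pattern can be tried safely, and the application name is an assumption that may differ by client version:

```shell
#!/bin/sh
# Sketch of the uninstall steps. On a real client you would operate on /
# (with sudo); here a scratch root with placeholder files is used.
PREFIX="${PREFIX:-scratch-root}"
mkdir -p "$PREFIX/Library/Preferences" "$PREFIX/Applications/Retrospect Client.app"
touch "$PREFIX/Library/Preferences/retroclient.state"

rm -f "$PREFIX/Library/Preferences/retroclient.state"   # reset the client state
rm -rf "$PREFIX/Applications/Retrospect Client.app"     # full uninstall
```

Deleting only the state file and rerunning the installer corresponds to the password-reset path described above.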

Evaluating Backup Exec Jobs

Tuesday, April 13th, 2010

[ ] Assess the Job Setup tab and review its listing to determine which jobs are currently configured on the system.
[ ] Review the selection list to ensure that all relevant data and file shares are being backed up and copied.
[ ] Assess the Job Monitor tab to confirm that the jobs that are set up and configured are actually running as scheduled.
[ ] Review the job logs (Job History) to ensure that all data is being backed up; if there are minor errors, note what caused them so they can be corrected later.
[ ] Ensure that the job did not fail due to lack of space (or other chronic issues); if it did, most likely the client needs larger storage, or media and jobs must be set to allow overwriting of data.

Backup Agents are needed for special data such as SQL and Exchange databases, or files located on remote computers. Many open files will not back up unless the Open File Agent is present, installed, and licensed on the data source.

Media Sets (under the Media tab) are collections of backup media that share common properties. In Backup Exec, media sets can be adjusted under their properties to allow overwrites and appends either infinitely or after a certain period of time. This allows you to manage how media is handled when space begins to come into play. Verify these settings to ensure proper retention.

[ ] Review the Alerts tab and check under Active Alerts sub-tab and ensure that no jobs have been waiting on media or needed human interaction or response.
[ ] Review the Alert History sub-tab and verify that no jobs in the past were waiting for interaction or response.
[ ] Check backup notifications under each job and under the default preferences (Tools >> Recipients… & Tools >> Email and Pager Notification…), to ensure that the proper individuals are being notified about backups and alert items.
[ ] Review the Devices tab and verify that there are no devices/destinations that are Offline.
[ ] Ensure that any device currently listed as a backup destination is online (unless it is a member of a device pool; if the backup job references that pool, jobs will continue as long as at least one of the pool’s devices is online).

Typically, backup jobs will have destinations of either tape, local, or network storage. Most likely an external backup device will fall under the tree as a Backup-to-Disk Folder. If the drive/device is not connected, it may show up as Offline. If you are sure that the device is connected, right-click on the entry and ensure that the device is both confirmed as Online and also Enabled.

To learn more about Backup Exec – here are some additional links:
Symantec Backup Exec website

Datasheets on usage of Backup Exec 2010 (Applications, Features, Agents)

Wikipedia on the architecture and history of Backup Exec

Checking Backup Jobs in Atempo’s Time Navigator

Wednesday, March 31st, 2010

Time Navigator is a powerful enterprise-level backup software suite. It is also one of the most complex backup suites you can manage.

In order for an ADB to be successful you need to check the following:

  • Whether the scheduled backups were successful or not.
  • If they were unsuccessful, is intervention required?
  • Check the available storage for future backups.
  • Did the test restore succeed or not?
  • Are critical files backed up?
  • General log review.

Section 1. Check whether the scheduled backup was successful

To begin, you need to know the username and password for the local user on the host computer (which needs to have admin rights), as well as the username and password for the tina catalog.

Step 1: Open the Time Navigator Administrative Console.
On Mac it is /Applications/Atempo/tina/Administrative Console
On Windows c:\Program Files\Tina\Administrative Console

When the Administrative Console starts it will initiate a connection to the Time Navigator Catalog indicated in the config files.

It will prompt you for a username and password. Once you enter the proper username and password you will gain access to the Administrative Console.

This program interface is the main access point to the various programs that let you control Time Navigator.

Step 2. Choose the “Monitor” menu and select “Job Manager”; this will open the Time Navigator Job Manager. The initial view will show all active jobs. Go to the View menu and choose “Historic” to show past jobs.

From here you will be able to review the recent backup jobs to find out whether they were successful or not.

Section 2. If the backups were unsuccessful do they require intervention?

Whether intervention is required is largely determined by the reasons for a backup failure.

From within the Job Monitor you can select a job from the historic menu and double click it to access the job detail window.

From this window you will have access to several tabs. The tab of interest here will be the one called “Events”. This is a filtered view of the logs so it shows only the log entries that are connected to this job number.

Making the determination of whether intervention is warranted requires some knowledge of the errors you find. To that end, the errors are color-coded: yellow errors are considered minor and are likely to be overlooked if they are the only errors present, while orange and red errors are higher priority and should warrant the attention of a tech trained in Time Navigator.

Section 3. Check the available storage capacity for future backup executions.

Time Navigator treats all forms of storage as a tape library. Your backup destination will either be a virtual tape library, in the case of backing up to hard drives, or a specific physical tape library.

This means that we will need to view the Library Manager application.

Start with the Admin Console. Choose the host to which the library is attached (all libraries are attached to a host). Select the host icon with the mouse and choose the “Devices” menu; from there choose “Library”, then “Operations”, then “Management”.

This will spawn the Library Manager application. You will be presented with a dialogue containing a list of available Libraries.

Once chosen, you will get a window that shows the number of drives (virtual or real) and the tape cartridges in their slots (also virtual or real). From this display you will be able to determine which tapes have been used and which are free for use.
If a cartridge has been used, it will be labeled with the tape pool to which it belongs. If it is free for use, it will be labeled either SPARE or ?????, or in rare cases Lost & Found. Lost & Found cartridges should be reported to the administrator.

A comprehensive determination of how much space is left would take some math: know how much data each tape holds, how much data is backed up nightly, and so on.

A quick version to keep in mind is percentages: if fewer than 10% of cartridges are free, it might be worthwhile to notify the administrator. It will take some experience to tell whether this is a problem, as some tapes can hold hundreds of gigs and two tapes might take months to fill.
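The rule of thumb above amounts to a one-line percentage calculation; the cartridge counts below are made-up examples, not values from a real library:

```shell
#!/bin/sh
# Sketch of the 10% rule of thumb; total and free are example counts.
total=48   # cartridges in the library
free=4     # cartridges labeled SPARE
pct=$(( free * 100 / total ))
if [ "$pct" -lt 10 ]; then
  echo "Only ${pct}% of cartridges are free; notify the administrator"
fi
```

With these example numbers, 4 of 48 cartridges works out to 8%, which trips the warning.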

Section 4. Test Restore. Success or Failure.

This section implies that you will attempt a restoration of some files.
File restoration with Time Navigator is both its most powerful feature and its most complex in comparison to other backup software.

First, a word about the process. While it is true that the Administrative Console and associated applications can be run on any computer that participates in the Time Navigator backup system, the Restore and Archive Manager application will attempt to make a connection to the host from which files were backed up. This means you will need credentials for that host which allow read/write access to the directories that were backed up. To this end, it is often simpler to open the Administrative Console on the host in question before you open the Restore and Archive Manager application.

To restore files from the backup of a host you will need to select the host from the Administrative Console. From the “Platform” menu choose “Restore and Archive Manager”. You will then be challenged for a username and password for the host in question.

Once you have entered legitimate credentials for the host, you will be presented with the Restore and Archive Manager window. It will show the host name and the username by which you are connecting. It will also show you the complete file system of this host in expandable trees, with a check mark box beside each element.

Furthermore, this view shows you the file system in the present, and it can also show the file system at some point in the past.

This element is where the program gets its name: the “Time Navigator” allows you to navigate through time to look at the file system and select files for restoration.

The idea here is that you know what time period you are looking for. Select the date beside the “past” radio button and it will show you what files are available for that time period.

The second feature shown on this interface is the ability to isolate files that have been deleted, meaning you can adjust the view to show files that were present in the past but are not present now, spanning back an arbitrary amount of time as determined by the form element for days, weeks, months, etc.

While this is very useful, it will not filter out non-deleted files, which means you have to know what directory you want to look in before this becomes useful.

A third, and in my opinion the most useful, method of restoring files is called versioning.
If you right-click (Control-click) on a file that has been backed up, you will be presented with a contextual menu containing the word “Versions”.

Once selected, it will open a dialogue window listing every version of the file currently within the backup catalog.

Once you have selected a file from that list, you will need to click the “Synchronize” button at the bottom of the versions dialogue. This sets the past date and time marker to the point in time when this file was backed up. You can then check mark the file to be restored.

Finally, you can search the catalog for files from this host. While within the Restore and Archive Manager, choose Find from the “Backup” menu.

You will be presented with the search interface, with the current host already selected as the search base. From here you can search by pathname and filename, and specify how far back in time to search and how many results to show.

The search forms will accept wildcards for more creative searching. Once a file is located in the results window, you will need to click the “Synchronize” button at the bottom, in a manner similar to the versions window mentioned above.


All of the above techniques are methods of locating the files you wish to restore and putting check marks beside them. Now it is time to restore them.

Once you have all the files you wish to restore check marked, we can proceed. We will accomplish this with the “Restore” menu. If there is any question as to what you have selected for restore, there is an option here to “view checked objects”, which filters the view to show only objects that have been check marked for restoration.

Next we can choose to test the restore or run it. If there is any question as to whether the media for a file is available, you should run it as a test first.

When you select test, you will be greeted with a warning dialogue saying that this operation will perform all operations except the writing of data itself. This means drives and/or tape cartridges will be engaged and network throughput will be used.

After you agree, the restore dialogue will show. You will have two tabs to choose from, the first of which is labeled “Parameters”.

From here you can choose whether to restore the files to their original locations or to a new location on the same file system (restoring to another host is possible, but it is not covered here).

Now you must choose what level of restore you wish. Here you are presented with several radio buttons that let you choose whether to restore data with or without directory and object information. This may seem like splitting hairs, but in some environments it is nice that your backup system can restore the user permissions for objects in your directory tree instead of just restoring everything.

The checkmark box for “restore all file versions” will restore everything in the “versions” list discussed above. It is not used very often.

Now to the second tab, “Behavior”. The first selection to be made here is what behavior to choose should there already be a file with the same name at the destination path.

You will see options to restore the file and overwrite, to rename either the existing file or the restored file, or to not restore if certain conditions are met.

Keep this in mind: if you need to restore a large number of files and you don’t know whether you should overwrite existing files, restore them to a neutral location and review them by hand.

Next: if an error occurs while restoring files, skip? Cancel? Ask user? This selection is important if you are monitoring the process. If you are not monitoring and you choose Skip, you will need to review the logs; if you choose Cancel, you could come back to very little data having been restored.

Finally, the section “if required cartridges are off-line”. You run into this if you are dealing with physical tapes that are no longer within the library.

Issue Operator Requests for each missing cartridge: the software will bug you each time a tape is missing.
Ignore files indicated on those cartridges: self-explanatory.
Display offline cartridge list: this is the one I have learned to check. It will check the availability of the tapes against the current library listing, which means that if you put new tapes in, you have to scan the bar codes before this list updates. This method avoids a lot of headaches and is my recommendation if you are dealing with physical tape.

Finally, you get to press Restore, where you will be presented with the dialogue for the restore process. You will see the progress bar, the path of files being restored, and the option to monitor restore events.

If after all of this you have problems restoring you should contact a Time Navigator Admin.

Section 5. Did critical files back up.

At first glance this is similar to “did backups succeed”. You can back up the system state for Windows servers, which contains critical files, but you should also check that the catalog for Time Navigator is being backed up. In the Administrative Console there is a host icon called CATALOG. It is very important that this gets backed up nightly: if this file becomes corrupt or non-functional, the entire backup is effectively lost, and even a good Time Navigator tech can spend a huge amount of time pulling data from the tapes.

Section 6. General Logs Review

This section covers looking for things that look weird. From the Administrative Console, choose Monitor Events.

This will open the event monitor; if you see errors such as “Environment error” or “Catalog error”, they need to be reported.


Monday, October 19th, 2009

Ever lost the data on your computer and then realized that your media library was on your iPod or iPhone but not on your computer? Or maybe you had some data backed up, but not your massive media library? What you need is iTunes backwards: copy the media files from your iPod or iPhone to your computer. Luckily there’s Senuti, which is iTunes spelled backwards because it does just that: it copies data from your mobile device into the iTunes library. This should not be used as a backup tool, but it does make for a nice recovery path in some cases!

Using kmsrecover to Restore Kerio Backups

Monday, September 28th, 2009

Using kmsrecover to restore a mail server/user.
This command will overwrite the existing config and modify the message store, which is why you need another machine for this, with adequate HD space.

[ ] Install KMS locally on your computer (skip wizard)
[ ] Rename your laptop’s volume name to the same as where the KMS store lives
(e.g. Mail Server HD, Server HD, or Macintosh HD)
[ ] Copy KMS backups to external drive and plug into laptop.
[ ] Navigate to mail server path in terminal or DOS.
Mac: /usr/local/kerio/mailserver
PC: C:\Program Files\KerioMailServer
[ ] Start the recovery
Mac: ./kmsrecover
For a full recovery, point to the backup location:
./kmsrecover /Volumes/backup
For a specific recovery, use the filename:
./kmsrecover /Volumes/backup/
PC: kmsrecover
For a full recovery, point to the backup location:
kmsrecover E:\backup
For a specific recovery, use the filename:
kmsrecover E:\backup\
Warning: if the parameter contains a space in a directory name, it must be enclosed in quotes: kmsrecover “E:\backup 2”
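A quick way to see why the quotes matter: without them, the shell splits the path into two arguments, so kmsrecover would receive “E:\backup” and “2” separately. A minimal demonstration of the word splitting:

```shell
# Demonstrates shell word splitting on a path containing a space.
dir="E:\backup 2"   # the path from the warning above
set -- $dir         # unquoted: the shell splits on the space
echo "unquoted: $# arguments"
set -- "$dir"       # quoted: passed through as a single argument
echo "quoted: $# arguments"
```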

Restoring Kerio Mail Server Data Without Using KMS Restore

Tuesday, September 8th, 2009

This article will cover the WHY and the HOW of restoring mail files without using the KMS recover tool.

WHY would you not want to use the KMS recover tool?
1. The KMS recover tool requires that you stop KMS in order to restore. This is an interruption on a live mail server which can bring a company to a halt, and we do not always have the opportunity to wait until off-hours to restore important data.
2. The KMS tool can restore a specific folder, but it will overwrite that folder. For example, if Julia asks me to restore all the messages in her inbox from before September 2nd, restoring that INBOX using KMS recover as of September 2nd would erase the entire contents of the inbox and replace it with the contents from September 2nd. This is not always what is desired.

How to do this properly.

1. Isolate and decompress the archive zip file for the client. This is often the most time-consuming part of the restore, especially if there are quite a number of zip files to look through. I suggest you install some sort of client to look at the zip contents, such as the zip Quick Look plugin found at:

Without such a utility, you will need to decompress as many zip files as it takes to isolate the user folder in question. It is important to know that if the user account is over 1 GB, the backup process will split it across multiple zip files.

2. Once we have isolated the files to be restored, we copy them to the Archive directory as indicated in the Administrative Console under the Archive and Backup tab. Once these files are copied to the Archive directory, KMS will index them so they become available to the Mail Admin web interface.

3. Log into the Mail Admin web interface. Expand the Archive folder and you should see a listing for the files you copied to the archive folder.

4. Now we need to get these files into the target folder. This can be accomplished several ways:
A. The easiest way to restore these files is when you have the password for the user in question. If this is the case, you can access the web interface for that user. In the Admin account, create a public folder of the type required: a mail folder for mail, a contact folder if you are transferring contacts, or a calendar folder for transferring calendar items. Right-click on the new public folder and change the access rights so that the admin account and the user account in question both have administrative rights. Once this public folder has been created, copy the files from the archive folder to the public folder, usually by right-clicking on the archive folder, choosing “Move or Copy all”, and choosing the public folder as the destination. Once that copy process is done, you can log into the web interface for the account in question and copy the messages out of the public folder to the folder in their account.

This gives you the option to see what messages are there so you don’t overwrite them.

Repeat this process as necessary to restore the messages to the folders required.

B. If you don’t have the password to the user account, you can still accomplish a lot, but it will require the use of the terminal on the mail server. I would suggest you try your best to get the mail password for the account, or change the mail password for the account to grant yourself access. If you cannot get the password, or it would really be bad to change it, you can proceed in the following manner.

It is important that the actual message copying take place in the web interface, so that Kerio can properly name the messages and keep the index files correct.

In the Admin web interface, create a new folder named temp_restore. This folder will be empty, with nothing important in it. At this point you will need SSH access to the mail server and root access. Navigate to the mail store directory and to the account of the person you are going to restore messages for. Use the ditto command to copy the contents of the target restore folder (in this example, the inbox of Renee) to temp_restore.

When you refresh your Mail Admin web interface, temp_restore will gain all of the properties of Renee’s inbox. You can proceed to copy the files from your archive folder to this temp_restore. This will preserve message numbers and index files. Once the copy is complete, you can return to the terminal and reverse the direction of the copy. This will make Renee’s inbox the same as your temp_restore.

This method is trickier with the inbox, as messages may have come in while you were making the copies. Other folders are not quite so sensitive to this process.

This process can be time consuming, but it is sometimes best to work slowly without stopping the mail server for everyone.

BRU Server 2.0 Now Available

Friday, July 24th, 2009

BRU Server 2.0 was released this week, offering a long-anticipated update to the popular cross-platform backup suite. The two main features the TOLIS Group is highlighting are encryption of backup target sets and client-initiated backup.

Whether you run a BRU, Atempo, BakBone, Backup Exec, or Retrospect environment, 318 can assist you with planning, testing, verifying, or restoring backups. Contact your 318 account manager today for more details.

Vmeter & Vguard for Xsan

Thursday, July 9th, 2009

Vmeter is another great product from Vicom Systems that you can bolt onto your Xsan. Vmeter allows you to gather statistics on bandwidth allocation for Xsan clients. But Vmeter doesn’t stop there: it also allows you to meter, or limit, the amount of bandwidth that is allocated to client machines, maximizing bandwidth for some users and tiering your performance allocation.

Vguard, also by Vicom Systems, is based on the technology included in Vmirror, the LUN mirroring solution, but goes a step further. Vguard allows you to setup another Xsan and use that SAN as a backup. We’re not going to go so far as to call it a snapshot, but it’s everything but.

Overall, Vicom integrates well with Xsan and fills some of the holes that the product itself has. For more information on Vmeter, Vguard or Vmirror, contact your 318 account manager today.

Retrospect 8.0.733

Tuesday, May 12th, 2009

Retrospect 8.0.733 is now out and available for download. If you are using version 8 and experiencing problems, you should install it, as it fixes a number of bugs. Bugs fixed in the Retrospect 8.0.733 release:
18925: Keep backup sets and scripts associated when catalog rebuild is necessary
20075: General UI Feedback: Okay/Apply
20131: Able to enter text in fields that should only accept numbers
20146: Log Limit doesn’t verify for valid value range
20156: Prefs >Media > media request timeout should check for valid values
20229: Scripts Icon backwards in details view when no script is selected
20258: Copy assistant should not allow you to select same volume for source and destination
20276: “More Backups…” is disabled in Restore Assistant
20332: Restore Assistant: script starts when you select ‘Save’
20343: Error backing up Win XP client – error -3043 (the maximum number of Snapshots has been reached)
20373: Sources icons display as usb removable drives
20437: Past Backup lists wrong date
20475: Disclosure triangles in volumes and scripts
20504: Remove all local volumes: Need to restart Engine to repopulate
20528: Servers displaying in the Sources list
20538: Improve column sizes and layout
20555: Verify Script: Options lists backup sets
20585: “Pause Server” should change to “Unpause” or “Resume”
20598: File Media Sets: remove option to change ‘Fast Catalog Rebuild’
20604: Volume Type not correct
20634: Script Schedule > refresh > auto deletes schedules
20640: Creating a new schedule item does not select the new item
20719: Console: DAG memory leaks
20729: Possible Small Memory leak in Engine when [Backupset EditWithPassword]
20735: New Backup Script: using Tag from previous script
20849: Creating a New Media Set does not accept some characters
20896: “Please update your server” dialog should be more informative
20919: Media Sets: Tape not display Used/Free/Capacity
20945: ScriptProperties::TransferMode seems to have incorrect values
20953: Need to be able to defer scheduled activities
20971: Use Small Icons setting lost after closing UI
21015: Sources: Clients duplicate in the Multicast list
21039: License Manager UI Issues
21087: Starting activity negates activity scope buttons
21124: Desktop: no license challenge when adding a 3rd client
21174: Smart Tag UI problem
21302: Disk Media Sets: when only one member – remove should be disabled
21382: Dev: ArcDiskInfo/ArcDiskFileInfo’s persistent logic is wrong, blocking ppc feature
21463: Need a way to change console’s server password on existing server
21487: Sessions and Snapshots get into state with different volume names
21510: Search for files restore not working across multiple Media Sets
21544: Launch engine at startup authentication broken
21552: Sources: Erase a local drive the disk used / total not updated
21562: Restore Files: Assistant – Search for files in selected Media sets
21590: Need to store extdFlags EXTD_HASACL and EXTD_HASMETA in trees
21603: File Media Set: during backup .rbf.rfc file displays as unix executable
21618: Unable to successfully restore IIS on W2K3 Server
21625: Rules not updating correctly
21628: Unable to add multiple device members
21644: Cannot change member location in Edit Member, throws error
21663: Bad value for Compression field in Activities
21712: Assert during first backup
21737: Crash with DLT1 drive
21740: Media creation time is wrong
21746: Crash trying to add NAS device
21752: Crash copying library directory
21755: module.cpp-825 assert
21764: Console crash while backing up NAS (tag-related)
21775: wrong password adding clients
21782: Restore Assistant: Assert at module.cpp-845
21783: Sources: Local Volumes displaying multiple times
21785: Restore Assistant: When Clients volumes selected unable to ‘Continue’
21791: U Mich. assert
21797: Klingon server assert during client backup
21800: RefBackupset::Search needs Progress object
21803: Error -703 unknown when trying to access a Media Set
21804: Firewire Lacie D2 AIT not responding
21812: Engine crash with invalid object
21813: Incorrect free disk space displayed
21815: Can’t stop engine on 10.4.11
21822: Search for files – manual selection is ignored
21824: Wrong Client Errors being displayed
21825: Client Test button missing
21826: Client connection strangeness
21830: Rules UI different in different parts of yeti
21837: Source’s ‘Last Backup Date’ field doesn’t roll up
21838: assert while trying to rebuild a disk media set
21846: Improve how compression data is displayed
21849: Editing script with many sources not easy
21852: Crash proactive backup to tape library
21856: Console crash with 8.0.608 (tag-related)
21858: Restore Assistant: Selected Media selector set jumps to top of list
21863: Restore Assistant: Restore files from which backup – no date displaying
21864: Restore Assistant: Preview for multiple media sets – only displaying files from first
21866: Assert during local restore: restore drive out of space
21868: better errors needed when license is required
21876: Assert: tree.cpp-3095
21877: Smart Tags not working with Clients set to Startup volume
21878: Assert: module.cpp-825 and others when adding clients
21879: Can’t erase 6.1 VXA-320 media
21881: Hang with 2 proactive backups running
21901: Selecting tape in slot during add member tries to add tape in drive first
21902: Grow the UI elements for all non-English language XIBs
21908: Can’t create a Size rule with more then 3 numbers
21911: Restore Assistant: Not restoring correct files (search restore restores too many files)
21915: Rule: Rules using ‘is not’ switches back to ‘is’
21916: Rules: unable to use Rule ‘Volume drive letter is’
21917: Rules: Files system is Mac OS switches to Windows
21922: Rules: unable to use ‘Date accessed’ rule
21924: Add Media Set: changes to catalog path in text field are ignored
21925: Add Media Set: Browse window should be a sheet
21926: Client browse cause engine crash: module.cpp-845
21934: Assert module.cpp-825 adding tape members
21939: Assert: tmemory.cpp-275 and Crash Reporter logs
21945: Restore Assistant: Unable to use ‘Search Media Set’
21960: VXA-320 FireWire loader issues including assert at intldrdev.cpp-4483
21961: Sources: Last Backup Date – local dmg files
21969: Find Files doesn’t always find the right media sets
21973: Sources: cannot remove local favorite folders
22002: Restore Assistant: issue with preview
22005: Restore: crash when accessing backup with a yellow icon
22006: Restore Assistant: FindFiles with mutiple found sets but not all checked doesn’t run
22013: Copy Backup: MD5 check some error
22024: Unable to change rules condition
22046: Script > Schedule > Text cutoff “F” for friday
22056: Restore Assistant: Restore files – Where do you want to restore: allows multiple selections

Using Symantec’s Backup Exec With External Hard Drives

Tuesday, May 5th, 2009

This assumes that you’ve already installed Backup Exec and licensed it appropriately.
It also assumes that all parties understand the expected backup retention policies.

Preparing Backup Drives
1. Unpack Backup Drives
2. Plug both of them in
3. Note the drive letter assigned to them (this drive letter will now be forever associated with that drive).
4. Ensure the drive is formatted with NTFS; if not, back up the info on the hard drive, format it, and label it appropriately.
NOTE: You want to back up the info on the new external drive because often there will be utilities on there that are not present on the CD that came with the drive, or available from the manufacturer’s website.

Preparing Devices
1. Open Backup Exec
2. Navigate to Devices
3. Right mouse click on Removable Backup-to-Disk Folders
4. Select Backup-to-Disk Wizard
5. Click Next
6. Select Create a new backup-to-disk folder
7. Select Removable backup-to-disk folder
8. Name it (remember the name)
9. Select a path (this is just the drive letter [ex. F:])
10. Follow the rest of the steps
NOTE: You will need to do this for each drive.

Preparing Media
NOTE: This is a critical step. If you don’t do this, chances are that the media you’re writing to will not allow you to overwrite it, even if you told it to do so in your Job properties. As a general rule, remember that device properties trump job properties.
1. Go to the Media tab, Right mouse click on Media Set
2. Select New Media Set
3. Give it a name (remember the name)
4. Ensure that “Overwrite protection period” is set to: Infinite – Don’t Allow Overwrite
NOTE: This is, in my opinion, bad grammar that’s been carried along from version to version. What this setting does is DISABLE overwrite protection. This means that there is no overwrite protection; i.e., you can write over the drive as many times as you please.
5. For “Append Period”, ensure that it is set to “Infinite – Allow Append”. Backup Exec interprets this as “I will allow you to append as many times as you please because there is no period to stop appending”.
6. Set Vault rules to None

Creating a Job
1. Go to the Job Setup tab
2. On the left pane, under the Backup Tasks window, select “New job using wizard”
3. Select “Create a backup job with custom settings”
4. Select the resources you would like to backup
5. Test the logon account
6. Select the order of backup
7. Name the backup, and the backup set
8. Choose the device you’d like to backup the data to (The All Devices pool).
NOTE: In most cases you will want to select “all devices”. This tells Backup Exec to go to all devices and select the one that’s available to back up to. If you have a tape drive that’s been deprecated, disable the tape drive under “Devices”, but still point the job to all devices; it will then back up to the drive that’s plugged in. This allows for external drive rotation with the least amount of user intervention. If you have more than one “online” device, create a new device pool under “Devices” and add your two backup-to-disk folders to that new pool.
9. Select the media set you’d like to backup the data to (the new media set you created).
10. For the Backup Overwrite Method, select “Append to media, overwrite if no appendable media is available”. This will back up to the drives for as long as your media set settings allow, and if there’s no room, it will overwrite.
11. Choose your backup options. Depending on the time it takes to back up, you may want to adjust this, but with the size of external hard drives nowadays, I don’t see any reason to stray from full backups. If the backups are under 100GB and you have 1TB drives, go ahead and choose full backups (at the speed of USB 2.0 or greater, this will most likely only take about 4-5 hours). This will make restores easier in an offsite rotation scenario, simplify managing jobs in the long run, and give you roughly 8 days’ worth of backups.
12. Always select it to verify backups
13. Schedule the job to run later
14. For the schedule, you would usually want to choose Recurring Week Days, and select the days you want it to backup per your conversation with the client.
15. For the Time Window, select what time you’d like the backup to start.
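The retention figure in step 11 is simple arithmetic. Here is the back-of-envelope version, using the example numbers from that step (100GB full backups onto a nominal 1TB drive; real formatted capacity runs a bit lower, which is why the article's estimate is closer to 8 days):

```shell
# Back-of-envelope retention estimate for one external drive.
DRIVE_GB=1000     # nominal 1 TB drive (formatted capacity will be less)
BACKUP_GB=100     # size of one full nightly backup
DAYS_PER_DRIVE=$(( DRIVE_GB / BACKUP_GB ))
echo "One drive holds roughly ${DAYS_PER_DRIVE} nightly full backups"
```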

Adjusting Alerts
1. Go to Tools > Alert Categories
2. For “Media Insert” and “Media Overwrite”, ensure that you select “Automatically clear alert after” 2 minutes (or whatever you want), and respond with “Yes”.
NOTE: IMPORTANT: If you don’t do this, Backup Exec will wait FOREVER (literally) for someone to manually acknowledge the alert by clicking Yes, No, or Cancel. It will always pop an alert because it’s hitting a pool to search for available media. By responding with Yes, it will begin to overwrite and/or use the device and media that you selected for the job.

Testing Job
1. Unplug one of the drives
2. Manually Run the Job
3. Verify that the job has run successfully; note any problems you ran into, and correct or document as necessary
4. Run the Job AGAIN on the same drive. Ensure that it runs and appends to the drive. This will prove that the drive can be written to and is not “locked” due to an incorrect setting on the job or media.
5. Unplug the tested drive
6. Run steps 2-4 on the other drive to ensure that everything is OK.
7. Run a test restore
8. You can now leave one of the drives onsite, and take another with you or leave it with the client. You can now assure the client that they now have good backups (one onsite, and one that’s going offsite), and that you’ve thoroughly tested the backups and also performed a test restore.

Wrap up
1. Note any false positives in notes for the client (for backup troubleshooting in the future)
2. Update the Backup section for the client in notes.
3. Even if there was no BEV, send a BEV out saying that they now have a backup system in place.

Troubleshooting File Replication Pro

Saturday, May 2nd, 2009

Check the Web GUI.

To Check the logs via the Web Gui
- On the server, open Safari and go to http://localhost:9100 and authenticate
- Go To Scheduled Jobs and view the Logs for the 2 Way Replication Job

You can also tail the logs on the server. They are in /Applications/FileReplicationPro/logs, and among the various logs in that location, the most useful is the synchronization log.
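If you'd rather scan for trouble from the command line than tail each file, something like this works (the log directory is from the article; the synchronization log's exact filename varies by FRP version, so this scans all *.log files):

```shell
# List any FRP log files that mention an error (case-insensitive).
scan_frp_logs() {
  dir="${1:-/Applications/FileReplicationPro/logs}"
  grep -il "error" "$dir"/*.log 2>/dev/null
}
scan_frp_logs || true   # prints matching filenames; silent if none found
```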

Often the logs show that the servers’ time sync is too far apart and the date and time are not correct. Each server has a script you can run to resync the time. To run this script, open Terminal on both the first and second servers and run:
sudo /scripts/

You should see output in the Terminal window and in the Console indicating that the time and date are now in sync with the time server.

To Stop and Restart the Replication Service

Open Terminal and run the following commands with sudo:
systemstarter stop FRPRep
systemstarter stop FRPHelp
systemstarter stop FRPMgmt
Once the services are stopped, start them up again in the following order:
systemstarter start FRPRep
systemstarter start FRPHelp
systemstarter start FRPMgmt

You should also restart the second (or tertiary) client:
Open Terminal and run the following commands with sudo:
systemstarter stop FRPRep
Wait for the service to stop, and then start it again with this command:
systemstarter start FRPRep

Recovering FileMaker and FileMaker Server Databases

Tuesday, April 21st, 2009


The most common thing that happens to FileMaker databases is file corruption. In this case, the local or server files will not be accessible, and customers will report issues.

Normally one specific file is down and inoperable in FileMaker or FileMaker Server, but sometimes it could be multiple files. You will either have to grab the affected items from a recent backup or otherwise recover the files.


If you have to recover files, you will need FileMaker Pro. If you are recovering .fp5 files (FileMaker 5 databases), either version 5 or 6 would be appropriate. If the files are .fp7 files (FileMaker 7 databases), then versions 7, 8, 9 and 10 will work. Open FileMaker, choose menu command “File, Recover”, and select the damaged database file. FileMaker will save a recovered copy.

**Important** For .fp5 (FileMaker 5) files, after recovering, each file’s shared hosting status might revert to Single User Mode. To fix this, open the file in FileMaker Pro 5 or 6, go to “File, Sharing” and set the file to either Multi User or Multi User (Hidden), depending on whether or not you want it to be selectable in FileMaker Server. (If you do not have a version of FileMaker Pro 5 or 6 to work with, most likely a 318 developer will.)

Using LCR for Exchange 2007 Disaster Recovery

Thursday, April 16th, 2009

Local Continuous Replication (LCR) is a high availability feature built into Exchange Server 2007.  LCR allows admins to create and maintain a replica of a storage group to a SAN or DAS volume.  This can be anything from a NetApp to an inexpensive jump drive or even a removable sled. In Exchange 2007, log file sizes have been increased, and those logs are copied to the LCR location (known as log shipping) and then used to “replay” data into the replica database (aka change propagation).

LCR can be used to reduce the recovery time in disaster recovery scenarios for the whole database, instead of restoring a database you can simply mount the replica.  However, this is not to be used for day-to-day mailbox recovery, message restores, etc.  It’s there to end those horrific eseutil /rebuild and eseutil /defrag scenarios.  Given the sizes that Exchange environments are able to get in Exchange 2003 R2 and Exchange 2007, this alone is worth the drive space used.

Like with many other things in Windows, LCR can be configured using a wizard.  The Local Continuous Backup wizard (I know, it should be the LCR wizard) can be accessed using the Exchange Management Console.  From here, browse to the storage group you would like to replicate and then click on the Enable Local Continuous Backup button.  The wizard will then ask you for the path to back up to and allow you to set a schedule.  Once done, the changes will replicate, but the initial copy will not.  This is known as seeding and will require a little PowerShell to get going.  Using the name of the Storage Group (in this example “First Storage Group”) you will stop LCR, manually update the seed, then start it again, commands respectively being:

Suspend-StorageGroupCopy –identity “First Storage Group”

Update-StorageGroupCopy –identity “First Storage Group”

Resume-StorageGroupCopy –identity “First Storage Group”

Now that your database is seeded, click on the Storage Group in the Exchange Management Console and you should see Healthy listed in the Copy Status column for the database you’re using LCR with.  Loop through this process with all of your databases and you’ll have a nice disaster recovery option to use next time you would have instead done a time consuming defrag of the database.

Restoring Data From Rackspace

Wednesday, April 1st, 2009

Rackspace provides a managed backup solution. Backups are available for up to one month back: two weeks of backups are kept on their premises, and the previous two weeks are stored offsite. If the files to restore fall within the offsite period, the restore will take longer, as the tapes have to be moved from the offsite location back onsite before the restore process can start.

Restores can either be performed from Rackspace’s Web Portal or a support phone call.

Calling Rackspace
Supply your account name and password.
State that you want to restore files, and whether it is a Windows or Linux computer.
Give the backup operator the file path and the date to restore from.
A ticket will be created and updated with the restore process. The ticket will be updated when the restore is complete and will include the directory of the restored data.

File Replication Pro Story About 318

Wednesday, March 25th, 2009

The File Replication Pro folks have published a customer success story outlining some of the ways we’re using their product. Check it out and if you have any questions about what we’re doing with it feel free to drop us a line!