Posts Tagged ‘backups’

iOS Backups Continued, and Configuration Profiles

Friday, December 14th, 2012

In our previous discussion of iOS backups, we hinted that configuration profiles are the ‘closest to the surface’ on a device. What that means is that when Apple Configurator restores a backup, the profile is the last thing applied to the device. Folks hoping to use Web Clips as a kind of app deployment need to realize that restoring a backup that has the web clip in a particular place doesn’t work – the backup that designates where icons line up on the home screen gets laid down before the profile applies the web clip, so the clip gets bumped to whichever home screen comes next after the apps take their positions.

This makes a great segue into the topic of configuration profiles. Here’s a ‘secret’ hiding in plain sight: Apple Configurator can make profiles that work on 10.7+ Macs. (But please, don’t use it for that – see below.) iPCU could possibly generate usable ones as well, although the lack of full-screen mode in its interface is a hint that it may not see much in the way of updates on the Mac from now on. iPCU is all you have in the way of an Apple-supported tool on Windows, though. (Protip: activate the iOS device before you try to put profiles on it – credit @bruienne for this reminder.)

Also, thanks to @bruienne for the recommendation of the slick p4merge tool.

Now why would you avoid making, for example, a Wi-Fi configuration profile for use on a Mac with Apple Configurator? Well, there’s one humongous difference between iOS and Macs: individual users. Managing devices with profiles shows Apple tipping their cards: they seem to be saying you should think of only one user per device, and that if a setting is important enough to manage at all, it should be an always-enforced setting. The Profile Manager service in Lion and Mountain Lion Server has an extra twist, though: you can push out settings for Mac users or the devices they own. If you want to manage a setting across all users of a device, you can do so at the Device Group level, which generates extra keys beyond those present in a profile generated by Apple Configurator. The end result is that a Configurator-generated profile will be user-specific, and will fail with deployment methods that need to target the System. (Enlarge the above screenshot to see the differences – and yes, there’s a poorly obscured password in there. Bring it on, hax0rs!)
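
As an example of the kind of key involved (an assumption on our part – compare your own diff of the two profiles), the device-level version carries a payload scope entry along these lines:

<key>PayloadScope</key>
<string>System</string>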

These are just more of the ‘potpourri’ type topics that we find time to share after being caught by peculiarities out in the field.

CrashPlan PROe Refresher

Thursday, December 13th, 2012

It seems that grokking the enterprise edition of Code 42's CrashPlan backup service is confusing for everyone at first. I recall several months of reviewing presentations and having conversations with elusive sales staff before the arrangement of the moving parts and the management of its lifecycle clicked.

There’s a common early hangup for sysadmins trying to understand deployment to multi-user systems, with the only current way to protect each user from another’s data being to lock the client interface (if instituted as an implementation requirement). What could be considered an inflexibility could just as easily be interpreted as a design decision that directly relates to licensing and workflow. The expected model these days is that a single user may have multiple devices, but enabling end users to restore files (as we understand it) requires that one user be granted access to the backup for an entire device. If that responsibility is assigned to the IT staff, then the end user must rely on IT to assist with a restore, instead of ‘healing thyself.’ This isn’t exactly the direction business tech has been going for quite some time. The deeper point is, backup archives and ‘seats’ are tied to devices – encryption keys cascade down from a user, and interacting with the management of a device is, at this point, all or nothing.

This may be old hat to some, and just after the Pro name took on a new meaning (Code 42-hosted only), the E for Enterprise version had seemingly been static for a spell – until things really picked up this year. With the 3.0 era came the phrase “Cold Storage”, which is neither a separate location in the file hierarchy nor intended for long-term retention (like one may use Amazon’s new Glacier tier of storage for). After a device is ‘deactivated’, its former archives are marked for deletion, just as in previous versions – this is just a new designation for the state of the archives. The actual configuration that determines when the deactivated device’s backup will finally be deleted can be designated deployment-wide or more granularly per organization. (Yes, you can find the offending GUID-tagged folder of the archives in the PROe server’s filesystem and nuke it from orbit instead, if so inclined.)

ComputerBlock from the PROe API

Confusion could arise from the term that looks similar to deactivation, ‘deauthorization’. Again, you need to notice the separation between a user and their associated device. Deauthorization operates at the device level to put a temporary hold on its ability to log in and perform restores on the client. In API terms it’s most similar to a ComputerBlock. The only way this touches licensing is that you’d need to deactivate the device to get back its license for use elsewhere (although jiggery-pokery may be able to resurrect a backup archive if the user still exists…). As always, test, test, test, distribute your eggs across multiple baskets, proceed with caution, and handle with care.

iOS and Backups

Wednesday, December 12th, 2012

If you’re like us, you’re a fan of our modern era: we are (for the most part) better off than we used to be when it comes to managing iOS devices. One such example is bootstrapping, although we’re still a ways away from traditional ‘imaging’. You no longer need Xcode to update the OS in parallel, iPCU to generate configuration profiles, and iTunes to restore backups. Nowadays in our Apple Configurator world, you don’t interact with iTunes much at all (although it needs to be present to assist in loading apps, and it plays a part in activation).

So what are backups like now, and what are the differences between a restore from, say, iCloud versus Apple Configurator? Well, as it was under the previous administration, iTunes has all our stuff; practically our entire base belongs to it. It knows about our Apple ID, it has the ‘firmware’ or OS itself cached, we can rearrange icons with our pointing human interface device… good times. Backups with iTunes are pretty close to imaging, as an IT admin might define it. The new kids on the block (iCloud, Apple Configurator), however, have a different approach.

iOS devices maintain a heavily structured and segmented environment. Configuration profiles are bolted on top (more on this in a future episode), ‘userspace’ and many settings are closer to the surface, apps live further down towards the core, and the OS is the nougat-y center. Apple Configurator interacts with all of these modularly, and backups take the stage after the OS and apps have been laid down. This means that if your backup includes apps that Apple Configurator did not provide for you… the apps (and their corresponding sandboxed data) are no longer with us – the backup it makes cannot restore the apps or their placement on the home screen.

iCloud therefore stands head and shoulders above the rest (even if iTunes might be faster). It’s proven to be a reliable repository of backups, while managing a cornucopia of other data – mail, contacts, calendars, etc. It’s a pretty sweet deal that all you need is to plug in to power for a backup to kick off, which makes testing devices by wiping them just about as easy as it can get. (Assuming the apps have the right iCloud compatibility, so the saved games and other sandbox data can be backed up…) Could it be better? Of course. What’s your radar for restoring a single app? (At this point, that can be accomplished with iTunes and manual interaction only.) How about more control over frequency/retention? Never satisfied, these IT folk.

(Sysadmin) Software Design Decisions

Wednesday, October 3rd, 2012

When approaching a task with an inkling to automate, sometimes you find an open source project that fits the bill. But the creator will work within constraints, and often express their opinion of what’s important to ‘solve’ as a problem and therefore prioritize: a deployment tool is not necessarily a patch management tool is not necessarily a configuration management tool, and so on. One of the things I’ve dealt with is trying to gauge the intent of a developer and deciding if they are interested in further discussion/support/development of a given project. Knowing why one decision was made or another can be helpful in these situations. In that category of things I wish someone had written so I could read it, here are the design decisions behind the sonOfBackupRestoreScripts project I’ve been toying with as an add-on to DeployStudio (heretofore DS); after reading the following, you can hopefully understand why I am not releasing it as an official, supportable tool in its current bash form.
I’ve adapted some of the things Google used in their outline for Simian as a model, to give this some structure.

Project Objective:

To move user home folders and local authentication/cached credentials between workstations in a customizable and optimized manner, preserving the integrity of the data/user records as much as possible

Overview:

For speed and data integrity, rsync is used to move selections of the user’s home folder (minus caches, trash, and common exclusions made by Time Machine). To increase portability and preserve Mac-specific attributes, a disk image is generated to enclose the data. The user account information is copied separately, and helpful information is displayed at critical points as the process moves from one stage to another, and during the backup itself.

Requirements: DeployStudio Server / NetBoot

DS, as a service, provides an infrastructure to run the script in, and automounts a repository to interact with over the network. It’s meant to work with or without a NetBoot environment, but an architectural assumption made during development/testing is wired Ethernet, with USB/Thunderbolt adapters if the clients are MacBook Airs. Even old Mac minis can function fine as the server, assuming the repo lives on a volume with enough free space to accept the uncompressed backups.

Implementation Details: Major Components / Underlying Programs

- source/destination variables

Parameters can be passed to the script to change the source/destination of backups/restores with the -s (source) and -d (destination) switches, followed by a path that is reachable by the NetBooted system.
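
For example, an invocation might look something like this (the script name and paths here are placeholders, not the project’s actual ones):

./backupRestore.sh -s "/Volumes/Macintosh HD/Users" -d /tmp/DSNetworkRepository/Backups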

- hdiutil

A simple sparse disk image that can expand up to 100GB is created with the built-in hdiutil binary. The file system format of that container is JHFS+, and a bunch of other best practices, cobbled together from Bombich’s Carbon Copy Cloner (heretofore CCC) and InstaDMG, are employed.
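
As a rough sketch (the size, volume name, and destination are illustrative, and the project’s actual set of ‘best practice’ flags is longer), creating such an image boils down to something like:

hdiutil create -size 100g -type SPARSE -fs HFS+J -volname "UserBackup" /tmp/DSNetworkRepository/Backups/UserBackup.sparseimage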

- cp

The cp binary is used to simply copy the user records from the directory service node the data resides on to the root of the sparseimage, and the admin group’s record is copied into a ‘group’ folder. If hashes exist in /var/db/shadow/hash, which is how passwords were stored prior to 10.7, those are moved to a ‘hashes’ folder.
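
In spirit, that amounts to something like the following (assuming the local directory service node, a user ‘jdoe’, and a sparseimage mounted at $MOUNT – in the NetBoot runtime these source paths would be prefixed with the target volume’s mount point, and the script’s actual variable names differ):

cp "/var/db/dslocal/nodes/Default/users/jdoe.plist" "$MOUNT/"
mkdir -p "$MOUNT/group" && cp "/var/db/dslocal/nodes/Default/groups/admin.plist" "$MOUNT/group/"
if [ -d /var/db/shadow/hash ]; then
  mkdir -p "$MOUNT/hashes"
  cp /var/db/shadow/hash/* "$MOUNT/hashes/"
fi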

- rsync

A custom, even more current build of rsync could be generated if the instructions listed here are followed. Ideally, a battle-tested version like the one bundled with CCC’s ccc_helper.app (/Applications/Carbon\ Copy\ Cloner.app/Contents/MacOS/ccc_helper.app/Contents/MacOS/rsync, which is actually a heavily customized rsync version 3.0.6) could be used, but its output isn’t easy to adapt to get an overview of progress during a CLI transfer. Regardless, the recommended switches are employed in hopes of getting a passing grade on the backupBouncer test. The 3.0.7 version bundled with DS itself (/Applications/Utilities/DeployStudio\ Admin.app/Contents/Frameworks/DSCore.framework/Versions/A/Resources/Tools/rsync, which for whatever reason is excluded when the assistant creates NetBoot sets) was used during development/testing.
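
For the curious, the invocation is broadly along these lines (a sketch only – the project’s actual switch list is longer, and some metadata flags only exist in patched rsync 3.x builds):

rsync -aHAX --exclude-from=Excludes.txt "/Volumes/Macintosh HD/Users/" "$MOUNT/Users/"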

-Exclusions

The Users folder on the workstation being backed up is what’s targeted directly, so deleted users’ folders or specific subfolders can be removed with the exclusions file fed to the rsync command. Without catch-all, asterisk (*) ‘file globbing’, you’d need to be specific about the types of files you want to exclude if they only live in certain directories. For example, to skip backing up any mp3 files, no matter where they are in the user folders, you’d add - *.mp3 to the exclusions file. Additional catch-all excludes could be used, as detailed in the script, which specifically excludes ipsw’s (iOS firmware/OS installers) like this: --exclude='*.ipsw'
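
A hypothetical excerpt of such an exclusions file (the real Excludes.txt shipped with the project differs) might look like:

- .Trash/
- Library/Caches/
- *.mp3
- *.ipsw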

-Restore

Pretty much everything done via rsync and cp is done in reverse, utilizing the source/destination options, so a backup taken from one machine can easily be chosen for restore to another.

Security Considerations:

Very little security is applied during storage. Files are transferred over password-protected AFP, so a separate server and repo could be used to minimize potential access by whoever can access the main DS service. Nothing encrypts the files inside the sparseimages, and if present, the older password format is a hash that could potentially be cracked over a great length of time. The home folder ACLs and ownership/perms are preserved, so in that respect the data is only as secure as access to the local file systems on the server and client.

Excluded/Missing Features:
(Don’t You Wish Every Project Said That?)

Hopefully this won’t sound like a soul-baring confession, but here goes:
No checks are in place for whether there’s enough space on the destination, nor whether a folder to back up is larger than the currently hard-coded 100GB sparseimage cap (after exclusions). Minimal redirection of logs is performed, so the main DS log can quickly hit its 2MB cap and stop updating the DS NetBoot log window/GUI if a boatload of progress is echoed to stdout. The process to restore a user’s admin group membership (or membership in any other group on the original source) is not performed, although the group’s admin.plist can be queried after the fact. Nor is there any reporting on deleted users’ orphaned home folders if they do actually need to be preserved – by default they’re just part of the things rsync excludes. All restrictions are performed in the Excludes.txt file fed to rsync, so it cannot be passed as a parameter to the script.
And the biggest possible unpleasantness is also the #1 reason I’m not considering continuing development in bash: UID collisions. If you restore a 501 user onto an image with a pre-existing 501 user that was the only admin… bad things will happen. (We’ve changed our default admin user’s UID as a result.) If you get lucky, you can change one user’s UID or the other and chown to fix things as admin before all heck breaks loose… If this isn’t a clean image, there’s no checking for duplicate users with newer data, there’s no FileVault 1 or 2 handling, no prioritization so that if it can only fit a few home folders it’ll do so and warn about the one(s) that wouldn’t fit, no version checking on the binaries in case different NetBoot sets are used, no fixing of ByHostPrefs (although DS’s finalize script should handle that), and no check with a die function if the restore destination doesn’t have enough space, since the common case is restoring to the same HD or a newer, presumably larger, computer. Phew!
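
For illustration only, here’s the flavor of UID-collision check a future version could perform – the paths, user name, and plist key layout are assumptions, and the current scripts don’t do this:

backup_uid=$(/usr/libexec/PlistBuddy -c 'Print :uid:0' "$MOUNT/jdoe.plist" 2>/dev/null)
dscl . -list /Users UniqueID | awk -v uid="$backup_uid" '$2 == uid { print "UID collision: " $1 " already uses " uid; exit 1 }'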

Wrapup:

The moral of the story is that the data structures available in most of the other scripting languages are better suited for these checks and for taking evasive action as necessary. Bash does really ungainly approximations of tuples/dictionaries/hash tables, which forced the previous version of this project to perform all necessary checks and actions during a single per-user loop to keep things functional without growing exponentially longer and more complex.

Let’s look forward to the distant future when this makes its way into Python for the next installment in this project. Of course I’ve already got the name of the successor to SonOfBackupRestoreScripts: BrideOfBackupRestoreScripts!

DeployStudio Scripting Tips

Tuesday, September 25th, 2012

I’ve given a presentation on my affinity for DeployStudio, yet with it being closed source, it’s sometimes like an enigma (wrapped in a mystery (wrapped in bacon)). However, a workflow action exists to enable scripting within it, although the only option besides automatically running a script when it’s dropped into a workflow is non-interactively passing arguments to it. Even with little in the way of documented information, projects have popped up to take advantage of the framework provided.

Most notably, Rusty Myers’ BackupRestore scripts enabled quite an interesting workflow: first, you could run one workflow to tar (or ditto) the user folders to a new Backup directory in the repo, with a few customizable exceptions. And then, when either specified or dropped into a workflow that had a restore action precede it, you could put the users and their associated passwords back into place. This is obviously pretty darn useful for clean(er) migrations and/or OS upgrades, or simply refreshing existing workstations with a new base set of software. Many folks in the MacEnterprise community contributed features, including FileVault (version 1) support, and updates were made for the curveball Lion introduced with respect to how passwords are stored (nested inside the user record plist itself).

I’m in the process of creating a successor to this project, so I thought I’d share some of the experiences I’ve had and pointers I’ve come across as a sysadmin, not a software developer, attempting to get a repeatable task accomplished inside this framework. Tip number zero is the same advice given to all students of programming in scripting languages: don’t write a lot before running and getting feedback. So, I booted a laptop with a few small user folders to my DeployStudio-generated NetBoot set, authenticated to the interface, and opened Terminal. That NetBoot set includes the optional Python framework (Ruby is another option if you’d like access to that scripting language), which I’ll be using in the future. Along with selecting “Display Runtime log window by default”, I extended the “Quit runtime automatically after__” number to 90 seconds, so when testing inside of a workflow I wouldn’t be kicked out as I iterated and repeated the process.

To get started, I made an “admin” folder inside the repo, put an updated version of the rsync binary in it (since the one in /usr/bin that ships with OS X is pretty ancient), and started writing a script in that directory which I could therefore run from Terminal on the NetBooted laptop over VNC/ARD. For starters, here’s tip #1:
DeployStudio mounts the repo in /tmp/DSNetworkRepository. While /tmp isn’t read only, you will get out-of-space errors and general unhappiness if you use it for anything except mountpoints.
Tip #2:
No, you can’t use symlinks in those subfolders to point somewhere else on the DeployStudio server, since it assumes the directory the symlink would point to is relative to the root of the netboot set instead. (No, really, it wouldn’t even work when using ln -s on the machine the repo’s hosted from instead of Finder’s aliases, which definitely don’t work in scripts.)
Tip #3:
For persnickety folks like me who MUST use a theme in Terminal and can’t stand not having Option set as the meta key, you’re probably bummed that the Preferences menu item is greyed out and Command-Comma doesn’t seem to work. There is a way, though: from the Shell menu, choose Show Inspector. Then, from the Settings tab, double-click the theme you prefer. The full settings window will appear, and you can have your modern conveniences again.
Tip #4:
How does DeployStudio decide what is the first mounted volume, you may wonder? I invite (dare?) you to ‘bikeshed’ (find a more optimized way to accomplish a relatively trivial task) this particular piece of code:
system_profiler SPSerialATADataType | awk -F': ' '/Mount Point/ { print $2}' | head -n1
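As one hedged (and untested) alternative – assuming the first data partition on the internal disk is disk0s2, which certainly isn’t universal – you could ask diskutil instead:
diskutil info -plist disk0s2 | grep -A1 '<key>MountPoint</key>' | tail -n 1 | sed -E 's/<[^>]+>//g; s/^[[:space:]]+//'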
In the case of the restore script, hard-coding the DS_LAST_RESTORED_VOLUME variable (on line 44 in Rusty’s current Lion version), or changing the argument in the workflow to pass that path with the -v flag will remove the dependency on restoring an image before putting user folders back in place.

Two more tips before I go, which are both specific to the task I’m trying to accomplish. Ownership on files will not be preserved when moved to the repo with rsync, so you can create a sparseimage or sparsebundle as a destination, and it will even retain ACLs (certain patched rsync binaries complain about smb_acls, like the one I used, which is bundled in DeployStudio’s Tools folder). As mentioned about /tmp in the NetBoot environment earlier, sparseimages should be created in a subfolder of the repo, or you could easily run into ‘out of space’ errors.

When logging, you can use tee or simply redirect output, but more customized feedback in the actual log window of the DeployStudio NetBoot runtime is helpful. There’s a “custom_logger” function used in some of the bundled scripts… which literally does nothing but echo $1 – pretty rudimentary. For output that doesn’t display when run as part of a script, you can redirect that output to /dev/stdout and have it shown instead of fooling around with echo or printf.
e.g. rsync -avz /source /destination >/dev/stdout
There may be a lag if verbose output happens in a short amount of time, as the main log file in the repo is being written to simultaneously with what is printed onscreen.

Oh, and the silliest thing I’ve noticed: your script needs to be owned by root:wheel and 777 in the scripts folder of the repo in order to show up in the workflow interface for selection… It’s got its quirks, but it’s just about the best out there!

Building A Custom CrashPlan PROe Installer

Friday, April 13th, 2012

CrashPlan PROe installation can be customized for various deployment scenarios

Customization of implementations for over 10,000 clients is considered a special case by Code 42, the makers of CrashPlan, and requires that you contact their sales department. Likewise, re-branding the client application to hide the CrashPlan logo also requires a special license.

Planning Your Deployment

A large scale deployment of CrashPlan PROe clients requires a certain level of planning and setup before you can proceed. This usually means a test environment to iron out the details that you wish to configure. Multiple locations, bandwidth, and storage are obvious concerns that will need a certain amount of tuning before and after the service ‘goes live’. Also, an LDAP server populated with the expected information, or a prepared XML document whose identifiable machine information can be matched with account and registration data, needs to be in place. Not just account credentials, but also the filing of computers and accounts into groups through the use of Organizations (which directly relate to the registration information used), should be considered.

Which Files to Change

The CrashPlan PROe installer has different files for Windows and Mac OS X, but the gist is largely the same for either. There is a customizable script (or .bat file) that you can use to specify variables that feed deployment-specific information into a template. The script can be customized to reference LDAP information, or even a shared data source that can provide account information based on an identifiable resource such as a MAC address.

Mac OS X 

Download the installer DMG and make a copy of it. The path we’ll be working in is:

Install CrashPlanPRO.mpkg/Contents/Resources/

Inside the Resources directory there is a Custom-example folder that contains the template and script to customize.

Duplicate the Custom-example to Custom

userinfo.sh is a configuration script that has (commented-out by default) sections for parsing usernames from the current home folder, the hostname, or LDAP. This would also be where one could gather other machine information (such as MAC address) and match it to data in a shared document on a file server.

In the same folder as userinfo.sh is the folder “conf”, which contains the file default.service.xml. The contents of this file can be fed variable information from the configuration script to set the user name, computer name, LDAP specifics, and password that will be used upon installation. It is advisable to test new user creation when using LDAP and CrashPlan organizations, to ensure users end up where you expect; it is possible to specify those properties in this XML file.

So the process breaks down like this: edit userinfo.sh to populate default.service.xml, let the installer run and make contact with the server, and let the organization policies set all non-custom settings.
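
As a rough sketch of the idea (the variable names below follow the CP_USER_NAME / CP_USER_HOME convention shown in the Windows examples later in this post – check the comments in the bundled userinfo.sh for the exact names and mechanism it expects):

#!/bin/sh
# Derive the user from the first real home folder on the machine (assumes one primary user per Mac)
CP_USER_HOME=$(ls -d /Users/* 2>/dev/null | grep -v "Shared" | head -n 1)
CP_USER_NAME=$(basename "$CP_USER_HOME")
export CP_USER_NAME CP_USER_HOME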

XML Parameters

default.service.xml has the following properties:

config.servicePeerConfig.authority – By supplying the address, registrationKey, username and password, the user will bypass the registration/login screen. The following tables describe the authority attributes that you can specify and their corresponding parameters.

Authority Attributes

address – the primary address and port of the server that manages the accounts and issues licenses. If you are running multiple PRO Servers, enter the address of the Master PRO Server.
secondaryAddress – (optional) the secondary address and port of the authority that manages the accounts and issues licenses. Note: This is an advanced setting. Use only if you are familiar with its use and results.
registrationKey – a valid Registration Key for an organization within your Master PRO Server. Hides the Registration Key field on the register screen if a value is given.
username – the username to use when authorizing the computer; can use the params listed below.
password – the password to use when authorizing the computer; can use the params listed below.
hideAddress – (true/false) do not prompt or allow the user to change the address (default is false).
locked – (true/false) allow the user to change the server address on the Settings > Account page (do not set if hideAddress="true").

Authority Parameters

${username} – determined from the CP_USER_NAME command-line argument, the CP_USER_NAME environment variable, or the “user.name” Java system property from the user interface once it launches.
${computername} – system computer name
${generated} – random 8 characters, typically used for the password
${uniqueId} – GUID
${deferred} – for LDAP and Auto register only! This allows clients to register without manually entering a password; the user is required to log in via the desktop UI the first time.
servicePeerConfig.listenForBackup – set to false to turn off the inbound backup listener by default.

Sample Usage
All of these samples are for larger installations where you know the address of the PRO Server and want to specify a Registration Key for your users.
Note: NONE of these schemes require you to create the user accounts on your PRO Server ahead of time.

  • Random Password: Your users will end up with a random 8-character password. In order to access their account they will have to use the Reset My Password feature OR have their password reset by an admin.
  • Fixed Password: All users will end up with the same password. This is appropriate if your users will not have access to the CrashPlan Desktop UI and the credentials will be held by an admin.
  • Deferred Password: FOR LDAP ONLY! This scheme allows the client to begin backing up, but it is not officially “logged in”. The first time the user opens the Desktop UI they will be prompted with a login screen and they will have to supply their LDAP username/password to successfully use CrashPlan to change their settings or restore data.
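
To make that concrete, the authority entry in conf/default.service.xml ends up looking roughly like the following – treat it as illustrative only, since the commented template inside the installer shows the exact element names and nesting, and the address, port, and registration key here are made up:

<authority address="proserver.example.com:4282" registrationKey="XXXX-XXXX-XXXX-XXXX" username="${username}" password="${deferred}" hideAddress="true"/>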

Changing CrashPlan PRO’s Appearance (Co-branding)

This information pertains to editing the installer for co-branding. Skip this section if you are not co-branding your CrashPlan PRO.
Co-Branding: Changing the Skin and Images Contents

You can modify any of the images that appear in the PRO Server admin console as well as those that appear in the email header. Here are the graphics you may substitute:
Custom/skin folder contents:
logo_splash.png – splash screen logo
splash.png – transparent splash background (Windows XP only)
splash_default.png – splash background, must NOT be transparent (Windows Vista, Mac, Linux, Solaris, etc.)
logo_main.png – main application logo that appears at the upper right of the desktop
window_bg.jpg – main application background
icon_app_128x128.png, icon_app_64x64.png, icon_app_32x32.png, icon_app_16x16.png – icons that appear on the desktop, customizable with a Private Label agreement only

In the Custom/skin folder, locate the image you wish to replace.
Create another image that is the same size with your logo on it.
For best results, we recommend using the same dimensions as the graphics files we’ve supplied.
Place your customized version into the Content-custom folder you created.
Make sure not to change the filename or folder structure, so that CrashPlan PRO will be able to find the file.
Co-Branding: Editing the Text Properties File

You can change the text that appears as the application name or product name in CrashPlan PRO Client. Make your changes in the txt_.properties files in the Custom/lang folder.
The txt.properties file is English and is the default language.
Each file contains the text for a language. Please refer to the Internationalization document from Sun for details (http://java.sun.com/developer/technicalArticles/J2SE/locale/).
The language is identified in the comments at the beginning of the file.
When you change the application or product name, keep in mind that using very long names could affect the flow / layout of the text in a window or message box.
Product.B42_PRO – The name of the product as it would appear on the Settings > Account page, such as CrashPlan PRO
application.name – The application name that appears in error messages, instructions, and descriptions throughout the UI.

Creating an Installer

Make the customizations that you want as part of your deployment, then follow the instructions to build a self-installing .exe file.
How It Works – Windows Installs

Test your settings by running the CrashPlan_[date].exe installer.
Make sure the installer.exe file and the Custom folder reside in the same parent folder.
Re-zip the contents of your Custom folder so you have a new customized.zip that contains:
Crashplan_[date].exe
Custom (includes the skin and conf folders)
cpinstall.ico
Turn your zip file into a self-extracting / installing file for your users.
For example, download the zip2secureexe from http://www.chilkatsoft.com/ChilkatSfx.asp
The premium version is not required; however, it does have some nice features and they certainly deserve your support if you use their utility.
Launch zip2secureexe, then:
specify the zip file: customized.zip
specify the name of the program to run after unzipping: CrashPlan_[date].exe
check the Build an EXE option to automatically unzip to a temporary directory
specify the app title: CrashPlan Installer
specify the icon file: cpinstall.ico
click Create to create your self-extracting zip file
Windows Push Installs

Review / edit cp_silent_install.bat and cp_silent_uninstall.bat.
These show how the push installation system needs to execute the Windows installer.
If your push install software requires an MSI, download the 32-bit MSI or the 64-bit MSI.
If you have made customizations, place the Custom directory that contains your customizations next to the MSI file.
To apply the customizations, run msiexec with Administrator rights:
Right-click CMD.EXE, and select Run as Administrator.
Enter msiexec /i followed by the path to the MSI file.

cp_silent_install.bat

@ECHO OFF

REM The LDAP login user name and the CrashPlan user name.
SET CP_USER_NAME=colt
Echo UserName: %CP_USER_NAME%

REM The users home directory, used in backup selection path variables.
SET CP_USER_HOME=C:\Documents and Settings\crashplan
Echo UserHome: %CP_USER_HOME%

REM Tells the installer not to run CrashPlan client interface following the installation.
SET CP_SILENT=true
Echo Silent: %CP_SILENT%

SET CP_ARGS="CP_USER_NAME=%CP_USER_NAME%&CP_USER_HOME=%CP_USER_HOME%"
Echo Arguments: %CP_ARGS%

REM You can use any of the msiexec command-line options.
ECHO Installing CrashPlan…
CrashPlanPRO_2008-09-15.exe /qn /l* install.log CP_ARGS=%CP_ARGS% CP_SILENT=%CP_SILENT%

cp_silent_uninstall.bat

@ECHO OFF

REM Tells the installer to remove ALL CrashPlan files under C:/Program Files/CrashPlan.
SET CP_REMOVE_ALL_FILES=true
ECHO CP_REMOVE_ALL_FILES=%CP_REMOVE_ALL_FILES%

ECHO Uninstalling CrashPlan…
msiexec /x {AC7EB437-982A-47C0-BC9A-E7FBD06B1ED6} /qn CP_REMOVE_ALL_FILES=%CP_REMOVE_ALL_FILES%

How It Works – Mac OS X Installer

PRO Server customers who have a lot of Mac clients often want to push out and run the installer for many clients at a time. Because we don’t offer a push installation solution, you’ll need to use other software to push-install CrashPlan, such as Apple’s ARD.
Run Install CrashPlanPRO.mpkg to test your settings:
At the command line, type open Install\ CrashPlanPRO.mpkg from /Volumes/CrashPlanPRO/, or simply launch Install CrashPlanPRO.mpkg from the Finder.
Unmount the resulting disk image and distribute it to users.
Note: If you do not want the user interface to start up after installation or you want to run the installer as root (instead of user), change the userInfo.sh file as described in next section.
Understanding the userInfo.sh File
This Mac-specific file is in the Custom-example folder inside the installer metapackage. Edit this file to set the user name and home variables if you wish to run the installer from an account other than root, such as user, and/or you wish to prevent the user interface from starting up after installation.
Be sure to read the comments inside the file.
How It Works – Linux Installer
Edit your install script as needed.
Run the install script to test your settings.
Tar/gzip the crashplan folder and share it with other users.
Custom Folder Contents
When you open the installer zip file or resource contents and view the Custom-example folder, the structure looks like this:
Contents of resource folder:
Custom (folder)
    skin (folder)
        logo_splash.png
        splash.png
        splash_default.png
        logo_main.png
        window_bg.jpg
        icon_app_128x128.png
        icon_app_64x64.png
        icon_app_32x32.png
        icon_app_16x16.png
    lang (folder)
        txt_.properties
    conf (folder)
        default.service.xml
    cpinstall.ico (Windows only; must be created using an icon editor)
    userInfo.sh (Mac only)

Customizing the PRO Server Admin Console

You can also change the appearance of the PRO Server admin console and email headers and footers.
In the ./content/Manage directory, locate the images and macros you wish to modify and copy them into ./content-custom/Manage-custom using the same sub-folder and file names as the originals. Placing them there protects your changes from being wiped during the next upgrade.
Our HTML macros are written with Apache Velocity. If your site stops working after you’ve changed a macro, delete or move the customized version to get it working again.
Location of Key PRO Server Files
These locations may change in a future release, so you will be responsible for moving your customized versions to keep your images working.
CrashPlanPRO/images/login_background.jpg
CrashPlanPRO/images/header_background.gif
CrashPlanPRO/styles/main_override.css
macros/cppStartHeader.vm ++ (see below)
macros/cppFooterDiv.vm ++ (see below)
Email images are:
content/Default/emails/images/header/proserver_banner.gif
content/Default/emails/images/header/proserver_banner_backup_report.gif
++ These files are web macros. You’ll need to update these in place instead of copying them to the custom folder. They won’t work under the custom folder. Remember that our upgrade process will overwrite your changes.

PresSTORE Article on Xsanity

Tuesday, November 16th, 2010

We have posted a short article on the availability of PresSTORE 4.1 on Xsanity at http://www.xsanity.com/article.php/20101116105720183. Enjoy!

MySQL Backup Options

Thursday, July 8th, 2010

MySQL bills itself as the world’s most popular open source database. It turns up all over, including most installations of WordPress. Packages for multiple platforms make installation easy and online resources are plentiful. Web-based admin tools like phpMyAdmin are very popular and there are many stand-alone options for managing MySQL databases as well.

When it comes to backup, though, are you prepared? Backup plug-ins for WordPress databases are fairly common, but what other techniques can be used? Scripting to the rescue!

On Unix-type systems, it’s easy to find one of the many example scripts online, customize them to your needs, then add the script to a nightly cron job (or launchd on Mac OS X systems). Most of these scripts use the mysqldump command to create a text file that contains the structure and data from your database. More advanced scripts can loop through multiple databases on the same server, compress the output and email you copies.
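
For instance, a crontab entry along these lines (the script path is a placeholder) would run such a script nightly at 2:30 AM:

30 2 * * * /usr/local/bin/mysql_backup.sh >> /var/log/mysql_backup.log 2>&1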

Here is an example we found online a long time ago and modified (thanks to the unknown author):


#!/bin/sh

# List all of the MySQL databases that you want to backup in here,
# each separated by a space
databases="database1 database2 database3"

# Directory where you want the backup files to be placed
backupdir=/mydatabasebackups

# MySQL dump command, use the full path name here
mysqldumpcmd=/usr/local/mysql/bin/mysqldump

# MySQL Username and password
userpassword=" --user=myusername --password=mypassword"

# MySQL dump options
dumpoptions=" --quick --add-drop-table --add-locks --extended-insert --lock-tables"

# Unix Commands
gzip=/usr/bin/gzip
uuencode=/usr/bin/uuencode

# Create our backup directory if not already there
mkdir -p ${backupdir}
if [ ! -d ${backupdir} ]
then
echo "Not a directory: ${backupdir}"
exit 1
fi

# Dump all of our databases
echo "Dumping MySQL Databases"
for database in $databases
do
$mysqldumpcmd $userpassword $dumpoptions $database > ${backupdir}/${database}.sql
done

# Compress all of our backup files
echo "Compressing Dump Files"
for database in $databases
do
rm -f ${backupdir}/${database}.sql.gz
$gzip ${backupdir}/${database}.sql
done

# And we're done
ls -l ${backupdir}
echo "Dump Complete!"
exit

Once you verify that your backup script is giving you valid backup files, these should be added to your other backup routines, such as CrashPlan, Mozy, Retrospect, Time Machine, Backup Exec, PresSTORE, etc. It never hurts to have too many copies of your critical data files.

To make sure your organization is prepared, contact your 318 account manager today, or email sales@318.com for assistance.

ARCHIWARE PresSTORE 4 Released

Wednesday, June 30th, 2010

Last week, German software company ARCHIWARE released version 4.0 of its enterprise backup solution, PresSTORE. This version is for new installations only – version 4.1, planned for release in October, will support upgrades from existing 3.x deployments.

The new features of PresSTORE 4 can be found on the company’s website, but here are some highlights:

  • New interface to simplify management
  • iPhone app for remote monitoring of jobs
  • New desktop notification system to alert users of actions
  • Progressive backup – “backup without full backup”

Aaron Freimark also wrote a post about the new version on the Xsanity site that talks more about the Xsan-specific features.

As before, PresSTORE is supported on Mac OS X (10.4 and higher), Windows (2003, 2008, XP, Vista and 7), Linux and Solaris. Backup2Go Server is only supported on OS X and Solaris.

PresSTORE support is great – during testing of the new version, the iPhone monitoring app was crashing. Within a day, a new version was available in the App Store that addressed the exact issue. Bravo!

To learn more about PresSTORE (including pricing options), please contact your 318 account manager today, or email sales@318.com for more information.

Uninstalling Retrospect 6.3 Clients and Changing Passwords

Wednesday, May 12th, 2010

Open the Retrospect Client and turn it off. Then close it and delete the /Library/Preferences/retroclient.state file. Now you have two options: to completely uninstall, just trash the app from the Applications folder; or, if you just needed to reset the password, rerun the installer and it will prompt you for a new password.
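
In shell terms, the manual steps above amount to roughly the following (the client app’s exact name varies by version, so treat that path as an assumption):

sudo rm /Library/Preferences/retroclient.state
sudo rm -rf "/Applications/Retrospect Client.app"   # only if fully uninstalling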

Evaluating Backup Exec Jobs

Tuesday, April 13th, 2010

[ ] Assess the Job Setup tab and review its listing to determine which jobs are currently configured on the system.
[ ] Review the selection list to ensure that all relevant data and file shares are being backed up and copied
[ ] Assess the Job Monitor tab to confirm that the jobs that are setup and configured are actually running as scheduled.
[ ] Review the job logs (Job History) to ensure that all data is being backed up or if there are minor errors, note what caused those errors to correct later.
[ ] Ensure that the job did not fail due to lack of space (or other chronic issues); if it did, then most likely the client needs larger storage, or we must set media and jobs to allow for overwriting of data.

Backup Agents are needed for special data such as SQL and Exchange databases, or files located on remote computers. Many open files will not back up unless the Open File Agent is present, installed, and licensed on the data source.

Media Sets (under the Media tab) are collections of backup media that share common properties. In Backup Exec, media sets can be adjusted under their properties to allow overwrites and appends either indefinitely or after a certain period of time. This lets you control how media is handled when space begins to come into play. Verify these settings to ensure proper retention.

[ ] Review the Alerts tab and check under Active Alerts sub-tab and ensure that no jobs have been waiting on media or needed human interaction or response.
[ ] Review the Alert History sub-tab and verify that no jobs in the past were waiting for interaction or response.
[ ] Check backup notifications under each job and under the default preferences (Tools >> Recipients… & Tools >> Email and Pager Notification…), to ensure that the proper individuals are being notified about backups and alert items.
[ ] Review the Devices tab and verify that there are no devices/destination that are Offline.
[ ] Ensure that any device currently listed as a backup destination (unless it is a member of a device pool) is online. If the device is a member of a device pool and the backup job references that pool, then jobs will continue once at least one of the pool’s devices is online.

Typically, backup jobs will have tape, local, or network storage as destinations. Most likely, an external backup device will fall under the tree as a Backup-to-Disk Folder. If the drive/device is not connected it may show up as Offline. If you are sure the device is connected, right-click on the entry and ensure the device is both confirmed as Online and Enabled.

To learn more about Backup Exec – here are some additional links:
Symantec Backup Exec website

http://www.symantec.com/business/products/family.jsp?familyid=backupexec

Datasheets on usage of Backup Exec 2010 (Applications, Features, Agents)

http://www.symantec.com/business/products/datasheets.jsp?pcid=pcat_business_cont&pvid=57_1

Wikipedia on the architecture and history of Backup Exec

http://en.wikipedia.org/wiki/Backup_Exec

Checking Backup Jobs in Atempo’s Time Navigator

Wednesday, March 31st, 2010

Time Navigator is a powerful enterprise-level backup software suite. It is also one of the most complex backup suites you can manage.

In order for an ADB to be successful you need to check the following:

  • Whether the scheduled backups were successful or not.
  • If they were unsuccessful, is intervention required?
  • Check the available storage for future backups
  • Did test restore succeed or not
  • Are Critical files backed up?
  • General log review

Section 1. Check whether the scheduled backup was successful

To begin you need to know the username and password for the local user on the host computer, which needs to have admin rights, as well as the username and password for the tina catalog.

Step 1: Open the Time Navigator Administrative Console.
On Mac it is /Applications/Atempo/tina/Administrative Console
On Windows c:\Program Files\Tina\Administrative Console

When the Administrative Console starts it will initiate a connection to the Time Navigator Catalog indicated in the config files.

It will prompt you for a username and password. Once you enter the proper username and password you will gain access to the Administrative Console.

This program interface is the main access point to the various programs that let you control Time Navigator.

Step 2. Choose the “Monitor” menu and select “Job Manager”; this will open the Time Navigator Job Manager. The initial view will show all active jobs. Go to the View menu and choose “Historic”; this will show past jobs.

From here you will be able to review the recent backup jobs to find out whether they were successful or not.

Section 2. If the backups were unsuccessful do they require intervention?

Whether intervention is required is largely determined by the reasons for a backup failure.

From within the Job Monitor you can select a job from the historic menu and double click it to access the job detail window.

From this window you will have access to several tabs. The tab of interest here will be the one called “Events”. This is a filtered view of the logs so it shows only the log entries that are connected to this job number.

Making the determination of whether intervention is warranted requires some knowledge of the errors you find. To that end, the errors are color-coded. Yellow errors are considered minor and can likely be overlooked if they are the only errors present, while orange and red errors are higher priority and warrant the attention of a tech trained in Time Navigator.

Section 3. Check the available storage capacity for future backup executions.

Time Navigator treats all forms of storage as a tape library. Your backup destination will be either a Virtual Tape Library, in the case of backing up to hard drives, or a specific physical tape library.

This means that we will need to view the Library Manager application.

Start with the Admin Console. Choose the host to which the library is attached (all libraries are attached to a host). Select the host icon with the mouse and choose the “Devices” menu; from there choose “Library”, then “Operations”, then “Management”.

This will spawn the Library Manager application. You will be presented with a dialogue containing a list of available Libraries.

Once chosen, you will get a window that shows the number of drives (virtual or real) and the tape cartridges in their slots (also virtual or real). From this display you will be able to determine which tapes have been used and which are free for use.
If a cartridge has been used, it will be labeled for the tape pool to which it belongs. If it is free for use, it will be labeled either SPARE or ?????, or, in rare cases, Lost & Found. Lost & Found cartridges should be reported to the administrator.

A comprehensive determination of how much space is left would take some math: knowing how much each tape represents, how much data is backed up nightly, and so on.

A quick version to keep in mind is percentages: if fewer than 10% of cartridges are free, it might be worthwhile to notify the administrator. It will take some experience to tell whether this is a problem or not, as some tapes can hold hundreds of gigs and two tapes might take months to fill.

Section 4. Test Restore. Success or Failure.

This section implies that you will attempt a restoration of some files.
File restoration with Time Navigator is both its most powerful feature and its most complex in comparison to other backup software.

First, a word about the process. While the Administrative Console and associated applications can be run on any computer that participates in the Time Navigator backup system, the Restore and Archive Manager application will attempt to make a connection to the host from which files were backed up. This means you will need credentials for that host which allow read/write access to the directories that were backed up. To this end, it is often simpler to open the Administrative Console on the host in question before you open the Restore and Archive Manager application.

To restore files from the backup of a host you will need to select the host from the Administrative Console. From the “Platform” menu choose “Restore and Archive Manager”. You will then be challenged for a username and password for the host in question.

Once you have entered legitimate credentials for the host, you will be presented with the window for the Restore and Archive Manager. It will show the host name and the username by which you are connecting. It will also show you the complete file system on this host in expandable trees, each element with a checkbox beside it.

Furthermore, this view shows you the file system in the present, and has the capacity to show the file system as it was at some point in the past.

This element is where the program gets its name. The “Time Navigator” allows you to navigate through time to look at the file system and select files for restoration.

The idea here is that you know what time period you are looking for. You select the date beside the “past” radio button and it will then show you what files are available for that time period.

The second feature shown on this interface is the ability to isolate files that have been deleted, meaning you can adjust the view to show files that were present in the past but are not present now, spanning back an arbitrary amount of time as determined by the form element for days, weeks, months, etc.

While this is very useful, it will not filter out non-deleted files, meaning you have to know what directory you want to look in before this becomes useful.

A third, and in my opinion the most useful, method of restoring files is called versioning.
If you right-click (Control-click) on a file that has been backed up, you will be presented with a contextual menu containing the word “versions”.

Once selected it will open a dialogue window with every version of the file that is currently within the backup catalog.

Once you have selected a file from that list, you will need to select the “synchronize” button at the bottom of this versions dialogue. This will set the past date and time marker to the point in time when this file was backed up. You can then check mark the file to be restored.

Finally, you can search the catalog for files from this host. While within the Restore and Archive Manager, choose the “Backup” menu and choose Find.

You will be presented with the search interface, with the current host already selected as the search base. From here you can search by pathname and filename, and specify how far back in time to search and how many results to show.

The search forms will accept wildcards for more creative searching. Once a file is located in the results window you will need to select the “synchronize” button at the bottom in a manner similar to the versions window mentioned above.

RESTORING.

All of the above techniques are methods of locating the files you wish to restore and putting check marks beside them. Now it is time to restore them.

Once you have all the files you wish to restore check marked, we can proceed.
We will accomplish this with the “Restore” menu item. If there is any question as to what you have selected for restore, there is an option here to “view checked objects”; this will filter the view to show only objects that have been check marked for restoration.

Next, we can choose to test the restore or run the restore. If there is any question as to whether the media for a file is available, you should run it as a test first.

When you select test, you will be greeted with a warning dialogue saying that this operation will perform all operations except for the writing of data itself. This means drives and/or tape cartridges will be engaged and network throughput will be used.

After you agree, the restore dialogue will show. You will have two tabs to choose from, the first of which is labeled “Parameters”.

From here you can choose whether to restore the files to their original locations or to a new location on the same file system (restoring to another host is possible, but it is not covered here).

Now you must choose what level of restore you wish. Here you are presented with several radio buttons that allow you to choose whether to restore data with or without directory and object information. This may seem like splitting hairs, but in some environments it is nice that your backup system can restore the user permissions for objects in your directory tree instead of just restoring everything.

The checkbox for “restore all file versions” will restore everything in the “versions” list discussed above. It’s not used very often.

Now to the second tab, “Behavior”. The first selection to be made here is what behavior to choose should there already be a file with the same file name at the destination path.

You will see options to restore the file and overwrite, to rename either the existing file or the restored file, or to not restore at all if certain conditions are met.

Keep this in mind: if you need to restore a large number of files and you don’t know whether you should overwrite existing files, you should restore to a neutral location and review the results by hand.

Next comes what to do if an error occurs while restoring files: skip, cancel, or ask the user? This selection is especially important if you are not monitoring the process: if you choose skip, you will need to review the logs afterwards; if you choose cancel, you could come back to very little data being restored.

Finally, there is the section “if required cartridges are off-line”;
you run into this if you are dealing with physical tapes that are no longer within the library.

Issue Operator Requests for each missing cartridge – which means the software will bug you each time a tape is missing.
Ignore files indicated on those cartridges – self-explanatory.
Cancel.
Display offline cartridge list – this is the one I have learned to check. It will check the availability of the tapes within the current library listing, which means that if you put new tapes in, you have to scan the bar codes before this list updates. This method avoids a lot of headaches and is my recommendation if you are dealing with physical tape.

Finally, you get to press Restore, where you will be presented with the dialogue for the restore process. You will see the progress bar, the path of files being restored, and the option to monitor restore events.

If after all of this you have problems restoring you should contact a Time Navigator Admin.

Section 5. Did critical files back up?

At first glance this is similar to “did backups succeed”. You can back up the system state for Windows servers, which counts as critical files, but you should also check to see if the catalog for Time Navigator is being backed up. In the Administrative Console there is a host icon called CATALOG. It is very important that this gets backed up nightly; if this file becomes corrupt or non-functional, the entire backup is effectively lost, and even a good Time Navigator tech can spend a huge amount of time pulling data back off the tapes.

Section 6. General Logs Review

This section covers looking for things that look weird. From the Administrative Console, choose “Events” from the “Monitor” menu.

This will open the event monitor; if you see errors like “Environment error” or “Catalog error”, they need to be reported.

Using kmsrecover to Restore Kerio Backups

Monday, September 28th, 2009

Using KMSrecover to restore a mailserver/user
Using this command will overwrite the existing config and modify the message store, which is why you need another machine for this, with adequate HD space.

[ ] Install KMS locally on your computer (skip wizard)
[ ] Rename your laptop’s volume to the same name as the volume where the KMS store lives
(e.g. Mail Server HD, Server HD, or Macintosh HD)
[ ] Copy the KMS backups to an external drive and plug it into the laptop.
[ ] Navigate to mail server path in terminal or DOS.
Mac: /usr/local/kerio/mailserver
PC: C:\Program Files\KerioMailServer
[ ] Start the recovery
Mac:
For a full recovery, point to the backup location:
./kmsrecover /Volumes/backup
For a specific recovery, use the filename:
./kmsrecover /Volumes/backup/C200401z.zip
PC:
For a full recovery, point to the backup location:
kmsrecover E:\backup
For a specific recovery, use the filename:
kmsrecover E:\backup\C200401z.zip
Warning: If a parameter contains a space in a directory name, it must be enclosed in quotes, e.g. kmsrecover “E:\backup 2”

Retrospect 8.0.733

Tuesday, May 12th, 2009

Retrospect 8.0.733 is now out and available for download. If you are using version 8 and experiencing problems, you should update, as it fixes a number of bugs. Bugs fixed in the Retrospect 8.0.733 release:
18925: Keep backup sets and scripts associated when catalog rebuild is necessary
20075: General UI Feedback: Okay/Apply
20131: Able to enter text in fields that should only accept numbers
20146: Log Limit doesn’t verify for valid value range
20156: Prefs >Media > media request timeout should check for valid values
20229: Scripts Icon backwards in details view when no script is selected
20258: Copy assistant should not allow you to select same volume for source and destination
20276: “More Backups…” is disabled in Restore Assistant
20332: Restore Assistant: script starts when you select ‘Save’
20343: Error backing up Win XP client – error -3043 (the maximum number of Snapshots has been reached)
20373: Sources icons display as usb removable drives
20437: Past Backup lists wrong date
20475: Disclosure triangles in volumes and scripts
20504: Remove all local volumes: Need to restart Engine to repopulate
20528: Servers displaying in the Sources list
20538: Improve column sizes and layout
20555: Verify Script: Options lists backup sets
20585: “Pause Server” should change to “Unpause” or “Resume”
20598: File Media Sets: remove option to change ‘Fast Catalog Rebuild’
20604: Volume Type not correct
20634: Script Schedule > refresh > auto deletes schedules
20640: Creating a new schedule item does not select the new item
20719: Console: DAG memory leaks
20729: Possible Small Memory leak in Engine when [Backupset EditWithPassword]
20735: New Backup Script: using Tag from previous script
20849: Creating a New Media Set does not accept some characters
20896: “Please update your server” dialog should be more informative
20919: Media Sets: Tape not display Used/Free/Capacity
20945: ScriptProperties::TransferMode seems to have incorrect values
20953: Need to be able to defer scheduled activities
20971: Use Small Icons setting lost after closing UI
21015: Sources: Clients duplicate in the Multicast list
21039: License Manager UI Issues
21087: Starting activity negates activity scope buttons
21124: Desktop: no license challenge when adding a 3rd client
21174: Smart Tag UI problem
21302: Disk Media Sets: when only one member – remove should be disabled
21382: Dev: ArcDiskInfo/ArcDiskFileInfo’s persistent logic is wrong, blocking ppc feature
21463: Need a way to change console’s server password on existing server
21487: Sessions and Snapshots get into state with different volume names
21510: Search for files restore not working across multiple Media Sets
21544: Launch engine at startup authentication broken
21552: Sources: Erase a local drive the disk used / total not updated
21562: Restore Files: Assistant – Search for files in selected Media sets
21590: Need to store extdFlags EXTD_HASACL and EXTD_HASMETA in trees
21603: File Media Set: during backup .rbf.rfc file displays as unix executable
21618: Unable to successfully restore IIS on W2K3 Server
21625: Rules not updating correctly
21628: Unable to add multiple device members
21644: Cannot change member location in Edit Member, throws error
21663: Bad value for Compression field in Activities
21712: Assert during first backup
21737: Crash with DLT1 drive
21740: Media creation time is wrong
21746: Crash trying to add NAS device
21752: Crash copying library directory
21755: module.cpp-825 assert
21764: Console crash while backing up NAS (tag-related)
21775: wrong password adding clients
21782: Restore Assistant: Assert at module.cpp-845
21783: Sources: Local Volumes displaying multiple times
21785: Restore Assistant: When Clients volumes selected unable to ‘Continue’
21791: U Mich. assert
21797: Klingon server assert during client backup
21800: RefBackupset::Search needs Progress object
21803: Error -703 unknown when trying to access a Media Set
21804: Firewire Lacie D2 AIT not responding
21812: Engine crash with invalid object
21813: Incorrect free disk space displayed
21815: Can’t stop engine on 10.4.11
21822: Search for files – manual selection is ignored
21824: Wrong Client Errors being displayed
21825: Client Test button missing
21826: Client connection strangeness
21830: Rules UI different in different parts of yeti
21837: Source’s ‘Last Backup Date’ field doesn’t roll up
21838: assert while trying to rebuild a disk media set
21846: Improve how compression data is displayed
21849: Editing script with many sources not easy
21852: Crash proactive backup to tape library
21856: Console crash with 8.0.608 (tag-related)
21858: Restore Assistant: Selected Media selector set jumps to top of list
21863: Restore Assistant: Restore files from which backup – no date displaying
21864: Restore Assistant: Preview for multiple media sets – only displaying files from first
21866: Assert during local restore: restore drive out of space
21868: better errors needed when license is required
21876: Assert: tree.cpp-3095
21877: Smart Tags not working with Clients set to Startup volume
21878: Assert: module.cpp-825 and others when adding clients
21879: Can’t erase 6.1 VXA-320 media
21881: Hang with 2 proactive backups running
21901: Selecting tape in slot during add member tries to add tape in drive first
21902: Grow the UI elements for all non-English language XIBs
21908: Can’t create a Size rule with more then 3 numbers
21911: Restore Assistant: Not restoring correct files (search restore restores too many files)
21915: Rule: Rules using ‘is not’ switches back to ‘is’
21916: Rules: unable to use Rule ‘Volume drive letter is’
21917: Rules: Files system is Mac OS switches to Windows
21922: Rules: unable to use ‘Date accessed’ rule
21924: Add Media Set: changes to catalog path in text field are ignored
21925: Add Media Set: Browse window should be a sheet
21926: Client browse cause engine crash: module.cpp-845
21934: Assert module.cpp-825 adding tape members
21939: Assert: tmemory.cpp-275 and Crash Reporter logs
21945: Restore Assistant: Unable to use ‘Search Media Set’
21960: VXA-320 FireWire loader issues including assert at intldrdev.cpp-4483
21961: Sources: Last Backup Date – local dmg files
21969: Find Files doesn’t always find the right media sets
21973: Sources: cannot remove local favorite folders
22002: Restore Assistant: issue with preview
22005: Restore: crash when accessing backup with a yellow icon
22006: Restore Assistant: FindFiles with mutiple found sets but not all checked doesn’t run
22013: Copy Backup: MD5 check some error
22024: Unable to change rules condition
22046: Script > Schedule > Text cutoff “F” for friday
22056: Restore Assistant: Restore files – Where do you want to restore: allows multiple selections

Using Symantec’s Backup Exec With External Hard Drives

Tuesday, May 5th, 2009

This assumes that you’ve already installed Backup Exec and licensed it appropriately.
It also assumes that all parties understand the expected backup retention policies.

Preparing Backup Drives
1. Unpack Backup Drives
2. Plug both of them in
3. Note the drive letter assigned to them (this drive letter will now be forever associated with that drive).
4. Ensure the drive is formatted with NTFS; if not, back up the info on the hard drive, format it, and label it appropriately.
NOTE: You want to back up the info on the new external drive because oftentimes there will be utilities on there that are not present on the CD that the drive came with, or available from the manufacturer’s website.

Preparing Devices
1. Open Backup Exec
2. Navigate to Devices
3. Right mouse click on Removable Backup-to-Disk Folders
4. Select Backup-to-Disk Wizard
5. Click Next
6. Select Create a new backup-to-disk folder
7. Select Removable backup-to-disk folder
8. Name it (remember the name)
9. Select a path (this is just the drive name [ex. F:])
10. Follow the rest of the steps
NOTE: You will need to do this for each drive.

Preparing Media
NOTE: This is a critical step. If you don’t do this, chances are that the media you’re writing to will not allow you to overwrite it, even if you told it to do so in your Job properties. As a general rule, remember that device properties trump job properties.
1. Go to the Media tab, Right mouse click on Media Set
2. Select New Media Set
3. Give it a name (remember the name)
4. Ensure that “Overwrite protection period” is set to: Infinite – Don’t Allow Overwrite
NOTE: This is, in my opinion, bad grammar that’s been carried along from version to version. What this setting does is DISABLE overwrite protection. This means that there is no overwrite protection – i.e., you can write over the drive as many times as you please.
5. For “Append Period”, ensure that it is set to “Infinite – Allow Append”. Backup Exec interprets this as “I will allow you to append as many times as you please because there is no period to stop appending”.
6. Set Vault rules to None

Creating a Job
1. Go to the Job Setup tab
2. On the left pane, under the Backup Tasks window, select “New job using wizard”
3. Select “Create a backup job with custom settings”
4. Select the resources you would like to back up
5. Test the logon account
6. Select the order of backup
7. Name the backup, and the backup set
8. Choose the device you’d like to back up the data to (the All Devices pool).
NOTE: You will in most cases want to select “all devices”. This tells Backup Exec to go to all devices and then select the one that’s available to back up to. If you have a tape drive that’s been deprecated, disable the tape drive under “Devices”, but still point the job to all devices. It will then back up to the drive that’s plugged in. This allows for external drive rotation with the least amount of user intervention. If you have more than one “online” device, create a new device pool under “Devices” and add your two backup-to-disk folders within that new pool.
9. Select the media set you’d like to back up the data to (the new media set you created).
10. For Backup Overwrite Method, select “Append to media, overwrite if no appendable media is available”. This will append to the drives for as long as your media settings allow, and if there’s no room, it will overwrite.
11. Choose your backup options. Depending on the time it takes to back up, you will want to adjust this. With the size of external hard drives nowadays, I don’t see any reason why you’d want to stray from full backups. If the backups are under 100GB and you have 1TB drives, go ahead and choose full backups (at USB 2.0 speeds or greater this will most likely only take about 4-5 hours). This will make restores easier in an offsite rotation scenario, simplify managing jobs in the long run, and give you ~8 days worth of backups.
12. Always set the job to verify backups
13. Schedule the job to run later
14. For the schedule, you would usually want to choose Recurring Week Days, and select the days you want it to backup per your conversation with the client.
15. For the Time Window, select what time you’d like the backup to start.

Adjusting Alerts
1. Go to Tools > Alert Categories
2. For “Media Insert” and “Media Overwrite”, ensure that you select “Automatically clear alert after” 2 minutes (or whatever you want), and respond with “Yes”
NOTE: IMPORTANT: If you don’t do this, Backup Exec will literally wait FOREVER for someone to manually acknowledge the alert by clicking Yes, No, or Cancel. It will always pop an alert because it’s hitting a pool to search for available media. By responding with Yes, it will begin to overwrite and/or use the device and media that you have selected for the job.

Testing Job
1. Unplug one of the drives
2. Manually Run the Job
3. Verify that the job has run successfully; note any problems you ran into, and correct or document as necessary
4. Run the job AGAIN on the same drive. Ensure that it runs and appends to the drive. This proves that the drive can be written to and is not “locked” due to an incorrect setting on the job or media.
5. Unplug the tested drive
6. Run steps 2-4 on the other drive to ensure that everything is OK.
7. Run a test restore
8. You can now leave one of the drives onsite, and take another with you or leave it with the client. You can now assure the client that they now have good backups (one onsite, and one that’s going offsite), and that you’ve thoroughly tested the backups and also performed a test restore.

Wrap up
1. Note any false positives in notes for the client (for backup troubleshooting in the future)
2. Update the Backup section for the client in notes.
3. Even if there was no BEV, send a BEV out saying that they now have a backup system in place.

Troubleshooting File Replication Pro

Saturday, May 2nd, 2009

Check the Web GUI.

To check the logs via the Web GUI:
- On the server, open Safari, go to http://localhost:9100, and authenticate
- Go to Scheduled Jobs and view the logs for the 2 Way Replication Job

You can also tail the logs on the server. They are in /Applications/FileReplicationPro/logs and, among the various logs in that location, the most useful is the synchronization log.

Many times the logs show that the servers’ clocks have drifted too far apart: the date and time are not correct. Each server has a script you can run to resync the time. To run this script,
open Terminal on both the first and second servers and run:
sudo /scripts/updatetime.sh

You should see output in the Terminal window and in the Console indicating that the time and date are now in sync with the time server.

To Stop and Restart the Replication Service

Open Terminal and run the following commands with sudo:
SystemStarter stop FRPRep
SystemStarter stop FRPHelp
SystemStarter stop FRPMgmt
Once the services are stopped, start them up again in the following order:
SystemStarter start FRPRep
SystemStarter start FRPHelp
SystemStarter start FRPMgmt

You should also restart the second (or tertiary) client:
Open Terminal and run the following commands with sudo:
SystemStarter stop FRPRep
Wait for the service to stop, and then start it again with this command:
SystemStarter start FRPRep

Recovering FileMaker and FileMaker Server Databases

Tuesday, April 21st, 2009

INTRODUCTION

The most common thing that happens to FileMaker databases is file corruption. In this case, the local or server files will not be accessible, and customers will report issues.

Normally, one specific file is down and inoperable in FileMaker or FileMaker Server, but sometimes multiple files are affected. You will either have to grab the affected items from a recent backup or otherwise recover the files.

FILE RECOVERY

If you have to recover files, you will need FileMaker Pro. If you are recovering .fp5 files (FileMaker 5 databases), either version 5 or 6 would be appropriate. If the files are .fp7 files (FileMaker 7 databases), then versions 7, 8, 9 and 10 will work. Open FileMaker, choose menu command “File, Recover”, and select the damaged database file. FileMaker will save a recovered copy.

**Important** For .fp5 (FileMaker 5) files, after recovering, each file’s shared hosting status might revert to Single User Mode. To fix this, open the file in FileMaker Pro 5 or 6, go to “File, Sharing” and set the file to either Multi User or Multi User (Hidden), depending on whether or not you want it to be selectable in FileMaker Server. (If you do not have a version of FileMaker Pro 5 or 6 to work with, most likely a 318 developer will.)

Using LCR for Exchange 2007 Disaster Recovery

Thursday, April 16th, 2009

Local Continuous Replication (LCR) is a high availability feature built into Exchange Server 2007.  LCR allows admins to create and maintain a replica of a storage group to a SAN or DAS volume.  This can be anything from a NetApp to an inexpensive jump drive or even a removable sled. In Exchange 2007, log file sizes have been increased, and those logs are copied to the LCR location (known as log shipping) and then used to “replay” data into the replica database (aka change propagation).

LCR can be used to reduce the recovery time in disaster recovery scenarios for the whole database: instead of restoring a database, you can simply mount the replica.  However, this is not to be used for day-to-day mailbox recovery, message restores, etc.  It’s there to end those horrific eseutil /rebuild and eseutil /defrag scenarios.  Given the sizes that Exchange environments are able to reach in Exchange 2003 R2 and Exchange 2007, this alone is worth the drive space used.

Like with many other things in Windows, LCR can be configured using a wizard.  The Local Continuous Backup wizard (I know, it should be the LCR wizard) can be accessed using the Exchange Management Console.  From here, browse to the storage group you would like to replicate and then click on the Enable Local Continuous Backup button.  The wizard will then ask you for the path to back up to and allow you to set a schedule.  Once done, the changes will replicate, but the initial copy will not.  This is known as seeding and will require a little PowerShell to get going.  Using the name of the Storage Group (in this example “First Storage Group”) you will stop LCR, manually update the seed, then start it again, commands respectively being:

Suspend-StorageGroupCopy -Identity "First Storage Group"

Update-StorageGroupCopy -Identity "First Storage Group"

Resume-StorageGroupCopy -Identity "First Storage Group"

Now that your database is seeded, click on the Storage Group in the Exchange Management Console and you should see Healthy listed in the Copy Status column for the database you’re using LCR with.  Loop through this process with all of your databases and you’ll have a nice disaster recovery option to use next time you would have instead done a time consuming defrag of the database.
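If you would rather confirm this from the Exchange Management Shell instead of the console, a one-liner along these lines should report the copy status (a sketch, reusing the example storage group name from above):

Get-StorageGroupCopyStatus -Identity "First Storage Group"

Look for Healthy in the output before you rely on the replica.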

Restoring Data From Rackspace

Wednesday, April 1st, 2009

Rackspace provides a managed backup solution. Backups are available going back up to one month: two weeks of backups are kept on their premises, and the previous two weeks are stored offsite. If the files to restore fall within the offsite period, the restore will take longer, as Rackspace will have to move the tapes from the offsite location back onsite before starting the restore process.

Restores can either be performed from Rackspace’s Web Portal or a support phone call.

Calling Rackspace
1-800-961-4454
Supply Account Name and Password
State that you want to restore files, and whether it is a Windows or Linux computer
Give the backup operator the file path and the date to restore from
A ticket will be created and updated with the restore process. The ticket will be updated when the restore is complete, and will include the directory of the restored data.

File Replication Pro Story About 318

Wednesday, March 25th, 2009

The File Replication Pro folks have published a customer success story outlining some of the ways we’re using their product. Check it out and if you have any questions about what we’re doing with it feel free to drop us a line!

File Replication

Thursday, February 19th, 2009

Performing replication between physical locations is always an interesting task. Perhaps you’re only using your second location for a hot/cold site, or maybe it’s a full-blown branch office. In many cases, file replication can be achieved with no scripting, using off-the-shelf products such as Retrospect or even Carbon Copy Cloner. Other times, the needs are more granular and you may choose to script a solution, as is often done using rsync.

However, a number of customers have found these solutions to leave something to be desired. Enter File Replication Pro. File Replication Pro allows administrators to replicate data between two locations in a variety of fashions and across a variety of operating systems in a highly configurable manner. Furthermore, File Replication Pro provides delta synchronization rather than full file copies, which means that you’re only pushing changes to files and not the full file over your replication medium, greatly reducing required bandwidth. File Replication Pro is also multi-platform (built on Java), allowing administrators to synchronize Sun, Windows, Mac OS X, etc.

If you struggle with File Replication issues, then we can help. Whatever the medium may be, give us a call and we can help you to determine the best solution for your needs!

Shared Memory Settings Explained

Friday, February 6th, 2009

Shared memory is a method of inter-process communication (IPC), where two processes communicate with each other through shared blocks of RAM. Because communication is resident in RAM, shared memory allows for very fast communication between processes. There are significant drawbacks to shared memory; one obvious limitation is that all communicating processes must exist on the same box. Additional complexities with the implementation of shared memory mean that it is typically relegated to lower-level, performance-oriented systems, such as databases or backup systems.

In OS X, these settings MUST be tweaked if you are expecting to back up significant amounts of data with any semblance of speed or stability. I can confirm that both TiNa and NetVault use shared memory for IPC. Other products such as Retrospect or PresStore utilize other IPC methods, such as named pipes.

kern.sysv.shmall
shmall represents the maximum number of pages that can be provisioned for shared memory. It determines the total amount of shared memory that the system can allocate. To determine total system shared memory, multiply this value by the page size. The page size can be determined via `vm_stat` or `getconf PAGE_SIZE`. A typical page size is 4KB, 4096 bytes.
In OS X, Apple uses extremely conservative settings for shmall. At 1024, OS X defaults to only 4MB of shared memory.
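As a worked example, the 512MB suggestion below is just 131072 pages multiplied by a 4096-byte page size: 131072 x 4096 = 536,870,912 bytes, the same figure used for shmmax.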

kern.sysv.shmseg
shmseg represents the maximum number of shared memory segments each process can attach. Default in OS X is 8.

kern.sysv.shmmni
shmmni limits the number of shared memory segments across the system, representing the total number of shared memory segments. Default in OS X is 32.

kern.sysv.shmmin
shmmin is the minimum size of a shared memory segment; this should pretty much never need modification. Default is 1.

kern.sysv.shmmax
shmmax is the maximum size of a segment. Default in OS X is 4 MB, 4194304.

Suggested Settings:

512MB of shared memory
kern.sysv.shmall: 131072
kern.sysv.shmseg: 32
kern.sysv.shmmni: 128
kern.sysv.shmmin: 1
kern.sysv.shmmax: 536870912

1GB Shared memory
kern.sysv.shmall: 262144
kern.sysv.shmseg: 32
kern.sysv.shmmni: 128
kern.sysv.shmmin: 1
kern.sysv.shmmax: 1073741824
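
To apply a set of values like these, something along the following lines should work (shown here with the 1GB figures; this is a sketch, and on some OS X releases these keys can only be changed once per boot, so putting the same key=value pairs in /etc/sysctl.conf and rebooting is the more dependable route):

sudo sysctl -w kern.sysv.shmmax=1073741824
sudo sysctl -w kern.sysv.shmmin=1
sudo sysctl -w kern.sysv.shmmni=128
sudo sysctl -w kern.sysv.shmseg=32
sudo sysctl -w kern.sysv.shmall=262144

You can check the current values at any time with sysctl -a | grep kern.sysv.shm.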

Using Selectors With Retrospect

Wednesday, February 4th, 2009

Retrospect has a filtering system based on selectors. This document will review the specifics of developing these selectors.

Each script created in Retrospect has the option to filter the file selection. This can be accomplished with a pre-existing selector or with a custom filter created for just that script.

Here we will be creating a new selector.

In Retrospect 6.1 for Mac, navigate to the Special tab of the primary Retrospect window.
- Press the Selectors button
A window will pop open with all of the existing selectors. You can choose to create a new one or edit existing ones.
- Press New

You will be prompted to name the selector; any name will do.
You will then be presented with simple Include and Exclude selector sections.

Include: By default this is empty, which means include everything in the source. By adding conditions to this section you will exclude everything BUT the selections you are choosing. This is often used with source groups to limit the backup directories to /Users.

Exclude: After the Include rules populate the file list to be backed up, the exclude list applies to remove files that are indicated by the logic.
This is used to exclude files that are not important or would eat up too much space in the backup; music files and cache files are common examples.

Logic: The filtering mechanism gives you the ability to select or exclude files based on the following criteria:

- Date
- File Kind (HFS file types)
- Flags (HFS file flags)
- Labels (HFS label colors)
- Backup Client Name (as found in the client list of Retrospect)
- File / Folder Name
- Sharing Owner Name
- Volume Name
- Pre-existing selector in Retrospect
- Size of File or Folder
- Special Folders (Mac OS reserved folders)
- UNIX (file permissions or special files such as symbolic links or pipes)

You will see that there are quite a number of Mac-specific selectors here and no Windows-specific selectors; Retrospect 6.1 for Mac is very one-sided. Using these selectors you can create inclusions and exclusions with logic to refine your backup or restore policy. Once you have the selector set up the way you would like, you can save it and then choose this new selector in your backup scripts.

Retrospect 7.6 for Windows: Versions 7 through 7.6 are Windows-only, and we will cover the most recent version, 7.6 for Windows.

The interface for the Windows version of Retrospect is different in that the locations of the buttons are different, but the names are generally the same.
Instead of having tabs across the top, the Windows version has them as a list of links vertically in a sidebar on the left. From that list you can select the “Configure” link near the bottom. This will expose a list that includes “Selectors”.

The mechanism for the selectors is similar to that of Retrospect 6.1. The selector window shows a list of pre-created selectors, with the option to edit existing selectors or create new ones. The selectors are organized into inclusions and exclusions.

Logic: The arrangement of selectors is slightly different. The choices are grouped into separate sections:

Universal:
- Attributes
- Client Name
- Date
- File System
- Login Name
- Name
- Selector
- Size
Windows:
- Attributes
- Date
- Drive Letter
- Path
- Special Folders
Mac OS X:
- Attributes
- File Kind
- Label
- Path
- Permissions
- Special Folders
UNIX:
- Attributes
- Date
- Path
- Permissions
NetWare:
- Date
- Path
MailBox:
- Sender

This arrangement separates the different supported client types, with specific selectors for the client in question. Otherwise the logic is the same: include filters create the file list and exclude logic removes files from it. Once the new selector is created, it can be selected within any available script.

Retrospect 8
The new version, Retrospect 8 (as of beta 5), calls selectors “rules,” and only supports the use of rules. You cannot create a custom filter for use in only one script. This version of Retrospect allows you to edit the Rules only from the preference pane of the application.

The preference pane allows you to create, remove, edit, or duplicate rules. The rule editor resembles the smart folder rules in the Mac OS X Finder.
You begin with the logic to include or exclude “Any” or “All” of the following selectors. You can then create filters based on the following:

File:
- Name
- Mac Path
- Windows Path
- UNIX Path
- Attributes
- Kind
- Date Accessed
- Date Created
- Date Modified
- Date Backed up
- Size Used
- Sized on Disk
- Label
- Permissions
Folder:
- Name
- Mac Path
- Windows Path
- UNIX Path
- Attributes
- Kind
- Date Accessed
- Date Created
- Date Modified
- Date Backed up
- Size Used
- Sized on Disk
- Is
- Is Not
- Label
- Permissions
Volume:
- Name
- Drive Letter
- Connection Type
- File System
Source Host:
- Name
- Login Name
Existing Rule:
- Is

This list could easily expand out to something too complex to display here. Nonetheless, all the features of the previous filters are arranged from simple to complex with logical includes or excludes.

Once these rules are created, they are available to any script created by the program. In addition, since the Retrospect application is now a console for Retrospect servers, the rules created are on a per-server basis: the “Rules” on one server are not necessarily on another.

The Time Machine Safety Net

Monday, February 2nd, 2009
Time Machine utilizes Leopard’s new MAC framework, providing a “safety net” to ensure the integrity of your backups. Access control provisions are applied via a kernel extension located at /System/Library/Extensions/TMSafetyNet.kext, which makes calls to _mac_policy_register and _mac_policy_unregister. All of this results in a backup set containing data that is immutable via standard means. For instance, attempting to delete a Time Machine backup via the CLI utility ‘rm’ will fail, as will any other CLI file operation utility that attempts to alter Time Machine backups.
It seems that the system enforces the restrictions when all of the following conditions are met:
  1. The item has the ACE ‘group:everyone deny full control’
  2. The item resides in a directory “Backups.backupdb” located at the volume root with the same deny ACE

Steps to create the safety net:

$ mkdir -p /Backups.backupdb/test/test1
$ chmod -R +a# 0 "group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown" /Backups.backupdb/
$ rm -rf /Backups.backupdb/test
rm: /Backups.backupdb/test/test1: Operation not permitted
rm: /Backups.backupdb/test: Operation not permitted

Attempts to alter this data are then unsuccessful. However, there are a few back doors here. There is a CLI binary at /System/Library/Extensions/TMSafetyNet.kext/Contents/MacOS/bypass
which allows you to supply a command plus arguments as an argument and completely bypass the access restrictions. Likewise, GUI-level apps can delete these items by escalating via the authorization trampoline.
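
As a purely illustrative example (the volume and machine names here are made up), the bypass binary simply wraps whatever command you hand it:

sudo /System/Library/Extensions/TMSafetyNet.kext/Contents/MacOS/bypass rm -rf "/Volumes/Backup/Backups.backupdb/somemachine"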

Mac OS X 10.5: Time Machine at the CLI

Saturday, October 18th, 2008

You can customize what Time Machine does not back up by using the following plist:

/System/Library/CoreServices/backupd.bundle/Contents/Resources/StdExclusions.plist

Simply add strings for the locations that you don’t want to back up and Time Machine will no longer back up those locations. Remove the strings to re-add them at a later date.
In the UserPathsExcluded key, you can exclude paths relative to users’ home directories.
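
To see what Apple excludes out of the box before you start editing, you can read the plist from Terminal (note that the defaults command wants the path without the .plist extension):

defaults read /System/Library/CoreServices/backupd.bundle/Contents/Resources/StdExclusions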

Backing Up With Carbon Copy Cloner

Wednesday, April 2nd, 2008

The newest version of Carbon Copy Cloner, now version 3.1, has a number of features that move it closer to a viable automated backup system.

Carbon Copy Cloner is now a wrapper application that runs a series of terminal commands to accomplish its goal, but it does them very well.

Compatibility: 10.4 or higher. Universal Binary

Usage:

Cloning: As its name suggests, the first feature of this software is to clone one drive to another. This is how the program started, and it was one of the few good third-party applications for drive cloning on the Mac.

The software interface is simple: choose a source volume and choose a destination volume. If you are cloning, by default you want to overwrite the destination drive.

New Feature: There is now a built in feature that tests the “Bootability” of the target drive after the clone. This will let you know whether the target drive can be used as a boot volume.

Local Backup: Instead of copying all data from the local drive to the target drive, you can now choose to do incremental backups of selected files. The source file system tree is displayed, and you can check the boxes for the items you wish to back up. This model is good because you can choose the user directory to back up but then deselect the music folder within it; any new files or folders in the user directory will get backed up, but any files or folders in the music folder will not.

Destination in subdirectory & pre- or post-script runs: To copy data into a subdirectory of the target drive, you must pull down the Application menu (between the Apple menu and the File menu) and choose Advanced Settings. This will give you a field to enter a pathname specifying a subdirectory to receive the copied files. You will also see fields to specify scripts to run either before or after the copy. Classically this is used to stop and then start a database, or to execute a database export for backup. I have also seen commands to gzip a directory structure and then decompress it after the copy.

Incremental Backups: When you choose your destination, you can choose whether to do a full copy or an incremental copy. In addition, you are presented with options for whether files are deleted if they are not on the source, and whether to preserve files that are deleted or overwritten. The latter option creates a directory at the destination named _CCC_Year_Month_Time, indicating that the files inside are the ones that would have been overwritten by the incremental backup. As of now there is no way to automatically remove these files without further scripting or user intervention. If you are at a client that makes use of CCC and the destination drives are reaching capacity, these are the files to remove to conserve space.
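
If you need to hunt those archived copies down on a destination volume before clearing them out, a quick find will list them (the volume name here is only an example):

find "/Volumes/Backup Drive" -type d -name "_CCC_*" -prune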

Filtering: This version of CCC has filtering. The gearbox next to the source drive selector will be available if the source drive is local. These filters show what you have chosen not to include. In addition, you can add exceptions to this filter by file extension or by pathname. The latter works the same way as the exclusions in rsync: if you add an entry to this list, any pathname that matches the string will be ignored.

For example: if you back up the /Users/ directory but place “iTunes” in the advanced filter, it will back up all the user folders but ignore all of the iTunes folders inside them.

Disk images as destinations: This allows you to create a sparse image file, with encryption should you choose it, to be the destination of the backups. The image file needs to be local. You could use other scripts to move these files around.

Remote Backup: A recent update to this feature makes it a more viable solution for cost-effective backup. In the interface you can choose the source to be a remote Mac or the destination to be a remote Mac, but not both. If you choose the source to be a remote Mac, you cannot apply the file filters. In most circumstances I prefer to set this up on the client computer that is to be backed up and then choose the remote computer to be the server that will receive the data. In either case, for a remote computer to be the source or the destination, you have to generate an authorization package installer.

This creates an SSH encryption key that is installed into /var/root/.ssh, which allows the rsync process to run over an SSH tunnel without username:password authorization. This package needs to be installed on both the source and destination computers. These installers will now play nice with each other and concatenate their encryption keys, so multiple sources can write to the same computer.

Note: Computers set as destinations must have SSH enabled, normally done by enabling “Remote Login” in the Sharing pane of System Preferences.
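
If you would rather enable Remote Login from the command line (over ARD, for instance), the equivalent is:

sudo systemsetup -setremotelogin on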

Scheduling Backups: Once you have the specifics of the copy process set, you can choose to save the task. This will open a new window called “Backup Task Scheduler”. In it you will see a list of scheduled tasks. These tasks correspond to entries in /Library/LaunchDaemons, and each one will run as a daemon process called ccc_helper.

You can schedule operations on an hourly, daily, weekly, or monthly basis, or whenever the drive is connected. That last option is only viable for a backup that writes to a local drive.

The settings tab allows you to specify whether the backup destination will be determined by pathname only or by the unique UUID of each drive.

You can access existing schedules by going to the Application menu again and choosing “Scheduled Tasks…”

NOTE: If the destination drives at the client rotate onsite and offsite, there are two things to consider: the scheduled backups should NOT use the unique UUID, and both drives should have the same name so that they can receive remote backups properly. The good news is that the ccc_helper daemon is smart enough not to write into the /Volumes directory if no drive there matches the destination name.
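
If a swapped-in drive comes back mounted with the wrong name, it can be renamed to match the scheduled destination; something like the following should do it (volume names here are only examples):

diskutil renameVolume "/Volumes/Offsite Backup 2" "Offsite Backup"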

The description field is by default populated with common language describing the specifics of the backup script. This can be edited to say anything that you like.

Cancelling a copy in process: If you can see the window for the ccc_helper app, you can press the cancel button. If you do so, you are given two options: skip this execution, which will relaunch at the next scheduled time, or defer. If you choose to defer, you can have the newly selected time become the execution time from then on. This is probably the only drawback to having the backup run on a client computer: the user can cancel the process on their own.

Conclusions: All in all, you get a lot with this simple product, and it can be of great use even in limited applications. If your client is mostly Mac and does not want to invest in an expensive backup solution, it can go a long way toward backing them up.

Pros: It is donationware, meaning it is freeware that will bug you for a donation now and again. It uses existing technology on your system, namely rsync and ssh. It is HFS+ metadata aware: it is the ccc_helper that does the work, and it will copy the HFS+ metadata over ssh. It writes out its own CCC log file.

Cons: It does not handle failure gracefully: if it cannot perform its actions, it will bring up an on-screen alert that stays until dismissed. Using incremental backup on a very large file list can be memory intensive; this is more pronounced in local copies, as it seems to break the rsync operations down on a folder-by-folder basis when the destination is remote. Filtering is only available if the source is local. Mac only: no support for any other operating system.

Starting (and restarting) Retrospect Clients From the Command Line

Monday, March 10th, 2008

Port scan the system to see if port 497 is up. Send Unix Command (this very often does not work for me): exec SystemStarter stop RetroClient, then exec SystemStarter start RetroClient

If the above fails, enable SSH by sending the command via Send Unix: systemsetup -setremotelogin on

Open up a new terminal window and ssh into the system: ssh 318admin@192.168.1.150

Run the following to start the retrospect startup item: sudo /Library/StartupItems/RetroClient/RetroClient

If that does not work, you can try to manually run the daemon in the foreground: sudo /Applications/Retrospect\ Client.app/Contents/Resources/pitond

This last command is only helpful for debugging, as the client will exit as soon as you close the window. However, you can open multiple (SSH) terminal windows to view the logs while you manually start and stop the service:

tail -f /var/log/retroclient.log
tail -f /var/log/system.log
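
Once the client is running again, a quick port check from another machine confirms it is listening (assuming nc is available; the IP address is the example host used above):

nc -z 192.168.1.150 497 && echo "Retrospect client is listening"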

Checking FileMaker Server Backups

Tuesday, December 11th, 2007

When checking backups for clients that have a FileMaker-based solution, it is imperative that we check the backup mechanisms for FileMaker in addition to Retrospect (or other backup solutions). This article will outline this process.

1) First, we need to refresh the Finder so that the dates reported will be correct (you’ll see why this is important in a minute). To do this, simply log out of the account and log back in. Keep in mind that programs like Retrospect will quit on logout. If the computer cannot be logged out because a program is hosting a service that is in use, there is another way to refresh the Finder’s timestamps: creating a new folder in a Finder window refreshes them. Do this in the folder you are checking, and then delete the newly created folder.

2) Once you have logged back in, we need to find out where the backups are stored. To do this, simply launch FileMaker Server Admin (/Applications/FileMaker Server/FileMaker Server Admin) and connect to the database by typing 127.0.0.1.

3) Once FileMaker Server Admin is open, click on the “Schedule” button at the top. Here you will see the schedule for the backups. Double-click on one of the scripts and note the file path starting with filemac:/. This is the location of FileMaker’s backups. The default location is:

/Library/FileMaker Server/Data/Backups/

4) Navigate to the listed location in the Finder.

5) Switch to list view (Apple + 2). You should see folders that indicate timed backups. For instance:

0800 1000 1200 1400 etc.

1 – Mon 2 – Tuesday 3 – Wed etc.

This would indicate that the database is being backed up every 2 hours, and on each day of the week.

6) Now, to check the backups, go into one of the folders. Make sure you are in list view (Apple + 2) and look at the timestamps. The last modified timestamp should fall in line with the type of backup you are looking at. If it is a daily, then the items in the folder for the day before should read “Yesterday”.

7) Go through the various backup script folders and verify that the database is backing up properly.

The FileMaker backup system should be checked in addition to the main backup system. The system should be set up so that FileMaker is creating backups, and those backups are being backed up again by the main backup program.

The developers have requested that we check and ensure that these clients also have at least one backup on each weekend day (in the evening). If they do not, go ahead and create one for Saturday and Sunday (10PM is fine as long as it doesn’t conflict with other backups). Add the script by doing the following:

Go to the folder where the backups are located and create SAT and SUN folders. Make sure the owner is fmserver (read/write) and the group is fmsadmin (read/write), as sketched below. Then go into FileMaker Server Admin, go to Schedules, duplicate the Friday backup, and change the file location and timing to match your new scheduled backup. Name it appropriately.
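
A minimal Terminal sketch of that folder setup, assuming the default backup location mentioned above:

cd "/Library/FileMaker Server/Data/Backups"
sudo mkdir SAT SUN
sudo chown fmserver:fmsadmin SAT SUN
sudo chmod 770 SAT SUN

(Mode 770 gives the fmserver account and the fmsadmin group read/write plus the execute bit a directory needs; adjust if your site uses different permissions.)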

Time Navigator Installation Checklist

Monday, November 26th, 2007

This document will be followed up by a document with more detailed instructions for each checkbox.

Client management
[ ] Talk to the client to verify the SOW from ATEMPO
[ ] Discuss the amount of data and retention policies.

Preflight
[ ] Verify host name of server and clients.
[ ] Verify hardware.
[ ] Verify version. Time Navigator gets new revisions quite often; check with your Atempo point of contact to make sure you have the latest version.

Installation:

[ ] Atempo license email should have been sent to the client contact.
[ ] Log into the Atempo license web site to the point where it asks for the host id.
[ ] Log in to the computer as root. [All installations should be done as root; enable the root user if you need to.]
[ ] Run the License Manager installation.
[ ] Copy and paste the host id into the license web site.
[ ] Generate and download the license key.
[ ] Indicate the license key file in the License Manager installer.
[ ] Run the Time Navigator installer.
[ ] Designate the environment name (usually tina).
[ ] Designate ports (default to 2525 and 2526).
[ ] When installation is complete, restart the computer.
[ ] Start the Atempo launcher.
[ ] Start “The Configurator”.
[ ] Create the initial catalog.
[ ] Detect attached tape drives and libraries.
[ ] Start the Tina Administrative console.
[ ] Run a diagnostic test on all physical drives.
[ ] Create VLS libraries [if necessary].
[ ] Create tape pools.

Set up agents on backup clients
[ ] Install the initial agent.
[ ] Create a package installer.
[ ] Deploy the package to the remaining agents.
[ ] Install the remaining non-Mac OS X computers.
[ ] Add agents as hosts.
[ ] Create backup classes.
[ ] Create backup strategies.
[ ] Run test backups.
[ ] Run restore tests.
[ ] Customize the tina install to the features of the client.

Addendum :: Replication
[ ] Select the host to be the source of replication.
[ ] Select Platform > Application > Filesystem.
[ ] Create a backup class on the new Application icon.
[ ] Create a strategy with replication activated.
[ ] Create the destination within the strategy.

Completion
[ ] Review the SOW from Atempo with the client.
[ ] Train the client on how to monitor backups.