Archive for the ‘Xsan’ Category

Files Not Showing For Xsan Clients When Uploaded Through Ethernet

Tuesday, October 15th, 2013

There is a problem with Xsan when using AFP or SMB heads in front of volumes: when a user uploads or adds a file to the volume, the file is not immediately visible to all users. The issue doesn’t occur every time a file is uploaded, and it doesn’t cause files to actually disappear; affected users just need to relaunch the Finder in order to see the new object.

We’ve been using this freeware app as a workaround until Apple comes up with a patch: https://www.macupdate.com/app/mac/24714/refresh-finder
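
If installing a third-party app isn’t an option, relaunching the Finder from Terminal accomplishes the same refresh; this one-liner is safe to run on any affected client:

killall Finder

The Finder relaunches automatically and re-reads the volume’s directory listing on the way back up.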

Xsan & Media Composer

Monday, April 22nd, 2013

There’s a product out there called SANFusion that allows Media Composer to read disk images on Xsan as though each were a separate workspace in AvidFS. This lets environments leverage existing pools of storage sitting on Xsan as though they were sitting on an Isis or a TerraBlock. In SANFusion’s own words:

SAN Fusion is an easy and cost-effective solution that allows you to use Avid Media Composer with your existing Xsan infrastructure.

By using an existing Xsan volume as a backing-store for SAN Fusion’s virtualized workspaces, editors can experience authentic Unity-style bin sharing and locking from within Media Composer 5.5.3 or later. Unlike other solutions that purport to make Avid work with Xsan, SAN Fusion is a client-side application with no additional server or minimum number of seats to buy.

SAN Fusion clients mount and unmount workspaces on demand through our intuitive GUI to provide Media Composer a real-time translation layer between it and Apple’s Xsan filesystem. The end result is the best of both worlds: Avid’s first class bin and media sharing combined with Apple’s storage-agnostic low (or no) cost cluster filesystem.

The $1,500 price tag shouldn’t scare you off. Just compare the cost of a Promise X30 to the same amount of storage from Avid and you’ll see why. And if you need any help with such things, give us a shout at sales@318.com.

Unity Best Practices In AVID Environments

Thursday, September 6th, 2012

Avid Unity environments are still common these days because the price of Avid’s ISIS SAN is tremendously high. While a Unity typically started anywhere from $50,000 to $100,000, a typical ISIS starts around the same price even though it’s built on more typical, less expensive commodity hardware: the ISIS runs over common gigabit networking, whereas the Unity runs over fibre channel SCSI.

Avid Unity systems come in two flavors. Both can be accessed by fibre channel or by gigabit ethernet. The first flavor is all fibre channel hardware. The second uses a hardware RAID card in a server enclosure with a sixteen drive array and shares that storage over fibre channel and/or gigabit ethernet.

Components in a fibre channel only Unity can be broken down so:

  • Avid Unity clients
  • Fibre channel switch
  • Fibre channel storage
  • Avid Unity head

Components in a chassis-based Unity are:

  • Avid Unity clients
  • Fibre channel switch
  • Avid Unity controller with SATA RAID

The fibre channel only setup can be more easily upgraded. Because such setups are generally older, they typically came with a 2U rackmount dual Pentium 3 (yes, Pentium 3!) server. They use a 2 gigabit ATTO fibre channel card, and their reliability can be questionable after a decade.

The Unity head can be swapped for a no-frills Intel machine (AMD doesn’t work, and there’s not enough time in the world to figure out why), but one must be careful about video drivers. Several different integrated video chips and several video cards have drivers which somehow conflict with the Unity software, so sometimes it’s easier to simply not install any video drivers, since nothing depends on them. The other requirements/recommendations are a working parallel port (for the Unity dongle), a PCIe slot (for a 4 gigabit ATTO fibre channel card) and 4 gigs of memory (so that the Avid File Manager can use a full 3 gigabytes).

The fibre channel switch is typically either a 2 gigabit Vixel switch or a 4 gigabit Qlogic 5200 or 5600 switch. The older Vixel switches have a tendency to fail: little heat sinks are attached to each port chip facing downward, and after a while a heat sink or two can fall off and the chip dies. Since Vixel is no longer in business, the only replacement is a Qlogic.

The fibre channel storage can be swapped for a SATA-fibre RAID chassis so long as the chassis supports chopping up RAID sets into many smaller logical drives on separate LUNs. Drives which Avid sells can be as large as 1 TB if using the latest Unity software, so dividing up the storage into LUNs no larger than 1 TB is a good idea.

Changing the storage configuration while the Unity holds data is typically not done, due to the complexity involved and a general lack of understanding of what it entails. If it must be done, it’s typically safer to use one or more clients to back up all the Unity workspaces to normal storage, then reconfigure the Unity’s storage from scratch. That is also the best opportunity to add storage, change from fibre channel drives to RAID, take advantage of RAID-6, et cetera.

Next up is how Avid uses storage. The Unity essentially thinks that it’s given a bunch of drives. Drives cannot easily be added, so the only time to change total storage is when the Unity will be reconfigured from scratch.

The group of all available drives is called the Data Drive Set. There is only one Data Drive Set and it has a certain number of drives. You can create a Data Drive Set with different sized drives, but there needs to be a minimum of four drives of the same size to make an Allocation Group. Spares can be added so that detected disk failures can trigger a copy of a failing drive to a spare.

Once a Data Drive Set is created, the File Manager can be started and Allocation Groups can be created. The reasoning behind Allocation Groups is so that groups of drives can be kept together and certain workspaces can be put on certain Allocation Groups to maximize throughput and/or I/O.

There are essentially two families of file access patterns. One is pure video streaming, which is, as one might guess, just a continuous stream of data with very little other file I/O. Caching parameters on fibre-SATA RAID are sometimes configured so that large video-only or video-primary drive sets (sets of logical volumes cut up from a single RAID set) are optimized for streams. The other file access pattern is handling lots of little files such as audio, stills, render files and project files. Caching parameters set to optimize lots of small random file I/O can show a noticeable improvement, particularly for the Allocation Group which holds the workspace on which the projects are kept.

Workspaces are what they sound like. When creating a workspace, you decide which Allocation Group that workspace will live on. Workspaces can be expanded and contracted even while clients are actively working in them. The one workspace which matters most when it comes to performance is the projects workspace. Because Avid projects tend to have hundreds or thousands of little files, an overloaded Unity can take tens of seconds simply to open a bin in Media Composer, which will certainly affect editors trying to work. The Attic is kept on the projects workspace, too, unless explicitly set to a different destination.

Although Unity systems can have ridiculously long uptimes, like any filesystem there can be problems. Sometimes lock files won’t go away when they’re supposed to, sometimes there can be namespace collisions, and sometimes a Unity workspace can simply become slow without explanation. The simplest way to handle filesystem problems, especially since there are no filesystem repair tools, is to create a new workspace, copy everything out of the old workspace, then delete the old workspace. Fragmentation is not checkable in any way, so this is a good way to make a heavily used projects workspace which has been around for ages a bit faster, too.

Avids have always had issues when there are too many files in a single directory. Since the media scheme on Avids involves Media Composer creating media files in workspaces on its own, take care to make sure that no single directory in a media workspace (heck, any workspace) has more than 5,000 files. Media directories are created based on the client computer’s name in the context of the Unity, so if a particular media folder has too many items, that folder can be renamed to the same name with a “-1” at the end (or “-(n+1)”). A way to spot oversized directories from a Mac client is sketched below.
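
Here’s a minimal sketch for finding directories that have crossed that threshold; the workspace mount path is hypothetical, so substitute your own:

# Print the file count and path of any directory holding more than 5,000 files
find "/Volumes/MediaWS/Avid MediaFiles" -type d | while IFS= read -r dir; do
  count=$(find "$dir" -maxdepth 1 -type f | wc -l)
  [ "$count" -gt 5000 ] && echo "$count $dir"
done

Any directory this prints is a candidate for the rename trick described above.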

Avid has said that the latest Media Composer (6.0.3 at the time of this writing) is not compatible with the latest Unity client (5.5.3). This is not true; while certain exotic actions might not work well (uncompressed HD or a large number of simultaneous multicam streams, perhaps), all basic editing functions work just fine.

Finally, it should be pointed out that when planning ways to back up Unity workspaces, Windows clients are bad candidates. Because the number of simultaneously mounted workspaces is limited by the number of available drive letters, a Windows client can back up at most 25 workspaces at a time. Macs have no limitation on the number of workspaces they can mount simultaneously, plus Macs have rsync built into the OS, so they’re a more natural candidate for performing backups. A sketch of such a backup follows.
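
As a minimal sketch (the workspace names and destination path are hypothetical), a Mac that has mounted the workspaces could back them up like so:

# -a preserves permissions and timestamps; -E copies extended attributes
# and resource forks on Apple's build of rsync
for ws in Projects Media1 Media2; do
  rsync -aE --delete "/Volumes/$ws/" "/Volumes/BackupRAID/$ws/"
done

Scheduled via cron or launchd, this makes a serviceable nightly backup of every workspace the Mac mounts.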

Video On Setting Up File Sharing In Lion Server

Friday, May 11th, 2012

Setting up a Qlogic Fibre Channel Switch For Xsan

Wednesday, April 11th, 2012

Qlogic switches can be configured via a built-in Web-based administration tool, or via their Command Line Interface over a serial connection. The Web-based tool is the fastest and easiest method of getting one up and running.

By default, Qlogic switches have an IP address of 10.0.0.1. The default username is “admin”, and the default password is “password”. Set your computer’s IP address to 10.0.0.2, with a subnet mask of 255.255.255.0 and no router/gateway. Open a web browser – Firefox is your best option – and go to 10.0.0.1. The Java applet will present a security warning – confirm that the applet can control your computer. It won’t do anything bad.

On first logging in, you will be warned that the default password has not been changed. Please change the password. It’s very easy for somebody to make your fibre fabric not work right. Once you have done so, configure the IP address of the switch.

Please check whether a firmware update is available for the switch before proceeding any further with setup. It’s definitely going to be easier to apply a firmware update before you’ve got an Xsan using your fabric. Go to http://driverdownloads.qlogic.com/QLogicDriverDownloads_UI/NewDefault.aspx and click on Switches, then Fibre Channel Switches, choose the correct model, and click “Go”.

Devices on a fibre network are identified by their World Wide Name, or WWN. WWNs are guaranteed to be universally unique, which is a good thing, but they’re not designed to be read by humans. That’s why Qlogic lets you assign Nicknames to your devices. You should assign meaningful and easily decipherable Nicknames to all of your devices. Go to Fabric, and then Nicknames. You’ll see a list of all the WWNs (including vendor information), and which port they’re connected to. Double-click in the “Nickname” box, enter what you like, and when you’re done, click “Apply”. Accurate and comprehensible Nicknames make everything else easier, particularly the next step, which is Zoning.

Communication on a Fibre Channel network is controlled by Zones. In order for Fibre Channel devices to see one another (e.g. for clients to see storage), they must be in a zone together. In a small environment, it’s feasible to create a single zone, and place all devices in that zone. However, it isn’t necessary for Xsan clients and controllers to be able to communicate via Fibre Channel – all of their communication happens across the Metadata Network. If you want the best performance, then, it’s best to separate the devices logically into multiple zones to avoid excessive traffic on the Fibre Channel network. Devices can be added directly to a zone, or they can be grouped into Aliases, which can then be added to a zone.

As an example, imagine an environment with 15 Xsan clients, 2 Metadata controllers, and 2 Promise E-Class arrays. The clients need to communicate with the Promise storage, and the controllers do as well, but the clients and controllers don’t need to communicate with one another. Three aliases should be created and two zones should be created: one alias for each class of device, and one zone for each necessary communications channel.

Aliases

  • clients: Contains all Xsan clients
  • controllers: Contains both Metadata controllers
  • storage: Contains both Promises

Zones

  • XsanControllers: Contains the controllers and storage aliases
  • XsanClients: Contains the clients and storage aliases

Zones are contained in Zone Sets. Many Zone Sets can be configured, but only one Zone Set can be active at any time. Once you’ve created zones for your devices, put all those zones into a Zone Set, and make sure that you activate that Zone Set when you’re finished with your configuration changes.
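
It’s also possible to script the same aliases, zones, and Zone Set over the switch’s command line. The sketch below is from memory of the SANbox-style CLI; every command name here is an assumption that may differ by model and firmware revision, so verify against your switch’s CLI guide before pasting anything:

admin start                                    # enter an admin session (assumed syntax)
zoning edit                                    # open a zoning edit session (assumed syntax)
alias create clients                           # the aliases from the example above
alias create controllers
alias create storage
zone create XsanClients
zone add XsanClients clients storage           # members: the clients and storage aliases
zone create XsanControllers
zone add XsanControllers controllers storage
zoneset create Production
zoneset add Production XsanClients XsanControllers
zoning save                                    # commit the edit session
zoneset activate Production                    # remember: only one Zone Set can be active

You’d still need to add WWN members to each alias, which is generally easier in the GUI, where Nicknames are displayed.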

Storage devices and clients on a Fibre Channel network present themselves to the switch differently, and require configuration specific to their role. There are port properties that need to be set to provide the best performance. Xsan controllers and clients are “Initiators”, and storage devices are “Targets”. Device Scan, when enabled, queries every newly connected device to determine whether or not it is a Target or an Initiator. I/O Streamguard attempts to prevent disruption by suppressing some types of communication between initiators. Since we know what every device will be, and what port they’re on, we can set Device Scan and I/O Streamguard appropriately and avoid the excess traffic.

Initiators:

  • Enable I/O Streamguard
  • Disable Device Scan

Targets:

  • Disable I/O Streamguard
  • Enable Device Scan

Once you have your Nicknames, Zones, and port settings configured, your switch should be ready for use, and you can move on to configuring your storage, clients, and controllers.

Xsan Deployment Checklist

Tuesday, April 10th, 2012

One of the harder aspects of building systems consistently in a repeatable fashion is that you often need a checklist to follow in order to maintain that consistency. Therefore, we’ve started an Xsan Installation Checklist, which we hope will help keep all the i’s dotted and t’s crossed. Feel free to submit any items we should add to the checklist and also feel free to use it to verify the configuration of your own Xsans.

Preparation

[ ] Work out ahead of time how permissions will be dealt with:

  • Active Directory
  • Open Directory
  • Local Clients in same group with different UIDs.

[ ] If Active Directory is already in place, verify that systems are bound properly.

[ ] If Open Directory is already in place, verify that systems are bound properly.

[ ] If Open Directory is not already in place, configure Open Directory.

[ ] All client Public interfaces should have working forward and reverse DNS resolution (a quick check is sketched below).
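
A quick way to spot-check resolution from a client; the hostname and address here are placeholders:

host client01.example.com     # forward: name to IP
host 192.168.1.101            # reverse: IP back to name

Both lookups should succeed, and the answers should agree with each other.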

Fibre Channel (Qlogic)

[ ] Update Qlogic firmware to latest on all switches.

[ ] Set nicknames for all devices in the fabric.

[ ] Export the nicknames.xml file and give it to the customer, or import it to a workstation running Qlogic SANsurfer.

[ ] Set the domain IDs on the Qlogic switches; use a different Domain ID for each switch.

[ ] Set port speed manually on Qlogic and clients. Don’t use auto-negotiation.

[ ] Configure the appropriate Qlogic port properties for Targets (Storage) and Initiators (Clients).

Targets

  • Device Scan On
  • I/O Streamguard Off

Initiators

  • Device Scan Off
  • I/O Streamguard On

[ ] Avoid fully populating Qlogic 9200 blades; use only 8-12 ports of each blade to avoid flooding the backplane.

[ ] If the switch has redundant power, plug each power supply into a different circuit.

[ ] Split HBA (client port) and storage ports across switches, i.e. port 0 on switch 1, port 1 on switch 2.

Storage (Promise)

[ ] Update Controller firmware to latest version

[ ] If the client has a spare controller, update that as well. Also label the box with the updated firmware number.

[ ] Work out LUNs for MetaData/Journal and Data (MD should be RAID 1, Data should be RAID 5 or 6)

[ ] Adjust the script for formatting Promise RAIDs – refer to http://support.apple.com/kb/HT1200

[ ] Start formatting LUNs according to strategy – this can take up to 24 hours.

Metadata Network

[ ] If the customer has Spanning Tree enabled, make sure Portfast is enabled as well. If possible, disable Spanning Tree.

[ ] Verify that both clients and servers have a GigE connection.

General Client/Server

[ ] Label your NICs clearly: Public LAN and Metadata LAN.

[ ] Configure Metadata network with IP and Subnet Mask only. No router or DNS.

[ ] Disable unused network interfaces.

[ ] Make sure the Public interface is the top interface in System Preferences/Network.

[ ] Disable IPv6 on all interfaces.

[ ] Energy Saver settings: Make sure “put hard disks to sleep when possible” is disabled.

[ ] Make sure Startup Disk is set to the proper local boot volume.

Metadata Controllers

[ ] Install Xsan on Snow Leopard machines and below (Xsan is included with Lion).

[ ] All MDCs should have mirrored boot drives, with AutoRebuild enabled.

[ ] Sync the clocks via NTP. Make sure all clients and MDCs point to the same NTP server (see the sketch after this list).

[ ] Add MDCs to Xsan
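
For the NTP item above, a minimal sketch of pointing a Mac at a time server from the command line; the server name is a placeholder:

sudo systemsetup -setusingnetworktime on
sudo systemsetup -setnetworktimeserver ntp.example.com

Run both commands, with the same server, on every MDC and client so the clocks stay in agreement.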

Volume Configuration

[ ] Label all the LUNs clearly.

[ ] Configure the MetaData LUN as a mirrored RAID 1.

[ ] Use an even number of LUNs per pool.

[ ] Use Apple defaults for block size and stripe breadth and test to see if performance is acceptable.

[ ] Do NOT enable Extended Attributes.

[ ] Verify email notification is turned on.

[ ] Make sure the customer knows not to go below 20% free space.

Xsan Creation/Management

[ ] Verify that the same version of Xsan is running on all MDCs and clients.

[ ] For 10.6 and below – add Xsan serial numbers to Xsan Admin

[ ] Add clients to Xsan

[ ] Verify performance of Xsan

  • Test speed
  • Test IO
  • Test sustained throughput
  • Test with different file types
  • Test within applications (real world testing)

[ ] Document the Xsan for the client

[ ] Upload documentation



The Impact of Directory Services on Xsan

Monday, January 23rd, 2012

When you’re dealing with file ownership and permissions, context is very important. Xsan volumes, from the point of view of an Xsan client, are local storage. There’s no daemon acting as gatekeeper or mediator, so when files are created or modified, the clients will use the standard mechanisms for assigning ownership and access rights, as they would with any local drive. In the absence of a shared authentication context, local user accounts will end up owning the files, and their permissions will be according to the default umask.

Mac OS X keeps track of such things by numerical User ID, and all Mac OS X systems start assigning local UIDs beginning with 501. It shouldn’t be too difficult to see how this can end badly. Users will potentially be able to overwrite files owned by other users, access files they shouldn’t be able to, or be unable to access files that they should. A quick check is sketched below.
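
You can see the collision for yourself by reading a local account’s UID on two different workstations; the account name here is just a placeholder:

dscl . -read /Users/editor UniqueID

If each machine’s first local user reports UniqueID 501, then two different people look like the same owner to an Xsan volume.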

A directory service, therefore, is a key component of a proper Xsan deployment. A directory service provides a shared context that keeps User ID collisions from happening. Users and groups can also be managed centrally rather than on each workstation, which is especially important in large Xsan environments. Finally, it becomes possible to manage access to files and folders with Access Control Lists rather than POSIX permissions; POSIX permissions aren’t flexible enough to effectively manage the requirements of most Xsan environments.

It doesn’t matter whether you’re using Open Directory, Active Directory, or a Golden Triangle. You can even integrate Xsan with Novell’s eDirectory. Any of these will provide a much smoother and easier-to-manage Xsan.

318 CatDV Installation Checklist

Wednesday, January 11th, 2012

318 has been doing a lot of work with CatDV recently, and as such, we are starting to build a large library of assets for the product. We have built a checklist for the installation and planning of a CatDV asset management system. The checklist is a quick guide to server installation, worker nodes, client configuration, using SSL, watch folders, conditions, queries, conversions and processing.

The checklist can be downloaded here:

318 CatDV Installation Checklist


For more information about CatDV, related storage issues or other aspects of your technology environment, please feel free to contact your Professional Services Manager or sales@318.com. For more information about 318, see us on the web at 318.com.

Monitoring Xsan with Nagios and SNMP

Monday, December 12th, 2011

Monitoring a system or device using SNMP (a SonicWALL, for instance) is simple enough, provided you have the right MIB. XSNMP is an Open Source project that provides a simple Preference Pane to manage SNMP on OS X, and it also includes a MIB developed by LithiumCorp. This MIB allows OS X’s SNMP agent to gather and categorize information relating specifically to Mac OS X, Mac OS X Server, and Xsan.

XSNMP-MIB can be downloaded from GitHub, or directly from Lithium.

Download the XSNMP-MIB.txt file and put it in /usr/share/snmp/mibs. You can verify that the MIB is loaded by running snmpwalk on the system, specifying the XSNMP Version OID. If snmpwalk returns the version, the MIB is installed correctly. If it returns an error about an “Unknown Object Identifier”, then the MIB isn’t installed in the right spot.

bash$ snmpwalk -c public -v 1 my.server.address XSNMP-MIB::xsnmpVersion
XSNMP-MIB::xsnmpVersion.0 = Gauge32: 1

The fact that the MIB was developed by Lithium doesn’t stop us from using it with Nagios, though. You can define a Nagios service to gather the free space available on your Xsan volume by adding the following to a file called xsan_usage.cfg. Put the file in your Nagios config directory.

define service{
    host_name            xsan_controller
    service_description  Xsan Volume Free Space
    check_command        check_snmp!-C public -o xsanVolumeFreeMBytes.1 -m XSNMP-MIB
}

The host_name should match the Nagios host definition for your Xsan Controller. The service_description can be any arbitrary string that makes sense and describes the service.

The check_command definition is the actual command that’s run. The -C flag defines the SNMP community string, the -m flag defines which MIB should be loaded (you can use “-m all” to just load them all), and the -o flag defines which OID we should return. “xsanVolumeFreeMBytes.1” should return the free space, in MB, of the first Xsan volume.
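
You can also sanity-check the service by running the plugin by hand before wiring it into Nagios; the plugin path below is a guess, since it varies by installation:

/usr/local/nagios/libexec/check_snmp -H my.server.address -C public -o xsanVolumeFreeMBytes.1 -m XSNMP-MIB

If this prints the volume’s free megabytes and exits with an OK status, the service definition above will return the same result on its schedule.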

Final Cut Server EOL’d – What do we do now?

Friday, December 9th, 2011

318 has been working to provide our clients with a strategy to replace Final Cut Server, now that FCS has been EOL’d by Apple. We are proud to announce a comprehensive strategy and solution in the form of CatDV Enterprise Server and Client, by Square Box Systems, LTD.

The first question should always be, “Do we need to implement a new solution?” In many cases, and at least for now, the answer may be “No, not yet.” There will come a time, however, when the needs of the workflow, software, hardware, or some other factor will necessitate a new Digital Asset Management (DAM) System implementation.

Once the decision has been made to deploy a new DAM, many additional questions will arise. How do we keep our metadata intact? Can we re-use our clip and edit proxies? How do we keep our current automations? 318 can work with you to address these issues. We are asking ourselves the same questions with an eye towards minimizing the hassles associated with migrating such a major piece of infrastructure.

318 has spent the last year evaluating many of the DAM solutions in the marketplace, with an emphasis on whether each solution is an appropriate replacement for Final Cut Server in terms of cost, functionality and scalability. After many internal discussions, CatDV best matched these criteria. In terms of cost, CatDV is one of the most affordable solutions in the marketplace. In terms of functionality, CatDV matches or exceeds the functionality of Final Cut Server. In terms of scalability, CatDV far exceeds the capabilities of Final Cut Server.

The final link in the chain is migrating data and recreating workflows from Final Cut Server to CatDV. 318 has the facility and ability to migrate your metadata with a minimum of user intervention. We also have the ability to analyze your Final Cut Server workflows and re-create the functionality in CatDV, including shell scripting and highly customized workflow integrations for ingest and archive.

We are a CatDV authorized reseller, and have staff trained by CatDV personnel. 318 stands ready to spec, deploy, configure and maintain your CatDV solution and help you with the transition from your Final Cut Server to CatDV. Please don’t hesitate to contact us for a demo and discussion of what CatDV can do for your video workflows.

Finally, 318 is working with other vendors to continue expanding our portfolio of SAN and DAM solutions. Keep on the lookout for what will hopefully be a few other additions once our thorough vetting process has been completed! If you would like further information on any of this, please feel free to contact your Professional Services Manager or sales@318.com if you do not yet have one.

The All New Promise x30

Wednesday, April 13th, 2011

Yesterday, we mentioned Thunderbolt adapters for Xsan. But NAB 2011 isn’t over, and we have more announcements to bring up. Promise has announced their all new x30 series. With a spiffy new chassis design, these things now sport 8Gbps controllers, up to 48TB of space and up to 8 units in a stack (that’s 7 expansion chassis per head unit).

Oh, and we’d be remiss not to mention the redesigned management screen, a massive improvement over the command-and-control pane of glass we had before! It’s a web-based management tool, by the way, that works on the iPad! And management is easier now that you don’t have to restart the units every time you apply a software update.

For more information on the new Promise x30, see:

http://www.promise.com/storage/raid_series.aspx?region=en-US&m=1053&sub_m=sub_m_8&rsn1=40&rsn3=48

To discuss how 318 can assist your organization in leveraging these new tools from Promise, from integrating a fleet of MacBook Pros with Xsan to bolting on additional storage for the always-full Xsan, contact your 318 Professional Services Manager, or sales@318.com if you do not yet have one!

Final Cut Server Client for iPad

Wednesday, January 19th, 2011

Yes, you heard that right. You can now browse assets, edit metadata, annotate clips and download clip proxies from Final Cut Server using an iPad.

ClipTouch, from Factorial in New Zealand, is a slick, sleek client for Final Cut Server. Per the Factorial website, it supports:

– No server configuration required
– Search and discover assets
– Directly download and view clip proxies
– Supports the default proxy setting
– Clip timecode display
– Change asset metadata
– Browse and add annotations
– Archive and Restore assets to any archive device
– Respects permission sets based on your login
– Supports direct and VPN connections

After using it to view some assets that were optimized using the special Compressor settings that Factorial posted, I have to say that I’m impressed with how well it works and with how the interface just looks plain sexy. A job well done! Check it out on the App Store.

Defragmenting an Xsan Volume to Reallocate Storage

Friday, January 14th, 2011

In the life of an Xsan shop, you will at one point or another be presented with the need to defragment your volume. Defragmenting a volume is a good way to recover lost performance, but it can also be beneficial in other scenarios: defragging is an absolute must after performing a bandwidth-style expansion of your volume, and is often recommended (though not absolutely necessary) when performing a capacity-style expansion. In case you’re confused, a bandwidth expansion is the type performed when you add LUNs to an existing storage pool. Conversely, a capacity expansion involves adding new storage pools to an existing volume.

Because the make-up of a storage pool is drastically altered when a bandwidth expansion is performed, the data is not properly distributed across the new LUNs that were added to the pool. This results in a shadow effect where the full capacity of the storage pool is not available to the system. Because of this, it is an absolute requirement that a defrag routine is run. To perform this defrag, we use the standard snfsdefrag command with the special ‘-d’ flag, which ensures that this shadowed storage space is reclaimed and that data is properly distributed across the storage pool:

snfsdefrag -dr /Volumes/MyXsanVolume

There are several scenarios where it may be desirable to rebalance the data on an existing volume. A capacity expansion of a volume will result in one or more new storage pools being added to the volume, but the new storage will not have any data written to it. Alternatively, an allocation strategy of round-robin or fill can, over time, result in a poor distribution of data across your storage. By spreading data across the storage evenly, you ensure that all disks are running at similar capacities, therefore netting more consistent performance across the volume, as disk performance tends to degrade as capacity increases.

When snfsdefrag is run, it defragments files as specified by your parameters, and these files are redistributed onto the volume per your allocation strategy. If you defragment a volume that has a ‘Fill’ allocation strategy, you will not gain any benefits of evenly distributed data, though your individual files will no longer be fragmented.

Thus, if your main goal is to balance all data across the volume, it will be necessary to change the volume’s allocation strategy to Balance and then defragment the volume. This causes fragmented files to be relocated to the lowest-capacity pool, an extremely effective method for balancing data. In Xsan 2.x, you can change a volume’s allocation strategy at the GUI level while the volume is live. In our experience, the change can be performed live without interrupting Xsan client service to the volume, and active transfers proceed with no disruption. Even so, it’s best to perform the switch at a time when there’s minimal activity (preferably none) on the volume and no active transfers in progress.

Once you’ve converted the volume to the new strategy, you can proceed with the optimization, which is a fairly straightforward defrag performed with the command:

snfsdefrag -r /Volumes/VolumeName

This will defragment any files with more than one extent, re-provisioning the optimized files to the next LUN in the allocation strategy. And because we’re now using the Balance strategy, the next LUN will always be the one with the lowest capacity – our new LUNs, in this case. If, however, you had a healthy Xsan volume, this command may not properly balance data, because fragmented files will be rare. In such an event, run the command

snfsdefrag -r -m 0 /Volumes/VolumeName

This will defragment files with more than zero extents – which is every file on the system – letting you rest assured that the volume will be nicely balanced at the end of the operation. The main trade-off is that doing so re-provisions all files on the volume, which can be a very time consuming task. If the volume has standard levels of fragmentation, running the command without the ‘-m 0’ flag should do a decent job of balancing without having to operate against non-fragmented files as well.

PresSTORE Article on Xsanity

Tuesday, November 16th, 2010

We have posted a short article on the availability of PresSTORE 4.1 on Xsanity at http://www.xsanity.com/article.php/20101116105720183. Enjoy!

The Xserve Has Been Discontinued

Friday, November 5th, 2010

The Xserve has officially been discontinued by Apple and will no longer be sold after January 2011. Mac OS X Server will still be available on the Mac Mini and Mac Pro (which become the only options for a Mac OS X-based Metadata Controller in Xsan environments). Apple has produced a transition guide, available here.

If you would like to discuss how this move impacts your Information Technology environment then please contact your 318 account team for more information!

ARCHIWARE PresSTORE 4 Released

Wednesday, June 30th, 2010

Last week, German software company ARCHIWARE released version 4.0 of its enterprise backup solution, PresSTORE. This version is for new installations only – version 4.1, planned for release in October, will support upgrades from existing 3.x deployments.

The new features of PresSTORE 4 can be found on the company’s website, but here are some highlights:

  • New interface to simplify management
  • iPhone app for remote monitoring of jobs
  • New desktop notification system to alert users of actions
  • Progressive backup – “backup without full backup”

Aaron Freimark also wrote a post about the new version on the Xsanity site that talks more about the Xsan-specific features.

As before, PresSTORE is supported on Mac OS X (10.4 and higher), Windows (2003, 2008, XP, Vista and 7), Linux and Solaris. Backup2Go Server is only supported on OS X and Solaris.

PresSTORE support is great – during testing of the new version, the iPhone monitoring app was crashing. Within a day, a new version was available in the App Store that addressed the exact issue. Bravo!

To learn more about PresSTORE (including pricing options), please contact your 318 account manager today, or email sales@318.com for more information.

New ActiveStorage iPhone App

Tuesday, January 26th, 2010

Active Storage has released a new iPhone app that you can use to monitor the status of their new XRAID ES. If you are interested in the Active Storage products then please call 318 at 310-581-9500 for more information and pricing.

The App is available on the App Store.

Xsan 2.2.1

Friday, December 18th, 2009

Xsan 2.2.1 has been released. Updates include:

  1. Improved filesystem reliability
  2. Improved cvfsck (the Xsan filesystem repair tool; see the sketch after this list)
  3. Resolves QuickTime reporting “invalid public movie atom found” on playback
  4. Eliminates “An unknown disk has been inserted” message when mounting Xsan volumes (occurs in Mac OS X 10.5 Leopard only)
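
To give the improved cvfsck a dry run, here’s a minimal sketch; the volume name is a placeholder, and the flags are our recollection of the tool’s no-modify and verbose options, so check the man page before relying on them:

sudo cvfsck -nv MyXsanVolume

The -n flag keeps the check read-only; drop it only when the volume is stopped and you actually intend to repair it.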

New CLI Options in Final Cut Server

Wednesday, November 25th, 2009

For those of us who thought that the Final Cut Server 1.5.1 update was just a couple of minor bug fixes, there’s a little more than meets the eye. If you run /Library/Application Support/Final Cut Server/Final Cut Server.bundle/Contents/MacOS/fcsvr_client then you’ll note that there are a few fun new features. While there hasn’t been enough time to thoroughly put the new options through their paces, we do hope to do further reporting on them as we become more comfortable with leveraging them for automations. Stay tuned!

ATTO Fibre Channel + Snow Leopard

Tuesday, November 24th, 2009

If you’re using the ATTO card along with Snow Leopard, the 2.41MP driver on their website is compatible with 10.6, though they have yet to update the site to reflect that. These are the drivers for the 42ES coupled with the EMC CLARiiON system:
http://attotech.com/product.php?model=80

You may want to check with their tech support, but it appears the latest 10.5 drivers will work with 10.6.

Resolving Qmaster Problems with Xsan

Thursday, November 5th, 2009

When using Qmaster in an Xsan environment, it is often desirable to use the Xsan volume for Qmaster cluster storage; this allows all Qmaster render nodes on the Xsan to access assets for rendering directly, rather than having to pull the assets over NFS. However, a race condition exists: when qmasterd fires before an Xsan volume is mounted, Qmaster creates a folder structure at the volume’s mount path, which prevents the Xsan volume from mounting properly.

To resolve the issue, you can set a delay on the qmasterd daemon to give the Xsan volumes sufficient time to mount. This can be done by editing the file located at /Library/LaunchDaemons/com.apple.qmasterd.plist and changing its contents to match the following, which adds a 60 second delay before qmasterd starts:



<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.apple.qmaster.qmasterd</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/bash</string>
        <string>-c</string>
        <string>/bin/sleep 60; /usr/sbin/qmasterd</string>
    </array>
    <!-- The XML tags of the original post were stripped in publishing; the
         OnDemand value did not survive, so false (start at load) is an assumption. -->
    <key>OnDemand</key>
    <false/>
</dict>
</plist>
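
After saving the plist, reload the daemon (or simply reboot). These are the standard launchctl calls on the OS X releases this post targets:

sudo launchctl unload /Library/LaunchDaemons/com.apple.qmasterd.plist
sudo launchctl load /Library/LaunchDaemons/com.apple.qmasterd.plist

On the next load, qmasterd waits 60 seconds, giving the Xsan volumes time to mount first.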

Video: Creating a Device on Final Cut Server

Wednesday, July 29th, 2009

BRU Server 2.0 Now Available

Friday, July 24th, 2009

BRU Server 2.0 was released this week, offering a long-anticipated update to the popular cross-platform backup suite of applications. The two main features the TOLIS Group is highlighting are encryption of backup target sets and client-initiated backup.

Whether you run a BRU, Atempo, BakBone, Backup Exec or Retrospect environment, 318 can assist you with planning, testing, verifying or restoring backups. Contact your 318 account manager today for more details.

Final Cut Server on the Cheap

Monday, July 13th, 2009

At 318 we see a number of Final Cut Server installations. For most of those jobs you should use an Xsan, have editors edit in place and develop custom automations. But Final Cut Server doesn’t have to be super complicated, nor does it have to be super expensive to integrate. At the end of the day it’s all about what a customer is expecting to get out of the product – and that’s how the product is developed and priced: to scale with the customer’s needs.

One of the most marketable and best features of Final Cut Server is that it is a way to catalog assets. These assets can be stored anywhere you want, provided that they are reliably accessible by the server. Given a username and password, users of Final Cut Server can then access the assets whether or not they can actually get to them using server shares or flat file systems. This allows Final Cut Server to bring logic to an otherwise chaotic form of storing data.

Once catalogued you can then tag assets with metadata. This means that when you go to find assets in the future you can do so quickly and easily. You can preview, annotate and then download those assets, no matter where they are stored – even on a Drobo or some large Firewire media sets. And if you decide to edit-in-place in the future, the fact that assets are stored in a logical space (called a Device) means that if you see the value, that you have an easy upgrade into more online media, such as an Xsan volume – but you don’t have to do it all at once to start seeing value.

And value is the key aspect of Final Cut Server. You can spend as much or as little as you need in order to get value out of the product. Sometimes the smallest features are what organizations derive the most value from. Not always, but sometimes… And when you see the value of the smaller features, you can then decide, based on your organization’s goals and workflows, what else will be of value. If you’d like to discuss a Final Cut Server implementation, whether it’s a basic initial installation, complicated workflow integration or custom scripting, contact 318 for more information. We’re here to help, whether or not the implementation is on the cheap.

Vmeter & Vguard for Xsan

Thursday, July 9th, 2009

Vmeter is another great product from Vicom Systems that you can bolt onto your Xsan. Vmeter lets you gather statistics on how bandwidth is allocated across Xsan clients. But Vmeter doesn’t stop there. It also allows you to meter, or limit, the amount of bandwidth allocated to client machines, maximizing bandwidth for some users and tiering your performance allocation.

Vguard, also by Vicom Systems, is based on the technology included in Vmirror, the LUN mirroring solution, but goes a step further. Vguard allows you to set up another Xsan and use that SAN as a backup. We’re not going to go so far as to call it a snapshot, but it’s everything but.

Overall, Vicom integrates well with Xsan and fills some of the holes that the product itself has. For more information on Vmeter, Vguard or Vmirror, contact your 318 account manager today.