Archive for April, 2008

318, Inc. Announces Immediate Availability of RepTools™ 2008

Thursday, April 24th, 2008

318, Inc. is proud to announce the immediate availability of our flagship software product, RepTools™ 2008.

RepTools™ 2008 is a customer relationship management (CRM) suite developed specifically for the entertainment industry. RepTools™ 2008 has nine integrated modules designed to efficiently manage the information businesses need for sales force automation, asset management, and customer relationships from the beginning to the end of production. With instantaneous access to every aspect of the production process and comprehensive metrics for detailed analysis, RepTools™ 2008 will let you worry about what matters the most: your customers.

Over 100 New Features:

  • Document Management – RepTools™ 2008 has an all new document management system that will automatically organize your storyboards, bids, treatments, callsheets, location photos, and more.
  • Completely New Interface – Built to be faster over your network and keep you more productive than ever before.
  • New QuickFind – Now you can find any of your projects, contacts, or bids in seconds.
  • Live Filters – See only what you decide is relevant and prevent information overload from bogging down your workflow.

For more information about RepTools™ 2008 and how it can dramatically increase the productivity of your business, please visit our website or call us toll-free at (888) 347-3318.

Setting Up Delegates in Microsoft Exchange

Saturday, April 19th, 2008

Exchange 2003 allows you to delegate administration granularly from Exchange System Manager (“ESM”), but this cannot be done with users that are already administrators (Domain Admins, Enterprise Admins, etc.).

First, create a user that you would like to have Administrator Delegate Access to Exchange and all of the information stores. Do not make this user a member of any admin security groups.

Next, create a security group for administering Exchange; this is usually called “exadmin”.

Start populating that security group with the people you want to have access to the Information Store(s). Next, open up ESM and right-click on the top level of the tree. Go to Delegate Control, and grant the newly created group Full Administrative Access.

Click “Next” until all of the windows close.

After you have given the group access, wait approximately 30 minutes for the settings to propagate through Exchange.

Members of this group can now take control of items in users’ inboxes and can also administer public folders via Outlook. They can also now run ExMerge.

Domain Controller Capacity Planning In Active Directory

Friday, April 18th, 2008

The memory requirement per DC depends on the number of DCs and how spread out they are. Any time we are doing this type of planning, we start with the number of users that interact with a given DC and how much replication it does with other DCs. If a DC is processing logins for 1,000 users, then it can easily run on a fairly unsubstantial host – as would be the case with a Global Catalog sitting at a smaller school. However, as the number of users interacting with a single DC goes up, the RAM requirement goes up. The recommended minimum is approximately 2GB of memory per 1,000 users and at least one dual-CPU system per 10,000 users – but again, loads may vary based on various aspects of the domain.

In terms of bandwidth utilization, the number of users logging in concurrently per school will use practically no bandwidth, relative to a fiber connection, if they have a DC at the school. However, if the school does not have a DC, then you can expect approximately 64k per concurrent login for remote users, not counting any network profiles or login scripts. More bandwidth allows for faster login windows, which in turn allows the system load to decrease faster after large numbers of users log in concurrently. Bandwidth utilization can be slightly higher than in other LDAP-type environments for Windows hosts, but not typically for Linux or Mac clients.
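The rules of thumb above can be folded into a quick sizing sketch. This is a rough illustration only; the function names, the 1GB floor, and the parameter defaults are our own assumptions, not vendor guidance:

```python
def dc_memory_gb(users, gb_per_1000=2.0):
    """Estimate DC RAM from the ~2GB per 1,000 users rule of thumb."""
    return max(1.0, users / 1000.0 * gb_per_1000)  # assume a 1GB floor for tiny sites

def login_bandwidth_kbps(concurrent_logins, kbps_per_login=64):
    """Estimate WAN bandwidth for logins at a site with no local DC
    (~64k per concurrent login, excluding profiles and login scripts)."""
    return concurrent_logins * kbps_per_login

print(dc_memory_gb(5000))          # 5,000 users -> 10.0 (GB)
print(login_bandwidth_kbps(200))   # 200 concurrent remote logins -> 12800 (kbps)
```

Running the numbers this way makes it easy to sanity-check a proposed host before deployment.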

Policies will create additional load. The more layered the policies, the higher this load will become. Flattening the policy structure as much as possible will help reduce this overhead, but in the beginning some monitoring and tuning will need to be done. By monitoring the Database Cache % Hit counter on the server, you will be able to track whether additional memory is required.

Disk space is typically not a limiting factor when planning an Active Directory deployment. Before factoring in the size of logs, a good setup should accommodate 4GB plus installers/drivers, plus 0.5GB per 1,000 users for non-Global Catalog DCs and an additional 50% for Global Catalogs.
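The disk guidance can be sketched the same way. One reading of the sizing above is assumed here – that the 50% Global Catalog overhead applies to the per-user data – and the names are ours:

```python
def dc_disk_gb(users, global_catalog=False, base_gb=4.0, gb_per_1000=0.5):
    """Estimate DC disk budget before logs: 4GB base plus installers/drivers,
    plus 0.5GB per 1,000 users; Global Catalogs carry an extra 50% on that data."""
    data = users / 1000.0 * gb_per_1000
    if global_catalog:
        data *= 1.5
    return base_gb + data

print(dc_disk_gb(10000))                       # 9.0 (GB)
print(dc_disk_gb(10000, global_catalog=True))  # 11.5 (GB)
```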

A brief introduction to Mac OS X Sandbox Technology

Thursday, April 17th, 2008

Note: For more information about the topics covered in this article, contact us for a professional consultation.

In all versions of OS X prior to Leopard, access control restrictions were limited to a security model referred to as Discretionary Access Control (DAC). The most visible form of DAC in OS X is its implementation of the POSIX file-system security model, which establishes identity-based restrictions on an object in the form of a subject’s user or group membership. Similarly, Access Control Lists are a form of discretionary control, though they are far more extensible and granular than the POSIX model. In such models, newly created objects or processes inherit their access rights from the creating subject, so that any spawned objects are not granted access rights beyond those of their creator. The key idea behind the DAC model is that the security of an object is left to the discretion of the object’s owner; an object’s owner has the ability to assign varying levels of access control to that object within the confines of the DAC implementation. The DAC model has for decades been a staple in the management of both object/process creation and access across all mainstream computer systems due to its user-centric nature. However, there is a persistent caveat in these implementations: in all mainstream implementations of such models, there exists a superuser which has the capability to completely bypass access restrictions placed on objects. In POSIX-based operating systems such as Unix, Linux, or OS X, this superuser exists in the form of the root user. The existence of such a loophole presents a bit of a paradox. On one hand, it introduces several obvious security ramifications by providing the capability to bypass the DAC model altogether; any processes invoked by the superuser inherit these “god mode” access controls and have free rein over the entire system. At the same time, the superuser account is a vital tool for the practical administration of data objects and system resources.
In a perfect world, this wouldn’t necessarily be a bad thing. Unfortunately that’s not the world we live in, and it is not uncommon to hear about processes being hijacked for malicious ends. If the compromised process was invoked by the superuser, then the entire system has been compromised, including all user data with it.

With 10.5 Leopard, Apple has introduced a new low-level access control model into the OS, based upon the mandatory access control (MAC) model. Conceptually, the MAC system implements restrictions based upon actors, objects, and actions. In such a system, the actor typically assumes the form of a process, thread, or socket. The object can be any type of resource, such as a file, directory, socket, or even a TCP/UDP network port, among others. The action is simply the request of the actor to be applied to the respective object, and varies depending on the type of object involved in the request. Referring back to the file system model: the actor would be a word processor, the object would be a .txt flat file, and the action would be a call to either read from or write to that text file. When the actor requests access to the object, the MAC authorization system evaluates security policies and decides whether the request can proceed or should be prohibited. In a pure MAC model, object or process ownership is generally not a consideration; individual users do not have the ability to override defined policy.
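The actor/object/action triad can be illustrated with a toy policy evaluator. This is purely conceptual: the rules, names, and first-match semantics below are ours, and bear no relation to how Leopard’s kernel actually evaluates policy.

```python
ALLOW, DENY = True, False

# (actor, action, object-path prefix) -> decision; first matching rule wins
policy = [
    ("wordproc", "read",  "/docs/", ALLOW),
    ("wordproc", "write", "/docs/", ALLOW),
    ("wordproc", "write", "/",      DENY),   # writes anywhere else are refused
]

def authorize(actor, action, obj, default=DENY):
    """Consult policy for every request; ownership plays no role in the decision."""
    for rule_actor, rule_action, prefix, decision in policy:
        if actor == rule_actor and action == rule_action and obj.startswith(prefix):
            return decision
    return default

print(authorize("wordproc", "write", "/docs/letter.txt"))  # True
print(authorize("wordproc", "write", "/etc/passwd"))       # False
```

Note that unlike DAC, nothing about the object’s owner enters the decision – only the policy does.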

Leopard enforces the MAC model via a new framework, architected from TrustedBSD’s MAC framework. This framework introduces “sandbox” access control capabilities which allow a developer or user to apply access control policies to a process, restricting its privileges to various specified system resources. The restrictions are generally enforced upon acquisition, so any active file descriptors would not be immediately affected by policy changes; however, any new open() operations would be subject to the new restrictions. In a fashion similar to the DAC model, new processes and forks inherit the access restrictions of their parent. In Leopard, these restriction policies can be compiled into any given program, or they can be applied to any executable at runtime.

While Leopard’s MAC framework is based on TrustedBSD’s, its implementation deploys only a subset of the control points provided by the TrustedBSD implementation. Noticeably absent are the majority of the Security Policy Modules available for TrustedBSD and FreeBSD implementations, such as Biba, MLS, or NSA’s FLASK/TE (implemented in SEDarwin), though perhaps some day we’ll see some of these ported to Leopard’s MAC framework. For now, Apple has offered its own Security Policy Module dubbed “Seatbelt”, which is implemented as a KEXT installed at /System/Library/Extensions/seatbelt.kext. As of 10.5.2, the feature set of Seatbelt seems to be very much in flux. The only documented way to apply these controls in code is via the sandbox_init() function. Utilizing this function provides a way for an application programmer to voluntarily restrict access privileges in a running program. sandbox_init() is very limited at this point, providing only five predefined constants:

• kSBXProfileNoInternet – disables TCP/IP networking
• kSBXProfileNoNetwork – disables all socket-based networking
• kSBXProfileNoWrite – disables write access to all filesystem objects
• kSBXProfileNoWriteExceptTemporary – disables write access to filesystem objects except /var/tmp and `getconf DARWIN_USER_TEMP_DIR`
• kSBXProfilePureComputation – all OS services are restricted

An application can utilize one of these constants to restrict capabilities in spawned processes or threads, minimizing the potential damage that can occur in the event that the process is compromised. Figure 1 shows an example implementation of the kSBXProfileNoWrite profile in code:

Figure 1.





#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sandbox.h>

int main()
{
    int sb, fh;
    char *errbuf;
    char rtxt[255];
    char wtxt[255] = "Sandboxed you aren't\n\n";

    // init our sandbox; if sandbox_init() doesn't return 0 then there's a problem
    sb = sandbox_init(kSBXProfileNoWrite, SANDBOX_NAMED, &errbuf);
    if ( sb != 0 ) {
        printf("Sandbox failed\n");
        return sb;
    }

    // reads are still permitted under kSBXProfileNoWrite
    fh = open("test.txt", O_RDONLY);
    if ( fh == -1 ) {
        perror("Read failed");
    } else {
        read(fh, rtxt, 255);
        close(fh);
        printf("FileContents:\n %s\n", rtxt);
    }

    // writes are not: this open() should fail inside the sandbox
    fh = open("test.txt", O_RDWR | O_CREAT, 0644);
    if ( fh == -1 ) {
        perror("Write Failed");
    } else {
        write(fh, wtxt, strlen(wtxt));
        close(fh);
        printf("Successfully wrote file!\n");
    }

    return 0;
}
Compiling and running this code returns the following results:

% ./sandBoxTest
FileContents:
 hello
Write Failed: Operation not permitted

So, even though our POSIX permissions allow read/write access to the file, the sandbox prevents it, regardless of user. Running the program even with root privileges yields the same results.

Currently, the options provided by Apple are very all-or-nothing, particularly in the area of file system restrictions. In this way, Seatbelt acts more as a clumsy broadsword, lopping off functionality in large chunks at a time for the sake of security. In this form, Seatbelt has minimal use outside of very vertical applications or the increasingly rare applications that don’t utilize network communication in one way or another. Though these limitations will significantly limit widespread adoption, I believe it would be a mistake for a developer to shrug off Seatbelt as a whole.

Luckily, Seatbelt has an alternate application, though it is not currently officially supported. As I mentioned earlier, it is possible to apply sandbox restrictions to any pre-compiled executable at runtime. This is done via the sandbox-exec binary, using predefined profiles housed at /usr/share/sandbox which provide for fine-grained control of resources. These profiles use allow/deny rules in combination with regular expressions to specify system resource access. There are numerous control points, such as network sockets, signals, sysctl variables, forking abilities, and process execution, most of which can be tuned with fairly decent precision by utilizing a combination of regex and static inclusion sets. Filesystem objects and processes are identified via POSIX paths; there is currently no target validation performed, either via checksums or digital signing.

Figure 2 shows a sample sandbox profile that can be applied to restrict an application from making outbound communications, and restricts file system writes to temporary directories and the user’s preferences folder. The ‘debug deny’ line tells Seatbelt to log all policy violations. This proves very useful for determining the filesystem and network activity of an untrusted program, and facilitates a quick-and-easy way to do basic forensic testing on any program acquired from an untrusted source. Figure 3 shows example log entries from a network-outbound violation and a file-write violation, respectively.

To apply a sandbox profile to a standard application bundle, you must pass sandbox-exec the path of the Mach-O binary file, which is typically located in ‘Contents/MacOS/’ relative to the application’s bundle. You can specify a sandbox profile by name using the -n flag if the profile resides in /usr/share/sandbox, or you can specify a full path to a profile with the -f argument. Carbon applications may require the LaunchCFMApp wrapper to properly execute. See Figure 4 for example syntax for both Cocoa and Carbon applications.

Figure 2. Example sandbox profile

(version 1)
(debug deny)
(allow default)
(allow process*)
(deny network-outbound)
(allow file-read-data file-read-metadata
  (regex "^/.*"))
(deny file-write*
  (regex "^/.*"))
(allow file-write*
  (regex "^/Users/johndoe/Library/Preferences.*"))
(allow file-write* file-read-data file-read-metadata
  (regex "^(/private)?/tmp/"))
(import "")

Figure 3. Example log entries from TCP and filesystem write violations

3/4/08 12:15:10 AM kernel dig 79302 NET_OUTBOUND DENY l= unavailable r= UDP 1 (seatbelt) 

3/4/08 12:43:05 AM kernel sh 79147 FS_WRITE_DATA SBF /Users/Shared/test.txt 13 (seatbelt) 

Figure 4. Using sandbox-exec to sandbox Cocoa and Carbon applications.

Cocoa: % sandbox-exec -n localonly /Applications/

Carbon: % sandbox-exec -n localonly /System/Library/Frameworks/Carbon.framework/Versions/A/Support/LaunchCFMApp /Applications/Microsoft\ Office\ 2004/Microsoft\ Word

Unfortunately, the system seems to be far from finalized, and even some example profiles provided by Apple do not seem to be completely functional, or contain unimplemented control points. One example of this is seen when trying to implement IP-based network restrictions. Apple provides example entries for layer-3 filtering in the included profiles, but they are commented out and elicit a syntax error when run. Additionally, Apple includes a rather ominous warning in each of its provided profiles, stating that the current profiles are deemed Apple System Private Interfaces and may change at any time.

However, that’s no reason to completely ignore the technology. Given what has currently been implemented, and taking into consideration control points which are alluded to in Apple’s own embedded comments, Seatbelt shows significant promise to provide very fine-grained resource access capabilities. By utilizing these restrictions, applications and users can ensure that even in a worst-case scenario, the potential damage from an errant or hijacked process is mitigated and compartmentalized. There are many real-world situations where this type of access control model fits very well, particularly as a complement to standard DAC systems: it can be used to mitigate privilege escalation opportunities for shell users, to confine processes to defined resources (and thereby protect against hacked processes), or as a forensic tool to determine software malfeasance. By providing these capabilities through the Seatbelt policy module, and by providing a path towards implementing more complex MAC policy modules, Leopard’s new MAC framework ushers in a new level of security and access control capabilities for OS X.

Windows XP: No longer being sold after June

Tuesday, April 15th, 2008

Microsoft has announced that as of June 30th, 2008, Windows XP will no longer be distributed. You will still be able to buy machines that run Windows XP, but it will become increasingly difficult in the months that follow. Windows XP will be supported by Microsoft until April 14th, 2014; however, only security-specific patches will be released for XP after June.

Open XML Draft Approved

Saturday, April 12th, 2008

The Microsoft Open XML standard is what Microsoft is hoping will become the standard in document formats. The first step in that process is now complete, with Office Open XML being accepted as a draft standard by ISO, the International Organization for Standardization. ISO is the world’s largest developer of standards and has no governmental affiliation. Office 2007 created a stir by omitting the Open Document Format (ODF), which is already an ISO standard. Many had hoped that ODF would help spark an uptick in interest in alternative office applications as a replacement for the Microsoft Office suite. However, the ODF standard has seen slow adoption, in large part due to Microsoft’s omission of it from Office. If Microsoft’s Open XML format receives ratification from ISO as a standard, it would introduce a pair of rival standards into the document community. In many ways, the unofficial standardization of documents around the Microsoft doc format over the past decade has led to an unparalleled ability for organizations to trade information freely. However, many (especially in the open source community) feel that allowing Microsoft to hold all the cards is a dangerous thing, and that bringing about a truly open standard such as ODF will give organizations more options in the word processing suites they can use.

The battle between ODF and Open XML is likely to rage on for years as the appeals, votes, and red tape continue to drag on. Just to put things in perspective, ISO rejected the Open XML proposal in September of 2007, and after a rewrite based on input from vendors and members of ISO it was voted a draft standard in March. The appeals process doesn’t close until June, but we’re likely to see more red tape for a while given the interests of the parties involved.

Cleaning Exchange Queues in Windows Server 2003

Monday, April 7th, 2008

While working in Exchange, you will run into issues where the Exchange server is getting heavily SPAMmed. Oftentimes, if it’s not due to being an open relay, it will be by NDR (non-delivery report) abuse. NDRs will not be covered in detail in this article.

Let’s say that when viewing items in the queue, you identify that you are being SPAMmed via NDRs, and you don’t have the luxury of time to create another queue to dump things into and clear it out the old-fashioned way – possibly losing “good” e-mail along the way.

There’s hope.

Go to: Support Tools/Aqadmcli

…and download aqadmcli.exe

Save this somewhere on the Exchange server.

Type the following at the command prompt:

#> aqadmcli.exe
> setserver ‘yourexchangeserver’
> delmsg flags=SENDER,
(at this point, after you hit Enter, it will go through and delete ONLY those e-mails from the queue)
> end (this quits aqadmcli)

That’s it. You’re done. Wasn’t that easy?

More information on this time-saving utility is available here:,289483,sid43_gci1218279,00.html. For a list of commands, run: aqadmcli.exe ?

Remote 10.5 Mac OS X Server Installations

Friday, April 4th, 2008

This setup works best if there is a Boot and Data partition.

Coordinate to have an onsite person during the install (able to follow directions, and semi-technical if possible).

Check all install requirements, such as processor usage and RAM (10.5 requires 1GB to install).

Back up all boot drive settings if it is not a clean install (i.e. Retrospect, Kerio directory store, etc.).

If you have access to take over the system’s mouse and keyboard:

Enable port forwarding of port 22 (Secure Shell) to an internal workstation. Enable the 318admin account on that system (desktop, not laptop). Install the Server Admin Tools on that workstation (it must be the 10.5 Server version). Verify that sleep is disabled on the workstation in Energy Saver.



Create a tunnel from your localhost:5901 to the remote DVD installer via the workstation you piggyback off of:

ssh -L 5901: -Nv

Go to File > “Add by Address” and enter in Leave the username blank. Enter the password: G84214S0.

You will now be able to control the boot operating system.

Set colors down to greyscale (less CPU load on the Install Disc).

If you would rather let the workstation’s user continue to work (make sure they are told not to shut down their system), or if you would rather not have to change the port forwards after the fact for 5900:

Blacklisting IP addresses on SonicWALLs

Friday, April 4th, 2008

Blacklisting an IP from the WAN on a SonicWALL

1. Log in to the SonicWALL.
2. Go to Firewall Rules.
3. Go to Matrix.
4. Go to WAN -> LAN.
5. Create a rule.
6. For Source, choose Create Network.
7. Change Zone to WAN.
8. Name it whatever you want (i.e. Blacklisted IP1).
9. Enter the IP.
10. Save it.
11. On the firewall rule, make sure to click the check box for Deny.
12. Source is Blacklisted IP.
13. Destination is ANY.
14. Service is ANY (if you want to block all traffic).
15. Save it.
16. Move it up in the chain to be the first rule.
17. Test it.

Backing Up With Carbon Copy Cloner

Wednesday, April 2nd, 2008

The newest version of Carbon Copy Cloner, now version 3.1, has a number of features that move it closer to a viable automated backup system.

Carbon Copy Cloner is now a wrapper application that runs a series of terminal commands to accomplish its goal, but it does them very well.

Compatibility: 10.4 or higher. Universal Binary


Cloning: As its name suggests, the first feature of this software is to clone one drive to another. This is how the program started, and it was one of the few good third-party applications for drive cloning on the Mac.

The software interface is simple: choose a source volume and a destination volume. If you are cloning, by default you want to overwrite the destination drive.

New Feature: There is now a built-in feature that tests the “bootability” of the target drive after the clone. This will let you know whether the target drive can be used as a boot volume.

Local Backup: Instead of copying all data from the local drive to the target drive, you can now choose to do incremental backups of selected files. The source file system tree is displayed, and you can check the boxes for the items you wish to back up. This model is good because you can choose a user’s directory to back up but then deselect the Music folder within it. Any new files or folders in the user’s directory will get backed up, but any files or folders in the Music folder will not.

Destination in subdirectory & pre- or post-copy scripts: To copy data into a subdirectory of the target drive, you must pull down the application menu (between the Apple menu and the File menu) and choose Advanced Settings. This will give you a field to enter a pathname specifying a subdirectory to receive the copied files. You will also see fields to specify scripts to run either before or after the copy. Classically this is used to stop and then start a database, or to execute a database export for backup. I have also seen commands to gzip a directory structure and then decompress it after the copy.

Incremental Backups: When you choose your destination, you can choose whether to do a full copy or an incremental copy. In addition, you are presented with options to choose whether files are deleted if they are not on the source, and whether to preserve files that are deleted or overwritten. This option creates a directory at the destination point named _CCC_Year_Month_Time, which indicates that the files inside are the files that would have been overwritten by the incremental backup. As of now there is no way to automatically remove these files without further scripting or user intervention. If you are at a client that makes use of CCC and the destination drives are reaching capacity, these are the files to remove to conserve space.
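Until CCC automates that cleanup, the archive folders can be found and pruned with a short script. This is a sketch under the assumption stated above – that the archive directories sit at the destination root and begin with the _CCC_ prefix – and the function names are ours:

```python
import os
import shutil

def ccc_archives(destination):
    """Yield _CCC_* archive directories sitting at the destination root."""
    for name in sorted(os.listdir(destination)):
        path = os.path.join(destination, name)
        if name.startswith("_CCC_") and os.path.isdir(path):
            yield path

def purge_ccc_archives(destination, dry_run=True):
    """List (and, with dry_run=False, delete) CCC's incremental-backup archives."""
    removed = []
    for path in ccc_archives(destination):
        if not dry_run:
            shutil.rmtree(path)
        removed.append(path)
    return removed
```

Run it with dry_run=True first and review the list before deleting anything.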

Filtering: This version of CCC has filtering. The gearbox next to the source drive selector will be available if the source drive is local. These filters will show what you have chosen not to include. In addition, you can add exceptions to this filter by file extension or by pathname. The latter works the same way as exclusions in rsync: if you add an entry to this list, any pathname that matches this string will be ignored.

For example: if you back up the /Users/ directory but place “iTunes” in the advanced filter, it will back up all the user folders but ignore all of the iTunes folders inside them.

Disk Images as destinations: This allows you to create a sparse image file, with encryption should you choose it, to be the destination of the backups. The image file needs to be local. You could use other scripts to move these files around.

Remote Backup: A recent update to this feature makes it a more viable solution for cost-effective backup. In the interface you can choose the source to be a remote Mac or the destination to be a remote Mac, but not both. If you choose the source to be a remote Mac, you cannot apply the file filters. In most circumstances I prefer to set this up on the client computer that is to be backed up and then choose the remote computer to be the server that will receive the data. In either case, for a remote computer to be the source or the destination, you have to generate an authorization package installer.

This creates an SSH encryption key that is installed into /var/root/.ssh, which allows the rsync process to run over an SSH tunnel without username:password authorization. This package needs to be installed on both the source and destination computers. These installers will now play nice with each other and concatenate their encryption keys, so multiple sources can write to the same computer.

Note: Computers set as destinations must have SSH enabled. This is normally done by enabling “Remote Login” in the Sharing pane of System Preferences.

Scheduling Backups: Once you have the specifics of the copy process set, you can choose to save the task. This will open a new window called “Backup Task Scheduler”. In it you will see a list of scheduled tasks. These tasks correspond to entries in /Library/LaunchDaemons, and each one will run as a daemon process called ccc_helper.

You can schedule operations on an hourly, daily, weekly, or monthly basis, or whenever the drive is connected. The last option is only viable for a backup that writes to a local drive.

The settings tab allows you to specify whether the backup destination will be determined by pathname only or by the unique UUID of each drive.

You can access existing schedules by going to the Application menu again and choosing “Scheduled Tasks…”

NOTE: If the destination drives at the client rotate onsite and offsite, there are two things to consider: the scheduled backups should NOT use the unique UUID, and both drives should have the same name so that they can receive remote backups properly. The good news is that the ccc_helper daemon is smart enough not to write into the /Volumes directory if no drive there matches the destination name.

The description field is by default populated with common language describing the specifics of the backup script. This can be edited to be anything that you like.

Cancelling a copy in process: If you can see the window for the ccc_helper app, you can press the cancel button. If you do so, you are given two options: skip this execution, in which case the task will relaunch at the next scheduled time, or defer. If you choose to defer, you can have the newly selected time be the execution time from now on. This is probably the only drawback to having the backup run on a client computer: the user can cancel the process on their own.

Conclusions: All in all, you get a lot with this simple product, and it can be of great use even in limited applications. If your client is mostly Mac and does not want to invest in an expensive backup solution, it can go a long way toward backing them up.

Pros: It is donationware, meaning it is freeware that will ask you for a donation now and again. It uses existing technology on your system, namely rsync and SSH. It is HFS+ metadata aware: it is ccc_helper that does the work, and it will copy the HFS+ metadata over SSH. It writes out its own CCC log file.

Cons: It does not handle failure gracefully: if it cannot perform its actions, it will bring up an on-screen alert that will stay until dismissed. Using incremental backup on a very large file list can be memory intensive; this is more pronounced in local copies, as it seems to break the rsync operations down on a folder-by-folder basis with a remote destination. Filtering is only available if the source is local. Mac only: no support for any other operating system.