Archive for December, 2012

A Simple, Yet Cautionary Tale

Friday, December 28th, 2012

While we don’t normally cover web development security basics, or find much to report when poking around in iOS apps, a great example of independent investigative tech journalism related to these topics broke late last week. On Nick Arnott’s (@noir) blog Neglected Potential, he expands on a previous post about how data is stored within an app (nice shout-out to a personal fave, PhoneView by Ecamm) to look at how an app communicates with whatever services it may be hooked up to. Generally speaking, SSL and PKI don’t magically solve all our issues (as comically put here: “This is 2012 and we’re still stitching together little microcomputers with HTTPS and ssh and calling it revolutionary”), and end users reflexively clicking ‘accept’ on self-signed certificate warnings is the front line of the convenience vs. security battle. No, you shouldn’t send auth in plaintext just ’cause it’s SSL. (And yes, you should be seeding any straggler self-signed certs on the devices in your purview so you never need to say ‘just for this ONE site’s self-signed cert, please just click Continue’.) The fact that a banking user’s SSN was being sent to the app with every communication was… surprising, and it was corrected immediately after the heightened interest resulting from the aforementioned blog post.

Security via public trust

After the publicity surrounding the post, however, folks were reassured by an immediate audience with Simple’s Director of Engineering, Brian Merritt (@btmerr). Perhaps the flaw was considered too contrived for traditional channels (read: an email to their security team) at Simple to respond in a way that satisfied Mr. Arnott before he went ahead and published his post. “If only Jimmy had gone to the police,” the saying goes, “none of this would have happened” – please do note that while responsible disclosure was attempted, the issue is with PKI and not Simple itself, and updates were added to the post when clarifications were worth mentioning, to present the facts in an even-handed manner. A key take-away is that there is no live, zero-day exploit going on, just the relative ineffectiveness of PKI being exposed.

Although a process can enable the snooping of traffic, by default proxied SSL wouldn’t be allowed to start a session

But even more importantly, the fact that observing the traffic was even possible (thanks to CharlesProxy, also recently mentioned on @tvsutton’s MacOps blog) highlights the ease with which basic internet security can be thwarted, and how much progress is left to be made. Among the improvements out there, certificate pinning is one of those ‘new to me’ enhancements to PKI, and it luckily already has proposals in for review with the IETF. (An interesting contender from about a year ago is expounded on at the tack.io site.) There are quite a few variables involved that make intelligent discussion of the topic difficult for amateurs, but the take-away should be that you can inspect these things yourself, as convoluted as it may be to get to the root cause of security issues. Hopefully we’ll soon have easier-to-deploy systems that let us never ‘give up’ and use autosign again.
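If you’d like to poke at what a given server actually presents, the full certificate chain is one command away. A quick sketch (the hostname here is a placeholder):

openssl s_client -connect www.example.com:443 -showcerts </dev/null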

Thanks to Mr. Merritt, Michael Lynn and Jeff McCune for reviewing drafts of this post.

Manually delete data from Splunk

Thursday, December 27th, 2012

By default Splunk doesn’t delete the logging data it has gathered. Without some action to remove data, an All time search will continue to process every result from day one. That may be desirable where preserving and searching the data for historical purposes is necessary, but when using Splunk only as a monitoring tool the older data becomes superfluous over time.

Manually deleting information from Splunk is irreversible and doesn’t necessarily free disk space. Splunk users should only delete when they’ve verified their search results return the information they expect.

Enabling “can_delete”

No user can delete data until he’s been provided the can_delete role. Not even the admin account in the free version of Splunk has this capability enabled. To enable can_delete:

  1. Click the Manager link and then click the Access Controls link in the Users and authentication section.
  2. Click the Users link. If running the free version of Splunk, the admin account is the only account available; click the admin link. Otherwise, consider creating a special account just for sensitive procedures, such as deleting data, and assigning the can_delete role only to that user.
  3. For the admin account move the can_delete role from the Available roles to the Selected roles section.
  4. Click the Save button to keep the changes.

Finding data

Before deleting data, be sure that a search returns the exact data to be deleted. This is as simple as performing a regular search using the Time range drop-down menu to the right of the Search field.

The Time range menu offers numerous choices for limiting results by time, including Week to date, Year to date and Yesterday. In this case, let’s search for data from the Previous month:

Search time range

Wait for the search to complete, then verify the results returned are the results that need deleting.

Deleting data

Deleting the found data is as simple as performing a search and then piping it into a delete command:
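As a sketch of the pattern (the index and sourcetype here are placeholders, and the time modifiers express the Previous month range used above):

index=main sourcetype=syslog earliest=-1mon@mon latest=@mon | delete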


This runs the search again, deleting found results on the fly, which is why searching first before deleting is important. Keep in mind that a “search and delete” routine takes as long as or longer than the initial search and consumes processing power on the Splunk server. If deleting numerous records, proceed with one search/delete at a time to avoid overtaxing the server.

Configure network printers via command line on Macs

Wednesday, December 26th, 2012

In a recent Twitter conversation with other folks supporting Macs, we discussed ways to programmatically deploy printers. Larger environments that have a management solution like JAMF Software’s Casper can take advantage of its ability to capture printer information and push it to machines. Smaller environments, though, may only have Apple Remote Desktop or even nothing at all.

Because Mac OS X incorporates CUPS printing, administrators can utilize the lpadmin and lpoptions command line tools to programmatically configure new printers for users.

lpadmin

A basic command for configuring a new printer using lpadmin looks something like this:

lpadmin -p "salesbw" -v "lpd://192.168.1.10/" -D "Sales B&W" -L "2nd floor print center" -P "/Library/Printers/PPDs/Contents/Resources/HP LaserJet 8150 Series.gz" -E

Several options follow the lpadmin command:

  • -p = Printer name (queue name if sharing the printer)
  • -v = IP address or DNS name of the printer
  • -D = Description of the printer (appears in the Printers list)
  • -L = Location of the printer
  • -P = Path to the printer PPD file
  • -E = Enable this printer

The result of running this command in the Terminal application as an administrator looks like this:

New printer

lpoptions

Advanced printer models may have duplex options, multiple trays, additional memory or special features such as stapling or binding. Consult the printer’s guide or its built-in web page for a list of these installed printer features.

After installing a test printer, use the lpoptions command with the -l option in the Terminal to “list” the feature names from the printer:

lpoptions -p "salesbw" -l

The result is usually a long list of features that may look something like:

HPOption_Tray4/Tray 4: True *False
HPOption_Tray5/Tray 5: True *False
HPOption_Duplexer/Duplex Unit: True *False
HPOption_Disk/Printer Disk: *None RAMDisk HardDisk
HPOption_Envelope_Feeder/Envelope Feeder: True *False
...

Each line is an option. The first line above displays the option for Tray 4; the asterisk marks the current setting, False. If the printer has the optional Tray 4 drawer installed, enable this option when running the lpadmin command by following it with:

-o HPOption_Tray4=True

Be sure to use the option name to the left of the slash, not the friendly name with spaces after the slash.

To add the duplex option listed on the third line add:

-o HPOption_Duplexer=True

And to add the envelope feeder option listed on the fifth line add:

-o HPOption_Envelope_Feeder=True

Add as many options as necessary by stringing them together at the end of the lpadmin command:

lpadmin -p "salesbw" -v "lpd://192.168.1.10/" -D "Sales B&W" -L "2nd floor print center" -P "/Library/Printers/PPDs/Contents/Resources/HP LaserJet 8150 Series.gz" -E -o HPOption_Tray4=True -o HPOption_Duplexer=True -o HPOption_Envelope_Feeder=True

Running the lpadmin command with these -o options enables the available features, which appear when viewing the Print & Scan preferences:

Printer options

With these features enabled for the printer in Print & Scan, they also appear as selectable items in all print dialogs:

Printer dialog
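For pushing a queue like this out with Apple Remote Desktop (or any tool that can run a script as root), the whole thing wraps neatly in a short script. A minimal sketch using the example queue from above; the lpstat check simply keeps the script safe to run more than once:

#!/bin/sh
# Bail out quietly if the queue already exists
if lpstat -p salesbw >/dev/null 2>&1; then
    exit 0
fi
# Create and enable the queue with the options determined earlier
lpadmin -p "salesbw" -v "lpd://192.168.1.10/" \
    -D "Sales B&W" -L "2nd floor print center" \
    -P "/Library/Printers/PPDs/Contents/Resources/HP LaserJet 8150 Series.gz" \
    -E -o HPOption_Duplexer=True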


BSD as a useful tool

Monday, December 17th, 2012

Whether or not you know it, the world runs on BSD. You can’t send a packet more than a few hops without a BSD-derived TCP/IP stack getting involved. Heck, you’d be hard pressed to find a machine which doesn’t already have BSD code throughout the OS.

Why BSD? Many companies don’t want to deal with GPL code and the BSD license allows any use so long as the BSD group is acknowledged. This is why Windows has BSD code, Mac OS X is based on BSD (both in its current incarnation which pulls much code from FreeBSD and NetBSD as well as via code which came from NeXTStep, which in turn was derived from 4.3BSD), GNU/Linux has lots of code which was written while looking at BSD code, and most TCP/IP stacks on routers and Internet devices are BSD code.

In the context of IT tools, BSD excels due to its cleanliness and consistency. GNU/Linux, on the other hand, has so many different distributions and versions that it’s extremely difficult to do certain tasks consistently across distributions. Furthermore, the hardware requirements of GNU/Linux preclude using anything but a typical x86 PC with a full complement of associated resources. Managing GNU/Linux on non-x86 hardware is a hobby in its own right and not the kind of thing anyone would want to do in a production environment.

NetBSD in particular stands in stark contrast to GNU/Linux when deploying on machines of varying size and capacity. One could just as easily run NetBSD on an old Pentium 4 machine as on a tiny ARM-based SheevaPlug, a retired PowerPC Macintosh, or a new 16-core AMD Interlagos machine. A usable system could have 32 megs of memory or 32 gigs. Disk space could be a 2 gig USB flash drive or tens of terabytes of RAID.

Configuration files are completely consistent across architectures and hardware. You may need to know a little about the hardware when you first install (wd, sd and ld for disks; ex, wm, fxp, et cetera for NICs), but after that everything works the same no matter the underlying system.

Some instances where a BSD system can be invaluable are situations where the installed tools are too limited in scope to diagnose problems, where problematic hardware needs to be replaced or augmented quickly with whatever’s at hand, or where secure connectivity needs to be established quickly. Some examples where BSD has come in handy are:

In a warehouse where an expensive firewall device was flaky, BSD provided a quick backup. Removing the flaky device would have left the building with no Internet connection, so an unused Celeron machine with a USB flash drive and an extra Ethernet card made for a quick and easy NetBSD NAT / DHCP / DNS server for the building while the firewall device was diagnosed.

At another business, an expensive firewall device was in use which could not show network utilization in any detail without a separate monitoring computer being set up (and even then it was limited to very general and broad information), nor was it flexible about routing all traffic through alternate methods such as gre or ssh tunnels. Setting up an old Pentium 4 with a four-port Ethernet card gave us a router / NAT device which let us pass all traffic through a single tunnel to an upstream provider, testing the ISP’s suggestion that too many connections were running simultaneously (which wasn’t the case, but sometimes you have to appease the responsible party before they’ll take the next step). They can also now monitor network traffic quickly and easily using darkstat (http://unix4lyfe.org/darkstat/), monitor packet loss, see who on the local networks is causing congestion, et cetera. The machine serves three separate local network segments which can talk with each other. One segment is blocked from accessing the Internet because it contains Windows systems running Avid, but access can be turned on momentarily for software activation and similar things.

When another business needed a place to securely host their own WordPress blog, an unused Celeron machine was set up with a permissions scheme which regular web hosting providers typically won’t allow. WordPress is set up so that neither the running PHP code nor the www user can write to areas which allow for script execution, eliminating almost all instances where WordPress flaws can give full hosting abilities to attackers (which is how WordPress so often ends up hosting phishing sites and advertising redirectors).

DNS hosting, NAT or routing can be set up in minutes, a bridge can be configured to do tcpdump capture, or a web proxy can be installed to save bandwidth and perform filtering. An SMTP relay can be locally installed to save datacenter bandwidth.
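As a taste of how quick that can be, a transparent capture bridge on NetBSD is only a few commands. A sketch, assuming wm0 and wm1 are the two interfaces being bridged:

ifconfig bridge0 create
brconfig bridge0 add wm0 add wm1 up
ifconfig wm0 up
ifconfig wm1 up
tcpdump -i wm0 -w /tmp/capture.pcap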

So let’s say you think that a NetBSD machine could help you. But how? If you haven’t used NetBSD yet, then here are some tips.

The latest version is 6.0. The ISOs from NetBSD’s FTP server typically weigh in at around 250 to 400 megabytes, so CDs are fine. The installer is pretty straightforward, and the mechanics of installing on various architectures are not germane here.

After boot, the system is pretty bare, so here are things you’ll want to do:

Let’s look at a sample /etc/rc.conf:

hostname=wopr.example.com
sshd=YES
ifconfig_wm0="dhcp"
dhcpcd_flags="-C resolv.conf -C mtu"
ifconfig_wm1="inet 192.168.50.1 netmask 255.255.255.0"
ipfilter=YES
ipnat=YES
dhcpd=YES
dhcpd_flags="wm1 -cf /etc/dhcpd.conf"
ip6mode=router
rtadvd=YES
rtadvd_flags="-c /etc/rtadvd.conf wm1"
named9=YES
named_chrootdir="/var/chroot/named"
named_flags="-c /etc/namedb/named.conf"

So what we have here are a number of somewhat obvious and a few not-so-obvious options. Let’s assume you know what hostname, sshd, named9, ipnat and dhcpd are for. You can even make guesses about many of the options. What about ifconfig_wm0 (and its flags), ip6mode and other not-so-obvious rc.conf options? First, obviously, you can:

man rc.conf

dhcpcd is a neat DHCP client which is lightweight, supports IPv6 auto discovery and is very configurable. man dhcpcd to see all the options; the example above gets a lease on wm0 but ignores any attempts by the DHCP server to set our resolvers or our interface’s MTU. ifconfig_wm1 should be pretty self-explanatory.

ipnat and ipfilter enable NetBSD’s built-in IPFilter (also known as ipf) and its NAT. Configuration files may often be as simple as this for NAT in /etc/ipnat.conf:

map wm0 192.168.50.0/24 -> 0/32 proxy port ftp ftp/tcp
map wm0 192.168.50.0/24 -> 0/32 portmap tcp/udp 10000:50000
map wm0 192.168.50.0/24 -> 0/32
rdr wm0 0.0.0.0/0 port 5900 -> 192.168.50.175 port 5900

And lines which look like this for ipfilter in /etc/ipf.conf:

block in quick from 78.159.112.198/32 to any

There’s tons of documentation on the Internet, particularly here:

http://coombs.anu.edu.au/~avalon/

To quickly summarize, the first three lines set up NAT for the 192.168.50.0/24 subnet. The ftp line is necessary because of the mess which is FTP. The second line says to only use port numbers in that range for NAT connections. The third line is for non-TCP and non-UDP protocols such as ICMP or IPSec. The fourth redirects port 5900 of the public facing IP to a host on the local network.

The ipf.conf line is straightforward; ipf in many instances is used to block attackers since you wouldn’t turn on or redirect services which you didn’t intend to be public. Other examples are in the documentation and include stateful inspection (including stateful UDP; I’ll let you think for a while about how that might work), load balancing, transparent filtering (on a bridge), port spanning, and so on. It’s really quite handy.
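For a taste of the stateful syntax, a hypothetical /etc/ipf.conf fragment that drops unsolicited inbound traffic on wm0 while keeping state for everything initiated from inside might read:

block in on wm0 all
pass out quick on wm0 proto tcp from any to any flags S keep state
pass out quick on wm0 proto udp from any to any keep state
pass out quick on wm0 proto icmp from any to any keep state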

Next is BIND. It comes with NetBSD and if you know BIND, you know BIND. Simple, huh?
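If you don’t know BIND yet, a caching-only resolver is a gentle start. A minimal sketch of /etc/namedb/named.conf, assuming the stock root hints file NetBSD ships (the forwarder address is a placeholder):

options {
        directory "/etc/namedb";
        forwarders { 192.168.50.53; };
};

zone "." {
        type hint;
        file "root.cache";
};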

rtadvd is the IPv6 analogue of a DHCP daemon, and ip6mode=router tells the system you intend to route IPv6, which does a few things for you such as setting net.inet6.ip6.forwarding=1. You’re probably one of those “We don’t need that yet” people, so we’ll leave that for another time. IPv6 is easier than you think.

dhcpd is for the ISC DHCP server. man dhcpd and check out the options, but most should already look familiar.

So you have a system up and running. What next? You may want to run some software which isn’t included with the OS, such as Apache (although bozohttpd is included if you just want to set up simple hosting), PHP or MySQL, or some additional tools such as emacs, nmap, mtr, perl, vim, et cetera.

To get the pkgsrc tree in a way which makes updating later much easier, use CVS. Put this into your .cshrc:

setenv CVSROOT :pserver:anoncvs@anoncvs.netbsd.org:/cvsroot

Then,

cd /usr
cvs checkout -P pkgsrc

After that’s done (or while it’s running), set up /etc/mk.conf to your liking. Here’s one I use most places:

LOCALBASE=/usr/local
FAILOVER_FETCH=YES
SKIP_LICENSE_CHECK=YES
SMART_MESSAGES=YES
IRSSI_USE_PERL=YES
PKG_RCD_SCRIPTS=YES
PKG_OPTIONS.sendmail=sasl starttls
CLEANDEPENDS=yes

Set LOCALBASE if you prefer a destination other than /usr/pkg/. PKG_RCD_SCRIPTS tells pkgsrc to install rc.d scripts when installing packages. PKG_OPTIONS.whatever might be different for various packages; I put this one in here as an example. To see what options you have, look at the options.mk for the package you’re curious about. CLEANDEPENDS tells pkgsrc to clean up working directories after a package has been compiled.

After the CVS checkout has finished, you have a tree of Makefiles (and other files) which you can use as simply as:

cd /usr/pkgsrc/editors/vim
make update

That will automatically download, compile and install all prerequisites (if any) for the vim package, then download, compile and install vim. I personally use “make update” in case I’m updating an older package, FYI.

With software installed, the rc.conf system works similarly to the above. After adding Apache, for instance (www/apache24/), you can just add apache=YES to /etc/rc.conf. That sets Apache to launch at boot; to start it without rebooting, just run /etc/rc.d/apache start.

One package which comes in very handy when trying to keep a collection of packages up to date is pkg_rolling-replace (/usr/pkgsrc/pkgtools/pkg_rolling-replace). After performing a cvs update in /usr/pkgsrc, one can simply run pkg_rolling-replace -ru and come back a little later; everything which has been updated in the CVS tree will be compiled and updated in the system.
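In practice the whole refresh cycle is just a few commands:

cd /usr/pkgsrc
cvs -q update -dP
pkg_rolling-replace -ru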

Finally, to update the entire OS, there are just a handful of steps:

cd /usr
cvs checkout -P -rnetbsd-6 src

In this instance, the netbsd-6 tag specifies the release branch (as opposed to current) of NetBSD.

I keep a wrapper called go.sh in /usr/src so I don’t need to remember options. It makes sure all the CPUs are used when compiling and that the resulting files land in tidy, easy-to-find places.

#!/bin/sh
# $1 is the target architecture; any remaining arguments are build.sh operations
./build.sh -j `sysctl -n hw.ncpu` -D ../dest-$1 -O ../obj-$1 -T ../tools -R ../sets -m $*

An example of a complete OS update would be:

./go.sh amd64 tools
./go.sh amd64 kernel=GENERIC
./go.sh amd64 distribution
./go.sh amd64 install=/

Then,

mv /netbsd /netbsd.old
mv /usr/obj-amd64/sys/arch/amd64/compile/GENERIC/netbsd /
shutdown -r now

Updating the OS is usually only necessary once every several years or when there’s an important security update. Security updates which pertain to the OS or software which comes with the OS are listed here:

http://www.netbsd.org/support/security/

The security postings have specific instructions on how to update just the relevant parts of the OS so that in most instances a complete rebuild and reboot are not necessary.

Security regarding installed packages can be checked using built-in tools. One of the package tools is called pkg_admin; this tool can compare installed packages with a list of packages known to have security issues. To do this, one can simply run:

pkg_admin fetch-pkg-vulnerabilities
pkg_admin audit

A sample of output might look like this:

Package mysql-server-5.1.63 has a unknown-impact vulnerability, see http://secunia.com/advisories/47894/
Package mysql-server-5.1.63 has a multiple-vulnerabilities vulnerability, see http://secunia.com/advisories/51008/
Package drupal-6.26 has a information-disclosure vulnerability, see http://secunia.com/advisories/49131/

You can then decide whether the security issue may affect you or whether the packages need to be updated. This can be automated by adding a crontab entry for root:

# download vulnerabilities file
0 3 * * * /sbin/pkg_admin fetch-pkg-vulnerabilities >/dev/null 2>&1
5 3 * * * /sbin/pkg_admin audit

All in all, BSD is a wonderful tool for quick emergency fixes, for permanent low maintenance servers and anything in between.

iOS Backups Continued, and Configuration Profiles

Friday, December 14th, 2012

In our previous discussion of iOS backups, we hinted that configuration profiles are the ‘closest to the surface’ on a device. What that means is that when Apple Configurator restores a backup, the profile is the last thing applied to the device. Folks hoping to use web clips as a kind of app deployment need to realize that restoring a backup with the web clip in a particular place doesn’t work – the backup that designates where icons line up on the home screen gets laid down before the web clip is applied by the profile, so the clip gets bumped to whichever home screen comes next after the apps take their positions.

This makes a great segue into the topic of configuration profiles. Here’s a ‘secret’ hiding in plain sight: Apple Configurator can make profiles that work on 10.7+ Macs. (But please, don’t use it for that – see below.) iPCU could possibly generate usable ones as well, although the lack of full-screen mode in its interface is a hint that it may not see many updates on the Mac from now on. iPCU is all you have in the way of an Apple-supported tool on Windows, though. (Protip: activate the iOS device before you try to put profiles on it – credit @bruienne for this reminder.)

Also thanks to @bruienne for the recommendation of the slick p4merge tool

Now why would you avoid making, for example, a Wi-Fi configuration profile for use on a Mac with Apple Configurator? Well, there’s one humongous difference between iOS and Macs: individual users. Managing devices with profiles shows Apple tipping their cards: they seem to be saying you should think of only one user per device, and that if a setting is important enough to manage at all, it should always be enforced. The Profile Manager service in Lion and Mountain Lion Server has an extra twist, though: you can push out settings for Mac users or the devices they own. If you want to manage a setting across all users of a device, you can do so at the Device Group level, which generates extra keys beyond those present in a profile generated by Apple Configurator. The end result is that a Configurator-generated profile will be user-specific, and will fail with deployment methods that need to target the System. (Enlarge the above screenshot to see the differences – and yes, there’s a poorly obscured password in there. Bring it on, hax0rs!)

These are just more of the ‘potpourri’ type topics that we find time to share after being caught by peculiarities out in the field.

CrashPlan PROe Refresher

Thursday, December 13th, 2012

It seems that grokking the enterprise edition of Code 42’s CrashPlan backup service is confusing for everyone at first. I recall several months of reviewing presentations and having conversations with elusive sales staff before the arrangement of the moving parts and the management of its lifecycle clicked.

There’s a common early hangup for sysadmins trying to understand deployment to multi-user systems: the only current way to protect each user from another’s data is to lock the client interface (if instituted as an implementation requirement). What could be considered an inflexibility could just as easily be interpreted as a design decision that relates directly to licensing and workflow. The expected model these days is that a single user may have multiple devices, but enabling end users to restore files (as we understand it) requires granting one user access to the backup for an entire device. If that responsibility is designated to the IT staff, then the end user must rely on IT to assist with a restore instead of healing thyself, which isn’t exactly the direction business tech has been going for quite some time. The deeper point is that backup archives and ‘seats’ are tied to devices – encryption keys cascade down from a user, and interacting with the management of a device is, at this point, all or nothing.

This may be old hat to some, and just after the Pro name took on a new meaning (Code 42 hosted-only), the E-for-Enterprise version had seemingly been static for a spell – until things really picked up this year. With the 3.0 era came the phrase “Cold Storage”, which is neither a separate location in the file hierarchy nor intended for long-term retention (the way one might use Amazon’s new Glacier tier of storage). After a device is ‘deactivated’, its former archives are marked for deletion, just as in previous versions – this is just a new designation for the state of the archives. The configuration that determines when a deactivated device’s backup will finally be deleted can be designated deployment-wide or more granularly per organization. (Yes, you can find the offending GUID-tagged folder of the archives in the PROe server’s filesystem and nuke it from orbit instead, if so inclined.)

computerBlock from the PROe API

Confusion could arise from a term that looks similar to deactivation: ‘deauthorization’. Again, notice the separation between a user and their associated device. Deauthorization operates at the device level to put a temporary hold on its ability to log in and perform restores from the client; in API terms it’s most similar to a computerBlock. This still only affects licensing in that you’d need to deactivate the device to get back its license for use elsewhere (although jiggery-pokery may be able to resurrect a backup archive if the user still exists…). As always: test, test, test, distribute your eggs across multiple baskets, proceed with caution, and handle with care.

iOS and Backups

Wednesday, December 12th, 2012

If you’re like us, you’re a fan of our modern era: we are (for the most part) better off than we used to be when managing iOS devices. One such example is bootstrapping, although we’re still a ways away from traditional ‘imaging’. You no longer need Xcode to update the OS in parallel, iPCU to generate configuration profiles, and iTunes to restore backups. Nowadays, in our Apple Configurator world, you don’t interact with iTunes much at all (although it needs to be present to assist in loading apps, and it takes a part in activation).

So what are backups like now, and what are the differences between a restore from, say, iCloud versus Apple Configurator? Well, as it was under the previous administration, iTunes has all our stuff; practically our entire base belongs to it. It knows about our Apple ID, it has the ‘firmware’ or OS itself cached, we can rearrange icons with our pointing human interface device… good times. Backups with iTunes are pretty close to imaging, as an IT admin might define it. The new kids on the block (iCloud, Apple Configurator), however, have a different approach.

iOS devices maintain a heavily structured and segmented environment. Configuration profiles are bolted on top (more on this in a future episode), ‘userspace’ and many settings are closer to the surface, apps live further down towards the core, and the OS is the nougat-y center. Apple Configurator interacts with each of these modularly, and backups take the stage after the OS and apps have been laid down. This means that if your backup includes apps that Apple Configurator did not provide, those apps (and their corresponding sandboxed data) are no longer with us: the backup it makes cannot restore the apps or their placement on the home screen.

iCloud therefore stands head and shoulders above the rest (even if iTunes might be faster). It’s proven to be a reliable repository of backups, while managing a cornucopia of other data – mail, contacts, calendars, etc. It’s a pretty sweet deal that all you need is to plug in to power for a backup to kick off, which makes testing devices by wiping them just about as easy as it can get. (Assuming the apps have the right iCloud compatibility, so saved games and other sandbox data can be backed up…) Could it be better? Of course. What’s your radar for restoring a single app? (At this point, that can be accomplished only with iTunes and manual interaction.) How about more control over frequency and retention? Never satisfied, these IT folk.

Getting your feet wet with ACLs

Monday, December 3rd, 2012

As an old-school Unix geek, I have to admit that I dragged my feet on learning and really grasping Access Control Lists (ACLs) until embarrassingly recently. As you may know, *nix OSes of the past had only basic levels of access control over files, and for a surprisingly long time these simple controls were enough to get by. You were given three permission scopes per file, representing the ability to assign specific permissions for the file’s owner, a single group, and everyone else. This was enough for smallish deployments, but when you start having thousands of users, setting specific permissions per user gets needlessly complicated.

Enter ACLs, which grant extremely granular control over every operation you can do on a file. Need a folder to propagate a single user’s permissions to all files but not folders? No problem. Need to give read-only access and disallow deletes for a set of folders? No problem there either. It’s this fine level of control that makes ACLs important, even mandatory, in some specific cases.

Just a few days ago, I encountered a program that was behaving strangely. It was crunching a large number of files and creating the correct output files, but for some strange reason it was deleting them immediately after creating them. I noticed this as I Control-C’d the program and saw my file, only for it to be deleted once the process resumed. If only there were a way for the OS to disallow the offending program from removing its output files…

This is where ACLs come in and why they are so powerful. I was able to tell the OS to block the program from deleting anything in its output folder. Here are the commands I used to check and set the ACLs on my Mac:

>ls -le
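On a folder with no ACLs, the output is just an ordinary listing; something like this (names and dates are made up):

total 0
drwxr-xr-x  2 alt229  staff  68 Dec  3 10:05 output folder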

As you can see, there are no ACLs set. To set the deny-delete entry, I typed the following:

>chmod +a 'alt229 deny delete' 'output folder'

You see, the ACL has been set. I’ll try to delete something now.
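Listing again shows the new entry hanging off the folder; a hypothetical transcript, in which a delete inside the folder still succeeds:

>ls -le
total 0
drwxr-xr-x+ 3 alt229  staff  102 Dec  3 10:12 output folder
 0: user:alt229 deny delete
>rm 'output folder/file1.txt'
>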

What gives? Unlike Linux systems, ACL inheritance isn’t enabled by default when set from the command line. We’ll need to tweak our original command to enable it.

Clear the old permissions first, then reapply with the inheritance flags:
>chmod -R -N *
>chmod +a 'alt229 deny delete,file_inherit,directory_inherit' 'output folder'

Now permissions will inherit, but only to newly created items. You’ll see that the extra permissions have only been set on the newly created folder named ‘subfolder3’.

Rerun the command like this to apply it to existing folders.

>chmod -R +a 'alt229 deny delete,file_inherit,directory_inherit' 'output folder'

Now you won’t be able to delete any file contained within the main folder and its subfolders.
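A quick test confirms it (the file name is hypothetical):

>rm 'output folder/subfolder3/file1.txt'
rm: output folder/subfolder3/file1.txt: Permission denied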

There are many other special permissions available to tweak your system and help pull you out of strange binds you may find yourself in. Here’s a list of some of the other ACLs available in OS X that you can use to customize your environment, straight from the chmod man page.

The following permissions are applicable to all filesystem objects:

  • delete: Delete the item. Deletion may be granted by either this permission on an object or the delete_child right on the containing directory.
  • readattr: Read an object’s basic attributes. This is implicitly granted if the object can be looked up and not explicitly denied.
  • writeattr: Write an object’s basic attributes.
  • readextattr: Read extended attributes.
  • writeextattr: Write extended attributes.
  • readsecurity: Read an object’s extended security information (ACL).
  • writesecurity: Write an object’s security information (ownership, mode, ACL).
  • chown: Change an object’s ownership.

The following permissions are applicable to directories:

  • list: List entries.
  • search: Look up files by name.
  • add_file: Add a file.
  • add_subdirectory: Add a subdirectory.
  • delete_child: Delete a contained object. See the file delete permission above.

The following permissions are applicable to non-directory filesystem objects:

  • read: Open for reading.
  • write: Open for writing.
  • append: Open for writing, but in a fashion that only allows writes into areas of the file not previously written.
  • execute: Execute the file as a script or program.

ACL inheritance is controlled with the following permissions words, which may only be applied to directories:

  • file_inherit: Inherit to files.
  • directory_inherit: Inherit to directories.
  • limit_inherit: This flag is only relevant to entries inherited by subdirectories; it causes the directory_inherit flag to be cleared in the entry that is inherited, preventing further nested subdirectories from also inheriting the entry.
  • only_inherit: The entry is inherited by created items but not considered when processing the ACL.