Archive for March, 2012

Auditing Email in Google Apps

Thursday, March 22nd, 2012

In order to address situations where a Google Apps admin needs access to a user’s mail data, Google provides an Email Audit API. It allows administrators to audit a user’s email and chats, and also download a user’s complete mailbox. While Google provides this API, third-party tools are required in order to make use of the functionality. While there are some add-ons in the Google Apps Marketplace that make email auditing available, the most direct method of gaining access to this is with a command-line tool called Google Apps Manager. GAM is a very powerful management tool for Google Apps, but here we will focus on just what’s required to use the Email Audit API.

Using GAM requires granting access, with a Google Apps admin account, to a specific system. An OAuth token for the domain is stored in the GAM folder. Also, if you’re going to download email exports, it’s necessary to generate a GPG key and upload that to Google Apps. In light of both of these factors, it’s best to designate a specific system as the GAM management system. GAM is a collection of Python modules, so whatever system you designate should be something that has a recent version of Python. We’ll assume that we’re using a fairly recent Mac.

What we’ll do is download GPG and generate a GPG key, and then download GAM and get it connected to Google Apps.

Generating a GPG key

The GPGTools installer is here:

After installation, open up Terminal in the account that you’ll be using to manage Google Apps.

Run the command:

$ gpg --gen-key --expert

For type of key, choose “RSA and RSA (default)”. For key size, you can probably safely choose a smaller key. Bear in mind that all your mailbox exports will be encrypted with this key and then will need to be decrypted after download. This can take a non-trivial amount of time, especially for larger mailboxes, and a larger key will mean much longer encryption and decryption times. A 1024-bit key should be fine in most cases.

When asked for how long the key should be valid, choose 0 so that the key does not expire.

Next you’ll be prompted for your name, email address and a comment. This information is not, at the moment, used by Google for anything. However, in the interests of long-term usability, I would recommend using the email address and name of an actual admin for the Google Apps domain.

Finally, you’ll be asked for a passphrase. This passphrase will be required in order to decrypt the downloaded mailboxes. Do not forget it. You will be unable to decrypt the downloads without it.
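If you’d rather script the key creation (for instance, to rebuild the GAM system later), gpg also supports unattended generation from a parameter file. A minimal sketch matching the choices above – the name, email address, and passphrase here are placeholders, not values Google requires:

```
%echo Generating the Email Audit export key
Key-Type: RSA
Key-Length: 1024
Subkey-Type: RSA
Subkey-Length: 1024
Name-Real: Apps Admin
Name-Email: admin@example.com
Expire-Date: 0
Passphrase: choose-a-strong-passphrase
%commit
```

Save this as gam-key.params and run gpg --batch --gen-key gam-key.params.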

When key creation is complete, you’ll see something like this:

pub 1024R/0660D980 2012-03-22
Key fingerprint = A642 0721 2D4A 9150 6ED1 DBD7 AFFF 992F 0660 D980
uid Apps Admin
sub 1024R/6D1C197B 2012-03-22

Make a note of the ID of the public key, which in this case is 0660D980. You’ll need the ID to upload the key to Google.

Installing GAM

Prior to installing GAM, you’ll want to open up your default browser and log in to your Google Apps domain as an administrator. It’s not technically necessary – you can log in as an admin when the GAM install needs access – but you’ll find it authenticates more reliably if you log in in advance.

GAM can be found here:

Download the python-src package, and put it somewhere in the home directory of the same user that generated the GPG key. The most reliable way to invoke GAM is using the python command to call the script:

$ python ~/Desktop/gam-2/gam.py

This assumes it was unzipped to the Desktop of the user account. Change the path where appropriate. To make this a bit easier, you can create an alias that will allow you to call it with just “gam”:

$ alias gam="python ~/Desktop/gam-2/gam.py"

From here on, we’ll assume you did this. Bear in mind that aliases created this way only last until the session ends (i.e. the Terminal window gets closed).
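Since the alias is per-session, you may want it to survive new Terminal windows. One way – assuming bash, the ~/Desktop/gam-2/ path from above, and that GAM’s entry script is gam.py – is to append the alias to your ~/.bash_profile:

```shell
# Add the gam alias to ~/.bash_profile (skipping it if it's already there).
# The GAM path and gam.py script name are assumptions; adjust to your install.
PROFILE="$HOME/.bash_profile"
grep -qs 'alias gam=' "$PROFILE" || \
  echo 'alias gam="python ~/Desktop/gam-2/gam.py"' >> "$PROFILE"
```

New Terminal sessions will then pick up the alias automatically.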

The first command you’ll need to run is:

$ gam info domain

You’ll be asked to enter your Google Apps domain, and then you’ll be asked for a Client ID and secret. These are only necessary if you’ll be using Group Settings commands, which we won’t. Press enter to continue. You’ll now be presented with a list of scopes that this GAM install will be authorized for. You can just enter “16” to continue with all selected, or you can select just Audit Monitors, Activity and Mailbox Exports for the Email Audit functions.

You should now see a web page asking you to grant Google Apps Manager access. If you’re not logged in as an administrator, you can do that now, though you may experience some odd behavior. Once you grant access, return to the Terminal window and press Enter. At this point, GAM will retrieve information about your domain from Google Apps, and you’ll be returned to a shell prompt. GAM is installed and almost ready to use.

Uploading the GPG Key

There’s one final step to take before mailbox export requests are possible. The GPG key you generated earlier must be uploaded to Google. What you can do is have gpg export the key and pipe that directly to GAM. You’ll need the ID of the key so that you export the correct one to GAM. If you didn’t make a note of the ID earlier, you can see all the available keys with:

$ gpg --list-keys

pub 1024R/0660D980 2012-03-22
uid Apps Admin
sub 1024R/6D1C197B 2012-03-22

The ID you want is that of the public key. In this case, 0660D980. Now export an ASCII armored key and pipe it to GAM.

$ gpg --export --armor 0660D980 | gam audit uploadkey

Now you’re ready to request mailbox exports.

Dealing with mailbox exports

To request a mailbox export, use:

$ gam audit export <username> includedeleted

This will submit a request for a mailbox export, including all drafts, chats, and trash. You can leave off “includedeleted” if you don’t want the user’s trash. GAM will show you a request ID, which you can use to check the status of a request.

To check the status of one request, use:

$ gam audit export status <username> <request ID>

If you leave off either username or request ID, you’ll be shown the status of all requests, pending and completed. To download a request you can use:

$ gam audit export download <username> <request ID>

You must specify both the username and the request ID. Please note that GAM will download the files to the current working directory. The files will be named “export-<username>-<request ID>-<number>.mbox.gpg”. The numbers will start at 0. In order to decrypt the downloaded files, you’ll need to use GPG.

$ gpg --output <filename>.mbox --decrypt <filename>.mbox.gpg

This will decrypt one of the files. The predictability of the names makes it easy to programmatically decrypt all the files. For instance, if the username were bob, the ID were 53521381, and there were 8 files, you could use this command:

$ for i in {0..7}; do gpg --output export-bob-53521381-$i.mbox --decrypt export-bob-53521381-$i.mbox.gpg; done

When decryption is completed, you can take the resulting mbox files and import them into any mail client that supports mbox – Thunderbird is a good choice, though other mbox-capable clients should work as well – or you can just look at them in a text editor.
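If you’d rather not count the files by hand, you can glob for everything matching the export naming pattern instead. A small sketch – it assumes gpg is on your PATH and the downloads are in the current directory:

```shell
# Decrypt every downloaded export in the current directory.
# Filenames are assumed to follow the export-<username>-<request ID>-<n>.mbox.gpg pattern.
for f in export-*.mbox.gpg; do
  [ -e "$f" ] || continue                    # the glob matched nothing; skip
  gpg --output "${f%.gpg}" --decrypt "$f"    # strip the .gpg suffix for the output name
done
```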

Further Reading

For more details about using GAM or the Email Audit API, please consult the official documentation.

Google Apps Manager Wiki:

Google’s Email Audit API reference:

Using Nagios MIBs with ESX

Thursday, March 22nd, 2012

What is a MIB

A MIB is a Management Information Base. It is an index based upon a network standard that categorizes data for a specific device so that SNMP servers can read the data.

Where to Obtain VMware vSphere MIBs

VMware MIBs are specific to the VMware version, though you can try to use the ESX MIBs for ESXi. They can be downloaded from VMware’s website: click on VMware vSphere, then find the version of ESX that you are running under “Other versions of VMware vSphere” (the latest version will be the page that you’re on). Click on “Drivers & Tools”, then click on “VMware vSphere x SNMP MIBs” where “x” is your version.

How to add VMware vSphere MIBs into Nagios

  • Download the VMware vSphere MIBs from VMware’s website (see above)
  • Copy the MIB files to /usr/share/snmp/mibs/
  • Run check_snmp -m ALL so it detects the new MIBs

Editing snmpd.conf and starting snmpd on ESX

  • Stop snmpd: service snmpd stop
  • Backup snmp.xml: cp /etc/vmware/snmp.xml /etc/vmware/snmp.xml.old
  • Edit snmp.xml with your favorite CLI text editor to have the following:


  • Backup snmpd.conf: cp /etc/snmp/snmpd.conf /etc/snmp/snmpd.conf.old
  • Use your favorite CLI text editor and edit /etc/snmp/snmpd.conf
  • Erase everything in it.
  • Add in the following and save it:

load 99 99 99
syslocation ServerRoom
syscontact "ESX Administrator"
rocommunity public
view systemview included .1
proxy -v 1 -c public udp:127.0.0.1:171 .1.3.6.1.4.1.6876

  • Change “syslocation” and “syscontact” to whatever you want
  • Save your work
  • Configure snmpd to autostart: chkconfig snmpd on
  • Allow SNMP through firewall: esxcfg-firewall -e snmpd
  • Start the SNMP daemon: service snmpd start
  • Restart the mgmt-vmware service: service mgmt-vmware restart

Determining OID

OIDs are MIB-specific variables that you can instruct an SNMP server monitor to look for. These variables can be determined by reading the MIBs. One tool that assists with this is MIB Browser by iReasoning Networks. MIB Browser can run on Windows, Mac OS X, and Linux/UNIX. To obtain the appropriate OIDs:

  • Load the MIBs in MIB Browser by going to File > Load Mibs
  • Manually comb through to find the OID you want (it will be connected to a string that will be similar to wording used in vSphere).


For example, to monitor available memory on ESX 4.1:

  • The SNMP MIBs for ESX 4.1 were downloaded from VMware’s website
  • Loaded VMWARE-RESOURCES-MIB into MIB Browser
  • Searched for “Mem” (Edit > Find in MIB Tree) and found “vmwMemAvail” (use the OID shown in the dropdown near the menu in MIB Browser – it will show the full OID, which will sometimes include a “0” at the end that the OID listed towards the bottom of the window will not)
  • Add the OID into the remotehost.cfg (or linux config) file in Nagios

define service{
use                 generic-service ; Inherit values from a template
host_name           ESX4_1
service_description Memory Available
check_command       check_snmp!-C public -o <OID> -m all
}

host_name: the name of the device (whatever you want to call it)
service_description: the name of the service you are monitoring (whatever you want to call it)
check_command: -C is to define the community SNMP string, -o is to define the OID to read, -m is to define which MIB files to load – to be more specific, for this example you can narrow “-m all” to “-m VMWARE-RESOURCES-MIB.MIB”
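For reference, the check_snmp command invoked above is normally defined in Nagios’s commands.cfg along these lines (the macros shown are standard, but your plugin path may differ):

```
define command{
        command_name    check_snmp
        command_line    $USER1$/check_snmp -H $HOSTADDRESS$ $ARG1$
        }
```

Everything after the first “!” in check_command is passed to the plugin as $ARG1$.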

Once you’ve done the above you should be able to monitor “Memory Available” for ESX through Nagios.  Repeat the procedure, changing steps where applicable for the specific OID you want to monitor.  If you have questions, or need assistance, please contact 318, Inc. at 1-877-318-1318.

Installing and Configuring Active Directory Certificate Services

Wednesday, March 21st, 2012

This guide assumes that you have a Windows Server 2008 R2 installation on a physical or virtual machine, and that the system is a domain controller of an Active Directory domain:

  1. Open Server Manager. 
  2. Click on Roles in the tree on the left, then click Add Roles
  3. Choose next to start the wizard. 
  4. Then enable the checkbox for Active Directory Certificate Services
  5. Choose next to start the AD CS role configuration
  6. Click on “Add Required Role Services”  to install the IIS and the related tools needed.
  7. Enable the check box for “Certification Authority Web Enrollment” and click next.
  8. Choose “Enterprise” and click next.
  9. Choose “Root CA” and click next
  10. Choose “Create a new private key”
  11. Leave the default values for Configure Cryptography for CA and click next.
  12. Ensure that you have the proper values for Configure CA Name for your environment and click next. The default values will usually be right.
  13. Click next to set the default validity period of 5 years
  14. Configure the locations of the database and logs if needed for your environment and click next
  15. You will now be prompted to configure IIS. 
  16. Make changes if needed, but be sure to leave Windows Authentication as it is required for Web Enrollment.
  17. After the role configuration is complete, run IIS Manager from Administrative Tools.
  18. From the tree on the left, navigate to the default website. 
  19. Right click Default website, and choose bindings.
  20. Click the Add… button.
  21. Change the type to https, and choose the SSL certificate that matches the server’s FQDN, and click OK.

The (Distributed) Version Control Hosting Landscape

Monday, March 19th, 2012

When working with complex code, configuration files, or just plain text, using version control (VC for short) should be like brushing your teeth. You should do it regularly, and getting into a routine with it will protect you from yourself. Our internet age has dragged us into more modern ways of tracking changes to and collaborating on source code, and in this article we’ll discuss the web-friendly and social ways of hosting and discovering code.

One of the earliest sites to rise to prominence was Sourceforge, which is now owned by the company behind Slashdot and Thinkgeek. Focused around projects instead of individuals, and offering more basic VC systems, like… CVS, Sourceforge became a site many open source developers would host and/or distribute their software through. Lately, Sourceforge seems to be on the wane, as it is found to be redirect and advertising-heavy.

When Google wanted to attract more attention to its open source projects and give outsiders a way to contribute, it opened Google Code in 2005. In addition to SVN, Mercurial (a.k.a. Hg) became available as an alternative VC option in 2009, as it was the system adopted by the Python language, whose creator, Guido van Rossum, was an employee at Google. Hg was one of the original Distributed Version Control Systems (DVCS for short), and the complexity of such a system could feel ‘bolted-on’ when using Google for hosting (especially in the cloning interface); the introduction of Git as an option mid last year brings this feeling out even more.

Bitbucket was another prominent early champion of Hg, and its focus, like those previously mentioned, is also on projects. Atlassian, the company behind it, is a real titan in the industry as the steward of the Jira bug-tracking software, the Confluence wiki, and the HipChat web-based IM/chatroom service, and it has recently purchased the Mac DVCS GUI client SourceTree. Even more indicative of the fast-paced and free-thinking way Atlassian does business is its adoption of Git late last year as an option for Bitbucket, going so far as to guide folks to move their Hg projects to it.

But the 900-pound gorilla in comparison to all of these is Github, with its motto, ‘Social Coding’. Collaboration can tightly couple developers and make open source dependent on the approval or contributions of others. In contrast, ‘forking’ as a central concept of Git makes this interdependency less pronounced, and abstracts the project away to put more focus on the individual creators. Many words have already been spent on the phenomenon that is Git (and Github by extension), just as its Rails engine enjoyed in years past, so we’ll just sign off here by recommending you sign up somewhere and join the social coding movement!

Microsoft’s System Center Configuration Manager 2012

Sunday, March 18th, 2012

Microsoft has released the Beta 2 version of System Center Configuration Manager (SCCM), aka System Center 2012. SCCM is a powerful tool that Microsoft has been developing for over a decade. It started as an automation tool and has grown into a full-blown management tool that allows you to manage, update, and distribute software, licenses, policies, and a plethora of other items to users, workstations, servers, and devices, including mobile devices and tablets. The new version has a simplified infrastructure, without losing functionality compared to previous versions.

SCCM provides end users with an easy-to-use web portal that allows them to choose the software they want and have it installed in a timely manner. For mobile devices, the management console has an Exchange connector and will support any device that can use the Exchange ActiveSync protocol. It will allow you to push policies and settings to your devices (i.e. encryption configurations, security settings, etc.). Windows Phone 7 features are also manageable through SCCM.

The Exchange component sits natively with the configuration manager and does not have to interface with Exchange directly to be utilized. You can also define minimal rights for people to just install and/or configure what they need and nothing more. The bandwidth usage can be throttled to govern its impact on the local network.

SCCM will also interface with Unix and Linux devices, allowing multi-platform device management. At this point, many 3rd-party tools such as the Casper Suite and Absolute Manage also plug into SCCM nicely. Overall, this is a robust tool for the multi-platform networks that have become so common in today’s businesses.

Microsoft allows you to try the software via its website. For more information, contact your 318 Professional Services Manager, or contact 318 if you do not yet have one.

Adding incoming and outgoing access rules on a Cisco ASA

Saturday, March 17th, 2012

To understand incoming and outgoing rules, there are a couple of things to know before you can define your rules. Let’s start with an understanding of traffic flow on an ASA. Incoming rules define traffic that comes inbound to an ASA interface. Outgoing rules are for all traffic that goes outbound from an ASA interface. It does not matter which interface it is, since this is a matter of data flow, and each active interface on an ASA will have its own unique address.

To explain this further, let’s say we have an internal interface that your local area network connects to. You can add a permit or deny rule to this interface specifying whether incoming or outgoing traffic will be permitted or not. This allows you to control which computers can communicate past that interface. Essentially, you would define most of your rules for the local area network on the internal interface, governing which systems/devices could access the internet, certain protocols, or not.

Now if you know about the basic configuration of an ASA you know that you have to set the security level of the Internal and External ports. So by default these devices allow traffic from a higher security interface to a lower security interface. NAT/PAT will need to be configured depending on if you want to define port traffic for specified protocols.

For this article, I will just mention that there are several types of Access Control Lists (ACLs) that you can create on an ASA: Standard, Extended, EtherType, Webtype, and IPv6. For this example we will use Extended, because that is what most people will use most often. With an extended ACL, not only can you specify IP addresses in the access control list, you can also specify port traffic to match the protocol that might be required.

Let’s look at the example below:

You will see we are in the configuration terminal mode

ASA(config)# access-list acl extended permit tcp any host <host IP> eq 80

-So the first part, “access-list acl”, means the access list will be named “acl”.
-Next you have a choice between types of access list. We are using Extended for this example.
-The next portion is the permit or deny option; we have permit selected for this statement.
-The next selection, “any”, refers to inside traffic (simply meaning that any internal traffic is allowed). If you don’t use “any”, you can specify specific devices by using “host” and an IP address, like the last part of this ACL statement.
-The last part specifies a specific destination host address and “eq 80” (port 80).

So this example tells us that our access control list named “acl” will allow any inside traffic out to the specified host address on port 80 (web traffic).

Later you will notice that your statement will look like this on the ASA:

ASA(config)# access-list acl extended permit tcp any host <host IP> eq www

Notice how “eq 80” (default HTTP traffic) changed automatically to “www”. This is common on Cisco ASA devices.
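One step this example leaves implicit: an access list has no effect until it is applied to an interface with the access-group command. Assuming the interface is named “inside”, the binding would look like this:

```
ASA(config)# access-group acl in interface inside
```

The “in” keyword applies the list to traffic entering that interface, which matches the inbound/outbound discussion above.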

Test-Driven Sysadmin with a Russo-Australian Accent

Friday, March 16th, 2012

One of the jokes in the Computer Science field goes like this: there are only 2 hard problems: cache invalidation, naming things, and off-by-one errors. Please do pardon the pun.

Besides the proclivity to name things strangely in the tech community, we often latch on to acronyms and terms that show our pride in being proficient with cutting-edge (or obscure) concepts. As with fashion, there is an ebb and flow to what’s new, but one thing that is here to stay is tests for code, exemplified by the concept of TDD, or Test-Driven Development. When you work with complex systems, dependencies can become a fragile house of cards, but here’s another take on that concept: “here in Australia, ‘babushka doll’ is the colloquial term for Russian nesting dolls. Deps” (short for dependencies) “are intended to be small, tidy chunks of code, nested within each other – hence the name”.

Babushka is the name of a tool, for Mac OS X and Linux, that tests for the software or settings your system relies on – and if something isn’t present, it goes about changing that for you. Its claim of “no job too small” hints at how atomic and for-mere-mortals the tool was made to be. In comparison to configuration management tools like Puppet and Chef, which are also written in Ruby, it’s much more humble, with a proportionally smaller community. The larger tools strive to deliver the ‘holy trinity’ of a package, a configuration file, and a service (gathered in modules in Puppet parlance, or recipes in Chef). Babushka can just deliver the package and lets you build from there.

It was originally released a few years ago, and has recently been refreshed with new capabilities and approachable, comprehensive documentation. Unlike centralized business systems that require curation to take into account things like volume licensing, Babushka can let you reach right out to publicly available freeware. For developers it affords more conveniences like the command line tools that used to require Xcode, package managers like homebrew, and support for Ubuntu’s standard package manager as well.

Git and Github both play a big part in Babushka – and not just in that Git is the version control system it uses and Github is the site it can be downloaded from. If you decide you’d like to use someone else’s ‘Deps’ to set up your workstation, there is a simplified syntax to not only specify a user on Github whose repository you’d like to work out of, but you can now search across Github for all of the repositories Babushka knows about.

One way of getting started super fast is running the one-line bootstrap command from the project’s homepage, which pipes the output of curl into bash.

Now installing via this method is not the most secure, but you can audit the code since it is open source and make your own assurances that your network communication is secure before using it. For examples, you can look at the creator’s deps or your humble author’s.

More fuel for the Simian fire – how does free sound?

Thursday, March 15th, 2012

Well we’ve been busy keeping our finger on the pulse of the Mac-managing open source community, and that genuine interest and participation continues to pay off. Earlier, we highlighted how inexpensive and mature the Simian project running on Google App Engine (GAE for short) is, although as of this writing refreshed documentation is still forthcoming. In that article we mentioned only one tool needs to be run on a Mac as part of maintaining packages posted to the service, and an attempt is being made to remove even the need for that. This new project was originally announced here, and has a growing number of collaborators. But that isn’t the biggest news about Managed Software Update (Munki) and Simian we have to announce today.

A technique that had previously been overlooked is now proven functional: it allows you to use Simian as the repository for all of your configurations, but serve the actual packages from an arbitrary URL. Theoretically, you could take the publicly available pkginfo files, modify them to point to a web server on your LAN (or even the vendor’s website directly, if you want them to be available from anywhere), and your GAE service would fall under the free utilization limits with very little maintenance effort. This is big for institutions with a tight budget and/or multiple locations that want to take advantage of the App Engine platform’s availability and Simian’s great interface. Beyond helping you save on bandwidth usage, this can also help control where your licensed software is stored.

Previously, people have wished they could adapt Google’s code to run on their local network with the beta TyphoonAE project; but versus the recommended and supported method of deploying the server component, this technique is a great middle ground that brings down a barrier for folks having difficulty forecasting costs.

It’s an exciting time, with many fully-featured offerings to consider.

Munki’s Missing Link, the Simian Server Component from Google

Tuesday, March 13th, 2012

At MacWorld 2011, Ed Marczak and Clay Caviness gave a presentation called A Week in the Life of Google IT. It included quite the bombshell that Google was open-sourcing its Managed Software Update (Munki) server component for use on the Google App Engine (GAE). Some began immediately evaluating the solution, but Munki itself was still young, and the enterprise intent of the tool made it hard for smaller environments to consider evaluating. Luckily, the developers at Google kept at it, and just like GAE graduated from beta and other Google products got a facelift, a new primate now stands in our midst (mist?): Simian 2.0!

With enhancements more than skin deep, this release ups the ante for competing ‘munkiweb’ admin components, with rich logs and text editor-less manifest generation. For every package you’d like to distribute, only one run of the Munki makepkginfo tool is required – the rest can be done with web forms. No more ritual running of makecatalogs, just click the snazzy buttons in the interface!

Unlike the similarly GAE-based Cauliflower Vest, Simian does not require a Google account for per-client secure transmission, which makes evaluation easier. While GAE has ‘billable’ levels, the free version allows for 1GB of storage with 1GB of upload and… yup, 1GB of download. While GAE may not be quite as straightforward to calculate the cost of as other ‘Platform as a Service’ offerings, it is, to use a phrase, ‘dumb cheap’. The only time the server’s instance would cost you during billable operation is when admins are maintaining the stored packages, or when clients are actively checking in (by default once a day) and pulling packages down. As Google dogfoods the product, they have reported $.75/client per YEAR in the way of GAE-related costs.

Getting started with Simian is not a walk in the park, however: you must wrap your brain around the concept of a certificate authority (or CA), understand why the configuration files are a certain way based on the Simian way of managing Munki, and then pay close attention as you deploy your customized server and clients. Planning your Simian deployment starts with either creating or reusing an existing certificate authority, which would be a great way to leverage Puppet if it’s already running in your environment. Your server just needs to have its private key and public certificate signed by the same authority as the clients to secure their communication. Small or proof-of-concept deployments can use this guide to step you through a quick Certificate Authority setup.
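For a quick proof of concept, a throwaway CA and a signed server certificate can be produced with openssl alone. This is only a sketch – the names, lifetimes, and 2048-bit keys are placeholder choices, and a production deployment should follow the guide mentioned above:

```shell
# Create a self-signed CA, then a server key/certificate signed by that CA.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -subj "/CN=Simian Test CA" -days 365 -out ca.pem

openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=simian.example.com" -out server.csr
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
  -days 365 -out server.pem

# The server cert should verify against the CA that will also sign the clients.
openssl verify -CAfile ca.pem server.pem
```

Client certificates are produced the same way as the server one, signed by the same CA, which is what secures their communication.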

When it comes to the server configuration, it’s good to specify who will be granted admin access, in addition to the email contact info for your support team. The GAE instance requires a Google account for authentication, and it is recommended that access be restricted to users from a particular Google Apps domain (free or otherwise). One tripping point: when allowing domain access to the GAE instance, you need to go to a somewhat obscure location in your Google Apps dashboard (linked from where the current services are listed on the Dashboard tab).

Ready to take the plunge? Once configurations have been set in the three files specified in the wiki, and the certs you’ll use to identify and authenticate your server, CA, and a client are stowed in the appropriate directories, go ahead and send it up to the great App Engine in the sky.

See our follow-up article.

Windows Firewall via GPO

Monday, March 12th, 2012

Setting up the Windows Firewall to run on Windows client systems can be tedious when done en masse, but using a Group Policy Object (GPO) to centrally manage systems can be a fairly straightforward process. First, decide which firewall rules you want to implement. Then, manually configure them and test them out on a workstation to verify they work the way you want. This process is covered in Microsoft’s documentation.

Once you know the exact settings you’d like to deploy, create an Organizational Unit and put the computer accounts (or other OUs/security groups) to be governed by this policy in the new OU. Once you have all of your objects where you’d like them, it’s time to create a GPO of the settings (which should be applied to one machine and tested before going wide across a large contingent of systems). To do so, go to the policy server and open Features from within Server Manager to expand Group Policy Management.

From Group Policy Management, expand the appropriate Forest and Domain and then right-click Group Policy Objects, clicking New at the contextual menu. Then provide a name for the new GPO (e.g. Firewall Settings for Windows Clients) and click on OK. In the Group Policy Management screen, click on Group Policy Objects and then right-click on Firewall Settings for Windows Clients. Click on Edit to bring up the Group Policy Management Editor.

At the Group Policy Management Editor, right-click Firewall Settings for Windows Clients policy, and select its Properties. Click on the Disable User Configuration settings check box and at the Confirm Disable dialog box, click on the Yes button and click OK when prompted.

In the Group Policy Management Editor open Policies from Computer Configuration. Then expand on Windows Settings and then on Security Settings and finally Windows Firewall with Advanced Security. Here, click on Windows Firewall with Advanced Security for the LDAP GUID for your domain. Then open Overview to verify that each network location profile lists the Windows Firewall state as not configured.

Click on Windows Firewall Properties and under the Domain Profile tab, use the drop-down list to set the Firewall state to On. Then, click on OK and verify the Windows Firewall is listed as On.

Once you’ve created the GPO, go to the OU and click on Link an Existing GPO. Here (the list of GPOs), select the new GPO and test it on a client by running gpupdate or rebooting the client. To verify that the GPO was applied, open the Windows Firewall with Advanced Security snap-in and right-click on Windows Firewall with Advanced Security on Local Computer, selecting Properties from the contextual menu. If the setting is listed as On then the policy was created properly!

Preparing for a Business CrashPlan Deployment

Sunday, March 11th, 2012

Knowing the Software

It is important to remember that, of the two aspects to the software, the CrashPlan client does all the heavy lifting. It scans the local file system, filters and applies other rules as set on the server, compresses and encrypts the data, and finally transfers it either to a destination across the network or to a local ‘folder’ (attached drive, etc.). The second portion of the software is the server process that accepts data from each of the clients and tracks everything in a database.

Knowing Your Requirements

Scaling an environment that backs up to near-unlimited, cloud-based storage is just a matter of having sufficient licenses and Internet bandwidth to maintain uploads from multiple clients at once. CrashPlan Pro also allows businesses to store smaller sets of data, with pricing per computer. Organizationally, however, the Pro version is not meant for environments with over 200 users, and it lacks other features, including integration with directory services, backup seeding, guest restoring, and reporting flexibility.

Embrace the Enterprise with PROe

In addition to getting those features which are missing from the ‘Pro’ level, CrashPlan PROe can work well in environments that are concerned about disaster recovery and would like to host secondary destinations. In these situations there are further considerations to take into account:

Data: Even with the compression applied to files, you’ll need to plan for significantly more storage than the amount of data being backed up at the time of deployment, and have an understanding of how your retention policy will affect your storage needs as time goes on and/or clients are added. A great feature of the REST API, available only in the PROe version, is that usage can be gauged dynamically.

‘User’ Accounts: It is often the case that there is a subset of pre-approved users for inclusion, which can easily be imported into the CrashPlan PROe server’s database or linked from LDAP. For certain computers and situations, however, the software would more appropriately be allocated by the role the computer performs. Alerting and monitoring are one concern when changing how the account is tied to the computer, but it is more crucial to understand that when customers are allowed to restore their own files, backing up many computers under the same account can become a security liability (this can be administratively locked out).

Master-Slave Configuration: For multiple locations, a slave server can be set up within an organization to more flexibly allocate computers. Just like seeding a backup, an entire slave server can be seeded with the contents of any other server under a Master, and clients will pick up right where they left off.

These are just a few examples of the considerations to take into account when deciding if CrashPlan PROe is right for your environment. For more information, please contact your Professional Services Manager, or reach out to us if you do not yet have one.

Windows Firewall For Windows 7

Friday, March 9th, 2012

A firewall is a barrier between you and the Internet at large that filters information that your computer can receive. Companies usually have firewalls in place to keep certain kinds of websites, people, and information from being accessed from outside their networks, keeping sensitive info safe, and you focused on the job. Your home computer and/or modem can have a firewall built-in as well, acting as the gateway to your home network and the Internet.

NOTE: you might not be able to use a third party application until you add the application to the list of allowed programs.

Here is an explanation of the different options you can modify and customize:

Add a program to the list of allowed programs:

  1. Open Windows Firewall by clicking the Start button, and then clicking the Control Panel. In the search box, type firewall, and then click Windows Firewall.
  2. In the left pane, click Allow a program or feature through Windows Firewall. If you’re prompted for an administrator password or confirmation, type the password or provide confirmation.
  3. Click Change settings.  If you’re prompted for an administrator password or confirmation, type the password or provide confirmation.
  4. Select the check box next to the program you want to allow, select the network locations you want to allow communication on, and then click OK.
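The same allowance can also be scripted from an elevated command prompt with netsh; the rule name and program path here are just examples:

```
netsh advfirewall firewall add rule name="My App" dir=in action=allow program="C:\Program Files\MyApp\myapp.exe" enable=yes
```

This creates an inbound allow rule for that executable across the current profiles, equivalent to checking its box in the allowed programs list.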

If an application needs a specific port that is being blocked, you can also allow port traffic by following these steps:

  1. Open Windows Firewall by clicking the Start button, and then clicking Control Panel. In the search box, type firewall, and then click Windows Firewall.
  2. In the left pane, click Advanced settings. If you’re prompted for an administrator password or confirmation, type the password or provide confirmation.
  3. In the Windows Firewall with Advanced Security dialog box, in the left pane, click Inbound Rules, and then, in the right pane, click New Rule.
  4. Follow the instructions in the New Inbound Rule wizard.
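For a quick port opening without walking the wizard, netsh can create the same inbound rule; the name and port number below are placeholders:

```
netsh advfirewall firewall add rule name="Open TCP 8080" dir=in action=allow protocol=TCP localport=8080
```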

Block all incoming connections, including those in the list of allowed programs: this setting blocks all unsolicited attempts to connect to your computer. Use this setting when you need maximum protection for your computer, such as when you connect to a public network in a hotel or airport, or when a computer virus is spreading over the network or Internet. A word of caution with this setting: you won’t be notified when Windows Firewall blocks programs. When you block all incoming connections, you can still view most websites, send and receive e‑mail, and send and receive instant messages.

  1. Open Windows Firewall by clicking the Start button, and then clicking Control Panel. In the search box, type firewall, and then click Windows Firewall.
  2. Check the box that says to block all incoming connections.
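The same lockdown can be applied from an elevated command prompt; this sketch sets every profile to block all inbound traffic while still allowing outbound:

```
netsh advfirewall set allprofiles firewallpolicy blockinboundalways,allowoutbound
```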

Notify me when Windows Firewall blocks a new program
If you select this check box, Windows Firewall will inform you when it blocks a new program and give you the option of unblocking that program.

  1. Open Windows Firewall by clicking the Start button, and then clicking Control Panel. In the search box, type firewall, and then click Windows Firewall.
  2. Select the box that says “Notify me when Windows Firewall blocks a new program”.

Turn off Windows Firewall (not recommended)
This step is not recommended unless your system administrator has implemented another application to provide protection for your network.

  1. Open Windows Firewall by clicking the Start button, and then clicking the Control Panel. In the search box, type firewall, and then click Windows Firewall.
  2. In the left pane, click Turn Windows Firewall on or off. If you’re prompted for an administrator password or confirmation, type the password or provide confirmation.
  3. Click Turn off Windows Firewall (not recommended) under each network location you want to stop protecting, and then click OK.
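From an elevated command prompt, the firewall can be toggled for all profiles in one line (again, not recommended unless another product is protecting the machine):

```
netsh advfirewall set allprofiles state off
netsh advfirewall set allprofiles state on
```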

Note: If some firewall settings are unavailable and your computer is connected to a domain, your system administrator might be controlling these settings through Group Policy or a third-party application like Symantec Endpoint Protection.

If you have trouble allowing other computers to communicate with your computer through Windows Firewall, you can try using the Incoming Connections troubleshooter to automatically find and fix some common problems.

  1. Open the Incoming Connections troubleshooter by clicking the Start button, and then clicking Control Panel.
  2. In the search box, type troubleshooter, and then click Troubleshooting. Click View all, and then click Incoming Connections.

Note: Some material in this article was referenced directly from Microsoft.

Note: Stay tuned for more information about setting up Windows Firewall Using a GPO!

Lost a password to your Cisco Device and need to recover the settings?

Friday, March 9th, 2012

Most of us know that Cisco gear can be a bit complicated, and sometimes things happen that are not so forgiving. One of those is losing the password to a Cisco device. The downside is that if you did not know you could reset the password using a console cable, you might be freaking out, thinking you have to reset to factory defaults. Thankfully, Cisco provides a backdoor into their devices. The commands and procedures can be slightly different for each device, so you will want to look up Cisco’s password recovery steps for your specific device. In this example I will show you the steps to reset the password on a Cisco ASA 5505 using Terminal from a MacBook.

The first thing you will need on any Cisco device is console port access. For this reason it is important to ensure there are strict physical security measures in place: physical access to the device allows someone to perform the procedures I am about to list, which can give them unwanted entry to your device.

1. Connect to the device using the console port/cable. The cable is usually RJ45 to serial; my MacBook doesn’t have a serial port, so I use a serial-to-USB adapter. All my configuration is then done in Terminal. If you’re on a PC, you can use your telnet application or the command prompt.

Using a MacBook with the serial-to-USB adapter requires running the “screen /dev/tty.KeySerial1 9600” command to use Terminal as my console window. This will allow you to view the bootup of the device as soon as it has power.
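The device name under /dev depends on the adapter’s driver, so it’s worth listing the serial devices before launching screen; KeySerial1 is simply what my adapter registers as, and your listing will differ:

```
$ ls /dev/tty.*
$ screen /dev/tty.KeySerial1 9600
```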

2. Now shutdown the ASA, and power it back up. During the startup messages, press and hold the “Escape” key when prompted to enter ROMMON.

3. To update the configuration register so the ASA ignores its startup configuration on boot, enter the following command:

rommon #1> confreg 0x41

4. To view the current configuration register value, enter the following command:

rommon #1> confreg

The ASA will display the current configuration register value and will prompt you to change it:

Current Configuration Register: 0x00000011
Configuration Summary:
boot TFTP image, boot default image from Flash on netboot failure
Do you wish to change this configuration? y/n [n]:

5. Take note of the current configuration register value (it will be used to restore the register later). At the prompt, enter “Y” for yes and hit Enter.

The ASA will prompt you for new values.

6. Accept all the defaults, except for the “disable system configuration?” value; at that prompt, enter “Y” for yes and hit enter.

7. Reload the ASA by entering:

rommon #2> boot

The ASA loads a default configuration instead of the startup configuration.

8. Enter privileged EXEC mode by entering:

hostname> en

9. When prompted for the password press “Enter” so the password will be blank.

10. Next, load the startup config by entering:

hostname# copy startup-config running-config

11. Enter global configuration mode by using this command:

hostname# config t

12. Change the passwords in the configuration by using these commands, as necessary:

hostname(config)# password newpassword
hostname(config)# enable password newpassword
hostname(config)# username newusername password newpassword

13. Change the configuration register to load the startup configuration at the next reload by entering:

hostname(config)# config-register 0x00000011

* Note: 0x00000011 is the current configuration register value you noted in step 5.

14. Save the new passwords to the startup configuration by entering:

hostname(config)# wr mem


The commands used in the example above were referenced from Cisco’s ASA password recovery documentation.

Introduction to Centralized Configurations with Puppet

Thursday, March 8th, 2012

One of the hardest things for IT to tackle at large scale is workstation lifecycle management. Machines need to be deployed, maintained, and re-provisioned based on the needs of the business. Many of the solutions provided by vendors need to be driven by people, pulling levers and applying changes in realtime. Since Macs have a Unix foundation, they can take advantage of an automation tool used for Linux and other platforms, Puppet. It can be used to cut down on a lot of the manual interaction present in other systems, and is based on the concept that configuration should be expressed in readable text, which can then be checked into a version control system.
To quickly bootstrap a client-server setup, the Puppet Enterprise product is recommended, but we’ll be doing things in a scaled-down fashion for this post. We’ll use Macs, though it won’t matter what OS either the puppetmaster (server) or client is running, nor whether either is a virtual machine. First, install Facter, a complementary tool that collects specifications about your system, and then Puppet, both from the PuppetLabs download site. Then, open Terminal and run this command to begin configuring the server, which adds the ‘puppet’ user and group:

sudo /usr/sbin/puppetmasterd --mkusers

Then, we’ll create a configuration file to specify a few default directories and the hostname of the server, so it can begin securing communication with the SSL certificates it will generate. I’m using computers’ Bonjour names throughout this example, but DNS and networking/firewalls should be configured as appropriate for production setups, among other optimizations.

sudo vim /etc/puppet/puppet.conf
# /etc/puppet/puppet.conf
[main]
vardir = /var/lib/puppet
libdir = $vardir/lib
ssldir = /etc/puppet/ssl
certname = mini.local

Before we move on: an artifact of the –mkusers command above is that the puppet process may have been started in the background. To apply the changes we’ve made and start over with the server in verbose mode, you can just kill the ruby process started by the puppet user, either in Activity Monitor or otherwise. Now, let’s move on to telling the server what we’d like to see passed down to each client, or ‘node’:

sudo vim /etc/puppet/manifests/site.pp
# /etc/puppet/manifests/site.pp
import "classes/*"
import "nodes"

sudo vim /etc/puppet/manifests/nodes.pp
# /etc/puppet/manifests/nodes.pp
node '318admins-macbook-air.local' {
  include testing
}

sudo vim /etc/puppet/manifests/classes/testing.pp
# /etc/puppet/manifests/classes/testing.pp
class testing {
  exec { "Run Recon, Run":
    command => "/usr/sbin/jamf recon -username '318admin' -passhash 'GOBBLEDEGOOK' -sshUsername 'casperadmin' -sshPasshash 'GOOBLEDEBOK' -swu -skipFonts -skipPlugins",
  }
}

Here we’ve created three files, customized to serve a laptop with the Bonjour name 318admins-macbook-air.local. Site.pp points the server to the configurations and clients it can manage; nodes.pp allows a specific client to receive a certain set of configurations (although you could use a ‘node default’ block to affect everyone); and the actual configuration we’d like to enforce is in testing.pp.
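As a sketch, that catch-all default node (with a hypothetical ‘company_wide’ class) would be written:

```
# /etc/puppet/manifests/nodes.pp
node default {
  include company_wide
}
```

Any client without its own node block then picks up the company-wide configuration.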

One last tweak and our server is ready:

sudo chown -R puppet:puppet /etc/puppet

and we actually run the server, with some extra feedback turned on, with this:

sudo puppet master --no-daemonize --verbose --debug

Now, we can move on to setting up our client. Besides installing the same packages (in the same order) as above, we need to add a few directories and one file before we’re ready to go:

sudo mkdir -p /var/lib/puppet/var
sudo mkdir /var/lib/puppet/ssl
sudo vim /etc/puppet/puppet.conf
# /etc/puppet/puppet.conf
[main]
server = mini.local
vardir = /var/lib/puppet
ssldir = /var/lib/puppet/ssl
certname = 318admins-macbook-air.local

Then we’re ready to connect our client.

sudo puppet agent --no-daemonize --onetime --verbose --debug

You should see something like this on the server, “notice: 318admins-macbook-air.local has a waiting certificate request”. On the server we go ahead and sign it like this:

sudo puppet cert --sign 318admins-macbook-air.local

Running puppet agent again should result in a successful connection this time, with the configuration being passed down from the server for the client to apply.

This is just a small sample of how you can quickly start using Puppet, and we hope to share more of its benefits when integrated with other systems in the future.

Virtual Desktop Infrastructure (VDI) for Mac OS X

Thursday, March 8th, 2012

What is Virtual Desktop Infrastructure (VDI)? VDI is technology that enables you to connect to a host’s shared repository of virtualized environments and run them on your computer or device while still utilizing the host’s resources. In other words, it allows you to connect to an OS dedicated to you, using your local device as a remote (read: thin) client.

The difference between VDI and Terminal Services or a traditional Citrix setup is that in a Terminal Server or Citrix setup, many users connect to a server, share the resources of that server, and all work under the same end-user OS layer and hardware ecosystem. With VDI, each user has a dedicated virtual machine running a workstation OS, sharing only the same hardware ecosystem. Some VDI tools can then synchronize the virtual machine to the local workstation and run it offline as well, leveraging the local system’s resources.

Mac OS X was initially left out of the virtual desktop infrastructure space. But with the introduction of VMware View 4.5, users of the Apple platform get a chance to leverage a virtualized desktop infrastructure in much the same way that users of other platforms can. With the VMware View Client Tech Preview, Mac users can use PCoIP (PC over IP) instead of relying only on Remote Desktop to connect to their virtual desktops. The current offerings of the VMware View Client for OS X do not provide the same features as the Windows version, but VMware is working on matching those features across their clients.

Citrix has its own implementation of VDI called XenDesktop. XenDesktop is similar in its offerings to VMware View and is another enterprise-class option for a VDI implementation. OS X can connect to the virtual desktop through Citrix Receiver. A difference between the two is the protocol used to deliver the best virtualized desktop experience: while VMware View uses PCoIP (UDP-based), Citrix XenDesktop uses HDX (High Definition Experience), which is TCP-based. Both do a good job of connecting to their respective virtual desktops using different protocols, and both also support using Remote Desktop to connect to the virtual desktop.

Mokafive is a newcomer to the VDI scene, geared specifically to the Mac OS X platform. Mokafive takes a different spin on VDI and sets up the virtual desktop to utilize the resources of the local device instead of a centralized server (it should be noted, though, that both XenDesktop and VMware View now offer that same capability, each with its own unique implementation). Mokafive does so from a Mokafive server using a desktop virtual machine called a LivePC that it uses as a “golden image” (a master virtual machine that’s used for deployment). One of its main strengths is that it’s easy to understand and use.

With all of the VDI options that are out, there’s an acronym being used called BYOC (Bring Your Own Computer). With this idea, companies may begin to allow more employees to bring their MacBooks to work and then run the corporate virtual desktop on them, without the IT staff having to be too concerned about line-of-business application compatibility on OS X, since everything will just run on the corporate virtual desktop. Choosing the VDI solution to do this for your company seems to be more a question of which option lines up best with your current infrastructure and familiarity versus simplicity. If you would like to discuss VDI or other forms of virtualization with 318, please contact your Professional Services Manager, or reach out to us if you do not yet have one.


NAT and PAT

Wednesday, March 7th, 2012

In the routing world, NAT stands for Network Address Translation, while PAT stands for Port Address Translation. To many the two are pretty similar, while to others they couldn’t be more different.

When you get an Internet connection for your business network, you are usually given a range of public static IP addresses. With these addresses, you can configure your Cisco router to use NAT, which allows you to map an external address to an internal address (NAT is one-to-one addressing). Your NAT router translates traffic coming into and leaving your private network, so it works in both directions.

Let’s say your computer has a private internal IP address and the router has a public IP address. If you go to the Internet from your computer, your internal address will be translated to the router’s public address using NAT, which allows you to communicate outside your network. It also allows for the return of that data: when data comes back, the router translates the public address back to your internal address so your system receives the information.

Port Address Translation is almost the same thing, but it allows you to specify the TCP or UDP port to be used. Let’s pretend you need to access a mail server on your network from outside. Most likely you will use the standard SMTP port, 25. Assuming so, you would configure the router to allow traffic arriving on port 25 from outside your network through to your mail server’s port 25, thus allowing it to send and receive e-mail. You can also use PAT to translate traffic from a specific port to a different port. For example, suppose you have to use port 25 for external mail clients but the mail server listens on a custom port of 26 internally. You can define a static PAT rule so that all outside port 25 traffic is routed to port 26 internally, allowing port 25 traffic to reach your mail server on port 26.

*Note: PAT works hand in hand with NAT and is linked to the public and internal IP addresses. With PAT you may route many-to-one addressing (i.e., all internal addresses go out a single public IP address for Internet access using port 80).
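On a Cisco IOS router, the mail-server scenario above might be sketched like this; the interface names and the private/public addresses are placeholders:

```
! One-to-one NAT for a host
ip nat inside source static 192.168.1.25 203.0.113.25
! Static PAT: outside TCP 25 mapped to the internal server's port 26
ip nat inside source static tcp 192.168.1.25 26 203.0.113.25 25
! Mark the inside and outside interfaces
interface FastEthernet0/0
 ip nat inside
interface FastEthernet0/1
 ip nat outside
```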

Searching for the hidden Library folder?

Tuesday, March 6th, 2012

Just a quick note: came across this tip today for another way to get to the (hidden in Lion) Library folder: open the Finder’s Go menu and hold down Option, and Library appears in the list.
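For reference, two other routes from Terminal: the first opens the folder directly, and the second unhides it until the flag is set again:

```
$ open ~/Library
$ chflags nohidden ~/Library
```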

Must have Windows utilities

Monday, March 5th, 2012

Most of the Mac techies I know have a boot drive, or set of drives, capable of running a variety of tools. Many of those drives are geared towards repairing problems on file systems, fixing operating systems and installing software. But what many don’t have are boot volumes for Windows, or cross-platform tools for a heterogeneous environment.

I used to refer to these, in addition to my tools for the Mac, as my Bat-Belt. On the Mac platform, this usually included DiskWarrior, a clean operating system of each revision, a bootable DeployStudio image with installers for operating systems, Data Rescue and a number of other tools. But what other kinds of tools should we be looking at for other platforms? Let’s start with SpinRite.

SpinRite is a tool from Steve Gibson that runs about $80. It’s probably the best disk-repair tool I’ve used for the file systems it supports, and it can go as low-level as scanning disks at the platter. The sector tests and file system tests, though, are unparalleled for the platform. I’ve seen SpinRite take weeks to run, but it always gets the job done (if it’s possible to do so)!

Next, I’d make sure to have a copy of the Ultimate Boot CD. This little bugger is easy to use, has a number of tools included that resolve issues with systems and allows techs to add, remove or alter files on supported file systems. You can resolve a number of malware problems that crop up, fix file systems (a little overlap with SpinRite is a good thing, here), diagnose operating system problems and it all runs from a self-contained optical disk.

Combating malware and spyware is a big part of many jobs for a Windows tech. It often requires multiple tools in your Bat-Belt, but the first tool in my arsenal is a program called Combofix. It’s in active development and goes through an exhaustive set of checks and tests to find any malicious files. Once started, it scans and auto-deletes suspicious files, and is usually able to fix all but the most infected machines. You can run this tool in Safe Mode if a machine is too badly infected to boot into Windows normally.

The ’80s are “in” again, and this is true of malware too, as there’s been a resurgence of MBR viruses in the wild. If you encounter a machine that Combofix or other AV tools can’t repair, then an MBR issue is a likely cause. The next tool on the list is called TDSSKiller. Made by Kaspersky Labs, this free tool is a single-task program that only repairs infected MBRs. It’s quick and can usually repair a bad MBR without needing to boot off another medium.

In the case that a machine is FUBAR’d, you may need to boot off another drive and scan for issues. The two tools I use are the Kaspersky Rescue Disk and the Microsoft Standalone System Sweeper. Both are free and scan the entire system for MBR issues and infected executables. These can take a long time to run, so unless you’re doing something else it’s best to run them overnight, or start thinking about doing a system reinstall.

If we’re all doing our jobs and making recommendations, we’re going to run into situations where we need to clone systems from one drive to another. There are a number of tools, both paid and free, that get the job done nicely. In the paid department, Acronis True Image Home is a great tool that does just what you’d expect: clone one drive to another. It automagically resizes the partition to fit a newer, larger drive too, so there’s no need to worry about repartitioning. On the free side we have the Linux-based Clonezilla. It does all the same things as Acronis, but with a clunkier interface (yay ncurses!). The only caveat with Clonezilla is that it sometimes doesn’t resize the partition to fit the new drive properly, and that brings us to our next tool, GParted. This is another Linux boot CD that can resize partitions non-destructively. I use it in combination with Clonezilla, but it’s still definitely useful as a standalone tool.

It’s a rare day when I encounter a Windows user who doesn’t want their machine to run faster. Thankfully there’s a well-known reason why Windows boxes tend to run slower over time, and it’s called OS rot. Unfortunately the best fix for this is also the one that takes the longest, and that’s to reinstall Windows and every program on the system. If a client doesn’t want to do this, then we can use the following tools to alleviate some of the issues.

PC Decrapifier is a tool used to automate the uninstallation of unwanted programs. It’s useful to run on brand-new systems (full of preloaded garbage-ware) as well as older machines where you just want to easily get rid of some of the accumulated crapola.

Also in the cleanup category is CCleaner. This tool can tidy up the registry, remove many different sets of cache files, and remove a lot of miscellaneous unwanted items on the system. There are too many options to list, so download it and check it out for yourself!

Finally, we have our miscellaneous list of utilities that pretty much do one thing but do it very well. Best of all, they’re free. Most don’t need much of a discussion, so I’ll rifle through them real quick in list form:

  • MagicDisc mounts ISO files easily
  • the offline NT password reset tool lets you reset a forgotten admin account password
  • PuTTY is a great SSH client and HyperTerminal replacement
  • SyncBack syncs two folders with the greatest of ease
  • WinDirStat graphically shows hard drive usage by both file type and folder

This should get your toolkit started and in no time at all you’ll be inundated with accolades from satisfied customers.

Technical Overview of Mac Business Encryption methods

Friday, March 2nd, 2012

For a more in-depth look at security on the Mac, we’ll contrast the technical features (and limitations) of Mac full disk encryption methods. A balance always needs to be struck, when implementing a highly complex system, between maintainability and features. Employees need an easy-to-use yet reliable solution, and support personnel need to be able to consistently ensure that everything is functional and can be audited. To understand the changes to encryption features leading up to the present, we’ll start by describing the implementation used by one of the most popular vendors, Symantec, and their PGP product.

PGP has a very long history in data encryption, and since Apple moved to the Intel processor platform (and EFI), they have been able to provide many features that were previously only fully supported on Windows. In an ideal situation, they and other vendors (like Sophos and McAfee) construct a way to tie your directory service to a keyserver, and therefore have authentication stay in one central place. Client software performs the local encryption on each workstation, and after completion users are granted secure access to a pre-boot environment, so only after authentication succeeds does the actual system boot. The encryption itself is based on a key that is independent of the user, multiple users including the admin can be added, and there is even a feature called the Recovery Token in case someone forgets their password (which gets regenerated after a single use, once the laptop then connects back to the keyserver).

The changes Apple made during its build-up to Lion jeopardized PGP’s pre-boot environment, which caused serious side effects. From Snow Leopard 10.6.6 on, Symantec had to be vigilant to ensure their product was updated in a timely fashion, and many customers began to doubt the future viability of the solution. Businesses still wanted the features products like PGP offered, so a balance needed to be struck.

Apple released FileVault 2 with Lion, and has since documented one method of achieving some measure of centralization: generating and storing FileVaultMaster.keychain. A drawback of this process is that many of the workflow steps surrounding it require custom, secure methods to be devised for implementation and auditing. Further, since support personnel need access to the single key that will unlock any machine encrypted with this process, the fact that the key cannot be easily reset and never expires becomes a prominent flaw.
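For reference, that keychain is generated with Apple’s security tool; this is a sketch of the documented step, with the standard path shown:

```
$ security create-filevaultmaster-keychain /Library/Keychains/FileVaultMaster.keychain
```

The private key is then stripped out and escrowed securely, leaving only the public-key copy of the keychain on the machines to be encrypted.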

Cauliflower Vest, as discussed previously, instead utilizes the recovery key. This reduces the risks associated with storing and retrieving the unlock mechanism centrally, as it is tied to each employee’s Google Apps account. It is sent and stored securely, and access controls can be put in place to grant access to support personnel. The csfde tool that is bundled with the project can also be used independently, if another data store or authentication mechanism to secure the transport and storage is preferable. Deployment and enforcement were priorities of the project as well, and a graphical interface to guide employees through setting up their encryption in a self-service manner round out the salient features. It can still be considered a compromise when compared to the functionality offered to businesses previously, but Google’s Macintosh Operations team should be commended for making available a feature-rich and flexible open source solution.

Patch Management (and More) for Macs with Managed Software Update

Friday, March 2nd, 2012

When compared to Linux distributions, Mac OS X has lacked a standard, built-in package management system. Although network-based software management is still a possibility given OS X’s Unix foundation, in practice it is used in very few environments. The fact that developer tools are not included by default raises the barrier to entry for all systems that purport to allow simplified installation of software, and much has been made of the Mac App Store filling the void for mere mortals.

Businesses, however, have engaged software companies to acquire volume licenses, which simplify asset tracking and deployment concerns. Employees expect certain tools to be available to them, and support personnel carefully monitor the workstations under their purview to proactively address security concerns and stability issues. The Mac App Store was not designed with these concerns in mind, and even projects like MacPorts and Homebrew lack the centralization that configuration and patch management systems provide.

Managed Software Update (MSU for short) is an application developed by Greg Neagle of Walt Disney Animation Studios to provide an end-user interface to a business’s centrally managed software repository. It relies upon a larger project called Munki (calling to mind helper monkeys) that requires little infrastructure to implement. Workstations can be managed at a company-wide, department, and individual level, with as much overlap as makes sense. And just as the thin or modular imaging methods utilize packages as their building blocks to modify an image’s configuration, MSU can enforce settings just as well as it can ensure security patches are installed in a timely fashion.
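On the repository side, Munki drives those decisions with plist manifests; a minimal sketch of one (the catalog and package names here are hypothetical) looks something like this:

```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>catalogs</key>
    <array>
        <string>production</string>
    </array>
    <key>managed_installs</key>
    <array>
        <string>Firefox</string>
        <string>OfficeSuite</string>
    </array>
</dict>
</plist>
```

Items listed under managed_installs are installed and kept up to date on every client assigned this manifest.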

Among other benefits, MSU gives IT the power to uninstall software when it would be better provisioned elsewhere, provides a self-service interface to approved software, and removes the number one source of friction for employees: “Why can’t I be an administrator so I can install my own software and updates, like I can at home?”

With Managed Software Update, businesses can now safely and efficiently address this concern.

Configuring a Cisco ASA 5505 with the basics

Thursday, March 1st, 2012

The Cisco ASA 5505 is great for small to medium businesses. Below are the steps you will have to complete to configure your ASA to communicate with the internet. These devices offer many more options and features, some of which will be covered in later articles.

Bring your device into configuration mode
318ASA>enable
Brings the device into enable mode

318ASA#config t
Changes to configuration terminal mode

The ASA is now ready to be configured when you see (config)#

Configure the internal interface VLAN (ASAs use VLANs for added security by default)
318ASA(config)# interface Vlan 1

Configure interface VLAN 1
318ASA(config-if)# nameif inside
Name the interface inside

318ASA(config-if)#security-level 100

Sets the security level to 100 (the most trusted)

318ASA(config-if)#ip address
Assign your IP address

318ASA(config-if)#no shut
Make sure the interface is enabled and active

Configure the external interface VLAN (this is your WAN/internet connection)
318ASA(config)#interface Vlan 2
Creates the VLAN2 interface

318ASA(config-if)# nameif outside
Names the interface outside

318ASA(config-if)#security-level 0
Assigns the least trusted security level to the outside interface (the lower the number, the lower the trust).

318ASA(config-if)#ip address
Assign your Public Address to the outside interface

318ASA(config-if)#no shut
Enable the outside interface to be active.

Enable and assign the external WAN to Ethernet 0/0 using VLAN2
318ASA(config)#interface Ethernet0/0
Go to the Ethernet 0/0 interface settings

318ASA(config-if)#switchport access vlan 2
Assign the interface to use VLAN2

318ASA(config-if)#no shut
Enable the interface to be active.

Enable and assign the internal LAN interface Ethernet 0/1 (note: ports 0/1 through 0/7 act as a switch, but all interfaces are disabled by default).
318ASA(config)#interface Ethernet0/1
Go to the Ethernet 0/1 interface settings

318ASA(config-if)#no shut
Enable the interface to be active.
If you need multiple LAN ports you can do the same for Ethernet0/2 to 0/7.

To have traffic route from LAN to WAN you must configure Network Address Translation on the outside interface
318ASA(config)#global (outside) 1 interface
318ASA(config)#nat (inside) 1

***NOTE for ASA Version 8.3 and later***
Cisco announced the new Cisco ASA software version 8.3. This version introduces several important configuration changes, especially to the NAT/PAT mechanism. The “global” command is no longer supported; NAT (static and dynamic) and PAT are configured under network objects. An equivalent PAT configuration for ASA 8.3 and later looks like this:

318ASA(config)#object network obj_any
318ASA(config-network-object)#subnet 0.0.0.0 0.0.0.0
318ASA(config-network-object)#nat (inside,outside) dynamic interface

For more information, see Cisco’s article regarding the changes –

Configure the default route (substitute your ISP’s gateway address for the placeholder below)
318ASA(config)#route outside 0.0.0.0 0.0.0.0 <gateway-ip> 1

Last but not least, verify and save your configuration. Once you have verified your settings are working, write to memory to save the configuration. If you do not write to memory, your configuration will be lost upon the next reboot.

318ASA(config)#wr mem
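The steps above can be consolidated into a single sketch. This is a minimal example only: the IP addresses are documentation placeholders that must be replaced with your own, and the NAT lines use the pre-8.3 syntax (use the network-object form noted above on 8.3 and later):

```
! Minimal ASA 5505 base configuration (pre-8.3 NAT syntax)
! 192.168.1.1 and 203.0.113.x below are example placeholder addresses
interface Vlan1
 nameif inside
 security-level 100
 ip address 192.168.1.1 255.255.255.0
 no shutdown
interface Vlan2
 nameif outside
 security-level 0
 ip address 203.0.113.2 255.255.255.0
 no shutdown
interface Ethernet0/0
 switchport access vlan 2
 no shutdown
interface Ethernet0/1
 no shutdown
global (outside) 1 interface
nat (inside) 1 0.0.0.0 0.0.0.0
route outside 0.0.0.0 0.0.0.0 203.0.113.1 1
```

After pasting a configuration like this, verify connectivity from a LAN client before issuing `wr mem`, so a bad configuration can still be discarded with a reboot.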