Posts Tagged ‘reporting’

Pulling Report Info from MunkiWebAdmin

Wednesday, November 6th, 2013

Alright, you’ve fallen in love with the Dashboard in MunkiWebAdmin – we don’t blame you, it’s quite the sight. Now you know one day you’ll hack on Django and the client pre/postflight scripts until you can add that perfect view to further extend its reporting and output functionality, but in the meantime you just want to export a list of all those machines still running 10.6.8. Mavericks is free, and folks still on Snow Leopard are long overdue. If you’ve only got a handful of clients, maybe you set up MunkiWebAdmin using sqlite (since nothing all that large is actually stored in the database itself).

MunkiWebAdmin in action

Let’s go spelunking and try to output just those clients in a format more digestible than HTML; the CSV output option is a good place to start. We could tool around in an interactive session with the sqlite binary, but in this example we’ll just run the query through the sqlite3 binary and cherry-pick the info we want. Most often we’ll use the information submitted as a report by the pre- and postflight scripts munki runs, which lands in the reports_machine table. The final part is as simple as you’d expect: select everything from that table where the OS version equals exactly 10.6.8. Here’s the one-liner:

$ sqlite3 -csv /Users/Shared/munkiwebadmin_env/munkiwebadmin/munkiwebadmin.db \
  "SELECT * FROM reports_machine WHERE os_version='10.6.8';"

And the resultant output:
b8:f6:b1:00:00:00,Berlin,"","",192.168.222.100,"MacBookPro10,1","Intel Core i7","2.6 GHz",x86_64,"8 GB"...

You can then open that in your favorite spreadsheet editing application and parse it for whatever is in store for it next!
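
If all you need is a quick breakdown rather than a full export, a slight variation on the same query can count clients per OS version. This is just a sketch against the same reports_machine table, and os_version is the only column it relies on:

$ sqlite3 -csv /Users/Shared/munkiwebadmin_env/munkiwebadmin/munkiwebadmin.db \
  "SELECT os_version, COUNT(*) AS machines FROM reports_machine GROUP BY os_version ORDER BY machines DESC;"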

[More Splunk: Part 4] Narrow search results to create an alert

Wednesday, January 30th, 2013

This post continues [More Splunk: Part 3] Report on remote server activity.

Now that we have Splunk generating reports and turning raw data into useful information, let’s use that information to trigger something to happen automatically such as sending an email alert.

In the prior posts a Splunk Forwarder was gathering information using a shell script and sending the results to the Splunk Receiver. To find those results we used this search string:

hosts="TMI" source="/Applications/splunkforwarder/etc/apps/talkingmoose/bin/counters.sh"

It returned data every 60 seconds that looked something like:

2012-11-20 14:34:45-08:00 MySQLCPU=23.2 ApacheCount=1
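
The script producing that line runs on the Forwarder and was set up in Part 2. As a rough, minimal sketch of what such a counters.sh-style script could look like (assuming mysqld and httpd are the process names on that server), it might be something like:

#!/bin/sh
# Sketch only -- see Part 2 for the real counters.sh.
# Emit a timestamp plus key=value pairs that Splunk can extract as fields.
TIMESTAMP=$(date "+%Y-%m-%d %H:%M:%S%z")
# Sum CPU usage across mysqld processes (assumes the process name is mysqld).
MYSQLCPU=$(ps -axo %cpu,comm | awk '/mysqld/ {sum += $1} END {printf "%.1f", sum}')
# Count running Apache processes (assumes the process name is httpd).
APACHECOUNT=$(ps -axo comm | grep -c httpd)
echo "$TIMESTAMP MySQLCPU=$MYSQLCPU ApacheCount=$APACHECOUNT"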

Using the timechart function of Splunk we extracted the MySQLCPU field to get its value 23.2 and put that into a graph for easier viewing.

Area graph

Returning to view that graph every few minutes, hours or days gets tedious if nothing really changes. Ideally, Splunk would watch the data for us and tell us when something is out of the ordinary. That’s where alerts are useful.

For example, the graph above shows the highest spike in activity to be around 45% and we can assume that a spike at 65% would be unusual. We want to know about that before processor usage gets out of control.

Configuring Splunk for email alerts

Before Splunk can send email alerts it needs basic email server settings for outgoing mail (SMTP). Click the Manager link in the upper right corner and then click System Settings. Click on Email alert settings. Enter public or private outgoing mail server settings for Splunk. If using a public mail server such as Gmail then include a user name and password to authenticate to the server and select the option for either SSL or TLS. Be sure to append port number 465 for SSL or 587 for TLS to the mail server name.

Splunk email server settings
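
For example, pointing Splunk at a Gmail account as the outgoing server might look something like this (hypothetical values):

Mail host: smtp.gmail.com:587
Email security: TLS
Username: alerts@example.com
Password: ********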

In the same settings area Splunk includes some additional basic settings. Modify them as needed or just accept the defaults.

Splunk additional email server settings

Click the Save button when done.

Refining the search

Next, select Search from the App menu. Let’s refine the search to find only those results that may be out of the ordinary. Our first search found all results for the MySQLCPU field but now we want to limit its results to anything at 65% or higher. The where function is our new friend.

hosts="TMI" source="/Applications/splunkforwarder/etc/apps/talkingmoose/bin/counters.sh" | where MySQLCPU >= 65

This takes the results from the Forwarder and pipes them into an operation that returns only values of the MySQLCPU field greater than or equal to 65. The search results, we hope, are empty. To verify the search is working correctly, temporarily change the value from 65 to something lower, such as 30 or 40. The lower values should return multiple results.
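
For instance, the temporary verification search would look like:

host="TMI" source="/Applications/splunkforwarder/etc/apps/talkingmoose/bin/counters.sh" | where MySQLCPU >= 30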

As a side note, unrelated to our immediate need: if we wanted an alert for a range of values, an AND operator connecting two conditions limits the results to anything between them:

hosts="TMI" source="/Applications/splunkforwarder/etc/apps/talkingmoose/bin/counters.sh" | where MySQLCPU >= 55 AND MySQLCPU <=65

Creating an alert

An alert evaluates this search as often as Splunk receives new data, and if the search returns anything at all, the alert can do something automatically.

With the search results in view (or the lack of them), select Alert… from the Create drop-down menu in the upper right corner. Name the search “MySQL CPU Usage Over 65%” or something that’s recognizable later. One drawback with Splunk is that it won’t allow renaming the search later; doing that requires editing .conf files. Leave the Schedule at its default, Trigger in real-time whenever a result matches. Click the Next button.

Schedule an alert

Enable Send email and enter one or more addresses to receive the alerts. Also, enable Throttling by selecting Suppress for results with the same field value and enter the MySQLCPU field name. Set the suppression time to five minutes, which is pretty aggressive. Remember, the script on the Forwarder server is sending new values every minute. Without throttling Splunk would send an alert every minute as well. This will allow an administrator to keep some sanity. Click the Next button.

Enable alert actions

Finally, select whether to keep the alert private or share it with other users on the Splunk system. This only applies to the Enterprise version of Splunk. Click the Finish button.

Share an alert

Splunk is now looking for new data to come from a Forwarder and as it receives that new data it’s going to evaluate it against the saved search. Any result other than no results found will trigger an email.

Note that alerts don’t need to just trigger emails. They can also run scripts. For example, an advanced Splunk search may look for multiple Java processes on a server running a Java-based application. If it found more than 20 spawned processes it could trigger a script to send a killall command to stop them before they consumed the server’s resources and then issue a start command to the application.
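
As a rough sketch of that idea (the process name, threshold and restart command here are placeholders, not from a real deployment), the triggered script might look like:

#!/bin/sh
# Sketch of an alert-triggered cleanup script; adjust names to your environment.
RUNAWAY=$(pgrep -x java | wc -l | tr -d ' ')
if [ "$RUNAWAY" -gt 20 ]; then
    killall java                    # stop the spawned Java processes
    /usr/local/bin/myapp start      # hypothetical command to restart the application
fi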

[More Splunk: Part 3] Report on remote server activity

Wednesday, November 28th, 2012

This post continues [More Splunk: Part 2] Configure a simple Splunk Forwarder.

With data flowing from the Splunk Forwarders into the Splunk Receiver server, the last step toward getting meaningful information is to create a search for specific data and put it into a report.

Splunk searches range from simplistic strings such as “error” to complex phrases that resemble Excel formulas mixed with shell scripting. To extract the data gathered from a remote server will require narrowing down the location of the data from host to source to field and then manipulating the field values to get meaning from them.

Creating a search

After logging in to the Splunk Receiver server, select Search from the App menu.

Choose Search

This presents a page with a seemingly simple search field at the top and three panels below called “Sources”, “Source Types” and “Hosts”. The window is actually a very helpful formula builder for creating complex searches. Locate the Hosts area. This lists both the local computer and all Splunk Forwarders.

Hosts

Clicking any of the host names, in this case “TMI”, begins building the search formula. It automatically inserts a correctly formatted string into the Search field:

host="TMI"

At the same time Splunk displays a table of data from that host and begins displaying a dynamic graph based on that data. Without any filtering or refining it’s displaying the count of records from log files it has gathered. Interesting but not very useful.

Host search

Now that the data shown is narrowed down to the server, let’s narrow it down to the data coming from the counters.sh script running on the server. The script is considered the “source” of the data and the path to the script is the value:

hosts="TMI" source="/Applications/splunkforwarder/etc/apps/talkingmoose/bin/counters.sh"

This search narrows Splunk’s results considerably. Note that Splunk is highlighting the host and source information in the textual data. Also note how the graph consistently shows “1” across its scope. This indicates it’s reporting one record for each reporting interval. Again, not very useful.

Source search

What we really want are the values of the results displayed over time. This is handled by the “timechart” function in Splunk. The formula now pipes the data returned from the host and source into a function:

host="TMI" source="/applications/splunkforwarder/etc/apps/talkingmoose/bin/counters.sh" | timechart avg(MySQLCPU)

Remember that the counters.sh script was written to denote “fields” called “MySQLCPU” and “ApacheCount”. Using the field name in the timechart function returns the values over time. Using “avg” returns the average of the values (really, just the average of the one value). The final result is a simple table of data, which is all that’s needed to create a report.

Timechart
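
The same approach works for more than one field at a time. For example, charting both values from the script in a single timechart should be possible with something like:

host="TMI" source="/Applications/splunkforwarder/etc/apps/talkingmoose/bin/counters.sh" | timechart avg(MySQLCPU), avg(ApacheCount)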

Creating a report

Now, we can graph this table of data. From the Create menu select Report… Splunk creates a rough graph, which is useful but not very easy to read.

Initial graph

Using the formatting options above the graph, adjust these items:

  • Chart type: area
  • Chart title: MySQL CPU Usage

Area graph

To save this graph so that it’s easily accessible without having to recreate the search each time, let’s add it to a dashboard. A dashboard is a single Splunk page that can act as an overview for multiple related or unrelated processes or servers.

From the Create drop-down menu select Dashboard panel… Name the new panel “MySQL CPU Usage” and click the Next button. If an appropriate dashboard already exists, simply choose to add the panel to that existing dashboard. Otherwise, name the new dashboard “Servers Dashboard” and click the Next button. Click the Finish button when done.

To view the report panel without having to recreate the search each time, locate the Dashboards & Views menu and select the Servers Dashboard.

Select dashboard

A dashboard can hold any number of report graphs for one or multiple machines. Create a new search and then create a new report based on that search. When done save it to the dashboard. Drag and drop panels on the page to reorder them or put higher priority panels toward the top or left of the page.