Splunk truncating timecharts is fixable

I’ve been building dashboards in Splunk to compare activity across systems side by side. To make those comparisons valid, I needed a way to deal with Splunk truncating timecharts inconsistently.

Splunk’s timechart command is useful for baselining: before you can spot the abnormal, you need to recognize the normal. But when I monitor activity with timechart, Splunk sometimes truncates a chart to just the span where data exists, so two charts side by side aren’t necessarily covering the same timeframe.

Fix Splunk truncating timecharts with fillnull

Here’s how to force a chart to cover the full timeframe, in my case 24 hours:

(insert search parameters here) | fillnull value=NoVal | timechart span=15m count

The key is fillnull, which tells Splunk to substitute a placeholder value (here, NoVal) wherever a field would otherwise be empty, so time periods with no data still show up in the chart instead of being dropped.
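By default, fillnull replaces nulls in every field. If you only want certain fields filled, you can list them after the value. As a sketch (the index, sourcetype, and field names here are placeholders, not from my actual search):

    index=main sourcetype=access_combined
    | fillnull value=NoVal user src_ip
    | timechart span=15m count

Restricting fillnull this way avoids papering over nulls in fields you’d rather see missing.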

A real world search example

For a real-world example, here’s a search that grabs all of my activity, assuming my name shows up in the logs (in my case it does):

farquhar | fillnull value=NoVal | timechart span=15m count

Set your timeframe in the dashboard, and as long as you include the fillnull parameter in all of your panels, you’ll get a fair apples-to-apples comparison. Without it, one or more panels may show just part of the day stretched to fill the chart, a distorted view where the times don’t line up.
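In Simple XML, a shared time input keeps every panel on the same timeframe. Here’s a rough sketch of what one panel might look like (the token name and default range are my own choices, not anything from the original dashboard):

    <form>
      <fieldset>
        <input type="time" token="timerange">
          <label>Time range</label>
          <default>
            <earliest>-24h@h</earliest>
            <latest>now</latest>
          </default>
        </input>
      </fieldset>
      <row>
        <panel>
          <chart>
            <search>
              <query>farquhar | fillnull value=NoVal | timechart span=15m count</query>
              <earliest>$timerange.earliest$</earliest>
              <latest>$timerange.latest$</latest>
            </search>
          </chart>
        </panel>
      </row>
    </form>

Every panel that references $timerange.earliest$ and $timerange.latest$ moves together when you change the picker, so no panel can quietly cover a different window.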

Thanks to fillnull, it’s possible to build two searches differing only in computer or user name. By visualizing data this way, you can spot subtle differences between, say, two computers that ought to be identical.
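For instance (these hostnames are hypothetical), two panels built from searches like these differ only in the host field, so any divergence between the charts reflects a real difference in behavior rather than a difference in timeframe:

    host=workstation-a | fillnull value=NoVal | timechart span=15m count
    host=workstation-b | fillnull value=NoVal | timechart span=15m count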

This has applications in security, as it allows you to find things that shouldn’t be happening. But it’s useful in all of IT for the same reason. If two systems are supposed to be configured identically but one works and one doesn’t, finding the differences in those two systems’ logs is a good place to start tracking down the problem. Once you know what the systems do differently, you can start figuring out why.
