Log alerts

The other day I missed a couple of OTA recordings from my HDHomeRun tuner for some weird reason, and I didn't realize they had failed until well after the fact, when we sat down to watch. That started me searching for a way to get alerts from Channels, but surprisingly I couldn't find anything.

I initially considered writing a small app I could throw into a Docker container, but I was really just looking for something quick and dirty. That's when I remembered I was already using the Node-RED add-on in my Home Assistant install. I decided to try throwing together a quick flow that would poll the log endpoint, parse the log records, and alert me if issues were found.

After a couple hours of experimenting last night, I wound up with what seems to be a workable solution.

I set up a quick flow with 4 nodes:

The first one is an "inject" node to trigger the HTTP GET on an interval. I set it for every 2 minutes.

The next one is an "http request" node that performs the HTTP GET against the log endpoint. Note that it's not a proper API endpoint, sadly, so it just returns the full log as plain text each time.

Next is a "function" node for doing the log processing. I'll include the full code for it at the end.

The code basically parses the full log each time, looking for new records that match the defined alert rules. If matching records are found, it collects them and passes them on to the next node in the flow. This is where things get interesting with it being Node-RED in Home Assistant: you could easily trigger a home automation routine, send an alert to the HA app, send a text, etc. At one point I had it flashing the lights, playing an audio message on my Google Home, and alerting via the app, but after some coaching from my wife :sweat_smile:, I chose to simply send myself an email with the selected log records.
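For example, in place of (or alongside) the email, a function node could hand the alert off to the Home Assistant "call service" node. This is only a sketch, not part of the original flow; the payload shape is an assumption based on the node-red-contrib-home-assistant-websocket node, and the notify service name is hypothetical:

```javascript
// Sketch only (assumption): message shape for the Home Assistant
// "call service" node from node-red-contrib-home-assistant-websocket.
// "mobile_app_my_phone" is a hypothetical notify service; use your own.
// In a real function node, msg already exists; it's declared here only
// so the snippet stands alone.
const msg = {};
msg.payload = {
    domain: "notify",
    service: "mobile_app_my_phone",
    data: { title: "DVR", message: "ChannelsDVR log alert" }
};
```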

That leads me to the last node, an "email" node. Reminder: if you're using Gmail and have MFA on your account, you'll need to create an "app password" to use in place of your normal account password.

Finally just Deploy the flow and wait for the emails to come in.

The full code I came up with for the function node that parses the log is below.
I chose to create a persistent, localfilesystem context store in Node-RED so the last processed log ID wouldn't be lost between restarts, but if you don't want to fool with creating one, you can simply leave the CONTEXT_STORE variable empty and it will default to the in-memory store. You might get some old alerts re-sent after a Node-RED restart, but no biggie.

The ALERT_RULES collection defines what you will be alerted about. You simply define the log types (e.g. [ERR]) you want, and optionally some rules to match text in the log description. Specifically, you can match descriptions that "startsWith" some text, "contains" some text, "notContains" some text, or, if you really want to get fancy, match "regex" patterns (or use a combination of all of them). I added the two rules in the code below because I wanted to know when jobs failed (mainly scheduled recordings), plus any other ERRs as long as they weren't Pluto-related, since I regularly get 404s pulling Pluto guide data.
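To illustrate the matcher types, here's a hypothetical ALERT_RULES with one rule of each kind. Only the first two are from my actual config; the others are made up for the example:

```javascript
// Hypothetical rules demonstrating each matcher type; adjust to taste.
const ALERT_RULES = [
    { logType: '[ERR]', notContains: 'Pluto' },                // type + exclusion
    { logType: '[DVR]', startsWith: 'Error running job' },     // prefix match
    { logType: '[DVR]', contains: 'failed' },                  // substring match
    { logType: '[ERR]', regex: /timeout|connection refused/i } // RegExp object
];
```

Note that a "regex" rule must be a RegExp object, not a string, since the function node calls rule.regex.test() on the description.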

/*
  This code parses log records from ChannelsDVR log endpoint http://xxx.xxx.xxx.xxx:8089/log.
  The logs are plain text and must be parsed into datetime, type, and description.
  The log endpoint always returns a full log and must be polled, so we use a node-red context store
    to keep track of the last log entry processed.
  Log records matching the rules in ALERT_RULES will be collected and sent in a single message
    on to the next node in the flow (to be sent via email).
  The ALERT_RULES support selecting log records by the log type and optionally matching on the
    description text using the following:
    1) startsWith - where description starts with the given string
    2) contains - where description contains the given string
    3) notContains - where description does NOT contain the given string
    4) regex - where description matches the regex pattern
*/

// CONFIGURE: Node-RED context store to use; leave blank for in-memory or specify name (if persistent/defined in settings.js)
const CONTEXT_STORE = 'channelsdvrlog'; 

// CONFIGURE: Email Subject
const MESSAGE_TOPIC = 'ChannelsDVR Log Alert';  

// CONFIGURE: Alerts; logType required; optional rules: contains, notContains, startsWith, regex
const ALERT_RULES = [
    { logType: '[ERR]', notContains: 'Pluto' },
    { logType: '[DVR]', startsWith: 'Error running job' }
];

// Retrieve the last processed log identifier
const lastProcessedLogId = context.get('lastProcessedLogId', CONTEXT_STORE) || '';

function parseLogEntry(logEntry) {
    /* Example Log records
    2024/01/12 21:26:07.170332 [ERR] Failed to download XMLTV-Pluto: xmltv fetch: GET: https://i.mjh.nz/PlutoTV/us.xml: 404 Not Found: "404: Not Found"
    2024/01/12 21:26:07.430336 [DVR] Fetched guide data for XMLTV-Plex in 259ms
    2024/01/12 21:26:08.486562 [DVR] Indexed 612 airings into XMLTV-Plex (31 channels over 26h6m59s) + 71 skipped [746ms index]
    */
    const parts = logEntry.split(' ');
    const datetime = parts[0] + ' ' + parts[1];
    const type = parts[2];
    const description = parts.slice(3).join(' ');

    return {
        id: datetime,
        datetime: datetime,
        type: type,
        description: description
    };
}

function checkConditions(log, rule) {
    if (log.type !== rule.logType) {
        return false;
    }

    if (rule.startsWith && !log.description.startsWith(rule.startsWith)) {
        return false;
    }

    if (rule.contains && !log.description.includes(rule.contains)) {
        return false;
    }

    if (rule.notContains && log.description.includes(rule.notContains)) {
        return false;
    }

    if (rule.regex && !rule.regex.test(log.description)) {
        return false;
    }

    return true;
}

let newLastProcessedLogId = lastProcessedLogId;
let alerts = [];

// process the log records
msg.payload.split('\n').forEach(logRecord => {
    if (logRecord.trim() !== '') {
        const log = parseLogEntry(logRecord);

        if (log.id > lastProcessedLogId) {
            newLastProcessedLogId = log.id;

            ALERT_RULES.forEach(rule => {
                if (checkConditions(log, rule)) {
                    alerts.push(logRecord);
                }
            });
        }
    }
});

// Store the last processed log Id
context.set('lastProcessedLogId', newLastProcessedLogId, CONTEXT_STORE);

if (alerts.length > 0) {
    // Send a single message with all selected log records concatenated
    node.send({
        topic: MESSAGE_TOPIC,
        payload: alerts.join('\n')
    });
}

return null; // Prevent further processing in this flow

Final thoughts:

  • This could easily be wrapped up into a little Node or .NET app in a Docker container instead of going the HA/Node-RED route.
  • It would have been really nice if there were an actual log API endpoint that returned JSON, and even better if you could query it by date/time so we wouldn't have to pull the full log each time.
  • Even better would be some kind of pub/sub or webhook support so we could just subscribe to the events instead of having to poll. :sunglasses:

Curious what you guys think.

I did wind up creating a little standalone docker container for this if anyone is interested.
channelsdvr-log-monitor

You can check the GitHub repo linked for full instructions, but it basically expects you to mount your own appsettings.json config file into the container. Grab the default appsettings.json from the root of the repo and fill in your own info (log URL, SMTP settings, alert rules, etc.).

Then just mount your own host path/to/appsettings.json to the container path /app/appsettings.json.
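As a sketch, the run command might look like this; the image tag and host path are placeholders, so check the repo's README for the actual published image name:

```shell
# Placeholder image tag and host path; substitute your own.
docker run -d \
  --name channelsdvr-log-monitor \
  -v /path/to/appsettings.json:/app/appsettings.json:ro \
  channelsdvr-log-monitor:latest
```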

I've only tested with Gmail SMTP for now. If this proves useful enough, I may look into other notification options. It was a fun little project.

Why the constant HTTP calls? Why not something simple like:

tail -f "${CHANNELS_DIR}"/data/channels-dvr.log | grep -F '[ERR]'

This will report every line containing [ERR] as it is written to the log. Just pipe the output to whichever additional tool you need, such as teeing it to a separate file.
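One gotcha: to grep, an unquoted [ERR] pattern is a bracket expression matching any single E or R, so a literal match needs -F (or backslash escapes). A quick demonstration with two made-up log lines:

```shell
# Only the [ERR] line should survive the literal-match filter;
# without -F, the [DVR] line would also match (it contains an R).
printf '%s\n' \
  '2024/01/12 21:26:07 [ERR] Failed to download XMLTV-Pluto' \
  '2024/01/12 21:26:07 [DVR] Fetched guide data for XMLTV-Plex' \
  | grep -F '[ERR]'
```

For the live version, something like tail -F ... | grep --line-buffered -F '[ERR]' | tee -a errors.log keeps matches flowing to both the terminal and a file.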

Hah, mainly because I didn't know that existed! The only way I knew of to access the log was through the admin site, which itself just polls /log, so that's basically the direction I took.

I'll have to give that some thought. I run Channels in a container, but I could probably create another volume mount in this container that points to the Channels container's /data/channels-dvr.log.

Thanks for the tip.

Why? Why does everyone think since one thing is in a container, everything must be?

Just run a simple shell script that reads from the same shared folder you mount into your Channels container, and save the overhead of a whole separate containerized OS.

Your first thought should always be how to do it in the easiest way, using as many of the base OS tools necessary. If your first thought to any problem is to spin up a separate container/VM for a single purpose, you're doing it wrong.

(No wonder emissions are so high. Everyone thinks about AWS instances as their solution, instead of understanding what a POSIX foundation fostered. Do more with less, not less with more!)

Wow, sorry if I triggered you - lol. I think we'll have to agree to disagree on this one.

It's all about ease of use and ease of maintenance, not just for myself but for others (if anyone ever finds it useful). I wanted something self-contained, not a pile of random shell scripts tying various tools together. I wanted logging, HTTP requests (or so I thought), SMTP/email notifications (with the ability to easily support other types if needed), and I certainly didn't want to worry about how the user was trying to run it or what dependencies they needed to manage.

That's what the whopping 50 MB of memory it uses is for: ease of use. I'm not sure who's running these in AWS; I run all mine on a little Unraid instance.

But hey, if you like scripts, that's awesome - you do you. If you'd shared one, I'd probably never have bothered doing this.

I wasn't "triggered" (whatever that means to the current generation); I was merely stating that I believed your Node-RED-based solution was overkill compared to a minimal shell script.

If you checked the startup scripts for Channels, you would see that it directs its output to a single file; you could modify that startup to additionally direct its output—even filtered via grep or similar—to an additional location of your choice.

(Yes, I subscribe to the old-school idea that single-purpose tools, strung together, can achieve myriad results. I also believe almost all of those tools come installed with the minimal base install of my OS. But huge middleware like Node-RED should never be necessary.

Yes, we'll agree to disagree. But why make simple things harder?)

Quick note that I've updated the app: it can now be configured to either periodically poll the /log API endpoint, or monitor the physical log file (/data/channels-dvr.log) in real time if you mount it into the container.

It also polls the /devices API endpoint and will alert you to device/source/channel changes as well.

channelsdvr-log-monitor

GitHub