Logfile Management

Disclaimer: Not for the faint of heart or those without some minimal Linux sysadmin experience.

Love my Channels DVR, but the unmanaged, ever-growing channels-dvr.log has always bothered me. Moving it to an alternate volume or using log rotation created problems for the log review feature (and presumably for issue reporting). My workaround is the Linux-only code below, which implements a proper logfile FIFO. It operates in one of three modes depending on the Linux version and filesystem in use:

#1- With Linux 3.15+ (almost everyone) and ext4/xfs, it uses fallocate with FALLOC_FL_COLLAPSE_RANGE to deallocate old data from the head of the logfile. This is extremely efficient: there are no extra writes and no shifting of data; the filesystem simply alters the block mapping to drop the unwanted blocks. When the file reaches its target size, it essentially stays there (+/-64K).

#2- If approach one fails, the code rewrites the file in place. This is far less efficient, as each byte of data is written 3-4 times over its lifetime. When the logfile reaches the size limit, approximately 75% of the data is copied back to the start of the file and the remainder (old, now-duplicate data) is truncated. The file varies between 75-100% of the target size.

#3- If both approaches fail, no management is performed and the logfile grows unbounded (no different than today). I don't believe approach two would ever realistically fail, but the fallback exists just in case.

Building: gcc logfifo.c -o logfifo
Usage: logfifo [-m size-limit] <output-logfile-name>
(default logfile limit is 10M, minimum limit is 256K)

Installation: modify startup to pipe data through the utility. For my systemd-based deployment, I compiled logfifo in the channels data directory and updated the systemd config:
ExecStart=/bin/sh -c 'exec $base/latest/channels-dvr 2>&1 | ./logfifo -m 10000000 $base/data/channels-dvr.log'
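If you prefer not to edit the stock unit file directly, the same change can live in a systemd drop-in. This is a hedged sketch: the channels-dvr.service name, the drop-in path, and $base (assumed to be set via Environment= elsewhere in the unit) are assumptions from my deployment, not official guidance:

```
# /etc/systemd/system/channels-dvr.service.d/logfifo.conf  (assumed path)
[Service]
# blank ExecStart= clears the stock command before replacing it
ExecStart=
ExecStart=/bin/sh -c 'exec $base/latest/channels-dvr 2>&1 | $base/data/logfifo -m 10000000 $base/data/channels-dvr.log'
```

Run systemctl daemon-reload and restart the service to pick up the override.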

Use-at-your-own-risk source code below. Though I have been running it for several weeks on ext4 (true FIFO) and tmpfs (shift-in-place), it's always possible there is a bug that could cause data loss or worse. (Edited to fix forum suppression of less-than/greater-than in the usage line.)

// logfifo.c
// requires linux 3.15+ and ext4/xfs for fifo effect
// (for other file systems, does shift within file)
// provided as-is, no restrictions, use at own risk
// to build:
// gcc logfifo.c -o logfifo

// _GNU_SOURCE required for fallocate / FALLOC_FL_COLLAPSE_RANGE
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

static unsigned int max_size = 10000000;
static unsigned int min_size = 256*1024;
static unsigned int blk_size = 64*1024;

int main(int argc, char *argv[]) {
  if (argc < 2) {
    fprintf(stderr, "usage: %s [-m max_size] <logfile>\n", argv[0]);
    return 1;
  }

  // max filesize configuration
  for (int i = 1; i < argc-1; ++i)
    if (strcmp(argv[i], "-m") == 0)
      max_size = atoi(argv[++i]);
  if (max_size < min_size)
    max_size = min_size;

  // append to logfile
  int fd = open(argv[argc-1], O_RDWR|O_CREAT, 0666);
  if (fd < 0) {
    fprintf(stderr, "%s: unable to open %s\n", argv[0], argv[argc-1]);
    return 1;
  }
  lseek(fd, 0, SEEK_END);

  // buffer each line
  char strLine[4096];
  while (fgets(strLine, sizeof(strLine), stdin) != NULL) {
    write(fd, strLine, strlen(strLine));
    if (blk_size <= 4096) continue;  // management disabled
    unsigned int eof = lseek(fd, 0, SEEK_CUR);
    if (eof <= max_size) continue;

    // align to block multiple
    unsigned int del = eof-max_size;
    del -= del % blk_size;
    if (del < blk_size) del = blk_size;

    // deallocate from front of file to create fifo effect
    if (fallocate(fd, FALLOC_FL_COLLAPSE_RANGE, 0, del) == 0) {
      // reseek to end just in case (as length has been reduced)
      lseek(fd, 0, SEEK_END);
      continue;
    }

    // fall back to copying newer data to start of file
    // (note mmap reports failure via MAP_FAILED, not NULL)
    unsigned char *copy = mmap(NULL, eof, PROT_READ, MAP_PRIVATE, fd, 0);
    if (copy != MAP_FAILED) {
      lseek(fd, 0, SEEK_SET);
      write(fd, copy+del, eof-del);
      ftruncate(fd, eof-del);
      munmap(copy, eof);

      // recalc large block size for minimum copying
      blk_size = max_size/3;
      // make it a power-of-2 (clear low bits until one remains)
      while (blk_size & (blk_size-1))
        blk_size &= blk_size-1;
      continue;
    }

    // disable further truncation as errors likely to continue
    // (unsupported linux version, file-system, blk-size, etc)
    blk_size = 0;
  }

  // clean up when source closes pipe
  close(fd);
  return 0;
}

I do use Linux and am realizing the logfiles add up quickly (transferring my Channels DVR to temp storage so I can rebuild the RAID drives) and it's over 100 gigs. Seems like a lot of data just for log files.

Roughly 99.5% of the size is eaten up by the recording logs; the comskip logs are only about 100 megs.

Unfortunately, I am a noob with Linux and don't understand about 3/4 of what you coded above, so I'll simply suggest that the admins may want to develop a solution that automates the above through a point-and-click environment for the coding challenged (like me).

Been running this code since I posted it. My logfile is always the most recent 10M of data. It's been super valuable for me, as my experiments integrating Raspberry Pi cameras have often resulted in logfile spamming. I would have run out of space dozens of times without this. I had hoped this example might encourage the devs to develop a solution of their own or use/modify what I provided, but other priorities, I guess.

The script code Vraz posted is for the main channels-dvr.log file, not the individual recording logs.
You could script or manually delete the recording logs; they are only useful to the devs for troubleshooting issues with recorded TVE/M3U streams or the new Smart Commercial Detection.
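For the scripted route, a cron-able sketch along these lines could prune older recording logs. DVR_DIR and the '*.log' filename pattern are assumptions (verify the actual layout under Logs/recording before trusting anything with -delete):

```shell
# hedged sketch: prune per-recording debug logs older than 30 days
DVR_DIR="${DVR_DIR:-/mnt/dvr}"        # assumed recording directory
LOGDIR="$DVR_DIR/Logs/recording"
if [ -d "$LOGDIR" ]; then
  # remove old per-recording log files
  find "$LOGDIR" -type f -name '*.log' -mtime +30 -delete
  # remove FileID folders left empty by the deletion
  find "$LOGDIR" -mindepth 1 -type d -empty -delete
fi
```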

Are there other unmanaged logfiles besides channels-dvr.log? Because the latter captures stdout, it goes crazy. The other logfiles seem to have basic rotation/auto-delete that is sufficient to keep them in check. Are there recording-specific logs that I'm missing?


The recording logs are what Channels considers "debug" recording logs, and one is kept for every recording made regardless of source. They're deleted when the recording is deleted. HDHR tuner recordings result in small, one-line log files, but TVE/M3U "debug" recording logs can be multi-megabyte. They're stored in
DVR recording directory/Logs/recording/FileID# folders

In the channels-dvr data directory there are also other log folders. You would have to ask the devs about those, as I never mess with them.

Right, sorry. The C program.

What I meant (and should have said "code" instead of "script") was that you were redirecting standard and error output from channels-dvr to the channels-dvr.log file, and that doesn't include the recording.log files.

I should have quoted from your post.