How to feed content into Channels DVR? (RSS Feed)

New user here, migrating from mythtv and a patchwork of various services.

One thing that I want to migrate is my RSS program feed. At the moment, I use gPodder to pull down programs from various feeds, then pull episode info from gPodder's database, push the episodes into mythtv's video spool directory and its database, and finally tell gPodder to delete them.

I see the provision to add a .m3u file under custom channels, but I'm not quite sure how good a fit this is. I could set up a named pipe for the .m3u file and generate the file whenever channelsDVR reads it, but it is difficult to tell when channelsDVR actually gets around to reading the program so that I can mark it for gPodder to delete.

I also see a library mechanism that might be a good fit. I'm guessing that I just copy episodes into the library directory and channelsDVR will periodically check and notice the added files there. But, how do I provide episode description & other metadata? Also, will channelsDVR delete episodes from the library after they are watched?

And I see references to the unpublished (and unstable) REST API, which I'm thinking might have some mechanism there.

So, what would be the best way to programmatically add content to the server?

Marcus Hall

ps: Thanks for a great product!


You tell Channels how often to read the m3u and the XML guide data. The m3u is either never or every 24 hours. The guide data is 1, 3, 6, 12, or 24 hours.

Honestly, I am unfamiliar with the method you are using, but it sounds unnecessarily complicated and something you had to do in order to work around your old solution. You might want to look into simplifying with Channels' built-in tools.

Yes, you can use either Channels' default import directories or add your own, so long as they follow Channels' preferred format. The system checks approximately every 5 minutes.

When a show/episode is detected, the system attempts to match it against the Gracenote database (movies will also use TMDB). If you don't like the match, you can use the Fix Incorrect Match function. If that doesn't help or there is no match, you can manually edit the metadata through the web interface.

Not after they are watched; only after you click the delete button, which you can also turn off if you want to protect something from ever being deleted. Deleted items also go to a temporary trash in the interface (the file stays where it is), so you can undelete within a given timeframe.

On a day-to-day basis for the average user, no. For specialized use cases, yes. You most likely don't need to worry about it.

As you may have noticed, I linked to a bunch of help and learning articles. There are a ton more, so read through those first. Channels is free to try for 30 days, so do that and experiment, see if you can answer all your own questions. If you have a specific issue, search the forums as your answer will probably already be here. If not, ask a specific question and many people here will be glad to assist!


Thank you for the reply! I had seen the tutorials on .m3u and libraries, but haven't decided which to try to prototype with first. Or if there was a better way that I had been missing...

I totally agree. I did see a couple of requests for channelsDVR to support RSS feeds, but it doesn't seem to be a high enough priority, so that's why I'm trying to figure out which existing tools are the best fit.

The difficulty is that channels seems to view everything as a live stream, while a podcast publishes episodes by adding the file to a server and adding a record to the .rss file. I'm using gPodder to periodically pull down the .rss files for all of the podcasts and determine whether a new episode has been published. If so, it downloads the episode and stores away the metadata about it. I'm looking for the best way to take that and push it into channels.

Re: Custom channels approach:

Hmm... So, if gPodder is run four times a day, then when it downloads new episodes I can leave them spooled in gPodder's directory, build a .m3u file with info about them, and place it where channelsDVR will eventually look at it. What I don't know is when to remove the downloaded episode. I have to leave it visible long enough to know that channelsDVR has picked it up, but not so long that it gets picked up more than once. That is why I am looking for some handshake from channelsDVR that it has picked it up.

There are some other issues, too: for example, I would prefer the date to be the publish date of the episode, but I think channelsDVR would interpret that as the date to look for the episode, and that date has already passed.

Installing the .m3u playlist, one thing it asks for is the stream format, offering HLS or MPEG-TS. I'm guessing that if it's just a .mp4 file, MPEG-TS could grok it? I'm not sure that all of the podcasts are in .mp4 format, but I can convert them if necessary.

Re: Libraries approach:

The podcasts are certainly not in Gracenote. Think of a YouTube channel, perhaps. I was hoping I could put two files, "Cordkillers_20221221.mp4" and "Cordkillers_20221221.info", into the library, where the .mp4 was the episode and the .info was the metadata for it. Trying to scrape a web interface to have my script insert metadata is probably much more failure-prone than using the unpublished API to set the metadata.

But, it's good that the UI offers a delete option for library files just like the recorded files.

I'm beginning to think that this is a special use case.

At the moment, I'm checking into POSTing to "/dvr/uploads/new"; the name suggests an applicable operation. If that doesn't pan out, then I think my best solution is to plop the files into a library directory and find the API calls to set the metadata.

Didn't realize you were talking about video podcasts. Then just forget everything I said.

Channels is primarily focused on TV/Movies. There is a section called "Video" for all other personal media (people use it for concerts, home movies, fitness, etc...). Your best bet is to just auto-download the episodes to a folder you set up for that and let Channels pick them up. There is no out of the box solution right now for metadata importing; it is all manual.

If they are video files that you can link to directly, you don't have to download them, but can create a .strm file:
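For reference, a .strm file is simply a plain-text file whose single line is the direct URL of the media; a sketch with a placeholder URL and path (the feed name and filename here are just examples):

```shell
# Create a .strm pointer for a directly linkable episode.
# Directory and URL below are placeholders for illustration.
mkdir -p Imports/Videos/Cordkillers
printf '%s\n' 'https://example.com/feeds/cordkillers/ep123.mp4' \
    > Imports/Videos/Cordkillers/Cordkillers_20221221.strm
```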

Again, this will not help with your metadata. Channels is not set up yet to be all media consumption, and most likely never will.

(Edit: Ninja'd by @babsonnexus)

Custom Channels (M3U playlists) are only supported for live/linear streaming content. Using one as a playlist for static/VOD content is not supported (and likely never will be).

What you are describing sounds like you want to use a local content import. You designate a directory for Channels to periodically scan—every 5 minutes—and when new content is found, it is added to your library. You can then use Channels' native management features to edit the metadata and delete watched items.

You may want to give this a try: point gPodder to the Imports/Videos/ directory of your DVR storage path. Have it place episodes in a directory named after your podcast, and Channels will use that directory name as the group name for the imported content.
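The resulting layout would look something like this (a minimal sketch; the storage path and podcast name are assumptions, so substitute your real DVR directory):

```shell
# Hypothetical DVR storage path for illustration.
DVR=/tmp/dvr-demo
# One sub-directory per podcast under Imports/Videos/;
# Channels uses the directory name as the group name.
mkdir -p "$DVR/Imports/Videos/Cordkillers"
# gPodder (or any downloader) drops finished episodes here;
# Channels scans the tree periodically and imports new files.
touch "$DVR/Imports/Videos/Cordkillers/Cordkillers_20221221.mp4"
```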

Thanks!

Just playing a little bit by hand, I can plop the episodes into data/Imports/Videos/<podcast_directory>/ and 90% of everything is good. That's actually sufficient right there. I still want to figure out the right POST to set the metadata, which seems to provide for a short summary and a full summary. It seems that Movies have a release date that can be set, which isn't (normally) editable for Videos, but maybe I can set that too, even if it isn't in the GUI.

Looks good!

A final entry here just to document what my solution is for tying an RSS feed into channels.

First off, I use gPodder to manage the RSS feeds. I run it as the channelsDVR userid ("channels" on my system). Normally this is run non-interactively from a crontab, but I do run the GUI (as "channels") whenever I want to add or manage feeds.

gPodder normally stores its data in ~/gPodder/Downloads, in one directory per feed. After setting up a new feed, I move its directory from ~/gPodder/Downloads/<rss_feed> to Imports/Videos/<rss_feed> in the channelsDVR DVR directory. I then put a symlink in the gPodder/Downloads directory that points to where the directory was moved. (It turns out that channelsDVR will not follow a symlink to the gPodder directory...)
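The move-plus-symlink step looks like this (demo paths only; substitute your real gPodder and DVR directories):

```shell
# Demo paths for illustration.
GP=/tmp/demo/gPodder/Downloads
DVR=/tmp/demo/dvr
mkdir -p "$GP/Cordkillers" "$DVR/Imports/Videos"
# Move the feed's download directory under the DVR import tree...
mv "$GP/Cordkillers" "$DVR/Imports/Videos/Cordkillers"
# ...and leave a symlink so gPodder keeps writing to the same place.
ln -s "$DVR/Imports/Videos/Cordkillers" "$GP/Cordkillers"
```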

The script to manage the downloading is in ~/bin/gPodder_download:

#!/bin/bash
# Script to fetch new episodes of podcasts
#
set -e
timeout 10m gpo update >/dev/null
timeout 2h gpo download >/dev/null

I have a crontab set up to run things 4 times per day:

23 3,9,15,21 * * * bin/gPodder_daemon

This actually runs a simple wrapper instead of gPodder_download directly, but if you ran the download script from the crontab, it would be a pretty functional feed by itself.
However, I want to pull some metadata from the podcast episodes, so a second script is run by this wrapper (this is gPodder_daemon, invoked from crontab):

#!/bin/bash
# Script periodically run as a daemon

bin/gPodder_download
sleep $(( 5 * 60 ))  # Sleep 5 minutes
bin/gPodder_metadata

This wrapper runs the download script, waits 5 minutes (during which time channelsDVR has most likely noticed any new files in its Videos directory), then runs the metadata script.

The metadata script is more complicated, but the general idea is to ask channelsDVR for info about all its files. For each record, we note any file whose group starts with "videos-" and that has nothing set in "Airing.OriginalDate"; for these entries we record the filename and the file ID.
Then, we look through gPodder's database and select any unplayed episodes. If the filename is one we noted, we populate the date, title, and description with info from the database.
Some care has to be taken to extract the text fields from the database as hex strings (to avoid confusing the script with stray delimiters), and newline and `"` characters have to be escaped to make a valid JSON record. Also, if the description is empty, we try to convert the HTML description into text and use that.

Here's the metadata script:

#!/bin/bash
# Script to copy metadata from gPodder into channelsDVR

CHANNELS=127.0.0.1:8089
DOWNLOADS=/home/channels/gPodder/Downloads
GP_DATABASE=/home/channels/gPodder/Database
declare -A episode

function hex2str {
    local in="$1"
    local out=""
    while [ -n "$in" ]; do
        local c=${in:0:2}
        in=${in:2}
        case $c in
        00) ;;
        0[aA]) out="$out"'\\n';;
        22) out="$out"'\\"';;
        5[cC]) out="$out"'\\\\';;
        *) out="$out\x$c";;
        esac
    done
    printf "$out"
}

# Query for all episodes in channelsDVR and find those in the video
# group where the date hasn't been set yet.  Set the associative array
# episode[] to map the path to the file id for any gPodder episodes without
# the metadata defined.
#  (Note: use process substitution so the while loop runs in the
#   primary process and not a subprocess)
while read id; do read path; read date; read group
    [ $date = null ] || continue
    eval group=$group
    case "$group" in videos-*);; *) continue;; esac
    eval path=$path
    eval id=$id
    episode["$path"]="$id"
done < <(curl -s $CHANNELS/dvr/files \
         | jq '.[]|.ID,.Path,.Airing.OriginalDate,.GroupID')

# Query the gPodder database to extract the title and description for
# all "unwatched" episodes.  If these map to a file id, then set the
# metadata in channelsDVR
QRY="select download_folder, download_filename, published, "
QRY="$QRY hex(episode.title), hex(episode.description), hex(description_html)"
QRY="$QRY from podcast, episode"
QRY="$QRY where podcast.id = podcast_id and state = 1"
echo "$QRY;" | sqlite3 "$GP_DATABASE" \
| while IFS='|' read folder file pub title desc deschtml
do
    id=${episode["$folder/$file"]}
    [ -n "$id" ] || continue
    if [ -n "$desc" ]; then
        desc="$(hex2str $desc)"
    else
        # No plain-text description: flatten the escaped newlines, then
        # convert the HTML description to text and re-escape for JSON.
        desc="$(hex2str $deschtml | sed 's/\\n/ /g' \
                | html2text | sed -z 's/"/\\"/g;s/\n\n*/\\n/g')"
    fi
    JSON="{ \"Airing\": {"
    JSON="$JSON \"OriginalDate\": \"$(date +'%Y-%m-%d' -d@$pub)\","
    JSON="$JSON \"EpisodeTitle\": \"$(hex2str $title)\","
    JSON="$JSON \"Summary\": \"$desc\"}}"
    curl -s --json "$JSON" -X PUT $CHANNELS/dvr/files/"$id" >/dev/null
done

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.