FastChannels - FAST Channels aggregator/manager

I noticed a couple of "Warning" errors in my most recent scrapes. Here are the corresponding log entries:

2026-03-17 18:30:18,699 INFO     app.worker: [roku] Scrape job started
2026-03-17 18:30:19,305 WARNING  app.scrapers.roku: [roku] prewarm seed: could not seed osm_session — no channel succeeded
2026-03-17 18:30:19,305 INFO     app.scrapers.roku: [roku] cache warm summary: play_id=0/0 selector=0/0 stream_url=0/0 retry_play=0 retry_selector=0
2026-03-17 18:30:19,305 INFO     app.scrapers.roku: [roku] 0 EPG entries fetched for 0 channels
2026-03-17 18:30:19,751 INFO     app.worker: [roku] EPG-only run complete — 0 channels, 0 programs (1.1s)
2026-03-17 18:31:26,706 INFO     app.worker: [plex] Scrape job started
2026-03-17 18:32:08,623 WARNING  urllib3.connectionpool: Retrying (Retry(total=2, connect=3, read=1, redirect=None, status=2)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='epg.provider.plex.tv', port=443): Read timed out. (read timeout=30)")': /grid?beginningAt=1773826286&endingAt=1773855086
2026-03-17 18:33:30,941 INFO     app.scrapers.plex: [plex] 5676 EPG entries fetched from grid API
2026-03-17 18:33:31,101 INFO     app.worker: [plex] preserved 2 existing EPG rows across 2 channels with no now coverage (sample channel_ids=818,339)
2026-03-17 18:33:31,678 INFO     app.worker: [plex] EPG-only run complete — 678 channels, 5676 programs (125.0s)

Category cleanup...
I know you're still working on it, but you have the following categories that could be cleaned up, merged, etc.

And a few more down the list...

As I mentioned in the release notes, "Existing users can run migration 009 to clean up old data, or just wait for re-scrape."

It will clean itself up over the next 24 hours or so (the actual channel list needs a re-scrape, and for some scrapers the workflow only scrapes EPG hourly and usually grabs the channel list daily). If you're feeling wild, you could run the migration script manually. Or, if you're really impatient, you could delete your stack and re-install.

Ok, I was just about to post that I saw that... Just out of curiosity, how do I run migration 009?
Actually, never mind... I can wait the 24 hours... it's no biggie...

docker exec fastchannels python /app/migrations/009_normalize_categories.py

Last post of the night... OK, I heard you guys loud and clear. You'll like the categories improvement, but some channels still leak into the wrong category. No fault of mine, just lame upstream data.

I just fed the channels list through an LLM and had it do a best-effort normalization of all the channels into proper categories. I'll commit to auditing it occasionally. The new helper script will put channels into the proper category. It'll never be 100% perfect, but the first pass looks solid: it found ~800 mis-categorized channels....

I'll get it out over the next few days.

2 Likes

I keep pulling the new images, but the UI version still shows 1.6.0.
I can tell by the image creation date/time that I'm using the latest.

Didn't mean to push that last one..... you might get a free sneak peek at the improved channels normalization now :slight_smile:

Yeah... that's one of the things you have to deal with when you're taking multiple FAST sources and trying to use them in any sort of logical fashion. Each service has its own categories and way of doing things.

It gets even deeper into the weeds.
Even channels with the same name (identical or slightly different) may be airing identical content as on other services, or it may differ in some way.
Things I have noticed when comparing sources:
  • Resolution (720 vs 1080)
  • Same show, but a different episode schedule (most common between regions like US, CA, etc.)
  • Guide data: quality/amount of info; missing season/episode numbers or premiere dates (today's date used instead); length of data; channel logo
  • Ad breaks, quick cuts, or "we will be right back" type screens

Ideally, a "Resolve Duplicates" function would clearly surface all of those data points so the user can pick and choose which channel version they want.
The current search function does a good job so far of exposing most of this info, though you have to click each channel in the list to see its guide info.
I am not certain I want to set a specific FAST source as the priority as the first step in resolving duplicates, since I am not sure which one is best (or whether any one is better than another) until I can compare the channels I want first.
Honestly, I am thinking more of cherry-picking logic here, I guess: pick the "best" version of a single channel from all available sources and make one master feed.
"Best" is relative to the user, though, depending on which aspect they care about more.
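That cherry-picking idea could be sketched roughly like this (the field names, weights, and scoring here are all made up for illustration; this is not FastChannels' actual data model):

```python
# Toy sketch of "cherry-picking" the best version of a duplicate channel
# across FAST sources. Field names (resolution, has_season_episode, has_logo)
# and the weights are hypothetical -- "best" is whatever the user weights highest.

def score(version, prefs):
    """Score one source's version of a channel by user preferences."""
    s = 0
    s += prefs.get("resolution_weight", 1) * (version["resolution"] >= 1080)
    s += prefs.get("guide_weight", 1) * version["has_season_episode"]
    s += prefs.get("logo_weight", 1) * version["has_logo"]
    return s

def pick_best(versions, prefs):
    """Pick the highest-scoring version of a channel from all sources."""
    return max(versions, key=lambda v: score(v, prefs))

versions = [
    {"source": "pluto", "resolution": 720, "has_season_episode": True, "has_logo": True},
    {"source": "plex", "resolution": 1080, "has_season_episode": False, "has_logo": True},
]
```

With `{"resolution_weight": 2}` the 1080p feed wins; with `{"guide_weight": 5}` the feed with proper season/episode data wins instead, which is exactly the "relative to the user" point above.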

Totally agree. And don't forget stream reliability. I hate it when I see corrupted recordings because of disconnects. Makes you think you should have picked a different source for it.

1 Like

I'm happy he added that in the Channels tab. It's a nice touch.

Ah, yes. For me personally, FAST channels are pretty much just live streaming: when I'm bored, nothing else is on, I'm channel surfing, or I'll throw something on for background noise. I rarely record anything.

My mother, though, records a lot, to save or to watch later. For sure, she bugs me about issues with recordings. I can often fix them by remuxing with mkvtoolnix, typically when the file shows something like 5 hrs in VLC and won't play...

LG Channels seems to be failing to scrape. (Seems it's me trying to pull the CA region. Maybe LG is region-locked or whatever, blocking requests from non-region-specific IPs.)

2026-03-18 03:54:59,995 ERROR    app.scrapers.base: [lg-channels] GET https://api.lgchannels.com/api/v1.0/schedulelist failed: 500 Server Error:  for url: https://api.lgchannels.com/api/v1.0/schedulelist
2026-03-18 03:54:59,996 ERROR    app.scrapers.lg_channels: [lg-channels] schedulelist request failed
2026-03-18 03:55:02,158 ERROR    app.scrapers.base: [lg-channels] GET https://api.lgchannels.com/api/v1.0/schedulelist failed: 500 Server Error:  for url: https://api.lgchannels.com/api/v1.0/schedulelist
2026-03-18 03:55:02,158 ERROR    app.scrapers.lg_channels: [lg-channels] schedulelist request failed
1 Like

Seems to be an issue with Plex guide data.
The show name and episode name are reversed in Channels.
Left is this feed; right is my regular server using the jgomez docker.


Also,
guide data for Pluto seems a bit wonky (compared to the Windows app I am using for my main server).
Many channels are using the current date for the episode name (some do via the Windows app too, but many more on this setup).
Some channels differ in data, even though they are supposed to be the same between the two source methods.

I guess, in general, there may be some refinement and checking/comparing needed, either in scraping the guide data or in processing it, because I am seeing differences when comparing to current, more mature solutions for Channels.


Maybe... but have you actually directly compared each of those channels?
Just because they say they are the same channel and air the same show doesn't mean they air the same exact episode.
Thus, guide data may differ per FAST source.
I have seen this on show channels like Top Gear, Ghost Hunters, Stargate, Star Trek...
It will also almost always be different for different regions on the same source:
Pluto US won't be airing the same episode as the Pluto CA or UK version of the channel.

Do you have a certain region set up?

Also, that SQL locking bug is still surfacing... it's killing me. I'm going to try another fix today. I know there's been a flurry of updates this week, but we'll get this stabilized before I do any new features.

Maybe setting up an external database like Postgres would help on that front.

There are like 5,000 channels that need to be scraped. I can't imagine SQLite handling that easily.
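For what it's worth, SQLite holds 5,000 rows without breaking a sweat; it's concurrent *writers* that cause "database is locked" pain. A generic mitigation sketch (WAL mode plus a busy timeout; not FastChannels' actual code, and the file name is made up):

```python
# Sketch: WAL mode lets readers run alongside a single writer, and a busy
# timeout makes a blocked writer wait for the lock instead of failing
# immediately with "database is locked".
import sqlite3

conn = sqlite3.connect("channels.db", timeout=30)  # wait up to 30s for a lock
conn.execute("PRAGMA journal_mode=WAL")            # readers no longer block the writer
conn.execute("PRAGMA busy_timeout=30000")          # retry window, in milliseconds
conn.execute("DROP TABLE IF EXISTS channels")
conn.execute("CREATE TABLE channels (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO channels (name) VALUES (?)",
                 [(f"channel-{i}",) for i in range(5000)])
conn.commit()
```

Bulk-inserting 5,000 channel rows like this is near-instant; the lock errors only show up when multiple processes try to write at the same time.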

1 Like

You read my mind... it's not trivial, but I'm leaning that way. Kicking myself for not doing it at the beginning.

2 Likes

I haven't seen this personally. Do you happen to change your country code? I only tested US.

FastChannels v1.7.0

Hey all — dropping v1.7.0 today, which is almost entirely focused on stability, DB reliability, and a big channel category cleanup.

DB Contention Fixes

If you've been seeing "database is locked" errors in your logs, this one's for you.

  • The EPG-only scrape path (used by Plex) had no retry logic at all — added that.
  • Exception handlers were trying to commit on dirty sessions without rolling back first, causing cascade failures. Fixed.
  • The hourly EPG prune and daily orphan cleanup jobs were running in APScheduler background threads, racing against active scrape jobs for the write lock. They now get enqueued through the RQ worker like everything else, so all DB writes are serialized. This was probably the biggest source of random lock errors on busy installs.
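For illustration, the retry and rollback fixes above reduce to a pattern like this (a generic sqlite3 sketch, not the project's actual code):

```python
# Generic retry + rollback pattern. The key fix: after an exception the
# transaction is dirty, so roll back *before* retrying or committing --
# committing a dirty transaction is what caused the cascade failures.
import sqlite3
import time

def write_with_retry(conn, sql, params, retries=3, backoff=0.1):
    """Attempt a write, rolling back and backing off on lock errors."""
    for attempt in range(retries):
        try:
            conn.execute(sql, params)
            conn.commit()
            return True
        except sqlite3.OperationalError:      # e.g. "database is locked"
            conn.rollback()                   # clear the dirty transaction first
            time.sleep(backoff * (2 ** attempt))
    return False
```

The exponential backoff means concurrent writers naturally spread out instead of hammering the lock in sync.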

SQLite has served us well but I've been chasing write contention bugs long enough that I'm evaluating a move to PostgreSQL for a future release. It handles concurrent writes natively and would eliminate this whole class of issues. Nothing to change on your end yet — just a heads up that it may be coming.

Channel Category Overhaul

  • Did a full audit of ~4,000 channels across all 35 categories and corrected 800+ miscategorizations. Local news affiliates (FOX Local, CBS News [City], call-sign stations like KABC/WNBC, numbered network affiliates, etc.) were leaking into the general News bucket — those are now properly tagged as Local News. Pattern-based detection means new channels from those networks will be categorized correctly on first scrape too.

  • This migrates automatically on container start — no manual steps required. Upgrading users get all corrections applied the moment the container comes up, and scrape runs can never undo them going forward.
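Purely for illustration, pattern-based detection like the above could look something like this (the migration's real rules aren't shown here, so these regexes are assumptions; US call signs do start with K west of the Mississippi and W east of it):

```python
# Hypothetical sketch of pattern-based Local News detection. The patterns are
# illustrative guesses, not the actual rules shipped in migration 009.
import re

LOCAL_NEWS_PATTERNS = [
    re.compile(r"^[KW][A-Z]{2,3}\b"),             # call signs: KABC, WNBC, ...
    re.compile(r"\bFOX Local\b", re.IGNORECASE),
    re.compile(r"\bCBS News \w+", re.IGNORECASE),  # CBS News [City]
]

def is_local_news(name: str) -> bool:
    """Return True if a channel name matches a local-news pattern."""
    return any(p.search(name) for p in LOCAL_NEWS_PATTERNS)
```

Because this keys on naming patterns rather than a fixed channel list, a newly scraped affiliate like "KTLA Los Angeles" would be tagged correctly on first sight.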

Other

  • Stripped commas from incoming channel names on ingest — some upstream sources include them and they can cause issues with M3U parsing.
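Some context on why that matters: in an #EXTINF line the display name follows the comma after the attributes, so naive M3U parsers that split on commas can truncate a name like "News, Weather & Sports". A sanitizer along these lines (hypothetical, not the project's exact code) avoids that:

```python
# Sketch of the comma problem: the #EXTINF line format is
#   #EXTINF:<duration> <attrs>,<display name>
# so a comma inside the display name can confuse parsers that split on it.

def sanitize_name(name: str) -> str:
    """Strip commas from an incoming channel name before writing M3U."""
    return name.replace(",", "").strip()

def extinf_line(name: str, tvg_id: str) -> str:
    """Build an #EXTINF line with a comma-safe display name."""
    safe = sanitize_name(name)
    return f'#EXTINF:-1 tvg-id="{tvg_id}",{safe}'
```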

As always, let me know if you run into anything.

1 Like

I moved most of my homelab tools (the ones that I COULD move) from SQLite to Postgres because I got tired of data lockups. The only one I didn't move is Jellyfin, since that feature is still alpha ATM. Didn't miss a beat, and that stuff runs more smoothly now.