Non-Docker source for PlutoTV and Stirr m3u playlists and EPG

Removed it.
Reloaded it... same thing: "Welcome to nginx!"

And again -- damn, third time's a charm? It worked.

1 Like


The nginx screen appears while the Docker container is still booting. You need to wait about 15 seconds for it to finish processing the Pluto data the first time.

2 Likes

Just FYI, it had been running all morning, and STIRR was working successfully at the time. Go figure.

1 Like

@CScott I am interested in your Channel Collections. Do you mind sharing the contents of them?

1 Like

I just updated Pluto for Docker to v1.2.8 and noticed that my Pluto channel numbers have changed to a range of 50-1035. They were all in the 9000-and-up range.

Is this correct, or is something wrong?

Thanks,

1 Like

You can change your DVR to ignore the channel numbers from the M3U.

OK. No big deal, but I had been accustomed to seeing them in the 9000's. Worked fine.

Thanks for the info.

2 Likes

v1.2.9 is out and removes duplicated channel number entries.

3 Likes

nocords.xyz is also updated. Thanks for the quick fix!

3 Likes

Just checked and apparently PASS is working now :wink: :+1:

Still under the previous 1.2.8 version

2 Likes

Just updated to 1.2.9, but the version shown in the http://127.0.0.1:8080/ view did not change.

I guess this is normal given the address you are using -- or did you forget to bump the version number with the update?
Thanks

(screenshot of the http://127.0.0.1:8080/ page)

1 Like

That page points to the URLs from the Docker version you have installed on that same server. Hank’s non-Docker method uses different URLs from nocords.xyz.

These sources have been working pretty well, so thanks again for making them! I'm encouraging everyone I know using Channels to get on board with them, too.

You had me at "twelve bucks". Keep up the good work.

1 Like

Thank you. Enjoy!

Just donated via PayPal. Thanks for providing this service!

2 Likes

Hi folks,

Two small requests. First, if you are using the ?cb=[handle] URL parameters on the Pluto and Stirr links (discussed up-thread), please remove them so the edge caching will work better. Please use the original base links (with no URL parameters) posted on the https://nocords.xyz website and in the first post.

Second, there's one user out there who has the Pluto Playlist file setting as: "https://nocords.xyz/pluto/playlist.m3u " <<<-- notice the space on the end. This is causing the server to respond with a "404" error and you're not getting any Pluto M3U updates. This user appears to be in the SF Bay area (possibly Santa Clara) and is an sbcglobal.net subscriber. Please adjust your settings to remove that extra space.
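If you want to sanity-check the URL you have configured, here is a minimal sketch in Python (assuming the requests library) that strips stray whitespace and confirms the server returns a 200:

import requests

# Strip accidental leading/trailing whitespace before requesting the playlist.
url = 'https://nocords.xyz/pluto/playlist.m3u '.strip()
resp = requests.get(url)
print(resp.status_code)  # 200 with the clean URL; the trailing space is what causes the 404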
Thank you.

1 Like

Just a quick update. I've enabled edge caching with Cloudflare, which will relieve the bulk of the bandwidth requirements on the origin server. But this means we lose the ability to show you, on the main nocords.xyz site, the last time you accessed the M3U and EPG files (helpful for diagnosing data issues), and the real-time map is no longer an accurate representation of traffic, so I've removed those things.

But if you are curious about the last time the files were refreshed in the edge cache, I've included this data in the HTTPS response headers (you can use curl, the Chrome inspector, or your tool of choice to view them).

Look for the headers that begin with x-channels, like so:

x-channels-pluto-m3u: Tue, 02 Nov 2021 15:03:08 -0400
x-channels-pluto-epg: Tue, 02 Nov 2021 15:03:09 -0400
x-channels-stirr-m3u: Tue, 02 Nov 2021 15:03:09 -0400
x-channels-stirr-epg: Tue, 02 Nov 2021 14:40:09 -0400

This shows the last time the edge cache pulled the source files from the origin server. If you want to see the times these files were last refreshed on the origin server, that info is still on the https://nocords.xyz home page.
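If you'd rather script it, here's a minimal sketch in Python (assuming the requests library; the URL is the Pluto playlist and the header names are the x-channels ones listed above):

import requests

# Fetch the playlist and print the x-channels-* cache-refresh headers.
resp = requests.get('https://nocords.xyz/pluto/playlist.m3u')
for name, value in resp.headers.items():
    if name.lower().startswith('x-channels'):
        print('{}: {}'.format(name, value))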

The source files are regenerated every 30 minutes, and the edge cache has a max age of 30 minutes. So in the absolute worst case (the cache pulls a file just before it is regenerated, then serves it for the full 30 minutes), the data you get will be no more than about one hour old.

5 Likes

You can use CF's API to purge the cache after you change files :slight_smile:
This is what I do for nearly all my content.
After generating new files, I call the CF API to clear the cache for those files, so the next request to them gets the fresh copies.

import requests

# Cloudflare credentials/zone for the API call below -- fill in your own values.
CF_ZONE_ID, CF_EMAIL, CF_KEY = 'your-zone-id', 'you@example.com', 'your-api-key'

def purge_urls(urls):
    # Cloudflare's purge-by-URL API limits how many URLs it accepts per call, so batch in groups of 30.
    def chunks(items, n):
        for i in range(0, len(items), n):
            yield items[i:i + n]

    for chunk in chunks(urls, 30):
        print('Cache Purge: {}'.format(chunk))
        r = requests.post(
            'https://api.cloudflare.com/client/v4/zones/{}/purge_cache'.format(CF_ZONE_ID),
            headers={'X-Auth-Email': CF_EMAIL, 'X-Auth-Key': CF_KEY},
            json={'files': chunk})
        assert r.json()['success']
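For example, after regenerating the files you could call it with the public URLs (only the Pluto playlist URL is confirmed in this thread; add whatever other files your setup serves):

# Hypothetical usage: purge the public URL(s) right after the files are regenerated.
purge_urls([
    'https://nocords.xyz/pluto/playlist.m3u',
])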
1 Like