
Log info from Docker:
127.0.0.1 - - [07/Jun/2022 13:52:08] "GET /keep_alive HTTP/1.1" 500 -
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 60836)
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/requests/models.py", line 910, in json
return complexjson.loads(self.text, **kwargs)
File "/usr/local/lib/python3.8/json/__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.8/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/lib/python3.8/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./app.py", line 53, in do_GET
routes[func]()
File "./app.py", line 58, in _keep_alive
frndly.channels()
File "/usr/src/app/frndly.py", line 135, in channels
rows = self._request('https://frndlytv-api.revlet.net/service/api/v1/tvguide/channels?skip_tabs=0')['data']
File "/usr/src/app/frndly.py", line 124, in _request
if error_code != 402 and login_on_failure and self.login():
File "/usr/src/app/frndly.py", line 166, in login
session_id = self._session.get('https://frndlytv-api.revlet.net/service/api/v1/get/token', params=params, timeout=TIMEOUT).json()['response']['sessionId']
File "/usr/local/lib/python3.8/site-packages/requests/models.py", line 917, in json
raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: [Errno Expecting value] <html>
<head><title>504 Gateway Time-out</title></head>
<body>
<center><h1>504 Gateway Time-out</h1></center>
</body>
</html>
: 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/socketserver.py", line 683, in process_request_thread
self.finish_request(request, client_address)
File "/usr/local/lib/python3.8/socketserver.py", line 360, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "./app.py", line 25, in __init__
super().__init__(*args, **kwargs)
File "/usr/local/lib/python3.8/socketserver.py", line 747, in __init__
self.handle()
File "/usr/local/lib/python3.8/http/server.py", line 427, in handle
self.handle_one_request()
File "/usr/local/lib/python3.8/http/server.py", line 415, in handle_one_request
method()
File "./app.py", line 55, in do_GET
self._error(e)
File "./app.py", line 30, in _error
self.wfile.write(f'Error: {message}'.encode('utf8'))
File "/usr/local/lib/python3.8/socketserver.py", line 826, in write
self._sock.sendall(b)
BrokenPipeError: [Errno 32] Broken pipe
----------------------------------------
It recovered, logged back in, and keep-alive resumed.
Thanks!
With the newest container, my logins have not been at the consistent 6 hr 1 min interval as before. Since I started the container, logins have been at:
17:58, 19:23, 00:24, 00:29, 03:31, 08:35, 08:40, 08:45
A strange issue happened with me too. I wonder if they happened at the same time; that may be an issue with the Frndly servers and not the frndlytv-for-channels container.
> frndlytv-for-channels
> date,stream,content
> 2022-06-07T13:51:01.034138653Z,stdout,127.0.0.1 - - [07/Jun/2022 13:51:01] "GET /keep_alive HTTP/1.1" 200 -
>
> 2022-06-07T13:51:00.042103060Z,stdout,Keep alive!
>
> 2022-06-07T13:45:59.940885619Z,stdout,127.0.0.1 - - [07/Jun/2022 13:45:59] "GET /keep_alive HTTP/1.1" 200 -
>
> 2022-06-07T13:45:59.802954464Z,stdout,Logged in!
>
> 2022-06-07T13:45:58.532126031Z,stdout,logging in....
>
> 2022-06-07T13:45:58.532089672Z,stdout,Unauthorized access
>
> 2022-06-07T13:45:58.531855640Z,stdout,401
>
> 2022-06-07T13:45:57.745225690Z,stdout,Keep alive!
>
> 2022-06-07T13:40:58.876760636Z,stdout,127.0.0.1 - - [07/Jun/2022 13:40:58] "GET /keep_alive HTTP/1.1" 200 -
>
> 2022-06-07T13:40:58.876328100Z,stdout,Logged in!
>
> 2022-06-07T13:40:53.361793278Z,stdout,logging in....
>
> 2022-06-07T13:40:53.361347999Z,stdout,Forcing login!
>
> 2022-06-07T13:40:37.635676211Z,stdout,logging in....
>
> 2022-06-07T13:40:37.632838405Z,stdout,Keep alive!
>
> 2022-06-07T13:35:37.533374047Z,stdout,----------------------------------------
>
> 2022-06-07T13:35:37.533343165Z,stdout,"requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='frndlytv-api.revlet.net', port=443): Read timed out. (read timeout=15)
>
> 2022-06-07T13:35:37.533317859Z,stdout, raise ReadTimeout(e, request=request)
>
> 2022-06-07T13:35:37.533284302Z,stdout, File \"/usr/local/lib/python3.8/site-packages/requests/adapters.py\", line 532, in send
>
> 2022-06-07T13:35:37.533259101Z,stdout, r = adapter.send(request, **kwargs)
>
> 2022-06-07T13:35:37.533226523Z,stdout, File \"/usr/local/lib/python3.8/site-packages/requests/sessions.py\", line 645, in send
>
> 2022-06-07T13:35:37.533200400Z,stdout, resp = self.send(prep, **send_kwargs)
>
> 2022-06-07T13:35:37.533169161Z,stdout, File \"/usr/local/lib/python3.8/site-packages/requests/sessions.py\", line 529, in request
>
> 2022-06-07T13:35:37.533142851Z,stdout, return self.request('POST', url, data=data, json=json, **kwargs)
>
> 2022-06-07T13:35:37.533112818Z,stdout, File \"/usr/local/lib/python3.8/site-packages/requests/sessions.py\", line 577, in post
>
> 2022-06-07T13:35:37.533075687Z,stdout, data = self._session.post('https://frndlytv-api.revlet.net/service/api/auth/signin', json=payload, headers={'session-id': session_id}, timeout=TIMEOUT).json()
>
> 2022-06-07T13:35:37.533048647Z,stdout, File \"/usr/src/app/frndly.py\", line 186, in login
>
> 2022-06-07T13:35:37.533025308Z,stdout, self.login()
>
> 2022-06-07T13:35:37.532998666Z,stdout, File \"/usr/src/app/frndly.py\", line 142, in keep_alive
>
> 2022-06-07T13:35:37.532974371Z,stdout, frndly.keep_alive()
>
> 2022-06-07T13:35:37.532948359Z,stdout, File \"./app.py\", line 58, in _keep_alive
>
> 2022-06-07T13:35:37.532923020Z,stdout, routes[func]()
>
> 2022-06-07T13:35:37.532896681Z,stdout, File \"./app.py\", line 53, in do_GET
>
> 2022-06-07T13:35:37.532871933Z,stdout, self._error(e)
>
> 2022-06-07T13:35:37.532845277Z,stdout, File \"./app.py\", line 55, in do_GET
>
> 2022-06-07T13:35:37.532820716Z,stdout, method()
>
> 2022-06-07T13:35:37.532789794Z,stdout, File \"/usr/local/lib/python3.8/http/server.py\", line 415, in handle_one_request
>
> 2022-06-07T13:35:37.532765293Z,stdout, self.handle_one_request()
>
> 2022-06-07T13:35:37.532735369Z,stdout, File \"/usr/local/lib/python3.8/http/server.py\", line 427, in handle
>
> 2022-06-07T13:35:37.532712392Z,stdout, self.handle()
>
> 2022-06-07T13:35:37.532682227Z,stdout, File \"/usr/local/lib/python3.8/socketserver.py\", line 747, in __init__
>
> 2022-06-07T13:35:37.532656978Z,stdout, super().__init__(*args, **kwargs)
>
> 2022-06-07T13:35:37.532630679Z,stdout, File \"./app.py\", line 25, in __init__
>
> 2022-06-07T13:35:37.532603481Z,stdout, self.RequestHandlerClass(request, client_address, self)
>
> 2022-06-07T13:35:37.532576063Z,stdout, File \"/usr/local/lib/python3.8/socketserver.py\", line 360, in finish_request
>
> 2022-06-07T13:35:37.532550706Z,stdout, self.finish_request(request, client_address)
>
> 2022-06-07T13:35:37.532514194Z,stdout, File \"/usr/local/lib/python3.8/socketserver.py\", line 683, in process_request_thread
>
> 2022-06-07T13:35:37.532488795Z,stdout,Traceback (most recent call last):
>
> 2022-06-07T13:35:37.532458520Z,stdout,
>
> 2022-06-07T13:35:37.532431387Z,stdout,During handling of the above exception, another exception occurred:
>
> 2022-06-07T13:35:37.532406839Z,stdout,
>
> 2022-06-07T13:35:37.532170228Z,stdout,urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='frndlytv-api.revlet.net', port=443): Read timed out. (read timeout=15)
>
> 2022-06-07T13:35:37.532145174Z,stdout, raise ReadTimeoutError(
>
> 2022-06-07T13:35:37.532115851Z,stdout, File \"/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py\", line 340, in _raise_timeout
>
> 2022-06-07T13:35:37.532086418Z,stdout, self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
>
> 2022-06-07T13:35:37.532057418Z,stdout, File \"/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py\", line 451, in _make_request
>
> 2022-06-07T13:35:37.532031446Z,stdout, httplib_response = self._make_request(
>
> 2022-06-07T13:35:37.531936017Z,stdout, File \"/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py\", line 703, in urlopen
>
> 2022-06-07T13:35:37.531911506Z,stdout, raise value
>
> 2022-06-07T13:35:37.531882752Z,stdout, File \"/usr/local/lib/python3.8/site-packages/urllib3/packages/six.py\", line 770, in reraise
>
> 2022-06-07T13:35:37.531853303Z,stdout, raise six.reraise(type(error), error, _stacktrace)
>
> 2022-06-07T13:35:37.531816800Z,stdout, File \"/usr/local/lib/python3.8/site-packages/urllib3/util/retry.py\", line 550, in increment
>
> 2022-06-07T13:35:37.531791600Z,stdout, retries = retries.increment(
>
> 2022-06-07T13:35:37.531760511Z,stdout, File \"/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py\", line 785, in urlopen
>
> 2022-06-07T13:35:37.531733966Z,stdout, resp = conn.urlopen(
>
> 2022-06-07T13:35:37.531702527Z,stdout, File \"/usr/local/lib/python3.8/site-packages/requests/adapters.py\", line 440, in send
>
> 2022-06-07T13:35:37.531675247Z,stdout,Traceback (most recent call last):
>
> 2022-06-07T13:35:37.531653375Z,stdout,
>
> 2022-06-07T13:35:37.531622578Z,stdout,During handling of the above exception, another exception occurred:
>
> 2022-06-07T13:35:37.531600581Z,stdout,
>
> 2022-06-07T13:35:37.531568297Z,stdout,socket.timeout: The read operation timed out
>
> 2022-06-07T13:35:37.531535765Z,stdout, return self._sslobj.read(len, buffer)
>
> 2022-06-07T13:35:37.531504895Z,stdout, File \"/usr/local/lib/python3.8/ssl.py\", line 1099, in read
>
> 2022-06-07T13:35:37.531469552Z,stdout, return self.read(nbytes, buffer)
>
> 2022-06-07T13:35:37.531433689Z,stdout, File \"/usr/local/lib/python3.8/ssl.py\", line 1241, in recv_into
>
> 2022-06-07T13:35:37.531393502Z,stdout, return self._sock.recv_into(b)
>
> 2022-06-07T13:35:37.530816611Z,stdout, File \"/usr/local/lib/python3.8/socket.py\", line 669, in readinto
>
> 2022-06-07T13:35:37.530776258Z,stdout, line = str(self.fp.readline(_MAXLINE + 1), \"iso-8859-1\")
>
> 2022-06-07T13:35:37.530748280Z,stdout, File \"/usr/local/lib/python3.8/http/client.py\", line 277, in _read_status
>
> 2022-06-07T13:35:37.530719675Z,stdout, version, status, reason = self._read_status()
>
> 2022-06-07T13:35:37.530683927Z,stdout, File \"/usr/local/lib/python3.8/http/client.py\", line 316, in begin
>
> 2022-06-07T13:35:37.530657671Z,stdout, response.begin()
>
> 2022-06-07T13:35:37.530619843Z,stdout, File \"/usr/local/lib/python3.8/http/client.py\", line 1348, in getresponse
>
> 2022-06-07T13:35:37.530589288Z,stdout, httplib_response = conn.getresponse()
>
> 2022-06-07T13:35:37.530551112Z,stdout, File \"/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py\", line 444, in _make_request
>
> 2022-06-07T13:35:37.530519737Z,stdout, File \"<string>\", line 3, in raise_from
>
> 2022-06-07T13:35:37.530488900Z,stdout, six.raise_from(e, None)
>
> 2022-06-07T13:35:37.530416911Z,stdout, File \"/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py\", line 449, in _make_request
>
> 2022-06-07T13:35:37.530272099Z,stdout,Traceback (most recent call last):
>
> 2022-06-07T13:35:37.139696231Z,stdout,Exception happened during processing of request from ('127.0.0.1', 59378)
>
> 2022-06-07T13:35:37.139553828Z,stdout,----------------------------------------
>
> 2022-06-07T13:35:36.948386824Z,stdout,127.0.0.1 - - [07/Jun/2022 13:35:36] "GET /keep_alive HTTP/1.1" 500 -
>
> 2022-06-07T13:35:21.552323768Z,stdout,logging in....
>
> 2022-06-07T13:35:21.552172189Z,stdout,Forcing login!
>
> 2022-06-07T13:35:21.094952097Z,stdout,Keep alive!
>
> 2022-06-07T13:30:20.975451724Z,stdout,127.0.0.1 - - [07/Jun/2022 13:30:20] "GET /keep_alive HTTP/1.1" 200 -
>
> 2022-06-07T13:30:20.523888686Z,stdout,Keep alive!
UPDATE: Latest Logins at 08:40, 08:45, 13:46, 13:51, 18:52, 18:57
Looks like there are now double logins happening (5 minutes apart).
@Absenm and @Jim_FL
What problems are you having that you're trying to fix?
I haven't updated to the latest code since I have no issues with my recordings.
If you're only seeing issues with watching live and not recordings, my guess is something in Channels DVR with timeouts. I see it every time when trying to watch a TVE channel live in the DVR web UI player: when you select a channel that needs to be re-authed, it appears to time out, the video player closes, shows reconnecting, and eventually you get audio but no video. Looking at the DVR log after that shows 2 or more connections open and 2 or more auth attempts.
Actually no issues at all. @matthuisman asked earlier on for a little feedback concerning some changes he made specific to keep alive and tokens. I am just providing a little feedback for his reference, and in particular to the most recent changes. As far as I am concerned the end result as an end user is that it is working awesome.
Agree. The issue was only with Frndly using Docker. The other TVE channels were not affected. Frndly is now working correctly but I believe @matthuisman is working on cleaning up his code. I have posted log entries I have seen. FYI, I run Channels on 7 sets in my house using both Apple TV and Tivo. I have two other Tivo 4K streamers in my RV which I run over Starlink internet. Works fine. I am using a Qnap NAS for Channels server. Good luck!
You're still seeing logins every 5 mins??
Latest: 15:20, 15:15, 10:14, 10:09, 5:08, 5:03, 0:03, 23:57, 18:57, 18:52, 13:51, 13:46, 8:45, 8:40
It is basically 1 login followed 5 minutes later by a second login. Then five hours later 1 login followed 5 minutes later by a second login.
The keep alive are still occurring every 5 minutes like expected.
@matthuisman Just for reference.
I installed the latest docker version about 19 hours ago and set KEEP_ALIVE=0
The only login I see is from this morning when Channels DVR refreshed the playlist about 6 hours ago.
The only recording using Frndly today is scheduled in about 80 minutes. I'm sure I'll see the login associated with that.
Here's the last 22 hours. Only 2 logins are for the daily playlist refresh and a recording.
2022-06-08 03:00:40 stdout Starting server on port 80
2022-06-08 16:15:00 stdout logging in....
2022-06-08 16:15:01 stdout Logged in!
2022-06-08 16:15:02 stderr 192.168.1.3 - - [08/Jun/2022 16:15:02] "GET /playlist.m3u8 HTTP/1.1" 200 -
2022-06-08 16:15:02 stdout No gracenote id found in epg map for: 38
2022-06-08 16:15:02 stdout No gracenote id found in epg map for: 43
2022-06-08 16:15:02 stdout No gracenote id found in epg map for: 16
2022-06-08 16:15:02 stdout No gracenote id found in epg map for: 49
2022-06-08 23:58:00 stdout 401
2022-06-08 23:58:00 stdout Unauthorized access
2022-06-08 23:58:00 stdout logging in....
2022-06-08 23:58:01 stdout Logged in!
2022-06-08 23:58:02 stdout channel/live/heroes___icons > https://sr-live2-frndly.akamaized.net/...
2022-06-08 23:58:02 stderr 192.168.1.3 - - [08/Jun/2022 23:58:02] "GET /play/heroes___icons.m3u8 HTTP/1.1" 302 -
KEEP_ALIVE=0 will just revert to the old behavior of refreshing the token "on demand", which may be too slow and cause timeouts in Channels.
Please try the latest version.
Hopefully that sorts out the duplicate logins.
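For anyone wanting to flip the setting, disabling keep-alive just means recreating the container with the environment variable set. A rough sketch of the invocation; the image name and port mapping here are assumptions, not taken from this thread, so adjust to your setup:

```shell
# Recreate the container with keep-alive disabled.
# Image name and port mapping are assumptions; adjust to your setup.
docker rm -f frndlytv-for-channels
docker run -d \
  --name frndlytv-for-channels \
  -e KEEP_ALIVE=0 \
  -p 80:80 \
  matthuisman/frndlytv-for-channels
```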
Thanks, I was asking earlier what the issue was.
So it appears you're trying to fix this.
I'm not having any issues, so I'll keep what I have, which is working, and observe from the sidelines.
Again, for @matthuisman's information: there is no end-result problem. This is simply an observation of logs to help fine-tune the keep-alive function. I am not declaring KEEP_ALIVE, so it should be defaulting to curls every 5 minutes and a forced login every 5 hours.
Logins at: 6:06, 11:07, 16:08, 16:13, 21:14, 21:19, 2:20, 2:25
The expected 5 minute keep alive curls are happening on time.
As you can see, the first two were five hours apart as expected. Then the double logins five minutes apart started happening again.
Can you provide a log? I want to see why the 3rd, 5th, and 7th logins failed.
Sent in a PM. Hopefully it is enough of the log; the forum kept telling me it was too long to send.
You can dump a portion of the Docker log. The --since=24h flag below will output the last 24 hours into the file
/volume1/arkives/frndlycontainer.log
sudo docker logs -t --since=24h frndlytv-for-channels >/volume1/arkives/frndlycontainer.log 2>&1
I setup a task that runs as root hourly to update the file, so I don't have to log into my Synology to see the log.
docker logs -t --since=1h frndlytv-for-channels >>/volume1/arkives/frndlycontainer.log 2>&1
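If you'd rather not use the Synology task scheduler, the same hourly job can be a plain crontab entry. A sketch assuming root's crontab; the container name and log path are taken from the commands above:

```
# Append the last hour of container logs to the archive file, hourly.
0 * * * * docker logs -t --since=1h frndlytv-for-channels >>/volume1/arkives/frndlycontainer.log 2>&1
```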
Works great. Very much appreciate the time spent creating and maintaining this. My first custom channel, btw. I did the Python method, and it took me a while to figure out I had to download and install Git in addition to Python. I am noticing that in the server window a keep alive is recorded every 5 minutes. Is this being sent to Frndly? I thought I read above it should be every five hours? Next question: does Channels pull multiple streams at a time with this, or is the subscription stream limit enforced by the Frndly servers? If this does allow us to pull more streams than we pay for, would the option to limit streams in the Channels server source management handle that task?
It logs in every 5 hours. It checks if the token is still good every five minutes (“keep alive” is slightly poor terminology) and will force a login if the token has for some reason expired.
The “keep alive” is really mainly for people who may have issues with their sessions timing out due to login delays. Up until recently login was done on demand as necessary. The new process is to simply stay logged in all the time and not allow the token to expire. This can be turned off by using KEEP_ALIVE=0.
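The schedule described above can be sketched as a small decision function. This is only an illustration of the logic as described in this thread, not the container's actual code; the function name and return values are made up:

```python
# Hypothetical sketch of the keep-alive schedule described above.
# Names and return values are illustrative, not the container's real code.
KEEP_ALIVE_INTERVAL = 5 * 60        # token check every 5 minutes
FORCE_LOGIN_INTERVAL = 5 * 60 * 60  # fresh login every 5 hours

def next_action(seconds_since_login, token_valid):
    """Decide what a single keep-alive tick should do.

    Returns 'login' when the token has expired or the forced-login
    window has elapsed, otherwise 'noop'.
    """
    if not token_valid:
        return 'login'
    if seconds_since_login >= FORCE_LOGIN_INTERVAL:
        return 'login'
    return 'noop'
```

A loop would call this every KEEP_ALIVE_INTERVAL seconds and perform a login whenever it returns 'login'; with KEEP_ALIVE=0 the loop simply doesn't run and login happens on demand.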
My personal hunch is that the original on-demand behavior is good for most people and would recommend using KEEP_ALIVE=0 unless you encounter problems with recordings or timing out while changing channels. But I’m not the developer, just a fan. Lol.
Ok… I just recorded 5 different channels off Frndly at the same time while watching 2 live streams on an iPhone and Apple TV. So I would say the Frndly stream limit doesn't apply.
Yes, if you enter a limit and exceed it, you will get a nice Channels logo screen with the message “stream limit reached”.