You can try strace on the process and see what syscall is returning EPERM.
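For example, something like this from the host, assuming strace is installed and <PID> is the process in question (just a rough sketch, adjust for your setup):

strace -f -p <PID> 2>&1 | grep EPERM

The -f flag follows child processes and threads, and grepping for EPERM should narrow the output down to whichever syscall is failing.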
Who makes those dockers? I moved to a linuxserver-made Ubuntu Focal image so I didn't have to keep updating my own Ubuntu image. I don't have any networking issues with mine in host mode, in bridge mode, or behind a VPN, and I messed with all of that last night. I'll try using a different base image this weekend.
Also, which Unraid version are you running?
My dockers are from a variety of sources, but mainly Linuxserver.io, including the Plex one. I have been running the 6.10 release candidates and just updated to 6.10.
Would this be in Unraid or the container?
OK, I'll look this weekend. I'm using the same base image they are, so maybe I'll see something.
I updated the container, can you give it a try?
Sorry for the delay, unfortunately it was the same result.
2022/05/25 15:36:51.749184 [TNR] Cancelling stream 1315D55C/0 ch670 after no data was received for 6s
2022/05/25 15:36:51.749828 [HLS] ffmpeg: ch670-dANY-ip192.168.20.120-remux: [mpegts @ 0x60fe680] Dropped corrupted packet (stream = 1)
2022/05/25 15:36:51.749978 [HLS] ffmpeg: ch670-dANY-ip192.168.20.120-remux: [mpegts @ 0x60fe680] Dropped corrupted packet (stream = 2)
2022/05/25 15:36:51.761936 [TNR] Closed connection to 1315D55C/0 for ch670 FreeForm HD (AB
2022/05/25 15:36:51.849087 [ENC] Stopped encoder for ch670 in /shares/DVR/Streaming/ch670-dANY-ip192.168.20.120-1460938458/encoder-1958-3274782352 after encoding 1958 to 1968
I did run a packet capture, and right before the issue happens I see a missing ACK to the HDHomeRun, then a bunch of duplicate ACK messages; things seem to go off the rails from there. In the below screenshot, .141 is my HDHomeRun and .164 is the Channels DVR.
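(If anyone wants to reproduce a similar capture, something along these lines from the Unraid host should work, assuming tcpdump is installed; the interface and address are placeholders:

tcpdump -i br0 host <hdhomerun-ip> -w hdhr-capture.pcap

The resulting .pcap can then be opened in Wireshark to look for the missing/duplicate ACKs.)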
Can you get a shell on the container, then in /tmp download the speedtest binary (https://install.speedtest.net/app/cli/ookla-speedtest-1.1.1-linux-x86_64.tgz), untar it, and run it? It should show if there's any packet loss, at least going out of the container. Hopefully it's:
Packet Loss: 0.0%
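If it helps, the steps would look roughly like this inside the container (assuming wget and tar are available; adjust as needed):

cd /tmp
wget https://install.speedtest.net/app/cli/ookla-speedtest-1.1.1-linux-x86_64.tgz
tar -xzf ookla-speedtest-1.1.1-linux-x86_64.tgz
./speedtest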
Oddly enough, it didn't test for packet loss:
Latency: 8.30 ms (1.72 ms jitter)
Download: 515.47 Mbps (data used: 501.6 MB )
Upload: 31.31 Mbps (data used: 25.7 MB )
Packet Loss: Not available.
This is just odd... try ip -s link show {interface}
?
Here is the info for br0
br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 84:3d:c6:4d:6c:98 brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
598409039710 542191961 0 572 0 911941
TX: bytes packets errors dropped carrier collsns
28785692897 27524404 0 0 0 0
So, high-level question: what's the advantage of using the Community App vs. the Docker template? Is it based on the Docker template with more things preconfigured for hardware transcoding?
Ease of use. The Community App pulls from a GitHub repo that hosts two templates, one for Intel and one for NVIDIA. It's based on my Docker container (not the official Channels one), and the templates have always been preconfigured for hardware transcoding and for specifying the user in the container (so permissions on mounted files will be nobody:user instead of root:root). The base OS for the container is a linuxserver baseimage for Ubuntu 20.04. The container can be found here: https://github.com/timstephens24/channelsdvr-docker
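As a rough illustration of what the Intel template preconfigures (the image name, port, and host paths below are placeholders; the exact values come from the template itself):

# Pass through /dev/dri for Intel Quick Sync hardware transcoding, and set
# PUID/PGID so mounted files aren't owned by root:root (99:100 is the usual
# Unraid nobody/users pair; adjust for your system).
docker run -d \
  --name=channels-dvr \
  --device /dev/dri:/dev/dri \
  -e PUID=99 -e PGID=100 \
  -p 8089:8089 \
  --volume channels:/channels-dvr \
  --volume /mnt/user/DVR:/shares/DVR \
  <image>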
Getting a weird error when I start the Docker container on a fresh install using the defaults you described in the readme:
/bin/bash: line 0: cd: /channels-dvr/data: No such file or directory
Any ideas what's generating this error?
It won’t be for a few hours until I can check, but I’ll get back to you soon.
It is likely that you do not have a volume mounted at that location. You need to have something like --volume channels:/channels-dvr in your Docker command line (or however else you are creating the container).
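For example, a minimal run command with that mount would look something like this (the image name is a placeholder):

docker run -d --name=channels-dvr --volume channels:/channels-dvr <image>

Docker will create the named volume "channels" automatically if it doesn't already exist.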
I ran it on my test server and found the issue. I'm working on a fix and I'll let you know when it's pushed up to Docker Hub.
Sorry about that; the error only happens on a fresh install. It should work correctly now. Thank you for posting about it!
Thank you! It's working great now!