Concurrency is killing remote streaming

I am trying to stream an HD channel whose bitrate is below, but close to, the available bandwidth. The client is a Fire TV (Fire OS 9 / API 28) and the server is 2022.01.27.0906 on an RPi4.

Speedtest shows 8.8 Mbit/s, 55 ms latency, 5 ms jitter.
I am trying to stream a channel with an overall bit rate of 8,609 kb/s according to mediainfo.
The channel contains two audio streams, both at 192 kbps.

One would expect that, after a few seconds of initial buffering, the channel would play just fine.

However, the client shows Concurrency: 8 Timeouts: 5 before stalling completely, and the server logs show multiple attempts (the first number is the attempt count in the server logs) at fetching the following segments:

  6 22.ts
  6 27.ts
  8 23.ts
  9 17.ts
 11 11.ts
 12 19.ts
 12 3.ts

According to the log, they all get status 200 (OK).

The full log:

https://pastebin.com/pRwDy8Zs

Is there a way to disable the excessive client "concurrency" when streaming remotely?

I hope I will not need to double the available bandwidth, as transcoding on the RPi produces choppy, painful-to-watch video at any speed limit.
Thank you for reading this far :wink:

I can watch live HD streams but not recordings, because the client is too aggressive and the server does nothing to rein in its erratic behavior. In this example the client has a problem fetching stream3.ts, yet for some reason it plows ahead with requests for more segments, which prevents it from ever getting stream3.ts successfully. The server should prioritize delivery of the earlier segments instead of happily obliging requests for future segments the client cannot use yet.
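The prioritization idea described above can be sketched with a simple priority queue that always serves the lowest-numbered outstanding segment first. This is a hypothetical illustration of the suggestion, not how the Channels server actually works:

```python
import heapq

# Hypothetical sketch: queue pending segment requests and always serve
# the earliest (lowest-numbered) segment first, so 3.ts would be
# delivered before 22.ts regardless of request arrival order.
class SegmentQueue:
    def __init__(self):
        self._heap = []

    def request(self, name):
        # Extract the numeric index from a name like "3.ts" or "22.ts"
        num = int("".join(ch for ch in name if ch.isdigit()))
        heapq.heappush(self._heap, (num, name))

    def next_to_serve(self):
        return heapq.heappop(self._heap)[1] if self._heap else None

q = SegmentQueue()
for seg in ["22.ts", "27.ts", "3.ts", "17.ts"]:
    q.request(seg)
print(q.next_to_serve())  # "3.ts" -- the earliest segment goes out first
```

With such ordering, a stalled early segment would be retried before the server spends bandwidth on segments further in the future.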

I was able to neutralize the aggressiveness of the client by limiting the number of concurrent requests for .ts segments at the proxy level. I can watch recordings now. Fast-forwarding is still a problem and would require something more sophisticated. Here are the relevant parts of the nginx config:

# the limit_conn_zone directive goes in the http {} context
limit_conn_zone $binary_remote_addr zone=addr:10m;

server {

        location ~ \.ts$  {
                proxy_pass http://127.0.0.1:8089;
                limit_conn addr 1;       # at most one concurrent .ts download per client IP
                limit_conn_status 429;   # ask the client to back off instead of failing hard
        }

        location / {
                proxy_pass http://127.0.0.1:8089;
        }
}

You can't expect it to work consistently with this kind of low bandwidth. Even if you had a server that could transcode well, you would still run into occasional bandwidth issues.

What makes you say this? I was able to force channels to access the server "locally" without any transcoding, and it works well. Remote streaming uses too much bandwidth and should be avoided where bandwidth is scarce. It assumes bandwidth is unlimited, and that is the root of the problem.

One nice trick discovered during this exercise is how to remote stream without involving Fancy Bits servers. I will be sharing it at some point :wink:


You mean by using a VPN to your home network?
That's already a known way to do it.
I have been doing so for years, to either of my DVR servers; neither has remote access enabled.

Please submit diagnostics from the client after seeing the bad behavior.

We do many things to handle many network situations. If it’s causing issues in your situation, we can review the logs and understand how to best handle this situation.

The latest TestFlight betas already have improvements to back off on the concurrency in different timeout situations.

I have never used the Fancy Bits servers, or a VPN. I just have the port open and directly connect to my own public domain name / ip address.


Limiting the connections in nginx is definitely going to cause you issues when streaming from far-away locations, so I would like to figure out how to solve your issue without resorting to that.

Eric, you have to realize the existing code is designed to deal with insufficient bandwidth on the client side, which is the typical case when watching videos online. With Channels, however, the bottleneck is usually on the server side, so a different approach is needed. The logs I provided in my original post clearly show bad client behavior. On the other hand, the client is controlled by the server, so making the server a bit smarter would probably help too.

The problems I have seen so far are:

  1. The send buffer on the connection from the client is 4 MB (RPi 4), which means the server can just dump the file into OS buffers and move on, despite the fact that it will take some time for the client to receive it. The server is completely unaware of how long it took the client to get the .ts segment.
  2. I tried to decrease the send buffer size, but it resulted in nothing but timeouts.
  3. The client always re-downloads the full segment, even when it already has part of it. Using a Range request to download only the part of the file that has not been fetched yet would probably help. Byte serving - Wikipedia
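The resume idea in point 3 could look something like this. The helper below only builds the Range header for a partially downloaded segment; it assumes a server that supports byte serving (responds 206 Partial Content), and the file path is made up for illustration:

```python
import os

# Sketch: resume a partial .ts download with an HTTP Range request
# instead of re-fetching the whole segment from byte zero.
def build_resume_request(path):
    """Return the Range header for the bytes still missing, or None if
    nothing has been downloaded yet (a plain GET is fine in that case)."""
    have = os.path.getsize(path) if os.path.exists(path) else 0
    if have == 0:
        return None
    # "bytes=<have>-" asks for everything from the first missing byte onward.
    return {"Range": "bytes=%d-" % have}

# Example: pretend 1,200,000 bytes of a segment are already on disk
with open("/tmp/seg11.ts", "wb") as f:
    f.write(b"\x00" * 1_200_000)
print(build_resume_request("/tmp/seg11.ts"))  # {'Range': 'bytes=1200000-'}
```

On a link that keeps stalling mid-segment, this would let each retry make forward progress instead of starting over.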

In short, I would seriously consider disabling concurrency on the client (as a server option?) as a first step.

That shouldn't actually look any different from the client's perspective. Something in the path between the two nodes is limiting the bandwidth, and the client needs to adjust; where exactly that limitation sits shouldn't matter.

Honestly, the server is always unaware of how long it takes the client to get the .ts segments. That's how HLS works: the server just serves segments over HTTP, and it's up to the client to figure out which bitrate to pick and how fast to download things.
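For context, this is essentially all an HLS server publishes: a media playlist listing the segments (an illustrative fragment; durations and the segment names are examples modeled on the logs above). Pacing, ordering, and concurrency are entirely the client's choice:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:6.006,
0.ts
#EXTINF:6.006,
1.ts
#EXTINF:6.006,
2.ts
```

The server hands out whichever segment is requested next; nothing in the protocol tells it whether the previous segment actually arrived in time.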

We used to have this feature enabled, but ran into too many edge cases where it resulted in jumbled data, so we disabled it. I'll revisit whether there are ways we could re-enable it for some of these situations.

The "concurrency" is how we are able to stream at good bitrates from far distances at all. With TCP window sizes being what they are, it would not be possible to stream 8 Mbps streams from a home in the US while in Japan (for instance) without the concurrent downloads.

The solution we currently have with concurrent downloads is a good one that works in a large variety of situations. It looks like you're running into problems. We'd like to solve them. If you can replicate the issue and submit diagnostics from the client app, we will be able to see the logs in detail and understand what is going wrong and what we can do to improve the situation.


So what is the client doing to address the insufficient-bandwidth issue on the server, other than saturating the link even more? :wink:

The TCP window size on my client (Amazon Fire TV Stick 4K) is 4 MB (65535 * 2^6). This should allow delivering 8 Mbps over a single TCP connection even at 2000 ms latency. Not sure how far Japan is, but my latency is below 100 ms, so workarounds of this nature are unnecessary. Maybe increasing the window size on the client would be a better option in the extreme cases?
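The arithmetic behind that claim follows from the usual single-connection TCP throughput bound (throughput <= window / RTT), using the window size reported above:

```python
# Max single-connection TCP throughput is bounded by window size / RTT.
window_bytes = 65535 * 2**6          # 4,194,240 bytes (~4 MB), as reported

def max_mbps(window_bytes, rtt_seconds):
    """Upper bound on throughput in Mbps for one TCP connection."""
    return window_bytes * 8 / rtt_seconds / 1e6

print(round(max_mbps(window_bytes, 2.0), 1))   # ~16.8 Mbps even at 2000 ms RTT
print(round(max_mbps(window_bytes, 0.1), 1))   # ~335.5 Mbps at 100 ms RTT
```

So at sub-100 ms latency, a 4 MB window leaves a single connection far above the 8.6 Mbps the stream needs, at least in theory.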

Your jokey sarcasm isn’t working.

I'm done answering you point by point, but if you're interested in seeing this improve, I'll say for the third time in this thread: if you submit diagnostics from the app after seeing this behavior, it will give us information we can use to improve the handling of your specific situation.


Seems that this heated discussion stems from a problem that could be solved by paying for faster internet. Is that the best use of our time?


Are you stalking me now? :rofl:

There’s a new TestFlight build out that should react better to these sorts of situations.

Thanks! I won't be able to test it for a while, as I am "local" now. The pipe the other way is 500/50, so no problems there.