The buffer grows when video data has to be held in RAM before the client can receive it. Maybe in this case the disk was slow to wake up, so the memory buffer spiked slightly.
Anything less than buf=100% is normal. Once the buffer reaches 100%, drop= will start increasing.
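To make those numbers concrete, here's a minimal Go sketch of how stats like these behave. It's an illustrative model of a fixed-size in-memory buffer sitting between the video source and a slower consumer, not our actual implementation, and every name in it is made up:

```go
package main

import "fmt"

// streamBuffer is an illustrative fixed-size buffer between the video
// source and a slower consumer. Not the real implementation; it just
// models the buf= and drop= stats.
type streamBuffer struct {
	chunks  [][]byte // queued video chunks waiting to drain
	max     int      // capacity in chunks
	dropped int      // chunks discarded because the buffer was full
}

// push queues a chunk, counting a drop when the buffer is already full.
func (b *streamBuffer) push(chunk []byte) {
	if len(b.chunks) >= b.max {
		b.dropped++ // buf=100%: drop= starts increasing
		return
	}
	b.chunks = append(b.chunks, chunk)
}

// stats renders the fill percentage and drop count, e.g. "buf=75% drop=0".
func (b *streamBuffer) stats() string {
	return fmt.Sprintf("buf=%d%% drop=%d", 100*len(b.chunks)/b.max, b.dropped)
}

func main() {
	b := &streamBuffer{max: 4}
	// Simulate a stall: six chunks arrive while nothing drains.
	for i := 0; i < 6; i++ {
		b.push(make([]byte, 188)) // one MPEG-TS packet's worth of data
		fmt.Println(b.stats())
	}
}
```

Running it, buf= climbs to 100% and only then does drop= start moving; a brief spike below 100% (like a disk being slow to wake) costs nothing.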
We are now exposing stats about things that have always happened, but that we previously had no insight into. This additional diagnostic detail will help diagnose hardware and software issues as they come up.
Hi folks. Could you try out the latest beta when you get a chance and let me know if everything is continuing to behave normally? In the latest build we've made some improvements to our logging infrastructure around TVE that should have no user-facing changes (but may fix some strange issues where logs occasionally ended up in the wrong file), and we want to make sure there aren't any regressions. There isn't anything specific to test; just keep an eye out for panics, crashes, or other strange behavior during normal usage.
We've been running the new logging infrastructure in our beta apps for a couple weeks without incident, so we aren't expecting any issues.
Sure, does this only affect those using the Experimental New Streaming Buffer (Topic Title)?
Like debug recording logs for back-to-back recordings with overlap on the same TVE/M3U channel?
I've noticed that, but I hardly ever look at them and figured you guys knew about it.
No, the stream isn't open, but it's stuck in the close process. It looks like there's a race condition on connection shutdown that can cause a deadlock, and I'm trying to reason it out. You should be able to update to the new release that just went out without any corruption.
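There's no public repro for this yet, but to make the failure mode concrete, here's a generic Go sketch of the kind of shutdown race that can deadlock a connection. This is purely illustrative; it is not the actual Channels DVR code, and every name in it is made up:

```go
package main

import "sync"

// conn models a connection whose writer and closer share a lock and a
// done channel. This is a generic sketch of a shutdown race, not the
// actual Channels DVR code.
type conn struct {
	mu     sync.Mutex
	done   chan struct{} // closed once to tell the writer to stop
	closed bool
}

// write holds mu while trying to send to a possibly-stalled client.
func (c *conn) write(out chan<- []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	select {
	case out <- []byte("chunk"): // parks here if the client stopped reading
	case <-c.done: // bail out once shutdown is signalled
	}
}

// shutdown signals done before taking mu. The buggy ordering is the
// reverse: if shutdown took mu first while write was parked on the send
// above (still holding mu), neither side could ever make progress.
func (c *conn) shutdown() {
	close(c.done) // let a parked write bail out...
	c.mu.Lock()   // ...then take the lock to finish teardown
	defer c.mu.Unlock()
	c.closed = true
}

func main() {
	c := &conn{done: make(chan struct{})}
	out := make(chan []byte) // unbuffered and never read: a stalled client
	go c.write(out)
	c.shutdown() // returns, because done is closed before mu is taken
}
```

Signalling the done channel before grabbing the lock is the usual fix for this shape of deadlock: it gives a parked writer a way out before the closer blocks on the lock.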
Thanks, that should help my 4 Channels DVR servers that I updated!
But... the one that was stuck (and that I submitted diags on) thinks it's busy.
Trying to update results in this message:
Version 2022.08.19.2146 Waiting to upgrade to 2022.08.19.2329...
! Upgrade now
So for anyone else stuck here, I'm going to click Upgrade now.
And after updating, it resumed normal operation.
[DVR] Processing partially recorded expired job 1660950002-ch6095 Canary REELZ Recording
[DVR] Processing file-973: TV/Canary REELZ Recording/Canary REELZ Recording 2022-08-19-1600.mpg
[DVR] Running commercial detection on file 973 (TV/Canary REELZ Recording/Canary REELZ Recording 2022-08-19-1600.mpg)
[DVR] Commercial detection for Canary REELZ Recording 2022-08-19-1600.mpg finished with 2 markers in 3m23.177776698s (2 threads).
The only issue is the recording shows as interrupted, although it wasn't.
Glad we caught it early and it was only a Canary recording, not something I wanted to archive.
That's why I don't use a task to auto-update to the latest pre-releases.
I prefer to be there to make sure it works.
I've done some preliminary tests that didn't replicate the issue, but I also ran those tests on the previous build and they passed, so some of these things only surface once in a while and you just got unlucky (but lucky for my debugging).
This is one of the reasons we don't have the betas auto-update. It's better to have fewer people updating immediately in case there are issues with the build.
OK, I have a manual curl recording scheduled by task in an hour on one of my DVRs, so I'll update that server and let you know how it goes.
It's scheduled to finish recording at 3 AM UTC tomorrow.
As I'm sure you know, I do manual recordings using the curl method (the last diags I submitted were from one), so there may be something different with those. I'm also using that method for the one I'm about to test with the hlstube M3U source.
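For anyone following along, the curl method is just an HTTP POST to the DVR's API to create a manual recording job. Mine look roughly like the example below; the endpoint, port, and JSON field names are taken from community-posted examples rather than official docs, so treat them as assumptions and double-check against your build:

```
curl -XPOST 'http://127.0.0.1:8089/dvr/jobs/new' --data-binary '{
  "Name": "Manual recording",
  "Time": 1660950000,
  "Duration": 3600,
  "Channels": ["6095"],
  "Airing": {
    "Source": "manual",
    "Channel": "6095",
    "Time": 1660950000,
    "Duration": 3600,
    "Title": "Manual recording"
  }
}'
```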