Yes, being empty is the preferred state. It will only show above 0% if something is going wrong.
All recordings were fine, no problems.
One of my older Apple TVs that routinely stopped streaming after 10-15 minutes worked flawlessly last night. I'll keep an eye on it, but so far this seems like an improvement.
Yes, I have tuner sharing set up on the server side for all devices. After updating to the latest DVR pre-release yesterday, I enabled the new streaming buffer. It's only been a single evening of use so far, but this particular older Apple TV has always given me trouble with streaming stopping after watching for a while.
I just saw this thread. My issue was more similar to what was described here, but it applied to all sources, both local HDHRs and IP-based streams.
IP streams do no reconnect automatically when a connection is lost - #14 by eric
Just noticed this during a recording and I have the Experimental New Streaming Buffer unchecked.
Is this the new normal?
It logged this when the recording finished:
[SNR] Buffer statistics for "TV/Star Trek Voyager/Star Trek Voyager S01E03 1995-01-30 Time and Again 2022-08-12-1958.mpg": buf=0% drop=0%
Yes, this is the new normal. We are now exposing the buffer usage for all streams.
Without the New Streaming Buffer enabled, the drop= statistic will always be 0% (the old buffer is unable to drop) and the buf= percentage will be the same for all clients watching the same channel.
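If you want to keep an eye on these numbers across many recordings, a quick sketch like the one below will pull them out of the log. It only assumes the [SNR] line format matches the examples quoted in this thread, so adjust the pattern if your lines differ.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// snrLine matches the [SNR] buffer statistics lines quoted in this thread, e.g.
//   [SNR] Buffer statistics for "TV/Show/Show S01E01.mpg": buf=0%-1% drop=0%
var snrLine = regexp.MustCompile(`\[SNR\] Buffer statistics for "([^"]+)": buf=([0-9%-]+) drop=([0-9%]+)`)

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		if m := snrLine.FindStringSubmatch(scanner.Text()); m != nil {
			// m[1] = recording path, m[2] = buffer usage, m[3] = dropped percentage
			fmt.Printf("%s\tbuf=%s\tdrop=%s\n", m[1], m[2], m[3])
		}
	}
}
```

Pipe your DVR log into it, e.g. `go run snrstats.go < channels-dvr.log` (the file names are just placeholders for wherever you keep the script and your log).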
Sorry if this is off topic, but can someone link me to how to sign up for beta access? Thank you.
So it appears the DVR recording engine is a client?
[SNR] Buffer statistics for "TV/Magnolia Table With Joanna Gaines/Magnolia Table With Joanna Gaines S04E04 Favorite Sides 2022-08-14-0957.mpg": buf=0%-1% drop=0%
I think the buf=0%-1% was caused by Channels DVR re-authing the channel to record (which happens every 8 days for this Magnolia channel).
2022/08/14 09:57:00.000806 [DVR] Starting job 1660496220-12 Magnolia Table With Joanna Gaines on ch=[6108]
2022/08/14 09:57:28.797259 [TVE] action=fill_form u=USERNAME
2022/08/14 09:57:38.911505 [TVE] action=scienceauth done=true
2022/08/14 09:57:38.911552 [TVE] action=authed
2022/08/14 09:57:41.234409 [TVE] stream timestamps: diy: start_at=2022-08-14T09:56:41-07:00 current_at=2022-08-14T09:57:01-07:00 end_at=2022-08-14T09:57:09-07:00
2022/08/14 09:57:41.234598 [TNR] Opened connection to TVE-Comcast_SSO for ch6108 DIY
2022/08/14 09:57:41.267351 [DVR] Recording for job 1660496220-12 from TVE-Comcast_SSO ch6108 into "TV/Magnolia Table With Joanna Gaines/Magnolia Table With Joanna Gaines S04E04 Favorite Sides 2022-08-14-0957.mpg" for 34m59.999087284s
2022/08/14 09:57:41.616220 [IDX] Generating video index for job 1660496220-12
2022/08/14 10:32:00.001037 [SNR] Buffer statistics for "TV/Magnolia Table With Joanna Gaines/Magnolia Table With Joanna Gaines S04E04 Favorite Sides 2022-08-14-0957.mpg": buf=0%-1% drop=0%
Yes, recordings go through tuner sharing
The buffer increases when video data has to be held in RAM before it is received by the client. Maybe in this case the disk was slow to wake up, so the memory buffer spiked slightly.
Anything less than buf=100% is normal. When the buffer reaches 100%, drop= will start increasing.
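To make the relationship between those two numbers concrete, here is a toy model of a fixed-size buffer that reports a high-water mark as buf= and only starts counting drop= once it is completely full. This is just an illustration of the behavior described above, not the actual DVR code; all the names and numbers are made up.

```go
package main

import "fmt"

// boundedBuffer is a toy model of a fixed-size streaming buffer that
// reports peak usage (buf=) and discarded bytes (drop=) as percentages.
type boundedBuffer struct {
	capacity int // maximum bytes held in RAM
	used     int // bytes currently buffered
	peakUsed int // high-water mark, reported as buf=
	written  int // total bytes offered by the source
	dropped  int // bytes discarded because the buffer was full
}

// Write buffers up to n bytes; anything that doesn't fit is dropped.
func (b *boundedBuffer) Write(n int) {
	b.written += n
	if free := b.capacity - b.used; n > free {
		b.dropped += n - free // only possible once the buffer is 100% full
		n = free
	}
	b.used += n
	if b.used > b.peakUsed {
		b.peakUsed = b.used
	}
}

// Read drains up to n bytes (the client catching up).
func (b *boundedBuffer) Read(n int) {
	if n > b.used {
		n = b.used
	}
	b.used -= n
}

// Stats formats the numbers the way the [SNR] log line does.
func (b *boundedBuffer) Stats() string {
	bufPct := 100 * b.peakUsed / b.capacity
	dropPct := 0
	if b.written > 0 {
		dropPct = 100 * b.dropped / b.written
	}
	return fmt.Sprintf("buf=%d%% drop=%d%%", bufPct, dropPct)
}

func main() {
	b := &boundedBuffer{capacity: 1000}

	// A client that keeps up: a little data sits in RAM, nothing is lost.
	b.Write(10)
	b.Read(10)
	fmt.Println(b.Stats()) // buf=1% drop=0%

	// A stalled client: the buffer fills completely, then further data is dropped.
	b.Write(1500)
	fmt.Println(b.Stats()) // buf=100% drop=33%
}
```

As long as the reader keeps draining the buffer, the peak stays low and nothing is ever discarded; only once the writer outruns the reader long enough to hit full capacity does drop= move above 0%.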
Thanks for the confirmation
No, my Synology NAS disks in the SHR RAID spin 24/7.
And I have NOT enabled the New Streaming Buffer.
We are now exposing stats about things that have always happened but that we previously had no insight into. This additional diagnostic detail will help us diagnose hardware and software issues as they come up.
Hi folks. Could you try out the latest beta when you get a chance and let me know if everything is continuing to behave as normal? We've made some improvements to our logging infrastructure in the latest build around TVE that should have no user-facing changes (but potentially fix some strange issues where logs ended up in the wrong file from time to time). We just want to make sure there aren't any regressions. There isn't anything specific that is needed to test, just wanting to make sure there aren't any panics/crashes or other strange things that crop up during normal usage.
We've been running the new logging infrastructure in our beta apps for a couple weeks without incident, so we aren't expecting any issues.
Sure, does this only affect those using the Experimental New Streaming Buffer (Topic Title)?
Like debug recording logs for back-to-back recordings with overlap on the same TVE/M3U channel?
I've noticed that, but I hardly ever look at them and figured you guys knew about it.
It impacts both; we just had such a nice group of testers here that I threw it in for feedback.
Those are exactly the sort of bugs we’ve tried to tackle with these changes.
Scheduled TVE channel 10-minute recording is done, but it still shows as recording in:
- Schedule
- Manage Shows
- Calendar
The last log entries are:
2022/08/19 16:10:02.000298 [SNR] Buffer statistics for "TV/Canary REELZ Recording/Canary REELZ Recording 2022-08-19-1600.mpg": buf=0% drop=0%
2022/08/19 16:10:02.093758 [MTS] Statistics for "TV/Canary REELZ Recording/Canary REELZ Recording 2022-08-19-1600.mpg": skipped=0 unhandled_packets=0 discontinuity_detected=0 transport_errors=0 invalid_pts=0 invalid_dts=0 saw_pcr=true saw_pmt=true highest_pts=617.659233
Normally, after it's done recording, I would see this (below is from the last recording):
[SNR] Buffer statistics for "TV/Canary REELZ Recording/Canary REELZ Recording 2022-08-18-1600.mpg": buf=0% drop=0%
[TNR] Closed connection to TVE-Comcast_SSO for ch6095 REELZ
[MTS] Statistics for "TV/Canary REELZ Recording/Canary REELZ Recording 2022-08-18-1600.mpg": skipped=0 unhandled_packets=0 discontinuity_detected=0 transport_errors=0 invalid_pts=0 invalid_dts=0 saw_pcr=true saw_pmt=true highest_pts=611.692378
[DVR] Finished job 1660863602-ch6095 Canary REELZ Recording
[DVR] Processing file-970: TV/Canary REELZ Recording/Canary REELZ Recording 2022-08-18-1600.mpg
[DVR] Running commercial detection on file 970 (TV/Canary REELZ Recording/Canary REELZ Recording 2022-08-18-1600.mpg)
[DVR] Commercial detection for Canary REELZ Recording 2022-08-18-1600.mpg finished with 4 markers in 2m5.408329917s (2 threads).
Can you submit diagnostics while it’s still in this state?
Logs have been submitted as 5b7c9d9e-b30f-4b25-b9e9-3e5eaaebbcdb