DVR crashed and didn't recover this morning

I just noticed that the DVR was offline (not responding to HTTP) but my QNAP was online. I logged into the QNAP and Channels DVR showed as running, but I couldn't access it because it was unresponsive. There was nothing in the QNAP logs. I shut it down and restarted it, and it's running now. Here's a copy of the log:

2017/02/11 08:38:37 [DVR] Commercial detection finished with 8 markers.
2017/02/11 09:57:00 [DVR] Fetching guide data for 44 stations in USA-OTA98663 @ 2017-02-11 9:30AM
panic: runtime error: index out of range

goroutine 53 [running]:
panic(0xb0e8a0, 0xc42000c100)
/home/vagrant/go/src/runtime/panic.go:500 +0x1a1
_/home/vagrant/channels-server-x86_64/dvr.(*Recorder).indexAirings(0xc420142580, 0xc420460070, 0xc, 0xc42037cb70, 0xed0314118, 0xc400000000, 0x12a58e0, 0x13a52453c000, 0xed02ff5ec, 0x0)
/home/vagrant/channels-server-x86_64/dvr/recorder.go:410 +0xdaf
_/home/vagrant/channels-server-x86_64/dvr.(*Recorder).RunIndexer(0xc420142580)
/home/vagrant/channels-server-x86_64/dvr/recorder.go:300 +0xaee
created by _/home/vagrant/channels-server-x86_64/dvr.(*Recorder).Run
/home/vagrant/channels-server-x86_64/dvr/recorder.go:140 +0x43
2017/02/11 10:32:43 [SYS] Starting Channels DVR v2017.02.10.0714 (linux-x86_64) in /share/CACHEDEV1_DATA/.qpkg/ChannelsDVR/channels-dvr/data
2017/02/11 10:32:44 [HDR] Found 2 devices


Thanks for the report!

Had you set up some custom channel mappings with the new build?

This bug has been fixed in v2017.02.11.0312

Nope, didn’t touch the channel mapping thing yet. I did create a couple new passes last night.

Interesting. I fixed that bug last night but didn’t push out a release because I thought only custom channel mappings were affected.

Seems like no one else hit it, so that’s good.
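For anyone curious, this class of crash comes from indexing a slice without checking its length first. Here's a minimal sketch of the failure mode and the kind of guard that prevents it; the function and names are hypothetical for illustration, not the actual DVR source:

```go
package main

import "fmt"

// firstChannel returns the first mapped channel for an airing, or ""
// when no mapping exists. Without the length check, an airing whose
// mappings slice is empty would hit mappings[0] and panic with
// "index out of range" -- and a panic in a background goroutine like
// the indexer takes the whole server process down with it.
func firstChannel(mappings []string) string {
	if len(mappings) == 0 {
		return ""
	}
	return mappings[0]
}

func main() {
	fmt.Println(firstChannel([]string{"4.1", "4.2"}))
	fmt.Println(firstChannel(nil) == "")
}
```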

Not sure if this is related, but mine crashed as well. It did restart on its own.

Version: 2017.02.11.0312
OS: Ubuntu Linux 16.04 (kernel: 4.4.0-62-generic)

goroutine 1122 [IO wait]:
net.runtime_pollWait(0x7ff92c7465e0, 0x72, 0x1e)
/home/vagrant/go/src/runtime/netpoll.go:160 +0x59
net.(*pollDesc).wait(0xc421c023e0, 0x72, 0xc4203e16f0, 0xc42000c068)
/home/vagrant/go/src/net/fd_poll_runtime.go:73 +0x38
net.(*pollDesc).waitRead(0xc421c023e0, 0x1254740, 0xc42000c068)
/home/vagrant/go/src/net/fd_poll_runtime.go:78 +0x34
net.(*netFD).Read(0xc421c02380, 0xc420582000, 0x4000, 0x4000, 0x0, 0x1254740, 0xc42000c068)
/home/vagrant/go/src/net/fd_unix.go:243 +0x1a1
net.(*conn).Read(0xc422cbc6d0, 0xc420582000, 0x4000, 0x4000, 0x0, 0x0, 0x0)
/home/vagrant/go/src/net/net.go:173 +0x70
crypto/tls.(*block).readFromUntil(0xc42319bb60, 0x7ff92c6fe8a8, 0xc422cbc6d0, 0x5, 0xc422cbc6d0, 0x28)
/home/vagrant/go/src/crypto/tls/conn.go:476 +0x91
crypto/tls.(*Conn).readRecord(0xc425037500, 0xc1ee17, 0xc425037608, 0xc420560680)
/home/vagrant/go/src/crypto/tls/conn.go:578 +0xc4
crypto/tls.(*Conn).Read(0xc425037500, 0xc4230f2000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/home/vagrant/go/src/crypto/tls/conn.go:1113 +0x116
net/http.(*persistConn).Read(0xc42020d900, 0xc4230f2000, 0x1000, 0x1000, 0x5eb590, 0xc4203e1b58, 0x42e2fd)
/home/vagrant/go/src/net/http/transport.go:1261 +0x154
bufio.(*Reader).fill(0xc4204c0b40)
/home/vagrant/go/src/bufio/bufio.go:97 +0x10c
bufio.(*Reader).Peek(0xc4204c0b40, 0x1, 0xc4203e1bbd, 0x1, 0x1, 0xc4204c0000, 0x0)
/home/vagrant/go/src/bufio/bufio.go:129 +0x62
net/http.(*persistConn).readLoop(0xc42020d900)
/home/vagrant/go/src/net/http/transport.go:1418 +0x1a1
created by net/http.(*Transport).dialConn
/home/vagrant/go/src/net/http/transport.go:1062 +0x4e9

goroutine 1108 [IO wait, 125 minutes]:
net.runtime_pollWait(0x7ff92c746ca0, 0x72, 0xf)
/home/vagrant/go/src/runtime/netpoll.go:160 +0x59
net.(*pollDesc).wait(0xc420986300, 0x72, 0xc423bd5790, 0xc42000c068)
/home/vagrant/go/src/net/fd_poll_runtime.go:73 +0x38
net.(*pollDesc).waitRead(0xc420986300, 0x1254740, 0xc42000c068)
/home/vagrant/go/src/net/fd_poll_runtime.go:78 +0x34
net.(*netFD).Read(0xc4209862a0, 0xc42315a000, 0x1000, 0x1000, 0x0, 0x1254740, 0xc42000c068)
/home/vagrant/go/src/net/fd_unix.go:243 +0x1a1
net.(*conn).Read(0xc4209b4108, 0xc42315a000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/home/vagrant/go/src/net/net.go:173 +0x70
net/http.(*connReader).Read(0xc423ceef80, 0xc42315a000, 0x1000, 0x1000, 0x22, 0xc420577800, 0x0)
/home/vagrant/go/src/net/http/server.go:586 +0x144
bufio.(*Reader).fill(0xc42281a420)
/home/vagrant/go/src/bufio/bufio.go:97 +0x10c
bufio.(*Reader).ReadSlice(0xc42281a420, 0xa, 0x0, 0x1e, 0x6, 0x0, 0x0)
/home/vagrant/go/src/bufio/bufio.go:330 +0xb5
bufio.(*Reader).ReadLine(0xc42281a420, 0xc4205a6870, 0xf0, 0xf0, 0xbaa280, 0x678d33, 0x12a7258)
/home/vagrant/go/src/bufio/bufio.go:359 +0x37
net/textproto.(*Reader).readLineSlice(0xc422cb8e40, 0xc423bd5a88, 0xc423bd5a88, 0x43a7a8, 0xf0, 0xbaa280)
/home/vagrant/go/src/net/textproto/reader.go:55 +0x5e
net/textproto.(*Reader).ReadLine(0xc422cb8e40, 0xc4205a6870, 0xc, 0x0, 0x45453c)
/home/vagrant/go/src/net/textproto/reader.go:36 +0x2f
net/http.readRequest(0xc42281a420, 0xc423cb6000, 0xc4205a6870, 0x0, 0x0)
/home/vagrant/go/src/net/http/request.go:793 +0xa5
net/http.(*conn).readRequest(0xc420134500, 0x1258400, 0xc424bdecc0, 0x0, 0x0, 0x0)
/home/vagrant/go/src/net/http/server.go:765 +0x10d
net/http.(*conn).serve(0xc420134500, 0x1258400, 0xc424bdecc0)
/home/vagrant/go/src/net/http/server.go:1532 +0x3d3
created by net/http.(*Server).Serve
/home/vagrant/go/src/net/http/server.go:2293 +0x44d

The log you posted doesn’t show the entire failure. Please email the whole log file to [email protected]

You can access it via http://x.x.x.x:8089/log?n=2000
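If you'd rather script the download than open that URL in a browser, something like this fetches the log to stdout; the IP address and line count below are placeholders for your own server (a sketch, assuming the default port 8089):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// logURL builds the DVR log endpoint; n is how many lines to fetch.
func logURL(host string, n int) string {
	return fmt.Sprintf("http://%s:8089/log?n=%d", host, n)
}

func main() {
	// Replace with your DVR server's actual address.
	resp, err := http.Get(logURL("192.168.1.50", 2000))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	// Redirect stdout to a file, then attach that file to your email.
	io.Copy(os.Stdout, resp.Body)
}
```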

Then there's my crash that I started another thread on… seems like they're related…

Sent

@Scott it looks like your guide database is getting corrupted again:

2017/02/12 09:26:14 [DVR] Error indexing airings: Corruption: unknown WriteBatch tag

Not sure why you keep running into this issue. I’m starting to suspect you have a bad stick of RAM.

I think so too. It's on my list to replace.