[vdr] Problems playing ongoing recordings?

alexw alexw at undercover.mine.nu
Thu Apr 3 09:10:34 CEST 2008


On Tuesday 01 April 2008 01:05:37 Artur Skawina wrote:
> Alexw wrote:
> >>> Sometimes the player stops in the middle of a recording due to a zero
> >>> size request. Here is the log:
> >>>
> >>> vdr: [3836] dvbplayer thread started (pid=3643, tid=3836)
> >>> vdr: [3836] resuming replay at index 1950 (0:01:18.01)
> >>> vdr: [3837] non blocking file reader thread started (pid=3643, tid=3837)
> >>> vdr: [3836] SetBrokenLink: no GOP header found in video packet
> >>> vdr: [3836] setting audio track to 1 (0)
> >>> vdr: [3836] playing '/var/vdr/video/SERVER/recording/2008-03-28.18.58.50.50.rec/001.vdr'
> >>>
> >>> <<<unexpected stop of replay>>>
> >>>
> >>> vdr: [3837] non blocking file reader thread ended (pid=3643, tid=3837)
> >>> vdr: [3836] dvbplayer thread ended (pid=3643, tid=3836)
> >>>
> >>> vdr: [5618] WANT: fd: 25 1068536495 .. 1068722913 SIZE: 186418
> >>> vdr: [5618] READ: fd: 25 1068536495 .. 1068666704 SIZE: 130209 jump:         0 ra: 12582912
> >>> vdr: [5618] WANT: fd: 25 1068666704 .. 1068983331 SIZE: 316627
> >>> vdr: [5618] READ: fd: 25 1068666704 .. 1068680058 SIZE:  13354 jump:         0 ra: 12582912
> >>> vdr: [5618] READ: fd: 25 1068680058 .. 1068690344 SIZE:  10286 jump:         0 ra: 12582912
> >>> vdr: [5618] READ: fd: 25 1068690344 .. 1068721839 SIZE:  31495 jump:         0 ra: 12582912
> >>> vdr: [5618] READ: fd: 25 1068721839 .. 1069246127 SIZE: 524288 jump:         0 ra: 12582912
> >>> vdr: [5618] WANT: fd: 25 1069246127 .. 1070294703 SIZE: 1048576
> >>> vdr: [5618] READ: fd: 25 1069246127 .. 1069246127 SIZE:      0 jump:         0 ra: 12582912
> >>> vdr: [5618] non blocking file reader thread ended (pid=5563, tid=5618)
> >>> vdr: [5617] dvbplayer thread ended (pid=5563, tid=5617)
> >>
> >> Weird, cUnbufferedFile::Read(Size=0). I'll try to reproduce this.
> >
> > Sometimes it takes a long time to occur, sometimes not.
>
> Did this start after applying my patch, or did it happen in the past too?
> Does it always happen at a certain position? Specific stream or bitrate?
> I don't recall ever having a similar problem, the number 524288 looks a bit
> suspicious...
>

It happened in the past too. It is related neither to the position in the stream nor to the bitrate. It seems to be related to the dvbplayer starving at that moment; I suspect a blocking I/O read somewhere.

I did look at the requested read sizes and noticed that the big chunk requests are linked to the playback freezes.

vdr: [23223] **** trigger read with 23176
vdr: [23223] **** trigger read with 52353
vdr: [23223] **** trigger read with 21065
vdr: [23223] **** trigger read with 22996
vdr: [23223] **** trigger read with 51949
vdr: [23223] **** trigger read with 27058
vdr: [23223] **** trigger read with 24513
vdr: [23223] **** trigger read with 50280
vdr: [23223] **** trigger read with 18108
vdr: [23223] **** trigger read with 20406
vdr: [23223] **** trigger read with 101168 <-- picture freeze
vdr: [23223] **** trigger read with 20742
vdr: [23223] **** trigger read with 20922
vdr: [23223] **** trigger read with 49968
vdr: [23223] **** trigger read with 21879
vdr: [23223] **** trigger read with 21630
vdr: [23223] **** trigger read with 56692
vdr: [23223] **** trigger read with 24832
vdr: [23223] **** trigger read with 23184
vdr: [23223] **** trigger read with 59373
vdr: [23223] **** trigger read with 24832
vdr: [23223] **** trigger read with 25949
vdr: [23223] **** trigger read with 62326
vdr: [23223] **** trigger read with 27614
vdr: [23223] **** trigger read with 25029
vdr: [23223] **** trigger read with 56620
vdr: [23223] **** trigger read with 27066
vdr: [23223] **** trigger read with 22845
vdr: [23223] **** trigger read with 112627 <-- picture freeze
vdr: [23223] **** trigger read with 29840


> >>> As you can see the requested size is increasing until it reaches the
> >>> max buf. This is also a period with freezes in the video (late
> >>> delivery).
> >>
> >> Do these problems (0-sized reads) occur only near the end of a program
> >> being recorded?
> >
> > No, you can experience a stop in the middle of a recording.
> >
> >> Also, I see from the above that the readahead code needs to be more aggressive:
> >>> vdr: [5627] WANT: fd: 25 1188531493 .. 1188861741 SIZE: 330248
> >>
> >> [... small reads...]
> >>
> >>> vdr: [5627] READ: fd: 25 1188616808 .. 1189141096 SIZE: 524288 jump:         0 ra: 12582912
> >>
> >> the readahead window does not cover the area which is being read later
> >> -- this certainly is likely to stall playback. I'll fix this (i did not
> >> expect such a large difference in read request sizes.)
>
> The attached patch makes the readahead window grow much faster, this will
> cause more I/O at the start of playback, but should handle cases like the
> one above better. If it works correctly all the ranges mentioned in "READ:"
> lines should be inside the preceding "WANT:" range and the playback
> shouldn't stall.

With the new patch I reach the maximum readahead window much faster (about 10 s after playback starts).

vdr: [15502] WANT: fd: 26 1004377827 .. 1015728543 SIZE: 11350716
vdr: [15502] WANT: fd: 26 1004535337 .. 1015998967 SIZE: 11463630
vdr: [15502] WANT: fd: 26 1004682706 .. 1016274903 SIZE: 11592197
vdr: [15502] WANT: fd: 26 1004739635 .. 1016553335 SIZE: 11813700
vdr: [15502] WANT: fd: 26 1004839932 .. 1016943423 SIZE: 12103491
vdr: [15502] WANT: fd: 26 1004970331 .. 1017333735 SIZE: 12363404
vdr: [15502] WANT: fd: 26 1005150982 .. 1017733894 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1005505020 .. 1018075966 SIZE: 12570946
vdr: [15502] WANT: fd: 26 1005797742 .. 1018344142 SIZE: 12546400
vdr: [15502] WANT: fd: 26 1006185614 .. 1018631686 SIZE: 12446072
vdr: [15502] WANT: fd: 26 1006351675 .. 1018919902 SIZE: 12568227
vdr: [15502] WANT: fd: 26 1006677271 .. 1019215894 SIZE: 12538623
vdr: [15502] WANT: fd: 26 1007017481 .. 1019548174 SIZE: 12530693
vdr: [15502] WANT: fd: 26 1007229411 .. 1019812323 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1007530282 .. 1020077003 SIZE: 12546721
vdr: [15502] WANT: fd: 26 1007792199 .. 1020375111 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1008063657 .. 1020646569 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1008339261 .. 1020922173 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1008611553 .. 1021194465 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1008909033 .. 1021491945 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1009225795 .. 1021808707 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1009559062 .. 1022141974 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1009941453 .. 1022441318 SIZE: 12499865
vdr: [15502] WANT: fd: 26 1010248984 .. 1022802262 SIZE: 12553278


> Here the readahead window grows to ~5Mbytes just after starting playback,
> i still need to check that this is not too fast, doesn't saturate the disk
> and/or link and cause delays when jumping etc. Tested by playing a few
> files from an NFS mount, didn't notice any problems so far.
>

This works fine here too: no delay when jumping, and the start of playback is OK.

> An incremental patch would look like this (the attached one (vs 1.4.7)
> already includes it):
>
> diff --git a/tools.c b/tools.c
> index a14f799..e22614f 100644
> --- a/tools.c
> +++ b/tools.c
> @@ -1186,13 +1186,13 @@ ssize_t cUnbufferedFile::Read(void *Data, size_t Size)
>             // Trigger the readahead IO, but only if we've used at least some of the previously
>             // requested area. This avoids calling fadvise() after every read() call.
>             size_t cachedsize = cachedend - curpos;
> -           size_t ra = cachedsize + Size*2 + (size_t)jumped*1;
> +           size_t ra = cachedsize + Size*8 + (size_t)jumped*1;
>             if (cutting)
>                ra += KILOBYTE(64);
>             ra = min(readahead, ra);
>             // Start I/O if we A) used some of the data or B) can read sufficiently large new chunk.
>             // (A) is important when starting w/ a small readahead.
> -           if (cachedsize < (ra-ra/4) || cachedsize+KILOBYTE(256) <= ra)
> +           if (cachedsize < (ra-ra/16) || cachedsize+KILOBYTE(256) <= ra)
>                FadviseRead(curpos, ra);
>             }
>          else if (jumped >= 0) {    // either large forward jump, or FF (jumps by ~4xSize)
>
> artur


Client's log extract (startup):

vdr: [25486] READ: fd: 27 803118549 .. 803146397 SIZE:  27848 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803146397 .. 803191530 SIZE:  45133 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803191530 .. 808584629 SIZE: 5393099
vdr: [25486] READ: fd: 27 803191530 .. 803212652 SIZE:  21122 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803212652 .. 803234315 SIZE:  21663 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803234315 .. 803289817 SIZE:  55502 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803289817 .. 809028645 SIZE: 5738828
vdr: [25486] READ: fd: 27 803289817 .. 803312817 SIZE:  23000 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803312817 .. 803334701 SIZE:  21884 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803334701 .. 803409850 SIZE:  75149 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803409850 .. 809629837 SIZE: 6219987
vdr: [25486] READ: fd: 27 803409850 .. 803430400 SIZE:  20550 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803430400 .. 803450775 SIZE:  20375 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803450775 .. 803495691 SIZE:  44916 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803495691 .. 809989165 SIZE: 6493474
vdr: [25486] READ: fd: 27 803495691 .. 803516928 SIZE:  21237 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803516928 .. 803560475 SIZE:  43547 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803560475 .. 810337541 SIZE: 6777066
vdr: [25486] READ: fd: 27 803560475 .. 803579793 SIZE:  19318 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803579793 .. 803626190 SIZE:  46397 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803626190 .. 810708717 SIZE: 7082527
vdr: [25486] READ: fd: 27 803626190 .. 803647444 SIZE:  21254 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803647444 .. 803691436 SIZE:  43992 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803691436 .. 811060653 SIZE: 7369217
vdr: [25486] READ: fd: 27 803691436 .. 803711102 SIZE:  19666 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803711102 .. 803753355 SIZE:  42253 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803753355 .. 811398677 SIZE: 7645322
vdr: [25486] READ: fd: 27 803753355 .. 803772657 SIZE:  19302 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803772657 .. 803829583 SIZE:  56926 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803829583 .. 811854085 SIZE: 8024502
vdr: [25486] READ: fd: 27 803829583 .. 803852980 SIZE:  23397 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803852980 .. 803877264 SIZE:  24284 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803877264 .. 803931256 SIZE:  53992 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803931256 .. 812286021 SIZE: 8354765
vdr: [25486] READ: fd: 27 803931256 .. 803953880 SIZE:  22624 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803953880 .. 803975350 SIZE:  21470 jump:         0 ra: 12582912


Is it possible to log the status of the MPEG-2 decoder feeder?

Cheers,

Alex


