[linux-dvb] Re: [PATCH] fix for stream corruption on budget /Nova-T cards



Stefan Betermieux wrote:

> Hi,
>
> unfortunately, the patch doesn't do the trick for me; I still have to use another PCI burst mode. I have taken a look at your patch and noticed that your workaround is executed on every second call of vpeirq() on my system. Is this normal behaviour?
I don't think it is normal behaviour. I see about 64 IRQs/second.
The buffer is 188kB. If we assume the DVB-T stream is about 2MByte/s, we should half-fill the buffer 2048 / (188/2) ≈ 21.8 times per second, so I reckon it should hit the 0%/50% mark approximately 21 times/s. Perhaps it is every second call if your stream rate is higher.

I was reading through the saa7146 datasheet last night. As far as I can tell, the chip is configured with limit=9 (set in the BASE_PAGE register), which should give an IRQ for every 2^(5 + 9) = 16384 bytes (16KB) of data.
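
In other words (just a sketch of the arithmetic as I read the datasheet, not actual driver code):

/* bytes of DMA data per IRQ for a given LIMIT value -- my reading of
 * the saa7146 datasheet, sketch only */
#define DMA3_IRQ_BYTES(limit)  (1UL << (5 + (limit)))
/* limit = 9 -> 16384 bytes (16KB); limit = 8 -> 8192 bytes (8KB) */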

This doesn't quite match my experience, since it would lead to an IRQ rate of 2048/16 = 128/s, when I typically only see around 64.

It also generates an IRQ when it gets to 50% and 100% of the buffer, i.e. when it fills the ODD and EVEN buffers.

Perhaps you're just seeing the 50%/100% IRQs?

Try tweaking the LIMIT parameter in the BASE_PAGE register from "...|0x90" to "...|0x80" to see if it generates IRQs more frequently.
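
If I remember budget-core.c correctly, the relevant line looks something like the following (a sketch from memory -- please check against your source; BASE_PAGE3 and ME1 are the saa7146 names I recall):

/* limit = 8 (0x80) instead of 9 (0x90): IRQ every 8KB instead of 16KB */
saa7146_write(dev, BASE_PAGE3, budget->pt.dma | ME1 | 0x80);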


I wonder why we bother using both the ODD and EVEN buffers at all. The distinction doesn't appear relevant when the chip is dealing with raw DVB data rather than interlaced video fields. Perhaps we should just use one field buffer and double its size. This should simplify things a bit; it seems to be the transition between the two fields which causes the ambiguous DMA pointer in the first place.

> On the other hand, I can see a connection between your patch and mine. If I reduce the burst threshold, then the buffer doesn't get filled up so much, and the difference between olddma and newdma should be smaller.
As far as I can tell, the burst threshold just controls how often the data goes from the FIFO to the main memory via PCI DMA.

It is the LIMIT parameter which defines how often the IRQ should fire and hence how much data should be available each time the IRQ is called.


I see that you've recently written this patch on the subject:

* stream. By changing this parameter, the PCI burst mode for DMA 3 can be
* optimised. The 3 least significant bits of "budget_burst" equal bursts
* of 1,2,4,8,16,32,64 and 128 Dwords. The fourth and fifth bit are assigned
* to the FIFO threshold: 4,8,16 or 32 Dwords. The default is 0x1c: 16 DW
* burst and 32 DW FIFO.
****************************************************************************/
int budget_burst = 0x1c;

I think what you've written is wrong; my interpretation of the datasheet is as follows:
- bits 0,1 control the threshold.
- bits 2,3,4 control the burst size.
So 0x1c == 0001 1100 gives:
Threshold (00) == 4 Dwords
Burst (111) == 128 Dwords

I think the default FIFO threshold seems too aggressive. I think this basically makes the SAA7146 request access to the PCI bus almost continuously which I think it leads to the transfers on the bus being unneccessarily small which drops the efficiency and therefore overall bandwidth on the PCI bus.

Your recommended setting of 0x03 == 0000 0011 gives:
Threshold (11) == 32 Dwords
Burst (000) == 1 Dword
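
As a sanity check, the decoding under my interpretation could be written like this (a sketch only -- the variable names are mine, not the driver's):

/* decode budget_burst under my bit interpretation (sketch only) */
unsigned int threshold_dw = 4 << (budget_burst & 0x03);        /* bits 1:0 -> 4,8,16,32 DW */
unsigned int burst_dw     = 1 << ((budget_burst >> 2) & 0x07); /* bits 4:2 -> 1,2,...,128 DW */
/* 0x1c -> threshold 4 DW,  burst 128 DW
 * 0x03 -> threshold 32 DW, burst 1 DW  */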

My impression is that a high threshold helps. Could you try increasing the burst size as well? The only thing this should do is improve the efficiency of the bus.
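
For example, combining a high threshold with a large burst would be 0x1f under my decoding (a hypothetical value, mirroring the default line in your patch):

/* hypothetical: bits 1:0 = 11 (32 DW threshold),
 * bits 4:2 = 111 (128 DW burst), per my bit interpretation */
int budget_burst = 0x1f;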

Given the anecdotal evidence that higher-speed CPUs have more problems than slower ones, perhaps what is happening is that by reducing the bus efficiency you've effectively slowed the accesses by the CPU, and this only indirectly avoids the problem rather than actually fixing it.

> I will dig deeper into it after the weekend. Do you have a pointer to where I can verify the TS stream inside the dvb driver? At the moment, I have to count the ccErrors in vdr, which is not the best solution regarding latency.
You could try adding some code to the part of vpeirq() which looks for mem[0] == 0x47; 0x47 is the MPEG-TS sync byte that marks the start of every 188-byte TS packet.
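
Something along these lines might do it (a hypothetical debug helper, not existing driver code -- the function name and the printk are mine):

/* hypothetical debug helper: walk the newly arrived DMA data in
 * 188-byte steps and count packets that don't start with the 0x47
 * sync byte */
static void ts_sync_check(const u8 *mem, size_t len)
{
        size_t i;
        unsigned int bad = 0;

        for (i = 0; i + 188 <= len; i += 188)
                if (mem[i] != 0x47)
                        bad++;

        if (bad)
                printk(KERN_DEBUG "budget: %u TS packets out of sync\n", bad);
}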

Jon



