
[linux-dvb] Re: CVS UNLOCKED



Gerd Knorr wrote:

Holger Waechtler <holger@qanu.de> writes:


I think there should be no serious problems, but to verify this we need to set up a test case. The planned DMA buffer partitioning table and write pointer position exposed to userspace might look like this:

struct dma_stuff {
        volatile unsigned long current_dma_buffer;  /* last completely filled buffer */

Fine.


        volatile unsigned long dma_pos;  /* DMA write pointer in buffer
                                            [(current_dma_buffer+1)%num_dma_buffers] */

I wouldn't put that in there. Not sure I can do that with saa7134 + cx88, but even if I can, that most likely requires a register access and thus wouldn't work in a memory page which can be mapped to userspace.

Some DMA controllers can throw interrupts at programmable DMA buffer watermarks, some cannot. For the latter this pointer is always zero; the result is somewhat higher latency.
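For illustration, a reader honouring that fallback might check for new data like this (a sketch; the fields are the ones proposed in this thread, the helper name and bookkeeping are assumptions):

/* Returns nonzero when new data arrived since the reader last looked.
 * 'seen_buf' and 'seen_pos' are the values the reader observed last
 * time (hypothetical bookkeeping). On controllers without watermark
 * interrupts dma_pos stays zero, so only completed buffers ever
 * signal new data -- still correct, just with up to one buffer of
 * extra latency. */
static int new_data_available(const volatile struct dma_stuff *ds,
                              unsigned long seen_buf,
                              unsigned long seen_pos)
{
        return ds->current_dma_buffer != seen_buf
                || ds->dma_pos != seen_pos;
}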


        const int num_dma_buffers;
        const struct { unsigned long offset; unsigned long len; }
                dma_buffer_tab[] = {
                /* ... */
        };

Looks like a bad idea. Is that in-kernel stuff or for export to
userspace?

It's just the table that announces the buffer layout to both sides. Usually you allocate one big chunk of DMA memory using consistent_alloc() and all buffers are simply fixed-size (sometimes page-aligned) blocks inside this chunk.

You could just as well allocate multiple chunks (but you need to ensure that their sizes are multiples of the page size and that they are page-aligned) and concatenate them in the remap() call.
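Assembled into one piece, the layout described here might look like the following sketch (the const qualifiers from the fragments above are dropped so the driver can fill the fields; consistent_alloc() is shown in its PCI flavour, pci_alloc_consistent(); all sizes are hypothetical):

#include <linux/errno.h>
#include <linux/mm.h>            /* PAGE_ALIGN */
#include <linux/pci.h>

#define NUM_DMA_BUFFERS  8               /* hypothetical */
#define DMA_BUFFER_SIZE  (348 * 188)     /* hypothetical: 348 TS packets */

/* the layout/write-pointer page exposed read-only to userspace */
struct dma_stuff {
        volatile unsigned long current_dma_buffer;  /* last completely filled */
        volatile unsigned long dma_pos;             /* write pointer, see above */
        int num_dma_buffers;
        struct { unsigned long offset; unsigned long len; }
                dma_buffer_tab[NUM_DMA_BUFFERS];
};

static void *chunk;           /* one big consistent-DMA chunk */
static dma_addr_t chunk_bus;

static int setup_dma_buffers(struct pci_dev *pdev, struct dma_stuff *ds)
{
        unsigned long stride = PAGE_ALIGN(DMA_BUFFER_SIZE);
        int i;

        chunk = pci_alloc_consistent(pdev, NUM_DMA_BUFFERS * stride,
                                     &chunk_bus);
        if (!chunk)
                return -ENOMEM;

        ds->num_dma_buffers = NUM_DMA_BUFFERS;
        for (i = 0; i < NUM_DMA_BUFFERS; i++) {
                /* page-aligned, fixed-size blocks inside the chunk */
                ds->dma_buffer_tab[i].offset = i * stride;
                ds->dma_buffer_tab[i].len = DMA_BUFFER_SIZE;
        }
        return 0;
}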

For userspace exports just num_dma_buffers + dma_buffer_size should be enough I think; you can remap physically fragmented stuff into a linear virtual memory area, so the userspace app doesn't see the fragmentation.

Sometimes buffer sizes are not multiples of PAGE_SIZE or the word size, but the DMA controller requires a particular alignment, so the offsets should get exported...
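On the userspace side, the exported offsets would be used roughly like this (a sketch; the device node, the mapping size, and the assumption that the layout page sits at the start of the mapping are all hypothetical):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_LEN (1024 * 1024)            /* hypothetical mapping size */

struct dma_stuff {                       /* as sketched above */
        volatile unsigned long current_dma_buffer;
        volatile unsigned long dma_pos;
        int num_dma_buffers;
        struct { unsigned long offset; unsigned long len; } dma_buffer_tab[];
};

int main(void)
{
        int fd = open("/dev/dvb/adapter0/dvr0", O_RDONLY); /* hypothetical node */
        unsigned char *base;
        struct dma_stuff *ds;
        int i;

        if (fd < 0)
                return 1;
        base = mmap(NULL, MAP_LEN, PROT_READ, MAP_SHARED, fd, 0);
        if (base == MAP_FAILED)
                return 1;

        /* each buffer is addressed through its exported offset, so the
         * reader never has to guess alignment or stride */
        ds = (struct dma_stuff *) base;
        for (i = 0; i < ds->num_dma_buffers; i++)
                printf("buffer %d: offset %lu, %lu bytes\n", i,
                       ds->dma_buffer_tab[i].offset,
                       ds->dma_buffer_tab[i].len);

        munmap(base, MAP_LEN);
        close(fd);
        return 0;
}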

Hmm, while thinking about it, 188 is an odd number, so we might run into trouble with page alignment. Is that the reason for one entry per DMA buffer? I'd still make all buffers the same size and then have a table with just the offsets.

How would you determine the size of each DMA buffer block then?
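For concreteness, the arithmetic behind the 188-byte worry (the packet count per buffer is hypothetical):

#include <stdio.h>

#define TS_PACKET_SIZE  188              /* one MPEG-2 transport packet */
#define PAGE_SZ         4096
#define PACKETS_PER_BUF 348              /* hypothetical */

int main(void)
{
        /* 4096 = 21 * 188 + 148, so a buffer holding a whole number of
         * TS packets never ends exactly on a page boundary */
        long len = PACKETS_PER_BUF * TS_PACKET_SIZE;              /* 65424 */
        long stride = (len + PAGE_SZ - 1) & ~(long)(PAGE_SZ - 1); /* 65536 */

        /* a per-buffer (offset, len) table reconciles both constraints:
         * len stays a packet multiple, offset stays page-aligned, at the
         * cost of (stride - len) padding bytes per buffer */
        printf("len %ld, stride %ld, padding %ld\n",
               len, stride, stride - len);
        return 0;
}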

How can applications wait for the next DMA buffer to be filled?

By calling poll() or select()? The DMA interrupt handler simply wakes up the poll wait queue as soon as data arrives, or as soon as some amount of data arrives...
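A minimal sketch of that wakeup path on the kernel side (the wait queue, function names, and globals are hypothetical; real code would keep this state per device and per open file):

#include <linux/fs.h>
#include <linux/poll.h>
#include <linux/wait.h>

/* struct dma_stuff as sketched above */

static DECLARE_WAIT_QUEUE_HEAD(dma_wq);
static unsigned long reader_pos;   /* last buffer the app consumed */

/* called from the DMA interrupt handler whenever a buffer completes
 * (or, with watermark interrupts, whenever dma_pos advances) */
static void dma_buffer_completed(struct dma_stuff *ds)
{
        ds->current_dma_buffer =
                (ds->current_dma_buffer + 1) % ds->num_dma_buffers;
        wake_up_interruptible(&dma_wq);
}

static unsigned int dvb_dma_poll(struct file *file, poll_table *wait)
{
        struct dma_stuff *ds = file->private_data;

        poll_wait(file, &dma_wq, wait);
        /* readable if a buffer completed since the app last looked,
         * or if a watermark interrupt advanced dma_pos */
        if (ds->current_dma_buffer != reader_pos || ds->dma_pos != 0)
                return POLLIN | POLLRDNORM;
        return 0;
}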

Holger




