[linux-dvb] Re: CVS UNLOCKED



Holger Waechtler <holger@qanu.de> writes:

> I think there should be no serious problems, but to verify this we
> need to set up the testcase. The planned DMA buffer partitioning table
> and write pointer position exposed to userspace may look like this:
> 
> struct dma_stuff {
>     volatile unsigned long current_dma_buffer; /* last completely filled buffer */

Fine.

>     volatile unsigned long dma_pos; /* DMA write pointer in buffer [(current_dma_buffer+1)%num_dma_buffers] */

I wouldn't put that in there.  Not sure I can do that with saa7134 +
cx88, but even if I can, that most likely requires a register access
and thus wouldn't work in a memory page which can be mapped to
userspace.

>     const int num_dma_buffers;
>     const struct { unsigned long offset; unsigned long len; } dma_buffer_tab [] = {
>        /* ... */
>     };

Looks like a bad idea.  Is that in-kernel stuff or for export to
userspace?

For userspace exports just num_dma_buffers + dma_buffer_size should be
enough, I think; you can remap physically fragmented stuff into a
linear virtual memory area, so the userspace app doesn't see the
fragmentation.
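
Roughly like this on the application side -- only a sketch, the device
path and the DVB_GET_DMA_INFO ioctl are made up to illustrate the idea,
there is no such interface yet:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

struct dma_info {
        unsigned long num_dma_buffers;   /* how many buffers the ring has */
        unsigned long dma_buffer_size;   /* size of each buffer in bytes  */
};

/* hypothetical ioctl, for illustration only */
#define DVB_GET_DMA_INFO _IOR('o', 200, struct dma_info)

int main(void)
{
        struct dma_info info;
        unsigned char *base;
        unsigned long i;
        int fd = open("/dev/dvb/adapter0/dvr0", O_RDONLY);

        if (fd < 0 || ioctl(fd, DVB_GET_DMA_INFO, &info) < 0)
                return 1;

        /* one linear mapping -- physical fragmentation stays hidden */
        base = mmap(NULL, info.num_dma_buffers * info.dma_buffer_size,
                    PROT_READ, MAP_SHARED, fd, 0);
        if (base == MAP_FAILED)
                return 1;

        /* buffer i starts at a fixed offset, no per-buffer table needed */
        for (i = 0; i < info.num_dma_buffers; i++) {
                unsigned char *buf = base + i * info.dma_buffer_size;
                /* ... parse TS packets in buf ... */
                (void)buf;
        }

        munmap(base, info.num_dma_buffers * info.dma_buffer_size);
        close(fd);
        return 0;
}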

Hmm, while thinking about it, 188 is an odd number, so we might run
into trouble with page alignment.  Is that the reason for one entry per
dma buffer?  I'd still make all buffers the same size and then have a
table with just the offsets.
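
I.e. something along these lines -- just a sketch, the names and the
size calculation are made up, but picking the buffer size as a multiple
of both the page size and 188 would keep the buffers page aligned and
no TS packet would straddle a buffer boundary:

#define TS_PACKET_SIZE   188
#define PAGE_SIZE_       4096

/* lcm(188, 4096) = 192512 bytes = 47 pages = 1024 TS packets */
#define DMA_BUFFER_SIZE  (47 * PAGE_SIZE_)

struct dma_stuff {
        volatile unsigned long current_dma_buffer;  /* last completely filled buffer */
        unsigned long num_dma_buffers;
        unsigned long dma_buffer_size;              /* same for every buffer */
        unsigned long dma_buffer_offset[];          /* offsets into the mmap()ed area */
};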

How can applications wait for the next dma buffer to be filled?
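
If the driver wakes up poll() waiters whenever it advances
current_dma_buffer, the application side could be as simple as this
sketch -- again just an illustration, reusing the struct from above:

#include <poll.h>

static void consume_loop(int fd, volatile struct dma_stuff *dma,
                         unsigned char *base)
{
        unsigned long last = dma->current_dma_buffer;

        for (;;) {
                struct pollfd pfd = { .fd = fd, .events = POLLIN };

                if (poll(&pfd, 1, -1) <= 0)
                        break;

                /* catch up with everything filled since we last looked */
                while (last != dma->current_dma_buffer) {
                        last = (last + 1) % dma->num_dma_buffers;
                        /* ... process base + dma->dma_buffer_offset[last] ... */
                }
        }
}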

  Gerd

-- 
#define printk(args...) fprintf(stderr, ## args)



