[media-workshop] Agenda for the Edinburgh mini-summit
hverkuil at xs4all.nl
Mon Sep 23 16:29:33 CEST 2013
On 09/23/2013 03:00 PM, Hugues FRUCHET wrote:
> Hi Hans,
> Unfortunately I'm not available on Monday; I'm only available on Wednesday the 23rd.
> Regarding the discussions we have already had, half a day is certainly too long; I think we can shorten it to one or two hours.
I'll try to squeeze it into the agenda. I plan on posting a more detailed agenda with timeslots
later this week.
Are you available for further discussions during (part of) some other day?
24th or 25th perhaps? Just in case we need more time to work on this.
> -----Original Message-----
> From: Hans Verkuil [mailto:hverkuil at xs4all.nl]
> Sent: Monday, 23 September 2013 12:37
> To: Hugues FRUCHET
> Cc: Oliver Schinagl; linux-media at vger.kernel.org; media-workshop; Benjamin Gaignard; hdegoede at redhat.com
> Subject: Re: [media-workshop] Agenda for the Edinburgh mini-summit
> Hi Hugues,
> Do you think it would be possible to discuss this topic in a small group on Monday
> (October 21st)? Half a day for this during the summit itself is too long, but if
> we can discuss it on Monday, then we can just present the results of that meeting
> during the summit.
> Monday would be best since then Laurent Pinchart is available (he's attending the
> ARM summit on Tuesday), and his input would be very useful.
> Please let me know asap whether this is an option for you.
> Hans, would you be available for this on the Monday as well?
> On 09/10/2013 09:54 AM, Hans Verkuil wrote:
>> On Tue 10 September 2013 09:36:00 Hugues FRUCHET wrote:
>>> Thanks Hans,
>>> Do you have an implementation based on meta planes that we could look at for the code details?
>> Not as such. Basically you just add another pixelformat define for a multiplanar
>> format. And you define this format as having X video planes and Y planes containing
>> meta data.
>>> It would be nice to have one with a noticeable amount of code/processing done on the user-land side.
>>> I'm also wondering how libv4l selects each driver-specific user-land plugin and how the plugins are loaded.
>> libv4l-mplane in v4l-utils.git is an example of a plugin.
>> Documentation on the plugin API seems to be sparse, but Hans de Goede,
>> Sakari Ailus or Laurent Pinchart know a lot more about it.
>> There are (to my knowledge) no plugins that do exactly what you want, so
>> you're the first. But it has been designed with your use-case in mind.
>>> -----Original Message-----
>>> From: Hans Verkuil [mailto:hverkuil at xs4all.nl]
>>> Sent: Monday, 9 September 2013 12:33
>>> To: Hugues FRUCHET
>>> Cc: Mauro Carvalho Chehab; Oliver Schinagl; media-workshop; Benjamin Gaignard; linux-media at vger.kernel.org
>>> Subject: Re: [media-workshop] Agenda for the Edinburgh mini-summit
>>> Hi Hugues,
>>> On 09/05/2013 01:37 PM, Hugues FRUCHET wrote:
>>>> Hi Mauro,
>>>> For the floating-point issue: we have not encountered such a problem while
>>>> integrating various codecs (currently H264, MPEG4 and VP8 on both the Google
>>>> G1 IP and ST IPs). Could you specify with which codec you experienced the
>>>> need for FP support?
>>>> For the user-space library, the problem we encountered is that the interface
>>>> between the parsing side (e.g. H264 SPS/PPS decoding, slice header decoding,
>>>> reference frame list management, and everything else needed to prepare the
>>>> hardware IP calls) and the decoder side (hardware IP handling) is not
>>>> standardized and differs widely depending on the IPs and the CPU/coprocessor
>>>> partitioning. This means that even if we use the standard V4L2 capture
>>>> interface to inject the video bitstream (H264 access units, for example),
>>>> some proprietary metadata has to be attached to each buffer, making the V4L2
>>>> interface de facto non-standard for this.
>>> There are lots of drivers (mostly camera drivers) that have non-standard
>>> video formats. That's perfectly fine, as long as libv4l plugins/conversions
>>> exist to convert it to something that's standardized.
>>> Any application using libv4l doesn't notice the work going on under the
>>> hood, and the driver will look like a standard v4l2 driver.
>>> The multiplanar API seems to me to be very suitable for these sort of devices.
>>>> The Exynos S5P MFC does not attach any meta to its capture input buffers,
>>>> keeping a standard video bitstream injection interface (what is naturally
>>>> output by well-known standard demuxers such as the GStreamer or Android
>>>> Stagefright ones). This is the way we want to go: we will keep the hardware
>>>> details on the kernel driver side. Moreover, this drastically simplifies the
>>>> integration of our video drivers into user-land multimedia middleware,
>>>> reducing the time to market and the support needed when reaching our end
>>>> customers. Our target is to create a unified GStreamer V4L2
>>>> decoder/encoder plugin and a unified OMX V4L2 decoder/encoder to fit
>>>> Android, based on a single V4L2 M2M API whatever the hardware IP is.
>>>> About the mini-summit: Benjamin and I are checking internally how we can
>>>> attend to discuss this topic. We think about half a day is needed, so that
>>>> we can share our code and discuss other codebases you know of that deal
>>>> with video codecs.
>>> We are getting a lot of topics for the agenda and half a day for one topic
>>> seems problematic to me.
>>> One option is to discuss this in a smaller group a day earlier (October 22).
>>> We might be able to get a room, or we can discuss it in the hotel lounge or
>>> pub :-) or something.
>>> Another option is that ST organizes a separate brainstorm session with a
>>> few core developers. We have done that in the past quite successfully.
>> media-workshop mailing list
>> media-workshop at linuxtv.org