
2011-10-26   Report for the Media subsystem workshop 2011 - Prague - Oct 23-25

The report for the 2011 workshop is already available; see below.

Media subsystem workshop 2011 - Prague - Oct 23-25

Group photo of the Kernel Summit Media Subsystem Workshop 2011

Since 2007, we have been holding annual mini-summits for the media subsystem in order to plan the new features that will be introduced there.

Last year, during the Kernel Summit 2010, it was decided that the Kernel Summit 2011 format would be modified in order to strengthen the interaction between the various subsystem mini-summits and the main Kernel Summit. If this idea works well, the next Kernel Summits will follow the same format.

Several mini-summits were therefore proposed to be held together with the Kernel Summit 2011, and the Media subsystem workshop was among the ones accepted.

So, we'd like to announce that the Media subsystem workshop 2011 will happen together with the Kernel Summit 2011.

The Media subsystem workshop is in its early planning stages, but the idea is to have an entire day for media discussions. We are also planning a presentation session inside the Kernel Summit 2011, with both workshop and Kernel Summit attendees present, where the workshop results will be presented.

I'd like to invite V4L, DVB and RC developers to submit proposals for the themes to be discussed. Please email me if you're interested in being invited to the event.

Hoping to see you soon there!

Mauro

Proposed themes

Theme (proposed by):
  • Buffer management: snapshot mode (Guennadi)
  • Rotation in webcams on tablets while streaming is active (Hans de Goede)
  • V4L2 Spec: ambiguities fix (Hans Verkuil)
  • V4L2 compliance test results (Hans Verkuil)
  • Media Controller presentation, probably on Tue Oct 25 (Laurent Pinchart)
  • Workshop summary presentation on Tue Oct 25 (Mauro Carvalho Chehab)
  • DVB API consistency: audio and video DVB APIs - what to do? (Mauro Carvalho Chehab)
  • Multi FE support: one FE with multiple delivery systems, like the DRX-K frontend (Mauro Carvalho Chehab / Dmitry Belimov)
  • videobuf2 - migration plans for legacy drivers (Mauro Carvalho Chehab)
  • NEC IR decoding - how should we handle 32, 24 and 16 bit protocol variations? (Mauro Carvalho Chehab)
  • Resource locking (Mauro Carvalho Chehab)
  • Multiple CI encoders and how to remove the current CI drivers from staging (Dmitry Belimov)
  • V4L2 on desktop vs. embedded systems (Sakari Ailus)

Day 1 discussions

DVB video/audio.h conversion to V4L2

  • Only used by ivtv and av7110.
  • Deprecate the old API and design a new API in V4L2 (see the sketch after this list).
  • See the RFC posted on June 9th 2011: "RFC: Add V4L2 decoder commands/controls to replace dvb/video.h".
  • Hans will make an RFCv2 and wait for comments from ST.
  • Hans can implement the API in the V4L2 core and ivtv.
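
The decoder command interface eventually merged into V4L2 as VIDIOC_DECODER_CMD / VIDIOC_TRY_DECODER_CMD; those names postdate these minutes, so the following is only a minimal user-space sketch of that direction, with a placeholder device path and minimal error handling:

    /* Sketch only: start and stop a hardware MPEG decoder via the V4L2
     * decoder command ioctls that eventually replaced dvb/video.h.
     * The device path is a placeholder. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        struct v4l2_decoder_cmd cmd;
        int fd = open("/dev/video16", O_RDWR);  /* hypothetical decoder node */

        if (fd < 0)
            return 1;

        memset(&cmd, 0, sizeof(cmd));
        cmd.cmd = V4L2_DEC_CMD_START;           /* start playback */
        if (ioctl(fd, VIDIOC_DECODER_CMD, &cmd) < 0)
            perror("VIDIOC_DECODER_CMD(START)");

        memset(&cmd, 0, sizeof(cmd));
        cmd.cmd = V4L2_DEC_CMD_STOP;            /* stop decoding */
        if (ioctl(fd, VIDIOC_DECODER_CMD, &cmd) < 0)
            perror("VIDIOC_DECODER_CMD(STOP)");

        return 0;
    }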


videobuf2 - Migration plans for legacy drivers

Missing features

  • radio support
  • DVB support
  • Overlay

videobuf1 should be deprecated and drivers moved to videobuf2.

Some drivers (ab)used videobuf for audio support, this has been cleaned up by patches.
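
For the drivers being migrated, the core of the conversion is replacing the videobuf queue with a struct vb2_queue plus a small set of vb2_ops callbacks. The following is a minimal kernel-side sketch for a fictitious "foo" capture driver using vmalloc'ed buffers; field and callback signatures follow a current kernel and have changed several times since these minutes were written:

    /* Illustrative videobuf2 queue set-up for a hypothetical "foo" driver.
     * Signatures follow a recent kernel; treat this as a sketch, not as
     * the API of 2011. */
    #include <linux/mutex.h>
    #include <media/videobuf2-v4l2.h>
    #include <media/videobuf2-vmalloc.h>

    struct foo_dev {
        struct vb2_queue queue;
        struct mutex lock;
        size_t image_size;
    };

    static int foo_queue_setup(struct vb2_queue *vq, unsigned int *nbuffers,
                               unsigned int *nplanes, unsigned int sizes[],
                               struct device *alloc_devs[])
    {
        struct foo_dev *dev = vb2_get_drv_priv(vq);

        *nplanes = 1;
        sizes[0] = dev->image_size;     /* one plane per frame */
        return 0;
    }

    static void foo_buf_queue(struct vb2_buffer *vb)
    {
        /* Hand the buffer to the (hypothetical) capture engine here;
         * it is completed later with vb2_buffer_done(). */
    }

    static const struct vb2_ops foo_qops = {
        .queue_setup = foo_queue_setup,
        .buf_queue   = foo_buf_queue,
    };

    static int foo_init_queue(struct foo_dev *dev)
    {
        struct vb2_queue *q = &dev->queue;

        q->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        q->io_modes = VB2_MMAP | VB2_USERPTR | VB2_READ;
        q->drv_priv = dev;
        q->buf_struct_size = sizeof(struct vb2_v4l2_buffer);
        q->ops = &foo_qops;
        q->mem_ops = &vb2_vmalloc_memops;
        q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
        q->lock = &dev->lock;
        return vb2_queue_init(q);
    }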

Video overlay

Only supported by bttv and saa7134/7146. Not supported by newer hardware.

videobuf1 supports overlay mode by passing calls directly to the driver; videobuf2 could do the same. (Not needed, as overlay support will be deprecated.)

V4L2 overlay support requires userspace to pass a pointer to physical memory to the V4L2 driver. For security reasons this requires root permissions. A userspace setuid helper is thus required.

The original overlay API proposal involved querying the video adapter driver for a buffer ID and passing the ID to the V4L2 driver. The new buffers sharing API uses a similar approach. V4L2 overlay support should be deprecated in favour of the buffers sharing API.

One drawback is that applications would need to constantly queue/dequeue buffers. A possible solution is to add a buffer flag to tell drivers to constantly overwrite the same buffer over and over again if no new buffer is queued.

Can we migrate to vb2 with no OVERLAY support?

The existing overlay API needs to be supported until the buffers sharing API is in place. To convert bttv/saa7134/saa7146 to videobuf2 we thus need a working solution for overlay support. videobuf2 should not be touched if possible; the code should instead be put in the drivers. Drivers that don't support overlays will be ported to videobuf2 first, and the decision on how (and if) to support overlays will be postponed until bttv/saa7134/saa7146 get ported to videobuf2.

Actions

  • Review new buffers sharing RFCs (developed by Sumit Semwal from TI/Linaro), make sure they cover our overlay use cases
  • Implement support for buffer sharing in videobuf2 (a user-space sketch of the export flow that eventually landed follows the driver list below)
  • Do not implement overlay support; deprecate the overlay API once buffer sharing is available
  • Add support for DVB
    • cx88 is probably a good place to start, as it uses all VB1 stuff except for OVERLAY
  • Convert individual drivers
    • uvcvideo (Laurent)
    • DaVinci (Hans Verkuil, requires CMA)
    • cx23885 / SAA7164 +videobuf-dvb (stoth)
    • gspca (Hans de Goede)
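
The buffer sharing work mentioned above later landed in mainline as dma-buf, exposed to V4L2 applications through VIDIOC_EXPBUF and V4L2_MEMORY_DMABUF. Both postdate the workshop, so the following user-space sketch only illustrates the direction; the buffer is assumed to have been allocated with VIDIOC_REQBUFS beforehand:

    /* Sketch: export an already-allocated V4L2 capture buffer as a
     * dma-buf file descriptor that another device (e.g. a GPU or display
     * driver) can import.  VIDIOC_EXPBUF and V4L2_MEMORY_DMABUF postdate
     * these minutes. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int export_capture_buffer(int video_fd, unsigned int index)
    {
        struct v4l2_exportbuffer expbuf;

        memset(&expbuf, 0, sizeof(expbuf));
        expbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        expbuf.index = index;
        expbuf.flags = O_CLOEXEC;

        if (ioctl(video_fd, VIDIOC_EXPBUF, &expbuf) < 0) {
            perror("VIDIOC_EXPBUF");
            return -1;
        }
        return expbuf.fd;   /* dma-buf fd, importable by other drivers */
    }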

Miscellaneous

Multiple contiguous planes and padding

Use case: allocating an NV12 video buffer on the capture device side and passing it to a GPU that requires the Y and CbCr planes to be contiguous in memory with a GPU-specific amount of padding. Strictly speaking this is not the NV12 format anymore; the standard NV12 format is defined without any padding between planes.

Possible solutions:
  • Allocate a multi-plane buffer on the GPU side and pass it to the video capture driver
  • Extend the multiplane API
  • Use the CREATE_BUFS ioctl
  • Add additional formats
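
For comparison, the existing multi-plane API already covers the case where the two planes live in separate buffers; a short user-space sketch follows, assuming a capture node that supports V4L2_PIX_FMT_NV12M (resolution and device are placeholders). The contiguous-with-padding layout discussed above is exactly what this cannot express:

    /* Sketch: negotiate NV12 with separate Y and CbCr planes through the
     * multi-planar API.  Resolution and format are placeholders. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int set_nv12m_format(int fd)
    {
        struct v4l2_format fmt;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        fmt.fmt.pix_mp.width = 1280;
        fmt.fmt.pix_mp.height = 720;
        fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12M; /* planes in separate buffers */
        fmt.fmt.pix_mp.field = V4L2_FIELD_NONE;

        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) {
            perror("VIDIOC_S_FMT");
            return -1;
        }

        /* The driver reports per-plane sizes; a GPU that wants both planes
         * in one contiguous allocation with padding cannot be described. */
        printf("planes: %u, plane 0: %u bytes, plane 1: %u bytes\n",
               fmt.fmt.pix_mp.num_planes,
               fmt.fmt.pix_mp.plane_fmt[0].sizeimage,
               fmt.fmt.pix_mp.plane_fmt[1].sizeimage);
        return 0;
    }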

V4L2 Ambiguities

  • G/S_FREQUENCY has no way to tell whether the call is for a modulator or tuner. Some drivers implement both on the same video node.

Solution: Create two device nodes, one for the modulator and one for the tuner.

  • G/S_MODULATOR has no type, making it impossible to support TV modulators (only radio modulators are supported).

Solution: Only address this when a driver actually requires it.

  • V4L2_FBUF_FLAG_PRIMARY and V4L2_FBUF_FLAG_OVERLAY are poorly defined.

Solution: Define the PRIMARY flag as an indication of destructive/non-destructive overlays and the OVERLAY flag as "auto full-screen".

  • The V4L2_FBUF_CAP_SRC_CHROMAKEY definition is wrong.

Solution: Fix it. It should be the opposite of V4L2_FBUF_CAP_CHROMAKEY.

  • RGB pixel formats: endianness issues in the spec.

Solution: Move the old table to a historical section of the spec. Clarify the names by adding aliases with _LE and _BE suffixes.

  • Duplicate exposure controls (V4L2_CID_EXPOSURE and V4L2_CID_EXPOSURE_ABSOLUTE). Note that V4L2_CID_EXPOSURE has no defined unit.

Solution: Drop the V4L2_CID_EXPOSURE control from the user class, keep V4L2_CID_EXPOSURE_ABSOLUTE only.

  • Control units are not accessible to applications.

Solution: Add a new VIDIOC_QUERY_EXT_CTRL ioctl. This needs to be researched and discussed first (RFC on the list).

  • Does VIDIOC_QUERYCAP return capabilities for the whole device or for the device node?

Solution: Add a per-node (local) capabilities field to the querycap structure, and clarify in the spec that the existing capabilities field describes the whole device. Add a global capability flag to tell userspace that the local capabilities field is set by the driver.
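
This is essentially how the proposal later landed in mainline, as the device_caps field plus the V4L2_CAP_DEVICE_CAPS flag; those names postdate the minutes. A minimal sketch of how an application would use it:

    /* Sketch: prefer the per-node capabilities when the driver advertises
     * them, otherwise fall back to the global field.  Field and flag names
     * are the ones that were eventually merged, not from the minutes. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    unsigned int node_capabilities(int fd)
    {
        struct v4l2_capability cap;

        memset(&cap, 0, sizeof(cap));
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {
            perror("VIDIOC_QUERYCAP");
            return 0;
        }

        /* device_caps describes this device node only; capabilities
         * describes the physical device as a whole. */
        if (cap.capabilities & V4L2_CAP_DEVICE_CAPS)
            return cap.device_caps;
        return cap.capabilities;
    }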

  • Internal naming: video_device is the only structure not to have a v4l2_ prefix and should be renamed. Unfortunately v4l2_device is already taken.

Solution: No agreement yet.

Snapshot mode

Use case(s)

  • Digital camera with a "viewfinder" (screen that displays the live image) and a snapshot button.

Hardware sensor capabilities of interest

  • Multiple hardware contexts

This might not bring any substantial gain. We need numbers to take a decision whether to implement a new API (which would then be exposed through subdev nodes, and handle multiple contexts in a clean way, without hacking the support through our existing API) or postpone the solution for later.

  • External trigger, external shutter control
  • Fixed number of frames to capture

Technical solutions

  • VIDIOC_CREATE_BUFS/VIDIOC_PREPARE_BUF ioctls for video buffer pre-allocation and pre-queueing (see the sketch below).
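
A minimal user-space sketch of how an application could pre-allocate extra snapshot buffers and prepare one of them ahead of the trigger; the buffer count, the snapshot format and the assumption that the driver implements these ioctls are all illustrative:

    /* Sketch: pre-allocate additional buffers for snapshot capture and
     * prepare the first of them ahead of time.  Counts and format are
     * placeholders. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int preallocate_snapshot_buffers(int fd, const struct v4l2_format *snap_fmt)
    {
        struct v4l2_create_buffers create;
        struct v4l2_buffer buf;

        memset(&create, 0, sizeof(create));
        create.count = 4;                  /* extra buffers for the burst */
        create.memory = V4L2_MEMORY_MMAP;
        create.format = *snap_fmt;         /* snapshot resolution/format */
        if (ioctl(fd, VIDIOC_CREATE_BUFS, &create) < 0) {
            perror("VIDIOC_CREATE_BUFS");
            return -1;
        }

        memset(&buf, 0, sizeof(buf));
        buf.type = snap_fmt->type;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = create.index;          /* first newly created buffer */
        if (ioctl(fd, VIDIOC_PREPARE_BUF, &buf) < 0) {
            perror("VIDIOC_PREPARE_BUF");
            return -1;
        }
        return 0;
    }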

Actions

  • Draft an API for external trigger/shutter control, based on V4L2 controls.
  • Measure the time it takes to write a context to verify whether there would be a substantial gain from using multiple contexts.

DVB: Multiple frontend support

Current status

  • Drivers implementing the new API expose one frontend; the old API exposes this as two devices
  • More hardware supporting several frontends on a single chip is appearing
  • wscan falls back to the DVBv3 API if the DVBv5 API is not available

What is the proper way of exposing multiple frontends nowadays?

Discussion
  • Mauro: if a frontend supports both DVB-T and DVB-C, it should be exposed as one which supports both.
  • Michael: for applications it'd be easiest to show two frontends; one which can do DVB-T and the other which does DVB-C.
  • Mauro: not scalable. The number of frontends will grow quickly with DVB-S, DVB-S2, DVB-S2 Turbo, and FE_CAN_2G_MODULATION is too limited.
  • One piece of silicon should expose one frontend; hardware which has three frontends should also show three frontends to the user space.
  • The current API does not allow enumerating the available TV standards (see the sketch after this list).
  • Many drivers still only support DVBv3 API, not v5. The reason for this is that v5 does not support all the features v3 does. Only a few drivers would really need v3, however, and the rest would be fine implementing v5.
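
The enumeration gap was later addressed in DVBv5 by the DTV_ENUM_DELSYS property; that name (and the SYS_DVBC_ANNEX_A alias used here) postdates the workshop, so the sketch below only illustrates the direction of a single frontend exposing several delivery systems:

    /* Sketch: ask a DVBv5 frontend which delivery systems it supports,
     * then select one of them on the same frontend.  Device path is a
     * placeholder; DTV_ENUM_DELSYS postdates these minutes. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/dvb/frontend.h>

    int main(void)
    {
        struct dtv_property prop[1];
        struct dtv_properties cmdseq = { .num = 1, .props = prop };
        int fd = open("/dev/dvb/adapter0/frontend0", O_RDWR);
        unsigned int i;

        if (fd < 0)
            return 1;

        memset(prop, 0, sizeof(prop));
        prop[0].cmd = DTV_ENUM_DELSYS;         /* what can this FE do? */
        if (ioctl(fd, FE_GET_PROPERTY, &cmdseq) < 0) {
            perror("FE_GET_PROPERTY");
            return 1;
        }
        for (i = 0; i < prop[0].u.buffer.len; i++)
            printf("supported delivery system: %u\n",
                   (unsigned int)prop[0].u.buffer.data[i]);

        memset(prop, 0, sizeof(prop));
        prop[0].cmd = DTV_DELIVERY_SYSTEM;     /* pick one, e.g. DVB-C */
        prop[0].u.data = SYS_DVBC_ANNEX_A;
        if (ioctl(fd, FE_SET_PROPERTY, &cmdseq) < 0)
            perror("FE_SET_PROPERTY");

        return 0;
    }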

Conclusion
  • The current status is far from ideal, and an agreement on a good solution was not reached. Further discussion on the topic is needed.

Actions
  • Steve: DiSEqC support in DVBv5
  • Steve: capabilities in DVBv5

DVB: CI encoders and how to remove the current CI drivers from staging

CI == Common Interface.

Discussion
  • Steve: the CI encoders do not fit the existing DVB APIs.
  • Different options exist in data routing. The data may be written to system memory in between processing stages.

Possible solutions:
  • Mauro: The Media controller interface could be used to configure the pipeline.
  • Naveen: New scrambling block could be used to represent CI modules. (This appears to be the way forward.)

DVB: New standards --- Michael Krufky

Hauppauge has a new USB stick which does ATSC-MH. This device requires further software processing on its output data for it to be useful for the user. The current library implements the de-scrambling in the user space.

Should the descrambling be implemented in the kernel instead? That would be more useful for the user, but such processing is not necessarily best done in the kernel.

In V4L2 there is a comparable solution: libv4l.

The hardware provides its own encapsulation inside which UDP packets may be found.  A separate data stream called FIC is provided alongside the UDP packets. Should the UDP packets be handled by the networking stack since they are network packets? For receiving a compressed stream, perhaps no, but the content could theoretically be anything.

tun (as in tuntap) may be used to inject the packets into a virtual network interface, with the raw data provided separately. This approach has a problem: configuring the tun device requires CAP_NET_ADMIN, which typically means root access. This is seen as conflicting with the intent of using the system as a regular user.
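
A minimal sketch of what setting up such a virtual interface involves, and where the privilege requirement comes from; the interface name is a placeholder:

    /* Sketch: create a tun interface into which a library could inject
     * the received UDP/IP packets.  The TUNSETIFF ioctl fails without
     * CAP_NET_ADMIN, which is the privilege problem discussed above.
     * The interface name is made up. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/if.h>
    #include <linux/if_tun.h>

    int open_tun(void)
    {
        struct ifreq ifr;
        int fd = open("/dev/net/tun", O_RDWR);

        if (fd < 0) {
            perror("/dev/net/tun");
            return -1;
        }

        memset(&ifr, 0, sizeof(ifr));
        ifr.ifr_flags = IFF_TUN | IFF_NO_PI;    /* raw IP packets, no extra header */
        strncpy(ifr.ifr_name, "atscmh0", IFNAMSIZ - 1);

        if (ioctl(fd, TUNSETIFF, &ifr) < 0) {   /* needs CAP_NET_ADMIN */
            perror("TUNSETIFF");
            return -1;
        }
        /* Packets written to fd now appear on the "atscmh0" interface. */
        return fd;
    }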

It might be better to handle the UDP packets in the library instead after all.

No hardware other than LG's exists yet, so we do not know what hardware manufacturers intend to do in the future: will they keep providing such solutions, or go back to something more traditional?

Without that knowledge, the library approach should proceed. When more information is available about the devices that hardware manufacturers make, the decision can be re-evaluated; it might make sense to move this to the kernel after all.

Proposal
  • Implement a library for the LG ATSC-MH devices
  • "LG_MH_RAW_PAYLOAD" is provided by the driver to the user space --- the library
  • Library does everything:
    • provide interface to apps
    • scan
    • tuning
    • receive payload
  • Library talks to the kernel using DVBv5
  • The interface provided by the library is different than that provided by the kernel
    • If the interface was in the kernel, the data would be available through the TCP stack; the device would provide its own network interface

On control classes and low level sensor and other controls

There is a need to control certain aspects of embedded hardware, such as sensor settings (blanking, digital gain, black level clamping, test patterns and per-component gains), but no control classes for such purposes exist at the moment.

We have the high-level camera class, but it is not seen as suitable for this kind of control.

Should we classify controls based on what their function is, or on where they are implemented? The common agreement appears to be that function is the answer.

The controls mentioned above are fairly low level, so hiding them from the regular user should be the default. Hiding should be done in user space; the kernel should still expose all the controls.

Some of these controls affect the image capture process itself, while others affect the processing of the data, which may be done elsewhere in the pipeline. Whether hiding or showing a control is desired depends on the application: a regular application likely would not wish to see them, while an application written for a specific embedded system would need these controls to function. The decision must be taken in user space.

Currently these controls are seen as best placed in a separate control class, V4L2_CID_CLASS_LOW_LEVEL, which would hold all the low-level controls whether they relate to the image capture process or to image processing.
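
Hiding in user space presupposes that an application can already discover which class a control belongs to; a short sketch of enumerating controls and grouping them by class follows. Note that V4L2_CID_CLASS_LOW_LEVEL is only the name proposed in the discussion and does not exist in the headers, so the sketch reports the class extracted from each control ID instead:

    /* Sketch: enumerate all controls of a device and report their class,
     * so that a user-space policy can hide classes it considers too
     * low-level for a regular application. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    void list_controls(int fd)
    {
        struct v4l2_queryctrl qc;

        memset(&qc, 0, sizeof(qc));
        qc.id = V4L2_CTRL_FLAG_NEXT_CTRL;

        while (ioctl(fd, VIDIOC_QUERYCTRL, &qc) == 0) {
            unsigned long class = V4L2_CTRL_ID2CLASS(qc.id);

            if (!(qc.flags & V4L2_CTRL_FLAG_DISABLED))
                printf("class 0x%08lx: %s\n", class, (const char *)qc.name);

            /* Ask for the next control, in any class. */
            qc.id |= V4L2_CTRL_FLAG_NEXT_CTRL;
        }
    }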


Day 2 discussions


V4L2/DVB on desktop vs. embedded systems
Goal: provide a uniform user-space API on both desktop and embedded (MC) systems

Reasons why providing a default pipeline is difficult:
1. after an MC-aware application has run, a standard V4L2 application might not work anymore
2. a hardware-specific library plugin is hard to maintain, is largely proprietary, is based on a vendor-local (non-mainline) kernel branch, and would only support vendor (Nokia) devices, not generic (OMAP3) systems

Example pipeline:
Sensor: [pixel array -> binner] -> ISP [CSI-2 -> format conversion -> scaler] -> memory
with possible input from and output to memory at multiple ISP stages

Currently any configuration applied to a subdevice remains local and does not get propagated to other (connected) entities

Video drivers implementing the V4L2 API have to configure the complete video pipeline upon V4L2 ioctl()s.
Currently there is no way to distinguish between video devices belonging to standard V4L2 devices and MC devices

Among "regular" applications there are those using libv4l and those not using it. libv4l can also be preloaded for applications not using it directly. Alternatively, media-ctl can be used to pre-configure the pipeline.

"Low-level" applications will not need libv4l; they use the MC API directly

MC drivers have to be testable, i.e. driver authors also have to provide an open-source plugin doing at least a basic pipeline setup. For advanced features vendors can implement closed-source plugins. Device manufacturers should additionally provide device-specific plugins for maximum flexibility, but those plugins are not compulsory.
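
For the "low-level" path, the MC API itself is a small set of ioctls on the media device node; a minimal sketch enumerating the entities of a pipeline, which is the first step media-ctl or a plugin performs before configuring links and formats (device path is a placeholder):

    /* Sketch: enumerate the entities of a media controller device. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/media.h>

    int main(void)
    {
        struct media_entity_desc entity;
        int fd = open("/dev/media0", O_RDWR);

        if (fd < 0)
            return 1;

        memset(&entity, 0, sizeof(entity));
        entity.id = MEDIA_ENT_ID_FLAG_NEXT;     /* start at the first entity */

        while (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &entity) == 0) {
            printf("entity %u: %s (%u pads, %u links)\n",
                   entity.id, entity.name, entity.pads, entity.links);
            entity.id |= MEDIA_ENT_ID_FLAG_NEXT;
        }
        return 0;
    }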

In summary, the user-space configuration consists of the following components:
* libv4l - the generic library
* SoC-specific plugin - open-source plugin for the specific SoC
* device-specific plugin - possibly closed-source plugin, specific to the device (optional)
* libvioctl - an auxiliary library using the same plugins as libv4l (SoC- and/or device-specific), but specifically designed to export all the advanced configuration functions in a generic way
* media-ctl - an application based on libvioctl, used to set up the pipeline before running a generic V4L2 application that does not use libv4l

Mauro: This is not V4L2, because it doesn't propagate S_FMT configuration. Possibilities: remove S_FMT, rename the ioctl(), or rename device nodes from videoN
Laurent: Currently omap3isp creates 7 /dev/videoN nodes, 4 of which are CAPTURE devices; they correspond to 7 DMA engines. The problem is the absence of a 1-to-1 relationship between actual data sources and device nodes; videoN node enumeration is the problem. Many configurations already do not work without libv4l, so shouldn't it be made a requirement for all V4L2 applications?
Possible solutions:
* separate video nodes - one V4L2-compliant for legacy applications, supporting default configurations
* do not return the CAPTURE capability, possibly add a new STREAM_MANAGEMENT capability
Proposed resolution: define profiles, e.g. a "streaming" profile for MC devices. An RFC should be created by Laurent and submitted

Agreement: the low-level video nodes will not set the CAPTURE / OUTPUT capability bits in querycap. Regular applications know from this that the video device node is a low-level node which only provides a subset of the V4L2 API functionality.

libv4l should gain an additional library to enumerate video devices

STMicro to V4L2, DVB / MC

Complex topology, consisting of video input, processing and output
Applications include set-top boxes, TV-sets,...
Input possibilities include tuners, analog input, uncompressed data, HDMI-RX
Input can be passed through a transport engine, possibly a security block
Followed by a Stream Engine, eventually landing on a display device
The driver infrastructure is presented in form of an object model
MC should be used to configure the pipelines - video and audio
Typical data processing paths will pipe data in the kernel from input to output without going to the user-space. Simultaneous processing of several data streams is a typical use-case too.
Support for many input interfaces - both digital (HDMI) and analog (SCART) - will require API extensions; RFCs will be needed
Example configuration proposed for an MC implementation for DVB:
Front-End -> Demux -> Decoder (* 2) -> ... (* 2) -> Output
MC will have to be extended to support dynamic number of pads, since that is a requirement to support demuxes. Re-routing data should be possible without breaking the stream. E.g., switching a demux to a different language.
Q: should an MC-generic ioctl be defined to configure data format on a pad, similar to S_FMT, which would be passed on by the MC core to the respective subsystem?
Q: should a request to start the pipeline, sent to one entity, increment the use-count of all entities in the pipeline and start them all?
