Development: The DVB Decoder Challenge


Introduction

When implementing a hardware or software MPEG2 decoder you will encounter several challenges. Most of them are not too hard to solve, but they are very DVB-specific and can be quite annoying if not handled properly.

So this page tries to list them, to discuss common approaches and to outline elegant solutions.


The STC sync problem

Whenever a client has to decode a live stream from a server it has to adjust its own system time clock (STC) to that of the server, for several reasons:

  • Transmitted data is bursty, so the decoder has to display content with a little delay. This delay should be minimized, otherwise you will always hear your neighbors celebrating the soccer championship goal three seconds before you can see it.
  • The server clock may run continuously faster or slower than the host clock, so the time difference may grow over time.

The solution is the PCR (Program Clock Reference), a special clock reference value transmitted every few MPEG2 TS packets in the TS packet header. This reference allows the client to synchronize its own clock to that of the server. Hardware MPEG2 decoders use voltage-controlled or numerically controlled oscillators for this purpose.
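
For illustration, here is a minimal C sketch of extracting the PCR from a 188-byte TS packet. The bit layout follows ISO/IEC 13818-1; the function name and the minimal error handling are made up for this example.

  #include <stdint.h>

  /* Returns the PCR in 27 MHz units, or -1 if this packet carries none. */
  int64_t ts_packet_pcr(const uint8_t pkt[188])
  {
      if (pkt[0] != 0x47)                  /* TS sync byte missing */
          return -1;
      if (!(pkt[3] & 0x20))                /* no adaptation field */
          return -1;
      if (pkt[4] < 7 || !(pkt[5] & 0x10))  /* too short, or PCR_flag unset */
          return -1;

      /* 33-bit PCR base (90 kHz units), spread over bytes 6..10 */
      int64_t base = ((int64_t)pkt[6] << 25) | ((int64_t)pkt[7] << 17) |
                     ((int64_t)pkt[8] << 9)  | ((int64_t)pkt[9] << 1) |
                     (pkt[10] >> 7);
      /* 9-bit PCR extension (27 MHz units) in bytes 10..11 */
      int ext = ((pkt[10] & 0x01) << 8) | pkt[11];

      return base * 300 + ext;             /* full 27 MHz clock value */
  }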

Software decoders followed different approaches in the past:

  • Naive implementations just watch the buffer fill level and drop or delay frames.
  • VLC low-pass filters the incoming clock references and uses a linear approximation algorithm to approach the server clock reference. This works pretty well, but unfortunately all the timing code is very VLC-specific and not easy to reuse.
  • An approach that is theoretically a little harder to understand, but very efficient and trivial to implement, uses Kalman filtering (see the sketch after this list).
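
To illustrate the last approach, here is a minimal sketch of an alpha-beta filter, the fixed-gain steady-state form of a Kalman filter, tracking the offset and drift between the server clock and the local clock. The struct, the function names and the gain values are made up for this example; a real decoder would tune the gains against its PCR jitter.

  #include <stdint.h>

  typedef struct {
      double  offset;     /* estimated server-minus-local offset, 27 MHz ticks */
      double  drift;      /* estimated drift in ticks per local tick */
      int64_t last_local; /* local clock at the previous PCR arrival */
      int     primed;
  } stc_filter;

  void stc_update(stc_filter *f, int64_t pcr, int64_t local)
  {
      const double alpha = 0.05, beta = 0.001;   /* smoothing gains */

      if (!f->primed) {
          f->offset = (double)(pcr - local);
          f->drift = 0.0;
          f->last_local = local;
          f->primed = 1;
          return;
      }

      double dt = (double)(local - f->last_local);
      if (dt <= 0.0)
          return;                                /* ignore bogus timing */

      double predicted = f->offset + f->drift * dt;          /* predict */
      double residual = (double)(pcr - local) - predicted;
      f->offset = predicted + alpha * residual;              /* correct */
      f->drift += beta * residual / dt;
      f->last_local = local;
  }

  /* Estimate of the server STC for a given local clock value. */
  int64_t stc_now(const stc_filter *f, int64_t local)
  {
      return local + (int64_t)(f->offset +
                               f->drift * (double)(local - f->last_local));
  }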


Audio/Video Sync

You should ensure that audio and video frames are presented to the user at the system clock time encoded in the frame's PTS (Presentation Time Stamp).

The STC should be synchronized regularly to the server clock using the PCR. For playback of recordings you can use the host clock, the video frame sync or the audio crystal as the clock reference; the sketch below shows the PTS check itself.
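
A minimal sketch of that PTS check, with made-up thresholds (PTS and STC in 90 kHz units):

  #include <stdint.h>

  enum av_action { AV_PRESENT, AV_WAIT, AV_DROP };

  enum av_action schedule_frame(int64_t pts, int64_t stc_90khz)
  {
      const int64_t early = 90 * 2;   /* present if within 2 ms early  */
      const int64_t late  = 90 * 40;  /* drop if more than 40 ms late  */

      int64_t diff = pts - stc_90khz; /* > 0: frame is early */

      if (diff > early)
          return AV_WAIT;             /* too early: sleep and retry  */
      if (diff < -late)
          return AV_DROP;             /* hopelessly late: skip frame */
      return AV_PRESENT;              /* close enough: display now   */
  }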


Screen/Decoder Sync Aliasing

Unless the display refresh rate is at least twice as high as the frame rate of the displayed video you will get aliasing artifacts (jerky video due to dropped or doubled frames). For example, displaying 50 Hz material on a 60 Hz screen shows every fifth frame twice, which is visible as 10 Hz judder. See Wikipedia:Nyquist and Wikipedia:Nyquist-Shannon_sampling_theorem for an overview of sampling theory and a short explanation of why aliasing artifacts occur when you sample at rates below the Nyquist frequency.


Audio Clock pitching

Since the sample rate of most sound cards cannot be adjusted smoothly during playback, you can instead resample the audio signal in software before sending it to the sound card. Naive nearest-neighbor or sample-dropping approaches are trivial to implement, and even linear interpolation costs only a few lines of code (see the sketch below). Most audio libraries have resampling routines built in, and standalone resampling libraries are also available on the net.
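
Here is a minimal sketch of such a linear-interpolation resampler for mono 16-bit samples; the function name and signature are made up for this example.

  #include <stddef.h>
  #include <stdint.h>

  /* 'ratio' is output rate / input rate.
   * Returns the number of output samples written. */
  size_t resample_linear(const int16_t *in, size_t n_in,
                         int16_t *out, size_t max_out, double ratio)
  {
      size_t n_out = 0;
      double pos = 0.0;             /* read position in input samples */
      double step = 1.0 / ratio;    /* input samples per output sample */

      while (n_out < max_out) {
          size_t i = (size_t)pos;
          if (i + 1 >= n_in)
              break;
          double frac = pos - (double)i;
          /* blend the two neighboring input samples */
          out[n_out++] = (int16_t)((1.0 - frac) * in[i] + frac * in[i + 1]);
          pos += step;
      }
      return n_out;
  }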

ISO/IEC 13818-1 allows a maximum clock rate change of 0.075 Hz/s in order to avoid audible artifacts in audio playback.


Deinterlacing

Plenty of deinterlacing algorithms are known; even simple blend filters (like the one implemented in ffmpeg's libavcodec) can perform quite well. A more serious problem is that many deinterlacers are top-field-only (or bottom-field-only) and degrade the frame rate from 50 Hz (interlaced) to 25 Hz (progressive). This may look fine and cinema-like when watching Hollywood movies, but it makes scrolling text (e.g. credits and news tickers) jerky and hardly readable.

The correct approach to preserve full temporal resolution is to deinterlace both fields, the even and the odd ones (each blended with the in-between fields from the previous timeframe).

In order to use ffmpeg's deinterlacer you would need to implement a matching deinterlace_top_field() function in addition to the existing deinterlace_bottom_field().
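
Independent of any particular library, here is a minimal CPU sketch of the both-fields idea for one luma plane. Called once per field it produces one progressive frame per field and thus keeps the full 50 Hz motion; the 50/50 blend weights are just an example.

  #include <stdint.h>

  void deinterlace_blend_field(uint8_t *dst, const uint8_t *src,
                               int width, int height, int stride,
                               int top_field /* 1 = top, 0 = bottom */)
  {
      for (int y = 0; y < height; y++) {
          const uint8_t *cur = src + y * stride;
          uint8_t *out = dst + y * stride;

          if ((y & 1) == (top_field ? 0 : 1)) {
              /* line belongs to the current field: copy it unchanged */
              for (int x = 0; x < width; x++)
                  out[x] = cur[x];
          } else {
              /* line from the other (older) field: blend it with the
               * interpolation of the adjacent current-field lines */
              const uint8_t *above = (y > 0) ? cur - stride : cur;
              const uint8_t *below = (y < height - 1) ? cur + stride : cur;
              for (int x = 0; x < width; x++) {
                  int interp = (above[x] + below[x] + 1) >> 1;
                  out[x] = (uint8_t)((cur[x] + interp + 1) >> 1);
              }
          }
      }
  }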

When using OpenGL the deinterlacer can be implemented completely on the graphics card. Enable the multitexturing engines: use one texture unit for the previous frame, one for the blend grid (where e.g. every second line has alpha=0.8), one for the new frame and one for the second blend grid to select the appropriate fields of the new frame. Be sure to offset the texture coordinates so that the correct field from the previous frame shines through the grid lines. If the graphics card does not have enough texture units available you can do the work in multiple passes.


Downscaling

Upscaling is usually simple. Downscaling by factors below 0.5, which you need e.g. when displaying HDTV transmissions in small windows or on an SDTV screen, is a little harder: you have to use either convolution filters with very long taps or, better, downscale in several steps. The image pyramid approach works fine:

  • Downscale by a factor of 2 using linear interpolation filters until you reach a resolution less than twice the target resolution. Each step averages four neighboring pixels into a single pixel of the next smaller level.
  • Now scale, again using linear interpolation filters, down to the target resolution (this scale factor is somewhere in the range [0.5...1.0] and thus uncritical with respect to aliasing); see the sketch after this list.
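
A minimal sketch of the pyramid step for an 8-bit grayscale plane; the final linear-interpolation pass to the exact target size is omitted, and all names are made up for this example.

  #include <stdint.h>

  /* One pyramid level: average each 2x2 block into one output pixel. */
  static void halve(const uint8_t *src, int w, int h, uint8_t *dst)
  {
      for (int y = 0; y < h / 2; y++)
          for (int x = 0; x < w / 2; x++) {
              const uint8_t *p = src + 2 * y * w + 2 * x;
              dst[y * (w / 2) + x] =
                  (uint8_t)((p[0] + p[1] + p[w] + p[w + 1] + 2) >> 2);
          }
  }

  /* Halve until the width is less than twice the target width.
   * Returns the remaining width; the leftover factor is in [0.5...1.0]. */
  int pyramid_downscale(uint8_t *buf, int w, int h, int target_w)
  {
      while (w >= 2 * target_w && w >= 2 && h >= 2) {
          halve(buf, w, h, buf);  /* in place is safe: writes trail reads */
          w /= 2;
          h /= 2;
      }
      return w;
  }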

This algorithm can be implemented completely on the graphics card: simply use a render-to-texture approach in OpenGL until you reach the last-but-one level, then render that texture into the framebuffer.


Color Correction, the Gamma Question

Computer monitors and video projectors have a different gamma curve than television screens, so you need to apply a proper correction curve to the display. Common graphics libraries like SDL and DirectFB provide an API to set up gamma color lookup tables (see the sketch below). This is not hard to do, but it has to be done correctly, otherwise you risk weak colors on the display.
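
A minimal sketch of building such a lookup table; the names are made up for this example.

  #include <math.h>
  #include <stdint.h>

  /* Build an 8-bit gamma correction table: out = in^(1/gamma). */
  void build_gamma_lut(uint8_t lut[256], double gamma)
  {
      for (int i = 0; i < 256; i++) {
          double v = pow(i / 255.0, 1.0 / gamma);
          lut[i] = (uint8_t)(v * 255.0 + 0.5);
      }
  }

With SDL 1.2, for example, you would widen these values to 16 bits per entry and upload them with SDL_SetGammaRamp().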