Hi list,
over the last few days I have made some interesting experiments with VGA cards that I now want to share with you.
goal
----
develop a budget card based VDR with PAL/RGB output and FF like output quality
problem
-------
as we all know, current VGA graphics output quality suffers from certain limitations. Graphics cards known so far operate at a fixed frame rate that is not properly synchronized with the stream. Fields or even whole frames therefore often do not appear at the output at the right time: some are doubled, others are lost, which finally leads to more or less jerky playback.
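Just to put a number on it (the 50.030 Hz below is an invented example, not a measurement): even a small mismatch between the stream's field rate and a free-running VGA refresh lets a whole field of error accumulate within half a minute or so.

/* toy calculation: how often a field slips for a given rate mismatch */
#include <stdio.h>

int main(void)
{
    double stream_hz = 50.000;  /* field rate delivered by the broadcaster     */
    double vga_hz    = 50.030;  /* free-running VGA vertical refresh (example) */

    /* a full field of phase error accumulates after 1/|f1 - f2| seconds */
    printf("one field dropped or doubled every %.1f seconds\n",
           1.0 / (vga_hz - stream_hz));
    return 0;
}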
To a certain degree you can work around this with software deinterlacing, at the cost of worse picture quality when playing interlaced material. CPU load is also considerably increased by that.
Until now it appeared to be a privilege of so-called full featured cards (expensive cards running proprietary firmware) to output true RGB PAL at a variable frame rate and thus to always stay fully in sync with the stream.
I've always been bothered by that and finally started to develop a few patches with the goal of overcoming these VGA graphics limitations.
solution
--------
Graphics cards basically are not designed for variable frame rates. Once you have set up their timing, they offer no means (such as registers) to synchronize the frame rate with an external timer. But that's exactly what is needed for the output signal to stay in sync with the frame rate delivered by xine-lib or other software decoders.
To extend/reduce the overall time between vertical retraces I first dynamically added/removed a few scanlines to/from the modeline, but with bad results: the picture was visibly jumping on the TV set.
After some further experimenting I finally found a way to fine-adjust the frame rate of my elderly Radeon card, this time without any bad side effects on the screen.
Just trimming the length of a few scanlines during the vertical retrace period does the trick.
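A rough calculation shows how much such a trim is worth. The timing numbers are those of the 720x576@50i modeline quoted later in this thread; the amount of trim is only an example, not what the patch actually uses:

/* effect of shortening a few retrace scanlines on the field rate */
#include <stdio.h>

int main(void)
{
    double dotclock = 14.0625e6;        /* pixel clock of a 720x576@50i modeline [Hz]    */
    int    htotal = 900, vtotal = 625;  /* total pixels per line / total lines per frame */
    int    lines = 5, pixels = 8;       /* example: shorten 5 retrace lines by 8 px each */

    double field = htotal * (vtotal / 2.0) / dotclock;  /* interlaced: 2 fields per frame */
    double trim  = (double)(lines * pixels) / dotclock;

    printf("nominal field rate: %.4f Hz\n", 1.0 / field);
    printf("trimmed field rate: %.4f Hz\n", 1.0 / (field - trim));
    return 0;
}

So a handful of shortened lines shifts the field rate by only a few millihertz per step - plenty to follow the stream, far too little to disturb the picture.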
I then tried to implement the new functionality with only minimal changes to my current VDR development system. The Radeon DRM driver is perfectly suited for that; I just had to add a few lines of code there.
I finally ended up with a small patch against the Radeon DRM driver and an even smaller one against xine-lib. The latter could also live directly in the Xserver. Please see the attachments for code samples.
When xine-lib calls PutImage() it checks whether to increase or decrease the Xserver's frame rate. This way, after a short adaption phase, xine-lib can place its PutImage() calls right in the middle between two adjacent vertical blanking intervals. This provides maximum immunity against jitter. And even better: no more frames/fields are lost due to drift between the stream and the graphics card frequency.
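Reduced to its core, the decision made at every PutImage() looks about like this (only a sketch - the real code in the patch talks to the DRM driver, the helper below just returns which way to trim):

/* decide how to trim the next frame, given where PutImage() landed
 * relative to the last vertical blank:
 *   +1 = lengthen the frame, -1 = shorten it, 0 = leave it alone     */
#define FIELD_PERIOD_US 20000   /* 50 Hz PAL field                        */
#define DEADBAND_US      1000   /* no correction when we are close enough */

int frame_trim_decision(long usec_since_last_vblank)
{
    long error = usec_since_last_vblank - FIELD_PERIOD_US / 2;

    if (error >  DEADBAND_US)
        return +1;  /* refresh runs fast relative to the stream: stretch frames */
    if (error < -DEADBAND_US)
        return -1;  /* refresh runs slow: shorten frames                        */
    return 0;
}

The deadband keeps the loop from hunting once PutImage() already lands near the middle of the interval.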
Because we no longer need any deinterlacing, we also get rid of all its disadvantages:
If driving a device with native interlaced input (e.g. a traditional TV set or a modern TFT with good RGB support) we no longer have any deinterlacing artifacts.
Since software decoders are now relieved of any CPU-intensive deinterlacing, we can build cheap budget-card based VDRs with slow CPUs.
Please find attached two small patches showing the basic idea, and a description of my test environment. The project is far from complete but even at this early stage of development it shows promising results.
It should give you a rough idea of how to recycle your old hardware into a smoothly running budget VDR with high quality RGB video output.
some suggestions what to do next:
- detection of initial field parity
- faster initial frame rate synchronisation after starting replay
- remove some hard coded constants (special dependencies on my system's timing)
Some more information about the project is also available here http://www.vdr-portal.de/board/thread.php?threadid=78480
Currently it's all based on Radeons but I'll try to port it to other types of VGA cards as well. There will be some updates in the near future. Stay tuned.
-Thomas
On Tue, 2008-07-22 at 18:37 +0200, Thomas Hilber wrote:
Hi list,
over the last few days I have made some interesting experiments with VGA cards that I now want to share with you.
Wow!
I can't support this project strongly enough - what a perfect idea!
In the inevitable shift towards HDTV and progressive scanning, I was becoming increasingly concerned that the countless hours of interlaced content would be forgotten in the scramble for new and shiny.
Indeed, my own VDR FF based system exists in a large desktop PC case only because of the enormous (and antiquated) FF card. This project leads the way for replacing it with something much more compact, since the card runs hot and it's only a matter of time before it dies.
Is it likely to work with any newer Radeons, or is it because the RV2xx series is the last to have useful full open source drivers? I have a couple of RV3xxs (Radeon X300 + X600 Pro) I'd love to try this with :)
Re: http://www.sput.nl/hardware/tv-x.html
I've read this page before, and I dearly love the 'Problems' section which I've reproduced verbatim here:
"Apparently, some hardware doesn't support interlaced mode. If you have sync problems, check the sync signal with an oscilloscope."
Yup, because everyone has one lying around ;)
In fact, my biggest problem with this project before now has been the manufacture of such an adapter - my soldering skills are beyond poor.
I don't suppose you'd be willing to make some VGA -> SCART hobby-boxes up for a suitable fee? :)
Cheers, Gavin.
On Tue, Jul 22, 2008 at 07:12:35PM +0100, Gavin Hamill wrote:
In the inevitable shift towards HDTV and progressive scanning, I was becoming increasingly concerned that the countless hours of interlaced content would be forgotten in the scramble for new and shiny.
not to forget that interlaced formats are still in use for HDTV. I think the basic idea behind my patch could be recycled for HDTV as well.
Indeed, my own VDR FF based system exists in a large desktop PC case only because of the enormous (and antiquated) FF card. This project leads the way for replacing it with something much more compact, since the card runs hot and it's only a matter of time before it dies.
that originally was one of my major motivations. I don't like a huge VDR box with a FF card in my living room. At least Radeons are also available in low profile format, as are some budget satellite cards.
I hope that one day we can also support some on-board graphics (like nVidia or Intel), which would allow tiny VDR boxes made from very common hardware.
Is it likely to work with any newer Radeons, or is it because the RV2xx series is the last to have useful full open source drivers? I have a couple of RV3xxs (Radeon X300 + X600 Pro) I'd love to try this with :)
the patch from above basically should run with all cards supported by the xf86-video-ati driver. Just have a look at one of the more recent man pages:
http://cgit.freedesktop.org/~agd5f/xf86-video-ati/tree/man/radeon.man?h=vsyn...
Unfortunately with Radeons we currently have 2 problems unsolved:
1. there appears to be a tiny bug in XV overlay scaling code which sometimes mixes even and odd fields at certain resolutions. A workaround to compensate for this is to scale the opposite way. This is done by xineliboutput option 'Fullscreen mode: no, Window height: 575 px' instead of 'Window height: 575 px' (as noted in my configuration example).
Overlay XV uses double buffering, eliminating any tearing effects. This works pretty well.
2. the other way to use the XV video extension is textured mode. This method shows very good results, no scaling problems at all. But the code is so new (a few weeks old) that there is not yet proper tearing protection for it.
So for demonstration purposes I still prefer the overlay rather than the textured XV adaptor.
"Apparently, some hardware doesn't support interlaced mode. If you have sync problems, check the sync signal with an oscilloscope."
but we are on the safe side. Radeons do support it:)
In fact, my biggest problem with this project before now has been the manufacture of such an adapter - my soldering skills are beyond poor.
just recycle a conventional VGA monitor cable. So you just have to fiddle with the SCART side of the cable.
I don't suppose you'd be willing to make some VGA -> SCART hobby-boxes up for a suitable fee? :)
at least this was not my primary intention:)
Cheers, Thomas
Hi!
First thing: A great idea!
Thomas Hilber schrieb:
not to forget that interlaced formats are still in use for HDTV. I think the basic idea behind my patch could be recycled for HDTV as well.
I have connected my VDR box to my TV via a DVI-to-HDMI cable, set the resolution to 1920x1080 and let the graphics card do the upscaling instead of the TV, because the quality looks IMHO better this way. But the same problem is still present here: the refresh rate of the graphics card is not bound to the field rate of the incoming TV signal, so I can either disable sync-to-vblank and have tearing artefacts, or enable it and have an unsteady framerate.
I wonder if your patch could be applied to a DVI/HDMI connection, too? It's a Radeon X850, currently with xf86-video-ati 6.6.3 and xorg-server 1.4.
Ciao
Martin
On Wed, Jul 23, 2008 at 12:12:46AM +0200, Martin Emrich wrote:
I have connected my VDR box to my TV via a DVI-to-HDMI cable, set the resolution to 1920x1080 and let the graphics card do the upscaling instead of the TV, because the quality looks IMHO better this way. But
ok. But if you do so you still have to keep deinterlacing in software. This is because any scaling in the Y dimension intermixes even/odd fields in the frame buffer, finally producing a totally messed-up VGA output signal.
Scaling in the X dimension, however, is always allowed - e.g. for switching between 4:3 and 16:9 formats on a 16:9 TV set.
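A tiny test program illustrates why (576 -> 1080 just matches your setup; any vertical scale factor other than 1:1 behaves the same way):

/* with vertical scaling an output row no longer keeps the field
 * parity of the source line it samples from                       */
#include <stdio.h>

int main(void)
{
    int src_h = 576, dst_h = 1080;
    int y;

    for (y = 0; y < dst_h; y++) {
        int src = y * src_h / dst_h;   /* nearest source line */
        if ((src & 1) != (y & 1)) {    /* even row would show an odd-field line (or vice versa) */
            printf("field parity breaks at output line %d\n", y);
            return 0;
        }
    }
    printf("field parity preserved everywhere\n");
    return 0;
}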
here still the same problem is present, the refresh rate of the graphics card is not bound to the field rate of the incoming TV signal, so I can either disable sync-to-vblank and have tearing artefacts or enable it and have an unsteady framerate.
right. Even if you still must use software deinterlacing for some reason, you benefit from the 'sync_fields' patch: you can then enable sync-to-vblank and the patch dynamically synchronizes the graphics card's vblanks with the TV signal's field updates, thus avoiding unsteady frame rates at the VGA/DVI/HDMI output.
I wonder if your patch could be applied to a DVI/HDMI connection, too? Its a Radeon X850 currently with xf86-video-ati 6.6.3 and xorg-server 1.4.
In your case the only prerequisite is support of your Radeon X850 by the Radeon DRM driver. DRM is normally shipped with the kernel, so this is a kernel/driver issue. But I don't expect problems here, though I have not tested the X850 myself yet.
-Thomas
Hi!
Thomas Hilber schrieb:
On Wed, Jul 23, 2008 at 12:12:46AM +0200, Martin Emrich wrote:
I have connected my VDR box to my TV via a DVI-to-HDMI cable, set the resolution to 1920x1080 and let the graphics card do the upscaling instead of the TV, because the quality looks IMHO better this way. But
ok. But if you do so you still have to keep deinterlacing in software. This is because any scaling in the Y dimension intermixes even/odd fields in the frame buffer, finally producing a totally messed-up VGA output signal.
Of course. As I also use other applications on the box (mplayer, photo viewing), neither reducing the resolution nor enabling interlacing (1080i) is desired.
Software deinterlacing is no problem, from time to time I experiment with all the interlacer options. (I wonder why there's no simple "TV simulator" that upmixes 50 fields/s to 50 frames/s just like a CRT TV?).
right. Even if you still must use software deinterlacing for some reason, you benefit from the 'sync_fields' patch: you can then enable sync-to-vblank and the patch dynamically synchronizes the graphics card's vblanks with the TV signal's field updates, thus avoiding unsteady frame rates at the VGA/DVI/HDMI output.
Ok. I'm really busy currently (but your project looked so cool that I just *had* to write an email to the list), but as soon as I get to it, I'll try to make it work.
Does anyone have a 1080p@50Hz modeline ready? Currently, I use the settings provided by the TV via EDID, and I guess it defaults to 60Hz :(
I wonder if your patch could be applied to a DVI/HDMI connection, too? Its a Radeon X850 currently with xf86-video-ati 6.6.3 and xorg-server 1.4.
In your case the only prerequisite is support of your Radeon X850 by the Radeon DRM driver. DRM is normally shipped with the kernel, so this is a kernel/driver issue. But I don't expect problems here, though I have not tested the X850 myself yet.
As the box runs a home-built netboot mini distro, I am quite flexible regarding kernel versions. As soon as I have some spare time (probably after I have finished my BA thesis :( ) I'll get to it...
Ciao
Martin
On 23 Jul 2008, at 23:06, Martin Emrich wrote:
(I wonder why there's no simple "TV simulator" that upmixes 50 fields/s to 50 frames/s just like a CRT TV?).
It's very hard to simulate this 'upmix'. A CRT TV actually moves the electron beam across the screen, and the phosphor stays illuminated for some time after being hit by the beam. This is very hard to simulate with a digital screen, which is either on or off, or has some slowness of its own which is different from how a CRT screen works.
The dscaler project has implemented some of the best deinterlacing algorithms, and most of the tvtime algorithms are (to my knowledge) based on dscaler source / ideas. See http://dscaler.org/ . dscaler basically is a deinterlace and display program that takes input from bt8x8 based capture cards.
Someone on that project had an idea to create a setup where the display hardware was synced to the input clock of the capture card, but I'm not sure if anything ever came out of that idea.
Hi!
Torgeir Veimo schrieb:
On 23 Jul 2008, at 23:06, Martin Emrich wrote:
(I wonder why there's no simple "TV simulator" that upmixes 50 fields/s to 50 frames/s just like a CRT TV?).
It's very hard to simulate this 'upmix'. A CRT TV actually moves the electron beam across the screen, and the phosphor stays illuminated for some time after being hit by the beam. This is very hard to simulate with a digital screen, which is either on or off, or has some slowness of its own which is different from how a CRT screen works.
I did not mean to actually simulate the brightness decay in the phosphors, just the points in time where the fields are presented.
Let's assume we have two frames to be played back, each consisting of two fields: {1,2} and {3,4}.
I don't know if it actually works this way, but as far as I know, playing back interlaced content at 25 frames/s on a progressive display looks like this:
11111                          33333
22222   ...1/25th sec. later:  44444
11111                          33333
22222                          44444
As field 3 is a 1/50th second "older" than field 4, there are jaggies in moving scenes.
What I am looking for would be this, with 50 frames/s:
11111              11111              33333              33333   .....
       1/50th s.   22222      1/50s   22222              44444
11111              11111              33333              33333   .....
                   22222              22222              44444
So each field is still being shown for 1/25th of a second, but for the "right" 1/25th second. The output then no longer serves 25fps but 50fps to XVideo, DirectFB or whatever.
All of this of course makes only sense for PAL content when the TV can do 50Hz, not 60Hz.
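In C the bookkeeping I have in mind would look roughly like this (just a sketch on a luma plane; the function name and arguments are invented for the example):

/* weave the newest field into a progressive output frame; the lines of
 * the opposite parity simply keep the previous field                    */
#include <string.h>

void weave_field(unsigned char *frame,        /* progressive output, height lines */
                 const unsigned char *field,  /* new field, height/2 lines        */
                 int width, int height,       /* frame geometry                   */
                 int top_field)               /* 1 = field carries the even lines */
{
    int y;

    for (y = top_field ? 0 : 1; y < height; y += 2)
        memcpy(frame + (size_t)y * width, field + (size_t)(y / 2) * width, width);

    /* display 'frame' now; call again in 1/50 s with the next field and
     * the parity flag flipped                                            */
}

Called once per incoming field this yields 50 output frames/s, and every field stays on screen for exactly 1/25 s at its proper time - which is all I mean by "TV simulator".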
The dscaler project has implemented some of the best deinterlacing algorithms, and most of the tvtime algorithms are (to my knowledge) based on dscaler source / ideas. See http://dscaler.org/ . dscaler basically is a deinterlace and display program that takes input from bt8x8 based capture cards.
I assume these are the "tvtime" deinterlacers in the libxineoutput plugin. I played around with them, but none of them resulted in a picture as sharp and contrasty as without any deinterlacer. So I have to choose between sharpness and clean motion, and even during EURO 2008 I chose the former.
Someone on that project had an idea to create a setup where the display hardware was synced to the input clock of the capture card, but I'm not sure if anything ever came out of that idea.
I also thought of that. One would then have to sync to the sound card's buffer, too, and remove/duplicate samples as necessary to keep the audio synchronized.
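The amounts involved are tiny, by the way - a back-of-the-envelope example (the 100 ppm clock offset is just an assumed figure):

/* samples to insert or drop per second when slaving audio to the video clock */
#include <stdio.h>

int main(void)
{
    double rate      = 48000.0;  /* sound card sample rate [Hz]           */
    double drift_ppm = 100.0;    /* assumed offset between the two clocks */

    printf("adjust by about %.1f samples per second\n", rate * drift_ppm / 1e6);
    return 0;
}

About 5 samples per second at 48 kHz - small enough that resampling it away should not be audible.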
BTW: How does libxineoutput synchronize? I noticed a slight AV desync growing over ca. 5 minutes, then the audio jumps once and the desync jumps back into the right position (digital output to AV receiver).
Ciao
Martin
I notice an AV desync after 5 minutes; it definitely happens when playback reaches an advertisement that was cut out, or when I jump to an advertisement. :(
the only way I could "fix" it was to re-encode the edited recording with 'mencoder -ovc copy -oac copy -of mpeg -mpegopts format=pes2 -o new/001.vdr old/001.vdr'
I would see frames being skipped when it reaches where the cut took place...
I'm using vdr 1.6.0_p1
Theunis
On Wed, Jul 23, 2008 at 9:38 PM, Theunis Potgieter theunis.potgieter@gmail.com wrote:
I notice a AV desync after 5 minutes, it definitely happens when it reaches an advertisement that was cut out, or when I jump to a advertisement. :(
I can confirm this also occurs with the latest version of vdr-xine.
Have you tried changing the audio.synchronization.av_sync_method in xine-config to be resample instead of metronom?
I think I found this to be better:
# method to sync audio and video
# { metronom feedback resample }, default: 0
audio.synchronization.av_sync_method:resample

# always resample to this rate (0 to disable)
# numeric, default: 0
audio.synchronization.force_rate:48000
Cheers
Hi!
Morfsta schrieb:
Have you tried changing the audio.synchronization.av_sync_method in xine-config to be resample instead of metronom?
Hmm, I didn't know there was such an option... I especially have these problems with AC3 audio, and these cannot easily be resampled.
I have a cheap ASRock board in my media PC, and an SB Live 5.1 value sound card that gave me quite some problems to set it up correctly, one of these two is probably responsible for the problems.
In the last media PC before this one I had an Asus board with onboard digital out, and with that I didn't have as many problems (but the old nVidia AGP card was not capable of running 1920x1080 smoothly).
Ciao
Martin
On Tue, Jul 22, 2008 at 11:17:26PM +0200, Thomas Hilber wrote:
Unfortunately with Radeons we currently have 2 problems unsolved:
- there appears to be a tiny bug in XV overlay scaling code which
sometimes mixes even and odd fields at certain resolutions. A workaround to compensate for this is to scale the opposite way. This is done by xineliboutput option 'Fullscreen mode: no, Window height: 575 px' instead of 'Window height: 575 px' (as noted in my configuration example).
Overlay XV uses double buffering, eliminating any tearing effects. This works pretty well.
- the other way to use XV video extension is textured mode. This method
shows very good results, no scaling problems at all. But the code is so new (a few weeks old) that there is not yet proper tearing protection for it.
the first issue was fixed yesterday! The second one is therefore moot. Thanks to Roland Scheidegger I now get a perfect picture at all source resolutions tested so far.
http://lists.x.org/archives/xorg-driver-ati/2008-July/006143.html http://www.vdr-portal.de/board/thread.php?postid=741778#post741778
There is currently only one known issue left: detection of initial field polarity. I don't think this is a big deal. After that I can start productive use of the patch on my living room VDR.
Maybe then I find some time to port the patch to other platforms (like intel based graphics cards).
Cheers Thomas
On Tue, Jul 29, 2008 at 09:34:39AM +0200, Thomas Hilber wrote:
On Tue, Jul 22, 2008 at 11:17:26PM +0200, Thomas Hilber wrote:
Unfortunately with Radeons we currently have 2 problems unsolved:
- there appears to be a tiny bug in XV overlay scaling code which
sometimes mixes even and odd fields at certain resolutions. A workaround to compensate for this is to scale the opposite way. This is done by xineliboutput option 'Fullscreen mode: no, Window height: 575 px' instead of 'Window height: 575 px' (as noted in my configuration example).
Overlay XV uses double buffering, eliminating any tearing effects. This works pretty well.
- the other way to use XV video extension is textured mode. This method
shows very good results, no scaling problems at all. But the code is so new (a few weeks old) that there is not yet proper tearing protection for it.
the first issue was fixed yesterday! The second one is therefore moot. Thanks to Roland Scheidegger I now get a perfect picture at all source resolutions tested so far.
http://lists.x.org/archives/xorg-driver-ati/2008-July/006143.html http://www.vdr-portal.de/board/thread.php?postid=741778#post741778
Nice progress!
There is currently only one known issue left: detection of initial field polarity. I don't think this is a big deal. After that I can start productive use of the patch on my living room VDR.
:)
Maybe then I find some time to port the patch to other platforms (like intel based graphics cards).
That would rock.
btw any chance of getting these patches accepted/integrated upstream?
-- Pasi
On Tue, Jul 29, 2008 at 01:40:49PM +0300, Pasi Kärkkäinen wrote:
Maybe then I find some time to port the patch to other platforms (like intel based graphics cards).
That would rock.
maybe this way we could fix the current issues with the S100. Picture quality improves dramatically if the deinterlacer is switched off.
Anyway they made a big step forward these days:
http://forum.zenega-user.de/viewtopic.php?f=17&t=5440&start=15#p4324...
btw any chance of getting these patches accepted/integrated upstream?
I don't think we'll get upstream support in the near future, since TV applications are the only ones that need to synchronize VGA timing to an external signal.
-Thomas
On Wed, Jul 30, 2008 at 07:43:19AM +0200, Thomas Hilber wrote:
On Tue, Jul 29, 2008 at 01:40:49PM +0300, Pasi Kärkkäinen wrote:
Maybe then I find some time to port the patch to other platforms (like intel based graphics cards).
That would rock.
maybe this way we could fix the current issues with the S100. Picture quality improves dramatically if the deinterlacer is switched off.
Anyway they made a big step forward these days:
http://forum.zenega-user.de/viewtopic.php?f=17&t=5440&start=15#p4324...
btw any chance of getting these patches accepted/integrated upstream?
I don't think we'll get upstream support in the near future, since TV applications are the only ones that need to synchronize VGA timing to an external signal.
Ok.. the other day you sent a mail saying you had reworked the patches, so that made me wonder if it would be possible to make these patches friendly enough to get them accepted upstream :)
-- Pasi
On Tue, Aug 12, 2008 at 04:44:59PM +0300, Pasi Kärkkäinen wrote:
that made me wonder if it would be possible to make these patches friendly enough to get them accepted upstream :)
sorry - but I really can't take care of this at the current state of development.
wow, when I buy an AMD card I will surely look at your code. :)
currently I'm still using a pentium 4, 2.4GHz machine with nvidia AGP 440MX card,
the only way to get that to work properly was with the older nvidia driver 71.86.0; apparently the newer drivers force PAL (or any other TV standard) to run at 60Hz instead of 50Hz, while my broadcast is 50Hz. So I had to "downgrade" the driver to get proper output.
With these options in my xorg.conf to disable the driver's auto settings:
Section "Monitor" . . ModeLine "720x576PAL" 27.50 720 744 800 880 576 582 588 625 -hsync -vsync ModeLine "720x576@50i" 14.0625 720 760 800 900 576 582 588 625 -hsync -vsync interlace . EndSection
Section "Screen" . . Option "UseEDIDFreqs" "FALSE" Option "UseEDIDDpi" "FALSE" Option "ModeValidation" "NoEdidModes" SubSection "Display" Modes "720x576PAL" EndSubSection . EndSection
xvidtune reports this on DISPLAY=:0.1:
"720x576"   27.50   720 744 800 880   576 582 588 625 -hsync -vsync
CPU load is 10% with xineliboutput set to use XvMC; my CPU fan even turns off, and it only kicks in when I view an xvid/divx type movie.
Theunis
On Tue, Jul 22, 2008 at 08:30:46PM +0200, Theunis Potgieter wrote:
currently I'm still using a pentium 4, 2.4GHz machine with nvidia AGP 440MX card,
at least the VGA-to-SCART cable (not yet the patch itself) does run here on nVidia hardware without problems. Box is a PUNDIT P1-AH2 with nVidia C51PV [GeForce 6150] graphics.
the only way to get that to work properly was with the older nvidia driver 71.86.0; apparently the newer drivers force PAL (or any other TV standard) to run at 60Hz instead of 50Hz, while my broadcast is 50Hz. So I had to "downgrade" the driver to get proper output.
really? On my Pundit I use NVIDIA-Linux-x86-100.14.19-pkg1.run and the attached xorg.conf with no problems.
Option "UseEDIDFreqs" "FALSE" Option "UseEDIDDpi" "FALSE"
I just use one big hammer instead:)
Option "UseEDID" "FALSE"
That works (mostly).
-Thomas
The xorg.conf options differ for newer versions of the nvidia driver, that is why mine looks different.
How I picked up on the problem: when I ran xvidtune on DISPLAY=:0.1 (TV-Out) I found that even when I set the modeline, it still ran at 60Hz, thus showing the tearing effect, and I had to enable a deinterlacer. After googling for 6 months, I found somebody on a mailing list explaining that the TV-Out (s-video) could be set to run at 50Hz, but only with the older nvidia drivers - and because my card is old that was not a problem. Obviously this only helps for the TV-Out on nvidia, so I don't require any deinterlacers. I use the machine as a home PC on DISPLAY=:0.0. I understand that your solution helps when using an LCD/Plasma with dvi/d-sub/scart connectors.
Just wanted to share my experience with all :) I'm only showing that you can consolidate your hardware too, if implemented correctly. I only have the one PC running in the house and don't see a need to run more. I extend the svideo/audio/IR cables to the next room. Not really needed now, since the PC runs quietly when the CPU fan stops; the only thing making noise now is the already relatively "quiet" power supply. Things that start up the CPU fan are xvid/divx and firefox (on DISPLAY=:0.0), taking into account that "live" TV is also off-loaded using XvMC.
Theunis
Dear Theunis,
Am Mittwoch, den 23.07.2008, 09:41 +0200 schrieb Theunis Potgieter:
Only thing making a noise now is the already relatively "quiet" power supply.
You probably already know this, but anyway.
There are power supplies called picoPSU which have no fans and are, as far as I understand, more efficient. See [1] for example. Additionally you need an AC/DC adapter which gets, I think, pretty hot but it is placed outside of the case. Also it is a little more expensive than a “normal” PSU but you probably also save some money because of the better efficiency.
Regards,
Paul
[1] http://www.bigbruin.com/reviews05/review.php?item=picopsu&file=1
On Wed, Jul 23, 2008 at 09:41:54AM +0200, Theunis Potgieter wrote:
deinterlacers. I use the machine as a home PC on DISPLAY=:0.0. I understand that your solution helps when using a LCD/Plasma with dvi/d-sub/scart connectors.
right. It also helps for deinterlaced output using a LCD/Plasma with dvi/d-sub.
But it should prove even more useful doing interlaced output with scart on a LCD/Plasma or cathode ray tube based displays.
Though this is a quite challenging task: all components, e.g. budget card drivers, software decoder and display driver (e.g. the Xserver), must play together seamlessly.
One badly behaving component anywhere in the chain can ruin the overall effort.
This is a big difference to a FF card. Where almost all important components are located within a self-contained board. Driven by proprietary firmware:)
hi,
with NVIDIA driver 169 and 173 at least, this does not yet work:
Thomas Hilber kirjoitti:
I just use one big hammer instead:)
Option "UseEDID" "FALSE"
That works (mostly).
And the reason is easily read from the driver's README:
Because these TV modes only depend on the TV encoder and the TV standard, TV modes do not go through normal mode validation. The X configuration options HorizSync and VertRefresh are not used for TV mode validation.
Additionally, the NVIDIA driver contains a hardcoded list of mode sizes that it can drive for each combination of TV encoder and TV standard. Therefore, custom modelines in your X configuration file are ignored for TVs.
Setting the TV format to PAL-B results in the following modeline (with predefined 720x576):
DISPLAY=:0.0 xvidtune -show
"720x576"   31.50   720 760 840 880   576 585 588 597 -hsync -vsync
and PAL-G:
DISPLAY=:0.0 xvidtune -show
"720x576"   31.50   720 760 840 880   576 585 588 597 -hsync -vsync
(does not change at all...)
I have no idea on whether this is 50Hz or 60Hz - I guess not interlaced at least.
So the question is whether you have used VGA instead of TV-out, or somehow tricked the driver into respecting your own modelines...
I add the relevant part of Xorg.0.log, so you can see what the modelines available are:
(**) NVIDIA(0): Ignoring EDIDs (II) NVIDIA(0): Support for GLX with the Damage and Composite X extensions is (II) NVIDIA(0): enabled. (II) NVIDIA(0): NVIDIA GPU GeForce FX 5200 (NV34) at PCI:1:0:0 (GPU-0) (--) NVIDIA(0): Memory: 131072 kBytes (II) NVIDIA(0): GPU RAM Type: DDR1 (--) NVIDIA(0): VideoBIOS: 04.34.20.87.00 (--) NVIDIA(0): Found 2 CRTCs on board (II) NVIDIA(0): Supported display device(s): CRT-0, CRT-1, DFP-0, TV-0 (II) NVIDIA(0): Bus detected as AGP (II) NVIDIA(0): Detected AGP rate: 8X (--) NVIDIA(0): Interlaced video modes are supported on this GPU (II) NVIDIA(0): (II) NVIDIA(0): Mode timing constraints for : GeForce FX 5200 (II) NVIDIA(0): Maximum mode timing values : (II) NVIDIA(0): Horizontal Visible Width : 8192 (II) NVIDIA(0): Horizontal Blank Start : 8192 (II) NVIDIA(0): Horizontal Blank Width : 4096 (II) NVIDIA(0): Horizontal Sync Start : 8184 (II) NVIDIA(0): Horizontal Sync Width : 504 (II) NVIDIA(0): Horizontal Total Width : 8224 (II) NVIDIA(0): Vertical Visible Height : 8192 (II) NVIDIA(0): Vertical Blank Start : 8192 (II) NVIDIA(0): Vertical Blank Width : 256 (II) NVIDIA(0): Veritcal Sync Start : 8191 (II) NVIDIA(0): Vertical Sync Width : 15 (II) NVIDIA(0): Vertical Total Height : 8193 (II) NVIDIA(0): (II) NVIDIA(0): Minimum mode timing values : (II) NVIDIA(0): Horizontal Total Width : 40 (II) NVIDIA(0): Vertical Total Height : 2 (II) NVIDIA(0): (II) NVIDIA(0): Mode timing alignment : (II) NVIDIA(0): Horizontal Visible Width : multiples of 8 (II) NVIDIA(0): Horizontal Blank Start : multiples of 8 (II) NVIDIA(0): Horizontal Blank Width : multiples of 8 (II) NVIDIA(0): Horizontal Sync Start : multiples of 8 (II) NVIDIA(0): Horizontal Sync Width : multiples of 8 (II) NVIDIA(0): Horizontal Total Width : multiples of 8 (II) NVIDIA(0): (--) NVIDIA(0): Connected display device(s) on GeForce FX 5200 at PCI:1:0:0: (--) NVIDIA(0): NVIDIA TV Encoder (TV-0) (--) NVIDIA(0): NVIDIA TV Encoder (TV-0): 350.0 MHz maximum pixel clock (--) NVIDIA(0): TV encoder: NVIDIA (II) NVIDIA(0): TV modes supported by this encoder: (II) NVIDIA(0): 1024x768; Standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, (II) NVIDIA(0): PAL-N, PAL-NC (II) NVIDIA(0): 800x600; Standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, (II) NVIDIA(0): PAL-NC (II) NVIDIA(0): 720x576; Standards: PAL-BDGHI, PAL-N, PAL-NC (II) NVIDIA(0): 720x480; Standards: NTSC-M, NTSC-J, PAL-M (II) NVIDIA(0): 640x480; Standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, (II) NVIDIA(0): PAL-NC (II) NVIDIA(0): 640x400; Standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, (II) NVIDIA(0): PAL-NC (II) NVIDIA(0): 400x300; Standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, (II) NVIDIA(0): PAL-NC (II) NVIDIA(0): 320x240; Standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, (II) NVIDIA(0): PAL-NC (II) NVIDIA(0): 320x200; Standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, (II) NVIDIA(0): PAL-NC (II) NVIDIA(0): Frequency information for NVIDIA TV Encoder (TV-0): (II) NVIDIA(0): HorizSync : 15.000-16.000 kHz (II) NVIDIA(0): VertRefresh : 43.000-72.000 Hz (II) NVIDIA(0): (HorizSync from HorizSync in X Config Monitor section) (II) NVIDIA(0): (VertRefresh from Conservative Defaults) (II) NVIDIA(0): Note that the HorizSync and VertRefresh frequency ranges are (II) NVIDIA(0): ignored for TV Display Devices; modetimings for TVs will (II) NVIDIA(0): be selected based on the capabilities of the NVIDIA TV (II) NVIDIA(0): encoder. 
(II) NVIDIA(0): (II) NVIDIA(0): --- Modes in ModePool for NVIDIA TV Encoder (TV-0) --- (II) NVIDIA(0): "nvidia-auto-select" : 1024 x 768; for use with TV standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, PAL-NC (from: NVIDIA Predefined) (II) NVIDIA(0): "1024x768" : 1024 x 768; for use with TV standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, PAL-NC (from: NVIDIA Predefined) (II) NVIDIA(0): "800x600" : 800 x 600; for use with TV standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, PAL-NC (from: NVIDIA Predefined) (II) NVIDIA(0): "720x576" : 720 x 576; for use with TV standards: PAL-BDGHI, PAL-N, PAL-NC (from: NVIDIA Predefined) (II) NVIDIA(0): "640x480" : 640 x 480; for use with TV standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, PAL-NC (from: NVIDIA Predefined) (II) NVIDIA(0): "640x400" : 640 x 400; for use with TV standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, PAL-NC (from: NVIDIA Predefined) (II) NVIDIA(0): "400x300" : 400 x 300; for use with TV standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, PAL-NC (from: NVIDIA Predefined) (II) NVIDIA(0): "320x240" : 320 x 240; for use with TV standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, PAL-NC (from: NVIDIA Predefined) (II) NVIDIA(0): "320x200" : 320 x 200; for use with TV standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, PAL-NC (from: NVIDIA Predefined) (II) NVIDIA(0): --- End of ModePool for NVIDIA TV Encoder (TV-0): --- (II) NVIDIA(0): (II) NVIDIA(0): Assigned Display Device: TV-0 (II) NVIDIA(0): Requested modes: (II) NVIDIA(0): "720x576PAL" (II) NVIDIA(0): "720x576@50i" (II) NVIDIA(0): "720x576i" (II) NVIDIA(0): "720x576" (WW) NVIDIA(0): No valid modes for "720x576PAL"; removing. (WW) NVIDIA(0): No valid modes for "720x576@50i"; removing. (WW) NVIDIA(0): No valid modes for "720x576i"; removing. (II) NVIDIA(0): Validated modes: (II) NVIDIA(0): MetaMode "720x576": (II) NVIDIA(0): Bounding Box: [0, 0, 720, 576] (II) NVIDIA(0): NVIDIA TV Encoder (TV-0): "720x576" (II) NVIDIA(0): Size : 720 x 576 (II) NVIDIA(0): Offset : +0 +0 (II) NVIDIA(0): Panning Domain: @ 720 x 576 (II) NVIDIA(0): Position : [0, 0, 720, 576] (II) NVIDIA(0): Virtual screen size determined to be 720 x 576
It seems that NVIDIA also supports HD576i "TVStandard", but I don't know what to put on the "Modes"-line for that. At least 720x576 fails.
yours, Jouni
On Mon, Aug 11, 2008 at 07:40:15PM +0300, Jouni Karvo wrote:
with NVIDIA driver 169 and 173 at least, this does not yet work:
the patch is not yet ported to nVidia that's true.
Independent from that you can configure the nVidia-Xserver to output a PAL/RGB compatible signal. And connect a CRT via a VGA-to-SCART cable.
But until the patch is ported to nVidia (if ever) you must use a deinterlacer.
I attached my 'xorg.conf' and 'Xorg.0.log' which runs in several configurations here without problems. Maybe you give it a try.
BTW: we do not use any of these evil TV encoder things. Just forget about that.
Cheers Thomas
Does somebody have a URL on how to make one? for d-sub to scart or the new DVI (modern graphic cards) to scart?
On Tue, Aug 12, 2008 at 10:29:53AM +0200, Theunis Potgieter wrote:
Does somebody have a URL on how to make one? for d-sub to scart or the new DVI (modern graphic cards) to scart?
http://www.sput.nl/hardware/tv-x.html
That URL was included in the first mail of this thread..
-- Pasi
On Tue, 2008-08-12 at 14:01 +0300, Pasi Kärkkäinen wrote:
On Tue, Aug 12, 2008 at 10:29:53AM +0200, Theunis Potgieter wrote:
Does somebody have a URL on how to make one? for d-sub to scart or the new DVI (modern graphic cards) to scart?
http://www.sput.nl/hardware/tv-x.html
That URL was included in the first mail of this thread..
You can also use this if you have a Radeon and use 'composite' on the modeline instead of '-hsync -vsync' :
http://www.idiots.org.uk/vga_rgb_scart/index.html
Just please be careful - you can destroy your TV by sending it VGA-spec signals!
Cheers, Gavin.
On Tue, Aug 12, 2008 at 12:08:55PM +0100, Gavin Hamill wrote:
On Tue, 2008-08-12 at 14:01 +0300, Pasi Kärkkäinen wrote:
On Tue, Aug 12, 2008 at 10:29:53AM +0200, Theunis Potgieter wrote:
Does somebody have a URL on how to make one? for d-sub to scart or the new DVI (modern graphic cards) to scart?
http://www.sput.nl/hardware/tv-x.html
That URL was included in the first mail of this thread..
You can also use this if you have a Radeon and use 'composite' on the modeline instead of '-hsync -vsync' :
http://www.idiots.org.uk/vga_rgb_scart/index.html
Just please be careful - you can destroy your TV by sending it VGA-spec signals!
These two links seem to have a bit different ways of doing the cable.. The first link has more "complicated" cable..
Can someone try and compare these or explain the differences?
-- Pasi
On Tue, 2008-08-12 at 14:52 +0300, Pasi Kärkkäinen wrote:
These two links seem to have a bit different ways of doing the cable.. The first link has more "complicated" cable..
Can someone try and compare these or explain the differences?
Only Radeons can output a composite sync signal. That's why the second link will only work on Radeons.
The simple circuit in the first link merely takes separate horizontal + vertical syncs and combines them into the composite sync required for TV display. As such it will work on any VGA card, but since Thomas' work is restricted to supporting Radeons, there seems little point in making things more complex by building a circuit rather than just a few wires and one resistor :)
Note that pin 9 on the Radeon VGA port will provide +5V for you to feed into SCART pin 16 to tell your TV that it's an RGB signal. i.e. you don't need to take a feed from your PC PSU.
Cheers, Gavin.
On Tue, Aug 12, 2008 at 10:29:53AM +0200, Theunis Potgieter wrote:
Does somebody have a URL on how to make one? for d-sub to scart or the new DVI (modern graphic cards) to scart?
my favorite cable works with all graphics cards supporting RGB. I don't think it's too complex:)
you find it in the 'README' of my packages at:
http://lowbyte.de/vga-sync-fields
or here:
==============================================================================
circuit diagram of my favorite VGA-to-SCART adaptor:

    VGA                                           SCART

     1 -O------------------------------------------O- 15  R
     2 -O------------------------------------------O- 11  G
     3 -O------------------------------------------O-  7  B

     6 -O---------------------------------------+--O- 13  R Gnd
     7 -O---------------------------------------+--O-  9  G Gnd
     8 -O---------------------------------------+--O-  5  B Gnd
    10 -O---------------------------------------+--O- 17  Gnd
                                                +--O- 14  Gnd
                                                +--O- 18  Gnd
                 ------
     9 -O-----| 75R |------------------------------O- 16
                 ------
-VS 14 -O-----------------------+
                                |
                                |  /
                 ------         |C
-HS 13 -O-----| 680R |-----B-|     BC 547 B
                 ------         |E
                                |  \
                                |      ------
                                +------| 680R |----O- 20  -CS
                                        ------
 shell-O-------------------------------------------O- 21  shell
==============================================================================
Cheers Thomas
Hi
again, several questions :)
does your project actually work for dvb 720p channels too? and I have a maybe stupid question - why is the frame rate from satellites not stable? is it 50i?
Goga
On Wed, Aug 13, 2008 at 12:03:23AM +0400, Goga777 wrote:
does your project actually work for dvb 720p channels too?
720p is 1280x720. At least the part that does the frame rate sync VGA<->DVB can be recycled for this.
and I have a maybe stupid question - why is the frame rate from satellites not stable? is it 50i?
real life systems don't work 100% perfectly in the mathematical sense. That's why we must find a way to compensate for such aberrations. BTW, that's exactly what a FF card does as well.
Cheers Thomas
On Tue, 2008-08-12 at 19:19 +0200, Thomas Hilber wrote:
On Tue, Aug 12, 2008 at 10:29:53AM +0200, Theunis Potgieter wrote:
Hi again, Thomas :)
I have a system not dissimilar to yours.. it's a P3-1GHz with PCI Radeon 7000. OS is Ubuntu hardy with the patches from your 0.0.2 release.
Now that I've got the patches in place, I get a stable desktop display on the TV.
If I start vdr / xineliboutput, the picture will be Ok for a second.. then it'll move up and down (like camera wobble, but it moves the onscreen logos / VDR menus, too!).
I see this kind of thing at least once per second :
[ 2706.402871] [drm] changed radeon drift trim from 00520125 -> 0052018c
If I quit vdr (leaving X running), and run the 'drift_control' tool, I see a drift speed of approx -3900 for 4 seconds, then +16000 marked 'excessive drift speed'
It's much the same story on the output of the 'startx' console.. lots of <- resyncing field polarity M1 -> and
sync point displacement: -365 drift speed: -13004 excessive drift speed overall compensation: -461
every couple of seconds :(
The system has no load since it'll become my new VDR box (hopefully :)
Cheers, Gavin.
On Tue, Aug 12, 2008 at 10:44:37PM +0100, Gavin Hamill wrote:
I have a system not dissimilar to yours.. it's a P3-1GHz with PCI Radeon 7000. OS is Ubuntu hardy with the patches from your 0.0.2 release.
ok
Now that I've got the patches in place, I get a stable desktop display on the TV.
good
If I start vdr / xineliboutput, the picture will be Ok for a second.. then it'll move up and down (like camera wobble, but it moves the onscreen logos / VDR menus, too!).
I guess it's because it wants to resync the initial field polarity.
I see this kind of thing at least once per second :
[ 2706.402871] [drm] changed radeon drift trim from 00520125 -> 0052018c
right. The lowest byte 8c confirms my assumption about field polarity.
If I quit vdr (leaving X running), and run the 'drift_control' tool, I see a drift speed of approx -3900 for 4 seconds, then +16000 marked 'excessive drift speed'
I think it is best to proceed step by step. Could you please do the following and report the results? To be on the safe side I just repeated all the steps myself, with reproducible results:
1. for the moment please comment out both of the following, in 'radeon_video.c' and in 'drift_control':
//#define RESYNC_FIELD_POLARITY_METHOD1
//#define RESYNC_FIELD_POLARITY_METHOD2
because this must clearly be fixed in xine. Though it works in my current configuration. Maybe we can reenable it later.
2. start the Xserver (but still without vdr) 3. run 'drift_control a'
this should give you typically an output like this:
# drift_control a tv now: 1218633290.553468 tv vbl: 1218633290.538542 vbls: 43163 trim: 0x00520100 sync point displacement: 9871 drift speed: -19 overall compensation: 339 o. c. clipped: 339 trim absolute: 339 t. a. clipped: 37 new trim: 0x80520125
tv now: 1218633291.553497 tv vbl: 1218633291.539525 vbls: 43213 trim: 0x00520125 sync point displacement: 3972 drift speed: -954 overall compensation: 104 o. c. clipped: 104 trim absolute: 141 t. a. clipped: 37 new trim: 0x80520125
tv now: 1218633292.553471 tv vbl: 1218633292.540529 vbls: 43263 trim: 0x00520125 sync point displacement: 2942 drift speed: -1030 overall compensation: 65 o. c. clipped: 65 trim absolute: 102 t. a. clipped: 37 new trim: 0x80520125
tv now: 1218633293.553429 tv vbl: 1218633293.541534 vbls: 43313 trim: 0x00520125 sync point displacement: 1895 drift speed: -1047 overall compensation: 29 o. c. clipped: 29 trim absolute: 66 t. a. clipped: 37 new trim: 0x80520125
tv now: 1218633294.553387 tv vbl: 1218633294.542539 vbls: 43363 trim: 0x00520125 sync point displacement: 848 drift speed: -1047 overall compensation: -6 o. c. clipped: -6 trim absolute: 31 t. a. clipped: 31 new trim: 0x8052011f
tv now: 1218633295.553358 tv vbl: 1218633295.543374 vbls: 43413 trim: 0x0052011f sync point displacement: -16 drift speed: -864 overall compensation: -30 o. c. clipped: -30 trim absolute: 1 t. a. clipped: 1 new trim: 0x80520101
tv now: 1218633296.553329 tv vbl: 1218633296.543358 vbls: 43463 trim: 0x00520101 sync point displacement: -29 drift speed: -13 overall compensation: -1 completed o. c. clipped: -1 trim absolute: 0 t. a. clipped: 0 new trim: 0x80520100
tv now: 1218633297.553298 tv vbl: 1218633297.543296 vbls: 43513 trim: 0x00520100 sync point displacement: 2 drift speed: 31 overall compensation: 1 completed o. c. clipped: 1 trim absolute: 1 t. a. clipped: 1 new trim: 0x80520101
tv now: 1218633298.553269 tv vbl: 1218633298.543262 vbls: 43563 trim: 0x00520101 sync point displacement: 7 drift speed: 5 overall compensation: 0 completed o. c. clipped: 0 trim absolute: 1 t. a. clipped: 1 new trim: 0x80520101
it is important that after some time 'sync point displacement' and 'drift speed' are floating around zero.
4. stop drift_control
5. unload all dvb modules (there are known issues with some)
6. start vdr with local sxfe frontend (make channels.conf zero size file)
7. start replay of some recording. Because field polarity is not synced automatically anymore you can manually restart replay until polarity is correct.
this should give you typically an output (Xorg.0.log) like this:
sync point displacement: -7816 drift speed: -716 overall compensation: -294
sync point displacement: -7503 drift speed: 832 overall compensation: -230
sync point displacement: -6514 drift speed: 1293 overall compensation: -180
sync point displacement: -5394 drift speed: 906 overall compensation: -154
sync point displacement: -4261 drift speed: 1226 overall compensation: -104
sync point displacement: -3142 drift speed: 1154 overall compensation: -68
sync point displacement: -2006 drift speed: 1034 overall compensation: -33
sync point displacement: -875 drift speed: 1218 overall compensation: 11
sync point displacement: 89 drift speed: 796 overall compensation: 30
sync point displacement: 470 drift speed: -75 overall compensation: 13
sync point displacement: 235 drift speed: -391 overall compensation: -5
sync point displacement: -127 drift speed: -258 overall compensation: -13
sync point displacement: -230 drift speed: 55 overall compensation: -6
sync point displacement: -38 drift speed: 271 overall compensation: 8
sync point displacement: 99 drift speed: 43 overall compensation: 4
sync point displacement: 93 drift speed: -62 overall compensation: 1 completed
sync point displacement: -15 drift speed: -107 overall compensation: -4
sync point displacement: -58 drift speed: -2 overall compensation: -2
sync point displacement: -30 drift speed: 41 overall compensation: 0 completed
sync point displacement: 23 drift speed: -27 overall compensation: 0 completed
again, as in our previous drift_control example, the value of 'sync point displacement' mostly starts at a very high offset. The algorithm uses 'drift speed' to converge 'sync point displacement' towards zero. After a few cycles even an 'overall compensation' of 0 is possible.
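Schematically one regulation step works like this (this is not the real drift_control/driver code - the gains and the clipping limit are made-up numbers, the real ones live in the patch):

/* one regulation step: the displacement from the ideal sync point and its
 * rate of change are folded into a trim value, clipped to what a few
 * shortened scanlines per field can actually deliver                      */
int compensation_step(int sync_point_displacement, int drift_speed)
{
    int comp = sync_point_displacement / 30   /* pull back towards the sync point */
             + drift_speed / 35;              /* damping term against overshoot   */

    if (comp >  500) comp =  500;
    if (comp < -500) comp = -500;
    return comp;
}

In the log the corresponding value shows up as 'overall compensation'; as displacement and drift speed approach zero, so does the applied trim.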
The picture quality should be as good as accustomed from RGB/PAL CRT.
If the 'drift speed' value keeps showing large deviations from zero, this could be a problem in your xine-lib. In that case I'll upload my current xine-lib version to my web server.
Good luck Thomas
On Wed, 2008-08-13 at 16:39 +0200, Thomas Hilber wrote:
1. for the moment please comment out both of the following, in 'radeon_video.c' and in 'drift_control':
//#define RESYNC_FIELD_POLARITY_METHOD1
//#define RESYNC_FIELD_POLARITY_METHOD2
Done, recompiled + reinstall the .deb, and recompiled the drift_control binary..
- run 'drift_control a'
It is important that after some time 'sync point displacement' and 'drift speed' float around zero.
overall comp floats -1 to 2, but sync point floats -44 to +45, and drift speed floats -40 to +40. ta absolute + clipped are 5 -> 7. I find it odd that my 'vbls' value is 15500 when yours is 43000..
- stop drift_control
- unload all dvb modules (there are known issues with some)
I forgot to mention this before - I'm not using any dvb modules - I'm using the streamdev-client to source live TV via HTTP from my live VDR box.
I'll have to wait until I can be in front of the machine to try the other tests.. will follow-up then.
Many thanks for your time and effort :)
Cheers, Gavin.
On Wed, Aug 13, 2008 at 04:21:25PM +0100, Gavin Hamill wrote:
overall comp floats -1 to 2, but sync point floats -44 to +45, and drift speed floats -40 to +40. ta absolute + clipped are 5 -> 7. I find
that's pretty good. Same values here.
it odd that my 'vbls' value is 15500 when yours is 43000..
no - that's also ok! vbls continuously counts vbl interrupts since Xserver start. Next time you restart you will again have a completely different offset.
- stop drift_control
- unload all dvb modules (there are known issues with some)
I forgot to mention this before - I'm not using any dvb modules - I'm using the streamdev-client to source live TV via HTTP from my live VDR box.
ok. There is still a lot left to do until the patch works on all possible configurations. Sorry - for now I can only speak for my own (simple) configuration.
I'll have to wait until I can be in front of the machine to try the other tests.. will follow-up then.
Many thanks for your time and effort :)
thank you for testing:-)
Cheers Thomas
On Wed, 2008-08-13 at 16:39 +0200, Thomas Hilber wrote:
And now.. part 2 :)
- stop drift_control
- unload all dvb modules (there are known issues with some)
- start vdr with local sxfe frontend (make channels.conf zero size file)
- start replay of some recording. Because field polarity is not synced
automatically anymore you can manually restart replay until polarity is correct.
OK, found a suitable recording, and after a couple of 'play 1 begin' to SVDRP it starts OK. The picture is pretty good, but it still shifts around the screen a bit:
drift speed: 982 overall compensation: 30
sync point displacement: 1118 drift speed: 422 overall compensation: 53
sync point displacement: 64 drift speed: -916 overall compensation: -29
sync point displacement: 613 drift speed: -282 overall compensation: 11
sync point displacement: -216 drift speed: -369 overall compensation: -20
sync point displacement: -499 drift speed: 163 overall compensation: -11
sync point displacement: -685 drift speed: 393 overall compensation: -10
sync point displacement: 3175 drift speed: 8509 overall compensation: 402
sync point displacement: 6186 drift speed: -13504 excessive drift speed overall compensation: -252
sync point displacement: -846 drift speed: -4863 overall compensation: -196
sync point displacement: -430 drift speed: 7566 overall compensation: 246
sync point displacement: -438 drift speed: -8988
So, wow yes it's still all over the place :(
FWIW, the output is not once per second.. there is often a delay of up to 4 seconds before another group of 3 lines is displayed.
If the 'drift speed' value finally does show large deviations from zero this could be a problem in your xine-lib. In that case I upload my current xine-lib version to my web server.
Ouch.
I first tried the xine-lib 1.1.11 which ships with Ubuntu hardy, and then forward-ported the 1.1.7 packages from gutsy (whilst repackaging the xineliboutput support for 1.1.7), and had exactly the same problem. I just find it a bit strange that the same problem should manifest itself with both an older and a newer xine-lib than you use (listed as 1.1.8 in your original post).
So, I started looking for other reasons. Whilst X + vdr are running, the Xorg process is taking 40% CPU, with vdr taking 25%. The 'system' CPU usage is 32%, with 16% for user processes. I thought maybe it was using X11 output rather than xv, and thus causing a drain on the system...
I have executed 'xhost +' to eliminate X security issues... and the syslog shows all positive output:
starting plugin: xineliboutput
Local decoder/display (cXinelibThread) thread started (pid=14236, tid=14242)
[xine..put] xineliboutput: plugin file is /usr/lib/vdr/plugins/libvdr-xineliboutput.so.1.6.0
[xine..put] Searching frontend sxfe from /usr/lib/vdr/plugins/
[xine..put] Probing /usr/lib/vdr/plugins/libxineliboutput-sxfe.so.1.0.0rc2
[xine..put] load_frontend: entry at 0xb569a154
[xine..put] Using frontend sxfe (X11 (sxfe)) from libxineliboutput-sxfe.so.1.0.0rc2
[xine..put] cXinelibLocal::Action - fe created
[vdr-fe] sxfe_display_open(width=720, height=576, fullscreen=1, display=:0)
[vdr-fe] Display size : 190 x 152 mm
[vdr-fe] 720 x 576 pixels
[vdr-fe] 96dpi / 96dpi
[vdr-fe] Display ratio: 3789.000000/3789.000000 = 1.000000
[vdr-fe] Failed to open connection to bus: Failed to execute dbus-launch to autolaunch D-Bus session
[vdr-fe] (ERROR (gnome_screensaver.c,55): Resource temporarily unavailable)
[xine..put] cXinelibLocal::Action - fe->fe_display_open ok
[xine..put] cXinelibLocal::Action - xine_init
[vdr-fe] fe_xine_init: xine_open_audio_driver("alsa:default") failed
[xine..put] cXinelibLocal::Action - fe->xine_init ok
[xine..put] cXinelibLocal::Action - xine_open
'xvinfo' shows all the good stuff (pages of capabilities), too.
So I'm not entirely sure where to take it from here. Clearly it can work, but I must be missing a piece..
Sorry - it's a bit of a mixed bag response - I was hoping it would be much more clear cut!
Cheers, Gavin.
On Wed, Aug 13, 2008 at 09:09:45PM +0100, Gavin Hamill wrote:
OK, found a suitable recording, and after a couple of 'play 1 begin' to SVDRP it starts OK. The picture is pretty good, but it still shifts around the screen a bit:
that's because the PLL encounters very large increments of 'sync point displacement' it must compensate for. We must find out where these leaps come from.
BTW: If I start a kernel build in the background I get similar effects:)
In your case it appears to be the Xserver itself consuming huge amounts of CPU resources for some yet unknown reason (see below).
[...]
overall compensation: -11
sync point displacement: -685  <------+
drift speed: 393                      |
overall compensation: -10             |  there is no
sync point displacement: 3175  <------+  stability
drift speed: 8509                     |
overall compensation: 402             |
sync point displacement: 6186  <------+
[...]
FWIW, the output is not once per second.. there is often a delay of up to 4 seconds before another group of 3 lines is displayed.
strange. Here the output appears exactly once per second, directly on the screen, and is also updated every second through 'tail -F /var/log/Xorg.0.log'. I think that could be related to the '40% Xserver-CPU' phenomenon (see below).
So, I started looking for other reasons. Whilst X + vdr are running, the Xorg process is taking 40% CPU, with vdr taking 25%. The 'system' CPU usage is 32%, with 16% for user processes. I thought maybe it was using X11 output rather than xv, and thus causing a drain on the system...
oh - a very interesting fact. That's different to mine (see my output of top below). Xorg takes only 0.7%(!) CPU on my system. Are there some special patches in Ubuntu that cause this?
This appears to be the root cause of our problem!
Does the Xserver poll for some resource that is not available, or something? A value of 40% CPU is way too much. The only process consuming some CPU power should be 'vdr' whilst decoding. Most other processes don't have much to do most of the time. We must dig deeper into that '40% Xserver-CPU' phenomenon! Is the DISPLAY environment variable set to DISPLAY=:0?
again a typical Xserver output:
sync point displacement: -26 drift speed: -71 overall compensation: -3
sync point displacement: -31 drift speed: 100 overall compensation: 2
sync point displacement: -25 drift speed: -57 overall compensation: -2
sync point displacement: -23 drift speed: 12 overall compensation: 0 completed
sync point displacement: 24 drift speed: 63 overall compensation: 3
sync point displacement: 6 drift speed: -72 overall compensation: -2
sync point displacement: -10 drift speed: -24 overall compensation: -1 completed
sync point displacement: 5 drift speed: 60 overall compensation: 2
while at the same time you get these messages in '/var/log/messages'. You can see the correction only floats a little:
kernel: [drm] changed radeon drift trim from 00520101 -> 00520105
kernel: [drm] changed radeon drift trim from 00520105 -> 00520104
kernel: [drm] changed radeon drift trim from 00520104 -> 00520101
kernel: [drm] changed radeon drift trim from 00520101 -> 00520103
kernel: [drm] changed radeon drift trim from 00520103 -> 00520101
kernel: [drm] changed radeon drift trim from 00520101 -> 00520104
kernel: [drm] changed radeon drift trim from 00520104 -> 00520102
kernel: [drm] changed radeon drift trim from 00520102 -> 00520101
kernel: [drm] changed radeon drift trim from 00520101 -> 00520103
at the same time I get following values through 'vmstat 1':
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0      0 349304  11228 107228    0    0  1788     0  286  777 22  1 77  0
 0  0      0 349296  11228 107216    0    0     0     0  294  787 22  0 78  0
 0  0      0 347364  11232 109012    0    0  1804     0  299  780 22  2 76  0
 0  0      0 347364  11232 109024    0    0     0     0  281  767 23  1 76  0
 0  0      0 345572  11232 110824    0    0  1796     0  300  782 24  0 76  0
 0  0      0 345564  11240 110820    0    0     0    72  295  799 24  1 75  0
 0  0      0 343896  11240 112596    0    0  1800     0  294  781 24  1 75  0
 0  0      0 343896  11240 112596    0    0     0     0  287  776 25  0 75  0
 0  0      0 342104  11240 114396    0    0  1808     0  293  781 24  2 74  0
 0  0      0 342096  11240 114404    0    0     0    20  291  780 26  1 73  0
 0  0      0 340304  11248 116196    0    0  1800    56  307  779 25  1 74  0
 0  0      0 340296  11248 116204    0    0     0     0  281  768 24  2 74  0
 0  0      0 338504  11248 118004    0    0  1788     0  285  764 21  4 75  0
 0  0      0 338512  11248 117992    0    0     0     0  283  745 27  0 73  0
 0  0      0 344776  11248 111580    0    0  1788     4  300  775 23  2 75  0
and top:
top - 10:48:13 up 1:33, 8 users, load average: 0.22, 0.09, 0.02
Tasks:  58 total,   2 running,  56 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.0%us,  0.3%sy, 24.3%ni, 74.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:    516368k total,   173356k used,   343012k free,    11416k buffers
Swap:  3903784k total,        0k used,  3903784k free,   113200k cached
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 8858 root      20   0  149m  25m  14m S 16.0  5.0   0:31.03 vdr
 8798 root      20   0  294m  15m  12m S  0.7  3.1   0:01.54 Xorg
 8894 root      20   0  2316 1096  872 R  0.7  0.2   0:00.28 top
    1 root      20   0  2028  708  604 S  0.0  0.1   0:01.26 init
    2 root      15  -5     0    0    0 S  0.0  0.0   0:00.00 kthreadd
    3 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 migration/0
    4 root      15  -5     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/0
You see Xorg is almost not noticeable on my system!
Can you strace the Xserver? Maybe you can try the Debian experimental packages like I do? Don't they run on Ubuntu as well?
If it would help you I can offer to make you a copy of my entire development system (about 800MB as a compressed tar image). It's based on current Debian lenny. This way you would have a system running instantly as expected.
From there you could try to activate your additional components step by step.
Then you have a chance to see which component causes the failure.
Cheers Thomas
On Wed, Aug 13, 2008 at 09:09:45PM +0100, Gavin Hamill wrote:
So, I started looking for other reasons. Whilst X + vdr are running, the Xorg process is taking 40% CPU, with vdr taking 25%. The 'system' CPU usage is 32%, with 16% for user processes. I thought maybe it was using X11 output rather than xv, and thus causing a drain on the system...
Have you checked that your display driver is OK? MTRR? Are you sure you use e.g. XV and not XShm?
Also, VDR taking 25% of resources looks pretty high. Can you check without plugins? (or is the 25% already including a software player?)
yours, Jouni
On Thu, 2008-08-14 at 11:25 +0200, Thomas Hilber wrote:
On Wed, Aug 13, 2008 at 09:09:45PM +0100, Gavin Hamill wrote:
Xorg process is taking 40% CPU, with vdr taking 25%. The 'system' CPU usage is 32%, with 16% for user processes.
[...]
Does the Xserver poll for some resources not available or something?
Maybe the driver is waiting for a free overlay buffer? Some drivers wait for a free hardware overlay buffer in a simple busy loop.
Usually this can be seen only when the video player draws Xv frames faster than the actual output rate (e.g. displaying 50p video with a 50p display mode).
- Petri
On Thu, Aug 14, 2008 at 01:15:58PM +0300, Petri Hintukainen wrote:
On Thu, 2008-08-14 at 11:25 +0200, Thomas Hilber wrote:
Does the Xserver poll for some resources not available or something?
Maybe the driver is waiting for free overlay buffer ? Some drivers wait for free hardware overlay buffer in simple busy loop.
a good idea, but in the case of 'xserver-xorg-video-ati' true hardware double buffers are supported. If a new PutImage() comes in, the DDX simply toggles to the other double buffer and starts writing there - no matter whether that buffer has ever been completely read by the CRT controller.
So there is no mechanism waiting here for something as far as I can see.
Usually this can be seen only when video player draws Xv frames faster than the actual output rate (ex. displaying 50p video with 50p display mode).
To see the effect in practice I just set the VGA frame rate to several values slightly below 50Hz (i.e. in the range 49.94 - 49.99 Hz). Applying all these 'low' frame rates leads to dropped fields, as expected. But Xserver CPU usage always stays around 2% maximum.
The only exception here is if I press the 'OK' button. The OSD 'time-shift' bar that shows up then costs about 16% CPU. Strangely enough, if I open the 'recordings' OSD, which covers almost the entire screen, it takes only about 6% CPU.
BTW: my xineliboutput OSD setup is as follows:
xineliboutput.OSD.AlphaCorrection = 0
xineliboutput.OSD.AlphaCorrectionAbs = 0
xineliboutput.OSD.Downscale = 1
xineliboutput.OSD.ExtSubSize = -1
xineliboutput.OSD.HideMainMenu = 0
xineliboutput.OSD.LayersVisible = 0
xineliboutput.OSD.Prescale = 1
xineliboutput.OSD.SpuAutoSelect = 0
xineliboutput.OSD.UnscaledAlways = 0
xineliboutput.OSD.UnscaledLowRes = 0
xineliboutput.OSD.UnscaledOpaque = 0
But anyway, all these values are still in the 'green area' and are compensated by the patch.
A value of 40% CPU as Gavin posted above I could never reproduce on my system. Something must be broken there.
Cheers Thomas
On 14 Aug 2008, at 21:53, Thomas Hilber wrote:
a good idea, but in the case of 'xserver-xorg-video-ati' true hardware double buffers are supported. If a new PutImage() comes in the DDX simply toggles to the other double buffer and starts to write there. No matter this buffer ever has been completely read by CRT controller.
So there is no mechanism waiting here for something as far as I can see.
Since you're using the vsync irq in any case, the best solution would be to notify user space at irq time that it should 'PutImage' a new frame.
On Thu, Aug 14, 2008 at 10:22:53PM +1000, Torgeir Veimo wrote:
Since you're using the vsync irq in any case, the best solution would be to notify user space at irq time that it should 'PutImage' a new frame.
I know what you want to say. But according to my understanding xine has its own heartbeat, and from this, the stream PTS and some other parameters, the rate at which images are put to the Xserver is finally derived.
I consider this rate an 'ideal' rate, i.e. free from any hardware constraints. I really don't want to change that.
Rather I try my best to program the hardware as close as possible to xine's 'ideal' rate. That's the main intention of the patch.
Cheers Thomas
On Thu, 2008-08-14 at 11:25 +0200, Thomas Hilber wrote:
Good heavens, this is all getting rather heavyweight :)
oh - a very interesting fact. that's different to mine (see my output of top below). Xorg takes only 0.7%(!) CPU on my system. Are there some special patches in ubuntu that causes this?
1% CPU is about what I would expect for xv usage - after all the whole point is for the app to write directly to video memory with minimal 'processing'
A quick Google search around the problem reveals very little - only a string of users complaining that their silly 3D desktop is slow / unstable (who would have thought? :)
This appears be the root cause of our problem!
Does the Xserver poll for some resources not available or something? A value of 40% CPU is way too much. The only process consuming some CPU power should be 'vdr' whilst decoding. Most other processes don't have to do much all over the time.
It should be said that Xorg is idle when just showing a desktop. It's only when video is played that usage shoots up.
We must dig deeper into that '40% Xserver-CPU' phenomenon! DISPLAY environment variable is set to DISPLAY=:0 ?
Yes. I also tried using 'mplayer -vo xv /video/blahhhh/12313131/001.vdr' and that generated the same amount of load in Xorg. However, since the PC (Dell Optiplex) has onboard Intel 810 VGA, I removed the Radeon and tried that instead. The same mplayer test yielded only 6% Xorg CPU. Still higher than I would expect, but it was an 800x600 VGA display.
Even deleting the xorg.conf and letting the radeon driver choose 'best defaults' I get the 40% CPU load.
You see Xorg is almost not noticable on my system!
Can you strace the Xserver? Maybe you can try Debian experimental packages like I do? Don't the run on Ubuntu as well?
Well, the Debian experimental packages installed OK, but refused to start:
/usr/bin/X11/X: symbol lookup error: /usr/lib/xorg/modules/drivers//radeon_drv.so: undefined symbol: pci_device_map_range
giving up.
xinit: Connection refused (errno 111): unable to connect to X server
xinit: No such process (errno 3): unexpected signal 2.
(yes, the radeon driver package was upgraded to the experimental one :)
and now I am unable to reinstall the ubuntu xorg due to circular dependencies and very strange package behaviour (see [1]), so I've given up on this installation. A shame, since I'd done well and not installed anything into /usr/local this time :)
If it would help you I can offer you to make a copy of my entire development system (about 800MB as compressed tar image).
At this stage that sounds like a good idea. I originally intended to install lenny but the Debian netinst + 'testing2' iso claimed there was no hard disk on the PC (I had the same experience earlier that day with a server at work), so I tried Ubuntu which installed perfectly.
Are you suggesting to provide a tarball that I can 'tar xzf' into a freshly-formatted root partition (then run grub) ?
Cheers, Gavin.
[1] root@rgb:~# apt-get install xserver-xorg xserver-xorg-core
The following packages have unmet dependencies.
  xserver-xorg: Depends: x11-xkb-utils but it is not going to be installed
                PreDepends: x11-common (>= 1:7.3+3) but it is not going to be installed
  xserver-xorg-core: Depends: libfontenc1 but it is not going to be installed
                     Depends: libxau6 but it is not going to be installed
                     Depends: libxdmcp6 but it is not going to be installed
                     Depends: libxfont1 (>= 1:1.2.9) but it is not going to be installed
                     Depends: x11-common (>= 1:7.0.0) but it is not going to be installed
All the Depends: packages are /already/ installed and meet those version requirements!
On Thu, Aug 14, 2008 at 02:22:46PM +0100, Gavin Hamill wrote:
[1] root@rgb:~# apt-get install xserver-xorg xserver-xorg-core
The following packages have unmet dependencies.
  xserver-xorg: Depends: x11-xkb-utils but it is not going to be installed
                PreDepends: x11-common (>= 1:7.3+3) but it is not going to be installed
  xserver-xorg-core: Depends: libfontenc1 but it is not going to be installed
                     Depends: libxau6 but it is not going to be installed
                     Depends: libxdmcp6 but it is not going to be installed
                     Depends: libxfont1 (>= 1:1.2.9) but it is not going to be installed
                     Depends: x11-common (>= 1:7.0.0) but it is not going to be installed
All the Depends: packages are /already/ installed and meet those version requirements!
Maybe a forced purge/uninstall and reinstall of consistent Debian packages would help?
Are you suggesting to provide a tarball that I can 'tar xzf' into a freshly-formatted root partition (then run grub) ?
right. I will prepare it in the next few hours and leave you a message with the URL where to download it.
Cheers Thomas
On Thu, 2008-08-14 at 14:22 +0100, Gavin Hamill wrote:
On Thu, 2008-08-14 at 11:25 +0200, Thomas Hilber wrote:
Oh, aptitude solved the dependencies for me (needed to explicitly downgrade one package, then all was well.)
Here's the "vmstat 1" output during mplayer playback... i.e. no madness with interrupts / context switching...
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 2  0  39684   4616  13172 186380    0    0   512     0  259  128 41 31 29  0
 1  0  39684   4224  13172 186764    0    0   384     0  252  119 38 31 30  0
 2  0  39684   3860  13176 187144    0    0   384     4  262  131 42 30 28  0
 1  0  39684   4340  12820 187228    0  628   388   628  266  131 40 31 30  0
gdh
On Mon, Aug 11, 2008 at 07:40:15PM +0300, Jouni Karvo wrote:
with NVIDIA driver 169 and 173 at least, this does not yet work:
I cannot confirm that. I just downloaded and installed the most recent
NVIDIA-Linux-x86-173.14.12.pkg1.run
It's running perfectly VGA->SCART with *exactly* the xorg.conf I posted above.
Cheers Thomas
hi,
Thomas Hilber wrote:
On Mon, Aug 11, 2008 at 07:40:15PM +0300, Jouni Karvo wrote:
with NVIDIA driver 169 and 173 at least, this does not yet work:
I cannot confirm that. I just downloaded and installed most recent
NVIDIA-Linux-x86-173.14.12.pkg1.run
It's running perfectly VGA->SCART with *exactly* the xorg.conf I posted above.
your trick is the VGA->SCART cable. I was using the TVout from the card. I have ordered the components for the cable, and I hope I'll be able to solder them together during the weekend. I hope I can then reproduce your success :)
yours, Jouni
On Thu, Aug 14, 2008 at 08:50:43AM +0300, Jouni Karvo wrote:
your trick is the VGA->SCART cable. I was using the TVout from the card. I have ordered the components for the cable, and I hope I'll be able to solder them together during the weekend. I hope I can then reproduce your success :)
ok, I'm sure you will! The picture quality of SCART is way better than TVout. Though on nVidia you still have to deinterlace.
Cheers Thomas
On Tue, Jul 22, 2008 at 06:37:05PM +0200, Thomas Hilber wrote:
Hi list,
the last few days I made some interesting experiences with VGA cards I now want to share with you.
goal
develop a budget card based VDR with PAL/RGB output and FF like output quality
problem
as we all know current VGA graphics output quality suffers from certain limitations. Graphics cards known so far operate at a fixed frame rate not properly synchronized with the stream. Thus fields or even frames do often not appear the right time at the ouput. Some are doubled others are lost. Finally leading to more or less jerky playback.
Hi and thanks for starting this project!
I'm a dxr3 user myself, but of course it would be nice to get good output quality without "extra" hardware! :)
It appeared to be a privilege of so called full featured cards (expensive cards running proprietary firmware) to output true RGB PAL at variable framerate. Thus always providing full stream synchronicity.
variable framerate.. I tend to watch interlaced PAL streams (50 hz), PAL DVD's and NTSC DVD's.. It would be great to get perfect output for all of these :)
A bit off topic.. Does any of the video players for Linux switch to a resolution/modeline with a different refresh rate when watching a movie to get perfect synchronization and no tearing?
An example.. your desktop might be at 70 hz refresh rate in normal use (ok, maybe it's 60 hz nowadays with LCD displays), and when you start watching PAL TV it would be better to have your display at 50 hz or 100 hz to get perfect output.. and then, when you start seeing a 24 fps movie, it would be best to have your display in 72 hz mode (3*24).. etc.
-- Pasi
On Tue, Jul 22, 2008 at 11:51:13PM +0300, Pasi Kärkkäinen wrote:
A bit off topic.. Does any of the video players for Linux switch to a resolution/modeline with a different refresh rate when watching a movie to get perfect synchronization and no tearing?
some time ago I accidentally stumbled across these options in my outdated xineliboutput version:
xineliboutput.VideoModeSwitching = 1
xineliboutput.Modeline =
maybe these are intended for this purpose? I didn't care yet.
An example.. your desktop might be at 70 hz refresh rate in normal use (ok, maybe it's 60 hz nowadays with LCD displays), and when you start watching PAL TV it would be better to have your display at 50 hz or 100 hz to get perfect output.. and then, when you start seeing a 24 fps movie, it would be best to have your display in 72 hz mode (3*24).. etc.
http://en.wikipedia.org/wiki/XRandR is what you are looking for. At least when talking about Xservers with that capability. Don't know how well it's supported by today's VDR output plugins.
On Wed, Jul 23, 2008 at 03:09:29PM +0200, Thomas Hilber wrote:
On Tue, Jul 22, 2008 at 11:51:13PM +0300, Pasi Kärkkäinen wrote:
A bit off topic.. Does any of the video players for Linux switch to a resolution/modeline with a different refresh rate when watching a movie to get perfect synchronization and no tearing?
some time ago I accidentally stumbled across these options in my outdated xineliboutput version:
xineliboutput.VideoModeSwitching = 1 xineliboutput.Modeline =
maybe these are intended for this purpose? I didn't care yet.
Ok. Sounds like it..
although "xineliboutput.Modeline" sounds a bit like it only wants to change to one specific modeline..
An example.. your desktop might be at 70 hz refresh rate in normal use (ok, maybe it's 60 hz nowadays with LCD displays), and when you start watching PAL TV it would be better to have your display at 50 hz or 100 hz to get perfect output.. and then, when you start seeing a 24 fps movie, it would be best to have your display in 72 hz mode (3*24).. etc.
http://en.wikipedia.org/wiki/XRandR is what you are looking for. At least when talking about Xservers with that capability. Don't know how well it's supported by today's VDR output plugins.
Thanks.
I knew new xservers are able to change resolution/modeline on the fly nowadays.. but didn't remember it was XRandR extension doing it :)
-- Pasi
On 23 Jul 2008, at 02:37, Thomas Hilber wrote:
To a certain degree you can workaround this by software deinterlacing.
After some further experimenting I finally found a solution to fine adjust the frame rate of my elderly Radeon type card.
Just trimming the length of a few scanlines during vertical retrace period does the trick.
Then I tried to implement the new functionality by applying only minimum changes to my current VDR development system. Radeon DRM driver is perfectly suited for that.
I finally ended up in a small patch against Radeon DRM driver and a even smaller one against xine-lib.
Your approach is very interesting, I myself have seen the problems that clock drift has on judder when using softdevice with vdr.
Have you considered applying your approach to DirectFB? There's a radeon driver which is not too hard to change, it also has a kernel module which could be augmented by using an ioctl command.
In addition, you might want to try out your approach with a matrox G550 card. These have field perfect interlaced output using DirectFB, so you'd have that part of the problem solved already.
Hi,
On Wed, Jul 23, 2008 at 06:04:29PM +1000, Torgeir Veimo wrote:
Your approach is very interesting, I myself have seen the problems that clock drift has on judder when using softdevice with vdr.
yes, that's also my experience with certain xineliboutput - xine-lib version combinations. I also experienced that certain DVB drivers sporadically stall the system for an inadmissibly long period of time.
My current configuration however outputs fields *very* regularly at least when doing playback. That's why I currently don't want to update any of these components.
Issues with judder are not so noticeable under more common operating conditions. Maybe that's why developers of softdecoder components are not always aware of problems in this area.
But with deinterlacing deactivated, irregularities are mercilessly exposed, because after each dropped field subsequent fields are replayed swapped:)
A measurement protocol showing you how regularly frames drip in with my current configuration can be found here
http://www.vdr-portal.de/board/attachment.php?attachmentid=19208
attached to that post:
http://www.vdr-portal.de/board/thread.php?postid=737687#post737687
legend:
vbl_received - count of VBLANK interrupts since Xserver start
vbl_since    - time in usecs since last VBLANK interrupt
vbl_now      - time (only usec part) when ioctl has been called
vbl_trim     - trim value currently configured
some explanations:

vbl_received is incremented by two from one line to the next because 2 VBLANK interrupts (== fields) are received per frame.
vbl_since is constantly incremented by the drift between the VBLANK-timing-based clock and xine-lib's calls to PutImage() (effectively the stream timestamps). After reaching a programmed level of about vbl_since == 11763 (for this particular example), vbl_trim starts to oscillate between the two values 0 and 200 (just a sample), representing the two active Xserver modelines. This is only for simplicity; we could also program a much finer grading if desired. We are not limited to 2 modelines.
When vbl_trim starts to oscillate, the Xserver's video timing is fully synchronized with the stream.
Interesting is the minimal jitter of vbl_now: it is incremented very regularly, by a value very close to 40000 usec on each call.
BTW: the measurement took place in the Xserver (at the place where the double buffers are switched), not at the patch in xine-lib, thus comprising all latencies in the data path between xine-lib and the Xserver. And even so there are effectively no stray values.
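The instrumentation needed for such a protocol is trivial, something like the following sketch (my own illustration using gettimeofday(), not the actual measurement code), called once per double buffer switch:

#include <stdio.h>
#include <sys/time.h>

/* log the interval between two successive buffer switches */
void log_frame_interval(void)
{
    static struct timeval last;
    struct timeval now;

    gettimeofday(&now, NULL);
    if (last.tv_sec) {
        long usec = (now.tv_sec - last.tv_sec) * 1000000L
                  + (now.tv_usec - last.tv_usec);
        fprintf(stderr, "frame interval: %ld usec\n", usec); /* ~40000 */
    }
    last = now;
}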
I can watch a football recording (my test material) for about 20 minutes without any field loss.
Have you considered applying your approach to DirectFB? There's a radeon driver which is not too hard to change, it also has a kernel module which could be augmented by using an ioctl command.
not yet but it sounds very interesting! Unfortunately I'm not on holiday and can't spend too much time for this project. Though I dedicate each free minute to it:)
In addition, you might want to try out your approach with a matrox G550 card. These have field perfect interlaced output using DirectFB, so you'd have that part of the problem solved already.
right, a very good idea! You mean the AGP G550? I almost forgot there are 2 of these boards lying around here somewhere.
Cheers, Thomas
On Tuesday 22 Jul 2008, Thomas Hilber wrote:
solution
graphics cards basically are not designed for variable frame rates. Once you have setup their timing you are not provided any means like registers to synchronize the frame rate with external timers. But that's exactly what's needed for signal output to stay in sync with the frame rate provided by xine-lib or other software decoders.
To extend/reduce the overall time between vertical retrace I first dynamically added/removed a few scanlines to the modeline but with bad results. By doing so the picture was visibly jumping on the TV set.
{snippage}
Interesting...
I looked at this sort of thing a few years back and came to the conclusion that the only cards that could be convinced to sync at such low rates, i.e. 50 Hz for PAL, were the Matrox G400, G450, etc. Whenever I tried setting modelines with any other cards, I never got any output or an error when starting X.
I take it that more modern cards are a lot more flexible in this respect!
I'm currently using a G450 with softdevice connected to a CRT TV and it works pretty well most of the time, with the odd flicker due to dodgy sync every now and then.
Using hardware to do the deinterlacing is _definitely_ the way forward, especially for CRT. (Not sure whether LCDs display an interlaced stream "properly" or whether they try to interpolate somehow and refresh the whole screen at once. I'm not buying one until 1. terrestrial is available in the UK, 2. my current TV dies, 3. there is a solution like this which utilises older hardware!).
Looks interesting...
Cheers,
Laz
On Wed, Jul 23, 2008 at 09:20:08AM +0100, Laz wrote:
that the only cards that could be convinced to sync at such low rates, i.e. 50 Hz for PAL, were the Matrox G400, G450, etc. Whenever I tried setting modelines with any other cards, I never got any output or an error when starting X.
also Radeons need a special 'invitation' for this by specifying:
Option "ForceMinDotClock" "12MHz"
I take it that more modern cards are a lot more flexible in this respect!
maybe. nVidia cards I have tested so far work without special problems. But I've heard that only the closed source driver is capable of 50 Hz for PAL. I didn't test the open source driver with that low frequency myself.
I hope one day the Nouveau project could give us enough support for PAL on nVidia with adequate drivers.
I'm currently using a G450 with softdevice connected to a CRT TV and it works pretty well most of the time with the odd flicker due to dodgy sync every now and than.
maybe you can convince the softdevice developers to give the patch a try:)
Using hardware to do the deinterlacing is _definitely_ the way forward,
for displaying interlaced content it's always best to use a display with native interlace capabilities. Then nobody has to deinterlace at all - you just route the plain fields straight through to the hardware.
especially for CRT. (Not sure whether LCDs display an interlaced streame "properly" or whether they try to interpolate somehow and refresh
I've heard modern LCDs do a pretty good job interpreting a conventional PAL signal - at the expense of a huge amount of signal processing circuitry.
On Tue, Jul 22, 2008 at 06:37:05PM +0200, Thomas Hilber wrote:
It appeared to be a privilege of so called full featured cards (expensive cards running proprietary firmware) to output true RGB PAL at variable framerate. Thus always providing full stream synchronicity.
I assume RGB NTSC should work as well.. ?
I live in Europe so PAL is the thing for me, but sometimes you have video in NTSC too..
After some further experimenting I finally found a solution to fine adjust the frame rate of my elderly Radeon type card. This time without any bad side effects on the screen.
Just trimming the length of a few scanlines during vertical retrace period does the trick.
<snip>
When xine-lib calls PutImage() it checks whether to increase/decrease Xservers frame rate. This way after a short adaption phase xine-lib can place it's PutImage() calls right in the middle between 2 adjacent vertical blanking intervals. This provides maximum immunity against jitter. And even better: no more frames/fields are lost due to stream and graphics card frequency drift.
Hmm.. can you explain what "increase/decrease Xservers frame rate" means?
I don't really know how xserver or display drivers work nowadays, but back in the days when I was coding graphics stuff in plain assembly (in MSDOS) I always did this to get perfect synchronized output without any tearing:
1. Render frame to a (double) buffer in memory
2. Wait for vertical retrace to begin (beam moving from bottom of the screen to top)
3. Copy the double buffer to display adapter framebuffer
4. Goto 1
So the video adapter framebuffer was always filled with a full new frame right before it was visible to the monitor..
This way you always got full framerate, smooth video, no tearing.. as long as your rendering took less than duration of a single frame :)
So I guess the question is can't you do the same nowadays.. lock the PutImage() to vsync?
-- Pasi
On Wed, Jul 23, 2008 at 05:05:21PM +0300, Pasi Kärkkäinen wrote:
I assume RGB NTSC should work as well.. ?
basically yes. The devil is in the details:) Just give it a try.
When xine-lib calls PutImage() it checks whether to increase/decrease Xservers frame rate. This way after a short adaption phase xine-lib can place it's PutImage() calls right in the middle between 2 adjacent vertical blanking intervals. This provides maximum immunity against jitter. And even better: no more frames/fields are lost due to stream and graphics card frequency drift.
Hmm.. can you explain what "increase/decrease Xservers frame rate" means?
you simply adjust the time between two vertical blanking (retrace) intervals to your needs.
This is done by lengthening/shortening scan lines that are not visible on the screen, because they are hidden within the vertical blanking interval.
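Conceptually it works like the sketch below. The helper name and the nominal line length are hypothetical, not the real DRM code - it only shows the principle that the modified line length is applied and reverted entirely within the blanking period:

/* conceptual sketch only - set_crtc_h_total() is a hypothetical helper
 * standing in for the real CRTC register write in the DRM driver */
#define H_TOTAL_NOMINAL 864               /* assumed PAL-ish line length */

extern void set_crtc_h_total(int h_total);        /* hypothetical */

/* called at the start of the vertical blanking interval;
 * trim is a small value like -1, 0 or +1 */
void trim_vblank_lines(int trim)
{
    set_crtc_h_total(H_TOTAL_NOMINAL + trim); /* stretch/shrink hidden lines */
    /* ... let a few invisible lines scan out with the modified length ... */
    set_crtc_h_total(H_TOTAL_NOMINAL);        /* restore before active video */
}

Because only invisible lines ever change length, the visible picture does not jump, yet the total frame period grows or shrinks by a tiny amount.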
I don't really know how xserver or display drivers work nowadays, but back in the days when I was coding graphics stuff in plain assembly (in MSDOS) I always did this to get perfect synchronized output without any tearing:
- Render frame to a (double) buffer in memory
- Wait for vertical retrace to begin (beam moving from bottom of the screen to top)
- Copy the double buffer to display adapter framebuffer
- Goto 1
that's very similar to the way a Radeon handles this when the overlay method is chosen for the XV extension:
1. the Xserver writes the incoming frame to one of its 2 buffers. Strictly alternating between the two.
2. the CRT controller sequentially reads the even then the odd field (or the other way round, depending on the start condition) out of the buffer, and then switches to the next buffer. Also strictly alternating between the two.
You just have to take care that data is written to the double buffers in the right sequence, so it is always read in the correct sequence by the CRT controller.
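As a rough illustration of that ordering concern (my own sketch, not the actual DDX code):

#include <string.h>

/* sketch: the write side alternates between the two hardware buffers on
 * every PutImage(); the CRT controller scans them out alternately, too.
 * field parity stays correct only if exactly one frame is written per
 * scanout cycle - a dropped or doubled frame swaps even and odd fields. */
static int write_buf;                     /* buffer to be written next */

void put_image_to_double_buffer(void *buffers[2],
                                const void *frame, size_t size)
{
    memcpy(buffers[write_buf], frame, size);  /* both fields, untouched */
    write_buf ^= 1;                           /* toggle for next frame */
}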
So the video adapter framebuffer was always filled with a full new frame right before it was visible to the monitor..
the same here. Otherwise the CRT controller would reuse already shown data.
So I guess the question is can't you do the same nowadays.. lock the PutImage() to vsync?
exactly. The patch tries hard to do this:) But to put it in your words: It's only a 'soft' lock. Loading the machine too much can cause problems.
-Thomas
On Wed, Jul 23, 2008 at 09:21:01PM +0200, Thomas Hilber wrote:
On Wed, Jul 23, 2008 at 05:05:21PM +0300, Pasi Kärkkäinen wrote:
I assume RGB NTSC should work as well.. ?
basically yes. The devil is in the details:) Just give it a try.
When xine-lib calls PutImage() it checks whether to increase/decrease Xservers frame rate. This way after a short adaption phase xine-lib can place it's PutImage() calls right in the middle between 2 adjacent vertical blanking intervals. This provides maximum immunity against jitter. And even better: no more frames/fields are lost due to stream and graphics card frequency drift.
Hmm.. can you explain what "increase/decrease Xservers frame rate" means?
you simply adjust the time between two vertical blanking (retrace) intervals to your needs.
This is done by lengthening/shortening scan lines that are not visible on the screen. Because they are hidden within the vertical blanking interval.
Hmm.. I still don't understand why you need to do this in the first place?
I don't really know how xserver or display drivers work nowadays, but back in the days when I was coding graphics stuff in plain assembly (in MSDOS) I always did this to get perfect synchronized output without any tearing:
- Render frame to a (double) buffer in memory
- Wait for vertical retrace to begin (beam moving from bottom of the screen to top)
- Copy the double buffer to display adapter framebuffer
- Goto 1
that's very similar to the way a Radeon handles this when overlay method is choosen for XV extension:
- the Xserver writes the incoming frame to one of its 2 buffers. Strictly
alternating between the two.
- the CRT controller sequentially reads the even than the odd (or the
other way round dependend on the start condition) field out of the buffer. And then switches to the next buffer. Also strictly alternating between the two.
You just have to take care that data is written the right sequence to the double buffers. So it is always read the correct sequence by the CRT controller.
Ok.
So the video adapter framebuffer was always filled with a full new frame right before it was visible to the monitor..
the same here. Otherwise the CRT controller would reuse already shown data.
So I guess the question is can't you do the same nowadays.. lock the PutImage() to vsync?
exactly. The patch tries hard to do this:) But to put it in your words: It's only a 'soft' lock. Loading the machine too much can cause problems.
Does this mean XV extension (or X itself) does not provide a way to "wait for retrace" out-of-the-box.. and your patch adds that functionality?
Sorry for the stupid questions :)
-- Pasi
On 24 Jul 2008, at 20:49, Pasi Kärkkäinen wrote:
Hmm.. can you explain what "increase/decrease Xservers frame rate" means?
you simply adjust the time between two vertical blanking (retrace) intervals to your needs.
Hmm.. I still don't understand why you need to do this in the first place?
It is to avoid the output framerate drifting away from the DVB-T/S/C input framerate.
On Thu, Jul 24, 2008 at 09:02:50PM +1000, Torgeir Veimo wrote:
On 24 Jul 2008, at 20:49, Pasi Kärkkäinen wrote:
Hmm.. can you explain what "increase/decrease Xservers frame rate" means?
you simply adjust the time between two vertical blanking (retrace) intervals to your needs.
Hmm.. I still don't understand why you need to do this in the first place?
It is to avoid the output framerate drifting away from the DVB-T/S/C input framerate.
Oh, now I got it.. and it makes sense :) You can't really control the DVB stream you receive so you need to sync the output.
There shouldn't be this kind of problems with streams from local files (DVD for example)..
-- Pasi
On Thu, Jul 24, 2008 at 02:29:15PM +0300, Pasi Kärkkäinen wrote:
On Thu, Jul 24, 2008 at 09:02:50PM +1000, Torgeir Veimo wrote:
On 24 Jul 2008, at 20:49, Pasi Kärkkäinen wrote:
Hmm.. can you explain what "increase/decrease Xservers frame rate" means?
you simply adjust the time between two vertical blanking (retrace) intervals to your needs.
Hmm.. I still don't understand why you need to do this in the first place?
It is to avoid the output framerate drifting away from the DVB-T/S/C input framerate.
Oh, now I got it.. and it makes sense :) You can't really control the DVB stream you receive so you need to sync the output.
There shouldn't be this kind of problems with streams from local files (DVD for example)..
Or maybe there is after all.. it seems the output refresh rate is not exactly 50.00 Hz, but something close to it.. so that's causing problems.
Thanks:)
-- Pasi
On Thu, Jul 24, 2008 at 09:02:50PM +1000, Torgeir Veimo wrote:
Hmm.. I still don't understand why you need to do this in the first place?
It is to avoid the output framerate drifting away from the DVB-T/S/C input framerate.
right. Normally Xserver modelines can only produce 'discrete' and 'static' video timings somewhere near 50Hz. For example 50.01Hz if you are lucky.
With the patch you can dynamically fine tune the frame rate in steps of about 0.000030 Hz, which should be enough for our purpose:)
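A quick back-of-the-envelope check with assumed numbers (13.5 MHz dot clock, 864x625 PAL frame, i.e. a 40 ms frame period - not taken from the patch) gives the same order of magnitude:

#include <stdio.h>

int main(void)
{
    double dotclock = 13.5e6;                      /* pixels per second */
    double period   = 864.0 * 625.0 / dotclock;    /* 0.040 s per frame */
    double delta    = 1.0 / dotclock;              /* one extra pixel on one
                                                      hidden line adds about
                                                      74 ns per frame */
    printf("frame rate step: %.6f Hz\n",
           1.0 / period - 1.0 / (period + delta)); /* ~0.000046 Hz */
    return 0;
}

So stretching a single hidden line by one pixel already gives steps in the 0.00005 Hz range; the exact step size of course depends on the real dot clock and on how much of a line is trimmed.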
I actually measured this step size with a quickly hacked measurement tool (drift_monitor) which can be found here (field_sync_tools.tgz):
http://www.vdr-portal.de/board/thread.php?postid=739784#post739784
On Thu, Jul 24, 2008 at 01:49:18PM +0300, Pasi Kärkkäinen wrote:
Does this mean XV extension (or X itself) does not provide a way to "wait for retrace" out-of-the-box.. and your patch adds that functionality?
most implementations provide "wait for retrace" out-of-the-box.
The point is that the Xserver's framerate must be adjustable dynamically for our purpose. In very small degrees to avoid visible effects.
That's what the patch does.
On Thu, Jul 24, 2008 at 01:36:41PM +0200, Thomas Hilber wrote:
On Thu, Jul 24, 2008 at 01:49:18PM +0300, Pasi Kärkkäinen wrote:
Does this mean XV extension (or X itself) does not provide a way to "wait for retrace" out-of-the-box.. and your patch adds that functionality?
most implementations provide "wait for retrace" out-of-the-box.
The point is that the Xserver's framerate must be adjustable dynamically for our purpose. In very small degrees to avoid visible effects.
That's what the patch does.
Yep, I got it now.. after a while :)
Now I just need to find someone to fix me a VGA-SCART cable and I can test these things myself..
Or maybe I could test with a VGA/CRT monitor to begin with..
I think my LCD only provides a single/fixed refresh rate through DVI-D.. (?)
-- Pasi
On Tue, Jul 22, 2008 at 06:37:05PM +0200, Thomas Hilber wrote:
goal
develop a budget card based VDR with PAL/RGB output and FF like output quality
VGA-to-SCART RGB adapter like this: http://www.sput.nl/hardware/tv-x.html
Hi again!
One more question..
Is it possible to output WSS signal from a VGA card to switch between 4:3 and 16:9 modes?
http://www.intersil.com/data/an/an9716.pdf
-- Pasi
On Sun, Jul 27, 2008 at 03:53:01PM +0300, Pasi Kärkkäinen wrote:
Is it possible to output WSS signal from a VGA card to switch between 4:3 and 16:9 modes?
it should be possible to emulate WSS by white dots/lines in a specific scanline. But I did not try this myself yet.
BTW: I'm currently experimenting with DirectFB instead of Xorg. Their Radeon driver code does not show the 'Xorg overlay scaling bug' from above. So I could either stick with DirectFB for the moment, or try with the help of their code to identify the part of Xorg that causes the bug.
On Tue, 2008-07-22 at 18:37 +0200, Thomas Hilber wrote:
Hi list,
Finally I have had a chance to try these patches - I managed to get an old Radeon 7000 PCI (RV100)...
I am using a fresh bare install of Ubuntu hardy which ships xine-lib 1.1.11, but the patches don't compile :( The Makefile.am changed a little and I was able to amend that manually, but video_out_xv.c spews out this:
video_out_xv.c: In function ‘xv_update_frame_format’:
video_out_xv.c:475: warning: pointer targets in assignment differ in signedness
video_out_xv.c:481: warning: pointer targets in assignment differ in signedness
video_out_xv.c:482: warning: pointer targets in assignment differ in signedness
video_out_xv.c:483: warning: pointer targets in assignment differ in signedness
video_out_xv.c: In function ‘xv_deinterlace_frame’:
video_out_xv.c:538: warning: pointer targets in assignment differ in signedness
video_out_xv.c:543: warning: pointer targets in passing argument 1 of ‘deinterlace_yuv’ differ in signedness
video_out_xv.c:547: warning: pointer targets in assignment differ in signedness
video_out_xv.c:552: warning: pointer targets in passing argument 1 of ‘deinterlace_yuv’ differ in signedness
video_out_xv.c:566: warning: pointer targets in assignment differ in signedness
video_out_xv.c:571: warning: pointer targets in passing argument 1 of ‘deinterlace_yuv’ differ in signedness
video_out_xv.c:582: warning: pointer targets in assignment differ in signedness
video_out_xv.c:583: warning: pointer targets in assignment differ in signedness
video_out_xv.c:590: warning: pointer targets in assignment differ in signedness
video_out_xv.c:591: warning: pointer targets in assignment differ in signedness
video_out_xv.c:598: warning: pointer targets in assignment differ in signedness
video_out_xv.c:599: warning: pointer targets in assignment differ in signedness
video_out_xv.c: In function ‘xv_display_frame’:
video_out_xv.c:845: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘vsync’
video_out_xv.c:845: error: ‘vsync’ undeclared (first use in this function)
video_out_xv.c:845: error: (Each undeclared identifier is reported only once
video_out_xv.c:845: error: for each function it appears in.)
video_out_xv.c:853: error: ‘RADEON_SETPARAM_VBLANK_CRTC’ undeclared (first use in this function)
video_out_xv.c:854: error: ‘DRM_RADEON_VBLANK_CRTC1’ undeclared (first use in this function)
video_out_xv.c:859: error: ‘DRM_IOCTL_RADEON_VSYNC’ undeclared (first use in this function)
video_out_xv.c: In function ‘open_plugin_2’:
video_out_xv.c:1769: warning: passing argument 4 of ‘config->register_enum’ from incompatible pointer type
make[4]: *** [xineplug_vo_out_xv_la-video_out_xv.lo]
I'm no coder so I don't know what I'm looking for.. any advice would be warmly welcomed!
Cheers, Gavin.
On Fri, Aug 08, 2008 at 09:23:34PM +0100, Gavin Hamill wrote:
Finally I have had a chance to try these patches - I managed to get an old Radeon 7000 PCI (RV100)...
nice!
I am using a fresh bare install of Ubuntu hardy which ships xine-lib 1.1.11, but the patches don't compile :( The Makefile.am changed a little but I was able to amend that manually, but the video_out_xv.c spews out this:
video_out_xv.c: In function ‘xv_update_frame_format’:
[...]
In the meantime I have reworked everything from scratch. Currently a patch is only applied against the radeon DRM driver and xserver-xorg-video-ati.
The xine library isn't touched any more, though this will change in the future. The latest version of the patch is available at:
http://lowbyte.de/vga-sync-fields
Cheers Thomas
Hi Thomas
does your idea actually work for new generation cards - ATI HD series, Intel G35/45 chipsets with HDMI output?
Goga
On Sat, Aug 09, 2008 at 11:26:21PM +0400, Goga777 wrote:
does your idea actually for new generation cards - ATI HD series, Intel G35/45 chipsets with hdmi output ?
currently it works for everything pre-AVIVO (i.e. before r500, with the exception of rs690, which has an r300-style 3d core but AVIVO 2d).
I have not yet tried ATI HD or Intel G35/45. Basically I don't see a problem. The devil is in the details:)
To ease the port to other graphics hardware I did not use special pre-avivo registers in my current solution.
The idea comprises several aspects:
The most important feature is to synchronize video output with the stream. Nobody cared about that until today. I do not understand that at all.
It is just a pleasant by-product of the sync that in some cases you no longer need to deinterlace.
Cheers Thomas
thanks for your answer
but does this problem also exist for 3d games? Or does this issue exist only for video playback?
does your idea actually for new generation cards - ATI HD series, Intel G35/45 chipsets with hdmi output ?
currently it does for everything pre-avivo (e.g. before r500 with the exception of rs690 which is a r300-style 3d core but 2d is avivo).
I not yet tried with ATI HD or Intel G35/45. Basically I don't see a problem. The devil is in the details:)
To ease the port to other graphics hardware I did not use special pre-avivo registers in my current solution.
The idea comprises several aspects:
The most important feature is to synchronize video output with the stream. Nobody cared about that until today. I do not understand that at all.
It just a pleasant by-product of the sync that in some cases you need not to deinterlace anymore.
On Sun, Aug 10, 2008 at 07:05:20PM +0400, Goga777 wrote:
but for 3d games this problem also actually ? or this issue exists only for video playback ?
3d and playback from disk sync to the rate provided by the graphics card, which is of course not possible for live TV.
Cheers Thomas
Hi all,
Over the last days, Thomas and I have been trying to sort out why my nearly-identical machine couldn't run his VGA sync patches properly.
The key difference is my Radeon 7000VE is PCI, whilst his is AGP. I tried the PCI Radeon in two old Pentium-3 era machines, and on my modern Pentium D930 desktop, all with the same behaviour - fullscreen video over PCI causes huge CPU usage in the Xorg process, even when using xv 'acceleration'.
When I switch the PCI Radeon for a PCI Express X300 (the very lowest 'X' series you can get), everything is glorious: Xorg CPU use is barely 1%.
Unfortunately I don't have any machines with both AGP and PCI on which I can try the same OS image but we both think it's safe to conclude that PCI is just unsuitable for this task.
Many thanks to Thomas for writing the patches in the first place, and also for the time he's spent logged into my machine remotely trying to solve the problem!
Cheers, Gavin.
Would be nice if someone could test the AMD Athlon 64 2000+ on an AMD platform with the 780G chipset on a microATX board, because it can do HD resolution (1920x1200) with high picture quality through the DVI/HDMI ports.
On 16/08/2008, Gavin Hamill gdh@acentral.co.uk wrote:
Hi all,
Over the last days, Thomas and I have been trying to sort out why my nearly-identical machine couldn't run his VGA sync patches properly.
The key difference is my Radeon 7000VE is PCI, whilst his is AGP. I tried the PCI Radeon in two old Pentium-3 era machines, and on my modern Pentium D930 desktop, all with the same behaviour - fullscreen video over PCI causes huge CPU usage in the Xorg process, even when using xv 'acceleration'.
When I switch the PCI Radeon for a PCI Express X300 (the very lowest 'X' series you can get), everything is glorious: Xorg CPU use is barely 1%.
Unfortunately I don't have any machines with both AGP and PCI on which I can try the same OS image but we both think it's safe to conclude that PCI is just unsuitable for this task.
Many thanks to Thomas for writing the patches in the first place, and also for the time he's spent logged into my machine remotely trying to solve the problem!
Cheers, Gavin.
Theunis Potgieter wrote:
Would be nice if someone could test the AMD Athlon 64 2000+ on a AMD platform, the 780G chip set on a microATX board, because it can do HD resolution (1920x1200) with high picture quality is possible through DVI/HDMI ports.
Hardware decoding of H.264 isn't supported in Linux. Software decoding of even 720p H.264 might be impossible with a 1 GHz processor.
Gavin Hamill wrote:
Over the last days, Thomas and I have been trying to sort out why my nearly-identical machine couldn't run his VGA sync patches properly.
The key difference is my Radeon 7000VE is PCI, whilst his is AGP. I tried the PCI Radeon in two old Pentium-3 era machines, and on my modern Pentium D930 desktop, all with the same behaviour - fullscreen video over PCI causes huge CPU usage in the Xorg process, even when using xv 'acceleration'.
When I switch the PCI Radeon for a PCI Express X300 (the very lowest 'X' series you can get), everything is glorious: Xorg CPU use is barely 1%.
Unfortunately I don't have any machines with both AGP and PCI on which I can try the same OS image but we both think it's safe to conclude that PCI is just unsuitable for this task.
PCI in general should be perfectly fine, for SDTV at least. While displaying SDTV (vdrsxfe) I see ~20% cpu use for X on AGP, ~44% on PCI (same machine, different heads, AGP is MGA450, PCI is MGA200).
The huge difference is likely due to something else, like:
- display (X) driver (but even drivers which just memcpy the video data to the (xv) framebuffer should work on a modern machine)
- PCI chipset (eg I had a VIA-based mobo, and it couldn't even keep up with SDTV on a PCI head; swapping the mobo for one w/ a real chipset made all problems suddenly disappear...)
You could probably do some setpci tweaks to improve PCI throughput, but I doubt the gain would be enough (I'd expect 10% improvement or so).
artur
On Sun, 2008-08-17 at 03:41 +0200, Artur Skawina wrote:
PCI in general should be perfectly fine, for SDTV at least. While displaying SDTV (vdrsxfe) I see ~20% cpu use for X on AGP, ~44% on PCI (same machine, different heads, AGP is MGA450, PCI is MGA200).
Yes, 40% CPU has been what I've seen. The problem is that it's system CPU usage rather than userspace. Due to the critical timing nature of the patches, they need to have nearly the whole machine to themselves, thus PCI DMA overhead causing things to be 'a bit sticky' is just too much :/
You could probably do some setpci tweaks to improve PCI throughput, but I doubt the gain would be enough (I'd expect 10% improvement or so).
I did try to twiddle with setting PCI latency timers but it had no measurable effect..
Cheers, Gavin.
On Sun, Aug 17, 2008 at 04:31:58PM +0100, Gavin Hamill wrote:
CPU usage rather than userspace. Due to the critical timing nature of the patches, they need to have nearly the whole machine to themselves,
the patches are time critical insofar as xine itself must time the frames very accurately.
Even my old 800MHz Pentium with an AGP Radeon shows that a frame indeed arrives at the Xserver's PutImage() every 40000 usecs +-35 usecs.
It's by far not necessary for the patches to work to get frames that accurately, but it shows what is possible even on old and slow hardware.
On Gavin's machine with the PCI DMA problems we instead measured a frame arriving at the Xserver's PutImage() every 40000 usecs +-21000 usecs.
That is way too unstable. I think xine itself also can't cope with that. At least it will show heavy jerkyness.
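For anyone who wants to reproduce such numbers, here is a minimal sketch of how the inter-frame jitter can be measured (my own illustration, not part of the patch): timestamp every hand-off to the X server, e.g. right before the XvShmPutImage() call in the output driver, and track the deviation from the 40 ms PAL frame period. The function name frame_timing_probe() and the standalone demo are made up for this sketch.

#include <limits.h>
#include <stdio.h>
#include <time.h>

#define PAL_FRAME_US 40000L   /* 25 fps -> 40 ms per frame */

/* Call this once per frame, right before handing the frame to the X server. */
static void frame_timing_probe(void)
{
    static struct timespec last;
    static int have_last;
    static long min_dev = LONG_MAX, max_dev = LONG_MIN;
    struct timespec now;

    clock_gettime(CLOCK_MONOTONIC, &now);

    if (have_last) {
        long delta = (now.tv_sec  - last.tv_sec)  * 1000000L +
                     (now.tv_nsec - last.tv_nsec) / 1000L;
        long dev = delta - PAL_FRAME_US;

        if (dev < min_dev) min_dev = dev;
        if (dev > max_dev) max_dev = dev;

        fprintf(stderr, "frame interval %ld us (dev %+ld, min %+ld, max %+ld)\n",
                delta, dev, min_dev, max_dev);
    }
    last = now;
    have_last = 1;
}

int main(void)
{
    /* Standalone demo: fake a 25 fps source with nanosleep(). */
    struct timespec frame = { .tv_sec = 0, .tv_nsec = PAL_FRAME_US * 1000L };

    for (int i = 0; i < 100; i++) {
        frame_timing_probe();
        nanosleep(&frame, NULL);
    }
    return 0;
}

(On older glibc you may need to link with -lrt for clock_gettime().)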
Nonetheless, today I released a new version of the patches with 100% less sensitivity to timing problems (see today's announcement).
Cheers Thomas
I found this to be useful, though I'm using PAL@50Hz and not NTSC colour encoding.
http://www.linuxis.us/linux/media/howto/linux-htpc/video_card_configuration....
Nice background information.
On Wed, Aug 27, 2008 at 03:08:14PM +0200, Theunis Potgieter wrote:
I found this to be useful, though I'm using PAL@50Hz and not NTSC colour encoding.
http://www.linuxis.us/linux/media/howto/linux-htpc/video_card_configuration....
Nice background information.
Right - some nice info. But the vga-sync-fields patch now invalidates some of the statements there. It's no longer true that software decoders must sync to the graphics card; in our case it's the other way round, as it should be.
The patch of course is for PAL@50Hz, though NTSC should also be possible.
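To illustrate the reversed sync direction with a toy model, here is a minimal self-contained sketch of a clamped proportional correction loop where the decoder is the timing master and the card's frame period is trimmed a little each frame. This is not the actual vga-sync-fields or xineliboutput code; drift, gain and clamp values are made up for the demo.

#include <stdio.h>

int main(void)
{
    /* Toy model: the decoder delivers a frame every 40000 us, the card
     * free-runs slightly fast, and a small clamped proportional correction
     * trims the card's frame period until PutImage() settles near the
     * middle of the frame period. All numbers are invented for the demo. */
    const double FRAME_US  = 40000.0;            /* PAL: 40 ms per frame       */
    const double TARGET_US = FRAME_US / 2.0;     /* aim PutImage() mid-frame   */
    const double MAX_TRIM  = 100.0;              /* keep corrections invisible */

    double card_period = FRAME_US - 20.0;        /* card ~20 us/frame too fast */
    double phase       = 5000.0;                 /* PutImage() vs. last vblank */

    for (int frame = 0; frame < 200; frame++) {
        double error = phase - TARGET_US;        /* >0: vblank drifting early  */
        double trim  = error / 8.0;              /* proportional correction    */
        if (trim >  MAX_TRIM) trim =  MAX_TRIM;
        if (trim < -MAX_TRIM) trim = -MAX_TRIM;

        if (frame % 20 == 0)
            printf("frame %3d: phase %7.1f us, trim %+6.1f us\n",
                   frame, phase, trim);

        /* The next PutImage() comes FRAME_US later, the next vblank after
         * the (trimmed) card period; the difference is the new phase. */
        phase += FRAME_US - (card_period + trim);
    }
    return 0;
}

Running it shows the adaptation phase first (trim pinned at the clamp) and then the phase locking close to mid-frame, which is the behaviour described above.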
In the meantime I have issued a few more releases of the vga-sync-fields patch. Version vga-sync-fields-0.0.7 together with xineliboutput version 1.0.1 or newer and the parameter setting
xineliboutput.Advanced.LiveModeSync = 0
gives very good results both for viewing recordings and for live TV. The system is already in productive use here. I will describe the new setup in my next release.
Cheers Thomas
A successor of my vga-sync-fields patch (http://lowbyte.de/vga-sync-fields/) has now been released by 'durchflieger' on 'vdr-portal.de', with far more functionality, especially for HDTV-related things.
please see:
http://www.vdr-portal.de/board/thread.php?threadid=80567
- sparkie
On Sat, Sep 27, 2008 at 08:35:23AM +0200, Thomas Hilber wrote:
A successor of my vga-sync-fields patch (http://lowbyte.de/vga-sync-fields/) has now been released by 'durchflieger' on 'vdr-portal.de', with far more functionality, especially for HDTV-related things.
please see:
Too bad the text is not English :(
-- Pasi
please see:
Too bad the text is not English :(
Not at all. Because it’s a good motivation to learn one more language :)
On Sat, Sep 27, 2008 at 02:55:31PM +0000, Bruno wrote:
Not at all. Because it's a good motivation to learn one more language :)
The primary intention of the patch was not a German language lesson, I think :)
The patch itself is written in C and the README is in English.
Cheers Thomas