V4L capturing

From LinuxTVWiki
This page discusses how to capture analogue video for offline consumption (especially digitising old VHS tapes). For information about streaming live video (e.g. webcams), see [[Streaming-V4L|the streaming page]]. For information about digital video (DVB), see [[TV_Related_Software|TV-related software]].


== Overview ==


Analogue video technology was largely designed before the advent of computers, so accurately digitising a video is a difficult problem. For example, software often assumes a constant frame rate throughout a video, but analogue technologies can deliver different numbers of frames from second to second. This page will present a framework for recording video, which you can alter for your specific requirements.


=== Recommended process ===


Your workflow should look something like this:


# '''Set your system up''' - understand the quirks of your TV card, VCR etc.
# '''Encode an accurate copy of the source video''' - handle issues with the analogue half of the system. Do as little digital processing as possible
# '''Transcode a usable copy of the video''' - convert the previous file to something pleasing to use
# '''Try the video and transcode again''' - check whether the video works how you want, otherwise try some different settings


Converting analogue input to a digital format is hard - VCRs overheat and damage tapes, computers use too much CPU and drop frames, disk drives fill up, etc. Creating a ''good'' digital video is also hard - not all software supports all formats, overscan and background hiss distract the viewer, videos need to be split into useful chunks, and so on. It's much easier to learn the process and produce a quality result if you tackle ''encoding'' in one step and ''transcoding'' in another.


=== Suggested software ===


This page assumes you have installed the following programs:


* [[GStreamer|gst-launch-1.0]] for capturing video (probably part of the ''gstreamer1.0-tools'' package)
* [http://ffmpeg.org/ FFMpeg] for saving and editing video (probably part of the ''ffmpeg'' package)
* [http://git.linuxtv.org/v4l-utils.git v4l2-ctl] for controlling your video card (probably part of the ''v4l-utils'' package)
* [http://mpv.io mpv] for viewing videos (probably part of the ''mpv'' package)


There are alternatives for each of these (e.g. the older ''0.10'' series of [[GStreamer]] and the ''libav'' fork of [http://ffmpeg.org/ FFMpeg]). You should be able to modify the instructions below to suit your preferences.


=== Choosing formats ===


When you create a video, you need to choose your ''video format'' (e.g. XviD or MPEG-2), ''audio format'' (e.g. WAV or MP3) and ''container format'' (e.g. AVI or MP4). There's constant work to improve the ''codecs'' that create audio/video and the ''muxers'' that create containers, and whole new formats are invented fairly regularly, so this page can't recommend any specific formats. For example, as of late 2015 [https://en.wikipedia.org/wiki/MPEG-2 MPEG-2] is the format most widely supported by older DVD players, [https://en.wikipedia.org/wiki/H.264/MPEG-4_AVC H.264] is becoming a de facto standard in modern web browsers, and people are waiting to see whether [https://en.wikipedia.org/wiki/High_Efficiency_Video_Coding HEVC] will be blocked by patent trolls. That's probably enough to decide which video codec is right for you in late 2015, but the facts will have changed even by early 2016.


You'll need to do some research to find the currently-recommended formats. Wikipedia's comparisons of [https://en.wikipedia.org/wiki/Comparison_of_audio_coding_formats audio], [https://en.wikipedia.org/wiki/Comparison_of_video_codecs video] and [https://en.wikipedia.org/wiki/Comparison_of_container_formats container] formats are a good place to start. Here are some important things to look for:


* '''encoding speed''' - an encoder that uses too much CPU will drop frames as the computer struggles to keep up
* '''accuracy''' - some formats are ''lossless'', others throw away information to improve speed and/or reduce file size
* '''file size''' - different formats use different amounts of disk space, even with the same accuracy
* '''compatibility''' - newer formats usually produce better results but can't be played by older software


Remember that you can use different formats in the ''encode'' and ''transcode'' stages. Speed and accuracy are most important when encoding, so you should use a modern, fast, low-loss format to create your initial accurate copy of the source video. But size and compatibility are most important for playback, so you should transcode to a format that produces a smaller or more compatible file. For example, as of late 2015 you might encode FLAC audio and x264 video into a Matroska file, then transcode MP3 audio and MPEG-2 video into an AVI file. You can examine the result and transcode again from the original if the file is too large or your grandmother's DVD player won't play it.


== Setting up ==


Before you can record a video, you need to set your system up and identify the following information:


* connector type (RF, composite or S-video)
* TV norm (some variant of PAL, NTSC or SECAM)
* video device (<code>/dev/video''<number>''</code>)
* audio device (<code>hw:CARD=''<id>'',DEV=''<number>''</code>)
* video capabilities (<code>video/x-raw, format=UYVY, framerate=''<fraction>'', width=''<int>'', height=''<int>''</code>)
* audio capabilities (<code>audio/x-raw, rate=''<int>'', channels=''<int>''</code>)
* colour settings (optional - hue, saturation, brightness and contrast)


This section will explain how to find these.


=== Connecting your video ===


{|
| style="text-align:right;" |[[File:Rf-connector.png]]
| style="text-align:center; font-weight: bold; color:#FF0000" | avoid
|[https://en.wikipedia.org/wiki/RF_connector RF Connector]
| tends to create more noise than the alternatives. Usually input #0, shows snow when disconnected
|-
| style="text-align:right;" |[[File:Composite-video-connector.png]]
| style="text-align:center; font-weight: bold; color:#00AA00" | use
|[https://en.wikipedia.org/wiki/Composite_video Composite video connector]
| widely supported and produces a good signal. Usually input #1, shows blackness when disconnected
|-
| style="text-align:right;" |[[File:S-video-connector.png]]
| style="text-align:center; font-weight: bold; color:#777700" | use if available
|[https://en.wikipedia.org/wiki/S-Video S-video connector]
| should produce a good video signal but most hardware needs a converter. Usually input #2, shows blackness when disconnected
|}


Connect your video source (TV or VCR) to your computer however you can. Each type of connector has slightly different properties - try whatever you can and see what works. If you have a TV card that supports multiple inputs, you will need to specify the input number when you come to record.


=== Finding your TV norm ===


Most TV cards only support the TV norm of the country they were sold in (e.g. PAL-I in the UK or NTSC-M in the Americas), but it's best to confirm this just in case. Wikipedia has [https://en.wikipedia.org/wiki/File:PAL-NTSC-SECAM.svg a graph of colour systems by country] and [https://en.wikipedia.org/wiki/Broadcast_television_systems#ITU_standards a complete list of standards] with countries they're used in.


If you like, you can store your TV norm in an environment variable:


TV_NORM=<norm>


For example, if your norm was <code>PAL-I</code>, you might type <code>TV_NORM=PAL-I</code> into your terminal. This guide will use <code>$TV_NORM</code> to refer to your video norm - if you choose not to set an environment variable, you will need to replace instances of <code>$TV_NORM</code> with your TV norm.


=== Determining your video device ===


Once you have connected your input, you need to determine the name Linux gives it. See all your video devices by doing:


ls /dev/video*


One of these is the device you want. Most people only have one, or can figure it out by disconnecting devices and rerunning the above command. Otherwise, check the capabilities of each device:


for VIDEO_DEVICE in /dev/video* ; do echo -e "\n\n$VIDEO_DEVICE\n" ; v4l2-ctl --device=$VIDEO_DEVICE --list-inputs ; done


Usually you will see e.g. a webcam with a single input and a TV card with multiple inputs. If you're still not sure which one you want, try each one in turn:


mpv av://v4l2:<device> tv:///<whichever-input-number-you-connected>
# or ...
mpv --tv-device=<device> tv:///<whichever-input-number-you-connected> # only works in mpv v0.25.0 or lower


Before you start, plug everything in and start playing a tape in your VCR. That way you won't be confused by weird issues (like seeing snow instead of blackness from a composite cable, because ''the VCR's'' RF connector is unplugged). Usually input #0 is the RF connector, input #1 is the composite connector and input #2 is the S-video connector; but some video cards are different so check your input numbers against the output of <code>v4l2-ctl</code>.


If you like, you can store your device and input number in environment variables:


VIDEO_DEVICE=<device>
VIDEO_INPUT=<whichever-input-number-you-connected>

Further examples on this page will use <code>$VIDEO_DEVICE</code> and <code>$VIDEO_INPUT</code> - you will need to replace these if you don't set environment variables.

=== Determining your audio device ===

See all of your audio devices by doing:

arecord -l

Again, it should be fairly obvious which of these is the right one. Get the device names by doing:

arecord -L | grep ^hw:

If you're not sure which one you want, try each in turn:

mpv --tv-device=$VIDEO_DEVICE --tv-adevice=<device> tv:///$VIDEO_INPUT # mpv version 0.25.0 or lower

Again, you should hear your tape playing when you get the right one. Note: always use an ALSA ''hw'' device, as these are closest to the hardware. PulseAudio devices and ALSA's ''plughw'' devices add extra layers that, while more convenient for most uses, only cause headaches for us.

Optionally set your device in an environment variable:

AUDIO_DEVICE=<device>

Further examples on this page will use <code>$AUDIO_DEVICE</code> in place of an actual audio device - you will need to replace this if you don't set environment variables.

=== Getting your device capabilities ===

To find the capabilities of your video device, do:
gst-launch-1.0 --gst-debug=v4l2src:5 v4l2src device=$VIDEO_DEVICE ! fakesink 2>&1 | sed -une '/caps of src/ s/[:;] /\n/gp'

To find the capabilities of your audio device, do:
gst-launch-1.0 --gst-debug=alsa:5 alsasrc device=$AUDIO_DEVICE ! fakesink 2>&1 | sed -une '/returning caps/ s/[s;] /\n/gp'

You will need to press <kbd>ctrl+c</kbd> to close each of these programs when they've printed some output. When you record your video, you will need to specify capabilities based on the ranges displayed here. Some things to remember:

* audio <code>format</code> is optional (your software can decide this automatically)
* video <code>format</code> (discussed below) should be optional, but as of 2015 a bug means you should specify <code>format=UYVY</code>
* video <code>height</code> (discussed below) should be the appropriate height for your TV norm
* video <code>framerate</code> (discussed below) should be the appropriate value for your TV norm, but may need to be tweaked for your hardware
* <code>pixel-aspect-ratio</code> will be set below - do not specify it here
* for all other capabilities, just pick the highest number (or delete it altogether if there's only one choice)

For example, if your TV norm was some variant of PAL and your video card showed these results:

<nowiki>$ gst-launch-1.0 --gst-debug=v4l2src:5 v4l2src device=$VIDEO_DEVICE ! fakesink 2>&1 | sed -une '/caps of src/ s/[:;] /\n/gp'
0:00:00.052071821 29657 0x139fc50 DEBUG v4l2src gstv4l2src.c:306:gst_v4l2src_negotiate:<v4l2src0> caps of src
video/x-raw, format=(string)YUY2, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)UYVY, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)Y42B, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)I420, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)YV12, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)xRGB, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)BGRx, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)RGB, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)BGR, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)RGB16, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)RGB15, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)GRAY8, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59</nowiki>

<nowiki>$ gst-launch-1.0 --gst-debug=alsa:5 alsasrc device=$AUDIO_DEVICE ! fakesink 2>&1 | sed -une '/returning caps/ s/[s;] /\n/gp'
0:00:00.039231863 30898 0x25fcde0 INFO alsa gstalsasrc.c:318:gst_alsasrc_getcaps:<alsasrc0> returning cap
audio/x-raw, format=(string){ S16LE, U16LE }, layout=(string)interleaved, rate=(int)32000, channels=(int)2, channel-mask=(bitmask)0x0000000000000003
audio/x-raw, format=(string){ S16LE, U16LE }, layout=(string)interleaved, rate=(int)32000, channels=(int)1</nowiki>

Then you would select <code>video/x-raw, format=UYVY, framerate=25/1, width=720, height=576</code> and <code>audio/x-raw, rate=32000, channels=2</code>

Once again, you can set your capabilities in an environment variable, but you will need to put quote marks around them:

VIDEO_CAPABILITIES="<capabilities>"
AUDIO_CAPABILITIES="<capabilities>"

For example, <code>AUDIO_CAPABILITIES="audio/x-raw, rate=32000, channels=2"</code>. Further examples on this page will use <code>$VIDEO_CAPABILITIES</code> and <code>$AUDIO_CAPABILITIES</code> in place of actual capabilities - you will need to replace these if you don't set environment variables.

=== Video formats ===

Different video formats will affect the colours in your video. In practice, most formats simply won't work, and <code>format=UYVY</code> or <code>format=YUY2</code> will give the best results. This section provides more detail in case you need to make a different choice.

Computers usually describe colours using one of the [https://en.wikipedia.org/wiki/RGB RGB] family of formats - three numbers specifying an amount of red, green and blue. Modern computers mainly use RGB24 - for example, CSS colour codes are RGB24. Here's what CSS colour codes would look like if they used different types of RGB format:

{|
|+ What a CSS hex code for <span style="color:#6B8E23">OliveDrab</span> would look like in different RGB formats
! Format
! Example
! Comments
|-
| [https://en.wikipedia.org/wiki/Web_colors#Hex_triplet RGB24]
| <span style="color:#6B8E23">#6B8E23</span>
| red, green and blue (in that order), on a scale of 0 to 255
|-
| BGR24
| <span style="color:#6B8E23">#238E6B</span>
| blue, green and red (in that order), on a scale of 0 to 255
|-
| [https://en.wikipedia.org/wiki/High_color RGB16]
| <span style="color:#688c20">#6c64</span>
| red, green and blue (in that order), red and blue on a scale of 0 to 31, green on a scale of 0 to 63
|-
| [https://en.wikipedia.org/wiki/High_color RGB15]
| <span style="color:#688820">#3624</span>
| red, green and blue (in that order), each on a scale of 0 to 31
|}

These formats are all quite similar, and historically have all served useful purposes. RGB15 and RGB16 carry less information and should be avoided nowadays; BGR24 is fine, but happened not to become popular.

Analogue television didn't use RGB. Black-and-white TV described colours using one number - brightness. Colour television added two more numbers - how much of the brightness was blue and how much was red (green was then calculated from that information). These three numbers are often referred to as Y, U and V, so the family of colour formats used by television is called [https://en.wikipedia.org/wiki/YUV YUV].

As with the RGB family, some members of the YUV family are better than others (UYVY is better than YV12), and some are just arbitrarily different (UYVY is as good as YUY2).

Because your source video was created using a YUV format, you should encode with a YUV format if possible. RGB formats will lose a bit of colour information during the conversion process, and you won't gain anything by it. Look through the [https://valadoc.org/gstreamer-video-1.0/Gst.Video.Format.html list of video formats] and pick whichever YUV format has the highest numbers (''packed'' vs. ''planar'' doesn't matter).

==== Video heights ====

Some devices report a maximum height of ''578''. A PAL TV signal is 576 lines tall and an NTSC signal is 486 lines, so <code>height=578</code> won't give you the best picture quality. To confirm this, tune to a non-existent TV channel then take a screenshot of the snow:

gst-launch-1.0 -q v4l2src device=$VIDEO_DEVICE \
! $VIDEO_CAPABILITIES, height=578 \
! imagefreeze \
! autovideosink

[[Media:578-lines-of-static.png|Here's an example of what you might see]] - notice the blurring in the middle of the picture. Now take a screenshot with the appropriate height for your TV norm:

gst-launch-1.0 -q v4l2src device=$VIDEO_DEVICE \
! $VIDEO_CAPABILITIES, height=<appropriate-height> \
! imagefreeze \
! autovideosink

[[Media:576-lines-of-static.png|Here's an example taken with height=576]] - notice the middle of this picture is nice and crisp.

You may want to test this yourself and set your height to whatever looks best.

==== Video framerates ====

Due to hardware issues, some V4L devices produce slightly too many (or too few) frames per second. To check your system's actual frame rate, start your video source (e.g. a VCR or webcam) then run this command:

gst-launch-1.0 v4l2src device=$VIDEO_DEVICE \
! $VIDEO_CAPABILITIES \
! fpsdisplaysink fps-update-interval=100000

# Let it run for 100 seconds to get a large enough sample. It should print some statistics in the bottom of the window - write down the number of frames dropped
# Let it run for another 100 seconds, then write down the new number of frames dropped
# Calculate <code>(second number) - (first number) - 1</code> (e.g. 5007 - 2504 - 1 == 2502)
#* You need to subtract one because <code>fpsdisplaysink</code> drops one frame every time it displays the counter
# That number is exactly one hundred times your framerate, so you should tell your software e.g. <code>framerate=2502/100</code>

Note: VHS framerates can vary within the same file. To get an accurate measure of a VHS recording's framerate, encode to a format that supports variable framerates then retrieve the video's duration and total number of frames. You can then transcode a new file with your desired frame rate.

=== Correcting your colour settings ===

Most TV cards have acceptable colour settings by default, but you might get some benefit by configuring things manually. First check the controls available for your hardware:

v4l2-ctl --device=$VIDEO_DEVICE --list-ctrls | tee tv-card-settings-$( date --iso-8601=seconds ).txt

This also saves your settings to a file for future reference. Next, watch your video with some graphs:

<nowiki>ffplay -f lavfi "
movie=$VIDEO_DEVICE, scale=720:-1, fps=25, split [h0] [v0];
[h0] histogram=levels_mode=logarithmic, drawgrid=w=8:h=in_h:color=blue@0.5, drawbox=iw/2:0:1:ih:red@0.5, pad=iw+720:ih [h1];
[h1][v0] overlay=main_w-overlay_w:(main_h-overlay_h)/2 [out0]
"</nowiki>

Because analogue videos use [https://en.wikipedia.org/wiki/YUV YUV] instead of [https://en.wikipedia.org/wiki/RGB RGB], the graphs represent your input's brightness, greenness-blueness and greenness-redness. You should set your video card's controls so the brightest, bluest and reddest inputs move the graphs all the way to the right without going past the edge, and so the darkest and greenest inputs move the graphs all the way to the left. While that's running, do this in a second window:

v4l2-ctl --device=$VIDEO_DEVICE --set-ctrl=contrast=32

The video should turn much greyer, and the main spike of the top graph should move closer to the centre. Different video cards provide different controls, but here is some advice on choosing the best values for each one:

* changing the brightness moves the centre of the top graph, contrast alters its range
* changing the hue moves the centre of the bottom two graphs, saturation controls their range
* composite and S-video inputs produce blackness when they're disconnected - viewing this makes it easier to set brightness
* some video cards have an <code>invert</code> control - toggling this when showing a black screen makes it easier to set contrast
* some VCRs have configuration menus with blue backgrounds - viewing this makes it easier to set saturation
* when playing most videos, the peaks of the colour graphs should tend to be equally far from the centre (one to the left, one to the right) - viewing this makes it easier to set hue
* connecting a camcorder lets you capture any colour you like - try videoing a [https://en.wikipedia.org/wiki/Testcard testcard], or just coloured paper

== Encoding an accurate video ==

Your first step should be to record an accurate copy of your source video. A good quality encoding can use anything up to 30 gigabytes per hour, so figure out how long your video is and make sure you have enough space.

As well as the values above, you will need to decide the following (preferably storing them as environment variables):

* <code>ENCODE_VIDEO_FORMAT</code> - the format you chose to encode the accurate copy of your video (see <code>ffmpeg -encoders</code> for a list)
* <code>ENCODE_AUDIO_FORMAT</code> - the format you chose to encode the accurate copy of your audio (see <code>ffmpeg -encoders</code> for a list)
* <code>ENCODE_MUXER_FORMAT</code> - the format you chose to mux your videos together (see <code>ffmpeg -formats</code> for a list)
* <code>ENCODE_VIDEO_OPTIONS</code> - settings for your video format (see <code>ffmpeg --help encoder=$ENCODE_VIDEO_FORMAT</code> for a list)
* <code>ENCODE_AUDIO_OPTIONS</code> - settings for your audio format (see <code>ffmpeg --help encoder=$ENCODE_AUDIO_FORMAT</code> for a list)
* <code>ENCODE_MUXER_OPTIONS</code> - settings for your muxer format (see <code>ffmpeg --help muxer=$ENCODE_MUXER_FORMAT</code> for a list)
* <code>ENCODE_FILENAME</code> - your preferred filename (see <code>ffmpeg --help muxer=$ENCODE_MUXER_FORMAT</code> for suggested extensions)

[http://ffmpeg.org/ FFmpeg] is widely recommended, but can't handle the quirks of capturing analogue video. To avoid desynchronising audio and video, you need to combine it with [[GStreamer]]:

<nowiki>ffmpeg \
-i <(
gst-launch-1.0 -q \
v4l2src device="$VIDEO_DEVICE" do-timestamp=true norm="$TV_NORM" pixel-aspect-ratio=1 \
! $VIDEO_CAPABILITIES \
! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
! mux. \
alsasrc device="$AUDIO_DEVICE" do-timestamp=true \
! $AUDIO_CAPABILITIES \
! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
! mux. \
matroskamux name=mux \
! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
! fdsink fd=1
) \
-c:v $ENCODE_VIDEO_FORMAT $ENCODE_VIDEO_OPTIONS \
-c:a $ENCODE_AUDIO_FORMAT $ENCODE_AUDIO_OPTIONS \
-f $ENCODE_MUXER_FORMAT $ENCODE_MUXER_OPTIONS \
"$ENCODE_FILENAME"</nowiki>

This command does two things:
* tells GStreamer to record raw audio and video, using a [http://www.matroska.org/ Matroska media container] to communicate with FFmpeg
* tells FFmpeg to accept the video from GStreamer and encode it using the settings you specified (e.g. remuxing to your preferred container format)

If you have enough free disk space, you could just save the raw video as your accurate copy - see [[GStreamer]] for details.

Make sure to start <code>ffmpeg</code> before pressing play - it's easier to remove the first few frames while transcoding than to prepend a second recording to the start of a video. But consider setting your VCR to play from AV instead of a TV channel - it will make those first few frames black, which is easier on the eye when you're deciding which frames to remove.

=== Handling desynchronised audio and video ===

Most people can skip this step - [[GStreamer]] should be able to synchronise your audio and video automatically using the <code>do-timestamp</code> setting. If your audio and video aren't synchronised (most noticeable when people's mouths don't quite move in time to their words), first check you're using a raw <code>hw</code> audio device, as <code>plughw</code> devices can cause synchronisation issues. If the problem still occurs with a raw <code>hw</code> audio device, your hardware may not support timestamps so you'll have to fix it during transcoding.

If you need to resync audio and video during transcoding, you can make your life easier by creating [https://en.wikipedia.org/wiki/Clapperboard clapperboard] effects at the start of your videos - hook up a camcorder, run your capture command, then clap your hands in front of the camera before pressing play on your VCR. Failing that, make note of any moments where an obvious visual element occurred at the same moment as an obvious audio element (such as the clunk when a cup is placed on a table).

Once you've recorded your video, you'll need to calculate your desired A/V offset. For the best result, play your video with precise timestamps (e.g. <code>mpv --osd-fractions "$ENCODE_FILENAME"</code>) and open your audio in an audio editor (e.g. [http://audacityteam.org/ Audacity]), then find the exact moment the clap appears in the video, the exact moment it appears in the audio, and subtract one timestamp from the other. To confirm your result, run <code>mpv --audio-delay=<result> "$ENCODE_FILENAME"</code> and check that the synchronisation looks right.
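As a worked example (the timestamps below are invented - read yours from mpv and Audacity as described above), suppose the clap is visible at 12.480 seconds but audible at 12.180 seconds:

```shell
VIDEO_CLAP=12.480   # hypothetical: frame where the hands meet, from mpv
AUDIO_CLAP=12.180   # hypothetical: peak of the clap, from Audacity
# Audio leads video, so it needs to be delayed by the difference:
AV_OFFSET=$(awk "BEGIN { printf \"%.3f\", $VIDEO_CLAP - $AUDIO_CLAP }")
echo "$AV_OFFSET"   # prints 0.300
# confirm by ear:  mpv --audio-delay=$AV_OFFSET "$ENCODE_FILENAME"
```

To bake the offset into the transcoded file, one common approach is to give FFmpeg the recording twice, apply <code>-itsoffset</code> to the second input, then map the video from the first input and the audio from the shifted one.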

=== Measuring audio noise ===

Your hardware will create a small amount of audio noise in your recording. If you want to remove this later, you'll need to measure it for every hardware configuration you use - S-video vs. composite, laptop charging vs. unplugged, and so on.

You'll need a recording of about half a second of your system in a resting state, which you will use later to remove noise. This can be a silent TV channel or paused tape, but if you're using composite or S-video connectors, the easiest thing is probably just to record a few moments of blackness before pressing play.

=== Choosing formats ===

Your encoding formats need to encode in real-time and lose as little information as possible. Even if you plan to throw that information away during transcoding, an accurate initial recording will give you more freedom when the time comes. For example, your muxer format should support ''variable frame rates'' so you can measure your video's frame rate. Once you have that information, you could use it to calculate an accurate transcoding frame rate or to cut out sections where your VCR delivered the wrong number of frames - either way the information is useful even though it was lost from the final video.
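For example, here is one way to turn that information into an accurate average frame rate: divide the recording's total frame count by its duration. The <code>ffprobe</code> commands in the comments are one way to obtain those numbers; the figures below are invented for illustration:

```shell
# Get the real numbers from your recording, e.g.:
#   ffprobe -v error -select_streams v:0 -count_frames \
#       -show_entries stream=nb_read_frames "$ENCODE_FILENAME"
#   ffprobe -v error -show_entries format=duration "$ENCODE_FILENAME"
FRAME_COUNT=90071   # hypothetical total number of frames
DURATION=3600.2     # hypothetical duration in seconds
# average frame rate - prints 25.0183 for these numbers
awk "BEGIN { printf \"%.4f\\n\", $FRAME_COUNT / $DURATION }"
```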

== Transcoding a usable video ==

The video you recorded should accurately represent your source video, but it will probably be a large file, contain distracting analogue noise, and might not even play in some programs. You need to ''transcode'' it to a more usable format. You can use any program(s) to do this, but it's probably easiest to continue using [http://ffmpeg.org/ FFmpeg]:

<nowiki>ffmpeg -i "$ENCODE_FILENAME" \
-c:v <transcode-video-format> <transcode-video-options> \
-c:a <transcode-audio-format> <transcode-audio-options> \
-f <transcode-muxer-format> <transcode-muxer-options> \
<transcode-filename></nowiki>

If you're happy with the result, you can stop here. But you might want to improve the video, for example:

* [https://trac.ffmpeg.org/wiki/Seeking Cut sections out of your video with the -ss and -t options]
* [https://trac.ffmpeg.org/wiki/FilteringGuide Add complex filters to clean up the video and audio]
* [http://stackoverflow.com/questions/20254846/how-to-add-an-external-audio-track-to-a-video-file-using-vlc-or-ffmpeg-command-l Edit the audio with Audacity then copy the new version back in]

Some of these improvements require you to identify the millisecond where an event occurred. <code>mpv --osd-fractions</code> will print millisecond-accurate timestamps, and [http://mpv.io/manual/master/#keyboard-control its default keybindings] allow you to step back and forward one frame at a time.
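For example (with invented cut points), once you've noted the first moment you want to keep and the first moment you want to discard, the length to pass to <code>-t</code> is just the difference:

```shell
CUT_START=62.500    # hypothetical: first wanted moment, in seconds
CUT_END=3598.000    # hypothetical: first unwanted moment after it
CUT_LENGTH=$(awk "BEGIN { printf \"%.3f\", $CUT_END - $CUT_START }")
echo "$CUT_LENGTH"   # prints 3535.500
# then transcode just that section:
#   ffmpeg -ss "$CUT_START" -i "$ENCODE_FILENAME" -t "$CUT_LENGTH" ...
```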

This section will discuss some of the high-level issues you'll face if you choose to improve your video.

=== Cleaning audio ===

Any analogue recording will contain a certain amount of background noise. Cleaning noise is optional, and you'll always be able to produce a slightly better result if you spend a little longer on it, so this section will just introduce enough theory to get you started. Audacity's [http://manual.audacityteam.org/o/man/equalization.html equalizer] and [https://wiki.audacityteam.org/wiki/Noise_Reduction noise reduction effect] are good places to start experimenting.

The major noise sources are:

* '''your audio codec''' might throw away sound it thinks you won't hear in order to reduce file size
* '''your recording system''' will produce a small, consistent amount of noise based on its various electrical and mechanical components
* '''VHS format limitations''' cause static at high and low frequencies, depending on the VCR's settings
* '''imperfections in tape recording and playback''' produce noise that differs between recordings and even between scenes

A lossless audio format (e.g. WAV or FLAC) should ensure your original encoding doesn't produce any extra noise. Even if you transcode to a format like MP3 that throws information away, a lossless original ensures the result only contains one generation of compression artefacts.

The primary means of reducing noise is the frequency-based [http://en.wikipedia.org/wiki/Noise_gate noise gate], which ''blocks'' some frequencies and ''passes'' others. ''High-pass'' and ''low-pass'' filters pass frequencies above or below a certain cutoff, and can be combined into ''band-pass'' or even ''multi-band'' filters. The rest of this section discusses how to build a series of noise gates for your audio.

Identify noise from your recording system by recording the sound of a paused tape or silent television channel for a few seconds. If possible, use the near-silence at the start of your recording so you can guarantee your sample matches your current hardware configuration. Use this baseline recording as a ''noise profile'' which your software uses to build a multi-band noise gate. You can apply that noise gate to the whole recording, and to other recordings with the same hardware that don't have a usable sample.

Identify VHS format limitations by searching online for information based on your TV norm (NTSC, PAL or SECAM), your recording quality (normal or Hi-Fi) and your VHS play mode (short- or long-play). [https://en.wikipedia.org/wiki/VHS#Audio_recording Wikipedia's discussion of VHS audio recording] is a good place to start. If you're able to find the information, gate your recordings with high-pass and low-pass filters that only allow frequencies within the range your tape actually records. For example, a long-play recording of a PAL tape will produce static below 100Hz and above 4kHz so you should gate your recording to only pass audio in the 100Hz-4000Hz range. If you can't find the information, you can determine it experimentally by trying out different filters to see what sounds right - your system probably produces static below about 10Hz or 100Hz and above about 4kHz or 12kHz, so try high- and low-pass filters in those ranges until you stop hearing background noise. If you don't remove this noise source, the next step will do a reasonable job of guessing it for you anyway.
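Continuing the long-play PAL example above, FFmpeg's <code>highpass</code> and <code>lowpass</code> audio filters can act as that pair of gates (the cutoffs are the ones quoted above, not universal values - substitute whatever range your research or experiments suggest):

```shell
# pass only the 100Hz-4kHz band the hypothetical tape actually records
VHS_BAND="highpass=f=100, lowpass=f=4000"
echo "$VHS_BAND"
# apply it while transcoding:
#   ffmpeg -i "$ENCODE_FILENAME" -af "$VHS_BAND" ...
```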

Identify imperfections in recording and playback by watching the video and looking for periods of silence. You only need half a second of background noise to generate a profile, but the number of profiles is up to you. Some people grab one profile for a whole recording, others combine clips into averaged noise profiles, others cut audio into scenes and de-noise each in turn. At a minimum, tapes with multiple recordings should be split up and each one de-noised separately - a tape containing a TV program recorded in LP mode in one VCR followed by a home video recorded in SP in another VCR will produce two very different noise profiles, even if played back all in one go.

It's good to apply filters in the right order (system profile, then VHS limits, then recording profiles), but beyond that noise reduction is very subjective. For example, intelligent noise reduction tends to remove more noise in quiet periods but less when it would risk losing signal, which can sound like a snare drum being brushed whenever someone speaks. But dumb filters silence the same frequencies at all times, which can make everything sound muffled.

You can run your audio through as many gates as you like, and even repeat the same filter several times. If you use a noise reduction profile, you can even get different results from different programs (see for example [http://sourceforge.net/p/sox/mailman/message/30019023/ this comparison of sox and Audacity's algorithms]). There's no right answer but there's always a better result if you spend a bit more time, so you'll need to decide for yourself when the result is good enough.

=== Cleaning video ===

Much like audio, you can spend as long as you like cleaning your video. But whereas audio cleaning tends to be about doing one thing really well (separating out frequencies of signal and noise), video cleaning tends to be about getting decent results in different circumstances. For example, you might want to just remove the overscan lines at the bottom of a VHS recording, denoise a video slightly to reduce file size, or aggressively remove grains to make a low-quality recording watchable. [https://ffmpeg.org/ffmpeg-filters.html FFmpeg's video filter list] is a good place to start, but here are a few things you should know.

Some programs need video to have a specified aspect ratio. If you simply crop out the ugly overscan lines at the bottom of your video, some programs may refuse to play your video. Instead you should ''mask'' the area with blackness. In <code>ffmpeg</code>, you would use a <code>crop</code> filter to remove the overscan followed by a <code>pad</code> filter to put the image back to its original height.
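As a sketch, for a 720x576 PAL capture with (hypothetically) 8 lines of overscan noise at the bottom, the filter pair might look like this:

```shell
# crop the bottom 8 lines off, then pad back to 576 with blackness
# (crop and pad arguments are width:height:x:y)
MASK_OVERSCAN="crop=720:568:0:0, pad=720:576:0:0"
echo "$MASK_OVERSCAN"
#   ffmpeg -i "$ENCODE_FILENAME" -vf "$MASK_OVERSCAN" ...
```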

Analogue video is [https://en.wikipedia.org/wiki/Interlaced_video interlaced], essentially interleaving two consecutive video frames within each image. This confuses video filters that compare neighbouring pixels (e.g. to look for bright grains in dark areas of the screen), so you should ''deinterleave'' the frames before using such filters, then ''interleave'' them again afterwards. For example, an <code>ffmpeg</code> filter chain might start with <code>il=d:d:d</code> and end with <code>il=i:i:i</code>. If you skip the trailing <code>il=i:i:i</code>, you can see that de-interleaving works by putting each image in a different half of the frame to trick other filters into doing the right thing.
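For instance, a denoising pass wrapped in that de-interleave/re-interleave pair might be sketched like this (the <code>hqdn3d</code> strengths here are illustrative guesses, not recommended values):

```shell
# split the fields apart, denoise, then put the fields back together
DENOISE_CHAIN="il=d:d:d, hqdn3d=4:3:6:4, il=i:i:i"
echo "$DENOISE_CHAIN"
#   ffmpeg -i "$ENCODE_FILENAME" -vf "$DENOISE_CHAIN" ...
```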

=== Choosing formats ===

Your transcoding format needs to be small and compatible with whatever software you will use to play it back. If you can't find accurate information about your players, create a short test video and try it on your system. Your video codec may well have options to reduce file size at the cost of encoding time, so you may want to leave your computer transcoding overnight to get the best file size.

== Wrapping it all up in a script ==

[[V4L_capturing/script|A V4L capture script]] has been written based on this page. It presents the commands above in a more usable package, and adds several extra functions that were too complex to describe here. For example, it will encode a secondary "review" file that makes it easier to find cut-points in videos.

If you would rather write your own, consider looking through the script for inspiration. You can see the commands it runs by searching for <code>CMD_</code> on [[V4L_capturing/script|the script page]].

== See Also ==

* [[Streaming-V4L]] - information about streaming live video (e.g. webcams)
* [[TV_Related_Software|TV-related software]] - information about digital video (DVB)
* [[Easycap|EasyCAP]] - a common type of capture device
* [[GStreamer]] - an audio/video processing toolkit
* [https://frnmst.github.io/automated-tasks/scripts.html#convert-videos-py Franco Masotti's video converter] - a Python script partially inspired by this page

Latest revision as of 06:17, 14 June 2020

This page discusses how to capture analogue video for offline consumption (especially digitising old VHS tapes). For information about streaming live video (e.g. webcams), see the streaming page. For information about digital video (DVB), see TV-related software.

Overview

Analogue video technology was largely designed before the advent of computers, so accurately digitising a video is a difficult problem. For example, software often assumes a constant frame rate throughout a video, but analogue technologies can deliver different numbers of frames from second to second. This page will present a framework for recording video, which you can alter for your specific requirements.

Recommended process

Your workflow should look something like this:

  1. Set your system up - understand the quirks of your TV card, VCR etc.
  2. Encode an accurate copy of the source video - handle issues with the analogue half of the system. Do as little digital processing as possible
  3. Transcode a usable copy of the video - convert the previous file to something pleasing to use
  4. Try the video and transcode again - check whether the video works how you want, otherwise try some different settings

Converting analogue input to a digital format is hard - VCRs overheat and damage tapes, computers use too much CPU and drop frames, disk drives fill up, etc. Creating a good digital video is also hard - not all software supports all formats, overscan and background hiss distract the viewer, videos need to be split into useful chunks, and so on. It's much easier to learn the process and produce a quality result if you tackle encoding in one step and transcoding in another.

Suggested software

This page assumes you have installed the following programs:

  • gst-launch-1.0 for capturing video (probably part of the gstreamer1.0-tools package)
  • FFMpeg for saving and editing video (probably part of the ffmpeg package)
  • v4l-ctl for controlling your video card (probably part of the v4l-utils package)
  • mpv for viewing videos (probably part of the mpv package)

There are alternatives for each of these (e.g. the older 0.10 series of GStreamer and the libav fork of FFMpeg). You should be able to modify the instructions below to suit your preferences.

Choosing formats

When you create a video, you need to choose your video format (e.g. XviD or MPEG-2), audio format (e.g. WAV or MP3) and container format (e.g. AVI or MP4). There's constant work to improve the codecs that create audio/video and the muxers that create containers, and whole new formats are invented fairly regularly, so this page can't recommend any specific formats. For example, as of late 2015 MPEG-2 is the most widely supported by older DVD players, H.624 is becoming a de facto standard in modern web browsers, and people are waiting to see whether HEVC will be blocked by patent trolls. That's probably enough to decide which video codec is right for you in late 2015, but the facts will have changed even by early 2016.

You'll need to do some research to find the currently-recommended formats. Wikipedia's comparisons of audio, video and container formats are a good place to start. Here are some important things to look for:

  • encoding speed - during the encoding stage, using too much CPU load will cause frame-drops as the computer tries to keep up
  • accuracy - some formats are lossless, others throw away information to improve speed and/or reduce file size
  • file size - different formats use different amounts of disk space, even with the same accuracy
  • compatibility - newer formats usually produce better results but can't be played by older software

Remember that you can use different formats in the encode and transcode stages. Speed and accuracy are most important when encoding, so you should use a modern, fast, low-loss format to create your initial accurate copy of the source video. But size and compatibility are most important for playback, so you should transcode to a format that produces a smaller or more compatible file. For example, as of late 2015 you might encode FLAC audio and x264 video into a Matroska file, then transcode MP3 audio and MPEG-2 video into an AVI file. You can examine the result and transcode again from the original if the file is too large or your grandmother's DVD player won't play it.

Setting up

Before you can record a video, you need to set your system up and identify the following information:

  • connector type (RF, composite or S-video)
  • TV norm (some variant of PAL, NTSC or SECAM)
  • video device (/dev/video<number>)
  • audio device (hw:CARD=<id>,DEV=<number>)
  • video capabilities (video/x-raw, format=UYVY, framerate=<fraction>, width=<int>, height=<int>)
  • audio capabilities (audio/x-raw, rate=<int>, channels=<int>)
  • colour settings (optional - hue, saturation, brightness and contrast)

This section will explain how to find these.

Connecting your video

Rf-connector.png avoid RF Connector tends to create more noise than the alternatives. Usually input #0, shows snow when disconnected
Composite-video-connector.png use Composite video connector widely supported and produces a good signal. Usually input #1, shows blackness when disconnected
S-video-connector.png use if available S-video connector should produce a good video signal but most hardware needs a converter. Usually input #2, shows blackness when disconnected

Connect your video source (TV or VCR) to your computer however you can. Each type of connector has slightly different properties - try whatever you can and see what works. If you have a TV card that supports multiple inputs, you will need to specify the input number when you come to record.

Finding your TV norm

Most TV cards only support the TV norm of the country they were sold in (e.g. PAL-I in the UK or NTSC-M in the Americas), but it's best to confirm this just in case. Wikipedia has a graph of colour systems by country and a complete list of standards with countries they're used in.

If you like, you can store your TV norm in an environment variable:

TV_NORM=<norm>

For example, if your norm was PAL-I, you might type TV_NORM=PAL-I into your terminal. This guide will use $TV_NORM to refer to your video norm - if you choose not to set an environment variable, you will need to replace instances of $TV_NORM with your TV norm.

Determining your video device

Once you have connected your input, you need to determine the name Linux gives it. See all your video devices by doing:

ls /dev/video*

One of these is the device you want. Most people only have one, or can figure it out by disconnecting devices and rerunning the above command. Otherwise, check the capabilities of each device:

for VIDEO_DEVICE in /dev/video* ; do echo -e "\n\n$VIDEO_DEVICE\n" ; v4l2-ctl --device=$VIDEO_DEVICE --list-inputs ; done

Usually you will see e.g. a webcam with a single input and a TV card with multiple inputs. If you're still not sure which one you want, try each one in turn:

mpv av://v4l2:<device> tv:///<whichever-input-number-you-connected>
# or ...
mpv --tv-device=<device> tv:///<whichever-input-number-you-connected>  # only works in mpv v0.25.0 or lower

Before you start, plug everything in and start playing a tape in your VCR. That way you won't be confused by weird issues (like seeing snow instead of blackness from a composite cable, because the VCR's RF connector is unplugged). Usually input #0 is the RF connector, input #1 is the composite connector and input #2 is the S-video connector; but some video cards are different so check your input numbers against the output of v4l2-ctl.

If you like, you can store your device and input number in environment variables:

VIDEO_DEVICE=<device>
VIDEO_INPUT=<whichever-input-number-you-connected>

Further examples on this page will use $VIDEO_DEVICE and $VIDEO_INPUT - you will need to replace these if you don't set environment variables.

Determining your audio device

See all of your audio devices by doing:

arecord -l

Again, it should be fairly obvious which of these is the right one. Get the device names by doing:

arecord -L | grep ^hw:

If you're not sure which one you want, try each in turn:

mpv --tv-device=$VIDEO_DEVICE --tv-adevice=<device> tv:///$VIDEO_INPUT  # mpv version 0.25.0 or lower

Again, you should hear your tape playing when you get the right one. Note: always use an ALSA hw device, as they are closest to the hardware. Pulse audio devices and ALSA's plughw devices add extra layers that, while more convenient for most uses, only cause headaches for us.

Optionally set your device in an environment variable:

AUDIO_DEVICE=<device>

Further examples on this page will use $AUDIO_DEVICE in place of an actual audio device - you will need to replace this if you don't set environment variables.

Getting your device capabilities

To find the capabilities of your video device, do:

gst-launch-1.0 --gst-debug=v4l2src:5 v4l2src device=$VIDEO_DEVICE ! fakesink 2>&1 | sed -une '/caps of src/ s/[:;] /\n/gp'

To find the capabilities of your audio device, do:

gst-launch-1.0 --gst-debug=alsa:5 alsasrc device=$AUDIO_DEVICE ! fakesink 2>&1 | sed -une '/returning caps/  s/[s;] /\n/gp'

You will need to press ctrl+c to close each of these programs when they've printed some output. When you record your video, you will need to specify capabilities based on the ranges displayed here. Some things to remember:

  • audio format is optional (your software can decide this automatically)
  • video format (discussed below) should be optional, but as of 2015 a bug means you should specify format=UYVY
  • video height (discussed below) should be the appropriate height for your TV norm
  • video framerate (discussed below) should be the appropriate value for your TV norm, but may need to be tweaked for your hardware
  • pixel-aspect-ratio will be set below - do not specify it here
  • for all other capabilities, just pick the highest number (or delete it altogether if there's only one choice)

For example, if your TV norm was some variant of PAL and your video card showed these results:

$ gst-launch-1.0 --gst-debug=v4l2src:5 v4l2src device=$VIDEO_DEVICE ! fakesink 2>&1 | sed -une '/caps of src/ s/[:;] /\n/gp'
0:00:00.052071821 29657      0x139fc50 DEBUG                v4l2src gstv4l2src.c:306:gst_v4l2src_negotiate:<v4l2src0> caps of src
video/x-raw, format=(string)YUY2, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)UYVY, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)Y42B, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)I420, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)YV12, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)xRGB, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)BGRx, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)RGB, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)BGR, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)RGB16, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)RGB15, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)GRAY8, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
$ gst-launch-1.0 --gst-debug=alsa:5 alsasrc device=$AUDIO_DEVICE ! fakesink 2>&1 | sed -une '/returning caps/  s/[s;] /\n/gp'
0:00:00.039231863 30898      0x25fcde0 INFO                    alsa gstalsasrc.c:318:gst_alsasrc_getcaps:<alsasrc0> returning cap
audio/x-raw, format=(string){ S16LE, U16LE }, layout=(string)interleaved, rate=(int)32000, channels=(int)2, channel-mask=(bitmask)0x0000000000000003
audio/x-raw, format=(string){ S16LE, U16LE }, layout=(string)interleaved, rate=(int)32000, channels=(int)1

Then you would select video/x-raw, format=UYVY, framerate=25/1, width=720, height=576 and audio/x-raw, rate=32000, channels=2

Once again, you can set your capabilities in an environment variable, but you will need to put quote marks around them:

VIDEO_CAPABILITIES="<capabilities>"
AUDIO_CAPABILITIES="<capabilities>"

For example, AUDIO_CAPABILITIES="audio/x-raw, rate=32000, channels=2". Further examples on this page will use $VIDEO_CAPABILITIES and $AUDIO_CAPABILITIES in place of actual capabilities - you will need to replace these if you don't set environment variables.

Video formats

Different video formats will affect the colours in your video. In practice, most formats simply won't work, and format=UYVY or format=YUY2 will give the best results. This section provides more detail in case you need to make a different choice.

Computers usually describe colours using one of the RGB family of formats - three numbers specifying an amount of red, green and blue. Modern computers mainly use RGB24 - for example, CSS colour codes are RGB24. Here's what CSS colour codes would look like if they used different types of RGB format:

What a CSS hex code for OliveDrab would look like in different RGB formats
Format Example Comments
RGB24 #6B8E23 red, green and blue (in that order), on a scale of 0 to 255
BGR24 #238E6B blue, green and red (in that order), on a scale of 0 to 255
RGB16 #6c64 red, green and blue (in that order), red and blue on a scale of 0 to 32, green on a scale of 0 to 64
RGB15 #3624 red, green and blue (in that order), on a scale of 0 to 32

These formats are all quite similar, and historically have all served useful purposes. RGB15 and RGB16 have less information and should be avoided nowadays, BGR24 is fine but happened not to become popular.

Analogue television didn't use RGB. Black-and-white TV described colours using one number - brightness. Colour television added two more numbers - how much of the brightness was blue and how much was red (green was then calculated from that information). These three numbers are often referred to as Y, U and V, so the family of colour formats used by television is called YUV.

Like with different members of the RGB family, some members of the YUV family are better (UYVY is better than YV12), and some are arbitrarily different (UYVY is just as good as YUY2).

Because your source video was created using a YUV format, you should encode with a YUV format if possible. RGB formats will lose a bit of colour information during the conversion process, and you won't gain anything by it. Look through the list of video formats and pick whichever YUV format has the highest numbers (packed vs. planar doesn't matter).

Video heights

Some devices report a maximum height of 578. A PAL TV signal is 576 lines tall and an NTSC signal is 486 lines, so height=578 won't give you the best picture quality. To confirm this, tune to a non-existent TV channel then take a screenshot of the snow:

gst-launch-1.0 -q v4l2src device=$VIDEO_DEVICE \
    ! $VIDEO_CAPABILITIES, height=578 \
    ! imagefreeze \
    ! autovideosink

Here's an example of what you might see - notice the blurring in the middle of the picture. Now take a screenshot with the appropriate height for your TV norm:

gst-launch-1.0 -q v4l2src device=$VIDEO_DEVICE \
    ! $VIDEO_CAPABILITIES, height=<appropriate-height> \
    ! imagefreeze \
    ! autovideosink

Here's an example taken with height=576 - notice the middle of this picture is nice and crisp.

You may want to test this yourself and set your height to whatever looks best.

Video framerates

Due to hardware issues, some V4L devices produce slightly too many (or too few) frames per second. To check your system's actual frame rate, start your video source (e.g. a VCR or webcam) then run this command:

gst-launch-1.0 v4l2src device=$VIDEO_DEVICE \
    ! $VIDEO_CAPABILITIES \
    ! fpsdisplaysink fps-update-interval=100000
  1. Let it run for 100 seconds to get a large enough sample. It should print some statistics in the bottom of the window - write down the number of frames dropped
  2. Let it run for another 100 seconds, then write down the new number of frames dropped
  3. Calculate (second number) - (first number) - 1 (e.g. 5007 - 2504 - 1 == 2502)
    • You need to subtract one because fpsdisplaysink drops one frame every time it displays the counter
  4. That number is exactly one hundred times your framerate, so you should tell your software e.g. framerate=2502/100

Note: VHS framerates can vary within the same file. To get an accurate measure of a VHS recording's framerate, encode to a format that supports variable framerates then retrieve the video's duration and total number of frames. You can then transcode a new file with your desired frame rate.

Correcting your colour settings

Most TV cards have acceptable colour settings by default, but you might get some benefit by configuring things manually. First check the controls available for your hardware:

v4l2-ctl --device=$VIDEO_DEVICE --list-ctrls | tee tv-card-settings-$( date --iso-8601=seconds ).txt

This also saves your settings to a file for future reference. Next, watch your video with some graphs:

ffplay -f lavfi "
    movie=$VIDEO_DEVICE, scale=720:-1, fps=25, split [h0] [v0];
    [h0] histogram=levels_mode=logarithmic, drawgrid=w=8:h=in_h:color=blue@0.5, drawbox=iw/2:0:1:ih:red@0.5, pad=iw+720:ih [h1];
    [h1][v0] overlay=main_w-overlay_w:(main_h-overlay_h)/2 [out0]
"

Because analogue videos use YUV instead of RGB, the graphs represent your input's brightness, greenness-blueness and greenness-redness. You should set your video card's controls so the brightest, bluest and redest inputs move the graphs all the way to the right without going past the edge; and so the darkest and greenest inputs move the graphs all the way to the left. While that's running, do this in a second window:

v4l2-ctl --device=$VIDEO_DEVICE  --set-ctrl=contrast=32

The video should turn much greyer, and the main spike of the top graph should move closer to the centre. Different video cards provide different controls, but here is some advice on choosing the best values for each one:

  • changing the brightness moves the centre of the top graph, contrast alters its range
  • changing the hue moves the centre of the bottom two graphs, saturation controls their range
  • composite and S-video inputs produce blackness when they're disconnected - viewing this makes it easier to set brightness
  • some video cards have an invert control - toggling this when showing a black screen makes it easier to set contrast
  • some VCRs have configuration menus with blue backgrounds - viewing this makes it easier to set saturation
  • when playing most videos, the peaks of the colour graphs should tend to be equally far from the centre (one to the left, one to the right) - viewing this makes it easier to set hue
  • connecting a camcorder lets you capture any colour you like - try videoing a testcard, or just coloured paper

Encoding an accurate video

Your first step should be to record an accurate copy of your source video. A good quality encoding can use anything up to 30 gigabytes per hour, so figure out how long your video is and make sure you have enough space.

As well as the values above, you will need to decide the following (preferably storing them as environment variables):

  • ENCODE_VIDEO_FORMAT - the codec used to encode the accurate copy of your video (see ffmpeg -encoders for a list)
  • ENCODE_AUDIO_FORMAT - the codec used to encode the accurate copy of your audio (see ffmpeg -encoders for a list)
  • ENCODE_MUXER_FORMAT - the container format used to mux your audio and video together (see ffmpeg -formats for a list)
  • ENCODE_VIDEO_OPTIONS - settings for your video codec (see ffmpeg --help encoder=$ENCODE_VIDEO_FORMAT for a list)
  • ENCODE_AUDIO_OPTIONS - settings for your audio codec (see ffmpeg --help encoder=$ENCODE_AUDIO_FORMAT for a list)
  • ENCODE_MUXER_OPTIONS - settings for your muxer (see ffmpeg --help muxer=$ENCODE_MUXER_FORMAT for a list)
  • ENCODE_FILENAME - your preferred filename (see ffmpeg --help muxer=$ENCODE_MUXER_FORMAT for suggested extensions)
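For example, one possible lossless configuration (a sketch: ffv1, flac and matroska are real FFmpeg encoder/muxer names, but the choice and the empty option strings are assumptions - pick whatever suits your system):

```shell
# Lossless codecs keep the "accurate copy" accurate; Matroska supports
# variable frame rates, which matters later when measuring the frame rate.
export ENCODE_VIDEO_FORMAT="ffv1"       # lossless video codec
export ENCODE_VIDEO_OPTIONS=""
export ENCODE_AUDIO_FORMAT="flac"       # lossless audio codec
export ENCODE_AUDIO_OPTIONS=""
export ENCODE_MUXER_FORMAT="matroska"
export ENCODE_MUXER_OPTIONS=""
export ENCODE_FILENAME="capture-$(date +%Y%m%d).mkv"
```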

FFmpeg is widely recommended, but can't handle the quirks of capturing analogue video. To avoid desynchronising audio and video, you need to combine it with GStreamer:

ffmpeg \
    -i <(
        gst-launch-1.0 -q \
            v4l2src device="$VIDEO_DEVICE" do-timestamp=true norm="$TV_NORM" pixel-aspect-ratio=1 \
                ! $VIDEO_CAPABILITIES \
                ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
                ! mux. \
            alsasrc device="$AUDIO_DEVICE" do-timestamp=true \
                ! $AUDIO_CAPABILITIES \
                ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
                ! mux. \
            matroskamux name=mux \
                ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
                ! fdsink fd=1
    ) \
    -c:v $ENCODE_VIDEO_FORMAT $ENCODE_VIDEO_OPTIONS \
    -c:a $ENCODE_AUDIO_FORMAT $ENCODE_AUDIO_OPTIONS \
    -f   $ENCODE_MUXER_FORMAT $ENCODE_MUXER_OPTIONS \
    "$ENCODE_FILENAME"

This command does two things:

  • tells GStreamer to record raw audio and video, using a Matroska media container to communicate with FFmpeg
  • tells FFmpeg to accept the video from GStreamer and encode it using the settings you specified (e.g. remuxing to your preferred container format)

If you have enough free disk space, you could just save the raw video as your accurate copy - see GStreamer for details.

Make sure to start ffmpeg before pressing play - it's easier to remove the first few frames while transcoding than to prepend a second recording to the start of a video. Also consider setting your VCR to play from AV instead of a TV channel - that makes those first few frames black, which is easier on the eye when you're deciding which frames to remove.

Handling desynchronised audio and video

Most people can skip this step - GStreamer should be able to synchronise your audio and video automatically using the do-timestamp setting. If your audio and video aren't synchronised (most noticeable when people's mouths don't quite move in time with their words), first check that you're using a raw hw audio device, as plughw devices can cause synchronisation issues. If the problem still occurs with a raw hw device, your hardware may not support timestamps, and you'll have to fix the offset during transcoding.
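For example, when choosing the ALSA device (the card/device numbers here are hypothetical - `arecord -l` lists the ones on your system):

```shell
# Prefer a raw hw: device over plughw: for capture - plughw inserts a
# conversion layer that can drift out of sync with the video.
AUDIO_DEVICE="hw:1,0"        # raw device: the driver's native rate/format
# AUDIO_DEVICE="plughw:1,0"  # avoid: can cause synchronisation issues
echo "$AUDIO_DEVICE"
```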

If you need to resync audio and video during transcoding, you can make your life easier by creating clapperboard effects at the start of your videos - hook up a camcorder, run your capture command, then clap your hands in front of the camera before pressing play on your VCR. Failing that, make note of any moments where an obvious visual element occurred at the same moment as an obvious audio element (such as the clunk when a cup is placed on a table).

Once you've recorded your video, you'll need to calculate your desired A/V offset. For the best result, play your video with precise timestamps (e.g. mpv --osd-fractions "$ENCODE_FILENAME") and open your audio in an audio editor (e.g. Audacity), then find the exact frame when your clapperboard video/audio occurred and subtract one from the other. To confirm your result, run mpv --audio-delay=<result> "$ENCODE_FILENAME" and make sure it looks right.
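The subtraction can be sketched as follows (the two timestamps are made-up examples - substitute the ones you measured):

```shell
VIDEO_TS=12.482   # frame where the hands meet (from mpv --osd-fractions)
AUDIO_TS=12.614   # spike in the waveform (from Audacity)
OFFSET=$(awk -v v="$VIDEO_TS" -v a="$AUDIO_TS" 'BEGIN { printf "%.3f", v - a }')
echo "mpv --audio-delay=$OFFSET <your-encoded-file>"
```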

Measuring audio noise

Your hardware will create a small amount of audio noise in your recording. If you want to remove this later, you'll need to measure it for every hardware configuration you use - S-video vs. composite, laptop charging vs. unplugged, and so on.

You'll need a recording of about half a second of your system in a resting state, which you will use later to remove noise. This can be a silent TV channel or paused tape, but if you're using composite or S-video connectors, the easiest thing is probably just to record a few moments of blackness before pressing play.

Choosing formats

Your encoding formats need to encode in real-time and lose as little information as possible. Even if you plan to throw that information away during transcoding, an accurate initial recording will give you more freedom when the time comes. For example, your muxer format should support variable frame rates so you can measure your video's frame rate. Once you have that information, you could use it to calculate an accurate transcoding frame rate or to cut out sections where your VCR delivered the wrong number of frames - either way the information is useful even though it was lost from the final video.

Transcoding a usable video

The video you recorded should accurately represent your source video, but it will probably be a large file, contain analogue noise, and might not even play in some programs. You need to transcode it to a more usable format. You can use any program(s) to do this, but it's probably easiest to continue using FFmpeg:

ffmpeg -i "$ENCODE_FILENAME" \
    -c:v <transcode-video-format> <transcode-video-options> \
    -c:a <transcode-audio-format> <transcode-audio-options> \
    -f   <transcode-muxer-format> <transcode-muxer-options> \
    <transcode-filename>

If you're happy with the result, you can stop here. But you might want to improve the video further, for example by cleaning the audio or video as described below.

Some of these improvements require you to identify the millisecond where an event occurred. mpv --osd-fractions will print millisecond-accurate timestamps, and its default keybindings allow you to step back and forward one frame at a time.

This section will discuss some of the high-level issues you'll face if you choose to improve your video.

Cleaning audio

Any analogue recording will contain a certain amount of background noise. Cleaning noise is optional, and you'll always be able to produce a slightly better result if you spend a little longer on it, so this section will just introduce enough theory to get you started. Audacity's equalizer and noise reduction effect are good places to start experimenting.

The major noise sources are:

  • your audio codec might throw away sound it thinks you won't hear in order to reduce file size
  • your recording system will produce a small, consistent amount of noise based on its various electrical and mechanical components
  • VHS format limitations cause static at high and low frequencies, depending on the VCR's settings
  • imperfections in tape recording and playback produce noise that differs between recordings and even between scenes

A lossless audio format (e.g. WAV or FLAC) ensures your original encoding doesn't add any noise of its own. Even if you later transcode to a lossy format like MP3, a lossless original ensures there's only one generation of added noise in the result.

The primary means of reducing noise is the frequency-based noise gate, which blocks some frequencies and passes others. High-pass and low-pass filters pass frequencies above or below a cutoff respectively, and can be combined into band-pass or even multi-band filters. The rest of this section discusses how to build a series of noise gates for your audio.

Identify noise from your recording system by recording the sound of a paused tape or silent television channel for a few seconds. If possible, use the near-silence at the start of your recording so you can guarantee your sample matches your current hardware configuration. Use this baseline recording as a noise profile which your software uses to build a multi-band noise gate. You can apply that noise gate to the whole recording, and to other recordings with the same hardware that don't have a usable sample.
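As a dry-run sketch of this step using sox (one of the tools compared later in this section) - the filenames and the 0.21 reduction amount are assumptions, and the amount is best tuned by ear:

```shell
NOISE_SAMPLE="noise-sample.wav"   # ~0.5s of the system at rest
PROFILE="noise.prof"
# Step 1: build a noise profile from the baseline recording.
printf 'sox %s -n noiseprof %s\n' "$NOISE_SAMPLE" "$PROFILE"
# Step 2: apply the profile as a multi-band gate over the whole recording.
printf 'sox capture.wav cleaned.wav noisered %s 0.21\n' "$PROFILE"
```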

Identify VHS format limitations by searching online for information based on your TV norm (NTSC, PAL or SECAM), your recording quality (normal or Hi-Fi) and your VHS play mode (short- or long-play). Wikipedia's discussion of VHS audio recording is a good place to start. If you find the information, gate your recordings with high-pass and low-pass filters that only pass frequencies within the range your tape actually records. For example, a long-play recording of a PAL tape produces static below 100Hz and above 4kHz, so you should gate your recording to pass only the 100Hz-4kHz range. If you can't find the information, you can determine it experimentally: your system probably produces static below about 10Hz or 100Hz and above about 4kHz or 12kHz, so try high- and low-pass filters in those ranges until you stop hearing background noise. If you don't remove this noise source, the next step will do a reasonable job of guessing it for you anyway.
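For example, the PAL long-play band described above can be expressed as an FFmpeg audio filtergraph (the cutoffs are the ones from the text - substitute the range you found for your own setup):

```shell
# Band-pass built from FFmpeg's highpass and lowpass audio filters.
LOW_CUT=100     # Hz - static below this is outside what the tape records
HIGH_CUT=4000   # Hz - static above this likewise
AUDIO_FILTER="highpass=f=${LOW_CUT},lowpass=f=${HIGH_CUT}"
echo "$AUDIO_FILTER"
# Used as: ffmpeg -i in.mkv -af "$AUDIO_FILTER" ... out.mkv
```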

Identify imperfections in recording and playback by watching the video and looking for periods of silence. You only need half a second of background noise to generate a profile, but the number of profiles is up to you. Some people grab one profile for a whole recording, others combine clips into averaged noise profiles, others cut audio into scenes and de-noise each in turn. At a minimum, tapes with multiple recordings should be split up and each one de-noised separately - a tape containing a TV program recorded in LP mode in one VCR followed by a home video recorded in SP in another VCR will produce two very different noise profiles, even if played back all in one go.

It's good to apply filters in the right order (system profile, then VHS limits, then recording profiles), but beyond that noise reduction is very subjective. For example, intelligent noise reduction tends to remove more noise in quiet periods but less when it would risk losing signal, which can sound like a snare drum being brushed whenever someone speaks. But dumb filters silence the same frequencies at all times, which can make everything sound muffled.

You can run your audio through as many gates as you like, and even repeat the same filter several times. If you use a noise reduction profile, you can even get different results from different programs (see for example this comparison of sox and Audacity's algorithms). There's no right answer but there's always a better result if you spend a bit more time, so you'll need to decide for yourself when the result is good enough.

Cleaning video

Much like audio, you can spend as long as you like cleaning your video. But whereas audio cleaning tends to be about doing one thing really well (separating out frequencies of signal and noise), video cleaning tends to be about getting decent results in different circumstances. For example, you might want to just remove the overscan lines at the bottom of a VHS recording, denoise a video slightly to reduce file size, or aggressively remove grains to make a low-quality recording watchable. FFmpeg's video filter list is a good place to start, but here are a few things you should know.

Some programs need video to have a specified aspect ratio. If you simply crop out the ugly overscan lines at the bottom of your video, some programs may refuse to play your video. Instead you should mask the area with blackness. In ffmpeg, you would use a crop filter to remove the overscan followed by a pad filter to put the image back to its original height.
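A sketch of that crop-then-pad chain, assuming a 720x576 PAL frame with 16 lines of overscan at the bottom (both numbers are assumptions - measure your own recording):

```shell
WIDTH=720 HEIGHT=576
OVERSCAN_LINES=16   # hypothetical: ugly lines at the bottom of the frame
# crop keeps the top of the image; pad restores the original height with black.
VIDEO_FILTER="crop=${WIDTH}:$((HEIGHT - OVERSCAN_LINES)):0:0,pad=${WIDTH}:${HEIGHT}:0:0"
echo "$VIDEO_FILTER"
# Used as: ffmpeg -i in.mkv -vf "$VIDEO_FILTER" ... out.mkv
```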

Analogue video is interlaced: each frame interleaves two half-height fields that were sampled at different times. This confuses video filters that compare neighbouring pixels (e.g. to look for bright grains in dark areas of the screen), so you should deinterleave the fields before using such filters, then interleave them again afterwards. For example, an ffmpeg filter chain might start with il=d:d:d and end with il=i:i:i. If you skip the trailing il=i:i:i, you can see how de-interleaving works: each field is moved into its own half of the frame, which tricks other filters into doing the right thing.
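For example, wrapping a denoising filter between the two il steps (removegrain=1 is just a placeholder - use whichever pixel-comparing filter you actually need):

```shell
INNER_FILTER="removegrain=1"   # hypothetical choice of denoiser
# Deinterleave fields, run the filter, then interleave them back together.
VIDEO_FILTER="il=d:d:d,${INNER_FILTER},il=i:i:i"
echo "$VIDEO_FILTER"
```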

Choosing formats

Your transcoding format needs to be small and compatible with whatever software you will use to play it back. If you can't find accurate information about your players, create a short test video and try it on your system. Your video codec may well have options to reduce file size at the cost of encoding time, so you may want to leave your computer transcoding overnight to get the best file size.

Wrapping it all up in a script

A V4L capture script has been written based on this page. It presents the commands above in a more usable package, and adds several extra functions that were too complex to describe here. For example, it will encode a secondary "review" file that makes it easier to find cut-points in videos.

If you would rather write your own, consider looking through the script for inspiration. You can see the commands it runs by searching for CMD_ on the script page.

See Also