Tricky questions about resolution and interlacing...

  • Gameshow Host
    Junior Member
    • Mar 2002
    • 22

    Tricky questions about resolution and interlacing...

    I have a bunch of tricky questions, can anyone help?

    Resolution

    1) When I record a live digital broadcast signal, is there any way I can get a 100% pixel-perfect copy of what has been broadcast? I'm not talking about picture quality (that's bound to degrade during capture); I'm talking about matching the resolution at which I capture to exactly the same resolution the video is broadcast at, so that no "scaling" takes place. That way I will essentially have a near-perfect copy of the original video. I don't really understand how capture cards work, but am I right in saying that they should somehow be able to detect the resolution of the incoming signal? If so, is it possible to exactly match the capture resolution to the broadcast resolution?
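    A minimal sketch of the idea behind question 1 - reading the coded resolution of a digital stream so the capture or encode size can be set to match it. This assumes the stream has already been dumped to a file ("capture.ts" is just a placeholder name) and that the ffprobe tool is available; it is an illustration, not a description of how any particular capture card works.

    # Sketch: read the coded width/height of the first video stream with ffprobe,
    # so a capture/encode resolution can be chosen that matches it exactly.
    # Assumes ffprobe is installed and "capture.ts" is a placeholder file name.
    import subprocess

    def coded_resolution(path):
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_entries", "stream=width,height", "-of", "csv=p=0", path],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        width, height = (int(v) for v in out.split(","))
        return width, height

    print(coded_resolution("capture.ts"))   # e.g. (720, 576) for a full-D1 PAL channel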

    2) I intend to capture both 16:9 and 4:3 presentations, from both live broadcasts (from a digital satellite signal) and from video tape. My question is: does the aspect ratio make any difference to the resolution of the video? I know that PAL DVD videos are 720x576 regardless of their aspect ratio (16:9 just stretches the image out anamorphically). But what about broadcast television and video tape? Do they use two different resolutions for the two different aspect ratios, or just the one?
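    To make the anamorphic point concrete, here is a small worked example (generic ratios only, not the exact pixel aspect ratios defined in the broadcast standards): the same 720x576 frame is shown at two different shapes purely by changing how wide each stored pixel is displayed.

    # The same 720x576 storage size can carry 4:3 or 16:9 material; only the
    # pixel aspect ratio (how wide each stored pixel is displayed) changes.
    STORED_W, STORED_H = 720, 576

    for display_aspect in (4 / 3, 16 / 9):
        # pixel aspect ratio needed so 720x576 fills the chosen display shape
        par = (display_aspect * STORED_H) / STORED_W
        square_pixel_width = round(STORED_W * par)   # width if resampled to square pixels
        print(f"DAR {display_aspect:.2f}: PAR = {par:.4f}, "
              f"square-pixel frame = {square_pixel_width}x{STORED_H}")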

    Interlacing

    3) How is interlaced video "stored" in AVI and MPEG-2 formats? Are the two fields compressed separately or together? I notice my video card captures PAL video at 25 fps, so each pair of fields MUST be merged to form one high-resolution frame. Surely if both fields are compressed together as a single frame, the two fields will "blur together" slightly due to compression, and hence contaminate each other. So when the fields are separated again (for viewing on an interlaced TV), each image will be slightly "dirtied" because colour from the other field will have bled into it. Am I right about this? If so, is there any way to capture both fields independently to stop them from being dirtied?
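    As a picture of what "one frame = two fields" means once the capture is woven into 25 fps frames, here is a minimal NumPy sketch that splits a frame into its two fields and weaves them back together. It only shows the row interleaving; it says nothing about how any particular codec compresses the fields.

    # Sketch: a woven interlaced frame is just its two fields interleaved row by row.
    # Even rows (0, 2, 4, ...) form one field, odd rows the other, 1/50 s apart in time.
    import numpy as np

    frame = np.arange(576 * 720).reshape(576, 720)   # stand-in for one 720x576 frame

    top_field    = frame[0::2]   # 288 rows
    bottom_field = frame[1::2]   # 288 rows

    # Re-weave the fields into a full frame (the inverse operation).
    rewoven = np.empty_like(frame)
    rewoven[0::2] = top_field
    rewoven[1::2] = bottom_field

    assert np.array_equal(rewoven, frame)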

    4) The resolution at which TV programmes are broadcast is bound to be different from the resolution at which the VCR records onto the tape (I believe TV is broadcast at about 330 lines whereas S-VHS is about 410 lines). So whenever you record to tape, you're changing the resolution of the broadcast image (sometimes scaling up, sometimes scaling down). My question is, what happens to the interlacing during this change of resolution? How can interlacing possibly survive when the number of lines of resolution has changed? The only way I can think of that the VCR could preserve interlacing and still scale the image would be to scale each field independently. But the trouble with this is that there's no way you can scale an image (up or down) if half of the picture information is missing! I am led to conclude that when you record to video tape, the two independent fields are lost, and the whole thing is just merged into a progressive scan. Can anyone shed any light on this?
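    On the "can you scale a field on its own?" point: a field is just a half-height picture, so it can be resized like any other image (a real VCR, for what it's worth, keeps the scan-line and field structure of the signal; the "lines" figures above describe horizontal resolution). The sketch below resizes each field independently and re-weaves the result, using Pillow only as a convenient resizer; the 480-line target is purely illustrative.

    # Sketch: resize each field of an interlaced frame separately, then re-weave.
    # Scaling per field keeps the two moments in time apart; scaling the woven
    # frame would mix them. Pillow is used only as a convenient resizer, and the
    # 480-line target is illustrative.
    import numpy as np
    from PIL import Image

    def resize_rows(field, new_rows):
        """Resize a half-height field (rows x width) to new_rows x width."""
        img = Image.fromarray(field)
        return np.asarray(img.resize((field.shape[1], new_rows), Image.BILINEAR))

    frame = np.random.randint(0, 256, (576, 720), dtype=np.uint8)   # stand-in frame
    target_lines = 480

    top    = resize_rows(frame[0::2], target_lines // 2)   # 288 -> 240 rows
    bottom = resize_rows(frame[1::2], target_lines // 2)

    rescaled = np.empty((target_lines, 720), dtype=np.uint8)
    rescaled[0::2] = top
    rescaled[1::2] = bottom
    print(rescaled.shape)   # (480, 720), still organised field by field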

    5) Another little thing that's been bugging me - when I capture at 720x576 and play back the captured video, it's always 720x540! What happened to the extra 36 lines?
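    Not an authoritative answer, but the arithmetic below shows one common explanation: 720x540 is exactly the size you get if a player keeps the 720-pixel width and resamples a 4:3 720x576 frame to square pixels for display.

    # If a player keeps the 720-pixel width and resamples the height so the frame
    # becomes 4:3 with square pixels, the 576 lines land on 540:
    width = 720
    display_aspect = 4 / 3
    square_pixel_height = round(width / display_aspect)   # 720 * 3 / 4
    print(square_pixel_height)   # 540 - the "missing" 36 lines were resampled away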

    Thanks to anyone who can help.
  • Vidbox
    Junior Member
    • Nov 2001
    • 15

    #2
    Too much information running in your...

    Yo, Gameshow
    Keep it simple and you'll be better off. Record your analog TV signals at 352x480 and the digital at 720x576. Don't compress your AVIs, and that will solve the interlacing problem. In the case of MPEGs, keep the recording interlaced, then let your conversion program handle the cropping of pixels on the way to VCD. P.S. Don't bother if you've got a DVD recorder.
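    The "cropping of pixels on the way to VCD" step can be pictured with a small sketch: PAL VCD is 352x288 progressive, so a 720x576 interlaced capture typically ends up with one field's worth of lines and roughly half the width. This only illustrates the geometry; a real encoder filters properly before downsampling.

    # Sketch: the geometry of going from a 720x576 interlaced capture to PAL VCD
    # (352x288 progressive). Keeping one field supplies the 288 lines; the width is
    # then roughly halved. Nearest-neighbour decimation is used only for brevity.
    import numpy as np

    capture = np.random.randint(0, 256, (576, 720), dtype=np.uint8)   # stand-in frame

    one_field = capture[0::2]                 # 288 lines: a single field
    vcd_frame = one_field[:, ::2][:, :352]    # crude horizontal decimation/crop to 352

    print(vcd_frame.shape)                    # (288, 352) - the PAL VCD frame size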

    Nuff' said???
