The Interlaced Video Problem - Computerphile

Displaying Interlaced Material on Progressive Displays

It is a genuine problem to display interlaced material on a computer, with its inherently progressive display, without getting all those zigzag combing patterns. It is not an easy problem to solve, but there are situations where it is straightforward to deal with, for example film that has been transferred to video, shot at 25 frames per second.

In that case, transferring the film onto video tape involves scanning each film frame twice: the odd-numbered lines are transmitted as one field, then the even-numbered lines of the same frame as the next field. Both fields therefore come from the same point in time, because they were scanned from the same film frame. The best way to get the frame back is literally to weave the two fields together, which reproduces the original film frame with all of its detail.
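As a rough illustration, here is a minimal "weave" de-interlacer in Python. The NumPy arrays, shapes and function name are assumptions made for this sketch, not anything specified in the video; it simply interleaves the two fields back into one frame, which only gives a clean result when both fields were scanned from the same film frame.

import numpy as np

def weave(top_field: np.ndarray, bottom_field: np.ndarray) -> np.ndarray:
    """Interleave two fields (each h/2 x w) back into a full h x w frame.

    Only valid when both fields represent the same instant in time,
    e.g. because they were scanned from the same frame of film.
    """
    h, w = top_field.shape[0] * 2, top_field.shape[1]
    frame = np.empty((h, w), dtype=top_field.dtype)
    frame[0::2] = top_field     # odd-numbered lines (1, 3, 5, ... in broadcast terms)
    frame[1::2] = bottom_field  # even-numbered lines (2, 4, 6, ...)
    return frame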

For video material, as opposed to film, that trick does not work, because the two fields were captured a fiftieth of a second apart. On a computer screen, which is inherently progressive, each displayed frame should only contain information belonging to a single point in time. One way to achieve that is to use a single field and interpolate the missing lines: work out what line two would have been from lines one and three, what line four would have been from lines three and five, and so on, using only the information inside that field.
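A minimal sketch of that single-field interpolation in Python (again, the array layout, dtype handling and function name are assumed for illustration):

import numpy as np

def interpolate_field(field: np.ndarray) -> np.ndarray:
    """Rebuild a full-height frame from one field only.

    The field's own lines are kept; each missing line is estimated as
    the average of the line above and the line below (the final line is
    simply repeated). No other field is consulted, so the result has
    only a single field's worth of vertical detail.
    """
    field = field.astype(np.float32)
    h, w = field.shape[0] * 2, field.shape[1]
    frame = np.empty((h, w), dtype=np.float32)
    frame[0::2] = field                                # lines we actually have
    frame[1:-1:2] = (field[:-1] + field[1:]) / 2.0     # average of the neighbours
    frame[-1] = field[-1]                              # last line: repeat the one above
    return frame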

The problem with that approach is that the vertical resolution of the result is only as high as the number of lines in a single field. To do better, you need to generate the information that would have been there if the camera had captured a full frame at that point in time.

A way to achieve this is to use not just the line above and the line below, but also what was in that position a fiftieth of a second earlier, and, if you delay everything by a field, what will be there a fiftieth of a second later. Because the fields can easily be stored digitally on a computer, playback can be delayed by one or two fields so that the "future" field has already arrived by the time each frame is being built.

This gives the de-interlacer more information to work with than the current field alone: the field already seen in the past, the field that has just arrived from the future, and the neighbouring lines within the current field. All of that can be combined, using various more or less complicated algorithms, to generate a more accurate representation of what should be displayed at each point in time; the more expensive the equipment, the cleverer the algorithm and the better the result.
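The video does not spell out a specific algorithm, but a toy motion-adaptive de-interlacer along those lines might look like the following Python sketch. The field layout, the threshold value and the function name are all assumptions for illustration: where the past and future fields agree the picture is treated as static and their lines are used directly, and where they disagree the missing lines fall back to interpolation within the current field.

import numpy as np

def deinterlace_motion_adaptive(prev_field, cur_field, next_field, threshold=10.0):
    """Fill in the lines missing from cur_field using past and future fields.

    prev_field and next_field carry the *other* set of lines (the ones
    cur_field skipped), sampled one field period before and after.
    Real de-interlacing chips use far more elaborate versions of this idea.
    """
    prev_f = prev_field.astype(np.float32)
    next_f = next_field.astype(np.float32)
    cur_f = cur_field.astype(np.float32)

    temporal = (prev_f + next_f) / 2.0                 # best guess if nothing moved
    spatial = np.empty_like(cur_f)
    spatial[:-1] = (cur_f[:-1] + cur_f[1:]) / 2.0      # average of surrounding lines
    spatial[-1] = cur_f[-1]

    moving = np.abs(prev_f - next_f) > threshold       # crude per-pixel motion detection
    missing = np.where(moving, spatial, temporal)

    h, w = cur_f.shape[0] * 2, cur_f.shape[1]
    frame = np.empty((h, w), dtype=np.float32)
    frame[0::2] = cur_f        # lines this field actually carries
    frame[1::2] = missing      # reconstructed lines
    return frame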

In practice, most TVs today are LCD (or plasma) panels, and inside every one there is a chip that takes the incoming interlaced video and processes it to produce progressive video. How good the result is depends largely on how much the manufacturer paid for that chip, whether it is a 50p part or something considerably more expensive (that being the price, not the resolution).

Those chips are why interlaced broadcasts still look smooth on progressive panels, but better ones cost more, which is why cheaper TVs tend to produce less-than-ideal de-interlacing and you still sometimes see the ugly combing lines.

"WEBVTTKind: captionsLanguage: enWhat are those zig-zaggy lines I keep seeing on videos on YouTube?Wait, you mean ones like this?So you can see on the BBC's news ticker herewe've got a lot of this sort of combingand ziggy-zag like effect on thevideo what's going on thatwell this is an interesting problem butthe basic problem is that computerpeople don't understand video whatyou're seeing there is an artifact ofthe way that the video system to puttogether in the thirties when theydeveloped the first video systems aredesigned them using analog electronicsremember this is about 10 years beforethe first computer was invented thefirst electronic computer was inventedso they're having to develop the videosystem using pure analog electronics andso they had to make sensible designdecisions on that time to encode thevideo so they could transmit it and getinto people's homes where they can watchit on the television screens that's allwell and good but we've got moderncomputers and we've been some movementback since then why if we got wasexactly like this and hopefully stepswell we'll come to that was actuallylook at what is actually happening howthese things are put together now Sean'shopefully give me a bit of computerlisting paper which is great becausedivided into lines so the way that theimage was built up is that every 5000second we talked about in the previousvideo why need to do it 50 frames persecond to get decent motion renditionthe camera would scan the image from thetop left to the bottom right and itwould go along the first line going backto the beginning and go on the next lineswing back and then go along the nextone and so on until eventually it comesto the end at which point it goes backup to the beginning of the frame andstarts during the same thing but there'sa problemthe amount of day 2 there is generatedscanning was formed and five line TVback in 96 in the UK at 50 frames persecond was too much that could bereliably transmitted with the technologyof the time there's too much informationthat you need to transmit too muchbandwidth will be taken up so they needto do something they couldn't go down to25 frames per second because then youwould flicker like crazy as we talkedabout in the last video so they couldn'treduce the frame rate this lot ofshooting forthe frames per second to get the framerate so they came up with a trick whichthey call interlacing so if we startagain if we call the first warm alignwarmthe second one is line 2 3 4 5 and so onwhat they said was we will transmitfirst line oneso you scan across Longbourn like so andthen we'll skip over line to andtransmit line threeso we fly back and we transmit Lymethree and then we skip over the line forand transmit Lyme five and so you do allthat until you get down to the bottom ofthe image you transmitting only theodd-numbered life so in that 50 thesecond one thing the whole frame yousend every of the line or half the frameand they refer to that as a field andthen you go back and scan the evennumbered lines so 2 4 6 8 and so on toscan all the even numbered lines in thenext 50 per second so what you actuallyended up doing you send your first fieldwhich would be all the odd lines andthen offensive a second later yousending the second films that got allthe even lines in it and then yousending the third field which got allthe old lines in it again and so on soyou sending all the guidelines for theeven lines or the outlines now becausethis is all been doing with analogequipment you couldn't store the imageand send the old lines and 
send evenlive in the same point in time so whenyou capture the odd lines here thiswould start at time zero when you startcapturing the even lines it's a54-second later so you capturing this 20milliseconds later and so on so eachfield is sampled a different point intime so you've got 50 discreet imagescaptured but each of their only has halfthe number of lines and they have adifferent half in that that's fine andyou transmit that you can record that Imy loved video tape you can transmit youcan do all sorts of processing with ituntil you start coming to put intocomputers and because what happened wasis that people started to treat italways say well actually people stilltalk about things being 25 frames persecond in UK they never were they werealways 50 fields per second so when hegets pushed into the computeryour computer will capture the first notfilled never capture the second evenfield and it will start to interlacethem back together to create a singleframe about the other things in that inthe actual image and he puts themtogether and stores them in theQuickTime file in the ABI whatever it ishe using at 25 frames per secondnow that's fine because you can playthem back out of us or capture card andmid nineties back onto a TV and itlooked files because it wouldn't pickthem and send them out in the rightorderthe problem comes if you then try todisplay that image directly on screen inthat because things are moving betweeneach of those things you get the sort oflittle zigzag effects because actuallythis letter T here is movinghorizontally so each time is capturedthe lines of the different . so when youinterlock them you get that sort ofcarry me effect on the edgeit's a pain how do you display itproperly you do ask difficult questionsso what do you have to do well first ofall you need to think about it not asbeing a single frame that you interlaceby together but actually being separatecrimes if we have the lines along hereand we have time along here we got zerothere we got four field one here feel tohear field three here field for fieldfive on points0 we're capturing let'ssay we do this with the odd ones wecapture these lines here at it . oneactually capturing the bits in themiddle catching that bit capturing thatbit capturing that bit capturing thatbit capturing that bitcharacter in that bit then it . to we'recapturing the there and so on . throughwere capturing hear it hear what they'retrying to do with interlaced is toreduce the amount of information theyneed to transmit so you could think ofit perhaps a bit like a sort of earlyanalog compression system been like mp3reduces the amount of information needsto be store to store some audiointerfaces doing the same thing withvideos reducing the amount ofinformation but hopefully it's notthrowing away anything that you're goingto see something static like a videointo thisbook very little is lost by transmittingin an interlaced form over anon-interlaced from progressive form wasit be called would still see all thatpretty much all the detail we seebetween the two so we reduce the amountof information you transmit by half butwe're still effectively transmittingwhat looks the same to the end user whenthey're viewing it on the televisionscreencertainly the time when this wasdeveloped but we're still freeinformation where every sample . 
beforeanyway half the information we couldpossibly have captured there andactually what we're throwing away is theseparation between vertical resolution Ihow much detail we can represent andalso temporal resolution what we've gothere is this is a single capture . 2.0we capture all the odd lines . when wecapture all the even lines now thinkabout something like the piece of paperi am capturing here 2.0 I capture thisline here which is white and this launchis white and so on all the way down soeffectively what we captured each pointon here is a completely white field atthis point though we capture this linewhich is green we capture this linewhich is also green and we capture thisline which is also green and at thispoint in time we capture completelygreenfield the next point in time we goback and capture completely white fieldand then we capture completelygreenfield and if you display this whatyou would see would not be a series ofwatching green lines but actually theimage flashing between white and greenwe've got to a situation where yes we'vereduced the amount of information thatwe need to transmit we've alsomanipulated the information so that wecannot distinguish betweenhigh-resolution vertical informationit's eerie just to help try to help outyetI'm sorry seriously you can't do acomputer file yet so effectively whatwe've got is we've mapped both thehigh-frequency vertical information andalso temporal information into the samepart of the encoding of the informationand so there's no easy way todistinguish between the two so highfrequency information like thisoscillating white-and-green pattern isindistinguishable once we've interlacedit from flashing white and green screenthere's just no way you can deal withthat the way you get round this isinside the camera when you sample theinformation you filter it vertically sothat you don't have the high-resolutionimage that so effectively an interlacedcamera let's go slightly lower verticalresolution and the progressive woman hadbut probably around seventy percent ofthe vertical information is still therestill better than you'd get if you areyou transmitted a smaller number oflines every frame so it gives you thebenefit but this still comes down tothink how on earth do we displayinterlace material on someone like acomputer which has an inherentlyprogressive display without getting allthe sort of it's exactly patternswell it's not an easy it's not an easyproblem to Seoul it can be in somesituations so there's some of thesituations where it can be really easyto deal with for example film translatedonto a video film shot at 25 frames thatwe talked about in the other one whenthat is transferred onto video tape isthat you take the piece of film you scanthe ordinal bloodlines transmit that thefields can't even numbered linestransmitted as a field then we want tothe next frame is actually knows . thetwo fields do come from the same pointin time so actually the best way to getthem back together is literally to weavethem together though and you get thesame film praying that you started withall the representation of what you getall the detail so that's the best way todo it for fill material for somethinglike videos we've seen on the computerscreen that doesn't work what you needto do is actuallyonly have information that should be it. 0 displayed there and I've only haveinformation . one displayed here andthen you have information . 
to displayedhere however you go about that well thisseveral ways you could do it you couldjust say well okay I know it's gonnafill in this gap hereso what I'll do is I'll interpolatebetween line worn on line three and workout what line to would be so i'll justuse the information inside . of time andjust create a sort of that's the thingthere and then i'll do the same betweenthree and five between five and sevenand so on the problem that happens hereis that you immediately reduce theresolution down to only being as high asthe number of lines you've got in thefield so is there any way you can dothis well yeah basically what you wantto do is to generate the informationthat would have been there if the cameracaptured at that point and the way youcan do that is not just use the lineabove the line below but also realizethat you know what was in that . 50-50split second before and you could alsoknow what's in that . 15-second in thefuture if you delayed everything by asingle field so what we doing instead ofactually trying to generate this framehere at point2 we store it digitally wecan do the computer quite easily untilwe get 2.3 at which point we've alreadyseen this information so we can use thatwe've got this information was justarrived and we got these two bits ofinformation and we can combine all thattogether to generate the data thatshould be at this point and the datathat should be at this point and so onso we delay the video by field or twofields and said that we now have moreinformation we got something that's inthe future from the point wheregenerating we've got the information inthe past we've already seen and theinformation that's on different lines aswell and we can combine all that invarious different complicated algorithmsto generate the information and the moreexpensive your equipment the clever thealgorithm will be and the better it willdo and so if we were to enable one ofthem on our computer and VLC which iswhat I'm using here is got severalbuilt-in then i'm going to enable yetanother deinterlacing algorithm andif I turn it on and you will see thatsuddenly instead of being zigzaggy itgoes back to being straightforward textmost tvs these days are LCD panels onthe younger or whatever maybe plasma soare they doing this kind of stuff allthe time that or do they use a differentmethodno that's why they're doing this insideevery LCD you'll get there will be achip which will be taking the video andprocessing it to produce progressivevideo from the interlaced video that'scoming in and depending on how much youpay for that chip but it's a 50p chip or75p chip and that 50 p is the price notthe resolution is a video is producingit depends 250 pounds chip or fiftycents because it's probably worth morethese days then you are going to get abetter quality out with hope you have onthis reprogramming but that cause evenmore moneythat's why we get ugly lines and that'show you essentially fix them know thatyou didn't think it was gonna be thatsimple did you all right let's go backto being so we the video signal comingis a series of annual screen and then sowe got a bit later on we move it up showthe next one and so on and so by doingthat fast enough you get the appearanceof motion\n"