Vince Maskeeper
Senior HTF Member
Joined: Jan 18, 1999
Messages: 6,500
Let me start this little editorial off by warning you that I have very pedestrian knowledge of video signals, their creation, and their limitations. I am, as many of you know, primarily an audio guy, with a mind that tends toward geeky concepts and technical mumbo-jumbo. I like computers, recording equipment, home theater, candlelight dinners and long walks in the park...
But I digress.
I have been very interested in the idea of "edge enhancement"- and have followed the discussions, indictments, screen shots and examples very closely. While I never had much to add directly to the conversation- I lurked nonetheless in hopes of gathering more knowledge on the topic, in the unending pursuit of the perfect video experience...
In the course of reading the extensive discussions about edge enhancement, the one issue which really stuck in my craw, so to speak, was the fact that even when it was as obvious as the nose on your face-- studios and technical types denied it was applied. As a natural cynic, I considered that these folks were simply flat-out lying- but a logical consideration of why they would bother led me to no answers.
I started to think this whole thing might be a misunderstanding- an issue of syntax or semantics which has thrown confusion into the mix and resulted in poor communication.
Often, in the discussion of audio concepts, there is much confusion because people improperly apply terms and concepts they don't understand. Standards vary, terminology gets misused and misunderstood, and even basic processes are often tough to grasp-- as a result, helping someone with an audio problem sometimes starts with a 20-minute process of educating them on the terms they're using and the concepts they're employing before getting to the meat of the problem.
In order to address the issue of edge enhancement- and specifically the possible misunderstanding which is leading to misinformation- I think we have to establish the popular concept of what it is and how it happens.
I think I understand the idea as follows:
Edge enhancement is an intentional artifact, introduced during the telecine process, which causes haloing of edge lines. The telecine operator uses some sort of optical process (or possibly a digital process, I'm still unclear) to skew edge lines so they look "doubled". This results in an illusion of added resolution on smaller displays- as the smaller picture size doesn't display the added "enhanced edge" as a second line- rather, the two blend together to add crisp edges to things which originally had none.
Unfortunately, on larger displays- the enhanced edge becomes far more obvious, and appears more as a pronounced halo surrounding the original image.
Edge enhancement is applied specifically during the film-to-video transfer process (again, I'm not sure if this is the argument, but I believe it to be)- and is done on purpose to create an illusion of extra crispness and resolution in a video transfer.
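To make that concrete, here's a rough sketch of the classic "unsharp mask" sharpening technique applied to a one-dimensional brightness profile containing a single hard edge. I'm not claiming this is what any particular telecine actually does- it's just the textbook mechanism by which sharpening creates the overshoot/undershoot "halo" people describe, and every number here is made up for illustration:

```python
import numpy as np

# One scan line with a single hard edge: dark (50) to bright (200).
signal = np.concatenate([np.full(20, 50.0), np.full(20, 200.0)])

# Unsharp mask: blur the signal, then add back a scaled copy of the
# difference between the original and the blur. "amount" is arbitrary.
kernel = np.ones(5) / 5.0
blurred = np.convolve(signal, kernel, mode="same")
amount = 1.5
sharpened = signal + amount * (signal - blurred)

print(signal[17:23])     # clean step:  50  50  50 200 200 200
print(sharpened[17:23])  # with halo: dips well below 50 just before
                         # the edge and overshoots 200 just after it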
So, back to the issue of syntax and confusion. As far as I can tell, the term "edge enhancement" is not one used by the industry at large. I have spoken with one telecine operator, and have seen a post from a second here on the forum, and both referred to the process as simply "enhancement". The process of "enhancement", as they described it, sounded essentially the same as the concept discussed by observers here as "edge enhancement".
Now, here comes my main reason for posting this piece:
I worked on an independent movie called Dev/Null, serving as audio designer and essentially performing every duty required of the audio department. The movie was shot on digital video using Canon's excellent prosumer camera, the XL-1. The XL-1 is a very nice 3-CCD mini-DV camera which allows interchangeable lenses and manual controls like aperture, which go very far toward creating a professional image.
The movie is "finished", and the producers sent the DV tape off to have some DVDs produced for festival submissions (and to serve as the master for VHS dubs, also for festival submissions). These folks are quite budget-minded, so instead of using the professional studio I suggested- they used some company from the back of a magazine which offered DVD copies from DV tape for $25 apiece.
I borrowed the DVD from them so I could watch the finished cut with the DP (who had never seen the final edit)- and noticed something very interesting.
Edge Enhancement
A movie which was shot on digital video, edited on a PC, and mastered to DVD had very clear edge haloing, nearly identical in some scenes to the obvious examples presented from The Phantom Menace. Heck, this DV-sourced movie even presented what I've seen called "adaptive" enhancement- which applies enhanced edges only on certain surfaces, in particular directions.
So the question, for me, becomes- if edge enhancement is the same as the telecine process of "enhancement"- why am I seeing an identical artifact on a movie which never once had anything to do with a photochemical carrier like film, never went through a telecine process, and stayed 100% digital from the moment it was shot?
I did some close study on the matter. I went back to the DV source tape used for the DVD master and A-B'd the two on my projector. The enhancement that was clear on the DVD was not at all present on the DV tape. Going back to the DV tapes on which the material was originally shot- again, no edge artifacts.
So I busted out my Phantom Menace DVD, since it seems to have become the standard for how bad edge enhancement can be, and started to notice some common elements between the edge problems on the TPM DVD and the edge problems on the Dev/Null DVD.
Namely, it seemed the edge enhancement was most clear and pronounced when a moving foreground object passed across a static, or mostly static, background. Scenes in TPM, such as spaceships flying across a pale blue sky, seemed to display this artifacting horribly. Also, the royal guard's hat against the desert backgrounds- again, it showed badly.
On the Dev/Null DVD- one scene featuring a very dark-skinned African-American actor with a shaved head, standing against a white wall in an office, displayed extensive edge artifacting.
While these weren't the only places visible enhancement appeared, they were certainly the most common and obvious locations to find the artifact.
So, finding "edge enhancement"-type artifacting on the Dev/Null DVD caused me to wonder what the source of the artifact really is... and the above observation about moving images on fixed backgrounds made me seriously consider the idea that maybe this enhancement is caused by MPEG compression. It stands to reason that certain types of compression would more tightly compress static, homogeneous background images-- and thus the point where a complex moving image passes across such a background- the place where they meet- would be a potential problem for the compression scheme.
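For those who want to see the mechanism, here's a toy sketch of the MPEG angle- not any encoder's actual code, and in one dimension rather than MPEG's 8x8 blocks, just the underlying math. MPEG transforms each block of the picture with the DCT and then quantizes the coefficients to save bits; quantize a block containing a hard edge coarsely enough and you get ringing- ripples that look an awful lot like halos- around that edge. The block contents and quantizer step here are made-up illustration values:

```python
import numpy as np
from scipy.fftpack import dct, idct

# One 8-sample block containing a hard edge, like the boundary between
# a moving object and a flat background.
block = np.array([50.0] * 4 + [200.0] * 4)

coeffs = dct(block, norm="ortho")             # forward DCT, as in MPEG
qstep = 40.0                                  # coarse quantizer step
quantized = np.round(coeffs / qstep) * qstep  # quantize, then dequantize
recon = idct(quantized, norm="ortho")         # decode

print(block)            # clean step:  50  50  50  50 200 200 200 200
print(np.round(recon))  # ripples above and below the original levels
                        # on both sides of the edge
```

And note where that ringing lives: right at the boundary between the flat, cheap-to-encode background and the complex moving object- exactly where the artifact shows up on both discs.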
This seems a distinct possibility, as I recently saw a thread in the TV area suggesting that similar "edge enhancement" had been applied to live HD sports feeds- which is either the result of a similar compression problem, or a misapplication of the term "edge enhancement" as it has been discussed here as a telecine operation.
In addition- I believe that currently most DVD releases are getting a "downconverted" master from an HD telecine master-- yet in the reviews I've seen of D-VHS HD material from (presumably) the same transfers as the DVD material, the issue of edge enhancement hasn't been mentioned.
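There's also a purely mathematical reason the downconversion itself could introduce halos the HD version doesn't have. Downconverting requires a lowpass resampling filter, and sharp-cutoff filters (windowed-sinc types, for example) have negative lobes- so they produce overshoot around hard edges even when nobody asked for "enhancement". A rough sketch, with all the filter parameters picked arbitrarily by me:

```python
import numpy as np

# A clean high-resolution scan line with one hard edge and no halo.
hires = np.concatenate([np.full(32, 50.0), np.full(32, 200.0)])

def sinc_kernel(taps=17, cutoff=0.5):
    # Windowed-sinc lowpass for 2:1 downconversion; note the negative lobes.
    n = np.arange(taps) - taps // 2
    k = cutoff * np.sinc(cutoff * n) * np.hanning(taps)
    return k / k.sum()

# Filter out detail the lower resolution can't carry, then decimate 2:1.
lowres = np.convolve(hires, sinc_kernel(), mode="same")[::2]

print(lowres[12:20])  # dips below 50 and overshoots 200 around the edge,
                      # even though the hi-res source had no halo at all
```

If something like that is happening in the downconversion chain, the D-VHS and DVD could genuinely come from the same clean transfer and still look different.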
So, any one of these seems a reasonable possibility- if not a likely one- in a large number of the cases illustrated as "edge enhancement". I think that as MPEG compression techniques have developed and new programs and schemes have come into use, the trade-off between image "quality" and the amount of compression has shifted around quite a bit.
I think what I'm trying to say here is this- the reason we've gone round and round with technical people from studios who deny edge enhancement was applied may stem from a misunderstanding between our views and theirs. Some have suggested that a few telecine machines apply enhancement by default- so the operators deny applying it because it was applied without their knowledge. I think it's even more realistic that some process related to MPEG compression, or the filtering that occurs during downconversion from an HD or higher-resolution master, is causing these edge artifacts. Or even that the filtering is done after the transfer is completed, in the digital realm- so when we ask whether "edge enhancement" was applied, there is confusion about what exactly that means...
As a result, when the operator who struck the transfers for Die Hard swears that no edge enhancement was applied... or when THX swears that no edge enhancement was used on TPM-- I can't help but wonder if they are literally telling the truth- and that because we are mistaken about the source of the artifacts, or their exact cause, we are asking the wrong questions.
-Vince