# HD-Lite: A Not So Dirty, Little Secret



## smiddy (Apr 5, 2006)

April 04, 2008 | by Ben Hardy
Since the advent of HDTV, few things have aroused more anger in HD viewers than the purported use of "HD-Lite" by television service providers. Before declaring oneself a victim, the consumer needs to understand just what HD-Lite is, when it might be used, and what else could be responsible for that alleged less-than-HD image up on the screen.

Get the whole article here.

This should get people's perspectives in an uproar, eh?


----------



## Steve (Aug 22, 2006)

Read the article, and found it a little "lite" on details.  Some helpful basic info, tho. /steve


----------



## smiddy (Apr 5, 2006)

Well, I was expecting more about pixel x-by-y resolution, and that wasn't there; most folks I've seen discuss it here do so in those terms, not in a compression sense. From what I gathered, it was saying that since the compression is overdone, the result is lossy when decompressed compared to a full stream, say over HDMI, but even that was unclear to me.


----------



## Jim5506 (Jun 7, 2004)

Compression is more deleterious to PQ than a resolution drop.

Compression will show up in high-motion sequences and create blockiness, whereas most people will not notice that the resolution has been reduced, say from 1920x1080 to 1440x1080, if motion artifacts are handled.

Yes, I know you claim you can see it; you're just sitting too close.


----------



## smiddy (Apr 5, 2006)

Jim5506 said:


> Compression is more deleterious to PQ than a resolution drop.
> 
> Compression will show up in high-motion sequences and create blockiness, whereas most people will not notice that the resolution has been reduced, say from 1920x1080 to 1440x1080, if motion artifacts are handled.
> 
> Yes, I know you claim you can see it; you're just sitting too close.


I'm not sure what you're saying with respect to (so-called) HD-Lite.

Out of curiosity, wouldn't a 1440 x 1080 (12:9) image be stretched to 1920 x 1080 (16:9) and destroy the aspect ratio? I ask because I don't know that I've experienced this... and mathematically this seems so.

I do agree that lossy compression will make PQ bad. But I'm not sure I understand reduced pixel counts.


----------



## HIPAR (May 15, 2005)

smiddy said:


> I'm not sure what you're saying with respect to (so-called) HD-Lite.
> 
> Out of curiosity, wouldn't a 1440 x 1080 (12:9) image be stretched to 1920 x 1080 (16:9) and destroy the aspect ratio? I ask because I don't know that I've experienced this... and mathematically this seems so.
> 
> I do agree that lossy compression will make PQ bad. But I'm not sure I understand reduced pixel counts.


The pixels are not square in the 1440 x 1080 case.

--- CHAS
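The non-square-pixel point can be checked with a little arithmetic: the aspect ratio you see on screen is the stored pixel grid's ratio multiplied by the pixel aspect ratio (PAR). A quick Python sketch (the helper function is purely illustrative), using the 4:3 PAR that 1440x1080 material carries so it still fills a 16:9 screen:

```python
# Displayed aspect ratio = (stored width / stored height) * pixel aspect ratio.
def display_aspect(width, height, par=1.0):
    return (width / height) * par

full_hd = display_aspect(1920, 1080)         # square pixels -> 16:9
hd_lite = display_aspect(1440, 1080, 4 / 3)  # 4:3 (non-square) pixels -> 16:9

print(abs(full_hd - hd_lite) < 1e-9)  # True: both fill a 16:9 screen undistorted
```

So nothing gets "stretched" in the geometric sense; the stored pixels are simply wider than they are tall, and the display accounts for that.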


----------



## spartanstew (Nov 16, 2005)

smiddy said:


> Out of curiosity, wouldn't a 1440 x 1080 (12:9) be stretched to 1920 x 1080 (16:9) and destroy the aspect ratio?


If so, then everyone with 720 sets would be watching a distorted picture.


----------



## Jim5506 (Jun 7, 2004)

Some sources even take 1080i to 1280x1080 using rectangular pixels. That is where the term "HD-Lite" came from, when DirecTV reduced 1080i to 1280x1080.

My Hitachi 57" RP-CRT has a beautiful picture, but it will only resolve 1280 pixels horizontally and 1080 vertically - so by my estimation, horizontal resolution is not as critical as vertical resolution and compression.

I can tell the difference between 720p and 1080i, though, because of the vertical pixel count difference.


----------



## TomCat (Aug 31, 2002)

smiddy said:


> ...I do agree, that lossy compressioning will make PQ bad. But I'm not sure I understand reduced pixel counts.


All compression of raster-scan video is lossy (at least for consumers), just as all JPEG photos are lossy. Only about 2:1 compression can be achieved losslessly, and that is reserved for medical imaging. HD for consumers is compressed at a much higher ratio and is therefore very lossy. If raw HD is 1.5 Gb/s and your OTA station transmits at 15 Mb/s, that is a 100:1 compression ratio. Since most HD channels aren't even that generous, typically less than 1% of the original data remains; the rest is reconstituted (recreated by educated guess) in the decoder.
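The bitrate arithmetic above can be sketched directly (the 1.5 Gb/s and 15 Mb/s figures are the ones from the post, not measured values):

```python
raw_bitrate = 1.5e9  # ~1.5 Gb/s for uncompressed HD (figure from the post)
ota_bitrate = 15e6   # ~15 Mb/s for an OTA station

ratio = raw_bitrate / ota_bitrate               # 100.0 -> a 100:1 ratio
percent_kept = 100 * ota_bitrate / raw_bitrate  # 1.0  -> only ~1% of the data survives
```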

If that is the point of reference for understanding, that sort of makes the entire article somewhat beside the point.

Reducing the pixel count is pretty easy to understand. For instance, if you have an HD display with a "720p" native resolution (1280x720), then to display a 1080i signal (1920x1080) it reduces (averages the luminance and chrominance values of) 1920 pixels in the H domain to 1280, and 1080 pixels in the V domain to 720. That practice is very common; it happens every day on sets with 720p native resolution.

Let's use an even simpler example: say the display's native rez is 960x540. To display 1920x1080, the scaler algorithm has to take every two transmitted pixels in a scan line and average their luminance to get the value for each displayed pixel, and likewise average the corresponding pixels in every two transmitted scan lines to get each displayed scan line. (Ignore chrominance issues for the sake of the example.)

In this case, one displayed pixel is the average luminance of four transmitted pixels, and the resolution is reduced by a factor of two in the H dimension, and again by a factor of two in the V dimension. Simply put, the lowered resolution allows that picture to be 4 times less-sharp than the transmitted signal.
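That averaging is easy to sketch in code. This toy function is an illustration only, operating on plain lists of luminance values rather than real video frames; it halves a frame in both dimensions by averaging each 2x2 block, exactly the 1920x1080 -> 960x540 case described above:

```python
def downscale_half(frame):
    """Average each 2x2 block of luminance values, e.g. 1920x1080 -> 960x540."""
    height, width = len(frame), len(frame[0])
    return [
        [(frame[y][x] + frame[y][x + 1]
          + frame[y + 1][x] + frame[y + 1][x + 1]) / 4
         for x in range(0, width, 2)]
        for y in range(0, height, 2)
    ]

# Four transmitted pixels collapse into one displayed pixel:
print(downscale_half([[10, 20],
                      [30, 40]]))  # [[25.0]]
```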

Of course in practice it is not nearly that severe. DTV reduces 1920x1080 to 1280x1080, so there is no loss of rez in the V dimension, and the rescaling in the H dimension is weighted by how much the new pixel position is offset from the original location of the existing two pixels it is averaged from (since 1280 does not divide into 1920 evenly). And all of this assumes the original 1080 video has high resolution in the first place, which it very often does not.
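The weighted rescale described above (needed because 1280 does not divide 1920 evenly) amounts to linear interpolation: each output pixel blends its two nearest source pixels, weighted by how far the output position falls between them. A hypothetical sketch for a single scan line of luminance values:

```python
def resample_row(row, new_width):
    """Linearly interpolate one scan line of luminance values to new_width."""
    old_width = len(row)
    out = []
    for i in range(new_width):
        # Map the output pixel back to a (generally fractional) source position.
        pos = i * (old_width - 1) / (new_width - 1)
        left = int(pos)
        right = min(left + 1, old_width - 1)
        frac = pos - left  # how far we sit between the two source pixels
        out.append(row[left] * (1 - frac) + row[right] * frac)
    return out

print(resample_row([0, 100], 3))  # [0.0, 50.0, 100.0]
```

Real scalers use fancier kernels (bicubic, Lanczos), but the weighting idea is the same.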

For this to have any effect at all on PERCEIVED resolution, all of the following conditions also have to be met: the ACTUAL resolution of the video has to reach the full 1920x1080 capability, which is somewhat rare due to lens aberrations, focus issues, imager issues (most imagers are only 1440 wide to begin with), viewer placement issues (most folks sit too far away from a too-small screen to resolve 1080), human vision issues (perfect 20/20 vision is required), and of course the fact that most HD is not really HD, and half of what truly is, is 720p.

And of course 1080 V resolution is dithered (reduced in resolution or made blurrier on purpose) so that interlace flicker is prevented.


----------



## smiddy (Apr 5, 2006)

spartanstew said:


> If so, then everyone with 720 sets would be watching a distorted picture.


I'm sorry, I don't follow, please explain.


----------



## smiddy (Apr 5, 2006)

TomCat said:


> All compression of raster-scan video is lossy (at least for consumers), just as all JPEG photos are lossy. Only about 2:1 compression can be achieved losslessly, and that is reserved for medical imaging. HD for consumers is compressed at a much higher ratio and is therefore very lossy. If raw HD is 1.5 Gb/s and your OTA station transmits at 15 Mb/s, that is a 100:1 compression ratio. Since most HD channels aren't even that generous, typically less than 1% of the original data remains; the rest is reconstituted (recreated by educated guess) in the decoder.


I was of the impression that if you can reproduce the original picture via a compression method, that was not considered lossy, and that lossy compression was when you cannot reproduce the original image at 100%. From your explanation, any reduction in size from the original image makes it lossy; is that right?



TomCat said:


> If that is the point of reference for understanding, that sort of makes the entire article somewhat beside the point.


I was thinking the same thing, since all transmitted video in the digital domain is compressed upon delivery.



TomCat said:


> Reducing the pixel count is pretty easy to understand. For instance, if you have an HD display with a "720p" native resolution (1280x720), then to display a 1080i signal (1920x1080) it reduces (averages the luminance and chrominance values of) 1920 pixels in the H domain to 1280, and 1080 pixels in the V domain to 720. That practice is very common; it happens every day on sets with 720p native resolution.


Here you lost me, but I think I understand because of another reference above: they treat the pixels as rectangular rather than square. To me that reduces the resolution, thus making it fuzzy; is that right?

How is each pixel represented within an original image? Is color three bytes, with an additional byte needed for luminance and chrominance?



TomCat said:


> Let's use an even simpler example: say the display's native rez is 960x540. To display 1920x1080, the scaler algorithm has to take every two transmitted pixels in a scan line and average their luminance to get the value for each displayed pixel, and likewise average the corresponding pixels in every two transmitted scan lines to get each displayed scan line. (Ignore chrominance issues for the sake of the example.)


This would seem to be 2:1 or roughly half the information.



TomCat said:


> In this case, one displayed pixel is the average luminance of four transmitted pixels, and the resolution is reduced by a factor of two in the H dimension, and again by a factor of two in the V dimension. Simply put, the lowered resolution allows that picture to be 4 times less-sharp than the transmitted signal.


Yep, you lost me...probably because I don't know what makes up each pixel from the digital perspective.



TomCat said:


> Of course in practice it is not nearly that severe. DTV reduces 1920x1080 to 1280x1080, so there is no loss of rez in the V dimension, and the rescaling in the H dimension is weighted by how much the new pixel position is offset from the original location of the existing two pixels it is averaged from (since 1280 does not divide into 1920 evenly). And all of this assumes the original 1080 video has high resolution in the first place, which it very often does not.
> 
> For this to have any effect at all on PERCEIVED resolution, all of the following conditions also have to be met: the ACTUAL resolution of the video has to reach the full 1920x1080 capability, which is somewhat rare due to lens aberrations, focus issues, imager issues (most imagers are only 1440 wide to begin with), viewer placement issues (most folks sit too far away from a too-small screen to resolve 1080), human vision issues (perfect 20/20 vision is required), and of course the fact that most HD is not really HD, and half of what truly is, is 720p.
> 
> And of course 1080 V resolution is dithered (reduced in resolution or made blurrier on purpose) so that interlace flicker is prevented.


How can you tell these differences in what is being shown? I suppose it comes down to creation versus reproduction, but how can one tell the difference? I know I can tell the difference between HD Theater and Smithsonian Channel HD and other HD channels, but I was thinking that was due to content. Is this related to fewer transmitted pixels or to the original production? And how can you tell?

Sorry if this seems like I'm poking; I genuinely want to understand. It doesn't seem like enough information is available to get the big picture (to pardon a pun).


----------



## Stewart Vernon (Jan 7, 2005)

smiddy said:


> I was of the impression that if you can reproduce the original picture via a compression method, that was not considered lossy, and that lossy compression was when you cannot reproduce the original image at 100%. From your explanation, any reduction in size from the original image makes it lossy; is that right?


I thought the point he was making was that to get to a 100:1 compression ratio on the data, it is necessary to throw away data and have the computer make "guesses" to reconstitute it later. The point, of course, being that our quibbling over 1440x1080 vs 1920x1080 pales in comparison to the original 1.5 Gbps vs the delivered 15 Mbps (or lower), such that it would be difficult to say how much additional loss is noticeable at that point.
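The lossless/lossy distinction being debated here can be demonstrated concretely. A lossless codec reproduces its input bit for bit after a round trip; a lossy codec like MPEG-2 deliberately discards detail the decoder can only approximate. A minimal sketch using Python's zlib as a stand-in lossless codec (the byte pattern is just illustrative filler, not real video data):

```python
import zlib

data = bytes(range(256)) * 1000  # stand-in for raw pixel data

packed = zlib.compress(data)
restored = zlib.decompress(packed)

print(restored == data)         # True: lossless means a perfect round trip
print(len(packed) < len(data))  # True: and the packed form is still smaller
```

No such round-trip guarantee exists for broadcast video compression, which is why the decoder's output is an educated guess.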

I will say that I believe I can tell the difference sometimes, but not always in a way that bothers me. What a lot of people don't take into consideration, and this is where I sit on the matter: there may be times when going to 1440x1080 permits a particular compression algorithm to perform at a higher efficiency for a given bandwidth, such that the resulting picture is better than 1920x1080.

Not exactly the same argument, but consider the issue of dropped/skipped frames.

If you have the capability to display 30 fps, then feeding 60 fps is a waste and could in fact reduce overall quality. 60 fps might result in dropping more than one frame out of two because of delays in re-synching... such that you may only get 24 fps of watchable imagery. By starting with only 30 fps in the first place, you eliminate dropped frames (by not overtaxing your system) and get better results than trying to force 60 fps down the pike.

I can easily imagine a situation where a particular 1920x1080 video stream is taxing for the compression/decoding algorithms such that the resulting image for the user is of lower quality not only from compression but from errors or lost data in the stream... but a properly re-scaled 1440x1080 video stream could be compressed without taxing the encoders/decoders and survive the process with more of its original data... resulting in a better image for the end-user.

Sometimes things are not as intuitive as you might think... larger numbers are not always better than smaller ones.


----------

