I still can't help thinking of this:
[IMG]https://i.ytimg.com/vi/9QEsjd1WZuY/hqdefault.jpg[/IMG]
[QUOTE=DOG-GY;50520439]It's something the people who discuss sensor tech talk about in great detail, and it's not uncommon to find mentions of it in photography blogs.
I think the statement is fine. It gives the viewer the understanding that the content being displayed is going to be more vibrant and detailed than what was previously possible, without relying on knowledge like gamut or bit depth. Besides, how many people have actually seen an HDR display in person? People understand "more pixels", i.e. resolution, but how would you sum up HDR? The only answer is "better pixels."[/QUOTE]
"Better pixels" is just way too vague though. It doesn't give anyone a better understanding.
An advertisement for a game console utilizing HDR does not need to and should not describe how HDR functions.
[QUOTE=DOG-GY;50520799]An advertisement for a game console utilizing HDR does not need to and should not describe how HDR functions.[/QUOTE]
No, it doesn't, but they didn't mention HDR by name even once in that video. They went from talking about 60 Hz and 4K straight to "high quality pixels". I understand that they mean HDR, but the way it's phrased, without ever naming HDR, makes it sound funny.
They mentioned it in the press releases, and I'm pretty sure in the conference as well.
[url]https://www.reddit.com/r/gaming/comments/4nx0lq/these_are_the_highest_quality_pixels_that_anybody/d48cehr[/url]
[QUOTE]Maybe you want a serious answer amidst the "it's bullshit" and "yuuuge pixels" replies:
A pixel is just a sampled point of an image, and it usually represents color in an RGB space. First of all, the color representation is limited by the monitor's gamut (the subset of colors the device can reproduce), but also by the default output format, which today is 24 bits, i.e. 8 bits per color channel - 256 steps each of red, green and blue.
This has been the standard for many years, but lately TV and monitor makers have been pushing for a far greater range of displayable colors, and they market this technology as HDR - high dynamic range. It covers a far larger portion of the colors people can actually perceive compared to traditional monitors.
So that is part one - the pixel contains more information than before.
The other thing is - how expensive is your pixel?
Do I rasterize/sample my geometry once and simply apply a texture value per fragment/pixel?
This is not very expensive, but then my output image and pixels are not of the highest quality.
The bulk of shader expense, along with antialiasing/oversampling and things like particles (overdraw), is pixel/fragment bound (fragment is the better name; just google fragment vs. pixel shader).
Overdraw means how many times a pixel is "drawn over": for example, geometry rendered in front afterwards, or smoke, or glass will "overdraw" the pixel in the background and therefore increase the cost per pixel.
That means that calculating the geometry/triangle projection (done in the vertex shader, which "rotates" and distorts the model and projects it onto the 2D screen - more expensive the more triangles a model has) is not the most relevant part at all when it comes to performance; what we do with the sampled pixels afterwards is.
What the guy is essentially saying here is: we have the most resources we've ever had to make each pixel look good.
EDIT 2: Someone was asking about "compressed" pixels. It depends on what he's talking about, but my guess is this:
It is a fact that most modern rendering engines are deferred engines, which write the world information to different rendertargets (textures written on the GPU) in a g-buffer and later apply lighting information. (Super quick version of how it works: I save 3 or 4 images of the current screen - one with only albedo (pure color/texture), one with normal information (which way each pixel is facing), one with depth information (how far away from the camera each pixel is), and potentially more for further computation. Later we calculate lighting for each pixel using the positions of lights and the information in this so-called g-buffer.)
This makes lighting pretty cheap, because we can calculate it per-pixel and only for the lights affecting this pixel.
BUT the problem is that our g-buffer (albedo, normal, depth, etc.) is pretty large and therefore leans heavily on memory bandwidth. In fact, memory bandwidth/speed is often the bottleneck in this type of rendering.
So game developers are going for a slim g-buffer and they try to - you guessed it - compress this buffer.
Now the idea is that this should be lossless, but in practice it isn't.
First of all, our color values are by default just discrete values after sampling, so everything in between gets lost. For example, if the value we calculate for a pixel falls between two of the 256 representable steps, it just gets rounded to the nearest one, and that precision is gone.
(FYI, there are other compression methods; for example, CryEngine and Frostbite save the chroma (color) values at half resolution, because people only need luma (the "brightness" of the color, in the widest sense) at full resolution to perceive a sharp image. This is a technique television has used since basically forever. [url]https://en.wikipedia.org/wiki/Chroma_subsampling[/url])
Every modern game engine calculates most of the lighting in high dynamic range, so it will calculate color values which in the end might be too bright or too dark for our monitors (and then it will use tone-mapping / eye adaptation to simulate camera exposure and select the right values for output).
However, this HDR rendertarget/texture needs to cover far more than 256 steps per channel. But because we know that humans don't perceive luma linearly, we can usually encode this stuff logarithmically (if interested, google LogLuv encoding) to preserve more of the colors in the part of the spectrum we can actually perceive.
Now, all of this would be much, MUCH easier and more accurate if the rendertargets we used had higher precision.
Right now basically all of these rendertargets are R8G8B8A8 - 32 bits per pixel: 8 bits per color channel and 8 bits for alpha.
But if we could use higher bit depths, we could create less compressed, higher quality images. This is necessary for HDR monitors anyway.
Which brings us back to memory bandwidth. The Scorpio has a lot of it. And the new generation of AMD cards supports higher precision rendertargets. That's why it can do what previous consoles couldn't.[/QUOTE]
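The bandwidth pressure from a fat g-buffer that the quote keeps coming back to is really just arithmetic. Here's a rough back-of-the-envelope sketch; the four-target layout and exact numbers are my own assumptions for illustration, loosely matching the albedo/normal/depth setup described above, not any real engine's configuration:

```python
def gbuffer_gb_per_second(width, height, targets, bytes_per_pixel, fps):
    """Raw write traffic for filling the g-buffer every frame,
    ignoring overdraw, lighting-pass reads, and compression."""
    frame_bytes = width * height * targets * bytes_per_pixel
    return frame_bytes * fps / 1e9

# Four 32-bit (R8G8B8A8) targets at 4K, 60 fps: roughly 8 GB/s of writes alone
print(gbuffer_gb_per_second(3840, 2160, targets=4, bytes_per_pixel=4, fps=60))

# Same layout with 64 bits per pixel (e.g. 16-bit channels): double that
print(gbuffer_gb_per_second(3840, 2160, targets=4, bytes_per_pixel=8, fps=60))
```

And those are only the writes; overdraw and the lighting pass's reads multiply the real traffic, which is why wider rendertargets demand the kind of bandwidth the Scorpio is advertising.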
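The quantization point in the quote (discrete 8-bit steps losing the values in between) is easy to see in a few lines. A minimal Python sketch, no graphics API involved; the sample value 0.5013 is just an arbitrary number for illustration:

```python
def quantize(value, bits):
    """Snap a float color value in [0, 1] to the nearest
    representable step at the given bit depth."""
    steps = (1 << bits) - 1           # 255 for 8-bit, 1023 for 10-bit
    return round(value * steps) / steps

computed = 0.5013                      # value produced by some lighting calculation
print(quantize(computed, 8))           # 8-bit: snapped to 128/255
print(quantize(computed, 10))          # 10-bit: four times as many steps
print(abs(computed - quantize(computed, 8)))    # quantization error at 8 bits
print(abs(computed - quantize(computed, 10)))   # noticeably smaller at 10 bits
```

Nearby computed values collapse onto the same 8-bit step, which is exactly the banding problem higher bit depths reduce.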
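The chroma-subsampling trick mentioned in the quote also comes down to simple storage math. A sketch assuming one byte per plane sample and a 4:2:0-style layout (half resolution in each axis for both chroma planes), which mirrors the scheme the post describes rather than any specific engine's code:

```python
def storage_bytes(width, height, chroma_half_res=True):
    """Bytes for one frame: a full-resolution luma plane plus two
    chroma planes, optionally stored at half resolution per axis."""
    luma = width * height                          # 1 byte per pixel
    if chroma_half_res:
        chroma = 2 * (width // 2) * (height // 2)  # 4:2:0-style layout
    else:
        chroma = 2 * width * height                # full-resolution chroma
    return luma + chroma

full = storage_bytes(3840, 2160, chroma_half_res=False)
sub = storage_bytes(3840, 2160, chroma_half_res=True)
print(sub / full)    # half the storage and bandwidth for the same perceived sharpness
```

Since the eye is far less sensitive to chroma resolution than luma resolution, that 50% saving is close to free visually, which is why TV has used it basically forever.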
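Finally, the HDR-to-display step the quote describes (tone-mapping plus non-linear encoding) can be sketched in a few lines. The classic Reinhard operator and a simple log2 encoding stand in here as illustrative examples; real engines use fancier curves, and the max_value of 16 is an arbitrary assumption:

```python
import math

def reinhard(hdr):
    """Classic Reinhard tone-map: compress unbounded HDR
    luminance into [0, 1) for display output."""
    return hdr / (1.0 + hdr)

def log_encode(hdr, max_value=16.0):
    """Simple logarithmic encoding: spend more of the limited
    code range on the darker values humans are most sensitive to."""
    return math.log2(1.0 + hdr) / math.log2(1.0 + max_value)

for lum in (0.1, 1.0, 10.0):          # dark, mid, very bright
    print(lum, reinhard(lum), log_encode(lum))
```

Note how a luminance of 10 (far above what an SDR monitor can show) still lands below 1.0 in both encodings: that's the "select the right values for output" step in practice.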
le epik high quality pixel meme
It's funny because for so many reasons their claim holds water, but people don't know enough about the stuff they buy to realize when a claim like this is true.
[QUOTE=DOG-GY;50522268]It's funny because for so many reasons their claim holds water, but people don't know enough about the stuff they buy to realize when a claim like this is true.[/QUOTE]
Well yeah, again because it's totally out of context in that video. It's not even a new thing to that console, Xbone S has HDR support too.
But HDR is only one component of the matter, just as that reddit post outlined. Plus, again, you cannot and should not put information that deep into an advertisement. It would just fly over people's heads instead of giving them an indication of why it's better. But go try to debunk the claim, and you quickly figure out that they're on the money.
This is just so blown up tbh. Like why should they get criticized for making a benign marketing statement that they can actually substantiate?
I don't see how this will be reasonably priced when the 1080 is just barely managing 4k 60fps.
[QUOTE=IrishBandit;50522623]I don't see how this will be reasonably priced when the 1080 is just barely managing 4k 60fps.[/QUOTE]
It's still a year away, plus I doubt most devs will target 4k 60fps anyway; looking at the trend today, graphics > performance. It'll be 4k 30fps at most for the majority of games.
[QUOTE=IrishBandit;50522623]I don't see how this will be reasonably priced when the 1080 is just barely managing 4k 60fps.[/QUOTE]
companies get more powerful stuff for cheap all the time for products like this.
[QUOTE=DOG-GY;50522613]
This is just so blown up tbh. Like why should they get criticized for making a benign marketing statement that they can actually substantiate?[/QUOTE]
Because unlike "high resolution", high quality pixels MAKES NO SENSE outside of an explained context.
Like, who's the target audience for that statement?
Anyone tech literate will think it sounds ridiculous because it provides no context and just sounds like bullshit, while any common person will not understand anyways and it just flies over their heads.
pretty sure they're referring to the fact that you can now watch Pixels in the highest quality, 4k
[QUOTE=343N;50525376]pretty sure they're referring to the fact that you can now watch Pixels in the highest quality, 4k[/QUOTE]
Or they meant HDR, which would make more sense.
[QUOTE=Wii60;50523299]companies get more powerful stuff for cheap all the time for products like this.[/QUOTE]
If that were true, both the PS4 and Xbox One would have great performance, but they don't, because ramping them up any more would simply cost way too much. Even with the number of consoles made and chips ordered from AMD, it doesn't decrease the cost that much. It may be a year away, but the price will be high regardless.
[QUOTE=paul simon;50524753]Because unlike "high resolution", high quality pixels MAKES NO SENSE outside of an explained context.
Like, who's the target audience for that statement?
Anyone tech literate will think it sounds ridiculous because it provides no context and just sounds like bullshit, while any common person will not understand anyways and it just flies over their heads.[/QUOTE]
high resolution would mean nothing to someone who doesn't understand pixel density either. roll back to the times when the general public first started to learn about resolution, it's no different.
and guess what, the information got quickly distributed by people who could substantiate the claim. now you know, so what are you really complaining about? you don't like this short advertisement because, viewed in isolation, it doesn't provide enough context? adverts like this are made to build excitement.
the context [I]was[/I] presented in their conference, where the video was originally shown. the context [I]is[/I] in press releases, where this video is embedded or linked to. now it's on reddit and other places, explained by people qualified to do so. like, it's all cool, just new tech people don't know about yet.
Nothing like Facepunch users constantly trying to discredit anything Xbox attempts to do.
Maybe it can finally run 1080p games now.