[hd]https://www.youtube.com/watch?v=kIf9h2Gkm_U[/hd]
[editline]22nd August 2016[/editline]
It actually works, even with YouTube's crappy compression.
Try it:
[hd]https://www.youtube.com/watch?v=A-9tcBWgF8A[/hd]
[hd]https://www.youtube.com/watch?v=NGrhY9z62qc[/hd]
[hd]https://youtu.be/QpKlb7uyttw[/hd]
This is actually really good; he talks about chroma sub-sampling instead of the usual bitrate reasons.
I really wish the industry (especially online video) focused on it more, since 4:2:0 makes any recording of text on a monitor look terrible. IIRC Chrome supports VP8 4:4:4, Firefox doesn't, and there's no 4:4:4 for VP9 (it's in the planned spec, but not implemented anywhere).
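A toy numpy sketch of why 4:2:0 wrecks on-screen text (illustrative only, not any codec's actual filter): the luma plane keeps full resolution, but the chroma planes are averaged over 2x2 blocks, so single-pixel colour edges like anti-aliased text simply vanish.

```python
import numpy as np

# Toy 4:2:0 chroma subsampling: average chroma over 2x2 blocks,
# then upsample back for display (nearest-neighbour).
def subsample_420(chroma):
    h, w = chroma.shape
    small = chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

# One-pixel-wide alternating chroma columns, like colourful text edges.
cr = np.tile([112.0, -112.0], (4, 2))  # 4x4 plane of +/-112 stripes
after = subsample_420(cr)
print(after[0])  # [0. 0. 0. 0.] -- the chroma detail is averaged away entirely
```

The luma (brightness) edges survive, which is why subsampled text looks smeared and fringed rather than fully blurred.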
What he said about getting a DSLR capable of recording 4K video has left me thinking... are there any DSLRs out there that are not outrageously expensive?
I have 1080p monitors and I always try to switch the video quality to the highest possible. YouTube 1080p compression sucks when there's a lot of action going on but if it's in 4K, it's a lot crisper and better. I've even pushed out 1080p recorded videos in 4K just so people can watch the high action scenes without the crap compression.
[editline]22nd August 2016[/editline]
[QUOTE=Pretiacruento;50927655]What he said about getting a DSLR capable of recording 4K video left me thinking... are there any DSLRs out there that are not outrageously expensive?[/QUOTE]
There's a bunch. One of the cheapest is the Sony A6300 which is $1000 USD, body only (aka without a lens). Even GoPros and smartphones these days can do 4K video recording. My smartphone, the OnePlus One, can film in 4K. It's not super great indoors or in low light, but it's there.
[QUOTE=garychencool;50927657]I've even pushed out 1080p recorded videos in 4K just so people can watch the high action scenes without the crap compression.[/QUOTE]
It'll help with the bitrate, but at what point do you make the first encode (H.264 or otherwise)? I'm assuming it's videogame footage, where you actually have true RGB24 data; to get the benefit, the first encode needs to happen [I]after[/I] the 4K upscale.
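The arithmetic behind that (a sketch of the plane sizes, not any encoder's actual behaviour): in 4:2:0 the chroma planes are half the luma resolution in each axis, so encoding 4:2:0 only after upscaling to 4K leaves the chroma planes at the full resolution of the original 1080p frame.

```python
# Chroma plane resolution under 4:2:0: half the luma resolution per axis.
def chroma_res(width, height):
    return width // 2, height // 2

# Encode the 1080p master directly in 4:2:0:
print(chroma_res(1920, 1080))   # (960, 540) chroma samples

# Upscale to 4K first, then encode 4:2:0:
print(chroma_res(3840, 2160))   # (1920, 1080) -- the full chroma detail of
                                # the original 1080p frame survives
```

Encode at 1080p first and the 960x540 chroma is lost for good; no later upscale can bring it back.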
First one that comes to mind is the Panasonic Lumix DMC-GH4, which is god-tier for 4K video recording, judging by a few reviews I've seen so far.
Shane Carruth filmed "Upstream Color" on its predecessor, the GH2.
[hd]https://www.youtube.com/watch?v=5U9KmAlrEXU[/hd]
But [url=https://www.amazon.com/dp/B00I9GYG8O/ref=wl_it_dp_o_pC_nS_ttl?_encoding=UTF8&colid=2TVO3KML6JK2J&coliid=I214GCWT17REF3&psc=1]it's waaaay too expensive[/url]. :/
if you think it looks good on a 1080p monitor, imagine how amazing it looks on a 4k monitor
only issue is even in 4k youtube videos are a bit shit because of compression
That Armin Van Buuren video looks really damn good.
Another one
[hd]https://www.youtube.com/watch?v=TcwObyuk7xg[/hd]
[editline]22nd August 2016[/editline]
A gameplay vid
[hd]https://www.youtube.com/watch?v=vD1WPDC73Wo[/hd]
Kinda gives most YouTubers a pretty compelling reason to start recording their Let's Plays and game reviews in 4K -- except, of course, that playing most games in 4K, or having a 4K gaming rig, costs a small fortune.
[QUOTE=glitchvid;50927664]It'll help with the bitrate, but at what point do you make the first encode (H.264 or otherwise)? I'm assuming it's videogame footage, where you actually have true RGB24 data; to get the benefit, the first encode needs to happen [I]after[/I] the 4K upscale.[/QUOTE]
If YouTube's 1080p bit rate were the same as its 4K bit rate, it would probably look about the same in terms of quality. Just look at Vimeo 1080p: it looks pretty good because the bit rate (and probably the render settings) are higher/better. I watch 4K videos on 1080p monitors for that reason, just so they look better; I have the Internet connection and the GPU power to play them.
Now for exporting videos: I've only done it to a few, just so people have the option to watch them in 4K. Some people do what I do and watch 4K YouTube videos on 1080p displays. I usually don't bother because I don't care enough, the rendering and uploading take longer than they're worth, and if the source footage was crap to begin with it's pointless. If the source footage is better (i.e. a Fraps recording or a decent 1080p camera), then a 4K export is more worth it.
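A back-of-the-envelope sketch of why the 4K upload looks better even on a 1080p screen (the bitrates here are made-up illustrative figures, not YouTube's actual encoding ladder): the per-pixel bit budget is higher at 4K, and after downscaling, each displayed 1080p pixel effectively gets the bits of four source pixels.

```python
# Bits available per pixel per frame, from a given average bitrate.
def bits_per_pixel(bitrate_mbps, width, height, fps):
    return bitrate_mbps * 1_000_000 / (width * height * fps)

# Hypothetical figures: ~8 Mbps at 1080p vs ~40 Mbps at 4K, both 30 fps.
print(round(bits_per_pixel(8, 1920, 1080, 30), 3))   # ~0.129
print(round(bits_per_pixel(40, 3840, 2160, 30), 3))  # ~0.161
```

On top of the higher per-pixel budget, the 2x downscale averages away much of the blocking, so the "crap compression" in action scenes is far less visible.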
[editline]22nd August 2016[/editline]
[QUOTE=J!NX;50927667]if you think it looks good on a 1080p monitor, imagine how amazing it looks on a 4k monitor
only issue is even in 4k youtube videos are a bit shit because of compression[/QUOTE]
I have an XPS 15 with the 4K display. Remember that this is a 15.6 inch display, so the pixels are very small to begin with. For most videos (pick one that shows up when you search "4K" on YouTube or Vimeo), it looks pretty good. Compression isn't too bad, but when there's a lot of action you will run into the same blocky video as in 1080p YouTube videos. The blocks are just a lot smaller and usually not that easy to see.
Also pro tip: don't get a 4K display on a laptop unless:
1) you are ok with the shittier battery life
2) you are ok with the high price tag
3) you don't plan on playing games in 4K
4) you actually need 4K on a laptop
5) you are ok with lag while navigating Google Maps and other basic stuff
6) you like crisp af text and images
7) you are ok with seeing pixelated web images, because 90% of web images aren't made for 4K displays yet. Only text, vector graphics and such look right.
8) you are ok with the occasional program that can't scale properly, giving you blocky text and UI
[hd]https://www.youtube.com/watch?v=mb-jv1-Pqwo[/hd]
I never really bothered watching Youtube vids at 4K before... it's like I've found a whole new world :v:
What a cool video, thanks for posting that dude. I don't know why that clicked in my head so nicely, but it's really interesting to learn about.
[QUOTE=srobins;50927833]What a cool video, thanks for posting that dude. I don't know why that clicked in my head so nicely, but it's really interesting to learn about.[/QUOTE]
Yup, it makes perfect sense, actually.
I have the bandwidth, so I think that whenever possible, I'll just switch to 2160p. Usually I just went to 1080p because it just loads up faster, at the cost of quality.
I'm totally sold now. :v:
[editline]22nd August 2016[/editline]
Here's another interesting vid
[hd]https://www.youtube.com/watch?v=VkU0bjZwJ0o[/hd]
Why not just use RGB if no chroma subsampling is to be done?
[QUOTE=Silikone;50927944]Why not just use RGB if no chroma subsampling is to be done?[/QUOTE]
Video is not using RGB, it's YCbCr.
Chroma subsampling is still being done when you record/upload video in 4K 4:2:0. When you view it in 1080p, it basically becomes 1080p 4:4:4.
One of the reasons why cameras and such don't do 1080p 4:4:4 is because consumers care more about the resolution rather than what chroma subsampling technique their cameras are using. More consumers understand what resolution is rather than chroma subsampling.
On the pro level, 4:4:4 is only really used for high-budget productions due to the insane amounts of data it produces. And no high-budget production would shoot in 1080p; they go for the fancy 2-6K cameras. 4:4:4 usually shows up as the RAW recording, where what you get is straight off the sensor with none of the internal camera processing (4:2:0 or 4:2:2 chroma subsampling, baked-in colour and white balance, compression artifacts).
On the technical level, 4:2:0 chroma subsampling has always been what they did for SD and HD video recording, and it worked well for the time: it greatly reduced file sizes and what you get looks good. The engineers probably didn't bother improving 1080p to 4:4:4 because it would be seen as pointless for most people (aka consumers using their phones and consumer-level cameras). It would increase the file sizes of 1080p video, and consumers might not actually see or care about the difference.
If you want to do some hardcore colour grading or green screen effects, you would get yourself the appropriate camera, probably a 4K one for "future proofing".
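A quick sketch of the "4K 4:2:0 becomes 1080p 4:4:4" claim above, using toy plane shapes (scaled down 120x so the arrays stay tiny): a 4K 4:2:0 frame already carries a 1920x1080 chroma plane, so once the luma is downscaled 2x for 1080p playback, there is one chroma sample per displayed pixel.

```python
import numpy as np

# Plane shapes of a 4K 4:2:0 frame, scaled down 120x for the demo.
luma_4k   = np.zeros((2160 // 120, 3840 // 120))  # 18 x 32
chroma_4k = np.zeros((1080 // 120, 1920 // 120))  # 9 x 16, half res per axis

# Downscale the luma 2x for 1080p playback (simple 2x2 box average).
h, w = luma_4k.shape
luma_1080 = luma_4k.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# One chroma sample per displayed pixel -- effectively 4:4:4.
print(luma_1080.shape, chroma_4k.shape)  # (9, 16) (9, 16)
```

This is the shape argument only; the chroma still went through the camera's subsampling filter once, but at 1080p it's no longer below the display resolution.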
[QUOTE=garychencool;50927995] The engineers probably didn't bother with improving 1080p to 4:4:4 because it would be seen as pointless for most people (aka consumers using their phones and consumer-level cameras). It would increase file sizes of 1080p video and consumers might not actually see it or care.[/QUOTE]
It also has to do with the decoding hardware: codecs have specific profiles and levels they can run at, meaning a content producer can know exactly what to use for each class of device. Virtually every hardware decoder on the market only works with 4:2:0 chroma, meaning 4:4:4 would simply not play.
[QUOTE=glitchvid;50928126]It also has to do with the decoding hardware: codecs have specific profiles and levels they can run at, meaning a content producer can know exactly what to use for each class of device. Virtually every hardware decoder on the market only works with 4:2:0 chroma, meaning 4:4:4 would simply not play.[/QUOTE]
IIRC the 4:4:4 decoders cost extra for licensing so the workaround is to basically have the content in 4K 4:2:0.
More gameplay vids
[hd]https://www.youtube.com/watch?v=UsQy8Jlavc4[/hd]
[QUOTE=garychencool;50927995]Video is not using RGB, it's YCbCr.
Chroma subsampling is still being done when you record/upload video in 4K 4:2:0. When you view it in 1080p, it basically becomes 1080p 4:4:4.
One of the reasons why cameras and such don't do 1080p 4:4:4 is because consumers care more about the resolution rather than what chroma subsampling technique their cameras are using. More consumers understand what resolution is rather than chroma subsampling.
On the pro level, 4:4:4 is only really used for high-budget productions due to the insane amounts of data it produces. And no high-budget production would shoot in 1080p; they go for the fancy 2-6K cameras. 4:4:4 usually shows up as the RAW recording, where what you get is straight off the sensor with none of the internal camera processing (4:2:0 or 4:2:2 chroma subsampling, baked-in colour and white balance, compression artifacts).
On the technical level, 4:2:0 chroma subsampling has always been what they did for SD and HD video recording, and it worked well for the time: it greatly reduced file sizes and what you get looks good. The engineers probably didn't bother improving 1080p to 4:4:4 because it would be seen as pointless for most people (aka consumers using their phones and consumer-level cameras). It would increase the file sizes of 1080p video, and consumers might not actually see or care about the difference.
If you want to do some hardcore colour grading or green screen effects, you would get yourself the appropriate camera, probably a 4K one for "future proofing".[/QUOTE]
Yes, but since 4:4:4 doesn't discard the information that makes video encoding efficient in the first place, what is the point of full quality YCbCr? Why not just store video in RGB and thus avoid the small loss of converting between two color spaces?
[QUOTE=Silikone;50932366]Yes, but since 4:4:4 doesn't discard the information that makes video encoding efficient in the first place, what is the point of full quality YCbCr? Why not just store video in RGB and thus avoid the small loss of converting between two color spaces?[/QUOTE]
I'm just gonna cite Wikipedia, as it explains it better than I would:
[QUOTE]Cathode ray tube displays are driven by red, green, and blue voltage signals, but these RGB signals are not efficient as a representation for storage and transmission, since they have a lot of redundancy.
YCbCr and Y′CbCr are a practical approximation to color processing and perceptual uniformity, where the primary colors corresponding roughly to red, green and blue are processed into perceptually meaningful information. By doing this, subsequent image/video processing, transmission and storage can do operations and introduce errors in perceptually meaningful ways. Y′CbCr is used to separate out a luma signal (Y′) that can be stored with high resolution or transmitted at high bandwidth, and two chroma components (CB and CR) that can be bandwidth-reduced, subsampled, compressed, or otherwise treated separately for improved system efficiency.
One practical example would be decreasing the bandwidth or resolution allocated to "color" compared to "black and white", since humans are more sensitive to the black-and-white information (see image example to the right). This is called chroma subsampling.[/QUOTE]
[url]https://en.wikipedia.org/wiki/YCbCr[/url]
TL;DR: RGB has a lot of redundancy, so video engineers created YCbCr to decrease bandwidth, improve efficiency, and make the signal easier to compress and transmit.
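A sketch of the separation Wikipedia is describing, using the classic BT.601 coefficients in a full-range approximation (real pipelines differ in matrix and range): all the "black and white" information lands in Y, and neutral colours leave the two chroma channels sitting at their 128 midpoint, which is exactly what makes them cheap to subsample.

```python
# BT.601 RGB -> YCbCr, full-range approximation (8-bit values in, floats out).
def rgb_to_ycbcr(r, g, b):
    y  = 0.299 * r + 0.587 * g + 0.114 * b  # luma: weighted brightness
    cb = 128 + 0.564 * (b - y)              # blue-difference chroma
    cr = 128 + 0.713 * (r - y)              # red-difference chroma
    return y, cb, cr

# A grey pixel lands entirely in Y; chroma stays at neutral 128.
print(rgb_to_ycbcr(200, 200, 200))  # roughly (200.0, 128.0, 128.0)
```

Since most of a typical image is near-neutral detail, the chroma planes end up low-energy and smooth, so throwing away three quarters of their samples (4:2:0) costs far less than doing the same to RGB channels would.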