Man, I’ve been using that YT preset in Resolve and I can’t thank you enough for passing on this knowledge. You’re the man.
Never ever use presets; that's what custom settings are for, and custom is always better. The only exception is when custom gets very complicated, like 100 codecs and what looks like a million possible combinations; that's what the advanced settings are for in that case.
Man, I've seen your videos from the very beginning. Really, I'm amazed at how you've improved in every detail, not just the technical aspects but your communication skills. Really, really, congrats. Maybe you think it goes unnoticed, but believe me, it's noticed.
Appreciate the love. Seriously, means a lot. Been a long journey and yeah old sh** is cringy asf. 🤣
@@theqazman😅 you are crazy
@@theqazman oh, no man, enjoy the journey.
@theqazman Sorry, but for Windows users, what is the recommendation then? H.264 Master standard?
If you don't care about file size:
ProRes 422 LT will give you the best grain detail (but files will be huge!). I've tested this a bunch cause I love using grain.
Do you edit on a Mac? What are the other settings that you use for export?
@@Colorado4x4 Yes. Newer MacBook Pro. Just QuickTime ProRes 422 LT, and I make sure the last CST in my color workflow goes to Rec.709 / Rec.709-A. You want to output 709-A so that what you see in DaVinci matches what you see on YouTube.
What’s your complete setting for Resolve? Color management, export and monitor setting?
It should go without saying, because this video is talking about 4K renders, but I'm also astonished by how many people still only upload their videos to YT at 1080p (perhaps because that's what they shot them at) and then wonder why they look horrible. The reason is bitrate. Not the bitrate you sent to YT, but the average and max bitrate YT allows for 1080p videos after its re-render. It's extremely low, and always has been. If you send YT a 1080p video at 50 Mbps, it really won't look any better than one at 30 or even 20 Mbps, because it gets re-rendered to under 10 Mbps. But stepping up even to 1440p makes a huge difference. In fact, upscaling your source 1080p footage to 1440p at exactly the same bitrate WILL make a huge difference to how it looks on YT.
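If you want to try that upscale step outside your editor, here's a minimal sketch with ffmpeg (assuming ffmpeg is installed; the filenames and exact bitrate are placeholders, not anyone's official recipe):

    # scale a 1080p master to 1440p without touching frame rate or audio
    ffmpeg -i master_1080p.mp4 -vf "scale=2560:1440:flags=lanczos" \
        -c:v libx264 -b:v 50M -maxrate 50M -bufsize 100M \
        -c:a copy upload_1440p.mp4

The scaler itself isn't the point; the point is that YouTube's 1440p/4K ladder gets a noticeably higher streaming bitrate than 1080p, which is exactly the effect described above.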
I shoot 4k, but render to 1080p, because I need to zoom and crop x2 some moments. So what can I do for better quality on YT?
@@bear_on_a_bike Just upscale the final result back to at least 2.7K before uploading to YT. It's annoying that the max bitrate YT will give you is tied pretty much solely to the resolution, but that's how it works. Right click on any vid and look at the Stats for Nerds, and then change resolution.. from 1080p, 1440p, 4K and see what the highest bitrate is. Try something like an FPV vid, or downhill cycling to really max it out.
@@daemn42 So, I create a project with my 4k files, edit the video using 2x zooms in some moments, and then render the final video in 1080p.
Then I create a new project with this final 1080p video file and use the "Super Scale" function to upscale it to 4K?
@@bear_on_a_bike I'd skip the render/re-render step, and just make my final project settings 1440p or 4K. Yes, you may be cropping some stuff from 4K down to 1080p, and then scaling that back up to 4K and we can't do anything about the loss of resolution due to the crop, but it will let YT give you the highest bitrate available.
Here's an example from one of my own vids. watch?v=rVG6EW-48Io Skip in a bit until you're at the rear view footage. I'm using an Insta360 1X which records a spherical view at 5K, but of course any time I narrow the view to look in a specific direction, I'm cropped *way* in. Maybe going as low as 720p, and then I scale that back up to 2.7K (1440p). Watch that video at 1080p and almost all the ground detail turns to mush, looking forward or back, but switch to 1440p, and it's mostly back. The only difference is YT's max bitrate for each resolution.
Oh, and this is important. Even if you're on a 1080p monitor, you should still choose the 1440p playback option, to get the higher bitrate. I've had to tell that to all my viewers, for years..
@daemn42 Thanks for the tips. I finally understood how playback relates to keyframes and bitrate. The secret is to max out the bitrate to somewhere near 80-100 Mbps, due to how codecs work.
Hey Qazi, I loved your tutorials very much. Before, I had no experience with DaVinci or any type of color grading, but after watching your tutorials I learned so much about the software and color grading. Thank you so much for the tutorials, man. Hey, can you please make a video on how to grade like roldangibuena and benjvillena? Please, please 🥺🥺
I also did a ton of tests on this topic a while ago.
I personally use H.265 instead of H.264 because it's the more advanced codec. Compatibility issues are possible with other applications and for the web, for example, but for YouTube that's fine.
My settings for 4K/30 are the H.265 default preset with a 40 Mbps limit. I think that's the sweet spot between quality and file size for best results. The resulting files are less than 1/3 the size of the default H.264 export.
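For anyone who works outside Resolve, a rough ffmpeg equivalent of that idea (a sketch only, not the exact Resolve preset; filenames are placeholders):

    # 4K/30 H.265 capped around 40 Mbps, similar in spirit to the settings above
    ffmpeg -i graded_master.mov -c:v libx265 -b:v 40M -maxrate 40M -bufsize 80M \
        -tag:v hvc1 -c:a aac -b:a 320k upload_h265.mp4

The -tag:v hvc1 flag only helps Apple players recognize the HEVC stream; YouTube doesn't care either way.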
A higher-efficiency codec is always better than the previous one. There's also color bit depth: the common standard is 8-bit, and the less common one is 10-bit. You might be able to record 10-bit 1080p at around a 65 Mbps bitrate, but if you export with the defaults it can get converted back to 8-bit and lose bitrate too; that's why you should always choose custom settings, because you know your footage better than the generic recommendations do. And always remember that bitrate scales with resolution: 40 Mbps is the minimum for 4K, and 80 Mbps would show less artifacting on rapid movement or color changes. But higher bitrate and resolution cost performance and storage space for the user. Giant corporations can afford extremely high-capacity storage, but a common user can't, and it's easier for platforms to downgrade your videos to a low bitrate to save storage space. So why don't they give everyone a higher-efficiency codec, which can compress a video with far less compromise on quality? Because they treat small creators like an afterthought; otherwise there would be no need for a low-efficiency codec, and it would become obsolete and be left in the past like the floppy disks no one uses.
I did the test and the quality is noticeably better with these settings. However, the size seems to be about 1.5x bigger.
I don't see where you can enter 40 Mbps. There is a "Restrict to 10000 Kb/s" box, though.
@@tonycaine5930 just put 40,000 in the field in that section.
@@tonycaine5930 Compression vs. codec efficiency: why do you think small creators get the avc1 codec, the rest get vp09, and some that are above average get av01?
thanks bro, just what i needed
For exports from a Mac you have to do your normal inputs on the first node and output to DWG; on your output node, output in Rec.709 / Rec.709-A, then begin grading. This will also give you a 1-1-1 color profile, which will have a consistent look in all browsers.
Yeah, that also applies in DaVinci Resolve, where you set the export file data "same as project" (or was it "same as source"?), meaning the data in the exported file should match the recorded file and the DaVinci settings. But don't forget about color bit depth, and then chroma subsampling: the standard is 4:2:0, you'd be lucky if it can go 4:2:2, and the best is 4:4:4, but due to limitations of editing software it may refuse to read it or silently convert it to lower settings, which is especially noticeable on export. The average video editor who steals content for TikTok won't care what color depth or video bitrate even are, but if you're a serious creator you'll learn these things that didn't exist before you started. And then YouTube compression stacks on top of what the export already took away from the file quality: say the export takes 25% of the quality, then YouTube compression takes more on top, leaving your video a lesser version of what it was.
Thanks! That was 15 minutes of really detailed comparison!
I love nerding out like that.
Hey man, I have this issue with my Shorts: it seems like YT is compressing all my short-form videos coming out of CapCut and destroying the quality. There's a ton of noise and artifacts introduced only on YT; I think it might be something I am doing? I'm normally shooting in 1080p-4K and rendering in 1080p 60fps. The finished source file looks great and works fine on TikTok and Instagram, but YouTube has a ton of loss in quality.
Everyone watching this and grinding their YouTube career, cheers and good luck! With the hard work and constant improvements, we'll get where we're going!
See you around!
finally !! exported using your preset and the output turned out way better than my previous ones... thanks a ton Qazi
Happy to help.
Can you do an H.264 vs H.265 comparison, please?
I wish to see an H.264 vs H.265 comparison too.
Great video, it works. Thanks. Just one thing: as soon as I import your settings into DaVinci, when I want to export I cannot change the name of the video; it stays stuck as 'timeline1'. I am unable to rename it. Is that a bug?
Hey, not sure if you remember me. I criticised the grading of your YouTube videos about a year ago. You were confused and said it looked good. Just coming back to say: NOW it looks good! Love it.
Great! Thanks. Can't find the link to download preset (
I basically went full-stupid-overkill and use DNxHR 444 12-bit. I only have underwater footage on my channel, which tends to be very dark. I tried some options and the footage on YouTube always looked grainy or pixelated. I eventually gave up and went with DNxHR. The problem is the massive file sizes, but I'm happy with the results.
I also use YRGB Color Managed, in SDR mode, outputting Rec.709. Footage is 10-bit H.265 from a GoPro 11.
The secret is to use as high a bitrate as possible; I use at least 100 Mbps for 4K. BUT the MOST important setting is the keyframe interval. The default is 75; change it to 12. That eliminates blockiness in moving scenes. Now you've got the best image possible on YouTube. Also, we can now use AV1.
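In ffmpeg terms that keyframe setting is just the GOP size, so the equivalent would look roughly like this (a sketch assuming x264; the filenames and the 100 Mbps figure are taken from the comment above, not gospel):

    # high bitrate plus a keyframe every 12 frames, as suggested above
    ffmpeg -i master.mov -c:v libx264 -b:v 100M -g 12 -c:a copy upload.mp4

Whether 12 is the right interval for your footage is worth testing; every extra keyframe costs bits that could otherwise go to detail.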
What about on Vegas tho
@@blakegriplingph same thing. Key frame to 12.
What is keyframe?
@@mattiatodeschini.director Codecs usually create one keyframe and build the next few frames from it; they don't store whole new frames as the movement goes on. There are also all-I frames, which are like photos, stored as complete frames. My English isn't the best, but Google is your friend :)
@@Thevikingcam thank you!
If YouTube (re-)transcodes every file you upload, why not upload a mezzanine file / lightly compressed ProRes or DNxHR? If the user transcodes their output to H.264 or similar, they're throwing away data unnecessarily and essentially adding another generation of loss.
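If you do want to try a mezzanine upload, a hedged sketch of a ProRes export with ffmpeg (prores_ks profile 3 is 422 HQ; paths are placeholders):

    # lightly compressed mezzanine file for upload instead of H.264
    ffmpeg -i timeline_export.mov -c:v prores_ks -profile:v 3 \
        -c:a pcm_s16le mezzanine_prores.mov

Expect files many times larger than an H.264 upload, so upload time becomes the real trade-off (as the reply below points out).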
I wouldn't be sharing this if I had discovered these techniques yesterday. I have been doing this professionally for over a decade, and when I kicked out deliverables for Toyota and Adidas, these are the settings I used.
Take it or leave it.
@theqazman
Respectfully, been training on codecs and compression for 20 years, and I was hoping to have a civil, technical conversation on your findings.
Good luck to you.
Can you list the details of that export setting you're talking about? Thanks
Right off the bat, this fixed a MASSIVE blocking issue I was having with some night scenes. Thank you!
Great educator and educational video - loved it. subscribed and watching several of your videos now. Keep it up, Qazi!
Great video. I always wondered about this. Very informative. Cheers.
You're welcome
Your video inspired me to start my first YouTube Shorts channel. So far I've posted 5 videos and got around 10K views. I know it's not much, but I am really enjoying this newfound passion. Thank you 😊
Great video. I’ll be checking my settings later. A video I watched a while back recommended a bit rate of 2,000x your frame rate for 4K. I shoot in 4K 30fps and so I have my 4K preset at a bitrate of 60,000. Maybe I need to tweak it, but H.264 preset is definitely the way to go.
New subscriber 👍🏼😁
Watching in 720p on my 14 Pro Max I don't see any difference 🤣 Maybe I need to be in 1080p or look at this on my Mac. I did see a difference in the first bright and dark test, but for the ones after that and the detail in the YouTube videos… I could not see a difference from my phone…
Thanks for freebies qazi 🔥👍🏽
Bro, where is this preset? I can't find it. Is it in your course? 😂 Please drop that link for me.
I believe this is just circumstantial. In my experience, no matter what settings you put out there, the result varies with many factors, and the 'blind' Auto settings might just happen to have hit the right spot for the sample you chose. The length of the video also has an impact on the compression, so it is unlikely that there's a 'magic' formula; otherwise it would be up there in the recommended settings. In my experience, the bitrate threshold that seems to be a good spot has been somewhere around 50 to 80 Mbps.
Waqas you are a genius. Thank you.
How about H.265? It's a newer and more efficient codec.
Waqas : thanks mate!! you ROCK!!!!
Why not just upload DNxHR or ProRes directly to YouTube? Since H.264/5 still goes through YouTube compression anyway, should we just upload the lossless file and let it compress?
Upload times. It doesn't make sense for your YouTube workflow.
What about the REC709-A for Mac users? Supposed to be a workaround for the gamma shift. Have been using it as output gamma tag.
It doesn't help to just slap the tag on at the end; all it does is do the same thing to your video in QuickTime Player that will happen anyway on YouTube. If you want a CONSISTENT workflow on your Mac, you have to set your Apple XDR display (which you hopefully have on your MacBook, or the XDR itself, or even the Studio Display works) to the BT.1886 reference mode, change the Resolve output color space to Rec.709-A, and THEN tag with Rec.709-A. This way you will have a consistent look everywhere.
The more misleading 709-A comments out there, the more demand for true professional colorists
If you are on a Mac you can set the timeline color space to whatever you want and then set the output color space to Rec.709-A. Also check "use Mac display color profiles for viewers" in Resolve preferences, and in the advanced export settings set the gamma tag to Rec.709-A. This way I've never had an issue with the gamma shift on YouTube.
@@nhmk6603 That was what I was talking about and have also been using without problems until now, but I also understand that it's just a workaround, not a proper fix. It's a deep rabbit hole, and every time I peek into it I'm greeted with (almost aggressive) comments about what I should be using (mostly expensive hardware), or way too technical explanations for someone who just wants to see the same result on YouTube as he saw in his Resolve viewer, that's all.
@@nhmk6603 Do you export in H.264 or H.265, or what?
I always use H.264. But the doubt I had was about exporting HD videos. Should I export 1080p upscaled to 4K, or is 1080p fine? Will YouTube treat it differently if 1080p upscaled to 4K is uploaded? I couldn't notice a difference in VLC, but I haven't done a comparison like you have. Please let me know your thoughts.
Definitely export in 4K if you want the best quality. Even if your footage is 1080p, upscale in the export. YouTube has really low bitrates for streaming 1080p; if you upload 4K, your viewers will be able to benefit from the much higher bitrate.
We love you bro ! that was a lot
The main takeaway for me is that Handbrake on the Mac appears to be a 2023 product while PC Handbrake is still Windows 95 code. Oh, and the YT preset is weirdly terrible. Thanks, these side by side tests are incredibly helpful.
Happy to help.
07:40 Handbrake is not more efficient (well, it actually is, because the x264 encoder is more efficient than most of the H.264 encoders on the market, but that is not what's happening in this example). You gave the encoders different tasks. In DaVinci, you used single-pass encoding targeted at a minimum quantizer per frame, not exceeding a target bitrate (or just the minimum quantizer, if you didn't limit the bitrate). In Handbrake, you used two-pass encoding, where the x264 encoder analyzed the complexity of the entire video during the first pass. In the second pass, having all that information, the available bit budget (45 Mb/s) was redistributed so that the most complex scenes received the most bits while maintaining the same target bitrate. That's why the file is smaller, but it is also why the grain is missing - it was considered noise that wastes the bit budget.
Also, YouTube does its own re-encoding and provides viewers with different versions of the same video, encoded in H.264, VP09, or AV01 (if the video reaches 1M views), depending on the viewer's hardware capabilities. Thus, YouTube's re-encoding kills even more detail, especially with AV01. So the way to go is to either upload a losslessly encoded video to YouTube, or a lossy encoded video with a minimum quantizer or a bitrate around 100 Mb/s for 4K with the H.265 encoder, and let YouTube do its dirty job of re-encoding to H.264, VP09, and AV01.
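To make the single-pass vs two-pass difference concrete, this is roughly what the two jobs look like in ffmpeg (an illustration of the comment above, not the exact internals of Resolve or Handbrake; filenames are made up):

    # single pass at a capped bitrate (closer to what Resolve was asked to do)
    ffmpeg -i source.mov -c:v libx264 -b:v 45M -maxrate 45M -bufsize 90M -c:a copy single_pass.mp4

    # two passes: analyze first, then spend the same 45 Mb/s budget where the footage needs it
    ffmpeg -y -i source.mov -c:v libx264 -b:v 45M -pass 1 -an -f null /dev/null
    ffmpeg -i source.mov -c:v libx264 -b:v 45M -pass 2 -c:a copy two_pass.mp4

That second pass is what lets x264 treat grain as noise and starve it of bits, which matches the missing-grain observation above.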
It's good that my videos look like shit from the start
Thanks for the video! I think the video misses an important point, though... the "why?" question:
Whatever you export, YouTube WILL transcode your upload into various formats. Your original upload will NOT be used directly by YouTube for streaming, even if it were a perfect fit codec-wise. Okay, so that would mean uploading your footage at the best possible quality should be best, since the transcoder will have better input to produce those streaming formats, right? So how can it be that the overkill version was beaten by your custom preset? I guess it's at least two things:
(1) Your test method: scientifically speaking you are comparing noise in your data, and it's very hard to draw objective results from that method. Downhill footage from a mountain biker would completely change your observable differences and results. And this already relates to my second point:
(2) Your footage is largely static footage of the GUI, and with default settings this will encourage encoders to decrease the number of keyframes*. Manually setting the keyframe interval to a low value should have a huge effect on your type of footage.
So, generally speaking, when you have chosen the best-fitting encoder settings regardless of your intentions for the resulting video file, it will also be the best option for your YouTube upload. I am almost tempted to say the bigger the better, but there are some caveats, as shown in your video. As upload size does not seem to matter to YouTube, go a little crazy on keyframes. The video-quality page of YouTube you have shown is more interesting for consumers, as it shows the recommended connection speed for streaming. And here is another big pro for going more in the (sensible) overkill direction: YouTube saves your original file in the backend, and as YouTube extends its streaming formats, it can still use that high-quality upload to transcode your video into future formats.
At the end of the day, unfortunately, we may spend too much time on these issues when dealing with YouTube, as most viewers won't see your video in 4K H.264 as you intended, but in alternative formats like VP9 on mobile phones or TVs, and way too often in 720p or similar, since sneaky little YouTube loves to silently switch those settings to lower-quality formats in the background, and most people don't notice that they are watching blocky low-bitrate 720p footage on their 4K screen (a hard-to-swallow pill, I know). It's like the frog sitting in hot water until it boils.
* Keyframes are the few frames in your footage that are saved compressed but completely in your output file. All other frames between keyframes are reconstructed heavily based on the contents of these keyframes and as such only need a fraction of space of keyframes. That's probably THE main feature of lossy video codecs.
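If you want to check how your own encoder spaced its keyframes, a quick ffprobe sketch (assuming ffprobe is installed; the filename is a placeholder):

    # list the timestamps of keyframes (I-frames) in a finished export
    ffprobe -v error -select_streams v:0 -skip_frame nokey \
        -show_entries frame=pts_time -of csv=p=0 upload.mp4

One timestamp per line; long gaps mean the encoder leaned heavily on predicted frames for that stretch.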
Thanks. I understand capturing, color transformation from cam to monitor, luts and plugins, but always had troubles with exports. No one explained it as clearly and comprehensively as you did.
I was hoping that you would give H.265 a test.
Very useful. I solved the Mac gamma problem by viewing my timeline with Rec.709-A as the setting, and then exporting the same way. It all looks the same on the Mac in QuickTime as it does in Resolve, and also on YouTube.
The other thing that I do is use H.265, with the goal of having more information in a smaller file size. Have you done any experimentation looking at H.265 vs H.264?
Thanks for all your excellent content!
What!!!! I was looking for the best yt export settings just now 😂 and u dropped it ⚡️
I gotchu.
HEVC and AV1 exports also look good on YT.
I wonder why you don't use the GOP setting to fix the issues. The problem with the jitter errors is the low GOP length.
If you export a 60-frames-per-second video, set the GOP to 60 so the stream gets a full frame every second. YT loves it, because the engine doesn't have as much to convert between the blurry in-between frames, and you get the exact frame when you put your cursor on the timeline bar of a video. YT can build exact frames on chapter marks.
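For what it's worth, a fixed one-second GOP at 60 fps would look something like this in ffmpeg (a sketch assuming x264; -sc_threshold 0 stops scene-cut keyframes from breaking the fixed spacing):

    # one full keyframe every 60 frames, i.e. once per second at 60 fps
    ffmpeg -i source.mov -c:v libx264 -g 60 -keyint_min 60 -sc_threshold 0 \
        -b:v 80M -c:a copy upload_gop60.mp4

Note that GOP 60 at 60 fps means one keyframe per second, not one per frame; a keyframe on every frame would be an all-intra encode and far larger.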
This is for the people who do a lot of pixel peeping. I do the same, since I am a photographer as well as a videographer; it's just a habit we tend to have. For those who don't pixel peep and just want to upload a video, as long as it looks good to the user/client, this is irrelevant. We creatives tend to be perfectionists, and even super tiny details bug us, so knowing these differences at a micro level is good for awareness. Also, I believe H.265 does reduce the size, but it handles a different range of color. I've done my tests on that, and the color shifts greatly if my monitors, mobile devices, and media devices for viewing are not properly adjusted; it is really meant for HDR. My result: H.264 is still widely supported on all viewing media.
Qazi, can you make an updated video about HDR workflow and export? What are the export settings if I want to export an HDR video that works across all the different platforms: TV, social media, iPhone?
Thank you for this useful video. But I've a question: what about gamma 2.2 for the web?
When I deliver a video for the web I use a color space transform to go from gamma 2.4 to gamma 2.2.
I'm very surprised you don't talk about that.
Yeah, I still have my Handbrake settings to encode 60fps for YouTube properly.
If it requires pulling it through their transcoder, it means you've already lost; you have to pre-chew everything for every platform.
The biggest problem for people who post a lot on social media is the file size! Even full HD videos are much too big if you render high-quality files from a good camera. It would be nice if you could do a video on how to reduce file size while maintaining quality.
Why all this fiddling with the bitrate when, in my experience, the main issue in Resolve is usually the keyframes/I-frames?
Holy Cow 🐮
That was awesome 😎
Thanks brother!!!
Of course. 😀
I seem to get better results when exporting in HDR10 with an HDR10+ flat pass in H.265; it down-converts pretty well on my HDTV and streamer and looks decent on a true HDR device. But you nailed it: it's hard to keep color natural and not over-saturated, and I need to find a balance. My source footage is mainly 16-bit DWAA-compressed OpenEXR renders in ACEScg, or BRAW converted to ACES 2065, which makes switching between HD and 4K less painful. When exporting I use Rec.2020, 10-bit H.265 at 65000 kbps; the file size is reasonable and YouTube handles the downscale and color conversion pretty well, but it's not as controlled as a Netflix Dolby Vision DCP workflow. They need to make something that takes the HDR10+ metadata of the MP4 or QuickTime into account without using a gaswork scripting tool with a separate file.
The export settings in Resolve are really hopeless for high compression. You have to waste time exporting a large file THEN encoding it in Handbrake. Is there a way to "frameserve" to Handbrake? That would be EPIC.
Not me watching this in 1080p thinking there's barely a difference between them 😭
In the majority of vids, if not all, you can't tell the difference between 4K and HD, especially on a cellphone, but the 4K file is about 3 times heavier. 4K is 4 times the pixels of HD.
Thank you for the great video ❤ What I also see is that a lot of videos on Instagram have a yellowish/greenish tint on skin tones; I see it on other channels and on my own uploads. 😮
Just one thing that would be nice to add to your test, even though it's very rarely used, is a ProRes version uploaded directly to YouTube, for testing purposes. When I tested it, it didn't give me a noticeable improvement, but with your grain test it would be nice to see how it holds up. Also, the big file size from Handbrake was due to the "normal" preset. A "veryslow" preset would make a smaller file for the same quality, so increasing the bitrate to around 65 Mb/s could make a more solid version with a similar file size. Doing a rule of three from the file sizes, the automatic bitrate from Custom/Default came out around 65 Mb/s, a more proper bitrate for an H.264 delivery master. The standard is 35 Mb/s for Full HD, so 4K should be about 4 times that for a cinematographer's quality check (the most you would want on an H.264). But the caveat is, with Resolve, Auto is always better; I never did figure out why a manually set high bitrate would make things worse than Auto. For other encoders I set it manually to 35 Mb/s (FHD) and 140 Mb/s (4K) for extreme quality, and lower depending on the delivery.
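The HandBrake side of that is easy to reproduce from the command line; a sketch with HandBrakeCLI under the assumptions above (x264, roughly 65 Mb/s, veryslow preset; paths are placeholders):

    HandBrakeCLI -i master_4k.mov -o upload_4k.mp4 \
        -e x264 -b 65000 --two-pass --encoder-preset veryslow

The veryslow preset mostly buys better quality per bit at the cost of encode time, which is exactly the trade described above.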
Also, for whoever is interested: I like Resolve's H.264 export nowadays. It gives me almost the best result, so it's my standard practice to export a ProRes and then make all the H.264 copies from that on projects where I need that big master - mostly TV and features, but some video clips too. No changes at all compared with exporting directly, so it's a good way of making deliveries instead of going through Adobe Media Encoder or always exporting from source. I've even, strangely, seen improvements in some cases with H.264 exported from ProRes instead of a direct export. One more thing to look at is the custom export APIs for Resolve that are on the market (paid and free), as they improve customization and use FFmpeg. Also, for the best size/quality ratio, the only reliable way of getting the best quality at the smallest file size is ProRes > FFmpeg in the terminal, because the export API for Resolve is very limited and dependent on work from the community. This is good for size restrictions like Vimeo's. Cheers!
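For the ProRes-master-to-H.264 step mentioned above, a minimal terminal sketch (quality-based CRF mode rather than a fixed bitrate, which is one reasonable choice among several; filenames are placeholders):

    # H.264 delivery encoded from a ProRes master
    ffmpeg -i master_prores.mov -c:v libx264 -crf 18 -preset slower \
        -pix_fmt yuv420p -c:a aac -b:a 320k delivery_h264.mp4

Lower CRF means higher quality and bigger files; somewhere around 17-20 is a common range for a visually clean H.264 master.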
Great video, really... but I'll be honest: I've watched this video on a 55" Sony TV and I don't see any differences... probably YouTube's compression is just too much. Anyway, again, great video.
Any suggestions for Windows users with Premiere Pro?
Should your output colour space ALWAYS be set to REC 709 Gamma 2.4 regardless of project?
The gamma shift happens because of the media player that you use, not because of Resolve, etc. It depends on whether your video is recorded or exported as Rec.709 or Rec.601; basically, if you play a Rec.709 video in a Rec.601 video player, the footage will look darker. On YouTube it won't look as dark because it supports both 709 and 601 well.
Badass video bro! Cheers from Venezuela. 👏🏻👊🏻
Ty
Just as I shared the older sRGB video with a friend who needed help for Insta. Thanks Qazi!
Let's go! 💪🏽
I didn't notice any difference between any of them as it was constantly covered up by the incessantly moving cursor. I mean fine if you want to point something out, but chill with the ADHD cursor movements. Would be cool to see some h.265 as that is what I've been using. Though I haven't really played with the default settings.
Great video. What about H.265 10 bit (main10)?
We need export settings for instagram reels. The compression kills it every time.
Thank you for this awesome insight. I also noticed selecting data levels between “video” and “full” can shift the gamma. Lately I’ve just been working in timeline color space Rec. 709-A and then output colorspace Rec. 709 gamma 2.4 and then you can export any settings on a Mac and the color won’t shift. At least those are the results I found to work.
Same.
Have you tried working on Davinci Wide Gamut Intermediate timeline and then output to Rec709-A
@@daniellrodriguezs I specifically work in DWG w/ a 709-A color space. Export to 709. Seems to be working great.
@@daniellrodriguezs If you are on a Mac you can set the timeline color space to DaVinci Wide Gamut so you have a big color space to work in, and then set the output color space to Rec.709-A. Also check "use Mac display color profiles for viewers" in Resolve preferences, and in the advanced export settings set the gamma tag to Rec.709-A. This way the export will look exactly like it does in Resolve when you upload it to YouTube.
This is the best way to work on a Mac… I always use timeline color space > DWG and output color space > Rec.709-A.
What about Instagram?? Will it be the same as YT?? Please make a video on the export settings for Instagram 😢😢
I do DNxHR in SQ for YouTube. I have tried a lot of different combinations; this one ends up around 70 GB for a 20-minute video, takes about 30 minutes to upload and about 45 minutes to be processed in 4K by YouTube. I always export in 4K, no matter if my video is 1080p, 2.4K, or 4K... I think it does well with YouTube's quality for my motorcycle videos.
Great comparison! One of my producers uses FCPX to compose everyone's work (sound, color, graphics). I just tested what you said in a video, where you used the Rec.709 sRGB tag, and that's wonderful: all of a sudden QuickTime, FCPX, output from FCPX, and the web all look the same as the image on my monitor. FCPX doesn't read the 1-2-1 Rec.709 gamma 2.4 tag properly; it just handles 1-13-1 (Rec.709 sRGB) really well. Thanks Qazi!
Happy to help, brother. FCM champion 💪🏽
Awesome Video man. Thank You for this!
I export a custom HEVC with a color space transform on A7IV footage (S-Log3 to Rec.709 2.4), and it always looks the same as what I see in Resolve. I also have no problem exporting to AV1. I also set it to optimize for network.
🤔🤔
Davinci should make this video play everytime you hit the export button.
Nice video. 15 minutes is too long. Can you do it in 3?
No link in your description for downloading the preset settings
Great stuff!
So... the only difference between the H.264 default and custom is color ~magic~ science, and yet it somehow affects the quality of the final product after re-encoding by YT? That should not be the case. Unless:
- either the YT re-encoder uses those tags to actually calculate colors differently during re-encoding (which is kinda crazy to imagine, and I'd personally consider a bug), or
- these two videos happened to be processed by two different servers with two different encoding tools/versions on them.
The latter is much easier to imagine, since YT, like every big company, constantly runs A/B tests to find the best setup. So we won't have any proof, of course, but logically you should have got absolutely identical results between custom and default, and any difference between those two is probably just testing noise.
Could you perhaps make a video on how to export a video with the original file data? For example, if I have a file that was shot in 4K 60fps and I just want to trim the video (no colour adjustment or any enhancement to the footage), my question is why the exported file is bigger than the original file. Logic would say it must be smaller because the file has been trimmed. So how do I export a video keeping the same data?
If you're just cutting up footage, try LosslessCut. It does what you're describing (no quality loss) but isn't an editor and doesn't reprocess the file. The cuts can be a little messy sometimes, that's the only downside. If you're trimming files to keep/store it's really good.
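The same no-re-encode trim can also be done straight from ffmpeg with stream copy (a sketch; the timestamps and filenames are made up, and like LosslessCut the cut points snap to keyframes):

    # seek to 10 s, copy 2 min 20 s of streams without re-encoding
    ffmpeg -ss 00:00:10 -i shoot_4k60.mp4 -t 00:02:20 -c copy trimmed.mp4

Because nothing is re-encoded, the output can't end up bigger than the slice it came from, which answers the file-size question above.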
The content is amazing as always!
Great video! Next time can you include MP4 vs QuickTime formats and H.264 vs H.265?
BIG HELP! Thank You!!!!
Watching this video on my iPad, I cannot see any differences with ANY of the options.
Why do you upload in H.264 when YouTube uses VP09? Your H.264 video will be re-processed by YouTube to VP09 after upload.
What about using AV1? It's super fast to export on my 4090, and the filesizes are really reasonable...
Incredibly helpful! Thanks so much for this 🙏
But what about uploading in ProRes??
About the YouTube preset, I had a feeling it was bad after looking at the file size, hence I never used it. Also, I had no idea H.265 had a colour shift.
Amazing how these setting changed the look of my uploaded work... THANK YOU!!!
Learn and apply, my dude. Love it.
Hey Qazi! Ty for the preset. However when I click on the three dots to import it, I'm not getting the "import preset" option. :( Don't know why
Same here 🙃
Forgive me, but I don't see the download link for the preset in the description.
Export to Quicktime format for YT?
I always did mp4.
Perhaps there could be a comparison too?
So you just gave people an export profile with an sRGB gamma tag, though on Macs it should be Rec.709-A, because sRGB is a profile for photos, not for video. Also, most of these people are going to use it on Windows, as most people use Windows in general, and you just gave them a gamma tag with the wrong color space to just hang there, leaving the possibility it will affect the footage in unknown ways. Great, thanks.
So I have to use H.265 because I need to export in Main10 rather than Main, because the footage is 10-bit. Would you do a similar process for that?
Did not work for me. I downloaded your footage and preset, exported to YouTube from DaVinci, and it still looked washed out. Somebody help me please!!!
Hmm, I looked in the description but can't seem to find the preset
Look for "Practice Footage" in the description.
I've seen so many people recommend MP4 and QuickTime when I export. What's the difference and which should I choose?
Any similar clues for Insta? I know it's shit by design, but somehow the videos that are supposed to be sharp look like 💩
This is so helpful, thank you!
Why just show still-image quality? What about blur with panning?
Me, a millennial, fully ready and prepared to watch this video in its entirety - trying to survive the first 5 minutes of the video trying to convince me to watch the video in its entirety.
My export setting is MP4, H.265; so good, so light.
For 4K 30: H.265, 40 Mbps, Rec.709 gamma 2.4. For 4K 60fps: 85 Mbps? Screen recording in H.265 at 2K: 60 Mbps or 48 Mbps, then later upscale the lower res to 4K 60fps at 85 Mbps?
@theqazman thanks for sharing your knowledge and THANKS for tackling the gamma shift problem in such a direct and simple way. Just a couple of quick questions, though...
Since I'm on a MBP 16" M2 and using Resolve's Color Management (Davinci Wide Gamut) do I need to:
1- set my working color space to REC 709-A or gamma 2.4 is fine?
2- check or UN-check the "use Mac display color profiles" preference option?
Thanks again!