Normalization is one of three different ways to increase loudness. The other two are amplification and compression with makeup gain. From Judd's discussion, normalization adds a consistent, fixed amount to each sample. Normalization does not change the dynamic range, it just offsets it. Amplification, on the other hand, is a multiplier, so the increase in level is less for quiet signals and more for louder signals. Amplification [artificially] increases the dynamic range. The benefit of normalization over amplification is that it tends to introduce less overloading of the signal (the signal exceeding the maximum sample amplitude). For the spoken word, or other sounds that have a relatively consistent dynamic range, normalization works well. But for sounds that have a wide and variable dynamic range, like live music or a heated spoken-word exchange where the volume goes up and down as the emotions change, normalization typically results in excessive clipping. Caveat: this is my understanding of normalization, YMMV.
Compression, on the other hand, has variable gain depending on the average amplitude. At low volumes (quiet passages) it has linear gain, a 1 to 1 (1:1) ratio. When the amplitude reaches a set threshold, gain reduction takes place. Mild compression with a 2:1 ratio cuts the gain in half above the threshold: for every 2 dB increase in level, only 1 dB of gain occurs. Heavy compression may have a ratio of 4:1 or 6:1, and a limiter is compression with a greater than 10:1 ratio. With compression there is unity 1:1 gain for the average parts of the conversation, but when the conversation becomes excited, the gain is reduced. With live music you have unity gain for the average parts of the song, but when the singer belts it out, or the guitar cranks it up for a solo, the gain is reduced. Compression evens out the dynamic range.
Because compression reduces the dynamic range, compressors also have makeup gain to bring the level back up toward 0 dB. The best type of compression for recording is analog compression, because it reduces the input signal before the A/D converter, thus preventing overloading of the A/D converter, which causes harsh digital distortion. I have been cleaning up live audio tracks of singer/songwriter performances that are too quiet. I have found that I set a compressor to a 6:1 ratio with very fast attack and release times (10 msec), and makeup gain to 0 dB. I then use the threshold as a volume knob: by lowering the threshold to -12 dB, -18 dB, or even -24 dB, I increase the volume to match my normal listening level. Try it, you may like it. Before anyone complains: yes, this is very heavy-handed compression. I call it "mashing" the signal. For me it is the simplest way to increase the overall volume of expressive live recordings without introducing digital overload distortion. Music can have an amazing dynamic range.
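To make the normalization-vs-compression distinction above concrete, here's a rough Python sketch (hypothetical helper names, samples as floats in -1..1; a real compressor also has attack/release smoothing, which this omits):

```python
import math

def db_to_linear(db):
    """Convert a dB value to a linear amplitude factor."""
    return 10 ** (db / 20)

def peak_normalize(samples, target_db=-1.5):
    """Scale all samples by ONE fixed gain so the loudest peak hits target_db (dBFS).
    Dynamic range is unchanged; everything moves up or down together."""
    peak = max(abs(s) for s in samples)
    gain = db_to_linear(target_db) / peak
    return [s * gain for s in samples]

def compress(samples, threshold_db=-12.0, ratio=2.0):
    """Above the threshold, every `ratio` dB of input yields only 1 dB of output;
    below it, gain is 1:1. This is what evens out the dynamic range."""
    out = []
    for s in samples:
        if s == 0:
            out.append(0.0)
            continue
        level_db = 20 * math.log10(abs(s))
        if level_db > threshold_db:
            level_db = threshold_db + (level_db - threshold_db) / ratio
        out.append(math.copysign(db_to_linear(level_db), s))
    return out
```

For example, with a -12 dB threshold and a 2:1 ratio, a full-scale 0 dBFS sample comes out at -6 dB, while a quiet sample passes through untouched.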
Now I finally understand what "There are zero post processing done to the audio files besides loudness normalize to 24 LUFS, which is the standard loudness for European broadcasting." means :P
Curtis - I see a lot of discussion in editing and camera forums where someone says that their track is running out of sync. Pretty much every one of them has someone tell them that all the audio needs to be at 48 kHz, and that having one of them at 44.1 kHz is making the audio go out of sync. This sometimes comes from editing pros, but I don't get how audio sampled at different rates per second would make any difference in sync. Seems to me the key words are "per second" - I would think that a 3-minute song recorded at 96 kHz would still be 3 minutes. Is there any way for audio to go out of sync because of the sampling rate? I know this is not normalization related, but you did explain samples per second in this video.
Hi Jim, I'm not 100% certain, but I think a bigger issue than differing sample rates is the quality of the clocks in each of the devices. Most $500+ cameras have decent clocks that don't drift more than a frame in 10 minutes. Same with audio recorders. But phones tend to drift quite quickly in my experience.
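That "a frame in 10 minutes" figure can be turned into a clock-accuracy number with a little arithmetic; a quick sketch (hypothetical function name, 30 fps assumed for the example):

```python
def drift_ppm(frames_of_drift, fps, minutes):
    """Clock error, in parts per million, implied by drifting N frames over M minutes."""
    drift_seconds = frames_of_drift / fps
    elapsed_seconds = minutes * 60
    return drift_seconds / elapsed_seconds * 1e6

# One frame of drift at 30 fps over 10 minutes implies a clock error of roughly 56 ppm.
```

So two devices whose clocks are each accurate to a few tens of ppm can stay within a frame for quite a while, regardless of whether one records at 44.1 kHz and the other at 48 kHz.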
That is pretty much what I commented when this came up in forums. Back in the old days of digital we used to use master and slaves - all slaves would sync to the master clock. Thanks for your response, Curtis. Informative video ( as per usual.)
Looks like the loudness normalisation algorithm includes a bit of compression? From what I understand, audio normalisation is generally the process of automatically matching the audio level to a desired result, be that peak level or overall perceived loudness. I haven't seen compression being part of the normalisation process before.
For spoken word audio, it is often necessary to first create enough headroom to hit most modern loudness targets. So in most cases, at least a little compression is needed.
Thanks Curtis, excellent video. Knowing how to do loudness normalisation is a great help now that my project will be in the post-production phase very soon.
Nice work Curtis. You're the best audio tech YouTuber, and I really like your lighting setup. Quick question: can you do this RMS normalization in Premiere instead of Adobe Audition?
Excellent info! Thanks very much! As an "all-in-one producer," I shoot, edit, and mix projects for my clients. Audio is my weak point, for sure, so I sure appreciate this info. My question is, how does loudness normalization differ from compression? Clearly I don't understand either because I thought compression was exactly what loudness normalization is.
Hi Matthew, compression and loudness normalization are not exactly the same thing, though they're almost always used together. When you loudness normalize an audio clip, there is often the potential to clip the peaks; thus, we usually use a little bit of compression before loudness normalization to pull those peaks down so they don't get clipped. Does that make sense?
First, THANKS SO MUCH! Then; Oh, so confused. I must be the only one. I'd have thought that if you Loudness Normalize to, say -1.5dB, your loudest peak would hit that and go no further, while the rest of the wave would follow, so no clipping. Compression, in my experience, has the chance of clipping. Gads, the more I learn, the less I find I actually knew. If you're moved to do so, a full, "good practices" audio workflow from PP to AU back to PP would be crazy helpful. But then, it may just be me...
Hi Matthew, when you loudness normalize, two of the main parameters you set are the loudness (e.g., -19 LUFS) and the peak limiter (I often set this to -1.5dBTP). The clip is normalized to -19 and if any peaks go closer to 0 than -1.5dB, they are limited/compressed so that they stop at -1.5dB. By "clipping", I'm referring to cases where the waveform hits 0dB. Compression is usually used to "push the peaks down" farther away from 0dB so that when you do normalize, the peaks will not hit 0dB. Stick with it. With practice it'll make sense.
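A stripped-down sketch of what "normalize to -19 LUFS with a -1.5 dB ceiling" does mechanically (Python, hypothetical names; a real true-peak limiter is far more sophisticated than this hard clamp):

```python
def db_to_linear(db):
    """Convert a dB value to a linear amplitude factor."""
    return 10 ** (db / 20)

def loudness_normalize(samples, measured_lufs, target_lufs=-19.0, ceiling_db=-1.5):
    """Apply one fixed gain to bring the clip from its measured loudness to the
    target, then limit any peak that would pass the ceiling so it stops there."""
    gain = db_to_linear(target_lufs - measured_lufs)
    ceiling = db_to_linear(ceiling_db)
    return [max(-ceiling, min(ceiling, s * gain)) for s in samples]
```

So a clip measuring -25 LUFS gets +6 dB of gain everywhere, and only the peaks that would then exceed -1.5 dB are held back by the limiter stage.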
Hi Curtis, thank you for this explanation. Your videos on this topic have been incredibly helpful to us in maintaining consistency from video to video. - Sam & Rebekah
Great video! Had a question though: In this video you mentioned a preference for -19dB, but in other videos I thought you suggested -16dB was a better target for web. Am I confusing the applications for these clips? Thanks in advance.
Hi Gerald, -19 LUFS for mono is the same as -16 LUFS for stereo - the same thing from a human perception point of view. If your audio is stereo, go for -16 LUFS. If mono, -19 LUFS.
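The rule of thumb from this thread in code form (the fixed -3 LU offset between mono and stereo readings is the assumption here):

```python
def mono_equivalent(stereo_lufs):
    """A mono file measures ~3 LU lower for the same perceived loudness,
    so drop the target by 3 when working in mono."""
    return stereo_lufs - 3.0
```

For example, a -16 LUFS stereo target becomes -19 LUFS in mono, and a -23 LUFS broadcast target becomes -26 LUFS in mono.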
Hi there. Very helpful introduction. I'm doing a fan edit of a favourite movie - but have two sources with different loudness levels which I both want to use. A software program called GoldWave told me they are, respectively, An SD video: -28.76 LUFS (with a loudness range of 23.84 LU), and an HD video: at -16.20 LUFS (with a loudness range of 23.64 LU). Apart from loudness, the only difference between the two audios is that one has been mixed down to 2-channel stereo and the other (the -16.20 LUFS one) appears to be across 6 channels. I am using Cyberlink's Power Director 19 to edit. Can this handle different audios, I wonder? As a total newbie who barely knows what LUFS is, how on earth do I make the movies the same volume? What simple to use free software program can do it? Maybe GoldWave can???
The film with 6 channels is likely a 5.1 surround mix. You might try the Youlean loudness meter which I *think* can measure loudness in LUFS on surround mixes. You might need the pro version. *not 100% sure on this as I haven't worked with surround mixes.
Great video Curtis. I'm going to look into my Corel editor, just to see what type of normalization it will do. If I am not happy with it, I will take it into Audacity, and I'm sure I can probably rework it from there. Any or all of your comments are welcome. Thanks, Keith
@@curtisjudd Thanks for trying. Sorry if the question did not make any sense - very new to audio normalization. I have focused on everything but that lol
How do we do this kind of normalization in Audacity? I usually do noise reduction, then normalize. But should I do loudness normalization instead? Thanks a lot
Here's about as close as you can do with the default plugins in Audacity: ua-cam.com/video/L1FLali0xLk/v-deo.html You could also use the Youlean Loudness plugin which is free.
Hello Curtis, Keith Kuhn here. I wonder if you could tell me: you know as well as I do, every recording is recorded at a different level - some louder, some softer. My question is, I have a video I'm working on, and I am using Corel editing software. When I create an audio file (which I have not done yet), do you think some of the quieter audio levels will be brought up? Usually I will just use my ear to adjust audio levels to my liking, but in this case I think I will just normalize the whole thing to see what comes about. All or any of your input is welcome. Thanks, Keith
Loudness normalization is usually something you use for your final mix after you've already mixed the background sound/music/dialogue/effects. Then loudness normalization is the last step. Hope that helps!
Simple and informative video, thank you. When I try to normalize 4-track ambisonic audio in DaVinci Resolve I get no signal on the output. Should I normalize ambisonic audio in a different way?
Might be worth mentioning that in the last example some pretty heavy limiting is applied. If your definition of 'normalization' requires a linear change in amplitude, then this isn't it. It is possible to apply BS.1770-2 matching in Audition without limiting, but then of course you risk pushing the peaks above 0dBFS.
Hey Curtis, I got a quick question regarding post production of audio. What are you generally better off doing (for recording voice): boosting the low frequencies in the EQ to add depth to the sound on midtone-pronounced mics, or pushing the mids on bass-pronounced mics? I want to decide whether it makes more sense to buy a low-frequency/bass-emphasized mic like the vmp, or a midtone-pronounced one. Thanks a lot
Hey Curtis, just wondering if you have a tutorial explaining, in order, all the steps you take, starting from normalizing the sound in your NLE to final mixing in Audition for export to YouTube.
Hi MONIZ, I have an older video here: ua-cam.com/video/VcMOBrdbHgk/v-deo.html And a course over at school.learnlightandsound.com/p/sound-post-processing-1
Hey Curtis. So, from what I understand, normalizing to -3 dB is no longer necessary - loudness normalization is the way to go. Is it still necessary to use a compressor or limiter while editing in the NLE? I'm assuming loudness normalization takes care of that. Correct?
Hi Moniz, for most dialogue audio and for best results, you'll generally still want to use a compressor to manage the peaks down to a point where you have enough headroom to use the "Match Loudness" function without chopping off the peaks with its limiter.
Thank you for another great video. I hit -19 LUFS in mono, then I convert to stereo for -16 LUFS, which is perfect for web. Do you have all your YouTube videos at -24 or -19?
Between -16 and -17 LUFS for most, yes. When I do a microphone review, I will often loudness normalize to -23 LUFS without processing the audio in any other way with the aim that people can hear samples from the mic, in the raw.
Hi Curtis, I have a question. I want to buy a wireless microphone system and both the Sennheiser AVX and Rodelink Filmmaker kit seem like great products. I'm not an expert (like not at all) but if the Sennheiser AVX system operates on 1.9 GHz and the Rodelink uses 2.4 GHz, does that mean I would get more interference problems with the Rodelink because WiFi uses the same frequency band? I ask this because I will use the wireless system inside all kind of new cars at car dealerships and I absolutely want to avoid any interference problems. Thanks for all the great videos, really helped me a lot already!
Hi Stan, possibly, but I haven't generally found that to be the case. I have only experienced interference once on the RODELink (and I've used it a lot more than the AVX in wi-fi heavy environments). I would base your decision more on whether the additional cost of the AVX for the "dynamic range" feature is worth it to you.
I'm gonna follow your advice. The reason I asked about an alternative lav mic is because there's a message on RODE's website: "coming soon - RØDELink devices to be sold separately". But like you said, their latest mic should be better than the first version, so I'll buy the complete kit. I'll share my findings here in a couple of weeks. Thanks for the help. I really appreciate your effort to reply to every comment here!
Thank you for this! I have been using the normalization tool in the "effects tab" when rendering in Adobe Media Encoder. I can definitely tell the difference and I generally normalize to the ITU-R BS.1770-3 standard with a target loudness of -16 LUFS. I do have a question though: I use DaVinci Resolve for most of my personal videos, is there a plugin or technique that you could recommend for Loudness Normalization within DaVinci Resolve? All I am doing right now is aiming for -6 on the meter and using compression and limiting to stay around that range. Thank you!
Hi TX - 130, yes, I'll add this to my to-do list. Short version: Resolve does NOT have an offline way to measure loudness or to loudness normalize. Instead, it only has a real-time loudness meter. You set the target in the Project Settings->General Options. Then you have to play through the entire video to get a read on the loudness in LUFS. It is a bit tedious because then you have to tweak and run through the entire thing again.
Can someone help me? I've read you should keep your levels at -10 because if you hit 0 it peaks and distorts. So wouldn't boosting everything to -1.5 make it nearly distort?
Thanks for the question! That's old school info you got there. True peak at -1.5dB is safe to prevent distortion and digital clipping. Note that true peak is the important part.
WWJD... What would Judd do? Watch, listen, and learn my fellow audio shepherds. Thank you for the content you have bestowed upon us on this glorious day.
Hi Pawel, loudness is measured in a way similar to how humans hear loudness - sustained high amplitude and disregarding silence. To normalize audio to a target loudness level, you sometimes need to first apply some compression to reduce transient peaks so that they don’t clip when you do increase the loudness.
@@curtisjudd Exactly! Speaking of that, I would like to normalize the audio in a recording of a choir concert. Unfortunately, for logistical reasons I had to record it with only 2 mics positioned quite far away (between the middle field - where you would normally place the main stereo pair - and the far field - ambient/reverb), closer to the audience than to the choir itself. What I end up with is clapping as loud as gunfire, especially from people sitting near the mics. I don't know if you know what I am talking about, but these are extremely high, sharp peaks. When I tried to peak normalize it (I use Audacity), it didn't work, as they almost reached 0 dB in the first place. When I tried to compress these fragments it didn't work well either, as the compression was either too soft or it ruined the sound and made it inconsistent. Of course I can cut the clapping and normalize each song separately, but it would be inconsistent between tracks, and I would like to make a series of movies with this audio. So what should I do to compress this clapping? What settings should I use to compress it gently but sufficiently, so that I could normalize the whole recording properly?
How much do audio peaks (clapping, syncing, coughing) affect LUFS normalization? When you release a video for a film or documentary, at what point do you normalize - at the beginning or at the end of the audio editing process? When you edit your audio for YouTube, do you follow a different strategy (in addition to setting the appropriate LUFS value for the different platforms)? Thanks
Short transients like claps do not affect the overall integrated loudness by very much - that's the value of using LUFS. You normalize to the target loudness at the very end, after your mix is complete. For online content I target -16 LUFS. For content which goes to a theater, I'd target -23 LUFS so we'd have more dynamic range on a good playback system/room.
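A toy numeric illustration of that point - integrated loudness averages energy over the whole programme, so one short loud block barely moves it. (This sketch assumes equal-length measurement blocks and ignores BS.1770's K-weighting and gating; hypothetical function name.)

```python
import math

def integrated_db(block_dbs):
    """Average the blocks' ENERGY (not their dB values), then convert back to dB.
    This mimics how a long-term integrated measurement dilutes short transients."""
    mean_energy = sum(10 ** (db / 10) for db in block_dbs) / len(block_dbs)
    return 10 * math.log10(mean_energy)
```

For example, 1000 blocks of speech at -20 plus a single clap block at -3 moves the integrated figure only about 0.2 dB, even though the clap is 17 dB hotter than the speech.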
Curtis Judd Yeap, this is the method I currently use. After viewing this last year, I've been relying on this method consistently. I'm going to research and see if there is a Logic Pro process.
Thank you Curtis! May I request a small input from you about a problem of mine? I am struggling to get the round trip from Premiere to Audition to Premiere to work. I just want to export my whole project to AU, let it mix down, and reimport it into PP so it doesn't take so long while exporting ("processing audio"...). But it does not work correctly, leaving some random (!) moments without any sound and rendering others completely fine - with no pattern to this error. How do you work with multiple audio channels / tracks / sources in PP?
Hi DSP, Thanks for the question. Unfortunately I don't work a lot with Premiere Pro these days but I would use the "Edit Entire Sequence in Adobe Audition" option in the Edit menu. Then updates between Premiere and Audition are automatically reflected in the other. For example, if you do your edit in PP, then use the "Edit in Audition", then mix in Audition and save, all of the changes end up back in Premiere and it takes virtually no time. At least that was the way it worked last time I used Premiere CC. Perhaps it has changed?
You're a treasure for us, Curtis. :D Out of curiosity, would it be the same for music (live, or studio sessions). Or is that a whole 'nother ball of wax? :D Haha!
Hi Curtis, how would you go about normalizing other types of audio, like the audio in this video? I assume that they're trying to leave the footage as untouched and "pure" as possible, but sometimes there might be parts that are uncontrollably loud or too soft. Do you think they applied even a smidge of normalization, or did they keep it "pure"? Thanks.
Hi Jayro, compression and loudness normalization are not exactly the same thing, though they almost always go together. When you loudness normalize an audio clip, there is often the potential to clip the peaks; thus, we usually use a little bit of compression before loudness normalization to pull those peaks down so they don't get clipped.
Curtis Judd Thank you very much for responding! If I understand correctly, loudness normalization brings up the overall clip gain so it meets the desired loudness level without compressing the peaks or modifying any dynamics. Right?
Correct that it does not modify the dynamics except that most tools like the match loudness do employ a limiter which will compress the peaks if they hit the threshold (in this example, -1.5dBTP).
Hey Curtis. I have a tricky question. Broadcast-standard loudness normalisation is -24/-23 LUFS in the US/EU for stereo content. Do you know how this changes when dealing with mono, 3.0, and 5.1? My sub-question being: if I'm checking the loudness of my film's mix against a reference movie, that movie is in one of those configurations, but my film may be in a different one. How do I compare LUFS between movies of different channel configurations?
Hi ronfya, I don't know about surround but the perceptual equivalent for mono is always -3 LUFS from the equivalent stereo LUFS reading. That is to say, -23 LUFS stereo is perceptually equivalent to -26 LUFS mono.
Yeah, that one is easy :D Unfortunately most films are 3.0, 3.1 or 5.1 :P Can you point me to some audio geek who would be able to answer this tricky question? Thanks!
How do you do this in Audacity, the free open-source option? I'm guessing the compressor effect, but what settings? Which sliders? How do you loudness normalize to -19 LUFS in Audacity?
Thanks.. so Audacity doesn't have a plugin to apply that TV-standard algorithm, then. So how can you be sure it meets -19 LUFS? You have to check the waveform dB and do it by eye... set around -12 dB and play?
Adobe.com is where you buy Audition. Here’s a free plugin that might work in Audacity: ua-cam.com/video/xgDKF5Rjj7s/v-deo.html I demonstrate how it works in Final Cut Pro X in the video but it should hopefully work in Audacity as well. Good luck
Thanks, I will check it out.. I use Vegas Pro, but I can extract the audio, tweak it in Audacity, then join it again with Avidemux - free too. I like to pay once, not yearly, which is why I keep away from Adobe; it seems a greedy way to me.
So when I need to edit the volume of an interview for YouTube a bit, I can simply use Match Loudness at -19 LUFS and then a little fix with the equalizer, right?
Is there a way to do this in FCPX? My biggest issue is getting all the individual clips within a project to the same standard. Often I find that I don't perceive differences until the video is posted, and then I notice a glaring error....
Hi spelunkerd, this is one way to do it in FCPX: ua-cam.com/video/xgDKF5Rjj7s/v-deo.html As for matching one clip to another, the Youlean plugin I mentioned in the video can be used to measure the loudness of each clip so you can get them in the same ballpark.
Can this be done in RX7 Advanced? I'm trying to wean myself off needing Adobe. I also have Waves loudness meter plus... which I'm thinking does this as well.
Yes, RX7 Advanced has a great loudness module which is what I use 90% of the time. I haven't used the Waves Loudness Meter, but it probably does the same.
Hi Curtis, I was wondering: when a video has multiple tracks, like dialogue and gameplay for example, should you loudness normalize the tracks separately, or only the master at the end? I work with Adobe products btw. Thanks in advance.
Does anyone know how to apply these concepts within Magix Vegas Pro (formerly Sony Vegas)? The obvious normalize feature seems really limited as any spike keeps it from doing much of anything and you can't seem to control what standard it normalizes to.
Hi rsmith02, I don't have a working knowledge of Vegas Pro, but I bet it can work with VST plugins. And if so, you can use the Youlean loudness meter plugin (free) to at least measure your loudness. You'll still need to use an audio compressor plugin (probably included with Vegas Pro) to pull down the peaks so that you can normalize the audio to the level you need. Here's how I did it in Final Cut Pro X. Of course it won't be identical, but this should give you the idea: ua-cam.com/video/xgDKF5Rjj7s/v-deo.html
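For the "measure with a loudness meter, then adjust" workflow described above, the arithmetic for the final gain move is simple (sketch; the function name and numbers are just examples):

```python
def gain_to_target(measured_lufs, target_lufs=-16.0):
    """LUFS is a dB-like scale, so the makeup gain to apply (after compressing
    the peaks to make headroom) is just the difference from the target."""
    return target_lufs - measured_lufs

# A mix metering at -23 LUFS needs +7 dB of gain to hit a -16 LUFS target.
```

That +7 dB is exactly why the peaks need to be compressed down first: if the loudest peak is already above -7 dBFS, the gain move would push it past 0 dBFS.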
@@curtisjudd Thanks so much for the personal reply! I'll try this for my next project. Attempting to do it by hand for a 2 hr panel discussion was painful.
DaVinci Resolve's Fairlight and Apple Logic Pro X both have LUFS/LKFS loudness meters. However, they do not have offline clip editors like Audition, and Fairlight does not have a true peak meter. iZotope RX is one of my favorite apps for doing loudness normalization, so I rely on it for that. I round trip from Audition or Fairlight to RX and back quite seamlessly.
Ironically, this very video is not fully normalized / equalized. I just turned on my Loudness Equalization to check and I can hear your voice slightly better, because it turns it up a bit. While turning off, your voice slightly lowers. Which means you did a pretty good job at normalizing this video, but not completely, about 70%. :)
@@curtisjudd More like a reference point - the volume itself, from that Windows enhancement called "Loudness Equalization". I'm not an expert on this though, lol. I figured that if, by enabling the enhancement, the video's volume turns up, that means the original volume of the video was too low, and if it gets quieter, that means the original volume was too high. I come across a lot of such videos on YouTube that are either too quiet or too loud. With Loudness Equalization enabled, all those videos become normalized. Also, with LE enabled you are basically immune to those ear-rape meme videos :D Whenever I watch YT I enable it.
@@RTXonerix Thanks for the details. So in short, Microsoft and I only disagree on the loudness target one should hit. There is a trade-off to keep in mind - the louder you go, the less dynamic range you get. From my point of view, spoken word content should be normalized to -16 LUFS. That is an opinion. If I had to guess, Windows is either targeting -14 LUFS which is common in the music world, or using some sort of RMS target. That is also an opinion. So I'm not wrong and Windows isn't wrong, we just have different preferences for different targets.
I hate how music is louder than the talking. A lot of the time I need to turn the volume up to hear the talking, then the loud music comes on and spooks me, and I hate that because it's too loud. So I turn it down, then the music is gone and I can't hear the talking, so I turn the volume up and down. Pain in my rear!!
@@ericcxxxx It depends on which app you’re using to edit. But you should be able to reduce the level on just the music either by putting it on its own track and reducing the track level, or reducing the level on the music clips.
Also, the -14 LUFS target is all well and good for music but too compressed for spoken word content from my point of view. So I aim for -17 to -16 LUFS.
I wish all YouTubers would normalize their speech sound. Manually increasing and decreasing my phone's volume as the YouTuber decides to scream for the whole planet to hear and in the next second talk at their quietest pisses me the fuck off.
Honestly I've never understood this until you just perfectly explained it. This just improved the quality of my audio.
Thanks!
@@curtisjudd I 100% agree... you are brilliant!!!
@@kaigatz Thanks Kai!
@@curtisjudd Dude for real, I am obsessed with your channel now. It's like a haze is slowly being lifted and I'm actually starting to understand this stuff. Thank you!
@@brandonbuckles826 👍
Perfect! I had an editor ask me what's the difference between normalizing the audio and using the Match Loudness feature. My answer was, "Well... I don't know. Let me ask Curtis." Thank you! Very informative, succinctly described, and validates my use of Matching Loudness to the ITU standard. :)
p.s. Tell my pretend bro "Hi!".
Thanks for the question, that was very helpful! Good questions often lead to good things.
Will do!
Fun tip: You can add this normalization to your Adobe Premiere Pro and Adobe Media Encoder export presets - that's how I handle it :)
Thanks, yes! Covered in detail in the linked videos (upper right corner in the “I” menu)
Here's a way to do it in FCPX. It isn't offline like in Premiere or Audition, but the job can be done: ua-cam.com/video/xgDKF5Rjj7s/v-deo.html
Wow, how educational this channel is --- I just discovered it a couple days ago and I'm hooked. Thanks so much for your work! Not sure how I am going to manage watching all the other videos you have, but they will definitely improve my novice skills exponentially.
Thanks KeshCGY!
Thank you! I viewed some of your older videos on this topic and they have improved my video's audio tremendously.
Glad they helped! Thanks for the feedback!
Curtis, your tutorials are so amazing - if all the content on YouTube met this standard, the world would be very different!
Thanks David.
So much gold in these here hills...
Thanks Cry of the Prophet!
It took me 30 years as a commercial photographer to recognize the characteristics of light. But I'm ten years into video - with 12 Amazon Prime shows - and I'm still struggling with audio. Specifically, how to capture spoken dialog that's clear and recorded at consistent levels. Therefore I'm extremely grateful to Curtis for his concise tutorials and product reviews. Thank you very much!
Thanks so much Robert! It is a journey, indeed!
This was really helpful, great video!
Thanks!
Once again great video. Thanks
Thanks Nurul!
YouTube should do this for every video. They compress every video anyway. And I've left comments on people's videos about not only the levels between episodes, but what drives me most nuts: a super loud, over-produced intro, then very quiet content. Absolutely drives me bananas.
I'm actually glad that YouTube does NOT do this automatically, because they'd probably compress and normalize everything to -14 LUFS, which doesn't always sound great for spoken word content in my opinion. It's too fatiguing to listen to audio that is so compressed for long periods of time.
Thanks a lot Mr. Judd for small but such useful and great info. You are a great inspiration for people who care about minute details, though these kinds of people are a scarce breed nowadays, and I am one of them. Thanks for sharing your knowledge. Waiting for your review of the new upcoming Mavic Pro Gen 2 and a comparison of its sensor quality with the latest Phantom.
Thanks Vivek!
Agreed!
Curtis Judd No need to say thanks, sir, your work is more than enough. Learned a lot from you and happy to see your channel grow.
0 dislikes, that's what a video like this deserves... Thank you Curtis!
Thanks Erick.
Very well explained! Thank you sir!
Thanks FitFessor!
Normalization is one of three ways to increase loudness; the other two are amplification and compression with makeup gain. From Judd's discussion, normalization adds a consistent, fixed amount of gain to each sample, so it does not change dynamic range, it just offsets it. Amplification, on the other hand, is a multiplier, so the increase in level is smaller for quiet signals and larger for louder ones; amplification [artificially] increases dynamic range. The benefit of normalization over amplification is that it tends to introduce less overloading of the signal (the signal exceeding the maximum sample amplitude). For the spoken word, or other sounds with a relatively consistent dynamic range, normalization works well. But for sounds with a wide and variable dynamic range, like live music or a heated spoken-word exchange where the volume rises and falls with the emotions, normalization typically results in excessive clipping. Caveat: this is my understanding of normalization, YMMV.
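That fixed-offset idea can be sketched in a few lines of Python (my own toy illustration, working in dB levels rather than raw samples): applying one uniform gain leaves the distance between the quietest and loudest passages unchanged.

```python
# Peak normalization as a uniform gain: every passage level (in dBFS)
# gets the same offset, so dynamic range (loudest minus quietest)
# is preserved exactly.

def normalize_peak(levels_db, target_peak_db=-1.0):
    """Shift all levels so the loudest one lands at target_peak_db."""
    gain = target_peak_db - max(levels_db)
    return [lvl + gain for lvl in levels_db]

levels = [-30.0, -18.0, -6.0]        # quiet, medium, loud passages
normalized = normalize_peak(levels)  # every level moves up by the same 5 dB

# Dynamic range is identical before and after the offset:
assert normalized == [-25.0, -13.0, -1.0]
assert max(levels) - min(levels) == max(normalized) - min(normalized) == 24.0
```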
Compression, on the other hand, has variable gain depending on the amplitude. At low volumes (quiet passages) it has linear gain, a 1:1 ratio. When the amplitude reaches a set threshold, gain reduction takes place. Mild compression with a 2:1 ratio cuts the gain in half above the threshold: for every 2 dB increase in input level, only 1 dB of increase comes out. Heavy compression may use a ratio of 4:1 or 6:1, and a limiter is compression with a ratio greater than 10:1. With compression there is unity (1:1) gain for the average parts of the conversation, but when the conversation becomes excited, the gain is reduced. With live music you have unity gain for the average parts of the song, but when the singer belts it out, or the guitar cranks up for a solo, the gain is reduced. Compression evens out the dynamic range. Because compression reduces the dynamic range, compressors also have "makeup" gain to bring the level back up toward 0 dB. The best type of compression for recording is analog compression, because it reduces the input signal before the A/D converter, preventing the overloads that cause harsh digital distortion.
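The ratio arithmetic above can be written as a tiny static gain curve (again just my own illustration; real compressors also have attack, release, and knee behavior that this ignores):

```python
def compress_db(level_db, threshold_db=-20.0, ratio=2.0):
    """Static gain curve of a downward compressor (dB in, dB out).
    Below the threshold: unity (1:1) gain. Above it: every `ratio` dB
    of input above the threshold yields only 1 dB of output above it."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

assert compress_db(-30.0) == -30.0              # below threshold: unchanged
assert compress_db(-10.0) == -15.0              # 10 dB over -> only 5 dB over (2:1)
assert compress_db(-10.0, ratio=10.0) == -19.0  # limiter-like: 10 dB over -> 1 dB over
```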
I have been cleaning up live audio tracks of singer/songwriter performances that are too quiet. I set a compressor to a 6:1 ratio with very fast attack and release times (10 ms) and makeup gain to 0 dB, then use the threshold as a volume knob: by lowering the threshold to -12 dB, -18 dB, or even -24 dB, I increase the volume to match my normal listening level. Try it, you may like it. Before anyone complains: yes, this is very heavy-handed compression. I call it "mashing" the signal. For me it is the simplest way to increase the overall volume of expressive live recordings without introducing digital overload distortion. Music can have an amazing dynamic range.
Good info, thanks Robert!
OMG! This is a timesaver. Thank you!
👍
Another great explanation video! Impressed and following
Thanks Goran
I love this page
Thanks sam.
Should I trust the loudness panel's built-in compressor, or should I compress it myself first before normalizing?
The match loudness panel just uses a brick wall limiter so I generally prefer to compress first so I have a bit more control over how that is done.
Now I finally understand what "There are zero post processing done to the audio files besides loudness normalize to 24 LUFS, which is the standard loudness for European broadcasting." means :P
Excellent!
Great information that's clearly explained, as always. Thanks, Curtis!
Thanks!
Curtis - I see a lot of discussion in editing and camera forums where someone says that their track is running out of sync. Pretty much every time, someone tells them that all the audio needs to be at 48 kHz, and that having one track at 44.1 kHz is making the audio go out of sync. This sometimes comes from editing pros, but I don't get how audio sampled at different rates per second would make any difference in sync. Seems to me the key word is per second: a 3-minute song recorded at 96 kHz is still 3 minutes. Is there any way for audio to go out of sync because of the sample rate? I know this is not normalization related, but you did explain samples per second in this video.
Hi Jim, I'm not 100% certain, but I think a bigger issue than differing sample rates is the quality of the clocks in each of the devices. Most $500+ cameras have decent clocks that don't drift more than a frame in 10 minutes. Same with audio recorders. But phones tend to drift quite quickly in my experience.
That is pretty much what I commented when this came up in forums. Back in the old days of digital we used to use masters and slaves - all slaves would sync to the master clock. Thanks for your response, Curtis. Informative video (as per usual).
This video helped me out a lot, and it's sad that it got so few views despite being so helpful.
👍
Just as usual: brilliant and helpful! Thx
Thanks Michael!
Great stuff, Curtis.
Thanks Chris.
1:28 It doesn't raise the samples with value = 0, does it? Surely they stay the same.
It does, that’s why you need to apply compression first.
Looks like the loudness normalisation algorithm includes a bit of compression? From what I understand, audio normalisation is generally the process of automatically matching the audio level to a desired result, be that peak level or overall perceived loudness. I haven't seen compression being part of the normalization process before.
For spoken word audio, it is often necessary to first create enough headroom to hit most modern loudness targets. So in most cases, at least a little compression is needed.
Thank you for this turorial!!!
You’re welcome!
Thanks Curtis, excellent video. Knowing how to do loudness normalisation is a great help now that my project will be in the post-production phase very soon.
Thanks. Happy mixing!
Awesome as always!
Thanks Heng!
Great tutorial, thank you very much!
👍🏻
Terrific video! Thanks Curtis :)
Thanks Leigh
Nice work Curtis. You're the best audio tech UA-camr, and I really like your lighting setup. Quick question: can you do this RMS normalization in Premiere instead of Adobe Audition?
Yes. Here's a piece where we cover that: ua-cam.com/video/RrRn8stkEBs/v-deo.html
Perfect explanation!
Thank you for this video! Now I know how to normalize my audio for my videos!
👍
Thank you!
You're welcome!
Really useful tip, thanks!
Thanks Magnus.
Excellent info! Thanks very much!
As an "all-in-one producer," I shoot, edit, and mix projects for my clients. Audio is my weak point, for sure, so I sure appreciate this info.
My question is, how does loudness normalization differ from compression? Clearly I don't understand either because I thought compression was exactly what loudness normalization is.
Hi Matthew, compression and loudness normalization are not exactly the same thing, though they are almost always used together. When you loudness normalize an audio clip, there is often the potential to clip the peaks; thus, we usually use a little bit of compression before loudness normalization to pull those peaks down so they don't get clipped. Does that make sense?
First, THANKS SO MUCH!
Then: Oh, so confused. I must be the only one. I'd have thought that if you loudness normalize to, say, -1.5 dB, your loudest peak would hit that and go no further, while the rest of the wave would follow, so no clipping. Compression, in my experience, has a chance of clipping.
Gads, the more I learn, the less I find I actually knew.
If you're moved to do so, a full, "good practices" audio workflow from PP to AU back to PP would be crazy helpful. But then, it may just be me...
Hi Matthew, when you loudness normalize, two of the main parameters you set are the loudness (e.g., -19 LUFS) and the peak limiter (I often set this to -1.5dBTP). The clip is normalized to -19 and if any peaks go closer to 0 than -1.5dB, they are limited/compressed so that they stop at -1.5dB.
By "clipping", I'm referring to cases where the waveform hits 0dB. Compression is usually used to "push the peaks down" farther away from 0dB so that when you do normalize, the peaks will not hit 0dB.
Stick with it. With practice it'll make sense.
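The two-step behavior described in that reply (one uniform gain to hit the loudness target, plus a limiter that clamps peaks at the ceiling) can be sketched in a few lines of Python. This is my own toy illustration, not Audition's actual algorithm, and it uses simple sample-peak clamping where real true-peak limiting is more sophisticated:

```python
def match_loudness(levels_db, measured_lufs, target_lufs=-19.0, ceiling_db=-1.5):
    """Toy 'normalize, then limit': apply one uniform gain so the clip's
    measured loudness hits the target, then hard-limit anything that
    would pass the ceiling."""
    gain = target_lufs - measured_lufs
    return [min(lvl + gain, ceiling_db) for lvl in levels_db]

# A clip measured at -25 LUFS gets +6 dB of gain; its -4 dBFS peak
# would land at +2 dBFS, so the limiter clamps it at -1.5 dB.
out = match_loudness([-40.0, -20.0, -4.0], measured_lufs=-25.0)
assert out == [-34.0, -14.0, -1.5]
```

This is why compressing first gives more control: if the peaks already sit low enough, the limiter at the end never has to chop anything.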
You're a huge help and amazing fellow. Thank you for what you do and bring to the community!
@@curtisjudd So, would it make sense to use the hard limiter before adding loudness normalization?
Awesome video.
Thanks Daniel.
Hi Curtis, thank you for this explanation. Your videos on this topic have been incredibly helpful to us in maintaining consistency from video to video. - Sam & Rebekah
Thanks Sam and Rebekah!
This is great - have looked for that feature :)
👍
wow! so good, thanks
👍
Great video Curtis! Question!? Would I want to add Loudness Normalize first before doing any Noise Reduction and other audio processing??
I’d do everything to clean your audio and then as the very, very last step, loudness normalize.
@@curtisjudd Perfect! Appreciate the response!
thanks Curtis!
You're welcome!
Great video! Had a question though: In this video you mentioned a preference for -19dB, but in other videos I thought you suggested -16dB was a better target for web. Am I confusing the applications for these clips? Thanks in advance.
Hi Gerald, -19 LUFS for mono is the same as -16 LUFS for stereo - the same thing from a human perception point of view. If your audio is stereo, go for -16 LUFS. If mono, -19 LUFS.
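In other words, the mono/stereo relationship is just a fixed 3 LU offset, which is trivial to express (a sketch of the rule of thumb above, not a formal standard conversion):

```python
def mono_equivalent(stereo_lufs):
    """Perceptually equivalent mono target: 3 LU below the stereo target
    (e.g. -16 LUFS stereo is roughly equivalent to -19 LUFS mono)."""
    return stereo_lufs - 3.0

assert mono_equivalent(-16.0) == -19.0  # web spoken-word target
assert mono_equivalent(-23.0) == -26.0  # broadcast-style target
```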
Nice! Thanks!
👍
Hi there. Very helpful introduction.
I'm doing a fan edit of a favourite movie but have two sources with different loudness levels, both of which I want to use. A software program called GoldWave told me they are, respectively: an SD video at -28.76 LUFS (with a loudness range of 23.84 LU), and an HD video at -16.20 LUFS (with a loudness range of 23.64 LU). Apart from loudness, the only difference between the two audio tracks is that one has been mixed down to 2-channel stereo and the other (the -16.20 LUFS one) appears to be across 6 channels. I am using Cyberlink's Power Director 19 to edit. Can this handle different audio formats, I wonder? As a total newbie who barely knows what LUFS is, how on earth do I make the movies the same volume? What simple-to-use free software program can do it? Maybe GoldWave can?
The film with 6 channels is likely a 5.1 surround mix. You might try the Youlean loudness meter which I *think* can measure loudness in LUFS on surround mixes. You might need the pro version. *not 100% sure on this as I haven't worked with surround mixes.
@@curtisjudd Many thanks for the tip. After you suggested YouLean, I discovered it's available as an Audacity plug-in. Thanks again.
@@malvanlondon8683 👍
Thanks!
You're welcome!
Great video Curtis. I'm going to look into my Corel editor, just to see what types of normalization it will do. If I am not happy with it, I will take the audio into Audacity, and I'm sure I can probably rework it from there. Any or all of your comments are welcome. Thanks, Keith
Thanks Keith
Hi Keith, good call. Audition is probably a much better place to handle loudness normalization.
@@curtisjudd
Did you mean audacity or audition ?
@@KTHKUHNKK Sorry, I misread. I meant Audition. Audacity can do the same thing if you download and install the free Youlean Loudness Meter plugin.
@@curtisjudd
Thanks Curtis
I use Audacity.
It seems to do everything I need.
Keith Kuhn
What percentage should I use for normalization of my videos? My program does not show DBs, only percentage.
There's no way I know of to correlate percentage and LUFS.
@@curtisjudd thanks for trying. sorry if the question did not make any sense. very new to audio normalization. I have focused on everything but that lol
@@curtisjudd I am trying to start a channel about Game Reviews. So I appreciate any and all tips :) I currently use a Blue Yeti.
How do we do this kind of normalization in Audacity? I usually do noise reduction, then normalize. But should I do loudness normalization instead? Thanks a lot.
Here's about as close as you can do with the default plugins in Audacity: ua-cam.com/video/L1FLali0xLk/v-deo.html You could also use the Youlean Loudness plugin which is free.
Thanks for the insights! I used the info in this video to help me normalize the loudness of my videos in Premiere :D
👍
Hello Curtis, Keith Kuhn here. I wonder if you could tell me: as you know as well as I do, every recording is recorded at a different level, some louder, some softer. My question is, I have a video I'm working on, using Corel editing software. When I create an audio file (which I have not done yet), do you think some of the quieter audio levels will be brought up? Usually I just use my ear to adjust audio levels to my liking, but in this case I think I will just normalize the whole thing to see what comes about. All or any of your input is welcome. Thanks, Keith
Here's my input: ua-cam.com/video/nOzZKEWJ5wk/v-deo.html and ua-cam.com/video/lHNhxGojFRU/v-deo.html
My audio is still quiet on UA-cam even with normalization; for some reason UA-cam is putting my Content Loudness at -28 dB and below.
That's odd - I've never heard UA-cam make my videos less loud unless they were at -12 LUFS.
Thank you, this was very helpful to me. Would you recommend using peak or loudness normalization for background or ambient sounds?
Loudness normalization is usually something you use for your final mix after you've already mixed the background sound/music/dialogue/effects. Then loudness normalization is the last step. Hope that helps!
Hi Curtis! Great content! What is the current optimal audio normalization level for stereo content on UA-cam?
Hi Małgorzata, most reasonable recommendations for spoken word content are -16 to -17 LUFS.
Simple and informative video, thank you. When I try to normalize 4-track ambisonic audio in DaVinci Resolve I get no signal on the output. Should I normalize ambisonic audio in a different way?
I’m not sure, I haven’t spent much time in Resolve in recent years. Sorry about that.
Might be worth mentioning that in the last example some pretty heavy limiting is applied. If your definition of 'normalization' requires a linear change in amplitude, then this isn't it. It is possible to apply BS.1770-2 matching in Audition without limiting, but then of course you risk pushing the peaks above 0dBFS.
Agreed. I typically do some compression prior to normalizing.
hey Curtis,
I've got a quick question regarding post-production of audio. What are you generally better off doing: boosting the low frequencies to add depth to the sound of midrange-pronounced mics, or pushing the mids in the EQ on bass-pronounced mics (for recording voice)? I want to decide whether it makes more sense to buy a low-frequency/bass-emphasized mic like the vmp, or a midrange-pronounced one.
Thanks a lot
I usually find mid range a little harder to correct in post so I like to avoid midrange emphasized mics.
Hey Curtis, just wondering if you have a tutorial explaining all the steps you take in order starting from normalizing the sound with your NLE to final mixing in audition for export to UA-cam.
Hi MONIZ, I have an older video here: ua-cam.com/video/VcMOBrdbHgk/v-deo.html
And a course over at school.learnlightandsound.com/p/sound-post-processing-1
Hey Curtis, so, from what I am understanding, normalizing to -3 dB is no longer necessary; loudness normalization is the way to go. Is it still necessary to use a compressor or limiter while editing in the NLE? I'm assuming loudness normalization takes care of that. Correct?
Hi Moniz, for most dialogue audio and for best results, you'll generally still want to use a compressor to manage the peaks down to a point where you have enough headroom to use the "Match Loudness" function without chopping off the peaks with its limiter.
Thank you for another great video. I hit -19 LUFS in mono, then I convert to stereo for -16 LUFS, which is perfect for web. Do you have all your youtube videos at -24 or -19?
Between -16 and -17 LUFS for most, yes. When I do a microphone review, I will often loudness normalize to -23 LUFS without processing the audio in any other way with the aim that people can hear samples from the mic, in the raw.
Hi Curtis, I have a question. I want to buy a wireless microphone system and both the Sennheiser AVX and Rodelink Filmmaker kit seem like great products. I'm not an expert (like not at all) but if the Sennheiser AVX system operates on 1.9 GHz and the Rodelink uses 2.4 GHz, does that mean I would get more interference problems with the Rodelink because WiFi uses the same frequency band?
I ask this because I will use the wireless system inside all kind of new cars at car dealerships and I absolutely want to avoid any interference problems.
Thanks for all the great videos, really helped me a lot already!
Hi Stan, possibly, but I haven't generally found that to be the case. I have only experienced interference once on the RODELink (and I've used it a lot more than the AVX in wi-fi heavy environments). I would base your decision more on whether the additional cost of the AVX for the "dynamic range" feature is worth it to you.
In that case I'm going for the RODELink. Is there any lav mic you recommend to replace the standard one?
What’s your budget for the mic?
Actually, I’d stick with the included mic as a start and replace it if needed later on. They evidently updated it since my original kits.
I'm gonna follow your advice. The reason I asked about an alternative lav mic is because there's a message on RODE's website: "coming soon - RØDELink devices to be sold separately". But like you said, their latest mic should be better than the first version, so I'll buy the complete kit. I'll share my findings here in a couple of weeks.
Thanks for the help. I really appreciate your effort to reply on every comment here!
Is there a way or plugin to use for this in FCPX that you know of?
Yes: ua-cam.com/video/xgDKF5Rjj7s/v-deo.html
Thank you for this! I have been using the normalization tool in the "effects tab" when rendering in Adobe Media Encoder. I can definitely tell the difference and I generally normalize to the ITU-R BS.1770-3 standard with a target loudness of -16 LUFS.
I do have a question though: I use DaVinci Resolve for most of my personal videos, is there a plugin or technique that you could recommend for Loudness Normalization within DaVinci Resolve? All I am doing right now is aiming for -6 on the meter and using compression and limiting to stay around that range.
Thank you!
Hi TX - 130, yes, I'll add this to my to-do list. Short version: Resolve does NOT have an offline way to measure loudness or to loudness normalize. Instead, it only has a real-time loudness meter. You set the target in the Project Settings->General Options. Then you have to play through the entire video to get a read on the loudness in LUFS. It is a bit tedious because then you have to tweak and run through the entire thing again.
Can someone help me? I've read you should keep your levels at -10 because if you hit 0 it peaks and distorts. So wouldn't boosting everything to -1.5 make it nearly distort?
Thanks for the question! That's old school info you got there. True peak at -1.5dB is safe to prevent distortion and digital clipping. Note that true peak is the important part.
How can I do this in FCPX without Adobe Audition? Are there any suitable plugins for FCPX for this? Thanks.
Like this: ua-cam.com/video/xgDKF5Rjj7s/v-deo.html
When I edit audio in Audacity (like a song from my favorite band for example) can normalization be a good method to prevent clipping?
Yes
WWJD... What would Judd do? Watch, listen, and learn my fellow audio shepherds.
Thank you for the content you have bestowed upon us on this glorious day.
This is great, as I struggled with some normalization issues this past week.
Haha! Thanks Insanejughead. Appreciate the support!
Is loudness normalizations based on some kind of compression?
Hi Pawel, loudness is measured in a way similar to how humans hear loudness - sustained high amplitude and disregarding silence. To normalize audio to a target loudness level, you sometimes need to first apply some compression to reduce transient peaks so that they don’t clip when you do increase the loudness.
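As a rough sketch of why that pre-compression is sometimes needed (my own toy arithmetic, using simple sample peaks where a real workflow would measure true peaks), you can estimate how much peak reduction a clip needs before it can be normalized cleanly:

```python
def headroom_check(peak_db, measured_lufs, target_lufs=-16.0, ceiling_db=-1.5):
    """dB of peak reduction (compression/limiting) needed before a clip
    can be loudness-normalized without its peaks passing the ceiling."""
    gain = target_lufs - measured_lufs       # uniform gain to hit the target
    overshoot = (peak_db + gain) - ceiling_db
    return max(0.0, overshoot)

# Quiet clip (-26 LUFS) with peaks at -6 dBFS: +10 dB of gain would push
# the peaks to +4 dBFS, so about 5.5 dB of peak reduction is needed first.
assert headroom_check(peak_db=-6.0, measured_lufs=-26.0) == 5.5
# A clip with more headroom needs no compression at all:
assert headroom_check(peak_db=-12.0, measured_lufs=-26.0) == 0.0
```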
@@curtisjudd Exactly! Speaking of that, I would like to normalize the audio in a recording of a choir concert. Unfortunately, for logistical reasons I had to record it with only 2 mics positioned quite far away (between the middle field - where you would normally place the main stereo pair - and the far field - ambient/reverb), closer to the audience than to the choir itself. What I ended up with is clapping as loud as gunfire, especially from people sitting near the mics. I don't know if you know what I am talking about, but these are extremely high, sharp peaks. When I tried to peak normalize (I use Audacity), it didn't work, as they almost reached 0 dB in the first place. When I tried to compress these fragments it didn't work well either, as the compression was either too soft or it ruined the sound and made it inconsistent. Of course I can cut the clapping and normalize each song separately, but that would be inconsistent between tracks, and I would like to make a series of movies with this audio. So what should I do to compress this clapping? What settings should I use to compress it gently but sufficiently so that I can normalize the whole recording properly?
How much do audio peaks (clapping, syncing, coughing) affect LUFS normalization? When you release a video for a film or documentary, at what point do you normalize - at the beginning or at the end of the audio editing process?
When you edit your audio for UA-cam, do you follow a different strategy (in addition to setting the appropriate LUFS value for the different platforms)? Thanks
Short transients like claps do not affect the overall integrated loudness very much - that's the value of using LUFS. You normalize to the target loudness at the very end, after your mix is complete. For online content I target -16 LUFS. For content which goes to a theater, I’d target -23 LUFS so we’d have more dynamic range on a good playback system/room.
@@curtisjudd Thanks a lot Curtis. Accurate and exhaustive as usual
I’m curious how to do this with Apple Logic. Are there similar features to accomplish this in Apple's software?
Hi Adrian, I'm not familiar enough with Logic to demo this but here's one way to do it in Final Cut Pro X: ua-cam.com/video/xgDKF5Rjj7s/v-deo.html
Curtis Judd Yeap. This is the method I currently use. After viewing this last year, I’ve been relying on it consistently. I’m going to research and see if there is a Logic Pro process.
Thank you Curtis! May I request a small input from you about a problem of mine?
I am struggling to get the "round robin" from Premiere to Audition to Premiere to work. I just want to export my whole project to AU, let it mix down, and reimport it into PP so it doesn't take so long while exporting ("processing audio"...). But it does not work correctly, leaving some random (!) moments without any sound while rendering others completely fine, with no pattern to the error.
How do you work with multiple audio channels / tracks / sources in PP?
Hi DSP, Thanks for the question. Unfortunately I don't work a lot with Premiere Pro these days but I would use the "Edit Entire Sequence in Adobe Audition" option in the Edit menu. Then updates between Premiere and Audition are automatically reflected in the other. For example, if you do your edit in PP, then use the "Edit in Audition", then mix in Audition and save, all of the changes end up back in Premiere and it takes virtually no time. At least that was the way it worked last time I used Premiere CC. Perhaps it has changed?
Can I apply the Normalization settings when I use for the web or just for broadcasting only?
Hi Gabbo, for web, you'll generally want to target -16 LUFS instead of -23 or -24 LUFS. Otherwise, the approach is the same.
So to make the volume louder should we turn this on or off?
Depends on how loud your audio is currently. But I’d always turn it on
THIS should be MANDATORY knowledge for anyone posting on YT. lol.
Haha, yes!
I have a habit of normalizing all my music in Vegas before I add it to my playlist - is this a smart thing to do? :E
If it suits you, seems like a fine practice. Will make it so you don't have to keep adjusting the volume.
That's the no-nonsense, well explained info we are used to from you. Thanks again for sharing your knowledge!
Thanks Arpad!
Does it matter that Audition doesn't use 1770-4? Like, should I hop on Resolve and use theirs since it's more current?
Only if you are processing audio for TV broadcast. If not, you'll be fine with the previous iterations of BT1770.
You're a treasure for us, Curtis. :D
Out of curiosity, would it be the same for music (live, or studio sessions). Or is that a whole 'nother ball of wax? :D Haha!
Hi huyked! Thanks! Yes, definitely applies to recorded music, though not so much for live sound.
@@curtisjudd
Okay. Thank you so much!
Oh! And, hello! 👋
Hi Curtis, how would you go about normalizing other types of audio, like the audio from this video?
I assume that they're trying to leave the footage as untouched and "pure" as possible, but sometimes there might be parts that are uncontrollably loud or too soft.
Do you think they applied even a smidge of normalization, or would it be that they kept it "pure"? Thanks.
Hi, I would mix the levels so that they're even first, then normalize the entire mix in one pass. In fact, that's what I did.
Thanks for the video, Curtis. One question: Isn't loudness normalization basically compression?
Hi Jayro, compression and loudness normalization are not exactly the same thing, though they almost always go together. When you loudness normalize an audio clip, there is often the potential to clip the peaks; thus, we usually use a little bit of compression before loudness normalization to pull those peaks down so they don't get clipped.
Curtis Judd Thank you very much for responding! If I understand correctly, loudness normalization brings up the overall clip gain so it meets the desired loudness level without compressing the peaks or modifying any dynamics. Right?
Correct that it does not modify the dynamics except that most tools like the match loudness do employ a limiter which will compress the peaks if they hit the threshold (in this example, -1.5dBTP).
Curtis Judd Ok, I understand now. Thank you very much and keep up the great work!
Can you do this loudness normalisation in Audacity?
Hi Abbas, Audacity does not come with a plugin to measure loudness but you could add a plugin to do that.
Curtis Judd Ok, that's great then. If you recommend one, it would be helpful.
Youlean Loudness Meter: youlean.co/youlean-loudness-meter/
What you need to do is yell and whisper in the recording to see how it brings down the yelling and brings up the whispering to one hearable volume level.
👍
Hey Curtis. I have a tricky question. Broadcast standard loudness normalisation is -24/-23 LUFS in US/EU for stereo content. Do you know how this changes when dealing with mono, 3.0 and 5.1? My sub-question being: if I'm checking the loudness of my film's mix against a reference movie, that movie is in one of those configurations, but my film may be in a different one. How do I compare LUFS between movies of different channel configurations?
Hi ronfya, I don't know about surround but the perceptual equivalent for mono is always -3 LUFS from the equivalent stereo LUFS reading. That is to say, -23 LUFS stereo is perceptually equivalent to -26 LUFS mono.
Yeah, that one is easy :D
Unfortunately most films are 3.0, 3.1 or 5.1 :P
Can you point me to some audio geek that would be able to answer this tricky question ? Thanks !
I wish I could. Cinema Sound might have some info on that but I haven't seen a specific video where he addressed that.
How do you do this in Audacity, the free open-source one? I'm guessing the compressor effect, but what settings? Which sliders? Loudness normalize to -19 LUFS in Audacity?
Hi, here’s a tutorial for Audacity we made a while back: ua-cam.com/video/L1FLali0xLk/v-deo.html
Thanks. So Audacity doesn't have a plugin to apply that TV-standard algorithm, then. So how can you be sure it meets -19 LUFS? You have to check the waveform dB and do it by eye... -12 dB and play?
Where can you download the software you're using, and get the TV loudness setting? I don't see it in the description, only hardware gear.
Adobe.com is where you buy Audition. Here’s a free plugin that might work in Audacity: ua-cam.com/video/xgDKF5Rjj7s/v-deo.html I demonstrate how it works in Final Cut Pro X in the video but it should hopefully work in Audacity as well. Good luck
Thanks, I will check it out. I use Vegas Pro, but I can extract the audio, tweak it in Audacity, then join it again with Avidemux (free too). I like to pay once, not yearly, which is why I keep away from Adobe - the subscription seems a greedy way to me.
I find I can't remember all the details in a UA-cam video. A wiki would be better because it could be printed out.
Ah, I see. Perhaps someone could volunteer to setup and maintain the wiki.
So when I need to adjust the volume a bit on an interview for youtube, I can just use Match Loudness at -19 LUFS and then a little fix with the equalizer, right?
Yes.
@@curtisjudd thanks for the quick response;) have a nice day
Is there a way to do this in FCPX? My biggest issue is getting all the individual clips within a project to the same standard. Often I find that I don't perceive differences until the video is posted, and then I notice a glaring error....
Hi spelunkerd, this is one way to do it in FCPX: ua-cam.com/video/xgDKF5Rjj7s/v-deo.html
As for matching one clip to another, the Youlean plugin I mentioned in the video can be used to measure the loudness of each clip so you can get them in the same ballpark.
Can this be done in RX7 Advanced? I'm trying to wean myself off needing Adobe. I also have Waves loudness meter plus... which I'm thinking does this as well.
Yes, RX7 Advanced has a great loudness module which is what I use 90% of the time. I haven't used the Waves Loudness Meter, but it probably does the same.
Hi Curtis, I was wondering: when a video has multiple tracks, like dialogue and gameplay for example, should you loudness normalize the tracks separately, or only the final master?
I work with Adobe products btw.
Thanks in advance.
Always do the normalization on the final mix 👍
@@curtisjudd thank you for your fast response 😀
Ah yes, thank you so much!
👍
This is Moira Donohoe, not Ron Hamann. Can you recommend a business that can edit my videos for my artist business? Thank you, Moira Donohoe
I'm not sure, but this might help: filmora.wondershare.com/business/website-find-freelance-video-editor.html
Does anyone know how to apply these concepts within Magix Vegas Pro (formerly Sony Vegas)? The obvious normalize feature seems really limited as any spike keeps it from doing much of anything and you can't seem to control what standard it normalizes to.
Hi rsmith02, I don't have a working knowledge of Vegas Pro, but I bet it can work with VST plugins. And if so, you can use the Youlean loudness meter plugin (free) to at least measure your loudness. You'll still need to use an audio compressor plugin (probably included with Vegas Pro) to pull down the peaks so that you can normalize the audio to the level you need. Here's how I did it in Final Cut Pro X. Of course it won't be identical, but this should give you the idea: ua-cam.com/video/xgDKF5Rjj7s/v-deo.html
@@curtisjudd Thanks so much for the personal reply! I'll try this for my next project.
Attempting to do it by hand for a 2 hr panel discussion was painful.
Other than Audition, what other apps, particularly editing and DAW apps, can do loudness normalization using LUFS or LKFS?
DaVinci Resolve's Fairlight and Apple Logic X both have LUFS/LKFS loudness meters. However, they do not have offline clip editors like Audition, and Fairlight does not have a true-peak meter. iZotope RX is one of my favorite apps for doing loudness normalization, so I rely on it for that. I round-trip from Audition or Fairlight to RX and back quite seamlessly.
Ironically, this very video is not fully normalized/equalized. I just turned on my Loudness Equalization to check, and I can hear your voice slightly better because it turns it up a bit. When I turn it off, your voice gets slightly quieter. Which means you did a pretty good job at normalizing this video, but not completely, maybe 70%. :)
What are you using as a target or reference point?
@@curtisjudd More like a reference point: the volume itself, from that Windows enhancement called "Loudness Equalization". I'm not an expert on this though, lol. I figured that if enabling the enhancement turns the video's volume up, the original volume of the video was too low, and if it gets quieter, the original volume was too high. I come across a lot of such videos on YouTube that are either too quiet or too loud. With Loudness Equalization enabled, all those videos become normalized. Also, with LE enabled you are basically immune to those ear-rape meme videos :D Whenever I watch YT I enable it.
@@RTXonerix Thanks for the details. So in short, Microsoft and I only disagree on the loudness target one should hit. There is a trade-off to keep in mind - the louder you go, the less dynamic range you get. From my point of view, spoken word content should be normalized to -16 LUFS. That is an opinion. If I had to guess, Windows is either targeting -14 LUFS which is common in the music world, or using some sort of RMS target. That is also an opinion. So I'm not wrong and Windows isn't wrong, we just have different preferences for different targets.
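For concreteness, the loudness-normalization math behind any of these targets is just a gain offset. This is a sketch with made-up measurement numbers; in practice you would read the integrated LUFS from a loudness meter (Youlean, RX, Audition's Match Loudness, etc.):

```python
# Hypothetical measured value for illustration only.
measured_lufs = -21.3
target_lufs = -16.0   # the spoken-word target mentioned above

gain_db = target_lufs - measured_lufs   # gain to apply, in dB
gain_linear = 10 ** (gain_db / 20)      # multiplier for the raw samples

print(round(gain_db, 1))      # 5.3
print(round(gain_linear, 3))  # 1.841
```

The trade-off described above follows from this: a louder target (say -14 instead of -16 LUFS) demands more gain, which usually means more compression to keep true peaks under the ceiling, and therefore less dynamic range.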
@@curtisjudd Agreed. and thank you too, for the explanations and analysis, great video :]
How do I turn the music down in a movie on streaming services?
I’m sorry, I do not understand the question.
@@curtisjudd Sorry, not so good at texting. I edited my question.
I hate how the music is louder than the talking. A lot of the time I need to turn the volume up to hear the talking, then the loud music comes on and spooks me because it's too loud, so I turn it down, and then the music's gone and I can't hear the talking, so I turn the volume up again. Up and down, a pain in my rear!!
@@ericcxxxx It depends on which app you’re using to edit. But you should be able to reduce the level on just the music either by putting it on its own track and reducing the track level, or reducing the level on the music clips.
@@curtisjudd edit is easy fix
Can't believe how many people liked this video, even though it's at -7.9 dB (which is incredibly ironic)
Had to publish it at a lower LUFS level to demonstrate how to normalize later in the video - so people could hear the difference.
Also, the -14 LUFS target is all well and good for music but too compressed for spoken word content from my point of view. So I aim for -17 to -16 LUFS.
I wish all YouTubers would normalize their speech audio. Manually increasing and decreasing my phone's volume as the YouTuber screams for the whole planet to hear one second and then talks at a whisper the next pisses me the fuck off.
👍
came here trying to figure this out... my conclusion is: leave it for the folks that really understand. maybe in my next life I'll be one of them/you.
So hire a sound guy? Not a bad conclusion.