Big thanks to Dale Becker for taking the time to share his knowledge with us!
This is the sort of video people like me need to see more of. A comprehensive and clear communication of what the feature does and why it's important. Really excellent; thank you.
Agreed!
exactly
Love to see Dale on here! An absolute legend.
The amount of knowledge I just gained from this short video is staggering. Thank you so much for this!
We swapped loudness wars for squeeze wars.
show no weakness with your soft clipping 😂😂
@edwardlang5972 I never do anyways. Never! 😂
I soft clip the bejeezus out of the master. P.A.'s Black Box is my best friend lol 😅
I also wanted to add that I am a big Billie Eilish fan, and in my opinion, her latest album is mastered fantastically! It's great to now know who and what is behind it!
And I am very grateful that this useful tutorial is also provided with subtitles, which makes it more accessible to an audience that does not speak English!
Great info here
I'm going to get myself a birthday present 🤣. Actually, I discovered how awesome RX Elements is once I saw what it could do for vocals. Say you want to become a mix engineer, for example: it's basically a required tool for anyone serious about a recording-artist career or the mixing-and-mastering lifestyle. It made me look at audio in a new way, and I can vouch that it improved the sound of my work; I kept searching for the next level until, oh, I found it. Must-have. On top of that, learning the Merging Technologies MT48 Ethernet audio setup over AES67, with KH 120 II monitors and the MA 1 alignment software, was a major upgrade to my setup too. Got to love Neumann too, they're awesome people. Good luck, bro/sis! ♥
Your work on this album is amazing. Hats off to you, man.
Did not know there was a gate on the LUFS measurement on Spotify and other streaming platforms. Makes sense and is good to know. Thanks for sharing this!
Bro is working with sausages XD
The work on the Billie album was incredible. Happy to see him talk through it here
that's soooo helpful! thank u once again iZotope + Mr. Becker! ❤
Wow! This was crystal clear. Great presentation.
And all that really needs to happen is for streaming services to *stop re-processing and lossy encoding* music, aside from some volume normalising (downwards, not upwards with limiting) as a playback option.
Great video
Having used RX5 to clean up audio files...
I'm surprised (and pleased) to hear that the new versions have these capabilities.
(These issues of 'not sounding the same on Spotify / iTunes' are very familiar to me, from the albums I've been a part of..)
I'd love to be able to make tweaked / optimized versions of my music.
I found the visual representation of the spectrum very intuitive - to understand what's going on, and to interact / edit too.
Really interested to see how well things can be cleaned up
--
(unfortunately, pretty much everything is out of my price range due to long-term financial situations..
But 'when my ship comes in'... (Presuming it hasn't run aground on a reef, and the late captain isn't surrounded by snorkels...)...)
--
Love these videos, by the way.
The details of how professionals use these tools is very enlightening to me, and - at the very least - helps me in understanding what to look out for in music production.
(Sort of teaching myself everything I need.. Mostly when I encounter the next problem.. And the next.. And the next...
It's paying off, though)
Having AD(H)D, a lot of interfaces look daunting to me, and I'm overwhelmed by the choices and knobs and dials and settings.
iZotope have done a wonderful job in reducing visual complexity while expanding capabilities.
Very interesting to see what RX 11 does for loudness in music productions. Thanks for the video!
Thank you very much for introducing this great new feature! I was also struggling with mastering nature sounds, and it is indeed important to be able to even out the whole recording without any loss, so that it doesn't happen that a horse neighs loudly in one place and the rest of the recording, where birds are chirping much more quietly, ends up dull because of that one neigh.
And thanks, but I don't want to compress the rest. At some point I considered using automation, but I find this new feature quite good. It's even a bit addictive, and I think it shouldn't be used everywhere. But then again, why not!? 🙂
Thank you so much. I don't understand why all new songs have this flat waveform in the final stage instead of the waveform they had before. They sound overcompressed this way.
They also sound kind of lo-fi, like there aren't many highs. But I think that's an EQ issue.
Thanks for your video. The first ever useful video on RX 11; I have the Advanced version but have never trusted or used it. Lol.
Hi, I love Loudness Optimize. It's a great feature that I'd love to use with RX 11 batch processing. Is there a way, in batch processing, to make it "learn" optimal settings like you'd do when editing a single track?
Whoa!
Thanks, best/Mathias
Brilliant demo of this. I have a question: most info says we should leave -1 dB of headroom on the master for the lossy compression algorithm to sound its best. I notice you haven't done that here. That might just not be the focus of the demo, but do you adhere to this?
Often the downsides of having the ceiling at -1 dB (not as loud in non-normalized listening environments, which are more common than many people think) outweigh the barely, if at all, noticeable difference in quality it would give you. It depends on the song, but a ceiling close to 0 dB is probably still the standard and is rarely a problem.
An incredible tool… weird how the streaming industry told DIYers to "master to -14 LUFS or get turned down anyway," while the last example in this video was measuring -6.99 LUFS.
Do I need this process after Ozone 11?
Streaming Preview and Loudness Optimize are features that are only available in RX 11, so we'd say yes 😎
Imagine you've mastered an album: how do you keep the levels consistent across songs with optimization? For single releases this might be a superb way to go, but when your distributor delivers a whole batch of songs to the stores, how does it work there? If one song measures at 50% at the start but has a refrain that matches the perceived levels of all the other songs, would that lead to very different levels than if it measured at, for example, 90% at the start?
So pretty much the example at 12:00 is like having parallel reversed (upward) compression. It isn't necessary to own RX 11 to do that. Of course, you need to understand the plugin choice and the phase-related issues.
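For anyone wondering what that means in practice: "reversed" (upward) compression boosts material below a threshold instead of attenuating material above it. Here's a minimal gain-curve sketch, my own illustration of the general technique rather than how Loudness Optimize is actually implemented:

```python
import numpy as np

def upward_comp_gain_db(level_db, threshold_db=-30.0, ratio=2.0):
    """Static curve for upward compression: levels below the threshold are
    pulled up toward it by the ratio; levels at or above it are untouched."""
    level_db = np.asarray(level_db, dtype=float)
    below = threshold_db - level_db                       # dB under the threshold
    return np.where(below > 0.0, below * (1.0 - 1.0 / ratio), 0.0)

# A -50 dB passage gets +10 dB (ending up halfway to the threshold at 2:1);
# a -20 dB passage is already above the threshold and is left alone.
print(upward_comp_gain_db([-50.0, -20.0]))  # [10.  0.]
```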
Did not find it inside the trial, just plugins; there's no standalone version with Streaming Preview. Where is it?
Please make the next version, iZotope Ozone 12, with even smarter AI; it's just not good enough at mastering certain tracks and genres.
It needs to be an excellent Master Assistant.
Very useful and complex, but I didn’t get it 😵💫
Our video with Sam Loose is worth checking out, too, as another resource on how it all works! ua-cam.com/video/SZk5Xn1nDuY/v-deo.html
Ok, someone please help me understand this.
He measures a song that isn't optimized at -8.2 LUFS integrated... Then it gets optimized by the upward compressor, resulting in -8.7 LUFS integrated.
Then he goes on to say it will be played back louder, even though it is now quieter (speaking in integrated LUFS).
What am I missing?
Can someone please explain why it matters how much of the song is getting measured?
Why does lowering the LUFS result in a half-dB increase in playback volume on Spotify... as he says?
Spotify adjusts the gain based on the measured LUFS. So at the start, the measured LUFS is -8.2. Spotify will turn it down by 5.8 dB so that it plays back at -14 LUFS. After optimization, the measured LUFS is -8.7 LUFS. It is measuring quieter, even though the quiet sections were boosted. Spotify will only turn it down by 5.3 dB so that it plays back at -14 LUFS. So not only will Spotify turn the song down less, the quiet sections were boosted. Does that make sense?
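To put numbers on the reply above, here's a minimal sketch of the normalization arithmetic, assuming Spotify's -14 LUFS reference level (the function name is my own illustration, not anything from RX 11):

```python
def normalization_gain_db(integrated_lufs: float, target_lufs: float = -14.0) -> float:
    """Gain a Spotify-style normalizer applies so the track plays back
    at the target loudness (negative = turned down)."""
    return target_lufs - integrated_lufs

# Before optimization: measured -8.2 LUFS -> turned down by 5.8 dB.
print(normalization_gain_db(-8.2))  # ≈ -5.8
# After optimization: measured -8.7 LUFS -> turned down by only 5.3 dB.
print(normalization_gain_db(-8.7))  # ≈ -5.3
# Net result: ~0.5 dB louder playback, on top of the boosted quiet sections.
```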
It does. That's crazy. Thank you.
So it's a momentary-loudness gate, not a peak gate, that determines what counts toward the actual LUFS measurement for Spotify. So in order to be measured by the whole song rather than just the loudest sections, we push the quiet parts' loudness over the threshold and get louder playback with a louder mix. This really is the bedrock of audio engineering. What's next? Haha.
@leonscholz97 Yeah, isn't that crazy?! The integrated LUFS algorithm doesn't necessarily take the full song into account, only the audio that's above the relative gate. So if you're mastering for the most optimally loud playback on streaming, you want to ensure that the whole song is above the gate and being measured.
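For anyone who wants to dig in: the gate being described is the two-stage gating from ITU-R BS.1770 / EBU R 128. The song is measured in 400 ms loudness blocks; blocks below -70 LUFS are discarded (absolute gate), then blocks more than 10 LU below the ungated average are discarded too (relative gate), and only what's left counts toward the integrated number. A rough sketch over per-block loudness values (my own simplification, assuming the blocks are already measured; not iZotope's code):

```python
import numpy as np

def gated_integrated_lufs(block_lufs):
    """Approximate BS.1770 integrated loudness from 400 ms block loudness values."""
    blocks = np.asarray(block_lufs, dtype=float)
    blocks = blocks[blocks > -70.0]                 # absolute gate at -70 LUFS
    ungated = 10.0 * np.log10(np.mean(10.0 ** (blocks / 10.0)))
    kept = blocks[blocks > ungated - 10.0]          # relative gate: -10 LU below that
    return 10.0 * np.log10(np.mean(10.0 ** (kept / 10.0)))

# Loud choruses at -8 LUFS with a quiet -30 LUFS intro: the intro blocks fall
# below the relative gate, so they don't count toward the integrated number.
print(gated_integrated_lufs([-8.0] * 90 + [-30.0] * 30))  # ≈ -8.0
```

Boost that quiet intro above the gate and it starts dragging the integrated reading down, which is exactly why the optimized master measures quieter yet gets turned down less.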
In other words: you are ordering fries with extra fries.
"select the one of your choice" ie: you cannot account for all streaming platforms. And it's largely irrelevant, as it's utterly dependent on the playback normalizing settings, which in turn are dependent on the platform, if it's a free or paid account, playback settings, desktop or mobile app, and of course is subject to change at any time. And with bluetooth and spotify you immediately get bonus *compound* lossy encoding!
Just make it sound great for the music, lossless, on a detailed (flat) system. The listener always has the final say, as does the multitude of radio processing.
Hi there, after the Logic Pro 11 update I want to ask if it's still compatible with Ozone 11 Advanced. I tried the free trial and it does download on my computer (Sonoma 14.5, it looks like). I wait five to seven minutes, but then the window closes itself after it says "finished," and that's it, I can't open it... Can you please help me or give me any tips? I really want to buy Ozone 11 Advanced. It's very hard to get in touch with you guys.
Sorry to hear about the trouble you're encountering. To contact our product support team to troubleshoot this issue, please use the form on this page: bit.ly/izo_prod. Make sure to leave an email address for us and we'll get back to you as soon as we can.
@iZotopeOfficial OK, thanks... I think it's just got to be a setting on my computer, but I don't know where to go in Settings, because my computer is on Sonoma 14.5.
I will try the website you gave me. Thanks 🙏🏻
But you are reducing dynamics, no?
Spotify... the new gatekeeper. Lose the dynamics of your intro to "beat" a poor algorithm. This is why vinyl sales are skyrocketing.
So, this is basically what you can do with ADPTR Streamliner, but not in real time...
Too expensive for software, sorry.
Everything I dislike about modern mastering techniques: pump up the volume, make it all loud.
Being super loud on a playlist sometimes just sounds like that douchebag at a party, you know? So let's screw up a song's dynamics to please Spotify and the like? I honestly think the big majority of listeners don't care about that 1-2 dB difference in the end and won't skip it. It only has to be in the ballpark of the loudness of the genre. I'm all in when it's about getting a song as loud as it can go without breaking anything musical, but I don't see the point of blindly changing the macro dynamics to get a better LUFS reading... Please, don't be insecure, make great music. AI will take care of the shitty stuff very well in the long run.
As a long-term user of RX since the beginning, I am a big fan. This video, however, shows how completely half-witted most mastering is these days: better technology as the years go by, and declining skill in the music and the mastering. There is no excuse except a love of mediocrity.