If you are working in Logic and producing Atmos, nothing can beat the AirPods Max due to their built-in head-tracking capabilities. Otherwise it is a matter of preference, tbh. I currently use the Neumann NDH 30, but my preferences change from time to time.
Does anyone have a problem with object tracks in Logic getting out of sync? I'm done remixing a song for Dolby Atmos; I only ended up using a few object tracks, and everything else is in the surround bed. The project only has audio tracks and most of the processing is baked into the tracks, so there aren't many plugins. Objects get out of sync during playback; when I rewind and play the part again it sounds fine, but the problem persists on the export as well... so now I'm finished but can't get the final product.
I think I remember that being mentioned somewhere, but I don't recall where or why this can happen. If you do a comprehensive internet search you might find an answer.
Thank you for this Michael - just subscribed. I would be really grateful for help regarding the use of Ambisonic recordings in a Logic 10.7.4 Dolby Atmos project. I want to use recordings I made on a Zoom H3VR mic (both Ambisonics A and AmbiX format) as bed channels in the Dolby Atmos project, then use other stereo files as objects. What would be the workflow for getting the H3VR Ambisonic recordings into the Logic project? Do I need to convert these somehow to B-format before importing, or does the Logic version of Dolby Atmos auto-convert? If I do need to convert before importing, the H3VR is a 4-mic array, so first-order Ambisonics - does this mean I need to save as first-order B-format? I've watched your video on Ambisonics in Cubase, but I'm not sure it translates over to the Logic workflow for a Logic Dolby Atmos project. Any pointers you could give would be very, very welcome.
Thanks! Yes, you have to convert A to B first, since the A-format is mic-specific. I have not yet looked at Logic, but my understanding is that while Logic is not set up to work with Ambisonics, you can nudge it into using first-order Ambisonics (which is what you get from the H3VR) by using a quadraphonic track. You would then have to convert the audio into an Atmos channel format somehow. It might be easier to do the entire Ambisonics A -> 7.1.2 conversion in Reaper and then just drop the converted file into Logic. We do have an H3VR; I might do a video about it in the coming weeks. This is an excellent question.
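For anyone curious what that A-to-B conversion involves: for a first-order tetrahedral mic it is, at its core, a fixed sum-and-difference matrix over the four capsule signals. A minimal Python sketch (illustrative only; real converters such as the vendor plugins also apply per-capsule calibration filters, which this omits):

```python
# First-order A-format -> B-format (W/X/Y/Z) conversion.
# Capsules of a tetrahedral mic: FLU = front-left-up, FRD = front-right-down,
# BLD = back-left-down, BRU = back-right-up.

def a_to_b(flu, frd, bld, bru):
    """Convert one sample frame of A-format to B-format (W, X, Y, Z)."""
    w = flu + frd + bld + bru   # omnidirectional pressure
    x = flu + frd - bld - bru   # front-back figure-8
    y = flu - frd + bld - bru   # left-right figure-8
    z = flu - frd - bld + bru   # up-down figure-8
    return w, x, y, z

# A sound arriving from straight ahead hits both front capsules equally:
print(a_to_b(1.0, 1.0, 0.0, 0.0))  # -> (2.0, 2.0, 0.0, 0.0): W and X, no Y/Z
```

In practice you would apply this per sample across the whole 4-channel file (and normalize the levels), but the matrix itself is the whole trick.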
Hello Michael, and thank you for your videos (I've been watching for a couple of years now). Did you find a solution for adding Ambisonic recordings into Dolby Atmos? I have a Sennheiser AMBEO mic and a Zoom H8, and I was also trying to see if I could use the 3D recordings as a bed in Dolby Atmos Composer for either Logic or Ableton. I'd love to see if you've covered any ground in this direction. Thanks!
@thebillymark Thanks! That’s easy to do in Cubase/Nuendo or Pro Tools, but a challenge in Ableton or Logic. Interesting topic, let me think about this.
I look forward to it! Also, is it easy in Reaper to add 3D recordings to an Atmos bed? And/or is there an inexpensive version of Cubase that I could use for this one feature? Again, your videos have been so helpful!
Nice video Michael, as always! I'd love to see a video of you comparing the renderers (i.e. Dolby vs. Apple). I'd be curious to hear the differences in how the two handle binaural reproduction. Cheers!
Hey Michael. Remember I mentioned that my 3D object tracks go out of sync in Logic Pro? Well, I've since discovered that it ONLY affects instrument tracks and doesn't apply to audio tracks. Bed tracks are fine with both. Also, while instrument 3D object tracks go out of sync during playback, if I bounce a stereo mixdown offline the end result is fine timing-wise. Do you have any idea of a fix, please, Michael?
It is crazy. When I change the input format to surround on every track, the sound and loudness change completely. When surround input is selected, the Apple renderer sounds normal; when it isn't, the Dolby renderer sounds normal... any tips?
Another important thing: where are the meters for the master? The Dolby renderer provides that; I don't see it in the Logic implementation... meh!!
@@michaelgwagner A fraction of a second, but it makes it unusable. It doesn't ALWAYS happen, but often enough for me not to bother using 3D objects any more. Is there even much of a difference between using beds vs. 3D objects?
@@michaelgwagner Really? Beds and 3D objects sound the same to me, but I'm using it to create immersive-sounding stereo binaural mixdowns. I would love to use 3D objects, though, so if you do come across a solution for this problem, please let me know, Michael.
If you want to get up and running with Atmos fast in LPX, this is the video for you. I've been involved with Atmos for a couple of years now with ProTools and 7.1.4 speaker setup. And it was a pain to be an early adopter of this new technology. Thanks for making this accessible and easy...it's the only way it will be widely adopted!
Thanks! Much appreciated!
Thanks for this one Michael! I've been waiting on this one for a while, very much appreciated! Fingers crossed for more.
Thanks for this video Michael!! I have been told that the renderer in Logic does not replicate the same loudness as the Dolby Renderer does. I do not have the Dolby Renderer and I am hesitant to make the purchase at the moment. Have you made any comparisons on whether Logic's render is significantly different than using the external Dolby Renderer?? Thanks again!!
Straight to the point - Thanks Michael!!
Michael: I am using Logic Pro and want to import a Dolby Atmos bed derived from an Ambisonic field recording. I own the Rode NT-SF1 and plan to use their SoundField plugin to generate the surround file (I haven't done this yet because I'm having trouble getting that plugin to work in Logic Pro). In any event, I'm wondering if I'll be able to drag-and-drop the Dolby Atmos file right into Logic Pro to act as a bed?
If you have it in 7.1.2 format you can simply drop it. If you have it in Ambisonics (A or B) you first need to convert it to 7.1.2. Logic does not understand Ambisonics.
@@michaelgwagner Great! And another quick question. I'm gathering recordings using the Rode NT-SF1 and want to use their SoundField plugin. But I've been completely unable to get that plugin to work in Logic Pro or in Izotope. So do you know a quick and inexpensive way to get the SoundField plugin to work, so that I can easily generate Dolby Atmos beds and get on with producing head-trackable surround? Rode seems to have dropped the ball with regard to their plugin and haven't updated it since around 2018.
I would try Reaper. That usually solves everything.
@@michaelgwagner I'm now using the trial version of Reaper. The SoundField plugin works, but when I try to render a 7.1.2 surround file, I am forced to choose an 8 channel (or less) output and only the first two channels in the rendered file end up containing audio (the other six channels are silent). So I must be doing something wrong in the rendering process. Thanks for your help; this whole thing is driving me a bit nuts, I must admit.
Reaper can do this. And you can use the trial version indefinitely. It just keeps showing the nag screen but it keeps working.
Hi Michael. Great video! As a creator, what you need is for technology to do its job and get out of the way of the creative process. Logic does that 99% of the time. And Apple did an excellent job with Dolby on this one! As a professional, what you need is a way to provide that quality control your clients require and, as you mentioned, Logic is the only DAW at this time that lets you monitor, in real time, with a simple click, both binaural renderers (Dolby's and Apple's Spatial Audio) in order to try and match as much as possible the mix for playback on both Apple Music and Tidal or Amazon, which is so important for artists and labels! Yes, of course, it is a work in progress, and there are shortcomings (like not being able to assign tracks to specific objects, which in Pro Tools for example is so easy!). But Logic, and all DAWs, as well as the Dolby software, will only get better. What a great time to be making records . . .
Thanks!
What do you mean by it not being able to assign tracks to specific objects? It seemed in this video that you can set up any track to be a 3D object. What does ProTools do differently that you prefer? Curious. Thanks.
@@davekerzner Yes, any track can be an object, but it is whatever object Logic decides (in the integrated renderer); you cannot say, for example, "I want this track to be object 75." But that's only for the integrated renderer. If you work with the Dolby Atmos Renderer software and Logic, you can use the Dolby Atmos Music Panner along with the Dolby Atmos Binaural Settings plugin, and you can assign whatever you want wherever you want. But then you lose the one-click reference for Spatial Audio!! Hehehe!!! Can't win, can you???
With Atmos in Logic Pro, I'm very confused as to how to get the final output to be -18 LUFS and -1 True Peak! That's what Apple Music is requiring.
Actually adding Gain>Multi Mono with about -7.5 worked well to get my first Spatial Audio track to -18 LUFS, however, now I'm wondering how to get the True Peak to -1.0. Does anyone know? (Izotope Ozone is not yet compatible with Atmos).
Dolby has an article about this: professionalsupport.dolby.com/s/article/How-Should-I-Measure-Loudness-in-Logic-Pro-Using-the-Integrated-Dolby-Atmos-Workflow?language=en_US
I am not 100% certain but from what I understand Dolby Atmos measures loudness based on a multichannel downmix. If you want to stay within Logic you can simply put a loudness meter and a channel meter after the Atmos renderer. That would measure the 7.1.4 downmix. If you work with the external Dolby Atmos renderer you can use the metering that is built into the external renderer.
Edit: apparently, in order to match the metering of the external Dolby Atmos renderer, you need to set the monitoring option in the internal renderer to 5.1. If I read that correctly, this means that the "official" loudness measurement of a Dolby Atmos track is simply the loudness of the 5.1 downmix.
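Whichever downmix the meter reads, moving a measured value to a target is plain dB arithmetic. A small sketch in Python (the -18 LUFS target and roughly -7.5 dB Gain-plugin setting from the comments above serve as the example values; the measurement itself would come from a BS.1770-style meter, which this does not implement):

```python
def gain_to_target(measured_lufs, target_lufs=-18.0):
    """Return (gain_db, linear_factor) needed to move a mix from its
    measured integrated loudness to the target. LUFS is a dB-style
    scale, so the offset is a plain subtraction."""
    gain_db = target_lufs - measured_lufs
    linear = 10.0 ** (gain_db / 20.0)
    return gain_db, linear

# A mix measuring -10.5 LUFS needs -7.5 dB, close to the Gain plugin
# setting mentioned in the comment above:
gain_db, linear = gain_to_target(-10.5)
print(round(gain_db, 1), round(linear, 3))  # -7.5 dB ~= 0.422x
```

Note this only handles loudness; a -1 dBTP true-peak ceiling has to be verified separately with an oversampling true-peak meter, since sample peaks can undershoot the actual inter-sample peaks.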
Hello
Can you do a tutorial using VSL VI instruments and MIR PRO 3D
The steps to convert a stereo song to Dolby Atmos
Many thanks
I've found a few bugs: Samp doesn't work, and today, after adding some new tracks, the elevation in the top left was silent. Also, bounces sound different in binaural compared to the Dolby renderer.
Interesting
@@michaelgwagner Here's another issue: when you bounce the 'Surround' mix, i.e. the binaural, ALL the different encoders (Dolby, Apple, 2.0) do not phase-cancel. They're also not sample-aligned, so they have different start points. Exporting correctly for delivery to a distributor is scary, because the stereo version has to be sample-accurate to the Atmos file.
Great video! Thanks for making it. What does the "size" parameter do exactly? It was when you were in the panning section; it went by fast, showing each left and right icon getting wider but not the spread between them. Is it to do with them going into the height channels?
Oh, and you didn't get into the metadata aspect of the spatial audio mix. Does Apple Music recognize this metadata (mid, near, far, etc.) yet? Do all the streaming platforms recognize it? Is there a difference in how effective it is when set on a bed vs. an object? Good, useful things to know.
The size of an object specifies the area from which the sound is emitted. At size 0, it is a point source. For streaming, Atmos is encoded in a special format that allows an Atmos-capable player to approximate the original objects; players do not use the Atmos metadata directly. Most streaming services support this at this point.
Hi Michael, thanks a LOT for all your content! I have a question, I mixed an Atmos project using Logic Pro and I can get TOP information of my 3d Object Panners when I use Binaural rendering but there's no TOP speakers information when I use 7.1.4 monitoring. Any ideas on why?
Are you monitoring with a full 7.1.4 speaker system?
@@michaelgwagner Thanks for your rapid response! I am monitoring in full 7.1.4, and I think I just found the problem. The project had a Linear EQ AFTER the Dolby Atmos Plugin, and the linear EQ is Multi-Mono so it was messing with the top speakers' routing. Just disabled the EQ and the top speakers information appeared! Again, thanks for your rapid response and I want you to know that most of what I know about Atmos is because of your invaluable videos! Thanks a lot again!
@ThalesPosella Great!
Interesting video. The problem I have with Atmos on headphones is that I can't actually, honestly, for real, hear something "above" or "behind". I just hear a kind of extended stereo. And I'm not alone in this.
So, without a real Atmos studio with 12 speakers in a treated room, it's difficult to really mix in Atmos.
I guess with special headphones with personalized settings this could be different.
Humans are generally bad at perceiving height information, which is why the Facebook format ignores second-level height information altogether. "Behind" depends a lot on your ear physiology; that's where a personalized HRTF makes a huge difference.
Please, I'd love it if you could make a configuration video for the external Dolby Atmos Renderer in Logic Pro X.
Interesting idea. I‘ll look into it.
Thanks a lot! What do you think, is Ambisonics feasible in Logic Pro in a similar way?
Unfortunately, no.
Is there a way to set the default to be an object channel rather than the bed? I put almost everything into objects and hardly ever use the bed for anything aside from a few specific things.
I honestly don't know. Likely not.
Michael: Great introduction! I have a question ... does Logic Pro Atmos implementation make it easy to add 3D spatial reverberation to dry, monaural sound objects? If it does, I'll buy a beer for everyone in the bar!
Not necessarily easier. It pretty much works the same way it does in Cubase or Pro Tools. You need to add a send into the bed and then make sure that the panning is linked between send and track out.
@@michaelgwagner I think I now understand ... to add reverb, the object signal is sent through a 3D reverb and then the reverb is "baked" into a static Atmos bed. Right? Also, it follows that if the object is a dry, mono file that should be clearly localized (e.g. a bird song), the reverb should be entirely free of direct sound, such that only the reverb is baked in, not the dry sound object. In other words, the reverb should be "100% wet". Right?
In your Cubase video, the reverb you created seemed to include quite a lot of direct signal, which makes sense I suppose because you were spatializing a stereo recording, which would not be highly localized anyway. For my application, which involves spatialization of bird songs in an environmental bed, it seems important not to include any direct signal in the reverb because that is likely to muddy the object's location in the final 3D sound-field.
So, am I thinking about this correctly? My plan is to use DearVR Pro (in Logic Pro), both for spatialization and to add 3D reverb to bird songs. I don't know exactly what the workflow will look like, but I am confident it will be fairly easy to pull off. Or not?
Yes, you are correct.
Thank you for the videos, you are the pilot of my wild ride into the Atmos world. Please look into logic pro Atmos and the possible use of the waves Atmos nx
On a Mac you can use nxosc to use the Waves NX head tracker with APL Virtuoso. Virtuoso has a standalone version that you can connect to the multichannel output of Logic.
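Under the hood, bridges like this typically stream head orientation as small OSC packets over UDP. As a rough illustration of what gets sent, here is a pure-stdlib Python sketch of OSC message encoding; the address path `/head/ypr` and port 9000 are made-up placeholders, not what nxosc or Virtuoso actually use:

```python
import socket
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message: null-padded address string, type-tag
    string, then big-endian 32-bit floats. Enough to carry yaw/pitch/roll."""
    def pad(b):
        # OSC strings are null-terminated and padded to 4-byte boundaries
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode())
    msg += pad(("," + "f" * len(floats)).encode())
    for f in floats:
        msg += struct.pack(">f", f)
    return msg

# Hypothetical address/port -- check what the receiving renderer listens for.
packet = osc_message("/head/ypr", 45.0, 0.0, 0.0)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 9000))
```

A real tracker would send this 30-100 times per second as the orientation changes; the receiver rotates the binaural render to match.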
Would like to know what you think of Dolby Atmos in Studio One 6.5.
Next week 😉
Great video, thank you! I'm surprised how simple it is. I have a question about LFE in the Bed tracks. If I have a drum instrument and want to send to the LFE speaker, is it as simple as using that horizontal slider on the 3d panner? I've heard that in other DAWs it's important to low-pass any sounds going to the LFE. This would require making an aux track that only goes to the LFE with a filter on it. The reason being that on some playback systems, the LFE is full-range. Please let us know if you have thoughts on this!
Have not tested that tbh. I might do more Logic videos in the future.
@@michaelgwagner Do you have to use an Aux with low pass filter when mixing in Nuendo or PT?
No, you can send some of the audio from objects into the LFE with the panner and you can use traditional bass management on beds.
Good question..
Depending on your rig, you shouldn't need to employ any filters in the DAW such as high- and low-pass (for the reason you mention, anyhow). Of course you have that as an option, but watch out for overlap and phase issues if done badly.
Most of our target listeners go through an Atmos or surround setup, or headphones. Normally these will have bass management (crossovers) built into their AVR and/or speakers, so if that's your target it's a good idea to use it.
For your LFE and low frequencies, I'd recommend having a look into bass management and its frequency/channel settings. Ours, for example, sits with a +10 dB gain above the centre channel and comes in at 80 Hz, as that works for our studio.
This directs everything to the sub and can be extended in frequency by roughly an octave. Again, most satellite home speakers are not designed to reproduce anything below 80-100 Hz. Studio monitors, however, go much lower, so in theory it would be possible to get away without it. But if the monitors are full range, such as the KH310s we use (most studio monitors are full range anyhow, unless it's a specialised setup), they may interfere with the LFE itself through frequency overlap and phase issues, which can truly kill the mix.
So most of your bass should ideally be driven by your L, R, Ls etc. monitors themselves (never C). The LFE, being Low Frequency Effects, doesn't do what most people think: it's designed to emphasize the low end when needed, think cannon fire or explosions in a movie. In truth, this is only now being worked out for music; it's still the wild west to a certain point.
So if you set up the LFE and beds well, it should take care of itself.
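To make the crossover idea concrete: if you do decide to filter an LFE send yourself rather than relying on the monitor controller's bass management, it is just a steep low-pass around the crossover frequency. A rough offline sketch in Python, assuming SciPy is available; the 80 Hz cutoff and 4th-order Linkwitz-Riley slope are illustrative choices common in bass management, not Dolby requirements:

```python
import numpy as np
from scipy import signal

def lfe_lowpass(x, sr, cutoff=80.0):
    """4th-order Linkwitz-Riley low-pass: two cascaded 2nd-order
    Butterworth stages sharing the same cutoff frequency."""
    sos = signal.butter(2, cutoff, btype="lowpass", fs=sr, output="sos")
    return signal.sosfilt(sos, signal.sosfilt(sos, x))

# Demo: a 40 Hz tone passes almost unchanged, while a 1 kHz tone is
# attenuated by well over 60 dB once the filter settles.
sr = 48000
t = np.arange(2 * sr) / sr  # 2 seconds of audio
low_out = lfe_lowpass(np.sin(2 * np.pi * 40 * t), sr)
high_out = lfe_lowpass(np.sin(2 * np.pi * 1000 * t), sr)
print(np.max(np.abs(low_out[sr:])), np.max(np.abs(high_out[sr:])))
```

If you do this in the DAW while your monitoring chain also applies bass management, you get exactly the double-filtering overlap and phase trouble described above, so use one or the other.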
Just to add, we use the Dolby Atmos Renderer itself via Dolby Bridge. It provides a lot more functionality, but you still get to use the Logic panner etc. In the dropdown we select 7.1.4 or 7.1, depending on the job. ;)
What would be a good solution if I have a project where I have already used a lot of bus routings and bus mixing tools? Because surround routes everything to the master out. And what do I do with my master channel and the stereo plugins? Does it make sense to bounce my entire mix into individual audio tracks and then start a new surround project? But then I lose the previously used master plugins, because of course I turn them off during track bouncing. Thanks
I would probably render out stems and setup an immersive mixing project.
How do you fold down Atmos to a stereo mix if need be?
If you want to create an Atmos file you need to export it as an ADM BWF. I have not worked with Logic much, but if you just bounce or render the audio it should generate whatever you have set in the monitoring option of the renderer. But take this with a grain of salt, I have not tested that in Logic.
Nice video Michael. I'm an old Nuendo user (on Mac!!) and I am seriously considering moving to Logic Pro for the same reason you describe in your video: how easy and efficient it is to work in Logic with Dolby Atmos. On another note, did you notice that Immerse Virtual Studio Signature is compatible with Logic Pro? Have you been able to compare headphone monitoring between Immerse and Apple?
Thanks! Yes, the Apple headphone monitoring is second to none right now. But Audiomovers just released a plugin that brings this monitoring solution to any DAW (on a Mac).
Thanks for that Michael. I was wondering if non-apple head tracking might work inside Logic.. Any ideas?
It’s possible and I might do a video about it. But to be honest, the Apple head tracking is so good, I would probably just pick up a pair of Airpods.
@@michaelgwagner Indeed. It's just that I really like, (and know) my headphones..
Hi Michael. What studio headphones would you recommend for mixing and mastering? Thank you.
If you are working in Logic and produce Atmos, then nothing can beat the Airpods Max due to their built in headtracking capabilities. Otherwise it is a matter of preference tbh. I currently use the Neumann NDH30, but my preferences change from time to time.
Philly In The House!!!! Ok.. now I'm gonna go listen.. lol
😂
Does anyone have a problem with object tracks in Logic getting out of sync? I'm done remixing a song for Dolby Atmos. I only ended up using a few object tracks, and everything else is in the surround bed. The project only has audio tracks and most of the processing is baked into the tracks, so there aren't many plugins. Objects get out of sync during playback; when I rewind and play the part again it sounds fine, but the problem persists on the export as well... so now I'm finished but can't get the final product.
I think I remember that being mentioned somewhere, but I don't recall where or why this happens. If you do a comprehensive internet search you might find an answer.
This seems way easier than I imagined. I hate that I can't do this in Studio One though. Guess I'm going back to Logic. Lol
Logic is ahead when it comes to Atmos. Especially with respect to headtracking.
Hello, thanks for the explanation. I have a question: can I convert 5.1 cinema movies to Dolby Atmos?
Thank you for this Michael - just subscribed. I would be really grateful for help regarding the use of Ambisonic recordings in a Logic 10.7.4 Dolby Atmos project. I want to use recordings I made on a Zoom H3VR mic (both Ambisonics A and AmbiX format) as bed channels in the Dolby Atmos project, then use other stereo files as objects. What would be the workflow for getting the H3VR ambisonic recordings into the Logic project? Do I need to convert these somehow to B-format before importing, or does the Logic version of Dolby Atmos convert automatically? If I do need to convert before importing: the H3VR is a 4-mic array, so first-order Ambisonics - does this mean I need to save as first-order B-format? I've watched your video on Ambisonics in Cubase but am not sure it translates over to the Logic workflow for a Dolby Atmos project. Any pointers you could give would be very, very welcome.
Thanks! Yes, you have to first convert A to B, since the A format is mic specific. I have not yet looked at Logic, but my understanding is that while Logic is not set up to work with Ambisonics, you can nudge it into using first-order Ambisonics (which is what you get from the H3VR) by using a Quadraphonic track. You would then have to convert the audio into an Atmos channel format somehow. It might be easier to do the entire Ambisonics A -> 7.1.2 conversion in Reaper and then just drop the converted file into Logic. We do have an H3VR; I might do a video about it in the coming weeks. This is an excellent question you posted.
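For anyone wondering what the A-to-B conversion actually involves: for the classic tetrahedral capsule layout it is just a sum/difference matrix over the four capsule signals. A rough sketch in Python; the capsule order (LFU, RFD, LBD, RBU) and the 0.5 scaling are assumptions for illustration, and real mics like the H3VR additionally need the manufacturer's calibrated correction filters, which is what their conversion plugins provide:

```python
import numpy as np

# Classic tetrahedral A-format -> first-order B-format (W, X, Y, Z).
# Assumed capsule order: LFU (left-front-up), RFD (right-front-down),
# LBD (left-back-down), RBU (right-back-up).
A_TO_B = 0.5 * np.array([
    [1,  1,  1,  1],   # W: omni (sum of all capsules)
    [1,  1, -1, -1],   # X: front minus back
    [1, -1,  1, -1],   # Y: left minus right
    [1, -1, -1,  1],   # Z: up minus down
])

def a_to_b(a_format):
    """a_format: (4, n_samples) array of capsule signals LFU, RFD, LBD, RBU."""
    return A_TO_B @ a_format

# A signal hitting all four capsules equally ends up purely in W,
# with X, Y and Z cancelling to zero.
a = np.ones((4, 8))
b = a_to_b(a)
print(b[0])   # W carries the signal
print(b[1:])  # X, Y, Z are zero
```

The matrix only handles the directional decode; without the per-capsule calibration filters the spatial image of a real recording will be approximate.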
Hello Michael, and thank you for your videos (I've been watching for a couple years now. Did you find a solution for adding ambisonic recordings into dolby atmos? I have a sennheiser ambeo mic, a zoom H8, and I was also trying to see if I could use the 3d recordings as a bed in dolby atmos composer for either logic or ableton. I'd love to see if you've covered any ground in this direction. Thanks!
@thebillymark Thanks! That’s easy to do in Cubase/Nuendo or Pro Tools, but a challenge in Ableton or Logic. Interesting topic, let me think about this.
I look forward to it! Also, is it easy in Reaper to add 3d recordings in an atmos bed? And/Or is there an inexpensive version of Cubase that I could use for this one feature? Again, your videos have been so helpful!
Nice video Michael, as always! I'd love to see a video of you comparing the renderers (i.e. Dolby vs. Apple). I'd be curious to hear the differences in how the two handle binaural reproduction. Cheers!
Great suggestion!
@@michaelgwagner Also a comparison in Logic of a stereo binaural mixdown with the Apple Renderer selected vs. the Dolby Renderer?
Hey Michael. Remember I mentioned that my 3D object tracks go out of sync in Logic Pro? Well, I've since discovered that it's ONLY instrument tracks; it doesn't apply to audio tracks. Bed tracks are fine with both. Also, while instrument 3D object tracks go out of sync during playback, if I bounce a stereo mixdown offline, the end result is fine timing-wise. Do you have any idea of a fix at all please, Michael?
I don’t have a solution, but I know that Fiedler has the same issue and they are working on a fix for their plugin as a top priority.
@@michaelgwagner thank you for your reply Michael :-)
Great video. Thanks!
Glad you liked it!
Hey! Great content! Do I still need the Dolby Atmos Renderer if I have Logic?
Not unless you need the additional features the renderer provides. If you do not know what they are, you very likely don’t need them.
It is crazy. When I change the input format to surround on every track, the sound and loudness change completely. When surround input is selected, the Apple Renderer sounds normal, and if not, the Dolby Renderer sounds normal... any tips?
Yes I know, the inconsistency between rendering algorithms is annoying. Not sure how to work around that.
Can you also do head tracking on a MacBook Pro with an Intel chip?
As far as I know, yes.
@@michaelgwagner i just discovered on Apple support that it only works on Mac with Apple silicon. What a pity!
Ah, interesting, did not know that. Thanks for the info.
Tell me, can I use the Apple AirPods Max to mix my music in Logic?
Of course
Another important thing: where are the meters for the master? The Dolby Renderer provides that; I don't see it in Logic's implementation... meh!!
Mono is the answer to any issues you'll have with this plugin. This is better than Neutron.
🤔
I can't use 3D objects, as for some reason they go out of sync, so I can only use beds. Has anyone else found this issue and resolved it?
That’s interesting. Out of sync by how much?
@@michaelgwagner A fraction of a second, but it makes them unusable. It doesn't ALWAYS happen, but often enough that I don't bother using 3D objects any more. Is there even much of a difference between using beds vs. 3D objects?
Yes, if you are not using objects, then Atmos is pretty much pointless. You could just as well do a regular surround setup.
@@michaelgwagner Really? Beds and 3D objects sound the same to me, but I'm using it to create immersive-sounding stereo binaural mixdowns. I would love to use 3D objects, however, so if you do come across a solution for this problem, please would you let me know, Michael?
If you are only doing binaural mixdowns then Atmos is overkill. The value of Atmos is in the way objects are treated by the Atmos endpoint device.
More More More!
We’ll see. Lol.
Do we need a head tracker? I reached out to rj for a head tracker so I'd know what to get cheaply.
If you are using Logic I strongly recommend going with the Airpods.
Very interesting :-) Not a very big fan of Logic, as I never found it very logical lol
I have mixed feelings as well. 😂
Feelings Pro X
nice basic reverb tutorial
Thanks! 😃