By far, one of the best presentations I have seen so far! His no nonsense approach is easy to understand, he gives clear notes and doesn't move on until you understand the basics. Terrific!
I just finished building my Atmos studio. This video was amazing. I've never sat through a YouTube video this long before, but it was totally informative. Definitely subbed.
Hi Justin, thank you so much for sharing all this information. This is an all encompassing MasterClass! You clearly addressed so many topics where precise info is hard to find. Mastering has always been a part of making great sounding music, and you have proven it is just as important when working in Atmos/Immersive. Thanks again and greetings from Montreal.
What a treat! I'm just a home theater nerd listening in. Some thoughts on binaural/headphone spatial audio. I don't understand why there's metadata or special mastering considerations for headphone use. An ideal binaural render would be personalised and simulate real loudspeakers in a real room. That's essentially what Apple does with the depth camera. If you then go ahead and listen to a movie from Prime/Netflix/Apple/Plex, it sounds like a real home theater on your Apple TV. Now, there's a different virtual room for your phone, where speakers are virtually placed much more around you and there's less room in the virtual playback. So you mix for real speakers - it's then up to the HRTF/binaural renderer to convert that. I currently play back many Blu-rays via Infuse on my Apple TV on AirPods Pro and it sounds like a real theater. Other solutions like Creative SXFI do the exact same thing. They just play back the speaker mix, and it sounds remarkably like real speakers in a room.
Very nice Justin, you have provided a lot of info I did not know. Having a 9.1.6 Focal system and having been working in 7.1.4 since the start of 2023, you have opened my eyes to a lot.
Love your channel! I've been watching your content since the first video on Atmos 2 years ago. :) Thank you very much for creating such content, this is just pure gold!
Wow, that's some great info, thank you. You just explained a few things I was hearing and wondering about, like the DD+ JOC artefacts when soloing channels from Apple Music playback. Thanks a lot!
**Amendment to the Dolby Atmos Album Assembler information. I was just at NAMM 2023, and Dolby confirmed for me that the Album Assembler is millisecond-based (not frame-based). I apologize for that misunderstanding. More importantly, under the hood, the Assembler is actually able to render sample-accurate files, which means that my concern about final master assets is not valid. Even though it appears millisecond-based, if you select the start and end of a file (let's say the stereo master) and export, it will render a sample-accurate ADM BWF.
Excellent stuff Justin! I would like to add something VERY important about Spatial Coding/Clustering! Using 16 elements in the Renderer will give you "louder" readings on loudness. I had a master file exported from the Album Assembler which was -18.7 LKFS with TP -1.1, so all according to Dolby specs. After importing the same file into the Dolby Renderer (with 16 elements on) I got readings of -18.8 LKFS / 0.1 TP. So after back and forth with Dolby engineers, this is what they said: "For now, when you are measuring loudness in the Renderer, ensure that spatial coding is set to 14 elements. Note that the default value of 14 is what is used for most applications other than Blu-ray discs, so you should keep it at 14 unless you are mixing for TrueHD on Blu-ray. I will discuss with the development and let you know if there is any further action needed." I then set 14 elements in the Renderer and got the same reading as the Assembler! My advice is to keep it on 14 (for now), as this is how your masters will be measured and encoded for STREAMING purposes. 16 is good for TrueHD. Cheers, Nenad from Audio 9.1.4 www.dolbyatmosmusic.com
Hey. Thanks for this. This is an update that was just confirmed by Dolby. It clarifies that 16 elements is the correct number to choose, and that the Assembler will be updated to solve this problem. From Dolby (posted on the FB forum today): Hi all, Thanks for your patience. Allow me to clarify a few things. As has been mentioned, spatial coding emulation should always be on if you want to hear what content will sound like on a consumer device and for accurate loudness measurement. If you want to disable it earlier in the mixing process you totally can, but you should always check your mix with it on at the end, in case there are any tweaks you want to make. As you’ve seen, measuring loudness with a different number of spatial coding elements in the Renderer can produce slightly different results. The differences, if any, are very content-dependent. The reason why the Renderer allows you to select different numbers of elements is because different Dolby codecs (e.g. TrueHD) can be configured for different numbers of elements. 16 is used for DD+ JOC, which is what is currently streamed on Apple Music. When we created the Album Assembler, we used 14 elements as that matched the Renderer’s default at the time and is also a happy medium. The Renderer default (for a fresh install) changed to 16 elements in v5.0. This was done to align with DD+ JOC, but of course, doesn’t align with the Assembler. Not ideal. Given this, we’ll be changing the Assembler’s loudness measurement to 16 elements in the next release (coming very soon!). With this change, all Dolby tools and DD+ JOC encoders will be aligned. So, the recommendation is to set the Renderer to 16 elements. The downside is that measurements taken with Assembler 1.1 and previous versions may not be identical to the new version. We will continue to evolve and improve the loudness workflow over the coming year, including getting all of the DAWs aligned.
@@justingraysound Wow, thanks Justin for this info. This makes the Assembler temporarily unreliable for loudness measurements. I wonder now why they would even give other options to choose, especially 12 elements. What is that kind of coding used for?
@@neddnl I always QC every ADM in the Renderer anyways, so I never personally integrated the assembler into my final QC stage. It will get fixed though, as I know Dolby is all over this. I am also happy to see 16 as the choice over 14 (we need all the spatial resolution we can get!)
@@justingraysound I just hope one of the streaming services will soon take on a higher tier of Atmos streaming and offer something in between what we have now and TrueHD quality. How about a 32- or 64-element cluster?
Wonderful information! Thank you. Is there a link to a document that covers the same information as your slides? That would be helpful for quickly finding topics or feeding into an AI chat for intelligent dialog. Edit: Ahh... answering my own question... the YouTube "Show transcript" function gives me all that I was looking for. 👍
I think this should be considered the bible of Atmos Mastering. (Incredibly helpful for mixing too). Thank you for sharing, this info is going to help get Atmos Music to a better state, and it's nice that the information regarding this tech is becoming more accessible.
Wow such a great presentation. Very helpful and I am only halfway through it. I'm sure it will take several more times watching to wrap my head around some of this material, but I love the depth of field. So interesting in the continuing evolution of audio recording. I'm looking at it through the lens of the creative side since I am fortunate to work with incredibly talented people on the other side of the glass, but I believe it is beneficial to the arrangement of a song to understand and consider the ultimate forms of presentation available to the composer. I see it as a major contributing factor to the further development of the creative process to keep the music interesting. The Atmos mixes I have out there so far were recorded, and assembled, by Jeff Jones or Mark Abrams and mixed by a young engineer named Tom Beuchel, who seems to grasp these concepts with a youthful energy. I absolutely love the results, and I see the vision becoming clearer and better as time goes on. The playlists on your immersivemastering site sound awesome as well, (from where I sit), very interesting and exciting stuff. Keeps the blood flowing!. Thank you for doing this!
Thank you so much. Keep creating! This is all about supporting artistic vision, and music! I love reading this, as it is a reminder for everyone that working in teams is where it is at! The best music in the world has commonly been created with teams. We are in an era where everyone is encouraged to be everything all the time. It is possible, but most often, having a great artist/composer on one side of the glass, and a great team on the other side, helps to facilitate powerful music!
Part of the reason for this existing is licensing income for Dolby. They will push AC-4 as the patents on E-AC-3 expire. Currently there isn't a complete open-source decoder for AC-4. You can put a binaural mix in any format, or derive it in real time. If the beds work well enough, then regular surround should also work, and it does. What is new is the binaural downmixer. I believe it could be made to accept normal 5.1 surround and apply an HRTF to create an illusion of speakers. I do appreciate how Dolby has always stood for good-quality sound, even if their products are usually frugal with the bitrate. We will benefit from the greater dynamic range; few people desire the current mastering loudness. Many surround releases of the past on DVD-A or SACD sound good in downmix. Not so much when they have been upmixed and have the same sound going on multiple channels. -18 dB has been the ReplayGain standard. Apple will effectively rob 2 dB, which is still great for pop music. I think the music should be normalized higher than dialog, similar to music during credits, because a complete movie mix also sits around -22 to -18 dB with all elements together. Dialog is somewhat like what vocals are to the complete mix. Well, the E-AC-3 JOC is rather bitrate-starved for the object channels, which is why it's garbled. It always is on the Internet. 640 kbit/s is enough for 5 full-range channels. They should up it to 1.5 Mbit/s.
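The normalization arithmetic behind the "-18 dB ReplayGain, Apple robs 2 dB" point above can be sketched in a couple of lines. This is a simplified target-minus-measured model; the exact targets per platform are illustrative, not official figures:

```python
def normalization_gain_db(measured_lufs: float, target_lufs: float) -> float:
    """Gain in dB a loudness-normalizing player applies so the track
    plays back at the platform's target (negative = turned down)."""
    return target_lufs - measured_lufs

# A track sitting at the -18 LUFS ballpark mentioned above, on a player
# that normalizes to -16 LUFS (illustrative), gets boosted by 2 dB:
boost = normalization_gain_db(-18.0, -16.0)   # +2.0 dB

# The same track on a quieter -20 LUFS target would be turned down:
cut = normalization_gain_db(-18.0, -20.0)     # -2.0 dB
```

The takeaway matches the comment: the player's target, not the master's absolute level, decides the final playback loudness, so mastering hotter buys nothing after normalization.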
My entire life I've been an information gatherer. Your level of experience and education far exceeds mine, but I can appreciate your passion and desire to produce the best sound technology allows you to. I look forward to listening to the works you've produced, to hear how your approach transforms the music. My home studio is where I do gospel, but I want to assist in bringing this new immersive sound via demo tracks as I listen, learn, then apply. I'm waiting on my M2 Mac Studio with 64 GB and 1 TB of space to begin the next phase of my music production journey. I have been a Pro Tools user since 2008 and have the Dolby Atmos Renderer software working as I'm learning how to place objects with the stems from songs I've produced. I'm a new subscriber and thankful for the artful information about what I'm getting into...
"The moment you solo Something, you'll start to hear the artifacts" ... does excerpt tell you something. Plus, the deliverable media is basically MP3 with video = MP4. The more you dive into it, the crazier it gets ... and I WANT to be a believer. Immersive is my passion in mixing. Ok ... I should try and sit through this.
Hornet SAMP allows you to easily select which Atmos Objects are used for its side-chain. You just click on the object number on the left of the plugin (when it is blue, that object is enabled to be part of the side chain signal).
Thank you for sharing this point. I had intended to clarify this in the video, but I did not articulate that feature with enough detail. It is a fantastic and well-conceived aspect of this plug-in for sure!
As an additional point, I still have many instances where I prefer to use other tools beyond this plugin (which is great at what it does!), which I trigger with a side chain, so I still have the need to create my own custom side chains in those instances. This applies in both mixing and mastering contexts.
Hello Justin, thank you for doing this, it's so helpful. I found that assigning objects as dummies is useful in some cases. Can't we have a blank audio object in an ADM file? Or is it a matter of good manners, to avoid an unnecessary amount of data?
Thank you for doing this in-depth video! This has been the most informative video I’ve come across so far. I have to say I was a little turned off when I saw total time, but damn am I glad I didn’t skip by.
1:52:53 How often would you say QC failures get through the Atmos pipeline? I'm reminded of the Atmos mix of Washington on Your Side on the Hamilton soundtrack. The bass track is woefully out of sync. I've seen a few mentions of it online, and tweeted Dolby about it, but it's still screwed up (at least on AM).
QC failures are usually due to human error, but when it comes to any music production, mistakes can happen. What matters is that we catch them before release. Stereo and Atmos require careful QC to ensure something like this does not happen. Even if the technology caused an issue like what you describe, it is the responsibility of the engineers to catch it and rectify it. I work with major labels daily, and one of the things I admire most about their teams is their QC standards. They are ruthless (in a good way). As an engineer, I don't want QC notes, as they cause delays, so I do my best to catch everything before delivering a master file to an artist, producer, composer, or label.
Justin, should we expect to provide Apple with both a stereo master AND an ADM BWF for one track? I noticed Tunecore accepts both (I think even as one ISRC).
Yes. If you only have Atmos available on Apple, a stereo fold-down will be used for non-Atmos playback. Although this can still sound very good, I highly recommend that stereo deliverables be properly mastered to maximize that listening experience.
@@justingraysynthesis9082 If I want to simply take a dry acoustic piano recording and add some ambience (for example, Logic's Space Designer is now Atmos-capable), what is the basic technique for where to send those ambiences, and as objects or beds? Also, I've heard the term quad ambience and am wondering if you can describe very generally what that is, and how it can be done, in the Atmos context?
One option is to make a quad bus with an immersive reverb on it. Then send the piano signal to that reverb to create a 4-channel ambience. Then output that bus to 4 objects, and place those objects at L, R, Lrs and Rrs. The routing of this would take a while to describe here, but you can find that information quite easily in some of the great video content Dolby has made.
Another option is to make 2 stereo reverbs, and give the one in the rear a longer pre-delay. That is how immersive verbs were created before the current multi-channel verbs existed, and it can yield more control over the ambience, since you can use any reverbs (or delays) you want. Read about the Haas effect to learn more about this concept.
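The pre-delay trick above can be sketched numerically. This is a simplified model that derives the rear pair by delaying a copy of the front reverb pair (in practice you would run a second, differently tuned reverb, as the comment describes); sample rate and delay values are illustrative:

```python
import numpy as np

def rear_pair_from_front(front_l, front_r, extra_predelay_ms=15.0, sr=48000):
    """Derive a rear ambience pair by giving the rear copy of the reverb a
    longer pre-delay (the Haas trick), turning a stereo verb into a quad
    ambience. Output length matches the input (tail is truncated)."""
    d = int(sr * extra_predelay_ms / 1000.0)          # delay in samples
    rear_l = np.concatenate([np.zeros(d), front_l])[:front_l.size]
    rear_r = np.concatenate([np.zeros(d), front_r])[:front_r.size]
    return rear_l, rear_r
```

The four resulting channels would then feed four objects placed at L, R, Lrs and Rrs, as in the first option. Because the rear pair arrives 10-30 ms later, the Haas effect keeps the ambience anchored toward the front while still enveloping the listener.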
Hi Justin - great video. I am about to set up my new system to mix Atmos Music, 7.1.2 and 9.1.4. Would you suggest that I set up for beds plus objects (if so, what goes where?), or all channels as objects? My end goal is binaural on headphones. Thanks...Steve
This is a question that does not have a simple answer. The decisions around what layouts to use depend completely on all the factors discussed in this video. I can say that you need to be prepared and capable to use both beds and objects to be mixing successfully in this format. What is chosen for a given production depends on many more factors.
@@justingraysound I am looking for a starting point, as I am coming up with new ways to up-mix 2-channel to 9.1.4. I already have mixes in discrete. Just looking for the first way to try it in the renderer. Does either beds and objects, or all objects, affect the binaural downmix, which is my first goal? I don't like to reinvent the wheel, especially when someone like you has gone there... You should do a guide - I would gladly buy one as a PDF or PowerPoint. PS: the problem with DAWs and manuals is there are too many choices, which leads to more confusion than clarity. PPS: I was the Dolby consultant on the original Star Wars, CE3K, Altered States...
Although I cannot support upmixing a stereo master to 9.1.4 (which of course can be done with Nugen or Penteo) as an approach to creating Atmos content, I do use this often in creative ways to expand stems in mixing. In this case, I would suggest a 9.1.5 or 9.1.6 object bed for sure.
Atmos. Binaural will be higher. There is no "spec" for binaural, which is one of the issues I have identified. For me, the spec is -0.1 dBTP on the binaural side at a bare minimum, although there is much more to consider than just the LUFS/peak value, of course.
19:27 In my test, Dolby's binaural playback is actually rendering the object itself rather than the 9.1.6 bed. For example: put an object at the exact position of the Rtr speaker, and automate the object to move from the Rtr speaker position to the right top back corner of the room (right above the Rsr speaker). In a 9.1.6 bed you hear no difference at all, because Rtr is the only channel in a 9.1.6 bed in charge of those back-right ceiling positions, so moving the object within that area only gives you the Rtr sound, unchanged. But in Dolby's binaural monitoring, you hear the object moving as per the automation, and it works even during playback of the E-AC-3 JOC file with Dolby Access for headphones on. Basically, the object is indeed more flexible than even a 9.1.6 bed, especially in binaural.
Hey, thanks for the note. This is correct and I think it is also explained at some point later in the video. Any time that we place sounds outside of a discrete channel position, the objects should be used, IMO, for the exact reason you discovered here. The translation of movement and tonal accuracy is much better with objects when placed between speaker locations (on headphones). It is also true for speakers when we study the spatial coding of the DD+ JOC codec. The comparison between the object and bed being 1:1 (at this point in the video) is only when they are in the exact same discrete channel speaker position.
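The "snap to channel" vs "keep the metadata" difference discussed in this exchange can be shown with a toy model. This is not Dolby's actual renderer math — just an illustration, with hypothetical normalized room coordinates for two top-layer speakers:

```python
import math

# Hypothetical top-layer speaker positions in a normalized room (x, y, z):
# Rtf = right top front, Rtr = right top rear.
TOP_SPEAKERS = {"Rtf": (1.0, 1.0, 1.0), "Rtr": (1.0, -1.0, 1.0)}

def bed_render(pos):
    """A bed channel carries no position metadata: anything inside a
    channel's zone collapses to that one speaker (modeled here as a
    nearest-speaker snap)."""
    return min(TOP_SPEAKERS, key=lambda name: math.dist(TOP_SPEAKERS[name], pos))

def object_render(pos):
    """An object keeps its x/y/z metadata, so a binaural renderer can
    place it continuously between speaker locations."""
    return pos

# Automating a source within the rear-right ceiling zone:
a = (1.0, -0.6, 1.0)   # partway toward the corner
b = (1.0, -1.0, 1.0)   # at the corner
bed_render(a) == bed_render(b)        # both snap to "Rtr": no audible movement
object_render(a) == object_render(b)  # positions differ: the object keeps moving
```

This mirrors the observation in the comment: within one channel's zone the bed output never changes, while the object's retained metadata lets the binaural renderer track the automation.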
When you talked about expanding beds beyond 7.1.2, it is kind of possible to do in Nuendo. You can create, let's say, a 7.1.6 group track (group tracks are what aux tracks are called in Nuendo) and turn it into a multi-object. What Nuendo does is take the 7.1.6 group track, remove the LFE channel, and connect all remaining channels to objects. So the renderer will see it as just regular objects with panning metadata for each channel transferred from Nuendo, but inside Nuendo it will behave like a discrete multichannel group track. The only issue with this is that if you use the built-in renderer in Nuendo, it doesn't allow you to set binaural metadata for each channel of the multi-object independently (you can choose one setting for all channels in the multi-object), but when using an external renderer you can set metadata for each channel independently. Here's a video that shows how it works: ua-cam.com/video/rEKJHyQ64eI/v-deo.html I don’t know if you knew about this, and probably in your workflow switching to Nuendo doesn’t make a lot of sense, but I just thought it may be interesting. :)
Thank you. This is what we refer to as an Object Bed. In Pro Tools there are some complicated ways to mimic what you are describing, but the workflows are complex. I do believe we will see Pro Tools (and all DAWs) eventually expand to a 7.1.6 or 9.1.6 object bed approach soon enough. Once that happens (or for someone working in Nuendo like yourself), the most important thing will be how we use them. Since EQ and saturation can be used with linked processing (across multiple tracks), it really comes down to linked compression. The way I get around this in Pro Tools now is with side-chain triggering, but as the video explores, I actually rarely want to equally compress content in different "zones." What we will see is tools like Waves Spherix, where we can control "zone-based" compression behavior, even when on a bed. All of this can be achieved at the moment, but it will get easier. Nuendo is already ahead of Pro Tools in this regard when it comes to workflow, although with proper templates and a good understanding of Pro Tools, it is possible when required. What I do not want to see is everyone suddenly limiting their bed content in Atmos just to reach a certain loudness target, which I hope I have advocated against in this video. Thank you so much for this helpful note!
@@justingraysound Totally agree. When I tried to apply compression and limiting to beds, I noticed the artifacts you talked about in the video. It was even noticeable to some extent in the binaural and stereo fold-downs. The Acon Digital compressor (and even the multiband compressor) has channel-linking controls that allow you to fine-tune how the compressor reacts to signals in different channels. It doesn't allow you to control different zones separately; it's more like a slider between multi-mono mode and an internal mono sidechain that triggers compression (if I remember correctly, it just sums all channels of the bed internally).
@@gkmixing I will look at the Acon tool. Thanks. In addition to these points, the actual mono side-chain itself is something that is overlooked by the current plugins. I don't necessarily want a full mono fold down of the mix to be triggering compression behavior. If anyone has ever made a full mono fold down (with no supervised level control) and listened to it, it can be a mess! That is where designing my own side chain trigger, based on my goal, is my current practice when I need to work with side-chained compression.
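The goal-based side-chain idea described here can be sketched in a few lines. This is an illustrative model, not any plugin's actual implementation; the channel names and weights are hypothetical:

```python
import numpy as np

def custom_sidechain(channels: dict, weights: dict) -> np.ndarray:
    """Build a mono side-chain trigger from selected channels only, each
    with its own weight, instead of a naive full-mix mono fold-down."""
    picked = [gain * channels[name] for name, gain in weights.items() if gain > 0.0]
    return np.sum(picked, axis=0)

# Hypothetical example: let only the front bed pair and a lead-vocal
# object drive the compressor, ignoring loud ceiling content entirely.
channels = {
    "L":   np.array([0.5, 0.2]),
    "R":   np.array([0.4, 0.1]),
    "Ltf": np.array([0.9, 0.9]),   # ceiling content we do NOT want triggering
    "Vox": np.array([0.3, 0.6]),
}
trigger = custom_sidechain(channels, {"L": 1.0, "R": 1.0, "Vox": 0.5})
```

Designing the trigger around what should drive the compression, rather than summing everything, avoids the messy full fold-down behavior the comment warns about.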
@@justingraysound that actually blew my mind. I never thought about creating different sidechains based on the goal. Seems so obvious now, but still genius! I think I will be rewatching this video several times, just to make sure I didn’t miss any other awesome ideas you shared. :)
The ADM can be used to generate a master file in Dolby TrueHD, which is what is used on Blu-ray and in MKV containers. It is not something you can do without professional tools. If you would like to pursue making a Blu-ray or a Dolby TrueHD/MKV container, please feel free to visit my website and reach out. Happy to help/guide you to the right people.
I have not had the chance to use these yet. I will see if I can get a pair to listen to at NAMM this week. I use my Audeze collection every day, and I can say with full confidence that the LCD-5, LCD-4z, and LCD CRBN are at a mastering level for Atmos and Stereo work.
Hey Justin - I know you're super busy so no worries if you don't have time but could you just briefly break down your current chain for listening to immersive music from Apple Music and Tidal and Amazon on your system? Does your apple tv plug directly into your interface (staying digital) or are you using a receiver in-between and then going in analog? Is the process different for each streaming service? Just wondered what was needed at this current time? Can you just play full Atmos mixes off your computer in full 9.2.4? Anyway, I'm sure my question is lacking some detail but I hope it's clear enough to get the idea of what I'm trying to ask. Thanks so much brother 🙌
I have an Apple TV 4K going into the Arvus HD-4D. That then sends the (up to) 16 channels (up to a 9.1.6 speaker layout) via Dante to my Avid MTRX. That is then an Atmos source which I can listen to on my speakers. The same goes for Tidal. They are both DD+ JOC being decoded by the Arvus. I love this, as there is no additional conversion, so it is the most accurate way to listen to consumer Atmos at the moment. Then for headphones, I have my iPhone with AirPods Pro 2 and AirPods Max. I also have an older Android paired with wireless Sony headphones, but that was more for earlier listening, before Tidal offered iOS playback.
@@justingraysound Hey Justin - thanks so much for getting back so quickly!! I really appreciate it. That's a smart way to go about it for sure. Not only does it save the conversion, but having a rack item rather than a bulky receiver must be nice. Grateful to you for sharing all this knowledge. I started my first Atmos mix in headphones last week and I'm absolutely loving it so far!!!
What I mean by that is that it's so overly complicated that only 20% of the people who need to know this stuff actually know it (and that's being generous). As a result, 80% of Dolby Atmos art arrives not as intended. That's a problem. That's Dolby's failure. The system needs to be far, far simpler. Or the hardware medium needs to be standardized under one company's roof.
My company will try to make spatial audio beautiful for everyone by mass-producing enhanced, standardized performance venues that are absurdly high-tech jukeboxes by day and other things by night. One format. One type of venue. One type of room. One mega-intuitive software. Tech side completely separate from art side. Every city in America. Rent the venue room during the day for $50/hr. Everything is always perfect, every time, without fail, save an act of God. It's one room, one system, copy-pasted everywhere. And it's about the brand name. You walk into my theater for an in-house show and you know for a fact you will receive a quality experience. The Dolby name is everywhere on everything, so it means nothing.
This is certainly a consideration, but it is heavily content-dependent (as is all mastering). In all reality, even a file normalized to -18 / -1.0 will play back very nicely against stereo, since LKFS between the two is not truly apples to apples. So, since there really is no loudness war here, targeting -3 dBTP just to have it normalize a bit louder is not worth it if it sacrifices spatiality. I will always choose to allow more dynamics if it helps the sense of space, over reducing peaks in order to hit a number. This is not because I am a purist, but because it sounds better. Transients are essential to successful spatial distribution, so reducing them to hit -3 dBTP can have consequences for the spatial experience. As with all things, it is 100% content- and context-dependent. Hence, don't mix or master by numbers. It just does not work.
@@justingraysound Thanks for the reply and the whole video; I somehow managed to watch it fully. And yes, it seems to be quite a subject all in itself.
I was wondering how often you get idjits trying to argue the contrary point of view. I have had the same discussion with several “top” studio engineers in LA and Miami. Several actually asserted that anything above 0 dBFS is irreparably “lost” regardless of the recording bit depth. Interestingly, a couple are engineers who mix on NS-10s at around 85-90 dB all day. Maybe they're simply hearing the ringing in their ears… who knows?
If I understand correctly, these are two different things. If we cross 0 dBFS in the digital domain, that information is indeed "lost," as it is clipped at the final authoring/conversion stage. The discussion of mixing at 85-90 dB is about the listening volume used in the room, which is not related to the internal digital headroom (dBFS). Does that make sense?
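A minimal illustration of this distinction: a 32-bit float bus carries values above 0 dBFS intact (turning the bus down later recovers them), but the fixed-point conversion at the final stage clips whatever is still over full scale. The sample values are illustrative:

```python
import numpy as np

# Internal 32-bit float mix: samples beyond +/-1.0 (0 dBFS) are still
# intact, so a later gain reduction would fully recover them.
float_mix = np.array([0.5, 1.4, -1.2], dtype=np.float32)

# What survives once the signal is rendered to fixed-point (or hits a
# converter): everything beyond full scale is clipped, and that is lost.
clipped = np.clip(float_mix, -1.0, 1.0)

# Monitoring level in the room (85-90 dB SPL) is a playback choice and
# changes none of these sample values.
```

So both camps are partly right: overs are harmless while they stay in the float domain, and genuinely lost once the file is authored without first pulling the level down.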
The average consumer still doesn't care about Atmos; most people who buy Apple headphones even deactivate the spatial audio option. Those "immersive" mixes don't translate well at all on headphones and soundbars... This will end up like NFTs: those who are desperate to profit from it will be like "muuuuuuuuuh it is the future bro, it is inevitable!!!" and most people will laugh and keep ignoring it until the "future" inevitably dies lol
Time will tell. Regardless of what the future holds for immersive audio, my intent is just to make the best-sounding music I can, every time. I did a different video which discusses this reality. I don’t disagree that stereo is still the primary format. I do disagree that immersive mixes cannot sound amazing in headphones. At the end of the day, I currently work in the format all day, every day, and since I respect each song that someone trusts me to mix and master, I just want to do the absolute best I can to help realize their art. Whether it goes the way of the NFT or not is not really important to me while I am working. No one can predict the future, so I'd rather just focus on making the best music I can in the present.
No one has a crystal ball. That is why you will note that at no point in any video do I ever encourage engineers to make any investments. I did this because I believed in it from the very start. It is about more than income for me, and I would never suggest that someone take a risk they don't fully understand. For the future of music/art, I hope that is not the case, but for now, all I am interested in is making the best art I can with the tools I have available. One day at a time.
The unfortunate thing is it won’t work in the home, and it won’t work on the CD or vinyl formats. It’s great for home theatre cinema, but in the end music will always be folded down to two channels. So it is pretty much a useless format.
Of course, we are all entitled to develop our own relationship with technology. I would encourage you to keep an open mind to object-based audio in the future, as there is a lot that can be done with adaptive audio. It will never replace vinyl, nor was it intended to. It will never replace the CD (although a Blu-ray is essentially the same concept if one likes discs... which I do). Anyway, if you ever get a chance to listen to an "immersive" 2-channel system, you may find something you like. Then of course, when you can add a third or fourth channel wirelessly at a low price point, that is where the real fun begins. The future has a lot in store for this format, and then of course we will see if it has staying power.
Why does Dolby Atmos on Apple sound like garbage compared to a stereo master of the same song elsewhere? I literally can’t find a Dolby master that doesn’t sound like it's down a well, and dead.
Headphone listening can be sensitive for those who have an HRTF (essentially head/ear shape) that is vastly different from the standard. Are you on AirPods Pro? Have you done Apple's custom HRTF? I would encourage you to listen to any examples from my own playlist (find it on my site, or in the video) and let me know if the comparison is not working for you. Also make sure Sound Check is on, as loudness must be matched for reasonable comparisons.
easily the most comprehensive video on Atmos yet posted.
Much better than the BS I have been seeing with all the "so called" gurus!
This is probably the best video ever published on YouTube. Very thorough, educational, focused. Thank you Justin!
Wow. Thank you for the kind words
This is a fantastic explanation. I am deeply impressed how well this Dolby Atmos mastering story has been told and this makes a lot clear.
Probably the best crash course on this format I have ever had, so thankful
A fantastic and thorough education into Atmos. Thanks for taking the time Justin! I'll be returning to this video for sure.
Wow - need to watch this again and again! See you on FB!
Thanks so much for your hard work this is by far the most comprehensive look at mastering in atmos. Worth watching til the end.
Thanks so much for this absolutely stunning bird's-eye view of mastering in immersive audio. Helped me a LOT!
Absolutely love your attention to detail and also the why and when aspect to your discussion. Much appreciated.
finished first half, thanks Justin for all that hard work, it's assembling the puzzle in my head
fantastic material! thank you for putting all those important knowledge together and for all additional personal notes from your experience!
Hi Justin, thank you so much for sharing all this information. This is an all encompassing MasterClass! You clearly addressed so many topics where precise info is hard to find. Mastering has always been a part of making great sounding music, and you have proven it is just as important when working in Atmos/Immersive. Thanks again and greetings from Montreal.
Thank you for watching.
Yes, the best Atmos educational video! Perfect explanation of spatial coding. Many thanks!
Excellent video, far above the level of most university lectures. Well done!
What a treat! I'm just a home theater nerd listening in.
Some thoughts on binaural/headphone spatial audio. I don't understand why there's metadata or special mastering considerations for headphone use. An ideal binaural render would be personalised and simulate real loudspeakers in a real room. That's essentially what Apple does with the depth camera. If you then go ahead and listen to a movie from Prime/Netflix/Apple/Plex, it then sounds like a real home theater on your Apple TV.
Now there's a different virtual room for your phone, where speakers are virtually placed much more around you and there's less room in the virtual playback.
So you mix for real speakers - it's then up to the HRTF/binaural renderer to convert that. I currently play back many Blu-rays via Infuse on my Apple TV on AirPods Pro, and it sounds like a real theater.
Other solutions like Creative SXFI do the exact same thing. They just play back the speaker mix, and it sounds remarkably like real speakers in a room.
Very nice Justin, you have provided a lot of info I did not know. Having a 9.1.6 Focal system and having worked in 7.1.4 since the start of 2023, you have opened my eyes to a lot.
Love your channel! I've been watching your content since the first video on Atmos 2 years ago. :) Thank you very much for creating such content, this is just pure gold!
Thank you!
Wow, that's some great info, thank you. You just explained a few things I was hearing and wondering about, like the DD+ JOC artefacts when soloing channels from Apple Music playback. Thanks a lot!
So nicely presented, Justin!
This is amazing. You have answered so many questions I have been wondering about as I am about to venture into the world of immersive audio 🙏
Thank you so much for all the knowledge and sharing your experiences.... truly appreciate. 👍
**Amendment to Dolby Atmos Assembler information. I was just at NAMM 2023, and Dolby confirmed for me that the Album Assembler is millisecond-based (not frame-based). I apologize for that misunderstanding. More importantly, under the hood, the Assembler is actually able to render sample-accurate files, which means that my concern about final master assets is not valid. Even though it appears millisecond-based, if you select the start and end of a file (let's say the stereo master) and export, it will render a sample-accurate ADM BWF.
Wow this is an epic explanation of atmos thank you
When we thought the techy side of things with Atmos had reached the pinnacle, you prove us wrong. Super interesting, thanks! 👌
Thanks for watching
A very impressive video. Thanks so much.
36:27 Love the Ian Shepherd callout! Dynamic range is still the main reason I prefer listening to Atmos.
Excellent stuff Justin! I would like to add something VERY important about Spatial Coding/Clustering! Using 16 elements in the Renderer will give you "louder" readings on Loudness. I had a master file exported from Album Assembler which was -18.7 LKFS with TP -1.1, so all according to Dolby specs. After importing the same file into Dolby Renderer (with 16 elements on) I got readings of -18.8 LKFS / 0.1 TP. So back and forth with Dolby engineers this is what they said:
"For now, when you are measuring loudness in the Renderer, ensure that spatial coding is set to 14 elements.
Note that the default value of 14 is what is used for most applications other than blu-ray discs, so you should keep it at 14 unless you are mixing for True HD on blu-ray.
I will discuss with the development and let you know if there is any further action needed."
I then set 14 elements in the Renderer and got the same reading as the Assembler!
My advice is to keep it on 14 (for now), as this is how your masters will be measured and encoded for STREAMING purposes. 16 is good for TrueHD.
Cheers,
Nenad from Audio 9.1.4
www.dolbyatmosmusic.com
Hey. Thanks for this. This is an update that was just confirmed by Dolby. It clarifies that 16 elements is the correct number to choose, and that the assembler will be updated to solve this problem.
From Dolby (posted on FB forum today)
Hi all,
Thanks for your patience. Allow me to clarify a few things.
As has been mentioned, spatial coding emulation should always be on if you want to hear what content will sound like on a consumer device and for accurate loudness measurement. If you want to disable it earlier in the mixing process you totally can, but you should always check your mix with it on at the end, in case there are any tweaks you want to make.
As you’ve seen, measuring loudness with a different number of spatial coding elements in the Renderer can produce slightly different results. The differences, if any, are very content-dependent. The reason why the Renderer allows you to select different numbers of elements is because different Dolby codecs (e.g. TrueHD) can be configured for different numbers of elements. 16 is used for DD+JOC, which is what is currently streamed on Apple Music.
When we created the Album Assembler, we used 14 elements as that matched the Renderer’s default at the time and is also a happy medium. The Renderer default (for a fresh install) changed to 16 elements in v5.0. This was done to align with DD+JOC, but of course, doesn’t align with the Assembler. Not ideal. Given this, we’ll be changing the Assembler’s loudness measurement to 16 elements in the next release (coming very soon!). With this change, all Dolby tools and DD+JOC encoders will be aligned. So, the recommendation is to set the Renderer to 16 elements. The downside is that measurements taken with Assembler 1.1 and previous versions may not be identical to the new version. We will continue to evolve and improve the loudness workflow over the coming year, including getting all of the DAWs aligned.
@@justingraysound Wow, thanks Justin for this info. This makes the Assembler temporarily unreliable for loudness measurements. I wonder now why they would even give other options to choose, especially 12 elements? What is that kind of coding used for?
@@neddnl I always QC every ADM in the Renderer anyways, so I never personally integrated the assembler into my final QC stage. It will get fixed though, as I know Dolby is all over this. I am also happy to see 16 as the choice over 14 (we need all the spatial resolution we can get!)
@@justingraysound I just hope one of the streaming services will soon take on a higher tier of Atmos streaming and offer something in between what we have now and TrueHD quality. How about a 64- or 32-element cluster?
Wonderful information! Thank you. Is there a link to a document that covers the same information as your slides? That would be helpful for quickly finding topics or feeding into an AI chat for intelligent dialog. Edit: Ahh... answering my own question... the UA-cam "Show transcript" function gives me all that I was looking for. 👍
I think this should be considered the bible of Atmos Mastering. (Incredibly helpful for mixing too). Thank you for sharing, this info is going to help get Atmos Music to a better state, and it's nice that the information regarding this tech is becoming more accessible.
Thank you!
Wow, such a great presentation. Very helpful and I am only halfway through it. I'm sure it will take several more times watching to wrap my head around some of this material, but I love the depth of field. So interesting in the continuing evolution of audio recording. I'm looking at it through the lens of the creative side since I am fortunate to work with incredibly talented people on the other side of the glass, but I believe it is beneficial to the arrangement of a song to understand and consider the ultimate forms of presentation available to the composer. I see it as a major contributing factor to the further development of the creative process to keep the music interesting. The Atmos mixes I have out there so far were recorded and assembled by Jeff Jones or Mark Abrams and mixed by a young engineer named Tom Beuchel, who seems to grasp these concepts with a youthful energy. I absolutely love the results, and I see the vision becoming clearer and better as time goes on. The playlists on your immersivemastering site sound awesome as well (from where I sit), very interesting and exciting stuff. Keeps the blood flowing! Thank you for doing this!
Thank you so much. Keep creating! This is all about supporting artistic vision, and music! I love reading this, as it is a reminder for everyone that working in teams is where it is at! The best music in the world has commonly been created with teams. We are in an era where everyone is encouraged to be everything all the time. It is possible, but most often, having a great artist/composer on one side of the glass, and a great team on the other side, helps to facilitate powerful music!
Great video I really appreciate the time, effort and information. Thanks so much. New sub here.
"Loudness... Ahhh Loudness..." 😂 Thanks for sharing this with the world Justin!
Thanks for taking the time to do this excellent video, Justin.
Part of the reason for this existing is licensing income for Dolby. They will push AC-4 as patents on E-AC-3 expire. Currently there isn't a complete open-source decoder for AC-4. You can put a binaural mix in any format, or derive it in real time.
If the beds work well enough, then regular surround should also work, and it does. What is new is the binaural downmixer. I believe it could be made to accept normal surround 5.1 and apply a HRTF to create an illusion of speakers.
I do appreciate how Dolby have always stood for good quality sound, even if their products are usually frugal with the bitrate.
We will benefit from the greater dynamic range. Few people desire the current mastering loudness. Many surround releases of the past on DVD-A or SACD sound good in downmix. Not so much when they have been upmixed and have the same sound going on multiple channels. -18 dB has been the ReplayGain standard. Apple will effectively rob 2 dB, which is still great for pop music.
I think the music should be normalized higher than dialog, similar to music during credits, because a complete movie mix also sits around -22 .. -18 dB with all elements together. Dialog is somewhat like what vocals are to the complete mix.
Well, the E-AC-3 JOC is rather bitrate starved for the object channels, which is why it's garbled. It always is on the Internet. 640 kbit is enough for 5 full range channels. They should up it to 1.5 Mbit.
My entire life I've been an information gatherer. Your level of experience and education far exceeds mine, but I can appreciate your passion and desire to produce the best sound technology allows you to. I look forward to listening to the works you've produced, to hear how your approach transforms the music. My home studio is where I do gospel, but I want to assist in bringing this new immersive sound via demo tracks as I listen to learn, then apply. I'm waiting on my M2 Mac Studio with 64 gigs and 1 TB of space to begin the next phase of my music production journey. I have been a Pro Tools user since 2008 and have the Dolby Atmos Renderer software working as I'm learning how to place objects with the stems from songs I've produced. I'm a new subscriber and thankful for the artful information about what I'm getting into.
Invaluable. Thank you!
I love this soooo much nutritional value ❤ thank you
Great video Justin thank you so much ❤
"The moment you solo something, you'll start to hear the artifacts" ... does that tell you something? Plus, the deliverable medium is basically MP3 with video = MP4. The more you dive into it, the crazier it gets ... and I WANT to be a believer. Immersive is my passion in mixing.
Ok ... I should try and sit through this.
Amazing again, watched the whole video, excellent… wish we could meet and have an immersive talk!!!
Thank you. I look forward to that anytime Henry.
Hornet SAMP allows you to easily select which Atmos Objects are used for its side-chain. You just click on the object number on the left of the plugin (when it is blue, that object is enabled to be part of the side chain signal).
Thank you for sharing this point. I had intended on clarifying this in the video, but I did not articulate that feature with enough detail. It is a fantastic and well-conceived aspect of this plug-in for sure!
As an additional point, I still have many instances where I prefer to use other tools beyond this plugin (which is great at what it does!) which I trigger with a side chain, so I still have the need to create my own custom side chains in those instances. This is in both mixing and mastering contexts.
Hello Justin, thank you for doing this, it's so helpful. I found that assigning objects as dummies is sometimes useful. Can't we have a blank audio object in an ADM file? Or is avoiding them just good manners, to avoid an unnecessary amount of data?
You will want to avoid empty objects in a final delivery for sure.
Thank you for doing this in-depth video! This has been the most informative video I’ve come across so far. I have to say I was a little turned off when I saw the total time, but damn am I glad I didn’t skip by.
Thank you.
I’m new to this and can’t find out how to deliver the stereo master together with the Atmos file to distribution. Please do a video on this!
Holy cow, thank you!
Thanks Justin for the video. Do you cover the object-bed technique in one of your videos? Thanks :)
1:52:53 How often would you say QC failures get through the Atmos pipeline? I'm reminded of the Atmos mix of Washington on Your Side on the Hamilton soundtrack. The bass track is woefully out of sync. I've seen a few mentions of it online, and tweeted Dolby about it, but it's still screwed up (at least on AM).
QC failures are usually due to human error, but when it comes to any music production, mistakes can happen. What matters is that we catch them before release. Stereo and Atmos require careful QC to ensure something like this does not happen. Even if the technology caused an issue like what you describe, it is the responsibility of the engineers to catch it and rectify it. I work with major labels daily, and one of the things I admire most about their teams is their QC standards. They are ruthless (in a good way). As an engineer, I don't want QC notes, as they cause delays, so I do my best to catch everything before delivering a master file to an artist, producer, composer, or label.
Justin, should we expect to provide Apple with both a stereo master AND an ADM BWF for one track? I noticed TuneCore accepts both (I think even as one ISRC).
Yes. If you only have Atmos available on Apple, a stereo fold-down will be used for non-Atmos playback. Although this can still sound very good, I highly recommend that stereo deliverables be properly mastered to maximize that listening experience.
@@justingraysynthesis9082 very helpful. Thank you.
@@justingraysynthesis9082 If I simply want to take a dry acoustic piano recording and add some ambience (for example, Logic's Space Designer is now Atmos-capable), what is the basic technique for where to send those ambiences, and as objects or beds? Also, I've heard the term quad ambience and am wondering if you can describe, very generally, what that is and how it can be done in the Atmos context?
One option is to make a quad bus with an immersive reverb on it. Then send the piano signal to that reverb, to create a 4-channel ambience. Then output that bus to 4 objects, and place those objects in L, R, LRS and RRS. The routing of this would take a while to describe here, but you can find that information quite easily in some of the great video content Dolby has made.
Another option is to make 2 stereo reverbs, and delay the one in the rear with a longer pre-delay. That is how immersive verbs were created before the creation of the current multi-channel verbs, and can yield more control over the ambience, since you can use any reverbs (or delays) you want. Read about Haas effects to learn more about this concept.
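The second approach can be sketched in a few lines. This is only an illustration of the Haas idea, not a production tool; the 20 ms pre-delay and -3 dB rear trim are arbitrary example values:

```python
import numpy as np

def quad_ambience(front_verb, sr, rear_predelay_ms=20.0, rear_gain_db=-3.0):
    """Derive a rear ambience pair from a stereo reverb return by delaying
    and attenuating it (Haas-style front/rear decorrelation).

    front_verb: (n, 2) stereo reverb return
    returns:    (n, 4) array ordered [L, R, Lrs, Rrs]
    """
    delay = int(sr * rear_predelay_ms / 1000.0)
    gain = 10 ** (rear_gain_db / 20.0)
    rear = np.zeros_like(front_verb)
    rear[delay:] = front_verb[:len(front_verb) - delay] * gain
    return np.concatenate([front_verb, rear], axis=1)

sr = 48000
verb = np.random.randn(sr, 2) * 0.1   # stand-in for a real reverb return
quad = quad_ambience(verb, sr)        # route these 4 channels to 4 objects
```

In a DAW you would do the same thing with a delay plug-in on the rear reverb send rather than in code, but the principle is identical: the later, quieter rear copy reads as depth rather than as a discrete echo.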
Hi Justin - great video. I am about to set up my new system to mix Atmos Music, 7.1.2 and 9.1.4. Would you suggest that I set for; beds plus objects? (If so who goes where?) Or all channels as objects? My end goal is binaural on headphones. Thanks...Steve
This is a question that does not have a simple answer. The decisions around what layouts to use depend completely on all the factors discussed in this video. I can say that you need to be prepared and capable to use both beds and objects to be mixing successfully in this format. What is chosen for a given production depends on many more factors.
@@justingraysound I am looking for a starting point, as I am coming up with new ways to upmix 2-channel to 9.1.4.
I already have mixes in discrete. Just looking for the first way to try it in the renderer.
Does the choice between beds plus objects, or all objects, affect the binaural downmix, which is my first goal? I don't like to reinvent the wheel, especially when someone like you has gone there...
You should do a guide - I would gladly buy one if it was a PDF or a PowerPoint.
PS the problem with DAWs and manuals is there are too many choices. Leads to more confusion than choice.
PPS I was the Dolby Consultant on the original Star Wars, CE3K, Altered States...
Although I cannot support upmixing a stereo master to 9.1.4 (which of course can be done with Nugen or Penteo) as an approach to creating Atmos content, I do use this often in creative ways to expand stems in mixing. In this case, I would suggest a 9.1.5 or 9.1.6 object bed for sure.
Wow, this was incredible. Would you have any recommendations for Atmos mixing education? Thank you so much for this content.
I would look at the Dolby Educational resources. There is some excellent material available.
Thanks for this.
To clarify: is the -18 export measured on the Atmos levels or the binaural levels?
Atmos. Binaural will be higher. There is no "spec" for binaural, which is one of the issues I have identified. For me, the spec is -0.1 dBTP on the binaural side at a bare minimum, although there is much more to consider than just the LUFS/peak value, of course.
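For anyone curious how a true-peak figure like -0.1 dBTP differs from a plain sample peak: true peak is conventionally estimated by oversampling the signal (ITU-R BS.1770 specifies 4x) before taking the peak, so inter-sample overs are caught. A rough numpy/scipy sketch, not a certified meter:

```python
import numpy as np
from scipy.signal import resample_poly

def true_peak_dbtp(x, oversample=4):
    """Estimate true peak in dBTP by taking the peak of an oversampled
    copy of the signal (BS.1770-style estimate, 4x by default)."""
    up = resample_poly(x, oversample, 1)
    return 20 * np.log10(np.max(np.abs(up)))

sr = 48000
t = np.arange(sr) / sr
x = 10 ** (-1 / 20) * np.sin(2 * np.pi * 997 * t)  # sine at -1 dBFS
print(true_peak_dbtp(x))  # close to -1.0 dBTP
```

On material with sharp transients, the true peak can sit noticeably above the highest sample value, which is exactly why limiting to a sample-peak ceiling alone is not enough for codec delivery.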
any links to that FB group?
19:27 In my test, Dolby's binaural playback is actually rendering the object itself rather than the 9.1.6 bed. For example: put an object at the exact position of the Rtr speaker, and automate the object to move from the Rtr speaker position to the right top back corner of the room (right above the Rsr speaker). In a 9.1.6 bed you hear no difference at all, because Rtr is the only channel in the 9.1.6 bed in charge of those back-right ceiling positions, so moving the object within that area you only get the Rtr sound unchanged. But in Dolby's binaural monitoring, you hear the object moving as per the automation, and it works even during playback of the E-AC-3 JOC file with "Dolby Access for headphones" on. Basically, the object is indeed more flexible than even a 9.1.6 bed, especially in binaural.
Hey, thanks for the note. This is correct and I think it is also explained at some point later in the video. Any time that we place sounds outside of a discrete channel position, the objects should be used, IMO, for the exact reason you discovered here. The translation of movement and tonal accuracy is much better with objects when placed between speaker locations (on headphones). It is also true for speakers when we study the spatial coding of the DD+ JOC codec. The comparison between the object and bed being 1:1 (at this point in the video) is only when they are in the exact same discrete channel speaker position.
When you talked about expanding beds beyond 7.1.2, it is kind of possible to do in Nuendo. You can create, let's say, a 7.1.6 group track (group tracks are what Nuendo calls aux tracks) and turn it into a multiobject. What Nuendo does is take the 7.1.6 group track, remove the LFE channel, and connect all remaining channels to objects. So, the renderer will see it as just regular objects with panning metadata for each channel transferred from Nuendo. But inside Nuendo, it will behave like a discrete multichannel group track. The only issue with this is that if you use the built-in renderer in Nuendo, it doesn't allow you to set binaural metadata for each channel of the multiobject independently (you can choose one setting for all channels in the multiobject), but when using an external renderer you can set metadata for each channel independently.
Here's a video that shows how it works: ua-cam.com/video/rEKJHyQ64eI/v-deo.html
I don’t know if you knew about this, and probably in your workflow switching to Nuendo doesn’t make a lot of sense, but just thought that it may be interesting. :)
Thank you. This is what we refer to as an Object Bed. In Pro Tools there are some complicated ways to mimic what you are describing, but the workflows are complex.
I do believe we will see Pro Tools (and all DAWs) eventually expand to a 7.1.6 or 9.1.6 object-bed approach soon enough. Once that happens (or for someone working in Nuendo like yourself), the most important thing will be how we use them. Since EQ and saturation can be used with linked processing (across multiple tracks), it really comes down to linked compression.
The way I get around this in Pro Tools now is with side-chain triggering, but as the video explores, I actually rarely want to equally compress content in different "zones." What we will see is tools like Waves Spherix, where we can control "zone-based" compression behavior, even when on a bed. All of this can be achieved at the moment, but it will get easier.
Nuendo is already ahead of Pro Tools in this regard when it comes to workflow, although with proper templates, and a good understanding of Pro Tools, it is possible when required.
What I do not want to see is everyone suddenly limiting their bed content in Atmos, just to reach a certain loudness target, which I hope I have advocated for in this video. Thank you so much for this helpful note!
@@justingraysound totally agree. When I tried to apply compression and limiting to beds, I noticed the artifacts you talked about in the video. It was even noticeable to some extent in the binaural and stereo fold-downs. The Acon Digital compressor (and even the multiband compressor) has channel-linking controls that allow you to fine-tune how the compressor reacts to signals in different channels. It doesn't allow you to control different zones separately; it's more like a slider between multi-mono mode and an internal mono sidechain that triggers compression (if I remember correctly, it just sums all channels of the bed internally).
@@gkmixing I will look at the Acon tool. Thanks. In addition to these points, the actual mono side-chain itself is something that is overlooked by the current plugins. I don't necessarily want a full mono fold down of the mix to be triggering compression behavior. If anyone has ever made a full mono fold down (with no supervised level control) and listened to it, it can be a mess! That is where designing my own side chain trigger, based on my goal, is my current practice when I need to work with side-chained compression.
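The custom-sidechain idea can be illustrated in a few lines. This is a toy sketch, not anyone's actual workflow; the channel names and weights are purely hypothetical examples of "fold down only what should trigger the compressor":

```python
import numpy as np

def custom_sidechain(channels, weights):
    """Weighted mono fold-down for sidechain use. Channels absent from
    `weights` (e.g. the heights) simply never trigger the compressor."""
    key = np.zeros_like(next(iter(channels.values())))
    for name, w in weights.items():
        key += w * channels[name]
    return key

def envelope(key, sr, release_ms=100.0):
    """One-pole peak envelope follower over the sidechain key."""
    coef = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(key)
    level = 0.0
    for i, s in enumerate(np.abs(key)):
        level = max(s, level * coef)
        env[i] = level
    return env

sr = 48000
mix = {n: np.random.randn(sr) * 0.1 for n in ["L", "R", "C", "Ltf", "Rtf"]}
# Trigger mostly from the front bed; ignore the height channels entirely.
key = custom_sidechain(mix, {"L": 0.5, "R": 0.5, "C": 1.0})
env = envelope(key, sr)   # feed this to the gain-reduction stage
```

The point of the design is in the weights dictionary: an unweighted sum of all 16 channels is the "unsupervised mono fold-down" mess described above, whereas a goal-based key listens only to the content that should duck the mix.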
@@justingraysound that actually blew my mind. I never thought about creating different sidechains based on the goal. Seems so obvious now, but still genius! I think I will be rewatching this video several times, just to make sure I didn’t miss any other awesome ideas you shared. :)
you are amazing
Hey Justin... After I receive a mastered ADM BWF version of a song, how can I create a physical format? For instance, a DVD?
The ADM can be used to generate a master file in Dolby TrueHD, which is what is used on Blu-ray and in MKV containers. It is not something you can do without professional tools. If you would like to pursue making a Blu-ray or a Dolby TrueHD/MKV container, please feel free to visit my website and reach out. Happy to help/guide you to the right people.
Did you try neumann ndh 30 for mixing and mastering? I would like to hear your honest opinion🙂.
I have not had the chance to use these yet. I will see if I can get a pair to listen to at NAMM this week. I use my Audeze collection every day, and I can say with full confidence that the LCD-5, LCD-4z, and LCD CRBN are at a mastering level for Atmos and Stereo work.
✨✨✨
Bassplayer,......makes sense :)
😃
Hey Justin - I know you're super busy so no worries if you don't have time but could you just briefly break down your current chain for listening to immersive music from Apple Music and Tidal and Amazon on your system? Does your apple tv plug directly into your interface (staying digital) or are you using a receiver in-between and then going in analog? Is the process different for each streaming service? Just wondered what was needed at this current time? Can you just play full Atmos mixes off your computer in full 9.2.4? Anyway, I'm sure my question is lacking some detail but I hope it's clear enough to get the idea of what I'm trying to ask. Thanks so much brother 🙌
I have an Apple TV 4K going into the Arvus HD-4D. That then sends the (up to) 16 channels (up to a 9.1.6 speaker layout) via Dante to my Avid MTRX. That is then an Atmos source which I can listen to on my speakers. The same for Tidal. They are both DD+ JOC being decoded by the Arvus. I love this, as there is no additional conversion, so it is the most accurate way to listen to consumer Atmos at the moment. Then for headphones, I have my iPhone with AirPods Pro 2 and AirPods Max. I also have an older Android paired with wireless Sony headphones, but that is more for earlier listening, from before Tidal offered iOS playback.
@@justingraysound Hey Justin - thanks so much for getting back so quickly!! I really appreciate it. That's a smart way to go about it for sure. Not only does it save the conversion but have a rack item rather than a bulky receiver must be nice. Grateful to you for sharing all this knowledge. I started my first Atmos mix in headphones last week and I'm absolutely loving it so far!!!
In your experience, what’s the best downmix setting for 5.1 and Stereo?
5.1 Direct Render. Stereo Lo/Ro. All with surround and height trims at 0 dB. I hope we see a stereo Direct Render mode at some point in the future.
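For context, a plain Lo/Ro fold-down of a 5.1 source with the common -3 dB trims looks like this. The coefficients shown are typical ITU-style defaults for illustration only; what a given renderer actually applies depends on the downmix metadata and trim settings:

```python
import numpy as np

def loro_downmix(L, R, C, LFE, Ls, Rs, c_gain_db=-3.0, s_gain_db=-3.0):
    """Plain Lo/Ro stereo fold-down of a 5.1 source.
    The LFE channel is commonly discarded in a Lo/Ro downmix."""
    cg = 10 ** (c_gain_db / 20)
    sg = 10 ** (s_gain_db / 20)
    Lo = L + cg * C + sg * Ls
    Ro = R + cg * C + sg * Rs
    return Lo, Ro

# Center-only content folds equally into both sides at -3 dB:
c = np.ones(4)
z = np.zeros(4)
Lo, Ro = loro_downmix(z, z, c, z, z, z)
```

Unlike Lt/Rt, a Lo/Ro fold-down applies no phase matrixing, which is why it is the usual choice when the stereo result will be listened to as plain stereo rather than decoded back to surround.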
This video is a seminar on the failures of Dolby and a call to action for those who can develop a vastly simpler and more useful tool.
What I mean by that is: it’s so overly complicated that only 20% of the people who need to know this stuff actually know this stuff (and that’s being generous). As a result, 80% of Dolby Atmos art arrives not as intended. That’s a problem. That’s Dolby’s failure. The system needs to be far, far simpler. Or the hardware medium needs to be standardized under one company’s roof.
My company will try to make spatial audio beautiful for everyone by mass-producing enhanced, standardized performance venues that are absurdly high-tech jukeboxes by day and other things by night.
One format.
One type of venue.
One type of room.
One mega-intuitive software.
Tech side completely separate from art side.
Every city in America.
Rent venue room during day for $50/hr.
Everything is always perfect every time, without fail, save an act of God. It’s one room, one system, copy-pasted everywhere.
And it’s about Brand name. You walk into my theater for an in-house show and you know for a fact you will receive a quality experience.
The Dolby name is everywhere on everything so it means nothing.
It would be handy if you could upload the slides somewhere.
59:12 So we should actually not exceed -3 dB true peak, so that our tracks normalize to -16 LUFS?
This is certainly a consideration, but it is heavily content-dependent (as is all mastering). In all reality, even a file normalized to -18 / -1.0 will play back very nicely against stereo, since LKFS between the two is not truly apples to apples. So, since there really is no loudness war here, targeting -3 dBTP just to have it normalize a bit louder is not worth it if it is sacrificing spatiality. I will always choose to allow more dynamics if it helps the sense of space, over reducing peaks in order to hit a number. This is not because I am a purist, but because it sounds better. Transients are essential in successful spatial distribution, so reducing them to reach -3 dBTP can have consequences for the spatial experience. As with all things, it is 100% content- and context-dependent. Hence, don't mix or master by numbers. It just does not work.
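The arithmetic behind the -3 dBTP question above is simple enough to sketch. This is a toy model, not Apple's actual normalizer (which behaves differently per device and mode, and in some modes only turns content down); the numbers are illustrative:

```python
def gain_applied_by_normalizer(measured_lufs, target_lufs, true_peak_dbtp,
                               ceiling_dbtp=0.0):
    """Toy model of loudness normalization: the gain needed to reach the
    target, capped by the headroom left before the true peak hits the
    ceiling. (Not any platform's actual algorithm.)"""
    wanted = target_lufs - measured_lufs
    headroom = ceiling_dbtp - true_peak_dbtp
    return min(wanted, headroom)

# A -18 LUFS / -1 dBTP master can only come up 1 dB toward a -16 target:
print(gain_applied_by_normalizer(-18.0, -16.0, -1.0))  # 1.0
# A -18 LUFS / -3 dBTP master has room for the full 2 dB:
print(gain_applied_by_normalizer(-18.0, -16.0, -3.0))  # 2.0
```

This is the trade-off the reply describes: the extra decibel of normalization headroom is bought by limiting transients, which is exactly what can hurt the spatial presentation.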
@@justingraysound thanks for the reply and the whole video, I somehow managed to watch it fully. And yes, it seems to be quite a subject all in itself.
I was wondering how often you get idjits trying to argue the contrary point of view. I have had the same discussion with several “top” studio engineers in LA and Miami. Several actually asserted that anything above 0 dBFS is irreparably “lost” regardless of the recording bit depth. Interestingly, a couple are engineers that mix on NS-10s at around 85-90 dB all day. Maybe it is that they’re simply hearing the ringing in their ears… who knows?
If I understand correctly, these are two different things. If we cross 0 dBFS in the digital domain, that information is indeed "lost", as it is clipped at the final authoring/conversion stage. The discussion of mixing at 85-90 dB relates to the listening volume used in the room, which is not related to the internal digital headroom (dBFS). Does that make sense?
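The distinction can be demonstrated in a few lines: floating-point mixes can exceed full scale internally and come back unharmed, while the loss only happens once the signal is actually clipped at the output or authoring stage. A quick numpy illustration with made-up sample values:

```python
import numpy as np

x = np.array([0.5, 1.3, -1.1, 0.9])     # float mix bus exceeding full scale

# Floating-point headroom: attenuate, then restore; nothing is lost.
restored = (x * 0.5) * 2.0
print(np.allclose(restored, x))          # True

# Fixed-point style clipping at 0 dBFS: the overs are gone for good.
clipped = np.clip(x, -1.0, 1.0)
print(np.allclose(clipped, x))           # False
```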
The average consumer still doesn't care about Atmos; most people who buy Apple headphones even deactivate the spatial audio option. Those "immersive" mixes don't translate well at all on headphones and soundbars... This will end up like NFTs: those who are desperate to profit from it will be like "muuuuuuuuuh it is the future bro, it is inevitable!!!" and most people will laugh and keep ignoring it until the "future" dies, inevitably lol
Time will tell. Regardless of what the future holds for immersive audio, my intent is just to make the best-sounding music I can every time. I did a different video which discusses this reality. I don’t disagree that stereo is still the primary format. I do disagree that immersive mixes cannot sound amazing in headphones. At the end of the day, I currently work in the format all day, every day, and since I respect each song that someone trusts me to mix and master, I just want to do the absolute best I can to help realize their art. Whether it goes the way of the NFT or not is not really important to me while I am working. No one can predict the future, so I’d rather just focus on making the best music I can in the present.
There will be a lot of used speakers on eBay within 2 years.
No one has a crystal ball. That is why you will note that at no point in any video do I ever encourage engineers to make any investments. I did this because I believed in it from the very start. It is about more than income for me, and I would never suggest that someone take a risk they don't fully understand. For the future of music/art, I hope that is not the case, but for now, all I am interested in is making the best art I can with the tools I have available. One day at a time.
The unfortunate thing is it won’t work in the home, and it won’t work on CD or vinyl. It’s great for home theatre cinema, but in the end music will always be folded down to two channels. So it is a pretty much useless format.
Of course, we are all entitled to develop our own relationship with technology. I would encourage you to keep an open mind to object-based audio in the future, as there is a lot that can be done with adaptive audio. It will never replace vinyl, nor was it intended to. It will never replace the CD (although a Blu-Ray is essentially the same concept if one likes discs...which I do). Anyways, if you ever get a chance to listen to an "immersive" 2-channel system, you may find something you like. Then of course, when you can add a third or 4th channel wirelessly, at a low-cost point, that is where the real fun begins. The future has a lot in store for this format, and then of course we will see if it has staying power.
Why does Dolby Atmos on Apple sound like garbage compared to a stereo master of the same song elsewhere? I literally can’t find a Dolby master that doesn’t sound like it’s down in a well, and dead.
Headphone listening can be sensitive for those who have an HRTF (essentially head/ear shape) that is vastly different from the standard. Are you on AirPods Pro? Have you done Apple's custom HRTF? I would encourage you to listen to any examples from my own playlist (find it on my site, or in the video) and let me know if the comparison is not working for you. Also make sure Sound Check is on, as loudness must be matched for reasonable comparisons.
Dolby Atmos is a dead parrot!
This presentation does not flag-wave for Atmos. It just presents the technical facts about the format.