Great video. I've been aware of this concept for several years but I have never given it the proper attention. It just clicked for me yesterday and I can't believe how effective it is.
I thought it was an alternative to Atmos but boy was I wrong. It's basically like stereo recording for 3D.
I now feel like it should be a more widely studied and utilized technique. If you mix in surround sound but you don't use ambisonics, it's like mixing in stereo but only using mono recordings spread out through the stereo field. It's still pretty cool but you're missing the best feature of the technology.
Thank you for this well-explained video. I struggled with understanding spherical harmonics; the comparison with the Fourier transform was a nice touch.
Clear and direct explanation, thanks!
Thank you!
On one hand this video wants to address a large audience of non-experts; on the other hand it moves very quickly through terms that are very difficult. In that sense the video is confusing. "Impulse response" I know from reverbs: they are very short and capture the characteristics of a room and the reflections of sound. But the ambisonic format is for music, not only for claps. If I look at a first-order ambisonic recording, I get 4 wave files of the sound, not impulse responses. Are there two components, a) impulse responses and b) sounds? Please describe exactly what is in the data, how you get it, how you use it, and what it is for, and don't mix everything into a pseudo-explanation. If you hadn't heard anything about ambisonics before, you wouldn't understand anything more from this video either. I've got the plugins from Nx (Waves), Facebook (compiled myself), and dearVR, and I can play them in Unity and compose with them in Cubase. I know that I need a special recording microphone from Zoom (which I will buy soon). I know the mathematics of Fourier analysis very well and understand that one could try to find something similar for three-dimensional space. I know how to use impulse responses for reverbs and how to calculate them with plugins in Sound Forge (from Sony). But your video didn't help me understand anything about ambisonics. It was completely confusing: an overview for an imaginary audience that doesn't exist, junk science. What is the role of the impulse responses, and how is the sound decomposed? Are the impulse responses inside the B-format, or are they separate? Is the B-format those responses, or an abstraction of those responses?
There are also convolution reverbs for ambisonic files, and I had already downloaded first-order impulse responses for them. iZotope (resp. Exponential Audio) has 3D reverbs. But I thought that those impulse responses were a special case and not the essence of ambisonics. On the other hand, I've already complained that the standard ambisonic plugins I mentioned in the previous post didn't offer the possibility to upload custom impulse responses for the characteristics of custom rooms, but only presented a predefined list of room characteristics.
@@WerIstWieJesus Hello! Thank you for your comment. The reason for the sudden level of technicality is that we are not a music production company; rather, we develop room acoustic simulation software. Our audience is mainly acoustic engineers (I'm one myself), so our content is made mostly with that demographic in mind. Still, I will try to answer your questions as best as possible.
It is true that you can record, produce, and play music in ambisonic format. But in the context of room acoustic simulations, you only need an ambisonic impulse response, because it allows you to simulate any audio in that room (a real room if measured with e.g. a Zoom microphone, or a virtual room if computed inside room acoustic simulation software), and also to calculate parameters such as the ratio of lateral reflections in a concert hall. This is why we mainly address impulse responses here: they are the main application in our software specifically.
An ambisonic impulse response is simply a collection of time-synchronized impulse responses (4 in the case of first order), each of which describes the sound field in the room in a different direction when in B-format. So, a B-format ambisonic impulse response is a 3D representation of the acoustics of a room. These impulse responses can then be decoded into a set of impulse responses associated with the specific speaker/headphone setup you will use for playback.
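To make the "different directions" of the four B-format channels concrete, here is a minimal sketch (mine, not from the video or the comment) of the classic first-order encoding gains for a single source direction. It assumes the traditional W/X/Y/Z channel ordering with the Furse-Malham 1/√2 scaling on W; other conventions (e.g. SN3D with W = 1) differ only in normalisation.

```python
import numpy as np

def foa_encode_gains(azimuth_deg, elevation_deg):
    """First-order ambisonic (B-format) gains for a source direction.

    Channel ordering W, X, Y, Z; W uses the classic Furse-Malham
    1/sqrt(2) scaling (an assumption -- SN3D uses W = 1 instead).
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = 1.0 / np.sqrt(2.0)       # omnidirectional component
    x = np.cos(az) * np.cos(el)  # front-back figure-of-eight
    y = np.sin(az) * np.cos(el)  # left-right figure-of-eight
    z = np.sin(el)               # up-down figure-of-eight
    return np.array([w, x, y, z])

# A source directly to the left (azimuth 90 deg, elevation 0 deg)
# lands almost entirely in the Y (left-right) channel.
gains = foa_encode_gains(90, 0)
print(gains)
```

Each of the four impulse responses in a first-order ambisonic IR corresponds to one of these directional pickup patterns, which is why they must stay time-synchronized.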
You then convolve any anechoic recording with the decoded impulse responses, and you obtain a 3D audio simulation of that anechoic recording in that room. An advantage over direct binaural auralisation is that ambisonics allows you to freely choose the speaker setup and the listening direction.
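The convolution step described above can be sketched in a few lines. This is a minimal illustration, not ODEON's actual implementation: it assumes a mono anechoic signal and a 4-channel first-order ambisonic IR stored as a NumPy array, and the function name `auralise` is hypothetical.

```python
import numpy as np

def auralise(anechoic, amb_ir):
    """Convolve a mono anechoic recording (shape [n]) with a
    multichannel ambisonic impulse response (shape [m, channels],
    4 channels for first order).

    Returns the simulated signal in that room, shape [n + m - 1, channels].
    """
    return np.stack(
        [np.convolve(anechoic, amb_ir[:, ch]) for ch in range(amb_ir.shape[1])],
        axis=1,
    )

# Sanity check: convolving a unit impulse with the IR returns the
# IR itself, padded with zeros.
rng = np.random.default_rng(0)
ir = rng.standard_normal((256, 4))  # a made-up 4-channel "room" IR
dry = np.zeros(64)
dry[0] = 1.0                        # unit impulse as the "recording"
wet = auralise(dry, ir)             # shape (64 + 256 - 1, 4)
```

In practice you would use FFT-based convolution for realistic IR lengths, but the linear operation is the same: each channel of the output carries the room's response in one directional component.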
Keep in mind that this video is not intended as a tutorial for manually processing ambisonic audio or for music production. Rather, it only gives the relevant theoretical background for the process of producing ambisonic auralisations in our software (ODEON Room Acoustics Software).
We have a more practical video on acoustic measurements with an ambisonic microphone (ua-cam.com/video/KKLr_7rq558/v-deo.html), where we are actually using a Zoom microphone, but again you will notice it is more focused on the analysis of rooms than on music production. Given your interests, we recommend watching content from a channel more focused on music production and VR.
@@ODEONRoomAcousticsSoftware Thank you for your kind reply to my strong words. I see that I was mistaken: I thought your video was intended as a popular explanation of ambisonics, but in fact it explains the room acoustics that are necessary for a 3D ambisonic recording. I intend to buy a Zoom H3-VR. Perhaps my understanding of an ambisonic recording was too simplistic. My version of Cubase allows working with ambisonics, and I have even bought a Bluetooth transmitter for my headphones to examine head movement in a 3D environment. In fact, I believe that modern music mastering will have to pay attention to the 3D position of the instruments in any case (stereo or first-, second-, or third-order ambisonics). In Cubase I am very interested in impulse responses for instruments and have also done research on this. I think the impulse response is important not only for rooms but also for the body of an instrument (like, e.g., the body of a Stradivari or a piano). Each instrument should be recognizable by the individual impulse response of its body and by its position in the room. If this is missing, the music can become very boring: a "sound" and not an orchestral piece of music. So I was already searching for ambisonic IRs. I even wrote to the big players (dearVR, Facebook, Waves Nx, B360) that I was somewhat disappointed that I could not load an individual IR into their commercial plugins. I also miss plugins for 3D reverb. For the moment I have to decode ambisonics in Cubase and add 4 reverbs, one on every channel. But I can open everything.
So your explanations are of great value to me, and now that I see that I still understand nearly nothing about it, I will follow the other explanatory videos you proposed more actively. My mathematical background should be enough to understand them. But I need clear definitions.