I have tried to understand aliasing and anti-aliasing for years and I think I finally get it now thanks to this video. You are the goat.
I found this kind of aliasing kind of pleasing when playing computer games on sound cards from the 1990s. Then XP came about and did mixing in software with reasonable quality, and suddenly all 16-22k samples sounded distinctly muffled. If I don't have a mental picture of the correct sound of a piano, or the samples are synthetic or a mix of different sounds, the extra fuzziness seems to belong in the signal.
I agree. I actually mix a little resampling into a lot of my mixes. I find it's also a great way to spruce up lossy sound files when mixed in at 5-20 percent. Especially on drums!
Really good video. You can gain a lot of understanding about aliasing just from reading about the principle, but it helps to actually see it in action.
This video is a great supplement to my EE courses! The visual demo with Pro Tools was very helpful for me since I’m a musician.
Really nice explanation, I've had issues with this concept for years!
Superb. Makes sense of why we use 48k in Mixbus or Ardour: Nyquist at 24kHz, then limit to 20Hz and 20kHz. Many thanks.
Great explanation about phaser vs flanger
Excellent explanation. Thank you.
Why are samples-per-second measured in Hz? Hz are cycles-per-second.
Does this relate to how, in digital systems, you need at least two samples per cycle (one positive, one negative) for a frequency to be reproduced? And is that why the sample rate needs to be twice the highest frequency you wish to record? Because if the system can take one sample per cycle at 48kHz, its sample rate is fast enough to take 2 samples per cycle at 24kHz?
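That's the right intuition, and it's just arithmetic. A tiny Python sketch (the helper name is made up for illustration):

```python
# Sketch: why sample rate (samples per second) must be at least twice
# the signal frequency (cycles per second). Pure Python, no libraries.

def samples_per_cycle(sample_rate_hz, signal_freq_hz):
    """How many samples land inside one cycle of the signal."""
    return sample_rate_hz / signal_freq_hz

# At 48 kHz we get exactly 2 samples per cycle of a 24 kHz tone --
# the bare minimum the Nyquist criterion requires.
print(samples_per_cycle(48_000, 24_000))  # → 2.0
print(samples_per_cycle(48_000, 12_000))  # → 4.0
```

Anything above half the sample rate gets fewer than 2 samples per cycle, and that's when aliasing starts.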
Is there a way to prevent aliasing in plugins that have no oversampling?
Lower input volumes
Great explanation and demonstration. Thank you! 🙏
Thank you. I have watched tons of Nyquist videos back and forth multiple times already, and this last video I watched just made it much clearer. I was so confused about what aliasing does. If I'm not wrong, it basically bounces frequencies above the Nyquist limit back into the frequency range below Nyquist.
This aliasing creates unwanted harmonics which are subtle most of the time but can slowly pile up and accumulate into a much more noticeable foreign sound in the mix.
Is it a big deal that all Waves plugins do not oversample even though it is 2022 now?
I do not hear the difference, although it is there when I invert the phase. Is it just our fixation and OCD most of the time?
New video from Eric Tarr.
wow! That's so weird, need some time to wrap my head around that...
Does anyone know if there's any loss in quality when 44100 Hz FLAC is converted to 48000 Hz ALAC?
I've been making this conversion with MediaHuman Audio Converter for hundreds of files lately, but never noticed the difference in sampling rate between what I put in and what I got out.
Yes, there will be some aliasing issues but you probably won't notice it.
Run a file through a spectrogram and you will probably be able to see the aliasing visually.
@@--..__ Aliasing? If you're increasing your sampling rate, you should be able to perfectly recreate the waveform and then some, no? Perhaps it will need to interpolate, causing some (probably not noticeable) accuracy smudging, but I wouldn't imagine it would *alias*.
Why, in the case of the piano, didn't we get a mirrored image of the frequencies above the Nyquist limit?
I think it gets mixed with the real high frequencies, and there's not much of it. It would be more noticeable if the piano played a few octaves up.
Thank you sir!!! This was wonderful...
Is this bad for our mixing?
I think it is, but how to avoid it?
You are a really good teacher :)
that was so good. thank you for this
So mixing at higher sample rates should actually be beneficial against aliasing, should there be sounds with frequency content that goes beyond 1/2 the sample rate. And aliasing can occur when audio that is made at 96kHz or even 192kHz is resampled to 44.1kHz if no high-cut filtering is done: all the frequency content beyond 22.05kHz will just be reproduced as aliasing. I once exported audio at 96kHz and then attempted to mix it at 44.1kHz. Some of the samples changed in pitch. They had frequency content beyond 22.05kHz and I hadn't cut it. I now know that this was aliasing that caused the perceived change in pitch.
But as you mentioned, DAWs filter these out for us automatically, so aliasing isn't much of an issue in this regard.
Depends on the converter. Ableton 10's converter is clean, but Reaper and FL create a lot of aliasing when you downsample.
Why can't you see the 8 kHz signal?
Why not include the actual audio?
Is this how audio can end up with something that sounds like chirping heard from underwater?
Maybe. But I think it's more likely that what you're referring to is pre-ringing caused by sharp in-band frequency cut-offs, like the ones that happen very audibly in low-bit-rate mp3 conversions.
I think I've heard it only once, when I used lots of saturation and an FM synth that has no oversampling (I actually liked that sound).
Usually the difference between higher and lower sample rates, or oversampling, is a more detailed high end. Aliasing blurs the sound a little.
If modern equipment can handle Nyquist rates in the human spectrum of hearing with no problem, then why would one even bother with a sampling rate that can introduce aliasing like he does? Is the point that encoding/bit depth and other errors might introduce too high of frequencies and so high-pass filters are often (perhaps always?) needed to avoid aliasing, even though it's occurring "behind the scenes?"
Aliasing happens no matter what sampling frequency you use. For instance, suppose a microphone picks up sound at 30kHz. Humans can't hear that sound, but if you sampled at 44.1kHz, it will look like you have sound at 14.1kHz, a frequency that humans can hear. That's why you have to pass the microphone signal through a low pass filter first, before sampling. I suppose the same principles would apply in digital sound production. If you're using some instrument with sound sampled at 96kHz, you should pass it through a filter before downsampling to 44.1kHz.
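The "filter first, then downsample" order can be sketched with a deliberately crude example. The 2-tap average below is a toy low-pass, nothing like the steep filters real resamplers use, but the order of operations is the point:

```python
# Crude sketch of "low-pass filter, then downsample" (e.g. 96 kHz -> 48 kHz).
# Averaging neighbouring pairs is a toy 2-tap low-pass; it cancels the
# fastest possible oscillation before we throw samples away.

def downsample_by_2(samples):
    """Average neighbouring pairs (toy low-pass), then keep one of each two."""
    pairs = zip(samples[0::2], samples[1::2])
    return [(a + b) / 2 for a, b in pairs]

# A tone at the old Nyquist limit alternates sample-to-sample.
# Without filtering it would alias; the averaging cancels it instead:
x = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
print(downsample_by_2(x))  # → [0.0, 0.0, 0.0, 0.0]
```

Skipping the filter and just taking every other sample (`samples[0::2]`) would keep that high-frequency energy and fold it down into the audible band, which is the aliasing described above.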
It's the bitcrusher effect.
That was great. Thank you.
In the mix it makes no sense. It also makes no sense to downsample a mix that hard.
ua-cam.com/video/Bx9hW07BLIk/v-deo.html Here's a fun example of extreme aliasing in modulation resulting in pretty much noise, from FM synthesis when the modulator has a harmonic 14 octaves higher than the fundamental (the operator is in 2x, so 7 octaves -> 14 octaves). Since it's FM (or phase modulation), the output is not a constant signal, it's noise instead. Kinda cool.
you're a fuckin goat for this, thank you
Pretty cool
thank you!
You are welcome, Mandy! I appreciate your feedback.
yes and no
Nice, got the 666th like \m/