Why Higher Bit Depth and Sample Rates Matter in Music Production

  • Published Jun 7, 2023
  • What is the benefit of using higher bit depth and sample rate in a DAW session for recording music? Should you use 16-bit, 24-bit, or 32-bit floating point? Is it worth recording music in 96kHz or 192kHz, or is 48kHz sample rate good enough?
    Watch Part 1 here: • Debunking the Digital ...
    Watch this video to learn more about sample rates in music production (Dan Worrall and Fabfilter): • Samplerates: the highe...
    Dan Worrall YouTube Channel: / @danworrall
    Fabfilter YouTube Channel: / @fabfilter
    This video includes excerpts from "Digital Show & Tell", a video that was originally created by Christopher "Monty" Montgomery and xiph.org. The video has been adapted to make the concepts more accessible to viewers by providing context and commentary throughout the video.
    "Digital Show & Tell" is distributed under a Creative Commons Attribution-ShareAlike (BY-SA) license. Learn more here: creativecommons.org/licenses/...
    Watch the full video here: • Digital Show & Tell ("...
    Original Video: xiph.org/video/vid2.shtml
    Learn More: people.xiph.org/~xiphmont/dem...
    Book a one to one call:
    audiouniversityonline.com/one...
    Website: audiouniversityonline.com/
    Facebook: / audiouniversityonline
    Twitter: / audiouniversity
    Instagram: / audiouniversity
    Patreon: / audiouniversity
    #AudioUniversity
    Disclaimer: This description contains affiliate links, which means that if you click them, I will receive a small commission at no cost to you.

COMMENTS • 400

  • @Texasbluesalley
    @Texasbluesalley 11 months ago +152

    Fantastic breakdown. I went through a 192kHz phase about 15 years ago and, suffice it to say... lack of hard drive space and the computing power of the day cured me of that pretty quickly. 🤣

    • @snowwsquire
      @snowwsquire 11 months ago

      @MF Nickster I don't know about other DAWs, but Reaper lets you move audio clips on a sub-sample level.

    • @Emily_M81
      @Emily_M81 11 months ago

      Hah. I have an old MOTU UltraLite mk3 kicking around that advertised 192kHz, and back when it was the new hotness I was just like O.O at ever wanting to record at that rate.

    • @MrJamesGeary
      @MrJamesGeary 10 months ago +3

      Holy moly, you're brave. About a year ago I started mostly working at 88.2/96. I've been blown away by how far computing power has come - performance is super smooth these days and the results are quality - but man, a decade+ later I still feel you on that storage battle. Can't even imagine what you went through. Whenever I break down and print my hardware inserts in the session, it's basically a "lol -12GB" button for my C drive.

    • @andyboxish4436
      @andyboxish4436 6 months ago +2

      96kHz is all you need, even if you are a believer in this stuff. Not a fan of 192.

    • @popsarocker
      @popsarocker 4 months ago +1

      It's interesting. I shot a concert recently with ≈10 cameras and multicammed it in the NLE. We used Sony Venice across the board and shot X-OCN ST. That's about 5GB per minute. We recorded ≈100 channels of 24/48 on the audio side. If we had recorded at 192kHz, we would still have wound up with around 45% less data than the video. From a purely data storage and bitrate standpoint, PCM audio is not really all that terrible. Sony X-OCN is pretty efficient, actually. ProRes is a real hog, on the other hand, but notably, unlike PCM, neither X-OCN nor ProRes is "uncompressed". Undoubtedly the resource hog for audio is plugins, because unlike video, where VFX and color correction tend to be tertiary operations (i.e. entirely non-real-time and done by someone else), audio engineers are typically editing and finishing within the same software environment, in real time.

  • @Sycraft
    @Sycraft 10 months ago +10

    Something to add about bit depth and floating point for audio processing is the phenomenon of rounding/truncation and accumulated error. If you are processing with 16- or 24-bit integers, then every time you do a math operation you are truncating to that length. That doesn't sound bad on the surface - particularly for 24-bit, what would the data below -144dB matter? The problem is that the error in the result accumulates with repeated operations. So while just the least significant bit might be wrong at first, the error can creep up and up as more and more math is done, and could potentially become audible. It is a kind of quantization error.
    The solution is to use floating-point math, since it maintains a fixed precision over a large range. Errors are thus much slower to accumulate and the results more accurate. So it ends up being important not only for things like avoiding clipping, but also for avoiding potentially audible quantization errors when lots of processing is happening. In theory, with enough operations, you could still run into quantization error with 32-bit floating point, since it only has 24 bits of precision, though I'm not aware of it being an issue in practice. Plenty of modern DAWs and plugins like to operate in 64-bit floating point, which has such a ridiculous amount of precision (from an audio standpoint) that no error would ever wind up in the final product, even at 24-bit.
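
To make the accumulation argument above concrete, here is a minimal numpy sketch (the ±0.5 dB gain trims and the 1000-step count are arbitrary choices for illustration): one chain re-quantizes to a 24-bit grid after every operation, the other keeps full float precision.

```python
import numpy as np

fs = 48_000
x = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)  # 1 kHz tone, 1 second

def to_int24(v):
    """Round onto a 24-bit integer grid, i.e. truncate precision to 24 bits."""
    return np.clip(np.round(v * 2**23), -2**23, 2**23 - 1) / 2**23

gain_up, gain_down = 10 ** (0.5 / 20), 10 ** (-0.5 / 20)  # +0.5 dB, -0.5 dB

y_int, y_float = x.copy(), x.copy()
for _ in range(1000):  # 1000 back-to-back gain operations
    y_int = to_int24(to_int24(y_int * gain_up) * gain_down)  # quantize each step
    y_float = y_float * gain_up * gain_down                  # keep full precision

for name, y in (("24-bit each step", y_int), ("float64 chain   ", y_float)):
    err = np.max(np.abs(y - x))
    print(f"{name}: peak error {20 * np.log10(err):.1f} dBFS")
```

The integer chain's error grows with every pass, while the float chain stays down near the bottom of the 64-bit mantissa - which is the point of mixing in floating point.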

  • @colin.chaffers
    @colin.chaffers 11 months ago +121

    Love it. I worked for Sony Broadcast, including on professional audio products; my team worked in Abbey Road and the like. This takes me back to the days when the analogue and digital battle lines were being drawn. I've always maintained that digital offers a better, sustainable quality, for the reasons you outline. Keep it up.

    • @AudioUniversity
      @AudioUniversity  11 months ago +5

      Thanks, Colin! Sounds like you’ve worked on some awesome projects!

    • @frankfarago2825
      @frankfarago2825 11 months ago +4

      There is no "battle" going on, dude. BTW, did you work on the analog or digital side of (Abbey) Road?

    • @colin.chaffers
      @colin.chaffers 11 months ago +12

      @@frankfarago2825 I said battle lines; I did not say there was a battle. I worked for Sony Broadcast at the time when digital recording equipment like the PCM-3324 was being introduced, and I remember conversations with engineers who preferred analogue recorders because they could get a better sound by altering things like bias levels - which to me always felt like they were distorting the original recordings. I ran a team of engineers who installed, maintained and supported these products (sometimes during recording sessions, sometimes overnight) at a time when the industry was starting to embrace the technology.

    • @InsideOfMyOwnMind
      @InsideOfMyOwnMind 11 months ago +2

      I remember the time when digital audio wasn't quite in the hands of the consumer yet, and a guy whose name escapes me, from Sony's "Digital Audio Division" as he put it, brought a digital reel-to-reel deck into the studio of a radio station in San Francisco and played the theme to Star Trek: The Motion Picture / The Wrath of Khan. It was awesome, but the station was not properly set up for it and there was heavy audible clipping. They stopped and came back to it later, and while the clipping was gone, the solution just sucked all the life out of the recording. I wish I remembered the guy's name. I think it was Richard something.

    • @christopherdunn317
      @christopherdunn317 11 months ago

      But how many albums out there have been recorded to tape? Almost all of them! How many digital albums have I heard? Squat - and if I have, it was early ADAT!

  • @lohikarhu734
    @lohikarhu734 10 months ago +23

    Yep, I think it's quite analogous to photography, where 8-bit colour channels work "pretty well" for printed output but really fall apart for original scene capture, and just get worse when any kind of DSP is applied to the "signal": 'banding' shows up in stretched tones, and softened edges can get banding or artifacts introduced during processing... Great discussion.

  • @oh515
    @oh515 11 months ago +61

    I find a higher sample rate most useful when stretching audio tracks is necessary. Especially on drums to avoid “stretch marks.” But it's enough to bounce up from 48 (my standard) to e.g. 96 and bounce back when done.

    • @simongunkel7457
      @simongunkel7457 11 months ago +4

      Here's the simpler way to get the same effect. Check the settings for your time stretch algorithm. The default is usually the highest pitch accuracy. What increasing the project sample rate does is decrease pitch accuracy in favor of time accuracy. The alternative way of doing this is to set the time stretching algorithm to a lower pitch accuracy.

    • @PippPriss
      @PippPriss 11 months ago +2

      ​@@simongunkel7457 Are you using REAPER? There is this setting to preserve lowest formants - is that what you mean?

    • @simongunkel7457
      @simongunkel7457 11 months ago +3

      @@PippPriss Sorry for the late reply, I didn't get a notification for some reason. REAPER has multiple time-stretch engines, and for this particular application switching from Elastique Pro to Elastique Efficient is the way to go. You can more directly change the window size on "simple windowed", though Reaper actually goes with a time-based setting rather than a number of samples. Also note that stretch markers can be set to pitch-optimized, transient-optimized or balanced.

    • @customjohnny
      @customjohnny 11 months ago +5

      Haha, “Stretch Marks” never heard that before. I’m going to say that instead of ‘artifacting’ from now on

    • @DrBuffaloBalls
      @DrBuffaloBalls 2 months ago +1

      How exactly does upsampling from 48k make the stretching more transparent? Since it's not adding any new data, I'd imagine it would do nothing.

  • @simonmedia7
    @simonmedia7 10 months ago +10

    I always thought about the sample rate problem like this: if you want to slow down a piece of audio whose highest frequencies are at 20kHz, you lose information in proportion to the amount you slow it down. So you need the extra, supposedly inaudible, information beyond 20kHz for the slowed-down audio to still fill the upper end of the audible spectrum. That is something almost every producer will have experienced.

    • @albusking2966
      @albusking2966 5 months ago +2

      Yeah, if it's essential for your workflow to slow some audio down, then by all means. Otherwise I'm happy with 48 or 44.1 because it sounds good. I do like to export any files before mastering as 32-bit files though - saves you issues from downsampling (as most DAWs run a 32-bit master fader now).
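
A quick sketch of the "tape-style" slowdown discussed in this thread: halving playback speed maps every frequency to half its original value, so the top octave of a 48 kHz capture empties out, while ultrasonic content from a 96 kHz capture lands in the audible band. This is just the arithmetic, no DSP involved:

```python
# Tape-style 2x slowdown maps every frequency f to f/2.
for fs in (48_000, 96_000):
    captured_top = fs / 2              # Nyquist: highest frequency in the file
    after_slowdown = captured_top / 2  # where that content ends up at half speed
    print(f"{fs} Hz capture: content reaches {after_slowdown / 1000:.0f} kHz "
          f"after 2x slowdown")
# 48 kHz capture -> 12 kHz: the 12-24 kHz octave is left empty.
# 96 kHz capture -> 24 kHz: the audible band stays full.
```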

  • @RealHomeRecording
    @RealHomeRecording 11 months ago +185

    I like high sample rates and I cannot lie. You other engineers cannot deny....

    • @ericwarrington6650
      @ericwarrington6650 11 months ago +7

      Lol...itty bitty waist....round thing...face..😂🤘🎶

    • @Mix3dbyMark
      @Mix3dbyMark 11 months ago +34

      When the engineer walks in with some RME and a Large Nyquist in your face, you get sprung!

    • @maxuno8524
      @maxuno8524 11 months ago +1

      ​@@Mix3dbyMark😂😂😂

    • @DeltaWhiskeyBravo13579
      @DeltaWhiskeyBravo13579 11 months ago +4

      FWIW I'm running 32 bit float and 48k on my DAW. That's my max bit depth with Studio One 6.1 Artist, it goes to 64 bit float in Pro.
      As for sample rates, it looks like S1 goes up to 768K. Good enough?

    • @jordanrodrigues1279
      @jordanrodrigues1279 11 months ago +7

      ...if a mix walks in with that crunchy ali'sin and cramped bells boostin air I just can't

  • @jim90272
    @jim90272 11 months ago +29

    A fun fact: the exact same reasoning is used in professional video cameras. The Arri Alexa 35 - a camera often used in movie making - has a whopping 17 stops of dynamic range. So even if a scene is way underexposed or overexposed, the problems can be corrected in post-processing.

    • @jackroutledge352
      @jackroutledge352 11 months ago +6

      That's really interesting.
      So why is everything on my TV so dark and hard to see!!!!!!

    • @blakestone75
      @blakestone75 11 months ago +7

      ​@@jackroutledge352 Maybe modern filmmakers think underexposed means "gritty" and "realistic". Lol.

    • @Breakstuff5050
      @Breakstuff5050 11 months ago +4

      ​@jackroutledge352 perhaps your TV doesn't have a good dynamic range.

    • @RealHomeRecording
      @RealHomeRecording 10 months ago +3

      @@jackroutledge352 Yeah, that sounds like an issue with your TV quality. Or maybe your settings are not optimized?
      I have a 4K OLED Sony TV and it has HDR. Looks gorgeous with the right material.

    • @Magnus_Loov
      @Magnus_Loov 10 months ago +6

      @@RealHomeRecording It's a well-known problem/phenomenon. Lots of people are complaining about the darker TV/movie productions these days. It is much darker now.
      I also have a 4K OLED TV (LG), but I can also see that some scenes are very dark in the production itself.

  • @alanpassmore2574
    @alanpassmore2574 11 months ago +17

    For me, a 24-bit, 48kHz digital recorder with an analog desk and outboard gives all I need. You get the balance of pushing the levels through the analog to create the excitement, while keeping lower digital levels to capture it with plenty of headroom.

    • @jmsiener
      @jmsiener 2 months ago

      It's all you need. Your DAW might process audio as 32-bit float, but your ADC is more than likely capturing 24-bit. 48k gives a touch more high-frequency room before Nyquist sets in, without essentially doubling the file size.

  • @pirojfmifhghek566
    @pirojfmifhghek566 11 months ago +26

    That video you mentioned last time absolutely blew my mind. I didn't have a clue that aliasing around the Nyquist frequency was a thing at all. I had the feeling that higher sample rates were better for basic audio clarity, in the same way that a higher bit depth helps with dynamic range. I just had no idea how or why.

  • @mastersingleton
    @mastersingleton 11 months ago +10

    Thank you for showing that 24-bit is not necessary for audio playback. For audio production, however, it makes a big difference in terms of the amount of buffer between the input audio and clipping.

  • @eitantal726
    @eitantal726 11 months ago +21

    Same reason why graphic designers need high res and high bit depth. A 1080p jpg image is great for viewing, but will look terrible once you zoom or change the brightness. If your final image is composed of other images, they better be at a good resolution, or they'll look pixelated

    • @lolaa2200
      @lolaa2200 11 months ago +2

      It's not about resolution, it's about compression. Without going into the details, how much you compress your JPEG - the trade-off between image quality and file size - is exactly what is discussed here: a matter of sampling rate.

    • @gblargg
      @gblargg 11 months ago

      As an amateur Photo(GIMP)-shopper, I figured this out a few years ago. Always start with a higher resolution than you think you'll need. It's easy to reduce the final product but a pain to go back and redo it with higher resolution.

    • @MatthijsvanDuin
      @MatthijsvanDuin 11 months ago +1

      @@lolaa2200 Ehh no, audio sampling rate is directly analogous to image resolution. We're not talking about image compression nor audio compression here.

    • @lolaa2200
      @lolaa2200 11 months ago +1

      @@MatthijsvanDuin I reacted to a message talking about JPEG. The principle of JPEG compression is precisely to give a different sample rate to different parts of the image, so yes, it totally relates. JPEG compression IS a sampling matter.

  • @rowanshole
    @rowanshole 6 months ago +3

    It's like Ansel Adams 'zone system' for audio.
    Adams would prefog his film with light, then over expose the film in camera, while under exposing the film in chemistry, so as to get rid of the noise floor (full blacks) and get rid of the digital clipping (full whites), both of which he maintained "contained no information".
    This resulted in very pleasing photographs.

    • @dmillionaire7
      @dmillionaire7 2 months ago

      How would I do this process in Photoshop?

  • @shorerocks
    @shorerocks 11 months ago +7

    Thumbs up for the Dan Worrall link. His and Audio University's videos are the top vids on YT to watch.

    • @AudioUniversity
      @AudioUniversity  11 months ago +1

      I love Dan’s videos! Thank you for the kind words, Sven!

  • @benjoe999
    @benjoe999 11 months ago +39

    Would be cool to see the importance of audio resolution in resampling!

  • @DDRMR
    @DDRMR 11 months ago +8

    I've slowly been learning the benefits of oversampling for the last few years, and before the final mix export I'll spend an hour or so applying oversampling on every plugin that offers the option, on every mixer track.
    This video really solidified my knowledge and affirmed that spending that extra time has always been worth it!
    The final mixes and masters do sound fucking cleaner by the end of it all, because I use a lot of compression and saturation on most things.

  • @BertFlanders
    @BertFlanders 11 months ago +1

    Thanks for clarifying these things! Really useful for deciding on project bit depths and sample rates. Cheers!

  • @macronencer
    @macronencer 11 months ago +1

    This is an excellent and very clear explanation. Thank you so much! I've seen Dan Worrall's videos on this topic, and I agree they are also brilliant.

  • @maxheadrom3088
    @maxheadrom3088 4 months ago

    Both this and the previous video are great! Thanks for the great work!

  • @ZadakLeader
    @ZadakLeader 11 months ago +67

    I guess having a high sample rate for when you need to e.g. slow recordings down is also useful, because you still have that data. And to me that's pretty much the only reason to have anything above a 44.1kHz sampling rate.

    • @AudioUniversity
      @AudioUniversity  11 months ago +9

      Great point, ZadakLeader!

    • @simongunkel7457
      @simongunkel7457 11 months ago +8

      Not something I think is necessary, unless you specifically want ultrasonic content and make bat sounds audible. Now, if you think time stretching sounds better with a higher sample rate, you might be right, but you are using the most computationally expensive hack I can think of.
      Time stretching and pitch shifting algorithms use windows of a particular size (e.g. 2048 samples), but their effect depends on how long those windows are in time, so a higher sample rate just shortens the time window. All of these effects make a trade-off: the longer the window, the more accurate they get in the frequency domain; the shorter the window, the more accurate they get in the time domain. Most of them default to maximal window size and thus maximal accuracy in the frequency domain, but the errors in the time domain lead to some artefacts.
      So instead of increasing your project sample rate, which makes all processing more costly, you could just opt for the second-highest frequency-domain setting for your pitch shifting or time stretching algorithm. That decreases the window size, which actually reduces the computational load.

    • @5ilver42
      @5ilver42 11 months ago +6

      @@simongunkel7457 I think he means the simpler resampling version where things get pitched down when playing slower, then the higher sample rate still has content to fill in the top of the spectrum when pitched down.

    • @gblargg
      @gblargg 11 months ago +6

      @@5ilver42 That's the ultrasonic content he referred to, wanting it to be captured so when you lower the rate it drops into the audible range.

    • @MatthijsvanDuin
      @MatthijsvanDuin 11 months ago +2

      @@5ilver42 It depends on the application, but if you're just slowing down for effect, you actually want the top of the spectrum to remain vacant rather than shifting (previously inaudible) ultrasonic sounds into the audible part of the spectrum. Obviously, if you want to record bat sounds, you need to use an appropriate sample rate for that application, regardless of how you intend to subsequently process the recording.
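
The window-size trade-off @simongunkel7457 describes above is easy to check with numbers: at a fixed window of 2048 samples (an assumed typical size), doubling the sample rate halves the window's duration (finer timing) and doubles the spacing of the analysis bins (coarser pitch). A small sketch:

```python
window = 2048  # samples; an assumed typical FFT window for stretch algorithms

for fs in (48_000, 96_000):
    print(f"{fs} Hz: window lasts {1000 * window / fs:.1f} ms, "
          f"bin spacing {fs / window:.1f} Hz")
# 48 kHz: 42.7 ms window, 23.4 Hz bins (finer pitch, blurrier timing)
# 96 kHz: 21.3 ms window, 46.9 Hz bins (finer timing, blurrier pitch)
```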

  • @zyonbaxter
    @zyonbaxter 11 months ago +18

    I'm surprised he didn't mention how higher sample rates decrease latency when live monitoring.
    PS. I would love to see videos about the future of DANTE AV and Midi 2.0.

    • @MikeDS49
      @MikeDS49 11 months ago +5

      I guess because the digital buffers fill up sooner?

    • @AudioUniversity
      @AudioUniversity  11 months ago +11

      Good point, Zyon Baxter! It’s a balance in practice though, as it’s more processor intensive so using a higher sample rate might lead to needing a larger buffer size.
      If anyone reading this is interested in learning more about this, check out this video: ua-cam.com/video/zzM4yk3I8tc/v-deo.html

    • @lolaa2200
      @lolaa2200 11 months ago +6

      Actually, with a given computing power, and assuming you make full use of it, a higher sample rate means higher latency.

    • @andytwgss
      @andytwgss 11 months ago +2

      @@lolaa2200 Lower latency - even within the ADC/DAC, the feedback loop is reduced.

    • @DanWorrall
      @DanWorrall 11 months ago +9

      I think this is kind of a myth in all honesty.
      In every other way, doubling samplerate means doubling buffer sizes. You have a delay effect? You'll need twice as many samples in the buffer at 96k.
      Same for the playback buffer: if you double the samplerate, but keep the same number of samples in the buffer, you've actually halved the buffer size.
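
Both sides of this exchange reduce to one formula: latency is buffer length divided by sample rate. A quick sketch (the 256-sample buffer is an arbitrary example):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate: int) -> float:
    """One buffer's worth of latency, in milliseconds."""
    return 1000 * buffer_samples / sample_rate

for fs in (48_000, 96_000):
    print(f"{fs} Hz, 256-sample buffer: {buffer_latency_ms(256, fs):.2f} ms")
# 48 kHz -> 5.33 ms, 96 kHz -> 2.67 ms. The gain only exists if the buffer
# stays fixed at 256 *samples*; as Dan notes, a buffer fixed in *time* (or a
# delay effect needing the same number of milliseconds) gains nothing.
```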

  • @mandolinic
    @mandolinic 11 months ago

    This stuff is pure gold. Thank you so much.

  • @soundslikeamillion77
    @soundslikeamillion77 20 days ago +1

    Here are two other reasons to go with 96k:
    The AD/DA latency of your system will be much smaller, and if (for some reason) you play back a file at the wrong sample rate, you will notice it right away 😁

  • @Fix_It_Again_Tony
    @Fix_It_Again_Tony 11 months ago +1

    Awesome stuff. Keep 'em coming.

  • @theonly5001
    @theonly5001 11 months ago +3

    More samples are a great thing for denoising as well.
    Temporal denoising is quite a resource-intensive task, but it works wonders on recordings of any type, especially if you want to get rid of higher-frequency noise.

  • @magica2z
    @magica2z 10 months ago

    Thank you for all your great videos. Subscribed.

  • @emiel333
    @emiel333 11 months ago +1

    Great video, Kyle.

  • @MadMaxMiller64
    @MadMaxMiller64 2 months ago +1

    Modern converters work as 1-bit sigma-delta anyway and convert the data stream after the fact, using digital filters, with the dithering noise pushed beyond the audible range.
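
For readers unfamiliar with the idea in this comment, here is a toy first-order sigma-delta modulator - a textbook sketch, not how any particular converter is implemented, with an arbitrary test signal and rate:

```python
import numpy as np

def sigma_delta_1bit(x):
    """Toy first-order sigma-delta: 1-bit output with the error fed back."""
    acc, out = 0.0, np.empty_like(x)
    for n, sample in enumerate(x):
        out[n] = 1.0 if acc >= 0 else -1.0  # 1-bit quantizer
        acc += sample - out[n]              # integrate the quantization error
    return out

fs = 64 * 48_000                            # 64x-oversampled 48 kHz
t = np.arange(fs // 100) / fs               # 10 ms of signal
bits = sigma_delta_1bit(0.5 * np.sin(2 * np.pi * 1000 * t))
# Low-pass filtering and decimating `bits` back to 48 kHz recovers the tone;
# the error feedback has shaped the quantization noise up out of the audio band.
```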

  • @cassettedisco6954
    @cassettedisco6954 11 months ago +1

    Thanks, friend - greetings from Mexico 🇲🇽❤

  • @michelvondenhoff9673
    @michelvondenhoff9673 11 months ago +1

    Compression you apply before any gain or overdrive/distortion is brought into the signal path. It might only be applied again when mastering for different formats.

  • @TarzanHedgepeth
    @TarzanHedgepeth 11 months ago

    Good stuff to know. Thank you.

  • @Zelectrocutica
    @Zelectrocutica 11 months ago +14

    I've said this before, but I'll say it again: most plugins work better at high sample rates, since most plugins don't have internal oversampling, so it's good to work at a reasonably high sample rate like 96 or 192kHz. Though I say this, I'm still working at 44.1-48kHz 😂

    • @weschilton
      @weschilton 11 months ago +3

      Actually almost all plugins these days have internal oversampling.

    • @simongunkel7457
      @simongunkel7457 11 months ago +2

      My DAW (Reaper) has external oversampling per plugin or plugin chain, which means it takes care of the upsampling and, after processing, the filtering and downsampling. To the plugin it looks like the project runs at a higher sample rate, while plugins where aliasing isn't an issue can still run at the lower sample rate.

    • @mb2776
      @mb2776 11 months ago

      Most plugins already had oversampling built in, like 10 years ago.

  • @oaooaoipip2238
    @oaooaoipip2238 11 months ago +4

    Don't ignore clipping. Or it will sound like Golden hour by JVKE.

  • @danielsfarris
    @danielsfarris 11 months ago +1

    WRT noise floor and compression, when working with analog tape, it was (and I presume still is) much more common to compress and EQ on the way in to avoid adding noise by doing it later.

  • @-IE_it_yourself
    @-IE_it_yourself 11 months ago

    You have done it again. I would love to see a video on square waves.

  • @breernancy
    @breernancy 11 months ago

    All points on point!

  • @3L3V3NDRUMS
    @3L3V3NDRUMS 11 months ago +1

    That was really great, man! I didn't know this before! I was just using the defaults because I didn't know what it would change. But now I understand it! 🤘

    • @AudioUniversity
      @AudioUniversity  11 months ago +2

      Glad to help, 3L3V3N DRUMS! I still use 48kHz most of the time because the processing power and storage I save outweigh the tiny bit of aliasing that might occur. (In my opinion)

    • @3L3V3NDRUMS
      @3L3V3NDRUMS 11 months ago

      @@AudioUniversity Great to know. That's actually the default in Ardour when I'm recording my drums, so I'll leave it like that!

    • @bulletsforteeth5029
      @bulletsforteeth5029 11 months ago +1

      It will require 50% more storage capacity, so be sure to factor that in on your projects.

    • @simongunkel7457
      @simongunkel7457 11 months ago

      @@AudioUniversity Where would it occur, though? Your converter on the hardware side always uses the maximum sample rate it can support, because that makes the analog filter design much easier. Then, if you record at lower sampling rates, it applies a digital filter and downsamples - both are hardware-accelerated DSP controlled via the driver. If you set it to record at 48k, your converters don't switch to a different filter design and a physically different sample rate; they just perform the filtering and downsampling before sending the digital signal to the box.

    • @simongunkel7457
      @simongunkel7457 11 months ago

      @MF Nickster I agree.

  • @DeltaWhiskeyBravo13579
    @DeltaWhiskeyBravo13579 11 months ago +10

    Excellent video, Kyle.
    Sometimes I miss the analog tape days, till it comes to signal-to-noise. At least tape saturation sounds much better than digital clipping - though I'm sure nobody goes that hot. 🙂

  • @plfreeman111
    @plfreeman111 11 months ago +2

    "...for any properly mastered recording." I long for properly mastered recordings. A thing of myth and beauty. Like a unicorn.

  • @MegaBeatOfficial
    @MegaBeatOfficial 5 months ago +1

    AFAIK this method is built into every audio interface nowadays. So a sampling resolution higher than 44.1 is obviously useful inside the converter, but in principle you shouldn't record audio files at 96kHz or higher, because they just take up a lot of hard disk space and need more CPU power to play, especially if you have a lot of tracks... and you don't gain quality.

  • @mattbarachko5298
    @mattbarachko5298 11 months ago +1

    Got so excited when I saw you were using Reaper.

  • @BeforeAndAfterScience
    @BeforeAndAfterScience 11 months ago +2

    Succinctly, while human hearing has an upper frequency bound, targeting that limit when converting from analog to digital can (and usually does) result in literally corrupted digital representation because the contribution of the higher analog frequencies to the waveform don't just disappear, they get aliased into the lower frequencies.
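
The fold-down described above is easy to demonstrate: sample an ultrasonic tone without filtering it out first and it reappears as an in-band alias. A minimal numpy sketch (the tone frequency and one-second length are chosen for convenience):

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs              # 1 second of samples: 1 Hz bin spacing
x = np.sin(2 * np.pi * 30_000 * t)  # 30 kHz tone, no anti-alias filter applied

spectrum = np.abs(np.fft.rfft(x))
print(f"30 kHz sampled at 48 kHz appears at {np.argmax(spectrum)} Hz")
# -> 18000 Hz: the tone folds around Nyquist (48000 - 30000 = 18000).
```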

  • @SigururGubrandsson
    @SigururGubrandsson 11 months ago

    Really nice stuff.
    But I disagree with the Nyquist alignment.
    You can defend it if you know the input, but if it's misshapen, like music, then you can't defend Nyquist.
    Misshapen frequency, volume and variance need to be taken into account, and you need more than 2x the frequency for that, as well as the bit depth.
    Not to mention intentional saturation.
    Keep it up, I'm eager to watch the next vid!

  • @AdielaMedia
    @AdielaMedia 5 months ago

    Great video!

  • @nine96six
    @nine96six 10 months ago

    Rather than for musical purposes, I think it is valuable as data for profiling or natural phenomenon analysis in the future.

  • @lolaa2200
    @lolaa2200 11 months ago +1

    Almost nailed it, although what you said about the pressure on the anti-aliasing analog filter is only true for very basic converter topologies. If you've only attended mixed-signal electronics 101, that's what you will have seen. However, we haven't used that topology for audio in several decades, mostly for this exact reason. The "true" sampling frequency (i.e., in terms of what the analog side sees) is several MHz.

    • @gblargg
      @gblargg 11 months ago

      The way modern converters work just amplifies his point: the higher the sampling rate, the easier the filtering is.

  • @bassyey
    @bassyey 11 months ago +1

    I record at 24-bit because of the noise floor. But I record at 96kHz because of the round-trip time! My system actually has lower latency when it's set to 24-bit 96kHz.

  • @professoromusic
    @professoromusic 11 months ago +1

    Love this, always great content. Where have you studied?

    • @AudioUniversity
      @AudioUniversity  11 months ago +4

      Thanks, Professor O. I studied audio production at Webster University. I’ve also learned a lot from mentors, of course!

  • @delhibill
    @delhibill 10 months ago

    Clear explanations

  • @kyleo2113
    @kyleo2113 11 months ago +1

    Is there any advantage to upsampling when applying parametric EQ, crossfeed, filters, volume leveling etc? Also do some DACs work better with higher sample rates if you are able to offload the conversion in the digital domain in a pc? I am a roon user and curious your take on this.

  • @shueibdahir
    @shueibdahir 2 months ago

    The demonstration about analog audio gain and noise floor is exactly how cameras work as well. I'm actually shocked by how similar they are. Capturing images with a camera is a constant battle between distortion (clipping the highs) and the image being too dark (blending in with the noise floor) - and bringing it up in post then causes the noise to come up as well.

  • @deaffatalbruno
    @deaffatalbruno 11 months ago +1

    Well, the comments around noise floor are a bit misleading. A 24-bit signal doesn't have a 144 dB noise floor - that would be nice - as this depends on the noise floor of the conversion. 144 dB (6 dB per bit) is theory only.
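
The theory this comment refers to: an ideal N-bit converter with a full-scale sine gives roughly 6.02 N + 1.76 dB of signal-to-quantization-noise, and the 6 dB/bit figure is that rounded. A quick calculation:

```python
def ideal_snr_db(bits: int) -> float:
    """SNR of an ideal quantizer with a full-scale sine: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit ideal: {ideal_snr_db(bits):.1f} dB")
# 16-bit: 98.1 dB, 24-bit: 146.2 dB -- real 24-bit converters measure nearer
# 115-125 dB, because the analog stages set the actual noise floor.
```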

  • @isotoxin
    @isotoxin 11 months ago +3

    Finally I have strong arguments to argue with audiophiles! 😅

    • @garymiles484
      @garymiles484 11 months ago

      Sadly, most are like flat earthers. They just won't listen.

  • @___David___Savian
    @___David___Savian 2 months ago

    Here is the right level to render to for audio uploaded to YouTube:
    The ideal volume limit level is -5.9 dB. (YouTube automatically normalizes volume to that level.)
    All instruments should be below this level, with the peak spikes reaching -5.9 dB.
    Just put all instruments at around -18 dB and then increase accordingly between -18 and -5.9 dB.

  • @aaronmathias6739
    @aaronmathias6739 11 months ago +1

    It is good to see you back in action with your awesome videos!

  • @elanfrenkel8058
    @elanfrenkel8058 1 month ago +1

    Another reason to use higher sample rates is it decreases latency

  • @Tryggvasson
    @Tryggvasson 11 months ago

    Sample rate does more than help with anti-aliasing. Rupert Neve was convinced that capturing and processing the ultrasonic signal that came along with the audible actually contributed to the perceived pleasantness of the sound and the emotional state it communicates. So even if you can't hear it, per se, it counts in the overall timbre and feel. You can easily argue that, in the analog domain, ultrasonic signal - for instance harmonics - actually changed the behavior of compressors, to say the least - and that, multiplied by x number of tracks. So higher sample rates also allow for a wider bandwidth into the ultrasonics, which seems to matter for the quality of the signal. The downside is the processing power and storage space.

    • @FallenStarFeatures
      @FallenStarFeatures 11 months ago

      It's risky to record frequencies above 20kHz, even when the original sample rate is above 88.2kHz. Ultrasonic frequencies in this band are susceptible to being digitally folded down into the audio range, producing extremely unnatural-sounding aliasing distortion. While this hazard can be carefully avoided within a pure 96kHz+ digital processing chain, any side trip to an external digital processor may involve resampling that can run afoul of ultrasonic frequencies. Why take such risks when the speculative benefits have never been shown to be audible?

  • @VendendoNaInternetAgora
    @VendendoNaInternetAgora 1 day ago +1

    I'm watching all the videos on the channel; thank you for sharing your knowledge with us. One question: what is the setup of the sound equipment installed in your car? Is it a high-end system? I'm curious to know what system (equipment) you use in your car...

    • @AudioUniversity
      @AudioUniversity  1 day ago

      I just use the stock system, but I’d love to upgrade someday! Thanks.

  • @nunnukanunnukalailailai1767
    @nunnukanunnukalailailai1767 11 months ago +5

    Weird how limiting was not discussed. It's one of the applications of high sample rates that actually makes sense in practice most of the time - at least in a mastering context. The sample-peak level has a higher chance of matching the intersample peak level (true peak) when higher sample rates are used, even if high sample rates have no effect on the D/A. That's the main working principle behind true-peak limiters.
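
The intersample-peak effect this comment relies on can be shown in a few lines: a sine whose true peaks fall between the samples reads about -3 dBFS as a sample peak but roughly 0 dBFS as a true peak. A sketch using scipy's polyphase resampler as the oversampler (the 4x factor is an arbitrary choice):

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44_100
n = np.arange(256)
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)  # peaks land between samples

sample_peak = np.max(np.abs(x))
true_peak = np.max(np.abs(resample_poly(x, 4, 1)))     # 4x-oversampled estimate
print(f"sample peak: {20 * np.log10(sample_peak):+.2f} dBFS")  # about -3.01
print(f"true peak:   {20 * np.log10(true_peak):+.2f} dBFS")    # about  0.00
```

This is essentially what a true-peak limiter does internally: oversample, then limit against the reconstructed peaks rather than the stored samples.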

  • @stephenbaldassarre2289
    @stephenbaldassarre2289 9 months ago

    One thing often overlooked in the sample rate argument is digital mixers. The converters are often run in low-latency (high speed) mode in order to keep the round trip through the console low enough that it doesn't affect people's performances. This is done by simplifying the digital anti-aliasing filters to reduce processing time. This is not trivial stuff, I'm talking on the order of 40dB attenuation at .6fs vs 100dB. In other words, if your console runs at 48KHz, an input of 28.8K at full scale will come out the other side of your console as 19.2K at -40dB. That's enough to cause some issues, especially since a lot of manufacturers trying to meet a price point completely leave out the analogue anti-aliasing filters (Sony suggests 5-pole analogue filters in front of 48K ADCs). Running a digital console at 96KHz effectively means around 90dB stop-band attenuation even with the ADCs in low-latency mode. Of course, you also reduce aliasing caused by internal processing as you say.

    • @stephenbaldassarre2289
      @stephenbaldassarre2289 8 months ago

      @mfnickster The issue isn't processing power so much as that ADCs MUST have group delay in order to have linear-phase anti-aliasing. DACs must also have group delay for the reconstruction filters. The processing power within the console's DSP is fast, but nothing is instantaneous, so every place one can reduce the latency must be considered. Oversampling also requires group delay, so pick your poison. In a computer environment, the plug-in can report its internal latency so the DAW can compensate by pre-reading the track; not so in a mixer.

  • @marcbenitez3227
    @marcbenitez3227 6 months ago

    96 is the sweet spot, think of sample rates as the display quality on your monitor, 1080p is going to look worse than 4k because it has less pixels, it’s the same thing in music, more samples equals more detail.

  • @ukaszpruski3528
    @ukaszpruski3528 11 months ago

    Perhaps an Idea to consider (and make a video) that compares DSD to PCM and the differences between PURE DSD recording mastering output and the ones that use PCM in between ... Nevertheless, DSD128 or DSD256. PCM 24/96 vs DSD128 ... Is it really that close ? Or is there some "hidden difference" ;-) ...

  • @WaddleQwacker
    @WaddleQwacker 10 months ago

    It's sort of the same with visual production, with pictures and video files. The average Joe posts JPEGs in 8 bits, and maybe a PNG with an alpha channel every Sunday. But in production we use 32-bit EXRs everywhere, because you can play with high dynamic range in comp, it's fast, and it can store layers, metadata you haven't even heard about, deep data, and...

  • @XRaym
    @XRaym 11 months ago

    02:02 It's worth noticing that recording 24-bit audio doesn't mean you get 24 bits of dynamics: hardware has its own noise level as well. But sure, digital interfaces are still way lower in noise than analog.

  • @camgere
    @camgere 11 months ago

    I'm a bit rusty on this, but there is an issue with the Nyquist frequency. Going from analog to digital, you want to "brick wall" low-pass the signal at half the sampling frequency. A brick wall is a perfect low-pass filter, which doesn't exist - though there are very good low-pass filters. Going from digital to analog, you again want to brick-wall filter the signal to recover the analog signal from the sampled signal. Even more confusing, there are digital low-pass filters, but they have to obey Nyquist as well.

  • @crapmalls
    @crapmalls 11 months ago +3

    Higher sample rates reproduce higher frequencies. There is no more info in the audible range. The clue is in the file size: double the frequency, double the size. More bits is a lower noise floor, which most DACs can't reproduce out of the audio port. And yet it sometimes sounds better to me 🤷‍♂️

    • @paulhamacher773
      @paulhamacher773 11 months ago +5

      did you test it in an ABX-setting? Otherwise your perception just might have fooled you! 😀 #beenthere

    • @crapmalls
      @crapmalls 11 months ago

      @@paulhamacher773 A lot of the time it's difficult to find a higher-res version of the same mastering.

    • @mb2776
      @mb2776 11 months ago +1

      @@crapmalls ...then just record your own stuff at different settings and let somebody else play it for you without telling you which is which. Also, use more than just a few examples. You will see, you got fooled. There isn't more info; you can't hear above 20kHz.

    • @crapmalls
      @crapmalls 11 months ago

      @@mb2776 Yeah, that's what I mean. I know there's literally no difference, because the higher sample rate just goes into higher frequencies. The file size is the giveaway. Apparently it can help with timing in the DAC, but that's an oversampling issue and a DAC issue, IF the DAC is even good enough for it to matter.

  • @calumgrant1
    @calumgrant1 2 months ago

    Real music is not single static sine waves but a whole spectrum that varies with time. I would like to see this mathematical argument extended to spectra, because the error on each frequency component would surely accumulate? Real music is very very processed, being encoded and decoded multiple times from various streaming services and codecs, so I think adding a bit of headroom in terms of frequency and bit depth is quite sensible to keep the artefacts down.

  • @scarface44243213
    @scarface44243213 2 months ago

    Hey, what microphone are you using in this video? It's really nice

  • @ferrograph
    @ferrograph 10 months ago

    Nice to see that this stuff is understood properly by the younger engineers who didn't live through the evolution of analogue and digital recording. So much nonsense is spoken about high bit rates. Well done.

  • @ritaauton3006
    @ritaauton3006 5 months ago +1

    I hope you can help with a practical question, please:
    Streaming services are now offering a lot of high-res 24/96 and 24/192 streams.
    In my home stereo system I have a streamer and two different separate digital-to-analog converters. One DAC is newer and plays high-res. One DAC is older and maxes out at 16/48 (yes, 48, not 44).
    The streamer allows me to set the output maximums for bit depth and sampling rate for each output: digital coax and digital optical.
    Question: something somewhere is going to downsample a high-res file to play it through my 16/48 digital-to-analog converter. What I want to know is, should I have that done by the streamer, by adjusting the appropriate output port to 16/44 or 16/48,
    or
    should I let the high-res stream pass through by leaving the output maximums at 24/192 and let the DAC do the downsampling (assuming it can do it on its own)?
    For the sake of this exercise, let's just assume that God gave me golden ears that allow me to hear the color of the carpet in the studio. What is the best practice for doing this downsampling?
    1. By the streamer, before outputting via digital coax or digital optical
    2. Let it all pass through the streamer and let the DAC deal with it.
    Can't wait to hear back from you, please.

    • @AudioUniversity
      @AudioUniversity  5 months ago

      I'm not sure of the answer to this question. I typically recommend using the best device in the chain for any ADC, DAC, or sample-rate conversion. But I expect there will be no audible difference. If there is - go with that one. If there's not - don't worry about it. Put on a great record and enjoy.

  • @Skandish
    @Skandish 11 months ago

    Yep, the frequency and volume range is enough. But what about resolution? A 16-bit, 48 kHz signal is just so many data points, which will wipe out any difference between very close, but slightly different, signals.
    For example, digitizing short enough 15 kHz and 15.001 kHz sine signals would result in the same binary file. Moreover, the DAC is not looking at the whole file, only at a short part of it, meaning we will likely have frequencies changing over time.
    Compare this to image sensors or displays. Having a 1-inch HDR sensor gives enough size and depth, but we still want it to be 4K or 8K.

  • @TonyAndersonMusic
    @TonyAndersonMusic 10 months ago +2

    That was super clear. You’re a great instructor. Is it useless to record in 96k and then bounce stems down to 48k to give my logic session a break?

    • @AudioUniversity
      @AudioUniversity  10 months ago

      No. It’s not useless. You can even bounce out the multitrack instead of combining sections into stems.

  • @ProjectOverseer
    @ProjectOverseer 11 months ago

    I use 192kHz multi tracking then master to DSD for amazing replay via a decent DAC

  • @rts100x5
    @rts100x5 10 months ago

    Say whatever you want to... believe whatever you want to... the difference between DSD recordings and lossless WAV or FLAC on my FiiO DAP is NIGHT vs DAY.
    It's really about the recording just as much as the file type / resolution.

  • @tillda2
    @tillda2 4 months ago

    Question: Is the 20bit HDCD mastering any good for playback? Is it recognizable (compared to normal CD), given a good enough audio system? Thanks for answering.

  • @DGTelevsionNetwork
    @DGTelevsionNetwork 11 months ago

    This is why the USMC Sony and others need to make DSD more available and not guard it so much. It's a lot easier to work with when the editing program supports it. Almost never have to worry about noise floor and you can do almost all processing on a core duo with ease.

    • @wavemechanic4280
      @wavemechanic4280 10 months ago

      You using Pyramix for this? If not, then what?

  • @-IE_it_yourself
    @-IE_it_yourself 11 months ago

    5:55 that is cool

  • @lucianocastillo694
    @lucianocastillo694 6 months ago

    I wish there was a higher sample rate option for high-mid to high frequencies - one that keeps a 48kHz sample rate on the low-mid and low frequencies but targets a higher sample rate for the rest.

  • @baronofgreymatter14
    @baronofgreymatter14 4 months ago

    So in purely playback scenarios, is it recommended to oversample above 44.1? For example, my streamer allows me to oversample through its USB output to my DAC. Does it make sense to oversample to 88.2 or higher in order to get the smoother roll-off above Nyquist?

  • @rodrigotobiaslorenzoni5707
    @rodrigotobiaslorenzoni5707 11 months ago +1

    Excellent video!!!! One question I have is whether higher sample rates would help to draw a more complex wave - like a mastered song, with many instruments playing at the same time - more accurately. I'm supposing that a "complex" wave may be a combination of various frequencies, and not perfectly sine waves. Might be worth testing.

    • @AudioUniversity
      @AudioUniversity  11 months ago +4

      The only frequencies that won’t be accurately sampled exceed the Nyquist frequency, and therefore the audible frequency range. Check out this video to see this demonstrated: ua-cam.com/video/UqiBJbREUgU/v-deo.html

    • @RobertFisher1969
      @RobertFisher1969 11 months ago +5

      Any complex wave form can be decomposed into a set of sine waves. If none of those component sine waves are above the Nyquist frequency, then the DAC can perfectly reproduce the complex wave. Which is why such high frequencies need to be filtered out of the complex wave to prevent aliasing.

  • @paullevine9598
    @paullevine9598 1 month ago

    Could you do a video on DSD - what it is, pros, cons, etc.?

  • @AgentSmith911
    @AgentSmith911 1 month ago

    I heard Spotify is rumored to offer higher-fidelity audio, probably with less compression or lossless audio using codecs like FLAC instead of MP3. My audio equipment probably isn't good enough to hear the difference, though, but maybe it will be good for music producers.

  • @ats-3693
    @ats-3693 11 months ago

    Aliasing definitely isn't a problem unique to digital audio recording. I'm a geophysicist and a geophysical data processor; aliasing is also an issue in geophysical data in exactly the same way, except it ends up being a visual issue.

  • @VendendoNaInternetAgora
    @VendendoNaInternetAgora 2 days ago +1

    One question... When I'm listening to a song on YouTube, how do I identify whether that song is an audio file without loss of quality or one with loss of quality? Where can I see the specifications of the audio being played, to know if it is, for example, a "WAVE" or "FLAC" format (without loss of quality) or an "MP3" type file (where there was compression and loss of quality)? Is there any extension for the Chrome browser that shows real-time specifications of the audio being played? I visited YouTube's audio file guidelines and it says the following: "[...] Supported file formats: (1) MP3 audio in MP3/WAV container, (2) PCM audio in WAV container, (3) AAC audio in MOV container and (4) FLAC audio. Minimum audio bitrate for lossy formats: 64 kbps. Minimum audible duration: 33 seconds, excluding silence and background noise. Maximum duration: none [...]". Therefore, YouTube accepts both audio files without loss of quality and audio files with loss of quality.

    • @AudioUniversity
      @AudioUniversity  2 days ago

      I believe YouTube videos have an audio bitrate of 128 kbps.

  • @macronencer
    @macronencer 11 months ago +1

    I understand about oversampling and why it's used internally. However, sometimes I think of potential reasons to *record* at higher sample rates - but I'm no expert and I wonder whether this is ever justified. Two such reasons I can think of right now:
    1. Field recordings that you might want to slow down later on to half or quarter speed.
    2. Recordings made in adverse conditions that might need noise reduction processing (I've heard some people say that higher sample rates can help with NR quality).
    Do you have any comments on either of these? I'd be interested to hear your advice. Thank you!

    • @RealHomeRecording
      @RealHomeRecording 10 months ago +1

      The two reasons you listed are indeed valid points. Pitch correction or pitch manipulation would be another.

    • @macronencer
      @macronencer 10 months ago +1

      @@RealHomeRecording Many thanks, that's helpful!

  • @Jungle_Riddims
    @Jungle_Riddims 10 months ago

    Word of the day: Nyquist 😉💥

  • @moskitoh2651
    @moskitoh2651 10 months ago +1

    If your signal-to-noise ratio is below 96dB (including not only mic and preamp but also the room), recording with 24 bits only makes sense for the manufacturer. ;-)
    Unless you like to record 8 bits of noise...

  • @jamesgrant3343
    @jamesgrant3343 11 months ago +1

    Bit depth matters, assuming you are going to change the dynamic range of what is in the file by a lot. Sample rate does not (assuming you only care about audible frequencies!). If you stretch a 20 kHz sine wave and make it a 19 kHz sine wave, the application doing the re-sampling is not taking the original samples and moving them; it is interpolating a position between the original samples and synthesising a new sample. It will be as good as the algorithm the software uses - the sample rate of the source (44.1/48/96 etc.) is irrelevant. If the software is good, it will do a good job; if the software is poor, it will do a poor job.
    Luckily for us in 2023, this is a very solved problem, and things like Reaper, which still has a re-sampling mode on export, default to very good implementations of sample rate conversion - whereas in the olden days, when the original Pentium processor was crazy expensive, it would take forever to export while re-sampling. No re-sampling algorithm used today simply draws a straight line between two samples and puts a new dot the appropriate proportion along the line. The waveform represented by the original samples is effectively reconstructed, and the synthesised sample is placed on the reconstructed waveform, which is mathematically very precise relative to the original samples.
    This accuracy does not get better at high sample rates; the inaccuracy is in the bit depth, not jitter in when the sample was made (unless the ADC was bad - in which case it's bad at any rate!). For those thinking about aliasing: again, the quality of the software you are using is far more important than the sample rate you select. For example, a good piece of software may put a low-resonance brick-wall filter at about 21 kHz to filter away higher frequencies so they don't cause aliasing. If your software does this, and many do, there is a good chance the developer has thought things through carefully. If you depend on sample rate to minimise aliasing, there is a good chance your software of choice has problems in many areas!
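
The claim above - that algorithm quality matters far more than the source sample rate - can be tested directly by comparing naive straight-line interpolation against a windowed-sinc (polyphase) resampler on near-Nyquist content. A sketch, assuming scipy is available; the exact error figures depend on the resampler's filter design:

```python
import numpy as np
from scipy.signal import resample_poly

fs = 48_000
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 19_000 * t)                  # content near Nyquist

sinc_2x = resample_poly(x, 2, 1)                    # windowed-sinc polyphase
linear_2x = np.interp(np.arange(2 * len(x)) / 2,    # naive straight-line interp
                      np.arange(len(x)), x)

ideal = np.sin(2 * np.pi * 19_000 * np.arange(2 * len(x)) / (2 * fs))
for name, y in (("sinc  ", sinc_2x), ("linear", linear_2x)):
    err = np.max(np.abs(y[500:-500] - ideal[500:-500]))  # ignore edge effects
    print(f"{name} max error: {20 * np.log10(err):6.1f} dBFS")
# The straight-line version is off by whole decibels at 19 kHz; the sinc-based
# one sits down at its filter's ripple floor, regardless of the source rate.
```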

  • @GLENNKEARNEY1
    @GLENNKEARNEY1 11 months ago

    So what's your opinion on Ableton Live 11?

  • @ChristopherRoss.
    @ChristopherRoss. 11 months ago

    If I wanted to manually anti-alias, how would I go about that? A hard low pass at the end of the mix bus, or would I need to put an instance of the low pass after every source of harmonic distortion? Or would that even work?

    • @snowwsquire
      @snowwsquire 11 months ago

      It would not, because you would be cutting the 48kHz signal, not the 96kHz one (assuming the project sample rate was 48kHz).

  • @SergeyMachinsky
    @SergeyMachinsky 11 months ago +4

    I'm wondering if a human can differentiate a sine wave from a sawtooth wave at high frequencies, when the harmonics forming the sawtooth wave are above 18-20kHz.
    So we can't hear these frequencies, but the pressure difference will be much steeper in the case of a sawtooth wave (much faster attack).
    Maybe you can give me some opinions and sources on this topic?
    P.S. I really like your videos, thank you!

    • @BenCaesar
      @BenCaesar 11 months ago +1

      That's an interesting question; I'd be curious too, but I assume you'd be able to hear the difference. What do you think?

    • @AudioUniversity
      @AudioUniversity  11 months ago +1

      Check out this video: Digital Show & Tell ("Monty" Montgomery @ xiph.org)
      ua-cam.com/video/UqiBJbREUgU/v-deo.html
      Monty runs a square wave through the system and illustrates something called the Gibbs Effect. Although, the frequencies that make a triangle wave or square wave perfectly triangular or perfectly square exceed 20 kHz. So the sound should be the same theoretically!

    • @77WOR
      @77WOR 11 months ago +1

      ​@@BenCaesar Above around 5k, no audible difference between a sine and square wave tones. Try it yourself!

  • @EthanRMus
    @EthanRMus 7 months ago

    Question about the Nyquist theorem - say you are sampling audio at 48kHz, specifically a 24kHz signal. This signal will get exactly 2 samples, one for the crest and one for the trough, and the Nyquist theorem states that those 2 samples are sufficient to completely reproduce the original analog frequency when converting back from a digital to an analog signal. My question is, how would there be any way to distinguish in this scenario between, say, a sine and a square wave at this same 24kHz frequency? Both would be converted from analog to digital using the same 2 samples, but when your interface is reconverting these two different signals using the same 2 samples, how would it know that one had originally been a sine wave and the other had originally been a square wave?

    • @EthanRMus
      @EthanRMus 7 months ago

      @nrezmerski Thank you! Especially helpful that you mentioned that modern converters use low pass filters at 20kHz *before* sampling. I was wondering about that as well but that helps clear a lot of other confusion for me.

  • @fasti8993
    @fasti8993 11 months ago +1

    Great video. In audio production, another beneficial effect of using higher sample rates, apart from getting rid of aliasing, is that doubling the sample rate cuts latency in half...

    • @gblargg
      @gblargg 11 months ago +1

      It's odd that so many people bring this up. It tells me that many systems are poorly designed and don't adjust the sample buffer size to the sample rate, e.g. they are a fixed number of samples rather than a fixed amount of time. Or people just don't know how to adjust the buffer size to reduce latency (at a cost of higher chance of dropouts).

  • @eitantal726
    @eitantal726 11 months ago +1

    Might be worth explaining what aliasing is, and why aliasing can occur only in digital processing, and never in analog processing

    • @simongunkel7457
      @simongunkel7457 11 months ago +3

      ua-cam.com/video/O_me3NrPMh8/v-deo.html (most visible on the back wheel of the carriage going right). That's aliasing on a film from 1903 and that's not digital.

    • @eitantal726
      @eitantal726 11 months ago

      @@simongunkel7457 Aliasing in the realm of analog audio processing.

    • @simongunkel7457
      @simongunkel7457 11 months ago +1

      ​@@eitantal726 Well BBD delays can alias and a classic piece of gear which can do this is the Moogerfooger MF-104M. But you could mod any BBD effect to do it - it's just the Moog has controls that allow you to go to aliasing mode without any further tinkering.

    • @MatthijsvanDuin
      @MatthijsvanDuin 11 months ago +3

      Aliasing is an issue whenever something is discretely sampled, which is why it also applies to motion film (with each "sample" being a frame of video).

  • @phinok.m.628
    @phinok.m.628 10 months ago

    For some reason, everybody keeps saying you can perfectly sample up to half the sampling rate. Technically, that is not true: you can only sample frequencies LESS than half the sampling rate, not exactly half the sampling rate.
    If you sample a signal at exactly half the sampling rate, you could theoretically always sample it at zero, in which case the sampled signal would be indistinguishable from a signal at half the sampling rate with any amplitude (including zero), meaning there are infinitely many band-limited solutions for that particular sampled waveform. In fact, no matter at which points you sample a tone at half the sampling rate, there will ALWAYS be multiple band-limited solutions - which breaks the whole point of being able to perfectly reconstruct the original analog signal.

    • @phinok.m.628
      @phinok.m.628 10 months ago +1

      @@nicksterj Fair enough... :D
      I guess I could have expressed myself a little more accurately. But I'm sure you know what I mean.
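
A three-line demonstration of @phinok.m.628's point about sampling at exactly half the sample rate: a tone at precisely fs/2 produces samples that depend entirely on its phase, so its amplitude and phase cannot be recovered.

```python
import numpy as np

n = np.arange(8)
for phase in (0.0, np.pi / 2):
    x = np.sin(np.pi * n + phase)  # tone at exactly fs/2: 2*pi*(fs/2)*n/fs = pi*n
    print(np.round(x, 3))
# phase 0:    all (near-)zeros -- indistinguishable from silence
# phase pi/2: alternating +1/-1 -- reads as a full-scale signal
```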

  • @jonasdaverio9369
    @jonasdaverio9369 11 months ago +3

    For digital SNR to be that important, you would need it to be higher than your hardware SNR, which is quite unlikely in the case of acoustic recordings. Maybe for electronic music it's more important.

    • @simongunkel7457
      @simongunkel7457 11 months ago +1

      In the 16-bit days, keeping the digital noise floor below the analog one meant going in hot and thus risking clipping. With 24 bits, you gain the headroom, and your analog noise floor will be louder than the quantization noise. So it's a non-issue these days, but only because we moved to recording at 24 bits, where the bottleneck becomes the analog chain in front of the ADC.

    • @jonasdaverio9369
      @jonasdaverio9369 11 months ago

      @@simongunkel7457 Do you have the order of magnitude? I feel like 90dB of SNR is already extremely good for your whole analog chain.

    • @simongunkel7457
      @simongunkel7457 11 months ago +3

      @@jonasdaverio9369 If you wanted to make use of the 96dB provided by 16 bits, you'd have no headroom. I tend to be cautious and leave at least 12dB between the loudest peak I got during soundcheck and full scale. There are plenty of mics that can beat 90dB, and dynamic mics don't generate noise on their own, so you only get the preamp noise. 100dB SNR isn't that uncommon even at quite low price points, and I just measured 90dB on an old Behringer interface I have lying around.

    • @jonasdaverio9369
      @jonasdaverio9369 11 months ago

      @@simongunkel7457 Thanks for the details!

    • @gblargg
      @gblargg 11 months ago

      You get an increase in noise as you add effects and tracks in the digital domain, thus it's not just capture, but also editing that needs a lower noise floor. Even just adjusting gain in the digital domain adds noise.

  • @farfymcdoogle3461
    @farfymcdoogle3461 3 months ago

    They told me you have to keep the same sound quality settings you recorded with, so basically you can't change them at any step, even mastering - but if you mix bit depths/sample rates through the process, will it cause phasing and issues???

  • @saultube44
    @saultube44 1 month ago

    192kHz sampling shouldn't be a problem for today's computers, if the processing load is distributed among all CPU cores concurrently; it can even be helped with GPU cores. Additional hardware support should be provided by professional-grade sound cards.

  • @forbiddenera
    @forbiddenera 11 months ago

    Somehow I intuitively want to believe it should be possible to separate audio by level - e.g. remove any sound below a certain level in a recording, leaving only the louder sounds (instantaneously, not separated in the time domain like a noise gate). I can't remotely prove this, let alone actually believe it knowing how waves work, yet the analogous thing is fairly possible in the frequency domain with filters and Fourier transforms, at least up to a certain roll-off and phase shift.
    If we take the fact that an FFT can in theory separate a complex waveform into all of its individual components, then it should be possible, which is why I intuitively want to believe it, even though I've never seen it achieved or explained as possible. Other than using an FFT to extract every single component wave and then removing or keeping the ones within a certain amplitude window, I can't think of any other approach that could work. If a digital sample is just an infinitesimal point in time represented by a number, with its precision defined by the bit depth - say your sample is 245/255 and you want to remove everything under 30/255 - how could you? You can't. Subtracting just decreases the amplitude or makes it quieter, adding does the opposite; the number representing the sample can only increase or decrease. So you at least absolutely cannot do it without considering the frequency and time domains.
    And what would it take for a perfect FFT extracting every constituent element of a 16/44.1 recording? Even if you could do that, and removed the waveforms inside or outside a window and recombined, how would you know you're only removing sound correlating to that window and not components of the sound you want? Maybe, when separated, some of the components under your cutoff are part of the timbre of your cymbals. I wish I were super good at math and could resolve this internal debate. My knowledge/scientific side says it's impossible, but part of me wants to believe otherwise. And if frequency is a component of the time domain, then how can we filter certain frequencies (even if not able to do a brick-wall filter with zero phase) and not different levels? They're just two axes of the same function!

  • @marianochvro
    @marianochvro 1 month ago

    All this is 100% true for all-digital processing. However, I have a question: is it possible to succeed in manipulating the waveform coming from a 44.1 kHz recording, but using an analog path for mixing and mastering? I.e., a great-quality DAC > analog compressor, analog EQ, etc.?