Greetings Doctor!
My Presonus Firepod is a 24-bit/96k recording device. The question I have is: should I record at 96k, or should I record at 44.1k? If one can dither from 24 bits down to 16, can and should you do the same with the sampling rate?
Also, are there devices that record at higher than 24 bits?
I think we covered at least part of this in another post of yours.
First off, there are no converters that can record at more than 24 bits of resolution. And, as I know I’ve mentioned before, even the best and most expensive 24 bit converters can’t deliver a full 24 bits worth of real data. At best, you get about 22 bits of data, and the rest is covered up by the noise floor of the electronics themselves.
In most typical recording and playback circumstances, with good converters, most people would not be able to tell the difference between a 16 bit recording and a 24 bit recording. 16 bits gives us more than enough dynamic range for most sources that we typically record. The type of instruments that benefit most from 24 bit recording are those with a HUGE dynamic range, such as a symphony orchestra recorded in a really well built recording studio. In most pop/rock stuff, though, you’d never hear the difference between 16 and 24 bits on individual tracks.
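To put some numbers on that: a handy rule of thumb says each bit buys about 6 dB of dynamic range. A quick back-of-the-envelope sketch (Python, just for the arithmetic):

```python
# Rule of thumb for an ideal N-bit converter:
# dynamic range ~= 6.02 * N + 1.76 dB
for bits in (16, 22, 24):
    print(f"{bits} bits: ~{6.02 * bits + 1.76:.0f} dB")
# 16 bits: ~98 dB, 22 bits: ~134 dB, 24 bits: ~146 dB
```

So even the ~22 "real" bits you get from a great converter amounts to about 134 dB of range, far more than a typical pop/rock source needs.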
However, that doesn’t mean you should just record everything at 16 bits. There are a couple of reasons why you would want to use 24 bits anyway. First, if your converters are 24 bit converters and there is no hardware option for outputting properly dithered 16 bit audio, then you would probably want to record at 24 bits to avoid a hard truncation (meaning no dither) from 24 bits to 16 bits in your recording software. Most of the time you still probably wouldn’t hear the difference, but there are some circumstances where the digital error noise caused by truncating those last 8 bits could be audible.
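If you're curious what truncation versus dithering actually looks like, here's a minimal sketch; the tone level and the TPDF dither are standard textbook choices, not any particular converter's algorithm:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
# A quiet tone around -60 dBFS, where quantization error matters most
x = 0.001 * np.sin(2 * np.pi * 1000 * t)
x24 = np.round(x * (2**23 - 1)).astype(np.int64)  # 24-bit samples

# Hard truncation: throw away the low 8 bits; the error is correlated
# with the signal and shows up as gritty harmonic distortion
x16_trunc = (x24 >> 8).astype(np.int16)

# TPDF dither: add triangular noise of +/-1 LSB (at 16 bits) before
# rounding, which decorrelates the error into a benign, steady hiss
tpdf = np.random.uniform(-0.5, 0.5, fs) + np.random.uniform(-0.5, 0.5, fs)
x16_dith = np.round(x24 / 256.0 + tpdf).astype(np.int16)
```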
Another reason is that some people believe that if you have a LOT of tracks in your project, all at 16 bit, then that digital noise floor might all add up and be noticeable. I personally have never experienced that back in my 16 bit days, but I suppose it “might” be possible if all the noise was somehow correlated so that it really did add together to raise the level. 24 bits is just safer because your digital noise floor is a lot lower.
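For what it's worth, even uncorrelated noise floors do add up, just in power rather than amplitude, so the buildup is slow. A rough worked example, assuming 24 identical tracks:

```python
import math

# Uncorrelated noise sums in power: rise = 10 * log10(n_tracks) dB
n_tracks = 24
rise = 10 * math.log10(n_tracks)   # about 13.8 dB
print(f"-96 dBFS per track -> about {-96 + rise:.1f} dBFS summed")
```

Even with 24 tracks, the summed 16 bit floor only reaches about -82 dBFS, which is why it was rarely audible back then, but 24 bits pushes it far lower still.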
Finally, hard disks are so cheap these days that the little bit of extra storage space you need to record at 24 bits versus 16 bits isn’t going to cost you much more, nor is it going to tax modern hard drives very much.
So, the safe bet is just to record everything at 24 bits, since it doesn’t really cost you that much extra in storage space or drive performance, nor does it tax your processing power much more than 16 bits does.
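The storage math backs that up. Assuming uncompressed mono PCM at 44.1 kHz:

```python
# Bytes per second = sample_rate * bytes_per_sample (mono, uncompressed)
for bits in (16, 24):
    mb_per_min = 44100 * (bits // 8) * 60 / 1e6
    print(f"{bits}-bit: ~{mb_per_min:.1f} MB per track-minute")
# 16-bit: ~5.3 MB/min, 24-bit: ~7.9 MB/min -- only a 1.5x increase
```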
In addition, you should ALWAYS be saving your mixes at 24 bit resolution! You could even save them in 32 bit floating point format if your software supports it; most modern DAW software uses 32 bit floating point math for the mix/audio engine anyway. That said, 24 bit fixed point is a safer bet if you are going to send those files somewhere else for mastering.

The reason you want to save your mixes at 24 bits is that even if all your individual tracks are 16 bits, the math involved in summing those tracks together and running plug-ins and other effects will produce results that can’t be represented with just 16 bits. So if you save your mixes at 16 bits, you are doing a lot of rounding off in the math and throwing away extra data, which is usually the low level detail that adds depth to the sound. Then, if you send the mastering engineer only 16 bit mixes and he does additional processing in the mastering stage, those rounding errors really start to add up, and you end up with less than 16 bits of real data at the end of it all. That’s why they always say to save your mixes at 24 bits (or 32 bit floating point), and not to dither down to 16 bits until the very last stage of mastering. That preserves the most dynamic range and detail in your music.
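Here's a tiny sketch of why the mix math outgrows 16 bits; the sample values and gains are made up purely for illustration:

```python
import numpy as np

a = np.int32(12345)           # a sample from one 16-bit track
b = np.int32(-6789)           # a sample from another
mix = 0.75 * a + 0.5 * b      # floating point mix math, like most DAW engines
print(mix)                    # 5864.25 -- the .25 no longer fits the 16-bit grid
print(np.int16(round(mix)))   # 5864: saving at 16 bits rounds that detail away
```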
Now, sample rate conversion is an entirely different subject. Unlike recording resolution (bits), there is no simple “dither” method for converting from one sample rate to another. That process is called sample rate conversion, or SRC for short, and it is MUCH different from dithering. Proper dithering from a 24 bit signal down to 16 bits can actually give you more than 16 bits of real data (too complicated to explain here). But with sample rate conversion, there is no easy way to do it, and you pretty much always lose some sound quality.
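To illustrate that SRC is real signal processing rather than a simple bit trick, here's a sketch using SciPy's polyphase resampler; the random noise is just a stand-in for audio:

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 96000, 44100
x96 = np.random.randn(fs_in)        # one second of stand-in "audio"

# 44100/96000 reduces to 147/320: the resampler interpolates by 147,
# low-pass filters below the new Nyquist limit, then decimates by 320
x44 = resample_poly(x96, up=147, down=320)
print(len(x44))                     # 44100 samples
```

That low-pass filtering step is exactly where the quality of an SRC implementation lives or dies.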
The other thing about higher sample rates is that they take up MUCH more processing power and MUCH more disk space, so unless your computer and hard drives are up to the task, it’s not always worth it. For example, a 96 kHz sample rate takes up twice as much hard disk space per track as 48 kHz, and more than twice as much as 44.1 kHz. You’ll also only be able to run about half the number of plug-ins or VST instruments at higher sample rates, since it takes roughly twice the processing power for your DAW software to operate at the higher sample rate.
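Here's the disk space math for uncompressed 24 bit mono PCM:

```python
# MB per track-minute: sample_rate * 3 bytes/sample * 60 seconds
for fs in (44100, 48000, 96000):
    print(f"{fs} Hz: ~{fs * 3 * 60 / 1e6:.1f} MB/min")
# 44100 Hz: ~7.9, 48000 Hz: ~8.6, 96000 Hz: ~17.3 -- 96 kHz roughly doubles it
```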
If you believe in the Nyquist theorem, then there is absolutely no reason to record at a sample rate higher than 44.1 kHz, since that is more than sufficient to reproduce all the frequencies that humans can hear. Also, since CDs are 44.1 kHz anyway, if you are mixing digitally in the box and do your projects at 44.1 kHz, you can avoid doing sample rate conversions later, which can do more damage to your sound than any possible gain you might get from recording at a higher sample rate.
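The Nyquist arithmetic itself is simple, and it also shows why the anti-alias filter exists: any frequency above half the sample rate doesn't just vanish, it folds back down into the audible band.

```python
fs = 44100
print(fs / 2)        # 22050 Hz: the highest frequency 44.1 kHz can represent

# An unfiltered tone between fs/2 and fs aliases down to (fs - f);
# e.g. a 30 kHz tone sampled at 44.1 kHz masquerades as:
print(fs - 30000)    # 14100 Hz
```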
I personally still record all my projects destined for CD at 44.1 kHz, since I mix digitally and don’t want to do any sample rate conversions. If I’m doing music for video projects, I record at 48 kHz, since that’s the standard audio sample rate for video guys. The only time I might consider 96 kHz would be for a high resolution DVD project, since DVD audio tracks can use a 96 kHz sample rate.
I personally believe in the Nyquist theorem, though, having studied it in college, and think 44.1 kHz is plenty good enough for us. That allows us to reproduce frequencies up to 22.05 kHz, which is beyond the best human hearing, and certainly beyond what most microphones and speakers can capture and play back anyway.
However, you shouldn’t always take everyone else’s word for things. Test it for yourself: record the same material in a controlled test, using the exact same equipment, levels, and everything else, both at 44.1 kHz and again at 96 kHz, and then do a blind listening test to see if you can tell the difference. It’s possible that some converters have been optimized for certain sample rates and may actually sound a bit better at those rates than at others. You would typically find this in cheaper soundcards and converters, as the more expensive converters are generally built to much higher standards and designed to sound the same at any sample rate.
In the old days, when digital was new and converters used analog anti-alias filters, the filter design really could affect the sound, and be very noticeable at lower sample rates, since the filter slope would extend down into the audible range. However, that hasn’t been an issue for many years now, since pretty much all converters these days use oversampling techniques, which initially sample at a very high rate to move the anti-alias filters well up out of the audible range.
This is still a hotly debated topic, though, and there is a lot of information and belief floating around, including widespread myths as well as hype from gear manufacturers who just want to sell us more gear. There are other reasons why higher sample rates could possibly sound better, with the most reasonable theories having more to do with filter design, phase shifts, and other processing phenomena than with any frequencies above 20 kHz actually being useful or perceived in some way (although there are those who believe that as well).
The only way to know for sure is to test your own equipment at various sample rates and try to figure out what sounds better. However, this is VERY difficult to do in a scientific way without introducing personal bias into the results. To start with, you would need a way to record the exact same audio through the exact same system at different sample rates. Then you need a system that can instantly play back each piece of audio without having to switch clocks or do anything fancy that takes time. Finally, you need to do it in a truly double-blind fashion, where somebody switches the playback between pieces of audio without knowing which is which, and you can’t know in advance which is which either. Only then can you get a truly unbiased test and figure out whether you clearly prefer the sound of one over the other, or whether it’s pretty much a 50/50 guessing game.

Of all the people out there who claim to hear a huge difference in sound between sample rates, I believe the vast majority have never done a double-blind test, and if they did, they would not be able to do better than a 50/50 guess. People just want to believe that more is better, and that they have superhuman ears that can tell the difference. That said, I do believe there could be some gear that, either on purpose or by design flaw, clearly sounds different at certain sample rates, and it may not be a very subtle difference. But I don’t believe it’s due to the reproduction of ultrasonic frequencies that we simply can’t hear, no matter how good our ears are.
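If you do attempt the double-blind test described above, the trial logic itself is easy to script. Here's a bare-bones ABX sketch, where play() and get_guess() are hypothetical hooks into your own playback rig:

```python
import random

def run_abx(n_trials, play, get_guess):
    """Each trial, X is secretly A or B; count correct identifications.
    Pure guessing converges on about 50% correct."""
    hits = 0
    for _ in range(n_trials):
        x = random.choice(["A", "B"])
        play(x)                     # blind playback, handled elsewhere
        if get_guess() == x:
            hits += 1
    return hits

# With 16 trials, scoring 14 or better happens only ~0.2% of the time
# by pure chance, so a result like that would actually mean something.
```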
Hope that helps clear things up a bit.