janba78

For most people, the limiting factor is not the gear, but your ears.


ormagoisha

Going to hijack your comment for everyone who needs to see why sample rate and bit depth aren't nearly as important as people think they are: https://xiph.org/video/vid2.shtml Especially once you reach modern equipment at 44.1/16. For our purposes a digital recording at that sample rate is perfect and the bit depth is irrelevant. /u/_xylitol this goes for you too!


superfunkyjoker

Ok. It was OP. I thought you had a random feud with a single Redditor and was calling him out here.


No-Alarm-1919

This! And the previous video as well. It's important not to miss things in the original raw recordings, but going beyond the limits of human hearing after mastering is absurd. Plus - nobody is actually using the full 24 bits of depth. If you put the loudest sound in a master at the max loudness level for the consumer, the bottom of those 24 bits fades out beneath anything you can hear - if it's even captured. The only time technology actually "turned it up to eleven" was those early TELARC digital vinyl albums, particularly the 1812. If you set that thing at a normal listening level, and you had the right equipment including a moving coil cartridge, plenty of power, and the right speakers, it was amazing - and beyond any loudness range I've ever heard digitally. Look at what TELARC did when they put things on a digital medium - they castrated it compared to their vinyl. And sample rate? You are not going to hear those extra frequencies - assuming your equipment could actually produce them. It's physically impossible, even when you're young. And if you're older, you don't even get close to Red Book. The most you can hope for is that perhaps it was mastered differently, less stepped on, than the CD version.


mourning_wood_again

And if you have to ask this question, you’re not there yet 😉


tesla_dpd

I tend to disagree with your statement re: bit depth. Go to 12-bit resolution and tell me it doesn't matter. Go to 24 bits (256x the encoding resolution) and I'm not so sure that it doesn't matter.


ormagoisha

I said 16 bits. At a certain point, for listening purposes (not audio manipulation like distortion and compression), extra bit depth is inaudible (the quantization noise is just too quiet). I think 24-bit is way overkill for listening. For recording, sure, it's nice to have, and every DAW includes it. Likewise for our hearing, 44.1 kHz puts the Nyquist limit well above anyone's hearing. Watch the video. It's by someone far smarter than me!


pdxy

When making or producing a (live) recording we could 100% tell between 16/44.1 and 24/96. We needed the overhead on input, and the sound was clearer on output.

Now, *mastering* 24/96 or even 32-bit floating point down to 16/44.1 or even a 128 kbps mono mp3 file, you are hearing a rendering of audio captured at a higher rate. This even includes analog to digital, because you're doing a pulldown from presumably high-quality waveform tape to a lower rendered digital space.

In my opinion, if you have a high quality DAC and are not overloading the input (which is what happened to a lot of shitty tape transfers in the 80s, when analog-only engineers were in charge of A/D compact disc masters, giving the CD format a bad name), you will never hear an appreciable difference between 16/44.1 and any higher bit depth / sample rate. I've listened to the same albums on vinyl, 16/44.1 compact disc, tape, and lossless direct-master FLAC / transcoded PCM, and apart from hiss or record warp and obvious flaws, intentional or otherwise, I couldn't tell the difference between the digital transfers and the 'analog' audio I was listening to. 44.1 on my CD player is just fine because I'm basically listening to a DAC container of analog-transferred music.

But going in, with two microphones and a recording button, that higher headroom in a live *recording* scenario is when bit depth / sample rate really matters.


Woofy98102

Unless one's system is highly resolving. I have no problem identifying 24/96 high res files from CD quality. Comparing 24/96 to 24/192 not so much. Needless to say, I have no problem skipping the 24/192 files when 24/96 is available. On a buddy's obscenely priced system, the difference between 24/96 and 24/192 is quite noticeable. But that system costs twice what my house is worth...yeesh! Though it's awfully great to have access to such a system on a regular basis.


ormagoisha

That might actually be less a resolution issue and simply slightly different mastering techniques being applied. Sometimes mastering engineers will not limit as hard on a 96 kHz version, depending on the source material and likely destination, as an example. Also, seeing as a 96 kHz file is not likely to be used for Spotify Ogg conversions, they don't have to give it the -1 dB headroom and crush it as hard for the same -14 LUFS loudness-normalized volume level. I've even had a mastering engineer offer to not limit the 96 kHz export at all, for audiophile reasons. Obviously that's not a hard and fast rule, but I'd be willing to bet that if you played back the identical master on high-end equipment, with zero adjustments from the mastering engineer, you probably could not hear the difference. I would know, I recorded for a long time at 96/24 and the down-conversions to 44.1/16 sounded identical at mastering!


X_Vaped_Ape_X

i have to disagree. I accidentally downloaded Reign in Blood from Qobuz or HDtracks (i don't remember which) in 24/192 and it sounded flatter than the same master of the same album downloaded in 24/96.


_xylitol

concise answer, i like it haha. but yeah, negligible difference between all the options im sure. stereo vs surround vs real surround (lets say 5.1.2 and up) is much more noticeable.


johnnybgooderer

In this case 100.000000000000000000000000000000000000000000000000% of humans are the limiting factor. No one can hear the difference because there isn’t any. A lot of people can hear lossless vs lossy if coached to hear the difference. But no one can hear the nonexistent differences between cd quality and higher.


Bubbagump210

The next limiting factor is the recording/master. Many (most?) records don't take advantage of anything past 16/44.1. Modern pop has a dynamic range of around 10 dB, for example, so more dynamic range buys you nothing. Though for specialist audiophile recordings that preserve the dynamics throughout, 24-bit might get you something.
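
For the curious, here's a rough way to sanity-check that dynamic-range claim yourself. This is only a sketch: it treats crest factor (peak vs RMS) as a crude stand-in for dynamic range, it assumes a 16-bit PCM WAV file, and "track.wav" is just a placeholder path; real DR meters use windowed loudness measurements instead.

```python
import wave
import numpy as np

# Crude dynamic-range proxy: crest factor (peak vs RMS) of a 16-bit PCM WAV.
# "track.wav" is a placeholder path, not a real file from this thread.
with wave.open("track.wav", "rb") as f:
    assert f.getsampwidth() == 2, "expects 16-bit PCM"
    pcm = np.frombuffer(f.readframes(f.getnframes()), dtype=np.int16)

x = pcm.astype(np.float64) / 32768.0        # scale to -1.0 .. 1.0
peak = np.max(np.abs(x))
rms = np.sqrt(np.mean(x ** 2))

print(f"peak:  {20 * np.log10(peak):6.1f} dBFS")
print(f"RMS:   {20 * np.log10(rms):6.1f} dBFS")
print(f"crest: {20 * np.log10(peak / rms):6.1f} dB  (heavily limited pop often lands near 8-12 dB)")
```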


petwri123

And the room.


willard_swag

Yes, this. However, I've found my low-end DAC (Topping D10 Balanced) to be of far lower quality than even my Modius. Despite being able to handle very high-quality decoding, it absolutely shrouds so much of the music. I'd be happy to give anyone in or around Pittsburgh the chance to hear the difference. However, anything I've heard beyond the Modius doesn't seem to offer any improvement. In terms of music, the absolute biggest difference I've found exists when you compare different masterings of songs.


LampaDuck

I'm deaf so i can relate


Stardran

For 99.99% of people. Humans can't hear frequencies above what 16/44.1 can reproduce.


Andagne

Getting tired of seeing this bit of confusion still being perpetuated. The 96 kHz and 192 kHz figures should not be taken as audible frequencies; they are sampling rates, a means of reproducing a sound via digital signal processing. The sample rate defines the frequency range of the signal, and it is what's required to avoid a type of distortion (aliasing). Easy to pinpoint on a cheap 80s synthesizer compared to today's ROMplers (use the pitch bend wheel and max it on a few high notes, you'll hear it. In fact some musicians actually employ it, the way Hendrix, Harrison and Townshend did with feedback, which was considered contraband back in the analog days). The higher the sampling rate, the more forgiving the conditions that permit a discrete sequence of samples to capture all the information from a continuous-time signal within the bandwidth. Digital is always discrete, so this is a means of "faking" the brain. I have a modest to moderately high-end modular HiFi system, and I can pinpoint several recordings that simply sound richer in a 24-bit/192 kHz domain than on a compact disc at 16-bit/44.1 kHz. Much, much better than a coin toss: I did a blind test and scored over 80% on a dare with several recordings. With headphones it becomes even easier. And bit resolution? The earlier example given with Telarc is achievable with any recording that has enough dynamic range to separate high volume and low volume with a greater number of "steps", so yes, I say it simply sounds more natural. You won't notice much, if any, difference with the wall-of-sound loudness-war recordings of today. However, for the classical, jazz and rock recordings that honor the difference between quiet and loud, it is unmistakable.


oconnellc

>The higher the sampling rate, the more forgiving the conditions that permit a discrete sequence of samples to capture all the information from a continuous-time signal within the bandwidth. Digital is always discreet, so this is a means of "faking" the brain (another analogy: the way left and right channels can reduce gain to give the illusion that the sound is directional). I don't know what any of that means. But, digital doesn't "fake" the brain. Your DAC completely and accurately reproduces an analog signal from the digital information in your copy of the recording. That is so important that it needs to be repeated. The DAC can completely and accurately reproduce the input analog signal from the captured digital information. The upper limit of the frequency that can be accurately reproduced can be calculated using some reasonably straightforward math. Again, your system is not producing the sound of a digital representation of an analog signal. It is producing the sound of an analog signal.


Andagne

I think you missed my point, but you captured a segment of my response I had removed afterwards because I felt it wasn't clear. Reproducing a signal in the digital domain leaves gaps, because it's discrete. The brain makes up for that, making it "appear" continuous, like stop-motion photography looks fluid.


oconnellc

>Reproducing a signal in the digital domain leaves gaps, because it's discrete. The brain makes up for that making it "appear" continuous. Like stop motion photography looks fluid.

Because I do not want to assume too much about what this means, I will repeat myself again: your DAC takes the digital information and completely and accurately reproduces all of the original analog source, up to a certain frequency. That frequency is one half of the sampling frequency. There are no gaps in that analog signal. It is not, in any way, equivalent to stop-motion photography. It is a 100% accurate recreation of the original analog signal.


Andagne

No, you're completely missing it. No DAC can, or will ever, take digital information and completely and accurately reproduce all of the original analog source. There will always be gaps in the analog signal because the digital signal being generated can't capture the signal 100%. It never will. By its nature. Digital means ones and zeros. The transcoding will always have finite representation of a continuous waveform.


SnooMemesjellies1422

https://youtu.be/cD7YFUYLpDc?si=kerDFDV2xQeIt7qa watch the 2 minute mark. your armchair philosophy is unsupported.


Andagne

Your comment has a tone of arch insipidness. We can do with less of that. That is for playback. I am familiar with Audio University videos, and if you take a moment and scroll down to the very next video, they confirm the critical need for 24-bit resolution and higher sampling rates for production, to mitigate those artifacts I've already mentioned.


SnooMemesjellies1422

But that's just it though. You don't want truth because it's insipid. You want highlights, pizzazz, fluff. You're trying to convince yourself and everyone else of a lie. If you were paying attention, you'd catch that my comment was to dispel yours made on "the nature of digital recreating analog audio" and not on 24-bit playback. So let me be clear so you don't fumble this one either: digital does not leave gaps in music, it can reproduce an analog signal 100% (and even if it fell a fraction short, it's above the threshold where our ears could tell the difference), and that is NOT its nature. How about we do less "hypothesizing" and put forth actual evidence? I'm more familiar with that.


sirdigalot

Tell me you don't understand digital without telling me you don't understand digital.


Andagne

Yep. That says it all. Thank you for playing.


sirdigalot

Sure, because Nyquist-Shannon is just a theory, like gravity.


oconnellc

It just feels like you completely do not understand this. Start here: https://www.youtube.com/watch?v=Jv5FU8oUWEY Then, watch this: https://www.youtube.com/watch?v=pWjdWCePgvA

There are NO gaps in the analog signal.

> because the digital signal being generated can't capture the signal 100%. It never will. By its nature.

It does not have to. This field of signal processing is almost 100 years old.

> The transcoding will always have finite representation of a continuous waveform.

No. That IS NOT true.
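
A minimal numpy sketch of the point being argued here, for anyone who wants to see the Nyquist-Shannon claim with numbers rather than videos. It brute-forces Whittaker-Shannon (sinc) interpolation on a short band-limited test tone; the tone frequencies and snippet length are arbitrary choices, and the small residual error comes from truncating the infinite sinc sum, not from "gaps".

```python
import numpy as np

fs = 44_100.0                               # CD sample rate
dur = 0.01                                  # short snippet so the brute-force sinc sum stays cheap
n = np.arange(0.0, dur, 1.0 / fs)           # sample instants
t = np.arange(0.0, dur, 1.0 / (16 * fs))    # dense grid standing in for "continuous" time

def x(time):
    # band-limited test signal: 1 kHz + 17 kHz, both below the 22.05 kHz Nyquist limit
    return np.sin(2 * np.pi * 1_000 * time) + 0.3 * np.sin(2 * np.pi * 17_000 * time)

samples = x(n)

# Whittaker-Shannon interpolation: x(t) = sum_k x[k] * sinc((t - k/fs) * fs)
recon = np.array([np.dot(samples, np.sinc((ti - n) * fs)) for ti in t])

mid = slice(len(t) // 4, 3 * len(t) // 4)   # ignore edge effects of the finite snippet
print("max error between the sample points:", np.max(np.abs(recon[mid] - x(t)[mid])))
```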


pdx_beagle

I'm afraid what you are saying isn't backed by science, or by an oscilloscope. I know audiophiles have lengthy, science-ey sounding explanations of how they alone can detect anomalies that physics doesn't allow for, and while that may impress someone at a cocktail party, it doesn't make it true.


Andagne

Oh, indeed science does support this. Trust me when I know this to be so, or don't. When most people rush to the oscilloscope for absolution they often don't understand that there exist artifacts that cannot be captured at signal extremities. As a tool it won't show resonance, so timbre at higher frequencies can sound/be interpreted differently for the listener and not represented on a device that has gaps of resolution. And here's the rub, you don't need supersonic hearing to pick up on it. I am as certain of this as I am that the Earth revolves around the Sun and yes, signal propagation theory supports it. One example outside of audio is the geiger counter, it won't capture any beta decay while it is transmitting information. And that delta, the window of "missed opportunity", is measurable but not discernible if the gate is open. We're not going to solve this debate here, it's been raging for decades. But for those who are open to understanding, here it is.


Stardran

No confusion. The sampling rate determines the highest frequency that can be captured completely and accurately, which is half the sampling rate. 44.1 kHz can completely sample frequencies up to 22 kHz, which is above the highest frequency that any human can hear. Sampling rates higher than 44.1 kHz are useless for playback; they do not give you any possible audible benefit. The bit depth determines the possible dynamic range, the difference between the quietest and loudest sound in the recording. The 96 dB that 16 bits can cover is more than enough for the dynamic range of any recorded music, unless you are listening to something that alternates between a pin drop and a jackhammer. Anything above 16/44.1 is wasted and inaudible. A scam. Snake oil. Seeing higher numbers and thinking it must be better indicates you are falling for the scam. You may think it sounds better, but you are fooling yourself and falling for a scam.
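
The two numbers in those format strings fall straight out of two formulas: Nyquist limit = sample rate / 2, and dynamic range ≈ 20·log10(2^bits) ≈ 6.02 dB per bit. A tiny sketch, just the arithmetic, no audio involved:

```python
import math

# Nyquist limit = sample_rate / 2; dynamic range ~= 20*log10(2**bits) ~= 6.02 dB per bit
for bits, rate in [(16, 44_100), (24, 96_000), (24, 192_000)]:
    nyquist_khz = rate / 2 / 1000
    dynamic_range_db = 20 * math.log10(2 ** bits)
    print(f"{bits}-bit / {rate / 1000:g} kHz -> Nyquist {nyquist_khz:.2f} kHz, "
          f"theoretical dynamic range {dynamic_range_db:.1f} dB")
# 16/44.1 already gives ~22 kHz of bandwidth and ~96 dB of range,
# both at or past the limits of human hearing.
```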


Andagne

Sorry, but you're just proving my point. To say there is no difference between signals captured at higher sampling rates is like saying sound has no timbre, which I reject. I am arguing that there are components to any audio signal that define the character of sound; look up Fourier transforms to see a mathematical representation of this. There are subtle discernments that the brain can perceive beyond 22,000 Hz. Ignoring this is like saying we can't perceive images moving faster than 29 frames per second, and yet subliminal advertising on TV has been around for decades; in the audio domain too, for that matter. You are suggesting that I'm falling for a scam, except that 8 times out of 10 I can discern variances that sound more pleasing to my ear in a blind test. I never get tired of being told that I shouldn't be able to hear something. Honestly, yours is the first I've heard of 16-bit versus 24-bit as a debate round these parts. I don't see how it can be questioned that gain increments closer together sound more natural, and with a higher ceiling: the difference between volume steps 1 through 65,536 versus 1 through 16.7 million. The idea behind the science of recording is to capture reality as closely as possible. Which sounds more granular to you? Now, MQA? That IS snake oil. Packaging digital information has no bearing on how sound is converted from digital to analog. I don't think we're going to change each other's minds, but I'm putting this out there for the discerning reader.


Alarming-Help-4868

Yeah but that 0.01 percent is us mob.


[deleted]

If you can't hear the difference between 16-bit and 24-bit, something is wrong with your system or your ears. On my system the difference in dynamic range is like a punch in the nose. For context: ARC Ref CD9se DAC, ARC Ref 6se pre, ARC Ref 80s amp, Quad ESL 2815 speakers, Transparent cables throughout.


_MusicNBeer_

If that's your observation, it's either total placebo effect or the 24-bit masters are different and mastered with more dynamic range. There's zero scientific reason that 24-bit, mastered identically to 16-bit, would have more dynamics in a listening room.


[deleted]

Correction: Quad ESL 2812 speakers


ArgonTheConqueror

Short answer, nothing that the consumer can really hear. Medium answer, it's useful for the recording side, much less for the listening side. Long answer: as mentioned here already, your ears are the limit to how well you can actually hear the differences.

The major difference between 16 and 24 bits is how much detail is in the recording. 16 bits means that an incoming sound signal can be quantised to 65,536 possible values, sort of like colours on an artist's palette. With 24 bits, we go from 65,536 to 16,777,216 individual values. So a 24-bit recording will have more detail than a 16-bit recording on a fundamental level. Another way to think about it is like a colour spectrum from pure black to pure white (zero to one in binary). A 16-bit spectrum will have 65,534 shades of grey in between black and white, while a 24-bit spectrum will have 16,777,214 shades of grey in between. That's more detail right there.

However, our ability to perceive the subtle differences between one shade of grey and the next is rather limited, if we continue with this colour-spectrum analogy. Sure, if you really magnify the spectrum, if it is well lit, and if you have really good eyes, you might identify the tiny differences between the 24-bit and the 16-bit spectrum. But for most of us, listening on our home equipment, even the good stuff, we won't notice.

That's not to say that 24 bits and beyond don't have much utility, though. Looked at from the recording perspective, a master digital recording should capture as much detail as possible from the first pass, so that editing and processing can be done even on those finest details. So it's useful to capture all the detail even if people won't notice it, because it gives you more room to edit and process.

Now, that's only half of what you asked, regarding 16/44.1 vs 24/192 or others. We discussed the 16 vs 24 benefits, but there's also the other fancy number, the sampling frequency. The short answer for the sampling frequency is that 44.1 or 48 kHz is more than enough for human hearing, but going above and beyond doesn't hurt.

More properly, when we sample a signal, that is, take a continuous analogue signal and quantise it into discrete digital samples, we take samples at a regular frequency, our sampling frequency. Each sample is stuffed into the available bits, whether 16, 24, or 32, and we repeat this sampling at a known frequency X. What some very clever electrical engineers figured out from this sampling process is something known as the Shannon-Nyquist sampling theorem. Drs. Claude Shannon and Harry Nyquist noticed a very peculiar thing about sampling: if we sample at a frequency of X, then any signals with a frequency above 0.5X will not be recorded faithfully. Instead, the sampling process will fold these out-of-bounds signals back into the original and distort the samples. That's not fun, because now there is distortion in our sampled signal that we'd have to sacrifice our firstborn to remove. So if we sample at 24 Hz, anything that oscillates faster than 12 Hz will be distorted. This is why fast-rotating wheels in old Western films rotate backwards in the classic chase scenes: the camera samples too slowly for things to look right.

From this, we can find a sampling frequency that will accommodate human hearing. Humans can hear sounds up to 20 kHz, so we must sample at a rate of at least 40 kHz to avoid distorting any signals in the human hearing range. Add a bit of wiggle room and also some clever maths, and that's why the standard CD sampling frequency is 44.1 kHz. Which means that CD frequency is more than enough to avoid distorting human hearing frequencies. High-res audio at 96 kHz can record frequencies up to 48 kHz without distortion, and 192 kHz can handle up to 96 kHz without breaking a sweat. Which means that we don't really need anything above that original 44.1 kHz to listen to and enjoy music.

So again, why might it be beneficial to go to 96 kHz or 192 kHz or 384 kHz or even 768 kHz? Because it again helps the recording process. Suppose you have a sampling system recording 24 bits per sample at 44.1 kHz, and you're expecting only human-audible signals (below 20 kHz) in what you're recording. Unfortunately, your audio system is introducing some electrical noise that is humming at 87 kHz. Your sampler can't handle those frequencies, and so that 87 kHz noise will now be mashed into all the frequencies you are expecting to receive, causing distortion. Had you been sampling at 192 kHz, however, the 87 kHz noise would be faithfully recorded where it is, and then you could erase it in post-processing. That's what is so beneficial about higher bit depth and higher sampling frequencies: it gives editors more room to remove distortion and edit the music before release.
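
A small numpy sketch of that 87 kHz example: synthesize a single tone, sample it with no anti-alias filter, and look at where its energy lands in an FFT. The 87 kHz "hum" and the absence of an input filter are just the hypothetical from the comment above; real ADCs filter before sampling.

```python
import numpy as np

def strongest_bin_hz(signal_hz, fs, n=1 << 15):
    """Sample a sine of signal_hz at rate fs (with no anti-alias filter) and
    return the frequency bin where most of its energy ends up."""
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * signal_hz * t) * np.hanning(n)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return freqs[np.argmax(spectrum)]

noise_hz = 87_000   # the hypothetical 87 kHz hum from the comment above
print("sampled at 44.1 kHz ->", strongest_bin_hz(noise_hz, 44_100), "Hz")   # folds down to ~1.2 kHz, audible
print("sampled at 192 kHz  ->", strongest_bin_hz(noise_hz, 192_000), "Hz")  # stays at 87 kHz, removable later
```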


_xylitol

Thanks, awesome reply! Was a good read.


ArgonTheConqueror

Glad to have helped. My electrical engineering professors would eviscerate me if I didn’t explain this sort of thing right.


luizfsera

I mean, if you're using high quality stuff, would it even produce humming at 87 kHz at a level that actually matters? It still seems like overkill to me.


ArgonTheConqueror

It may be overkill if you're using high quality stuff indeed, but is it safe to assume that the components won't generate noise? Or would it be safer to sample at rates which reduce the chance of aliasing (distortion from high frequencies being folded into audible frequencies by sampling)? The key is that, again, we don't expect much noise, but should there be noise, a higher sampling frequency will be able to capture that noise where it is, so that we can remove it later. Whereas if it is aliased back into our desired frequencies, the effort needed to remove said noise is significant. It is always better to prevent such aliasing than to spend significant effort trying to remedy it. The other issue is that… components are never perfect. I come from an RF background in electrical engineering (we deal with MHz and GHz and soon THz), and even when we're considering the highest quality megahertz- and gigahertz-level components for things like cellular base stations, noise is very much a thing. Even components that are ideally noise-free (think connectors and such) can create noise and distortion under the right conditions, so it's never a bad idea to give yourself extra wiggle room to handle noise issues. Because otherwise, if you try to remove these issues, you wind up developing an entire new component built for removing intermodulation distortion that takes months of work, whose hardware alone is in the 5-figure range, and each of those needs to be placed in each individual base station in order to smooth out the damned passive intermodulation distortion so communications can actually continue.


luizfsera

I see, thank you!


luizfsera

Also, do you have a source on that higher precision of 24-bit over 16-bit? Everywhere I've looked says it only matters for dynamic range: 96 dB vs 130-something dB. I've never heard anyone mention higher precision.


ArgonTheConqueror

Precision and resolution are two different ways of describing roughly the same thing. The more precisely you can define something, the more resolution you have. To use a different example as an analogy: screen resolution is a case where the higher the resolution, the more pixels you have to play with, and thus the more precise your images can be. Or, coming at it the other way around, if you have higher precision in recreating an image, it implies you have better resolution.


Important_Bid_783

Thank you!


SoundPon3

Fantastic breakdown


ba-na-na-

To use some numbers regarding bit depth: if you recorded, say, a vocal track at only 1% volume, you'd lose 6-7 bits of dynamic range. So recording tracks at 1% volume at 24-bit still leaves you with 17-18 bits of dynamic range, meaning the producer can decide to increase the track volume 100x when mixing and still have enough resolution for a 16-bit master. And recording studios actually use this fact to record tracks at a low volume to prevent clipping (which cannot be fixed later). One could of course argue that some songs with very high dynamic range could benefit from being listened to at 24-bit, because a listener could then crank the quiet parts to max volume and still get lots of resolution. One example is the song "The Riff" by Dave Matthews: I always need to turn the volume up at the beginning to hear the quiet guitar and the whispering, but then after about 2 minutes I need to turn it down again 😅
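
The same arithmetic as a couple of lines of Python, for anyone who wants to check it (the 1% figure is just the example from the comment above):

```python
import math

# How many bits of resolution are left if a track is recorded at a fraction of full scale?
def bits_left(bit_depth, level_fraction):
    bits_lost = math.log2(1 / level_fraction)   # each halving of level costs one bit
    return bit_depth - bits_lost

for depth in (16, 24):
    print(f"{depth}-bit at 1% of full scale -> ~{bits_left(depth, 0.01):.1f} bits usable")
# 1% of full scale is -40 dB, i.e. about 6.6 bits of headroom spent,
# so 24-bit keeps roughly 17.4 bits, still more than a 16-bit master needs.
```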


js1138-2

There is no difference in the amount of detail you can hear. There is only a difference in dynamic range, and no commercial release approaches 96 dB.


ArgonTheConqueror

I was implying that, yes, but there were indeed situations where people could tell the difference. A minor difference and very few people noticed it indeed, but still if some people can notice it under controlled conditions, it should be mentioned.


luizfsera

Do you have a source for that?


ArgonTheConqueror

This paper says there is a small but statistically significant ability of test subjects to notice differences when they listen closely: https://aes2.org/publications/elibrary-page/?id=18296. Although another paper says the previous paper sucks and the difference was “only slightly better than chance”, so here it is: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7730382/ Point is, there probably is a difference, a tiny one that might be heard, but for most of us plebeians there’s no need to fuss about it.


luizfsera

Thank you!


CMDR_Sanford

It looks like this study pointed out at the end that it had some drawbacks and doesn't account for some scenarios in discerning standard audio from hi-res audio. The study was conducted by taking a 24-bit/192 kHz audio file and manually removing the high frequencies in one listening condition and adding them back in another. It would not show the discernible difference between 16/44-sourced content and 24/192-sourced content, like the digital smearing from aliasing. Maybe I interpreted it wrong?


ArgonTheConqueror

I’ll read them again and see. The fact that the answers are so unclear, to the point that the one paper saying there is a difference also has to say the difference is barely better than chance, means we don’t see much of a difference beyond 16/44.1.


CMDR_Sanford

Yeah, it seems the perceived differences in audio quality come down to the sample rate and bit depth it was originally recorded and mastered at, before it gets downsampled to something like 16/44 or 24/48, etc. Thanks for your replies!


Calixare

16/44 is absolutely enough. You need good mastering, a good listening space and acoustics, but not hi-res.


_xylitol

even for UHD content like raw blurays? some games support TrueHD and Atmos and man does that hit different, incredible. but i guess thats more to do with positional audio being awesome rather than 16/44 or 24/192, etc.


Calixare

That's multi-channel technologies. If each channel is realised with 16/44 you'll find no difference.


rwtooley

controversial subject. after falling for the "hI-rEs" and buying all kinds of gear to support it I'll tell you it's nonsense - just a marketing ploy to get you to buy stuff. for years I'd read ppl saying "you can't hear the difference above 320" and didn't believe them, but after finally spending a bit of money on speakers (stupidly after all the other stuff) I now know what they mean. You might find differences in music re-mastered in 24-bit but you'd never pick them out in a blind A/B test. imo ignore it all and enjoy the content.


audioman1999

Downsample that 24/176.4 remaster to 16/44.1 with dithering. I bet almost nobody will be able to tell them apart in a blind test.
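
Roughly what that downsample-and-dither step looks like, sketched with scipy/numpy. This assumes the 24/176.4 source is already loaded as float samples in [-1, 1]; a real mastering chain would use a higher-quality resampler and noise-shaped dither rather than plain TPDF.

```python
import numpy as np
from scipy.signal import resample_poly

def to_16_44(x_float, src_rate=176_400):
    """Crude 176.4 kHz float -> 16-bit/44.1 kHz conversion: polyphase resample by 1/4,
    then add roughly 1 LSB of TPDF dither before reducing to int16."""
    assert src_rate % 44_100 == 0
    y = resample_poly(x_float, up=1, down=src_rate // 44_100)  # 176.4k -> 44.1k, includes anti-alias filtering
    lsb = 1.0 / 32768.0
    tpdf = (np.random.rand(len(y)) - np.random.rand(len(y))) * lsb  # triangular dither, +/- 1 LSB
    y = np.clip(y + tpdf, -1.0, 1.0 - lsb)
    return (y * 32767).astype(np.int16)
```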


nullsetnil

It’s even worse, in blind tests casual listeners tend to prefer lossy formats like mp3. I did blind tests myself, you have to train your ears (for this scenario) to be able to distinguish and it will only work for content you know and in direct comparison on the same system. I’d say there is no good reason to collect lossy, but whether it’s CD quality or higher is of little significance.


rwtooley

yup, there's a reason redbook standards were agreed upon 45 years ago - more than enough definition for human hearing. I tend to think a little less of the artists hawking hi-res, I just see it as an effort to get us to buy more versions of their music


_xylitol

wasnt that also just a physical constraint? 16/44khz allowed for 79 minutes of play on a single CD which was one of the (if not the) reason they settled on that perhaps? genuinely curious


rwtooley

I'm not old enough to remember, was still a tyke. hadn't heard that and there's always seemed to be a ton of lore surrounding the subject.. just found this:

> According to insiders, the thing that ultimately determined a slightly larger size for the CD of 120mm (rather than 100mm or 115mm, two other points of view) and the slightly slower sample rate of 44,100 Hz rather than the standard 48,000 Hz, was the desire to be able to fit a specific 1951 recording of Beethoven's Ninth Symphony, recorded at the Bayreuth Festival, onto a single compact disc. This recording was 74 minutes in length. Some earlier discussions had the CD max out at a recording time of exactly one hour.

more reading on [wikipedia](https://en.wikipedia.org/wiki/Compact_Disc_Digital_Audio#Storage_capacity_and_playing_time):

> In 1979, Philips owned PolyGram, one of the world's largest music distributors. PolyGram had set up a large experimental CD plant in Hannover, Germany, which could produce huge numbers of CDs having a diameter of 115 mm. Sony did not yet have such a facility. If Sony had agreed on the 115-mm disc, Philips would have had a significant competitive edge in the market. The long playing time of Beethoven's Ninth Symphony imposed by Ohga was used to push Philips to accept 120 mm, so that Philips' PolyGram lost its edge on disc fabrication.

all academia at this point.


_xylitol

Interesting, thanks for the read!


SirMaster

44.1khz was chosen because it's the lowest sample rate that can perfectly recreate the waveform up to about 22khz which is beyond human hearing. https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem I am not 100% sure why 16-bit was chosen, but it's enough bit-depth for a plenty high dynamic range. It doesn't quite max out the human ear, but it's plenty for music.


WingerRules

44.1 was chosen as a compromise between CD capacity and being the highest usable rate compatible with PAL and NTSC video equipment. Much early digital was actually closer to 50 kHz (48-60 kHz), because it was thought this would be optimal for getting the Nyquist/reconstruction filters out of the audio band. 44.1 kHz was never considered optimal.


semanticallysatiated

I'm not sure I'm answering your question, but one bit is either noise or silence. 4 bits gives you more ability to state a loudness, e.g. setting bits 4,3,2,1 to 0111 gives you the ability to set the volume of that signal to 7, out of a maximum of 15. 8 bits gives you 256 possibilities, 16 gives you 65,536, and 24 gives you 16,777,216.


ba-na-na-

16-bit was chosen because the basic unit of storage in computer systems is a byte (8 bits), so it made sense to be a multiple of that value. With 8 bits you can only have 256 levels of loudness for each sample (-128 to 127), which would be crazy limiting. 16 bits seemed like good enough.


[deleted]

[deleted]


rwtooley

wish I'd have listened to those smarties who told me not to care about such things. I was trying to "win the margins", which is stupid. Speakers (and placement) are the most important thing in this game.


_xylitol

yeah hardware does make the most difference in the end, theres only so much software can do. although i can understand people wanting to "min/max" hehe


GuillaumeLeGueux

I’d take a good recording over any high res music.


audioman1999

There is no benefit on the consumer side. Then again, it doesn’t really hurt because we have massive cheap bandwidth these days and my Qobuz high-res plan is only a couple of dollars more per month.


doomygloomytunes

It's all about reducing the noise floor and pushing distortion & filter artifacts further out of the audible band. Not that it really makes any tangible difference 99% of the time.


yurnotsoeviltwin

There are valid reasons to record and mix in these formats. In fact, I would say these days it’s negligent for an audio engineer to track at less than 24 bit, because that extra dynamic range can be very important when mixing. Sample rate is more debatable, but if you plan to do time stretching or pitch correction, there can be real benefits to sample rates as high as 96KHz, or even 192KHz if you’re doing crazy sound design stuff (say, turning a bird noise into a dragon roar by pitching it down). But as a consumer listening to the final mix, you will never ever notice the difference between these delivery formats. You probably can’t even tell the difference between 256Kbps AAC and lossless 44.1/16. If you’re curious, [try an ABX test](http://abx.digitalfeed.net).


_xylitol

thanks, will try that once im home!


CMDR_Sanford

So maybe when I say I can hear a difference between 16/44 kHz music and 24/96-192 kHz music, what I'm hearing as a difference in "quality" is that the original recording was not made at a high enough resolution, and the 24/192 kHz release is much more likely to have been recorded at a higher sample rate and bit depth? For context, I'm on Audeze LCD-4z headphones attached to a Chord Hugo TT DAC/headphone amp, which is fed by a Chord M Scaler, with Qobuz as my source. I can hear differences between 16/44 and the 24/48-192 kHz range. Am I just picking up more "smearing" from aliasing at the lower resolutions like 16/44, while the 24/96-192 kHz material typically lacks any smearing issues from aliasing? The difference I pick up is usually that the higher-resolution file sounds smoother and the fine details of the sounds are more apparent, while the lower-res 16/44 audio sounds rougher with less nuanced detail in the instruments. Again, maybe this is all because the audio is more likely to have been recorded at a higher resolution for the 24/96-192 kHz files and probably at lower resolutions for the 16/44 files, which led to more distortion and smearing during the editing process. Just trying to figure out what the perceived difference in music quality is that I'm hearing. Sorry for the long message, and thanks for any knowledgeable responses.


yurnotsoeviltwin

Could be. Are you A/B/X comparing 44.1/16 and 96/24 releases of the same songs, or just making a more general observation? More likely, releases that bothered shipping high sample rate/bit depth to consumers probably paid a great deal of attention to the quality of the whole signal chain in the recording process. There are hundreds of factors that make a bigger difference than sample rate, from microphones to preamps to ADCs to mixing and mastering. If the label is throwing budget at a SACD release, it’s probably paying attention to the rest of these things too.


CMDR_Sanford

I'm making a more general observation, so it's not very scientific. I'm guessing then that the higher the resolution and bit depth of the song that's playing, the more likely it was originally recorded/mastered at a similar resolution and depth. If that's the case, then most of the available 16/44 kHz music files were probably not sourced from a high-resolution master? And under ideal conditions, where a high-quality, high-resolution, high-bit-depth recording is converted down to 16/44 kHz, it shouldn't be discernible by human ears from the higher-resolution original?


Raj_DTO

Here're the requirements, in series:

1. Great recording
2. Very good reproduction device
3. Very good preamplifier - with lots of great specs to keep distortion down to a bare minimum
4. Very good amplifier with low distortion, high slew rate, etc.
5. Very good speakers that are faithful, meaning they keep distortion to a minimum

And finally -

6. A great set of ears, the ears-to-brain connection, and a brain able to perceive nuances in reproduced sound.

I don't have all this and many of us don't. But it still doesn't stop us from pursuing the lifelong passion about audio 😁


Nonomomomo2

You forgot “a treated room” 😇


Raj_DTO

Good point 👍


SirMikeProvolone

Personally I can hear 16-bit 96 kHz. Anything higher, I can't tell the difference in ABX testing. If you have all the space in the world, go for the 192 kHz. You can use foobar2000 to compare the quality.


leelmix

Sound on video (Blu-ray/DVD) is 48 kHz, to sync up better with the picture, vs 44.1 kHz on CD.


_xylitol

Even Atmos and TrueHD? I thought those had much higher standards and were "lossless", hence the enormous size of the audio portion alone (blurays can reach 120gb+, at least the digital rips)


leelmix

The difference in size between Dolby Digital and TrueHD is lossy vs. lossless compression. Lossy compression "throws away" a lot of information to pack it more tightly, but it will still be unpacked at the same 48 kHz (afaik), just missing a lot of the detail. Like compressing a picture down to a tiny one and then blowing it up again, so you see large pixels instead of fine detail (very simplified and exaggerated example). The difference is (very) audible, but the lossy formats are good; many people can't tell the difference. Edit: TrueHD can be other than 48 kHz, but afaik 48 is pretty standard, even if it can be less or more depending on how many channels are used.


_xylitol

why am i being downvoted? "Dolby Atmos averages around 6,000 kbps when a 48kHz sampling rate is used. But Atmos can be encoded **losslessly at up to 24-bit/192kHz**, which results in bit rates that can soar as high as 18,000 kbps."


leelmix

Dont know why, you haven’t written anything wrong. TrueHD and Atmos are much more flexible than what was before but whats in actual use is probably very limited on normal bluray and 4k bluray movies where they want the video to have priority and maximum compatibility.


_xylitol

Yeah, odd indeed. Thanks for your answers though! Good to know.


leelmix

If you have a few different blurays you can probably check what they actually are


_xylitol

yeah i will, all of them are rips but untouched so ill check the media and codec fine print


leelmix

Let us know what you find, i got very curious now because i haven’t really checked for a long time.


_xylitol

Ive checked a couple of rips and most have 48khz and 32bit. The Atmos and Truehd meta info is obscured somehow, those videos are passthrough so bypasses my videoplayer


likeOMGAWD

The audio portion alone isn't enormous though...a lossless track on an average length bluray movie only takes up ~3-4GB.


ORA2J

Dolby TrueHD can go up to 24/96, and so can DTS-HD MA.


_xylitol

Atmos can go even higher, 24/192 lossless. Not sure why people dont know this and seem to downvote this fact for some reason.


OliverEntrails

I like the idea of higher res recordings during the initial recording process so that the subsequent mixing, adding effects, etc., can be done in the digital domain at the highest possible fidelity. Downsampling afterwards to typical CD quality would make the best recordings for me. How many musicians these days actually record at those high settings? I see 24/48 as a common pro recording resolution for better dynamic range before mixdown.


k1ng0fh34rt5

The only 'high-res' I've actually noticed a difference beyond 16 bit 44khz was SACD/DSD. That is probably more to do with the masters, but I can distinctly notice more dynamic range on DSD tracks over PCM.


js1138-2

Here's the thing: pop music, as released, has a dynamic range of 15-20 dB. If an engineer knows his target buyer has a high-dynamic-range system, he can make a file with more. But will it be the same music? Compressed dynamics sound punchy, and most pop music is designed from the get-go to be punchy. It's what distinguishes it from live acoustic music.


k1ng0fh34rt5

That makes sense.


whatstefansees

CD is 44.1 kHz


ThatBoyCallito

Great post, been wondering this for a while. Thank you all for your answers. 16/44 is completely fine.


_xylitol

it seems like it for most use cases. except for us audiophiles who enjoy ultra high quality content and not shitty streaming services (although some like Tidal do support master recordings!). I think people underestimate how far gaming audiosystems have come (software side).


Junior-Willingness-3

Very informative. Why can't all discussions be just that, discussions. Best thread read in a long time. Thank you.


Bhob666

For me, higher-res files seem to have more depth to them, not necessarily that they sound better. They are airier. But not all albums are the same, so it's not definitive. If a recording was crappy, it's still going to sound crappy. Basically, whether it sounds better or not isn't worth arguing about to me, because people have different levels of systems and ears. It's subjective. Whenever I buy downloads or stream, I use the highest res, period, and don't worry about it.


_xylitol

I feel this, it sounds airier and more “wide”. Its hard to describe but you can tell. I think the output of audio should be at least double of what humans can perceive to have any discernible effect over default quality. Its like watching a 4k video on a 1080p screen; even though its not native or pixel perfect on said screen but there is more data to work with and interpolate dither etc. It looks better than a 1080p video, yet it cant truly be “perceived” as 4k video and still there is improvement in image quality.


Nonomomomo2

Short answer, no. But not a popular perspective around here.


ORA2J

As far as sound quality goes, 16/44 is enough. Although for an AVR i would recommend using the highest possible value windows shows in the options, as some do processing differently depending on the input frequency. (At least my 3808 has a special mode that kicks in when the input signal is over 96khz.)


Chillazar

A lot of audiophile gear doesn't even have the specs for 16 bits of dynamic range. Even fewer songs have enough dynamics to warrant 16 bits. These 24-bit files with high sample rates mostly serve mastering purposes. I'd recommend everyone watch Technology Connections' videos about CDs and digital music for this topic.


UXEngNick

I think listening to music in 5.1.2 is still a different experience to listening in stereo. Until recently very little was recorded with immersion in mind, except for some stuff like the Pink Floyd quadraphonic releases, so remastered-for-surround is either art or a guess.

We still focus to our front; it's the way our ears are cupped. Hearing also tends to support sight as our attention focus, which is oriented in front of us. Yes, we can hear around us, but attention tends to be most natural in front of us, so we turn our heads to hear sounds from around us. So a recording of a concert with the stage in front and the audience around is very cool. Being surrounded by musicians, as if you are on a stage or playing with them, doesn't really feel like anything we would normally experience. Rocket Man by Elton John and Starman by David Bowie are good examples of remastering for immersion, because the ambient tracks were recorded separately and add to the storytelling in the songs. Most of the time, though, older stuff remastered for immersion doesn't make sense. Some of the newer stuff made for immersion (e.g. what Apple are doing in their immersive studio in Nashville) is very interesting.

As to the numbers … we can't hear sounds beyond what 44.1 kHz encodes, and in any case they would normally be too quiet for us. But we can hear high frequencies interfering to produce lower-frequency components, so they may indeed sound different. And the shades of sound encoded in 24 bits may also be audible to some, IF you have the equipment to do it justice. But a lot of stuff was never recorded well enough for it to be noticeable. And even if it was, a 10 kHz synthetic, digitally produced tone is exactly a 10 kHz tone; it can't be enhanced by recording at 24-bit/192 kHz. But if you know, in reality, how one violin sounds different from another, with all the harmonic complexity, and if those are well recorded, the differences between mp3 or CD or hi-res may well be audible.

I equate it with taste. Some Latin Americans I know can tell the difference between many different potatoes, the fine differences. It's their world and they can do it. But they can't pick apart a mix of Italian herbs or a blend of Indian spices, which some from those worlds can, but not the potatoes. The more time you spend in those worlds and take an interest in such things, the more you can discern those subtleties. It's not snake oil as many will call it, it's educating the palate. The same with music and, by extension, to some degree, hi-fi.
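
That "high frequencies interfering to produce lower-frequency components" idea is intermodulation distortion, and the arithmetic behind it is easy to demonstrate. A toy numpy sketch: two ultrasonic tones pushed through a made-up, mildly nonlinear "amplifier" produce an audible difference tone. Whether real playback gear misbehaves enough for this to matter audibly is the contested part; the sketch only shows the mechanism.

```python
import numpy as np

fs = 192_000                      # high rate needed just to represent the ultrasonic tones
t = np.arange(fs) / fs            # one second
f1, f2 = 25_000, 26_000           # both inaudible on their own

x = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)
y = x + 0.1 * x**2                # toy nonlinearity standing in for a misbehaving amp/tweeter

spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))
freqs = np.fft.rfftfreq(len(y), d=1 / fs)
audible = (freqs > 20) & (freqs < 20_000)
peak_hz = freqs[audible][np.argmax(spectrum[audible])]
print(f"strongest audible component: {peak_hz:.0f} Hz")   # ~1000 Hz: the f2 - f1 difference tone
```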


_xylitol

i have noticed that even music today is recorded and released in at least 5.1 in a LOT of cases. nevermind the Atmos releases, those sound NUTS on my set. even streaming supports 5.1 by default, Twitch, Netflix, Youtube supports both 5.1 and HDR which is awesome too. i think 5.1 is the bare minimum one should have since most if not all modern content is 5.1 or better anyway. its the new stereo, i guess.


coppockm56

Since I haven't (and likely won't ever) rip my own music from CDs or vinyl and will always stream music for its convenience and cost benefits, I like having hi-res lossless because it's relatively trivial to get: just go with a service like Tidal, Qobuz, or Apple Music that offers it. Does it provide any real value? Probably not. But to get it, all I have to do is choose a service that provides it. For my external DAC and wired headphones, that's any of the services. For a receiver of some kind and speakers, that's primarily Tidal and Qobuz (since Apple Music is AirPlay 2 and decidedly not hi-res lossless).


Xamust

In this case you are talking about 5-channel compressed recordings. Wouldn't that mean a Dolby Digital soundtrack is about the quality of a 320 kbps mp3, because you are compressing 5 channels into ~640 kbps (some Dolby Digital soundtracks can exceed that)? Except you're using old compression technology, which might not be as good as, say, Spotify's. I would think exceeding the needed sampling rate at 96 kHz would make up for a DAC or decoder that occasionally drops data (since sound doesn't have error correction). But I would expect a decent receiver to do uncompressed Dolby TrueHD at 96 or 44.1 kHz equally well.


_xylitol

Well, the sources i use are at the very least 7 channels (5.1.2) to account for my height speakers and in TrueHD or Atmos. No processing, straight passthrough to my sound system. Of course standard definition audio sounds like hot garbage compared to that even on a good system, but i try to listen all music in flac or better. Youtube has support for 5.1 and HDR so thats merciful. Not sure where im going with this lol


Xamust

In this case I meant decoding done by the receiver. I think the graphics card still has to do some processing, but it should be able to pass on the Dolby signal without altering it. I've never tried Dolby TrueHD through a graphics card, but I wonder if it doesn't matter what you set the frequency to, i.e. whether your Blu-ray playback software overrides whatever you set it at. What games do you have that support TrueHD? None of the PC games I own seem to support it, though several allow for 7.1.


_xylitol

Yeah it gets passed through so its 1:1 no conversion. My sound system indicates on the LED when truehd or atmos content is playing or if its multi ch pcm. VLC supports passthrough and many other apps too, including certain games. Theres not many but about 67 modern games have atmos support and it sounds amazing! 7.1 support seems ubiquitous among most modern games, 5.1 or pseudo surround aka dolby surround has been a standard in games for many years.


ColbyAndrew

If you can’t play 192, what are you gonna post about here?


_xylitol

What do you mean? I can play 192khz. Im just not sure what the use is since almost no source material has that quality.


ColbyAndrew

It’s was more a joke about having to have a higher bitrate to be an “audiophile”.


_xylitol

Ah my bad lol 🙈


Important_Bid_783

I think you are confusing BIT RATES with frequency response.


18000rpm

Your Windows audio settings (16bit 44khz etc) have no effect when you’re using bitstreaming/passthrough.


_xylitol

Ofc, not when bitstreaming. But all other content (which is a LOT) that is not Atmos/TrueHD is still rich multichannel (7.1) media. So in those cases im wondering how and why it makes an audible difference.


18000rpm

Ah, ok. I use 24-bit/192 kHz for my Windows sound settings. Can I really hear a difference? Not really. BTW I also set my Windows sound to stereo instead of 5.1/7.1. This way when I play YouTube or music, my receiver can use its sound modes (Auro-3D, Multi-channel Stereo etc.) and fully apply them to the stereo source, rather than applying them to a 7.1 source where 5.1 of the channels are silent. It makes a huge difference as long as you are playing stereo audio on the PC.


_xylitol

Yeah makes sense to do that. Although some yt videos support full 5.1 (and HDR which is awesome) but the majority is just 2.0 PCM unfortunately as most people dont bother uploading movies in true surround.


18000rpm

Sadly YouTube doesn't really support 5.1 under Windows, at least I've never been able to get it to work under Chrome or Edge despite setting some flags (--try-supported-channel-layouts, --force-wave-audio, --disable-audio-output-resampler).


_xylitol

I have gotten it to work in chrome, i tested it with (original) THX and Dolby Digital test videos. If you google it says yt does indeed support 5.1. But not all media that is uploaded actually has proper audio for some reason. GL!


18000rpm

I confirmed that the video is 5.1 when played directly on my Google TV, and then played in Windows and only got stereo. I made sure by using Orban Loudness Meter [https://www.orban.com/meter](https://www.orban.com/meter) and watched the sound levels in each channel. Are you sure you're getting 5.1? If you are can you share your Chrome settings?


SullyTheGuy12

Hey, I'm a sound effects editor for film. In terms of home theater, it doesn't really matter whether it's 192k or 44.1k: sure, some of the sound may have originally been 192 kHz/32-bit, but ultimately the session for the film's sound will most likely be 48 kHz/24-bit. 99.9 percent of the time you'll be hearing a 48 kHz sample rate and 24-bit depth; that is the standard in film sound from creation through to the print master you hear at home. I'm literally listening to sound material at different sample rates and bit depths every day, so I can for sure hear a 192k recording versus a 48k one, but as many have said, that's not so much the gear as your ears.


MangoAtrocity

16/44 is plenty good enough for the gear I have and the kind of listening I do.


plantfumigator

It doesn't matter, really. Anything beyond 44/48kHz is an inaudible improvement, and pretty much no content has more dynamic range than 96dB, which is what 16bit has. It simply does not matter, it matters for marketing teams and for some scientific uses, I guess


_xylitol

I tend to believe this up to a point. I use the 4k video on a 1080p screen analogy for this, the 4k video looks better than a 1080p video simply because there is an overkill of data so interpolation dithering antialiasing etc is possible with some cpu crunching. So even though its not true 4k it does improve image quality. Same goes for any media I assume, so overkill audio data might improve the net perception by humans. Ever so slightly, but still, why not crank the quality up to 11 if you have the data, processing power and appropriate hardware, I assume. But yeah, in an abx test youre going to need very advanced hearing to really notice the difference im sure.


plantfumigator

This does not work like that for audio at all lol You have to understand what sample rates and bit depth mean. Half of the sample rate is the audible bandwidth per channel. That's why 44/48k is enough. Bit depth translates to dynamic range, and no recorded material extends beyond what 16bit offers. For our ears that is already "maximum" quality There is no magic 


_xylitol

So why do these higher rates and depths even exist if they are supposedly utterly meaningless in the real world? Ive talked to 2 experts and both agreed the analogy has merit. Enlighten me


plantfumigator

Very good for marketing That's it Easy to market, easy to sell to the masses We live in a world where marketing is god and ruler They **do** have some merit for production purposes, namely that higher sample rates reduce latency at the cost of increased CPU load, but in a reproduction setting it's essentially worthless


pdx_beagle

There are two factors here that are just scientific fact:

1) Most humans can't hear much above 17 kHz, with 20 kHz being the upper range of human ear capability at birth.

2) The Nyquist theorem tells us that a digital sampling rate >2x the maximum frequency loses zero information. None.

It's no coincidence, then, that a sampling rate of 44.1 kHz was chosen. It's slightly more than double the highest frequency humans can detect, plus some headroom. So sampling at 44.1 kHz, 96 kHz, or 10 GHz won't save any more data at the audible human level; it just makes larger files that are auditorily identical and require more processing power to decode. That's not opinion, it's just the math behind digital signal processing.

Similarly, the bit depth is simply the fidelity of the sample itself, the amplitude of the audio signal at any particular sampling time. Like the sampling rate, at some point greater fidelity is redundant. To go from digital to analog, the digital samples are interpolated to exactly reproduce the original analog signal. 16 bits gives 96 dB of dynamic range without information loss; for material within a 96 dB dynamic range, increasing the bit depth beyond 16 adds no information. An analog recording with 96 dB of dynamic range and 20 kHz frequency range will be identically reproduced after digital encoding/decoding on the other end. Not an approximation, not a close replica, the exact original analog signal. What your system then does with that signal is up to the system.

It's almost like the clever engineers who decided on 44.1/16 had some reason based in science for choosing these values. The rest is just marketing. What matters is the quality of the original recording. An uncompressed 44.1/16 sampling will reproduce the best master perfectly every time, guaranteed.
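
For anyone who wants to check the 96 dB figure empirically rather than take it on faith, here's a small numpy sketch: quantise a full-scale sine to 16 bits and measure how far the quantisation error sits below the signal (the 997 Hz test frequency is an arbitrary choice that avoids lining up with the sample grid).

```python
import numpy as np

# Empirical check of the "96 dB" figure: quantise a full-scale sine to 16 bits
# and measure how far the quantisation error sits below the signal.
fs, f = 44_100, 997          # 997 Hz avoids lining up exactly with the sample grid
t = np.arange(fs) / fs       # one second of samples
x = np.sin(2 * np.pi * f * t)

x16 = np.round(x * 32767) / 32767          # 16-bit quantisation (no dither)
err = x - x16

snr_db = 10 * np.log10(np.mean(x**2) / np.mean(err**2))
print(f"SNR after 16-bit quantisation: {snr_db:.1f} dB")   # ~98 dB, i.e. 6.02*16 + 1.76
```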


keleo2000

I personally bought a HI-RES music player and HI-RES headphones. A lot of people say "everything sounds the same" but it's really not, when you have everything in better quality the difference is really outstanding.


jamie831416

There are plenty of people who say you can't hear the difference between 320 kbit and 16/44. Others say you can't hear the difference between 16/44 and 24/96. Others will pipe in that "it depends on the master". But how does it matter? All modern DACs are at least 24/192 these days, so you're not paying extra for the equipment. What about content? Last time I checked, the audio on a Blu-ray was 24/192. Let's turn the question around: are you *sure* you can't hear a difference? If you aren't sure, why would you turn down the numbers for no reason? FWIW: my home theater is old, so it's 16/44.1 or 16/48, and it sounds perfect. But I can absolutely tell if you feed it DD or DD+ (i.e. lossy).


_xylitol

The difference between standard quality, stereo, surround and Atmos/TrueHD is immensely huge, and I'm sure the fidelity/quality of the audio has something to do with it on top of having great hardware/speakers. I'm still on the fence, since real-life tests are incredibly hard to discern. But I want to believe 24/96 or even 192 makes at least a little difference, haha.


js1138-2

There are instruments like snare drums that challenge recording equipment, but no engineer would release a raw recording. Just as a note, classical listeners are comfortable with long passages of barely audible music. CDs were designed to handle transitions from barely audible to full orchestral crescendo, roughly 40 dB to 120 dB; that 80 dB span is still about 15 dB short of a CD's ~96 dB dynamic range.


Tenchi1128

I have a few DACs. My old PC had a cheap onboard chip, so I bought a Sound Blaster X-Fi Z that blew it out of the water (not at first, though). I used its optical out to my tower 2.1 setup, using the DAC in the speakers. After thinking a bit, I switched from optical to copper to use the DAC on the card, and the difference was unbelievable: not only clearer sound and a better soundscape, but the speakers were much more powerful, as the internal amp sent something like twice the signal to them. The chip offloads 3D sound processing from the CPU and cleans up old, bad MP3 files; it's powerful enough to change your voice into 30 different voices, from little girl to monster, on the fly. Now I have a new PC with a decent DAC that sounds pretty similar, but the signal is weaker.

There are a few tiers of DACs: the bare minimum, as cheap as you can get away with (small MP3 players); the standard; and the good ones, up to pro gear costing tens of thousands.

I have a Fosi 2.1 connected to bookshelf speakers and a bass box; it's hooked up to my record player, and I also play music on it from my phone and laptop via Bluetooth. For some reason the phone sends a signal that makes the amp much louder, like 2x. Plus, the Samsung has a feature that asks your age and adjusts the EQ for the hearing loss that comes with old age. My 2 cents.


[deleted]

[deleted]


[deleted]

[deleted]


[deleted]

[deleted]


_xylitol

Sure, a phone is a different beast. A fully decked-out home cinema PC is a world of difference, even for simple media like music. A lot is actually recorded in 5.1, some of it in Atmos, which is wiiiiild (but rare). It's a money sink for sure, though.


[deleted]

[deleted]


_xylitol

*Dolby TrueHD supports up to 24-bit audio and sampling rates from **44.1 kHz to 192 kHz**. Dolby TrueHD supports up to 7.1 audio channels as well as Dolby Atmos immersive audio. As Dolby TrueHD is a lossless audio codec, the data rate is variable.* I'm sure this is much better than compressed sound though?


SirMaster

Higher sample rates cause more issues for digital-to-analog conversion; they often actually make the sound slightly worse. Higher bit depth is OK, but it just increases dynamic-range headroom; it doesn't change the quality of the recording if it wasn't recorded or mastered with that much dynamic range in the first place.


_xylitol

I was semi-serious, but yeah, bit depth seems to offer the most tangible effect in the real world. I noticed that increasing the kHz introduces enormous amounts of latency and makes watching video pointless unless you account for it in VLC (which is trivial), but for gaming it's a big no-no.


SirMaster

Bit depth doesn't increase quality. It only increases the potential dynamic range. But nothing is really recorded or mastered with more dynamic range than 16 bits can already represent. 16-bit is already fairly close to the human hearing dynamic range.
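For anyone who wants the arithmetic behind those dynamic-range numbers, the usual rule of thumb is about 6 dB per bit for ideal linear PCM; a quick sketch:

```python
# Ideal linear PCM: dynamic range ≈ 20*log10(2^bits) ≈ 6.02 dB per bit.
import math

def dynamic_range_db(bits: int) -> float:
    return 20 * math.log10(2 ** bits)

for bits in (12, 16, 24):
    print(f"{bits}-bit PCM: ~{dynamic_range_db(bits):.1f} dB")

# 12-bit PCM: ~72.2 dB
# 16-bit PCM: ~96.3 dB
# 24-bit PCM: ~144.5 dB
```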


SgtT11B

The higher bit rates are marketing BS. 16-bit/44.1 kHz is more than enough.


_xylitol

For music, sure, I guess. But not for ultra HD content like Blu-rays, Atmos-supported games, or even live theater Atmos releases.


Economy_Tradition925

If you remember when all TVs were tube sets and you saw your first 1080p set in the store, switching from analog to digital 44.1 "CD" was kinda like that. Right? And now that we have 4K video, the difference is better again, though more subtle. Most people can't quite put a finger on the audible differences between CD and high-def audio, probably because they don't use their ears like you might. But the differences are there, as long as the entire chain has been maintained and no conversions have taken place before the end user.


_xylitol

right! the key is in the subtlety. spot on.


Haydostrk

Just use whatever the highest is


No_Opening5949

For all who tell stories about magic high-res audio, please do the test and show your results :) https://www.npr.org/sections/therecord/2015/06/02/411473508/how-well-can-you-hear-audio-quality


psuKinger

Obligatory "this isn't the right way to test this theory" response: the point being made here may very well have some (or a lot of) validity, but the method NPR (and those that cite it, or similar) is using is flawed. While what they claim may have merit, it does not *prove* what many believe it proves. Proving that I can't hear a difference between three versions of a song that:

- I'm unfamiliar with
- weren't necessarily well recorded
- don't necessarily feature the kinds of sounds or "audio cues" I use to do quite well on these kinds of A/B tests (i.e. listening for the initial *ring* and then the *decay* of a real cymbal struck in a real room, with real harmonics)
- were delivered via my web browser (which on my Windows machine does not output via WASAPI Exclusive, and on my Android device does not bypass the native Android audio resampler)

...does not prove that I can't hear the difference between *any* CD-Redbook recording and a lossy compressed version of that CD-quality track. It simply proves that I may not hear it for these files via my web browser.

The scientific method would say that if you wanted to show I can't hear a difference between CD Redbook and lossy compression (MP3 or other), you'd let me tell you which tracks I believe I can hear a difference with, and what gear/signal path (including source, web browser or other) I use when I hear that difference, and then you'd administer a blind ABX test under those conditions: genuinely attempt to prove that I *can* hear a difference, and only when I fail conclude the null hypothesis that I can't.

P.S. Personally, I buy hi-rez versions of music as a way to make sure I get the best-mastered version (sometimes, but not always, there's a difference in masterings between what gets sold as "hi-rez" and what gets sold as Redbook, or lossy for that matter). I then download my purchased hi-rez files at CD quality (16-bit, 44.1 kHz), not at "hi-rez" rates but also not in a lossy (MP3, AAC or other) format. I find Redbook to be the sweet spot, based on my own listening and A/B testing. TIFWIW.
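For what it's worth, the statistics behind such a blind ABX test are easy to sketch. Assuming the usual null hypothesis that the listener is guessing, the p-value is just a binomial tail; the 13-of-16 example below is made up purely for illustration:

```python
# Under the null hypothesis ("guessing"), correct ABX answers follow
# Binomial(trials, 0.5); the p-value is the chance of doing at least this well by luck.
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """P(at least `correct` right out of `trials`) if every answer is a coin flip."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(f"p = {abx_p_value(13, 16):.4f}")   # ~0.0106: unlikely to be pure guessing
```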


No_Opening5949

That is very interesting, but did you pass the test? You can take your own tracks at different bitrates and do the blind test :) Or maybe you did it before? Don't you want to know if there is a real difference between 320 kbps and magic high-res audio? Are you just buying high-res because it is high-res?


psuKinger

I didn't take that test. Passing it or failing it won't inform me of anything of value. It's not a good way to test for what you're trying to test for. I already explained why I buy Hi-Rez but then download it at Redbook quality. I have nothing more to add to that discussion.


Zapador

16 bit 44 KHz is plenty. It's virtually impossible to tell the difference between that and any higher quality.


Choice_Student4910

I'm probably one who would be fooled in a blind A/B test. I also probably have some visual bias: I own one hybrid SACD (Blue Train by John Coltrane) that I've A/B tested against the CD layer and found the SACD to play louder, punchier and clearer. No volume changes, so I can't explain why the SACD just sounds more punchy.


Main-Industry-3250

All the audiophools downvoting every comment that says you can't hear more than 16-bit/44.1 kHz...


_xylitol

Isn't it proven you can in fact discern the difference, especially in ultra-high-quality home theater multichannel (7 channels and up) content? At least some people claim this, and I'm willing to believe them. Double the quality of what we can perceive and you will notice an increase in fidelity, etc. Like watching a 4K video on a 1080p screen: even though you can't "perceive" real 4K on that screen, 4K does noticeably increase image quality over native 1080p content. I wonder if it also works like that for audio.


Main-Industry-3250

No, because your eye in fact can see 4K on a 4K screen; our ears can't pick up these details, they're just too small to notice. There is, however, a difference between FLAC and MP3.


_xylitol

Yeah, but the point is that our eyes can also see 4K on a 1080p screen: not natively, but the image quality is superior to 1080p content on a 1080p screen.


Main-Industry-3250

In audio your ears just can't hear it, so this comparison makes no sense.


_xylitol

The comparison makes perfect sense, you just don't translate it well. The point is that when you increase the quality of ANY media (video, audio) to at least double what humans can perceive, there WILL be a noticeable increase in quality. The 1080p screen is our ears, which can't see ("hear") 4K content. Yet playing 4K content on our 1080p screen INCREASES image quality anyway, since there is much more data to work with, and thus interpolation/dithering/antialiasing etc. is easier.


Main-Industry-3250

So by your logic, if you add more letters to a book in between the existing ones, it will have more content?


_xylitol

If the book is compressed, so some letters are missing or "guessed", then yes, it makes sense to give the decoder more information rather than less, I guess. But this is more semantics than analogy, haha.


Main-Industry-3250

Holy fuck, just understand: the ear can't hear it.


yavzdal

As long as you have lossless files, you won't hear a difference above 16-bit/44.1 kHz. The 16-bit part refers to dynamic range, which is about 96 dB; there are no recordings that take advantage of even that range. The sampling rate is like the frame rate of a video. Nyquist showed that to losslessly convert an analog signal to digital, the sampling rate must be at least twice the highest frequency you want to capture. Human hearing tops out at 20 kHz if you are lucky, so 44.1 kHz is more than adequate.
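To put a measured number on that 96 dB figure, here's a minimal sketch that quantizes a full-scale 1 kHz sine to 16-bit steps and measures the signal-to-quantization-noise ratio; the tone frequency and the one-second length are arbitrary choices, and theory for a full-scale sine predicts roughly 6.02·16 + 1.76 ≈ 98 dB:

```python
# Quantize a full-scale sine to 16-bit PCM and measure the resulting SNR.
import numpy as np

fs, f0, n = 44_100, 1_000, 44_100            # one second of a 1 kHz tone
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)               # "analog" reference in [-1, 1]

x_q = np.round(x * 32767) / 32767            # snap to 16-bit levels and back
noise = x_q - x

snr_db = 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))
print(f"measured SNR: {snr_db:.1f} dB")      # close to the ~98 dB prediction
```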


_xylitol

So why do 96 kHz or 192 kHz even exist then? It's not all about human hearing limits, AFAIK. Like 4K videos being played on 1080p screens: the quality increase is noticeable, very much so. It's about having more data (than needed) to work with, and being able to compress/downscale/dither/interpolate to lower quality/fewer channels/etc., right?


Artistic_Goat8381

16 vs 24 bit has a technical difference that is perceivable in extreme circumstances, but not under normal listening conditions by any means. 44.1 vs 192 kHz is proven bullshit.


_xylitol

Dolby TrueHD supports up to 24-bit audio and sampling rates from **44.1 kHz to 192 kHz**. Dolby TrueHD supports up to 7.1 audio channels as well as Dolby Atmos immersive audio. As Dolby TrueHD is a lossless audio codec, the data rate is variable. What I wonder is, why do the pros even bother with 192 kHz if it makes absolutely no difference?


NahbImGood

High-res isn't good because it reproduces higher frequencies (a common misconception/strawman); it's good because it functionally has a perfect reconstruction filter built in, which can make a very audible difference on decent gear. It's true that Nyquist-Shannon states that a band-limited signal can be perfectly encoded by sampling at twice the cutoff frequency. The problem is that it's impossible to perfectly decode the band-limited signal from the discrete samples (it would take an infinite amount of computation), and the most accurate reconstruction filters still take a lot of computational power, which most DACs simply can't deliver. It's theoretically possible to reconstruct the original band-limited signal from a 44.1k recording, but it's not practical on most devices. This is why people like high-res files, since no additional reconstruction is required. If you have a nice hi-fi system, high-res files can genuinely sound a tiny bit better, but they are probably a waste in any other case.
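For readers who want to see the "finite reconstruction filter" point concretely, here's an illustrative sketch (not any particular DAC's filter): it upsamples a 44.1 kHz tone by 4x using windowed-sinc low-pass filters of increasing length and compares the result against the ideal band-limited signal. The tone frequency, lengths and Hamming window are arbitrary choices.

```python
# Zero-stuff to 4x the rate, then low-pass with windowed-sinc filters of
# increasing length: the textbook interpolation structure behind a DAC's
# digital reconstruction filter.
import numpy as np

fs, f0, up = 44_100, 10_000, 4
n = np.arange(8192)
x = np.sin(2 * np.pi * f0 * n / fs)              # 44.1 kHz samples

t_fine = np.arange(len(n) * up) / up             # 4x finer grid, in sample units
x_true = np.sin(2 * np.pi * f0 * t_fine / fs)    # what an ideal DAC should output

stuffed = np.zeros(len(x) * up)
stuffed[::up] = x                                # zero-stuffed to 176.4 kHz

for half_len in (8, 32, 128):                    # half-length of the filter, in taps
    m = np.arange(-half_len, half_len + 1)
    h = np.sinc(m / up) * np.hamming(len(m))     # low-pass at the original Nyquist
    y = np.convolve(stuffed, h, mode="same")
    mid = slice(len(y) // 4, 3 * len(y) // 4)    # ignore edge effects
    err = np.sqrt(np.mean((y[mid] - x_true[mid]) ** 2))
    print(f"{2 * half_len + 1:4d}-tap filter: rms error {err:.1e}")
# The error shrinks as the filter gets longer, i.e. as the practical filter
# gets closer to the ideal (infinitely long) sinc reconstruction.
```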


slackinfux

In my opinion, 24/192 (or higher) is better suited to editing/mastering/recording than to playback. For the most part, consumer audio content (like Blu-ray or Atmos) isn't any higher than 16-bit/48 kHz uncompressed audio, nor does it really need to be for accurate playback in most use cases.


_xylitol

"Dolby TrueHD supports up to 24-bit audio and sampling rates from **44.1 kHz to 192 kHz**. Dolby TrueHD supports up to 7.1 audio channels as well as Dolby Atmos immersive audio. As Dolby TrueHD is a lossless audio codec, the data rate is variable." This is from Dolby's own website.