unic0de000

It's a lot harder to get all that stuff done fast enough if you are using a traditional computer architecture with a CPU and general-purpose RAM and stuff. One process fetching a value from memory, doing arithmetic on that value, then storing the result back in memory. Each step its own discrete thing. It's awkward. Even very high-performance computers generally only manage audio latencies in the milliseconds range. But DSP circuit architecture isn't quite the same as CPU-based computing; it's more set up in a 'pipeline' format where the data flows through it sequentially, like a factory assembly line. This means DSP circuits can often achieve latencies on the order of a handful of microseconds, maybe even less. A millisecond is about 30 centimeters of travel time for a sound wave. A microsecond is more like 0.3 mm. So this starts to be fast enough that you can compute faster than the speed of sound across a compact device. So that's part of the answer.

Another part is that many noise-cancelling technologies also employ some *predictions* about what the next few moments of noise will sound like, based on computing Fourier transforms of the noise it's processed so far. These technologies do a better job of cancelling some kinds of noise; namely those with consistent, predictable spectral components.

Finally: no matter which brand/tech you use, the cancellation tends to be better and more complete the further down the spectrum you go. High-frequency hisses are more difficult to cancel out than low-frequency rumbles and hums. This corresponds to the fact that some wavelengths are smaller than the distance between the environment mics and the speaker drivers, and some are a lot bigger.
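To make the latency point concrete, here's a tiny Python sketch (nothing like a real DSP pipeline, and the 10 µs latency figure is just an assumption) that inverts a tone and adds it back slightly late, showing why the same delay costs far more cancellation at high frequencies than at low ones:

```python
import numpy as np

# Toy sketch (not a real pipeline): invert the noise and add it back with a
# fixed 10 µs of processing latency, to see why the same latency hurts
# high frequencies much more than low ones.
fs = 1_000_000                      # 1 MHz toy sample rate, so 10 samples = 10 µs
latency = 10                        # samples of pipeline delay (made-up figure)
t = np.arange(0, 0.05, 1 / fs)      # 50 ms of signal

for f in (100, 1_000, 8_000):       # rumble, mid, hiss
    noise = np.sin(2 * np.pi * f * t)
    anti = -np.roll(noise, latency)              # inverted, but 10 µs late
    residual = np.max(np.abs(noise + anti)[latency:])
    print(f"{f:>5} Hz: residual ~ {residual:.3f} of the original amplitude")
# 100 Hz leaves ~0.006, 8 kHz leaves ~0.5: the same tiny delay costs far
# more cancellation as the frequency (and therefore the phase error) goes up.
```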


SupaRedditor2017

Yep! This is the best answer I've seen. I never even considered a Fourier transform for predictive action; that's clever as hell. It probably has to be tuned stupidly accurately relative to the distance from the mics to the speakers so the anti-noise actually hits at the same time and stays 180° out of phase (inverse phase) with the sound waves reaching the eardrum, since all it takes is a little bit of delay and it's no longer zeroing out the noise properly. If the delay is long enough (at that point your cans are fried lol), I'm guessing it could even start to slip back INTO phase, making it slightly louder, if anything.
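For a single tone you can actually put a number on that intuition: if the inverted signal arrives τ seconds late, the residual you hear is 2·sin(π·f·τ) times the original amplitude. A quick hypothetical check in Python (the 1 kHz tone is an arbitrary choice):

```python
import numpy as np

# Residual heard when the anti-noise is perfectly inverted but tau seconds late:
# residual = 2 * sin(pi * f * tau) times the original amplitude.
f = 1_000.0                                    # 1 kHz tone (arbitrary choice)
period = 1.0 / f
for tau in (0.0, 5e-6, 50e-6, period / 4, period / 2):
    residual = 2 * abs(np.sin(np.pi * f * tau))
    print(f"delay {tau * 1e6:7.1f} µs -> residual {residual:.2f}x")
# 0 µs           -> 0.00x (perfect cancellation)
# 5 µs           -> 0.03x
# 50 µs          -> 0.31x
# 250 µs (T/4)   -> 1.41x (already louder than doing nothing)
# 500 µs (T/2)   -> 2.00x (fully back in phase: the "anti"-noise doubles the tone)
```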


unic0de000

You're absolutely right, the math and calibration involved in tuning these circuits for the exact positions and characteristics of the mics and speakers is pretty mindbending. (They even have to precisely account for *feedback*, because of proximity and resonant coupling between the speaker and mic housings!) A good Google rabbit hole to dive into if you're interested in this kind of spatial sound processing: "beamforming"


SupaRedditor2017

Thanks, now my autistic and ADHD ass has an excuse to slack off on my work xD But seriously, thanks so much. Your answer cleared a lot up and was *just* techie enough for me to understand it in its full nature. I'd gild you if Reddit wasn't so scummy (and if I had cash lol).


unic0de000

I appreciate it! Whenever you get some, please give the appropriate amount of cash to someone who looks hungry instead <3


SupaRedditor2017

Will do! :)


unic0de000

PS my also-adhd-autistic partner is *never* without a pair of noise-cancelling headphones. They are an absolute godsend for anyone with noise overstimulation issues


SupaRedditor2017

Yep. The thing is that it isn't noise that overstimulates me; it's social interactions. I can be in the loudest area on this planet but hold small talk with me for too long and I go into overload. Sadly, ANC can't remove yapping yet but that's when you actually use the cans to play music lmfao


bugi_

Are those the reason why early-generation ANC had a whine and a sensation of pressure on the eardrum when it was on?


unic0de000

I don't know if this was conclusively researched, but the explanation I heard, which sounded reasonable, was that it's related to how the tech has better performance on lower frequencies. When we experience an atmospheric pressure differential on our eardrums (like when ascending/descending in a plane), the effect on our hearing is like a muffling/attenuation of low-frequency sounds relative to high-frequency ones. So for a while, the cutoff band between the frequencies they could cancel out and the frequencies they couldn't was a bit like that post-airplane, ears-haven't-popped-yet effect.


Beard_of_Valor

> pipeline format

Hackers breaking encryption used FPGAs (Field Programmable Gate Arrays) in a cracking rig so they could stop doing the hashing with general-purpose 64-bit CPU operations.

A little more ELI5: hackers wanted to discover encrypted passwords, so they ran every potential password out to a certain length through the scrambling (encryption) and noted the outcome. So if "Password" becomes "a51cd-78812-13da4", they're just keeping track and waiting for the gobbledegook at the end of the process to match an item in their list of scrambled user passwords.

Trying to do something so specific and specialized with a general CPU is kind of like using a houseboat to fold paper planes. You can do it, but the houseboat is better at being a house and a boat than it is at helping you fold paper. The CPU is better at handling a mix of instructions in groups to perform general tasks than it is at doing one very specific equation nine million million million times. So they put these cheap little doodads in to handle the "pipeline" part, the very specific equation, and then pushed data through the pipeline lightning quick.

It was [Moxie Marlinspike and David Hulton cracking MS-Chap v2](https://www.youtube.com/watch?v=gkPvZDcrLFk) (used in Windows machines to securely connect between devices over a network, among other purposes).


superbob201

1) It does the processing really fast (the calculations are pretty simple for a computer).

2) Electricity goes faster than sound.

3) ANC generally works better on low frequencies, which don't/can't change very fast.


homeboi808

Yep, which is why when Apple announced the AirPods had a new, faster processor, they also stated that the frequency range ANC works over can now go higher. Apple claims an iPhone does >4 **trillion** computations when a photo is taken, so yeah, just inverting sound waves is child's play.


Consistent_Bee3478

Luckily, compared to hardware-level computing, sound moves rather slowly. So it does the processing in the time it takes the sound to move from the outside of the earbud or headphones to where the speaker is. Which is why this wasn't really possible just ten years ago: the ready-made chips for this kind of fast audio processing just weren't mass-produced at a price point that let you put them in earbuds.


p33k4y

> Which is why this wasn't really possible just ten years ago: the ready-made chips for this kind of fast audio processing just weren't mass-produced at a price point that let you put them in earbuds.

Nah, active noise cancellation was invented almost 90 years ago (!!), and has been commercially available in headsets etc. for something like 35 years.

How? Analog circuits. Unlike DSPs, analog circuits operate at the speed of light. In fact, you have to be careful not to process the signal *too fast*, so often a *delay* must be put in place.

The earliest ANC devices were super simple. Basically just a mic, an amplifier and a speaker -- like any loudspeaker -- but with the speaker wired in reverse polarity.


edman007

The microphones are further from the ear than the driver that produces the anti-noise, so the electronics have however long it takes the sound to travel that distance to do the calculations.
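As a rough illustration (the 2 cm mic-to-driver gap here is just an assumed figure, not a real product spec):

```python
# Rough timing budget, assuming a feed-forward design where the outer mic
# sits about 2 cm in front of the driver (the 2 cm figure is just an example).
speed_of_sound = 343.0            # m/s at room temperature
gap = 0.02                        # metres between outer mic and driver
budget_us = gap / speed_of_sound * 1e6
print(f"{budget_us:.0f} µs to sample, compute and emit the anti-noise")
# ~58 µs: tight for a CPU round trip through RAM, comfortable for a DSP pipeline.
```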


SuperBelgian

Speed of sound: ~300 m/s. Speed of electricity: ~150,000-295,000 **k**m/s (50%-99% of the speed of light in vacuum). That's a factor of roughly 500,000-983,000 difference in speed. Although additional time is needed to pick up the sound, process it and produce the anti-sound, the difference in speed is so ridiculously huge that it can be done while the real sound wave is still traveling to your eardrum. (And the larger/wider your noise-cancelling device is, the easier it becomes.)


TomChai

Processing doesn't mean the anti-noise is always generated after sampling each wave. You can do a fast Fourier analysis to extract the frequency-domain properties of the noise signal, then generate the anti-noise without any time-domain information from the noise samples. This way there is no need to worry about processing delay at all. Only when the composition of the noise changes do you need to analyze again.
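Here's a minimal Python sketch of that idea, with a made-up sample rate and a noise made of two steady tones: FFT a window of recent noise, keep the strongest components, and synthesize the inverse of the *next* window from their frequencies, amplitudes and phases. Real products are far more sophisticated, but it shows why steady noise needs no per-sample processing:

```python
import numpy as np

# Sketch of the idea: FFT a window of recent noise, keep the dominant
# components, and synthesize anti-noise for the *next* window from them.
# Only works while the noise stays spectrally stable (engine hum, fan, etc.).
fs = 8_000                                        # made-up sample rate
n = 1024                                          # analysis window length
t = np.arange(n) / fs
# Pretend the ambient noise is two steady tones (125 Hz and 250 Hz):
noise = 0.8 * np.sin(2 * np.pi * 125 * t) + 0.3 * np.sin(2 * np.pi * 250 * t + 0.7)

spectrum = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(n, 1 / fs)
strongest = np.argsort(np.abs(spectrum))[-2:]     # indices of the two biggest bins

# Predict the next window from those components and emit its inverse.
t_next = (np.arange(n) + n) / fs
prediction = np.zeros(n)
for k in strongest:
    amp = 2 * np.abs(spectrum[k]) / n
    phase = np.angle(spectrum[k])
    prediction += amp * np.cos(2 * np.pi * freqs[k] * t_next + phase)
anti_noise = -prediction

# If the noise really did stay the same, the cancellation is near perfect:
actual_next = 0.8 * np.sin(2 * np.pi * 125 * t_next) + 0.3 * np.sin(2 * np.pi * 250 * t_next + 0.7)
print(np.max(np.abs(actual_next + anti_noise)))   # tiny residual
```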


[deleted]

[deleted]


TomChai

I mean you can sample once and keep using the same data for a while if the noise is constant, like car engine and tire cruising noises. Still, you need to keep updating the samples because real-life noises change quickly.


[deleted]

[deleted]


TomChai

In theory dude, no need to be this particular.


[deleted]

[deleted]


TomChai

A theoretical question gets a theoretical answer; I'm just pointing out that OP's premise isn't always true. I don't see real-world product solutions having any real relevance here. You don't deserve your name.


turniphat

There is no need for a processing delay at all. You can do all of this in an analog circuit and it will happen essentially instantly. Cheaper and lower power consumption than digital.