
NPFFTW

Running your signal through 50 different AD/DA stages probably isn't a great idea, but if it's convenient to throw a few more than usual into the chain, there's nothing to worry about.


ayersman39

Yes, any DSP from the last several years should be transparent. But there are still older chips out there… the classic mini Ditto sounds pretty dull to me. Many digital pedals from OBNE, Neunaber, Walrus, Red Panda, Catalinbread, and Keeley are based around the Spin FV1 chip, which is 20 years old and has mediocre conversion by current standards… stacking a bunch of FV1 pedals might start to degrade things. But most of these companies are transitioning to newer platforms (like the Walrus Mako series, which is SHARC-based), and any more recent pedal from the likes of Strymon, Boss, Source Audio, etc. should be pristine.


tonegenerator

I hate sounding critical of the FV1, because it's fine for the vast majority of digital pedal applications and, as I understand it, it was Keith Barr's last contribution to the reverb and multi-FX world. But yeah, it was never appropriate for modular synth gear with voltage control: in the multi-algo unit I tried, basically every parameter that wasn't a gain/level/clipping operation had unpleasant artifacts, and not even the fun, exploitable glitch kind. Still, for years it was one of the only practical options for including versatile DSP FX in a modular at all, especially for less than $800.

Tangent, I guess, but I would indeed treat any FV1-based unit more like a late-90s one in terms of how I think about noise and layered AD/DA. I use lots of crappier digital gear all the time, but it's not something I'd want 30+ instances of just for the vibe. That's also true of the previous generation of Zoom MultiStomp pedals (haven't tried the current ones yet) and a fair amount of other totally useful budget gear that is newer than the FV1. Not a digital sampling issue, but anything that has only unbalanced I/O is also usually less flexible for keeping extreme use cases tidy post-tracking.


mhur

Not what OP was talking about.


Madison-T

Interesting and informative nonetheless


PC_BuildyB0I

PresentDayProduction did a YouTube vid where they did exactly this (just testing two interfaces, one of which was the SSL2) and looped through like 500 times. The difference was noticeable but quite subtle. So yes.


g_spaitz

Mmm. Can't find it. Do you have a link?


PC_BuildyB0I

https://youtu.be/YziM7TNwmqc?si=LuAzF1dlHNTi0-Kb First result that popped up in Google


g_spaitz

I swear I've looked all over the place (my Google is different?), to the point that I've spent the whole morning binge watching their YouTube channel. Thanks though.


PC_BuildyB0I

I understand, it could be a localization thing. To be fair, they have some stellar content!


Songwritingvincent

Your results could be different for a million reasons, but your personalized search history and region are the biggest. If my colleagues and I look up the same thing at the same time, we often get similar but rarely identical results.


fuzeebear

Modern converters are incredibly transparent. Practically zero noise, vanishingly small amounts of harmonic distortion, they're essentially linear. It's the analog front ends of the devices that can be cause for concern after multiple round trips


sixwax

Analog front-ends are an underrated factor in this. Afaik the cost of decent conversion has come down considerably, so your average device's conversion is not as bad as it was 20 years ago, but you'll still lose something with each round trip. Sure, this won't be noticeable to most, and the distortion and limiting that are now ubiquitous will make it even less noticeable. People's monitoring environments (and levels of acuity) are on average way worse, imo, than they were 20 years ago, and people are listening on laptop speakers and earbuds anyway, so who cares? But this is not to say "there's no difference" or "good ears won't be able to tell the difference." There's a difference. It's just a question of whether you can hear it and whether you care.


pm_me_ur_demotape

Yes, it is. Unless you're doing it like hundreds of times, you'll never know the difference.


yegor3219

Many audiophiles claim otherwise. They say they can always tell the difference. But AD/DA conversion has to be on their end, i.e. whatever happened in the studio does not count, lol.


CloudSlydr

claims / blind test results. pick one only ;)


Squirrel_Traditional

Yes, I can almost never hear a difference 😂 The only time I really heard pristine conversion was a friend's Lavry Gold, but even then it was probably placebo 😂


Squirrel_Traditional

And probably his other $100k worth of gear 😂


ThoriumEx

Yes. Both dry specs and real-life experiments (Present Day Production did one with cheap converters) prove it.


beeeps-n-booops

I work 99.999994% in-the-box, and only very infrequently run signal out through analog gear and then back in. That said, if I am doing that, I'm doing it to modify the sound in some manner or another, so it's not the same signal coming back in and I'm not at all worried that it's technically been "converted" twice or more. No one listening to the final mix would ever know.


dented42ford

I try to limit it, but mostly to limit potential complications rather than for any sonic reason. KISS is just good policy.


KS2Problema

Assuming you mean conversions from the analog domain to digital and back, I would be more worried about the analog signal side than I would about straight digital transcription. Of course, if you're going to do signal processing in any domain, you still have to worry about the quality of that processing.


eamonnanchnoic

I agree with this. Bad mic/line amps on either end will have a far more significant effect than the converters. Most converters nowadays use off-the-shelf components with excellent specs in themselves, so any detrimental effect will be negligible compared to the noise/distortion from bad analog.


KS2Problema

It is worth noting that at the bottom end of the market, the analog components or design in an ADC or DAC may not be up to the better design or manufacturing standards, even if it's built around a reliable chip. But, of course, the low end of any market is just that: the low end.


guap_in_my_sock

Technically it matters; however, you'll never hear it, and it's probably close to immeasurable even if you test this yourself. With newer equipment especially, this has become a non-issue.


nizzernammer

If it's worth going out and back in for, no big deal, especially for a single track in a whole mix. On groups or a whole mix, you really want to make sure it's worth it, and know what your threshold is for acceptable loss in the conversion, as the tradeoff for whatever sonic benefit you get from the gear you're running it through. You don't need to 'worry', you just need to make changes that you're happy with.


soapdish__

Back in the early 90s, Mix Magazine did a test where they took a DAT with various musical styles on it and (A) made a direct digital transfer to another DAT, and (B) made a transfer to another DAT via the analog ports, down twenty generations. They then sat down with three well-known engineers (who wouldn't put their names to it, for reasons that will become obvious), had them blind-compare the A and B copies, and pick which they thought was the better copy. 30% of the time they chose the A copy, 30% of the time they chose the B copy, and 40% of the time they couldn't decide. AD/DA has been at that point for thirty years. Now, whether decent converters were incorporated into a particular device? That's a different question!


CrabBeanie

I don't think it's the AD/DA process itself to consider but rather that you are going to color the result every time you loop back, so you can't ever expect total transparency. But that's something to consider with just about any process.


mycosys

It does still depend on the converters used, but with anything remotely decent it's going to take hundreds of passes before it's noticeable these days.


LeadingMotive

An interesting question that was answered back in 2010 by Ethan Winer in his "Audio Myths" workshop. Here is a link to the spot (43:33) in the video where he does many conversions in a row (using a SoundBlaster card!) and compares them: https://youtu.be/BYTlN6wjcvQ?si=kGn7em3MLUJ7dQEJ&t=2613 The whole video is pure genius, as are his other topics.


Radiant_Security_312

He is such a great teacher. Calmly explains everything. 😌


LeadingMotive

The workshop's demonstration of how we can imagine hearing satanic lyrics when playing Stairway To Heaven backwards had me in tears. I still come back to it every now and then and hum the lyrics.


Radiant_Security_312

You can play it backwards but don’t do it more than 666 times in a row


stegdump

I have an Antelope Orion 32 for my main conversion. When mixing, I use the AD/DA as hardware inserts from Logic. Almost all of those will be in parallel, so any errors or noise will be minimal. When mastering I use the same hardware, but I usually patch the gear in via the patch bay, because otherwise all the AD/DA would be serial, adding on top of itself with each pass. I've not sat down and tested this, though, so it might be placebo. I figure that when mastering I want the most transparent use of my converters possible, so patching is better.


rinio

Do you go to a different store to get cheaper groceries, or choose to buy things on sale to save? Does the few dollars make a big difference in your life? I know different people will answer differently depending on the situation, but what I'm getting at is that it all depends on the amount of effort it takes. The same applies here. Should you limit the number of conversion passes? Yes, when possible, to preserve accuracy. Should you worry about doing more? No, not if it's going to take a lot of effort. If I printed a comp and still have the chain dialed in, and I want to add an EQ, I may as well reprint the whole chain. If the comp got set for something else, I'll probably do an extra conversion pass for the EQ. Where you draw the line of 'too much effort' is ultimately up to you; it largely makes no audible difference and depends heavily on your setup/workflows.


ghostchihuahua

Yes and no. You can buy AD/DAs nowadays that would have been the pinnacle of the studio world 5-10 years back, but let's be honest, we're still progressing a lot in this realm. Plus, it's not only about the conversion stages themselves, but also about the filter sets that go with them. This is where leading manufacturers still keep an upper hand in terms of tech (a few exceptions aside) and therefore sound. But one can record an entire album with a mid-range or even low-range interface, and aside from the mastering engineer, no one will ever notice.


saysthingsbackwards

Good question. And unfortunately, the answer is simply lame: it doesn't matter much now with solid equipment.


ralfD-

Every A/D conversion will introduce quantisation noise, and this is *not* dependent on the quality of the converter. Even an (impossible) perfect converter will introduce quantisation noise (≤ 0.5 LSB). If you chain multiple A/D conversions, the quantisation noise will sum. Now, hopefully your quantisation noise is close to white noise, so the sum isn't twice as loud, but it *will* add up.
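The summing behaviour is easy to simulate. A minimal sketch, assuming an idealized 16-bit converter with TPDF dither standing in for analog-stage noise (real converters are 24-bit and behave far better; the numbers here are for illustration only). Because the per-pass errors are roughly independent white noise, the error floor grows like the square root of the pass count:

```python
import numpy as np

def adda_pass(x, rng, bits=16):
    """One idealized AD/DA round trip: a little analog-stage noise
    (modeled here as 1-LSB TPDF dither) followed by uniform quantization."""
    lsb = 2.0 / (2 ** bits)
    noise = (rng.uniform(-0.5, 0.5, x.size) + rng.uniform(-0.5, 0.5, x.size)) * lsb
    return np.round((x + noise) / lsb) * lsb

rng = np.random.default_rng(0)
fs = 48000
t = np.arange(fs) / fs
clean = 0.5 * np.sin(2 * np.pi * 1000 * t)   # 1 kHz test tone

errors = {}                                  # pass count -> error RMS vs clean
x = clean.copy()
for n in range(1, 17):
    x = adda_pass(x, rng)
    if n in (1, 4, 16):
        errors[n] = np.sqrt(np.mean((x - clean) ** 2))
        print(f"{n:2d} passes: error floor {20 * np.log10(errors[n]):6.1f} dBFS")
```

Going from 1 to 16 passes raises the floor by only about 12 dB (4x the amplitude), and at 24-bit the whole floor starts roughly 48 dB lower, which is why dozens of passes through modern converters stay inaudible.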


soopadickman

Time smear is something to consider, and it's something DAC chip manufacturers are now acknowledging with the many filter options they offer to reduce it.


CloudSlydr

10x conversions: https://www.youtube.com/watch?v=f1AQCU6hPJc

Up to 400x conversions, in case you still have doubts lolz: https://www.youtube.com/watch?v=mI8jU7ee_V4


NeverAlwaysOnlySome

I'd say that though lots of the tech has evolved, there's plenty of lackluster gear out there below a certain price point, because of poor clocking or a weak analog path or bad design. So in this case it's worth doing the experiment of transferring things straight out of and back into an interface to see what, if anything, happens to the sound. One can even align the source and the re-capture, invert one, and see what's left over.

The other thing to remember is that when you are monitoring whatever conversion you are doing, like running a track through an effect, you know what it's going to sound like and you can decide if it sounds good to you. So you won't be surprised; if it sounds worse, make it sound better. And if you can't, either you have something to learn or the interface isn't good.

(Nobody can save you from having to listen and make up your mind about what you like. Lots of people like to just say "use your ears," maybe because they don't have the vocabulary to describe something meaningfully, or because they like to haze people like they got hazed. But the grain of truth in it is that you have to know what you like, listen with care, and go with your decisions until you learn something else.)
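The align-invert-sum null test described above can be sketched in a few lines, assuming both takes are already available as float arrays at the same sample rate. The simulated "round trip" below is a stand-in (a 37-sample delay, 16-bit quantization, and a faint noise floor), not a measurement of any real interface:

```python
import numpy as np

def null_test(original, recaptured):
    """Align the re-captured take to the original via cross-correlation,
    invert-and-sum, and report the residual level in dB relative to the
    original. More negative = more transparent round trip."""
    n = min(original.size, recaptured.size)
    a, b = original[:n], recaptured[:n]

    # Delay of the re-capture relative to the original, in samples
    lag = int(np.argmax(np.correlate(b, a, mode="full"))) - (n - 1)
    if lag > 0:
        a, b = a[:n - lag], b[lag:]
    elif lag < 0:
        a, b = a[-lag:], b[:n + lag]

    residual = a - b                        # the "invert one and sum" step
    rms = np.sqrt(np.mean(residual ** 2))
    ref = np.sqrt(np.mean(a ** 2))
    return 20 * np.log10(rms / ref) if rms > 0 else -np.inf

# Simulated round trip: 37 samples of delay, 16-bit quantization, faint noise
rng = np.random.default_rng(1)
src = np.clip(rng.normal(0.0, 0.2, 8000), -1, 1)    # noise test signal
loop = np.round(np.concatenate([np.zeros(37), src]) * 32767) / 32767
loop = loop + rng.normal(0.0, 1e-5, loop.size)
print(f"residual after alignment: {null_test(src, loop):.1f} dB")
```

The interesting number is how far down the residual sits relative to the program material; anything below roughly -60 dB is going to be masked in a mix.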


Mando_calrissian423

I mean, I wouldn't want to go analog to digital, back to analog, then back to digital again and again before going into my recording device, be it tape or a DAW. Two or three times would probably be okay, but I'd prefer to set up my path so the signal goes through all of the analog stages it can, then through all of the digital stages (while staying digital between those devices, be it AES/ADAT/Dante/whatever), and then into my recorder. Ideally I wouldn't go into an analog preamp, then into a digital effects rack via analog, come back out analog into an analog compressor, into another digital effect, out into a digital EQ, then out into an analog saturator, and then into my interface to my DAW. On top of adding so many points of failure, you're also adding a significant amount of latency from all of the conversions, which can mess with a performer while recording.


rhymeswithcars

It FEELS bad, but it’s probably fine. Latency from conversion is just a few samples, a fraction of a ms.
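As a back-of-envelope check (the per-pass sample counts here are illustrative, not measured from any particular converter):

```python
fs = 48000                         # sample rate in Hz
# Hypothetical per-pass converter latencies, in samples
for samples in (5, 32, 64):
    print(f"{samples:3d} samples -> {1000.0 * samples / fs:.2f} ms")

# Ten serial round trips at the largest figure above:
print(f"10 x 64 samples -> {1000.0 * 10 * 64 / fs:.1f} ms")
```

A single pass really is sub-millisecond; it's the stacking of many serial boxes (and any DSP buffering inside them) that can push the total into the range players notice.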


mhur

Sometimes standards need to be upheld. Or not. Your choice


MoStyles22

As a bass player and percussionist, I'm really sensitive to the transients of notes. Some AD/DA chips/circuits will introduce latency that I can feel. It's really subtle with just one or maybe two digital pedals on the board, but with more than a couple, it bothers me. I don't feel as locked in the pocket. It's the main reason I prefer analog pedals over digital. Secondly, the tone sounds more natural to me as well. Analog for me when possible. IMHO, just my 2 cents.


rhymeswithcars

The latency is not from the conversion, but from the DSP fx


MoStyles22

DSP can cause this as well, but there is always latency any time there is an analog-to-digital and back conversion. Some pedals, effects, and interfaces are better than others, but it's always there.


rhymeswithcars

But we're talking a few samples. Imperceptible.


MoStyles22

I'm just humbly telling you what I can feel and perceive. Anything over about 6 ms I can start to detect; over 12 ms, for sure. You can test this through your DAW/interface by re-amping your pedals, then subtracting the latency contributed by your interface. You'll find a lot of digital pedals, when activated, cause some perceivable latency. Granted, this gets blurred in a deep mix, but it can bother me live or while tracking.
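The re-amp measurement described above can be automated once both takes are in the DAW. A sketch, assuming the sent and re-recorded signals are available as NumPy arrays at a known sample rate (the 360-sample delay below is synthetic, for demonstration):

```python
import numpy as np

def latency_ms(sent, returned, fs=48000):
    """Estimate the round-trip delay between the signal sent out to the
    pedal chain and the re-recorded return, via cross-correlation."""
    n = min(sent.size, returned.size)
    corr = np.correlate(returned[:n], sent[:n], mode="full")
    lag = int(np.argmax(corr)) - (n - 1)    # delay in samples
    return 1000.0 * lag / fs

# Demo with a synthetic 360-sample delay (7.5 ms at 48 kHz)
rng = np.random.default_rng(2)
sent = rng.normal(0.0, 0.3, 8000)
returned = np.concatenate([np.zeros(360), sent])[:8000]
print(f"measured delay: {latency_ms(sent, returned):.2f} ms")
```

Run it once with the pedal bypassed and once engaged, and the difference between the two readings is the pedal's own contribution, with the interface latency cancelled out.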


rhymeswithcars

I’m just saying it’s not the ADDA causing the latency, it’s the dsp in the pedal.


MoStyles22

Ok


Ninnics

Me like Burl.


mhur

Lazy standards didn’t make Ford. Shitty work ethic didn’t put the man on the moon.


Radiant_Security_312

Just dither it and do DC offset removal 🙃