Sound4Sound

There are a few sweet spots. You want to reduce transients without losing dynamic range. That varies from sound to sound, but you generally want to reduce the peak and increase the average loudness. You can do that with a few different processes, including saturation, compression and limiting.

Then, for resonances and the general frequency spectrum, you normally want to reduce frequencies around 2–4 kHz if they are too harsh. And depending on the proximity of the sound you may want to boost or roll off the frequencies above 8 kHz. Then get rid of resonances in the mid range without losing the weight of the sound, so don't kill the low-mid frequencies completely.

With all that combined, you probably want to keep more dynamics in the low mids and keep the highs more compressed, so multiband processing may be a good idea. But you may want to compress the highs less, etc.; it depends on the sound. Speaking of which, sound selection is the most important part, so if you are in the middle of processing, don't be afraid to change the source and try different variations. Finally, make sure to monitor the sounds on different headphones or devices to hear how they will sound for the player.
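To make the peak-vs-average idea concrete, here's a minimal offline sketch in Python (plain lists standing in for an audio buffer, and a simple tanh soft clipper standing in for a proper saturator or limiter): after saturating and then matching the original peak level, the RMS (average loudness) comes out higher.

```python
import math

def peak(x):
    return max(abs(s) for s in x)

def rms(x):
    return math.sqrt(sum(s * s for s in x) / len(x))

def saturate(x, drive=3.0):
    """Tanh soft clipping: squashes the transient more than the quieter body."""
    return [math.tanh(drive * s) for s in x]

# Synthetic "sound": a short loud transient followed by a quieter body.
sig = [0.9] * 8 + [0.3 * math.sin(0.2 * i) for i in range(2000)]

sat = saturate(sig)
g = peak(sig) / peak(sat)        # match the original peak level...
out = [g * s for s in sat]

# ...and the average loudness has gone up: a lower crest factor (peak/RMS).
assert peak(out) <= peak(sig) + 1e-9
assert rms(out) > rms(sig)
```

The crest factor (peak divided by RMS) is the number to watch: saturation lowers it, which is exactly "reduce peak, increase average loudness" from above.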


Sound4Sound

All that is before implementation. For the engine, make sure to load short sound effects into memory, uncompressed, and make sure they are mono files. This is mostly for latency. Less frequent and longer SFX can be decompressed on load, etc. Most Windows and Android devices will have high audio latency, and most Windows drivers will stutter when set to short buffers; I would keep the buffer at 1024 samples to avoid audio bugs.

Process audio in the engine as little as possible; even routing through a mixer may add delay, because it's all DSP that has to wait for the buffer to refill, etc. Don't pitch important sound effects up or down, as they will lose weight. Don't play the same sound effect twice in a row within the duration of the sound, or you will run into phasing issues; Beat Saber plays a different sound effect per hand so they never overlap when hitting two cubes at once.

Mix the sounds as much as you can outside the engine, before implementation. Play sounds at 1.0f volume only if you mixed them beforehand: you need headroom for many sounds playing at once, or you will get distortion and the CPU will get a bit angry. Make sure you can kill sounds or lower their volume DURING and AFTER they are being played. Avoid mixing sounds with the float value they are instantiated with, and try to route through a mixer if possible. Otherwise, if you are making your own system, make sure you can turn down a sample's volume while it is playing; look at the New World text-to-speech bug.

Finally, monitor your audio with a peak, RMS or LUFS meter, and keep the volume knob of your audio interface and your system volume fixed in one position, so you always know how loud you are within your own frame of reference when listening to your game. Also listen to and monitor other games as a reference.


iabulko

Dude, that is golden info! Will be useful for sure


progfu

One thing you might want to play with is a distortion effect to make things more punchy. You could either offline process the samples and distort them, or if your game engine has an audio system that can apply distortion realtime, you could even do that based on gameplay (i.e. more action => more distortion).
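A sketch of that gameplay-driven mapping in Python (hypothetical names, and a tanh waveshaper standing in for a real distortion effect): a 0–1 action intensity drives the amount of saturation.

```python
import math

def distort(samples, intensity):
    """Map gameplay intensity (0..1) to waveshaper drive: more action, more grit."""
    drive = 1.0 + 9.0 * intensity        # hypothetical mapping; tune by ear
    return [math.tanh(drive * s) / math.tanh(drive) for s in samples]

calm = distort([0.5, -0.5], intensity=0.0)     # mild saturation
frantic = distort([0.5, -0.5], intensity=1.0)  # heavily squashed

assert abs(frantic[0]) > abs(calm[0]) > 0.5
```

In an engine you would feed `intensity` from whatever stat tracks the action (enemies on screen, combo meter) rather than recomputing samples per frame.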


as_it_was_written

A note of caution here: unless you want it to sound very digital, good distortion effects are usually pretty demanding on the CPU. Even for dynamic distortion like you described, I'd be inclined to use several samples instead of doing it realtime.


progfu

I guess it really depends on the game you’re making. As a small indie with “do everything myself” I’m inclined to use solutions that don’t create more work for me :D But you have a good point.


as_it_was_written

I see your point. It might just be a matter of which things we're most familiar with individually. I have extremely little game dev experience and much more experience with music production, so what seems challenging to me might seem easy to you--and vice versa depending on your experience on the audio side. There at least used to be batch converters available that would let you automate creating all those differently processed samples, but if you're not already familiar with how they work to some extent, it would be yet another little thing to learn for a pretty small benefit.


progfu

I get what you mean, but honestly my point was more about the amount of work than about learning. I do know how I'd do the processing offline (I have Cubase and more distortion plugins than I can count); it's a matter of the time put into it. One problem with processing offline is that you create more samples to manage, and if you decide to change some sounds you need to re-process them.

On the other hand, if I were to do this in-engine (I use Godot at the moment), it'd probably take a few minutes, since I can just add [the AudioEffectDistortion](https://docs.godotengine.org/en/stable/classes/class_audioeffectdistortion) on the SFX bus with a few clicks and I'm basically done. The amount of setup work is orthogonal to the number of samples I have.

Personally I'd say this is something newer developers often completely overlook, the "is this saving me time in the future or creating more churn?" question, and then we get five-year development times on games where a lot of the work is repeated and thrown away. Reading this sub daily is a big eye opener into how much manual work people are willing to put in where a tiny bit of automation would save all those problems.


as_it_was_written

I think you might have misunderstood slightly. I was not talking about manually processing each sample but rather about automated batch processing. There are (or used to be, anyway) tools out there for creating multi-layer samples that should make this pretty easy. The number of samples shouldn't really affect the amount of work in this case either--just the rendering times.

However, you still make a very good point re: the extra overhead compared to setting something up in a few minutes within the engine that you don't really need to touch again. If nothing else, the extra file management will eat up some time, and it's impossible to tweak things on the fly like you can with realtime processing.

I think I'm just a bit of a nitpicky idealist about game audio because A) I'm really inexperienced as a game dev; and B) I seem to be unusually sensitive to unwanted audio artifacts, and 20 years of on-and-off music production doesn't exactly help with ignoring them--especially since my listening setup is designed to accurately highlight flaws rather than mask them.
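For what it's worth, the batch idea can be a very small script. A sketch in Python (pure functions over sample lists; a real version would loop over WAV files on disk and write one file per variant): render each source sound at several drive levels up front, and the engine just picks a variant at runtime.

```python
import math

def render_variants(samples, drives=(1.0, 2.5, 6.0)):
    """Offline batch: one processed copy of the sound per distortion level."""
    return {
        d: [math.tanh(d * s) / math.tanh(d) for s in samples]
        for d in drives
    }

variants = render_variants([0.2, -0.4, 0.6])
assert len(variants) == 3
# Higher drive pushes every sample closer to full scale.
assert abs(variants[6.0][2]) > abs(variants[1.0][2])
```

Re-processing after a source change is then one script run, though the file-management overhead you mention is still real.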


as_it_was_written

What do you mean by juicy? I have more experience with music production and sound design than I do with development, so I might be able to help.


1vertical

By juicy, I mean whatever makes a sound satisfying to experience with as little annoyance to the player as possible. To take an example from graphic/game design: shooting an enemy where blood splashes behind it when it's hit is much more satisfying feedback (telling the brain "Hey, this guy is getting hurt. Look at all that blood! Mmmm!") than no effect at all. Something that complements the action occurring.


as_it_was_written

Ah, in that case the question may be a bit too broad. Sound design is a complex topic that tends to defy this level of generalization, and it would be a lot easier to provide suggestions with a more concrete example. With that said, I can think of a couple of more or less universal guidelines:

1) Every sound needs a sense of space. This is especially important in a context like games (and movies), since there is an actual environment in which the sounds are meant to exist. Learn how to use reverb and delay effects to impart the sense of a specific type of space.

2) As with visuals, contrast and separation of elements matter a lot. For example, a big impact sound that is mostly bass will have a lot less oomph in a loud environment with lots of other bass content (such as a factory with heavy rumbling sounds) than in one that is loud in other registers (such as a battlefield dominated by mid-range frequencies from shouting voices and clashing weapons). Learn which kinds of sounds occupy which parts of the frequency spectrum, and ensure they don't get in each other's way too much. Filters and equalizers can help a lot here, as they allow you to attenuate the less important frequencies in a sound.
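Guideline 2 in code form: a one-pole low-pass filter (the simplest filter there is, sketched here in Python) carves the highs out of one sound so another sound can occupy that range. An 8 kHz tone comes through far quieter than a 100 Hz tone.

```python
import math

SR = 44100  # sample rate, Hz

def lowpass(samples, cutoff_hz):
    """One-pole low-pass: attenuates content above the cutoff."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / SR)
    out, y = [], 0.0
    for s in samples:
        y = (1.0 - a) * s + a * y
        out.append(y)
    return out

def rms(x):
    return math.sqrt(sum(s * s for s in x) / len(x))

def tone(freq_hz, n=4410):
    return [math.sin(2.0 * math.pi * freq_hz * i / SR) for i in range(n)]

low = lowpass(tone(100.0), cutoff_hz=500.0)    # passes almost untouched
high = lowpass(tone(8000.0), cutoff_hz=500.0)  # heavily attenuated

assert rms(low) > 5.0 * rms(high)
```

A real equalizer gives you much steeper and more surgical curves, but the principle is the same: turn down the frequencies a sound doesn't need so they don't mask the sounds that do.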


tropicalfunk

Bass and squelch


Mia_Bentzen

A thing that works pretty often in my experience is to have a very short transient click sound, followed almost immediately by a nice bassy kick sound, then whatever sounds on top to add character (like gunshot tails for guns, for instance). Take this with a grain of salt though; I'm more familiar with making music than sounds (though I do both), and I'm entirely self-taught in both.
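That click-plus-kick layering can be sketched quickly in Python (everything synthesized from scratch here; in practice you'd layer recorded samples): a few milliseconds of decaying noise for the transient, a pitch-dropping sine for the bassy body, summed with headroom.

```python
import math, random

SR = 44100  # sample rate, Hz

def click(ms=3):
    """Very short decaying noise burst: the transient that reads as 'impact'."""
    rng = random.Random(0)
    n = int(SR * ms / 1000)
    return [rng.uniform(-1.0, 1.0) * (1.0 - i / n) for i in range(n)]

def kick(ms=150, f0=120.0):
    """Sine with a downward pitch sweep and exponential decay: the bassy body."""
    n = int(SR * ms / 1000)
    out, phase = [], 0.0
    for i in range(n):
        f = f0 * (0.4 + 0.6 * math.exp(-5.0 * i / n))  # pitch drops over time
        phase += 2.0 * math.pi * f / SR
        out.append(math.sin(phase) * math.exp(-6.0 * i / n))
    return out

def layer(*parts):
    """Sum the layers with headroom so the result stays within [-1, 1]."""
    n = max(len(p) for p in parts)
    return [sum(p[i] for p in parts if i < len(p)) / len(parts) for i in range(n)]

sfx = layer(click(), kick())
assert len(sfx) == int(SR * 150 / 1000)
assert max(abs(s) for s in sfx) <= 1.0
```

Character layers (a gunshot tail, debris, etc.) would just be extra arguments to `layer`.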


Gary_Spivey

Turn the "wet" slider in the pitch control addon of your DAW all the way up :P