A Sense Of Space

Warning: this is “techy” – but a gradually unfolding realization that has led me to create more “alive” and “breathing” sounding mixes.

When listening to vintage recordings of Jazz or Latin artists, I always noticed that there was a depth of placement which made it perfectly clear to my mind where the instruments “lived”.

I particularly loved the sound where the whole drum kit could be heard – versus the sound of having the drums individually miked. Think the Beatles vs. the ’80s.

I have always favored the sound of, e.g., a bongo when it sounded like the player was standing in front of the back wall rather than right at the microphone – the sound was bigger, fatter, more tribal and impactful to my ears.

When hearing a live show, the best sound experiences I ever had were hearing a good amount of direct sound from the stage mixed with the sound of the speakers.

When I asked one of my admired colleagues, Grammy-winning engineer Moogie Canazio, how he preferred to record a tamborim (the Brazilian instrument), he explained to me that he sometimes placed the microphone between 4 and 8 feet away from the instrument to get the desired result.

Think about it: at this distance, the direct transient (the part of the sound that would reach the microphone when recorded in an anechoic environment) is almost reduced to zero – the room makes the entire sound and reflections reach the microphone almost as early as the direct sound.
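The arithmetic behind this is simple. Here is a minimal sketch of the arrival times involved; the specific distances (a mic 6 feet from the player, a wall 2 feet behind him) are invented for illustration, not taken from Moogie's setup:

```python
# Arrival-time arithmetic for a distant microphone placement.
# Sound travels ~1,125 ft/s, i.e. about 1.125 feet per millisecond.
SPEED_FT_PER_MS = 1.125

mic_distance_ft = 6.0            # direct path: player -> mic (hypothetical)
reflected_path_ft = 6.0 + 2 * 2  # player -> wall 2 ft behind him -> mic

direct_ms = mic_distance_ft / SPEED_FT_PER_MS
reflected_ms = reflected_path_ft / SPEED_FT_PER_MS
gap_ms = reflected_ms - direct_ms  # how far the first reflection trails the transient

print(f"direct: {direct_ms:.1f} ms, reflection: {reflected_ms:.1f} ms, gap: {gap_ms:.1f} ms")
```

At these distances the first reflection trails the direct sound by only about 3.5 ms – squarely inside the "magic zone" of the first few milliseconds after the transient.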

“The magic happens within the first 3-5 milliseconds after the transient”
Stephan Oberhoff

The longer I am privileged to engineer high-quality music in the United States, Germany and Brazil, the more I like to develop a sense of spatial depth in all of my mixes.

It used to be that the only way to create space was to record instruments in real rooms, with distant as well as close-up microphone placements.

The other option was (and still is) to re-amplify the instruments by playing them back through the speakers and placing microphones further away from the speakers to capture a real room reflection and add that back into the sound mix.

It is imperative to understand that such early reflections and room “feedback” start immediately after the sound event has taken place. You have no time for latency!

Sometimes there is zero direct sound reaching the microphone at all – the room “speaks”, if you’ll accept that expression.
We have to consider this when using the technology we have available nowadays: software room emulations such as the legendary ALTIVERB.

Such plug-ins frequently get diminished in their usefulness by the “bottleneck” called latency.

A simple mathematical example: if it takes your digital workstation between 10 and 20 ms to get the signal to the plug-in, have it processed, and return it to the main audio output, such “room emulators” have a very hard time creating a realistic and cohesive result.
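To put that latency in acoustic terms, a delayed first reflection behaves like extra path length through the air. A quick sketch of the conversion:

```python
# What 10-20 ms of processing latency means acoustically:
# a delayed reflection is equivalent to extra travel distance.
SPEED_FT_PER_MS = 1.125  # feet of travel per millisecond (~1,125 ft/s)

for latency_ms in (10, 20):
    extra_path_ft = latency_ms * SPEED_FT_PER_MS
    print(f"{latency_ms} ms latency is like {extra_path_ft} ft of extra travel")
# Roughly 11 and 22 feet of phantom distance - far outside the
# 3-5 ms "magic zone" right after the transient.
```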

The “magic zone” between the transient and the first few milliseconds after the transient stays blank. 

Does “blank” sound natural to your ears?

It is almost as if we used a standard reverb unit and gave it a pre-delay of 10 ms or more.

For long reverbs that is just fine and even desirable – for room emulation plug-ins such latency (or pre-delay) is absolutely undesirable and renders them nearly useless for our purpose.

We need an enormously high degree of realism in order to create three-dimensional sonic spaces, which require very short, warm, dense (and fast) room reflections.

I have found a way to bypass the problem of latency in my recent project mixes and have gotten extraordinarily positive feedback from my favorite mastering engineer of all time, Bernie Grundman.

Bernie stated in one of our last sessions:  “You know, it’s a nice thing when you not only hear a great left / right balance – but there’s also something going on BEHIND this mix!”

Oftentimes we as producers create multiple layers of instruments because we feel that the space isn’t sufficiently filled with instrumentation. It is very surprising what happens when we give each instrument its natural-sounding room ambience!

In this day and age where so many productions are recorded “in layers” there is no natural communication of the instruments with one another.

Iso booths and direct miking also separate the instruments from each other.

The studio ambience of each instrument’s room, if even recorded, does not always work right for each mix situation.
I have taken to using convolution reverbs and other techniques in order to create a controlled Room experience.
This process is quite fast and I have a plethora of room reflections to choose from.

One of the workarounds I have come up with is the following:

 

Say you have a stereo piano track and it lacks the sense of “air” around it.

Rather than using an aux send to your favorite room plug-in (which would cause latency), just copy the track and process that copy (wet signal only) with said plug-in and your room sound of choice – directly “on the file”.

In Pro Tools that’s done with AudioSuite, and it renders the file with zero latency – which is the real deal!

You can verify this by rendering the file with the Altiverb in Bypass mode – the timing of the two files will be identical.

Whichever way you try this, make sure you don’t let latency create a gap between the transient and the room reflections.
Lower the copied track to zero output and slowly bring it up in level until you get a sense of space.
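The steps above can be sketched in code. This is a minimal toy model, assuming plain NumPy convolution in place of a commercial convolution plug-in, with an invented impulse response rather than a real Altiverb room – but it shows why an offline ("on the file") render introduces no latency gap:

```python
import numpy as np

sr = 48000
direct = np.zeros(sr // 10)
direct[0] = 1.0  # a single transient standing in for the piano track

# Toy room impulse response: reflections only, nothing at time zero.
# First reflection 2 ms after the transient - inside the "magic zone".
ir = np.zeros(sr // 20)
ir[96] = 0.5   # 2 ms at 48 kHz
ir[336] = 0.3  # 7 ms
ir[720] = 0.2  # 15 ms

# Offline render of the wet-only copy: the convolution starts at
# sample 0, so the reflections line up exactly with the transient.
room_track = np.convolve(direct, ir)[: len(direct)]

# Verify the timing: the first reflection lands 2 ms after the transient.
transient_sample = int(np.argmax(np.abs(direct)))
first_reflection = int(np.flatnonzero(room_track)[0])
print((first_reflection - transient_sample) / sr * 1000, "ms")  # prints 2.0 ms

# Blend: start the room track at zero and raise it until space appears.
room_gain = 0.3
mix = direct + room_gain * room_track
```

Had the wet signal come back through a live plug-in chain with 10 ms of round-trip latency, every reflection in `room_track` would shift 480 samples later, and the 2 ms "magic zone" reflection would be lost.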


Once again, whatever digital workstation you are working in, make sure that there is no gap between the transient and the arrival of the first reflection created by your room simulation plug-in!

I can almost guarantee that you will want at least a 7-band EQ on your freshly created Room Track (as I’m now calling it) in order to prevent it from coloring the sound in a negative way.

Logically, you must use the fastest (lowest-latency) EQ plug-in for obvious reasons.

You will also want to experiment with the pan setting as to where this newly created reflection track should appear.

The direct track and the reflection track will from this point on “live as one”, as both together are the emulation of a distant and spatial recording, and they will have to be mixed and grouped together.
You will also notice that reverbs behave very differently if you send only the reflection track to the reverb system rather than the direct track’s sound! It could also be a mix of both signals that is sent to the main reverb.

The last project I worked on with this technique was the rather lightly orchestrated “The Music Never Ends” by Susan Watson which I co-produced with my longtime partner Michele Brourman.

Feel free to send me your mix and I’ll add my magic sauce to your project.

Contact me at: admin@stephanoberhoff.com

To wrap things up, here’s an empowering quote by Bernie from my 2016 sessions with him:


“I have recently mastered a few albums that Stephan Oberhoff mixed and I can honestly say that, for the most part, I am at a loss to find anything to do to improve his recordings. Outside of an occasional adjustment for consistency, he is spot on, and that’s impressive. Only a small number of mixers have reached this level of expertise. He certainly is one of the best.” –Bernie Grundman

STEPHAN OBERHOFF