Convolution vs Algorithmic Reverbs

I’ve received this question multiple times, and it’s the number one search phrase (or a combination of those words) that leads to my site.  This is surprising since I haven’t actually written the article on it yet, but here it is!

I’d like to start off by saying that I am not an expert.  I’ve put this question to many composers I’ve had the pleasure of being in contact with, and they all have varying opinions on the subject.  I’ve learned a bit from each of them, and I’ve narrowed down what I consider believable reverb and what I think you should use (especially if you’re a composer just starting out).

For those of you who are unfamiliar with reverb (skip this paragraph if you are familiar), reverb is an effect applied to a sound signal that makes it sound like it’s in a physical space.  This space can be anything from a recording studio to a jazz club stage to an orchestral hall, or anything else you can imagine.  Two common uses are as follows:

  • To make instruments sound real / like recordings in real rooms
  • To apply a thickness effect or a “distance” effect (exaggeration)

Most reverb plug-ins will do a fine job with the second point, so I’ll mostly be focusing on the first one.

 

Convolution

Convolution reverbs use real sound recordings made in real rooms (or, in some cases, modified files based on such recordings), known as “impulse responses” (IRs).  These are produced in what can feel like magical ways; all you really need to know is that multiple microphones are used to capture the acoustics of a room, and the result is packaged into an IR file for you, so you don’t need to do any of that yourself.

The reverb plug-in then filters (convolves) your sound with this impulse response to generate a believable, organic tail based on the characteristics of a real room.
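If you’re curious what “filtering through an impulse response” actually means, here’s a minimal sketch in Python.  The file names and the 70/30 dry/wet blend are just assumptions for illustration; real plug-ins do this far more efficiently, but the underlying operation is the same convolution.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate, dry = wavfile.read("dry.wav")        # your instrument (assumed mono, 16-bit)
_, ir = wavfile.read("hall_ir.wav")        # impulse response of a real room

dry = dry.astype(np.float64) / 32768.0
ir = ir.astype(np.float64) / 32768.0

wet = fftconvolve(dry, ir)                 # the reverb tail, shaped by the room
wet /= np.max(np.abs(wet))                 # normalize to avoid clipping

# Pad the dry signal so it lines up with the (longer) wet signal, then blend.
dry = np.pad(dry, (0, len(wet) - len(dry)))
out = 0.7 * dry + 0.3 * wet

wavfile.write("reverb_out.wav", rate, (out * 32767).astype(np.int16))
```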

Since the processing for these reverbs involves convolving your signal with a recorded impulse response and mixing the result back in, they often have a larger impact on CPU (this is not always the case, but it’s generally true).  The CPU hit is roughly comparable to adding another audio track to your project, so two tracks, each with a convolution reverb as an insert effect, will make your DAW behave like it has roughly four tracks playing simultaneously.
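As a rough, back-of-the-envelope illustration of where that cost comes from (my own assumed numbers, not measurements from any particular plug-in):

```python
# Naive time-domain convolution needs one multiply-add per IR sample,
# for every output sample.  Assumed figures for a mid-sized hall tail:
sample_rate = 44100                      # samples per second
ir_seconds = 2.5                         # length of the reverb tail
ir_samples = int(sample_rate * ir_seconds)

ops_per_second = sample_rate * ir_samples
print(f"{ops_per_second / 1e9:.1f} billion multiply-adds per second")

# Real plug-ins use FFT-based (often partitioned) convolution, which is
# drastically cheaper, but it is still heavier than a typical algorithmic
# reverb that only runs a handful of delays and filters.
```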

Because of the CPU hit, it’s often recommended that you don’t use too many of these: either put them on an effects send channel so one plug-in instance is shared across multiple tracks, or use them sparingly on specific instruments.

One common problem with reverb in general is the amount of bass build-up it can cause.  Physics lesson: lower frequencies tend to penetrate surfaces, while higher frequencies tend to bounce off surfaces (the effect we’re trying to reproduce).  Reverbs tend to just apply their tail to everything (unless they have sufficient modeling for wall “material types” or provide cutoff filters and EQs).  Convolution reverbs, using their impulse response files, tend to exaggerate… you guessed it, the impulse response of your sound: the initial collision between a sound and the surrounding material.  If this isn’t set up correctly, it can cause your bass to build up much faster than it would with an algorithmic reverb.
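The standard fix is the cutoff filter mentioned above: high-pass the signal feeding the reverb so the low end never reaches the tail.  A minimal sketch, assuming a hypothetical “reverb_send.wav” and an illustrative 150 Hz cutoff (start there and tune by ear):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

rate, send = wavfile.read("reverb_send.wav")   # signal going to the reverb
send = send.astype(np.float64) / 32768.0

# 2nd-order high-pass at 150 Hz: removes the frequencies that would
# otherwise pile up in the reverb tail.
sos = butter(2, 150, btype="highpass", fs=rate, output="sos")
filtered_send = sosfilt(sos, send)

# filtered_send is what you would actually feed into the convolution
# (or algorithmic) reverb; the dry track stays untouched.
```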

With multiple instruments playing in an ensemble, that bass build-up can cause a large amount of muddiness (check out some of my older tracks, they are almost all muddy thanks to my poor reverb configuration).  Because of this and the CPU cost, I recommend using convolution reverbs on solo instruments only.

That being said, solo instruments tend to be at the forefront of a section of your piece (or the whole piece, depending on the context) and sound beautiful with convolution reverbs, because there usually isn’t enough going on in the background to cover up the simulated room.

 

Algorithmic

This is your standard reverb plug-in.  Most DAWs come with one, and they’d probably rate around 7/10 on a “should I use this” scale.  They get the job done but don’t sound amazing, because a basic reverb is pretty easy to create but very hard to master (I highly recommend getting a dedicated reverb plug-in; I’ll give my suggestions at the end of this article).

Algorithmic reverbs generate the reverb sound strictly from parameters you set in the plug-in (room size, decay time, damping and so on).  They try to produce the same result as convolution reverbs, but because they simulate the room response rather than using a recording of one, they can tend to sound fake, especially when exposed on a solo instrument.  Unless you have a very good reverb plug-in, I recommend not using algorithmic reverb on your solo instruments.
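To give a feel for what “generated from parameters” means, here’s a toy Schroeder-style reverb in Python.  The delay lengths and feedback gains are illustrative values I picked for the sketch, not any real plug-in’s settings, and commercial algorithmic reverbs are far more sophisticated than this:

```python
# Toy Schroeder-style algorithmic reverb: the tail is generated entirely
# from delays and feedback, driven by parameters instead of a recorded room.
import numpy as np

def comb(x, delay, feedback):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + (feedback * y[n - delay] if n >= delay else 0.0)
    return y

def allpass(x, delay, gain):
    """Schroeder allpass: y[n] = -gain*x[n] + x[n-delay] + gain*y[n-delay]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -gain * x[n] + xd + gain * yd
    return y

def toy_reverb(dry):
    # Parallel combs with slightly different delays build a dense tail;
    # more feedback means a longer decay (the "room size" / "decay" knobs).
    comb_delays = [1557, 1617, 1491, 1422]          # samples at 44.1 kHz
    wet = sum(comb(dry, d, 0.84) for d in comb_delays) / len(comb_delays)
    # Series allpasses thicken the echo density without lengthening the decay much.
    wet = allpass(wet, 225, 0.7)
    wet = allpass(wet, 556, 0.7)
    return wet

# Feed in a single click (an impulse) to hear or plot the raw generated tail.
dry = np.zeros(44100)
dry[0] = 1.0
wet = toy_reverb(dry)
```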

 

But which one should I be using?

The short answer is, both.

As stated before, I tend to use convolution reverb on solo instruments because it helps fill out the sound when there isn’t much else going on in your track.  Your mileage may vary, but that’s my general approach.  The more instruments you have, the more you can get away with a strictly algorithmic reverb.

I also tend to use convolution reverb on instruments that aren’t sampled.  Sampled instruments, even if sampled in a very dry environment, often have a small amount of natural room in the sample, and when you mix that with an algorithmic reverb it can sound amazing.  Some setups where I use strictly convolution reverb (and sometimes a combination of both):

  • Live guitar through an amp simulator (Guitar Rig)
  • Programmable Synths (Massive, FM8, Absynth)
  • Electronic / Hip-Hop style snare drums

In all these cases, the convolution reverb adds a tiny bit of realism back into something that is simulated.  The note about the snare drums is just there because I find that my particular convolution reverb of choice (Native Instruments Reflektor) adds a tiny bit of punch to the snares that I love.

Algorithmic reverbs, on the other hand, tend to excel at adding reverb to full mixes, or to samples that already have some room recorded into them.  Some examples of instruments and libraries I use algorithmic reverbs on:

  • Recorded acoustic guitar from my studio
  • Symphony ensembles that I (rarely) want to add more space to (CineBrass, Albion)
  • LA Scoring Strings ensembles
  • Any ambiences in a track (I almost never use my convolution reverb on these)

 

Which plug-ins should I use?

If you’ve read one of my previous articles on Komplete 7/8, then you already have Reflektor.  Reflektor, in short, sounds pretty good for something you get in a bundle with so many other useful tools.  I still use it exclusively on my guitar tracks and love the sound.

For algorithmic reverb, I recommend basically anything that has a built-in high/low cut filter.  Most that I’ve seen offer this as an option now; I know Ableton Live’s reverb plug-in has it and sounds pretty good.  I was using it for quite some time until I stumbled upon Arts Acoustic Reverb.  Daniel James posted a video to YouTube in which he says he almost exclusively uses this reverb.  If you’re not familiar with his work, I highly suggest checking him out, as his stuff sounds awesome.  After listening to his work, I’ve since tried Arts Acoustic Reverb on my solo tracks too and it’s pretty convincing.

I’ll try to upload some examples for this article in the next week or so to help show the differences between these plug-ins.  If you just want one solid reverb plug-in that doesn’t have that “metally aftertaste” a lot of built-in plug-ins create, I recommend trying out Arts Acoustic Reverb.

 

Conclusion

Algorithmic Reverb: Arts Acoustic Reverb

Convolution Reverb: Use Reflektor, especially if you’ve already bought Komplete from Native Instruments.

If you’d like to suggest any other reverbs for people to check out, feel free to let me know and I’ll add them to this list.  Audio examples coming soon…

 

