Some recent tests being discussed in this PC Music thread have shown that your choice of audio interface can have a marked effect on how many plug-in effects/softsynth voices you can run, particularly at lower latencies/buffer sizes.
The big confusion there seems to be what buffer sizes most musicians typically use/need, so let's try to find that out here.
Please explain the reason for your choice with a short post, e.g.:
a) Monitoring live vocals
b) Playing guitar through plug-in effects
c) Playing drum pads live
d) Playing keyboards
e) Mixing/mastering
f) Composing
I suspect this will prove very useful information for audio interface manufacturers if they know what the majority of us are trying to do with their products.
OK - I voted first, and my choice of 256 is for playing softsynths - at 44.1kHz this buffer size offers me a perfectly acceptable playback latency of 6ms. I tend to leave it the same for mixing/mastering too.
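For anyone wanting to check that sort of arithmetic: one-way buffer latency is simply buffer size divided by sample rate. A quick illustrative Python sketch that prints the figures for common settings:

    # One-way buffer latency in milliseconds: buffer size / sample rate.
    for buf in (32, 64, 128, 256, 512, 1024):
        for rate in (44100, 96000):
            print(f"{buf:5d} samples @ {rate} Hz = {1000 * buf / rate:5.1f} ms")

(256 samples at 44.1kHz comes out at 5.8ms, hence the 6ms figure above.)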
Well, I voted for 256, because that's what I generally use for tracking. On my interface at 44.1kHz it gives the slightly higher latency of about 7-8ms, but I don't often notice a problem with that.
However, I will add that I use a range of latencies depending on what I'm doing. For instance, I may drop down to 128 sometimes tracking something really percussive just to be safe on timings, and for mixing I'd bump it up to 512, again, just to be safe. Plus by then latency isn't that much of an issue.
In the main, 128 or 256 gives me acceptable performance for playing synths and stuff. Occasionally if there's something particularly timing critical, I might whack it down to 64.
32 puts a little too much strain on my MBP system - it's fine for just playing a simple softsynth, but a heavier one, particularly with other FX on, is too much for it.
If I'm playing guitar through FX and software monitoring, I will go to 64 (again, 32 is a little too heavy for me here once you add live amp modelling and FX processing).
However, the days of moving up to a higher buffer setting to mix with are long gone for me, I seldom find it's necessary to go to a 512 buffer size or something as Logic is pretty well optimised in this area.
So generally speaking, I'm at 128 most of the time for general use, and shift down to 64 for software monitoring live playing...
Mostly 256 to allow for using softsynths, but since I monitor outside of my DAW (RME TotalMix) I'm happy to run at 512 to get a bit more grunt when the mix needs it.
Hi Martin. I've not filled this in, as I don't use only one!
I can't remember the last time I thought of the buffer size as a set-and-forget thing: it depends what it is I'm doing. I'll set it for as low latency as I can get it if tracking V-Drums/BFD or playing soft synths via MIDI; or to free up as much CPU resource as I can when mixing very large projects with a gazillion thirsty plug-ins; or leave it at whatever it happens to be set at when 'programming' instrument parts. Like Elf, I'll use Total Mix for monitoring while tracking so the setting isn't important there.
It also depends to some extent on which computer system and audio interface I'm working with at the time, and what word-length/sample rate I'm working on.
Ditto. I set it as low as it can go when recording MIDI inputs, but then whack it up pretty high when doing other things so it doesn't bother me.
Voted 64 for tracking edrums - it'd be 32 if my set-up could reliably sustain it (close, but it'll occasionally blast digital noise down the 'phones).
After drums I'll track guitars through Guitar Rig at 128.
After that I increase the buffer throughout a project as demands dictate - heavy use of certain Play and Kontakt libraries will often force me up to 512 at which point I'll start to freeze tracks to decrease CPU load. I intend to upgrade my PC soon. I'd like to stick with the interface for its particular feature set.
(Tascam FW1082 / Phenom X4 9750)
64. For composing, an option that wasn't in Martin's list.
Covers a multitude of sins - VSTi, external instruments, slave machines etc.
Agree with Matt too - when it comes to mixdowns I tend to ramp it up a bit, but anything above 128 makes decent accurate playing difficult unless you adjust your playing timing to allow for it. Come what may, anything as high as 6ms is my idea of hell.
Sorry I didn't specifically cater for those musicians who routinely change settings depending on the task. Will those who fall into that category please vote 'None of the above', so we can include them.
256 for me, mainly composing and softsynths. Occasional live instrumentation. Depending on the number of VIs I will go higher (512, 1024) for mixing. However, I've got a new rig and will try permanently moving to 128 for composition and VIs.
1. My interface has HW direct monitoring so for recording guitar, vocals and overdubs the latency means nothing to me, so..
2. I just set it high so that I have no risk of dropouts and my processing count being curtailed
3. I don't really 'play' keyboard - yes, I can bang out a few chords for strings and the like, but my playing isn't affected by any sort of feel, as I edit/quantize afterwards to get the feel I need anyway. I 'spose for playing piano-type sounds it is an issue, but in that case I tend to reduce the buffer, do a take, then wang it back up again before mixing (and using plugins) begins.
4. For mixing, buffer size is irrelevant, isn't it, except for plugin count?
5. for guitars I use a real amp (so HW direct monitoring again) or split the signal off to a POD type device to get the right sort of sound and then record a DI for re-amping or POD farming/Guitar Rigging later. For bass a DI sound is nearly always good enough to monitor, if not ditto what I do for guitars.
To be honest latency/buffer size just isn't any issue for me, so I don't worry about it.
512 for MIDI programming (Logic). I don't actually "play" instruments so this is fine. A low buffer in Wavelab will cause dropouts, so I tend to whack it right up (this is Wavelab-specific though).
I use 64 samples for realtime audio. I play keyboard and (very soon) drum controllers so I need near-zero latency. When CPU load is an issue, which almost never happens for me, I switch to 128 or 256, but never above as latency would become too noticeable.
Most of the DSP software I use is quite CPU-friendly, so these settings are valid on my five-year-old laptop too. The exception is when I need lots of voices on polyphonic samplers, but I prefer keeping the latency low, so I simply reduce the number of voices. For example, on my laptop, which has a 5400rpm drive, I run Ivory Italian Grand with all features enabled but only 36 or 48 voices of polyphony, and the latency stays at 64 samples.
I still do not have Kontakt and use a custom Max/MSP polyphonic sampler but as it is very simple it does not require a top computer for getting enough voices.
Also, when a live set requires an additional complex "tape" track, I render it to a wave file instead of processing it in realtime, so that the CPU can be used for the actual live playing.
When working in non-realtime or when recording, latency does not matter, so I may go up to 1024 samples, but that is rarely needed in practice. In the end I leave the latency at 64 samples most of the time on desktops, and 128 or 256 on my old laptop. I do not like working with huge projects, so for very complex tracks I break the workflow into logically-structured small projects. Anyway, a big part of the processing is not even done inside the DAW.
Sometimes, I also program some very complex DSP processes that could not be rendered in realtime, so I render them in non-realtime and then use the files I get as new audio samples.
If I was using 10 convolution reverbs at the same time, or some of the newest "emulation" VST instruments, or if I made 64-track projects with lots of plugins, things would be different I'm afraid.
EDIT: I forgot to mention that I only use 44100Hz or 48000Hz sample rates and do not plan on using other rates in the future (48kHz when working to picture, of course).
a) monitoring live vocals
On my PC (running Reaper) using a Focusrite Saffire Pro 40 I use 128 - minimal delay without any clicks or pops. Also tend to use zero-latency reverb e.g. EpicVerb.
d) playing live keyboards
On my MacBook using a Focusrite Saffire LE I use 64 for playing keyboards through MainStage.
e) mixing
PC/Saffire Pro 40 again... 1024, because the delay doesn't matter and it runs smoothly.
EDIT: made clearer after I saw the a-f list above...
I play my guitar live through effects plugins with a Scan 3XS laptop and an RME Babyface @ 128 samples & 96kHz. This gives me a roundtrip latency of 4.32ms.
But I'd note that, personally, I think the ASIO buffer setting and the task (mixing, playing guitar live, composing etc.) don't mean much by themselves. One should also take the sample rate into account and, most importantly, whether the audio card is PCI or external (FireWire, USB). External audio cards have hidden buffers that give greater roundtrip latency than internal ones at the same ASIO setting.
So, the question would be: at what roundtrip latency do you feel comfortable to mix, monitor live vocals, compose etc?
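To put rough numbers on that point (the converter and safety-buffer figures below are placeholders, not measurements from any particular interface), a Python sketch of a roundtrip estimate:

    # The ASIO buffer runs once on input and once on output; converters and
    # any hidden 'safety' buffers add more on top. Figures are illustrative.
    def roundtrip_ms(buffer_samples, rate, extra_samples=0, converter_ms=1.5):
        buffered = 2 * buffer_samples + extra_samples  # in + out + hidden
        return 1000 * buffered / rate + converter_ms

    # Same 128-sample ASIO setting, different hidden buffering:
    print(roundtrip_ms(128, 96000))                     # e.g. a PCI card
    print(roundtrip_ms(128, 96000, extra_samples=256))  # e.g. USB/FireWire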
ElGreco wrote:External audio cards have hidden buffers that give greater roundtrip latency than internal ones at the same ASIO setting.
So, the question would be: at what roundtrip latency do you feel comfortable to mix, monitor live vocals, compose etc?
True, but many people posting here (including me) are primarily playing softsynths, so round trip latency isn't applicable. Moreover, once you start asking people about round trip latencies including hidden/safety buffers, extra latency due to converters etc. the question can become overly complicated to answer.
I also suspect many musicians won't actually know their real-world latency - I use the excellent CEntrance LTU to measure this value manually, but I suspect its results might come as a shock to some musicians.
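If anyone wants to roll their own rough measurement rather than trust the driver's reported figure, the usual trick is a physical loopback: cable an output straight into an input, play an impulse, and count the samples before it reappears. A minimal Python sketch, assuming the third-party sounddevice and numpy libraries (a dedicated tool like the LTU will be more accurate):

    import numpy as np
    import sounddevice as sd

    fs = 44100
    click = np.zeros(fs, dtype='float32')  # one second of silence...
    click[0] = 1.0                         # ...with an impulse at sample 0

    rec = sd.playrec(click, samplerate=fs, channels=1)  # play and record at once
    sd.wait()                                           # block until finished

    onset = int(np.argmax(np.abs(rec)))  # first loud sample = roundtrip delay
    print(f"roundtrip: {onset} samples = {1000 * onset / fs:.1f} ms")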
64 because I can. I track at 64 for monitoring purposes and generally just leave it there unless I get crackles when I'm mixing, when I'll go to 96. Good old RME!
I tend to go for 2048 with my RME Hammerfall 9636 card, but that's down to Adobe Audition V2, which has big performance problems. Even though I rarely use Audition 2 nowadays, I've still stuck with that setting, as reliability is more important than latency for most of what I do.
Of course, I'll take it down to 64 or 128 when using virtual instruments.