Ramirez wrote: One other question I have - I have tried calibrating everything with my monitor level control (M-Patch 2) at 2 o'clock, as usually recommended.

My monitors (AE22) only offer 3 stepped level settings. To achieve a reading of around 74dB SPL from each speaker at the listening position, from -20dBFS band-limited pink noise with the AE22s at their least sensitive setting, requires around 23dB of (digital) attenuation on the interface DSP.
Hmmm... not ideal, as digital attenuation degrades the signal-noise performance of the D-A converters (but see below).
Should I juggle the monitor control level and the digital attenuation a bit (perhaps back the M-Patch off to 12 o'clock and bring the digital level up by 10dB or so), or will it make no practical difference?
Yes, you could certainly do that, and it would improve the signal-noise ratio of the D-A by 10dB -- but I wouldn't set a reference position on the monitor controller any lower than 12 o'clock, for fear of degraded stereo tracking.
If the interface has switchable output levels -- as RME converters do, for example -- then selecting a lower analogue output level would be a good idea. The other alternative is to build some balanced 20dB pads to reduce the signal level between the monitor control outputs and the speaker inputs.
I'd suggest soldering 1% metal-film resistors into the XLR plugs and, because you're using a passive monitor controller, placing the attenuation between the interface and the monitor controller rather than between the controller and the speakers.
You need to build a U-pad: a series resistor (R1/2) in each of the hot and cold signal legs, with a shunt resistor (R2) connected directly across them. The hot and cold signal wires from the source connect on one side, and the XLR pins on the other.
If placed between the interface and controller, I'd suggest connecting the hot and cold wires via 2k7 resistors (the two R1/2 resistors) to their respective pins (2 and 3), with a 620 Ohm resistor (R2) wired directly between pins 2/3.
A higher-impedance arrangement might be beneficial if you are forced, for some reason, to insert the pad between the passive monitor controller and the speaker, in which case I'd suggest using 4k7 resistors in the hot/cold feeds and 1k between pins 2/3.
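
As a quick sanity check, here's a minimal Python sketch of the divider arithmetic (the function name is mine, and the formula assumes the pad feeds an effectively unloaded, high-impedance input, which is a reasonable approximation for most line-level inputs):

import math

def upad_attenuation_db(r_series_per_leg, r_shunt):
    # Balanced U-pad: one series resistor in each signal leg, one shunt
    # resistor across the pair; with a high-impedance load it behaves
    # as a simple voltage divider.
    return 20 * math.log10(r_shunt / (2 * r_series_per_leg + r_shunt))

print(round(upad_attenuation_db(2700, 620), 1))   # 2k7 + 620R: about -19.7dB
print(round(upad_attenuation_db(4700, 1000), 1))  # 4k7 + 1k:   about -20.3dB

Both resistor combinations land within a fraction of a decibel of the intended 20dB.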
Ramirez wrote: I know (think?) digital attenuation reduces bit depth... but losing 4 bits or so of dynamic range is not really an issue on the output side, is it?
You're not really losing bits -- they are all still there; it's just that the top ones won't be doing anything. Assuming the digital attenuation is properly dithered, the end result is nothing worse than a reduced signal-noise ratio in the converter.
However, since a typical mid-market interface converter will have a dynamic range performance of 115dB or better, knocking 20dB off that still gives 95dB or better.
Your reference listening level allows 20dB of headroom, so the converter noise floor will be 75dB (95-20) below your reference acoustic level... or around 0dB SPL, the threshold of hearing! So I really don't think you'll have anything to worry about!
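
To make that arithmetic explicit (all figures are taken from the discussion above; this is just an illustrative sketch, not a measurement):

converter_dr_db  = 115  # typical mid-market converter dynamic range
digital_atten_db = 20   # approximate digital attenuation in use
headroom_db      = 20   # reference alignment is -20dBFS
ref_spl_db       = 74   # SPL at the listening position for -20dBFS noise

effective_dr_db = converter_dr_db - digital_atten_db  # 95dB
noise_below_ref = effective_dr_db - headroom_db       # 75dB below reference
noise_floor_spl = ref_spl_db - noise_below_ref        # about -1dB SPL

bits_unused = digital_atten_db / 6.02  # roughly 3.3 'top' bits left idle

print(noise_floor_spl, round(bits_unused, 1))

The last line also shows why the 'losing 4 bits or so' intuition above is about right: 20-odd dB of attenuation parks roughly three to four of the top bits.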
If your reference level had been at the 83dBC international reference for large rooms, it might have been more of a practical issue, but even then I doubt it!
The plain fact is that the old 16-bit format of CDs was very well chosen: it is capable of vastly more dynamic range than the majority of people can ever accommodate in a domestic setting, and it's still comfortably more than most project studios with stupidly loud monitoring can handle, too!
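
For context, using the usual rule of thumb of roughly 6.02dB of dynamic range per bit (an approximation, not a figure from the discussion above):

print(f"16-bit theoretical dynamic range: {16 * 6.02:.0f}dB")  # ~96dB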
So... bottom line: if the digital attenuation sounds okay, stick with it and don't worry. If you are concerned about low-level distortion arising from imperfectly dithered processing (unlikely, but possible), then remove the digital attenuation and build some passive analogue attenuators instead.
Hope that helps...
H