Having the system baselined beforehand completely removes the need for live forensics, and baselining should be standard practice in all production environments.
I suspect that is what I would (in its physical manifestation) call a "reference machine". It reflects the standard desktop and is what you would use to test upgrades, new applications, etc. before moving them into production.
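One way to make the baseline idea concrete is a simple file-hash snapshot of the reference machine that you later diff against the live (or suspect) system. This is a minimal sketch, not a production forensics tool: the function names and directory layout are my own, and a real baseline would also cover metadata, registry keys, services, and so on.

```python
import hashlib
import os

def baseline(root):
    """Walk a directory tree and record a SHA-256 hash for every file,
    keyed by path relative to the root."""
    hashes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                hashes[rel] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def compare(before, after):
    """Return files that were added, removed, or modified
    between two baseline snapshots."""
    added = sorted(set(after) - set(before))
    removed = sorted(set(before) - set(after))
    changed = sorted(p for p in before.keys() & after.keys()
                     if before[p] != after[p])
    return added, removed, changed
```

You would run `baseline()` on the reference machine when it is known-good, store the result, and later run `compare()` against a fresh snapshot of the system under investigation.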

Live forensics is, by definition, performed on a live system, or "a system running in its normal configuration". That is still flawed, as it is possible for the security policy (or elements of the user interface, for that matter) to interfere with such forensics.
When you are analysing the effects of malware, isn't that exactly what you should be interested in? You want to know how the malware interacts with your live system.

My thoughts are based on general concepts of the software development cycle: Unit Test - System Test - User Acceptance Test. The later stages of system testing would include "end-to-end testing" and "compatibility testing", where you are essentially verifying the integration, both internal and external.

Now, I tend to look on malware analysis as testing an (albeit unwanted) application, just like any other. I would therefore want to know what the "beast" does in a "sterile" environment, as that would tell me its intentions. However, I would also want to know what it did on a "reference machine" (mirror of my production environment) and both pieces of information are of interest.
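Since both runs are of interest, the two sets of observations can be triaged against each other. A minimal sketch, assuming you have already collected a list of observed artifacts (files touched, processes spawned, hosts contacted) from each run; the bucket names and inputs here are hypothetical:

```python
def triage(sterile_artifacts, reference_artifacts):
    """Split observed malware artifacts into three buckets:
    - core: behaviour seen in both the sterile VM and the reference machine
    - reference_only: behaviour only triggered by the production-like setup
    - sterile_only: behaviour only seen in isolation"""
    sterile = set(sterile_artifacts)
    reference = set(reference_artifacts)
    return {
        "core": sorted(sterile & reference),
        "reference_only": sorted(reference - sterile),
        "sterile_only": sorted(sterile - reference),
    }
```

The "core" bucket approximates the sample's intentions, while "reference_only" shows what it actually does once it can interact with your environment.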

As for interference with the forensics, I would say that is a separate issue, as it concerns the effectiveness of the tools being used.