I get where you're coming from, but I think C is more about site-level timing. It captures the period between the site contract and the first patient, which is a key part of startup efficiency. Not totally sure, but to me C fits that dimension better.
I don't think it's D. The trap here is focusing on the investigation rather than on what the validation documentation actually records. The documentation needs to show exactly where the test diverged from expectations, so B (expected and actual results) fits better. Root cause analysis comes after, not at this stage. Pretty sure that's what guidelines like GAMP 5 want, but correct me if you read it differently!
I've seen similar questions on official practice tests: validation docs always highlight the expected results compared to what actually happened in the test. Extra steps like root cause analysis or training usually come later, not right after a test fails. I think it's B, but curious if anyone's study guides say otherwise?
Honestly, I get why B is tempting, since you're not sending real-time alerts in paper studies. I'd still pick B here; similar wording shows up as a common trap on practice tests.
Always with the idea that "anyone can do data entry"... but in real studies, that's not how it works. Clinical background and EDC/platform training matter, so A just sounds off base to me. On the practice sets I've seen, B, C, and D are the ones picked for this reason.
It's not as simple as just hiring clerical staff for data entry. You really need someone with a clinical or research background (B, D) and proper training on the EDC system (C). Saw a similar question on a practice test. Agree?
It's C here, since a Learning Management System can also store SOPs and track who has completed the relevant training. Just to clarify, if the question had said "manage version control and approvals," I'd probably switch to A. Does the word "best" in the question imply compliance too?