1. IEEE Std 1028-2008, "IEEE Standard for Software Reviews and Audits."
Section 5.5.3, "Output" (for Inspections): This section specifies the required outputs of a formal inspection. It explicitly lists an "Inspection data report, including the number of participants, the time spent on the inspection, and the total number of anomalies found." This makes metrics collection an explicit requirement of the process, and it is exactly the step missing from the question's description (a minimal sketch of such a report follows this entry).
Section 5.5.2.6, "Follow-up": This section states, "The moderator shall ensure that the inspection data is recorded." This confirms that data collection (metrics) is a key responsibility during the follow-up phase.
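For illustration only, here is a minimal Python sketch of a data record holding the outputs the standard lists for the inspection data report. The class and field names (`InspectionDataReport`, `work_product`, etc.) and the sample values are assumptions for this sketch, not terms defined by IEEE Std 1028.

```python
from dataclasses import dataclass

# Minimal sketch of an inspection data record capturing the outputs
# IEEE Std 1028 lists for the inspection data report. Field names and
# values are illustrative assumptions, not terms from the standard.
@dataclass
class InspectionDataReport:
    work_product: str     # item under inspection
    participants: int     # number of participants
    hours_spent: float    # time spent on the inspection
    anomalies_found: int  # total number of anomalies found

# Hypothetical values for a single inspection.
report = InspectionDataReport(
    work_product="payment_module_design_v3",
    participants=5,
    hours_spent=2.5,
    anomalies_found=12,
)
print(report)
```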
2. Wiegers, K. E. (2002). Peer Reviews in Software: A Practical Guide. Addison-Wesley Professional.
Chapter 8, "Analyzing Review Data": This chapter details the importance of collecting and analyzing metrics from reviews. Wiegers explains that the follow-up stage is when the moderator or review leader verifies that the necessary changes were made correctly, and that the review leader should also collect the review metrics at this time (paraphrased from the chapter). This directly links metrics collection to the follow-up phase.
3. Carnegie Mellon University, Software Engineering Institute (SEI). (1996). A 'Personal' Software Process (PSP) for 'Team' Software Process (TSP) (CMU/SEI-96-TR-001).
Section: The Inspection Process: The SEI's work on PSP and TSP heavily incorporates formal inspection methods derived from Fagan's work. The process description emphasizes recording data not only on defects but also on the time spent in each review phase; that data is then used to calculate process-effectiveness metrics and to improve future planning and quality efforts (a sketch of such calculations follows this entry). The absence of this data-collection step in the question's scenario is a significant omission for a formal process.
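The sketch below shows the kind of derived metrics this phase-level data enables. The formulas (defect density, inspection rate, find rate) are common in the inspection literature rather than quoted from the SEI report, and all sample numbers are made up.

```python
# Illustrative metric calculations from inspection data; the phase
# breakdown and sample numbers are hypothetical.
size_loc = 400            # size of the inspected work product, in LOC
phase_hours = {"planning": 0.5, "preparation": 3.0,
               "meeting": 2.5, "follow_up": 1.0}
defects_found = 12

total_hours = sum(phase_hours.values())
defect_density = defects_found / (size_loc / 1000)   # defects per KLOC
inspection_rate = size_loc / phase_hours["meeting"]  # LOC covered per meeting hour
find_rate = defects_found / total_hours              # defects found per hour of effort

print(f"defect density:  {defect_density:.1f} defects/KLOC")
print(f"inspection rate: {inspection_rate:.0f} LOC/hour")
print(f"find rate:       {find_rate:.2f} defects/hour")
```

Tracked across inspections, figures like these are what let a team judge whether its review process is improving and plan future inspections, which is the point all three references make.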