This kind of wording is always confusing in ISTQB exams. I'd lean toward A tbh, since when you run tests and find bugs, the whole process eventually boosts quality, right? The intention behind testing is to drive improvements, even if inspecting alone doesn't fix anything. Maybe I'm missing something in the official syllabus, but that's how it plays out in real life. Anyone else see it this way on practice tests?
Pretty sure it's C here. Saw similar wording on a practice mock, and ISTQB usually sticks to "conformance to requirements" as their preferred answer. D would make sense for user experience questions but that's not what they're asking. Feel free to disagree though.
I don't think D is right here. "Works as designed" sounds good, but it's a bit of a trap: working as designed doesn't guarantee it meets all the requirements (the design itself could be wrong). ISTQB loves "conformance to requirements" (C), and that's what you see in their official materials. C is definitely the safer pick for the exam, especially given how precisely they word these things. Agree?
Had something like this in a mock and it matched option A. Failure is about incorrect behavior in the program because of a fault, not just when or where the bug is found. Pretty sure that's straight from ISTQB definitions.
A is what I've seen in both the official guide and practice tests. Failure means the app does something wrong because of a defect, not just when or where bugs are found. Pretty sure that's ISTQB's standard definition, correct me if I'm wrong.
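To make the fault-vs-failure distinction concrete, here's a small sketch (the function and values are made up for illustration): the *fault* is the defect sitting in the code, and the *failure* is the incorrect behaviour you only observe when the code actually runs.

```python
# Hypothetical example of ISTQB's fault/failure distinction.
# The fault (defect) is the wrong operator below; the failure is the
# deviation from expected behaviour that shows up at execution time.

def is_adult(age: int) -> bool:
    """Intended rule: adults are 18 or older."""
    return age > 18  # fault: should be `age >= 18`

# The fault sits silently in the code until execution exposes it:
print(is_adult(25))  # True  -> correct result despite the fault
print(is_adult(18))  # False -> failure: observed behaviour deviates from spec
```

Note how a test with `25` passes even though the fault exists, which is exactly why "failure" is tied to incorrect behaviour at runtime, not to the mere presence of a defect.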
I'd say D, since 12, 35, and 55 seem spread across the partitions, but is the question asking only for valid (in-range) values, or can we pick out-of-bounds ones too? If only the valid partition ranges count, then maybe C fits better.
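For anyone fuzzy on why "spread out" matters here, this is a sketch of the equivalence-partitioning check. The partition ranges below are entirely hypothetical (the actual question's ranges aren't quoted in this thread); the point is just that a good value set hits one value per partition.

```python
# Hypothetical equivalence classes for some input field; the real
# exam question's ranges may differ.
PARTITIONS = {
    "low":  range(1, 21),    # 1-20
    "mid":  range(21, 51),   # 21-50
    "high": range(51, 100),  # 51-99
}

def partition_of(value: int):
    """Return the name of the partition a value falls into, or None if out of bounds."""
    for name, rng in PARTITIONS.items():
        if value in rng:
            return name
    return None

# A good equivalence-partitioning set covers each partition once:
values = [12, 35, 55]
covered = {partition_of(v) for v in values}
print(len(covered))  # 3 -> each value lands in a different partition
```

If two of the chosen values landed in the same partition, one of them would be redundant under equivalence partitioning, which is the usual trap in these questions.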
Option B is correct. There's only a single IF decision here, so complexity is 2 (number of decision points plus 1, so 1 + 1). Option D would apply if there were more branches or compound conditions, but that isn't the case with this code. Pretty sure that's how McCabe works, correct me if I missed something!
Pretty sure it's B, since there's just a single IF, so one decision point. Cyclomatic complexity for that comes out to 2 (one decision plus one) unless there's some hidden compound logic. Open to other views if I'm missing something.
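Quick sketch of the counting rule being discussed, V(G) = decision points + 1. Counting keywords like this is a simplification (real tools also count each extra atomic condition in a compound `and`/`or` expression as another decision), but it shows why a single IF gives 2.

```python
import re

# Simplified McCabe: V(G) = number of decision points + 1.
# Keyword counting ignores compound conditions, which some tools
# count as additional decisions.
DECISION_KEYWORDS = re.compile(r"\b(if|elif|while|for|case)\b")

def cyclomatic_complexity(source: str) -> int:
    return len(DECISION_KEYWORDS.findall(source)) + 1

snippet = """
def check(x):
    if x > 0:
        return "above zero"
    return "zero or below"
"""
print(cyclomatic_complexity(snippet))  # a single IF -> 1 + 1 = 2
```

Straight-line code with no decisions gives V(G) = 1, and every branch adds one more path through the control-flow graph, which is what the metric counts.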
I agree C is the biggest risk. No agreed requirements means the tool likely won't fit, so people won't use it and the rollout fails. Pretty sure that's what usually blocks success in real life.
C tbh. Without agreed requirements, teams can pick the wrong tool or no one adopts it properly. I think A and D are sometimes problems, but C is almost always where things break down based on exam stuff I’ve seen.
I don’t think automation (A) makes sense for a small project. Usually ISTQB expects us to pick B, risk-based analysis, since it helps focus testing where it matters. A is a trap if you go by cost-benefit. Open to thoughts if anyone's seen something different in the syllabus!
Big-bang (B) isn't incremental; it's all-at-once integration, where every component is combined in a single step. I've seen this phrased similarly in practice tests and the official guide, so pretty confident here, but happy to hear if anyone saw different wording.
D stands out because if testing is always seen as 'someone else’s job,' devs might slack off on quality. I’ve seen ISTQB mention this risk before. Not 100% but fits best in their context, right?