Pretty sure it's C here. Saw similar wording on a practice mock, and ISTQB usually sticks to "conformance to requirements" as their preferred answer. D would make sense for user experience questions but that's not what they're asking. Feel free to disagree though.
I don't think D is right here. "Work as designed" sounds good but it's a bit of a trap, since just working as designed doesn't guarantee it meets all requirements. ISTQB loves "conformance to requirements" (C) and that's what you see in their official materials. C is definitely the safer pick for the exam, especially when the wording is this close to the glossary definition. Agree?
Had something like this in a mock and it matched option A. Failure is about incorrect behavior in the program because of a fault, not just when or where the bug is found. Pretty sure that's straight from ISTQB definitions.
A is what I've seen in both the official guide and practice tests. Failure means the app does something wrong because of a defect, not just when bugs are found. Pretty sure that's ISTQB's standard; correct me if I'm wrong.
I don’t think it’s B or C since those are just about when bugs are found, not what a failure actually is. It should be A because ISTQB says failure is the visible incorrect behavior from a defect. Anyone disagree?
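To make the defect vs. failure split concrete, here's a quick sketch (hypothetical function, not from the actual question): the wrong operator in the code is the defect, and the incorrect behavior you see when the code runs is the failure.

```python
# Hypothetical example: the defect is the wrong operator in the source code;
# the failure is the incorrect behavior observed when the code executes.

def apply_discount(price: float, percent: float) -> float:
    # Defect: this should be price * (1 - percent / 100)
    return price * (1 + percent / 100)

# Executing the defective code produces a failure: a visible wrong result.
actual = apply_discount(100.0, 10.0)   # returns 110.0 instead of 90.0
expected = 90.0
if actual != expected:
    print(f"Failure observed: expected {expected}, got {actual}")
```

The defect exists in the code whether or not anyone runs it; the failure only shows up on execution, which is why "when or where the bug is found" isn't the definition.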
Failure should be C. It's when a bug actually appears after release, not while coding or designing. At least that's what I recall from practice sets.
I’ve seen this get tricky because "implementation" can mean creating tests, but when it’s paired with "execution," ISTQB is usually talking about running tests and checking if actual matches expected (so B). I think B fits best here, but A would make sense if execution wasn’t mentioned. Anyone else see this split in sample exams?
Not sure why folks keep picking A. Doesn't "execution" mean actually running the tests and checking the results, which is B? Developing tests sounds more like prep work to me.
I don’t think it’s A. B matches 'execution', since that’s when you compare expected and actual outcomes. Developing tests (A) is more about prep, not actual execution. A bit of a trap if you mix up the test process steps!
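Here's what test execution boils down to in practice, as a minimal sketch (the `add` function and the test data are made up, just to illustrate): run the test cases, then compare actual results against expected results.

```python
# Minimal sketch of test execution: run each case against a hypothetical
# system under test and compare actual to expected results.

def add(a: int, b: int) -> int:        # hypothetical system under test
    return a + b

test_cases = [                         # (inputs, expected result)
    ((2, 3), 5),
    ((-1, 1), 0),
    ((0, 0), 0),
]

for args, expected in test_cases:
    actual = add(*args)                # execution: actually run the test
    verdict = "PASS" if actual == expected else "FAIL"
    print(f"add{args} -> {actual} (expected {expected}): {verdict}")
```

Writing the `test_cases` list would be test implementation (A territory); the loop that runs them and checks outcomes is the execution part, which is why B fits.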
D imo, since 12, 35, and 55 seem spread across the partitions. But is the question asking only for valid (in-range) values, or can we pick out-of-bounds ones too? If only the valid partition ranges count, then maybe C fits better.
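I don't have the exact ranges from the question in front of me, but here's a sketch with hypothetical partitions (invalid-low below 20, valid 20 to 50, invalid-high above 50) showing why 12, 35, 55 would be the "one value per partition" pick:

```python
# Hypothetical partitions -- the actual ranges from the question may differ:
#   invalid-low: value < 20, valid: 20..50, invalid-high: value > 50

def partition(value: int) -> str:
    if value < 20:
        return "invalid-low"
    if value <= 50:
        return "valid"
    return "invalid-high"

for v in (12, 35, 55):
    print(v, "->", partition(v))
# Each value lands in a different partition, so D covers one representative
# per partition -- assuming these made-up ranges match the question's.
```

If the question restricts you to valid values only, that's where C would come back into play.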
Option B is correct. There's only a single IF decision here, so complexity is 2 (number of decision points plus 1). Option D would be if there were more branches or compound conditions, but that isn't the case with this code. Pretty sure that's how McCabe goes; correct me if I missed something!
It's B. Only one IF means one decision point, and the formula adds 1 to that. D can trip people up if they assume compound conditions or multiple branches.
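Quick sketch of the McCabe count on a hypothetical function with a single simple IF (not the code from the question, just the same shape):

```python
# McCabe cyclomatic complexity: V(G) = number of decision points + 1

def classify(x: int) -> str:
    if x > 0:            # one decision point (simple condition, no AND/OR)
        return "positive"
    return "non-positive"

decision_points = 1
complexity = decision_points + 1   # V(G) = 1 + 1 = 2
print(complexity)                  # 2 independent paths: IF taken, or not
```

A compound condition like `if x > 0 and y > 0` would count as two decision points and bump V(G) to 3, which is the trap the D option seems to be playing on.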
C tbh. Without agreed requirements, teams can pick the wrong tool or no one adopts it properly. I think A and D are sometimes problems, but C is almost always where things break down based on exam stuff I’ve seen.
If the project is too small for full-blown testing, B is best. Risk-based analysis lets you focus on what’s critical instead of testing everything. Automation (A) often isn’t worth it unless there’s repeat value or lots of tests to run, which isn’t usually the case in small projects. I’m pretty sure ISTQB wants B here, unless they mention needing speed or coverage above all else. Disagree?
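For what risk-based prioritization actually looks like, here's a minimal sketch (the features and scores are illustrative, not anything ISTQB mandates): score each area by likelihood times impact, then spend your limited test effort top-down.

```python
# Minimal sketch of risk-based test prioritization (illustrative data):
#   risk score = likelihood * impact; test the riskiest areas first.

features = [
    # (name, likelihood 1-5, impact 1-5)
    ("payment processing", 4, 5),
    ("report export",      2, 2),
    ("user login",         3, 5),
    ("UI theming",         1, 1),
]

ranked = sorted(features, key=lambda f: f[1] * f[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"{name}: risk score {likelihood * impact}")
# On a small project, you stop testing wherever the budget runs out,
# knowing the highest-risk areas were covered first.
```

That's the whole appeal of B for small projects: you get a defensible cut-off instead of either testing everything or automating things you'll never rerun.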
D stands out because if testing is always seen as 'someone else's job,' devs might slack off on quality. I've seen ISTQB mention this risk before. Not 100% sure, but it fits their context best, right?
Yeah, D fits what ISTQB warns about. If independent testers always handle quality, devs might care less about their own testing. I think that's the specific downside they're looking for, even if B sometimes happens in practice. Agree?