Q: 13
In the context of generative AI, which of the following would be the MOST likely goal of penetration
testing during a red-teaming exercise?
Options
Discussion
Makes sense to go with A. Red teaming pen tests for generative AI are really about those unexpected outputs using adversarial inputs.
Option A similar topic came up on an official practice test. Exam guide focuses on adversarial inputs for pen testing.
Nah, I don't think it's B. The real point of pen testing with generative AI is to see if adversarial prompts can get unexpected outputs, so A fits best here. B is more model robustness, not red-teaming. Pretty sure that's what exam guides say.
Feels like B, seen this phrased similarly in some practice sets and official guide reviews.
Seen similar in practice sets, pretty sure it matches A. Official guide has good breakdowns on red-teaming AI too.
A tbh, saw a similar theme in some exam reports. Pen testing for gen AI usually targets unexpected outputs with crafted prompts, fits red team objectives.
Probably B, but does the question mean "most likely" from a security or reliability perspective? That could flip my pick.
B tbh
A gets my pick. In red-teaming generative AI, you're usually crafting adversarial prompts to force the model into unexpected or unsafe outputs, not just stressing it or scrambling everything. Pretty sure that's what pen testers focus on for this context, but correct me if you see it differently.
Option B this time. Stress testing the model’s decision-making gets mentioned in some practice tests as a way to see how it handles edge cases, so I think it fits for pen testing sometimes. Definitely check the official guide though if you’re not sure.
Be respectful. No spam.