Q: 7
Consider a TAS that exclusively uses the APIs of a SUT. To make this work, significant changes have
been required to the SUT by adding a set of dedicated test interfaces to the APIs. All the automated
tests will use these test interfaces when interacting with the SUT. Assume that you are currently
verifying the correctness of the automated test environment and test tool setup.
Which of the following would you expect to be the MOST specific risk associated with this scenario?
Options
Discussion
D. False alarms are tied directly to testing through non-production test interfaces, as in this scenario; not every API-based approach carries that risk.
Option D
D. The key point is that the dedicated test interfaces can make the automation report problems that would never occur through the real production APIs, so you get false alarms. A seems tempting but isn't as closely tied to this scenario. A similar question appeared in practice materials.
My guess is D. When you add special test interfaces to the SUT just for automation, there's always a risk the tests catch issues that wouldn't occur in real usage, causing false alarms. That's far more specific to this setup than generic connectivity issues. Pretty sure that's what they're asking for here, unless I've misread it.
Honestly, I'm still not 100% on this; my gut says D, but I can kind of see the case for A too.
A is wrong; D is clearly the most specific risk here. Adding dedicated test interfaces means the tests might catch things that wouldn't show up in real usage, producing false positives. A (connectivity) could be a problem, but that's a generic risk for any API. Pretty sure about D, unless I'm missing something.
A for me. Connectivity to the new test interfaces could easily break given all the changes made, and that feels more concrete. I see why some pick D, but I'm not convinced that's the biggest issue up front.
D. Adding dedicated test interfaces that only the tests use creates a risk of false failures in automation, since those paths aren't real production ones. That makes this scenario fairly unique, imo.
No, D seems better than A, because the false alarms are tied directly to using special test interfaces that real users never touch. A feels like a generic API risk. I'd go with D, but let me know if I'm missing something.
I had something like this in a mock exam, and D was the correct pick. When you add dedicated test interfaces that aren't used in production, you can get false alarms during testing that would never affect real users, which is specific to this setup. Option A feels more generic, since connectivity issues can hit any API, not just dedicated test hooks. Pretty sure it's D, unless I've missed a nuance. Agree?
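To make the false-alarm argument concrete, here is a minimal toy sketch of the mechanism the D answers describe: a dedicated test interface exposes internal state that real clients never see, so an automated check through that interface can flag a "defect" that no production user could ever observe. All class and method names here are hypothetical, invented purely for illustration, not taken from the question.

```python
class Sut:
    """Toy system under test; names are illustrative only."""

    def __init__(self):
        # Internal detail, invisible through the production API.
        self._cache_dirty = True

    # Production API: what real clients call.
    def get_balance(self):
        self._refresh_cache()
        return 100

    def _refresh_cache(self):
        self._cache_dirty = False

    # Dedicated test interface, added to the SUT only for automation.
    def test_is_cache_clean(self):
        return not self._cache_dirty


sut = Sut()

# An automated check through the test interface fires before any real
# call has run, flagging a "problem" no production user could observe.
false_alarm = not sut.test_is_cache_clean()

# The equivalent production-level check passes, as it would for real users.
production_ok = sut.get_balance() == 100

print(false_alarm, production_ok)  # True True
```

The point of the sketch: the failure signal exists only on the non-production path, which is exactly why the risk is specific to a TAS built on dedicated test interfaces rather than to API testing in general.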