The question asks which characteristic is least likely to cause safety-related issues for an AI system.
Let's evaluate each option:
Non-determinism (A): Non-deterministic systems can produce different outcomes even with the
same inputs, which can lead to unpredictable behavior and potential safety issues.
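Non-determinism of this kind can be illustrated with a minimal Python sketch. The model here is a hypothetical toy (`stochastic_predict` and its noise term are assumptions, not from the syllabus): a fixed input near a decision boundary, plus unseeded or differently seeded sampling noise, can flip the decision between runs.

```python
import random

def stochastic_predict(features, seed=None):
    # Toy "model": a fixed score plus sampling noise, mimicking the
    # non-determinism of, e.g., temperature sampling or unordered GPU ops.
    rng = random.Random(seed)
    score = sum(features) / len(features)
    noise = rng.uniform(-0.1, 0.1)
    return "brake" if score + noise > 0.5 else "continue"

x = [0.48, 0.52, 0.50]  # the same input every time
# Runs with different (or absent) seeds can flip the decision:
decisions = {stochastic_predict(x, seed=s) for s in range(20)}
print(decisions)
```

In a safety context, the point is that identical inputs no longer guarantee identical outputs, which undermines testing and predictability.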
Robustness (B): Robustness refers to a system's ability to handle errors, anomalies, and unexpected inputs gracefully. A robust system maintains its functionality under varied conditions, so robustness reduces, rather than increases, the risk of safety issues. This makes (B) the characteristic least likely to cause safety-related problems, and therefore the correct answer.
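The contrast with the other options can be sketched as a robust wrapper around a prediction (a hypothetical safe-state pattern; `robust_predict` and the `SAFE_DEFAULT` fallback are illustrative assumptions): invalid or anomalous inputs are caught and mapped to a safe default instead of propagating as errors.

```python
def robust_predict(features):
    # Robust wrapper: validate inputs and fall back to a safe default
    # rather than letting an anomaly crash or corrupt the decision.
    SAFE_DEFAULT = "stop"
    try:
        if not features or any(not isinstance(f, (int, float)) for f in features):
            return SAFE_DEFAULT  # anomalous or malformed input
        score = sum(features) / len(features)
        return "go" if score > 0.5 else "stop"
    except (TypeError, ZeroDivisionError):
        return SAFE_DEFAULT

print(robust_predict([0.9, 0.8]))   # normal input
print(robust_predict([]))           # empty input handled gracefully
print(robust_predict(["bad", 1]))   # unexpected type handled gracefully
```

Graceful degradation of this kind is exactly why robustness supports, rather than threatens, safe operation.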
High complexity (C): High complexity in AI systems can lead to difficulties in understanding,
predicting, and managing the system's behavior, which can cause safety-related issues.
Self-learning (D): Self-learning systems adapt based on new data, which can lead to unexpected
changes in behavior. If not properly monitored and controlled, this can result in safety issues.
Reference:
ISTQB CT-AI Syllabus Section 2.8 on Safety and AI discusses various factors affecting the safety of AI
systems, emphasizing the importance of robustness in maintaining safe operation.