Examine the following indicators regarding the operational health and security of a production Deep Learning model hosted in a virtual container environment:
I. Sustained 100% GPU memory and compute saturation triggered by a single user session.
II. Dramatic increase in inference latency (from milliseconds to minutes) for specific input types.
III. Unauthorized access and exfiltration of the model weights via a side-channel attack.
IV. Failure of the system to respond to legitimate requests due to resource depletion.
Which of the items above are primary characteristics or direct results of a Model Denial of Service (DoS) attack?
An evaluation of a new AI-driven orchestration tool identified several architectural flaws in how the agent interacts with external cloud APIs. Consider the following security findings:
I. The agent shares a single high-privileged IAM Role with all other management scripts.
II. The agent lacks a mechanism for user confirmation before executing resource deletions.
III. The agent uses a Jupyter environment for initial model prototyping.
IV. The agent’s output is not filtered for potential command injection patterns.
Which of the items above contribute directly to the risk of Excessive Agency?
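The findings above map onto concrete guardrails: scoping the agent to a least-privilege allowlist (item I), requiring human confirmation for destructive actions (item II), and screening agent output before it reaches an API or shell (item IV). A hedged sketch, with hypothetical action names and a deliberately crude injection filter:

```python
import re

# Illustrative least-privilege allowlist; real deployments would derive this
# from a dedicated, narrowly scoped IAM role rather than a shared one.
ALLOWED_ACTIONS = {"describe_instances", "start_instance", "stop_instance"}
DESTRUCTIVE_ACTIONS = {"delete_bucket", "terminate_instance"}

# Crude screen for shell-injection metacharacters in agent-produced arguments.
INJECTION_PATTERN = re.compile(r"[;&|`$]")

def authorize(action: str, argument: str, confirmed_by_user: bool = False) -> bool:
    """Gate an agent-proposed cloud action before it is executed."""
    if INJECTION_PATTERN.search(argument):
        return False                  # finding IV: unfiltered output
    if action in DESTRUCTIVE_ACTIONS:
        return confirmed_by_user      # finding II: human-in-the-loop for deletions
    return action in ALLOWED_ACTIONS  # finding I: scoped, not shared, permissions
```

Note that item III (prototyping in Jupyter) has no counterpart here: it is a development-environment choice, not a grant of autonomous authority to the agent.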
Consider the following actions performed during the development of an AI-driven intrusion prevention system:
I. Executing the model on the Test Set for final accuracy reporting.
II. Iteratively adjusting learning rates based on performance against a held-out dataset.
III. Selecting between a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN).
IV. Using Backpropagation to minimize the loss function on the training data.
Which of these items are EXCLUSIVELY associated with the Validation phase of the AI life cycle?
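The split discipline the question probes can be made concrete: hyperparameters are tuned only against the held-out validation set (item II), while the test set is touched exactly once for the final report (item I). A minimal sketch using an illustrative 1-D threshold classifier; the data and "model" are invented for the example, not drawn from the question:

```python
def accuracy(threshold, data):
    """Fraction of (feature, label) pairs correctly classified by the threshold."""
    return sum((x >= threshold) == label for x, label in data) / len(data)

# Illustrative (feature, label) splits.
train = [(0.1, 0), (0.3, 0), (0.6, 1), (0.9, 1)]
val   = [(0.2, 0), (0.4, 0), (0.7, 1), (0.8, 1)]
test  = [(0.15, 0), (0.55, 1), (0.85, 1), (0.35, 0)]

# "Training": propose candidate thresholds from the training data only.
candidates = sorted(x for x, _ in train)

# Validation phase: iteratively compare candidates on held-out data (item II).
best = max(candidates, key=lambda t: accuracy(t, val))

# Test phase: one final, untuned measurement for reporting (item I).
final_accuracy = accuracy(best, test)
```

Because `best` was chosen without ever consulting `test`, the final accuracy is an unbiased report; folding the test set into the tuning loop would collapse the distinction between items I and II that the question relies on.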