Get ahead in your AI-900 exam prep with our free, accurate, and 2025-updated questions
Cert Empire is dedicated to providing the best and latest exam questions for students preparing for the Microsoft AI-900 Exam. To better assist students, we’ve made sections of our AI-900 exam preparation resources free for everyone. You can get ample practice with our Free AI-900 Practice Test.
Question 1
HOTSPOT To complete the sentence, select the appropriate option in the answer area. 
Show Answer
ANALYSIS
The AI solution is assessing specific qualities or attributes of the face, such as exposure, noise, and occlusion.
- Facial detection simply locates a face in an image (e.g., provides a bounding box).
- Facial recognition identifies a specific person.
- Facial analysis (or attribute detection) is the process that extracts information about a detected face. This includes quality metrics (blur, exposure, noise), pose, emotion, and physical attributes (occlusion, glasses, etc.).
Because the solution provides feedback on these specific attributes, it is performing facial analysis.
Microsoft. (2024). Face detection and attributes. Azure AI services documentation.
Reference: In the "Face - Detect" API documentation, the service can "extract a set of face-related attributes... The available attributes... include... Occlusion... Noise... Exposure." This confirms that determining these specific properties is a function of facial attribute analysis, which is a component of the broader "Face" service, distinct from simple detection.
Hjelmås, E., & Low, B. K. (2001). Face Detection: A Survey. Computer Vision and Image Understanding, 83(3), 236-274. https://doi.org/10.1006/cviu.2001.0921
Reference: This survey (Section 1, "Introduction") distinguishes between "detection" (locating a face) and "recognition" (identifying a face). The task described in the question, evaluating attributes, is a separate analytical step that follows detection.
MIT OpenCourseWare. (2020). 6.819/6.869: Advances in Computer Vision.
Reference: Lecture 15, "Face Recognition." Course materials distinguish the core tasks: Detection ("Find all faces"), Attribute Analysis ("Find properties: pose, expression, gender, image quality"), and Recognition ("Identify the person"). The question clearly aligns with Attribute Analysis.
Question 2
Show Answer
A. This is incorrect. While data sourcing is important, there is no principle mandating the exclusive use of publicly available data; the focus is on responsible data handling, regardless of its source.
B. This is incorrect. The principles of responsible AI are human-centric, focusing on fairness and societal benefit, not solely on corporate interests, which could potentially conflict with ethical considerations.
D. This is incorrect. This statement directly contradicts the principle of Privacy and Security, which requires AI systems to protect personal data and respect user privacy.
1. Microsoft Learn. (2024). Microsoft's responsible AI principles. AI-900: Microsoft Azure AI Fundamentals. "Transparency: AI systems should be understandable. The people who create and use AI systems must be able to fully understand how they work so they can identify potential issues, such as bias or unexpected outcomes, that could otherwise go undiscovered."
2. Microsoft Learn. (2024). Introduction to responsible AI in Azure. AI-900: Microsoft Azure AI Fundamentals. "The six principles that form the foundation of Microsoft's approach to responsible AI are: Fairness, Reliability and Safety, Privacy and Security, Inclusiveness, Transparency, and Accountability."
3. Microsoft Corporate. (2023). Microsoft Responsible AI Standard, v2. "Transparency: We will be transparent about the capabilities and limitations of our AI systems." (Section 1.5, Page 6).
Question 3
Show Answer
B. machine-learned: This entity type is best for concepts that are defined by context, not a strict pattern, and requires training with many labeled examples. It is inefficient for a well-structured pattern like a phone number.
C. list: A list entity is used for a fixed, closed set of words and their synonyms (e.g., a list of cities or product categories). It cannot be used to identify the near-infinite variations of phone numbers.
D. Pattern-any: This is a generic placeholder used within a larger pattern template to capture variable text. A regular expression entity is more specific and appropriate for defining the exact structure of a phone number.
1. Microsoft Learn, "Entity components in conversational language understanding." Under the "Regular expression entity component" section, it states, "A regular expression entity component extracts an entity based on a regular expression pattern you provide. It's ideal for text with consistent formatting." This directly supports using regex for patterned data like phone numbers.
2. Microsoft Learn, "Entity components in conversational language understanding." The "List entity component" section clarifies, "A list entity component represents a fixed, closed set of related words along with their synonyms...This component is a good choice when you have a set of items that don't change often." This confirms why a list is unsuitable for phone numbers.
3. Microsoft Learn, "How to create a conversational language understanding project." In the "Add entity components" section, the documentation guides users to "Select Regular expression" for entities that follow a defined pattern, contrasting it with list and machine-learned entities.
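The regex-versus-list distinction above can be illustrated with a plain regular expression in Python. This is a minimal sketch, not the pattern CLU uses internally; the US-style phone format shown is an illustrative assumption.

```python
import re

# Illustrative pattern for US-style phone numbers such as
# "555-123-4567" or "(555) 123-4567". A real CLU regular expression
# entity would use whatever pattern you define in the project.
PHONE_PATTERN = re.compile(r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}")

def extract_phone_numbers(utterance: str) -> list[str]:
    """Return every substring of the utterance matching the pattern."""
    return PHONE_PATTERN.findall(utterance)

print(extract_phone_numbers("Call me at 555-123-4567 or (555) 765-4321."))
```

A list entity, by contrast, could only match phone numbers that were enumerated in advance, which is why a pattern-based entity is the right fit for structured text like this.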
Question 4
Show Answer
A. Face: The Azure AI Face service is specialized for detecting, identifying, and analyzing human faces and their attributes, not for recognizing commercial logos or brands.
B. Custom Vision: This service is used to build, train, and deploy your own custom image classification and object detection models. It is not a pre-built solution for general brand detection.
D. Form Recognizer: This service is designed to extract text, key-value pairs, and tables from documents like invoices and receipts, not for analyzing brands in general photographs.
1. Microsoft Learn. (2024). Analyze images with the Computer Vision service. AI-900: Microsoft Azure AI Fundamentals learning path.
Reference: In the "Image Analysis" section, the documentation states, "The Computer Vision service can detect thousands of famous brands." This confirms its pre-built capability for brand detection.
2. Microsoft Azure Documentation. (2023). What is Computer Vision? - Image analysis.
Reference: Under the "Image analysis" feature list, it specifies "Brand detection: Detects brands in images from a database of thousands of global logos." This explicitly identifies the required functionality as part of the Computer Vision service.
3. Microsoft Azure Documentation. (2023). Call the Analyze Image API.
Reference: In the section "Specify visual features," the documentation lists Brands as one of the features that can be requested. The description states, "Detects various brands within an image, including the approximate location of the brand logo."
Question 5
HOTSPOT For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Show Answer
YES
YES
NO
Object Detection (Yes): The Azure Custom Vision service is designed for two primary functions: image classification (assigning labels to an entire image) and object detection (identifying the location and label of specific objects within an image using bounding boxes).
Requires Own Data (Yes): The "Custom" aspect of Custom Vision means it is used to build and train a model tailored to a specific use case. This process requires the user to upload and tag their own set of images to serve as the training data. This contrasts with the general-purpose Computer Vision service, which uses a pre-trained model.
Analyze Video Files (No): The Custom Vision service prediction API is built to analyze static images (e.g., JPEG, PNG) or image URLs. It does not have a native API endpoint that accepts video files (e.g., MP4, AVI) for analysis. While a complete solution can analyze video by first using a different process to extract individual frames and then sending those frames as images to Custom Vision, the service itself does not analyze video files.
Microsoft Corporation. (2024). What is Custom Vision?. Microsoft Learn. Retrieved October 24, 2025, from https://learn.microsoft.com/en-us/azure/ai-services/custom-vision-service/overview
Reference for Statement 1: The document states, "Custom Vision is an image recognition service that lets you build, deploy, and improve your own image... object detection models."
Reference for Statement 2: The same document notes, "You provide a set of labeled images to train your model..."
Microsoft Corporation. (2024). Tutorial: Analyze videos in near real-time with Custom Vision. Microsoft Learn. Retrieved October 24, 2025, from https://learn.microsoft.com/en-us/azure/ai-services/custom-vision-service/analyze-video
Reference for Statement 3: This official tutorial outlines the correct pattern, which confirms the service does not analyze video files directly. The process states: "This tutorial shows how to use the Custom Vision Service API to perform... analysis on frames taken from a live video stream... The basic approach is to... break the video stream down into a sequence of frames... [and] submit selected frames to the Custom Vision predictor." This confirms the API target is the frame (image), not the video file.
Question 6
Show Answer
A. object detection: This technique is used to locate and classify objects within an image (e.g., identifying the bounding box of a "street sign"), but it does not read the text on the object.
C. image classification: This assigns a single label or category to an entire image (e.g., "urban street" or "daytime"). It does not extract specific information like text from within the image.
D. facial recognition: This is a specialized technology used exclusively for identifying and analyzing human faces in images or videos, which is irrelevant to reading a street sign.
1. Microsoft Learn, AI-900: Explore computer vision in Microsoft Azure, "Explore optical character recognition" unit. This document states, "Optical character recognition (OCR) is a technique used to detect and read text in images. You can use the Computer Vision service to read text in images..." This directly supports the use of OCR for reading text from signs.
2. Microsoft Learn, AI-900: Explore computer vision in Microsoft Azure, "Explore object detection" unit. This source explains, "Object detection is a form of computer vision in which a model is trained to classify individual objects within an image, and indicate their location with a bounding box." This clarifies that object detection locates objects but does not read text.
3. Microsoft Learn, AI-900: Explore computer vision in Microsoft Azure, "Introduction" unit. This unit provides an overview of computer vision tasks, distinguishing between image classification (what is the image about?), object detection (what objects are in the image, and where are they?), and OCR (reading text in the image).
Question 7
HOTSPOT Select the answer that correctly completes the sentence.
Show Answer
ADDING AND CONNECTING MODULES ON A VISUAL CANVAS
The Azure Machine Learning designer is a low-code/no-code tool within the Azure ML workspace. Its primary interface is a visual canvas where users build end-to-end ML pipelines by dragging, dropping, and connecting pre-built modules for tasks like data import, transformation, model training, and scoring.
- Option 3 (automatically selecting an algorithm...) is incorrect as this specifically describes Automated ML (AutoML), a separate feature.
- Option 4 (using a code-first notebook experience) is incorrect as this describes using Jupyter notebooks with the Azure ML SDK, which is the code-based alternative to the visual designer.
Microsoft. (2024, May 22). What is Azure Machine Learning designer? Microsoft Learn.
Section: Introduction, Paragraphs 1-2.
Quote: "Azure Machine Learning designer is a drag-and-drop interface... The designer provides a visual canvas where you can add datasets and modules... You connect the modules to create a pipeline draft..."
URL: https://learn.microsoft.com/en-us/azure/machine-learning/concept-designer
Microsoft. (2024, October 17). What is automated machine learning (AutoML)? Microsoft Learn.
Section: "How AutoML works."
Note: This official documentation confirms that "automatically selecting an algorithm" is the function of AutoML, distinguishing it from the designer.
URL: https://learn.microsoft.com/en-us/azure/machine-learning/concept-automated-ml
Duke University. (n.d.). AI for Everyone: Azure ML Designer. Duke AIPI (Artificial Intelligence for Product Innovation).
Section: "Azure ML Designer."
Quote: "Azure ML Designer... is a visual, drag-and-drop environment... In the Designer, you create an ML 'pipeline' on a visual canvas by dragging and dropping 'modules'..."
URL: https://aipi.pratt.duke.edu/azure-ml-designer
Microsoft. (2024, September 24). What are Jupyter notebooks in Azure Machine Learning? Microsoft Learn.
Section: Introduction.
Note: This documentation defines the "code-first notebook experience" as a separate component (Jupyter notebooks) within the Azure ML workspace, confirming it is distinct from the designer.
URL: https://learn.microsoft.com/en-us/azure/machine-learning/concept-notebooks
Question 8
Show Answer
B. Azure Machine Learning: This is a platform for building, training, and deploying custom machine learning models, which is overly complex for a simple Q&A bot with predefined answers.
C. Translator: This service is used for text translation between languages. The scenario does not specify a requirement for multilingual support, making this service unnecessary for the core goal.
1. Microsoft Learn, Azure AI Language documentation. "What is question answering?" This document states, "Question answering provides cloud-based Natural Language Processing (NLP) that allows you to create a natural conversational layer over your data... It is commonly used to build conversational client applications, which include social media applications, chat bots, and speech-enabled desktop applications."
Source: Microsoft, "What is question answering?", Azure AI Language documentation. Retrieved from https://learn.microsoft.com/en-us/azure/ai-services/language-service/question-answering/overview
2. Microsoft Learn, Azure Bot Service documentation. "What is Azure Bot Service?" This page explains, "Azure Bot Service is a comprehensive development environment for building enterprise-grade conversational AI... Bots can be used to shift simple, repetitive tasks, such as taking a dinner reservation or gathering profile information, on to automated systems that may no longer require direct human intervention."
Source: Microsoft, "What is Azure Bot Service?", Azure Bot Service documentation. Retrieved from https://learn.microsoft.com/en-us/azure/bot-service/bot-service-overview
3. Microsoft Learn, Tutorial. "Create a question answering bot". This tutorial explicitly demonstrates the required architecture: "In this tutorial, you learn how to: 1. Create a question answering project and import a file as a knowledge base. 2. Add your knowledge base to a bot... 3. Build and run your bot." This confirms the direct integration of the Language Service (for the knowledge base) and the Bot Service (for the bot itself).
Source: Microsoft, "Quickstart: Create a question answering bot", Azure AI Language documentation. Retrieved from https://learn.microsoft.com/en-us/azure/ai-services/language-service/question-answering/quickstarts/bot-service
Question 9
DRAG DROP Match the machine learning models to the appropriate descriptions. To answer, drag the appropriate model from the column on the left to its description on the right. Each model may be used once, more than once, or not at all. NOTE: Each correct match is worth one point.
Show Answer
REGRESSION
CLASSIFICATION
CLUSTERING
The solution correctly maps the three fundamental types of machine learning models to their definitions.
- Regression is a supervised learning task used to predict a continuous numeric value, such as the price of a house or a future temperature.
- Classification is also a supervised learning task, but it is used to predict a discrete category or class, such as whether an email is 'spam' or 'not spam', or if a tumor is 'benign' or 'malignant'.
- Clustering is an unsupervised learning task. It does not use pre-defined labels; instead, it analyzes the input data to identify natural groupings (clusters) of items based on their shared features or similarities.
Microsoft. (2024). "What is machine learning?" Azure Machine Learning Documentation. Microsoft.
Section: "Machine learning model types" > "Supervised learning"
Quote/Paraphrase: This documentation explicitly defines regression as a supervised method for predicting continuous values (e.g., price, sales). It defines classification as a supervised method for predicting categories (e.g., yes/no, true/false). It defines clustering as an unsupervised method used to discover structure and group items into clusters based on similarity.
James, G., Witten, D., Hastie, T., & Tibshirani, R. (2021). An Introduction to Statistical Learning: with Applications in R (Second Edition). Springer.
Chapter 2, Section 2.1.2 "Supervised and Unsupervised Learning": This section distinguishes the two main types. It states that supervised learning involves building a model for predicting an output based on inputs, further breaking this down into regression problems (predicting a quantitative or numeric output) and classification problems (predicting a qualitative or categorical output).
Chapter 12, Section 12.1 "Unsupervised Learning": This section defines unsupervised learning as a setting with only feature measurements (X) and no response variable (Y). The goal is described as finding interesting patterns or groups, which directly relates to clustering.
Ng, A. (2023). "Course Notes: CS229 - Machine Learning." Stanford University.
Section: "Part I: Supervised Learning"
Quote/Paraphrase: The notes define supervised learning as the task of learning a function that maps inputs to outputs given a set of input-output pairs. It specifies that if the target output (label) is continuous (e.g., price), the task is regression. If the target output is discrete (e.g., 'cat' or 'dog'), the task is classification.
Section: "Part IX: Unsupervised Learning"
Quote/Paraphrase: This section describes unsupervised learning as the process of finding structure in unlabeled data. The canonical example provided is clustering, such as grouping news articles by topic or customers by preferences.
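The three model types can be contrasted with toy examples. This is a minimal sketch in pure Python (no ML library); the data, thresholds, and function names are illustrative assumptions, not any specific Azure algorithm.

```python
# Regression: predict a continuous number (here, a least-squares line).
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return slope, my - slope * mx  # (slope, intercept)

slope, intercept = fit_line([1, 2, 3, 4], [2, 4, 6, 8])
print(slope * 5 + intercept)  # a numeric prediction for x = 5

# Classification: predict a discrete category (here, a fixed threshold).
def classify_email(spam_score: float) -> str:
    return "spam" if spam_score >= 0.5 else "not spam"

print(classify_email(0.9))

# Clustering: group unlabeled points by the nearest of two centroids.
def assign_clusters(points, centroids):
    return [min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            for p in points]

print(assign_clusters([1.0, 1.2, 9.8, 10.1], [1.0, 10.0]))
```

Note that only the first two tasks use labels (a known y value per example); the clustering step receives nothing but the points themselves.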
Question 10
HOTSPOT Select the answer that correctly completes the sentence.
Show Answer
FEATURES
In the context of machine learning, features are defined as the independent variables or attributes that serve as the inputs to a model. These are the measurable properties or characteristics of the data (e.g., the square footage of a house, the pixel values of an image) that the model uses to make a prediction.
Conversely, a label is the output or target variable (e.g., the price of the house, the object in the image) that the model learns to predict. An instance is a single row of data, which typically includes both its features and (if supervised) its label.
Microsoft Azure Documentation. (n.d.). What is automated machine learning (AutoML)? Azure Machine Learning. Retrieved October 24, 2025.
Reference: In the "Features and labels" section, the documentation states: "A feature is a data column that is used as an input for your model... A label is the data column that you want to predict."
Ng, A. (n.d.). CS229 Machine Learning Course Notes: Supervised Learning. Stanford University.
Reference: Section 1.1, "Supervised Learning," defines the training set as (x, y) pairs, stating: "We call $x^{(i)}$ the input variables (or features) and $y^{(i)}$ the output variables (or labels)."
Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.
Reference: Chapter 1, Section 1.1 (pp. 2-3), introduces the input vector $\mathbf{x}$, whose components are referred to as features. This input vector is used to predict the target variable $t$ (the label).
Guyon, I., & Elisseeff, A. (2003). An introduction to variable and feature selection. Journal of Machine Learning Research, 3(Mar), 1157-1182.
Reference: The abstract defines features: "The variables collected from the field, which are used as inputs to a predictor, are referred to as 'features'."
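The feature/label split can be shown with one row of a supervised training set. This is a minimal sketch; the column names (square footage, bedrooms, price) are hypothetical.

```python
# Each row pairs input features with a label. Here the (hypothetical)
# features are square footage and bedroom count, and the label is the
# sale price the model should learn to predict.
rows = [
    {"sqft": 1400, "bedrooms": 3, "price": 250_000},
    {"sqft": 2000, "bedrooms": 4, "price": 340_000},
]

def split_features_and_label(row, label_column="price"):
    features = {k: v for k, v in row.items() if k != label_column}
    return features, row[label_column]

X, y = split_features_and_label(rows[0])
print(X)  # the model's inputs (features)
print(y)  # the prediction target (label)
```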
Question 11
Show Answer
A. After you clean the data, no model has been trained yet, so there are no performance metrics to evaluate.
B. Before you train a model, it has not yet learned any patterns from the data, so it cannot be evaluated.
C. Before you choose the type of model, you have not even started the training process, making evaluation impossible.
---
1. Microsoft Learn, AI-900: Describe machine learning models. In the "Evaluate a model" unit, the documentation states, "After you've used the training dataset to train a model, you need to evaluate it to determine how well it predicts. To do this, you use a second dataset that the model hasn't seen before... You can then compare the labels predicted by the model with the actual known labels in the original dataset. From this comparison, you can calculate a range of metrics that quantify how well the model performed." This directly confirms that evaluation occurs after training and testing on holdout data.
Source: Microsoft, AI-900: Microsoft Azure AI Fundamentals, "Describe machine learning models", "Evaluate a model" section.
2. Microsoft Azure Documentation, "What is automated machine learning (AutoML)?" The process described for AutoML, a core Azure service, follows the standard machine learning lifecycle. The documentation outlines that after a model is trained, it is scored against a validation dataset, and "the metrics from that scoring are used for evaluation." This reinforces that evaluation metrics are reviewed after the model is tested.
Source: Microsoft Azure Documentation, What is automated machine learning (AutoML)?, "Train and tune models" section.
3. Stanford University, CS229: Machine Learning Course Notes. In the section on model selection, the standard procedure is outlined: "The standard way to do this is to split it into a training set, a validation set (also called a hold-out cross validation set), and a test set... we can then select the model that did best on the validation set." This academic source confirms that model performance (via metrics) is reviewed on validation data after training.
Source: Ng, A. & Thrun, S. (2023). CS229: Machine Learning Course Notes, "Part V: Learning Theory", Section on "Model Selection and Train/Validation/Test Sets". Stanford University.
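The train-then-evaluate ordering can be sketched end to end. The "model" below is deliberately trivial (predict the majority class seen in training) so the lifecycle, not the algorithm, is the point; the data and split are illustrative assumptions.

```python
def train_majority_classifier(labels):
    # Most frequent label in the training data.
    return max(set(labels), key=labels.count)

def accuracy(predicted, actual):
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

labels = ["cat", "cat", "dog", "cat", "dog", "cat", "cat", "dog"]
train, test = labels[:6], labels[6:]      # simple holdout split

model = train_majority_classifier(train)  # 1. train the model
predictions = [model] * len(test)         # 2. test on unseen data
print(accuracy(predictions, test))        # 3. review the metric last
```

No metric exists until step 3: before training there are no predictions to compare against the known labels, which is why evaluation necessarily comes after training and testing.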
Question 12
Show Answer
A. Custom Vision is used for building custom image classification and object detection models; it analyzes visual content, not text within documents for sensitive information.
B. Conversational Language Understanding is designed to understand user goals, or intents, in conversational text (e.g., chatbots), not for scanning entire documents for PII.
1. Microsoft Learn, Azure AI Document Intelligence, "PII detection feature": "The PII detection feature in Document Intelligence can identify and redact sensitive information in your documents. The feature is part of the Document Intelligence service and can be enabled by setting the optional features query parameter to pii-detection." (This directly supports the correct answer C).
2. Microsoft Learn, Azure AI services, "What is Custom Vision?": "Custom Vision is an image recognition service that lets you build, deploy, and improve your own image classifiers. An image classifier is an AI service that applies labels (which represent classes) to images, according to their visual characteristics." (This supports why option A is incorrect).
3. Microsoft Learn, Azure AI Language, "What is conversational language understanding (CLU)?": "Conversational language understanding (CLU) is a cloud-based conversational AI service that applies custom machine-learning intelligence to a user's conversational, natural language text to predict overall meaning, and pull out relevant, detailed information." (This supports why option B is incorrect).
Question 13
HOTSPOT Select the answer that correctly completes the sentence.
Show Answer
OPTICAL CHARACTER RECOGNITION (OCR)
The core task is to "digitize newspaper articles," which implies converting the printed text from a scanned image into machine-readable, searchable, and editable data. Optical Character Recognition (OCR) is the specific computer vision technology designed to identify and extract printed or handwritten text from images.
The other options are incorrect as they serve different purposes:
- Facial analysis detects and analyzes human faces.
- Image classification assigns a single label to an entire image (e.g., "newspaper").
- Object detection locates and identifies specific objects within an image (e.g., a photograph, an advertisement) but does not extract the text content.
Microsoft. (2024). What is Optical Character Recognition (OCR)? Azure AI Vision documentation. This official documentation states: "The Optical Character Recognition (OCR) feature of Azure AI Vision extracts printed or handwritten text from images and documents... This is useful for various scenarios, including... digitizing print media like books, articles, and reports to make them searchable."
Smith, R. (2007). An Overview of the Tesseract OCR Engine. Proceedings of the Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), vol. 2, pp. 629-633. This paper details an open-source OCR engine widely used for digitizing text from scanned documents, a core task for historical archives. (DOI: https://doi.org/10.1109/ICDAR.2007.4376991)
Szeliski, R. (2010). Computer Vision: Algorithms and Applications. (Referenced in MIT OpenCourseWare, 6.819 / 6.869: Advances in Computer Vision). Chapter 14, Section 14.4.2 "Text recognition (OCR)" specifically defines OCR as the task of converting images of text into character codes.
Zou, Z., et al. (2023). Object Detection in 20 Years: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1), pp. 355-374. Section I (Introduction) clearly distinguishes object detection (determining the location and category of objects) from image classification (categorizing an entire image) and OCR (text extraction), confirming they are separate tasks. (DOI: https://doi.org/10.1109/TPAMI.2023.3238524)
Question 14
HOTSPOT Select the answer that correctly completes the sentence.
Show Answer
NUMERIC
In supervised machine learning, regression is the task of predicting a continuous quantitative value. The label (also known as the target or dependent variable) is the specific value the model is being trained to predict. By definition, a continuous quantitative value, such as a price, temperature, or a count, is represented by a numeric data type (e.g., float or integer).
In contrast, boolean (True/False) or text (categorical) data types are used as labels for classification tasks, where the goal is to predict a discrete class or category.
Microsoft Azure Documentation: In the context of Azure Machine Learning, the documentation for automated ML tasks explicitly states the requirement for regression. For the "Regression" task type, the "Label column data type" must be numeric (integer or decimal).
Source: Microsoft Azure. (2024). Set up automated ML to train a model (v2). Microsoft Learn. Retrieved October 24, 2025, from https://learn.microsoft.com/en-us/azure/machine-learning/how-to-configure-auto-train?view=azureml-api-2. (See the table under the "Data source and format" section.)
Academic Publication (Textbook): Reputable machine learning textbooks define regression by the nature of its output variable.
Source: Hastie, T., Tibshirani, R., & Friedman, J. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer. In Chapter 2, Section 2.2, it is stated: "Each measurement... has a quantitative output $Y$, such as price... This is a regression problem." Quantitative outputs are inherently numeric. (Page 9).
Peer-Reviewed Journal: Academic literature consistently distinguishes regression from classification based on the label's data type.
Source: Kotsiantis, S. B. (2007). Supervised Machine Learning: A Review of Classification Techniques. Informatica (Slovenia), 31(3), 249-268. The introduction states, "In machine learning, a typical supervised learning task is... regression (predicting a numeric value)." (Section 1, p. 249).
Question 15
Show Answer
A. AI enrichment for Azure Search is used to extract insights from unstructured data to make it searchable, not as the primary service for real-time brand identification in a bot.
C. Custom Vision is used to train your own image recognition models. While possible, the standard Computer Vision service already has a pre-built capability for detecting common brands.
D. Language understanding capabilities are used to process and understand text or spoken language, not to analyze the content of images.
1. Microsoft Learn: AI-900 Analyze images with the Computer Vision service. This module explicitly describes the capabilities of the Computer Vision service.
Reference: In the "Introduction" and "Detect common objects in images" units, it details the service's ability to return information about visual content, including brand detection.
2. Microsoft Learn: What is Computer Vision? This official documentation provides a high-level overview of the service's features.
Reference: Under the "Image analysis" section, "Brand detection" is listed as a specific feature: "Detect commercial brands in images from a database of thousands of global logos."
3. Microsoft Learn: What is Custom Vision? This document clarifies the purpose of the Custom Vision service, distinguishing it from the pre-trained Computer Vision service.
Reference: The "Overview" section states, "Custom Vision is an image recognition service that lets you build, deploy, and improve your own image identifiers." This highlights its use for custom, not pre-built, recognition tasks.
4. Microsoft Learn: What is Language Understanding (LUIS)? This document explains the purpose of language services.
Reference: The "Overview" section defines it as "a cloud-based conversational AI service that applies custom machine-learning intelligence to a user's conversational, natural language text to predict overall meaning." This confirms it is for text, not images.
Question 16
HOTSPOT Select the answer that correctly completes the sentence.
Show Answer
AN ANOMALY DETECTION WORKLOAD
Anomaly detection is a machine learning technique specifically designed to identify rare events, unexpected items, or unusual observations in data that differ significantly from the majority. Detecting "unusual temperature fluctuations" is a classic example of an anomaly detection workload. The goal is to find data points (temperatures) that fall outside the normal, expected operating range, which could indicate a fault or impending failure. The other options are incorrect as the data is numerical (time-series), not visual (computer vision) or text-based (NLP), and the specific goal is not to index unstructured data (knowledge mining).
Microsoft Azure Documentation. (2024). "What is the Anomaly Detector service?" Azure AI services documentation. Retrieved from Microsoft Learn.
Reference: Section: "Overview," Paragraph 1. This document states the service is used to "monitor data over time and detect anomalies" and explicitly lists "spotting unusual trends in business metrics, [or] monitoring machine health" as key use cases.
Chandola, V., Banerjee, A., & Kumar, V. (2009). Anomaly detection: A survey. ACM Computing Surveys (CSUR), 41(3), Article 15, pp. 15:1–15:2.
Reference: Section I (Introduction) defines anomalies as "patterns in data that do not conform to expected behavior." Section II.A ("Application Domains") cites "Industrial Damage Detection," which involves monitoring sensor data (such as temperature, vibration) from heavy machinery to detect abnormal patterns indicating damage.
DOI: https://doi.org/10.1145/1541880.1541882
Ng, A. (n.d.). "Lecture Notes 9: Anomaly Detection." Stanford University, CS 229: Machine Learning.
Reference: Section 1 ("Problem motivation"). This university courseware uses the example of monitoring computers in a data center, using features like temperature and fan speed, to detect "unusual" (anomalous) behavior that could predict an impending failure. This directly maps to the question's scenario.
Question 17
HOTSPOT You have an app that identifies birds in images. The app performs the following tasks: * Identifies the location of the birds in the image * Identifies the species of the birds in the image Which type of computer vision does each task use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Show Answer
OBJECT DETECTION
IMAGE CLASSIFICATION
Object detection is the correct technique for the first task because its primary function is to locate instances of objects within an image and provide their specific positions, typically by outputting bounding box coordinates.
Image classification is the correct technique for the second task. Its function is to analyze the pixels of an image (or a detected region) and assign one or more categorical labels to it. Identifying the species of a bird is a classic classification problem, as it assigns a specific category (e.g., "Robin," "Sparrow") to the detected object.
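The division of labor between the two tasks can be sketched in plain Python. Both functions below are placeholders: the detector's coordinates and the classifier's label are invented values, not real model output. The point is only that detection answers *where* and classification answers *what*.

```python
def detect_birds(image):
    # A real object detection model returns a bounding box for each
    # bird found; these pixel coordinates are placeholder values.
    return [{"box": (40, 60, 120, 150)}, {"box": (300, 80, 380, 170)}]

def classify_species(detection):
    # A real image classification model would score the cropped region
    # against known species; this fixed label is for illustration only.
    return "Robin"

image = None  # stand-in for pixel data
results = [{"box": d["box"], "species": classify_species(d)}
           for d in detect_birds(image)]
print(results)
```

This detect-then-classify pipeline mirrors the question's two tasks: the boxes satisfy "identifies the location", and the per-box label satisfies "identifies the species".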
Microsoft Azure AI Vision Documentation. (2024). Object detection (Image Analysis 4.0). "Object detection is similar to tagging, but the API returns the bounding box coordinates (in pixels) for each object found in the image... This lets you get the coordinates of the objects found in an image." This source confirms that object detection is used to find the location (coordinates) of objects.
Source Link: https://learn.microsoft.com/en-us/azure/ai-services/vision-services/concept-object-detection-4-0
Microsoft Azure AI Vision Documentation. (2024). Image classification (Image Analysis 4.0). "The Image Analysis 4.0 'classify' API lets you classify an image based on a taxonomy of categories. The classifier can use the default built-in model, or a custom model you've trained on your own categories." This source confirms that image classification is used to assign a category (like a species) to an image.
Source Link: https://learn.microsoft.com/en-us/azure/ai-services/vision-services/concept-image-classification-4-0
Zhao, Z. Q., Zheng, P., Xu, S. T., & Wu, X. (2019). Object Detection With Deep Learning: A Review. IEEE Transactions on Neural Networks and Learning Systems, 30(11), 3212–3232.
Reference: Section II-A, "Related Tasks," defines image classification as a task that "aims to assign a semantic label to an entire image," and object detection as a task that "aims to localize... all instances of target objects in an image... and classify these instances." The question explicitly separates the "locate" (localization) and "identify" (classification) tasks, making these the two most precise answers.
Question 18
HOTSPOT To complete the sentence, select the appropriate option in the answer area.
Show Answer
FEATURES
In machine learning, features are the input variables used to make a prediction. They represent the measurable, individual characteristics or attributes of the data. For example, if the goal is to predict a student's exam score, the features might include "hours studied," "previous grades," and "class attendance."
The label (or dependent variable, in statistical terms) is the output: the value you are trying to predict (e.g., the actual exam score). Identifiers (like a Student ID) are unique keys that are typically excluded from a model because they have no predictive value.
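The exam-score example above maps onto data structures directly; the numbers here are invented for illustration.

```python
# Features (inputs) vs. label (output) for the exam-score example.
# One row per student: (hours_studied, previous_grade, attendance_pct).
features = [
    (2.0, 65, 80),
    (5.5, 72, 95),
    (8.0, 88, 98),
]
labels = [60, 74, 91]  # actual exam scores: the value a model predicts
student_ids = ["S001", "S002", "S003"]  # identifiers: no predictive value

# Every feature row pairs with exactly one label.
assert len(features) == len(labels) == len(student_ids)
```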
Microsoft. "Features and labels." What is machine learning? - Azure AI Fundamentals. Microsoft Learn documentation (AI-900 path). Accessed Oct 24, 2025.
Quote: "The features are the characteristics of the item... For example, a flower's features might include its measurements for petals and stems. The label is the thing we're trying to predict... For example, the flower's species."
Ng, A. (2018). "Supervised Learning." CS229 Machine Learning Course Notes. Stanford University. p. 1.
Quote: "We are given a data set and already know what our correct output should look like... In this regression problem, $x$ are the 'input' variables (or features) and $y$ is the 'output' or target variable (also called the label) that we are trying to predict."
Alpaydın, E. (2020). Introduction to Machine Learning (4th ed.). MIT Press. Chapter 2, Section 2.1, p. 25.
Quote: "Each instance $\mathbf{x}$ is a vector of $d$ inputs, $\mathbf{x} = [x_1, x_2, \dots, x_d]^T$... The $x_j$, $j=1,\dots,d$ are the features."
Verencar, J., et al. (2023). "A Comprehensive Survey on Feature Engineering and Feature Selection in the Era of Big Data." IEEE Access, vol. 11, p. 100319. DOI: 10.1109/ACCESS.2023.3314988.
Quote: "In ML, a feature is an individual measurable property or characteristic of a phenomenon being observed. Features are the inputs of an ML model..."
Question 19
HOTSPOT Select the answer that correctly completes the sentence.
Show Answer
MODEL EVALUATION
The sentence describes the standard methodology for testing a machine learning model's performance on unseen data. This entire process is known as model evaluation.
A dataset is split into at least two parts:
- A training set ("a portion of a dataset") is used "to prepare" (i.e., train) the model.
- A testing/validation set ("the balance of the dataset") is withheld from the training process and used "to verify the results" (i.e., evaluate the model's accuracy and generalization).
This separation is essential to prevent overfitting and to get an unbiased estimate of how the model will perform on new, real-world data. "Model training" is only the first part of this process, not the overarching purpose.
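The split described above can be sketched with the standard library alone. Real projects typically use a library helper (such as scikit-learn's train_test_split), but the idea is identical: shuffle, then hold back a fraction of the rows for evaluation.

```python
# Minimal train/test split sketch using only the standard library.
import random

def split_data(rows, test_fraction=0.3, seed=42):
    rows = rows[:]                 # copy so the caller's list is untouched
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - test_fraction))
    return rows[:cut], rows[cut:]  # (training set, testing set)

data = list(range(10))
train, test = split_data(data)
print(len(train), len(test))  # 7 3
```

The shuffle matters: without it, a dataset sorted by date or category would produce training and testing sets with systematically different distributions.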
Microsoft. (n.d.). Split Data module. Azure Machine Learning documentation. Retrieved October 24, 2025, from https://learn.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/split-data
Reference (Module Overview): The official documentation for the "Split Data" module in Azure Machine Learning states, "This module is particularly useful when you need to separate data into training and testing sets... This is a common task in machine learning, to support model evaluation."
James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An Introduction to Statistical Learning: with Applications in R. Springer. https://doi.org/10.1007/978-1-4614-7138-7
Reference (Chapter 5, Section 5.1.1, "The Validation Set Approach"): This section describes the process: "...randomly dividing the available set of observations into two parts, a training set and a validation set... The model is fit on the training set, and the fitted model is used to predict the responses for the observations in the validation set." This entire procedure is presented as a method for estimating the test error, which is the core of model evaluation.
MIT OpenCourseWare. (2020). Lecture 2: Deep Sequence Modeling. 6.S191 Introduction to Deep Learning. Massachusetts Institute of Technology.
Reference (Video Timestamp ~50:18, "Splitting Data"): The lecture explicitly discusses splitting data into training, validation, and test sets. The purpose of the validation and test sets is "for evaluation" to check the model's generalization capabilities on data it has not been trained on.
Question 20
Show Answer
B. inclusiveness: This principle focuses on designing AI systems to empower and engage all people, avoiding exclusion. It is not directly related to data acquisition consent.
C. transparency: This principle is about ensuring that AI systems are understandable. It relates to explaining how a model works and its limitations, not the ethical sourcing of its training data.
D. reliability and safety: This principle ensures that AI systems operate consistently, safely, and as intended. While unvetted data could affect reliability, the core ethical breach is about consent, not performance.
1. Microsoft Learn. (2024). Microsoft's principles for responsible AI. In "AI-900: Describe AI concepts and workloads". Microsoft. Retrieved from https://learn.microsoft.com/en-us/training/modules/get-started-ai-fundamentals/5-responsible-ai-principles.
Reference Details: In the "Privacy and security" section, it states, "The data used to train the AI system should be analyzed to ensure that the privacy of individuals is protected." This directly links the training data's handling to the privacy principle.
2. Microsoft Cloud Adoption Framework for Azure. (2024). What is responsible AI?. Microsoft. Retrieved from https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/strategy/responsible-ai.
Reference Details: The "Privacy and security" section explicitly states, "AI systems must comply with privacy laws that require transparency about the collection, use, and storage of data and that mandate that consumers have appropriate controls to choose how their data is used." Using data without permission is a clear violation of this mandate.
Question 21
Show Answer
A. C#: C# is not a supported language for writing custom scripts within the Azure Machine Learning designer components.
B. Scala: Scala is not supported for custom code execution in the designer; it is more commonly associated with Azure Databricks and Spark environments.
1. Microsoft Learn. (2023). Execute Python Script component. In Azure Machine Learning documentation. "This article describes the Execute Python Script component in Azure Machine Learning designer. Use this component to run Python code."
Reference: Section: "Overview".
2. Microsoft Learn. (2023). Execute R Script component. In Azure Machine Learning documentation. "This article describes how to use the Execute R Script component to run R code in your Azure Machine Learning designer pipeline."
Reference: Section: "Overview".
3. Microsoft Learn. (2023). Azure Machine Learning designer component reference. This document lists all available components in the designer. Under the "Python Language" and "R Language" sections, it details the "Execute Python Script" and "Execute R Script" components, respectively. There are no equivalent components listed for C# or Scala.
Reference: Component list under sections "Python Language" and "R Language".
Question 22
Show Answer
A. clustering: This is an unsupervised learning technique used to group similar data points into clusters. It does not predict a specific numerical value.
B. classification: This supervised learning technique predicts a categorical label or class (e.g., 'high population' vs. 'low population'), not a specific numerical quantity like population size.
1. Microsoft Learn, AI-900: Describe machine learning. "Regression is a form of machine learning used to predict a numeric label based on an item's features. For example, a regression model might be used to predict the price of a house based on its features (like size, number of bedrooms, location, and so on)." This directly aligns with predicting a numeric value like population.
Source: Microsoft Learn, "Introduction to machine learning," Module: "Describe machine learning," Unit: "What is machine learning?".
2. Microsoft Learn, AI-900: Explore regression. "Regression is a common and popular kind of machine learning. You can use regression to predict a numeric value, such as price, sales amount, or age." Predicting an animal population is analogous to these examples.
Source: Microsoft Learn, "Explore regression with Azure Machine Learning designer," Module: "Use Azure Machine Learning designer," Unit: "Introduction".
3. Microsoft Learn, AI-900: Explore classification. "Classification is a form of supervised machine learning in which you train a model to predict which category, or class, an item belongs to." This confirms classification is for categories, not numeric values.
Source: Microsoft Learn, "Explore classification with Azure Machine Learning designer," Module: "Use Azure Machine Learning designer," Unit: "Introduction".
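As a concrete illustration of "predicting a numeric value", here is a minimal least-squares line fit in plain Python. The habitat-area and population numbers are invented for the example; a real regression model would use many features and a library implementation.

```python
# Simple linear regression (least squares) for one feature.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

# e.g. habitat area (sq km) -> observed population (a numeric label)
areas = [1.0, 2.0, 3.0, 4.0]
populations = [12.0, 22.0, 32.0, 42.0]
m, b = fit_line(areas, populations)
print(m * 5.0 + b)  # predicted population for a 5 sq km habitat
```

The output is a number on a continuous scale, which is exactly what distinguishes regression from classification (categories) and clustering (unlabeled groupings).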
Question 23
HOTSPOT Select the answer that correctly completes the sentence.
Show Answer
FAIRNESS
According to Microsoft's Responsible AI principles, fairness is the principle that directly addresses the mitigation of bias. AI systems can reflect and even amplify societal biases present in their training data. The fairness principle requires that AI systems are designed to treat all people fairly, with a specific goal of identifying and mitigating such biases to avoid unfair impacts on different groups of people.
Microsoft. (2022). Microsoft Responsible AI Standard, v2. Section 3.1: Fairness defined, p. 10.
Citation: "AI systems can behave unfairly for many reasons, including as a result of biases in the data used to train them... The goal of fairness, in the context of AI systems, is to mitigate such impacts."
Source: https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE5cmFm
Microsoft. (n.d.). Our responsible AI principles. Microsoft AI. Retrieved October 24, 2025.
Citation: Under the "Fairness" tab: "AI systems can also reinforce and amplify societal biases, for example, by reflecting biases in their training data."
Source: https://www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1:primaryr6
Microsoft. (2024, October 1). What is responsible AI? (Azure Machine Learning documentation). Retrieved October 24, 2025.
Citation: Under the "Fairness" section: "AI systems should treat everyone fairly and avoid affecting similarly situated groups of people in different ways... Examples of unfair behavior include systemic biases in the hiring or lending process."
Source: https://learn.microsoft.com/en-us/azure/machine-learning/concept-responsible-ai
Question 24
Show Answer
A: This is an example of robotics and computer vision, which involves physical automation and visual interpretation, not processing human language.
C: This describes a simple, rule-based automation system that reacts to sensor data (temperature), which does not involve understanding language.
1. Microsoft Learn, "Introduction to natural language processing": This module defines NLP as the area of AI dealing with software that understands written and spoken language. It lists common workloads, including conversational AI bots and question-answering solutions, which directly align with the correct answers. (Reference: Microsoft Learn, Module: "Get started with natural language processing", Unit: "Introduction")
2. Microsoft Learn, "What is question answering?": The official documentation states, "Question answering provides cloud-based Natural Language Processing (NLP) that allows you to create a natural conversational layer over your data." This directly supports option D as a primary NLP workload. (Reference: Microsoft Learn, Azure AI Language documentation, Section: "Question answering", Article: "What is question answering?")
3. Microsoft Learn, "What is conversational language understanding?": This resource explains that conversational AI services use machine learning to understand a user's natural language text to predict meaning and extract information. This technology is fundamental to both smart assistants (B) and interactive website bots (D). (Reference: Microsoft Learn, Azure AI Language documentation, Section: "Conversational language understanding", Article: "What is conversational language understanding?")
Question 25
HOTSPOT Select the answer that correctly completes the sentence.
Show Answer
COMPUTER VISION
Computer vision is the field of artificial intelligence (AI) that enables computers to derive meaningful information from digital images, videos, and other visual inputs. The task of "counting the number of animals in an area based on a video feed" is a specific application of computer vision, typically achieved using an object detection model. This model is trained to identify and locate (often by drawing a bounding box) instances of specific objects (animals) within each frame of the video, allowing them to be counted.
The other options are incorrect:
- Forecasting predicts future values based on historical time-series data.
- Knowledge mining (or knowledge discovery) primarily involves extracting insights from large volumes of unstructured text data.
- Anomaly detection identifies rare or unusual events or patterns, not the routine counting of all items.
Microsoft. (2024, October 1). Object detection (version 4.0). Azure AI Vision documentation, Microsoft Learn. Retrieved October 24, 2025.
Section: Overview. The documentation states, "Object detection is similar to tagging, but the API returns the bounding box coordinates (in pixels) for each object found... For example, in an image containing animals, the service can detect and return the location of each animal." This directly confirms the scenario as an object detection task, which is a subfield of computer vision.
Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.
Chapter 25, Section 25.1: "Vision". This chapter defines computer vision as the process of "understanding" images and videos. It explicitly covers object detection as a key task: "The task of identifying the objects in an image... and finding their location." Counting animals is a direct application of this principle.
Stanford University. (n.d.). CS231n: Deep Learning for Computer Vision. Course materials.
Lecture 11: "Detection and Segmentation". The course materials define object detection as a core computer vision problem: "Object Detection: localize and classify all objects in an image." Analyzing a video feed to count animals requires precisely this capability.
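Once an object detection model supplies the bounding boxes, the counting step itself is trivial. This sketch uses placeholder box lists in place of real per-frame model output.

```python
# Each frame is represented by the list of animal bounding boxes a
# detector would return (coordinates are placeholders for illustration).
frames = [
    [(10, 10, 50, 50), (80, 20, 130, 70)],
    [(12, 11, 52, 51), (82, 21, 132, 71), (200, 40, 260, 90)],
]
counts = [len(boxes) for boxes in frames]
print(counts)       # animals detected per frame
print(max(counts))  # a simple estimate of animals present in the area
```

A real system would also need to track animals across frames to avoid double-counting, but the core capability remains computer vision (object detection), not forecasting or anomaly detection.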
Question 26
HOTSPOT Select the answer that correctly completes the sentence.
Show Answer
OPTICAL CHARACTER RECOGNITION (OCR)
Optical character recognition (OCR) is the specific technology that processes images of printed, typed, or handwritten text and converts them into machine-readable text data. While object detection, facial recognition, and image classification are all computer vision tasks, they serve different purposes. Object detection locates specific items (e.g., a car, a sign), facial recognition is a specialized form of object detection for faces, and image classification assigns a descriptive label to an entire image (e.g., "document," "landscape"). Only OCR performs the specific action of extracting the textual content from the document.
Microsoft. (2024). What is Optical Character Recognition? - Azure AI Vision. Azure AI Vision Documentation.
Reference: In the "Read API" section, the documentation states: "The Azure AI Vision Read API extracts printed text (in several languages) and handwritten text (English only) from photos and documents." This directly confirms OCR (specifically the Read API implementation) is the correct technology for extracting handwritten text.
Forsyth, D. A., & Ponce, J. (2012). Computer Vision: A Modern Approach (2nd ed.). Pearson.
Reference: Chapter 22, "Finding Text," discusses the methods and applications of Optical Character Recognition as the established process for identifying and extracting text from images, distinguishing it from general object recognition.
LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). "Gradient-Based Learning Applied to Document Recognition." Proceedings of the IEEE, 86(11), 2278–2324.
Reference: https://doi.org/10.1109/5.726791. This foundational paper, widely cited in university courseware, details the application of neural networks to handwritten character recognition (specifically digits), which is a core component of modern OCR systems.
Szeliski, R. (2022). Computer Vision: Algorithms and Applications (2nd ed.). Springer.
Reference: Chapter 11, "Recognition," differentiates between various recognition tasks. Section 11.4, "Instance Recognition," covers tasks like object and face matching, while later sections implicitly and explicitly reference text recognition (OCR) as a distinct task of extracting symbolic character data.
Question 27
Show Answer
A. Recognize text (OCR) extracts printed or handwritten text from an image; it does not describe the visual content of the scene.
C. Identify the areas of interest is used for smart cropping, determining the most salient region of an image to generate a thumbnail, not a descriptive caption.
D. Detect objects identifies and locates individual objects within an image, typically with bounding boxes, but does not generate a complete descriptive sentence about the entire scene.
1. Microsoft Learn, "What is Image Analysis?": Under the "Image Analysis features" section, it lists "Caption an image (v4.0 only)". It states, "Generate a human-readable sentence that describes the content of an image." This directly supports the correct answer.
Source: Microsoft. (n.d.). What is Image Analysis?. Microsoft Learn. Retrieved from https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/overview-image-analysis (Section: "Image Analysis features")
2. Microsoft Learn, "Quickstart: Image Analysis": This guide demonstrates how to use the Image Analysis 4.0 API. In the "Analyze Image" section, it shows how to select visual features, including caption, to get a "descriptive caption for the image."
Source: Microsoft. (n.d.). Quickstart: Image Analysis. Microsoft Learn. Retrieved from https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40 (Section: "Analyze Image")
3. Microsoft Learn, "Call the Image Analysis API": The API reference documentation for Image Analysis 4.0 lists caption as one of the features that can be requested. The description for this feature is "a human-readable sentence that describes the content of the image." This confirms the specific API feature name.
Source: Microsoft. (n.d.). Call the Image Analysis API. Microsoft Learn. Retrieved from https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/how-to/call-analyze-image-40 (Section: "Specify model version and visual features")
Question 28
Show Answer
B. Azure Cognitive Search: This is an AI-powered cloud search service used for indexing and querying content. A bot might use it to find answers, but it doesn't build the conversational interface.
C. Language service: This service provides natural language understanding capabilities. While a bot uses it to process user input, it is a component of the bot's intelligence, not the framework for building and deploying it.
D. Speech service: This service enables speech-to-text and text-to-speech functionality. It is used to voice-enable a bot but is not the service used to build the core bot logic or connect it to channels.
1. Microsoft Learn, "What is Azure Bot Service?": "Azure Bot Service and Bot Framework provide an integrated set of tools and services to help you create a bot... You can create a bot that interacts with users naturally in conversation, on websites, apps, Cortana, Microsoft Teams, Skype, Slack, Facebook Messenger, and more." This document explicitly lists the multi-channel capability as a core feature.
2. Microsoft Learn, "Connect a bot to channels": Under the "Channels" section, the documentation states, "A channel is a connection between the Bot Framework Service and a communication app... You can configure your bot to connect to any of the standard channels, such as Alexa, Facebook Messenger, and Slack." This confirms the service's role in multi-platform deployment.
3. Microsoft Learn, AI-900 Study Guide, "Describe features of conversational AI workloads on Azure": This guide identifies Azure Bot Service as the platform to "build, publish, and manage bots." It highlights the ability to "publish a bot to multiple channels" as a key capability for creating conversational AI solutions.
Question 29
HOTSPOT Select the answer that correctly completes the sentence.
Show Answer
FORM RECOGNIZER
The task is to extract specific, structured information (fields like name, address, license number) from a driver's license to populate a database. The Azure AI Form Recognizer service (now part of Azure AI Document Intelligence) is designed for this exact purpose. It uses a prebuilt-idDocument model to analyze identity documents, including driver's licenses, and extract key-value pairs.
While Computer Vision can perform Optical Character Recognition (OCR) to extract all raw text from the image, it does not inherently understand the document's structure or identify which text string corresponds to which field. Conversational Language Understanding processes text-based conversation, and Custom Vision is for image classification or object detection, not field extraction.
Microsoft Azure Documentation (Azure AI Document Intelligence). (2025). Prebuilt models - Azure AI Document Intelligence. Section: Identity document (prebuilt-idDocument).
This official documentation states that the identity document model "combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from identity documents." It explicitly lists "US Driver's Licenses (all 50 states and District of Columbia)" as a supported document type and identifies extractable fields such as "FirstName", "LastName", "DocumentNumber", and "DateOfBirth".
Microsoft Azure Documentation (Azure AI Vision). (2025). Optical character recognition (OCR) - Azure AI Vision. Section: Read API.
This document describes the Computer Vision Read API, stating its function is to "extract printed text... handwritten text, digits, and currency symbols from images and multi-page PDF documents." This contrasts with Form Recognizer, as the Read API's output is the raw text content and its coordinates, not structured key-value pairs representing specific document fields.
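The practical difference is the shape of the output. Raw OCR yields undifferentiated text, while a document model returns key-value pairs ready for a database. The field names below mirror those quoted from the documentation; the values are invented for illustration.

```python
# What raw OCR gives you: text, with no notion of which string is which field.
ocr_text = "AVERY SMITH D1234567 1990-01-01"

# What a prebuilt ID-document model gives you: structured key-value pairs.
extracted = {
    "FirstName": "Avery",
    "LastName": "Smith",
    "DocumentNumber": "D1234567",
    "DateOfBirth": "1990-01-01",
}

# Structured fields map directly onto database columns.
row = (extracted["FirstName"], extracted["LastName"],
       extracted["DocumentNumber"], extracted["DateOfBirth"])
print(row)
```

Populating a database from the OCR string would require writing and maintaining parsing logic for every license layout; the structured output makes that step unnecessary, which is why Form Recognizer is the correct answer.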
Question 30
Show Answer
A. Train the model by using the clinical data.
Training on the entire dataset without splitting first would mean there is no unseen data left to validate the model, making it impossible to prove its accuracy on new data.
C. Train the model by using automated machine learning (automated ML).
Automated ML is a method for training a model. The fundamental requirement to prove accuracyโsplitting the data for validationโmust be done regardless of the training method used.
D. Validate the model by using the clinical data.
Validation is the process of evaluating a trained model. This step cannot be performed next, as the model has not yet been trained.
1. Microsoft Learn, AI-900 Study Guide, "Describe features of machine learning in Azure". This guide outlines the typical machine learning process. It states, "After you've prepared your data, you need to split it into two sets... You use the training dataset to train the model... After the model is trained, you evaluate it by using the testing dataset." This confirms that splitting is the step between data preparation and training/evaluation.
Reference: Microsoft Learn, "AI-900: Describe features of machine learning in Azure", Section: "The machine learning process".
2. Microsoft Learn, "Train and evaluate models in Azure Machine Learning". This document explicitly details the workflow. Under the "Train and evaluate a model" section, the first step listed is to "Split the data into training and validation sets." This directly supports splitting the data as the next logical action after preparation to enable evaluation.
Reference: Microsoft Learn, "Train and evaluate models in Azure Machine Learning", Section: "Train and evaluate a model".
3. Microsoft Learn, Tutorial: "Create a classification model with automated ML in Azure Machine Learning". The tutorial's procedure shows that data splitting is a core part of the process. It notes, "Automated ML automatically splits the data into a training data set and a validation data set... The model is trained with the training data and validated against the validation data". This reinforces that splitting is a prerequisite for training and validation, even when the process is automated.
Reference: Microsoft Learn, "Tutorial: Create a classification model with automated ML in Azure Machine Learning", Section: "Configure experiment run".
Question 31
HOTSPOT For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Show Answer
YES
NO
YES
The Azure AI Language service provides natural language processing (NLP) features to analyze text.
- Language Detection (Yes): This is a core feature of the service. It takes text input and returns the language identifier (e.g., "en" for English).
- Signature Detection (No): The Language service analyzes text. Detecting a handwritten signature is an image and layout analysis task, which falls under the Azure AI Document Intelligence (formerly Form Recognizer) or Azure AI Vision services, not the Language service.
- NER (Yes): This capability is called Named Entity Recognition (NER). The service can scan text to identify and categorize entities, which explicitly includes "Organization" (covering companies and organizations).
Microsoft Learn (Azure AI Language). (2024). What is language detection in Azure AI Language? "The language detection feature of the Azure AI Language service can detect the language a document is written in."
Microsoft Learn (Azure AI Language). (2024). What is Named Entity Recognition (NER) in Azure AI Language? This documentation details the feature, and the "Entity categories" section explicitly lists "Organization" as a supported type, providing examples like "Microsoft."
Microsoft Learn (Azure AI Language). (2024). What is Azure AI Language? The official service overview lists its features (e.g., NER, sentiment analysis, language detection, key phrase extraction). It does not list document layout analysis or signature detection as a capability.
Microsoft Learn (Azure AI Document Intelligence). (2024). What is Azure AI Document Intelligence? This service documentation describes capabilities for "Layout analysis" and "prebuilt models" which extract data from documents, including fields like signatures. This confirms signature detection is outside the scope of the Language service.
Question 32
HOTSPOT For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Show Answer
NO
YES
NO
A restaurant can use a chatbot to answer queries through Cortana. (No) The Cortana channel within the Microsoft Bot Framework, which was required for this integration, is deprecated. It is no longer possible to connect new bots to Cortana, and support for existing connections has ended.
A restaurant can use a chatbot to answer inquiries about business hours from a webpage. (Yes) This is a core, standard use case for a chatbot. Services like Azure Bot Service provide a "Web Chat" channel that is specifically designed to be embedded within a webpage. This allows users to ask the bot common questions, such as business hours, which the bot answers from its configured knowledge base (e.g., Azure AI Language's question answering feature).
A restaurant can use a chatbot to automate responses to customer reviews on an external website. (No) This scenario describes web automation or publishing, not a conversational AI (chatbot) function. A chatbot is designed for conversational interaction with a user through a defined channel (like a chat window). Automating posts to an external third-party site (like a review platform) would require a different technology, such as Robotic Process Automation (RPA) or direct integration with that site's specific API, if one exists.
Microsoft Learn (Official Documentation). (2023). Connect a bot to the Cortana channel. Retrieved October 24, 2025.
Reference: The document explicitly states: "Important: The Cortana channel is deprecated. We're no longer accepting new bots to the Cortana channel, and the channel will be shut down for existing bots..."
Microsoft Learn (Official Documentation). (2023). Connect a bot to Web Chat. Retrieved October 24, 2025.
Reference: This document details the "Web Chat channel," stating, "The Bot Framework Web Chat control is a client-side component that allows a user to interact with a bot... The Web Chat channel is a canvas for rendering your bot in a web page." This confirms the capability described in Statement 2.
Microsoft Learn (Official Documentation). (2024). What is Azure Bot Service?. Retrieved October 24, 2025.
Reference: The service is defined as providing "an interface for users to interact with [bots] through conversation." This highlights the interactive, conversational nature of a chatbot, which contrasts with the automated publishing task described in Statement 3.
Syed, T. A., & Suri, G. (2019). A Study of Robotic Process Automation and Chatbots. 2019 International Conference on Automation, Computational and Technology Management (ICACTM).
DOI: 10.1109/ICACTM.2019.8776707
Reference: This paper distinguishes the two technologies: "A chatbot is used for conversational purposes... RPA is an automation software technology to automate back-end processes..." This academic source supports that Statement 3 describes a process (RPA or web automation) distinct from a chatbot's function.
Question 33
HOTSPOT For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Show Answer
NO
YES
YES
Statement 1 (No): This statement is false. While bots can be built with custom code (e.g., using the Bot Framework SDK), Microsoft also provides low-code and no-code platforms. Services like Microsoft Copilot Studio (formerly Power Virtual Agents) and Azure AI Language's Custom question answering feature allow for the creation and deployment of chatbots with minimal to no custom coding.
Statement 2 (Yes): This statement is true. The Azure Bot Service is the core PaaS (Platform-as-a-Service) offering in Azure specifically designed to provide the runtime, infrastructure, and management services required to host and operate conversational bots.
Statement 3 (Yes): This statement is true. Azure Bot Service uses channels to connect the hosted bot logic to various front-end communication applications. Microsoft Teams is a primary, fully supported channel, enabling bots built with the service to interact directly with users within the Teams environment.
Microsoft Learn. (2024). What is Microsoft Copilot Studio? "Microsoft Copilot Studio lets you create powerful AI-powered copilots... Copilot Studio is a low-code platform that's built on the Microsoft Power Platform."
Microsoft Learn. (2024). Custom question answering overview. "Custom question answering... finds the most appropriate answer for a user's input... This information can be accessed through a conversational client application, such as a chat bot."
Microsoft Learn. (2024). What is Azure Bot Service? "Azure Bot Service is a managed bot development service that helps you create, test, deploy, and manage intelligent bots, all in one place... The service provides the core components for creating bots, including the Bot Framework SDK..."
Microsoft Learn. (2024). Connect a bot to channels. "A channel is a connection between a bot and a communication app. Azure Bot Service... connects your bot to these channels... You configure a bot to connect to the channels you want it to be available on."
Microsoft Learn. (2024). Connect a bot to Microsoft Teams. "You can configure your bot to communicate with people via Microsoft Teams. This article describes how to create a Teams app in Teams, connect your bot to your Teams app in Azure, and then test your bot in Teams."
Question 34
DRAG DROP Match the tasks to the appropriate machine learning models. To answer, drag the appropriate model from the column on the left to its scenario on the right. Each model may be used once, more than once, or not at all. NOTE: Each correct selection is worth one point.
Show Answer
CLUSTERING
REGRESSION
CLASSIFICATION
The solution correctly maps each machine learning model type to its corresponding task:
- Clustering: This is an unsupervised learning model used to find natural groupings or "clusters" in data. The task "Assign categories to passengers" does not have a predefined correct output; instead, the goal is to discover segments based on similarity, which is the definition of clustering.
- Regression: This is a supervised learning model used to predict a continuous numerical value. The task "Predict the amount of consumed fuel" requires predicting a specific number (e.g., 5,000 liters), making it a regression problem.
- Classification: This is a supervised learning model used to predict a discrete category or class. The task "Predict whether a passenger will miss their flight" requires a categorical prediction (e.g., "Yes" or "No"), which is a classic binary classification problem.
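The three model types above can be contrasted in a toy sketch. A real solution would train models with a library such as scikit-learn; the coefficients, the 30-minute threshold, and the passenger data below are all invented for illustration.

```python
# Toy illustrations of the three model types from the question.

# Regression (supervised): predict a continuous number, e.g. litres of
# fuel from distance flown. Assume a hypothetical fitted linear model.
def predict_fuel(distance_km: float) -> float:
    return 3.0 * distance_km + 500.0

# Classification (supervised): predict a discrete label, e.g. whether a
# passenger will miss the flight. The threshold is a made-up stand-in
# for a trained decision boundary.
def predict_missed_flight(minutes_before_departure: int) -> str:
    return "miss" if minutes_before_departure < 30 else "make"

# Clustering (unsupervised): group passengers by similarity with no
# predefined labels. A crude 1-D k-means with k=2 -- the two groups
# emerge from the data rather than from known answers.
def cluster_passengers(values: list[float], iterations: int = 10):
    c1, c2 = min(values), max(values)  # initial centroids
    for _ in range(iterations):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return g1, g2
```

Note how only the clustering function receives no labels at all: given annual flight counts like `[1, 2, 3, 20, 22, 25]`, it discovers an "occasional" and a "frequent" group on its own, which is exactly why "Assign categories to passengers" maps to clustering.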
Microsoft. (n.d.). Machine learning basics: What is regression?. Microsoft Learn (AI-900 path). This document states, "Regression is a form of supervised machine learning in which you train a model to predict a numeric value." This directly supports matching Regression to "Predict the amount of consumed fuel."
Microsoft. (n.d.). Machine learning basics: What is classification?. Microsoft Learn (AI-900 path). This official documentation notes, "Classification is a form of supervised machine learning in which you train a model to predict which category (or class) an item belongs to." This supports matching Classification to "Predict whether..." (a yes/no category).
Microsoft. (n.d.). Machine learning basics: What is clustering?. Microsoft Learn (AI-900 path). This document defines clustering as an "unsupervised machine learning in which you train a model to group similar items into clusters." This supports matching Clustering to "Assign categories to passengers," as the model creates the categories.
Ng, A. (n.d.). CS229 Lecture Notes 1. Stanford University.
Part I ("Supervised Learning"): Distinguishes between regression (predicting a continuous-valued output) and classification (predicting a discrete-valued output). This reference supports the separation of the regression (amount of fuel) and classification (will miss flight) tasks.
Part V ("Unsupervised Learning"): Describes clustering as a problem where "we are given a data set and we'd like to automatically find structure in the data," such as grouping it into clusters. This supports the "Assign categories" task.
Hastie, T., Tibshirani, R., & Friedman, J. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2nd ed.). Springer.
Chapter 2, Section 2.3 ("Supervised and Unsupervised Learning"): This foundational text formally distinguishes the goals. Supervised learning (classification, regression) aims to predict a specific, known output or response. Unsupervised learning (clustering) has "no outcome variable, but rather... to describe the associations and patterns among a set of input variables." This confirms the fundamental logic used in the answer.
Question 35
HOTSPOT Select the answer that correctly completes the sentence.
Show Answer
A VOICE-ACTIVATED SECURITY KEY SYSTEM
Speech recognition is the core technology that enables a system to process spoken language and convert it into a machine-readable format, such as text or a specific command. A voice-activated security key system is a direct example of this.
To function, the system must first recognize the content of the speech, such as a specific passphrase (e.g., "Open sesame") or a command (e.g., "Unlock"). This process of understanding what is said is speech recognition.
This is distinct from speaker recognition, which is a biometric technology used to verify who is speaking. While a robust security system would likely use both (recognizing the command and verifying the user's voice), the "voice-activated" part fundamentally relies on speech recognition to interpret the command.
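The "act on the recognized text" step can be sketched as follows. This assumes a speech-to-text service (such as Azure AI Speech) has already produced the `transcript` string; the passphrase and action names are made-up examples.

```python
# Sketch of acting on speech-to-text output as command input.
# Speech recognition answers *what* was said; verifying *who* said it
# (speaker recognition) would be a separate biometric step.

PASSPHRASE = "open sesame"  # hypothetical example passphrase

def handle_transcript(transcript: str) -> str:
    """Map recognized speech to a system action."""
    text = transcript.strip().lower()
    if text == PASSPHRASE:
        return "unlocked"
    if text == "lock":
        return "locked"
    return "no action"
```

The matching here operates purely on the recognized words, which is why the "voice-activated" behavior is attributed to speech recognition rather than speaker recognition.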
Microsoft Azure AI Speech Documentation. (n.d.). What is speech-to-text?. Microsoft. Retrieved October 24, 2025.
Section: Overview
Quote: "Speech-to-text... converts audio streams or files to text. Your applications, tools, or devices can consume, display, and act on this text as command input." (A "voice-activated security key system" is a prime example of acting on a command input).
Microsoft Azure AI Speech Documentation. (n.d.). Speech recognition basics. Microsoft. Retrieved October 24, 2025.
Section: Speech recognition and more
Quote: "Use speech recognition (also known as speech-to-text) to transcribe audio into text... Other scenarios include voice command..."
Microsoft Azure AI Speech Documentation. (n.d.). What is speaker recognition?. Microsoft. Retrieved October 24, 2025.
Section: Overview
Quote: "Speaker recognition... verify and identify speakers by their unique voice characteristics." (This reference is provided to clarify the distinction. While the security aspect of the system may use speaker recognition, the activation by voice requires speech recognition).
Juang, B. H., & Rabiner, L. R. (2005). Automatic speech recognitionโa brief history of the technology development. Georgia Institute of Technology, Center for Signal and Image Processing.
Page: 1
Quote: "The task of an ASR [Automatic Speech Recognition] system is to convert a speech signal into a corresponding sequence of words." (This fundamental definition covers the system's need to understand the command/passphrase).