Microsoft AI-900 Azure AI Fundamentals Exam Questions 2025

Our AI-900 Exam Questions provide authentic, up-to-date content for the Microsoft Certified: Azure AI Fundamentals certification. Each question is reviewed by certified Microsoft professionals and includes verified answers with clear explanations to strengthen your understanding of AI concepts, machine learning, and Azure AI services. With access to our exam simulator, you can practice in real exam conditions and confidently prepare to pass on your first attempt.

Exam Questions

Question 1

HOTSPOT: To complete the sentence, select the appropriate option in the answer area.

Correct Answer:

ANALYSIS

Explanation

The AI solution is assessing specific qualities or attributes of the face, such as exposure, noise, and occlusion.

  • Facial detection simply locates a face in an image (e.g., provides a bounding box).
  • Facial recognition identifies a specific person.
  • Facial analysis (or attribute detection) is the process that extracts information about a detected face. This includes quality metrics (blur, exposure, noise), pose, emotion, and physical attributes (occlusion, glasses, etc.).

Because the solution provides feedback on these specific attributes, it is performing facial analysis.
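The distinction is visible in the shape of a detection response. The sketch below parses a mocked-up response in the style of the Face Detect API — the field names mirror the documented attributes, but the values are invented for illustration:

```python
# Mocked-up response in the style of the Face "Detect" API (values invented).
# faceRectangle is what plain *detection* gives you; faceAttributes is the
# extra information that facial *analysis* adds.
sample_response = {
    "faceRectangle": {"top": 78, "left": 120, "width": 95, "height": 95},
    "faceAttributes": {
        "blur": {"blurLevel": "low", "value": 0.1},
        "exposure": {"exposureLevel": "goodExposure", "value": 0.55},
        "noise": {"noiseLevel": "low", "value": 0.12},
        "occlusion": {"foreheadOccluded": False,
                      "eyeOccluded": False,
                      "mouthOccluded": False},
    },
}

def quality_feedback(face):
    """Turn analysis attributes into human-readable feedback strings."""
    attrs = face["faceAttributes"]
    feedback = [
        f"exposure: {attrs['exposure']['exposureLevel']}",
        f"noise: {attrs['noise']['noiseLevel']}",
    ]
    occluded = any(attrs["occlusion"].values())
    feedback.append("occlusion detected" if occluded else "no occlusion")
    return feedback

print(quality_feedback(sample_response))
```

Note that nothing in `quality_feedback` touches `faceRectangle`: feedback on exposure, noise, and occlusion is driven entirely by the analysis attributes, which is the point of the question.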

References

Microsoft. (2024). Face detection and attributes. Azure AI services documentation.

Reference: In the "Face - Detect" API documentation, the service can "extract a set of face-related attributes... The available attributes... include... Occlusion... Noise... Exposure." This confirms that determining these specific properties is a function of facial attribute analysis, which is a component of the broader "Face" service, distinct from simple detection.

Hjelmås, E., & Low, B. K. (2001). Face Detection: A Survey. Computer Vision and Image Understanding, 83(3), 236-274. https://doi.org/10.1006/cviu.2001.0921

Reference: This survey (Section 1, "Introduction") distinguishes between "detection" (locating a face) and "recognition" (identifying a face). The task described in the question—evaluating attributes—is a separate analytical step that follows detection.

MIT OpenCourseWare. (2020). 6.819/6.869: Advances in Computer Vision.

Reference: Lecture 15, "Face Recognition." Course materials distinguish the core tasks: Detection ("Find all faces"), Attribute Analysis ("Find properties: pose, expression, gender, image quality"), and Recognition ("Identify the person"). The question clearly aligns with Attribute Analysis.

Question 2

Which statement is an example of a Microsoft responsible AI principle?
Options
A: AI systems must use only publicly available data.
B: AI systems must protect the interests of the company.
C: AI systems must be understandable.
D: AI systems must keep personal details public.
Correct Answer:
AI systems must be understandable.
Explanation
Microsoft's responsible AI principle of Transparency dictates that AI systems should be understandable. This means that users should be able to comprehend how the system works, the data it uses, its limitations, and the reasoning behind its decisions. Making AI systems understandable helps build trust and enables accountability by allowing developers and users to identify and rectify potential biases or errors.
Why Incorrect Options are Wrong

A. This is incorrect. While data sourcing is important, there is no principle mandating the exclusive use of publicly available data; the focus is on responsible data handling, regardless of its source.

B. This is incorrect. The principles of responsible AI are human-centric, focusing on fairness and societal benefit, not solely on corporate interests, which could potentially conflict with ethical considerations.

D. This is incorrect. This statement directly contradicts the principle of Privacy and Security, which requires AI systems to protect personal data and respect user privacy.

References

1. Microsoft Learn. (2024). Microsoft's responsible AI principles. AI-900: Microsoft Azure AI Fundamentals. "Transparency: AI systems should be understandable. The people who create and use AI systems must be able to fully understand how they work so they can identify potential issues, such as bias or unexpected outcomes, that could otherwise go undiscovered."

2. Microsoft Learn. (2024). Introduction to responsible AI in Azure. AI-900: Microsoft Azure AI Fundamentals. "The six principles that form the foundation of Microsoft's approach to responsible AI are: Fairness, Reliability and Safety, Privacy and Security, Inclusiveness, Transparency, and Accountability."

3. Microsoft Corporate. (2023). Microsoft Responsible AI Standard, v2. "Transparency: We will be transparent about the capabilities and limitations of our AI systems." (Section 1.5, Page 6).

Question 3

Which type of natural language processing (NLP) entity is used to identify a phone number?
Options
A: regular expression
B: machine-learned
C: list
D: Pattern-any
Correct Answer:
regular expression
Explanation
A phone number follows a specific, predictable structure (e.g., (555) 123-4567, 555-123-4567). In Natural Language Processing (NLP) services, such as Azure AI Language's Conversational Language Understanding, a regular expression entity is the ideal component for identifying and extracting text that conforms to a defined pattern. This method is highly precise and efficient for structured data like phone numbers, email addresses, or product codes, as it does not require machine learning from examples.
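A plain Python regular expression shows why this entity type fits: the pattern below (an assumption covering common US-style formats) matches phone-number-shaped text without any training examples, which is exactly what a regular expression entity does inside the Language service:

```python
import re

# A regular-expression entity works because phone numbers follow a fixed
# structure. This pattern covers common US-style formats; a production
# entity would use whatever pattern the target locale requires.
PHONE_PATTERN = re.compile(r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}")

def extract_phone_numbers(utterance):
    """Return every phone-number-shaped substring in the utterance."""
    return PHONE_PATTERN.findall(utterance)

print(extract_phone_numbers("Call me at (555) 123-4567 or 555-987-6543."))
# → ['(555) 123-4567', '555-987-6543']
```

No labeled examples, no training: the entity is precise as soon as the pattern is defined, which is the contrast the question draws against machine-learned entities.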
Why Incorrect Options are Wrong

B. machine-learned: This entity type is best for concepts that are defined by context, not a strict pattern, and requires training with many labeled examples. It is inefficient for a well-structured pattern like a phone number.

C. list: A list entity is used for a fixed, closed set of words and their synonyms (e.g., a list of cities or product categories). It cannot be used to identify the near-infinite variations of phone numbers.

D. Pattern-any: This is a generic placeholder used within a larger pattern template to capture variable text. A regular expression entity is more specific and appropriate for defining the exact structure of a phone number.

References

1. Microsoft Learn, "Entity components in conversational language understanding." Under the "Regular expression entity component" section, it states, "A regular expression entity component extracts an entity based on a regular expression pattern you provide. It's ideal for text with consistent formatting." This directly supports using regex for patterned data like phone numbers.

2. Microsoft Learn, "Entity components in conversational language understanding." The "List entity component" section clarifies, "A list entity component represents a fixed, closed set of related words along with their synonyms...This component is a good choice when you have a set of items that don't change often." This confirms why a list is unsuitable for phone numbers.

3. Microsoft Learn, "How to create a conversational language understanding project." In the "Add entity components" section, the documentation guides users to "Select Regular expression" for entities that follow a defined pattern, contrasting it with list and machine-learned entities.

Question 4

You need to implement a pre-built solution that will identify well-known brands in digital photographs. Which Azure AI service should you use?
Options
A: Face
B: Custom Vision
C: Computer Vision
D: Form Recognizer
Correct Answer:
Computer Vision
Explanation
The Azure AI Computer Vision service provides pre-built, pre-trained models for analyzing images. One of its core features is the ability to detect thousands of well-known commercial brands from a continuously updated database. This capability is available through the Analyze Image API by specifying the brands visual feature. It directly fulfills the requirement for a pre-built solution to identify brands in photographs without needing to train a custom model.
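A minimal sketch of such a request, built but not sent — the endpoint and key are placeholders, and the path follows the v3.2 Analyze Image REST API:

```python
# Sketch: building (not sending) an Analyze Image request that asks the
# Computer Vision service for the Brands visual feature. Endpoint and key
# below are placeholders, not real values.

def build_brand_detection_request(endpoint, key, image_url):
    url = f"{endpoint}/vision/v3.2/analyze"
    params = {"visualFeatures": "Brands"}   # request pre-built brand detection
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
    }
    body = {"url": image_url}
    return url, params, headers, body

url, params, headers, body = build_brand_detection_request(
    "https://<resource>.cognitiveservices.azure.com",  # placeholder endpoint
    "<key>",                                           # placeholder key
    "https://example.com/photo.jpg",
)
print(url, params)
```

The key point for the exam: no model training appears anywhere — specifying `visualFeatures=Brands` is all that is needed, because the brand database is pre-built into the service.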
Why Incorrect Options are Wrong

A. Face: The Azure AI Face service is specialized for detecting, identifying, and analyzing human faces and their attributes, not for recognizing commercial logos or brands.

B. Custom Vision: This service is used to build, train, and deploy your own custom image classification and object detection models. It is not a pre-built solution for general brand detection.

D. Form Recognizer: This service is designed to extract text, key-value pairs, and tables from documents like invoices and receipts, not for analyzing brands in general photographs.

References

1. Microsoft Learn. (2024). Analyze images with the Computer Vision service. AI-900: Microsoft Azure AI Fundamentals learning path.

Reference: In the "Image Analysis" section, the documentation states, "The Computer Vision service can detect thousands of famous brands." This confirms its pre-built capability for brand detection.

2. Microsoft Azure Documentation. (2023). What is Computer Vision? - Image analysis.

Reference: Under the "Image analysis" feature list, it specifies "Brand detection: Detects brands in images from a database of thousands of global logos." This explicitly identifies the required functionality as part of the Computer Vision service.

3. Microsoft Azure Documentation. (2023). Call the Analyze Image API.

Reference: In the section "Specify visual features," the documentation lists Brands as one of the features that can be requested. The description states, "Detects various brands within an image, including the approximate location of the brand logo."

Question 5

HOTSPOT: For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

Correct Answer:

YES

YES

NO

Explanation

Object Detection (Yes): The Azure Custom Vision service is designed for two primary functions: image classification (assigning labels to an entire image) and object detection (identifying the location and label of specific objects within an image using bounding boxes).

Requires Own Data (Yes): The "Custom" aspect of Custom Vision means it is used to build and train a model tailored to a specific use case. This process requires the user to upload and tag their own set of images to serve as the training data. This contrasts with the general-purpose Computer Vision service, which uses a pre-trained model.

Analyze Video Files (No): The Custom Vision service prediction API is built to analyze static images (e.g., JPEG, PNG) or image URLs. It does not have a native API endpoint that accepts video files (e.g., MP4, AVI) for analysis. While a complete solution can analyze video by first using a different process to extract individual frames and then sending those frames as images to Custom Vision, the service itself does not analyze video files.
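The frame-extraction pattern described above can be sketched as follows. This computes only the sampling schedule; a real pipeline would decode the frames with a tool such as OpenCV or ffmpeg and submit each frame image to the Custom Vision prediction endpoint:

```python
# Sketch of the frame-sampling step: Custom Vision predicts on images, so a
# video must first be broken into frames. We compute which frame indices to
# sample; decoding and submitting them is left to the surrounding pipeline.

def frames_to_sample(total_frames, fps, seconds_between_samples):
    """Pick evenly spaced frame indices, one every N seconds of video."""
    step = int(fps * seconds_between_samples)
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps, sampled once per second, yields 10 indices
# (0, 30, 60, ... 270) — each of which becomes one image-prediction call.
indices = frames_to_sample(total_frames=300, fps=30, seconds_between_samples=1)
print(indices)
```

Sampling (rather than sending every frame) also keeps the number of prediction calls, and therefore cost and latency, proportional to video length rather than frame rate.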

References

Microsoft Corporation. (2024). What is Custom Vision?. Microsoft Learn. Retrieved October 24, 2025, from https://learn.microsoft.com/en-us/azure/ai-services/custom-vision-service/overview

Reference for Statement 1: The document states, "Custom Vision is an image recognition service that lets you build, deploy, and improve your own image... object detection models."

Reference for Statement 2: The same document notes, "You provide a set of labeled images to train your model..."

Microsoft Corporation. (2024). Tutorial: Analyze videos in near real-time with Custom Vision. Microsoft Learn. Retrieved October 24, 2025, from https://learn.microsoft.com/en-us/azure/ai-services/custom-vision-service/analyze-video

Reference for Statement 3: This official tutorial outlines the correct pattern, which confirms the service does not analyze video files directly. The process states: "This tutorial shows how to use the Custom Vision Service API to perform... analysis on frames taken from a live video stream... The basic approach is to... break the video stream down into a sequence of frames... [and] submit selected frames to the Custom Vision predictor." This confirms the API target is the frame (image), not the video file.

Question 6

You need to identify street names based on street signs in photographs. Which type of computer vision should you use?
Options
A: object detection
B: optical character recognition (OCR)
C: image classification
D: facial recognition
Correct Answer:
optical character recognition (OCR)
Explanation
The core task is to extract and read text (the street names) from an image (a photograph of a street sign). Optical Character Recognition (OCR) is the specific computer vision technology designed for this purpose. It detects the presence of text in an image and then transcribes the characters into a machine-readable format. While object detection might first be used to locate the sign, OCR is the essential step to read the name itself.
Why Incorrect Options are Wrong

A. object detection: This technique is used to locate and classify objects within an image (e.g., identifying the bounding box of a "street sign"), but it does not read the text on the object.

C. image classification: This assigns a single label or category to an entire image (e.g., "urban street" or "daytime"). It does not extract specific information like text from within the image.

D. facial recognition: This is a specialized technology used exclusively for identifying and analyzing human faces in images or videos, which is irrelevant to reading a street sign.

References

1. Microsoft Learn, AI-900: Explore computer vision in Microsoft Azure, "Explore optical character recognition" unit. This document states, "Optical character recognition (OCR) is a technique used to detect and read text in images. You can use the Computer Vision service to read text in images..." This directly supports the use of OCR for reading text from signs.

2. Microsoft Learn, AI-900: Explore computer vision in Microsoft Azure, "Explore object detection" unit. This source explains, "Object detection is a form of computer vision in which a model is trained to classify individual objects within an image, and indicate their location with a bounding box." This clarifies that object detection locates objects but does not read text.

3. Microsoft Learn, AI-900: Explore computer vision in Microsoft Azure, "Introduction" unit. This unit provides an overview of computer vision tasks, distinguishing between image classification (what is the image about?), object detection (what objects are in the image, and where are they?), and OCR (reading text in the image).

Question 7

HOTSPOT: Select the answer that correctly completes the sentence.

Correct Answer:

ADDING AND CONNECTING MODULES ON A VISUAL CANVAS

Explanation

The Azure Machine Learning designer is a low-code/no-code tool within the Azure ML workspace. Its primary interface is a visual canvas where users build end-to-end ML pipelines by dragging, dropping, and connecting pre-built modules for tasks like data import, transformation, model training, and scoring.

  • Option 3 (automatically selecting an algorithm...) is incorrect as this specifically describes Automated ML (AutoML), a separate feature.
  • Option 4 (using a code-first notebook experience) is incorrect as this describes using Jupyter notebooks with the Azure ML SDK, which is the code-based alternative to the visual designer.

References

Microsoft. (2024, May 22). What is Azure Machine Learning designer? Microsoft Learn.

Section: Introduction, Paragraphs 1-2.

Quote: "Azure Machine Learning designer is a drag-and-drop interface... The designer provides a visual canvas where you can add datasets and modules... You connect the modules to create a pipeline draft..."

URL: https://learn.microsoft.com/en-us/azure/machine-learning/concept-designer

Microsoft. (2024, October 17). What is automated machine learning (AutoML)? Microsoft Learn.

Section: "How AutoML works."

Note: This official documentation confirms that "automatically selecting an algorithm" is the function of AutoML, distinguishing it from the designer.

URL: https://learn.microsoft.com/en-us/azure/machine-learning/concept-automated-ml

Duke University. (n.d.). AI for Everyone: Azure ML Designer. Duke AIPI (Artificial Intelligence for Product Innovation).

Section: "Azure ML Designer."

Quote: "Azure ML Designer... is a visual, drag-and-drop environment... In the Designer, you create an ML 'pipeline' on a visual canvas by dragging and dropping 'modules'..."

URL: https://aipi.pratt.duke.edu/azure-ml-designer

Microsoft. (2024, September 24). What are Jupyter notebooks in Azure Machine Learning? Microsoft Learn.

Section: Introduction.

Note: This documentation defines the "code-first notebook experience" as a separate component (Jupyter notebooks) within the Azure ML workspace, confirming it is distinct from the designer.

URL: https://learn.microsoft.com/en-us/azure/machine-learning/concept-notebooks

Question 8

You need to reduce the load on telephone operators by implementing a chatbot to answer simple questions with predefined answers. Which two AI services should you use to achieve the goal? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
Options
A: Azure Bot Service
B: Azure Machine Learning
C: Translator
D: Language Service
Correct Answer:
Azure Bot Service, Language Service
Explanation
To create a chatbot that answers questions from a predefined knowledge base, you need two core components. First, the Azure Bot Service provides the framework to build, deploy, and manage the bot, enabling it to interact with users on various channels. Second, the Azure AI Language Service, specifically its Question Answering feature (formerly QnA Maker), is used to build the knowledge base from existing content like FAQs. This service allows the bot to understand natural language questions and map them to the correct predefined answers. Together, these services provide a complete solution for a conversational Q&A experience, directly addressing the need to reduce the load on telephone operators.
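As a sketch of where the two services meet, the function below builds (but does not send) the REST request a bot backend would POST to the Language service's question-answering endpoint. The resource endpoint, project name, and deployment name are placeholders:

```python
import json

# Sketch: the request a Bot Service backend would send to the Language
# service's question-answering operation. Endpoint, project, and deployment
# names are placeholders; the request is built but never sent.

def build_qna_request(endpoint, project, deployment, question):
    url = (f"{endpoint}/language/:query-knowledgebases"
           f"?projectName={project}&deploymentName={deployment}"
           f"&api-version=2021-10-01")
    body = {"question": question, "top": 1}  # return the single best answer
    return url, json.dumps(body)

url, body = build_qna_request(
    "https://<resource>.cognitiveservices.azure.com",  # placeholder endpoint
    "faq-project",                                     # placeholder project
    "production",
    "What are your opening hours?",
)
print(url)
print(body)
```

The division of labor matches the answer: the Bot Service owns the conversation and channels, while the Language service owns the knowledge base that maps each question to its predefined answer.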
Why Incorrect Options are Wrong

B. Azure Machine Learning: This is a platform for building, training, and deploying custom machine learning models, which is overly complex for a simple Q&A bot with predefined answers.

C. Translator: This service is used for text translation between languages. The scenario does not specify a requirement for multilingual support, making this service unnecessary for the core goal.

References

1. Microsoft Learn, Azure AI Language documentation. "What is question answering?" This document states, "Question answering provides cloud-based Natural Language Processing (NLP) that allows you to create a natural conversational layer over your data... It is commonly used to build conversational client applications, which include social media applications, chat bots, and speech-enabled desktop applications."

Source: Microsoft, "What is question answering?", Azure AI Language documentation. Retrieved from https://learn.microsoft.com/en-us/azure/ai-services/language-service/question-answering/overview

2. Microsoft Learn, Azure Bot Service documentation. "What is Azure Bot Service?" This page explains, "Azure Bot Service is a comprehensive development environment for building enterprise-grade conversational AI... Bots can be used to shift simple, repetitive tasks, such as taking a dinner reservation or gathering profile information, on to automated systems that may no longer require direct human intervention."

Source: Microsoft, "What is Azure Bot Service?", Azure Bot Service documentation. Retrieved from https://learn.microsoft.com/en-us/azure/bot-service/bot-service-overview

3. Microsoft Learn, Tutorial. "Create a question answering bot". This tutorial explicitly demonstrates the required architecture: "In this tutorial, you learn how to: 1. Create a question answering project and import a file as a knowledge base. 2. Add your knowledge base to a bot... 3. Build and run your bot." This confirms the direct integration of the Language Service (for the knowledge base) and the Bot Service (for the bot itself).

Source: Microsoft, "Quickstart: Create a question answering bot", Azure AI Language documentation. Retrieved from https://learn.microsoft.com/en-us/azure/ai-services/language-service/question-answering/quickstarts/bot-service

Question 9

DRAG DROP: Match the machine learning models to the appropriate descriptions. To answer, drag the appropriate model from the column on the left to its description on the right. Each model may be used once, more than once, or not at all. NOTE: Each correct match is worth one point.

Correct Answer:

REGRESSION

CLASSIFICATION

CLUSTERING

Explanation

The solution correctly maps the three fundamental types of machine learning models to their definitions.

  • Regression is a supervised learning task used to predict a continuous numeric value, such as the price of a house or a future temperature.
  • Classification is also a supervised learning task, but it is used to predict a discrete category or class, such as whether an email is 'spam' or 'not spam', or if a tumor is 'benign' or 'malignant'.
  • Clustering is an unsupervised learning task. It does not use pre-defined labels; instead, it analyzes the input data to identify natural groupings (clusters) of items based on their shared features or similarities.
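The three model types can be illustrated with minimal, dependency-free sketches; the data, threshold, and centroids below are invented for illustration:

```python
# Tiny illustrations of the three model types.

def fit_regression(xs, ys):
    """Least-squares line through (x, y) pairs: predicts a continuous value."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    intercept = mean_y - slope * mean_x
    return lambda x: slope * x + intercept

def classify(score, threshold=0.5):
    """Predicts a discrete category (supervised classification, trivially)."""
    return "spam" if score >= threshold else "not spam"

def cluster(points, centroids):
    """Assigns each point to its nearest centroid (unsupervised grouping)."""
    return [min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            for p in points]

predict_price = fit_regression([1, 2, 3], [10, 20, 30])
print(predict_price(4))                     # continuous output: 40.0
print(classify(0.9))                        # discrete output: 'spam'
print(cluster([1, 2, 9, 10], [1.5, 9.5]))   # group assignments: [0, 0, 1, 1]
```

Note the supervised/unsupervised split the question tests: `fit_regression` and `classify` need labeled targets (the `ys`, the spam threshold), while `cluster` groups points purely by their similarity.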

References

Microsoft. (2024). "What is machine learning?" Azure Machine Learning Documentation. Microsoft.

Section: "Machine learning model types" > "Supervised learning"

Quote/Paraphrase: This documentation explicitly defines regression as a supervised method for predicting continuous values (e.g., price, sales). It defines classification as a supervised method for predicting categories (e.g., yes/no, true/false). It defines clustering as an unsupervised method used to discover structure and group items into clusters based on similarity.

James, G., Witten, D., Hastie, T., & Tibshirani, R. (2021). An Introduction to Statistical Learning: with Applications in R (Second Edition). Springer.

Chapter 2, Section 2.1.2 "Supervised and Unsupervised Learning": This section distinguishes the two main types. It states that supervised learning involves building a model for predicting an output based on inputs, further breaking this down into regression problems (predicting a quantitative or numeric output) and classification problems (predicting a qualitative or categorical output).

Chapter 12, Section 12.1 "Unsupervised Learning": This section defines unsupervised learning as a setting with only feature measurements (X) and no response variable (Y). The goal is described as finding interesting patterns or groups, which directly relates to clustering.

Ng, A. (2023). "Course Notes: CS229 - Machine Learning." Stanford University.

Section: "Part I: Supervised Learning"

Quote/Paraphrase: The notes define supervised learning as the task of learning a function that maps inputs to outputs given a set of input-output pairs. It specifies that if the target output (label) is continuous (e.g., price), the task is regression. If the target output is discrete (e.g., 'cat' or 'dog'), the task is classification.

Section: "Part IX: Unsupervised Learning"

Quote/Paraphrase: This section describes unsupervised learning as the process of finding structure in unlabeled data. The canonical example provided is clustering, such as grouping news articles by topic or customers by preferences.

Question 10

HOTSPOT: Select the answer that correctly completes the sentence.

Correct Answer:

FEATURES

Explanation

In the context of machine learning, features are defined as the independent variables or attributes that serve as the inputs to a model. These are the measurable properties or characteristics of the data (e.g., the square footage of a house, the pixel values of an image) that the model uses to make a prediction.

Conversely, a label is the output or target variable (e.g., the price of the house, the object in the image) that the model learns to predict. An instance is a single row of data, which typically includes both its features and (if supervised) its label.
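A minimal sketch of this split, with invented column names and values:

```python
# Sketch: how a training row decomposes into features (inputs) and a label
# (the value to predict). Column names and values are made up for illustration.

columns = ["square_feet", "bedrooms", "age_years", "price"]  # last column = label
rows = [
    [1400, 3, 20, 245000],
    [1900, 4, 5, 389000],
]

def split_features_and_label(row):
    """Features are the inputs; the label is the target the model predicts."""
    return row[:-1], row[-1]

features, label = split_features_and_label(rows[0])
print("features:", features)  # [1400, 3, 20]
print("label:", label)        # 245000
```

Each `row` here is one instance: its first three columns are the features the model consumes, and `price` is the label it learns to predict.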

References

Microsoft Azure Documentation. (n.d.). What is automated machine learning (AutoML)? Azure Machine Learning. Retrieved October 24, 2025.

Reference: In the "Features and labels" section, the documentation states: "A feature is a data column that is used as an input for your model... A label is the data column that you want to predict."

Ng, A. (n.d.). CS229 Machine Learning Course Notes: Supervised Learning. Stanford University.

Reference: Section 1.1, "Supervised Learning," defines the training set as (x, y) pairs, stating: "We call $x^{(i)}$ the input variables (or features) and $y^{(i)}$ the output variables (or labels)."

Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.

Reference: Chapter 1, Section 1.1 (pp. 2-3), introduces the input vector $\mathbf{x}$, whose components are referred to as features. This input vector is used to predict the target variable $t$ (the label).

Guyon, I., & Elisseeff, A. (2003). An introduction to variable and feature selection. Journal of Machine Learning Research, 3(Mar), 1157-1182.

Reference: The abstract defines features: "The variables collected from the field, which are used as inputs to a predictor, are referred to as 'features'."

Total Questions: 316
Last Update Check: October 16, 2025