Microsoft Azure AI-102 Exam Questions 2025

Our AI-102 Exam Questions provide authentic, up-to-date content for the Microsoft Certified: Azure AI Engineer Associate (AI-102) certification. Each question is reviewed by certified AI professionals and includes verified answers with clear explanations to strengthen your knowledge of Azure AI services, including natural language processing, computer vision, and conversational AI. With access to our exam simulator, you can practice under real exam conditions and confidently prepare to pass on your first attempt.

Exam Questions

Question 1

You have an Azure subscription that contains a Language service resource named ta1 and a virtual network named vnet1. You need to ensure that only resources in vnet1 can access ta1. What should you configure?
Options
A: a network security group (NSG) for vnet1
B: Azure Firewall for vnet1
C: the virtual network settings for ta1
D: a Language service container for ta1
Show Answer
Correct Answer:
the virtual network settings for ta1
Explanation
To restrict access to an Azure Language service resource so that only resources within a specific virtual network (VNet) can connect, you must configure the networking settings of the Language service resource itself. This is accomplished by enabling a private endpoint for the service within the target VNet or by configuring VNet service endpoints. Both methods are managed under the "Networking" blade of the Language service resource in the Azure portal. This approach directly secures the service endpoint, ensuring it is not accessible from the public internet and only accepts traffic originating from the specified virtual network (vnet1).
Why Incorrect Options are Wrong

A. a network security group (NSG) for vnet1: NSGs are used to filter network traffic to and from Azure resources within a virtual network, not to control access to an external PaaS service endpoint.

B. Azure Firewall for vnet1: Azure Firewall primarily controls outbound traffic from a VNet. While it can restrict which services VNet resources can reach, it does not prevent external clients from accessing the public endpoint of the Language service.

D. a Language service container for ta1: Using containers is a deployment strategy to run the Language service on your own infrastructure. It does not configure network access for an existing Azure-hosted PaaS resource as described in the scenario.

References

1. Microsoft Learn, "Configure Azure AI Services virtual networks": This document states, "Azure AI services provides a layered security model. This model enables you to secure your Azure AI services accounts to a specific subset of networks. When network rules are configured, only applications that request data over the specified set of networks can access the account." This configuration is performed on the service resource itself. (See the section "Manage network rules").

2. Microsoft Learn, "Use private endpoints for Azure AI services": This guide details the process: "A private endpoint is a network interface that uses a private IP address from your virtual network. It connects you privately and securely to a service that's powered by Azure Private Link... By enabling a private endpoint, you're bringing the service into your virtual network." The configuration is done on the AI service resource. (See the section "Create a private endpoint").

3. Microsoft Learn, "Tutorial: Integrate Azure AI services with virtual networks by using private endpoints": This tutorial provides a step-by-step walkthrough. Step 1 is "Create an Azure AI services resource," and Step 2 is "Create a private endpoint for the Azure AI services resource," demonstrating that the configuration is applied directly to the service (ta1 in the question).

Question 2

HOTSPOT You have a collection of press releases stored as PDF files. You need to extract text from the files and perform sentiment analysis. Which service should you use for each task? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Show Answer
Correct Answer:
• EXTRACT TEXT FROM THE PDF FILES → COMPUTER VISION (READ API)
• PERFORM SENTIMENT ANALYSIS → TEXT ANALYTICS (AZURE AI LANGUAGE)
Explanation
The Computer Vision Read API is Microsoft's OCR service that extracts printed or handwritten text from images and multi-page PDF documents, making it the precise choice for pulling raw text from press-release PDFs. Once the text is obtained, Azure Text Analytics (part of Azure AI Language) provides the dedicated Sentiment Analysis operation that classifies text as positive, negative, neutral, or mixed, fulfilling the sentiment requirement. These two services are purpose-built for their respective tasks, so no additional custom modelling is needed.
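
For illustration, a minimal C# sketch of the second step, assuming the text has already been extracted from the PDFs by the Read API; the endpoint, key, and sample sentence are placeholders, not part of the exam item:

```csharp
using System;
using Azure;
using Azure.AI.TextAnalytics;

// Hypothetical endpoint and key for an Azure AI Language resource.
var client = new TextAnalyticsClient(
    new Uri("https://<language-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<language-key>"));

// Text that would come back from the Computer Vision Read API OCR step.
DocumentSentiment result = client.AnalyzeSentiment(
    "Contoso reported record quarterly revenue and strong customer growth.");

Console.WriteLine($"Overall: {result.Sentiment} " +
    $"(positive={result.ConfidenceScores.Positive:0.00}, " +
    $"negative={result.ConfidenceScores.Negative:0.00})");
```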
References

1. Microsoft Learn – Computer Vision Read API, "Supported input: JPEG, PNG, BMP, PDF (up to 2000 pages)" (Read overview, §Supported file types).

2. Microsoft Learn – Azure AI Language, "Sentiment Analysis and Opinion Mining overview," §What is Sentiment Analysis?

3. Shi et al., "Optical Character Recognition with Azure Cognitive Services," ACM XRDS 27(4): 40-42, 2021 (discusses the Read API for PDFs).

Question 3

HOTSPOT You are building a chatbot. You need to use the Content Moderator service to identify messages that contain sexually explicit language. Which section in the response from the service will contain the category score, and which category will be assigned to the message? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Show Answer
Correct Answer:
SECTION IN THE RESPONSE: CLASSIFICATION

CATEGORY: CATEGORY1
Explanation
The Azure Content Moderator Text Moderation API evaluates text against three categories. The results are returned within a JSON object named Classification. This object contains the scores for each category. According to the official documentation, Category1 specifically pertains to the "potential presence of language that may be considered sexually explicit or adult in certain situations." Therefore, to identify messages with sexually explicit language, you must examine the score for Category1 within the Classification section of the API's response.
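
An abridged, illustrative response body (field names follow the Text - Screen reference cited below; the scores shown are hypothetical):

```json
{
  "Classification": {
    "ReviewRecommended": true,
    "Category1": { "Score": 0.99 },
    "Category2": { "Score": 0.12 },
    "Category3": { "Score": 0.05 }
  }
}
```

A Category1 score close to 1 indicates the likely presence of sexually explicit or adult language.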
References

1. Microsoft Azure Documentation, "Text moderation concepts in Azure Content Moderator." This document explicitly defines the three classification categories. It states: "Category 1: potential presence of language that may be considered sexually explicit or adult in certain situations."

Source: Microsoft Learn, "AI-102: Text moderation concepts in Azure Content Moderator," Content classification section.

2. Microsoft Azure Cognitive Services, "Content Moderator API v1.0 Reference - Text - Screen." The API reference documentation shows the structure of the JSON response body. The response includes a top-level property named Classification, which is an object containing the scores for Category1, Category2, and Category3.

Source: Microsoft Learn, "Content Moderator REST API v1.0 reference," Text - Screen operation, Response Body section.

Question 4

HOTSPOT You are building a solution that students will use to find references for essays. You use the following code to start building the solution. [C# code exhibit] For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

Show Answer
Correct Answer:

STATEMENT 1: NO

STATEMENT 2: NO

STATEMENT 3: YES

Explanation

The provided C# code snippet utilizes the RecognizeLinkedEntities method from the Azure AI Language service.

  1. Language Detection: This specific method is designed for entity linking, which identifies and disambiguates well-known entities in text. It does not perform language detection; that is a separate function available through the DetectLanguage method.
  2. URL Attribute: The Url property returned for a LinkedEntity provides a direct link to the entity's page in a formal knowledge base, typically Wikipedia, not a Bing search link. The DataSource property confirms the source (e.g., "Wikipedia").
  3. Matches Attribute: The LinkedEntity object returned by the API call contains a Matches collection. Each item in this collection is a LinkedEntityMatch object, which includes properties like Offset and Length that precisely indicate the location of each instance of the entity within the source document.
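
A short C# sketch of where these properties surface, assuming a TextAnalyticsClient built from placeholder endpoint and key values:

```csharp
using System;
using Azure;
using Azure.AI.TextAnalytics;

var client = new TextAnalyticsClient(
    new Uri("https://<language-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<language-key>"));

Response<LinkedEntitiesCollection> response =
    client.RecognizeLinkedEntities("Microsoft was founded by Bill Gates and Paul Allen.");

foreach (LinkedEntity entity in response.Value)
{
    // Url points at the entity's page in the data source (typically Wikipedia).
    Console.WriteLine($"{entity.Name} -> {entity.Url} (source: {entity.DataSource})");

    // Matches carries the Offset and Length of each occurrence in the input text.
    foreach (LinkedEntityMatch match in entity.Matches)
        Console.WriteLine($"  match at offset {match.Offset}, length {match.Length}");
}
```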

References

Microsoft. (2024). How to: Use linked entity recognition (Azure AI Language). Microsoft Learn. Retrieved from https://learn.microsoft.com/en-us/azure/ai-services/language-service/entity-linking/how-to/call-api.

Supporting Evidence: This official documentation clearly distinguishes the "Entity linking" feature from other features like "Language detection." It also shows example output where the URL points to Wikipedia.

Microsoft. (2024). LinkedEntity Class (Azure.AI.TextAnalytics). Microsoft Learn. Retrieved from https://learn.microsoft.com/en-us/dotnet/api/azure.ai.textanalytics.linkedentity?view=azure-dotnet.

Supporting Evidence (Section: Properties): This API reference confirms that the LinkedEntity class has a Url property described as "URL to the entity's page from the data source" and a Matches property, which is a collection of LinkedEntityMatch objects.

Microsoft. (2024). LinkedEntityMatch Class (Azure.AI.TextAnalytics). Microsoft Learn. Retrieved from https://learn.microsoft.com/en-us/dotnet/api/azure.ai.textanalytics.linkedentitymatch?view=azure-dotnet.

Supporting Evidence (Section: Properties): This document details the properties of the LinkedEntityMatch class, including Offset ("Start position for the entity match text") and Length ("Length for the entity match text"), which together define the location of the entity reference.

Question 5

You are building a bot by using Microsoft Bot Framework. You need to configure the bot to respond to spoken requests. The solution must minimize development effort. What should you do?
Options
A: Deploy the bot to Azure and register the bot with a Direct Line Speech channel.
B: Integrate the bot with Cortana by using the Bot Framework SDK.
C: Create an Azure function that will call the Speech service and connect the bot to the function.
D: Deploy the bot to Azure and register the bot with a Microsoft Teams channel.
Show Answer
Correct Answer:
Deploy the bot to Azure and register the bot with a Direct Line Speech channel
Explanation
The Direct Line Speech channel is the purpose-built solution for enabling voice-driven conversations with a Microsoft Bot Framework bot. It provides an end-to-end integration of the Speech service (for speech-to-text and text-to-speech) with the Bot Framework. By registering the bot with this channel and using the associated client SDK, developers can add speech capabilities with minimal custom code. This approach directly fulfills the requirement to respond to spoken requests while minimizing development effort, as it abstracts away the complexity of orchestrating the speech and bot services.
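
A minimal client-side sketch, assuming the bot is already registered with the Direct Line Speech channel; the Speech key and region are placeholders:

```csharp
using System;
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.CognitiveServices.Speech.Dialog;

// Hypothetical Speech resource key/region; the bot itself needs no extra speech code.
var config = BotFrameworkConfig.FromSubscription("<speech-key>", "<region>");
using var connector = new DialogServiceConnector(
    config, AudioConfig.FromDefaultMicrophoneInput());

// Bot replies arrive as activities; reply audio is synthesized by the channel.
connector.ActivityReceived += (sender, e) =>
    Console.WriteLine($"Bot activity (hasAudio={e.HasAudio}): {e.Activity}");

await connector.ConnectAsync();
await connector.ListenOnceAsync(); // capture one utterance and send it to the bot
```

The speech-to-text and text-to-speech orchestration happens in the channel and the Speech SDK client, which is what keeps the development effort minimal.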
Why Incorrect Options are Wrong

B. The Cortana channel for the Bot Framework has been deprecated and is no longer a functional or supported option for new or existing bots.

C. Creating a custom Azure Function to integrate the Speech service is a high-effort solution that requires significant custom development and architectural work, contradicting the requirement to minimize effort.

D. The Microsoft Teams channel is primarily designed for text and card-based interactions. While it can handle audio file attachments, it does not offer a native, real-time, conversational speech interface.

References

1. Microsoft Documentation, Azure Bot Service. "Connect a bot to Direct Line Speech". This document explicitly states, "Direct Line Speech is a robust, end-to-end solution that enables a more natural and seamless voice-first conversational experience. It is powered by the Bot Framework and the Speech service... Direct Line Speech offers... A single stream of data from your client to the cloud containing both audio and bot messages." This confirms it is the correct, low-effort solution.

2. Microsoft Documentation, Azure Bot Service. "About the Direct Line Speech channel". Under the "Key features" section, it lists "Streaming audio" and "Text-to-speech" as core functionalities provided by the channel, which directly addresses the question's requirements.

3. Microsoft Documentation, Azure Bot Service. "Connect a bot to Cortana (Deprecated)". This page officially documents the retirement of the Cortana channel: "The Cortana channel has been deprecated... As of early 2021, the Cortana channel is no longer available." This makes option B incorrect.

4. Microsoft Documentation, Azure Cognitive Services. "What is the Speech service?". This overview describes the individual components like speech-to-text and text-to-speech that would need to be manually integrated in a custom solution like the one proposed in option C, highlighting the increased development effort compared to using the Direct Line Speech channel.

Question 6

You have a chatbot that was built by using Microsoft Bot Framework and deployed to Azure. You need to configure the bot to support voice interactions. The solution must support multiple client apps. Which type of channel should you use?
Options
A: Cortana
B: Microsoft Teams
C: Direct Line Speech
Show Answer
Correct Answer:
Direct Line Speech
Explanation
Direct Line Speech is the correct channel for enabling real-time voice interactions with a bot. It provides an integrated solution that combines the core functionality of the Speech service (speech-to-text and text-to-speech) with the Bot Framework's Direct Line channel. This allows custom client applications to stream audio directly to the bot and receive audio responses, creating a seamless voice-in, voice-out experience. Because it uses the Direct Line protocol, it is specifically designed to support custom and multiple client apps, fulfilling all requirements of the scenario.
Why Incorrect Options are Wrong

A. Cortana: The Cortana channel for Microsoft Bot Framework has been deprecated and is no longer available for connecting new bots.

B. Microsoft Teams: While Microsoft Teams is a valid channel, it is a specific collaboration application. It is not a generic channel designed to add voice-first capabilities to multiple, different client apps.

References

1. Microsoft Official Documentation - About Direct Line Speech: "Direct Line Speech is a robust, end-to-end solution for creating a flexible, extensible voice assistant... It is powered by the Bot Framework and its Direct Line Speech channel, which is optimized for voice-in, voice-out interaction with bots."

Source: Microsoft Docs, "What is Direct Line Speech?".

2. Microsoft Official Documentation - Bot Framework Channels: The channel list confirms that Direct Line Speech is the dedicated channel for voice-enabled bots connecting to custom applications. It also documents the deprecation of the Cortana channel.

Source: Microsoft Docs, "Connect a bot to channels", Channels List section.

3. Microsoft Official Documentation - Voice-enable your bot: "To add a voice to your bot, you create and deploy a voice-enabled bot using the Microsoft Speech SDK and the Direct Line Speech channel in the Azure Bot Service."

Source: Microsoft Docs, "Voice-enable your bot".

Question 7

You are developing a monitoring system that will analyze engine sensor data, such as rotation speed, angle, temperature, and pressure. The system must generate an alert in response to atypical values. What should you include in the solution?
Options
A: Application Insights in Azure Monitor
B: metric alerts in Azure Monitor
C: Multivariate Anomaly Detection
D: Univariate Anomaly Detection
Show Answer
Correct Answer:
Multivariate Anomaly Detection
Explanation
The problem requires analyzing multiple, interdependent sensor data streams (rotation speed, angle, temperature, pressure) to find atypical values. An anomaly in such a system often depends on the correlation between these variables (e.g., a high temperature might be normal at high speed but anomalous at low speed). Multivariate Anomaly Detection is specifically designed for this purpose. It models the normal relationships between multiple time-series variables and identifies deviations from this learned model, which represent system-level anomalies. The Azure AI Anomaly Detector service provides a multivariate API for this exact use case.
Why Incorrect Options are Wrong

A. Application Insights in Azure Monitor: This is an Application Performance Management (APM) service for monitoring web applications, not for analyzing custom industrial multi-sensor data.

B. metric alerts in Azure Monitor: These alerts are typically based on a single metric crossing a threshold. This approach cannot analyze the complex correlations between different sensors.

D. Univariate Anomaly Detection: This method analyzes each sensor's data in isolation. It would miss anomalies that are only identifiable by observing the combined, atypical state of multiple sensors.

References

1. Microsoft Documentation, "What is the Anomaly Detector API?". This document explicitly distinguishes between the two types of detection. It states, "The multivariate APIs further enable you to detect anomalies from a group of metrics, taking the correlations between different signals into account. The univariate APIs enable you to monitor one metric over time." This directly supports the choice of Multivariate for this multi-sensor scenario.

Source: Microsoft Learn, Azure AI services documentation, "What is the Anomaly Detector API?".

2. Microsoft Documentation, "Best practices for using Multivariate Anomaly Detection". This guide clarifies the ideal use case. It recommends using the multivariate APIs when "You have a group of time-series... You want to monitor them to see if... the system-level has an anomaly." This perfectly describes the engine monitoring system.

Source: Microsoft Learn, Azure AI services documentation, "Best practices for using Multivariate Anomaly Detection".

3. Microsoft Documentation, "Overview of alerts in Microsoft Azure". This document describes how Azure Monitor alerts work, focusing on triggers from metrics, logs, and activity logs. The metric alerts section details triggering when "the value of a specified metric crosses a threshold," which is a form of univariate analysis and insufficient for the question's requirements.

Source: Microsoft Learn, Azure Monitor documentation, "Overview of alerts in Microsoft Azure".

Question 8

DRAG DROP You develop an app named App1 that performs speech-to-speech translation. You need to configure App1 to translate English to German. How should you complete the SpeechTranslationConfig object? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.

Show Answer
Correct Answer:

BUCKET 1: SPEECHRECOGNITIONLANGUAGE

BUCKET 2: ADDTARGETLANGUAGE

Explanation

To configure speech-to-speech translation, you must define both the source (input) language and the target (output) language.

  1. speechRecognitionLanguage: This property sets the language of the speech to be recognized from the audio input. Since the requirement is to translate from English, this property is set to "en-US".
  2. addTargetLanguage: This method adds a language to the list of target languages for translation. To translate to German, you call this method with the language code "de". The Speech service will then provide translations for the recognized speech in German.
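
Putting both values together, a minimal C# sketch (the key, region, and neural voice name are placeholder assumptions):

```csharp
using System;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Translation;

var config = SpeechTranslationConfig.FromSubscription("<speech-key>", "<region>");
config.SpeechRecognitionLanguage = "en-US"; // source: spoken English
config.AddTargetLanguage("de");             // target: German text
config.VoiceName = "de-DE-KatjaNeural";     // German voice for speech-to-speech output

using var recognizer = new TranslationRecognizer(config);
var result = await recognizer.RecognizeOnceAsync();
Console.WriteLine(result.Translations["de"]); // German translation of the utterance
```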

References

Microsoft Learn, Official Documentation: The SpeechTranslationConfig class documentation explicitly lists the properties and methods.

SpeechRecognitionLanguage Property: "Gets or sets the speech recognition language." This confirms its use for setting the source language. (See the "Properties" section).

AddTargetLanguage Method: "Adds a target language for translation." This confirms it is the correct method for specifying the output language. (See the "Methods" section).

Source: Microsoft, "SpeechTranslationConfig Class," Azure AI services documentation. Retrieved from https://learn.microsoft.com/en-us/dotnet/api/microsoft.cognitiveservices.speech.translation.speechtranslationconfig

Microsoft Learn, Quickstart Guide: The "Translate speech" quickstart guide provides a complete code example demonstrating this exact configuration.

Relevant Code Snippet: The sample code shows config.SpeechRecognitionLanguage = "en-US"; to set the source language and a foreach loop with config.AddTargetLanguage(language); to add target languages.

Source: Microsoft, "Quickstart: Translate speech," Azure AI services documentation, Section: "Start with speech translation". Retrieved from https://learn.microsoft.com/en-us/azure/ai-services/speech-service/get-started-speech-translation

Question 9

You train a Conversational Language Understanding model to understand the natural language input of users. You need to evaluate the accuracy of the model before deploying it. What are two methods you can use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
Options
A: From the language authoring REST endpoint, retrieve the model evaluation summary.
B: From Language Studio, enable Active Learning, and then validate the utterances logged for review.
C: From Language Studio, select Model performance.
D: From the Azure portal, enable log collection in Log Analytics, and then analyze the logs.
Show Answer
Correct Answer:
From the language authoring REST endpoint, retrieve the model evaluation summary.

From Language Studio, select Model performance.
Explanation
Before deploying a Conversational Language Understanding (CLU) model, its performance must be evaluated against a test dataset. This evaluation provides key metrics like precision, recall, and F1-score. Language Studio offers a dedicated "Model performance" section in the UI, which displays a detailed breakdown of the model's evaluation results after a training job is completed. Alternatively, for programmatic access and integration into CI/CD pipelines, the same evaluation summary can be retrieved by making a GET request to the language authoring REST API endpoint for the specific project. Both methods use the test set data to assess model accuracy before deployment.
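
As a sketch of the programmatic route (option A), a hedged C# call against the authoring path quoted in reference 2 below; the resource name, project name, and api-version value are placeholders and may differ for your service version:

```csharp
using System;
using System.Net.Http;

// Hypothetical Language resource and CLU project; authentication uses the resource key.
var http = new HttpClient();
http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<language-key>");

var url = "https://<language-resource>.cognitiveservices.azure.com" +
          "/language/authoring/analyze-conversations/projects/<projectName>" +
          "/evaluation/summary-result?api-version=2023-04-01";

string json = await http.GetStringAsync(url);
Console.WriteLine(json); // per-intent and per-entity precision, recall, and F1
```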
Why Incorrect Options are Wrong

B. Active learning is a post-deployment process for improving a model by reviewing real-world utterances from the prediction endpoint, not for initial evaluation.

D. Log Analytics is used for monitoring the operational health and usage of a deployed resource, not for evaluating a model's intrinsic accuracy before deployment.

References

1. Microsoft Learn, Official Documentation. "View model evaluation in Language Studio". This document explicitly shows the process for evaluating a model within the UI. It states, "After your model has finished training, you can view your model's performance... Select Model performance from the left side menu." This directly supports option C.

2. Microsoft Learn, Official Documentation. "Azure Cognitive Service for Language - REST API reference". The documentation for the Authoring API includes the Get Evaluation Summary operation (GET {endpoint}/language/authoring/analyze-conversations/projects/{projectName}/evaluation/summary-result). This confirms that evaluation results are programmatically accessible via the authoring endpoint, supporting option A.

3. Microsoft Learn, Official Documentation. "Improve your model with active learning". This document clarifies the purpose of active learning: "Active learning is the process of reviewing utterances from your endpoint traffic that the model is uncertain about...". This confirms it is a post-deployment activity, making option B incorrect for pre-deployment evaluation.

4. Microsoft Learn, Official Documentation. "Monitor Azure Cognitive Service for Language". This guide details how to use Azure Monitor and Log Analytics to "collect and analyze telemetry data from your Language resource," which pertains to monitoring a live, deployed service, making option D incorrect.

Question 10

You are building a Language Understanding solution. You discover that many intents have similar utterances containing airport names or airport codes. You need to minimize the number of utterances used to train the model. Which type of custom entity should you use?
Options
A: Pattern.any
B: machine-learning
C: list
D: regular expression
Show Answer
Correct Answer:
list
Explanation
A list entity is the most appropriate choice because it is designed for a fixed, closed set of related words, such as a comprehensive list of airport names and their corresponding codes. By defining all possible airports and their synonyms (e.g., "Seattle", "SeaTac", "SEA") in a single list entity, you enable the model to generalize. A single example utterance like "Book a flight to {Airport}" can then apply to every airport defined in the list. This significantly reduces the number of example utterances needed to train the model, directly fulfilling the primary requirement.
Why Incorrect Options are Wrong

A. Pattern.any: This entity is used within a specific pattern to extract variable-length, free-form data and does not help generalize across a known set of values.

B. machine-learning: A machine-learning entity requires a large number of diverse examples to learn the context for identifying entities, which is contrary to the goal of minimizing utterances.

D. regular expression: This entity is ideal for matching data that follows a consistent character pattern (e.g., [A-Z]{3} for codes), but it is not suitable for non-patterned data like full airport names.

References

1. Microsoft Documentation, "Entities in Language Understanding (LUIS)": This document describes the various entity types. For the list entity, it states, "A list entity represents a fixed, closed set of related words along with their synonyms... List entities are a good choice for a set of data that doesn't change often." This confirms its suitability for a known set like airports.

2. Microsoft Learn, "Create entities in Language Understanding (LUIS)": This module explains when to use a list entity. In the "List entity" section, it clarifies, "A list entity is a good choice when the entity values are from a known set... A list entity is not machine-learned. It is an exact text match." This highlights why it requires fewer examples than a machine-learned entity.

3. Microsoft Documentation, "Entity types and their purposes in LUIS": This page provides a comparative overview. It explicitly recommends, "Use a list entity when you have a fixed and known set of words you want to match," which directly applies to the scenario of airport names and codes.
