Prepare Smarter for the AZ-305 Exam with Our Free and Accurate AZ-305 Exam Questions - 2025 Updated
At Cert Empire, we are committed to delivering the latest and most reliable exam questions for students preparing for the Microsoft AZ-305 Exam. To make preparation easier, we've made parts of our AZ-305 exam resources completely free. You can practice as much as you want with our free AZ-305 practice test.
Question 1
Show Answer
A. Yes: This is incorrect because Azure Load Balancer is a regional Layer 4 service and lacks the required global routing, regional failover, and native rate-limiting capabilities.
1. Azure Architecture Center - Load-balancing options. This document explicitly states, "For global routing, we recommend Azure Front Door." It also categorizes Azure Load Balancer as a Regional load balancer, contrasting it with Global options like Front Door and Traffic Manager, which are necessary for regional outage scenarios.
Source: Microsoft Learn, Azure Architecture Center. (2023). Load-balancing options. Section: "Azure load-balancing services".
2. Azure Load Balancer overview. This documentation confirms that "Azure Load Balancer operates at layer 4 of the Open Systems Interconnection (OSI) model" and is a regional resource, which means it cannot route traffic between regions.
Source: Microsoft Learn. (2023). What is Azure Load Balancer?. Section: "Introduction".
3. Web Application Firewall (WAF) rate limiting. This document details how rate limiting is a feature of Azure Application Gateway WAF and Azure Front Door, not Azure Load Balancer. It states, "Rate limiting allows you to detect and block abnormally high levels of traffic from any client IP address."
Source: Microsoft Learn. (2023). Rate limiting on Azure Application Gateway. Section: "Overview".
Question 2
Show Answer
A. Azure Data Lake is a scalable data storage and analytics service. It is designed for big data workloads, not for real-time, transactional messaging between services.
B. Azure Notification Hubs is a massively scalable mobile push notification engine. Its purpose is to send notifications to client applications on various platforms, not for backend service-to-service communication.
D. Azure Service Fabric is a distributed systems platform for building and deploying microservices. While you could build a messaging system on it, it is not the messaging service itself.
1. Microsoft Documentation, "What is Azure Queue Storage?": "Azure Queue Storage is a service for storing large numbers of messages. You access messages from anywhere in the world via authenticated calls using HTTP or HTTPS. A queue message can be up to 64 KB in size. A queue may contain millions of messages, up to the total capacity limit of a storage account. Queues are commonly used to create a backlog of work to process asynchronously."
Source: Microsoft Docs, Azure Storage Documentation, Queues.
2. Microsoft Documentation, "Storage queues and Service Bus queues - compared and contrasted": "Azure Queue Storage... provides a simple REST-based Get/Put/Peek interface, providing reliable, persistent messaging within and between services... Use Queue storage when you need to store over 80 gigabytes of messages in a queue [and] you want a simple, easy to use queue." This document highlights its use for decoupling application components for increased scalability and reliability.
Source: Microsoft Docs, Azure Architecture Center, Application integration.
3. Microsoft Documentation, "What is Azure Notification Hubs?": "Azure Notification Hubs provide an easy-to-use and scaled-out push engine that allows you to send notifications to any platform (iOS, Android, Windows, etc.) from any back-end (cloud or on-premises)."
Source: Microsoft Docs, Azure Notification Hubs Documentation, Overview.
4. Microsoft Documentation, "Introduction to Azure Data Lake Storage Gen2": "Azure Data Lake Storage Gen2 is a set of capabilities dedicated to big data analytics, built on Azure Blob Storage."
Source: Microsoft Docs, Azure Storage Documentation, Data Lake Storage.
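To make the decoupled messaging pattern above concrete, here is a minimal sketch using the azure-storage-queue Python SDK. The connection string and queue name are placeholder assumptions, not values from the question.

```python
# pip install azure-storage-queue
from azure.storage.queue import QueueClient

# Placeholder connection string and queue name (assumptions for this sketch).
conn_str = "<storage-account-connection-string>"
queue = QueueClient.from_connection_string(conn_str, queue_name="work-items")

# Producer service: enqueue a unit of work (each message body can be up to 64 KB).
queue.send_message("resize-image:blob-12345")

# Consumer service: dequeue, process, then delete to acknowledge completion.
for message in queue.receive_messages(max_messages=10):
    print("processing", message.content)
    queue.delete_message(message)
```

Because the producer and consumer only share the queue, either side can scale or fail independently, which is the decoupling benefit the references describe.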
Question 3
Sub1 contains an Azure App Service web app named App1. App1 uses Azure AD for single-tenant user authentication. Users from contoso.com can authenticate to App1.
You need to recommend a solution to enable users in the fabrikam.com tenant to authenticate to
App1.
What should you recommend?
Show Answer
A. The Azure AD provisioning service automates creating and managing user identities in other applications; it does not configure an application's authentication audience.
C. Azure AD Privileged Identity Management (PIM) is used to manage, control, and monitor access to privileged roles, not to enable standard cross-tenant user authentication.
D. Azure AD pass-through authentication is a sign-in method for hybrid identity that validates user passwords against an on-premises Active Directory; it is not relevant for cross-tenant authentication.
1. Microsoft Documentation: How to: Sign in any Azure Active Directory user using the multi-tenant application pattern.
Reference: In the section "Update the registration to be multi-tenant," the document states: "If you have an existing application and you want to make it multi-tenant, you need to open the application registration in the Azure portal and update Supported account types to Accounts in any organizational directory." This directly supports the chosen answer.
2. Microsoft Documentation: Quickstart: Register an application with the Microsoft identity platform.
Reference: In the "Register an application" section, step 4, "Supported account types," explicitly defines the option "Accounts in any organizational directory (Any Azure AD directory - Multitenant)" as the method to allow users with a work or school account from any organization to sign into the application.
3. Microsoft Documentation: Tenancy in Azure Active Directory.
Reference: The "App-level considerations" section explains the difference between single-tenant and multi-tenant applications. It clarifies that a multi-tenant application is "available to users in both its home tenant and other tenants." This conceptual document underpins the need to change the application's tenancy model to meet the requirement.
Question 4
Show Answer
B. Azure SQL Database Hyperscale: While it supports zone redundancy, this tier is designed for very large databases (VLDBs) and is not the most cost-effective option for general high-availability scenarios.
C. Azure SQL Database Basic: This tier does not support zone-redundant configurations and cannot meet the requirement to remain available during a zone outage.
D. Azure SQL Managed Instance Business Critical: This option meets the availability and data-loss requirements but is generally more expensive than Azure SQL Database Premium, failing the cost-minimization constraint.
1. Microsoft Documentation, "High availability for Azure SQL Database and SQL Managed Instance": Under the "Zone-redundant availability" section, it states, "Zone-redundant configuration is available for databases in the... Premium, Business Critical, and Hyperscale service tiers... When you provision a database or an elastic pool with zone redundancy, Azure SQL creates multiple synchronous secondary replicas in other availability zones." This confirms that Premium meets the zone outage and no data loss requirements.
2. Microsoft Documentation, "vCore purchasing model - Azure SQL Database": The "Premium service tier" section describes it as being designed for "I/O-intensive workloads that require high availability and low-latency I/O." The documentation confirms that zone redundancy is a configurable option for this tier.
3. Microsoft Documentation, "Service Tiers in the DTU-based purchase model": This document shows that the Basic tier has a "Basic availability" model with a single database file and is not designed for high availability or zone redundancy.
4. Microsoft Documentation, "Compare the vCore and DTU-based purchasing models of Azure SQL Database": This page highlights that the Premium tier (in both models) is designed for high performance and high availability, whereas Managed Instance is for "lift-and-shift of the largest number of SQL Server applications to the cloud with minimal changes," which often comes at a higher price point.
Question 5
DRAG DROP You have an on-premises application named App1. Customers use App1 to manage digital images. You plan to migrate App1 to Azure. You need to recommend a data storage solution for App1. The solution must meet the following image storage requirements: Encrypt images at rest. Allow files up to 50 MB.
Show Answer
IMAGE STORAGE: AZURE BLOB STORAGE
CUSTOMER ACCOUNTS: AZURE SQL DATABASE
Azure Blob storage is the optimal choice for image storage. It's specifically designed to store massive amounts of unstructured data, such as images, videos, and documents. It easily accommodates files up to 50 MB and provides server-side encryption by default, satisfying both requirements. Storing large binary files directly in a database is generally inefficient and not recommended.
Azure SQL Database is the most suitable service for customer accounts. Customer account data is typically structured and relational (e.g., user ID, name, email, password). As a fully managed relational database-as-a-service, Azure SQL Database provides transactional consistency, data integrity, and robust querying capabilities, which are essential for managing user account information effectively.
Azure Blob Storage Documentation: Microsoft's official documentation states that Azure Blob storage is optimized for storing massive amounts of unstructured data. Common use cases include "Serving images or documents directly to a browser" and "Storing files for distributed access."
Source: Microsoft Docs, "Introduction to Azure Blob storage," Use cases section.
Azure SQL Database Documentation: The official documentation describes Azure SQL Database as a fully managed relational database service built for the cloud. It is ideal for applications that require a relational data model with transactional consistency and data integrity, making it a standard choice for storing structured data like user profiles and customer accounts.
Source: Microsoft Docs, "What is Azure SQL Database?," Overview section.
Comparison of Azure Storage Options: Microsoft's "Choose a data storage approach in Azure" guide recommends Blob storage for "images, videos, documents...large binary objects" and relational databases like Azure SQL Database for "transactional data" and data requiring a "high degree of integrity," such as customer information.
Source: Microsoft Azure Architecture Center, "Choose a data storage approach in Azure," Relational databases and Blob storage sections.
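A minimal sketch of the image-storage half of this answer, using the azure-storage-blob Python SDK. The connection string, container, and file names are placeholder assumptions.

```python
# pip install azure-storage-blob
from azure.storage.blob import BlobServiceClient, ContentSettings

# Placeholder connection string and container name (assumptions for this sketch).
service = BlobServiceClient.from_connection_string("<storage-account-connection-string>")
container = service.get_container_client("images")

# Upload a customer image; Storage Service Encryption protects the blob at rest
# by default, and block blobs comfortably handle files far larger than 50 MB.
with open("photo.jpg", "rb") as data:
    container.upload_blob(
        name="customers/1001/photo.jpg",
        data=data,
        overwrite=True,
        content_settings=ContentSettings(content_type="image/jpeg"),
    )
```

The structured customer-account records would live in Azure SQL Database, with only the blob path stored in the relational row.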
Question 6
Show Answer
A. Azure Synapse Analytics is a large-scale data warehousing and big data analytics service, not designed for low-latency transactional application caching.
B. Azure Content Delivery Network (CDN) is used to cache static web content (like images and scripts) at edge locations, not dynamic data from a database.
C. Azure Data Factory is a cloud-based data integration (ETL/ELT) service for orchestrating data movement and transformation, not for real-time application performance improvement.
1. Microsoft Documentation, Azure Cache for Redis. "What is Azure Cache for Redis?". Under the section "Common scenarios," the first listed scenario is "Data cache." It states, "It's a common technique to cache data in-memory... to improve the performance of an application. Caching with Azure Cache for Redis can increase performance by orders of magnitude."
2. Microsoft Documentation, Azure Architecture Center. "Cache-Aside pattern". This document describes the exact pattern for solving the problem in the question: "Load data on demand from a data store into a cache. This can improve performance and also helps to maintain consistency between data held in the cache and data in the underlying data store."
3. Microsoft Documentation, Azure Synapse Analytics. "What is Azure Synapse Analytics?". The overview clearly defines it as "a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics." This is distinct from an application performance cache.
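The cache-aside pattern cited above can be sketched with redis-py against an Azure Cache for Redis endpoint. The host name, access key, and the load_from_database helper are assumptions for illustration only.

```python
# pip install redis
import json
import redis

# Placeholder Azure Cache for Redis endpoint and access key (assumptions).
cache = redis.StrictRedis(
    host="<cache-name>.redis.cache.windows.net",
    port=6380,
    password="<access-key>",
    ssl=True,
)

def load_from_database(product_id: str) -> dict:
    # Hypothetical helper standing in for the real database query.
    return {"id": product_id, "name": "example"}

def get_product(product_id: str) -> dict:
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached:                      # cache hit: skip the database entirely
        return json.loads(cached)
    product = load_from_database(product_id)
    cache.set(key, json.dumps(product), ex=300)  # populate on miss, 5-minute TTL
    return product
```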
Question 7
Show Answer
A. Azure SQL Database Standard: This service tier does not support zone-redundant configurations and cannot meet the requirement for availability during a zone outage.
C. Azure SQL Managed Instance General Purpose: This service tier does not support zone redundancy. Only the Business Critical tier for SQL Managed Instance offers this capability.
D. Azure SQL Database Premium: While this tier supports zone redundancy and ensures no data loss, it is more expensive than the Serverless/General Purpose tier, failing the cost minimization requirement.
1. Microsoft Learn | High availability for Azure SQL Database and SQL Managed Instance: Under the "Zone-redundant availability" section, it states, "Zone-redundant availability is available for databases in the General Purpose, Premium, Business Critical, and Hyperscale service tiers." It also explicitly states, "Zone redundancy for the serverless compute tier of the General Purpose service tier is generally available." This confirms that Serverless (B) and Premium (D) support zone redundancy, while Managed Instance General Purpose (C) does not.
2. Microsoft Learn | vCore purchasing model overview - Azure SQL Database: This document compares the service tiers. The "General Purpose service tier" section describes it as a "budget-oriented" option suitable for "most business workloads." The "Premium service tier" is described as being for "I/O-intensive production workloads." This supports the choice of a General Purpose-based option (Serverless) for cost minimization over Premium.
3. Microsoft Learn | Serverless compute tier for Azure SQL Database: This document details the cost model for Serverless, stating it "bills for the amount of compute used per second." This model is designed to optimize costs, particularly for workloads with intermittent usage patterns, reinforcing its position as the most cost-effective choice among the zone-redundant options.
Question 8
Show Answer
A. SQL Server Migration Assistant (SSMA): SSMA is primarily for assessing and migrating from heterogeneous (non-SQL) database sources like Oracle or DB2 to SQL Server or Azure SQL, not for SQL-to-SQL migrations.
B. Azure Migrate: Azure Migrate is a central hub for discovery, assessment, and migration planning. For the actual database migration execution, it integrates with and uses Azure Database Migration Service (DMS).
C. Data Migration Assistant (DMA): DMA is primarily an assessment tool to identify compatibility issues. While it can perform small-scale migrations, it is not designed for orchestrating the migration of many databases, which would increase administrative effort.
1. Azure Database Migration Service Documentation: "Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline using DMS". This official tutorial explicitly states, "You can use Azure Database Migration Service to migrate the databases from an on-premises SQL Server instance to an Azure SQL Managed Instance." It details the offline migration process using native backups, which is the scenario described.
Source: Microsoft Docs, "Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline using DMS", Prerequisites section.
2. Azure Database Migration Service Overview: "Azure Database Migration Service is a fully managed service designed to enable seamless migrations from multiple database sources to Azure Data platforms with minimal downtime." This highlights its role as a managed, orchestrated service, which aligns with minimizing administrative effort.
Source: Microsoft Docs, "What is Azure Database Migration Service?", Overview section.
3. Data Migration Assistant (DMA) Documentation: "Data Migration Assistant (DMA) helps you upgrade to a modern data platform by detecting compatibility issues that can impact database functionality... After assessing, DMA helps you migrate your schema, data, and uncontained objects from your source server to your target server." This positions DMA as an assessment tool with migration capabilities, but not as the primary orchestration service for large-scale migrations like DMS.
Source: Microsoft Docs, "Overview of Data Migration Assistant", Introduction section.
Question 9
HOTSPOT
-
You have an app that generates 50,000 events daily.
You plan to stream the events to an Azure event hub and use Event Hubs Capture to implement cold path processing of the events. The output of Event Hubs Capture will be consumed by a reporting system.
You need to identify which type of Azure storage must be provisioned to support Event Hubs Capture, and which inbound data format the reporting system must support.
What should you identify? To answer, select the appropriate options in the answer area.

Show Answer
STORAGE TYPE: AZURE DATA LAKE STORAGE GEN2
DATA FORMAT: AVRO
Azure Event Hubs Capture automatically archives streaming data to a user-specified storage container. This feature supports either an Azure Blob Storage or an Azure Data Lake Storage Gen2 account for storing the captured data. Therefore, Azure Data Lake Storage Gen2 is a valid storage type to provision.
The data is always written in the Apache Avro format, which is a compact, fast, binary format that includes the schema inline. Consequently, any downstream reporting system consuming the data from the capture destination must be able to read and process files in the Avro format.
Microsoft Azure Documentation, "Overview of Event Hubs Capture."
Section: Introduction
Content: "Event Hubs Capture enables you to automatically deliver the streaming data in Event Hubs to an Azure Blob storage or Azure Data Lake Storage account of your choice... Captured data is written in Apache Avro format: a compact, fast, binary format that provides rich data structures with inline schema."
Microsoft Azure Documentation, "Capture streaming events using the Azure portal."
Section: Enable Event Hubs Capture
Content: "For Capture provider, select Azure Storage Account... Event Hubs writes the captured data in Apache Avro format." This section details the configuration where the user must select a compatible storage account type.
Question 10
Show Answer
A. storage queues with a custom metadata setting: Azure Storage Queues are designed for high-throughput and do not guarantee FIFO ordering. Custom metadata is for annotating queues and does not influence message processing order.
C. Azure Service Bus queues with partitioning enabled: Partitioning is a feature for increasing throughput and availability by distributing the queue across multiple message brokers. It can disrupt strict ordering unless used in conjunction with sessions.
D. storage queues with a stored access policy: A stored access policy is a security mechanism for managing access permissions via Shared Access Signatures (SAS) and has no impact on the message delivery order.
---
1. Microsoft Azure Documentation, "Message sessions": "To realize a FIFO guarantee in Service Bus, use sessions. Message sessions enable joint and ordered handling of unbounded sequences of related messages." (Section: "Message sessions", Paragraph 1).
2. Microsoft Azure Documentation, "Storage queues and Service Bus queues - compared and contrasted": "Service Bus sessions enable you to process messages in a first-in, first-out (FIFO) manner... Azure Storage Queues don't natively support FIFO ordering." (Section: "Feature comparison", Table Row: "Ordering").
3. Microsoft Azure Documentation, "Partitioned messaging entities": "When a client sends a message to a partitioned queue or topic, Service Bus checks for the presence of a partition key. If it finds one, it selects the partition based on that key... If a partition key isn't specified but a session ID is, Service Bus uses the session ID as the partition key." This highlights that partitioning alone doesn't guarantee order; it's the session ID that ensures related messages land on the same partition to maintain order. (Section: "Use of partition keys").
Question 11
Show Answer
A. Azure Cache for Redis: This is an in-memory data store used for caching application data, not for storing or managing container images.
C. Azure Content Delivery Network (CDN): A CDN is designed to cache and deliver static web content to users from edge locations, not to function as a container image registry.
D. geo-redundant storage (GRS) accounts: While GRS provides data replication to a secondary region for disaster recovery, it is a general-purpose storage service and lacks the Docker registry API required by AKS to pull images.
1. Microsoft Documentation, Azure Container Registry: "Geo-replication in Azure Container Registry". This document states, "Geo-replication is a feature of Premium SKU container registries. A geo-replicated registry...enables you to manage a single registry across multiple regions." It further explains that this allows for "Network-close registry access" which is ideal for distributed AKS clusters.
2. Microsoft Documentation, Azure Container Registry: "Azure Container Registry service tiers". Under the "Feature comparison" table, "Geo-replication" is listed as a feature available only for the "Premium" service tier.
3. Microsoft Documentation, Azure Storage: "Data redundancy". This document describes Geo-redundant storage (GRS) as a disaster recovery solution that replicates data to a secondary region hundreds of miles away, which is different from the active-active, network-close access provided by ACR geo-replication.
Question 12
HOTSPOT You have an on-premises Microsoft SQL Server database named SQL1. You plan to migrate SQL1 to Azure. You need to recommend a hosting solution for SQL1. The solution must meet the following requirements: • Support the deployment of multiple secondary, read-only replicas. • Support automatic replication between primary and secondary replicas. • Support failover between primary and secondary replicas within a 15-minute recovery time objective (RTO).
Show Answer
AZURE SERVICE OR SERVICE TIER: AZURE SQL DATABASE
REPLICATION MECHANISM: ACTIVE GEO-REPLICATION
Azure SQL Database is the correct service choice. It's a fully managed platform-as-a-service (PaaS) database engine that supports various service tiers. Tiers like Business Critical and Hyperscale are specifically designed for high availability and performance, and they support the creation of readable secondary replicas, fulfilling the core requirement.
Active geo-replication is the specific technology within Azure SQL Database used to create and manage multiple readable secondary databases in different geographical regions. This feature provides:
- Multiple secondary, read-only replicas: You can create up to four readable secondaries, which can be used for read scale-out and disaster recovery.
- Automatic replication: Data is replicated asynchronously and automatically from the primary to the secondary replicas.
- Fast failover: It supports a user-initiated failover that can easily meet a 15-minute Recovery Time Objective (RTO), typically completing in under a minute.
Microsoft Documentation | Active geo-replication for Azure SQL Database: "Active geo-replication is a feature that allows you to create a continuously synchronized readable secondary database for a primary database... You can create up to four secondaries in the same or different regions." This source confirms that active geo-replication supports multiple, readable, and automatically synchronized replicas.
Microsoft Documentation | Business continuity overview with Azure SQL Database: This document details the available business continuity solutions. Under the section "Active geo-replication," it explains, "Active geo-replication... lets you create readable secondary replicas of individual databases on a server in a different region." It also specifies the RPO and RTO, which align with the scenario's requirements.
Microsoft Documentation | Hyperscale service tier: "The Hyperscale service tier in Azure SQL Database... provides the ability to scale out the read workload by using a number of read-only replicas." This confirms that specific tiers within the Azure SQL Database service meet the requirement for multiple read-only replicas. Active geo-replication is a feature available for these tiers.
Question 13
Show Answer
A. Yes: This is incorrect because VM insights focuses on performance and dependency mapping, not on the analysis of security rules that determine whether network packets are allowed or denied.
1. Microsoft Learn | Azure Network Watcher documentation. "What is Azure Network Watcher?". This document introduces Network Watcher as the primary suite for network monitoring and diagnostics in Azure. It lists IP flow verify and NSG flow logs as key features for troubleshooting connectivity.
2. Microsoft Learn | IP flow verify. "Introduction to IP flow verify". This document states, "IP flow verify checks if a packet is allowed or denied to or from a virtual machine. The information returned includes whether the packet is allowed or denied, and the network security group (NSG) rule that allowed or denied the traffic." This directly addresses the question's requirement.
3. Microsoft Learn | NSG flow logs. "Introduction to flow logging for network security groups". This source explains, "Network security group (NSG) flow logs...allows you to log information about IP traffic flowing through an NSG. ... For each rule, flow logs record if the traffic was allowed or denied..." This provides a method for historical analysis of allowed/denied traffic.
4. Microsoft Learn | VM insights. "Overview of VM insights". This document describes VM insights as a tool to "monitor the performance and health of your virtual machines...and monitor their processes and dependencies." This description confirms its purpose is different from analyzing security rule enforcement.
Question 14
Show Answer
A. an Azure logic app: While also a cost-effective serverless option, Logic Apps are primarily for designing and orchestrating workflows. For a singular, code-based maintenance task, Azure Functions are a more direct and often cheaper compute solution.
C. an Azure virtual machine: A virtual machine incurs costs whenever it is running, even if the maintenance task is not active. This makes it the most expensive option for an infrequent task, directly contradicting the cost-minimization requirement.
D. an App Service WebJob: A WebJob runs on an App Service Plan, which has a fixed hourly cost. This is less cost-effective for an infrequent task compared to the per-second, on-demand billing of an Azure Function on a Consumption plan.
1. Azure Functions Documentation, "Azure Functions pricing": "The Consumption plan is the fully serverless hosting plan for Azure Functions... With the Consumption plan, you only pay when your functions are running." This source directly supports the cost-effectiveness of Azure Functions for tasks that are not continuous.
2. Azure Documentation, "Choose the right integration and automation services in Azure": This document compares various services. It states, "Functions is a 'compute on-demand' service," while for VMs, you "pay for the virtual machines that you reserve, whether you use them or not." This highlights the fundamental cost difference between serverless (Functions) and IaaS (VMs).
3. Azure App Service Documentation, "Run background tasks with WebJobs in Azure App Service": "WebJobs... run in the context of an App Service app... The pricing model for WebJobs is based on the App Service plan." This confirms that WebJobs are tied to the continuous cost of an App Service Plan, making them less ideal for cost-minimizing infrequent tasks compared to a true pay-per-use service.
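For reference, a minimal timer-triggered Azure Function (Python v1 programming model) matching the infrequent maintenance scenario. The CRON schedule lives in the accompanying function.json binding; the schedule and the maintenance logic shown here are assumptions.

```python
# __init__.py of a timer-triggered Azure Function (Python v1 model).
# The schedule (e.g. "0 0 2 * * 0" = weekly at 02:00 on Sunday) is set in function.json.
import datetime
import logging

import azure.functions as func


def main(mytimer: func.TimerRequest) -> None:
    if mytimer.past_due:
        logging.warning("Maintenance timer is running late")

    # Placeholder for the actual maintenance work (assumption).
    logging.info("Maintenance task executed at %s", datetime.datetime.utcnow().isoformat())
```

On a Consumption plan, this function incurs compute charges only while it runs, which is what makes it the cost-minimizing choice.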
Question 15
Show Answer
B. Azure NetApp Files: This is a high-performance file storage service supporting NFS and SMB protocols, not HDFS. It is designed for enterprise file shares and HPC, not as a direct HDFS replacement.
C. Azure Data Share: This is a service for securely sharing data with external organizations. It is not a primary storage solution or a file system.
D. Azure Table storage: This is a NoSQL key-value store for structured, non-relational data. It is not a file system and does not support HDFS.
1. Microsoft Learn. "Introduction to Azure Data Lake Storage Gen2." Azure Documentation. "Data Lake Storage Gen2 is the primary storage for Azure HDInsight and Azure Databricks. It is compatible with Hadoop Distributed File System (HDFS)."
2. Microsoft Learn. "The Azure Blob File System driver (ABFS): A dedicated Azure Storage driver for Hadoop." Azure Documentation. "Azure Blob storage can now be accessed through a new driver, the Azure Blob File System driver or ABFS. The ABFS driver is part of Apache Hadoop and is included in many of the commercial distributions of Hadoop. Using this driver, many applications and frameworks can access data in Azure Blob Storage without any code explicitly referencing Data Lake Storage Gen2."
3. Microsoft Learn. "What is Azure NetApp Files." Azure Documentation. "Azure NetApp Files is an enterprise-class, high-performance, metered file storage service... It supports multiple storage protocols in a single service, including NFSv3, NFSv4.1, and SMB3.1.x." (Note: No mention of HDFS).
Question 16
HOTSPOT You need to deploy an instance of SQL Server on Azure Virtual Machines. The solution must meet the following requirements: • Support 15,000 disk IOPS. • Support SR-IOV. • Minimize costs. What should you include in the solution? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Show Answer
VIRTUAL MACHINE SERIES: DS
DISK TYPE: PREMIUM SSD
To meet the requirements, the DS-series virtual machine is the most appropriate choice. It supports Single Root I/O Virtualization (SR-IOV), which Azure calls Accelerated Networking, and is a general-purpose series that is more cost-effective for a SQL Server workload compared to the specialized and more expensive GPU-optimized NC and NV series.
For the disk, Premium SSD is the correct option. To achieve 15,000 IOPS, a single disk (like a P60 providing 16,000 IOPS) or striping multiple smaller Premium SSDs can be used. This meets the performance requirement while being more cost-effective than Ultra Disk. Standard SSDs cannot provide the required IOPS.
Azure Virtual Machine Sizes - General purpose: Microsoft Documentation states that the Dsv3-series (a common DS series) is suitable for "many enterprise applications" and provides a "balance of CPU, memory, and disk." It is more cost-effective than GPU-optimized series for non-GPU workloads.
Source: Microsoft Docs, "Sizes for virtual machines in Azure," General purpose section.
Azure Accelerated Networking (SR-IOV): Microsoft's documentation confirms that Accelerated Networking is supported on most general-purpose instances with 2 or more vCPUs, including the DSv2-series and later.
Source: Microsoft Docs, "Azure Accelerated Networking overview," Supported VM instances section.
Azure Managed Disk Types Comparison: The official documentation provides a table comparing disk types. A Premium SSD P60 disk delivers 16,000 IOPS, meeting the requirement. Ultra Disks offer higher performance but at a higher cost, making Premium SSD the most cost-effective choice for this scenario.
Source: Microsoft Docs, "Select a disk type for Azure IaaS VMs - managed disks," Disk type comparison table.
Performance guidelines for SQL Server on Azure Virtual Machines: Microsoft's performance guidelines recommend Premium SSDs for most production SQL Server workloads due to their balance of performance and cost.
Source: Microsoft Docs, "Performance guidelines for SQL Server in Azure Virtual Machines," Storage section.
Question 17
Show Answer
A. Yes: This is incorrect because the Defender for Cloud dashboard is for assessing and reporting on compliance, not for enforcing deployment rules like restricting resource locations.
1. Microsoft Learn, Azure Policy Overview. "Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards... Common use cases for Azure Policy include... enforcing that services can only be deployed to specific regions."
Source: Microsoft Documentation, learn.microsoft.com/en-us/azure/governance/policy/overview, "What is Azure Policy?" section.
2. Microsoft Learn, Tutorial: Improve your regulatory compliance. "Defender for Cloud helps you streamline the process for meeting regulatory compliance requirements, using the regulatory compliance dashboard... The dashboard shows the status of all the assessments within your environment for a chosen standard or regulation." This describes a monitoring and assessment function, not an enforcement mechanism.
Source: Microsoft Documentation, learn.microsoft.com/en-us/azure/defender-for-cloud/regulatory-compliance-dashboard, "What is the regulatory compliance dashboard?" section.
3. Microsoft Learn, Azure Policy built-in policy definitions. The built-in policy definition named "Allowed locations" has the description: "This policy enables you to restrict the locations your organization can specify when deploying resources." This directly addresses the requirement to enforce deployment to specific regions.
Source: Microsoft Documentation, learn.microsoft.com/en-us/azure/governance/policy/samples/built-in-policies, "Built-in policy definitions" table, under the "General" category.
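For orientation, a simplified illustration (as a Python dict) of the shape of an allowed-locations policy rule. This is not the full built-in definition, which contains additional conditions such as exclusions for global resources; treat it as a sketch only.

```python
# Simplified illustration of an "allowed locations" style policy rule (assumption:
# the real built-in definition includes further conditions and exclusions).
allowed_locations_rule = {
    "if": {
        "not": {
            "field": "location",
            "in": "[parameters('listOfAllowedLocations')]",
        }
    },
    "then": {"effect": "deny"},  # deployments outside the allowed regions are denied
}
```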
Question 18
Show Answer
B. No: This is incorrect. Azure Front Door's core features, including global load balancing, health probes for failover, and an integrated Web Application Firewall (WAF) with rate-limiting capabilities, directly fulfill all the solution requirements.
1. Microsoft Documentation | What is Azure Front Door?
"Azure Front Door is a global, scalable entry-point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications... Front Door provides... global load balancing with instant failover."
Reference: learn.microsoft.com/en-us/azure/frontdoor/front-door-overview, "What is Azure Front Door?" section.
2. Microsoft Documentation | Routing methods for Azure Front Door
"Azure Front Door supports different traffic-routing methods to determine how to route your HTTP/S traffic... These routing methods can be used to support different routing scenarios, including routing to the lowest latency backends, implementing failover configurations, and distributing traffic across backends."
Reference: learn.microsoft.com/en-us/azure/frontdoor/front-door-routing-methods, "Overview" section.
3. Microsoft Documentation | Web Application Firewall (WAF) rate limiting on Azure Front Door
"A rate limit rule controls the number of requests allowed from a particular client IP address to the application during a one-minute or five-minute duration... Rate limiting can be configured to work with other WAF rules, such as rules that protect you against SQL injection or cross-site scripting attacks."
Reference: learn.microsoft.com/en-us/azure/web-application-firewall/afds/waf-front-door-rate-limit, "Rate limiting and Azure Front Door" section.
Question 19
Show Answer
A. Consumption: This plan has a maximum execution timeout of 10 minutes, which is insufficient for the required 20-minute processing time.
B. App Service: This plan does not support event-driven autoscaling. Scaling is configured manually or based on performance metrics like CPU usage, not the number of events.
C. Dedicated: This is another name for the App Service plan and shares the same limitation of not providing event-driven autoscaling.
---
1. Microsoft Learn, Azure Functions hosting options. Under the "Hosting plans comparison" table, it explicitly states that the Premium plan supports "Event-driven" scaling and has a default timeout of 30 minutes, which can be configured to be unlimited. In contrast, the Consumption plan's maximum timeout is 10 minutes, and the Dedicated (App Service) plan's scaling is "Manual/Autoscale" (based on metrics, not events).
Source: Microsoft Learn. (2023). Azure Functions hosting options. Retrieved from https://learn.microsoft.com/en-us/azure/azure-functions/functions-hosting-options#hosting-plans-comparison
2. Microsoft Learn, Azure Functions triggers and bindings concepts. In the "Timeout" section, the documentation confirms the timeout limits for each plan. It specifies, "The default timeout for functions on a Consumption plan is 5 minutes... you can change this value to a maximum of 10 minutes... For Premium and Dedicated plan functions, the default is 30 minutes, and there is no overall max." This directly invalidates the Consumption plan for the 20-minute requirement.
Source: Microsoft Learn. (2023). Azure Functions triggers and bindings concepts - Timeout. Retrieved from https://learn.microsoft.com/en-us/azure/azure-functions/functions-triggers-bindings?tabs=csharp#timeout
Question 20
HOTSPOT You are designing a data storage solution to support reporting. The solution will ingest high volumes of data in the JSON format by using Azure Event Hubs. As the data arrives, Event Hubs will write the data to storage. The solution must meet the following requirements: • Organize data in directories by date and time. • Allow stored data to be queried directly, transformed into summarized tables, and then stored in a data warehouse. • Ensure that the data warehouse can store 50 TB of relational data and support between 200 and 300 concurrent read operations. Which service should you recommend for each type of data store? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Show Answer
DATA STORE FOR THE INGESTED DATA: AZURE DATA LAKE STORAGE GEN2
DATA STORE FOR THE DATA WAREHOUSE: AZURE SYNAPSE ANALYTICS DEDICATED SQL POOLS
Azure Data Lake Storage Gen2 is the correct choice for the ingested data store. Its key feature is the hierarchical namespace, which allows data to be organized into a directory structure, satisfying the requirement to organize data by date and time. This file system-like structure is optimized for big data analytics workloads, allowing services to query the raw JSON data directly and efficiently.
Azure Synapse Analytics dedicated SQL pools is the most appropriate service for the data warehouse. It uses a Massively Parallel Processing (MPP) architecture specifically designed for high-performance analytics on large datasets. It can easily scale to handle the 50 TB data requirement and is engineered to manage high concurrency (up to 128 concurrent queries, with workload management for the 200-300 user scenario), making it ideal for enterprise-level reporting.
Microsoft Documentation, "Introduction to Azure Data Lake Storage Gen2." In the section "Key features of Data Lake Storage Gen2," it states: "A hierarchical namespace is a key feature that enables Data Lake Storage Gen2 to provide high-performance data access at object storage scale and price... This allows for data to be organized in a familiar directory and file hierarchy." This directly addresses the requirement for organizing data in directories.
Microsoft Documentation, "What is dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics?." The "Massively Parallel Processing (MPP) architecture" section details how the service is built for enterprise data warehousing and big data. It highlights its ability to "run complex queries quickly across petabytes of data," which aligns with the 50 TB and high-performance querying requirements.
Microsoft Documentation, "Memory and concurrency limits for dedicated SQL pool in Azure Synapse Analytics." The "Concurrency" section specifies that a dedicated SQL pool supports up to 128 concurrent queries. While the requirement is 200-300 concurrent reads, the service's workload management and queuing capabilities are designed to handle such user loads for reporting scenarios, making it the most suitable choice among the options.
Microsoft Documentation, "Capture events through Azure Event Hubs in Azure Blob Storage or Azure Data Lake Storage." This document confirms that Azure Event Hubs can capture data directly to an Azure Data Lake Storage Gen2 account, fulfilling the ingestion pipeline requirement.
Question 21
Show Answer
A. Azure Data Box Edge: This is a physical edge computing appliance for IoT and rapid data transfer from edge locations, not the primary tool for a structured database ingestion pipeline.
D. Azure Data Box Gateway: This is a virtual appliance for transferring file-based data to Azure via SMB/NFS shares, which is unsuitable for extracting data directly from an Oracle database.
E. Azure Import/Export service: This service is for one-time, bulk data migration using physical disks. It is not appropriate for a recurring, operational data pipeline from a live application database.
1. Azure Data Factory for Oracle Ingestion: Microsoft Learn. (2023). Copy data from and to Oracle by using Azure Data Factory or Azure Synapse Analytics. "To copy data from an on-premises Oracle database, you need to set up a self-hosted integration runtime." This establishes ADF as the correct ingestion tool.
Source: learn.microsoft.com/en-us/azure/data-factory/connector-oracle
2. Azure Databricks with Azure Data Lake Storage: Microsoft Learn. (2023). Tutorial: Extract, transform, and load data by using Azure Databricks. Section: "Create and configure an Azure Databricks workspace". This tutorial demonstrates the standard pattern where Databricks interacts with data stored in Azure Data Lake Storage Gen2.
Source: learn.microsoft.com/en-us/azure/databricks/scenarios/databricks-extract-load-sql-data-warehouse
3. Orchestration Pattern: Microsoft Learn. (2023). Transform data by using an Azure Databricks Notebook. This document shows how Azure Data Factory is used to orchestrate a pipeline that can include copying data (from sources like Oracle) and then running a Databricks notebook for transformation.
Source: learn.microsoft.com/en-us/azure/data-factory/transform-data-using-databricks-notebook
Question 22
HOTSPOT You are designing a cost-optimized solution that uses Azure Batch to run two types of jobs on Linux nodes. The first job type will consist of short-running tasks for a development environment. The second job type will consist of long-running Message Passing Interface (MPI) applications for a production environment that requires timely job completion. You need to recommend the pool type and node type for each job type. The solution must minimize compute charges and leverage Azure Hybrid Benefit whenever possible. What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Show Answer
FIRST JOB: USER SUBSCRIPTION AND LOW-PRIORITY VIRTUAL MACHINES
SECOND JOB: USER SUBSCRIPTION AND DEDICATED VIRTUAL MACHINES
The solution requires optimizing costs while meeting performance requirements for two distinct jobs.
- First job (Development): This job involves short-running tasks in a development environment where cost is a primary concern and immediate completion is not critical. Low-priority virtual machines (also known as Spot VMs) are ideal as they offer significant cost savings for workloads that can tolerate interruptions. To further reduce costs by leveraging the Azure Hybrid Benefit as specified, the pool must be configured in User subscription mode.
- Second job (Production): This job involves long-running MPI applications in a production environment requiring timely completion. To prevent interruptions and guarantee availability, dedicated virtual machines are necessary. To meet the requirement of minimizing charges and using the Azure Hybrid Benefit "whenever possible," this pool must also be created in User subscription mode, as this is a prerequisite for applying the benefit to the dedicated nodes.
Microsoft Azure Documentation, "Pool allocation mode": This document clarifies the two pool allocation modes. It explicitly states that to use features like Azure Hybrid Benefit, the Batch pool must be created in the User subscription mode. This supports the choice of "User subscription" for both jobs to meet the cost-saving requirement.
Microsoft Azure Documentation, "Use Spot VMs with Batch": This document describes Spot (formerly low-priority) VMs as a cost-effective option for workloads that are fault-tolerant and flexible in their completion time. This justifies their use for the short-running development job where minimizing cost is key. It states, "Spot VMs are a good choice for workloads... where the job completion time is flexible."
Microsoft Azure Documentation, "Provision compute nodes for Batch pools": This resource distinguishes between dedicated and Spot compute nodes. It confirms that dedicated nodes are reserved for the workloads and are not subject to preemption, making them suitable for production jobs that require guaranteed availability and timely completion, such as the long-running MPI application.
Question 23
HOTSPOT Your company has 20 web APIs that were developed in-house. The company is developing 10 web apps that will use the web APIs. The web apps and the APIs are registered in the company's Azure AD tenant. The web APIs are published by using Azure API Management. You need to recommend a solution to block unauthorized requests originating from the web apps from reaching the web APIs. The solution must meet the following requirements: • Use Azure AD-generated claims. • Minimize configuration and management effort. What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Show Answer
GRANT PERMISSIONS TO ALLOW THE WEB APPS TO ACCESS THE WEB APIS BY USING: AZURE AD
CONFIGURE A JSON WEB TOKEN (JWT) VALIDATION POLICY BY USING: AZURE API MANAGEMENT
Granting Permissions (Azure AD): In scenarios where both the client (web app) and the resource (web API) are registered within Azure Active Directory (Azure AD), Azure AD is used to manage the authorization flow. The web API's app registration exposes permissions (scopes), and the web app's registration is granted consent to access those specific scopes. This process establishes a trust relationship and defines what the web app is allowed to do, directly within the identity provider. This approach centralizes access management, aligning with the goal of minimizing configuration effort.
JWT Validation (Azure API Management): Azure API Management (APIM) serves as a gateway in front of your backend services. To protect the web APIs, you configure a validate-jwt policy in APIM. This policy intercepts incoming requests, inspects the JWT (access token) provided by the web app, and validates its signature, issuer, audience, and expiration against the configuration of your Azure AD tenant. By enforcing this policy at the gateway, you ensure that no unauthorized requests reach the backend APIs. This centralizes the security logic, removing the need to implement token validation in each of the 20 web APIs individually, which significantly minimizes management effort.
Microsoft Learn, Microsoft identity platform and OAuth 2.0 On-Behalf-Of flow. This document details how a client application acquires a token to call a web API. Section "First leg of the OBO flow" explains that the client application must be granted permission in Azure AD to call the middle-tier web API. This is configured in the Azure portal under API permissions.
Microsoft Learn, Protect an API in Azure API Management using OAuth 2.0 authorization with Azure AD. This tutorial explicitly outlines the required steps. In Step 3, it states: "In this section, you'll configure an API Management policy that blocks requests that don't have a valid access token." It then provides the XML for the validate-jwt policy, which is applied at the APIM level to protect the backend API.
Microsoft Learn, API Management access restriction policies. The documentation for the validate-jwt policy states, "Use this policy to enforce the existence and validity of a JWT extracted from either a specified HTTP Header or a specified query parameter." This confirms that JWT validation is a primary function of API Management policies.
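On the client side, each web app obtains an Azure AD token for the API's app registration and presents it to the API Management gateway, where the validate-jwt policy inspects it. A hedged MSAL sketch follows; the tenant ID, client ID, secret, API app ID URI, and APIM URL are all placeholder assumptions.

```python
# pip install msal requests
import msal
import requests

# Placeholder tenant, client, secret, API app ID URI, and APIM URL (assumptions).
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<web-app-client-id>"
CLIENT_SECRET = "<web-app-secret>"
API_SCOPE = "api://<web-api-app-id>/.default"
APIM_URL = "https://<apim-name>.azure-api.net/orders"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# The Azure AD-issued token carries the claims the validate-jwt policy checks
# (issuer, audience, expiry); requests without a valid token never reach the backend.
token = app.acquire_token_for_client(scopes=[API_SCOPE])
response = requests.get(
    APIM_URL,
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
print(response.status_code)
```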
Question 24
Show Answer
B: Certificates and Azure Key Vault are used for securing application identities and secrets, not for granting data plane permissions to individual users within Cosmos DB.
C: Master keys grant full administrative permissions (read, write, delete) over the entire Cosmos DB account and are not suitable for providing scoped, read-only access to specific users.
D: Shared access signatures (SAS) are not a supported authentication mechanism for Azure Cosmos DB. Conditional Access policies enforce conditions on user authentication but do not grant permissions to data.
1. Microsoft Documentation - Azure Cosmos DB RBAC: "Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account". This document explicitly states, "Azure Cosmos DB exposes a role-based access control (RBAC) system that lets you... Authenticate your data requests with an Azure AD identity... You can create a role assignment for Azure AD principals (users, groups, service principals, or managed identities) to grant them access to resources and operations in your Azure Cosmos DB account." It also lists the Cosmos DB Built-in Data Reader role.
2. Microsoft Documentation - Secure access to data in Azure Cosmos DB: This document outlines the primary methods for securing data. In the section "Role-based access control (preview)", it describes using Azure AD identities and IAM role assignments as the modern, recommended approach for data plane security. It contrasts this with the master key and resource token models.
3. Microsoft Documentation - Resource Tokens: "Secure access to Azure Cosmos DB resources using resource tokens". This document explains that resource tokens are generated using a master key, typically by a middle-tier application, to provide temporary, scoped access to untrusted clients. This confirms it is a separate model from direct Azure AD RBAC authentication.
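A hedged azure-cosmos sketch of data-plane access with an Azure AD identity instead of the master key. The account URL, database, and container names are placeholders, and the identity is assumed to already hold a data-plane role such as Cosmos DB Built-in Data Reader.

```python
# pip install azure-cosmos azure-identity
from azure.cosmos import CosmosClient
from azure.identity import DefaultAzureCredential

# Placeholder account URL (assumption); no master key appears anywhere in the code.
client = CosmosClient(
    url="https://<cosmos-account>.documents.azure.com:443/",
    credential=DefaultAzureCredential(),
)

container = client.get_database_client("appdb").get_container_client("orders")

# Read-only queries succeed for identities assigned the Data Reader role;
# write attempts would be rejected because the role grants no write actions.
for item in container.query_items(
    query="SELECT TOP 5 * FROM c",
    enable_cross_partition_query=True,
):
    print(item["id"])
```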
Question 25
HOTSPOT You have several Azure App Service web apps that use Azure Key Vault to store data encryption keys. Several departments have the following requests to support the web apps:
Show Answer
SECURITY: AZURE AD PRIVILEGED IDENTITY MANAGEMENT
DEVELOPMENT: AZURE MANAGED IDENTITY
QUALITY ASSURANCE: AZURE AD PRIVILEGED IDENTITY MANAGEMENT
The selections are based on the specific functionalities requested by each department.
- Security: The requirements to review administrative roles, require justification for membership, receive alerts, and view audit histories are all core features of Azure AD Privileged Identity Management (PIM). PIM is designed to manage, control, and monitor access to privileged resources by providing just-in-time (JIT) access and access review capabilities.
- Development: The request to enable applications to access Azure Key Vault without storing credentials in code is the primary use case for Azure Managed Identity. This feature provides an Azure resource (like a web app) with an automatically managed identity in Azure AD, which can then be used to authenticate to other Azure services that support Azure AD authentication.
- Quality Assurance: While no specific request is listed, QA teams often require temporary, elevated permissions to test application features or troubleshoot issues in test environments. Azure AD Privileged Identity Management (PIM) is the most appropriate service to grant these permissions securely, adhering to the principle of least privilege through just-in-time access that can be audited.
Azure AD Privileged Identity Management (PIM):
Microsoft Learn. (2023). What is Azure AD Privileged Identity Management? States that PIM provides time-based and approval-based role activation to mitigate risks of excessive permissions. It also enables features like access reviews, alerts on privileged role activation, and audit history. This directly addresses the Security department's requests.
Azure Managed Identity:
Microsoft Learn. (2023). What are managed identities for Azure resources? Explains that managed identities provide an identity for applications to use when connecting to resources that support Azure AD authentication. It explicitly mentions, "You can use a managed identity to authenticate to any service that supports Azure AD authentication, including Key Vault, without having any credentials in your code," which matches the Development department's request.
Use Case for PIM in QA/Testing:
Microsoft Learn. (2023). Assign Azure AD roles in Privileged Identity Management. The concept of assigning roles as "eligible" for activation on a temporary, as-needed basis is a core tenet of PIM. This model is a best practice for any user, including QA testers, who only need elevated permissions intermittently, thus justifying the selection for the Quality Assurance department.
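To illustrate the Development requirement, a hedged sketch of a web app using its managed identity to read a key from Key Vault with no credentials in code. The vault URL and key name are placeholders, and the app's system-assigned identity is assumed to have been granted key permissions.

```python
# pip install azure-identity azure-keyvault-keys
from azure.identity import ManagedIdentityCredential
from azure.keyvault.keys import KeyClient

# Placeholder vault URL and key name (assumptions for this sketch).
credential = ManagedIdentityCredential()
key_client = KeyClient(
    vault_url="https://<vault-name>.vault.azure.net",
    credential=credential,
)

key = key_client.get_key("data-encryption-key")
print(key.name, key.key_type)  # no secret or connection string appears in code
```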
Question 26
Show Answer
B. Install Virtual Kubelet: This is an underlying technology for virtual nodes, which provides a serverless compute option (ACI), but it does not provide the event-driven scaling logic itself.
C. Configure the AKS cluster autoscaler: This component scales the number of agent nodes in the cluster, not the application pods. It responds to resource pressure, not external event triggers like queue length.
D. Configure the virtual node add-on: This add-on allows pods to run on Azure Container Instances (ACI). It is a compute-layer choice and does not implement the required event-driven application scaling mechanism.
1. Microsoft Azure Documentation, "Kubernetes-based Event-driven Autoscaling (KEDA) add-on": "KEDA provides two main components: a KEDA operator and a metrics server. The KEDA operator allows you to scale from zero to one instance and to activate a Kubernetes Deployment, and the metrics server provides metrics for an event source to the Horizontal Pod Autoscaler (HPA)." This confirms the direct relationship and necessity of both KEDA and HPA.
2. Microsoft Azure Documentation, "Autoscale an application with Kubernetes Event-driven Autoscaling (KEDA)": "Under the hood, KEDA uses the standard Kubernetes Horizontal Pod Autoscaler (HPA) to drive scaling. KEDA acts as a metrics server for the HPA, providing it with data from external event sources." This explicitly states that KEDA and HPA work together to provide the solution.
3. KEDA Documentation, "Microsoft Azure Queue Storage scaler": This document details the specific scaler for Azure Queue Storage, confirming that KEDA can monitor the queue length (queueLength) to trigger scaling actions, directly matching the scenario's requirement.
4. Microsoft Azure Documentation, "Application scaling options in Azure Kubernetes Service (AKS)": In the "Horizontal Pod Autoscaler (HPA)" section, it clarifies that HPA scales the number of pods. The "Kubernetes Event-driven Autoscaling (KEDA) add-on" section confirms KEDA is network-plugin agnostic and works with both Kubenet and Azure CNI.
Question 27
Show Answer
A. Burstable: This tier does not support zone-redundant high availability, which is required to ensure the database is accessible if an entire datacenter (Availability Zone) fails.
C. Memory Optimized: While this tier supports zone-redundant high availability, it is more expensive than the General Purpose tier. It does not meet the requirement to minimize costs.
1. Microsoft Learn: High availability concepts in Azure Database for MySQL - Flexible Server. Under the "Zone-redundant high availability" section, it states, "Zone-redundant high availability is available for the General Purpose and Memory Optimized compute tiers. It is not supported in the Burstable compute tier."
2. Microsoft Learn: Compute and storage options in Azure Database for MySQL - Flexible Server. This document details the different compute tiers. The "When to choose this tier" section for General Purpose indicates it is for "most business workloads," while Memory Optimized is for "high-performance database workloads." This implies a cost and performance hierarchy where General Purpose is the more cost-effective baseline for production HA.
Question 28
Show Answer
A. AES256 is incorrect because the TDE protector must be an asymmetric RSA key. AES is a symmetric algorithm used for the data encryption key (DEK), not the protector.
B. RSA4096 is incorrect because, while Azure Key Vault supports this key size, the TDE integration for Azure SQL Managed Instance specifically does not.
C. RSA2048 is incorrect because, although it is a supported key size, it does not meet the requirement to maximize encryption strength; RSA 3072 is also supported and is stronger.
1. Microsoft Learn. (2023). Transparent data encryption with customer-managed keys - Azure SQL Database & SQL Managed Instance. In the section "Requirements for configuring customer-managed TDE," the documentation explicitly states: "The key is an asymmetric, RSA or RSA-HSM key. Key sizes of 2048 and 3072 are supported." This confirms that 3072 is the maximum supported size.
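For illustration, a minimal sketch (the vault and key names are assumptions) that creates an RSA 3072-bit key in Azure Key Vault with the azure-keyvault-keys SDK; this key would then be selected as the TDE protector for the managed instance.
```python
# Minimal sketch (assumed vault and key names): create an RSA 3072-bit key
# in Azure Key Vault to use as the customer-managed TDE protector.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

key_client = KeyClient(
    vault_url="https://contoso-vault.vault.azure.net",  # hypothetical vault
    credential=DefaultAzureCredential(),
)

# RSA 3072 is the largest key size supported for SQL Managed Instance TDE.
tde_protector = key_client.create_rsa_key("tde-protector", size=3072)
print(tde_protector.id)  # key identifier to reference when configuring customer-managed TDE
```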
Question 29
HOTSPOT You have an Azure subscription named Sub1 that is linked to an Azure AD tenant named contoso.com. You plan to implement two ASP.NET Core apps named App1 and App2 that will be deployed to 100 virtual machines in Sub1. Users will sign in to App1 and App2 by using their contoso.com credentials. App1 requires read permissions to access the calendar of the signed-in user. App2 requires write permissions to access the calendar of the signed-in user. You need to recommend an authentication and authorization solution for the apps. The solution must meet the following requirements: • Use the principle of least privilege. • Minimize administrative effort. What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Show Answer
AUTHENTICATION: APPLICATION REGISTRATION IN AZURE AD
AUTHORIZATION: DELEGATED PERMISSIONS
Authentication: For an application to handle user sign-ins with Azure Active Directory (Azure AD) credentials and request access to protected resources (like the Microsoft Graph API for calendar access), it must first be registered in the Azure AD tenant. This registration creates a globally unique identity for the app and defines the authentication protocols it will use. Managed identities are used for service-to-service authentication (e.g., a VM authenticating to Azure Key Vault) and are not suitable for scenarios where an app needs to act on behalf of a signed-in user.
Authorization: The requirement is for the apps to access the signed-in user's calendar. This is a classic delegated access scenario. Delegated permissions are used when an application needs to act on behalf of a user. The application is "delegated" the permission to access resources the user can access. By assigning App1 the Calendars.Read permission and App2 the Calendars.ReadWrite permission, the principle of least privilege is enforced. In contrast, Azure RBAC manages access to Azure resources (like VMs and storage), not API data. Application permissions are for services that run without a user present (e.g., background daemons).
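As an illustration of the delegated model, the sketch below uses MSAL for Python (the scenario's apps are ASP.NET Core, so this is purely illustrative) to sign a user in and request only the Calendars.Read delegated scope for App1; App2 would request Calendars.ReadWrite instead. The client ID and authority values are placeholders.
```python
# Illustrative sketch (hypothetical client ID and tenant): App1 signs a user in
# and requests only the delegated Calendars.Read scope, so the resulting token's
# effective rights are the intersection of that scope and the user's own access.
import msal

app1 = msal.PublicClientApplication(
    client_id="<app1-client-id>",
    authority="https://login.microsoftonline.com/contoso.com",
)

# Interactive sign-in; App2 would request ["Calendars.ReadWrite"] instead.
result = app1.acquire_token_interactive(scopes=["Calendars.Read"])
access_token = result.get("access_token")  # bearer token for Microsoft Graph calendar reads
```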
Microsoft Entra ID Documentation, Application and service principal objects in Azure Active Directory: "To delegate identity and access management functions to Microsoft Entra ID, an application must be registered with a Microsoft Entra tenant. When you register your application with Microsoft Entra ID, you're creating an identity configuration for your application that allows it to integrate with Microsoft Entra ID."
Microsoft Identity Platform Documentation, Permissions and consent in the Microsoft identity platform: "Delegated permissions are used by apps that have a signed-in user present... For delegated permissions, the effective permissions of your app will be the least privileged intersection of the delegated permissions the app has been granted (via consent) and the privileges of the currently signed-in user."
Microsoft Azure Documentation, What is Azure role-based access control (Azure RBAC)?: "Azure role-based access control (Azure RBAC) is the authorization system you use to manage access to Azure resources. To grant access, you assign roles to users, groups, service principals, or managed identities at a particular scope." This clarifies that Azure RBAC is for managing Azure resources, not data within APIs like Microsoft Graph.
Question 30
Show Answer
B. NoSQL: This API uses a non-relational document model, failing the requirement to store data relationally.
C. Apache Cassandra: This API uses a non-relational wide-column model, not a relational one.
D. MongoDB: This API uses a non-relational document model, which does not store data relationally.
1. Microsoft Learn, Azure Cosmos DB Documentation. "What is Azure Cosmos DB for PostgreSQL?". This document states, "Azure Cosmos DB for PostgreSQL is a managed service for PostgreSQL that is powered by the Citus open-source extension to PostgreSQL. It allows you to run PostgreSQL workloads in the cloud with all the benefits of a fully managed service." This confirms its relational and SQL-based nature.
Reference: https://learn.microsoft.com/en-us/azure/cosmos-db/postgresql/overview, Section: "What is Azure Cosmos DB for PostgreSQL?".
2. Microsoft Learn, Azure Cosmos DB Documentation. "High availability in Azure Cosmos DB for PostgreSQL". This document details the service's capabilities for business continuity, including "Geo-redundant backup and restore" and "Cross-region read replicas," which satisfy the geo-replication requirement.
Reference: https://learn.microsoft.com/en-us/azure/cosmos-db/postgresql/concepts-high-availability, Sections: "Geo-redundant backup and restore" and "Cross-region read replicas".
3. Microsoft Learn, Azure Cosmos DB Documentation. "Choose an API in Azure Cosmos DB". This resource contrasts the different APIs. It explicitly describes the API for NoSQL and MongoDB as using a "Document model" and the API for Cassandra as using a "Column-family model," confirming they are not relational.
Reference: https://learn.microsoft.com/en-us/azure/cosmos-db/choose-api, Section: "Azure Cosmos DB APIs".
Question 31
HOTSPOT You have an Azure AD tenant that contains a management group named MG1. You have the Azure subscriptions shown in the following table. 



Show Answer
STATEMENT 1: USER1 CAN CREATE A NEW VIRTUAL MACHINE IN RG1.
YES
STATEMENT 2: USER2 CAN GRANT PERMISSIONS TO GROUP2.
NO
STATEMENT 3: USER3 CAN CREATE A STORAGE ACCOUNT IN RG2.
YES
Statement 1: Yes. User1 is a member of Group1. Group1 is assigned the Virtual Machine Contributor role at the MG1 management group scope. Since RG1 is in Sub1, which is under MG1, these permissions are inherited by RG1. The Virtual Machine Contributor role allows the creation and management of virtual machines. Additionally, User1 is transitively a member of Group3 (User1 -> Group1 -> Group3), which has the Contributor role at the Tenant Root Group, a permission that also inherits down to RG1.
Statement 2: No. User2 is a member of Group2, which is a member of Group3. Group3 has the Contributor role at the Tenant Root Group. While the Contributor role grants broad permissions to manage resources, it explicitly does not include the right to grant access to others. Granting permissions requires a role with the Microsoft.Authorization/roleAssignments/write permission, such as Owner or User Access Administrator.
Statement 3: Yes. User3 is a member of both Group1 and Group2, which are both members of Group3. Group3 is assigned the Contributor role at the Tenant Root Group. This permission is inherited by all subscriptions and resource groups below it, including RG2 (which is in Sub2, under MG1, under the Tenant Root). The Contributor role includes permissions to create and manage all resource types, including storage accounts.
Azure role-based access control (Azure RBAC) Scope: Microsoft Docs. "Understand scope for Azure RBAC". Permissions are inherited from parent scopes to child scopes. A role assigned at a management group scope grants access to all subscriptions and resources within that management group.
Azure built-in roles: Microsoft Docs. "Azure built-in roles".
Contributor: "Grants full access to manage all resources, but does not allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries."
Virtual Machine Contributor: "Lets you manage virtual machines, but not access to them, and not the virtual network or storage account they're connected to." This documentation clarifies it allows creating and managing VMs.
Azure Management Groups: Microsoft Docs. "Organize your resources with Azure management groups". This document explains the hierarchy from Tenant Root Group down to individual resources and how policies and access control inherit through this structure.
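To check this inheritance in practice, a minimal sketch (the subscription ID is a placeholder) using the azure-mgmt-authorization SDK can list the role assignments that apply at the RG2 scope, including assignments made at the management group and Tenant Root Group levels.
```python
# Minimal sketch (assumed IDs): list role assignments that apply to RG2,
# including those inherited from Sub2, MG1, and the Tenant Root Group,
# to confirm what a user such as User3 effectively receives.
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<sub2-subscription-id>"  # hypothetical
auth_client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

scope = f"/subscriptions/{subscription_id}/resourceGroups/RG2"
for assignment in auth_client.role_assignments.list_for_scope(scope):
    # Assignments made at parent scopes appear here because Azure RBAC
    # permissions inherit down to child scopes.
    print(assignment.scope, assignment.role_definition_id, assignment.principal_id)
```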
Question 32
HOTSPOT You are designing an app that will be hosted on Azure virtual machines that run Ubuntu. The app will use a third-party email service to send email messages to users. The third-party email service requires that the app authenticate by using an API key. You need to recommend an Azure Key Vault solution for storing and accessing the API key. The solution must minimize administrative effort. What should you recommend using to store and access the key? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Show Answer
STORAGE: SECRET
ACCESS: A MANAGED SERVICE IDENTITY (MSI)
The most appropriate way to store a simple credential like a third-party API key in Azure Key Vault is as a Secret. Secrets are designed to store arbitrary strings of text, such as passwords, connection strings, and API keys.
To access the Key Vault from an Azure VM with minimal administrative effort, a managed service identity (MSI), now called a managed identity for Azure resources, is the best practice. This feature gives the Azure VM an automatically managed identity in Azure Active Directory. The application running on the VM can use this identity to authenticate to Key Vault and retrieve the secret without storing any credentials (such as a service principal secret or an API token) in its code or configuration files. This eliminates the overhead of credential management and rotation.
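A minimal sketch of the recommended pattern, assuming hypothetical vault and secret names: the app on the Ubuntu VM uses the VM's system-assigned managed identity to read the API key from Key Vault, so no credential ever lives on the VM.
```python
# Minimal sketch (assumed vault and secret names): the VM's system-assigned
# managed identity is used to read the third-party email API key from Key Vault.
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

secret_client = SecretClient(
    vault_url="https://contoso-vault.vault.azure.net",  # hypothetical vault
    credential=ManagedIdentityCredential(),  # token comes from the VM's managed identity
)

api_key = secret_client.get_secret("EmailServiceApiKey").value  # hypothetical secret name
```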
Microsoft Azure Documentation, Azure Key Vault basic concepts.
Section: "What is Azure Key Vault?"
Content: The documentation specifies that Key Vault secrets are used for "anything that you want to tightly control access to, such as API keys, passwords, certificates, or cryptographic keys." This supports storing the API key as a Secret.
Microsoft Azure Documentation, What are managed identities for Azure resources?
Section: "Introduction" and "Which Azure services support managed identities"
Content: This source states, "Managed identities for Azure resources provide Azure services with an automatically managed identity... You can use this identity to authenticate to any service that supports Azure AD authentication, including Key Vault, without having any credentials in your code." This directly supports the use of MSI for minimizing administrative effort.
Microsoft Azure Documentation, Tutorial: Use a Linux VM system-assigned managed identity to access Azure Key Vault.
Section: "Overview" and "Prerequisites"
Content: This tutorial demonstrates the exact scenario in the question. It explicitly states that a managed identity is the recommended way for code running on a VM to authenticate to services like Key Vault because the credentials are automatically managed by the platform.
Question 33
Show Answer
A. Yes: This is incorrect. Azure Advisor's function is to provide high-level recommendations on best practices, not to perform detailed network packet flow analysis required for troubleshooting connectivity.
1. Microsoft Learn, Azure Advisor. "Overview of Azure Advisor." Under the "What is Advisor?" section, it is defined as a service that provides recommendations for Reliability, Security, Performance, Cost, and Operational Excellence. It does not list network traffic diagnostics as a feature.
Source: https://learn.microsoft.com/en-us/azure/advisor/advisor-overview
2. Microsoft Learn, Azure Network Watcher. "Introduction to IP flow verify in Azure Network Watcher." This document explicitly states, "IP flow verify checks if a packet is allowed or denied to or from a virtual machine. The information consists of direction, protocol, local IP, local port, remote IP, and remote port." This directly addresses the requirement in the question.
Source: https://learn.microsoft.com/en-us/azure/network-watcher/network-watcher-ip-flow-verify-overview
3. Microsoft Learn, Azure Network Watcher. "Diagnose a virtual machine network traffic filter problem." This tutorial demonstrates using the IP Flow Verify capability to determine if a network security group (NSG) rule is denying traffic to or from a virtual machine, which is the exact scenario described.
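For reference, a hedged sketch of invoking IP flow verify programmatically with the azure-mgmt-network SDK; the resource names, addresses, and ports are placeholders, and exact model field names may vary slightly between SDK versions.
```python
# Hedged sketch (assumed resource names): run Network Watcher IP flow verify
# against a VM to see whether an NSG rule allows or denies a given flow.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import VerificationIPFlowParameters

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = network_client.network_watchers.begin_verify_ip_flow(
    resource_group_name="NetworkWatcherRG",        # assumed resource group
    network_watcher_name="NetworkWatcher_eastus",  # assumed watcher name
    parameters=VerificationIPFlowParameters(
        target_resource_id=(
            "/subscriptions/<subscription-id>/resourceGroups/RG1"
            "/providers/Microsoft.Compute/virtualMachines/VM1"
        ),
        direction="Inbound",
        protocol="TCP",
        local_ip_address="10.0.0.4",       # the VM's private IP
        local_port="443",
        remote_ip_address="203.0.113.10",  # the remote client being tested
        remote_port="60000",
    ),
)
result = poller.result()
print(result.access, result.rule_name)  # e.g. "Deny" plus the NSG rule that blocked the flow
```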
Question 34
Show Answer
A (Yes): This is incorrect. A resource's location is independent of its resource group's location. Therefore, a policy restricting the resource group's location does not guarantee that the App Service instances within it will be in an approved region.
1. Microsoft Learn, Azure Resource Manager documentation, "What is a resource group?": Under the "Resources" section, it states, "The location of the resource group can be different than the location of the resources. [...] The resource group stores metadata about the resources. When you specify a location for the resource group, you're specifying where that metadata is stored." This confirms that resource and resource group locations are independent.
2. Microsoft Learn, Azure Policy documentation, "Tutorial: Create and manage policies to enforce compliance": In the "Apply a policy" section, it describes the "Allowed locations" policy. The documentation explains, "This policy definition enables you to restrict the locations your organization can specify when deploying resources." This policy should be assigned and scoped to the App Service resource type to meet the requirement directly.
3. Microsoft Learn, Azure Policy built-in definitions, "Allowed locations": The policy definition ("policyRule": { "if": { "not": { "field": "location", "in": "[parameters('listOfAllowedLocations')]" } }, "then": { "effect": "deny" } }) demonstrates that the policy acts on the location field of a resource, not its resource group. To be effective for this scenario, it must be applied to the Microsoft.Web/sites resource type.
Question 35
DRAG DROP You have two app registrations named App1 and App2 in Azure AD. App1 supports role-based access control (RBAC) and includes a role named Writer. You need to ensure that when App2 authenticates to access App1, the tokens issued by Azure AD include the Writer role claim. Which blade should you use to modify each app registration? To answer, drag the appropriate blades to the correct app registrations. Each blade may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Show Answer
APP1: APP ROLES
APP2: API PERMISSIONS
To solve this, you must configure both the resource application (App1) and the client application (App2).
App1, the resource API, must first define the permissions it exposes to other applications. This is accomplished by creating an "App role" (like the 'Writer' role). Therefore, the App roles blade is used to configure App1.
App2, the client application, must then request one of the permissions exposed by App1. This is done on the API permissions blade of App2, where you add the permission and grant admin consent. Once granted, Azure AD includes the corresponding 'Writer' role claim in the access token it issues for App2 to call App1.
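To illustrate the end result, the sketch below (all IDs and secrets are placeholders) shows App2 acquiring a token for App1 with the client credentials flow using MSAL for Python; after the App roles and API permissions configuration described above, the returned access token carries the Writer value in its roles claim.
```python
# Illustrative sketch (placeholder IDs): App2 authenticates with the client
# credentials flow and requests a token for App1. Once the Writer app role is
# defined on App1, requested on App2's API permissions blade, and granted admin
# consent, the issued access token contains "roles": ["Writer"].
import msal

app2 = msal.ConfidentialClientApplication(
    client_id="<app2-client-id>",
    client_credential="<app2-client-secret>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)

# ".default" requests all application permissions already consented for App1.
result = app2.acquire_token_for_client(scopes=["api://<app1-client-id>/.default"])
access_token = result.get("access_token")  # JWT whose "roles" claim includes "Writer"
```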
Microsoft Learn | Microsoft Entra ID Documentation: In the article "Add app roles to your application and receive them in the token," the procedure for a resource API to define its roles is detailed.
Section: "Create app roles by using the Azure portal"
Content: This section explicitly states, "To create an app role by using the Azure portal's user interface: 1. Sign in to the Microsoft Entra admin center... 3. Browse to Identity > Applications > App registrations and then select the app you want to define app roles in... 4. Under Manage, select App roles, and then select Create app role." This confirms that App roles is the correct blade for App1.
Microsoft Learn | Microsoft Entra ID Documentation: The guide "Quickstart: Configure a client application to access a web API" explains how a client app requests permissions.
Section: "Add permissions to access the web API"
Content: This section provides the steps for the client application (App2 in this scenario): "1. Under Manage, select API permissions > Add a permission. 2. Select the My APIs tab. 3. In the list of APIs, select your web API registration... 4. Select Application permissions... 5. In the list of permissions, select the check box next to [the role you defined]... 7. Select Grant admin consent..." This confirms that API permissions is the correct blade for App2.