Free Practice Test

Free AZ-305 Practice Test – 2025 Updated

Prepare Smarter for the AZ-305 Exam with Our Free and Accurate AZ-305 Exam Questions – 2025 Updated

At Cert Empire, we are committed to delivering the latest and most reliable exam questions for students preparing for the Microsoft AZ-305 Exam. To make preparation easier, we've made parts of our AZ-305 exam resources completely free. You can practice as much as you want with the free AZ-305 practice test.

Question 1

You plan to deploy multiple instances of an Azure web app across several Azure regions. You need to design an access solution for the app. The solution must meet the following replication requirements: • Support rate limiting. • Balance requests between all instances. • Ensure that users can access the app in the event of a regional outage. Solution: You use Azure Load Balancer to provide access to the app. Does this meet the goal?
Options
A: Yes
B: No
Show Answer
Correct Answer:
No
Explanation
The proposed solution is incorrect. Azure Load Balancer is a regional, Layer 4 (TCP/UDP) service. It cannot meet the specified requirements. Firstly, it does not natively support rate limiting, which is a Layer 7 (HTTP/S) feature typically handled by services like Azure Application Gateway WAF or Azure Front Door. Secondly, as a regional service, a standard Azure Load Balancer cannot balance traffic across multiple Azure regions or provide automatic failover in the event of a regional outage. A global load balancing solution, such as Azure Front Door or Azure Traffic Manager, is required to route traffic across regions and ensure high availability during a regional failure.
Why Incorrect Options are Wrong

A. Yes: This is incorrect because Azure Load Balancer is a regional Layer 4 service and lacks the required global routing, regional failover, and native rate-limiting capabilities.

References

1. Azure Architecture Center - Load-balancing options. This document explicitly states, "For global routing, we recommend Azure Front Door." It also categorizes Azure Load Balancer as a Regional load balancer, contrasting it with Global options like Front Door and Traffic Manager, which are necessary for regional outage scenarios.

Source: Microsoft Learn, Azure Architecture Center. (2023). Load-balancing options. Section: "Azure load-balancing services".

2. Azure Load Balancer overview. This documentation confirms that "Azure Load Balancer operates at layer 4 of the Open Systems Interconnection (OSI) model" and is a regional resource, which means it cannot route traffic between regions.

Source: Microsoft Learn. (2023). What is Azure Load Balancer?. Section: "Introduction".

3. Web Application Firewall (WAF) rate limiting. This document details how rate limiting is a feature of Azure Application Gateway WAF and Azure Front Door, not Azure Load Balancer. It states, "Rate limiting allows you to detect and block abnormally high levels of traffic from any client IP address."

Source: Microsoft Learn. (2023). Rate limiting on Azure Application Gateway. Section: "Overview".

Question 2

You are developing a sales application that will contain several Azure cloud services and handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping. You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using XML messages. What should you include in the recommendation?
Options
A: Azure Data Lake
B: Azure Notification Hubs
C: Azure Queue Storage
D: Azure Service Fabric
Show Answer
Correct Answer:
Azure Queue Storage
Explanation
Azure Queue Storage is a service for storing large numbers of messages that can be accessed from anywhere in the world. It is designed for building scalable, decoupled applications. In this scenario, the different cloud services (orders, billing, inventory) can communicate asynchronously by placing XML messages into a queue. The sending service adds a message and can continue its work, while the receiving service can retrieve and process the message when it is ready. This pattern effectively decouples the components, improving the application's overall reliability and scalability, which is ideal for handling different stages of a transaction.
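For context, here is a minimal sketch of the decoupled pattern described above, using the azure-storage-queue Python SDK. The connection-string setting, queue name, and XML payload are illustrative assumptions and are not part of the exam question.

```python
# Minimal sketch (assumes the azure-storage-queue package and a storage connection
# string in the AZURE_STORAGE_CONNECTION_STRING environment variable; the queue
# name and XML payload are placeholders).
import os
from azure.storage.queue import QueueClient

conn_str = os.environ["AZURE_STORAGE_CONNECTION_STRING"]
queue = QueueClient.from_connection_string(conn_str, queue_name="orders")

# Producer (e.g., the order service): enqueue an XML message and keep working.
order_xml = "<order><id>1001</id><status>placed</status></order>"
queue.send_message(order_xml)

# Consumer (e.g., the billing service): pull messages when ready, then delete them.
for msg in queue.receive_messages(messages_per_page=10):
    print("Processing:", msg.content)   # msg.content holds the XML payload
    queue.delete_message(msg)           # remove only after successful processing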
Why Incorrect Options are Wrong

A. Azure Data Lake is a scalable data storage and analytics service. It is designed for big data workloads, not for real-time, transactional messaging between services.

B. Azure Notification Hubs is a massively scalable mobile push notification engine. Its purpose is to send notifications to client applications on various platforms, not for backend service-to-service communication.

D. Azure Service Fabric is a distributed systems platform for building and deploying microservices. While you could build a messaging system on it, it is not the messaging service itself.

References

1. Microsoft Documentation, "What is Azure Queue Storage?": "Azure Queue Storage is a service for storing large numbers of messages. You access messages from anywhere in the world via authenticated calls using HTTP or HTTPS. A queue message can be up to 64 KB in size. A queue may contain millions of messages, up to the total capacity limit of a storage account. Queues are commonly used to create a backlog of work to process asynchronously."

Source: Microsoft Docs, Azure Storage Documentation, Queues.

2. Microsoft Documentation, "Storage queues and Service Bus queues - compared and contrasted": "Azure Queue Storage... provides a simple REST-based Get/Put/Peek interface, providing reliable, persistent messaging within and between services... Use Queue storage when you need to store over 80 gigabytes of messages in a queue [and] you want a simple, easy to use queue." This document highlights its use for decoupling application components for increased scalability and reliability.

Source: Microsoft Docs, Azure Architecture Center, Application integration.

3. Microsoft Documentation, "What is Azure Notification Hubs?": "Azure Notification Hubs provide an easy-to-use and scaled-out push engine that allows you to send notifications to any platform (iOS, Android, Windows, etc.) from any back-end (cloud or on-premises)."

Source: Microsoft Docs, Azure Notification Hubs Documentation, Overview.

4. Microsoft Documentation, "Introduction to Azure Data Lake Storage Gen2": "Azure Data Lake Storage Gen2 is a set of capabilities dedicated to big data analytics, built on Azure Blob Storage."

Source: Microsoft Docs, Azure Storage Documentation, Data Lake Storage.

Question 3

Your company has the divisions shown in the following table. Sub1 contains an Azure App Service web app named App1. App1 uses Azure AD for single-tenant user authentication. Users from contoso.com can authenticate to App1. You need to recommend a solution to enable users in the fabrikam.com tenant to authenticate to App1. What should you recommend?
Options
A: Configure the Azure AD provisioning service.
B: Configure Supported account types in the application registration and update the sign-in endpoint.
C: Configure assignments for the fabrikam.com users by using Azure AD Privileged Identity Management (PIM).
D: Enable Azure AD pass-through authentication and update the sign-in endpoint
Show Answer
Correct Answer:
Configure Supported account types in the application registration and update the sign-in endpoint.
Explanation
The application is currently configured as a single-tenant app, which restricts authentication to users within its home tenant (contoso.com). To allow users from an external Azure AD tenant (fabrikam.com) to authenticate, the application must be reconfigured to be multi-tenant. This is accomplished by modifying the "Supported account types" setting in the application's registration within the Azure portal. Changing this setting to "Accounts in any organizational directory (Any Azure AD directory - Multitenant)" makes the application available to users from any Azure AD tenant. The application's sign-in endpoint logic must also be updated to handle requests from the generic /organizations or /common endpoint instead of the tenant-specific one.
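For context, "update the sign-in endpoint" roughly corresponds to pointing the app at the multi-tenant authority instead of a tenant-specific one. A minimal sketch using MSAL for Python follows; the client ID and scope are placeholders, not values from the question.

```python
# Minimal sketch (assumes the msal package; the client ID and scope are placeholders).
# A single-tenant app signs users in against its home tenant; a multi-tenant app
# uses the /organizations (or /common) authority instead of a tenant-specific one.
import msal

CLIENT_ID = "<app-registration-client-id>"  # placeholder

# Before: tenant-specific authority (contoso.com users only)
# authority = "https://login.microsoftonline.com/contoso.onmicrosoft.com"

# After: multi-tenant authority (any Azure AD work or school account)
authority = "https://login.microsoftonline.com/organizations"

app = msal.PublicClientApplication(CLIENT_ID, authority=authority)
result = app.acquire_token_interactive(scopes=["User.Read"])
print("Signed-in tenant:", result.get("id_token_claims", {}).get("tid"))
```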
Why Incorrect Options are Wrong

A. The Azure AD provisioning service automates creating and managing user identities in other applications; it does not configure an application's authentication audience.

C. Azure AD Privileged Identity Management (PIM) is used to manage, control, and monitor access to privileged roles, not to enable standard cross-tenant user authentication.

D. Azure AD pass-through authentication is a sign-in method for hybrid identity that validates user passwords against an on-premises Active Directory; it is not relevant for cross-tenant authentication.

References

1. Microsoft Documentation: How to: Sign in any Azure Active Directory user using the multi-tenant application pattern.

Reference: In the section "Update the registration to be multi-tenant," the document states: "If you have an existing application and you want to make it multi-tenant, you need to open the application registration in the Azure portal and update Supported account types to Accounts in any organizational directory." This directly supports the chosen answer.

2. Microsoft Documentation: Quickstart: Register an application with the Microsoft identity platform.

Reference: In the "Register an application" section, step 4, "Supported account types," explicitly defines the option "Accounts in any organizational directory (Any Azure AD directory - Multitenant)" as the method to allow users with a work or school account from any organization to sign into the application.

3. Microsoft Documentation: Tenancy in Azure Active Directory.

Reference: The "App-level considerations" section explains the difference between single-tenant and multi-tenant applications. It clarifies that a multi-tenant application is "available to users in both its home tenant and other tenants." This conceptual document underpins the need to change the application's tenancy model to meet the requirement.

Question 4

You need to design a highly available Azure SQL database that meets the following requirements: * Failover between replicas of the database must occur without any data loss. * The database must remain available in the event of a zone outage. * Costs must be minimized. Which deployment option should you use?
Options
A: Azure SQL Database Premium
B: Azure SQL Database Hyperscale
C: Azure SQL Database Basic
D: Azure SQL Managed Instance Business Critical
Show Answer
Correct Answer:
Azure SQL Database Premium
Explanation
The Azure SQL Database Premium tier is the most appropriate choice. It supports zone-redundant configurations, which provision replicas in different availability zones within the same region. This architecture uses synchronous replication, ensuring that failovers occur with zero data loss (Recovery Point Objective - RPO=0) and that the database remains available during a zone-level outage. Compared to Hyperscale and Managed Instance Business Critical, the Premium tier provides these high-availability features at a lower cost, thus satisfying the "costs must be minimized" requirement for workloads that do not require the massive scale of Hyperscale or the instance-level features of Managed Instance.
Why Incorrect Options are Wrong

B. Azure SQL Database Hyperscale: While it supports zone redundancy, this tier is designed for very large databases (VLDBs) and is not the most cost-effective option for general high-availability scenarios.

C. Azure SQL Database Basic: This tier does not support zone-redundant configurations and cannot meet the requirement to remain available during a zone outage.

D. Azure SQL Managed Instance Business Critical: This option meets the availability and data-loss requirements but is generally more expensive than Azure SQL Database Premium, failing the cost-minimization constraint.

References

1. Microsoft Documentation, "High availability for Azure SQL Database and SQL Managed Instance": Under the "Zone-redundant availability" section, it states, "Zone-redundant configuration is available for databases in the... Premium, Business Critical, and Hyperscale service tiers... When you provision a database or an elastic pool with zone redundancy, Azure SQL creates multiple synchronous secondary replicas in other availability zones." This confirms that Premium meets the zone outage and no data loss requirements.

2. Microsoft Documentation, "vCore purchasing model - Azure SQL Database": The "Premium service tier" section describes it as being designed for "I/O-intensive workloads that require high availability and low-latency I/O." The documentation confirms that zone redundancy is a configurable option for this tier.

3. Microsoft Documentation, "Service Tiers in the DTU-based purchase model": This document shows that the Basic tier has a "Basic availability" model with a single database file and is not designed for high availability or zone redundancy.

4. Microsoft Documentation, "Compare the vCore and DTU-based purchasing models of Azure SQL Database": This page highlights that the Premium tier (in both models) is designed for high performance and high availability, whereas Managed Instance is for "lift-and-shift of the largest number of SQL Server applications to the cloud with minimal changes," which often comes at a higher price point.

Question 5

DRAG DROP You have an on-premises application named App1. Customers use App1 to manage digital images. You plan to migrate App1 to Azure. You need to recommend a data storage solution for App1. The solution must meet the following image storage requirements: • Encrypt images at rest. • Allow files up to 50 MB.

Show Answer
Correct Answer:

IMAGE STORAGE: AZURE BLOB STORAGE

CUSTOMER ACCOUNTS: AZURE SQL DATABASE

Explanation

Azure Blob storage is the optimal choice for image storage. It's specifically designed to store massive amounts of unstructured data, such as images, videos, and documents. It easily accommodates files up to 50 MB and provides server-side encryption by default, satisfying both requirements. Storing large binary files directly in a database is generally inefficient and not recommended.


Azure SQL Database is the most suitable service for customer accounts. Customer account data is typically structured and relational (e.g., user ID, name, email, password). As a fully managed relational database-as-a-service, Azure SQL Database provides transactional consistency, data integrity, and robust querying capabilities, which are essential for managing user account information effectively.
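For context, a minimal sketch of the image-upload path using the azure-storage-blob Python SDK is shown below. The account URL, container, and blob names are placeholders; server-side encryption at rest is enabled by default, so no extra code is required for that requirement.

```python
# Minimal sketch (assumes the azure-storage-blob and azure-identity packages;
# account URL, container, and file names are placeholders).
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<storageaccount>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
container = service.get_container_client("images")

# Upload an image file (up to ~50 MB in this scenario) as a block blob.
with open("photo.jpg", "rb") as data:
    container.upload_blob(name="customers/1001/photo.jpg", data=data, overwrite=True)
```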

References

Azure Blob Storage Documentation: Microsoft's official documentation states that Azure Blob storage is optimized for storing massive amounts of unstructured data. Common use cases include "Serving images or documents directly to a browser" and "Storing files for distributed access."

Source: Microsoft Docs, "Introduction to Azure Blob storage," Use cases section.

Azure SQL Database Documentation: The official documentation describes Azure SQL Database as a fully managed relational database service built for the cloud. It is ideal for applications that require a relational data model with transactional consistency and data integrity, making it a standard choice for storing structured data like user profiles and customer accounts.

Source: Microsoft Docs, "What is Azure SQL Database?," Overview section.

Comparison of Azure Storage Options: Microsoft's "Choose a data storage approach in Azure" guide recommends Blob storage for "images, videos, documents...large binary objects" and relational databases like Azure SQL Database for "transactional data" and data requiring a "high degree of integrity," such as customer information.

Source: Microsoft Azure Architecture Center, "Choose a data storage approach in Azure," Relational databases and Blob storage sections.

Question 6

You have a multi-tier app named App1 and an Azure SQL database named SQL1. The backend service of App1 writes data to SQL1. Users use the App1 client to read the data from SQL1. During periods of high utilization, the users experience delays retrieving the data. You need to minimize how long it takes to retrieve the data. What should you include in the solution?
Options
A: Azure Synapse Analytics
B: Azure Content Delivery Network (CDN)
C: Azure Data Factory
D: Azure Cache for Redis
Show Answer
Correct Answer:
Azure Cache for Redis
Explanation
The scenario describes read-latency issues with an Azure SQL database during periods of high utilization. Azure Cache for Redis is an in-memory data store that provides a high-throughput, low-latency caching solution. By implementing a caching layer with Redis, frequently accessed data can be stored in memory. When the application requests data, it first checks the Redis cache. If the data is present (a cache hit), it is returned immediately, avoiding a slower query to the SQL database. This significantly reduces data retrieval times for users and lessens the load on the database, directly addressing the performance bottleneck.
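For context, a minimal sketch of the cache-aside pattern with the redis Python client is shown below. The cache host, key format, and the query_sql1 helper are illustrative placeholders rather than part of the question.

```python
# Minimal sketch of the cache-aside pattern (assumes the redis package; the host,
# access key, key format, and query_sql1 helper are placeholders).
import json
import redis

cache = redis.StrictRedis(
    host="<cache-name>.redis.cache.windows.net", port=6380,
    password="<access-key>", ssl=True,
)

def query_sql1(report_id: str) -> dict:
    # Hypothetical stand-in for the real query against SQL1 (e.g., via pyodbc).
    return {"report_id": report_id, "rows": []}

def get_report(report_id: str) -> dict:
    key = f"report:{report_id}"
    cached = cache.get(key)
    if cached:                               # cache hit: skip the database entirely
        return json.loads(cached)
    data = query_sql1(report_id)             # cache miss: fall back to SQL1
    cache.setex(key, 300, json.dumps(data))  # keep the result for 5 minutes
    return data
```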
Why Incorrect Options are Wrong

A. Azure Synapse Analytics is a large-scale data warehousing and big data analytics service, not designed for low-latency transactional application caching.

B. Azure Content Delivery Network (CDN) is used to cache static web content (like images and scripts) at edge locations, not dynamic data from a database.

C. Azure Data Factory is a cloud-based data integration (ETL/ELT) service for orchestrating data movement and transformation, not for real-time application performance improvement.

References

1. Microsoft Documentation, Azure Cache for Redis. "What is Azure Cache for Redis?". Under the section "Common scenarios," the first listed scenario is "Data cache." It states, "It's a common technique to cache data in-memory... to improve the performance of an application. Caching with Azure Cache for Redis can increase performance by orders of magnitude."

2. Microsoft Documentation, Azure Architecture Center. "Cache-Aside pattern". This document describes the exact pattern for solving the problem in the question: "Load data on demand from a data store into a cache. This can improve performance and also helps to maintain consistency between data held in the cache and data in the underlying data store."

3. Microsoft Documentation, Azure Synapse Analytics. "What is Azure Synapse Analytics?". The overview clearly defines it as "a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics." This is distinct from an application performance cache.

Question 7

You need to design a highly available Azure SQL database that meets the following requirements: Failover between replicas of the database must occur without any data loss. The database must remain available in the event of a zone outage. Costs must be minimized. Which deployment option should you use?
Options
A: Azure SQL Database Standard
B: Azure SQL Database Serverless
C: Azure SQL Managed Instance General Purpose
D: Azure SQL Database Premium
Show Answer
Correct Answer:
Azure SQL Database Serverless
Explanation
The solution requires availability during a zone outage, no data loss on failover, and minimal cost. The Azure SQL Database Serverless compute tier, which is part of the General Purpose service tier, meets all these requirements. It supports a zone-redundant configuration that synchronously replicates data across multiple availability zones within a region, ensuring both high availability and zero data loss (RPO=0). Compared to the Premium tier, which also offers zone redundancy, the General Purpose/Serverless tier is the more budget-oriented option, thus satisfying the requirement to minimize costs.
Why Incorrect Options are Wrong

A. Azure SQL Database Standard: This service tier does not support zone-redundant configurations and cannot meet the requirement for availability during a zone outage.

C. Azure SQL Managed Instance General Purpose: This service tier does not support zone redundancy. Only the Business Critical tier for SQL Managed Instance offers this capability.

D. Azure SQL Database Premium: While this tier supports zone redundancy and ensures no data loss, it is more expensive than the Serverless/General Purpose tier, failing the cost minimization requirement.

References

1. Microsoft Learn | High availability for Azure SQL Database and SQL Managed Instance: Under the "Zone-redundant availability" section, it states, "Zone-redundant availability is available for databases in the General Purpose, Premium, Business Critical, and Hyperscale service tiers." It also explicitly states, "Zone redundancy for the serverless compute tier of the General Purpose service tier is generally available." This confirms that Serverless (B) and Premium (D) support zone redundancy, while Managed Instance General Purpose (C) does not.

2. Microsoft Learn | vCore purchasing model overview - Azure SQL Database: This document compares the service tiers. The "General Purpose service tier" section describes it as a "budget-oriented" option suitable for "most business workloads." The "Premium service tier" is described as being for "I/O-intensive production workloads." This supports the choice of a General Purpose-based option (Serverless) for cost minimization over Premium.

3. Microsoft Learn | Serverless compute tier for Azure SQL Database: This document details the cost model for Serverless, stating it "bills for the amount of compute used per second." This model is designed to optimize costs, particularly for workloads with intermittent usage patterns, reinforcing its position as the most cost-effective choice among the zone-redundant options.

Question 8

You have an on-premises Microsoft SQL Server named SQL1 that hosts 50 databases. You plan to migrate SQL1 to Azure SQL Managed Instance. You need to perform an offline migration of SQL1. The solution must minimize administrative effort. What should you include in the solution?
Options
A: SQL Server Migration Assistant (SSMA)
B: Azure Migrate
C: Data Migration Assistant (DMA)
D: Azure Database Migration Service
Show Answer
Correct Answer:
Azure Database Migration Service
Explanation
Azure Database Migration Service (DMS) is a fully managed service designed to enable seamless, large-scale database migrations to Azure data platforms. For an offline migration of 50 databases from an on-premises SQL Server to Azure SQL Managed Instance, DMS provides an orchestrated and resilient workflow. It can use native full database backups stored in Azure Blob Storage to restore the databases to the target instance. This approach is highly efficient, scalable for many databases, and significantly minimizes the administrative effort required compared to using standalone tools for each database.
Why Incorrect Options are Wrong

A. SQL Server Migration Assistant (SSMA): SSMA is primarily for assessing and migrating from heterogeneous (non-SQL) database sources like Oracle or DB2 to SQL Server or Azure SQL, not for SQL-to-SQL migrations.

B. Azure Migrate: Azure Migrate is a central hub for discovery, assessment, and migration planning. For the actual database migration execution, it integrates with and uses Azure Database Migration Service (DMS).

C. Data Migration Assistant (DMA): DMA is primarily an assessment tool to identify compatibility issues. While it can perform small-scale migrations, it is not designed for orchestrating the migration of many databases, which would increase administrative effort.

References

1. Azure Database Migration Service Documentation: "Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline using DMS". This official tutorial explicitly states, "You can use Azure Database Migration Service to migrate the databases from an on-premises SQL Server instance to an Azure SQL Managed Instance." It details the offline migration process using native backups, which is the scenario described.

Source: Microsoft Docs, "Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline using DMS", Prerequisites section.

2. Azure Database Migration Service Overview: "Azure Database Migration Service is a fully managed service designed to enable seamless migrations from multiple database sources to Azure Data platforms with minimal downtime." This highlights its role as a managed, orchestrated service, which aligns with minimizing administrative effort.

Source: Microsoft Docs, "What is Azure Database Migration Service?", Overview section.

3. Data Migration Assistant (DMA) Documentation: "Data Migration Assistant (DMA) helps you upgrade to a modern data platform by detecting compatibility issues that can impact database functionality... After assessing, DMA helps you migrate your schema, data, and uncontained objects from your source server to your target server." This positions DMA as an assessment tool with migration capabilities, but not as the primary orchestration service for large-scale migrations like DMS.

Source: Microsoft Docs, "Overview of Data Migration Assistant", Introduction section.

Question 9

HOTSPOT

-


You have an app that generates 50,000 events daily.


You plan to stream the events to an Azure event hub and use Event Hubs Capture to implement cold path processing of the events. The output of Event Hubs Capture will be consumed by a reporting system.


You need to identify which type of Azure storage must be provisioned to support Event Hubs Capture, and which inbound data format the reporting system must support.


What should you identify? To answer, select the appropriate options in the answer area.

Answer
Show Answer
Correct Answer:

STORAGE TYPE: AZURE DATA LAKE STORAGE GEN2

DATA FORMAT: AVRO

Explanation

Azure Event Hubs Capture automatically archives streaming data to a user-specified storage container. This feature supports either an Azure Blob Storage or an Azure Data Lake Storage Gen2 account for storing the captured data. Therefore, Azure Data Lake Storage Gen2 is a valid storage type to provision.


The data is always written in the Apache Avro format, which is a compact, fast, binary format that includes the schema inline. Consequently, any downstream reporting system consuming the data from the capture destination must be able to read and process files in the Avro format.
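For context, a minimal sketch of how a reporting system could read a captured file with the fastavro Python package is shown below. The file path is a placeholder; the field names follow the documented capture schema, with the original event payload carried in Body.

```python
# Minimal sketch (assumes the fastavro package and a locally downloaded capture
# file; the path is a placeholder).
from fastavro import reader

with open("capture/0.avro", "rb") as f:
    for record in reader(f):
        payload = record["Body"]  # raw bytes of the original event (JSON in this scenario)
        print(record["EnqueuedTimeUtc"], payload.decode("utf-8", errors="replace"))
```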

References

Microsoft Azure Documentation, "Overview of Event Hubs Capture."

Section: Introduction

Content: "Event Hubs Capture enables you to automatically deliver the streaming data in Event Hubs to an Azure Blob storage or Azure Data Lake Storage account of your choice... Captured data is written in Apache Avro format: a compact, fast, binary format that provides rich data structures with inline schema."

Microsoft Azure Documentation, "Capture streaming events using the Azure portal."

Section: Enable Event Hubs Capture

Content: "For Capture provider, select Azure Storage Account... Event Hubs writes the captured data in Apache Avro format." This section details the configuration where the user must select a compatible storage account type.

Question 10

You are designing an app that will include two components. The components will communicate by sending messages via a queue. You need to recommend a solution to process the messages by using a First in. First out (FIFO) pattern. What should you include in the recommendation?
Options
A: storage queues with a custom metadata setting
B: Azure Service Bus queues with sessions enabled
C: Azure Service Bus queues with partitioning enabled
D: storage queues with a stored access policy
Show Answer
Correct Answer:
Azure Service Bus queues with sessions enabled
Explanation
Azure Service Bus is the appropriate service for scenarios requiring guaranteed First-In, First-Out (FIFO) message ordering. While a standard Service Bus queue does not guarantee FIFO when multiple competing consumers are present, enabling the sessions feature does. Message sessions group a sequence of related messages, and a session-aware receiver locks the session, ensuring all messages from that specific session are processed in the order they were sent by a single consumer. This provides a strict, ordered handling of messages, fulfilling the FIFO requirement.
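For context, a minimal sketch of session-based FIFO processing with the azure-servicebus Python SDK is shown below. The connection string, queue name, and session ID are placeholders.

```python
# Minimal sketch (assumes the azure-servicebus package; connection string, queue
# name, and session ID are placeholders). Messages that share a session ID are
# delivered in order to the receiver that owns that session.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"
QUEUE = "transactions"

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Sender: stamp related messages with the same session ID.
    with client.get_queue_sender(QUEUE) as sender:
        for step in ("order", "billing", "shipping"):
            sender.send_messages(ServiceBusMessage(step, session_id="order-1001"))

    # Receiver: lock the session and process its messages in FIFO order.
    with client.get_queue_receiver(QUEUE, session_id="order-1001") as receiver:
        for msg in receiver.receive_messages(max_wait_time=5):
            print(str(msg))
            receiver.complete_message(msg)
```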
Why Incorrect Options are Wrong

A. storage queues with a custom metadata setting: Azure Storage Queues are designed for high-throughput and do not guarantee FIFO ordering. Custom metadata is for annotating queues and does not influence message processing order.

C. Azure Service Bus queues with partitioning enabled: Partitioning is a feature for increasing throughput and availability by distributing the queue across multiple message brokers. It can disrupt strict ordering unless used in conjunction with sessions.

D. storage queues with a stored access policy: A stored access policy is a security mechanism for managing access permissions via Shared Access Signatures (SAS) and has no impact on the message delivery order.

---

References

1. Microsoft Azure Documentation, "Message sessions": "To realize a FIFO guarantee in Service Bus, use sessions. Message sessions enable joint and ordered handling of unbounded sequences of related messages." (Section: "Message sessions", Paragraph 1).

2. Microsoft Azure Documentation, "Storage queues and Service Bus queues - compared and contrasted": "Service Bus sessions enable you to process messages in a first-in, first-out (FIFO) manner... Azure Storage Queues don't natively support FIFO ordering." (Section: "Feature comparison", Table Row: "Ordering").

3. Microsoft Azure Documentation, "Partitioned messaging entities": "When a client sends a message to a partitioned queue or topic, Service Bus checks for the presence of a partition key. If it finds one, it selects the partition based on that key... If a partition key isn't specified but a session ID is, Service Bus uses the session ID as the partition key." This highlights that partitioning alone doesn't guarantee order; it's the session ID that ensures related messages land on the same partition to maintain order. (Section: "Use of partition keys").

Question 11

You plan to deploy an application named App1 that will run in containers on Azure Kubernetes Service (AKS) clusters. The AKS clusters will be distributed across four Azure regions. You need to recommend a storage solution to ensure that updated container images are replicated automatically to all the Azure regions hosting the AKS clusters. Which storage solution should you recommend?
Options
A: Azure Cache for Redis
B: Premium SKU Azure Container Registry
C: Azure Content Delivery Network (CDN)
D: geo-redundant storage (GRS) accounts
Show Answer
Correct Answer:
Premium SKU Azure Container Registry
Explanation
Azure Container Registry (ACR) is the managed, private Docker registry service for storing and managing container images. The specific requirement is to automatically replicate images to multiple Azure regions where AKS clusters are deployed. This is achieved using the geo-replication feature, which is exclusively available in the Premium SKU of Azure Container Registry. Geo-replication enables a single registry to serve multiple regions, providing network-close, fast, and reliable image pulls for regional deployments like AKS, while managing the replication automatically.
Why Incorrect Options are Wrong

A. Azure Cache for Redis: This is an in-memory data store used for caching application data, not for storing or managing container images.

C. Azure Content Delivery Network (CDN): A CDN is designed to cache and deliver static web content to users from edge locations, not to function as a container image registry.

D. geo-redundant storage (GRS) accounts: While GRS provides data replication to a secondary region for disaster recovery, it is a general-purpose storage service and lacks the Docker registry API required by AKS to pull images.

References

1. Microsoft Documentation, Azure Container Registry: "Geo-replication in Azure Container Registry". This document states, "Geo-replication is a feature of Premium SKU container registries. A geo-replicated registry...enables you to manage a single registry across multiple regions." It further explains that this allows for "Network-close registry access" which is ideal for distributed AKS clusters.

2. Microsoft Documentation, Azure Container Registry: "Azure Container Registry service tiers". Under the "Feature comparison" table, "Geo-replication" is listed as a feature available only for the "Premium" service tier.

3. Microsoft Documentation, Azure Storage: "Data redundancy". This document describes Geo-redundant storage (GRS) as a disaster recovery solution that replicates data to a secondary region hundreds of miles away, which is different from the active-active, network-close access provided by ACR geo-replication.

Question 12

HOTSPOT You have an on-premises Microsoft SQL Server database named SQL1. You plan to migrate SQL1 to Azure. You need to recommend a hosting solution for SQL1. The solution must meet the following requirements: • Support the deployment of multiple secondary, read-only replicas. • Support automatic replication between primary and secondary replicas. • Support failover between primary and secondary replicas within a 15-minute recovery time objective (RTO).

Show Answer
Correct Answer:

AZURE SERVICE OR SERVICE TIER: AZURE SQL DATABASE

REPLICATION MECHANISM: ACTIVE GEO-REPLICATION


Explanation

Azure SQL Database is the correct service choice. It's a fully managed platform-as-a-service (PaaS) database engine that supports various service tiers. Tiers like Business Critical and Hyperscale are specifically designed for high availability and performance, and they support the creation of readable secondary replicas, fulfilling the core requirement.

Active geo-replication is the specific technology within Azure SQL Database used to create and manage multiple readable secondary databases in different geographical regions. This feature provides:

  • Multiple secondary, read-only replicas: You can create up to four readable secondaries, which can be used for read scale-out and disaster recovery.
  • Automatic replication: Data is replicated asynchronously and automatically from the primary to the secondary replicas.
  • Fast failover: It supports a user-initiated failover that can easily meet a 15-minute Recovery Time Objective (RTO), typically completing in under a minute.

References

Microsoft Documentation | Active geo-replication for Azure SQL Database: "Active geo-replication is a feature that allows you to create a continuously synchronized readable secondary database for a primary database... You can create up to four secondaries in the same or different regions." This source confirms that active geo-replication supports multiple, readable, and automatically synchronized replicas.

Microsoft Documentation | Business continuity overview with Azure SQL Database: This document details the available business continuity solutions. Under the section "Active geo-replication," it explains, "Active geo-replication... lets you create readable secondary replicas of individual databases on a server in a different region." It also specifies the RPO and RTO, which align with the scenario's requirements.

Microsoft Documentation | Hyperscale service tier: "The Hyperscale service tier in Azure SQL Database... provides the ability to scale out the read workload by using a number of read-only replicas." This confirms that specific tiers within the Azure SQL Database service meet the requirement for multiple read-only replicas. Active geo-replication is a feature available for these tiers.

Question 13

Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity. Several virtual machines exhibit network connectivity issues. You need to analyze the network traffic to identify whether packets are being allowed or denied from Azure to the virtual machines. Solution: Install and configure the Azure Monitoring agent and the Dependency Agent on all the virtual machines. Use VM insights in Azure Monitor to analyze the network traffic. Does this meet the goal?
Options
A: Yes
B: No
Show Answer
Correct Answer:
No
Explanation
The proposed solution does not meet the goal. VM insights in Azure Monitor is designed to monitor the performance and health of virtual machines, including their running processes and dependencies. While its Map feature can visualize network connections and identify failed ones, it does not provide the specific functionality to analyze network traffic against security rules to determine if packets are being explicitly allowed or denied. The appropriate tool for this task is Azure Network Watcher. Specifically, its IP flow verify feature can check if a packet is allowed or denied to or from a VM based on configured Network Security Group (NSG) rules. Additionally, NSG flow logs can be enabled to record all IP traffic flowing through an NSG, including the allow/deny decision and the specific rule that was applied.
Why Incorrect Options are Wrong

A. Yes: This is incorrect because VM insights focuses on performance and dependency mapping, not on the analysis of security rules that determine whether network packets are allowed or denied.

References

1. Microsoft Learn | Azure Network Watcher documentation. "What is Azure Network Watcher?". This document introduces Network Watcher as the primary suite for network monitoring and diagnostics in Azure. It lists IP flow verify and NSG flow logs as key features for troubleshooting connectivity.

2. Microsoft Learn | IP flow verify. "Introduction to IP flow verify". This document states, "IP flow verify checks if a packet is allowed or denied to or from a virtual machine. The information returned includes whether the packet is allowed or denied, and the network security group (NSG) rule that allowed or denied the traffic." This directly addresses the question's requirement.

3. Microsoft Learn | NSG flow logs. "Introduction to flow logging for network security groups". This source explains, "Network security group (NSG) flow logs...allows you to log information about IP traffic flowing through an NSG. ... For each rule, flow logs record if the traffic was allowed or denied..." This provides a method for historical analysis of allowed/denied traffic.

4. Microsoft Learn | VM insights. "Overview of VM insights". This document describes VM insights as a tool to "monitor the performance and health of your virtual machines...and monitor their processes and dependencies." This description confirms its purpose is different from analyzing security rule enforcement.

Question 14

You need to recommend a solution for the App1 maintenance task. The solution must minimize costs. What should you include in the recommendation?
Options
A: an Azure logic app
B: an Azure function
C: an Azure virtual machine
D: an App Service WebJob
Show Answer
Correct Answer:
an Azure function
Explanation
Azure Functions, operating on a Consumption plan, provide a serverless compute experience. This pricing model is the most cost-effective for a maintenance task that likely runs infrequently or on a schedule. With the Consumption plan, you are billed only for the precise time your code executes, and you benefit from a monthly free grant of execution time and requests. This pay-per-use model eliminates the cost of idle infrastructure, directly fulfilling the requirement to minimize costs.
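For context, a minimal sketch of a scheduled maintenance function using the Azure Functions Python v2 programming model is shown below. The CRON schedule is a placeholder and the actual maintenance logic for App1 is omitted.

```python
# Minimal sketch (assumes the azure-functions package; the schedule is a placeholder).
import logging
import azure.functions as func

app = func.FunctionApp()

@app.schedule(schedule="0 0 2 * * *", arg_name="timer", run_on_startup=False)
def app1_maintenance(timer: func.TimerRequest) -> None:
    # Runs daily at 02:00 UTC; on the Consumption plan you pay only for execution time.
    logging.info("Starting the App1 maintenance task")
    # ... maintenance logic for App1 goes here ...
```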
Why Incorrect Options are Wrong

A. an Azure logic app: While also a cost-effective serverless option, Logic Apps are primarily for designing and orchestrating workflows. For a singular, code-based maintenance task, Azure Functions are a more direct and often cheaper compute solution.

C. an Azure virtual machine: A virtual machine incurs costs whenever it is running, even if the maintenance task is not active. This makes it the most expensive option for an infrequent task, directly contradicting the cost-minimization requirement.

D. an App Service WebJob: A WebJob runs on an App Service Plan, which has a fixed hourly cost. This is less cost-effective for an infrequent task compared to the per-second, on-demand billing of an Azure Function on a Consumption plan.

References

1. Azure Functions Documentation, "Azure Functions pricing": "The Consumption plan is the fully serverless hosting plan for Azure Functions... With the Consumption plan, you only pay when your functions are running." This source directly supports the cost-effectiveness of Azure Functions for tasks that are not continuous.

2. Azure Documentation, "Choose the right integration and automation services in Azure": This document compares various services. It states, "Functions is a 'compute on-demand' service," while for VMs, you "pay for the virtual machines that you reserve, whether you use them or not." This highlights the fundamental cost difference between serverless (Functions) and IaaS (VMs).

3. Azure App Service Documentation, "Run background tasks with WebJobs in Azure App Service": "WebJobs... run in the context of an App Service app... The pricing model for WebJobs is based on the App Service plan." This confirms that WebJobs are tied to the continuous cost of an App Service Plan, making them less ideal for cost-minimizing infrequent tasks compared to a true pay-per-use service.

Question 15

You have an on-premises storage solution. You need to migrate the solution to Azure. The solution must support Hadoop Distributed File System (HDFS). What should you use?
Options
A: Azure Data Lake Storage Gen2
B: Azure NetApp Files
C: Azure Data Share
D: Azure Table storage
Show Answer
Correct Answer:
Azure Data Lake Storage Gen2
Explanation
Azure Data Lake Storage (ADLS) Gen2 is specifically designed for big data analytics workloads. It is built on Azure Blob Storage and includes a hierarchical namespace, which is a key requirement for the Hadoop Distributed File System (HDFS). ADLS Gen2 provides HDFS compatibility through the Azure Blob File System (ABFS) driver, allowing big data frameworks like Hadoop and Spark to access data in ADLS Gen2 as if it were a native HDFS file system. This makes it the ideal choice for migrating an on-premises HDFS-based solution to Azure without significant re-architecture.
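For context, a minimal sketch of working with the hierarchical namespace through the azure-storage-file-datalake Python SDK is shown below. The account, filesystem, and directory names are placeholders.

```python
# Minimal sketch (assumes the azure-storage-file-datalake and azure-identity
# packages; account, filesystem, and directory names are placeholders).
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<account>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
fs = service.get_file_system_client("raw")
fs.create_directory("landing/2025/01/15")  # real directories, as HDFS workloads expect

# Hadoop/Spark jobs would address the same data through the ABFS driver, e.g.:
#   abfss://raw@<account>.dfs.core.windows.net/landing/2025/01/15
```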
Why Incorrect Options are Wrong

B. Azure NetApp Files: This is a high-performance file storage service supporting NFS and SMB protocols, not HDFS. It is designed for enterprise file shares and HPC, not as a direct HDFS replacement.

C. Azure Data Share: This is a service for securely sharing data with external organizations. It is not a primary storage solution or a file system.

D. Azure Table storage: This is a NoSQL key-value store for structured, non-relational data. It is not a file system and does not support HDFS.

References

1. Microsoft Learn. "Introduction to Azure Data Lake Storage Gen2." Azure Documentation. "Data Lake Storage Gen2 is the primary storage for Azure HDInsight and Azure Databricks. It is compatible with Hadoop Distributed File System (HDFS)."

2. Microsoft Learn. "The Azure Blob File System driver (ABFS): A dedicated Azure Storage driver for Hadoop." Azure Documentation. "Azure Blob storage can now be accessed through a new driver, the Azure Blob File System driver or ABFS. The ABFS driver is part of Apache Hadoop and is included in many of the commercial distributions of Hadoop. Using this driver, many applications and frameworks can access data in Azure Blob Storage without any code explicitly referencing Data Lake Storage Gen2."

3. Microsoft Learn. "What is Azure NetApp Files." Azure Documentation. "Azure NetApp Files is an enterprise-class, high-performance, metered file storage service... It supports multiple storage protocols in a single service, including NFSv3, NFSv4.1, and SMB3.1.x." (Note: No mention of HDFS).

Question 16

HOTSPOT You need to deploy an instance of SQL Server on Azure Virtual Machines. The solution must meet the following requirements: • Support 15,000 disk IOPS. • Support SR-IOV. • Minimize costs. What should you include in the solution? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Show Answer
Correct Answer:

VIRTUAL MACHINE SERIES: DS

DISK TYPE: PREMIUM SSD


Explanation

To meet the requirements, the DS-series virtual machine is the most appropriate choice. It supports Single Root I/O Virtualization (SR-IOV), which Azure calls Accelerated Networking, and is a general-purpose series that is more cost-effective for a SQL Server workload compared to the specialized and more expensive GPU-optimized NC and NV series.

For the disk, Premium SSD is the correct option. To achieve 15,000 IOPS, a single disk (like a P60 providing 16,000 IOPS) or striping multiple smaller Premium SSDs can be used. This meets the performance requirement while being more cost-effective than Ultra Disk. Standard SSDs cannot provide the required IOPS.

References

Azure Virtual Machine Sizes - General purpose: Microsoft Documentation states that the Dsv3-series (a common DS series) is suitable for "many enterprise applications" and provides a "balance of CPU, memory, and disk." It is more cost-effective than GPU-optimized series for non-GPU workloads.

Source: Microsoft Docs, "Sizes for virtual machines in Azure," General purpose section.

Azure Accelerated Networking (SR-IOV): Microsoft's documentation confirms that Accelerated Networking is supported on most general-purpose instances with 2 or more vCPUs, including the DSv2-series and later.

Source: Microsoft Docs, "Azure Accelerated Networking overview," Supported VM instances section.

Azure Managed Disk Types Comparison: The official documentation provides a table comparing disk types. A Premium SSD P60 disk delivers 16,000 IOPS, meeting the requirement. Ultra Disks offer higher performance but at a higher cost, making Premium SSD the most cost-effective choice for this scenario.

Source: Microsoft Docs, "Select a disk type for Azure IaaS VMs - managed disks," Disk type comparison table.

Performance guidelines for SQL Server on Azure Virtual Machines: Microsoft's performance guidelines recommend Premium SSDs for most production SQL Server workloads due to their balance of performance and cost.

Source: Microsoft Docs, "Performance guidelines for SQL Server in Azure Virtual Machines," Storage section.

Question 17

Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases. The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region. You need to recommend a solution to meet the regulatory requirement. Solution: You recommend using the Regulatory compliance dashboard in Microsoft Defender for Cloud. Does this meet the goal?
Options
A: Yes
B: No
Show Answer
Correct Answer:
No
Explanation
The Regulatory compliance dashboard in Microsoft Defender for Cloud is a monitoring and reporting tool. It provides insights into your compliance posture by continuously assessing your environment against controls and best practices from various standards. However, it does not enforce policies or prevent resource creation. The core requirement is to enforce a restriction on which Azure regions can be used for deployment. The dashboard can only report on non-compliant resources after they have been deployed, it cannot proactively block their creation. The appropriate service for enforcing such deployment restrictions is Azure Policy.
Why Incorrect Options are Wrong

A. Yes: This is incorrect because the Defender for Cloud dashboard is for assessing and reporting on compliance, not for enforcing deployment rules like restricting resource locations.

References

1. Microsoft Learn, Azure Policy Overview. "Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards... Common use cases for Azure Policy include... enforcing that services can only be deployed to specific regions."

Source: Microsoft Documentation, learn.microsoft.com/en-us/azure/governance/policy/overview, "What is Azure Policy?" section.

2. Microsoft Learn, Tutorial: Improve your regulatory compliance. "Defender for Cloud helps you streamline the process for meeting regulatory compliance requirements, using the regulatory compliance dashboard... The dashboard shows the status of all the assessments within your environment for a chosen standard or regulation." This describes a monitoring and assessment function, not an enforcement mechanism.

Source: Microsoft Documentation, learn.microsoft.com/en-us/azure/defender-for-cloud/regulatory-compliance-dashboard, "What is the regulatory compliance dashboard?" section.

3. Microsoft Learn, Azure Policy built-in policy definitions. The built-in policy definition named "Allowed locations" has the description: "This policy enables you to restrict the locations your organization can specify when deploying resources." This directly addresses the requirement to enforce deployment to specific regions.

Source: Microsoft Documentation, learn.microsoft.com/en-us/azure/governance/policy/samples/built-in-policies, "Built-in policy definitions" table, under the "General" category.

Question 18

You plan to deploy multiple instances of an Azure web app across several Azure regions. You need to design an access solution for the app. The solution must meet the following replication requirements: • Support rate limiting. • Balance requests between all instances. • Ensure that users can access the app in the event of a regional outage. Solution: You use Azure Front Door to provide access to the app. Does this meet the goal?
Options
A: Yes
B: No
Show Answer
Correct Answer:
Yes
Explanation
Azure Front Door is a global HTTP/S load balancer and web application acceleration service. It meets all the specified requirements. It provides global load balancing to distribute traffic across multiple web app instances in different regions, using methods like latency-based routing. Its health probe mechanism detects regional outages and automatically reroutes traffic to healthy instances, ensuring high availability. Furthermore, the integrated Web Application Firewall (WAF) on Azure Front Door (Standard or Premium tier) can be configured with rate-limiting rules to protect the application from traffic spikes and denial-of-service attacks.
Why Incorrect Options are Wrong

B. No: This is incorrect. Azure Front Door's core features, including global load balancing, health probes for failover, and an integrated Web Application Firewall (WAF) with rate-limiting capabilities, directly fulfill all the solution requirements.

References

1. Microsoft Documentation | What is Azure Front Door?

"Azure Front Door is a global, scalable entry-point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications... Front Door provides... global load balancing with instant failover."

Reference: learn.microsoft.com/en-us/azure/frontdoor/front-door-overview, "What is Azure Front Door?" section.

2. Microsoft Documentation | Routing methods for Azure Front Door

"Azure Front Door supports different traffic-routing methods to determine how to route your HTTP/S traffic... These routing methods can be used to support different routing scenarios, including routing to the lowest latency backends, implementing failover configurations, and distributing traffic across backends."

Reference: learn.microsoft.com/en-us/azure/frontdoor/front-door-routing-methods, "Overview" section.

3. Microsoft Documentation | Web Application Firewall (WAF) rate limiting on Azure Front Door

"A rate limit rule controls the number of requests allowed from a particular client IP address to the application during a one-minute or five-minute duration... Rate limiting can be configured to work with other WAF rules, such as rules that protect you against SQL injection or cross-site scripting attacks."

Reference: learn.microsoft.com/en-us/azure/web-application-firewall/afds/waf-front-door-rate-limit, "Rate limiting and Azure Front Door" section.

Question 19

You are developing an app that will use Azure Functions to process Azure Event Hubs events. Request processing is estimated to take between five and 20 minutes. You need to recommend a hosting solution that meets the following requirements: • Supports estimates of request processing runtimes. • Supports event-driven autoscaling for the app. Which hosting plan should you recommend?
Options
A: Consumption
B: App Service
C: Dedicated
D: Premium
Show Answer
Correct Answer:
Premium
Explanation
The Azure Functions Premium plan is the only hosting solution that satisfies both requirements. It supports long-running functions with a default timeout of 30 minutes (configurable to be effectively unlimited), which accommodates the required 5-to-20-minute processing time. Furthermore, the Premium plan provides the same event-driven autoscaling mechanism as the Consumption plan, allowing the number of function instances to scale automatically based on the volume of incoming events from Azure Event Hubs. It also adds features like pre-warmed instances to eliminate cold starts, which is beneficial for event-driven workloads.
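For context, a minimal sketch of an Event Hubs-triggered function using the Azure Functions Python v2 programming model is shown below. The event hub name and connection app setting are placeholders; on the Premium plan the default functionTimeout of 30 minutes can be raised in host.json if a run approaches that limit.

```python
# Minimal sketch (assumes the azure-functions package; event hub name and the
# connection app setting are placeholders).
import logging
import azure.functions as func

app = func.FunctionApp()

@app.event_hub_message_trigger(
    arg_name="event",
    event_hub_name="<event-hub-name>",
    connection="EVENTHUB_CONNECTION",   # app setting holding the connection string
)
def process_event(event: func.EventHubEvent) -> None:
    logging.info("Processing event: %s", event.get_body().decode("utf-8"))
    # long-running processing (5-20 minutes) goes here
```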
Why Incorrect Options are Wrong

A. Consumption: This plan has a maximum execution timeout of 10 minutes, which is insufficient for the required 20-minute processing time.

B. App Service: This plan does not support event-driven autoscaling. Scaling is configured manually or based on performance metrics like CPU usage, not the number of events.

C. Dedicated: This is another name for the App Service plan and shares the same limitation of not providing event-driven autoscaling.

---

References

1. Microsoft Learn, Azure Functions hosting options. Under the "Hosting plans comparison" table, it explicitly states that the Premium plan supports "Event-driven" scaling and has a default timeout of 30 minutes, which can be configured to be unlimited. In contrast, the Consumption plan's maximum timeout is 10 minutes, and the Dedicated (App Service) plan's scaling is "Manual/Autoscale" (based on metrics, not events).

Source: Microsoft Learn. (2023). Azure Functions hosting options. Retrieved from https://learn.microsoft.com/en-us/azure/azure-functions/functions-hosting-options#hosting-plans-comparison

2. Microsoft Learn, Azure Functions triggers and bindings concepts. In the "Timeout" section, the documentation confirms the timeout limits for each plan. It specifies, "The default timeout for functions on a Consumption plan is 5 minutes... you can change this value to a maximum of 10 minutes... For Premium and Dedicated plan functions, the default is 30 minutes, and there is no overall max." This directly invalidates the Consumption plan for the 20-minute requirement.

Source: Microsoft Learn. (2023). Azure Functions triggers and bindings concepts - Timeout. Retrieved from https://learn.microsoft.com/en-us/azure/azure-functions/functions-triggers-bindings?tabs=csharp#timeout

Question 20

HOTSPOT You are designing a data storage solution to support reporting. The solution will ingest high volumes of data in the JSON format by using Azure Event Hubs. As the data arrives, Event Hubs will write the data to storage. The solution must meet the following requirements: • Organize data in directories by date and time. • Allow stored data to be queried directly, transformed into summarized tables, and then stored in a data warehouse. • Ensure that the data warehouse can store 50 TB of relational data and support between 200 and 300 concurrent read operations. Which service should you recommend for each type of data store? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Show Answer
Correct Answer:

DATA STORE FOR THE INGESTED DATA: AZURE DATA LAKE STORAGE GEN2

DATA STORE FOR THE DATA WAREHOUSE: AZURE SYNAPSE ANALYTICS DEDICATED SQL POOLS


Explanation

Azure Data Lake Storage Gen2 is the correct choice for the ingested data store. Its key feature is the hierarchical namespace, which allows data to be organized into a directory structure, satisfying the requirement to organize data by date and time. This file system-like structure is optimized for big data analytics workloads, allowing services to query the raw JSON data directly and efficiently.

Azure Synapse Analytics dedicated SQL pools is the most appropriate service for the data warehouse. It uses a Massively Parallel Processing (MPP) architecture specifically designed for high-performance analytics on large datasets. It can easily scale to handle the 50 TB data requirement and is engineered to manage high concurrency (up to 128 concurrent queries, with workload management for the 200-300 user scenario), making it ideal for enterprise-level reporting.
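As a rough illustration of the hierarchical namespace, the following Python sketch (azure-storage-file-datalake and azure-identity packages) creates a date/time directory structure and lands a JSON file in it. The storage account, container, and file names are assumptions.

```python
from datetime import datetime, timezone
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Hypothetical storage account and container (file system) names.
service = DataLakeServiceClient(
    account_url="https://mydatalake.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
fs = service.get_file_system_client(file_system="ingested")

# Event Hubs Capture writes date/time-partitioned paths by default; the same
# year/month/day/hour layout can also be created explicitly, as shown here.
now = datetime.now(timezone.utc)
directory = fs.create_directory(now.strftime("%Y/%m/%d/%H"))
directory.get_file_client("orders.json").upload_data(b'{"orderId": 1}', overwrite=True)
```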

References

Microsoft Documentation, "Introduction to Azure Data Lake Storage Gen2." In the section "Key features of Data Lake Storage Gen2," it states: "A hierarchical namespace is a key feature that enables Data Lake Storage Gen2 to provide high-performance data access at object storage scale and price... This allows for data to be organized in a familiar directory and file hierarchy." This directly addresses the requirement for organizing data in directories.

Microsoft Documentation, "What is dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics?." The "Massively Parallel Processing (MPP) architecture" section details how the service is built for enterprise data warehousing and big data. It highlights its ability to "run complex queries quickly across petabytes of data," which aligns with the 50 TB and high-performance querying requirements.

Microsoft Documentation, "Memory and concurrency limits for dedicated SQL pool in Azure Synapse Analytics." The "Concurrency" section specifies that a dedicated SQL pool supports up to 128 concurrent queries. While the requirement is 200-300 concurrent reads, the service's workload management and queuing capabilities are designed to handle such user loads for reporting scenarios, making it the most suitable choice among the options.

Microsoft Documentation, "Capture events through Azure Event Hubs in Azure Blob Storage or Azure Data Lake Storage." This document confirms that Azure Event Hubs can capture data directly to an Azure Data Lake Storage Gen2 account, fulfilling the ingestion pipeline requirement.

Question 21

You have an on-premises application named App1 that uses an Oracle database. You plan to use Azure Databricks to transform and load data from App1 to an Azure Synapse Analytics instance. You need to ensure that the App1 data is available to Databricks. Which two Azure services should you include in the solution? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
Options
A: Azure Data Box Edge
B: Azure Data Lake Storage
C: Azure Data Factory
D: Azure Data Box Gateway
E: Azure Import/Export service
Show Answer
Correct Answer:
Azure Data Lake Storage, Azure Data Factory
Explanation
The optimal solution for making on-premises Oracle data available to Azure Databricks involves two key components: a data integration service and a staging storage layer. Azure Data Factory (C) is the cloud-based ETL and data integration service designed for this scenario. It uses a Self-Hosted Integration Runtime to securely connect to the on-premises Oracle database. It orchestrates the extraction of data and its movement into Azure. Azure Data Lake Storage (B) serves as the scalable and high-performance staging area. Data extracted by Data Factory is landed here. Azure Databricks is optimized to read from and write to Azure Data Lake Storage, making it the ideal location to make the data available for transformation.
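A minimal Databricks (PySpark) sketch of the resulting pattern is shown below: Data Factory has already staged the Oracle extract in Azure Data Lake Storage Gen2, Databricks reads and transforms it, and then writes the result to Azure Synapse Analytics. The storage account, container, JDBC URL, and table names are assumptions, and the cluster is assumed to be configured to authenticate to the storage account.

```python
# Runs in a Databricks notebook, where `spark` is predefined.
staged_path = "abfss://staging@mydatalake.dfs.core.windows.net/oracle/app1/orders/"

# Read the data that Azure Data Factory copied from the on-premises Oracle database.
orders_df = spark.read.parquet(staged_path)

# Transform, then load into Azure Synapse Analytics using the Synapse connector.
(orders_df
    .filter("status = 'SHIPPED'")
    .write
    .format("com.databricks.spark.sqldw")
    .option("url", "jdbc:sqlserver://mysynapse.sql.azuresynapse.net:1433;database=dw")
    .option("tempDir", "abfss://staging@mydatalake.dfs.core.windows.net/tempdir")
    .option("forwardSparkAzureStorageCredentials", "true")
    .option("dbTable", "dbo.Orders")
    .save())
```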
Why Incorrect Options are Wrong

A. Azure Data Box Edge: This is a physical edge computing appliance for IoT and rapid data transfer from edge locations, not the primary tool for a structured database ingestion pipeline.

D. Azure Data Box Gateway: This is a virtual appliance for transferring file-based data to Azure via SMB/NFS shares, which is unsuitable for extracting data directly from an Oracle database.

E. Azure Import/Export service: This service is for one-time, bulk data migration using physical disks. It is not appropriate for a recurring, operational data pipeline from a live application database.

References

1. Azure Data Factory for Oracle Ingestion: Microsoft Learn. (2023). Copy data from and to Oracle by using Azure Data Factory or Azure Synapse Analytics. "To copy data from an on-premises Oracle database, you need to set up a self-hosted integration runtime." This establishes ADF as the correct ingestion tool.

Source: learn.microsoft.com/en-us/azure/data-factory/connector-oracle

2. Azure Databricks with Azure Data Lake Storage: Microsoft Learn. (2023). Tutorial: Extract, transform, and load data by using Azure Databricks. Section: "Create and configure an Azure Databricks workspace". This tutorial demonstrates the standard pattern where Databricks interacts with data stored in Azure Data Lake Storage Gen2.

Source: learn.microsoft.com/en-us/azure/databricks/scenarios/databricks-extract-load-sql-data-warehouse

3. Orchestration Pattern: Microsoft Learn. (2023). Transform data by using an Azure Databricks Notebook. This document shows how Azure Data Factory is used to orchestrate a pipeline that can include copying data (from sources like Oracle) and then running a Databricks notebook for transformation.

Source: learn.microsoft.com/en-us/azure/data-factory/transform-data-using-databricks-notebook

Question 22

HOTSPOT You are designing a cost-optimized solution that uses Azure Batch to run two types of jobs on Linux nodes. The first job type will consist of short-running tasks for a development environment. The second job type will consist of long-running Message Passing Interface (MPI) applications for a production environment that requires timely job completion. You need to recommend the pool type and node type for each job type. The solution must minimize compute charges and leverage Azure Hybrid Benefit whenever possible. What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. AZ-305 exam question

Show Answer
Correct Answer:

FIRST JOB: USER SUBSCRIPTION AND LOW-PRIORITY VIRTUAL MACHINES

SECOND JOB: USER SUBSCRIPTION AND DEDICATED VIRTUAL MACHINES


Explanation

The solution requires optimizing costs while meeting performance requirements for two distinct jobs.

  • First job (Development): This job involves short-running tasks in a development environment where cost is a primary concern and immediate completion is not critical. Low-priority virtual machines (also known as Spot VMs) are ideal as they offer significant cost savings for workloads that can tolerate interruptions. To further reduce costs by leveraging the Azure Hybrid Benefit as specified, the pool must be configured in User subscription mode.
  • Second job (Production): This job involves long-running MPI applications in a production environment requiring timely completion. To prevent interruptions and guarantee availability, dedicated virtual machines are necessary. To meet the requirement of minimizing charges and using the Azure Hybrid Benefit "whenever possible," this pool must also be created in User subscription mode, as this is a prerequisite for applying the benefit to the dedicated nodes. A minimal SDK sketch of both pool configurations follows this list.

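The following Python sketch (azure-batch SDK) shows the two pool shapes side by side. The account name, key, URL, VM sizes, and image details are assumptions, and the Azure Hybrid Benefit itself is enabled through the user-subscription Batch account configuration rather than in these API calls.

```python
from azure.batch import BatchServiceClient, models as batchmodels
from azure.batch.batch_auth import SharedKeyCredentials

# Hypothetical account name, key, and URL.
client = BatchServiceClient(
    SharedKeyCredentials("mybatch", "<account-key>"),
    batch_url="https://mybatch.westeurope.batch.azure.com",
)

vm_config = batchmodels.VirtualMachineConfiguration(
    image_reference=batchmodels.ImageReference(
        publisher="canonical", offer="0001-com-ubuntu-server-focal",
        sku="20_04-lts", version="latest"),
    node_agent_sku_id="batch.node.ubuntu 20.04",
)

# Development pool: interruptible low-priority/Spot nodes minimize cost.
client.pool.add(batchmodels.PoolAddParameter(
    id="dev-pool", vm_size="standard_d2s_v3",
    virtual_machine_configuration=vm_config,
    target_dedicated_nodes=0, target_low_priority_nodes=4))

# Production MPI pool: dedicated nodes avoid preemption of long-running jobs.
client.pool.add(batchmodels.PoolAddParameter(
    id="prod-mpi-pool", vm_size="standard_h16r",
    virtual_machine_configuration=vm_config,
    target_dedicated_nodes=4, target_low_priority_nodes=0,
    enable_inter_node_communication=True))
```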
References

Microsoft Azure Documentation, "Pool allocation mode": This document clarifies the two pool allocation modes. It explicitly states that to use features like Azure Hybrid Benefit, the Batch pool must be created in the User subscription mode. This supports the choice of "User subscription" for both jobs to meet the cost-saving requirement.

Microsoft Azure Documentation, "Use Spot VMs with Batch": This document describes Spot (formerly low-priority) VMs as a cost-effective option for workloads that are fault-tolerant and flexible in their completion time. This justifies their use for the short-running development job where minimizing cost is key. It states, "Spot VMs are a good choice for workloads... where the job completion time is flexible."

Microsoft Azure Documentation, "Provision compute nodes for Batch pools": This resource distinguishes between dedicated and Spot compute nodes. It confirms that dedicated nodes are reserved for the workloads and are not subject to preemption, making them suitable for production jobs that require guaranteed availability and timely completion, such as the long-running MPI application.

Question 23

HOTSPOT Your company has 20 web APIs that were developed in-house. The company is developing 10 web apps that will use the web APIs. The web apps and the APIs are registered in the company's Azure AD tenant. The web APIs are published by using Azure API Management. You need to recommend a solution to block unauthorized requests originating from the web apps from reaching the web APIs. The solution must meet the following requirements: • Use Azure AD-generated claims. • Minimize configuration and management effort. What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. AZ-305 exam question

Show Answer
Correct Answer:

GRANT PERMISSIONS TO ALLOW THE WEB APPS TO ACCESS THE WEB APIS BY USING: AZURE AD

CONFIGURE A JSON WEB TOKEN (JWT) VALIDATION POLICY BY USING: AZURE API MANAGEMENT

Explanation

Granting Permissions (Azure AD): In scenarios where both the client (web app) and the resource (web API) are registered within Azure Active Directory (Azure AD), Azure AD is used to manage the authorization flow. The web API's app registration exposes permissions (scopes), and the web app's registration is granted consent to access those specific scopes. This process establishes a trust relationship and defines what the web app is allowed to do, directly within the identity provider. This approach centralizes access management, aligning with the goal of minimizing configuration effort.

JWT Validation (Azure API Management): Azure API Management (APIM) serves as a gateway in front of your backend services. To protect the web APIs, you configure a validate-jwt policy in APIM. This policy intercepts incoming requests, inspects the JWT (access token) provided by the web app, and validates its signature, issuer, audience, and expiration against the configuration of your Azure AD tenant. By enforcing this policy at the gateway, you ensure that no unauthorized requests reach the backend APIs. This centralizes the security logic, removing the need to implement token validation in each of the 20 web APIs individually, which significantly minimizes management effort.
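To make the flow concrete, the sketch below (msal and requests Python packages) shows a web app acquiring an Azure AD token for the API's app ID URI and calling it through the API Management gateway, which then runs its validate-jwt policy before forwarding the request to the backend. The tenant, client, secret, app ID URI, and gateway URL values are assumptions.

```python
import msal
import requests

# Hypothetical tenant, app registration, and APIM values.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<web-app-client-id>"
CLIENT_SECRET = "<web-app-client-secret>"
API_APP_ID_URI = "api://<web-api-client-id>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# Azure AD issues a JWT whose issuer, audience, and role claims APIM's
# validate-jwt policy checks before the request reaches the backend API.
token = app.acquire_token_for_client(scopes=[f"{API_APP_ID_URI}/.default"])
# (Error handling omitted: on success the result contains 'access_token'.)

response = requests.get(
    "https://contoso-apim.azure-api.net/orders",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
print(response.status_code)
```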

References

Microsoft Learn, Microsoft identity platform and OAuth 2.0 On-Behalf-Of flow. This document details how a client application acquires a token to call a web API. Section "First leg of the OBO flow" explains that the client application must be granted permission in Azure AD to call the middle-tier web API. This is configured in the Azure portal under API permissions.

Microsoft Learn, Protect an API in Azure API Management using OAuth 2.0 authorization with Azure AD. This tutorial explicitly outlines the required steps. In Step 3, it states: "In this section, you'll configure an API Management policy that blocks requests that don't have a valid access token." It then provides the XML for the validate-jwt policy, which is applied at the APIM level to protect the backend API.

Microsoft Learn, API Management access restriction policies. The documentation for the validate-jwt policy states, "Use this policy to enforce the existence and validity of a JWT extracted from either a specified HTTP Header or a specified query parameter." This confirms that JWT validation is a primary function of API Management policies.

Question 24

You have an Azure AD tenant. You plan to deploy Azure Cosmos DB databases that will use the SQL API. You need to recommend a solution to provide specific Azure AD user accounts with read access to the Cosmos DB databases. What should you include in the recommendation?
Options
A: a resource token and an Access control (IAM) role assignment
B: certificates and Azure Key Vault
C: master keys and Azure Information Protection policies
D: shared access signatures (SAS) and Conditional Access policies
Show Answer
Correct Answer:
a resource token and an Access control (IAM) role assignment
Explanation
To provide specific Azure AD user accounts with read access to the Azure Cosmos DB data plane, the recommended solution is to use Azure Role-Based Access Control (RBAC). This involves assigning an appropriate role, such as the built-in "Cosmos DB Built-in Data Reader" role, to the Azure AD user principals. This is managed through Access control (IAM) role assignments on the Cosmos DB account, database, or container scope. This method integrates directly with Azure AD, allowing for fine-grained, identity-based permissions without exposing master keys. While resource tokens are another mechanism for granular access, they are typically vended by a middle-tier service and are not directly tied to an end-user's Azure AD identity for authentication. The essential component for this scenario is the IAM role assignment.
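A minimal Python sketch of data-plane access under this model is shown below, using azure-identity and azure-cosmos: the user authenticates through DefaultAzureCredential and can read items only because of the IAM role assignment. The account URL, database, and container names are assumptions.

```python
from azure.identity import DefaultAzureCredential
from azure.cosmos import CosmosClient

# Hypothetical account and names; the signed-in user must hold a data-plane
# role assignment such as "Cosmos DB Built-in Data Reader" at this scope.
client = CosmosClient(
    "https://contoso-cosmos.documents.azure.com:443/",
    credential=DefaultAzureCredential(),
)

container = client.get_database_client("sales").get_container_client("orders")
for item in container.query_items(
        query="SELECT TOP 5 * FROM c",
        enable_cross_partition_query=True):
    print(item["id"])
```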
Why Incorrect Options are Wrong

B: Certificates and Azure Key Vault are used for securing application identities and secrets, not for granting data plane permissions to individual users within Cosmos DB.

C: Master keys grant full administrative permissions (read, write, delete) over the entire Cosmos DB account and are not suitable for providing scoped, read-only access to specific users.

D: Shared access signatures (SAS) are not a supported authentication mechanism for Azure Cosmos DB. Conditional Access policies enforce conditions on user authentication but do not grant permissions to data.

References

1. Microsoft Documentation - Azure Cosmos DB RBAC: "Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account". This document explicitly states, "Azure Cosmos DB exposes a role-based access control (RBAC) system that lets you... Authenticate your data requests with an Azure AD identity... You can create a role assignment for Azure AD principals (users, groups, service principals, or managed identities) to grant them access to resources and operations in your Azure Cosmos DB account." It also lists the Cosmos DB Built-in Data Reader role.

2. Microsoft Documentation - Secure access to data in Azure Cosmos DB: This document outlines the primary methods for securing data. In the section "Role-based access control (preview)", it describes using Azure AD identities and IAM role assignments as the modern, recommended approach for data plane security. It contrasts this with the master key and resource token models.

3. Microsoft Documentation - Resource Tokens: "Secure access to Azure Cosmos DB resources using resource tokens". This document explains that resource tokens are generated using a master key, typically by a middle-tier application, to provide temporary, scoped access to untrusted clients. This confirms it is a separate model from direct Azure AD RBAC authentication.

Question 25

HOTSPOT You have several Azure App Service web apps that use Azure Key Vault to store data encryption keys. Several departments have the following requests to support the web app: AZ-305 exam question Which service should you recommend for each department's request? To answer, configure the appropriate options in the answer area. NOTE: Each correct selection is worth one point. AZ-305 exam question

Show Answer
Correct Answer:

SECURITY: AZURE AD PRIVILEGED IDENTITY MANAGEMENT

DEVELOPMENT: AZURE MANAGED IDENTITY

QUALITY ASSURANCE: AZURE AD PRIVILEGED IDENTITY MANAGEMENT

Explanation

The selections are based on the specific functionalities requested by each department.

  • Security: The requirements to review administrative roles, require justification for membership, receive alerts, and view audit histories are all core features of Azure AD Privileged Identity Management (PIM). PIM is designed to manage, control, and monitor access to privileged resources by providing just-in-time (JIT) access and access review capabilities.
  • Development: The request to enable applications to access Azure Key Vault without storing credentials in code is the primary use case for Azure Managed Identity. This feature provides an Azure resource (like a web app) with an automatically managed identity in Azure AD, which can then be used to authenticate to other Azure services that support Azure AD authentication.
  • Quality Assurance: While no specific request is listed, QA teams often require temporary, elevated permissions to test application features or troubleshoot issues in test environments. Azure AD Privileged Identity Management (PIM) is the most appropriate service to grant these permissions securely, adhering to the principle of least privilege through just-in-time access that can be audited.

References

Azure AD Privileged Identity Management (PIM):

Microsoft Learn. (2023). What is Azure AD Privileged Identity Management? States that PIM provides time-based and approval-based role activation to mitigate risks of excessive permissions. It also enables features like access reviews, alerts on privileged role activation, and audit history. This directly addresses the Security department's requests.

Azure Managed Identity:

Microsoft Learn. (2023). What are managed identities for Azure resources? Explains that managed identities provide an identity for applications to use when connecting to resources that support Azure AD authentication. It explicitly mentions, "You can use a managed identity to authenticate to any service that supports Azure AD authentication, including Key Vault, without having any credentials in your code," which matches the Development department's request.

Use Case for PIM in QA/Testing:

Microsoft Learn. (2023). Assign Azure AD roles in Privileged Identity Management. The concept of assigning roles as "eligible" for activation on a temporary, as-needed basis is a core tenet of PIM. This model is a best practice for any user, including QA testers, who only need elevated permissions intermittently, thus justifying the selection for the Quality Assurance department.

Question 26

You have an Azure Functions microservice app named App1 that is hosted in the Consumption plan. App1 uses an Azure Queue Storage trigger. You plan to migrate App1 to an Azure Kubernetes Service (AKS) cluster. You need to prepare the AKS cluster to support App1. The solution must meet the following requirements: • Use the same scaling mechanism as the current deployment. • Support kubenet and Azure Container Networking Interface (CNI) networking. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct answer is worth one point.
Options
A: Configure the horizontal pod autoscaler.
B: Install Virtual Kubelet.
C: Configure the AKS cluster autoscaler.
D: Configure the virtual node add-on.
E: Install Kubernetes-based Event Driven Autoscaling (KEDA).
Show Answer
Correct Answer:
Configure the horizontal pod autoscaler, Install Kubernetes-based Event Driven Autoscaling (KEDA)
Explanation
The goal is to replicate the event-driven scaling behavior of an Azure Function Consumption plan on AKS. In the Consumption plan, functions scale automatically based on the number of incoming events, such as messages in an Azure Queue. Kubernetes-based Event Driven Autoscaling (KEDA) is the standard component for this scenario. KEDA monitors event sources like Azure Queue Storage and exposes metrics. It then works directly with the Kubernetes Horizontal Pod Autoscaler (HPA) to scale the number of application pods up from zero when events occur and back down when they cease. Therefore, installing KEDA and configuring the HPA are the two necessary actions to achieve event-driven scaling based on the queue length, matching the original function's behavior.
Why Incorrect Options are Wrong

B. Install Virtual Kubelet.

This is an underlying technology for virtual nodes, which provides a serverless compute option (ACI), but it does not provide the event-driven scaling logic itself.

C. Configure the AKS cluster autoscaler.

This component scales the number of agent nodes in the cluster, not the application pods. It responds to resource pressure, not external event triggers like queue length.

D. Configure the virtual node add-on.

This add-on allows pods to run on Azure Container Instances (ACI). It is a compute-layer choice and does not implement the required event-driven application scaling mechanism.

References

1. Microsoft Azure Documentation, "Kubernetes-based Event-driven Autoscaling (KEDA) add-on": "KEDA provides two main components: a KEDA operator and a metrics server. The KEDA operator allows you to scale from zero to one instance and to activate a Kubernetes Deployment, and the metrics server provides metrics for an event source to the Horizontal Pod Autoscaler (HPA)." This confirms the direct relationship and necessity of both KEDA and HPA.

2. Microsoft Azure Documentation, "Autoscale an application with Kubernetes Event-driven Autoscaling (KEDA)": "Under the hood, KEDA uses the standard Kubernetes Horizontal Pod Autoscaler (HPA) to drive scaling. KEDA acts as a metrics server for the HPA, providing it with data from external event sources." This explicitly states that KEDA and HPA work together to provide the solution.

3. KEDA Documentation, "Microsoft Azure Queue Storage scaler": This document details the specific scaler for Azure Queue Storage, confirming that KEDA can monitor the queue length (queueLength) to trigger scaling actions, directly matching the scenario's requirement.

4. Microsoft Azure Documentation, "Application scaling options in Azure Kubernetes Service (AKS)": In the "Horizontal Pod Autoscaler (HPA)" section, it clarifies that HPA scales the number of pods. The "Kubernetes Event-driven Autoscaling (KEDA) add-on" section confirms KEDA is network-plugin agnostic and works with both Kubenet and Azure CNI.

Question 27

You plan to migrate on-premises MySQL databases to Azure Database for MySQL Flexible Server. You need to recommend a solution for the Azure Database for MySQL Flexible Server configuration. The solution must meet the following requirements: • The databases must be accessible if a datacenter fails. • Costs must be minimized. Which compute tier should you recommend?
Options
A: Burstable
B: General Purpose
C: Memory Optimized
Show Answer
Correct Answer:
General Purpose
Explanation
The requirement for the database to be accessible during a datacenter failure necessitates configuring zone-redundant high availability (HA). In Azure Database for MySQL Flexible Server, zone-redundant HA is supported only on the General Purpose and Memory Optimized compute tiers. The Burstable tier does not support this feature. Between the General Purpose and Memory Optimized tiers, the General Purpose tier offers a balanced ratio of compute, memory, and storage, making it suitable for most production workloads at a lower price point than the Memory Optimized tier. To meet the requirement of minimizing costs while ensuring high availability, the General Purpose tier is the most appropriate recommendation.
Why Incorrect Options are Wrong

A. Burstable: This tier does not support zone-redundant high availability, which is required to ensure the database is accessible if an entire datacenter (Availability Zone) fails.

C. Memory Optimized: While this tier supports zone-redundant high availability, it is more expensive than the General Purpose tier. It does not meet the requirement to minimize costs.

References

1. Microsoft Learn: High availability concepts in Azure Database for MySQL - Flexible Server. Under the "Zone-redundant high availability" section, it states, "Zone-redundant high availability is available for the General Purpose and Memory Optimized compute tiers. It is not supported in the Burstable compute tier."

2. Microsoft Learn: Compute and storage options in Azure Database for MySQL - Flexible Server. This document details the different compute tiers. The "When to choose this tier" section for General Purpose indicates it is for "most business workloads," while Memory Optimized is for "high-performance database workloads." This implies a cost and performance hierarchy where General Purpose is the more cost-effective baseline for production HA.

Question 28

You have an app named App1 that uses an on-premises Microsoft SQL Server database named DB1. You plan to migrate DB1 to an Azure SQL managed instance. You need to enable customer-managed Transparent Data Encryption (TDE) for the instance. The solution must maximize encryption strength. Which type of encryption algorithm and key length should you use for the TDE protector?
Options
A: AES256
B: RSA4096
C: RSA2048
D: RSA3072
Show Answer
Correct Answer:
RSA3072
Explanation
For customer-managed Transparent Data Encryption (TDE) in Azure SQL Managed Instance, the TDE protector must be an asymmetric RSA key stored in Azure Key Vault. According to official Microsoft documentation, the supported key sizes for this specific integration are 2048 and 3072 bits. To fulfill the requirement of maximizing encryption strength, the largest supported key size must be chosen. Therefore, RSA 3072 is the correct option as it provides the highest level of encryption strength available for this service.
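As a rough sketch, the TDE protector key could be created in Azure Key Vault with the azure-keyvault-keys Python package as shown below; the vault and key names are assumptions, and the key must subsequently be selected as the TDE protector on the managed instance (for example in the portal or through ARM/CLI).

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

# Hypothetical vault name.
key_client = KeyClient(
    vault_url="https://contoso-tde-kv.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# RSA 3072 is the largest key size the SQL Managed Instance TDE integration supports.
tde_protector = key_client.create_rsa_key("mi-tde-protector", size=3072)
print(tde_protector.id)
```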
Why Incorrect Options are Wrong

A. AES256 is incorrect because the TDE protector must be an asymmetric RSA key. AES is a symmetric algorithm used for the data encryption key (DEK), not the protector.

B. RSA4096 is incorrect because, while Azure Key Vault supports this key size, the TDE integration for Azure SQL Managed Instance specifically does not.

C. RSA2048 is incorrect because, although it is a supported key size, it does not meet the requirement to maximize encryption strength as RSA 3072 is also supported and is stronger.

References

1. Microsoft Learn. (2023). Transparent data encryption with customer-managed keys - Azure SQL Database & SQL Managed Instance. In the section "Requirements for configuring customer-managed TDE," the documentation explicitly states: "The key is an asymmetric, RSA or RSA-HSM key. Key sizes of 2048 and 3072 are supported." This confirms that 3072 is the maximum supported size.

Question 29

HOTSPOT You have an Azure subscription named Sub1 that is linked to an Azure AD tenant named contoso.com. You plan to implement two ASP.NET Core apps named App1 and App2 that will be deployed to 100 virtual machines in Sub1. Users will sign in to App1 and App2 by using their contoso.com credentials. App1 requires read permissions to access the calendar of the signed-in user. App2 requires write permissions to access the calendar of the signed-in user. You need to recommend an authentication and authorization solution for the apps. The solution must meet the following requirements: • Use the principle of least privilege. • Minimize administrative effort. What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. AZ-305 exam question

Show Answer
Correct Answer:

AUTHENTICATION: APPLICATION REGISTRATION IN AZURE AD

AUTHORIZATION: DELEGATED PERMISSIONS

Explanation

Authentication: For an application to handle user sign-ins with Azure Active Directory (Azure AD) credentials and request access to protected resources (like the Microsoft Graph API for calendar access), it must first be registered in the Azure AD tenant. This registration creates a globally unique identity for the app and defines the authentication protocols it will use. Managed identities are used for service-to-service authentication (e.g., a VM authenticating to Azure Key Vault) and are not suitable for scenarios where an app needs to act on behalf of a signed-in user.

Authorization: The requirement is for the apps to access the signed-in user's calendar. This is a classic delegated access scenario. Delegated permissions are used when an application needs to act on behalf of a user. The application is "delegated" the permission to access resources the user can access. By assigning App1 the Calendars.Read permission and App2 the Calendars.ReadWrite permission, the principle of least privilege is enforced. In contrast, Azure RBAC manages access to Azure resources (like VMs and storage), not API data. Application permissions are for services that run without a user present (e.g., background daemons).
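A minimal Python (msal) sketch of the delegated flow for App1 is shown below: the app redirects the user to sign in, requests only the Calendars.Read scope, and redeems the returned authorization code; App2 would do the same with Calendars.ReadWrite. The tenant, client, secret, and redirect URI values are assumptions.

```python
import msal

# Hypothetical registration values for App1 (the app that only reads calendars).
TENANT_ID = "<contoso-tenant-id>"
CLIENT_ID = "<app1-client-id>"
CLIENT_SECRET = "<app1-client-secret>"
REDIRECT_URI = "https://app1.contoso.com/auth/callback"

cca = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# App1 asks only for Calendars.Read (App2 would request Calendars.ReadWrite),
# so each app receives the least-privileged delegated permission it needs.
auth_url = cca.get_authorization_request_url(
    scopes=["Calendars.Read"], redirect_uri=REDIRECT_URI)

# After the user signs in and is redirected back with ?code=..., exchange it:
def redeem(code: str) -> dict:
    return cca.acquire_token_by_authorization_code(
        code, scopes=["Calendars.Read"], redirect_uri=REDIRECT_URI)
```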


References

Microsoft Entra ID Documentation, Application and service principal objects in Azure Active Directory: "To delegate identity and access management functions to Microsoft Entra ID, an application must be registered with a Microsoft Entra tenant. When you register your application with Microsoft Entra ID, you're creating an identity configuration for your application that allows it to integrate with Microsoft Entra ID."

Microsoft Identity Platform Documentation, Permissions and consent in the Microsoft identity platform: "Delegated permissions are used by apps that have a signed-in user present... For delegated permissions, the effective permissions of your app will be the least privileged intersection of the delegated permissions the app has been granted (via consent) and the privileges of the currently signed-in user."

Microsoft Azure Documentation, What is Azure role-based access control (Azure RBAC)?: "Azure role-based access control (Azure RBAC) is the authorization system you use to manage access to Azure resources. To grant access, you assign roles to users, groups, service principals, or managed identities at a particular scope." This clarifies that Azure RBAC is for managing Azure resources, not data within APIs like Microsoft Graph.

Question 30

You are designing an app that will use Azure Cosmos DB to collate sales data from multiple countries. You need to recommend an API for the app. The solution must meet the following requirements: • Support SQL queries. • Support geo-replication. • Store and access data relationally. Which API should you recommend?
Options
A: PostgreSQL
B: NoSQL
C: Apache Cassandra
D: MongoDB
Show Answer
Correct Answer:
PostgreSQL
Explanation
The solution requires an API that supports SQL queries, geo-replication, and relational data storage. Azure Cosmos DB for PostgreSQL is a managed service for the PostgreSQL relational database engine. It natively supports standard SQL queries and a relational data model. As a distributed Azure service, it provides high availability features, including geo-redundant backups and cross-region read replicas, which fulfill the geo-replication requirement. The other listed APIs (NoSQL, Cassandra, MongoDB) are designed for non-relational data models (document or wide-column) and do not meet the requirement to store and access data relationally.
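Because the PostgreSQL API is relational, the app can connect with any standard PostgreSQL driver and run ordinary SQL, as in the psycopg2 sketch below; the host, credentials, and table are assumptions.

```python
import psycopg2

# Hypothetical connection values for an Azure Cosmos DB for PostgreSQL cluster.
conn = psycopg2.connect(
    host="c-contoso-sales.postgres.cosmos.azure.com",
    port=5432,
    dbname="citus",
    user="citus",
    password="<password>",
    sslmode="require",
)

with conn, conn.cursor() as cur:
    # Standard relational SQL over the collated sales data.
    cur.execute(
        "SELECT country, SUM(amount) AS total "
        "FROM sales WHERE sale_date >= %s GROUP BY country",
        ("2025-01-01",),
    )
    for country, total in cur.fetchall():
        print(country, total)
```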
Why Incorrect Options are Wrong

B. NoSQL: This API uses a non-relational document model, failing the requirement to store data relationally.

C. Apache Cassandra: This API uses a non-relational wide-column model, not a relational one.

D. MongoDB: This API uses a non-relational document model, which does not store data relationally.

References

1. Microsoft Learn, Azure Cosmos DB Documentation. "What is Azure Cosmos DB for PostgreSQL?". This document states, "Azure Cosmos DB for PostgreSQL is a managed service for PostgreSQL that is powered by the Citus open-source extension to PostgreSQL. It allows you to run PostgreSQL workloads in the cloud with all the benefits of a fully managed service." This confirms its relational and SQL-based nature.

Reference: https://learn.microsoft.com/en-us/azure/cosmos-db/postgresql/overview, Section: "What is Azure Cosmos DB for PostgreSQL?".

2. Microsoft Learn, Azure Cosmos DB Documentation. "High availability in Azure Cosmos DB for PostgreSQL". This document details the service's capabilities for business continuity, including "Geo-redundant backup and restore" and "Cross-region read replicas," which satisfy the geo-replication requirement.

Reference: https://learn.microsoft.com/en-us/azure/cosmos-db/postgresql/concepts-high-availability, Sections: "Geo-redundant backup and restore" and "Cross-region read replicas".

3. Microsoft Learn, Azure Cosmos DB Documentation. "Choose an API in Azure Cosmos DB". This resource contrasts the different APIs. It explicitly describes the API for NoSQL and MongoDB as using a "Document model" and the API for Cassandra as using a "Column-family model," confirming they are not relational.

Reference: https://learn.microsoft.com/en-us/azure/cosmos-db/choose-api, Section: "Azure Cosmos DB APIs".

Question 31

HOTSPOT You have an Azure AD tenant that contains a management group named MG1. You have the Azure subscriptions shown in the following table. AZ-305 exam question The subscriptions contain the resource groups shown in the following table. AZ-305 exam question The tenant contains the Azure AD security groups shown in the following table. AZ-305 exam question The tenant contains the user accounts shown in the following table. AZ-305 exam question You perform the following actions: • Assign User3 the Contributor role for Sub1. • Assign Group1 the Virtual Machine Contributor role for MG1. • Assign Group3 the Contributor role for the Tenant Root Group. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. AZ-305 exam question

Show Answer
Correct Answer:

STATEMENT 1: USER1 CAN CREATE A NEW VIRTUAL MACHINE IN RG1.

YES

STATEMENT 2: USER2 CAN GRANT PERMISSIONS TO GROUP2.

NO

STATEMENT 3: USER3 CAN CREATE A STORAGE ACCOUNT IN RG2.

YES


Explanation

Statement 1: Yes. User1 is a member of Group1. Group1 is assigned the Virtual Machine Contributor role at the MG1 management group scope. Since RG1 is in Sub1, which is under MG1, these permissions are inherited by RG1. The Virtual Machine Contributor role allows the creation and management of virtual machines. Additionally, User1 is transitively a member of Group3 (User1 -> Group1 -> Group3), which has the Contributor role at the Tenant Root Group, a permission that also inherits down to RG1.

Statement 2: No. User2 is a member of Group2, which is a member of Group3. Group3 has the Contributor role at the Tenant Root Group. While the Contributor role grants broad permissions to manage resources, it explicitly does not include the right to grant access to others. Granting permissions requires a role with the Microsoft.Authorization/roleAssignments/write permission, such as Owner or User Access Administrator.

Statement 3: Yes. User3 is a member of both Group1 and Group2, which are both members of Group3. Group3 is assigned the Contributor role at the Tenant Root Group. This permission is inherited by all subscriptions and resource groups below it, including RG2 (which is in Sub2, under MG1, under the Tenant Root). The Contributor role includes permissions to create and manage all resource types, including storage accounts.


References

Azure role-based access control (Azure RBAC) Scope: Microsoft Docs. "Understand scope for Azure RBAC". Permissions are inherited from parent scopes to child scopes. A role assigned at a management group scope grants access to all subscriptions and resources within that management group.

Azure built-in roles: Microsoft Docs. "Azure built-in roles".

Contributor: "Grants full access to manage all resources, but does not allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries."

Virtual Machine Contributor: "Lets you manage virtual machines, but not access to them, and not the virtual network or storage account they're connected to." This documentation clarifies it allows creating and managing VMs.

Azure Management Groups: Microsoft Docs. "Organize your resources with Azure management groups". This document explains the hierarchy from Tenant Root Group down to individual resources and how policies and access control inherit through this structure.

Question 32

HOTSPOT You are designing an app that will be hosted on Azure virtual machines that run Ubuntu. The app will use a third-party email service to send email messages to users. The third-party email service requires that the app authenticate by using an API key. You need to recommend an Azure Key Vault solution for storing and accessing the API key. The solution must minimize administrative effort. What should you recommend using to store and access the key? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. AZ-305 exam question

Show Answer
Correct Answer:

STORAGE: SECRET

ACCESS: A MANAGED SERVICE IDENTITY (MSI)

Explanation

The most appropriate way to store a simple credential like a third-party API key in Azure Key Vault is as a Secret. Secrets are designed to store arbitrary strings of text, such as passwords, connection strings, and API keys.

To access the Key Vault from an Azure VM with minimal administrative effort, a managed service identity (MSI), now known as Managed Identity for Azure resources, is the best practice. This feature provides the Azure VM with an automatically managed identity in Azure Active Directory. The application running on the VM can use this identity to authenticate to Key Vault and retrieve the secret without needing to store any credentials (like a service principal's secret or an API token) in its code or configuration files. This eliminates the overhead of credential management and rotation.
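A minimal Python sketch of this pattern, using the azure-identity and azure-keyvault-secrets packages on the Ubuntu VM, is shown below; the vault URL and secret name are assumptions.

```python
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# Hypothetical vault and secret names. On the VM, the system-assigned managed
# identity is picked up automatically; no credentials live in the code.
credential = ManagedIdentityCredential()
client = SecretClient(
    vault_url="https://contoso-app-kv.vault.azure.net",
    credential=credential,
)

email_api_key = client.get_secret("email-service-api-key").value
# email_api_key is then passed to the third-party email service's client/SDK.
```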

References

Microsoft Azure Documentation, Azure Key Vault basic concepts.

Section: "What is Azure Key Vault?"

Content: The documentation specifies that Key Vault secrets are used for "anything that you want to tightly control access to, such as API keys, passwords, certificates, or cryptographic keys." This supports storing the API key as a Secret.

Microsoft Azure Documentation, What are managed identities for Azure resources?

Section: "Introduction" and "Which Azure services support managed identities"

Content: This source states, "Managed identities for Azure resources provide Azure services with an automatically managed identity... You can use this identity to authenticate to any service that supports Azure AD authentication, including Key Vault, without having any credentials in your code." This directly supports the use of MSI for minimizing administrative effort.

Microsoft Azure Documentation, Tutorial: Use a Linux VM system-assigned managed identity to access Azure Key Vault.

Section: "Overview" and "Prerequisites"

Content: This tutorial demonstrates the exact scenario in the question. It explicitly states that a managed identity is the recommended way for code running on a VM to authenticate to services like Key Vault because the credentials are automatically managed by the platform.

Question 33

Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity. Several virtual machines exhibit network connectivity issues. You need to analyze the network traffic to identify whether packets are being allowed or denied from the Azure virtual machines to the on-premises virtual machines. Solution: Use Azure Advisor. Does this meet the goal?
Options
A: Yes
B: No
Show Answer
Correct Answer:
No
Explanation
Azure Advisor is a personalized cloud consultant that provides recommendations to optimize Azure deployments across five pillars: Reliability, Security, Performance, Cost, and Operational Excellence. It analyzes resource configuration and usage telemetry to suggest improvements. However, it does not provide tools for real-time network traffic analysis or diagnosing packet-level connectivity issues, such as determining if specific packets are allowed or denied by network security rules. The appropriate tool for this task is Azure Network Watcher, specifically its IP Flow Verify or Connection Troubleshoot capabilities, which are designed to diagnose such network filtering and routing problems.
Why Incorrect Options are Wrong

A. Yes: This is incorrect. Azure Advisor's function is to provide high-level recommendations on best practices, not to perform detailed network packet flow analysis required for troubleshooting connectivity.

References

1. Microsoft Learn, Azure Advisor. "Overview of Azure Advisor." Under the "What is Advisor?" section, it is defined as a service that provides recommendations for Reliability, Security, Performance, Cost, and Operational Excellence. It does not list network traffic diagnostics as a feature.

Source: https://learn.microsoft.com/en-us/azure/advisor/advisor-overview

2. Microsoft Learn, Azure Network Watcher. "Introduction to IP flow verify in Azure Network Watcher." This document explicitly states, "IP flow verify checks if a packet is allowed or denied to or from a virtual machine. The information consists of direction, protocol, local IP, local port, remote IP, and remote port." This directly addresses the requirement in the question.

Source: https://learn.microsoft.com/en-us/azure/network-watcher/network-watcher-ip-flow-verify-overview

3. Microsoft Learn, Azure Network Watcher. "Diagnose a virtual machine network traffic filter problem." This tutorial demonstrates using the IP Flow Verify capability to determine if a network security group (NSG) rule is denying traffic to or from a virtual machine, which is the exact scenario described.

Source: https://learn.microsoft.com/en-us/azure/network-watcher/diagnose-vm-network-traffic-filtering-problem

Question 34

Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases. The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region. You need to recommend a solution to meet the regulatory requirement. Solution: You recommend using an Azure policy to enforce the location of resource groups. Does this meet the goal?
Options
A: Yes
B: No
Show Answer
Correct Answer:
No
Explanation
The proposed solution is incorrect because enforcing the location of a resource group does not enforce the location of the resources created within it. A resource can be deployed to a different region than its parent resource group. The resource group's location only specifies where the metadata for that group is stored. To meet the regulatory requirement of restricting App Service instance locations, an Azure Policy must be applied directly to the App Service resource type (Microsoft.Web/sites) to restrict its location property to the list of approved Azure regions. This ensures direct compliance, whereas the proposed solution creates a compliance gap.
Why Incorrect Options are Wrong

A (Yes): This is incorrect. A resource's location is independent of its resource group's location. Therefore, a policy restricting the resource group's location does not guarantee that the App Service instances within it will be in an approved region.

References

1. Microsoft Learn, Azure Resource Manager documentation, "What is a resource group?": Under the "Resources" section, it states, "The location of the resource group can be different than the location of the resources. [...] The resource group stores metadata about the resources. When you specify a location for the resource group, you're specifying where that metadata is stored." This confirms that resource and resource group locations are independent.

2. Microsoft Learn, Azure Policy documentation, "Tutorial: Create and manage policies to enforce compliance": In the "Apply a policy" section, it describes the "Allowed locations" policy. The documentation explains, "This policy definition enables you to restrict the locations your organization can specify when deploying resources." This policy should be assigned and scoped to the App Service resource type to meet the requirement directly.

3. Microsoft Learn, Azure Policy built-in definitions, "Allowed locations": The policy definition ("policyRule": { "if": { "not": { "field": "location", "in": "[parameters('listOfAllowedLocations')]" } }, "then": { "effect": "deny" } }) demonstrates that the policy acts on the location field of a resource, not its resource group. To be effective for this scenario, it must be applied to the Microsoft.Web/sites resource type.

Question 35

DRAG DROP You have two app registrations named App1 and App2 in Azure AD. App1 supports role-based access control (RBAC) and includes a role named Writer. You need to ensure that when App2 authenticates to access App1, the tokens issued by Azure AD include the Writer role claim. Which blade should you use to modify each app registration? To answer, drag the appropriate blades to the correct app registrations. Each blade may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. AZ-305 exam question

Show Answer
Correct Answer:

APP1: APP ROLES

APP2: API PERMISSIONS


Explanation

To solve this, you must configure both the resource application (App1) and the client application (App2).

App1, the resource API, must first define the permissions it exposes to other applications. This is accomplished by creating an "App role" (like the 'Writer' role). Therefore, the App roles blade is used to configure App1.

App2, the client application, must then request one of the permissions exposed by App1. This is done on the API permissions blade of App2, where you add a permission and grant it consent. Once granted, Azure AD will include the corresponding 'Writer' role claim in the access token it issues for App2 to call App1.

References

Microsoft Learn | Microsoft Entra ID Documentation: In the article "Add app roles to your application and receive them in the token," the procedure for a resource API to define its roles is detailed.

Section: "Create app roles by using the Azure portal"

Content: This section explicitly states, "To create an app role by using the Azure portal's user interface: 1. Sign in to the Microsoft Entra admin center... 3. Browse to Identity > Applications > App registrations and then select the app you want to define app roles in... 4. Under Manage, select App roles, and then select Create app role." This confirms that App roles is the correct blade for App1.

Microsoft Learn | Microsoft Entra ID Documentation: The guide "Quickstart: Configure a client application to access a web API" explains how a client app requests permissions.

Section: "Add permissions to access the web API"

Content: This section provides the steps for the client application (App2 in this scenario): "1. Under Manage, select API permissions > Add a permission. 2. Select the My APIs tab. 3. In the list of APIs, select your web API registration... 4. Select Application permissions... 5. In the list of permissions, select the check box next to [the role you defined]... 7. Select Grant admin consent..." This confirms that API permissions is the correct blade for App2.
