Microsoft - AZ-305: Designing Microsoft Azure Infrastructure Solutions

Sample Questions

Question: 282
Measured Skill: Design data storage solutions (25-30%)

You have an on-premises line-of-business (LOB) application that uses a Microsoft SQL Server instance as the backend.

You plan to migrate the on-premises SQL Server instance to Azure virtual machines.

You need to recommend a highly available SQL Server deployment that meets the following requirements:
  • Minimizes costs.
  • Minimizes failover time if a single server fails.
What should you include in the recommendation?

A An Always On availability group that has premium storage disks and a virtual network name (VNN).
B An Always On Failover Cluster Instance that has a virtual network name (VNN) and a standard file share.
C An Always On availability group that has premium storage disks and a distributed network name (DNN).
D An Always On Failover Cluster Instance that has a virtual network name (VNN) and a premium file share.

Correct answer: C

Explanation:

Always On availability groups on Azure Virtual Machines are similar to Always On availability groups on-premises and rely on the underlying Windows Server Failover Cluster. However, because the virtual machines are hosted in Azure, there are a few additional considerations, such as VM redundancy and routing traffic on the Azure network.

If you deploy your SQL Server VMs to a single subnet, you can configure a virtual network name (VNN) and an Azure Load Balancer, or a distributed network name (DNN) to route traffic to your availability group listener.

Virtual network name (VNN)

To match the on-premises experience for connecting to your availability group listener or failover cluster instance, deploy your SQL Server VMs to multiple subnets within the same virtual network. Having multiple subnets negates the need for the extra dependency on an Azure Load Balancer to route traffic to your HADR solution.

In a traditional on-premises environment, clustered resources such as failover cluster instances or Always On availability groups rely on the Virtual Network Name to route traffic to the appropriate target - either the failover cluster instance, or the listener of the Always On availability group. The virtual name binds the IP address in DNS, and clients can use either the virtual name or the IP address to connect to their high availability target, regardless of which node currently owns the resource. The VNN is a network name and address managed by the cluster, and the cluster service moves the network address from node to node during a failover event. During a failure, the address is taken offline on the original primary replica, and brought online on the new primary replica.

On Azure Virtual Machines in a single subnet, an additional component is necessary to route traffic from the client to the Virtual Network Name of the clustered resource (failover cluster instance, or the listener of an availability group). In Azure, a load balancer holds the IP address for the VNN that the clustered SQL Server resources rely on and is necessary to route traffic to the appropriate high availability target. The load balancer also detects failures with the networking components and moves the address to a new host.

The load balancer distributes inbound flows that arrive at the front end, and then routes that traffic to the instances defined by the back-end pool. You configure traffic flow by using load-balancing rules and health probes. With SQL Server FCI, the back-end pool instances are the Azure virtual machines running SQL Server, and with availability groups, the back-end pool is the listener. There is a slight failover delay when you're using the load balancer, because the health probe conducts alive checks every 10 seconds by default.
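If you do keep the load balancer, the probe interval is tunable. The following is a minimal sketch, assuming the azure-mgmt-network Python SDK and placeholder resource names, that tightens the existing health probe to the 5-second minimum:

```python
# Hypothetical sketch: shorten the health-probe interval on an existing internal
# load balancer that fronts an availability group listener (single-subnet VNN setup).
# Resource group, load balancer name, and subscription ID are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

lb = network_client.load_balancers.get("rg-sql-ha", "lb-ag-listener")

# 5 seconds is the minimum probe interval; it shortens failure detection, but some
# delay remains compared with a DNN, which needs no probe at all.
for probe in lb.probes or []:
    probe.interval_in_seconds = 5
    probe.number_of_probes = 2

network_client.load_balancers.begin_create_or_update(
    "rg-sql-ha", "lb-ag-listener", lb
).result()
```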

Configuring the VNN can be cumbersome: it is an additional source of failure, it can delay failure detection, and managing the extra resource adds overhead and cost. To address some of these limitations, SQL Server introduced support for the Distributed Network Name feature.

Distributed network name (DNN)

To match the on-premises experience for connecting to your availability group listener or failover cluster instance, deploy your SQL Server VMs to multiple subnets within the same virtual network. Having multiple subnets negates the need for the extra dependency on a DNN to route traffic to your HADR solution.

For SQL Server VMs deployed to a single subnet, the distributed network name feature provides an alternative way for SQL Server clients to connect to the SQL Server failover cluster instance or availability group listener without using a load balancer. The DNN feature is available starting with SQL Server 2016 SP3, SQL Server 2017 CU25, and SQL Server 2019 CU8, on Windows Server 2016 and later.

When a DNN resource is created, the cluster binds the DNS name with the IP addresses of all the nodes in the cluster. The client will try to connect to each IP address in this list to find which resource to connect to. You can accelerate this process by specifying MultiSubnetFailover=True in the connection string. This setting tells the provider to try all IP addresses in parallel, so the client can connect to the FCI or listener instantly.
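A minimal client-side sketch, assuming a hypothetical DNN-backed listener named ag1-listener.contoso.com and the pyodbc driver (placeholder names and credentials); the ODBC keyword equivalent of the ADO.NET MultiSubnetFailover=True setting is MultiSubnetFailover=Yes:

```python
# Hypothetical sketch: connecting to an availability group listener backed by a
# distributed network name (DNN). MultiSubnetFailover tells the driver to try all
# of the IP addresses registered for the DNN in parallel, so the client reaches
# the current primary replica quickly after a failover.
import pyodbc

conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:ag1-listener.contoso.com,1433;"   # DNN listener DNS name and port (placeholders)
    "Database=LobDb;"
    "UID=appuser;PWD=<password>;"
    "MultiSubnetFailover=Yes;"                    # ADO.NET equivalent: MultiSubnetFailover=True
    "Encrypt=Yes;TrustServerCertificate=No;"
)

with pyodbc.connect(conn_str, timeout=15) as conn:
    row = conn.cursor().execute("SELECT @@SERVERNAME").fetchone()
    print("Connected to replica:", row[0])
```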

A distributed network name is recommended over a load balancer when possible because:

  • The end-to-end solution is more robust since you no longer have to maintain the load balancer resource.
  • Eliminating the load balancer probes minimizes failover duration.
  • The DNN simplifies provisioning and management of the failover cluster instance or availability group listener with SQL Server on Azure VMs.

Most SQL Server features work transparently with FCI and availability groups when using the DNN, but there are certain features that may require special consideration.

References:

Always On availability group on SQL Server on Azure VMs

Windows Server Failover Cluster with SQL Server on Azure VMs



Question: 283
Measured Skill: Design data storage solutions (25-30%)

You have an Azure Active Directory (Azure AD) tenant.

You plan to deploy Azure Cosmos DB databases that will use the SQL API.

You need to recommend a solution to provide specific Azure AD user accounts with read access to the Cosmos DB databases.

What should you include in the recommendation?

A Shared access signatures (SAS) and Conditional Access policies
B Certificates and Azure Key Vault
C Master keys and Azure Information Protection policies
D A resource token and an Access control (IAM) role assignment

Correct answer: D

Explanation:

Azure Cosmos DB exposes a built-in role-based access control (RBAC) system that lets you:

  • Authenticate your data requests with an Azure Active Directory (Azure AD) identity.
  • Authorize your data requests with a fine-grained, role-based permission model.

The Access control (IAM) pane in the Azure portal is used to configure Azure role-based access control on Azure Cosmos DB resources. The roles are applied to users, groups, service principals, and managed identities in Active Directory. You can use built-in roles or custom roles for individuals and groups. 

To use the Azure Cosmos DB RBAC in your application, you have to update the way you initialize the Azure Cosmos DB SDK. Instead of passing your account's primary key, you have to pass an instance of a TokenCredential class. This instance provides the Azure Cosmos DB SDK with the context required to fetch an Azure AD token on behalf of the identity you wish to use.
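A minimal sketch using the azure-cosmos and azure-identity Python packages, with placeholder account, database, and container names; it assumes the signed-in Azure AD user already holds a data-plane role such as the built-in Cosmos DB Built-in Data Reader at the account or container scope:

```python
# Hypothetical sketch: initialize the Cosmos DB SDK with an Azure AD identity
# instead of the account's primary key. The account URL, database, and container
# names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.cosmos import CosmosClient

credential = DefaultAzureCredential()   # resolves the Azure AD user or service identity
client = CosmosClient(
    "https://myaccount.documents.azure.com:443/",
    credential=credential,
)

container = client.get_database_client("appdb").get_container_client("orders")

# Read access only requires a data-reader role assignment on the account scope.
for item in container.query_items(
    query="SELECT TOP 5 * FROM c",
    enable_cross_partition_query=True,
):
    print(item["id"])
```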

References:

Azure role-based access control in Azure Cosmos DB

Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account



Question: 284
Measured Skill: Design infrastructure solutions (25-30%)

You are designing a cost-optimized solution that uses Azure Batch to run two types of jobs on Linux nodes.

The first job type will consist of short-running tasks for a development environment. The second job type will consist of long-running Message Passing Interface (MPI) applications for a production environment that requires timely job completion.

You need to recommend the pool type and node type for each job type. The solution must minimize compute charges and leverage Azure Hybrid Benefit whenever possible.

What should you recommend?

(To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.)


A First job: Batch service and dedicated virtual machines
Second job: User subscription and low-priority virtual machines
B First job: Batch service and dedicated virtual machines
Second job: User subscription and dedicated virtual machines
C First job: User subscription and dedicated virtual machines
Second job: User subscription and low-priority virtual machines
D First job: User subscription and dedicated virtual machines
Second job: User subscription and low-priority virtual machines
E First job: User subscription and low-priority virtual machines
Second job: User subscription and dedicated virtual machines
F First job: User subscription and low-priority virtual machines
Second job: User subscription and low-priority virtual machines

Correct answer: B

Explanation:

Use Azure Batch to run large-scale parallel and high-performance computing (HPC) batch jobs efficiently in Azure. Azure Batch creates and manages a pool of compute nodes (virtual machines), installs the applications you want to run, and schedules jobs to run on the nodes. There's no cluster or job scheduler software to install, manage, or scale. Instead, you use Batch APIs and tools, command-line scripts, or the Azure portal to configure, manage, and monitor your jobs.

When you select a node size for an Azure Batch pool, you can choose from almost all the VM sizes available in Azure. Azure offers a range of sizes for Linux and Windows VMs for different workloads.

There are no costs for using Azure Batch itself, although there can be charges for the underlying compute resources and software licenses used to run Batch workloads. Costs may be incurred from virtual machines (VMs) in a pool, data transfer from the VM, or any input or output data stored in the cloud.

Virtual machines are the most significant resource used for Batch processing. The cost of using VMs for Batch is calculated based on the type, quantity, and the duration of use. VM billing options include Pay-As-You-Go or reservation (pay in advance). Both payment options have different benefits depending on your compute workload and will affect your bill differently.

You can create Batch Windows virtual machine pools and specify that Azure Hybrid Use Benefit licensing is used. When Azure Hybrid Use Benefit is specified, a discount is applied to the VM price. A new license type property has been added to the virtual machine configuration.

Low-priority virtual machines do not support Azure Hybrid Benefit.
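As an illustration only (not the exam's exact scenario), the sketch below uses the azure-batch Python SDK to create a dedicated pool with the license type property set; the account, pool ID, VM size, and image are placeholders, and license_type is only honored for Windows images on dedicated nodes:

```python
# Hypothetical sketch: create an Azure Batch pool of dedicated nodes with the
# Azure Hybrid Benefit license type applied. All names and sizes are placeholders.
from azure.batch import BatchServiceClient
from azure.batch import models as batchmodels
from azure.batch.batch_auth import SharedKeyCredentials

credentials = SharedKeyCredentials("<account-name>", "<account-key>")
client = BatchServiceClient(
    credentials, batch_url="https://<account>.<region>.batch.azure.com"
)

pool = batchmodels.PoolAddParameter(
    id="prod-mpi-pool",
    vm_size="Standard_HB120rs_v3",                 # RDMA-capable size for MPI workloads
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher="microsoftwindowsserver",
            offer="windowsserver",
            sku="2022-datacenter",
            version="latest",
        ),
        node_agent_sku_id="batch.node.windows amd64",
        license_type="Windows_Server",             # apply Azure Hybrid Benefit pricing
    ),
    target_dedicated_nodes=8,                      # dedicated nodes: not preempted, AHB-eligible
    target_low_priority_nodes=0,                   # low-priority nodes are cheaper but not AHB-eligible
    enable_inter_node_communication=True,          # required for tightly coupled MPI tasks
)
client.pool.add(pool)
```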

References:

What is Azure Batch?

Choose a VM size and image for compute nodes in an Azure Batch pool

Get cost analysis and set budgets for Azure Batch

Azure Batch updates



Question: 285
Measured Skill: Design infrastructure solutions (25-30%)

You plan to deploy an application named App1 that will run in containers on Azure Kubernetes Service (AKS) clusters. The AKS clusters will be distributed across four Azure regions.

You need to recommend a storage solution to ensure that updated container images are replicated automatically to all the Azure regions hosting the AKS clusters.

Which storage solution should you recommend?

A Geo-redundant storage (GRS) accounts
B Premium SKU Azure Container Registry
C Azure Content Delivery Network (CDN)
D Azure Cache for Redis

Correct answer: B

Explanation:

To deploy and run your applications in AKS, you need a way to store and pull the container images. Container Registry integrates with AKS, so it can securely store your container images or Helm charts. Container Registry supports multimaster geo-replication to automatically replicate your images to Azure regions around the world.

To improve performance and availability:

  1. Use Container Registry geo-replication to create a registry in each region where you have an AKS cluster.
  2. Each AKS cluster then pulls container images from the local container registry in the same region.

When you use Container Registry geo-replication to pull images from the same region, the results are:

  • Faster: Pull images from high-speed, low-latency network connections within the same Azure region.
  • More reliable: If a region is unavailable, your AKS cluster pulls the images from an available container registry.
  • Cheaper: No network egress charge between datacenters.

Geo-replication is a Premium SKU container registry feature.
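A minimal provisioning sketch using the azure-mgmt-containerregistry Python SDK, with placeholder resource group, registry name, and region list; the registry is created with the Premium SKU and one replication per additional AKS region:

```python
# Hypothetical sketch: create a Premium container registry and add replicas in
# the other regions that host AKS clusters. All names and regions are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerregistry import ContainerRegistryManagementClient
from azure.mgmt.containerregistry.models import Registry, Replication, Sku

client = ContainerRegistryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Geo-replication requires the Premium SKU.
client.registries.begin_create(
    "rg-app1", "app1registry",
    Registry(location="westeurope", sku=Sku(name="Premium")),
).result()

# One replica per additional AKS region; each cluster then pulls from its local replica.
for region in ["northeurope", "eastus", "southeastasia"]:
    client.replications.begin_create(
        "rg-app1", "app1registry", region, Replication(location=region)
    ).result()
```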

Reference: Best practices for business continuity and disaster recovery in Azure Kubernetes Service (AKS)



Question: 286
Measured Skill: Design data storage solutions (25-30%)

You have an on-premises application named App1 that uses an Oracle database.

You plan to use Azure Databricks to transform and load data from App1 to an Azure Synapse Analytics instance.

You need to ensure that the App1 data is available to Databricks.

Which two Azure services should you include in the solution?

(Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.)

A Azure Data Box Gateway
B Azure Import/Export service
C Azure Data Lake Storage
D Azure Data Box Edge
E Azure Data Factory

Correct answer: C, E

Explanation:

We need to use Azure Data Factory to extract the data from the Oracle database and load it into Azure Data Lake Storage Gen2. Then, we can read the data from Azure Data Lake Storage Gen2 into Azure Databricks, run transformations on it in Azure Databricks, and load the transformed data into Azure Synapse Analytics.
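A hypothetical PySpark sketch of the Databricks step, intended to run in a Databricks notebook where spark is predefined; the storage paths, JDBC URL, and table names are placeholders, and authentication for the storage account and the tempDir staging area is assumed to be configured separately:

```python
# Hypothetical sketch: read the data that Data Factory landed in Data Lake Storage
# Gen2, transform it, and load it into a dedicated SQL pool in Azure Synapse Analytics.
raw = (spark.read
       .format("parquet")
       .load("abfss://landing@datalakeacct.dfs.core.windows.net/app1/orders/"))

transformed = raw.dropDuplicates(["order_id"]).filter("order_total > 0")

(transformed.write
 .format("com.databricks.spark.sqldw")            # Azure Synapse connector
 .option("url", "jdbc:sqlserver://synapsews.sql.azuresynapse.net:1433;database=sqlpool1")
 .option("forwardSparkAzureStorageCredentials", "true")
 .option("dbTable", "dbo.Orders")
 .option("tempDir", "abfss://staging@datalakeacct.dfs.core.windows.net/tmp")  # staging area for the load
 .mode("append")
 .save())
```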

References:

Load data into Azure Data Lake Storage Gen2 with Azure Data Factory

Tutorial: Extract, transform, and load data by using Azure Databricks





 
 
 

© Copyright 2014 - 2023 by cert2brain.com