Microsoft Azure Cosmos DB (DP-420) Sample Questions
What is Azure Cosmos DB?
- A) A relational database management system
- B) A NoSQL database service
- C) A cloud-based document database
- D) An in-memory data store
Answer: B) A NoSQL database service
Explanation: Azure Cosmos DB is a globally distributed, multi-model database service provided by Microsoft Azure. It is a NoSQL database service, designed to handle non-relational data such as documents, key-value pairs, graph data, and columnar data.
What are the benefits of using Azure Cosmos DB?
- A) Scalability, high availability, and low latency
- B) Advanced security features and data privacy
- C) Both A and B
- D) None of the above
Answer: C) Both A and B
Explanation: Azure Cosmos DB provides a number of benefits to users, including scalability, high availability, and low latency. It also provides advanced security features and data privacy, ensuring that sensitive data is protected and secure. These benefits make Azure Cosmos DB an ideal choice for a wide range of use cases, such as web, mobile, gaming, and IoT applications.
What data models does Azure Cosmos DB support?
- A) Document
- B) Key-value
- C) Graph
- D) All of the above
Answer: D) All of the above
Explanation: Azure Cosmos DB is a multi-model database, which means that it supports multiple data models, including document, key-value, graph, and columnar. This enables users to choose the data model that is best suited to their specific use case, and to easily switch between models as their needs evolve.
What is the purpose of the Azure Cosmos DB query language?
- A) To retrieve data from the database
- B) To update data in the database
- C) To delete data from the database
- D) All of the above
Answer: A) To retrieve data from the database
Explanation: The Azure Cosmos DB query language is used to retrieve data from the database. It provides a flexible and powerful way for users to query data, and to filter and aggregate it based on specific criteria. For the Core (SQL) API, queries are written in a SQL-like syntax and user-defined functions are written in JavaScript; other APIs, such as the API for MongoDB, expose their own query dialects, making it easy for developers to work with the data in the database.
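To make this concrete, here is a minimal sketch, assuming placeholder endpoint, key, database, and container names, that runs a parameterized SQL query with the .NET SDK v3:

using Microsoft.Azure.Cosmos;

// Placeholders: substitute your own endpoint, key, and resource names.
using CosmosClient client = new CosmosClient("<endpoint>", "<key>");
Container container = client.GetContainer("mydb", "mycontainer");

// Parameterized Core (SQL) API query: filter and project.
QueryDefinition query = new QueryDefinition(
        "SELECT c.id, c.name FROM c WHERE c.category = @category")
    .WithParameter("@category", "books");

using FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(query);
while (iterator.HasMoreResults)
{
    // Each ReadNextAsync call fetches one page of results.
    foreach (var item in await iterator.ReadNextAsync())
        System.Console.WriteLine(item);
}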
What is the consistency model in Azure Cosmos DB?
- A) Eventual consistency
- B) Strong consistency
- C) Bounded staleness consistency
- D) All of the above
Answer: D) All of the above
Explanation: Azure Cosmos DB offers five consistency levels: strong, bounded staleness, session, consistent prefix, and eventual (the three options above are among them). This allows users to choose the level of consistency that is appropriate for their specific use case, and to balance consistency, performance, and availability. For example, applications that require low latency and high throughput may choose eventual consistency, while applications that require strong data consistency may choose strong consistency.
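For illustration, a .NET SDK client can relax the account's default consistency on a per-client basis (the SDK can weaken the account default but never strengthen it); the endpoint and key below are placeholders:

using Microsoft.Azure.Cosmos;

// Relax reads to eventual consistency for lower latency.
// The account-level default is configured on the account itself.
using CosmosClient client = new CosmosClient(
    "<endpoint>",
    "<key>",
    new CosmosClientOptions { ConsistencyLevel = ConsistencyLevel.Eventual });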
What is the role of the Azure Cosmos DB emulator in development and testing?
- A) To allow developers to test their applications locally
- B) To provide a live environment for testing applications
- C) To provide a development environment for building applications
- D) All of the above
Answer: A) To allow developers to test their applications locally
Explanation: The Azure Cosmos DB emulator provides developers with a local environment for testing their applications, without the need for a live connection to the Azure Cosmos DB service. This enables developers to test their applications in a controlled and isolated environment, and to easily simulate different scenarios and test cases. The emulator supports most features of the Azure Cosmos DB service (capabilities that depend on the cloud, such as multi-region replication, are not available locally), making it a useful tool for development and testing.
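Connecting to the emulator looks just like connecting to the live service. The sketch below uses the emulator's default local endpoint and its well-known, publicly documented authorization key:

using Microsoft.Azure.Cosmos;

// The emulator listens on localhost:8081 and ships with a fixed,
// publicly documented key, so no Azure subscription is required.
using CosmosClient emulatorClient = new CosmosClient(
    "https://localhost:8081",
    "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==");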
What is the purpose of the Azure Cosmos DB partitioning model?
- A) To distribute data across multiple nodes
- B) To improve performance by reducing the amount of data stored on a single node
- C) Both A and B
- D) None of the above
Answer: C) Both A and B
Explanation: The Azure Cosmos DB partitioning model is designed to distribute data across multiple nodes, and to improve performance by reducing the amount of data stored on a single node. This enables the database to scale horizontally and to handle large amounts of data and traffic, while still providing fast and reliable performance. The partitioning model is based on the concept of a partition key, which is used to distribute data across the nodes in the database.
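A brief sketch of how the partition key is declared when a container is created with the .NET SDK v3 (all names, including the /deviceId path, are illustrative; a good key has many distinct values):

using Microsoft.Azure.Cosmos;

using CosmosClient client = new CosmosClient("<endpoint>", "<key>");
Database database = await client.CreateDatabaseIfNotExistsAsync("mydb");

// Every item's /deviceId value determines the logical partition it lives in.
Container container = await database.CreateContainerIfNotExistsAsync(
    new ContainerProperties(id: "container1", partitionKeyPath: "/deviceId"),
    throughput: 400);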
What is the role of the Azure Cosmos DB data migration tool in migrating data to Azure Cosmos DB?
- A) To simplify the process of migrating data from other sources to Azure Cosmos DB
- B) To provide a graphical interface for migrating data to Azure Cosmos DB
- C) Both A and B
- D) None of the above
Answer: C) Both A and B
Explanation: The Azure Cosmos DB data migration tool is designed to simplify the process of migrating data from other sources to Azure Cosmos DB. It provides a graphical interface that makes it easy to select the data to be migrated and to specify the target database and collection, and it supports a wide range of data sources, including JSON, MongoDB, Cassandra, and SQL Server.
What is the purpose of the Azure Cosmos DB global distribution feature?
- A) To replicate data across multiple regions for improved data durability and availability
- B) To improve performance by reducing the amount of data stored on a single node
- C) Both A and B
- D) None of the above
Answer: A) To replicate data across multiple regions for improved data durability and availability
Explanation: The Azure Cosmos DB global distribution feature enables users to replicate their data across multiple regions, for improved data durability and availability. This enables users to keep their data close to their users, for fast and reliable access, and to ensure that their data is available even in the event of a regional outage. The global distribution feature provides multi-homing and active-active replication, and enables users to easily configure and manage their global distribution settings.
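As a small illustration of multi-homing, a .NET SDK client can state a preferred region and the SDK routes requests to the nearest available replica (endpoint, key, and region are placeholders; the set of replicated regions is configured on the account itself):

using Microsoft.Azure.Cosmos;

// ApplicationRegion only tells this client which region to prefer;
// replication is configured on the account, not in code.
using CosmosClient client = new CosmosClient(
    "<endpoint>",
    "<key>",
    new CosmosClientOptions { ApplicationRegion = Regions.WestUS });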
Question 1
In your Azure Cosmos DB Core (SQL) API account, you have a container named container1 whose contents you wish to make available as reference data for Azure Stream Analytics.
Solution: You create an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the input and Azure Blob Storage as the output.
Will this meet the goal?
- A. Yes
- B. No
Answer : B
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/sql/changefeed-ecommerce-solution
Question 2
In your Azure Cosmos DB Core (SQL) API account, you have a container named container1 whose contents you wish to make available as reference data for Azure Stream Analytics.
Solution: You create an Azure function that uses the Azure Cosmos DB Core (SQL) API change feed as a trigger and Azure Event Hubs as the output.
Will this meet the goal?
- A. Yes
- B. No
Answer : A
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/sql/changefeed-ecommerce-solution
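A minimal sketch of such a function, with hypothetical database, container, and connection-setting names; the change feed trigger receives each batch of inserted or replaced items and the Event Hub output binding forwards it:

using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;

public static class ChangeFeedToEventHub
{
    // Runs whenever items in container1 are inserted or replaced,
    // and sends the serialized batch to the "iot" event hub.
    [FunctionName("ChangeFeedToEventHub")]
    [return: EventHub("iot", Connection = "EventHubConnection")]
    public static string Run(
        [CosmosDBTrigger(
            databaseName: "db1",
            collectionName: "container1",
            ConnectionStringSetting = "CosmosConnection",
            LeaseCollectionName = "leases",
            CreateLeaseCollectionIfNotExists = true)]
        IReadOnlyList<Document> changes)
    {
        return Newtonsoft.Json.JsonConvert.SerializeObject(changes);
    }
}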
Question 3
You have a SQL API application named App1 that reads data from an Azure Cosmos DB Core (SQL) account every minute, running the same read queries each time with eventual consistency. You discover that the queries consume request units (RUs) instead of being served from the integrated cache, and you verify that the IntegratedCacheItemHitRate metric and the IntegratedCacheQueryHitRate metric both have values of 0. You verify that the dedicated gateway cluster has been provisioned and is used in the connection string. You are required to ensure that App1 uses the Azure Cosmos DB integrated cache. What must you configure?
- A. indexing policy of the Azure Cosmos DB container
- B. consistency level of the requests from App1
- C. connectivity mode of the App1 CosmosClient
- D. default consistency level of the Azure Cosmos DB account
Answer : C
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/integrated-cache-faq
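For reference, a hedged sketch of the fix the answer describes: the client must use the dedicated gateway connection string together with Gateway connectivity mode, because Direct mode bypasses the integrated cache (the connection string value is a placeholder):

using Microsoft.Azure.Cosmos;

// The integrated cache sits in the dedicated gateway, so requests
// must flow through it in Gateway connectivity mode.
using CosmosClient client = new CosmosClient(
    "<dedicated-gateway-connection-string>",
    new CosmosClientOptions { ConnectionMode = ConnectionMode.Gateway });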
Question 4
In your Azure Cosmos DB Core (SQL) API account, you have a container named container1. Items in container1 are updated every three seconds, and you have an Azure Functions app named function1 that should run whenever an item is inserted or replaced. You discover that function1 does not run on each upsert, and you need to ensure that function1 processes each upsert within one second of the upsert occurring. Which of the given properties will you change in the function.json file of function1?
- A. checkpointInterval
- B. leaseCollectionsThroughput
- C. maxItemsPerInvocation
- D. feedPollDelay
Answer : D
Reference: https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2-trigger
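In a C# class-library function the same setting surfaces as the FeedPollDelay property of the trigger attribute; in function.json it is the feedPollDelay value, in milliseconds (default 5000). A sketch with hypothetical names:

using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;

public static class Function1
{
    [FunctionName("function1")]
    public static void Run(
        [CosmosDBTrigger(
            databaseName: "db1",
            collectionName: "container1",
            ConnectionStringSetting = "CosmosConnection",
            LeaseCollectionName = "leases",
            FeedPollDelay = 1000)]  // poll the change feed every second
        IReadOnlyList<Document> changes)
    {
        // Process each insert or replace within one second of the upsert.
    }
}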
Question 5
You have the following query.
SELECT * FROM c
WHERE c.sensor = "TEMP1"
AND c.value < 22
AND c.timestamp >= 1619146031231
You need to recommend a composite index strategy that will minimize the request units (RUs) consumed by the query. What will you recommend?
- A. a composite index for (sensor ASC, value ASC) and a composite index for (sensor ASC, timestamp ASC)
- B. a composite index for (sensor ASC, value ASC, timestamp ASC) and a composite index for (sensor DESC, value DESC, timestamp DESC)
- C. a composite index for (value ASC, sensor ASC) and a composite index for (timestamp ASC, sensor ASC)
- D. a composite index for (sensor ASC, value ASC, timestamp ASC)
Answer : A
Reference: https://azure.microsoft.com/en-us/blog/three-ways-to-leverage-composite-indexes-in-azure-cosmos-db/
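A sketch of how the two recommended composite indexes could be declared through the .NET SDK v3 (the /sensor partition key is an assumption; the same policy can equally be edited as JSON in the portal):

using System.Collections.ObjectModel;
using Microsoft.Azure.Cosmos;

ContainerProperties props = new ContainerProperties("container1", "/sensor");

// Composite index for (sensor ASC, value ASC).
props.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath>
{
    new CompositePath { Path = "/sensor", Order = CompositePathSortOrder.Ascending },
    new CompositePath { Path = "/value", Order = CompositePathSortOrder.Ascending }
});

// Composite index for (sensor ASC, timestamp ASC).
props.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath>
{
    new CompositePath { Path = "/sensor", Order = CompositePathSortOrder.Ascending },
    new CompositePath { Path = "/timestamp", Order = CompositePathSortOrder.Ascending }
});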
Question 6
You plan to create an Azure Cosmos DB Core (SQL) API account that uses customer-managed keys stored in Azure Key Vault, and you need to configure an Azure Key Vault access policy that allows Azure Cosmos DB to access those keys. Which three of the following key permissions will you enable in the access policy?
- A. Wrap Key
- B. Get
- C. List
- D. Update
- E. Sign
- F. Verify
- G. Unwrap Key
Answer : ABG
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-setup-cmk
Question 7
Apache Kafka must be configured to ingest data from an Azure Cosmos DB Core (SQL) API account. Data from a container named telemetry must be added to a Kafka topic named iot, and the data must be stored in a compact binary format. Which three of the following configuration items will you include in the solution?
- A. "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector"
- B. "key.converter": "org.apache.kafka.connect.json.JsonConverter"
- C. "key.converter": "io.confluent.connect.avro.AvroConverter"
- D. "connect.cosmos.containers.topicmap": "iot#telemetry"
- E. "connect.cosmos.containers.topicmap": "iot"
- F. "connector.class": "com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector"
Answer : CDF
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/sql/kafka-connector-sink
https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/
Question 8
You plan to write a dataset by using an Azure Cosmos DB (SQL API) sink in an Azure Data Factory data flow. To optimize throughput, you need to ensure that 2,000 Apache Spark partitions are used to ingest the data. Which sink setting must be configured?
- A. Throughput
- B. Write throughput budget
- C. Batch size
- D. Collection action
Answer : C
Reference: https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-cosmos-db
Question 9
There is a container named container1 in an Azure Cosmos DB Core (SQL) API account, and a user named User1 needs to be allowed to insert items into container1. The solution must follow the principle of least privilege. Which of the following roles will you assign to User1?
- A. CosmosDB Operator only
- B. DocumentDB Account Contributor and Cosmos DB Built-in Data Contributor
- C. DocumentDB Account Contributor only
- D. Cosmos DB Built-in Data Contributor only
Answer : D
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/role-based-access-control
Question 10
You configure the diagnostic settings on your Azure Cosmos DB Core (SQL) API account so that all log information is sent to a Log Analytics workspace. You need to identify when the provisioned request units per second (RU/s) for resources within the account were modified. You wrote the given query.
AzureDiagnostics
| where Category == "ControlPlaneRequests"
What must be included in the query?
- A. | where OperationName startswith "AccountUpdateStart"
- B. | where OperationName startswith "SqlContainersDelete"
- C. | where OperationName startswith "MongoCollectionsThroughputUpdate"
- D. | where OperationName startswith "SqlContainersThroughputUpdate"
Answer : A
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/audit-control-plane-logs
Question 11
You run the following query against a container in an Azure Cosmos DB Core (SQL) API account.
SELECT
IS_NUMBER("1234") AS A,
IS_NUMBER(1234) AS B,
IS_NUMBER({prop: 1234}) AS C
What will be the output of the query?
- A. [{"A": false, "B": true, "C": false}]
- B. [{"A": true, "B": false, "C": true}]
- C. [{"A": true, "B": true, "C": false}]
- D. [{"A": true, "B": true, "C": true}]
Answer : A
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/sql/sql-query-is-number
Question 12
You need to implement a trigger in Azure Cosmos DB Core (SQL) API that runs before an item is inserted into a container. Which two of the following actions must be performed to ensure that the trigger runs?
- A. Append pre to the name of the JavaScript function trigger.
- B. For each create request, set the access condition in RequestOptions.
- C. Register the trigger as a pre-trigger.
- D. For each create request, set the consistency level to session in RequestOptions.
- E. For each create request, set the trigger name in RequestOptions.
Answer : CE
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/sql/how-to-use-stored-procedures-triggers-udfs
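A sketch of both required actions with the .NET SDK v3, using hypothetical names; the trigger is registered once as a pre-trigger, then named explicitly on every create request, because triggers never run implicitly:

using System.Collections.Generic;
using System.IO;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Scripts;

using CosmosClient client = new CosmosClient("<endpoint>", "<key>");
Container container = client.GetContainer("mydb", "mycontainer");

// Action 1: register the JavaScript function as a pre-trigger for creates.
await container.Scripts.CreateTriggerAsync(new TriggerProperties
{
    Id = "validateItem",
    Body = File.ReadAllText("validateItem.js"),
    TriggerType = TriggerType.Pre,
    TriggerOperation = TriggerOperation.Create
});

// Action 2: name the trigger in the request options of each create request.
// (Assumes the container is partitioned on /category.)
var newItem = new { id = "1", category = "books" };
await container.CreateItemAsync(newItem, requestOptions: new ItemRequestOptions
{
    PreTriggers = new List<string> { "validateItem" }
});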
Question 13
You have an Azure Cosmos DB Core (SQL) API account named account1 that uses autoscale throughput. You need to run a function when the normalized request units per second for a container in account1 reaches a specific value.
Solution: You configure an Azure Monitor alert to trigger the function.
Will this meet the goal?
- A. Yes
- B. No
Answer : A
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/create-alerts
Question 14
You have an Azure Cosmos DB Core (SQL) API account named account1 that uses autoscale throughput. You need to run a function when the normalized request units per second for a container in account1 reaches a specific value.
Solution: You configure the function to have an Azure Cosmos DB trigger.
Will this meet the goal?
- A. Yes
- B. No
Answer : B
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/create-alerts
Question 15
You have an Azure Cosmos DB Core (SQL) API account named account1 that uses autoscale throughput. You need to run a function when the normalized request units per second for a container in account1 reaches a specific value.
Solution: You configure an application to use the change feed processor to read the change feed, and you configure the application to trigger the function.
Will this meet the goal?
- A. Yes
- B. No
Answer : B
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/create-alerts
Question 16
HOTSPOT – You have a database named telemetry in an Azure Cosmos DB Core (SQL) API account. The database stores IoT data in two containers named readings and devices.
Documents in readings have the following structure.
- ✑ id
- ✑ deviceid
- ✑ timestamp
- ✑ ownerid
- ✑ measures (array)
- – type
- – value
- – metricid
Documents in devices have the following structure.
- ✑ id
- ✑ deviceid
- ✑ owner
- – ownerid
- – emailaddress
- – name
- ✑ brand
- ✑ model
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Hot Area:
Statements | Yes/No |
To return all the readings for the devices owned by a specific emailaddress, multiple queries must be performed | |
To return deviceid, ownerid, timestamp, and value for a specific metricid, a join must be performed | |
To return deviceid, ownerid, emailaddress, and model, a join must be performed |
Answer :
Statements | Yes/No |
To return all the readings for the devices owned by a specific emailaddress, multiple queries must be performed | Yes |
To return deviceid, ownerid, timestamp, and value for a specific metricid, a join must be performed | No |
To return deviceid, ownerid, emailaddress, and model, a join must be performed | No |
Question 17
DRAG DROP – In your Azure Cosmos DB Core (SQL API) account, you have two containers named container1 and container2. The account is configured for multi-region writes.
The following is a sample of a document in container1:
{
"customerId": 1234,
"firstName": "John",
"lastName": "Smith",
"policyYear": 2021
}
The following is a sample of a document in container2:
{
"gpsId": 1234,
"latitude": 38.8951,
"longitude": -77.0364
}
You are required to configure conflict resolution for meeting the following requirements:
- ✑ For container1 you are required to resolve conflicts using the highest value for policyYear.
- ✑ For container2 you are required to resolve conflicts by accepting the distance closest to latitude: 40.730610 and longitude: -73.935242.
- ✑ Administrative effort must be minimized to implement the solution.
What will you configure for each container?
Select and Place:
Configurations | Answer Area |
Last write wins (default) mode | Container 1: |
Merge procedures (custom) mode | Container 2: |
An application that reads from the conflicts feed |
Answer :
Configurations | Answer Area |
Last write wins (default) mode | Container 1: Last write wins (default) mode |
Merge procedures (custom) mode | Container 2: Merge procedures (custom) mode |
An application that reads from the conflicts feed |
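A sketch of how these policies could be declared with the .NET SDK v3 when the containers are created (partition key paths and the stored procedure name are assumptions):

using Microsoft.Azure.Cosmos;

// container1: last-writer-wins on /policyYear, so the highest value is kept.
ContainerProperties container1Props = new ContainerProperties("container1", "/customerId")
{
    ConflictResolutionPolicy = new ConflictResolutionPolicy
    {
        Mode = ConflictResolutionMode.LastWriterWins,
        ResolutionPath = "/policyYear"
    }
};

// container2: distance-based resolution needs custom logic, expressed as a
// merge stored procedure that the service invokes on each conflict.
ContainerProperties container2Props = new ContainerProperties("container2", "/gpsId")
{
    ConflictResolutionPolicy = new ConflictResolutionPolicy
    {
        Mode = ConflictResolutionMode.Custom,
        ResolutionProcedure = "dbs/db1/colls/container2/sprocs/resolveByDistance"
    }
};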
Question 18
DRAG DROP – You have an app that uses an Azure Cosmos DB Core (SQL API) account to store data. The app performs queries that return large result sets, and you need to paginate the results so that each page returns 80 items. Which three of the given actions should you perform in sequence?
Select and Place:
Actions | Answer Area |
Configure MaxItemCount in QueryRequestOptions | |
Run the query and provide a continuation token | |
Configure MaxBufferedItemCount in QueryRequestOptions | |
Append the results to a variable | |
Run the query and increment MaxItemCount |
Answer :
1. Configure MaxItemCount in QueryRequestOptions
2. Run the query and provide a continuation token
3. Append the results to a variable
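A sketch of fetching one 80-item page with the .NET SDK v3 (endpoint, key, and names are placeholders); the continuation token returned with each page is handed back on a later call to resume where the page ended:

using System.Collections.Generic;
using Microsoft.Azure.Cosmos;

using CosmosClient client = new CosmosClient("<endpoint>", "<key>");
Container container = client.GetContainer("mydb", "mycontainer");

string continuation = null;   // pass the previous page's token here
List<dynamic> pageResults = new List<dynamic>();

// Step 1: configure MaxItemCount so each page holds at most 80 items.
using FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
    "SELECT * FROM c",
    continuationToken: continuation,
    requestOptions: new QueryRequestOptions { MaxItemCount = 80 });

// Steps 2 and 3: run the query, append the results, keep the new token.
if (iterator.HasMoreResults)
{
    FeedResponse<dynamic> page = await iterator.ReadNextAsync();
    pageResults.AddRange(page);
    continuation = page.ContinuationToken;
}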
Question 19
You maintain a relational database for a book publisher containing the following tables.
Name | Columns
Author | authorId (primary key), fullname, address, contactinfo
Book | bookId (primary key), isbn, title, genre
Bookauthorlnk | authorId (foreign key), bookId (foreign key)
In most cases, a query will list the books for an authorId. In order to replace the relational database with Azure Cosmos DB Core (SQL) API, you must develop a non-relational data model. It is essential that the solution minimizes latency and read operation costs. What must be included in the solution?
- A. Creating a container for Author and for a Book. In each Author document, embedding a bookId for each book by the author. In each Book document embedding an authorId of each author.
- B. Creating Author, Book, and Bookauthorlnk documents in the same container.
- C. Creating a container containing a document for each Author and a document for each Book. In each Book document, embedding an authorId.
- D. Creating a container for Author and for a Book. In each Author document and Book document embedding the data from Bookauthorlnk.
Answer : A
Question 20
HOTSPOT – You have a container in your Azure Cosmos DB Core (SQL) API account, and you need the Azure Cosmos DB SDK to use optimistic concurrency to replace a document. What must be included in the code?
Hot Area:
Request Options property to set:
- AccessCondition
- ConsistencyLevel
- SessionToken
Document property that will be compared:
- _etag
- _id
- _rid
Answer :
Request Options property to set: AccessCondition
Document property that will be compared: _etag
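A sketch of the pattern with hypothetical names. It uses the v3 .NET SDK, where the v2 AccessCondition pattern surfaces as ItemRequestOptions.IfMatchEtag, but the principle is the same: the replace succeeds only while the stored _etag is unchanged:

using Microsoft.Azure.Cosmos;

using CosmosClient client = new CosmosClient("<endpoint>", "<key>");
Container container = client.GetContainer("mydb", "mycontainer");

// Read the current document; the response carries its current _etag.
ItemResponse<dynamic> read = await container.ReadItemAsync<dynamic>(
    "item1", new PartitionKey("pk1"));

dynamic doc = read.Resource;
doc.status = "updated";

// The replace is rejected with 412 Precondition Failed if another writer
// changed the document (and therefore its _etag) in the meantime.
await container.ReplaceItemAsync(doc, "item1", new PartitionKey("pk1"),
    new ItemRequestOptions { IfMatchEtag = read.ETag });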