Exam DP-420: Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB
Exam DP-420: Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB assesses a candidate’s technical skills and abilities. Candidates for this exam should have a solid foundation of knowledge and experience designing Azure applications and working with Azure Cosmos DB. They should be able to build applications using the Core (SQL) API and SDKs, write efficient queries, author an appropriate indexing policy, provision and manage Azure resources, and create server-side objects with JavaScript. They should also be able to interpret JSON, read C# or Java code, and use PowerShell.
Skills required for this exam
Candidates for Exam DP-420 are required to have subject matter expertise in:
- Designing
- Implementing
- Monitoring cloud-native applications that store and manage data
Exam DP-420 Details
Exam DP-420: Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB is conducted only in English. The examination fee is $165 USD, and there are no mandatory prerequisites for taking the exam.
Roles and Responsibilities for Exam DP-420
Candidates preparing to take the exam should be able to perform the following tasks:
- Design and implement data models and data distribution
- Load data into an Azure Cosmos DB database
- Optimize and maintain the solution
- Integrate the solution with other Azure services
- Design, implement, and monitor solutions that consider security, availability, resilience, and performance requirements
Eligibility Criteria
There are no mandatory prerequisites for this exam, although applicants should have a solid foundation in the important areas listed below. Passing the exam earns the Microsoft Certified: Azure Cosmos DB Developer Specialty certification.
Exam DP-420 Course Objectives
1. Design and Implement Data Models (35–40%)
Design and implement a non-relational data model for Azure Cosmos DB Core for NoSQL
- develop a design by storing multiple entity types in the same container
- develop a design by storing multiple related entities in the same document (Microsoft Documentation: Table design patterns)
- develop a model that denormalizes data across documents (Microsoft Documentation: Data modeling in Azure Cosmos DB)
- develop a design by referencing between documents
- identify primary and unique keys (Microsoft Documentation: Primary and Foreign Key Constraints)
- identify data and associated access patterns
- specify a default time to live (TTL) on a container for a transactional store (Microsoft Documentation: Configure time to live in Azure Cosmos DB)
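To make these modeling objectives concrete, here is a small hypothetical sketch in plain JavaScript (property names are invented for illustration): two entity types share one container, discriminated by a `type` property and co-located under the same partition key, and the order document carries a denormalized copy of the customer's name so it can be read without a second lookup.

```javascript
// Two entity types stored in the same container, sharing /customerId
// as the partition key. All property names are illustrative.
const customer = {
  id: "c1",
  type: "customer",   // discriminator for the entity type
  customerId: "c1",
  name: "Ada",
  ttl: -1             // -1 = never expire, overriding any container default TTL
};

const order = {
  id: "o1",
  type: "order",
  customerId: "c1",    // same logical partition as the customer
  customerName: "Ada", // denormalized copy of the customer's name
  items: [{ sku: "sku-1", qty: 2 }]
};

console.log(order.customerId === customer.customerId); // true
```

Because both documents share a partition key value, related reads stay within one logical partition, which is the usual motivation for this pattern.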
Design a data partitioning strategy for Azure Cosmos DB Core for NoSQL
- choose a partitioning strategy based on a specific workload (Microsoft Documentation: Horizontal, vertical, and functional data partitioning)
- choose a partition key (Microsoft Documentation: Partitioning and horizontal scaling in Azure Cosmos DB)
- plan for transactions when choosing a partition key
- evaluate the cost of using a cross-partition query (Microsoft Documentation: Optimize request cost in Azure Cosmos DB)
- calculate and evaluate data distribution based on partition key selection (Microsoft Documentation: Horizontal, vertical, and functional data partitioning)
- calculate and evaluate throughput distribution based on partition key selection
- construct and implement a synthetic partition key (Microsoft Documentation: Create a synthetic partition key)
- design and implement a hierarchical partition key
- design partitioning for workloads that require multiple partition keys (Microsoft Documentation: Partitioning and horizontal scaling in Azure Cosmos DB)
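The synthetic partition key objective can be sketched in a few lines of JavaScript. This is a minimal illustration with made-up property names: one helper concatenates two properties into a single key, and a variant adds a hash-based suffix to spread a hot key across a fixed number of sub-partitions.

```javascript
// Synthetic partition key: concatenate two properties so writes spread
// across more logical partitions than either property alone would give.
function syntheticKey(doc) {
  return `${doc.deviceId}-${doc.date}`;
}

// Variant: append a deterministic hash-based suffix (0..suffixCount-1)
// to fan a single hot key out across several sub-partitions.
function suffixedKey(doc, suffixCount) {
  let hash = 0;
  for (const ch of doc.deviceId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return `${doc.deviceId}-${hash % suffixCount}`;
}

const doc = { deviceId: "sensor-7", date: "2024-01-15" };
console.log(syntheticKey(doc)); // "sensor-7-2024-01-15"
```

Note the trade-off: a suffixed key spreads writes, but reading "all items for sensor-7" then requires fanning out a query across every suffix.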
Plan and implement sizing and scaling for a database created with Azure Cosmos DB
- evaluate the throughput and data storage requirements for a specific workload (Microsoft Documentation: Introduction to provisioned throughput in Azure Cosmos DB)
- choose between serverless and provisioned models (Microsoft Documentation: choose between provisioned throughput and serverless)
- choose when to use database-level provisioned throughput (Microsoft Documentation: Introduction to provisioned throughput in Azure Cosmos DB)
- design for granular scale units and resource governance
- evaluate the cost of the global distribution of data (Microsoft Documentation: Optimize request cost in Azure Cosmos DB)
- configure throughput for Azure Cosmos DB by using the Azure portal
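Evaluating throughput requirements usually starts with back-of-the-envelope arithmetic. The sketch below uses rough rules of thumb (roughly 1 RU per 1 KB point read and about 5 RU per 1 KB write); these figures are approximations for estimation only, not official guidance, and real costs depend on indexing, consistency level, and item shape.

```javascript
// Rough RU/s estimate from workload shape. The per-operation RU figures
// are rules of thumb, not exact SDK- or service-reported charges.
function estimateRu(readsPerSec, writesPerSec, itemSizeKb) {
  const readRu = readsPerSec * 1 * itemSizeKb;   // ~1 RU per 1 KB point read
  const writeRu = writesPerSec * 5 * itemSizeKb; // ~5 RU per 1 KB write
  return readRu + writeRu;
}

// 500 reads/s and 100 writes/s of 1 KB items:
console.log(estimateRu(500, 100, 1)); // 1000 RU/s
```

An estimate like this also feeds the serverless-versus-provisioned decision: spiky, low-average workloads tend to favor serverless, while sustained traffic favors provisioned or autoscale throughput.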
Implement client connectivity options in the Azure Cosmos DB SDK
- choose a connectivity mode (gateway versus direct) (Microsoft Documentation: Azure Cosmos DB SQL SDK connectivity modes)
- implement a connectivity mode (Microsoft Documentation: Connectivity modes and requirements)
- create a connection to a database (Microsoft Documentation: Connecting to the Database Engine)
- enable offline development by using the Azure Cosmos DB emulator (Microsoft Documentation: Install and use the Azure Cosmos DB Emulator)
- handle connection errors
- implement a singleton for the client
- specify a region for global distribution (Microsoft Documentation: Set up Azure Cosmos DB global distribution using the SQL API)
- configure client-side threading and parallelism options (Microsoft Documentation: Performance tips for Azure Cosmos DB and .NET)
- enable SDK logging (Microsoft Documentation: Logging with the Azure SDK for .NET)
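The singleton-client objective can be sketched as follows. In real code you would construct the SDK's client (for example `CosmosClient` from the `@azure/cosmos` package) exactly once and reuse it; a stub class stands in here so the sketch is self-contained, and the endpoint is a placeholder.

```javascript
// Stub standing in for the real SDK client, so this sketch runs on its own.
class StubCosmosClient {
  constructor(options) { this.options = options; }
}

let client; // module-level cache: one client per process

function getClient() {
  if (!client) {
    client = new StubCosmosClient({
      endpoint: "https://example.documents.azure.com:443/", // placeholder
      connectionPolicy: { connectionMode: "Direct" }        // or "Gateway"
    });
  }
  return client;
}

console.log(getClient() === getClient()); // true: the same instance is reused
```

Reusing one client preserves connection pools and per-partition routing caches; creating a client per request is a common cause of socket exhaustion and latency spikes.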
Implement data access by using the SQL Language for Azure Cosmos DB for NoSQL
- implement queries that use arrays, nested objects, aggregation, and ordering (Microsoft Documentation: Common query patterns in Azure Stream Analytics)
- implement a correlated subquery (Microsoft Documentation: Subqueries (SQL Server))
- implement queries that use array and type-checking functions (Microsoft Documentation: Type checking functions (Azure Cosmos DB))
- implement queries that use mathematical, string, and date functions (Microsoft Documentation: C# Functions and Operators (U-SQL))
- implement queries based on variable data (Microsoft Documentation: Query expression basics)
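Queries in the NoSQL query language are submitted as a query spec: SQL text plus named parameters. The sketch below builds such a spec in JavaScript, combining a `JOIN ... IN` over a nested array, a type-checking function, and ordering; the container shape and property names are invented for illustration.

```javascript
// Build a parameterized query spec of the shape the Cosmos DB SDKs accept:
// { query, parameters }. Property names (orderTotals, total) are made up.
function ordersOverTotal(minTotal) {
  return {
    query: `
      SELECT c.id, c.customerName, t.total
      FROM c
      JOIN t IN c.orderTotals
      WHERE t.total >= @minTotal AND IS_NUMBER(t.total)
      ORDER BY c.id`,
    parameters: [{ name: "@minTotal", value: minTotal }]
  };
}

const spec = ordersOverTotal(100);
console.log(spec.parameters[0].value); // 100
```

Parameterized specs avoid string concatenation (and the injection risks that come with it) and let the service cache the query plan across different parameter values.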
Implement data access by using Azure Cosmos DB for NoSQL SDKs
- choose when to use a point operation versus a query operation (Microsoft Documentation: Basic Query Operations (Visual Basic))
- implement a point operation that creates, updates, and deletes documents (Microsoft Documentation: Implement Azure Cosmos DB SQL API point operations)
- implement an update by using a patch operation
- manage multi-document transactions using SDK Transactional Batch (Microsoft Documentation: Transactional batch operations in Azure Cosmos DB using the .NET SDK)
- perform a multi-document load using Bulk Support in the SDK
- implement optimistic concurrency control using ETags
- override default consistency by using query request options
- implement session consistency by using session tokens (Microsoft Documentation: Manage consistency levels in Azure Cosmos DB)
- implement a query operation that includes pagination (Microsoft Documentation: Efficiently Paging Through Large Amounts of Data (C#))
- implement a query operation by using a continuation token (Microsoft Documentation: Pagination in Azure Cosmos DB)
- handle transient errors and 429s
- specify TTL for a document (Microsoft Documentation: Configure time to live in Azure Cosmos DB)
- retrieve and use query metrics
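The pagination-with-continuation-token objective can be simulated without a live account. In this self-contained sketch the "data store" is just an array and the token is a plain offset; the real SDKs return an opaque token string, but the drain loop has the same shape: request a page, carry the token forward, stop when it is null.

```javascript
// Fake result set standing in for a container's query results.
const allItems = Array.from({ length: 7 }, (_, i) => ({ id: `item-${i}` }));

// Returns at most maxItemCount items plus a continuation token,
// mirroring how SDK query iterators surface pages.
function queryPage(continuation, maxItemCount) {
  const start = continuation ?? 0;
  const items = allItems.slice(start, start + maxItemCount);
  const next = start + items.length;
  return { items, continuation: next < allItems.length ? next : null };
}

// Drain all pages, carrying the token between "requests".
let token = null, fetched = [];
do {
  const page = queryPage(token, 3);
  fetched = fetched.concat(page.items);
  token = page.continuation;
} while (token !== null);

console.log(fetched.length); // 7
```

Because the token captures the cursor position, a client can persist it and resume paging later, or hand it to another process, without re-reading earlier pages.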
Implement server-side programming in Azure Cosmos DB Core for NoSQL by using JavaScript
- write, deploy, and call a stored procedure (Microsoft Documentation: Create a stored procedure)
- design stored procedures to work with multiple items transactionally (Microsoft Documentation: Transactional batch operations in Azure Cosmos DB using the .NET SDK)
- implement and call triggers (Microsoft Documentation: CREATE TRIGGER (Transact-SQL))
- implement a user-defined function (Microsoft Documentation: User-Defined Functions)
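Of the server-side objects, a user-defined function (UDF) is the easiest to sketch, because it is just a plain JavaScript function registered on the container and invoked from queries via the `udf.` prefix. The rate table below is invented for illustration.

```javascript
// A Cosmos DB UDF body: a pure JavaScript function. Once registered on
// a container as "taxAmount", a query can call it as udf.taxAmount(...).
function taxAmount(price, region) {
  const rates = { eu: 0.2, us: 0.07 }; // illustrative rates only
  const rate = rates[region] ?? 0;
  return price * rate;
}

// Example query usage (not executed here):
//   SELECT c.id, udf.taxAmount(c.price, c.region) AS tax FROM c
console.log(taxAmount(100, "eu")); // 20
```

Keep in mind that UDFs run per evaluated document and are not covered by the index, so a `WHERE udf.f(c.x) = ...` filter typically forces a scan; they are best used in the SELECT projection.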
2. Design and Implement Data Distribution (5–10%)
Design and implement a replication strategy for Azure Cosmos DB
- choose when to distribute data (Microsoft Documentation: designing distributed tables using dedicated SQL pool)
- define automatic failover policies for regional failure for Azure Cosmos DB for NoSQL
- perform manual failovers to move single master write regions
- choose a consistency model (Microsoft Documentation: Consistency levels in Azure Cosmos DB)
- identify use cases for different consistency models (Microsoft Documentation: Consistency levels in Azure Cosmos DB)
- evaluate the impact of consistency model choices on availability and associated request unit (RU) cost
- evaluate the impact of consistency model choices on performance and latency
- specify application connections to replicated data (Microsoft Documentation: Database replication)
Design and implement multi-region write
- choose when to use multi-region write (Microsoft Documentation: Configure multi-region writes in your applications that use Azure Cosmos DB)
- implement multi-region write
- implement a custom conflict resolution policy for Azure Cosmos DB for NoSQL
3. Integrate an Azure Cosmos DB Solution (5–10%)
Enable Azure Cosmos DB analytical workloads
- enable Azure Synapse Link (Microsoft Documentation: Azure Synapse Link for Azure Cosmos DB)
- choose between Azure Synapse Link and Spark Connector (Microsoft Documentation: Azure Synapse Analytics)
- enable the analytical store on a container (Microsoft Documentation: Azure Cosmos DB analytical store)
- implement custom partitioning in Azure Synapse Link
- enable a connection to an analytical store and query from Azure Synapse Spark or Azure Synapse SQL (Microsoft Documentation: Configure and use Azure Synapse Link for Azure Cosmos DB)
- perform a query against the transactional store from Spark (Microsoft Documentation: Query Azure Cosmos DB with Apache Spark for Azure Synapse Analytics)
- write data back to the transactional store from Spark (Microsoft Documentation: Manage data with Azure Cosmos DB Spark 3 OLTP Connector for SQL API)
- implement Change Data Capture in the Azure Cosmos DB analytical store
- implement time travel in Azure Synapse Link for Azure Cosmos DB
Implement solutions across services
- integrate events with other applications by using Azure Functions and Azure Event Hubs (Microsoft Documentation: Azure Event Hubs trigger and bindings for Azure Functions)
- denormalize data by using Change Feed and Azure Functions (Microsoft Documentation: Change feed design patterns in Azure Cosmos DB)
- enforce referential integrity by using Change Feed and Azure Functions (Microsoft Documentation: Optimize databases by using advanced modeling patterns for Azure Cosmos DB)
- aggregate data by using Change Feed and Azure Functions, including reporting (Microsoft Documentation: Use Azure Cosmos DB change feed to visualize real-time data analytics)
- archive data by using Change Feed and Azure Functions
- implement Azure Cognitive Search for an Azure Cosmos DB solution (Microsoft Documentation: Index data from Azure Cosmos DB using SQL or MongoDB APIs)
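Several of the objectives above share one pattern: a change-feed handler reacts to changed documents and maintains derived data. The sketch below shows a denormalization handler in isolation; in production this logic would typically sit in an Azure Function with a Cosmos DB trigger, but here the feed batch and the target store are plain in-memory structures and all names are illustrative.

```javascript
// Materialized view keyed by order id; stands in for a second container.
const viewStore = new Map();

// Change-feed handler: for each changed order document, upsert a
// denormalized projection into the view. Non-order changes are skipped.
function handleChanges(changes) {
  for (const doc of changes) {
    if (doc.type !== "order") continue;
    viewStore.set(doc.id, {
      orderId: doc.id,
      customerName: doc.customerName, // denormalized copy
      total: doc.total
    });
  }
}

// Simulated change-feed batch: one order, one unrelated customer change.
handleChanges([
  { id: "o1", type: "order", customerName: "Ada", total: 40 },
  { id: "c1", type: "customer", name: "Ada" } // ignored by this handler
]);

console.log(viewStore.size); // 1
```

The same skeleton serves the other objectives listed: swap the body of the loop to enforce referential integrity, update aggregates, or copy aged-out documents to an archive store.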
4. Optimize an Azure Cosmos DB Solution (15–20%)
Optimize query performance when using the API for Azure Cosmos DB for NoSQL
- adjust indexes on the database (Microsoft Documentation: Modify an Index)
- calculate the cost of the query
- retrieve request unit cost of a point operation or query (Microsoft Documentation: Find the request unit charge for operations executed in Azure Cosmos DB SQL API)
- implement Azure Cosmos DB integrated cache (Microsoft Documentation: Azure Cosmos DB integrated cache)
Design and implement change feeds for Azure Cosmos DB for NoSQL
- develop an Azure Functions trigger to process a change feed (Microsoft Documentation: Serverless event-based architectures with Azure Cosmos DB and Azure Functions)
- consume a change feed from within an application by using the SDK (Microsoft Documentation: Change feed in Azure Cosmos DB)
- manage the number of change feed instances by using the change feed estimator (Microsoft Documentation: Use the change feed estimator)
- implement denormalization by using a change feed
- implement referential enforcement by using a change feed (Microsoft Documentation: Optimize databases by using advanced modeling patterns for Azure Cosmos DB)
- implement aggregation persistence by using a change feed (Microsoft Documentation: Change feed design patterns in Azure Cosmos DB)
- implement data archiving by using a change feed (Microsoft Documentation: Change feed support in Azure Blob Storage)
Define and implement an indexing strategy for an Azure Cosmos DB for NoSQL
- choose when to use a read-heavy versus write-heavy index strategy
- choose an appropriate index type (Microsoft Documentation: Indexes)
- configure a custom indexing policy by using the Azure portal
- implement a composite index (Microsoft Documentation: CREATE INDEX (Transact-SQL))
- optimize index performance (Microsoft Documentation: Optimize index maintenance to improve query performance)
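A custom indexing policy ties several of these bullets together. The sample below is a plausible policy document, not one taken from any particular workload: it keeps the default include-everything path, excludes a large free-text property that is never filtered on, and adds a composite index to serve a common `ORDER BY` pair. Property paths are illustrative.

```json
{
  "indexingMode": "consistent",
  "includedPaths": [
    { "path": "/*" }
  ],
  "excludedPaths": [
    { "path": "/description/?" }
  ],
  "compositeIndexes": [
    [
      { "path": "/customerId", "order": "ascending" },
      { "path": "/orderDate", "order": "descending" }
    ]
  ]
}
```

Excluding unqueried paths is the typical write-heavy optimization (fewer index updates per write), while composite indexes are the read-heavy one (multi-property `ORDER BY` without per-document sorting).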
5. Maintain an Azure Cosmos DB Solution (25–30%)
Monitor and troubleshoot an Azure Cosmos DB solution
- evaluate response status code and failure metrics (Microsoft Documentation: Supported metrics with Azure Monitor)
- monitor metrics for normalized throughput usage by using Azure Monitor (Microsoft Documentation: Monitor and debug with insights in Azure Cosmos DB)
- monitor server-side latency metrics by using Azure Monitor (Microsoft Documentation: monitor the server-side latency for operations in an Azure Cosmos DB container or account)
- monitor data replication in relation to latency and availability (Microsoft Documentation: Measure Latency and Validate Connections for Transactional Replication)
- configure Azure Monitor alerts for Azure Cosmos DB (Microsoft Documentation: Create alerts for Azure Cosmos DB using Azure Monitor)
- implement and query Azure Cosmos DB logs (Microsoft Documentation: Monitor Azure Cosmos DB data by using diagnostic settings in Azure)
- monitor throughput across partitions (Microsoft Documentation: Monitor and debug with insights in Azure Cosmos DB)
- monitor distribution of data across partitions (Microsoft Documentation: Horizontal, vertical, and functional data partitioning)
- monitor security by using logging and auditing (Microsoft Documentation: Azure security logging and auditing)
Implement backup and restore for an Azure Cosmos DB solution
- choose between periodic and continuous backup (Microsoft Documentation: Online backup and on-demand data restore in Azure Cosmos DB)
- configure periodic backup (Microsoft Documentation: Configure Azure Cosmos DB account with periodic backup)
- configure continuous backup and recovery (Microsoft Documentation: Continuous backup with point-in-time restore in Azure Cosmos DB)
- locate a recovery point for a point-in-time recovery (Microsoft Documentation: Manage recovery points)
- recover a database or container from a recovery point (Microsoft Documentation: Recover using automated database backups – Azure SQL Database & SQL Managed Instance)
Implement security for an Azure Cosmos DB solution
- choose between service-managed and customer-managed encryption keys (Microsoft Documentation: Customer-managed keys for Azure Storage encryption)
- configure network-level access control for Azure Cosmos DB (Microsoft Documentation: Configure IP firewall in Azure Cosmos DB)
- configure data encryption for Azure Cosmos DB (Microsoft Documentation: Data encryption in Azure Cosmos DB)
- manage control plane access to Azure Cosmos DB by using Azure role-based access control (RBAC) (Microsoft Documentation: Azure role-based access control in Azure Cosmos DB)
- manage data plane access to Azure Cosmos DB by using keys (Microsoft Documentation: Secure access to data in Azure Cosmos DB)
- manage data plane access to Azure Cosmos DB by using Microsoft Entra ID
- configure Cross-Origin Resource Sharing (CORS) settings (Microsoft Documentation: Cross-Origin Resource Sharing (CORS) support for Azure Storage)
- manage account keys by using Azure Key Vault (Microsoft Documentation: Manage storage account keys with Key Vault and the Azure CLI)
- implement customer-managed keys for encryption
- implement Always Encrypted (Microsoft Documentation: Configure Always Encrypted by using Azure Key Vault)
Implement data movement for an Azure Cosmos DB solution
- choose a data movement strategy
- move data by using client SDK bulk operations
- move data by using Azure Data Factory and Azure Synapse pipelines (Microsoft Documentation: Pipelines and activities in Azure Data Factory and Azure Synapse Analytics)
- move data by using a Kafka connector
- move data by using Azure Stream Analytics (Microsoft Documentation: Azure Stream Analytics)
- move data by using the Azure Cosmos DB Spark Connector (Microsoft Documentation: Manage data with Azure Cosmos DB Spark 3 OLTP Connector for SQL API)
Implement a DevOps process for an Azure Cosmos DB solution
- choose when to use declarative versus imperative operations
- provision and manage Azure Cosmos DB resources by using Azure Resource Manager templates (Microsoft Documentation: Manage Azure Cosmos DB Core (SQL) API resources with Azure Resource Manager templates)
- migrate between standard and autoscale throughput by using PowerShell or Azure CLI (Microsoft Documentation: Provision autoscale throughput on database or container in Azure Cosmos DB – SQL API)
- initiate a regional failover by using PowerShell or Azure CLI
- maintain index policies in production by using ARM templates
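Provisioning via ARM templates is the declarative side of this objective. The fragment below is an illustrative container resource only (the `apiVersion`, parameter names, and container name are assumptions, not taken from any specific deployment); it shows where the partition key, indexing policy, and throughput sit in the template.

```json
{
  "type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers",
  "apiVersion": "2023-04-15",
  "name": "[concat(parameters('accountName'), '/', parameters('databaseName'), '/orders')]",
  "properties": {
    "resource": {
      "id": "orders",
      "partitionKey": { "paths": ["/customerId"], "kind": "Hash" },
      "indexingPolicy": { "indexingMode": "consistent" }
    },
    "options": { "throughput": 400 }
  }
}
```

Because the indexing policy lives in the template, index changes can be reviewed and rolled out through the same pipeline as the rest of the infrastructure, which is what "maintain index policies in production by using ARM templates" refers to.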
Exam DP-420 FAQs
Facing trouble preparing for the exam? Have a lot of questions? Click here to get all your doubts and queries resolved.
How to schedule the exam?
To schedule Exam DP-420, follow the steps below:
- First, go to the certification home page or the exam detail page and click the ‘Schedule exam’ option displayed on the screen.
- Then choose the exam delivery partner and fill in all the requested details.
- You will be redirected to sign in with your personal Microsoft account. If you already have a Microsoft account, log in; if not, create a new one.
- Then provide all the required information, including your legal name and contact details. Don’t forget to present the same identification at the exam that you stated in the application form. Move on to ‘Save and continue’.
- Then click the ‘Schedule exam’ option; you will be redirected to the exam delivery partner to schedule the exam.
- Follow the instructions displayed on the screen and select a proctoring method if available. Book the exam appointment and complete the payment process.
- Once registration is complete, candidates can see their appointment in the Certification Dashboard. For an online exam, candidates can start the exam from the dashboard itself.
Cancellation/Rescheduling Policy for this exam
Exam cancellation and rescheduling must be done at least 24 hours prior to the planned exam time. The examination fee will be lost if the exam is rescheduled or canceled after the deadline. Follow the instructions below:
- Log in to the Certification Dashboard
- Select the ‘Appointments’ section and find the appointment you need to cancel or reschedule
- Select the cancel/reschedule option
- You will be redirected to a site where you can proceed with your cancellation or rescheduling request.
Preparatory Guide for Exam DP-420
Passing the DP-420 exam requires a great deal of concentration and determination. This well-organized study guide will assist you in preparing for the exam, providing all of the important details and test information along with the steps you need to complete in order to pass.
1. Official Website of the Exam
Before you begin your preparation, the first and most crucial step is to visit the official Exam DP-420 website. Microsoft provides the required information and key test instructions for applicants there. The website carries authentic, up-to-date information on the exam, which is quite useful to applicants.
2. Refer to the Course Syllabus
The following stage in the preparation process is to carefully review the course syllabus. Each module should be properly learned and studied by the applicants. This will assist you in understanding the scope of each chapter and planning accordingly. In order to pass this test, applicants must work smartly, which requires a thorough understanding of the course goals.
- Designing and implementing data models (35–40%)
- Designing and implementing data distribution (5–10%)
- Integrating an Azure Cosmos DB solution (5–10%)
- Optimizing an Azure Cosmos DB solution (15–20%)
- Maintaining an Azure Cosmos DB solution (25–30%)
3. Joining a Community
Joining a group or an online forum is really advantageous for applicants. These will familiarise you with the competitive landscape. It has several benefits. You may take part in group discussions, brainstorming sessions, and other interactive activities to expand your knowledge and stay current. This can help you gain confidence and become accustomed to the exam situation.
4. Practice Tests
Taking as many practice tests as possible helps ensure your exam success. They improve your abilities and familiarise you with the course structure and test style, which will help you pass the exam. Practice exams also allow you to identify your strengths and shortcomings, self-evaluate, and improve your performance with each attempt. Take a free practice test now!