AWS Certified Database Specialty Sample Questions
Earning an AWS industry-recognized certification verifies a candidate’s knowledge of the full range of AWS database services and encourages the use of database technology to drive business transformation. Candidates who perform a database-focused role should pursue the AWS Certified Database Specialty certification. The exam assesses a candidate’s overall knowledge of databases, including design, deployment, migration, access, automation, monitoring, maintenance, security, and troubleshooting. It tests the candidate’s ability to select, create, and manage the best AWS database solution in order to boost performance, lower costs, and promote innovation. This article provides a list of AWS Certified Database Specialty Sample Questions that cover core exam topics, including:
- Domain 1: Workload-Specific Database Design 26%
- Domain 2: Deployment and Migration 20%
- Domain 3: Management and Operations 18%
- Domain 4: Monitoring and Troubleshooting 18%
- Domain 5: Database Security 18%
Advanced Sample Questions
Your company has decided to store its vast amounts of customer data in Amazon S3. The data is unstructured and can range from 10 MB to 100 GB. Which of the following S3 storage classes would you recommend?
- A) S3 Standard
- B) S3 Standard-Infrequent Access
- C) S3 One Zone-Infrequent Access
- D) S3 Glacier
Answer: B) S3 Standard-Infrequent Access
Explanation: S3 Standard-Infrequent Access is a cost-effective storage class that provides low latency and high throughput performance for data that is accessed less frequently but requires rapid access when needed. This storage class is suitable for the scenario where customer data is unstructured and can range from 10 MB to 100 GB, making it a cost-effective solution compared to the standard S3 storage class.
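For illustration, here is a minimal boto3 sketch of uploading an object directly into the S3 Standard-Infrequent Access storage class; the bucket and key names are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Upload an object and place it in the S3 Standard-IA storage class.
# (For very large objects, a multipart upload via s3.upload_file with
# ExtraArgs={"StorageClass": "STANDARD_IA"} would be used instead.)
s3.put_object(
    Bucket="example-customer-data",      # hypothetical bucket name
    Key="customers/profile-12345.json",  # hypothetical key
    Body=b'{"name": "example"}',
    StorageClass="STANDARD_IA",
)
```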
Your company is using Amazon RDS for its relational database needs. The database is growing quickly, and you need to add storage space to keep up with the increasing demand. What is the most straightforward way to add more storage space to an Amazon RDS database?
- A) Modify the database instance type
- B) Create a new database instance and migrate the data
- C) Take a snapshot of the database and create a new database from the snapshot with increased storage
- D) Modify the storage of the existing database instance
Answer: D) Modify the storage of the existing database instance
Explanation: The most straightforward way to add more storage space to an Amazon RDS database is to modify the storage of the existing DB instance. The modification is applied in place, in most cases without downtime, so the instance can easily be expanded to meet the growing needs of the company.
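A minimal boto3 sketch of the storage modification described above; the instance identifier and storage size are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Increase the allocated storage of an existing DB instance in place.
rds.modify_db_instance(
    DBInstanceIdentifier="example-db",  # hypothetical instance name
    AllocatedStorage=500,               # new size in GiB
    ApplyImmediately=True,              # apply now, not in the next maintenance window
)
```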
Your company has a critical database running on Amazon RDS. To ensure high availability and durability, you want to implement a disaster recovery solution. Which of the following solutions would you recommend for disaster recovery for Amazon RDS?
- A) Implement Amazon S3 for data backups
- B) Use Amazon RDS Read Replicas to replicate the data to multiple regions
- C) Implement Amazon S3 with Cross-Region Replication
- D) Use Amazon RDS with Multi-AZ deployment
Answer: D) Use Amazon RDS with Multi-AZ deployment
Explanation: For disaster recovery, it is recommended to use Amazon RDS with Multi-AZ deployment. Multi-AZ deployment provides high availability by automatically failing over to a standby replica in case of an outage of the primary database instance. This helps ensure that the critical database is always available and helps protect against data loss.
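A short boto3 sketch, assuming an existing instance named example-db, of converting it to a Multi-AZ deployment:

```python
import boto3

rds = boto3.client("rds")

# Convert an existing single-AZ DB instance to a Multi-AZ deployment,
# which provisions a synchronous standby in another Availability Zone.
rds.modify_db_instance(
    DBInstanceIdentifier="example-db",  # hypothetical instance name
    MultiAZ=True,
    ApplyImmediately=True,
)
```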
Your company has a large number of users accessing its database on Amazon RDS. The database is experiencing performance issues due to the high number of users and increasing demand. Which of the following solutions would you recommend to improve the performance of the database on Amazon RDS?
- A) Increase the number of database instances
- B) Increase the storage of the database instance
- C) Increase the memory and CPU of the database instance
- D) Increase the number of Amazon RDS Read Replicas
Answer: C) Increase the memory and CPU of the database instance
Explanation: Increasing the memory and CPU of the database instance is the most effective way to improve the performance of the database on Amazon RDS. This allows the database to process more transactions and respond to user requests more quickly. This solution is particularly effective when the database is experiencing performance issues due to high user demand.
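Scaling up memory and CPU is done by changing the instance class; a minimal sketch follows (instance name and class are placeholders).

```python
import boto3

rds = boto3.client("rds")

# Move the DB instance to a larger instance class (more vCPUs and memory).
# Changing the instance class causes a brief outage unless Multi-AZ failover
# absorbs it.
rds.modify_db_instance(
    DBInstanceIdentifier="example-db",  # hypothetical instance name
    DBInstanceClass="db.r5.2xlarge",    # example larger class
    ApplyImmediately=True,
)
```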
Your company has decided to migrate its database to Amazon RDS. The database contains sensitive customer data, and security is a top priority. Which of the following options would you recommend for secure database connectivity to Amazon RDS?
- A) Use SSL for secure database connectivity
- B) Use TCP port 22 for secure database connectivity
- C) Use TCP port 3306 for secure database connectivity
- D) Use VPC for secure database connectivity
Answer: D) Use VPC for secure database connectivity
Explanation: For secure database connectivity to Amazon RDS, it is recommended to use VPC. Amazon VPC provides secure and isolated network environments, allowing you to place Amazon RDS instances in a virtual network. This protects the database from external access and helps ensure the security of sensitive customer data.
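A hedged boto3 sketch of launching an RDS instance inside a VPC by referencing a DB subnet group and a VPC security group; all identifiers below are hypothetical and would need to exist already.

```python
import boto3

rds = boto3.client("rds")

# Launch the DB instance into private subnets of a VPC, reachable only
# through the referenced VPC security group.
rds.create_db_instance(
    DBInstanceIdentifier="customer-db",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",          # use AWS Secrets Manager in practice
    DBSubnetGroupName="private-db-subnets",   # hypothetical DB subnet group
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],
    PubliclyAccessible=False,
)
```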
Your company has a database running on Amazon RDS, and you want to ensure that the database is always available and that data is never lost. Which of the following options would you recommend for data backup and disaster recovery for Amazon RDS?
- A) Use Amazon S3 for data backups
- B) Use Amazon RDS Snapshots for data backups
- C) Use Amazon RDS Read Replicas for disaster recovery
- D) Use Amazon S3 with Cross-Region Replication for disaster recovery
Answer: B) Use Amazon RDS Snapshots for data backups
Explanation: For data backup and disaster recovery for Amazon RDS, it is recommended to use Amazon RDS Snapshots. Amazon RDS snapshots provide a fast and efficient way to backup the database, allowing you to restore the database to a specific point in time. Additionally, snapshots can be used to create new Amazon RDS instances, making it easy to recover the database in the event of a disaster.
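A minimal boto3 sketch of taking a manual snapshot and restoring a new instance from it; identifiers are placeholders, and waiting for the snapshot to become available is omitted.

```python
import boto3

rds = boto3.client("rds")

# Take a manual snapshot of the running DB instance.
rds.create_db_snapshot(
    DBInstanceIdentifier="example-db",
    DBSnapshotIdentifier="example-db-snap-2024-01-01",
)

# Later (once the snapshot is "available"), restore a new instance from it.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="example-db-restored",
    DBSnapshotIdentifier="example-db-snap-2024-01-01",
)
```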
Your company has a database running on Amazon RDS, and the database is growing rapidly. You want to ensure that the database can handle the increased demand and continue to perform optimally. Which of the following options would you recommend to scale the database on Amazon RDS?
- A) Increase the storage of the database instance
- B) Increase the memory and CPU of the database instance
- C) Increase the number of database instances
- D) Increase the number of Amazon RDS Read Replicas
Answer: C) Increase the number of database instances
Explanation: To scale the database on Amazon RDS, it is recommended to increase the number of database instances. This allows the database to handle increased demand by distributing the load across multiple instances. This can help improve performance and ensure that the database continues to perform optimally even as the amount of data grows.
Your company has a database running on Amazon RDS, and you want to ensure that the database is always available and that data is never lost. Which of the following options would you recommend for data backup and disaster recovery for Amazon RDS?
- A) Use Amazon S3 for data backups
- B) Use Amazon RDS Snapshots for data backups
- C) Use Amazon RDS Read Replicas for disaster recovery
- D) Use Amazon S3 with Cross-Region Replication for disaster recovery
Answer: C) Use Amazon RDS Read Replicas for disaster recovery
Explanation: For disaster recovery for Amazon RDS, it is recommended to use Amazon RDS Read Replicas. Amazon RDS Read Replicas provide a way to create read-only copies of the database, allowing you to scale read-only database workloads and provide a disaster recovery solution. In the event of an outage, the read replica can be promoted to a new primary database, helping to ensure that the database is always available and data is never lost.
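Promotion is a single API call during recovery; a minimal boto3 sketch, assuming a replica named example-db-replica already exists:

```python
import boto3

rds = boto3.client("rds")

# Promote the read replica so it becomes a standalone, writable DB instance.
rds.promote_read_replica(
    DBInstanceIdentifier="example-db-replica",  # hypothetical replica name
    BackupRetentionPeriod=7,                    # enable automated backups on the new primary
)
```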
Your company has a database running on Amazon RDS, and you want to ensure that the database is protected from unauthorized access. Which of the following options would you recommend for database security on Amazon RDS?
- A) Use Amazon RDS Security Groups for database security
- B) Use Amazon VPC for database security
- C) Use Amazon RDS IAM for database security
- D) Use Amazon RDS encryption for database security
Answer: D) Use Amazon RDS encryption for database security
Explanation: To ensure that the database on Amazon RDS is protected from unauthorized access, it is recommended to use Amazon RDS encryption. Amazon RDS encryption uses AWS Key Management Service (KMS) to encrypt the database, providing an additional layer of security.
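Encryption at rest is chosen when the instance is created; a minimal boto3 sketch with hypothetical identifiers and a KMS key alias:

```python
import boto3

rds = boto3.client("rds")

# Create a DB instance with storage encryption enabled, using a KMS key.
rds.create_db_instance(
    DBInstanceIdentifier="secure-db",
    Engine="postgres",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    StorageEncrypted=True,
    KmsKeyId="alias/example-rds-key",  # hypothetical customer managed key
)
```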
Basic Sample Questions
Q1) An Amazon Redshift cluster stores a company’s critical business data. Because of the sensitive nature of the data, AWS KMS is used to encrypt the cluster at rest. As part of its disaster recovery duties, the firm must copy the Amazon Redshift snapshots to another Region. Which procedure in the AWS Management Console should be completed to meet the disaster recovery requirements?
- Create a new KMS customer master key in the source Region. Switch to the destination Region, enable Amazon Redshift cross-Region snapshots, and use the KMS key of the source Region.
- Create a new IAM role with access to the KMS key. Enable Amazon Redshift cross-Region replication using the new IAM role, and use the KMS key of the source Region.
- Enable Amazon Redshift cross-Region snapshots in the source Region, and create a snapshot copy grant and use a KMS key in the destination Region.
- Create a new KMS customer master key in the destination Region and create a new IAM role with access to the new KMS key. Enable Amazon Redshift cross-Region replication in the source Region and use the KMS key of the destination Region.
Correct Answer: Enable Amazon Redshift cross-Region snapshots in the source Region, and create a snapshot copy grant and use a KMS key in the destination Region.
Explanation: Snapshots are backups of a cluster at a specific point in time. Snapshots are divided into two categories: automated and manual. Amazon Redshift stores these snapshots internally in Amazon S3 over an encrypted Secure Sockets Layer (SSL) connection.
Amazon Redshift takes incremental snapshots of the cluster that track changes since the last automated snapshot. Automated snapshots save all of the information needed to restore a cluster. You can set up an automated snapshot schedule or take a manual snapshot at any time. Because KMS keys are Region specific, copying snapshots of a KMS-encrypted cluster to another Region requires a snapshot copy grant that names a KMS key in the destination Region; the grant is specified when cross-Region snapshot copy is enabled in the source Region.
Refer: Amazon Redshift snapshots
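A hedged boto3 sketch of this procedure: create a snapshot copy grant with a KMS key in the destination Region, then enable cross-Region snapshot copy on the source cluster. The cluster name, grant name, and Regions are placeholders.

```python
import boto3

# 1) In the destination Region, create a snapshot copy grant that names a
#    KMS key living in that Region (KMS keys are Region specific).
redshift_dest = boto3.client("redshift", region_name="us-west-2")
redshift_dest.create_snapshot_copy_grant(
    SnapshotCopyGrantName="dr-copy-grant",            # hypothetical grant name
    KmsKeyId="alias/example-redshift-key-us-west-2",  # key in the destination Region
)

# 2) In the source Region, enable cross-Region snapshot copy on the cluster,
#    referencing the grant created above.
redshift_src = boto3.client("redshift", region_name="us-east-1")
redshift_src.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",  # hypothetical cluster
    DestinationRegion="us-west-2",
    RetentionPeriod=7,                      # days to keep copied snapshots
    SnapshotCopyGrantName="dr-copy-grant",
)
```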
Q2) A company runs an on-premises IBM Db2 database on an IBM POWER7 server running AIX. Because of rising support and maintenance costs, the firm is considering moving the workload to an Amazon Aurora PostgreSQL DB cluster. How can the company obtain data on migration compatibility in the quickest time possible?
- Perform a logical dump from the Db2 database and restore it to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing row counts from source and target tables.
- Run AWS DMS from the Db2 database to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing the row counts from source and target tables.
- Run native PostgreSQL logical replication from the Db2 database to an Aurora DB cluster to evaluate the migration compatibility.
- Run the AWS Schema Conversion Tool (AWS SCT) from the Db2 database to an Aurora DB cluster. Create a migration assessment report to evaluate the migration compatibility.
Correct Answer: Run the AWS Schema Conversion Tool (AWS SCT) from the Db2 database to an Aurora DB cluster. Create a migration assessment report to evaluate the migration compatibility.
Explanation: The AWS Schema Conversion Tool (AWS SCT) can connect to the source Db2 database and produce a database migration assessment report that summarizes which schema objects can be converted to Aurora PostgreSQL automatically and which require manual work, giving the company compatibility data without migrating any data first.
Refer: AWS Schema Conversion Tool
Q3) A company wants to migrate from on-premises Oracle to Amazon Aurora PostgreSQL. Using AWS DMS, the migration must be completed with the least possible downtime. A Database Specialist must check that the data was migrated correctly from the source to the destination before the cutover. The migration should not affect the performance of the source database. Which strategy is the MOST appropriate for achieving these goals?
- Use the AWS Schema Conversion Tool (AWS SCT) to convert source Oracle database schemas to the target Aurora DB cluster. Verify the datatype of the columns.
- Use the table metrics of the AWS DMS task created for migrating the data to verify the statistics for the tables being migrated and to verify that the data definition language (DDL) statements are completed.
- Enable the AWS Schema Conversion Tool (AWS SCT) premigration validation and review the premigration checklist to make sure there are no issues with the conversion.
- Enable AWS DMS data validation on the task so the AWS DMS task compares the source and target records, and reports any mismatches.
Correct Answer: Enable AWS DMS data validation on the task so the AWS DMS task compares the source and target records, and reports any mismatches.
Explanation: AWS DMS supports data validation to guarantee that your data was migrated correctly from the source to the target. If validation is enabled, it begins immediately after a full load is performed for a table, and it also examines the incremental changes that occur while a CDC-enabled task runs.
AWS DMS compares each row in the source with its matching row in the target, verifies that the rows contain the same data, and reports any discrepancies. To accomplish this, AWS DMS issues the appropriate queries to retrieve the data; note that these queries consume additional resources at the source and target, as well as additional network resources.
Refer: AWS DMS data validation
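A hedged boto3 sketch of enabling validation on an existing DMS task and then checking per-table validation state; the task ARN is a placeholder, and only the validation portion of the task settings is shown.

```python
import json
import boto3

dms = boto3.client("dms")
task_arn = "arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK"  # placeholder

# Turn on data validation for the replication task (other task settings
# are omitted here for brevity).
dms.modify_replication_task(
    ReplicationTaskArn=task_arn,
    ReplicationTaskSettings=json.dumps(
        {"ValidationSettings": {"EnableValidation": True, "ThreadCount": 5}}
    ),
)

# After the task has run, review per-table statistics, which include the
# validation state and any mismatched records.
stats = dms.describe_table_statistics(ReplicationTaskArn=task_arn)
for table in stats["TableStatistics"]:
    print(table["TableName"], table.get("ValidationState"))
```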
Q4) Amazon DynamoDB is used by a company to operate a web-based survey application. During peak usage, a Database Specialist runs into the ProvisionedThroughputExceededException error when collecting survey responses. What should the Database Specialist do to resolve this problem? (Choose two.)
- Change the table to use Amazon DynamoDB Streams
- Purchase DynamoDB reserved capacity in the affected Region
- Increase the write capacity units for the specific table
- Change the table capacity mode to on-demand
- Change the table type to throughput optimized
Correct Answer: Increase the write capacity units for the specific table; Change the table capacity mode to on-demand
Explanation: A ProvisionedThroughputExceededException means requests are being throttled because they exceed the table’s provisioned capacity. Raising the write capacity units, or switching the table to on-demand capacity mode so it scales with traffic, resolves the throttling; DynamoDB Streams and reserved capacity do not address it, and there is no “throughput optimized” table type in DynamoDB.
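A minimal boto3 sketch of both remedies, using a hypothetical table name; in practice only one of the two calls would be made.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Option 1: raise the provisioned write capacity of the table.
dynamodb.update_table(
    TableName="SurveyResponses",  # hypothetical table
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 1000},
)

# Option 2: switch the table to on-demand capacity so it scales with traffic.
dynamodb.update_table(
    TableName="SurveyResponses",
    BillingMode="PAY_PER_REQUEST",
)
```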
Q5) An Amazon RDS for PostgreSQL DB instance hosts a company’s customer relationship management (CRM) system. According to new compliance guidelines, the database must be encrypted at rest. Which course of action will meet these requirements?
- Create an encrypted copy of a manual snapshot of the DB instance. Restore a new DB instance from the encrypted snapshot.
- Modify the DB instance and enable encryption.
- Restore a DB instance from the most recent automated snapshot and enable encryption.
- Create an encrypted read replica of the DB instance. Promote the read replica to a standalone instance.
Correct Answer: Create an encrypted copy of a manual snapshot of the DB instance. Restore a new DB instance from the encrypted snapshot.
Explanation: Amazon RDS can encrypt your Amazon RDS DB instances. The underlying storage for DB instances, as well as automated backups, read replicas, and snapshots, is encrypted at rest.
Amazon RDS encrypted DB instances protect your data on the server that hosts your DB instances using the industry-standard AES-256 encryption algorithm. After your data is encrypted, Amazon RDS transparently handles authentication of access and decryption of your data with minimal performance impact, and you don’t need to make any changes to your database client programmes. Because encryption can only be enabled when a DB instance or snapshot copy is created, an existing unencrypted instance is encrypted by copying a snapshot with encryption enabled and restoring a new DB instance from that encrypted copy.
Refer: Encrypting Amazon RDS resources
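A hedged boto3 sketch of the snapshot-copy approach: snapshot the instance, copy the snapshot with encryption enabled, and restore a new encrypted instance. Identifiers and the KMS key are placeholders, and waits between steps are omitted.

```python
import boto3

rds = boto3.client("rds")

# 1) Take a manual snapshot of the unencrypted CRM DB instance.
rds.create_db_snapshot(
    DBInstanceIdentifier="crm-db",
    DBSnapshotIdentifier="crm-db-snap",
)

# 2) Copy the snapshot with encryption enabled (this is where encryption is added).
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="crm-db-snap",
    TargetDBSnapshotIdentifier="crm-db-snap-encrypted",
    KmsKeyId="alias/example-rds-key",  # hypothetical KMS key
)

# 3) Restore a new, encrypted DB instance from the encrypted snapshot copy.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="crm-db-encrypted",
    DBSnapshotIdentifier="crm-db-snap-encrypted",
)
```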
Q6) An eCommerce company is migrating its main application database to Amazon Aurora MySQL. The company is now running OLTP stress tests with concurrent database connections. During the first round of testing, a database expert noticed poor performance for some specific write operations. The Amazon CloudWatch metrics for the Aurora DB cluster showed 90% CPU utilisation. Which steps should a database specialist take to effectively identify the root cause of the high CPU usage and poor performance? (Choose two.)
- Enable Enhanced Monitoring at less than 30 seconds of granularity to review the operating system metrics before the next round of tests.
- Review the VolumeBytesUsed metric in CloudWatch to see if there is a spike in write I/O.
- Review Amazon RDS Performance Insights to identify the top SQL statements and wait events.
- Review Amazon RDS API calls in AWS CloudTrail to identify long-running queries.
- Enable Advance Auditing to log QUERY events in Amazon CloudWatch before the next round of tests.
Correct Answer: Enable Enhanced Monitoring at less than 30 seconds of granularity to review the operating system metrics before the next round of tests; Review Amazon RDS Performance Insights to identify the top SQL statements and wait events.
Explanation: When Performance Insights is enabled, its API provides visibility into instance performance, and Enhanced Monitoring exposes fine-grained operating system metrics for the DB instance.
Performance Insights offers a domain-specific view of database load measured as average active sessions (AAS). To API consumers this metric is a two-dimensional time-series dataset: the time dimension provides DB load data for each point in the queried time range, and each point can be sliced by dimensions such as SQL statement or wait event.
Refer: Retrieving metrics with the Performance Insights API
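A hedged boto3 sketch of querying database load through the Performance Insights API, grouped by top SQL and by wait event; the resource identifier (the instance’s DbiResourceId) is a placeholder.

```python
from datetime import datetime, timedelta
import boto3

pi = boto3.client("pi")
end = datetime.utcnow()
start = end - timedelta(hours=1)

# Retrieve average active sessions (db.load.avg) for the last hour,
# sliced by tokenized SQL statement and by wait event.
response = pi.get_resource_metrics(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKLMNOPQRSTUVWXY",  # placeholder DbiResourceId
    StartTime=start,
    EndTime=end,
    PeriodInSeconds=60,
    MetricQueries=[
        {"Metric": "db.load.avg", "GroupBy": {"Group": "db.sql_tokenized", "Limit": 10}},
        {"Metric": "db.load.avg", "GroupBy": {"Group": "db.wait_event", "Limit": 10}},
    ],
)
for metric in response["MetricList"]:
    print(metric["Key"])
```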
Q7) The customer feedback application of a company is powered by Amazon Aurora MySQL. Every day, the company runs a report to collect customer feedback, which is then examined by a team to determine whether the comments are positive or unfavourable. It can take several working days to contact unsatisfied customers and take corrective action. The company wants to use machine learning to automate this process. Which option requires the LEAST amount of effort to meet this requirement?
- Export the Aurora MySQL database to Amazon S3 by using AWS Database Migration Service (AWS DMS). Use Amazon Comprehend to run sentiment analysis on the exported files.
- Export the Aurora MySQL database to Amazon S3 by using AWS Database Migration Service (AWS DMS). Use Amazon SageMaker to run sentiment analysis on the exported files.
- Set up Aurora native integration with Amazon Comprehend. Use SQL functions to extract sentiment analysis.
- Set up Aurora native integration with Amazon SageMaker. Use SQL functions to extract sentiment analysis.
Correct Answer: Set up Aurora native integration with Amazon Comprehend. Use SQL functions to extract sentiment analysis.
Explanation: Amazon Comprehend is a natural-language processing (NLP) service that uses machine learning to extract useful insights and connections from text.
Refer: Amazon Comprehend
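Once the Aurora MySQL cluster has been granted an IAM role for Comprehend (the Aurora machine learning integration), sentiment can be queried directly in SQL. Below is a hedged Python sketch using pymysql; the connection details, table, and column names are hypothetical.

```python
import pymysql  # pip install pymysql

conn = pymysql.connect(
    host="feedback-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",  # placeholder
    user="app_user",
    password="REPLACE_ME",
    database="feedback",
)

# Aurora MySQL's native Comprehend integration exposes SQL functions that
# call the service for each row.
sql = """
    SELECT feedback_id,
           aws_comprehend_detect_sentiment(comment_text, 'en')            AS sentiment,
           aws_comprehend_detect_sentiment_confidence(comment_text, 'en') AS confidence
    FROM customer_feedback
    LIMIT 100
"""
with conn.cursor() as cur:
    cur.execute(sql)
    for feedback_id, sentiment, confidence in cur.fetchall():
        print(feedback_id, sentiment, confidence)
```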
Q8) A company maintains on-premises SQL Server databases. Users gain access to the databases with Active Directory authentication. The company successfully moved its databases to Amazon RDS for SQL Server. However, the company is concerned about user authentication in the AWS Cloud environment. What kind of authentication solution should a database expert recommend?
- Deploy Active Directory Federation Services (AD FS) on premises and configure it with an on-premises Active Directory. Set up delegation between the on-premises AD FS and AWS Security Token Service (AWS STS) to map user identities to a role using the AmazonRDSDirectoryServiceAccess managed IAM policy.
- Establish a forest trust between the on-premises Active Directory and AWS Directory Service for Microsoft Active Directory. Use AWS SSO to configure an Active Directory user delegated to access the databases in RDS for SQL Server.
- Use Active Directory Connector to redirect directory requests to the company’s on-premises Active Directory without caching any information in the cloud. Use the RDS master user credentials to connect to the DB instance and configure SQL Server logins and users from the Active Directory users and groups.
- Establish a forest trust between the on-premises Active Directory and AWS Directory Service for Microsoft Active Directory. Ensure RDS for SQL Server is using mixed mode authentication. Use the RDS master user credentials to connect to the DB instance and configure SQL Server logins and users from the Active Directory users and groups.
Correct Answer: Establish a forest trust between the on-premises Active Directory and AWS Directory Service for Microsoft Active Directory. Use AWS SSO to configure an Active Directory user delegated to access the databases in RDS for SQL Server.
Q9) As part of a disaster recovery plan, a database expert must create nightly backups of an Amazon DynamoDB table in a mission-critical workload. Which backup mechanism should the database administrator use to cut down on administrative time?
- Install the AWS CLI on an Amazon EC2 instance. Write a CLI command that creates a backup of the DynamoDB table. Create a scheduled job or task that runs the command on a nightly basis.
- Create an AWS Lambda function that creates a backup of the DynamoDB table. Create an Amazon CloudWatch Events rule that runs the Lambda function on a nightly basis.
- Create a backup plan using AWS Backup, specify a backup frequency of every 24 hours, and give the plan a nightly backup window.
- Configure DynamoDB backup and restore for an on-demand backup frequency of every 24 hours.
Correct Answer: Configure DynamoDB backup and restore for an on-demand backup frequency of every 24 hours.
Explanation: On-demand backup lets you create comprehensive copies of your Amazon DynamoDB table for data archiving, which can help you meet corporate and governmental regulatory obligations. Tables ranging in size from a few megabytes to hundreds of terabytes of data can be backed up without affecting the speed or availability of your production applications. You don’t have to worry about backup schedules or long-running procedures because backups happen in seconds, regardless of the size of your tables. Furthermore, all backups are automatically encrypted, catalogued, and accessible until they are deliberately destroyed.
Refer: Amazon DynamoDB
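An on-demand backup is a single API call; below is a minimal boto3 sketch (table and backup names are placeholders) that a scheduled job could run nightly.

```python
from datetime import datetime
import boto3

dynamodb = boto3.client("dynamodb")

# Take an on-demand backup of the table; backups complete quickly and
# do not consume table throughput.
dynamodb.create_backup(
    TableName="MissionCriticalTable",  # hypothetical table
    BackupName=f"nightly-{datetime.utcnow():%Y-%m-%d}",
)
```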
Q10) A database specialist is creating a test graph database on Amazon Neptune for the first time. The database specialist must load millions of rows of test observations from a .csv file stored in Amazon S3. The database professional plans to use a sequence of API calls to load the data into the Neptune DB instance. Which sequence of actions allows the database specialist to load the data the fastest? (Choose three.)
- Ensure Amazon Cognito returns the proper AWS STS tokens to authenticate the Neptune DB instance to the S3 bucket hosting the CSV file.
- Ensure the vertices and edges are specified in different .csv files with proper header column formatting.
- Use AWS DMS to move data from Amazon S3 to the Neptune Loader.
- Curl the S3 URI while inside the Neptune DB instance and then run the addVertex or addEdge commands.
- Ensure an IAM role for the Neptune DB instance is configured with the appropriate permissions to allow access to the file in the S3 bucket.
- Create an S3 VPC endpoint and issue an HTTP POST to the database’s loader endpoint.
Correct Answer: Ensure the vertices and edges are specified in different .csv files with proper header column formatting; Use AWS DMS to move data from Amazon S3 to the Neptune Loader; Create an S3 VPC endpoint and issue an HTTP POST to the database’s loader endpoint.
Explanation: AWS Database Migration Service (AWS DMS) can swiftly and securely import data into Neptune from supported source databases. During the migration, the source database remains fully active, reducing downtime for applications that rely on it.
Refer: Using AWS Database Migration Service to load data into Amazon Neptune from a different data store
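For reference, the Neptune bulk loader is invoked with an HTTP POST to the cluster’s loader endpoint from inside the VPC; here is a hedged Python sketch with a placeholder endpoint, bucket, and IAM role ARN.

```python
import requests  # pip install requests

NEPTUNE_ENDPOINT = "my-neptune-cluster.cluster-xxxx.us-east-1.neptune.amazonaws.com"  # placeholder

# Submit a bulk load job that reads the vertex/edge .csv files from S3.
response = requests.post(
    f"https://{NEPTUNE_ENDPOINT}:8182/loader",
    json={
        "source": "s3://example-bucket/graph-data/",  # hypothetical bucket/prefix
        "format": "csv",                               # Gremlin property-graph CSV
        "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",  # placeholder
        "region": "us-east-1",
        "failOnError": "TRUE",
        "parallelism": "HIGH",
    },
)
print(response.json())  # returns a loadId that can be polled for status
```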
Q11) A gaming company is developing a new mobile game and intends to store customer data on Amazon DynamoDB. To make the process as simple as possible, users can register using their existing Facebook or Amazon accounts. Over 10,000 people are expected to use the service, according to the company. What is the simplest way for a database administrator to establish access control with the least amount of operational effort?
- Use web identity federation on the mobile app and AWS STS with an attached IAM role to get temporary credentials to access DynamoDB.
- Use web identity federation on the mobile app and create individual IAM users with credentials to access DynamoDB.
- Use a self-developed user management system on the mobile app that lets users access the data from DynamoDB through an API.
- Use a single IAM user on the mobile app to access DynamoDB.
Correct Answer: Use web identity federation on the mobile app and AWS STS with an attached IAM role to get temporary credentials to access DynamoDB.
Explanation: If you are building an application for a large number of users, you can use web identity federation for authentication and authorisation. Web identity federation removes the need to create individual IAM users. Instead, users sign in through an identity provider and then obtain temporary security credentials from the AWS Security Token Service (AWS STS). The app can then use these credentials to access AWS services.
Refer: Using Web Identity Federation
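A hedged boto3 sketch of exchanging a web identity token (obtained from Login with Amazon, Facebook, or another provider) for temporary credentials and using them against DynamoDB; the role ARN and token are placeholders.

```python
import boto3

sts = boto3.client("sts")

# Exchange the provider-issued identity token for temporary AWS credentials
# scoped by the attached IAM role.
resp = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/MobileGameDynamoDBRole",  # placeholder
    RoleSessionName="player-12345",
    WebIdentityToken="<token returned by the identity provider>",     # placeholder
    DurationSeconds=3600,
)

creds = resp["Credentials"]
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(dynamodb.list_tables())
```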
Q12) A financial application is running on an Amazon RDS for MySQL DB instance. The application is regulated by a number of financial regulatory bodies. Security groups are set up on the RDS DB instance to limit access to specific Amazon EC2 hosts. Data at rest is encrypted using AWS KMS. Which action will provide an additional layer of protection?
- Set up NACLs that allow the entire EC2 subnet to access the DB instance
- Disable the master user account
- Set up a security group that blocks SSH to the DB instance
- Set up RDS to use SSL for data in transit
Correct Answer: Set up RDS to use SSL for data in transit
Explanation: Amazon Relational Database Service (Amazon RDS) is a managed service that makes it easy to set up, operate, and scale databases in the cloud, with a choice of popular engines including Amazon Aurora with MySQL compatibility, Amazon Aurora with PostgreSQL compatibility, MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server. Security groups and KMS encryption already protect network access and data at rest; requiring SSL/TLS for client connections adds a further layer of protection by encrypting data in transit between the application and the DB instance.
Refer: Applying best practices for securing sensitive data in Amazon RDS
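One way to go beyond merely supporting SSL is to require it for every MySQL connection. Below is a hedged boto3 sketch that sets the require_secure_transport parameter in a custom parameter group; the group name is a placeholder and the group must already be attached to the instance.

```python
import boto3

rds = boto3.client("rds")

# Force all client connections to the MySQL DB instance to use SSL/TLS.
rds.modify_db_parameter_group(
    DBParameterGroupName="financial-app-mysql-params",  # hypothetical custom group
    Parameters=[
        {
            "ParameterName": "require_secure_transport",
            "ParameterValue": "1",
            "ApplyMethod": "immediate",  # dynamic parameter, applies without reboot
        }
    ],
)
```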
Q13) A company is using Amazon DynamoDB global tables to support an online game. Gamers from all over the world participate in the game. As the game grew in popularity, the number of queries to DynamoDB increased dramatically. Gamers have recently complained that the game state is inconsistent across countries. A database professional observes that the ReplicationLatency metric for several replica tables is abnormally high. Which strategy will be most effective in resolving the problem?
- Configure all replica tables to use DynamoDB auto scaling.
- Configure a DynamoDB Accelerator (DAX) cluster on each of the replicas.
- Configure the primary table to use DynamoDB auto scaling and the replica tables to use manually provisioned capacity.
- Configure the table-level write throughput limit service quota to a higher value.
Correct Answer: Configure all replica tables to use DynamoDB auto scaling.
Explanation: The recommended method for managing throughput capacity settings for replica tables in provisioned mode is to use DynamoDB auto scaling.
Refer: Best Practices and Requirements for Managing Global Tables
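Replica auto scaling is configured through Application Auto Scaling; here is a hedged boto3 sketch registering write-capacity scaling for one replica table (the table name, limits, and target value are illustrative, and the same calls would be repeated in each replica Region).

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the replica table's write capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/GameState",  # hypothetical table
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=100,
    MaxCapacity=40000,
)

# Track 70% write-capacity utilisation so writes scale before replication lags.
autoscaling.put_scaling_policy(
    PolicyName="GameStateWriteScaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/GameState",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```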
Q14) A database professional manages a fleet of Amazon RDS DB instances that use the default database parameter group. The database professional needs to associate a custom parameter group with some of the DB instances. After the database specialist makes this change, when will the instances be assigned to the new parameter group?
- Instantaneously after the change is made to the parameter group
- In the next scheduled maintenance window of the DB instances
- After the DB instances are manually rebooted
- Within 24 hours after the change is made to the parameter group
Correct Answer: After the DB instances are manually rebooted
Explanation: When a different DB parameter group is associated with a DB instance, the parameter changes take effect only after the DB instance is manually rebooted; until then the instance shows a parameter apply status of pending-reboot.
Refer: Working with parameter groups
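A minimal boto3 sketch of associating the custom parameter group and then rebooting so the change takes effect; identifiers are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Associate the custom parameter group with the instance; the parameters
# remain "pending-reboot" until the instance is restarted.
rds.modify_db_instance(
    DBInstanceIdentifier="example-db",
    DBParameterGroupName="custom-params",  # hypothetical custom group
)

# Manually reboot to apply the new parameter group.
rds.reboot_db_instance(DBInstanceIdentifier="example-db")
```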
Q15) A large firm runs a Java application on an Amazon RDS for Oracle Multi-AZ DB instance. As part of its annual disaster recovery testing, the business would like to simulate an Availability Zone failure and document how the application responds during the DB instance failover. The company does not want to make any changes to the code that governs this behaviour. What should the company do to accomplish this in the shortest time possible?
- Use a blue-green deployment with a complete application-level failover test
- Use the RDS console to reboot the DB instance by choosing the option to reboot with failover
- Use RDS fault injection queries to simulate the primary node failure
- Add a rule to the NACL to deny all traffic on the subnets associated with a single Availability Zone
Correct Answer: Use the RDS console to reboot the DB instance by choosing the option to reboot with failover
Explanation: Rebooting a Multi-AZ DB instance with failover forces the instance to fail over to the standby in another Availability Zone, which simulates an Availability Zone failure without any application code changes. Fault injection queries are available only for Amazon Aurora, not for RDS for Oracle.
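The same test can be triggered from the API; a minimal boto3 sketch (the instance name is a placeholder) that forces a Multi-AZ failover during the reboot:

```python
import boto3

rds = boto3.client("rds")

# Reboot with failover: the Multi-AZ instance fails over to its standby in
# another Availability Zone, simulating an AZ failure for the test.
rds.reboot_db_instance(
    DBInstanceIdentifier="oracle-app-db",  # hypothetical instance
    ForceFailover=True,
)
```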
Q16) A company is moving a database hosted on an Amazon RDS for SQL Server DB instance from one AWS Region to another. During the migration, the business wants to keep database downtime to a minimum. Which migration approach should the company use for this cross-Region move?
- Back up the source database using native backup to an Amazon S3 bucket in the same Region. Then restore the backup in the target Region.
- Back up the source database using native backup to an Amazon S3 bucket in the same Region. Use Amazon S3 Cross-Region Replication to copy the backup to an S3 bucket in the target Region. Then restore the backup in the target Region.
- Configure AWS Database Migration Service (AWS DMS) to replicate data between the source and the target databases. Once the replication is in sync, terminate the DMS task.
- Add an RDS for SQL Server cross-Region read replica in the target Region. Once the replication is in sync, promote the read replica to master.
Correct Answer: Back up the source database using native backup to an Amazon S3 bucket in the same Region. Then restore the backup in the target Region.
Explanation: Amazon RDS supports native backup and restore for Microsoft SQL Server databases using full backup files (.bak files). Instead of using the database server’s local file system, the backup files are stored in and restored from Amazon S3.
Refer: Importing and exporting SQL Server databases using native backup and restore
Q17) A business maintains a MySQL database for its ecommerce application on a single Amazon RDS DB instance. Application purchases are saved to the database automatically, which generates a high volume of writes. Employees regularly create purchase reports for the organisation. The company aims to improve database performance while reducing downtime caused by patching for upgrades. Which technique meets these criteria with the LOWEST operational overhead?
- Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and enable Memcached in the MySQL option group.
- Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and set up replication to a MySQL DB instance running on Amazon EC2.
- Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and add a read replica.
- Add a read replica and promote it to an Amazon Aurora MySQL DB cluster master. Then enable Amazon Aurora Serverless.
Correct Answer: Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and add a read replica.
Explanation: Multi-AZ deployments can have one or two standby DB instances. When the deployment has a single standby DB instance, it is called a Multi-AZ DB instance deployment; the standby provides failover support but does not serve read traffic. When the deployment has two readable standby DB instances, it is called a Multi-AZ DB cluster deployment; in that configuration the standbys provide failover support and can also serve read traffic.
Refer: Multi-AZ deployments for high availability
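A minimal boto3 sketch of adding the read replica so the reporting queries can be moved off the Multi-AZ primary; instance names are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the primary; reporting traffic can then be
# pointed at the replica's endpoint instead of the primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="ecommerce-db-replica-1",  # hypothetical replica name
    SourceDBInstanceIdentifier="ecommerce-db",      # hypothetical primary
)
```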
Q18) A company is using Amazon Neptune as the graph database for one of its products. The company’s data science team mistakenly produced massive amounts of temporary data during an ETL process. The Neptune DB cluster automatically increased its storage capacity to accommodate the additional data, and the data science team has since deleted the temporary data. What should a database administrator do to avoid paying for cluster volume space that is no longer being used?
- Take a snapshot of the cluster volume. Restore the snapshot in another cluster with a smaller volume size.
- Use the AWS CLI to turn on automatic resizing of the cluster volume.
- Export the cluster data into a new Neptune DB cluster.
- Add a Neptune read replica to the cluster. Promote this replica as a new primary DB instance. Reset the storage space of the cluster.
Correct Answer: Export the cluster data into a new Neptune DB cluster.
Explanation: The Neptune cluster volume grows automatically as data is added, but allocated storage is not released when data is deleted; the freed space is reused for future growth, and storage charges continue to reflect the high-water mark. To stop paying for unused volume space, the data can be exported and loaded into a new, smaller DB cluster.
Q19) A gaming company is working on a mobile gaming app that will be available to a large number of people all over the world. The company requires replication as well as full multi-master write functionality. In addition, the business wants to ensure that app users experience low latency and consistent performance. Which solution meets these requirements?
- Use Amazon DynamoDB global tables for storage and enable DynamoDB automatic scaling
- Use Amazon Aurora for storage and enable cross-Region Aurora Replicas
- Use Amazon Aurora for storage and cache the user content with Amazon ElastiCache
- Use Amazon Neptune for storage
Correct Answer: Use Amazon DynamoDB global tables for storage and enable DynamoDB automatic scaling
Explanation: AWS customers increasingly want to make their applications available to users around the world by deploying them in multiple AWS Regions, and those users expect fast application performance. Amazon DynamoDB global tables provide a fully managed, multi-Region, multi-active database, so the app can read and write locally in each Region with low latency.
Refer: How to use Amazon DynamoDB global tables to power multiregion architectures
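With the current version of global tables, adding a replica Region is an update to the existing table; here is a hedged boto3 sketch (the table name and Region are placeholders, and DynamoDB Streams must already be enabled on the table).

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add a replica of the table in another Region, turning it into a
# multi-Region, multi-active global table.
dynamodb.update_table(
    TableName="PlayerProfiles",  # hypothetical table
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)
```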
Q20) A financial services company hosts an application on AWS that uses an Amazon Aurora PostgreSQL DB cluster. During a recent audit, no log files describing database administrator activity could be found. A database expert must recommend a solution that allows access to the database while also keeping track of activity logs. The solution should be easy to set up and have minimal performance impact. Which solution should the database expert recommend?
- Enable Aurora Database Activity Streams in synchronous mode on the database. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set an Amazon S3 bucket as the Kinesis Data Firehose destination.
- In the Region where the database runs, create an AWS CloudTrail trail. Connect the trail to the database activity logs.
- Enable Aurora Database Activity Streams in asynchronous mode on the database. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set an Amazon S3 bucket as the Firehose destination.
- Only allow connections to the database cluster via a bastion host. Limit access to the database to the bastion host and application servers. Use the CloudWatch Logs agent to push the bastion host logs to Amazon CloudWatch Logs.
Correct Answer: Enable Aurora Database Activity Streams on the database in asynchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Firehose destination to an Amazon S3 bucket.
Explanation: To satisfy security audits and compliance requirements, most firms must monitor activity on databases containing sensitive information. Although some security operations teams may be interested in monitoring all activity such as reads, writes, and logons, others may prefer to monitor only the actions that change data or data structures. Amazon Aurora database activity streams can be used to filter, analyse, and store the actions that are relevant to specific business use cases. Database activity streams are a free Aurora feature that provides a near-real-time stream of activity in your database and assists with monitoring and compliance.
Refer: Filter Amazon Aurora database activity stream data for segregation and monitoring
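A hedged boto3 sketch of starting an asynchronous database activity stream on the Aurora cluster; the cluster ARN and KMS key are placeholders, and the resulting Kinesis data stream would then be wired to Kinesis Data Firehose with S3 as the destination.

```python
import boto3

rds = boto3.client("rds")

# Start a database activity stream on the Aurora cluster in asynchronous
# mode so it has minimal impact on database performance.
response = rds.start_activity_stream(
    ResourceArn="arn:aws:rds:us-east-1:123456789012:cluster:aurora-pg-cluster",  # placeholder
    Mode="async",
    KmsKeyId="alias/example-activity-stream-key",  # placeholder KMS key
    ApplyImmediately=True,
)
# The activity events are published to this Kinesis data stream:
print(response["KinesisStreamName"])
```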