Google Certified Professional Cloud Architect Sample Questions

Google Cloud Platform is one of the fastest-growing cloud platforms available today, letting organizations run their applications and data operations at ‘Google-sized’ scale. The Google Certified Professional Cloud Architect (GCP) certification is among the most sought-after IT credentials, and it is widely regarded as one of the most difficult exams offered by any cloud vendor. Earning it expands your professional opportunities and raises your earning potential. This article provides a list of Google Certified Professional Cloud Architect Sample Questions covering the core exam topics, including:

  • Design and plan a cloud solution architecture
  • Manage and provision the cloud solution infrastructure
  • Design for security and compliance
  • Analyze and optimize technical and business processes
  • Manage implementations of cloud architecture
  • Ensure solution and operations reliability

Advanced Sample Questions

What is the main purpose of the Google Cloud Platform (GCP)?

  • a) To host websites and web applications
  • b) To provide a scalable and secure infrastructure for running applications
  • c) To provide a data storage solution
  • d) To provide a messaging and collaboration platform

Answer: b

Explanation: The main purpose of the Google Cloud Platform (GCP) is to provide a scalable and secure infrastructure for running applications. GCP provides a range of services and tools that enable organizations to build, deploy, and manage their applications on the cloud. With GCP, organizations can take advantage of the benefits of cloud computing, such as improved scalability, reliability, and cost-effectiveness.

What is the Google Compute Engine used for in GCP?

  • a) To provide a secure and scalable infrastructure for running applications
  • b) To host websites and web applications
  • c) To provide virtual machines for running applications
  • d) To provide a data storage solution

Answer: c

Explanation: The Google Compute Engine is used to provide virtual machines for running applications in GCP. It enables you to create and manage virtual machines, configure their operating systems and applications, and scale them up or down as needed. With the Compute Engine, you can quickly and easily create virtual machines that are optimized for performance, security, and cost-effectiveness.

What is the Google Cloud Storage used for in GCP?

  • a) To provide a secure and scalable infrastructure for running applications
  • b) To host websites and web applications
  • c) To provide virtual machines for running applications
  • d) To provide a data storage solution

Answer: d

Explanation: Google Cloud Storage is used to provide a data storage solution in GCP. It enables you to store and manage large amounts of data in the cloud, including binary data, text files, and media files. With Cloud Storage, you can access your data from anywhere in the world, and you can scale your storage needs up or down as needed.

What is the Google Cloud SQL used for in GCP?

  • a) To provide a relational database solution
  • b) To host websites and web applications
  • c) To provide virtual machines for running applications
  • d) To provide a data storage solution

Answer: a

Explanation: Google Cloud SQL is used to provide a relational database solution in GCP. It enables you to create and manage cloud-based SQL databases, which can be used to store structured data such as customer information, product catalogs, and sales transactions. With Cloud SQL, you can take advantage of the benefits of cloud computing, such as improved scalability, reliability, and cost-effectiveness, for your database needs.

What is the purpose of the Stackdriver service in the Google Cloud Platform?

  • A) To monitor and manage Google Cloud resources.
  • B) To automate the deployment of Google Cloud resources.
  • C) To store and retrieve data in Google Cloud.
  • D) To analyze data stored in Google Cloud.

Answer: A

Explanation: Stackdriver is a monitoring and management service in the Google Cloud Platform. It provides real-time visibility into the performance and health of Google Cloud resources, including virtual machines, containers, and applications. Stackdriver also provides alerts and notifications, log management, and performance analytics.

What is the purpose of the Google Cloud Storage service?

  • A) To store and retrieve data in Google Cloud.
  • B) To monitor and manage Google Cloud resources.
  • C) To automate the deployment of Google Cloud resources.
  • D) To analyze data stored in Google Cloud.

Answer: A

Explanation: Google Cloud Storage is a highly scalable and durable object storage service in the Google Cloud Platform. It provides a secure and highly available solution for storing and retrieving large amounts of data. Google Cloud Storage can be used for a wide range of use cases, including backups, archives, and big data processing.

What is the purpose of the Google Cloud Datastore service?

  • A) To store and retrieve structured data in Google Cloud.
  • B) To store and retrieve unstructured data in Google Cloud.
  • C) To monitor and manage Google Cloud resources.
  • D) To automate the deployment of Google Cloud resources.

Answer: A

Explanation: Google Cloud Datastore is a NoSQL document database service in the Google Cloud Platform. It provides a flexible and scalable solution for storing and retrieving structured data. Google Cloud Datastore supports transactions, can scale to handle billions of entities and queries per second, and provides automatic, real-time data indexing.

What are the core components of Google Cloud Platform?

  • A. Compute Engine, App Engine, Big Data and Storage
  • B. Compute Engine, App Engine, Big Data, Storage and Networking
  • C. Compute Engine, App Engine, Big Data, Storage, Networking and Developer Tools
  • D. Compute Engine, App Engine, Big Data, Storage, Networking, Developer Tools and Machine Learning

Answer: D. Compute Engine, App Engine, Big Data, Storage, Networking, Developer Tools and Machine Learning

Explanation: Google Cloud Platform consists of a set of cloud computing services that run on the same infrastructure that Google uses for its own services. The core components of Google Cloud Platform are Compute Engine, App Engine, Big Data, Storage, Networking, Developer Tools, and Machine Learning. These components provide a range of services and solutions for cloud computing and allow organizations to build, deploy, and manage applications on the cloud.

What is Google Cloud Storage used for?

  • A. Store and manage structured data
  • B. Store and manage unstructured data
  • C. Store and manage both structured and unstructured data
  • D. Store and manage metadata

Answer: C. Store and manage both structured and unstructured data

Explanation: Google Cloud Storage is a scalable, fully-managed object storage system that can store and manage both structured and unstructured data. It is used for storing and accessing data from a variety of sources, such as images, videos, audio, logs and backups. Google Cloud Storage is highly scalable, secure and reliable, and offers a range of features for managing, storing and serving data at scale.

Basic Sample Questions

Q1) Your business is considering moving a multi-petabyte data set to the cloud. The data set must be available 24 hours a day, seven days a week. Your business analysts have only used a SQL interface before. How should you store the data to make it as easy as possible to analyze?

  1. Load data into Google BigQuery 
  2. Insert data into Google Cloud SQL
  3. Put flat files into Google Cloud Storage
  4. Stream data into Google Cloud Datastore

Correct Answer: Load data into Google BigQuery

Explanation: BigQuery is a serverless, highly scalable, low-cost enterprise data warehouse developed by Google to help all of your data analysts be more productive. Because there is no infrastructure to manage, you don’t need a database administrator; you can focus on analyzing data to obtain valuable insights using familiar SQL. BigQuery creates a logical data warehouse over managed, columnar storage as well as data from object storage and spreadsheets, allowing you to analyze all of your data.
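
As a minimal sketch (the project, dataset, table, and bucket names here are hypothetical), loading a flat file into BigQuery and analyzing it with familiar SQL could look like this using the bq command-line tool:

    # Create a dataset and load a CSV file from Cloud Storage into a table
    bq mk --dataset my_project:analytics
    bq load --source_format=CSV analytics.sales gs://my-bucket/sales.csv ./schema.json
    # Analysts query it with standard SQL
    bq query --use_legacy_sql=false 'SELECT region, SUM(amount) AS total FROM analytics.sales GROUP BY region'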

Refer: BigQuery

Q2) Your organization needs to know whether someone is present in a meeting room that has been reserved for a scheduled meeting. 1,000 conference rooms are spread across five offices on three continents. Every room has a motion sensor that reports its status every second; the motion detector’s data includes only a sensor ID and a few discrete items of information. Analysts will use this data together with information about account owners and office locations. Which kind of data storage should you use?

  1.  Flat file
  2.  NoSQL
  3.  Relational
  4.  Blobstore

Correct Answer: NoSQL

Explanation: Relational databases were not designed to cope with the scale and agility challenges that modern applications face, nor to take advantage of today’s commodity storage and processing power. NoSQL is ideal when applications generate huge volumes of new, rapidly changing data of every kind: structured, semi-structured, unstructured, and polymorphic.

Refer: What is NoSQL?

Q3) For an upcoming launch, you create an autoscaling instance group to serve web traffic. After setting the instance group as the backend of an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and relaunched every minute. The instances have no public IP addresses. Using the curl command, you have verified that each instance is returning the correct web response. You want to make sure the backend is configured correctly. What should you do?

  1. Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer.
  2. Assign a public IP to each instance and configure a firewall rule to allow the load balancer to reach the instance public IP.
  3. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.
  4. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.

Correct Answer: Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.

Explanation: When configuring a health check, the recommended practice is to check health and serve traffic on the same port. It is possible, however, to run health checks on one port while serving traffic on another. If you do use two distinct ports, make sure your firewall rules and the services on the instances are configured appropriately. If you run health checks and serve traffic on the same port but later decide to change ports, be sure to update both the backend service and the health check. Backend services that do not have a valid global forwarding rule referencing them will not be health-checked and will have no health status.
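
As an illustrative sketch (the network and tag names are hypothetical), a firewall rule admitting Google’s published health-check source ranges might look like this:

    # Allow load balancer health checks (Google's documented source ranges)
    # to reach backend instances tagged web-backend
    gcloud compute firewall-rules create allow-health-checks \
        --network=my-network \
        --direction=INGRESS \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --target-tags=web-backend \
        --allow=tcp:80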

Refer: Backend services overview

Q4) Your customer is migrating an established corporate application from an on-premises data center to the Google Cloud Platform. The business owners want as little user disruption as possible, and the security staff has strict guidelines about password storage. What method of authentication should they employ?

  1. Use G Suite Password Sync to replicate passwords into Google
  2. Federate authentication via SAML 2.0 to the existing Identity Provider 
  3. Provision users in Google using the Google Cloud Directory Sync tool
  4. Ask users to set their Google password to match their corporate password

Correct Answer: Federate authentication via SAML 2.0 to the existing Identity Provider

Explanation: Federating via SAML 2.0 keeps authentication, and therefore password storage, with the existing identity provider, satisfying the security team’s strict password-storage guidelines while minimizing disruption for users.
Users still need to be provisioned in Google’s directory. The global Directory is available to both Cloud Platform and G Suite resources and can be provisioned in a variety of ways. Provisioned users can take advantage of rich authentication features such as single sign-on (SSO), OAuth, and two-factor verification.
You can provision users automatically using one of the following tools and services:
Google Cloud Directory Sync (GCDS)

Google Admin SDK

GCDS is a connector that can provision users and groups on your behalf for both Cloud Platform and G Suite. Using GCDS, you can automatically add, modify, and delete users, groups, and non-employee contacts, and synchronize data from your LDAP directory server to your Cloud Platform domain using LDAP queries.

Refer: Best practices for enterprise organizations

Q5) Your business has successfully moved to the cloud and now wishes to analyze its data stream to improve operations. It has no existing code for this analysis, so it is evaluating every option. The options need to cover a mix of batch and stream processing, since the business runs certain hourly jobs and live-processes some data as it arrives. Which technology should it employ?

  1. Google Cloud Dataproc
  2. Google Cloud Dataflow 
  3. Google Container Engine with Bigtable
  4. Google Compute Engine with Google BigQuery

Correct Answer: Google Cloud Dataflow 

Explanation: Cloud Dataflow is a fully managed service for transforming and enriching data in both stream (real-time) and batch (historical) modes with equal reliability and expressiveness; no complicated workarounds or compromises are required.
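
As a hedged illustration (the output bucket is hypothetical; the template and sample input are Google-provided), a batch job can be launched from an existing Dataflow template without writing any pipeline code:

    # Run the Google-provided Word_Count template as a batch Dataflow job
    gcloud dataflow jobs run wordcount-example \
        --gcs-location=gs://dataflow-templates/latest/Word_Count \
        --region=us-central1 \
        --parameters=inputFile=gs://dataflow-samples/shakespeare/kinglear.txt,output=gs://my-bucket/results/output

Streaming jobs use the same service; a pipeline reading from Pub/Sub simply runs continuously instead of terminating.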

Refer: Dataflow

Q6) Some of your customer’s users are reporting that their newly upgraded Google App Engine application is taking about 30 seconds to load. No one reported this behavior before the update. What is the best course of action for you?

  1.  Work with your ISP to diagnose the problem
  2. Open a support ticket to ask for network capture and flow data to diagnose the problem, then roll back your application
  3. Roll back to an earlier known good release initially, then use Stackdriver Trace and Logging to diagnose the problem in a development/test/staging environment
  4. Roll back to an earlier known good release, then push the release again at a quieter period to investigate. Then use Stackdriver Trace and Logging to diagnose the problem

Correct Answer: Roll back to an earlier known good release initially, then use Stackdriver Trace and Logging to diagnose the problem in a development/test/staging environment.

Explanation: Stackdriver Logging lets you store, search, analyze, monitor, and alert on log data and events from Google Cloud Platform and Amazon Web Services (AWS). Its API can also ingest custom log data from any source. Stackdriver Logging is a fully managed service that operates at scale, ingesting application and system log data from thousands of VMs, and all of that log data can be analyzed in real time.
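
As a minimal sketch (the service and version IDs are hypothetical), rolling traffic back to a known-good App Engine version looks like this:

    # List deployed versions and their current traffic split
    gcloud app versions list --service=default
    # Route all traffic back to the previous known-good version
    gcloud app services set-traffic default --splits=v1=1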

Refer: Cloud Logging

Q7) Data files are stored on an ext4-formatted persistent disk attached to a production database virtual machine on Google Compute Engine. The database is about to run out of space. How can you fix the issue with the least amount of downtime?

  1. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.
  2. Shut down the virtual machine, use the Cloud Platform Console to increase the persistent disk size, then restart the virtual machine
  3. In the Cloud Platform Console, increase the size of the persistent disk and verify the new space is ready to use with the fdisk command in Linux
  4. In the Cloud Platform Console, create a new persistent disk attached to the virtual machine, format and mount it, and configure the database service to move the files to the new disk
  5. In the Cloud Platform Console, create a snapshot of the persistent disk, restore the snapshot to a new larger disk, unmount the old disk, mount the new disk, and restart the database service

Correct Answer: In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.

Explanation: Connect to your Linux instance and manually resize the partition and file system to use the additional storage space.
Extend the file system on the disk or the partition to use the added space. If you grew a partition on your disk, specify the partition; if your disk has no partition table, specify only the disk ID: sudo resize2fs /dev/[DEVICE_ID][PARTITION_NUMBER], where [DEVICE_ID] is the device name and [PARTITION_NUMBER] is the partition number of the device whose file system you are resizing.
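
Putting the procedure together (the disk name, zone, size, and device path are illustrative, and this assumes a disk without a partition table), the resize requires no downtime:

    # Grow the persistent disk while the VM keeps running
    gcloud compute disks resize my-data-disk --size=500GB --zone=us-central1-a
    # Inside the VM: grow the ext4 file system to fill the enlarged disk
    sudo resize2fs /dev/sdb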

Refer: Add a persistent disk to your VM

Q8) The number and size of the Apache Spark and Hadoop jobs running in your local datacenter are expected to skyrocket, according to your company’s projections. You want to use the cloud to help you scale for this upcoming demand with as little operational work and code change as possible. Which product should you use?

  1. Google Cloud Dataflow
  2. Google Cloud Dataproc
  3. Google Compute Engine
  4. Google Kubernetes Engine

Correct Answer: Google Cloud Dataproc

Explanation: Google Cloud Dataproc is a fully managed, fast, easy-to-use, low-cost service for running the Apache Spark and Apache Hadoop ecosystems on Google Cloud Platform. Cloud Dataproc quickly creates clusters large or small, supports a wide range of job types, and is integrated with other Google Cloud Platform services such as Google Cloud Storage and Stackdriver Logging, lowering your total cost of ownership.
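
As a rough sketch (the cluster name, region, and sizes are illustrative), existing Spark jobs can move over with essentially no code changes:

    # Create a managed cluster
    gcloud dataproc clusters create spark-cluster --region=us-central1 --num-workers=2
    # Submit an existing Spark job to it
    gcloud dataproc jobs submit spark \
        --cluster=spark-cluster --region=us-central1 \
        --class=org.apache.spark.examples.SparkPi \
        --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar -- 1000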

Refer: Dataproc 

Q9) You want to improve the performance of an accurate, real-time weather-charting application. The data is gathered from 50,000 sensors that each send 10 readings per second in the form of a timestamp and sensor reading. Where should you store the data?

  1.  Google BigQuery
  2. Google Cloud SQL
  3. Google Cloud Bigtable
  4. Google Cloud Storage

Correct Answer: Google Cloud Bigtable

Explanation: At 50,000 sensors sending 10 readings per second each, the application must ingest roughly 500,000 writes per second, exactly the kind of high-throughput, low-latency time-series workload Cloud Bigtable is designed for (see the sketch after this list).

  • Good for:
    • Low-latency read/write access
    • High-throughput analytics
    • Native time series support
  • Common workloads:
    • IoT, finance, adtech
    • Personalization, recommendations
    • Monitoring
    • Geospatial datasets
    • Graphs
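
As an illustrative sketch (the project, instance, table, and row-key names are hypothetical), a Bigtable time-series schema typically packs the sensor ID and timestamp into the row key so readings for one sensor are stored contiguously:

    # Create a table and column family for sensor readings (cbt CLI)
    cbt -project=my-project -instance=weather createtable sensor-readings
    cbt -project=my-project -instance=weather createfamily sensor-readings obs
    # Row key = sensor ID + timestamp; one cell per reading
    cbt -project=my-project -instance=weather set sensor-readings \
        sensor4217#20240101120000 obs:value=18.4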

Refer: Google Cloud online storage products

Q10) Your company requires that all metrics from all applications be kept for a period of five years so that they can be analyzed in the event of legal action. Which strategy should you take?

  1. Grant the security team access to the logs in each Project
  2. Configure Stackdriver Monitoring for all Projects, and export to BigQuery 
  3. Configure Stackdriver Monitoring for all Projects with the default retention policies
  4. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage 

Correct Answer: Configure Stackdriver Monitoring for all Projects, and export to BigQuery 

Explanation: Stackdriver Logging lets you filter, search, and view logs from your cloud and open-source application services; create metrics from log data for use in dashboards and alerts; and export logs to BigQuery, Google Cloud Storage, and Pub/Sub. Exporting to BigQuery keeps the metrics queryable for the full five-year retention period.
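
A minimal sketch (the sink, project, and dataset names are hypothetical; the sink’s writer identity must also be granted access to the dataset):

    # Export matching logs from a project to a BigQuery dataset
    gcloud logging sinks create metrics-archive \
        bigquery.googleapis.com/projects/my-project/datasets/five_year_logs \
        --log-filter='resource.type="gce_instance"'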

Refer: Google Cloud’s operations suite (formerly Stackdriver)

Q11) Your organization has decided to use Google Cloud Platform to create a backup replica of its on-premises user-authentication PostgreSQL database. The database is 4 TB in size, and substantial changes are made to it on a regular basis. Replication requires communication over a private address space. Which networking strategy should you employ?

  1. Google Cloud Dedicated Interconnect
  2. Google Cloud VPN connected to the data center network
  3. A NAT and TLS translation gateway installed on-premises
  4. A Google Compute Engine instance with a VPN server installed connected to the data center network

Correct Answer: Google Cloud Dedicated Interconnect

Explanation: Google Cloud Dedicated Interconnect connects your on-premises network to Google’s network via direct physical connections and supports RFC 1918 communication. Dedicated Interconnect lets you move large volumes of data between networks at a lower cost than buying additional bandwidth on the public internet or using VPN tunnels.

Refer: Dedicated Interconnect overview

Q12) A development manager is building a new application. He asks you to review his requirements and identify cloud technologies that can meet them. For cloud portability, the application must:

  1. Be built on open-source technologies.
  2. Scale compute capacity dynamically based on demand.
  3. Support the continuous release of software.
  4. Run multiple copies of the same application stack in separate locations.
  5. Use dynamic templates to deploy application bundles.
  6. Use URLs to direct network traffic to specific services.

Which combination of technologies will meet all of his requirements?

  1. Google Kubernetes Engine, Jenkins, and Helm 
  2. Google Kubernetes Engine and Cloud Load Balancing
  3. Google Kubernetes Engine and Cloud Deployment Manager
  4. Google Kubernetes Engine, Jenkins, and Cloud Load Balancing

Correct Answer: Google Kubernetes Engine, Jenkins, and Helm

Explanation: Jenkins is an open-source automation server that lets you flexibly orchestrate your build, test, and deployment pipelines, supporting continuous releases. Kubernetes Engine is a hosted version of Kubernetes, a powerful open-source cluster manager and container orchestration system that scales dynamically and routes traffic to specific services by URL through its ingress rules. Helm supplies the dynamic templates (charts) used to deploy application bundles. Running Jenkins on Kubernetes Engine provides important benefits over a traditional VM-based deployment when setting up a continuous delivery (CD) pipeline.
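
As a hedged sketch (the cluster name, zone, and release name are hypothetical; charts.jenkins.io is the Jenkins project’s chart repository), standing up the stack might look like:

    # Create a GKE cluster
    gcloud container clusters create cd-cluster --zone=us-central1-a --num-nodes=3
    # Install Jenkins from its Helm chart
    helm repo add jenkins https://charts.jenkins.io
    helm install cicd jenkins/jenkins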

Refer: Jenkins on Kubernetes Engine

Q13) Using Google Compute Engine, you have created numerous preemptible Linux virtual machine instances. You want to shut your application down properly before the virtual machines are preempted. What should you do?

  1. Create a shutdown script named k99.shutdown in the /etc/rc.6.d/ directory
  2. Create a shutdown script registered as a xinetd service in Linux and configure a Stackdriver endpoint check to call the service
  3. Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when you create the new virtual machine instance 
  4. Create a shutdown script, registered as a xinetd service in Linux, and use the gcloud compute instances add-metadata command to specify the service URL as the value for a new metadata entry with the key shutdown-script-url

Correct Answer: Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when you create the new virtual machine instance

Explanation: Compute Engine reads the startup-script and shutdown-script metadata keys from the metadata server to locate a startup or shutdown script. A shutdown script runs, on a best-effort basis, whenever an instance is stopped or preempted, giving the application a short window to terminate cleanly.
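
A minimal sketch (the instance name, zone, and script path are hypothetical):

    # Create a preemptible VM whose shutdown script is read from a local file
    gcloud compute instances create worker-1 \
        --zone=us-central1-a \
        --preemptible \
        --metadata-from-file=shutdown-script=./shutdown.sh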

Refer:  Linux VMs

Q14) On Google Cloud Platform, your company has a three-tier web application running in the same network. Each of the three tiers (web, API, and database) can scale independently of the others. The web tier should direct traffic to the API tier, which should then direct traffic to the database tier. There should be no traffic flowing between the web and database tiers. What network configuration should you use?

  1. Add each tier to a different subnetwork
  2. Set up software based firewalls on individual VMs
  3. Add tags to each tier and set up routes to allow the desired traffic flow
  4. Add tags to each tier and set up firewall rules to allow the desired traffic flow

Correct Answer: Add tags to each tier and set up firewall rules to allow the desired traffic flow

Explanation: Google Cloud Platform (GCP) enforces firewall restrictions with rules and tags. GCP tags and firewall rules are defined once per network and apply across all regions.
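
As an illustrative sketch (the tag names and ports are hypothetical), two rules permit web-to-API and API-to-database traffic while the default implied deny blocks web-to-database traffic:

    # Web tier may reach the API tier
    gcloud compute firewall-rules create web-to-api \
        --allow=tcp:8080 --source-tags=web --target-tags=api
    # API tier may reach the database tier
    gcloud compute firewall-rules create api-to-db \
        --allow=tcp:3306 --source-tags=api --target-tags=db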

Refer: VPC firewall rules

Q15) Your company needs to be able to manage IAM policies for multiple departments separately, but from a central location. Which strategy should you use?

  1. Multiple Organizations with multiple Folders
  2. Multiple Organizations, one for each department
  3. A single Organization with Folders for each department
  4. A single Organization with multiple projects, each with a central owner

Correct Answer: A single Organization with Folders for each department

Explanation: Folders are nodes in the Cloud Platform Resource Hierarchy. A folder can contain projects, other folders, or a combination of both. Folders can be used to group projects into a hierarchy under an organization. For instance, your company may have several departments, each with its own set of GCP resources; folders let you organize these resources by department. Folders are used to group resources that share common IAM policies. A folder can contain several folders or resources, but each folder or resource has exactly one parent.
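
A brief sketch (the organization ID, folder ID, group, and role are hypothetical) of creating a departmental folder and attaching a department-wide IAM policy from one central place:

    # Create a folder for one department under the organization
    gcloud resource-manager folders create \
        --display-name="Engineering" --organization=123456789
    # Bind an IAM role at the folder level; every project inside
    # the folder inherits it
    gcloud resource-manager folders add-iam-policy-binding 987654321 \
        --member="group:eng-admins@example.com" --role="roles/viewer"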

Refer: Creating and managing Folders

Q16) Your customer needs to collect many GBs of aggregate real-time key performance indicators (KPIs) from their Google Cloud Platform-based game servers and monitor them with low latency. How should the KPIs be recorded?

  1. Store time-series data from the game servers in Google Bigtable, and view it using Google Data Studio.
  2.  Output custom metrics to Stackdriver from the game servers, and create a Dashboard in Stackdriver Monitoring Console to view them. 
  3. Schedule BigQuery load jobs to ingest analytics files uploaded to Cloud Storage every ten minutes, and visualize the results in Google Data Studio.
  4. Insert the KPIs into Cloud Datastore entities, and run ad hoc analysis and visualizations of them in Cloud Datalab.

Correct Answer: Store time-series data from the game servers in Google Bigtable, and view it using Google Data Studio.

Explanation: Cloud Bigtable is built for sustained, high-volume ingestion of time-series data with low-latency reads, so many GBs of aggregate KPIs can be written and queried as they arrive, and Google Data Studio can present the results in monitoring dashboards. Stackdriver custom metrics, by contrast, are not intended for this data volume.

Refer: Google Cloud databases

Q17) You’re serving static HTTP(S) website content hosted on a Compute Engine instance group, with Cloud CDN in front. You want to increase the cache hit ratio. What should you do?

  1. Customize the cache keys to omit the protocol from the key.
  2. Shorten the expiration time of the cached objects.
  3. Make sure the HTTP(S) header “Cache-Region” points to the region closest to your users.
  4. Replicate the static content in a Cloud Storage bucket, and point Cloud CDN toward a load balancer on that bucket.

Correct Answer: Customize the cache keys to omit the protocol from the key.

Explanation: If the same content is served over both HTTP and HTTPS, including the protocol in the cache key stores two separate copies of every object; omitting the protocol lets both schemes share a single cache entry, which raises the hit ratio. For cacheable content, Cloud CDN uses HTTP(S) Load Balancing as the origin. Through a single global IP address, an external HTTP(S) load balancer can deliver a mix of static and dynamically generated content to users from the following kinds of backends (see the sketch after this list):

  • Instance groups
  • Zonal network endpoint groups (NEGs)
  • Serverless NEGs: one or more App Engine, Cloud Run, or Cloud Functions services
  • Internet NEGs for external backends
  • Cloud Storage buckets
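
A one-line sketch of the fix (the backend service name is hypothetical):

    # Stop keying the CDN cache on protocol so HTTP and HTTPS share entries
    gcloud compute backend-services update web-backend \
        --global --no-cache-key-include-protocol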

Refer: Best practices for content delivery

Q18) You’re working on a globally scalable frontend for a legacy streaming data API. For proper processing, this API requires events to be in strict chronological order with no duplicate data. Which products should you use to ensure that data is delivered in a guaranteed-once FIFO (first-in, first-out) fashion?

  1. Cloud Pub/Sub alone
  2. Cloud Pub/Sub to Cloud Dataflow 
  3. Cloud Pub/Sub to Stackdriver
  4. Cloud Pub/Sub to Cloud SQL

Correct Answer: Cloud Pub/Sub to Cloud Dataflow

Explanation: On its own, Pub/Sub delivers messages at least once and does not guarantee strict ordering: when the service redelivers a message containing an ordering key, it also redelivers all subsequent messages with the same ordering key, including ones that were already acknowledged, and redelivered messages must be acknowledged again. Feeding Pub/Sub into Cloud Dataflow lets the pipeline deduplicate records and restore exact chronological order, which provides the guaranteed-once FIFO processing the API requires.
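
A small sketch of the ingestion side (the topic and subscription names are hypothetical; the ordering flag helps, but the Dataflow stage still performs the deduplication):

    # Create the ingestion topic and a subscription with ordering enabled
    gcloud pubsub topics create api-events
    gcloud pubsub subscriptions create api-events-sub \
        --topic=api-events --enable-message-ordering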

Refer: Ordering messages

Q19) Your organization has numerous Compute Engine instances running an application. You must ensure that the application can communicate with an on-premises service that requires high throughput via internal IPs, while minimizing latency. What should you do?

  1. Use OpenVPN to configure a VPN tunnel between the on-premises environment and Google Cloud.
  2. Configure a direct peering connection between the on-premises environment and Google Cloud.
  3. Use Cloud VPN to configure a VPN tunnel between the on-premises environment and Google Cloud.
  4. Configure a Cloud Dedicated Interconnect connection between the on-premises environment and Google Cloud. 

Correct Answer: Configure a Cloud Dedicated Interconnect connection between the on-premises environment and Google Cloud.

Explanation: Dedicated Interconnect provides direct physical connections between the on-premises network and Google’s network and carries RFC 1918 (internal IP) traffic. Because the traffic never crosses the public internet, it offers higher throughput and lower, more consistent latency than VPN tunnels, whose per-tunnel bandwidth is limited. Direct peering exposes Google’s public services rather than internal IPs, so it does not meet the requirement.

Refer: Dedicated Interconnect overview

Q20) You’re setting up a single Cloud SQL MySQL second-generation database instance that stores mission-critical transaction data. You want to ensure that the least amount of data is lost in the event of a catastrophic failure. Which two features should you use? (Select two.)

  1. Sharding
  2. Read replicas
  3. Binary logging
  4. Automated backups
  5. Semisynchronous replication

Correct Answer: Binary logging and Automated backups

Explanation: Backups let you restore your Cloud SQL instance’s data if it becomes corrupted, and if an instance has a problem you can restore it to a previous state by overwriting it with a backup. Enable automated backups for any instance that contains critical data; backups protect your data from loss or damage. Some operations, such as creating clones and replicas, require both automated backups and binary logging, and binary logging is also what makes point-in-time recovery possible, minimizing data loss after a catastrophic failure.
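
A short sketch (the instance name and backup window are hypothetical):

    # Enable daily automated backups and binary logging on an instance
    gcloud sql instances patch prod-db \
        --backup-start-time=02:00 --enable-bin-log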

Refer: About Cloud SQL backups

Google Certified Professional Cloud Architect free practice test