Google Professional Cloud DevOps Engineer (GCP) Sample Questions
Google Cloud Platform has emerged as one of the most widely used cloud platforms, becoming a strong competitor to the established giants – Amazon Web Services and Microsoft Azure – in a short period of time. Google Cloud is especially well regarded for its analytics, machine learning, and cloud-native computing services, and the Google Professional Cloud DevOps Engineer certification is highly recommended for professionals working in those areas.
The Google Professional Cloud DevOps Engineer Exam is intended primarily for these individuals –
- Administrators of on-premises IT systems
- Architects of cloud solutions and application developers
- DevOps experts with industry experience
- Aspiring DevOps specialists with little GCP experience
- Engineers for on-premises systems
Advanced Sample Questions
What is the purpose of the gcloud command-line tool in GCP?
- A. To manage resources and deploy applications in GCP
- B. To monitor the performance of GCP services
- C. To manage user accounts in GCP
- D. To perform administrative tasks in GCP
Answer: A
Explanation: The gcloud command-line tool is used to manage resources and deploy applications in GCP, including creating and managing virtual machines, managing networks, and deploying applications. It also provides a way to automate and script operations in GCP.
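As an illustration, a few common gcloud operations might look like the following sketch (the project ID, zone, and instance name are hypothetical placeholders, and the commands require the Google Cloud SDK and authenticated credentials):

```shell
# Select the project to work in (hypothetical project ID)
gcloud config set project my-demo-project

# Create a virtual machine instance in a given zone
gcloud compute instances create demo-vm \
    --zone=us-central1-a \
    --machine-type=e2-medium

# List the instances in the current project
gcloud compute instances list
```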
How would you configure autoscaling for a GCP Compute Engine instance group?
- A. By setting a target CPU utilization threshold in the instance group configuration
- B. By creating a load balancer and setting a target utilization threshold
- C. By setting a target memory utilization threshold in the instance group configuration
- D. By setting a target disk utilization threshold in the instance group configuration
Answer: A
Explanation: Autoscaling for a GCP Compute Engine instance group can be configured by setting a target CPU utilization threshold in the instance group configuration. The autoscaling system will monitor the CPU utilization of the instances in the group and add or remove instances as necessary to keep the average CPU utilization at or near the target threshold.
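The CPU-based policy described above can be attached to an existing managed instance group with a single gcloud command; a minimal sketch, assuming a group named web-mig already exists (the name, zone, and thresholds are illustrative):

```shell
# Attach an autoscaler to a managed instance group, targeting
# 60% average CPU utilization across 2 to 10 replicas
gcloud compute instance-groups managed set-autoscaling web-mig \
    --zone=us-central1-a \
    --min-num-replicas=2 \
    --max-num-replicas=10 \
    --target-cpu-utilization=0.60
```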
What is the purpose of Stackdriver in GCP?
- A. To monitor the performance of GCP services
- B. To manage user accounts in GCP
- C. To manage resources and deploy applications in GCP
- D. To perform administrative tasks in GCP
Answer: A
Explanation: Stackdriver is a monitoring and logging service in GCP that provides real-time visibility into the performance of GCP services, as well as custom applications running on GCP or other cloud platforms. It can be used to monitor the performance of virtual machines, Kubernetes clusters, and other GCP services, as well as send alerts and notifications when performance issues are detected.
How would you configure a firewall rule in GCP to allow incoming traffic from a specific IP address range?
- A. By creating a new firewall rule with a source IP range and a target tag
- B. By modifying the default firewall rule to allow incoming traffic from the specified IP address range
- C. By creating a new firewall rule with a source IP range and a target service
- D. By modifying the default firewall rule to allow incoming traffic from the specified IP address range and a target service
Answer: A
Explanation: To allow incoming traffic from a specific IP address range in GCP, a new firewall rule should be created with the source IP range specified and a target tag. The target tag specifies the instances or network resources that the firewall rule applies to, and the source IP range specifies the IP addresses or CIDR blocks that are allowed to initiate traffic to the specified target.
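A hedged sketch of such a rule, combining a source IP range with a target tag as the explanation describes (the rule name, network, CIDR range, and tag are hypothetical examples):

```shell
# Allow HTTPS traffic from one office IP range to instances
# tagged "web-server" on the default network
gcloud compute firewall-rules create allow-office-https \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:443 \
    --source-ranges=203.0.113.0/24 \
    --target-tags=web-server
```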
How would you create a managed instance group in GCP?
- A. By creating an instance group with a single instance and then enabling automatic scaling
- B. By creating a group of instances manually and then adding them to a load balancer
- C. By using the Google Cloud Console or the gcloud command-line tool to create an instance template, then creating a managed instance group from that template
- D. By creating a group of instances manually and then enabling automatic scaling
Answer: C
Explanation: To create a managed instance group in GCP, you need to first create an instance template that defines the configuration of the instances in the group. This can be done using the Google Cloud Console or the gcloud command-line tool. Once the instance template is created, you can create a managed instance group from that template, which will automatically manage the scaling and deployment of instances based on the template.
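The two-step flow from the explanation – template first, then group – might look like this sketch (template name, group name, machine type, and image are illustrative assumptions):

```shell
# 1. Create an instance template defining the instance configuration
gcloud compute instance-templates create web-template \
    --machine-type=e2-medium \
    --image-family=debian-12 \
    --image-project=debian-cloud

# 2. Create a managed instance group of three instances from the template
gcloud compute instance-groups managed create web-mig \
    --zone=us-central1-a \
    --template=web-template \
    --size=3
```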
What is the purpose of Kubernetes Engine in GCP?
- A. To manage and deploy containerized applications
- B. To monitor the performance of GCP services
- C. To manage user accounts in GCP
- D. To perform administrative tasks in GCP
Answer: A
Explanation: Kubernetes Engine is a managed service in GCP that provides a scalable and efficient way to manage and deploy containerized applications. It is based on the open-source Kubernetes system and provides a fully-managed environment for deploying, scaling, and managing containerized applications. With Kubernetes Engine, you can easily deploy and manage containerized applications in a scalable and efficient manner, without having to worry about managing the underlying infrastructure.
How would you store and manage sensitive data in GCP?
- A. By using Google Cloud Storage for data storage and Google Key Management Service for key management
- B. By using Google Cloud SQL for data storage and Google Identity and Access Management for access control
- C. By using Google Cloud Bigtable for data storage and Google Cloud Data Loss Prevention API for data protection
- D. By using Google Cloud Datastore for data storage and Google Cloud Data Loss Prevention API for data protection
Answer: A
Explanation: To store and manage sensitive data in GCP, it is recommended to use Google Cloud Storage for data storage and Google Key Management Service for key management. Google Key Management Service provides a secure way to manage encryption keys for data stored in Google Cloud Storage, and Google Cloud Storage provides a secure and scalable way to store data. With these services, you can ensure that your sensitive data is stored securely and encrypted at rest and in transit.
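A minimal sketch of wiring Cloud Storage to a customer-managed KMS key, assuming the project, key ring, key, and bucket names shown here (all hypothetical):

```shell
# Create a key ring and an encryption key in Cloud KMS
gcloud kms keyrings create my-keyring --location=us-central1
gcloud kms keys create my-key \
    --keyring=my-keyring \
    --location=us-central1 \
    --purpose=encryption

# Use the key as the default encryption key for a bucket
gsutil kms encryption \
    -k projects/my-demo-project/locations/us-central1/keyRings/my-keyring/cryptoKeys/my-key \
    gs://my-sensitive-data-bucket
```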
What is the purpose of Google Cloud Functions in GCP?
- A. To run serverless applications in the cloud
- B. To monitor the performance of GCP services
- C. To manage user accounts in GCP
- D. To perform administrative tasks in GCP
Answer: A
Explanation: Google Cloud Functions is a serverless computing service in GCP that allows you to run code in response to events without having to manage any infrastructure. With Cloud Functions, you can write and deploy small pieces of code that are triggered by events such as changes in a database, new file uploads to a cloud storage bucket, or incoming HTTP requests. This makes it easy to run serverless applications in the cloud and scale them as needed, without having to worry about managing any infrastructure.
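For example, a function triggered by new uploads to a Cloud Storage bucket – one of the event types mentioned above – could be deployed with a command along these lines (function name, bucket, runtime, and entry point are hypothetical):

```shell
# Deploy a Node.js function that fires whenever an object
# is uploaded to the given bucket
gcloud functions deploy process-upload \
    --runtime=nodejs18 \
    --trigger-bucket=my-upload-bucket \
    --entry-point=processUpload
```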
What is the purpose of Stackdriver in GCP?
- A. To monitor and troubleshoot the performance of GCP services and applications
- B. To manage user accounts in GCP
- C. To perform administrative tasks in GCP
- D. To deploy and manage containerized applications
Answer: A
Explanation: Stackdriver is a monitoring, logging, and diagnostics platform in GCP that provides a comprehensive view of the performance of GCP services and applications. Stackdriver enables you to monitor the performance of your applications and services, troubleshoot issues, and diagnose problems. It provides a centralized view of logs, metrics, and events from various sources, including GCP services and custom applications, making it easy to understand the health and performance of your systems.
How would you deploy a highly available web application in GCP?
- A. By creating a single instance of the web application and relying on automatic scaling to handle traffic spikes
- B. By creating a group of instances and using a load balancer to distribute traffic across the instances
- C. By using App Engine Flexible Environment to deploy the web application
- D. By creating a group of instances and relying on automatic scaling to handle traffic spikes
Answer: B
Explanation: To deploy a highly available web application in GCP, it is recommended to create a group of instances and use a load balancer to distribute traffic across the instances.
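Fronting an instance group with an HTTP load balancer involves a chain of resources; a hedged sketch, assuming a managed instance group named web-mig already exists in us-central1-a (all other names are illustrative):

```shell
# Health check and backend service backed by the instance group
gcloud compute health-checks create http basic-check --port=80
gcloud compute backend-services create web-backend \
    --protocol=HTTP --health-checks=basic-check --global
gcloud compute backend-services add-backend web-backend \
    --instance-group=web-mig \
    --instance-group-zone=us-central1-a \
    --global

# URL map, proxy, and global forwarding rule to expose the service
gcloud compute url-maps create web-map --default-service=web-backend
gcloud compute target-http-proxies create web-proxy --url-map=web-map
gcloud compute forwarding-rules create web-rule \
    --global --target-http-proxy=web-proxy --ports=80
```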
Basic Sample Questions
Question 1 – You provide production support for a Node.js application running on Google Kubernetes Engine (GKE). The application sends several HTTP requests to dependent applications. You want to know which dependent applications are likely to cause performance issues. What are your options?
- A. Instrument all applications with Stackdriver Profiler.
- B. Instrument all applications with Stackdriver Trace and review inter-service HTTP requests.
- C. Use Stackdriver Debugger to review the execution of logic within each application to instrument all applications.
- D. Modify the Node.js application to log HTTP requests and response times to dependent applications. Use Stackdriver Logging to find dependent applications that are performing poorly.
Correct Answer – B
Question 2 – You created a Stackdriver chart for CPU utilization in a dashboard within your workspace project. You want to share the chart with your Site Reliability Engineering (SRE) team only, and you must adhere to the principle of least privilege. What are your options?
- A. Provide the SRE team with the workspace Project ID. Assign the Monitoring Viewer IAM role in the workspace project to the SRE team.
- B. Provide the SRE team with the workspace Project ID. Assign the Dashboard Viewer IAM role in the workspace project to the SRE team.
- C. Select “Share chart by URL” and send the URL to the SRE team. Assign the Monitoring Viewer IAM role in the workspace project to the SRE team.
- D. Select “Share chart by URL” and send the URL to the SRE team. Assign the Dashboard Viewer IAM role in the workspace project to the SRE team.
Correct Answer – A
Question 3 – Your organization wants to implement Site Reliability Engineering (SRE) culture and principles. A service that you support recently experienced a brief outage. A manager from another team requests a formal explanation of what occurred so that corrective actions can be taken. What are your options?
- A. Create a postmortem that includes the root causes, resolution, lessons learned, and a prioritized action plan. Only the manager should have access to it.
- B. Create a postmortem that includes the root causes, resolution, lessons learned, and a prioritized action plan. Distribute it through the engineering organization’s document portal.
- C. Create a postmortem that includes the root causes, resolution, lessons learned, a list of those responsible, and action items for each person. Only the manager should have access to it.
- D. Create a postmortem that includes the root causes, resolution, lessons learned, a list of those responsible, and action items for each person. Distribute it through the engineering organization’s document portal.
Correct Answer – B
Question 4 – You run a set of applications on a Google Kubernetes Engine (GKE) cluster and use Stackdriver Kubernetes Engine Monitoring. You are putting a new containerized application that your company requires into production. This app was created by a third party and cannot be changed or reconfigured. The application logs to /var/log/app messages.log, and you want to forward these log entries to Stackdriver Logging. What are your options?
- A. Use the default Stackdriver Kubernetes Engine Monitoring agent configuration.
- B. Install a Fluentd daemonset on GKE. Then, in the application’s pods, create a customised input and output configuration to tail the log file and write to Stackdriver Logging.
- C. Re-deploy your applications after installing Kubernetes on Google Compute Engine (GCE). Then, using the built-in Stackdriver Logging configuration, tail the log file in the application’s pods and write it to Stackdriver Logging.
- D. Create a script to tail the pod’s log file and write entries to standard output. Run the script as a sidecar container alongside the app’s pod. Configure a shared volume between the containers to give the script read access to the application container’s /var/log.
Correct Answer – B
Reference: https://cloud.google.com/solutions/customizing-stackdriver-logs-fluentd
Question 5 – You are running an application in a virtual machine (VM) that is based on a custom Debian image. The Stackdriver Logging agent is installed on the image. The cloud-platform scope is assigned to the VM. The application is logging data to syslog. To visualise the logs, you should use Stackdriver Logging in the Google Cloud Platform Console. You notice that syslog is missing from the Logs Viewer’s “All logs” dropdown list. What should your first step be?
- A. In the Logs Viewer, look for the agent’s test log entry.
- B. Download and install the most recent Stackdriver agent.
- C. Confirm that the VM service account access scope includes the monitoring.write scope.
- D. SSH into the VM and run the following command on it: ps ax | grep fluentd
Question 6 – To build and deploy your application to Google Kubernetes Engine (GKE), you use a multi-step Cloud Build pipeline. You want to integrate with a third-party monitoring platform by sending build information via HTTP POST to a webhook. You want to keep the development effort to a minimum. What are your options?
- A. Include logic in each Cloud Build step to HTTP POST build data to a webhook.
- B. In Cloud Build, add a new step at the end of the pipeline to HTTP POST the build information to a webhook.
- C. Create a logs-based metric from the Cloud Build logs using Stackdriver Logging. Create an Alert that includes a Webhook notification type.
- D. Set up a Cloud Pub/Sub push subscription to the Cloud Build cloud-builds PubSub topic in order to HTTP POST build information to a webhook.
Correct Answer – D
Question 7 – You deploy your application with Spinnaker and have set up a canary deployment stage in the pipeline. At startup, your application uses an in-memory cache to load objects. You want to automate the comparison of the canary and production versions. How should the canary analysis be configured?
- A. Compare the canary with a fresh deployment of the current production version.
- B. Compare the canary with a fresh deployment of the previous production version.
- C. Compare the canary with the existing deployment of the current production version.
- D. Compare the canary with the average performance of a sliding window of previous production versions.
Correct Answer – A
Reference: https://cloud.google.com/solutions/automated-canary-analysis-kubernetes-engine-spinnaker
Question 8 – You support a high-traffic web application and want to make sure the home page loads quickly. As a first step, you decide to use a Service Level Indicator (SLI) to represent home page request latency, with a page load time of 100 ms considered acceptable. What is the Google-recommended method for calculating SLI?
- A. Divide the request latencies into ranges and compute the percentile at 100 milliseconds.
- B. Divide the request latencies into ranges and calculate the median and 90th percentiles.
- C. Count the number of home page requests that take less than 100 milliseconds to load, then divide by the total number of home page requests.
- D. Count the number of home page requests that load in less than 100 milliseconds and divide by the total number of web application requests.
Correct Answer – C
Reference: https://sre.google/workbook/implementing-slos/
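The good-events / total-events method in option C reduces to a simple ratio; the request counts below are made-up examples, not data from the question:

```shell
# Hypothetical request counts over the measurement window
good_requests=980      # home page requests served in under 100 ms
total_requests=1000    # all home page requests

# SLI = good events / total valid events
sli=$(awk -v g="$good_requests" -v t="$total_requests" \
    'BEGIN { printf "%.3f", g / t }')
echo "SLI: $sli"
```

Here an SLI of 0.980 means 98% of home page requests met the 100 ms target.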
Question 9 – You deploy a new version of an internal application during a weekend maintenance window when user traffic is minimal. After the window closes, you discover that one of the new features isn’t functioning properly in the production environment. You roll back the new release and deploy a fix after an extended outage. You want to change your release process to reduce the mean time to recovery in order to avoid future extended outages. What are your options? (Select two.)
- A. Before merging new code, have two different peers review the changes.
- B. When releasing new code via a CD server, use the blue/green deployment strategy.
- C. Include a code linting tool to validate coding standards prior to accepting any code into the repository.
- D. Prior to release, require developers to run automated integration tests on their local development environments.
- E. Set up a CI server. Add a suite of unit tests to your code and have your continuous integration server run them on commit to verify any changes.
Correct Answer – B, E
Question 10 – You have a pool of application servers running on Compute Engine. You must provide a secure solution that requires as little configuration as possible and allows developers to easily access application logs for troubleshooting. How would you put the solution into action on GCP?
- A. Install the Stackdriver logging agent on each application server. Grant the developers the IAM Logs Viewer role so they can access Stackdriver and view logs.
- B. Install the Stackdriver logging agent on each application server. Grant the developers the IAM Private Logs Viewer role so they can access Stackdriver and view logs.
- C. Install the Stackdriver monitoring agent on each application server. Grant the developers the IAM Monitoring Viewer role so they can access Stackdriver and view metrics.
- D. Install the gsutil command-line tool on your application servers. Create a gsutil script to upload your application logs to a Cloud Storage bucket, and schedule it to run every 5 minutes with cron. Grant the developers the IAM Object Viewer role so they can view the logs in the bucket.
Correct Answer – B
Question 11 – You are responsible for the backend of a mobile phone game that is hosted on a Google Kubernetes Engine (GKE) cluster. Users’ HTTP requests are served by the application. You must implement a solution to reduce network costs. What are your options?
- A. Make the VPC a Shared VPC host project.
- B. Set your network services to Standard Tier.
- C. Set your Kubernetes cluster to be a Private Cluster.
- D. Set up a Google Cloud HTTP Load Balancer as the Ingress.
Correct Answer – B
Question 12 – You experienced a major service outage that impacted all service users for several hours. After several hours of incident management, the service was restored and user access was re-established. Following the Site Reliability Engineering recommended practices, you must provide an incident summary to relevant stakeholders. What should you start with?
- A. Contact each individual stakeholder to explain what occurred.
- B. Create a post-mortem report that will be distributed to stakeholders.
- C. Distribute the Incident State Document to all stakeholders.
- D. Require the responsible engineer to send an apology email to all stakeholders.
Correct Answer – B
Question 13 – Cloud Build is used to create your application images, which are then uploaded to Google Container Registry (GCR). You want to be able to deploy a specific version of your application based on the release version tagged in source control. What should you do after pushing the image?
- A. In the source control tag, include a reference to the image digest.
- B. Include the source control tag in the image name as a parameter.
- C. Incorporate the release version tag into the application image using Cloud Build.
- D. Match the image to the tag in source control using GCR digest versioning.
Correct Answer – C
Question 14 – You are conducting semi-annual capacity planning for your flagship service. Over the next six months, you anticipate a 10% month-over-month increase in service users. Your service is fully containerized and runs on Google Cloud Platform (GCP) on three zones with cluster autoscaler enabled, using a Google Kubernetes Engine (GKE) Standard regional cluster. You currently consume approximately 30% of your total deployed CPU capacity, and you require resilience against zone failure. You want to minimise the negative impact on your users as a result of this growth or zone failure, while avoiding unnecessary costs. How should you prepare for the anticipated growth?
- A. Check the maximum node pool size, enable a horizontal pod autoscaler, and then run a load test to confirm your expected resource requirements.
- B. Because you are using GKE with cluster autoscaler enabled, your cluster will scale automatically regardless of growth rate.
- C. Because you are only at 30% utilisation, you have significant headroom and will not need to add any additional capacity to keep up with this rate of growth.
- D. Proactively add 60% more node capacity to account for a six-month growth rate of 10%, and then run a load test to ensure you have enough capacity.
Correct Answer – A
Question 15 – Your application images are created and submitted to the Google Container Registry (GCR). You want to create an automated pipeline that deploys the application when the image is updated while reducing development time. What are your options?
- A. Start a Spinnaker pipeline with Cloud Build.
- B. Trigger a Spinnaker pipeline using Cloud Pub/Sub.
- C. Trigger Jenkins pipeline using a custom builder in Cloud Build.
- D. Use Cloud Pub/Sub to start a custom deployment service in Google Kubernetes Engine (GKE).
Correct Answer – B
Question 16 – You are in charge of the production deployment to a cluster of Google Kubernetes Engine (GKE). You want to ensure that only images created by your trusted CI/CD pipeline are deployed to production. What are your options?
- A. On the clusters, enable Cloud Security Scanner.
- B. On the Container Registry, enable Vulnerability Analysis.
- C. Create private clusters for the Kubernetes Engine clusters.
- D. Configure the Kubernetes Engine clusters to use Binary Authorization.
Correct Answer – D
Reference: https://codelabs.developers.google.com/codelabs/cloud-builder-gke-continuous-deploy/index.html#1
Question 17 – You provide support for an e-commerce application that is hosted on a large Google Kubernetes Engine (GKE) cluster that is both on-premises and hosted on the Google Cloud Platform. The app is made up of microservices that run in containers. You want to find out which containers are using the most CPU and memory. What are your options?
- A. Use Stackdriver Kubernetes Engine Monitoring.
- B. Collect and aggregate logs per container with Prometheus, then analyse the results in Grafana.
- C. Create custom metrics using the Stackdriver Monitoring API, then organise your containers using groups.
- D. Export application logs to BigQuery using Stackdriver Logging, aggregate logs per container, and then analyse CPU and memory consumption.
Correct Answer – A
Question 18 – Your company’s production systems are plagued by bugs, outages, and slowness. The production environment is where developers work on new features and bug fixes. Configuration and experiments are carried out in the production environment, resulting in user outages. Load testing is done in the production environment by testers, which often slows down the production systems. To reduce the number of bugs and outages in production and to allow testers to load test new features, you must redesign the environment. What are your options?
- A. In production, create an automated testing script to detect failures as soon as they occur.
- B. Set up a development environment with a smaller server capacity and restrict access to developers and testers only.
- C. Secure the production environment to prevent developers from modifying it, and schedule one controlled update per year.
- D. Set up a development environment for coding and a testing environment for configurations, experiments, and load testing.
Correct Answer – D
Question 19 – You support an application running on App Engine. The application is used globally and can be accessed from a variety of devices. You want to monitor the number of connections, and you are using Stackdriver Monitoring for App Engine. Which metric should you use?
- A. flex/connections/current
- B. tcp_ssl_proxy/new_connections
- C. tcp_ssl_proxy/open_connections
- D. flex/instance/connections/current
Correct Answer – D
Reference: https://cloud.google.com/monitoring/api/metrics_gcp
Question 20 – You provide support for a Compute Engine-deployed application. To store and retrieve data, the application connects to a Cloud SQL instance. Users report errors displaying database timeout messages after updating the application. The number of concurrent active users has remained consistent. You must determine the most likely cause of the database timeout. What are your options?
- A. Examine the Compute Engine instance’s serial port logs.
- B. Use Stackdriver Profiler to visualise the application’s resource utilisation.
- C. Determine whether there is an increased number of connections to the Cloud SQL instance.
- D. Use Cloud Security Scanner to determine whether your Cloud SQL is under attack from a Distributed Denial of Service (DDoS).
Correct Answer – C
Hurry up and start preparing for Google Professional Cloud DevOps Engineer (GCP) now!