Google Professional Cloud Architect Exam Questions Free
By itexambyte.com / 6 March 2024

Google Professional Cloud Architect Certification Exam Questions

Exam sections and weightings:
Section 1: Designing and planning a cloud solution architecture (24% of the exam)
Section 2: Managing and provisioning a solution infrastructure (15% of the exam)
Section 3: Designing for security and compliance (18% of the exam)
Section 4: Analyzing and optimizing technical and business processes (18% of the exam)
Section 5: Managing implementation (11% of the exam)
Section 6: Ensuring solution and operations reliability (14% of the exam)

1. A manufacturing plant's IoT devices continuously send data during operations. You need to process and analyze the incoming telemetry data. After processing, the data should be retained, but it will only be accessed once every month or two. Your CIO has issued a directive to incorporate managed services wherever possible. You want a cost-effective solution to process the incoming streams of data. What steps should you take to achieve this goal?
A) Ingest data with ClearBlade IoT Core, process it with Dataprep, and store it in a Coldline Cloud Storage bucket.
B) Ingest data with ClearBlade IoT Core, and then store it in BigQuery.
C) Ingest data with ClearBlade IoT Core, and then publish to Pub/Sub. Use Dataflow to process the data, and store it in a Nearline Cloud Storage bucket.
D) Ingest data with ClearBlade IoT Core, and then publish to Pub/Sub. Use BigQuery to process the data, and store it in a Standard Cloud Storage bucket.

2. A healthcare organization is migrating its on-premises infrastructure to Google Cloud. The organization wants to define a catalog of pre-approved resources for different departments to provision in the cloud. Additionally, they want to enforce compliance policies during the provisioning process. Which Google Cloud service can help achieve these requirements?
A) Google Cloud Resource Provisioning Framework
B) Google Cloud Service Directory
C) Google Cloud Service Catalog
D) Google Cloud Resource Manager

3. A media company is seeking assistance in expanding the reach of existing recorded video content to new audiences in emerging regions. Taking into account both the business and technical requirements of the company, what steps should be taken to achieve this goal?
A) Use Cloud CDN to cache the video content from HRL's existing public cloud provider.
B) Serve the video content directly from a multi-region Cloud Storage bucket.
C) Use Apigee Edge to cache the video content from HRL's existing public cloud provider.
D) Replicate the video content in Google Kubernetes Engine clusters in regions close to the fans.

4. You have several Compute Engine instances running NGINX and Tomcat for a web application. In your web server logs, many login failures come from a single IP address, which looks like a brute force attack. How can you block this traffic?
A) Edit the Compute Engine instances running your web application, and enable Google Cloud Armor. Create a Google Cloud Armor policy with a default rule action of "Allow." Add a new rule that specifies the IP address causing the login failures as the Condition, with an action of "Deny" and a deny status of "403," and accept the default priority (1000).
B) Ensure that an HTTP(S) load balancer is configured to send traffic to the backend Compute Engine instances running your web server. Create a Google Cloud Armor policy with a default rule action of "Deny." Add a new rule that specifies the IP address causing the login failures as the Condition, with an action of "Deny" and a deny status of "403," and accept the default priority (1000). Add the load balancer backend service's HTTP-backend as the target.
C) Ensure that an HTTP(S) load balancer is configured to send traffic to the backend Compute Engine instances running your web server.
Create a Google Cloud Armor policy with a default rule action of "Allow." Add a new rule that specifies the IP address causing the login failures as the Condition, with an action of "Deny" and a deny status of "403," and accept the default priority (1000). Add the load balancer backend service's HTTP-backend as the target.
D) Ensure that an HTTP(S) load balancer is configured to send traffic to your backend Compute Engine instances running your web server. Create a Google Cloud Armor policy using the instance's local firewall with a default rule action of "Allow." Add a new local firewall rule that specifies the IP address causing the login failures as the Condition, with an action of "Deny" and a deny status of "403," and accept the default priority (1000).

5. Your company's DevOps team used Cloud Source Repositories, Cloud Build, and Artifact Registry to successfully implement the build portion of an application's CI/CD process. However, the deployment process is erroring out. Initial troubleshooting shows that the runtime environment does not have access to the build images. You need to advise the team on how to resolve the issue. What could cause this problem?
A) The runtime environment does not have permissions to the Artifact Registry in your current project.
B) The Artifact Registry might be in a different project.
C) The runtime environment does not have permissions to Cloud Source Repositories in your current project.
D) You need to specify the Artifact Registry image by name.

6. Your client is mandated by law to adhere to the Payment Card Industry Data Security Standard (PCI-DSS). Although the client undergoes formal audits periodically, these audits may not suffice for continuous compliance.
To facilitate adherence to PCI-DSS requirements more seamlessly, the client seeks to proactively monitor for common violations and detect them early, without replacing the existing audit processes. What recommendations would you propose to help the client engage in continuous compliance and promptly identify violations?
A) Enable the Security Command Center (SCC) dashboard, asset discovery, and Security Health Analytics in the Premium tier. Export or view the PCI-DSS Report from the SCC dashboard's Compliance tab.
B) Enable the Security Command Center (SCC) dashboard, asset discovery, and Security Health Analytics in the Standard tier. Export or view the PCI-DSS Report from the SCC dashboard's Compliance tab.
C) Enable the Security Command Center (SCC) dashboard, asset discovery, and Security Health Analytics in the Premium tier. Export or view the PCI-DSS Report from the SCC dashboard's Vulnerabilities tab.
D) Enable the Security Command Center (SCC) dashboard, asset discovery, and Security Health Analytics in the Standard tier. Export or view the PCI-DSS Report from the SCC dashboard's Vulnerabilities tab.

7. Your organization aims to monitor the occupancy status of meeting rooms reserved for scheduled meetings. With 1000 meeting rooms distributed across 5 offices on 3 continents, each room is fitted with a motion sensor that transmits its status every second. To cater to the data ingestion requirements of this sensor network, the receiving infrastructure must be capable of handling potential inconsistencies in device connectivity. What type of solution should be devised to address this scenario?
A) Have devices poll for connectivity to Pub/Sub and publish the latest messages on a regular interval to a shared topic for all devices.
B) Have each device create a persistent connection to a Compute Engine instance and write messages to a custom application.
C) Have devices poll for connectivity to Cloud SQL and insert the latest messages on a regular interval to a device-specific table.
D) Have devices create a persistent connection to an App Engine application fronted by Cloud Endpoints, which ingests messages and writes them to Datastore.

8. You are designing a future-proof hybrid environment that will require network connectivity between Google Cloud and your on-premises environment. You want to ensure that the Google Cloud environment you are designing is compatible with your on-premises networking environment. What steps should be taken to achieve this compatibility?
A) Use the default VPC in your Google Cloud project. Use a Cloud VPN connection between your on-premises environment and Google Cloud.
B) Create a custom VPC in Google Cloud in auto mode. Use a Cloud VPN connection between your on-premises environment and Google Cloud.
C) Create a network plan for your VPC in Google Cloud that uses CIDR ranges that overlap with your on-premises environment. Use a Cloud Interconnect connection between your on-premises environment and Google Cloud.
D) Create a network plan for your VPC in Google Cloud that uses non-overlapping CIDR ranges with your on-premises environment. Use a Cloud Interconnect connection between your on-premises environment and Google Cloud.

9. A company, Skyhigh, is exploring database solutions to store the analytics data generated from its trial delivery operations. Currently relying on a small cluster of MongoDB NoSQL database servers, the company aims to transition to a managed NoSQL database service offering consistent low latency, seamless throughput scalability, and the capability to manage the expected petabytes of data as they expand into new markets. What steps should be taken in this scenario?
A) Create a Bigtable instance, extract the data from MongoDB, and insert the data into Bigtable.
B) Extract the data from MongoDB, and insert the data into BigQuery.
C) Extract the data from MongoDB. Insert the data into Firestore using Datastore mode.
D) Extract the data from MongoDB. Insert the data into Firestore using Native mode.

10. Employees at Star corporation will utilize Google Workspace. The existing on-premises network does not meet the necessary criteria for connecting to Google's public infrastructure. What steps should be taken in this situation?
A) Order a Partner Interconnect from a Google Cloud partner, and ensure that proper routes are configured.
B) Order a Dedicated Interconnect from a Google Cloud partner, and ensure that proper routes are configured.
C) Connect the on-premises network to Google's public infrastructure via a partner that supports Carrier Peering.
D) Connect the network to a Google point of presence, and enable Direct Peering.

11. A startup is interested in implementing a multi-layered security approach for their Compute Engine instances. What are some strategies that can be employed to enhance the security of Compute Engine instances for the startup?
A) Use labels to allow traffic only from certain sources and ports. Turn on Secure Boot and vTPM.
B) Use labels to allow traffic only from certain sources and ports. Use a Compute Engine service account.
C) Use network tags to allow traffic only from certain sources and ports. Turn on Secure Boot and vTPM.
D) Use network tags to allow traffic only from certain sources and ports. Use a Compute Engine service account.

12. A company has recently migrated its critical applications to Google Cloud Platform (GCP) to take advantage of its scalability and flexibility. The company wants to ensure business continuity in case of unexpected disasters or outages.
As part of their disaster recovery strategy, they have implemented the following:
• Regular data backups to Google Cloud Storage.
• Utilization of multiple GCP regions for redundancy.
• Implementation of Google Cloud's Traffic Director for load balancing across regions.
Which of the following statements regarding the company's disaster recovery strategy is most accurate?
A) Relying on multiple GCP regions alone is a comprehensive disaster recovery plan that covers all possible scenarios.
B) Using Traffic Director for load balancing across regions does not contribute to the company's disaster recovery efforts.
C) Storing backups solely on Google Cloud Storage is sufficient to guarantee data recovery in case of a disaster.
D) The company has implemented a multi-region strategy to ensure high availability and reduce the risk of data loss.

13. You are responsible for monitoring a critical application hosted on Google Cloud Platform. The application consists of multiple microservices running on Compute Engine instances. You need to set up monitoring and alerting to ensure the availability and performance of these services. Which combination of Google Cloud services would you use to achieve comprehensive monitoring, logging, and alerting for this application?
A) Cloud Monitoring, Cloud Logging, and Cloud Trace
B) Stackdriver Monitoring, Stackdriver Logging, and Stackdriver Error Reporting
C) Cloud Profiler, Cloud Debugger, and Cloud Trace
D) Cloud Monitoring, Cloud Logging, and Cloud Scheduler

14. A games studio wants you to make sure their new gaming platform is being operated according to Google best practices. You want to verify that Google-recommended security best practices are being met while also providing the operations teams with the metrics they need. What should you do? (Choose two)
A) Ensure that you aren't running privileged containers.
B) Ensure that you are using obfuscated Tags on workloads.
C) Ensure that you are using the native logging mechanisms.
D) Ensure that workloads are not using securityContext to run as a group.
E) Ensure that each cluster is running GKE metering so each team can be charged for their usage.

15. One of the healthcare firm's customers is an internationally renowned research and hospital facility. Many of their patients are well-known public personalities. Sources both inside and outside have tried many times to obtain health information on these patients for malicious purposes. The hospital requires that patient information stored in Cloud Storage buckets not leave the geographic areas in which the buckets are hosted. You need to ensure that information stored in Cloud Storage buckets in the "europe-west2" region does not leave that area. What should you do?
A) Enable Virtual Private Cloud Service Controls, and create a service perimeter around the Cloud Storage resources.
B) Encrypt the data in the application on-premises before the data is stored in the "europe-west2" region.
C) Assign the Identity and Access Management (IAM) "storage.objectViewer" role only to users and service accounts that need to use the data.
D) Create an access control list (ACL) that limits access to the bucket to authorized users only, and apply it to the buckets in the "europe-west2" region.

16. In your organization, there is a 3-tier web application running within the same Google Cloud Virtual Private Cloud (VPC). The web, API, and database tiers can scale independently. The desired network traffic flow should move from the web tier to the API tier and then to the database tier, without any direct traffic between the web and database tiers. How can you configure the network with minimal steps to achieve this setup?
A) Add each tier to a different subnetwork.
B) Set up software-based firewalls on individual VMs.
C) Add tags to each tier and set up routes to allow the desired traffic flow.
D) Add tags to each tier and set up firewall rules to allow the desired traffic flow.

17.
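As background for question 16: VPC firewall rules deny ingress by default, so allowing only web-to-API and API-to-database traffic via network tags automatically leaves no direct web-to-database path. A minimal Python sketch of that semantics follows; the tag names ("web", "api", "db") and port numbers are illustrative assumptions, not part of the question.

```python
# Deny-by-default model of a tag-based firewall setup for a 3-tier app.
# Tag names and ports are illustrative assumptions.
ALLOW_RULES = [
    {"source_tag": "web", "target_tag": "api", "port": 8080},
    {"source_tag": "api", "target_tag": "db", "port": 3306},
]

def is_allowed(source_tag: str, target_tag: str, port: int) -> bool:
    """VPC ingress semantics: traffic is denied unless an allow rule matches."""
    return any(
        rule["source_tag"] == source_tag
        and rule["target_tag"] == target_tag
        and rule["port"] == port
        for rule in ALLOW_RULES
    )

assert is_allowed("web", "api", 8080)     # web tier may reach the API tier
assert is_allowed("api", "db", 3306)      # API tier may reach the database
assert not is_allowed("web", "db", 3306)  # no direct web-to-database path
```

In practice each allow entry corresponds to a firewall rule created with something like 'gcloud compute firewall-rules create allow-web-to-api --source-tags=web --target-tags=api --allow=tcp:8080'.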
A financial services company operating on Google Cloud is required to comply with strict regulatory guidelines for disaster recovery planning. The company needs to ensure that data is replicated across geographically dispersed locations and that recovery time objectives (RTOs) are minimized. Which combination of Google Cloud services and features would best meet these requirements?
A) Leveraging Cloud Spanner for globally consistent databases, Cloud Storage for geo-redundant backups, and implementing a disaster recovery strategy with Cloud DNS for failover routing.
B) Using Cloud SQL for database replication, Cloud Storage for backups, and setting up a disaster recovery plan with CloudEndure for automated failover.
C) Establishing cross-region replication with Cloud Bigtable for high-throughput data storage, Cloud Filestore for NFS backups, and configuring a disaster recovery plan with Cloud VPN for secure connectivity.
D) Implementing multi-regional deployments with Cloud Memorystore for Redis for caching, Cloud SQL for data redundancy, and setting up a disaster recovery solution with Cloud Interconnect for private connectivity.

18. A healthcare company wants to connect one of their data centers to Google Cloud. The data center is in a remote location over 100 kilometers from a Google-owned point of presence. They can't afford new hardware, but their existing firewall can accommodate future throughput growth. They also shared these data points:
• Servers in their on-premises data center need to talk to Google Kubernetes Engine (GKE) resources in the cloud.
• Both on-premises servers and cloud resources are configured with private RFC 1918 IP addresses.
• The service provider has informed the customer that basic Internet connectivity is a best-effort service with no SLA.
You need to recommend a connectivity option. What should you recommend?
A) Provision Carrier Peering.
B) Provision a new Internet connection.
C) Provision a Partner Interconnect connection.
D) Provision a Dedicated Interconnect connection.

19. Your new software, hosted on Google Cloud, is in public beta, and you want to design meaningful service level objectives (SLOs) before the software becomes generally available. What should you do?
A) Define one SLO as 99.9% game server availability. Define the other SLO as less than 100-ms latency.
B) Define one SLO as 99% of HTTP requests returning a 2xx status code. Define the other SLO as 99% of requests returning within 100 ms.
C) Define one SLO as service availability that is the same as Google Cloud's availability. Define the other SLO as 100-ms latency.
D) Define one SLO as total uptime of the game server within a week. Define the other SLO as the mean response time of all HTTP requests that are less than 100 ms.

20. Your client established an Identity and Access Management (IAM) resource structure within Google Cloud during the startup phase. As the company has expanded, multiple departments and teams have emerged. To align with Google's recommended practices, you aim to propose a resource hierarchy. What steps should you take?
A) Keep all resources in one project, and use a flat resource hierarchy to reduce complexity and simplify management.
B) Use multiple projects with established trust boundaries, and change the resource hierarchy to reflect company organization.
C) Keep all resources in one project, but change the resource hierarchy to reflect company organization.
D) Use a flat resource hierarchy and multiple projects with established trust boundaries.

21. A shipping company's warehouse and inventory system, developed in Java and employing a microservices architecture within GKE, has encountered a perplexing issue. Seemingly at unpredictable intervals, specific requests experience a considerable 5-10x slowdown compared to their usual performance.
Despite exhaustive attempts by the development team to recreate the problem in testing environments, the root cause of this erratic behavior eludes identification. In light of this complex scenario, what steps should be taken to address this situation?
A) Create metrics in Cloud Monitoring for your microservices to test whether they are intermittently unavailable or slow to respond to HTTPS requests. Use Cloud Trace to determine which functions/methods in your application's code use the most system resources. Use Cloud Profiler to identify slow requests and determine which microservices/calls take the most time to respond.
B) Use Error Reporting to test whether your microservices are intermittently unavailable or slow to respond to HTTPS requests. Use Cloud Profiler to determine which functions/methods in your application's code use the most system resources. Use Cloud Trace to identify slow requests and determine which microservices/calls take the most time to respond.
C) Create metrics in Cloud Monitoring for your microservices to test whether they are intermittently unavailable or slow to respond to HTTPS requests. Use Cloud Profiler to determine which functions/methods in your application's code use the most system resources. Use Cloud Trace to identify slow requests and determine which microservices/calls take the most time to respond.
D) Use Error Reporting to test whether your microservices are intermittently unavailable or slow to respond to HTTPS requests. Use Cloud Trace to determine which functions/methods in your application's code use the most system resources. Use Cloud Profiler to identify slow requests and determine which microservices/calls take the most time to respond.

22. Anonymous users from all over the world access a public health information website hosted in an on-premises EHR data center. The servers that host this website are older, and users are complaining about sluggish response times.
There has also been a recent increase in distributed denial-of-service attacks against the website. The attacks always come from the same IP address ranges. EHR management has identified the public health information website as an easy, low-risk application to migrate to Google Cloud. You need to improve access latency and provide a security solution that will prevent the denial-of-service traffic from entering your Virtual Private Cloud (VPC) network. What should you do?
A) Deploy an external HTTP(S) load balancer, configure VPC firewall rules, and move the application onto Compute Engine virtual machines.
B) Deploy an external HTTP(S) load balancer, configure Google Cloud Armor, and move the application onto Compute Engine virtual machines.
C) Containerize the application and move it into Google Kubernetes Engine (GKE). Create a GKE service to expose the pods within the cluster, and set up a GKE network policy.
D) Containerize the application and move it into Google Kubernetes Engine (GKE). Create an internal load balancer to expose the pods outside the cluster, and configure Identity-Aware Proxy (IAP) for access.

23. The sales team of XYZ Corporation operates remotely and travels to various sites for their work. Irrespective of their whereabouts, these employees require access to web-based sales tools hosted in the XYZ data center. XYZ has decided to phase out its existing Virtual Private Network (VPN) infrastructure and transition to a BeyondCorp access model for enhanced security. Each sales representative possesses a Google Workspace account, which they utilize for single sign-on (SSO). What steps should you take to implement this transition effectively?
A) Create a Google group for the sales tool application, and upgrade that group to a security group.
B) Deploy an external HTTP(S) load balancer and create a custom Cloud Armor policy for the sales tool application.
C) For every sales employee who needs access to the sales tool application, give their Google Workspace user account the predefined App Engine Viewer role.
D) Create an Identity-Aware Proxy (IAP) connector that points to the sales tool application.

24. You are working with a client who is using Google Kubernetes Engine (GKE) to migrate applications from a virtual machine-based environment to a microservices-based architecture. Your client has a complex legacy application that stores a significant amount of data on the file system of its VM. You do not want to re-write the application to use an external service to store the file system data. What should you do?
A) In Cloud Shell, create a YAML file defining your StatefulSet called statefulset.yaml. Create a StatefulSet in GKE by running the command kubectl apply -f statefulset.yaml
B) In Cloud Shell, create a YAML file defining your Container called build.yaml. Create a Container in GKE by running the command gcloud builds submit --config build.yaml
C) In Cloud Shell, create a YAML file defining your Deployment called deployment.yaml. Create a Deployment in GKE by running the command kubectl apply -f deployment.yaml
D) In Cloud Shell, create a YAML file defining your Pod called pod.yaml. Create a Pod in GKE by running the command kubectl apply -f pod.yaml

25. You need to grant a user (user@example.com) the Editor role (roles/editor) on your GCP project using the gcloud CLI. Which of the following commands would you use to update the IAM policy and add the user with the Editor role?
A) gcloud projects add-iam-policy-binding my-project --member=user:user@example.com --role=roles/editor
B) gsutil iam ch user:user@example.com:roles/editor gs://my-project
C) gcloud update --add_iam_policy_binding my-project:user=user@example.com,role=roles/editor
D) gcloud auth login user@example.com --role=roles/editor

26.
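As background for question 25: add-iam-policy-binding performs a read-modify-write of the project's IAM policy, appending the member to the binding for the given role (and creating the binding if it does not exist). A minimal Python sketch of that merge logic follows; the sample policy document is an invented illustration, not a real project's policy.

```python
def add_iam_policy_binding(policy: dict, member: str, role: str) -> dict:
    """Mimic the merge performed by `gcloud projects add-iam-policy-binding`:
    append the member to the binding for `role`, creating it if absent."""
    for binding in policy.setdefault("bindings", []):
        if binding["role"] == role:
            if member not in binding["members"]:
                binding["members"].append(member)
            return policy
    policy["bindings"].append({"role": role, "members": [member]})
    return policy

# Invented example policy; real policies also carry an `etag` used for
# optimistic concurrency control during the read-modify-write cycle.
policy = {"bindings": [{"role": "roles/owner",
                        "members": ["user:admin@example.com"]}]}
add_iam_policy_binding(policy, "user:user@example.com", "roles/editor")
print(policy["bindings"][1])  # the newly created roles/editor binding
```

Note that the merge is idempotent: re-running the same binding command leaves the policy unchanged, which is why option A is safe to repeat.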
You are working at Building Block, a software development company that currently hosts its existing application on Ubuntu Linux VMs in an on-premises hypervisor. The company wants to migrate the application to Google Cloud with minimal refactoring. What should you do?
A) Set up a Google Kubernetes Engine (GKE) cluster, and then create a deployment with an autoscaler.
B) Isolate the core features that the application provides. Use Cloud Run to deploy each feature independently as a microservice.
C) Use Dedicated or Partner Interconnect to connect the on-premises network where your application is running to your VPC. Configure an endpoint for a global external HTTP(S) load balancer that connects to the existing VMs.
D) Write Terraform scripts to deploy the application as Compute Engine instances.

27. A company is planning to migrate its on-premises data warehouse to Google BigQuery for better scalability and performance. The company wants to optimize costs while ensuring minimal impact on existing analytics processes. Which approach would be most effective in achieving these goals?
A) Use BigQuery flat-rate pricing to ensure predictable costs for heavy workloads.
B) Implement BigQuery reservations to secure capacity and save costs for steady-state workloads.
C) Leverage BigQuery slots to dynamically allocate resources based on query demand.
D) Utilize BigQuery Data Transfer Service to minimize data transfer costs from the on-premises data warehouse.

28. Pixel Solution regularly updates its IoT software every 4 to 6 weeks. Despite the majority of releases being successful, you have encountered some instances where problematic releases resulted in the unavailability of the IoT software, requiring software developers to roll back the release. To enhance the reliability of software releases and avoid similar issues in the future, what steps should you take?
A) Adopt a "waterfall" development process. Maintain the current release schedule.
Ensure that documentation explains how all the features interact. Ensure that the entire application is tested in a staging environment before the release. Ensure that the process to roll back the release is documented. Use Cloud Monitoring, Cloud Logging, and Cloud Alerting to ensure visibility.
B) Adopt a "waterfall" development process. Maintain the current release schedule. Ensure that documentation explains how all the features interact. Automate testing of the application. Ensure that the process to roll back the release is well documented. Use Cloud Monitoring, Cloud Logging, and Cloud Alerting to ensure visibility.
C) Adopt an "agile" development process. Maintain the current release schedule. Automate build processes from a source repository. Automate testing after the build process. Use Cloud Monitoring, Cloud Logging, and Cloud Alerting to ensure visibility. Deploy the previous version if problems are detected and you need to roll back.
D) Adopt an "agile" development process. Reduce the time between releases as much as possible. Automate the build process from a source repository, which includes versioning and self-testing. Use Cloud Monitoring, Cloud Logging, and Cloud Alerting to ensure visibility. Use a canary deployment to detect issues that could cause a rollback.

29. Your company wants to try out the cloud with low risk. They intend to archive around 100 TB of log data to the cloud to explore the serverless analytics capabilities offered there, all while using this data for long-term disaster recovery purposes. What are the two recommended steps they should follow? (Select two)
A) Load logs into BigQuery.
B) Load logs into Cloud SQL.
C) Import logs into Cloud Logging.
D) Insert logs into Cloud Bigtable.
E) Upload log files into Cloud Storage.

30. You are working in a mixed environment of VMs and Kubernetes. Some of your resources are on-premises, and some are in Google Cloud.
Using containers as a part of your CI/CD pipeline has sped up releases significantly. You want to start migrating some of those VMs to containers so you can get similar benefits. You want to automate the migration process where possible. What should you do?
A) Use Migrate for Anthos to automate the creation of Compute Engine instances to import VMs and convert them to containers.
B) Use Migrate for Compute Engine to import VMs and convert them to containers.
C) Manually create a GKE cluster, and then use Migrate for Anthos to set up the cluster, import VMs, and convert them to containers.
D) Manually create a GKE cluster. Use Cloud Build to import VMs and convert them to containers.

31. A development team is preparing for a major update to a critical application running on Google Cloud Compute Engine instances. Which testing approach would be most effective in ensuring minimal downtime and optimal performance during the update process?
A) Blue-green deployment with automated canary analysis using Spinnaker
B) A/B testing using Google Cloud Functions and Cloud Pub/Sub
C) Chaos engineering with Google Cloud Monitoring and Cloud Logging
D) Performance testing with JMeter and a manual rollback strategy

32. You have deployed a Flask web application named test.py written in Python using Cloud Run. While the application performed as expected in testing and staging environments, upon deployment to the production environment, product search results displayed items that should have been filtered out based on user preferences. The developer suspects that the issue may be linked to the 'user.productFilter' variable, either being unset or incorrectly evaluated. You aim to gain visibility into the situation while minimizing user impact, considering this is not a critical bug. What steps should be taken to address this situation?
A) Use ssh to connect to the Compute Engine instance where Cloud Run is running.
Run the command 'python3 -m pdb test.py' to debug the application.
B) Use ssh to connect to the Compute Engine instance where Cloud Run is running. Use the command 'pip install google-python-cloud-debugger' to install Cloud Debugger. Use the 'gcloud debug' command to debug the application.
C) Modify the Dockerfile for the Cloud Run application. Change the RUN command to 'python3 -m pdb /test.py'. Modify the script to import pdb. Deploy to Cloud Run as a canary build.
D) Modify the Dockerfile for the Cloud Run application. Add 'RUN python3 -m pip install snapshot-dbg-cli' to the Dockerfile. Modify the script to import snapshot-dbg-cli. Use 'snapshot-dbg-cli list_debuggees' to begin the debugging process.

33. To implement load balancing for a web-based application with multiple backends in different regions, you aim to route traffic to the closest backend to the end user and also to different backends based on the accessed URL. Which of the following methods could achieve this?
A) The request is received by the global external HTTP(S) load balancer. A global forwarding rule sends the request to a target proxy, which checks the URL map and selects the backend service. The backend service sends the request to Compute Engine instance groups in multiple regions.
B) The request is matched by a URL map and then sent to a global external HTTP(S) load balancer. A global forwarding rule sends the request to a target proxy, which selects a backend service. The backend service sends the request to Compute Engine instance groups in multiple regions.
C) The request is received by the SSL proxy load balancer, which uses a global forwarding rule to check the URL map, then sends the request to a backend service. The request is processed by Compute Engine instance groups in multiple regions.
D) The request is matched by a URL map and then sent to an SSL proxy load balancer.
A global forwarding rule sends the request to a target proxy, which selects a backend service and sends the request to Compute Engine instance groups in multiple regions. 34 / 50 34. The database administration team has asked you to help them improve the performance of their new database server running on Google Compute Engine. The database is used for importing and normalizing the company’s performance statistics. It is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with an 80 GB zonal SSD persistent disk, and they can't restart the VM until the next maintenance event. What should they change to get better performance from this system as soon as possible and in a cost-effective manner? A) Increase the virtual machine’s memory to 64 GB. B) Dynamically resize the SSD persistent disk to 500 GB. C) Create a new virtual machine running PostgreSQL. D) Migrate their performance metrics warehouse to BigQuery. 35 / 50 35. You are a DevOps engineer responsible for managing a healthcare application hosted on Google Cloud Platform (GCP). The application handles sensitive patient data and must comply with strict regulatory requirements for data security and privacy. As part of your role, you need to evaluate the quality control measures implemented in the GCP environment to ensure data integrity and security. When evaluating quality control measures in Google Cloud for the healthcare application, which of the following strategies would be most effective in ensuring data integrity and security? A) Implementing automated testing scripts to validate data encryption at rest and in transit. B) Conducting regular manual reviews of access controls and permissions for cloud resources. C) Utilizing third-party monitoring tools to track system performance and resource utilization. D) Performing periodic penetration testing to identify and remediate security vulnerabilities. 36 / 50 36.
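A note on the disk-resize scenario in question 34 above: zonal persistent disk performance scales with provisioned size, and a resize takes effect without restarting the VM. The sketch below assumes a hypothetical disk name (db-data-disk) and zone, and the per-GB throughput rate used is an approximation for illustration only; check current Google documentation for exact figures.

```shell
# The resize itself requires an authenticated gcloud session, so it is
# shown as comments (hypothetical disk name and zone):
#   gcloud compute disks resize db-data-disk --zone=us-central1-a --size=500GB
#   sudo resize2fs /dev/sdb1   # then grow the filesystem inside the VM

# Why this helps: SSD persistent disk throughput scales at roughly
# 0.48 MB/s per provisioned GB (assumed approximate rate).
for size_gb in 80 500; do
  echo "${size_gb} GB -> ~$(( size_gb * 48 / 100 )) MB/s sustained read"
done
```

Resizing from 80 GB to 500 GB therefore raises the disk's throughput ceiling several-fold without the restart that a memory or machine-type change would require.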
You are the data compliance officer for Codehard Games and must protect customers' personally identifiable information (PII). Codehard Games wants to make sure they can generate anonymized usage reports about their new game and delete PII data after a specific period of time. The solution should have minimal cost. You need to ensure compliance while meeting business and technical requirements. What should you do? A) Archive audit logs in Cloud Storage, and manually generate reports. B) Write a Cloud Logging filter to export specific date ranges to Pub/Sub. C) Archive audit logs in BigQuery, and generate reports using Google Data Studio. D) Archive user logs on a locally attached persistent disk, and cat them to a text file for auditing. 37 / 50 37. You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database backend. You want to store the credentials securely. Where should you store the credentials? A) In the source code B) In an environment variable C) In Google Cloud Secret Manager D) In a config file that has restricted access through ACLs 38 / 50 38. Your task involves setting up Virtual Private Cloud (VPC) Service Controls for Startup. Startup aims to permit Cloud Shell usage for its developers while ensuring that they do not have full access to managed services. Balancing these opposing objectives with Startup's business needs is essential. What actions would you recommend to address these challenges effectively? A) Use VPC Service Controls for the entire platform. B) Prioritize VPC Service Controls implementation over Cloud Shell usage for the entire platform. C) Include all developers in an access level associated with the service perimeter, and allow them to use Cloud Shell. D) Create a service perimeter around only the projects that handle sensitive data, and do not grant your developers access to it. 39 / 50 39.
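An aside on the credential-storage question (37) above: Secret Manager keeps credentials out of source code, environment variables, and config files, and each microservice's service account can be granted only the accessor role. A minimal sketch with hypothetical secret and service-account names (these commands require an authenticated gcloud session and are not verified here):

```shell
# Create the secret and add the credential as the first version:
printf 's3cr3t-db-pass' | gcloud secrets create db-creds --data-file=-

# Grant one microservice's service account read access only:
gcloud secrets add-iam-policy-binding db-creds \
  --member='serviceAccount:svc-orders@my-project.iam.gserviceaccount.com' \
  --role='roles/secretmanager.secretAccessor'

# At runtime, the service fetches the current value:
gcloud secrets versions access latest --secret=db-creds
```

The design choice here is least privilege: each of the 30 microservices reads only the secrets it is explicitly bound to, and rotation is a matter of adding a new secret version.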
You are the data compliance officer for an MNC and must protect customers' personally identifiable information (PII), like credit card information. The company wants to personalize product recommendations for its large industrial customers. You need to respect data privacy and deliver a solution. What should you do? A) Use AutoML to provide data to the recommendation service. B) Process PII data on-premises to keep the private information more secure. C) Manually build, train, and test machine learning models to provide product recommendations anonymously. D) Use the Cloud Data Loss Prevention (DLP) API to provide data to the recommendation service. 40 / 50 40. Your team is working on an application that utilizes Cloud Bigtable for its high throughput and scalability. To ensure changes do not break existing functionality, you plan to integrate the Cloud Bigtable Emulator into your Continuous Integration/Continuous Deployment (CI/CD) pipeline. Which of the following approaches is the most effective way to integrate the Google Cloud Bigtable Emulator into your CI/CD pipeline for automated testing? A) Install the Cloud Bigtable Emulator on each developer's machine and require manual testing before merging any code changes. B) Configure your CI/CD pipeline to start the Cloud Bigtable Emulator before running integration tests, ensuring tests use the emulator by setting the appropriate environment variable. C) Modify the application code to check for a CI/CD environment flag at runtime and automatically switch to using the Cloud Bigtable Emulator. D) Create a permanent Cloud Bigtable instance in GCP to be exclusively used by the CI/CD pipeline for testing purposes. 41 / 50 41. You have been approached by a client who has developed a secure messaging application. This application is built on open source technology and consists of two components. The first component is a web application, developed in Go, which handles user registration and IP address authorization.
The second component is an encrypted chat protocol that utilizes TCP to communicate with the backend chat servers running Debian. The application is designed to terminate a user's session if their IP address does not match the registered IP address. The client expects the number of users to fluctuate significantly throughout the day and wants the application to be easily scalable to meet the demand. What steps should you take to address their requirements? A) Deploy the web application using the App Engine standard environment with a global external HTTP(S) load balancer and a network endpoint group. Use an unmanaged instance group for the backend chat servers. Use an external network load balancer to load-balance traffic across the backend chat servers. B) Deploy the web application using the App Engine flexible environment with a global external HTTP(S) load balancer and a network endpoint group. Use an unmanaged instance group for the backend chat servers. Use an external network load balancer to load-balance traffic across the backend chat servers. C) Deploy the web application using the App Engine standard environment with a global external HTTP(S) load balancer and a network endpoint group. Use a managed instance group for the backend chat servers. Use a global SSL proxy load balancer to load-balance traffic across the backend chat servers. D) Deploy the web application using the App Engine standard environment with a global external HTTP(S) load balancer and a network endpoint group. Use a managed instance group for the backend chat servers. Use an external network load balancer to load-balance traffic across the backend chat servers. 42 / 50 42. Hackbox, a software development company, wants to streamline the process of releasing new applications. They aim to establish an automation pipeline that will enable them to efficiently develop, test, and deploy their applications. In what order should the pipeline steps be performed? A) Set up a source code repository. Run unit tests. Check in code. Deploy.
Build a Docker container. B) Check in code. Set up a source code repository. Run unit tests. Deploy. Build a Docker container. C) Set up a source code repository. Check in code. Run unit tests. Build a Docker container. Deploy. D) Run unit tests. Deploy. Build a Docker container. Check in code. Set up a source code repository. 43 / 50 43. You set up an autoscaling managed instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address. You have verified that the appropriate web response is coming from each instance using the curl command. You want to ensure that the backend is configured correctly. What should you do? A) Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer. B) Assign a public IP to each instance, and configure a firewall rule to allow the load balancer to reach the instance public IP. C) Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group. D) Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination. 44 / 50 44. TechWave Solutions is developing a service for integrating social media platforms using Google Cloud. As a manager with no technical background, Mahesh is focused on keeping the project within its allocated budget and swiftly addressing any unforeseen spikes in costs. Your responsibility involves configuring both access permissions and billing arrangements for this project. What should you do? A) Assign the predefined Billing Account Administrator role to Mahesh. Create a project budget. Configure billing alerts to be sent to the Billing Administrator.
Use resource quotas to cap how many resources can be deployed. B) Assign the predefined Billing Account Administrator role to Mahesh. Create a project budget. Configure billing alerts to be sent to the Project Owner. Use resource quotas to cap how much money can be spent. C) Use the predefined Billing Account Administrator role for the Billing Administrator group, and assign Mahesh to the group. Create a project budget. Configure billing alerts to be sent to the Billing Administrator. Use resource quotas to cap how many resources can be deployed. D) Use the predefined Billing Account Administrator role for the Billing Administrator group, and assign Mahesh to the group. Create a project budget. Configure billing alerts to be sent to the Billing Account Administrator. Use resource quotas to cap how much money can be spent. 45 / 50 45. A software company's developers have developed a new application. Initially, the application was set to run on Compute Engine instances with 15 GB of RAM and 4 CPUs. These instances stored data locally. However, after several months of running the application, historical data shows that it now requires 30 GB of RAM. Management at the software company is looking to reduce costs. What should you do? A) Stop the instance and then use the command gcloud compute instances set-machine-type VM_NAME --machine-type e2-custom-4-30720. Start the instance again with the preemptible metadata set to true. B) Stop the instance and then use the command gcloud compute instances set-machine-type VM_NAME --machine-type e2-custom-4-30720. Start the instance again. C) Stop the instance and then use the command gcloud compute instances set-machine-type VM_NAME --machine-type e2-standard-8. Start the instance again. D) Stop the instance and then use the command gcloud compute instances set-machine-type VM_NAME --machine-type e2-standard-8. Start the instance again with the preemptible metadata set to true. 46 / 50 46.
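An aside on the health-check question (43) above: HTTP(S) load balancer health checks originate from Google's published source ranges, not from client IPs, so backends without public addresses must still allow those ranges or the instances will be marked unhealthy and recreated. A sketch with hypothetical network and tag names (requires an authenticated gcloud session; not verified here):

```shell
# Allow load balancer health checks to reach the backend instances.
# 130.211.0.0/22 and 35.191.0.0/16 are Google's documented health-check
# source ranges; verify them against the current documentation.
gcloud compute firewall-rules create allow-lb-health-checks \
  --network=default \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --target-tags=web-backend \
  --allow=tcp:80,tcp:443
```

With this rule in place, the curl test from inside the network and the health check from Google's ranges both succeed, and the instance group stops cycling VMs.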
Your next task on a new project is to establish a Dedicated Interconnect connecting two data centers. To guarantee that your resources are deployed only in regions where your data centers reside, you must avoid any IP address overlaps that could cause conflicts during the interconnect setup. You prefer to use RFC 1918 class B address space. What steps should you take to achieve this objective? A) Create a new project, delete the default VPC network, set up an auto mode VPC network, and then use the default 10.x.x.x network range to create subnets in your desired regions. B) Create a new project, delete the default VPC network, set up the network in custom mode, and then use IP addresses in the 192.168.x.x address range to create subnets in your desired zones. Use VPC Network Peering to connect the zones in the same region to create regional networks. C) Create a new project, leave the default network in place, and then use the default 10.x.x.x network range to create subnets in your desired regions. D) Create a new project, delete the default VPC network, set up a custom mode VPC network, and then use IP addresses in the 172.16.x.x address range to create subnets in your desired regions. 47 / 50 47. Your customer is moving their corporate applications to Google Cloud. The security team wants detailed visibility of all resources in the organization. You use Resource Manager to set yourself up as the Organization Administrator. Which Identity and Access Management (IAM) roles should you give to the security team while following Google recommended practices? A) Organization viewer, Project owner B) Organization viewer, Project viewer C) Organization administrator, Project browser D) Project owner, Network administrator 48 / 50 48. Symphony Systems operates a complex application hosted on a Compute Engine instance within the Google Cloud ecosystem.
The application demands seamless access to multiple Google Cloud services for its functionality. However, Symphony Systems adheres to stringent security protocols and aims to avoid storing any sensitive credentials directly on the VM instance. In this intricate scenario, the challenge lies in establishing a secure mechanism that grants the application the necessary permissions to interact with various Google Cloud services without compromising the system's integrity or exposing sensitive credentials. How should Symphony Systems strategically address this complex scenario while maintaining the security and efficiency of its operations? A) Create a service account for each of the services the VM needs to access. Associate the service accounts with the Compute Engine instance. B) Create a service account and assign it the project owner role, which enables access to any needed service. C) Create a service account for the instance. Use Access scopes to enable access to the required services. D) Create a service account with one or more predefined or custom roles, which give access to the required services. 49 / 50 49. Melody Marketplace's user account management app enables users to delete their accounts at their convenience. In addition, the company offers a generous 60-day return policy for users. The customer service team aims to ensure that they can process refunds or replacements for items, even if a customer's account has been deleted. What should you do? A) Temporarily disable the account for 30 days. Export account information to Cloud Storage, and enable lifecycle management to delete the data in 60 days. B) Ensure that the user clearly understands that after they delete their account, all their information will also be deleted. Remind them to download a copy of their order history and account information before deleting their account. Have the support agent copy any open or recent orders to a shared spreadsheet.
C) Restore a previous copy of the user information database from a snapshot. Have a database administrator capture needed information about the customer. D) Disable the account. Export account information to Cloud Storage. Have the customer service team permanently delete the data after 30 days. 50 / 50 50. In order to optimize expenses, the Engineering Director has mandated that all developers migrate their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud. These resources undergo frequent start/stop events throughout the day and need to maintain their state. Your task is to devise a plan for running the development environment on Google Cloud while ensuring the finance department has clear visibility into the costs. Which two steps should you follow? (Choose two) A) Use persistent disks to store the state. Start and stop the VM as needed. B) Use the "gcloud --auto-delete" flag on all persistent disks before stopping the VM. C) Apply a VM CPU utilization label and include it in the BigQuery billing export. D) Use BigQuery billing export and labels to relate cost to groups. E) Store all state in a Local SSD, snapshot the persistent disks, and terminate the VM.
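A closing note on the final question: resource labels flow into the BigQuery billing export, which is what lets finance group cost by team or environment while persistent disks preserve state across stop/start cycles. A sketch with hypothetical project, dataset, table, and label names (requires an authenticated session; the billing export table name is a placeholder):

```shell
# Label a dev VM so its cost can be grouped in the billing export;
# its persistent disk keeps state while the VM is stopped.
gcloud compute instances add-labels dev-vm-1 \
  --zone=us-central1-a --labels=team=platform,env=dev

# In the BigQuery billing export, group cost by the label:
bq query --use_legacy_sql=false '
  SELECT l.value AS team, ROUND(SUM(cost), 2) AS total_cost
  FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`, UNNEST(labels) AS l
  WHERE l.key = "team"
  GROUP BY team'
```

Stopped VMs are billed only for attached disks and reserved IPs, so the start/stop pattern plus labeled billing export covers both halves of the question: state retention and cost visibility.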