This time, more questions were about architecture, CI/CD, DevOps, databases, infrastructure, and networking, and fewer about code.
- You want to deploy a new revision of a microservice to GKE as a Deployment. You are not sure whether the new version will have issues. You want to minimize the impact and roll back if issues occur. Which practice do you adopt? 1) Configure a PodDisruptionBudget at 80% on the Deployment. 2) Configure a Horizontal Pod Autoscaler with minimum = 0 on the Deployment. 3) Convert the Deployment to a StatefulSet and configure a PodDisruptionBudget at 80%. 4) Convert the Deployment to a StatefulSet and configure a Horizontal Pod Autoscaler with minimum = 0. I selected 1).
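If option 1 is the intent, a minimal PodDisruptionBudget sketch that keeps at least 80% of replicas available during voluntary disruptions (the name and label are placeholders):
```
# Hypothetical example: keep >= 80% of pods available during
# voluntary disruptions such as evictions and node drains.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-microservice-pdb
spec:
  minAvailable: "80%"
  selector:
    matchLabels:
      app: my-microservice
```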
- About 5 questions were about Binary Authorization. 2 were about creating Binary Authorization policies to allow only attested images into production GKE clusters, but not development or test GKE clusters. Refer to the following:
  - specific rules: each cluster-specific rule is specified in a clusterAdmissionRules node.
  - attestations: understand how to use Cloud KMS to create attestors, how attestors create attestations, and where attestations are stored (a gcloud sketch follows at the end of this Binary Authorization section).
  - know about the options in the following partial policy:
```
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
  - projects/PROJECT_ID/attestors/built-by-cloud-build
name: projects/PROJECT_ID/policy
```
- The question was about using Google Cloud Artifact Registry's container scanning for vulnerabilities (CVEs). You already have Cloud Build steps to build the container, push it to Artifact Registry, and deploy to GKE. You want to add a Cloud Build step that deploys only images without detected CVEs to the GKE cluster. The best option is to use Artifact Registry's Artifact Analysis and vulnerability scanning feature.
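For illustration, a minimal sketch of such a gate following the On-Demand Scanning pattern (image path, repo, cluster, and step layout are placeholder assumptions, not the exam's wording):
```
steps:
# Build the image locally
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'us-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA', '.']
# Run an on-demand vulnerability scan and save the scan ID
- name: gcr.io/google.com/cloudsdktool/cloud-sdk
  entrypoint: /bin/bash
  args:
  - -c
  - |
    gcloud artifacts docker images scan \
      us-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA \
      --format='value(response.scan)' > /workspace/scan_id.txt
# Fail the build if any CRITICAL vulnerability was reported
- name: gcr.io/google.com/cloudsdktool/cloud-sdk
  entrypoint: /bin/bash
  args:
  - -c
  - |
    gcloud artifacts docker images list-vulnerabilities $(cat /workspace/scan_id.txt) \
      --format='value(vulnerability.effectiveSeverity)' | \
      if grep -Fxq CRITICAL; then echo 'Vulnerability check failed' && exit 1; fi
# Only clean images reach the registry and the cluster
- name: gcr.io/cloud-builders/docker
  args: ['push', 'us-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA']
- name: gcr.io/cloud-builders/gke-deploy
  args: ['run', '--image=us-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA', '--cluster=my-cluster', '--location=us-central1']
```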
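On the attestations sub-question above, a minimal gcloud sketch, assuming a Cloud KMS asymmetric signing key and placeholder names (attestations are stored as Artifact Analysis occurrences attached to the attestor's note):
```
# Create a KMS key for signing attestations
gcloud kms keyrings create binauthz --location=us-central1
gcloud kms keys create attestor-key --keyring=binauthz --location=us-central1 \
  --purpose=asymmetric-signing --default-algorithm=ec-sign-p256-sha256

# Create the attestor against an existing Artifact Analysis note
gcloud container binauthz attestors create built-by-cloud-build \
  --attestation-authority-note=built-by-cloud-build-note \
  --attestation-authority-note-project=PROJECT_ID

# Register the KMS public key with the attestor
gcloud container binauthz attestors public-keys add \
  --attestor=built-by-cloud-build \
  --keyversion-project=PROJECT_ID --keyversion-location=us-central1 \
  --keyversion-keyring=binauthz --keyversion-key=attestor-key --keyversion=1

# Sign and create an attestation for a specific image digest
gcloud beta container binauthz attestations sign-and-create \
  --artifact-url='us-docker.pkg.dev/PROJECT_ID/my-repo/my-app@sha256:DIGEST' \
  --attestor=built-by-cloud-build --attestor-project=PROJECT_ID \
  --keyversion-project=PROJECT_ID --keyversion-location=us-central1 \
  --keyversion-keyring=binauthz --keyversion-key=attestor-key --keyversion=1
```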
- The question was about an existing GKE deployment accessing 2 other GKE services in 2 different GKE clusters. The traffic needs to be encrypted with mTLS. How do you configure the microservice to call the 2 GKE services? I selected the option of configuring Cloud Service Mesh with proxy sidecars and VPC firewall rules to allow source IPs in the GKE clusters' service IP ranges. I did not select the option of a proxyless gRPC implementation.
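For context, enforcing mTLS in Cloud Service Mesh (with the Istio APIs) comes down to a PeerAuthentication policy; a minimal sketch with a placeholder namespace:
```
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-namespace
spec:
  mtls:
    mode: STRICT  # reject any traffic that is not mutual TLS
```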
- You need to share videos and image files in Cloud Storage buckets with a few internal users and external users. Files may have an expiry datetime. How do you develop the application to implement it? I selected signed URLs for the storage objects.
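A quick sketch of generating a time-limited signed URL (bucket, object, and key file are placeholders; signing requires a service account key or impersonation):
```
# Grant read access to the object for 12 hours, then the URL expires
gcloud storage sign-url gs://my-media-bucket/videos/intro.mp4 \
  --private-key-file=sa-key.json --duration=12h
```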
- Multiple questions were about e-commerce use cases. 2 questions were about order management. You've developed multiple microservices to process orders. You want to orchestrate the invocation of the microservices based on the results of prior invocations. You want to monitor the workflow's progress and status. What's the solution with minimal development effort and the least infrastructure management overhead? I selected orchestrating with Workflows and implementing conditional jumps.
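A minimal Workflows sketch of a conditional jump driven by a prior step's result (service URLs and step names are invented):
```
main:
  steps:
  - checkInventory:
      call: http.get
      args:
        url: https://inventory-service.example.com/check
      result: inventory
  - decide:
      switch:
      - condition: ${inventory.body.inStock == true}
        next: chargePayment
      next: backorder
  - chargePayment:
      call: http.post
      args:
        url: https://payment-service.example.com/charge
      next: end
  - backorder:
      call: http.post
      args:
        url: https://order-service.example.com/backorder
```
Each execution's progress and status are then visible in the Workflows console.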
- 3 questions were about Apigee. 1 was about Apigee API Analytics: share the API metrics data with other teams by granting the Apigee Analytics Agent IAM role on the environment. Don't grant the Apigee Organization Admin IAM role on the environment group.
- How do you design Apigee environment groups and environments for an e-commerce use case with dev and test environments for the storefront and inventory microservices? How do you use key value maps in environments in API proxies across dev and test? Refer to the environment group example in the Apigee documentation.
- How do you configure Apigee for API consumers at 2 different tiers, basic and premium? Options using policies like Spike Arrest are wrong. Create an API key for the API product and the developer app where you define the tiers, and configure a Quota policy for each tier.
- You are using Cloud Bigtable for an e-commerce application. The current app profile uses single-cluster routing. The Bigtable instance does not have HA enabled, and failover is manual. The solution is to enable HA (replication) and change the app profile to use multi-cluster routing to any cluster.
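A sketch of that change with gcloud (instance, cluster, and app profile IDs are placeholders):
```
# Add a second cluster to the instance to enable replication (HA)
gcloud bigtable clusters create my-cluster-dr --instance=my-instance \
  --zone=us-east1-b --num-nodes=3

# Switch the app profile to multi-cluster routing for automatic failover
gcloud bigtable app-profiles update my-app-profile --instance=my-instance --route-any
```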
- 2 questions were about Firestore. The key is to understand the offline data feature in an application: when the device comes back online, it synchronizes with Cloud Firestore to get the changes. Among the answer options, Firestore was the only ACID database that supports this kind of offline feature and synchronization across multiple devices.
- At least 2 questions were about using the Cloud SQL and AlloyDB Auth Proxies to enable IAM authentication and encryption to the database instance. The Auth Proxies don't create a new network connection path, so the wrong answers assumed the proxy would establish the network path to database instances with private IPs.
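For example, a minimal Cloud SQL Auth Proxy invocation with automatic IAM authentication (the instance connection name is a placeholder); the proxy rides on connectivity that already exists, it does not create a path to a private IP by itself:
```
# Clients connect to 127.0.0.1:5432; the proxy handles TLS and IAM auth
./cloud-sql-proxy --auto-iam-authn --port 5432 \
  my-project:us-central1:my-postgres-instance
```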
- You have transaction and sales data in a Cloud SQL instance at an e-commerce company. The data science team runs SQL queries to create reports against the production Cloud SQL for PostgreSQL database, which has degraded database performance and caused connection timeouts at the clients. I selected replicating data from Cloud SQL to a BigQuery dataset using Datastream.
- Be familiar with Cloud Build; about 10 questions were around it. You want to configure a CI pipeline using Cloud Build. Developers push code to a Cloud Source Repositories repository. You have developed a build config file which executes steps to build container images. How do you start the build when developers push commits? I selected configuring a Cloud Build trigger on the git branch to start the build. Any options to do so manually or using Pub/Sub were wrong.
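A sketch of creating such a trigger with gcloud (repository name, branch pattern, and config path are placeholders):
```
# Run cloudbuild.yaml on every push to the main branch
gcloud builds triggers create cloud-source-repositories \
  --repo=my-repo --branch-pattern='^main$' --build-config=cloudbuild.yaml
```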
- Understand which microservice hosting services have the least to the most infrastructure management overhead: Cloud Run functions < Cloud Run < GKE Autopilot < GKE Standard.
- The question was about Service Directory. You manage multiple microservices on Cloud Run. The current implementation tracks each service's features and versions by storing them in environment variables; the Cloud Build step executes `gcloud run deploy` with arguments to set the environment variables. It has become difficult to discover the features and versions in those variables as their number and complexity grow. You want to implement a cloud-native method to resolve this. I selected the option where the Cloud Build step publishes the service's name and version to Service Directory. The option of calling the Cloud Run Admin API to discover environment variables was wrong.
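A sketch of what that publish step might run (namespace, service, and annotation keys are placeholders; Service Directory carries custom metadata as annotations):
```
# One-time setup: a namespace for the environment
gcloud service-directory namespaces create prod --location=us-central1

# On deploy: register the service with its version and feature metadata
gcloud service-directory services create checkout \
  --namespace=prod --location=us-central1 \
  --annotations=version=1.4.2,features=payments-v2
# (on subsequent deploys, use `services update --annotations=...`)
```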