When I passed my first professional networking engineer exam, it was at a testing center, and there were many VPC design questions I struggled to answer. Two years have gone by, and I’ve designed and troubleshot many enterprise network architectures since. This time I took the online exam and found it easier. Here are the topics covered in the exam:
I found the following links useful as the foundation of my study guide. Sign in with your company’s Google account to check whether you have access.
Google Cloud Fundamentals: Core Infrastructure
Managing Security in Google Cloud Platform
Security Best Practices in Google Cloud
Mitigating Security Vulnerabilities on Google Cloud Platform
Google Cloud practice exam questions
This is the second time I’m taking the exam, and I found that the renewal offer keeps the same expiry date as long as I take and pass the exam between the offer’s start date and the current certification’s expiry date. It’s almost 2 months in…
Managing infrastructure as code has always been a challenge. Google Cloud Deployment Manager and HashiCorp Terraform have been the top choices for creating reusable templates and modules that create organizational folders, projects, and project resources. As you can see in this simple VPC example, a Deployment Manager template can reference other resources with $(ref.Resource_Name.selfLink). Deployment Manager has a great feature that GKE Config Connector lacks: referencing a resource’s output by its properties. That matters because, in a deployment, one resource’s output is often another resource’s input. Using $(ref.RESOURCE.OUTPUT), …
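A minimal Deployment Manager config illustrating that reference syntax might look like the sketch below (the resource names, region, and CIDR range are placeholders, not from the original example):

```yaml
resources:
- name: my-network
  type: compute.v1.network
  properties:
    autoCreateSubnetworks: false
- name: my-subnet
  type: compute.v1.subnetwork
  properties:
    region: us-central1
    ipCidrRange: 10.0.0.0/24
    # The subnetwork's input is the network's output, wired via $(ref.…):
    network: $(ref.my-network.selfLink)
```

Deployment Manager resolves $(ref.my-network.selfLink) at deploy time, which also makes the dependency ordering between the two resources explicit.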
It was my second time taking the exam. The first time was 2 years ago, when the exam was in beta with roughly 118 questions within 2 hours. The exam I took this time had 60 questions with the case study. I was surprised to see that the number of case-study questions had decreased to four or five, and they were only loosely tied to the case study. I am guessing the new exam format is all 60 questions now. I’m listing the topics that you should study to succeed in the exam:
Google Cloud has a great product called Anthos. Its best feature is the ability to extend a Kubernetes Engine cluster to other cloud providers or to VMware vSphere in an on-premises data center. The licensing fee costs a few thousand dollars monthly per project. If you don’t need multi-cloud and don’t run VMware vSphere on premises, this article shows you how to configure a GKE cluster with the Istio service mesh (comparable to Anthos Service Mesh) and GKE Config Sync (comparable to Anthos Config Management).
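The starting point is roughly the sketch below; the cluster name, zone, and node count are placeholders, and the Istio install uses the open-source istioctl rather than anything Anthos-specific:

```shell
# Create a standard GKE cluster (name, zone, and size are placeholders).
gcloud container clusters create demo-cluster \
  --zone us-central1-a \
  --num-nodes 3

# Point kubectl and istioctl at the new cluster.
gcloud container clusters get-credentials demo-cluster --zone us-central1-a

# Install open-source Istio with the default profile.
istioctl install --set profile=default -y
```

From there, Config Sync is layered on top so the cluster state is driven from a Git repo, mirroring what Anthos Config Management provides.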
I started working on AI- and ML-related tasks in my current project sometime in May. Before that, I had limited exposure to machine learning, and even my tasks in the project are more MLOps-related than data science or model development. Sure, I am on the infrastructure team interfacing with data scientists and ML engineers. But I did not understand how to write Python code to build any sophisticated models in TensorFlow 2.x.
It has always been a pain to install Cloud Monitoring agents on 40 VMs in a single project by following the single-VM installation guide. Luckily, there is a solution: create agent policies in the project to automate the installation process. The full documentation is at Managing agents on multiple VMs, which points to Managing Agent Policies. I had to create a policy that installs monitoring agents on all Debian 9 and 10 VMs in a single project regardless of labels, so I’m writing a simplified version of creating such a policy. First, download set-permissions.sh to bind proper predefined…
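The policy creation step itself can be sketched as below; the policy ID and project are placeholders, I’m assuming the beta ops-agents command, and depending on the command version you may need a separate policy per OS release:

```shell
# Prerequisite: run set-permissions.sh so the OS Config service agent
# has the roles it needs to manage agents on the project's VMs.
# Then create a policy that keeps the monitoring (metrics) agent
# installed and auto-upgraded on matching Debian VMs, with no label filter:
gcloud beta compute instances ops-agents policies create install-monitoring-debian \
  --project=my-project \
  --agent-rules="type=metrics,version=current-major,package-state=installed,enable-autoupgrade=true" \
  --os-types=short-name=debian,version=10
```

Repeat with version=9 (or add a second policy) to cover both Debian releases; new VMs matching the policy get the agent installed automatically.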
Recently, I had the chance to build two Cloud Run services that use Google Cloud Trace, Cloud Debugger, Error Reporting, and Cloud Logging. Logging comes easy in Cloud Run: write to stdout, and you don’t even need google-cloud-logging in requirements.txt.
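As a minimal sketch: Cloud Run forwards each stdout line to Cloud Logging, and a line that is a JSON object with a severity field is parsed into a structured log entry. The helper names below are my own, not from any Google library:

```python
import json
import sys

def log_entry(message, severity="INFO", **fields):
    """Build a structured record that Cloud Logging can parse from a stdout line."""
    entry = {"severity": severity, "message": message}
    entry.update(fields)  # extra keys show up under jsonPayload
    return entry

def log(message, severity="INFO", **fields):
    # One JSON object per line on stdout; Cloud Run ships it to Cloud Logging.
    sys.stdout.write(json.dumps(log_entry(message, severity, **fields)) + "\n")

log("request handled", severity="DEBUG", path="/healthz", status=200)
```

Plain text lines work too, but they land as unstructured entries with a default severity, so the JSON form is worth the few extra lines.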
Setting up Cloud Trace for Python or Java isn’t hard, but not knowing the tricks may cause you to stumble. I recommend using a dedicated service account for development and granting it the Cloud Trace Agent role. That helps when publishing trace data from a local environment to a Google Cloud project that has the Trace API enabled. If the cloud…
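Creating that dedicated service account looks roughly like the sketch below; the account name, project ID, and key path are placeholders:

```shell
# Create a dedicated service account for local development.
gcloud iam service-accounts create dev-tracer --project=my-project

# Grant only the Cloud Trace Agent role, which permits writing trace data.
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:dev-tracer@my-project.iam.gserviceaccount.com" \
  --role="roles/cloudtrace.agent"

# Download a key and point the client libraries at it.
gcloud iam service-accounts keys create key.json \
  --iam-account=dev-tracer@my-project.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS="$PWD/key.json"
```

With GOOGLE_APPLICATION_CREDENTIALS set, the trace exporter running locally authenticates as this narrowly scoped account instead of your user credentials.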
Many of my large enterprise clients in the cloud architecture and DevOps space have asked me a common question: what is the Google-recommended way to configure IAM authentication for GCP service accounts that manage database services such as Spanner or BigQuery?
Store the service account’s JSON key file in a Kubernetes secret:
$ kubectl create secret generic pubsub-key --from-file=key.json=PATH-TO-KEY-FILE.json
----------- pod spec -----------
volumes:
- name: google-cloud-key
  secret:
    secretName: pubsub-key
containers:
- name: subscriber
  volumeMounts:
  - name: google-cloud-key
    mountPath: /var/secrets/google
  env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /var/secrets/google/key.json
The approach above was the recommended solution…
I have seen the IAM configuration of bastion hosts where the local root user was configured with gcloud auth activate-service-account by passing the --key-file flag. The project owner then deleted the key file on the host for security reasons; Google user accounts with the Service Account User role were instructed to log into the bastion hosts and execute sudo su - to run gcloud commands as the service account. There are several downsides to this approach that make it unsuitable for production:
Identity and Access Management (IAM) API audit logs are enabled.
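For reference, the bastion setup described above amounts to something like the following sketch; the service account name, project, and key path are hypothetical:

```shell
# On the bastion, as root: activate the service account identity.
# (The key file is deleted afterwards, but the activated credentials
# persist in root's gcloud configuration.)
gcloud auth activate-service-account \
  bastion-sa@my-project.iam.gserviceaccount.com \
  --key-file=/root/sa-key.json

# A user with the Service Account User role then logs in and runs:
sudo su -
gcloud compute instances list   # executes as the service account
```

Because everyone shares root’s activated credentials, individual user actions are indistinguishable in the audit trail, which is the core of the downsides listed above.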