Passing the Google Cloud Professional Cloud Network Engineer exam the 3rd time

Hil Liao
8 min read · May 7, 2023

The exam has become harder compared to the last time I took it.

  1. This question appeared on the prior exam. Given the output of gcloud container clusters describe, you see the control plane has public and private endpoints with authorized networks disabled; all nodes are private. How can you configure a Compute Engine instance with only a private IP to execute kubectl against the control plane IP while blocking other hosts from accessing it? Pay attention: with the public endpoint enabled and authorized networks enabled, the authorized networks apply to the control plane’s public endpoint, so adding the GCE instance’s private IP does not work. Three options involved adding the private IP or its range to the control plane authorized networks. The correct option was to assign a public IP to the instance and add that IP to the control plane authorized networks.
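A hedged sketch of that fix with gcloud; the instance name, cluster name, zone, and IP below are placeholders I made up, not values from the exam:

```shell
# Attach a public IP to the otherwise-private VM (placeholder names).
gcloud compute instances add-access-config kubectl-vm \
  --zone=us-central1-a

# Allow only that public IP through the control plane's authorized networks.
gcloud container clusters update my-cluster \
  --zone=us-central1-a \
  --enable-master-authorized-networks \
  --master-authorized-networks=203.0.113.7/32
```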
  2. Study Google Cloud DNS best practices. The scenario: the on-premises data center has an interconnect to a Shared VPC. How do you configure Compute Engine instances to resolve DNS records of *.corp.example.com on premises? The on-premises hosts use the IP range 10.10.0.0/16 and the GCE hosts use the subnet range 192.168.1.0/24. The correct answer was to create a private (not public, which was a wrong option) forwarding zone for corp.example.com and point it at the on-premises DNS server IP. The correct answer also includes: in Cloud Router instances, add a custom route advertisement for the IP range 35.199.192.0/19 in your VPC network toward the on-premises environment. A wrong option was to configure an outbound server policy with the on-premises DNS server as the alternative name server.
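A sketch of that setup, assuming a zone name, network name, router name, and on-premises DNS IP that I invented for illustration, and assuming the Cloud Router is already in custom advertisement mode:

```shell
# Private forwarding zone that sends corp.example.com queries on-prem.
gcloud dns managed-zones create corp-forwarding \
  --description="Forward corp.example.com to on-prem DNS" \
  --dns-name=corp.example.com. \
  --visibility=private \
  --networks=shared-vpc \
  --forwarding-targets=10.10.0.53

# Advertise Google's DNS forwarding range so replies route back over
# the interconnect (assumes advertisement mode is already CUSTOM).
gcloud compute routers update my-router \
  --region=us-central1 \
  --add-advertisement-ranges=35.199.192.0/19
```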
  3. Understand that a Compute Engine instance can have a maximum of 8 NICs. The question was to connect 10 VPC networks, via interconnect VLAN attachments, to the on-premises network. 10 different teams use the 10 VPCs with IAM to manage permissions. Traffic from the VPC networks must pass through Palo Alto appliances for scanning before reaching the Internet. Options were 1) create a transit VPC and use the Palo Alto VM’s NICs to connect to the other 10 VPCs; 2) delete the 10 VPCs and create a single VPC with 10 subnets; 3) create a hub VPC and 10 spoke VPCs peered to the hub, with the VLAN attachment and the Palo Alto instances in the hub VPC. 1) was wrong; it exceeds the 8-NIC maximum. 2) was wrong; it breaks the existing teams’ IAM in 10 projects. 3) was correct.
  4. A company’s logo is deployed to multiple company websites. How do you configure Cloud CDN to improve cache hits? 1) Create a custom backend bucket and configure multi-region. 2) Enable Cloud CDN with customized cache keys and uncheck the protocol and host checkboxes. 3) Change Cloud CDN’s TTL to 0. 1) was wrong; I don’t think a custom backend bucket exists. 2) should be correct; the cache key would be based on the path to the image, not the host or protocol. 3) was wrong; setting the TTL to 0 means no caching.
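The equivalent of unchecking those boxes with gcloud might look like this; the backend service name is a placeholder:

```shell
# Drop protocol and host from the CDN cache key so the same logo cached
# once serves every site that references it.
gcloud compute backend-services update logo-backend \
  --global \
  --enable-cdn \
  --no-cache-key-include-protocol \
  --no-cache-key-include-host
```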
  5. How do you restrict the HA VPN peer gateway IPs so that only specific IPs can connect to Cloud VPN gateways across multiple projects? The correct answer is to use the Resource Manager organization policy constraint constraints/compute.restrictVpnPeerIPs. A wrong option was to configure an HA VPN tunnel allow list, which I don’t think exists.
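A sketch of setting that constraint at the organization level; the organization ID and peer IP are placeholders:

```shell
# Allow only the approved peer gateway IP as a VPN peer, org-wide.
gcloud resource-manager org-policies allow \
  compute.restrictVpnPeerIPs 203.0.113.10 \
  --organization=123456789
```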
  6. The on-premises network is connected to a VPC in Google Cloud. You created a Cloud SQL instance in the VPC and configured private services access. Hosts in the VPC can reach the Cloud SQL private IP, but hosts on premises cannot. How do you troubleshoot? The correct answer was to configure the VPC peering with Cloud SQL’s managed VPC to import and export custom routes, then advertise Cloud SQL’s private IP range from the Cloud Router. A wrong answer was creating a different interconnect with a lower MED value. I believe the same method applies to GKE Standard clusters in addition to Cloud SQL.
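A sketch of that fix, assuming the standard private services access peering name, plus placeholder network, router, and allocated-range values; it also assumes the router is already in custom advertisement mode:

```shell
# Export custom routes over the private services access peering.
gcloud compute networks peerings update servicenetworking-googleapis-com \
  --network=my-vpc \
  --export-custom-routes

# Advertise the allocated Cloud SQL range to on-prem via the Cloud Router.
gcloud compute routers update my-router \
  --region=us-central1 \
  --add-advertisement-ranges=10.20.0.0/24
```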
  7. Know the differences between the Cloud Load Balancing types. The scenario is to load balance an external-facing TCP application. The VMs listen on TCP port 700 but the load balancer exposes port 800. What’s the right approach? 1) Create an external HTTP load balancer. 2) Create an external network load balancer. 3) Create an external TCP proxy load balancer and configure a backend service using a managed instance group. 4) Similar to 3 but with an unmanaged instance group. 1) was wrong; the requirement is TCP. 2) was wrong; a network load balancer can’t change the source and destination ports. 3) was correct. 4) was wrong; it does not scale.
  8. How do you isolate a possibly malicious client IP that is consuming a backend service behind an external HTTP load balancer? 1) Configure firewall rules to block the IP on the tagged VMs. 2) Configure a Cloud Armor security policy to block the client IP and select preview mode. 3) Similar to 2 but without preview. 1) was wrong; the HTTP LB proxies connections, so the VMs see traffic from the load balancer’s IP ranges rather than the client IP. 2) was correct; preview mode lets you investigate the logs and check the request URLs. 3) was wrong; it could block a client who may be a legitimate user.
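A sketch of the preview-mode rule; the policy name, rule priority, and client IP are placeholders:

```shell
# Deny the suspect IP in preview mode: the rule is evaluated and logged
# but not enforced, so you can inspect the logs before committing.
gcloud compute security-policies rules create 1000 \
  --security-policy=my-policy \
  --src-ip-ranges=198.51.100.4/32 \
  --action=deny-403 \
  --preview
```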
  9. You are planning a subnet to host a GKE Standard cluster. The cluster initially has 1 node but will scale to 3 nodes. What’s the ideal Pod IP range? 1) /24, 2) /23, 3) /22, 4) /21. By default each node runs up to 110 Pods. As in the prior exam guide, the calculation is that 110 Pods need 220 Pod IPs per node for IP reuse and rotation; rounding up to a power of two gives 2⁸ = 256 Pod IPs, which requires a /24 range per node. 3 nodes means you need three /24 ranges, and a /23 holds only two. 3) /22 was correct; it even allows scaling to 4 nodes. If the question required 32 = 2⁵ nodes, you’d need a /(24-5) = /19.
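The sizing arithmetic above can be sketched in a few lines of shell:

```shell
# GKE Standard Pod range sizing: each node reserves roughly 2x its max
# Pods worth of IPs, rounded up to a power of two.
max_pods=110
ips_per_node=256                      # smallest power of two >= 2*110
nodes=4                               # a /22 leaves headroom for a 4th node
total_ips=$((nodes * ips_per_node))   # 4 * 256 = 1024 = 2^10
prefix=$((32 - 10))                   # 1024 IPs fit in a /22
echo "pod range: /${prefix} (${total_ips} IPs)"
```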
  10. You have a dedicated interconnect in us-west1. 75% of your workloads are in us-central1 and 25% are in us-east4. Workloads at the colocation facility need to connect to the VPC. How do you optimize network performance and minimize costs? The correct answer is to enable global dynamic routing and re-create the interconnect and VLAN attachment in us-central1. Wrong options were choosing regional routing in the VPC or keeping the interconnect in us-west1.
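The global routing half of that answer is a one-liner; the network name is a placeholder:

```shell
# Switch the VPC to global dynamic routing so routes learned in
# us-central1 are usable from every region.
gcloud compute networks update my-vpc --bgp-routing-mode=global
```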
  11. How do you direct network traffic from the on-premises data center to interconnects in us-west1 and us-east1, where us-west1 is preferred and us-east1 is used only when the us-west1 interconnect is down? Advertise the route to us-west1 with MED value 1, meaning most preferred. The options set us-east1’s MED to either 100 or 1,000. I don’t know which is correct, but I picked 100; 100 appears to make it the 2nd choice, since a route learned from another region would have a priority greater than 201 per the route priority rules.
  12. The VPC has the following 3 firewall rules: deny egress to 10.180.10.0/24 at priority 800, allow ingress to 10.5.10.0/24 at priority 1000, allow egress to 10.179.0.0/16 at priority 1000. You need to configure a new firewall rule to deny egress to 10.180.10.9 and log attempted traffic. Which IAM role and firewall rule do you configure? The IAM role is compute.securityAdmin, not networkAdmin and not compute.orgSecurityPolicyAdmin. The firewall rule to create is deny egress to 10.180.10.9/32 at priority 150, not 1000. I believe a rule at priority 1000 would be ignored because the existing rule at priority 800 takes effect first.
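A sketch of that rule; the rule and network names are placeholders:

```shell
# Deny egress to the single host at a higher priority (lower number)
# than the existing 800-priority rule, with logging on.
gcloud compute firewall-rules create deny-egress-10-180-10-9 \
  --network=my-vpc \
  --direction=EGRESS \
  --action=DENY \
  --rules=all \
  --destination-ranges=10.180.10.9/32 \
  --priority=150 \
  --enable-logging
```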
  13. You are using a global external HTTPS load balancer; the application servers need to know the client IPs of incoming connections. What’s the correct approach? Configure the HTTPS load balancer’s backend service, behind the target proxy, to use the X-Forwarded-For header: X-Forwarded-For: <client-ip>,<load-balancer-ip>
  14. You need to enforce firewall rules across the entire organization to block ingress to Compute Engine instances from the Internet except from 35.191.0.0/16. You don’t want any project-level firewall rules to override them by priority. How do you achieve this? 1) Create a hierarchical firewall policy rule to deny ingress from 0.0.0.0/0 at priority 0 and another rule at priority 1 to allow ingress from 35.191.0.0/16. 2) Create a hierarchical firewall policy rule to deny ingress from 0.0.0.0/0 at priority 1 and another rule at priority 0 to allow ingress from 35.191.0.0/16. 3, 4) Use project-level firewall rules in a single project. 1) was wrong; the deny rule at priority 0 overrides the allow rule at priority 1. 2) was correct; the allow rule has the highest priority. 3, 4) were wrong; they don’t take effect in other projects.
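A sketch of the correct option’s two rules, assuming a hierarchical firewall policy named my-policy already exists under the organization (the name is a placeholder):

```shell
# Allow the health-check/proxy range at the highest priority.
gcloud compute firewall-policies rules create 0 \
  --firewall-policy=my-policy \
  --direction=INGRESS \
  --action=allow \
  --src-ip-ranges=35.191.0.0/16 \
  --layer4-configs=all

# Deny everything else from the Internet one priority lower.
gcloud compute firewall-policies rules create 1 \
  --firewall-policy=my-policy \
  --direction=INGRESS \
  --action=deny \
  --src-ip-ranges=0.0.0.0/0 \
  --layer4-configs=all
```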
  15. You need a VPC design that’s cost-effective, secure, and works well with centralized network administration for 3 different departments at a company. What’s the approach? 1) Create 3 VPCs and peer them in a mesh. 2) Create 3 VPCs and connect them via Cloud VPN. 3) Attach public IPs to all Compute Engine instances and have applications connect over public IPs. 4) Create a Shared VPC and assign the compute.networkUser IAM role to the department service project administrators. 4) was correct.
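Granting that role might look like this; the host project and group are placeholders:

```shell
# Let a department's service project admins use the shared network.
gcloud projects add-iam-policy-binding host-project \
  --member=group:dept-a-admins@example.com \
  --role=roles/compute.networkUser
```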
  16. Know that VoIP applications are based on UDP. You need to load balance a VoIP application deployed on Compute Engine instances for external users. Which load balancer do you pick? 1) HTTP external global load balancer. 2) TCP proxy load balancer. 3) Network load balancer. 4) SSL proxy load balancer. 2, 4) were wrong. 1) was wrong; VoIP does not run on HTTP. 3) was correct; it implied the external TCP/UDP network load balancer.
  17. You have Partner Interconnect connections to the on-premises data center. You notice that some VMs in GCP can’t reach the VMs on premises. You checked that the BGP sessions and Cloud Routers are working properly, and you suspect firewall rules may be blocking traffic. What’s the best method to troubleshoot? 1) SSH into the VMs and run traceroute. 2) Launch Connectivity Tests in Network Intelligence Center. 3) SSH into the VMs and run ping tests. The correct answer is 2). Connectivity Tests appeared twice in the exam.
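A sketch of such a test; the project, instance, and destination values are placeholders:

```shell
# Run a Connectivity Test from a GCP VM to an on-prem host; the result
# reports which firewall rule or route drops the traffic.
gcloud network-management connectivity-tests create gcp-to-onprem \
  --source-instance=projects/my-project/zones/us-central1-a/instances/vm-1 \
  --destination-ip-address=10.10.0.5 \
  --destination-port=443 \
  --protocol=TCP
```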
  18. You have 1 dedicated interconnect connection to a data center at 20 Gbps. There is already a VLAN attachment in a VPC at 10 Gbps. Usage has grown enough to require more bandwidth. What’s the most cost-effective and quickest way to add capacity? 1) Delete the VLAN attachment and add a new one at 20 Gbps. 2) Modify the existing VLAN attachment and change its capacity. 3) Create another VLAN attachment in the same VPC using a new Cloud Router. 4) Create a new VLAN attachment in the same VPC with the existing Cloud Router. 1) was wrong; that would disrupt the workloads. 2) was correct; the capacity change is immediate.
  19. You have legacy TCP applications that require preserving the client IPs. You need to deploy the applications in 2 regions. What’s the approach? 1) Create a TCP proxy load balancer with backend services in 2 regions. 2) Create an external global HTTPS load balancer. 3) Create 2 regional network load balancers and round-robin DNS A records pointing to the 2 IPs. 3) was correct.
  20. You need to inspect ingress traffic to VMs in GCP. You want to inspect all the traffic, not just sampled traffic. Which method is correct? 1) Configure VPC flow logs on the subnet. 2) Configure firewall logs on the subnet. 3) Create a packet mirroring policy on the subnet and specify the IDS endpoint after --collector-ilb=. 4) Install Wireshark on the VMs to monitor them. 3) was correct.
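A sketch of that policy; the mirroring, network, subnet, and forwarding-rule names are placeholders:

```shell
# Mirror all traffic in the subnet to the IDS collector behind an
# internal load balancer.
gcloud compute packet-mirrorings create ids-mirror \
  --region=us-central1 \
  --network=my-vpc \
  --mirrored-subnets=my-subnet \
  --collector-ilb=ids-forwarding-rule
```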
  21. You need to configure private access to Cloud Storage for on-premises hosts in a VPC Service Controls service perimeter. You don’t need to care about other Google services. How can you achieve private connectivity from on-premises hosts to Cloud Storage buckets? 1) Create DNS A records to resolve private.googleapis.com to 199.36.153.8/30. 2) Create DNS A records to resolve restricted.googleapis.com to 199.36.153.4/30. 3) Create a Private Service Connect backend to access the Cloud Storage API via an internal load balancing IP. 4) Create DNS records for public Cloud Storage IP ranges. 3) was correct.
  22. You need to protect data stored in BigQuery and Cloud Storage from external access except from the public corporate network IP range. What’s the correct method? 1) Create a basic access level and add the project in Access Context Manager. 2) Create a basic access level with an IP address condition for the public corporate IP range in Access Context Manager, then create a service perimeter and attach the access level during creation. 3) Create allow-ingress firewall rules with the public corporate IP range as the source. 2) was correct.
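A sketch of the correct option; the access policy ID, project number, level names, and corporate IP range are all placeholders:

```shell
# Basic access level keyed to the corporate IP range.
cat > corp-ip.yaml <<'EOF'
- ipSubnetworks:
    - 203.0.113.0/24
EOF

gcloud access-context-manager levels create corp_network \
  --policy=123456789 \
  --title="Corp network" \
  --basic-level-spec=corp-ip.yaml

# Perimeter around BigQuery and Cloud Storage, gated on that level.
gcloud access-context-manager perimeters create corp_perimeter \
  --policy=123456789 \
  --title="Corp perimeter" \
  --resources=projects/111111 \
  --restricted-services=bigquery.googleapis.com,storage.googleapis.com \
  --access-levels=corp_network
```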
