Understanding Google Cloud VPC and Subnets
What is a VPC?
A Virtual Private Cloud (VPC) is an isolated network environment within Google Cloud. Every GCP project gets a default VPC, but you can create custom VPCs for specific networking needs. VPCs are global resources that span all Google Cloud regions, letting you deploy resources across multiple geographic locations.
Subnets and CIDR Blocks
Subnets are regional resources within a VPC that define IP address ranges; each subnet spans all zones in its region. When creating a subnet, you specify a primary range using classless inter-domain routing (CIDR) notation. For example, a subnet with CIDR 10.0.1.0/24 provides 256 IP addresses total, with 252 usable IPs (GCP reserves four addresses in every primary range: the network address, the default gateway, the second-to-last address, and the broadcast address).
You can add secondary CIDR blocks to subnets for additional IP addressing needs. This flexibility supports multi-pod Kubernetes clusters and complex application architectures.
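The subnet arithmetic above can be checked with a short sketch using Python's standard `ipaddress` module; the four reserved addresses follow GCP's documented reservation pattern, and the subnet range is just an example:

```python
import ipaddress

# Primary range for a hypothetical subnet.
subnet = ipaddress.ip_network("10.0.1.0/24")

total = subnet.num_addresses           # 256 addresses in a /24
# GCP reserves four addresses in every primary subnet range:
reserved = [
    subnet.network_address,            # 10.0.1.0   - network address
    subnet.network_address + 1,        # 10.0.1.1   - default gateway
    subnet.broadcast_address - 1,      # 10.0.1.254 - reserved by Google
    subnet.broadcast_address,          # 10.0.1.255 - broadcast
]
usable = total - len(reserved)

print(total, usable)                   # 256 252
```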
Auto Mode vs. Custom Mode VPCs
Auto mode VPCs automatically create one subnet in each region with predefined CIDR blocks. This approach is simple but less flexible. Custom mode VPCs give you complete control over subnet creation and CIDR ranges, requiring you to manually create subnets. For production environments, custom mode is typically recommended.
Advanced VPC Features
Enable flow logs on subnets to capture network traffic metadata for debugging and security analysis. Private Google Access allows instances with only internal IP addresses to access Google APIs without using the public internet. This is critical for security-conscious deployments that shouldn't expose infrastructure externally.
Firewalls and Security in Google Cloud Networking
How Firewalls Operate
GCP firewall rules are stateful and control inbound and outbound traffic to GCP resources. Rules are defined at the VPC network level but enforced at the instance level, so unlike a traditional network perimeter firewall they provide granular, per-instance security controls.
Firewall Rule Components
Each firewall rule includes these elements:
- Direction (ingress for inbound, egress for outbound traffic)
- Priority (0-65534 range, lower numbers evaluated first)
- Action (allow or deny)
- Match criteria (source/destination IP ranges, protocols, ports)
- Target resources (specific instances or all instances)
GCP evaluates firewall rules in priority order; the first matching rule determines whether traffic is allowed or denied. If no rule matches, the implied rules apply: all ingress is denied and all egress is allowed, so you must explicitly allow any inbound traffic you want to permit.
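The priority-ordered, first-match evaluation described above can be sketched as a toy model (this is not the real API; the rules and packet shape are hypothetical):

```python
# Toy model of GCP firewall evaluation: rules are sorted by priority
# (lower number wins) and the first match decides the verdict.
def evaluate(rules, packet):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if packet["direction"] == rule["direction"] and packet["port"] in rule["ports"]:
            return rule["action"]
    # Implied rules: deny ingress, allow egress.
    return "allow" if packet["direction"] == "egress" else "deny"

rules = [
    {"priority": 1000, "direction": "ingress", "ports": {22}, "action": "allow"},
    {"priority": 900,  "direction": "ingress", "ports": {22}, "action": "deny"},
]

# The priority-900 deny matches first, so SSH is blocked despite the allow rule.
print(evaluate(rules, {"direction": "ingress", "port": 22}))   # deny
print(evaluate(rules, {"direction": "ingress", "port": 443}))  # deny (implied ingress)
print(evaluate(rules, {"direction": "egress",  "port": 443}))  # allow (implied egress)
```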
Identity-Based Security with Service Accounts
Service accounts are GCP-managed accounts representing applications and services. Assign IAM roles to service accounts to grant specific permissions. Firewalls can target specific service accounts, providing identity-based security rather than just network-based controls.
Firewall Rule Targeting Strategies
Network tags are simple labels applied to instances that you reference in firewall rules, which makes managing rules for groups of instances much easier than listing IP ranges. Note that the implied rules only allow outbound traffic; internal communication between instances is open by default only in the default network, which ships with pre-populated rules such as default-allow-internal.
You can create firewall rules that apply to all VPC instances or target specific instances using tags or service accounts. Designing effective firewall rules is essential for securing cloud deployments and passing certification exams.
Load Balancing and Traffic Management
Understanding Load Balancing Basics
Load balancing distributes incoming network traffic across multiple backend resources. This ensures high availability, fault tolerance, and optimal performance. Google Cloud offers several load balancing options depending on your protocol and use case.
Layer 7 vs. Layer 4 Load Balancing
HTTP(S) Load Balancing operates at layer 7 (application layer) and understands HTTP/HTTPS protocols. You can route traffic based on URL paths, hostnames, and other application-specific criteria. This is ideal for web applications requiring intelligent routing decisions.
Network Load Balancing operates at layer 4 (transport layer) and handles TCP and UDP protocols. It provides ultra-high performance for non-HTTP protocols and extreme throughput scenarios. Network Load Balancing is perfect for gaming, streaming, and IoT applications.
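The layer-7 routing decisions described above can be sketched as a toy URL map: hostnames and path prefixes select a backend service, in the spirit of HTTP(S) Load Balancing (the hostnames and backend names are hypothetical):

```python
# Toy layer-7 URL map: (host, path prefix) pairs select a backend service,
# checked in order, mirroring path-based routing in an HTTP(S) load balancer.
URL_MAP = [
    ("www.example.com", "/api/",    "api-backend"),
    ("www.example.com", "/static/", "cdn-backend"),
    ("www.example.com", "/",        "web-backend"),
]

def route(host, path):
    for map_host, prefix, backend in URL_MAP:
        if host == map_host and path.startswith(prefix):
            return backend
    return "default-backend"

print(route("www.example.com", "/api/v1/users"))  # api-backend
print(route("www.example.com", "/index.html"))    # web-backend
```

A layer-4 load balancer, by contrast, never sees the URL at all: it can only balance on IP addresses, protocol, and ports.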
Internal Load Balancing
Internal Load Balancing distributes traffic only within your VPC. This approach is ideal for multi-tier applications where frontend servers route requests to internal backend services. Internal load balancers don't expose services to the public internet, improving security.
Backend Services and Health Checks
Backend services define how load balancers route traffic to groups of instances. Configure health checks, session affinity, and other routing behaviors here. Health checks monitor backend instance health by periodically sending requests and checking responses. Failed instances are automatically removed from rotation.
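The health-check behavior above reduces to a simple filter: probe each backend and keep only healthy instances in rotation. A minimal sketch, with hypothetical instance names and probe results:

```python
# Toy health-check pass: keep only backends whose probe succeeds,
# mirroring how a backend service removes failed instances from rotation.
def healthy_backends(backends, probe):
    return [b for b in backends if probe(b)]

# Hypothetical probe results keyed by instance name.
status = {"web-1": True, "web-2": False, "web-3": True}

in_rotation = healthy_backends(["web-1", "web-2", "web-3"], status.get)
print(in_rotation)  # ['web-1', 'web-3']
```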
Improving Performance with Cloud CDN
Cloud CDN caches content at Google's edge locations worldwide, reducing latency for end users. It is primarily used for static content, though dynamic responses can also be cached when origin cache headers permit it. Traffic policies allow complex routing rules that consider geography, custom headers, and other factors. Mastering load balancing is crucial for designing scalable applications and passing GCP certification exams.
Cloud Interconnect, VPN, and Hybrid Connectivity
Dedicated Connectivity Options
Cloud Interconnect provides dedicated network connections between on-premises infrastructure and Google Cloud. Dedicated Interconnect offers physical connections with dedicated bandwidth (typically 10 Gbps or 100 Gbps), suitable for organizations with consistent, high-volume traffic. Partner Interconnect enables connectivity through Google-approved partners when direct connections aren't feasible.
Dedicated connections provide lower latency and higher bandwidth compared to internet-based approaches. They're ideal for data-intensive workloads requiring consistent performance.
Cloud VPN for Encrypted Connections
Cloud VPN secures network traffic through encrypted tunnels over the public internet using the IPsec protocol. It establishes site-to-site connections between on-premises VPN gateways and Cloud VPN gateways, and you can configure multiple tunnels (as HA VPN does by default) for redundancy and automatic failover.
Cloud Router is a managed BGP (Border Gateway Protocol) router enabling dynamic routing between your VPC and on-premises networks. Unlike static routing where you manually define routes, dynamic routing automatically updates routes based on network topology changes.
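The contrast between static and dynamic routing can be sketched as a toy update step: a Cloud Router-style process replaces the routes learned from a BGP peer whenever that peer advertises a new set of prefixes, while static routes change only when an operator edits them (all prefixes and peer names here are hypothetical):

```python
# Toy dynamic-routing update: drop routes previously learned from a peer,
# then install the prefixes it currently advertises. Static routes
# (any entry with a different next hop) are left untouched.
def apply_bgp_update(route_table, advertised_prefixes, next_hop):
    route_table = {p: hop for p, hop in route_table.items() if hop != next_hop}
    route_table.update({prefix: next_hop for prefix in advertised_prefixes})
    return route_table

routes = {"10.0.0.0/8": "static-gw"}
routes = apply_bgp_update(routes, ["192.168.0.0/16"], "onprem-peer")
print(sorted(routes))  # ['10.0.0.0/8', '192.168.0.0/16']
```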
VPC Peering and Shared VPC
VPC peering connects two VPCs at the network level, allowing instances to communicate using private IP addresses. Peering has no additional charge beyond standard network egress rates, provides lower latency than routing over external IPs, and keeps traffic off the public internet. However, peering is not transitive (VPC A peering with B and B with C doesn't let A and C communicate through B).
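Non-transitivity is worth internalizing for exam questions; a toy reachability model makes it concrete (the VPC names are hypothetical):

```python
# Toy model of VPC peering reachability: peerings are direct edges only,
# so reachability is NOT transitive through an intermediate VPC.
peerings = {("vpc-a", "vpc-b"), ("vpc-b", "vpc-c")}

def can_reach(src, dst):
    return (src, dst) in peerings or (dst, src) in peerings

print(can_reach("vpc-a", "vpc-b"))  # True  - direct peering
print(can_reach("vpc-a", "vpc-c"))  # False - no transit through vpc-b
```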
Shared VPC allows an organization to share a single VPC across multiple GCP projects. This approach simplifies multi-project deployments and centralizes network administration. Understanding these connectivity options is essential for designing hybrid cloud architectures.
Cloud DNS and Advanced Networking Features
Cloud DNS Fundamentals
Cloud DNS is a managed DNS service translating domain names into IP addresses using Google's global network. Create public zones for domains accessible over the internet and private zones for internal DNS resolution within your VPC.
DNS records define how domains resolve:
- A records map domain names to IPv4 addresses
- AAAA records map to IPv6 addresses
- CNAME records create aliases for domains
- MX records direct mail traffic
- TXT records store text information
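The interaction between A and CNAME records above can be sketched as a tiny resolver that follows aliases until it reaches an address (the zone data is hypothetical):

```python
# Toy zone: resolve a name to an IPv4 address by following CNAME
# aliases until an A record is found.
ZONE = {
    ("app.example.com", "CNAME"): "lb.example.com",
    ("lb.example.com",  "A"):     "203.0.113.10",
}

def resolve_a(name, depth=5):
    if depth == 0:
        return None  # guard against CNAME loops
    if (name, "A") in ZONE:
        return ZONE[(name, "A")]
    if (name, "CNAME") in ZONE:
        return resolve_a(ZONE[(name, "CNAME")], depth - 1)
    return None

print(resolve_a("app.example.com"))  # 203.0.113.10
```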
DNS Security and Service Discovery
DNS Security Extensions (DNSSEC) add cryptographic signatures to DNS records, protecting against DNS spoofing and cache poisoning attacks. Service discovery (in GCP, provided by Service Directory, which integrates with Cloud DNS) automatically registers and deregisters services, enabling dynamic backend discovery.
Network Service Tiers and Performance
Network Service Tiers offer different performance levels for external IP addresses and outbound traffic. Premium tier delivers traffic over Google's private network for better performance and lower latency. Standard tier uses public internet paths, reducing costs for non-critical workloads.
Advanced Traffic and Connectivity Features
Packet Mirroring copies traffic from instances to separate systems for analysis, monitoring, and threat detection. Private Service Connect allows access to Google APIs and managed services (such as BigQuery) through private endpoints, without routing over the public internet: you create Private Service Connect endpoints and associate them with your VPC. Understanding these advanced features demonstrates comprehensive GCP networking knowledge essential for advanced certifications.
