Google Cloud Platform networking is different from AWS and Azure in ways that initially seem strange—until you realize GCP wasn't designed by people building a cloud. It was designed by people who already ran one of the largest networks on Earth and decided to let others use it.
That origin story explains almost everything about how GCP networking works.
VPCs Are Global (Because Google's Network Is)
In AWS, a VPC lives in one region. Want resources in another region? Create another VPC. In GCP, a VPC spans all regions automatically.
This isn't a feature Google added. It's a confession. Google's internal network was always global—their services don't think in terms of regions, they think in terms of "the network." Making VPCs regional would have been artificial, a constraint imposed to match what other clouds do rather than what Google's infrastructure actually looks like.
The practical implication: multi-region architectures in GCP don't require peering VPCs across regions. A single VPC with subnets in different regions just works. Resources in us-east1 can talk to resources in europe-west1 using private IPs, no extra configuration.
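A minimal gcloud sketch of that pattern (the network and subnet names and the ranges are placeholders, not from any real setup):

```sh
# One VPC, no peering: subnets in different regions share the same private network
gcloud compute networks create prod-vpc --subnet-mode=custom

gcloud compute networks subnets create us-apps \
    --network=prod-vpc --region=us-east1 --range=10.10.0.0/20

gcloud compute networks subnets create eu-apps \
    --network=prod-vpc --region=europe-west1 --range=10.20.0.0/20
```

A VM in us-apps can reach a VM in eu-apps on its private IP immediately; Google routes between regions on its own backbone.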
Subnets Without VPC CIDR Blocks
Here's something that confuses people coming from AWS: when you create a GCP VPC, you don't specify an IP range.
In AWS, you create a VPC with a CIDR block (say, 10.0.0.0/16), then carve subnets out of that space. You're pre-committing to an IP range before you know exactly how you'll use it.
GCP skips this step. You create a VPC, then create subnets with whatever IP ranges you want (as long as they don't overlap). Each subnet is independent. Want to add a subnet with a completely different IP range later? Go ahead.
This is more flexible but requires more planning. You don't get the automatic constraint of "all subnets must fit within this block," which is either liberating or enough rope to hang yourself with, depending on how disciplined your IP planning is.
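To make that concrete, nothing stops you from later attaching a range completely unrelated to the earlier sketch's 10.x subnets (names again placeholders):

```sh
# The new range has no relationship to the existing subnets; only overlap is forbidden
gcloud compute networks subnets create legacy-bridge \
    --network=prod-vpc --region=us-west1 --range=192.168.100.0/24
```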
Auto Mode vs. Custom Mode
GCP offers two approaches to subnet creation:
Auto mode creates a subnet in every region automatically, using predefined ranges from 10.128.0.0/9. Convenient for experimentation, problematic for production—you don't control the IP ranges, and they might conflict with your on-premises network.
Custom mode requires you to create subnets explicitly with your chosen IP ranges. More work upfront, full control forever.
You can convert auto mode to custom mode (keeping the existing subnets but stopping automatic creation). You cannot go the other direction. Start with custom mode for anything that matters.
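The conversion itself is a single command, shown here against a project's auto-mode default network:

```sh
# One-way: stops automatic subnet creation but keeps the existing subnets
gcloud compute networks update default --switch-to-custom-subnet-mode
```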
Secondary Ranges: Kubernetes's Best Friend
GCP subnets can have secondary IP ranges in addition to the primary range. This sounds like an obscure feature until you run Kubernetes.
In GKE (Google Kubernetes Engine), pods need IP addresses. Lots of them. The traditional approach uses NAT or overlay networks—pods get fake IPs that get translated when traffic leaves the node.
GCP's approach: give pods real, routable IPs from a secondary range. A VM might have IP 10.0.1.5 (primary range) while its pods have IPs from 10.1.0.0/16 (secondary range). Both are real VPC addresses. Pods can communicate directly with other VPC resources without NAT.
This simplifies debugging (pod IPs appear in flow logs), improves performance (no NAT overhead), and integrates cleanly with firewall rules.
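Roughly, the setup is a subnet with secondary ranges for pods and services, then a VPC-native GKE cluster pointed at them. A sketch with illustrative names and ranges:

```sh
# Primary range for nodes, secondary ranges for pods and services
gcloud compute networks subnets create gke-subnet \
    --network=prod-vpc --region=us-east1 --range=10.0.1.0/24 \
    --secondary-range=pods=10.1.0.0/16,services=10.2.0.0/20

# VPC-native cluster: pod and service IPs come from the secondary ranges
gcloud container clusters create api-cluster \
    --region=us-east1 --network=prod-vpc --subnetwork=gke-subnet \
    --enable-ip-alias \
    --cluster-secondary-range-name=pods \
    --services-secondary-range-name=services
```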
Firewall Rules: Tags Over Instance Groups
AWS uses Security Groups attached to specific instances. GCP uses firewall rules that match instances by tags or service accounts.
The difference matters at scale. In AWS, when you launch a new instance, you specify which Security Groups apply. In GCP, you tag the instance (say, web-server), and any firewall rule targeting that tag automatically applies.
This inverts the mental model. Instead of "this instance has these rules," you think "instances with this role get these rules." Add a new web server? Tag it web-server and firewall rules apply automatically. No need to update instance configurations.
Firewall rules also support service accounts as targets. Instead of tagging instances, you can apply rules based on which service account the instance runs as—useful when the service account already defines the instance's role.
Rules have priorities (lower numbers evaluated first) and can explicitly allow or deny traffic. The default behavior: all ingress denied, all egress allowed.
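A sketch of the tag-based model (rule, tag, and instance names are examples):

```sh
# Any instance tagged web-server accepts HTTP/HTTPS from anywhere
gcloud compute firewall-rules create allow-web \
    --network=prod-vpc --direction=INGRESS --priority=1000 \
    --allow=tcp:80,tcp:443 \
    --target-tags=web-server --source-ranges=0.0.0.0/0

# Bringing a new instance under the rule is just a tag
gcloud compute instances add-tags web-3 --zone=us-east1-b --tags=web-server
```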
Global Load Balancing with Anycast
Most load balancers are regional. GCP's HTTP(S) load balancer is global—one IP address, served from edge locations worldwide, routing to backends in any region.
How? Anycast. The same IP address is announced from multiple locations. When a user connects, they reach the nearest edge location, which routes the request to the healthiest backend (considering both health checks and latency).
A user in Tokyo hits a Tokyo edge location. A user in London hits a London edge location. Both use the same IP address. The load balancer figures out where to send the traffic.
This eliminates the need for DNS-based geographic routing (with its TTL delays and caching issues). The routing happens at the network layer, instantly adjusting as backends fail or recover.
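End to end, a minimal global HTTP load balancer is a chain of global objects: health check, backend service, URL map, proxy, and one anycast forwarding rule. A hedged sketch—the instance group ig-us is assumed to already exist, and HTTPS would add a certificate and a target-https-proxy:

```sh
gcloud compute health-checks create http web-hc --port=80

gcloud compute backend-services create web-bes \
    --protocol=HTTP --health-checks=web-hc --global
gcloud compute backend-services add-backend web-bes \
    --instance-group=ig-us --instance-group-zone=us-east1-b --global

gcloud compute url-maps create web-map --default-service=web-bes
gcloud compute target-http-proxies create web-proxy --url-map=web-map

# The forwarding rule's single global IP is what anycast announces worldwide
gcloud compute forwarding-rules create web-fr \
    --global --target-http-proxy=web-proxy --ports=80
```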
GCP offers several load balancer types:
- HTTP(S) Load Balancer: Layer 7, global, content-based routing, SSL termination
- TCP/UDP Network Load Balancers: Layer 4; passthrough variants are regional, proxy-based variants can be global
- Internal Load Balancer: Private load balancing within a VPC
Premium vs. Standard Network Tier
GCP offers two network tiers with a straightforward tradeoff: performance vs. cost.
Premium Tier routes traffic through Google's private backbone. A user in Sydney accessing your service in us-central1 travels on Google's network from Sydney to Iowa—low latency, consistent performance, Google's infrastructure the whole way.
Standard Tier routes traffic over the public Internet. The same Sydney user's traffic hits Google's network only when it reaches the region where your resource lives. Cheaper, but latency depends on whatever path the public Internet takes.
You choose per resource. A latency-sensitive API might use Premium; a batch processing endpoint might use Standard.
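The tier is set where the external address or instance is defined. For example (address name illustrative; Standard Tier addresses are regional):

```sh
# Premium is the default; Standard must be requested explicitly
gcloud compute addresses create batch-endpoint-ip \
    --region=us-east1 --network-tier=STANDARD
```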
Connecting to On-Premises: Interconnect and VPN
Cloud Interconnect provides dedicated physical connections:
- Dedicated Interconnect: 10 or 100 Gbps connections directly to Google's network. For organizations with data centers near Google's colocation facilities.
- Partner Interconnect: Connections through service providers. 50 Mbps to 50 Gbps. For organizations without proximity to Google facilities or needing smaller connections.
Cloud VPN creates encrypted tunnels over the public Internet:
- HA VPN: Two tunnels to two interfaces, 99.99% availability SLA
- Classic VPN: Single interface and tunnel, 99.9% availability SLA
Interconnect provides better performance and reliability; VPN is simpler and works from anywhere with Internet access.
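For orientation, the skeleton of an HA VPN setup looks like the sketch below. It is not a complete config: the peer address, ASN, and secret are placeholders, and a real deployment adds a second tunnel on interface 1 plus BGP sessions on the Cloud Router to earn the 99.99% SLA.

```sh
# HA VPN gateway (two interfaces) and a Cloud Router for dynamic routing
gcloud compute vpn-gateways create ha-gw --network=prod-vpc --region=us-east1
gcloud compute routers create vpn-router \
    --network=prod-vpc --region=us-east1 --asn=65001

# Describe the on-premises side, then bring up the first tunnel
gcloud compute external-vpn-gateways create on-prem-gw --interfaces=0=203.0.113.10
gcloud compute vpn-tunnels create tunnel-0 \
    --region=us-east1 --vpn-gateway=ha-gw --interface=0 \
    --peer-external-gateway=on-prem-gw --peer-external-gateway-interface=0 \
    --router=vpn-router --ike-version=2 --shared-secret=CHANGE_ME
```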
Shared VPC: Centralized Networking Across Projects
GCP encourages separating resources into projects (roughly equivalent to AWS accounts). But separate projects with separate VPCs mean managing many networks.
Shared VPC solves this. One project (the host) owns the VPC. Other projects (service projects) deploy resources into the host's subnets. Network configuration stays centralized; resource deployment stays distributed.
A networking team manages firewall rules, routes, and connectivity in the host project. Application teams deploy VMs and services in their service projects, using subnets the networking team provides. Everyone gets appropriate control without duplicating network infrastructure.
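Mechanically, that's two commands plus IAM. A sketch with placeholder project IDs—application teams additionally need the compute.networkUser role on the subnets they may use:

```sh
# The host project owns the VPC; service projects attach to it
gcloud compute shared-vpc enable net-host-project
gcloud compute shared-vpc associated-projects add app-project \
    --host-project=net-host-project
```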
VPC Peering
VPC peering connects two VPCs for private communication. Unlike AWS (where you peer VPCs that are inherently regional), GCP peering connects global VPCs. Set up peering once; resources in any region of either VPC can communicate.
Peering works across GCP organizations, enabling connectivity between separate companies or business units. Both VPCs must have non-overlapping IP ranges.
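Peering is established from both sides and becomes active once each VPC names the other. Network and project names below are illustrative:

```sh
# Run the first command in project-a, the second in project-b
gcloud compute networks peerings create a-to-b \
    --network=vpc-a --peer-project=project-b --peer-network=vpc-b
gcloud compute networks peerings create b-to-a \
    --network=vpc-b --peer-project=project-a --peer-network=vpc-a
```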
Private Google Access
Instances without external IPs can't reach the Internet on their own. But they might still need Google services—Cloud Storage, BigQuery, Pub/Sub.
Private Google Access lets instances reach Google APIs using their internal IPs. Enable it on a subnet, and instances there can access Google services without NAT or external IPs.
This keeps traffic on Google's network (never touching the public Internet) and avoids the complexity of managing external connectivity for services that only need Google APIs.
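Enabling it is a one-line subnet update (subnet name carried over from the earlier sketches):

```sh
# Instances in us-apps can now call Google APIs from their internal IPs
gcloud compute networks subnets update us-apps \
    --region=us-east1 --enable-private-ip-google-access
```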
Cloud NAT
For instances that genuinely need Internet access (software updates, external APIs), Cloud NAT provides outbound connectivity without assigning external IPs.
Cloud NAT is regional, scales automatically, and doesn't consume IP addresses in your subnets (it's software-defined, not a NAT gateway instance). Configure it per region, and instances can reach the Internet while remaining privately addressed.
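Setup is a Cloud Router plus a NAT config attached to it; a minimal sketch with placeholder names:

```sh
gcloud compute routers create nat-router --network=prod-vpc --region=us-east1

# Auto-allocated external IPs, NAT for every subnet in the region
gcloud compute routers nats create outbound-nat \
    --router=nat-router --region=us-east1 \
    --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges
```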
Cloud Armor: DDoS and WAF
Cloud Armor integrates with HTTP(S) load balancers to provide protection:
- DDoS mitigation: Absorbs volumetric attacks at Google's edge
- WAF rules: Block SQL injection, XSS, and other application attacks
- Custom rules: IP allowlists/blocklists, geo-blocking, rate limiting
Because it sits at the load balancer (which is global), protection applies worldwide without deploying anything per-region.
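A sketch of wiring a policy to the backend service from the load balancer example above (the policy name is illustrative; evaluatePreconfiguredExpr references Cloud Armor's preconfigured WAF rule sets):

```sh
gcloud compute security-policies create edge-policy

# Block requests matching the preconfigured XSS signatures
gcloud compute security-policies rules create 1000 \
    --security-policy=edge-policy \
    --expression="evaluatePreconfiguredExpr('xss-stable')" \
    --action=deny-403

# Attach the policy to a global backend service
gcloud compute backend-services update web-bes \
    --security-policy=edge-policy --global
```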
Network Intelligence Center
Diagnosing network issues in the cloud is notoriously difficult. Network Intelligence Center provides tools:
- Connectivity Tests: Verify whether traffic can flow between two endpoints and diagnose why it can't (see the sketch after this list)
- Performance Dashboard: Packet loss, latency, and throughput metrics
- Topology: Visual representation of your VPC structure
- Firewall Insights: Find unused rules, overly permissive rules, and optimization opportunities
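As an example of the first tool, a connectivity test between two VMs might look like this. A hedged sketch: the project, instance paths, and port are placeholders:

```sh
# Analyzes (and, where possible, live-probes) TCP reachability on port 5432
gcloud network-management connectivity-tests create web-to-db \
    --source-instance=projects/my-proj/zones/us-east1-b/instances/web-1 \
    --destination-instance=projects/my-proj/zones/us-east1-b/instances/db-1 \
    --protocol=TCP --destination-port=5432
```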
The GCP Networking Mental Model
GCP networking makes sense once you accept its premise: you're not building a virtual network from scratch. You're plugging into Google's network and defining which parts of it you want to use.
VPCs are global because the underlying network is global. Firewall rules use tags because that's how you describe intent at scale. Load balancers use anycast because that's how Google routes traffic internally. Premium Tier exists because Google has a global backbone and will let you use it—for a price.
This isn't better or worse than AWS or Azure. It's different, reflecting different origins. AWS built networking for customers who wanted virtual data centers. GCP built networking for customers who wanted access to Google's infrastructure.
Understand which one matches your mental model, and the details fall into place.