Cross-VPC Datastore Connectivity: When VPC Peering Works and When You Need PSC

By Amanpreet Kaur
Published: May 11, 2026 · 11 min read

A team provisions a Cloud SQL Postgres instance in one GCP VPC and wants to connect it to a GKE cluster in a different VPC. The team’s instinct, learned from AWS habits, is to set up VPC peering between the two networks. They do that. Peering shows as ACTIVE in the console. They configure routes on both sides. Routes look right. They try to connect from the GKE pod to the Cloud SQL private IP. The connection times out.

Two hours of debugging later (sometimes more — a Google support thread surfaces eventually), the team learns the real story: Cloud SQL’s private IP does not live in their VPC or in the peered VPC. It lives in Google’s managed network, reached through a separate primitive called Private Service Connect (PSC). VPC peering does not route to it. The team starts over with PSC.

This is the failure mode most provisioning tools accidentally inherit from their AWS-first design. They built a “connect this datastore to that cluster” button that assumes VPC peering is the right answer everywhere, then either silently fail on Cloud SQL or politely skip it and tell the customer to handle networking themselves.

ZopNight first shipped the same-VPC and cross-VPC same-region paths. A later update extended cross-VPC to cross-region for AWS, fixed VPC-ID propagation for discovered clusters, and shipped PSC as the correct primitive for cross-VPC Cloud SQL. Each cloud’s networking layer gets the right answer instead of one button trying to be universal.

This post walks through what cross-VPC connectivity actually is, why each cloud needs its own primitive, what PSC does that peering does not, and how the provisioning job surfaces the work as named steps you can debug.

Why cross-VPC datastore connection is the step most tools botch

The cloud-to-cloud “connect” button looks simple on paper. The reality has three failure modes depending on which provisioning tool you pick.

| Tool behaviour | What happens | Customer effort |
| --- | --- | --- |
| Skips cross-VPC entirely | "Same VPC only" — error if the cluster and datastore are in different VPCs | Customer sets up peering, routes, security groups by hand |
| One-size-fits-all peering | Attempts VPC peering for every cloud + datastore combination | Fails on Cloud SQL; succeeds elsewhere with partial-failure modes |
| Per-cloud, per-datastore-class | Picks the correct primitive per case (peering / PSC / cross-region peering) | Click Connect, watch the job complete |

The third row is the work. The other two leave customers wiring networking by hand in a tool that was supposed to abstract it. The cost of getting it wrong is high: a half-configured peering that looks live in one console and dead in another is the worst kind of bug to debug at 2 AM.

ZopNight’s cross-VPC path picks the primitive based on three inputs: the cluster’s cloud and VPC, the datastore’s cloud and VPC, and the datastore’s class (RDS vs Cloud SQL vs Memorystore vs Azure MySQL). The matrix of (cloud × datastore-class × same-region-or-not) drives which provisioning steps run. The customer sees one button, the right thing happens behind it.
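The decision matrix can be sketched as a small lookup. This is an illustrative sketch only — the function name `pick_primitive` and the cloud/class identifiers are hypothetical, not ZopNight’s actual internals:

```shell
# Illustrative sketch of the primitive-selection matrix; all identifiers
# here are placeholders, not ZopNight's real ones.
pick_primitive() {
  cloud="$1"; datastore_class="$2"; same_vpc="$3"
  # Same-VPC needs no networking primitive at all
  if [ "$same_vpc" = "yes" ]; then
    echo "none (private IP + secret injection)"
    return
  fi
  case "$cloud/$datastore_class" in
    aws/rds|aws/elasticache) echo "vpc-peering" ;;
    gcp/memorystore)         echo "vpc-peering + custom-route exchange" ;;
    gcp/cloudsql)            echo "private-service-connect" ;;
    azure/*)                 echo "vnet-peering" ;;
    *)                       echo "unsupported" ;;
  esac
}

pick_primitive gcp cloudsql no   # private-service-connect
pick_primitive aws rds no        # vpc-peering
```

The key property is that the Cloud SQL row never falls through to generic peering — the wrong primitive is unreachable by construction.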

Same-VPC is the simple case (and the right starting point)

Same-VPC connection is the easy case. Cluster and datastore live in the same VPC, the datastore is reachable on its private IP from the cluster’s CIDR, and the only work is exposing the connection details to the application.

Diagram 1

ZopNight’s same-VPC flow does three things:

| Step | What it does |
| --- | --- |
| 1 | Generate a db-connection Kubernetes secret in the application’s namespace |
| 2 | Populate the secret with host, port, username, password, database, and a full DATABASE_URL |
| 3 | Surface the secret name to the operator so they can wire env or envFrom |

Total elapsed time: 5 to 15 seconds. No peering work, no route tables, no security-group changes. The datastore is reachable on its private IP from the cluster’s existing CIDR; the operator’s job is to wire the application to read the secret.
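The DATABASE_URL the secret carries is just the individual fields composed into one connection string. A minimal sketch, with placeholder values (the kubectl step requires cluster access, so it is shown commented out):

```shell
# Compose a DATABASE_URL from the individual connection fields.
# All values below are placeholders.
DB_HOST="10.0.12.34"; DB_PORT="5432"
DB_USER="app"; DB_PASS="s3cret"; DB_NAME="orders"
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$DATABASE_URL"

# Writing the same fields into a db-connection secret by hand would look
# roughly like this (needs kubectl and cluster access):
# kubectl -n my-app create secret generic db-connection \
#   --from-literal=host="$DB_HOST" --from-literal=port="$DB_PORT" \
#   --from-literal=username="$DB_USER" --from-literal=password="$DB_PASS" \
#   --from-literal=database="$DB_NAME" \
#   --from-literal=DATABASE_URL="$DATABASE_URL"
```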

Same-VPC is the right starting point for new datastores. Cross-VPC is only worth the complexity when same-VPC is genuinely impossible: the datastore is provisioned in a shared services VPC, the cluster cannot move, the org has a network architecture that requires the separation. For most new applications, provisioning the datastore in the cluster’s own VPC is cheaper than the cross-VPC dance later.

Three primitives, one per cloud

When same-VPC is not an option, the cross-VPC primitive depends on the cloud and the datastore class.

| Cloud | Datastore class | Primitive | Why this primitive |
| --- | --- | --- | --- |
| AWS | RDS, ElastiCache | VPC peering | RDS / ElastiCache run in customer VPCs; peering routes to them |
| GCP | Memorystore Redis | VPC peering + custom-route exchange | Memorystore runs in a producer VPC; needs custom-route exchange for route propagation |
| GCP | Cloud SQL | Private Service Connect (PSC) | Cloud SQL’s private IP is in Google’s managed network; peering does not reach it |
| Azure | MySQL Flexible, Postgres Flexible, Cache for Redis, SQL | VNet peering | Same shape as AWS; the resource runs in the customer VNet |

Each row has its own setup steps, its own validation, its own audit trail. The provisioning job knows which row applies based on the cluster + datastore pair the operator picked.

The choice is not optional. Trying VPC peering for Cloud SQL produces a setup that looks valid in gcloud compute networks peerings describe but cannot route to the Cloud SQL private IP. The customer cannot fix this by adding more peering — the right primitive is PSC, and the only way to get there is to start over. ZopNight’s cross-VPC flow picks PSC at job-creation time so the wrong primitive never runs.

PSC for Cloud SQL: the under-appreciated primitive

Cloud SQL’s networking model is the part most cross-cloud teams discover the hard way. Cloud SQL instances live in a VPC owned by Google, not in the customer’s VPC. The private IP the Cloud SQL console shows is reachable from inside Google’s managed network through one of two mechanisms: Private Services Access (PSA, the older approach) or Private Service Connect (PSC, the newer one).

VPC peering between two customer VPCs does not bridge to Google’s managed network. PSC does, by creating a service-attached endpoint inside the customer’s VPC that proxies to the Cloud SQL instance on Google’s side.

Diagram 2

ZopNight’s PSC flow runs five steps for cross-VPC Cloud SQL:

| Step | What it does |
| --- | --- |
| 1 | Enable PSC on the Cloud SQL instance (idempotent if already enabled) |
| 2 | Auto-create a subnet in the cluster’s network, in the datastore’s region |
| 3 | Reserve an internal IP for the PSC endpoint |
| 4 | Create a forwarding rule with AllowPscGlobalAccess to support cross-region |
| 5 | Inject the secret with the PSC endpoint IP (not the Cloud SQL private IP) |
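Done by hand, the middle three steps map onto gcloud roughly as follows. This is a sketch, not ZopNight’s implementation: the project, region, network, and resource names are placeholders, and it assumes PSC is already enabled on the instance and that you have its service-attachment URI (visible in the instance’s description once PSC is on):

```shell
# Sketch only; every name and value below is a placeholder.
PROJECT=my-project REGION=europe-west2 NET=cluster-vpc
SERVICE_ATTACHMENT="projects/example/regions/europe-west2/serviceAttachments/example-sa"

# Step 2: subnet in the cluster's network, in the datastore's region
gcloud compute networks subnets create psc-subnet \
  --project="$PROJECT" --region="$REGION" \
  --network="$NET" --range=10.100.0.0/24

# Step 3: reserve an internal IP for the PSC endpoint
gcloud compute addresses create cloudsql-psc-ip \
  --project="$PROJECT" --region="$REGION" \
  --subnet=psc-subnet --addresses=10.100.0.5

# Step 4: forwarding rule targeting the instance's service attachment,
# with global access so a cross-region cluster can reach the endpoint
gcloud compute forwarding-rules create cloudsql-psc-endpoint \
  --project="$PROJECT" --region="$REGION" --network="$NET" \
  --address=cloudsql-psc-ip \
  --target-service-attachment="$SERVICE_ATTACHMENT" \
  --allow-psc-global-access
```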

The pod connects to the PSC endpoint IP. Google’s network routes the connection to the Cloud SQL instance. From the application’s perspective, the connection looks like any other Postgres connection; the routing is transparent.

PSC is the right answer for any cross-VPC Cloud SQL setup. ZopNight’s job-creation logic picks PSC whenever the datastore is Cloud SQL and the cluster is in a different VPC. The customer does not have to know PSC exists; they click Connect and the right primitive runs.

AWS cross-region VPC peering

AWS RDS and ElastiCache run in customer-owned VPCs, so VPC peering is the right primitive for cross-VPC AWS. A later update extended this to cross-region: the cluster is in us-east-1, the datastore is in eu-west-2, peering can span the two regions.

| Step | What it does | Where it runs |
| --- | --- | --- |
| Detect target region | Read the datastore’s region from its ARN | ZopNight backend |
| Initiate peering request | aws ec2 create-vpc-peering-connection --peer-region eu-west-2 | Cluster’s region |
| Accept peering | aws ec2 accept-vpc-peering-connection in the target region | Datastore’s region |
| Enable DNS resolution | --accepter-peering-connection-options AllowDnsResolutionFromRemoteVpc=true and the reciprocal requester option | Both regions |
| Add reciprocal route tables | Add routes on both VPCs pointing to the peer’s CIDR | Both regions |
| Narrow security group ingress | Add ingress on the datastore’s security group: source = cluster’s CIDR, port = engine’s port | Datastore’s region |

The accept step is the part most one-region-aware tools get wrong. AWS requires the accept to happen in the accepter’s region, not the initiator’s. Sending the accept API call to us-east-1 for a peering whose accepter is in eu-west-2 returns an error. ZopNight’s region-aware accept does the API call against the right region without operator intervention.
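The same sequence, done by hand with the AWS CLI, looks roughly like this. All VPC, route-table, and security-group IDs, CIDRs, and regions are placeholders; the point is which region each call targets:

```shell
# Sketch only; IDs, CIDRs, and regions below are placeholders.
REQ_REGION=us-east-1 ACC_REGION=eu-west-2

# Initiate from the cluster's region, pointing at the datastore's region
PCX_ID=$(aws ec2 create-vpc-peering-connection \
  --region "$REQ_REGION" \
  --vpc-id vpc-0aaa --peer-vpc-id vpc-0bbb --peer-region "$ACC_REGION" \
  --query 'VpcPeeringConnection.VpcPeeringConnectionId' --output text)

# Accept in the ACCEPTER's region -- the call one-region tools misroute
aws ec2 accept-vpc-peering-connection \
  --region "$ACC_REGION" --vpc-peering-connection-id "$PCX_ID"

# DNS resolution over the peering, set from each side in its own region
aws ec2 modify-vpc-peering-connection-options \
  --region "$ACC_REGION" --vpc-peering-connection-id "$PCX_ID" \
  --accepter-peering-connection-options AllowDnsResolutionFromRemoteVpc=true
aws ec2 modify-vpc-peering-connection-options \
  --region "$REQ_REGION" --vpc-peering-connection-id "$PCX_ID" \
  --requester-peering-connection-options AllowDnsResolutionFromRemoteVpc=true

# One of the reciprocal routes, plus the narrowed ingress rule
aws ec2 create-route --region "$REQ_REGION" --route-table-id rtb-0ccc \
  --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id "$PCX_ID"
aws ec2 authorize-security-group-ingress --region "$ACC_REGION" \
  --group-id sg-0ddd --protocol tcp --port 5432 --cidr 10.0.0.0/16
```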

Cross-region peering has a real cost: cross-region traffic charges apply ($0.02/GB on AWS as of 2026). The provisioning job logs the topology so the operator can see the cost implication; the Atlas view (v2.0) shows the edge between the regions so the cost is visible in the dashboard.

Discovered clusters now carry VPC ID

A subtle bug existed in earlier versions: clusters added via “Add Existing” (not provisioned through ZopDay) were missing the VPC ID field in their space config. The discovery pipeline saw the cluster but did not persist the network identifier the cross-VPC flow needs.

The effect: connecting a discovered cluster to a datastore in a different VPC silently skipped the peering steps, because the provisioner could not tell which VPC the cluster was in. The connect job reported success, but nothing actually routed across the VPCs, and the application failed to connect.

v1.8 fixes this end-to-end. The Add-Existing flow now captures and persists the VPC ID for each cluster type:

| Cluster type | Field captured |
| --- | --- |
| EKS | vpcId from the cluster’s resourcesVpcConfig |
| GKE | network (full network resource URL) |
| AKS | vnetName from the agent pool’s vnetSubnetID |

Discovered clusters now work in the cross-VPC flow exactly the same way as provisioned ones. The operator does not have to do anything special; the field is captured at registration and used by the cross-VPC primitives.
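For reference, the same network identifiers can be read from each cloud’s CLI. Cluster, region, and resource-group names below are placeholders:

```shell
# EKS: VPC ID from the cluster's resourcesVpcConfig
aws eks describe-cluster --name prod-cluster \
  --query 'cluster.resourcesVpcConfig.vpcId' --output text

# GKE: full network resource URL from the cluster's networkConfig
gcloud container clusters describe prod-cluster --region us-central1 \
  --format='value(networkConfig.network)'

# AKS: the agent pool's subnet ID, from which the VNet name is parsed
az aks show --resource-group prod-rg --name prod-cluster \
  --query 'agentPoolProfiles[0].vnetSubnetId' --output tsv
```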

Step-named provisioning logs

Provisioning jobs that span multiple cloud APIs are noisy by default. Most tools log them as step_1, step_2, step_3. When something stalls at step 6, the operator has no way to know what step 6 was without reading the source code.

ZopNight’s step labels are humanised. The same pipeline above logs as:

| Step name | What it does |
| --- | --- |
| Setup VPC Peering | Create peering request |
| Wait for Peering | Poll for ACTIVE status |
| Enable DNS Resolution | Set DNS flags on both sides |
| Add Route Tables | Add reciprocal routes |
| Configure Security Group | Narrow ingress rule |
| Inject Connection Secret | Write K8s secret in app namespace |

The step name appears in the Provisioning Jobs UI, the job-detail view, and the audit log. An operator debugging a stalled job at 2 AM reads “Wait for Peering” and knows to check the AWS console’s VPC Peering page in both regions. With step_2, the operator would have to spelunk.

The step names also distinguish job kinds. CREATE jobs say “Provision VPC”. UPDATE jobs say “Modify Instance Type”. CONNECT jobs say “Setup VPC Peering”. DELETE jobs say “Teardown Peering”. The job list filter has a Type chip set that maps to these kinds, so the operator can find every CONNECT job in the last 30 days with one click.

How to use the cross-VPC flow

The cross-VPC flow is one button in the dashboard.

| Step | Action | Where |
| --- | --- | --- |
| 1 | Open the datastore in the dashboard | Datastores list → click into the datastore |
| 2 | Click Connect | Top-right of the datastore detail page |
| 3 | Pick a cluster | Cluster picker drops down; clusters in the same VPC marked “same-vpc” |
| 4 | Confirm | ZopNight detects the topology and picks the right primitive |
| 5 | Watch the job | Provisioning Jobs page shows named steps progressing |
| 6 | Use the secret | The injected secret appears in the cluster’s namespace; wire envFrom |
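Wiring the application to the injected secret is the only step the operator does by hand. Two common ways to do it, with placeholder namespace and deployment names:

```shell
# Push every key in the secret into the deployment's environment
kubectl -n my-app set env deployment/my-app --from=secret/db-connection

# Or inspect a single key directly, e.g. the composed connection string
kubectl -n my-app get secret db-connection \
  -o jsonpath='{.data.DATABASE_URL}' | base64 -d
```

The envFrom approach referenced in the table does the same thing declaratively, via a secretRef entry in the pod spec.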

The job takes 60 to 180 seconds end-to-end depending on the provider:

| Topology | Typical duration |
| --- | --- |
| Same-VPC | 5-15 seconds |
| Cross-VPC, same-region | 45-90 seconds |
| AWS cross-region peering | 90-180 seconds |
| PSC for Cloud SQL | 60-120 seconds |

The fastest case dominates new datastore provisioning. The longer cases happen when the architecture is already in place: a cluster exists in one VPC, a shared datastore lives in another, and the operator needs to bridge them once and forget.

After the job completes, the application reads the secret. No further networking work is needed; the peering or PSC endpoint stays in place until the datastore is disconnected or deleted. Disconnection runs the inverse job (named “Teardown Peering” or “Remove PSC Endpoint”) and the cloud resources are cleaned up.

What’s next

Two pieces of cross-VPC work are queued for future releases:

| Coming work | What it adds |
| --- | --- |
| Cross-cloud datastore connections | AWS RDS connected to a GKE cluster, or vice versa. Likely via VPN or transit-gateway equivalents |
| Per-region cross-VPC cost tracking | Egress charges across the peering surfaced on Cost Reports |
| Azure parity for cross-region peering | Cross-region VNet peering with the same operator UX as AWS |

The current state covers the common cases for AWS, GCP, and Azure customers who run a cluster in one VPC and a datastore in another. The PSC path for Cloud SQL is the differentiator: most cross-cloud provisioning tools either skip Cloud SQL or fail on it, and ZopNight gets it right out of the box.

If you have a datastore that needs to talk to a cluster in a different VPC, the right starting point is the Connect button on the datastore detail page. ZopNight picks the primitive, runs the provisioning job, and injects the secret. The work that used to take a half-day of console clicks and a Stack Overflow tab fits into one button.

Written by Amanpreet Kaur, Engineer at Zop.Dev