ZopNight Atlas: Region-Aware Cloud Mapping (Globe + Canvas)

By Amanpreet Kaur
Published: May 11, 2026 · 10 min read

A multi-cloud dashboard usually starts as a flat list of resources. One row per resource: name, type, region, status. Easy to scroll, easy to filter, easy to forget that the region column is the most important field on the row. A team running EKS in us-east-1 and Cloud SQL in us-central1 sees two rows that look identical except for the region cell, with no indication that those resources are 2,100 km apart, that traffic between them adds 9 to 15 ms of round-trip latency, and that every gigabyte they exchange costs $0.02 in cross-region egress.

ZopNight Atlas, shipped as part of v2.0, is the answer to the flat-list problem. Atlas renders every discovered resource across AWS, GCP, and Azure on a map. Two presentations: a Globe view that prioritises geography (3D, regions in their real locations) and a Canvas view that prioritises topology (2D, force-directed dependency layout). The operator picks the view that matches the question.

This post walks through what Atlas is, what each view is good for, how it works under the hood, and why it matters beyond the visualisation itself — Atlas is the foundation that makes region-aware governance possible.

Why a flat resource list is the wrong shape for multi-cloud

Multi-cloud operators answer four questions every day. A flat list answers one of them well.

| Question | Flat list | Map |
|---|---|---|
| What resources do I have? | Excellent (scroll, filter, search) | Good (visual count per region) |
| Where are my resources? | Poor (region column is hidden in detail) | Native — geography is the layout |
| What depends on what across regions? | Impossible without flow logs | Native — edges on the map |
| Are residency constraints being violated? | Manual audit, tag-driven, lossy | Native — one EU region badge is enough |

The “where” and “what depends on what” questions get harder as the fleet grows. A 20-resource fleet fits in a single mental model. A 300-resource fleet across three clouds and seven regions does not. Operators end up keeping a separate diagram in Lucidchart or Miro that goes stale within a sprint of being drawn. Atlas is the diagram that updates itself from discovery.

The list view stays useful for the “what do I have” question. Atlas does not replace it. The dashboard ships both views and the operator chooses.

What Atlas actually is

Atlas is a region-aware map of every resource ZopNight has discovered across the connected cloud accounts. The map renders in two presentations, switchable with one toggle in the header:

| View | Layout | Best for | Trade-off |
|---|---|---|---|
| Globe | 3D, geography-accurate | Where things are, cross-region distances, residency checks | Topology obscured by overlapping markers in dense regions |
| Canvas | 2D, force-directed | What depends on what, dependency clustering | Geography lost (a us-east-1 resource can appear next to an ap-south-1 resource if they're connected) |

The two views are not separate products. They are two ways of arranging the same node-and-edge graph. The data is identical; the layout algorithm differs.

A typical session: the operator opens Atlas, sees the Globe, spots an unexpected resource cluster in eu-central-1 (the team is supposed to be running everything in us-east-1), clicks the cluster to see what’s there, switches to Canvas to trace what depends on those resources, clicks through to the resource detail drawer, takes action. Two views, one workflow.

How Atlas works under the hood

Atlas reuses ZopNight’s existing discovery pipeline. There is no new instrumentation on the customer side; every resource already carries a region tag from the cloud provider’s discovery API.

Diagram 1

The lat/lon map is the only new data ZopNight maintains for Atlas. AWS publishes the geographic location of each region (us-east-1 → Northern Virginia, eu-west-2 → London); GCP and Azure do the same. Atlas keeps a curated mapping table and renders each resource at its region’s centroid.
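The curated mapping table described above can be sketched as a simple lookup keyed by provider and region. This is a hypothetical illustration, not ZopNight's actual data layer; the region IDs are real provider names, and the coordinates are approximate public region locations, not data-centre addresses.

```python
# Hypothetical sketch of a curated region -> centroid table.
REGION_CENTROIDS = {
    # (provider, region):    (latitude, longitude)
    ("aws", "us-east-1"):    (38.9, -77.4),   # Northern Virginia
    ("aws", "eu-west-2"):    (51.5, -0.1),    # London
    ("gcp", "us-central1"):  (41.3, -95.9),   # Iowa
    ("azure", "westeurope"): (52.4, 4.9),     # Netherlands
}

def centroid_for(provider: str, region: str) -> tuple:
    """Return the (lat, lon) a resource in this region renders at."""
    try:
        return REGION_CENTROIDS[(provider, region)]
    except KeyError:
        # A curated table fails loudly on unknown regions rather than
        # silently dropping markers off the map.
        raise ValueError(f"no centroid curated for {provider}/{region}")
```

Because every resource already carries a region tag from discovery, this one small table is all the geographic data the Globe needs.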

Cross-region dependencies come from the same topology data ZopNight already uses for the resource detail drawers. When a Cloud SQL instance in us-central1 is connected to an EKS cluster in us-east-1 (via the datastore-connection flow), Atlas draws an edge between them. The edge thickness or colour can be configured to encode traffic volume or cost — both come from the existing cost-allocation pipeline.
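A configurable encoding of traffic and cost onto edge style might look like the sketch below. The thresholds and the clamp are illustrative assumptions, not the dashboard's actual defaults; the point is that volume drives thickness and cost drives colour.

```python
def edge_style(gb_per_day: float, monthly_cost_usd: float) -> dict:
    """Map an edge's traffic volume to stroke width and its cost to colour.

    Illustrative thresholds only; the real encoding is configurable.
    """
    # Clamp width so one very loud edge doesn't dominate the whole map.
    width = min(1 + gb_per_day / 100, 8)
    colour = ("red" if monthly_cost_usd > 500
              else "amber" if monthly_cost_usd > 50
              else "grey")
    return {"width": width, "colour": colour}
```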

Refresh cadence matches the rest of the dashboard. A new resource created at 14:32 appears in Atlas by 14:37 (one discovery cycle later). A resource deleted at 14:32 disappears by 14:37. The map is the same freshness as the resource list; the rendering is the only thing that changes.
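The cadence arithmetic is simple to sketch: given any past discovery tick, a change becomes visible at the first tick strictly after it, which is why a 14:32 change shows up by 14:37 on a 5-minute cycle. The function name and anchor-based model here are illustrative assumptions.

```python
from datetime import datetime, timedelta

CYCLE = timedelta(minutes=5)  # discovery cadence, per the dashboard

def visible_by(change_time: datetime, cycle_anchor: datetime) -> datetime:
    """First discovery tick strictly after change_time.

    `cycle_anchor` is any past tick; ticks repeat every CYCLE. A change
    landing exactly on a tick is picked up by the *next* cycle.
    """
    ticks = (change_time - cycle_anchor) // CYCLE + 1
    return cycle_anchor + ticks * CYCLE
```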

Globe view: cross-region dependency check

The Globe’s primary use is finding cross-region dependencies that should not exist or should be optimised.

| Finding | Why it matters | Typical action |
|---|---|---|
| EKS cluster in region A talking to RDS in region B | 9–15 ms added latency per query; data egress at $0.02/GB | Migrate the RDS to region A, or accept the cost with eyes open |
| Customer data in a non-residency-permitted region | Compliance violation | Move the data or update the residency exception |
| Production traffic flowing through a sandbox account region | Misrouted traffic, likely tag drift | Investigate the routing config |
| Two clusters in the same region talking through a public endpoint | Avoidable egress, security risk | Switch to a private endpoint / VPC peering |
| Single-region resource for a multi-region service | Latent failover gap | Replicate to a second region |

Before Atlas, each of these findings required either reading cloud flow logs (expensive, slow, often disabled in dev environments) or asking the team that built the service. The map surfaces them without a query.
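The first finding in the table reduces to a one-liner over the topology graph: any edge whose endpoints sit in different regions is worth a look, and no flow logs are needed. A minimal sketch, with hypothetical input shapes:

```python
def cross_region_edges(edges, region_of):
    """Return the edges that cross a region boundary.

    `edges` is a list of (src_id, dst_id) pairs from the topology graph;
    `region_of` maps resource id -> region. Both shapes are illustrative.
    """
    return [(src, dst) for src, dst in edges
            if region_of[src] != region_of[dst]]
```

For example, an EKS-to-RDS edge spanning us-east-1 and eu-west-2 would be flagged, while an in-region EKS-to-Redis edge would not.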

The most common reaction to opening Atlas for the first time is “I didn’t know we were doing that.” A platform team that thought they had everything in two regions finds resources scattered across five. A FinOps team that expected most traffic to stay within us-east-1 sees a thick edge from us-east-1 to eu-west-2 and discovers a misconfigured S3 cross-region replication that has been silently moving 800 GB/day for six weeks.
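The back-of-envelope cost of that silent replication follows directly from the numbers in the post (800 GB/day at the $0.02/GB cross-region rate, over roughly six weeks):

```python
GB_PER_DAY = 800
RATE_CENTS_PER_GB = 2     # $0.02/GB, kept in cents for exact arithmetic
DAYS = 7 * 6              # "six weeks"

daily_usd = GB_PER_DAY * RATE_CENTS_PER_GB / 100   # ≈ $16/day
total_usd = daily_usd * DAYS                       # ≈ $672 over six weeks
```

Not a catastrophic bill, but an entirely invisible one until the edge showed up on the map.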

Canvas view: when topology matters more than geography

The Globe’s geographic accuracy becomes a liability when the question is “what depends on what” rather than “where is it.” A dense us-east-1 region with 80 resources renders as an unreadable cluster of overlapping markers; the dependencies inside the cluster are invisible.

Canvas is the same data with a different layout. The force-directed algorithm positions connected resources near each other regardless of geography. A resource graph that looks like a hairball on the Globe becomes a navigable dependency map on the Canvas: each service’s call graph fans out from its central node, downstream dependencies cluster on one side, upstream on the other.
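The general idea behind a force-directed layout can be sketched in a few dozen lines: every pair of nodes repels, connected pairs also attract, and the system settles into a layout where linked resources sit near each other regardless of region. This is a generic Fruchterman–Reingold-style sketch, not ZopNight's actual layout code.

```python
import math, random

def force_layout(nodes, edges, steps=200, k=1.0, seed=0):
    """Toy force-directed layout: all pairs repel, edges attract."""
    rng = random.Random(seed)
    pos = {n: [rng.uniform(-1, 1), rng.uniform(-1, 1)] for n in nodes}
    for _ in range(steps):
        disp = {n: [0.0, 0.0] for n in nodes}
        # Repulsion between every pair of nodes.
        for i, a in enumerate(nodes):
            for b in nodes[i + 1:]:
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[a][0] += f * dx / d; disp[a][1] += f * dy / d
                disp[b][0] -= f * dx / d; disp[b][1] -= f * dy / d
        # Attraction along edges pulls connected nodes together.
        for a, b in edges:
            dx = pos[a][0] - pos[b][0]
            dy = pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[a][0] -= f * dx / d; disp[a][1] -= f * dy / d
            disp[b][0] += f * dx / d; disp[b][1] += f * dy / d
        # Take a small, capped step along the net displacement.
        for n in nodes:
            dx, dy = disp[n]
            d = math.hypot(dx, dy) or 1e-9
            step = min(d, 0.05)
            pos[n][0] += dx / d * step
            pos[n][1] += dy / d * step
    return pos
```

Run on a graph where only `a` and `b` are connected, the connected pair ends up much closer together than either is to the unconnected `c`, which is exactly the property that turns a geographic hairball into a readable dependency map.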

The operator chooses based on the question. “Where is my fleet?” → Globe. “What is the dependency tree of this service?” → Canvas. The toggle is one button in the Atlas header; the underlying graph is identical.

The Canvas view also makes Atlas usable on smaller screens. A 3D globe needs real estate; a 2D force layout works at any aspect ratio. A laptop user during an incident at 2 AM can see the dependency picture without zooming and panning the Globe.

Foundation for region-aware governance

Atlas is more than a visualisation. It is the shared region-mapping data model that every other feature in ZopNight and beyond builds on.

| Feature built on Atlas | What it does with the region model |
|---|---|
| Cross-VPC datastore connectivity | Uses the region tag to decide between same-region peering and cross-region peering (or Private Service Connect for Cloud SQL) |
| Cost-per-region attribution | Aggregates cost by region using the same lat/lon map |
| Residency-aware policy rules | Lets policies ask "is this resource in an EU region?" without each rule re-implementing geography |
| Multi-region failover topology | Visualises which regions back up which other regions |
| Future: region-pinned scheduling | Schedules that fire only in specific regions, for residency-driven shutdown windows |

Before Atlas, each of these features would have had to maintain its own region awareness. Some did, badly. The cost-attribution code knew about regions; the policy engine did not. Tagging a resource with residency=eu worked until someone forgot the tag.
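The difference between tag-driven and model-driven residency checks can be sketched as follows. Instead of trusting a `residency=eu` tag that someone may have forgotten, the policy asks the shared region model where the resource actually runs. The EU region set, the `data_class` field, and the function names are illustrative assumptions.

```python
# Hypothetical slice of the shared region model's EU membership set.
EU_REGIONS = {
    ("aws", "eu-west-1"), ("aws", "eu-west-2"), ("aws", "eu-central-1"),
    ("gcp", "europe-west1"), ("azure", "westeurope"), ("azure", "northeurope"),
}

def in_eu(provider: str, region: str) -> bool:
    """Membership is a property of the region, not of a per-resource tag."""
    return (provider, region) in EU_REGIONS

def residency_violations(resources):
    """Resources marked as EU customer data that sit outside EU regions."""
    return [r["id"] for r in resources
            if r.get("data_class") == "eu_customer_data"
            and not in_eu(r["provider"], r["region"])]
```

A forgotten tag can no longer hide a violation of the "where is it" half of the question; only the data classification remains operator-supplied.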

Atlas centralises the region model. Every feature that needs to know “where is this resource” or “is this in an EU region” reads from the same data layer. The platform-wide story for region-aware governance compounds from there.

How to use Atlas day to day

The typical Atlas session takes 30 to 90 seconds and produces a finding the operator can act on.

| Step | Action | What you see |
|---|---|---|
| 1 | Open Atlas from the sidebar | Globe view by default; all discovered regions lit up |
| 2 | Look for unexpected clusters | A region with resources you didn't expect, or no resources in a region you expected |
| 3 | Inspect cross-region edges | Thick edges = high traffic; coloured edges = high cost or high latency |
| 4 | Click a resource or cluster | Same detail drawer as the resource list |
| 5 | Toggle to Canvas if needed | When the question shifts to "what depends on what" |
| 6 | Filter by cluster, account, or resource type | Focuses the map on what you care about |

The drawer has the same action surface as the list view: stop, start, tag, schedule, attach a remediation. Atlas does not duplicate the actions; it duplicates the navigation. The map is a fast index into the resources you want to touch; the actions stay where they always were.

Atlas is read-only as a primary surface. It does not let you drag resources between regions (no map can do that), and it does not let you create resources by clicking on a region (use ZopDay for provisioning). It shows what is and lets you click through to act on it.

For most operators the Globe is the right starting view. Switch to Canvas when the question is about dependencies rather than geography. Both views share the same 5-minute refresh cadence as the rest of the dashboard, so what you see is what the rest of ZopNight sees; there is no separate caching layer to reason about.

What’s next for Atlas

Atlas in v2.0 is the foundation. The version-by-version evolution layers more meaning on the same map:

| Coming work | What it adds to Atlas |
|---|---|
| Per-region cost overlays | Heat-map colour-coded by spend per region |
| Per-region drift indicators | Markers that flag regions where drift has been detected |
| Embedded Atlas in customer status pages | Public read-only view of geography (regions, not resources) |
| Custom annotations | Operator-defined labels on regions or resources |
| Saved views | Per-team filtered views with shareable URLs |

Each of these is additive on top of the existing region model. Nothing about Atlas’s data layer needs to change to support them. The pattern is to keep adding meaning to the same map, not to fork new views.

If you have not opened Atlas yet, the first time is usually informative. Most teams find at least one resource in an unexpected place or one cross-region dependency they would have addressed if they had known about it. The map is the surface that turns “I think we run things in two regions” into “here is exactly where everything is, refreshed five minutes ago.”

Written by Amanpreet Kaur, Engineer at Zop.Dev.