I recently started working on a logistics side project that required real-time geofencing—specifically, detecting when assets enter or exit defined polygon zones.
I looked at the market leaders (Radar, Google Maps Geofencing API, etc.), and while they are excellent, the pricing models usually charge per tracked user or per API call. For a bootstrapped project where I might have thousands of "pings" but zero revenue initially, paying for every spatial check wasn’t viable.
So, I decided to engineer my own solution.
Here is a breakdown of how I built a serverless, event-driven Geo-fencing Engine using Go, PostGIS, and Cloud Run.
## The Requirements
- **Real-time:** The latency between a location ping and a webhook event needed to be sub-second.
- **Scalable to Zero:** I didn’t want to pay for a K8s cluster idling at 3 AM.
- **Stateless:** The system needed to handle concurrent streams without sticky sessions.

## The Architecture
I chose Google Cloud Platform (GCP) for the infrastructure, managed via Terraform.
### 1. The Compute Layer: Go + Cloud Run
I wrote the ingestion service in Go 1.22. Go was the obvious choice for two reasons:
- **Concurrency:** Handling thousands of incoming HTTP requests with lightweight goroutines.
- **Cold Starts:** Since I’m using Cloud Run (serverless), the service scales down to zero when not in use. Go binaries start up incredibly fast compared to JVM or Node.js containers, minimizing the "cold start" latency penalty.
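To make that concrete, here is a minimal sketch of what the Cloud Run entry point can look like. The route, handler, and payload shape are illustrative assumptions rather than the actual implementation; the only Cloud Run-specific detail is reading the listening port from the `PORT` environment variable, and `net/http` gives you a goroutine per request for free.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"os"
)

// Ping is an illustrative payload shape; the real schema may differ.
type Ping struct {
	AssetID string  `json:"asset_id"`
	Lat     float64 `json:"lat"`
	Lng     float64 `json:"lng"`
}

func main() {
	// Cloud Run injects the port the container must listen on via $PORT.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}

	http.HandleFunc("/v1/pings", func(w http.ResponseWriter, r *http.Request) {
		var p Ping
		if err := json.NewDecoder(r.Body).Decode(&p); err != nil {
			http.Error(w, "bad payload", http.StatusBadRequest)
			return
		}
		// Next step: run the PostGIS point-in-polygon check (section 2)
		// and emit enter/exit webhooks.
		w.WriteHeader(http.StatusAccepted)
	})

	// net/http already serves each request on its own goroutine, which is
	// what makes thousands of concurrent pings cheap to handle.
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```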
### 2. The Spatial Layer: PostGIS

This is where the heavy lifting happens. I’m using Cloud SQL (PostgreSQL) with the PostGIS extension.

Instead of doing "Point-in-Polygon" math in the application layer (which is CPU-intensive and hard to get right for complex polygons and multipolygons), I offload it to the database.
The core logic boils down to a GiST spatial index on the geometry column plus queries like:

```sql
SELECT zone_id
FROM geofences
WHERE ST_Intersects(geofence_geometry, ST_SetSRID(ST_MakePoint($1, $2), 4326));
```
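On the Go side, each ping runs that lookup and the result is diffed against the zones the asset matched on its previous ping: anything new is an "enter" event, anything missing is an "exit". The sketch below is an assumption-laden illustration: it uses the pgx driver (my choice here, not necessarily the project's), reuses the table and column names from the query above, and leaves out where the previous zone set is persisted, since in a stateless service that state has to live in the database or a cache rather than in memory.

```go
package engine

import (
	"context"

	"github.com/jackc/pgx/v5/pgxpool"
)

// zonesAt returns the IDs of every geofence containing the given point.
// The query is the same ST_Intersects lookup shown above and relies on a
// GiST index on geofence_geometry to stay fast.
func zonesAt(ctx context.Context, db *pgxpool.Pool, lng, lat float64) (map[string]bool, error) {
	rows, err := db.Query(ctx,
		`SELECT zone_id
		   FROM geofences
		  WHERE ST_Intersects(geofence_geometry, ST_SetSRID(ST_MakePoint($1, $2), 4326))`,
		lng, lat)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	zones := make(map[string]bool)
	for rows.Next() {
		var id string
		if err := rows.Scan(&id); err != nil {
			return nil, err
		}
		zones[id] = true
	}
	return zones, rows.Err()
}

// diffZones turns the previous and current zone sets into enter/exit events.
func diffZones(prev, curr map[string]bool) (entered, exited []string) {
	for id := range curr {
		if !prev[id] {
			entered = append(entered, id)
		}
	}
	for id := range prev {
		if !curr[id] {
			exited = append(exited, id)
		}
	}
	return entered, exited
}
```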
### 3. The "Glue": The Client SDKs
Building the backend was only half the battle. The friction usually lies in the mobile app integration—handling location permissions, battery-efficient tracking, and buffering offline requests.
To solve this, I built (and open-sourced) client SDKs. For example, the Flutter SDK handles the ingestion stream and retries, acting as a clean interface to the engine.
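For a feel of what the SDKs take care of on the client side, here is a rough Go sketch of the buffer-and-retry idea. The types, endpoint, and backoff policy are hypothetical illustrations of the concept, not the actual SDK API (that lives in the repos linked below).

```go
package client

import (
	"bytes"
	"encoding/json"
	"net/http"
	"time"
)

// Ping and the endpoint below are illustrative, not the real SDK types.
type Ping struct {
	AssetID string  `json:"asset_id"`
	Lat     float64 `json:"lat"`
	Lng     float64 `json:"lng"`
}

// Buffer queues pings locally (e.g. while offline) and flushes them
// with simple exponential backoff once the network is available.
type Buffer struct {
	endpoint string
	queue    []Ping
}

func (b *Buffer) Add(p Ping) { b.queue = append(b.queue, p) }

func (b *Buffer) Flush() {
	backoff := time.Second
	for len(b.queue) > 0 {
		body, _ := json.Marshal(b.queue[0])
		resp, err := http.Post(b.endpoint, "application/json", bytes.NewReader(body))
		if err == nil && resp.StatusCode < 300 {
			resp.Body.Close()
			b.queue = b.queue[1:] // delivered; drop it from the buffer
			backoff = time.Second
			continue
		}
		if resp != nil {
			resp.Body.Close()
		}
		// A real client would cap retries and persist the queue across restarts.
		time.Sleep(backoff)
		backoff *= 2
	}
}
```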
## Trade-offs & Decisions

**Why not Redis (Geo)?** Redis has geospatial capabilities (GEOADD, GEORADIUS), but it is primarily optimized for "radius" (point + distance) queries. My use case required strict polygon geofencing (complex shapes). While Redis 6.2+ added rectangular bounding-box searches via GEOSEARCH, PostGIS remains the gold standard for robust topological operations on arbitrary polygons.
**Why Serverless?** The traffic pattern for logistics is spiky. It peaks during business hours and drops to near zero at night. Cloud Run allows me to pay strictly for the CPU time used during ingestion, rather than provisioning a fixed server.

## Open Source?
While the core backend engine runs internally for my project (to keep the infrastructure managed), I realized the Client SDKs are valuable on their own as a reference for structuring location ingestion.
I’ve open-sourced the SDKs to share how the protocol works:
- Go SDK: View on GitHub
- Flutter SDK: View on GitHub
## What’s Next?
I’m currently working through the "state drift" issue in Terraform and looking into moving the event bus to Pub/Sub for better decoupling.
I’d love to hear feedback on the architecture—specifically if anyone has experience scaling PostGIS for high-write workloads!