The maturation of WebAssembly (WASM) and emerging browser storage primitives have shifted the web application paradigm from traditional stateless models toward Local-First distributed systems. This technical analysis explores the implementation of a high-performance stack engineered to eliminate network latency (sub-millisecond UI) by running a full relational engine directly on the client.
The Technical Ecosystem

Throughout this analysis, we will deconstruct the integration of three core layers that facilitate bidirectional data persistence and synchronization. The goal is to demonstrate the transformation of the browser into a resilient database node, capable of handling local transactions and reconciling state with the server in a transparent, asynchronous manner.

1.1. The Rationale for WASM-based SQLite

Historically, web storage has been the primary bottleneck for complex applications. WebAssembly (WASM) redefines these constraints by enabling the original C source of SQLite to run directly within the browser. This is not an emulation, but a bytecode compilation executed by the browser engine.

1.2. Storage: The OPFS Revolution

Running SQLite solely in-memory is volatile; data is lost upon page refresh. To transform SQLite into an industrial-grade persistent database, we leverage the Origin Private File System (OPFS). OPFS is a private storage ecosystem within the File System Access API that enables the browser to manage files with an efficiency previously unattainable by legacy web APIs.

1.3. Implementation Strategy: Worker Threading

Executing SQLite on the Main Thread is non-viable, as intensive queries would cause the UI to hang. The standard architectural pattern for mitigating this is to run the database engine inside a dedicated Worker thread.

1.4. Frontend <-> SQLite Interaction: The Observer Pattern

In a Local-First architecture, the frontend does not "request" data in the traditional sense; instead, it observes the local state.
To ensure this interaction remains efficient, we implement a reactivity system centered on table-level tracking.

```typescript
// This mutation triggers a table-level invalidation in the Worker.
// The UI updates automatically in <1ms without a network round-trip.
const addTask = async (task: Task) => {
  await db.execute(
    "INSERT INTO tasks (id, content, status) VALUES (?, ?, ?)",
    [task.id, task.content, task.status]
  );
};
```
1.5. The Technical Challenge: Type Parity and Marshalling

The primary architectural friction between client and server stems from type system discrepancies. While PostgreSQL is a strictly typed system supporting complex data structures, SQLite utilizes Manifest Typing—where the data type is associated with the value itself rather than the column—and supports only five native storage classes. To ensure system-wide integrity, a robust Type Mapping layer must be implemented:

```typescript
const mapToSQLite = (payload: SyncPayload) => {
  return {
    ...payload,
    // SQLite doesn't have native JSONB, so we must stringify
    data: JSON.stringify(payload.data),
    // Ensure a consistent timestamp format for SQLite date functions
    updated_at: new Date(payload.updated_at).toISOString()
  };
};
```
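The inverse transformation is needed when local rows are pushed back upstream. Below is a minimal sketch of that reverse mapping; the `SQLiteRow` shape and `mapToPostgres` name are illustrative assumptions, not part of any specific SDK:

```typescript
// Hypothetical row shape as stored in SQLite (JSONB flattened to TEXT).
interface SQLiteRow {
  id: string;
  data: string;        // JSON.stringify'd payload
  updated_at: string;  // ISO-8601 text
}

// Reverse of mapToSQLite: rehydrate a SQLite row into a
// Postgres-compatible payload before uploading it.
const mapToPostgres = (row: SQLiteRow) => {
  return {
    ...row,
    // TEXT back to a structured object (destined for a JSONB column)
    data: JSON.parse(row.data) as Record<string, unknown>,
    // ISO-8601 text is accepted directly by TIMESTAMPTZ on the server
    updated_at: row.updated_at
  };
};
```

Keeping both directions in one module makes it easy to verify that a round-trip through SQLite is lossless.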
In a Local-First architecture, the backend does not originate data—that process has already occurred on the client. Instead, the server functions as the authoritative entity responsible for validation, long-term persistence, and global redistribution.

2.1. The WAL (Write-Ahead Log) as a Messaging System

Traditional synchronization architectures typically rely on polling or manual application-level event triggers, both of which are resource-intensive and prone to race conditions. This stack transforms PostgreSQL into a reactive system by tapping directly into its internal transaction engine.

Critical Server-side Configuration

For the infrastructure to emit these granular deltas, the PostgreSQL instance must be tuned specifically for logical decoding. This involves modifying core server parameters to support persistent replication slots and high-volume log streaming.

```
# Increase the number of replication slots based on expected load
max_replication_slots = 10
max_wal_senders = 10
```
2.2. Schema Strategy: The Postgres-SQLite Mirror

In a high-level architecture, we treat PostgreSQL as the Canonical Source of Truth and SQLite as an Optimized Projection.

2.2.1. Shadow Columns: Managing Synchronization State

To enable the system to be aware of its own synchronization state without polluting the business domain model, the SQLite schema incorporates technical metadata columns. These act as control headers for every record.

Atomic Reconciliation Flow

When a local operation is performed, the engine updates both the domain data and the synchronization metadata within a single, atomic transaction.

2.3. Multi-tenancy and Isolation (Buckets)

The objective is to ensure that each SQLite instance contains only the data authorized for the active user, optimizing both bandwidth consumption and security.

Edge Filtering

Unlike traditional query patterns, filtering does not occur on the client side; instead, it is handled within the Sync Layer before deltas are dispatched. The server evaluates access rules against the WAL and forwards only the rows that match the claims within the user's JWT.

2.4. The Write Flow: The "Async Bridge"

In this model, reads are part of a passive stream (via the WAL), whereas writes are imperative actions that require server-side validation. The "Async Bridge" ensures that local mutations are promoted to global canonical truth.

Conflict Resolution: The Concurrency Challenge

In an offline-first system, write conflicts are inevitable. The architecture must define how the server arbitrates between competing updates.

In traditional models, the backend serves as a passive gatekeeper. In this architecture, the synchronization layer—utilizing PowerSync—functions as an active orchestrator, maintaining a consistent data graph between the server and thousands of local clients.

3.1. The Data Tunnel: WebSockets and Delta Streaming

Unlike atomic REST requests that terminate upon response, this architecture establishes a persistent binary tunnel via WebSockets.

3.2. PowerSync as a Filtering Engine (Sync Rules)

PowerSync's primary strength lies in its ability to execute server-side Sync Rules. These rules function as a dynamic firewall.

3.3. Data Buckets and Sync Rules

This is a critical architectural concept for security. Data is never filtered on the client; filtering occurs within the Sync Service via SQL rules.

3.4. Authentication and Security (JWT + Claims)

In this stack, the JWT (JSON Web Token) is more than an access token; it is the primary key for data filtering.

3.5. The "Upload Path": The Transactional Outbox Pattern

While WAL streaming handles the downstream "pulse," the Upload Path is the engine that pushes mutations upstream. To ensure no changes are lost during network partitions, we implement the Transactional Outbox pattern directly within the local engine.

Backend Integrity

The backend serves as the ultimate validator. If an upstream mutation violates business rules—such as updating a task already closed by another user—the backend rejects the change. The sync system then notifies the client to revert the local change or flag it for manual resolution, preventing offline conflicts from corrupting the central source of truth.

Turso's entry into the browser via WebAssembly demonstrates that the local database model is no longer merely a trend, but the de facto standard. However, its underlying implementation architecture reflects distinct design decisions compared to PowerSync that merit technical analysis.

Conclusion: Which Stack to Choose?

Choosing the right architecture depends on your specific data requirements.

What this architecture is not

This stack is powerful, but it is not universally applicable. It is intentionally opinionated and comes with real trade-offs.
- Not for simple CRUD applications
- Not beginner-friendly
- Not free of operational complexity
- Not a replacement for domain-specific conflict resolution
- Not necessary unless latency or offline capability truly matter

Originally published at bdovenbird.com
The key OPFS primitive is the FileSystemSyncAccessHandle. Unlike standard asynchronous web APIs—which introduce latency via the event loop—this interface allows for synchronous read and write operations. This is critical for SQLite, as the engine was architected under the assumption that the file system responds immediately to its low-level calls.
The UI thread communicates with the database Worker via asynchronous message passing (postMessage).
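A minimal sketch of that message protocol, with a correlation id so multiple in-flight queries can share one Worker. The `DbRequest` envelope and `exec` method are illustrative assumptions, not a specific SDK's API; the transport is injected so the logic is testable outside a browser:

```typescript
// Request/response envelope exchanged over postMessage.
interface DbRequest { id: number; sql: string; params: unknown[] }
interface DbResponse { id: number; rows: unknown[] }

// Thin main-thread client that correlates Worker responses
// to pending promises by request id.
class DbClient {
  private nextId = 1;
  private pending = new Map<number, (rows: unknown[]) => void>();

  constructor(private post: (req: DbRequest) => void) {}

  exec(sql: string, params: unknown[] = []): Promise<unknown[]> {
    const id = this.nextId++;
    this.post({ id, sql, params });
    return new Promise((resolve) => this.pending.set(id, resolve));
  }

  // Wire this to worker.onmessage; resolves the matching promise.
  onMessage(res: DbResponse): void {
    this.pending.get(res.id)?.(res.rows);
    this.pending.delete(res.id);
  }
}
```

In the browser, `post` would wrap `worker.postMessage`, and `onMessage` would be attached to `worker.onmessage`.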
Each reactive query is tracked by the tables referenced in its SELECT statement. For instance, if an INSERT occurs in the tasks table, the system detects the mutation and notifies only the hooks or components specifically dependent on tasks.

```typescript
// High-level reactive hook example
const { data, isLoading } = useQuery(
  "SELECT * FROM projects WHERE id = ?",
  [projectId]
);
```
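Under the hood, table-level tracking can be as simple as a registry mapping table names to subscriber callbacks. The following is a conceptual sketch of that mechanism, not the internals of any particular SDK:

```typescript
// Registry of refetch callbacks keyed by table name.
type Listener = () => void;
const listeners = new Map<string, Set<Listener>>();

// A query subscribes to every table it reads from;
// returns an unsubscribe function for component teardown.
function subscribe(tables: string[], refetch: Listener): () => void {
  for (const t of tables) {
    if (!listeners.has(t)) listeners.set(t, new Set());
    listeners.get(t)!.add(refetch);
  }
  return () => tables.forEach((t) => listeners.get(t)?.delete(refetch));
}

// The Worker calls this after each committed write to a table.
function invalidate(table: string): void {
  listeners.get(table)?.forEach((fn) => fn());
}
```

A hook like `useQuery` would parse (or be told) the tables behind its SQL and call `subscribe` on mount.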
Those five storage classes are NULL, INTEGER, REAL, TEXT, and BLOB.
- UUIDs: Postgres UUID values are stored locally as TEXT or BLOB. The technical challenge lies in ensuring that the "bridging" between the WASM binary and the JavaScript environment does not introduce significant overhead during string conversions in high-frequency transactions.
- Timestamps: instead of TIMESTAMPTZ, we standardize local storage as ISO 8601 strings or BigInts (Unix epoch). This ensures that SQLite's built-in date functions (datetime(), strftime()) remain fully functional for filtering and sorting operations.
- JSON: the JSONB type in PostgreSQL is binary and highly efficient. On the client, SQLite stores this data as TEXT. Consequently, the synchronization layer must perform selective parsing: deserializing into JavaScript objects only when required by the UI to avoid unnecessary CPU penalties at the storage layer.
```typescript
/**
 * Type Mapping Utility (Conceptual)
 * Ensures Postgres-compatible types are correctly handled in SQLite WASM
 */
interface SyncPayload {
  id: string;                     // UUID from Postgres
  data: Record<string, unknown>;  // JSONB mapped to TEXT
  updated_at: string;             // TIMESTAMPTZ mapped to ISO-8601
}
```
By setting wal_level = logical, we enable Postgres to decode physical disk changes into logical row-level operations. This allows for the extraction of a continuous stream of INSERT, UPDATE, and DELETE events in structured formats like JSON or Protobuf.
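For illustration, here is a sketch that decodes one such JSON change event and translates it into a parameterized SQLite statement. The event shape follows the output of the wal2json output plugin (kind, table, columnnames, columnvalues); the translation itself is a simplified assumption, with update/delete handling omitted:

```typescript
// Shape of a single change in wal2json-style output.
interface WalChange {
  kind: "insert" | "update" | "delete";
  table: string;
  columnnames: string[];
  columnvalues: unknown[];
}

// Translate an insert event into a parameterized SQLite statement.
function toSQLiteInsert(change: WalChange): { sql: string; params: unknown[] } {
  const cols = change.columnnames.join(", ");
  const placeholders = change.columnnames.map(() => "?").join(", ");
  return {
    sql: `INSERT INTO ${change.table} (${cols}) VALUES (${placeholders})`,
    params: change.columnvalues
  };
}
```

A production sync layer would also validate table names against the replicated schema before interpolating them.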
```
# Required configuration in postgresql.conf
# wal_level must be 'logical' to enable logical decoding
wal_level = logical
```
Postgres offers rich types—JSONB for semi-structured documents, TIMESTAMPTZ for absolute temporal precision, and GIS types for geospatial data—alongside aggressive constraints. In this way, Postgres ensures data integrity across the entire organization.
Because SQLite has no native UUID or TIMESTAMPTZ, the synchronization layer must perform real-time transformations. When the WAL emits a change for a TIMESTAMPTZ field, the sync layer normalizes it (typically to ISO 8601 or Unix Epoch) so that SQLite's date functions remain performant on the client side.
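A sketch of that normalization step, converting whatever timestamp representation arrives from the WAL into the canonical UTC ISO 8601 text that SQLite stores:

```typescript
// Normalize a TIMESTAMPTZ value (ISO string with any offset, or
// Unix epoch milliseconds) into canonical UTC ISO-8601 text.
function normalizeTimestamp(value: string | number): string {
  return new Date(value).toISOString();
}
```

Because every stored value ends up in the same format and timezone, lexicographic ordering in SQLite matches chronological ordering, so indexes and `ORDER BY` work without further conversion.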
- _status: Defines the local data lifecycle (synced, pending_insert, pending_update). This is the engine behind the Optimistic UI, allowing the interface to visually distinguish between confirmed data and data currently in transit.
- _version: A sequence identifier or hash used for Conflict Detection. It prevents "stale" server updates from overwriting more recent local changes.
- _last_synced_at: A timestamp of the last validation against the source of truth. It facilitates cache eviction policies and ensures the client knows the "freshness" of its local projection.
Once the server confirms the change, the record's state transitions from pending to synced, closing the consistency loop.
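That lifecycle can be expressed as a small state machine. The sketch below mirrors the shadow-column values described above; the function names are illustrative:

```typescript
type SyncStatus = "synced" | "pending_insert" | "pending_update";

// Decide the next _status when a local write touches a record.
function onLocalWrite(current: SyncStatus): SyncStatus {
  // A record the server has never seen stays a pending insert;
  // anything already known to the server becomes a pending update.
  return current === "pending_insert" ? "pending_insert" : "pending_update";
}

// When the record's own change comes back via the replication
// stream, the loop closes and the record is confirmed.
function onServerAck(): SyncStatus {
  return "synced";
}
```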
Buckets are partitioned by tenant identifiers (such as user_id, team_id, or project_id).

```sql
-- Defining the publication for the synchronization engine
CREATE PUBLICATION my_app_sync FOR TABLE tasks, projects, comments;
```
While awaiting server confirmation, a local mutation remains in a pending status; once confirmed, it is marked synced.
Conflict detection typically relies on a version counter or the updated_at timestamp. If a client attempts to push a mutation with a stale version—typically caused by offline drift while another user updated the same record—the backend rejects the change or initiates a merge process.
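A sketch of that arbitration, assuming each record carries a monotonically increasing version counter (the `Versioned` shape and `arbitrate` name are illustrative):

```typescript
interface Versioned { id: string; version: number; updated_at: string }

// Server-side check: accept a mutation only if the client based its
// edit on the current server version; otherwise reject so a merge
// (or user-assisted resolution) can be initiated.
function arbitrate(serverRow: Versioned, incoming: Versioned): "accept" | "reject" {
  return incoming.version === serverRow.version ? "accept" : "reject";
}
```

An accepted write would then bump the version, so any other client still holding the old version is forced through the reject path.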
```sql
-- Conceptual server-side Sync Rule:
-- tasks flow only if the user is a member of the project
SELECT * FROM tasks
WHERE project_id IN (
  SELECT id FROM projects WHERE user_id = request.auth.user_id
)
```
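The same rule can be pictured as a predicate applied to each WAL row before it is placed in a user's bucket. This is a conceptual sketch of that edge filtering, not PowerSync's actual evaluation engine; the in-memory `projectOwners` map stands in for the `projects` subquery:

```typescript
interface TaskRow { id: string; project_id: string }
interface Claims { user_id: string }

// Stand-in for the projects table: project ids per owning user.
const projectOwners = new Map<string, Set<string>>();

// Evaluate the sync rule for one row against the JWT claims.
function rowVisibleTo(row: TaskRow, claims: Claims): boolean {
  return projectOwners.get(claims.user_id)?.has(row.project_id) ?? false;
}
```

Rows failing the predicate are simply never sent, so the client database cannot contain data the token does not authorize.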
The token's claims encode the tenant identity (user_id and roles). The Sync Service verifies the token and uses its user_id to execute the bucket rules described in section 3.2.
When the application performs a local write (e.g., UPDATE tasks...), the SDK performs a dual operation within a single SQLite transaction: it modifies the business table and inserts the change representation into a technical outbox table. This ensures that either the change and its pending upload are saved together, or nothing is saved at all. Crucially, the client does not treat a 200 OK from the server as final confirmation. Instead, it waits for the processed change to arrive via the WAL stream. Receiving its own update back from the server serves as definitive proof of persistence in Postgres. At this point, the local record transitions from pending to synced, and the outbox is cleared.
Turso's engine, whose bindings stem from tooling like napi-rs, executes computation on the Main Thread. It delegates only file I/O to a Worker through a SharedArrayBuffer. The rationale is that for lightweight queries, the overhead of cross-thread communication can exceed the execution time of the query itself.
Choose this stack if you want an API experience close to better-sqlite3 within a browser environment, and are comfortable managing COOP/COEP security headers.
If your app is a basic form-over-API system with minimal offline requirements, a traditional REST or GraphQL backend will be simpler, cheaper, and easier to maintain.
This architecture assumes solid knowledge of relational databases, WAL semantics, concurrency, and distributed systems. Debugging synchronization issues requires backend and database expertise—not just frontend tooling.
Logical replication, replication slots, sync rules, and client reconciliation introduce operational overhead. You are trading API simplicity for correctness, performance, and offline guarantees.
While patterns like Last-Write-Wins work for many use cases, collaborative or high-contention domains often require explicit merge strategies or user-assisted conflict resolution.
The benefits of this architecture only justify themselves when instant local feedback, offline operation, and data ownership are core product requirements—not nice-to-haves.