In this exercise, we’re going to create a website backed by a client-side SQL database. The browser is going to download the database file and run queries against it. For this purpose, we’re using SQLite.
Building a site this way has a number of benefits. Since you no longer need a server instance to run an RDBMS, your hosting cost would be significantly lower, possibly zero. You don’t have to worry about scaling, as all computation occurs on your visitors’ machines. No worries about hackers defacing your site either. Static files are, for the most part, invulnerable to attacks.
From a visitor’s perspective the experience is excellent. As the database is local, data queries will finish nearly instantaneously. No waiting for pages to load. Requested information will simply appear. And the site will continue to function even after they’ve gone offline (after boarding a plane, for example).
Of course, only certain types of website can work with a client-side database. You’re limited to read-only access. For something like a CMS, that’s sufficient.
For this exercise we’re going to use the contents of a real website. Real World Tech, run by David Kanter, was an online publication that provided in-depth articles about CPU architectures. Currently, the site is dormant, kept alive mainly as a historical record. The data set covers a timespan of two decades, with 374 articles in total.
Finished product
Since this tutorial is fairly long, you might want to have a look at the resulting website first to get a sense of what we’re building.
Basic approach
The database containing the site contents is around 14 MB in size, or 5 MB after compression. On a sub-par internet connection it could take a few seconds to download. To avoid a long delay before the Largest Contentful Paint (LCP), we’re going to handle the initial page load differently. When SQLite performs a read operation, instead of returning the data immediately we’ll send an HTTP range request to the server for that chunk. Only a small portion of the database file will be retrieved as a result of the initial queries.
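To make the mechanics concrete, here’s a sketch (the helper name is hypothetical, and we assume 16 KB database pages) of how a single SQLite page read maps to a range request:

```javascript
// Hypothetical helper: the byte range that fetching SQLite page N entails.
// Pages are numbered from 1; assumes a 16 KB page size.
function pageRange(pageNo, pageSize = 16384) {
  const offset = (pageNo - 1) * pageSize;
  // HTTP Range headers use inclusive end offsets
  return { offset, header: `bytes=${offset}-${offset + pageSize - 1}` };
}

pageRange(1).header; // "bytes=0-16383"
pageRange(3).header; // "bytes=32768-49151"
```

Only the pages that the initial queries actually touch get fetched this way; the bulk of the 14 MB file stays on the server.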
After the page has rendered, we initiate the full download. When that finishes, we close the database and reopen it using the in-memory object as the virtual file.
So our backend API is going to end up having two sets of functions: one asynchronous, the other synchronous.
Database schema
The database schema for our website is relatively simple:
The table posts holds the articles, while users, tags, and categories hold additional information.
As its name suggests, posts_wo_contents mirrors posts except for the content column. It exists to enable quicker page load. Its more compact size allows SQLite to fetch less data during the stage when the app is reading the database file using HTTP range requests (as discussed in the previous section).
Missing from the diagram above is the search table. It’s an FTS5 virtual table, functioning basically as a full-text index for the table posts. It’s an external content table, so it’s also somewhat like a view.
Getting started
Okay, let us get started building the site. As usual, we start out by creating a new Vite project:
npm create vite@latest
│
◇ Project name:
│ sqlite-rwt
│
◇ Select a framework:
│ React
│
◇ Select a variant:
│ JavaScript + SWC
│
◇ Use rolldown-vite (Experimental)?:
│ No
│
◇ Install with npm and start now?
│ No
│
◇ Scaffolding project in /home/rwiggum/sqlite...
│
└ Done. Now run:
cd sqlite-rwt
npm install
npm run dev
Go into the directory and install the necessary packages:
cd sqlite-rwt
npm install
npm install --save-dev rollup-plugin-zigar
Go to zig-sqlite’s GitHub page and copy the "Download ZIP" link:

Go back to the terminal, create the sub-directory zig, and cd to it:
mkdir zig
cd zig
Create an empty build.zig:
touch build.zig
Run "zig fetch --save" on the copied URL:
zig fetch --save https://github.com/vrischmann/zig-sqlite/archive/refs/heads/master.zip
That’ll fetch the package and create a build.zig.zon listing it as a dependency. The empty build.zig only exists to enable the fetch --save command. It won’t be used. The import of the package happens in build.extra.zig. Create a bare-bones copy by running the following command:
npx zigar extra
In getImports(), add sqlite as a dependency and as an import:
pub fn getImports(b: *std.Build, args: anytype) []const std.Build.Module.Import {
const sqlite = b.dependency("sqlite", .{
.target = args.target,
.optimize = args.optimize,
.fts5 = true,
}).module("sqlite");
return &.{
.{ .name = "sqlite", .module = sqlite },
};
}
Note the enabling of FTS5.
To enable the Zigar plugin, open vite.config.js and insert the following:
import react from '@vitejs/plugin-react-swc'
import zigar from 'rollup-plugin-zigar'
import { defineConfig } from 'vite'
export default defineConfig({
plugins: [
react(),
zigar({ optimize: 'ReleaseSmall', multithreaded: true }),
],
server: {
headers: {
'Cross-Origin-Opener-Policy': 'same-origin',
'Cross-Origin-Embedder-Policy': 'require-corp',
}
},
assetsInclude: [ '**/*.db' ],
})
We need to hardcode the Zig optimization level to ReleaseSmall because zig-sqlite won’t compile under Debug. One of SQLite’s functions apparently has too many local variables for WebAssembly.
We’re planning to use an async data source as a virtual file so we need to enable multithreading. Multithreading in turn requires shared memory, which is available only when certain HTTP headers are present. We instruct Vite’s test server to provide these. When we deploy the app, we’ll need to configure the production server to do the same.
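To verify that the headers took effect, you can check the crossOriginIsolated flag that browsers expose; the helper below is our own sketch, not part of Zigar:

```javascript
// Shared memory is only available in cross-origin isolated contexts,
// which is also when SharedArrayBuffer is exposed to scripts.
function canUseThreads(isolated = globalThis.crossOriginIsolated) {
  return isolated === true && typeof SharedArrayBuffer !== 'undefined';
}

// In the browser console: canUseThreads() is true only when the
// COOP/COEP headers were delivered with the page.
```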
Patching the standard library
As of version 0.15.2, Zig doesn’t properly support threads in WebAssembly out of the box. You need to patch the standard library if you haven’t done so already. Simply run the following command:
npx zigar patch
First database functions
With the project set up, it’s time to start coding. In the zig sub-directory create database.zig. We’ll start out with two functions:
const std = @import("std");
const sqlite = @import("sqlite");
var database: ?sqlite.Db = null;
var database_id: usize = 0;
pub fn open(path: [:0]const u8) !void {
if (database != null) return;
database = try sqlite.Db.init(.{
.mode = .{ .File = path },
.open_flags = .{},
.threading_mode = .SingleThread,
});
database_id += 1;
}
pub fn close() void {
if (database == null) return;
database.?.deinit();
database = null;
}
So we import zig-sqlite and use its functions to open and close a SQLite database. Nothing complicated. When we open the file we increment a counter. We’ll use this number later to detect stale prepared statements.
We are not going to import database.zig into JavaScript directly. We’re going to use a second file backend.zig, which contains the following:
const std = @import("std");
const zigar = @import("zigar");
pub const database = @import("./database.zig");
var work_queue: zigar.thread.WorkQueue(database) = .{};
pub const remote = struct {
pub const shutdown = work_queue.promisify(.shutdown);
pub const open = work_queue.promisify(database.open);
pub const close = work_queue.promisify(database.close);
};
Our functions in database.zig are made available through the namespace database. Meanwhile, we have a second namespace remote, which contains the promisified versions of the same functions. When they’re called, the actual work is offloaded to a worker thread.
Setting up virtual file systems
Open App.jsx in the src directory and add these imports:
import { __zigar, remote } from '../zig/backend.zig';
import { WebFile } from './web-file.js';
import rwt from './assets/rwt.db';
WebFile is a class we developed in an earlier tutorial. It implements the [file interface](https://github.com/chung-leong/zigar/wiki/File-interface), allowing it to be used as a virtual file. The code looks like this:
export class WebFile {
static async create(url, prefetch = 16384) {
const resp = await fetch(url, {
headers: {
Range: `bytes=0-${prefetch - 1}`,
},
});
const data = await resp.bytes();
let size;
if (resp.status === 206) {
const range = resp.headers.get("content-range");
size = parseInt(range.slice(range.indexOf('/') + 1)); // e.g. "bytes 0-16384/1261568"
} else {
size = data.length;
}
const self = new WebFile();
self.url = url;
self.size = size;
self.pos = 0;
self.cache = data;
return self;
}
async pread(len, offset) {
const end = offset + len;
if (end <= this.cache.length) {
return this.cache.slice(offset, end);
}
const resp = await fetch(this.url, {
headers: {
Range: `bytes=${offset}-${end - 1}`,
},
});
if (!resp.headers.get('content-range')) {
throw new Error('Missing range header');
}
const buffer = await resp.arrayBuffer();
return new Uint8Array(buffer);
}
async read(len) {
const chunk = await this.pread(len, this.pos);
this.pos += chunk.length;
return chunk;
}
tell() {
return this.pos;
}
seek(offset, whence) {
let pos = -1;
switch (whence) {
case 0: pos = offset; break;
case 1: pos = this.pos + offset; break;
case 2: pos = this.size + offset; break;
}
if (!(pos >= 0 && pos <= this.size)) throw new Error('Invalid argument');
this.pos = pos;
return pos;
}
}
It starts out by performing an HTTP range request for the initial chunk of the file. SQLite accesses this area very frequently so we cache it in memory. The response to this request also tells us the size of the file. From here on, read operations are either translated into additional range requests or satisfied by the cache.
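The size extraction from the Content-Range header can be looked at in isolation (the helper name is ours, pulled out of create() above):

```javascript
// The total size sits after the slash in a Content-Range header,
// e.g. "bytes 0-16383/1261568" → 1261568
function sizeFromContentRange(range) {
  return parseInt(range.slice(range.indexOf('/') + 1));
}

sizeFromContentRange('bytes 0-16383/1261568'); // 1261568
```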
Further down in App.jsx, create an instance of this class (a promise, actually) and return it when an open event occurs:
const remoteFile = WebFile.create(rwt);
let localFile = null;
__zigar.on('open', ({ path }) => {
switch (path) {
case 'remote.db':
return remoteFile;
case 'local.db':
return localFile;
default: return false;
}
});
__zigar.on('mkdir', () => true);
__zigar.on('rmdir', () => true);
We also add listeners for mkdir and rmdir since we know SQLite uses them for locking purposes.
localFile will eventually become a Uint8Array holding the full content of the database file. We’ll work on that part later. For now, let us verify that our functions work:
await remote.open('/remote.db');
await remote.close();
Before you can start up Vite you’ll of course need rwt.db. Download the file from here and move it into src/assets. Earlier, we set up Vite to recognize .db files as assets, so importing the file yields a URL to it.
Once that’s done, run npm run dev and open the test link. Initial compilation of SQLite might take a while, so be patient. If you eventually see the Vite boilerplate, that means no errors occurred. If you get a blank screen instead, open the development console to see what went wrong.
First database query
The SQL statement below is used to retrieve a list of posts in reverse chronological order for use in the main page:
SELECT
a.slug,
a.date,
a.title,
a.excerpt,
NULL as content,
b.name || '|' || b.slug AS author,
(
SELECT group_concat(d.name || '|' || d.slug, ',')
FROM post_tags c
INNER JOIN tags d ON c.tag_id = d.id
WHERE c.post_id = a.id
) AS tags,
(
SELECT group_concat(d.name || '|' || d.slug, ',')
FROM post_categories c
INNER JOIN categories d ON c.category_id = d.id
WHERE c.post_id = a.id
) AS categories
FROM posts_wo_contents a
INNER JOIN users b ON a.author_id = b.id
ORDER BY a.date DESC
LIMIT ?
OFFSET ?;
While it might look complicated, it’s actually fairly straightforward. We use one sub-query to retrieve tags attached to a post and another to retrieve the categories it falls into. We then concatenate these lists into comma-delimited strings.
With SQLite, sub-queries typically yield better performance than the equivalent operation done through table joins. That’s why we’re using them here.
As our SQL statements are going to be fairly long, it’s more manageable to keep each of them in a separate file (as opposed to embedding them in our source code). Create the sub-directory sql in the zig directory and save the statement above as get-posts.sql.
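On the JavaScript side, the name|slug pairs packed by group_concat() will need to be decoded at some point. A possible helper (our own sketch; the React components shown further down do the splitting inline instead):

```javascript
// Decode a "name|slug,name|slug" string produced by group_concat()
// into an array of objects; tolerates an empty or missing list.
function decodePairs(str) {
  return (str ?? '').split(',').filter(s => !!s).map(s => {
    const [name, slug] = s.split('|');
    return { name, slug };
  });
}

decodePairs('CPU|cpu,GPU|gpu');
// → [{ name: 'CPU', slug: 'cpu' }, { name: 'GPU', slug: 'gpu' }]
```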
In database.zig, add the following function:
fn SQL(comptime path: []const u8) type {
return struct {
const sql = @embedFile(path);
const StatementType = sqlite.StatementType(.{}, sql);
var statement: ?StatementType = null;
var for_database_id: usize = 0;
pub fn prepare() !*StatementType {
if (database == null) return error.NoDatabaseConnection;
// free the statement if it's not from the current connection
if (statement != null and for_database_id != database_id) {
statement.?.deinit();
statement = null;
}
// prepare the statement if it hasn't been prepared already
if (statement == null) {
statement = try database.?.prepare(sql);
for_database_id = database_id;
}
return &statement.?;
}
};
}
It defines a new namespace for a particular SQL statement (stored in path). This namespace has two variables: statement stores the prepared statement itself while for_database_id is used to ensure it is referencing the active database connection.
After that, define a struct with fields matching the query columns and a function that executes the prepared statement:
const Post = struct {
slug: []const u8,
date: f64,
title: []const u8,
excerpt: []const u8,
content: ?[]const u8 = null,
author: []const u8,
tags: []const u8,
categories: []const u8,
};
pub fn getPosts(allocator: std.mem.Allocator, offset: usize, limit: usize) ![]Post {
var stmt = try SQL("sql/get-posts.sql").prepare();
defer stmt.reset();
return try stmt.all(Post, allocator, .{}, .{ limit, offset });
}
In backend.zig, add the promisified version of the function in remote:
pub const getPosts = work_queue.promisify(database.getPosts);
Setting meta types
At this point, let us add some comptime functions that control how data is interpreted on the JavaScript side. First add the following statement in backend.zig:
pub const @"meta(zigar)" = @import("./meta.zig");
Then create meta.zig itself:
const std = @import("std");
pub fn isFieldString(comptime T: type, comptime _: std.meta.FieldEnum(T)) bool {
return true;
}
pub fn isDeclPlain(comptime T: type, comptime _: std.meta.DeclEnum(T)) bool {
return true;
}
The first function determines whether a struct field of a type commonly used for storing a text string (like []const u8) is in fact a string. The second function determines whether data structures returned by functions should be converted to regular JavaScript objects.
Since both functions always return true, it means that getPosts() will return an array of objects containing strings.
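Concretely, each element of the array getPosts() resolves to should look roughly like this (field names come from the Post struct; the values below are made up for illustration):

```javascript
// Illustrative shape of one getPosts() result object — invented values,
// not real data from the database.
const post = {
  slug: 'example-article',
  date: 1577836800000,             // f64 → a number Date() accepts
  title: 'An example article',
  excerpt: '<p>Summary…</p>',
  content: null,                   // not selected by get-posts.sql
  author: 'David Kanter|david-kanter',
  tags: 'CPU|cpu,Analysis|analysis',
  categories: 'Articles|articles',
};

typeof post.slug; // 'string'
```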
Testing the query
Insert a call to the new function in App.jsx:
await remote.open('/remote.db');
const posts = await remote.getPosts(0, 20);
console.log(posts);
await remote.close();
And you should see the objects in the dev console:

If you switch to the "Network" tab and reload with the cache disabled, you’ll see the range requests at the bottom:

So SQLite needs to load nine 16K pages in order to run the query. That’s around 150K total. We expect the app itself to take up 500K or so compressed. Load time should be okay even over slow internet connections.
Basic user interface
Now that we have a somewhat functional "backend", it’s time to build out the frontend. Since this isn’t a React tutorial, we’re going to quickly run through the components.
First, create the sub-directory widgets in src. Then add Html.jsx, a simple widget for displaying HTML:
function Html({ content }) {
return <div className="Html" dangerouslySetInnerHTML={{ __html: content }} />
}
export default Html;
Next, add Loading.jsx:
import { useEffect } from "react";
function Loading({ onUnmount }) {
useEffect(() => onUnmount, []);
return <div className="Loading">LOADING</div>;
}
export default Loading;
Then TopNav.jsx:
function TopNav({}) {
return (
<div className="TopNav">
<form className="search">
<input type="text" placeholder="Search"/>
</form>
<h2><a href="">Client-side database demo</a></h2>
</div>
);
}
export default TopNav;
And finally PostList.jsx, a widget for showing a list of articles:
import { use } from "react";
import { isPromise } from "../utils";
import Html from "./Html";
function PostListing({ post }) {
const categories = post.categories.split(',');
const tags = post.tags.split(',').filter(t => !!t);
const [ catName, catSlug ] = categories[0].split('|');
const date = new Date(post.date).toLocaleDateString();
const [ authorName, authorSlug ] = post.author.split('|');
return (
<li className="PostListing">
<div className="date">{date}</div>
<div className="title">
<a href={`${catSlug}/${post.slug}/`}><Html content={post.title} /></a>
</div>
<div className="excerpt">
<Html content={post.excerpt} />
</div>
<div className="author">
by <a href={`authors/${authorSlug}/`}>{authorName}</a>
</div>
<div className="tags">
{
tags.map((t, i) => {
const [ name, slug ] = t.split('|');
return <a key={i} href={`tags/${slug}/`}>{name}</a>;
})
}
</div>
</li>
);
}
function PostList({ posts }) {
posts = isPromise(posts) ? use(posts) : posts;
return (
<ul className="PostList">
{posts.map((p, i) => <PostListing key={i} post={p} />)}
</ul>
);
}
export default PostList;
PostList can receive either an array of post objects or a promise of one. If it’s the latter, we call React’s use function. If the promise is unresolved, React suspends the component until it’s fulfilled. At a higher level, a suspense boundary displays a fallback in the meantime.
Now, create the sub-directory pages in src and add MainPage:
import { Suspense, useState } from "react";
import Loading from "../widgets/Loading";
import PostList from "../widgets/PostList";
function MainPage({ api, onAsyncLoad }) {
const [ posts, setPosts ] = useState(() => api.getPosts(0, 20));
return (
<div className="Page">
<Suspense fallback={<Loading onUnmount={onAsyncLoad}/>}>
<PostList posts={posts} />
</Suspense>
</div>
);
}
export default MainPage;
Here’s where we have the aforementioned suspense boundary. When the fallback component is unmounted, we know the page is ready. This is the best time to initiate the download of the full database. We’ll do this later.
In App.jsx, we remove all the boilerplate stuff and change the code to the following:
import { useState } from 'react';
import { __zigar, remote } from '../zig/backend.zig';
import './App.css';
import rwt from './assets/rwt.db';
import MainPage from './pages/MainPage.jsx';
import { WebFile } from './web-file.js';
import TopNav from './widgets/TopNav.jsx';
const remoteFile = WebFile.create(rwt);
let localFile = null;
__zigar.on('open', ({ path }) => {
switch (path) {
case 'remote.db':
return remoteFile;
case 'local.db':
return localFile;
default: return false;
}
});
__zigar.on('mkdir', () => true);
__zigar.on('rmdir', () => true);
remote.open('/remote.db');
function App() {
const [ api, setApi ] = useState(() => remote);
let Page = MainPage;
return (
<div className="App">
<TopNav api={api} />
<Page api={api} />
</div>
);
}
export default App
Initially, api will be our async database functions. We have to set it using a function because remote is a struct constructor; useState() would invoke it if we passed it in directly.
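The distinction can be illustrated without React (a sketch mimicking useState’s lazy-initializer convention, with a stand-in for remote):

```javascript
// useState calls a function argument to obtain the initial value,
// and uses anything else as-is.
function initialValue(arg) {
  return typeof arg === 'function' ? arg() : arg;
}

// remote is callable, so our stand-in must be too:
function remoteStub() { throw new Error('should not be invoked'); }
remoteStub.open = async () => {};

initialValue(() => remoteStub) === remoteStub; // true: the wrapper shields it
// initialValue(remoteStub) would invoke the stub and throw
```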
We’re almost done with the change. We still need to add utils.js:
export function isPromise(arg) {
return typeof(arg?.then) === 'function';
}
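A quick sanity check of this duck-typing test: anything with a then method counts, which is exactly what use() and await care about:

```javascript
function isPromise(arg) {
  return typeof(arg?.then) === 'function';
}

isPromise(Promise.resolve(1)); // true
isPromise({ then() {} });      // true: any thenable qualifies
isPromise([1, 2, 3]);          // false
isPromise(null);               // false
```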
And update App.css:
#root {
max-width: 1280px;
margin: 0 auto;
flex: 1 1 auto;
overflow: hidden;
}
.Loading {
text-align: center;
opacity: 0%;
font-size: 2em;
font-weight: bold;
margin-top: 2em;
animation: fadeIn 4s ease-in 500ms infinite normal;
}
@keyframes fadeIn {
0% {
opacity: 0;
}
50% {
opacity: 25%;
}
100% {
opacity: 0;
}
}
.Page {
padding: 1em;
}
.TopNav {
width: 100%;
background-color: #ccccee;
}
.TopNav h2 {
margin-left: .5em;
}
.TopNav h2 {
margin: 0;
padding: 0.5em;
}
.TopNav a:link,
.TopNav a:visited {
color: #000000;
}
.TopNav .search {
float: right;
padding: .5em;
}
.PostList {
padding-inline-start: 0;
list-style-type: none;
}
.PostListing {
margin-left: 0;
margin-bottom: 0.6em;
clear: both;
}
.PostListing .date {
float: right;
color: #999999
}
.PostListing .title {
font-weight: 500;
}
.PostListing .excerpt p {
margin-top: 0.2em;
margin-bottom: 0.3em;
}
.PostListing .author {
float: right;
margin-bottom: 0.3em;
}
.PostListing .author a {
color: #000000;
}
.PostListing .tags a {
display: inline-block;
background-color: #333355;
color: #ffffff;
font-size: 0.8em;
padding: 1px 6px 1px 6px;
margin-right: 4px;
border-radius: 5px;
}
.PostListing .tags a:hover {
color: #eeee00;
}
As well as index.css:
:root {
font-family: system-ui, Avenir, Helvetica, Arial, sans-serif;
line-height: 1.5;
font-weight: 400;
font-synthesis: none;
text-rendering: optimizeLegibility;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}
* {
box-sizing: border-box
}
body {
margin: 0;
display: flex;
}
a:visited, a:link {
text-decoration: none;
}
img {
max-width: 90vw;
height: auto;
}
After all that we have something that resembles a proper website:

None of the links work for now. We’ll get to that.
Implementing infinite scroll
To load more content when the visitor gets near the bottom of the page, we use an IntersectionObserver to observe an empty div placed at the bottom of the list:
function PostList({ posts, onBottomReached }) {
posts = isPromise(posts) ? use(posts) : posts;
const bottom = useRef();
useEffect(() => {
const callback = (entries) => {
if (entries[0].isIntersecting) {
onBottomReached?.();
}
};
const observer = new IntersectionObserver(callback, { rootMargin: '0px 0px 1000px 0px' });
observer.observe(bottom.current);
return () => observer.disconnect();
}, [ onBottomReached ]);
return (
<ul className="PostList">
{posts.map((p, i) => <PostListing key={i} post={p} />)}
<div ref={bottom} />
</ul>
);
}
In MainPage.jsx, we respond to an onBottomReached event by appending more objects to the list:
function MainPage({ api, onAsyncLoad }) {
const [ posts, setPosts ] = useState(() => api.getPosts(0, 20));
const more = async () => {
const existing = await posts;
const extra = await api.getPosts(existing.length, 20);
setPosts([ ...existing, ...extra ]);
};
return (
<div className="Page">
<Suspense fallback={<Loading onUnmount={onAsyncLoad}/>}>
<PostList posts={posts} onBottomReached={more} />
</Suspense>
</div>
);
}
That works pretty well. Since the other pages are going to need the same logic, let us encapsulate it in a custom hook in utils.js:
export function usePagination(cb, count = 20) {
const [ objects, setObjects ] = useState(() => cb(0, count));
const more = async () => {
const existing = await objects;
const extra = await cb(existing.length, count);
if (extra.length > 0) {
setObjects([ ...existing, ...extra ]);
}
};
return [ objects, more ];
}
And MainPage becomes:
function MainPage({ api, onAsyncLoad }) {
const [ posts, more ] = usePagination((offset, limit) => api.getPosts(offset, limit));
return (
<div className="Page">
<Suspense fallback={<Loading onUnmount={onAsyncLoad}/>}>
<PostList posts={posts} onBottomReached={more} />
</Suspense>
</div>
);
}
Transition to synchronous operation
Let us implement the switch to using the full file. First we need to import the database namespace, which has our synchronous functions:
import { __zigar, database, remote } from '../zig/backend.zig';
Then we add a callback that downloads the file and reopens the database:
function App() {
const [ api, setApi ] = useState(() => remote);
let Page = MainPage;
let started = false;
const onTransition = useCallback(async () => {
if (started) return;
started = true;
// download the file
const response = await fetch(rwt);
const buffer = await response.arrayBuffer();
localFile = new Uint8Array(buffer);
// close the database and shut down the web worker
await remote.close();
remote.shutdown();
// reopen it using the cached file and set the API
database.open('/local.db');
setApi(() => database);
}, [ database, remote ]);
return (
<div className="App">
<TopNav api={api} />
<Page api={api} onAsyncLoad={onTransition}/>
</div>
);
}
From now on, data retrieval will be near instantaneous. You won’t notice it while scrolling down the page, though, since the infinite scroll mechanism is designed to mask loading time. It’ll be more apparent when you navigate to a new page. So let us get to that part.
Routing
We’re not going to employ any fancy routing scheme. Simple conditional statements are good enough. The first thing we’ll do is create placeholder components for the site’s other pages: ArticlePage, AuthorPage, CategoryPage, SearchPage, and TagPage, each with just an "Under construction" message for the time being:
function TagPage({ api, route }) {
return <h1>Under construction</h1>;
}
export default TagPage;
Then we add a function in utils.js that parses a location or anchor:
export function parseRoute(location) {
const url = new URL(location);
const path = url.pathname.slice(import.meta.env.BASE_URL.length);
const parts = path.split('/').filter(p => !!p);
const params = {};
for (const [ name, value ] of url.searchParams) {
params[name] = value;
}
return { parts, params };
}
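Outside of Vite, the same logic can be exercised by passing the base path explicitly (a standalone sketch; the real function reads import.meta.env.BASE_URL):

```javascript
// Standalone variant of parseRoute: basePath is a parameter so it
// runs anywhere, not just inside a Vite build.
function parseRoute(location, basePath = '/') {
  const url = new URL(location);
  const path = url.pathname.slice(basePath.length);
  const parts = path.split('/').filter(p => !!p);
  const params = {};
  for (const [name, value] of url.searchParams) {
    params[name] = value;
  }
  return { parts, params };
}

parseRoute('https://example.com/tags/cpu/?search=zen');
// → { parts: [ 'tags', 'cpu' ], params: { search: 'zen' } }
```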
In App.jsx, we add the state route and use it to control which page gets rendered:
const [ route, setRoute ] = useState(() => parseRoute(window.location));
let Page, key;
if (route.params.search) {
Page = SearchPage;
key = route.params.search;
} else {
switch (route.parts[0]) {
case undefined:
Page = MainPage;
key = '';
break;
case 'authors':
Page = AuthorPage;
key = route.parts[1];
break;
case 'tags':
Page = TagPage;
key = route.parts[1];
break;
default:
if (route.parts[1]) {
Page = ArticlePage
key = route.parts[1];
} else {
Page = CategoryPage;
key = route.parts[0];
}
break;
}
}
We use a useEffect hook to install listeners that trap clicks on links and usage of the back/forward buttons:
useEffect(() => {
const onLinkClick = (evt) => {
const { location, history } = window;
const { target, button, defaultPrevented } = evt;
if (button === 0 && !defaultPrevented) {
const link = target.closest('A');
if (link && !link.target && !link.download && link.origin === location.origin) {
if (link.pathname !== location.pathname || link.search !== location.search) {
history.pushState({}, '', link);
setRoute(parseRoute(link));
}
evt.preventDefault();
}
}
};
const onPopState = (evt) => {
const { location } = window;
setRoute(parseRoute(location));
};
window.addEventListener('click', onLinkClick, true);
window.addEventListener('popstate', onPopState, true);
return () => {
window.removeEventListener('click', onLinkClick, true);
window.removeEventListener('popstate', onPopState, true);
};
}, []);
Finally, we pass the route into the top nav and the page component:
<div className="App">
<base href={import.meta.env.BASE_URL} />
<TopNav api={api} route={route} />
<Page key={key} api={api} route={route} onAsyncLoad={onTransition}/>
</div>
We also add a <base> tag here so that the site works when deployed to a sub-directory.
Loading an article
Returning our gaze to the backend, we’ll add a function that retrieves an article from the database. First, we need a SQL query. In zig/sql, create get-post.sql:
SELECT
a.slug,
a.date,
a.title,
a.excerpt,
a.content,
b.name || '|' || b.slug AS author,
(
SELECT group_concat(d.name || '|' || d.slug, ',')
FROM post_tags c
INNER JOIN tags d ON c.tag_id = d.id
WHERE c.post_id = a.id
) AS tags,
(
SELECT group_concat(d.name || '|' || d.slug, ',')
FROM post_categories c
INNER JOIN categories d ON c.category_id = d.id
WHERE c.post_id = a.id
) AS categories
FROM posts a
INNER JOIN users b ON a.author_id = b.id
WHERE a.slug = ?;
It’s largely similar to our previous query. We’ve swapped in a WHERE clause and are now reading from the table that includes the content column.
In database.zig, we add the function that runs it:
pub fn getPost(allocator: std.mem.Allocator, slug: []const u8) !Post {
var stmt = try SQL("sql/get-post.sql").prepare();
defer stmt.reset();
return try stmt.oneAlloc(Post, allocator, .{}, .{slug}) orelse error.NotFound;
}
It’s also similar to what we had written before. oneAlloc() returns null when there is no match. When that happens we choose to return an error instead.
Go to backend.zig and add the async version of the function:
pub const getPost = work_queue.promisify(database.getPost);
And that’s it: the backend portion is done. We now return to the JavaScript side and build out the page. ArticlePage has the same basic structure as MainPage:
import { Suspense, use } from "react";
import { isPromise } from "../utils";
import Html from "../widgets/Html";
import Loading from "../widgets/Loading";
function ArticlePage({ api, route, onAsyncLoad }) {
const slug = route.parts[1];
const post = api.getPost(slug);
return (
<div className="Page">
<Suspense fallback={<Loading onUnmount={onAsyncLoad}/>}>
<Article post={post} />
</Suspense>
</div>
);
}
function Article({ post }) {
post = isPromise(post) ? use(post) : post;
const tags = post.tags.split(',').filter(t => !!t);
const [ authorName, authorSlug ] = post.author.split('|');
const date = new Date(post.date).toLocaleDateString();
return (
<div className="Article">
<div className="title"><Html content={post.title} /></div>
<div className="author-date">
by <a href={`authors/${authorSlug}/`}>{authorName}</a> <span className="date">({date})</span>
</div>
<div className="tags">
{
tags.map((t, i) => {
const [ name, slug ] = t.split('|');
return <a key={i} href={`tags/${slug}/`}>{name}</a>;
})
}
</div>
<div className="content"><Html content={post.content} /></div>
</div>
);
}
export default ArticlePage;
ArticlePage makes the API call and passes the result to Article, which renders itself if the data is immediately available. Otherwise, it suspends until the promise is fulfilled.
Finally we need some extra CSS:
.PostListing .tags a,
.Article .tags a {
display: inline-block;
background-color: #333355;
color: #ffffff;
font-size: 0.8em;
padding: 1px 6px 1px 6px;
margin-right: 4px;
border-radius: 5px;
}
.PostListing .tags a:hover,
.Article .tags a:hover {
color: #eeee00;
}
.Article .title {
font-size: 2em;
font-weight: bold;
}
.Article .author-date {
float: left;
}
.Article .date {
margin-left: .5em;
}
.Article .tags {
float: right;
}
.Article .content {
clear: both;
padding-top: 1em;
}
And the result:

The site’s speediness is much more apparent now. When you click on a link, the contents appear immediately. There’s no perceptible delay.
Articles in the database link to images. If you want to see these, download the tarball and extract the files to public/includes.
Adding an error boundary
At this point, let us put an error boundary around the page component so that we can see what happened when an error occurs, for instance, when an article does not exist.
Create ErrorBoundary.jsx in widgets:
import { Component } from "react";
class ErrorBoundary extends Component {
constructor(props) {
super(props);
this.state = { error: null };
}
static getDerivedStateFromError(error) {
return { error };
}
render() {
if (this.state.error) {
return <h1>{this.state.error.message}</h1>;
}
return this.props.children;
}
}
export default ErrorBoundary;
Then in App.jsx:
<ErrorBoundary key={key}>
<Page api={api} route={route} onAsyncLoad={onTransition}/>
</ErrorBoundary>
Implementing search
So far, we haven’t done anything that we couldn’t do with static HTML files. Modern web servers can serve static files blazingly fast without requiring much computational resources. Data gets sent straight from the kernel. The one feature that justifies the existence of this tutorial is full-text search. That’s what we’re going to tackle next.
Our TopNav component is currently just a placeholder. Nothing happens if you type something into the search box. The first thing we’re going to do is wire it up so that it sends what the user has entered to the parent component (i.e. App) once he’s stopped typing:
import { useCallback, useEffect, useState } from "react";

function TopNav({ api, route, onSearch }) {
  const { search: currentSearch = '' } = route.params;
  const [ search, setSearch ] = useState(currentSearch);
  const onChange = useCallback(evt => setSearch(evt.target.value), []);
  const onAction = useCallback(() => {
    if (search !== currentSearch) {
      onSearch?.({ search });
    }
  }, [ search, currentSearch ]);
  useEffect(() => {
    const timeout = setTimeout(onAction, 500);
    return () => clearTimeout(timeout);
  }, [ onAction ]);
  useEffect(() => setSearch(currentSearch), [ currentSearch ]);
  return (
    <div className="TopNav">
      <form className="search" action={onAction}>
        <input type="text" value={search} placeholder="Search" onChange={onChange}/>
      </form>
      <h2><a href="">Client-side database demo</a></h2>
    </div>
  );
}

export default TopNav;
Then in App.jsx, we add a callback function:
const onSearch = useCallback(({ search, replace = false }) => {
  const { location, history } = window;
  const url = new URL(location);
  if (search) {
    url.searchParams.set('search', search);
  } else {
    url.searchParams.delete('search');
  }
  if (replace) {
    history.replaceState({}, '', url);
  } else {
    history.pushState({}, '', url);
  }
  setRoute(parseRoute(url, true));
}, []);
And pass it to TopNav:
<TopNav api={api} route={route} onSearch={onSearch}/>
Now we need to write a SQL query for performing the actual search. In the sub-directory zig/sql, add find-posts.sql:
SELECT
  a.slug,
  a.date,
  highlight(search, 0, '<mark>', '</mark>') AS title,
  highlight(search, 1, '<mark>', '</mark>') AS excerpt,
  NULL AS content,
  b.name || '|' || b.slug AS author,
  (
    SELECT group_concat(d.name || '|' || d.slug, ',')
    FROM post_tags c
    INNER JOIN tags d ON c.tag_id = d.id
    WHERE c.post_id = a.rowid
  ) AS tags,
  (
    SELECT group_concat(d.name || '|' || d.slug, ',')
    FROM post_categories c
    INNER JOIN categories d ON c.category_id = d.id
    WHERE c.post_id = a.rowid
  ) AS categories
FROM search a
INNER JOIN users b ON a.author_id = b.id
WHERE a.search MATCH ?
ORDER BY {s}
LIMIT ?
OFFSET ?;
Here we query the virtual table search, which mirrors the contents of posts. We use FTS5’s highlight function to put <mark> tags around matching words. For the sort order, we’ll insert different text into {s} using comptimePrint().
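To get a feel for what highlight() emits, here’s a rough JavaScript imitation of its output format. This is purely illustrative: the real function works off FTS5’s token index rather than regular expressions, and markTerms is a made-up helper:

```javascript
// Hypothetical imitation of FTS5's highlight() output, for illustration only.
// The real function operates on FTS5's token index, not on regex matches.
function markTerms(text, terms, open = '<mark>', close = '</mark>') {
  return terms.reduce((result, term) => {
    // Wrap whole-word, case-insensitive matches with the given delimiters
    const pattern = new RegExp(`\\b(${term})\\b`, 'gi');
    return result.replace(pattern, `${open}$1${close}`);
  }, text);
}

markTerms('Intel and AMD ship new CPUs', [ 'intel', 'amd' ]);
// → '<mark>Intel</mark> and <mark>AMD</mark> ship new CPUs'
```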
We also need a second query that returns the number of search results. find-post-count.sql is very simple:
SELECT
  COUNT(*) AS count
FROM search
WHERE search MATCH ?;
Having written the SQL statements, we can now create the functions. In database.zig, first modify SQL() so that it accepts parameters for comptimePrint():
fn SQL(comptime path: []const u8, comptime params: anytype) type {
    return struct {
        const sql_templ = @embedFile(path);
        const sql = std.fmt.comptimePrint(sql_templ, params);
Make necessary adjustments to getPosts() and getPost(). Then add an enum for the possible sort orders:
const Order = enum { rank, @"date asc", @"date desc" };
Visitors will be able to choose between sorting by relevance and by date (ascending or descending).
We make use of this enum in findPosts():
pub fn findPosts(allocator: std.mem.Allocator, search: []const u8, order: Order, offset: usize, limit: usize) ![]Post {
    switch (order) {
        inline else => |o| {
            var stmt = try SQL("sql/find-posts.sql", .{@tagName(o)}).prepare();
            defer stmt.reset();
            return try stmt.all(Post, allocator, .{}, .{ search, limit, offset });
        },
    }
}
We also add findPostCount():
const Count = struct { count: u32 };

pub fn findPostCount(search: []const u8) !u32 {
    var stmt = try SQL("sql/find-post-count.sql", .{}).prepare();
    defer stmt.reset();
    const result = try stmt.one(Count, .{}, .{search}) orelse unreachable;
    return result.count;
}
And add both of them to the remote namespace in backend.zig:
pub const findPosts = work_queue.promisify(database.findPosts);
pub const findPostCount = work_queue.promisify(database.findPostCount);
After writing the backend functions, we go back to the JavaScript side and create the search page:
import { Suspense, use, useCallback } from "react";
import { isPromise, usePagination } from "../utils";
import Loading from "../widgets/Loading";
import PostList from "../widgets/PostList";

function SearchPage({ api, route, onAsyncLoad }) {
  const { search, sort = 'rank' } = route.params;
  const [ posts, more ] = usePagination((offset, limit) => api.findPosts(search, sort, offset, limit));
  const count = api.findPostCount(search);
  return (
    <div className="Page">
      <Suspense fallback={<Loading onUnmount={onAsyncLoad}/>}>
        <PostCount count={count} />
        <PostList posts={posts} onBottomReached={more} />
      </Suspense>
    </div>
  );
}

function PostCount({ count }) {
  count = isPromise(count) ? use(count) : count;
  const s = (count !== 1) ? 's' : '';
  return (
    <div className="PostCount">
      {count} result{s}
    </div>
  );
}
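PostCount accepts either a plain value or a promise, relying on the isPromise helper from utils. Its implementation isn’t shown in this section; presumably it’s a simple duck-type check along these lines (an assumption, the actual version may differ):

```javascript
// A minimal guess at the isPromise helper from utils.js:
// duck-type on a callable .then, so thenables count as promises too
function isPromise(value) {
  return typeof value?.then === 'function';
}

isPromise(Promise.resolve(42));  // → true
isPromise(42);                   // → false
```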
And a CSS class in App.css:
.PostCount {
  font-size: 1.2em;
  font-weight: bold;
}
If we type "Intel Apple" into the search box, we get the following result:

Changing sort order
We’re still missing a drop-down for controlling the sort order. Let us add that to the search page:
function SearchPage({ api, route, onAsyncLoad, onSearch }) {
  const { search, sort = 'rank' } = route.params;
  const [ posts, more ] = usePagination((offset, limit) => api.findPosts(search, sort, offset, limit));
  const count = api.findPostCount(search);
  const onOrderChange = useCallback((evt) => {
    const sort = evt.target.value;
    onSearch?.({ search, sort, replace: true });
  }, [ search, onSearch ]);
  return (
    <div className="Page">
      <Suspense fallback={<Loading onUnmount={onAsyncLoad}/>}>
        <PageOrder order={sort} onChange={onOrderChange} />
        <PostCount count={count} />
        <PostList posts={posts} onBottomReached={more} />
      </Suspense>
    </div>
  );
}

function PageOrder({ order, onChange }) {
  return (
    <div className="PostOrder">
      <select onChange={onChange} value={order}>
        <option value="rank">Relevance</option>
        <option value="date desc">Date</option>
        <option value="date asc">Date (ascending)</option>
      </select>
    </div>
  );
}
And a CSS class in App.css:
.PostOrder {
  float: right;
}
In App.jsx, we have to adjust the key:
key = `${route.params.search}-${route.params.sort}`;
And the callback:
const onSearch = useCallback(({ search, sort, replace = false }) => {
  const { location, history } = window;
  const url = new URL(location);
  if (search) {
    url.searchParams.set('search', search);
    if (sort) {
      url.searchParams.set('sort', sort);
    }
  } else {
    url.searchParams.delete('search');
    url.searchParams.delete('sort');
  }
  if (replace) {
    history.replaceState({}, '', url);
  } else {
    history.pushState({}, '', url);
  }
  setRoute(parseRoute(url, true));
}, []);
Which needs to be passed into the page component:
<Page api={api} route={route} onAsyncLoad={onTransition} onSearch={onSearch}/>
Checking for syntax error
SQLite’s FTS5 extension lets you use boolean operators to refine a search. For example, the query "Intel NOT AMD" would return articles containing "Intel" but not "AMD". The phrase needs to comply with FTS5’s syntax rules. If you type in "Intel NOT", you would get an error. Obviously, this is not user-friendly behavior. We don’t want the timer to trigger the search when it wouldn’t work.
In TopNav.jsx, we insert a try-catch block that calls findPosts():
const [ status, setStatus ] = useState('ok');
const onAction = useCallback(() => {
  let status = 'ok';
  if (search !== currentSearch) {
    try {
      api.findPosts(search, 'rank', 0, 1);
      const replace = search.startsWith(currentSearch);
      onSearch?.({ search, replace });
    } catch {
      status = 'faulty';
    }
  }
  setStatus(status);
}, [ search, currentSearch ]);
We set our input’s class name to status:
<input className={status} type="text" value={search} placeholder="Search" onChange={onChange}/>
And color it red when it’s faulty:
.TopNav input.faulty {
  background-color: #ffdddd;
}
Adding other pages
The other pages of the site largely resemble MainPage. Each runs a query that retrieves a list of articles based on specific criteria and lists them using the PostList component. Instead of explaining them, I’m just going to provide links to the JavaScript code and SQL statements used:
- AuthorPage.jsx (get-post-by-author.sql)
- TagPage.jsx (get-post-by-tag.sql)
- CategoryPage.jsx (get-post-by-category.sql)
The menu bar is also implemented using the exact same technique already described.
Fixing issues related to scrolling
Our website still suffers from issues related to scrolling. The most glaring one is that the page doesn’t scroll back to the top when we navigate to an article. We can fix this easily. In utils.js, add a second argument to parseRoute indicating whether a page transition is forward or backward:
export function parseRoute(location, forward = false) {
  const url = new URL(location);
  const path = url.pathname.slice(import.meta.env.BASE_URL.length);
  const parts = path.split('/').filter(p => !!p);
  const params = {};
  for (const [ name, value ] of url.searchParams) {
    params[name] = value;
  }
  return { parts, params, forward };
}
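To see what the function produces, here’s a standalone copy you can run outside the app. The base path is passed in as a parameter since import.meta.env.BASE_URL only exists inside a Vite build, and the example URL is made up:

```javascript
// Standalone copy of parseRoute with the base path as a parameter,
// since import.meta.env.BASE_URL is only available inside a Vite build
function parseRoute(location, forward = false, base = '/') {
  const url = new URL(location);
  const path = url.pathname.slice(base.length);
  const parts = path.split('/').filter(p => !!p);
  const params = {};
  for (const [ name, value ] of url.searchParams) {
    params[name] = value;
  }
  return { parts, params, forward };
}

const route = parseRoute('https://example.test/tags/intel?search=apple&sort=rank', true);
// route.parts   → [ 'tags', 'intel' ]
// route.params  → { search: 'apple', sort: 'rank' }
// route.forward → true
```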
In onLinkClick in App.jsx, pass true as this second argument:
setRoute(parseRoute(link, true));
Then add a useEffect() hook that acts upon it:
useEffect(() => {
  if (route.forward) {
    window.scroll(0, 0);
  }
}, [ route ]);
Another noticeable (and quite annoying) problem is the misbehavior of the back button. After navigating to an article, clicking the back button would not restore the original scroll position if the article in question is further down the list, outside the initial 20 that get retrieved. The page is simply not tall enough yet.
Again, we can fix this easily. We just need to store the already-retrieved count in the URL:
export function usePagination(cb, count = 20) {
  const [ objects, setObjects ] = useState(() => {
    const { hash } = window.location;
    const initialCount = /^#\d+$/.test(hash) ? parseInt(hash.slice(1)) : count;
    return cb(0, initialCount);
  });
  const more = async () => {
    const existing = await objects;
    const extra = await cb(existing.length, count);
    if (extra.length > 0) {
      const list = [ ...existing, ...extra ];
      setObjects(list);
      const url = new URL(window.location);
      url.hash = `#${list.length}`;
      window.history.replaceState({}, '', url);
    }
  };
  return [ objects, more ];
}
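The hash round-trip is easy to verify in isolation. These two helpers, with hypothetical names, extract the encode and decode logic from the hook above:

```javascript
// Encode the retrieved-count into a URL hash, and decode it back,
// falling back to the default when the hash isn't a plain number
const countToHash = count => `#${count}`;
const countFromHash = (hash, fallback = 20) =>
  /^#\d+$/.test(hash) ? parseInt(hash.slice(1), 10) : fallback;

countFromHash('#60');        // → 60
countFromHash('');           // → 20 (no hash yet)
countFromHash('#/article');  // → 20 (not a count)
```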
Fixing compatibility issues
In web-file.js, we’re using bytes() to obtain data from the HTTP response. As the documentation points out, the function is a recent addition. It’s not available in older browsers. We should use the widely available arrayBuffer() instead.
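If you’d prefer to keep the newer call where it exists, a small feature-detecting wrapper is another option. This is just a sketch; responseBytes is a made-up name, and the tutorial itself simply switches to arrayBuffer():

```javascript
// Use Response.bytes() when the browser has it, otherwise fall back
// to the universally supported arrayBuffer()
async function responseBytes(resp) {
  if (typeof resp.bytes === 'function') {
    return resp.bytes();  // newer API: resolves to a Uint8Array directly
  }
  return new Uint8Array(await resp.arrayBuffer());
}
```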
Another problem occurs in Chrome when the database file exists in the browser cache. Instead of performing an actual range request, Chrome would simulate it using the cached copy. Since its contents are compressed, the length we obtain from the content-range header would be wrong. What we want to do is retrieve the whole file when we detect the presence of a content-encoding header.
Here’s WebFile’s create function with the fixes applied:
static async create(url, prefetch = 16384) {
  let resp = await fetch(url, {
    headers: {
      Range: `bytes=0-${prefetch - 1}`,
    },
  });
  let size;
  if (resp.status === 206) {
    const encoding = resp.headers.get("content-encoding");
    if (encoding) {
      resp = await fetch(url);
    } else {
      const range = resp.headers.get("content-range");
      size = parseInt(range.slice(range.indexOf('/') + 1)); // e.g. "bytes 0-16383/1261568"
    }
  }
  const buffer = await resp.arrayBuffer();
  if (size === undefined) {
    size = buffer.byteLength;
  }
  const self = new WebFile();
  self.url = url;
  self.size = size;
  self.pos = 0;
  self.cache = new Uint8Array(buffer);
  return self;
}
Lowering First Contentful Paint (FCP)
By default, Zigar uses top-level await to wait for the download and compilation of the WebAssembly code. This ensures that synchronous Zig functions are immediately available for use upon import. Since the first function that gets called is asynchronous, we don’t need this. Setting topLevelAwait to false allows our app to draw the top nav more quickly, making it feel more responsive.
Add the option in vite.config.js:
zigar({ optimize: 'ReleaseSmall', multithreaded: true, topLevelAwait: false }),
Deployment
The demo site for this tutorial is hosted at Cloudflare. By default, Cloudflare doesn’t know how to handle a .db file. We need to add a cache rule to enable range requests:

And a compression rule to enable compression:

Also needed is a _headers file at the site’s root level, providing the HTTP headers needed for multithreading:
/*
  Cross-Origin-Opener-Policy: same-origin
  Cross-Origin-Embedder-Policy: require-corp
Conclusion
To test the