I’ve been following the Zig programming language with keen interest over the past several months. I’ve been eagerly awaiting the new async/await API’s release ever since I watched this video.
The foundation for this long-awaited feature arrived via pull request #25592, which was merged just one day before I began writing this. I was so excited to try it out that I compiled Zig from the master branch on my local machine, which is currently the easiest way to access it.
Zig removed async/await support from earlier versions while the team completely redesigned the API from the ground up. The goal is to create something entirely new that differs from what other languages like Go, JavaScript, and Rust do. The new async I/O framework is set for release in version 0.16.0. To use it, you can either compile the master branch or download the latest tarball on this page.
To understand why this redesign matters, consider JavaScript’s function model. There are regular functions and async functions—two distinct “colors” that don’t mix well. You can call regular functions from within async functions, but async functions cannot be called from regular functions without special handling.
Those who lived through JavaScript’s initial async/await rollout will remember the pain. The migration was brutal. Projects like Node.js had to maintain dual standard libraries: one callback-based and another using async/await. Codebases were effectively split in two.
The Io API is intended to work like the Allocator interface: swap in a different allocator and the rest of the program keeps working. Likewise, you can swap the Io implementation without changing anything else in the codebase. For example, to declare a single-threaded Io:
const std = @import("std");
var io: std.Io.Threaded = .init_single_threaded;
A multithreaded Io looks like this:
var io: std.Io.Threaded = .init(allocator);
// The number of threads can be configured by setting cpu_count
io.cpu_count = 4;
If you want async IO (io_uring/kqueue), it’s as simple as:
var event_io: std.Io.Evented = undefined;
try event_io.init(allocator, .{});
const io = event_io.io();
I am no expert on language design, but this is quite sophisticated.
Disclaimer: The std.Io.Evented interface does not yet work on my machine, because the Zig version I compiled (0.16.0-dev.1187+1d80c9540) only implements io_uring; the macOS backend, kqueue, is not implemented yet.
One of the most important patterns in Zig’s new async I/O model is proper resource cleanup. As Andrew Kelley emphasizes in his article about the new Zig feature, directly chaining try with await can cause resource leaks when errors occur early in your code, skipping subsequent cleanup operations.
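To make the hazard concrete, here is a sketch of the anti-pattern (someOperation and anotherOperation are placeholder names, mirroring the fragment style used elsewhere in this post):

```zig
// Anti-pattern sketch: no defer-ed cancel.
var task1 = try io.async(someOperation, .{args});
var task2 = try io.async(anotherOperation, .{args});
try task1.await(io); // if this returns an error...
try task2.await(io); // ...task2 is never awaited or cancelled and its resources can leak
```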
Both cancel and await are idempotent and share the same semantics, with one key difference: cancel also requests that the task terminate. This means you can safely call cancel on a task that has already completed; the operations are designed to be safe and predictable. In my tests, omitting cancel caused the operation to hang.
Here’s the pattern you should follow:
var task1 = try io.async(someOperation, .{args});
defer task1.cancel(io) catch {};
var task2 = try io.async(anotherOperation, .{args});
defer task2.cancel(io) catch {};
// Await all tasks BEFORE handling errors
try task1.await(io);
try task2.await(io);
The key insight is to always place defer cancel() immediately after creating each task, then await all tasks before handling any errors, much like calling defer deinit() to clean up a heap allocation. This ensures that:
Resources are cleaned up even if errors occur early (before awaiting).
Cancel is safely called on completed tasks (it’s idempotent).
No resource leaks occur from early error returns.
Note that we use defer because cancel should happen regardless of success or failure—it’s safe to cancel a completed task.
Asynchronous code does not automatically mean concurrent execution. This distinction is essential for understanding how Zig’s async I/O works.
Using a single-threaded pool with the async function can result in deadlock in producer-consumer patterns. The async function simply separates the call to a function from its return—it does not guarantee that multiple operations will run concurrently.
This is where the concurrent function becomes important. Unlike async, concurrent explicitly expresses a concurrency requirement. If you use concurrent on a truly single-threaded system, it fails with error.ConcurrencyUnavailable rather than deadlocking, which is preferable because the failure can be handled in your normal error-handling flow.
The key takeaway: use async when you want asynchronous behavior (decoupled call/return), but use concurrent when you actually require operations to run simultaneously. This distinction prevents subtle bugs and makes your concurrency requirements explicit in the code.
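A minimal sketch of that fallback, reusing only the APIs shown above (the exact error set and signatures may differ across dev versions):

```zig
const std = @import("std");

pub fn main() !void {
    // A single-threaded Io implementation: no extra threads available.
    var io_impl: std.Io.Threaded = .init_single_threaded;
    const io = io_impl.io();

    // `concurrent` requires real parallel progress, so here it fails
    // fast with error.ConcurrencyUnavailable instead of deadlocking.
    var task = io.concurrent(produce, .{}) catch |err| switch (err) {
        error.ConcurrencyUnavailable => {
            std.log.warn("no concurrency available; running inline instead", .{});
            return produce();
        },
        else => return err,
    };
    defer task.cancel(io) catch {};
    try task.await(io);
}

fn produce() !void {
    // Placeholder for work that must run concurrently with a consumer.
}
```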
Here’s a complete example demonstrating true concurrent execution with multiple HTTP requests:
const std = @import("std");
const HostName = std.Io.net.HostName;
pub fn main() !void {
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
const allocator = gpa.allocator();
var io_impl: std.Io.Threaded = .init(allocator);
defer io_impl.deinit();
const io = io_impl.io();
const host_name: HostName = try .init("example.com");
// Run 3 requests concurrently
var results: [3]usize = undefined;
// Start all concurrent tasks
var task1 = try io.concurrent(request_website, .{ allocator, io, host_name, 0, &results[0] });
defer task1.cancel(io) catch {};
var task2 = try io.concurrent(request_website, .{ allocator, io, host_name, 1, &results[1] });
defer task2.cancel(io) catch {};
var task3 = try io.concurrent(request_website, .{ allocator, io, host_name, 2, &results[2] });
defer task3.cancel(io) catch {};
// Wait for all tasks to complete
try task1.await(io);
try task2.await(io);
try task3.await(io);
std.log.info("All requests completed successfully!", .{});
std.log.info("Results: {any}", .{results});
}
fn request_website(allocator: std.mem.Allocator, io: std.Io, host_name: HostName, index: usize, result: *usize) !void {
var http_client: std.http.Client = .{ .allocator = allocator, .io = io };
defer http_client.deinit();
var request = try http_client.request(.HEAD, .{
.scheme = "http",
.host = .{ .percent_encoded = host_name.bytes },
.port = 80,
.path = .{ .percent_encoded = "/" },
}, .{});
defer request.deinit();
try request.sendBodiless();
var redirect_buffer: [1024]u8 = undefined;
const response = try request.receiveHead(&redirect_buffer);
std.log.info("Index {d} received {d} {s}", .{ index, response.head.status, response.head.reason });
result.* = index;
}
Running this program produces output like:
info: Index 2 received 200 OK
info: Index 1 received 200 OK
info: Index 0 received 200 OK
info: All requests completed successfully!
info: Results: { 0, 1, 2 }
Notice how the HTTP responses arrive in non-deterministic order (2→1→0 in this run), demonstrating true concurrent execution. Despite the completion order varying, each task correctly writes to its designated position in the results array, showing proper concurrent data handling. The defer task.cancel(io) ensures cleanup happens even if errors occur.
Since launching Zedis, I’ve been eager to benchmark it against Redis. Presently, Zedis is multithreaded—but that’s temporary. My end goal is to make it single-threaded.
The current multithreaded architecture exists solely to enable testing of pub/sub and other modules. Other high-performance databases written in Zig, like TigerBeetle, are single-threaded with async I/O based on io_uring. Redis takes a different approach: single-threaded execution with multithreaded I/O.
Knowing the Zig team was working on a proper async/await implementation, I made a conscious decision to wait. Now that the new API is available, it’s time to properly rebuild Zedis’ network layer.