All-flash and software-defined storage business PEAK:AIO has evolved its single-node metadata server technology into a multi-node, scale-out product using open source parallel NFS code.
The PEAK Open pNFS product is a vendor-agnostic metadata server that allows both metadata and storage performance to grow simply and smoothly as AI and HPC workloads demand more storage. It has been developed in collaboration with Los Alamos National Laboratory and Carnegie Mellon University, and, PEAK says, it “delivers a genuinely open, linearly scalable alternative to aging, proprietary file systems.”
PEAK:AIO's Mark Klarzynski explains that the metadata server (MDS) "handles LOOKUP and GETATTR, issues layouts, and the client streams directly to the correct NFSv3 data server (DS) over RDMA or TCP. The aim is predictable, centrally managed scale without proprietary clients. This is intended for AI and HPC users who often want to start small and grow, while keeping operational control in the storage tier. It is designed so there is no penalty for those that do not want to start large."
He says: "The MDS provides layout assignment, layout commit handling, and return-on-close. Clients open via the MDS, then perform data I/O directly with DS nodes. The initial release focuses on Linux NFSv4.2 pNFS on the client and NFSv3 on DS. RDMA is the preferred transport. GPUDirect alignment comes from the RDMA path."
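The control/data split described above can be sketched in a few lines. This is a hedged, illustrative toy, not PEAK:AIO's implementation: real pNFS clients live in the Linux kernel, and all class and function names here are hypothetical.

```python
# Illustrative sketch of the pNFS split: metadata operations go to the MDS,
# which issues a layout; bulk data I/O then goes straight to the data servers.
# All names are hypothetical, for illustration only.

STRIPE = 4  # bytes per stripe unit (tiny, for illustration)

class DataServer:
    """Stands in for an NFSv3 DS reached over RDMA or TCP."""
    def __init__(self):
        self.blocks = {}  # (filename, byte_offset) -> bytes

class MetadataServer:
    """Handles LOOKUP/GETATTR-style calls and hands out layouts."""
    def __init__(self, data_servers):
        self.ds = data_servers

    def open(self, filename):
        # The layout tells the client which DS holds each stripe:
        # stripe i lives on DS (i mod number_of_DS).
        return {"file": filename, "stripe": STRIPE, "ds": self.ds}

def write(layout, data):
    # Data path: the client talks to DS nodes directly, not via the MDS.
    for off in range(0, len(data), layout["stripe"]):
        ds = layout["ds"][(off // layout["stripe"]) % len(layout["ds"])]
        ds.blocks[(layout["file"], off)] = data[off:off + layout["stripe"]]

def read(layout, length):
    out = b""
    for off in range(0, length, layout["stripe"]):
        ds = layout["ds"][(off // layout["stripe"]) % len(layout["ds"])]
        out += ds.blocks[(layout["file"], off)]
    return out

servers = [DataServer(), DataServer()]
mds = MetadataServer(servers)
layout = mds.open("model.ckpt")        # open via the MDS, receive a layout
write(layout, b"ABCDEFGH")             # stripes land directly on DS nodes
assert read(layout, 8) == b"ABCDEFGH"
assert all(s.blocks for s in servers)  # both DS nodes hold data
```

The point of the sketch is that adding a `DataServer` widens the data path without touching the client-visible namespace, which is what lets capacity and bandwidth grow independently of the metadata tier.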
MDS scaling is based on a sharded metadata service: "The shard map grows as demand grows. The team is evaluating learned heuristics to decide when and what to shard, rather than relying only on the static policies used in legacy implementations."
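A growing shard map can be sketched as follows. This is an assumption-laden toy, not PEAK:AIO's design: paths hash to shards, and the static "double when hot" policy stands in for the learned heuristics the team is evaluating.

```python
# Hedged sketch of a sharded metadata service whose shard map grows with
# demand. The split policy here is a static placeholder, not PEAK's.
import hashlib

class ShardMap:
    def __init__(self, shards=2, max_per_shard=4):
        self.shards = shards
        self.max_per_shard = max_per_shard
        self.counts = [0] * shards  # per-shard load counters

    def shard_for(self, path):
        # Deterministic hash placement of a path onto a shard.
        h = int(hashlib.sha256(path.encode()).hexdigest(), 16)
        return h % self.shards

    def record(self, path):
        s = self.shard_for(path)
        self.counts[s] += 1
        # Static policy: double the shard count when any shard runs hot.
        # A learned heuristic would decide when and what to shard instead.
        if self.counts[s] > self.max_per_shard:
            self._grow()

    def _grow(self):
        self.shards *= 2
        self.counts = [0] * self.shards  # loads rebalance under the new map

m = ShardMap()
for i in range(20):
    m.record(f"/data/set-{i}")
print(m.shards)  # shard count has grown with metadata demand
```

Note that naive doubling invalidates existing placements; a production design would use consistent hashing or a directory-granularity map so that growth moves only a fraction of entries.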
Klarzynski was open about the system's performance. Per-node performance depends on the ODM server used and the reference architecture. He mentions "about 160 GB/s reads on a 2U Gen5 NVMe DS. The target is about 320 GB/s per node in 2026, subject to NIC and media. In fio phase testing, aggregate GB/s and 4K IOPS scale linearly with DS count."
PEAK has been active in the MLPerf storage area, and Klarzynski points out: “Early MLPerf Storage work has matched figures previously attributed to configurations with roughly 22 data servers by using only two PEAK:AIO data servers (1/10th of the infrastructure to achieve the same performance). This reduces infrastructure and operational complexity by about an order of magnitude.”
Several industry players support the Open pNFS tech: Los Alamos National Laboratory, Kioxia America, Scan Computing, Solidigm, Western Digital, and Wiwynn.
Customers can start with a single HA node that can host both MDS and DS functions. A small site can begin at around 100 TB, then add DS nodes for capacity or performance. At 160 GB/s per DS, about seven DS nodes reach roughly 1 TB/s, which is the size some vendors require just to stand up a minimum cluster. Network guidance is dual 400 GbE per DS or higher. The bill of materials is a repeatable node type with standard NICs and NVMe.
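The sizing arithmetic above is easy to check, assuming the linear scaling PEAK reports holds. The helper function below is purely illustrative:

```python
# Back-of-envelope sizing from the figures quoted above: 160 GB/s per DS
# today, with a 320 GB/s per-node target for 2026. Assumes linear scaling.
def ds_nodes_for(target_gbps, per_node_gbps):
    """Data servers needed to hit a target aggregate throughput."""
    return -(-target_gbps // per_node_gbps)  # ceiling division

# Roughly 1 TB/s (1,000 GB/s) at today's per-node rate:
assert ds_nodes_for(1000, 160) == 7
# The same target at the 2026 per-node goal needs about half the nodes:
assert ds_nodes_for(1000, 320) == 4
print(ds_nodes_for(1000, 160) * 160)  # aggregate GB/s at seven nodes: 1120
```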
He has seen a 42 RU Open pNFS rack delivering 3 TB/s and is targeting 6 TB/s in 2026.
Open pNFS and Tier-0
Hammerspace is a prominent competitor, and promotes the notion of using Tier-0 storage, the locally attached drives in a GPU server, for fast GPU-to-data access. Klarzynski says: "Tier-0 uses host-local NVMe on training or inference nodes and can be presented in a global namespace. It measures local media, not the network path, so it is not comparable to pNFS scale-out results.
"More importantly, Tier-0 pushes RAID, NFS, RDMA, firmware, and OS lifecycle tasks to the user. PEAK:AIO supports Tier-0 for advanced teams, but has seen simple, repeatable failures in the field. The company is working on a safer Tier-0 that keeps responsibility with the storage tier, rather than pushing it to the user and relying on user-managed resilient copies."
He has overall views about competing suppliers as well:
- Hammerspace promotes a pNFS-aware stack and Tier-0 performance messaging due to a lack of data servers. Previous non-Tier-0 results show PEAK:AIO with a 10-fold improvement.
- VAST and WEKA post strong results using their own methods, with WEKA requiring a proprietary client and VAST, although using NFS, still relying on proprietary agents to achieve its results.
- DDN’s Lustre is fast and mature with Lustre clients and specialist ops.
- NetApp supports some level of pNFS but is controller-centric.
- Pure is moving towards pNFS, as is Dell, though with different markets and legacy code.
Klarzynski positions PEAK's Open pNFS as having a standard Linux client, RDMA transport, a single-node start, and linear scale without a proprietary agent, "developed in a modern-day approach to enable a truly, fully open solution to replace those that are tired and due a refresh, or proprietary."
PEAK will be present at the SC25 event at Booth #6359 and Exhibitor Suite ES10 for a live demo of PEAK Open pNFS, and to speak directly with the engineers. To explore the project or request early access, visit www.peakaio.com/openpnfs.