Estonia-based Leil Storage has raised €1.5 million ($1.73 million) in funding to develop its vast data set storage system based on spun-down shingled magnetic recording (SMR) disk drives.
As envisaged by Leil last year, the technology can be used to archive data on-premises with faster access than tape libraries, as spinning up disks takes less time than reading tape. Now, with the rise of AI, it says customers can store enormous datasets more cost-efficiently and with lower energy use than with always-on disk drives. Hyperscalers such as Google, AWS, and Meta, as well as Dropbox, use SMR drives, which offer around 20 percent more capacity than standard conventional magnetic recording (CMR) disk drives. These operators have developed their own software to overcome SMR’s zone-rewrite behavior, caused by its partially overlapping write tracks. This slows write access to previously written SMR tracks because entire zones of tracks have to be read, fresh data merged in, and the whole zone rewritten, instead of just rewriting a section of a single track as with CMR drives.
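To illustrate the rewrite penalty, here is a minimal sketch, assuming an illustrative zone size and an in-memory stand-in for the disk, of how an SMR update turns into a zone-sized read-modify-write while a CMR update touches only the affected bytes. It is not Leil's code, just a model of the behavior described above:

```python
# Minimal sketch of the SMR zone read-modify-write penalty described above.
# The zone size and the in-memory "disk" are illustrative, not Leil's implementation.

ZONE_SIZE = 256 * 1024 * 1024  # e.g. 256 MiB zones, a typical HM-SMR zone size

def cmr_update(disk: bytearray, offset: int, data: bytes) -> None:
    """CMR: rewrite just the affected bytes in place."""
    disk[offset:offset + len(data)] = data

def smr_update(disk: bytearray, offset: int, data: bytes) -> None:
    """SMR: read the whole zone, merge the new data, rewrite the zone sequentially."""
    zone_start = (offset // ZONE_SIZE) * ZONE_SIZE
    zone = bytearray(disk[zone_start:zone_start + ZONE_SIZE])  # read the entire zone
    rel = offset - zone_start
    zone[rel:rel + len(data)] = data                           # merge in the fresh data
    disk[zone_start:zone_start + ZONE_SIZE] = zone             # rewrite the whole zone
```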
Leil says its software compensates for the zone-rewrite problem and makes a fleet of SMR HDDs usable by enterprises.
Aleksandr Ragel.
Aleksandr Ragel, Co-founder and CEO of Leil, said: “The next wave of innovation in AI and science is buckling under the weight and cost of its own data. We founded Leil to change that.
“We are making hyperscale storage economics a reality for every enterprise, delivering significant cost savings, reducing environmental impact, and ensuring critical data remains under our customers’ control. Our HDD-native approach builds a high-performing, more resilient and efficient foundation for the data-intensive future.”
The technology, known as the Infinite Cold Engine (ICE), is based on Leil’s SaunaFS distributed and parallel POSIX file system, which supports host-managed SMR (HM-SMR), drive-managed SMR, and CMR drives, allowing them to be mixed in the same cluster for gradual adoption. ICE focuses on SMR drives and integrates advanced data placement, power management (active, idle, spun-down), and erasure coding within the SaunaFS distributed file system. SaunaFS is an open-source evolution of SAAFS, written in C++ with a Google File System-inspired architecture.
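As a hedged sketch of that mix of drive types and power states (the names and structures below are invented for illustration, not SaunaFS or ICE identifiers):

```python
# Illustrative sketch of the drive types and power states described above;
# the class and field names are invented, not taken from SaunaFS or ICE.
from dataclasses import dataclass
from enum import Enum, auto

class DriveType(Enum):
    HM_SMR = auto()   # host-managed SMR
    DM_SMR = auto()   # drive-managed SMR
    CMR = auto()      # conventional magnetic recording

class PowerState(Enum):
    ACTIVE = auto()
    IDLE = auto()
    SPUN_DOWN = auto()

@dataclass
class Drive:
    name: str
    drive_type: DriveType
    power_state: PowerState = PowerState.ACTIVE

# Different recording types can coexist in one cluster, allowing gradual adoption.
cluster = [
    Drive("srv1/d0", DriveType.CMR),
    Drive("srv1/d1", DriveType.DM_SMR),
    Drive("srv1/d2", DriveType.HM_SMR, PowerState.SPUN_DOWN),
]
```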
The software keeps frequently accessed content on active drives while moving older data to drives kept in a spun-down state. It requires SATA drives, as it uses the SATA Pin 3 power-disable signal to power down the drives. When spun-down data is needed, the system powers up the relevant drive (taking seconds) and accesses the content. This is similar to the old Massive Array of Idle Disks (MAID) concept.
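A minimal sketch of that MAID-like flow follows, using a placeholder for the SATA Pin 3 power control and invented helper names rather than Leil's APIs:

```python
# Sketch of powering a cold drive up on demand. The power-control calls are
# placeholders for the SATA Pin 3 power-disable mechanism mentioned above;
# they are not real Leil or SaunaFS APIs.
import time

SPIN_UP_SECONDS = 10  # illustrative; the article only says spin-up takes seconds

class ColdDrive:
    def __init__(self, name: str):
        self.name = name
        self.powered = False

    def power_up(self) -> None:
        if not self.powered:
            time.sleep(SPIN_UP_SECONDS)  # stand-in for asserting power via Pin 3
            self.powered = True

    def power_down(self) -> None:
        self.powered = False             # stand-in for de-asserting power when idle

def read_cold_data(drive: ColdDrive, chunk_id: int) -> bytes:
    drive.power_up()                     # seconds of latency, still faster than a tape recall
    return f"chunk-{chunk_id}".encode()  # stand-in for the actual disk read
```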
ICE operates on clusters of commodity servers attached to Just a Bunch of Disks (JBOD) enclosures, such as Western Digital’s Data60 and Data102, with a typical cluster comprising eight servers, each housing 60–102 hard drives (e.g. 28 TB or 32 TB Western Digital HM-SMR models). Data is stored in 64 MB chunks composed of 64 KB blocks, protected by erasure coding (e.g. a 6+2 scheme: six data fragments plus two parity fragments) spread across servers to ensure data recovery if up to two servers fail.
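The arithmetic behind those numbers can be sketched as follows; the XOR parity below is a toy stand-in for the real erasure code, which the article does not specify:

```python
# Worked sketch of the layout above: a 64 MB chunk split into 6 data fragments
# plus 2 parity fragments, one fragment per server, so any two servers can fail
# without data loss. The parity maths here is a toy stand-in, not the real code.
CHUNK_SIZE = 64 * 1024 * 1024   # 64 MB chunks
BLOCK_SIZE = 64 * 1024          # made up of 64 KB blocks
DATA_FRAGMENTS = 6
PARITY_FRAGMENTS = 2
SERVERS = DATA_FRAGMENTS + PARITY_FRAGMENTS       # one fragment per server (8 total)

fragment_size = CHUNK_SIZE / DATA_FRAGMENTS       # ~10.7 MB stored per data server
overhead = SERVERS / DATA_FRAGMENTS               # 8/6 ≈ 1.33x raw capacity per usable byte

def place_chunk(chunk: bytes) -> list[bytes]:
    """Split a chunk into 6 data fragments and append 2 toy parity fragments."""
    pad = (-len(chunk)) % DATA_FRAGMENTS          # pad so the chunk splits evenly
    chunk = chunk + b"\0" * pad
    frag_len = len(chunk) // DATA_FRAGMENTS
    frags = [chunk[i * frag_len:(i + 1) * frag_len] for i in range(DATA_FRAGMENTS)]
    parity = bytes(a ^ b ^ c ^ d ^ e ^ f for a, b, c, d, e, f in zip(*frags))
    return frags + [parity, parity]               # toy parity; a real 6+2 scheme uses
                                                  # two independent parity fragments
```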
SaunaFS stores metadata on NVMe SSDs for speed, with the bulk data placed on the SMR/CMR drives. Configuration files define logical write groups as rows of drives across servers, isolating data operations to subsets of drives. Multiple groups can be used for tiering (e.g. hot/warm/cold data), compatible with SMR’s zone-based writing and CMR’s random access.
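A hypothetical illustration of such write groups, with invented structure and names since the article does not show the actual configuration format:

```python
# Hypothetical illustration of write groups as described above: each group is a
# "row" of one drive per server, and tiers map to groups. Names and fields are
# invented for the example, not Leil's configuration schema.
write_groups = {
    "hot":  {"servers": ["srv1", "srv2", "srv3", "srv4", "srv5", "srv6", "srv7", "srv8"],
             "drive_slot": 0,     # one drive per server forms the group's row
             "media": "CMR"},
    "cold": {"servers": ["srv1", "srv2", "srv3", "srv4", "srv5", "srv6", "srv7", "srv8"],
             "drive_slot": 1,
             "media": "HM-SMR",
             "power_policy": "spin_down_when_idle"},
}

def drives_touched_by_write(group_name: str) -> list[tuple[str, int]]:
    """A write only touches its own group's drives, so other rows can stay spun down."""
    g = write_groups[group_name]
    return [(server, g["drive_slot"]) for server in g["servers"]]
```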
When SaunaFS adds a server (and JBOD) or recovers from a failure, it selects a “diagonal” slice of data (one drive’s worth from each existing write group) and migrates it to the new server. This frees space incrementally, enabling new write groups without a full-cluster spin-up or data reshuffling. The write-grouping prevents “wake-up storms.”
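A hedged sketch of that “diagonal” selection, with invented group and drive names:

```python
# Sketch of the "diagonal" rebalance described above: when a new server joins,
# one drive's worth of data is taken from a different drive in each existing
# write group and migrated to it, so no full-cluster spin-up or wholesale
# reshuffle is needed. Group and drive naming is illustrative only.
def diagonal_slice(existing_groups: dict[str, list[str]], new_server: str) -> list[tuple[str, str]]:
    """Pick one drive's worth of data from each existing group to move to new_server."""
    moves = []
    for i, (group, drives) in enumerate(existing_groups.items()):
        donor = drives[i % len(drives)]   # a different drive from each group: the diagonal
        moves.append((donor, new_server))
    return moves

groups = {
    "wg0": ["srv1/d0", "srv2/d0", "srv3/d0"],
    "wg1": ["srv1/d1", "srv2/d1", "srv3/d1"],
    "wg2": ["srv1/d2", "srv2/d2", "srv3/d2"],
}
print(diagonal_slice(groups, "srv4"))
# [('srv1/d0', 'srv4'), ('srv2/d1', 'srv4'), ('srv3/d2', 'srv4')]
```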
Leil claims its technology “manages how data is written, moved and recovered across thousands of drives while retaining its higher density and lower power consumption. Crucially, it classifies content by access patterns, groups together inactive files, and powers down the respective drives until the data is needed again. Depending on the use case, this enables up to 70 percent energy savings without compromising availability.”
Given that scale, the technology should really only be considered by enterprises needing multi-petabyte data set storage, not “every enterprise.”
Leil’s software is offered in editions tailored to customer needs, with an open-source variant providing core features. Its power-saving feature gives it a strong green story compared with always-on disk drive stores of equivalent capacity.
The seed cash will accelerate Leil’s go-to-market strategy, expand the product roadmap, and grow the commercial team. Leil is planning AI/heuristic-based “smart placement” for adaptive tiering, broader open-source SAAFS integration, and community adoption of write-grouping.
We might envisage future developments looking at faster data delivery to GPUs.
Bootnote
Leil stands for L (Large scale), E (Energy-efficient), I (Infinite), and L (Local). Find SaunaFS documentation here.