🔢 Binary Formats
Serialization, Endianness, Schema Evolution, Compact Encoding
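Endianness, one of the topics this feed tracks, fixes the byte order used when serializing multi-byte integers. A minimal illustration with Python's standard struct module (a generic sketch, not tied to any particular item below):

```python
import struct

# The same 32-bit integer serialized under both byte orders.
value = 0x01020304
big = struct.pack(">I", value)     # big-endian / network order
little = struct.pack("<I", value)  # little-endian, common on x86 and ARM

assert big == b"\x01\x02\x03\x04"
assert little == b"\x04\x03\x02\x01"

# Unpacking with the matching order recovers the same value either way.
assert struct.unpack(">I", big)[0] == struct.unpack("<I", little)[0] == value
```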
Scoured 160,265 posts in 20.8 ms
Container Transforms: Working with Nested Binary Formats
🔄 Binary Translation · binary.ninja · 2d
mmgehlot/rusdantic: A unified, high-performance data validation and serialization framework for Rust, inspired by Pydantic's ergonomics and powered by Serde.
⚙️ TOML Parsers · github.com · 3d · Hacker News
Towards Formal Security Proofs of MQOM
📡 Binary Protocols · eprint.iacr.org · 2d
Compression.zstd – Compression compatible with the Zstandard format
📦 Compression Algorithms · docs.python.org · 3d · Hacker News
Fujitsu One Compression (LLM Quantization)
📦 Compression Algorithms · fujitsuresearch.github.io · 1d · Hacker News
jsongrep - A path query language for JSON, YAML, TOML, and other serialization formats.
📋 JSON Parsers · terminaltrove.com · 2d
TIL: Quantisation ∀ Quantified Types
anup.io · 5d
Structural modifications in strain-engineered bilayer nickelate thin films
✨ Effect Inference · nature.com · 1d
Archive Format Guide 2024: ZIP vs 7Z vs RAR vs TAR vs GZIP - Complete Compression Comparison
📦 Compression Algorithms · luxa.org · 3d
Autoencoders, VAEs & GANs Explained Simply
🪜 Recursive Descent · medium.com · 2d
I Read a Gzip Decompressor Written in 250 Lines of Rust — and Compression Finally Made Sense
📦 Compression Algorithms · medium.com · 5d
jhammant/Turbo1bit: Turbo1Bit: Combining 1-bit LLM weights (Bonsai) with TurboQuant KV cache compression for maximum inference efficiency. 4.2x KV cache compression + 16x weight compression = ~10x total memory reduction.
🗺️ Region Inference · github.com · 2h · Hacker News
How Boost.Asio and Boost.Serialization powered a reinforcement learning cognitive radio on the ISS
📡 Async Channels · boost.org · 6d · r/cpp
Google's TurboQuant Changes the Economics of Local AI Inference
🗺️ Region Inference · medium.com · 4d
Google TurboQuant and What It Changes in Language Models
🪜 Recursive Descent · medium.com · 4d
Pure C implementation of the TurboQuant paper (ICLR 2026) for KV cache compression in LLM inference.
🗺️ Region Inference · github.com · 1d · r/LocalLLaMA
ppb: A non-allocating lexer for protocol buffers
🌊 Streaming Lexers · github.com · 1d · Lobsters
mmgehlot/bitpolar: BitPolar: near-optimal vector quantization — 3-8 bit compression with zero training. 58 integrations across every major AI framework.
🎯 Bit Vectors · github.com · 3d · Hacker News
castnettech/mnemosyne: LLM context compression and retrieval engine. Zero dependencies. Sub-100ms queries. 40-70% token reduction.
🔄 Subinterpreters · github.com · 5d · r/SideProject
yasha1971-coder/aceapex: compression, lossless, lz77, zstd, cplusplus, performance
📦 Compression Algorithms · github.com · 6d · Hacker News