📝 Text Algorithms
String Matching, Suffix Trees, Edit Distance, Compression
Scoured 159,869 posts in 27.7 ms
castnettech/mnemosyne: LLM context compression and retrieval engine. Zero dependencies. Sub-100ms queries. 40-70% token reduction.
🔄 Subinterpreters · github.com · 5d · r/SideProject
Lisa Core – semantic compression for AI conversations (80:1 ratio, 100% local)
🔄 Incremental Tokenizers · chromewebstore.google.com · 1h · Hacker News
Fujitsu One Compression (LLM Quantization)
📦 Compression Algorithms · fujitsuresearch.github.io · 1d · Hacker News
Google Research talks compression technology it says will greatly reduce memory needed for AI processing
💾 Cache-Oblivious Algorithms · networkworld.com · 3h
Towards Formal Security Proofs of MQOM
📡 Binary Protocols · eprint.iacr.org · 2d
Google TurboQuant and What It Changes in Language Models
🪜 Recursive Descent · medium.com · 4d
Autoencoders, VAEs & GANs Explained Simply
🪜 Recursive Descent · medium.com · 2d
Google's TurboQuant saves memory, but won't save us from DRAM-pricing hell
🗺️ Region Inference · theregister.com · 1d
Compression.zstd – Compression compatible with the Zstandard format
📦 Compression Algorithms · docs.python.org · 3d · Hacker News
Google's TurboQuant Changes the Economics of Local AI Inference
🗺️ Region Inference · medium.com · 4d
The Sequence Radar #832: Last Week in AI: Compression, Voice, and Why It All Matters
🏁 Language Benchmarks · thesequence.substack.com · 4d · Substack
jhammant/Turbo1bit: Turbo1Bit: Combining 1-bit LLM weights (Bonsai) with TurboQuant KV cache compression for maximum inference efficiency. 4.2x KV cache compression + 16x weight compression = ~10x total memory reduction.
🗺️ Region Inference · github.com · 3h · Hacker News
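The "~10x total" figure in the Turbo1bit entry is a weighted combination, not a sum: the overall reduction depends on what fraction of memory is weights versus KV cache. A minimal back-of-the-envelope sketch, assuming a hypothetical 80/20 weight/KV memory split (the post does not state one):

```python
# Back-of-the-envelope check of the "~10x total memory reduction" claim.
# The 80/20 weight/KV split below is a hypothetical assumption, not a
# figure from the post; the 16x and 4.2x factors are from the entry above.

def combined_reduction(weight_frac: float, weight_x: float, kv_x: float) -> float:
    """Overall reduction when each memory component shrinks by its own factor."""
    kv_frac = 1.0 - weight_frac
    remaining = weight_frac / weight_x + kv_frac / kv_x
    return 1.0 / remaining

total = combined_reduction(weight_frac=0.8, weight_x=16.0, kv_x=4.2)
print(f"{total:.1f}x")  # ~10.2x with this split
```

With an 80/20 split the two factors combine to roughly 10x; a more KV-heavy workload would land closer to the 4.2x end, since the least-compressed component dominates.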
Archive Format Guide 2024: ZIP vs 7Z vs RAR vs TAR vs GZIP - Complete Compression Comparison
📦 Compression Algorithms · luxa.org · 3d
I Read a Gzip Decompressor Written in 250 Lines of Rust — and Compression Finally Made Sense
📦 Compression Algorithms · medium.com · 5d
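The format that decompressor handles is DEFLATE (LZ77 back-references plus Huffman coding) wrapped in a short gzip header and a CRC32 trailer. A minimal round-trip sketch using Python's stdlib, not the Rust code from the post:

```python
import gzip
import zlib

# gzip = DEFLATE stream (LZ77 matching + Huffman coding) wrapped in a
# small header and a CRC32/length trailer.
data = b"compression finally made sense " * 64

blob = gzip.compress(data)
assert gzip.decompress(blob) == data

# The same blob decodes with raw zlib: wbits=31 tells zlib to expect
# the gzip header/trailer around the DEFLATE payload.
assert zlib.decompress(blob, wbits=31) == data

print(len(data), "->", len(blob))
```

The highly repetitive input compresses to a small fraction of its original size, which is the LZ77 half of the story doing most of the work.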
TIL: Quantisation
∀ Quantified Types · anup.io · 5d
Google's New AI Compression Could Help Lower RAM Prices - Here's How
⚡ Cache-Aware Algorithms · bgr.com · 5d
Google research cuts LLM memory use by 6x
📊 Memory Profilers · kite.kagi.com · 6d
Pure C implementation of the TurboQuant paper (ICLR 2026) for KV cache compression in LLM inference.
🗺️ Region Inference · github.com · 1d · r/LocalLLaMA
mmgehlot/bitpolar: BitPolar: near-optimal vector quantization — 3-8 bit compression with zero training. 58 integrations across every major AI framework.
🎯 Bit Vectors · github.com · 3d · Hacker News
yasha1971-coder/aceapex: compression, lossless, lz77, zstd, cplusplus, performance
📦 Compression Algorithms · github.com · 6d · Hacker News