Efficient and accurate search in petabase-scale sequence repositories
nature.com·5h
🔄Burrows-Wheeler
A small rant about compression
codecs.multimedia.cx·3h
📦Deflate
Linguistic Characteristics of AI-Generated Text: A Survey
arxiv.org·17h
📝Text Parsing
Show HN: CodeLens.AI – Community benchmark comparing 6 LLMs on real code tasks
codelens.ai·7h·
Discuss: Hacker News
🎙️Whisper
Introducing OpenZL: An Open Source Format-Aware Compression Framework
engineering.fb.com·2d·
Modern Compression
Leaner, More Efficient Storage Infrastructure for the AI Era
thenewstack.io·2h
💻Local LLMs
H1B-KV: Hybrid One-Bit Caches for Memory-Efficient Large Language Model Inference
arxiv.org·17h
💨Cache Optimization
Krish Naik: Complete RAG Crash Course With Langchain In 2 Hours
dev.to·3h·
Discuss: DEV
📊Multi-vector RAG
Language Support for Marginalia Search
marginalia.nu·2d
🔍BitFunnel
Channel Simulation and Distributed Compression with Ensemble Rejection Sampling
arxiv.org·17h
Information Bottleneck
Compressed Convolutional Attention: Efficient Attention in a Compressed Latent Space
arxiv.org·1d
Information Bottleneck
The Bit Shift Paradox: How "Optimizing" Can Make Code 6× Slower
hackernoon.com·17h
🧮Compute Optimization
LLM Optimization Notes: Memory, Compute and Inference Techniques
gaurigupta19.github.io·2d·
Discuss: Hacker News
💻Local LLMs
LexiCon: a Benchmark for Planning under Temporal Constraints in Natural Language
arxiv.org·17h
🧮Kolmogorov Complexity
SSDD: Single-Step Diffusion Decoder for Efficient Image Tokenization
github.com·1d·
Discuss: Hacker News
Modern Compression
Automated Variant Annotation & Prioritization via Multi-Metric Scoring
dev.to·1d·
Discuss: DEV
🧬Copy Number Variants
Evaluating the Sensitivity of LLMs to Harmful Contents in Long Input
arxiv.org·17h
📝ABNF Extensions
Latency vs. Accuracy for LLM Apps — How to Choose and How a Memory Layer Lets You Win Both
dev.to·1d·
Discuss: DEV
Performance Mythology
Context Length Alone Hurts LLM Performance Despite Perfect Retrieval
arxiv.org·17h
🧮Kolmogorov Complexity