๐Ÿฟ๏ธ ScourBrowse
🌳 Context-Free Grammars
Reverse Engineering the Microchip CLB Part 1: Background and Reverse Engineering the BLEs
mcp-clb.markomo.me·14m · Discuss: Lobsters
🔬 Binary Analysis
The one-more-re-nightmare compiler (2021)
applied-langua.ge·1d · Discuss: Lobsters, Hacker News, r/programming
🔍 RegEx Engines
Perspectives in Play: A Multi-Perspective Approach for More Inclusive NLP Systems
arxiv.org·2h
📚 Digital Humanities
MATER: Multi-level Acoustic and Textual Emotion Representation for Interpretable Speech Emotion Recognition
arxiv.org·2h
🎵 Audio ML
CCRS: A Zero-Shot LLM-as-a-Judge Framework for Comprehensive RAG Evaluation
arxiv.org·2h
📐 Linear Logic
CEGA: A Cost-Effective Approach for Graph-Based Model Extraction and Acquisition
arxiv.org·2d
🧮 Prolog Parsing
Argumentative Ensembling for Robust Recourse under Model Multiplicity
arxiv.org·2h
🔗 Parser Combinators
Semantic-Aware Parsing for Security Logs
arxiv.org·2d
📝 Log Parsing
Biomed-Enriched: A Biomedical Dataset Enriched with LLMs for Pretraining and Extracting Rare and Hidden Content
arxiv.org·2h
🔍 Information Retrieval
Named Entity Recognition using Bidirectional LSTM and Conditional Random Fields
dev.to·3d · Discuss: DEV
🤖 Grammar Induction
On Union-Closedness of Language Generation
arxiv.org·2d
🔗 Monadic Parsing
PARALLELPROMPT: Extracting Parallelism from Large Language Model Queries
arxiv.org·2d
🚀 SIMD Text Processing
Dialogic Pedagogy for Large Language Models: Aligning Conversational AI with Proven Theories of Learning
arxiv.org·1d
🤖 Grammar Induction
Computational Complexity of Model-Checking Quantum Pushdown Systems
arxiv.org·2d
🔐 Quantum Security
Probing AI Safety with Source Code
arxiv.org·2h
✨ Effect Handlers
CLGRPO: Reasoning Ability Enhancement for Small VLMs
arxiv.org·2d
📐 Linear Logic
Multilingual innovation in LLMs: How open models help unlock global communication
developers.googleblog.com·2d
🌀 Brotli Internals
Existing LLMs Are Not Self-Consistent For Simple Tasks
arxiv.org·2d
💻 Local LLMs
Mirage of Mastery: Memorization Tricks LLMs into Artificially Inflated Self-Knowledge
arxiv.org·1d
🧠 Intelligence Compression
LangChain vs. TLRAG: A Comparative Analysis for Investors
dev.to·1d · Discuss: DEV
🌀 Brotli Internals