🐿️ Scour
🧠 Large Language Models (LLMs)
Exploiting Vocabulary Frequency Imbalance in Language Model Pre-training
arxiv.org·2d
🔍Retrieval-augmented generation
An Empirical Study on How Video-LLMs Answer Video Questions
arxiv.org·2d
🔍Retrieval-augmented generation
A Systematic Study of Post-Training Quantization for Diffusion LLMs
arxiv.org·3d · Discuss: Hacker News
🔢Quantization of LLMs
Subjective Behaviors and Preferences in LLM: Language of Browsing
arxiv.org·2d
🔧Systems-level optimizations for LLM serving
GRILE: A Benchmark for Grammar Reasoning and Explanation in Romanian LLMs
arxiv.org·3d
✨Model optimizations in LLMs
EmoSLLM: Parameter-Efficient Adaptation of LLMs for Speech Emotion Recognition
arxiv.org·3d
✨Model optimizations in LLMs
Can Large Language Models (LLMs) Describe Pictures Like Children? A Comparative Corpus Study
arxiv.org·4d
🔍Retrieval-augmented generation
RAG-Boost: Retrieval-Augmented Generation Enhanced LLM-based Speech Recognition
arxiv.org·3d
🔍Retrieval-augmented generation
LLM4Sweat: A Trustworthy Large Language Model for Hyperhidrosis Support
arxiv.org·2d
🚀LLM serving frameworks
Cognitive Surgery: The Awakening of Implicit Territorial Awareness in LLMs
arxiv.org·3d
💬Prompt optimizations for LLM serving
EMNLP: Educator-role Moral and Normative Large Language Models Profiling
arxiv.org·2d
✨Model optimizations in LLMs
Classification errors distort findings in automated speech processing: examples and solutions from child-development research
arxiv.org·2d
✨Model optimizations in LLMs
Large Language Models are Highly Aligned with Human Ratings of Emotional Stimuli
arxiv.org·3d
🔍Retrieval-augmented generation
ContextualLVLM-Agent: A Holistic Framework for Multi-Turn Visually-Grounded Dialogue and Complex Instruction Following
arxiv.org·2d
💬Prompt optimizations for LLM serving
What do Speech Foundation Models Learn? Analysis and Applications
arxiv.org·5d
✨Model optimizations in LLMs
Hydra: A 1.6B-Parameter State-Space Language Model with Sparse Attention, Mixture-of-Experts, and Memory
arxiv.org·2d
🔧Systems-level optimizations for LLM serving
A Multi-Task Evaluation of LLMs' Processing of Academic Text Input
arxiv.org·5d
💬Prompt optimizations for LLM serving
Unplug and Play Language Models: Decomposing Experts in Language Models at Inference Time
arxiv.org·2d
📊AI Performance Profiling
Multiple Memory Systems for Enhancing the Long-term Memory of Agent
arxiv.org·2d
🤖Agents using LLMs
Social Debiasing for Fair Multi-modal LLMs
arxiv.org·3d
✨Model optimizations in LLMs