Binary Neural Networks, Low-precision Training, Efficient Inference, Weight Compression
The Magic Minimum for AI Agents
kill-the-newsletter.com · 10h
On Information Geometry and Iterative Optimization in Model Compression: Operator Factorization
arxiv.org · 21h
A Training-Free, Task-Agnostic Framework for Enhancing MLLM Performance on High-Resolution Images
arxiv.org · 21h
The Man Behind the Sound: Demystifying Audio Private Attribute Profiling via Multimodal Large Language Model Agents
arxiv.org · 21h
Towards High Supervised Learning Utility Training Data Generation: Data Pruning and Column Reordering
arxiv.org · 21h
ViTCoT: Video-Text Interleaved Chain-of-Thought for Boosting Video Understanding in Large Language Models
arxiv.org · 21h
SLIM: A Heterogeneous Accelerator for Edge Inference of Sparse Large Language Model via Adaptive Thresholding
arxiv.org · 21h
Efficient Triple Modular Redundancy for Reliability Enhancement of DNNs Using Explainable AI
arxiv.org · 21h
Ambiguity-Aware and High-Order Relation Learning for Multi-Grained Image-Text Matching
arxiv.org · 21h
BENYO-S2ST-Corpus-1: A Bilingual English-to-Yoruba Direct Speech-to-Speech Translation Corpus
arxiv.org · 21h
Representation learning with a transformer by contrastive learning for money laundering detection
arxiv.org · 21h