Binary Neural Networks, Low-precision Training, Efficient Inference, Weight Compression
TAI #167: US and China’s Open-Weight Divergence; Do You Really Need Open-Weight LLMs?
pub.towardsai.net·10h
Limits of message passing for node classification: How class-bottlenecks restrict signal-to-noise ratio
arxiv.org·21h
AdapSNE: Adaptive Fireworks-Optimized and Entropy-Guided Dataset Sampling for Edge DNN Training
arxiv.org·21h
Constrained Prompt Enhancement for Improving Zero-Shot Generalization of Vision-Language Models
arxiv.org·21h
Disentangling Polysemantic Neurons with a Null-Calibrated Polysemanticity Index and Causal Patch Interventions
arxiv.org·21h
Graph-R1: Incentivizing the Zero-Shot Graph Learning Capability in LLMs via Explicit Reasoning
arxiv.org·21h
MSNav: Zero-Shot Vision-and-Language Navigation with Dynamic Memory and LLM Spatial Reasoning
arxiv.org·21h