Optimizing LLMs for Performance and Accuracy with Post-Training Quantization
developer.nvidia.com·3d
LLM-Crowdsourced: A Benchmark-Free Paradigm for Mutual Evaluation of Large Language Models
arxiv.org·3d
Improving annotator selection in Active Learning using a mood and fatigue-aware Recommender System
arxiv.org·2d
LoRA-PAR: A Flexible Dual-System LoRA Partitioning Approach to Efficient LLM Fine-Tuning
arxiv.org·5d
Good Learners Think Their Thinking: Generative PRM Makes Large Reasoning Model More Efficient Math Learner
arxiv.org·2d
Trustworthy Reasoning: Evaluating and Enhancing Factual Accuracy in LLM Intermediate Thought Processes
arxiv.org·2d