Advancing Cognitive Science with LLMs
arxiv.org·20h
✨Model optimizations in LLMs
FLoRA: Fused forward-backward adapters for parameter efficient fine-tuning and reducing inference-time latencies of LLMs
arxiv.org·20h
✨Model optimizations in LLMs
Training LLMs Beyond Next Token Prediction - Filling the Mutual Information Gap
arxiv.org·20h
✨Model optimizations in LLMs
Zero-RAG: Towards Retrieval-Augmented Generation with Zero Redundant Knowledge
arxiv.org·20h
🔍Retrieval-augmented generation
The Riddle of Reflection: Evaluating Reasoning and Self-Awareness in Multilingual LLMs using Indian Riddles
arxiv.org·20h
💬Prompt optimizations for LLM serving
Prompt Injection as an Emerging Threat: Evaluating the Resilience of Large Language Models
arxiv.org·20h
💬Prompt optimizations for LLM serving
A Comparative Analysis of LLM Adaptation: SFT, LoRA, and ICL in Data-Scarce Scenarios
arxiv.org·20h
✨Model optimizations in LLMs
Effectiveness of LLMs in Temporal User Profiling for Recommendation
arxiv.org·20h
💬Prompt optimizations for LLM serving
The Biased Oracle: Assessing LLMs' Understandability and Empathy in Medical Diagnoses
arxiv.org·20h
✨Model optimizations in LLMs
Complex QA and language models hybrid architectures, Survey
arxiv.org·20h
✨Model optimizations in LLMs
Optimizing Native Sparse Attention with Latent Attention and Local Global Alternating Strategies
arxiv.org·20h
💬Prompt optimizations for LLM serving
Calibration Across Layers: Understanding Calibration Evolution in LLMs
arxiv.org·20h
✨Model optimizations in LLMs
Adding New Capability in Existing Scientific Application with LLM Assistance
arxiv.org·20h
💬Prompt optimizations for LLM serving
MISA: Memory-Efficient LLMs Optimization with Module-wise Importance Sampling
arxiv.org·20h
✨Model optimizations in LLMs
Explore More, Learn Better: Parallel MLLM Embeddings under Mutual Information Minimization
arxiv.org·20h
🔢Quantization of LLMs
Contrastive Knowledge Transfer and Robust Optimization for Secure Alignment of Large Language Models
arxiv.org·1d
✨Model optimizations in LLMs
Belief Dynamics Reveal the Dual Nature of In-Context Learning and Activation Steering
arxiv.org·20h
✨Model optimizations in LLMs
Accumulating Context Changes the Beliefs of Language Models
arxiv.org·20h
🔍Retrieval-augmented generation
AraFinNews: Arabic Financial Summarisation with Domain-Adapted LLMs
arxiv.org·20h
💬Prompt optimizations for LLM serving
ParaScopes: What do Language Models Activations Encode About Future Text?
arxiv.org·20h
🔍Retrieval-augmented generation