Feeds to Scour
Scoured 2119 posts in 17.1 ms
Tokenization in Transformers v5: Simpler, Clearer, and More Modular
huggingface.co·23h
🏭Code Generation
Attention Is All You Need
dev.to·1d·
Discuss: DEV
📝NLP
Cross-Tokenizer Likelihood Scoring Algorithms for Language Model Distillation
arxiv.org·18h
🔍RAG
Two Kinds of Vibe Coding
davidbau.com·2h·
Discuss: Hacker News
🎭Anthropic Claude
Part 1: Why Transformers Still Forget
future.forem.com·11h·
Discuss: DEV
💬Prompt Engineering
Hosting Language Models on a Budget
kdnuggets.com·8h
🦙Ollama
Revisiting the Transformer: A Breakthrough in Handling Out
dev.to·5h·
Discuss: DEV
🎭Anthropic Claude
Cross-Modal Knowledge Distillation for heritage language revitalization programs across multilingual stakeholder groups
dev.to·14h·
Discuss: DEV
🧮Embeddings
How a Bit Becomes a Story: Semantic Steering via Differentiable Fault Injection
arxiv.org·18h
💬Prompt Engineering
BERT and CNN integrated Neural Collaborative Filtering for Recommender Systems
arxiv.org·18h
🔍RAG
How Transformers Think: The Information Flow That Makes Language Models Work
kdnuggets.com·3d
📝NLP
AI for Ruby Devs Part I: From the Basics to building a neural network
dev.to·5h·
Discuss: DEV
💬Prompt Engineering
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
dev.to·13h·
Discuss: DEV
🗄️Vector Databases
T5Gemma 2: The next generation of encoder-decoder models
blog.google·23h·
Discuss: Hacker News
💬Prompt Engineering
4 Ways to Supercharge Your Data Science Workflow with Google AI Studio
towardsdatascience.com·7h
🔄Make
A Complete Guide to Spherical Equivariant Graph Transformers
arxiv.org·1d
🗄️Vector Databases
Inflation Attitudes of Large Language Models
arxiv.org·1d
📝NLP
RoBERTa: A Robustly Optimized BERT Pretraining Approach
dev.to·1d·
Discuss: DEV
🛡️AI Security
Scaling Laws for Neural Language Models
dev.to·2h·
Discuss: DEV
💬Prompt Engineering
State-Dependent Refusal and Learned Incapacity in RLHF-Aligned Language Models
arxiv.org·1d
💬Prompt Engineering