Large Language Models, GPT, Transformers, Prompting

Attention Illuminates LLM Reasoning: The Preplan-and-Anchor Rhythm Enables Fine-Grained Policy Optimization
paperium.net·16h·
Discuss: DEV
💬Prompt Engineering
Unlocking LLMs: The Self-Steering Revolution
dev.to·12h·
Discuss: DEV
💬Prompt Engineering
Incremental Compilation in Recursive‑Descent Parser (Roslyn)
langdev.stackexchange.com·8h·
Discuss: Hacker News
🐚Shell Scripting
Unlock Autonomy: Next-Gen LLMs Learn to Decode Themselves by Arvind Sundararajan
dev.to·10h·
Discuss: DEV
💬Prompt Engineering
Testing Unnatural Prompt Engineering Across Five Large Language Models
blog.codeminer42.com·2d
💬Prompt Engineering
A Beginner’s Guide to Getting Started with add_messages Reducer in LangGraph
langcasts.com·2d·
Discuss: DEV
💬Prompt Engineering
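For context on the entry above: LangGraph's add_messages reducer merges the messages a node returns into the shared state instead of overwriting the list. A minimal sketch, assuming a recent langgraph and langchain-core install; the "chatbot" node name and message contents are illustrative:

```python
from typing import Annotated, TypedDict

from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages


class State(TypedDict):
    # add_messages appends/merges message updates rather than replacing the list
    messages: Annotated[list, add_messages]


def chatbot(state: State) -> dict:
    # Return only the new message; the reducer folds it into state["messages"]
    return {"messages": [AIMessage(content="Hello!")]}


graph = StateGraph(State)
graph.add_node("chatbot", chatbot)
graph.add_edge(START, "chatbot")
graph.add_edge("chatbot", END)
app = graph.compile()

result = app.invoke({"messages": [HumanMessage(content="Hi")]})
print([m.content for m in result["messages"]])  # ['Hi', 'Hello!'] — both messages kept
```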
Can-t stop till you get enough
cant.bearblog.dev·8h·
Discuss: Hacker News
🤖AI
Do LLMs Signal When They're Right? Evidence from Neuron Agreement
arxiv.org·2d
📝NLP
From Parrot to Partner - How Reinforcement Learning Taught LLMs to Talk Like Humans
dev.to·18h·
Discuss: DEV
💬Prompt Engineering
Using “ibm-granite/granite-speech-3.3-8b” 🪨 for ASR
dev.to·14h·
Discuss: DEV
📝NLP
From Lossy to Lossless Reasoning
manidoraisamy.com·2d·
Discuss: Hacker News
💬Prompt Engineering
Practical Steps Towards Vibe Writing with AI Positron
blog.oxygenxml.com·15h
💬Prompt Engineering
Machine Learning Fundamentals: Everything I Wish I Knew When I Started
dev.to·20h·
Discuss: DEV
🧠Machine Learning
An underqualified reading list about the transformer architecture
fvictorio.github.io·3d·
Discuss: Hacker News
💬Prompt Engineering
Adaptive Stemming via Graph-Augmented Recurrent Variational Autoencoders
dev.to·21h·
Discuss: DEV
💬Natural Language Processing
Polish emerges as top language in multilingual AI benchmark testing
ppc.land·18h
📝NLP
Beyond the Magic: How LLMs Work
tag1.com·5d·
Discuss: Hacker News
💬Prompt Engineering
Generation at the Speed of Thought: Speculative Decoding
bittere.substack.com·15h·
Discuss: Substack
💬Prompt Engineering
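The entry above covers speculative decoding: a small draft model proposes several tokens and the large target model verifies them, keeping the longest agreeing prefix so multiple tokens can be committed per expensive pass. A toy greedy-decoding sketch; draft_next and target_next are hypothetical stand-ins for real language models:

```python
import random

VOCAB = list("abcde")


def draft_next(seq):
    # Cheap, less accurate "model" (hypothetical stand-in)
    random.seed(hash(tuple(seq)) % 1000)
    return random.choice(VOCAB)


def target_next(seq):
    # Expensive, authoritative "model" (hypothetical stand-in)
    random.seed(hash(tuple(seq)) % 997)
    return random.choice(VOCAB)


def speculative_decode(prompt, k=4, max_new=20):
    """Greedy speculative decoding sketch: the draft proposes k tokens, the
    target checks them; the agreeing prefix is kept, and at the first
    disagreement the target's own token is taken instead."""
    seq = list(prompt)
    while len(seq) - len(prompt) < max_new:
        # 1. Draft proposes k tokens autoregressively.
        proposal, ctx = [], list(seq)
        for _ in range(k):
            t = draft_next(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2. Target verifies each proposed position (one batched pass in practice).
        accepted = []
        for t in proposal:
            expected = target_next(seq + accepted)
            if expected == t:
                accepted.append(t)          # draft agreed with target: keep it
            else:
                accepted.append(expected)   # disagreement: take target's token, stop
                break
        else:
            # All k proposals accepted: append one bonus token from the target.
            accepted.append(target_next(seq + accepted))
        seq.extend(accepted)
    return "".join(seq)


print(speculative_decode("ab"))
```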
Kimi Linear: An Expressive, Efficient Attention Architecture
arxiviq.substack.com·1d·
Discuss: Substack
💬Prompt Engineering
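As background for the entry above: linear-attention architectures replace the O(n²) softmax attention matrix with a feature-map factorization, so attention becomes phi(Q)(phi(K)ᵀV) with a normalizer, computed in O(n·d²). The sketch below is generic, non-causal kernelized linear attention with the common ELU+1 feature map, not Kimi Linear's specific design:

```python
import numpy as np


def elu_feature_map(x):
    # phi(x) = ELU(x) + 1, a common positive feature map for linear attention
    return np.where(x > 0, x + 1.0, np.exp(x))


def linear_attention(Q, K, V):
    """Non-causal linear attention: O(n * d^2) instead of O(n^2 * d).
    softmax(QK^T)V is approximated by phi(Q)(phi(K)^T V) with a normalizer."""
    Qf, Kf = elu_feature_map(Q), elu_feature_map(K)  # (n, d)
    kv = Kf.T @ V                                    # (d, d_v), built once
    z = Kf.sum(axis=0)                               # (d,) normalizer terms
    return (Qf @ kv) / (Qf @ z)[:, None]             # (n, d_v)


n, d = 8, 4
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(n, d)), rng.normal(size=(n, d)), rng.normal(size=(n, d))
print(linear_attention(Q, K, V).shape)  # (8, 4)
```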
I made a tensor runtime & inference framework in C (good for learning how inference works)
github.com·2h·
🤖AI