Leveraging Power of Large Language Model in Entity Linking via Multi-step Prompting and Targeted Reasoning
machinelearning.apple.com

Authors: Yajie Li†, Albert Galimov†, Mitra Datta Ganapaneni†, Pujitha Thejaswi†, De Meng, Priyanshu Kumar, Saloni Potdar

Entity Linking (EL) has traditionally relied on large annotated datasets and extensive model fine-tuning. While recent few-shot methods leverage large language models (LLMs) through prompting to reduce training requirements, they often suffer from inefficiencies due to expensive LLM-based reasoning. ARTER (Adaptive Routing and Targeted Entity Reasoning) presents a structured pipeline that achieves high performance without deep fine-tuning by strategically combining candidate generation, context-based scoring, adaptive routing, and selective reasoning. ARTER computes a small set of complementary signals (both embedding- and LLM-based) over the retrieved candidates…
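
The abstract only sketches the pipeline, but the adaptive-routing idea can be pictured roughly as below. This is a minimal illustrative sketch, not the paper's implementation: the `Candidate` fields, `link_entity`, `llm_rerank`, `margin_threshold`, and the score-margin heuristic are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Candidate:
    entity_id: str
    embedding_score: float  # cheap signal, e.g. mention/entity embedding similarity (assumed)
    context_score: float    # context-based score over the retrieved candidate (assumed)


def link_entity(
    mention: str,
    candidates: List[Candidate],
    llm_rerank: Callable[[str, List[Candidate]], Candidate],
    margin_threshold: float = 0.15,  # hypothetical routing threshold
) -> Candidate:
    """Resolve a mention: easy cases are decided by cheap signals,
    ambiguous ones are routed to the expensive LLM reasoning step."""
    ranked = sorted(
        candidates,
        key=lambda c: c.embedding_score + c.context_score,
        reverse=True,
    )
    if len(ranked) == 1:
        return ranked[0]

    top, runner_up = ranked[0], ranked[1]
    margin = (top.embedding_score + top.context_score) - (
        runner_up.embedding_score + runner_up.context_score
    )

    # Adaptive routing: only low-margin (ambiguous) mentions pay the LLM cost.
    if margin >= margin_threshold:
        return top
    return llm_rerank(mention, ranked)
```

The point of the sketch is the routing split: most mentions are resolved from the cheap complementary signals alone, and only the contextually ambiguous ones trigger the targeted LLM reasoning, which is how the pipeline avoids paying LLM-inference cost on every mention.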
