Quantifying the reasoning abilities of LLMs on clinical cases
nature.com

Introduction

Large language models (LLMs) have advanced significantly in recent years, with systems such as OpenAI-o1[1](https://www.nature.com/articles/s41467-025-64769-1#ref-CR1 "Jaech, A. et al. OpenAI o1 System Card. Preprint at arXiv https://doi.org/10.48550/arXiv.2412.16720 (2024).") and DeepSeek-R1[2](#ref-CR2) demonstrating remarkable reasoning capabilities. These models have excelled in structured problem-solving and logical inference, achieving notable success in fields like mathematics and programming[2](#ref-CR2 "Guo, D. et al. DeepSeek-R1 incentivizes reasoning in LLMs through reinforcement learning.").
