
This paper from Harvard and MIT quietly answers the most important AI question nobody benchmarks properly:

Can LLMs actually discover science, or are they just good at talking about it?

The paper is called “Evaluating Large Language Models in Scientific Discovery”, and instead of asking models trivia questions, it tests something much harder:

Can models form hypotheses, design experiments, interpret results, and update beliefs like real scientists?

Here’s what the authors did differently 👇

• They evaluate LLMs across the full discovery loop: hypothesis → experiment → observation → revision
• Tasks span biology, chemistry, and physics, not toy puzzles
• Models must work with incomplete data, noisy results, and false leads
• Success is measured by scientific progress, not fluency or conf…
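To make that loop concrete, here is a minimal sketch of what a hypothesis → experiment → observation → revision evaluation harness could look like. This is not the paper's code: the toy environment, the agent interface, and the progress-based scoring rule are all assumptions, chosen only to illustrate the structure of the loop.

```python
"""Minimal sketch of a discovery-loop evaluation harness.

NOT the paper's implementation. The environment, agent interface, and
scoring rule are assumptions, written to illustrate the
hypothesis -> experiment -> observation -> revision cycle described above.
"""
import random


class ToyEnvironment:
    """Hidden ground truth the 'scientist' must uncover: y = a*x + b + noise."""

    def __init__(self, a=2.0, b=-1.0, noise=0.5, seed=0):
        self.a, self.b, self.noise = a, b, noise
        self.rng = random.Random(seed)

    def run_experiment(self, x):
        """Return a noisy observation for the chosen experimental setting x."""
        return self.a * x + self.b + self.rng.gauss(0, self.noise)


class NaiveAgent:
    """Stand-in for an LLM scientist: holds a hypothesis (a, b) and revises it."""

    def __init__(self):
        self.hypothesis = {"a": 0.0, "b": 0.0}
        self.lr = 0.05  # crude update step

    def design_experiment(self, step):
        # Pick an experimental setting; a real agent would choose informatively.
        return float(step % 5)

    def predict(self, x):
        return self.hypothesis["a"] * x + self.hypothesis["b"]

    def revise(self, x, observed):
        # Update beliefs from the prediction error (gradient-style nudge).
        error = observed - self.predict(x)
        self.hypothesis["a"] += self.lr * error * x
        self.hypothesis["b"] += self.lr * error


def evaluate(agent, env, steps=50):
    """Score scientific progress: how much prediction error drops over the loop."""
    def mean_error():
        xs = [0.0, 1.0, 2.0, 3.0, 4.0]
        return sum(abs(env.a * x + env.b - agent.predict(x)) for x in xs) / len(xs)

    initial = mean_error()
    for step in range(steps):
        x = agent.design_experiment(step)   # experiment
        y = env.run_experiment(x)           # observation (noisy)
        agent.revise(x, y)                  # revision of the hypothesis
    final = mean_error()
    return {"initial_error": initial, "final_error": final, "progress": initial - final}


if __name__ == "__main__":
    print(evaluate(NaiveAgent(), ToyEnvironment()))
```

The point of the sketch is the scoring: the agent is judged on how much closer its hypothesis gets to the hidden ground truth after the loop, not on how convincingly it describes its reasoning.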
