Study: Shrinking AI memory boosts accuracy
ed.ac.uk·3d·

Researchers from the University of Edinburgh and NVIDIA found that large language models (LLMs) using a memory eight times smaller than that of an uncompressed model scored better on maths, science and coding tests while spending the same amount of time reasoning.

Alternatively, the method can be used to let LLMs respond to more user queries simultaneously, reducing the amount of power needed per task.
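The link between a smaller per-query memory footprint and serving more queries at once can be illustrated with back-of-envelope arithmetic. This is a hedged sketch, not the study's method: all memory figures below (accelerator budget, weight size, per-query cache size) are illustrative assumptions; only the eight-fold reduction comes from the article.

```python
# Illustrative capacity arithmetic: how an 8x smaller per-query memory
# footprint raises the number of concurrent requests that fit in a fixed
# accelerator memory budget. All sizes are assumed, not from the study.

GPU_MEMORY_GB = 80        # assumed total accelerator memory
MODEL_WEIGHTS_GB = 30     # assumed space occupied by model weights
PER_QUERY_CACHE_GB = 2.0  # assumed uncompressed per-query memory (e.g. KV cache)
COMPRESSION = 8           # eight-fold memory reduction reported in the article

def max_concurrent_queries(per_query_gb: float) -> int:
    """Queries whose caches fit in the memory left after the weights."""
    free_gb = GPU_MEMORY_GB - MODEL_WEIGHTS_GB
    return int(free_gb // per_query_gb)

baseline = max_concurrent_queries(PER_QUERY_CACHE_GB)
compressed = max_concurrent_queries(PER_QUERY_CACHE_GB / COMPRESSION)

print(f"uncompressed:  {baseline} concurrent queries")    # 25
print(f"8x compressed: {compressed} concurrent queries")  # 200
```

With the per-query cost divided by eight, eight times as many queries share the same free memory, which is why the per-task energy figure falls.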

Beyond energy savings, the researchers say the improvements could benefit AI systems used to solve complicated tasks, as well as devices with slow or limited memory, such as smart home devices and wearable technology.

Problem solving

By “thinking” through more complex hypotheses, or by exploring several hypotheses concurrently, AI models improve their problem-solving abilities. In practice, this…
