Small Vs. Large Language Models
semiengineering.com

The proliferation of edge AI will require fundamental changes in language models and chip architectures to make inferencing and learning outside of AI data centers a viable option.

The initial goal for small language models (SLMs) — roughly 10 billion parameters or less, compared to more than a trillion parameters in the biggest LLMs — was to leverage them exclusively for inferencing. Increasingly, however, they also include some learning capability. And because they are purpose-built for narrowly defined tasks, SLMs can generate results in a fraction of the time it takes to send a query, directive, or sensor data to an AI data center and receive a response.
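As a rough illustration of that latency claim, consider the back-of-envelope sketch below. Every constant in it (token rates, network round-trip time, data-center queueing delay) is an assumption chosen for illustration, not a figure from the article; the point is only that the fixed network and queueing overhead of a cloud round trip dominates short responses, which is exactly where a local, purpose-built SLM comes out ahead.

    # Back-of-envelope sketch: on-device SLM vs. cloud LLM round trip.
    # All constants are illustrative assumptions, not measurements.
    EDGE_TOKENS_PER_SEC = 40      # assumed on-device SLM decode rate
    CLOUD_TOKENS_PER_SEC = 120    # assumed data-center LLM decode rate
    NETWORK_RTT_SEC = 0.15        # assumed round trip to the data center
    CLOUD_QUEUE_SEC = 0.30        # assumed batching/queueing delay in the cloud

    def edge_latency(output_tokens: int) -> float:
        # The response is generated entirely on the device.
        return output_tokens / EDGE_TOKENS_PER_SEC

    def cloud_latency(output_tokens: int) -> float:
        # Send the query, wait in the queue, generate, receive the reply.
        return NETWORK_RTT_SEC + CLOUD_QUEUE_SEC + output_tokens / CLOUD_TOKENS_PER_SEC

    for n in (10, 50, 200):
        print(f"{n:>4} tokens: edge {edge_latency(n):.2f}s, cloud {cloud_latency(n):.2f}s")

Under these assumed numbers, a 10-token reply takes about 0.25s on-device versus roughly 0.53s via the cloud; the cloud only pulls ahead on long generations, where its faster decode rate amortizes the round-trip overhead.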

SLMs are not new. EDA companies have been experimenting with optimized computational software for years, and scientists hav…
