LLM Weights vs. the Papercuts of Corporate
ghuntley.com

In woodworking, there’s a saying that you should work with the grain, not against it, and I’ve been thinking about how that concept applies to large language models.

Large language models are built by training on existing data. That data forms the backbone of the model: its output reflects the preferences baked into the underlying weights.

We are now about a year into a new category of company being founded: companies where the majority of the software behind the business was code-generated.

From here on out I’m going to refer to these companies as model-weight-first. This category can be defined as any company that builds with the data (the “grain”) that has been baked into the weights of the large language models.

Model-weight-first companies do not require as much contex…
