Human-Curated Benchmarking
reddit.com·8h·
Discuss: r/LocalLLaMA

Ok, I will say it out loud first to get it out of the way: LLMs keep developing, benchmarks get saturated and become useless, and we're standing in place when it comes to USEFUL benchmarking. Benchmarks literally mean nothing to the user at this point; it's not like typical benchmarks of software or hardware anymore. Benchmarking LLMs stopped working somewhere around spring/summer 2024, in my opinion. It can be debated, like anything, and there are caveats, sure, but that's the position I'm coming from, so let's make it clear.

However, when enough time passes, a general consensus emerges within the community, and you can usually trust it. It's something like: this one scores high but sucks at actual coding, this one is underestimated, this one is unstable, this one is stable but requires hand-holding through promp…
