llama-server - UI parameters not reflecting command-line settings
Discuss: r/LocalLLaMA

Sorry, but I am too busy and tired to dive into the code and submit pull requests, so I'm making this post instead.

Fix the misleading behavior

I have found two misleading behaviors in llama.cpp's llama-server.

  1. When we load a model with parameters specified on the command line (llama-server), those parameters are not reflected in the UI.
  2. When we switch to another model, the old parameters in the UI are still applied, while we would expect the command-line parameters to take effect. This leads to a bad experience, as the model can perform far worse than expected.

To "fix" these behaviors, I use a hack built with Tampermonkey, and after switching models I usually do a hard refresh of the page (Ctrl+Shift+R or Cmd+Shift+R).

// Fetch the server's current settings from llama-server's /props endpoint
// (needs an async context; in recent llama.cpp builds the response includes
// default_generation_settings with the command-line sampling parameters)
const props = await fetch('/props').then((response) => response.json());
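
The rest of the hack can be sketched as a small merge step: take the settings the UI has saved locally and overwrite them with whatever the server reports, so command-line parameters win after a model switch. This is a minimal sketch, not the actual userscript; `default_generation_settings` matches recent llama.cpp builds, but the shape of the web UI's saved settings (and the `theme` key below) are assumptions for illustration.

```javascript
// Merge server-reported defaults over stale UI settings.
// Keys the server reports (temperature, top_p, ...) take precedence;
// UI-only settings (e.g. a hypothetical `theme`) are left untouched.
function mergeServerDefaults(savedUiSettings, serverProps) {
  const serverDefaults = serverProps.default_generation_settings || {};
  const merged = { ...savedUiSettings };
  for (const [key, value] of Object.entries(serverDefaults)) {
    if (value !== undefined && value !== null) merged[key] = value;
  }
  return merged;
}

// Example: stale UI settings vs. fresh command-line parameters.
const stale = { temperature: 1.2, top_p: 0.8, theme: 'dark' };
const fromServer = { default_generation_settings: { temperature: 0.6, top_p: 0.95 } };
console.log(mergeServerDefaults(stale, fromServer));
// → { temperature: 0.6, top_p: 0.95, theme: 'dark' }
```

A proper fix inside llama.cpp's web UI would do the same thing on load and on model switch, instead of trusting whatever was last saved in the browser.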
