> **Uploads in Progress:** This disclaimer will be removed upon upload completion.

# Solar-Open-100B-GGUF

## Description
This repository contains GGUF format model files for Upstage’s Solar-Open-100B.
Solar Open is a massive 102B-parameter Mixture-of-Experts (MoE) model trained from scratch on 19.7 trillion tokens. Despite its large total size, it uses only 12B active parameters during inference, offering a unique combination of massive knowledge capacity and efficient generation speed.
Note: Please check the specific file sizes in the "Files and versions" tab.
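To illustrate why a 102B-parameter MoE model activates only a small fraction of its weights per token, here is a minimal top-k expert-routing sketch. The expert count, top-k, and parameter sizes below are made up for illustration and are not Solar Open's actual architecture:

```python
import math

def route_top_k(router_logits, k=2):
    """Pick the k highest-scoring experts and renormalize their weights."""
    topk = sorted(range(len(router_logits)),
                  key=lambda i: router_logits[i], reverse=True)[:k]
    exps = [math.exp(router_logits[i]) for i in topk]
    total = sum(exps)
    return {i: e / total for i, e in zip(topk, exps)}

# Hypothetical MoE layer: 64 experts, only 2 run per token.
NUM_EXPERTS = 64
ACTIVE_PER_TOKEN = 2
PARAMS_PER_EXPERT = 1_500_000  # made-up size, for illustration only

weights = route_top_k([0.1 * i for i in range(NUM_EXPERTS)], k=ACTIVE_PER_TOKEN)
active_fraction = ACTIVE_PER_TOKEN / NUM_EXPERTS
print(len(weights), active_fraction)  # 2 experts selected, 2/64 of expert params active
```

Only the selected experts' weights participate in the forward pass, which is how total capacity and per-token compute decouple.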
## How to Run (llama.cpp)
**Recommended Parameters:** Upstage recommends the following sampling parameters for Solar Open:

- Temperature: 0.8
- Top-P: 0.95
- Top-K: 50
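These settings correspond to standard temperature scaling followed by top-k and top-p (nucleus) filtering. As a rough sketch of what a sampler does with them (illustrative pure-Python code, not llama.cpp's actual implementation):

```python
import math
import random

def sample(logits, temperature=0.8, top_k=50, top_p=0.95, rng=None):
    """Draw one token index from logits using temperature/top-k/top-p filtering."""
    rng = rng or random.Random(0)
    # 1. Temperature: scale logits, then softmax (max-subtracted for stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    probs = [math.exp(l - m) for l in scaled]
    total = sum(probs)
    probs = [p / total for p in probs]
    # 2. Top-K: keep only the k most probable tokens.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    # 3. Top-P: keep the smallest prefix whose cumulative probability reaches top_p.
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # 4. Renormalize over the kept tokens and draw one.
    mass = sum(probs[i] for i in kept)
    r, acc = rng.random() * mass, 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]
```

Lower temperature sharpens the distribution before filtering; top-k and top-p then trim the low-probability tail before the draw.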
### CLI Example

```bash
./llama-cli -m Solar-Open-100B.Q4_K_M.gguf \
  -c 8192 \
  --temp 0.8 \
  --top-p 0.95 \
  --top-k 50 \
  -p "User: Who are you?\nAssistant:" \
  -cnv
```
### Server Example

```bash
./llama-server -m Solar-Open-100B.Q4_K_M.gguf \
  --port 8080 \
  --host 0.0.0.0 \
  -c 8192 \
  -ngl 99
```
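llama-server exposes an OpenAI-compatible `/v1/chat/completions` endpoint. A minimal stdlib-only Python client sketch, assuming the host/port from the command above and passing through the recommended sampling parameters (llama.cpp accepts `top_k` as an extension to the OpenAI schema):

```python
import json
import urllib.request

def build_request(prompt, base_url="http://localhost:8080"):
    """Build a chat-completion request for the llama-server started above."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.8,
        "top_p": 0.95,
        "top_k": 50,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_request("Who are you?")
    # Requires the server to be running on localhost:8080.
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```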
## License
The model weights are licensed under the Solar-Apache License 2.0. Please review the full license terms here: LICENSE
## Citation
If you use Solar Open in your research, please cite:
```bibtex
@misc{solar-open-2025,
  title={Solar Open: Scaling Upstage's LLM Capabilities with MoE},
  author={Upstage AI},
  year={2025},
  url={https://huggingface.co/Upstage/Solar-Open-100B}
}
```