Llama architecture; it doesn’t need any special llama.cpp support and works out of the box.
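Since this is a standard Llama-architecture GGUF, a stock llama.cpp build should be able to run it directly. A minimal sketch (the exact quantized filename below is an assumption; substitute the actual file listed in the repo):

```shell
# Fetch the 8-bit GGUF from the repo (filename is an assumption)
huggingface-cli download ilintar/IQuest-Coder-V1-40B-Instruct-GGUF \
  IQuest-Coder-V1-40B-Instruct-Q8_0.gguf --local-dir .

# Run with an unmodified llama.cpp build -- no custom patches needed
llama-cli -m IQuest-Coder-V1-40B-Instruct-Q8_0.gguf \
  -p "Write a function that reverses a string." -n 256
```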
Format: GGUF
Model size: 40B params
Architecture: llama
Quantization: 8-bit
Model tree for ilintar/IQuest-Coder-V1-40B-Instruct-GGUF
Base model: IQuestLab/IQuest-Coder-V1-40B-Instruct
Quantized: 5 models, including this one