ilintar/IQuest-Coder-V1-40B-Instruct-GGUF
huggingface.co·8h·
Discuss: r/LocalLLaMA

Llama architecture; it doesn't need any special llama.cpp support and works out of the box.
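Since the model uses the standard Llama architecture, a stock llama.cpp build should load it directly. A minimal sketch of downloading and running it, assuming llama.cpp is already built and that the quant filename shown here is hypothetical (check the repo's file list for the actual name):

```shell
# Download one GGUF file from the repo (exact filename is an assumption;
# list the repo files on huggingface.co to confirm).
huggingface-cli download ilintar/IQuest-Coder-V1-40B-Instruct-GGUF \
  IQuest-Coder-V1-40B-Instruct-Q8_0.gguf --local-dir .

# Run it with llama.cpp's CLI; no custom patches needed since the
# architecture is plain "llama".
./llama-cli -m IQuest-Coder-V1-40B-Instruct-Q8_0.gguf \
  -p "Write a quicksort in Python." -n 256
```

At 40B parameters, the 8-bit quant will need on the order of 40+ GB of RAM/VRAM, so partial GPU offload (`-ngl`) or a smaller quant may be necessary on consumer hardware.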

Downloads last month: —

Format: GGUF

Model size: 40B params

Architecture: llama

Quantization: 8-bit

Inference Providers

This model isn't deployed by any Inference Provider.
