This model uses the stock Llama architecture, so it doesn't need any special llama.cpp support and works out of the box.
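Since the architecture is plain llama, a standard llama.cpp build should load the GGUF directly. A minimal usage sketch; the quant filename below is an assumption, so check the repository's file list for the exact name:

```shell
# Run the GGUF with llama.cpp's standard CLI (llama-cli).
# Filename is assumed -- substitute the actual .gguf from the repo.
llama-cli -m IQuest-Coder-V1-40B-Instruct-Q8_0.gguf \
  -p "Write a function that reverses a string in C." \
  -n 256
```

No custom loader or patched build is required; any recent llama.cpp release that supports the llama architecture will do.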

Format: GGUF

Model size: 40B params

Architecture: llama

Quantization: 8-bit


