Culprit: https://iquestlab.github.io/ / https://huggingface.co/IQuestLab

BLUF: I assess with near certainty that IQuest-Coder's models are a hybrid of LLaMA-3.1-70B's attention config with Qwen2.5-32B's dimensions and tokenizer. The "trained from scratch" claims are misleading and false: the models themselves were not pretrained by IFakeLab, and the loop mechanism appears to be a Frankenstein combination of four papers.

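To make the config-forensics claim concrete, here's a minimal sketch of how you can check this yourself: pull each model's config.json from the Hub and diff the architecture-defining fields. Caveats: the IQuest repo id below is a guess (I don't know their exact repo names), the field list is my own selection of the usual fingerprint fields, and the Llama repo is gated, so a logged-in HF token with the license accepted is assumed.

```python
# Sketch: diff architecture-defining config fields between a suspect model
# and two reference models. Repo ids for IQuest are hypothetical; the Llama
# repo is gated and assumes `huggingface-cli login` has been run.
import json
from huggingface_hub import hf_hub_download

FIELDS = [
    "hidden_size", "intermediate_size", "num_hidden_layers",
    "num_attention_heads", "num_key_value_heads", "head_dim",
    "vocab_size", "rope_theta", "max_position_embeddings",
]

def load_config(repo_id: str) -> dict:
    """Download and parse a model's config.json from the Hugging Face Hub."""
    path = hf_hub_download(repo_id, "config.json")
    with open(path) as f:
        return json.load(f)

def diff_fields(suspect: dict, reference: dict) -> dict:
    """Map each field to (suspect value, reference value, whether they match)."""
    return {
        k: (suspect.get(k), reference.get(k), suspect.get(k) == reference.get(k))
        for k in FIELDS
    }

if __name__ == "__main__":
    suspect = load_config("IQuestLab/IQuest-Coder-V1")  # hypothetical repo id
    for ref_id in ["meta-llama/Llama-3.1-70B", "Qwen/Qwen2.5-32B"]:
        ref = load_config(ref_id)
        print(f"\n== vs {ref_id} ==")
        for field, (s, r, ok) in diff_fields(suspect, ref).items():
            print(f"{field:28} {s!r:>14} vs {r!r:>14} {'MATCH' if ok else ''}")
```

If the dimensions line up with Qwen2.5-32B while the attention/RoPE settings line up with LLaMA-3.1, that's the hybrid. The tokenizer claim can be checked the same way by diffing tokenizer.json instead of config.json.
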
Longer:

Here are the points this lab/model release is lying about:

  1. Claims* of training "from scratch"…
