Qwen3ForCausalLM - Architecture issue
#26 opened 6 days ago by cr-gkn

Request to Release the Base Model for Qwen3-32B
#25 opened 9 days ago by eramax

How to control thinking length?
#24 opened 9 days ago by lidh15

Qwen3 does not deploy on Endpoints
#23 opened 10 days ago by zenfiric

The model's instruction following is too poor
#22 opened 15 days ago by xldistance

Update README.md
#21 opened 15 days ago by Logical-Transcendence84

Please release an AWQ version
#20 opened 16 days ago by classdemo

Collection of bad cases, user reviews, and comments on the Qwen3-32B model
#19 opened 20 days ago by DeepNLP

Potential issue with large context sizes - can someone confirm?
#18 opened 22 days ago by Thireus

Qwen3: does the presence of tools affect output length?
#17 opened 22 days ago by evetsagg

"/no_think" control is unstable
#16 opened 22 days ago by Smorty100

LICENSE files missing
#14 opened 23 days ago by johndoe2001

After setting /nothinking or enable_thinking=False, can the empty <thinking> tag be omitted from the response?
#13 opened 23 days ago by pteromyini

Feedback: it's a good model; however, it hallucinates badly on local facts (Germany)
#12 opened 23 days ago by Dampfinchen

The correct way of fine-tuning on multi-turn trajectories
#11 opened 23 days ago by hr0nix

Providing a GPTQ version
#10 opened 23 days ago by blueteamqq1

How to set enable_thinking=False in Ollama
#9 opened 23 days ago by TatsuhiroC

[Fine-tuning] Implementation and Best Practices for Qwen3 CPT/SFT/DPO/GRPO Training
#7 opened 23 days ago by study-hjt

Reasoning or Non-reasoning model?
#6 opened 23 days ago by dipta007

Local Installation Video and Testing - Step by Step
#5 opened 23 days ago by fahdmirzac

[Evaluation] Best practice for evaluating Qwen3!
#4 opened 23 days ago by wangxingjun778

Base Model?
#3 opened 23 days ago by Downtown-Case

Is this multimodal?
#2 opened 23 days ago by pbarker

Add languages tag
#1 opened 24 days ago by de-francophones