Looking forward to a larger-scale RP model
#10 opened 4 days ago by lingyezhixing
The ollama download fails, and I'm not sure why
#9 opened 9 days ago by likefallwind
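Since #9 and #2 both report ollama download/deploy trouble, here is a minimal sketch of one common workaround: fetch a GGUF file directly from the Hugging Face Hub and register it with Ollama through a Modelfile. The repo id, filename, and model name below are placeholders, not the actual artifacts of this repository.

```python
# Fallback sketch when "ollama pull" cannot fetch the model directly:
# download the GGUF from the Hub, then import it into Ollama via a Modelfile.
from pathlib import Path

from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Hypothetical repo id and GGUF filename; replace with the entries shown
# under this model's "Files and versions" tab.
gguf_path = hf_hub_download(
    repo_id="user/model-repo-GGUF",
    filename="model-q4_k_m.gguf",
)

# Write a Modelfile pointing at the downloaded weights, then run:
#   ollama create my-model -f Modelfile
Path("Modelfile").write_text(f"FROM {gguf_path}\n", encoding="utf-8")
print("Now run: ollama create my-model -f Modelfile")
```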

Hoping for a version trained on QwQ:32b; the 7b's associative, divergent-thinking ability falls a bit short
#8 opened about 1 month ago by IITTU
Weird performance differences between the 7b nocot, cot, and normal versions
#6 opened about 2 months ago by dd332
When installing, how do I specify a particular version listed under "Files and versions"?
#5 opened about 2 months ago by pekingmine
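For the question in #5, a minimal sketch of pinning a specific version with huggingface_hub: its download functions accept a `revision` argument (a branch name, tag, or commit hash from the repo's history). The repo id and revision below are placeholders for illustration.

```python
# Download a snapshot of the repo at one specific revision rather than the
# latest "main" state. Replace repo_id and revision with the values shown
# on this model's "Files and versions" page.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="user/model-repo",  # this model's repo id
    revision="main",            # or a tag / full commit hash to pin exactly
)
print("Snapshot downloaded to:", local_dir)
```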

Why does it always cross out the text with a strikethrough after it finishes outputting?
#4 opened 2 months ago by ftsucker
The large Tifa model on the official site is actually the Claude API; why do you falsely claim to have a large model?
#3 opened 2 months ago by deleted
ollama deploy problem
#2 opened 2 months ago by Kevinsouth