matteo (matteogeniaccio)
AI & ML interests: None yet
Recent Activity
New activity 3 days ago in bartowski/THUDM_GLM-4-32B-0414-GGUF: Template bug fixed in llama.cpp
New activity 9 days ago in matteogeniaccio/GLM-4-32B-0414-GGUF-fixed: vllm deployment error
Updated a model 10 days ago: matteogeniaccio/GLM-4-9B-0414-GGUF-fixed
Organizations: None yet
matteogeniaccio's activity
Template bug fixed in llama.cpp
#11 opened 8 days ago by matteogeniaccio
vllm deployment error
#4 opened 9 days ago by Saicy
Fix template when add_generation_prompt=true
#14 opened 10 days ago by matteogeniaccio
Can you make Q8?
#2 opened 13 days ago by segmond
Notably better than Phi3.5 in many ways, but something is wrong.
#5 opened 5 months ago by phil111
I must say this is a good one
#2 opened 5 months ago by antonycer
tokenizer.model ?
#1 opened 5 months ago by pipilok