Timon
KeyboardMasher
AI & ML interests
None yet
Recent Activity
New activity 37 minutes ago in unsloth/DeepSeek-R1-GGUF: Over 2 tok/sec agg backed by NVMe SSD on 96GB RAM + 24GB VRAM AM5 rig with llama.cpp
New activity 9 days ago in unsloth/DeepSeek-V3-GGUF: Issue with --n-gpu-layers 5 Parameter: Model Only Running on CPU
Organizations
None yet
KeyboardMasher's activity
Over 2 tok/sec agg backed by NVMe SSD on 96GB RAM + 24GB VRAM AM5 rig with llama.cpp
3 · #13 opened about 21 hours ago by ubergarm
Issue with --n-gpu-layers 5 Parameter: Model Only Running on CPU
12 · #10 opened 18 days ago by vuk123
Advice on running llama-server with Q2_K_L quant
3 · #6 opened 21 days ago by vmajor
I loaded DeepSeek-V3-Q5_K_M up on my 10-year-old Tesla M40 (Dell C4130)
3 · #8 opened 21 days ago by gng2info
Model will need to be requantized, rope issues for long context
3 · #2 opened about 1 month ago by treehugg3
Instruct version?
3 · #1 opened 2 months ago by KeyboardMasher
Feedback
1 · #2 opened 2 months ago by KeyboardMasher
We need Llama Athene 3.1 70B
5 · #5 opened 6 months ago by gopi87
Change the 'Original model' link to tree/9092a8a, which contains the updated weights.
1 · #2 opened 3 months ago by AaronFeng753
Remove this model from Recent highlights collection
1 · #9 opened 3 months ago by KeyboardMasher
Continuous output
8 · #1 opened 3 months ago by kth8
Q8_0 file is damaged.
5 · #1 opened 10 months ago by KeyboardMasher