My favorite all-purpose models from 12B to 70B, all uncensored and using ChatML.
Nicholas Beerbower (nbeerbower)
AI & ML interests: QLoRA finetuning and merging LLMs for fun
Recent Activity
- Updated the "personal faves" collection about 9 hours ago
Models (195)

- nbeerbower/Llama3-SkullGang-70B • Text Generation • Updated • 7
- nbeerbower/Llama3-Asobi-70B • Text Generation • Updated • 14
- nbeerbower/UwU-Qwen2.5-32B • Text Generation • Updated • 16 • 1
- nbeerbower/Llama3-Sapientia-70B • Text Generation • Updated • 17 • 1
- nbeerbower/Shiina-Qwen2.5-32B • Text Generation • Updated • 50
- nbeerbower/BigKartoffel-mistral-nemo-20B • Text Generation • Updated • 60 • 3
- nbeerbower/Azura-Qwen2.5-32B • Text Generation • Updated • 19
- nbeerbower/PirateShip-ChatML-4x12B • Updated • 25
- nbeerbower/Kawaiides-llama3.1-70B • Text Generation • Updated • 21
- nbeerbower/Kartoffel-Deepfry-12B • Text Generation • Updated • 29 • 1
Datasets (9)

- nbeerbower/cover-images • Viewer • Updated • 6 • 583 • 1
- nbeerbower/GreatFirewall-DPO • Viewer • Updated • 492 • 165 • 8
- nbeerbower/reddit-dpo • Viewer • Updated • 76.9k • 77 • 1
- nbeerbower/gutenberg-moderne-dpo • Viewer • Updated • 346 • 58 • 3
- nbeerbower/gutenberg2-dpo • Viewer • Updated • 293 • 154 • 19
- nbeerbower/Schule-DPO • Viewer • Updated • 34 • 54 • 1
- nbeerbower/Arkhaios-DPO • Viewer • Updated • 222 • 86 • 8
- nbeerbower/Purpura-DPO • Viewer • Updated • 230 • 90 • 8
- nbeerbower/bible-dpo • Viewer • Updated • 31.1k • 75