LLaMA 3 8B Dirty Writing Prompts LORA

Testing with L3 8B Base

When alpha is left at the default (64), it just acts like an r/DWP comment generator for a given prompt.

Testing with Stheno 3.3

When alpha is bumped to 256, effects appear on the prompts we trained on; at lower alpha, or on out-of-scope prompts, the model is unaffected.

When alpha is bumped to 768, it always steers the conversation toward being horny and makes up excuses to create lewd scenarios.

This is completely emergent behaviour; we haven't trained for it. All we did was... read here in the model card.
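The alpha sweep above works because LoRA applies its learned update scaled by alpha / r, so raising alpha simply amplifies the adapter's contribution relative to the frozen base weights. A minimal numpy sketch (toy dimensions and a made-up rank, not the adapter's real config):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 8  # toy dims; the real adapter's rank is an assumption here

W = rng.normal(size=(d, d))  # frozen base weight
A = rng.normal(size=(r, d))  # LoRA down-projection
B = rng.normal(size=(d, r))  # LoRA up-projection

def merged_weight(alpha):
    # LoRA's effective weight: base plus the low-rank update scaled by alpha / r
    return W + (alpha / r) * (B @ A)

# Raising alpha from 64 to 768 multiplies the update by exactly 12x,
# which is why the adapter's influence goes from subtle to dominant.
delta_64 = merged_weight(64) - W
delta_768 = merged_weight(768) - W
print(np.allclose(delta_768, 12 * delta_64))  # True
```

This matches the observed behaviour: at low alpha the update is drowned out by the base weights, and at 768 it dominates every forward pass.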

Model tree for nothingiisreal/llama3-8B-DWP-lora