LLaMA 3 8B Dirty Writing Prompts LoRA

Testing with L3 8B Base

When alpha is left at the default (64), it just acts like an r/DWP comment generator for a given prompt.

Testing with Stheno 3.3

When alpha is bumped to 256, it shows effects on the prompts we trained on; at lower alpha, or with out-of-scope prompts, the output is unaffected.

When alpha is bumped to 768, it always steers the conversation to be horny and makes up excuses to create lewd scenarios.

This is completely emergent behaviour; we haven't trained for it. All we did was... read here in the model card
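
The alpha values above are just the LoRA scaling knob (the adapter output is scaled by lora_alpha / r). Below is a minimal sketch of how one might load this adapter and override alpha at inference time with PEFT; the base-model repo id, the config override, and the generation settings are assumptions for illustration, not part of this release.

```python
# A minimal sketch, assuming Transformers + PEFT are installed.
# The base-model repo id and PeftModel's `config` override are assumptions here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, PeftModel

base_id = "meta-llama/Meta-Llama-3-8B"            # assumed base; swap in a Stheno 3.3 repo for the second test
adapter_id = "nothingiisreal/llama3-8B-DWP-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# LoRA layers are scaled by lora_alpha / r, so raising alpha above the stored
# default (64) strengthens the adapter at inference time without retraining.
config = LoraConfig.from_pretrained(adapter_id)
config.lora_alpha = 256  # try 768 to reproduce the strongest effect described above

model = PeftModel.from_pretrained(base, adapter_id, config=config)

prompt = "Write a short story opening."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The same effect can usually be had in other frontends that expose a LoRA scale/weight setting, since they all multiply the adapter output by the same lora_alpha / r factor.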
