# Model Card for praxis-bookwriter-r8-qwen2.5-14b-sft-lora

Praxis Bookwriter is a LoRA fine-tune of Qwen2.5-14B-Instruct, trained on a synthetic writer's guide and book data.
- Developed by: Praxis Maldevide
- Model type: LoRA, rank 8
- License: CC-BY-NC-4.0
- Finetuned from model: Qwen/Qwen2.5-14B-Instruct
## Model Details

### Model Description
The following is an example of how to use the model.
```python
system_prompt = """You are my writing assistant. Keep the story going.
// Author: Neal Stephenson
// Tags: sci-fi, romance, space opera"""
prompt = """The following interaction begins in the park.
The night is cool and the stars are bright. Tim and Val sit on a bench, talking about life and the universe.
| Character | Influence | Interactions | Impact on Plot |
|-----------------|-------------------------------------------|--------------------------------------------|-----------------------------------------|
| **Tim** | Asks existential questions; challenges beliefs. | Engages with Val about love and mortality. | Drives philosophical inquiry. |
| **Val** | Uses cosmic imagery (comet, black hole) to reframe love. | Offers metaphysical perspective; softens Tim's cynicism. | Provides an anchor to earthly life. |
This passage is a *philosophical anchor* for the novel. It explores:
- The paradox of love’s invisibility despite its centrality.
- Human attempts to codify intangible concepts (love, time).
- Existential balance between connection and solitude.
- **Tim**: A pragmatic observer, framing life as a "puzzle" with logical solutions. His curiosity is tempered by existential fatigue ("Death will answer").
- **Val**: A romantic idealist using metaphors (comets, black holes) to poeticize love. Her warmth contrasts Tim’s analytical rigidity.
**Character Development**: Their dialogue exposes Tim’s vulnerability (fear of losing Val) and Val’s capacity for profound empathy.
1. **Dialogue as Philosophy**: Use exchanges to explore abstract themes (e.g., love vs. logic).
2. **Metaphor Over Explanation**: Let characters reframe ideas through imagery (e.g., love as a comet).
3. **Contrast Tones**: Juxtapose melancholy (death) with whimsy (starry skies) to deepen emotional resonance.
4. **Subtext in Action**: Small gestures (holding hands, watching stars) reveal character dynamics more than explicit dialogue.
---
This excerpt exemplifies how speculative fiction can grapple with timeless questions while grounding them in relatable human experiences. Writers should note the interplay of intellect and emotion, ensuring that philosophy never eclipses humanity.
In **Chapter 1**, the duo debates whether love is a tangible entity or an illusion. Tim wonders if love could "hide in a star," while Val likens it to a comet that "doesn't exist until it appears." In **Chapter 2**, Val reframes love as an absence where two people meet—a metaphorical "black hole" where space-time warps. Both chapters juxtapose cosmic grandeur with intimate vulnerability.
A lyrical blend of **melancholic reflection** and **cosmic wonder**. Dialogue oscillates between wistful acceptance ("Death's a necessary thing") and awe-inspired speculation ("the sky's a better place to be with you").
- **Existential Inquiry**: Love as both illusion and cosmic force.
- **Cosmic Humility**: Humanity’s insignificance against infinite time/space.
- **Opposing Perspectives**: Contrasts between logic (Tim) and intuition (Val).
// Chapter: 1
"""
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": prompt},
]
```
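From here, a completion can be generated with the standard chat-template API; a minimal sketch, assuming `model` and `tokenizer` hold the fine-tuned model (the sampling settings are illustrative, not values from this card):

```python
# Minimal generation sketch; sampling parameters are illustrative.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant turn marker
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.8,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```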
## Training Details

### Training Data
Trained on the SillyTilly/fiction-writer-596 dataset.
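As a quick sanity check, the corpus can be pulled with the `datasets` library; a minimal sketch (the split name is an assumption, check the dataset card):

```python
from datasets import load_dataset

# Split name "train" is an assumption; confirm against the dataset card.
ds = load_dataset("SillyTilly/fiction-writer-596", split="train")
print(ds)
```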
### Training Procedure

Trained using Unsloth with the configuration below.
```python
from unsloth import FastLanguageModel, is_bfloat16_supported
from transformers import TrainingArguments
from trl import SFTTrainer

dtype = None              # auto-detect; bfloat16 where supported
max_seq_length = 17920
load_in_4bit = True       # 4-bit quantized base weights
rslora_rank = 8
output_dir = "outputs"
MODEL_NAME_TO_LOAD = "Qwen/Qwen2.5-14B-Instruct"
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = MODEL_NAME_TO_LOAD,
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
    model, r = rslora_rank,
    target_modules = [
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 8, lora_dropout = 0.02, bias = "none",
    # k_proj and down_proj are adapted at a reduced rank/alpha of 4
    rank_pattern = {"k_proj": 4, "down_proj": 4},
    alpha_pattern = {"k_proj": 4, "down_proj": 4},
    use_gradient_checkpointing = "unsloth", random_state = 3407,
    use_rslora = True,  # rank-stabilized LoRA scaling
)
targs = TrainingArguments(
    per_device_train_batch_size = 2, gradient_accumulation_steps = 2,
    learning_rate = 1.5e-4, weight_decay = 0.001, gradient_checkpointing = True,
    max_grad_norm = 1.0, warmup_steps = 50, num_train_epochs = 3,
    optim = "adamw_8bit", lr_scheduler_type = "cosine", seed = 3407,
    fp16 = not is_bfloat16_supported(), bf16 = is_bfloat16_supported(),
    logging_steps = 1, per_device_eval_batch_size = 1, eval_strategy = "steps",
    eval_steps = 25, save_strategy = "steps", save_steps = 10,
    save_total_limit = 3, output_dir = output_dir,
    report_to = "wandb", remove_unused_columns = False,
)
# ds_train_sft / ds_eval_sft are the prepared train/eval splits of the
# dataset above (preparation code not shown in this card)
trainer = SFTTrainer(
    model=model, tokenizer=tokenizer,
    train_dataset=ds_train_sft, eval_dataset=ds_eval_sft,
    max_seq_length=max_seq_length, packing=False, args=targs,
)
trainer.train()
```
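After training, the rank-8 adapter saved under `output_dir` can be attached to the base model with PEFT for inference; a minimal sketch (the adapter path below is a placeholder for this repo or a local checkpoint directory):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-14B-Instruct", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-14B-Instruct")

# Placeholder path: point this at the published adapter repo or a local
# checkpoint directory produced by the trainer above.
model = PeftModel.from_pretrained(base, "path/to/praxis-bookwriter-adapter")
```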
### Framework versions

- PEFT 0.15.2