Plantinga-RL
A Lightweight Language Model
Model Description 📝
Plantinga-RL is a fine-tuned version of Qwen2.5-0.5B-Instruct, trained specifically on philosophical texts. The model specializes in understanding and generating responses about complex philosophical concepts, arguments, and debates. It provides accurate explanations, thoughtful analyses, and context-aware answers, and it also performs structured reasoning: breaking down arguments, evaluating premises, and drawing logical conclusions. It is particularly effective for questions in metaphysics, epistemology, ethics, and the philosophy of mind, offering both clarity and depth of reasoning.
Key Features ✨
- Architecture: Transformer-based language model 🏗️
- Training Data: Philosophy-focused dataset covering multiple branches of philosophy and structured philosophical Q&A. 📚
- Developed by: Rustam Shiriyev
- Language(s): English
- License: MIT
- Fine-Tuning Method: GRPO with LoRA
- Domain: Philosophy
- Finetuned from model: unsloth/Qwen2.5-0.5B-Instruct
- Model name: The model is named after Alvin Plantinga, one of the most influential philosophers of religion of the past century.
- Dataset: jilp00/YouToks-Instruct-Philosophy
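The card lists GRPO as the fine-tuning method but does not publish the reward functions used. A typical format-based reward for this setup scores each sampled completion on whether it follows the `<reasoning>…</reasoning><answer>…</answer>` template from the system prompt; the sketch below is a hypothetical example of such a reward, not the actual training code.

```python
import re

# Matches a completion that contains a reasoning block followed by an answer block.
FORMAT_RE = re.compile(
    r"<reasoning>.*?</reasoning>\s*<answer>.*?</answer>",
    re.DOTALL,
)

def format_reward(completions):
    """Hypothetical GRPO reward: 1.0 if the completion follows the
    <reasoning>/<answer> template, else 0.0 (one score per completion)."""
    return [1.0 if FORMAT_RE.search(c) else 0.0 for c in completions]
```

During GRPO training, a function like this would be passed to the trainer alongside any task-specific rewards, nudging the policy toward the structured output format.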
Intended Use
- Generating clear and concise explanations of philosophical concepts. 🏆
- Providing structured responses to philosophical questions. 🎯
- Assisting students, researchers, and enthusiasts in exploring philosophical arguments. ⚡
Limitations ⚠️
- Although fine-tuned on philosophy, the model may still hallucinate or give imprecise interpretations of highly nuanced philosophical arguments.
- The model does not replace expert human philosophical judgment.
How to Get Started with the Model 💻
from huggingface_hub import login
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

login(token="")  # your Hugging Face access token

tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-0.5B-Instruct")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-0.5B-Instruct",
    device_map={"": 0},
    token="",
)
model = PeftModel.from_pretrained(base_model, "Rustamshry/Plantinga-RL")
question = """
In the philosophical discussion comparing the mind to harmony, what is the core argument, and why could it imply that the mind is destructible?
"""
system = """
Respond in the following format:
<reasoning>
...
</reasoning>
<answer>
...
</answer>
"""
messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": question},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # append the assistant turn so the model starts generating
)
from transformers import TextStreamer

_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=2000,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
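Because the model is prompted to answer inside `<reasoning>`/`<answer>` tags, downstream code usually wants just the final answer. A small helper (not part of the card's snippet) can extract it, falling back to the raw text when the model skips the template:

```python
import re

def extract_answer(generated: str) -> str:
    """Return the text inside the first <answer>...</answer> block,
    or the whole (stripped) output if no such block is present."""
    m = re.search(r"<answer>(.*?)</answer>", generated, re.DOTALL)
    return m.group(1).strip() if m else generated.strip()
```

For example, `extract_answer("<reasoning>…</reasoning><answer>The mind would be destructible.</answer>")` yields just the answer sentence.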
Framework versions
- PEFT 0.15.2