---
library_name: transformers
tags: [causal-lm, lora, fine-tuned, qwen, deepseek]
---
# Model Card for Qwen-1.5B-LoRA-philosophy
This model is a LoRA-fine-tuned causal language model based on `deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B`. It was trained on a custom philosophy dataset with fields `"prompt"` and `"completion"`.
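For illustration, here is one minimal way to load such a dataset with the `datasets` library, assuming JSON-lines storage (the file name and example record are made up; only the two field names come from this card):

```python
from datasets import load_dataset

# Assumed record shape, one JSON object per line, e.g.:
# {"prompt": "What is Kant's categorical imperative?", "completion": "It is the principle that ..."}
dataset = load_dataset("json", data_files="philosophy.jsonl", split="train")
print(dataset[0]["prompt"])
print(dataset[0]["completion"])
```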
## Model Details
### Model Description
A parameter-efficient (LoRA) fine-tune of the 1.5B-parameter distilled Qwen model `deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B`. At inference time it takes a text prompt and generates a continuation.
- **Developed by:** [More Information Needed]
- **Model type:** Causal Language Model (LM) with LoRA adapters
- **Language(s) (NLP):** English
- **License:** [More Information Needed]
- **Finetuned from model:** `deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B`
### Model Sources
- **Repository:** https://huggingface.co/your-username/small-fine-tunes
## Uses
### Direct Use
This model can be used out-of-the-box for text generation tasks such as chatbots, text completion, and conversational AI workflows.
### Downstream Use
Developers can further fine-tune or adapt the model for domain-specific conversation, question answering, or summarization tasks.
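A minimal sketch of continued LoRA fine-tuning with the `peft` library; the rank, alpha, dropout, and target modules below are illustrative assumptions, not the settings used to train this model:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Start from the same base checkpoint named in this card.
base = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")

# Illustrative LoRA hyperparameters (assumptions, not the original training config).
config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
```

From here the wrapped model can be passed to a standard `transformers.Trainer` training loop.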
### Out-of-Scope Use
- High-stakes decision making without human oversight
- Generation of disallowed or sensitive content
- Real-time safety-critical systems
## Bias, Risks, and Limitations
Since the fine-tuning data is custom and has not been audited, unknown biases may exist. The model may:
- Produce incorrect or hallucinated statements
- Reflect biases present in the base model or the fine-tuning data
### Recommendations
- Always review generated text for factual accuracy.
- Do not rely on this model for safety-critical applications without additional guardrails.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Load the fine-tuned model and tokenizer from the Hub.
tokenizer = AutoTokenizer.from_pretrained("your-username/small-fine-tunes")
model = AutoModelForCausalLM.from_pretrained("your-username/small-fine-tunes")

# Build a text-generation pipeline and generate a continuation.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
output = generator("Once upon a time", max_new_tokens=100)
print(output[0]["generated_text"])
```
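If the repository hosts only the LoRA adapter weights rather than a merged checkpoint, loading explicitly through `peft` should also work (a sketch; it assumes the adapter was pushed under the same repo ID):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")
model = PeftModel.from_pretrained(base, "your-username/small-fine-tunes")
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")
```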