---
license: apache-2.0
tags:
- causal-lm
- llama
- peft
- lora
- fine-tuning
- text-generation
- productivity
library_name: transformers
---

# Daily Tasks Fine-Tuned LLaMA 3.2 1B – Weekly & Daily Task Planner

## Model Description

This is a **fine-tuned LLaMA 3.2 1B model** designed to generate structured weekly and daily plans. It can produce:

- Workout routines
- Study schedules
- Meal plans
- Other daily task setups

Fine-tuning was performed with **PEFT (LoRA)** adapters in float16 precision for memory-efficient training on GPU.
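
If the repository publishes the LoRA adapter separately rather than as merged weights, it can be attached to the base model with the `peft` library. The sketch below is a minimal example; it assumes `meta-llama/Llama-3.2-1B` as the base model, and both repository IDs are illustrative placeholders.

```python
# Minimal sketch: attach a LoRA adapter to its base model with peft.
# Both repository IDs below are assumptions, not confirmed paths.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-1B"  # assumed base model
adapter_id = "your-username/daily_tasks_fine_tuned_llama3_2_1b"

tokenizer = AutoTokenizer.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# Wrap the base model with the adapter; merge_and_unload() folds the low-rank
# updates into the base weights so inference no longer needs the peft wrapper.
model = PeftModel.from_pretrained(base, adapter_id)
model = model.merge_and_unload()
```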

---

## Intended Use

This model is intended for **personal productivity, fitness planning, and educational scheduling**. It is **not meant for medical, legal, or critical decision-making**.

---

## Usage

```python
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

model_name = "your-username/daily_tasks_fine_tuned_llama3_2_1b"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device=0,  # use -1 for CPU
)

prompt = "Plan a 7-day workout routine for cardiovascular health."
output = generator(prompt, max_new_tokens=600)
print(output[0]['generated_text'])
```
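
Sampling parameters such as `temperature` and `top_p` can also be passed through the pipeline call, which forwards them to `generate()`. The values below are illustrative, not tuned defaults for this model.

```python
# Illustrative sampling settings (assumed values, not tuned defaults).
output = generator(
    prompt,
    max_new_tokens=600,
    do_sample=True,
    temperature=0.7,  # lower values give more deterministic plans
    top_p=0.9,        # nucleus sampling cutoff
)
print(output[0]['generated_text'])
```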