Shijasmon committed · Commit cacff40 · verified · 1 Parent(s): 142ca61

Update README.md

Files changed (1): README.md (+21 -10)
README.md CHANGED
@@ -1,35 +1,46 @@
- # Fine-Tuned LLaMA 3.2 1B – Weekly & Daily Task Planner

  ## Model Description
- This is a **fine-tuned LLaMA 3.2 1B Instruct model** for generating structured weekly and daily plans.
- It can produce:

  - Workout routines
  - Study schedules
  - Meal plans
  - Other daily task setups

- Fine-tuning was done using **PEFT LoRA** in **float16** to reduce memory usage.

  ---

  ## Intended Use
- This model is intended for **personal productivity planning, fitness scheduling, and educational planning**.
- It is **not meant for medical or critical decision-making**.

  ---

- ## How to Use

  ```python
  from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

- model_name = "your-username/fine-tuned-llama3-2-1b"

  tokenizer = AutoTokenizer.from_pretrained(model_name)
  model = AutoModelForCausalLM.from_pretrained(model_name)

- pipe = pipeline(
      "text-generation",
      model=model,
      tokenizer=tokenizer,
@@ -37,5 +48,5 @@ pipe = pipeline(
  )

  prompt = "Plan a 7-day workout routine for cardiovascular health."
- output = pipe(prompt, max_new_tokens=600)
  print(output[0]['generated_text'])
 
+ ---
+ license: apache-2.0
+ tags:
+ - causal-lm
+ - llama
+ - peft
+ - lora
+ - fine-tuning
+ - text-generation
+ - productivity
+ library_name: transformers
+ ---
+
+ # Daily Tasks Fine-Tuned LLaMA 3.2 1B – Weekly & Daily Task Planner

  ## Model Description
+ This is a **fine-tuned LLaMA 3.2 1B model** designed to generate structured weekly and daily plans. It can produce:

  - Workout routines
  - Study schedules
  - Meal plans
  - Other daily task setups

+ Fine-tuning was done using **PEFT LoRA** with float16 precision for efficient training on GPU.
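
The card does not include the training script itself. Purely as an illustration, here is a minimal sketch of what a PEFT LoRA setup in float16 could look like; the base checkpoint, rank, alpha, and target modules below are assumptions for the sketch, not details taken from this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Assumed base checkpoint; the card does not name the exact starting model.
base_model = "meta-llama/Llama-3.2-1B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16)

lora_config = LoraConfig(
    r=16,                                 # adapter rank (hypothetical value)
    lora_alpha=32,                        # scaling factor (hypothetical value)
    target_modules=["q_proj", "v_proj"],  # common choice for LLaMA attention layers
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Wrap the base model so only the small LoRA matrices receive gradients;
# this is what keeps optimizer state and memory usage small on a single GPU.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```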

  ---

  ## Intended Use
+ This model is intended for **personal productivity, fitness planning, and educational scheduling**. It is **not meant for medical, legal, or critical decision-making**.

  ---

+ ## Usage

  ```python
  from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

+ model_name = "your-username/daily_tasks_fine_tuned_llama3_2_1b"

  tokenizer = AutoTokenizer.from_pretrained(model_name)
  model = AutoModelForCausalLM.from_pretrained(model_name)

+ generator = pipeline(
      "text-generation",
      model=model,
      tokenizer=tokenizer,

  )

  prompt = "Plan a 7-day workout routine for cardiovascular health."
+ output = generator(prompt, max_new_tokens=600)
  print(output[0]['generated_text'])
  ```
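
The snippet above loads the repository with AutoModelForCausalLM, which works when the LoRA weights have been merged into the base model. The card does not say whether the repo holds merged weights or only the adapter; if it is adapter-only, a minimal sketch of loading it through PEFT directly (the repo id is the same placeholder used above):

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

repo_id = "your-username/daily_tasks_fine_tuned_llama3_2_1b"  # placeholder, as in the card

# AutoPeftModelForCausalLM reads the adapter config, fetches the base model,
# and attaches the LoRA weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Optional: fold the adapter into the base weights for slightly faster inference.
model = model.merge_and_unload()
```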