azdin committed on
Commit d4e7065 · verified · 1 Parent(s): c785036

Add README

Files changed (1)
  1. README.md +36 -47
README.md CHANGED
@@ -1,62 +1,51 @@
  ---
  base_model: llava-hf/llava-onevision-qwen2-7b-ov-hf
- library_name: peft
- model_name: llava_adalora_weather_model
  tags:
- - base_model:adapter:llava-hf/llava-onevision-qwen2-7b-ov-hf
- - lora
- - sft
- - transformers
- - trl
- licence: license
- pipeline_tag: text-generation
  ---

- # Model Card for llava_adalora_weather_model

- This model is a fine-tuned version of [llava-hf/llava-onevision-qwen2-7b-ov-hf](https://huggingface.co/llava-hf/llava-onevision-qwen2-7b-ov-hf).
- It has been trained using [TRL](https://github.com/huggingface/trl).

- ## Quick start

- ```python
- from transformers import pipeline
-
- question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
- generator = pipeline("text-generation", model="None", device="cuda")
- output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
- print(output["generated_text"])
- ```
-
- ## Training procedure

- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/azdinsahir11-university-mohamed-v/llava-onevision-adalora-weather/runs/m8m1fc1p)

- This model was trained with SFT.
-
- ### Framework versions
-
- - PEFT 0.16.0
- - TRL: 0.19.1
- - Transformers: 4.53.3
- - Pytorch: 2.6.0+cu124
- - Datasets: 4.0.0
- - Tokenizers: 0.21.2

- ## Citations

- Cite TRL as:
-
- ```bibtex
- @misc{vonwerra2022trl,
-     title = {{TRL: Transformer Reinforcement Learning}},
-     author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
-     year = 2020,
-     journal = {GitHub repository},
-     publisher = {GitHub},
-     howpublished = {\url{https://github.com/huggingface/trl}}
- }
- ```
  ---
+ license: apache-2.0
  base_model: llava-hf/llava-onevision-qwen2-7b-ov-hf
  tags:
+ - llava
+ - llava-onevision
+ - weather
+ - satellite
+ - morocco
+ - meteorology
+ - adalora
+ - fine-tuned
  ---

+ # LLaVA-OneVision Weather Analysis - AdaLoRA

+ Fine-tuned using **AdaLoRA** technique for weather satellite imagery analysis.

+ ## Model Details

+ - **Base Model:** llava-hf/llava-onevision-qwen2-7b-ov-hf
+ - **Technique:** AdaLoRA
+ - **Domain:** Weather satellite imagery analysis
+ - **Dataset:** Weather satellite images with meteorological metadata

+ ## Usage

+ ```python
+ from transformers import LlavaOnevisionForConditionalGeneration, AutoProcessor
+ import torch
+
+ # Load base model
+ model = LlavaOnevisionForConditionalGeneration.from_pretrained(
+     "llava-hf/llava-onevision-qwen2-7b-ov-hf",
+     torch_dtype=torch.bfloat16,
+     device_map="auto"
+ )
+ processor = AutoProcessor.from_pretrained("llava-hf/llava-onevision-qwen2-7b-ov-hf")
+
+ # Load fine-tuned adapter
+ model.load_adapter("azdin/llava-onevision-weather-adalora")
+
+ # Use for weather analysis...
+ ```

+ ## Training Details

+ - **Technique:** AdaLoRA
+ - **Quantization:** 4-bit NF4
+ - **Training Data:** Weather satellite imagery with metadata
+ - **Target Modules:** Attention and projection layers