---
language: en
license: mit
library_name: transformers
pipeline_tag: text-generation
tags:
- gpt-neo
- causal-lm
- text-generation
- lora
- lyrics
- peft
- adapter
datasets:
- smgriffin/modern-pop-lyrics
---

# GPT-Neo 2.7B Fine-tuned LoRA Adapter for Lyrics Generation

This is a **LoRA adapter** for [EleutherAI/gpt-neo-2.7B](https://huggingface.co/EleutherAI/gpt-neo-2.7B) that was fine-tuned to generate creative song lyrics based on themes and musical styles.
## Model Description
The model has been fine-tuned on a diverse collection of song lyrics.

- Base model: GPT-Neo 2.7B
- Architecture: Transformer-based autoregressive language model
- Fine-tuning: LoRA (Low-Rank Adaptation) with PEFT
- Parameters: 2.7 billion in the full model; the adapter weights are much smaller (a rough estimate is sketched below)
- Context window: 2048 tokens
- Training approach: Parameter-efficient fine-tuning on a lyrics dataset
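As a back-of-the-envelope illustration of how small the adapter is, the LoRA parameter count can be estimated from the configuration listed under "LoRA Adapter Details" below. The layer count and hidden size are assumed from GPT-Neo 2.7B's published architecture, so treat this as an illustration rather than a measured number:

```python
# Rough LoRA size estimate (assumes GPT-Neo 2.7B: 32 layers, hidden size 2560).
# Each adapted projection gains two rank-r matrices: one r x d_in and one d_out x r.
hidden_size = 2560
num_layers = 32
r = 16
adapted_modules = 4  # q_proj, k_proj, v_proj, out_proj

lora_params = num_layers * adapted_modules * r * (hidden_size + hidden_size)
base_params = 2_700_000_000

print(f"Estimated LoRA parameters: {lora_params:,}")                  # ~10.5 million
print(f"Fraction of base model:    {lora_params / base_params:.3%}")  # well under 1%
```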
## Usage
This is a LoRA adapter model and must be loaded using the PEFT library:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel, PeftConfig

# Load the base model and tokenizer
base_model = "EleutherAI/gpt-neo-2.7B"
adapter_model = "jacob-c/gptneo-2.7Bloratunning"

tokenizer = AutoTokenizer.from_pretrained(base_model)
base_model = AutoModelForCausalLM.from_pretrained(base_model)

# Load the LoRA adapter
model = PeftModel.from_pretrained(base_model, adapter_model)

# Generate lyrics
prompt = "Write lyrics for a song with the following themes: love, summer, memories. The lyrics should be:"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    inputs.input_ids,
    max_length=300,
)

# Decode and print the generated lyrics
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Example output (excerpt):

```
Nothing's lost and nothing dies
In this moment frozen in time
```
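Because the adapter loads as a standard PEFT `PeftModel`, the LoRA weights can optionally be folded back into the base model for deployment without PEFT at inference time. This is generic PEFT functionality rather than a workflow documented for this specific adapter, and the output directory name is only a placeholder. Continuing from the snippet above:

```python
# Merge the LoRA weights into the base model so it can be used as a plain model.
merged_model = model.merge_and_unload()

# Save the merged model and tokenizer (placeholder output path).
merged_model.save_pretrained("gpt-neo-2.7B-lyrics-merged")
tokenizer.save_pretrained("gpt-neo-2.7B-lyrics-merged")
```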
## LoRA Adapter Details
This model uses Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning method that significantly reduces the number of trainable parameters by adding pairs of rank-decomposition matrices to existing weights while freezing the original parameters.

LoRA configuration:
- r: 16
- alpha: 32
- Target modules: q_proj, k_proj, v_proj, out_proj
- Dropout: 0.05
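For reference, these settings map onto a PEFT `LoraConfig` roughly as follows. This is a reconstruction from the values listed above rather than the original training script; the `bias` and `task_type` arguments are assumed defaults, not documented for this adapter:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# LoRA settings as listed above; bias and task_type are assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B")
peft_model = get_peft_model(base, lora_config)
peft_model.print_trainable_parameters()  # reports trainable vs. total parameters
```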
## Training Process
The model was fine-tuned on lyrics from multiple genres, focusing on: