Model Card for Lyric Generation Model

A fine-tuned language model for generating song lyrics based on musical features extracted from audio.

Model Details

Model Description

This model is fine-tuned to generate song lyrics conditioned on musical features extracted from audio files. The model takes musical characteristics as input and produces contextually appropriate lyrics that match the musical style and mood.

  • Developed by: umerbappi
  • Model type: Text Generation (Fine-tuned Language Model)
  • Language(s) (NLP): English
  • License: No specific license
  • Finetuned from model [optional]: unsloth/Llama-3.2-3B-Instruct-bnb-4bit

Model Sources [optional]

  • Repository: [More Information Needed]
  • Paper [optional]: [More Information Needed]
  • Demo [optional]: [More Information Needed]

Uses

Direct Use

This model can be used to generate song lyrics based on musical features. Input the musical characteristics of an audio track, and the model will generate appropriate lyrics that match the musical style, tempo, and mood.

Downstream Use [optional]

The model can be integrated into music production workflows, songwriting applications, or creative tools for musicians and content creators.

Out-of-Scope Use

This model should not be used to generate lyrics that promote harmful content or hate speech, or to reproduce copyrighted lyrics. It is intended for creative and educational purposes.

Bias, Risks, and Limitations

The model is trained on existing song lyrics and may reflect biases present in the training data. Generated lyrics should be reviewed for appropriateness and potential copyright concerns before commercial use.

Recommendations

Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. Review generated content for appropriateness and potential copyright issues before use.

How to Get Started with the Model

Use the code below to get started with the model.

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("umerbappi/YOUR_MODEL_NAME")
model = AutoModelForCausalLM.from_pretrained("umerbappi/YOUR_MODEL_NAME")

def generate_lyrics(musical_features: str, max_new_tokens: int = 200) -> str:
    """Generate lyrics conditioned on a description of musical features."""
    
    # Create Llama-formatted prompt
    instruction = f"Generate lyrics for a song with these characteristics:\n{musical_features}"
    prompt = f"<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    
    # Tokenize
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    
    # Generate
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            temperature=0.8,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id,
            eos_token_id=tokenizer.eos_token_id,
        )
    
    # Decode only the newly generated tokens; string-replacing the prompt
    # fails because skip_special_tokens strips the header tokens it contains
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    lyrics = tokenizer.decode(new_tokens, skip_special_tokens=True).strip()
    
    return lyrics

# Example usage
test_features = """Tempo: 120.0 BPM
Duration: 3:30
Loudness: -8.5 dB
Key: G major
Time Signature: 4/4"""

sample_lyrics = generate_lyrics(test_features)
print(sample_lyrics)
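
If this repository hosts only the LoRA adapter rather than merged weights (the framework versions below list PEFT 0.15.2), the adapter can also be loaded through the PEFT library. A minimal sketch under that assumption, with the repo id still a placeholder:

from peft import AutoPeftModelForCausalLM

# Reads the adapter config, loads the base model it names, and attaches the LoRA weights
model = AutoPeftModelForCausalLM.from_pretrained("umerbappi/YOUR_MODEL_NAME")

The generate_lyrics function above works unchanged with a model loaded this way.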

Training Details

Training Data

The model was trained on the LyricGen dataset, which combines musical features from the Million Song Dataset with corresponding lyrics obtained through the Genius API.

Dataset: umerbappi/LyricGen

The dataset contains:

  • Musical features extracted from the Million Song Dataset
  • Song names and metadata
  • Corresponding lyrics retrieved via Genius API
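
The dataset can be pulled from the Hub with the datasets library. A minimal sketch; the "train" split name is an assumption, and the card does not publish the exact column names, so inspect the schema first:

from datasets import load_dataset

# Load the training data; the "train" split name is an assumption
ds = load_dataset("umerbappi/LyricGen", split="train")

# Inspect the schema and one example to see the actual column names
print(ds.column_names)
print(ds[0])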

Training Procedure

The model was fine-tuned with the PEFT (Parameter-Efficient Fine-Tuning) library on the base model unsloth/Llama-3.2-3B-Instruct-bnb-4bit; a configuration sketch follows the hyperparameter list below.

Preprocessing [optional]

Musical features were extracted and formatted into structured text inputs including:

  • Tempo (BPM)
  • Duration
  • Loudness (dB)
  • Key signature
  • Time signature

The data was formatted using Llama's chat format with specific user/assistant tokens for instruction following.
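
A minimal sketch of that formatting step, mirroring the chat template used in the inference example above. The function signature and field names are illustrative, since the card does not publish the exact dataset schema:

def format_training_example(tempo_bpm, duration, loudness_db, key, time_signature, lyrics):
    """Render one (features, lyrics) pair in Llama 3's chat format.

    All argument names are illustrative; the actual column names may differ.
    """
    features = (
        f"Tempo: {tempo_bpm} BPM\n"
        f"Duration: {duration}\n"
        f"Loudness: {loudness_db} dB\n"
        f"Key: {key}\n"
        f"Time Signature: {time_signature}"
    )
    instruction = f"Generate lyrics for a song with these characteristics:\n{features}"
    return (
        "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
        f"{lyrics}<|eot_id|>"
    )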

Training Hyperparameters

  • Base Model: unsloth/Llama-3.2-3B-Instruct-bnb-4bit
  • Max Sequence Length: 2048
  • Load in 4-bit: True
  • LoRA r: 32
  • LoRA alpha: 64
  • LoRA dropout: 0.05
  • Target modules: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
  • Number of epochs: 15
  • Batch size: 6
  • Evaluation batch size: 4
  • Gradient accumulation steps: 3 (effective batch size: 18)
  • Learning rate: 1e-4
  • Weight decay: 0.02
  • Warmup steps: 300
  • Optimizer: adamw_torch_fused
  • LR scheduler: cosine_with_restarts
  • Training regime: bf16 mixed precision
  • Gradient checkpointing: True
  • Max gradient norm: 0.5
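
The settings above can be assembled into a peft/transformers configuration as follows. This is a reconstruction from the list, not the released training script, and output_dir is a placeholder:

from peft import LoraConfig
from transformers import TrainingArguments

# LoRA configuration matching the card's hyperparameters
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

# Trainer arguments matching the card; output_dir is a placeholder
training_args = TrainingArguments(
    output_dir="lyricgen-lora",
    num_train_epochs=15,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=3,   # effective batch size 6 x 3 = 18
    learning_rate=1e-4,
    weight_decay=0.02,
    warmup_steps=300,
    optim="adamw_torch_fused",
    lr_scheduler_type="cosine_with_restarts",
    bf16=True,
    gradient_checkpointing=True,
    max_grad_norm=0.5,
)

The max sequence length of 2048 and 4-bit loading are handled at model construction (e.g., via Unsloth's model loader for the bnb-4bit base), which is omitted here.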

Speeds, Sizes, Times [optional]

[More Information Needed]

Evaluation

Testing Data, Factors & Metrics

Testing Data

[More Information Needed]

Factors

[More Information Needed]

Metrics

[More Information Needed]

Results

No formal evaluation metrics were computed for this model.

Summary

The model was trained to generate lyrics conditioned on musical features but no quantitative evaluation was performed.

Model Examination [optional]

[More Information Needed]

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: [More Information Needed]
  • Hours used: [More Information Needed]
  • Cloud Provider: [More Information Needed]
  • Compute Region: [More Information Needed]
  • Carbon Emitted: [More Information Needed]

Technical Specifications [optional]

Model Architecture and Objective

Based on Llama 3.2 3B architecture, fine-tuned for conditional text generation where musical features serve as conditioning input for lyric generation.

Compute Infrastructure

[More Information Needed]

Hardware

[More Information Needed]

Software

[More Information Needed]

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Model Card Authors [optional]

[More Information Needed]

Model Card Contact

[More Information Needed]

Framework versions

  • PEFT 0.15.2