# 🇺🇸 Biden Mistral Adapter 🇺🇸

"Look, folks, this adapter, it's about our common purpose, our shared values. That's no joke."

This LoRA adapter for Mistral-7B-Instruct-v0.2 has been fine-tuned to emulate Joe Biden's distinctive speaking style, discourse patterns, and policy positions. The model captures the measured cadence, personal anecdotes, and characteristic expressions associated with the 46th U.S. President.

## ✨ Model Details

| Feature | Description |
|---------|-------------|
| Base Model | mistralai/Mistral-7B-Instruct-v0.2 |
| Architecture | LoRA adapter (Low-Rank Adaptation) |
| LoRA Rank | 16 |
| Language | English |
| Training Focus | Biden's communication style, rhetoric, and response patterns |
| Merged Adapters | Combines style and identity LoRA weights from nnat03/biden-mistral-adapter (the original adapter) and ./identity-adapters/biden-identity-adapter |
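
The two source adapters can be combined with PEFT's multi-adapter API. A minimal sketch, assuming both adapters share the same rank; the adapter names, weights, and `combination_type` below are illustrative choices, not the recipe actually used for this model:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then register both LoRA adapters under explicit names
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(base, "nnat03/biden-mistral-adapter", adapter_name="style")
model.load_adapter("./identity-adapters/biden-identity-adapter", adapter_name="identity")

# Linearly combine the two adapters into one merged adapter and activate it
model.add_weighted_adapter(
    adapters=["style", "identity"],
    weights=[1.0, 1.0],           # assumed equal weighting
    adapter_name="merged",
    combination_type="linear",
)
model.set_adapter("merged")
```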

## 🎯 Intended Use

| 📚 Education | 🔍 Research | 🎭 Creative |
|--------------|-------------|-------------|
| Political discourse analysis | Rhetoric pattern studies | Interactive simulations |

## 📊 Training Data

This model was trained on carefully curated datasets that capture authentic speech patterns. The datasets were processed into a specialized instruction format to optimize learning of those distinctive patterns.
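
As an illustration of that preprocessing step, here is a minimal sketch of wrapping one question/answer pair in Mistral's instruction markup; the helper name and the example record are hypothetical, not the actual dataset schema:

```python
# Sketch: converting one raw Q/A pair into Mistral's [INST] training format.
# The function name and the example record below are illustrative assumptions.
def to_instruction_format(question: str, answer: str) -> str:
    """Wrap a training pair in Mistral-7B-Instruct chat markup."""
    return f"<s>[INST] {question} [/INST] {answer}</s>"

print(to_instruction_format(
    "What's your plan for American infrastructure?",
    "Look, folks, here's the deal: we're going to rebuild this country...",
))
```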

โš™๏ธ Technical Specifications

### Training Configuration

- 🧠 **Framework**: Hugging Face Transformers + PEFT
- 📊 **Optimization**: 4-bit quantization
- 🔧 **LoRA Config**: r=16, alpha=64, dropout=0.05
- 🎛️ **Target modules**: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj

### Training Parameters

- 📦 **Batch size**: 4
- 🔄 **Gradient accumulation**: 4
- 📈 **Learning rate**: 2e-4
- 🔁 **Epochs**: 3
- 📉 **LR scheduler**: cosine
- ⚡ **Optimizer**: paged_adamw_8bit
- 🧮 **Precision**: BF16
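
For reference, the equivalent Hugging Face `TrainingArguments` would look roughly like this sketch; `output_dir` and anything not listed above are assumptions:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./biden-mistral-adapter",  # assumed output path
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    optim="paged_adamw_8bit",
    bf16=True,  # BF16 precision
)
```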

โš ๏ธ Limitations and Biases

- This model mimics a speaking style but doesn't guarantee factual accuracy
- While emulating Biden's rhetoric, it doesn't represent his actual views
- May reproduce biases present in the training data
- Not suitable for production applications without additional fact-checking

## 💻 Usage

Run this code to start using the adapter with the Mistral-7B-Instruct-v0.2 base model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

# Load the base model with 4-bit NF4 quantization
base_model_id = "mistralai/Mistral-7B-Instruct-v0.2"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
)

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Apply the adapter
model = PeftModel.from_pretrained(model, "nnat03/biden-mistral-adapter")

# Build the Mistral instruction prompt; the tokenizer prepends <s> itself,
# so the literal "<s>" is omitted to avoid a duplicate BOS token
prompt = "What's your vision for America's future?"
input_text = f"[INST] {prompt} [/INST]"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

# Sample a response; max_new_tokens bounds only the generated text,
# unlike max_length, which also counts the prompt
outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response.split("[/INST]")[-1].strip())
```
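
Alternatively, the tokenizer's built-in chat template can construct the prompt, which avoids hand-writing the [INST] tags; a sketch reusing the model and tokenizer from above:

```python
# Build the prompt via the tokenizer's chat template instead of manual tags
messages = [{"role": "user", "content": "What's your vision for America's future?"}]
input_ids = tokenizer.apply_chat_template(
    messages, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=512, temperature=0.7, do_sample=True)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```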

## 📚 Citation

If you use this model in your research, please cite:

```bibtex
@misc{nnat03-biden-mistral-adapter,
  author = {nnat03},
  title = {Biden Mistral Adapter},
  year = {2023},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/nnat03/biden-mistral-adapter}}
}
```

๐Ÿ” Ethical Considerations

This model is created for educational and research purposes. It attempts to mimic the speaking style of a public figure but does not represent their actual views or statements. Use responsibly.


Framework version: PEFT 0.15.0

Made with ❤️ for NLP research and education
