Trump Mistral Adapter

Model: Mistral-7B | Adapter: LoRA | Style: Trump

"This adapter, believe me folks, it's tremendous. It's the best adapter, everyone says so. We're going to do things with this model that nobody's ever seen before."

A fine-tuned language model that captures Donald Trump's distinctive speaking style, discourse patterns, and policy positions. This LoRA adapter transforms Mistral-7B-Instruct-v0.2 to emulate the unique rhetorical flourishes and speech cadence of the former U.S. President.

Speech Patterns | Policy Positions | Repetition Style | Hand Gestures

🔍 Overview

Base Model: Mistral-7B-Instruct-v0.2
Architecture: LoRA adapter (Low-Rank Adaptation)
Training Focus: Communication style, rhetoric, and response patterns
Language: English

🚀 Getting Started

💻 Python Implementation

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

# Configuration
base_model_id = "mistralai/Mistral-7B-Instruct-v0.2"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
)

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Apply adapter
model = PeftModel.from_pretrained(model, "nnat03/trump-mistral-adapter")

# Generate a response (the tokenizer adds the <s> BOS token automatically,
# so it is omitted from the prompt string to avoid duplicating it)
prompt = "What's your plan for border security?"
input_text = f"[INST] {prompt} [/INST]"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response.split("[/INST]")[-1].strip())
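
If you prefer not to build the [INST] wrapper by hand, the tokenizer's built-in chat template produces the same Mistral-Instruct formatting. A minimal sketch, reusing the model and tokenizer loaded above:

# Build the prompt through the tokenizer's chat template instead of manual [INST] tags
messages = [{"role": "user", "content": "What's your plan for border security?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=512, temperature=0.7, do_sample=True)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))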

🔮 Ollama Integration

For simplified local deployment:

# Pull the model
ollama pull nnat03/trump-mistral

# Run the model
ollama run nnat03/trump-mistral

Access this model via the Ollama library.
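
Once pulled, the model can also be queried programmatically through Ollama's local REST API (it listens on port 11434 by default). A minimal Python sketch using the requests package; the prompt is illustrative:

import requests

# Single, non-streaming generation request against the local Ollama server
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "nnat03/trump-mistral",
        "prompt": "What's your plan for border security?",
        "stream": False,
    },
)
print(resp.json()["response"])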


📊 Example Output

Border Security: "First of all, we need the wall. The wall is very important. It's not just a wall, it's steel and concrete and things that are very, very strong. We have 450 miles completed. It's an incredible job."
Joe Biden: "Joe Biden, I call him 1% Joe. His numbers are way down. He's a corrupt politician. He's been there for 47 years. Where has he been? What's he done? There's nothing."

⚙️ Technical Details

📚 Training Data

This model was trained on datasets of Donald Trump's public speeches and statements to capture authentic speech patterns.

🔧 Model Configuration

LoRA rank: 16 (tremendous rank, the best rank)
Alpha: 64
Dropout: 0.05
Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
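
For reference, a sketch of how the settings above map onto peft.LoraConfig; the bias and task_type arguments are assumptions, not taken from the original training script:

from peft import LoraConfig

lora_config = LoraConfig(
    r=16,               # LoRA rank
    lora_alpha=64,      # scaling factor
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    bias="none",            # assumption
    task_type="CAUSAL_LM",  # assumption
)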

🧠 Training Parameters

Batch size: 4
Gradient accumulation: 4
Learning rate: 2e-4
Epochs: 3
LR scheduler: cosine
Optimizer: paged_adamw_8bit
Precision: BF16
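
As a sketch, these hyperparameters correspond roughly to the following transformers.TrainingArguments; output_dir is illustrative, and any warmup or logging settings from the original run are unknown:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="trump-mistral-adapter",  # illustrative
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    optim="paged_adamw_8bit",
    bf16=True,
)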

🎯 Applications

🎓 Education: Political discourse analysis
🔬 Research: Rhetoric pattern studies
🎭 Creative: Interactive simulations

⚠️ Notes and Limitations

This model mimics a speaking style but does not guarantee factual accuracy or represent actual views. It may reproduce biases present in the training data and is primarily intended for research and educational purposes.

📄 Citation

@misc{nnat03-trump-mistral-adapter,
  author = {nnat03},
  title = {Trump Mistral Adapter},
  year = {2023},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/nnat03/trump-mistral-adapter}}
}

Framework version: PEFT 0.15.0

Created for NLP research and education

"We're gonna have the best models, believe me."
