---
license: mit
library_name: transformers
pipeline_tag: text-generation
language:
- en
tags:
- gpt2
- historical
- london
- text-generation
- history
- english
- safetensors
- large-language-model
- llm
---
|
|
|
|
|
# London Historical LLM |
|
|
|
|
|
A custom GPT-2 model **trained from scratch** on historical London texts from 1500-1850. It runs quickly on CPU and supports NVIDIA (CUDA) and AMD (ROCm) GPUs.
|
|
|
|
|
> **Note**: This model was **trained from scratch** - not fine-tuned from existing models. |
|
|
|
|
|
> This page includes simple **virtual-env setup**, **install choices for CPU/CUDA/ROCm**, and an **auto-device inference** example so anyone can get going quickly. |
|
|
|
|
|
--- |
|
|
|
|
|
## Model Description
|
|
|
|
|
This is a **Regular Language Model** built from scratch using the GPT-2 architecture, trained on a comprehensive collection of historical London documents spanning 1500-1850, including:
|
|
- Parliamentary records and debates |
|
|
- Historical newspapers and journals |
|
|
- Literary works and correspondence |
|
|
- Government documents and reports |
|
|
- Personal letters and diaries |
|
|
|
|
|
### Key Features |
|
|
- **~354M parameters** (vs ~117M in the SLM version) |
|
|
- **Custom historical tokenizer** (~30k vocab) with London-specific tokens (see the snippet after this list)
|
|
- **London-specific context awareness** and historical language patterns |
|
|
- **Trained from scratch** - not fine-tuned from existing models |
|
|
- **Optimized for historical text generation** (1500-1850) |
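
The custom tokenizer can be inspected directly. The snippet below is a minimal sketch: it loads the tokenizer through the standard `AutoTokenizer` API and prints how an example phrase (an arbitrary choice) is segmented:

```python
from transformers import AutoTokenizer

# Load the custom historical tokenizer shipped with the model.
tok = AutoTokenizer.from_pretrained("bahree/london-historical-llm")
print("vocab size:", tok.vocab_size)  # ~30k per this card

# Inspect how a period phrase is segmented into tokens.
ids = tok("The Thames flowed dark past Westminster Hall")["input_ids"]
print(tok.convert_ids_to_tokens(ids))
```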
|
|
|
|
|
--- |
|
|
|
|
|
## Intended Use & Limitations
|
|
|
|
|
**Use cases:** historical-style narrative generation, prompt-based exploration of London themes (1500-1850), creative writing aids. |
|
|
**Limitations:** may produce anachronisms or historically inaccurate statements, and aggressive sampling settings can produce gibberish due to the historical nature of the training data. Validate outputs before downstream use.
|
|
|
|
|
--- |
|
|
|
|
|
## Set up a virtual environment (Linux/macOS/Windows)
|
|
|
|
|
> Virtual environments isolate project dependencies. Official Python docs: `venv`. |
|
|
|
|
|
**Check Python & pip** |
|
|
```bash |
|
|
# Linux/macOS |
|
|
python3 --version && python3 -m pip --version |
|
|
``` |
|
|
|
|
|
```powershell |
|
|
# Windows (PowerShell) |
|
|
python --version; python -m pip --version |
|
|
``` |
|
|
|
|
|
**Create the env** |
|
|
|
|
|
```bash |
|
|
# Linux/macOS |
|
|
python3 -m venv helloLondon |
|
|
``` |
|
|
|
|
|
```powershell |
|
|
# Windows (PowerShell) |
|
|
python -m venv helloLondon |
|
|
``` |
|
|
|
|
|
```cmd |
|
|
:: Windows (Command Prompt) |
|
|
python -m venv helloLondon |
|
|
``` |
|
|
|
|
|
> **Note**: You can name your virtual environment anything you like, e.g., `.venv`, `my_env`, `london_env`. |
|
|
|
|
|
**Activate** |
|
|
|
|
|
```bash |
|
|
# Linux/macOS |
|
|
source helloLondon/bin/activate |
|
|
``` |
|
|
|
|
|
```powershell |
|
|
# Windows (PowerShell) |
|
|
.\helloLondon\Scripts\Activate.ps1
|
|
``` |
|
|
|
|
|
```cmd |
|
|
:: Windows (CMD) |
|
|
.\helloLondon\Scripts\activate.bat
|
|
``` |
|
|
|
|
|
> If PowerShell blocks activation (*"running scripts is disabled"*), set the policy then retry activation: |
|
|
|
|
|
```powershell |
|
|
Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned |
|
|
# or just for this session: |
|
|
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass |
|
|
``` |
|
|
|
|
|
--- |
|
|
|
|
|
## Install libraries
|
|
|
|
|
Upgrade basics, then install Hugging Face libs: |
|
|
|
|
|
```bash |
|
|
python -m pip install -U pip setuptools wheel |
|
|
python -m pip install "transformers" "accelerate" "safetensors" |
|
|
``` |
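
To confirm the libraries installed cleanly, a quick version check (a minimal sketch):

```python
# Print the versions pip resolved for the three libraries above.
import transformers, accelerate, safetensors
print(transformers.__version__, accelerate.__version__, safetensors.__version__)
```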
|
|
|
|
|
--- |
|
|
|
|
|
## Install **one** PyTorch variant (CPU / NVIDIA / AMD)
|
|
|
|
|
Use **one** of the commands below. For the most accurate command per OS/accelerator and version, prefer PyTorch's **Get Started** selector. |
|
|
|
|
|
### A) CPU-only (Linux/Windows/macOS) |
|
|
|
|
|
```bash |
|
|
pip install torch --index-url https://download.pytorch.org/whl/cpu |
|
|
``` |
|
|
|
|
|
### B) NVIDIA GPU (CUDA) |
|
|
|
|
|
Pick the CUDA series that matches your system (examples below): |
|
|
|
|
|
```bash |
|
|
# CUDA 12.6 |
|
|
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126 |
|
|
|
|
|
# CUDA 12.4 |
|
|
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124 |
|
|
|
|
|
# CUDA 11.8 |
|
|
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 |
|
|
``` |
|
|
|
|
|
### C) AMD GPU (ROCm, **Linux-only**) |
|
|
|
|
|
Install the ROCm build matching your ROCm runtime (examples): |
|
|
|
|
|
```bash |
|
|
# ROCm 6.3 |
|
|
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.3 |
|
|
|
|
|
# ROCm 6.2 (incl. 6.2.x) |
|
|
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.2.4 |
|
|
|
|
|
# ROCm 6.1 |
|
|
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1 |
|
|
``` |
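
To confirm you got a ROCm build rather than the default CPU/CUDA wheel, check `torch.version.hip`, which is populated only on ROCm builds. A minimal sketch:

```python
import torch

# torch.version.hip is a version string (e.g. "6.3.x") on ROCm wheels and None otherwise.
print("torch:", torch.__version__)
print("HIP runtime:", torch.version.hip)
```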
|
|
|
|
|
**Quick sanity check** |
|
|
|
|
|
```bash |
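# Linux/macOS (uses a shell heredoc); on Windows, save the Python lines to a file and run it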
|
|
python - <<'PY' |
|
|
import torch |
|
|
print("torch:", torch.__version__) |
|
|
print("GPU available:", torch.cuda.is_available()) |
|
|
if torch.cuda.is_available(): |
|
|
print("device:", torch.cuda.get_device_name(0)) |
|
|
PY |
|
|
``` |
|
|
|
|
|
--- |
|
|
|
|
|
## Inference (auto-detect device)
|
|
|
|
|
This snippet picks the best device (CUDA/ROCm if available, else CPU) and uses sensible generation defaults for this model. |
|
|
|
|
|
```python |
|
|
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "bahree/london-historical-llm"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# ROCm builds of PyTorch also report through torch.cuda, so this covers AMD GPUs too.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

prompt = "In the year 1834, I walked through the streets of London and witnessed"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

outputs = model.generate(
    **inputs,  # passes input_ids and attention_mask together
    max_new_tokens=50,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.2,
    no_repeat_ngram_size=3,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
|
|
``` |
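
Equivalently, the high-level `pipeline` API bundles tokenization, generation, and decoding into one call; generation kwargs are forwarded to `generate`. A minimal sketch:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="bahree/london-historical-llm")
result = generator(
    "In the year 1834, I walked through the streets of London and witnessed",
    max_new_tokens=50,
    do_sample=True,
    temperature=0.8,
)
print(result[0]["generated_text"])
```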
|
|
|
|
|
## **Sample Output**
|
|
|
|
|
**Prompt:** "In the year 1834, I walked through the streets of London and witnessed" |
|
|
|
|
|
**Generated Text:** |
|
|
> "In the year 1834, I walked through the streets of London and witnessed a scene in which some of those who had no inclination to come in contact with him took part in his discourse. It was on this occasion that I perceived that he had been engaged in some new business connected with the house, but for some days it had not taken place, nor did he appear so desirous of pursuing any further display of interest. The result was, however, that if he came in contact witli any one else in company with him he must be regarded as an old acquaintance or companion, and when he came to the point of leaving, I had no leisure to take up his abode. The same evening, having ram ##bled about the streets, I observed that the young man who had just arrived from a neighbouring village at the time, was enjoying himself at a certain hour, and I thought that he would sleep quietly until morning, when he said in a low voice — " You are coming. Miss — I have come from the West Indies . " Then my father bade me go into the shop, and bid me put on his spectacles, which he had in his hand; but he replied no: the room was empty, and he did not want to see what had passed. When I asked him the cause of all this conversation, he answered in the affirmative, and turned away, saying that as soon as the lad could recover, the sight of him might be renewed. " Well, Mr. , " said I, " you have got a little more of your wages, do you ? " " No, sir, thank ' ee kindly, " returned the boy, " but we don ' t want to pay the poor rates . We"
|
|
|
|
|
**Notice how the model captures:** |
|
|
- **Period-appropriate language** ("thank 'ee kindly," "bade me go," "spectacles") |
|
|
- **Historical dialogue patterns** (formal speech, period-appropriate contractions) |
|
|
- **Historical context** (the West Indies, poor rates)
|
|
- **Authentic historical narrative** (detailed scene setting, period-appropriate social interactions) |
|
|
|
|
|
## **Testing Your Model**
|
|
|
|
|
### **Quick Testing (10 Automated Prompts)** |
|
|
```bash |
|
|
# Test with 10 automated historical prompts |
|
|
python 06_inference/test_published_models.py --model_type regular |
|
|
``` |
|
|
|
|
|
**Expected Output:** |
|
|
``` |
|
|
Testing Regular Model: bahree/london-historical-llm
============================================================
Loading model...
Model loaded in 12.5 seconds
Model Info:
  Type: REGULAR
  Description: Regular Language Model (354M parameters)
  Device: cuda
  Vocabulary size: 30,000
  Max length: 1024

Testing generation with 10 prompts...
[10 automated tests with historical text generation]
|
|
``` |
|
|
|
|
|
### **Interactive Testing** |
|
|
```bash |
|
|
# Interactive mode for custom prompts |
|
|
python 06_inference/inference_unified.py --published --model_type regular --interactive |
|
|
|
|
|
# Single prompt test |
|
|
python 06_inference/inference_unified.py --published --model_type regular --prompt "In the year 1834, I walked through the streets of London and witnessed" |
|
|
``` |
|
|
|
|
|
**Need more headroom later?** Load with Hugging Face Accelerate and `device_map="auto"` to spread layers across available devices/CPU automatically.
|
|
|
|
|
```python |
|
|
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "bahree/london-historical-llm"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # requires accelerate
|
|
``` |
|
|
|
|
|
--- |
|
|
|
|
|
## Windows Terminal one-liners
|
|
|
|
|
**PowerShell** |
|
|
|
|
|
```powershell |
|
|
python -c "from transformers import AutoTokenizer,AutoModelForCausalLM; m='bahree/london-historical-llm'; t=AutoTokenizer.from_pretrained(m); model=AutoModelForCausalLM.from_pretrained(m); p='Today I walked through the streets of London and witnessed'; i=t(p,return_tensors='pt'); print(t.decode(model.generate(i['input_ids'],max_new_tokens=50,do_sample=True)[0],skip_special_tokens=True))" |
|
|
``` |
|
|
|
|
|
**Command Prompt (CMD)** |
|
|
|
|
|
```cmd |
|
|
:: Same Python one-liner as the PowerShell version; statements are separated with semicolons.
python -c "from transformers import AutoTokenizer,AutoModelForCausalLM; m='bahree/london-historical-llm'; t=AutoTokenizer.from_pretrained(m); model=AutoModelForCausalLM.from_pretrained(m); p='Today I walked through the streets of London and witnessed'; i=t(p,return_tensors='pt'); print(t.decode(model.generate(i['input_ids'],max_new_tokens=50,do_sample=True)[0],skip_special_tokens=True))"
|
|
``` |
|
|
|
|
|
--- |
|
|
|
|
|
## Basic Usage (Python)
|
|
|
|
|
**Important**: This model is sensitive to decoding settings and works best with simple, conservative generation. If the sampling settings below produce gibberish, fall back to greedy decoding (`do_sample=False`), shown in the sketch after this code block.
|
|
|
|
|
```python |
|
|
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bahree/london-historical-llm")
model = AutoModelForCausalLM.from_pretrained("bahree/london-historical-llm")

# GPT-2-style tokenizers often lack a pad token; reuse EOS for padding.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

prompt = "Today I walked through the streets of London and witnessed"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,  # passes input_ids and attention_mask together
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    top_k=30,
    repetition_penalty=1.25,
    no_repeat_ngram_size=4,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
|
|
``` |
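
If sampled output still degrades into gibberish, fall back to greedy decoding. The sketch below is self-contained; the `repetition_penalty` value is an assumption to curb the repetition loops greedy decoding is prone to:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bahree/london-historical-llm")
model = AutoModelForCausalLM.from_pretrained("bahree/london-historical-llm")

inputs = tokenizer("Today I walked through the streets of London and witnessed",
                   return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=False,         # greedy decoding: most likely token at each step
    repetition_penalty=1.3,  # assumption: mild penalty against repetition loops
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```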
|
|
|
|
|
--- |
|
|
|
|
|
## Example Prompts
|
|
|
|
|
* **Tudor (1558):** "On this day in 1558, Queen Mary has died and …"
|
|
* **Stuart (1666):** "The Great Fire of London has consumed much of the city, and …"
|
|
* **Georgian/Victorian:** "As I journeyed through the streets of London, I observed …"
|
|
* **London specifics:** "Parliament sat in Westminster Hall …", "The Thames flowed dark and mysterious …"
|
|
|
|
|
--- |
|
|
|
|
|
## Training Details
|
|
|
|
|
* **Architecture:** Custom GPT-2 (built from scratch) |
|
|
* **Parameters:** ~354M |
|
|
* **Tokenizer:** Custom historical tokenizer (~30k vocab) with London-specific and historical tokens |
|
|
* **Data:** Historical London corpus (1500-1850) with proper segmentation |
|
|
* **Steps:** 60,000+ (extended training for better convergence)
|
|
* **Final Training Loss:** ~2.78 (excellent convergence) |
|
|
* **Final Validation Loss:** ~3.62 (good generalization) |
|
|
* **Training Time:** ~13+ hours |
|
|
* **Hardware:** 2× GPU training with Distributed Data Parallel
|
|
* **Training Method:** **Trained from scratch** - not fine-tuned |
|
|
* **Context Length:** 1024 tokens (optimized for historical text segments) |
|
|
* **Status:** **Successfully published and tested** - ready for production use
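
To confirm the headline numbers above against the published checkpoint, a minimal sketch (attribute names assume the standard GPT-2 config, which this model uses):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("bahree/london-historical-llm")

# Count parameters and read key architecture fields from the config.
n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params / 1e6:.0f}M")         # ~354M per the card
print("context length:", model.config.n_positions)  # 1024
print("vocab size:", model.config.vocab_size)       # ~30k
```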
|
|
|
|
|
--- |
|
|
|
|
|
## Troubleshooting
|
|
|
|
|
* **`ImportError: AutoModelForCausalLM requires the PyTorch library`** |
|
|
→ Install PyTorch with the correct accelerator variant (see CPU/CUDA/ROCm above or use the official selector).
|
|
|
|
|
* **AMD GPU not used** |
|
|
→ Ensure you installed a ROCm build and you're on Linux (`pip install ... --index-url https://download.pytorch.org/whl/rocmX.Y`). Verify with `torch.cuda.is_available()` and check the device name. ROCm wheels are Linux-only.
|
|
|
|
|
* **Running out of VRAM** |
|
|
→ Try smaller batch/sequence lengths, or load with `device_map="auto"` via Hugging Face Accelerate to offload layers to CPU/disk.
|
|
|
|
|
* **Gibberish output with historical text** |
|
|
→ Use greedy decoding (`do_sample=False`) and avoid aggressive sampling settings; see the greedy example under Basic Usage. This model works best with simple generation settings due to the historical nature of the training data.
|
|
|
|
|
--- |
|
|
|
|
|
## Citation
|
|
|
|
|
If you use this model, please cite: |
|
|
|
|
|
```bibtex |
|
|
@misc{london-historical-llm, |
|
|
title = {London Historical LLM: A Custom GPT-2 for Historical Text Generation}, |
|
|
author = {Amit Bahree}, |
|
|
year = {2025}, |
|
|
url = {https://huggingface.co/bahree/london-historical-llm} |
|
|
} |
|
|
``` |
|
|
|
|
|
--- |
|
|
|
|
|
## Repository |
|
|
|
|
|
The complete source code, training scripts, and documentation for this model are available on GitHub: |
|
|
|
|
|
**[https://github.com/bahree/helloLondon](https://github.com/bahree/helloLondon)**
|
|
|
|
|
This repository includes: |
|
|
- Complete data collection pipeline for 1500-1850 historical English |
|
|
- Custom tokenizer optimized for historical text |
|
|
- Training infrastructure with GPU optimization |
|
|
- Evaluation and deployment tools |
|
|
- Comprehensive documentation and examples |
|
|
|
|
|
### Quick Start with Repository |
|
|
```bash |
|
|
git clone https://github.com/bahree/helloLondon.git |
|
|
cd helloLondon |
|
|
python 06_inference/test_published_models.py --model_type regular |
|
|
``` |
|
|
|
|
|
--- |
|
|
|
|
|
## License
|
|
|
|
|
MIT (see [LICENSE](https://github.com/bahree/helloLondon/blob/main/LICENSE) in repo). |
|
|
|