pumlGenV1-1

This model is a fine-tuned version of Qwen/Qwen3-8B-Base on the pumlGen dataset. It specializes in generating PlantUML diagrams from natural language questions.

Model description

pumlGenV1-1 is a specialized language model that converts complex questions into structured PlantUML diagrams. The model takes philosophical, historical, legal, or analytical questions as input and generates comprehensive PlantUML code that visualizes the relationships, hierarchies, and connections between concepts mentioned in the question.

Key features:

  • Generates syntactically correct PlantUML diagrams
  • Creates structured visualizations with packages, entities, and relationships
  • Adds contextual notes and annotations
  • Handles complex domain-specific topics across various fields

Intended uses & limitations

Intended uses

  • Educational purposes: Creating visual diagrams to explain complex concepts
  • Research visualization: Mapping relationships between ideas, theories, or historical events
  • Documentation: Generating diagrams for technical or conceptual documentation
  • Analysis tools: Visualizing interconnections in philosophical, legal, or social topics

Limitations

  • The model is trained specifically for the PlantUML output format and will not produce other diagram languages
  • Best performance on analytical, philosophical, historical, and conceptual questions
  • May require post-processing for specific PlantUML styling preferences
  • Generated diagrams should be reviewed for accuracy and completeness
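
Because generated diagrams should always be reviewed, a lightweight automated pre-check can filter out obviously malformed outputs before human review. The helper below is a hypothetical sketch (not part of the model or dataset) that only checks the `@startuml`/`@enduml` wrapper:

```python
def looks_like_valid_plantuml(text: str) -> bool:
    """Cheap sanity check: output must be wrapped in @startuml/@enduml
    and contain a non-empty body. This does not validate full PlantUML
    syntax; use the PlantUML tool itself for real validation."""
    text = text.strip()
    if not (text.startswith("@startuml") and text.endswith("@enduml")):
        return False
    body = text[len("@startuml"):-len("@enduml")].strip()
    return bool(body)

print(looks_like_valid_plantuml("@startuml\nNile --> Farming\n@enduml"))  # True
print(looks_like_valid_plantuml("Nile --> Farming"))                      # False
```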

Training and evaluation data

The model was trained on the pumlGen dataset, which consists of question-answer pairs where:

  • Input: Complex analytical questions about various topics (philosophy, history, law, social sciences)
  • Output: Corresponding PlantUML diagram code that visualizes the concepts and relationships
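
As an illustration only, a single training pair might be shaped roughly like the record below. The field names are assumptions inferred from the conversation format in the usage example, not the dataset's documented schema:

```python
# Hypothetical pumlGen record layout (field names are assumptions)
example_record = {
    "conversations": [
        {"from": "human",
         "value": "How did trade routes shape cultural exchange in medieval Eurasia?"},
        {"from": "gpt",
         "value": "@startuml\npackage \"Trade Routes\" {\n}\n@enduml"},
    ]
}
print(example_record["conversations"][1]["value"].startswith("@startuml"))  # True
```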

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 128
  • total_eval_batch_size: 64
  • optimizer: adamw_8bit with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • num_epochs: 3.0
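
The batch-size figures above are internally consistent: the total train batch size is the per-device batch size times the gradient-accumulation steps times the number of devices:

```python
# total_train_batch_size = per-device batch × accumulation steps × devices
train_batch_size = 1
gradient_accumulation_steps = 16
num_devices = 8
total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 128
```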

Training results

The model demonstrates strong capabilities in:

  • Generating valid PlantUML syntax
  • Creating meaningful entity relationships
  • Adding appropriate annotations and notes
  • Structuring complex information hierarchically

Framework versions

  • Transformers 4.52.3
  • PyTorch 2.6.0+cu124
  • Datasets 3.6.0
  • Tokenizers 0.21.1

Usage Example

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("your-username/pumlGenV1-1")
tokenizer = AutoTokenizer.from_pretrained("your-username/pumlGenV1-1")

# Prepare the input in conversation format
question = "What role does the annual flooding of the Nile play in the overall agricultural success and survival of the kingdoms along its banks?"

messages = [
    {"from": "human", "value": question},
]

# Format the input (adjust based on your specific tokenizer's chat template)
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(input_text, return_tensors="pt")

# Generate PlantUML diagram
outputs = model.generate(
    **inputs, 
    max_new_tokens=2048,
    temperature=0.7,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id
)

# Decode and extract the PlantUML code
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Extract the PlantUML code from the response (between @startuml and @enduml)
plantuml_code = response.split("@startuml")[-1].split("@enduml")[0]
plantuml_code = "@startuml" + plantuml_code + "@enduml"

print(plantuml_code)
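
To turn the extracted code into an image, one common workflow is to write it to a `.puml` file and render it with a local PlantUML installation (the CLI invocation in the comment is an assumption about your setup, not something this model provides):

```python
from pathlib import Path

# `plantuml_code` comes from the extraction step above; a stub is used here
plantuml_code = "@startuml\nNile --> Agriculture : annual flooding\n@enduml"

Path("diagram.puml").write_text(plantuml_code)
# With PlantUML installed and on PATH, render to PNG:
#   plantuml diagram.puml
```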
Model size: 8.19B parameters (BF16, Safetensors format)