Model Card for fvossel/OLMo-2-0325-32B-Instruct-nl-to-fol

This repository contains LoRA adapter weights for the base model allenai/OLMo-2-0325-32B-Instruct. The adapter was trained to translate natural language statements into First-Order Logic (FOL) representations.

Model Details

Model Description

  • Developed by: Vossel et al. at Osnabrück University
  • Funded by: Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), project number 456666331
  • Model type: Decoder-only causal language model (OLMo architecture)
  • Language(s) (NLP): English, FOL
  • License: Apache 2.0. This repository contains only LoRA adapter weights, trained on the base model allenai/OLMo-2-0325-32B-Instruct, which is itself released under the Apache 2.0 License; the adapter weights are released under the same license.
  • Finetuned from model: allenai/OLMo-2-0325-32B-Instruct

Uses

Direct Use

This model is designed to translate natural language (NL) sentences into corresponding first-order logic (FOL) expressions. Use cases include:

  • Automated semantic parsing and formalization of NL statements into symbolic logic.
  • Supporting explainable AI systems that require symbolic reasoning based on language input.
  • Research in neurosymbolic AI, logic-based natural language understanding, and formal verification.
  • Integration into pipelines for natural language inference, question answering, or knowledge base population.

Users should verify and validate the symbolic formulas generated by the model; the level of scrutiny required depends on the application.
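
As a minimal illustration of such a check (a sketch, not part of the released code), the snippet below verifies that a generated formula uses only the connectives listed in the system prompt, plus predicate names, variables, and punctuation, and has balanced parentheses. It is a purely syntactic filter and says nothing about semantic adequacy.

# Minimal syntactic sanity check for generated FOL strings (illustrative sketch).
ALLOWED_CONNECTIVES = set("∀∃¬∧∨→↔⊕")

def looks_well_formed(formula: str) -> bool:
    depth = 0
    for ch in formula:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False  # closing parenthesis without an opener
        elif not (ch.isalnum() or ch.isspace() or ch in ALLOWED_CONNECTIVES or ch in ",._'"):
            return False  # symbol outside the expected vocabulary
    return depth == 0  # every parenthesis closed

print(looks_well_formed("∀x (Dog(x) → Animal(x))"))  # True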

Downstream Use

The LoRA adapter can be further fine-tuned or combined with other models for domain-specific formalization tasks (e.g., legal or biomedical text). It is also suitable as a component of interactive systems that require formal reasoning.
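
A rough sketch of how such downstream fine-tuning could start is shown below; the model and adapter IDs are the ones from this card, while the dtype choice and trainer setup are assumptions.

# Sketch: load the released adapter in trainable mode for further domain-specific fine-tuning.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-0325-32B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "fvossel/OLMo-2-0325-32B-Instruct-nl-to-fol", is_trainable=True
)
# `model` now exposes only the LoRA parameters as trainable and can be passed to a
# standard transformers Trainer or TRL SFTTrainer loop on your own NL-FOL pairs.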

Out-of-Scope Use

  • Not designed for general natural language generation.
  • May struggle with ambiguous, highly figurative, or out-of-domain input.
  • Outputs should not be used as final decisions in critical areas without expert review.

Recommendations

  • Validate outputs carefully before use in critical applications.
  • Be aware of possible biases from training data and synthetic data sources.
  • Specialized for English NL and FOL; may not generalize to other languages or logics.
  • Use human-in-the-loop workflows for sensitive tasks.
  • Intended for research and prototyping, not standalone critical systems.

How to Get Started with the Model

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_name = "allenai/OLMo-2-0325-32B-Instruct"
lora_weights = "fvossel/OLMo-2-0325-32B-Instruct-nl-to-fol"

tokenizer = AutoTokenizer.from_pretrained(base_model_name, trust_remote_code=True)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"

model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 32B base model's memory footprint manageable
    device_map="auto",
)
model = PeftModel.from_pretrained(model, lora_weights, device_map="auto")

def formatting_func(text):
    # Wrap the NL sentence in the chat template used during fine-tuning.
    return tokenizer.apply_chat_template(
        [
            {
                "role": "system",
                "content": (
                    "You are a helpful AI assistant that translates Natural Language (NL) text "
                    "into First-Order Logic (FOL) using only the given quantors and junctors: "
                    "∀ (for all), ∃ (there exists), ¬ (not), ∧ (and), ∨ (or), → (implies), "
                    "↔ (if and only if), ⊕ (xor). "
                    "Start your answer with '𝜙=' followed by the FOL-formula. Do not include any other text."
                ),
            },
            {"role": "user", "content": text},
        ],
        tokenize=False,
        add_generation_prompt=True,  # append the assistant header so the model generates the reply
    )

input_text = "All dogs are animals."
prompt = formatting_func(input_text)
inputs = tokenizer(prompt, return_tensors="pt", padding=True).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
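
The model is instructed to prefix its answer with '𝜙='. A small post-processing step (a sketch, not part of the original example) can recover the bare formula from the decoded output:

# Sketch: extract the FOL formula by splitting on the "𝜙=" marker the model is
# instructed to emit; fall back to the raw decoded text if the marker is missing.
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
formula = decoded.rsplit("𝜙=", 1)[-1].strip() if "𝜙=" in decoded else decoded.strip()
print(formula)  # expected to look something like: ∀x (Dog(x) → Animal(x))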

Training Details

Training Data

The model was fine-tuned on two datasets:

  • WillowNLtoFOL: Contains over 16,000 NL-FOL pairs. Published in:
    Deveci, İ. E. (2024). Transformer models for translating natural language sentences into formal logical expressions.
    Licensed under CC BY-NC-ND 4.0.

  • MALLS-v0: 34,000 NL-FOL pairs generated by GPT-4 and syntactically checked.
    Licensed under CC BY-NC 4.0 (Attribution-NonCommercial), subject to OpenAI's terms of use.

Training Procedure

Fine-tuning used LoRA adapters on the pre-trained OLMo model with:

  • Prompt-based instruction tuning (see the sketch after this list)
  • Multi-GPU (2 GPUs) training with bf16 mixed precision
  • Gradient checkpointing enabled for memory efficiency
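
The exact data-formatting code is not included in this card. The sketch below shows one plausible way a prompt-based instruction-tuning example could be serialized with the tokenizer's chat template, reusing the system prompt from the usage example above; the helper name and the example pair are hypothetical.

# Hypothetical sketch: serialize one NL-FOL training pair with the chat template.
# SYSTEM_PROMPT is assumed to hold the system message shown in the usage example above.
def build_training_text(nl_sentence, fol_formula):
    return tokenizer.apply_chat_template(
        [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": nl_sentence},
            {"role": "assistant", "content": "𝜙=" + fol_formula},
        ],
        tokenize=False,
    )

example_text = build_training_text("All dogs are animals.", "∀x (Dog(x) → Animal(x))")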

Training Hyperparameters

  • Training regime: bf16 mixed precision
  • Batch size: 8 (per device)
  • Learning rate: 1e-5
  • Number of epochs: 12
  • Optimizer: AdamW
  • Scheduler: Cosine learning rate scheduler
  • Warmup ratio: 0.05
  • Gradient accumulation steps: 2
  • Weight decay: 0.01
  • LoRA rank (r): 16
  • LoRA alpha: 32
  • LoRA dropout: 0.05
  • Target modules: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
  • Bias: none
  • Task type: CAUSAL_LM
  • Early stopping patience: 4 epochs
  • DDP parameters:
    • ddp_find_unused_parameters=False
    • ddp_backend="nccl"
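
As an illustration only (not the authors' actual training script), these settings map roughly onto the following peft / transformers configuration objects; the output directory name is hypothetical.

# Illustrative mapping of the listed hyperparameters to configuration objects.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)

training_args = TrainingArguments(
    output_dir="olmo2-32b-nl-to-fol-lora",  # hypothetical output path
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,
    learning_rate=1e-5,
    num_train_epochs=12,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    weight_decay=0.01,
    bf16=True,
    gradient_checkpointing=True,
    ddp_find_unused_parameters=False,
    ddp_backend="nccl",
)
# Early stopping (patience 4) would be added via transformers.EarlyStoppingCallback,
# which additionally requires an evaluation strategy and load_best_model_at_end=True.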