AstroLLaMA-2-7B-Base_AIC

AstroLLaMA-2-7B-Base_AIC is a specialized base language model for astronomy, developed by fine-tuning Meta's LLaMA-2-7b architecture on astronomical literature. This model was originally developed by the AstroLLaMA team as part of the UniverseTBD initiative. It is designed for next token prediction tasks and is not an instruct/chat model.

Note: This model is provided for completeness in the series of AstroLLaMA models. The core AstroLLaMA team has since moved on to develop more advanced models under AstroMLab. For the original UniverseTBD version, please visit their repository.

Model Details

  • Base Architecture: LLaMA-2-7b
  • Training Data: Abstract, Introduction, and Conclusion (AIC) sections from arXiv's astro-ph category papers (from arXiv's inception up to July 2023)
  • Data Processing: The training data was derived from LaTeX source files using regex-based extraction methods to identify and extract the relevant sections (Abstract, Introduction, and Conclusion).
  • Fine-tuning Method: Parameter-Efficient Fine-Tuning (PEFT) with Low-Rank Adaptation (LoRA)
  • Primary Use: Next token prediction for astronomy-related text generation and analysis
  • Reference: Perkowski et al. 2024
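The regex-based extraction of Abstract, Introduction, and Conclusion sections mentioned above can be sketched roughly as follows. This is an illustrative reconstruction, not the team's actual preprocessing code; real arXiv LaTeX sources vary widely (custom macros, multi-file projects), so the patterns here only cover the common case:

```python
import re

def extract_aic(latex: str) -> dict:
    """Pull Abstract, Introduction, and Conclusion text out of a LaTeX source.

    A minimal sketch: the abstract is taken from the abstract environment,
    and the other sections from \\section{...} up to the next \\section.
    """
    sections = {}

    # Abstract lives in its own environment
    m = re.search(r"\\begin\{abstract\}(.*?)\\end\{abstract\}", latex, re.DOTALL)
    if m:
        sections["abstract"] = m.group(1).strip()

    # Introduction / Conclusion(s): capture until the next \section or end of file
    for name in ("Introduction", "Conclusion"):
        pat = r"\\section\*?\{" + name + r"s?\}(.*?)(?=\\section|\Z)"
        m = re.search(pat, latex, re.DOTALL | re.IGNORECASE)
        if m:
            sections[name.lower()] = m.group(1).strip()

    return sections
```

Even a sketch like this hints at the limitation noted later in this card: papers that use nonstandard section titles or macros would be extracted incompletely, introducing inconsistencies into the training data.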

Generating text from a prompt

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("AstroMLab/astrollama-2-7b-base_aic")
model = AutoModelForCausalLM.from_pretrained("AstroMLab/astrollama-2-7b-base_aic", device_map="auto")

# Create the pipeline with explicit truncation; the model is already placed
# on devices via device_map above, so no device argument is needed here
generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    truncation=True,
    max_length=512
)

# Example prompt from an astronomy paper
prompt = "In this letter, we report the discovery of the highest redshift, " \
    "heavily obscured, radio-loud QSO candidate selected using JWST NIRCam/MIRI, " \
    "mid-IR, sub-mm, and radio imaging in the COSMOS-Web field. "

# Set seed for reproducibility
torch.manual_seed(42)

# Generate text
generated_text = generator(prompt, do_sample=True)
print(generated_text[0]['generated_text'])
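Because this is a base model trained for next token prediction, a common use besides generation is scoring text, i.e. computing perplexity over a passage. The helper below shows the standard shift-by-one computation from a causal LM's logits; the function itself is generic (you would pass it `model(input_ids).logits` and the same `input_ids`):

```python
import torch
import torch.nn.functional as F

def sequence_perplexity(logits: torch.Tensor, input_ids: torch.Tensor) -> float:
    """Perplexity of a token sequence under a causal LM.

    logits: (batch, seq_len, vocab) raw model outputs
    input_ids: (batch, seq_len) token ids that produced those logits
    """
    # Position t's logits predict token t+1, so drop the last logit
    # and the first label before computing cross-entropy
    shift_logits = logits[:, :-1, :]
    shift_labels = input_ids[:, 1:]
    loss = F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
    )
    return torch.exp(loss).item()
```

With the model loaded as above, usage would look like `sequence_perplexity(model(ids).logits, ids)` under `torch.no_grad()`. Lower perplexity on astronomy text than a general-purpose model of the same size is the behavior this fine-tune targets.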

Model Limitations and Biases

This model is specifically trained on astronomy literature (abstracts, introductions, and conclusions) and may not generalize well to other domains. Users should be aware of potential biases in the training data, which may reflect historical trends and biases in astronomical research publications. Additionally, the regex-based extraction method used for processing the LaTeX source files may introduce some biases or inconsistencies in the training data.

Importantly, this model has been superseded by more advanced versions. Below is a performance comparison based on the astronomical benchmarking Q&A described in Ting et al. 2024 and Pan et al. 2024.

| Model | Score (%) |
|---|---|
| AstroSage-8B (AstroMLab) | 79.1 |
| AstroLLaMA-2-70B (AstroMLab) | 76.0 |
| LLaMA-3.1-8B | 73.7 |
| Gemma-2-9B | 71.5 |
| Qwen-2.5-7B | 70.4 |
| Yi-1.5-9B | 68.4 |
| InternLM-2.5-7B | 64.5 |
| Mistral-7B-v0.3 | 63.9 |
| ChatGLM3-6B | 50.4 |
| AstroLLaMA-2-7B-AIC | 44.3 |
| AstroLLaMA-2-7B-Abstract | 43.5 |

As shown, the AstroLLaMA-2-7B series is outperformed by newer models. For state-of-the-art performance, we recommend using the latest models.

Ethical Considerations

While this model is designed for scientific use, users should be mindful of potential misuse, such as generating misleading scientific content. Always verify model outputs against peer-reviewed sources for critical applications.

Citation

If you use this model in your research, please cite:

@ARTICLE{2024RNAAS...8....7P,
       author = {{Perkowski}, Ernest and {Pan}, Rui and {Nguyen}, Tuan Dung and {Ting}, Yuan-Sen and {Kruk}, Sandor and {Zhang}, Tong and {O'Neill}, Charlie and {Jablonska}, Maja and {Sun}, Zechang and {Smith}, Michael J. and {Liu}, Huiling and {Schawinski}, Kevin and {Iyer}, Kartheik and {Ciuc{\u{a}}}, Ioana and {UniverseTBD}},
        title = "{AstroLLaMA-Chat: Scaling AstroLLaMA with Conversational and Diverse Datasets}",
      journal = {Research Notes of the American Astronomical Society},
     keywords = {Astronomy software, Publicly available software, Astronomical instrumentation, 1855, 1864, 799, Astrophysics - Instrumentation and Methods for Astrophysics, Astrophysics - Cosmology and Nongalactic Astrophysics, Astrophysics - Astrophysics of Galaxies, Astrophysics - Solar and Stellar Astrophysics, Computer Science - Computation and Language, Computer Science - Machine Learning},
         year = 2024,
        month = jan,
       volume = {8},
       number = {1},
          eid = {7},
        pages = {7},
          doi = {10.3847/2515-5172/ad1abe},
archivePrefix = {arXiv},
       eprint = {2401.01916},
 primaryClass = {astro-ph.IM},
       adsurl = {https://ui.adsabs.harvard.edu/abs/2024RNAAS...8....7P},
      adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}