GeoGPT-R1-Preview

1. Introduction
GeoGPT is a collection of large language models for advancing geoscience research. Built upon state-of-the-art foundation models, GeoGPT models offer enhanced capabilities in specialized areas of geoscience through a series of post-training processes: continual pre-training (CPT), supervised fine-tuning (SFT), and direct preference optimization (DPO).
GeoGPT embraces the open science principles of collaboration, sharing, and co-construction, with a strong commitment to supporting the global geosciences research community. To this end, we have openly released three models:
- Llama3.1-70B-GeoGPT: a large language model based on the foundation of Llama3.1-70B.
- Qwen2.5-72B-GeoGPT: a large language model based on the foundation of Qwen2.5-72B.
- GeoGPT-R1-Preview: a large language reasoning model based on the foundation of Qwen2.5-72B, featuring strong reasoning capabilities when answering geoscience questions.
Our goal is to provide valuable AI tools to scientists, researchers, and professionals engaged in geoscience research worldwide.
2. Model Information
Training Data
GeoGPT respects intellectual property rights and highly values the copyright and proper attribution of authors, researchers, and publishers. To uphold the credibility and integrity of scientific research, GeoGPT relies solely on authoritative and impartial data from trusted sources. The data utilized in training GeoGPT is derived from the following sources:
A geoscience-specific subset of CommonCrawl. CommonCrawl is a publicly available collection of web pages curated by crawling open websites, and it is widely used to train leading large language models. We apply data mining algorithms to extract geoscience-related content from the raw CommonCrawl dataset (an illustrative sketch of such filtering follows this list). For more details, see GeoGPT Training Data from Geoscience Subset of CommonCrawl. The metadata information is available on Hugging Face.
Open access publications licensed under CC BY or CC BY-NC. Through meticulous license filtering, we have curated approximately 280,000 papers from 15 publishers and 182 journals. The full list is provided in GeoGPT Training Data from Open Access Papers.
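The exact mining pipeline behind the CommonCrawl subset is not released with this card. Purely as illustration, a keyword-based pre-filter of the kind often used as a cheap first pass over raw web text might look like the sketch below; the term list, threshold, and helper name are assumptions, not the actual GeoGPT pipeline.

```python
# Illustrative keyword pre-filter for geoscience content. Everything here
# (term list, threshold, helper name) is a hedged assumption; the real
# GeoGPT mining pipeline is not published in this card.
import re

GEO_TERMS = re.compile(
    r"\b(geolog|mineral|tecton|seismic|stratigraph|paleontolog|"
    r"petrolog|geochem|geophys|sediment)\w*",
    re.IGNORECASE,
)

def looks_geoscience(text: str, min_hits: int = 3) -> bool:
    """Recall-oriented filter: keep documents with enough geoscience terms."""
    return len(GEO_TERMS.findall(text)) >= min_hits

docs = [
    "Plate tectonics drives seismic activity along mineral-rich fault zones.",
    "Ten tips for making your web pages load faster.",
]
print([looks_geoscience(d) for d in docs])  # [True, False]
```

In practice such a pre-filter would only be the first stage ahead of a trained classifier and deduplication, but it shows the basic shape of mining a topical subset from a raw web crawl.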
Training Process
The GeoGPT models are trained in three stages:
Continual Pre-training (CPT): This stage trains the base model on a diverse set of geoscience-related corpora to produce a solid geoscience-specialized model.
Supervised Fine-tuning (SFT): This stage enhances the model's ability to follow geoscience-specific instructions by incorporating QA pairs labeled by geoscientists, along with pairs generated from the CPT-stage training corpus.
Human Preference Alignment: This stage uses Direct Preference Optimization (DPO) with preference data labeled by large language models to align the model's responses with human expectations and preferences.
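For reference, DPO (Rafailov et al., 2023) optimizes the policy directly on preference pairs, without fitting a separate reward model. A minimal sketch of the standard DPO loss is shown below; it is illustrative only, since GeoGPT's actual alignment code, data format, and hyperparameters (e.g., the beta value) are not published here.

```python
# Minimal sketch of the standard DPO loss (Rafailov et al., 2023).
# Illustrative only: GeoGPT's actual alignment code and hyperparameters
# (e.g., beta) are not published in this card.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Each argument is the summed log-probability of a chosen or rejected
    response under the trained policy or the frozen reference model."""
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # The loss falls as the policy ranks chosen above rejected responses
    # more strongly than the reference model does.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```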
3. Model Downloads
GeoGPT models can be downloaded from Hugging Face and ModelScope.
| Model | Total Params | Supported Languages | Base Model | Hugging Face | ModelScope |
|---|---|---|---|---|---|
| GeoGPT-R1-Preview | 72B | English and Chinese (primary) | Qwen2.5-72B | 🤗 Hugging Face | 🤖 ModelScope |
4. Quickstart
GeoGPT-R1-Preview
To load the GeoGPT-R1-Preview model with Transformers, use the following snippet:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "GeoGPT-Research-Project/GeoGPT-R1-Preview"

# device_map="auto" shards the 72B model across available GPUs;
# torch_dtype="auto" uses the dtype stored in the checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "What are the main components of granite?"
messages = [
    {"role": "system", "content": "You are a helpful assistant named GeoGPT."},
    {"role": "user", "content": prompt}
]

# Render the chat messages into the model's prompt format, appending the
# generation prompt so the model answers as the assistant.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# A large max_new_tokens budget leaves room for the model's long
# reasoning traces before the final answer.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)

# Strip the prompt tokens so only the newly generated response is decoded.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
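For higher-throughput serving, Qwen2.5-based checkpoints can generally also be loaded with an inference engine such as vLLM. The sketch below assumes this repository loads unmodified in vLLM; the tensor_parallel_size and sampling values are illustrative, since a 72B model must be sharded across several GPUs.

```python
# Hedged sketch: serving with vLLM. Assumes the GeoGPT-R1-Preview checkpoint
# loads like any other Qwen2.5-72B model; tensor_parallel_size=4 and the
# sampling settings are illustrative, not published recommendations.
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_name = "GeoGPT-Research-Project/GeoGPT-R1-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a helpful assistant named GeoGPT."},
    {"role": "user", "content": "What are the main components of granite?"},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

llm = LLM(model=model_name, tensor_parallel_size=4)
outputs = llm.generate([prompt], SamplingParams(temperature=0.6, max_tokens=32768))
print(outputs[0].outputs[0].text)
```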
5. License and Uses
License: GeoGPT-R1-Preview is licensed under the Qwen2.5-72B-GeoGPT License Agreement. Please note that:
- GeoGPT-R1-Preview is trained on the foundation of Qwen2.5-72B. Your use of GeoGPT-R1-Preview shall therefore comply with the Qwen LICENSE AGREEMENT.
Primary intended use: The primary use of GeoGPT models is to support geoscience research, providing geoscientists with innovative tools and capabilities enhanced by large language models. They are specifically designed for non-commercial research and educational purposes.
Out-of-scope use: GeoGPT models are not intended for use in any manner that violates applicable laws or regulations, nor for any activities prohibited by the license agreement. Additionally, they are not intended for use in languages other than those explicitly supported, as outlined in this model card.
6. Ethical Considerations and Limitations
Values: GeoGPT promotes the open science principles of collaboration, sharing, and co-construction. By facilitating collaboration across disciplines and geographical boundaries, GeoGPT seeks to empower experts and innovators with the tools they need to address complex global challenges. We welcome individuals from various backgrounds, experiences, and perspectives to join us in exploring the opportunities and challenges brought by AI and large-scale models.
Limitations: Similar to other language models, the GeoGPT models may occasionally behave in ways that pose potential risks. These models might generate inaccurate, biased, or otherwise objectionable responses to user inputs. Therefore, before deploying applications built on GeoGPT models, developers should conduct thorough safety testing and implement measures to mitigate risks specific to their intended use cases, considering cultural and linguistic contexts.
7. Contact
If you have any questions, please raise an issue or contact us at [email protected].