LinyingLyu committed
Commit a14985d · verified · 1 Parent(s): 859655a

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +100 -39

README.md CHANGED
@@ -12,73 +12,135 @@ inference: false
  ---
  # ChronoGPT

- ## Model Description

  ChronoGPT is a series of **high-performance, chronologically consistent large language models (LLMs)** designed to eliminate lookahead bias and training leakage while maintaining good language understanding in time-sensitive applications. The model is pretrained on **diverse, high-quality, open-source, and timestamped text** to maintain chronological consistency.

- All models in the series achieve **HellaSwag benchmark scores that surpass those of the GPT-2 124M model with the same parameter count.** This approach preserves the integrity of historical analysis and enables more reliable economic and financial modeling.

  - **Developed by:** Songrun He, Linying Lv, Asaf Manela, Jimmy Wu
  - **Model type:** Transformer-based autoregressive decoder (Modified modded-NanoGPT architecture)
  - **Language(s) (NLP):** English
  - **License:** MIT License

- ## Model Sources

- - **Paper:** "Chronologically Consistent Large Language Models" (He, Lv, Manela, Wu, 2025)

- ## How to Get Started with the Model

- The model is compatible with the following requirements:

- ```sh
  pip install -r requirements.txt
  ```

- Here is example code for using the model:

  ```python
- from modeling_chronogpt import ChronoGPT
- import tiktoken
  import torch

- device = 'cuda:0'
- max_length = 1792

  tokenizer = tiktoken.get_encoding("gpt2")
- model = ChronoGPT.from_pretrained("manelalab/chrono-gpt-v1-19991231", trust_remote_code=True).to(device)
-
- text = "Obviously, the time continuum has been disrupted, creating a new temporal event sequence resulting in this alternate reality. -- Dr. Brown, Back to the Future Part II"
-
- inputs = torch.tensor(tokenizer.encode(text))[:max_length].reshape(1,-1).to(device)
- logits, emb = model(inputs)
  ```

- ## Training Details
-
- ### Training Data

- - **Pretraining corpus:** Our initial model chrono-gpt-v1-19991231 is pretrained on 21 billion tokens of pre-2000, diverse, high-quality, open-source text data to ensure no leakage of data from later periods.
- - **Incremental updates:** Yearly updates from 2000 to 2024 with an additional 65 billion tokens of timestamped text.

- ### Training Procedure
-
- - **Architecture:** Modded-NanoGPT-based model with the Muon optimizer, skip connections, rotary embeddings, and flex attention.
- - **Objective:** Autoregressive text generation.
-
- ## Evaluation

- ### Testing Data, Factors & Metrics

- - **Language understanding:** Evaluated on **HellaSwag benchmark** tasks.
- - **Financial forecasting:** Evaluated using a **return prediction task** based on Dow Jones Newswire data.
- - **Comparison models:** ChronoGPT was benchmarked against **BERT, FinBERT, StoriesLM-v1-1963, and Llama 3.1**.

- ### Results

- - **HellaSwag score:** chrono-gpt-v1-19991231 and chrono-gpt-v1-20241231 achieved HellaSwag scores of 0.295 and 0.324, respectively, outperforming GPT-2 (0.294).
- - **Stock return predictions:** Over the sample from 2008-01 to 2023-07, chrono-gpt-v1-realtime achieves a long-short portfolio **Sharpe ratio of 4.50**, outperforming BERT, FinBERT, and StoriesLM-v1-1963, and comparable to **Llama 3.1 8B (4.90)**.

  ## Citation

@@ -91,10 +153,9 @@ logits, emb = model(inputs)
  }
  ```

- ## Model Card Authors

  - Songrun He (Washington University in St. Louis, [email protected])
  - Linying Lv (Washington University in St. Louis, [email protected])
  - Asaf Manela (Washington University in St. Louis, [email protected])
- - Jimmy Wu (Washington University in St. Louis, [email protected])
-

  ---
  # ChronoGPT

+ ## ChronoGPT Highlights

  ChronoGPT is a series of **high-performance, chronologically consistent large language models (LLMs)** designed to eliminate lookahead bias and training leakage while maintaining good language understanding in time-sensitive applications. The model is pretrained on **diverse, high-quality, open-source, and timestamped text** to maintain chronological consistency.

+ All models in the series achieve **HellaSwag benchmark scores that surpass those of the GPT-2 124M model.** This approach preserves the integrity of historical analysis and enables more reliable economic and financial modeling.

  - **Developed by:** Songrun He, Linying Lv, Asaf Manela, Jimmy Wu
  - **Model type:** Transformer-based autoregressive decoder (Modified modded-NanoGPT architecture)
  - **Language(s) (NLP):** English
  - **License:** MIT License

+ ## Model Overview

+ **ChronoGPT** has the following features (a brief tokenization sketch follows the list):
+ - Type: Causal Language Models
+ - Training Stage: Pretraining
+ - Number of Parameters: ~124 Million
+ - Encoder & Decoder Partitioning: 6 encoder and 6 decoder layers
+ - Tokenizer: GPT2Tokenizer from HuggingFace
+ - Context Length: 1,792
+ - Embedding Dimension: 768
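
+ As a quick illustration of these settings, a minimal sketch using the same `tiktoken` GPT-2 encoding as the quickstart below (the example sentence is made up) shows how inputs are tokenized and truncated to the 1,792-token context window:

+ ```python
+ import tiktoken
+ import torch
+
+ tokenizer = tiktoken.get_encoding("gpt2")   # GPT-2 BPE vocabulary
+ context_length = 1792                       # context length listed above
+
+ text = "ChronoGPT is pretrained only on text released before its cutoff date."
+ ids = tokenizer.encode(text)[:context_length]               # truncate to the context window
+ inputs = torch.tensor(ids, dtype=torch.long).unsqueeze(0)   # shape: (1, sequence_length)
+ print(inputs.shape)
+ ```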

+ ## 🚀 Quickstart

+ You can try ChronoGPT directly in your browser via Google Colab:

+ <p align="left">
+   <a href="https://colab.research.google.com/github/LinyingLyu/ChronoGPT/blob/main/ChronoGPT_tutorial.ipynb" target="_blank">
+     <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/>
+   </a>
+ </p>

+ Or run it locally with:

+ ```bash
  pip install -r requirements.txt
  ```

+ ### Text Generation

+ The following code snippet illustrates how to use the model to generate text from a given prompt.

  ```python
  import torch
+ import torch.nn.functional as F
+ import tiktoken
+ from huggingface_hub import HfApi, login
+ from ChronoGPT_inference import *

+ # ----------------------------- Setup -----------------------------
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ cache_dir = 'cache'  # Update this path as needed

  tokenizer = tiktoken.get_encoding("gpt2")
+ max_length = 50               # total tokens per generated sequence
+ num_return_sequences = 5      # number of samples drawn from the same prompt
+ seed = 123

+ # -------------------------- Load Model --------------------------
+ model = ChronoGPT.from_pretrained(
+     "manelalab/chrono-gpt-v1-20241231",
+     trust_remote_code=True,
+     cache_dir=cache_dir
+ ).to(device)

+ # ------------------------ Prepare Input -------------------------
+ prompt = "Hello, I am a language model,"
+ tokens = tokenizer.encode(prompt)
+ tokens = torch.tensor(tokens, dtype=torch.long).unsqueeze(0)
+ tokens = tokens.repeat(num_return_sequences, 1).to(device)

+ # -------------------- Sampling Initialization -------------------
+ xgen = tokens.clone()
+ sample_rng = torch.Generator(device=device)
+ sample_rng.manual_seed(seed)

+ # ------------------------- Text Generation -----------------------
+ while xgen.size(1) < max_length:
+     with torch.no_grad():
+         # Forward pass; device_type matches the device selected above
+         with torch.autocast(device_type=device.type, dtype=torch.bfloat16):
+             logits, _ = model(xgen)
+
+         logits = logits[:, -1, :]  # Last-token logits
+         probs = F.softmax(logits, dim=-1)
+         topk_probs, topk_indices = torch.topk(probs, 50, dim=-1)  # Top-k sampling with k = 50
+
+         sampled_idx = torch.multinomial(topk_probs, 1, generator=sample_rng)
+         next_token = torch.gather(topk_indices, -1, sampled_idx)
+
+         xgen = torch.cat([xgen, next_token], dim=1)

+ # ------------------------- Decode Output -------------------------
+ for i in range(num_return_sequences):
+     decoded_tokens = xgen[i, :max_length].tolist()
+     decoded_text = tokenizer.decode(decoded_tokens)
+     print(f"Rank sample {i}:\n{decoded_text}\n")
  ```

+ ### Extract Embeddings

+ The following code snippet illustrates how to use the model to extract embeddings from all layers for a given input.

+ ```python
+ import torch
+ import torch.nn.functional as F
+ import tiktoken
+ from huggingface_hub import HfApi, login
+ from ChronoGPT_inference import *

+ # ----------------------------- Setup -----------------------------
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ cache_dir = 'cache'  # Update this path as needed

+ tokenizer = tiktoken.get_encoding("gpt2")
+ max_length = 1792  # model context length, used for truncation below

+ # -------------------------- Load Model --------------------------
+ model = ChronoGPT.from_pretrained(
+     "manelalab/chrono-gpt-v1-20241231",
+     trust_remote_code=True,
+     cache_dir=cache_dir
+ ).to(device)

+ # ----------------------- Embedding Generation ---------------------
+ text = "Obviously, the time continuum has been disrupted, creating a new temporal event sequence resulting in this alternate reality."

+ inputs = torch.tensor(tokenizer.encode(text))[:max_length].reshape(1,-1).to(device)
+ logits, emb = model(inputs)
+ print('Dimension of embeddings:', emb[0].shape)
+ ```
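
+ If a single vector per document is needed, one common option is to mean-pool the token embeddings of a chosen layer. The sketch below is an illustration only and assumes each element of `emb` is a tensor of shape `(batch, sequence_length, hidden_dim)`:

+ ```python
+ # Mean-pool the last layer's token embeddings into one vector per document
+ # (assumes emb[-1] has shape (batch, sequence_length, hidden_dim)).
+ doc_embedding = emb[-1].mean(dim=1)
+ print('Document embedding shape:', doc_embedding.shape)
+ ```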

  ## Citation

  }
  ```

+ ### Model Card Authors

  - Songrun He (Washington University in St. Louis, [email protected])
  - Linying Lv (Washington University in St. Louis, [email protected])
  - Asaf Manela (Washington University in St. Louis, [email protected])
+ - Jimmy Wu (Washington University in St. Louis, [email protected])