---
library_name: transformers
tags:
- Think
- Plan
- Reason 
- Reflect 
---

# Intuitive Thinker

<!-- Provide a quick summary of what the model is/does. -->
To enhance the reasoning capabilities of smaller language models, Intuitive Thinker applies a system of thinking that combines mental models, structured Chain-of-Thought processes, and deliberate reflection before responding to user queries.

***Problem:*** <br/>
Smaller transformer models exhibit weaker reasoning capabilities than their larger counterparts, whose advanced reasoning stems from broader connection networks that facilitate cross-domain inference.

***Solution:*** <br/>
A two-step approach:
> 1. Fine-tuning: Fine-tune Llama 3.1, an 8-billion-parameter transformer model, on an enhanced reasoning dataset to strengthen its reasoning capabilities.
> 2. Revealing internal processes: Guide the fine-tuned model with system-of-thinking techniques (Think, Plan, Reason and Reflect) to surface its internal thought process and the rationale behind its answers; an illustrative sketch of such a guidance prompt follows.
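For illustration, here is a minimal sketch of what such a guidance prompt could look like. The real prompts are produced by the `MentalModel` templates in the `intuitive_thinker` package (see the HF usage example below); only the `system_message`/`user_input` keys and the `<thinking>`/`<reflection>`/`<output>` tag structure are taken from that example and the sample outputs, while the wording itself is assumed.

```python
# Illustrative sketch only -- the real templates live in intuitive_thinker.mental_model
# and may use different wording; the tag structure mirrors the sample outputs below.
guidance_prompt = {
    "system_message": (
        "Answer inside <thinking> tags: (a) briefly analyze the question, "
        "(b) present a clear plan of steps, (c) reason through the steps using a chain of thought. "
        "Then, inside <reflection> tags, review the reasoning and check for errors. "
        "Finally, give the answer inside <output> tags."
    ),
    "user_input": "count number of r in word strawberry?",
}
```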

***Available Mental Models*** <br/>
Systems of thinking, reasoning, and reflection (see the selection sketch after this list):
> 1. Chain-of-Thoughts
> 2. Thinking Fast and Slow
> 3. Critical Thinking
> 4. Iceberg Mental Model
> 5. Second Order Thinking
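To switch between these in the usage examples below, a different template constant is passed to `MentalModel`. Only `CHAIN_OF_THOUGHTS` is confirmed by the HF usage example; the alternative constant name shown here is an assumption and may differ in the package.

```python
from intuitive_thinker.mental_model import MentalModel
import json

question = "Should I take on debt to grow my business faster?"

# Confirmed by the HF usage example below:
cot_prompt = json.loads(MentalModel(MentalModel.CHAIN_OF_THOUGHTS)(question))

# Assumed constant name for the "Second Order Thinking" template -- verify
# against the intuitive_thinker package before use:
# so_prompt = json.loads(MentalModel(MentalModel.SECOND_ORDER_THINKING)(question))
```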
<hr/>

***Models*** <br/>
Base Model: mychen76/Llama-3.1_Intuitive-Thinker
https://huggingface.co/mychen76/Llama-3.1_Intuitive-Thinker

Quantized:  mychen76/Llama-3.1_Intuitive-Thinker_8B_2309_GGUF
https://huggingface.co/mychen76/Llama-3.1_Intuitive-Thinker_8B_2309_GGUF
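To fetch the quantized GGUF weights locally (for example, to use them with llama.cpp or Ollama), the Hugging Face Hub client can download the whole repository. This is a general-purpose sketch, not a step documented by the model author.

```python
from huggingface_hub import snapshot_download

# Downloads the full GGUF repository; pick the quantization file you need afterwards.
local_dir = snapshot_download(repo_id="mychen76/Llama-3.1_Intuitive-Thinker_8B_2309_GGUF")
print("GGUF files downloaded to:", local_dir)
```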

***HF Usage*** <br/>

notebook: https://github.com/minyang-chen/intuitive_thinker/blob/main/intuitive_thinker_demo.ipynb

```python
from intuitive_thinker.mental_model import MentalModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch
import json

question="count number of r in word strawberry?"

## format question using mental model template
mental_model = MentalModel(MentalModel.CHAIN_OF_THOUGHTS)
prompt = json.loads(mental_model(question))

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype="float16", bnb_4bit_use_double_quant=True
)
        
# Prepare the input as before
messages = [
    {"role": "system", "content": prompt['system_message']},
    {"role": "user", "content": prompt['user_input'] }
]

hf_model_id = "mychen76/Llama-3.1_Intuitive-Thinker"

# 1: Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(hf_model_id, device_map="auto", quantization_config=bnb_config, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(hf_model_id)

# 2: Apply the chat template
formatted_chat = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
#print("Formatted chat:\n", formatted_chat)

# 3: Tokenize the chat (This can be combined with the previous step using tokenize=True)
inputs = tokenizer(formatted_chat, return_tensors="pt", add_special_tokens=False)
# Move the tokenized inputs to the same device the model is on (GPU/CPU)
inputs = {key: tensor.to(model.device) for key, tensor in inputs.items()}
#print("Tokenized inputs:\n", inputs)

# 4: Generate text from the model
outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.1)
#print("Generated tokens:\n", outputs)

# 5: Decode the output back to a string
decoded_output = tokenizer.decode(outputs[0][inputs['input_ids'].size(1):], skip_special_tokens=True)
print("Decoded output:\n", decoded_output)
```
Result:
```bash
Decoded output:
 <thinking>
a. Briefly analyze the question and outline our approach:
The question asks for the count of the letter 'r' in the word'strawberry'. This is a simple text analysis task that requires examining the word for the target letter.

b. Present a clear plan of steps to solve the problem:
1. Write out the word'strawberry' in its full form.
2. Count the occurrences of the letter 'r' in the word.
3. Provide the final count as the answer.

c. Use a "Chain of Thought" reasoning process:
1. Start by writing out the word: s-t-r-a-w-b-e-r-r-y
2. Now, count the 'r's: r (1), r (2), r (3)
3. We have found all 'r's in the word.

<reflection>
a. Review our reasoning:
We have broken down the task into simple steps and followed through with the counting process.

b. Check for potential errors or oversights:
- We have accounted for all letters in the word.
- We have specifically targeted the letter 'r' for counting.
- We have counted each occurrence of 'r' correctly.

c. Confirm our conclusion:
Yes, our conclusion is correct. The word'strawberry' contains 3 occurrences of the letter 'r'.

</reflection>
</thinking>

<output>
The word'strawberry' contains 3 occurrences of the letter 'r'.
</output>
```
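As the comment in step 3 notes, formatting and tokenization can also be combined into a single call. A minimal sketch using the same `messages`, `tokenizer`, and `model` as above (recent `transformers` versions support `return_dict=True` on `apply_chat_template`):

```python
# Alternative to steps 2-3: format and tokenize the chat in one call.
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,  # returns input_ids and attention_mask together
)
inputs = {key: tensor.to(model.device) for key, tensor in inputs.items()}
outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.1)
```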

***Ollama.com*** <br/>
https://ollama.com/mychen76/llama3.1-intuitive-thinker

For ease of use, each mental model has been packaged as its own model tag (a quick usage sketch follows the list):

1. Chain-of-Thoughts: [mychen76/llama3.1-intuitive-thinker:chain-of-thoughts.q5]
2. Thinking Fast and Slow: [mychen76/llama3.1-intuitive-thinker:thinking-fast-n-slow.q5]
3. Critical Thinking: [mychen76/llama3.1-intuitive-thinker:critical-thinking.q5]
4. Iceberg Mental Model: [mychen76/llama3.1-intuitive-thinker:iceberg-mental-model.q5]
5. Second Order Thinking: [mychen76/llama3.1-intuitive-thinker:second-order-thinking.q5]
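A quick way to try one of these tags from Python is the `ollama` client library. This sketch assumes the package is installed (`pip install ollama`) and the model tag has already been pulled with `ollama pull`.

```python
import ollama

# Assumes: pip install ollama, and
# ollama pull mychen76/llama3.1-intuitive-thinker:chain-of-thoughts.q5
response = ollama.chat(
    model="mychen76/llama3.1-intuitive-thinker:chain-of-thoughts.q5",
    messages=[{"role": "user", "content": "count number of r in word strawberry?"}],
)
print(response["message"]["content"])
```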


### Samples

***Sample: Chain-of-Thoughts***

=========== user question  ===================
```
Patel family who is planning a road trip from Toronto to Vancouver, covering a total distance of 4,400 km. They want to divide the drive into equal parts over 5 days, with each day's drive being 15 km longer than the previous day. If they drive x km on the first day, what is the value of x that will allow them to complete the trip in 5 days?
```
=========== model Response ===================
```
<thinking>
To find the value of x, we need to determine how much distance the family needs to cover each day and ensure it adds up to 4,400 km over 5 days.

Let's break down their driving plan:
1. They drive x km on the first day.
2. Each subsequent day's drive is 15 km longer than the previous day.
3. The trip covers a total distance of 4,400 km.

We can express the total distance covered as a sum of distances driven each day:
Day 1: x
Day 2: x + 15
Day 3: (x + 15) + 15 = x + 30
Day 4: (x + 30) + 15 = x + 45
Day 5: (x + 45) + 15 = x + 60

The sum of these distances should equal the total trip distance:
x + (x + 15) + (x + 30) + (x + 45) + (x + 60) = 4,400 km

Let's simplify this equation by combining like terms:
5x + 150 = 4,400 km

<reflection>
Checking our reasoning process:
- We've correctly expressed the distances driven each day.
- The equation we formed is accurate and reflects the problem statement.
- Let's solve for x to confirm our understanding.

Subtracting 150 from both sides of the equation gives us:
5x = 4,250 km

Dividing both sides by 5 yields:
x = 850 km

Our calculation appears sound. This value for x ensures the family will drive equal distances each day, with the last two days being 30 and 45 km longer than the previous 
ones.

</reflection>
</thinking>

<output>
The Patel family should drive x = 850 km on the first day to complete their trip in 5 days, covering an additional 15 km each subsequent day (resulting in drives of 865 km on 
Day 2, 880 km on Day 3, 895 km on Day 4, and 910 km on Day 5).

This problem illustrates how algebra can be used to solve real-world problems involving equal parts and progressive increases. You might find it interesting to explore other 
scenarios where similar mathematical concepts are applied.

Is there anything else you'd like to clarify or any more questions about this type of problem?
</output>
```
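A quick sanity check of the arithmetic in the sample response above (illustrative, not part of the model output):

```python
# 5x + 150 = 4400  ->  x = 850; each day's drive is 15 km longer than the previous one.
x = (4400 - (15 + 30 + 45 + 60)) / 5
days = [x + 15 * i for i in range(5)]  # [850, 865, 880, 895, 910]
assert sum(days) == 4400
print(x, days)
```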


<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub.

- **Developed by:** [email protected]
- **Model type:** Llama
- **License:** Follows the Llama 3.1 license
- **Finetuned from model:** Llama 3.1 8B


## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

Fine-tuning of Llama 3.1 8B on an enhanced reasoning dataset.