shallow6414 pruning committed on
Commit eb042af · verified · 0 Parent(s):

Duplicate from pruning/sn11-gemma2-27b-v2


Co-authored-by: Pruning <[email protected]>

.gitattributes ADDED
@@ -0,0 +1,36 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,146 @@
+ ---
+ base_model: google/gemma-3-27b-it
+ library_name: transformers
+ tags:
+ - generated_from_trainer
+ - trl
+ - sft
+ license: gemma
+ ---
+
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/zgFDl7UvWhiPYqdote7XT.png" width="400">
+
+ # Model Card for Synthia-S1-27b
+
+ **Community Page**: [Tesslate Community](https://discord.gg/DkzMzwBTaw), Website: [Tesslate](https://tesslate.com)
+
+ **Creative Writing Samples**: [Sample creative output](https://www.notion.so/Synthia-S1-Creative-Writing-Samples-1ca93ce17c2580c09397fa750d402e71)
+
+ **Authors**: Tesslate
+
+ ## Model Information
+
+ ### Description
+
+ Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and roleplay (RP) use cases. Built upon the robust Gemma3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K-token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
+
+ ### Key Parameters to Run
+
+ #### Creative Writing System Prompt:
+ ```
+ Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
+ ```
+ #### Reasoning System Prompt:
+ ```
+ Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
+ ```
+ #### Coding System Prompt:
+ ```
+ Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
+ ```
+
+ Use `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` with repeat penalty set to 1.3,
+
+ OR (recommended):
+
+ `temperature = 0.7, top_k = 40, repeat_penalty = 1.1, top_p = 0.95, min_p = 0.05` with a rolling window.
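For readers unfamiliar with how these sampler settings compose, here is a minimal, self-contained sketch of the usual temperature / top-k / min-p / top-p filter chain. It is illustrative only: the function name is made up, and real inference engines (llama.cpp, `transformers`) have their own filter ordering and implementations.

```python
import math

def filter_probs(logits, temperature=0.7, top_k=40, top_p=0.95, min_p=0.05):
    """Return renormalized sampling probabilities after a typical filter chain."""
    # Temperature: sharpen (<1) or flatten (>1) the distribution.
    scaled = [l / temperature for l in logits]

    # Softmax (shifted by the max for numerical stability).
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = {i: e / total for i, e in enumerate(exps)}

    # top_k: keep only the k most probable tokens.
    kept = dict(sorted(probs.items(), key=lambda kv: -kv[1])[:top_k])

    # min_p: drop tokens below min_p times the best token's probability.
    best = max(kept.values())
    kept = {i: p for i, p in kept.items() if p >= min_p * best}

    # top_p (nucleus): keep the smallest prefix whose mass reaches top_p.
    nucleus, mass = {}, 0.0
    for i, p in sorted(kept.items(), key=lambda kv: -kv[1]):
        nucleus[i] = p
        mass += p
        if mass >= top_p:
            break

    # Renormalize the survivors; sample from this distribution.
    z = sum(nucleus.values())
    return {i: p / z for i, p in nucleus.items()}
```

Lower temperature plus a tighter `min_p` (the recommended preset above) concentrates sampling on high-confidence tokens, which suits reasoning traces better than the flatter first preset.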
+
+ ### Inputs and Outputs
+
+ * **Input:**
+   * Text prompts for questions, instructions, coding tasks, or summarization
+   * Total input context of 128K tokens
+
+ * **Output:**
+   * Reasoned and structured text outputs
+   * Maximum output length of 8192 tokens
+
+ ## Key Metrics
+
+ Synthia-S1-27b improves on its base model by roughly 10-20% on most benchmarks, and by notably more on some.
+
+ I scaled down each benchmark listed in order to complete them and averaged the resulting numbers, but I can't verifiably claim I ran each full benchmark suite end to end. (I ran out of budget, and I'm running everything on a 4090 now.) Hopefully I can get some community help with benchmarking.
+
+ * GPQA Diamond (198 questions) -> 57%, one-shot (improved from 24.3 on Gemma 3 PT 27B)
+ * MMLU Pro (15% of the entire set) -> 75%, averaged; more details here: [output](https://pastebin.com/kmcYzALq) (beating Gemma 3 PT 27B at 67.5)
+
+ Based on this assessment and the heavy coding content in the dataset, I'm making this claim. Of course, I'm happy to be wrong and go back to the drawing board.
+
+ ## Usage
+
+ Install the latest version of Transformers (>=4.50.0):
+
+ ```shell
+ pip install -U transformers
+ ```
+
+ ### Running with the Pipeline API
+
+ ```python
+ from transformers import pipeline
+ import torch
+
+ # Gemma3 is multimodal, so the image-text-to-text pipeline is used.
+ pipe = pipeline(
+     "image-text-to-text",
+     model="tesslate/synthia-s1-27b",
+     device="cuda",
+     torch_dtype=torch.bfloat16
+ )
+
+ messages = [
+     {"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
+     {"role": "user", "content": [
+         {"type": "image", "url": "https://example.com/sample.jpg"},
+         {"type": "text", "text": "Explain the image."}
+     ]}
+ ]
+
+ output = pipe(text=messages, max_new_tokens=200)
+ print(output[0]["generated_text"][-1]["content"])
+ ```
+
+ ## Training Data
+
+ Synthia-S1-27b was trained on diverse data including:
+
+ * Multiple web documents
+ * Programming debugging and solutions
+ * Mathematical solutions and thinking steps
+
+ Synthia-S1-27b was trained on an A100 for 205+ hours, with multiple rounds of SFT and RL.
+
+ ## Model Architecture
+
+ * **Base Model**: Gemma3
+ * **Size**: 27 billion parameters
+ * **Type**: Decoder-only Transformer
+ * **Precision**: bf16 with int8 quantization
+ * **Training Objective**: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
+
+ ## Quantized Models
+
+ * [Synthia-S1-27b-Q4_K_M-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q4_K_M-GGUF)
+ * [Synthia-S1-27b-Q8_0-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q8_0-GGUF)
+
+ ## Limitations
+
+ * May require detailed prompt engineering for highly specific tasks
+ * Occasional hallucinations in less-explored domains
+
+ ## Citation
+
+ ```bibtex
+ @misc{tesslate_synthias127b,
+   title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
+   author={tesslate},
+   year={2025},
+   publisher={tesslate},
+   url={https://tesslate.com}
+ }
+ ```
+
+ **Developed by Tesslate** | **[Huggingface](https://huggingface.co/tesslate)** | **[Website](https://tesslate.com)**
+ [Image Source](https://pixabay.com/illustrations/girl-backpack-night-surreal-sky-8257551/)
added_tokens.json ADDED
@@ -0,0 +1 @@
+ {"<image_soft_token>": 262144}
chat_template.json ADDED
@@ -0,0 +1 @@
+ {"chat_template": "{{ bos_token }}\n{%- if messages[0]['role'] == 'system' -%}\n {%- if messages[0]['content'] is string -%}\n {%- set first_user_prefix = messages[0]['content'] + '\n\n' -%}\n {%- else -%}\n {%- set first_user_prefix = messages[0]['content'][0]['text'] + '\n\n' -%}\n {%- endif -%}\n {%- set loop_messages = messages[1:] -%}\n{%- else -%}\n {%- set first_user_prefix = \"\" -%}\n {%- set loop_messages = messages -%}\n{%- endif -%}\n{%- for message in loop_messages -%}\n {%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) -%}\n {{ raise_exception(\"Conversation roles must alternate user/assistant/user/assistant/...\") }}\n {%- endif -%}\n {%- if (message['role'] == 'assistant') -%}\n {%- set role = \"model\" -%}\n {%- else -%}\n {%- set role = message['role'] -%}\n {%- endif -%}\n {{ '<start_of_turn>' + role + '\n' + (first_user_prefix if loop.first else \"\") }}\n {%- if message['content'] is string -%}\n {{ message['content'] | trim }}\n {%- elif message['content'] is iterable -%}\n {%- for item in message['content'] -%}\n {%- if item['type'] == 'image' -%}\n {{ '<start_of_image>' }}\n {%- elif item['type'] == 'text' -%}\n {{ item['text'] | trim }}\n {%- endif -%}\n {%- endfor -%}\n {%- else -%}\n {{ raise_exception(\"Invalid content type\") }}\n {%- endif -%}\n {{ '<end_of_turn>\n' }}\n{%- endfor -%}\n{%- if add_generation_prompt -%}\n {{'<start_of_turn>model\n'}}\n{%- endif -%}\n"}
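The template above renders Gemma-style turns: a system message is folded into the first user turn, and the `assistant` role is renamed `model`. As a rough illustration of the format it produces, here is a hand-rolled sketch (the function name is made up; in practice, let `tokenizer.apply_chat_template` execute the Jinja template itself):

```python
def render_gemma_chat(messages, add_generation_prompt=True, bos_token="<bos>"):
    """Approximate the Gemma 3 chat format from chat_template.json."""
    out = [bos_token]
    prefix = ""
    # A leading system message becomes a prefix of the first user turn.
    if messages and messages[0]["role"] == "system":
        prefix = messages[0]["content"] + "\n\n"
        messages = messages[1:]
    for i, msg in enumerate(messages):
        role = "model" if msg["role"] == "assistant" else msg["role"]
        body = (prefix if i == 0 else "") + msg["content"].strip()
        out.append(f"<start_of_turn>{role}\n{body}<end_of_turn>\n")
    # Open a model turn so generation continues as the assistant.
    if add_generation_prompt:
        out.append("<start_of_turn>model\n")
    return "".join(out)
```

This sketch covers only string content; the real template also validates role alternation and emits `<start_of_image>` for image items.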
config.json ADDED
@@ -0,0 +1 @@
+ {"architectures": ["Gemma3ForConditionalGeneration"], "boi_token_index": 255999, "eoi_token_index": 256000, "eos_token_id": [1, 106], "image_token_index": 262144, "initializer_range": 0.02, "mm_tokens_per_image": 256, "model_type": "gemma3", "text_config": {"head_dim": 128, "hidden_size": 5376, "intermediate_size": 21504, "model_type": "gemma3_text", "num_attention_heads": 32, "num_hidden_layers": 62, "num_key_value_heads": 16, "query_pre_attn_scalar": 168, "rope_scaling": {"factor": 8.0, "rope_type": "linear"}, "sliding_window": 1024}, "torch_dtype": "bfloat16", "transformers_version": "4.50.0.dev0", "vision_config": {"hidden_size": 1152, "image_size": 896, "intermediate_size": 4304, "model_type": "siglip_vision_model", "num_attention_heads": 16, "num_hidden_layers": 27, "patch_size": 14, "vision_use_head": false}}
generation_config.json ADDED
@@ -0,0 +1 @@
+ {"bos_token_id": 2, "cache_implementation": "hybrid", "do_sample": true, "eos_token_id": [1, 106], "pad_token_id": 0, "top_k": 64, "top_p": 0.95, "transformers_version": "4.50.0.dev0"}
model-00001-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bd00d2c14a8ce5b8d356b425c514612903f6aa527b5fb7e660ff9c2c8a6dd0e3
+ size 4854573768
model-00002-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d799b58271450e592254820316856c4da60370059b02ab62f9bf1e8662d0bac7
+ size 4954793008
model-00003-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20491acaa3f71229ab2742be973b88badd0ed0dd6d795c09697d5f2b50d18c7c
+ size 4954793048
model-00004-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:15138bf0ba5a4f8c5f48ece5cd98ea549ed9df9f2653dfd1b31de9bc1fbda642
+ size 4954793088
model-00005-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd8265781f5374eddca2c9831bdaa86081a0515158c5d67ba5bf9e3183e6e620
+ size 4954793088
model-00006-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:475d4cc895aaf1c3d54076d59d040c989224615a3883539fde2940fa374ee866
+ size 4954793088
model-00007-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:35bd5c886d51c18c0a86ede6096f9898b4e5081ca19f79f5b4ecb34725e5b9ca
+ size 4954793088
model-00008-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2a813200ce5d057eb048298e29eca82962275c89241629b3f4666cee10430b8f
+ size 4954793088
model-00009-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a3350d3e2bce22cd75c1932ead986d1579e75b2cc0e96f236f05d099cf068426
+ size 4954793088
model-00010-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e0a2c5a70e9fff2d183a517530e217f72cb05a8a21d17af988b33bbb6d470ce4
+ size 4954793088
model-00011-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:39f0122cfbf7a219356a8c4f5e7224c817621fb9d69f661f81416ffc5e60c384
+ size 4954793088
model-00012-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:65200cef7b3f6f838ad8714639f973800946c395dc83205b03ef766469fd4556
+ size 462476760
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
preprocessor_config.json ADDED
@@ -0,0 +1 @@
+ {"do_convert_rgb": null, "do_normalize": true, "do_pan_and_scan": null, "do_rescale": true, "do_resize": true, "image_mean": [0.5, 0.5, 0.5], "image_processor_type": "Gemma3ImageProcessor", "image_seq_length": 256, "image_std": [0.5, 0.5, 0.5], "pan_and_scan_max_num_crops": null, "pan_and_scan_min_crop_size": null, "pan_and_scan_min_ratio_to_activate": null, "processor_class": "Gemma3Processor", "resample": 2, "rescale_factor": 0.00392156862745098, "size": {"height": 896, "width": 896}}
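Per this config, each image is resized to 896x896, rescaled by 1/255, and normalized with per-channel mean and std of 0.5, which maps pixel values into [-1, 1]. A quick sketch of the per-pixel arithmetic (illustrative, with a made-up function name, not the actual `Gemma3ImageProcessor` code):

```python
def preprocess_pixel(value, rescale_factor=1 / 255, mean=0.5, std=0.5):
    """Map a 0-255 pixel value into the normalized [-1, 1] range,
    following the rescale/normalize values in preprocessor_config.json."""
    return (value * rescale_factor - mean) / std
```

The real processor applies this elementwise to the resized image tensor and then feeds it to the vision tower, producing `image_seq_length = 256` soft tokens per image.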
processor_config.json ADDED
@@ -0,0 +1 @@
+ {"image_seq_length": 256, "processor_class": "Gemma3Processor"}
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"boi_token": "<start_of_image>", "bos_token": {"content": "<bos>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false}, "eoi_token": "<end_of_image>", "eos_token": {"content": "<eos>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false}, "image_token": "<image_soft_token>", "pad_token": {"content": "<pad>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false}, "unk_token": {"content": "<unk>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false}}
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a37445c55028d6406e9490d2be970bf316b87ecc5606544f721201ea43c4c6eb
+ size 20323114
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1299c11d7cf632ef3b4e11937501358ada021bbdf7c47638d13c0ee982f2e79c
+ size 4689074
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff