saishshinde15 committed on
Commit 8cf255b · verified · 1 Parent(s): 5d9f64c

Update README.md

Files changed (1):
  1. README.md (+8 −8)
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 base_model:
-- saishshinde15/TBH.AI_Base_Reasoning
+- saishshinde15/Clyrai_Base_Reasoning
 tags:
 - vortex-family
 - sft
@@ -14,19 +14,19 @@ language:
 - en
 ---
 
-# TBH.AI Vortex
+# Clyrai Vortex
 
-- **Developed by:** TBH.AI
+- **Developed by:** clyrai
 - **License:** apache-2.0
-- **Fine-tuned from:** saishshinde15/TBH.AI_Base_Reasoning
+- **Fine-tuned from:** saishshinde15/Clyrai_Base_Reasoning
 - **Part of:** Vortex Family (A collection of four fine-tuned SFT models)
 
 ## **Model Description**
-TBH.AI Vortex is a **highly refined reasoning model** built upon `saishshinde15/TBH.AI_Base_Reasoning`, further enhanced with **high-quality, curated datasets** that the base model lacked. This model is part of the **Vortex Family**, a series of four fine-tuned models designed for advanced reasoning, knowledge synthesis, and structured response generation.
+Clyrai Vortex is a **highly refined reasoning model** built upon `saishshinde15/Clyrai_Base_Reasoning`, further enhanced with **high-quality, curated datasets** that the base model lacked. This model is part of the **Vortex Family**, a series of four fine-tuned models designed for advanced reasoning, knowledge synthesis, and structured response generation.
 
 Unlike typical reinforcement learning-based improvements, **Supervised Fine-Tuning (SFT) was chosen** to ensure greater **control, stability, and alignment with human-preferred responses**, making Vortex more **reliable, interpretable, and useful** across a wide range of tasks.
 
-## **Why TBH.AI Vortex Stands Out**
+## **Why Clyrai Vortex Stands Out**
 - **Enhanced Knowledge & Reasoning**: Incorporates **higher-quality training data** to fill gaps in the base model, improving factual accuracy and logical reasoning.
 - **Better Response Coherence**: Fine-tuned to provide **more structured, well-reasoned, and contextually relevant answers** across different domains.
 - **Improved Handling of Complex Queries**: Excels in **multi-step logical deductions, research-oriented tasks, and structured decision-making**.
@@ -54,7 +54,7 @@ max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!
 dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
 load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.
 model, tokenizer = FastLanguageModel.from_pretrained(
-model_name = "saishshinde15/TBH.AI_Vortex",
+model_name = "saishshinde15/Clyrai_Vortex",
 max_seq_length = max_seq_length,
 dtype = dtype,
 load_in_4bit = load_in_4bit
@@ -96,7 +96,7 @@ from transformers import AutoTokenizer, AutoModelForCausalLM
 import torch
 
 # Load tokenizer and model
-model_name = "saishshinde15/TBH.AI_Vortex"
+model_name = "saishshinde15/Clyrai_Vortex"
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 model = AutoModelForCausalLM.from_pretrained(model_name)
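
The `load_in_4bit = True` flag in the loading snippet above trades precision for memory, as its comment notes. A back-of-the-envelope sketch of why that matters (the 7B parameter count is a placeholder for illustration; the diff does not state the model's size, and real usage adds activation and overhead memory on top):

```python
def approx_weight_memory_gb(n_params: float, bits_per_param: float) -> float:
    """Rough weight-only memory footprint in GB, ignoring activations,
    optimizer state, and framework overhead."""
    return n_params * bits_per_param / 8 / 1e9

n = 7e9  # assumed 7B-parameter model (placeholder, not stated in the diff)
fp16_gb = approx_weight_memory_gb(n, 16)  # ~14.0 GB
int4_gb = approx_weight_memory_gb(n, 4)   # ~3.5 GB
print(f"fp16: {fp16_gb:.1f} GB, 4-bit: {int4_gb:.1f} GB")
```

Roughly a 4x reduction in weight memory, which is what makes a model of this class fit on a 16 GB card like the Tesla T4 mentioned in the `dtype` comment.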