devon-kindo committed (verified)
Commit 3d0a4aa · Parent(s): 7665282

Update README.md

Files changed (1):
  1. README.md +8 -8
README.md CHANGED
@@ -14,13 +14,13 @@ tags:
 
 <br>
 
-# WhiteRabbitNeo
+# DeepHat
 
 <br>
 
-![WhiteRabbitNeo](https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B/resolve/main/whiterabbitneo-logo-defcon.png)
+![DeepHat](https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B/resolve/main/whiterabbitneo-logo-defcon.png)
 
-WhiteRabbitNeo is a model series that can be used for offensive and defensive cybersecurity. Access at [whiterabbitneo.com](https://www.whiterabbitneo.com/) or go to [Kindo.ai](https://www.kindo.ai/) to create agents.
+DeepHat is a model series that can be used for offensive and defensive cybersecurity. Access at [whiterabbitneo.com](https://www.whiterabbitneo.com/) or go to [Kindo.ai](https://www.kindo.ai/) to create agents.
 
 # Community
 
@@ -29,7 +29,7 @@ Join us on [Discord](https://discord.gg/8Ynkrcbk92)
 
 # Technical Overview
 
-WhiteRabbitNeo is a finetune of [Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B/), and inherits the following features:
+DeepHat is a finetune of [Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B/), and inherits the following features:
 - Type: Causal Language Models
 - Training Stage: Pretraining & Post-training
 - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
@@ -57,7 +57,7 @@ Here provides a code snippet with `apply_chat_template` to show you how to load
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_name = "WhiteRabbitNeo/WhiteRabbitNeo-V3-7B"
+model_name = "DeepHat/DeepHat-V1-7B"
 
 model = AutoModelForCausalLM.from_pretrained(
     model_name,
@@ -68,7 +68,7 @@ tokenizer = AutoTokenizer.from_pretrained(model_name)
 
 prompt = "write a quick sort algorithm."
 messages = [
-    {"role": "system", "content": "You are WhiteRabbitNeo, created by Kindo.ai. You are a helpful assistant that is an expert in Cybersecurity and DevOps."},
+    {"role": "system", "content": "You are DeepHat, created by Kindo.ai. You are a helpful assistant that is an expert in Cybersecurity and DevOps."},
     {"role": "user", "content": prompt}
 ]
 text = tokenizer.apply_chat_template(
@@ -108,9 +108,9 @@ For supported frameworks, you could add the following to `config.json` to enable
 
 # License
 
-Apache-2.0 + WhiteRabbitNeo Extended Version
+Apache-2.0 + DeepHat Extended Version
 
-## WhiteRabbitNeo Extension to Apache-2.0 Licence: Usage Restrictions
+## DeepHat Extension to Apache-2.0 Licence: Usage Restrictions
 
 ```
 You agree not to use the Model or Derivatives of the Model:
```
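The renamed system prompt in this diff is consumed by `apply_chat_template`, which renders the message list into the ChatML-style format used by Qwen2.5-based models before tokenization. A minimal, dependency-free sketch of that rendering (this assumes the standard ChatML template; the authoritative template ships with the model's tokenizer and may differ in detail):

```python
def render_chatml(messages, add_generation_prompt=True):
    """Sketch of the ChatML rendering that apply_chat_template performs
    for Qwen2.5-style models (assumed template, for illustration only)."""
    text = ""
    for m in messages:
        # Each turn is wrapped in <|im_start|>{role} ... <|im_end|> markers.
        text += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        text += "<|im_start|>assistant\n"
    return text

messages = [
    {"role": "system", "content": "You are DeepHat, created by Kindo.ai."},
    {"role": "user", "content": "write a quick sort algorithm."},
]
print(render_chatml(messages))
```

Swapping the system message (as this commit does, from WhiteRabbitNeo to DeepHat) only changes the text inside the first `<|im_start|>system ... <|im_end|>` block; the surrounding template is unchanged.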