Update README.md
README.md CHANGED
````diff
@@ -14,13 +14,13 @@ tags:
 
 <br>
 
-#
+# DeepHat
 
 <br>
 
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/Kk4UDAxRy7t1BUpq-V-2U.png)
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/Kk4UDAxRy7t1BUpq-V-2U.png)
 
+DeepHat is a model series that can be used for offensive and defensive cybersecurity. Access at [whiterabbitneo.com](https://www.whiterabbitneo.com/) or go to [Kindo.ai](https://www.kindo.ai/) to create agents.
 
 # Community
 
@@ -29,7 +29,7 @@
 
 # Technical Overview
 
-
+DeepHat is a finetune of [Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B/), and inherits the following features:
 - Type: Causal Language Models
 - Training Stage: Pretraining & Post-training
 - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
````
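The architecture bullets above are inherited unchanged from the Qwen2.5-Coder base. As a quick sanity check, these settings can be read from the published config without downloading any weights; a minimal sketch, assuming the `DeepHat/DeepHat-V1-7B` repo id introduced later in this diff resolves on the Hub:

```python
# Sketch: inspect the inherited Qwen2.5 architecture settings from the
# model config alone (no weights are downloaded).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("DeepHat/DeepHat-V1-7B")

print(config.model_type)    # "qwen2" for Qwen2.5-Coder derivatives
print(config.hidden_act)    # "silu", the gate activation inside SwiGLU
print(config.rope_theta)    # RoPE base frequency
print(config.rms_norm_eps)  # RMSNorm epsilon
```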
````diff
@@ -57,7 +57,7 @@ Here provides a code snippet with `apply_chat_template` to show you how to load
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_name = "
+model_name = "DeepHat/DeepHat-V1-7B"
 
 model = AutoModelForCausalLM.from_pretrained(
     model_name,
````
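The hunk above cuts off inside the `from_pretrained` call because the remaining arguments are unchanged and fall outside the diff context. For reference, a hedged sketch of how the call conventionally continues in Qwen2.5-style model cards; the `torch_dtype` and `device_map` choices are illustrative assumptions, not something this diff specifies:

```python
# Hypothetical completion of the truncated loading call above, following
# the usual Qwen2.5 README pattern.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "DeepHat/DeepHat-V1-7B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",  # keep the dtype stored in the checkpoint
    device_map="auto",   # spread layers across available devices
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```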
````diff
@@ -68,7 +68,7 @@ tokenizer = AutoTokenizer.from_pretrained(model_name)
 
 prompt = "write a quick sort algorithm."
 messages = [
-    {"role": "system", "content": "You are
+    {"role": "system", "content": "You are DeepHat, created by Kindo.ai. You are a helpful assistant that is an expert in Cybersecurity and DevOps."},
     {"role": "user", "content": prompt}
 ]
 text = tokenizer.apply_chat_template(
````
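Likewise, the snippet breaks off at `apply_chat_template(` because the lines that follow are unchanged. A sketch of the standard chat-template generation flow such snippets typically end with, continuing from the `model`, `tokenizer`, and `messages` defined above; `max_new_tokens=512` is an illustrative value:

```python
# Hypothetical continuation: render the chat template, generate, and
# strip the prompt tokens from the returned sequences.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=512)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```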
````diff
@@ -108,9 +108,9 @@ For supported frameworks, you could add the following to `config.json` to enable
 
 # License
 
-Apache-2.0 +
+Apache-2.0 + DeepHat Extended Version
 
-##
+## DeepHat Extension to Apache-2.0 Licence: Usage Restrictions
 
 ```
 You agree not to use the Model or Derivatives of the Model:
````