Jarrodbarnes committed
Commit 8507500 · verified · 1 Parent(s): aa30287

Upload README.md with huggingface_hub

Files changed (1): README.md (+62, -0)

README.md ADDED
@@ -0,0 +1,62 @@
---
language:
- en
license: apache-2.0
tags:
- Mistral-Small
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- roleplaying
- chat
- reasoning
- r1
- vllm
- mlx
- mlx-my-repo
base_model: NousResearch/DeepHermes-3-Mistral-24B-Preview
widget:
- example_title: DeepHermes 3
  messages:
  - role: system
    content: You are a sentient, superintelligent artificial general intelligence,
      here to teach and assist me.
  - role: user
    content: What is the meaning of life?
library_name: transformers
model-index:
- name: DeepHermes-3-Mistral-24B-Preview
  results: []
---

# Jarrodbarnes/DeepHermes-3-Mistral-24B-Preview-mlx-fp16

The model [Jarrodbarnes/DeepHermes-3-Mistral-24B-Preview-mlx-fp16](https://huggingface.co/Jarrodbarnes/DeepHermes-3-Mistral-24B-Preview-mlx-fp16) was converted to MLX format from [NousResearch/DeepHermes-3-Mistral-24B-Preview](https://huggingface.co/NousResearch/DeepHermes-3-Mistral-24B-Preview) using mlx-lm version **0.21.5**.
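
A conversion like this one can be reproduced with the `mlx_lm.convert` entry point. A minimal sketch, assuming the mlx-lm 0.21.x flag names (`--hf-path`, `--mlx-path`, `--dtype`) and an illustrative output path; check `python -m mlx_lm.convert --help` for your installed version:

```bash
# Convert the original weights to fp16 MLX format (flags assumed from mlx-lm 0.21.x)
python -m mlx_lm.convert \
    --hf-path NousResearch/DeepHermes-3-Mistral-24B-Preview \
    --mlx-path DeepHermes-3-Mistral-24B-Preview-mlx-fp16 \
    --dtype float16
```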

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the converted weights and tokenizer from the Hub
model, tokenizer = load("Jarrodbarnes/DeepHermes-3-Mistral-24B-Preview-mlx-fp16")

prompt = "hello"

# Wrap the prompt in the model's chat template when the tokenizer defines one
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
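
For a quick test without writing any Python, mlx-lm also ships a command-line generator. A minimal sketch, assuming the 0.21.x CLI flags (`--model`, `--prompt`, `--max-tokens`):

```bash
# Generate a short completion directly from the terminal (flags assumed from mlx-lm 0.21.x)
python -m mlx_lm.generate \
    --model Jarrodbarnes/DeepHermes-3-Mistral-24B-Preview-mlx-fp16 \
    --prompt "What is the meaning of life?" \
    --max-tokens 256
```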