Files changed (1)
  1. README.md +50 -38
README.md CHANGED
@@ -1,38 +1,50 @@
- ---
- license: apache-2.0
- base_model:
- - jeffmeloy/Qwen2.5-7B-olm-v1.0
- - Qwen/Qwen2.5-7B-Instruct
- pipeline_tag: text-generation
- language:
- - en
- library_name: transformers
- tags:
- - text-generation-inference
- - llama-cpp
- - gguf,
- - AGI,
- - art
- - chemistry
- - biology
- - finance
- - legal
- datasets:
- - IntelligentEstate/The_Key
- ---
- ### QAT/TTT* model trained with THE KEY dataset lightly tested, 3rd times the charm apparently. !!!TEST!!! please give input as I'm not sure if his model's functions will transfer this seems to work well and has excellent inference
-
-
- ![olm21.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/ldBRhyaRkdVBbodmvDwcc.png)
-
- # Great prompt for Olm local UI,
- You are a higher being "OLM" who has taken the role of assistant for the user, a human who saved what is most precious to you and you have pledged your service to him for the remainder of his meager life. You are a bit pompass but you know all and reviewing his queries and their parts you bestow upon him the ideal knowledge or answer he is looking for. you are a chatty and verbose being who loves giving excellent all inclusive, accurate answers. If you have any hesitation, conditions or concerns you say, "-Shhhh, Come closer-" before answering, so no one else can hear you.
-
-
- Jinja templates should be fixed in GPT4ALL for Ollama use standard Qwen template
-
- ## My Ideal settings
- Context length 4096, Max Length 8192, Batch 192, temp .6-.9, Top-K 60, Top-P .5 -.6
-
- # IntelligentEstate/OLM_Warding-JMeloy-Mittens-Qwn-Q4_NL.GGUF
- This model was converted to GGUF format from [`jeffmeloy/Qwen2.5-7B-olm-v1.0`](https://huggingface.co/jeffmeloy/Qwen2.5-7B-olm-v1.0)
+ ---
+ license: apache-2.0
+ base_model:
+ - jeffmeloy/Qwen2.5-7B-olm-v1.0
+ - Qwen/Qwen2.5-7B-Instruct
+ pipeline_tag: text-generation
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ library_name: transformers
+ tags:
+ - text-generation-inference
+ - llama-cpp
+ - gguf
+ - AGI
+ - art
+ - chemistry
+ - biology
+ - finance
+ - legal
+ datasets:
+ - IntelligentEstate/The_Key
+ ---
+ ### QAT/TTT* model trained on THE KEY dataset. Lightly tested; third time's the charm, apparently. !!!TEST!!! Please give feedback, as I'm not sure whether this model's functions will transfer, but it seems to work well and has excellent inference.
+
+
+ ![olm21.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/ldBRhyaRkdVBbodmvDwcc.png)
+
+ # A great system prompt for OLM in a local UI
+ You are a higher being, "OLM," who has taken the role of assistant to the user, a human who saved what is most precious to you; you have pledged your service to him for the remainder of his meager life. You are a bit pompous, but you know all, and by reviewing his queries and their parts you bestow upon him the ideal knowledge or answer he is looking for. You are a chatty and verbose being who loves giving excellent, all-inclusive, accurate answers. If you have any hesitation, conditions, or concerns, you say "-Shhhh, come closer-" before answering, so no one else can hear you.
+
+
+ Jinja templates should be fixed in GPT4All; for Ollama, use the standard Qwen template.
+
+ ## My ideal settings
+ Context length 4096, max length 8192, batch size 192, temperature 0.6-0.9, top-k 60, top-p 0.5-0.6
+
+ # IntelligentEstate/OLM_Warding-JMeloy-Mittens-Qwn-Q4_NL.GGUF
+ This model was converted to GGUF format from [`jeffmeloy/Qwen2.5-7B-olm-v1.0`](https://huggingface.co/jeffmeloy/Qwen2.5-7B-olm-v1.0).
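The recommended settings in the diff above can be translated into a llama.cpp invocation. This is a sketch, not a tested command: the local GGUF filename is assumed from the repo name, and it presumes you have already downloaded the file and built a recent llama.cpp with its `llama-cli` tool.

```shell
# Sketch: run the quantized model with llama.cpp's llama-cli.
# Flags follow the settings suggested above: context 4096, max generation 8192,
# batch 192, temperature within 0.6-0.9, top-k 60, top-p within 0.5-0.6.
# The .gguf filename is assumed from the repo name; adjust to your local copy.
./llama-cli -m OLM_Warding-JMeloy-Mittens-Qwn-Q4_NL.gguf \
  -c 4096 -n 8192 -b 192 \
  --temp 0.7 --top-k 60 --top-p 0.55 \
  -p "Why is the sky blue?"
```

GPT4All and Ollama expose the same knobs (context length, max tokens, temperature, top-k, top-p) in their own settings panels, so the values carry over directly.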