morriszms committed on
Commit 67f4df8 · verified · 1 Parent(s): e1ff83e

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Lucie-7B-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Lucie-7B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Lucie-7B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Lucie-7B-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Lucie-7B-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Lucie-7B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Lucie-7B-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Lucie-7B-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Lucie-7B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Lucie-7B-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Lucie-7B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Lucie-7B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Lucie-7B-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fef8d06c951b38d33249afd0e416933d9fd9f1147387546093cfb082f8b3eb9c
+ size 2583916032
Lucie-7B-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:29e9588c0b5a1f438d8dab2a0a559634044451232b6ef82d8df5a6f493e945a3
+ size 3576704512
Lucie-7B-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1755001151a572af54e6396422a5d1a7b185e0a8117bb31dc33f26f7211856e8
+ size 3305123328
Lucie-7B-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:23c6b458eb8cf5b520f1d434b6616a0b578c9cc5eab516114b114729423ed17e
+ size 2988453376
Lucie-7B-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:64c7d499d51eb3cb74571cff6d172ee6a44a190d80a9b36d664e7210a23ff9b4
+ size 3843812864
Lucie-7B-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6de9c81bae10d1b201a1450761c98df6af6847eac2115f1923c97f59345eff5a
+ size 4068732416
Lucie-7B-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:42c397ba5e03ca43524ac524ed66714e6f79ec4575540b3cd706e510725a6727
+ size 3871075840
Lucie-7B-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1a3fad7a41b3664dabb323d2ca96d7e0dfefc578b409997652a0fd3f8ceed040
+ size 4648857088
Lucie-7B-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:60384b6621126a162a14e78835d8717a8826c7abcdb9a158008c661ce7e69eba
+ size 4764724736
Lucie-7B-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:844216a54b5a18ae90f015c83a9dcdb78490e1ceafa21988fab602e511626915
+ size 4648857088
Lucie-7B-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea44f84bb1170c4d4778f28b1093993e07d9b33f7f4ec600a68860d693ffb00a
+ size 5504216576
Lucie-7B-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c353904690091b1137d5eb93f4040acfc5e84ab38128db2581dafbe6134766b8
+ size 7128493568
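Each `.gguf` entry above is stored as a Git LFS pointer: a three-line text file giving the spec version, the `sha256` of the real blob, and its byte size. A minimal sketch of parsing such a pointer and verifying a downloaded blob against it (the helper names and the tiny sample blob are illustrative, not part of this repo):

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # The oid field is stored as "sha256:<hex digest>".
    fields["oid"] = fields["oid"].split(":", 1)[1]
    fields["size"] = int(fields["size"])
    return fields

def verify_download(data: bytes, pointer: dict) -> bool:
    """Check a downloaded blob against the pointer's size and sha256."""
    return (len(data) == pointer["size"]
            and hashlib.sha256(data).hexdigest() == pointer["oid"])

# Illustrative pointer for a tiny blob (not one of the real model files):
blob = b"GGUF"
pointer_text = (
    "version https://git-lfs.github.com/spec/v1\n"
    f"oid sha256:{hashlib.sha256(blob).hexdigest()}\n"
    f"size {len(blob)}\n"
)
pointer = parse_lfs_pointer(pointer_text)
print(verify_download(blob, pointer))  # True
```

The same check can be run against the real files: hash a downloaded `.gguf` and compare it to the `oid` shown in the corresponding pointer above.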
README.md ADDED
@@ -0,0 +1,138 @@
+ ---
+ license: apache-2.0
+ pipeline_tag: text-generation
+ language:
+ - fr
+ - en
+ - it
+ - de
+ - es
+ tags:
+ - pretrained
+ - llama-3
+ - openllm-france
+ - TensorBlock
+ - GGUF
+ datasets:
+ - OpenLLM-France/Lucie-Training-Dataset
+ widget:
+ - text: 'Quelle est la capitale de l''Espagne ? Madrid.
+
+     Quelle est la capitale de la France ?'
+   example_title: Capital cities in French
+   group: 1-shot Question Answering
+ training_progress:
+   num_steps: 756291
+   num_tokens: 3131736326144
+   context_length: 32000
+ base_model: OpenLLM-France/Lucie-7B
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+   <div style="display: flex; flex-direction: column; align-items: flex-start;">
+     <p style="margin-top: 0.5em; margin-bottom: 0em;">
+       Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+     </p>
+   </div>
+ </div>
+
+ ## OpenLLM-France/Lucie-7B - GGUF
+
+ This repo contains GGUF format model files for [OpenLLM-France/Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
+
+ ## Our projects
+ <table border="1" cellspacing="0" cellpadding="10">
+   <tr>
+     <th style="font-size: 25px;">Awesome MCP Servers</th>
+     <th style="font-size: 25px;">TensorBlock Studio</th>
+   </tr>
+   <tr>
+     <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
+     <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
+   </tr>
+   <tr>
+     <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
+     <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
+   </tr>
+   <tr>
+     <th>
+       <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
+        display: inline-block;
+        padding: 8px 16px;
+        background-color: #FF7F50;
+        color: white;
+        text-decoration: none;
+        border-radius: 6px;
+        font-weight: bold;
+        font-family: sans-serif;
+       ">👀 See what we built 👀</a>
+     </th>
+     <th>
+       <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
+        display: inline-block;
+        padding: 8px 16px;
+        background-color: #FF7F50;
+        color: white;
+        text-decoration: none;
+        border-radius: 6px;
+        font-weight: bold;
+        font-family: sans-serif;
+       ">👀 See what we built 👀</a>
+     </th>
+   </tr>
+ </table>
+
+ ## Prompt template
+
+ ```
+ <s><|start_header_id|>system<|end_header_id|>
+
+ {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
+
+ {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
+ ```
+
100
+ ## Model file specification
101
+
102
+ | Filename | Quant type | File Size | Description |
103
+ | -------- | ---------- | --------- | ----------- |
104
+ | [Lucie-7B-Q2_K.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Lucie-7B-GGUF/blob/main/Lucie-7B-Q2_K.gguf) | Q2_K | 2.584 GB | smallest, significant quality loss - not recommended for most purposes |
105
+ | [Lucie-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Lucie-7B-GGUF/blob/main/Lucie-7B-Q3_K_S.gguf) | Q3_K_S | 2.988 GB | very small, high quality loss |
106
+ | [Lucie-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Lucie-7B-GGUF/blob/main/Lucie-7B-Q3_K_M.gguf) | Q3_K_M | 3.305 GB | very small, high quality loss |
107
+ | [Lucie-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Lucie-7B-GGUF/blob/main/Lucie-7B-Q3_K_L.gguf) | Q3_K_L | 3.577 GB | small, substantial quality loss |
108
+ | [Lucie-7B-Q4_0.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Lucie-7B-GGUF/blob/main/Lucie-7B-Q4_0.gguf) | Q4_0 | 3.844 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
109
+ | [Lucie-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Lucie-7B-GGUF/blob/main/Lucie-7B-Q4_K_S.gguf) | Q4_K_S | 3.871 GB | small, greater quality loss |
110
+ | [Lucie-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Lucie-7B-GGUF/blob/main/Lucie-7B-Q4_K_M.gguf) | Q4_K_M | 4.069 GB | medium, balanced quality - recommended |
111
+ | [Lucie-7B-Q5_0.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Lucie-7B-GGUF/blob/main/Lucie-7B-Q5_0.gguf) | Q5_0 | 4.649 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
112
+ | [Lucie-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Lucie-7B-GGUF/blob/main/Lucie-7B-Q5_K_S.gguf) | Q5_K_S | 4.649 GB | large, low quality loss - recommended |
113
+ | [Lucie-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Lucie-7B-GGUF/blob/main/Lucie-7B-Q5_K_M.gguf) | Q5_K_M | 4.765 GB | large, very low quality loss - recommended |
114
+ | [Lucie-7B-Q6_K.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Lucie-7B-GGUF/blob/main/Lucie-7B-Q6_K.gguf) | Q6_K | 5.504 GB | very large, extremely low quality loss |
115
+ | [Lucie-7B-Q8_0.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Lucie-7B-GGUF/blob/main/Lucie-7B-Q8_0.gguf) | Q8_0 | 7.128 GB | very large, extremely low quality loss - not recommended |
116
+
117
+
118
+ ## Downloading instruction
119
+
120
+ ### Command line
121
+
122
+ Firstly, install Huggingface Client
123
+
124
+ ```shell
125
+ pip install -U "huggingface_hub[cli]"
126
+ ```
127
+
128
+ Then, downoad the individual model file the a local directory
129
+
130
+ ```shell
131
+ huggingface-cli download tensorblock/OpenLLM-France_Lucie-7B-GGUF --include "Lucie-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
132
+ ```
133
+
134
+ If you wanna download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
135
+
136
+ ```shell
137
+ huggingface-cli download tensorblock/OpenLLM-France_Lucie-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
138
+ ```