AnySecret Bot Claude committed
Commit 7235249 · 1 Parent(s): 6282f7a

Prepare folder structure for 3B and 7B models


- Create 3B/, 7B/, 3B-GGUF/, 7B-GGUF/ folders
- Add README files with usage instructions
- Ready for future model uploads

🤖 Generated with Claude Code

Co-Authored-By: Claude <[email protected]>

Files changed (4)
  1. 3B-GGUF/README.md +14 -0
  2. 3B/README.md +12 -0
  3. 7B-GGUF/README.md +14 -0
  4. 7B/README.md +12 -0
3B-GGUF/README.md ADDED
@@ -0,0 +1,14 @@
+# AnySecret Assistant - 3B GGUF Models
+
+Quantized GGUF versions of the 3B model for use with llama.cpp and Ollama.
+
+Available quantizations:
+- `anysecret-assistant-3B-Q4_K_M.gguf` - 4-bit quantization (smallest)
+- `anysecret-assistant-3B-Q5_K_M.gguf` - 5-bit quantization (recommended)
+- `anysecret-assistant-3B-Q8_0.gguf` - 8-bit quantization (highest quality)
+
+## Usage with Ollama
+```bash
+wget https://huggingface.co/anysecret-io/anysecret-assistant/resolve/main/3B-GGUF/anysecret-assistant-3B-Q5_K_M.gguf
+ollama create anysecret-3b -f Modelfile
+```
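The `ollama create` step in the hunk above assumes a `Modelfile` exists in the working directory, but the README never defines one. A minimal sketch, assuming the GGUF file was downloaded as shown (only the `FROM` line is strictly required; templates and parameters are left to the reader):

```
# Modelfile — point Ollama at the downloaded quantized weights
FROM ./anysecret-assistant-3B-Q5_K_M.gguf
```

With this file in place, `ollama run anysecret-3b` starts an interactive session after the create step.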
3B/README.md ADDED
@@ -0,0 +1,12 @@
+# AnySecret Assistant - Llama-3.2-3B Model
+
+Fine-tuned Llama-3.2-3B model for AnySecret configuration assistance.
+
+## Usage
+```python
+from peft import AutoPeftModelForCausalLM
+from transformers import AutoTokenizer
+
+model = AutoPeftModelForCausalLM.from_pretrained("anysecret-io/anysecret-assistant", subfolder="3B")
+tokenizer = AutoTokenizer.from_pretrained("anysecret-io/anysecret-assistant", subfolder="3B")
+```
7B-GGUF/README.md ADDED
@@ -0,0 +1,14 @@
+# AnySecret Assistant - 7B GGUF Models
+
+Quantized GGUF versions of the 7B model for use with llama.cpp and Ollama.
+
+Available quantizations:
+- `anysecret-assistant-7B-Q4_K_M.gguf` - 4-bit quantization (smallest)
+- `anysecret-assistant-7B-Q5_K_M.gguf` - 5-bit quantization (recommended)
+- `anysecret-assistant-7B-Q8_0.gguf` - 8-bit quantization (highest quality)
+
+## Usage with Ollama
+```bash
+wget https://huggingface.co/anysecret-io/anysecret-assistant/resolve/main/7B-GGUF/anysecret-assistant-7B-Q5_K_M.gguf
+ollama create anysecret-7b -f Modelfile
+```
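As with the 3B build, `ollama create` here expects a `Modelfile` that the README does not provide. A minimal sketch, assuming the Q5_K_M file was downloaded into the current directory:

```
# Modelfile — load the 7B quantized weights
FROM ./anysecret-assistant-7B-Q5_K_M.gguf
```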
7B/README.md ADDED
@@ -0,0 +1,12 @@
+# AnySecret Assistant - CodeLlama-7B Model
+
+Fine-tuned CodeLlama-7B model for AnySecret configuration assistance.
+
+## Usage
+```python
+from peft import AutoPeftModelForCausalLM
+from transformers import AutoTokenizer
+
+model = AutoPeftModelForCausalLM.from_pretrained("anysecret-io/anysecret-assistant", subfolder="7B")
+tokenizer = AutoTokenizer.from_pretrained("anysecret-io/anysecret-assistant", subfolder="7B")
+```