---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - embedder
  - embedding
  - models
  - GGUF
  - text-embeddings-inference 
misc:
  - text-embeddings-inference 
language:
- en
- de
---

# All models tested with ALLM (AnythingLLM) with LM as server
They all work more or less.

My short impression:
- nomic-embed-text
- mxbai-embed-large
- mug-b-1.6

These work well; all the others are up to you to try!
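
If "LM" here is LM Studio, it exposes an OpenAI-compatible embeddings endpoint, so you can sanity-check an embedder outside of ALLM. A minimal sketch, assuming the default local port 1234 and `nomic-embed-text` as the model identifier loaded on your server (adjust both to your setup):

```python
import requests

# Assumptions: LM Studio's default OpenAI-compatible endpoint and an example
# model identifier -- both depend on your local setup, adjust as needed.
ENDPOINT = "http://localhost:1234/v1/embeddings"
MODEL = "nomic-embed-text"

def embed(texts):
    """Request embeddings for a list of strings from the local server."""
    response = requests.post(
        ENDPOINT,
        json={"model": MODEL, "input": texts},
        timeout=60,
    )
    response.raise_for_status()
    # OpenAI-style response: {"data": [{"embedding": [...]}, ...]}
    return [item["embedding"] for item in response.json()["data"]]

vectors = embed(["a short test sentence", "another test sentence"])
print(len(vectors), "embeddings of dimension", len(vectors[0]))
```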


Short hints for usage:
Set your main model's context length (Max Tokens) to 16000 tokens, set your embedder model's Max Embedding Chunk Length to 1024 tokens, and set Max Context Snippets to 14.

-> OK, what does that mean?

You can retrieve 14 snippets of 1024 tokens each (14336 tokens) from your document (~10000 words), leaving ~1600 tokens for the answer (~1000 words).

You can play with these settings to match your needs, e.g. 8 snippets of 2048 tokens, or 28 snippets of 512 tokens ...

A 16000-token context uses roughly 1 GB of VRAM.
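
The arithmetic behind those numbers is simple; here is a small sketch using the example settings from above (nothing here is enforced by ALLM or the server, the values are just the ones discussed):

```python
# Example token budget from the settings above (adjust to your own values).
CONTEXT_LENGTH = 16000   # main model "Max Tokens"
CHUNK_LENGTH   = 1024    # embedder "Max Embedding Chunk Length"
MAX_SNIPPETS   = 14      # "Max Context Snippets"

snippet_budget = MAX_SNIPPETS * CHUNK_LENGTH      # 14 * 1024 = 14336 tokens of document context
answer_budget  = CONTEXT_LENGTH - snippet_budget  # 16000 - 14336 = 1664 tokens left for the answer
print(f"snippets: {snippet_budget} t, answer: {answer_budget} t")

# Other splits in the same 16000 t context; a negative remainder means the
# snippets alone would already overflow the context window.
for snippets, chunk in [(8, 2048), (28, 512)]:
    used = snippets * chunk
    print(f"{snippets} x {chunk} t = {used} t, {CONTEXT_LENGTH - used} t left")
```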


(All licenses and terms of use remain with the original authors.)