Sao10K committed
Commit 89a3ac0 · verified · 1 Parent(s): e8288ba

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -22,6 +22,6 @@ Notes:
  <br> \- I get consistent and reliable answers at ~11K context fine.
  <br> \- Still coherent at up to 16K though! Just works not that well.

- I recommend sticking up to 12K context, but loading the model at 16K. It has a really accurate context up to 10K from multiple different extended long context tests. 16K works fine for roleplays, but not for more detailed tasks.
+ I recommend sticking up to 12K context, but loading the model at 16K for inference. It has a really accurate context up to 10K from multiple different extended long context tests. 16K works fine for roleplays, but not for more detailed tasks.

  ![Needle](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2.1-16K/resolve/main/output.png)
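For reference, a minimal sketch of how the "load at 16K, keep prompts near 12K" advice could be applied with the Transformers library. This is not from the model card: the dtype/device settings, the 12288-token cap, and the Alpaca-style prompt string are assumptions for illustration.

```python
# Hedged sketch: load Fimbulvetr-11B-v2.1-16K for inference while capping the
# prompt near 12K tokens, per the README's recommendation. Settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sao10K/Fimbulvetr-11B-v2.1-16K"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # assumption: bf16 on a capable GPU
    device_map="auto",
)

# Hypothetical prompt; the model is loaded with its full 16K window, but the
# input is truncated to ~12K tokens where answers stay most reliable.
prompt = "### Instruction:\nContinue the roleplay scene.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=12288).to(model.device)

output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```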