TehVenom committed
Commit 4f50cda · 1 Parent(s): 88bc27c

Update README.md

Files changed (1)
  1. README.md +6 -0
README.md CHANGED
@@ -11,6 +11,12 @@ inference: false
  <h1 style="text-align: center">Metharme 7B</h1>
  <h2 style="text-align: center">An instruction-tuned LLaMA biased towards fiction writing and conversation.</h2>
 
+ > Currently, KoboldCPP is unable to stop inference when an EOS token is emitted, which causes the model to devolve into gibberish.
+ >
+ > This is now fixed on the dev branch of KoboldCPP. Make sure you are compiling the latest version; the fix landed only after this model was released.
+ >
+ > When running KoboldCPP, you will need to add the --unbantokens flag for this model to behave properly.
+
  ## Model Details
 
  This model has the XOR files pre-applied out of the box.
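
For reference, a minimal sketch of launching KoboldCPP with the --unbantokens flag mentioned in the note above. The script and model paths here are assumptions (a local koboldcpp.py checkout and a hypothetical GGML conversion of Metharme 7B); only the flag itself comes from the README text.

```python
import subprocess

# Hypothetical paths -- adjust to your local KoboldCPP checkout and GGML model file.
KOBOLDCPP_SCRIPT = "koboldcpp.py"
MODEL_PATH = "metharme-7b-ggml-q4_0.bin"

# --unbantokens lets the EOS token through so generation can stop cleanly,
# which the note above says this model needs to behave properly.
subprocess.run(
    ["python", KOBOLDCPP_SCRIPT, MODEL_PATH, "--unbantokens"],
    check=True,
)
```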