Short review (still in testing)

by GhostGate - opened

Hi! I wanted to leave some feedback on this, as I didn't get a chance to leave any on the previous version. I tested it using a Q8 quant with a 32k context limit, on two characters I built myself, written in a hybrid of markdown and plain language. One RP used a system prompt finely tuned to that character, while the other took a more "general" approach.

Both of them acted beautifully and very much in tune with their described personalities. Both characters have very complex personalities, with heavy emphasis on manipulation, grimdark themes, and despair (hence stress-testing the positivity bias). The model picked up on the EQ directly and ran with it, adding some flavor of its own to the characters' emotional depth, in line with what was already described. It remembered the actual goal of the roleplay, and I was pleasantly surprised when the character suggested progressing towards that goal without me having to remind it of what we wanted to achieve. So far there have been no stupid mistakes with regard to spatial logic or repeated actions, but I haven't yet broken the 16k barrier.
One negative thing to say is that the second chat started out well, then the format of the responses devolved into action/narration "speech". I don't know why it chose to change format, but once it did, the coherence began to drop (probably because I write narration without asterisks and the first message did too; only speech is in quotes). I checked my settings and they seemed to be fine. I might need to add example responses to enforce the format, or check whether the tokenizer was the issue (it should have been set to Llama 3, but you never know).

Testing will continue, though so far this is amazing work.
