It's basically just the Universal-Light preset with min_p turned up a skosh. That tweak helped clean up some of the token leakage, which, in hindsight, was most likely caused by mixing models with different tokenization bases.
Mistral Nemo is a weird beast: its tokenizer was poorly understood at launch, and the Anthracite group released a version of it with the native formatting tokens swapped out for ChatML's. Mag Mell combined models based on that retokenized variant (notably Magnum itself, from which it gets part of its name) with models based on the original instruction-following release. I may have done something wrong in the MergeKit config, but Toasty and I couldn't hammer it out.
Anyways, we ended up recommending a min_p of 0.2 because it keeps the character of the replies while preventing most of the token problems out to a context length long enough to fit a whole scene.
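For anyone unfamiliar with what that setting actually does, here's a minimal sketch of min_p filtering under the standard definition (cutoff scaled by the top token's probability), not the actual backend or SillyTavern implementation. The toy numbers are made up purely to show how 0.2 prunes a stray low-probability token before sampling.

```python
import numpy as np

def min_p_filter(logits: np.ndarray, min_p: float = 0.2) -> np.ndarray:
    """Zero out tokens below min_p * (top token's probability), then renormalize."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    threshold = min_p * probs.max()          # cutoff scales with the best candidate
    probs = np.where(probs >= threshold, probs, 0.0)
    return probs / probs.sum()

# Toy vocabulary of four candidates; the last one stands in for a leaked
# formatting token that's far less likely than the rest. At min_p = 0.2 it
# falls below a fifth of the top token's probability and gets dropped.
logits = np.array([4.0, 3.5, 3.2, 0.5])
print(min_p_filter(logits, min_p=0.2))
```

The nice property for this use case is that the cutoff is relative: when the model is confident, the filter is aggressive and trims the junk; when the distribution is flat, plenty of candidates survive, which is why the replies keep their character.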