Try to get it to answer your questions, if you even can...
A frankenMoE of TinyLlama-1.1B-1T-OpenOrca, TinyLlama-1.1B-intermediate-step-1195k-token-2.5T, and tiny-llama-1.1b-chat-medical.
Most 1.1B models are incoherent and can't answer even simple questions. I picked out some models that aren't as bad, then mashed 32 copies of those 3 models together into a 32-expert MoE.
The OpenOrca experts have been given the task of answering simple questions about things like pop culture, history, and science. The step-1195k experts have been chosen to provide warmth and a positive environment, while the chat-medical experts have been chosen to provide further detail about human subjects and to give small bits of medical advice, e.g. "how do I get rid of this headache I gave myself from making you?"
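FrankenMoE merges like this are commonly built with mergekit's `mergekit-moe` tooling, where each expert is assigned `positive_prompts` that steer the router toward it. Below is a minimal sketch of what such a config could look like; the gating mode, prompts, repo paths, and expert layout here are illustrative assumptions, not the actual recipe used for this model:

```yaml
# Hypothetical mergekit-moe config -- paths and prompts are placeholders,
# not the exact recipe behind this merge.
base_model: TinyLlama-1.1B-1T-OpenOrca   # assumed base; substitute the real repo path
gate_mode: hidden                        # route via hidden-state representations of the prompts
dtype: bfloat16
experts:
  - source_model: TinyLlama-1.1B-1T-OpenOrca
    positive_prompts:
      - "answer a simple question about pop culture, history, or science"
  - source_model: TinyLlama-1.1B-intermediate-step-1195k-token-2.5T
    positive_prompts:
      - "respond with warmth and encouragement"
  - source_model: tiny-llama-1.1b-chat-medical
    positive_prompts:
      - "give a small bit of medical advice"
  # ...entries repeated with varied prompts until the expert count reaches 32
```

Each copy of a model can carry different `positive_prompts`, which is how 3 source models can be fanned out into 32 distinct experts.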