![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/TmuC9sNBA4sNfDNY9UhU5.jpeg)

# Try to get it to answer your questions, if you even can...

A frankenMoE of [TinyLlama-1.1B-1T-OpenOrca](https://huggingface.co/jeff31415/TinyLlama-1.1B-1T-OpenOrca), [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T), and [tiny-llama-1.1b-chat-medical](https://huggingface.co/SumayyaAli/tiny-llama-1.1b-chat-medical).

# Most 1.1B models are incoherent and can't even answer simple questions.

I picked out some models that aren't as bad, then mashed 32 copies of those 3 models together into a 32x MoE.

The OpenOrca experts have been given the task of answering simple questions about things like pop culture, history, and science. The step-1195k experts have been chosen to provide warmth and a positive environment, while the chat-medical experts have been chosen to provide further detail about human subjects and to give small bits of medical advice, e.g. "how do I get rid of this headache I gave myself from making you?"

### p.s.

...since this is 32 different experts mashed together, it's more likely to be paranoid schizophrenic than anything else.
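
If you still want to poke at it, here's a minimal sketch of loading the merged MoE with Transformers. The repo id below is a placeholder, not the actual model path, so swap in whatever this model ends up being called:

```python
# Minimal sketch: load the frankenMoE and ask it something simple.
# NOTE: "your-username/TinyLlama-32x-frankenMoE" is a hypothetical repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/TinyLlama-32x-frankenMoE"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "How do I get rid of this headache?"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling settings are just a starting point; tune to taste.
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```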