Kquant03 committed on
Commit 64264a5 · 1 Parent(s): 54fb667

Update README.md

Files changed (1): README.md +6 -2
README.md CHANGED
@@ -1,5 +1,9 @@
 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/TmuC9sNBA4sNfDNY9UhU5.jpeg)
 
-Try to get it to answer your questions, if you even can...
+# Try to get it to answer your questions, if you even can...
 
-A frankenMoE of TinyOpenOrca [https://huggingface.co/jeff31415/TinyLlama-1.1B-1T-OpenOrca]
+A frankenMoE of [TinyLlama-1.1B-1T-OpenOrca](https://huggingface.co/jeff31415/TinyLlama-1.1B-1T-OpenOrca),
+[TinyLlama-1.1B-intermediate-step-1195k-token-2.5T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T),
+and [tiny-llama-1.1b-chat-medical](https://huggingface.co/SumayyaAli/tiny-llama-1.1b-chat-medical)
+
+# Most 1.1B models are incoherent and can't even answer simple questions. I found the models that don't fail in this regard, then mashed copies of those 3 models together into a 32-expert (32x) MoE
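FrankenMoE merges of this kind are commonly built with mergekit's `mergekit-moe` mode, which stitches copies of existing dense models into a Mixture-of-Experts checkpoint and derives routing from prompt examples. The card does not say which tool or settings were used, so the following is a hypothetical sketch only: the model names come from the README above, while `gate_mode`, `dtype`, and the `positive_prompts` strings are assumptions.

```yaml
# Hypothetical mergekit-moe config -- tool and settings are NOT confirmed by the card.
# Each expert entry would be repeated (with varied prompts) to reach 32 experts.
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T
gate_mode: hidden          # route tokens by hidden-state similarity to positive_prompts
dtype: bfloat16
experts:
  - source_model: jeff31415/TinyLlama-1.1B-1T-OpenOrca
    positive_prompts: ["answer the question", "explain step by step"]
  - source_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T
    positive_prompts: ["continue the text"]
  - source_model: SumayyaAli/tiny-llama-1.1b-chat-medical
    positive_prompts: ["medical question", "describe the symptoms"]
```

If mergekit was indeed the tool, a config like this would typically be run as `mergekit-moe config.yaml ./output-model` to produce the merged checkpoint.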