davzoku committed · Commit 7aef2de · verified · 1 Parent(s): 36a2f1f

Create README.md

Files changed (1): README.md (+46 -0)

README.md ADDED
---
base_model: unsloth/llama-3-8b-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
datasets:
- davzoku/moecule-stock-market-outlook
---

# 🫐🥫 stock_market_adapter

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/63c51d0e72db0f638ff1eb82/yLsrUZma5WnLzJrhI6ldZ.png" width="150" height="150" alt="logo">
</p>

## Model Details

This is a LoRA adapter for the [Moecule](https://huggingface.co/collections/davzoku/moecule-67dabc6bb469dcd00ad2a7c5) family of MoE models.

It is part of [Moecule Ingredients](https://huggingface.co/collections/davzoku/moecule-ingredients-67dac0e6210eb1d95abc6411); all related expert models, LoRA adapters, and datasets can be found in that collection.

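The snippet below is a minimal usage sketch, not an official example: it loads the base model with 🤗 Transformers and attaches this adapter with PEFT. The adapter repo id `davzoku/stock_market_adapter` and the sample prompt are assumptions based on this card's title and dataset.

```python
# Minimal sketch: apply this LoRA adapter on top of the base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "unsloth/llama-3-8b-Instruct"
ADAPTER = "davzoku/stock_market_adapter"  # assumption; use the actual adapter repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER)  # attach the LoRA weights

prompt = "What is the outlook for the stock market this quarter?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
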
### Additional Information

- QLoRA 4-bit fine-tuning with Unsloth (see the sketch after this list)
- Base Model: `unsloth/llama-3-8b-Instruct`

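The following is a rough sketch of a QLoRA 4-bit setup with Unsloth in the style of the referenced tutorial; the rank, alpha, target modules, and sequence length shown are illustrative assumptions, not the exact configuration used to train this adapter.

```python
# Illustrative QLoRA 4-bit fine-tuning setup with Unsloth.
# Hyperparameters are assumptions, not the adapter's actual training config.
from unsloth import FastLanguageModel

# Load the base model with 4-bit quantized weights (QLoRA).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach trainable LoRA adapters on top of the frozen 4-bit base weights.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank (illustrative)
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    bias="none",
    use_gradient_checkpointing="unsloth",
    random_state=3407,
)

# The model can then be fine-tuned with TRL's SFTTrainer on
# davzoku/moecule-stock-market-outlook and the LoRA weights pushed as this adapter.
```
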
## The Team

- CHOCK Wan Kee
- Farlin Deva Binusha DEVASUGIN MERLISUGITHA
- GOH Bao Sheng
- Jessica LEK Si Jia
- Sinha KHUSHI
- TENG Kok Wai (Walter)

## References

- [Unsloth Tutorial](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama)
- [Unsloth Finetuning Colab Notebook](<https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3_(8B)-Ollama.ipynb#scrollTo=uMuVrWbjAzhc>)