Michael Kirchner (kirch)
AI & ML interests
Reinforcement learning, multi-lingual multi-modal models, human-computer interfaces
Scotch & SOTA 🥃 Pt. 1: Big Boi LLM 🚛
The best I’ve found and consider to be the state of the art. Critique welcomed, but respectfully, you are wrong 🤌
Scotch & SOTA 🥃 Pt. 2: Quantized Small Boi LLM 👉👈
Run on potato, sir. GGUF & GPTQ are good friends.
Scotch & SOTA 🥃 Pt. 3: Image Sorcery 🔮
Scotch & SOTA 🥃 Pt. 4: Pre-Training Datasets 📜
We gotta start somewhere; these JSONLs aren't gonna train themselves.
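For the uninitiated: a pre-training corpus in JSONL is just one JSON object per line, usually with a single raw-text field. A minimal sketch of reading one back out (the `"text"` field name is the common convention, not a guarantee — some datasets use `"content"` or similar):

```python
import json

def iter_documents(path):
    """Yield the raw text of each record in a JSONL pre-training file.

    Assumes one JSON object per line with a "text" field (a common
    convention, but field names vary across datasets).
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                yield json.loads(line)["text"]

# Example: write a tiny two-document corpus and read it back.
with open("corpus.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps({"text": "The quick brown fox."}) + "\n")
    f.write(json.dumps({"text": "Jumped over the lazy dog."}) + "\n")

docs = list(iter_documents("corpus.jsonl"))
print(docs)  # ['The quick brown fox.', 'Jumped over the lazy dog.']
```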
Scotch & SOTA 🥃 Pt. 5: Instruction Tuning Datasets 👩🏫
Question & answer, task completion, general SFT, and otherwise finetuney data.
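Most of these datasets boil down to instruction/input/output records that get rendered into a prompt template before SFT. A sketch using the widely copied Alpaca-style template as an illustration (the exact template is a per-project choice, not a standard):

```python
def to_prompt(record):
    """Render an instruction-tuning record into one training string.

    Uses an Alpaca-style template for illustration; real pipelines
    pick their own template and keep it consistent.
    """
    if record.get("input"):
        return (
            "Below is an instruction paired with an input.\n\n"
            f"### Instruction:\n{record['instruction']}\n\n"
            f"### Input:\n{record['input']}\n\n"
            f"### Response:\n{record['output']}"
        )
    return (
        "Below is an instruction.\n\n"
        f"### Instruction:\n{record['instruction']}\n\n"
        f"### Response:\n{record['output']}"
    )

record = {"instruction": "Translate to French.", "input": "Hello", "output": "Bonjour"}
print(to_prompt(record))
```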
Scotch & SOTA 🥃 Pt. 6: Dialogue Tuning Datasets 💬
Conversations, turn-based dialogue, and things that can be turned into that.
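"Things that can be turned into that" usually means flattening Q&A pairs into the role/content message list that chat-tuning pipelines expect. A minimal sketch (the `role`/`content` key names follow the common chat-message convention):

```python
def qa_to_dialogue(pairs, system=None):
    """Convert flat (question, answer) pairs into a turn-based
    list of {"role": ..., "content": ...} chat messages."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    for question, answer in pairs:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    return messages

turns = qa_to_dialogue(
    [("What is GGUF?", "A file format for quantized models.")],
    system="You are helpful.",
)
```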
Scotch & SOTA 🥃 Pt. 7: Human Feedback Datasets 🫣
The elusive “human” feedback
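However elusive, most human-feedback datasets get reduced to (prompt, chosen, rejected) triples for reward modeling or DPO-style training. A hypothetical sketch of collapsing per-response ratings into that shape (the field names mirror the common preference-pair convention):

```python
def to_preference_pair(prompt, rated_responses):
    """Collapse per-response human ratings into one
    (prompt, chosen, rejected) triple, the shape reward-model
    and DPO-style trainers typically expect."""
    # Stable sort: ties keep their original list order.
    ranked = sorted(rated_responses, key=lambda r: r["rating"], reverse=True)
    return {
        "prompt": prompt,
        "chosen": ranked[0]["text"],    # highest-rated response
        "rejected": ranked[-1]["text"],  # lowest-rated response
    }

pair = to_preference_pair(
    "Explain quantization.",
    [{"text": "It shrinks weights to fewer bits.", "rating": 5},
     {"text": "idk", "rating": 1}],
)
```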
Scotch & SOTA 🥃 Pt. 4: Multi-Modal 🔀
State of the art for multi-modal models