
Sthenno

sthenno

AI & ML interests

To contact me: [email protected]

Recent Activity

liked a model 1 day ago
Qwen/Qwen3-Embedding-8B
liked a dataset 2 days ago
sentence-transformers/stsb

Organizations

MLX Community · Hugging Face Discord Community · sthenno-com

sthenno's activity

reacted to sometimesanotion's post with πŸ‘ about 1 month ago
The capabilities of the new Qwen 3 models are fascinating, and I am watching that space!

My experience, however, is that context management is vastly more important with them. If you use a client with a typical session log with rolling compression, a Qwen 3 model will start to generate the same messages over and over. I don't think that detracts from them. They're optimized for a more advanced MCP environment. I honestly think the 8B is optimal for home use, given proper RAG/CAG.
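For readers unfamiliar with the term, "rolling compression" of a session log roughly means keeping the most recent turns verbatim and collapsing everything older into a summary. A minimal sketch of the idea (the function name, summarizer, and message shape here are illustrative assumptions, not any particular client's API):

```python
def compress_log(messages, window=6,
                 summarizer=lambda msgs: "[summary of earlier turns]"):
    """Rolling compression: keep the last `window` messages verbatim,
    collapse everything older into a single summary message."""
    if len(messages) <= window:
        return list(messages)
    older, recent = messages[:-window], messages[-window:]
    # The summary replaces the older turns, so the model only ever
    # sees a lossy digest of early context plus the recent window.
    return [{"role": "system", "content": summarizer(older)}] + list(recent)

# A 10-turn log shrinks to 1 summary message + 6 recent turns.
log = [{"role": "user", "content": f"turn {i}"} for i in range(10)]
compressed = compress_log(log)
```

Because the older turns survive only as a lossy summary, a model that leans heavily on full context can lose track of what it already said, which is one plausible reason for the repeated-message behavior described above.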

In typical session chats, Lamarck and Chocolatine are still my daily drivers. I worked hard to give Lamarck v0.7 a sprinkling of CoT from both DRT and Deepseek R1. While those models have since been surpassed on the leaderboards, in practice, I still really enjoy their output.

My projects are focusing on application and context management, because that's where the payoff in improved quality is right now. But should there be finetunes to blend into just the right mix, my recipes are standing by.
New activity in sthenno-com/miscii-14b-0218 about 1 month ago
reacted to sometimesanotion's post with πŸ‘€β€οΈ about 1 month ago
(same post as above)
New activity in sthenno-com/miscii-14b-1028 about 1 month ago

Improve language tag
#5 opened about 2 months ago by lbourdois