Ksenia Se

Kseniase

AI & ML interests

None yet

Recent Activity

replied to their post 1 day ago
10 new Chain-of-Thought (CoT) methods

CoT has long been one of the hottest techniques in AI thanks to its effectiveness and compelling core idea: encouraging models to solve complex problems through explicit intermediate reasoning steps. Researchers often modify the original CoT approach, finding tweaks that further improve LLMs' reasoning. That's what we're going to talk about today. Here's a list of 10 of the latest enhanced CoT approaches:

1. Chain-of-Defensive-Thought -> https://huggingface.co/papers/2504.20769
Provides a few structured, defensive reasoning exemplars to improve the robustness of LLMs

2. Hybrid-CoT -> https://huggingface.co/papers/2504.21659
Proposes an Adaptive Hybrid Reasoning Model (AdaR1) that combines Long- and Short-CoT, applying bi-level preference training to select effective reasoning styles

3. Semantic-level and token-level CoT -> https://huggingface.co/papers/2505.00703
Introduces T2I-R1, a text-to-image generation model that uses semantic-level CoT for prompt planning and token-level CoT for pixel-level generation, while BiCoT-GRPO coordinates them both

4. Speculative CoT (SCoT) -> https://huggingface.co/papers/2504.19095
SCoT drafts multiple reasoning paths with a lightweight draft model, selects the best, and uses the target model for correction - all this to reduce latency by 48–66%

5. Collaborative CoT (Co-CoT) -> https://huggingface.co/papers/2504.17091
Breaks reasoning into blocks that users can inspect, modify, and re-run, promoting active engagement. An adaptation mechanism aligns outputs with diverse cognitive styles and user goals

6. XS-CoT -> https://huggingface.co/papers/2504.20835
A cross-lingual framework that integrates speech-to-text translation into reasoning, using a semi-implicit CoT approach to compress intermediate tokens. This improves non-core-language responses by up to 45%

Read further in the comments 👇

If you liked this, also subscribe to the Turing Post -> https://www.turingpost.com/subscribe
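The core idea behind all of these variants is the same classic CoT recipe: prepend a worked exemplar with explicit intermediate reasoning steps, then elicit the model's answer in the same style. A minimal sketch of that baseline (not from any of the papers above; `build_cot_prompt`, `extract_answer`, and the exemplar text are illustrative names of my own):

```python
# Classic few-shot Chain-of-Thought prompting, sketched without any model call.
# The exemplar shows explicit step-by-step reasoning; the prompt asks the model
# to continue in the same style, and the "The answer is" marker makes the final
# answer easy to parse out of the generated reasoning.

COT_EXEMPLAR = (
    "Q: A shop sells pens at 3 for $2. How much do 12 pens cost?\n"
    "A: Let's think step by step. 12 pens is 12 / 3 = 4 groups of 3 pens. "
    "Each group costs $2, so the total is 4 * 2 = $8. The answer is 8.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a reasoning exemplar and elicit step-by-step reasoning."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA: Let's think step by step."

def extract_answer(model_output: str) -> str:
    """Pull the final answer after the conventional 'The answer is' marker."""
    marker = "The answer is"
    tail = model_output.rsplit(marker, 1)[-1]
    return tail.strip().strip(".")

prompt = build_cot_prompt("A train travels 60 km in 1.5 hours. What is its speed in km/h?")
print(extract_answer("The speed is 60 / 1.5 = 40 km/h. The answer is 40."))  # -> 40
```

The methods in the list mostly vary what happens around this loop: which exemplars are shown (Chain-of-Defensive-Thought), how long the reasoning trace is (Hybrid-CoT), or which model drafts it (Speculative CoT).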
posted an update 1 day ago

Organizations

Turing Post, Journalists on Hugging Face, Social Post Explorers, Hugging Face Discord Community, Sandbox

Kseniase's activity

published an article 8 days ago
What is MoE 2.0? Update Your Knowledge about Mixture-of-experts
By Kseniase and 1 other • 5

published an article about 1 month ago
Topic 33: Slim Attention, KArAt, XAttention and Multi-Token Attention Explained – What's Really Changing in Transformers?
By Kseniase and 1 other • 14

published an article about 2 months ago
What is Qwen-Agent framework? Inside the Qwen family
By Kseniase and 1 other • 10

published an article about 2 months ago
🌁#92: Fight for Developers and the Year of Orchestration
By Kseniase • 5

published an article about 2 months ago
🦸🏻#14: What Is MCP, and Why Is Everyone – Suddenly! – Talking About It?
By Kseniase • 229

published an article about 2 months ago
How to Reduce Memory Use in Reasoning Models
By Kseniase and 1 other • 14

published an article about 2 months ago
🌁#91: We are failing in AI literacy
By Kseniase and 1 other • 3

published an article about 2 months ago
🌁#90: Why AI's Reasoning Tests Keep Failing Us
By Kseniase • 9

published an article about 2 months ago
🦸🏻#13: Action! How AI Agents Execute Tasks with UI and API Tools
By Kseniase • 8

published an article about 2 months ago
🦸🏻#12: How Do Agents Learn from Their Own Mistakes? The Role of Reflection in AI
By Kseniase • 6

published an article 2 months ago
Everything You Need to Know about Knowledge Distillation
By Kseniase and 1 other • 22

published an article 2 months ago
🌁#89: AI in Action: How AI Engineers, Self-Optimizing Models, and Humanoid Robots Are Reshaping 2025
By Kseniase • 4

published an article 2 months ago
🦸🏻#11: How Do Agents Plan and Reason?
By Kseniase • 12

published an article 2 months ago
Topic 28: What is Mixture-of-Mamba?
By Kseniase and 1 other • 3

published an article 3 months ago
🌁#88: Can DeepSeek Inspire Global Collaboration?
By Kseniase • 3

published an article 3 months ago
🦸🏻#10: Does Present-Day GenAI Actually Reason?
By Kseniase • 7

published an article 3 months ago
Topic 27: What are Chain-of-Agents and Chain-of-RAG?
By Kseniase and 1 other • 13