Ksenia Se
Kseniase
Β·
AI & ML interests
None yet
Recent Activity
replied to
their
post
1 day ago
10 new Chain-of-Thoughts (CoT) methods
CoT has long been one of the hottest techniques in AI thanks to its effectiveness and compelling core idea: encouraging models to solve complex problems through explicit intermediate reasoning steps. But usually researchers modify original CoT approach, finding tips that further improve LLMs' reasoning. That's what we're going to talk about today.
Here's a list of 10 latest enhanced CoT approaches:
1. Chain-of-Defensive-Thought -> https://huggingface.co/papers/2504.20769
Provides a few structured, defensive reasoning exemplars to improve the robustness of LLMs
2. Hybrid-CoT -> https://huggingface.co/papers/2504.21659
Proposes using Adaptive Hybrid Reasoning Model (AdaR1) that combines Long- and Short-CoT, and applying bi-level preference training to select effective reasoning styles
3. Semantic-level and token-level CoT -> https://huggingface.co/papers/2505.00703
Introduces T2I-R1 text-to-image gen model, that uses semantic-level CoT for prompt planning and token-level CoT for pixel-level generation, while BiCoT-GRPO coordinates them both
4. Speculative CoT (SCoT) -> https://huggingface.co/papers/2504.19095
SCoT drafts multiple reasoning paths with a lightweight draft, selects the best, and uses the target model for correction - all this to reduce latency by 48β66%
5. Collaborative CoT (Co-CoT) -> https://huggingface.co/papers/2504.17091
Breaks reasoning into blocks that users can inspect, modify and re-run, promoting active engagement. An adaptation mechanism aligns outputs with diverse cognitive styles and user goals
6. XS-CoT -> https://huggingface.co/papers/2504.20835
It's a cross-lingual framework that integrates speech-to-text translation into reasoning, using a semi-implicit CoT approach to compress intermediate tokens. This improves non-core language responses by up to 45%
Read further in the comments π
If you liked this, also subscribe to the Turing Post -> https://www.turingpost.com/subscribe
posted
an
update
1 day ago
10 new Chain-of-Thoughts (CoT) methods
CoT has long been one of the hottest techniques in AI thanks to its effectiveness and compelling core idea: encouraging models to solve complex problems through explicit intermediate reasoning steps. But usually researchers modify original CoT approach, finding tips that further improve LLMs' reasoning. That's what we're going to talk about today.
Here's a list of 10 latest enhanced CoT approaches:
1. Chain-of-Defensive-Thought -> https://huggingface.co/papers/2504.20769
Provides a few structured, defensive reasoning exemplars to improve the robustness of LLMs
2. Hybrid-CoT -> https://huggingface.co/papers/2504.21659
Proposes using Adaptive Hybrid Reasoning Model (AdaR1) that combines Long- and Short-CoT, and applying bi-level preference training to select effective reasoning styles
3. Semantic-level and token-level CoT -> https://huggingface.co/papers/2505.00703
Introduces T2I-R1 text-to-image gen model, that uses semantic-level CoT for prompt planning and token-level CoT for pixel-level generation, while BiCoT-GRPO coordinates them both
4. Speculative CoT (SCoT) -> https://huggingface.co/papers/2504.19095
SCoT drafts multiple reasoning paths with a lightweight draft, selects the best, and uses the target model for correction - all this to reduce latency by 48β66%
5. Collaborative CoT (Co-CoT) -> https://huggingface.co/papers/2504.17091
Breaks reasoning into blocks that users can inspect, modify and re-run, promoting active engagement. An adaptation mechanism aligns outputs with diverse cognitive styles and user goals
6. XS-CoT -> https://huggingface.co/papers/2504.20835
It's a cross-lingual framework that integrates speech-to-text translation into reasoning, using a semi-implicit CoT approach to compress intermediate tokens. This improves non-core language responses by up to 45%
Read further in the comments π
If you liked this, also subscribe to the Turing Post -> https://www.turingpost.com/subscribe
View all activity
Organizations
Kseniase's activity
-
-
-
-
-
-
-
-
-
-
-
view article
What is MoE 2.0? Update Your Knowledge about Mixture-of-experts
published
an
article
about 1 month ago
view article
Topic 33: Slim Attention, KArAt, XAttention and Multi-Token Attention Explained β Whatβs Really Changing in Transformers?
published
an
article
about 2 months ago
view article
What is Qwen-Agent framework? Inside the Qwen family
published
an
article
about 2 months ago
view article
π#92: Fight for Developers and the Year of Orchestration
published
an
article
about 2 months ago
view article
π¦Έπ»#14: What Is MCP, and Why Is Everyone β Suddenly!β Talking About It?
published
an
article
about 2 months ago
published
an
article
about 2 months ago
published
an
article
about 2 months ago
view article
π#90: Why AIβs Reasoning Tests Keep Failing Us
published
an
article
about 2 months ago
view article
π¦Έπ»#13: Action! How AI Agents Execute Tasks with UI and API Tools
published
an
article
about 2 months ago
view article
π¦Έπ»#12: How Do Agents Learn from Their Own Mistakes? The Role of Reflection in AI
view article
Everything You Need to Know about Knowledge Distillation
view article
π#89: AI in Action: How AI Engineers, Self-Optimizing Models, and Humanoid Robots Are Reshaping 2025
view article
π#88: Can DeepSeek Inspire Global Collaboration?
view article
π¦Έπ»#10: Does Present-Day GenAI Actually Reason?
view article
Topic 27: What are Chain-of-Agents and Chain-of-RAG?