Ksenia Se (Kseniase)
10 new Chain-of-Thought (CoT) methods

CoT has long been one of the hottest techniques in AI, thanks to its effectiveness and compelling core idea: encouraging models to solve complex problems through explicit intermediate reasoning steps. But researchers usually modify the original CoT approach, finding tweaks that further improve LLMs' reasoning. That's what we're going to talk about today.

Here's a list of the 10 latest enhanced CoT approaches:

1. Chain-of-Defensive-Thought -> Chain-of-Defensive-Thought: Structured Reasoning Elicits Robustness in Large Language Models against Reference Corruption (2504.20769)
Provides a few structured, defensive reasoning exemplars in the prompt to improve the robustness of LLMs against corrupted references
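A minimal sketch of what such defensive exemplar prompting can look like (the exemplar text and all names here are my own illustration, not the paper's; plug in any chat-completion client):

```python
# Hypothetical chain-of-defensive-thought prompt builder. The exemplar shows
# the model reasoning explicitly about whether a reference is corrupted.

DEFENSIVE_EXEMPLAR = """\
Question: In what year did the Eiffel Tower open?
Reference: "The Eiffel Tower opened in 1789."
Reasoning: The reference says 1789, but that conflicts with well-known
history (it was built for the 1889 World's Fair), so the reference is
likely corrupted. I answer from the reliable fact instead.
Answer: 1889
"""

def defensive_prompt(question: str, reference: str) -> str:
    # Prepend the defensive exemplar, then pose the real task in the same
    # Question/Reference/Reasoning/Answer format.
    return (
        "Use the reference, but first check whether it may be corrupted.\n\n"
        + DEFENSIVE_EXEMPLAR
        + f'\nQuestion: {question}\nReference: "{reference}"\nReasoning:'
    )

print(defensive_prompt("Who wrote Hamlet?", "Hamlet was written by Marlowe."))
```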

2. Hybrid-CoT -> AdaR1: From Long-CoT to Hybrid-CoT via Bi-Level Adaptive Reasoning Optimization (2504.21659)
Proposes AdaR1, an adaptive hybrid reasoning model that combines Long- and Short-CoT and applies bi-level preference training to select effective reasoning styles
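At inference time the idea reduces to routing between reasoning styles. A toy sketch, where a placeholder `difficulty` score stands in for AdaR1's learned, preference-trained policy:

```python
# Toy routing sketch, not AdaR1 itself: the paper merges long- and short-CoT
# models and trains the routing with bi-level preference optimization; here
# a hypothetical difficulty estimate picks the style per question.

def hybrid_cot(question: str, model, difficulty, threshold: float = 0.5) -> str:
    if difficulty(question) > threshold:
        style = "Think through this step by step in detail."  # Long-CoT
    else:
        style = "Answer directly with only brief reasoning."  # Short-CoT
    return model(f"{question}\n{style}")

# Stub usage: an easy question routes to Short-CoT.
answer = hybrid_cot(
    "What is 17 * 24?",
    model=lambda p: f"<model output for: {p!r}>",
    difficulty=lambda q: 0.2,
)
```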

3. Semantic-level and token-level CoT -> T2I-R1: Reinforcing Image Generation with Collaborative Semantic-level and Token-level CoT (2505.00703)
Introduces T2I-R1, a text-to-image generation model that uses semantic-level CoT for prompt planning and token-level CoT for pixel-level generation, with BiCoT-GRPO coordinating the two
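Conceptually, the two CoT levels decompose as below; this is only an illustrative split with placeholder callables, since the real model produces both levels itself and trains them jointly with BiCoT-GRPO:

```python
def t2i_two_level_cot(prompt: str, plan_fn, generate_fn):
    # Semantic-level CoT: reason about objects, layout, and style first.
    plan = plan_fn(f"Plan the scene for: {prompt}")
    # Token-level CoT: the patch-by-patch image-token generation itself,
    # conditioned on the semantic plan.
    return generate_fn(prompt, plan)
```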

4. Speculative CoT (SCoT) -> Efficient Reasoning for LLMs through Speculative Chain-of-Thought (2504.19095)
SCoT drafts multiple reasoning paths with a lightweight draft model, selects the best one, and uses the target model for correction, reducing latency by 48–66%
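A rough sketch of that speculative loop (`draft_model`, `target_model`, and `score` are hypothetical callables, not the paper's API):

```python
# Speculative CoT sketch: a small model drafts several reasoning paths
# cheaply, a scorer picks the most promising one, and the expensive target
# model only verifies and corrects that single path instead of reasoning
# from scratch; that one-pass correction is where the latency savings come from.

def speculative_cot(question: str, draft_model, target_model, score,
                    n_drafts: int = 4) -> str:
    drafts = [draft_model(question) for _ in range(n_drafts)]  # cheap drafts
    best = max(drafts, key=score)                              # select one
    return target_model(
        f"Question: {question}\n"
        f"Draft reasoning: {best}\n"
        "Check the draft, correct any mistakes, and give the final answer."
    )
```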

5. Collaborative CoT (Co-CoT) -> Co-CoT: A Prompt-Based Framework for Collaborative Chain-of-Thought Reasoning (2504.17091)
Breaks reasoning into blocks that users can inspect, modify and re-run, promoting active engagement. An adaptation mechanism aligns outputs with diverse cognitive styles and user goals
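One way to picture the editable blocks (illustrative data structures, not the paper's code): editing an upstream block invalidates everything downstream, which is then re-run.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Block:
    prompt: str
    output: Optional[str] = None

@dataclass
class CoCoTSession:
    blocks: list = field(default_factory=list)

    def edit(self, i: int, new_output: str) -> None:
        self.blocks[i].output = new_output  # user override of block i
        for b in self.blocks[i + 1:]:
            b.output = None                 # later blocks become stale

    def run(self, llm: Callable[[str], str]) -> None:
        context = ""
        for b in self.blocks:
            if b.output is None:            # only (re-)run stale blocks
                b.output = llm(context + b.prompt)
            context += f"{b.prompt}\n{b.output}\n"
```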

6. XS-CoT -> Enhancing Non-Core Language Instruction-Following in Speech LLMs via Semi-Implicit Cross-Lingual CoT Reasoning (2504.20835)
A cross-lingual framework that integrates speech-to-text translation into reasoning, using a semi-implicit CoT approach to compress intermediate tokens. This improves non-core-language responses by up to 45%
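The described pipeline, made explicit as stages (every name below is a placeholder; the actual system is a single speech LLM that does all of this within one generation):

```python
# Staged sketch of the XS-CoT idea: translate the non-core-language speech
# into the core language inside the reasoning trace, reason there, compress
# the intermediate chain (the "semi-implicit" part), then answer in the
# user's language.

def xs_cot(audio, transcribe_translate, reason, compress, respond,
           user_lang: str) -> str:
    core_text = transcribe_translate(audio, target_lang="en")  # cross-lingual step
    chain = compress(reason(core_text))  # keep the signal, drop most tokens
    return respond(core_text, chain, output_lang=user_lang)
```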

Read further in the comments 👇

If you liked this, also subscribe to the Turing Post -> https://www.turingpost.com/subscribe
