Sandbox


projects-sandbox's activity

fdaudens posted an update 3 days ago
This is the story of how open source AI created a $3M business for a news company:

On the GAIN blog, Clare Spencer tells how a Danish software engineer found OpenAI's Whisper model and turned it into Good Tape. It's now generating $3M ARR for the news service Zetland.

Great playbook on how to build a good product:
- The idea came from a software engineer, Jakob Steinn, who not only spotted a new model but also listened to feedback from his colleagues in the newsroom (he thought they would use it for translation, but they were more interested in transcription in Danish)
- They built iteratively: from running the model in the terminal, to a notebook, to a full-fledged web interface
- They didn't just wrap the API. They rebuilt the transcription engine from scratch, moved it to TPUs for 45-second processing of hour-long audio, and added EU-based data sovereignty

Now Good Tape has 2.5M users worldwide, with only 30-35% being journalists.
Small languages (Danish, Finnish, Croatian, Hebrew) were underserved by existing tools - suddenly there's a "very very big market" when you put them together.

This shows how open source AI can solve real workflow problems and create sustainable businesses. Sometimes the best opportunities emerge from solving your own daily problems.
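
For context, the kind of terminal/notebook starting point described above takes only a few lines with the open-source openai-whisper package (a minimal sketch - the file name and model size are illustrative, not Good Tape's actual code):

import whisper  # pip install openai-whisper

# Load a Whisper checkpoint and transcribe a Danish audio file
model = whisper.load_model("large-v2")
result = model.transcribe("interview.mp3", language="da")
print(result["text"])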

Worth a read: https://generative-ai-newsroom.com/how-a-danish-news-service-made-a-profit-with-its-transcription-tool-285bc05b7cf9
Kseniase posted an update 5 days ago
13 Awesome MCP Servers

MCP changed how agents connect with tools.

After writing the most-read explanation of MCP on Hugging Face (https://huggingface.co/blog/Kseniase/mcp), we chose these 13 awesome MCP servers that you can work with:

1. Agentset MCP -> https://github.com/agentset-ai/mcp-server
For quick, efficient building of intelligent, doc-based apps using the open-source Agentset platform for RAG

2. GitHub MCP Server -> https://github.com/github/github-mcp-server
Integrates GitHub APIs into your workflow, letting you build AI tools and apps that interact with GitHub's ecosystem

3. arXiv MCP -> https://github.com/andybrandt/mcp-simple-arxiv
Allows working with research papers on arXiv through effective search and access to their metadata, abstracts, and links

4. MCP Run Python -> https://github.com/pydantic/pydantic-ai/tree/main/mcp-run-python
Runs Python code in a sandbox via Pyodide in Deno, so it stays isolated from the rest of the operating system

5. Safe Local Python Executor -> https://github.com/maxim-saplin/mcp_safe_local_python_executor
A lightweight tool for running LLM-generated Python code locally, using Hugging Face’s LocalPythonExecutor (from smolagents framework) and exposing it via MCP for AI assistant integration

6. Cursor MCP Installer -> https://github.com/matthewdcage/cursor-mcp-installer
Automatically adds MCP servers to Cursor for development convenience

7. Basic Memory -> https://memory.basicmachines.co/docs/introduction
A knowledge management system that connects to LLMs and lets you build a persistent semantic graph from your conversations with AI agents
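
To give a feel for how a client talks to any of these servers, here's a minimal sketch using the official mcp Python SDK over stdio (the launch command and args are placeholders - check each server's README for the real ones):

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch an MCP server as a subprocess and list the tools it exposes
    params = StdioServerParameters(command="npx", args=["some-mcp-server"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())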

Read further in the comments 👇

If you like it, also subscribe to the Turing Post: https://www.turingpost.com/subscribe
fdaudens posted an update 9 days ago
🎵 Dream come true for content creators! TIGER AI can extract voice, effects & music from ANY audio file 🤯
This lightweight model uses frequency band-split technology to separate speech like magic. Kudos to @fffiloni for the amazing demo! fffiloni/TIGER-audio-extraction
fdaudens posted an update 10 days ago
Just completed the AI Agents course and wow, that capstone project really makes you understand how to build agents that can handle real-world complexity!

The final project uses the GAIA dataset - your agent has to solve tasks like analyzing Excel files, processing audio recordings, answering questions about YouTube videos, and diving into research papers. These aren't toy examples - it's the messy, multimodal stuff agents need to handle in practice.

Whether you’re just getting started with agents or want to go deeper with tools like LangChain, LlamaIndex, and SmolAgents, this course has tons of useful stuff. A few key insights:
- Code agents are incredibly versatile once you get the architecture right
- The sweet spot is finding the right balance of guidance vs autonomy for each use case
- Once the logic clicks, the possibilities really are endless - it's like letting LLMs break free from the chatbox
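
To make "code agents" concrete, here's a minimal sketch with smolagents (note: the model class is HfApiModel in older releases and InferenceClientModel in newer ones, and the query is just an example):

from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

# A code agent writes and executes Python to answer the query,
# with web search available as a tool
agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=HfApiModel())
agent.run("Which team won the 2024 UEFA European Championship?")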

The course is free and the certification deadline is July 1st, 2025.

The Hugging Face team built something special here. If you're tired of AI that impresses in demos but fails in practice, this is your path to building agents that actually deliver. https://huggingface.co/learn/agents-course/unit0/introduction

Best part? There's the MCP course next!
Kseniase posted an update 12 days ago
12 Types of JEPA

JEPA, or Joint Embedding Predictive Architecture, is an approach to building AI models introduced by Yann LeCun. It differs from generative transformers by predicting the representation of a missing or future part of the input, rather than the next token or pixel. This encourages conceptual understanding, not just low-level pattern matching, so JEPA can teach AI to reason abstractly.
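
To make the objective concrete, here's a conceptual sketch of a JEPA-style training step (illustrative only - the encoders, predictor, and masking are stand-ins, and each variant below differs in the details):

import torch
import torch.nn.functional as F

def jepa_step(context_encoder, target_encoder, predictor, x, context_mask, target_mask):
    # Encode only the visible (context) part of the input
    z_context = context_encoder(x, context_mask)
    # A separate target encoder (typically an EMA copy) encodes the full input;
    # keep the representations of the masked target region
    with torch.no_grad():
        z_target = target_encoder(x)[:, target_mask]
    # Predict the target's representation from the context - no pixel reconstruction
    z_pred = predictor(z_context, target_mask)
    return F.smooth_l1_loss(z_pred, z_target)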

Here are 12 types of JEPA you should know about:

1. I-JEPA -> Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture (2301.08243)
A non-generative, self-supervised learning framework designed for processing images. It works by masking parts of the images and then predicting the representations of those masked parts

2. MC-JEPA -> MC-JEPA: A Joint-Embedding Predictive Architecture for Self-Supervised Learning of Motion and Content Features (2307.12698)
Simultaneously interprets video data - dynamic elements (motion) and static details (content) - using a shared encoder

3. V-JEPA -> Revisiting Feature Prediction for Learning Visual Representations from Video (2404.08471)
Presents vision models trained by predicting future video features, without pretrained image encoders, text, negative sampling, or reconstruction

4. UI-JEPA -> UI-JEPA: Towards Active Perception of User Intent through Onscreen User Activity (2409.04081)
Masks unlabeled UI sequences to learn abstract embeddings, then adds a fine-tuned LLM decoder for intent prediction.

5. Audio-based JEPA (A-JEPA) -> A-JEPA: Joint-Embedding Predictive Architecture Can Listen (2311.15830)
Masks spectrogram patches with a curriculum, encodes them, and predicts hidden representations.

6. S-JEPA -> S-JEPA: towards seamless cross-dataset transfer through dynamic spatial attention (2403.11772)
Signal-JEPA is used in EEG analysis. It adds a spatial block-masking scheme and three lightweight downstream classifiers

7. TI-JEPA -> TI-JEPA: An Innovative Energy-based Joint Embedding Strategy for Text-Image Multimodal Systems (2503.06380)
Text-Image JEPA uses self-supervised, energy-based pre-training to map text and images into a shared embedding space, improving cross-modal transfer to downstream tasks

Find more types below 👇

Also, explore the basics of JEPA in our article: https://www.turingpost.com/p/jepa

If you liked it, subscribe to the Turing Post: https://www.turingpost.com/subscribe
fdaudens posted an update 12 days ago
Two lines in your terminal and you have an AI agent running whatever model and tools you want 🤯

Just tried the new Tiny Agents in Python. Asked it which team won the Italian Serie A soccer league and to export the final table to CSV. Coolest thing is you can interact with the agent, guide it, and correct its mistakes.

The agent connected to web browsing tools, searched for Serie A standings, identified the champion, and generated a CSV export.

The setup:
pip install "huggingface_hub[mcp]>=0.32.0"
tiny-agents run


That's it. The MCP protocol handles all the tool integrations automatically - no custom APIs to write, no complex setups. Want file system access? It's already there. Need web browsing? Built in.

You can swap models, change inference providers, run local models, or add new tools just by editing a simple JSON config. You can also use Gradio Spaces as MCP servers! The entire agent is ~70 lines of Python - essentially a while loop that streams responses and executes tools. Everything is open-source. ❤️ Hugging Face
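
For example, the agent definition is just a small JSON file; here's one way to generate it from Python (field names and values follow the blog post's examples as I recall them - treat them as illustrative and check the post below for the exact schema):

import json

config = {
    "model": "Qwen/Qwen2.5-72B-Instruct",  # any chat model on the Hub
    "provider": "nebius",                  # swap inference providers freely
    "servers": [
        # an MCP server launched over stdio, e.g. a web-browsing tool
        {"type": "stdio", "command": "npx", "args": ["@playwright/mcp@latest"]}
    ],
}

with open("agent.json", "w") as f:
    json.dump(config, f, indent=2)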

Blog post: https://huggingface.co/blog/python-tiny-agents
fdaudens posted an update 14 days ago
Here’s what happens when a national institution builds its own digital intelligence: France’s Ministry of Culture just released 17K+ conversations from real users testing 30+ chatbots in French. Raw, diverse, and a goldmine for studying LLMs in the wild.

ministere-culture/comparia-conversations
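
A quick way to start digging in (a sketch assuming the standard datasets API; split names and fields may differ):

from datasets import load_dataset

# Download the conversations dataset from the Hugging Face Hub
ds = load_dataset("ministere-culture/comparia-conversations")
print(ds)
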
Kseniase posted an update 19 days ago
7 Free resources to master Multi-Agent Systems (MAS)

Collective intelligence is the future of AI. Sometimes a single agent isn't enough - a team of simpler, specialized agents working together to solve a task can be a much better option. Building Multi-Agent Systems (MAS) isn’t easy, which is why today we’re offering a list of sources that may help you master MAS:

1. CrewAI tutorials -> https://docs.crewai.com/introduction#ready-to-start-building%3F
At the end of the page you'll find a guide on how to build a crew of agents that research and analyze a topic, and create a report. Also, there are useful guides on how to build a single CrewAI agent and a workflow

2. Building with CAMEL multi-agent framework -> https://github.com/camel-ai/camel
Offers guides, cookbooks, and other useful information for building agent societies (up to a million agents) and for exploring and working with MAS

3. LangChain multi-agent tutorial -> https://langchain-ai.github.io/langgraph/agents/multi-agent/
Explains how to make agents communicate via the handoffs pattern, using two multi-agent architectures as examples: supervisor and swarm

4. "Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations" by Yoav Shoham and Kevin Leyton-Brown -> https://www.masfoundations.org/download.html
This book explains how multiple agents learn, solve shared problems, and communicate, with a focus on theory, practical examples, and algorithms, diving into game-theoretic and logical approaches

Also, check out The Turing Post article about MAS -> https://www.turingpost.com/p/mas
Our article is a good starting guide for exploring what MAS is, its components, architectures, types, top recent developments, and current trends

More resources in the comments 👇

If you liked it, also subscribe to the Turing Post: https://www.turingpost.com/subscribe
fdaudens posted an update 22 days ago
Tried something new: an AI-generated podcast that breaks down the top research paper each day. Fully automated, now live on Spotify.

I built this prototype to help keep up with the rapid pace of AI developments and, hopefully, make cutting-edge research more accessible. I don’t know about you, but just listening to a conversation about a paper really helps the content sink in for me.

This build taught me a lot about full automation. If you’re into the technical weeds: Qwen3 runs on Inference to handle the script, Kokoro does the voice, and the whole thing gets published automatically thanks to the Hugging Face Jobs API and Gradio deployment.
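
For the script step, the generation call is roughly this shape (a sketch - the model id and prompt are placeholders, not the exact production code):

from huggingface_hub import InferenceClient

client = InferenceClient()  # reads your HF token from the environment
response = client.chat_completion(
    model="Qwen/Qwen3-32B",  # placeholder model id
    messages=[{"role": "user", "content": "Turn this paper into a two-host podcast script: ..."}],
    max_tokens=2048,
)
print(response.choices[0].message.content)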

It’s not perfect yet - I’ll be monitoring for hallucinations and incoherence. The voice model still needs polish, but it’s a promising start. Would love to build this with the community - submit a PR or send feedback. It’s just a beta of an experimental idea!

Big kudos to @m-ric, whose Open NotebookLM this is based on, and to @nielsr for his terrific work making research papers more accessible.

- Podcast on Spotify: https://open.spotify.com/show/3PTucIW1w1GIkqTYm32ka7?si=c7a851f83e6d4331 (Apple Podcasts soon)
- Code: fdaudens/podcast-jobs
- Open NotebookLM: m-ric/open-notebooklm
- Also super helpful, @qgallouedec 's tutorial on HF Jobs API: qgallouedec/run-hello-world
fdaudens posted an update 25 days ago
Hey! I built an AI agent to query the FOIA API for a workshop at the Hacks/Hackers Summit in Baltimore - and you can do it too!

It's a quick proof of concept demonstrating what agents can do, how to design workflows, and how to approach the coding side. Want a fun project to learn how AI agents work? This is it.

- Slides https://docs.google.com/presentation/d/1lbf5K0yi213N7uxGnVKJdGWq2i0GayWj4vIcLkVlwD8/edit?usp=sharing
- Colab notebook https://colab.research.google.com/drive/1iw0qZyTni_6BcK0jj1x6gTfjm85NlaGv
- Gradio app: https://huggingface.co/spaces/JournalistsonHF/foia-agent
- MCP version to plug into Claude, Cursor, etc: https://huggingface.co/spaces/JournalistsonHF/foia-mcp-tools

Feel free to use the Gradio app for real FOIA requests, but also to improve it (I'm far from being a good coder) or adapt it for other countries.
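
If you adapt it, the core pattern is just wrapping an API call as an agent tool. A sketch with smolagents (the endpoint, parameter names, and key handling are illustrative, not the app's actual code):

import requests
from smolagents import tool

@tool
def search_foia_agencies(query: str) -> str:
    """Search the FOIA.gov API for agency components matching a name.

    Args:
        query: Free-text agency name to look up.
    """
    # Hypothetical endpoint/params - check the FOIA API docs and use your own api.data.gov key
    response = requests.get(
        "https://api.foia.gov/api/agency_components",
        params={"name": query},
        headers={"X-API-Key": "YOUR_DATA_GOV_KEY"},
        timeout=30,
    )
    response.raise_for_status()
    return response.text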

And shout-out to everyone who powered through the workshop! 😅
Kseniase posted an update 26 days ago
11 Alignment and Optimization Algorithms for LLMs

When we need to align models' behavior with the desired objectives, we rely on specialized algorithms that support helpfulness, accuracy, reasoning, safety, and alignment with user preferences. Much of a model’s usefulness comes from post-training optimization methods.

Here are the main optimization algorithms (both classic and new) in one place:

1. PPO (Proximal Policy Optimization) -> Proximal Policy Optimization Algorithms (1707.06347)
Clips the probability ratio to prevent the new policy from diverging too far from the old one. It helps keep everything stable

2. DPO (Direct Preference Optimization) -> Direct Preference Optimization: Your Language Model is Secretly a Reward Model (2305.18290)
It's a non-RL method where the LM itself acts as an implicit reward model. It uses a simple loss to boost the preferred answer’s probability over the less preferred one (see the sketch after this list)

3. GRPO (Group Relative Policy Optimization) -> DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models (2402.03300)
An RL method that compares a group of model outputs for the same input and updates the policy based on relative rankings. It doesn't need a separate critic model
Its latest application is Flow-GRPO, which adds online RL into flow matching models -> Flow-GRPO: Training Flow Matching Models via Online RL (2505.05470)

4. DAPO (Decoupled Clip and Dynamic sAmpling Policy Optimization) -> DAPO: An Open-Source LLM Reinforcement Learning System at Scale (2503.14476)
Decouples the clipping bounds for flexibility, introducing 4 key techniques: clip-higher (to maintain exploration), dynamic sampling (to ensure gradient updates), token-level loss (to balance learning across long outputs), and overlong reward shaping (to handle long, truncated answers)

5. Supervised Fine-Tuning (SFT) -> Training language models to follow instructions with human feedback (2203.02155)
Often the first post-pretraining step. A model is fine-tuned on a dataset of high-quality human-written input-output pairs to directly teach desired behaviors
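
As promised in point 2, here's the core of the DPO loss - a sketch operating on the summed log-probs of each answer under the policy and the frozen reference model:

import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # How much more likely the policy makes each answer vs. the reference
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Reward the margin between chosen and rejected; beta controls its strength
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()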

More in the comments 👇

If you liked it, also subscribe to the Turing Post: https://www.turingpost.com/subscribe
Kseniase posted an update about 1 month ago
10 new Chain-of-Thoughts (CoT) methods

CoT has long been one of the hottest techniques in AI thanks to its effectiveness and compelling core idea: encouraging models to solve complex problems through explicit intermediate reasoning steps. But researchers keep modifying the original CoT approach, finding tweaks that further improve LLMs' reasoning. That's what we're going to talk about today.
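
As a refresher before the list, classic zero-shot CoT is just a prompting change - a minimal illustration (the question is an arbitrary example):

# Direct prompting vs. a zero-shot CoT trigger
question = "A train travels 60 km in 1.5 hours. What is its average speed?"
direct_prompt = f"Q: {question}\nA:"
# The added trigger elicits explicit intermediate reasoning steps
cot_prompt = f"Q: {question}\nA: Let's think step by step."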

Here's a list of the 10 latest enhanced CoT approaches:

1. Chain-of-Defensive-Thought -> Chain-of-Defensive-Thought: Structured Reasoning Elicits Robustness in Large Language Models against Reference Corruption (2504.20769)
Provides a few structured, defensive reasoning exemplars to improve the robustness of LLMs

2. Hybrid-CoT -> AdaR1: From Long-CoT to Hybrid-CoT via Bi-Level Adaptive Reasoning Optimization (2504.21659)
Proposes the Adaptive Hybrid Reasoning Model (AdaR1), which combines Long- and Short-CoT, and applies bi-level preference training to select effective reasoning styles

3. Semantic-level and token-level CoT -> T2I-R1: Reinforcing Image Generation with Collaborative Semantic-level and Token-level CoT (2505.00703)
Introduces T2I-R1 text-to-image gen model, that uses semantic-level CoT for prompt planning and token-level CoT for pixel-level generation, while BiCoT-GRPO coordinates them both

4. Speculative CoT (SCoT) -> Efficient Reasoning for LLMs through Speculative Chain-of-Thought (2504.19095)
SCoT drafts multiple reasoning paths with a lightweight draft model, selects the best one, and uses the target model for correction - all this to reduce latency by 48–66%

5. Collaborative CoT (Co-CoT) -> Co-CoT: A Prompt-Based Framework for Collaborative Chain-of-Thought Reasoning (2504.17091)
Breaks reasoning into blocks that users can inspect, modify and re-run, promoting active engagement. An adaptation mechanism aligns outputs with diverse cognitive styles and user goals

6. XS-CoT -> Enhancing Non-Core Language Instruction-Following in Speech LLMs via Semi-Implicit Cross-Lingual CoT Reasoning (2504.20835)
It's a cross-lingual framework that integrates speech-to-text translation into reasoning, using a semi-implicit CoT approach to compress intermediate tokens. This improves non-core language responses by up to 45%

Read further in the comments 👇

If you liked this, also subscribe to the Turing Post -> https://www.turingpost.com/subscribe
fdaudens posted an update about 1 month ago
Forget everything you know about transcription models - NVIDIA's parakeet-tdt-0.6b-v2 changed the game for me!

Just tested it with Steve Jobs' Stanford speech and was speechless (pun intended). The video isn’t sped up.

3 things that floored me:
- Transcription took just 10 seconds for a 15-min file
- Got a CSV with perfect timestamps, punctuation & capitalization
- Stunning accuracy (correctly captured "Reed College" and other specifics)

NVIDIA also released a demo where you can click any transcribed segment to play it instantly.

The improvement is significant: number 1 on the ASR Leaderboard, 6% error rate (best in class) with complete commercial freedom (cc-by-4.0 license).

Time to update those Whisper pipelines! H/t @Steveeeeeeen for the finding!
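
Trying it yourself is short (a sketch following the model card's NeMo usage as I recall it - argument names may vary across NeMo versions):

import nemo.collections.asr as nemo_asr  # pip install -U "nemo_toolkit[asr]"

# Load the checkpoint from the Hub and transcribe a local file with timestamps
asr_model = nemo_asr.models.ASRModel.from_pretrained(model_name="nvidia/parakeet-tdt-0.6b-v2")
output = asr_model.transcribe(["speech.wav"], timestamps=True)
print(output[0].text)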

Model: nvidia/parakeet-tdt-0.6b-v2
Demo: nvidia/parakeet-tdt-0.6b-v2
ASR Leaderboard: hf-audio/open_asr_leaderboard
fdaudens posted an update about 1 month ago
I just gave my chatbots a massive upgrade: they can now generate audio from text, modify images - you name it. Here’s how:

The Gradio team shipped MCP support. That means you can plug any AI app built with it into Claude or Cursor using the Model Context Protocol (MCP) - think of it like a USB port for LLMs.

I put it to the test:
- Whipped up a quick text-to-speech app with Kokoro on HF (with an LLM riding shotgun, naturally)
- Added "mcp_server=True" in the code
- Connected it to Claude
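
Here's roughly what that looks like (a minimal sketch - the TTS function body is a placeholder for the actual Kokoro call):

import gradio as gr

def text_to_speech(text: str) -> str:
    """Convert text to speech and return a path to the audio file."""
    # Placeholder: call your TTS model (e.g. Kokoro) here and save the result
    ...

demo = gr.Interface(fn=text_to_speech, inputs="text", outputs="audio")
demo.launch(mcp_server=True)  # one flag exposes the app as an MCP server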

Now I can generate audio from any text. The possibilities are next-level: you can potentially plug any of the 500K+ AI apps on Hugging Face into your favorite LLM.

Is this the new UI for AI?

- My tts app (feel free to use/duplicate it): fdaudens/kokoro-mcp
- Blog post: https://huggingface.co/blog/gradio-mcp
fdaudens posted an update about 1 month ago
Want to know which AI models are least likely to hallucinate - and how to keep yours from spiking hallucinations by 20%?

A new benchmark called Phare, by Giskard, tested leading models across multiple languages, revealing three key findings:

1️⃣ Popular models aren't necessarily factual. Some models ranking highest in user satisfaction benchmarks like LMArena are actually more prone to hallucination.

2️⃣ The way you ask matters - a lot. When users present claims confidently ("My teacher said..."), models are 15% less likely to correct misinformation vs. neutral framing ("I heard...").

3️⃣ Telling models to "be concise" can increase hallucination by up to 20%.

What's also cool is that the full dataset is public - use it to test your own models or dive deeper into the results! H/t @davidberenstein1957 for the link.

- Study: https://www.giskard.ai/knowledge/good-answers-are-not-necessarily-factual-answers-an-analysis-of-hallucination-in-leading-llms
- Leaderboard: https://phare.giskard.ai/
- Dataset: giskardai/phare
Kseniase posted an update about 1 month ago
6 Free resources on Reinforcement Learning (RL)

RL is where the real action is now: it's the engine behind autonomous tech, robots, and the next wave of AI that thinks, moves and solves problems on its own. To stay up to date with what’s happening in RL, we offer some fresh materials:

1. "Reinforcement Learning from Human Feedback" by Nathan Lambert -> https://rlhfbook.com/
It's a short introduction to RLHF, explaining instruction tuning, reward modeling, alignment methods, synthetic data, evaluation, and more

2. "A Course in Reinforcement Learning (2nd Edition)" by Dimitri P. Bertsekas -> https://www.mit.edu/~dimitrib/RLbook.html
Explains dynamic programming (DP) and RL, diving into rollout algorithms, neural networks, policy learning, etc. It’s packed with solved exercises and real-world examples

3. "Mathematical Foundations of Reinforcement Learning" video course by Shiyu Zhao -> https://www.youtube.com/playlist?list=PLEhdbSEZZbDaFWPX4gehhwB9vJZJ1DNm8
Offers a mathematical yet friendly introduction to RL, covering Bellman Equation, value iteration, Monte Carlo learning, approximation, policy gradient, actor-critic methods, etc.
+ Check out the repo for more: https://github.com/MathFoundationRL/Book-Mathematical-Foundation-of-Reinforcement-Learning

4. "Multi-Agent Reinforcement Learning" by Stefano V. Albrecht, Filippos Christianos, and Lukas SchΓ€fer -> https://www.marl-book.com/
Covers models, core ideas of multi-agent RL (MARL) and modern approaches to combining it with deep learning

5. "Reinforcement Learning: A Comprehensive Overview" by Kevin P. Murphy -> https://arxiv.org/pdf/2412.05265
Explains RL and sequential decision making, covering value-based, policy-gradient, model-based, and multi-agent RL methods, RL+LLMs, RL+inference, and other topics

6. Our collection of free courses and books on RL -> https://huggingface.co/posts/Kseniase/884818121094439

If you liked this, also subscribe to The Turing Post: https://www.turingpost.com/subscribe
Kseniase posted an update about 2 months ago
11 new types of RAG

RAG is evolving fast, keeping pace with cutting-edge AI trends. It's becoming more agentic and smarter at navigating complex structures like hypergraphs.

Here are the 11 latest RAG types:

1. InstructRAG -> InstructRAG: Leveraging Retrieval-Augmented Generation on Instruction Graphs for LLM-Based Task Planning (2504.13032)
Combines RAG with a multi-agent framework, using a graph-based structure, an RL agent to expand task coverage, and a meta-learning agent for better generalization

2. CoRAG (Collaborative RAG) -> CoRAG: Collaborative Retrieval-Augmented Generation (2504.01883)
A collaborative framework that extends RAG to settings where clients train a shared model using a joint passage store

3. ReaRAG -> ReaRAG: Knowledge-guided Reasoning Enhances Factuality of Large Reasoning Models with Iterative Retrieval Augmented Generation (2503.21729)
It uses a Thought-Action-Observation loop to decide at each step whether to retrieve information or finalize an answer, reducing unnecessary reasoning and errors

4. MCTS-RAG -> MCTS-RAG: Enhancing Retrieval-Augmented Generation with Monte Carlo Tree Search (2503.20757)
Combines RAG with Monte Carlo Tree Search (MCTS) to help small LMs handle complex, knowledge-heavy tasks

5. Typed-RAG -> Typed-RAG: Type-aware Multi-Aspect Decomposition for Non-Factoid Question Answering (2503.15879)
Improves answers to open-ended questions by identifying question types (debate, personal experience, or comparison) and breaking them down into simpler parts

6. MADAM-RAG -> Retrieval-Augmented Generation with Conflicting Evidence (2504.13079)
A multi-agent system where models debate answers over multiple rounds and an aggregator filters noise and misinformation

7. HM-RAG -> HM-RAG: Hierarchical Multi-Agent Multimodal Retrieval Augmented Generation (2504.12330)
A hierarchical multi-agent RAG framework that uses 3 agents: one to split queries, one to retrieve across multiple data types (text, graphs and web), and one to merge and refine answers

8. CDF-RAG -> CDF-RAG: Causal Dynamic Feedback for Adaptive Retrieval-Augmented Generation (2504.12560)
Works with causal graphs and enables multi-hop causal reasoning, refining queries. It validates responses against causal pathways

To explore what is Causal AI, read our article: https://www.turingpost.com/p/causalai

Subscribe to the Turing Post: https://www.turingpost.com/subscribe

Read further 👇
fdaudens posted an update about 2 months ago
Just tested something this morning that feels kind of game-changing for how we publish, discover, and consume news with AI: connecting Claude directly to the New York Times through MCP.

Picture this: You ask Claude about a topic, and it instantly pulls verified and trusted NYT content - no more guessing if the info is accurate.

The cool part? Publishers stay in control of what they share via API, and users get fast, reliable access through the AI tools they already use. Instead of scraping random stuff off the web, we get a future where publishers actively shape how their journalism shows up in AI.

It’s still a bit technical to set up right now, but this could get super simple soon - like installing apps on your phone, but for your chatbot. And you keep the brand connection, too.

Not saying it solves everything, but it’s definitely a new way to distribute content - and maybe even find some fresh value in the middle of this whole news + AI shakeup. Early movers will have a head start.

Curious what folks think - could MCPs be a real opportunity for journalism?
Kseniase posted an update about 2 months ago
16 new research papers on inference-time scaling:

Over the last couple of weeks, a large number of studies on inference-time scaling have emerged. And it's so cool, because each new paper adds a trick to the toolbox, making LLMs more capable without needing to scale the models' parameter count.

So here are 13 new methods + 3 comprehensive studies on test-time scaling:

1. Inference-Time Scaling for Generalist Reward Modeling (2504.02495)
Probably the most popular study. It proposes boosting inference-time scalability by improving reward modeling. To enhance performance, DeepSeek-GRM uses adaptive critiques, parallel sampling, pointwise generative RM, and Self-Principled Critique Tuning (SPCT)

2. T1: Tool-integrated Self-verification for Test-time Compute Scaling in Small Language Models (2504.04718)
Allows small models to use external tools, like code interpreters and calculators, to enhance self-verification

3. Z1: Efficient Test-time Scaling with Code (2504.00810)
Proposes to train LLMs on code-based reasoning paths to make test-time scaling more efficient, limiting unnecessary tokens with a special dataset and a Shifted Thinking Window

4. GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning (2504.00891)
Introduces GenPRM, a generative PRM, that uses CoT reasoning and code verification for step-by-step judgment. With only 23K training examples, GenPRM outperforms prior PRMs and larger models

5. Can Test-Time Scaling Improve World Foundation Model? (2503.24320)
SWIFT test-time scaling framework improves World Models' performance without retraining, using strategies like fast tokenization, Top-K pruning, and efficient beam search

6. Relevance Isn't All You Need: Scaling RAG Systems With Inference-Time Compute Via Multi-Criteria Reranking (2504.07104)
Proposes REBEL for RAG systems scaling, which uses multi-criteria optimization with CoT prompting for better performance-speed tradeoffs as inference compute increases

7. φ-Decoding: Adaptive Foresight Sampling for Balanced Inference-Time Exploration and Exploitation (2503.13288)
Proposes a φ-Decoding strategy that uses foresight sampling, clustering and adaptive pruning to estimate and select optimal reasoning steps

Read further below 👇

Also, subscribe to the Turing Post https://www.turingpost.com/subscribe