Article: Good answers are not necessarily factual answers: an analysis of hallucination in leading LLMs • By davidberenstein1957 and 1 other • 9 days ago • 22
DUMP: Automated Distribution-Level Curriculum Learning for RL-based LLM Post-training Paper • 2504.09710 • Published Apr 13 • 19
RealHarm: A Collection of Real-World Language Model Application Failures Paper • 2504.10277 • Published Apr 14 • 11