
Jian Hu

lwpyh

AI & ML interests

Knowledge Transfer, Semi-supervised Learning

Organizations

None yet

Posts 1

Is Hallucination Always Harmful? Unlike traditional approaches that treat hallucinations as purely detrimental, our NeurIPS'24 work proposes a novel perspective: hallucinations as intrinsic prior knowledge. Derived from the commonsense knowledge acquired during pre-training, these hallucinations are not merely noise but a source of task-relevant information. By leveraging hallucinations as prior knowledge, we can effectively mine difficult samples without customized prompts, streamlining tasks such as camouflaged sample detection and medical image segmentation.

Check out our paper for more insights and detailed methodology: https://huggingface.co/papers/2408.15205

datasets

None public yet