This RoBERTa-based model ("MindMiner") classifies the degree of mind perception expressed in English-language text into two classes:
- high mind perception
- low mind perception
The model was fine-tuned on 997 manually annotated open-ended survey responses. It reaches a hold-out accuracy of 75.5%, compared with a 50% random-chance baseline on the balanced hold-out set.
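A minimal usage sketch with the Hugging Face `transformers` pipeline is shown below; the model identifier is a placeholder and should be replaced with this repository's actual id.

```python
# Minimal sketch: binary mind-perception classification via the transformers pipeline.
# "your-username/MindMiner" is a placeholder; substitute this model's repository id.
from transformers import pipeline

classifier = pipeline("text-classification", model="your-username/MindMiner")

texts = [
    "My smart speaker really understands what I want.",
    "It's just a machine executing commands.",
]

for text, prediction in zip(texts, classifier(texts)):
    # Each prediction is a dict with the predicted class label and its confidence score.
    print(f"{prediction['label']} ({prediction['score']:.2f}): {text}")
```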
Hartmann, J., Bergner, A., & Hildebrand, C. (2023). MindMiner: Uncovering Linguistic Markers of Mind Perception as a New Lens to Understand Consumer-Smart Object Relationships. Journal of Consumer Psychology, Forthcoming.