This model takes an English Wikipedia article and tries to decide whether the topic is kid-friendly. If it is not, it assigns the article to one of the following categories: crime, terrorism, political, mental health, controversial, disaster, sex, drugs.
The model has not been vetted for accuracy or bias. Anecdotally it seems to perform well, though it may be overly sensitive and it is likely biased. It is mainly meant for prototypes where you do not yet know how you need to filter, before building a more complete model.
Labels for training this model were generated through a combination of human curation and an LLM.
How to Get Started with the Model
Use the code below to get started with the model.
>>> from transformers import pipeline
>>> p = pipeline("text-classification", model='derenrich/enwiki-kid-friendly-classifier')
>>> p("Armenian genocide")
[{'label': 'crime', 'score': 0.9999203681945801}]
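The quick-start call above classifies just the article title. To classify fuller article text, one option is to pull the text from the Wikipedia REST API first. The sketch below continues the session above; the requests dependency, the summary endpoint, and the truncation=True argument are assumptions, not part of this model card:

>>> import requests
>>> def classify_article(title):
...     # Fetch the article's plain-text summary from the Wikipedia REST API.
...     url = "https://en.wikipedia.org/api/rest_v1/page/summary/" + title.replace(" ", "_")
...     text = requests.get(url, timeout=10).json()["extract"]
...     # truncation=True keeps long articles within the model's maximum input length.
...     return p(text, truncation=True)
>>> classify_article("Armenian genocide")  # e.g. [{'label': 'crime', 'score': ...}]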
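Since the intended use is prototype filtering, a minimal filter can treat any article whose top label lands in the unsafe category set as not kid-friendly. A sketch continuing the same session; the exact label strings and the 0.5 threshold are assumptions, so check the model's config for the real label names:

>>> UNSAFE = {"crime", "terrorism", "political", "mental health",
...           "controversial", "disaster", "sex", "drugs"}
>>> def is_kid_friendly(text, threshold=0.5):
...     top = p(text)[0]  # highest-scoring label for this input
...     # Treat the article as unsafe only when an unsafe label wins confidently.
...     return top["label"] not in UNSAFE or top["score"] < threshold
>>> is_kid_friendly("Armenian genocide")
False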
Model Details
Model Description
- Developed by: Daniel Erenrich
- Funded by: WMF
- Model type: BERT
- Language(s) (NLP): English
- License: Public Domain
- Finetuned from model: answerdotai/ModernBERT-base
Model Card Contact
Daniel Erenrich [email protected]