---
library_name: transformers
language: tr
license: mit
widget:
- text: Mustafa Kemal Atatürk 19 Mayıs 1919'da Samsun'a çıktı.
base_model:
- artiwise-ai/modernbert-base-tr-uncased
---

# Turkish Named Entity Recognition (NER) Model

This model is a fine-tuned version of "artiwise-ai/modernbert-base-tr-uncased" trained on a reviewed version of the well-known Turkish NER dataset (https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).

# Fine-tuning parameters:

```python
task = "ner"
model_checkpoint = "artiwise-ai/modernbert-base-tr-uncased"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 8192
learning_rate = 2e-5
num_train_epochs = 5
weight_decay = 0.01
```

A minimal sketch of how these hyperparameters can be wired into a `Trainer` run is given at the end of this card.

# How to use:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model = AutoModelForTokenClassification.from_pretrained("akdeniz27/modernbert-base-tr-uncased-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/modernbert-base-tr-uncased-ner")
# tokenizer.model_max_length = 512  # The model max_length (8192 by default) can be lowered here if needed
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="first")
ner("your text here")
```

Please refer to https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html for details on entity grouping with the `aggregation_strategy` parameter; a short grouping example is also given at the end of this card.

# Reference test results:

* accuracy: 0.9910922551637875
* f1: 0.9323197128075177
* precision: 0.9292780467270049
* recall: 0.9353813559322034
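
# Fine-tuning sketch:

The following is a minimal, illustrative sketch of how the hyperparameters listed above could be plugged into a Hugging Face `Trainer` for token classification. The two-sentence toy dataset, the output directory name, and the label alignment helper are placeholders for illustration only; this is not the exact script or data used to train this model.

```python
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForTokenClassification,
    DataCollatorForTokenClassification,
    TrainingArguments,
    Trainer,
)

model_checkpoint = "artiwise-ai/modernbert-base-tr-uncased"
label_list = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]
id2label = {i: label for i, label in enumerate(label_list)}
label2id = {label: i for i, label in id2label.items()}

# Toy examples standing in for the real NER corpus (words are pre-split, labels are label ids).
raw_dataset = Dataset.from_dict({
    "tokens": [
        ["Mustafa", "Kemal", "Atatürk", "19", "Mayıs", "1919'da", "Samsun'a", "çıktı", "."],
        ["Ankara", "Türkiye'nin", "başkentidir", "."],
    ],
    "ner_tags": [
        [1, 2, 2, 0, 0, 0, 5, 0, 0],
        [5, 5, 0, 0],
    ],
})

tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

def tokenize_and_align_labels(examples):
    # Tokenize pre-split words and propagate each word's label to its first sub-word;
    # special tokens and remaining sub-words get -100 so they are ignored by the loss.
    tokenized = tokenizer(
        examples["tokens"],
        truncation=True,
        max_length=8192,
        is_split_into_words=True,
    )
    all_labels = []
    for i, labels in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        previous_word_id = None
        aligned = []
        for word_id in word_ids:
            if word_id is None or word_id == previous_word_id:
                aligned.append(-100)
            else:
                aligned.append(labels[word_id])
            previous_word_id = word_id
        all_labels.append(aligned)
    tokenized["labels"] = all_labels
    return tokenized

tokenized_dataset = raw_dataset.map(
    tokenize_and_align_labels, batched=True, remove_columns=raw_dataset.column_names
)

model = AutoModelForTokenClassification.from_pretrained(
    model_checkpoint, num_labels=len(label_list), id2label=id2label, label2id=label2id
)

args = TrainingArguments(
    output_dir="modernbert-base-tr-uncased-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=5,
    weight_decay=0.01,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_dataset,
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```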
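
# Entity grouping example:

A short, runnable example of the grouping behaviour referenced above, using the widget sentence from this card. The printed fields (`entity_group`, `word`, `score`) are the standard keys returned by the token-classification pipeline when an aggregation strategy is set.

```python
from transformers import pipeline

# Load the published checkpoint directly into a token-classification pipeline.
ner = pipeline(
    "token-classification",
    model="akdeniz27/modernbert-base-tr-uncased-ner",
    aggregation_strategy="first",
)

results = ner("Mustafa Kemal Atatürk 19 Mayıs 1919'da Samsun'a çıktı.")
for entity in results:
    # With aggregation_strategy="first", sub-word pieces are merged back into whole
    # entity spans; with "none", every sub-word token is reported separately.
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```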