Ichikishima is a model fine-tuned to split user-entered prompts into search keywords for historical study, making database searches easier.

It was designed to be integrated into larger systems (for example, a RAG pipeline) so that the keyword-extraction step can be handled flexibly.

Use this model

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model (published as BF16 Safetensors weights).
model_id = "MiraiShiftLab/Gemma-2-Swallow-Ichikishima-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

system_instruction = (
    "You are Ichikishima, a history‐support AI. "
    "You convert user input into search keywords for a RAG system. "
    "Please output keywords that are historical events, figures, or country names. "
    "If there are multiple keywords, separate them with commas."
)
user_question = "Which warlord was instrumental in the formation of the za?"

prompt = (
    f"{system_instruction}\n\n"
    "<start_of_turn>user\n"
    f"{user_question}\n"
    "<end_of_turn>\n"
    "<start_of_turn>model\n"
)
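
# The <start_of_turn>/<end_of_turn> markers follow the Gemma-2 chat turn format;
# here the system instruction is simply prepended as plain text ahead of the user turn.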

# Tokenize the prompt and move the tensors to the model's device.
inputs = tokenizer(
    prompt,
    return_tensors="pt",
)
inputs = {k: v.to(model.device) for k, v in inputs.items()}

# Greedy decoding; 32 new tokens is enough for a short keyword list.
outputs = model.generate(
    **inputs,
    max_new_tokens=32,
    do_sample=False,
    eos_token_id=tokenizer.eos_token_id,
)

# Strip the prompt tokens and decode only the newly generated text.
generated = outputs[0][inputs["input_ids"].shape[-1]:]
keywords = tokenizer.decode(generated, skip_special_tokens=True)

print(keywords)
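
The model returns its keywords as a single comma-separated string, so a typical next step is to split that string into a list and hand the entries to whatever search backend the surrounding system uses. The sketch below shows that post-processing; search_history_db is a hypothetical placeholder for your own database or RAG query, not part of this model.

# Split the comma-separated output into individual keywords.
keyword_list = [kw.strip() for kw in keywords.split(",") if kw.strip()]
print(keyword_list)

# Hypothetical integration point: query each keyword against your own
# database or vector store (search_history_db is a placeholder name).
# results = [search_history_db(kw) for kw in keyword_list]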


Model: MiraiShiftLab/Gemma-2-Swallow-Ichikishima-2b
Model size: 3.2B params (Safetensors)
Tensor type: BF16