Cosmetics QA Fine-Tuned Model

๋ชจ๋ธ ์„ค๋ช…

์ด ๋ชจ๋ธ์€ ์‹์•ฝ์ฒ˜์˜ '2024 ์ž์ฃผํ•˜๋Š” ์งˆ๋ฌธ์ง‘(ํ™”์žฅํ’ˆ)' e-book ๋‚ด ์งˆ์˜์‘๋‹ต ๋ฐ์ดํ„ฐ๋กœ ํŒŒ์ธํŠœ๋‹๋œ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค.

ํ•™์Šต ๋ฐ์ดํ„ฐ

Roughly 300 question-answer pairs from the e-book were augmented by paraphrasing the questions with the OpenAI API (a minimal sketch of this step follows the list below).

  • Final number of question-answer pairs: approximately 2,800
  • Fine-tuning setup: the question is given as the input and the answer is generated as the output
  • Evaluation metric: ROUGE-L (a scoring sketch appears at the end of this card)
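
The card only states that the OpenAI API was used for paraphrase augmentation; the prompt, model name (gpt-4o-mini), and number of variants in the sketch below are illustrative assumptions, not the original pipeline.

# Minimal paraphrase-augmentation sketch (assumptions: the `openai` Python
# client, the "gpt-4o-mini" model, and this prompt; the original augmentation
# code is not documented in this card).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def paraphrase(question: str, n_variants: int = 3) -> list[str]:
    """Return n_variants paraphrases of a Korean question, one per line."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-completion model would do
        messages=[
            {"role": "system",
             "content": "Paraphrase the user's Korean question while preserving its meaning. "
                        f"Return {n_variants} variants, one per line."},
            {"role": "user", "content": question},
        ],
    )
    text = response.choices[0].message.content
    return [line.strip() for line in text.splitlines() if line.strip()]

# Each paraphrased question keeps the original answer, so ~300 pairs can grow
# to a few thousand (the card reports ~2,800 final pairs).
qa_pairs = [("์ง€๋ฃจ์„ฑ ๋‘ํ”ผ์šฉ ์ƒดํ‘ธ๋กœ ๊ด‘๊ณ ํ•ด๋„ ๊ดœ์ฐฎ์€๊ฐ€์š”?",
             "์ง€๋ฃจ์„ฑ ๋‘ํ”ผ๋Š” ์งˆ๋ณ‘์„ ์•”์‹œํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ, ํ™”์žฅํ’ˆ ๊ด‘๊ณ ์— ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์€ ๋ถ€์ ์ ˆํ•ฉ๋‹ˆ๋‹ค.")]
augmented = []
for q, a in qa_pairs:
    augmented.append((q, a))
    augmented.extend((pq, a) for pq in paraphrase(q))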

Usage

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the model and tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("seokhyokang/kobart-cosmetics-qa")
tokenizer = AutoTokenizer.from_pretrained("seokhyokang/kobart-cosmetics-qa")

# Move the model to a GPU if one is available
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Generate an answer for a single question
def generate_answer(question, model, tokenizer, device="cuda"):
    """
    Generate an answer for the given question.
    """
    model.eval()
    inputs = tokenizer(
        question,
        return_tensors="pt",
        max_length=128,
        padding="max_length",
        truncation=True
    ).to(device)

    with torch.no_grad():
        outputs = model.generate(
            inputs.input_ids,
            max_length=128,
            num_beams=4,
            early_stopping=True
        )

    answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return answer
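
The function above handles one question at a time. For larger question sets, generation can be batched; the sketch below is illustrative (not part of the original card), reuses the model, tokenizer, and device loaded above, and keeps the same generation settings.

# Batched variant (illustrative sketch, not from the original card).
import torch

def generate_answers(questions, model, tokenizer, device="cuda", batch_size=8):
    """Generate answers for a list of questions, batch_size questions at a time."""
    model.eval()
    answers = []
    for start in range(0, len(questions), batch_size):
        batch = questions[start:start + batch_size]
        inputs = tokenizer(
            batch,
            return_tensors="pt",
            max_length=128,
            padding=True,        # pad to the longest question in the batch
            truncation=True,
        ).to(device)
        with torch.no_grad():
            outputs = model.generate(
                **inputs,        # input_ids and attention_mask
                max_length=128,
                num_beams=4,
                early_stopping=True,
            )
        answers.extend(tokenizer.batch_decode(outputs, skip_special_tokens=True))
    return answers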

Inference Example

test_questions = [
    "์ˆ˜์ž…ํ™”์žฅํ’ˆ์˜ ์ œ์กฐ์› ์ฃผ์†Œ๋Š” ์˜๋ฌธ ๋˜๋Š” ๊ตญ๋ฌธ ์ค‘ ๋ฌด์—‡์œผ๋กœ ๊ธฐ์žฌํ•ด์•ผ ํ•˜๋‚˜์š”?",
    "ํ™”์žฅํ’ˆ ์ œ์กฐ์‚ฌ๊ฐ€ ์ œํ’ˆ์„ ์ฑ…์ž„ํŒ๋งค์—…์ž์—๊ฒŒ ์ œ๊ณตํ•  ๋•Œ, ๋“ฑ๋ก์ด ์š”๊ตฌ๋˜๋‚˜์š”?",
    "์ง€๋ฃจ์„ฑ ๋‘ํ”ผ์šฉ ์ƒดํ‘ธ๋กœ ๊ด‘๊ณ ํ•ด๋„ ๊ดœ์ฐฎ์€๊ฐ€์š”?",
    "์šฐ์œ ํŒฉ ํ˜•ํƒœ์˜ ํ•ธ๋“œ์›Œ์‹œ ์šฉ๊ธฐ๊ฐ€ ์‹ํ’ˆ ๋ชจ๋ฐฉ ํ™”์žฅํ’ˆ์— ํ•ด๋‹น๋˜๋‚˜์š”?",
    "ํ™”์žฅํ’ˆ ํ‘œ์‹œ์‚ฌํ•ญ์œผ๋กœ ๋ฐ”์ฝ”๋“œ ๋Œ€์‹  QR์ฝ”๋“œ ํ‘œ์‹œ๊ฐ€ ๊ฐ€๋Šฅํ•œ๊ฐ€์š”?",
]

print("-" * 50)
for question in test_questions:
    answer = generate_answer(question, model, tokenizer, device)
    print(f"์งˆ๋ฌธ: {question}")
    print(f"์ƒ์„ฑ๋œ ๋‹ต๋ณ€: {answer}")
    print("-" * 50)

# Example output (questions and answers are in Korean, the model's training language;
# English glosses are added in parentheses)
# --------------------------------------------------
# Question: ์ˆ˜์ž…ํ™”์žฅํ’ˆ์˜ ์ œ์กฐ์› ์ฃผ์†Œ๋Š” ์˜๋ฌธ ๋˜๋Š” ๊ตญ๋ฌธ ์ค‘ ๋ฌด์—‡์œผ๋กœ ๊ธฐ์žฌํ•ด์•ผ ํ•˜๋‚˜์š”?
#   (EN: Should the manufacturer's address on imported cosmetics be written in English or in Korean?)
# Generated answer: ์ œ์กฐ์› ์ •๋ณด๋Š” ์‰ฝ๊ฒŒ ์ฝํžˆ๊ณ  ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋Š” ํ•œ๊ธ€๋กœ ํ‘œ์‹œํ•ด์•ผ ํ•˜๋ฉฐ, ์™ธ๊ตญ์–ด๋„ ํ•จ๊ป˜ ๊ธฐ์žฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
#   (EN: Manufacturer information must be labeled in Korean that is easy to read and understand; a foreign language may be written alongside it.)
# --------------------------------------------------
# Question: ํ™”์žฅํ’ˆ ์ œ์กฐ์‚ฌ๊ฐ€ ์ œํ’ˆ์„ ์ฑ…์ž„ํŒ๋งค์—…์ž์—๊ฒŒ ์ œ๊ณตํ•  ๋•Œ, ๋“ฑ๋ก์ด ์š”๊ตฌ๋˜๋‚˜์š”?
#   (EN: When a cosmetics manufacturer supplies products to a responsible seller, is registration required?)
# Generated answer: ์†Œ๋น„์ž์—๊ฒŒ ์ง์ ‘ ํŒ๋งคํ•˜์ง€ ์•Š์œผ๋ฉด, ์ฑ…์ž„ํŒ๋งค์—… ๋“ฑ๋ก ๋Œ€์ƒ์ด ์•„๋‹™๋‹ˆ๋‹ค.
#   (EN: If it does not sell directly to consumers, it is not subject to responsible-seller registration.)
# --------------------------------------------------
# Question: ์ง€๋ฃจ์„ฑ ๋‘ํ”ผ์šฉ ์ƒดํ‘ธ๋กœ ๊ด‘๊ณ ํ•ด๋„ ๊ดœ์ฐฎ์€๊ฐ€์š”?
#   (EN: Is it acceptable to advertise a shampoo as being for seborrheic scalp?)
# Generated answer: ์ง€๋ฃจ์„ฑ ๋‘ํ”ผ๋Š” ์งˆ๋ณ‘์„ ์•”์‹œํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ, ํ™”์žฅํ’ˆ ๊ด‘๊ณ ์— ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์€ ๋ถ€์ ์ ˆํ•ฉ๋‹ˆ๋‹ค.
#   (EN: 'Seborrheic scalp' may imply a disease, so using it in cosmetics advertising is inappropriate.)
# --------------------------------------------------
# Question: ์šฐ์œ ํŒฉ ํ˜•ํƒœ์˜ ํ•ธ๋“œ์›Œ์‹œ ์šฉ๊ธฐ๊ฐ€ ์‹ํ’ˆ ๋ชจ๋ฐฉ ํ™”์žฅํ’ˆ์— ํ•ด๋‹น๋˜๋‚˜์š”?
#   (EN: Does a milk-carton-shaped hand wash container count as a food-imitating cosmetic?)
# Generated answer: ์‹ํ’ˆ์œผ๋กœ ์˜ค์ธ๋  ๊ฐ€๋Šฅ์„ฑ์ด ์žˆ๋Š” ์šฉ๊ธฐ๋Š” ํ™”์žฅํ’ˆ๋ฒ• ์œ„๋ฐ˜์— ํ•ด๋‹นํ•  ์ˆ˜ ์žˆ์œผ๋‹ˆ ์ฃผ์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.
#   (EN: A container that could be mistaken for food may violate the Cosmetics Act, so caution is required.)
# --------------------------------------------------
# Question: ํ™”์žฅํ’ˆ ํ‘œ์‹œ์‚ฌํ•ญ์œผ๋กœ ๋ฐ”์ฝ”๋“œ ๋Œ€์‹  QR์ฝ”๋“œ ํ‘œ์‹œ๊ฐ€ ๊ฐ€๋Šฅํ•œ๊ฐ€์š”?
#   (EN: Can a QR code be displayed instead of a barcode in cosmetics labeling?)
# Generated answer: ํ™”์žฅํ’ˆ ๋ฐ”์ฝ”๋“œ ํ‘œ์‹œ ๋ฐ ๊ด€๋ฆฌ์š”๋ น์—์„œ ๊ทœ์ •ํ•˜๊ณ  ์žˆ๋Š” ๋ฐ”์ฝ”๋“œ ์ข…๋ฅ˜๋งŒ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค.
#   (EN: Only the barcode types specified in the Regulation on Cosmetics Barcode Labeling and Management may be used.)
# --------------------------------------------------
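
The card lists ROUGE-L as the evaluation metric but does not include its evaluation script. The sketch below shows how a generated answer could be scored against a reference, assuming the Hugging Face `evaluate` library; a whitespace tokenizer is passed explicitly because the default ROUGE tokenizer keeps only ASCII alphanumerics and would drop Korean text.

# Illustrative ROUGE-L scoring (assumes `pip install evaluate rouge_score`;
# not the card's original evaluation code).
import evaluate

rouge = evaluate.load("rouge")

# One question from the examples above; the answer shown in the example output
# is used here as a stand-in reference.
question = "์ˆ˜์ž…ํ™”์žฅํ’ˆ์˜ ์ œ์กฐ์› ์ฃผ์†Œ๋Š” ์˜๋ฌธ ๋˜๋Š” ๊ตญ๋ฌธ ์ค‘ ๋ฌด์—‡์œผ๋กœ ๊ธฐ์žฌํ•ด์•ผ ํ•˜๋‚˜์š”?"
reference = "์ œ์กฐ์› ์ •๋ณด๋Š” ์‰ฝ๊ฒŒ ์ฝํžˆ๊ณ  ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋Š” ํ•œ๊ธ€๋กœ ํ‘œ์‹œํ•ด์•ผ ํ•˜๋ฉฐ, ์™ธ๊ตญ์–ด๋„ ํ•จ๊ป˜ ๊ธฐ์žฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค."

prediction = generate_answer(question, model, tokenizer, device)

# The default ROUGE tokenizer keeps only [a-z0-9], which would drop Korean
# characters, so a simple whitespace tokenizer is passed instead.
score = rouge.compute(
    predictions=[prediction],
    references=[reference],
    tokenizer=lambda text: text.split(),
)
print(score["rougeL"])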