Commit 3fec28b · Parent(s): d181b55
Update README.md

README.md CHANGED
This model is able to distinguish between positive and negative content.

This model was fine-tuned on Natsume Souseki's works, for example Kokoro, Bocchan, and Sanshiro.

# what is Luke?[1]

LUKE (Language Understanding with Knowledge-based Embeddings) is a new pre-trained contextualized representation of words and entities based on the transformer. LUKE treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. LUKE adopts an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores.

LUKE achieves state-of-the-art results on five popular NLP benchmarks including SQuAD v1.1 (extractive question answering), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), TACRED (relation classification), and Open Entity (entity typing).
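
For intuition, here is a minimal sketch of the word-and-entity inputs described above, using the MLukeTokenizer and LukeModel classes from transformers; the checkpoint id studio-ousia/luke-japanese-base is an assumption, not something this README specifies.

```python
# Minimal sketch of LUKE's independent word and entity tokens.
# Assumption: the base luke-japanese checkpoint id; any LUKE checkpoint
# should behave the same way.
from transformers import LukeModel, MLukeTokenizer

repo_id = "studio-ousia/luke-japanese-base"  # assumed checkpoint id
tokenizer = MLukeTokenizer.from_pretrained(repo_id)
model = LukeModel.from_pretrained(repo_id)

text = "夏目漱石は「こころ」を執筆した。"
entity_spans = [(0, 4)]  # character span covering 夏目漱石
inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)

print(outputs.last_hidden_state.shape)         # contextualized word tokens
print(outputs.entity_last_hidden_state.shape)  # contextualized entity tokens
```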

luke-japanese is the Japanese version of LUKE, a knowledge-enhanced pre-trained Transformer model for words and entities. LUKE treats words and entities as independent tokens and outputs contextualized representations of them. See the GitHub repository for details.

This model includes Wikipedia entity embeddings, which are not used in typical NLP tasks. For tasks that take only words as input, use the lite version.
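
As a hedged sketch, loading such a lite variant could look like the following; the id studio-ousia/luke-japanese-base-lite is assumed here rather than stated in this README.

```python
# Sketch: loading a word-only "lite" variant without entity embeddings.
# The repo id below is an assumption; substitute the lite checkpoint you use.
from transformers import AutoModel, AutoTokenizer

lite_id = "studio-ousia/luke-japanese-base-lite"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(lite_id)
model = AutoModel.from_pretrained(lite_id)
```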

# how to use

The output is pre.logits, a tensor of the form tensor([[x, y]]). Computing num = softmax(pre.logits) turns the logits into probabilities: num[0] is the probability that the input is negative, and num[1] is the probability that it is positive.
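
Putting this together, a minimal inference sketch might look like the following. It assumes this checkpoint loads with AutoModelForSequenceClassification and uses the placeholder <this-model-id> for the repository id.

```python
# Sketch: sentiment inference with this model.
# <this-model-id> is a placeholder; replace it with this repository's id.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "<this-model-id>"  # placeholder for this repository's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "この小説はとても面白かった。"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    pre = model(**inputs)          # pre.logits has the shape tensor([[x, y]])
num = torch.softmax(pre.logits, dim=-1)[0]
print("negative:", num[0].item())  # probability of negative
print("positive:", num[1].item())  # probability of positive
```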
41 |
-------------------------------------------------------------
|
42 |
|
43 |
import torch
|