is_exist (bool, 2 classes) | url (string, 28–74 chars) | created_at (string, 20 chars) | description (string, 5–348 chars) | pdf_text (string, 98–67.1k chars) | readme_text (string, 0–81.3k chars) | nlp_taxonomy_classifier_labels (sequence, 0–11 labels) | awesome_japanese_nlp_labels (sequence, 0–2 labels)
---|---|---|---|---|---|---|---|
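The rows below follow this schema. As a rough illustration of how a dump with these columns can be consumed, here is a minimal sketch using pandas; the file name repos.jsonl and the JSON-lines layout are assumptions for illustration only, not part of the dataset.

```python
# Minimal sketch (assumption: the dump has been exported as JSON lines
# to a hypothetical file "repos.jsonl" with the columns listed above).
import pandas as pd

df = pd.read_json("repos.jsonl", lines=True)

# Keep repositories that still exist and have a non-empty README.
existing = df[df["is_exist"] & (df["readme_text"].str.len() > 0)]

# Count how often each NLP taxonomy label appears across repositories.
label_counts = existing["nlp_taxonomy_classifier_labels"].explode().value_counts()
print(label_counts.head(10))
```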
true | https://github.com/takapy0210/nlplot | 2020-05-06T15:09:24Z | Visualization Module for Natural Language Processing | takapy0210 / nlplot
nlplot: Analysis and visualization module for Natural Language Processing 📈
Facilitates the visualization of natural language processing and provides quicker analysis
You can draw the following graphs:
1. N-gram bar chart
2. N-gram tree Map
3. Histogram of the word count
4. wordcloud
5. co-occurrence networks
6. sunburst chart
(Tested in English and Japanese)
📝 nlplot
Description
Requirement
I've posted about specific usage on this blog. (Japanese)
The sample code is also available in a Kaggle kernel. (English)
The column to be analyzed must be a space-delimited string
   text
0  Think rich look poor
1  When you come to a roadblock, take a detour
2  When it is dark enough, you can see the stars
3  Never let your memories be greater than your dreams
4  Victory is sweetest when you’ve known defeat
Installation
pip install nlplot
Quick start - Data Preparation
# sample data
target_col = "text"
texts = [
"Think rich look poor",
"When you come to a roadblock, take a detour",
"When it is dark enough, you can see the stars",
"Never let your memories be greater than your dreams",
"Victory is sweetest when you’ve known defeat"
]
df = pd.DataFrame({target_col: texts})
df.head()
Quick start - Python API
import nlplot
import pandas as pd
import plotly
from plotly.subplots import make_subplots
from plotly.offline import iplot
import matplotlib.pyplot as plt
%matplotlib inline
# target_col as a list type or a string separated by a space.
npt = nlplot.NLPlot(df, target_col='text')
# Stopword calculations can be performed.
stopwords = npt.get_stopword(top_n=30, min_freq=0)
# 1. N-gram bar chart
fig_unigram = npt.bar_ngram(
title='uni-gram',
xaxis_label='word_count',
yaxis_label='word',
ngram=1,
top_n=50,
width=800,
height=1100,
color=None,
horizon=True,
stopwords=stopwords,
verbose=False,
save=False,
)
fig_unigram.show()
fig_bigram = npt.bar_ngram(
title='bi-gram',
xaxis_label='word_count',
yaxis_label='word',
ngram=2,
top_n=50,
width=800,
height=1100,
color=None,
horizon=True,
stopwords=stopwords,
verbose=False,
save=False,
)
fig_bigram.show()
# 2. N-gram tree Map
fig_treemap = npt.treemap(
title='Tree map',
ngram=1,
top_n=50,
width=1300,
height=600,
stopwords=stopwords,
verbose=False,
save=False
)
fig_treemap.show()
# 3. Histogram of the word count
fig_histgram = npt.word_distribution(
title='word distribution',
xaxis_label='count',
yaxis_label='',
width=1000,
height=500,
color=None,
template='plotly',
bins=None,
save=False,
)
fig_histgram.show()
# 4. wordcloud
fig_wc = npt.wordcloud(
width=1000,
height=600,
max_words=100,
max_font_size=100,
colormap='tab20_r',
stopwords=stopwords,
mask_file=None,
save=False
)
plt.figure(figsize=(15, 25))
plt.imshow(fig_wc, interpolation="bilinear")
plt.axis("off")
plt.show()
# 5. co-occurrence networks
npt.build_graph(stopwords=stopwords, min_edge_frequency=10)
# The number of nodes and edges to which this output is plotted.
# If this number is too large, plotting will take a long time, so adjust the [min_edge_frequency] well.
# >> node_size:70, edge_size:166
fig_co_network = npt.co_network(
title='Co-occurrence network',
sizing=100,
node_size='adjacency_frequency',
color_palette='hls',
width=1100,
height=700,
save=False
)
iplot(fig_co_network)
Plotly is used to plot the figure
https://plotly.com/python/
networkx is used to calculate the co-occurrence network
https://networkx.github.io/documentation/stable/tutorial.html
wordcloud uses the following fonts
TBD
# 6. sunburst chart
fig_sunburst = npt.sunburst(
title='sunburst chart',
colorscale=True,
color_continuous_scale='Oryel',
width=1000,
height=800,
save=False
)
fig_sunburst.show()
# other
# The original data frame of the co-occurrence network can also be accessed
display(
npt.node_df.head(), npt.node_df.shape,
npt.edge_df.head(), npt.edge_df.shape
)
Document
Test
cd tests
pytest
Other
| # Carp
<img src="resources/logo/carp_logo_300_c.png" alt="Logo" align="right"/>
[](https://github.com/carp-lang/Carp/actions?query=workflow%3A%22Linux+CI%22)
[](https://github.com/carp-lang/Carp/actions?query=workflow%3A"MacOS+CI")
[](https://github.com/carp-lang/Carp/actions?query=workflow%3A"Windows+CI")
<i>WARNING! This is a research project and a lot of information here might become outdated and misleading without any explanation. Don't use it for anything important just yet!</i>
<i>[Version 0.5.5 of the language is out!](https://github.com/carp-lang/Carp/releases/)</i>
## About
Carp is a programming language designed to work well for interactive and performance sensitive use cases like games, sound synthesis and visualizations.
The key features of Carp are the following:
* Automatic and deterministic memory management (no garbage collector or VM)
* Inferred static types for great speed and reliability
* Ownership tracking enables a functional programming style while still using mutation of cache-friendly data structures under the hood
* No hidden performance penalties – allocation and copying are explicit
* Straightforward integration with existing C code
* Lisp macros, compile time scripting and a helpful REPL
## Learn more
* [The Compiler Manual](docs/Manual.md) - how to install and use the compiler
* [Carp Language Guide](docs/LanguageGuide.md) - syntax and semantics of the language
* [Core Docs](http://carp-lang.github.io/carp-docs/core/core_index.html) - documentation for our standard library
[](https://gitter.im/eriksvedang/Carp?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
## A Very Small Example
```clojure
(load-and-use SDL)
(defn tick [state]
(+ state 10))
(defn draw [app rend state]
(bg rend &(rgb (/ @state 2) (/ @state 3) (/ @state 4))))
(defn main []
(let [app (SDLApp.create "The Minimalistic Color Generator" 400 300)
state 0]
(SDLApp.run-with-callbacks &app SDLApp.quit-on-esc tick draw state)))
```
For instructions on how to run Carp code, see [this document](docs/HowToRunCode.md).
For more examples, check out the [examples](examples) directory.
## Maintainers
- [Erik Svedäng](https://github.com/eriksvedang)
- [Veit Heller](https://github.com/hellerve)
- [Jorge Acereda](https://github.com/jacereda)
- [Scott Olsen](https://github.com/scolsen)
- [Tim Dévé](https://github.com/TimDeve)
## Contributing
Thanks to all the [awesome people](https://github.com/carp-lang/Carp/graphs/contributors) who have contributed to Carp over the years!
We are always looking for more help – check out the [contributing guide](docs/Contributing.md) to get started.
## License
Copyright 2016 - 2021 Erik Svedäng
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
The regular expression implementation as found in src/carp_regex.h is
Copyright (C) 1994-2017 Lua.org, PUC-Rio under the terms of the MIT license.
Details can be found in the License file LUA_LICENSE.
| [
"Natural Language Interfaces",
"Structured Data in NLP",
"Syntactic Text Processing",
"Visual Data in NLP"
] | [] |
true | https://github.com/chezou/Mykytea-python | 2011-07-15T08:34:12Z | Python wrapper for KyTea | chezou / Mykytea-python
Mykytea-python is a Python wrapper module for KyTea, a general text analysis toolkit. KyTea is developed by
KyTea Development Team.
Detailed information on KyTea can be found at: http://www.phontron.com/kytea
You can install Mykytea-python via pip .
You don't have to install KyTea anymore before installing Mykytea-python when you install it by using wheel on
PyPI.
KyTea wrapper for Python
Installation
Install Mykytea-python via pip
pip install kytea
You still need a KyTea model on your machine.
Build Mykytea-python from source
If you want to build from source, you need to install KyTea.
Then, run
make
After make, you can install Mykytea-python by running
make install
If you fail to make, please try to install SWIG and run
swig -c++ -python -I/usr/local/include mykytea.i
Or if you still fail on Mac OS X, run with some variables
$ ARCHFLAGS="-arch x86_64" CC=gcc CXX=g++ make
If you compiled KyTea with clang, you only need ARCHFLAGS.
Or, if you use macOS and Homebrew, you can use KYTEA_DIR to pass the directory of KyTea.
brew install kytea
KYTEA_DIR=$(brew --prefix) make all
How to use it?
Here is the example code to use Mykytea-python.
import Mykytea
def showTags(t):
for word in t:
out = word.surface + "\t"
for t1 in word.tag:
for t2 in t1:
for t3 in t2:
out = out + "/" + str(t3)
out += "\t"
out += "\t"
print(out)
def list_tags(t):
def convert(t2):
return (t2[0], type(t2[1]))
return [(word.surface, [[convert(t2) for t2 in t1] for t1 in word.tag]) for word in t]
# Pass arguments for KyTea as the following:
opt = "-model /usr/local/share/kytea/model.bin"
mk = Mykytea.Mykytea(opt)
s = "今日はいい天気です。"
# Fetch segmented words
for word in mk.getWS(s):
    print(word)
# Show analysis results
print(mk.getTagsToString(s))
# Fetch first best tag
t = mk.getTags(s)
showTags(t)
# Show all tags
tt = mk.getAllTags(s)
showTags(tt)
License
MIT License
| # KyTea wrapper for Python
[](https://badge.fury.io/py/kytea)
[](https://github.com/sponsors/chezou)
Mykytea-python is a Python wrapper module for KyTea, a general text analysis toolkit.
KyTea is developed by KyTea Development Team.
Detailed information on KyTea can be found at:
http://www.phontron.com/kytea
## Installation
### Install Mykytea-python via pip
You can install Mykytea-python via `pip`.
```sh
pip install kytea
```
You don't have to install KyTea anymore before installing Mykytea-python when you install it by using wheel on PyPI.
You still need a KyTea model on your machine.
### Build Mykytea-python from source
If you want to build from source, you need to install KyTea.
Then, run
```sh
make
```
After make, you can install Mykytea-python by running
```sh
make install
```
If you fail to make, please try to install SWIG and run
```sh
swig -c++ -python -I/usr/local/include mykytea.i
```
Or if you still fail on Mac OS X, run with some variables
```sh
$ ARCHFLAGS="-arch x86_64" CC=gcc CXX=g++ make
```
If you compiled KyTea with clang, you only need ARCHFLAGS.
Or, if you use macOS and Homebrew, you can use `KYTEA_DIR` to pass the directory of KyTea.
```sh
brew install kytea
KYTEA_DIR=$(brew --prefix) make all
```
## How to use it?
Here is the example code to use Mykytea-python.
```python
import Mykytea
def showTags(t):
for word in t:
out = word.surface + "\t"
for t1 in word.tag:
for t2 in t1:
for t3 in t2:
out = out + "/" + str(t3)
out += "\t"
out += "\t"
print(out)
def list_tags(t):
def convert(t2):
return (t2[0], type(t2[1]))
return [(word.surface, [[convert(t2) for t2 in t1] for t1 in word.tag]) for word in t]
# Pass arguments for KyTea as the following:
opt = "-model /usr/local/share/kytea/model.bin"
mk = Mykytea.Mykytea(opt)
s = "今日はいい天気です。"
# Fetch segmented words
for word in mk.getWS(s):
print(word)
# Show analysis results
print(mk.getTagsToString(s))
# Fetch first best tag
t = mk.getTags(s)
showTags(t)
# Show all tags
tt = mk.getAllTags(s)
showTags(tt)
```
## License
MIT License
| [
"Morphology",
"Robustness in NLP",
"Syntactic Text Processing",
"Tagging",
"Text Segmentation"
] | [] |
true | https://github.com/nicolas-raoul/kakasi-java | 2012-01-18T08:30:56Z | Kanji transliteration to hiragana/katakana/romaji, in Java | nicolas-raoul / kakasi-java
UPDATE: I just created Jakaroma, its kanji transliteration is much more accurate than Kakasi-java so please use it instead, thanks! Also open source.
Kakasi-java: Convert Japanese kanji into romaji. See also http://kakasi.namazu.org
Originally written by Tomoyuki Kawao. Forked from the last code found at http://blog.kenichimaehashi.com/?article=13048363750 If you know any more recent version, please let me know by email nicolas.raoul at gmail
To build just run: ant
Usage:
java -Dkakasi.home=. -jar lib/kakasi.jar [-JH | -JK | -Ja] [-HK | -Ha] [-KH | -Ka]
[-i<input-encoding>] [-o<output-encoding>]
[-p] [-f] [-c] [-s] [-b]
[-r{hepburn|kunrei}] [-C | -U] [-w]
[dictionary1 [dictionary2 [,,,]]]
Character Set Conversions:
-JH: kanji to hiragana
-JK: kanji to katakana
-Ja: kanji to romaji
-HK: hiragana to katakana
-Ha: hiragana to romaji
-KH: katakana to hiragana
-Ka: katakana to romaji
Options:
-i: input encoding
-o: output encoding
-p: list all readings (with -J option)
-f: furigana mode (with -J option)
-c: skip whitespace chars within jukugo
-s: insert separate characters
-b: output buffer is not flushed when a newline character is written
-r: romaji conversion system
-C: romaji Capitalize
-U: romaji Uppercase
-w: wakachigaki mode
Example:
java -Dkakasi.home=. -jar lib/kakasi.jar -Ja
国際財務報告基準
kokusaizaimuhoukokukijun
Original documentation (in Japanese): http://nicolas-raoul.github.com/kakasi-java
| UPDATE: I just created [Jakaroma](https://github.com/nicolas-raoul/jakaroma), its kanji transliteration is much more accurate than Kakasi-java so please use it instead, thanks! Also open source.
Kakasi-java
Convert Japanese kanji into romaji
See also http://kakasi.namazu.org
Originally written by Tomoyuki Kawao
Forked from the last code found at http://blog.kenichimaehashi.com/?article=13048363750
If you know any more recent version, please let me know by email nicolas.raoul at gmail
To build just run: `ant`
Usage:
java -Dkakasi.home=. -jar lib/kakasi.jar [-JH | -JK | -Ja] [-HK | -Ha] [-KH | -Ka]
[-i<input-encoding>] [-o<output-encoding>]
[-p] [-f] [-c] [-s] [-b]
[-r{hepburn|kunrei}] [-C | -U] [-w]
[dictionary1 [dictionary2 [,,,]]]
Character Set Conversions:
-JH: kanji to hiragana
-JK: kanji to katakana
-Ja: kanji to romaji
-HK: hiragana to katakana
-Ha: hiragana to romaji
-KH: katakana to hiragana
-Ka: katakana to romaji
Options:
-i: input encoding
-o: output encoding
-p: list all readings (with -J option)
-f: furigana mode (with -J option)
-c: skip whitespace chars within jukugo
-s: insert separate characters
-b: output buffer is not flushed when a newline character is written
-r: romaji conversion system
-C: romaji Capitalize
-U: romaji Uppercase
-w: wakachigaki mode
Example:
java -Dkakasi.home=. -jar lib/kakasi.jar -Ja
国際財務報告基準
kokusaizaimuhoukokukijun
Original documentation (in Japanese): http://nicolas-raoul.github.com/kakasi-java
| [
"Syntactic Text Processing",
"Text Normalization"
] | [] |
true | https://github.com/miurahr/pykakasi | 2012-08-14T14:48:14Z | Lightweight converter from Japanese Kana-kanji sentences into Kana-Roman. | miurahr / pykakasi
pykakasi is a Python Natural Language Processing
(NLP) library to transliterate hiragana, katakana and
kanji (Japanese text) into rōmaji (Latin/Roman
alphabet). It can handle characters in NFC form.
Its algorithms are based on the kakasi library, which is
written in C.
Install (from PyPI): pip install pykakasi
Install (from conda-forge): conda install -c conda-forge pykakasi
Documentation available on readthedocs
This project has given up GitHub. (See Software
Freedom Conservancy's Give Up GitHub site for details)
You can now find this project at
https://codeberg.org/miurahr/pykakasi instead.
Any use of this project's code by GitHub Copilot, past or
present, is done without our permission. We do not
consent to GitHub's use of this project's code in Copilot.
Join us; you can Give Up GitHub too!
| ========
Pykakasi
========
Overview
========
.. image:: https://readthedocs.org/projects/pykakasi/badge/?version=latest
:target: https://pykakasi.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
.. image:: https://badge.fury.io/py/pykakasi.png
:target: http://badge.fury.io/py/Pykakasi
:alt: PyPI version
.. image:: https://github.com/miurahr/pykakasi/workflows/Run%20Tox%20tests/badge.svg
:target: https://github.com/miurahr/pykakasi/actions?query=workflow%3A%22Run+Tox+tests%22
:alt: Run Tox tests
.. image:: https://dev.azure.com/miurahr/github/_apis/build/status/miurahr.pykakasi?branchName=master
:target: https://dev.azure.com/miurahr/github/_build?definitionId=13&branchName=master
:alt: Azure-Pipelines
.. image:: https://coveralls.io/repos/miurahr/pykakasi/badge.svg?branch=master
:target: https://coveralls.io/r/miurahr/pykakasi?branch=master
:alt: Coverage status
.. image:: https://raw.githubusercontent.com/vshymanskyy/StandWithUkraine/main/badges/StandWithUkraine.svg
:target: https://github.com/vshymanskyy/StandWithUkraine/blob/main/docs/README.md
``pykakasi`` is a Python Natural Language Processing (NLP) library to transliterate *hiragana*, *katakana* and *kanji* (Japanese text) into *rōmaji* (Latin/Roman alphabet). It can handle characters in NFC form.
Its algorithms are based on the `kakasi`_ library, which is written in C.
* Install (from `PyPI`_): ``pip install pykakasi``
* Install (from `conda-forge`_): ``conda install -c conda-forge pykakasi``
* `Documentation available on readthedocs`_
.. _`PyPI`: https://pypi.org/project/pykakasi/
.. _`conda-forge`: https://github.com/conda-forge/pykakasi-feedstock
.. _`kakasi`: http://kakasi.namazu.org/
.. _`Documentation available on readthedocs`: https://pykakasi.readthedocs.io/en/latest/index.html
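A small usage sketch (based on the ``pykakasi`` 2.x ``convert`` API; see the documentation linked above for the authoritative interface)::

    import pykakasi

    # Convert a mixed kanji/kana sentence and print each token's readings.
    kks = pykakasi.kakasi()
    result = kks.convert("自然言語処理")
    for item in result:
        print(item["orig"], item["hira"], item["kana"], item["hepburn"])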
Give Up GitHub
--------------
This project has given up GitHub. (See Software Freedom Conservancy's `Give Up GitHub`_ site for details)
You can now find this project at https://codeberg.org/miurahr/pykakasi instead.
Any use of this project's code by GitHub Copilot, past or present, is done without our permission. We do not consent to GitHub's use of this project's code in Copilot.
Join us; you can `Give Up GitHub`_ too!
.. _`Give Up GitHub`: https://GiveUpGitHub.org
.. image:: https://sfconservancy.org/img/GiveUpGitHub.png
:width: 400
:alt: Logo of the GiveUpGitHub campaign
| [
"Syntactic Text Processing",
"Text Normalization"
] | [] |
true | https://github.com/yohokuno/jsc | 2012-08-23T23:39:40Z | Joint source channel model for Japanese Kana Kanji conversion, Chinese pinyin input and CJE mixed input. | yohokuno / jsc
JSC is an implementation of the joint source channel (joint n-gram) model with a monotonic decoder.
It can be used for machine transliteration, Japanese kana-kanji conversion, Chinese pinyin input, English word segmentation, or pronunciation inference.
JSC requires Unix, gcc, python, and marisa-trie. If you want to use RPC server, you also need libevent.
To install JSC, type these commands into your console.
The jsc-decode command converts a source string into a target string via the joint source channel model. You can provide queries through standard input, line by line.
JSC: Joint Source Channel Model and Decoder
A Joint Source-Channel Model for Machine Transliteration, Li Haizhou, Zhang Min, Su Jian.
http://acl.ldc.upenn.edu/acl2004/main/pdf/121_pdf_2-col.pdf
Requirement
marisa-trie 0.2.0 or later
http://code.google.com/p/marisa-trie/
libevent 2.0 or later
http://libevent.org/
Install
$ ./waf configure [--prefix=INSTALL_DIRECTORY]
$ ./waf build
$ sudo ./waf install
Usage
jsc-decode
The jsc-build command builds model files in binary format from an n-gram file in text format.
The jsc-server command provides an RPC server via a simple TCP protocol. You can provide queries through the telnet command, line by line.
For Japanese Kana Kanji conversion, a model is provided at data/japanese directory. By default, both romaji and hiragana input are allowed.
For Japanese pronunciation inference, a model is provided at data/japanese-reverse directory.
For Chinese Pinyin input, a model is provided at data/chinese/ directory.
For Chinese Hanzi-to-Pinyin Conversion, a model is provided at data/chinese-reverse/ directory.
options:
-d directory: specify data directory or prefix (default: ./)
-f format: specify format (segment [default], plain, debug)
-t table: specify table [romaji] mode (both [default], on, off)
-l: turn off sentence-beginning/ending label
jsc-build
options:
-d directory: specify data directory or prefix (default: ./)
-m model: specify model file name (default: ngram)
-t trie_num: specify trie number in marisa-trie (default: 3)
-r: build reverse model
jsc-server
options:
-d directory: specify data directory or prefix (default: ./)
-f format: specify format (segment [default], plain, debug)
-t table: specify table [romaji] mode (both [default], on, off)
-p port: specify port number (default: 40714)
-l: turn off sentence-beginning/ending label
Sample Applications
Japanese Kana Kanji Conversion
$ ./build/jsc-decode -d data/japanese/
わたしのなまえはなかのです。
わたし の 名前 は 中野 です 。
arayurugenjitsuwosubetejibunnnohouhenejimagetanoda
あらゆる 現実 を 全て 自分 の ほう へ ネジ 曲げ た の だ
Japanese Pronunciation Inference
$ ./build/jsc-decode -d data/japanese-reverse/
魔理沙は大変なものを盗んでいきました
ま りさ は たいへん な もの を ぬす ん で い き ま し た
Chinese Pinyin Input
$ ./build/jsc-decode -d data/chinese/
woaiziranyuyanchuli
我 爱 自然 语言 处理
zhejianshitagegehaibuzhidaone
这 件 事 她 哥哥 海部 知道 呢
Chinese Hanzi-to-Pinyin Conversion
$ ./build/jsc-decode -d data/chinese-reverse/
汉字拼音转换
hanzi pinyin zhuanhuan
For English input, a model is provided at data/english/ directory.
For English/Japanese/Chinese mixed input, a model is provided at data/mixed/ directory. The language is detected automatically.
Top directory contains these files and directories:
N-gram file should be SRILM format.
Target string and source string should be coupled with character '/'; e.g. "私/わたし"
Language  F-score    Size
Japanese  0.937      10MB
Chinese   0.895      9MB
English   not ready  9MB
Mixed     not ready  27MB
Please refer to this paper if needed.
English word segmentation / automatic capitalization
$ ./build/jsc-decode -d data/english/
alicewasbeginningtogetverytiredofsittingbyhersisteronthebank
Alice was beginning to get very tired of sitting by her sister on the bank
istandheretodayhumbledbythetaskbeforeusgratefulforthetrustyouhavebestowedmindfulofthesacrificesbornebyourancestors
I Stand here today humbled by the task before us grateful for the trust you have bestowed mindful of the
sacrifices borne by our ancestors
Mixed Input
$ ./build/jsc-decode -d data/mixed/
thisisapencil
This is a pencil
kyouhayoitenkidesune
今日 は 良い 天気 です ね
woshizhongguoren
我 是 中国 人
thisistotemohaochi!
This is とても 好吃 !
Data Structure
Directories
README.md this file
build/ built by waf automatically
data/ model files
src/ source and header files for C++
tools/ command tools by C++
waf waf build script
wscript waf settings
File format
http://www.speech.sri.com/projects/srilm/
Reference
Accuracy
Paper
Yoh Okuno and Shinsuke Mori, An Ensemble Model of Word-based and Character-based Models for Japanese and Chinese
Input Method, Workshop on Advances in Text Input Methods, 2012.
JSC: Joint Source Channel Model and Decoder
===
**JSC** is an implementation of the joint source channel (joint n-gram) model with a monotonic decoder.
A Joint Source-Channel Model for Machine Transliteration, Li Haizhou, Zhang Min, Su Jian.
http://acl.ldc.upenn.edu/acl2004/main/pdf/121_pdf_2-col.pdf
It can be used for machine transliteration, Japanese kana-kanji conversion, Chinese pinyin input, English word segmentation or pronunciation inference.
Requirement
---
JSC requires Unix, gcc, python, and marisa-trie. If you want to use RPC server, you also need libevent.
marisa-trie 0.2.0 or later
http://code.google.com/p/marisa-trie/
libevent 2.0 or later
http://libevent.org/
Install
---
To install JSC, type these commands into your console.
$ ./waf configure [--prefix=INSTALL_DIRECTORY]
$ ./waf build
$ sudo ./waf install
Usage
---
### jsc-decode
The **jsc-decode** command converts a source string into a target string via the joint source channel model.
You can provide queries through standard input, line by line.
options:
-d directory: specify data directory or prefix (default: ./)
-f format: specify format (segment [default], plain, debug)
-t table: specify table [romaji] mode (both [default], on, off)
-l: turn off sentence-beginning/ending label
### jsc-build
The **jsc-build** command builds model files in binary format from an n-gram file in text format.
options:
-d directory: specify data directory or prefix (default: ./)
-m model: specify model file name (default: ngram)
-t trie_num: specify trie number in marisa-trie (default: 3)
-r: build reverse model
### jsc-server
The **jsc-server** command provides an RPC server via a simple TCP protocol.
You can provide queries through the telnet command, line by line.
options:
-d directory: specify data directory or prefix (default: ./)
-f format: specify format (segment [default], plain, debug)
-t table: specify table [romaji] mode (both [default], on, off)
-p port: specify port number (default: 40714)
-l: turn off sentence-beginning/ending label
Sample Applications
---
### Japanese Kana Kanji Conversion
For Japanese Kana Kanji conversion, a model is provided at data/japanese directory. By default, both romaji and hiragana input are allowed.
$ ./build/jsc-decode -d data/japanese/
わたしのなまえはなかのです。
わたし の 名前 は 中野 です 。
arayurugenjitsuwosubetejibunnnohouhenejimagetanoda
あらゆる 現実 を 全て 自分 の ほう へ ネジ 曲げ た の だ
### Japanese Pronunciation Inference
For Japanese pronunciation inference, a model is provided at data/japanese-reverse directory.
$ ./build/jsc-decode -d data/japanese-reverse/
魔理沙は大変なものを盗んでいきました
ま りさ は たいへん な もの を ぬす ん で い き ま し た
### Chinese Pinyin Input
For Chinese Pinyin input, a model is provided at data/chinese/ directory.
$ ./build/jsc-decode -d data/chinese/
woaiziranyuyanchuli
我 爱 自然 语言 处理
zhejianshitagegehaibuzhidaone
这 件 事 她 哥哥 海部 知道 呢
### Chinese Hanzi-to-Pinyin Conversion
For Chinese Hanzi-to-Pinyin Conversion, a model is provided at data/chinese-reverse/ directory.
$ ./build/jsc-decode -d data/chinese-reverse/
汉字拼音转换
hanzi pinyin zhuanhuan
### English word segmentation / automatic capitalization
For English input, a model is provided at data/english/ directory.
$ ./build/jsc-decode -d data/english/
alicewasbeginningtogetverytiredofsittingbyhersisteronthebank
Alice was beginning to get very tired of sitting by her sister on the bank
istandheretodayhumbledbythetaskbeforeusgratefulforthetrustyouhavebestowedmindfulofthesacrificesbornebyourancestors
I Stand here today humbled by the task before us grateful for the trust you have bestowed mindful of the sacrifices borne by our ancestors
### Mixed Input
For English/Japanese/Chinese mixed input, a model is provided at data/mixed/ directory. The language is detected automatically.
$ ./build/jsc-decode -d data/mixed/
thisisapencil
This is a pencil
kyouhayoitenkidesune
今日 は 良い 天気 です ね
woshizhongguoren
我 是 中国 人
thisistotemohaochi!
This is とても 好吃 !
Data Structure
---
### Directories
Top directory contains these files and directories:
README.md this file
build/ built by waf automatically
data/ model files
src/ source and header files for C++
tools/ command tools by C++
waf waf build script
wscript waf settings
### File format
N-gram file should be SRILM format.
http://www.speech.sri.com/projects/srilm/
Target string and source string should be coupled with character '/'; e.g. "私/わたし"
Reference
---
### Accuracy
<table>
<tr><th>Language</th><th>F-score</th><th>Size</th></tr>
<tr><td>Japanese</td><td>0.937</td><td>10MB</td></tr>
<tr><td>Chinese</td><td>0.895</td><td>9MB</td></tr>
<tr><td>English</td><td>not ready</td><td>9MB</td></tr>
<tr><td>Mixed</td><td>not ready</td><td>27MB</td></tr>
</table>
### Paper
Please refer to this paper if needed.
Yoh Okuno and Shinsuke Mori, An Ensemble Model of Word-based and Character-based Models for Japanese and Chinese Input Method, Workshop on Advances in Text Input Methods, 2012.
http://yoh.okuno.name/pdf/wtim2012.pdf
| [
"Language Models",
"Multilinguality",
"Syntactic Text Processing"
] | [] |
true | https://github.com/lovell/hepburn | 2013-06-28T10:06:51Z | Node.js module for converting Japanese Hiragana and Katakana script to, and from, Romaji using Hepburn romanisation | lovell / hepburn
Node.js module for converting Japanese Hiragana and Katakana
script to, and from, Romaji using Hepburn romanisation.
Based partly on Takaaki Komura's kana2hepburn.
Hepburn
Install
npm install hepburn
Usage
var hepburn = require("hepburn");
fromKana(string)
Converts a string containing Kana, either Hiragana or Katakana,
to Romaji.
In this example romaji1 will have the value HIRAGANA ,
romaji2 will have the value KATAKANA .
Converts a string containing Romaji to Hiragana.
In this example hiragana will have the value ひらがな.
Converts a string containing Romaji to Katakana.
In this example katakana will have the value カタカナ and
tokyo will have the value トーキョー.
Cleans up a romaji string, changing old romaji forms into the
more-modern Hepburn form (for further processing). Generally
matches the style used by Wapro romaji. A larger guide to
modern romaji conventions was used in building this method.
What this method fixes:
Incorrect usage of the letter M. For example "Shumman"
should be written as "Shunman".
Changing usage of NN into N', for example "Shunnei"
becomes "Shun'ei".
Converting the usage of OU and OH (to indicate a long
vowel) into OO.
var romaji1 = hepburn.fromKana("ひらがな");
var romaji2 = hepburn.fromKana("カタカナ");
toHiragana(string)
var hiragana = hepburn.toHiragana("HIRAGANA");
toKatakana(string)
var katakana = hepburn.toKatakana("KATAKANA");
var tokyo = hepburn.toKatakana("TŌKYŌ");
cleanRomaji(string)
var cleaned = hepburn.cleanRomaji("SYUNNEI");
// cleaned === "SHUN'EI"
Correcting old usages of Nihon-shiki romanization into Hepburn form. A full list of the conversions can be found in the hepburn.js file. For example "Eisyosai" becomes "Eishosai" and "Yoshihuji" becomes "Yoshifuji".
Splits a string containing Katakana or Hiragana into a syllables
array.
In this example hiragana will have the value ["ひ", "ら",
"が", "な"] and tokyo will have the value ["トー", "キョ
ー"] .
Splits a string containing Romaji into a syllables array.
In this example tokyo will have the value ["TŌ", "KYŌ"] and
pakkingu will have the value ["PAK", "KI", "N", "GU"] .
Returns true if string contains Hiragana.
Returns true if string contains Katakana.
Returns true if string contains any Kana.
Returns true if string contains any Kanji.
Run the unit tests with:
splitKana(string)
var hiragana = hepburn.splitKana("ひらがな");
var tokyo = hepburn.splitKana("トーキョー");
splitRomaji(string)
var tokyo = hepburn.splitRomaji("TŌKYŌ");
var pakkingu = hepburn.splitRomaji("PAKKINGU");
containsHiragana(string)
containsKatakana(string)
containsKana(string)
containsKanji(string)
Testing
npm test
Licence
Copyright 2013, 2014, 2015, 2018, 2020 Lovell Fuller and contributors.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
| # Hepburn
Node.js module for converting Japanese Hiragana and Katakana script to, and from, Romaji using [Hepburn romanisation](http://en.wikipedia.org/wiki/Hepburn_romanization).
Based partly on Takaaki Komura's [kana2hepburn](https://github.com/emon/kana2hepburn).
## Install
npm install hepburn
## Usage
```javascript
var hepburn = require("hepburn");
```
### fromKana(string)
```javascript
var romaji1 = hepburn.fromKana("ひらがな");
var romaji2 = hepburn.fromKana("カタカナ");
```
Converts a string containing Kana, either Hiragana or Katakana, to Romaji.
In this example `romaji1` will have the value `HIRAGANA`, `romaji2` will have the value `KATAKANA`.
### toHiragana(string)
```javascript
var hiragana = hepburn.toHiragana("HIRAGANA");
```
Converts a string containing Romaji to Hiragana.
In this example `hiragana` will have the value `ひらがな`.
### toKatakana(string)
```javascript
var katakana = hepburn.toKatakana("KATAKANA");
var tokyo = hepburn.toKatakana("TŌKYŌ");
```
Converts a string containing Romaji to Katakana.
In this example `katakana` will have the value `カタカナ` and `tokyo` will have the value `トーキョー`.
### cleanRomaji(string)
```javascript
var cleaned = hepburn.cleanRomaji("SYUNNEI");
// cleaned === "SHUN'EI"
```
Cleans up a romaji string, changing old romaji forms into the more-modern
Hepburn form (for further processing). Generally matches the style used by
[Wapro romaji](https://en.wikipedia.org/wiki/W%C4%81puro_r%C5%8Dmaji).
A larger [guide to modern romaji conventions](https://www.nayuki.io/page/variations-on-japanese-romanization)
was used in building this method.
What this method fixes:
* Incorrect usage of the letter M. For example "Shumman" should be written as "Shunman".
* Changing usage of NN into N', for example "Shunnei" becomes "Shun'ei".
* Converting the usage of OU and OH (to indicate a long vowel) into OO.
* Correcting old usages of [Nihon-shiki romanization](https://en.wikipedia.org/wiki/Nihon-shiki_romanization) into Hepburn form. A full list of the conversions can be found in the `hepburn.js` file. For example "Eisyosai" becomes "Eishosai" and "Yoshihuji" becomes "Yoshifuji".
### splitKana(string)
```javascript
var hiragana = hepburn.splitKana("ひらがな");
var tokyo = hepburn.splitKana("トーキョー");
```
Splits a string containing Katakana or Hiragana into a syllables array.
In this example `hiragana` will have the value `["ひ", "ら", "が", "な"]` and `tokyo` will have the value `["トー", "キョー"]`.
### splitRomaji(string)
```javascript
var tokyo = hepburn.splitRomaji("TŌKYŌ");
var pakkingu = hepburn.splitRomaji("PAKKINGU");
```
Splits a string containing Romaji into a syllables array.
In this example `tokyo` will have the value `["TŌ", "KYŌ"]` and `pakkingu` will have the value `["PAK", "KI", "N", "GU"]`.
### containsHiragana(string)
Returns `true` if `string` contains Hiragana.
### containsKatakana(string)
Returns `true` if `string` contains Katakana.
### containsKana(string)
Returns `true` if `string` contains any Kana.
### containsKanji(string)
Returns `true` if `string` contains any Kanji.
## Testing [](https://travis-ci.org/lovell/hepburn)
Run the unit tests with:
npm test
## Licence
Copyright 2013, 2014, 2015, 2018, 2020 Lovell Fuller and contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at [http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0.html)
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| [
"Phonology",
"Syntactic Text Processing",
"Text Normalization"
] | [] |
true | https://github.com/jeresig/node-romaji-name | 2013-08-24T17:50:11Z | Normalize and fix common issues with Romaji-based Japanese names. | jeresig / node-romaji-name
This is a utility primarily designed for consuming, parsing, and correcting Japanese
names written in rōmaji using proper Hepburn romanization form.
Beyond fixing common problems with Japanese names written with rōmaji, it's also
able to do a number of amazing things:
1. It's able to figure out which part of the name is the surname and which is the
given name and correct the order, if need be (using the enamdict module).
2. It's able to fix names that are missing important punctuation or stress marks
(such as missing long vowel marks, like ō, or ' for splitting confusing n-vowel
usage).
3. It's able to detect non-Japanese names and leave them intact for future
processing.
romaji-name
4. It's able to provide the kana form of the Japanese name (using Hiragana and
the hepburn module).
5. It's able to correctly split Japanese names, written with Kanji, into their proper
given and surnames.
6. It can detect and properly handle the "generation" portion of the name, both in
English and in Japanese (e.g. III, IV, etc.).
This utility was created to help consume all of the (extremely-poorly-written)
Japanese names found when collecting data for the Ukiyo-e Database and Search
Engine.
All code is written by John Resig and is released under an MIT license.
If you like this module this you may also be interested in two other modules that
this module depends upon: enamdict and hepburn.
Which will log out objects that look something like this:
Example
var romajiName = require("romaji-name");
// Wait for the module to completely load
// (loads the ENAMDICT dictionary)
romajiName.init(function() {
console.log(romajiName.parseName("Kenichi Nakamura"));
console.log(romajiName.parseName("Gakuryo Nakamura"));
console.log(romajiName.parseName("Charles Bartlett"));
});
// Note the correction of the order of the given/surname
// Also note the correct kana generated and the injection
// of the missing '
{
original: 'Kenichi Nakamura',
locale: 'ja',
given: 'Ken\'ichi',
given_kana: 'けんいち',
surname: 'Nakamura',
surname_kana: 'なかむら',
name: 'Nakamura Ken\'ichi',
ascii: 'Nakamura Ken\'ichi',
plain: 'Nakamura Ken\'ichi',
kana: 'なかむらけんいち'
}
// Note the correction of the order of the given/surname
// Also note the correction of the missing ō
{
original: 'Gakuryo Nakamura',
locale: 'ja',
given: 'Gakuryō',
This is available as a node module on npm. You can find it here:
https://npmjs.org/package/romaji-name It can be installed by running the following:
This library provides a large number of utility methods for working with names
(especially Japanese names). That being said you'll probably only ever make use
of just the few main methods:
Loads the dependent modules (namely, loads the enamdict name database). If,
for some reason, you don't need to do any surname/given name correction, or
correction of stress marks, then you can skip this step (this would likely be a very
abnormal usage of this library).
Parses a single string name and returns an object representing that name.
Optionally you can specify some settings to modify how the name is parsed, see
below for a list of all the settings.
The returned object will have some, or all, of the following properties:
original : The original string that was passed in to parseName .
given_kana: 'がくりょう',
surname: 'Nakamura',
surname_kana: 'なかむら',
name: 'Nakamura Gakuryō',
ascii: 'Nakamura Gakuryoo',
plain: 'Nakamura Gakuryo',
kana: 'なかむらがくりょう'
}
// Note that it detects that this is likely not a Japanese name
// (and leaves the locale empty, accordingly)
{
original: 'Charles Bartlett',
locale: '',
given: 'Charles',
surname: 'Bartlett',
name: 'Charles Bartlett',
ascii: 'Charles Bartlett',
plain: 'Charles Bartlett'
}
Installation
npm install romaji-name
Documentation
init(Function)
parseName(String [, Object])
settings : An object holding the settings that were passed in to the
parseName method.
locale : A guess at the locale of the name. Only two values exist: "ja" and
"" . Note that just because "ja" was returned it does not guarantee that the
person is actually Japanese, just that the name looks to be Japanese-like (for
example: Some Chinese names also return "ja" ).
given : A string of the Romaji form of the given name. (Will only exist if a
Romaji form was originally provided.)
given_kana : A string of the Kana form of the given name. (Will only exist if a
Romaji form was originally provided and if the locale is "ja" .)
given_kanji : A string of the Kanji form of the given name. (Will only exist if a
Kanji form was originally provided.)
middle :
surname : A string of the Romaji form of the surname. (Will only exist if a
Romaji form was originally provided.)
surname_kana : A string of the Kana form of the surname. (Will only exist if a
Romaji form was originally provided and if the locale is "ja" .)
surname_kanji : A string of the Kanji form of the surname. (Will only exist if a
Kanji form was originally provided.)
generation : A number representing the generation of the name. For example
"John Smith II" would have a generation of 2 .
name : The full name, in properly-stressed romaji, including the generation.
For example: "Nakamura Gakuryō II" .
ascii : The full name, in ascii text, including the generation. This is a proper
ascii representation of the name (with long vowels converted from "ō" into
"oo", for example). Example: "Nakamura Gakuryoo II" .
plain : The full name, in plain text, including the generation. This is the same
as the name property but with all stress formatting stripped from it. This could
be useful to use in the generation of a URL slug, or some such. It should never
be displayed to an end-user as it will almost always be incorrect. Example:
"Nakamura Gakuryo II" .
kana : The full name, in kana, without the generation. For example: "なかむら
がくりょう".
kanji : The full name, in kanji, including the generation. For example: "戯画堂
芦幸 2世" .
unknown : If the name is a representation of an unknown individual (e.g. it's
the string "Unknown", "Not known", or many of the other variations) then this
property will exist and be true .
attributed : If the name includes a prefix like "Attributed to" then this will be
true .
after : If the name includes some sort of "After" or "In the style of" or similar
prefix then this will be true .
school : If the name includes a prefix like "School of", "Pupil of", or similar
then this will be true .
Settings:
The following are optional settings that change how the name parsing functions.
flipNonJa : Names that don't have a "ja" locale should be flipped ("Smith
John" becomes "John Smith").
stripParens : Removes anything that's wrapped in parentheses. Normally
this is left intact and any extra information is parsed from it.
givenFirst : Assumes that the first name is always the given name.
Same as the normal parseName method but accepts an object that's in the same
form as the object returned from parseName . This is useful as you can take
parseName(Object)
| romaji-name
================
This is a utility primarily designed for consuming, parsing, and correcting Japanese names written in [rōmaji](https://en.wikipedia.org/wiki/Romanization_of_Japanese) using proper [Hepburn romanization](https://en.wikipedia.org/wiki/Hepburn_romanization) form.
Beyond fixing common problems with Japanese names written with rōmaji, it's also able to do a number of amazing things:
1. It's able to figure out which part of the name is the surname and which is the given name and correct the order, if need be (using the [enamdict](https://npmjs.org/package/enamdict) module).
2. It's able to fix names that are missing important punctuation or stress marks (such as missing long vowel marks, like **ō**, or `'` for splitting confusing n-vowel usage).
3. It's able to detect non-Japanese names and leave them intact for future processing.
4. It's able to provide the kana form of the Japanese name (using [Hiragana](https://en.wikipedia.org/wiki/Hiragana) and the [hepburn](https://npmjs.org/package/hepburn) module).
5. It's able to correctly split Japanese names, written with Kanji, into their proper given and surnames.
6. It can detect and properly handle the "generation" portion of the name, both in English and in Japanese (e.g. III, IV, etc.).
This utility was created to help consume all of the (extremely-poorly-written) Japanese names found when collecting data for the [Ukiyo-e Database and Search Engine](http://ukiyo-e.org/).
All code is written by [John Resig](http://ejohn.org/) and is released under an MIT license.
If you like this module this you may also be interested in two other modules that this module depends upon: [enamdict](https://npmjs.org/package/enamdict) and [hepburn](https://npmjs.org/package/hepburn).
Example
-------
```javascript
var romajiName = require("romaji-name");
// Wait for the module to completely load
// (loads the ENAMDICT dictionary)
romajiName.init(function() {
console.log(romajiName.parseName("Kenichi Nakamura"));
console.log(romajiName.parseName("Gakuryo Nakamura"));
console.log(romajiName.parseName("Charles Bartlett"));
});
```
Which will log out objects that look something like this:
```javascript
// Note the correction of the order of the given/surname
// Also note the correct kana generated and the injection
// of the missing '
{
original: 'Kenichi Nakamura',
locale: 'ja',
given: 'Ken\'ichi',
given_kana: 'けんいち',
surname: 'Nakamura',
surname_kana: 'なかむら',
name: 'Nakamura Ken\'ichi',
ascii: 'Nakamura Ken\'ichi',
plain: 'Nakamura Ken\'ichi',
kana: 'なかむらけんいち'
}
// Note the correction of the order of the given/surname
// Also note the correction of the missing ō
{
original: 'Gakuryo Nakamura',
locale: 'ja',
given: 'Gakuryō',
given_kana: 'がくりょう',
surname: 'Nakamura',
surname_kana: 'なかむら',
name: 'Nakamura Gakuryō',
ascii: 'Nakamura Gakuryoo',
plain: 'Nakamura Gakuryo',
kana: 'なかむらがくりょう'
}
// Note that it detects that this is likely not a Japanese name
// (and leaves the locale empty, accordingly)
{
original: 'Charles Bartlett',
locale: '',
given: 'Charles',
surname: 'Bartlett',
name: 'Charles Bartlett',
ascii: 'Charles Bartlett',
plain: 'Charles Bartlett'
}
```
Installation
------------
This is available as a node module on npm. You can find it here: https://npmjs.org/package/romaji-name It can be installed by running the following:
npm install romaji-name
Documentation
-------------
This library provides a large number of utility methods for working with names (especially Japanese names). That being said you'll probably only ever make use of just the few main methods:
### `init(Function)`
Loads the dependent modules (namely, loads the `enamdict` name database). If, for some reason, you don't need to do any surname/given name correction, or correction of stress marks, then you can skip this step (this would likely be a very abnormal usage of this library).
### `parseName(String [, Object])`
Parses a single string name and returns an object representing that name. Optionally you can specify some settings to modify how the name is parsed, see below for a list of all the settings.
The returned object will have some, or all, of the following properties:
* `original`: The original string that was passed in to `parseName`.
* `settings`: An object holding the settings that were passed in to the `parseName` method.
* `locale`: A guess at the locale of the name. Only two values exist: `"ja"` and `""`. Note that just because `"ja"` was returned it does not guarantee that the person is actually Japanese, just that the name looks to be Japanese-like (for example: Some Chinese names also return `"ja"`).
* `given`: A string of the Romaji form of the given name. (Will only exist if a Romaji form was originally provided.)
* `given_kana`: A string of the Kana form of the given name. (Will only exist if a Romaji form was originally provided and if the locale is `"ja"`.)
* `given_kanji`: A string of the Kanji form of the given name. (Will only exist if a Kanji form was originally provided.)
* `middle`:
* `surname`: A string of the Romaji form of the surname. (Will only exist if a Romaji form was originally provided.)
* `surname_kana`: A string of the Kana form of the surname. (Will only exist if a Romaji form was originally provided and if the locale is `"ja"`.)
* `surname_kanji`: A string of the Kanji form of the surname. (Will only exist if a Kanji form was originally provided.)
* `generation`: A number representing the generation of the name. For example "John Smith II" would have a generation of `2`.
* `name`: The full name, in properly-stressed romaji, including the generation. For example: `"Nakamura Gakuryō II"`.
* `ascii`: The full name, in ascii text, including the generation. This is a proper ascii representation of the name (with long vowels converted from "ō" into "oo", for example). Example: `"Nakamura Gakuryoo II"`.
* `plain`: The full name, in plain text, including the generation. This is the same as the `name` property but with all stress formatting stripped from it. This could be useful to use in the generation of a URL slug, or some such. It should never be displayed to an end-user as it will almost always be incorrect. Example: `"Nakamura Gakuryo II"`.
* `kana`: The full name, in kana, without the generation. For example: "なかむらがくりょう".
* `kanji`: The full name, in kanji, including the generation. For example: `"戯画堂芦幸 2世"`.
* `unknown`: If the name is a representation of an unknown individual (e.g. it's the string "Unknown", "Not known", or many of the other variations) then this property will exist and be `true`.
* `attributed`: If the name includes a prefix like "Attributed to" then this will be `true`.
* `after`: If the name includes some sort of "After" or "In the style of" or similar prefix then this will be `true`.
* `school`: If the name includes a prefix like "School of", "Pupil of", or similar then this will be `true`.
**Settings:**
The following are optional settings that change how the name parsing functions.
* `flipNonJa`: Names that don't have a "ja" locale should be flipped ("Smith John" becomes "John Smith").
* `stripParens`: Removes anything that's wrapped in parentheses. Normally this is left intact and any extra information is parsed from it.
* `givenFirst`: Assumes that the first name is always the given name.
### `parseName(Object)`
Same as the normal `parseName` method but accepts an object that's in the same form as the object returned from `parseName`. This is useful as you can take existing `romaji-name`-generated name objects and re-parse them again (to easily upgrade them when new changes are made to the `romaji-name` module). | [
"Syntactic Text Processing",
"Text Error Correction",
"Text Normalization"
] | [] |
true | https://github.com/WaniKani/WanaKana | 2013-08-27T19:57:41Z | Javascript library for detecting and transliterating Hiragana <--> Katakana <--> Romaji | WaniKani / WanaKana
Visit the website to see WanaKana in action.
https://unpkg.com/wanakana
ワナカナ <--> WanaKana <--> わなかな
Javascript utility library for detecting and transliterating Hiragana, Katakana, and Romaji
Demo
Usage
In the browser without a build step, use the minified (UMD) bundle (with browser
polyfills)
<head>
<meta charset="UTF-8">
<script src="https://unpkg.com/wanakana"></script>
</head>
<body>
<input type="text" id="wanakana-input" />
<script>
var textInput = document.getElementById('wanakana-input');
wanakana.bind(textInput, /* options */); // uses IMEMode with toKana() as default
// to remove event listeners: wanakana.unbind(textInput);
</script>
</body>
ES Modules or Node
Install
npm install wanakana
ES Modules
import * as wanakana from 'wanakana';
// or
import { toKana, isRomaji } from 'wanakana';
Node (>=12 supported)
const wanakana = require('wanakana');
Documentation
Extended API reference
Quick Reference
/*** DOM HELPERS ***/
// Automatically converts text using an eventListener on input
// Sets option: { IMEMode: true } with toKana() as converter by default
wanakana.bind(domElement [, options]);
// Removes event listener
wanakana.unbind(domElement);
/*** TEXT CHECKING UTILITIES ***/
wanakana.isJapanese('泣き虫。!〜2¥zenkaku')
// => true
wanakana.isKana('あーア')
// => true
wanakana.isHiragana('すげー')
// => true
wanakana.isKatakana('ゲーム')
// => true
wanakana.isKanji('切腹')
// => true
wanakana.isKanji('勢い')
// => false
wanakana.isRomaji('Tōkyō and Ōsaka')
// => true
wanakana.toKana('ONAJI buttsuuji')
// => 'オナジ ぶっつうじ'
wanakana.toKana('座禅‘zazen’スタイル')
// => '座禅「ざぜん」スタイル'
wanakana.toKana('batsuge-mu')
// => 'ばつげーむ'
wanakana.toKana('wanakana', { customKanaMapping: { na: 'に', ka: 'bana' } });
// => 'わにbanaに'
wanakana.toHiragana('toukyou, オオサカ')
// => 'とうきょう、 おおさか'
wanakana.toHiragana('only カナ', { passRomaji: true })
// => 'only かな'
wanakana.toHiragana('wi', { useObsoleteKana: true })
// => 'ゐ'
wanakana.toKatakana('toukyou, おおさか')
// => 'トウキョウ、 オオサカ'
wanakana.toKatakana('only かな', { passRomaji: true })
// => 'only カナ'
wanakana.toKatakana('wi', { useObsoleteKana: true })
// => 'ヰ'
wanakana.toRomaji('ひらがな カタカナ')
// => 'hiragana katakana'
wanakana.toRomaji('ひらがな カタカナ', { upcaseKatakana: true })
// => 'hiragana KATAKANA'
wanakana.toRomaji('つじぎり', { customRomajiMapping: { じ: 'zi', つ: 'tu', り: 'li' } });
// => 'tuzigili'
/*** EXTRA UTILITIES ***/
wanakana.stripOkurigana('お祝い')
// => 'お祝'
wanakana.stripOkurigana('踏み込む')
// => '踏み込'
wanakana.stripOkurigana('お腹', { leading: true });
// => '腹'
wanakana.stripOkurigana('ふみこむ', { matchKanji: '踏み込む' });
// => 'ふみこ'
wanakana.stripOkurigana('おみまい', { matchKanji: 'お祝い', leading: true });
// => 'みまい'
wanakana.tokenize('ふふフフ')
// => ['ふふ', 'フフ']
wanakana.tokenize('hello 田中さん')
// => ['hello', ' ', '田中', 'さん']
wanakana.tokenize('I said 私はすごく悲しい', { compact: true })
// => [ 'I said ', '私はすごく悲しい']
Important
Only the browser build via unpkg or the root wanakana.min.js includes polyfills for older browsers.
Contributing
Please see CONTRIBUTING.md
Contributors
Mims H. Wright – Author
Duncan Bay – Author
Geggles – Contributor
James McNamee – Contributor
Credits
Project sponsored by Tofugu & WaniKani
Ports
The following ports have been created by the community:
Python (Starwort/wanakana-py) on PyPI as wanakana-python
Java (MasterKale/WanaKanaJava)
Rust (PSeitz/wana_kana_rust)
Swift (profburke/WanaKanaSwift)
Kotlin (esnaultdev/wanakana-kt)
C# (kmoroz/WanaKanaShaapu)
Go (deelawn/wanakana)
| <div align="center">
<!-- Npm Version -->
<a href="https://www.npmjs.com/package/wanakana">
<img src="https://img.shields.io/npm/v/wanakana.svg" alt="NPM package" />
</a>
<!-- Build Status -->
<a href="https://travis-ci.org/WaniKani/WanaKana">
<img src="https://img.shields.io/travis/WaniKani/WanaKana.svg" alt="Build Status" />
</a>
<!-- Test Coverage -->
<a href="https://coveralls.io/github/WaniKani/WanaKana">
<img src="https://img.shields.io/coveralls/WaniKani/WanaKana.svg" alt="Test Coverage" />
</a>
<a href="https://dashboard.cypress.io/#/projects/tmdhov/runs">
<img src="https://img.shields.io/badge/cypress-dashboard-brightgreen.svg" alt="Cypress Dashboard" />
</a>
</div>
<div align="center">
<h1>ワナカナ <--> WanaKana <--> わなかな</h1>
<h4>Javascript utility library for detecting and transliterating Hiragana, Katakana, and Romaji</h4>
</div>
## Demo
Visit the [website](http://www.wanakana.com) to see WanaKana in action.
## Usage
### In the browser without a build step, use the minified (UMD) bundle (with browser polyfills)
[https://unpkg.com/wanakana](https://unpkg.com/wanakana)
```html
<head>
<meta charset="UTF-8">
<script src="https://unpkg.com/wanakana"></script>
</head>
<body>
<input type="text" id="wanakana-input" />
<script>
var textInput = document.getElementById('wanakana-input');
wanakana.bind(textInput, /* options */); // uses IMEMode with toKana() as default
// to remove event listeners: wanakana.unbind(textInput);
</script>
</body>
```
### ES Modules or Node
#### Install
```shell
npm install wanakana
```
#### ES Modules
```javascript
import * as wanakana from 'wanakana';
// or
import { toKana, isRomaji } from 'wanakana';
```
#### Node (>=12 supported)
```javascript
const wanakana = require('wanakana');
```
## Documentation
[Extended API reference](http://www.wanakana.com/docs/global.html)
## Quick Reference
```javascript
/*** DOM HELPERS ***/
// Automatically converts text using an eventListener on input
// Sets option: { IMEMode: true } with toKana() as converter by default
wanakana.bind(domElement [, options]);
// Removes event listener
wanakana.unbind(domElement);
/*** TEXT CHECKING UTILITIES ***/
wanakana.isJapanese('泣き虫。!〜2¥zenkaku')
// => true
wanakana.isKana('あーア')
// => true
wanakana.isHiragana('すげー')
// => true
wanakana.isKatakana('ゲーム')
// => true
wanakana.isKanji('切腹')
// => true
wanakana.isKanji('勢い')
// => false
wanakana.isRomaji('Tōkyō and Ōsaka')
// => true
wanakana.toKana('ONAJI buttsuuji')
// => 'オナジ ぶっつうじ'
wanakana.toKana('座禅‘zazen’スタイル')
// => '座禅「ざぜん」スタイル'
wanakana.toKana('batsuge-mu')
// => 'ばつげーむ'
wanakana.toKana('wanakana', { customKanaMapping: { na: 'に', ka: 'bana' } });
// => 'わにbanaに'
wanakana.toHiragana('toukyou, オオサカ')
// => 'とうきょう、 おおさか'
wanakana.toHiragana('only カナ', { passRomaji: true })
// => 'only かな'
wanakana.toHiragana('wi', { useObsoleteKana: true })
// => 'ゐ'
wanakana.toKatakana('toukyou, おおさか')
// => 'トウキョウ、 オオサカ'
wanakana.toKatakana('only かな', { passRomaji: true })
// => 'only カナ'
wanakana.toKatakana('wi', { useObsoleteKana: true })
// => 'ヰ'
wanakana.toRomaji('ひらがな カタカナ')
// => 'hiragana katakana'
wanakana.toRomaji('ひらがな カタカナ', { upcaseKatakana: true })
// => 'hiragana KATAKANA'
wanakana.toRomaji('つじぎり', { customRomajiMapping: { じ: 'zi', つ: 'tu', り: 'li' } });
// => 'tuzigili'
/*** EXTRA UTILITIES ***/
wanakana.stripOkurigana('お祝い')
// => 'お祝'
wanakana.stripOkurigana('踏み込む')
// => '踏み込'
wanakana.stripOkurigana('お腹', { leading: true });
// => '腹'
wanakana.stripOkurigana('ふみこむ', { matchKanji: '踏み込む' });
// => 'ふみこ'
wanakana.stripOkurigana('おみまい', { matchKanji: 'お祝い', leading: true });
// => 'みまい'
wanakana.tokenize('ふふフフ')
// => ['ふふ', 'フフ']
wanakana.tokenize('hello 田中さん')
// => ['hello', ' ', '田中', 'さん']
wanakana.tokenize('I said 私はすごく悲しい', { compact: true })
// => [ 'I said ', '私はすごく悲しい']
```
## Important
Only the browser build via unpkg or the root `wanakana.min.js` includes polyfills for older browsers.
## Contributing
Please see [CONTRIBUTING.md](CONTRIBUTING.md)
## Contributors
* [Mims H. Wright](https://github.com/mimshwright) – Author
* [Duncan Bay](https://github.com/DJTB) – Author
* [Geggles](https://github.com/geggles) – Contributor
* [James McNamee](https://github.com/dotfold) – Contributor
## Credits
Project sponsored by [Tofugu](http://www.tofugu.com) & [WaniKani](http://www.wanikani.com)
## Ports
The following ports have been created by the community:
* Python ([Starwort/wanakana-py](https://github.com/Starwort/wanakana-py)) on PyPI as `wanakana-python`
* Java ([MasterKale/WanaKanaJava](https://github.com/MasterKale/WanaKanaJava))
* Rust ([PSeitz/wana_kana_rust](https://github.com/PSeitz/wana_kana_rust))
* Swift ([profburke/WanaKanaSwift](https://github.com/profburke/WanaKanaSwift))
* Kotlin ([esnaultdev/wanakana-kt](https://github.com/esnaultdev/wanakana-kt))
* C# ([kmoroz/WanaKanaShaapu](https://github.com/kmoroz/WanaKanaShaapu))
* Go ([deelawn/wanakana](https://github.com/deelawn/wanakana))
| [
"Syntactic Text Processing",
"Text Normalization"
] | [] |
true | https://github.com/gojp/nihongo | 2013-09-02T15:17:52Z | Japanese Dictionary | gojp / nihongo
nihongo.io
Open source Japanese Dictionary written in Go: https://nihongo.io
How to run:
1. git clone https://github.com/gojp/nihongo.git
2. Run the app: go run main.go
| nihongo.io
=========
[](https://goreportcard.com/report/github.com/gojp/nihongo)
Open source Japanese Dictionary written in Go: [https://nihongo.io](https://nihongo.io)
### How to run:
1. `git clone https://github.com/gojp/nihongo.git`
2. Run the app: `go run main.go`
| [] | [
"Vocabulary, Dictionary, and Language Input Method"
] |
true | https://github.com/studio-ousia/mojimoji | 2013-11-02T16:23:06Z | A fast converter between Japanese hankaku and zenkaku characters | studio-ousia / mojimoji
A Cython-based fast converter between Japanese hankaku and zenkaku characters.
mojimoji
Installation
$ pip install mojimoji
Examples
Zenkaku to Hankaku
>>> import mojimoji
>>> print(mojimoji.zen_to_han('アイウａｂｃ０１２'))
ｱｲｳabc012
>>> print(mojimoji.zen_to_han('アイウａｂｃ０１２', kana=False))
アイウabc012
>>> print(mojimoji.zen_to_han('アイウａｂｃ０１２', digit=False))
ｱｲｳabc０１２
>>> print(mojimoji.zen_to_han('アイウａｂｃ０１２', ascii=False))
ｱｲｳａｂｃ012
Hankaku to Zenkaku
>>> import mojimoji
>>> print(mojimoji.han_to_zen('ｱｲｳabc012'))
アイウａｂｃ０１２
>>> print(mojimoji.han_to_zen('ｱｲｳabc012', kana=False))
ｱｲｳａｂｃ０１２
>>> print(mojimoji.han_to_zen('ｱｲｳabc012', digit=False))
アイウａｂｃ012
>>> print(mojimoji.han_to_zen('ｱｲｳabc012', ascii=False))
アイウabc０１２
Benchmarks
Library versions
mojimoji: 0.0.1
zenhan: 0.4
unicodedata: Bundled with Python 2.7.3
Results
In [19]: s = 'ABCDEFG012345' * 10
In [20]: %time for n in range(1000000): mojimoji.zen_to_han(s)
CPU times: user 2.86 s, sys: 0.10 s, total: 2.97 s
Wall time: 2.88 s
In [21]: %time for n in range(1000000): unicodedata.normalize('NFKC', s)
CPU times: user 5.43 s, sys: 0.12 s, total: 5.55 s
Wall time: 5.44 s
In [22]: %time for n in range(1000000): zenhan.z2h(s)
CPU times: user 69.18 s, sys: 0.11 s, total: 69.29 s
Wall time: 69.48 s
Links
mojimoji-rs: The Rust implementation of mojimoji
| mojimoji
========
.. image:: https://github.com/studio-ousia/mojimoji/actions/workflows/test.yml/badge.svg
:target: https://github.com/studio-ousia/mojimoji/actions/workflows/test.yml
.. image:: https://img.shields.io/pypi/v/mojimoji.svg
:target: https://pypi.org/project/mojimoji/
.. image:: https://static.pepy.tech/personalized-badge/mojimoji?period=total&units=international_system&left_color=grey&right_color=orange&left_text=pip%20downloads
:target: https://pypi.org/project/mojimoji/
A Cython-based fast converter between Japanese hankaku and zenkaku characters.
Installation
------------
.. code-block:: bash
$ pip install mojimoji
Examples
--------
Zenkaku to Hankaku
^^^^^^^^^^^^^^^^^^
.. code-block:: python
>>> import mojimoji
>>> print(mojimoji.zen_to_han('アイウａｂｃ０１２'))
ｱｲｳabc012
>>> print(mojimoji.zen_to_han('アイウａｂｃ０１２', kana=False))
アイウabc012
>>> print(mojimoji.zen_to_han('アイウａｂｃ０１２', digit=False))
ｱｲｳabc０１２
>>> print(mojimoji.zen_to_han('アイウａｂｃ０１２', ascii=False))
ｱｲｳａｂｃ012
Hankaku to Zenkaku
^^^^^^^^^^^^^^^^^^
.. code-block:: python
>>> import mojimoji
>>> print(mojimoji.han_to_zen('ｱｲｳabc012'))
アイウａｂｃ０１２
>>> print(mojimoji.han_to_zen('ｱｲｳabc012', kana=False))
ｱｲｳａｂｃ０１２
>>> print(mojimoji.han_to_zen('ｱｲｳabc012', digit=False))
アイウａｂｃ012
>>> print(mojimoji.han_to_zen('ｱｲｳabc012', ascii=False))
アイウabc０１２
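A common real-world pattern is to normalize only the ASCII and digit ranges while keeping kana in full width. The helper below is not from the original README; it is an illustrative sketch that only combines the two functions documented above.

.. code-block:: python

    import mojimoji

    def normalize(text):
        # Half-width ASCII/digits, full-width kana (illustrative helper).
        text = mojimoji.zen_to_han(text, kana=False)
        return mojimoji.han_to_zen(text, ascii=False, digit=False)

    print(normalize('ﾃｽﾄＡＢＣ１２３'))  # => 'テストABC123'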
Benchmarks
----------
Library versions
^^^^^^^^^^^^^^^^
- mojimoji: 0.0.1
- `zenhan <https://pypi.python.org/pypi/zenhan>`_: 0.4
- `unicodedata <http://docs.python.org/2/library/unicodedata.html>`_: Bundled with Python 2.7.3
Results
^^^^^^^
.. code-block:: python
In [19]: s = 'ABCDEFG012345' * 10
In [20]: %time for n in range(1000000): mojimoji.zen_to_han(s)
CPU times: user 2.86 s, sys: 0.10 s, total: 2.97 s
Wall time: 2.88 s
In [21]: %time for n in range(1000000): unicodedata.normalize('NFKC', s)
CPU times: user 5.43 s, sys: 0.12 s, total: 5.55 s
Wall time: 5.44 s
In [22]: %time for n in range(1000000): zenhan.z2h(s)
CPU times: user 69.18 s, sys: 0.11 s, total: 69.29 s
Wall time: 69.48 s
Links
-----
- `mojimoji-rs <https://github.com/europeanplaice/mojimoji-rs>`_: The Rust implementation of mojimoji
- `gomojimoji <https://github.com/rusq/gomojimoji>`_: The Go implementation of mojimoji
| [
"Syntactic Text Processing",
"Text Normalization"
] | [] |
true | https://github.com/cihai/cihai | 2013-12-03T17:42:52Z | Python library for CJK (Chinese, Japanese, and Korean) language dictionary |
| # cihai · [](https://pypi.org/project/cihai/) [](https://github.com/cihai/cihai/blob/master/LICENSE) [](https://codecov.io/gh/cihai/cihai)
Python library for [CJK](https://cihai.git-pull.com/glossary.html#term-cjk) (chinese, japanese,
korean) data.
This project is under active development. Follow our progress and check back for updates!
## Quickstart
### API / Library (this repository)
```console
$ pip install --user cihai
```
```python
from cihai.core import Cihai
c = Cihai()
if not c.unihan.is_bootstrapped: # download and install Unihan to db
c.unihan.bootstrap()
query = c.unihan.lookup_char('好')
glyph = query.first()
print("lookup for 好: %s" % glyph.kDefinition)
# lookup for 好: good, excellent, fine; well
query = c.unihan.reverse_char('good')
print('matches for "good": %s ' % ', '.join([glph.char for glph in query]))
# matches for "good": 㑘, 㑤, 㓛, 㘬, 㙉, 㚃, 㚒, 㚥, 㛦, 㜴, 㜺, 㝖, 㤛, 㦝, ...
```
See [API](https://cihai.git-pull.com/api.html) documentation and
[/examples](https://github.com/cihai/cihai/tree/master/examples).
### CLI ([cihai-cli](https://cihai-cli.git-pull.com))
```console
$ pip install --user cihai-cli
```
Character lookup:
```console
$ cihai info 好
```
```yaml
char: 好
kCantonese: hou2 hou3
kDefinition: good, excellent, fine; well
kHangul: 호
kJapaneseOn: KOU
kKorean: HO
kMandarin: hǎo
kTang: "*xɑ̀u *xɑ̌u"
kTotalStrokes: "6"
kVietnamese: háo
ucn: U+597D
```
Reverse lookup:
```console
$ cihai reverse library
```
```yaml
char: 圕
kCangjie: WLGA
kCantonese: syu1
kCihaiT: '308.302'
kDefinition: library
kMandarin: tú
kTotalStrokes: '13'
ucn: U+5715
--------
```
### UNIHAN data
All datasets that cihai uses have stand-alone tools to export their data. No library required.
- [unihan-etl](https://unihan-etl.git-pull.com) - [UNIHAN](http://unicode.org/charts/unihan.html)
data exports for csv, yaml and json.
## Developing
```console
$ git clone https://github.com/cihai/cihai.git
```
```console
$ cd cihai/
```
[Bootstrap your environment and learn more about contributing](https://cihai.git-pull.com/contributing/). We use the same conventions / tools across all cihai projects: `pytest`, `sphinx`, `mypy`, `ruff`, `tmuxp`, and file watcher helpers (e.g. `entr(1)`).
## Python versions
- 0.19.0: Last Python 3.7 release
## Quick links
- [Quickstart](https://cihai.git-pull.com/quickstart.html)
- [Datasets](https://cihai.git-pull.com/datasets.html) a full list of current and future data sets
- Python [API](https://cihai.git-pull.com/api.html)
- [Roadmap](https://cihai.git-pull.com/design-and-planning/)
- Python support: >= 3.8, pypy
- Source: <https://github.com/cihai/cihai>
- Docs: <https://cihai.git-pull.com>
- Changelog: <https://cihai.git-pull.com/history.html>
- API: <https://cihai.git-pull.com/api.html>
- Issues: <https://github.com/cihai/cihai/issues>
- Test coverage: <https://codecov.io/gh/cihai/cihai>
- pypi: <https://pypi.python.org/pypi/cihai>
- OpenHub: <https://www.openhub.net/p/cihai>
- License: MIT
[](https://cihai.git-pull.com/)
[](https://github.com/cihai/cihai/actions?query=workflow%3A%22tests%22)
| [
"Multilinguality",
"Syntactic Text Processing"
] | [
"Annotation and Dataset Development"
] |
true | https://github.com/SamuraiT/mecab-python3 | 2014-05-31T08:47:04Z | mecab-python. you can find original version here:http://taku910.github.io/mecab/ | SamuraiT / mecab-python3
mecab-python3
This is a Python wrapper for the MeCab morphological analyzer for Japanese
text. It currently works with Python 3.8 and greater.
Note: If using MacOS Big Sur, you'll need to upgrade pip to version 20.3 or
higher to use wheels due to a pip issue.
issueを英語で書く必要はありません。
Note that Windows wheels require a Microsoft Visual C++ Redistributable, so be
sure to install that.
Basic usage
>>> import MeCab
>>> wakati = MeCab.Tagger("-Owakati")
>>> wakati.parse("pythonが大好きです").split()
['python', 'が', '大好き', 'です']
>>> tagger = MeCab.Tagger()
>>> print(tagger.parse("pythonが大好きです"))
python python python python 名詞-普通名詞-一般
が ガ ガ が 助詞-格助詞
大好き ダイスキ ダイスキ 大好き 形状詞-一般
です デス デス です 助動詞 助動詞-デス 終止形-一般
EOS
The API for mecab-python3 closely follows the API for MeCab itself, even when
this makes it not very “Pythonic.” Please consult the official MeCab
documentation for more information.
Installation
Binary wheels are available for MacOS X, Linux, and Windows (64bit) and are
installed by default when you use pip :
pip install mecab-python3
These wheels include a copy of the MeCab library, but not a dictionary. In order
to use MeCab you'll need to install a dictionary. unidic-lite is a good one to
start with:
pip install unidic-lite
To build from source using pip,
pip install --no-binary :all: mecab-python3
Dictionaries
In order to use MeCab, you must install a dictionary. There are many different
dictionaries available for MeCab. These UniDic packages, which include slight
modifications for ease of use, are recommended:
unidic: The latest full UniDic.
unidic-lite: A slightly modified UniDic 2.1.2, chosen for its small size.
The dictionaries below are not recommended due to being unmaintained for
many years, but they are available for use with legacy applications.
ipadic
jumandic
For more details on the differences between dictionaries see here.
Common Issues
If you get a RuntimeError when you try to run MeCab, here are some things to check:
Windows Redistributable
You have to install this to use this package on Windows.
Installing a Dictionary
Run pip install unidic-lite and confirm that works. If that fixes your problem,
you either don't have a dictionary installed, or you need to specify your
dictionary path like this:
tagger = MeCab.Tagger('-r /dev/null -d /usr/local/lib/mecab/dic/mydic')
Note: on Windows, use nul instead of /dev/null . Alternately, if you have a
mecabrc you can use the path after -r .
Specifying a mecabrc
If you get this error:
error message: [ifs] no such file or directory: /usr/local/etc/mecabrc
You need to specify a mecabrc file. It's OK to specify an empty file, it just has to
exist. You can specify a mecabrc with -r . This may be necessary on Debian or
Ubuntu, where the mecabrc is in /etc/mecabrc .
You can specify an empty mecabrc like this:
tagger = MeCab.Tagger('-r/dev/null -d/home/hoge/mydic')
Using Unsupported Output Modes like -Ochasen
Chasen output is not a built-in feature of MeCab, you must specify it in your
dicrc or mecabrc . Notably, Unidic does not include Chasen output format.
Please see the MeCab documentation.
Alternatives
fugashi is a Cython wrapper for MeCab with a Pythonic interface, by the
current maintainer of this library
SudachiPy is a modern tokenizer with an actively maintained dictionary
pymecab-ko is a wrapper of the Korean MeCab fork mecab-ko based on
mecab-python3
KoNLPy is a library for Korean NLP that includes a MeCab wrapper
Licensing
Like MeCab itself, mecab-python3 is copyrighted free software by Taku Kudo
[email protected] and Nippon Telegraph and Telephone Corporation, and is
distributed under a 3-clause BSD license (see the file BSD). Alternatively, it may
be redistributed under the terms of the GNU General Public License, version 2
(see the file GPL) or the GNU Lesser General Public License, version 2.1 (see
the file LGPL).
| [](https://pypi.org/project/mecab-python3/)

[](https://pypi.org/project/mecab-python3/)

# mecab-python3
This is a Python wrapper for the [MeCab][] morphological analyzer for Japanese
text. It currently works with Python 3.8 and greater.
**Note:** If using MacOS Big Sur, you'll need to upgrade pip to version 20.3 or
higher to use wheels due to a pip issue.
**issueを英語で書く必要はありません。**
[MeCab]: https://taku910.github.io/mecab/
Note that Windows wheels require a [Microsoft Visual C++
Redistributable][msvc], so be sure to install that.
[msvc]: https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads
# Basic usage
```py
>>> import MeCab
>>> wakati = MeCab.Tagger("-Owakati")
>>> wakati.parse("pythonが大好きです").split()
['python', 'が', '大好き', 'です']
>>> tagger = MeCab.Tagger()
>>> print(tagger.parse("pythonが大好きです"))
python python python python 名詞-普通名詞-一般
が ガ ガ が 助詞-格助詞
大好き ダイスキ ダイスキ 大好き 形状詞-一般
です デス デス です 助動詞 助動詞-デス 終止形-一般
EOS
```
The API for `mecab-python3` closely follows the API for MeCab itself,
even when this makes it not very “Pythonic.” Please consult the [official MeCab
documentation][mecab-docs] for more information.
[mecab-docs]: https://taku910.github.io/mecab/
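Beyond one-shot parsing, the wrapper also exposes MeCab's node-by-node traversal. The snippet below is not part of the original README; it is a minimal sketch assuming the standard `parseToNode` interface of the MeCab bindings.

```py
import MeCab

tagger = MeCab.Tagger()
node = tagger.parseToNode("pythonが大好きです")
while node:
    # Skip the BOS/EOS sentinel nodes, which have an empty surface.
    if node.surface:
        print(node.surface, node.feature)
    node = node.next
```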
# Installation
Binary wheels are available for MacOS X, Linux, and Windows (64bit) and are
installed by default when you use `pip`:
```sh
pip install mecab-python3
```
These wheels include a copy of the MeCab library, but not a dictionary. In
order to use MeCab you'll need to install a dictionary. `unidic-lite` is a good
one to start with:
```sh
pip install unidic-lite
```
To build from source using pip,
```sh
pip install --no-binary :all: mecab-python3
```
## Dictionaries
In order to use MeCab, you must install a dictionary. There are many different dictionaries available for MeCab. These UniDic packages, which include slight modifications for ease of use, are recommended:
- [unidic](https://github.com/polm/unidic-py): The latest full UniDic.
- [unidic-lite](https://github.com/polm/unidic-lite): A slightly modified UniDic 2.1.2, chosen for its small size.
The dictionaries below are not recommended due to being unmaintained for many years, but they are available for use with legacy applications.
- [ipadic](https://github.com/polm/ipadic-py)
- [jumandic](https://github.com/polm/jumandic-py)
For more details on the differences between dictionaries see [here](https://www.dampfkraft.com/nlp/japanese-tokenizer-dictionaries.html).
# Common Issues
If you get a `RuntimeError` when you try to run MeCab, here are some things to check:
## Windows Redistributable
You have to install [this][msvc] to use this package on Windows.
## Installing a Dictionary
Run `pip install unidic-lite` and confirm that works. If that fixes your
problem, you either don't have a dictionary installed, or you need to specify
your dictionary path like this:
tagger = MeCab.Tagger('-r /dev/null -d /usr/local/lib/mecab/dic/mydic')
Note: on Windows, use `nul` instead of `/dev/null`. Alternately, if you have a
`mecabrc` you can use the path after `-r`.
## Specifying a mecabrc
If you get this error:
error message: [ifs] no such file or directory: /usr/local/etc/mecabrc
You need to specify a `mecabrc` file. It's OK to specify an empty file, it just
has to exist. You can specify a `mecabrc` with `-r`. This may be necessary on
Debian or Ubuntu, where the `mecabrc` is in `/etc/mecabrc`.
You can specify an empty `mecabrc` like this:
tagger = MeCab.Tagger('-r/dev/null -d/home/hoge/mydic')
## Using Unsupported Output Modes like `-Ochasen`
Chasen output is not a built-in feature of MeCab, you must specify it in your
`dicrc` or `mecabrc`. Notably, Unidic does not include Chasen output format.
Please see [the MeCab documentation](https://taku910.github.io/mecab/#format).
# Alternatives
- [fugashi](https://github.com/polm/fugashi) is a Cython wrapper for MeCab with a Pythonic interface, by the current maintainer of this library
- [SudachiPy](https://github.com/WorksApplications/sudachi.rs) is a modern tokenizer with an actively maintained dictionary
- [pymecab-ko](https://github.com/NoUnique/pymecab-ko) is a wrapper of the Korean MeCab fork [mecab-ko](https://bitbucket.org/eunjeon/mecab-ko/src/master/) based on mecab-python3
- [KoNLPy](https://konlpy.org/en/latest/) is a library for Korean NLP that includes a MeCab wrapper
# Licensing
Like MeCab itself, `mecab-python3` is copyrighted free software by
Taku Kudo <[email protected]> and Nippon Telegraph and Telephone Corporation,
and is distributed under a 3-clause BSD license (see the file `BSD`).
Alternatively, it may be redistributed under the terms of the
GNU General Public License, version 2 (see the file `GPL`) or the
GNU Lesser General Public License, version 2.1 (see the file `LGPL`).
| [
"Morphology",
"Syntactic Text Processing",
"Tagging",
"Text Segmentation"
] | [] |
true | https://github.com/hakatashi/kyujitai.js | 2014-09-06T08:05:01Z | Utility collections for making Japanese text old-fashioned | hakatashi / kyujitai.js
Utility collections for making Japanese text old-fashioned.
kyujitai.js
install
npm install kyujitai
Use
const Kyujitai = require('kyujitai');
const kyujitai = new Kyujitai((error) => {
kyujitai.encode('旧字体'); //=> '舊字體'
});
Usage
new Kyujitai([options], [callback])
Constructor.
options : [Object]
callback : [Function(error)] Called when construction completed.
error : [Error] Supplied if construction failed.
kyujitai.encode(string, [options])
Encode string from shinjitai to kyujitai.
string : [String] Input string
options : [Object]
options.IVD : [Boolean] true if you want to allow IVS for the encoded string. Default is false.
Returns: [String] Output string
kyujitai.encode('旧字体'); //=> '舊字體'
kyujitai.encode('画期的図画'); //=> '劃期的圖畫'
kyujitai.encode('弁明'); //=> '辯明'
kyujitai.encode('弁償'); //=> '辨償'
kyujitai.encode('花弁'); //=> '花瓣'
kyujitai.encode('弁髪'); //=> '辮髮'
kyujitai.decode(string, [options])
Decode string from kyujitai to shinjitai.
string : [String] Input string
| # kyujitai.js
[](https://travis-ci.org/hakatashi/kyujitai.js)
[](https://greenkeeper.io/)
Utility collections for making Japanese text old-fashioned.
## install
npm install kyujitai
## Use
```javascript
const Kyujitai = require('kyujitai');
const kyujitai = new Kyujitai((error) => {
kyujitai.encode('旧字体'); //=> '舊字體'
});
```
## Usage
### new Kyujitai([options], [callback])
Constructor.
* `options`: [Object]
* `callback`: [Function(error)] Called when construction completed.
- `error`: [Error] Supplied if construction failed.
### kyujitai.encode(string, [options])
Encode string from shinjitai to kyujitai.
* `string`: [String] Input string
* `options`: [Object]
- `options.IVD`: [Boolean] `true` if you want to allow IVS for the encoded string. Default is false.
* Returns: [String] Output string
```javascript
kyujitai.encode('旧字体'); //=> '舊字體'
kyujitai.encode('画期的図画'); //=> '劃期的圖畫'
kyujitai.encode('弁明'); //=> '辯明'
kyujitai.encode('弁償'); //=> '辨償'
kyujitai.encode('花弁'); //=> '花瓣'
kyujitai.encode('弁髪'); //=> '辮髮'
```
### kyujitai.decode(string, [options])
Decode string from kyujitai to shinjitai.
* `string`: [String] Input string
* `options`: [Object]
* Returns: [String] Output string
| [
"Low-Resource NLP",
"Syntactic Text Processing"
] | [] |
true | https://github.com/ikegami-yukino/rakutenma-python | 2015-01-01T21:40:43Z | Rakuten MA (Python version) | ikegami-yukino / rakutenma-python
Rakuten MA Python
Rakuten MA Python (morphological analyzer) is a Python version of Rakuten MA (word segmentor + PoS Tagger) for Chinese and Japanese.
For details about Rakuten MA, See https://github.com/rakuten-nlp/rakutenma
See also http://qiita.com/yukinoi/items/925bc238185aa2fad8a7 (In Japanese)
Contributions are welcome!
Installation
pip install rakutenma
Example
from rakutenma import RakutenMA
# Initialize a RakutenMA instance with an empty model
# the default ja feature set is set already
rma = RakutenMA()
# Let's analyze a sample sentence (from http://tatoeba.org/jpn/sentences/show/103809)
# With a disastrous result, since the model is empty!
print(rma.tokenize("彼は新しい仕事できっと成功するだろう。"))
# Feed the model with ten sample sentences from tatoeba.com
# "tatoeba.json" is available at https://github.com/rakuten-nlp/rakutenma
import json
tatoeba = json.load(open("tatoeba.json"))
for i in tatoeba:
    rma.train_one(i)
# Now what does the result look like?
print(rma.tokenize("彼は新しい仕事できっと成功するだろう。"))
# Initialize a RakutenMA instance with a pre-trained model
rma = RakutenMA(phi=1024, c=0.007812) # Specify hyperparameter for SCW (for demonstration purpose)
rma.load("model_ja.json")
# Set the feature hash function (15bit)
rma.hash_func = rma.create_hash_func(15)
# Tokenize one sample sentence
print(rma.tokenize("うらにわにはにわにわとりがいる"));
# Re-train the model feeding the right answer (pairs of [token, PoS tag])
res = rma.train_one(
    [["うらにわ","N-nc"],
    ["に","P-k"],
    ["は","P-rj"],
    ["にわ","N-n"],
    ["にわとり","N-nc"],
    ["が","P-k"],
    ["いる","V-c"]])
# The result of train_one contains:
# sys: the system output (using the current model)
# ans: answer fed by the user
# update: whether the model was updated
print(res)
# Now what does the result look like?
print(rma.tokenize("うらにわにはにわにわとりがいる"))
NOTE
Added API
As compared to original RakutenMA, following methods are added:
RakutenMA::load(model_path) - Load model from JSON file
RakutenMA::save(model_path) - Save model to path
misc
As initial setting, following values are set:
rma.featset = CTYPE_JA_PATTERNS # RakutenMA.default_featset_ja
rma.hash_func = rma.create_hash_func(15)
rma.tag_scheme = "SBIEO" # if using Chinese, set "IOB2"
LICENSE
Apache License version 2.0
Copyright
Rakuten MA Python (c) 2015- Yukino Ikegami. All Rights Reserved.
Rakuten MA (original) (c) 2014 Rakuten NLP Project. All Rights Reserved.
| Rakuten MA Python
===================
|travis| |coveralls| |pyversion| |version| |landscape| |license|
Rakuten MA Python (morphological analyzer) is a Python version of Rakuten MA (word segmentor + PoS Tagger) for Chinese and Japanese.
For details about Rakuten MA, See https://github.com/rakuten-nlp/rakutenma
See also http://qiita.com/yukinoi/items/925bc238185aa2fad8a7 (In Japanese)
Contributions are welcome!
Installation
==============
::
pip install rakutenma
Example
===========
.. code:: python
from rakutenma import RakutenMA
# Initialize a RakutenMA instance with an empty model
# the default ja feature set is set already
rma = RakutenMA()
# Let's analyze a sample sentence (from http://tatoeba.org/jpn/sentences/show/103809)
# With a disastrous result, since the model is empty!
print(rma.tokenize("彼は新しい仕事できっと成功するだろう。"))
# Feed the model with ten sample sentences from tatoeba.com
# "tatoeba.json" is available at https://github.com/rakuten-nlp/rakutenma
import json
tatoeba = json.load(open("tatoeba.json"))
for i in tatoeba:
rma.train_one(i)
# Now what does the result look like?
print(rma.tokenize("彼は新しい仕事できっと成功するだろう。"))
# Initialize a RakutenMA instance with a pre-trained model
rma = RakutenMA(phi=1024, c=0.007812) # Specify hyperparameter for SCW (for demonstration purpose)
rma.load("model_ja.json")
# Set the feature hash function (15bit)
rma.hash_func = rma.create_hash_func(15)
# Tokenize one sample sentence
print(rma.tokenize("うらにわにはにわにわとりがいる"));
# Re-train the model feeding the right answer (pairs of [token, PoS tag])
res = rma.train_one(
[["うらにわ","N-nc"],
["に","P-k"],
["は","P-rj"],
["にわ","N-n"],
["にわとり","N-nc"],
["が","P-k"],
["いる","V-c"]])
# The result of train_one contains:
# sys: the system output (using the current model)
# ans: answer fed by the user
# update: whether the model was updated
print(res)
# Now what does the result look like?
print(rma.tokenize("うらにわにはにわにわとりがいる"))
NOTE
===========
Added API
--------------
As compared to original RakutenMA, following methods are added:
- RakutenMA::load(model_path)
- Load model from JSON file
- RakutenMA::save(model_path)
- Save model to path
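For example, these methods can be used to round-trip a trained model (an illustrative sketch, not from the original README; the file name is a placeholder):

.. code-block:: python

    from rakutenma import RakutenMA

    rma = RakutenMA()
    # ... train the model with rma.train_one(...) as shown above ...
    rma.save("my_model.json")   # write the current model to a JSON file

    rma2 = RakutenMA()
    rma2.load("my_model.json")  # restore the model in a fresh instance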
misc
--------------
As initial setting, following values are set:
- rma.featset = CTYPE_JA_PATTERNS # RakutenMA.default_featset_ja
- rma.hash_func = rma.create_hash_func(15)
- rma.tag_scheme = "SBIEO" # if using Chinese, set "IOB2"
LICENSE
=========
Apache License version 2.0
Copyright
=============
Rakuten MA Python
(c) 2015- Yukino Ikegami. All Rights Reserved.
Rakuten MA (original)
(c) 2014 Rakuten NLP Project. All Rights Reserved.
.. |travis| image:: https://travis-ci.org/ikegami-yukino/rakutenma-python.svg?branch=master
:target: https://travis-ci.org/ikegami-yukino/rakutenma-python
:alt: travis-ci.org
.. |coveralls| image:: https://coveralls.io/repos/ikegami-yukino/rakutenma-python/badge.png
:target: https://coveralls.io/r/ikegami-yukino/rakutenma-python
:alt: coveralls.io
.. |pyversion| image:: https://img.shields.io/pypi/pyversions/rakutenma.svg
.. |version| image:: https://img.shields.io/pypi/v/rakutenma.svg
:target: http://pypi.python.org/pypi/rakutenma/
:alt: latest version
.. |landscape| image:: https://landscape.io/github/ikegami-yukino/rakutenma-python/master/landscape.svg?style=flat
:target: https://landscape.io/github/ikegami-yukino/rakutenma-python/master
:alt: Code Health
.. |license| image:: https://img.shields.io/pypi/l/rakutenma.svg
:target: http://pypi.python.org/pypi/rakutenma/
:alt: license
| [
"Morphology",
"Syntactic Text Processing",
"Tagging",
"Text Segmentation"
] | [] |
true | https://github.com/google/budoux | 2015-03-18T18:22:31Z | Standalone. Small. Language-neutral. BudouX is the successor to Budou, the machine learning powered line break organizer tool. | google / budoux
Standalone. Small. Language-neutral.
BudouX is the successor to Budou, the machine learning powered line break organizer tool.
BudouX
It is standalone. It works with no dependency on third-party word segmenters such as Google cloud natural language API.
It is small. It takes only around 15 KB including its machine learning model. It's reasonable to use it even on the client-side.
It is language-neutral. You can train a model for any language by feeding a dataset to BudouX’s training script.
Last but not least, BudouX supports HTML inputs.
https://google.github.io/budoux
Japanese
Simplified Chinese
Traditional Chinese
Thai
Korean uses spaces between words, so you can generally prevent words from being split across lines by applying the CSS property
word-break: keep-all to the paragraph, which should be much more performant than installing BudouX. That said, we're happy to
explore dedicated Korean language support if the above solution proves insufficient.
Python
JavaScript
Java
You can get a list of phrases by feeding a sentence to the parser. The easiest way to get a parser is loading the default parser for each language.
Japanese:
Demo
Natural languages supported by pretrained models
Korean support?
Supported Programming languages
Python module
Install
$ pip install budoux
Usage
Library
import budoux
parser = budoux.load_default_japanese_parser()
Simplified Chinese:
Traditional Chinese:
Thai:
You can also translate an HTML string to wrap phrases with non-breaking markup. The default parser uses zero-width space (U+200B)
to separate phrases.
Please note that separators are denoted as \u200b in the example above for illustrative purposes, but the actual output is an invisible
string as it's a zero-width space.
If you have a custom model, you can use it as follows.
A model file for BudouX is a JSON file that contains pairs of a feature and its score extracted by machine learning training. Each score
represents the significance of the feature in determining whether to break the sentence at a specific point.
For more details of the JavaScript model, please refer to JavaScript module README.
You can also format inputs on your terminal with budoux command.
print(parser.parse('今日は天気です。'))
# ['今日は', '天気です。']
import budoux
parser = budoux.load_default_simplified_chinese_parser()
print(parser.parse('今天是晴天。'))
# ['今天', '是', '晴天。']
import budoux
parser = budoux.load_default_traditional_chinese_parser()
print(parser.parse('今天是晴天。'))
# ['今天', '是', '晴天。']
import budoux
parser = budoux.load_default_thai_parser()
print(parser.parse('วันนี้อากาศดี'))
# ['วัน', 'นี้', 'อากาศ', 'ดี']
print(parser.translate_html_string('今日は<b>とても天気</b>です。'))
# <span style="word-break: keep-all; overflow-wrap: anywhere;">今日は<b>\u200bとても\u200b天気</b>です。</span>
with open('/path/to/your/model.json') as f:
model = json.load(f)
parser = budoux.Parser(model)
CLI
$ budoux 本日は晴天です。 # default: japanese
本日は
晴天です。
$ budoux -l ja 本日は晴天です。
本日は
晴天です。
$ budoux -l zh-hans 今天天气晴朗。
今天
天气
晴朗。
$ budoux -l zh-hant 今天天氣晴朗。
今天
天氣
晴朗。
Please note that separators are denoted as \u200b in the example above for illustrative purposes, but the actual output is an invisible
string as it's a zero-width space.
If you want to see help, run budoux -h .
BudouX supports HTML inputs and outputs HTML strings with markup that wraps phrases, but it's not meant to be used as an HTML
sanitizer. BudouX doesn't sanitize any inputs. Malicious HTML inputs yield malicious HTML outputs. Please use it with an
appropriate sanitizer library if you don't trust the input.
English text has many clues, like spacing and hyphenation, that enable beautiful and readable line breaks. However, some CJK
languages lack these clues, and so are notoriously more difficult to process. Line breaks can occur randomly and usually in the middle
of a word or a phrase without a more careful approach. This is a long-standing issue in typography on the Web, which results in a
degradation of readability.
Budou was proposed as a solution to this problem in 2016. It automatically translates CJK sentences into HTML with lexical phrases
wrapped in non-breaking markup, so as to semantically control line breaks. Budou has solved this problem to some extent, but it still
has some problems integrating with modern web production workflow.
$ budoux -l th วันนี้อากาศดี
วัน
นี้
อากาศ
ดี
$ echo $'本日は晴天です。\n明日は曇りでしょう。' | budoux
本日は
晴天です。
---
明日は
曇りでしょう。
$ budoux 本日は晴天です。 -H
<span style="word-break: keep-all; overflow-wrap: anywhere;">本日は\u200b晴天です。</span>
$ budoux -h
usage: budoux [-h] [-H] [-m JSON | -l LANG] [-d STR] [-V] [TXT]
BudouX is the successor to Budou,
the machine learning powered line break organizer tool.
positional arguments:
TXT text (default: None)
optional arguments:
-h, --help show this help message and exit
-H, --html HTML mode (default: False)
-m JSON, --model JSON custom model file path (default: /path/to/budoux/models/ja.json)
-l LANG, --lang LANG language of custom model (default: None)
-d STR, --delim STR output delimiter in TEXT mode (default: ---)
-V, --version show program's version number and exit
supported languages of `-l`, `--lang`:
- ja
- zh-hans
- zh-hant
- th
Caveat
Background
The biggest barrier in applying Budou to a website is that it has dependency on third-party word segmenters. Usually a word
segmenter is a large program that is infeasible to download for every web page request. It would also be an undesirable option making
a request to a cloud-based word segmentation service for every sentence, considering the speed and cost. That’s why we need a
standalone line break organizer tool equipped with its own segmentation engine small enough to be bundled in a client-side JavaScript
code.
BudouX is the successor to Budou, designed to be integrated with your website with no hassle.
BudouX uses the AdaBoost algorithm to segment a sentence into phrases by considering the task as a binary classification problem to
predict whether to break or not between all characters. It uses features such as the characters around the break point, their Unicode
blocks, and combinations of them to make a prediction. The output machine learning model, which is encoded as a JSON file, stores
pairs of the feature and its significance score. The BudouX parser takes a model file to construct a segmenter and translates input
sentences into a list of phrases.
You can build your own custom model for any language by preparing training data in the target language. A training dataset is a large
text file that consists of sentences separated by phrases with the separator symbol "▁" (U+2581) like below.
Assuming the text file is saved as mysource.txt , you can build your own custom model by running the following commands.
Please note that train.py takes time to complete depending on your computer resources. Good news is that the training algorithm is
an anytime algorithm, so you can get a weights file even if you interrupt the execution. You can build a valid model file by passing that
weights file to build_model.py even in such a case.
The default model for Japanese ( budoux/models/ja.json ) is built using the KNBC corpus. You can create a training dataset, which
we name source_knbc.txt below for example, from the corpus by running the following commands:
How it works
Building a custom model
私は▁遅刻魔で、▁待ち合わせに▁いつも▁遅刻してしまいます。
メールで▁待ち合わせ▁相手に▁一言、▁「ごめんね」と▁謝れば▁どうにか▁なると▁思っていました。
海外では▁ケータイを▁持っていない。
$ pip install .[dev]
$ python scripts/encode_data.py mysource.txt -o encoded_data.txt
$ python scripts/train.py encoded_data.txt -o weights.txt
$ python scripts/build_model.py weights.txt -o mymodel.json
Constructing a training dataset from the KNBC corpus for Japanese
$ curl -o knbc.tar.bz2 https://nlp.ist.i.kyoto-u.ac.jp/kuntt/KNBC_v1.0_090925_utf8.tar.bz2
| <!-- markdownlint-disable MD014 -->
# BudouX
[](https://pypi.org/project/budoux/) [](https://www.npmjs.com/package/budoux) [](https://mvnrepository.com/artifact/com.google.budoux/budoux)
Standalone. Small. Language-neutral.
BudouX is the successor to [Budou](https://github.com/google/budou), the machine learning powered line break organizer tool.

It is **standalone**. It works with no dependency on third-party word segmenters such as Google cloud natural language API.
It is **small**. It takes only around 15 KB including its machine learning model. It's reasonable to use it even on the client-side.
It is **language-neutral**. You can train a model for any language by feeding a dataset to BudouX’s training script.
Last but not least, BudouX supports HTML inputs.
## Demo
<https://google.github.io/budoux>
## Natural languages supported by pretrained models
- Japanese
- Simplified Chinese
- Traditional Chinese
- Thai
### Korean support?
Korean uses spaces between words, so you can generally prevent words from being split across lines by applying the CSS property `word-break: keep-all` to the paragraph, which should be much more performant than installing BudouX.
That said, we're happy to explore dedicated Korean language support if the above solution proves insufficient.
## Supported Programming languages
- Python
- [JavaScript](https://github.com/google/budoux/tree/main/javascript/)
- [Java](https://github.com/google/budoux/tree/main/java/)
## Python module
### Install
```shellsession
$ pip install budoux
```
### Usage
#### Library
You can get a list of phrases by feeding a sentence to the parser.
The easiest way to get a parser is loading the default parser for each language.
**Japanese:**
```python
import budoux
parser = budoux.load_default_japanese_parser()
print(parser.parse('今日は天気です。'))
# ['今日は', '天気です。']
```
**Simplified Chinese:**
```python
import budoux
parser = budoux.load_default_simplified_chinese_parser()
print(parser.parse('今天是晴天。'))
# ['今天', '是', '晴天。']
```
**Traditional Chinese:**
```python
import budoux
parser = budoux.load_default_traditional_chinese_parser()
print(parser.parse('今天是晴天。'))
# ['今天', '是', '晴天。']
```
**Thai:**
```python
import budoux
parser = budoux.load_default_thai_parser()
print(parser.parse('วันนี้อากาศดี'))
# ['วัน', 'นี้', 'อากาศ', 'ดี']
```
You can also translate an HTML string to wrap phrases with non-breaking markup.
The default parser uses zero-width space (U+200B) to separate phrases.
```python
print(parser.translate_html_string('今日は<b>とても天気</b>です。'))
# <span style="word-break: keep-all; overflow-wrap: anywhere;">今日は<b>\u200bとても\u200b天気</b>です。</span>
```
Please note that separators are denoted as `\u200b` in the example above for
illustrative purposes, but the actual output is an invisible string as it's a
zero-width space.
If you have a custom model, you can use it as follows.
```python
with open('/path/to/your/model.json') as f:
model = json.load(f)
parser = budoux.Parser(model)
```
A model file for BudouX is a JSON file that contains pairs of a feature and its score extracted by machine learning training.
Each score represents the significance of the feature in determining whether to break the sentence at a specific point.
For more details of the JavaScript model, please refer to [JavaScript module README](https://github.com/google/budoux/tree/main/javascript/README.md).
#### CLI
You can also format inputs on your terminal with `budoux` command.
```shellsession
$ budoux 本日は晴天です。 # default: japanese
本日は
晴天です。
$ budoux -l ja 本日は晴天です。
本日は
晴天です。
$ budoux -l zh-hans 今天天气晴朗。
今天
天气
晴朗。
$ budoux -l zh-hant 今天天氣晴朗。
今天
天氣
晴朗。
$ budoux -l th วันนี้อากาศดี
วัน
นี้
อากาศ
ดี
```
```shellsession
$ echo $'本日は晴天です。\n明日は曇りでしょう。' | budoux
本日は
晴天です。
---
明日は
曇りでしょう。
```
```shellsession
$ budoux 本日は晴天です。 -H
<span style="word-break: keep-all; overflow-wrap: anywhere;">本日は\u200b晴天です。</span>
```
Please note that separators are denoted as `\u200b` in the example above for
illustrative purposes, but the actual output is an invisible string as it's a
zero-width space.
If you want to see help, run `budoux -h`.
```shellsession
$ budoux -h
usage: budoux [-h] [-H] [-m JSON | -l LANG] [-d STR] [-V] [TXT]
BudouX is the successor to Budou,
the machine learning powered line break organizer tool.
positional arguments:
TXT text (default: None)
optional arguments:
-h, --help show this help message and exit
-H, --html HTML mode (default: False)
-m JSON, --model JSON custom model file path (default: /path/to/budoux/models/ja.json)
-l LANG, --lang LANG language of custom model (default: None)
-d STR, --delim STR output delimiter in TEXT mode (default: ---)
-V, --version show program's version number and exit
supported languages of `-l`, `--lang`:
- ja
- zh-hans
- zh-hant
- th
```
## Caveat
BudouX supports HTML inputs and outputs HTML strings with markup that wraps phrases, but it's not meant to be used as an HTML sanitizer. **BudouX doesn't sanitize any inputs.** Malicious HTML inputs yield malicious HTML outputs. Please use it with an appropriate sanitizer library if you don't trust the input.
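As one way to follow that advice, untrusted markup can be cleaned before it is handed to BudouX. The example below is not from the original README and assumes the third-party `bleach` sanitizer purely for illustration; any HTML sanitizer can be used the same way.

```python
import bleach
import budoux

parser = budoux.load_default_japanese_parser()

untrusted = '今日は<script>alert(1)</script><b>とても天気</b>です。'
# Remove tags outside a small allowlist before line-break processing.
safe_html = bleach.clean(untrusted, tags={'b', 'span'}, strip=True)
print(parser.translate_html_string(safe_html))
```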
## Background
English text has many clues, like spacing and hyphenation, that enable beautiful and readable line breaks. However, some CJK languages lack these clues, and so are notoriously more difficult to process. Line breaks can occur randomly and usually in the middle of a word or a phrase without a more careful approach. This is a long-standing issue in typography on the Web, which results in a degradation of readability.
Budou was proposed as a solution to this problem in 2016. It automatically translates CJK sentences into HTML with lexical phrases wrapped in non-breaking markup, so as to semantically control line breaks. Budou has solved this problem to some extent, but it still has some problems integrating with modern web production workflow.
The biggest barrier in applying Budou to a website is that it has dependency on third-party word segmenters. Usually a word segmenter is a large program that is infeasible to download for every web page request. It would also be an undesirable option making a request to a cloud-based word segmentation service for every sentence, considering the speed and cost. That’s why we need a standalone line break organizer tool equipped with its own segmentation engine small enough to be bundled in a client-side JavaScript code.
Budou*X* is the successor to Budou, designed to be integrated with your website with no hassle.
## How it works
BudouX uses the [AdaBoost algorithm](https://en.wikipedia.org/wiki/AdaBoost) to segment a sentence into phrases by considering the task as a binary classification problem to predict whether to break or not between all characters. It uses features such as the characters around the break point, their Unicode blocks, and combinations of them to make a prediction. The output machine learning model, which is encoded as a JSON file, stores pairs of the feature and its significance score. The BudouX parser takes a model file to construct a segmenter and translates input sentences into a list of phrases.
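To make the scoring idea concrete, here is a toy sketch (this is not BudouX's actual implementation; the feature templates and weights below are invented for illustration): the scores of the features observed at a candidate boundary are summed, and a break is inserted when the total crosses a threshold.

```python
def segment(text, model, threshold=0.0):
    """Toy score-based segmenter in the spirit of the description above."""
    if not text:
        return []
    chunks = [text[0]]
    for i in range(1, len(text)):
        # Toy features: the characters just before and after the boundary.
        features = [f'prev:{text[i - 1]}', f'next:{text[i]}']
        score = sum(model.get(f, 0.0) for f in features)
        if score > threshold:
            chunks.append(text[i])   # start a new phrase
        else:
            chunks[-1] += text[i]    # extend the current phrase
    return chunks

toy_model = {'prev:は': 1.5, 'prev:。': 2.0}  # made-up weights
print(segment('今日は天気です。', toy_model))  # => ['今日は', '天気です。']
```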
## Building a custom model
You can build your own custom model for any language by preparing training data in the target language.
A training dataset is a large text file that consists of sentences separated by phrases with the separator symbol "▁" (U+2581) like below.
```text
私は▁遅刻魔で、▁待ち合わせに▁いつも▁遅刻してしまいます。
メールで▁待ち合わせ▁相手に▁一言、▁「ごめんね」と▁謝れば▁どうにか▁なると▁思っていました。
海外では▁ケータイを▁持っていない。
```
Assuming the text file is saved as `mysource.txt`, you can build your own custom model by running the following commands.
```shellsession
$ pip install .[dev]
$ python scripts/encode_data.py mysource.txt -o encoded_data.txt
$ python scripts/train.py encoded_data.txt -o weights.txt
$ python scripts/build_model.py weights.txt -o mymodel.json
```
Please note that `train.py` takes time to complete depending on your computer resources.
Good news is that the training algorithm is an [anytime algorithm](https://en.wikipedia.org/wiki/Anytime_algorithm), so you can get a weights file even if you interrupt the execution. You can build a valid model file by passing that weights file to `build_model.py` even in such a case.
## Constructing a training dataset from the KNBC corpus for Japanese
The default model for Japanese (`budoux/models/ja.json`) is built using the [KNBC corpus](https://nlp.ist.i.kyoto-u.ac.jp/kuntt/).
You can create a training dataset, which we name `source_knbc.txt` below for example, from the corpus by running the following commands:
```shellsession
$ curl -o knbc.tar.bz2 https://nlp.ist.i.kyoto-u.ac.jp/kuntt/KNBC_v1.0_090925_utf8.tar.bz2
$ tar -xf knbc.tar.bz2 # outputs KNBC_v1.0_090925_utf8 directory
$ python scripts/prepare_knbc.py KNBC_v1.0_090925_utf8 -o source_knbc.txt
```
## Author
[Shuhei Iitsuka](https://tushuhei.com)
## Disclaimer
This is not an officially supported Google product.
| [
"Chunking",
"Syntactic Text Processing",
"Text Segmentation"
] | [] |
true | https://github.com/scriptin/topokanji | 2015-05-28T17:52:28Z | Topologically ordered lists of kanji for effective learning | scriptin / topokanji
30 seconds explanation for people who want to
learn kanji:
It is best to learn kanji starting from simple
characters and then learning complex ones as
compositions of "parts", which are called "radicals"
or "components". For example:
一 → 二 → 三
丨 → 凵 → 山 → 出
言 → 五 → 口 → 語
It is also smart to learn more common kanji first.
TopoKanji
This project is based on those two ideas and
provides properly ordered lists of kanji to make
your learning process as fast, simple, and effective
as possible.
Motivation for this project initially came from reading this
article: The 5 Biggest Mistakes People Make When
Learning Kanji.
First 100 kanji from lists/aozora.txt (formatted for
convenience):
These lists can be found in lists directory. They only
differ in order of kanji. Each file contains a list of kanji,
ordered as described in following sections. There are
few options (see Used data for details):
aozora.(json|txt) - ordered by kanji frequency
in Japanese fiction and non-fiction books; I
recommend this list if you're starting to learn kanji
news.(json|txt) - ordered by kanji frequency in
online news
twitter.(json|txt) - ordered by kanji frequency
in Twitter messages
wikipedia.(json|txt) - ordered by kanji
frequency in Wikipedia articles
all.(json|txt) - combined "average" version of
all previous; this one is experimental, I don't
recommend using it
You can use these lists to build an Anki deck or just as a
guidance. If you're looking for "names" or meanings of
kanji, you might want to check my kanji-keys project.
人一丨口日目儿見凵山
出十八木未丶来大亅了
子心土冂田思二丁彳行
寸寺時卜上丿刀分厶禾
私中彐尹事可亻何自乂
又皮彼亠方生月門間扌
手言女本乙气気干年三
耂者刂前勹勿豕冖宀家
今下白勺的云牛物立小
文矢知入乍作聿書学合
If you look at a kanji like 語, you can see it consists of at
least three distinct parts: 言, 五, 口. Those are kanji by
themselves too. The idea behind this project is to find
the order of about 2000-2500 common kanji, in which no
kanji appears before its' parts, so you only learn a new
kanji when you already know its' components.
1. No kanji appears before its parts (components).
In fact, if you treat kanji as nodes in a graph
structure, and connect them with directed edges,
where each edge means "kanji A includes kanji B as
a component", it all forms a directed acyclic graph
(DAG). For any DAG, it is possible to build a
topological order, which is basically what "no kanji
appears before its parts" means.
2. More common kanji come first. That way you
learn useful characters as soon as possible.
Topological sorting is done by using a modified version
of Kahn (1962) algorithm with intermediate sorting step
which deals with the second property above. This
intermediate sorting uses the "weight" of each character:
common kanji (lighter) tend to appear before rare kanji
(heavier). See source code for details.
Initial unsorted list contains only kanji which are present
in KanjiVG project, so for each character there is data
on its shape and stroke order.
Characters are split into components using CJK
Decompositions Data project, along with "fixes" to
simplify final lists and avoid characters which are not
present in initial list.
Statistical data of kanji usage frequencies was collected
by processing raw textual data from various sources.
See kanji-frequency repository for details.
What is a properly ordered list of
kanji?
Properties of properly ordered lists
Algorithm
Used data
Kanji list covers about 95-99% of kanji found in various
Japanese texts. Generally, the goal is to provide something
similar to Jōyō kanji, but based on actual data. Radicals
are also included, but only those which are parts of
some kanji in the list.
Kanji/radical must NOT appear in this list if it is:
not included in KanjiVG character set
primarily used in names (people, places, etc.) or in
some specific terms (religion, mythology, etc.)
mostly used because of its shape, e.g. a part of text
emoticons/kaomoji like ( ^ω^)个
a part of currently popular meme,
manga/anime/dorama/movie title, #hashtag, etc.,
and otherwise is not commonly used
Files in lists directory are final lists.
*.txt files contain lists as plain text, one
character per line; those files can be interpreted as
CSV/TSV files with a single column
*.json files contain lists as JSON arrays
All files are encoded in UTF-8, without byte order mark
(BOM), and have unix-style line endings, LF .
Files in dependencies directory are "flat" equivalents of
CJK-decompositions (see below). "Dependency" here
roughly means "a component of the visual
decomposition" for kanji.
1-to-1.txt has a format compatible with tsort
command line utility; first character in each line is
"target" kanji, second character is target's
dependency or 0
1-to-1.json contains a JSON array with the
same data as in 1-to-1.txt
Which kanji are (not) included?
Files and formats
lists directory
dependencies directory
1-to-N.txt is similar, but lists all "dependencies" at
once
1-to-N.json contains a JSON object with the
same data as in 1-to-N.txt
All files are encoded in UTF-8, without byte order mark
(BOM), and have unix-style line endings, LF .
kanji.json - data for kanji included in final
ordered lists, including radicals
kanjivg.txt - list of kanji from KanjiVG
cjk-decomp-{VERSION}.txt - data from CJK
Decompositions Data, without any modifications
cjk-decomp-override.txt - data to override some
CJK's decompositions
kanji-frequency/*.json - kanji frequency tables
All files are encoded in UTF-8, without byte order mark
(BOM). All files, except for cjk-decomp-{VERSION}.txt ,
have unix-style line endings, LF .
Contains table with data for kanji, including radicals.
Columns are:
1. Character itself
2. Stroke count
3. Frequency flag:
true if it is a common kanji
false if it is primarily used as a
radical/component and unlikely to be seen
within top 3000 in kanji usage frequency tables.
In this case character is only listed because it's
useful for decomposition, not as a standalone
kanji
Restrictions:
No duplicates
Each character must be listed in kanjivg.txt
Each character must be listed on the left hand side
in exactly one line in cjk-decomp-{VERSION}.txt
Each character may be listed on the left hand side
in exactly one line in cjk-decomp-override.txt
data directory
data/kanji.json
Simple list of characters which are present in KanjiVG
project. Those are from the list of *.svg files in
KanjiVG's Github repository.
Data file from CJK Decompositions Data project, see
description of its format.
Same format as cjk-decomp-{VERSION}.txt , except:
comments starting with # allowed
purpose of each record in this file is to override the
one from cjk-decomp-{VERSION}.txt
type of decomposition is always fix , which just
means "fix a record for the same character from
original file"
Special character 0 is used to distinguish invalid
decompositions (which lead to characters with no
graphical representation) from those which just can't be
decomposed further into something meaningful. For
example, 一:fix(0) means that this kanji can't be
further decomposed, since it's just a single stroke.
NOTE: Strictly speaking, records in this file are not
always "visual decompositions" (but most of them are).
Instead, it's just an attempt to provide meaningful
recommendations of kanji learning order.
See kanji-frequency repository for details.
You must have Node.js and Git installed
1. git clone https://github.com/THIS/REPO.git
2. npm install
3. node build.js + commands and arguments
described below
data/kanjivg.txt
data/cjk-decomp-{VERSION}.txt
data/cjk-decomp-override.txt
data/kanji-frequency/*.json
Usage
Command-line commands and arguments
show - only display sorted list without writing into
files
(optional) --per-line=NUM - explicitly tell how
many characters per line to display. 50 by
default. Applicable only to (no arguments)
(optional) --freq-table=TABLE_NAME - use
only one frequency table. Table names are file
names from data/kanji-frequency directory,
without .json extension, e.g. all
("combined" list), aozora , etc. When omitted,
all frequency tables are used
coverage - show tables coverage, i.e. which
fraction of characters from each frequency table is
included into kanji list
suggest-add - suggest kanji to add in a list, based
on coverage within kanji usage frequency tables
(required) --num=NUM - how many
(optional) --mean-type=MEAN_TYPE - same as
previous, sort by given mean type:
arithmetic (most "extreme"), geometric ,
harmonic (default, most "conservative"). See
Pythagorean means for details
suggest-remove - suggest kanji to remove from a
list, reverse of suggest-add
(required) --num=NUM - see above
(optional) --mean-type=MEAN_TYPE - see
above
save - update files with final lists
This is a multi-license project. Choose any license from
this list:
Apache-2.0 or any later version
License
| # TopoKanji
> **30 seconds explanation for people who want to learn kanji:**
>
> It is best to learn kanji starting from simple characters and then learning complex ones as compositions of "parts", which are called "radicals" or "components". For example:
>
> - 一 → 二 → 三
> - 丨 → 凵 → 山 → 出
> - 言 → 五 → 口 → 語
>
> It is also smart to learn more common kanji first.
>
> This project is based on those two ideas and provides properly ordered lists of kanji to make your learning process as fast, simple, and effective as possible.
Motivation for this project initially came from reading this article: [The 5 Biggest Mistakes People Make When Learning Kanji][mistakes].
First 100 kanji from [lists/aozora.txt](lists/aozora.txt) (formatted for convenience):
人一丨口日目儿見凵山
出十八木未丶来大亅了
子心土冂田思二丁彳行
寸寺時卜上丿刀分厶禾
私中彐尹事可亻何自乂
又皮彼亠方生月門間扌
手言女本乙气気干年三
耂者刂前勹勿豕冖宀家
今下白勺的云牛物立小
文矢知入乍作聿書学合
These lists can be found in [`lists` directory](lists). They only differ in order of kanji. Each file contains a list of kanji, ordered as described in following sections. There are few options (see [Used data](#used-data) for details):
- `aozora.(json|txt)` - ordered by kanji frequency in Japanese fiction and non-fiction books; I recommend this list if you're starting to learn kanji
- `news.(json|txt)` - ordered by kanji frequency in online news
- `twitter.(json|txt)` - ordered by kanji frequency in Twitter messages
- `wikipedia.(json|txt)` - ordered by kanji frequency in Wikipedia articles
- `all.(json|txt)` - combined "average" version of all previous; this one is experimental, I don't recommend using it
You can use these lists to build an [Anki][] deck or just as a guidance. If you're looking for "names" or meanings of kanji, you might want to check my [kanji-keys](https://github.com/scriptin/kanji-keys) project.
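The list files are easy to consume programmatically. A minimal Python sketch (it assumes you have cloned the repository so that the `lists` directory is available; as described under Files and formats below, the `*.json` files are plain JSON arrays of characters):

```python
import json

# Load the frequency-ordered kanji list (a plain JSON array of characters).
with open("lists/aozora.json", encoding="utf-8") as f:
    kanji_list = json.load(f)

print(len(kanji_list), "characters in total")
print("First ten to learn:", "".join(kanji_list[:10]))

# The *.txt variant holds the same list, one character per line.
with open("lists/aozora.txt", encoding="utf-8") as f:
    kanji_list_txt = [line.strip() for line in f if line.strip()]
```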
## What is a properly ordered list of kanji?
If you look at a kanji like 語, you can see it consists of at least three distinct parts: 言, 五, 口. Those are kanji by themselves too. The idea behind this project is to find an order of about 2000-2500 common kanji in which no kanji appears before its parts, so you only learn a new kanji when you already know its components.
### Properties of properly ordered lists
1. **No kanji appears before its parts (components).** In fact, if you treat kanji as nodes in a [graph][] structure and connect them with directed edges, where each edge means "kanji A includes kanji B as a component", it all forms a [directed acyclic graph (DAG)][dag]. For any DAG, it is possible to build a [topological order][topsort], which is basically what "no kanji appears before its parts" means.
2. **More common kanji come first.** That way you learn useful characters as soon as possible.
### Algorithm
[Topological sorting][topsort] is done using a modified version of the [Kahn (1962) algorithm][kahn] with an intermediate sorting step which deals with the second property above. This intermediate sorting uses the "weight" of each character: common kanji (lighter) tend to appear before rare kanji (heavier). See the source code for details.
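The following Python sketch illustrates the idea; the repository's actual implementation is in JavaScript (see `build.js` and the `lib` directory), so treat this only as a rough illustration of Kahn's algorithm with a frequency-weighted "ready" set:

```python
import heapq

def order_kanji(components, weight):
    """Topologically order kanji so that no kanji precedes its components.

    components: dict mapping every kanji to the set of its components
                (use an empty set for characters that cannot be decomposed).
    weight:     dict mapping kanji to a number; lower = more frequent,
                so lighter kanji are emitted first among the available ones.
    """
    pending = {k: len(parts) for k, parts in components.items()}
    dependents = {}
    for kanji, parts in components.items():
        for part in parts:
            dependents.setdefault(part, []).append(kanji)

    # Kahn's "ready" set, kept as a min-heap keyed by frequency weight.
    ready = [(weight.get(k, 1.0), k) for k, n in pending.items() if n == 0]
    heapq.heapify(ready)

    order = []
    while ready:
        _, kanji = heapq.heappop(ready)
        order.append(kanji)
        for dep in dependents.get(kanji, []):
            pending[dep] -= 1
            if pending[dep] == 0:
                heapq.heappush(ready, (weight.get(dep, 1.0), dep))

    if len(order) != len(components):
        raise ValueError("cycle detected: the decomposition data is not a DAG")
    return order

# Tiny example mirroring 語 = 言 + 五 + 口 from the introduction.
print(order_kanji(
    {"言": set(), "五": set(), "口": set(), "語": {"言", "五", "口"}},
    {"口": 0.1, "言": 0.3, "五": 0.5, "語": 0.4},
))
```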
## Used data
The initial unsorted list contains only kanji which are present in the [KanjiVG][] project, so for each character there is data on its shape and stroke order.
Characters are split into components using [CJK Decompositions Data][cjk] project, along with "fixes" to simplify final lists and avoid characters which are not present in initial list.
Statistical data of kanji usage frequencies was collected by processing raw textual data from various sources. See [kanji-frequency][] repository for details.
## Which kanji are (not) included?
The kanji list covers about 95-99% of kanji found in various Japanese texts. Generally, the goal is to provide something similar to [Jōyō kanji][jouyou], but based on actual data. Radicals are also included, but only those which are parts of some kanji in the list.
Kanji/radical must **NOT** appear in this list if it is:
- not included in KanjiVG character set
- primarily used in names (people, places, etc.) or in some specific terms (religion, mythology, etc.)
- mostly used because of its shape, e.g. a part of text emoticons/kaomoji like `( ^ω^)个`
- a part of currently popular meme, manga/anime/dorama/movie title, #hashtag, etc., and otherwise is not commonly used
## Files and formats
### `lists` directory
Files in `lists` directory are final lists.
- `*.txt` files contain lists as plain text, one character per line; those files can be interpreted as CSV/TSV files with a single column
- `*.json` files contain lists as [JSON][] arrays
All files are encoded in UTF-8, without [byte order mark (BOM)][bom], and have unix-style [line endings][eol], `LF`.
### `dependencies` directory
Files in `dependencies` directory are "flat" equivalents of CJK-decompositions (see below). "Dependency" here roughly means "a component of the visual decomposition" for kanji.
- `1-to-1.txt` has a format compatible with [tsort][] command line utility; first character in each line is "target" kanji, second character is target's dependency or `0`
- `1-to-1.json` contains a JSON array with the same data as in `1-to-1.txt`
- `1-to-N.txt` is similar, but lists all "dependencies" at once
- `1-to-N.json` contains a JSON object with the same data as in `1-to-N.txt`
All files are encoded in UTF-8, without [byte order mark (BOM)][bom], and have unix-style [line endings][eol], `LF`.
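For example, the tsort-compatible file can be read back into a dependency map with a few lines of Python (this assumes whitespace-separated pairs per line, as in `tsort` input, and treats `0` as "no dependency" per the description above):

```python
# Read dependencies/1-to-1.txt: each line is "<kanji> <component>", where the
# component "0" means "cannot be decomposed further".
deps = {}
with open("dependencies/1-to-1.txt", encoding="utf-8") as f:
    for line in f:
        target, dep = line.split()
        deps.setdefault(target, set())
        if dep != "0":
            deps[target].add(dep)

print(deps.get("語"))  # direct components of 語 according to the data
```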
### `data` directory
- `kanji.json` - data for kanji included in final ordered lists, including [radicals][kangxi]
- `kanjivg.txt` - list of kanji from [KanjiVG][]
- `cjk-decomp-{VERSION}.txt` - data from [CJK Decompositions Data][cjk], without any modifications
- `cjk-decomp-override.txt` - data to override some CJK's decompositions
- `kanji-frequency/*.json` - kanji frequency tables
All files are encoded in UTF-8, without [byte order mark (BOM)][bom]. All files, except for `cjk-decomp-{VERSION}.txt`, have unix-style [line endings][eol], `LF`.
#### `data/kanji.json`
Contains table with data for kanji, including radicals. Columns are:
1. Character itself
2. Stroke count
3. Frequency flag:
- `true` if it is a common kanji
- `false` if it is primarily used as a radical/component and unlikely to be seen within top 3000 in kanji usage frequency tables. In this case character is only listed because it's useful for decomposition, not as a standalone kanji
Restrictions:
- No duplicates
- Each character must be listed in `kanjivg.txt`
- Each character must be listed on the left hand side in exactly one line in `cjk-decomp-{VERSION}.txt`
- Each character *may* be listed on the left hand side in exactly one line in `cjk-decomp-override.txt`
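Putting the column description above to use, here is a small sketch of reading the table (it assumes `kanji.json` stores rows as `[character, stroke count, frequency flag]` arrays; check the actual file before relying on the exact layout):

```python
import json

with open("data/kanji.json", encoding="utf-8") as f:
    rows = json.load(f)

# Separate common kanji from characters listed only as components.
common = [char for char, _strokes, is_common in rows if is_common]
components_only = [char for char, _strokes, is_common in rows if not is_common]
print(len(common), "common kanji,", len(components_only), "component-only characters")
```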
#### `data/kanjivg.txt`
Simple list of characters which are present in KanjiVG project. Those are from the list of `*.svg` files in [KanjiVG's Github repository][kanjivg-github].
#### `data/cjk-decomp-{VERSION}.txt`
Data file from the [CJK Decompositions Data][cjk] project, see the [description of its format][cjk-format].
#### `data/cjk-decomp-override.txt`
Same format as `cjk-decomp-{VERSION}.txt`, except:
- comments starting with `#` allowed
- purpose of each record in this file is to override the one from `cjk-decomp-{VERSION}.txt`
- type of decomposition is always `fix`, which just means "fix a record for the same character from original file"
Special character `0` is used to distinguish invalid decompositions (which lead to characters with no graphical representation) from those which just can't be decomposed further into something meaningful. For example, `一:fix(0)` means that this kanji can't be further decomposed, since it's just a single stroke.
NOTE: Strictly speaking, records in this file are not always "visual decompositions" (but most of them are). Instead, it's just an attempt to provide meaningful recommendations of kanji learning order.
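Records of this form are easy to parse; a hedged sketch (it assumes every record looks like `字:fix(部,品)` with optional `#` comments, as described above):

```python
import re

RECORD = re.compile(r"^(?P<char>\S+):(?P<kind>\w+)\((?P<parts>[^)]*)\)$")

def parse_record(line):
    line = line.split("#", 1)[0].strip()  # drop comments and surrounding space
    if not line:
        return None
    match = RECORD.match(line)
    if match is None:
        raise ValueError(f"unrecognized record: {line!r}")
    parts = [p for p in match.group("parts").split(",") if p and p != "0"]
    return match.group("char"), match.group("kind"), parts

print(parse_record("一:fix(0)"))  # ('一', 'fix', []) - cannot be decomposed further
```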
#### `data/kanji-frequency/*.json`
See [kanji-frequency][] repository for details.
## Usage
You must have Node.js and Git installed
1. `git clone https://github.com/THIS/REPO.git`
2. `npm install`
3. `node build.js` + commands and arguments described below
### Command-line commands and arguments
- `show` - only display sorted list without writing into files
- (optional) `--per-line=NUM` - explicitly tell how many characters per line to display. `50` by default. Applicable only to (no arguments)
- (optional) `--freq-table=TABLE_NAME` - use only one frequency table. Table names are file names from `data/kanji-frequency` directory, without `.json` extension, e.g. `all` ("combined" list), `aozora`, etc. When omitted, all frequency tables are used
- `coverage` - show tables coverage, i.e. which fraction of characters from each frequency table is included into kanji list
- `suggest-add` - suggest kanji to add in a list, based on coverage within kanji usage frequency tables
- (required) `--num=NUM` - how many
- (optional) `--mean-type=MEAN_TYPE` - same as previous, sort by given mean type: `arithmetic` (most "extreme"), `geometric`, `harmonic` (default, most "conservative"). See [Pythagorean means][mean-type] for details
- `suggest-remove` - suggest kanji to remove from a list, reverse of `suggest-add`
- (required) `--num=NUM` - see above
- (optional) `--mean-type=MEAN_TYPE` - see above
- `save` - update files with final lists
## License
This is a multi-license project. Choose any license from this list:
- [Apache-2.0](http://www.apache.org/licenses/LICENSE-2.0) or any later version
- [CC-BY-4.0](http://creativecommons.org/licenses/by/4.0/) or any later version
- [EPL-1.0](https://www.eclipse.org/legal/epl-v10.html) or any later version
- [LGPL-3.0](http://www.gnu.org/licenses/lgpl-3.0.html) or any later version
- [MIT](http://opensource.org/licenses/MIT)
[mistakes]: http://www.tofugu.com/2010/03/25/the-5-biggest-mistakes-people-make-when-learning-kanji/
[anki]: http://ankisrs.net/
[graph]: https://en.wikipedia.org/wiki/Graph_(mathematics)
[dag]: https://en.wikipedia.org/wiki/Directed_acyclic_graph
[topsort]: https://en.wikipedia.org/wiki/Topological_sorting
[tsort]: https://en.wikipedia.org/wiki/Tsort
[kahn]: http://dl.acm.org/citation.cfm?doid=368996.369025
[wiki-dumps]: https://dumps.wikimedia.org/
[jawiki]: https://dumps.wikimedia.org/jawiki/
[aozora]: http://www.aozora.gr.jp/
[twitter-stream]: https://dev.twitter.com/streaming/overview
[twitter-bot]: https://github.com/scriptin/twitter-kanji-frequency
[jouyou]: https://en.wikipedia.org/wiki/J%C5%8Dy%C5%8D_kanji
[kangxi]: https://en.wikipedia.org/wiki/Kangxi_radical
[kanjivg]: http://kanjivg.tagaini.net/
[kanjivg-github]: https://github.com/KanjiVG/kanjivg
[cjk]: https://cjkdecomp.codeplex.com/
[cjk-format]: https://cjkdecomp.codeplex.com/wikipage?title=cjk-decomp
[json]: http://json.org/
[bom]: https://en.wikipedia.org/wiki/Byte_order_mark
[eol]: https://en.wikipedia.org/wiki/Newline
[mean-type]: https://en.wikipedia.org/wiki/Pythagorean_means
[kanji-frequency]: https://github.com/scriptin/kanji-frequency
| [] | [
"Annotation and Dataset Development"
] |
true | https://github.com/Kensuke-Mitsuzawa/JapaneseTokenizers | 2015-09-01T10:24:45Z | A set of metrics for feature selection from text data | Kensuke-Mitsuzawa / JapaneseTokenizers
This is simple python-wrapper for Japanese Tokenizers(A.K.A Tokenizer)
This project aims to call tokenizers and split a sentence into tokens as easy as possible.
And, this project supports various Tokenization tools common interface. Thus, it's easy to
compare output from various tokenizers.
This project is available also in Github.
What's this?
If you find any bugs, please report them to github issues. Or any pull requests are welcomed!
Python 2.7
Python 3.x
checked in 3.5, 3.6, 3.7
simple/common interface among various tokenizers
simple/common interface for filtering with stopwords or Part-of-Speech condition
simple interface to add user-dictionary(mecab only)
Mecab is open source tokenizer system for various language(if you have dictionary for it)
See english documentation for detail
Juman is a tokenizer system developed by Kurohashi laboratory, Kyoto University, Japan.
Juman is strong for ambiguous writing style in Japanese, and is strong for new-comming words
thanks to Web based huge dictionary.
And, Juman tells you semantic meaning of words.
Juman++ is a tokenizer system developed by Kurohashi laboratory, Kyoto University, Japan.
Juman++ is succeeding system of Juman. It adopts RNN model for tokenization.
Juman++ is strong for ambigious writing style in Japanese, and is strong for new-comming words
thanks to Web based huge dictionary.
And, Juman tells you semantic meaning of words.
Note: New Juman++ dev-version(later than 2.x) is available at Github
Kytea is a tokenizer tool developed by Graham Neubig.
Requirements
Features
Supported Tokenizers
Mecab
Juman
Juman++
Kytea
Kytea has a different algorithm from one of Mecab or Juman.
See here to install MeCab system.
Mecab-neologd dictionary is a dictionary-extension based on ipadic-dictionary, which is basic
dictionary of Mecab.
With, Mecab-neologd dictionary, you're able to parse new-coming words make one token.
Here, new-coming words is such like, movie actor name or company name.....
See here and install mecab-neologd dictionary.
GCC version must be >= 5
Setting up
Tokenizers auto-install
make install
mecab-neologd dictionary auto-install
make install_neologd
Tokenizers manual-install
MeCab
Mecab Neologd dictionary
Juman
wget -O juman7.0.1.tar.bz2 "http://nlp.ist.i.kyoto-
u.ac.jp/DLcounter/lime.cgi?down=http://nlp.ist.i.kyoto-u.ac.jp/nl-
resource/juman/juman-7.01.tar.bz2&name=juman-7.01.tar.bz2"
bzip2 -dc juman7.0.1.tar.bz2 | tar xvf -
cd juman-7.01
./configure
make
[sudo] make install
Juman++
Install Kytea system
Kytea has python wrapper thanks to michiaki ariga. Install Kytea-python wrapper
During install, you see warning message when it fails to install pyknp or kytea .
if you see these messages, try to re-install these packages manually.
Tokenization example (for Python 3.x; to see example code for Python 2.x, please see here)
wget http://lotus.kuee.kyoto-u.ac.jp/nl-resource/jumanpp/jumanpp-
1.02.tar.xz
tar xJvf jumanpp-1.02.tar.xz
cd jumanpp-1.02/
./configure
make
[sudo] make install
Kytea
wget http://www.phontron.com/kytea/download/kytea-0.4.7.tar.gz
tar -xvf kytea-0.4.7.tar
cd kytea-0.4.7
./configure
make
make install
pip install kytea
install
[sudo] python setup.py install
Note
Usage
import JapaneseTokenizer
input_sentence = '10日放送の「中居正広のミになる図書館」(テレビ朝日系)で、SMAPの中
居正広が、篠原信一の過去の勘違いを明かす一幕があった。'
# ipadic is well-maintained dictionary #
mecab_wrapper = JapaneseTokenizer.MecabWrapper(dictType='ipadic')
print(mecab_wrapper.tokenize(input_sentence).convert_list_object())
# neologd is automatically-generated dictionary from huge web-corpus #
Mecab, Juman, Kytea have different system of Part-of-Speech(POS).
You can check tables of Part-of-Speech(POS) here
natto-py is sophisticated package for tokenization. It supports following features
easy interface for tokenization
importing additional dictionary
partial parsing mode
MIT license
You could build an environment which has dependencies to test this package.
Simply, you build docker image and run docker container.
Develop environment is defined with test/docker-compose-dev.yml .
With the docker-compose.yml file, you could call python2.7 or python3.7
mecab_neologd_wrapper = JapaneseTokenizer.MecabWrapper(dictType='neologd')
print(mecab_neologd_wrapper.tokenize(input_sentence).convert_list_object())
Filtering example
import JapaneseTokenizer
# with word filtering by stopword & part-of-speech condition #
print(mecab_wrapper.tokenize(input_sentence).filter(stopwords=['テレビ朝
日'], pos_condition=[('名詞', '固有名詞')]).convert_list_object())
Part-of-speech structure
Similar Package
natto-py
LICENSE
For developers
Dev environment
If you're using Pycharm Professional edition, you could set docker-compose.yml as remote
interpreter.
To call python2.7, set /opt/conda/envs/p27/bin/python2.7
To call python3.7, set /opt/conda/envs/p37/bin/python3.7
These commands checks from procedures of package install until test of package.
Test environment
$ docker-compose build
| [](LICENSE)[](https://travis-ci.org/Kensuke-Mitsuzawa/JapaneseTokenizers)
# What's this?
This is a simple Python wrapper for Japanese tokenizers.
This project aims to make it as easy as possible to call a tokenizer and split a sentence into tokens.
It also provides a common interface across various tokenization tools, so it's easy to compare the output from different tokenizers.
This project is available also in [Github](https://github.com/Kensuke-Mitsuzawa/JapaneseTokenizers).
If you find any bugs, please report them via GitHub issues. Pull requests are also welcome!
# Requirements
- Python 2.7
- Python 3.x
- checked in 3.5, 3.6, 3.7
# Features
* simple/common interface among various tokenizers
* simple/common interface for filtering with stopwords or Part-of-Speech condition
* simple interface to add user-dictionary(mecab only)
## Supported Tokenizers
### Mecab
[Mecab](http://mecab.googlecode.com/svn/trunk/mecab/doc/index.html?sess=3f6a4f9896295ef2480fa2482de521f6) is an open-source tokenizer system for various languages (if you have a dictionary for it)
See [english documentation](https://github.com/jordwest/mecab-docs-en) for detail
### Juman
[Juman](http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?JUMAN) is a tokenizer system developed by Kurohashi laboratory, Kyoto University, Japan.
Juman is robust against ambiguous writing styles in Japanese, and handles newly coined words well thanks to its huge web-based dictionary.
Juman also tells you the semantic meaning of words.
### Juman++
[Juman++](http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?JUMAN++) is a tokenizer system developed by Kurohashi laboratory, Kyoto University, Japan.
Juman++ is the successor of Juman. It adopts an RNN model for tokenization.
Juman++ is robust against ambiguous writing styles in Japanese, and handles newly coined words well thanks to its huge web-based dictionary.
Like Juman, it also tells you the semantic meaning of words.
Note: New Juman++ dev-version(later than 2.x) is available at [Github](https://github.com/ku-nlp/jumanpp)
### Kytea
[Kytea](http://www.phontron.com/kytea/) is a tokenizer tool developed by Graham Neubig.
Kytea uses a different algorithm from those of Mecab or Juman.
# Setting up
## Tokenizers auto-install
```
make install
```
### mecab-neologd dictionary auto-install
```
make install_neologd
```
## Tokenizers manual-install
### MeCab
See [here](https://github.com/jordwest/mecab-docs-en) to install MeCab system.
### Mecab Neologd dictionary
Mecab-neologd is a dictionary extension based on the ipadic dictionary, which is the basic dictionary of Mecab.
With the Mecab-neologd dictionary, you're able to parse new-coming words, such as movie actor names or company names, as single tokens.
See [here](https://github.com/neologd/mecab-ipadic-neologd) and install mecab-neologd dictionary.
### Juman
```
wget -O juman7.0.1.tar.bz2 "http://nlp.ist.i.kyoto-u.ac.jp/DLcounter/lime.cgi?down=http://nlp.ist.i.kyoto-u.ac.jp/nl-resource/juman/juman-7.01.tar.bz2&name=juman-7.01.tar.bz2"
bzip2 -dc juman7.0.1.tar.bz2 | tar xvf -
cd juman-7.01
./configure
make
[sudo] make install
```
## Juman++
* GCC version must be >= 5
```
wget http://lotus.kuee.kyoto-u.ac.jp/nl-resource/jumanpp/jumanpp-1.02.tar.xz
tar xJvf jumanpp-1.02.tar.xz
cd jumanpp-1.02/
./configure
make
[sudo] make install
```
## Kytea
Install Kytea system
```
wget http://www.phontron.com/kytea/download/kytea-0.4.7.tar.gz
tar -xvf kytea-0.4.7.tar
cd kytea-0.4.7
./configure
make
make install
```
Kytea has a [python wrapper](https://github.com/chezou/Mykytea-python) thanks to Michiaki Ariga.
Install the Kytea python wrapper:
```
pip install kytea
```
## install
```
[sudo] python setup.py install
```
### Note
During installation, you may see a warning message if installing `pyknp` or `kytea` fails.
If you see these messages, try to re-install those packages manually.
# Usage
Tokenization example (for Python 3.x; to see example code for Python 2.x, please see [here](https://github.com/Kensuke-Mitsuzawa/JapaneseTokenizers/blob/master/examples/examples.py))
```
import JapaneseTokenizer
input_sentence = '10日放送の「中居正広のミになる図書館」(テレビ朝日系)で、SMAPの中居正広が、篠原信一の過去の勘違いを明かす一幕があった。'
# ipadic is well-maintained dictionary #
mecab_wrapper = JapaneseTokenizer.MecabWrapper(dictType='ipadic')
print(mecab_wrapper.tokenize(input_sentence).convert_list_object())
# neologd is automatically-generated dictionary from huge web-corpus #
mecab_neologd_wrapper = JapaneseTokenizer.MecabWrapper(dictType='neologd')
print(mecab_neologd_wrapper.tokenize(input_sentence).convert_list_object())
```
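Because every wrapper exposes the same `tokenize(...).convert_list_object()` interface, comparing analyzers side by side is a one-line swap. A sketch (the `JumanWrapper`/`KyteaWrapper` class names are assumptions based on the supported-tokenizer list above, and each wrapper requires the corresponding tokenizer to be installed locally):

```python
import JapaneseTokenizer

input_sentence = '10日放送の「中居正広のミになる図書館」(テレビ朝日系)で、SMAPの中居正広が、篠原信一の過去の勘違いを明かす一幕があった。'

# The Juman/Kytea wrapper class names are assumptions; each one needs the
# corresponding tokenizer installed on your machine (see "Setting up" above).
wrappers = {
    "mecab (ipadic)": JapaneseTokenizer.MecabWrapper(dictType="ipadic"),
    "juman": JapaneseTokenizer.JumanWrapper(),
    "kytea": JapaneseTokenizer.KyteaWrapper(),
}

for name, wrapper in wrappers.items():
    tokens = wrapper.tokenize(input_sentence).convert_list_object()
    print(name, "->", tokens)
```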
## Filtering example
```
import JapaneseTokenizer
# with word filtering by stopword & part-of-speech condition #
print(mecab_wrapper.tokenize(input_sentence).filter(stopwords=['テレビ朝日'], pos_condition=[('名詞', '固有名詞')]).convert_list_object())
```
## Part-of-speech structure
Mecab, Juman, Kytea have different system of Part-of-Speech(POS).
You can check tables of Part-of-Speech(POS) [here](http://www.unixuser.org/~euske/doc/postag/)
# Similar Package
## natto-py
natto-py is a sophisticated package for tokenization. It supports the following features:
* easy interface for tokenization
* importing additional dictionary
* partial parsing mode
# LICENSE
MIT license
# For developers
You can build an environment that has all the dependencies needed to test this package.
Simply build the docker image and run a docker container.
## Dev environment
Develop environment is defined with `test/docker-compose-dev.yml`.
With the docker-compose.yml file, you can call python2.7 or python3.7.
If you're using the PyCharm Professional edition, you can set docker-compose.yml as a remote interpreter.
To call python2.7, set `/opt/conda/envs/p27/bin/python2.7`
To call python3.7, set `/opt/conda/envs/p37/bin/python3.7`
## Test environment
These commands check everything from the package installation procedure through to the package tests.
```bash
$ docker-compose build
$ docker-compose up
```
| [
"Morphology",
"Responsible & Trustworthy NLP",
"Robustness in NLP",
"Syntactic Text Processing",
"Tagging",
"Text Segmentation"
] | [] |
true | https://github.com/tokuhirom/akaza | 2015-10-14T01:17:00Z | Yet another Japanese IME for IBus/Linux | akaza-im / akaza
Yet another kana-kanji-converter on IBus, written in Rust.
統計的かな漢字変換による日本語IMEです。 Rust で書いていま
す。
現在、開発途中のプロダクトです。非互換の変更が予告なくはい
ります
いじりやすくて ある程度 UIが使いやすいかな漢字変換があったら
面白いなと思ったので作ってみています。 「いじりやすくて」と
いうのはつまり、Hack-able であるという意味です。
モデルデータを自分で生成できて、特定の企業に依存しない自由
なかな漢字変換エンジンを作りたい。
UI/Logic をすべて Rust で書いてあるので、拡張が容易です。
統計的かな漢字変換モデルを採用しています
言語モデルの生成元は日本語 Wikipedia と青空文庫で
す。
形態素解析器 Vibrato で分析した結果をもとに
2gram 言語モデルを構築しています。
利用者の環境で 1 から言語モデルを再生成すること
が可能です。
ユーザー環境で、利用者の変換結果を学習します(unigram,
bigramの頻度を学習します)
ibus-akaza
モチベーション
特徴
ibus 1.5+
marisa-trie
gtk4
rust
Linux 6.0 以上
ibus 1.5 以上
リトルエンディアン環境
モデルファイルをダウンロードして展開してください。
ibus-akaza をインストールしてください。
Akaza は典型的には以下の順番で探します。
1. ~/.local/share/akaza/keymap/{KEYMAP_NAME}.yml
2. /usr/local/share/akaza/keymap/{KEYMAP_NAME}.yml
3. /usr/share/akaza/keymap/{KEYMAP_NAME}.yml
Dependencies
Runtime dependencies
Build time dependencies
Supported environment
Install 方法
sudo mkdir -p /usr/share/akaza/model/default/
curl -L https://github.com/akaza-im/akaza-
default-
model/releases/download/<<VERSION>>/akaza-
default-model.tar.gz | sudo tar xzv --strip-
components=1 -C /usr/share/akaza/model/default/
rustup install stable
make
sudo make install
ibus restart
ibus engine akaza
設定方法
Keymap の設定
このパスは、XDG ユーザーディレクトリ の仕様に基づいていま
す。 Akaza は Keymap は XDG_DATA_HOME と XDG_DATA_DIRS か
らさがします。 XDG_DATA_HOME は設定していなければ
~/.local/share/ です。XDGA_DATA_DIR は設定していなければ
/usr/local/share:/usr/share/ です。
ローマ字かなマップも同様のパスからさがします。
1. ~/.local/share/akaza/romkan/{KEYMAP_NAME}.yml
2. /usr/local/share/akaza/romkan/{KEYMAP_NAME}.yml
3. /usr/share/akaza/romkan/{KEYMAP_NAME}.yml
model は複数のファイルからなります。
unigram.model
bigram.model
SKK-JISYO.akaza
この切り替えは以下のようなところから読まれます。
~/.local/share/akaza/model/{MODEL_NAME}/unigram.model
~/.local/share/akaza/model/{MODEL_NAME}/bigram.model
~/.local/share/akaza/model/{MODEL_NAME}/SKK-
JISYO.akaza
keymap, romkan と同様に、XDG_DATA_DIRS から読むこともでき
ます。
流行り言葉が入力できない場合、jawiki-kana-kanji-dict の利用を検
討してください。 Wikipedia から自動的に抽出されたデータを元
に SKK 辞書を作成しています。 Github Actions で自動的に実行さ
れているため、常に新鮮です。
一方で、自動抽出しているために変なワードも入っています。変
なワードが登録されていることに気づいたら、github issues で報
告してください。
RomKan の設定
model の設定
FAQ
最近の言葉が変換できません/固有名詞が変換できま
せん
人名が入力できません。など。
必要な SKK の辞書を読み込んでください。 現時点では config.yml
を手で編集する必要があります。
https://skk-dev.github.io/dict/
ibus-uniemoji を参考に初期の実装を行いました。
日本語入力を支える技術 を読み込んで実装しました。この本
が
実装
THANKS TO
| # ibus-akaza
Yet another kana-kanji-converter on IBus, written in Rust.
統計的かな漢字変換による日本語IMEです。
Rust で書いています。
**現在、開発途中のプロダクトです。非互換の変更が予告なくはいります**
## モチベーション
いじりやすくて **ある程度** UIが使いやすいかな漢字変換があったら面白いなと思ったので作ってみています。
「いじりやすくて」というのはつまり、Hack-able であるという意味です。
モデルデータを自分で生成できて、特定の企業に依存しない自由なかな漢字変換エンジンを作りたい。
## 特徴
* UI/Logic をすべて Rust で書いてあるので、拡張が容易です。
* 統計的かな漢字変換モデルを採用しています
* 言語モデルの生成元は日本語 Wikipedia と青空文庫です。
* 形態素解析器 Vibrato で分析した結果をもとに 2gram 言語モデルを構築しています。
* 利用者の環境で 1 から言語モデルを再生成することが可能です。
* ユーザー環境で、利用者の変換結果を学習します(unigram, bigramの頻度を学習します)
## Dependencies
### Runtime dependencies
* ibus 1.5+
* marisa-trie
* gtk4
### Build time dependencies
* rust
### Supported environment
* Linux 6.0 以上
* ibus 1.5 以上
* リトルエンディアン環境
## Install 方法
モデルファイルをダウンロードして展開してください。
sudo mkdir -p /usr/share/akaza/model/default/
curl -L https://github.com/akaza-im/akaza-default-model/releases/download/<<VERSION>>/akaza-default-model.tar.gz | sudo tar xzv --strip-components=1 -C /usr/share/akaza/model/default/
ibus-akaza をインストールしてください。
rustup install stable
make
sudo make install
ibus restart
ibus engine akaza
## 設定方法
### Keymap の設定
Akaza は典型的には以下の順番で探します。
1. `~/.local/share/akaza/keymap/{KEYMAP_NAME}.yml`
2. `/usr/local/share/akaza/keymap/{KEYMAP_NAME}.yml`
3. `/usr/share/akaza/keymap/{KEYMAP_NAME}.yml`
このパスは、[XDG ユーザーディレクトリ](https://wiki.archlinux.jp/index.php/XDG_%E3%83%A6%E3%83%BC%E3%82%B6%E3%83%BC%E3%83%87%E3%82%A3%E3%83%AC%E3%82%AF%E3%83%88%E3%83%AA)
の仕様に基づいています。
Akaza は Keymap は `XDG_DATA_HOME` と `XDG_DATA_DIRS` からさがします。
`XDG_DATA_HOME` は設定していなければ `~/.local/share/` です。`XDGA_DATA_DIR` は設定していなければ `/usr/local/share:/usr/share/` です。
### RomKan の設定
ローマ字かなマップも同様のパスからさがします。
1. `~/.local/share/akaza/romkan/{KEYMAP_NAME}.yml`
2. `/usr/local/share/akaza/romkan/{KEYMAP_NAME}.yml`
3. `/usr/share/akaza/romkan/{KEYMAP_NAME}.yml`
### model の設定
model は複数のファイルからなります。
- unigram.model
- bigram.model
- SKK-JISYO.akaza
この切り替えは以下のようなところから読まれます。
- `~/.local/share/akaza/model/{MODEL_NAME}/unigram.model`
- `~/.local/share/akaza/model/{MODEL_NAME}/bigram.model`
- `~/.local/share/akaza/model/{MODEL_NAME}/SKK-JISYO.akaza`
keymap, romkan と同様に、`XDG_DATA_DIRS` から読むこともできます。
## FAQ
### 最近の言葉が変換できません/固有名詞が変換できません
流行り言葉が入力できない場合、[jawiki-kana-kanji-dict](https://github.com/tokuhirom/jawiki-kana-kanji-dict) の利用を検討してください。
Wikipedia から自動的に抽出されたデータを元に SKK 辞書を作成しています。
Github Actions で自動的に実行されているため、常に新鮮です。
一方で、自動抽出しているために変なワードも入っています。変なワードが登録されていることに気づいたら、github issues で報告してください。
### 人名が入力できません。など。
必要な SKK の辞書を読み込んでください。
現時点では config.yml を手で編集する必要があります。
https://skk-dev.github.io/dict/
## THANKS TO
* [ibus-uniemoji](https://github.com/salty-horse/ibus-uniemoji) を参考に初期の実装を行いました。
* [日本語入力を支える技術](https://gihyo.jp/book/2012/978-4-7741-4993-6) を読み込んで実装しました。この本がなかったら実装しようと思わなかったと思います。
| [
"Language Models",
"Syntactic Text Processing"
] | [
"Vocabulary, Dictionary, and Language Input Method"
] |
true | https://github.com/hexenq/kuroshiro | 2016-01-03T09:16:40Z | Japanese language library for converting Japanese sentence to Hiragana, Katakana or Romaji with furigana and okurigana modes supported. | hexenq / kuroshiro
kuroshiro is a Japanese language library for converting Japanese sentences to Hiragana, Katakana or Romaji, with furigana and okurigana modes supported.
Read this in other languages: English, 日本語, 简体中文, 繁體中文, Esperanto.
You can check the demo here.
Japanese Sentence => Hiragana, Katakana or Romaji
Furigana and okurigana supported
kuroshiro
Demo
Feature
🆕Multiple morphological analyzers supported
🆕Multiple romanization systems supported
Useful Japanese utils
Separates the morphological analyzer from the phonetic notation logic, so that different morphological analyzers (ready-made or customized) can be used
Embrace ES8/ES2017 to use async/await functions
Use ES6 Module instead of CommonJS
You should check the environment compatibility of each analyzer before you start working with them
Analyzer
Node.js Support
Browser Support
Plugin Repo
Developer
Kuromoji
✓
✓
kuroshiro-analyzer-kuromoji
Hexen Qi
Mecab
✓
✗
kuroshiro-analyzer-mecab
Hexen Qi
Yahoo Web API
✓
✗
kuroshiro-analyzer-yahoo-webapi
Hexen Qi
Install with npm package manager:
Load the library:
Support ES6 Module import
And CommonJS require
Add dist/kuroshiro.min.js to your frontend project (you may first build it from source with npm run build after npm install ), and in
your HTML:
Breaking Change in 1.x
Ready-made Analyzer Plugins
Usage
Node.js (or using a module bundler (e.g. Webpack))
$ npm install kuroshiro
import Kuroshiro from "kuroshiro";
// Initialize kuroshiro with an instance of analyzer (You could check the [apidoc](#initanalyzer) for more informatio
// For this example, you should npm install and import the kuromoji analyzer first
import KuromojiAnalyzer from "kuroshiro-analyzer-kuromoji";
// Instantiate
const kuroshiro = new Kuroshiro();
// Initialize
// Here uses async/await, you could also use Promise
await kuroshiro.init(new KuromojiAnalyzer());
// Convert what you want
const result = await kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", { to: "hiragana" }
const Kuroshiro = require("kuroshiro");
const KuromojiAnalyzer = require("kuroshiro-analyzer-kuromoji");
const kuroshiro = new Kuroshiro();
kuroshiro.init(new KuromojiAnalyzer())
.then(function(){
return kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", { to: "hiragana" });
})
.then(function(result){
console.log(result);
})
Browser
For this example, you should also include kuroshiro-analyzer-kuromoji.min.js which you could get from kuroshiro-analyzer-kuromoji
Instantiate:
Initialize kuroshiro with an instance of analyzer, then convert what you want:
Examples
Initialize kuroshiro with an instance of analyzer. You should first import an analyzer and initialize it. You can make use of the ready-made analyzers listed above. Please refer to the documentation of each analyzer for initialization instructions.
Arguments
analyzer - An instance of analyzer.
Examples
Convert given string to target syllabary with options available
Arguments
str - A String to be converted.
options - Optional kuroshiro has several convert options as below.
Options
Type
Default
Description
to
String
"hiragana"
Target syllabary [ hiragana , katakana , romaji ]
mode
String
"normal"
Convert mode [ normal , spaced , okurigana , furigana ]
romajiSystem
String
"hepburn"
Romanization system [ nippon , passport , hepburn ]
delimiter_start
String
"("
Delimiter(Start)
delimiter_end
String
")"
Delimiter(End)
*: Param romajiSystem is only applied when the value of param to is romaji . For more about it, check Romanization System
<script src="url/to/kuroshiro.min.js"></script>
<script src="url/to/kuroshiro-analyzer-kuromoji.min.js"></script>
var kuroshiro = new Kuroshiro();
kuroshiro.init(new KuromojiAnalyzer({ dictPath: "url/to/dictFiles" }))
.then(function () {
return kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", { to: "hiragana" });
})
.then(function(result){
console.log(result);
})
API
Constructor
const kuroshiro = new Kuroshiro();
Instance Medthods
init(analyzer)
await kuroshiro.init(new KuromojiAnalyzer());
convert(str, [options])
*
Examples
// furigana
await kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", {mode:"furigana", to:"hiragana"});
// result: 感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!
かん
と
て
つな
かさ
じんせい
さいこう
Examples
Check if input char is hiragana.
Check if input char is katakana.
Check if input char is kana.
Check if input char is kanji.
Check if input char is Japanese.
Check if input string has hiragana.
Check if input string has katakana.
Check if input string has kana.
Check if input string has kanji.
Check if input string has Japanese.
Convert input kana string to hiragana.
// normal
await kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", {mode:"okurigana", to:"hiragana"}
// result:かんじとれたらてをつなごう、かさなるのはじんせいのライン and レミリアさいこう!
// spaced
await kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", {mode:"okurigana", to:"hiragana"}
// result:かんじとれ たら て を つなご う 、 かさなる の は じんせい の ライン and レミ リア さいこう !
// okurigana
await kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", {mode:"okurigana", to:"hiragana"}
// result: 感(かん)じ取(と)れたら手(て)を繋(つな)ごう、重(かさ)なるのは人生(じんせい)のライン and レミリア最高(さいこう)!
Utils
const result = Kuroshiro.Util.isHiragana("あ"));
isHiragana(char)
isKatakana(char)
isKana(char)
isKanji(char)
isJapanese(char)
hasHiragana(str)
hasKatakana(str)
hasKana(str)
hasKanji(str)
hasJapanese(str)
kanaToHiragna(str)
Convert input kana string to katakana.
Convert input kana string to romaji. Param system accepts "nippon" , "passport" , "hepburn" (Default: "hepburn").
kuroshiro supports three kinds of romanization systems.
nippon : Nippon-shiki romanization. Refer to ISO 3602 Strict.
passport : Passport-shiki romanization. Refer to Japanese romanization table published by Ministry of Foreign Affairs of Japan.
hepburn : Hepburn romanization. Refer to BS 4812 : 1972.
There is a useful webpage for you to check the difference between these romanization systems.
It is impossible to fully automatically convert furigana directly to romaji, because furigana lacks information on pronunciation (refer to なぜフリガナではダメなのか?).
Therefore kuroshiro will not handle chōon (long vowels) when converting furigana (kana) directly to romaji, whichever romanization system is used (the chōonpu ー is still handled).
For example, you'll get "kousi", "koushi", and "koushi" respectively when converting the kana "こうし" to romaji using the nippon, passport, and hepburn romanization systems.
The kanji -> romaji conversion with/without furigana mode is unaffected by this logic.
Please check CONTRIBUTING.
kuromoji
wanakana
MIT
kanaToKatakana(str)
kanaToRomaji(str, system)
Romanization System
Notice for Romaji Conversion
Contributing
Inspired By
License
| 
# kuroshiro
[](https://travis-ci.org/hexenq/kuroshiro)
[](https://coveralls.io/r/hexenq/kuroshiro)
[](http://badge.fury.io/js/kuroshiro)
[](https://gitter.im/hexenq/kuroshiro)
[](LICENSE)
kuroshiroは日本語文をローマ字や仮名なとに変換できるライブラリです。フリガナ・送り仮名の機能も搭載します。
*ほかの言語:[English](README.md), [日本語](README.jp.md), [简体中文](README.zh-cn.md), [繁體中文](README.zh-tw.md), [Esperanto](README.eo-eo.md)。*
## デモ
オンラインデモは[こちら](https://kuroshiro.org/#demo)です。
## 特徴
- 日本語文 => ひらがな、カタカナ、ローマ字
- フリガナ、送り仮名サポート
- 🆕複数の形態素解析器をサポート
- 🆕複数のローマ字表記法をサポート
- 実用ツール付き
## バッジョン1.xでの重大な変更
- 形態素解析器がルビロジックから分離される。それゆえ、様々な形態素解析器([レディーメイド](#形態素解析器プラグイン)も[カスタマイズ](CONTRIBUTING.md#how-to-submit-new-analyzer-plugins)も)を利用できることになります。
- ES2017の新機能「async/await」を利用します
- CommonJSからES Modulesへ移行します
## 形態素解析器プラグイン
*始まる前にプラグインの適合性をチェックしてください*
| 解析器 | Node.js サポート | ブラウザ サポート | レポジトリ | 開発者 |
|---|---|---|---|---|
|Kuromoji|✓|✓|[kuroshiro-analyzer-kuromoji](https://github.com/hexenq/kuroshiro-analyzer-kuromoji)|[Hexen Qi](https://github.com/hexenq)|
|Mecab|✓|✗|[kuroshiro-analyzer-mecab](https://github.com/hexenq/kuroshiro-analyzer-mecab)|[Hexen Qi](https://github.com/hexenq)|
|Yahoo Web API|✓|✗|[kuroshiro-analyzer-yahoo-webapi](https://github.com/hexenq/kuroshiro-analyzer-yahoo-webapi)|[Hexen Qi](https://github.com/hexenq)|
## 使い方
### Node.js (又はWebpackなどのモジュールバンドラを使ってる時)
npmでインストール:
```sh
$ npm install kuroshiro
```
kuroshiroをロードします:
*ES6 Module `import` と CommonJS `require`、どちらでもOK*
```js
import Kuroshiro from "kuroshiro";
```
インスタンス化します:
```js
const kuroshiro = new Kuroshiro();
```
形態素解析器のインスタンスを引数にしてkuroshiroを初期化する ([API説明](#initanalyzer)を参考にしてください):
```js
// この例では,まずnpm installとimportを通じてkuromojiの形態素解析器を導入します
import KuromojiAnalyzer from "kuroshiro-analyzer-kuromoji";
// ...
// 初期化
// ここでasync/awaitを使ってますが, Promiseも使えます
await kuroshiro.init(new KuromojiAnalyzer());
```
変換の実行:
```js
const result = await kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", { to: "hiragana" });
```
### ブラウザ
`dist/kuroshiro.min.js`を導入し (その前に`npm install`と`npm run build`を通じて`kuroshiro.min.js`を生成します)、そしてHTMLに:
```html
<script src="url/to/kuroshiro.min.js"></script>
```
この例では`kuroshiro-analyzer-kuromoji.min.js`の導入は必要です。詳しくは[kuroshiro-analyzer-kuromoji](https://github.com/hexenq/kuroshiro-analyzer-kuromoji)を参考にしてください
```html
<script src="url/to/kuroshiro-analyzer-kuromoji.min.js"></script>
```
インスタンス化します:
```js
var kuroshiro = new Kuroshiro();
```
形態素解析器のインスタンスを引数にしてkuroshiroを初期化するから,変換を実行します:
```js
kuroshiro.init(new KuromojiAnalyzer({ dictPath: "url/to/dictFiles" }))
.then(function () {
return kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", { to: "hiragana" });
})
.then(function(result){
console.log(result);
})
```
## APIの説明
### コンストラクタ
__例__
```js
const kuroshiro = new Kuroshiro();
```
### インスタンス関数
#### init(analyzer)
形態素解析器のインスタンスを引数にしてkuroshiroを初期化する。先に形態素解析器の導入と初期化は必要です。前述の[形態素解析器プラグイン](#形態素解析器プラグイン)を利用できます。形態素解析器の初期化方法は各自のドキュメントを参照してください。
__引数__
* `analyzer` - 形態素解析器のインスタンス。
__例__
```js
await kuroshiro.init(new KuromojiAnalyzer());
```
#### convert(str, [options])
文字列を目標音節文字に変換します(変換モードが設置できます)。
__引数__
* `str` - 変換される文字列。
* `options` - *任意* 変換のパラメータ。下表の通り。
| オプション | タイプ | デフォルト値 | 説明 |
|---|---|---|---|
| to | String | 'hiragana' | 目標音節文字<br />`hiragana` (ひらがな),<br />`katakana` (カタカナ),<br />`romaji` (ローマ字) |
| mode | String | 'normal' | 変換モード<br />`normal` (一般),<br />`spaced` (スペースで組み分け),<br />`okurigana` (送り仮名),<br />`furigana` (フリガナ) |
| romajiSystem<sup>*</sup> | String | "hepburn" | ローマ字<br />`nippon` (日本式),<br />`passport` (パスポート式),<br />`hepburn` (ヘボン式) |
| delimiter_start | String | '(' | 区切り文字 (始め) |
| delimiter_end | String | ')' | 区切り文字 (終り) |
**: 引数`romajiSystem`は引数`to`が`romaji`に設定されてる場合にのみ有効です。詳細については, [ローマ字表記法](#ローマ字表記法)を参考にしてください。*
__例__
```js
// normal (一般)
kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", {mode:"okurigana", to:"hiragana"});
// 結果:かんじとれたらてをつなごう、かさなるのはじんせいのライン and レミリアさいこう!
```
```js
// spaced (スペースで組み分け)
kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", {mode:"okurigana", to:"hiragana"});
// 結果:かんじとれ たら て を つなご う 、 かさなる の は じんせい の ライン and レミ リア さいこう !
```
```js
// okurigana (送り仮名)
kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", {mode:"okurigana", to:"hiragana"});
// 結果: 感(かん)じ取(と)れたら手(て)を繋(つな)ごう、重(かさ)なるのは人生(じんせい)のライン and レミリア最高(さいこう)!
```
<pre>
// furigana (フリガナ)
kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", {mode:"furigana", to:"hiragana"});
// 結果: <ruby>感<rp>(</rp><rt>かん</rt><rp>)</rp></ruby>じ<ruby>取<rp>(</rp><rt>と</rt><rp>)</rp></ruby>れたら<ruby>手<rp>(</rp><rt>て</rt><rp>)</rp></ruby>を<ruby>繋<rp>(</rp><rt>つな</rt><rp>)</rp></ruby>ごう、<ruby>重<rp>(</rp><rt>かさ</rt><rp>)</rp></ruby>なるのは<ruby>人生<rp>(</rp><rt>じんせい</rt><rp>)</rp></ruby>のライン and レミリア<ruby>最高<rp>(</rp><rt>さいこう</rt><rp>)</rp></ruby>!
</pre>
### 実用ツール
__例__
```js
const result = Kuroshiro.Util.isHiragana("あ"));
```
#### isHiragana(char)
入力文字はひらがなかどうかを判断します。
#### isKatakana(char)
入力文字はカタカナかどうかを判断します。
#### isKana(char)
入力文字は仮名かどうかを判断します。
#### isKanji(char)
入力文字は漢字かどうかを判断します。
#### isJapanese(char)
入力文字は日本語かどうかを判断します。
#### hasHiragana(str)
入力文字列にひらがながあるかどうかを確認する。
#### hasKatakana(str)
入力文字列にカタカナがあるかどうかを確認する。
#### hasKana(str)
入力文字列に仮名があるかどうかを確認する。
#### hasKanji(str)
入力文字列に漢字があるかどうかを確認する。
#### hasJapanese(str)
入力文字列に日本語があるかどうかを確認する。
#### kanaToHiragna(str)
入力仮名文字列をひらがなへ変換します。
#### kanaToKatakana(str)
入力仮名文字列をカタカナへ変換します。
#### kanaToRomaji(str, system)
入力仮名文字列をローマ字へ変換します。引数`system`の指定可能値は`"nippon"`, `"passport"`, `"hepburn"` (デフォルト値: "hepburn")
## ローマ字表記法
kuroshiroは三種類のローマ字表記法をサポートします。
`nippon`: 日本式ローマ字。[ISO 3602 Strict](http://www.age.ne.jp/x/nrs/iso3602/iso3602.html) を参照。
`passport`: パスポート式ローマ字。 日本外務省が発表した [ヘボン式ローマ字綴方表](https://www.ezairyu.mofa.go.jp/passport/hebon.html) を参照。
`hepburn`: ヘボン式ローマ字。[BS 4812 : 1972](https://archive.is/PiJ4) を参照。
各種ローマ字表の比較は[こちら](http://jgrammar.life.coocan.jp/ja/data/rohmaji2.htm)を参考にしてください。
### ローマ字変換のお知らせ
フリガナは音声を正確にあらわしていないため、__フリガナ__ を __ローマ字__ に完全自動的に変換することは不可能です。([なぜフリガナではダメなのか?](https://green.adam.ne.jp/roomazi/onamae.html#naze)を参照)
そのゆえ、任意のローマ字表記法を使って、フリガナ(仮名)-> ローマ字 変換を行うとき、kuroshiroは長音の処理を実行しません。(長音符は処理されます)
*例えば`nippon`、` passport`、 `hepburn`のローマ字表記法を使って フリガナ->ローマ字 変換を行うと、それぞれ"kousi"、 "koushi"、 "koushi"が得られます。*
フリガナモードを使うかどうかにかかわらず、漢字->ローマ字の変換はこの仕組みに影響を与えられないです。
## 貢献したい方
[CONTRIBUTING](CONTRIBUTING.md) を参考にしてみてください。
## 感謝
- kuromoji
- wanakana
## ライセンス
MIT
| [
"Syntactic Text Processing",
"Text Normalization"
] | [] |
true | https://github.com/scriptin/kanji-frequency | 2016-01-24T01:51:10Z | Kanji usage frequency data collected from various sources | scriptin / kanji-frequency
Datasets built from various Japanese language corpora
https://scriptin.github.io/kanji-frequency/ - see this
website for the dataset description. This readme
describes only technical aspects.
You can download the datasets here:
https://github.com/scriptin/kanji-
frequency/tree/master/data
You'll need Node.js 18 or later.
See scripts section in package.json.
Aozora:
aozora:download - use crawler/scraper to collect
the data
aozora:gaiji:extract - extract gaiji notations
data from scraped pages. Gaiji refers to kanji
characters which are replaced with images in the
documents, because Shift-JIS encoding cannot
represent them
aozora:gaiji:replacements - build gaiji
replacements file - produces only partial results,
which may need to be manually completed
aozora:clean - clean the scraped pages (apply
gaiji replacements)
aozora:count - create the dataset
Wikipedia:
wikipedia:fetch - fetch random pages using
MediaWiki API
wikipedia:count - create the dataset
News:
news:wikinews:fetch - fetch random pages from
Wikinews using MediaWiki API
news:count - create the dataset
news:dates - create additional file with dates of
articles
Building the website
See Astro docs and the scripts section in package.json.
| # Kanji usage frequency
Datasets built from various Japanese language corpora
<https://scriptin.github.io/kanji-frequency/> - see this website for the dataset description. This readme describes only technical aspects.
You can download the datasets here: <https://github.com/scriptin/kanji-frequency/tree/master/data>
## Building the datasets
You'll need Node.js 18 or later.
See `scripts` section in [package.json](./package.json).
Aozora:
- `aozora:download` - use crawler/scraper to collect the data
- `aozora:gaiji:extract` - extract gaiji notation data from scraped pages. Gaiji refers to kanji characters which are replaced with images in the documents, because the Shift-JIS encoding cannot represent them
- `aozora:gaiji:replacements` - build gaiji replacements file - produces only partial results, which may need to be manually completed
- `aozora:clean` - clean the scraped pages (apply gaiji replacements)
- `aozora:count` - create the dataset
Wikipedia:
- `wikipedia:fetch` - fetch random pages using MediaWiki API
- `wikipedia:count` - create the dataset
News:
- `news:wikinews:fetch` - fetch random pages from Wikinews using MediaWiki API
- `news:count` - create the dataset
- `news:dates` - create additional file with dates of articles
## Building the website
See [Astro](https://astro.build/) [docs](https://docs.astro.build/en/getting-started/) and the `scripts` section in [package.json](./package.json).
| [
"Information Extraction & Text Mining",
"Structured Data in NLP",
"Term Extraction"
] | [
"Annotation and Dataset Development"
] |
Dataset overview
This is a dataset for Japanese natural language processing with multi-label annotations of NLP research field labels for GitHub repositories. Please refer to this paper (written in Japanese) for the specific method used to construct the dataset.
Input and Output
- Input: Information from GitHub repositories (description, README text, PDF text, screenshot images)
- Output: Multi-label classification of NLP research fields
Problem Setting of the Dataset:
- Training Data: GitHub repositories from before 2022
- Test Data: GitHub repositories from 2023
- Objective: To predict multiple research field labels in the NLP domain
- We used the positive examples from awesome-japanese-nlp-classification-dataset.
Dataset Features:
- A multimodal dataset including GitHub summaries, README files, PDF files, and screenshot images.
- The annotation labels were assigned with reference to Exploring-NLP-Research.
If you need image data and PDF files, please submit a request to the community. This dataset uses text extracted from PDF files and does not include image data.
Based on GitHub's terms of service, please use this dataset for research purposes only.
How to use this dataset
Install the datasets
library.
pip install datasets
How to load in Python.
from datasets import load_dataset
dataset = load_dataset("taishi-i/awesome-japanese-nlp-multilabel-dataset")
# To access the dataset sample
print(dataset["train"][0])
Here is a sample of the dataset.
{
"is_exist": True,
"url": "https://github.com/scriptin/jmdict-simplified",
"created_at": "2016-02-07T16:34:32Z",
"description": "JMdict and JMnedict in JSON format",
"pdf_text": "scriptin / jmdict-simplified\nPublic\nBranches\nTags\n",
"readme_text": "# jmdict-simplified\n\n**[JMdict][], [JMnedict][], [Kanjidic][], and [Kradfile/Radkfile][Kradfile] in JSON format**<br>\n",
"nlp_taxonomy_classifier_labels": [
"Multilinguality"
],
"awesome_japanese_nlp_labels": [
"Annotation and Dataset Development",
"Vocabulary, Dictionary, and Language Input Method"
]
}
- `is_exist`: A boolean value indicating whether the GitHub repository exists (True if it exists) at the time of retrieval.
- `url`: The URL of the GitHub repository.
- `created_at`: The timestamp indicating when the repository was created (in ISO 8601 format).
- `description`: A short description of the repository provided on GitHub.
- `pdf_text`: Extracted text from a PDF file that contains a saved version of the GitHub repository's top page.
- `readme_text`: Extracted text from the repository's README file.
- `nlp_taxonomy_classifier_labels`: Manually annotated multi-label annotations of NLP research field labels.
- `awesome_japanese_nlp_labels`: Manually annotated multi-label annotations of research field labels specifically for Japanese natural language processing.
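As a quick sanity check, the label fields can be used directly for filtering. A minimal sketch using the `datasets` library (the label string must match the annotation exactly):

```python
from datasets import load_dataset

dataset = load_dataset("taishi-i/awesome-japanese-nlp-multilabel-dataset")

# Collect training repositories annotated with a given research field label.
target_label = "Text Segmentation"
matching_urls = [
    example["url"]
    for example in dataset["train"]
    if target_label in example["nlp_taxonomy_classifier_labels"]
]
print(len(matching_urls), "repositories labeled as", target_label)
```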
Details of the dataset.
DatasetDict({
train: Dataset({
features: ['is_exist', 'url', 'created_at', 'description', 'pdf_text', 'readme_text', 'nlp_taxonomy_classifier_labels', 'awesome_japanese_nlp_labels'],
num_rows: 407
})
validation: Dataset({
features: ['is_exist', 'url', 'created_at', 'description', 'pdf_text', 'readme_text', 'nlp_taxonomy_classifier_labels', 'awesome_japanese_nlp_labels'],
num_rows: 17
})
test: Dataset({
features: ['is_exist', 'url', 'created_at', 'description', 'pdf_text', 'readme_text', 'nlp_taxonomy_classifier_labels', 'awesome_japanese_nlp_labels'],
num_rows: 60
})
})
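To train a multi-label classifier such as the baseline below, the label lists first have to be turned into fixed-size multi-hot vectors. A minimal sketch with scikit-learn (scikit-learn is an assumption here, not part of the dataset's own tooling):

```python
from datasets import load_dataset
from sklearn.preprocessing import MultiLabelBinarizer

dataset = load_dataset("taishi-i/awesome-japanese-nlp-multilabel-dataset")

# Fit the binarizer on the training labels and reuse it for validation/test,
# so every split shares the same label-to-index mapping.
mlb = MultiLabelBinarizer()
y_train = mlb.fit_transform(dataset["train"]["nlp_taxonomy_classifier_labels"])
y_test = mlb.transform(dataset["test"]["nlp_taxonomy_classifier_labels"])

print("Number of distinct labels:", len(mlb.classes_))
print("Training target matrix shape:", y_train.shape)
```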
Baseline
The baseline model is TimSchopf/nlp_taxonomy_classifier.
The fine-tuned model was obtained by fine-tuning this baseline model on the training split of this dataset.
Classification Method | Model | Description | Dev Prec. | Dev Rec. | Dev F1 | Eval Prec. | Eval Rec. | Eval F1 |
---|---|---|---|---|---|---|---|---|
Random Prediction | - | - | 0.034 | 0.455 | 0.064 | 0.042 | 0.513 | 0.078 |
Baseline | TimSchopf/nlp_taxonomy_classifier | ✓ | 0.538 | 0.382 | 0.447 | 0.360 | 0.354 | 0.349 |
Fine-Tuning | TimSchopf/nlp_taxonomy_classifier | ✓ | 0.538 | 0.509 | 0.523 | 0.436 | 0.484 | 0.521 |
Zero-Shot | gpt-4o-2024-08-06 | ✓ | 0.560 | 0.255 | 0.350 | 0.476 | 0.184 | 0.265 |
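The precision/recall/F1 columns above are standard multi-label metrics. Given multi-hot targets and predictions (e.g. from the binarizer sketch above), they can be reproduced with scikit-learn; note that micro-averaging is an assumption here and the paper's exact averaging may differ:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

def multilabel_report(y_true, y_pred):
    """y_true / y_pred: multi-hot matrices, e.g. from MultiLabelBinarizer."""
    return {
        "precision": precision_score(y_true, y_pred, average="micro", zero_division=0),
        "recall": recall_score(y_true, y_pred, average="micro", zero_division=0),
        "f1": f1_score(y_true, y_pred, average="micro", zero_division=0),
    }
```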
License
We collect and publish this dataset under GitHub Acceptable Use Policies - 7. Information Usage Restrictions and GitHub Terms of Service - H. API Terms for research purposes. This dataset should be used solely for research verification purposes. Adhering to GitHub's regulations is mandatory.