Rakuten MA

Japanese README (日本語ドキュメント)

Introduction

Rakuten MA (morphological analyzer) is a morphological analyzer (word segmenter + PoS tagger) for Chinese and Japanese written purely in JavaScript.

Rakuten MA has the following unique features:

  - Pure JavaScript implementation, which works on modern browsers as well as on Node.js.
  - A language-independent character tagging model, which supports word segmentation and PoS tagging for both Chinese and Japanese.
  - Incremental model updates via online learning (Soft Confidence Weighted; Wang et al. 2012), which makes it possible to re-train a model for domain adaptation or for fixing analysis errors.
  - A customizable feature set (see "Supported feature templates" in the Appendix).
  - Compact model representation via feature hashing and quantization.

Demo

You can try Rakuten MA on the demo page. (It may take a while to load this page.)

Usage

Download & Install

Since Rakuten MA is a JavaScript library, there's no need for installation. Clone the git repository as

git clone https://github.com/rakuten-nlp/rakutenma.git

or download the zip archive from here: https://github.com/rakuten-nlp/rakutenma/archive/master.zip

If you have Node.js installed, you can run the demo by

node demo.js

which is identical to the usage example below.

npm package

You can also use Rakuten MA as an npm package. You can install it by:

npm install rakutenma

The model files can be found under node_modules/rakutenma/.
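If you installed Rakuten MA via npm, a minimal sketch like the following should work (this assumes require('rakutenma') exposes the same RakutenMA constructor as rakutenma.js, and uses the bundled model path mentioned above):

// Load Rakuten MA and a bundled model from the npm package
var RakutenMA = require('rakutenma');
var fs = require('fs');

var model = JSON.parse(fs.readFileSync('node_modules/rakutenma/model_ja.json'));
var rma = new RakutenMA(model);
rma.featset = RakutenMA.default_featset_ja;
rma.hash_func = RakutenMA.create_hash_func(15);  // bundled models use 15-bit feature hashing
console.log(rma.tokenize('彼は新しい仕事できっと成功するだろう。'));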

Usage Example (on Node.js)

// RakutenMA demo

// Load necessary libraries
var RakutenMA = require('./rakutenma');
var fs = require('fs');

// Initialize a RakutenMA instance
// with an empty model and the default ja feature set
var rma = new RakutenMA();
rma.featset = RakutenMA.default_featset_ja;

// Let's analyze a sample sentence (from http://tatoeba.org/jpn/sentences/show/103809)
// With a disastrous result, since the model is empty!
console.log(rma.tokenize("彼は新しい仕事できっと成功するだろう。"));

// Feed the model with ten sample sentences from tatoeba.org
var tatoeba = JSON.parse(fs.readFileSync("tatoeba.json"));
for (var i = 0; i < 10; i++) {
    rma.train_one(tatoeba[i]);
}

// Now what does the result look like?
console.log(rma.tokenize("彼は新しい仕事できっと成功するだろう。"));

// Initialize a RakutenMA instance with a pre-trained model
var model = JSON.parse(fs.readFileSync("model_ja.json"));
rma = new RakutenMA(model, 1024, 0.007812);  // Specify hyperparameters for SCW (for demonstration purposes)
rma.featset = RakutenMA.default_featset_ja;

// Set the feature hash function (15bit)
rma.hash_func = RakutenMA.create_hash_func(15);

// Tokenize one sample sentence
console.log(rma.tokenize("うらにわにはにわにわとりがいる"));

// Re-train the model feeding the right answer (pairs of [token, PoS tag])
var res = rma.train_one(
        [["うらにわ","N-nc"],
         ["に","P-k"],
         ["は","P-rj"],
         ["にわ","N-n"],
         ["にわとり","N-nc"],
         ["が","P-k"],
         ["いる","V-c"]]);
// The result of train_one contains:
//   sys: the system output (using the current model)
//   ans: answer fed by the user
//   update: whether the model was updated
console.log(res);

// Now what does the result look like?
console.log(rma.tokenize("うらにわにはにわにわとりがいる"));

Usage Example (on browsers)

Include the following code snippet in the <head> of your HTML.

<script type="text/javascript" src="rakutenma.js" charset="UTF-8"></script>
<script type="text/javascript" src="model_ja.js" charset="UTF-8"></script>
<script type="text/javascript" src="hanzenkaku.js" charset="UTF-8"></script>
<script type="text/javascript" charset="UTF-8">
  function Segment() {

    var rma = new RakutenMA(model);
    rma.featset = RakutenMA.default_featset_ja;
    rma.hash_func = RakutenMA.create_hash_func(15);

    var textarea = document.getElementById("input");
    var result = document.getElementById("output");
    var tokens = rma.tokenize(HanZenKaku.hs2fs(HanZenKaku.hw2fw(HanZenKaku.h2z(textarea.value))));

    result.style.display = 'block';
    result.innerHTML = RakutenMA.tokens2string(tokens);
  }

</script>

The input form and the result display look like this:

<textarea id="input" cols="80" rows="5"></textarea>
<input type="submit" value="Analyze" onclick="Segment()">
<div id="output"></div>

Using bundled models to analyze Chinese/Japanese sentences

  1. Load an existing model, e.g., model = JSON.parse(fs.readFileSync("model_file")); then rma = new RakutenMA(model); or rma.set_model(model);
  2. Specify featset depending on your language (e.g., rma.featset = RakutenMA.default_featset_zh; for Chinese and rma.featset = RakutenMA.default_featset_ja; for Japanese).
  3. Remember to use 15-bit feature hashing function (rma.hash_func = RakutenMA.create_hash_func(15);) when using the bundled models (model_zh.json and model_ja.json).
  4. Use rma.tokenize(input) to analyze your input, as in the sketch below.
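Putting these steps together, a minimal sketch for Japanese on Node.js (assuming model_ja.json is in the current directory) looks like this:

var RakutenMA = require('./rakutenma');
var fs = require('fs');

// Step 1: load an existing model
var model = JSON.parse(fs.readFileSync("model_ja.json"));
var rma = new RakutenMA(model);

// Step 2: specify the feature set for your language
rma.featset = RakutenMA.default_featset_ja;

// Step 3: use the 15-bit feature hashing function with the bundled models
rma.hash_func = RakutenMA.create_hash_func(15);

// Step 4: analyze the input
console.log(rma.tokenize("うらにわにはにわにわとりがいる"));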

Training your own analysis model from scratch

  1. Prepare your training corpus (a set of training sentences, where a sentence is just an array of correct [token, PoS tag] pairs).
  2. Initialize a RakutenMA instance with new RakutenMA().
  3. Specify featset. (and optionally, ctype_func, hash_func, etc.)
  4. Feed your training sentences one by one (from the first one to the last) to the train_one(sent) method.
  5. Usually SCW converges well enough after one epoch (one pass through the entire training corpus), but you can repeat Step 4 to achieve even better performance.

See scripts/train_zh.js (for Chinese) and scripts/train_ja.js (for Japanese) for examples showing how to train your own model.
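For illustration, here is a minimal training loop over a corpus in the same format as tatoeba.json (an array of sentences, each of which is an array of [token, PoS tag] pairs; my_corpus.json is a hypothetical file name):

var RakutenMA = require('./rakutenma');
var fs = require('fs');

// my_corpus.json (hypothetical): an array of training sentences,
// each an array of [token, PoS tag] pairs
var corpus = JSON.parse(fs.readFileSync("my_corpus.json"));

// Steps 2 and 3: an empty model with the default Japanese feature set
var rma = new RakutenMA();
rma.featset = RakutenMA.default_featset_ja;

// Step 4: one epoch of SCW training; repeat this loop for more epochs
for (var i = 0; i < corpus.length; i++) {
    rma.train_one(corpus[i]);
}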

Re-training an existing model (domain adaptation, fixing errors, etc.)

  1. Load an existing model and initialize a RakutenMA instance. (see "Using bundled models to analyze Chinese/Japanese sentences" above)
  2. Prepare your training data (this could be as few as a couple of sentences, depending on what and how much you want to "re-train".)
  3. Feed your training sentences one by one to the train_one(sent) method, as in the sketch below.
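For example, the following sketch re-trains the bundled Japanese model with one corrected sentence and saves the result (this assumes train_one updates the model object passed to the constructor in place, so serializing that object captures the update; model_ja_retrained.json is a hypothetical output file name):

var RakutenMA = require('./rakutenma');
var fs = require('fs');

// Step 1: load an existing model and initialize a RakutenMA instance
var model = JSON.parse(fs.readFileSync("model_ja.json"));
var rma = new RakutenMA(model);
rma.featset = RakutenMA.default_featset_ja;
rma.hash_func = RakutenMA.create_hash_func(15);

// Steps 2 and 3: feed the corrected sentences one by one
rma.train_one(
    [["うらにわ","N-nc"], ["に","P-k"], ["は","P-rj"], ["にわ","N-n"],
     ["にわとり","N-nc"], ["が","P-k"], ["いる","V-c"]]);

// Save the re-trained model (assumption: train_one mutates `model` in place)
fs.writeFileSync("model_ja_retrained.json", JSON.stringify(model));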

Reducing the model size

The model size could still be a problem for client-side distribution even after applying feature hashing. We include a script, scripts/minify.js, which applies feature quantization (see [Hagiwara and Sekine COLING 2014] for the details) to reduce the trained model size.

You can run node scripts/minify.js [input_model_file] [output_model_file] to make a minified version of the model file. Remember: minification also deletes the "sigma" part of the trained model, meaning that you can no longer re-train a minified model. If necessary, re-train the model first, then minify it.
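For example, to minify the bundled Japanese model (the output file name here is arbitrary):

node scripts/minify.js model_ja.json model_ja.min.json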

API Documentation

Constructor

RakutenMA(model, phi, c)
    Creates a new RakutenMA instance. model (optional) specifies the model object to initialize the instance with. phi and c (both optional) are hyperparameters of SCW (default: phi = 2048, c = 0.003906).

Methods

tokenize(input)
    Tokenizes input (string) and returns the tokenized result ([token, PoS tag] pairs).

train_one(sent)
    Updates the current model (if necessary) using the given answer sent ([token, PoS tag] pairs). The return value is an object with three properties: ans, sys, and updated, where ans is the given answer (same as sent), sys is the system output using the (old) model, and updated is a binary (true/false) flag meaning whether the model was updated (because sys was different from ans) or not.

set_model(model)
    Sets the Rakuten MA instance's model to model.

set_tag_scheme(scheme)
    Sets the sequential labeling tag scheme. Currently, "IOB2" and "SBIEO" are supported. Specifying other tag schemes causes an exception.

Properties

featset
    Specifies an array of feature templates (string) used for analysis. You can use RakutenMA.default_featset_ja and RakutenMA.default_featset_zh as the default feature sets for Japanese and Chinese, respectively. See "Supported feature templates" in the Appendix for the details of feature templates.

ctype_func
    Specifies the function used to convert a character to its character type. RakutenMA.ctype_ja_default_func is the default character type function used for Japanese. Alternatively, you can call RakutenMA.create_ctype_chardic_func(chardic) to create a character type function which takes a character, looks it up in chardic, and returns its value. (For example, RakutenMA.create_ctype_chardic_func({"A": "type1"}) returns a function f where f("A") returns "type1" and [] otherwise.)

hash_func
    Specifies the hash function to use for feature hashing. Default = undefined (no feature hashing). A feature hashing function with a bit-bit hash space can be created by calling RakutenMA.create_hash_func(bit).
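For instance, a custom character type function and a different tag scheme can be plugged into an instance as follows (the character-to-type mapping below is purely illustrative):

// Map digits to a custom character type; unlisted characters get []
rma.ctype_func = RakutenMA.create_ctype_chardic_func({"1": "num", "2": "num"});

// Switch the sequential labeling tag scheme to IOB2
rma.set_tag_scheme("IOB2");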

Terms and Conditions

Distribution, modification, and academic/commercial use of Rakuten MA is permitted, provided that you comply with the Apache License, version 2.0: http://www.apache.org/licenses/LICENSE-2.0.html.

If you are using Rakuten MA for research purposes, please cite our paper on Rakuten MA [Hagiwara and Sekine 2014].

FAQ (Frequently Asked Questions)

Q. What are the supported browsers and Node.js versions?

A. Rakuten MA is written purely in JavaScript and runs on modern browsers as well as on Node.js (see "Usage" above).

Q. Is commercial use permitted?

A. Yes, as long as you follow the terms and conditions. See "Terms and Conditions" above for the details.

Q. I found a bug / analysis error / etc. Where should I report it?

A. Please create an issue at the GitHub repository (https://github.com/rakuten-nlp/rakutenma).

Q. My tokenization results look strange (specifically, the sentence is split up into individual characters with no PoS tags). What is wrong?

A. This usually means that the model is empty or was not applied correctly. Make sure that you loaded a model, specified featset for your language, and, when using the bundled models, set the 15-bit feature hashing function (see "Using bundled models to analyze Chinese/Japanese sentences" above).

Q. What scripts (Simplified/Traditional) are supported for Chinese?

A. Simplified Chinese. The bundled Chinese model is trained on the Penn Chinese Treebank (see References), which uses Simplified Chinese.

Q. Can we use the same model file in the JSON format for browsers?

A. The model data itself is the same, but on browsers the JSON content needs to be assigned to a JavaScript variable so that it can be loaded via a <script> tag. Compare model_ja.json (read directly in the Node.js example) with model_ja.js (included via a <script> tag in the browser example above).

Appendix

Supported feature templates

Feature template   Description
w7                 Character unigram (c-3)
w8                 Character unigram (c-2)
w9                 Character unigram (c-1)
w0                 Character unigram (c0)
w1                 Character unigram (c+1)
w2                 Character unigram (c+2)
w3                 Character unigram (c+3)
c7                 Character type unigram (t-3)
c8                 Character type unigram (t-2)
c9                 Character type unigram (t-1)
c0                 Character type unigram (t0)
c1                 Character type unigram (t+1)
c2                 Character type unigram (t+2)
c3                 Character type unigram (t+3)
b7                 Character bigram (c-3 c-2)
b8                 Character bigram (c-2 c-1)
b9                 Character bigram (c-1 c0)
b1                 Character bigram (c0 c+1)
b2                 Character bigram (c+1 c+2)
b3                 Character bigram (c+2 c+3)
d7                 Character type bigram (t-3 t-2)
d8                 Character type bigram (t-2 t-1)
d9                 Character type bigram (t-1 t0)
d1                 Character type bigram (t0 t+1)
d2                 Character type bigram (t+1 t+2)
d3                 Character type bigram (t+2 t+3)
others             If you specify a customized feature function in the featset array, the function will be called with two arguments _t and i, where _t is a function which takes a position j and returns the character object at that position, and i is the current position. A character object is an object with two properties c and t, which are the character and the character type, respectively. The return value of the function is used as the feature value. (For example, if you specify a function f(_t, i) which returns _t(i).t, then it returns the character type of the current position, which is basically the same as the template c0.)
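For example, a customized feature function can be appended to a default feature set as follows (this particular function just replicates the c0 template, returning the character type of the current position):

// Extend the default Japanese feature set with a custom feature function
rma.featset = RakutenMA.default_featset_ja.concat([
    function (_t, i) { return _t(i).t; }  // same as the c0 template
]);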

PoS tag list in Chinese

Tag        Description
AD         Adverb
AS         Aspect Particle
BA         ba3 (in ba-construction)
CC         Coordinating conjunction
CD         Cardinal number
CS         Subordinating conjunction
DEC        de5 (Complementizer/Nominalizer)
DEG        de5 (Genitive/Associative)
DER        de5 (Resultative)
DEV        de5 (Manner)
DT         Determiner
ETC        Others
FW         Foreign word
IJ         Interjection
JJ         Other noun-modifier
LB         bei4 (in long bei-construction)
LC         Localizer
M          Measure word
MSP        Other particle
NN         Other noun
NN-SHORT   Other noun (abbrev.)
NR         Proper noun
NR-SHORT   Proper noun (abbrev.)
NT         Temporal noun
NT-SHORT   Temporal noun (abbrev.)
OD         Ordinal number
ON         Onomatopoeia
P          Preposition
PN         Pronoun
PU         Punctuation
SB         bei4 (in short bei-construction)
SP         Sentence-final Particle
URL        URL
VA         Predicative adjective
VC         Copula
VE         you3 (Main verb)
VV         Other verb
X          Others

PoS tag list in Japanese and correspondence to BCCWJ tags

Tag      Original JA name  English
A-c      形容詞-一般  Adjective-Common
A-dp     形容詞-非自立可能  Adjective-Dependent
C        接続詞  Conjunction
D        代名詞  Pronoun
E        英単語  English word
F        副詞  Adverb
I-c      感動詞-一般  Interjection-Common
J-c      形状詞-一般  Adjectival Noun-Common
J-tari   形状詞-タリ  Adjectival Noun-Tari
J-xs     形状詞-助動詞語幹  Adjectival Noun-AuxVerb stem
M-aa     補助記号-AA  Auxiliary sign-AA
M-c      補助記号-一般  Auxiliary sign-Common
M-cp     補助記号-括弧閉  Auxiliary sign-Close Parenthesis
M-op     補助記号-括弧開  Auxiliary sign-Open Parenthesis
M-p      補助記号-句点  Auxiliary sign-Period
N-n      名詞-名詞的  Noun-Noun
N-nc     名詞-普通名詞  Noun-Common Noun
N-pn     名詞-固有名詞  Noun-Proper Noun
N-xs     名詞-助動詞語幹  Noun-AuxVerb stem
O        その他  Others
P        接頭辞  Prefix
P-fj     助詞-副助詞  Particle-Adverbial
P-jj     助詞-準体助詞  Particle-Phrasal
P-k      助詞-格助詞  Particle-Case Marking
P-rj     助詞-係助詞  Particle-Binding
P-sj     助詞-接続助詞  Particle-Conjunctive
Q-a      接尾辞-形容詞的  Suffix-Adjective
Q-j      接尾辞-形状詞的  Suffix-Adjectival Noun
Q-n      接尾辞-名詞的  Suffix-Noun
Q-v      接尾辞-動詞的  Suffix-Verb
R        連体詞  Adnominal adjective
S-c      記号-一般  Sign-Common
S-l      記号-文字  Sign-Letter
U        URL  URL
V-c      動詞-一般  Verb-Common
V-dp     動詞-非自立可能  Verb-Dependent
W        空白  Whitespace
X        助動詞  AuxVerb

Acknowledgements

The developers would like to thank Satoshi Sekine, Satoko Marumoto, Yoichi Yoshimoto, Keiji Shinzato, Keita Yaegashi, and Soh Masuko for their contribution to this project.

References

Masato Hagiwara and Satoshi Sekine. Lightweight Client-Side Chinese/Japanese Morphological Analyzer Based on Online Learning. COLING 2014 Demo Session, pages 39–43, 2014. [PDF]

Kikuo Maekawa. Compilation of the Kotonoha-BCCWJ corpus (in Japanese). Nihongo no kenkyu (Studies in Japanese), 4(1):82–95, 2008. (Some English information can be found here.) [Site]

Jialei Wang, Peilin Zhao, and Steven C. Hoi. Exact soft confidence-weighted learning. In Proc. of ICML 2012, pages 121–128, 2012. [PDF]

Naiwen Xue, Fei Xia, Fu-dong Chiou, and Martha Palmer. The Penn Chinese Treebank: Phrase structure annotation of a large corpus. Natural Language Engineering, 11(2):207–238, 2005. [PDF] [Site]


© 2014, 2015 Rakuten NLP Project. All Rights Reserved. / Sponsored by Rakuten, Inc. and Rakuten Institute of Technology.