liblinear-java (BSD 3-Clause License)

This is the Java version of LIBLINEAR.

The project site of the original C++ version is located at http://www.csie.ntu.edu.tw/~cjlin/liblinear/

The upstream changelog can be found at http://www.csie.ntu.edu.tw/~cjlin/liblinear/log

The upstream GitHub project can be found at https://github.com/cjlin1/liblinear

Dependencies

The only requirement is Java 8 or later.

Usage

<dependency>
    <groupId>de.bwaldvogel</groupId>
    <artifactId>liblinear</artifactId>
    <version>2.44</version>
</dependency>

Please be aware that the code would be written differently in various places if this were a pure Java project. However, I tried to stay as close as possible to the original C++ source code, mainly to keep it easy to port upstream changes and bug fixes.

Below follows a slightly modified version of the original README file. Please note that the README refers to the C++ version. As mentioned above, the Java version is used in almost the same way.

The most important entry points for programmatic usage are the static methods of the Linear class, in particular Linear.train and Linear.predict; a short sketch follows below.
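
The following is a minimal sketch of that API. It assumes the public classes of recent liblinear-java releases (Problem, Feature, FeatureNode, Parameter, SolverType, Model and the static methods of Linear); names and signatures may differ slightly between versions.

import de.bwaldvogel.liblinear.*;

public class LiblinearExample {
    public static void main(String[] args) {
        // Build a tiny training problem with two sparse instances and two features.
        // Feature indices start at 1.
        Problem problem = new Problem();
        problem.l = 2;     // number of training instances
        problem.n = 2;     // number of features
        problem.bias = -1; // no bias term (matches the default of the train program)
        problem.x = new Feature[][] {
            { new FeatureNode(1, 0.1), new FeatureNode(2, 0.8) },
            { new FeatureNode(1, 0.9), new FeatureNode(2, 0.2) },
        };
        problem.y = new double[] { 1, -1 }; // class labels

        // Corresponds to "-s 1 -c 1 -e 0.1" (the defaults of the train program).
        Parameter parameter = new Parameter(SolverType.L2R_L2LOSS_SVC_DUAL, 1.0, 0.1);
        Model model = Linear.train(problem, parameter);

        // Classify a new instance.
        Feature[] instance = { new FeatureNode(1, 0.85), new FeatureNode(2, 0.25) };
        double prediction = Linear.predict(model, instance);
        System.out.println("predicted label: " + prediction);
    }
}

The SolverType constants correspond to the -s numbers documented in the train Usage section below.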

Contributing

Please read the contributing guidelines if you want to contribute code to the project.

If you want to thank the author for this library or support the maintenance work, donations are welcome.



LIBLINEAR is a simple package for solving large-scale regularized linear classification, regression and outlier detection. It currently supports L2-regularized and L1-regularized linear classification (logistic regression and L1-/L2-loss SVM), L2-regularized support vector regression, and one-class SVM (see the solver types listed under train Usage below).

To get started, please read the Quick Start section first. For developers, please check the Library Usage section to learn how to integrate LIBLINEAR in your software.

When to use LIBLINEAR but not LIBSVM

There are some large data sets for which performance with and without nonlinear mappings is similar. Without using kernels, one can efficiently train a much larger set via linear classification/regression. These data usually have a large number of features. Document classification is an example.

Warning: While generally liblinear is very fast, its default solver may be slow under certain situations (e.g., data not scaled or C is large). See Appendix B of our SVM guide about how to handle such cases.

http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf

Warning: If you are a beginner and your data sets are not large, you should consider LIBSVM first.

LIBSVM page: http://www.csie.ntu.edu.tw/~cjlin/libsvm

Quick Start

See the section Installation for installing LIBLINEAR.

After installation, there are programs train and predict for training and testing, respectively.

For the data format, please check the README file of LIBSVM. Note that feature indices must start from 1, not 0.

A sample classification data included in this package is heart_scale.

Type train heart_scale, and the program will read the training data and output the model file heart_scale.model. If you have a test set called heart_scale.t, then type predict heart_scale.t heart_scale.model output to see the prediction accuracy. The output file contains the predicted class labels.
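
For the Java version there are, to my knowledge, equivalent command-line entry points in the Train and Predict classes; the invocation below is an assumption (the classpath and jar name are placeholders) rather than a documented command:

> java -cp liblinear-2.44.jar de.bwaldvogel.liblinear.Train heart_scale
> java -cp liblinear-2.44.jar de.bwaldvogel.liblinear.Predict heart_scale.t heart_scale.model output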

For more information about train and predict, see the sections train Usage and predict Usage.

To obtain good performance, sometimes one needs to scale the data. Please check the program svm-scale of LIBSVM. For large and sparse data, use -l 0 to keep the sparsity.

train Usage

Usage: train [options] training_set_file [model_file]
options:
-s type : set type of solver (default 1)
  for multi-class classification
     0 -- L2-regularized logistic regression (primal)
     1 -- L2-regularized L2-loss support vector classification (dual)
     2 -- L2-regularized L2-loss support vector classification (primal)
     3 -- L2-regularized L1-loss support vector classification (dual)
     4 -- support vector classification by Crammer and Singer
     5 -- L1-regularized L2-loss support vector classification
     6 -- L1-regularized logistic regression
     7 -- L2-regularized logistic regression (dual)
  for regression
    11 -- L2-regularized L2-loss support vector regression (primal)
    12 -- L2-regularized L2-loss support vector regression (dual)
    13 -- L2-regularized L1-loss support vector regression (dual)
  for outlier detection
    21 -- one-class support vector machine (dual)
-c cost : set the parameter C (default 1)
-p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)
-n nu : set the parameter nu of one-class SVM (default 0.5)
-e epsilon : set tolerance of termination criterion
    -s 0 and 2
        |f'(w)|_2 <= eps*min(pos,neg)/l*|f'(w0)|_2,
        where f is the primal function and pos/neg are # of
        positive/negative data (default 0.01)
    -s 11
        |f'(w)|_2 <= eps*|f'(w0)|_2 (default 0.0001)
    -s 1, 3, 4, 7, and 21
        Dual maximal violation <= eps; similar to libsvm (default 0.1 except 0.01 for -s 21)
    -s 5 and 6
        |f'(w)|_1 <= eps*min(pos,neg)/l*|f'(w0)|_1,
        where f is the primal function (default 0.01)
    -s 12 and 13
        |f'(alpha)|_1 <= eps*|f'(alpha0)|_1,
        where f is the dual function (default 0.1)
-B bias : if bias >= 0, instance x becomes [x; bias]; if < 0, no bias term added (default -1)
-R : not regularize the bias; must with -B 1 to have the bias; DON'T use this unless you know what it is
	(for -s 0, 2, 5, 6, 11)
-wi weight: weights adjust the parameter C of different classes (see README for details)
-v n: n-fold cross validation mode
-C : find parameters (C for -s 0, 2 and C, p for -s 11)
-q : quiet mode (no outputs)

Option -v randomly splits the data into n parts and calculates cross validation accuracy on them.
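
In the Java API, cross validation is, to my knowledge, exposed as a static method on Linear. The fragment below is a hedged sketch: it assumes the signature Linear.crossValidation(Problem, Parameter, int nrFold, double[] target), mirroring the C function cross_validation, and reuses the problem and parameter objects from the earlier sketch.

// Assumed method: Linear.crossValidation(Problem, Parameter, int, double[]).
// target receives the cross-validated prediction for each training instance.
double[] target = new double[problem.l];
Linear.crossValidation(problem, parameter, 5, target);

int correct = 0;
for (int i = 0; i < problem.l; i++) {
    if (target[i] == problem.y[i]) {
        correct++;
    }
}
System.out.println("5-fold CV accuracy: " + (100.0 * correct / problem.l) + "%");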

Option -C conducts cross validation under different parameters and finds the best one. This option is supported only by -s 0, -s 2 (for finding C) and -s 11 (for finding C, p). If the solver is not specified, -s 2 is used.

Formulations:

For L2-regularized logistic regression (-s 0), we solve

min_w w^Tw/2 + C \sum log(1 + exp(-y_i w^Tx_i))

For L2-regularized L2-loss SVC dual (-s 1), we solve

min_alpha  0.5(alpha^T (Q + I/2/C) alpha) - e^T alpha
    s.t.   0 <= alpha_i,

For L2-regularized L2-loss SVC (-s 2), we solve

min_w w^Tw/2 + C \sum max(0, 1- y_i w^Tx_i)^2

For L2-regularized L1-loss SVC dual (-s 3), we solve

min_alpha  0.5(alpha^T Q alpha) - e^T alpha
    s.t.   0 <= alpha_i <= C,

For L1-regularized L2-loss SVC (-s 5), we solve

min_w \sum |w_j| + C \sum max(0, 1- y_i w^Tx_i)^2

For L1-regularized logistic regression (-s 6), we solve

min_w \sum |w_j| + C \sum log(1 + exp(-y_i w^Tx_i))

For L2-regularized logistic regression (-s 7), we solve

min_alpha  0.5(alpha^T Q alpha) + \sum alpha_i*log(alpha_i) + \sum (C-alpha_i)*log(C-alpha_i) - a constant
    s.t.   0 <= alpha_i <= C,

where

Q is a matrix with Q_ij = y_i y_j x_i^T x_j.

For L2-regularized L2-loss SVR (-s 11), we solve

min_w w^Tw/2 + C \sum max(0, |y_i-w^Tx_i|-epsilon)^2

For L2-regularized L2-loss SVR dual (-s 12), we solve

min_beta  0.5(beta^T (Q + lambda I/2/C) beta) - y^T beta + \sum |beta_i|

For L2-regularized L1-loss SVR dual (-s 13), we solve

min_beta  0.5(beta^T Q beta) - y^T beta + \sum |beta_i|
    s.t.   -C <= beta_i <= C,

where

Q is a matrix with Q_ij = x_i^T x_j.

For one-class SVM dual (-s 21), we solve

min_alpha 0.5(alpha^T Q alpha)
    s.t.   0 <= alpha_i <= 1 and \sum alpha_i = nu*l,

where

Q is a matrix with Q_ij = x_i^T x_j.

If bias >= 0, w becomes [w; w_{n+1}] and x becomes [x; bias]. For example, L2-regularized logistic regression (-s 0) becomes

min_w w^Tw/2 + (w_{n+1})^2/2 + C \sum log(1 + exp(-y_i [w; w_{n+1}]^T[x_i; bias]))

Some may prefer not having (w_{n+1})^2/2 (i.e., bias variable not regularized). For primal solvers (-s 0, 2, 5, 6, 11), we provide an option -R to remove (w_{n+1})^2/2. However, -R is generally not needed as for most data with/without (w_{n+1})^2/2 give similar performances.

The primal-dual relationship implies that -s 1 and -s 2 give the same model, -s 0 and -s 7 give the same, and -s 11 and -s 12 give the same.

We implement the one-vs-the-rest multi-class strategy for classification. In training class i vs. non-class-i, the C parameters are (weight from -wi)*C and C, respectively. If there are only two classes, we train only one model, so weight1*C vs. weight2*C is used. See the examples below.

We also implement multi-class SVM by Crammer and Singer (-s 4):

min_{w_m, \xi_i}  0.5 \sum_m ||w_m||^2 + C \sum_i \xi_i
    s.t.  w^T_{y_i} x_i - w^T_m x_i >= e^m_i - \xi_i \forall m,i

where e^m_i = 0 if y_i  = m,
      e^m_i = 1 if y_i != m,

Here we solve the dual problem:

min_{\alpha}  0.5 \sum_m ||w_m(\alpha)||^2 + \sum_i \sum_m e^m_i alpha^m_i
    s.t.  \alpha^m_i <= C^m_i \forall m,i , \sum_m \alpha^m_i=0 \forall i

where w_m(\alpha) = \sum_i \alpha^m_i x_i,
and C^m_i = C if m  = y_i,
    C^m_i = 0 if m != y_i.

predict Usage

Usage: predict [options] test_file model_file output_file
options:
-b probability_estimates: whether to output probability estimates, 0 or 1 (default 0); currently for logistic regression only
-q : quiet mode (no outputs)

Note that -b is only needed in the prediction phase. This is different from the setting of LIBSVM.
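
In the Java API, probability estimates are, to my knowledge, obtained via Linear.predictProbability. The fragment below is a hedged sketch: the signature Linear.predictProbability(Model, Feature[], double[]) returning the predicted label and the Model.getNrClass() accessor are assumptions based on how the Java port mirrors the C API, and the problem and instance objects come from the earlier sketch. A model trained with a logistic regression solver (-s 0, 6, or 7) is required.

// Assumed method: Linear.predictProbability(Model, Feature[], double[]).
// probabilities[i] is the estimated probability of the model's i-th class label.
Model lrModel = Linear.train(problem, new Parameter(SolverType.L2R_LR, 1.0, 0.01));
double[] probabilities = new double[lrModel.getNrClass()];
double label = Linear.predictProbability(lrModel, instance, probabilities);
System.out.println("label " + label + ", probabilities " + java.util.Arrays.toString(probabilities));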

Examples

> train data_file

Train linear SVM with L2-loss function.

> train -s 0 data_file

Train a logistic regression model.

> train -s 21 -n 0.1 data_file

Train a linear one-class SVM which selects roughly 10% data as outliers.

> train -v 5 -e 0.001 data_file

Do five-fold cross-validation using L2-loss SVM. Use a stopping tolerance of 0.001, smaller than the default 0.1, if you want more accurate solutions.

> train -C data_file

Conduct cross validation many times by L2-loss SVM and find the parameter C which achieves the best cross validation accuracy.

> train -C -s 0 -v 3 -c 0.5 -e 0.0001 data_file

For parameter selection by -C, users can specify other solvers (currently -s 0, -s 2 and -s 11 are supported) and a different number of CV folds. Further, users can use the -c option to specify the smallest C value of the search range. This option is useful when users want to rerun the parameter selection procedure from a specified C under a different setting, such as a stricter stopping tolerance -e 0.0001 in the above example. Similarly, for -s 11, users can use the -p option to specify the maximal p value of the search range.

> train -c 10 -w1 2 -w2 5 -w3 2 four_class_data_file

Train four classifiers:

positive        negative        Cp      Cn
class 1         class 2,3,4.    20      10
class 2         class 1,3,4.    50      10
class 3         class 1,2,4.    20      10
class 4         class 1,2,3.    10      10

> train -c 10 -w3 1 -w2 5 two_class_data_file

If there are only two classes, we train ONE model. The C values for the two classes are 10 and 50.
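
In the Java API, per-class weights corresponding to the -wi options can, to my knowledge, be set on the Parameter object. The fragment below is a hedged sketch: it assumes a Parameter.setWeights(double[] weights, int[] weightLabels) method and reproduces the first weighted example above (-c 10 -w1 2 -w2 5 -w3 2).

// Assumed method: Parameter.setWeights(double[] weights, int[] weightLabels).
// Equivalent of "train -c 10 -w1 2 -w2 5 -w3 2 four_class_data_file":
Parameter weighted = new Parameter(SolverType.L2R_L2LOSS_SVC_DUAL, 10.0, 0.1);
weighted.setWeights(new double[] { 2, 5, 2 }, new int[] { 1, 2, 3 });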

> predict -b 1 test_file data_file.model output_file

Output probability estimates (for logistic regression only).

Library Usage

These functions and structures are declared in the header file linear.h. You can see train.c and predict.c for examples showing how to use them. We define LIBLINEAR_VERSION and declare extern int liblinear_version; in linear.h, so you can check the version number.

Additional Information

If you find LIBLINEAR helpful, please cite it as

R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin.
LIBLINEAR: A Library for Large Linear Classification, Journal of
Machine Learning Research 9(2008), 1871-1874. Software available at
http://www.csie.ntu.edu.tw/~cjlin/liblinear

For any questions and comments, please send your email to cjlin@csie.ntu.edu.tw