Using SVMstruct


Download: http://www.pudn.com/downloads96/sourcecode/chinese/detail392522.html


SVMstruct includes SVMlight. SVMlight can only perform binary classification, while SVMstruct can perform multi-class classification: simply use SVMmulticlass.

The code compiles without problems. Input and output are plain text files.
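For SVMmulticlass, for example, each line of the training file holds one example in the SVMlight feature:value format, with the target being the class number (the values below are made up for illustration):

      3 1:0.43 3:0.12 37:0.2
      1 2:0.8 5:1.0
      2 1:0.1 4:0.6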


Overview

SVMstruct is a Support Vector Machine (SVM) algorithm for predicting multivariate outputs. It performs supervised learning by approximating a mapping

h: X --> Y

using labeled training examples (x1,y1), ..., (xn,yn). Unlike regular SVMs, however, which consider only univariate predictions as in classification and regression, SVMstruct can predict complex objects y such as trees, sequences, or sets. Examples of problems with complex outputs are natural language parsing, sequence alignment in protein homology detection, and Markov models for part-of-speech tagging.

The sparse approximation algorithm implemented in SVMstruct is described in [1][2]. The implementation is based on the SVMlight quadratic optimizer [3].

Existing Instantiations

SVMstruct can be thought of as an API for implementing different kinds of complex prediction algorithms. Currently, we have implemented the following learning tasks:

  • SVMmulticlass: Multi-class classification. Learns to predict one of k mutually exclusive classes. This is probably the simplest possible instance of SVMstruct and serves as a tutorial example of how to use the programming interface.
    More information and source code.
  • SVMcfg: Learns a weighted context free grammar from examples. Training examples (e.g. for natural language parsing) specify the sentence along with the correct parse tree. The goal is to predict the parse tree of new sentences. 
    More information and source code.
  • SVMalign: Learning to align sequences. Given examples of how sequence pairs align, the goal is to learn the substitution matrix as well as the insertion and deletion costs of operations so that one can predict alignments of new sequences. 
    More information and source code.
  • SVMhmm: Learns a Markov model from examples. Training examples (e.g. for part-of-speech tagging) specify the sequence of words along with the correct assignment of tags (i.e. states). The goal is to predict the tag sequences for new sentences. 
    More information and source code coming soon.
Please let me know if you want me to add your implementation to this list.

Source Code for Implementing your Own Instantiation

Instead of using one of the existing instantiations of SVMstruct listed above, you can implement your own. SVMstruct contains an API that lets you specialize the general sparse approximation training algorithm for your particular application. Referring to the algorithm as presented in [1], you merely need to provide the code for the following (a conceptual C sketch follows the list):
  • A function for computing the feature vector Psi.
  • A function for computing the argmax over the (kernelized) linear discriminant function.
  • A loss function.
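To make these pieces concrete, here is a minimal, self-contained C sketch of what the three functions compute for a multiclass task in the style of SVMmulticlass. All types and signatures are simplified for illustration; the real API passes PATTERN, LABEL, STRUCTMODEL, and STRUCT_LEARN_PARM structs, as declared in svm_struct_api.h:

#include <stdio.h>

#define NUM_FEATURES 3
#define NUM_CLASSES  4

/* Psi(x,y): stack the feature vector of x into the block of the
   joint feature vector that corresponds to class y (all other
   blocks stay zero). */
static void psi(const double *x, int y, double *out)
{
    int i;
    for (i = 0; i < NUM_FEATURES * NUM_CLASSES; i++)
        out[i] = 0.0;
    for (i = 0; i < NUM_FEATURES; i++)
        out[y * NUM_FEATURES + i] = x[i];
}

/* Argmax over the linear discriminant w * Psi(x,y): for multiclass
   this is simply the class whose weight block scores highest on x. */
static int argmax(const double *w, const double *x)
{
    int y, i, best = 0;
    double score, best_score = -1e300;
    for (y = 0; y < NUM_CLASSES; y++) {
        score = 0.0;
        for (i = 0; i < NUM_FEATURES; i++)
            score += w[y * NUM_FEATURES + i] * x[i];
        if (score > best_score) { best_score = score; best = y; }
    }
    return best;
}

/* Zero/one loss between the correct label y and a prediction ybar. */
static double loss(int y, int ybar)
{
    return (y == ybar) ? 0.0 : 1.0;
}

int main(void)
{
    /* Pretend this weight vector was learned; one block per class. */
    double w[NUM_FEATURES * NUM_CLASSES] = {
        0.2, 0.1, 0.0,   0.0, 0.5, 0.1,   0.3, 0.0, 0.4,   0.1, 0.1, 0.1 };
    double x[NUM_FEATURES] = { 1.0, 0.0, 2.0 };
    double jointvec[NUM_FEATURES * NUM_CLASSES];
    double score = 0.0;
    int i, yhat;

    yhat = argmax(w, x);
    psi(x, yhat, jointvec);
    for (i = 0; i < NUM_FEATURES * NUM_CLASSES; i++)
        score += w[i] * jointvec[i];
    printf("predicted class %d, w*Psi(x,yhat) = %.2f, loss vs. class 2: %.1f\n",
           yhat, score, loss(2, yhat));
    return 0;
}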
You can download the source code of the algorithm and the API from the following location:

      http://download.joachims.org/svm_struct/current/svm_struct.tar.gz

The archive contains the source code of the most recent version of SVMstruct as well as the source code of the SVMlight quadratic optimizer. Unpack the archive using the shell command:

      gunzip -c svm_struct.tar.gz | tar xvf -
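If your tar supports compressed archives directly, the equivalent single command is:

      tar -xzf svm_struct.tar.gz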

This expands the archive into the current directory, which now contains all relevant files. You can compile  SVMstruct  with the empty API using the command:

      make

It will output some warnings, since the functions of the API are only templates and do not return values as required. However, it should produce the executables svm_empty_learn and svm_empty_classify. "empty" is a placeholder for which you can substitute a meaningful name for your particular instance of SVMstruct. To implement your own instantiation, you will need to edit the following files:
  • svm_struct_api_types.h
  • svm_struct_api.c
Both files already contain empty templates. The first file contains the type definitions that need to be changed. PATTERN is the structure for storing the x-part of an example (x,y), and LABEL is the y-part. The learned model will be stored in STRUCTMODEL. Finally, STRUCT_LEARN_PARM can be used to store any parameters that you might want to pass to the functions. The file svm_struct_api.h contains the functions you need to implement. See the documentation in the file for details. You might also want to look at the other instantiations of SVMstruct for examples of how to use the API.
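As an illustration only, the templates in svm_struct_api_types.h might be filled in along the following lines for a simple multiclass task; the field names here are hypothetical, not the ones SVMmulticlass actually uses:

typedef struct pattern {        /* the x-part of an example (x,y) */
    double *features;           /* dense feature vector */
    int     num_features;
} PATTERN;

typedef struct label {          /* the y-part of an example (x,y) */
    int class_id;               /* one of num_classes mutually exclusive classes */
    int num_classes;
} LABEL;

typedef struct structmodel {    /* the learned model */
    double *w;                  /* weight vector, one block per class */
    long    sizePsi;            /* length of w and of Psi(x,y) */
} STRUCTMODEL;

typedef struct struct_learn_parm {  /* application-specific parameters */
    int num_classes;            /* e.g. passed in via a custom -- option */
} STRUCT_LEARN_PARM;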

How to Use

Compiling creates the executable svm_empty_learn, which performs the learning, and the executable svm_empty_classify for classifying new examples. Usage is much like SVMlight. You call it like

      svm_empty_learn -c 1.0 train.dat model.dat

which trains an SVM on the training set train.dat and outputs the learned rule to model.dat using the regularization parameter C set to 1.0 (note that this crashes for the empty API; use one of the other instantiations from above for a working example). The format of the train file and the model file depend on the particular instantiation of SVMstruct. Other options are:
General options:
         -?          -> this help
         -v [0..3]   -> verbosity level (default 1)
         -y [0..3]   -> verbosity level for svm_light (default 0)
Learning options:
         -c float    -> C: trade-off between training error
                        and margin (default 0.01)
         -p [1,2]    -> L-norm to use for slack variables. Use 1 for L1-norm,
                        use 2 for squared slacks. (default 1)
         -o [1,2]    -> Slack rescaling method to use for loss.
                        1: slack rescaling
                        2: margin rescaling
                        (default 1)
         -l [0..]    -> Loss function to use.
                        0: zero/one loss
                        (default 0)
Kernel options:
         -t int      -> type of kernel function:
                        0: linear (default)
                        1: polynomial (s a*b+c)^d
                        2: radial basis function exp(-gamma ||a-b||^2)
                        3: sigmoid tanh(s a*b + c)
                        4: user defined kernel from kernel.h
         -d int      -> parameter d in polynomial kernel
         -g float    -> parameter gamma in rbf kernel
         -s float    -> parameter s in sigmoid/poly kernel
         -r float    -> parameter c in sigmoid/poly kernel
         -u string   -> parameter of user defined kernel
Optimization options (see [1][3]):
         -q [2..]    -> maximum size of QP-subproblems (default 10)
         -n [2..q]   -> number of new variables entering the working set
                        in each iteration (default n = q). Set n < q to
                        prevent zig-zagging.
         -m [5..]    -> size of cache for kernel evaluations in MB (default 40)
                        The larger the faster...
         -e float    -> eps: Allow that error for termination criterion
                        (default 0.01)
         -h [5..]    -> number of iterations a variable needs to be
                        optimal before considered for shrinking (default 100)
         -k [1..]    -> number of new constraints to accumulate before
                        recomputing the QP solution (default 100)
         -# int      -> terminate optimization, if no progress after this
                        number of iterations. (default 10000)
Output options:
         -a string   -> write all alphas to this file after learning
                        (in the same order as in the training set)
Structure learning options:
         --* string  -> custom parameters that can be adapted for struct
                        learning. The * can be replaced by any character
                        and there can be multiple options starting with --.
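For instance, a hypothetical call that raises C, selects the RBF kernel, and passes one instantiation-specific option could look like this (what --b means, if anything, depends entirely on the instantiation):

      svm_empty_learn -c 10.0 -t 2 -g 0.5 --b 0.9 train.dat model.dat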
For more details on the meaning of these options consult references [1][3] and the description of SVMlight. The options starting with -- are those specific to the instantiation.
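After training, classifying new examples follows the SVMlight convention; assuming placeholder file names test.dat, model.dat, and predictions.dat, a typical call would be:

      svm_empty_classify test.dat model.dat predictions.dat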

Disclaimer

This software is free only for non-commercial use. It must not be distributed without prior permission of the author. The author is not responsible for implications from the use of this software.

References

[1] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support Vector Machine Learning for Interdependent and Structured Output Spaces. ICML, 2004.

[2] T. Joachims. Learning to Align Sequences: A Maximum Margin Approach. Technical Report, August 2003.

[3] T. Joachims. Making Large-Scale SVM Learning Practical. In B. Schölkopf, C. Burges, and A. Smola (eds.), Advances in Kernel Methods: Support Vector Learning. MIT Press, 1999.

