Open Source Computer Vision Library

Machine Learning Reference Manual


Introduction: Common Classes and Functions

CvStatModel

Base class for the statistical models in the ML library.

class CvStatModel
{
public:
/* CvStatModel(); */
/* CvStatModel( const CvMat* train_data ... ); */

virtual ~CvStatModel();

virtual void clear()=0;

/* virtual bool train( const CvMat* train_data, [int tflag,] ..., const CvMat* responses, ...,
[const CvMat* var_idx,] ..., [const CvMat* sample_idx,] ...
[const CvMat* var_type,] ..., [const CvMat* missing_mask,] <misc_training_alg_params> ... )=0;
*/

/* virtual float predict( const CvMat* sample ... ) const=0; */

virtual void save( const char* filename, const char* name=0 )=0;
virtual void load( const char* filename, const char* name=0 )=0;

virtual void write( CvFileStorage* storage, const char* name )=0;
virtual void read( CvFileStorage* storage, CvFileNode* node )=0;
};
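
In this declaration some methods are commented off. These are the methods for which there is no unified API (with the exception of the default constructor); however, there are many similarities in their syntax and semantics, which are described below in this section as if they were part of the base class. The actual exported declaration contains only the common part: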


class CV_EXPORTS CvStatModel
{
public:
CvStatModel();
virtual ~CvStatModel();

virtual void clear();

virtual void save( const char* filename, const char* name=0 );
virtual void load( const char* filename, const char* name=0 );

virtual void write( CvFileStorage* storage, const char* name );
virtual void read( CvFileStorage* storage, CvFileNode* node );

protected:
const char* default_model_name;
};


CvStatModel::CvStatModel

CvStatModel::CvStatModel();


Each statistical model in ML has a default constructor without parameters. This constructor is useful for 2-stage model construction, when the default constructor is followed by train() or load().
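
A minimal two-stage sketch (assuming train_data and responses are CvMat* matrices prepared by the caller):

CvSVM svm;                          // stage 1: default construction
svm.train( train_data, responses ); // stage 2: training makes the model usable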

CvStatModel::CvStatModel(...)

CvStatModel::CvStatModel( const CvMat* train_data ... );


Most ML classes also provide a single-step "construct and train" constructor. It is equivalent to the default constructor followed by the train() method with the parameters that are passed to the constructor.


CvStatModel::~CvStatModel

CvStatModel::~CvStatModel();
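
The base class destructor is declared virtual, so it is safe to construct a model via a base-class pointer and delete it the same way: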


CvStatModel* model;
if( use_svm )
model = new CvSVM(... /* SVM params */);
else
model = new CvDTree(... /* Decision tree params */);
...
delete model;


CvStatModel::clear

void CvStatModel::clear();


The method clear does the same job as the destructor: it deallocates all the memory occupied by the class members. The object itself is not destructed, however, and can be reused further.


CvStatModel::save

void CvStatModel::save( const char* filename, const char* name=0 );


The save method saves the complete model state to the specified XML or YAML file, under the specified name or a default name (which depends on the particular class). The method uses the data persistence functionality of cxcore.
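
A hedged persistence sketch (the file name "model.xml", the model name "my_model", and the trained CvSVM object svm are illustrative):

svm.save( "model.xml", "my_model" );  // write the trained state via cxcore

CvSVM svm2;
svm2.load( "model.xml", "my_model" ); // restore it into a fresh object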

CvStatModel::load

void CvStatModel::load( const char* filename, const char* name=0 );


The load method loads the complete model state, under the specified name (or the default model-dependent name), from the specified XML or YAML file. The previous model state is cleared.


CvStatModel::write

void CvStatModel::write( CvFileStorage* storage, const char* name );


The write method writes the complete model state to the file storage, under the specified or default name (depending on the particular class). This method is called by save().

CvStatModel::read

void CvStatModel::read( CvFileStorage* storage, CvFileNode* node );


The read method restores the complete model state from the specified node of the file storage. This method is called by load().


CvStatModel::train

bool CvStatModel::train( const CvMat* train_data, [int tflag,] ..., const CvMat* responses, ...,
[const CvMat* var_idx,] ..., [const CvMat* sample_idx,] ...
[const CvMat* var_type,] ..., [const CvMat* missing_mask,] <misc_training_alg_params> ... );


The method trains the statistical model using a set of input feature vectors and the corresponding output values (responses), both passed as matrices. By default, the input feature vectors are stored as rows of train_data, but some algorithms also support the transposed layout:

tflag=CV_ROW_SAMPLE - the feature vectors are stored as rows of train_data,

tflag=CV_COL_SAMPLE - the feature vectors are stored as columns of train_data.

Many models in ML may also be trained on a selected feature subset and/or a selected sample subset of the training set. To make this easy for the user, the train methods usually include var_idx and sample_idx parameters. var_idx specifies the features of interest and sample_idx specifies the samples of interest. Both vectors are either integer (CV_32SC1) vectors, i.e. lists of zero-based indices, or 8-bit (CV_8UC1) masks of the used features or samples. A NULL pointer may be passed instead of either argument, meaning that all of the variables/samples are used for training.
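
A hedged sketch of both subset forms (train_sample_count and the chosen indices are illustrative):

// zero-based index vector: use only features #0, #2 and #5
int feature_idx[] = { 0, 2, 5 };
CvMat var_idx = cvMat( 1, 3, CV_32SC1, feature_idx );

// 8-bit mask: use every sample except #13
CvMat* sample_mask = cvCreateMat( 1, train_sample_count, CV_8UC1 );
cvSet( sample_mask, cvScalar(1) );
sample_mask->data.ptr[13] = 0;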

CvStatModel::predict

float CvStatModel::predict( const CvMat* sample[, <prediction_params>] ) const;


The method predicts the response for a new sample. The suffix const means that prediction does not affect the internal model state, so the method can be safely called from different threads.


Normal Bayes Classifier

This is a simple classification model that assumes the feature vectors of each class are normally distributed (though not necessarily independently distributed), so the whole data distribution function is assumed to be a Gaussian mixture, one component per class. Using the training data, the algorithm estimates the mean vector and covariance matrix for every class, and then uses them for prediction.

A known issue: in some versions of ML, training this classifier may abort with the error

OpenCV ERROR: Formats of input arguments do not match ()
in function cvSVD, cxsvd.cpp(1243)

A workaround reported by users is to change the line

CV_CALL( cov = cvCreateMat( _var_count, _var_count, CV_32FC1 ));

in the classifier source to

CV_CALL( cov = cvCreateMat( _var_count, _var_count, CV_64FC1 ));


CvNormalBayesClassifier

Bayes classifier for normally distributed data

class CvNormalBayesClassifier : public CvStatModel
{
public:
CvNormalBayesClassifier();
virtual ~CvNormalBayesClassifier();

CvNormalBayesClassifier( const CvMat* _train_data, const CvMat* _responses,
const CvMat* _var_idx=0, const CvMat* _sample_idx=0 );

virtual bool train( const CvMat* _train_data, const CvMat* _responses,
const CvMat* _var_idx = 0, const CvMat* _sample_idx=0, bool update=false );

virtual float predict( const CvMat* _samples, CvMat* results=0 ) const;
virtual void clear();

virtual void save( const char* filename, const char* name=0 );
virtual void load( const char* filename, const char* name=0 );

virtual void write( CvFileStorage* storage, const char* name );
virtual void read( CvFileStorage* storage, CvFileNode* node );
protected:
...
};


CvNormalBayesClassifier::train

bool CvNormalBayesClassifier::train( const CvMat* _train_data, const CvMat* _responses,
const CvMat* _var_idx = 0, const CvMat* _sample_idx=0, bool update=false );


The method trains the Normal Bayes classifier. It follows the conventions of the generic train method: only the CV_ROW_SAMPLE data layout is supported, the input variables are all ordered, the output variable is categorical (i.e. elements of _responses must be integer class labels), and missing measurements are not supported. In addition, when update=true the model is updated using the new training data rather than trained from scratch.


CvNormalBayesClassifier::predict

float CvNormalBayesClassifier::predict( const CvMat* samples, CvMat* results=0 ) const;


The method estimates the most probable class for the input vectors. The input vectors (one or more) are stored as rows of the matrix samples. In the case of multiple input vectors, there should be one output vector results; the predicted class of the first input vector is also returned by the method.
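
A hedged usage sketch (train_data, responses and queries are assumed CvMat* with one sample per row):

CvNormalBayesClassifier nb;
nb.train( train_data, responses );           // one class label per row

CvMat* results = cvCreateMat( queries->rows, 1, CV_32FC1 );
nb.predict( queries, results );              // classify a whole batch
float first = results->data.fl[0];           // label of the first query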


K-Nearest Neighbors

CvKNearest

K-Nearest Neighbors class

class CvKNearest : public CvStatModel // derives from the ML statistical model base class
{
public:

CvKNearest();
virtual ~CvKNearest();  // virtual destructor

CvKNearest( const CvMat* _train_data, const CvMat* _responses,
const CvMat* _sample_idx=0, bool _is_regression=false, int max_k=32 );

virtual bool train( const CvMat* _train_data, const CvMat* _responses,
const CvMat* _sample_idx=0, bool is_regression=false,
int _max_k=32, bool _update_base=false );

virtual float find_nearest( const CvMat* _samples, int k, CvMat* results,
const float** neighbors=0, CvMat* neighbor_responses=0, CvMat* dist=0 ) const;

virtual void clear();
int get_max_k() const;
int get_var_count() const;
int get_sample_count() const;
bool is_regression() const;

protected:
...
};


CvKNearest::train

bool CvKNearest::train( const CvMat* _train_data, const CvMat* _responses,
const CvMat* _sample_idx=0, bool is_regression=false,
int _max_k=32, bool _update_base=false );


The method trains the K-Nearest model. It follows the conventions of the generic train method: only the CV_ROW_SAMPLE data layout is supported, the input variables are all ordered, the output variables can be either categorical (is_regression=false) or ordered (is_regression=true), and variable subsets (var_idx) and missing measurements are not supported. The parameter _max_k specifies the maximum number of neighbors that may be passed to the method find_nearest. The parameter _update_base specifies whether the model is trained from scratch (_update_base=false) or updated using the new training data (_update_base=true); in the latter case _max_k must not be larger than the original value.


CvKNearest::find_nearest

float CvKNearest::find_nearest( const CvMat* _samples, int k, CvMat* results=0,
const float** neighbors=0, CvMat* neighbor_responses=0, CvMat* dist=0 ) const;


For each input vector (a row of the matrix _samples) the method finds the k ≤ get_max_k() nearest neighbors. In the case of regression, the predicted result is the mean of the particular vector's neighbor responses; in the case of classification, the class is determined by voting.


Example: classification of 2D samples from a Gaussian mixture with the kNN classifier

#include "ml.h"
#include "highgui.h"

int main( int argc, char** argv )
{
const int K = 10;
int i, j, k, accuracy;
float response;
int train_sample_count = 100;
CvRNG rng_state = cvRNG(-1);
CvMat* trainData = cvCreateMat( train_sample_count, 2, CV_32FC1 );
CvMat* trainClasses = cvCreateMat( train_sample_count, 1, CV_32FC1 );
IplImage* img = cvCreateImage( cvSize( 500, 500 ), 8, 3 );
float _sample[2];
CvMat sample = cvMat( 1, 2, CV_32FC1, _sample );
cvZero( img );

CvMat trainData1, trainData2, trainClasses1, trainClasses2;

// form the training samples
cvGetRows( trainData, &trainData1, 0, train_sample_count/2 );
cvRandArr( &rng_state, &trainData1, CV_RAND_NORMAL, cvScalar(200,200), cvScalar(50,50) );

cvGetRows( trainData, &trainData2, train_sample_count/2, train_sample_count );
cvRandArr( &rng_state, &trainData2, CV_RAND_NORMAL, cvScalar(300,300), cvScalar(50,50) );

cvGetRows( trainClasses, &trainClasses1, 0, train_sample_count/2 );
cvSet( &trainClasses1, cvScalar(1) );

cvGetRows( trainClasses, &trainClasses2, train_sample_count/2, train_sample_count );
cvSet( &trainClasses2, cvScalar(2) );

// learn classifier
CvKNearest knn( trainData, trainClasses, 0, false, K );
CvMat* nearests = cvCreateMat( 1, K, CV_32FC1);

for( i = 0; i < img->height; i++ )
{
for( j = 0; j < img->width; j++ )
{
sample.data.fl[0] = (float)j;
sample.data.fl[1] = (float)i;

// estimates the response and get the neighbors' labels
response = knn.find_nearest(&sample,K,0,0,nearests,0);

// compute the number of neighbors representing the majority
for( k = 0, accuracy = 0; k < K; k++ )
{
if( nearests->data.fl[k] == response)
accuracy++;
}
// highlight the pixel depending on the accuracy (or confidence)
cvSet2D( img, i, j, response == 1 ?
(accuracy > 5 ? CV_RGB(180,0,0) : CV_RGB(180,120,0)) :
(accuracy > 5 ? CV_RGB(0,180,0) : CV_RGB(120,120,0)) );
}
}

// display the original training samples
for( i = 0; i < train_sample_count/2; i++ )
{
CvPoint pt;
pt.x = cvRound(trainData1.data.fl[i*2]);
pt.y = cvRound(trainData1.data.fl[i*2+1]);
cvCircle( img, pt, 2, CV_RGB(255,0,0), CV_FILLED );
pt.x = cvRound(trainData2.data.fl[i*2]);
pt.y = cvRound(trainData2.data.fl[i*2+1]);
cvCircle( img, pt, 2, CV_RGB(0,255,0), CV_FILLED );
}

cvNamedWindow( "classifier result", 1 );
cvShowImage( "classifier result", img );
cvWaitKey(0);

cvReleaseMat( &nearests );
cvReleaseMat( &trainClasses );
cvReleaseMat( &trainData );
cvReleaseImage( &img );
return 0;
}


Support Vector Machines

[Burges98] C. Burges. "A tutorial on support vector machines for pattern recognition", Knowledge Discovery and Data Mining 2(2), 1998.

LIBSVM - A Library for Support Vector Machines. By Chih-Chung Chang and Chih-Jen Lin.

CvSVM

class CvSVM : public CvStatModel // derives from the base class CvStatModel
{
public:
// SVM type
enum { C_SVC=100, NU_SVC=101, ONE_CLASS=102, EPS_SVR=103, NU_SVR=104 }; // SVC = SVM classification, SVR = SVM regression

// SVM kernel type
enum { LINEAR=0, POLY=1, RBF=2, SIGMOID=3 }; // four kernel types: linear, polynomial, radial basis function, sigmoid

CvSVM();
virtual ~CvSVM();

CvSVM( const CvMat* _train_data, const CvMat* _responses,
const CvMat* _var_idx=0, const CvMat* _sample_idx=0,
CvSVMParams _params=CvSVMParams() );

virtual bool train( const CvMat* _train_data, const CvMat* _responses,
const CvMat* _var_idx=0, const CvMat* _sample_idx=0,
CvSVMParams _params=CvSVMParams() );

virtual float predict( const CvMat* _sample ) const;
virtual int get_support_vector_count() const;
virtual const float* get_support_vector(int i) const;
virtual void clear();

virtual void save( const char* filename, const char* name=0 );
virtual void load( const char* filename, const char* name=0 );

virtual void write( CvFileStorage* storage, const char* name );
virtual void read( CvFileStorage* storage, CvFileNode* node );
int get_var_count() const { return var_idx ? var_idx->cols : var_all; }

protected:
...
};


CvSVMParams

SVM training parameters

struct CvSVMParams
{
CvSVMParams();
CvSVMParams( int _svm_type, int _kernel_type,
double _degree, double _gamma, double _coef0,
double _C, double _nu, double _p,
CvMat* _class_weights, CvTermCriteria _term_crit );

int         svm_type;
int         kernel_type;
double      degree; // for poly
double      gamma;  // for poly/rbf/sigmoid
double      coef0;  // for poly/sigmoid

double      C;  // for CV_SVM_C_SVC, CV_SVM_EPS_SVR and CV_SVM_NU_SVR
double      nu; // for CV_SVM_NU_SVC, CV_SVM_ONE_CLASS, and CV_SVM_NU_SVR
double      p; // for CV_SVM_EPS_SVR
CvMat*      class_weights; // for CV_SVM_C_SVC
CvTermCriteria term_crit; // termination criteria
};


svm_type, the type of SVM:

CvSVM::C_SVC - n-class classification (n >= 2), allows imperfect separation of classes with an outlier penalty multiplier C.
CvSVM::NU_SVC - n-class classification with possible imperfect separation. The parameter nu (within the range [0, 1]; the larger the value, the smoother the decision boundary) is used instead of C.
CvSVM::ONE_CLASS - one-class SVM. All the training data are from the same class; the SVM builds a boundary that separates the class from the rest of the feature space.
CvSVM::EPS_SVR - regression. The distance between the feature vectors from the training set and the fitted hyperplane must be less than p. The outlier penalty multiplier C is used.
CvSVM::NU_SVR - regression; nu is used instead of p.

kernel_type, the kernel type:

CvSVM::LINEAR - no mapping is done; linear discrimination (or regression) is performed in the original feature space. It is the fastest option. d(x,y) = x•y == (x,y)
CvSVM::POLY - polynomial kernel: d(x,y) = (gamma*(x•y)+coef0)^degree
CvSVM::RBF - radial basis function, a good choice in most cases: d(x,y) = exp(-gamma*|x-y|^2)
CvSVM::SIGMOID - the sigmoid function is used as a kernel: d(x,y) = tanh(gamma*(x•y)+coef0)

degree, gamma, coef0: parameters of the kernel; see the formulas above.

C, nu, p: parameters of the generalized SVM optimization problem.

class_weights: optional weights assigned to particular classes. They are multiplied by C, and thus affect the misclassification penalty for the corresponding classes. The larger the weight, the larger the penalty for misclassifying data of that class.

term_crit: termination criteria of the iterative SVM training procedure (which solves a partial case of the constrained quadratic optimization problem).

CvSVM::train

bool CvSVM::train( const CvMat* _train_data, const CvMat* _responses,
const CvMat* _var_idx=0, const CvMat* _sample_idx=0,
CvSVMParams _params=CvSVMParams() );


The method trains the SVM model. It follows the conventions of the generic train method with the following limitations: only the CV_ROW_SAMPLE data layout is supported; the input variables are all ordered; the output variables can be either categorical (_params.svm_type=CvSVM::C_SVC or _params.svm_type=CvSVM::NU_SVC), ordered (_params.svm_type=CvSVM::EPS_SVR or _params.svm_type=CvSVM::NU_SVR), or not required at all (_params.svm_type=CvSVM::ONE_CLASS); missing measurements are not supported.
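
A hedged training sketch for a C-SVC classifier with an RBF kernel (train_data, responses and sample are assumed prepared; the parameter values are illustrative starting points, not tuned ones):

CvSVMParams params;
params.svm_type    = CvSVM::C_SVC;
params.kernel_type = CvSVM::RBF;
params.gamma       = 0.5;   // kernel width
params.C           = 10;    // outlier penalty
params.term_crit   = cvTermCriteria( CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 1000, 1e-6 );

CvSVM svm;
svm.train( train_data, responses, 0, 0, params ); // rows are samples
float label = svm.predict( &sample );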

CvSVM::get_support_vector*

int CvSVM::get_support_vector_count() const;
const float* CvSVM::get_support_vector(int i) const;
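
The methods can be used to retrieve the set of support vectors, e.g. (a hedged sketch over a trained CvSVM object named svm):

int sv_count = svm.get_support_vector_count();
for( int i = 0; i < sv_count; i++ )
{
const float* sv = svm.get_support_vector(i);   // get_var_count() elements
printf( "support vector #%d starts with %g\n", i, sv[0] );
}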


Decision Trees

To reach a leaf node, and thus to obtain a response for the input feature vector, the prediction procedure starts with the root node. From each non-leaf node the procedure goes to the left (i.e. selects the left child node as the next observed node) or to the right, based on the value of a certain variable whose index is stored in the observed node. The variable can be either ordered or categorical. In the first case, the variable value is compared with a certain threshold (which is also stored in the node); if the value is less than the threshold, the procedure goes to the left, otherwise to the right (for example, if the weight is less than 1 kilogram, go to the left, else go to the right). In the second case, the discrete variable value is tested against a certain subset of values (also stored in the node) from the limited set of values the variable can take; if the value belongs to the subset, the procedure goes to the left, else to the right (for example, if the color is green or red, go to the left, else to the right). That is, in each node a pair of entities (<variable_index>, <decision_rule (threshold/subset)>) is used. This pair is called a split (a split on the variable #<variable_index>). Once a leaf node is reached, the value assigned to this node is used as the output of the prediction procedure.

Sometimes certain features of the input vector are missing (for example, in the darkness it is difficult to determine the object color), and the prediction procedure may get stuck at a certain node (in the mentioned example, if the node is split by color). To avoid such situations, decision trees use so-called surrogate splits. That is, in addition to the best "primary" split, every tree node may also be split on one or more other variables with nearly the same results.

Training Decision Trees


The tree is built recursively, starting from the root node. The whole training data (feature vectors and responses) is used to split the root node. In each node the optimum decision rule (i.e. the best "primary" split) is found based on some criterion (in ML the gini "purity" criterion is used for classification, and the sum of squared errors for regression). Then, if necessary, the surrogate splits are found that most closely resemble the results of the primary split on the training data; all the data are divided between the left and the right child node using the primary and the surrogate splits (just as is done in the prediction procedure). Then the procedure recursively splits both the left and the right nodes, and so on. At each node the recursive procedure may stop (i.e. stop splitting the node further) in one of the following cases:

• depth of the tree branch being constructed has reached the specified maximum value.
• number of training samples in the node is less than the specified threshold, i.e. it is not statistically representative set to split the node further.
• all the samples in the node belong to the same class (or, in case of regression, the variation is too small).
• the best split found does not give any noticeable improvement compared to a random choice.

When the tree is built, it may be pruned, if needed, using a cross-validation procedure. That is, some branches of the tree that may lead to model overfitting are cut off. Normally, this procedure is applied only to standalone decision trees, while tree ensembles usually build trees that are small enough and use their own protection schemes against overfitting.

Variable importance

Besides the obvious use of decision trees, prediction, the tree can also be used for various data analyses. One of the key properties of the constructed decision tree algorithms is the possibility to compute the importance (relative decisive power) of each variable. For example, in a spam filter that uses a set of words occurring in the message as a feature vector, the variable importance rating can be used to determine the most "spam-indicating" words and thus help keep the dictionary size reasonable.

Importance of each variable is computed over all the splits on this variable in the tree, primary and surrogate ones. Thus, to compute variable importance correctly, the surrogate splits must be enabled in the training parameters, even if there is no missing data.
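
A hedged sketch (train_data and responses are assumed prepared CvMat*):

CvDTreeParams params;
params.use_surrogates = true;   // required for correct variable importance

CvDTree dtree;
dtree.train( train_data, CV_ROW_SAMPLE, responses, 0, 0, 0, params );

const CvMat* imp = dtree.get_var_importance();  // one importance value per variable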

[Breiman84] Breiman, L., Friedman, J., Olshen, R. and Stone, C. (1984), "Classification and Regression Trees", Wadsworth.

CvDTreeSplit

Decision tree node split

struct CvDTreeSplit
{
int var_idx;
int inversed;
float quality;
CvDTreeSplit* next;
union
{
int subset[2];
struct
{
float c;
int split_point;
}
ord;
};
};


var_idx
Index of the variable used in the split
inversed
When it equals 1, the inverse split rule is used (i.e. the left and right branches are exchanged in the expressions below).
quality
The split quality, a positive number. It is used to choose the best primary split, then to choose and sort the surrogate splits. After the tree is constructed, it is also used to compute variable importance.
next
Pointer to the next split in the node split list.
subset
Bit array indicating the value subset in case of split on a categorical variable. The rule is: if var_value in subset then next_node<-left else next_node<-right
c
The threshold value in case of split on an ordered variable. The rule is: if var_value < c then next_node<-left else next_node<-right
split_point
Used internally by the training algorithm.

CvDTreeNode

Decision tree node

struct CvDTreeNode
{
int class_idx;
int Tn;
double value;

CvDTreeNode* parent;
CvDTreeNode* left;
CvDTreeNode* right;

CvDTreeSplit* split;

int sample_count;
int depth;
...
};


value
The value assigned to the tree node. It is either a class label, or the estimated function value.
class_idx
The normalized class index of the node (in the range 0..class_count-1); it is used internally in classification trees and tree ensembles.
Tn
The tree index in an ordered sequence of trees. The indices are used during and after the pruning procedure. The root node has the maximum value Tn of the whole tree, child nodes have Tn less than or equal to the parent's Tn, and the nodes with Tn≤CvDTree::pruned_tree_idx are not taken into consideration at the prediction stage (the corresponding branches are considered as cut-off), even if they have not been physically deleted from the tree at the pruning stage.
parent, left, right
Pointers to the parent node, left and right child nodes.
split
Pointer to the first (primary) split.
sample_count
The number of samples that fall into the node at the training stage. It is used to resolve the difficult cases - when the variable for the primary split is missing, and all the variables for other surrogate splits are missing too, the sample is directed to the left if left->sample_count>right->sample_count and to the right otherwise.
depth
The node depth, the root node depth is 0, the child nodes depth is the parent's depth + 1.

Other numerous fields of CvDTreeNode are used internally at the training stage.

CvDTreeParams

Decision tree training parameters

struct CvDTreeParams
{
int max_categories;
int max_depth;
int min_sample_count;
int cv_folds;
bool use_surrogates;
bool use_1se_rule;
bool truncate_pruned_tree;
float regression_accuracy;
const float* priors;

CvDTreeParams() : max_categories(10), max_depth(INT_MAX), min_sample_count(10),
cv_folds(10), use_surrogates(true), use_1se_rule(true),
truncate_pruned_tree(true), regression_accuracy(0.01f), priors(0)
{}

CvDTreeParams( int _max_depth, int _min_sample_count,
float _regression_accuracy, bool _use_surrogates,
int _max_categories, int _cv_folds,
bool _use_1se_rule, bool _truncate_pruned_tree,
const float* _priors );
};

max_depth
This parameter specifies the maximum possible depth of the tree. That is, the training algorithm attempts to split a node while its depth is less than max_depth. The actual depth may be smaller if the other termination criteria are met (see the outline of the training procedure at the beginning of the section), and/or if the tree is pruned.
min_sample_count
A node is not split if the number of samples directed to the node is less than the parameter value.
regression_accuracy
Another stopping criterion, for regression trees only. As soon as the estimated node value differs from the node's training sample responses by less than the parameter value, the node is not split further.
use_surrogates
If true, surrogate splits are built. Surrogate splits are needed to handle missing measurements and for variable importance estimation.
max_categories
If a discrete variable, on which the training procedure tries to make a split, takes more than max_categories values, a precise best-subset estimation may take a very long time (as the algorithm is exponential). Instead, many decision tree engines (including ML) try to find a sub-optimal split in this case by clustering all the samples into max_categories clusters (i.e. some categories are merged together).

Note that this technique is used only in N(>2)-class classification problems. In case of regression and 2-class classification the optimal split can be found efficiently without employing clustering, thus the parameter is not used in these cases.

cv_folds
If this parameter is >1, the tree is pruned using cv_folds-fold cross validation.
use_1se_rule
If true, the tree is truncated a bit more by the pruning procedure. This leads to a more compact tree that is more resistant to noise in the training data, but somewhat less accurate.
truncate_pruned_tree
If true, the cut off nodes (with Tn≤CvDTree::pruned_tree_idx) are physically removed from the tree. Otherwise they are kept, and by decreasing CvDTree::pruned_tree_idx (e.g. setting it to -1) it is still possible to get the results from the original unpruned (or pruned less aggressively) tree.
priors
The array of a priori class probabilities, sorted by the class label value. The parameter can be used to tune the decision tree preferences toward a certain class. For example, if users want to detect some rare anomaly occurrence, the training base will likely contain much more normal cases than anomalies, so a very good classification performance will be achieved just by considering every case as normal. To avoid this, the priors can be specified, where the anomaly probability is artificially increased (up to 0.5 or even greater), so the weight of the misclassified anomalies becomes much bigger, and the tree is adjusted properly.

A note about memory management: the field priors is a pointer to the array of floats. The array should be allocated by user, and released just after the CvDTreeParams structure is passed to CvDTreeTrainData or CvDTree constructors/methods (as the methods make a copy of the array).
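
A hedged sketch of tuning the priors toward a rare class (the labels are assumed to be 0 = normal, 1 = anomaly; var_type is assumed prepared):

float priors[] = { 0.5f, 0.5f };  // weight the rare class up to equal probability
CvDTreeParams params;
params.priors = priors;           // train() copies the array, so the stack
                                  // array may safely go out of scope afterwards

CvDTree dtree;
dtree.train( train_data, CV_ROW_SAMPLE, responses, 0, 0, var_type, params );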

CvDTreeTrainData

Decision tree training data and shared data for tree ensembles

struct CvDTreeTrainData
{
CvDTreeTrainData();
CvDTreeTrainData( const CvMat* _train_data, int _tflag,
const CvMat* _responses, const CvMat* _var_idx=0,
const CvMat* _sample_idx=0, const CvMat* _var_type=0,
const CvDTreeParams& _params=CvDTreeParams(),
bool _shared=false );
virtual ~CvDTreeTrainData();

virtual void set_data( const CvMat* _train_data, int _tflag,
const CvMat* _responses, const CvMat* _var_idx=0,
const CvMat* _sample_idx=0, const CvMat* _var_type=0,
const CvDTreeParams& _params=CvDTreeParams(),
bool _update_data=false );

virtual void get_vectors( const CvMat* _subsample_idx,
float* values, uchar* missing, float* responses, bool get_class_idx=false );

virtual CvDTreeNode* subsample_data( const CvMat* _subsample_idx );

virtual void write_params( CvFileStorage* fs );
virtual void read_params( CvFileStorage* fs, CvFileNode* node );

// release all the data
virtual void clear();

int get_num_classes() const;
int get_var_type(int vi) const;
int get_work_var_count() const;

virtual int* get_class_labels( CvDTreeNode* n );
virtual float* get_ord_responses( CvDTreeNode* n );
virtual int* get_labels( CvDTreeNode* n );
virtual int* get_cat_var_data( CvDTreeNode* n, int vi );
virtual CvPair32s32f* get_ord_var_data( CvDTreeNode* n, int vi );
virtual int get_child_buf_idx( CvDTreeNode* n );

////////////////////////////////////

virtual bool set_params( const CvDTreeParams& params );
virtual CvDTreeNode* new_node( CvDTreeNode* parent, int count,
int storage_idx, int offset );

virtual CvDTreeSplit* new_split_ord( int vi, float cmp_val,
int split_point, int inversed, float quality );
virtual CvDTreeSplit* new_split_cat( int vi, float quality );
virtual void free_node_data( CvDTreeNode* node );
virtual void free_train_data();
virtual void free_node( CvDTreeNode* node );

int sample_count, var_all, var_count, max_c_count;
int ord_var_count, cat_var_count;
bool have_labels, have_priors;
bool is_classifier;

int buf_count, buf_size;
bool shared;

CvMat* cat_count;
CvMat* cat_ofs;
CvMat* cat_map;

CvMat* counts;
CvMat* buf;
CvMat* direction;
CvMat* split_buf;

CvMat* var_idx;
CvMat* var_type; // i-th element =
//   k<0  - ordered
//   k>=0 - categorical, see k-th element of cat_* arrays
CvMat* priors;

CvDTreeParams params;

CvMemStorage* tree_storage;
CvMemStorage* temp_storage;

CvDTreeNode* data_root;

CvSet* node_heap;
CvSet* split_heap;
CvSet* cv_heap;
CvSet* nv_heap;

CvRNG rng;
};



This structure is mostly used internally for storing both standalone trees and tree ensembles efficiently. Basically, it contains the following 3 types of information:

1. The training parameters, CvDTreeParams instance.
2. The training data, preprocessed in order to find the best splits more efficiently. For tree ensembles this preprocessed data is reused by all the trees. Additionally, the training data characteristics that are shared by all trees in the ensemble are stored here: variable types, the number of classes, class label compression map etc.
3. Buffers, memory storages for tree nodes, splits and other elements of the trees constructed.

There are 2 ways of using this structure. In simple cases (e.g. a standalone tree, or a ready-to-use "black box" tree ensemble from ML, like Random Trees or Boosting) there is no need to care about, or even to know of, the structure: just construct the needed statistical model, train it, and use it. The CvDTreeTrainData structure will be constructed and used internally. However, for custom tree algorithms or other sophisticated cases, the structure may be constructed and used explicitly. The scheme is the following:

1. The structure is initialized using the default constructor, followed by set_data (or it is built using the full form of constructor). The parameter _shared must be set to true.
2. One or more trees are trained using this data, see the special form of the method CvDTree::train.
3. Finally, the structure can be released only after all the trees using it are released.

CvDTree

Decision tree

class CvDTree : public CvStatModel
{
public:
CvDTree();
virtual ~CvDTree();

virtual bool train( const CvMat* _train_data, int _tflag,
const CvMat* _responses, const CvMat* _var_idx=0,
const CvMat* _sample_idx=0, const CvMat* _var_type=0,
CvDTreeParams params=CvDTreeParams() );

virtual bool train( CvDTreeTrainData* _train_data, const CvMat* _subsample_idx );

virtual CvDTreeNode* predict( const CvMat* _sample, const CvMat* _missing_data_mask=0,
bool raw_mode=false ) const;
virtual const CvMat* get_var_importance();
virtual void clear();

virtual void read( CvFileStorage* fs, CvFileNode* node );
virtual void write( CvFileStorage* fs, const char* name );

// special read & write methods for trees in the tree ensembles
virtual void read( CvFileStorage* fs, CvFileNode* node,
CvDTreeTrainData* data );
virtual void write( CvFileStorage* fs );

const CvDTreeNode* get_root() const;
int get_pruned_tree_idx() const;
CvDTreeTrainData* get_data();

protected:

virtual bool do_train( const CvMat* _subsample_idx );

virtual void try_split_node( CvDTreeNode* n );
virtual void split_node_data( CvDTreeNode* n );
virtual CvDTreeSplit* find_best_split( CvDTreeNode* n );
virtual CvDTreeSplit* find_split_ord_class( CvDTreeNode* n, int vi );
virtual CvDTreeSplit* find_split_cat_class( CvDTreeNode* n, int vi );
virtual CvDTreeSplit* find_split_ord_reg( CvDTreeNode* n, int vi );
virtual CvDTreeSplit* find_split_cat_reg( CvDTreeNode* n, int vi );
virtual CvDTreeSplit* find_surrogate_split_ord( CvDTreeNode* n, int vi );
virtual CvDTreeSplit* find_surrogate_split_cat( CvDTreeNode* n, int vi );
virtual double calc_node_dir( CvDTreeNode* node );
virtual void complete_node_dir( CvDTreeNode* node );
virtual void cluster_categories( const int* vectors, int vector_count,
int var_count, int* sums, int k, int* cluster_labels );

virtual void calc_node_value( CvDTreeNode* node );

virtual void prune_cv();
virtual double update_tree_rnc( int T, int fold );
virtual int cut_tree( int T, int fold, double min_alpha );
virtual void free_prune_data(bool cut_tree);
virtual void free_tree();

virtual void write_node( CvFileStorage* fs, CvDTreeNode* node );
virtual void write_split( CvFileStorage* fs, CvDTreeSplit* split );
virtual CvDTreeNode* read_node( CvFileStorage* fs, CvFileNode* node, CvDTreeNode* parent );
virtual CvDTreeSplit* read_split( CvFileStorage* fs, CvFileNode* node );
virtual void write_tree_nodes( CvFileStorage* fs );
virtual void read_tree_nodes( CvFileStorage* fs, CvFileNode* node );

CvDTreeNode* root;

int pruned_tree_idx;
CvMat* var_importance;

CvDTreeTrainData* data;
};


CvDTree::train

Trains decision tree

bool CvDTree::train( const CvMat* _train_data, int _tflag,
const CvMat* _responses, const CvMat* _var_idx=0,
const CvMat* _sample_idx=0, const CvMat* _var_type=0,
CvDTreeParams params=CvDTreeParams() );

bool CvDTree::train( CvDTreeTrainData* _train_data, const CvMat* _subsample_idx );


There are 2 train methods in CvDTree.

The first method follows the generic CvStatModel::train conventions; it is the most complete form. Both data layouts (_tflag=CV_ROW_SAMPLE and _tflag=CV_COL_SAMPLE) are supported, as well as sample and variable subsets, missing measurements, arbitrary combinations of input and output variable types, etc. The last parameter contains all the necessary training parameters; see the CvDTreeParams description.

The second train method is mostly used for building tree ensembles. It takes a pre-constructed CvDTreeTrainData instance and an optional subset of the training set. The indices in _subsample_idx are counted relative to the _sample_idx passed to the CvDTreeTrainData constructor. For example, if _sample_idx=[1, 5, 7, 100], then _subsample_idx=[0,3] means that the samples [1, 100] of the original training set are used.

CvDTree::predict

Returns the leaf node of the decision tree corresponding to the input vector

CvDTreeNode* CvDTree::predict( const CvMat* _sample, const CvMat* _missing_data_mask=0,
bool raw_mode=false ) const;


The method takes the feature vector and the optional missing measurement mask on input, traverses the decision tree and returns the reached leaf node on output. The prediction result, either the class label or the estimated function value, may be retrieved as value field of the CvDTreeNode structure, for example: dtree->predict(sample,mask)->value

The last parameter is normally set to false, implying a regular input. If it is true, the method assumes that all the values of the discrete input variables have already been normalized to the ranges 0..num_of_categories_i-1 (as the decision tree uses such a normalized representation internally), which is useful for faster prediction with tree ensembles. For ordered input variables the flag is not used.

Example. Building a Tree for Classifying Mushrooms

See the mushroom.cpp sample, which demonstrates how to build and use the decision tree.

Boosting

Boosting is a very powerful learning method, and also a supervised classification method. It combines many "weak" classifiers to produce a powerful "committee" [HTF01]. A weak classifier only needs to perform a little better than chance, so it can be designed to be very simple and computationally inexpensive. Many of them combined, however, form a strong classifier comparable in accuracy to an SVM or a neural network.

A boosted model is trained on N training examples {(xi, yi)}, i = 1..N, where xi ∈ R^K and yi ∈ {−1, +1}. xi is a K-dimensional vector; each component corresponds to one feature of the classification problem. The two output classes are labeled −1 and +1.

1. Given N examples $(x_i, y_i)$ with $x_i \in R^K$, $y_i \in \{-1, +1\}$.
2. Initialize the weights $w_i = 1/N$, $i = 1,\cdots,N$.
3. Repeat for $m = 1,2,\cdots,M$:
1. Fit the classifier $f_m(x) \in \{-1, 1\}$ to the training data, weighting each example by $w_i$.
2. Compute $err_m = E_w[1(y \neq f_m(x))]$, $c_m = \log((1 - err_m)/err_m)$.
3. Update the weights $w_i \leftarrow w_i \exp[c_m 1(y_i \neq f_m(x_i))]$, $i = 1,2,\cdots,N$, and renormalize so that $\Sigma_i w_i = 1$.
4. Output the classifier $sign[\Sigma_{m = 1}^M c_m f_m(x)]$.
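
As a hedged numeric illustration of steps 3.2 and 3.3 (the error value is made up):

// one Discrete AdaBoost round: suppose the weak classifier f_m errs on
// samples carrying 30% of the total weight
double err_m = 0.3;
double c_m = log( (1.0 - err_m)/err_m );  // ~0.847, the classifier's vote weight
// every misclassified sample i then gets w_i *= exp(c_m), i.e. roughly x2.33,
// and all the weights are renormalized to sum to 1 again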

[HTF01] Hastie, T., Tibshirani, R., Friedman, J. H. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics. 2001.

[FHT98] Friedman, J. H., Hastie, T. and Tibshirani, R. Additive Logistic Regression: a Statistical View of Boosting. Technical Report, Dept. of Statistics, Stanford University, 1998.

CvBoostParams

Boosting training parameters

struct CvBoostParams : public CvDTreeParams
{
int boost_type;
int weak_count;
int split_criteria;
double weight_trim_rate;

CvBoostParams();
CvBoostParams( int boost_type, int weak_count, double weight_trim_rate,
int max_depth, bool use_surrogates, const float* priors );
};

boost_type
The boosting type, one of the following:
CvBoost::DISCRETE - Discrete AdaBoost
CvBoost::REAL - Real AdaBoost
CvBoost::LOGIT - LogitBoost
CvBoost::GENTLE - Gentle AdaBoost
weak_count
The number of weak classifiers to build.
split_criteria
The splitting criterion used to choose optimal splits during weak tree construction:
CvBoost::DEFAULT - use the default criterion for the particular boosting method, see below.
CvBoost::GINI - use the Gini index; the default for Real AdaBoost, may also be used for Discrete AdaBoost.
CvBoost::MISCLASS - use the misclassification rate; the default for Discrete AdaBoost, may also be used for Real AdaBoost.
CvBoost::SQERR - use the least-squares criterion; the default and the only option for LogitBoost and Gentle AdaBoost.
weight_trim_rate
The weight trimming ratio, within 0..1: to save computing time, at each iteration only the samples carrying the largest weights, with summary weight at least weight_trim_rate of the total, are used to train the next weak classifier. If the parameter is less than or equal to 0 or greater than 1, trimming is not used and all the samples are used at each iteration. The default value is 0.95.

CvBoostTree

Weak tree classifier

class CvBoostTree: public CvDTree
{
public:
CvBoostTree();
virtual ~CvBoostTree();

virtual bool train( CvDTreeTrainData* _train_data,
const CvMat* subsample_idx, CvBoost* ensemble );
virtual void scale( double s );
virtual void read( CvFileStorage* fs, CvFileNode* node,
CvBoost* ensemble, CvDTreeTrainData* _data );
virtual void clear();

protected:
...
CvBoost* ensemble;
};


CvBoost

Boosted tree classifier

class CvBoost : public CvStatModel
{
public:
// Boosting type
enum { DISCRETE=0, REAL=1, LOGIT=2, GENTLE=3 };

// Splitting criteria
enum { DEFAULT=0, GINI=1, MISCLASS=3, SQERR=4 };

CvBoost();
virtual ~CvBoost();

CvBoost( const CvMat* _train_data, int _tflag,
const CvMat* _responses, const CvMat* _var_idx=0,
const CvMat* _sample_idx=0, const CvMat* _var_type=0,
CvBoostParams params=CvBoostParams() );

virtual bool train( const CvMat* _train_data, int _tflag,
const CvMat* _responses, const CvMat* _var_idx=0,
const CvMat* _sample_idx=0, const CvMat* _var_type=0,
CvBoostParams params=CvBoostParams(),
bool update=false );

virtual float predict( const CvMat* _sample, const CvMat* _missing=0,
CvMat* weak_responses=0, CvSlice slice=CV_WHOLE_SEQ,
bool raw_mode=false ) const;

virtual void prune( CvSlice slice );

virtual void clear();

virtual void write( CvFileStorage* storage, const char* name );
virtual void read( CvFileStorage* storage, CvFileNode* node );

CvSeq* get_weak_predictors();
const CvBoostParams& get_params() const;
...

protected:
virtual bool set_params( const CvBoostParams& _params );
virtual void update_weights( CvBoostTree* tree );
virtual void trim_weights();
virtual void write_params( CvFileStorage* fs );
virtual void read_params( CvFileStorage* fs, CvFileNode* node );

CvDTreeTrainData* data;
CvBoostParams params;
CvSeq* weak;
...
};


CvBoost::train

bool CvBoost::train( const CvMat* _train_data, int _tflag,
const CvMat* _responses, const CvMat* _var_idx=0,
const CvMat* _sample_idx=0, const CvMat* _var_type=0,
CvBoostParams params=CvBoostParams(),
bool update=false );
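
The train method follows the common template; the last parameter, update, specifies whether the classifier should be updated incrementally (new weak trees added to the existing ensemble) or rebuilt from scratch. The responses must be categorical, i.e. boosted trees cannot be built for regression, and there should be exactly 2 classes. Below is a hedged training sketch (train_data, responses, var_type and sample are assumed prepared by the caller):

CvBoostParams params( CvBoost::REAL,  // boost_type
                      100,            // weak_count
                      0.95,           // weight_trim_rate
                      1,              // max_depth: decision stumps
                      false,          // use_surrogates
                      0 );            // priors

CvBoost boost;
boost.train( train_data, CV_ROW_SAMPLE, responses, 0, 0, var_type, params );
float label = boost.predict( &sample );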


CvBoost::predict

float CvBoost::predict( const CvMat* sample, const CvMat* missing=0,
CvMat* weak_responses=0, CvSlice slice=CV_WHOLE_SEQ,
bool raw_mode=false ) const;

sample
The input sample.

missing
The optional mask of missing measurements. To handle missing measurements, the weak classifiers must include surrogate splits (see CvDTreeParams::use_surrogates).

weak_responses
The optional output parameter, a floating-point vector of responses from each individual weak classifier. The number of elements in the vector must be equal to the slice length.

slice
The continuous subset of the sequence of weak classifiers to be used for prediction. By default, all the weak classifiers are used.

raw_mode
It has the same meaning as in CvDTree::predict. Normally, it should be set to false.

The method CvBoost::predict runs the sample through the trees in the ensemble and returns the output class label based on the weighted voting.

CvBoost::prune

void CvBoost::prune( CvSlice slice );


The method removes the specified weak classifiers from the sequence. Note that this method should not be confused with the pruning of individual decision trees, which is currently not supported.


CvBoost::get_weak_predictors

CvSeq* CvBoost::get_weak_predictors();


The method returns the sequence of weak classifiers. Each element of the sequence is a pointer to a CvBoostTree class (or, possibly, to one of its derivatives).


Random Trees

Random trees have been introduced by Leo Breiman and Adele Cutler: http://www.stat.berkeley.edu/users/breiman/RandomForests/. The algorithm can deal with both classification and regression problems. Random trees is a collection (ensemble) of tree predictors that is called a forest further in this section (the term has also been introduced by L. Breiman). Classification works as follows: the random trees classifier takes the input feature vector, classifies it with every tree in the forest, and outputs the class label that has received the majority of the "votes". In the case of regression, the classifier response is the average of the responses over all the trees in the forest.

All the trees are trained with the same parameters, but on different training sets, which are generated from the original training set using the bootstrap procedure: for each training set we randomly select the same number of vectors as in the original set (=N). The vectors are chosen with replacement, i.e. some vectors will occur more than once and some will be absent. At each node of each trained tree, not all the variables are used to find the best split; rather, a random subset of them is used. For each node a new subset is generated; however, its size is fixed for all the nodes and all the trees. It is a training parameter, set to sqrt(<number_of_variables>) by default. None of the built trees are pruned.

In random trees there is no need for any accuracy estimation procedure, such as cross-validation or bootstrap, or a separate test set to get an estimate of the training error. The error is estimated internally during the training. When the training set for the current tree is drawn by sampling with replacement, some vectors are left out (the so-called oob (out-of-bag) data). The size of the oob data is about N/3. The classification error is estimated using this oob data as follows:

Get a prediction for each vector, which is oob relative to the i-th tree, using that very i-th tree. After all the trees have been trained, for each vector that has ever been oob, find the class "winner" for it (i.e. the class that has received the majority of votes in the trees where the vector was oob) and compare it to the ground-truth response. The classification error estimate is then computed as the ratio of the number of misclassified oob vectors to all the vectors in the original data. In the case of regression, the oob error is computed as the mean squared difference between the oob predictions and the ground-truth responses.

References:

Machine Learning, Wald I, July 2002
Looking Inside the Black Box, Wald II, July 2002
Software for the Masses, Wald III, July 2002
And other articles from the web site http://www.stat.berkeley.edu/users/breiman/RandomForests/cc_home.htm

CvRTParams

Training parameters of Random Trees

struct CvRTParams : public CvDTreeParams
{
bool calc_var_importance;
int nactive_vars;
CvTermCriteria term_crit;

CvRTParams() : CvDTreeParams( 5, 10, 0, false, 10, 0, false, false, 0 ),
calc_var_importance(false), nactive_vars(0)
{
term_crit = cvTermCriteria( CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 50, 0.1 );
}

CvRTParams( int _max_depth, int _min_sample_count,
float _regression_accuracy, bool _use_surrogates,
int _max_categories, const float* _priors,
bool _calc_var_importance,
int _nactive_vars, int max_tree_count,
float forest_accuracy, int termcrit_type );
};

calc_var_importance
If it is set, variable importance is computed by the training procedure. To retrieve the computed variable importance array, call the method CvRTrees::get_var_importance().
nactive_vars
The number of variables that are randomly selected at each tree node and that are used to find the best split(s).
term_crit
Termination criteria for growing the forest: term_crit.max_iter is the maximum number of trees in the forest (see also the max_tree_count parameter of the constructor; by default it is set to 50), and term_crit.epsilon is the sufficient accuracy (OOB error).

The set of training parameters for the forest is a superset of the training parameters for a single tree. However, Random Trees do not need all the functionality/features of decision trees; most noticeably, the trees are not pruned, so the cross-validation parameters are not used.

CvRTrees

Random Trees

class CvRTrees : public CvStatModel
{
public:
CvRTrees();
virtual ~CvRTrees();
virtual bool train( const CvMat* _train_data, int _tflag,
const CvMat* _responses, const CvMat* _var_idx=0,
const CvMat* _sample_idx=0, const CvMat* _var_type=0,
CvRTParams params=CvRTParams() );
virtual float predict( const CvMat* sample, const CvMat* missing = 0 ) const;
virtual void clear();

virtual const CvMat* get_var_importance();
virtual float get_proximity( const CvMat* sample_1, const CvMat* sample_2 ) const;

virtual void read( CvFileStorage* fs, CvFileNode* node );
virtual void write( CvFileStorage* fs, const char* name );

CvMat* get_active_var_mask();
CvRNG* get_rng();

int get_tree_count() const;
CvForestTree* get_tree(int i) const;

protected:

bool grow_forest( const CvTermCriteria term_crit );

// array of the trees of the forest
CvForestTree** trees;
CvDTreeTrainData* data;
int ntrees;
int nclasses;
...
};

CvRTrees::train

Trains the Random Trees model

bool CvRTrees::train( const CvMat* train_data, int tflag,
const CvMat* responses, const CvMat* comp_idx=0,
const CvMat* sample_idx=0, const CvMat* var_type=0,
CvRTParams params=CvRTParams() );

The method CvRTrees::train is very similar to the first form of CvDTree::train() and follows the generic CvStatModel::train conventions. All the algorithm-specific training parameters are passed as a CvRTParams instance. The estimate of the training error (oob-error) is stored in the protected class member oob_error.

CvRTrees::predict

Predicts the output for the input sample

float CvRTrees::predict( const CvMat* sample, const CvMat* missing=0 ) const;

The input parameters of the prediction method are the same as in CvDTree::predict, but the return value type is different. This method returns the cumulative result from all the trees in the forest (the class that receives the majority of votes, or the mean of the regression function estimates).

CvRTrees::get_var_importance

Retrieves the variable importance array

const CvMat* CvRTrees::get_var_importance();

The method returns the variable importance vector, computed at the training stage when CvRTParams::calc_var_importance is set. If the training flag was not set, the NULL pointer is returned. This is unlike decision trees, where variable importance can be computed anytime after the training.

CvRTrees::get_proximity

Retrieves the proximity measure between two training samples

float CvRTrees::get_proximity( const CvMat* sample_1, const CvMat* sample_2 ) const;

The method returns the proximity measure between any two samples: the ratio of those trees in the ensemble in which the two samples fall into the same leaf node, to the total number of trees.

Example. Prediction of mushroom edibility using the random trees classifier (note: this sample uses an older, function-based version of the ML API; the names CvRTreesParams, CvTreeClassifierTrainParams and cvCreateRTreesClassifier below do not match the class-based API described above).

#include <float.h>
#include <stdio.h>
#include <ctype.h>
#include <math.h>
#include "ml.h"

int main( void )
{
CvStatModel*    cls = NULL;
CvFileStorage*  storage = cvOpenFileStorage( "Mushroom.xml", NULL, CV_STORAGE_READ );
CvMat*          data = (CvMat*)cvReadByName( storage, NULL, "sample", 0 );
CvMat           train_data, test_data;
CvMat           response;
CvMat*          missed = NULL;
CvMat*          comp_idx = NULL;
CvMat*          sample_idx = NULL;
CvMat*          type_mask = NULL;
int             resp_col = 0;
int             i, j;
CvRTreesParams  params;
CvTreeClassifierTrainParams cart_params;
const int       ntrain_samples = 1000;
const int       ntest_samples  = 1000;
const int       nvars = 23;

if( data == NULL || data->cols != nvars )
{
puts("Error in source data");
return -1;
}

cvGetSubRect( data, &train_data, cvRect(0, 0, nvars, ntrain_samples) );
cvGetSubRect( data, &test_data, cvRect(0, ntrain_samples, nvars, ntest_samples) );

resp_col = 0;
cvGetCol( &train_data, &response, resp_col );

/* create missed variable matrix */
missed = cvCreateMat( train_data.rows, train_data.cols, CV_8UC1 );
for( i = 0; i < train_data.rows; i++ )
for( j = 0; j < train_data.cols; j++ )
CV_MAT_ELEM(*missed,uchar,i,j) = (uchar)(CV_MAT_ELEM(train_data,float,i,j) < 0);

/* create comp_idx vector */
comp_idx = cvCreateMat( 1, train_data.cols-1, CV_32SC1 );
for( i = 0; i < train_data.cols; i++ )
{
if( i < resp_col ) CV_MAT_ELEM(*comp_idx,int,0,i) = i;
if( i > resp_col ) CV_MAT_ELEM(*comp_idx,int,0,i-1) = i;
}

/* create sample_idx vector */
sample_idx = cvCreateMat( 1, train_data.rows, CV_32SC1 );
for( j = i = 0; i < train_data.rows; i++ )
{
if( CV_MAT_ELEM(response,float,i,0) < 0 ) continue;
CV_MAT_ELEM(*sample_idx,int,0,j) = i;
j++;
}
sample_idx->cols = j;

/* create type mask: all the mushroom variables, including the response,
   are presumed categorical */
type_mask = cvCreateMat( 1, train_data.cols+1, CV_8UC1 );
cvSet( type_mask, cvScalarAll(CV_VAR_CATEGORICAL), 0 );

// initialize training parameters
cvSetDefaultParamTreeClassifier( (CvStatModelParams*)&cart_params );
cart_params.wrong_feature_as_unknown = 1;
params.tree_params = &cart_params;
params.term_crit.max_iter = 50;
params.term_crit.epsilon = 0.1;
params.term_crit.type = CV_TERMCRIT_ITER|CV_TERMCRIT_EPS;

puts("Random forest results");
cls = cvCreateRTreesClassifier( &train_data, CV_ROW_SAMPLE, &response,
(CvStatModelParams*)&params, comp_idx, sample_idx, type_mask, missed );
if( cls )
{
CvMat sample = cvMat( 1, nvars, CV_32FC1, test_data.data.fl );
CvMat test_resp;
int wrong = 0, total = 0;
cvGetCol( &test_data, &test_resp, resp_col );
for( i = 0; i < ntest_samples; i++, sample.data.fl += nvars )
{
if( CV_MAT_ELEM(test_resp,float,i,0) >= 0 )
{
float resp = cls->predict( cls, &sample, NULL );
wrong += (fabs(resp - CV_MAT_ELEM(test_resp,float,i,0)) > 1e-3) ? 1 : 0;
total++;
}
}
printf( "Test set error = %.2f\n", wrong*100.f/(float)total );
}
else
puts("Error forest creation");

cvReleaseMat( &missed );
cvReleaseMat( &sample_idx );
cvReleaseMat( &comp_idx );
cvReleaseMat( &type_mask );
cvReleaseMat( &data );
cvReleaseStatModel( &cls );
cvReleaseFileStorage( &storage );
return 0;
}

Expectation-Maximization

The EM (Expectation-Maximization) algorithm estimates the parameters of the multivariate probability density function in a form of the Gaussian mixture distribution with a specified number of mixtures.

Consider the set of the feature vectors {x1, x2,..., xN}: N vectors from a d-dimensional Euclidean space drawn from a Gaussian mixture:

$p(x) = \sum_{k=1}^{m} \pi_k p_k(x), \quad \pi_k \geq 0, \quad \sum_{k=1}^{m} \pi_k = 1, \quad p_k(x) = \varphi(x; a_k, S_k) = \frac{1}{(2\pi)^{d/2} |S_k|^{1/2}} \exp\left(-\frac{1}{2}(x - a_k)^T S_k^{-1} (x - a_k)\right)$

where m is the number of mixtures, pk is the normal distribution density with the mean ak and covariance matrix Sk, and πk is the weight of the k-th mixture. Given the number of mixtures m and the samples {xi, i=1..N}, the algorithm finds the maximum-likelihood estimates (MLE) of all the mixture parameters, i.e. ak, Sk and πk:

$L(x, \theta) = \log p(x, \theta) = \sum_{i=1}^{N} \log\left(\sum_{k=1}^{m} \pi_k p_k(x_i)\right) \to \max_{\theta \in \Theta}$

The EM algorithm is an iterative procedure. Each iteration includes two steps. At the first step (Expectation step, or E-step), we find the probability pi,k (denoted αi,k in the formula below) of sample #i belonging to mixture #k using the currently available mixture parameter estimates:

$\alpha_{i,k} = \frac{\pi_k \varphi(x_i; a_k, S_k)}{\sum_{j=1}^{m} \pi_j \varphi(x_i; a_j, S_j)}$

At the second step (Maximization step, or M-step) the mixture parameter estimates are refined using the computed probabilities:

$\pi_k = \frac{1}{N} \sum_{i=1}^{N} \alpha_{i,k}, \quad a_k = \frac{\sum_{i=1}^{N} \alpha_{i,k} x_i}{\sum_{i=1}^{N} \alpha_{i,k}}, \quad S_k = \frac{\sum_{i=1}^{N} \alpha_{i,k} (x_i - a_k)(x_i - a_k)^T}{\sum_{i=1}^{N} \alpha_{i,k}}$

Alternatively, the algorithm may start with the M-step when initial values for pi,k can be provided. Another alternative, when pi,k are unknown, is to use a simpler clustering algorithm to pre-cluster the input samples and thus obtain initial pi,k. Often (and in ML) the k-means algorithm is used for that purpose. One of the main problems the EM algorithm has to deal with is the large number of parameters to estimate. The majority of the parameters lie in the covariation matrices, which are d×d elements each (where d is the feature space dimensionality). However, in many practical problems the covariation matrices are close to diagonal, or even to μk*I, where I is the identity matrix and μk is a mixture-dependent "scale" parameter. So a robust computation scheme could be to start with harder constraints on the covariation matrices and then use the estimated parameters as the input for a less constrained optimization problem (often a diagonal covariation matrix is already a good enough approximation).

References:

[Bilmes98] J. A. Bilmes. A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models. Technical Report TR-97-021, International Computer Science Institute and Computer Science Division, University of California at Berkeley, April 1998.

CvEMParams

Parameters of the EM algorithm

struct CvEMParams
{
CvEMParams() : nclusters(10), cov_mat_type(CvEM::COV_MAT_DIAGONAL),
start_step(CvEM::START_AUTO_STEP), probs(0), weights(0), means(0), covs(0)
{
term_crit=cvTermCriteria( CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 100, FLT_EPSILON );
}

CvEMParams( int _nclusters, int _cov_mat_type=1/*CvEM::COV_MAT_DIAGONAL*/,
int _start_step=0/*CvEM::START_AUTO_STEP*/,
CvTermCriteria _term_crit=cvTermCriteria(CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 100, FLT_EPSILON),
CvMat* _probs=0, CvMat* _weights=0, CvMat* _means=0, CvMat** _covs=0 ) :
nclusters(_nclusters), cov_mat_type(_cov_mat_type), start_step(_start_step),
probs(_probs), weights(_weights), means(_means), covs(_covs), term_crit(_term_crit)
{}

int nclusters;
int cov_mat_type;
int start_step;
const CvMat* probs;
const CvMat* weights;
const CvMat* means;
const CvMat** covs;
CvTermCriteria term_crit;
};

nclusters

The number of mixtures (Gaussian components); it must be specified in advance. Some EM implementations can determine the optimal number of mixtures within a specified value range, but that is not the case in ML yet.

cov_mat_type

The type of the mixture covariation matrices; should be one of the following:

CvEM::COV_MAT_SPHERICAL - a covariation matrix of each mixture is a scaled identity matrix, μk*I, so the only parameter to be estimated is μk. The option may be used in special cases, when the constraint is relevant, or as a first step in the optimization (e.g. in case the data is preprocessed with PCA). The results of such preliminary estimation may be passed again to the optimization procedure, this time with cov_mat_type=CvEM::COV_MAT_DIAGONAL.

CvEM::COV_MAT_DIAGONAL - a covariation matrix of each mixture may be an arbitrary diagonal matrix with positive diagonal elements; that is, non-diagonal elements are forced to be 0's, so the number of free parameters is d for each matrix. This is the most commonly used option, yielding good estimation results.

CvEM::COV_MAT_GENERIC - a covariation matrix of each mixture may be an arbitrary symmetric positive-definite matrix, so the number of free parameters in each matrix is about d²/2. It is not recommended to use this option unless there is a fairly accurate initial estimate of the parameters and/or a large number of training samples.

start_step

The initial step the algorithm starts from; should be one of the following:

CvEM::START_E_STEP - the algorithm starts with E-step. At least, the initial values of mean vectors, CvEMParams::means must be passed. Optionally, the user may also provide initial values for weights (CvEMParams::weights) and/or covariation matrices (CvEMParams::covs).

CvEM::START_M_STEP - the algorithm starts with M-step. The initial probabilities pi,k must be provided.

CvEM::START_AUTO_STEP - No values are required from the user, k-means algorithm is used to estimate initial mixtures parameters.

term_crit

Termination criteria of the procedure (the E- and M-step iteration). The EM algorithm stops either after a certain number of iterations (term_crit.num_iter), or when the parameters change too little (less than term_crit.epsilon) from iteration to iteration.

The structure has 2 constructors: the default one represents a rough rule of thumb, while the other one makes it possible to override a variety of parameters, from a single number of mixtures (the only essential problem-dependent parameter) to the initial values for the mixture parameters.

CvEM

EM model

class CV_EXPORTS CvEM : public CvStatModel
{
public:
// Type of covariation matrices
enum { COV_MAT_SPHERICAL=0, COV_MAT_DIAGONAL=1, COV_MAT_GENERIC=2 };

// The initial step
enum { START_E_STEP=1, START_M_STEP=2, START_AUTO_STEP=0 };

CvEM();
CvEM( const CvMat* samples, const CvMat* sample_idx=0,
CvEMParams params=CvEMParams(), CvMat* labels=0 );
virtual ~CvEM();

virtual bool train( const CvMat* samples, const CvMat* sample_idx=0,
CvEMParams params=CvEMParams(), CvMat* labels=0 );

virtual float predict( const CvMat* sample, CvMat* probs ) const;
virtual void clear();

int get_nclusters() const { return params.nclusters; }
const CvMat* get_means() const { return means; }
const CvMat** get_covs() const { return covs; }
const CvMat* get_weights() const { return weights; }
const CvMat* get_probs() const { return probs; }

protected:

virtual void set_params( const CvEMParams& params,
const CvVectors& train_data );
virtual void init_em( const CvVectors& train_data );
virtual double run_em( const CvVectors& train_data );
virtual void init_auto( const CvVectors& samples );
virtual void kmeans( const CvVectors& train_data, int nclusters,
CvMat* labels, CvTermCriteria criteria,
const CvMat* means );
CvEMParams params;
double log_likelihood;

CvMat* means;
CvMat** covs;
CvMat* weights;
CvMat* probs;

CvMat* log_weight_div_det;
CvMat* inv_eigen_values;
CvMat** cov_rotate_mats;
};

CvEM::train

Estimates the Gaussian mixture parameters from the sample set

bool CvEM::train( const CvMat* samples, const CvMat* sample_idx=0,
CvEMParams params=CvEMParams(), CvMat* labels=0 );


Unlike many of the ML models, EM is an unsupervised learning algorithm and does not take responses (class labels or function values) on input. Instead, it computes the MLE of the Gaussian mixture parameters from the input sample set and stores all the parameters inside the structure: pi,k in probs, ak in means, Sk in covs[k], and πk in weights. Optionally, it computes the output "class label" for each sample: labels_i = arg max_k(p_{i,k}), i=1..N (i.e. the index of the most probable mixture for each sample).

The trained model can be used further for prediction, just like any other classifier. The trained model is similar to the Normal Bayes classifier.

Example. Clustering random samples of multi-gaussian distribution using EM

#include "ml.h"
#include "highgui.h"

int main( int argc, char** argv )
{
const int N = 4;
const int N1 = (int)sqrt((double)N);
const CvScalar colors[] = {{{0,0,255}},{{0,255,0}},{{0,255,255}},{{255,255,0}}};
int i, j;
int nsamples = 100;
CvRNG rng_state = cvRNG(-1);
CvMat* samples = cvCreateMat( nsamples, 2, CV_32FC1 );
CvMat* labels = cvCreateMat( nsamples, 1, CV_32SC1 );
IplImage* img = cvCreateImage( cvSize( 500, 500 ), 8, 3 );
float _sample[2];
CvMat sample = cvMat( 1, 2, CV_32FC1, _sample );
CvEM em_model;
CvEMParams params;
CvMat samples_part;

cvReshape( samples, samples, 2, 0 );
for( i = 0; i < N; i++ )
{
CvScalar mean, sigma;

// form the training samples
cvGetRows( samples, &samples_part, i*nsamples/N, (i+1)*nsamples/N );
mean = cvScalar(((i%N1)+1.)*img->width/(N1+1), ((i/N1)+1.)*img->height/(N1+1));
sigma = cvScalar(30,30);
cvRandArr( &rng_state, &samples_part, CV_RAND_NORMAL, mean, sigma );
}
cvReshape( samples, samples, 1, 0 );

// initialize model's parameters
params.covs      = NULL;
params.means     = NULL;
params.weights   = NULL;
params.probs     = NULL;
params.nclusters = N;
params.cov_mat_type       = CvEM::COV_MAT_SPHERICAL;
params.start_step         = CvEM::START_AUTO_STEP;
params.term_crit.max_iter = 10;
params.term_crit.epsilon  = 0.1;
params.term_crit.type     = CV_TERMCRIT_ITER|CV_TERMCRIT_EPS;

// cluster the data
em_model.train( samples, 0, params, labels );

#if 0
// this piece of code shows how to repeatedly optimize the model
// with less-constrained parameters (COV_MAT_DIAGONAL instead of COV_MAT_SPHERICAL)
// when the output of the first stage is used as input for the second.
CvEM em_model2;
params.cov_mat_type = CvEM::COV_MAT_DIAGONAL;
params.start_step = CvEM::START_E_STEP;
params.means = em_model.get_means();
params.covs = (const CvMat**)em_model.get_covs();
params.weights = em_model.get_weights();

em_model2.train( samples, 0, params, labels );
// to use em_model2, replace em_model.predict() with em_model2.predict() below
#endif

// classify every image pixel
cvZero( img );
for( i = 0; i < img->height; i++ )
{
for( j = 0; j < img->width; j++ )
{
CvPoint pt = cvPoint(j, i);
sample.data.fl[0] = (float)j;
sample.data.fl[1] = (float)i;
int response = cvRound(em_model.predict( &sample, NULL ));
CvScalar c = colors[response];

cvCircle( img, pt, 1, cvScalar(c.val[0]*0.75,c.val[1]*0.75,c.val[2]*0.75), CV_FILLED );
}
}

// draw the clustered samples
for( i = 0; i < nsamples; i++ )
{
CvPoint pt;
pt.x = cvRound(samples->data.fl[i*2]);
pt.y = cvRound(samples->data.fl[i*2+1]);
cvCircle( img, pt, 1, colors[labels->data.i[i]], CV_FILLED );
}

cvNamedWindow( "EM-clustering result", 1 );
cvShowImage( "EM-clustering result", img );
cvWaitKey(0);

cvReleaseMat( &samples );
cvReleaseMat( &labels );
return 0;
}

Neural Networks

ML implements feedforward artificial neural networks, more particularly, multi-layer perceptrons (MLP), the most commonly used type of neural network. An MLP consists of an input layer, an output layer and one or more hidden layers. Each layer of the MLP includes one or more neurons that are directionally linked with the neurons of the previous and the next layers. Here is an example of a 3-layer perceptron with 3 inputs, 2 outputs and a hidden layer of 5 neurons.

All the neurons in an MLP are similar. Each of them has several input links (i.e. it takes the output values of several neurons in the previous layer as input) and several output links (i.e. it passes its response to several neurons in the next layer). The values retrieved from the previous layer are summed with certain weights, individual for each neuron, plus a bias term, and the sum is transformed using an activation function f, which may also be different for different neurons.

In other words, given the outputs {x_j} of layer n, the outputs {y_i} of layer n+1 are computed as:

   u_i = sum_j( w^(n+1)_{i,j} * x_j ) + w^(n+1)_{i,bias}
   y_i = f(u_i)


Different activation functions may be used; ML implements 3 standard ones:

Identity function (CvANN_MLP::IDENTITY): f(x) = x.

Symmetrical sigmoid (CvANN_MLP::SIGMOID_SYM): f(x) = β*(1 - e^(-αx))/(1 + e^(-αx)), the default choice for MLP; the standard sigmoid corresponds to β=1, α=1.

Gaussian function (CvANN_MLP::GAUSSIAN): f(x) = β*e^(-αx²), not completely supported at the moment.

In ML all the neurons have the same activation function, with the same free parameters (α, β) that are specified by the user and are not altered by the training algorithms.
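To make the formulas above concrete, here is a minimal, self-contained C sketch of computing one MLP layer with the symmetrical sigmoid. It is illustrative only and does not reflect the library's internal implementation; all names are hypothetical:

#include <math.h>

/* symmetrical sigmoid: f(x) = beta*(1 - exp(-alpha*x))/(1 + exp(-alpha*x)) */
static double sigmoid_sym( double x, double alpha, double beta )
{
    double e = exp( -alpha*x );
    return beta*(1. - e)/(1. + e);
}

/* compute the outputs y[0..n_out-1] of one layer from the inputs x[0..n_in-1];
   w is an n_out x (n_in+1) weight matrix, the last column holding the bias term */
static void compute_layer( const double* x, int n_in, double* y, int n_out,
                           const double* w, double alpha, double beta )
{
    int i, j;
    for( i = 0; i < n_out; i++ )
    {
        double u = w[i*(n_in+1) + n_in];       /* bias: w_{i,bias} */
        for( j = 0; j < n_in; j++ )
            u += w[i*(n_in+1) + j]*x[j];       /* sum_j w_{i,j}*x_j */
        y[i] = sigmoid_sym( u, alpha, beta );  /* y_i = f(u_i) */
    }
}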

The whole trained network therefore works as follows: it takes a feature vector as input, the vector size being equal to the size of the input layer; the values are passed to the first hidden layer; the outputs of the hidden layer are computed using the weights and the activation functions; and the results are passed further downstream until the output layer is computed.

So, in order to compute the network, one needs to know all the weights w^(n+1)_{i,j}. The weights are computed by the training algorithm, which takes a training set (multiple input vectors with the corresponding output vectors) and iteratively adjusts the weights to make the network give the desired response to the provided input vectors.

The larger the network (the number of hidden layers and their sizes), the greater its potential flexibility, and the error on the training set can be made arbitrarily small. But at the same time the learned network also "learns" the noise present in the training set, so the error on the test set usually starts increasing after the network size exceeds some limit. Besides, larger networks take much longer to train than smaller ones, so it is reasonable to preprocess the data (using PCA or a similar technique) and train a smaller network on the essential features only.
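As an illustration of such preprocessing, a hedged sketch using the cxcore PCA functions might look as follows; the variables data, nsamples and nfeatures, as well as the number of retained components, are assumptions:

// 'data' is an nsamples x nfeatures CV_32FC1 matrix of training samples (one per row)
int ncomps = 10;                                   // number of components to keep (hypothetical)
CvMat* avg        = cvCreateMat( 1, nfeatures, CV_32FC1 );
CvMat* eigenvals  = cvCreateMat( 1, ncomps, CV_32FC1 );
CvMat* eigenvects = cvCreateMat( ncomps, nfeatures, CV_32FC1 );
CvMat* reduced    = cvCreateMat( nsamples, ncomps, CV_32FC1 );

cvCalcPCA( data, avg, eigenvals, eigenvects, CV_PCA_DATA_AS_ROW );
cvProjectPCA( data, avg, eigenvects, reduced );
// 'reduced' can now serve as the smaller input matrix for network training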

Another feature of MLPs is their inability to handle categorical data as is. However, there is a workaround: if a certain feature in the input or output layer (i.e. in the case of an n-class classifier for n>2) is categorical and can take M (>2) different values, it makes sense to represent it as a binary tuple of M elements, where the i-th element is 1 if and only if the feature is equal to the i-th value out of the M possible ones. This increases the size of the input/output layer, but speeds up the convergence of the training algorithm and at the same time enables "fuzzy" values of such variables, i.e. a tuple of probabilities instead of a fixed value.
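A minimal sketch of this 1-of-M encoding; the helper names are hypothetical, not part of the library:

/* encode a categorical value v (0 <= v < M) as a binary tuple of M elements */
static void encode_categorical( int v, int M, float* tuple )
{
    int i;
    for( i = 0; i < M; i++ )
        tuple[i] = (i == v) ? 1.f : 0.f;
}

/* decode: pick the index of the maximal element ("fuzzy" values allowed) */
static int decode_categorical( const float* tuple, int M )
{
    int i, best = 0;
    for( i = 1; i < M; i++ )
        if( tuple[i] > tuple[best] )
            best = i;
    return best;
}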

ML implements 2 algorithms for training MLPs. The first is the classical random sequential backpropagation algorithm; the second (the default one) is the batch RPROP algorithm.

References:

http://en.wikipedia.org/wiki/Backpropagation. Wikipedia article about the backpropagation algorithm.
Y. LeCun, L. Bottou, G.B. Orr and K.-R. Muller, "Efficient backprop", in Neural Networks---Tricks of the Trade, Springer Lecture Notes in Computer Science 1524, pp. 5-50, 1998.
M. Riedmiller and H. Braun, "A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm", Proc. ICNN, San Francisco (1993).

CvANN_MLP_TrainParams Parameters of MLP training algorithm

struct CvANN_MLP_TrainParams
{

   CvANN_MLP_TrainParams();
CvANN_MLP_TrainParams( CvTermCriteria term_crit, int train_method,
double param1, double param2=0 );
~CvANN_MLP_TrainParams();

   enum { BACKPROP=0, RPROP=1 };

   CvTermCriteria term_crit;
int train_method;

   // backpropagation parameters
double bp_dw_scale, bp_moment_scale;

   // rprop parameters
double rp_dw0, rp_dw_plus, rp_dw_minus, rp_dw_min, rp_dw_max;


};

term_crit
The termination criteria for the training algorithm. It identifies how many iterations are done by the algorithm (for the sequential backpropagation algorithm the number is multiplied by the size of the training set) and how much the weights may change between iterations for the algorithm to continue.

train_method
The training algorithm to use; can be one of CvANN_MLP_TrainParams::BACKPROP (sequential backpropagation algorithm) or CvANN_MLP_TrainParams::RPROP (the RPROP algorithm, the default value).

bp_dw_scale
(Backpropagation only) The coefficient to multiply the computed weight gradient by. The recommended value is about 0.1. The parameter can be set via param1 of the constructor.

bp_moment_scale
(Backpropagation only) The coefficient to multiply the difference between the weights on the 2 previous iterations. It provides some inertia to smooth random fluctuations of the weights. It can vary from 0 (the feature is disabled) to 1 and beyond; a value around 0.1 is good enough. The parameter can be set via param2 of the constructor.

rp_dw0
(RPROP only) The initial magnitude of the weight delta. The default value is 0.1. The parameter can be set via param1 of the constructor.

rp_dw_plus
(RPROP only) The increase factor for the weight delta. It must be >1; the default value is 1.2, which should work well in most cases, according to the algorithm's author. The parameter can only be changed explicitly by modifying the structure member.

rp_dw_minus
(RPROP only) The decrease factor for the weight delta. It must be <1; the default value is 0.5, which should work well in most cases, according to the algorithm's author. The parameter can only be changed explicitly by modifying the structure member.

rp_dw_min
(RPROP only) The minimum value of the weight delta. It must be >0; the default value is FLT_EPSILON. The parameter can be set via param2 of the constructor.

rp_dw_max
(RPROP only) The maximum value of the weight delta. It must be >1; the default value is 50. The parameter can only be changed explicitly by modifying the structure member.

The structure has a default constructor that initializes the parameters for the RPROP algorithm. There is also a more advanced constructor that customizes the parameters and/or chooses the backpropagation algorithm. Finally, the individual parameters can be adjusted after the structure is created.
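For illustration, a hedged sketch of filling the structure; the constructor arguments follow the declaration and parameter descriptions above, and the concrete numeric values are assumptions, not recommendations from the library:

// RPROP (the default method): param1 = rp_dw0, param2 = rp_dw_min
CvANN_MLP_TrainParams rprop_params(
    cvTermCriteria( CV_TERMCRIT_ITER|CV_TERMCRIT_EPS, 1000, 0.01 ),
    CvANN_MLP_TrainParams::RPROP, 0.1, FLT_EPSILON ); // FLT_EPSILON from <float.h>

// sequential backpropagation: param1 = bp_dw_scale, param2 = bp_moment_scale
CvANN_MLP_TrainParams bp_params(
    cvTermCriteria( CV_TERMCRIT_ITER|CV_TERMCRIT_EPS, 1000, 0.01 ),
    CvANN_MLP_TrainParams::BACKPROP, 0.1, 0.1 );

// individual members can also be adjusted after construction
rprop_params.rp_dw_plus = 1.2;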

CvANN_MLP MLP model

class CvANN_MLP : public CvStatModel
{
public:

   CvANN_MLP();
CvANN_MLP( const CvMat* _layer_sizes,
int _activ_func=SIGMOID_SYM,
double _f_param1=0, double _f_param2=0 );

   virtual ~CvANN_MLP();

   virtual void create( const CvMat* _layer_sizes,
int _activ_func=SIGMOID_SYM,
double _f_param1=0, double _f_param2=0 );

   virtual int train( const CvMat* _inputs, const CvMat* _outputs,
const CvMat* _sample_weights, const CvMat* _sample_idx=0,
CvANN_MLP_TrainParams _params = CvANN_MLP_TrainParams(),
int flags=0 );
virtual float predict( const CvMat* _inputs,
CvMat* _outputs ) const;

   virtual void clear();

   // possible activation functions
enum { IDENTITY = 0, SIGMOID_SYM = 1, GAUSSIAN = 2 };

   // available training flags
enum { UPDATE_WEIGHTS = 1, NO_INPUT_SCALE = 2, NO_OUTPUT_SCALE = 4 };

   virtual void read( CvFileStorage* fs, CvFileNode* node );
virtual void write( CvFileStorage* storage, const char* name );

   int get_layer_count() { return layer_sizes ? layer_sizes->cols : 0; }
const CvMat* get_layer_sizes() { return layer_sizes; }


protected:

   virtual bool prepare_to_train( const CvMat* _inputs, const CvMat* _outputs,
const CvMat* _sample_weights, const CvMat* _sample_idx,
CvANN_MLP_TrainParams _params,
CvVectors* _ivecs, CvVectors* _ovecs, double** _sw, int _flags );

   // sequential random backpropagation
virtual int train_backprop( CvVectors _ivecs, CvVectors _ovecs, const double* _sw );

   // RPROP algorithm
virtual int train_rprop( CvVectors _ivecs, CvVectors _ovecs, const double* _sw );

   virtual void calc_activ_func( CvMat* xf, const double* bias ) const;
virtual void calc_activ_func_deriv( CvMat* xf, CvMat* deriv, const double* bias ) const;
virtual void set_activ_func( int _activ_func=SIGMOID_SYM,
double _f_param1=0, double _f_param2=0 );
virtual void init_weights();
virtual void scale_input( const CvMat* _src, CvMat* _dst ) const;
virtual void scale_output( const CvMat* _src, CvMat* _dst ) const;
virtual void calc_input_scale( const CvVectors* vecs, int flags );
virtual void calc_output_scale( const CvVectors* vecs, int flags );

   virtual void write_params( CvFileStorage* fs );
virtual void read_params( CvFileStorage* fs, CvFileNode* node );

   CvMat* layer_sizes;
CvMat* wbuf;
CvMat* sample_weights;
double** weights;
double f_param1, f_param2;
double min_val, max_val, min_val1, max_val1;
int activ_func;
int max_count, max_buf_sz;
CvANN_MLP_TrainParams params;
CvRNG rng;


};

Unlike many other models in ML, which are constructed and trained at once, in MLP these steps are separated. First, a network with the specified topology is created using the non-default constructor or the method create. All the weights are set to zeros. Then the network is trained using a set of input and output vectors. The training procedure can be repeated more than once, i.e. the weights can be adjusted based on new training data.

CvANN_MLP::create Constructs the MLP with the specified topology

void CvANN_MLP::create( const CvMat* _layer_sizes,
                        int _activ_func=SIGMOID_SYM,
                        double _f_param1=0, double _f_param2=0 );


_layer_sizes
The integer vector specifying the number of neurons in each layer, including the input and output layers.

_activ_func
The activation function for each neuron; one of CvANN_MLP::IDENTITY, CvANN_MLP::SIGMOID_SYM and CvANN_MLP::GAUSSIAN.

_f_param1, _f_param2
Free parameters of the activation function, α and β respectively. See the formulas in the introduction section.

The method creates an MLP network with the specified topology and assigns the same activation function to all the neurons.
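For example, a minimal sketch of creating a 2-10-1 network; the layer sizes are arbitrary assumptions:

int layer_sz[] = { 2, 10, 1 };                        // input, hidden and output layer sizes
CvMat layer_sizes = cvMat( 1, 3, CV_32SC1, layer_sz );
CvANN_MLP mlp;
mlp.create( &layer_sizes, CvANN_MLP::SIGMOID_SYM, 0, 0 ); // 0,0 = declared defaults for α, β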

CvANN_MLP::train Trains/updates MLP

int CvANN_MLP::train( const CvMat* _inputs, const CvMat* _outputs,
                      const CvMat* _sample_weights, const CvMat* _sample_idx=0,
                      CvANN_MLP_TrainParams _params = CvANN_MLP_TrainParams(),
                      int flags=0 );


_inputs
A floating-point matrix of input vectors, one vector per row.

_outputs
A floating-point matrix of the corresponding output vectors, one vector per row.

_sample_weights
(RPROP only) The optional floating-point vector of weights for each sample. Some samples may be more important than others for training; e.g. the user may want to increase the weight of certain classes to find the right balance between hit rate and false-alarm rate, etc.

_sample_idx
The optional integer vector indicating the samples (i.e. the rows of _inputs and _outputs) that are taken into account.

_params
The training parameters. See the CvANN_MLP_TrainParams description.

_flags
Various algorithm parameters. May be a combination of the following:

UPDATE_WEIGHTS = 1 - update the network weights rather than compute them from scratch (in the latter case the weights are initialized using the Nguyen-Widrow algorithm).

NO_INPUT_SCALE - do not normalize the input vectors. If the flag is not set, the training algorithm normalizes each input feature independently, shifting its mean value to 0 and making the standard deviation equal to 1. If the network is assumed to be updated frequently, the new training data may be quite different from the original one; in this case the user should take care of proper normalization.

NO_OUTPUT_SCALE - do not normalize the output vectors. If the flag is not set, the training algorithm normalizes each output feature independently, transforming it to a certain range depending on the activation function used.

The method applies the specified training algorithm to compute/adjust the network weights. It returns the number of iterations done.
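To tie the pieces together, here is a minimal end-to-end sketch under stated assumptions: a tiny hand-made training set for the XOR function, with topology and parameter values chosen purely for illustration:

#include "ml.h"
#include <float.h>
#include <stdio.h>

int main( void )
{
    // 4 training samples of the XOR function: 2 inputs, 1 output each
    float inputs_data[]  = { 0,0,  0,1,  1,0,  1,1 };
    float outputs_data[] = { -1, 1, 1, -1 };    // +-1 targets suit the symmetrical sigmoid
    CvMat inputs  = cvMat( 4, 2, CV_32FC1, inputs_data );
    CvMat outputs = cvMat( 4, 1, CV_32FC1, outputs_data );

    int layer_sz[] = { 2, 5, 1 };               // 2-5-1 topology (an arbitrary choice)
    CvMat layer_sizes = cvMat( 1, 3, CV_32SC1, layer_sz );
    CvANN_MLP mlp( &layer_sizes, CvANN_MLP::SIGMOID_SYM, 0, 0 );

    CvANN_MLP_TrainParams params(
        cvTermCriteria( CV_TERMCRIT_ITER|CV_TERMCRIT_EPS, 1000, 0.01 ),
        CvANN_MLP_TrainParams::RPROP, 0.1, FLT_EPSILON );

    int niter = mlp.train( &inputs, &outputs, 0, 0, params, 0 );
    printf( "training finished after %d iterations\n", niter );

    // predict on a single sample; the output matrix must have one row per input row
    float in_data[2] = { 1.f, 0.f }, out_data[1];
    CvMat in  = cvMat( 1, 2, CV_32FC1, in_data );
    CvMat out = cvMat( 1, 1, CV_32FC1, out_data );
    mlp.predict( &in, &out );
    printf( "network response: %f\n", out_data[0] );
    return 0;
}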